\begin{document} \title{Efficient Tensor Network Simulation for Few-Atom, Multimode Dicke Model via Coupling Matrix Transformation} \author{Christopher J. Ryu} \affiliation{Department of Electrical and Computer Engineering, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA} \author{Dong-Yeop Na} \affiliation{Department of Electrical Engineering, Pohang University of Science and Technology, Pohang 37673, Republic of Korea} \author{Weng C. Chew} \email{[email protected]} \affiliation{Department of Electrical and Computer Engineering, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA} \affiliation{Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907, USA} \author{Erhan Kudeki} \affiliation{Department of Electrical and Computer Engineering, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA} \date{\today} \begin{abstract} We present a novel generalization of the chain mapping technique that applies to multi-atom, multimode systems by making use of coupling matrix transformations. This is extremely useful for tensor network simulations of the multimode Dicke model and the multi-spin-boson model because their coupling structures are altered from the star form to the chain form with near-neighbor interactions. Our approach produces an equivalent Hamiltonian with the latter coupling form, which we call the band Hamiltonian, and we demonstrate its equivalence to the multimode Dicke Hamiltonian. In the single-atom case, our approach reduces to the chain mapping technique. When considering several tens of field modes, we have found that tensor network simulation of two atoms in the ultrastrong coupling regime is possible with our approach. We demonstrate this by considering a pair of entangled atoms confined in a cavity, interacting with thirty electromagnetic modes. \end{abstract} \maketitle \textit{Introduction.}---The Dicke model describes the interaction between a collection of two-level atoms and a quantized electromagnetic field \cite{Dicke1954}. It has been used to study rich and nontrivial physics such as superradiance and quantum phase transitions \cite{Emary_and_Brandes_2003_QPT_Dicke_PRL,Emary_and_Brandes_2003_QPT_Dicke_PRE}. Such phenomena have been found to occur in the ultrastrong coupling regime \cite{Garbe_et_al_2017_SPT_in_USC,Bin_et_al_2019_Collective_radiance_in_USC,Frisk_Kockum_2019_Ultrastrong_coupling}, where the field-atom coupling coefficient is comparable to the atomic transition frequency. In this regime, the rotating wave approximation is invalid, rendering the analysis of the system much more difficult. Hence, approximate techniques such as the Holstein-Primakoff transformation are often used, which are valid only in specific settings. For the study of a single atom interacting with multiple field modes, the chain mapping technique has been extremely useful for tensor network analysis of the spin-boson model \cite{Bulla_et_al_2005_NRG,Bulla_et_al_2008_NRG,Prior_et_al_2010_Strong_system-environment_interactions,Chin_et_al_2010_exact_transformation_to_chain} and the multimode quantum Rabi model \cite{Munoz_superluminal_2018,Ryu_Na_and_Chew_2022_MPS_arXiv}. Since these models have the so-called star coupling structure \cite{Bulla_et_al_2005_NRG}, they are transformed to an equivalent Hamiltonian with a linear chain coupling structure with nearest-neighbor interactions. 
Once transformed, numerical algorithms such as matrix product states (MPS) \cite{Vidal_2003_MPS,Vidal_2004_MPS} or the density matrix renormalization group \cite{White_1992_DMRG} can be applied efficiently. Although it is highly effective, the chain mapping technique is limited to systems with a single two-level atom or spin-1/2 system. We simply refer to these as atoms for the remainder of this letter. Here, we propose, for the first time to the best of our knowledge, a novel generalization of this mapping technique that should work with arbitrary numbers of atoms and modes. Our method utilizes coupling matrix transformations to achieve this, and it leads to equivalent Hamiltonians that are more compatible with tensor network algorithms. We have so far performed efficient tensor network simulations of quantum states with two atoms ultrastrongly coupled to multiple field modes in a general setting. Extending this to higher numbers of atoms remains a formidable challenge to be part of our future work. However, for weaker coupling strengths, especially for applications in open quantum systems, it is possible to go beyond two atoms with our approach for a few-atom, multimode model (although we do not demonstrate this here). \textit{Formulation.}---We are primarily concerned with quantum electrodynamic applications in the ultrastrong coupling regime where the rotating wave approximation is invalid. Therefore, we use the multimode Dicke Hamiltonian to represent a system with $N_a$ atoms and $M$ electromagnetic modes, namely \begin{equation}\label{eq:Dicke_Ham} \hat{H}_D=\hbar\sum_{j=1}^{N_a}\tfrac{\omega_{a,j}}{2}\hat\sigma^z_j+\hbar\sum_{k=1}^{M}\Big[\omega_k\hat{a}_k^\dagger\hat{a}_k-i\sum_{j=1}^{N_a}g_{j,k}\hat\sigma^x_j(\hat{a}_k-\hat{a}_k^\dagger)\Big], \end{equation} where $\hbar$ is the reduced Planck constant; $\hbar\omega_{a,j}$ and $\hat\sigma^l_j$ are the energy gap and the Pauli operator, respectively, of the $j$-th atom where $l=x,y,$ or $z$; $\omega_k$ is the electromagnetic mode frequency, and $\hat{a}_k$ ($\hat{a}_k^\dagger$) is the photon annihilation (creation) operator, all for mode-$k$. The coupling coefficient between the $j$-th atom and mode-$k$ is $g_{j,k}$, which is based on electric dipole interaction \cite{Ryu_Na_and_Chew_2022_MPS_arXiv}. \begin{figure*} \caption{The coupling matrices for the Dicke Hamiltonian (left) and the band form (right). The matrix entries correspond to the coefficients in (\ref{eq:Dicke_Ham}).} \label{fig:coup_mats} \end{figure*} The coupling structure of the multimode Dicke Hamiltonian can be represented by a coupling matrix of size $(N_a+M)\times(N_a+M)$ partitioned as \begin{equation} \overline{\bf M}_D= \begin{bmatrix} \overline{\boldsymbol{\omega}}_a & \overline{\bf g} \\ \overline{\bf g}^T & \overline{\boldsymbol{\omega}}_f \end{bmatrix}, \end{equation} where $\overline{\boldsymbol{\omega}}_a$ and $\overline{\boldsymbol{\omega}}_f$ are diagonal matrices of atomic and field frequencies, and $\overline{\bf g}$ is an $N_a\times M$ dense matrix representing the field-atom coupling coefficients. The Dicke coupling matrix is a real-valued, symmetric matrix that is visualized on the left in Fig.~\ref{fig:coup_mats}. The far off-diagonal coupling elements of $\overline{\bf M}_D$ such as $g_{1,M}$ (shown in Fig.~\ref{fig:coup_mats}) are what make the tensor network simulation of (\ref{eq:Dicke_Ham}) inefficient. 
In MPS simulations, such interaction terms are referred to as long-range interactions \cite{Haegeman_et_al_2016_TDVP}, and they require implementing a great number of SWAP gates \footnote{A SWAP gate is a quantum gate that exchanges the states of two qubits. More generally, for MPS simulations, a SWAP gate exchanges the states of two identical quantum systems \cite{Stoudenmire_and_White_2010_MPS_zip-up}.}. To avoid this inefficiency, we annihilate these coupling elements by applying a series of Householder transformations and orthogonally transform the coupling matrix into a band matrix as \begin{equation}\label{eq:CMT} \overline{\bf M}_B=\underbrace{\overline{\bf Q}_{M-N_a-1}\dots\overline{\bf Q}_2\overline{\bf Q}_1}_{=\overline{\bf Q}}\overline{\bf M}_D\overline{\bf Q}_1^T\overline{\bf Q}_2^T\dots\overline{\bf Q}_{M-N_a-1}^T \end{equation} or simply $\overline{\bf M}_B=\overline{\bf Q}\,\overline{\bf M}_D\overline{\bf Q}^T$, where each $\overline{\bf Q}_i$ implements a Householder transformation that annihilates everything below the $(N_a+i)$-th entry of the $i$-th column. The result is a symmetric band matrix of the same size as $\overline{\bf M}_D$ with bandwidth $N_a$ that can be expressed in the block matrix form as \begin{equation} \overline{\bf M}_B= \begin{bmatrix} \overline{\boldsymbol{\omega}}_a & \overline{\boldsymbol{\rho}} \\ \overline{\boldsymbol{\rho}}^T & \overline{\boldsymbol{\xi}} \end{bmatrix}, \end{equation} where $\overline{\boldsymbol{\rho}}$ is a lower-triangular matrix of size $N_a\times M$ representing the modified field-atom coupling coefficients, and $\overline{\boldsymbol{\xi}}$ is a band matrix of size $M\times M$ with diagonal elements $[\overline{\boldsymbol{\xi}}]_{ii}=\xi_i$ representing the transformed bosonic frequencies and off-diagonal elements $[\overline{\boldsymbol{\xi}}]_{ij}=t_{ij}$ for $i\ne j$ and $|i-j|\le N_a$ representing the newly created boson-boson coupling coefficients. The resulting band matrix is visualized on the right in Fig.~\ref{fig:coup_mats}. Inspired by the chain mapping technique \cite{Bulla_et_al_2005_NRG,Bulla_et_al_2008_NRG,Prior_et_al_2010_Strong_system-environment_interactions,Chin_et_al_2010_exact_transformation_to_chain}, the Hamiltonian whose coupling structure is represented by the band coupling matrix $\overline{\bf M}_B$ can be written in terms of the entries of $\overline{\bf M}_B$ as \begin{equation}\label{eq:band_Ham} \begin{aligned} \hat{H}_B&=\hbar\sum_{j=1}^{N_a}\bigg[\frac{\omega_{a,j}}{2}\hat\sigma^z_j-i\sum_{k\le j}\rho_{j,k}\hat\sigma^x_j(\hat{b}_k-\hat{b}_k^\dagger)\bigg]\\ &\quad+\hbar\sum_{k=1}^{M}\Big[\xi_k\hat{b}_k^\dagger\hat{b}_k+\sum_{j=1}^{N_a}t_{k,k+j}(\hat{b}_k^\dagger\hat{b}_{k+j}+\hat{b}_{k+j}^\dagger\hat{b}_k)\Big] \end{aligned} \end{equation} with $t_{k,k+j}=0$ for $k+j>M$. It is remarkable that, compared to (\ref{eq:Dicke_Ham}), (\ref{eq:band_Ham}) lacks the far off-diagonal couplings as a result of the coupling matrix transformation. This is explicitly shown by the summation indices of the interaction terms, which are limited by the atomic index $j$ (first row, second summation) and the number of atoms $N_a$ (second row, second summation). This absence of far off-diagonal couplings is what makes (\ref{eq:band_Ham}) much more compatible with tensor network algorithms. 
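To illustrate the structure of this transformation concretely, the following minimal Python/NumPy sketch (our illustration with placeholder parameters, not the authors' code) assembles a coupling matrix of the form $\overline{\bf M}_D$ and reduces it to a bandwidth-$N_a$ band matrix using Householder reflectors that act only on the bosonic rows and columns; it applies one reflector per column that still has entries below the band.
\begin{verbatim}
import numpy as np

def to_band_form(M_D, Na):
    # Reduce a real symmetric coupling matrix to bandwidth-Na band form by
    # successive Householder reflectors acting only on the bosonic block,
    # zeroing column i below row i + Na.
    N = M_D.shape[0]
    M_B, Q = M_D.astype(float).copy(), np.eye(N)
    for i in range(N - Na - 1):
        x = M_B[i + Na:, i]              # entries at and below the band edge
        if np.allclose(x[1:], 0.0):
            continue                     # nothing left to annihilate
        v = x.copy()
        v[0] += (1.0 if x[0] >= 0 else -1.0) * np.linalg.norm(x)
        v /= np.linalg.norm(v)
        Qi = np.eye(N)
        Qi[i + Na:, i + Na:] -= 2.0 * np.outer(v, v)   # Householder reflector
        M_B, Q = Qi @ M_B @ Qi.T, Qi @ Q
    return M_B, Q

# Placeholder example: Na = 2 atoms, M = 6 modes.
Na, M = 2, 6
rng = np.random.default_rng(1)
omega_a, omega_f = np.array([1.0, 1.0]), np.linspace(1.0, 6.0, M)
g = 0.1 * rng.random((Na, M))            # dense field-atom couplings
M_D = np.block([[np.diag(omega_a), g], [g.T, np.diag(omega_f)]])
M_B, Q = to_band_form(M_D, Na)
assert np.allclose(Q[:Na, :Na], np.eye(Na))                # atoms untouched
assert np.allclose(np.triu(M_B, Na + 1), 0.0, atol=1e-12)  # band structure
\end{verbatim}
The assertions check the two structural features used below: the atomic block of $\overline{\bf Q}$ remains the identity, and all couplings beyond the $N_a$-th off-diagonal are annihilated.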
The orthogonal matrix $\overline{\bf Q}$ in (\ref{eq:CMT}) that implements the transformation is of the form \begin{equation}\label{eq:Q_def} \overline{\bf Q}= \begin{bmatrix} \overline{\bf I}_{N_a} & \overline{\bf 0} \\ \overline{\bf 0} & \overline{\bf U} \end{bmatrix}, \end{equation} where $\overline{\bf I}_{N_a}$ is an $N_a\times N_a$ identity matrix, and $\overline{\bf U}$ is an $M\times M$ orthogonal matrix. From this form of $\overline{\bf Q}$, it is evident that the transformation only applies to the bosons and not the atoms. This is why the atomic frequencies in $\overline{\bf M}_B$ are left unchanged and are equal to those in $\overline{\bf M}_D$. The photonic operator $\hat{a}_j$ and the chain bosonic operator $\hat{b}_k$ are related as $\hat{a}_j=\sum_{k=1}^{M}U_{jk}\hat{b}_k$ where $U_{jk}=[\overline{\bf U}]_{jk}$ is the block matrix from (\ref{eq:Q_def}). In other words, a particular way of clustering the photons gives rise to the chain bosonic modes. This is precisely the same as what is done in the chain mapping technique \cite{Bulla_et_al_2005_NRG,Bulla_et_al_2008_NRG,Prior_et_al_2010_Strong_system-environment_interactions,Chin_et_al_2010_exact_transformation_to_chain}. There is a good reason that we do not trim the off-diagonal elements of $\overline{\bf M}_D$ all the way to the tridiagonal form. If it were tridiagonalized, then we would lose the identity block matrix $\overline{\bf I}_{N_a}$ in the upper left corner of (\ref{eq:Q_def}), and $\overline{\bf Q}$ would end up being a full matrix, which would mix the two-level and bosonic operators in the process of transformation. This would leave the resulting Hamiltonian ill-defined. To avoid this, we make sure that the transformation only applies to the bosonic operators as shown in (\ref{eq:Q_def}). What is reassuring is that when $N_a=1$, our coupling matrix transformation technique reduces to the chain mapping technique and implements the exact same transformation. This is numerically demonstrated \footnote{Since Householder transformations are implemented numerically, it is difficult (or impossible) to mathematically prove the equivalence of our coupling matrix transformation technique and the chain mapping technique. This is why we numerically demonstrate their equivalence.} in Fig.~\ref{fig:plot_orig_vs_new_transformation}. Here, we consider a single atom placed at the center of a closed, 1D cavity with perfect electric conductor (PEC) walls, and we consider the lowest fifty electromagnetic eigenmodes of this cavity. It is observed in Fig.~\ref{fig:plot_orig_vs_new_transformation} that the coefficients agree perfectly, so it is clear that the coupling matrix transformation reduces to the chain mapping technique in the single atom case. Our scheme is based on Householder transformations, which cost $O(N^3)$ operations where $N=N_a+M$ is the number of rows and columns of the coupling matrices, and it is numerically stable. In contrast, the chain mapping technique is unstable because it is based on the Lanczos algorithm [$O(MN^2)$], so it needs stabilization using methods such as the modified Gram-Schmidt orthogonalization \cite{Ryu_Na_and_Chew_2022_MPS_arXiv}, which also costs $O(N^3)$. 
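To make the single-atom comparison concrete, the following sketch (ours, with placeholder frequencies and couplings rather than the cavity data used in the paper) checks numerically that, for $N_a=1$, the Householder-based band reduction and the Lanczos-based chain mapping yield the same chain coefficients up to signs. SciPy's symmetric Hessenberg reduction is used as a stand-in for the Householder step, and the Lanczos recursion is fully reorthogonalized in the spirit of the modified Gram-Schmidt stabilization mentioned above.
\begin{verbatim}
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(0)
M = 50                                      # number of field modes
omega = np.sort(rng.uniform(1.0, 10.0, M))  # placeholder mode frequencies
g = rng.uniform(0.0, 0.5, M)                # placeholder single-atom couplings

def chain_mapping(omega, g):
    # Lanczos tridiagonalization of the mode frequencies, started from the
    # normalized coupling vector, with full reorthogonalization for stability.
    A, n = np.diag(omega), len(omega)
    Qm = np.zeros((n, n))
    Qm[:, 0] = g / np.linalg.norm(g)
    alphas, betas = [], []
    for k in range(n):
        w = A @ Qm[:, k]
        alphas.append(Qm[:, k] @ w)
        w -= Qm[:, :k + 1] @ (Qm[:, :k + 1].T @ w)   # orthogonalize (twice)
        w -= Qm[:, :k + 1] @ (Qm[:, :k + 1].T @ w)
        if k + 1 < n:
            betas.append(np.linalg.norm(w))
            Qm[:, k + 1] = w / betas[-1]
    return np.array(alphas), np.array(betas)

# Single-atom coupling matrix and its band (here tridiagonal) form.
M_D = np.zeros((1 + M, 1 + M))
M_D[0, 0] = 1.0                             # placeholder atomic frequency
M_D[0, 1:] = M_D[1:, 0] = g
M_D[1:, 1:] = np.diag(omega)
M_B, Q = hessenberg(M_D, calc_q=True)       # Householder tridiagonalization

alphas, betas = chain_mapping(omega, g)
assert np.allclose(np.diag(M_B)[1:], alphas)                     # chain frequencies
assert np.allclose(np.abs(np.diag(M_B, -1)[1:]), np.abs(betas))  # chain couplings
assert np.isclose(abs(M_B[1, 0]), np.linalg.norm(g))             # atom-chain coupling
\end{verbatim}
The coefficients agree up to the signs of the off-diagonal entries, which correspond to a trivial sign (gauge) choice of the chain modes.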
\begin{figure} \caption{Comparison of the original chain mapping technique from \cite{Bulla_et_al_2005_NRG} and the proposed coupling matrix transformation for the single-atom case.} \label{fig:plot_orig_vs_new_transformation} \end{figure} \textit{Results.}---To show that the multimode Dicke Hamiltonian (\ref{eq:Dicke_Ham}) is equivalent to the band Hamiltonian (\ref{eq:band_Ham}), we perform a simple numerical time-domain simulation for both systems for the 3-atom, 5-mode case in the ultrastrong coupling regime \footnote{This is realized by setting one of the coupling coefficients as $g_{1,1}/\omega_1=0.25$, i.e., the normalized coupling coefficient between the first atom and fundamental field mode of the cavity is 0.25. The coupling coefficients for the other modes and other atoms are determined by the electric dipole interaction which depends on the field profile, atoms' positions, and their dipole moments. Since we assume identical atoms here, their dipole moments are equal. The expression for the electric dipole coupling coefficient is given in Eq.~(7) of \cite{Ryu_Na_and_Chew_2022_MPS_arXiv}}. This system is small enough that the computational cost for simulating either Hamiltonian is very low (and a tensor network algorithm is not needed here). The three atoms are assumed to be identical, and they are all placed in a 1D PEC cavity. The cavity occupies $x\in[-L/2,L/2]$ where $L$ is the length of the cavity, and the atoms are placed at $0$, $L/4$, and $-3L/8$. This setting is depicted in Fig.~\ref{fig:three_atoms_in_cav}. The plot of the time-domain simulation result is shown in Fig.~\ref{fig:plot_Dicke_vs_band}. We numerically solve the quantum state equation (also known as the Schr\"odinger equation) for Hamiltonians (\ref{eq:Dicke_Ham}) and (\ref{eq:band_Ham}) to obtain the time-evolved state $\ket{\psi(t)}$ and compute the excited-state atomic population $\braket{\sigma^+_j\sigma^-_j}=\braket{\psi(t)|\hat\sigma^+_j\hat\sigma^-_j|\psi(t)}$ for each atom. The initial state is given by $\ket{\psi_0}=\ket{e,e,e,0,\dots,0}$, i.e., three excited atoms in vacuum. Excellent agreement is observed in Fig.~\ref{fig:plot_Dicke_vs_band}, and we conclude that the Dicke (\ref{eq:Dicke_Ham}) and band (\ref{eq:band_Ham}) Hamiltonians represent the same physical system. Although our demonstration is for 3 atoms and 5 modes, the coupling matrix transformation should work for any numbers of atoms and modes. \begin{figure} \caption{Illustration of three identical atoms placed in a 1D PEC cavity interacting with the first five modes.} \label{fig:three_atoms_in_cav} \end{figure} \begin{figure} \caption{3-atom, 5-mode simulation of the Dicke (\ref{eq:Dicke_Ham}) and band (\ref{eq:band_Ham}) Hamiltonians.} \label{fig:plot_Dicke_vs_band} \end{figure} In the ultrastrong coupling regime, the single electromagnetic mode approximation is likely to fail due to the possibility of superluminal signaling \cite{Munoz_superluminal_2018}, and multiple modes must be considered. It was found that several tens of modes are enough to accurately characterize the propagation effects in this coupling regime \cite{Munoz_superluminal_2018}. When so many modes need to be incorporated into the model, it is highly inefficient (or impossible) to numerically simulate the system using the multimode Dicke Hamiltonian (\ref{eq:Dicke_Ham}). Having established their equivalence, we use the band Hamiltonian (\ref{eq:band_Ham}) for tensor network simulations in the remainder of this letter. 
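A reduced version of this equivalence check can be reproduced in a few lines of NumPy/SciPy. The sketch below (ours; a single atom and two modes with placeholder parameters, rather than the 3-atom, 5-mode cavity configuration used above) builds both Hamiltonians in a truncated Fock space and compares the excited-state population after unitary evolution; agreement is limited only by the Fock truncation.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm, hessenberg

nf = 6                                    # Fock cutoff per mode (hbar = 1)
wa, w = 1.0, np.array([1.0, 2.0])         # placeholder atomic / mode frequencies
g = np.array([0.25, 0.15])                # placeholder ultrastrong couplings

M_D = np.block([[np.array([[wa]]), g[None, :]],
                [g[:, None], np.diag(w)]])
M_B, _ = hessenberg(M_D, calc_q=True)     # band (here tridiagonal) form, Na = 1
M_B = (M_B + M_B.T) / 2                   # symmetrize roundoff

sz, sx = np.diag([1.0, -1.0]), np.array([[0.0, 1.0], [1.0, 0.0]])
proj_e = np.diag([1.0, 0.0])              # |e><e| = sigma^+ sigma^-
a = np.diag(np.sqrt(np.arange(1, nf)), 1) # truncated annihilation operator
I2, If = np.eye(2), np.eye(nf)

def site(o, s):                           # embed operator o at slot s (atom, mode 1, mode 2)
    ops = [I2, If, If]; ops[s] = o
    return np.kron(np.kron(ops[0], ops[1]), ops[2])

def build_H(C):                           # Hamiltonian from a 3x3 coupling matrix
    H = 0.5 * C[0, 0] * site(sz, 0)
    for k in (0, 1):
        H = H + C[1 + k, 1 + k] * site(a.T @ a, 1 + k)
        H = H - 1j * C[0, 1 + k] * site(sx, 0) @ (site(a, 1 + k) - site(a.T, 1 + k))
    return H + C[1, 2] * (site(a.T, 1) @ site(a, 2) + site(a.T, 2) @ site(a, 1))

psi0 = np.zeros(2 * nf * nf, dtype=complex)
psi0[0] = 1.0                             # |e, 0, 0>: excited atom, field vacuum
for H in (build_H(M_D), build_H(M_B)):
    psi = expm(-1j * H * 2.0) @ psi0      # evolve to t = 2 (arbitrary units)
    print(np.real(psi.conj() @ site(proj_e, 0) @ psi))   # nearly identical outputs
\end{verbatim}
The atomic observable is unaffected by the orthogonal rotation of the modes, so in the untruncated limit the two printed populations coincide exactly.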
We have investigated time-domain simulations of this system using the time-evolving block decimation (TEBD) algorithm \cite{Vidal_2004_MPS,Paeckel_et_al2019_TE_methods_for_MPS} and the time-dependent variational principle \cite{Haegeman_et_al_2011_TDVP,Haegeman_et_al_2016_TDVP} and determined that MPS simulation of (\ref{eq:band_Ham}) for $N_a\ge3$ in the ultrastrong coupling regime is computationally impracticable \footnote{Regardless of the time-evolution algorithm, dealing with $N_a\ge3$ appears to be computationally impracticable in the ultrastrong coupling regime because it requires forming an $(N_a+1)$-site operator. This leads to a very large numerical array, and furthermore, its matrix product operator representation exhibits very high bond dimensions, making computations with it prohibitively expensive for a classical computer. For weaker coupling strengths, going beyond two atoms is possible since fewer Fock-state levels per bosonic mode need to be taken into account}. Therefore, we restrict our MPS simulations to two identical atoms in a 1D PEC cavity in the presence of thirty electromagnetic modes ($M=30$). This is adequate for characterizing propagation effects \cite{Munoz_superluminal_2018}, and we are interested in how entangled atoms interact with multiple field modes in the ultrastrong coupling regime. The setting is similar to the one depicted in Fig.~\ref{fig:three_atoms_in_cav} but with atoms now placed at $x=\pm L/4$. The MPS is used to efficiently represent the time-evolving quantum state $\ket{\psi(t)}$, and the time evolution operator $e^{-i\hat{H}_B\Delta t/\hbar}$ is approximately constructed as a matrix product operator using TEBD. \begin{figure*} \caption{MPS simulation results of the two-atom, thirty-mode Dicke model using the band Hamiltonian (\ref{eq:band_Ham}).} \label{fig:ent_sim_results} \end{figure*} We consider three initial states: \begin{subequations} \begin{align} \ket{\psi_1}&=\big(\ket{e_1e_2}+\ket{g_1g_2}\big)/\sqrt{2},\label{eq:psi_1}\\ \ket{\psi_2}&=\big(\ket{e_1g_2}+\ket{g_1e_2}\big)/\sqrt{2},\label{eq:psi_2}\\ \ket{\psi_3}&=\big(\ket{e_1}+\ket{g_1}\big)\big(\ket{e_2}+\ket{g_2}\big)/2\label{eq:psi_3} \end{align} \end{subequations} with vacuum (no photons) in all three cases. The subscripts in the above distinguish the two atoms. The first two states are maximally entangled with different configurations, while the last is separable (non-entangled). We reveal the differences in the time evolution characteristics of these three initial states. We consider the onset of the ultrastrong coupling regime, where $\max_{j,k}|g_{j,k}/\omega_k|=0.1$. The simulation results are shown in Fig.~\ref{fig:ent_sim_results}. In particular, the von Neumann entanglement entropies are plotted in the last row to quantify the degree of entanglement for two different bipartitions of the time-evolving MPS. The entropies are computed as \begin{subequations} \begin{align} S_1(t)&=-\operatorname{tr}[\hat\rho_1(t)\ln\hat\rho_1(t)],\\ S_{1:2}(t)&=-\operatorname{tr}[\hat\rho_{1:2}(t)\ln\hat\rho_{1:2}(t)], \end{align} \end{subequations} where the reduced density operators in the above are obtained by taking the partial trace of the total density operator, $\hat\rho(t)=\ket{\psi(t)}\bra{\psi(t)}$, as $\hat\rho_{1:m}(t)=\operatorname{tr}_{m+1:N}[\hat\rho(t)]$, where $m$ indexes the physical sites of the MPS, and the bipartition is taken between sites $m$ and $m+1$. When $m=1$, we simply denote $\hat\rho_{1:1}(t)=\hat\rho_1(t)$. 
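Numerically, the entropy of such a bipartition follows directly from the Schmidt (singular) values along the cut. The following minimal sketch (ours, independent of any particular MPS library) evaluates it from a given singular-value spectrum.
\begin{verbatim}
import numpy as np

def bond_entropy(schmidt_values):
    # Von Neumann entropy from the Schmidt (singular) values at an MPS bond:
    # S = -sum_i p_i ln p_i with p_i = s_i^2 (renormalized for safety).
    p = np.asarray(schmidt_values, dtype=float) ** 2
    p = p[p > 1e-15]
    p = p / p.sum()
    return float(-np.sum(p * np.log(p)))

# A maximally entangled atom at the first MPS site gives S_1 = ln 2:
print(bond_entropy([2**-0.5, 2**-0.5]))   # ~0.6931
\end{verbatim}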
The density operator and its bipartitions are further explained in Fig.~\ref{fig:mps_dens_op_bipartition}. With MPS, these entropies can be simply computed by taking the singular values at the zero-site center located right along the bipartition \cite{Haegeman_et_al_2016_TDVP}. \begin{figure} \caption{Tensor network diagram of the total density operator formed by taking the outer product of the MPS. The first two sites represent the atoms, and the remaining sites represent the electromagnetic modes. The bipartition is taken between sites one and two (red dashed line) to calculate $S_1(t)$, and sites two and three (blue dotted line) to calculate $S_{1:2}(t)$.} \label{fig:mps_dens_op_bipartition} \end{figure} The simulations in Fig.~\ref{fig:ent_sim_results} take place at the lowest end of the ultrastrong coupling regime, where both weak and ultrastrong coupling effects are visible. The weak coupling effect is shown in the field correlation plot where a ``glow'' in the cavity is observed. This glow represents the fundamental mode of the cavity to which the atoms couple dominantly. The propagation effects, characterized by a wavefront traveling at the speed of light, are also visible due to the ultrastrong coupling between the atoms and field modes. What is notable about the simulation results in Fig.~\ref{fig:ent_sim_results} is that although both initial states (\ref{eq:psi_1}) and (\ref{eq:psi_2}) are maximally entangled states, they exhibit very different behaviors in the presence of multiple electromagnetic modes. The initial state (\ref{eq:psi_2}) displays a highly periodic behavior resembling Rabi oscillations. It can be seen that this initial state is almost fully revived when $t/(2\pi/\omega_{a,1})=5$, meaning that both atoms nearly go back to the maximally entangled initial state. This does not happen for initial states (\ref{eq:psi_1}) and (\ref{eq:psi_3}). Regarding the von Neumann entanglement entropy, the maximum possible value of $S_1(t)$ is $\ln2\approx0.693$ since the first site of the MPS is occupied by an atom. When $S_1(t)$ returns close to this maximum value at times $t>0$ for the entangled initial states (\ref{eq:psi_1}) and (\ref{eq:psi_2}), either the atoms have returned to the maximally entangled initial state in the case of (\ref{eq:psi_2}), or the first atom is entangled with both the second atom and the field modes in the case of (\ref{eq:psi_1}). For the separable initial state (\ref{eq:psi_3}), we observe the entropies starting out at zero and slowly increasing over time. For long enough simulations, these values will saturate to a level that depends on the coupling strength. \textit{Conclusion.}---We have presented a novel generalization of the chain mapping technique based on coupling matrix transformations that works accurately for arbitrary numbers of atoms and modes. Our technique is very useful for tensor network simulations of the multimode Dicke model and the multi-spin-boson model because it converts the coupling structures of these models into a linear chain form with near-neighbor interactions, which is highly compatible with MPS. The coupling matrix transformations are numerically stable, and the technique reduces to the chain mapping technique in the single-atom case. We have demonstrated the equivalence between the Dicke (\ref{eq:Dicke_Ham}) and band (\ref{eq:band_Ham}) Hamiltonians and applied the band Hamiltonian to MPS simulations of two entangled atoms interacting with thirty electromagnetic modes. 
Our future work involves extending this technique to realistic 3D models such as flux qubits ultrastrongly coupled to coplanar waveguide resonators \cite{Niemczyk_et_al_2010_Circuit_QED_in_the_USC_regime,Wang_et_al_2020_Bloch-Siegert_shift_in_US_coupled_circuit_QED}. \begin{acknowledgments} This work was supported by the National Science Foundation Grant No.~2202389 and a teaching assistantship from the Department of Electrical and Computer Engineering at the University of Illinois Urbana-Champaign. \end{acknowledgments} \input{ms.bbl} \end{document}
\begin{document} \title{Very large set axioms over constructive set theories} \begin{abstract} We investigate large set axioms defined in terms of elementary embeddings over constructive set theories, focusing on $\mathsf{IKP}$ and $\mathsf{CZF}$. Most previously studied large set axioms, notably the constructive analogues of large cardinals below $0^\sharp$, have proof-theoretic strength weaker than full Second-order Arithmetic. On the other hand, the situation is dramatically different for those defined via elementary embeddings. We show that by adding to $\mathsf{IKP}$ the basic properties of an elementary embedding $j\colon V\to M$ for $\Delta_0$-formulas, which we will denote by $\Delta_0\text{-}\mathsf{BTEE}_M$, we obtain the consistency of $\mathsf{ZFC}$ and more. We will also see that the consistency strength of a Reinhardt set exceeds that of $\mathsf{ZF+WA}$. Furthermore, we will define super Reinhardt sets and $\mathsf{TR}$, which is a constructive analogue of $V$ being totally Reinhardt, and prove that their proof-theoretic strength exceeds that of $\mathsf{ZF}$ with choiceless large cardinals. \end{abstract} \tableofcontents \section{Introduction} Large cardinals have played a pivotal role in set theory, and many of them can be defined in terms of elementary embeddings. Associating large cardinals with elementary embeddings appeared first in Scott's pioneering paper \cite{Scott1961}, and was further systematically developed throughout the 1960s and 1970s. Many of these results were collected by Reinhardt and Solovay around the early 1970s and were later published with Kanamori in the expository paper \cite{SolovayReinhardtKanamori1978}. The attempt to find ever stronger notions of large cardinal axioms culminated in the principle now known as a \emph{Reinhardt cardinal}, which was first mentioned in Reinhardt's doctoral thesis \cite{ReinhardtPhD}. A Reinhardt cardinal is a critical point of a non-trivial elementary embedding $j\colon V\to V$. Unfortunately, the fate of a Reinhardt cardinal in $\mathsf{ZFC}$ is that of inconsistency, as if it were Icarus falling into the sea after flying too close to the sun. A famous result by Kunen \cite{Kunen1971} proves that Reinhardt cardinals are incompatible with the Axiom of Choice, and it is still unknown whether $\mathsf{ZF}$ with a Reinhardt cardinal is consistent. However, there has been little study of the consistency of Reinhardt cardinals in the choiceless context, and few results about the implications of such axioms appeared in the literature before 2010. The only exception the authors know of is a result of Apter and Sargsyan \cite{ApterSargsyan2004}, and other relevant results about Reinhardt embeddings focused on their inconsistency. Notable examples include Suzuki's non-definability of embeddings $j\colon V\to V$ over $\mathsf{ZF}$ \cite{Suzuki1999} or Zapletal's PCF-theoretic proof of Kunen's inconsistency theorem \cite{Zapletal1996}. \\ Choiceless large cardinals are large cardinal notions that extend a Reinhardt cardinal and are therefore incompatible with the Axiom of Choice. A super Reinhardt cardinal was employed by Hugh Woodin in 1983 to prove the consistency of $\mathsf{ZFC+I_0}$, which had a focal role in establishing the consistency of $\mathsf{ZF+AD}^{L(\mathbb{R})}$. A Berkeley cardinal was introduced by Woodin around 1992 in his set theory seminar as an attempt to provide a large cardinal notion that is refutable from $\mathsf{ZF}$ alone. 
While no such inconsistency has been found so far, it has since become an interesting principle in itself. Current research on choiceless large cardinals emerged in the mid-2010s as part of a project to explore Woodin's $\mathsf{HOD}$ dichotomy (see Section 7.1 of \cite{Woodin2010} or \cite{BagariaKoellnerWoodin2019} for details), under the thesis that such cardinals would indicate that $V$ is `far' from $\mathsf{HOD}$ in some sense. Bagaria, Koellner, and Woodin collected and analyzed notions of choiceless large cardinals in \cite{BagariaKoellnerWoodin2019}, and the theory of choiceless large cardinals was further developed in various papers by authors including Cutolo, Goldberg and Schlutzenberg. One of the most striking results along this line is a result by Goldberg \cite{Goldberg2021EvenOrdinals}, which establishes the consistency of $\mathsf{ZF} + j\colon V_{\lambda+2}\to V_{\lambda+2}$ modulo large cardinals over $\mathsf{ZFC}$: \begin{theorem}[Goldberg \cite{Goldberg2021EvenOrdinals}, Theorem 6.20] \pushQED{\qed} These two theories are equiconsistent over $\mathsf{ZF+DC}$: \begin{enumerate} \item For some ordinal $\lambda$, there is an elementary embedding $j\colon V_{\lambda+2}\to V_{\lambda+2}$.\footnote{Karagila proposed the term \emph{Kunen cardinal} for a critical point of an elementary embedding $j\colon V_{\lambda+2}\to V_{\lambda+2}$ since Kunen's result \cite{Kunen1971} shows no such cardinal can exist in $\mathsf{ZFC}$.} \item $\mathsf{AC+I_0}$. \qedhere \end{enumerate} \end{theorem} Furthermore, Goldberg proved that the existence of an elementary embedding $j\colon V_{\lambda+3}\to V_{\lambda+3}$ exceeds, in consistency strength, almost all of the traditional large cardinal hierarchy over $\mathsf{ZFC}$. \begin{theorem}[Goldberg \cite{Goldberg2021EvenOrdinals}, Theorem 6.16] \label{theorem:Goldberg} Working over $\mathsf{ZF+DC}$, the existence of a $\Sigma_1$-elementary embedding $j\colon V_{\lambda+3}\to V_{\lambda+3}$ implies the consistency of $\mathsf{ZFC}+\mathsf{I_0}$. \end{theorem} In the other direction, we can try to `salvage' an elementary embedding $j\colon V\to V$ by weakening the background set theory. One direction of research in this manner was conducted by Corazza. Corazza \cite{Corazza2000} introduced the \emph{Wholeness axiom}, $\mathsf{WA}$, by dropping Replacement for $j$-formulas. $\mathsf{WA}$ is known to be weaker than $\mathsf{I}_3$. Corazza further weakened $\mathsf{WA}$ to the \emph{Basic Theory of Elementary Embeddings}, $\mathsf{BTEE}$, in \cite{Corazza2006}, which is obtained by dropping all axioms for $j$-formulas in the extended language and is the weakest setting one needs to express the existence of an elementary embedding. The resulting axiom is known to be weaker than the existence of $0^\sharp$. Another direction to weaken the assumptions is by dropping Powerset. However, it should be noted that the theory obtained by ejecting Powerset from $\mathsf{ZFC}$ and only assuming Replacement is ill-behaved with respect to elementary embeddings. For example, in \cite{GitmanHamkinsJohnstone2016} it is shown that \L o\'s's theorem can fail over this theory, and a cofinal $\Sigma_1$-elementary embedding need not be fully elementary. On the other hand, they also show that these issues can be avoided by strengthening it to $\mathsf{ZFC}^-$, which is obtained by additionally assuming Collection. 
Further research along this line characterizes large cardinal notions in terms of models of $\mathsf{ZFC}^-$ with an ultrafilter predicate (for example, \cite{HolyLucke2021} or \cite{GitmanSchlichtUnpublished}). Large cardinals defined in this way refine the large cardinal hierarchy between a Ramsey cardinal and a measurable cardinal, and provide bounds for the consistency strength of $\mathsf{ZFC}^-$ with an elementary embedding. As one such example of this characterization, we have the following theorem: \begin{theorem}[\cite{MatthwesPhD}, Theorem 10.5.7] \label{theorem:ZFC-CriticalStrength} $\mathsf{ZFC}$ with a locally measurable cardinal proves the consistency of $\mathsf{ZFC}^- + \mathsf{DC}_{<\mathrm{Ord}}$ plus the existence of a non-trivial elementary embedding $j\colon V\to M$. \end{theorem} Finally, it can be shown that an elementary embedding $j \colon V \rightarrow V$ is possible in $\mathsf{ZFC}^-$ under the large cardinal assumption of $\mathsf{ZFC + I_1}$: \begin{theorem}[\cite{MatthwesPhD}, Theorem 9.3.2 or \cite{Matthews2020}, Theorem 2.2] $\mathsf{ZFC}$ proves the following: there is an elementary embedding $k\colon V_{\lambda+1}\to V_{\lambda+1}$ if and only if there is an elementary embedding $j\colon H_{\lambda^+}\to H_{\lambda^+}$. As a corollary, $\mathsf{ZFC}^-_j$\footnote{Where $\mathsf{ZFC}^-_j$ is $\mathsf{ZFC}^-$ in the language expanded to include $j$. See \autoref{Convention:T_j}.} is compatible (modulo large cardinals over $\mathsf{ZFC}$) with a non-trivial elementary embedding $j\colon V\to V$ which additionally satisfies that $V_{\operatorname{crit} j}$ exists. \end{theorem} However, \cite{Matthews2020} also showed that there is a limitation on the properties of such an elementary embedding \mbox{$j\colon V\to V$} over $\mathsf{ZFC}^-$. One of these restrictions is that $j\colon V\to V$ cannot be cofinal\footnote{An embedding $j \colon M \rightarrow N$ is said to be \emph{cofinal} if for every $y \in N$ there is an $x \in M$ such that $y \in j(x)$.}: \begin{theorem}[\cite{MatthwesPhD}, Theorem 10.2.3 or \cite{Matthews2020}, Theorem 5.4] Working over $\mathsf{ZFC}^-_j$, if $j\colon V\to V$ is a non-trivial $\Sigma_0$-elementary embedding such that $V_{\operatorname{crit} j}$ exists, then it cannot be cofinal. \end{theorem} We may also weaken the background set theory by dropping the law of excluded middle, that is, moving into a constructive setting. Since some statements are no longer equivalent over a constructive background, we need to carefully formulate constructive counterparts of classical notions, including the axiom systems of constructive set theories. The first form of a constructive set theory was defined by H. Friedman \cite{Friedman1973}, and is now known as \emph{Intuitionistic $\mathsf{ZF}$}, $\mathsf{IZF}$. In \cite{Friedman1973}, Friedman showed that $\mathsf{IZF}$ and $\mathsf{ZF}$ are mutually interpretable by using the combination of double-negation translation and non-extensional set theory. Another flavor of constructive set theory appeared as an attempt to establish a formalization of Bishop's constructive analysis. Myhill \cite{Myhill1975} gave his own formulation of a constructive set theory, $\mathsf{CST}$. However, its language is different from the standard one -- $\mathsf{CST}$ includes natural numbers as primitive objects, while $\mathsf{ZF}$ does not. 
The other formulation of a constructive set theory, which is closer to standard $\mathsf{ZF}$, was given by Aczel \cite{Aczel1978} via a type-theoretic interpretation. Aczel's constructive set theory is called \emph{Constructive $\mathsf{ZF}$,} $\mathsf{CZF}$. Aczel further developed the theory of $\mathsf{CZF}$ and its relationship with Martin-L\"of type theory in the subsequent works \cite{Aczel1982} and \cite{Aczel1986}. In particular, the last paper in this series, \cite{Aczel1986}, defined \emph{regular sets}, which began the program of defining large cardinals over $\mathsf{CZF}$. The first research on constructive analogues of large cardinal axioms was done by Friedman and \v{S}\v{c}edrov \cite{FriedmanScedrov1984}. Unlike in classical set theories, ordinals over constructive set theories are not well-behaved. This motivated defining large cardinal notions over constructive set theories in a structural manner, resulting in \emph{large set axioms}. They defined and analyzed inaccessible sets, Mahlo sets, and various elementary embeddings over $\mathsf{IZF}$, and proved that their consistency strength is no different from that of their classical counterparts. Large set axioms over $\mathsf{CZF}$ appeared first in various papers of Rathjen (for example, \cite{Rathjen1993}, \cite{Rathjen1998}, \cite{RathjenGrifforPalmgren1998}, \cite{Rathjen1999Realm}). The first appearance of large set axioms over $\mathsf{CZF}$ was in a proof-theoretic context, and their relationship with better-known theories, like extensions of Martin-L\"of type theories or $\mathsf{KP}$, was emphasized. An initial analysis of large set axioms over $\mathsf{CZF}$ can be found in \cite{AczelRathjen2001} or \cite{AczelRathjen2010}, and this has been further extended by Gibbons \cite{GibbonsPhD} and Ziegler \cite{ZieglerPhD} in their respective doctoral theses. Gibbons \cite{GibbonsPhD} extended Rathjen's analysis of the proof-theoretic strength of $\mathsf{CZF}$ in \cite{Rathjen1993} to $\mathsf{CZF}$ with Mahlo sets. Furthermore, Gibbons' thesis is the first publication to give the definition of a critical set.\footnote{He called it a \emph{measurable set}. We will address this terminology in \autoref{Section:LargeLargeSet}.} Unlike other results around large set axioms over $\mathsf{CZF}$, Ziegler's thesis focused on what we can derive about large set axioms from $\mathsf{CZF}$ alone. For example, he observed that the number of inaccessible sets may not affect the proof-theoretic strength: \begin{theorem}[\cite{ZieglerPhD}, Chapter 6] \label{theorem:ZieglerInaccessibleStrength} The following theories are equiconsistent: \begin{enumerate} \item $\mathsf{CZF}$ with an inaccessible set, \item $\mathsf{CZF}$ with two inaccessible sets, and \item $\mathsf{CZF}$ with $\omega$ inaccessible sets. \end{enumerate} \end{theorem} Ziegler also examined elementary embeddings over $\mathsf{CZF}$ in detail, and one of his striking results is that every Reinhardt embedding $j\colon V\to V$ must be cofinal (see \autoref{Proposition:CZFReinhardtEmbeddingCofinal} for the formal statement of the theorem). Ziegler's thesis ends with the following result, that elementary embeddings are incompatible with the principle of \emph{subcountability}, which asserts that for every set there is a partial surjection from $\omega$ onto it. This can be seen as a constructive analogue of Scott's early result in \cite{Scott1961} that measurable cardinals are incompatible with the Axiom of Constructibility. 
\begin{theorem}[Ziegler \cite{ZieglerPhD}, Theorem 9.93] \pushQED{\qed} Over $\mathsf{CZF}$, the combination of the Axiom of Subcountability and the existence of a critical set results in a contradiction. \qedhere \end{theorem} Rathjen analyzed the proof-theoretic strength of \emph{small large sets}, large set notions whose classical counterpart is weaker than the existence of $0^\sharp$, over $\mathsf{CZF}$ (see \autoref{Section:SmallLargeSet}, especially \autoref{Proposition:Prooftheoreticstrength-smalllargesets}), and the proof-theoretic strength of all currently known small large set axioms over $\mathsf{CZF}$ is weaker than that of Second-order Arithmetic. However, the proof-theoretic strength of \emph{large large sets}, large set notions defined in terms of elementary embeddings, has yet to receive a formal rigorous treatment. We end this introduction by noting some history of the development of this work. A first version of this paper can be found in \cite{Jeon2021CZF}, where the first author studied the consistency strength of a Reinhardt set over $\mathsf{CZF}$ with Full Separation. The second author's PhD thesis, \cite{MatthwesPhD}, included an initial investigation into elementary embeddings over $\mathsf{KP}$ and $\mathsf{IKP}$. This version can be seen as a combination of the previous work by the individual authors and extensively extends the results found in either source. \subsection*{Main results} It turns out that the proof-theoretic strength of large large sets vastly exceeds that of $\mathsf{ZFC}$. In fact, we just need a small fragment of the properties of an elementary embedding, which we shall denote by $\Delta_0\text{-}\mathsf{BTEE}_M$ and which is the minimal theory needed to claim that $j\colon V\to M$ is a $\Delta_0$-elementary embedding, to exceed the proof-theoretic strength of $\mathsf{ZFC}$. \begin{theorem*} \phantom{a} \begin{enumerate} \item (\autoref{Theorem:CriticalPointModelsIZF}) Working over $\mathsf{IKP}$, let $K$ be a transitive set such that $K\models \Delta_0\text{-}\mathsf{Sep}$ and $\omega\in K$, and let $j \colon V\to M$ be a $\Delta_0$-elementary embedding whose critical point is $K$. Then $K\models \mathsf{IZF}$. \item (\autoref{Corollary:LambdaModelsIZFBTEEInd}) Furthermore, if we additionally allow Set Induction and Collection for $\Sigma^{j,M}$-formulas and add Separation for $\Delta_0^{j,M}$-formulas, then we can define $j^\omega(K):=\bigcup_{n\in\omega} j^n(K)$ and prove that $j^\omega(K)$ satisfies $\mathsf{IZF+BTEE}$ plus Set Induction for $j$-formulas. \item (\autoref{Theorem:IKPSigmaOrdimpliesIZF+BTEE} and \autoref{Theorem:InterpretationResults}) As a consequence, the following two theories prove the consistency of $\mathsf{ZFC+BTEE}$ plus Set Induction for $j$-formulas: $\mathsf{CZF}_{j, M}$ with a critical set, and $\mathsf{IKP}_{j,M}$ plus a \mbox{$\Sigma$-$\mathrm{Ord}$-inary} elementary embedding with a critical point $\kappa\in\mathrm{Ord}$. \end{enumerate} \end{theorem*} The next natural question would be how strong a Reinhardt set is. It turns out that Reinhardt sets are very strong over $\mathsf{CZF}$: \begin{theorem*}(\autoref{Theorem:CZF+ReinhardtinterpretsZF+WA}) $\mathsf{CZF}$ with a Reinhardt set proves the consistency of $\mathsf{ZF+WA}$. \end{theorem*} These two results motivate the idea that the proof-theoretic strength of stronger large set notions may go beyond that of $\mathsf{ZF}$ with choiceless large cardinals. This idea turns out to hold, and a constructive formulation of super Reinhardt cardinals witnesses this. 
We can push this idea further and expect to reach an `equilibrium' by strengthening large set axioms once more, in the sense that adding certain large set notions to $\mathsf{CZF}$ yields the same proof-theoretic strength as adding the same axiom to $\mathsf{IZF}$. We can see that the assertion that $V$ resembles $V_\kappa$ for a total Reinhardt cardinal $\kappa$, which we will call $\mathsf{TR}$, witnesses this claim. However, our analogues of both super Reinhardt and total Reinhardt cardinals require a second-order formulation. We will resolve this issue by defining $\mathsf{CGB}$, which is a constructive version of $\mathsf{GB}$. Moreover, formulating an elementary embedding in a constructive context requires infinite connectives because there is no obvious way to cast elementarity into a single formula. (It is possible in a classical context because every $\Sigma_1$-elementary embedding $j\colon V\to M$ is fully elementary.) This motivates $\mathsf{CGB}_\infty$, $\mathsf{CGB}$ with infinite connectives. Extending $\mathsf{CGB}$ with infinite connectives will turn out to be `harmless' in the sense that $\mathsf{CGB}_\infty$ is conservative over $\mathsf{CGB}$. Under this setting, we have the following results: \begin{theorem*} \phantom{a} \begin{enumerate} \item (\autoref{Theorem:LowerBound-superReinhardt}) $\mathsf{CGB}_\infty$ with a super Reinhardt set proves the consistency of $\mathsf{ZF}$ with a Reinhardt cardinal, \item (\autoref{Corollary:CGBTRinterpretsGBTR}) $\mathsf{CGB}_\infty + \mathsf{TR}$ interprets $\mathsf{ZF+TR}$. \end{enumerate} \end{theorem*} \subsection*{The structure of the paper} This paper is largely divided into two parts: the `internal' analysis of large set axioms over constructive set theories, and the derivation of lower bounds for large large set axioms in terms of extensions of classical set theories. Defining some of these axioms will require largely unexplored notions, including second-order constructive set theories, so we define the necessary preliminary notions in \autoref{Section:Prelim}. Next, we review the basic properties of large set axioms over constructive set theories. This is also divided into two sections: in \autoref{Section:SmallLargeSet}, we review small large sets over constructive set theories and their connection with classical set theories. Then in \autoref{Section:LargeLargeSet}, we define large large sets over constructive set theories, including analogues of choiceless large cardinals. Having laid down the necessary framework, in \autoref{Section:StructuralLowerbdd} we provide an internal analysis of large set axioms. The main consequence of \autoref{Section:StructuralLowerbdd} is that we provide lower bounds for the consistency strength of large large set axioms in terms of extensions of $\mathsf{IZF}$, from which it will be easier to interpret classical theories by applying a double-negation translation. Ultimately, we want to state the consistency strengths in terms of extensions of classical set theories, so we need a method to transform intuitionistic theories into classical ones. This is what \autoref{Section:HeytingInterpretation} mainly focuses on. In \autoref{Section:HeytingInterpretation}, we review Gambino's Heyting-valued interpretation defined in \cite{Gambino2006} and investigate this interpretation under the double-negation topology. We will see that the interpretation translates $\mathsf{IZF}$ into the classical theory $\mathsf{ZF}$. 
Furthermore, we will also reduce extensions of $\mathsf{IZF}$ to those of $\mathsf{ZF}$ by using the double-negation topology. \autoref{Section:DNT-SOST} is devoted to the double-negation translation of second-order set theories and the concept of the universe being totally Reinhardt. We summarize the lower bounds for the consistency strength of large large set axioms over constructive set theories in terms of extensions of $\mathsf{ZFC}$ in \autoref{Section:ConsistencyStrength:Final} before ending by posing some questions for future investigation in \autoref{Section:RemarkQuestions}. \section{Preliminaries}\label{Section:Prelim} In this section, we will briefly review $\mathsf{ZFC}$ without Power Set, $\mathsf{ZFC}^-$, and constructive set theory. There are various formulations of constructive set theories, but we will focus on $\mathsf{CZF}$. In addition, we will define the second-order variants $\mathsf{CGB}$ and $\mathsf{IGB}$ of $\mathsf{CZF}$ and $\mathsf{IZF}$, respectively. \subsection{\texorpdfstring{$\mathsf{ZFC}$}{ZFC} without Power Set} We will frequently mention $\mathsf{ZFC}$ without Power Set, denoted by $\mathsf{ZFC}^-$. However, $\mathsf{ZFC}^-$ is not obtained by just dropping Power Set from $\mathsf{ZFC}$: \begin{definition} $\mathsf{ZF}^-$ is the theory obtained from $\mathsf{ZF}$ by dropping Power Set and using Collection instead of Replacement. $\mathsf{ZFC}^-$ is obtained by adding the Well-Ordering Principle to $\mathsf{ZF}^-$. \end{definition} Note that using Collection instead of Replacement is necessary to avoid pathologies. See \cite{GitmanHamkinsJohnstone2016} for the details. It is also known by \cite{FriedmanGitmanKanovei2019} that $\mathsf{ZFC}^-$ does not prove the reflection principle. \subsection{Intuitionistic set theory \texorpdfstring{$\mathsf{IZF}$}{IZF} and Constructive set theory \texorpdfstring{$\mathsf{CZF}$}{CZF}} There are two possible constructive formulations of $\mathsf{ZF}$, namely $\mathsf{IZF}$ and $\mathsf{CZF}$, although we will focus on the latter. $\mathsf{IZF}$ appeared first in H. Friedman's paper \cite{Friedman1973} on the double-negation translation of set theory. Friedman introduced $\mathsf{IZF}$ as an intuitionistic counterpart of $\mathsf{ZF}$ and showed that there is a double-negation translation from $\mathsf{ZF}$ to $\mathsf{IZF}$, analogous to that from $\mathsf{PA}$ to $\mathsf{HA}$. \begin{definition} $\mathsf{IZF}$ is the theory that comprises the following axioms: Extensionality, Pairing, Union, Infinity, Set Induction, Separation, Collection, and Power Set. \end{definition} \begin{remark} \label{InfinityDefinition} We take the axiom of Infinity to be the statement $\exists a (\exists x (x \in a) \land \forall x \in a \, \exists y \in a \, (x \in y))$. See Remark \ref{IKPformulation} for alternative, equivalent, ways to define Infinity. \end{remark} Constructive Zermelo-Fraenkel set theory, $\mathsf{CZF}$, was introduced by Aczel \cite{Aczel1978} together with his type-theoretic interpretation of $\mathsf{CZF}$. We will introduce the subtheories called \emph{Basic Constructive Set Theory}, $\mathsf{BCST}$, and $\mathsf{CZF}^-$ before defining the full $\mathsf{CZF}$. \begin{definition} $\mathsf{BCST}$ is the theory that consists of Extensionality, Pairing, Union, Emptyset, Replacement, and $\Delta_0$-Separation. 
$\mathsf{CZF}^-$ is obtained by adding the following axioms to $\mathsf{BCST}$: Infinity, Set Induction, and \emph{Strong Collection}, which states the following: if $\phi(x,y)$ is a formula and $a$ is a set such that $\forall x\in a\exists y \phi(x,y)$, then we can find $b$ such that \begin{equation*} \forall x\in a\exists y \in b \phi(x,y) \land \forall y\in b\exists x\in a \phi(x,y). \end{equation*} \end{definition} We also provide notation for frequently-mentioned axioms: \begin{definition} We will use $\mathsf{Sep}$, $\Delta_0\text{-}\mathsf{Sep}$, and $\Delta_0\text{-}\mathsf{LEM}$ to denote Full Separation (i.e., Separation for all formulas), $\Delta_0$-Separation, and the law of excluded middle for $\Delta_0$-formulas, respectively. \end{definition} The combination of Full Separation and Collection proves Strong Collection, but the implication does not hold if we weaken Full Separation to $\Delta_0$-Separation. It is also known that $\Delta_0$-Separation is equivalent to the existence of the intersection of two sets. See Section 9.5 of \cite{AczelRathjen2010} for its proof. \begin{proposition}\pushQED{\qed} Working over $\mathsf{BCST}$ without $\Delta_0$-Separation, $\Delta_0$-Separation is equivalent to the \emph{Axiom of Binary Intersection}, which asserts that $a\cap b$ exists if $a$ and $b$ are sets. \qedhere \popQED \end{proposition} It is convenient to introduce the notion of a \emph{multi-valued function} to describe the Strong Collection and Subset Collection axioms that we will discuss shortly. Let $A$ and $B$ be classes. A relation $R\subseteq A\times B$ is a \emph{multi-valued function from $A$ to $B$} if $\operatorname{dom} R=A$. In this case, we write $R\colon A\rightrightarrows B$. We use the notation $R \colon A\leftrightarrows B$ if both $R \colon A\rightrightarrows B$ and $R \colon B\rightrightarrows A$ hold. The reader should keep in mind that the previous definition must be rephrased in an appropriate first-order form if one of $A$, $B$, or $R$ is a (definable) proper class, in the same way as we translate classes over $\mathsf{ZF}$. Then we can rephrase Strong Collection as follows: for every set $a$ and every class multi-valued function \mbox{$R \colon a\rightrightarrows V$}, there is an `image' $b$ of $a$ under $R$, that is, a set $b$ such that $R \colon a\leftrightarrows b$. Now we can state the Axiom of Subset Collection: \begin{definition} The Axiom of Subset Collection states the following: Assume that $\phi(x,y,u)$ is a formula that defines a collection of multi-valued functions from $a$ to $b$ parametrized by $u\in V$: that is, $\phi(x,y,u)$ satisfies $\forall u \forall x\in a\exists y\in b \phi(x,y,u)$. Then we can find a set $c$ such that \begin{equation*} \forall u \exists d\in c [\forall x\in a\exists y\in d \phi(x,y,u) \land \forall y\in d\exists x\in a \phi(x,y,u)]. \end{equation*} $\mathsf{CZF}$ is the theory obtained by adding Subset Collection to $\mathsf{CZF}^-$. \end{definition} We may state Subset Collection informally as follows: for every first-order definable collection of multi-valued class functions $\langle R_u \colon a\rightrightarrows b \mid u\in V\rangle$ from $a$ to $b$, we can find a set $c$ of `images' of $a$ under the $R_u$. That is, for every $u\in V$ there is $d\in c$ such that $R_u \colon a \leftrightarrows d$. There is a simpler axiom equivalent to Subset Collection, known as \emph{Fullness}, which is a bit easier to understand. 
\begin{definition} The Axiom of Fullness states the following: Let $\operatorname{mv}(a,b)$ be the class of all multi-valued functions from $a$ to $b$. Then there is a subset $c\subseteq \operatorname{mv}(a,b)$ such that if $r\in \operatorname{mv}(a,b)$, then there is $s\in c$ such that $s\subseteq r$. Such a $c$ is said to be \emph{full} in $\operatorname{mv}(a,b)$. \end{definition} Then the following hold: \begin{proposition}[\cite{AczelRathjen2001}, \cite{AczelRathjen2010}]\label{Proposition:SubsetCollection} \leavevmode \begin{enumerate} \item\label{Item:Fullness} \normalfont{($\mathsf{CZF}^-$)} Subset Collection is equivalent to Fullness. \item \normalfont{($\mathsf{CZF}^-$)} Power Set implies Subset Collection. \item \normalfont{($\mathsf{CZF}^-$)} Subset Collection proves that the function set ${^a}b$ exists for all sets $a$ and $b$. \item \normalfont{($\mathsf{CZF}^-$)} If $\Delta_0\text{-} \mathsf{LEM}$ holds, then Subset Collection implies Power Set. \end{enumerate} \end{proposition} We will not provide a proof for the above proposition, but the reader may consult \cite{AczelRathjen2001} or \cite{AczelRathjen2010} for one. We also note here that \cite{Rathjen1993} showed that Subset Collection does not increase the proof-theoretic strength of $\mathsf{CZF}^-$, while \cite{Rathjen2012Power} showed that the Axiom of Power Set does. The following lemma is useful to establish \eqref{Item:Fullness} of \autoref{Proposition:SubsetCollection}, and is also useful for treating multi-valued functions in general: \begin{lemma}\label{Lemma:PrelimAdjuectmentFtn} Let $R \colon A\rightrightarrows B$ be a multi-valued function. Define $\mathcal{A}(R) \colon A\rightrightarrows A\times B$ by \begin{equation*} \mathcal{A}(R) = \{\langle a,\langle a,b\rangle\rangle \mid \langle a,b\rangle \in R\}. \end{equation*} For $S\subseteq A\times B$, let $\mathcal{A}^S(R) = \{ \langle a, \langle a, b \rangle\rangle \mid \langle a, b \rangle \in R \cap S\}$.\footnote{After this lemma we will often abuse notation by referring to $\mathcal{A}^S(R)$ simply as $\mathcal{A}(R)$.} Then \begin{enumerate} \item ${\mathcal{A}^S(R)} \colon A\rightrightarrows S \iff R\cap S \colon A\rightrightarrows B$, \item ${\mathcal{A}^S(R)} \colon A\leftleftarrows S\iff S\subseteq R$. \end{enumerate} \end{lemma} \begin{proof} For the first statement, observe that $\mathcal{A}^S(R) \colon A\rightrightarrows S$ is equivalent to \begin{equation*} \forall a\in A \exists s\in S [\langle a,s\rangle \in \mathcal{A}(R)]. \end{equation*} By the definition of $\mathcal{A}$, this is equivalent to \begin{equation*}\label{Formula:Mvaluedftn_eq00} \forall a\in A \exists s\in S [\exists b\in B ( s=\langle a,b\rangle \land \langle a,b\rangle\in R)]. \end{equation*} We can see that the above statement is equivalent to $\forall a\in A\exists b\in B [\langle a,b\rangle\in R\cap S]$, which is the definition of $R\cap S \colon A\rightrightarrows B$. For the second claim, observe that $\mathcal{A}^S(R) \colon A\leftleftarrows S$ is equivalent to \begin{equation*} \forall s\in S \exists a\in A [\langle a,s\rangle \in \mathcal{A}(R)]. \end{equation*} By unfolding the definition of $\mathcal{A}$, we have \begin{equation*} \forall s\in S \exists a\in A [\exists b\in B ( s=\langle a,b\rangle \in R)]. \end{equation*} We can see that it is equivalent to $S\subseteq R$. 
\end{proof} The following lemma is useful when we work with multi-valued functions because it allows us to replace class multi-valued functions over $A$ with set multi-valued functions in $A$: \begin{lemma}\label{Lemma:SetMV} Assume that $A$ satisfies second-order Strong Collection, that is, for every $a\in A$ and \mbox{$R \colon a\rightrightarrows A$,} we have $b\in A$ such that $R \colon a\leftrightarrows b$.\footnote{If $A$ is also transitive, then $A$ shall be called \emph{regular}; this will be formally defined in \autoref{Definition:RegularityandInaccessibility}.} If $a\in A$ and $R \colon a\rightrightarrows A$, then there is a set $c\in A$ such that $c\subseteq R$ and $c \colon a\rightrightarrows A$. \end{lemma} \begin{proof} Consider $\mathcal{A}(R) \colon a\rightrightarrows a\times A$. By second-order Strong Collection over $A$, there is $c\in A$ such that $\mathcal{A}(R) \colon a\leftrightarrows c$. Hence by \autoref{Lemma:PrelimAdjuectmentFtn}, we have $c\subseteq R$ and $c \colon a\rightrightarrows A$. \end{proof} It is known that every theorem of $\mathsf{CZF}$ is also provable in $\mathsf{IZF}$. Moreover, $\mathsf{IZF}$ is quite strong in the sense that its proof-theoretic strength is the same as that of $\mathsf{ZF}$. On the other hand, it is known that the proof-theoretic strength of $\mathsf{CZF}$ is equal to that of Kripke-Platek set theory $\mathsf{KP}$. $\mathsf{IZF}$ is deemed to be \emph{impredicative} due to the presence of Full Separation and Power Set.\footnote{There is no consensus on the definition of predicativity. The usual informal description of predicativity is the rejection of self-referencing definitions.} On the other hand, $\mathsf{CZF}$ is viewed as predicative since it allows for a \emph{type-theoretic interpretation} such as the one given by Aczel \cite{Aczel1978}. However, adding the full law of excluded middle to $\mathsf{IZF}$ or $\mathsf{CZF}$ results in the same theory, namely $\mathsf{ZF}$. \subsection{Kripke-Platek set theory} Kripke-Platek set theory is a natural intermediate theory between arithmetic and stronger set theories like $\mathsf{ZF}$. $\mathsf{KP}$ has a natural intuitionistic counterpart called Intuitionistic Kripke-Platek, which we denote by $\mathsf{IKP}$. \begin{definition} $\mathsf{IKP}$ is the theory consisting of Extensionality, Pairing, Union, Infinity, Set Induction, $\Delta_0$-Collection, and $\Delta_0$-Separation. \end{definition} \begin{remark}\label{IKPformulation} The reader is reminded that there are different formulations of $\mathsf{KP}$ and $\mathsf{IKP}$. \begin{enumerate} \item Some authors, such as \cite{Avigad2000}, restrict Set Induction in $\mathsf{KP}$ and $\mathsf{IKP}$ to $\Pi_1$-formulas. We include full Set Induction in $\mathsf{KP}$ and $\mathsf{IKP}$. \item The formulation of Infinity over $\mathsf{IKP}$ is more subtle. Some authors, such as \cite{Avigad2000}, exclude Infinity from $\mathsf{KP}$ and $\mathsf{IKP}$, and denote $\mathsf{KP}$ with Infinity by $\mathsf{KP\omega}$. We also have an apparently stronger formulation, namely Strong Infinity, which is defined as follows: let $\mathsf{Ind}(a)$ be the formula $\varnothing \in a\land \forall x\in a (x\cup\{x\}\in a)$. Then Strong Infinity is the statement \begin{equation*} \exists a [\mathsf{Ind} (a) \land \forall b[\mathsf{Ind}(b)\to a\subseteq b]]. 
\end{equation*} Moreover, Lubarsky \cite{Lubarsky2002IKP} uses another alternative formulation of Infinity stated as follows: \begin{equation*} \exists a [\mathsf{Ind}(a)\land \forall x\in a[x=0\lor \exists y\in a (x=y\cup\{y\})]]. \end{equation*} However, these formulations are all equivalent over $\mathsf{IKP}$. The equivalence of Strong Infinity and Lubarsky's Infinity is easy to prove. The harder part is proving Strong Infinity from Infinity. This is done over $\mathsf{CZF}$ in Proposition 4.7 of \cite{AczelRathjen2001}, and one can verify that the proof also works over $\mathsf{IKP}$. \end{enumerate} \end{remark} Finally, let us observe that $\mathsf{IKP}$ proves Collection for a broader class of formulas, named \emph{$\Sigma$-formulas}: \begin{definition} \label{SigmaFormulasDefinition} The collection of $\Sigma$-formulas is the least collection which contains the $\Delta_0$-formulas and is closed under conjunction, disjunction, bounded quantifications, and unbounded $\exists$. \end{definition} \begin{theorem} \label{Theorem:IKPSigmaCollection} For every $\Sigma$-formula $\varphi(x, y, u)$ the following is a theorem of $\mathsf{IKP}$: For all sets $a$ and $u$, if $\forall x \in a \exists y \varphi(x, y, u)$ then there is a set $b$ such that \[ \forall x \in a \exists y \in b \varphi(x, y, u) \land \forall y \in b \exists x \in a \varphi(x, y, u). \] \end{theorem} We refer the reader to Section 19 of \cite{AczelRathjen2010} or Section 11 of \cite{AczelRathjen2001} for some of the basic axiomatic consequences of $\mathsf{IKP}$ and their proofs. \subsection{Inductive definition} Various recursive constructions over $\mathsf{CZF}$ are given by inductive definitions. The reader might refer to \cite{AczelRathjen2001} or \cite{AczelRathjen2010} for general information about inductive definitions, but we will review some of the details for readers who are not familiar with them. \begin{definition} \label{InductiveDefinition} An \emph{inductive definition} $\Phi$ is a class of pairs $\langle X,a\rangle$. To any inductive definition $\Phi$, associate the operator $\Gamma_\Phi(C)=\{a\mid \exists X\subseteq C \langle X,a\rangle\in\Phi\}$. A class $C$ is $\Phi$-closed if $\Gamma_\Phi(C)\subseteq C$. \end{definition} We may think of $\Phi$ as a generalization of a deductive system, and $\Gamma_\Phi(C)$ as the class of theorems derivable from the class of axioms $C$. Some authors use the notation $X\vdash_\Phi a$ or $X/a\in\Phi$ instead of $\langle X,a\rangle\in \Phi$. The following theorem says that each inductive definition induces a least fixed point: \begin{theorem}[Class Inductive Definition Theorem, $\mathsf{CZF}^-$] Let $\Phi$ be an inductive definition. Then there is the smallest $\Phi$-closed class $I(\Phi)$. \end{theorem} The following lemma is the essential tool for the proof of the Class Inductive Definition Theorem. See Lemma 12.1.2 of \cite{AczelRathjen2010} for its proof: \begin{lemma}[$\mathsf{CZF}^-$]\label{Lemma:ItreationClass} Every inductive definition $\Phi$ has a corresponding \emph{iteration class} $J$, which satisfies \mbox{$J^a=\Gamma_\Phi\left(\bigcup_{x\in a}J^x\right)$} for all $a$, where $J^a=\{x\mid \langle a,x\rangle\in J\}$. \end{lemma} \subsection{Constructive \texorpdfstring{$L$}{L}} In this subsection, we will define the constructible universe $L$ and discuss its properties over $\mathsf{IKP}$. Constructing $L$ in a constructive manner was first studied by Lubarsky \cite{Lubarsky1993L}.
Lubarsky developed the properties of $L$ over $\mathsf{IZF}$, and showed that $\mathsf{IZF}$ proves $L$ satisfies $\mathsf{IZF}$ plus $V=L$. Crosilla \cite{CrosillaPhD2000} showed that the construction of $L$ carries over to $\mathsf{IKP}$\footnote{Crosilla uses $\mathsf{CZF^r}$ to denote what we called $\mathsf{IKP}$.}. There are at least two ways of defining $L$: the first is using definability over a set model, which Lubarsky \cite{Lubarsky1993L} and Crosilla \cite{CrosillaPhD2000} had taken. Another one is using \emph{fundamental operations}, also called G\"odel operations in the classical context, which was taken by the second author in \cite{MatthwesPhD}. We will follow the second method. \begin{definition}[Fundamental operations] Define \begin{itemize} \item $1^{st}(x)=a$ iff $\exists u\in x\exists b\in u(x=\langle a,b\rangle)$, \item $2^{nd}(x)=b$ iff $\exists u\in x\exists a\in u(x=\langle a,b\rangle)$, \item $y^"\{z\} := \{u\mid \langle z,u\rangle \in y\}$. \end{itemize} Then we define the fundamental operations as follows: \begin{itemize} \item $\mathcal{F}_p(x,y):=\{x,y\}$, \item $\mathcal{F}_\cap(x,y):=x\cap \bigcap y$, \item $\mathcal{F}_\cup(x):=\bigcup x$, \item $\mathcal{F}_\setminus(x,y) : =x\setminus y$, \item $\mathcal{F}_\times(x,y) :=x\times y$, \item $\mathcal{F}_\to(x, y) = x \cap \{ z \mid \text{$y$ is an ordered pair and } (z \in 1^{st}(y) \rightarrow z \in 2^{nd}(y) ) \}$, \item $\mathcal{F}_\forall(x,y):=\{x^"\{z\}\mid z\in y\}$, \item $\mathcal{F}_d(x,y):=\operatorname{dom} x$, \item $\mathcal{F}_r(x,y):=\operatorname{ran} x$, \item $\mathcal{F}_{123}(x,y):=\{\langle u,v,w\rangle\mid \langle u,v\rangle\in x\land w\in y\}$, \item $\mathcal{F}_{132}(x,y):=\{\langle u,w,v\rangle\mid \langle u,v\rangle\in x\land w\in y\}$, \item $\mathcal{F}_=(x,y):=\{\langle v,u\rangle\in y\times x\mid u=v\}$, \item $\mathcal{F}_\in(x,y):=\{\langle v,u\rangle\in y\times x\mid u\in v\}$. \end{itemize} For simplicity, we shall let $\mathcal{I}$ be the set of all indices $i$ of $\mathcal{F}_i$ presented in the above definition. \end{definition} The following lemma says that we can represent every $\Delta_0$-formula in terms of fundamental operations: \begin{lemma}[\cite{MatthwesPhD}, Lemma 5.2.4, $\mathsf{IKP}$] \pushQED{\qed} Let $\phi(x_1,\cdots, x_n)$ be a bounded formula whose free variables are all expressed. Then there is a term $\mathcal{F}_\phi$ built up from fundamental operations such that \begin{equation*} \mathsf{IKP}\vdash \mathcal{F}_\phi(a_1,\cdots,a_n) = \{\langle x_n,\cdots,x_1\rangle\in a_n\times\cdots\times a_1\mid\phi(x_1,\cdots,x_n)\}. \qedhere \end{equation*} \end{lemma} \begin{definition} For a set $a$, define \begin{itemize} \item $\mathcal{E}(a) := a\cup \{\mathcal{F}_i(\vec{x})\mid \vec{x}\in a\land i\in\mathcal{I}\}$, \item $\mathcal{D}(a):=\mathcal{E}(a\cup\{a\})$, and \item $\operatorname{Def}(a) := \bigcup_{n\in\omega} \mathcal{D}^n(a)$. \end{itemize} \end{definition} \begin{definition} For an ordinal $\alpha$, define $L_\alpha :=\bigcup_{\beta\in\alpha}\operatorname{Def}(L_\beta)$ and $L:=\bigcup_{\alpha\in\mathrm{Ord}} L_\alpha$.
\end{definition} Then we have the following properties of the constructible hierarchy: \begin{proposition}[\cite{MatthwesPhD}, Proposition 5.3.12, $\mathsf{IKP}$] \label{Proposition:propertiesofL} \pushQED{\qed} For all ordinals $\alpha,\beta$, \begin{enumerate} \item If $\beta\in\alpha$, then $L_\beta\subseteq L_\alpha$, \item $L_\alpha\in L_{\alpha+1}$, \item $L_\alpha$ is transitive, and \item\label{item:LalphamodelsboundedSep} $L_\alpha$ is a model of Bounded Separation. \qedhere \end{enumerate} \end{proposition} Moreover we can see that $\mathsf{IKP}$ proves $L$ is a model of $\mathsf{IKP}$: \begin{theorem}[\cite{MatthwesPhD}, Theorem 5.3.6 and 5.3.7, $\mathsf{IKP}$] \pushQED{\qed} $\mathsf{IKP}^L$ holds. That is, if $\sigma$ is a theorem of $\mathsf{IKP}$, then $\mathsf{IKP}$ proves $\sigma^L$. Furthermore, $L$ thinks $V=L$ holds. \qedhere \end{theorem} We will not examine its proof in detail, but it is still worthwhile to mention relevant notions that are necessary for the proof. One of these is \emph{hereditary addition}, which was first formulated by Lubarsky \cite{Lubarsky1993L}. This is necessary because $\alpha\in\beta$ does not entail $\alpha+1\in\beta+1$ constructively. \begin{definition} \label{Definition:HereditaryAddition} For ordinals $\alpha$ and $\gamma$, define $\alpha+_H\gamma$ by recursion on $\alpha$: \begin{equation*} \alpha +_H \gamma := \left(\bigcup \{\beta +_H\gamma \mid\beta\in\alpha\}\cup\{\alpha\}\right) + \gamma. \end{equation*} \end{definition} Another relevant notion is \emph{augmented ordinals}. Lubarsky \cite{Lubarsky1993L} introduced this notion to develop properties of the constructible hierarchy over $\mathsf{IZF}$. Augmented ordinals are not needed to verify that $L$ is a model of $\mathsf{IKP}$, but are used to prove the Axiom of Constructibility, and we will use them when working with elementary embeddings. \begin{definition} \label{Definition:AugmentedOrdinal} Let $\alpha$ be an ordinal. Then $\alpha$ \emph{augmented}, $\alpha^\#$, is defined recursively on $\alpha$ as \begin{equation*} \alpha^\# := \bigcup\{\beta^\#\mid\beta\in\alpha\}\cup (\omega+1). \end{equation*} \end{definition} \begin{remark} In \cite{MatthwesPhD}, the second author distinguished Strong Infinity and Infinity, and take a more cautious way to define $L$: first, one defines a subsidiary hierarchy $\mathbb{L}_\alpha$ by $\mathbb{L}_\alpha:=\bigcup_{\beta\in\alpha} \mathcal{D}(\mathbb{L}_\beta)$. Then define $\mathbb{L}:=\bigcup_{\alpha\in\mathrm{Ord}}\mathbb{L}_\alpha$. Finally, it is shown that if Strong Infinity holds, then $\mathbb{L}$ satisfies $\mathsf{IKP}$ and $\mathbb{L}=L$. The reason for taking this method is that the equivalence of Infinity and Strong Infinity over $\mathsf{IKP}$ without Infinity is quite non-trivial. This way of defining $L$ also has the benefit that it works even in the absence of any form of Infinity. \end{remark} \begin{remark} Unlike either $\mathsf{IKP}$ or $\mathsf{IZF}$, $\mathsf{CZF}$ does not prove that $L$ satisfies $\mathsf{CZF}$, in particular $\mathsf{CZF}$ cannot prove that Exponentiation holds in $L$. Crosilla \cite{CrosillaPhD2000} showed that $\mathsf{CZF}$ proves $L$ validates $\mathsf{IKP}$ with full Collection. Full details of the above construction using fundamental operations, alongside a full investigation of which axioms of $\mathsf{CZF}$ hold in $L$ was undertaken by the second author and Michael Rathjen in \cite{MatthewsRathjen22}. 
\end{remark} \subsection{Second-order set theories} We need a second-order formulation of constructive set theories in order to define large set axioms corresponding to large cardinals beyond choice. We will formulate constructive analogues of G\"odel-Bernays set theory $\mathsf{GB}$ whose first-order counterparts are $\mathsf{CZF}$ and $\mathsf{IZF}$, respectively. We will use $\forall^0$ and $\exists^0$ for quantifications over sets, and $\forall^1$ and $\exists^1$ for quantifications over classes. We omit the superscript if the context is clear. Following standard conventions, we will use uppercase letters for classes and lowercase letters for sets, unless specified otherwise. The reader may consult \cite{WilliamsPhD} for classical second-order set theory. \begin{definition}\label{Definition:CGB} \emph{Constructive G\"odel-Bernays set theory} ($\mathsf{CGB}$) is defined as follows: the language of $\mathsf{CGB}$ is two-sorted, that is, $\mathsf{CGB}$ has sets and classes as its objects. $\mathsf{CGB}$ comprises the following axioms: \begin{itemize} \item Axioms of $\mathsf{CZF}$ for sets. \item Every set is a class, and every element of a class is a set. \item Class Extensionality: two classes are equal if they have the same members. Formally, \begin{equation*} \forall^1 X,Y [X=Y\leftrightarrow \forall^0x (x\in X \leftrightarrow x\in Y)]. \end{equation*} \item Elementary Comprehension: if $\phi(x,p,C)$ is a first-order formula with a class parameter $C$, then there is a class $A$ such that $A=\{x\mid \phi(x,p,C)\}$. Formally, \begin{equation*} \forall^0p\forall^1C\exists^1 A \forall^0x [x\in A\leftrightarrow \phi(x,p,C)]. \end{equation*} \item Class Set Induction: if $A$ is a class, and if a set $x$ belongs to $A$ whenever every element of $x$ belongs to $A$, then $A$ is the class of all sets. Formally, \begin{equation*} \forall^1A \big[ [\forall^0 x(\forall^0y\in x (y\in A)\to x\in A)]\to \forall^0x (x\in A) \big]. \end{equation*} \item Class Strong Collection: if $R$ is a class multi-valued function from a set $a$ to the class of all sets, then there is a set $b$ which is an `image' of $a$ under $R$. Formally, \begin{equation*} \forall^1 R\forall^0a[R \colon a\rightrightarrows V \to \exists^0b(R \colon a\leftrightarrows b)]. \end{equation*} \end{itemize} \end{definition} \begin{remark} We do not need to add the Class Subset Collection axiom \begin{equation*} \forall^1 R\forall^0a\forall^0b[[R\subseteq V\times V\times V\land \forall^0u (R\restricts u \colon a\rightrightarrows b)]\to \exists^0c\forall^0u\exists^0d\in c(R\restricts u \colon a \leftrightarrows d) ] \footnotemark \end{equation*} \footnotetext{Here $R\restricts u$ is the class $\{\langle x,y\rangle \mid \langle u,x,y\rangle \in R\}$.} as an axiom of $\mathsf{CGB}$ because it is derivable from the current formulation of $\mathsf{CGB}$: we can prove it from first-order Fullness and Class Strong Collection by mimicking the proof of Subset Collection from Fullness and Strong Collection. The reader should also note that, unlike classical $\mathsf{GB}$, there is no additional Separation axiom for sets: that is, we do not assume $a\cap A=\{x\in a \mid x\in A\}$ is a set for a given class $A$. In fact, the assumption that $a\cap A$ is always a set implies Full Separation due to Elementary Comprehension.
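Indeed, given any first-order formula $\phi(x,p)$ and any set $a$, Elementary Comprehension provides the class $A=\{x\mid \phi(x,p)\}$, and if $a\cap A$ were always a set, then \begin{equation*} a\cap A=\{x\in a\mid \phi(x,p)\} \end{equation*} would be a set, which is precisely the instance of Full Separation for $\phi$.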
\end{remark} Thus we introduce a piece of terminology for those classes $A$ such that $A\cap a$ is always a set: \begin{definition} A class $A$ is \emph{amenable} if $A\cap a$ is a set for any set $a$. \end{definition} The following lemma shows every class function is amenable: \begin{lemma}[$\mathsf{CGB}$]\label{Lemma:FunctionalClass-amenable} Let $F$ be a class function, then $F$ is amenable. \end{lemma} \begin{proof} It suffices to show that if $F$ is a class function then $F\restricts a$ is a set for each $a\in V$. Consider the class function $\mathcal{A}(F\restricts a)\colon a\to a\times V$, where $\mathcal{A}$ is the operation we defined in \autoref{Lemma:PrelimAdjuectmentFtn}. By Class Strong Collection, we can find a set $b$ such that $\mathcal{A}(F\restricts a) \colon a\leftrightarrows b$. We claim that $b=F\restricts a$. For $F\restricts a\subseteq b$, if $x\in a$ then we can find $y\in b$ such that $\langle x,y\rangle \in \mathcal{A}(F\restricts a)$. Hence $y=\langle x,z\rangle \in F\restricts a$ for some $z$. By functionality of $F$, we have $y=\langle x,F(x)\rangle$. For the remaining inclusion, if $y\in b$, then there is $x\in a$ such that $\langle x,y\rangle \in\mathcal{A}(F\restricts a)$. Now we can prove that $y=\langle x,F(x)\rangle$ since $F$ is a class function. \end{proof} Next, we define the second-order variant of $\mathsf{IZF}$, which we denote by $\mathsf{IGB}$. $\mathsf{IGB}$ allows Separation for arbitrary classes, as classical $\mathsf{GB}$ does. \begin{definition} \label{Definition:IGB} \emph{Intuitionistic G\"odel-Bernays set theory} ($\mathsf{IGB}$) is obtained by adding the following axioms to $\mathsf{CGB}$: \begin{enumerate} \item Axioms of $\mathsf{IZF}$ for sets. \item Class Separation: every class is amenable. \end{enumerate} \end{definition} We know that classical $\mathsf{GB}$ is a conservative extension of $\mathsf{ZF}$, and we expect the analogous results to hold for constructive set theories. The following proposition shows that they do; however, its proof requires a small amount of proof theory. We borrow ideas of the proof from \cite{2010MOanswer}. \begin{proposition}\label{Proposition:Secondordersettheory-conservativity} $\mathsf{CGB}$ is conservative over $\mathsf{CZF}$. $\mathsf{IGB}$ is conservative over $\mathsf{IZF}$. \end{proposition} \begin{proof} We only provide the proof for the conservativity of $\mathsf{CGB}$ over $\mathsf{CZF}$ since the same argument applies to the conservativity of $\mathsf{IGB}$ over $\mathsf{IZF}$. We will rely on the cut-elimination theorem of intuitionistic predicate logic. The reader who might be unfamiliar with this can consult \cite{NegrivonPlato2008} or \cite{Arai2020}. Assume that $\sigma$ is a first-order sentence that is deducible from $\mathsf{CGB}$. Then we have a cut-free derivation of $\sigma$ from a finite set $\Gamma$ of axioms of $\mathsf{CGB}$. It is known that every cut-free derivation satisfies the \emph{subformula property}, that is, every formula appearing in the deduction is a subformula of $\sigma$ or of some formula in $\Gamma$. Thus we have a finite set $\{X_0,\cdots, X_n\}$ of class variables that appear in the deduction. Now we divide into the following cases: \begin{enumerate} \item If $X_i$ appears in an instance of Elementary Comprehension in the deduction, then the deduction contains $x\in X_i\leftrightarrow \phi(x,p,C)$ for some class-quantifier-free $\phi$. Now replace every $x\in X_i$ with $\phi(x,p,C)$, and $X_i=Y$ with $\forall x (\phi(x,p,C)\leftrightarrow x\in Y)$, in the deduction.
\item Otherwise, replace $x\in X_i$ with $x=x$, and $X_i=Y$ with $\forall x (x=x\leftrightarrow x\in Y)$. \end{enumerate} Then we can see that the resulting deduction is a first-order proof of $\sigma$. Hence $\mathsf{CGB}\vdash \sigma$. \end{proof} \subsection{Second-order set theories with infinite connectives} We will use infinite conjunctions to circumvent technical issues about defining elementary embeddings. Hence we define an appropriate infinitary logic and the corresponding second-order set theories. The following definition appears in \cite{Negri2021}: \begin{definition}[$\mathsf{G3i_\omega}$, \cite{Negri2021}] The intuitionistic cut-free first-order sequent calculus $\mathsf{G3i_\omega}$ is defined by the following rules: initial sequents are of the form $P,\Gamma\implies \Delta,P$ and its deduction rules are: \renewcommand*{\arraystretch}{2.7} \setlength{\tabcolsep}{12pt} \begin{longtable}{*2{>{\(\displaystyle} c <{\)}}} \frac{A,B,\Gamma\implies \Delta}{A\land B,\Gamma\implies\Delta} \ L\land & \frac{\Gamma\implies \Delta, A \qquad \Gamma\implies\Delta, B}{\Gamma\implies \Delta,A\land B} \ R\land \\ \frac{A_k,\bigwedge_{n\in\omega}A_n,\Gamma\implies \Delta}{\bigwedge_{n\in\omega}A_n,\Gamma\implies\Delta} \ \sideset{}{_k}{L\bigwedge} & \frac{\{\Gamma\implies \Delta, A_n\mid n>0\}}{\Gamma\implies \Delta,\bigwedge_{n\in\omega}A_n} \ R\bigwedge \\ \frac{A,\Gamma\implies \Delta\qquad B,\Gamma\implies\Delta}{A\lor B,\Gamma\implies\Delta} \ L\lor & \frac{\Gamma\implies \Delta,A,B}{\Gamma\implies \Delta,A\lor B} \ R\lor \\ \frac{\{\Gamma,A_n\implies \Delta\mid n<\omega\}}{\Gamma,\bigvee_{n\in\omega}A_n\implies \Delta} \ L\bigvee & \frac{\Gamma\implies\Delta,\bigvee_{n\in\omega}A_n,A_k}{\Gamma\implies \Delta, \bigvee_{n\in\omega}A_n} \ \sideset{}{_k}{R\bigvee} \\ \frac{\Gamma\implies \Delta,A \qquad B,\Gamma\implies\Delta}{A\to B,\Gamma\implies\Delta}\ L\to & \frac{A,\Gamma\implies \Delta, B}{\Gamma\implies \Delta,A\to B}\ R\to \\ \frac{}{\bot,\Gamma\implies\Delta}\ L\bot &\\ \frac{A[t/x], \Gamma \implies \Delta}{\exists x A, \Gamma \implies \Delta}\ L\exists & \frac{\Gamma\implies \Delta, \exists x A, A[t/x]}{\Gamma\implies \Delta, \exists x A}\ R\exists \\ \frac{\forall x A, A[t/x], \Gamma\implies \Delta}{\forall x A, \Gamma \implies \Delta}\ L\forall & \frac{\Gamma\implies \Delta, A[y/x]}{\Gamma\implies\Delta,\forall x A}\ R\forall\ \text{($y$ fresh)} \end{longtable} \end{definition} The difference between finitary logic and $\mathsf{G3i}_\omega$ is that the latter allows for infinite conjunctions and disjunctions over countable sets of formulas. This will only be needed in order to formalise the concepts of Super Reinhardt sets and the total Reinhardtness of $V$ in \autoref{Subsection: Largeset-BeyondChoice}. Otherwise, throughout this paper, we will stick to finitary logic. It is well-known that the usual classical and intuitionistic predicate calculus enjoys the cut-elimination rule. This is the same for $\mathsf{G3i_\omega}$. We state the following proposition without proof. The reader might refer to \cite{Negri2021} for its proof. \begin{proposition}[Cut elimination for $\mathsf{G3i_\omega}$]\label{Proposition:CutEliminationG3iomega} $\mathsf{G3i_\omega}$ with the cut rule proves the same sequents as $\mathsf{G3i_\omega}$ itself (that is, without the cut rule.) 
\end{proposition} Finally, we are ready to define set theories over $\mathsf{G3i_\omega}$: \begin{definition} \label{Definition:CGBIGBinfty} $\mathsf{CGB}_\infty$ and $\mathsf{IGB}_\infty$ are obtained by replacing the underlying logic of $\mathsf{CGB}$ and $\mathsf{IGB}$ with $\mathsf{G3i_\omega}$, respectively. \end{definition} Note that the infinitary logic $\mathsf{G3i_\omega}$ plays no role in defining new sets from given sets: for example, we do not have Separation schemes for infinite conjunctions and disjunctions. The only reason we are using $\mathsf{G3i_\omega}$ is to enhance the expressive power of set theories, just as we introduced classes into first-order set theories. The following proposition shows that infinite connectives do not add new theorems to $\mathsf{CGB}$ or $\mathsf{IGB}$: \begin{proposition} $\mathsf{CGB}_\infty$ is conservative over $\mathsf{CGB}$ for formulas with no infinite connectives. The analogous result also holds between $\mathsf{IGB}_\infty$ and $\mathsf{IGB}$. \end{proposition} \begin{proof} We only prove the first claim since the proof for the second claim is identical. We can see from \autoref{Proposition:CutEliminationG3iomega} that the \emph{subformula property for $\mathsf{G3i_\omega}$} holds: that is, every formula in a derivation of $\Gamma\Rightarrow\Delta$ is a subformula of a formula in $\Gamma$ or $\Delta$. Now assume that we have a derivation $\Gamma\implies\sigma$ from a finite set $\Gamma$ of axioms of $\mathsf{CGB}$, where $\sigma$ is a formula with no infinite connectives. By the subformula property, the derivation contains no infinite connectives. This means that the entire derivation can be done without rules for infinite connectives. Hence $\mathsf{CGB}$ proves $\sigma$. \end{proof} \section{Small Large set axioms}\label{Section:SmallLargeSet} In this and the next section, we will discuss large set axioms, which are analogues of large cardinal axioms over constructive set theories. Since ordinals over $\mathsf{CZF}$ could be badly behaved (for example, they need not be well-ordered), we focus on the structural properties of given sets to obtain higher infinities over $\mathsf{CZF}$. This approach is not unusual even in the classical context since many large cardinals over $\mathsf{ZF}$ are characterized and defined by structural properties of $V_\kappa$ or $H_\kappa$. (One example would be the definition of indescribable cardinals.) We also compare large cardinal axioms over well-known theories like $\mathsf{ZF}$ with their large set counterparts. The first large set notion over $\mathsf{CZF}$ is that of a \emph{regular set}. Regular sets first appeared in Aczel's paper \cite{Aczel1986} on inductive definitions over $\mathsf{CZF}$. As we will see later, regular sets can `internalize' most inductive constructions, which turns out to be useful in many practical cases. We will follow the definitions given by \cite{ZieglerPhD}, and will briefly discuss differences in the terminology between different references. \vbox{ \begin{definition} \label{Definition:RegularityandInaccessibility} A transitive set $A$ is \begin{enumerate} \item \emph{Regular} if it satisfies second-order Strong Collection: \begin{equation*} \forall a\in A\forall R [R \colon a\rightrightarrows A\to \exists b\in A (R \colon a\leftrightarrows b)].
\end{equation*} \item \emph{Weakly regular} if it satisfies second-order Collection, \item \emph{Functionally regular} if it satisfies second-order Replacement, \item \emph{$\bigcup$-regular} if it is regular and $\bigcup a\in A$ for all $a\in A$, \item \emph{Strongly regular} if it is $\bigcup$-regular and $\prescript{a}{}b\in A$ for all $a, b \in A$, and \item \emph{Inaccessible} if $(A,\in)$ is a model of second-order $\mathsf{CZF}$. \end{enumerate} The \emph{Regular Extension Axiom} $\mathsf{REA}$ asserts that every set is contained in some regular set. The \emph{Inaccessible Extension Axiom} $\mathsf{IEA}$ asserts that every set is contained in an inaccessible set. \end{definition} } Note that there is an alternative characterization of inaccessible sets: \begin{lemma}[\cite{AczelRathjen2001}, Corollary 10.27, $\mathsf{CZF}$] \pushQED{\qed} A regular set $A$ is inaccessible if and only if $A$ satisfies the following conditions: \begin{enumerate} \item $\omega\in A$, \item $\bigcup a\in A$ if $a\in A$, \item $\bigcap a\in A$ if $a\in A$ and $a$ is inhabited, and \item for $a,b\in A$ we can find $c\in A$ such that $c$ is full in $\operatorname{mv}(a,b)$. \qedhere \end{enumerate} \end{lemma} \begin{corollary}\label{Corollary:Inaccessiblesets-ExpClosed} Inaccessible sets are strongly regular. In particular, if $a,b\in A$ and $A$ is inaccessible, then ${}^{a}b\in A$. \end{corollary} \begin{proof} Let $A$ be an inaccessible set and $a,b\in A$. If $r \colon a\to b$ is a function and $c\in A$ is full in $\operatorname{mv}(a,b)$, then we can find a multi-valued function $s\in c$ with domain $a$ such that $s\subseteq r$. Hence $r=s\in c\in A$, which proves $r\in A$, and thus we have ${{}^a}b\subseteq A$. To see ${{}^a}b\in A$, since $A$ satisfies Fullness, we can find $c\in A$ which is full in $\operatorname{mv}(a,b)\cap A$. Then ${{}^a}b\subseteq \operatorname{mv}(a,b)\cap A$, and so ${{}^a}b=\{f\in c\mid f\colon a\to b\}\in A$ by $\Delta_0$-Separation over $A$. \end{proof} There is no need for the notion of `pair-closed regular sets' since every regular set is closed under pairing if it contains $2$: \begin{lemma}[\cite{AczelRathjen2010} Lemma 11.1.5, $\mathsf{CZF}^-$]\label{Lemma:Regular2-Pairing}\pushQED{\qed} If $A$ is regular and $2\in A$, then $\langle a,b\rangle \in A$ for all $a,b\in A$. \qedhere \end{lemma} $\mathsf{REA}$ has various consequences. For example, $\mathsf{CZF}^-+\mathsf{REA}$ proves Subset Collection. Moreover, it also proves that every \emph{bounded} inductive definition $\Phi$ has a set-sized fixed point $I(\Phi)$. (See \cite{AczelRathjen2001} or \cite{AczelRathjen2010} for details.) The notion of a regular set is quite restrictive in the sense that a regular set need not satisfy any Separation axioms, not even for $\Delta_0$-formulas. Thus we have no way to carry out internal constructions over a regular set. The following notion is a strengthening of regularity that resolves the issue of internal constructions: \begin{definition}\label{Definition: BCST-regular} A regular set $A$ is \emph{BCST-regular} if $A\models \mathsf{BCST}$. Equivalently, $A$ is a regular set satisfying Union, Pairing, Empty set, and Binary Intersection. \end{definition} We do not know if $\mathsf{CZF}$ proves every regular set is BCST-regular, although Lubarsky and Rathjen \cite{LubarskyRathjen2003} proved that the set of all hereditarily countable sets in the Feferman-Levy model is functionally regular but not $\bigcup$-regular.
It is not even known that the existence of a regular set implies that of a BCST-regular set. However, every inaccessible set is BCST-regular, and every BCST-regular set we work with in this paper will also be inaccessible. What are regular sets and inaccessible sets in the classical context? The following result illustrates what these sets look like under some well-known classical set theories: \begin{proposition} \label{Prop:ClassicalSmallLargeSets} \phantom{a} \begin{enumerate} \item \normalfont{($\mathsf{ZF}^-$)} Every $\bigcup$-regular set containing $2$ is a transitive model of second-order $\mathsf{ZF}^-$, $\mathsf{ZF}^-_2$. \item \normalfont{($\mathsf{ZFC}^-$)} Every $\bigcup$-regular set containing $2$ is of the form $H_\kappa$ for some regular cardinal $\kappa$. \item \normalfont{($\mathsf{ZF}^-$)} Every inaccessible set is of the form $V_\kappa$ for some inaccessible cardinal $\kappa$.\footnote{Without choice, the various definitions of inaccessibility are no longer equivalent. Therefore, following \cite{HayutKaragila2020}, we define an ordinal $\kappa$ to be inaccessible if $V_\kappa$ is a model of second-order $\mathsf{ZF}$. Equivalently, $\kappa$ is inaccessible if $\kappa$ is a regular cardinal, and for every $\alpha<\kappa$ there is no surjection from $V_\alpha$ to $\kappa$.} \end{enumerate} \end{proposition} \begin{proof} \phantom{a} \begin{enumerate} \item Let $A$ be a $\bigcup$-regular set containing $2$. We know that $A$ satisfies Extensionality, Set Induction, Union, and second-order Collection. Hence it remains to show that second-order Separation holds. Let $X\subseteq A$, $a\in A$ and suppose that $X\cap a$ is inhabited. Fix $c$ in this intersection. Now consider the function $f \colon a\to A$ defined by \begin{equation*} f(x)=\begin{cases} x&\text{if }x\in X,\text{ and}\\ c&\text{otherwise}. \end{cases} \end{equation*} By second-order Strong Collection over $A$, we have $b\in A$ such that $f \colon a\leftrightarrows b$. It is easy to see that $b=X\cap a$ holds. \item Let $A$ be a $\bigcup$-regular set containing $2$. Let $\kappa$ be the least ordinal that is not a member of $A$. Then $\kappa$ must be a regular cardinal: if not, there is $\alpha<\kappa$ and a cofinal map $f \colon \alpha\to \kappa$. By transitivity of $A$ and the definition of $\kappa$, we have $\alpha\in A$, so $\kappa\in A$ by second-order Replacement and Union, a contradiction. We can see that $\mathsf{ZFC}^-$ proves $H_\kappa=\{x \mid |\operatorname{TC}(x)|<\kappa\}$ is a class model of $\mathsf{ZFC}^-$, and $A$ satisfies the Well-ordering Principle. We can also show that $A\subseteq H_\kappa$ holds: if not, there is a set $x\in A\setminus H_\kappa$. Since $A$ is closed under transitive closures, $A$ contains a set whose cardinality is at least $\kappa$. Now derive a contradiction from second-order Replacement. We know that $A\cap\mathrm{Ord}= H_\kappa\cap \mathrm{Ord}=\kappa$. By second-order Separation over $A$, $\mathcal{P}(\mathrm{Ord})\cap A=\mathcal{P}(\mathrm{Ord})\cap H_\kappa$. Hence we have $A=H_\kappa$: for each $x\in H_\kappa$, we can find $\theta<\kappa$, $R\subseteq \theta\times\theta$, and $X\subseteq\theta$ such that $(\operatorname{TC}(x),\in,x)\cong (\theta,R,X)$. (Here we treat $x$ as a unary relation.) Then $(\theta,R,X)\in A$, so $x\in A$ by the Mostowski Collapsing Lemma.
\item If $A$ is inaccessible, then $A$ is closed under the true power sets of its elements since second-order Subset Collection implies that $A$ is closed under exponentiation $a,b\mapsto {}^ab$. Hence $A$ must be of the form $V_\kappa$ for some $\kappa$. Moreover, $\kappa$ is inaccessible because $V_\kappa=A\models \mathsf{ZF_2}$. \qedhere \end{enumerate} \end{proof} It is easy to see from results in \cite{LubarskyRathjen2003} that $\mathsf{ZF}$ proves every rank of a regular set is a regular cardinal. However, the complete characterization of regular sets in a classical context is open: \begin{question} Is there a characterization of regular sets over $\mathsf{ZFC}$? How about $\bigcup$-regular sets over $\mathsf{ZF}^-$? \end{question} While we can see that every inaccessible set over $\mathsf{ZF}$ is closed under power sets, there is no reason to believe that inaccessible sets over constructive set theories are closed under power sets, even if the Axiom of Power Set holds. Hence we introduce the following notion: \begin{definition}\label{Definition:PowerInaccessible} An inaccessible set $K$ is \emph{power inaccessible} if, for every $a\in K$, every subset of $a$ belongs to $K$. $\mathsf{pIEA}$ is the assertion that every set is an element of some power inaccessible set. \end{definition} Notice that the existence of a power inaccessible set implies the Axiom of Power Set over $\mathsf{CZF}$. This is because if $K$ is power inaccessible, then $\mathcal{P}(1)\subseteq K$, and from this one can see that every set has a power set since there is a bijection between $\mathcal{P}(a)$ and ${^a}\mathcal{P}(1)$ through characteristic functions; see Proposition 10.1.1 of \cite{AczelRathjen2010} for further details. We will mention power inaccessible sets only in the context of $\mathsf{IZF}$. Also, the reader should note that \cite{MatthwesPhD} and \cite{FriedmanScedrov1984} use the term `inaccessible set' to denote power inaccessible sets. The reader is reminded that the meaning of an inaccessible set varies across the references. On the one hand, \cite{ZieglerPhD} and \cite{AczelRathjen2010} follow our definition of inaccessibility. On the other hand, other references like \cite{Rathjen1998}, \cite{RathjenGrifforPalmgren1998}, \cite{AczelRathjen2001} and \cite{Rathjen2017} use the following definition, which we will call \emph{REA-inaccessibility}: \begin{definition}\label{Definition:REA-inaccessible} A set $I$ is \emph{REA-inaccessible} if $I$ is inaccessible and $I\models\mathsf{REA}$. \end{definition} The disagreement in the terminology may come from the difference between their set-theoretic and proof-theoretic properties. On the one hand, \autoref{Prop:ClassicalSmallLargeSets} shows that inaccessible sets over $\mathsf{ZF}^-$ are exactly of the form $V_\kappa$ for some inaccessible cardinal $\kappa$. However, an REA-inaccessible set over $\mathsf{ZF}^-$ is not only of the form $V_\kappa$ for some inaccessible $\kappa$, but also satisfies $\mathsf{REA}$. Gitik \cite{Gitik1980} proved that $\mathsf{ZF}$ with no regular cardinal other than $\omega$ is consistent relative to a proper class of strongly compact cardinals, and Gitik's construction can be carried out over $V_\kappa$ for an inaccessible cardinal $\kappa$ which is a limit of strongly compact cardinals while preserving the inaccessibility of $\kappa$. Thus $\mathsf{ZF}$ with the existence of an inaccessible set does not imply there is an REA-inaccessible set.
On the other hand, \cite{Rathjen2003AFA} showed that the theory $\mathsf{CZF+IEA}$ is equiconsistent with $\mathsf{CZF+REA}$.\footnote{\cite{Rathjen2003AFA} defined inaccessible sets as transitive models of $\mathsf{CZF}^-$ and second-order Strong Collection. These two definitions are equivalent since $\mathsf{REA}$ implies Subset Collection.} However, $\mathsf{CZF} + \forall x\exists y(x\in y\land \text{$y$ is REA-inaccessible})$ has a stronger consistency strength than that of $\mathsf{CZF+REA}$. Before finishing this section, let us remark that the consistency strength of small large sets over $\mathsf{CZF}$ is quite weak compared to their counterparts over $\mathsf{ZF}$. The reader should refer to \cite{Rathjen1993}, \cite{Rathjen1998} or \cite{Rathjen1999Realm} for further accounts of the following results: \begin{theorem}[Rathjen]\pushQED{\qed} \label{Proposition:Prooftheoreticstrength-smalllargesets} Each of the following pairs of theories has the same consistency strength and proves the same $\Pi^0_2$-sentences. \begin{enumerate} \item $\mathsf{CZF+REA}$ and $\mathsf{KPi}$, the theory $\mathsf{KP}$ with a proper class of admissible ordinals. \item $\mathsf{CZF}+\forall x\exists y (x\in y\land\text{$y$ is REA-inaccessible})$ and $\mathsf{KPI}$, the theory $\mathsf{KP}$ with a proper class of recursively inaccessible ordinals. \qedhere \popQED \end{enumerate} \end{theorem} \section{Large Large set axioms}\label{Section:LargeLargeSet} There is no reason to refrain from defining larger large sets. Hence we define stronger large set axioms. We have two ways of defining large cardinals above measurable cardinals: elementary embeddings and ultrafilters. On a fundamental level, ultrafilters make use of the law of excluded middle because either a set or its complement must be in the ultrafilter. This means that it is possible to prove that a set is in the ultrafilter by showing that a different set is not in the ultrafilter, an inherently nonconstructive principle. On a more structural note, the ultrafilter characterization does not immediately entail the existence of smaller large set notions. For example, it is well-known that it is possible to have a model of $\mathsf{ZF}$ in which there is a non-principal, countably complete ultrafilter over $\omega_1$. To obtain the consistency of inaccessible cardinals from this, one is then obliged to work in an inner model, which adds additional complexity to the arguments. Unlike ultrafilters, the embedding characterization will give direct, positive results on large sets even when working in a weak constructive setting. For example, we will see in \autoref{Corollary:CriticalSetsModelsIZFIEA} that, over $\mathsf{IKP}$, if $K$ is a transitive set (which satisfies some minor additional assumptions) which is a `critical point' of an elementary embedding $j \colon V\to M$, then $K$ is a model of $\mathsf{IZF+pIEA}$ and more. Therefore, we will use elementary embeddings to access stronger large cardinals from the level of measurable cardinals. \subsection{Critical sets and Reinhardt sets} In this section, we will give the basic tools that we shall need to work with elementary embeddings. When working with some embedding $j \colon V \rightarrow M$ we will not in general assume that either $j$ or $M$ is definable in a first-order manner. This means that we cannot state Collection or Separation for formulas involving $j$ or $M$ in a purely first-order way. Therefore, to deal with them, we will need to expand our base theory to accommodate class parameters.
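For instance, already forming the restriction $j\restricts a=\{\langle x,j(x)\rangle\mid x\in a\}$ of an embedding to a set $a$ calls for an instance of Collection or Separation for a formula in which $j$ occurs; compare \autoref{Lemma:FunctionalClass-amenable}, where the analogous fact for class functions is derived from Class Strong Collection.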
This leads to the following convention, whose main purpose is to simplify later notation. \begin{convention}\label{Convention:T_j} Let $T$ be a `reasonable' set theory such as $\mathsf{CZF}$, $\mathsf{IKP}$ or $\mathsf{IZF}$ and let $A$ be a class predicate. The theory $T_{A}$ is the theory with the same axiom schemes as $T$ in the language expanded to include the predicate $A$. For example, $\mathsf{CZF}_A$ has the axiom schemes of $A$ plus $\Delta_0$-Separation, Set Induction, Strong Collection, and Subset Collection in the expanded language $\{ \in, A \}$. \end{convention} \begin{definition}\label{Definition:ElementaryEmbedding} We work over the language $\in$ extended by a unary functional symbol $j$ and a unary predicate symbol $M$. Let $T$ be a `reasonable' set theory such as $\mathsf{CZF}$, $\mathsf{IKP}$ or $\mathsf{IZF}$ and suppose that $V \models T$. We say that $j \colon V \rightarrow M$ is a \emph{(fully) elementary embedding} if it satisfies the following: \begin{enumerate} \item $j$ is a map from $V$ into $M$; $\forall x M(j(x))$, \item $M$ is transitive; $\forall x (M(x) \rightarrow \forall y \in x M(y))$, \item $M \models T$, \item\label{Item:Elementarity} (Elementarity) $\forall \vec{x} [\phi(\vec{x})\leftrightarrow \phi^M(j(\vec{x}))]$ for every $\in$-formula $\phi$, where $\phi^M$ is the relativization of $\phi$ over $M$. \end{enumerate} For a class of formulas $Q$ (usually $Q=\Delta_0$ or $\Sigma$), we call $j$ \emph{$Q$-elementary} if $j$ satisfies the above definition with \eqref{Item:Elementarity} restricted to $\phi$ a $Q$-formula. Finally, we say that $K$ is a \emph{critical point} of $j$ if it is a transitive set that satisfies $K \in j(K)$ and $j(x)=x$ for all $x\in K$. \end{definition} We begin with the weak theory of the Basic Theory of Elementary Embeddings, originally introduced by Corazza in \cite{Corazza2006}. This is the minimal theory required to state that $j \colon V \rightarrow V$ is an elementary embedding with a critical point but without assuming anything about the relevant Separation, Collection, or Set Induction schemes over the extended language. For our purposes, it will in fact be beneficial to weaken this further to $\mathsf{BTEE}_M$ which is the minimal theory stating that $j \colon V \rightarrow M$ is an elementary embedding with a critical point for some $M \subseteq V$. We shall see in \autoref{Section:StructuralLowerbdd} that $\mathsf{IKP}_j$ with a critical point implies the consistency of $\mathsf{IZF + BTEE}$ and in \autoref{Section:HeytingInterpretation} that the latter is equiconsistent with $\mathsf{ZF} + \mathsf{BTEE}$, which already has a quite high consistency strength. \begin{definition}\label{Definition:BTEE} We work over the language $\in$ extended by a unary functional symbol $j$ and a unary predicate symbol $M$. Let $\mathsf{BTEE}_M$ be the assumption that there exists some transitive class $M \subseteq V$ and elementary embedding $j\colon V\rightarrow M$ which has a critical point. We drop $M$ and simply use $\mathsf{BTEE}$ when $V=M$ holds, that is, $\forall x M(x)$. In addition, continuing our previous notation, given a class of formulas $Q$ we let $Q$-$\mathsf{BTEE}_M$ denote $\mathsf{BTEE}_M$ with elementarity replaced by $Q$-elementarity. \end{definition} \begin{remark} If $V \models T + \mathsf{BTEE}_M$ (where $T$ is a reasonable set theory) we will always assume that $V \models T_M$, that is we will assume that $M$ is allowed to appear in the axiom schemes of $T$. 
\end{remark} Classically, $\mathsf{BTEE}$ is a relatively weak principle which is implied by the large cardinal axiom $0^\#$, which is the statement that there is a non-trivial elementary embedding $j \colon L \rightarrow L$. The reason this is weak is because we do not have Separation in the extended language which means that we cannot produce sets of the form $j \restricts a$ for arbitrary sets $a$. \begin{convention} For $T$ a reasonable theory, we will slightly abuse the notation $T_{j, M}$ to extend it to be the theory $T$ plus a fully elementary embedding $j \colon V \rightarrow M$ which has a critical point and which satisfies the axiom schemes of $T$ in the language expanded to include predicates for $j$ and $M$. In particular, we will allow $j$ and $M$ to appear in the Separation Schemes of $T$ (for example, $\Delta_0$-Separation if $T = \mathsf{CZF}$), Set Induction and appropriate Collection axioms of $T$ (for example, $\Sigma$-Collection if $T=\mathsf{IKP}$ or Strong Collection and Subset Collection if $T=\mathsf{CZF}$). \end{convention} Over $\mathsf{IKP}_j$, being a critical point of an elementary embedding is sufficient to prove some of the basic properties one would want from the theory of elementary embeddings. For example, one can show that if there is a critical point then there is (at least one) critical point which is an ordinal (see Chapter 9 of \cite{ZieglerPhD} or Chapter 7 of \cite{MatthwesPhD} for details) or that there is a model of (first-order) $\mathsf{IZF}$. However, such an assumption is not necessarily the most natural counterpart to the classical notion of non-trivial elementary embeddings. For example, consider $\mathsf{ZF}C$ plus an elementary embedding $j \colon V \rightarrow M$ with critical point $\kappa$. Then $L_\kappa$ can easily be seen to be a critical point as defined above however it is the `wrong' set to work within this context because it is not a model of second-order $\mathsf{ZF}C$. In particular, one can show that $\mathcal{P}(\omega)^L$ is merely a countable set. Instead, what one wants to work with is a critical point which is an inaccessible set, and this is what we shall define next. Note that in weak theories there is no reason that having a critical point will imply the existence of such a set. For example, it is proven in Chapter 10 of \cite{MatthwesPhD} that it is possible to produce a model of $\mathsf{ZF}Cminus$ with a non-trivial elementary embedding, while also having that $\mathcal{P}(\omega)$ is a proper class, and in such a model there will be no set sized models of second-order $\mathsf{ZF}Cminus$. Now let us define critical sets and Reinhardt sets in terms of elementary embeddings: \begin{definition}[$T_{j, M}$]\label{Definition:CriticalReinhardt} Let $K$ be a critical point of an elementary embedding $j \colon V\to M$. We call $K$ a \emph{critical set} if $K$ is also inaccessible. If $M=V$ and $K$ is transitive and inaccessible, then we call $K$ a \emph{Reinhardt set}. \end{definition} We introduce one final definition which is intermediate between $\mathsf{BTEE}$ and Reinhardt sets; the \emph{Wholeness Axiom}. This was first formulated by Corazza \cite{Corazza2000} and later stratified by Hamkins \cite{Hamkins2001WA} in order to investigate what theory was necessary to produce the Kunen Inconsistency under $\mathsf{ZF}C$. \begin{definition}\label{Definition:WA} The \emph{Wholeness axiom} $\mathsf{WA}$ is the combination of $\mathsf{BTEE}$ and Separation\textsubscript{$j$}. 
$\mathsf{WA}_0$ is the combination of $\mathsf{BTEE}$ and $\Delta_0$-Separation\textsubscript{$j$}. \end{definition} We will use the term critical point and critical set simultaneously, so the reader should distinguish the difference between these two terms. (For example, a critical point need not be a critical set unless it is inaccessible.) Note that the definition of critical sets is different from that suggested by Hayut and Karagila \cite{HayutKaragila2020}: they defined a critical cardinal as a critical point of an elementary embedding $j \colon V_{\kappa+1}\to M$ for some transitive set $M$. This is done to ensure that the embedding $j$ is a set and therefore first-order definable. We will instead take the more natural, but not obviously first-order definable, definition where the domain is taken to be the universe which is what Schlutzenberg calls \emph{V-critical} in \cite{Schlutzenberg2020Extenders}. Finding and analyzing the $\mathsf{CZF}$-definition of a critical set that is classically equivalent to a critical set in the style of Hayut and Karagila would be good future work. Also, \cite{ZieglerPhD} uses the term `measurable sets' to denote critical sets, but we will avoid this term for the following reasons: it does not reflect that the definition is given by an elementary embedding, and it could be confusing with measurable sets in measure theory. We do not know if every elementary embedding $j \colon V\to M$ over $\mathsf{CZF}$ enjoys being cofinal. Surprisingly, the following lemma shows that $j$ becomes a cofinal map if $M=V$. Note that the following lemma heavily uses Subset Collection. See Theorem 9.37 of \cite{ZieglerPhD} for its proof. \begin{proposition}[Ziegler \cite{ZieglerPhD}, $\mathsf{CZF}_j$]\label{Proposition:CZFReinhardtEmbeddingCofinal}\pushQED{\qed} Let $j \colon V\to V$ be a non-trivial elementary embedding. Then $j$ is \emph{cofinal}, that is, we can find $y$ such that $x\in j(y)$ for each $x$. \qedhere \end{proposition} Note that \cite{ZieglerPhD} uses the term \emph{set cofinality} to denote our notion of cofinality. However, we will use the term cofinality to harmonize the terminology with that of \cite{Matthews2020}. Ziegler's theorem on the cofinality of Reinhardt embedding was an early indication of the high consistency strength of $\mathsf{CZF}$ with a Reinhardt set. This is because \cite{Matthews2020} shows that a related weak set theory, namely $\mathsf{ZF}Cminus$ with the $\mathsf{DC}_\mu$-scheme for all cardinals $\mu$, already proves there is no cofinal non-trivial elementary embedding $j \colon V\to V$. The following lemma, due to Ziegler \cite{ZieglerPhD}, has an essential role in the proof of \autoref{Proposition:CZFReinhardtEmbeddingCofinal} and developing properties of large large sets. We note here that the proof will not require $V$ to satisfy any of our axioms in the language expanded with $j$ or for the embedding to be fully elementary and therefore the Lemma is also provable over weaker theories such as $\mathsf{IKP + BTEE}$. Note that, when working without Power Set, $\mathcal{P}(\mathcal{P}(a))$ should be seen as an abbreviation for the class consisting of those sets whose elements are all subsets of $a$. \begin{lemma}[$\mathsf{IKP} + \Delta_0\text{-}\mathsf{BTEE}_M$] \label{Lemma:PowerSetPreserving} Assume that $j \colon V\to M$ is a $\Delta_0$-elementary embedding and $K$ is a transitive set such that $j(x)=x$ for all $x\in K$. 
Then we have the following results for each $a\in K$: \begin{enumerate} \item If $b\subseteq a$, then $j(b)=b$. \item If $b\subseteq \mathcal{P}(a)$, then $j(b)=b$. \item If $b\subseteq \mathcal{P}(\mathcal{P}(a))$, then $j(b)=b$. \end{enumerate} Furthermore, if we can apply induction to $j$-formulas, then we can show that $j(b)=b$ for all $b\subseteq \mathcal{P}^n(a)$ for each $n\in\omega$, where $\mathcal{P}^n(a)$ is the $n\textsuperscript{th}$ application of the power set operator to $a$. \end{lemma} \begin{proof} \leavevmode \begin{enumerate} \item If $x\in b$, then $x\in K$ by transitivity of $K$, so $x=j(x)\in j(b)$. Hence $b\subseteq j(b)$. Conversely, $x\in j(b)\subseteq j(a)=a$ implies $x\in K$, so $j(x)=x$. Hence $j(x)\in j(b)$, and we have $x\in b$. This shows $j(b)\subseteq b$. \item If $x\in b$, then $x\in\mathcal{P}(a)$. Hence $j(x)=x$ by the previous claim, and we have $x=j(x)\in j(b)$. In sum, $b\subseteq j(b)$. Conversely, suppose $x\in j(b)$. Observe that $\forall t \in b ( t \subseteq a)$ and this is $\Delta_0$. Thus, by elementarity, $x \subseteq j(a) = a$. Hence by the previous claim, $x=j(x)$. This shows $j(x)\in j(b)$, so $x\in b$. Hence $j(b)\subseteq b$. \end{enumerate} The proof of the remaining case is identical, so we omit it. \end{proof} \subsection{Large set axioms beyond choice} \label{Subsection: Largeset-BeyondChoice} As before, there is no reason why we should stop at Reinhardt sets. While Reinhardt cardinals are incompatible with $\mathsf{ZFC}$, it is known that all proofs of this use the Axiom of Choice in an essential way. This has led to the study of stronger large cardinals in the $\mathsf{ZF}$ context, the most notable example being \cite{BagariaKoellnerWoodin2019}. We will focus on the concepts of super Reinhardt sets and totally Reinhardt sets. However, before we do so, we note a technical difficulty regarding truth predicates and first-order definability, which is why we will need to work in an infinitary second-order theory. Using Theorem 1 of \cite{Gaifman1974} or Proposition 5.1 of \cite{Kanamori2008}, in $\mathsf{ZF}$, an elementary map $j \colon V\to M$ is fully elementary if it is $\Sigma_1$-elementary. Unfortunately, in our context, we do not know whether $\Sigma_n$-elementary embeddings are fully elementary even if $n$ is sufficiently large. Throughout this subsection, we will fix an enumeration $\langle \phi_{n}(X,\vec{x})\mid n\in\mathbb{N}\rangle$ of all first-order formulas over the language of set theory with one class variable $X$. \begin{definition}[$\mathsf{CGB}_\infty$] Let $A$ be a class. A class $j$ is an $A$-\emph{(fully) elementary embedding} from $V$ to $M$ if $j$ is a class function and $\bigwedge_{n\in\mathbb{N}}\forall\vec{x}[\phi_n(A,\vec{x})\leftrightarrow \phi^M_{n}(A\cap M,j(\vec{x}))]$. We simply call $j$ \emph{elementary} if $A=V$. \end{definition} \begin{definition}[$\mathsf{CGB}_\infty$]\label{Definition:SuperReinhardt} An inaccessible set $K$ is \emph{super Reinhardt} if for every set $a$ there is an elementary embedding $j \colon V\to V$ such that $K$ is a critical point of $j$ and $a\in j(K)$. More generally, an inaccessible set $K$ is \emph{$A$-super Reinhardt} for an amenable $A$ if for each set $a$ we can find an $A$-elementary embedding $j \colon V\to V$ whose critical point is $K$ such that $a\in j(K)$. \end{definition} The reader should notice that we have formulated critical sets and Reinhardt sets over $\mathsf{CZF}_{j,M}$ rather than over $\mathsf{CGB}_\infty$.
Some readers could think that formulating criticalness and Reinhardtness over $\mathsf{CGB}_\infty$ would be more natural, as Bagaria-Koellner-Woodin formulated Reinhardt cardinals over $\mathsf{GB}$ instead of $\mathsf{ZF}_j$. It will turn out that sticking to the more first-order formulation of criticalness and Reinhardtness is better for the following technical reason. We will take a double-negation interpretation based on Gambino's Heyting-valued interpretation \cite{Gambino2006} over first-order constructive set theories in \autoref{Section:HeytingInterpretation}. It is harder to extend Gambino's interpretation to a second-order set theory than extending it to $\mathsf{CZF}_{j,M}$ because the former requires extending Gambino's forcing to all classes and defining the interpretation of second-order quantifiers, while the latter does not. However, we cannot avoid the full second-order formulation of super Reinhardtness, unlike we did for Reinhardtness, because its definition asks for class many elementary embeddings. Our definition of $A$-super Reinhardtness is different from that of Bagaria-Koellner-Woodin \cite{BagariaKoellnerWoodin2019} because they required $j$ to satisfy $j^+[A] :=\bigcup_{x\in V} j(A\cap x)$ is equal to $A$ instead of $A$-elementarity. It turns out by \autoref{Lemma:DeltaA0elementarityandfullelementarity} that our definition is stronger than that of Bagaria-Koellner-Woodin in our constructive context, but they are equivalent in the classical context. \begin{lemma}[$\mathsf{CGB}$] \label{Lemma:RestrictingAClassOntoaset} Let $\phi(X, x_0,\cdots, x_n)$ be a $\Delta^X_0$-formula whose free variables are all expressed. If $A$ is an amenable class and if we take $a = \mathsf{op}eratorname{TC}(\{x_0,\cdots,x_n\})$, then $$\phi(A,x_0,\cdots,x_n) \leftrightarrow \phi(A\cap a,x_0,\cdots, x_n).$$ \end{lemma} \begin{proof} The proof proceeds by induction on $\phi$. \begin{itemize} \item Assume that $\phi$ is an atomic formula. The only non-trivial case is $x\in A$, and it is easy to see that $x\in A\leftrightarrow x\in \mathsf{op}eratorname{TC}(\{x\})\cap A$. \item The cases for logical connectives are easy to check. \item Assume that $\phi(A,x_0,\cdots, x_n)$ is \(\forall y\in x_0 \psi(A,y,x_0,\cdots, x_n)\), and assume that \begin{equation*} \psi(A,y,x_0,\cdots, x_n)\leftrightarrow \psi(A\cap f(y),x_0,\cdots, x_n) \end{equation*} for all $y$, where $f(y)=\mathsf{op}eratorname{TC}(\{y,x_0,\cdots, x_n\})$. If $y\in x_0$, then $f(y)=a=\mathsf{op}eratorname{TC}(\{x_0,\cdots, x_n\})$. Hence \begin{align*} \forall y\in x_0 \psi(A,y,x_0,\cdots, x_n) \leftrightarrow \forall y\in x_0 \psi(A\cap a,y,x_0,\cdots, x_n). \end{align*} The case for $\exists y\in x_0\psi(y,x_0,\cdots, x_n)$ is also similar. \qedhere \end{itemize} \end{proof} \begin{remark} \label{Remark:SuperReinhardtsareReinhardts} It is worthwhile to note that if $K$ is a super Reinhardt set then it is also a Reinhardt set, in the sense that it is a critical point for some elementary embedding $j \colon V \rightarrow V$ for which $\langle V, j \rangle$ satisfies every axiom of $\mathsf{CZF}_j$. This is not obvious because, while $\mathsf{CGB}_\infty$ satisfies Class Set Induction and Class Strong Collection, it does not include a Class Separation axiom. Therefore, for an arbitrary class $A$, there is no reason why $\Delta_0^A$-Separation should hold. To circumvent this issue let $j \colon V \rightarrow V$ be an elementary embedding such that $K \in j(K)$ (and everything in $K$ is fixed by $j$). 
We observe that, by \autoref{Lemma:FunctionalClass-amenable}, $j$ is an amenable class and therefore we can apply the above Lemma from which one can conclude that $\Delta_0^j$-Separation holds. \end{remark} \begin{lemma}[$\mathsf{CGB}_{\infty}$] \label{Lemma:DeltaA0elementarityandfullelementarity} Let $A$ be an amenable class and $j \colon V \rightarrow V$ be a class function. \begin{enumerate} \item If $j$ is fully elementary, then $j$ is $\Delta^A_0$-elementary if and only if $j^+[A]=A$. \item If $\mathsf{LEM}$ holds, then every $\Sigma_1\cup \Delta^A_0$-elementary embedding $j$ is elementary for all $A$-formulas. \end{enumerate} \end{lemma} \begin{proof} \phantom{a} \begin{enumerate} \item Assume that $j$ is elementary for $\Delta^A_0$-formulas. We first show that $j(A\cap x)=A\cap j(x)$: we know that $\forall y [y\in A\cap x \leftrightarrow y\in A \land y\in x]$. We can view this sentence as a conjunction of two $\Delta^A_0$-formulas, so we have \begin{equation*} \forall y [y\in j(A\cap x) \leftrightarrow y\in A \land y\in j(x)]. \end{equation*} This shows the claim we desired. Hence $j^+[A]=\bigcup_{x\in V} j(A\cap x) = \bigcup_{x\in V} A \cap j(x) = A$. (The last equality holds by the cofinality of $j$, \autoref{Proposition:CZFReinhardtEmbeddingCofinal}.) Conversely, assume that $j^+[A]=A$ holds and let $\phi(X,x_0,\cdots,x_n)$ be a $\Delta^X_0$-formula with a unique class variable $X$, all of whose free variables are displayed. By \autoref{Lemma:RestrictingAClassOntoaset}, $a=\mathsf{op}eratorname{TC}(\{x_0,\cdots, x_n\})$ satisfies $\phi(A,x_0,\cdots, x_n)\leftrightarrow \phi(A\cap a,x_0,\cdots, x_n)$. Since $j$ is elementary and $A \cap a$ is a set by amenability, we have \begin{equation*} \phi(A\cap a,x_0,\cdots,x_n)\leftrightarrow \phi(j(A\cap a),j(x_0),\cdots, j(x_n)). \end{equation*} Note that $\forall y\in j(a)[y\in j^+[A] \leftrightarrow y\in j(A\cap a)]$ holds. Furthermore, every bounded variable $y$ appearing in $\phi$ is bounded by $a$. Hence \begin{equation*} \phi(j(A\cap a),j(x_0),\cdots, j(x_n)) \leftrightarrow \phi(j^+[A],j(x_0),\cdots, j(x_n)). \end{equation*} From the assumption $j^+[A]=A$, we finally have $\phi(A,x_0,\cdots, x_n)\leftrightarrow \phi(A, j(x_0),\cdots, j(x_n))$. \item $\mathsf{LEM}$ implies our background theory is $\mathsf{GB}$. We follow Kanamori's proof (\cite{Kanamori2008}, Proposition 5.1(c)) that every $\Sigma_1$-elementary cofinal embeddings over $\mathsf{ZF}$ is fully elementary. Assume that we have the $\Sigma_n^A$-elementarity of $j$. Consider the $A$-formula $\exists x\phi(A,x,a)$, where $\phi$ is a $\Pi^A_n$-formula. Then $\phi(A,x,a)$ implies $\phi(A,j(x),j(a))$, and we have $\exists y \phi(A,y,j(a))$. Conversely, assume that we have $\exists x \phi(A,x,j(a))$. Since $j$ is cofinal by \autoref{Proposition:CZFReinhardtEmbeddingCofinal}, we can find $\alpha$ such that $x\in V_{j(\alpha)}$. Since $j$ is elementary, $V_{j(\alpha)} = j(V_\alpha)$ and hence $\exists x\in j(V_\alpha)\phi(A,x,j(a))$ holds. Note that, by using second-order Collection for formulas using the class $A$, if $\theta$ is $\Sigma_n^A$ then so is $\forall x \in a \theta$ and, since we have a classical background theory, by taking negations, if $\theta$ is $\Pi_n^A$ then so is $\exists x \in a \theta$ and thus the formula $\exists x\in j(V_\alpha)\phi(A,x,j(a))$ is $\Pi^A_n$. So, by $\Sigma^A_n$-elementarity, we have $\exists x\in V_\alpha \phi(A,x,a)$. 
\qedhere \end{enumerate} \end{proof} The upshot of the second claim in the above lemma is that $\mathsf{GB}$ proves that a fully elementary embedding is $A$-elementary if it is $\Delta^A_0$-elementary. Furthermore, the condition $j^+[A]=A$ is equivalent to $\Delta^A_0$-elementarity. Hence \autoref{Lemma:DeltaA0elementarityandfullelementarity} justifies that our definition of super Reinhardtness is a reasonable constructive formulation of that of Bagaria-Koellner-Woodin. We also note here that we can weaken the assumption in the second clause of \autoref{Lemma:DeltaA0elementarityandfullelementarity}: namely, $\mathsf{GB}$ proves that every \emph{cofinal} $\Delta^A_0$-elementary embedding $j\colon V\to V$ is fully elementary for $A$-formulas. We end this section with one final large set axiom which, as we will see, suffices to bring the constructive theory into proof-theoretic equilibrium with its classical counterpart. \begin{definition}\label{Definition:TR} \emph{$V$ is totally Reinhardt} ($\mathsf{TR}$) is the following statement: for every class $A$, there is an $A$-super Reinhardt set. \end{definition} \section{How strong are large large sets over \texorpdfstring{$\mathsf{CZF}$}{CZF}?}\label{Section:StructuralLowerbdd} In this section, we perform the preparatory work needed to derive a lower bound for the consistency strength of various large large set axioms over $\mathsf{CZF}$. We will show that over $\mathsf{IKP}$, a critical point with a moderate property is a model of $\mathsf{IZF+pIEA}$. We will also see that we can find a model of a stronger extension of $\mathsf{IZF}$ under large large set axioms over $\mathsf{CZF}$. One might ask why we focus on deriving models of $\mathsf{IZF}$ with large set properties. The reason is that deriving consistency strength from $\mathsf{CZF}$ is difficult: the double-negation translation does not behave well over $\mathsf{CZF}$ with large set axioms. In contrast, $\mathsf{IZF}$ and its extensions go well with double-negation translations. \subsection{The strength of elementary embeddings over \texorpdfstring{$\mathsf{BTEE}_M$}{BTEE-M}} The aim of this subsection is to derive a lower bound for the consistency strength of an elementary embedding $j\colon V\to M$ with a critical point. One remarkable consequence of \autoref{Lemma:PowerSetPreserving} is that the existence of a critical set is already quite strong, compared to small large set axioms over $\mathsf{CZF}$. As in that lemma, we remark here that we do not require $V$ to satisfy any axiom in the expanded language, and therefore the proof also works over $\mathsf{IKP} + \Delta_0\text{-} \mathsf{BTEE}_M$. \begin{theorem}[$\mathsf{IKP} + \Delta_0\text{-} \mathsf{BTEE}_M$] \label{Theorem:CriticalPointModelsIZF} Let $K$ be a transitive set such that $K\models \Delta_0\text{-}\mathsf{Sep}$ and $\omega\in K$, and let $j \colon V\to M$ be a $\Delta_0$-elementary embedding whose critical point is $K$. Then $K\models \mathsf{IZF}$. \end{theorem} \begin{proof} $K$ satisfies Extensionality and Set Induction since $K$ is transitive. We only prove that Power Set, Full Separation, and Collection are valid over $K$ since the validity of the other axioms is not hard to prove. The key fact in the proof is that $j(K)$ is also a transitive model of $\Delta_0$-Separation and $K\in j(K)$. \begin{enumerate} \item Power Set: let $a\in K$, and define $b$ as $b=\{c\in K \mid c\subseteq a\}$. We can see that $b\in j(K)$ and $b=j(b)$ by \autoref{Lemma:PowerSetPreserving}. Hence $j(b)\in j(K)$, and thus we have $b\in K$.
It remains to show that $K$ thinks $b$ is the power set of $a$. We claim that $K\models \forall x (x\in b\leftrightarrow x\subseteq a)$, which is equivalent to $\forall x\in K (x\in b\leftrightarrow x\subseteq a)$, but this is obvious from the definition of $b$. \item Full Separation: let $a\in K$ and $\phi(x,p)$ be a first-order formula with a parameter $p\in K$. Observe that the relativization $\phi^K(x,p)$ is $\Delta_0$, so $b=\{x\in a\mid \phi^K(x,p)\}\in j(K)$. Since $b\subseteq a$, we have $j(b)=b$. Hence $b\in K$. It suffices to show that $K$ thinks $b$ witnesses this instance of Separation for $\phi$ and $a$. Formally, this means that $\forall x\in K [x\in b\leftrightarrow x\in a\land \phi^K(x,p)]$ holds, which is trivial by the definition of $b$. \item Collection: Since $K$ models Full Separation, by $\Delta_0$-elementarity of $j$, we also have that $j(K)$ satisfies Full Separation. Now, let $\phi(x,y,p)$ be a formula and $a,p\in K$, and suppose that $\forall x\in a \exists y\in K \phi^K(x,y,p)$ holds. Then, for each $x\in a$ and $y\in K$, since everything is fixed by $j$, we have \begin{equation*} V\models \phi^K(x,y,p) ~~~ \implies ~~~ M\models \phi^{j(K)} (x,y,p). \end{equation*} Therefore, $M\models \forall x\in a\exists y\in K \phi^{j(K)}(x,y,p)$. Define $b=\{y\in K \mid \exists x\in a\ \phi^{j(K)}(x,y,p)\}\in j(K)$; then $b$ witnesses $M\models \exists b\in j(K) \forall x\in a \exists y\in b\ \phi^{j(K)}(x,y,p)$. Thus, by elementarity, we have $\exists b\in K\forall x\in a\exists y\in b \ \phi^K(x,y,p)$. Namely, $K$ satisfies this instance of Collection. \qedhere \end{enumerate} \end{proof} Hence the existence of an elementary embedding with a critical point is already tremendously strong compared to $\mathsf{CZF}$. We will see in \autoref{Section:HeytingInterpretation} that $\mathsf{IZF}$ interprets classical $\mathsf{ZF}$, so the existence of a critical set over $\mathsf{CZF}$ is stronger than $\mathsf{ZF}$. However, it turns out that the hypotheses given in \autoref{Theorem:CriticalPointModelsIZF} prove that the critical point $K$ satisfies not only $\mathsf{IZF}$, but also large set axioms. For example, we next show that $K$ is a model of $\mathsf{IZF+pIEA}$. The proof begins with the following lemmas: \begin{lemma}[$\mathsf{IKP} + \Delta_0\text{-} \mathsf{BTEE}_M$] \label{Lemma:KandjK-samepowerset} Assume that $K$ satisfies the conditions given in the hypotheses of \autoref{Theorem:CriticalPointModelsIZF}. If $a\in K$, then $\mathcal{P}(a)\cap K = \mathcal{P}(a)\cap j(K)$. \end{lemma} \begin{proof} Let $a\in K$, $b\in j(K)$ and $b\subseteq a$. By \autoref{Lemma:PowerSetPreserving}, we have $b\in K$. \end{proof} \begin{lemma}[$\mathsf{IKP} + \Delta_0\text{-} \mathsf{BTEE}_M$] \label{Lemma:jKThinksKInacc} Under the hypotheses for $K$ given in \autoref{Theorem:CriticalPointModelsIZF}, $j(K)$ thinks $K$ is power inaccessible. \end{lemma} \begin{proof} We proved that $K$ is a model of $\mathsf{IZF}$ and $\mathcal{P}(a) \cap K = \mathcal{P}(a) \cap j(K)$ for any $a \in K$. Thus it suffices to show that $j(K)$ believes $K$ is regular. That is, \begin{equation*} \forall a \in K \forall R \in j(K) [ R\colon a\rightrightarrows K\to \exists b \in K (R \colon a \leftrightarrows b)]. \end{equation*} The proof employs basically the same argument as the proof that $K$ satisfies Collection given in \autoref{Theorem:CriticalPointModelsIZF}: let $a\in K$, $R\in j(K)$, and suppose that $R\colon a\rightrightarrows K$.
Then \begin{equation*} V\models R: a\rightrightarrows K ~~~ \implies ~~~ M\models j(R): a\rightrightarrows j(K). \end{equation*} Now let $b=\{y\in K \mid \exists x\in a \ \langle x,y\rangle\in j(R)\}\in M$. Since $b$ is a subset of $K\in j(K)$, we have $b\in j(K)$. Furthermore, $b$ witnesses $M \models \exists b\in j(K) [j(R)\colon a\leftrightarrows b]$. Hence, by elementarity, there exists $b \in K$ such that $R\colon a\leftrightarrows b$. \end{proof} \begin{corollary}[$\mathsf{IKP} + \Delta_0\text{-}\mathsf{BTEE}_M$] \label{Corollary:CriticalSetsModelsIZFIEA} Under the hypotheses for $K$ given in \autoref{Theorem:CriticalPointModelsIZF}, $K$ satisfies $\mathsf{pIEA}$. \end{corollary} \begin{proof} By \autoref{Lemma:jKThinksKInacc}, $j(K)$ believes $K$ is power inaccessible. Fix $a\in K$; then $b=K$ witnesses the following statement: \begin{equation*} M \models \big[ \exists b\in j(K) \big( a\in b \land j(K)\models \text{$b$ is power inaccessible} \big) \big]. \end{equation*} Hence by elementarity, there is $b\in K$ such that $a\in b$ and $K$ believes $b$ is power inaccessible. Since $a$ is arbitrary, we have $K \models \mathsf{IZF + pIEA}$. \end{proof} In fact, we can derive more: we can show that not only does $K$ satisfy $\mathsf{pIEA}$, but $K$ also satisfies \mbox{$\forall x\exists y[x\in y\land \Phi(y)]$}, where $\Phi$ is any large set property such that $j(K) \models \Phi(K)$. In particular, $\Phi$ can be a combination of power inaccessibility with \emph{Mahloness} or \emph{2-strongness}. (See \cite{Rathjen1998} or \cite{ZieglerPhD} for the details of Mahlo sets or 2-strong sets.) \subsection{Iterating \texorpdfstring{$j$}{j} to a critical point} In the classical context, one usually works with $\lambda=j^\omega(\kappa)=\sup_{n<\omega}j^n(\kappa)$ for $\kappa=\operatorname{crit} j$ when studying large cardinals at the level of rank-into-rank embeddings and beyond. We can see in the classical context that if $j\colon V\to M$ is an elementary embedding with $\kappa=\operatorname{crit} j$, then $V_\lambda$ is still a model of $\mathsf{ZFC}$ with some large cardinal properties. We may expect the same in a constructive manner, but defining the analogue of $\lambda=j^\omega(\kappa)$ cannot be done over $\mathsf{IKP}+\Sigma\text{-} \mathsf{BTEE}_M$. The problem is that the definition of $\lambda$ requires defining $\langle j^n(\kappa) \mid n\in\omega\rangle$, which requires recursion for $j$-formulas. It turns out that we can define this sequence if we have Set Induction for $\Sigma^{j, M}$-formulas. Moreover, even if we have this sequence, we require an instance of $\Sigma^{j, M}$-Replacement to turn the sequence into a set, which is when we will have to start working in $\mathsf{IKP}_{j,M}$. \begin{definition}[$\mathsf{IKP}+\Sigma\text{-} \mathsf{BTEE}_M$]\label{Definition:Iterationofj} Let $j\colon V\to M$ be an elementary embedding with a critical point $K$. Following Corazza in \cite{Corazza2006}, (2.7) to (2.9), let $\Theta$ and $\Phi$ be the following formulas: \begin{align*} \Theta(f,n,x,y) & \equiv \text{$f$ is a function}\land \operatorname{dom} f = n+1 \land f(0) = x \, \land \\ &\forall i \big( 0<i\le n\to f(i) = j(f(i-1)) \big) \land f(n) = y\\ \Phi(n,x,y) & \equiv n\in\omega \to \exists f\ \Theta (f,n,x,y).
\end{align*} \end{definition} Informally speaking, $\Theta(f,n,x,y)$ states that $f$ is a function with domain $n + 1$ computing $j^n(x)=y$, and $\Phi(n,x,y)$ asserts that $j^n(x)=y$. We can see that $\Theta$ is $\Delta^{j,M}_0$ and $\Phi$ is $\Sigma^{j,M}$. A careful analysis will show that $\Phi$ allows a $\Pi^{j,M}$-formulation, so $\Phi$ is actually $\Delta^{j,M}$. However, this fact is irrelevant in our context. \begin{lemma}[Corazza, $\mathsf{IKP}+\Sigma\text{-} \mathsf{BTEE}_M+\Sigma^{j,M}\text{-}\text{Set Induction}$] \label{Lemma:DefinabilityIterationOfj} \phantom{a} \begin{enumerate} \item For all $n\in\omega$ and $x$, $y$, there is at most one $f$ such that $\Theta(f,n,x,y)$ holds. That is, \begin{equation*} \forall n\in\omega \forall x,y,f,g\ [\Theta(f,n,x,y)\land \Theta(g,n,x,y)\to f=g]. \end{equation*} \item $\Phi$ defines a function. That is, $\forall n\in\omega \forall x \exists! y \Phi(n,x,y)$. \end{enumerate} \end{lemma} \begin{proof} \phantom{a} \begin{enumerate} \item Fix $n\in\omega$ and $x,y,f,g$, and assume that both $\Theta(f,n,x,y)$ and $\Theta(g,n,x,y)$ hold. To be precise, we apply Set Induction to the following formula: \begin{equation*} \theta(i) \equiv [i\le n\to f(i)=g(i)]. \end{equation*} Assume that $\forall k\in i \ \theta(k)$ holds. Then we can see that $f(i) = j(f(i-1))=j(g(i-1))=g(i)$ holds. In other words, we have $\theta(i)$. By Set Induction for $\Delta_0$-formulas, we have $\forall i \ \theta(i)$, so $f=g$. \item For uniqueness, assume that we have $\Phi(n,x,y_0)$ and $\Phi(n,x,y_1)$. By the previous claim, we can see that for the same $f$, $\Theta(f,n,x,y_0)$ and $\Theta(f,n,x,y_1)$ hold. Hence $y_0=f(n)=y_1$. For existence, we claim that $\forall n\in\omega\forall x \exists y,f\ \Theta(f,n,x,y)$. This uses Set Induction for $\Sigma^{j,M}$-formulas. Fix $x$ and let \begin{equation*} \phi(n, x) \equiv n\in\omega \to \exists y\exists f\ \Theta(f,n,x,y). \end{equation*} Assume that $\forall i\in n \ \phi(i, x)$ holds. Suppose that $y$ and $f$ witness $\Theta(f,n-1,x,y)$; then we can extend $f$ to $f'$ by setting $f':=f\cup \{\langle n, j(y) \rangle\}$. It is immediate that $\Theta(f',n,x,j(y))$. Hence, by Set Induction for $\Sigma^{j,M}$-formulas, we have $\forall n \phi(n,x)$. \qedhere \end{enumerate} \end{proof} \begin{remark} When working with elementary embeddings derived from ultrafilters in $\mathsf{ZFC}$, one can show that the ultrapower construction can be iterated. That is, one starts with $j \colon V \rightarrow M_0$ and shows that for every $n$ there is a transitive class $M_{n+1}$ and an embedding $i_n \colon M_n \rightarrow M_{n + 1}$. Then one can show that $j^2 = i_0 \circ j \colon V \rightarrow M_1$ is an elementary embedding with a critical set. However, this argument does not go through in our weaker context. In particular, it is unclear what the codomain of $j^2$ should be and therefore why $j^2$ should be an elementary embedding. Therefore, for our purposes, in general, $j^n(x)$ should be a formal object which is the result of applying $j$ to $x$ $n$ times. On the other hand, one should note that if $j$ is restricted to some set $A$ then it is possible to iterate the (set) embedding $j \restricts A \colon A \rightarrow j(A)$. \end{remark} Hence we can define the constructive analogue of $\lambda=j^\omega(\kappa)$, by using an instance of $\Sigma^{j,M}$-Replacement: \begin{definition}[$\mathsf{IKP}_{j,M}$] Let $j\colon V\to M$ be a $\Delta_0$-elementary embedding with a critical point $K$.
By \autoref{Lemma:DefinabilityIterationOfj} and Replacement for $\Sigma^{j,M}$-formulas, $\langle j^n(K)\mid n\in\omega\rangle$ is a set. Define $\Lambda := \bigcup_{n\in\omega} j^n(K)$. \end{definition} \begin{remark} In order to define $\Lambda$ in the above definition it is important that $V$ satisfies at least $\Sigma^{j,M}$-Replacement and $\Sigma^{j,M}$-Induction for $\omega$. One can see that the class function sending $n$ to $j^n(K)$ is $\Sigma^{j,M}$-definable and that $\Sigma^{j,M}$-Induction then implies that this function is total. Finally, an instance of $\Sigma^{j,M}$-Replacement gives us that $\{ j^n(K) \mid n \in \omega \}$ is a set, and therefore so is its union. We refer to the end of Section 2 of \cite{Corazza2006} or Section 6.3 of \cite{MatthwesPhD} for details of this and what issues can potentially arise without assuming any induction in the extended language. \end{remark} \begin{remark} In the above definition, we keep talking about a $\Delta_0$-elementary embedding while we work over $\mathsf{IKP}_{j,M}$, which technically includes the full elementarity of $j$ as an axiom. Hence we actually work with a weaker subtheory of $\mathsf{IKP}_{j,M}$, which is obtained by weakening the elementarity scheme to $\Delta_0$-formulas. This fact becomes important in \autoref{Subsection:Sigma-Ord-inary-IKP}, where we work in $\mathsf{IKP}$ with a $\Sigma$-elementary embedding $j\colon V\to M$. However, we will continue to refer to the background theory $\mathsf{IKP}_{j,M}$ when this does not cause confusion. To emphasize, we work with the Set Induction scheme over the extended language as well as the $\Delta_0^{j,M}$-Separation and the $\Sigma^{j,M}$-Collection schemes. \end{remark} Now we can see that $K$ is an elementary submodel of $\Lambda$: \begin{lemma}[$\mathsf{IKP}_{j,M}$]\label{Lemma:KelementarysubmodelLambda} Let $j \colon V\to M$ be a $\Delta_0$-elementary embedding with a critical point $K$, and assume that $K\models \Delta_0\text{-} \mathsf{Sep}$. Then $K$ is an elementary submodel of $\Lambda$. That is, the following holds: for every formula $\phi(\vec{x})$ (without $j$ or $M$) all of whose free variables are expressed, \begin{equation}\label{Formula:PhiKLambdaAbsoluteness} V\models \forall \vec{x}\in K [\phi^K(\vec{x})\leftrightarrow \phi^\Lambda(\vec{x})]. \end{equation} \end{lemma} \begin{proof} We proceed with the proof by induction on formulas. Note that we can formulate our proof over $\mathsf{IKP}$ since the truth predicate for transitive models is $\Sigma$-definable. Atomic cases and the cases for $\land$, $\lor$, and $\to$ are trivial, and so we concentrate on the cases for quantifiers. \begin{itemize} \item Case $\forall$: assume that $\vec{a}\in K$ and $\phi(x,\vec{a})$ is absolute between $K$ and $\Lambda$. That is, assume that \eqref{Formula:PhiKLambdaAbsoluteness} holds for $\phi$. Obviously \mbox{$V\models (\forall x\phi(x,\vec{a}))^\Lambda\to (\forall x\phi(x,\vec{a}))^K$.} Conversely, assume that $V\models \forall x\in K \phi^K(x,\vec{a})$. Since this is $\Delta_0$-expressible in $V$, by elementarity we have $M\models \forall x\in j(K) \phi^{j(K)}(x,\vec{a})$. Since $\forall x\in j(K) \phi^{j(K)}(x,\vec{a})$ is bounded and $M$ is a transitive subclass of $V$, $\forall x\in j(K) \phi^{j(K)}(x,\vec{a})$ is absolute between $V$ and $M$, so $V\models \forall x\in j(K) \phi^{j(K)}(x,\vec{a})$. We can iterate $j$ in a similar fashion, so we have $V\models \forall x\in j^n(K) \phi^{j^n(K)}(x,\vec{a})$ for all $n\in\omega$.
Here $\vec{a}$ is unchanged because it is in $K$. Similarly, by elementarity and absoluteness of bounded formulas, we have from \eqref{Formula:PhiKLambdaAbsoluteness} that \begin{equation*} V\models \forall n\in\omega \forall x,\vec{a}\in j^n(K) [\phi^{j^n(K)}(x,\vec{a})\leftrightarrow \phi^\Lambda(x,\vec{a})]. \end{equation*} Hence $V$ thinks $\phi^\Lambda(x,\vec{a})$ for all $n\in\omega$ and $x\in j^n(K)$. Since $\Lambda=\bigcup_{n\in\omega} j^n(K)$, we have \mbox{$\forall x\in\Lambda \phi^\Lambda(x,\vec{a})$.} \item Case $\exists$: The proof is similar to the case for $\forall$. Assume the same conditions on $\vec{a}$ and $\phi(x,\vec{a})$ as before. Showing \mbox{$V\models (\exists x\phi(x,\vec{a}))^K\to (\exists x\phi(x,\vec{a}))^\Lambda$} is trivial. For the converse, assume that $V\models \exists x\in\Lambda\phi^\Lambda(x,\vec{a})$. Then we can find $n\in\omega$ such that $V\models \exists x\in j^n(K)\phi^\Lambda(x,\vec{a})$. We can prove that $\phi^\Lambda(x,\vec{a})$ is equivalent to $\phi^{j^n(K)}(x,\vec{a})$ for $x,\vec{a}\in j^n(K)$, so $V\models \exists x\in j^n(K) \phi^{j^n(K)}(x,\vec{a})$. Then we have the desired result if $n=0$. If $n>0$, then by absoluteness of bounded formulas, $M\models \exists x\in j^n(K) \phi^{j^n(K)}(x,\vec{a})$. By applying elementarity, we have $V\models \exists x\in j^{n-1}(K) \phi^{j^{n-1}(K)}(x,\vec{a})$. Now we can see by repeating this argument that $V\models \exists x\in K \phi^K(x,\vec{a})$. \qedhere \end{itemize} \end{proof} An upshot of \autoref{Lemma:KelementarysubmodelLambda} is that $j^n(K)$ is an elementary submodel of $\Lambda$.\footnote{In fact, the proof of \autoref{Lemma:KelementarysubmodelLambda} implicitly shows that $K$ is an elementary submodel of $j^n(K)$.} In particular, by applying this fact to \autoref{Lemma:jKThinksKInacc}, we have \begin{lemma}[$\mathsf{IKP}_{j,M}$]\pushQED{\qed} \label{Lemma:LambdaThinksKInacc} Assume that $K$ satisfies the conditions given in the hypotheses of \autoref{Theorem:CriticalPointModelsIZF}. Then $\Lambda$ thinks $K$ is a power inaccessible set. \qedhere \end{lemma} In particular, we have an analogue of \autoref{Lemma:KandjK-samepowerset} between $K$ and $\Lambda$: \begin{lemma}[$\mathsf{IKP}_{j,M}$]\label{Lemma:KandLambda-samepowerset} \pushQED{\qed} Assume that $K$ satisfies the conditions given in the hypotheses of \autoref{Theorem:CriticalPointModelsIZF}. For $n\in\omega$ and $a\in j^n(K)$, we have $\mathcal{P}(a)\cap\Lambda=\mathcal{P}(a)\cap j^n(K)$. \qedhere \end{lemma} As a corollary of the previous results, we have \begin{corollary}[$\mathsf{IKP}_{j,M}$]\label{Corollary:LambdaModelsIZFBTEEInd} Assume that $K$ satisfies the conditions given in the hypotheses of \autoref{Theorem:CriticalPointModelsIZF}. Then $\langle \Lambda,j\restricts\Lambda \rangle \models\mathsf{IZF+BTEE}+\text{Set Induction}_j$. \end{corollary} \begin{proof} $\Lambda$ is a model of $\mathsf{IZF}$ by \autoref{Lemma:KelementarysubmodelLambda} and \autoref{Theorem:CriticalPointModelsIZF}. Moreover, $j\restricts \Lambda\colon\Lambda\to\Lambda$ has a critical point $K\in\Lambda$, so $\langle \Lambda, j\restricts \Lambda \rangle$ is a model of $\mathsf{BTEE}$. Since $\Lambda$ is transitive and Set Induction for $\Delta_0^{j,M}$-formulas holds, we have that $\langle \Lambda, j\restricts\Lambda \rangle$ believes Set Induction for $j$-formulas holds.
\end{proof} \begin{remark} The reader might notice that we always try to make explicit where a given formula holds, such as $V\models\phi$ or $M\models\phi$, and explicitly state how to transfer between statements holding over $V$ and over $M$, for example, by relying on absoluteness of bounded formulas. The main reason for this is that while $V\models\phi(K)$ implies $M\models \phi(j(K))$, we cannot say anything about $V\models \phi(j(K))$ and $M\models\phi(K)$. For example, work over $\mathsf{ZFC}$ with a measurable cardinal $\kappa$, and consider an ultrapower map \mbox{$j\colon V\to\operatorname{Ult}(V,U)\cong M$.} Then $V\models (V_\kappa\models \mathsf{ZFC}_2)$ and $M\models (V_{j(\kappa)}\models \mathsf{ZFC}_2)$. However, $j(\kappa)$ is not even a cardinal over $V$, so $V\not\models (V_{j(\kappa)}\models \mathsf{ZFC}_2)$. Similarly, while $V$ thinks $\kappa$ is measurable, $M$ does not in general think $\kappa$ is measurable. \end{remark} \subsection{Playing with a \texorpdfstring{$\Sigma$-$\mathrm{Ord}$-inary}{Sigma-Ord-inary} elementary embedding over \texorpdfstring{$\mathsf{IKP}$}{IKP}} \label{Subsection:Sigma-Ord-inary-IKP} Let us examine what we can derive from a $\Sigma$-elementary embedding with an ordinal critical point. An illuminating result along this line is that of Ziegler \cite{ZieglerPhD}, which is proved by considering the rank of any fixed critical point $K$: \begin{proposition}[Ziegler \cite{ZieglerPhD}, Section 9.1 or \cite{MatthwesPhD}, Lemma 7.2.4., $\mathsf{IKP}+ \mathrm{\Sigma}\text{-} \mathsf{BTEE}_M$] \pushQED{\qed} Let $j\colon V\to M$ be a $\Sigma$-elementary map. Then the following statements are all equivalent: \begin{enumerate} \item There is $K$ such that $K\in j(K)$ and $\forall x\in \operatorname{TC}(K) \ (j(x)=x)$, \item There is a transitive $K$ such that $K\in j(K)$ and $\forall x\in K\ (j(x)=x)$, \item There is an ordinal $\kappa$ such that $\kappa\in j(\kappa)$ and $\forall\alpha\in\kappa \ (j(\alpha)=\alpha)$. \qedhere \end{enumerate} \end{proposition} The main idea of the above proposition is extracting the rank of a given $K$, then observing that $j$ respects the rank of a set since the rank function is $\Sigma$-definable. However, it is not likely that we can extend this result further. Ziegler provided a way to obtain a model of $\mathsf{CZF^-_{Rep}+Exp}$, that is, $\mathsf{CZF}^-$ with Exponentiation and Replacement in place of Strong Collection, from a $\Delta_0$-elementary embedding $j\colon V\to M$ with an ordinal critical point. However, Ziegler's result requires an additional assumption on $j$ called $j\mathsf{IEA}$. Despite that, a $\Sigma$-elementary embedding over $\mathsf{IKP}$ still yields quite a lot of consistency strength provided it has an ordinal critical point. The following result is from \cite{MatthwesPhD}: \begin{theorem}[\cite{MatthwesPhD}, Theorem 7.3.2, $\mathsf{IKP}+ \mathrm{\Sigma}\text{-} \mathsf{BTEE}_M$] \label{Theorem:SigmaOrdEmbedding-LmodelsIZF} Let $j\colon V\to M$ be a $\Sigma$-$\mathrm{Ord}$-inary elementary embedding with witnessing ordinal $\kappa$, that is, a $\Sigma$-elementary embedding with a critical point $\kappa$ that is an ordinal. Furthermore, let $\kappa^\#$ be defined as in \autoref{Definition:AugmentedOrdinal}. Then $L_{\kappa^\#}\models\mathsf{IZF}$.
\end{theorem} Using the results we previously developed, we can extend this to a model of $\mathsf{IZF+pIEA}$. \begin{proposition}[$\mathsf{IKP}+\Sigma\text{-} \mathsf{BTEE}_M$] Let $j \colon V \rightarrow M$ be a $\Sigma$-$\mathrm{Ord}$-inary elementary embedding with witnessing ordinal $\kappa$. Then $j\restricts L^V \colon L^V\to L^M$ is a $\Sigma$-elementary embedding over $L$ and $L_{\kappa^\#}\models \Delta_0\text{-} \mathsf{Sep}$. \end{proposition} \begin{proof} $\Delta_0$-elementarity of $j\restricts L^V \colon L^V\to L^M$ follows from the fact that the formula $x\in L$ is $\Sigma$-definable and $j$ is $\Sigma$-elementary. Secondly, by \autoref{item:LalphamodelsboundedSep} of \autoref{Proposition:propertiesofL}, $L_{\kappa^\#}\models \Delta_0\text{-} \mathsf{Sep}$. \end{proof} By applying \autoref{Theorem:CriticalPointModelsIZF} and \autoref{Corollary:CriticalSetsModelsIZFIEA}, we have \begin{corollary}[$\mathsf{IKP}+\Sigma\text{-} \mathsf{BTEE}_M$] Let $j$ be a $\Sigma$-$\mathrm{Ord}$-inary elementary embedding with witnessing ordinal $\kappa$. Then $L_{\kappa^\#}$ is a model of $\mathsf{IZF+pIEA}$. \end{corollary} \begin{remark} Classically, $L^V=L^M$ if $M$ is a proper class transitive model of $\mathsf{KP}$. Furthermore, if $M$ is a transitive (set or class) model of $\mathsf{KP}$, then $L^M=L\cap M$ only depends on $\mathrm{Ord}^M=\mathrm{Ord}\cap M$. This is not constructively valid even if $M$ is a proper class, since there is no reason to believe \mbox{$\mathrm{Ord}\cap V=\mathrm{Ord}\cap M$.} In fact, we can construct a Kripke model of $\mathsf{IZF}$ that satisfies $\mathrm{Ord}\cap V\neq \mathrm{Ord}\cap L$. See Section 5.5 of \cite{MatthwesPhD} for the details. \end{remark} Of course, we can derive more: \cite{MatthwesPhD} proved that $L_{\kappa^\#}$ also thinks every set is included in a \emph{totally indescribable set}. Also, under the presence of Set Induction and Collection for $\Sigma^{j, M}$-formulas, we have \begin{theorem}[$\mathsf{IKP}_{j,M}$]\label{Theorem:IKPSigmaOrdimpliesIZF+BTEE} Let $\lambda=\bigcup_{n\in\omega} j^n(\kappa^\#)$. Then $L_\lambda = \bigcup_{n\in\omega} L_{j^n(\kappa^\#)}$ and $\langle L_\lambda, j\restricts L_\lambda \rangle$ is a model of $\mathsf{IZF+BTEE} + \text{Set Induction}_j$. \end{theorem} \begin{proof} Set Induction and Collection for $\Sigma^{j,M}$-formulas are necessary to ensure the existence of $\lambda$. By definition of $L_\alpha$, we have $L_\lambda = \bigcup_{\alpha\in\lambda}\operatorname{Def}(L_\alpha) = \bigcup_{n\in\omega}\bigcup_{\alpha\in j^n(\kappa^\#)} \operatorname{Def}(L_\alpha) = \bigcup_{n\in\omega} L_{j^n(\kappa^\#)}$. Now, the desired result follows from \autoref{Corollary:LambdaModelsIZFBTEEInd}. \end{proof} \subsection{Reinhardt sets} Let $j \colon V\to M$ be an elementary embedding and $K$ a critical point of $j$. It is not generally true that $j(K)$ is also a regular set, although $M$ believes it is. However, for a Reinhardt embedding $j \colon V\to V$, $j(K)$ is regular. This yields a better lower bound for the consistency strength of Reinhardt sets. Observe that the proof only requires $j$ to be $\Delta_0$-elementary because all of the formulas can be bounded by $\Lambda$. \begin{theorem}[{$\mathsf{IKP}_j$}]\label{Theorem:ReinhardtCriticalPtModelsWA} If $K$ is Reinhardt, then $\Lambda$ satisfies $\mathsf{IZF+WA}$.
\end{theorem} Since we already know that $\Lambda \models \mathsf{IZF}$, the above theorem follows immediately from the following lemma: \begin{lemma} For any $j$-formula $\phi$, $t\in\omega$ and $a,p\in j^t(K)$, $\{x\in a\mid \langle \Lambda, j\restricts\Lambda\rangle \models \phi(x, p)\}\in j^t(K)$. \end{lemma} \begin{proof} Let $F_\theta(a,p):=\{x\in a\mid \langle \Lambda, j\restricts\Lambda\rangle \models \theta(x,p)\}$ for a formula $\theta$. We will prove that, for every formula $\theta$, $F_\theta(a,p)\in j^t(K)$ for all $a,p\in j^t(K)$ by induction on the complexity of $\theta$. Atomic cases in which $j$ does not appear follow immediately from the inaccessibility of $j^t(K)$. We consider the case for equality where $j$ appears, the case for membership being similar. To do this, we need to show that for $a \in j^t(K)$, \begin{equation*} \{ \langle x, y \rangle \in a \mid x = j(y) \} \in j^t(K). \end{equation*} First, since $j^{t+1}(K)$ is inaccessible in $V$, $j^{t+1}(K)$ is Exp-closed by \autoref{Corollary:Inaccessiblesets-ExpClosed}, and therefore $j \restricts a$, which is an element of $\operatorname{mv}(\prescript{a}{}j(a))$, is in $j^{t+1}(K)$. Next, since $j^{t+1}(K)$ is closed under intersections, $j \restricts a \cap a \in j^{t+1}(K)$. However, this is a subset of $a \in j^t(K)$ so, by \autoref{Lemma:KandLambda-samepowerset}, $j \restricts a \cap a = \{ \langle x, y \rangle \in a \mid x = j(y) \} \in j^t(K)$. Conjunctions, disjunctions and implications follow from the fact that $j^t(K)$ satisfies Union and \mbox{$\Delta_0$-Separation}: let us examine the proof for implications. Suppose that $F_\phi(a,p),F_\psi(a,p)\in j^t(K)$ for any $a,p\in j^t(K)$. Then, by $\Delta_0$-Separation, \begin{equation*} F_{\phi\to\psi}(a,p) = \{x\in a \mid x\in F_\phi(a,p) \to x\in F_\psi(a,p)\}\in j^t(K). \end{equation*} Next, suppose that $\phi(x,p)$ is $\forall y \psi(x,y,p)$. For each $n\in\omega$, let \begin{equation*} S_n := \{x\in a \mid \forall y\in j^{t+n}(K) \langle \Lambda, j\restricts\Lambda\rangle \models\psi(x,y,p) \}. \end{equation*} We claim that $S_n\in j^t(K)$ for every $n\in\omega$. By the inductive hypothesis, for every $y\in j^{t+n}(K)$, \begin{equation*} F_\psi(a,\langle y,p\rangle ):= \{x\in a \mid \langle \Lambda, j\restricts\Lambda\rangle \models\psi(x,y,p)\} \in j^{t+n}(K). \end{equation*} Then we can define a function $R \colon j^{t+n}(K)\to j^{t+n+1}(K)$ given by $R(y)=F_\psi(a,\langle y,p\rangle)$. Furthermore, $R(y)\in \mathcal{P}(a)\cap j^{t+n+1}(K)=\mathcal{P}(a)\cap j^{t+n}(K)$. Hence the codomain of $R$ is $\mathcal{P}(a)\cap j^{t+n}(K)$, which is an element of $j^{t+n+1}(K)$. Since $K$ is regular, and hence inaccessible by \autoref{Theorem:CriticalPointModelsIZF}, so is $j^{t+n+1}(K)$. Hence by \autoref{Corollary:Inaccessiblesets-ExpClosed}, $R\in j^{t+n+1}(K)$. Hence $S_n =\bigcap\operatorname{ran} R \in j^{t+n+1}(K)$. By \autoref{Lemma:KandLambda-samepowerset} and $S_n\subseteq a \in j^t(K)$, we have $S_n \in j^t(K)$. Finally for this case, since $S_n\in j^t(K)$ for each $n\in\omega$, we can define a function $S \colon \omega\to j^t(K)$ by $S(n):=S_n$. By repeating the previous argument with the inaccessibility of $j^t(K)$ and $\omega\in j^t(K)$, we have $S\in j^t(K)$.
Hence \begin{equation*} {\textstyle \bigcap_{n\in\omega} S_n} = \{x\in a\mid \forall n\in\omega \forall y\in j^{t+n}(K) \langle\Lambda, j\restricts\Lambda\rangle \models\psi(x,y,p)\} = F_\phi(a,p) \end{equation*} is also a member of $j^t(K)$. The last case is when $\phi(x,p)$ is $\exists y \psi(x,y,p)$. The proof is quite similar to the previous one. As before, for every $n\in\omega$, let \begin{equation*} S'_n := \{x\in a \mid \exists y\in j^{t+n}(K) \langle \Lambda, j\restricts\Lambda\rangle \models\psi(x,y,p) \}. \end{equation*} We again prove that for each $n\in\omega$, $S'_n\in j^t(K)$ by first obtaining that $R\in j^{t+n+1}(K)$, where $R \colon y\mapsto F_\psi(a,\langle y,p\rangle)$ is the same function we defined in the proof for the previous case. Therefore, $S'_n = \bigcup_{y\in j^{t+n}(K)}R(y) \in j^{t+n+1}(K)$. Furthermore, $S'_n\subseteq a\in j^t(K)$ shows $S'_n\in j^t(K)$. If we define $S'\colon \omega\to j^t(K)$ by $S'(n):=S'_n$, then $S'\in j^t(K)$. Hence $F_\phi(a,p)=\bigcup_{n\in\omega} S'(n)\in j^t(K)$. \end{proof} \subsection{Super Reinhardt sets} The following theorem shows that super Reinhardt sets reflect first-order properties of $V$. \begin{theorem}[$\mathsf{CGB}_\infty$] \label{Theorem:SuperReinhardtReflection} Let $K$ be a super Reinhardt set. Then $K$ is an elementary submodel of $V$. That is, for every formula (without $j$) $\phi(\vec{x})$ all of whose variables are displayed, \begin{equation*} \forall\vec{x}\in K\, [\phi^K(\vec{x})\leftrightarrow\phi(\vec{x})]. \end{equation*} \end{theorem} \begin{proof} Atomic cases and the cases for logical connectives are trivial. Hence we focus on quantifiers. \begin{itemize} \item Case $\forall$: assume that $a\in K$ and $\phi(x,a)$ is absolute between $K$ and $V$. Then clearly we have \mbox{$\forall x \phi(x,a)\to\forall x\in K \phi^K(x,a)$}. Conversely, assume that $\forall x\in K \phi^K(x,a)$. Fix $b\in V$ and $j$ such that $b\in j(K)$. Applying $j$ (and using that $j(a)=a$), we get $\forall x\in j(K) \phi^{j(K)}(x,a)$, which implies $\phi^{j(K)}(b,a)$. Furthermore, we can see that $\phi(x,a)$ is also absolute between $j(K)$ and $V$ by applying $j$ to our inductive hypothesis $\forall x, a\in K\, [\phi^K(x,a)\leftrightarrow\phi(x,a)]$. Thus $\phi(b,a)$ for all $b\in V$. \item Case $\exists$: assume the same conditions on $a$ and $\phi(x,a)$ as before. Then obviously we have $\exists x\in K \phi^K(x,a)\to \exists x \phi(x,a)$. For the converse, assume that there is $b$ such that $\phi(b,a)$. Find $j$ such that $b\in j(K)$. Since $\phi$ is also absolute between $j(K)$ and $V$, we have $\exists x\in j(K) \phi^{j(K)}(x,a)$. Thus, by elementarity, $\exists x\in K \phi^K(x,a)$. \qedhere \end{itemize} \end{proof} Note that the above theorem requires the full elementarity of the elementary embeddings. Next, we shall see that not only do super Reinhardt sets reflect all first-order properties of $V$, but they also contain every subset of each of their members: \begin{proposition}[$\mathsf{CGB}_\infty$] \label{Prop:SuperReinhardtPowerInaccessible} Let $K$ be a super Reinhardt set and $a\in K$. If $b\subseteq a$, then $b\in K$. In particular, $\mathcal{P}^K(a)=\mathcal{P}(a)$ for $a\in K$, so $K$ is power inaccessible. \end{proposition} \begin{proof} Find $j \colon V\to V$ such that $b\in j(K)$. By \autoref{Lemma:PowerSetPreserving}, $j(b)=b\in j(K)$, so $b\in K$. \end{proof} \begin{corollary}[$\mathsf{CGB}_\infty$] \label{Corollary:SuperReinhardtmodelsIZFpIEA} Suppose that there is a super Reinhardt set.
Then $V$ is a model of $\mathsf{IZF + pIEA}$. \end{corollary} However, it is not, in general, true that we will also satisfy the full second-order theory of $\mathsf{IGB}$. The issue here is that there is no reason why a (not first-order definable) class should be amenable. On the other hand, one should observe that by restricting our attention to amenable classes we will obtain a model of $\mathsf{IGB}$. Also, note that being super Reinhardt is a second-order property, so its existence does not reflect down to $K$. Bagaria-Koellner-Woodin \cite{BagariaKoellnerWoodin2019} showed that super Reinhardt cardinals \emph{rank-reflect} Reinhardt cardinals, that is, there is an inaccessible cardinal $\gamma$ such that $(V_\gamma,V_{\gamma+1})$ models $\mathsf{ZF}_2$ with a Reinhardt cardinal. We will show in \autoref{Theorem:LowerBound-superReinhardt} that $\mathsf{CGB}_\infty$ with a super Reinhardt set interprets $\mathsf{ZF}$ with a proper class of ordinals $\gamma$ such that $(V_\gamma,V_{\gamma+1})\models \mathsf{ZF_2}+\text{`there is a Reinhardt cardinal.'}$ However, its proof `mixes up' large set arguments over $\mathsf{CZF}$ with a double-negation translation, so the following question is still open: \begin{question} Work over $\mathsf{CGB}_\infty$ with a super Reinhardt set. Can we prove there is an inaccessible set $M$ such that $(M,\mathcal{P}(M))$ satisfies $\mathsf{CGB}_\infty$ with the existence of a Reinhardt set? \end{question} However, we can still derive various large set principles from super Reinhardtness. For example, we can see that super Reinhardtness implies the analogue of $j \colon V_{\lambda+n}\prec V_{\lambda+n}$ over $\mathsf{ZF}$: \begin{proposition}[$\mathsf{CGB}_\infty$] Assume that there is a super Reinhardt set. Define $V_\alpha(x)$ recursively as $V_\alpha(x)=\operatorname{TC}(x)\cup \bigcup_{\beta\in\alpha}\mathcal{P}(V_\beta(x))$. If $j(\xi)=\xi$, then for each set $a$ we can find $\Lambda\ni a$, which is a countable union of power inaccessible sets, with an elementary embedding $j \colon V_\xi(\Lambda)\to V_\xi(\Lambda)$. In particular, for each $n\in\omega$ and $a\in V$, we can find $\Lambda\ni a$, which is a countable union of power inaccessible sets, such that there is an elementary embedding $j \colon \mathcal{P}^n(\Lambda)\to \mathcal{P}^n(\Lambda)$. \end{proposition} \begin{proof} Let $K$ be a super Reinhardt set and $j$ be an elementary embedding with a critical point $K$ such that $a\in j(K)$. Now let $\Lambda = j^\omega(K)$. We can see that $j\restricts V_\xi(\Lambda) \colon V_\xi(\Lambda)\to j(V_\xi(\Lambda)) = V_\xi(\Lambda)$ is the desired elementary embedding. The latter claim follows by letting $\xi=n$. \end{proof} \subsection{Totally Reinhardt sets} What about the case of totally Reinhardt sets? We have seen that if $K$ is super Reinhardt, then $K\prec V$. However, $K$ does not reflect $j$-formulas. We can see that $A$-super Reinhardt sets reflect not only the usual set-theoretic formulas but also $A$-formulas. The proof of the following theorem is identical to that of \autoref{Theorem:SuperReinhardtReflection}, so we omit it. We note here that in the theorem we will not need to assume that $A \cap K$ is a set, and if $A$ is not amenable then in fact this will not be the case. \begin{theorem}[$\mathsf{CGB}_\infty$] \pushQED{\qed} Let $K$ be an $A$-super Reinhardt set. Then $K$ reflects every $A$-formula.
That is, for every formula $\phi(X, \vec{x})$ with one class parameter $X$ and all of whose variables are displayed, \begin{equation*} \forall\vec{x}\in K [\phi^K(A \cap K, \vec{x})\leftrightarrow\phi(A, \vec{x})]. \qedhere \end{equation*} \end{theorem} Note that the $A$-elementarity of $j$ is necessary for the above theorem: the reader can see that the proof of \autoref{Theorem:SuperReinhardtReflection} applies $j$ to the inductive hypothesis, namely, that $\phi(\vec{x})$ is absolute between $K$ and $V$, and this is where we need $A$-elementarity. We can also see that the proof breaks down if we do not assume $A$-elementarity: if the proof were to work without $A$-elementarity, then \autoref{Theorem:SuperReinhardtReflection} would hold even for $j$-formulas. This would imply that $K$ thinks there is a critical point of $j$, which is impossible. One consequence of the reflection of $A$-formulas is that $V$ satisfies Full Separation for $A$-formulas when $A$ is amenable. This is because, for amenable classes, it is relatively straightforward to prove that $K \models \mathsf{Sep}_A$. The proof of the following lemma is similar to that of \autoref{Theorem:CriticalPointModelsIZF}, so we omit it. \begin{lemma}[$\mathsf{CGB}_\infty$] \pushQED{\qed} Let $A$ be an amenable class and $K$ be an $A$-super Reinhardt set. Then $K$ satisfies Full Separation for $A$-formulas. \qedhere \end{lemma} \begin{corollary}[$\mathsf{CGB}_\infty$]\label{Corollary:TRprovesASep} \pushQED{\qed} Assume that there is an $A$-super Reinhardt set for an amenable $A$. Then Full Separation for $A$-formulas holds. In particular, if $V$ is totally Reinhardt, then Full Separation for $A$-formulas holds for all amenable classes $A$. \qedhere \end{corollary} Note that every $A$-super Reinhardt set is super Reinhardt, and hence power inaccessible by \autoref{Prop:SuperReinhardtPowerInaccessible}. In sum, $\mathsf{CGB_\infty+TR}$ proves $\mathsf{IGB_\infty+TR}$ without Class Separation. It is unknown if Class Separation follows from the remaining axioms of $\mathsf{IGB_\infty+TR}$. However, we can still see that $\mathsf{IGB_\infty+TR}$ is interpretable in itself without Class Separation: \begin{theorem}[$\mathsf{CGB}_\infty$] \label{Theorem:CGBTRinterpretsIGBTR} $\mathsf{CGB_\infty+TR}$ interprets $\mathsf{IGB_\infty+TR}$. \end{theorem} \begin{proof} We proved that $\mathsf{CGB_\infty+TR}$ proves every axiom of $\mathsf{IGB_\infty+TR}$ except for Class Separation. We claim that $\mathsf{IGB_\infty+TR}$ is interpreted in itself without Class Separation. Let $\phi$ be a formula of $\mathsf{IGB_\infty+TR}$, and let $\phi^\mathfrak{a}$ be the formula obtained by bounding every second-order quantifier to the collection of all amenable classes. That is, we get $\phi^\mathfrak{a}$ by replacing every $\forall^1 X$ and $\exists^1 X$ occurring in $\phi$ with $\forall^1 X [(\text{$X$ is amenable})\to\cdots]$ and $\exists^1 X [(\text{$X$ is amenable})\land\cdots]$, respectively. By \autoref{Corollary:TRprovesASep}, Class Separation holds for amenable classes: that is, if $A$ is amenable and $\psi(x,p,C)$ is a second-order-quantifier-free formula whose free variables are all expressed, then $\{x\mid \psi(x,p,A)\}$ is also amenable. Thus the $\mathfrak{a}$-interpretation of Class Separation holds. Moreover, it is easy to see that the $\mathfrak{a}$-interpretations of the other axioms of $\mathsf{IGB_\infty+TR}$ are valid.
\end{proof} \section{Heyting-valued interpretation and the double-negation interpretation}\label{Section:HeytingInterpretation} In this section, we will develop tools to analyze the consistency strength of large set axioms over constructive set theories. The main tool we will use is the double-negation translation. In particular, we will rely heavily on Gambino's Heyting-valued interpretation (Chapter 5 of \cite{GambinoPhDThesis} or \cite{Gambino2006}) with the double-negation topology because its presentation is similar to classical Boolean-valued interpretations. Grayson \cite{Grayson1979} and Bell \cite{Bell2014} also introduced a way to interpret classical $\mathsf{ZF}$ from $\mathsf{IZF}$, and their method is much more similar to Boolean-valued models. Despite this, we stick to Gambino's method because it still works over the weaker background theory $\mathsf{CZF}^-$. \subsection{Heyting-valued interpretation of \texorpdfstring{$\mathsf{CZF}^-$}{CZF-}} Forcing is a powerful tool to construct models of set theory. Gambino's definition of a Heyting-valued model (or, alternatively, forcing) opens up a way to produce models of $\mathsf{CZF}^-$. His Heyting-valued model starts from a \emph{formal topology}, which formalizes a poset of open sets with a covering relation: \begin{definition} A \emph{formal topology} is a structure $\mathcal{S}=(S,\le,\vartriangleleft)$ such that $(S,\le)$ is a poset and $\vartriangleleft\subseteq S\times\mathcal{P}(S)$ satisfies the following conditions: \begin{enumerate} \item if $a\in p$, then $a\vartriangleleft p$, \item if $a\le b$ and $b\vartriangleleft p$, then $a\vartriangleleft p$, \item if $a\vartriangleleft p$ and $\forall x\in p (x\vartriangleleft q)$, then $a\vartriangleleft q$, and \item if $a\vartriangleleft p, q$, then $a\vartriangleleft (\downwards p)\cap (\downwards q)$, where $\downwards p = \{b\in S \mid \exists c \in p (b\le c )\}$. \end{enumerate} \end{definition} Intuitively, $S$ describes a basis of a topology, and $\vartriangleleft$ is a covering relation. Then, for each collection of `open sets' $p$, we have the notion of a \emph{nucleus}, $\jmath p$, which is the set of all open sets that are covered by $p$. We can view $\jmath p$ as a `union' of all open sets in $p$, defined by \begin{equation*} \jmath p = \{x\in S \mid x\vartriangleleft p\}. \end{equation*} Then the class $\operatorname{Low}(\mathcal{S})_\jmath$ of all lower subsets\footnote{A subset $p\subseteq S$ is a lower set if $\downwards p = p$.} that are stable under $\jmath$ (i.e., $\jmath p = p$) forms a \emph{set-generated frame}: \begin{definition} A structure $\mathcal{A}=(A,\le,\bigvee,\land,\top,g)$ is a \emph{set-generated frame} if $(A,\le,\bigvee,\land,\top)$ is a complete distributive lattice with generating set $g\subseteq A$, such that for any $a\in A$ the class $g_a=\{x\in g\mid x\le a\}$ is a set and $a=\bigvee g_a$. \end{definition} The reader is reminded that we can endow a set-generated frame with a Heyting algebra structure. For example, we can define $a\to b$ by $a\to b = \bigvee\{x\in g\mid x\land a\le b\}$, $\bot$ by $\bigvee\varnothing$, and $\bigwedge p$ by $\bigvee\{x\in g\mid \forall y\in p (x\le y)\}$.
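As a quick sanity check, let us sketch why the operation $\to$ defined above satisfies the adjunction characterizing Heyting implication, namely that $c\le (a\to b)$ if and only if $c\land a\le b$ for all $a,b,c\in A$. This is a routine verification, not needed in what follows, and it uses only two assumptions: the infinite distributive law of the frame and the generation property $a=\bigvee g_a$:
\begin{align*}
c\le (a\to b) &\implies c\land a \le (a\to b)\land a = \bigvee\{x\land a\mid x\in g,\ x\land a\le b\}\le b, \\
c\land a\le b &\implies \forall x\in g_c\, (x\land a\le c\land a\le b) \implies c=\bigvee g_c\le \bigvee\{x\in g\mid x\land a\le b\}=a\to b.
\end{align*}
The third item of \autoref{Remark:lowersubclassproperties} below is the class-level analogue of this adjunction.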
\begin{proposition}\pushQED{\qed} For every formal topology $\mathcal{S}$, the class $\operatorname{Low}(\mathcal{S})_\jmath$ has a set-generated frame structure under the following definition of relations and operations: \begin{itemize} \item $p\land q=p\cap q$, \item $\bigvee p=\jmath(\bigcup p)$, \item $\top = S$, \item $\le$ as the inclusion relation, and \item $g=\{\{x\}\mid x\in S\}$. \end{itemize} Furthermore, we can make $\operatorname{Low}(\mathcal{S})_\jmath$ a Heyting algebra with the following additional operations: \begin{itemize} \item $p\lor q=\jmath(p\cup q)$, \item $p\to q=\{x\in S\mid x\in p\to x\in q\}$, \item $\bigwedge p=\bigcap p$. \qedhere \end{itemize} \end{proposition} We extend the nucleus $\jmath$ to a lower subclass $P\subseteq S$, which is a subclass of $\mathcal{S}$ satisfying \mbox{$P = \downwards P := \{a\in\mathcal{S} \mid \exists b\in P (a\le b)\}$,} by taking \begin{equation*} JP := \bigcup\{\jmath p \mid p\subseteq P\}. \end{equation*} Then we define Heyting operations for classes as follows: \begin{itemize} \item $P\land Q=P\cap Q$, \item $P\lor Q=J(P\cup Q)$, \item $P\to Q =\{x\in S\mid x\in P\to x\in Q\}$. \end{itemize} For a set-indexed collection of classes $\{P_x\mid x\in I\}$, take $\bigwedge_{x\in I} P_x=\bigcap_{x\in I} P_x$ and $\bigvee_{x\in I} P_x=J\left(\bigcup_{x\in I} P_x\right)$. \\ The following results can be proven by direct calculation and so we omit their proofs: \vbox{ \begin{remark} \label{Remark:lowersubclassproperties} \phantom{a} \begin{itemize} \item If $p$ is a set then $Jp = \jmath p$. \item For any lower subclass $P \subseteq S$, $P \subseteq JP$. \item If $P$, $Q$ and $R$ are subclasses of $S$ which are stable under $J$ then $R \subseteq (P \rightarrow Q)$ if and only if $R \land P \subseteq Q$. \item If $\{ P_x \mid x \in I \}$ is a family of subclasses of $S$ such that for each $x \in I$, $JP_x = P_x$ and $R$ is another subclass of $S$ such that $JR = R$, then $R \subseteq \bigwedge_{x \in I} P_x$ if and only if $R \subseteq P_x$ for each $x \in I$. \end{itemize} \end{remark} } Given $x \in S$ one can consider the class of all $p \subseteq S$ such that $x \in \jmath p$, namely the collection of all covers of $x$. In general, this need not be a set; however, in many cases it can be sufficiently approximated by a set. This is particularly vital for verifying Subset Collection in our eventual model in \autoref{Theorem:CZFPersistence}. \begin{definition} \label{Definition:FormalTopology} A formal topology $(\mathcal{S},\le,\vartriangleleft)$ is said to be \emph{set-presentable} if there is a \emph{set-presentation} \mbox{$R\colon\mathcal{S}\to\mathcal{P}(\mathcal{P}(S))$,} which is a set function satisfying \begin{equation*} a\vartriangleleft p \leftrightarrow \exists u\in R(a) [u\subseteq p] \end{equation*} for all $a\in \mathcal{S}$ and $p\in\mathcal{P}(S)$. \end{definition} Since some readers may not be familiar with the definition of formal topology, we give here a brief informal description of it. We refer the reader to Chapter 4 of \cite{GambinoPhDThesis} or Chapter 15 of \cite{AczelRathjen2010} for a more detailed account. The notion of formal topology stems from an attempt to formulate point-free topology over a predicative system such as Martin-L\"of type theory. Thus we may view $(\mathcal{S}, \le)$ as a collection of open sets. We usually describe open sets by using a subbasis, and sometimes the full topology is more complex than the subbasis.
This issue can particularly arise in the constructive set-theoretic context and, even worse, it could be that while a subbasis is a set, the whole topology generated by the subbasis is a proper class. In that case, we want to have a simpler surrogate for the full topology. This explains why we do not define a formal topology as a $\bigvee$-semilattice. Also, the reader will agree that the covering relation plays a pivotal role in topology and sheaf theory. Hence we build the covering relation $\vartriangleleft$ into the definition of a formal topology. Producing $\operatorname{Low}(\mathcal{S})_\jmath$ from the formal topology corresponds to recovering the full topology from a subbasis. Although we hope that $\mathcal{S}$ will be as simple as possible, the use of a `complex' formal topology is sometimes unavoidable. For example, even the natural double-negation formal topology defined over $\mathcal{P}(1)$, which we will shortly define, is a class unless we have Power Set. Thus we want to define a `small formal topology' separately, and the notion of a set-presented formal topology is exactly for such a purpose. Roughly, the set-presentation $R$ decomposes $a\in\mathcal{S}$ into some collection of `open sets,' and we can track the covering relation by using $R$. \\ The Heyting universe $V^\mathcal{S}$ over $\mathcal{S}$ is defined inductively as follows: $a\in V^\mathcal{S}$ if and only if $a$ is a function from a set-sized subset of $V^{\mathcal{S}}$ to $\operatorname{Low}(\mathcal{S})_\jmath$. For each set $x$, we have the canonical representation $\check{x}\in V^{\mathcal{S}}$ of $x$ recursively defined by $\operatorname{dom} \check{x}=\{\check{y}\mid y\in x\}$ and $\check{x}(\check{y})=\top$. We can now define the Heyting interpretation, $[\mkern-2mu[\phi]\mkern-2mu]\in \operatorname{Low}(\mathcal{S})_\jmath$, with parameters in $V^\mathcal{S}$ as follows: \begin{definition} Let $\phi$ be a formula of first-order set theory and $\vec{a}\in V^\mathcal{S}$. Then we define the Heyting-valued interpretation, $[\mkern-2mu[\phi(\vec{a})]\mkern-2mu]$, as follows: \begin{itemize} \item $[\mkern-2mu[ a= b]\mkern-2mu]=\left(\bigwedge_{x\in\operatorname{dom} a}a(x)\to\bigvee_{y\in\operatorname{dom} b} b(y)\land[\mkern-2mu[ x=y]\mkern-2mu]\right) \land \left(\bigwedge_{y\in\operatorname{dom} b}b(y)\to\bigvee_{x\in\operatorname{dom} a}a(x)\land[\mkern-2mu[ x=y]\mkern-2mu]\right)$, \item $[\mkern-2mu[ a\in b]\mkern-2mu]=\bigvee_{y\in\operatorname{dom} b} b(y)\land[\mkern-2mu[ a=y]\mkern-2mu]$, \item $[\mkern-2mu[\bot]\mkern-2mu]=\bot$, $[\mkern-2mu[ \phi\land \psi]\mkern-2mu]=[\mkern-2mu[\phi]\mkern-2mu]\land [\mkern-2mu[\psi]\mkern-2mu]$, $[\mkern-2mu[ \phi\lor \psi]\mkern-2mu]=[\mkern-2mu[\phi]\mkern-2mu]\lor [\mkern-2mu[\psi]\mkern-2mu]$, $[\mkern-2mu[ \phi\to \psi]\mkern-2mu]=[\mkern-2mu[\phi]\mkern-2mu]\to [\mkern-2mu[\psi]\mkern-2mu]$, and $[\mkern-2mu[ \lnot\phi]\mkern-2mu]=[\mkern-2mu[\phi\to\bot]\mkern-2mu]$, \item $[\mkern-2mu[\forall x\in a\phi(x)]\mkern-2mu]=\bigwedge_{x\in\operatorname{dom} a}a(x)\to [\mkern-2mu[\phi(x)]\mkern-2mu]$ and $[\mkern-2mu[\exists x\in a\phi(x)]\mkern-2mu]=\bigvee_{x\in\operatorname{dom} a}a(x)\land [\mkern-2mu[\phi(x)]\mkern-2mu]$, \item $[\mkern-2mu[\forall x\phi(x)]\mkern-2mu]=\bigwedge_{x\in V^\mathcal{S}}[\mkern-2mu[\phi(x)]\mkern-2mu]$ and $[\mkern-2mu[\exists x\phi(x)]\mkern-2mu]=\bigvee_{x\in V^\mathcal{S}}[\mkern-2mu[\phi(x)]\mkern-2mu]$.
\end{itemize} We write $V^{\mathcal{S}} \models \phi$ when $[\mkern-2mu[ \phi ]\mkern-2mu] = \top$ holds, and in such a case we say that $\phi$ is \emph{valid} in $V^{\mathcal{S}}$. \end{definition} The next result is that the interpretation validates every axiom of $\mathsf{CZF}^-$. Since the proof of this was already done in \cite{Gambino2006}, we will omit most of the proof. However, we will replicate the proof of the validity of Strong Collection because it is more involved and the method will be necessary to show the persistence of BCST-regularity in \autoref{Theorem:BCSTset-preserving}. Let $a\in V^{\mathcal{S}}$ and $R$ be a class. We want to show that the following statement holds: \begin{equation*} [\mkern-2mu[ (R\colon a\rightrightarrows V) \to \exists b (R\colon a\leftrightarrows b)]\mkern-2mu] = \top. \end{equation*} Here $\to$ is translated to a Heyting implication operation, and the focal property of the implication operation is the following: $q\to r=\top$ if and only if for every $p\le q$, we have $p\le r$. Hence we try the following strategy: take any $p\in\operatorname{Low}(\mathcal{S})_\jmath$ such that $p\subseteq [\mkern-2mu[ R \colon a \rightrightarrows V ]\mkern-2mu]$ (that is, $p \leq [\mkern-2mu[ R \colon a \rightrightarrows V ]\mkern-2mu]$ in the inclusion ordering). Then we claim that we can find $b\in V^\mathcal{S}$ such that $p\subseteq [\mkern-2mu[ R\colon a\leftrightarrows b]\mkern-2mu]$. We will, and must, use a form of Strong Collection to prove the validity of Strong Collection over $V^\mathcal{S}$. The role of Strong Collection is to confine the codomain of a class-sized multi-valued function to a set-sized range. Next note that, using the third and fourth items in \autoref{Remark:lowersubclassproperties}, the assumption $p\subseteq [\mkern-2mu[ R\colon a\rightrightarrows V]\mkern-2mu]$ is equivalent to $p\land a(x)\subseteq [\mkern-2mu[ \exists y\ R(x,y)]\mkern-2mu]$ for all $x\in\operatorname{dom} a$. So, using this assumption, we can codify the relation $R$ by using \begin{equation*} P = \{\langle x,y,z\rangle \mid x\in\operatorname{dom} a,\ y\in V^\mathcal{S},\ z\in p\land a(x)\land [\mkern-2mu[ R(x,y)]\mkern-2mu]\}. \end{equation*} Intuitively, $P$ encodes the family of classes \begin{equation*} \{p\land a(x)\land [\mkern-2mu[ R(x,y)]\mkern-2mu] \mid x\in\operatorname{dom} a,\ y\in V^\mathcal{S}\}. \end{equation*} Since $[\mkern-2mu[ R(x,y)]\mkern-2mu]$ could be a proper class, we cannot in general form the collection of all $p\land a(x)\land [\mkern-2mu[ R(x,y)]\mkern-2mu]$; this is why we introduce $P$ to code this family into a single class. We will construct $b$ by searching for an appropriate subset of $P$ and making use of it. We will state the appropriate lemmas as we need them. \begin{lemma}\label{Lemma:StrongCollectionValid-MainLemma} Fix $p\in \operatorname{Low}(\mathcal{S})_\jmath$ and $a\in V^\mathcal{S}$ such that $p\subseteq [\mkern-2mu[ R\colon a\rightrightarrows V]\mkern-2mu]$. Let $P$ be the class we defined before. Then we can find a set $r\subseteq P$ such that $p\land a(x)\subseteq \jmath \{ z\mid \exists y \ \langle x,y,z\rangle\in r\}$ for all $x \in \operatorname{dom} a$. \end{lemma} \begin{proof} Before starting the proof, let us remark that the $b$ we will eventually construct as our witness for Strong Collection will satisfy $\jmath \{ z\mid \exists y \ \langle x,y,z\rangle\in r\}\subseteq [\mkern-2mu[ \exists y\in b\ R(x,y)]\mkern-2mu]$.
Observe that $p\subseteq [\mkern-2mu[ R\colon a\rightrightarrows V]\mkern-2mu]$ is equivalent to $\forall x\in\operatorname{dom} a (p\land a(x)\subseteq \bigvee_{y\in V^\mathcal{S}} [\mkern-2mu[ R(x,y)]\mkern-2mu])$. Hence we have \begin{equation*} p\land a(x)\subseteq \bigvee_{y\in V^\mathcal{S}} p\land a(x)\land [\mkern-2mu[ R(x,y)]\mkern-2mu] = J \left( \bigcup_{y\in V^\mathcal{S}} p\land a(x)\land [\mkern-2mu[ R(x,y)]\mkern-2mu] \right) \end{equation*} for every $x\in\operatorname{dom} a$. We want to have the family of classes $Q_x = \bigcup_{y\in V^\mathcal{S}} p\land a(x)\land [\mkern-2mu[ R(x,y)]\mkern-2mu]$. For this, we define the coding $Q$ of the family $Q_x$ by \begin{equation*} Q = \{\langle x,z\rangle \mid \exists y\in V^\mathcal{S} [\langle x,y,z\rangle \in P]\} \end{equation*} and let $Q_x = \{z\mid\langle x,z\rangle\in Q\}$, which one can easily see satisfies our requirement. Then $p\land a(x)\subseteq JQ_x$. By the definition of $J$, we have the following sublemma, which is Lemma 2.8 of \cite{Gambino2006}: \renewcommand{\qedsymbol}{$\dashv$} \begin{lemma} Let $P$ be a lower subclass of $\mathcal{S}$ and $u\subseteq JP$. Then we can find $v\subseteq P$ such that $u\subseteq \jmath v$. \qed \end{lemma} In sum, we have that for each $x\in \operatorname{dom} a$ there is a $v\subseteq Q_x$ such that $p\land a(x)\subseteq \jmath v$. The following lemma shows we can find such $v$ in a uniform way: \begin{lemma} \label{Lemma:TargetSubsetMVChoice-uniform-overV} Let $a$ be a set, $S \colon a \rightrightarrows V$ a multi-valued class function, and $Q\subseteq a\times V$ a class. For each $x \in a$, let $Q_x := \{ z \mid \langle x, z \rangle \in Q\}$. Moreover, assume that \begin{enumerate} \item for each $x\in a$ there is $u\subseteq Q_x$ such that $S(x,u)$ holds, and \item (Monotone Closure) if $S(x,u)$ holds and $u\subseteq v\subseteq Q_x$, then $S(x,v)$. \end{enumerate} Then there is $f\colon a\to V$ such that $f(x)\subseteq Q_x$ and $S(x,f(x))$ for all $x\in a$. \end{lemma} \begin{proof} Consider the multi-valued function $S'$ of domain $a$ defined by \begin{equation*} S'(x,u) \qquad \text{if and only if} \qquad S(x,u)\text{ and } u\subseteq Q_x. \end{equation*} By mimicking the proof of \autoref{Lemma:SetMV}, we can find a set $g\colon a\rightrightarrows V$ such that $g\subseteq S'$. Now take $f(x)=\bigcup g_x = \bigcup \{u \mid \langle x,u\rangle \in g\}$, which is a set by Union and Replacement. We can see that $f(x) \subseteq Q_x$ for each $x\in a$. By the first clause of our assumptions and monotone closure of $S$, we have $S(x,f(x))$ for all $x\in a$. \end{proof} Let us return to the proof of \autoref{Lemma:StrongCollectionValid-MainLemma}. Consider the relation $S$ defined by \begin{equation*} S(x,u)\quad\text{if and only if}\quad(u\subseteq Q_x\text{ and }p\land a(x)\subseteq \jmath u). \end{equation*} It is easy to see that $S$ satisfies the hypotheses of \autoref{Lemma:TargetSubsetMVChoice-uniform-overV}. Therefore, by \autoref{Lemma:TargetSubsetMVChoice-uniform-overV} applied to $S$, we can find $f\colon \operatorname{dom} a\to \operatorname{Low}(\mathcal{S})_\jmath$ such that $f(x)\subseteq Q_x$ and $p\land a(x)\subseteq \jmath f(x)$. Recall that $f(x)\subseteq Q_x = \bigcup_{y\in V^\mathcal{S}} p\land a(x)\land[\mkern-2mu[ R(x,y)]\mkern-2mu]$.
Hence for each $x\in\operatorname{dom} a$ and $z\in f(x)$ we can find $y\in V^\mathcal{S}$ such that $z\in p\land a(x)\land [\mkern-2mu[ R(x,y)]\mkern-2mu]$. However, the class of such $y$ may not be a set, so we will apply Strong Collection to find a subset of this class uniformly. Now let \begin{equation*} q=\{\langle x,z\rangle\mid x\in\operatorname{dom} a,\ z\in f(x)\}. \end{equation*} Then for each $\langle x,z\rangle \in q$ there is $y$ such that $\langle x,y,z\rangle\in P$. Now consider the multi-valued function $P'\colon q\rightrightarrows V^\mathcal{S}$ defined by \begin{equation*} P'(\langle \langle x,z\rangle,y\rangle) \quad\text{if and only if} \quad P(x,y,z). \end{equation*} By Strong Collection applied to $\mathcal{A}(P')\colon q \rightrightarrows q \times V^\mathcal{S}$, we have $d$ such that $\mathcal{A}(P')\colon q \leftrightarrows d$. By \autoref{Lemma:PrelimAdjuectmentFtn}, $d\subseteq P'$ and $d\colon q \rightrightarrows V^\mathcal{S}$. Define $g(x,z) = \{y \mid \langle \langle x,z\rangle, y\rangle \in d\}$. Then we can see that $y\in g(x,z)$ implies $\langle x,y,z\rangle \in P$. Finally, let \begin{equation*} r = \{\langle x,y,z\rangle \mid \langle x,z\rangle \in q,\ y\in g(x,z)\}. \end{equation*} It is clear that $r$ is a set. We know that $p\land a(x)\subseteq \jmath f(x)$, and $z\in f(x)$ implies $\exists y[y\in g(x,z)]$, so $\exists y [\langle x,y,z\rangle \in r]$. By combining these facts, we have $p\land a(x)\subseteq \jmath \{z\mid \exists y\ \langle x,y,z\rangle \in r\}$. This concludes the proof of \autoref{Lemma:StrongCollectionValid-MainLemma}. \renewcommand{\qedsymbol}{$\square$} \qedhere \end{proof} \begin{proposition}\label{Proposition:StrongCollectionValid} Working over $\mathsf{CZF}^-$, $V^\mathcal{S}$ validates Strong Collection. \end{proposition} \begin{proof} Let $a\in V^\mathcal{S}$ and let $R$ be a class relation. Let $p\in\operatorname{Low}(\mathcal{S})_\jmath$ and assume that $p\subseteq [\mkern-2mu[ R\colon a\rightrightarrows V]\mkern-2mu]$. By \autoref{Lemma:StrongCollectionValid-MainLemma} (and using the previously defined notation), we can fix some $r\subseteq P$ such that \begin{equation*} p\land a(x)\subseteq \jmath \{ z\mid \exists y \ \langle x,y,z\rangle\in r\} \end{equation*} holds for all $x\in\operatorname{dom} a$. Now define $b\in V^\mathcal{S}$ as follows: $\operatorname{dom} b = \{y \mid \exists x\exists z[\langle x,y,z\rangle \in r]\}$, and \mbox{$b(y) = \jmath \{ z \mid \exists x \in \operatorname{dom} a [\langle x,y,z\rangle \in r]\}$.} We claim that $p\subseteq [\mkern-2mu[ R:a\leftrightarrows b ]\mkern-2mu]$. \begin{itemize} \item $p\subseteq [\mkern-2mu[ R:a\rightrightarrows b]\mkern-2mu]$: if $\langle x,y,z\rangle \in r$, then $y\in \operatorname{dom} b$, hence \begin{align*} p\land a(x)& \subseteq \jmath \{ z\mid \exists y \ \langle x,y,z\rangle\in r\} \subseteq \bigvee_{y\in\operatorname{dom} b} \jmath \{z \mid \langle x,y,z\rangle\in r\} \\ &\subseteq \bigvee_{y\in\operatorname{dom} b} \jmath\{z\mid \langle x,y,z\rangle \in r\}\land p\land a(x)\land [\mkern-2mu[ R(x,y)]\mkern-2mu] \\ &\subseteq \bigvee_{y\in\operatorname{dom} b} b(y) \land [\mkern-2mu[ R(x,y)]\mkern-2mu] = [\mkern-2mu[ \exists y\in b \ R(x,y)]\mkern-2mu]. \end{align*} \item $p\subseteq [\mkern-2mu[ R:b\rightrightarrows a]\mkern-2mu]$: note that $\langle x,y,z\rangle\in r$ implies $x\in\operatorname{dom} a$.
Hence \begin{align*} b(y)& = \jmath \{z \mid \exists x \in \operatorname{dom} a [\langle x,y,z\rangle \in r]\} \subseteq \bigvee_{x\in\operatorname{dom} a} a(x)\land [\mkern-2mu[ R(x,y) ]\mkern-2mu] = [\mkern-2mu[\exists x\in a\ R(x,y)]\mkern-2mu]. \qedhere \end{align*} \end{itemize} \end{proof} Hence we have \begin{theorem}\label{Theorem:CZFPersistence} Working over $\mathsf{CZF}^-$, the Heyting-valued model $V^\mathcal{S}$ also satisfies $\mathsf{CZF}^-$. If $\mathcal{S}$ is set-presented and Subset Collection holds, then $V^\mathcal{S}\models \mathsf{CZF}$. Furthermore, if our background theory satisfies Full Separation or Power Set, then so does $V^\mathcal{S}$, respectively. \end{theorem} \begin{proof} The first part of the theorem is shown by Gambino \cite{Gambino2006}, and we have already replicated the proof of the validity of Strong Collection. Hence we omit this part of the proof and concentrate on the preservation of Full Separation and Power Set. For Full Separation, it suffices to see that the proof for Bounded Separation over $V^\mathcal{S}$ also works for Full Separation. It does work, since Full Separation ensures that $[\mkern-2mu[\phi]\mkern-2mu]$ is a set for every formula $\phi$: indeed, $[\mkern-2mu[\forall x \phi(x)]\mkern-2mu] = \{s\in S \mid \forall x (x\in V^\mathcal{S} \to s\in [\mkern-2mu[\phi(x)]\mkern-2mu])\}$ and $[\mkern-2mu[\exists x\phi(x)]\mkern-2mu] = \jmath(\{s\in S\mid \exists x (x\in V^\mathcal{S} \land s\in [\mkern-2mu[\phi(x)]\mkern-2mu])\})$. For Power Set, let $a\in V^\mathcal{S}$. We can show that $\operatorname{Low}(\mathcal{S})_\jmath$ is a set due to Power Set. Thus we have the name $b \in V^{\mathcal{S}}$ defined by $\operatorname{dom} b = {{}^{\operatorname{dom} a}}(\operatorname{Low}(\mathcal{S})_\jmath)$ and $b(c)=\top$. We claim that $b$ witnesses Power Set. Let $c\in V^\mathcal{S}$. We will find $d\in\operatorname{dom} b$ such that \begin{equation*} [\mkern-2mu[ c\subseteq a]\mkern-2mu] \le \bigvee_{d\in\operatorname{dom} b}[\mkern-2mu[ c=d]\mkern-2mu]. \end{equation*} Let $d$ be the name such that $\operatorname{dom} d=\operatorname{dom} a$ and $d(y)=[\mkern-2mu[ y\in c]\mkern-2mu]$. Then $d\in \operatorname{dom} b$. Furthermore, we have \begin{align*} [\mkern-2mu[ c\subseteq a]\mkern-2mu] =\bigwedge_{x\in\operatorname{dom} c} c(x)\to \left(\bigvee_{y\in\operatorname{dom} d} a(y)\land[\mkern-2mu[ x=y]\mkern-2mu] \right) \le \bigwedge_{x\in\operatorname{dom} c} c(x)\to \left(\bigvee_{y\in\operatorname{dom} d} a(y)\land[\mkern-2mu[ x=y]\mkern-2mu]\land c(x) \right) \\ \le \bigwedge_{x\in\operatorname{dom} c} c(x)\to \left(\bigvee_{y\in\operatorname{dom} d} [\mkern-2mu[ x=y]\mkern-2mu]\land [\mkern-2mu[ x\in c]\mkern-2mu] \right) \le \bigwedge_{x\in\operatorname{dom} c} c(x)\to \left(\bigvee_{y\in\operatorname{dom} d} [\mkern-2mu[ x=y]\mkern-2mu]\land [\mkern-2mu[ y\in c]\mkern-2mu] \right) = [\mkern-2mu[ c\subseteq d]\mkern-2mu] \end{align*} and \begin{align*} [\mkern-2mu[ d\subseteq c]\mkern-2mu] = \bigwedge_{x\in \operatorname{dom} a}[\mkern-2mu[ x\in c]\mkern-2mu] \to [\mkern-2mu[ x\in c]\mkern-2mu] = \top. \end{align*} Hence $[\mkern-2mu[ c\subseteq a]\mkern-2mu] \le [\mkern-2mu[ c=d]\mkern-2mu]$. \end{proof} Let us finish this subsection with some constructors, which we will need later.
\begin{definition} For $\mathcal{S}$-names $a$ and $b$, $\mathsf{up}(a,b)$ is defined by $\operatorname{dom}(\mathsf{up}(a,b))=\{a,b\}$ and $(\mathsf{up}(a,b))(x)=\top$. $\mathsf{op}(a,b)$ is the name defined by $\mathsf{op}(a,b)=\mathsf{up}(\mathsf{up}(a,a),\mathsf{up}(a,b))$. \end{definition} $\mathsf{up}(a,b)$ is a canonical name for the unordered pair $\{a,b\}$ over $V^\mathcal{S}$. That is, we can prove that \begin{equation*} [\mkern-2mu[ \forall x\forall y \forall z [z=\mathsf{up}(x,y) \leftrightarrow z=\{x,y\}]]\mkern-2mu] = \top. \end{equation*} Hence the name $\mathsf{op}(a,b)$ is the canonical name for the ordered pair given by $a$ and $b$ over $V^\mathcal{S}$. \subsection{Double-negation formal topology} \label{Subsection:DoubleNegationFormalTopology} Our main tool to determine the consistency strength of the theories in this paper is the Heyting-valued interpretation with the \emph{double-negation formal topology}. \begin{definition} The \emph{double-negation formal topology}, $\Omega$, is the formal topology $(1,=,\vartriangleleft)$, where $x\vartriangleleft p$ if and only if $\lnot\lnot(x\in p)$. \end{definition} Unlike the case of set-sized realizability or set-represented formal topologies, the double-negation topology and the resulting Heyting-valued interpretation need not be absolute between BCST-regular sets or transitive models of $\mathsf{CZF}^-$. This is because, for a transitive model $M$ of $\mathsf{CZF}$, it need not be the case that $\Omega=\Omega^M$ holds. Hence we need a careful analysis of the double-negation formal topology, which is the aim of this subsection. We can see that the class of lower sets, $\operatorname{Low}(\Omega)=\{p\subseteq 1 \mid p= \downwards p\}$, is just the powerclass of 1, $\mathcal{P}(1)$, and the nucleus of $\Omega$ is given by the double complement $p^{\lnot\lnot} = (p^\lnot)^\lnot$, where \begin{equation*} p^\lnot=\{0\mid \lnot(0\in p)\}, \end{equation*} so $p^{\lnot\lnot} = \{0\mid \lnot\lnot (0\in p)\}$. Hence $\operatorname{Low}(\Omega)_\jmath$ is the collection of all \emph{stable} subsets of 1, that is, those sets $p\subseteq 1$ such that $p=p^{\lnot\lnot}$. The main feature of $\Omega$ is that the Heyting-valued interpretation over $\Omega$ forces a law of excluded middle for some class of formulas: \begin{proposition}\label{Proposition:DoubleNegationTopologyForcesLEM} Let $p\in \operatorname{Low}(\Omega) = \mathcal{P}(1)$. Then $(p\cup p^\lnot)^{\lnot\lnot} =1$. As a corollary, if $[\mkern-2mu[ \phi]\mkern-2mu]$ is a set, then $[\mkern-2mu[\phi\lor\lnot\phi]\mkern-2mu]=1$. In particular, \begin{enumerate} \item $[\mkern-2mu[\phi\lor\lnot\phi]\mkern-2mu]=1$ holds for every bounded formula $\phi$, and \item If Full Separation holds, then $[\mkern-2mu[\phi\lor\lnot\phi]\mkern-2mu]=1$ holds for every $\phi$. \end{enumerate} \end{proposition} \begin{proof} $(p\cup p^\lnot)^{\lnot\lnot}=1$ follows from the fact that $\lnot\lnot(\phi\lor\lnot\phi)$ is derivable in intuitionistic logic. Moreover, $[\mkern-2mu[\phi\lor\lnot\phi]\mkern-2mu] = ([\mkern-2mu[\phi]\mkern-2mu] \cup [\mkern-2mu[\phi]\mkern-2mu]^\lnot)^{\lnot\lnot}$ if $[\mkern-2mu[\phi]\mkern-2mu]$ is a set. Lastly, under $\mathsf{CZF}^-$, if $\phi$ is bounded then $[\mkern-2mu[ \phi ]\mkern-2mu]$ is a set, and if Full Separation holds then $[\mkern-2mu[ \phi ]\mkern-2mu]$ is a set for every $\phi$.
\end{proof} The following corollary is immediate from the previous proposition and \autoref{Theorem:CZFPersistence}: \begin{corollary} \label{Corollary:VOmegamodelsZFminus} \pushQED{\qed} If $V$ satisfies $\mathsf{CZF}^-$, then $V^\Omega\models \mathsf{CZF}^-+\Delta_0\text{-}\mathsf{LEM}$. Furthermore, if $V$ satisfies Full Separation, then $V^\Omega\models \mathsf{ZF}^-$. \qedhere \end{corollary} We will frequently mention the relativized Heyting-valued interpretation. For a transitive model $A$ of $\mathsf{CZF}^-$, we can consider the construction of $V^\Omega$ internal to $A$. We define the following notions to distinguish the relativized interpretation from the usual one. \begin{definition}\label{Definition:AOmega} Let $A$ be a transitive model of $\mathsf{CZF}^-$. Then $A^\Omega:=(V^\Omega)^A$ is the $\Omega$-valued universe relativized to $A$. If $A$ is a set, then $\tilde{A}$ denotes the $\Omega$-name defined by $\operatorname{dom} \tilde{A}:=A^\Omega$ and $\tilde{A}(x)=\top$ for all $x\in\operatorname{dom} \tilde{A}$. \end{definition} We shall see in \autoref{Lemma:HeytingUniversePreserving} that whenever $A$ is a set, so is $A^\Omega$, so that $\tilde{A}$ is well defined. Also, we will often identify $\tilde{A}$ and $A^\Omega$ when the context is clear. Finally, it is worth mentioning that if $j$ is an elementary embedding, then $j(\tilde{K})=\widetilde{j(K)}$, so we may write $j^n(\tilde{K})$ instead of $\widetilde{j^n(K)}$. As before with $\Omega$, we do not know whether $\operatorname{Low}(\Omega)_\jmath$ is equal to its relativization $(\operatorname{Low}(\Omega)_\jmath)^M$ in $M$. As a result, we do not know whether its Heyting-valued universe, $V^\Omega$, and Heyting-valued interpretation, $[\mkern-2mu[\cdot]\mkern-2mu]$, are absolute. Fortunately, the formula $p\in\operatorname{Low}(\Omega)_\jmath$, which is $p\subseteq 1\land p=p^{\lnot\lnot}$, is $\Delta_0$. Hence $p\in\operatorname{Low}(\Omega)_\jmath$ is absolute between transitive models of $\mathsf{CZF}^-$. As a result, we have the following absoluteness result on the Heyting-valued universe: \begin{lemma}\label{Lemma:HeytingUniversePreserving} Let $A$ be a transitive model of $\mathsf{CZF}^-$ without Infinity. Then we have $A^\Omega=V^\Omega\cap A$. Moreover, if $A$ is a set, then $A^\Omega$ is also a set. \end{lemma} \begin{proof} We will follow the proof of Lemma 6.1 of \cite{Rathjen2003Realizability}. Let $\Phi$ be the inductive definition given by \begin{equation*} \langle X,a \rangle\in\Phi\iff \text{$a$ is a function such that $\operatorname{dom} a\subseteq X$, $a(x)\subseteq 1$ and $a(x)^{\lnot\lnot}=a(x)$ for all $x\in\operatorname{dom} a$}. \end{equation*} We can see that $\Phi$ defines the class $V^\Omega$. Furthermore, $\Phi$ is $\Delta_0$, so it is absolute between transitive models of $\mathsf{CZF}^-$. By \autoref{Lemma:ItreationClass}, we have a class $J$ such that $V^\Omega = \bigcup_{a\in V} J^a$, and for each $s\in V$, $J^s=\Gamma_\Phi(\bigcup_{t\in s} J^t)$. Now consider the operation $\Upsilon$ given by \begin{equation*} \Upsilon(X):=\{a\in A\mid\exists Y\in A(Y\subseteq X\land \langle Y,a\rangle\in\Phi)\}. \end{equation*} By \autoref{Lemma:ItreationClass} again, there is a class $Y$ such that $Y^s=\Upsilon(\bigcup_{t\in s}Y^t)$ for all $s\in V$. Furthermore, we can see that $Y^s\subseteq V^\Omega$ by induction on $s$. Let $Y=\bigcup_{s\in A}Y^s$. We claim by induction on $s$ that $J^s\cap A\subseteq Y$.
Assume that $J^t\cap A\subseteq Y$ holds for all $t\in s$. If $a\in J^s\cap A$, then the domain of $a$ is a subset of $A\cap \left(\bigcup_{t\in s} J^t\right)$, which is a subclass of $Y$ by the inductive assumption and the transitivity of $A$. Moreover, for each $x\in\operatorname{dom} a$ there is $u \in A$ such that $x\in Y^u$. By Strong Collection over $A$, there is $v\in A$ such that for each $x\in \operatorname{dom} a$ there is $u\in v$ such that $x\in Y^u$. Hence $\operatorname{dom} a\subseteq \bigcup_{u\in v} Y^u$, which implies $a\in Y^v\subseteq Y$. Hence $V^\Omega\cap A\subseteq Y$, which gives us that $Y=V^\Omega\cap A$. We can see that the construction of $Y$ is the relativized construction of $V^\Omega$ to $A$, so $Y=A^\Omega$. Hence $A^\Omega=V^\Omega\cap A$. If $A$ is a set, then $\Upsilon(X)$ is a set for each set $X$, so we can see by induction on $a$ that $Y^a$ is also a set for each $a\in A$. Hence $A^\Omega=Y=\bigcup_{a\in A} Y^a$ is also a set. \end{proof} We extended the nucleus $\jmath$ to $J$ for subclasses of $\operatorname{Low}(\mathcal{S})$, and used it to define the validity of formulas of the forcing language. Now, we are working with the specific formal topology $\mathcal{S}=\Omega$, and in this case, for a class $P\subseteq 1$, $JP = \bigcup\{q^{\lnot\lnot} \mid q\subseteq P\}$. It is easy to see that $P\subseteq JP\subseteq P^{\lnot\lnot}$. We also define the following relativized notion for any transitive class $A$ such that $1\in A$: \begin{equation*} J^AP = \bigcup\{q^{\lnot\lnot} \mid q\subseteq P\text{ and }q\in A\}. \end{equation*} If $P\in A$, then $J^AP=P^{\lnot\lnot}$, and in general, we have $P\subseteq J^AP\subseteq JP\subseteq P^{\lnot\lnot}$. Moreover, we can prove the following facts by straightforward computations. \begin{lemma}\label{Lemma:PropertiesOfRelativizedJ} \pushQED{\qed} Let $A$ and $B$ be transitive classes such that $1\in A,B$ and let $P\subseteq 1$ be a class. \begin{enumerate} \item $A\subseteq B$ implies $J^AP\subseteq J^BP$. \item If $\mathcal{P}(1)\cap A=\mathcal{P}(1)\cap B$, then $J^AP=J^BP$. \qedhere \end{enumerate} \end{lemma} However, the following proposition shows that $\mathsf{CZF}^-$ does not prove $J^AP=J^BP$ or $JP = P^{\lnot\lnot}$ in general: \begin{proposition} \phantom{a} \begin{enumerate} \item If $A\cap \mathcal{P}(1)=2$ (for example when $A=2$, or when $A=V$ and $\Delta_0\text{-}\mathsf{LEM}$ holds), then $J^AP=P$. \item If $P^{\lnot\lnot}\subseteq JP$ for every class $P$, then if $\Delta_0\text{-}\mathsf{LEM}$ holds so does the law of excluded middle for arbitrary formulas. \qed \end{enumerate} \end{proposition} $J^A$ has a crucial role in defining the Heyting-valued interpretation, but $J^A$ and $J^B$ might have different effects unless $A=B$. This causes absoluteness problems, which appear to be impossible to avoid in general. The following lemma states provable facts about relativized Heyting interpretations: \begin{lemma}\label{Lemma:HeytingInterpretationBetweenModels} Let $A\subseteq B$ be transitive models of $\mathsf{CZF}^-$. Assume that $\phi$ is a formula with parameters in $A^\Omega$. \begin{enumerate} \item If $\phi$ is bounded, then $[\mkern-2mu[\phi]\mkern-2mu]^A=[\mkern-2mu[\phi]\mkern-2mu]^B$. \item If $\phi$ only contains bounded quantifications, logical connectives between bounded formulas, unbounded $\forall$, and $\land$, then $[\mkern-2mu[\phi]\mkern-2mu]^A=[\mkern-2mu[\phi^{\tilde{A}}]\mkern-2mu]^B$.
\item If every conditional appearing as a subformula of $\phi$ is of the form $\psi\to\chi$ for a bounded formula $\psi$ and a formula $\chi$, then $[\mkern-2mu[\phi]\mkern-2mu]^A\subseteq [\mkern-2mu[\phi^{\tilde{A}}]\mkern-2mu]^B$. \item If $\mathcal{P}(1)\cap A=\mathcal{P}(1)\cap B$, then $[\mkern-2mu[\phi]\mkern-2mu]^A=[\mkern-2mu[\phi^{\tilde{A}}]\mkern-2mu]^B$. \end{enumerate} \end{lemma} \begin{proof} \phantom{a} \begin{enumerate} \item If $\phi$ is bounded, then $[\mkern-2mu[\phi]\mkern-2mu]$ is defined in terms of double complement, Heyting connectives between subsets of 1, and set-sized union and intersection. These notions are absolute between transitive sets, so we can prove $[\mkern-2mu[\phi]\mkern-2mu]$ is also absolute by induction on $\phi$. In the case of atomic formulas as an initial stage, we apply the induction on $A^\Omega$-names. \item We apply induction on formulas. For the unbounded $\forall$, we have \begin{equation*} [\mkern-2mu[\forall x\phi(x)]\mkern-2mu]^A = \bigwedge_{x\in A^\Omega}[\mkern-2mu[ \phi(x)]\mkern-2mu]^A = \bigwedge_{x\in A^\Omega} [\mkern-2mu[\phi^{\tilde{A}}(x)]\mkern-2mu]^B = [\mkern-2mu[\forall x\in\tilde{A} \phi^{\tilde{A}}(x)]\mkern-2mu]^B. \end{equation*} The remaining clauses follows from the absoluteness argument we used in the proof of the previous item, so we omit them. \item The proof again uses induction on formulas. By the previous argument, if $[\mkern-2mu[\phi(x)]\mkern-2mu]^A\subseteq[\mkern-2mu[\phi^{\tilde{A}}(x)]\mkern-2mu]^B$ for all $x\in A^\Omega$, then $[\mkern-2mu[ \forall x \phi(x)]\mkern-2mu]^A\subseteq [\mkern-2mu[\forall x\in\tilde{A} \phi^{\tilde{A}}(x)]\mkern-2mu]^B$. For an unbounded $\exists$, we have \begin{equation*} [\mkern-2mu[\exists x\phi(x)]\mkern-2mu]^A = J^A \left(\bigcup \{[\mkern-2mu[ \phi(x)]\mkern-2mu]^A\mid x\in A^\Omega\}\right) \subseteq J^B \left(\bigcup \{[\mkern-2mu[ \phi^{\tilde{A}}(x)]\mkern-2mu]^B\mid x\in A^\Omega\}\right) = [\mkern-2mu[\exists x\in \tilde{A} \phi^{\tilde{A}}(x) ]\mkern-2mu]^B. \end{equation*} The remaining cases are straightforward except for $\to$, which requires some inspection to see how the inclusion works. By the assumption, our conditional is of the form $\psi\to\chi$ for some bounded formula $\psi$ and a (possibly unbounded) formula $\chi$. Since $\psi$ is bounded, we have $[\mkern-2mu[\psi]\mkern-2mu]^A=[\mkern-2mu[\psi^{\tilde{A}}]\mkern-2mu]^B$. Furthermore, we can see that if $a, b, c\in\operatorname{Low}(\Omega)_\jmath$ satisfies $a\subseteq b$, then $c\to a\subseteq c\to b$. Hence \begin{equation*} [\mkern-2mu[ \psi\to\chi]\mkern-2mu]^A = \left( [\mkern-2mu[ \psi]\mkern-2mu]^A \to [\mkern-2mu[\chi]\mkern-2mu]^A \right)\subseteq \left( [\mkern-2mu[ \psi^{\tilde{A}}]\mkern-2mu]^B \to [\mkern-2mu[\chi^{\tilde{A}}]\mkern-2mu]^B \right) = [\mkern-2mu[(\psi\to\chi)^{\tilde{A}}]\mkern-2mu]^B. \end{equation*} \item We can see that $[\mkern-2mu[\phi(x)]\mkern-2mu]^A = [\mkern-2mu[\phi^{\tilde{A}}(x)]\mkern-2mu]^B$ holds by induction on $\phi$. The calculations we did before are helpful to see the inductive argument works for unbounded quantifications. In particular, we observe that if $\mathcal{P}(1) \cap A = \mathcal{P}(1) \cap B$ then $J^A = J^B$. \qedhere \end{enumerate} \end{proof} \begin{remark} In the third clause of \autoref{Lemma:HeytingInterpretationBetweenModels}, if $A\in B$, then $\bigcup_{x\in A^\Omega} [\mkern-2mu[ \phi^{\tilde{A}}(x)]\mkern-2mu]^B\in \mathcal{P}(1)\cap B$. 
Thus, in this case, we have $J^B\left(\bigcup_{x\in A^\Omega} [\mkern-2mu[ \phi^{\tilde{A}}(x)]\mkern-2mu]^B\right) = \left(\bigcup_{x\in A^\Omega} [\mkern-2mu[ \phi^{\tilde{A}}(x)]\mkern-2mu]^B\right)^{\lnot\lnot}$. \end{remark} \begin{remark} We can see that \autoref{Lemma:HeytingInterpretationBetweenModels} also holds for arbitrary formal topologies $\mathcal{S}\in A$. The proof is identical, and its verification is left to the reader. \end{remark} \subsection{Preservation of small large sets} \label{Section:PreservationSmallLargeSets} Heyting-valued models do not necessarily preserve large sets unless we impose some additional restrictions. On the one hand, Ziegler proved in \cite{ZieglerPhD} that large set properties are preserved under `small' pcas and formal topologies. \begin{proposition}[Ziegler, \cite{ZieglerPhD}, Chapter 4] \label{Proposition:negationtranslationpreservesinaccessibles} \pushQED{\qed} Let $K$ be either a regular set, inaccessible set, critical set or Reinhardt set. \begin{enumerate} \item Let $\mathcal{A}$ be a pca and $\mathcal{A}\in K$. Then $K[\mathcal{A}]:=V[\mathcal{A}]\cap K$ is a set and the realizability model $V[\mathcal{A}]$ thinks $K[\mathcal{A}]$ possesses the same large set property $K$ does. \item Let $\mathcal{S}$ be a formal topology that is set-represented by $R$. Assume that $\mathcal{S},R\in K$. Then $K^\mathcal{S} := V^\mathcal{S}\cap K$ is a set and $V^\mathcal{S}$ thinks the canonical name $\tilde{K}$ given by $\operatorname{dom}\tilde{K}=K^\mathcal{S}$, $\tilde{K}(a)=\top$ possesses the same large set property $K$ does. \end{enumerate} \end{proposition} On the other hand, our double-negation formal topology will usually lose large set properties. For example, $\mathsf{CZF}^-$ cannot prove that Heyting-valued interpretations under $\Omega$ preserve regular sets, regardless of how many regular sets exist in $V$: if it were possible, then $\mathsf{CZF+REA}$ would interpret $\mathsf{ZF}$, but the latter theory proves the consistency of the former. Hence our preservation results under the double-negation topology are also quite limited, which is a great obstacle for deriving the consistency strength of large sets over constructive set theories. Fortunately, almost all lower bounds we derived in \autoref{Section:StructuralLowerbdd} are models of $\mathsf{IZF}$. Furthermore, we mostly work with power inaccessible sets instead of mere inaccessible sets. This will make deriving the lower bounds easier, and we will give a detailed account of these statements in this subsection. We mostly follow the proofs of \autoref{Lemma:StrongCollectionValid-MainLemma} and \autoref{Proposition:StrongCollectionValid}. However, for the sake of verification, we will provide most of the details of the relevant lemmas and their proofs. Throughout this section, $A$ and $B$ are classes such that \begin{itemize} \item $A\in B$ (thus $A$ is a set), \item $\mathcal{P}(1)\cap A=\mathcal{P}(1)\cap B$, \item $A$ is BCST-regular and $B$ is a transitive (set or class) model of $\mathsf{CZF}^-$. \end{itemize} Finally, $R\in B$ is a set that will be a multi-valued function over the Heyting-valued universe $B^\Omega$ unless otherwise specified. \\ The main goal of this subsection is to prove the following preservation theorem: \begin{theorem}\label{Theorem:BCSTset-preserving} Let $A$ be a BCST-regular set and $B \supseteq A$ a transitive model of $\mathsf{CZF}^-$ such that $\mathcal{P}(1)\cap A = \mathcal{P}(1)\cap B$.
Then $B^\Omega$ thinks $\tilde{A}$ is BCST-regular. \end{theorem} Its proof requires a sequence of lemmas, in a similar way to how we proved \autoref{Proposition:StrongCollectionValid}, the validity of Strong Collection over $V^\mathcal{S}$. The following lemma is an analogue of \autoref{Lemma:TargetSubsetMVChoice-uniform-overV}: \begin{lemma}\label{Lemma:TargetSubsetMVChoice} Let $a\in A$, $S \colon a\rightrightarrows A$ a multi-valued function and $Q\subseteq a\times A$ a class. Moreover, assume that \begin{enumerate} \item For each $x\in a$ there is $u\subseteq Q_x=\{z \mid \langle x,z\rangle\in Q\}$ such that $\langle x,u\rangle\in S$, and \item (Monotone Closure) If $\langle x,u\rangle \in S$ and $u \subseteq v \subseteq Q_x$ then $\langle x,v\rangle\in S$. \end{enumerate} Then there is an $f\in A\cap \prescript{a}{}A$ such that $f(x)\subseteq Q_x$ and $\langle x,f(x)\rangle \in S$ for all $x\in a$. \end{lemma} \begin{proof} As before, consider the multi-valued function $S'$ with domain $a$ defined by \begin{equation*} S'(x,u) \qquad \text{if and only if} \qquad S(x,u)\text{ and } u\subseteq Q_x. \end{equation*} By \autoref{Lemma:SetMV}, there is $g\in A$ such that $g \colon a\rightrightarrows A$ and $g\subseteq S'$. Let $g_x=\{y\mid \langle x,y\rangle \in g\}$; then $\bigcup g_x\subseteq Q_x$. Now take $f(x) = \bigcup g_x$; then $f\in A$ since $A$ satisfies Union and second-order Replacement. Moreover, since $S'$ is monotone closed, we have $\langle x,f(x)\rangle\in S'$ for all $x\in a$. \end{proof} The following lemma is an analogue of \autoref{Lemma:StrongCollectionValid-MainLemma}. The reader is reminded that the proof of the following lemma necessarily uses the assumption $\mathcal{P}(1)\cap A=\mathcal{P}(1)\cap B$. \begin{lemma}\label{Lemma:Preserving-MainLemma} Let $a\in A^\Omega$ and $R\in B$. Fix $p\in A$ such that $p=p^{\lnot\lnot}$ and $p\subseteq[\mkern-2mu[ R \colon a\rightrightarrows \tilde{A}]\mkern-2mu]^B$. If we define $P$ as \begin{equation*} P = \{\langle x,y,z\rangle\in \operatorname{dom} a\times A^\Omega\times 1 \mid z\in (p\land a(x)\land [\mkern-2mu[ \mathsf{op}(x,y) \in R]\mkern-2mu]^B)\}, \end{equation*} then there exists $r\in A$ such that $r\subseteq P$ and \mbox{$p\land a(x)\subseteq \{z\mid \exists y \in A^\Omega\ \langle x,y,z\rangle \in r\}^{\lnot\lnot}$.} \end{lemma} \begin{proof} Again, observe that $p\subseteq [\mkern-2mu[ R \colon a\rightrightarrows \tilde{A}]\mkern-2mu]^B$ is equivalent to \begin{equation*} p\land a(x)\subseteq \sideset{}{^B}\bigvee_{y\in A^\Omega} [\mkern-2mu[ \mathsf{op}(x,y)\in R]\mkern-2mu]^B = J^B \left(\bigcup_{y\in A^\Omega} [\mkern-2mu[ \mathsf{op}(x,y)\in R]\mkern-2mu]^B\right) \end{equation*} for all $x\in \operatorname{dom} a$. Now let us take \begin{equation*} Q=\{\langle x,z\rangle\mid \exists y\in A^\Omega (\langle x,y,z\rangle\in P)\}, \end{equation*} and take $Q_x=\{z\mid \langle x,z\rangle \in Q\}\subseteq 1$. We can easily see that \mbox{$Q_x=\bigcup_{y\in A^\Omega} p\land a(x)\land[\mkern-2mu[\mathsf{op}(x,y)\in R]\mkern-2mu]^B$} holds. Thus we have $p\land a(x)\subseteq J^BQ_x$. Furthermore, $Q_x\in \mathcal{P}(1)\cap B=\mathcal{P}(1)\cap A$, which implies that $J^B Q_x=Q_x^{\lnot\lnot}$.
Now consider the relation $S\subseteq \operatorname{dom} a\times (\mathcal{P}(1)\cap A)$ defined by \begin{equation*} \langle x,u\rangle \in S \quad\text{ if and only if }\quad u\subseteq Q_x\text{ and } p\land a(x)\subseteq u^{\lnot\lnot}. \end{equation*} We want to apply \autoref{Lemma:TargetSubsetMVChoice} to $S$, so we will check that the hypotheses of \autoref{Lemma:TargetSubsetMVChoice} hold. The first condition holds because $\langle x,Q_x\rangle \in S$ for each $x\in\operatorname{dom} a$. Furthermore, this shows that $S$ is a multi-valued function with domain $\operatorname{dom} a$. For the second condition, the relation is monotone closed in $A$ because if $v \subseteq w$ are in $\mathcal{P}(1) \cap A$ then $v^{\lnot\lnot} \subseteq w^{\lnot\lnot}$. Therefore, by \autoref{Lemma:TargetSubsetMVChoice} applied to $S$, we have a function $f\in {^{\operatorname{dom} a}}A\cap A$ such that $p\land a(x)\subseteq f(x)^{\lnot\lnot}$ and $f(x)\subseteq Q_x$ for all $x\in\operatorname{dom} a$. Now let \begin{equation*} q = \{\langle x,z\rangle \mid x\in\operatorname{dom} a\text{ and } z\in f(x)\}. \end{equation*} Then for each $\langle x,z\rangle\in q$ there is $y\in A^\Omega$ such that $\langle x,y,z\rangle \in P$ holds. By \autoref{Lemma:SetMV} applied to $P'\colon q\rightrightarrows A^\Omega$, defined by \begin{equation*} P'(\langle x,z\rangle, y) \qquad \text{if and only if} \qquad P(x,y,z), \end{equation*} there is $r\in A$ such that $r\subseteq P$ and $r \colon q\rightrightarrows A^\Omega$. It is easy to see that $r$ satisfies our desired property. \end{proof} \begin{remark} A technical note on the proof of \autoref{Lemma:Preserving-MainLemma}: there is no need for $P$, $Q$, and $Q_x$ to be definable over $A$ in general. The reason is that we do not know if either $R$ or $[\mkern-2mu[\cdot ]\mkern-2mu]^B$ is accessible from $A$. However, we do not need to worry about this since we are relying on second-order Strong Collection over $A$. Furthermore, the proof of \autoref{Lemma:Preserving-MainLemma} also implicitly uses the assumption that $B$ is a transitive model of $\mathsf{CZF}^-$. The reason is that we made use of Heyting operations relative to $B$, which we formulate over $\mathsf{CZF}^-$. Thus the relativized Heyting operations are available when $B$ satisfies $\mathsf{CZF}^-$. \end{remark} We are now ready to prove the preservation theorem, \autoref{Theorem:BCSTset-preserving}. Its proof is parallel to that of \autoref{Proposition:StrongCollectionValid}. \begin{proof}[Proof of \autoref{Theorem:BCSTset-preserving}] First, observe that since $B$ is a transitive model of $\mathsf{CZF}^-$, both $B^\Omega$ and the Heyting operations over $B$ are well defined. Furthermore, it is easy to see that $B^\Omega$ thinks $\tilde{A}$ is transitive, and closed under Pairing, Union and Binary Intersection. Hence it remains to show that $B^\Omega$ thinks $\tilde{A}$ satisfies second-order Strong Collection, that is, \begin{equation*} [\mkern-2mu[ \forall a\in \tilde{A} \, \forall R [R \colon a\rightrightarrows \tilde{A}\to \exists b\in \tilde{A}(R \colon a\leftrightarrows b)] ]\mkern-2mu]^B = 1. \end{equation*} Take $a\in \operatorname{dom}\tilde{A}$, $R\in B^\Omega$ and $p\in B$ such that $p\subseteq 1$ and $p=p^{\lnot\lnot}$.
We claim that if $p\subseteq [\mkern-2mu[ R \colon a\rightrightarrows \tilde{A}]\mkern-2mu]^B$, then there is $b\in \operatorname{dom}\tilde{A}$ such that $p\subseteq [\mkern-2mu[ R \colon a\leftrightarrows b]\mkern-2mu]^B$. Taking $P$ as in the statement of \autoref{Lemma:Preserving-MainLemma}, we can find, by that lemma, some $r\in A$ such that $r\subseteq P$ and \begin{equation*} p\land a(x)\subseteq \{z\mid \exists y\ \langle x,y,z\rangle\in r\}^{\lnot\lnot}. \end{equation*} Define $b$ such that $\operatorname{dom} b=\{y\mid\exists x,z(\langle x,y,z \rangle\in r)\}$ and \begin{equation*} b(y) =\{z\mid \exists x\ \langle x,y,z\rangle\in r\}^{\lnot\lnot} \end{equation*} for $y\in\operatorname{dom} b$. Note that $b(y)\in A$ since $r$ is. Then we can show that $p\subseteq [\mkern-2mu[ R \colon a\leftrightarrows b]\mkern-2mu]^B$ by following the computation given in the proof of \autoref{Proposition:StrongCollectionValid}. \end{proof} As a corollary, we have \begin{corollary}\label{Corollary:PreservationofPowerInaccessibles} Let $K$ be a power inaccessible set. Then $V^\Omega$ thinks $\tilde{K}$ is power inaccessible. \end{corollary} \begin{proof} Since $\mathcal{P}(1)\in K$, we have $\mathcal{P}(1)\cap K=\mathcal{P}(1)$. Thus \autoref{Theorem:BCSTset-preserving} shows that $V^\Omega$ thinks $\tilde{K}$ is BCST-regular. It remains to show that $V^\Omega$ thinks $\tilde{K}$ is closed under the true power set. We claim that the argument for the preservation of Power Set given in the proof of \autoref{Theorem:CZFPersistence} relativizes to $K$. Work in $V$, and take $a\in K^\Omega$. Since $K$ is power inaccessible, the name $b$ defined by $\operatorname{dom} b =$ \mbox{$\prescript{\operatorname{dom} a}{} \{p^{\lnot\lnot} \mid p\subseteq 1\}$}, $b(c)=1$ is a member of $K$. By the same calculation as in the proof of \autoref{Theorem:CZFPersistence}, we have \mbox{$[\mkern-2mu[\forall c (c\subseteq a\to c\in b)]\mkern-2mu] = 1$.} Hence the desired result holds. \end{proof} \subsection{Interpreting an elementary embedding} In this subsection, we work over $\mathsf{CZF + BTEE}_M$ unless otherwise specified. We will show that elementary embeddings are persistent under the Heyting-valued interpretation over $\Omega$. We will mostly follow Subsection 4.1.6 of Ziegler \cite{ZieglerPhD}, but we need to check that his proof works in our setting, since his applicative topology does not include Heyting algebras generated by formal topologies that are not set-presentable. Furthermore, we are working with a weaker theory than Ziegler assumed. In particular, we do not assume Set Induction, $\Delta_0$-Separation, or Strong Collection for $(j,M)$-formulas, which calls for additional care. In most of our results in the rest of the paper, we will consider the case $M=V$, so considering the target universe, $M$, of $j$ will not be needed for our main results. Nevertheless, to begin with we will work in the more general setting and not assume that $M = V$. We need to define Heyting-valued interpretations for $M$ and $j$ in the forcing language. Since $j$ preserves names, we can interpret $j$ as $j$ itself. We will interpret $M$ by $M^\Omega$, as defined in \autoref{Definition:AOmega}, which was given by following the construction of $V^\Omega$ inside $M$. Thus defining $M^\Omega$ does not require Strong Collection or Set Induction for the language extended by $j$.
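Let us record, as an illustration rather than a new result (it only uses the assumption, stated below, that $M$ is a transitive model of $\mathsf{CZF}^-$ together with facts established above), how the relativized universe sits inside $V^\Omega$: by \autoref{Lemma:HeytingUniversePreserving} applied to $M$,
\begin{equation*}
M^\Omega = (V^\Omega)^M = V^\Omega\cap M.
\end{equation*}
In particular, an $\Omega$-name $a\in V^\Omega$ lies in $M^\Omega$ exactly when $a\in M$; this is the precise sense in which $j$ preserves names, for $j(a)\in M$ and being an $\Omega$-name is absolute between transitive models of $\mathsf{CZF}^-$, so $j(a)\in M^\Omega$.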
The existence of $M^\Omega$ follows from the assumption that $M$ satisfies $\mathsf{CZF}^-$. One possible obstacle to defining the interpretation for the extended language is the non-absoluteness of $\Omega$ and of the resulting Heyting-valued interpretation between transitive models of $\mathsf{CZF}^-$. We discussed in \autoref{Subsection:DoubleNegationFormalTopology} that the Heyting interpretation $[\mkern-2mu[\phi]\mkern-2mu]$ need not be absolute between transitive sets. It would be convenient to have $[\mkern-2mu[\phi]\mkern-2mu]=[\mkern-2mu[\phi]\mkern-2mu]^M$, which follows from $\mathcal{P}(1)=\mathcal{P}(1)\cap M$, which, thankfully, we have. \begin{lemma} For any formula $\phi$ with parameters in $M^\Omega$, we have $[\mkern-2mu[ \phi]\mkern-2mu]^M = [\mkern-2mu[ \phi^{M^\Omega}]\mkern-2mu]$. \end{lemma} \begin{proof} By \autoref{Lemma:PowerSetPreserving}, $\mathcal{P}(1)=\mathcal{P}(1)\cap M$. Hence the conclusion follows from the last clause of \autoref{Lemma:HeytingInterpretationBetweenModels}. \end{proof} Thus we do not need to worry about the absoluteness issue for the Heyting interpretation. Now we are ready to extend our forcing language to $\{\in, j, M\}$. \begin{definition} Define $[\mkern-2mu[\phi]\mkern-2mu]$ for the extended language as follows: \begin{itemize} \item $[\mkern-2mu[ j^m(a)\in j^n(b)]\mkern-2mu]$ and $[\mkern-2mu[ j^m(a)=j^n(b)]\mkern-2mu]$ are defined in the same way as $[\mkern-2mu[ x\in y]\mkern-2mu]$ and $[\mkern-2mu[ x=y]\mkern-2mu]$ were. \item $[\mkern-2mu[ a\in M ]\mkern-2mu] := \bigvee_{x\in M^\Omega}[\mkern-2mu[ a=x]\mkern-2mu]$, \item $[\mkern-2mu[\forall x\in M \phi(x)]\mkern-2mu] := \bigwedge_{x\in M^\Omega} [\mkern-2mu[ \phi(x)]\mkern-2mu]$, and \item $[\mkern-2mu[\exists x\in M \phi(x)]\mkern-2mu] := \bigvee_{x\in M^\Omega} [\mkern-2mu[ \phi(x)]\mkern-2mu]$. \end{itemize} \end{definition} \begin{remark}\label{Remark:InterpretationJFormulas} The reader should be aware that $[\mkern-2mu[\phi(\vec{x})]\mkern-2mu]$ is not in general a set. Provably over $\mathsf{CZF + BTEE}_M$, $[\mkern-2mu[ j^m(a)\in j^n(b)]\mkern-2mu]$ and $[\mkern-2mu[ j^m(a)=j^n(b)]\mkern-2mu]$ are always sets. In general, we can see that if $\phi(\vec{x})$ is a Heyting combination of atomic formulas, then $[\mkern-2mu[ \phi(\vec{x})]\mkern-2mu]$ is a set regardless of what the background theory is. However, taking a bounded quantification could make $[\mkern-2mu[\phi]\mkern-2mu]$ a class that is not provably a set. For example, consider taking a bounded universal quantification over $[\mkern-2mu[ \phi(x,y)]\mkern-2mu]$ for a Heyting combination of atomic $(j,M)$-formulas $\phi$. Then $[\mkern-2mu[\forall x\in a\ \phi(x,y) ]\mkern-2mu] = \bigwedge_{x\in \operatorname{dom} a} a(x)\to [\mkern-2mu[ \phi(x,y)]\mkern-2mu]$. Each $[\mkern-2mu[ \phi(x,y)]\mkern-2mu]$ can be a set, but the family $\{[\mkern-2mu[ \phi(x,y)]\mkern-2mu] \mid x\in\operatorname{dom} a\}$ need not be a set unless we have Strong Collection and $\Delta_0$-Separation for $(j,M)$-formulas. The situation is even worse for bounded existential quantifiers and disjunctions: we took a nucleus $\jmath$ when defining the interpretation of these, so $[\mkern-2mu[\exists x\in a\ \phi(x)]\mkern-2mu] = \bigvee_{x\in\operatorname{dom} a}[\mkern-2mu[ \phi(x)]\mkern-2mu] =\jmath\left( \bigcup_{x\in\operatorname{dom} a}[\mkern-2mu[ \phi(x)]\mkern-2mu]\right)$.
This is ill-defined when the union $\bigcup_{x\in\mathsf{op}eratorname{dom} a}[\mkern-2mu[ \phi(x)]\mkern-2mu]$ is a class rather than a set. Hence we have to use the join operator for classes instead of sets. The trade-off for the new definition of $[\mkern-2mu[\phi]\mkern-2mu]$, in this case, is that we do not know if $[\mkern-2mu[\phi\lor\lnot\phi]\mkern-2mu]=1$ for a $j$-formula $\phi$, unless we can ensure $[\mkern-2mu[\phi]\mkern-2mu]$ is a set. \end{remark} From this definition, we have an analogue of Lemma 4.26 of \cite{ZieglerPhD}, which is useful to check that $j$ is still elementary over $V^\Omega$: \begin{lemma}\label{Lemma:ValuePreserving} For any bounded formula $\phi(\vec{x})$ with all free variables displayed in the language $\in$ (that is, without $j$ and $M$), we have \begin{equation*} [\mkern-2mu[ \phi(\vec{a}) ]\mkern-2mu] = [\mkern-2mu[ \phi^{M^\Omega} (j(\vec{a})) ]\mkern-2mu] = [\mkern-2mu[ \phi (j(\vec{a})) ]\mkern-2mu] \end{equation*} for every $\vec{a}\in V^\Omega$. \end{lemma} \begin{proof} For the first equality, note that $[\mkern-2mu[\phi(\vec{a})]\mkern-2mu]\subseteq 1$. Hence we have \begin{equation*} [\mkern-2mu[ \phi(\vec{a}) ]\mkern-2mu] = j([\mkern-2mu[ \phi(\vec{a}) ]\mkern-2mu]) = [\mkern-2mu[\phi(j(\vec{a}))]\mkern-2mu]^M = [\mkern-2mu[ \phi^{M^\Omega}(j(\vec{a})) ]\mkern-2mu]. \end{equation*} by \autoref{Lemma:PowerSetPreserving}. The second equality will follow from the claim that for any bounded formula $\phi$ and $\vec{b} \in M^\Omega$, $[\mkern-2mu[ \phi(\vec{b})]\mkern-2mu] = [\mkern-2mu[\phi^{M^\Omega}(\vec{b})]\mkern-2mu]$. The proof proceeds by induction on $\phi$: the atomic case and cases for $\land$, $\lor$, and $\to$ are trivial. For bounded $\forall$, observe that if $c,\vec{b}\in M^\Omega$ then $[\mkern-2mu[ c = c \cap M^\Omega]\mkern-2mu] = 1$, so $[\mkern-2mu[ \forall x\in c \ \phi(x,\vec{b}) \leftrightarrow (\forall x\in c \ \phi(x,\vec{b}) )^{M^\Omega}]\mkern-2mu]=1$. This proves $[\mkern-2mu[ \forall x\in c \ \phi(x,\vec{b})]\mkern-2mu] = [\mkern-2mu[ \forall x\in c \ \phi^{M^\Omega}(x,\vec{b})]\mkern-2mu]$. The case for bounded existential quantifiers is similar. \end{proof} Moreover, we can check the following equalities easily: \vbox{ \begin{proposition} \phantom{a} \begin{enumerate} \item $[\mkern-2mu[ \forall x,y (x=y\to j(x)=j(y))]\mkern-2mu]=1$, \item $[\mkern-2mu[ \forall x (j(x)\in M)]\mkern-2mu]=1$, \item $[\mkern-2mu[ \forall x (x\in M\to \forall y\in x(y\in M)) ]\mkern-2mu]=1$. \end{enumerate} \end{proposition} } \begin{proof} The first equality follows from $[\mkern-2mu[ x=y]\mkern-2mu] = [\mkern-2mu[ j(x)=j(y)]\mkern-2mu]$, which holds by the previous lemma, and the remaining two follow from direct calculations. \end{proof} \begin{lemma}\label{Lemma:ElementarityPersistentUnderDoubleNegationTranslation} For every $\vec{a}\in V^\Omega$ and formula $\phi$ that does not contain $j$ or $M$, we have \mbox{$[\mkern-2mu[ \phi(\vec{a})\leftrightarrow \phi^{M^\Omega}(j(\vec{a})) ]\mkern-2mu]=1$.} \end{lemma} \begin{proof} \autoref{Lemma:ValuePreserving} proved that this lemma holds for bounded formulas $\phi$. We will use full induction on $\phi$ to prove $[\mkern-2mu[\phi(\vec{a})]\mkern-2mu]=[\mkern-2mu[\phi^M(j(\vec{a}))]\mkern-2mu]$ for all $\vec{a}\in V^\Omega$. If $\phi$ is $\forall x\psi(x,\vec{a})$, we have $[\mkern-2mu[ \forall x\psi(x,\vec{a})]\mkern-2mu]=\bigwedge_{x\in V^\Omega}[\mkern-2mu[\psi(x,\vec{a})]\mkern-2mu]$. 
Now \begin{align*} 0\in \bigwedge_{x\in V^\Omega}[\mkern-2mu[ \psi(x,\vec{a})]\mkern-2mu] &\iff \forall x\in V^\Omega (0\in [\mkern-2mu[\psi(x,\vec{a})]\mkern-2mu])\\ &\iff \forall x \in (V^\Omega)^M (0\in [\mkern-2mu[\psi(x,j(\vec{a}))]\mkern-2mu]), \end{align*} where the last equivalence follows from applying $j$ to the above formula. Since the last formula is equivalent to $0\in \bigwedge_{x\in M^\Omega} [\mkern-2mu[\psi(x, j(\vec{a}))]\mkern-2mu]$, we have \begin{equation*} \bigwedge_{x\in V^\Omega} [\mkern-2mu[\psi(x, \vec{a})]\mkern-2mu]=\bigwedge_{x\in M^\Omega} [\mkern-2mu[\psi(x, j(\vec{a}))]\mkern-2mu]=[\mkern-2mu[ \forall x\in M \psi^M(x, j(\vec{a}))]\mkern-2mu]. \end{equation*} If $\phi$ is $\exists x\psi(x,\vec{a})$, we have $[\mkern-2mu[ \exists x\psi(x,\vec{a})]\mkern-2mu]=\bigvee_{x\in V^\Omega}[\mkern-2mu[\psi(x,\vec{a})]\mkern-2mu]$. Moreover, \begin{align*} 0\in \bigvee_{x\in V^\Omega}[\mkern-2mu[ \psi(x,\vec{a})]\mkern-2mu] &\iff \exists p\subseteq 1 \left[ p\subseteq \bigcup_{x\in V^\Omega} [\mkern-2mu[\psi(x,\vec{a})]\mkern-2mu] \text{ and } 0\in p^{\lnot\lnot} \right] \\&\iff \exists p\subseteq 1 \left[ p\subseteq \bigcup_{x\in M^\Omega} [\mkern-2mu[\psi^{M^\Omega}(x,j(\vec{a}))]\mkern-2mu] \text{ and } 0\in p^{\lnot\lnot} \right] \end{align*} Hence $0\in \bigvee_{x\in V^\Omega}[\mkern-2mu[ \psi(x,\vec{a})]\mkern-2mu]$ if and only if $0\in \bigvee_{x\in M^\Omega}[\mkern-2mu[ \psi(x,j(\vec{a}))]\mkern-2mu]$ \end{proof} We now work in the extended language $\mathsf{CZF}_{j,M}$. We need to check that $\Delta_0$-Separation and Strong Collection under the extended language are also persistent under the double-negation interpretation. We can see that the proof given by \cite{Gambino2006} and \autoref{Theorem:CZFPersistence} carries over, so we have the following claim: \begin{proposition}\pushQED{\qed} \label{Proposition:InterpretingJFormulaAxioms} If $V$ satisfies any of Set Induction, Strong Collection, $\Delta_0$-Separation or Full Separation for the extended language, then the corresponding axiom for the extended language is valid in $V^\Omega$. \qedhere \end{proposition} The essential property of a critical set is that it is inaccessible. However, inaccessibility is not preserved under Heyting-valued interpretations in general. Fortunately, being a critical point is preserved provided it is regular: \begin{lemma}\label{Lemma:BeingACriticalPointIsPreserved} Let $K$ be a regular set such that $K\in j(K)$ and $j(x)=x$ for all $x\in K$. Then \mbox{$[\mkern-2mu[\tilde{K}\in j(\tilde{K})\land \forall x\in \tilde{K}(j(x)=x)]\mkern-2mu]=1$.} \end{lemma} \begin{proof} Since $(\text{$j(K)$ is inaccessible})^M$, we have $j(K)\models\mathsf{CZF}minus$. Thus, by \autoref{Lemma:HeytingUniversePreserving}, $j(K)^\Omega = j(K)\cap V^\Omega$. By applying the same argument internal to $M$, we have $(j(K)^\Omega)^M = (j(K)\cap V^\Omega)^M = j(K)\cap M^\Omega$. Since $j(K)\cap V^\Omega \subseteq j(K)\subseteq M$, we have $j(K)\cap V^\Omega\subseteq M$. This implies $j(K)\cap V^\Omega = j(K)\cap V^\Omega\cap M = j(K)\cap M^\Omega$. In sum, we have \begin{equation*} j(K)^\Omega = j(K)\cap V^\Omega = j(K)\cap M^\Omega = (j(K)^\Omega)^M, \end{equation*} and these are sets by \autoref{Lemma:HeytingUniversePreserving}. Also, $K\in j(K)$ implies $\tilde{K}\in j(K)$. Since the domain of $j(\tilde{K})=\widetilde{j(K)}$ is $j(K)\cap V^\Omega$, we have $\tilde{K}\in \mathsf{op}eratorname{dom} j(\tilde{K})$, which implies $[\mkern-2mu[\tilde{K}\in j(\tilde{K})]\mkern-2mu]=1$. 
For the assertion $[\mkern-2mu[\forall x\in \tilde{K} (j(x)=x)]\mkern-2mu]=1$, observe that if $x\in \operatorname{dom}\tilde{K}$ then $j(x)=x$, so we have the desired conclusion. \end{proof} \subsection{Consistency strength: intermediate results} By using the results from the previous sections and subsections, we have the following: \begin{theorem}\label{Theorem:InterpretationResults} \phantom{a} \begin{enumerate} \item $\mathsf{IZF}+\mathsf{BTEE}$ interprets $\mathsf{ZF}+\mathsf{BTEE}$. \item $\mathsf{IZF}+\mathsf{BTEE}+\text{Set Induction}_j$ interprets $\mathsf{ZF}+\mathsf{BTEE}+\text{Set Induction\textsubscript{$j$}}$. \item $\mathsf{IZF}+\mathsf{WA}$ interprets $\mathsf{ZF}+\mathsf{WA}$. \end{enumerate} \end{theorem} Hence we have \begin{corollary} \label{Corollary:MainConsistencyResults} \phantom{a} \begin{enumerate} \item $(\mathsf{IKP}_{j,M})$ $\mathsf{IKP}$ with a critical point implies $\mathsf{Con(ZF + BTEE + \text{Set Induction\textsubscript{$j$}})}$. \item $\mathsf{CZF}$ with a Reinhardt set implies $\mathsf{Con(ZF+WA)}$. \end{enumerate} \end{corollary} \begin{proof} Using \autoref{Theorem:IKPSigmaOrdimpliesIZF+BTEE}, if $K$ is a critical point of an embedding $j \colon V \rightarrow M$ then we can find some ordinal $\lambda$ for which $\langle L_\lambda, j \restricts L_\lambda \rangle$ is a model of $\mathsf{IZF + BTEE}+\text{Set Induction}_j$. Therefore the first claim follows from the first claim of \autoref{Theorem:InterpretationResults}. The second claim follows directly from \autoref{Theorem:ReinhardtCriticalPtModelsWA}. \end{proof} By mimicking Bagaria-Koellner-Woodin's argument in \cite{BagariaKoellnerWoodin2019} over $V^\Omega$, we have the following consistency result: \begin{theorem}[$\mathsf{CGB}_\infty$]\label{Theorem:LowerBound-superReinhardt} Let $K$ be a super Reinhardt set. Then $V^\Omega$ satisfies $\mathsf{ZF}$ plus there is a proper class of inaccessible cardinals $\gamma$ such that $(V_\gamma,V_{\gamma+1})\models \mathsf{ZF_2} + \text{there is a Reinhardt cardinal}$. \end{theorem} \begin{proof} The proof will proceed as follows: first, we will show that the background theory interprets some moderate semi-intuitionistic theory. Then we will derive that this semi-intuitionistic theory proves that there is a proper class of inaccessible cardinals $\gamma$ such that $V_\gamma$ is a model of $\mathsf{ZF}$ with a Reinhardt cardinal. So, let $K$ be a super Reinhardt set. For any $a\in V^\Omega$, we can find an amenable elementary embedding $j\colon V\to V$ with critical set $K$ such that $a\in j(K)$. As usual, let $\Lambda = j^{\omega}(K)$. We shall restrict our background theory to its first-order part to facilitate the proof. By \autoref{Prop:SuperReinhardtPowerInaccessible} and \autoref{Corollary:SuperReinhardtmodelsIZFpIEA}, $K$ is power inaccessible and $V$ satisfies $\mathsf{IZF+pIEA}$. Thus by \autoref{Corollary:VOmegamodelsZFminus} and \autoref{Corollary:PreservationofPowerInaccessibles}, $V^\Omega$ interprets the following statements: \begin{itemize} \item The axioms of $\mathsf{ZF}$, and \item There is a proper class of inaccessible cardinals. \end{itemize} In particular, $V^\Omega$ interprets the law of excluded middle for formulas without $j$. However, we do not know if $V^\Omega$ satisfies $\mathsf{ZF}_j$ because there is no reason why $V$ should satisfy the Separation scheme for $j$-formulas. As a result, the double-negation translation does not force the law of excluded middle for $j$-formulas.
Despite this, $V^\Omega$ still believes the following statements are valid due to \autoref{Lemma:ElementarityPersistentUnderDoubleNegationTranslation} and \autoref{Proposition:InterpretingJFormulaAxioms}: \begin{itemize} \item $j$ is amenable and elementary, and \item Collection and Set Induction for $j$-formulas. \end{itemize} Amenability of $j$ needs some justification. Working over $V$, $j$ is amenable by \autoref{Lemma:FunctionalClass-amenable}. As we observed in \autoref{Remark:SuperReinhardtsareReinhardts}, if $j$ is amenable then Separation for $\Delta_0^j$-formulas holds. By \autoref{Proposition:InterpretingJFormulaAxioms}, $\Delta_0$-Separation for $j$-formulas is valid in $V^\Omega$, from which it follows that $V^\Omega$ thinks $j$ is amenable. Now work in $V^\Omega$. Since $V^\Omega$ validates $\mathsf{ZF}$ with a proper class of inaccessible cardinals, we can find the least inaccessible cardinal $\gamma$ such that $\gamma > j^\omega(\kappa)$, where $\kappa=\operatorname{rank} \tilde{K}$. Here $j^\omega(\kappa)$ is well-defined since the sequence $\langle j^n(\kappa)\mid n\in\omega\rangle$ requires Set Induction for $j$-formulas for its definition, and Collection for its supremum to exist. Furthermore, one can see that $V^\Omega$ thinks $\tilde{K}$ is a critical point of $j$ and $\tilde{K}=V_\kappa$. (The latter equality follows from the fact that $\tilde{K}$ is power inaccessible.) From this it follows that $\kappa=\operatorname{crit} j$. Since $\gamma$ is definable from the parameter $j^{\omega}(\kappa)$, which is fixed by $j$, by elementarity we have $\gamma=j(\gamma)$. Moreover, $V^\Omega$ also believes $j\restricts V_\gamma\in V_{\gamma+1}$ and $\operatorname{crit} (j\restricts V_\gamma)=\kappa$. (Amenability of $j$ ensures $j\restricts V_\gamma$ exists.) Hence $V^\Omega$ thinks $(V_\gamma,V_{\gamma+1})$ is a model of $\mathsf{ZF_2}$ with a Reinhardt cardinal. Finally, recall that $a\in j(K)$ and that $a$ was arbitrary. Hence we have proved that for every $a\in V^\Omega$, $V^\Omega$ thinks there is an inaccessible cardinal $\gamma$ such that $(V_\gamma,V_{\gamma+1})$ believes that there is a Reinhardt cardinal $\kappa$ for which $a\in V_{j(\kappa)}$. Hence $V^\Omega$ thinks there is a proper class of such $\gamma$. \end{proof} \begin{remark} \label{Remark:DoubleNegationvsTopology} Most results in this section can be obtained by using Friedman's double-negation translation as defined in \cite{Friedman1973} and \cite{FriedmanScedrov1984}. In some cases, applying Friedman's argument would be simpler: for example, verifying the axioms of $\mathsf{IZF}$ under Friedman's double-negation translation does not involve any lengthy proof, unlike the verification of Strong Collection over Gambino's Heyting universe $V^\mathcal{S}$. However, Friedman's proof of the validity of Collection under this translation heavily relies on Full Separation, an axiom scheme that $\mathsf{CZF}$ does not enjoy. Thus we may ask what the advantages are of using Gambino's Heyting-valued interpretation over Friedman's double-negation interpretation. The main reason is that Gambino's presentation is closer to forcing, which is a technique more familiar to set theorists. Secondly, as an intermediate step, Friedman first interprets a non-extensional set theory. This can be very notationally heavy and difficult to follow when one first tries to understand the arguments.
Finally, most of this work has been done over the weak system of $\mathsf{CZF}$ and it is unknown how to achieve Friedman's interpretation in this system. On the other hand, there may be no advantage to our main results concerning critical sets and Reinhardt sets. This is because in Theorems \ref{Theorem:IKPSigmaOrdimpliesIZF+BTEE} and \ref{Theorem:ReinhardtCriticalPtModelsWA} we obtained a lower bound for their consistency strength in terms of $\mathsf{IZF}$ plus some large set axioms. One could then take Freidman's double-negation translation of this theory to obtain the consistency of $\mathsf{ZF} + \mathsf{BTEE} + \text{Set Induction\textsubscript{$j$}}$ and $\mathsf{ZF + WA}$ for critical sets and Reinhardt sets respectively. However, using Gambino's method we can strengthen this result to preserve our background theory while containing a set model of this classical theory. Namely, we will see that $V^\Omega$ preserves the theory $\mathsf{CZF}minus_j$ and contains, as a set, $\tilde{\Lambda}$ which is a model of the above theory. Since Friedman's interpretation does not work over weak theories, it does not seem to be possible to obtain a similar result with that translation. \end{remark} \begin{theorem} \label{Theorem:CZF+ReinhardtinterpretsZF+WA} Working with the theory $\mathsf{CZF}_{j,M}$ with a critical set, $V^\Omega$ validates $\mathsf{CZF}minus_j + \Delta_0\text{-} \mathsf{LEM}$ with a critical point $\tilde{K}$ of $j$. Furthermore, $\tilde{\Lambda} = j^\Omega(\tilde{K})$ satisfies $\mathsf{ZF} + \mathsf{BTEE}$ plus Set Induction for $j$-formulas. If we strengthen a critical set to a Reinhardt set, then $V^\Omega$ validates $\tilde{\Lambda}\models \mathsf{ZF}+\mathsf{WA}$. If we add Full Separation into the background theory, then $V^\Omega$ satisfies not only $\mathsf{CZF}minus_j + \Delta_0\text{-} \mathsf{LEM}$, but also $\mathsf{ZF}minus$. \end{theorem} \begin{proof} The results in the previous subsections show $V^\Omega$ validates $\mathsf{CZF}minus_j + \Delta_0\text{-} \mathsf{LEM}$, and that $j$ is an elementary embedding from $V^\Omega$ to $M^\Omega$ with a critical point $\tilde{K}$. Since $K$ is BCST-regular, \mbox{$K\in j(K)$}, and $\mathcal{P}(1)\cap K = \mathcal{P}(1)\cap j(K)$, we can apply \autoref{Theorem:BCSTset-preserving} to $K$ and $j(K)$. Hence we have that $[\mkern-2mu[ \tilde{K}\text{ is BCST-regular}]\mkern-2mu]^{j(K)^\Omega}=1$, which implies that $[\mkern-2mu[ \tilde{K}\models \mathsf{CZF}minus]\mkern-2mu]^{j(K)^\Omega}=1$. Observe that the claim $\tilde{K}\models \mathsf{CZF}minus$ is $\Delta_0$, so applying the second clause of \autoref{Lemma:HeytingInterpretationBetweenModels} proves $[\mkern-2mu[\tilde{K}\models \mathsf{CZF}minus]\mkern-2mu]=1$. Now working in $V^\Omega$, the excluded middle for bounded formulas gives us that every transitive set satisfies the full excluded middle. Especially, both $\tilde{K}$ and $\tilde{\Lambda}$ satisfy the full excluded middle. In addition, $\tilde{\Lambda}$ satisfies $\mathsf{IZF}+\mathsf{BTEE}$ plus Set Induction for $j$-formulas by \autoref{Corollary:LambdaModelsIZFBTEEInd}. Thus we have the desired result. The case for a Reinhardt set is analogous to the previous one, except that we apply \autoref{Theorem:ReinhardtCriticalPtModelsWA} to show that $\tilde{\Lambda}$ validates $\mathsf{IZF}+\mathsf{WA}$. 
Finally, if we have Full Separation, then $V^\Omega$ also satisfies Full Separation, which completes the proof, since the combination of Full Separation and $\Delta_0\text{-}\mathsf{LEM}$ implies the full excluded middle. \end{proof} We will end this section by observing that the translation preserves the cofinality of an elementary embedding over a moderate extension of $\mathsf{CZF}$. None of the previous analysis has required either Full Separation or Subset Collection in the background universe. On the other hand, the following proof, which we include for completeness, requires either Full Separation or $\mathsf{REA}$. \begin{lemma}[$\mathsf{CZF}_{j}$]\label{Lemma:CofinalityPreserved} Assume either Full Separation or $\mathsf{REA}$. If $j \colon V\prec V$ is a cofinal elementary embedding, then $V^\Omega$ thinks $j$ is cofinal. \end{lemma} \begin{proof} Let $a\in V^\Omega$. Then there is a set $X$ such that $a\in j(X)$. If we assume Full Separation, then $X\cap V^\Omega$ is a set, and $j(X\cap V^\Omega)=j(X)\cap V^\Omega$. Let $b$ be a name such that $\operatorname{dom} b= X\cap V^\Omega$ and $b(y)=1$ for all $y\in\operatorname{dom} b$. Then $[\mkern-2mu[ a\in j(b) ]\mkern-2mu]=\top$. An additional step is required if we assume $\mathsf{REA}$ instead: take a set $X$ such that $a\in j(X)$. By $\mathsf{REA}$, we can find a regular set $Y$ such that $X\in Y$. By \autoref{Lemma:HeytingUniversePreserving}, $Y^\Omega=Y\cap V^\Omega$ is a set. The remaining argument is then identical to the previous one. \end{proof} While \autoref{Lemma:CofinalityPreserved} does not directly suggest anything about consistency strength, when combined with \autoref{Theorem:CZF+ReinhardtinterpretsZF+WA} it does tell us that $\mathsf{CZF+Sep}$ with a Reinhardt set interprets $\mathsf{ZF}^-$ with the existence of a non-trivial cofinal embedding \mbox{$j\colon V\to V$.} As we pointed out after \autoref{Proposition:CZFReinhardtEmbeddingCofinal}, over $\mathsf{ZFC}^-$ with the $\mathsf{DC}_\mu$-schemes for all cardinals $\mu$ there cannot be a non-trivial cofinal embedding $j\colon V\to V$. The $\mathsf{DC}_\mu$-schemes are a variant of the Axiom of Choice, and adding these schemes to $\mathsf{ZF}^-$ does not bolster its consistency strength. In fact, $\mathsf{ZF}^-$ proves that $L$ satisfies the $\mathsf{DC}_\mu$-scheme for every cardinal $\mu$ in $L$. Thus the absence of a cofinal embedding over $\mathsf{ZFC}^-$ with the $\mathsf{DC}_\mu$-schemes for any cardinal $\mu$ can be seen as a variant of the Kunen inconsistency phenomenon. However, it is unclear how we can use the above results to obtain a stronger bound for the consistency strength of $\mathsf{CZF}$ with a Reinhardt set. It is known that if one extends $\mathsf{CZF}$ by the \emph{Relation Reflection Scheme} ($\mathsf{RRS}$), as defined by Aczel \cite{Aczel2008}, then this is also persistent under Gambino's Heyting-valued interpretation, and $\mathsf{ZFC}^-$ proves that $\mathsf{RRS}$ is equivalent to the $\mathsf{DC}$-Scheme. It is further possible that one might be able to generalize such a scheme to $\mathsf{DC}_\mu$ for larger cardinals; however, this does not appear to be sufficient to derive an inconsistency due to the heavy use of Well-Ordering in the proof of inconsistency in \cite{Matthews2020}.
Alternatively, it is proven in \cite{Matthews2020} that we can remove the assumption of the Dependent Choice Schemes if we instead require that $V_{\operatorname{crit} j} \in V$ (in fact, the main reason one assumes the $\mathsf{DC}_\mu$-scheme for every cardinal $\mu$ is to prove this). The issue is that we do not know whether $V^\Omega$ believes that $V_{\operatorname{crit} j} \in V$, even if we assume $V$ satisfies Full Separation or $\mathsf{REA}$. If we were to assume that there was a Reinhardt set which was Power Inaccessible, then this would be the case; however, we can only obtain that $\Lambda$ believes that $K$ is Power Inaccessible, which is insufficient to derive the required result. \section{Double-negation translation of second-order set theories}\label{Section:DNT-SOST} In this section, we will provide a double-negation translation of $\mathsf{IGB}$ and $\mathsf{TR}$. One technical issue is that the statement of $\mathsf{TR}$ requires an infinite conjunction. However, we know that the full elementarity of $j \colon V\to V$ is definable in a classical context, namely $\Sigma_1$-elementarity, and that this can be codified into a single formula by using the partial truth predicate for $\Sigma_1$-formulas. Thus we will not try to interpret infinite connectives. \subsection{Interpreting \texorpdfstring{$\mathsf{GB}$}{GB} from \texorpdfstring{$\mathsf{IGB}$}{IGB}} \begin{definition} A class $A$ is an \emph{$\Omega$-class name}, or simply a \emph{class name}, if: \begin{itemize} \item The elements of $A$ are of the form $\langle a, p\rangle$, where $a\in V^\Omega$, $p\subseteq 1$ and $p^{\lnot\lnot} = p$, \item It is \emph{functional} in the sense that $\langle a,p\rangle, \langle a,q\rangle\in A$ implies $p=q$. \end{itemize} If $A$ is a class name, define $\operatorname{dom} A := \{a \mid \exists p \ \langle a,p\rangle\in A\}$ and $A(a)=p$ for the unique $p$ such that $\langle a,p\rangle\in A$. \end{definition} Then we can extend $[\mkern-2mu[\phi ]\mkern-2mu]$ to atomic second-order formulas as follows: \begin{definition} Let $a\in V^\Omega$ and $A$, $B$ be class names. Define $[\mkern-2mu[ a\in A]\mkern-2mu] := \bigvee_{x\in\operatorname{dom} A}A(x)\land [\mkern-2mu[ x=a]\mkern-2mu]$ and \begin{equation*} [\mkern-2mu[ A=B]\mkern-2mu] :=\left( \bigwedge_{x\in\operatorname{dom} A} A(x) \to\bigvee_{y\in\operatorname{dom} B} B(y)\land [\mkern-2mu[ x=y]\mkern-2mu] \right) \land \left( \bigwedge_{y\in\operatorname{dom} B} B(y) \to\bigvee_{x\in\operatorname{dom} A} A(x)\land [\mkern-2mu[ x=y]\mkern-2mu] \right). \end{equation*} \end{definition} Based on the above definition, we can extend our interpretation to any formula $\phi$ with no second-order quantifiers. Now we want to extend the double-negation interpretation to formulas which have second-order quantifiers, but this calls into question how this should be formulated in our second-order context. To illustrate the problem, observe that we take a class meet and join to interpret unbounded first-order quantifiers, and the resulting $[\mkern-2mu[\phi]\mkern-2mu]$ is a class rather than a set. Thus we may suspect that interpreting second-order quantifiers results in $[\mkern-2mu[\phi]\mkern-2mu]$ being a \emph{hyperclass}, a collection of classes, which is clearly not an object of $\mathsf{CGB}$ or $\mathsf{IGB}$.
This situation is analogous to the one encountered in \autoref{Remark:InterpretationJFormulas}, where $[\mkern-2mu[\phi]\mkern-2mu]$ for a $j$-formula $\phi$ may not be a set. This was because the definitions of $\lor$ and $\exists$ for $j$-formulas used $J$ instead of $\jmath$ and we only know that $J([\mkern-2mu[\phi]\mkern-2mu]\cup\lnot[\mkern-2mu[\phi]\mkern-2mu])=1$ is true when $[\mkern-2mu[\phi]\mkern-2mu]$ is a set. We may suspect the same issue happens for second-order formulas: we may need to extend $\jmath$ to some function $\mathcal{J}$ which is defined for hyperclasses, as $J$ serves as an extension of $\jmath$, to define $\lor$ and $\exists$ for second-order formulas. However, even if we have a proper definition of $\mathcal{J}$, there is no reason to believe that $\mathcal{J}([\mkern-2mu[\phi]\mkern-2mu]\cup\lnot[\mkern-2mu[\phi]\mkern-2mu])=1$ should hold for a second-order formula $\phi$. The situation would be better if we could ensure $[\mkern-2mu[\phi]\mkern-2mu]$ is a set. We know that if we additionally assume Full Separation then, for $\phi$ a first-order formula, $[\mkern-2mu[\phi]\mkern-2mu]$ is a set. Analogously, we may suspect that we can prove $[\mkern-2mu[\phi]\mkern-2mu]$ is a set when $\phi$ is a second-order formula, if we have Full Separation and the \emph{Full Class Comprehension scheme}, which is the assertion that $\{x\mid\phi(x)\}$ is a class for any class formula $\phi$. However, adding this axiom to $\mathsf{GBC}$ results in Kelley-Morse theory, $\mathsf{KM}$, which would considerably increase the strength of the underlying theory we wish to work with. Because we do not have a good framework for working with hyperclasses we will circumvent this issue in another way, by combining Gambino's interpretation with Friedman's double-negation translation. We will take Gambino's interpretation for formulas with no second-order quantifiers, and extend it using the same translation as in Friedman's double-negation interpretation. \begin{definition} For a second-order formula $\phi$ with set parameters from $V^\Omega$ and $\Omega$-class name parameters, define $\phi^-$ inductively as follows: \begin{itemize} \item If $\phi$ is an atomic formula, then $\phi^- \equiv ([\mkern-2mu[\phi]\mkern-2mu] = 1)$, \item $(\phi\land\psi)^- \equiv \phi^-\land \psi^-$, \item $(\phi\lor\psi)^- \equiv \lnot\lnot(\phi^-\lor \psi^-)$, \item $(\phi\to\psi)^- \equiv \phi^-\to \psi^-$, \item $(\forall^0 x \phi(x))^- \equiv \forall^0 x\in V^\Omega\ \phi^-(x)$, \item $(\exists^0 x \phi(x))^- \equiv \lnot\lnot\exists^0 x\in V^\Omega\ \phi^-(x)$, \item $(\forall^1 X \phi(X))^- \equiv \forall^1 X ((\text{$X$ is a class name})\to \phi^-(X))$, \item $(\exists^1 X \phi(X))^- \equiv \lnot\lnot \exists^1 X ((\text{$X$ is a class name})\land \phi^-(X))$. \end{itemize} \end{definition} \begin{lemma}[$\mathsf{IGB}$]\label{Lemma:TwoInterpretationsCoindice} If $\phi$ is a formula with no second-order quantifiers, then $\phi^-\equiv ([\mkern-2mu[\phi]\mkern-2mu]=1)$. \end{lemma} \begin{proof} The proof proceeds by induction on $\phi$. The atomic cases follow by definition. For conjunction, \begin{align*} (\phi\land \psi)^- \equiv \phi^- \land \psi^- \iff [\mkern-2mu[\phi]\mkern-2mu] = 1\text{ and } [\mkern-2mu[\psi]\mkern-2mu] = 1 \iff [\mkern-2mu[\phi ]\mkern-2mu]\cap [\mkern-2mu[\psi]\mkern-2mu] = 1 \end{align*} and we can see that $[\mkern-2mu[\phi]\mkern-2mu]\cap[\mkern-2mu[\psi]\mkern-2mu] = [\mkern-2mu[\phi\land\psi]\mkern-2mu]$ by the definition of our translation.
The cases for bounded and unbounded $\forall$ and implications are analogous. We need some care for the cases of disjunction and $\exists$; in particular, we will need to make use of both Full Separation and Powerset. We examine the case for unbounded $\exists$, to see why we need these two axioms. We know that $[\mkern-2mu[\exists x \phi(x)]\mkern-2mu] = \bigvee_{x\in V^\Omega} [\mkern-2mu[ \phi(x)]\mkern-2mu]$, and this is $J\left(\bigcup_{x\in V^\Omega} [\mkern-2mu[ \phi(x)]\mkern-2mu]\right)$. By Full Separation, $\bigcup_{x\in V^\Omega} [\mkern-2mu[ \phi(x)]\mkern-2mu] = \{0\mid \exists x\in V^\Omega [\mkern-2mu[ \phi(x)]\mkern-2mu]=1\}$ is a set. Furthermore, Powerset proves $Jp=p^{\lnot\lnot}$ for all $p\subseteq 1$. Hence $[\mkern-2mu[\exists x\phi(x)]\mkern-2mu] = \left(\bigcup_{x\in V^\Omega} [\mkern-2mu[ \phi(x)]\mkern-2mu]\right)^{\lnot\lnot}$, so \begin{align*} (\exists x\phi(x))^- & \equiv {\lnot\lnot}\exists x\in V^\Omega \phi^-(x) \iff {\lnot\lnot}\exists x\in V^\Omega([\mkern-2mu[\phi(x)]\mkern-2mu] =1) \\ & \iff 0 \in \{ 0 \mid {\lnot\lnot} ( \exists x \in V^\Omega ( [\mkern-2mu[ \phi(x) ]\mkern-2mu] = 1)) \} \iff \left(\bigcup_{x\in V^\Omega} [\mkern-2mu[ \phi(x)]\mkern-2mu]\right)^{\lnot\lnot} = 1. \end{align*} The cases for bounded existential quantifiers and disjunctions are analogous, so we omit them. \end{proof} \begin{lemma}\label{Lemma:ClassEqualityRespectsLogicalFormulas} Let $A$ and $B$ be class names and $\phi(X)$ be a formula of second-order set theory. Then \begin{equation*} ( [\mkern-2mu[ A=B]\mkern-2mu]=1 \land \phi^-(A) ) \to \phi^-(B). \end{equation*} \end{lemma} \begin{proof} The proof proceeds by induction on $\phi$, and one can see that the only non-trivial part of this is the atomic case where $\phi(X)$ is $x\in X$. To do this, we show that $[\mkern-2mu[ A=B]\mkern-2mu] \land [\mkern-2mu[ a\in A]\mkern-2mu] \le [\mkern-2mu[ a\in B]\mkern-2mu]$: \begin{align*} [\mkern-2mu[ A=B]\mkern-2mu] \land [\mkern-2mu[ a\in A]\mkern-2mu] &\le \left( \bigwedge_{x\in\operatorname{dom} A} A(x) \to\bigvee_{y\in\operatorname{dom} B} B(y)\land [\mkern-2mu[ x=y]\mkern-2mu] \right) \land\left(\bigvee_{x\in\operatorname{dom} A}A(x)\land [\mkern-2mu[ x=a]\mkern-2mu]\right)\\ & \le \bigvee_{x\in\operatorname{dom} A} \left( \left(\bigvee_{y\in\operatorname{dom} B} B(y)\land [\mkern-2mu[ x=y]\mkern-2mu] \right) \land [\mkern-2mu[ x=a]\mkern-2mu] \right)\\ & \le \bigvee_{y\in\operatorname{dom} B} B(y)\land [\mkern-2mu[ y=a]\mkern-2mu] = [\mkern-2mu[ a\in B]\mkern-2mu]. \end{align*} Therefore, if both $[\mkern-2mu[ A=B]\mkern-2mu]=1$ and $[\mkern-2mu[ a\in A]\mkern-2mu]=1$ then $[\mkern-2mu[ a \in B ]\mkern-2mu] = 1$. \end{proof} \begin{theorem}[$\mathsf{IGB}$] Working over $\mathsf{IGB}$, every axiom of $\mathsf{GB}$ is valid in $V^\Omega$. \end{theorem} \begin{proof} By \autoref{Theorem:CZFPersistence} and \autoref{Lemma:TwoInterpretationsCoindice}, we can see that the first-order part of $\mathsf{IGB}$ is valid. Moreover, $(\phi\lor\lnot\phi)^-$ is valid since it is equivalent to $\lnot\lnot(\phi^-\lor\lnot\phi^-)$, which is constructively valid. It remains to show that the second-order part of $\mathsf{IGB}$ is valid under the interpretation. \begin{itemize} \item Class Extensionality: follows from \autoref{Lemma:ClassEqualityRespectsLogicalFormulas}. \item Elementary Comprehension: Let $a\in V^\Omega$, $A$ be a class name, and $\phi$ be a formula without class quantifiers.
Then $[\mkern-2mu[\phi(x,a,A)]\mkern-2mu]$ is well-defined for $x\in V^\Omega$. Now consider $B$ to be the class name \begin{equation*} B = \{\langle x,[\mkern-2mu[\phi(x,a,A)]\mkern-2mu]\rangle \mid x\in V^\Omega\}. \end{equation*} Furthermore, we can easily see that $[\mkern-2mu[ x\in B\leftrightarrow \phi(x,a,A)]\mkern-2mu]=1$. Hence $B$ witnesses Elementary Comprehension for $\phi(x,a,A)$. \item Class Set Induction: the usual argument for first-order Set Induction carries over. \item Class Strong Collection: we can see that the proof of \autoref{Proposition:StrongCollectionValid} works if we replace $R(x,y)$ with $\langle x,y\rangle \in R$. \end{itemize} We do not need to check that the double-negation interpretation also validates Class Separation since $\mathsf{GB}$ without Class Separation proves Class Separation: it follows from Class Replacement, and the proof is similar to the derivation of Separation from Replacement over classical set theory. \end{proof} \subsection{Interpreting \texorpdfstring{$\mathsf{TR}$}{TR}} This subsection is devoted to the following result: \begin{theorem}\label{Theorem:IGBTRinterprets} If $\mathsf{GB+TR}\vdash\phi$, then $\mathsf{IGB_\infty+TR}\vdash\phi^-$. \end{theorem} As mentioned before, we will dismiss infinite connectives from the interpretation. The main reason is that in $\mathsf{GB}$ every $\Sigma^A_1$-elementary embedding is a fully $A$-elementary embedding by essentially \autoref{Lemma:DeltaA0elementarityandfullelementarity} (2). \begin{proof}[Proof of \autoref{Theorem:IGBTRinterprets}] It suffices to show that $\mathsf{IGB_{\infty}+TR}$ proves $(\mathsf{TR})^-$. Let $A$ be a class name and $a\in V^\Omega$. We claim that there is a class name $\tilde{\jmath}$ such that \begin{equation*} [\mkern-2mu[ \tilde{\jmath}\text{ is $A$-elementary}\land a\in \tilde{\jmath}({\tilde{K}})\land {\tilde{K}} \text{ is a critical point of } {\tilde{\jmath}} ]\mkern-2mu] = 1. \end{equation*} By $\mathsf{TR}$, we can find some elementary embedding $j \colon V\to V$ and an inaccessible set $K$ such that $j$ is $A$-elementary, $a\in j(K)$ and $K$ is a critical point of $j$. Define $\tilde{\jmath} = \{\langle \mathsf{op}(x,j(x)),1\rangle \mid x\in V^\Omega\}$. It is easy to see that $[\mkern-2mu[\tilde{\jmath}\text{ is a function}]\mkern-2mu]=1$, and, by \autoref{Lemma:BeingACriticalPointIsPreserved}, $[\mkern-2mu[ \tilde{K}\text{ is a critical point of }\tilde{\jmath}]\mkern-2mu]=1$. Furthermore, $[\mkern-2mu[ a\in\tilde{\jmath}(\tilde{K})]\mkern-2mu]=1$ is equivalent to $[\mkern-2mu[ a\in j(\tilde{K})]\mkern-2mu]=1$, the latter of which follows from $a\in j(K)$. It remains to show that $V^\Omega$ thinks $\tilde{\jmath}$ is an $A$-elementary embedding. For this, it suffices to show that $\tilde{\jmath}$ is $\Sigma^A$-elementary by \autoref{Lemma:DeltaA0elementarityandfullelementarity}. Let $\psi(x)$ be a $\Sigma^A$-formula. Then \begin{equation*} {\big[\mkern-4mu\big[}\forall \vec{x} {\big(} \psi(\vec{x}) \leftrightarrow\psi(j(\vec{x})) {\big)} {\big]\mkern-4mu\big]} =1 \end{equation*} is equivalent to \begin{equation*} \forall \vec{x} \in V^\Omega \big( [\mkern-2mu[ \psi(\vec{x})]\mkern-2mu] = 1 \leftrightarrow [\mkern-2mu[ \psi(j(\vec{x}))]\mkern-2mu] = 1 \big). \end{equation*} Finally, since $[\mkern-2mu[ \psi(\vec{x}) ]\mkern-2mu] = 1$ is expressible in $V$ using a $\Sigma^A$-formula (without any other class parameters), the above formula is immediate from the $A$-elementarity of $j$.
\end{proof} Combining the above analysis with \autoref{Theorem:CGBTRinterpretsIGBTR}, the following corollary is immediate. \begin{corollary}\pushQED{\qed} \label{Corollary:CGBTRinterpretsGBTR} $\mathsf{CGB_\infty+TR}$ interprets $\mathsf{GB+TR}$. \qedhere \end{corollary} \begin{remark} Some readers may wonder whether we need full elementarity in the formulation of $\mathsf{TR}$, because the proof of \autoref{Theorem:IGBTRinterprets} would work if $j$ preserves formulas of some bounded complexity, probably $\Sigma^A$-formulas. It is correct that what the proof of \autoref{Theorem:IGBTRinterprets} actually shows is that $\mathsf{IGB_{\infty}}$ with a statement weaker than $\mathsf{TR}$ interprets $\mathsf{GB+TR}$. However, the full power of $\mathsf{TR}$ was necessary in \autoref{Theorem:CGBTRinterpretsIGBTR} to derive $\mathsf{IGB_{\infty}+TR}$ from $\mathsf{CGB_{\infty}+TR}$. \end{remark} \section{Consistency strength: final results} \label{Section:ConsistencyStrength:Final} We have proven in \autoref{Corollary:MainConsistencyResults} that $\mathsf{IKP}$ with a critical point implies the consistency of $\mathsf{ZF} + \mathsf{BTEE} + \text{Set Induction\textsubscript{$j$}}$ while $\mathsf{CZF}$ with a Reinhardt set implies the consistency of $\mathsf{ZF + WA}$. However, one may ask how strong these notions are with respect to the traditional large cardinal hierarchy over $\mathsf{ZFC}$, which is a question we address here. Let us examine the theory $\mathsf{ZF} + \mathsf{BTEE} + \text{Set Induction\textsubscript{$j$}}$ first. We can see that $\mathsf{ZF} + \mathsf{BTEE} + \text{Set Induction\textsubscript{$j$}}$ proves $L$ satisfies the same theory. Hence we have the consistency of $\mathsf{ZFC} + \mathsf{BTEE} + \text{Set Induction\textsubscript{$j$}}$ from that of $\mathsf{ZF} + \mathsf{BTEE} + \text{Set Induction\textsubscript{$j$}}$. Let us compare the consistency strength of $\mathsf{ZFC} + \mathsf{BTEE} + \text{Set Induction\textsubscript{$j$}}$ with that of other large cardinal axioms to illustrate how strong it is. We can see that \emph{virtually rank-into-rank cardinals}, defined by Gitman and Schindler \cite{GitmanSchindler2018}, provide an upper bound: \begin{definition}[\cite{GitmanSchindler2018}] A cardinal $\kappa$ is \emph{virtually rank-into-rank} if in some set-forcing extension it is the critical point of an elementary embedding $j\colon V_\lambda\to V_\lambda$ for some $\lambda > \kappa$. \end{definition} \begin{lemma} Let $\kappa$ be a virtually rank-into-rank cardinal and $\lambda$ witness this. If $j\colon V_\lambda\to V_\lambda$ is an elementary embedding over a set-generic extension with $\operatorname{crit} j=\kappa$, then, in this extension, $(V_{j^\omega(\kappa)},\in, j)$ satisfies $\mathsf{ZFC+BTEE} + \text{Set Induction\textsubscript{$j$}}$. \end{lemma} \begin{proof} First, $\kappa$ is inaccessible in $V$. This follows from Theorem 4.20 of \cite{GitmanSchindler2018} and known facts about $\omega$-iterable cardinals, but we shall also give a direct proof of it. Suppose, for a contradiction, that $V$ thinks $\kappa$ is singular. Then there is a cofinal sequence $\langle \alpha_\xi\mid\xi<\operatorname{cf}\kappa \rangle \in V_\lambda$ that converges to $\kappa$. Since $j(\operatorname{cf}\kappa)=\operatorname{cf}\kappa$ and $j(\alpha_\xi)=\alpha_\xi$ for all $\xi<\operatorname{cf}\kappa$, $j(\kappa)=\kappa$, which gives our contradiction.
Similarly, if $\kappa$ is not a strong limit cardinal in $V$, then there is $\xi<\kappa$ and a surjection $f:\mathcal{P}^{V}(\xi)\to \kappa$ in $V$. Then $f\in V_\lambda$ since $\operatorname{rank} f\le \kappa+3<j^3(\kappa)<\lambda$. (This follows from the fact that the $j^n(\kappa)$ for $n<\omega$ form a strictly increasing sequence.) Now we can derive a contradiction in the usual way by considering $j(f)$. Hence $V_\kappa$ is a model of $\mathsf{ZFC}$. Also, we can see in the extension that $V_\kappa\prec V_{j^n(\kappa)}$ for all $n<\omega$, which shows that $V_{j^\omega(\kappa)}=\bigcup_{n<\omega} V_{j^n(\kappa)}$ is a model of $\mathsf{ZFC}$. Finally, $j\restriction V_{j^\omega(\kappa)}\colon V_{j^\omega(\kappa)}\to V_{j^\omega(\kappa)}$ and, by the transitivity of $V_{j^\omega(\kappa)}$, $V_{j^\omega(\kappa)}$ satisfies \text{Set Induction\textsubscript{$j$}}. \end{proof} As a lower bound, Corazza \cite{Corazza2006} proved that $\mathsf{ZFC+BTEE}$ proves there is an $n$-ineffable cardinal for each (meta-)natural number $n$. The authors do not know the exact consistency strength of $\mathsf{ZF+WA}$ (the assumption of $\Sigma^j$-Induction is useful to ensure that the critical sequence is total) in the $\mathsf{ZFC}$-context, but we can still find a lower bound for it. Suppose that $\kappa$ is the critical point of such an embedding. Then we have that the critical sequence $\langle j^n(\kappa) \mid n \in \omega \rangle$ is definable, although it may not be a set (see Proposition 3.2 of \cite{Corazza2000} for details). From this, we have: \begin{lemma}[$\mathsf{ZF+WA_0}$] If the critical sequence is cofinal over the class of all ordinals, then $\kappa$ is extendible. \end{lemma} \begin{proof} Let $\eta$ be an ordinal. Take $n$ such that $\eta<j^n(\kappa)$; then $j^n \colon V_{\kappa+\eta}\prec V_{j^n(\kappa+\eta)}$ and $\operatorname{crit} j^n=\kappa$. Hence $\kappa$ satisfies $\eta$-extendibility. \end{proof} However, it should be noted that it is also possible that the critical sequence $\langle j^n(\kappa)\mid n<\omega \rangle$ is bounded. In this case $V_\lambda$, for $\lambda = \sup_{n<\omega} j^n(\kappa)$, is a model of $\mathsf{ZF+WA}$ in which the critical sequence is cofinal. Thus we can proceed with the argument by cutting off the universe at $\lambda$. By an easy reflection argument, we can also see that $\mathsf{ZF+WA_0}$, with the critical sequence cofinal, proves not only that there is an extendible cardinal, but also the consistency of $\mathsf{ZF}$ with a proper class of extendible cardinals, an extendible limit of extendible cardinals, and much more. Since extendible cardinals are preserved by Woodin's forcing (Theorem 226 of \cite{Woodin2010}), we have a lower bound for the consistency strength of $\mathsf{ZF+WA_0}$, e.g., $\mathsf{ZFC}$ plus there is a proper class of extendible cardinals. Clearly, $\mathsf{ZF + WA_0}$ should be much stronger than this, but finding a better lower bound would involve even more sophisticated machinery than is currently available. To summarize the consistency bounds derived in this section, we have the following: \vbox{ \begin{corollary} \phantom{a} \begin{itemize} \item $\mathsf{IKP}_{j,M}$ with a $\Sigma$-$\mathrm{Ord}$-inary elementary embedding or $\mathsf{CZF}$ with a critical set implies the consistency of $\mathsf{ZFC+BTEE} + \text{Set Induction\textsubscript{$j$}}$.
Furthermore, $\mathsf{ZFC+BTEE} + \text{Set Induction\textsubscript{$j$}}$ proves that the critical point, $\kappa$, of $j$ is $n$-ineffable for every (meta-)natural $n$. \item $\mathsf{CZF}$ with a Reinhardt set implies the consistency of $\mathsf{ZF+WA}$. Furthermore, the consistency of $\mathsf{ZF}+\mathsf{WA}$ implies the consistency of $\mathsf{ZFC}$ with a proper class of extendible cardinals. \end{itemize} \end{corollary} } \section{Future works and Questions}\label{Section:RemarkQuestions} \subsection{Upper bounds for large large set axioms} We may wonder how to find an upper bound for the consistency strength of $\mathsf{CZF}$ with very large set axioms in terms of classical set theories. The authors believe that the currently known methods do not suffice to provide non-trivial upper bounds for the proof-theoretic strength of $\mathsf{CZF}$ with very large set axioms. The known methods for analyzing the strength of $\mathsf{CZF}$ and its extensions are the following: \begin{enumerate} \item Reducing $\mathsf{CZF}$ or its extension to Martin-L\"of type theory or its extension, and constructing a model of the type theory in a classical theory such as $\mathsf{KP}$ or its extensions. This is how Rathjen (cf.\ \cite{Rathjen1993}, \cite{Rathjen2005Brouwer}, \cite{Rathjen2014Omniscience}, \cite{Rathjen2017}) establishes the relative proof-theoretic strength of extensions of $\mathsf{CZF}$. \item More generally, the sets-as-trees interpretation or its variants: for example, Lubarsky \cite{Lubarsky2006SOA} proved that we can reduce $\mathsf{CZF+Sep}$ into Second-order Arithmetic by combining realizability with a sets-as-trees interpretation. We may associate Lubarsky's construction with Rathjen's interpretations because the sets-as-types interpretation is a special case of the sets-as-trees interpretation. (Types have tree-like structures.) Another construction on that line is functional realizability, which we define below for the sake of completeness: \begin{definition} Let $\mathcal{A}$ be a pca and $V(\mathcal{A})$ be the realizability universe. (See Definition 3.1 of \cite{Rathjen2003Realizability} or Definition 2.3.1 of \cite{McCartyPhD}.) A name $a\in V(\mathcal{A})$ is \emph{functional} if for every $\langle e,b\rangle, \langle e, c\rangle \in a$, $b=c$. $V^f(\mathcal{A})$ is the class of all functional names $a\in V(\mathcal{A})$. The realizability relation $\Vdash^f$ over $V^f(\mathcal{A})$ is identical with the usual realizability relation $\Vdash$ over $V(\mathcal{A})$ (see Definition 4.1 of \cite{Rathjen2003Realizability}), except that we restrict quantifiers to $V^f(\mathcal{A})$ instead of $V(\mathcal{A})$. We say that $V^f(\mathcal{A})\models \phi(\vec{a})$ if there is an $e\in\mathcal{A}$ such that $e\Vdash^f \phi(\vec{a})$. \end{definition} \item Set realizability, which appears in \cite{Rathjen2012Power} and \cite{Rathjen2012Existence}. This exploits the computational nature of sets to construct an interpretation. Unfortunately, set realizability does not result in models of $\mathsf{CZF}$: it produces an interpretation of $\mathsf{CZF}^-$, $\mathsf{CZF}^-+\mathsf{Exp}$, or $\mathsf{CZF+Pow}$ into $\mathsf{IKP}$, $\mathsf{IKP}(\mathcal{E})$, or $\mathsf{IKP}(\mathcal{P})$ respectively. Furthermore, set realizability can be used to prove the existence property of $\mathsf{CZF}^-$, $\mathsf{CZF}^-+\mathsf{Exp}$, or $\mathsf{CZF+Pow}$.
However, Swan proved in \cite{Swan2014} that $\mathsf{CZF}$ does not have the existence property, which suggests set realizability cannot model $\mathsf{CZF}$. \end{enumerate} Thus the only currently known way to analyze the consistency strength of $\mathsf{CZF}$ and its extensions is by combining realizability with sets-as-trees interpretations. However, this has significant issues when we try to generalize the method to large large cardinals. The first point is that almost all of the currently known methods (possibly except for \cite{Rathjen2005Brouwer} and \cite{Swan2014}) rely on Kleene's first pca to construct an interpretation of variants of Martin-L\"of type theory into classical theories. The upshot is that the resulting interpretation of $\mathsf{CZF}$ also validates \emph{the Axiom of Subcountability}, which claims that every set is subcountable. However, Ziegler \cite{ZieglerPhD} observed that the Axiom of Subcountability is incompatible with critical sets. This feature may simply be due to the fact that the pca we are using in the construction of the interpretation is countable, so we may avoid this issue by using a larger pca. However, it seems that there is no obvious way of constructing a realizer for $\forall\vec{x}\,(\phi^M(j(\vec{x}))\to \phi(\vec{x}))$ regardless of how large the pca $\mathcal{A}$ is. Since we can view sets-as-types interpretations as a special case of sets-as-trees interpretations, we can expect a similar difficulty when we construct a model of $\mathsf{CZF}$ with large large set axioms by using the combination of realizability models and type-theoretic interpretations of $\mathsf{CZF}$. \begin{question} Can we provide any non-trivial upper bound for the consistency strength of $\mathsf{CZF}$ with a critical set or a Reinhardt set? \end{question} \subsection{Improving lower bounds} Our current lower bound could also be improved. For example, the proofs of \autoref{Corollary:LambdaModelsIZFBTEEInd}, \autoref{Theorem:IKPSigmaOrdimpliesIZF+BTEE}, and \autoref{Theorem:ReinhardtCriticalPtModelsWA} each produce a set model of the relevant theory. Hence the resulting lower bound for the consistency strength is strict. This brings into question whether we can provide a better lower bound for the given theories, possibly by constructing a class model of $\mathsf{IZF}$ with a large set axiom. Since obtaining a lower bound heavily relies on double-negation translations, it would be important to develop the relationship between double-negation translations and large set axioms. For example, Avigad \cite{Avigad2000} provided a way to interpret $\mathsf{KP}$ from $\mathsf{IKP}$ by combining Friedman's double-negation interpretation \cite{Friedman1973} and a proof-theoretic forcing. It may be possible to extend Avigad's interpretation to $\mathsf{IKP}$ with a $\Sigma$-$\mathrm{Ord}$-inary elementary embedding with a critical point, which could result in a better lower bound (or possibly an equiconsistency result). The `classical' side of the lower bounds should also be improved. For example, we showed that $\mathsf{CZF}$ with a Reinhardt set implies the consistency of $\mathsf{ZF+WA}$, and the latter implies the consistency of $\mathsf{ZFC}$ with a proper class of extendible cardinals. The authors do not know if it is possible to prove the consistency of $\mathsf{ZFC+WA}$ from that of $\mathsf{ZF+WA}$.
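For orientation, the chain of consistency implications discussed in this subsection can be summarized as follows (this display merely restates results obtained above and claims nothing new):
\begin{equation*}
\operatorname{Con}(\mathsf{CZF}+\text{a Reinhardt set}) \Longrightarrow \operatorname{Con}(\mathsf{ZF+WA}) \Longrightarrow \operatorname{Con}(\mathsf{ZFC}+\text{a proper class of extendible cardinals}),
\end{equation*}
and the question below asks whether either link of this chain can be sharpened.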
We conclude this subsection with the following obvious question: \begin{question} Can we obtain a better lower bound for the consistency of any of the theories analyzed in this paper? \end{question} \subsection{Developing technical tools} Some concepts in this paper are of independent interest. For example, we defined second-order constructive set theories $\mathsf{CGB}$ and $\mathsf{IGB}$ to handle super Reinhardt sets and $\mathsf{TR}$. However, constructive second-order set theories bring their own questions, associated with their classical counterparts. The following questions are untouched in this paper: \begin{question}\phantom{a} \begin{enumerate} \item Williams \cite{WilliamsPhD} defined and analyzed second-order set-theoretic principles that bolster second-order set theory, including the \emph{Class Collection schema}, the \emph{Elementary Transfinite Recursion schema} $\mathsf{ETR}$, and its restrictions $\mathsf{ETR}_\Gamma$. Can we define constructive analogues of these principles? If so, what are their consistency strengths? For example, does $\mathsf{CGB+ETR_\omega}$ prove the existence of the truth predicate of first-order set theory (and hence prove $\mathsf{Con}(\mathsf{CZF})$)? \item Do constructive second-order set theories admit the unrolling construction that was introduced by Williams \cite{WilliamsPhD}? \item Can we develop a realizability model or Heyting-valued model for constructive second-order theories? It is known that not every class forcing preserves Collection and Powerset over the classical $\mathsf{GB}$. We may expect that we need some restriction on a class realizability or a class formal topology to ensure they preserve $\mathsf{CGB}$. \end{enumerate} \end{question} The relativization of Heyting-valued models is also an interesting topic. We only focused on the double-negation topology, and it seems that the formal topologies appearing in the current literature are either set-presentable or the double-negation topology. However, it is plausible that another formal topology might appear in the future, and its interaction with different inner models could be non-trivial. In that case, absoluteness and relativization issues become important. \subsection{Other large set axioms} In this paper, we analyzed critical sets, Reinhardt sets, and some constructive analogues of choiceless large cardinals. We did not provide definitions or analysis for analogues of other large cardinal axioms, such as supercompactness or hugeness. These concepts were first defined over $\mathsf{IZF}$ by Friedman and \v{S}\v{c}edrov in \cite{FriedmanScedrov1984}, in such a way as to be equivalent to their classical counterparts. However, for the sake of completeness, we include variations below that work better in our weaker context of $\mathsf{CZF}$. \begin{convention} Assume that $K$ is a critical point of an elementary embedding $j\colon V\to M$. Over $\mathsf{ZFC}$, many large cardinals above measurable cardinals are defined as critical points of some elementary embedding with additional properties, for example, closure properties of $M$. We shall give constructive analogues of the main methods used to define closure, where $\mathcal{MP}$ is Ziegler's modified powerclass operator and $\hat{V}_a$ is Ziegler's modified hierarchy. (See Section 5.3 of \cite{ZieglerPhD} for the details.)
\end{convention} \begin{itemize} \item Replace the \emph{closure under $<\gamma$-sequences}, $\prescript{< \gamma}{} M\subseteq M$, with \emph{closure under multi-valued functions whose domain is an element of $\hat{V}_a$}: if $b\in \hat{V}_a$ and $R\colon b\rightrightarrows M$ is a multi-valued function\footnote{It is unclear whether we need to restrict $R$ to set-sized multi-valued functions. Future research should analyze the difference between these two.}, then there is $c\in M$ such that $R\colon b\leftrightarrows c$. \item Replace the \emph{closure under $\gamma$-sequences}, $\prescript{\gamma}{} M\subseteq M$, with \emph{closure under multi-valued functions whose domain is an element of $\hat{V}_{a\cup\{a\}}=\hat{V}_a\cup \mathcal{MP}(\hat{V}_a)$}. \item Replace $V_\alpha \subseteq M$ with $\hat{V}_\alpha\subseteq M$. \end{itemize} The reader might wonder why we use multi-valued functions with domain in $\hat{V}_a$ and $\hat{V}_{a\cup\{a\}}$. For example, we may formulate a constructive analogue of the closure under $\gamma$-sequences $\prescript{\gamma}{} M\subseteq M$ as the closure under multi-valued functions whose domain is $a$. There is a reason why we should allow multi-valued functions with domain in $a\cup\{a\}$: for a transitive set $a$, the closure under multi-valued functions with domain in $a\cup\{a\}$ proves $a\in M$. However, it is unclear if this can be achieved from just those multi-valued functions whose domain is in $a$. It still remains a question why we use $\hat{V}_{a\cup\{a\}}$ instead of $a\cup\{a\}$. The main reason is that $\hat{V}_{a\cup\{a\}}$ includes $a\cup\{a\}$ but has a much richer set-theoretic structure, which is the same reason we have worked with large sets rather than large cardinals. This will mean that our definitions will be equivalent to those presented in \cite{FriedmanScedrov1984} over $\mathsf{IZF}$. Observe that, over $\mathsf{ZFC}$, these modified definitions are equivalent to the standard ones. Thus, for example, we can define supercompact sets or strong sets\footnote{This is an analogue of strong cardinals. Unfortunately, this terminology overlaps with 2-strong sets (see \cite{Rathjen1998} or \cite{ZieglerPhD} for the definition of 2-strongness).} as follows: \begin{definition}[$\mathsf{CGB}_\infty$] \label{Definition:Supercompact,Huge,Strong} \phantom{a} \begin{enumerate} \item Let $a$ be a transitive set. A set $K$ is \emph{$a$-supercompact} if there is an elementary embedding $j\colon V\to M$ such that $K$ is a critical point of $j$ and $M$ is closed under multi-valued functions whose domain is in $\hat{V}_{a\cup\{a\}}$. A set $K$ is \emph{supercompact} if $K$ is $a$-supercompact for all transitive sets $a$. \item A set $K$ is \emph{$n$-huge} if there is an elementary embedding $j\colon V\to M$ such that $K$ is a critical point of $j$ and $M$ is closed under multi-valued functions whose domain is in $\hat{V}_{j^n(K)\cup \{j^n(K)\}}$. \item A set $K$ is \emph{$\alpha$-strong} if there is an elementary embedding $j\colon V\to M$ such that $K$ is a critical point of $j$ and $\hat{V}_\alpha\subseteq M$. A set $K$ is \emph{strong} if $K$ is $\alpha$-strong for all $\alpha$. Here it suffices to restrict our attention to ordinals rather than defining $a$-strongness for arbitrary sets since $\hat{V}_a = \hat{V}_{\operatorname{rank}(a)}$. \end{enumerate} \end{definition} The reader is reminded that we may formulate $a$-supercompactness or $\alpha$-strongness over $\mathsf{CZF}_{j,M}$.
However, formulating the full supercompactness and strongness would require us to quantify over $j$ and $M$, so these should be stated over $\mathsf{CGB}_\infty$. As a remark, let us mention that Friedman and \v{S}\v{c}edrov \cite{FriedmanScedrov1984} also defined hugeness and supercompactness over $\mathsf{IZF}$. A set $K$ is \emph{huge in the sense of Friedman-\v{S}\v{c}edrov} if $K$ is a power inaccessible critical point of an elementary embedding $j\colon V\to M$ such that the following statement holds: for any subset $u$ of $j(K)$, if $t\colon u\rightrightarrows M$ then we can find some $v\in M$ such that $t\colon u\leftrightarrows v$.\footnote{Their definition is adjusted for the intensional $\mathsf{IZF}$, but we work with Extensionality. Also, they required $v$ only to satisfy $t\colon u\rightrightarrows v$, but their definition is `incorrect' in the sense that their formulation does not imply the closure of $M$ under sequences over $\mathsf{ZFC}$. The reason is that we do not know whether $t$ is amenable over $M$ in general.} The following proposition shows that our definition of hugeness and that of Friedman-\v{S}\v{c}edrov coincide in some sense: \begin{proposition}[$\mathsf{IGB}_\infty$] A transitive set $K$ is huge in the sense of Friedman-\v{S}\v{c}edrov if and only if $K$ is power inaccessible and huge in the sense of \autoref{Definition:Supercompact,Huge,Strong}. \end{proposition} \vbox{ \begin{proof} Assume that $K$ is huge in the sense of Friedman-\v{S}\v{c}edrov. It is known that $\hat{V}_K=K$ if $K$ is inaccessible (see Theorem 5.18 of \cite{ZieglerPhD}). Furthermore, power inaccessibility implies $\mathcal{P}(1)\in K$, so $\mathcal{MP}(K)=\mathcal{P}(K)$. Since transitivity implies $K\subseteq \mathcal{P}(K)$, we have $\hat{V}_K\cup \mathcal{MP}(K)=\mathcal{P}(K)$. Thus we have \begin{equation*} \forall u\in \hat{V}_K\cup \mathcal{MP}(K)\ \forall t\, [t\colon u\rightrightarrows M \to \exists v\in M\ (t\colon u\leftrightarrows v)]. \end{equation*} Hence $K$ is huge. The remaining direction is trivial. \end{proof} } The case for supercompactness is tricky because Friedman and \v{S}\v{c}edrov employed a family of elementary embeddings instead of working over a second-order set theory to formulate it. We leave examining the difference between these two definitions of supercompact sets to possible future work. These new notions of large set axioms bring the following question: \begin{question} Can we provide any consistency result for the large set axioms we can define by following the above schemes? Can we define an $\mathsf{IKP}$-analogue of such very large cardinals? \end{question} \overfullrule=0pt \printbibliography \appendix \section{Tables for notions appearing in this paper} \begin{figure} \caption{Large set notions appearing in this paper} \label{Table1item-1} \end{figure} \begin{figure} \caption{Theories appearing in this paper} \label{Table2Item-1} \end{figure} \end{document}
\begin{document} \bibliographystyle{IEEEtran} \title{The Asymptotic Behavior of Grassmannian Codes} \author{{Simon R. Blackburn} and {Tuvi Etzion,~\IEEEmembership{Fellow,~IEEE}} \thanks{S. Blackburn is with the Department of Mathematics, Royal Holloway, University of London, Egham, Surrey TW20 0EX, United Kingdom. (email: [email protected]).} \thanks{T. Etzion is with the Department of Computer Science, Technion --- Israel Institute of Technology, Haifa 32000, Israel. (email: [email protected]).} \thanks{This work was supported in part by the Israeli Science Foundation (ISF), Jerusalem, Israel, under Grant 230/08.} } \maketitle \begin{abstract} The iterated Johnson bound is the best known upper bound on the size of an error-correcting code in the Grassmannian ${\cal G}_q(n,k)$. The iterated Sch\"{o}nheim bound is the best known lower bound on the size of a covering code in ${\cal G}_q(n,k)$. We use probabilistic methods to prove that both bounds are asymptotically attained for fixed $k$ and fixed radius, as $n$ approaches infinity. We also determine the asymptotics of the size of the best Grassmannian codes and covering codes when $n-k$ and the radius are fixed, as $n$ approaches infinity. \end{abstract} \begin{keywords} Covering bound, Grassmannian, hypergraph, packing bound, constant dimension code. \end{keywords} \section{Introduction} \label{sec:introduction} \PARstart{L}{et} $\field{F}_q$ be the finite field of order $q$ and let $n$ and $k$ be integers such that $0 \leq k \leq n$. The \emph{Grassmannian} ${\cal G}_q(n,k)$ is the set of all $k$-dimensional subspaces of $\field{F}_q^n$. We have that $$ | {\cal G}_q (n,k)| = \qnom{n}{k} \deff \frac{(q^n-1)(q^{n-1}-1) \cdots (q^{n-k+1}-1)}{(q^k-1)(q^{k-1}-1) \cdots (q-1)}~, $$ where $\qnom{n}{k}$ is the $q$-\emph{ary Gaussian binomial coefficient}. A~natural measure of distance in ${\cal G}_q(n,k)$ is the \emph{subspace metric}~\cite{AAK01,KoKs08} given by \[ d_S (U,V) \deff 2k - 2 \dim (U \cap V) \] for $U,V \in {\cal G}_q(n,k)$. We say that $\field{C} \subseteq {\cal G}_q(n,k)$ is an \emph{$(n,M,d,k)_q$ code in the Grassmann space} if $|\field{C}|=M$ and $d_S (U,V) \geq d$ for all distinct $U, V \in \field{C}$. Such a code~$\field{C}$ is also called a constant dimension code. The subspaces in~$\field{C}$ are called \emph{codewords}. (Note that the distance between any pair of elements of ${\cal G}_q(n,k)$ is even. Because of this, some authors define the distance between subspaces $U$ and $V$ as $\frac{1}{2}d_S(U,V)$.) An important observation is the following: a code $\field{C}$ in the Grassmann space ${\cal G}_q(n,k)$ has minimum distance $2\delta+2$ or more if and only if each subspace in ${\cal G}_q(n,k-\delta)$ is contained in at most one codeword. There is a `dual' notion to a Grassmannian code, known as a $q$-covering design: we say that $\field{C} \subseteq {\cal G}_q(n,k)$ is a \emph{$q$-covering design} $\field{C}_q(n,k,r)$ if each element of ${\cal G}_q(n,r)$ is contained in at least one element of $\field{C}$. If each element of ${\cal G}_q(n,r)$ is contained in exactly one element of $\field{C}$, we have a \emph{Steiner structure}, which is both an optimal Grassmannian code and an optimal $q$-covering design~\cite{EtVa11,ScEt02}.
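As a small illustrative computation (included here only to fix ideas; the specific values are not used later), take $q=2$, $n=4$ and $k=2$: \[ \qnom{4}{2} = \frac{(2^4-1)(2^3-1)}{(2^2-1)(2-1)} = \frac{15\cdot 7}{3} = 35 , \] so ${\cal G}_2(4,2)$ consists of $35$ two-dimensional subspaces of $\field{F}_2^4$, and two distinct such subspaces $U,V$ satisfy $d_S(U,V)=2$ if they meet in a one-dimensional subspace and $d_S(U,V)=4$ if they meet only in the zero subspace.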
Codes and designs in the Grassmannian have been studied extensively in the last five years due to the work of Koetter and Kschischang~\cite{KoKs08} on random network coding; they showed that an $(n,M,d,k)_q$ code can correct any $t$ packet insertions and any $s$ packet erasures, as long as $2t+2s <d$. Our goal in this paper is to examine cases in which we can determine the asymptotic behavior of codes and designs in the Grassmannian. Let ${\cal A}_q(n,d,k)$ denote the maximum number of codewords in an $(n,M,d,k)_q$ code. The \emph{packing bound} is the best known asymptotic upper bound for ${\cal A}_q(n,d,k)$. If we write $d=2\delta+2$, we have \begin{equation} \label{eqn:ineq_packing} {\cal A}_q(n,2\delta+2,k) \leq \frac{\qnom{n}{k-\delta}}{\qnom{k}{k-\delta}}. \end{equation} This bound is proved by noting that in an $(n,M,2\delta+2,k)_q$ code, each $(k-\delta)$-dimensional subspace can be contained in at most one codeword. Bounds on ${\cal A}_q(n,d,k)$ were given in many papers, e.g.~\cite{EtSi09,EtSi11,EtVa08,EtVa11,KoKs08,KoKu08,TrRo10,WXS,XiFu09}. In particular, the well-known Johnson bound for constant weight codes was adapted for constant dimension codes independently in~\cite{EtVa08,EtVa11,XiFu09} to show that \[ {\cal A}_q (n,2 \delta+2 ,k)\leq \frac{q^n-1}{q^k-1} {\cal A}_q(n-1,2\delta+2,k-1). \] By iterating this bound, using the observation that ${\cal A}_q(n,2\delta+2,k)=1$ for all $k \leq \delta$, we obtain the \emph{iterated Johnson bound}: \begin{multline*} {\cal A}_q(n,2\delta+2,k) \\ \leq \left\lfloor\frac{q^n-1}{q^k-1} \left\lfloor\frac{q^{n-1}-1}{q^{k-1}-1} \cdots \left\lfloor\frac{q^{n-k+\delta+1}-1}{q^{\delta+1}-1} \cdots\right\rfloor\right\rfloor\right\rfloor. \end{multline*} It is not difficult to see that the iterated Johnson bound is always stronger than the packing bound (indeed, the packing bound may be derived as a simple corollary of the iterated Johnson bound). However, the main goal of this paper is to prove that the packing bound (and so the iterated Johnson bound) is attained asymptotically for fixed $k$ and $\delta$, $k \geq \delta$, when $n$ tends to infinity. In other words, we will prove the following theorem, in which the notation $A(n) \sim B(n)$ means that $\lim_{n \rightarrow \infty} A(n)/B(n) =1$. \begin{theorem} \label{thm:packing_asymptotics} Let $q$, $k$ and $\delta$ be fixed integers, with $0 \leq \delta\leq k$ and such that $q$ is a prime power. Then \begin{equation} \label{eqn_packing} {\cal A}_q(n,2\delta+2,k)\sim \frac{\qnom{n}{k-\delta}}{\qnom{k}{k-\delta}} \end{equation} as $n\rightarrow\infty$. \end{theorem} In fact, the proof of our theorem shows a little more than this: see the proof of the theorem and the comment in the last section of this paper. Our proof of the lower bound is probabilistic, making use of some of the theory of quasi-random hypergraphs. There are known explicit constructions that produce codes whose size is within a constant factor of the packing bound as $n\rightarrow\infty$. Currently, the best codes known are the codes of Etzion and Silberstein~\cite{EtSi09} that are obtained by extending the codes of Silva, Kschischang, and Koetter~\cite{SKK08} using a `multi-level construction'. If $q=2$ and $\delta=2$, then the ratio between the size of the code and the packing bound is 0.6657, 0.6274, and 0.625 when $k=4$, $k=8$, and $k=30$ respectively, as $n$ tends to infinity. When $k=3$, the ratio of 0.7101 in~\cite{SKK08} was improved in~\cite{EtSi11} to 0.7657.
The Reed--Solomon-like codes of~\cite{KoKs08}, represented as a lifting of codewords of maximum rank distance codes~\cite{SKK08}, approach the packing bound as $n\rightarrow\infty$ when one of $\delta$ or $q$ also tends to infinity~\cite[Lemma~19]{EtSi11}. Theorem~\ref{thm:packing_asymptotics} shows that there exist codes approaching the packing bound as $n\rightarrow\infty$ even when $\delta$ and $q$ are fixed; of course, the challenge is now to construct such codes explicitly. The paper also proves a similar result for $q$-covering designs. Let ${\cal C}_q(n,k,r)$ denote the minimum number of $k$-dimensional subspaces in a $q$-covering design $\field{C}_q(n,k,r)$. Bounds on ${\cal C}_q(n,k,r)$ can be found in~\cite{Etz11,EtVa11a}. Setting $r=k-\delta$, the \emph{covering bound} states that \begin{small} \begin{equation} \label{eq:covering} {\cal C}_q(n,k,r) \geq \frac{\qnom{n}{k-\delta}}{\qnom{k}{k-\delta}}. \end{equation} \end{small} This bound may be proved by observing that in a $\field{C}_q(n,k,k-\delta)$ covering design each $(k-\delta)$-dimensional subspace must be contained in at least one codeword. The \emph{Sch\"{o}nheim bound} is an analogous result to the Johnson bound above: \[ {\cal C}_q(n,k,r) \geq \frac{q^n-1}{q^k-1} {\cal C}_q(n-1,k-1,r-1). \] This bound implies the iterated Sch\"{o}nheim bound~\cite{EtVa11a}: \begin{equation} \label{eq:schonheim} {\cal C}_q(n,k,r) \geq \left\lceil\frac{q^n\!-\!1}{q^k\!-\!1} \left\lceil\frac{q^{n-1}\!-\!1}{q^{k-1}\!-\!1} \cdots \left\lceil\frac{q^{n-r+1}\!-\!1}{q^{k-r+1}\!-\!1}\right\rceil \cdots\right\rceil\right\rceil. \end{equation} The iterated Sch\"{o}nheim bound is always at least as strong as the covering bound. But the following theorem shows that when $k$ and $\delta$ are fixed with $n\rightarrow\infty$ the covering bound (and so the iterated Sch\"{o}nheim bound) is attained asymptotically: \begin{theorem} \label{thm:covering_asymptotics} Let $q$, $k$ and $\delta$ be fixed integers, with ${0\leq \delta\leq k}$ and such that $q$ is a prime power. Then \[ {\cal C}_q(n,k,k-\delta)\sim \frac{\qnom{n}{k-\delta}}{\qnom{k}{k-\delta}} \] as $n\rightarrow\infty$. \end{theorem} The proof of the theorem does not explicitly construct families of $q$-covering designs whose ratio with the covering bound approaches $1$. The relationship between the best known $q$-covering designs and the covering bound is more complicated than in the case of Grassmannian codes, but it is usually the case that better ratios can be obtained by explicit constructions of $q$-covering designs when compared to the corresponding problem for Grassmannian codes. For example, a ratio of 1.05 can be obtained by explicit constructions~\cite{Etz11} when $q=2$, $k=3$, and $\delta=1$, as $n \rightarrow \infty$. The asymptotics of ${\cal A}_q(n,2\delta+2,k)$ when $n-k$ and $\delta$ are fixed, and of ${\cal C}_q(n,k,r)$ when $n-k$ and $r$ are fixed, are also determined in this paper. The result for ${\cal A}_q(n,2\delta+2,k)$ is a simple corollary of Theorem~\ref{thm:packing_asymptotics}, whereas the result for ${\cal C}_q(n,k,r)$ follows from results in finite geometry. The rest of the paper is organized as follows. In Section~\ref{sec:main} we will present the proofs for our main theorems. In Section~\ref{sec:large_k} we consider the case when $n-k$ is fixed as $n\rightarrow\infty$. Finally, in Section~\ref{sec:otherAsy} we provide comments on our results, and state some open questions.
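To make the bounds above concrete, here is one worked evaluation (an illustration only; these particular parameters play no further role in the paper). Take $q=2$, $k=3$, $\delta=1$ (so $r=k-\delta=2$) and $n=6$. The packing bound~\eqref{eqn:ineq_packing} and the covering bound~\eqref{eq:covering} both evaluate to $\qnom{6}{2}/\qnom{3}{2}=651/7=93$, while the iterated Johnson bound gives \[ {\cal A}_2(6,4,3)\leq \left\lfloor\frac{2^6-1}{2^3-1}\left\lfloor\frac{2^5-1}{2^2-1}\right\rfloor\right\rfloor = \lfloor 9\cdot 10\rfloor = 90 \] and the iterated Sch\"{o}nheim bound~\eqref{eq:schonheim} gives \[ {\cal C}_2(6,3,2)\geq \left\lceil\frac{2^6-1}{2^3-1}\left\lceil\frac{2^5-1}{2^2-1}\right\rceil\right\rceil = \lceil 9\cdot 11\rceil = 99 , \] illustrating that the iterated bounds are indeed at least as strong as the packing and covering bounds.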
\section{Proofs of the main Theorems} \label{sec:main} We begin by observing a simple relationship between the minimum size of a $q$-covering design and the maximum size of a Grassmannian code. \begin{proposition} \label{prop:design_equiv_code} We have that \begin{multline*} {\cal C}_q(n,k,k-\delta)\leq {\cal A}_q(n,2\delta+2,k)+\\ \qnom{n}{k-\delta}-\qnom{k}{k-\delta} {\cal A}_q(n,2\delta+2,k) \end{multline*} and \begin{multline*} {\cal A}_q(n,2\delta+2,k)\geq {\cal C}_q(n,k,k-\delta) +\\ \qnom{n}{k-\delta}- \qnom{k}{k-\delta}{\cal C}_q(n,k,k-\delta). \end{multline*} In particular, Theorems~\ref{thm:packing_asymptotics} and~\ref{thm:covering_asymptotics} are equivalent. \end{proposition} \begin{proof} Let $\field{C}$ be a Grassmannian code of size ${\cal A}_q(n,2\delta+2,k)$. There are exactly $\qnom{k}{k-\delta}{\cal A}_q(n,2\delta+2,k)$ subspaces of dimension $k-\delta$ that lie in some element of~$\field{C}$, since no subspace of dimension $k-\delta$ is contained in more than one element of $\field{C}$. Thus there are $\Upsilon \deff\qnom{n}{k-\delta}-\qnom{k}{k-\delta} {\cal A}_q(n,2\delta+2,k)$ uncovered subspaces of dimension $k-\delta$, and we may construct a $q$-covering design by adding $\Upsilon$ or fewer $k$-dimensional subspaces to $\field{C}$. This establishes the first inequality of the proposition. To establish the second inequality, let $\field{C}$ be a $q$-covering design of size ${\cal C}_q(n,k,k-\delta)$. There are ${\qnom{k}{k-\delta}{\cal C}_q(n,k,k-\delta)}$ pairs $(U,V)$ such that ${U\in{\cal G}_q(n,k-\delta)}$, $V\in\field{C}$ and ${U\subseteq V}$. Suppose we order these pairs in some way. Since every $(k-\delta)$-dimensional subspace $U$ occurs at least once as the first element of a pair, there are $\qnom{k}{k-\delta}{\cal C}_q(n,k,k-\delta)-\qnom{n}{k-\delta}$ pairs $(U,V)$ where a pair $(U,V')$ for some $V'\in \field{C}$ occurs earlier in the ordering. Removing the corresponding subspaces $V$ from $\field{C}$ produces a Grassmannian code of size at least ${\cal C}_q(n,k,k-\delta) +\qnom{n}{k-\delta}- \qnom{k}{k-\delta}{\cal C}_q(n,k,k-\delta)$, and so the second inequality follows. Suppose Theorem~\ref{thm:packing_asymptotics} holds. Let $q$ be a fixed prime power, and let $k$ and $\delta$ be fixed integers such that $0\leq \delta\leq k$. Then~\eqref{eqn_packing} implies that $\qnom{n}{k-\delta}-\qnom{k}{k-\delta} {\cal A}_q(n,2\delta+2,k)=o\left(\qnom{n}{k-\delta}\right)$ and so the first inequality of the proposition implies that \begin{align*} {\cal C}_q(n,k,k-\delta)&\leq {\cal A}_q(n,2\delta+2,k)+o\left(\qnom{n}{k-\delta}\right)\\ &\leq \frac{\qnom{n}{k-\delta}}{\qnom{k}{k-\delta}}+o\left(\qnom{n}{k-\delta}\right) ~~~ \text{by}~(\ref{eqn:ineq_packing})\\ &\sim \frac{\qnom{n}{k-\delta}}{\qnom{k}{k-\delta}}. \end{align*} Theorem~\ref{thm:covering_asymptotics} now follows from this asymptotic inequality and the covering bound~\eqref{eq:covering}. The proof that Theorem~\ref{thm:packing_asymptotics} follows from Theorem~\ref{thm:covering_asymptotics} is similar to the above, and is omitted. \end{proof} We prove Theorem~\ref{thm:packing_asymptotics} by using a result in quasi-random hypergraphs. To state this result, we begin by recalling some terminology from hypergraph theory. A hypergraph~$\Gamma$ is \emph{$\ell$-uniform} if all its hyperedges have cardinality~$\ell$.
The \emph{degree} $\deg(u)$ of a vertex $u\in\Gamma$ is the number of hyperedges containing~$u$; if $\deg(u)=r$ for all $u\in\Gamma$, we say that $\Gamma$ is \emph{$r$-regular}. The \emph{codegree} $\mathcal{C}g(u_1,u_2)$ of a pair of distinct vertices $u_1,u_2\in\Gamma$ is the number of hyperedges containing both $u_1$ and $u_2$. A \emph{matching} (or edge packing) in $\Gamma$ is a set of pairwise disjoint hyperedges of $\Gamma$. We write ${\cal U}(\Gamma)$ for the minimum number of vertices left uncovered by a matching in $\Gamma$. Thus the largest number of hyperedges in a matching of an $\ell$-uniform hypergraph $\Gamma$ on $v$ vertices is $(v-{\cal U}(\Gamma))/\ell$. The main theorem we use is due to Vu~\cite[Theorem~1.2.1]{Vu}: \begin{theorem} \label{thm:hypergraph} Let $\ell$ be a fixed integer, where $\ell\geq 4$. Then there exist constants $\alpha$ and $\beta$ with the following property. Let $\Gamma$ be an $\ell$-uniform $r$-regular hypergraph with $v$ vertices. Define $c =\max\mathcal{C}g(u_1,u_2)$, where the maximum is taken over all distinct vertices $u_1,u_2\in \Gamma$. Then \[ {\cal U}(\Gamma)\leq \alpha v(c/r)^{1/(\ell-1)}(\log r)^{\beta}. \] \end{theorem} The proof of Theorem~\ref{thm:hypergraph} uses probabilistic methods, inspired by the techniques of Frankl and R\"odl~\cite{FrRo85,Rod85}. See~\cite{ABKV03,AlSp08,PiSp89} for related work. \begin{proof}[Proof of Theorem~\ref{thm:packing_asymptotics}] If $\delta=0$, then the set of all subspaces in the Grassmannian is a code that achieves the packing bound; if $\delta=k$ then any single subspace of dimension $k$ achieves the packing bound. So we may assume that $0<\delta<k$. Now suppose that $k=2$, so $\delta=1$. The theorem follows in this case since it is known~\cite{EtVa11} that ${\cal A}_q(n,4,2)= \frac{q^n-1}{q^2-1}$ if $n$ is even; and ${\cal A}_q(n,4,2) \geq \frac{q^n-1}{q^2-1} - \frac{q^2}{q+1}$ if $n$ is odd. Thus we may suppose that $k\geq 3$. Define a hypergraph $\Gamma_n$ as follows. We identify the set of vertices of $\Gamma_n$ with ${\cal G}_q(n,k-\delta)$, and the set of hyperedges of $\Gamma_n$ with ${\cal G}_q(n,k)$. We define a hyperedge $V$ to contain a vertex $U$ if and only if $U\subseteq V$ (as subspaces). We note that ${\cal A}_q(n,2\delta+2,k)$ is exactly the maximum size of a matching in $\Gamma_n$. Now $\Gamma_n$ is an $\ell$-uniform hypergraph, where $\ell=\qnom{k}{k-\delta}$. Note that $\ell\geq 4$, and $\ell$ does not depend on $n$. Every vertex of $\Gamma_n$ has degree $r(n)=\qnom{n-(k-\delta)}{\delta}$. Let $U_1$ and $U_2$ be distinct vertices, so $\dim (U_1+U_2)=k-\delta+i$ for some positive integer $i$. Then $\mathcal{C}g(U_1,U_2)$ is the number of $k$-dimensional subspaces containing $U_1+U_2$, which is at most the number of $k$-dimensional subspaces containing a $(k-\delta+1)$-dimensional subspace of $U_1+U_2$. So \[ \mathcal{C}g(U_1,U_2) = \qnom{n-(k-\delta+i)}{\delta-i} \leq \qnom{n-(k-\delta+1)}{\delta-1}. \] But \begin{align*} \qnom{n-(k-\delta)}{\delta}&=\Theta(q^{n\delta})\text{ and }\\ \qnom{n-(k-\delta+1)}{\delta-1}&=\Theta(q^{n(\delta-1)}) \end{align*} and so $\max_{u_1,u_2\in\Gamma_n}\mathcal{C}g(u_1,u_2)=O(q^{-n}r(n))$. Theorem~\ref{thm:hypergraph} now implies that there exists an integer $\beta$ such that \[ {\cal U}(\Gamma_n)=O\left( \qnom{n}{k-\delta} q^{-n/(\ell-1)}(\log r(n))^{\beta}\right).
\] Thus ${\cal U}(\Gamma_n)=o(\qnom{n}{k-\delta})$, and so the largest matching in~$\Gamma_n$ contains at least $\qnom{n}{k-\delta}(1-o(1))/\ell$ edges. The packing bound shows that the largest matching in $\Gamma_n$ has size at most $\qnom{n}{k-\delta}/\ell$, and so ${\cal A}_q(n,2\delta+2,k)\sim \qnom{n}{k-\delta}/\ell$, as required. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:covering_asymptotics}] Theorem~\ref{thm:covering_asymptotics} immediately follows from Proposition~\ref{prop:design_equiv_code} and Theorem~\ref{thm:packing_asymptotics}. \end{proof} \section{The case of large $k$} \label{sec:large_k} In the previous section, we assumed that $k$ is fixed (and therefore is small when compared to $n$). In this section, we consider the `dual' case, where $n-k$ is assumed to be fixed (and so $k$ is large). It is proved in~\cite{EtVa11,KoKs08,XiFu09} that ${\cal A}_q(n,2\delta+2,k)={\cal A}_q(n,2\delta+2,n-k)$. (This holds because taking the duals of all subspaces in an $(n,M,d,k)_q$ code in the Grassmann space produces an $(n,M,d,n-k)_q$-code.) Thus we have the following corollary of Theorem~\ref{thm:packing_asymptotics}, which establishes the asymptotics of ${\cal A}_q(n,2\delta+2,k)$ when $n-k$ and $\delta$ are fixed with $n\rightarrow\infty$. \begin{cor} \label{cor:packing_large_k} Let $q$, $t$ and $\delta$ be fixed integers such that $0\leq \delta\leq t$, and such that $q$ is a prime power. Then \begin{equation} \label{eqn_packing_large_k} {\cal A}_q(n,2\delta+2,n-t)\sim \frac{\qnom{n}{t-\delta}}{\qnom{t}{t-\delta}} \end{equation} as $n\rightarrow\infty$. \end{cor} Note that when $\delta>t$ we have that ${\cal A}_q(n,2\delta+2,n-t)={\cal A}_q(n,2\delta+2,t)=1$, so the restriction on $\delta$ in Corollary~\ref{cor:packing_large_k} is a natural one. The same techniques do not establish a similar result for $q$-covering designs, since ${\cal C}_q(n,k,r)$ and ${\cal C}_q(n,n-k,r)$ are not equal in general. However, by translating some of the results known in finite geometry into our language, we can determine ${\cal C}_q(n,k,r)$ when $q$, $r$ and $n-k$ are fixed, as Theorem~\ref{thm:covering_large_k} below shows. For the proof of the theorem we will need the notion of a $q$-Tur\'{a}n design. We say that $\field{C} \subseteq {\cal G}_q(n,r)$ is a \emph{$q$-Tur\'{a}n design} $\field{T}_q(n,k,r)$ if each element of ${\cal G}_q(n,k)$ contains at least one element of $\field{C}$. Let ${\cal T}_q(n,k,r)$ denote the minimum number of $r$-dimensional subspaces in a $q$-Tur\'{a}n design $\field{T}_q(n,k,r)$. The notions of $q$-covering designs and $q$-Tur\'an designs are dual; the following result was proved in~\cite{EtVa11a}: \begin{theorem} \label{thm:coveringTuran} ${\cal C}_q(n,k,r) = {\cal T}_q(n,n-r,n-k)$ for all ${1 \leq r \leq k \leq n}$. \end{theorem} Using normal spreads~\cite{Lun99} (also known as geometric spreads), Beutelspacher and Ueberberg~\cite{BeUe91} proved the following theorem using some of the theory of finite projective geometry. \begin{theorem} \label{thm:Tspreads} ${\cal T}_q (vm+\delta,vm-v+1+\delta,m) = \frac{q^{vm}-1}{q^m-1}$ for all $v \geq 2$ and $m \geq 2$. \end{theorem} We remark that Beutelspacher and Ueberberg show much more: that there is essentially only one optimal construction for a $q$-Tur\'an design with these parameters. As a consequence of Theorems~\ref{thm:coveringTuran} and~\ref{thm:Tspreads} we obtain the following result for $q$-covering designs.
\begin{cor} \label{cor:spread} Let $r$ and $n$ be positive integers such that $r+1$ divides $n$. Then \[ {\cal C}_q(n,n-n/(r+1),r)=\frac{q^n-1}{q^{n/(r+1)}-1}. \] \end{cor} \begin{proof} Theorems~\ref{thm:coveringTuran} and~\ref{thm:Tspreads} (in the case when $\delta=0$) show that \[ {\cal C}_q(vm,vm-m,v-1)=\frac{q^{vm}-1}{q^m-1} \] for any integers $v\geq 2$ and $m\geq 2$. If we set $v=r+1$ and $m=n/v$, the corollary follows except in the case when $n=r+1$ (so that $m=1$). But the corollary is true in this case also, as a $q$-covering design with these parameters must consist of all $r$-dimensional subspaces, and there are $\qnom{r+1}{r}=(q^{r+1}-1)/(q-1)$ of these, in agreement with the stated formula. \end{proof} \begin{theorem} \label{thm:covering_large_k} Let integers $q$, $t$ and $r$ be fixed, where $q$ is a prime power. For all sufficiently large integers $n$, \[ {\cal C}_q(n,n-t,r)=\frac{q^{(r+1)t}-1}{q^t-1}. \] \end{theorem} \begin{proof} We first note that \begin{equation} \label{eq:covering_large_k_eqn} {\cal C}_q(n+1,n+1-t,r)\leq {\cal C}_q(n,n-t,r). \end{equation} This is proved in~\cite{EtVa11a}. To see why~\eqref{eq:covering_large_k_eqn} holds, fix a $1$-dimensional subspace $K$ of an $(n+1)$-dimensional vector space $V$. Let $\mathbb{C}$ be a $q$-covering design $\mathbb{C}_q(n,n-t,r)$ contained in the $n$-dimensional space $V/K$. Then the set of subspaces $U$ such that $K\subseteq U\subseteq V$ and $U/K\in\mathbb{C}$ is a $q$-covering design $\mathbb{C}_q(n+1,n+1-t,r)$ containing at most ${\cal C}_q(n,n-t,r)$ subspaces. The inequality~\eqref{eq:covering_large_k_eqn} implies that for any fixed $t$ and $r$, we have that ${\cal C}_q(n,n-t,r)$ is a non-increasing sequence of positive integers as $n$ increases. So there exists a constant $c$ (depending only on $q$, $t$ and $r$) so that ${\cal C}_q(n,n-t,r)=c$ whenever $n$ is sufficiently large. It remains to show that $c= (q^{(r+1)t}-1)/(q^t-1)$. Set $n'=t(r+1)$, so $n'-t=n'-n'/(r+1)$. Corollary~\ref{cor:spread} implies that \[ c\leq {\cal C}_q(n',n'-t,r)=\frac{q^{n'}-1}{q^{n'/(r+1)}-1} =\frac{q^{(r+1)t}-1}{q^t-1}. \] Now $c$ is bounded below by the Sch\"onheim bound~\eqref{eq:schonheim}. We give a simpler form for the Sch\"onheim bound, with $k=n-t$, that holds for all sufficiently large $n$ as follows. When $n$ is sufficiently large we find that \[ \left\lceil \frac{q^{n-r+1}-1}{q^{k-r+1}-1}\right\rceil=q^t+1=\frac{q^{2t}-1}{q^t-1}. \] Moreover, for $i$ such that $0\leq i\leq r-2$, \[ \left\lceil\frac{q^{n-i}-1}{q^{k-i}-1} \times \frac{q^{(r-i)t}-1}{q^t-1}\right\rceil=\frac{q^{(r-i+1)t}-1}{q^t-1} \] provided that $n$ is sufficiently large. These equalities show that the right hand side of the Sch\"onheim bound~\eqref{eq:schonheim} is equal to $(q^{(r+1)t}-1)/(q^t-1)$ for all sufficiently large integers $n$. So $c\geq (q^{(r+1)t}-1)/(q^t-1)$, as required. \end{proof} \section{Optimal Codes and Research directions} \label{sec:otherAsy} In this section, we comment on our results, we provide a little extra background, and we propose topics for further study. We have proved that for a given $q$, if we fix $k$ and $\delta$, where $\delta < k$, the packing bound for Grassmannian codes is asymptotically attained when $n$ tends to infinity. We commented in Section~\ref{sec:introduction} that the same is true when $q$ or $\delta$ grows. In Section~\ref{sec:large_k}, we determined the asymptotics of ${\cal A}_q(n,2\delta+2,k)$ when $n-k$ and $\delta$ are fixed.
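To illustrate the results of Section~\ref{sec:large_k} with small parameters (an illustrative check on the formulas rather than a new result): taking $q=2$, $t=2$ and $\delta=1$ in Corollary~\ref{cor:packing_large_k} gives ${\cal A}_2(n,4,n-2)\sim \qnom{n}{1}/\qnom{2}{1}=(2^n-1)/3$, which is consistent, via the duality ${\cal A}_q(n,2\delta+2,k)={\cal A}_q(n,2\delta+2,n-k)$, with the exact value of ${\cal A}_2(n,4,2)$ for even $n$ quoted in the proof of Theorem~\ref{thm:packing_asymptotics}. Similarly, taking $q=2$, $t=1$ and $r=1$ in Theorem~\ref{thm:covering_large_k} gives ${\cal C}_2(n,n-1,1)=(2^{2}-1)/(2-1)=3$ for all sufficiently large $n$: the three hyperplanes $\{x_1=0\}$, $\{x_2=0\}$ and $\{x_1+x_2=0\}$ of $GF(2)^n$ together contain every nonzero vector, while no two hyperplanes do.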
These results do not address the cases when $q$ and $\delta$ are fixed, but $k$ and $n-k$ both grow (for example when $k=\lfloor \alpha n\rfloor$ for some fixed real number $\alpha\in (0,1)$). Can similar results be obtained for a wide range of these cases? When $k$ grows rather slowly when compared to $n$, it should be possible to use a result of Alon et al.~\cite{ABKV03} to show that ${\cal A}_q(n,2\delta+2,k)$ still approaches the packing bound. The proof of Theorem~\ref{thm:packing_asymptotics} does not just give the leading term of ${\cal A}_q(n,2\delta+2,k)$: the order of the error term is also given. However, we do not see any reason why this error term is tight. Similar questions can be asked about the relationship between the covering bound and ${\cal C}_q(n,k,r)$. It seems that small $q$-covering designs are easier to construct than large Grassmannian codes; certainly there are more construction methods currently known~\cite{Etz11,EtVa11a}. As well as trivial cases, there are a few sets of parameters for which the exact (or almost the exact) values of ${\cal A}_q (n,d,k)$ and ${\cal C}_q(n,k,r)$ are known. Section~\ref{sec:large_k} discusses a family of optimal $q$-covering designs. A family of optimal Grassmannian codes is known when $d=2k$. \emph{Spreads} (from projective geometry) give rise to optimal codes as well as $q$-covering designs when $k$ divides $n$. Known \emph{partial spreads} of maximum size give rise to optimal codes in other cases~\cite{DeMe07,ESS,EJSSS,GaSz03}. For small parameters, the best known codes are very often cyclic codes, which are defined as follows. Let $\alpha$ be a primitive element of GF($q^n$). We say that~a~code $\mathbb{C} \subseteq {\cal G}_q(n,k)$ is \emph{cyclic} if it has the following property: whenever $\{{\bf 0},\alpha^{i_1},\alpha^{i_2},\ldots,\alpha^{i_m}\}$ is a codeword of $\mathbb{C}$, so is its cyclic~shift $\{{\bf 0}, \alpha^{i_1+1},\alpha^{i_2+1},\ldots,\alpha^{i_m+1} \}$. In other words, if we map each subspace $V \,{\in}\, \mathbb{C}$ into the corresponding binary characteristic vector $x_V = (x_0,x_1,\ldots,x_{q^n-2})$ given by $$ x_i = 1 ~~\text{if $\alpha^i {\in}\kern1pt V$} \hspace{4ex}\text{and}\hspace{4ex} x_i = 0 ~~\text{if $\alpha^i {\not\in}\, V$} $$ then the set of all such characteristic vectors is closed under cyclic shifts. It would be very interesting to find out whether cyclic codes approach the packing bound and the covering bound asymptotically. Again, in this case we would like to see proofs similar to the ones of Theorems~\ref{thm:packing_asymptotics} and~\ref{thm:covering_asymptotics}. Of course, explicit families of asymptotically good cyclic codes would be even more worthwhile. \begin{center} {\bf Acknowledgement} \end{center} The authors would like to thank Simeon Ball for introducing to them the concept of normal spreads. \begin{thebibliography}{10}
\bibitem{AAK01} R.\ Ahlswede, H.\,K.\ Aydinian, and L.\,H.\ Khachatrian, ``On perfect codes and related concepts,'' \emph{Designs, Codes, Crypt.}, vol. 22, pp. 221--237, 2001. \bibitem{ABKV03} N. Alon, B. Bollob\'as, J.H. Kim and V.H. Vu, ``Economical covers with geometric applications,'' \emph{Proc. London Math. Soc.}, vol. 86, pp. 273--301, 2003. \bibitem{AlSp08} N. Alon and J. H. Spencer, {\em The Probabilistic Method}, 3rd edition, John Wiley \& Sons, Hoboken, 2008. \bibitem{DeMe07} J. de Beule and K. Metsch, ``The maximum size of a partial spread in $H(5,q^2)$ is $q^3+1$,'' \emph{J. Comb. Theory, Ser. A,} vol. 114, pp. 761--768, 2007. \bibitem{BeUe91} A. Beutelspacher and J. Ueberberg, ``A characteristic property of geometric $t$-spreads in finite projective spaces,'' \emph{Europ. J. Comb.,} vol. 12, pp. 277--281, 1991. \bibitem{ESS} J. Eisfeld, L. Storme, and P. Sziklai, ``On the spectrum of the sizes of maximal partial line spreads in $PG(2n,q)$, $n \geq 3$,'' \emph{Designs, Codes, and Cryptography,} vol. 36, pp. 101--110, 2005. \bibitem{EJSSS} S. El-Zanati, H. Jordon, G. Seelinger, P. Sissokho, and L. Spence, ``The maximum size of a partial 3-spread in a finite vector space over $GF(2)$,'' \emph{Designs, Codes, and Cryptography,} vol. 54, pp. 101--107, 2010. \bibitem{Etz11} T. Etzion, ``Covering subspaces by subspaces,'' in preparation. \bibitem{EtSi09} T. Etzion and N. Silberstein, ``Error-correcting codes in projective space via rank-metric codes and Ferrers diagrams,'' \emph{IEEE Trans.\ Inform. Theory}, vol. 55, no. 7, pp. 2909--2919, July 2009. \bibitem{EtSi11} T. Etzion and N. Silberstein, ``Codes and designs related to lifted MRD codes,'' \emph{arxiv.org/abs/1102.2593}. \bibitem{EtVa08} T. Etzion and A. Vardy, ``Error-correcting codes in projective space,'' in proceedings of \emph{International Symposium on Information Theory}, pp. 871--875, July 2008. \bibitem{EtVa11} T. Etzion and A. Vardy, ``Error-correcting codes in projective space,'' \emph{IEEE Trans.\ Inform. Theory}, vol.\,57, no.\,2, pp.\,1165--1173, February 2011. \bibitem{EtVa11a} T. Etzion and A. Vardy, ``On $q$-analogs for Steiner systems and covering designs,'' \emph{Advances in Mathematics of Communications}, vol. 5, no. 2, pp. 161--176, 2011. \bibitem{GaSz03} A. G\'{a}cs and T. Sz\H{o}nyi, ``On maximal partial spreads in $PG(n,q)$,'' \emph{Designs Codes Crypt.,} vol. 29, pp. 123--129, 2003. \bibitem{FrRo85} P. Frankl and V. R\"{o}dl, ``Near perfect coverings in graphs and hypergraphs,'' \emph{European J. Combin.}, vol. 6, pp. 317--326,~1985. \bibitem{KoKs08} R.\ Koetter and F.\,R.\ Kschischang, ``Coding for errors and erasures~in~random network coding,'' \emph{IEEE Trans.\ Inform. Theory}, vol.\,54, no.\,8, pp.\,3579--3591, August 2008. \bibitem{KoKu08} A.\,Kohnert and S.\,Kurz, ``Construction of large constant-dimension~codes with a prescribed minimum distance,'' \emph{Lecture Notes in Computer Science}, vol.\,5393, pp.\,31--42, December 2008. \bibitem{Lun99} G. Lunardon, ``Normal spreads,'' \emph{Geometriae Dedicata}, vol. 75, pp. 245--261, 1999. \bibitem{PiSp89} N. Pippenger and J. Spencer, ``Asymptotic behavior of the chromatic index for hypergraphs,'' \emph{J. Comb. Theory, Ser. A}, vol. 51, pp. 24--42, 1989. \bibitem{Rod85} V. R\"odl, ``On a packing and covering problem,'' \emph{Europ. J. Comb.}, vol. 6, pp.
69--78, 1985. \bibitem{ScEt02} M.\ Schwartz and T.\ Etzion, ``Codes and anticodes in the Grassman graph,'' \emph{J.\ Comb.\ Theory, Ser.\,A}, vol. 97, pp. 27--42, 2002. \bibitem{SKK08} D. Silva, F.\,R.\ Kschischang, and R.\ Koetter, ``A rank-metric approach to error control in random network coding,'' \emph{IEEE Trans. Inform. Theory}, vol.~54, pp. 3951--3967, September 2008. \bibitem{TrRo10} A.-L. Trautmann and J. Rosenthal, ``New improvements on the echelon-Ferrers construction,'' in proc. of \emph{Int. Symp. on Math. Theory of Networks and Systems}, pp. 405--408, July 2010. \bibitem{Vu} Van H. Vu, ``New bounds on nearly perfect matchings in hypergraphs: Higher codegrees do help,'' \emph{Random Structures and Algorithms}, vol.\,17, pp.\,29--63, 2000. \bibitem{WXS} H. Wang, C. Xing, and R. Safavi-Naini, ``Lee metric codes over integer residue rings,'' \emph{IEEE Trans. on Inform. Theory}, vol.\,IT-49, pp.\,866--872, 2003. \bibitem{XiFu09} S.-T. Xia and F.-W. Fu, ``Johnson type bounds on constant dimension codes,'' \emph{Designs, Codes Crypt.}, vol. 50, pp. 163--172,~2009. \end{thebibliography} \end{document}
\begin{document} \title{Optimal dynamic insurance contracts\thanks{ This paper supersedes an earlier paper entitled ``Dynamic Competitive Insurance''. } \thanks{ I would like to thank Dirk Bergemann, Johannes Hörner and Larry Samuelson for encouragement and feedback on this project. I would also like to thank Brian Baisa, Eduardo Souza Rodrigues, Jaqueline Oliveira, Bruno Badia, Sofia Moroni, Marina Carvalho, Marcelo Sant'anna, Sabyasachi Das, Eduardo Faingold, Li Hao, Mike Peters, Alex Frankel and the seminar participants at the Yale Micro lunch and Summer workshop, 2022 EUI Alumni Conference, NYU, Olin Business School, University of British Columbia, Columbia Business School, Notre Dame, Boston University, Warwick, EPGE/FGV, EESP/FGV and the Catholic University of Rio de Janeiro for helpful comments. }} \author{Vitor Farinha Luz\thanks{ University of British Columbia. Email: \href{mailto:[email protected]}{[email protected]} } } \maketitle \begin{abstract} I analyze long-term contracting in insurance markets with asymmetric information. The buyer privately observes her risk type, which evolves stochastically over time. A long-term contract specifies a menu of insurance policies, contingent on the history of type reports and contractable accident information. The optimal contract offers the consumer in each period a choice between a perpetual complete coverage policy with a fixed premium and a risky continuation contract in which the current period's accidents may affect not only within-period consumption (partial coverage) but also future policies. The model allows for arbitrary restrictions on the extent to which firms can use accident information in pricing. In the absence of pricing restrictions, accidents as well as choices of partial coverage are used in the efficient provision of incentives. If firms are unable to use accident history, longer periods of partial coverage choices are rewarded, leading to menus with cheaper full-coverage options and more attractive partial-coverage options; and allocative inefficiency decreases along all histories. These results are used to study a model of perfect competition, where the equilibrium is unique whenever it exists, as well as the monopoly problem, where necessary and sufficient conditions for the presence of information rents are given. \end{abstract} \section{Introduction} A vast majority of insurance contracts cover risks that are present over many periods, such as auto or home insurance. This allows insurance firms to benefit from the use of dynamic pricing schemes, where consumers' premiums and coverage evolve over time and incorporate any information observed over the course of their interaction. The dynamics of coverage and premium are a central issue in several insurance markets including auto insurance (\cite{dionne1994adverse}, \cite{cohen2005asymmetric}), health insurance (\cite{handel2015equilibria}, \cite{atal2019lock}, \cite{atal2020long}) and life insurance (\cite{HendelLizzeri2003}, \cite{daily2008does}). This observation raises the fundamental theoretical question of what dynamic pricing schemes arise as a result of profit maximization and what their consequences are for coverage and premium dynamics. The theoretical insurance literature has focused on scenarios with either symmetric risk information\footnote{ See \cite{HendelLizzeri2003} and \cite{ghili2019welfare}. } or permanent risk types.\footnote{ See \citet{cooper1987multi} and \citet{dionne1994adverse}.
} In this paper, I characterize profit maximizing long-term contracts in repeated interactions with evolving, persistent and private risk information, and apply this characterization to study equilibrium outcomes under perfect competition and monopoly environments. In my model, a risk-averse consumer (she) may incur incidental losses in each period, such as a car accident (auto insurance) or expenditures from medical procedures (health insurance); and the probability distribution over losses is determined by her risk-type, which is privately known by the consumer and follows a persistent Markov process. A high (low) type has a higher (lower) expected income, net of losses, in each period. I assume that an insurance firm offers a long-term contract to the consumer after she has observed her initial risk-type. A long-term contract represents a commitment to a schedule, in which a menu of insurance policies (referred to as flow contracts) with different premiums and coverage levels is offered to the consumer in each period. Crucially, the menu offered in a given period may depend on previous choices made by the consumer as well as any available information regarding accident history. Motivated by common regulatory policies that limit the use of accident history in pricing (\cite{handel2015equilibria}, \cite{farinha2021risk}), I allow for restrictions on the amount of information about past accidents that can be explicitly used by firms in setting the insurance policy offers made to the consumer. These restrictions include as special cases both fully contingent contracts, where the whole history of accidents can be used, and realization-independent contracts, where the firm cannot use any past accident information in pricing. The first part of the paper (Sections \ref{Sec:profit-maximizing-contracts}-\ref{sec:Utility-Distortion-dynamics}) considers fixed discounted utility levels for the consumer, dependent on her initial risk-type, and characterizes the profit maximizing long-term contract that delivers these utility levels. This allows for the derivation of qualitative properties of optimal contracts that hold in both the competitive (Section \ref{sec:competitive-analysis}) and monopolistic (Section \ref{sec:monopoly}) settings studied in the second part of the paper, where each market setting corresponds to different equilibrium utility levels for the consumer. I show that the optimal contract features a simple pricing scheme: in every period the consumer chooses between a complete coverage insurance policy with a perpetual fixed premium (efficient in this environment), and a partial coverage policy, in which case future offers will depend on additional accident realizations. The consumer is induced to choose the full coverage option when having a low-type realization, and to choose partial coverage when having successive high-type realizations. In each period, an insurance firm has access to two sources of information: accident information --- which is directly observed by the firm --- and the consumer's choice (or announcement in a direct mechanism), which provides endogenous information about her type. In the optimal mechanism, policy offers use both pieces of information to efficiently screen different risk types. Accident information that is indicative of high-types is rewarded with higher continuation utility, in the form of more attractive future contracts.
In a similar fashion, the choice of partial coverage in a given period is indicative of a high-type and is rewarded with higher continuation utility. The characterization of distortion and coverage dynamics involves two technical challenges. First, flow contracts are multi-dimensional objects, describing a premium and coverage for each possible loss level. Second, the presence of risk aversion leads to a non-separability issue absent in quasi-linear environments. The separation of consumers with different types requires the introduction of distortions, in the form of partial coverage. The marginal efficiency loss from the introduction of distortions potentially depends on the underlying utility level obtained by the consumers in a given period. As a consequence, the optimal contract requires a joint consideration of the issues of (i) the intertemporal allocation of utility to be provided to the consumer as well as (ii) the efficient spreading of distortions over time. I tackle both of these issues by introducing an auxiliary static cost minimization problem in Section \ref{sec:auxiliary-problem}, which finds the optimal flow contract that (i) delivers a certain utility level if consumed by a high-type, and (ii) provides a fixed punishment if chosen by a low-type pretending to be high-type. In Section \ref{sec:Utility-Distortion-dynamics} two optimality conditions are derived which characterize the efficient spreading of both utility and distortions over time, in terms of the solution to this auxiliary cost minimization problem. Under a mild condition on the consumer's Bernoulli utility function,\footnote{ This condition requires that the consumer's absolute risk aversion does not decrease ``too quickly'' with consumption, which is guaranteed if the consumer has non-decreasing absolute risk aversion. } the auxiliary cost function is supermodular, i.e., the marginal cost of distortions in a flow contract increases in the utility level generated by it. Under cost-supermodularity, the intertemporal optimality conditions can be used to obtain a sharp characterization of the dynamics of utility and distortions. For the case of realization-independent contracts, we show that distortions --- measured in terms of the consumer's exposure to risk --- are strictly decreasing over time. As a consequence, the optimal mechanism has decreasing distortions along \textit{all} paths. When it comes to the flow utility dynamics, we show that consecutive high-type announcements --- which are revealed by the choice of partial coverage --- are rewarded with partial-coverage offers that generate higher flow utility, as well as lower distortion. For the case of fully contingent contracts, distortions and utility depend on the history of income realizations and hence are stochastic, even conditioning on types. I present results for two special cases. First, I focus on the case of two periods and show that the firm has an incentive to reward high-type announcements and reduce distortions. I recast the firm's contract design problem as a cost minimization one and show that, when averaging over possible first-period income realizations, the expected marginal cost of distortions in flow contracts decreases over time, while the expected marginal cost of flow utility increases over time following the choice of partial coverage.
Second, we focus on the illuminating knife-edge case of constant relative risk aversion utility $u(c) = \sqrt{c}$, which corresponds to a separable auxiliary cost function, and an arbitrary number of periods. Consecutive choices of partial coverage --- which correspond to high-type announcements --- lead to a path of insurance policies that depend on the history of income realizations. I show that these policies have lower distortions and higher flow utility levels over time, when averaging over possible income realizations. In Section \ref{sec:competitive-analysis}, we consider a competitive model in which multiple firms make long-term contract offers to a consumer who is able to commit to a long-term contract. I extend the characterization of \cite{rothschild1976equilibrium} to our environment and show that a unique outcome, featuring the aforementioned flow utility and distortion dynamics, can be sustained by a pure strategy equilibrium, and obtain necessary and sufficient conditions for such an equilibrium to exist. The assumption of consumer commitment is reasonable in markets with high search or switching costs, which preclude or inhibit the consumer from considering or switching to new firms. In the absence of such frictions, the commitment assumption is potentially with loss. If the low-type is an absorbing state of the types' Markov process (such as a chronic condition in health insurance), I propose a simple competitive model with consumer reentry in the market and show that firms' endogenous beliefs about the type of a consumer searching for a new contract discourage the consumer from switching firms and, as a consequence, the commitment outcome can be sustained in a Perfect Bayesian Equilibrium of the no-commitment model. This follows from the fact that the optimal contract with commitment is back-loaded: both the partial- and the full-coverage policy options within the optimal menu become more attractive over time. Section \ref{sec:monopoly} studies the case of monopoly, where the consumer's (type-dependent) outside option is given by zero coverage. The optimal contract is always separating and hence features the aforementioned flow utility and distortion dynamics. The consumer with initial high-type has no information rent, having total utility equal to that in the absence of insurance. I present a condition that is necessary and sufficient for the optimal contract to leave information rents to the consumer with initial low-type, meaning that her utility is strictly higher than the no-insurance option. \subsection*{Related literature}\label{subsec:Related-literature} This paper contributes to the literature on competitive screening, initiated by the seminal contributions of \citet{rothschild1976equilibrium} and \citet{wilson1977model}, which studies situations in which private information leads to inefficiencies in competitive markets. \citet{rothschild1976equilibrium} considers insurance markets in which customers have private information regarding their risk characteristics. They show that competition leads to a unique equilibrium in which high-type consumers, who have lower accident probabilities, are screened by the choice of partial insurance at better premium rates (per unit of coverage). \citet{cooper1987multi} extends the analysis of \citet{rothschild1976equilibrium} to a multi-period setting in which consumers have fixed risk types and full commitment.
They find that optimal long-term contracts use experience rating as an efficient sorting device in addition to partial coverage. The optimal contract features a single type announcement in the first period and customers make no further choices. \cite{dionne1994adverse} allows for renegotiation and competition in the same fixed-types model and finds equilibria with semi-pooling. My analysis is also related to the dynamic mechanism design literature (see \citet{courty2000sequential}, \citet{battaglini2005long}, \citet{esHo2007optimal}, \citet{pavan2014dynamic}), which differs from the insurance model considered here in two relevant ways. First, two sources of information exist in each period: the consumer's coverage choice (or type announcement) as well as the accident information that can be directly used in future offers. This is in contrast with standard screening models in which the only source of information to the mechanism designer over the course of an interaction is the series of choices or announcements made by the consumer. Second, the presence of curvature in preferences implies that the premium dynamics are completely pinned down in the optimal contract. This is in contrast with quasi-linear environments considered in the literature, where transfers are only pinned down up to their total ex-post present value.\footnote{ This statement assumes equal discount rates for both involved parties. \cite{krasikov2021implications} show that, if the mechanism designer is more patient than the consumer, the efficiency gain from front-loading completely pins down transfers in the optimal mechanism. } A notable exception on both fronts is \cite{garrett2015dynamic}, which studies the optimal compensation scheme for a manager who is privately informed about his persistent productivity type in two periods. Production in each period is observable and corresponds to the sum of productivity and unobservable effort. Instead of studying a relaxed problem, their approach is to construct two sets of perturbations which retain implementability in the original mechanism design problem and exploit these perturbations to characterize the dynamics of distortions, which correspond to under-provision of effort, in the optimal mechanism. In their model, even the implementation of constant efforts (which only depend on first period productivity) requires the introduction of additional consumption variation in the second period, which is costly for a risk averse manager. This is a crucial difference to my framework, where distortions \textit{correspond to} consumption variability within a period. As a consequence, they show that, if the manager is risk averse and types are sufficiently persistent, distortions in the optimal mechanism increase over time on average. The reverse is true if the manager is sufficiently close to risk neutrality. Another closely related paper is \citet{battaglini2005long}, which considers the design of dynamic selling mechanisms in a monopoly setting where the customer's valuation follows a two-state Markov chain. In the optimal mechanism, production becomes efficient when the customer obtains his first high valuation and converges to the efficient level along the history path of consecutive low valuations, which is analogous to our characterization of distortions for realization-independent contracts (Proposition \ref{prop:RI_monotonicity}). Even though the productive allocation is fully characterized, the pricing scheme is not unique.
\cite{HendelLizzeri2003} study long-term life insurance contracts with symmetric information and one-sided commitment. The optimal contract features front-loaded payments, which remedies the consumer's inability to commit by locking in consumers who expect to pay lower premiums later in the relationship. Using data from the U.S. life insurance market, they show that front-loading is a common feature of contracts, with more front-loading being negatively correlated with the net present value of premiums. \cite{ghili2019welfare} study the same contract design problem in health insurance markets. They characterize the optimal dynamic contract assuming symmetric risk information and one-sided commitment, which features full insurance in each period. The presence of one-sided commitment precludes complete consumption smoothing over time, hence the consumer is offered a \textit{consumption floor}, which is adjusted upwards whenever the consumer's outside option is attractive enough so that the participation constraint becomes binding. Finally, they estimate a stochastic Markovian model of health status and medical expenses using data from the state of Utah and numerically find the optimal long-term contract for the estimated model parameters. The analysis of commitment in Subsection \ref{subsec:commitment} shows that the presence of adverse selection may serve as a lock-in device, allowing for implementation of the commitment solution. The argument relies on firms making a negative inference about a consumer's risk level when observing a switching consumer. The presence of such pessimistic beliefs is in line with the findings in \cite{cohen2005asymmetric}, who shows, in the context of Israeli auto insurance, that consumers switching insurance companies have disproportionately bad accident histories and are high risk. In a related paper, \cite{ThoronJPE2005} studies a two-period model with ex-ante symmetric information and learning, and shows that keeping firms from observing the accident history of switching consumers serves as a commitment device and leads to welfare improvements. \section{\label{sec:Model}Model} \subsubsection*{Types} A consumer (she/her) lives for $T \leq \infty$ periods. At the beginning of each period, she privately observes her type $\theta_{t}\in\Theta \equiv \left\{ l,h \right\}$, which determines a probability distribution over realized income $y_{t}\in Y$, with $Y \subset \mathbb{R}_+$ finite. The occurrence of higher losses or damages is represented by a lower level of final income.\footnote{ If the customer has fixed per period flow income $y_{0}>0$ and losses $\mathfrak{l} \in \mathfrak{L}$, her realized income is $y=y_{0}-\mathfrak{l}$. } I refer to type $h$ as the high-type. In each period $t=1,\dots,T$, the probability distribution of income level $y_t$ depends on type $\theta_t$ and is represented by $p_i \in \Delta Y$, for $i=l,h$, satisfying \[ \sum_{y\in Y}p_{{l}}\left(y\right)y<\sum_{y\in Y}p_{{h}}\left(y\right)y. \] I assume that types $\left\{ \theta_t \right\}_{t=1}^T$ follow a Markov process. The time-invariant transition probabilities are denoted as \[ \pi_{ij} \equiv \mathbb{P}\left(\theta_{t+1}=j\mid\theta_{t}=i\right), \] while the distribution of initial type $\theta_1$ is denoted by $\pi_i \equiv \mathbb{P}(\theta_1 = i)$. Types are persistent, i.e., having a given type in period $t<T$ leads to a higher probability of having the same type in period $t+1$. In short, we assume that $\pi_{ii} > \pi_{ji}$ for $i,j \in \left\{ l,h \right\}$ with $i \neq j$.
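For instance, a purely illustrative numerical specification satisfying this persistence condition (the numbers play no role in the analysis) is
\[
\pi_{hh}=0.9,\quad \pi_{hl}=0.1,\quad \pi_{ll}=0.8,\quad \pi_{lh}=0.2,
\]
so that $\pi_{hh}=0.9>0.2=\pi_{lh}$ and $\pi_{ll}=0.8>0.1=\pi_{hl}$.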
Note that $\pi_{ll}=\pi_{hh}=1$ is included as a special case.\footnote{ The time-invariance assumptions made here can be significantly relaxed, as discussed in Section \ref{sec:conclusion}. } \subsubsection*{Preferences} Consumer preferences over final consumption flows are determined by a Bernoulli utility function \( u:\mathbb{R}_{+} \mapsto \mathbb{R} \), assumed to be twice continuously differentiable, strictly concave and strictly increasing.\footnote{ This formulation rules out the relevant case of logarithmic utility, as its domain excludes zero. For finite $T$, the results extend to preferences satisfying $\lim_{c\downarrow0}u\left(c\right)=-\infty$. } The inverse of the utility function $u\left( \cdot \right)$ is denoted as $\psi\left( \cdot \right)$. The consumer discounts the future according to factor $\delta\in\left(0,1\right)$. The utility obtained from deterministic consumption stream $\left\{ c_t \right\}_{t=1}^T$ is given by \[ \sum_{t=1}^{T}\delta^{t-1}u\left(c_{t}\right). \] I study the profit maximization of a risk-neutral firm with the same discount factor. The payoff (or profit) obtained by a firm is determined by its net payments to the consumer if its contract is accepted, and is zero otherwise. Hence the payoff obtained by a firm, for a given realization of the income and consumption paths $\left\{ y_t , c_t \right\}_{t=1}^T$, is given by \[ \sum_{t=1}^{T}\delta^{t-1}\left(y_{t}-c_{t}\right). \] Both the firm's and the consumer's preferences are extended to random outcomes by using expected payoffs. \subsubsection*{Flow contracts} A flow contract is a standard single-period insurance policy: it specifies a premium and coverage for all possible income realizations. I assume that the income realization is observable and contractable and hence the fulfillment of a flow contract does not involve incentive considerations. For tractability, a flow contract is described here through the induced final consumption of the consumer, i.e., the set of flow contracts is given by \( Z\equiv\mathbb{R}_{+}^{Y}. \) Each contract $z\in Z$ specifies the final consumption $z\left(y\right)$ of the consumer if income $y\in Y$ is realized; this final consumption equals the income $y$ plus any policy coverage minus the premium paid. If a flow contract $z$ is provided to a consumer with type $\theta$, the consumer's utility can be described by the function \[ v\left(z,\theta\right)\equiv\sum_{y\in Y}p_{\theta}\left(y\right)u\left[z\left(y\right)\right], \] while the profits obtained by a firm can be described by \[ \xi\left(z,\theta\right)\equiv\sum_{y\in Y}p_{\theta}\left(y\right)\left[y-z\left(y\right)\right]. \] \subsubsection*{Long-term contracts} A long-term contract is a mechanism which specifies at the initial period $t=1$ the flow contracts to be offered to the consumer at each period, which may depend on messages sent by the consumer and, potentially, information about the history of past income realizations. In several insurance markets, regulatory restrictions limit the extent to which firms can explicitly use the history of accidents or losses in pricing contracts (see \cite{handel2015equilibria}, for example). I allow for such restrictions by assuming that firms are only able to use a coarsening of the history of past income realizations. These restrictions are represented by a \textit{signal structure}, which is composed of a finite \textit{set of signals} $\Phi$ and a surjective \textit{signal function} $\phi:Y \mapsto \Phi$.
The signal structure is exogenously fixed. It imposes restrictions on the firm's contract design problem, as the income realization $y_t$ can only impact future offers via $\phi_t \equiv \phi(y_t)$. One special case is that of \textit{fully-contingent mechanisms}, with $\Phi=Y$ and $\phi\left(y\right)=y$ for all $y\in Y$. In this case, prices depend explicitly on both the history of reports by the consumer and the observed history of accidents, both of which are informative to firms. Another extreme case of interest is that of \textit{realization-independent mechanisms}, which corresponds to $\Phi$ being a singleton. In this case, firms are completely unable to use the history of previous income realizations in pricing. While the results in Subsections \ref{subsec:realization-independent-contracts} and \ref{subsec:commitment} focus on the case of realization-independent mechanisms, all other results apply for an arbitrary signal structure. From the revelation principle, we can restrict attention to direct mechanisms, where the set of possible messages in each period coincides with the set of types $\Theta$, and to truthful equilibria, in which the consumer finds it optimal to truthfully report her type. I denote the history of announcements and signals up to period $t$ as the observable history $\eta^{t}\in H^{t}\equiv\Phi^{t}\times\Theta^{t}$.\footnote{ For completeness, we define $H^{0}\equiv\left\{ \emptyset\right\} $. } I denote a history of types $(\theta_1,\dots,\theta_t)$ up to period $t$ as $\theta^t$, the expanded history $(\theta_1,\dots,\theta_t,\theta_{t+1})$ as $(\theta^t,\theta_{t+1})$, the sub-history $(\theta_\tau,\dots,\theta_{\tau'})$, for $\tau\leq \tau' \leq t$, as $\left[ \theta^t \right]_\tau^{\tau'}$, and, for a history $\theta^t$ and $\tau \leq t$, refer to $\theta_\tau$ as $\left[ \theta^t \right]_\tau$. Also define $h^t$ as the $t$-period history $(h,\dots,h)$. The same notation is used for income realizations $y_t$, signal realizations $\phi_t$ and histories $\eta^t$. A direct mechanism is defined as $M=\left\{ z_{t}\right\} _{t=1}^{T}$, with $z_{t}:H^{t-1}\times\Theta\mapsto Z$, and the set of direct mechanisms is $\mathcal{M}$. A direct mechanism $M=\left\{ z_{t}\right\} _{t=1}^{T}$ specifies the flow contract $z_{t}\left(\eta^{t-1},\hat{\theta}_{t}\right)$ to be provided to the consumer at each period $t$, which depends on the history of signals and announcements up to period $t-1$, as well as the new report $\hat{\theta}_{t}$ made at period $t$. The flow contract at period $t$ determines the level of coverage obtained by the consumer within period $t$, and hence her realized consumption in period $t$, which is given by \[ z_{t}\left(y_{t}\mid\eta^{t-1},\hat{\theta}_{t}\right) \] and also depends on the realized income $y_{t}$. Figure \ref{fig:TimingMechanism} illustrates the sequence of events within a direct mechanism for a particular period. I assume that flow contracts with arbitrary income-contingent transfers can be executed without frictions. In the example of health insurance, this means that the firm can specify and enforce transfers/coverage that depend directly on health shocks within a given period, but may --- depending on $\phi$ --- not be able to use this information explicitly when determining future offers to be made to the consumer.
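To illustrate the objects above in the simplest possible setting (a purely illustrative special case, not a restriction of the model), suppose there is a single possible loss, so that $Y=\left\{ y_{0},y_{0}-\mathfrak{l}\right\}$ for some flow income $y_{0}$ and loss size $\mathfrak{l}>0$. A standard policy charging a premium $P$ and paying coverage $C$ in the event of a loss corresponds to the flow contract
\[
z\left(y_{0}\right)=y_{0}-P,\qquad z\left(y_{0}-\mathfrak{l}\right)=y_{0}-\mathfrak{l}-P+C,
\]
so full coverage corresponds to $C=\mathfrak{l}$, in which case final consumption $y_{0}-P$ does not depend on the realized income. With a fully-contingent signal structure ($\Phi=Y$), the menus offered from period $t+1$ onwards may depend on whether the period-$t$ loss occurred; with a realization-independent signal structure ($\Phi$ a singleton), they may not, even though the within-period transfer contingent on the loss is still enforced.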
\begin{figure} \caption{Timing of events of a particular period $t$ within a direct mechanism} \label{fig:TimingMechanism} \end{figure} A private history in period $t$ includes all information available to the consumer, and is described as $ \eta_{p}^{t}=\left(y^{t} , \hat{\theta}^{t} , \theta^{t}\right)\in H_{p}^{t} $, which includes the history of income realizations $y^{t}$, type realizations $\theta^{t}$ and reported types $\hat{\theta}^{t}$. The set of private histories in period $t$ is denoted as $H_{p}^{t}$. A reporting strategy is denoted by $r=\left\{ r_{t}\right\} _{t=1}^{T}$ with $r_{t}:H_{p}^{t-1}\times\Theta\mapsto\Theta$. The truth-telling strategy is denoted by $r^{*}$ and satisfies $r_{t}^{*}\left(\eta_{p}^{t-1},\theta\right)=\theta$, for all $\eta_{p}^{t-1}\in H_{p}^{t-1}$ and $\theta\in\Theta$. The history of reports up to period $t$ generated on-path by strategy $r$ --- which depends solely on the realized history of types $\theta^{t}$ and income levels $y^{t-1}$ --- is denoted as $r^{t}\left( y^{t-1} , \theta^{t}\right)$. The set of reporting strategies is denoted as $R$. The consumer's payoff from a reporting strategy $r\in R$ is denoted as \[ V_{0}^{r}\left(M\right) \equiv \mathbb{E}\left\{ \sum_{t=1}^{T}\delta^{t-1}v\left[ z_{t}\left( \phi^{t-1} , r^{t}\left( y^{t-1} , \theta^{t} \right)\right) ,\theta_{t} \right] \right\} . \] \begin{defn} \label{def:incentive-compatility} A direct mechanism $M=\left\{ z_{t}\right\} _{t=1}^{T}$ is incentive compatible if, for all $r\in R$, \[ V_{0}^{r^{*}}\left(M\right)\geq V_{0}^{r}\left(M\right). \] \end{defn} The set of incentive compatible mechanisms is denoted as $\mathcal{M}_{IC}$. Finally, for an observable history $\eta^{t-1}=\left(\phi^{t-1},\theta^{t-1}\right)\in H^{t-1}$ and period $t$ type $\theta_{t}\in\Theta$, we denote the continuation utility of a consumer following reporting strategy $r$ as $V_{t}^r \left(M\mid\eta^{t-1},\theta_{t}\right)$, or simply as $V_{t}\left(M\mid\eta^{t-1},\theta_{t}\right)$ for reporting strategy $r^{*}$. When the mechanism $M$ is clear --- such as the profit maximizing one --- we may simply use $V_t (\eta^{t-1}, \theta_t)$ for brevity. \section{Profit maximization} \label{Sec:profit-maximizing-contracts} I assume throughout that the consumer privately learns $\theta_1$ prior to contracting with the firm. An insurance firm's mechanism design problem can be separated into two parts. First, a firm must decide on how attractive its offer is for different types of potential customers, which is described by the total expected utility from participation (utility choice). Second, the firm must choose the mechanism details in order to maximize profits within all mechanisms that deliver the same utility to the consumer (feature design). I start by focusing on the feature design problem and provide a characterization of profit maximizing mechanisms, for fixed discounted expected utility to be provided to the consumer with each initial type. The study of the feature design problem allows for the characterization of qualitative properties of optimal mechanisms that hold under multiple market structures which lead to different equilibrium utility levels for the consumer. Sections \ref{sec:competitive-analysis} and \ref{sec:monopoly} analyze the cases of perfect competition and monopoly, respectively.
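As a minimal point of reference for the notation above (a special case recorded only for illustration), when $T=1$ there are no histories or signals to condition on: a direct mechanism reduces to a pair of flow contracts $\left(z_{1}(l),z_{1}(h)\right)$, and Definition \ref{def:incentive-compatility} reduces to the two familiar static constraints
\[
v\left(z_{1}(h),h\right)\geq v\left(z_{1}(l),h\right)
\qquad\text{and}\qquad
v\left(z_{1}(l),l\right)\geq v\left(z_{1}(h),l\right).
\]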
The total discounted expected profit from a direct mechanism $M\in\mathcal{M}$ is given by \begin{equation} \Pi\left(M\right)\equiv\mathbb{E}\left[\sum_{t=1}^{T}\delta^{t-1}\xi\left(z_{t}\left(\eta^{t-1},\theta_{t}\right),\theta_{t}\right)\right].\label{eq:define_profit} \end{equation} I am interested in studying market settings where consumers receive offers from firms after learning their initial state $\theta_{1}\in\left\{ l,h\right\} $. The set of feasible utility pairs for both initial types consistent with finite profits is given by\footnote{ Restricting attention to mechanisms generating finite profits is a technical condition that avoids the analysis of transversality conditions when studying the firm's problem and is without loss for the applications considered in Sections \ref{sec:competitive-analysis} and \ref{sec:monopoly}. It is innocuous in the case $T < \infty$. } \[ \mathcal{V}\equiv\left\{ \left(V_{1}\left(M\mid l\right),V_{1}\left(M\mid h\right)\right) \mid M\in\mathcal{M}_{IC},\Pi(M) > - \infty \right\}. \] I refer to the profit maximization problem of the firm, for any $V=\left(V_{l},V_{h}\right)\in\mathcal{V}$, as \begin{equation} \Pi^{*}\left(V\right)\equiv\sup_{M\in\mathcal{M}_{IC}}\Pi\left(M\right),\label{eq:profit_max} \end{equation} subject to, for $\theta\in\left\{ l,h\right\} $, \begin{equation} V_{1}\left(M\mid\theta\right)=V_{\theta}.\label{eq:utility_constraints} \end{equation} If this problem has a unique solution, we denote it as $M^V$. \section{Complete information benchmark} In the absence of private information, efficient long-term contracts provide complete coverage to the consumer, with final consumption that does not depend on her realized history of incomes and types, except for the initial type $\theta_1$. I denote a full coverage flow contract with consumption level $c\in\mathbb{R}_{+}$ as $z^{c}$, i.e., \[ z^{c}\left(y\right)=c, \] for all $y\in Y$. I define the complete information problem as that of maximizing profits, given by (\ref{eq:define_profit}), subject to payoff constraint (\ref{eq:utility_constraints}), with choice set $\mathcal{M}$. The solution to the complete information problem is denoted by $\left\{ z_{t}^{CI}\right\} _{t=1}^{T}$. The following lemma, whose proof is standard and hence omitted, formally states that the complete information solution features perfect consumption smoothing. \begin{lem} (Complete information benchmark) The solution to the full information problem satisfies \[ z_{t}^{CI}\left(\eta_{p}^{t-1},{\theta}_{t}\right)=z^{c^{CI}\left(\theta_{1}\right)}, \] where $c^{CI}\left(\cdot\right)$ is defined by \[ \sum_{t=1}^{T}\delta^{t-1}u\left(c^{CI}\left(\theta_1\right)\right) = V_{\theta_1}. \] \end{lem} If $V_l = V_h$, the complete information solution is incentive compatible and hence solves $\Pi^*(V)$. Our analysis of distortions will focus on utility pairs $V \in \mathcal{V}$ satisfying $V_h > V_l$, i.e., where the utility delivered to an initially high-type consumer is higher than that of a low type. The focus on this case is justified in Sections \ref{sec:competitive-analysis} and \ref{sec:monopoly}, where this inequality is shown to hold in equilibrium for both the competitive and monopoly settings. In the reverse case where $V_l > V_h$, all the results presented hold when interchanging the role of types $h$ and $l$. In particular, the relevant relaxed problem would consider ``upward'' incentive constraints.
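As an explicit illustration of the benchmark (an immediate rearrangement of the lemma, recorded here only for reference), the full-coverage consumption level can be written using the inverse utility function $\psi$: since $\sum_{t=1}^{T}\delta^{t-1}=\left(1-\delta^{T}\right)/\left(1-\delta\right)$ for finite $T$,
\[
c^{CI}\left(\theta_{1}\right)=\psi\left(\frac{1-\delta}{1-\delta^{T}}\,V_{\theta_{1}}\right),
\]
and $c^{CI}\left(\theta_{1}\right)=\psi\left(\left(1-\delta\right)V_{\theta_{1}}\right)$ in the infinite horizon case. In particular, when $V_{h}>V_{l}$ the initially high-type consumer receives a strictly higher constant consumption level under complete information.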
\section{Incentives and distortions} \label{sec:Incentives_Distortions} I now characterize the solution to the profit maximization problem (\ref{eq:profit_max}). The main result in this section, Lemma \ref{Lem:Properties_relaxed_problem}, provides a characterization of the solution of the relaxed problem, which is used to show that the solution to the relaxed problem also solves the firm's original profit maximization problem. In an optimal mechanism, the impact of any information revealed over time --- be it signal realizations or type announcements --- is determined by its likelihood ratio: the ratio of its probability under a low-type to its probability under a high-type. For example, income realizations that are relatively more likely for high-types are rewarded with higher flow and higher continuation utilities. Additionally, the assumption of type persistence implies that future high-type announcements, under truth-telling, are relatively more likely to come from a consumer with an initially high type. As a consequence, high-type announcements are rewarded with more attractive continuation contracts. More precisely, a solution to the relaxed problem is shown to satisfy three monotonicity properties that provide insights into the efficient provision of dynamic incentives. These monotonicity properties are also used, in Subsection \ref{subsec:sufficiency}, to guarantee that the solution to the relaxed problem solves the original profit maximization problem. \subsection{Relaxed problem} \label{subsec:relaxed-problem} Define a reporting strategy $r$ to be a one-shot critical deviation (OSCD) from truth-telling at period $t$ with signal history $\phi^{t-1}$ if the consumer reports a high-type when having a low-type for the \textit{first} time at period $t$, but otherwise reports her type truthfully. In other words, it satisfies $r_t (\phi^{t-1},h^{t-1},h^{t-1},l)=h$, for some period $t \in \left\{ 1,\dots,T \right\}$ and signal history $\phi^{t-1} \in \Phi^{t-1}$, but is otherwise identical to the truth-telling strategy. I say that a mechanism is one-shot incentive compatible (OSIC) if the consumer has no profitable OSCD, and the set of OSIC direct mechanisms is defined as $\mathcal{M}_{OSIC}$. One-shot deviations are restrictive in three ways. First, a one-shot deviation involves a misreport in a single contingency, and truth-telling otherwise. Second, this single deviation occurs along a truth-telling path, i.e., the consumer considers a single misreport at a given period $t$ after $t-1$ periods of truth-telling. Finally, this deviation involves the consumer announcing to have a low-risk type within period $t$, when in fact having a high-risk type. Figure \ref{fig:OneShotIncentiveConstraints} represents graphically the potential deviations considered in the relaxed problem if the set of possible signals is $\Phi = \left\{ \phi_a, \phi_b, \phi_c \right\}$. The blue arrows represent the direction of misreports, where the low-type pretends to have the high-type in a given period. \begin{figure} \caption{This figure represents the set of binding incentive constraints in the relaxed problem.
The black branches represent possible type realizations, red branches correspond to possible signal realizations, and the blue curved arrows represent the upward incentive constraints considered in the relaxed problem.} \label{fig:OneShotIncentiveConstraints} \end{figure} The relaxed problem is the following: \begin{equation} \Pi^R\left(V\right)\equiv\sup_{M\in\mathcal{M}_{OSIC}}\Pi\left(M\right), \label{eq:RelaxedProblem} \end{equation} subject to, for $\theta\in\left\{ l,h\right\} $, \[ V_{1}\left(M\mid\theta\right)=V_{\theta}. \] \begin{assum} The relaxed problem has a solution, for all $V \in \mathcal{V}$. \end{assum} As shown below, existence of a solution is guaranteed in the presence of a finite horizon ($T<\infty$), or in the infinite horizon case as long as the utility function $u(\cdot)$ is bounded. I do not provide an exhaustive analysis of the issue of existence of a solution for the infinite horizon case, but the characterization results in this paper apply more generally whenever an optimal mechanism exists. \begin{lem} If $T<\infty$ or $u(\cdot)$ is bounded, the relaxed problem has a solution for all $V \in \mathcal{V}$. \end{lem} \begin{proof} Consider any $V \in \mathcal{V}$. The definition of $\mathcal{V}$ implies that $\Pi^R(V) \geq \Pi^*(V) > -\infty$. Consider a sequence of mechanisms $\left\{ M^n = \left\{ z^n_t \right\}_{t\leq T} \right\}_{n \in \mathbb{N}}$ feasible in problem $\Pi^R(V)$ such that $\Pi(M^n) > - \infty $ and $ \Pi(M^n) \rightarrow \Pi^R(V)$ as $n \rightarrow \infty$. The fact that $ \Pi(M^n) > - \infty$ implies that, for each period $t$ and history $(\phi^{t-1},\theta^{t}) \in \Phi^{t-1}\times \Theta^t$, the sequence $\left\{ z^n_t(\phi^{t-1} , \theta^t) \right\}_{n \in \mathbb{N}}$ is uniformly bounded by a constant $\bar{z}(\phi^{t-1},\theta^{t}) < \infty$ and, as a consequence, we can assume without loss (potentially using a subsequence) that $\left\{ z^n_t(\phi^{t-1} , \theta^t) \right\}_{n \in \mathbb{N}}$ converges for any $t,\phi^{t-1},\theta^t$. Now let $M^* \equiv \left\{ \lim_n z^n_t \right\}_{t \leq T}$. If $u(\cdot)$ is bounded or $T<\infty$, the dominated convergence theorem implies that $V_0^r(M^*) = \lim_{n \rightarrow\infty} V_0^r(M^n)$ for any reporting strategy $r$, which implies that $M^*$ is feasible in problem $\Pi^R(V)$. Since the profits of any two mechanisms only differ in the consumption paid out to the agent, Fatou's lemma implies that \begin{equation} -\Pi(M^*) \leq \liminf_n \left[ -\Pi(M^n) \right] = -\Pi^R(V), \end{equation} which means that $M^*$ solves $\Pi^R(V)$. \end{proof} Define $\tau (\eta^{t-1}, \theta_t)$ as the first period at which the consumer has a low type (if any), i.e., \[ \tau (\eta^{t-1}, \theta_t) = \min \{ t' \in \{1, \dots , t \} \mid [\theta^{t-1}, \theta_t]_{t'} = l \}, \] with the convention that $\min \emptyset = \emptyset$. For example, $ \tau \left( \phi^t,(l,h,\dots,h) \right) =1 $ represents a first period low-type, and $\tau = \emptyset$ represents the absence of a low-type realization, i.e., $ \tau \left( \phi^t,(h,\dots,h) \right) = \emptyset $. In the rest of this section, we omit the dependence of $\tau$ on the history of signals and types for brevity. For any income level $y \in Y$ and signal $\phi_0 \in \Phi$, define likelihood ratios \[ \ell (y) \equiv \frac{p_{l}\left(y\right)}{p_{h}\left(y\right)}, \] and \begin{equation*} \ell (\phi_0) \equiv \frac{ \sum_{y \in \phi^{-1}(\phi_0)} p_{l}\left(y\right) }{ \sum_{y \in \phi^{-1}(\phi_0)} p_{h}\left(y\right) }.
\end{equation*} As is standard in contract theory, these ratios are useful because they measure how informative each income or signal realization is for screening different types. A realization with a high likelihood ratio is more ``indicative'' of a low-type. The following lemma states that the relaxed problem has a unique solution and outlines key properties of this solution. This result is proved in the Appendix and its proof exploits the recursive structure of problem $\Pi^R$. \begin{lem} \label{Lem:Properties_relaxed_problem} For any $V\in\mathcal{V}$, the relaxed problem has a unique solution. Moreover, if $V_{h}>V_{l}$, its solution satisfies: (i) All OSIC constraints hold as equalities. (ii) No distortions following low-type: there exists $\{ c_t \}_{t=1}^T$, with $c_t:\Phi^{t-1} \mapsto \mathbb{R}_+$ such that \[ z_t (\phi^{t-1},\theta^{t-1}, \theta_t) = z^{c_\tau (\phi^{\tau-1})}, \] whenever $\tau \neq \emptyset.$ Additionally, there exist $\{\mu_t, \lambda_t\}_{t=1}^{T}$ with $(\mu_t, \lambda_t):\Phi^t \mapsto \mathbb{R} \times \mathbb{R}_+$ such that, for any $(\eta ^ {t-1}, \theta_t)$ such that $\tau = \emptyset$, (iii) Flow contracts following high-types have partial coverage: \begin{equation} \label{FOC_relaxed} \mu_{t-1}(\phi^{t-1}) - \lambda_{t-1}(\phi^{t-1}) \ell(y_t) \leq \frac{1}{ u' \left( z_t ( y_t \mid \eta ^ {t-1}, h) \right)} \end{equation} with (\ref*{FOC_relaxed}) holding as an equality if $z_t(y_t \mid \eta ^ {t-1}, h) > 0$. (iv) Future type reward: \[ V_{t}(\eta^{t-1},h) > V_{t}(\eta^{t-1},l) \] (v) Future signal effect: for any $\phi, \phi'$ such that $\ell (\phi') \leq \ell (\phi)$: \begin{multline*} \pi_{lh} V_t( (\phi^{t-2},\phi' , h^{t-1}) , h) +\pi_{ll} V_t( (\phi^{t-2},\phi' , h^{t-1}) , l) \\ \geq \pi_{lh} V_t( (\phi^{t-2},\phi , h^{t-1}) , h) +\pi_{ll} V_t( (\phi^{t-2},\phi , h^{t-1}) , l) \end{multline*} \end{lem} Property (i) states that all upward incentive constraints considered in the relaxed problem hold as equalities. The optimal contract is supposed to provide higher utility to a consumer with high initial type, while deterring deviations by low-types. Given the presence of type persistence, one way to achieve this goal is to use a \textit{future} high-type realization as a signal that the consumer's initial type is $\theta_1 = h$. In other words, the utility gap between consumers with initially high and low types is propagated to future periods, with consecutive high-type announcements being ``rewarded'' with higher utility. For this reason, upward incentive constraints bind not only at $t=1$, but in all periods. Property (ii) corresponds to a ``no distortion at the bottom'' result. This follows directly from the fact that the relaxed profit maximization problem ignores ``downward'' incentive constraints. In Subsection \ref{subsec:sufficiency} we show that those ignored constraints are guaranteed to hold in the solution to the relaxed problem. Properties (iii)-(v) of Lemma \ref{Lem:Properties_relaxed_problem} illustrate how exogenous signal information ($\phi_t$) and endogenous reports ($\hat{\theta}_t$) are used in an optimal mechanism to efficiently screen consumers with different types, using both within-period coverage and continuation utility. Given the crucial role of these properties, we now discuss in detail the interpretation of each such property and show, in Lemmas \ref{Lem:Flow_Monotonicity} and \ref{Lem:Continuation_Monotonicity}, how these properties imply that incentives within the mechanism are properly aligned.
These results are akin to the standard monotonicity property in mechanism design, which is used to show that a mechanism solving an appropriately defined relaxed problem, one that ignores certain incentive constraints, also satisfies the ignored incentive constraints. \subsection{Monotonicity} \label{sec:monotonicity} I now introduce a new set of monotonicity conditions on mechanisms which, together with binding ``upward'' incentive compatibility constraints, imply incentive compatibility. Lemma \ref{Lem:Properties_relaxed_problem} implies that the solution to the relaxed problem satisfies all three monotonicity conditions and all OSIC constraints as equalities, and hence solves the firm's original problem. As discussed in the \hyperref[Sec:Appendix]{Appendix}, these monotonicity properties allow us to exploit the recursive structure of the firm's relaxed problem, which can be broken into a series of one-period problems in which the designer chooses in each period a pair of flow contracts as well as promised continuation utilities to the agent. In this subsection, I discuss how the monotonicity conditions provide insights into the efficient provision of dynamic incentives and their role in connecting the relaxed and original profit maximization problems. \subsubsection{Flow monotonicity} Flow monotonicity is akin to standard static monotonicity notions, in that it guarantees that within-period incentives are aligned by making sure that the partial coverage contract tailored to the (current) high-type consumer rewards accident realizations that are indicative of a high type, according to a likelihood ratio condition. More precisely, flow monotonicity corresponds to property (iii) in Lemma \ref{Lem:Properties_relaxed_problem}. For any income level $y \in Y$, the likelihood ratio $\ell (y)$ provides a measure of how much income $y$ is indicative of low-types. As a consequence, it is expected that profit maximizing contracts seeking to attract high-type consumers will use flow contracts that punish income realizations that are indicative of low-types (high $\ell (y)$) and reward realizations that are indicative of high-types (low $\ell (y)$). \begin{defn} A flow contract $z \in Z$ satisfies flow monotonicity if it satisfies \begin{equation*} \ell (y') > \ell (y) \implies z(y') \leq z(y). \end{equation*} \end{defn} The use of flow contracts satisfying flow monotonicity implies that high-type consumers gain more than low-type consumers from choosing such a contract over a full-coverage contract. \begin{lem} \label{Lem:Flow_Monotonicity} For any pair of flow contracts $z,z^c \in Z$, with $z$ satisfying flow monotonicity and $c\geq 0$, \begin{equation*} v(z , h) - v(z^c , h) \geq v(z , l) - v(z^c , l) \end{equation*} \end{lem} \begin{proof} First notice that $v(z^c , h) = v(z^c , l)$. Consider an ordering $\left\{ y_{[k]} \right\}_{k=1}^{K}$ of income realizations, with $K\equiv\# Y$, such that $\ell (y_{[k]})$ is weakly decreasing in $k$. Flow monotonicity implies that both $z(y_{[k]})$ and $\left[ 1 - \ell (y_{[k]}) \right]$ are weakly increasing in $k$. Hence, by Chebyshev's sum inequality, we have that \begin{align*} v(z , h) - v(z , l) & = \sum_{k=1}^{K} p_h (y_{[k]}) u(z(y_{[k]})) \left[ 1- \ell (y_{[k]}) \right]\\ & \geq \left\{ \sum_{k=1}^{K} p_h (y_{[k]}) u(z(y_{[k]})) \right\} \left\{ \sum_{k=1}^{K} p_h (y_{[k]}) \left[ 1- \ell (y_{[k]}) \right] \right\}, \end{align*} and the right-hand side is equal to zero since the last factor satisfies $\sum_{k=1}^{K} p_h (y_{[k]}) \left[ 1- \ell (y_{[k]}) \right] = \sum_{y\in Y}\left[p_h(y)-p_l(y)\right]=0$. \end{proof} \subsubsection{Continuation monotonicity} The notion of flow monotonicity is inherently static.
However, in a dynamic environment firms have extra instruments to screen consumers, namely they can use future contracts as additional screening instruments. From the consumer's incentive perspective, the use of such dynamic incentive schemes is represented by changes to her expected continuation utility within the mechanism as a response to either her choices or realized income within a period. The term continuation monotonicity relates to how, in any given period, the consumer's continuation utility depends on her previous announcements and signal realizations, which correspond, respectively, to properties (iv) and (v) in Lemma \ref{Lem:Properties_relaxed_problem}. The first such notion is continuation-signal-monotonicity (CSM), which states that signal realizations that are indicative of low-types are punished in future periods. From the point of view of period $t$, the consumer's continuation payoff from period $t+1$ onwards depends not only on period $t$'s realized signal but also on her realized type in period $t+1$. CSM compares different signal realizations, while taking averages over possible future types using the type-transition probabilities of a low-type in period $t$, who should be discouraged from misreporting her type in period $t$. \begin{defn} For any $t<T$, a mechanism $M$ satisfies CSM at period $t$ and history $\eta^{t-1}$ if, for any $\phi,\phi' \in \Phi$ \begin{equation*} \ell (\phi') \leq \ell (\phi) \end{equation*} implies \begin{multline*} \pi_{lh} V_t( M \mid (\phi^{t-2},\phi' , h^{t-1}) , h) +\pi_{ll} V_t( M \mid (\phi^{t-2},\phi' , h^{t-1}) , l) \\ \geq \pi_{lh} V_t( M \mid (\phi^{t-2},\phi , h^{t-1}) , h) +\pi_{ll} V_t( M \mid (\phi^{t-2},\phi , h^{t-1}) , l) \end{multline*} \end{defn} In a similar vein to the previous notions of monotonicity, we now introduce the notion of continuation-type-monotonicity (CTM), which considers the impact of the consumer's type on her continuation utility. As discussed in Section \ref{Sec:profit-maximizing-contracts}, we restrict attention to mechanisms that deliver a higher discounted utility to a consumer with initial type $\theta_1=h$. CTM requires that this ordering holds at a given period and history. \begin{defn} For any $t<T$, a mechanism $M$ satisfies continuation type monotonicity in period $t$ and history $\eta^{t-1}$ if, for any $\eta^t= (\eta^{t-1}, \phi, h)$, \begin{equation*} V_{t+1}(M \mid \eta^{t},h) > V_{t+1}(M \mid \eta^{t},l). \end{equation*} \end{defn} These monotonicity properties allow us to show that incentives are aligned since the continuation contract following an announcement $\hat{\theta} = h$ is more attractive to a consumer whose true type is indeed $\theta_t=h$. For a fixed mechanism $M$, period $t<T$ and history $\eta^{t-1} = (\phi^{t-1}, \theta^{t-1})$, define the continuation utility obtained by a consumer with type $i \in \left\{ l, h \right\}$ announcing to have a high type as \begin{equation} \label{def:Vhat_continuation} \hat{V}_{t+1} (i \mid \eta^{t-1}) \equiv \sum_{\phi \in \Phi} p_{i}(\phi) \left[ \sum_{j=l,h} \pi_{ij} V_{t+1} \left( (\phi^{t-1}, \phi , \theta^{t-1}, h) , j \right) \right] \end{equation} \begin{lem} \label{Lem:Continuation_Monotonicity} If mechanism $M$ satisfies CSM and CTM in period $t$, given history $\eta^{t-1}$, then \begin{equation*} \hat{V}_{t+1} (h \mid \eta^{t-1}) \geq \hat{V}_{t+1} (l \mid \eta^{t-1}).
\end{equation*} \end{lem} \begin{proof} Rearranging summation terms gives us: \begin{align*} \hat{V}_{t+1} (h \mid \eta^{t-1}) - \hat{V}_{t+1} (l \mid \eta^{t-1}) = & \sum_{\phi \in \Phi} p_h (\phi) (\pi_{hh} - \pi_{lh}) \left[ V_{t+1} (\eta^{t-1}, \phi, h, h) - V_{t+1} (\eta^{t-1}, \phi, h, l) \right] \\ & + \sum_{\phi \in \Phi} p_h(\phi) \sum_{j=l,h} \pi_{lj} V_{t+1}(\eta^{t-1} , \phi, h, j) \\ &- \sum_{\phi \in \Phi} p_l(\phi) \sum_{j=l,h} \pi_{lj} V_{t+1}(\eta^{t-1} , \phi, h, j). \end{align*} Persistence of types and CTM imply that the first term is non-negative. The sum of the second and third terms can be rewritten as \begin{equation*} \sum_{\phi \in \Phi} p_h(\phi) (1 - \ell(\phi)) \left[ \sum_{j=l,h} \pi_{lj} V_{t+1}(\eta^{t-1} , \phi, h, j) \right], \end{equation*} which is non-negative: it is the expectation, under $p_h$, of the product of two comonotone terms (by CSM), the first of which has zero mean: \begin{equation*} \sum_{\phi \in \Phi} p_h(\phi) (1 - \ell(\phi)) =0. \end{equation*} \end{proof} \subsection{Sufficiency of OSIC} \label{subsec:sufficiency} Besides providing insights into the efficient design of incentives, the three monotonicity notions introduced above also allow us to guarantee that the solution of the relaxed problem is feasible in the original profit maximization problem, and hence solves it. \begin{lem} If $M$ solves the relaxed problem, the consumer has no profitable one-shot deviation from truth-telling in $M$. \end{lem} \begin{proof} Consider a period $t$ with private history $\eta_p^{t-1}=(y^{t-1} , \hat{\theta}^{t-1}, \theta^{t-1})$ and current type $\theta_t$. Since types follow a Markov process, the consumer's preferences over continuation reporting strategies are identical for (i) private history $\eta_p^{t-1}=(y^{t-1} , \hat{\theta}^{t-1}, \theta^{t-1})$ with period $t$ type $\theta_t$, and (ii) private history $\tilde{\eta_p}^{t-1}=(y^{t-1} , {\theta^{t-1}}, \theta^{t-1})$ with period $t$ type $\theta_t$. In other words, we only need to look for deviations at private histories without past misreports. If ${\theta}^{t-1}$ includes any low-type realization, the result holds trivially since the optimal mechanism allocates constant consumption from period $t$ onwards. I focus now on the case ${\theta}^{t-1} = h^{t-1}$. If $\theta_t = l$, a one-shot deviation is a special case of the deviations considered in the OSCD concept and hence is unprofitable in any mechanism that is feasible in the relaxed problem. If $\theta_t = h$, using Lemma \ref{Lem:Properties_relaxed_problem}-(ii) we can represent the net gain from a one-shot deviation as \begin{equation} \label{eq:NetGain_OSD} \frac{1-\delta^{T-t+1}}{1-\delta} u(c_t (\phi^{t-1})) - \left[ v(z_t(\eta^{t-1}, h), h) + \delta \hat{V}_{t+1} (h \mid \eta^{t-1}) \right] , \end{equation} where $\eta^{t-1} = (\phi^{t-1},\theta^{t-1})$ is the public history associated with the private history in focus and $\hat{V}_{t+1}$ is defined as in (\ref{def:Vhat_continuation}).
From Lemma \ref{Lem:Properties_relaxed_problem}, items (iii)-(v), we know that the optimal mechanism satisfies flow and continuation monotonicity and hence, using Lemmas \ref{Lem:Flow_Monotonicity} and \ref{Lem:Continuation_Monotonicity}, the net gain in (\ref{eq:NetGain_OSD}) is weakly lower than \begin{equation} \frac{1-\delta^{T-t+1}}{1-\delta} u(c_t(\phi^{t-1})) - \left[ v(z_t(\eta^{t-1}, h), l) + \delta \hat{V}_{t+1} (l \mid \eta^{t-1}) \right] , \end{equation} which is the net gain from truth-telling for a consumer with type $\theta_t = l$ relative to a misreport in period $t$. From Lemma \ref{Lem:Properties_relaxed_problem}-(i), we know this is zero. \end{proof} The absence of profitable one-shot deviations is enough to guarantee that a mechanism $M=\left\{ z_t \right\}_{t=1}^T$ is incentive compatible as long as the consumer's reporting problem satisfies continuity at infinity, which is guaranteed as long as flow utilities are bounded: \begin{equation*} \sup \left\{ |v(z_t(\eta^{t-1}) , \theta)| \mid t=1,\dots,T, \eta^{t-1}\in H^{t-1},\theta\in \Theta \right\} < \infty, \end{equation*} in which case we say that the mechanism $M$ is bounded. This condition is automatically guaranteed if time is finite or the Bernoulli utility function $u (\cdot)$ is bounded. It holds more generally if consumption in the solution to the relaxed problem is uniformly bounded. \begin{cor} If the solution to the relaxed problem is bounded, it is optimal. \end{cor} \begin{proof} If a profitable deviation $r$ from truth-telling $r^*$ exists in mechanism $M$, boundedness of $M$ implies --- through standard arguments --- that a finite profitable deviation $r'$ involving misreports only up to some period $t<\infty$ also exists. Since one-shot deviations from truth-telling are not profitable, modifying $r'$ by dictating truth-telling in period $t$, regardless of the history, is a weak improvement on $r'$. Repeated application of this argument implies that $r^*$ weakly dominates $r'$, a contradiction. Hence $M$, which solves the relaxed problem, is also feasible in the firm's original problem, so it is optimal. \end{proof} \section{Auxiliary problem} \label{sec:auxiliary-problem} The characterization of coverage and price dynamics in an insurance setting poses two technical challenges. First, the flow contract space, $Z = \mathbb{R}_+^Y$, is multi-dimensional and does not have a natural notion of distortions, such as under-provision in standard screening models (\cite{mussa1978monopoly}, \cite{Myerson81}). The underlying inefficiency in this model is the exposure of the consumer to risk, or partial coverage, which is introduced as a way of screening high-types. I introduce a notion of distortion that is directly tied to its source: the need to preclude low-type consumers from misreporting their types. Second, the introduction of risk aversion in a dynamic screening environment leads to a non-separability issue absent from linear environments. The marginal efficiency loss from the introduction of distortions potentially depends on the underlying utility level obtained by the consumers. As a consequence, the optimal contract jointly chooses both the intertemporal allocation of utility to be provided to the consumer as well as the spreading of distortions over time. I tackle both of these issues by introducing a tractable auxiliary static cost minimization problem.
This auxiliary cost function is then used to study the original dynamic profit maximization problem. \subsection{Definition} From the point of view of incentives, flow contracts fulfill two roles: to provide a certain utility level to a consumer with a currently high type, while also discouraging misreports by a low-type consumer. Hence we consider, for $(\nu, \Delta) \in \mathbb{R}^2$, the problem of finding an optimal flow contract $z \in Z$ satisfying a promise-keeping constraint: \begin{equation} \label{eq:Chi_PromiseKeep} v(z,h) =\nu, \end{equation} as well as a threat-keeping constraint that penalizes misreporting by $l$-type consumers (we will later focus on the case $\Delta > 0$): \begin{equation} \label{eq:Chi_ThreatKeeping} v(z,l) = \nu - \Delta. \end{equation} I denote the set of pairs $(\nu, \Delta) \in \mathbb{R}^2$ such that a flow contract satisfying both (\ref*{eq:Chi_PromiseKeep}) and (\ref*{eq:Chi_ThreatKeeping}) exists as $A \subset \mathbb{R}^2$. The utility wedge $\Delta$ between consumers with different types is the source of distortions in the contract since exposing the consumer to risk is the only way to discourage low-type consumers from misreporting. I refer to it directly as the level of distortion. Define the following auxiliary static cost minimization problem $\mathcal{P}^A$, for any $(\nu, \Delta) \in A$: \begin{equation*} \chi\left(\nu,\Delta\right) \equiv \inf_{z \in Z}\sum_{y \in Y}p_{h}\left(y\right)z\left(y\right), \end{equation*} subject to constraints (\ref*{eq:Chi_PromiseKeep}) and (\ref*{eq:Chi_ThreatKeeping}). \begin{lem} \label{Lem:Chi_uniqueness} For any $(\nu, \Delta) \in A$, problem $\mathcal{P}^A$ has a unique solution and $\chi$ is strictly increasing in both coordinates if $\Delta \geq 0$. Moreover, if the solution at $(\nu, \Delta)$ is interior then $\chi$ is twice differentiable in an open neighborhood of $(\nu, \Delta)$. \end{lem} \begin{proof} Follows directly from Lemma \ref{Lem:Appendix_Chi_Properties} in the Appendix. \end{proof} For any $(\nu, \Delta) \in A$, we denote the solution of $\mathcal{P}^A$ as $\zeta(\nu, \Delta)$. In the remainder of the analysis, I assume that this cost function is supermodular, i.e., that the marginal cost of distortions is increasing in the utility level delivered to the agent. \begin{assum} (Cost supermodularity) \label{Assumption:Cost_supermodularity} Function $\chi(\cdot)$ satisfies \begin{equation*} \frac{\partial^2 \chi(\nu, \Delta)}{ \partial \nu \partial \Delta } \geq 0, \end{equation*} whenever $\mathcal{P}^A$ has an interior solution. \end{assum} The supermodularity of function $\chi (\cdot)$ can be written in terms of the utility function $u(\cdot )$, and is equivalent to the requirement that the coefficient of absolute risk aversion $r_u$ does not decrease with consumption ``too quickly''. This is guaranteed if $u\left(\cdot\right)$ has non-decreasing absolute risk aversion (IARA), which includes constant absolute risk aversion (CARA) utility as a special case. It also holds for the case of constant relative risk aversion (CRRA) utility with coefficient above $1/2$. More formally, defining the coefficient of absolute risk aversion as \begin{equation*} r_u(c) \equiv - \frac{u'' (c)}{u' (c)}, \end{equation*} we can state the following result, which is proved in the Appendix.
\begin{lem} \label{prop:Sufficient_supermodularity} Assume $r_u:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}$ is differentiable, then the following are equivalent: \\ (i) Assumption \ref{Assumption:Cost_supermodularity} holds, \\ (ii) \( r_u'\left(x\right)u'\left(x\right)+2\left[r_u\left(x\right)\right]^{2}\geq0, \) \\ (iii) $\psi''' (x) > 0$. \end{lem} \begin{proof} In the \hyperref[proof:conditions-supermodularity]{Appendix}. \end{proof} \subsection{Interpretation} \label{subsec:interpretation} The auxiliary problem provides a way to study features of the flow contracts offered along the optimal mechanism. In this subsection, we discuss how properties of the cost function $\chi(\cdot)$ translate into risk exposure and consumption within the cost-minimizing flow contract. The optimal intertemporal allocation of consumption and distortions, to be studied in Section \ref{sec:Utility-Distortion-dynamics}, is characterized in terms of the marginal costs of flow utility and distortions. For any $(\nu, \Delta ) \in \mathbb{R} \times \mathbb{R}_+$ for which problem $\mathcal{P}^A$ has an interior solution $\zeta(\cdot)$, we show in the Appendix (Lemma \ref{lem:chi-derivatives-foc}) that marginal costs relate to consumption in the cost-minimizing contract according to: \begin{equation} \label{eq:Chi_FOC_Derivatives} \frac{1}{ u' (\zeta(y)) } = \chi_\nu (\nu, \Delta) + \chi_\Delta (\nu, \Delta) \left[ 1 - \ell (y) \right] , \end{equation} where we use notation $\chi_j(\cdot) \equiv \frac{\partial \chi(\cdot)}{\partial j}$. The inverse marginal utility, which is strictly increasing in consumption, represents the marginal cost for the firm to deliver an additional infinitesimal amount of utility to the consumer at each specific income realization. Given that $\chi(\cdot)$ is a convex function, an increase in flow utility $\nu$ (or in distortion $\Delta$) leads to an increase in $\chi_\nu$ (or in $\chi_\Delta$). From (\ref*{eq:Chi_FOC_Derivatives}), we can separately relate both marginal costs with the consumer's consumption and risk exposure. First, the marginal cost of flow utility is related to the consumer's marginal utility through the following equation: \begin{equation*} \sum_{y \in Y} p_h(y) \frac{1}{ u' (\zeta(y)) } = \chi_\nu (\nu, \Delta). \end{equation*} This is a standard inverse marginal utility optimality condition in contract theory (see \cite{rogerson1985repeated}, for example). As I illustrate in the examples below, since the function $\left[ u'(c) \right]^{-1}$ is strictly increasing in consumption, an increase in $\chi_\nu$ can be seen as an overall increase in consumption, with the exact connection being dependent on the shape of utility $u(\cdot)$. Second, the following condition relates the distortion level $\Delta$ to the responsiveness of consumption to income realizations within a period: for any $y,y' \in Y$ \begin{equation*} \frac{1}{ u' (\zeta(y')) } - \frac{1}{ u' (\zeta(y)) } = -\chi_\Delta (\nu, \Delta) \left[ \ell (y') - \ell (y) \right] . \end{equation*} Intuitively, cost-minimizing flow contracts reward income realizations with low likelihood ratio $\ell(\cdot)$. These two marginal cost equations imply that \begin{equation*} \chi_\Delta^2(\nu, \Delta) \sum_{y \in Y} p_h(y) \left[ 1 - \ell(y) \right]^2 = \sum_{y \in Y} p_h(y) \left\{ [u'(\zeta (y))]^{-1} - \sum_{y' \in Y} p_h(y') [u'(\zeta (y'))]^{-1} \right\}^2. \end{equation*} The right-hand side corresponds to the variance of the inverse marginal utility of the consumer.
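These relations can be verified numerically. The following is a minimal sketch that solves $\mathcal{P}^A$ for one hypothetical parameterization (the income grid, the distributions, the utility function, and the pair $(\nu,\Delta)$ are all assumptions made for illustration, with $\ell(y)=p_l(y)/p_h(y)$) and checks that the inverse marginal utility of the resulting contract is affine in $1-\ell(y)$, as in (\ref{eq:Chi_FOC_Derivatives}).
\begin{verbatim}
# Sketch: solve the static cost-minimization problem P^A numerically and
# check its first-order condition; all parameter values are hypothetical.
import numpy as np
from scipy.optimize import minimize

Y   = np.array([1.0, 4.0, 9.0])
p_h = np.array([0.2, 0.3, 0.5])
p_l = np.array([0.5, 0.3, 0.2])
ell = p_l / p_h

u      = lambda z: 2.0 * np.sqrt(z)      # illustrative utility
inv_mu = lambda z: np.sqrt(z)            # 1/u'(z) for this utility

nu, Delta = 4.0, 0.5                     # promised flow utility and distortion (assumed)

res = minimize(lambda z: p_h @ z, x0=np.full(3, 4.0),
               bounds=[(1e-8, None)] * 3, method='SLSQP',
               constraints=[{'type': 'eq', 'fun': lambda z: p_h @ u(z) - nu},
                            {'type': 'eq', 'fun': lambda z: p_l @ u(z) - (nu - Delta)}])
zeta = res.x                             # cost-minimizing contract zeta(nu, Delta)
chi  = p_h @ zeta                        # chi(nu, Delta)

# At an interior optimum, 1/u'(zeta(y)) should be affine in [1 - ell(y)].
A = np.column_stack([np.ones_like(ell), 1.0 - ell])
coef, *_ = np.linalg.lstsq(A, inv_mu(zeta), rcond=None)
print(chi, coef)                              # cost and fitted (chi_nu, chi_Delta)
print(np.abs(A @ coef - inv_mu(zeta)).max())  # residual, ~0 up to solver tolerance
\end{verbatim}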
Hence an increase in distortion $\Delta$, and therefore in $\chi_\Delta$, generates a larger dispersion in the inverse marginal utility of the consumer across income realizations. A larger distortion requires that the flow contract expose the consumer to more risk, so that a low-type consumer finds this contract less attractive. This condition shows that this additional dispersion is monotonically related to $\chi_\Delta (\cdot)$. A clearer connection can be made for parametric families of utility functions. \subsubsection*{Example: CRRA preferences} Consider the case of CRRA preferences, with \begin{equation*} u(c) = \frac{c^{1- \rho} -1}{1 -\rho}, \end{equation*} with coefficient of relative risk aversion $\rho > 0$. In this case, an increase in the flow utility $\nu$ is directly related to an increase in the $\rho$-th moment of the consumption distribution: \begin{equation*} \chi_\nu (\nu, \Delta) = \sum_{y \in Y} p_h(y) \zeta(y)^\rho, \end{equation*} while an increase in distortion $\Delta$ leads to a larger variance of the $\rho$-th power of final consumption: \begin{equation*} \chi_\Delta^2(\nu , \Delta) = \frac{ \sum_{y\in Y} p_h(y) \left[ \zeta(y)^\rho - \mathbb{E}_h(\zeta^\rho)\right]^2 }{ \sum_{y\in Y} p_h(y) \left[ 1 - \ell (y) \right]^2 }, \end{equation*} where $\mathbb{E}_h(\zeta^\rho) \equiv \sum_{y \in Y} p_h(y)\zeta(y)^\rho$. Two particular cases are of special interest. In the case $\rho =1$, where the utility becomes $u(c) = \log(c)$, the marginal costs $\chi_\nu$ and $\chi_\Delta$ are pinned down by the mean and the standard deviation, respectively, of the within-period consumption of the consumer. Alternatively, if $\rho=1/2$, the marginal costs $\chi_\nu$ and $\chi_\Delta$ are pinned down by the mean and the standard deviation, respectively, of the within-period ex-post utility of the consumer --- $u(\zeta(y))$ --- in the cost minimizing contract $\zeta(\cdot )$. Hence the distortion parameter $\Delta$ represents the consumer's exposure to risk, but evaluated from the point of view of her utility level. \subsubsection*{Example: binary outcomes} In Section \ref{sec:Utility-Distortion-dynamics}, I provide results on the dynamics of utility flow and distortions. In the special case of binary outcomes, with $Y = \left\{ \underline{y} , \bar{y} \right\}$ satisfying $\ell (\underline{y}) > \ell (\bar{y})$, conditions (\ref{eq:Chi_PromiseKeep}) and (\ref{eq:Chi_ThreatKeeping}) imply that $\nu$ and $\Delta$ are connected to the consumer's income-contingent utility in a simple way: \begin{align*} u(\zeta(\bar{y})) = \nu + \frac{p_h(\underline{y})}{p_h (\bar{y}) - p_l (\bar{y})} \Delta, \\ u(\zeta(\underline{y})) = \nu - \frac{p_h(\bar{y})}{p_h (\bar{y}) - p_l (\bar{y})} \Delta. \end{align*} In this case, $\nu$ and $\Delta$ correspond directly to the level and dispersion of the consumer's utility within a period. \section{Distortion and consumption dynamics} \label{sec:Utility-Distortion-dynamics} In this section, we use the auxiliary cost problem introduced in Section \ref{sec:auxiliary-problem} to shed light on the dynamics of distortions and consumption in the optimal mechanism. The firm's profit maximization is directly linked with problem $\mathcal{P}^A$ since each within-period flow contract provided by the firm must be optimal within the set of flow contracts that deliver the same utility level to truth-telling consumers (which corresponds to $\nu$) and discourage misreporting consumers by the same amount (which corresponds to $\Delta$).
In other words, each contract solves the auxiliary problem $\mathcal{P}^A$ for a particular pair $(\nu, \Delta)$. We can then separate the problem of the firm into two ``sub-problems'': one in which flow utility and distortions are allocated across periods, which we study in this section, and another one in which the chosen flow utility and distortion levels must be delivered in a cost-minimizing fashion, as discussed in Section \ref{sec:auxiliary-problem}. Our analysis now restricts attention to the case of finitely many periods. The following proposition formalizes this separation, by showing that all flow contracts delivered in the optimal mechanism are indeed solutions to the auxiliary cost minimization problem, with suitably chosen pairs of utility flow and distortion. Given optimal mechanism $M = \left\{ z_t \right\}_{t=1}^T$, define the following: \begin{align*} \nu_t(\phi^{t-1}) & \equiv v \left( z_t(h^{t-1},\phi^{t-1},h),h \right),\\ \Delta_t(\phi^{t-1}) & \equiv \nu_t(\phi^{t-1}) - v \left( z_t(h^{t-1},\phi^{t-1},h), l \right), \end{align*} which correspond, respectively, to the flow utility and distortion in the partial coverage contract offered in period $t$ following type announcements $h^t$. \begin{proposition} \label{prop:Optimal_solves_Chi} The optimal mechanism $M=\left\{ z_t \right\}_{t=1}^T$ satisfies, for any $t=1,\dots,T$, \begin{equation*} z_t(h^{t-1},\phi^{t-1},h) = \zeta(\nu_t(\phi^{t-1}),\Delta_t(\phi^{t-1})). \end{equation*} \end{proposition} \begin{proof} Fix any period $t$ and signal history $\phi^{t-1}$. For any mechanism $\tilde{M}=\left\{ \tilde{z}_t \right\}_{t=1}^T$, the flow contract $\tilde{z}_t(h^{t-1},\phi^{t-1},h)$ only affects the OSIC constraints in the relaxed problem via \begin{equation*} v \left( \tilde{z}_t(h^{t-1},\phi^{t-1},h), h \right) \text{ and } v \left( \tilde{z}_t(h^{t-1},\phi^{t-1},h), l \right), \end{equation*} while it only affects the firm's profits via \begin{equation*} \sum_{y \in Y}p_{h}\left(y\right) \tilde{z}_t(h^{t-1},\phi^{t-1},h)\left(y\right). \end{equation*} Hence modifying mechanism $M$ by substituting flow contract $z_t(h^{t-1},\phi^{t-1},h)$ by $\zeta(\nu_t(\phi^{t-1}), \Delta_t(\phi^{t-1}))$ still satisfies OSIC and (given uniqueness of the solution to $\mathcal{P}^A$) strictly increases profits in the relaxed problem if $z_t(h^{t-1},\phi^{t-1},h) \neq \zeta(\nu_t(\phi^{t-1}), \Delta_t(\phi^{t-1}))$, contradicting the optimality of $M$. \end{proof} From now on, we refer to $\left\{ \nu_t,\Delta_t \right\}_{t=1}^T$ simply as the flow utility and distortion in the optimal mechanism following a sequence of high-type announcements. I use a similar notation to refer to the per-period flow utility derived by the consumer following a first low-type announcement in period $t$: \begin{equation*} \nu^l_t(\phi^{t-1}) \equiv u(c_t(\phi^{t-1})). \end{equation*} We can now use Proposition \ref{prop:Optimal_solves_Chi} and focus solely on the problem of intertemporal allocation of flow utility $\nu$ and distortion $\Delta$ within the optimal contract. The following proposition displays the optimality conditions connected with the intertemporal allocation of utility and distortions. Since all terms in Proposition \ref{prop:intertemporal_condition_general} depend on $\phi^{t-1}$, we omit this dependence for brevity.
\begin{proposition} \label{prop:intertemporal_condition_general} For any $t=1,\dots,T$ and $\phi^{t-1} \in \Phi^{t-1}$, if the cost minimization problem $\mathcal{P}^A$ has an interior solution, i.e., \begin{align*} (\nu_t, \Delta_t)(\phi^{t-1}) & \in int(A), \\ (\nu_{t+1}, \Delta_{t+1})(\phi^{t-1},\phi') & \in int(A) \text{, for all }\phi' \in \Phi, \end{align*} then the following optimality conditions hold \begin{equation} \label{eq:Intertemporal_util_Chi_general} \chi_\nu (\nu_t,\Delta_t) = \sum_{\phi \in \Phi}p_h(\phi) \left\{ \pi_{hh}\chi_\nu[(\nu_{t+1},\Delta_{t+1})(\phi)] + \pi_{hl}\chi_\nu[(\nu_{t+1}^l (\phi) ,0)] \right\}, \end{equation} \begin{equation} \label{eq:Intertemporal_delta_Chi_general} \begin{split} \chi_\Delta (\nu_t,\Delta_t) = \sum_{\phi \in \Phi} p_h(\phi) \frac{\pi_{hh}}{\pi_{hh} - \pi_{lh}} \left\{ \chi_\Delta[(\nu_{t+1},\Delta_{t+1})(\phi)] \vphantom{\frac{1}{2}} \right. \\ \left. + \pi_{hl} \left[ \chi_\nu[(\nu_{t+1},\Delta_{t+1}) (\phi)] - \chi_\nu[(\nu_{t+1}^l(\phi),0)] \right] \vphantom{\frac{1}{2}} \right\}. \end{split} \end{equation} \end{proposition} The Proposition above relies on local optimality conditions of the firm's profit maximization problem and hence assumes its solution is interior. The interiority condition in Proposition \ref{prop:intertemporal_condition_general} holds if the optimal mechanism has strictly positive consumption flows. Conditions (\ref{eq:Intertemporal_util_Chi_general}) and (\ref{eq:Intertemporal_delta_Chi_general}) characterize the dynamics of flow utility and distortions along the non-trivial histories that involve distortions. Equation (\ref{eq:Intertemporal_util_Chi_general}) represents the efficient intertemporal allocation of flow utilities, or consumption. It states that the marginal cost of providing a high-type with higher flow utility in period $t$ must be equalized to the expected marginal cost of higher flow utility in period $t+1$. The marginal cost of flow utility coincides with the expectation of the inverse of the consumer's marginal utility (see Subsection \ref{subsec:interpretation}), which shows that this condition is a special case of the inverse Euler equation studied in dynamic allocation problems (see \cite{rogerson1985repeated}, \cite{farhi2012capital}). Equation (\ref{eq:Intertemporal_delta_Chi_general}), on the other hand, represents the efficient intertemporal allocation of distortions. Let's consider the problem, in period $t<T$, of discouraging a consumer with a low period-$t$ type from pretending to have a high type. Due to the presence of type persistence, this can be done in two ways: by exposing the consumer to risk in period $t$, which is represented by a larger $\Delta_t$; or by exposing the consumer to more risk in period $t+1$ as long as the consumer still claims to be of high type ($\theta_{t+1}=h$). Of course, given the convexity of the cost function $\chi(\cdot)$, the optimal mechanism uses both screening methods in a balanced way. However, (\ref{eq:Intertemporal_delta_Chi_general}) illustrates the differences in using current versus future period distortions. First, notice that future distortions are only useful due to the persistence of types. As type persistence is reduced, or $(\pi_{hh} - \pi_{lh})$ is smaller, the optimal mechanism relies mostly on current period distortions for screening purposes.
In the limit where $\pi_{hh} = \pi_{lh}$, the use of future distortions in screening is useless and, as a consequence, the optimal contract only features distortions in the first period. Second, reducing the distortion in period $t$ while increasing it in period $t+1$ not only has a direct cost impact, as it requires exposing the consumer to risk (this is captured by the $\chi_\Delta$ term), but it also implies that next period's type realization now has a larger impact on the consumer's final utility. This additional utility dispersion has a marginal cost which depends on the gap between the marginal costs of flow utility in period $t+1$ for the two possible types (represented by the second and third terms on the right-hand side of (\ref{eq:Intertemporal_delta_Chi_general})). \subsection{Realization-independent contracts} \label{subsec:realization-independent-contracts} I start by focusing on the case of realization-independent contracts, i.e., long-term contracts that do not use the history of past income realizations when determining the menu to be offered to the consumer within a period. As discussed in the introduction, this represents an extreme case of price restrictions that limit how much information can be used explicitly by firms when pricing consumption. This subsection also restricts attention to the finite-horizon case, i.e., $T < \infty$. Since flow contracts now only depend on the history of announcements, we drop the dependence on history $\phi^t$. In this case, equations (\ref{eq:Intertemporal_util_Chi_general}) and (\ref{eq:Intertemporal_delta_Chi_general}) become \begin{equation} \label{eq:Intertemporal_util_Chi_RI} \chi_\nu (\nu_t,\Delta_t) = \pi_{hh}\chi_\nu(\nu_{t+1},\Delta_{t+1}) + \pi_{hl}\chi_\nu(\nu_{t+1}^l ,0), \end{equation} and \begin{align} \label{eq:Intertemporal_delta_Chi_RI} \chi_\Delta (\nu_t,\Delta_t) = \frac{\pi_{hh}}{\pi_{hh} - \pi_{lh}} & \left\{ \chi_\Delta(\nu_{t+1},\Delta_{t+1}) + \pi_{hl} \left[ \chi_\nu(\nu_{t+1},\Delta_{t+1}) - \chi_\nu(\nu_{t+1}^l,0) \right] \right\}. \end{align} The supermodularity guaranteed by Assumption \ref{Assumption:Cost_supermodularity} allows us to characterize the dynamic behavior of utility and distortions. Let's focus on the intertemporal allocation between given periods $t$ and $t+1$, following a sequence of high-type realizations $h^t$. Given that the optimal contract rewards high-type announcements, the flow utility following an additional high-type realization in period $t+1$, $\nu_{t+1}$, is higher relative to that of a consumer with a low-type realization, $\nu^l_{t+1}$. Together with the presence of distortions following a high-type announcement ($\Delta_{t+1}>0$) and supermodularity of $\chi(\cdot)$ (Assumption \ref{Assumption:Cost_supermodularity}), we can then conclude that \begin{equation*} \chi_\nu(\nu_{t+1},\Delta_{t+1}) > \chi_\nu(\nu_{t+1}^l,0). \end{equation*} From equation (\ref{eq:Intertemporal_util_Chi_RI}), we can already conclude that consecutive high-type announcements lead to an increase in the marginal cost of flow utility: \begin{equation} \chi_\nu(\nu_{t+1},\Delta_{t+1}) > \chi_\nu (\nu_t,\Delta_t) > \chi_\nu(\nu_{t+1}^l ,0).
\end{equation} Now, using the intertemporal distortion allocation condition (\ref{eq:Intertemporal_delta_Chi_RI}), we have that \begin{equation*} \chi_\Delta (\nu_t,\Delta_t) > \frac{\pi_{hh}}{\pi_{hh} - \pi_{lh}} \chi_\Delta(\nu_{t+1},\Delta_{t+1}) > \chi_\Delta(\nu_{t+1},\Delta_{t+1}). \end{equation*} In other words, profit maximization mandates that, along the high-type path $h^T$, the marginal cost of flow utility is increasing while the marginal cost of distortions is decreasing. Using convexity of $\chi$ and, once again, Assumption \ref{Assumption:Cost_supermodularity} (see Lemma \ref{lemma:single-crossing_chi}), these properties can be translated into statements about the dynamic behavior of flow utility and distortions, which are summarized in the following Proposition. \begin{proposition} \label{prop:RI_monotonicity} If $V_h > V_l$, the optimal mechanism satisfies: \\ (i) High-type utility flows increase, i.e., $\left\{ \nu_t \right\}_{t=1}^T$ is strictly increasing, \\ (ii) Distortions decrease, i.e., $\left\{ \Delta_t \right\}_{t=1}^T$ is strictly decreasing. \end{proposition} \begin{proof} In the \hyperref[Proof_prop_4]{Appendix}. \end{proof} \subsection{Realization-dependent contracts} \label{subsec:realization-dependent} Considering a general signal structure, we are able to use Proposition \ref{prop:intertemporal_condition_general} to extend the monotonicity result of Proposition \ref{prop:RI_monotonicity} in two cases. First, we use the case of constant relative risk aversion with coefficient $1/2$ as an illustrative example. This knife-edge example simplifies the analysis since it is the only one in which the marginal costs of flow utility and distortions are separable. We also look at the case of general preferences with two periods. \subsubsection{Quadratic cost} \label{subsubsec:quadraticCase} I now assume that the consumer's utility function takes the following form: \begin{equation*} u(c) = 2\sqrt{c}. \end{equation*} This example is particularly tractable since the auxiliary cost function $\chi \left( \cdot \right)$ is (i) separable in $\nu$ and $\Delta$, i.e., \begin{equation*} \frac{ \partial^2 \chi(\nu,\Delta) }{ \partial \nu \partial \Delta } =0, \end{equation*} whenever twice differentiable, and (ii) quadratic, which implies that $\chi_\nu \left( \cdot \right)$ and $\chi_\Delta \left( \cdot \right)$ are linear in $\nu$ and $\Delta$, respectively. In this case, intertemporal optimality conditions (\ref{eq:Intertemporal_util_Chi_general}) and (\ref{eq:Intertemporal_delta_Chi_general}) in Proposition \ref{prop:intertemporal_condition_general} can be rewritten as statements in terms of utility flow and distortions: \begin{equation} \label{eq:Quadratic_martingale_flow} \nu_t (\phi^{t-1}) = \sum_{\phi \in \Phi}p_h(\phi) \left[ \pi_{hh} \nu_{t+1} (\phi^t) + \pi_{hl} \nu_{t+1}^l (\phi^t) \right], \end{equation} \begin{equation} \label{eq:Quadratic_distortion} \Delta_t(\phi^{t-1}) = \sum_{\phi \in \Phi} p_h(\phi) \frac{\pi_{hh}}{\pi_{hh} - \pi_{lh}} \Delta_{t+1}(\phi^t) + N \left[ \nu_{t+1}(\phi^t) - \nu^l_{t+1}(\phi^t) \right], \end{equation} for constant $N \equiv \frac{\partial^2 \chi}{\partial \nu^2}>0$. The first equation implies that flow utilities form a martingale in the optimal mechanism.
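Claims (i) and (ii) can be illustrated in the binary-outcome setting of Subsection \ref{subsec:interpretation}. The following short derivation is a sketch under the assumption that the solution to $\mathcal{P}^A$ is interior; writing $D \equiv p_h(\bar{y}) - p_l(\bar{y})$ and using $c = u^2/4$ together with the income-contingent utilities derived there,
\begin{align*}
\chi(\nu,\Delta) &= \frac{p_h(\bar{y})}{4}\left( \nu + \frac{p_h(\underline{y})}{D}\,\Delta \right)^2 + \frac{p_h(\underline{y})}{4}\left( \nu - \frac{p_h(\bar{y})}{D}\,\Delta \right)^2 \\
&= \frac{\nu^2}{4} + \frac{p_h(\bar{y})\,p_h(\underline{y})}{4D^2}\,\Delta^2 ,
\end{align*}
since the cross terms in $\nu\Delta$ cancel. The cost function is therefore quadratic and separable in $(\nu,\Delta)$, with $N = \partial^2\chi/\partial\nu^2 = 1/2$ in this case.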
In the optimal mechanism, the continuation utility following a high-type realization is always larger than the one following a low-type announcement, and the martingale property of flow utilities implies that \textit{flow utilities} in any given period $t$ are also larger following a high-type realization $\theta_t=h$, everything else equal. Together with (\ref{eq:Quadratic_martingale_flow}) and (\ref{eq:Quadratic_distortion}), this implies that distortions follow a supermartingale, conditional on the type path $\theta^T$. Additionally, the change in flow utility across periods is similar to the realization-independent case: a high-type realization leads to higher utility flows while a low-type realization leads to lower utility flows, when integrating over the possible signal realizations. These statements are formalized below. \begin{proposition} If $V_h > V_l$ and the optimal mechanism is interior, then: \\ (i) distortions follow a supermartingale, conditional on $\theta^T$, i.e., \begin{equation*} \Delta_t (\phi^{t-1}) > \sum_{\phi \in \Phi}p_h(\phi_t) \Delta_{t+1} (\phi^t) \end{equation*} and (ii) within-period flow utilities are increasing in past periods' type announcements, i.e., \begin{equation*} \sum_{\phi \in \Phi}p_h(\phi_t) \nu_{t+1} (\phi^t) > \nu_t (\phi^{t-1}) > \sum_{\phi \in \Phi}p_h(\phi_t) \nu_{t+1}^l (\phi^t) \end{equation*} \end{proposition} \begin{proof} First notice that the flow utility in period $t$ following a first low-type announcement in period $t$ is strictly lower than the flow utility following one more high-type realization in period $t$, since: \begin{align*} V_{t}(\phi^{t-1},h^{t-1},l) = & \nu_t^l (\phi^{t-1}) \sum_{\tau=t}^T\delta^{\tau-t} \\ = & \nu_t(\phi^{t-1}) + \delta \sum_{\phi \in \Phi} p_l (\phi_t) \left[ \pi_{lh} V_{t+1}(\phi^{t},h^{t},h) + \pi_{ll} V_{t+1}(\phi^{t},h^{t},l) \right] - \Delta_t (\phi^{t-1}) \\ < & \nu_t(\phi^{t-1}) + \delta \sum_{\phi \in \Phi} p_h (\phi_t) \left[ \pi_{hh} V_{t+1}(\phi^{t},h^{t},h) + \pi_{hl} V_{t+1}(\phi^{t},h^{t},l) \right] - \Delta_t (\phi^{t-1}) \\ = & \nu_t(\phi^{t-1}) \sum_{\tau=t}^T\delta^{\tau-t} - \Delta_t (\phi^{t-1}). \end{align*} The first equality follows from $\nu^l_t$ being defined as the constant flow utility obtained by the consumer following a first low-type realization in period $t$. The second equality corresponds to the binding incentive constraint in period $t$. The inequality follows from Lemma \ref{Lem:Properties_relaxed_problem}, items (iv) and (v), by the same argument as in Lemma \ref{Lem:Continuation_Monotonicity}. Finally, the last equality follows from the martingale property (\ref{eq:Quadratic_martingale_flow}). Hence we conclude that $\nu_t(\phi^{t-1}) > \nu^l_t (\phi^{t-1})$, for all $t$ and $\phi^{t-1} \in \Phi^{t-1}$. The result then follows directly from equations (\ref{eq:Quadratic_martingale_flow}) and (\ref{eq:Quadratic_distortion}). \end{proof} \subsubsection{General preferences} Now consider an arbitrary signal structure and focus on the case of two periods. As mentioned before, in maximizing profits the firm has an incentive to reward subsequent high-type announcements (as long as $V_h >V_l$), and to rely less on later periods' distortions in order to provide incentives. As a consequence, the marginal cost of flow utility must be increasing, while the marginal cost of distortions must be decreasing, following consecutive high-type announcements (which correspond to partial coverage), as shown below.
As discussed in Subsection \ref{subsec:interpretation}, the marginal cost terms $\chi_\nu$ and $\chi_\Delta$ are related to the consumption level and risk exposure in a given flow contract. Hence the interpretation of Proposition \ref{prop:General_T2} is that consecutive choices of partial coverage are rewarded, when averaging over signal realizations, with more consumption and lower distortions. \begin{proposition} \label{prop:General_T2} Assume $T=2$. The optimal mechanism satisfies \begin{equation*} \chi_\Delta (\nu_1,\Delta_1) > \sum_{\phi \in \Phi}p_h(\phi) \chi_\Delta[(\nu_{2},\Delta_{2})(\phi)] \end{equation*} and \begin{equation*} \sum_{\phi \in \Phi}p_h(\phi) \chi_\nu[(\nu_{2},\Delta_{2})(\phi)] > \chi_\nu (\nu_1,\Delta_1) > \sum_{\phi \in \Phi}p_h(\phi) \chi_\nu[(\nu_{2}^l (\phi) ,0) ]. \end{equation*} \end{proposition} \begin{proof} Lemma \ref{Lem:Properties_relaxed_problem} implies that \begin{equation*} V_2(\phi,h) = \nu_2(\phi) > V_2(\phi,l) = \nu^l_2(\phi), \end{equation*} which, using convexity of $\chi$ and Assumption \ref{Assumption:Cost_supermodularity}, implies that $ \chi_\nu[(\nu_{2},\Delta_{2})(\phi)] > \chi_\nu[(\nu^l_{2}(\phi),0)] $, for all signals $\phi \in \Phi$. The result then follows directly from (\ref{eq:Intertemporal_util_Chi_general}) and (\ref{eq:Intertemporal_delta_Chi_general}). \end{proof} In the case of realization-independent contracts, monotonicity properties of the marginal cost functions (which are guaranteed by supermodularity) allow us to translate statements about marginal costs directly into statements about the \textit{levels} of flow utility and distortion, as in Proposition \ref{prop:RI_monotonicity}. Once we allow for richer signal structures, statements about the expectations of marginal costs cannot be translated into statements about the expectations of flow utility and distortion, except for the special case in which the marginal cost functions are linear in the flow utility and distortion pair $(\nu,\Delta)$, covered in Subsection \ref{subsubsec:quadraticCase}. \section{Competitive analysis} \label{sec:competitive-analysis} I now consider a competitive model that extends the analysis of \cite{rothschild1976equilibrium} and \cite{cooper1987multi} in allowing for both persistent, non-constant risk types and offers containing dynamic mechanisms. For now, I assume a single contracting stage between consumers and firms in the first period, which means that both firms and consumers can commit. The role of the commitment assumption and possible ways of relaxing it are discussed in Subsection \ref{subsec:commitment}. Consider the following extensive form. A finite number of firms simultaneously offer a mechanism to a consumer. The consumer observes her initial type $\theta_{1}$ and decides which firm's mechanism to accept, if any. I assume exclusivity, i.e., the consumer can choose at most one mechanism. If the buyer does not accept any offer, she obtains no insurance coverage and gets discounted utility \begin{equation} \label{eq:OutsideOption} \underline{V}_i \equiv \mathbb{E} \left[ \sum_{t=1}^T \delta^{t-1} u \left( {y}_t \right) \mid {\theta}_1=i \right]. \end{equation} If the consumer accepts a contract, in each period she observes her type $\theta_{t}\in\Theta$ and then announces a message to the chosen firm. At the end of the period the income realization $y_{t}$ is observed and the consumer receives (or pays) transfers from the firm as described in the chosen mechanism. I study (weak) perfect Bayesian equilibria (PBE) of this extensive form.
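The outside option in (\ref{eq:OutsideOption}) can be computed by backward recursion on the type process. The following is a minimal sketch; the horizon, discount factor, income grid, income distributions, transition probabilities, and utility function are hypothetical values chosen purely for illustration.
\begin{verbatim}
# Sketch: the type-contingent no-insurance outside option of eq. (OutsideOption),
# computed by backward recursion; all parameter values are hypothetical.
import numpy as np

T, delta = 5, 0.9
Y  = np.array([1.0, 4.0, 9.0])
p  = {'h': np.array([0.2, 0.3, 0.5]), 'l': np.array([0.5, 0.3, 0.2])}
pi = {'h': {'h': 0.8, 'l': 0.2}, 'l': {'h': 0.3, 'l': 0.7}}  # pi[i][j] = Pr(theta'=j | theta=i)
u  = lambda c: 2.0 * np.sqrt(c)

flow = {i: float(p[i] @ u(Y)) for i in ('l', 'h')}   # E[u(y_t) | theta_t = i]

V = {'l': 0.0, 'h': 0.0}                             # continuation value after the last period
for _ in range(T):                                   # backward recursion over the T periods
    V = {i: flow[i] + delta * (pi[i]['h'] * V['h'] + pi[i]['l'] * V['l'])
         for i in ('l', 'h')}

print(V)   # type-contingent outside options, by initial type
\end{verbatim}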
If the consumer's types were observed by firms, the consumer would obtain actuarially fair full insurance in equilibrium, smoothing consumption both across income realizations within a period as well as across periods. In other words, a consumer with initial type $\theta_1 = i \in \left\{ l, h \right\}$ would receive a time- and income-independent flow consumption with total discounted utility \begin{equation} \label{eq:outside-option} V^{FI}_{i} \equiv u (c^{FI}_i) \sum_{t=1}^T \delta^{t-1}, \end{equation} where $c^{FI}_i$ represents the discounted average expected lifetime income of a consumer with initial type $\theta_1 = i$: \begin{equation*} c^{FI}_i \equiv \mathbb{E} \left[ \frac{ \sum_{t=1}^T \delta^{t-1} {y}_t }{ \sum_{t=1}^T \delta^{t-1} } \mid {\theta}_1=i \right]. \end{equation*} Equilibrium outcomes are reminiscent of RS, with (initially) low-type consumers receiving their full-information utility level efficiently, while (initially) high-type consumers receive an inefficient partial coverage contract which delivers utility in the interval $\left( V^{FI}_l, V^{FI}_h \right)$. I assume that the full information continuation utility vector is feasible in the presence of private information, i.e., that $(V^{FI}_l,V^{FI}_h) \in \mathcal{V}$. This assumption ensures that the following critical payoff vector, which will be shown to correspond to the equilibrium utility level of the consumer, is well-defined. Define as $\Pi_i(V)$, for $i=l,h$, the expected discounted profit obtained by the firm conditional on the consumer's initial type being $\theta_1=i$. \begin{lem} There exists a unique pair $V^* \equiv \left( V^{*}_l, V^{*}_h \right) \in \mathcal{V}$ satisfying \begin{gather*} V^*_l = V^{FI}_l, \\ \Pi_h \left( V^*_l, V^*_h \right) =0. \end{gather*} \end{lem} \begin{proof} Utility level $V^*_l$ is defined as $V^{FI}_l$. The existence and uniqueness of $V^*_h$ follows from the fact that $\Pi_h\left( V^*_l, \cdot \right)$ is strictly decreasing and continuous (since it is concave) and satisfies \begin{gather*} \Pi_h\left( V^*_l, V^*_l \right) > 0,\\ \Pi_h\left( V^*_l, V^{FI}_h \right) < 0. \end{gather*} \end{proof} For simplicity, we also focus on equilibria with two properties.\footnote{ These two restrictions do not affect equilibrium outcomes but substantially simplify the analysis. } First, the consumer's strategy is symmetric, i.e., the probability she accepts the offer of firm $j$, when firms' offered mechanisms are $(M,M')$, is equal to the probability of accepting the offer of firm $j' \neq j$ when firms' offered mechanisms are exchanged. Second, we assume that the consumer follows the truth-telling reporting strategy whenever it is optimal. I say that a direct mechanism $M=\left\{ z_t \right\}_{t=1}^T$ constitutes an equilibrium outcome if a PBE exists in which the on-path net consumption of the consumer corresponds exactly to her consumption in mechanism $M$ when following a truth-telling reporting strategy. The following result --- proven in the Appendix --- shows that the unique equilibrium outcome is characterized by the solution of problem $\Pi\left( \cdot \right)$ studied in Sections \ref*{sec:Incentives_Distortions}-\ref*{sec:Utility-Distortion-dynamics}.\footnote{ We use notation $\frac{ \partial_+ \Pi }{\partial V_i} (V)$ to denote the right-derivative of $\Pi$ with respect to $V_i$ at $V$. } \begin{proposition} \label{prop:Competitive-equilibrium-characterization} Any pure strategy PBE has outcome $M^{V^*}$.
Moreover, a pure strategy equilibrium exists if, and only if, \begin{equation} \label{eq:existence condition} {u'\left[ u^{-1}\left( c^{FI}_l \right) \right]} { \frac{\partial_{+}\Pi_{h}\left(V^{*}\right)}{\partial V_{l}} } < \frac{\mu_{l}}{\mu_{h}}. \end{equation} \end{proposition} The left-hand side of condition (\ref*{eq:existence condition}) does not depend on the initial type distribution $(\mu_l,\mu_h)$, and hence a pure strategy equilibrium exists if, and only if, the share of (initially) low-types in the population is sufficiently large. This is in line with the classical analysis in RS, which requires that the share of high-risk consumers be sufficiently high. Since $V^*_l<V^*_h$, the equilibrium outcome in the competitive model --- described in Proposition \ref*{prop:Competitive-equilibrium-characterization} --- is the solution of a particular instance of the problem studied in Sections \ref*{sec:Incentives_Distortions}-\ref*{sec:Utility-Distortion-dynamics}.\footnote{ The interiority of the equilibrium outcome, which is assumed in multiple characterization results, can be guaranteed as long as the income distributions induced by the two types are not ``too'' distinct. To see this, notice that, if $p_h=p_l$, we have that $V^*_l=V^*_h$ and as a consequence the equilibrium outcome is the full information one, which is interior. } \subsection{The role of commitment} \label{subsec:commitment} I have assumed so far that consumers are able to commit to a long-term mechanism. The goal of this section is to discuss the validity of this assumption, its role in the formal analysis, and the impact of relaxing it. If consumers have the option to renege on a mechanism at any given period and an available offer dominates their anticipated continuation contract, they will do so and start a relationship with a new firm. The assumption of consumer commitment is reasonable in the presence of institutional features that prevent or hinder consumers' contract switching. For example, the presence of switching costs leads to lock-in effects allowing firms, at the outset, to credibly offer contracts that, at some future point in the interaction, may lead to lower continuation utility to the consumer relative to the offers available in the market (see \cite{honka2014quantifying} and \cite{handel2018frictions} for discussions of switching costs in auto and health insurance, respectively). An alternative justification for commitment is informational: since consumers have private information about their types, their decision to search for a new contract may be interpreted as a negative signal by potential new firms and, as a consequence, lead to less attractive offers for switching consumers. I now show that, if the consumer's low type is an absorbing state, firms' negative inference from the consumer's switching decision may serve as a commitment device and allow the equilibrium outcome in the model with commitment to be robust to consumer reentry. \subsubsection*{Absorbing state} This section assumes that the low type is an absorbing state of the consumer type's Markov process, i.e., $\pi_{ll}=1$. I also restrict attention to the case of realization-independent contracts. Consider the following extension of the extensive form considered in Section \ref*{sec:competitive-analysis}. For simplicity, assume that a different pair of firms can potentially make offers to the consumer in each period, i.e., the set of firms is $\mathcal{F} \equiv \left\{ F^j_t \mid j \in \left\{ 1 , 2 \right\}, t=1,\dots,T \right\}$.
In the first period, the consumer and firms $\left\{ F^1_1, F^2_1 \right\}$ interact as described in the baseline competitive model.\footnote{ The assumption of two firms is purely for notational convenience. } However, in each period $t=2,\dots,T$ the consumer decides whether to stay with her current mechanism or reenter the market searching for a new contract. If the consumer decides to reenter the market, firms $F^1_t$ and $F^2_t$ observe the consumer's reentry decision and then simultaneously offer a mechanism to this consumer. The consumer then decides whether to accept any of the new offers made or to remain uninsured. Regardless of period $t$'s choices, the same set of moves occurs in period $t+1$, if $t<T$. The outcome described in Proposition \ref{prop:Competitive-equilibrium-characterization} can arise in a PBE of this extended model, as long as firms' off-path beliefs are pessimistic. Firms' offers upon reentry in periods $t=2,\dots,T$ depend on their beliefs about the type of a consumer that decides to reenter the market. The most pessimistic belief firms may hold is that this consumer has a low type in the current period for sure. In this case, it is optimal for firms $F^1_t$ and $F^2_t$ to behave as if they were in a market without information asymmetry and offer an efficient contract which provides perfect consumption smoothing for a consumer with a low type in period $t$, i.e., with consumption flow \begin{equation} \label{eq:outside_option} c^o \equiv \mathbb{E} \left[ \tilde{y}_t \mid \tilde{\theta}_t = l \right]. \end{equation} Define $V^o_t \equiv u(c^o) \sum_{\tau=t}^T \delta^{\tau-t}$ to be the continuation utility from taking such an offer. Notice that, while extreme, these strategies are sequentially optimal for firms given their beliefs. Consider the strategy profile for firms such that, for $j=1,2$, firm $F^j_1$ uses the strategy described in the commitment model, while firm $F^j_t$ for $t\geq 2$ makes the constant consumption offer described in (\ref{eq:outside_option}). The following lemma shows that, given firms' strategy profile, consumers never want to reenter the market. \begin{lem} If the equilibrium contract $M^{V^*}$ is interior, then the consumer's continuation utility satisfies \begin{equation*} V_t(\theta^{t}) \geq V^o_t \end{equation*} for all periods $t=1,\dots,T$, and almost all $\theta^t \in \Theta^{t}$. \end{lem} \begin{proof} First notice that, from Proposition \ref{prop:Competitive-equilibrium-characterization} and $\pi_{ll}=1$ we have that $ V_1(l) = \sum_{\tau=1}^T \delta^{\tau-1} u(c^o). $ Now consider a period $t=2,\dots, T$ and history $(h^{t-1},l)$. The consumer's continuation utility in this case is given by \begin{equation*} V_t(h^{t-1},l) = \sum_{\tau=t}^T \delta^{\tau - t} \left( \nu_\tau - \Delta_\tau \right). \end{equation*} Since, from Proposition \ref{prop:RI_monotonicity}, $\left\{ \nu_t \right\}_{t=1}^T$ is increasing and $\left\{ \Delta_t \right\}_{t=1}^T$ is decreasing, it follows that \begin{equation*} V_{t}(h^{t-1},l) > V_1(l) \frac{ \sum_{\tau=t}^T \delta^{\tau - t} }{ \sum_{\tau=1}^T \delta^{\tau - 1} }, \end{equation*} and the right-hand side equals $V^o_t$. The only possible histories remaining are $h^t$, for $t=1,\dots,T$, and the proof is concluded since $V_t(h^t) > V_t(h^{t-1},l)$. \end{proof} \section{Monopoly} \label{sec:monopoly} Now consider an interaction between a Monopolist and a consumer.
While the consumer's utility vector $(V^*_l,V^*_h)$ in the competitive model is an equilibrium object determined by firms' zero profit and no-deviation conditions, in the Monopolist's problem it is the result of an additional layer of optimization by the seller, taking into account the consumer's type-dependent participation constraint. To be more precise, we consider the Monopolist's problem of designing a mechanism to maximize revenue, with the assumption that the consumer is privately informed about her initial type at the contracting stage. All future type realizations are also privately observed by the consumer. The consumer's outside option is given by $\underline{V}_{\theta_1}$, defined in (\ref{eq:OutsideOption}). Intuitively, the Monopolist's problem can be divided into two parts. First, for any utility levels delivered to the consumer, conditional on her initial type, the offered contract must provide these utility levels in a cost-minimizing way. In other words, the optimal mechanism is the solution to $\Pi(V)$, for some $V \in \mathcal{V}$. Second, the utility vector to be offered to the consumer, conditional on her initial type, must be chosen optimally. This gives rise to the following problem, which we denote as $\mathcal{P}^M$: \begin{equation*} \max_{V \in \mathcal{V}} \Pi(V), \end{equation*} subject to \begin{equation*} V_i \geq \underline{V}_i \text{, for }i=l,h. \end{equation*} The next proposition shows that the Monopolist's optimal contract solves $\Pi\left(V_{l},\underline{V}_{h}\right)$, with $V_{l}<\underline{V}_{h}$. The participation constraint for the initially high-type buyer necessarily binds. However, the low-type buyer's participation constraint might be slack, i.e., the consumer may have information rents. This can be optimal because increasing the utility offered to the low-type buyer relaxes the binding incentive constraint in the profit maximization problem, leading to higher profits from the high-type buyers ($\Pi_{h}\left(\cdot\right)$ increases with $V_l$). \begin{proposition} \label{prop:monopoly-problem} The Monopolist's optimal offer is $M^{V^M}$, where $V^M$ is the solution to $\mathcal{P}^M$. Moreover, $V^M$ satisfies: \begin{gather*} V^M_h = \underline{V}_h, \\ V^M_l \in [\underline{V}_l, \underline{V}_h). \end{gather*} \end{proposition} \begin{proof} The first part of the proposition is trivial. We now prove that $V^M_h=\underline{V}_h$. By way of contradiction, suppose this is not the case, i.e., $V^M_h > \underline{V}_h$. If, additionally, $V^M_l \geq V^M_h$, then mechanism $M^{V'}$, with $V' = V^M(1-\gamma) +\gamma u(0)\sum_{t=1}^T\delta^{t-1}$, for $\gamma>0$ sufficiently small, is feasible and a strict improvement, given convexity of $\mathcal{V}$ and concavity of $\Pi(\cdot)$. On the other hand, if $V^M_l < V^M_h$, then mechanism $M^{V'}$, with $V' = (V^M_l,V^M_h - \varepsilon)$, for $\varepsilon>0$ sufficiently small, is feasible, since it is in the convex hull of $\left\{ V^M, (V^M_l,V^M_l) \right\} \subset \mathcal{V}$. It also strictly increases profits, which contradicts the optimality of $V^M$. We now show that $V^M_l < \underline{V}_h$.
If, by way of contradiction, $V^M=(\underline{V}_h , \underline{V}_h)$, then a reduction in the utility offered to the low-type is feasible and profitable since \begin{equation*} \frac{ \partial \Pi }{ \partial V_l } (\underline{V}_h , \underline{V}_h) = - \mu_l \psi'(\underline{c}_h), \end{equation*} where $\underline{c}_h$ is the constant consumption flow that generates discounted utility $\underline{V}_h$ (see Lemma \ref{lem:derivative-at-45degreeline} in the Appendix). \end{proof} Proposition \ref{prop:monopoly-problem} implies that the Monopolist's optimal mechanism is also a special instance of the problem proposed in Section \ref{Sec:profit-maximizing-contracts} and characterized in Sections \ref{sec:Incentives_Distortions}-\ref{sec:Utility-Distortion-dynamics}. Additionally, Proposition \ref{prop:monopoly-problem} allows us to obtain a simple characterization of information rents in the optimal contract. Given the concavity of $\Pi(\cdot)$, the utility vector $\underline{V} \equiv (\underline{V}_l, \underline{V}_h)$ solves problem $\mathcal{P}^M$ if, and only if, \begin{equation} \label{eq:condition-binding-both-ics} \frac{ \partial \Pi }{\partial V_l} (\underline{V}) = - \frac{ \mu_l }{ u' \left[ u^{-1} \left( \underline{c}_l \right) \right] } + \mu_h \frac{ \partial_+ \Pi_h }{ \partial V_l } (\underline{V}) \leq 0, \end{equation} where $\underline{c}_l$ is the constant consumption flow that generates discounted utility $\underline{V}_l$. The first term in (\ref{eq:condition-binding-both-ics}) corresponds to $\frac{\partial \Pi_l}{\partial V_l} (\underline{V})$, which follows from the fact that the low-type receives an efficient continuation contract in mechanism $M^V$, for any utility vector $V$ close to $\underline{V}$. This analysis can be summarized in the following result. \begin{proposition} The Monopolist's optimal mechanism leaves no information rents (i.e., $V^M=\underline{V}$) if, and only if, \begin{equation*} u' \left[ u^{-1} \left( \underline{c}_l \right) \right] \frac{ \partial_+ \Pi_h }{ \partial V_l } (\underline{V}) \leq \frac{ \mu_l }{ \mu_h }. \end{equation*} \end{proposition} In summary, the high type, who is the least willing to pay for coverage, receives no information rent and a distorted allocation. On the other hand, the low type, who is the most willing to pay for coverage, receives an efficient continuation contract. Additionally, if an initial low type is sufficiently likely, this type receives no information rent in the optimal mechanism. It is a key observation that the optimal mechanism \textit{always} separates different initial types into different continuation contracts, even if both participation constraints bind. This is a consequence of the type-dependent outside options in this model. \section{Conclusion} \label{sec:conclusion} This paper studies a natural dynamic extension of the workhorse theoretical insurance models proposed in the literature following \cite{rothschild1976equilibrium}, \cite{wilson1977model} and \cite{stiglitz1977monopoly}. The analysis introduces new tools to the dynamic mechanism design literature to deal with the presence of curvature and exploit the recursive structure of the profit maximization problem, namely the use of an auxiliary cost minimization problem and the characterization of continuation monotonicity. I show that the optimal contract uses both signals about consumer accidents and the consumer's choice of partial coverage as informative inputs, affecting future offers made to consumers.
In the case of realization-independent contracts, a strong efficiency result holds: distortions decrease along all histories. It is also shown that partial coverage contracts become more attractive over time, which implies, if $T = \infty$ or types are fully persistent, that the partial insurance contract offer becomes cheaper following a longer spell of partial coverage. The assumption of time-invariance on the types and income processes can be significantly relaxed. The results in Section \ref{sec:Incentives_Distortions} can be extended to time-dependent transitions $\left\{ \pi^t_{ij} \right\}_{i,j , t\leq T}$ and income distributions $\left\{ p^t_i \left( \cdot \right) \right\}_{i , t\leq T }$, as long as types are persistent, i.e., $\pi^t_{ii} > \pi^t_{ji}$ for all periods. The time-invariance of income distributions $p_i\left( \cdot \right)$ is necessary for the results in Section \ref{sec:Utility-Distortion-dynamics}, as the relevant cost-minimization problem $\mathcal{P}^A$ must be time-invariant. The analysis of competition focuses on pure strategies. If condition (\ref*{eq:existence condition}) does not hold, an equilibrium will necessarily involve randomization over mechanisms, following \cite{farinha2017characterization}. In that paper, firms randomize over the utility level offered to both types; however, all offers on the equilibrium path have the property that the utility delivered to the high-type is higher than the one delivered to the low-type.\footnote{A high-type corresponds to a low-risk consumer in the notation used in \cite{farinha2017characterization}.} It is a natural conjecture that a similar mixed strategy equilibrium may exist in the setting considered here, which would imply that on-path outcomes still correspond to special instances of the problem studied in this paper. Multiple related directions for research remain open. The extension to multiple types leads to technical difficulties present in other dynamic mechanism design problems related to the relevance of non-local incentive constraints (see \cite{BattagliniLamba}). A natural conjecture that follows from my characterization is that the optimal contract features a finite menu of flow contracts with different levels of coverage, with reductions in coverage at a given period being rewarded, leading to subsequent menus that are more attractive. The recursive characterization of monotonicity, in terms of flow allocation and continuation utilities, may prove useful in the study of other dynamic mechanism design problems with curvature and persistence, such as screening risk-averse buyers or Mirrleesian taxation problems. A crucial question in dynamic contracting is the issue of commitment, especially on the side of consumers. The discussion in Subsection \ref{subsec:commitment} illustrates the role that information asymmetry across firms may play as a commitment device. However, it is important to understand how the optimal contract structure changes in more complex models where consumer reentry may be driven by preference or exogenous separation shocks. Such shocks create a pool of switching consumers that includes high types, improving firms' beliefs about switching consumers and hence leading to more attractive outside offers.
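Before turning to the Appendix, a reader who wants a concrete, if drastically simplified, feel for the role of type-dependent outside options may find the following numerical sketch useful. It solves a one-period analogue of the Monopolist's problem of Proposition \ref{prop:monopoly-problem} and is purely illustrative: the income levels, probabilities, prior, log utility and the solver are arbitrary choices made only for this example, and none of its numbers carry over to the dynamic model studied in this paper.
\begin{verbatim}
# One-period, two-type toy version of a monopolist's insurance screening problem
# with type-dependent outside options. Purely illustrative; parameters arbitrary.
import numpy as np
from scipy.optimize import minimize

y = np.array([1.0, 4.0])                 # income: accident / no accident
p = {"l": np.array([0.6, 0.4]),          # low type  = high accident risk
     "h": np.array([0.2, 0.8])}          # high type = low accident risk
mu = {"l": 0.5, "h": 0.5}                # prior over initial types
u = np.log                               # flow utility

V_bar = {i: p[i] @ u(y) for i in "lh"}   # outside options: remain uninsured

def unpack(x):
    # consumption offered to each reported type, state by state
    return {"l": x[0:2], "h": x[2:4]}

def profit(x):
    c = unpack(x)
    return sum(mu[i] * (p[i] @ (y - c[i])) for i in "lh")

def value(x, i, j):
    # expected utility of a type-i consumer taking the contract meant for type j
    return p[i] @ u(unpack(x)[j])

cons = ([{"type": "ineq", "fun": lambda x, i=i, j=j: value(x, i, i) - value(x, i, j)}
         for i, j in [("l", "h"), ("h", "l")]]       # incentive compatibility
        + [{"type": "ineq", "fun": lambda x, i=i: value(x, i, i) - V_bar[i]}
           for i in "lh"])                           # type-dependent participation

x0 = np.concatenate([y, y])              # start from the no-insurance allocation
res = minimize(lambda x: -profit(x), x0, constraints=cons,
               bounds=[(1e-3, None)] * 4, method="SLSQP")

for i in "lh":
    print(i, "delivered utility:", round(float(value(res.x, i, i)), 4),
          "outside option:", round(float(V_bar[i]), 4))
print("profit:", round(float(profit(res.x)), 4))
\end{verbatim}
In this toy problem one can check directly which constraints bind at the reported optimum; for typical parametrizations the high-type (low-risk) participation constraint binds while the low type may or may not retain a rent, echoing, in a very loose sense, the structure of Proposition \ref{prop:monopoly-problem}.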
\pagebreak \section*{Appendix} \label{Sec:Appendix} \setcounter{section}{0} \section{Recursive formulation} In this section, I provide a recursive representation of the firm's relaxed problem described in (\ref{eq:RelaxedProblem}) This representation is then used to prove Lemma \ref{Lem:Properties_relaxed_problem}. To explore the recursive structure of the problem at hand, we define, for any period, the set of feasible continuation utilities that can be delivered to a consumer for any given current type as well as the maximal continuation profit that can be obtained by the firm. \subsection{Definitions} \subsubsection{Utility mechanisms} For tractability, I will represent direct mechanisms through the utility flow generated for each period and history. Consider any direct mechanism $M=\left\{ z_{t}\right\} _{t=1}^{T}$, period $t$ and sequence of types and incomes $\left(\theta^{t},y^{t}\right) \in \Theta^t \times Y^t$. The utility flow generated in period $t$ is $\vartheta_{t}\left(\theta^{t},y^{t}\right)=u\left(z_{t}\left(y_{t}\mid\eta^{t-1},\theta_{t}\right)\right)$, where $\eta^{t-1}=\left(\theta^{t-1},\left(\phi\left(y_{1}\right),...,\phi\left(y_{t-1}\right)\right)\right)$. I define a utility-direct mechanism (UDM) as a sequence of functions $M_{1}^{u}\equiv\left\{ \vartheta_{t}\right\} _{t=1}^{T}$, with $\vartheta_{t}:\Theta^{t}\times Y^{t}\mapsto u\left(\mathbb{R}\right)$, such that $\vartheta_{t}\left(\theta^{t},y^{t}\right)$ is $\left(\eta^{t-1},\theta_{t},y_{t}\right)$-measurable. In other words, we impose that the utility flow in period $t$ only depend on the history of incomes $y^{t-1}$ through the observable signal history $\phi^{t-1}$. Similarly, a period $t$ continuation UDM assigns utility flows beyond period $t$ which depend on the history of types and income levels starting at period $t$, i.e., it is equal to $M_{t}^{u}\equiv\left\{ \vartheta_{\tau}\right\} _{\tau=t}^{T}$ such that $\vartheta_{\tau}:\Theta^{\tau-t}\times Y^{\tau-t}\mapsto u\left(\mathbb{R}\right)$ and $\vartheta_{\tau}\left(\left[\theta^{\tau}\right]_{t}^{\tau},\left[y^{\tau}\right]_{t}^{\tau}\right)$ is $\left(\left[\eta^{\tau}\right]_{t}^{\tau-1},\theta_{\tau} , y_\tau\right)$-measurable. I define the set of period $t$ continuation UDM such that no one-shot OSCD with a misreport in periods $\tau\geq t$ is profitable as $\mathcal{M}_{t,OSIC}^{u}$. \subsubsection{Profits} Define, for any $t=1,...,T$, the set of type-contingent continuation utilities generated by incentive compatible continuation mechanisms: \[ \mathcal{V}_{t}\equiv\left\{ \left(V_{l},V_{h}\right)\in\mathbb{R}^{2} \mid \begin{array}{c} \exists M_{t}^{u}\equiv\left\{ \vartheta_{\tau}\right\} _{\tau=t}^{T}\in\mathcal{M}_{t,OSIC}^{u}\text{ s.t.}\\ V_{i} = \mathbb{E}\left[\sum_{\tau=t}^{T}\delta^{\tau-t}\vartheta_{\tau}\left(\left[\theta^{\tau}\right]_{t}^{\tau},\left[y^{\tau}\right]_{t}^{\tau}\right)\mid\theta_{t}=i\right]\text{, for }i=l,h. \end{array}\right\} , \] with $\mathcal{V}_{T+1}\equiv\left\{ \left(0,0\right)\right\} $ if $T<\infty$. It is easy to show that these sets are convex. 
For any $V\in\mathcal{V}_{t}$, define the problem $\mathcal{P}_{t}\left(V\right)$ of finding the maximal continuation profit obtained by a firm, conditional on continuation utility levels $V$, as \begin{equation} \Pi_{t}\left(V\right) \equiv \sup_{M_{t}^{u}\in\mathcal{M}_{t,OSIC}^{u}} \mathbb{E}\left[ \sum_{\tau=t}^{T} \delta^{\tau-t}\left[y_{\tau}-\psi\left[\vartheta_{\tau}\left(\left[\theta^\tau\right]_{t}^{\tau},\left[y^\tau\right]_{t}^{\tau}\right)\right]\right]\mid\theta^{t-1}=h^{t-1} \right], \label{eq:profit_max-1} \end{equation} subject to, for $i\in\left\{ l,h\right\}$, \begin{equation} V_{i}=\mathbb{E}\left[\sum_{\tau=t}^{T}\delta^{\tau-t} \vartheta_{\tau}\left(\left[\theta^\tau\right]_{t}^{\tau},\left[y^\tau\right]_{t}^{\tau}\right) \mid \theta^{t}=\left(h^{t-1},i\right)\right],\label{eq:PK} \end{equation} with $\Pi_{T+1}\left(0,0\right)=0$ if $T<\infty$. Denote as $M_{t}^{*,V}\in\mathcal{M}_{t,OSIC}^{u}$ the solution to this problem, whenever it exists and is unique. Finally, for $v\in\frac{1-\delta^{T-t+1}}{1-\delta}u\left(\mathbb{R_{+}}\right)$, define the full information continuation profit function as \[ \Pi_{t,i}^{FI}\left(v\right)\equiv\sum_{\tau=t}^{T}\delta^{\tau-t}\left[\mathbb{E}\left[{y}_{t}\mid\theta_{t}=i\right]-\psi\left(\frac{1-\delta}{1-\delta^{T-t+1}}v\right)\right]. \] It is easy to show that $\Pi_{t}\left(V\right)\leq\pi_{hl}\Pi_{t,l}^{FI}\left(V_{l}\right)+\pi_{hh}\Pi_{t,h}^{FI}\left(V_{h}\right)$. If $V_{l}\geq V_{h}$, both functions coincide since offering constant utility $\frac{1-\delta}{1-\delta^{T-t+1}}V_{i}$, following type $\theta_{t}=i$, is feasible (the incentive constraints of the relaxed problem do not bind). \subsection{Recursive representation} The following preliminary result is useful in studying the recursive structure of the relaxed problem. \begin{lem} \label{lem:uniqueness-concavity} If problem $\Pi_{t}\left(\cdot\right)$ has a solution then: (i) It is strictly concave, (ii) Its solution is unique. \end{lem} \begin{proof} Proof of (i). For any $t$, consider utility pairs $V^{1},V^{2}\in\mathcal{V}_{t}$, with $V^{1}\neq V^{2}$, and $\alpha\in\left(0,1\right)$. For $k=1,2$, take optimal period $t$ continuation UDMs $M_{t}^{u,k}\equiv\left\{ \vartheta_{\tau}^{k}\right\} _{\tau=t}^{T}$ in problem $\Pi_{t}\left(V^{k}\right)$, and let $V^{\alpha}\equiv\alpha V^{1}+\left(1-\alpha\right)V^{2}$. Since all incentive constraints as well as (\ref{eq:PK}) are linear in utility flows, the mechanism $M_{t}^{u,\alpha}\equiv\left\{ \alpha\vartheta_{\tau}^{1}+\left(1-\alpha\right)\vartheta_{\tau}^{2}\right\} _{\tau=t}^{T}$ is feasible in $\Pi_{t}\left(V^{\alpha}\right)$ and, since the objective function in (\ref{eq:profit_max-1}) is strictly concave, generates profits strictly above $\alpha\Pi_{t}\left(V^{1}\right)+\left(1-\alpha\right)\Pi_{t}\left(V^{2}\right)$. Hence $\Pi_{t}\left(V^{\alpha}\right)>\alpha\Pi_{t}\left(V^{1}\right)+\left(1-\alpha\right)\Pi_{t}\left(V^{2}\right)$. Proof of (ii). Follows similarly from strict concavity of the objective function and linearity of the constraints in problem $\Pi_{t}\left(\cdot\right)$. \end{proof} We extend the definition of $\xi$ by defining, for any function $\vartheta:Y\mapsto u\left(\mathbb{R}_{+}\right)$, \[ \xi\left(\vartheta,\theta\right)\equiv\sum_{y\in Y}p_{\theta}\left(y\right)\left[y-\psi\left[\vartheta\left(y\right)\right]\right].
\] Define a period $t$ policy as a any pair $\left(\vartheta,N\right)$ such that $\vartheta:Y\mapsto u\left(\mathbb{R}_{+}\right)$ and $N:\Phi\mapsto\mathcal{V}_{t+1}$, and the set of period $t$ policies as $\mathcal{N}_{t}$. Finally, define \[ \Gamma_{t}\equiv\left\{ \left(V_{l},V_{h}\right)\mid\begin{array}{c} \exists\left(\vartheta,N\right)\in\mathcal{N}_{t}\text{ such that}\\ V_{h}=\sum_{y\in Y}p_{h}\left(y\right)\left[\vartheta\left(y\right)+\pi_{hh}N_{h}\left(\phi\left(y\right)\right)+\pi_{hl}N_{l}\left(\phi\left(y\right)\right)\right]\\ V_{l}\geq\sum_{y\in Y}p_{l}\left(y\right)\left[\vartheta\left(y\right)+\pi_{hh}N_{h}\left(\phi\left(y\right)\right)+\pi_{hl}N_{l}\left(\phi\left(y\right)\right)\right]\\ V_{l}\in\frac{1-\delta^{T-t+1}}{1-\delta}u\left(\mathbb{R}_{+}\right) \end{array}\right\} , \] and the following optimization problem, choosing the optimal period $t$ policy: \begin{equation} P_{t}\left(V\right)\equiv\begin{array}{c} \sup_{\left(\vartheta,N\right)\in\mathcal{N}_{t}}\pi_{hh}\left[\xi\left(\vartheta,h\right)+\delta\sum_{y\in Y}p_{h}\left(y\right)\Pi_{t+1}\left(N\left(\phi\left(y\right)\right)\right)\right]+\pi_{hl}\Pi_{t,l}^{FI}\left(V_{l}\right)\\ \text{s.t. }\left\{ \begin{array}{c} V_{l}\geq\sum_{y\in Y}p_{l}\left(y\right)\left\{ \vartheta\left(y\right)+\delta\left[\pi_{lh}N_{h}\left(\phi\left(y\right)\right)+\pi_{ll}N_{l}\left(\phi\left(y\right)\right)\right]\right\} \\ V_{h}=\sum_{y\in Y}p_{h}\left(y\right)\left\{ \vartheta\left(y\right)+\delta\left[\pi_{hh}N_{h}\left(\phi\left(y\right)\right)+\pi_{hl}N_{l}\left(\phi\left(y\right)\right)\right]\right\} \end{array}\right. \end{array},\label{eq:recusrive_profit} \end{equation} with solution, whenever it exists, denoted as $\left(\vartheta_{t}^{V},N_{t}^{V}\right)$. \begin{lem} \label{lem:show-recursivity} For any $t=1,...,T$ and $V\in\mathcal{V}_{t}$, let $\left\{ \vartheta_{\tau}^{*}\right\} _{\tau=t}^{T}$ be the solution to problem (\ref{eq:profit_max-1}). The following hold, for any $t\leq T$: (i) (Full insurance following low type) if UDM $\left\{ \vartheta_{t}^{*}\right\} _{\tau = t}^T$ solves $\Pi_t(V)$, then, for any $1 \leq \tau \leq T-t+1$ and $y^{\tau -1} \in Y^{\tau-1}$, \begin{equation*} \vartheta_{t + \tau'}(\theta^{\tau'},y^{\tau'}) = \vartheta_{t + \tau''}(\theta^{\tau''},y^{\tau''}) \end{equation*} if $\theta^{\tau'} , \theta^{\tau''} \succeq (h^{\tau-1},l)$ and $y^{\tau'} , y^{\tau''} \succeq y^{\tau -1 }$.\footnote{ For completeness, we define $Y^0 \equiv \left\{ \emptyset \right\}$ and impose that all income histories succeed $\emptyset$. } (ii) Recursive utility set representation: $\mathcal{V}_{t}=\Gamma_{t}$, (iii) $\Pi_{t}\left(\cdot\right)$ satisfies the following recursive equation, i.e., $\Pi_{t}\left(\cdot\right)=P_{t}\left(\cdot\right)$, (iv) Flow contract and utilities following high type: for any $y\in Y$, $V \in \mathcal{V}_t$ and $i\in\left\{ l,h\right\} $, \[ \vartheta_{t}^{*}\left(h,y\right)=\vartheta_{t}^{V}\left(y\right), \] \[ \mathbb{E}\left[\sum_{\tau=t+1}^{T}\delta^{\tau-t-1}\vartheta\left(\left[\theta\right]_{t}^{\tau},\left[y\right]_{t}^{\tau}\right)\mid\theta^{t}=\left(h^{t},i\right),y_{t}=y\right]=N_{i}^{V}\left(\phi\left(y\right)\right). \] \end{lem} \begin{proof} We start by proving (i). Consider problem $\Pi_t(V)$, $1 \leq \tau \leq T-t+1$, and $y^{\tau -1 } \in Y^{\tau-1}$. 
Utility flows for histories $(\theta^{\tau'}, y^{\tau'})$ satisfying $\theta^{\tau'} \succeq (h^{\tau-1},l)$ and $y^{\tau'} \succeq y^{\tau -1 }$ only affect the objective function through \begin{equation} \label{eq:loss_follow_l} \mathbb{E}\left\{ \sum_{\tau' = t + \tau -1}^{T} \delta^{\tau'-t }\left[ y_{\tau'} - \psi\left[ \vartheta_{\tau'}\left(\left[\theta^{\tau'} \right]_{t}^{\tau'},\left[y^{\tau'}\right]_{t}^{\tau'}\right) \right] \right] \mid [\theta^T]_t^{t+\tau-1}=(h^{\tau -1},l), [y^T]_t^{t+\tau-2}=y^{\tau-1} \right\}, \end{equation} and they only affect the incentive constraints through \begin{equation} \label{eq:Vl_promise} \mathbb{E}\left\{ \sum_{\tau' = t + \tau -1}^{T} \delta^{\tau'-t } \vartheta_{\tau'}\left(\left[\theta^{\tau'} \right]_{t}^{\tau'},\left[y^{\tau'}\right]_{t}^{\tau'}\right) \mid [\theta^T]_t^{t+\tau-1}=(h^{\tau -1},l), [y^T]_t^{t+\tau-2}=y^{\tau-1} \right\}. \end{equation} Convexity of $\psi$ implies that a constant utility flow, following type realizations $(h^{\tau -1},l)$ and income realizations $y^{\tau-1}$, uniquely maximizes (\ref{eq:loss_follow_l}), subject to (\ref{eq:Vl_promise}). Hence any solution to the profit maximization problem $\Pi_t$ must satisfy (i), in which case (\ref{eq:loss_follow_l}) is equal to $\Pi_{t,l}^{FI}\left(V_{l}\right)$. Hence, we can restrict the choice set in the profit maximization problem to the set $\bar{\mathcal{M}}_{t,OSIC}^{u}$, which is the subset of ${\mathcal{M}}_{t,OSIC}^{u}$ satisfying (i). As a consequence, the continuation mechanism following a low-type announcement is pinned down by the continuation utility of the consumer: it corresponds to the constant consumption amount that delivers such continuation utility. We now show (ii)-(iv). Fix any period $t$. Any OSIC mechanism in $\bar{\mathcal{M}}_{t,OSIC}^{u}$ can be described by the continuation utility obtained conditional on $\theta_{t}=l$, the within-period $t$ utility flows generated for a consumer with $\theta_{t}=h$ and the continuation utility flows provided from period $t+1$ onwards for a consumer with $\theta_{t}=h$ (which constitutes a period $t+1$ OSIC mechanism). This statement is formalized below. Define $\bar{A}_{t}\equiv\mathbb{R}\times\left[u\left(\mathbb{R}\right)\right]^{Y}\times\left[\mathcal{M}_{t+1,OSIC}^{u}\right]^{\Phi}$ as the set of triples $(V_l, \vartheta(\cdot), \bar{N} (\cdot))$ where, in period $t$ with type announcement $h$ and income realization $y \in Y$, $\vartheta(y)$ represents flow utility in period $t$ and $\bar{N} (\phi(y)) \in \mathcal{M}_{t+1,OSIC}^{u}$ represents the continuation mechanism offered from period $t+1$ onwards. The flow utility in a continuation mechanism $\bar{N} (\phi(y))$, if history $(\theta^\tau, y^\tau)$ is observed starting in period $t+1$, is denoted as $\bar{N}\left(\theta^{\tau},y^{\tau} ; \phi\left(y\right)\right)$.
So a one-to-one mapping exists between the set $\bar{\mathcal{M}}_{t,OSIC}^{u}$ and the set $A_{t}$ defined as \begin{equation} \left\{ \left(V_{l},\vartheta\left(\cdot\right),\bar{N}\left(\cdot\right)\right)\in\bar{A}_{t} \mid \begin{array}{c} \exists \left(V_{l}',V_{h}'\right)\in\mathbb{R}^{2}\text{ s.t.}\\ V_{i}'\left(\phi\right)=\\ \mathbb{E}\left[ \sum_{\tau=t+1}^{T}\delta^{\tau-t-1}\bar{N}\left(\left[\theta^{\tau},y^{\tau}\right]_{t+1}^{\tau};\phi\right)\mid\theta_{t+1}=i \right],\\ V_{l} \geq \sum_{y\in Y}p_{h}\left(y\right)\left[ \vartheta\left(y\right) + \delta (\pi_{lh}V_{h}'\left(\phi\left(y\right)\right)+\pi_{ll}V_{l}'\left(\phi\left(y\right)\right)) \right],\\ V_{l}\in\frac{1-\delta^{T-t+1}}{1-\delta}u\left(\mathbb{R}_{+}\right). \end{array} \right\} .\label{eq:aux_setA} \end{equation} The one-to-one mapping $a:A_{t}\mapsto\bar{\mathcal{M}}_{t,OSIC}^{u}$ assigns, for each $\left(V_{l},\vartheta\left(\cdot\right),\bar{N}\left(\cdot\right)\right)\in A_{t}$, the mechanism $a\left(V_{l},\vartheta\left(\cdot\right),\bar{N}\left(\cdot\right)\right)=\left\{ \vartheta_{\tau}\right\} _{\tau=t}^{T}$ satisfying: $\vartheta_{\tau}\left(\theta^{\tau-t},y^{\tau-t}\right)=\frac{1-\delta}{1-\delta^{T-t+1}}V_{l}$, for any $\left(\theta^{\tau-t},y^{\tau-t}\right)\succeq\left(l,y\right)$ and any $y\in Y$; $\vartheta_{t}\left(h,y\right)=\vartheta\left(y\right)$ for any $y\in Y$; and for any $\tau\geq t+1$ and $\left(\theta^{\tau-t-1},y^{\tau-t-1}\right)$, $\vartheta_{\tau}\left(\left(h,\theta^{\tau-1}\right),\left(y,y^{\tau-1}\right)\right)=\bar{N}\left(\theta^{\tau-1},y^{\tau-1}\mid\phi\left(y\right)\right)$. The mechanism $a\left(V_{l},\vartheta\left(\cdot\right),\bar{N}\left(\cdot\right)\right)$ satisfies OSIC since: (i) the inequality in expression (\ref{eq:aux_setA}) implies that the period $t$ one-shot incentive constraint is satisfied, and (ii) the one-shot incentive constraints for periods $\tau\geq t+1$ are guaranteed since the continuation mechanism $\bar{N}\left(\phi\left(y_{t}\right)\right)$, for any $y_{t}\in Y$, is contained in $\mathcal{M}_{t+1,OSIC}^{u}$. It is easy to show that, for any $M_{t}^{u}\in\bar{\mathcal{M}}_{t,OSIC}^{u}$, the vector $a^{-1}\left(M_{t}^{u}\right)$ is an element of $A_{t}$. Moreover, the set of utilities generated by mechanisms $a\left(V_{l},\vartheta\left(\cdot\right),\bar{N}\left(\cdot\right)\right)$, for all $\left(V_{l},\vartheta\left(\cdot\right),\bar{N}\left(\cdot\right)\right)\in A_{t}$ coincides with $\Gamma_{t}$, which implies (ii). Now consider any $V\in\mathcal{V}_{t}$ and any mechanism $a\left(V_{l},\vartheta\left(\cdot\right),\bar{N}\left(\cdot\right)\right)\in\bar{\mathcal{M}}_{t,OSIC}^{u}$ feasible in the problem defining $\Pi_{t}\left(V\right)$. Define $N=\left(N_{l},N_{h}\right):\Phi\mapsto\mathbb{R}^{2}$, for each $i\in\left\{ l,h\right\} $, by \[ N_{i}\left(\phi\right) \equiv \mathbb{E}\left[ \sum_{\tau=t+1}^{T}\delta^{\tau-t-1}\bar{N}\left(\left[\theta^{\tau},y^{\tau}\right]_{t+1}^{\tau};\phi\right)\mid\theta_{t+1}=i \right]. 
\] The new mechanism $a\left(V_{l},\vartheta\left(\cdot\right),M_{t+1}^{*,N\left(\cdot\right)}\right)$, where $M_{t+1}^{*,N\left(\phi\right)}\in\mathcal{M}_{t+1,OSIC}^{u}$ is the optimal mechanism in the problem $\mathcal{P}_{t+1}\left(N_{l}\left(\phi\right),N_{h}\left(\phi\right)\right)$, for all $\phi\in\Phi$, is also in $A_{t}$, generates the same continuation utility in period $t$, for any $\theta_{t}\in\left\{ l,h\right\} $, and generates strictly higher profits if $N_{t+1}^{*,N\left(\cdot\right)}\neq\bar{N}$. Hence, without loss of optimality in problem $\mathcal{P}_{t}\left(V\right)$, we can focus on mechanisms indexed by utility level $V_{l}$ and mappings $\left(\vartheta,N\right):Y\mapsto u\left(\mathbb{R}_{+}\right)\times\mathcal{V}_{t+1}$, which are given by $a\left(V_{l},\vartheta\left(\cdot\right),M_{t+1}^{*,N\left(\cdot\right)}\right)$. From now on, we refer to such mechanisms via the triple $\left(V_{l},\vartheta,N\right)\in u\left(\mathbb{R}_{+}\right)\times\left[u\left(\mathbb{R}_{+}\right)\right]^{Y}\times\left[\mathcal{V}_{t+1}\right]^{\Phi}$. The mechanism connected with $\left(V_{l},\vartheta,N\right)$ satisfies OSIC and constraint (\ref{eq:PK}) if, and only if, it satisfies \begin{equation} V_{l} \geq \sum_{y\in Y} p_{l} \left(y\right) \left\{ \vartheta\left(y\right) + \delta \left[ \pi_{hh}N_{h}\left(\phi\left(y\right)\right)+\pi_{hl}N_{l}\left(\phi\left(y\right)\right) \right] \right\}, \label{eq:recursive_IC} \end{equation} \[ V_{h}=\sum_{y\in Y}p_{h}\left(y\right)\left\{ \vartheta\left(y\right)+\delta\left[\pi_{hh}N_{h}\left(\phi\left(y\right)\right)+\pi_{hl}N_{l}\left(\phi\left(y\right)\right)\right]\right\} \] and it generates profits \[ \pi_{hh}\left[ \xi\left(\vartheta,h\right) + \delta \sum_{y\in Y}p_{h}\left(y\right)\Pi_{t+1}\left(N\left(\phi\left(y\right)\right)\right) \right] +\pi_{hl}\Pi_{t,l}^{FI}\left(V_{l}\right). \] Hence, the maximal profit $\Pi_{t}\left(\cdot\right)$ must satisfy (\ref{eq:recusrive_profit}) and the profit maximizing continuation UDM must be generated by the solution to (\ref{eq:recusrive_profit}). This implies (iii) and (iv). \end{proof} \subsection{Properties} We use notation $\partial_{i}^{+}\Pi_{t}\left(\cdot\right)$ and $\partial_{i}^{-}\Pi_{t}\left(\cdot\right)$ to represent the right- and left-derivatives of $\Pi_{t}\left(\cdot\right)$ with respect to $V_{i}$, for $i=l,h$. For any time $t$, continuation utility vector $V \in int (\mathcal{V}_{t})$ and constant $c$, we denote the expression \[ 0 \in [\partial_{i}^+ \Pi_{t}(V) + c, \partial_{i}^- \Pi_{t}(V) + c] \] simply by $ 0 \in \partial_i \Pi_{t}( V ) + c.$ \begin{lem} \label{lem:derivative-at-45degreeline} For any $V\in int (\mathcal{V}_{t})$ with $V_l \geq V_h$, $\Pi_t (\cdot)$ is differentiable at $V$ and \begin{equation*} \frac{\partial}{\partial V_{i}}\Pi_{t}\left(V\right) = \pi_{hi} \frac{d}{dV_i} \Pi_{t,i}^{FI}\left(V_{i}\right) = \pi_{hi} \psi'(u_i), \end{equation*} where $ \left( \sum_{\tau=t}^T \delta^{\tau - t} \right) u_i = V_i$ \end{lem} \begin{proof} Since $\Pi_{t}\left(V\right)=\pi_{hh}\Pi_{t,h}^{FI}\left(V_{h}\right)+\pi_{hl}\Pi_{t,l}^{FI}\left(V_{l}\right)$, for $V_{l}\geq V_{h}$, the result is obvious for $V_{l}>V_{h}$. Now consider a pair $\left(V_{0},V_{0}\right)\in\mathcal{V}_{t}$, with $V_{0}\equiv\frac{1-\delta^{T-t+1}}{1-\delta}u_{0}$ and $u_{0}>u\left(0\right)$. The optimal period $t$ policy $\left(\vartheta^{V},N^{V}\right)$ induces constant utility flow $u_{0}$, i.e., $\vartheta^{V}\left(y\right)=u_{0}$, for all $y\in Y$. 
Now for $\varepsilon$ sufficiently small, define an alternative policy $\left(\tilde{\vartheta},N^{V}\right)$ with utility flow given by \[ \tilde{\vartheta}^{\varepsilon}\left(y\right) \equiv u_{0} + \varepsilon \frac{ p_{l}\left(y\right)^{-1}- |Y| }{ \sum_{y\in Y}\ell\left(y\right)^{-1} - |Y| }. \] This alternative policy is feasible in problem $\mathcal{P}_{t}\left(V_{0}+\varepsilon,V_{0}\right)$, which implies \[ \Pi_{t}\left(V_{0}+\varepsilon,V_{0}\right)-\Pi_{t}\left(V_{0},V_{0}\right)\geq\pi_{hh}\sum_{y\in Y}p_{h}\left(y\right)\left\{ \psi\left(u_{0}\right)-\psi\left[\tilde{\vartheta}^{\varepsilon}\left(y\right)\right]\right\} , \] holding as an equality for $\varepsilon=0$. Since the right-hand side (RHS) is differentiable and the left-hand side is strictly concave, their derivative coincides. The derivative of the RHS at $\varepsilon=0$ is $\pi_{hh}\psi'\left(u_{0}\right)=\pi_{hh}\frac{d}{dv}\Pi_{t,h}^{FI}\left(V_{0}\right)$, which pins down $\frac{\partial}{\partial V_{h}}\Pi_{t}\left(V_{0},V_{0}\right)$. A similar argument can be used to find $\frac{\partial}{\partial V_{l}}\Pi_{t}\left(V_{0},V_{0}\right)$, with an $\varepsilon$-perturbation that is feasible in problem $\mathcal{P}_{t}\left(V_{0},V_{0}+\varepsilon\right)$, for $\varepsilon$ small. \end{proof} The following result shows what type of distortions arise in the profit maximizing mechanism and the dynamic behavior of continuation utilities. \begin{lem} \label{lem:recursive-properties} For any $V\in int\left(\mathcal{V}_{t}\right)$, the solution $\left(\vartheta^{V},N^{V}\right)$ of (\ref{eq:recusrive_profit}) satisfies: there exists multipliers $\lambda \geq 0$ and $\mu > 0$ such that: (i) Multiplier signal: $\lambda>0$ and both constraints in (\ref{eq:recusrive_profit}) hold as equalities if, and only if, $V_{h}>V_{l}$, (ii) Within-period distortions: $\vartheta_{t}^{V}$ satisfies \[ -\psi'\left(\vartheta^{V}\left(y\right)\right)+\mu-\lambda\ell\left(y\right)\begin{cases} \leq0 & \vartheta^{V}\left(y\right)=u\left(0\right),\\ =0 & \text{, if }\vartheta^{V}\left(y\right)>u\left(0\right). \end{cases} \] (iii) Continuation utility rewards high types: if $V_{h}>V_{l}$, $t<T$, then \[ N_{h}^{V}\left(\phi\right)>N_{l}^{V}\left(\phi\right). \] for any $\phi \in \Phi$ such that $ N_{l}^{V}\left(\phi\right), N_{h}^{V}\left(\phi\right) >\frac{1-\delta^{T-t+1}}{1-\delta}u_{0} $, (iv) Continuation utility signal rewards: for any $\phi, \phi' \in \Phi$ \[ \ell (\phi') \leq \ell (\phi) \implies \pi_{ll} N_{l}^{V}\left(\phi'\right) + \pi_{lh} N_{h}^{V}\left(\phi'\right) \geq \pi_{ll} N_{l}^{V}\left(\phi\right) + \pi_{lh} N_{h}^{V}\left(\phi\right), \] with this inequality holding strictly if $V_h > V_l$ and $N^ {V}(\phi) \in int(\mathcal{V}_{t+1})$. \end{lem} \begin{proof} Consider arbitrary period $t$ and $V\in int\left(\mathcal{V}_{t}\right)$. Step 1. There exists period t policy $\left(\vartheta,N\right)\in\mathcal{N}_{t}$ feasible in problem $\mathcal{P}_{t}\left(V\right)$ such that inequality constraint in (\ref{eq:recusrive_profit}) holds strictly: just consider a feasible policy in problem $\mathcal{P}_{t}\left(V_{l}-\varepsilon,V_{h}\right)$, for $\varepsilon>0$ small. Step 2. The optimization problem $\mathcal{P}_{t}\left(V\right)$ has a concave objective, convex choice set $\mathcal{N}_{t}$ and linear constraints. 
Step 1 implies that a feasible policy $\left(\vartheta^{V},N^{V}\right)\in\mathcal{N}_{t}$ for problem $\mathcal{P}_{t}\left(V\right)$ is optimal if, and only if, there exists multipliers $\lambda \geq 0$ and $\mu\in\mathbb{R}$ such that $\left[\left(\vartheta^{V},N^{V}\right),\left(\lambda\right)\right]\in\mathcal{N}_{t}\times\mathbb{R}_{+}$ form a saddle point of the Lagrangian \begin{multline} \xi\left(\vartheta,h\right)+\sum_{y\in Y}p_{h}\left(y\right)\vartheta\left(y\right)\left[\mu-\lambda\ell\left(y\right)\right] + \lambda V_l - \mu V_h\\ + \sum_{\phi \in \Phi} p_{h}\left( \phi \right) \left\{ \begin{array}{c} \Pi_{t+1}\left(N\left(\phi\left(y\right)\right)\right)+\left[\mu\pi_{hh}-\lambda\ell\left( \phi \right)\pi_{lh}\right]N_{h}\left(\phi\left(y\right)\right)\\ +\left[\mu\pi_{hl}-\lambda\ell\left( \phi \right)\pi_{ll}\right]N_{l}\left(\phi\left(y\right)\right) \end{array} \right\}. \label{eq:Lagrangian} \end{multline} Moreover, the necessary condition for optimality of $N(\cdot)$ is \begin{equation} 0 \in \partial_i \Pi_{t+1}(N ( \phi (y))) + \mu \pi_{hi} - \lambda \ell(\phi) \pi_{li}, \label{FOC_N} \end{equation} while (ii) is a necessary condition for optimality of $\vartheta(\cdot)$. Since $\partial_h^{-} \Pi_{t+1}(N ( \phi (y))) < 0$, (\ref*{FOC_N}) implies $\mu > 0$. Step 3 (item i). If $\lambda=0$, (\ref{eq:Lagrangian}) becomes the Lagrangian of the problem $\bar{\mathcal{P}}_{t}\left(V\right)$, where the period $t$ incentive constraint is ignored, which has as unique solution constant flow utility equal to $\frac{1-\delta}{1-\delta^{T-t+1}}V_{i}$, following announcement $\theta=i$, which only satisfies the period $t$ one-shot incentive constraint if $V_{h}\leq V_{l}$. Alternatively, if $V_{l}\geq V_{h}$, the optimal mechanism in $\mathcal{P}_{t}\left(V\right)$ involves constant utilities that do not depend on income $y$, which is only optimal in (\ref{eq:Lagrangian}) if $\lambda=0$. Step 4 (item ii). Property (ii) is equivalent to local optimality of $\vartheta^{V}$ in (\ref{eq:Lagrangian}). Step 6 (item iii). If $V_{l}<V_{h}$ and $N_{h}^{V}\left(\phi_{0}\right)\leq N_{l}^{V}\left(\phi_{0}\right)$ for some $\phi_{0}\in\Phi$, a contradiction follows. Necessary condition (\ref{FOC_N}) implies \[ \partial_{h}^{+}\Pi_{t+1}\left(N^{V}\left(\phi_{0}\right)\right)+\left[\mu\pi_{hh}-\lambda\ell\left(\phi_{0}\right)\pi_{lh}\right] \leq 0 \leq \partial_{l}^{-}\Pi_{t+1}\left(N^{V}\left(\phi_{0}\right)\right)+\left[\mu\pi_{hl}-\lambda\ell\left(\phi_{0}\right)\pi_{ll}\right] \] which, using $\lambda>0$, gives us \begin{equation} \label{eq:Diff_N_compare} \frac{\partial_{h}^{+}\Pi_{t+1}\left(N^{V}\left(\phi_{0}\right)\right)}{\pi_{hh}} \leq \lambda\ell\left(\phi_{0}\right)\frac{\pi_{lh}}{\pi_{hh}}-\mu < \lambda\ell\left(\phi_{0}\right)\frac{\pi_{ll}}{\pi_{hl}}-\mu \leq \frac{\partial_{l}^{-}\Pi_{t+1}\left(N^{V}\left(\phi_{0}\right)\right)}{\pi_{hl}}. 
\end{equation} Finally, $N_{h}^{V}\left(\phi_{0}\right)\leq N_{l}^{V}\left(\phi_{0}\right)$ and Lemma \ref{lem:derivative-at-45degreeline} imply that $\Pi_{t+1}$ is differentiable at $N^{V}\left(\phi_{0}\right)$ and \[ \frac{\partial_{i}\Pi_{t+1}\left(N^{V}\left(\phi_{0}\right)\right)}{\pi_{hi}} = \frac{d}{dV_{i}} \Pi_{t+1,i}^{FI}\left(N_{i}^{V}\left(\phi_{0}\right)\right) = \frac{d}{dV_{l}} \Pi_{t+1,l}^{FI}\left(N_{i}^{V}\left(\phi_{0}\right)\right), \] but, given concavity of $\Pi^{FI}_{t+1,l} (\cdot)$ and $N_{h}^{V}\left(\phi_{0}\right)\leq N_{l}^{V}\left(\phi_{0}\right)$, we have \[ \frac{\partial_{h}\Pi_{t+1}\left(N^{V}\left(\phi_{0}\right)\right)}{\pi_{hh}} \geq \frac{\partial_{l}\Pi_{t+1}\left(N^{V}\left(\phi_{0}\right)\right)}{\pi_{hl}}, \] which contradicts (\ref*{eq:Diff_N_compare}). Step 7 (item iv). For any $\phi \in \Phi$, equation (\ref*{FOC_N}) implies \[ \left[ \begin{matrix} \lambda \ell(\phi) \pi_{ll} - \mu \pi_{hl} \\ \lambda \ell(\phi) \pi_{lh} - \mu \pi_{hh} \end{matrix} \right] \in \partial \Pi_{t+1}(N^V ( \phi )). \] Considering any two $\phi,\phi' \in \Phi$, concavity of $\Pi_{t+1} (\cdot)$ implies: \begin{align*} \Pi_{t+1}(N^V ( \phi' )) - \Pi_{t+1}(N^V ( \phi)) \leq \left[ \begin{matrix} \lambda \ell(\phi) \pi_{ll} - \mu \pi_{hl} \\ \lambda \ell(\phi) \pi_{lh} - \mu \pi_{hh} \end{matrix} \right]^T [N^V (\phi') - N^V(\phi)], \\ \Pi_{t+1}(N^V ( \phi )) - \Pi_{t+1}(N^V ( \phi')) \leq \left[ \begin{matrix} \lambda \ell(\phi') \pi_{ll} - \mu \pi_{hl} \\ \lambda \ell(\phi') \pi_{lh} - \mu \pi_{hh} \end{matrix} \right]^T [N^V (\phi) - N^V(\phi')]. \end{align*} Summing up both inequalities gives \[ 0 \leq \lambda [\ell(\phi) - \ell(\phi')] \left[ \begin{matrix} \pi_{ll} \\ \pi_{lh} \end{matrix} \right]^T [N^V (\phi') - N^V(\phi)], \] which implies (iv). \end{proof} \section{Proof of Lemma \ref{Lem:Properties_relaxed_problem}} \begin{proof} Uniqueness follows from Lemma \ref{lem:uniqueness-concavity}. The remaining statements are proved in the order: $ii \rightarrow iv \rightarrow i \rightarrow iii \rightarrow v$. Statement (ii) follows directly from Lemma \ref{lem:show-recursivity}-(i), which has $\Pi_1=\Pi^*$ as a special case. All other properties are derived from Lemma \ref{lem:recursive-properties}. For any $t=1,\dots,T$ and $\eta^{t-1} = (h^{t-1}, \phi^{t-1})$, we have that: \begin{gather*} z_t(y \mid \eta^{t-1}, h) = \vartheta(y), \\ V_{t+1}(\eta^{t-1},\phi_t,h,i) = N_i(\phi_t) \text{, for }i=l,h, \end{gather*} where $(\vartheta,N)$ is the solution to problem \begin{equation} \label{eq:update-continuation-utility} P_t \left( V_t(\eta^{t-1},l), V_t(\eta^{t-1},h) \right). \end{equation} Proof of (iv). Follows from $V_l<V_h$ and Lemma \ref{lem:recursive-properties}-(iii). Proof of (i). For any history $\eta^{t-1} = (h^{t-1},\phi^{t-1})$, the result follows from (iv), which states that the continuation utility following $\theta_t=h$ is strictly higher than that following $\theta_t=l$, and Lemma \ref{lem:recursive-properties}-(i), which implies that the within-period $t$ upward incentive constraint binds as a result. Let $\mu_{t-1}(\phi^{t-1})$ and $\lambda_{t-1}(\phi^{t-1})$ be the Lagrange multipliers of the problem (\ref{eq:update-continuation-utility}), with $\eta^{t-1} = (h^{t-1}, \phi^{t-1})$. Proof of (iii). Follows from Lemma \ref{lem:recursive-properties}-(ii), using the following relationship with the solution $\vartheta(\cdot)$ of problem (\ref{eq:update-continuation-utility}): \begin{equation*} \psi'(\vartheta(y_t)) = \frac{ 1 }{ u' \left( z_t(y_t \mid \eta^{t-1},h) \right) }. \end{equation*} Proof of (v).
Follows directly from Lemma \ref{lem:recursive-properties}-(iv). \end{proof} \section{Results on the auxiliary problem} For simplicity, I write the auxiliary problem in terms of utility levels: \begin{eqnarray*} \chi\left(\nu,\Delta\right) & = & \inf_{x:Y\rightarrow u\left(\mathbb{R}_{+}\right)}\sum_{y}p_{h}\left(y\right)\psi\left[x\left(y\right)\right], \end{eqnarray*} subject to \[ \sum_{y}p_{h}\left(y\right)x\left(y\right)=\nu, \] and \[ \sum_{y}p_{l}\left(y\right)x\left(y\right)=\nu-\Delta. \] The following statement provides important properties of cost function $\chi$ that are used in the analysis. \begin{lem} \label{Lem:Appendix_Chi_Properties} The problem $\mathcal{P}^A$ satisfies the following: \\ (i) It has a unique solution, \\ (ii) $\chi(\cdot)$ is strictly convex,\\ moreover, if $x\left(\cdot\right) \in int\left[u\left(\mathbb{R}_{+}\right)\right]^{Y}$ solves $\mathcal{P}^A$, then: \\ (iii) $\chi\left(\nu,\Delta\right)$ is twice continuously differentiable in an open neighborhood of $\left(\nu,\Delta\right)$, \\ (iv) $sign\left(\frac{\partial\chi\left(\nu,\Delta\right)}{\partial\Delta}\right)=sign\left(\Delta\right)$, \\ (v) cross derivative sign: \begin{equation*} sign\left(\frac{\partial^{2}\chi}{\partial \nu\partial\Delta}\right) = \begin{cases} sign\left(\Delta\right)\text{, if }\psi'''>0 \\ -sign\left(\Delta\right)\text{, if }\psi''' < 0 \\ = 0 \text{, if } \psi''' = 0, \end{cases} \end{equation*}\\ (vi) $ \psi'(x(y)) = \chi_\nu (\nu, \Delta) + \chi_\Delta (\nu, \Delta) \left[ 1 - \ell (y) \right] $. \end{lem} \begin{proof} Existence of solution follows from the fact that \[ \left\{ x\in\left[u\left(\mathbb{R}_{+}\right)\right]^{Y}\mid\sum_{y}p_{h}\left(y\right)\psi\left[x\left(y\right)\right]\leq K\right\} \] is compact, for any $K\in\mathbb{R}_{+}$. Uniqueness and convexity (items i-ii) follows from the strict convexity of the objective function and linearity of the constraints in $(x,\nu,\Delta)$. The following are necessary and sufficient conditions for $x\left(\cdot\right)\in int\left[u\left(\mathbb{R}_{+}\right)\right]^{Y}$ to be interior are: $\exists\lambda,\mu\in\mathbb{R}$ such that \begin{equation} \psi'\left(x\left(y\right)\right) - \lambda + \mu \ell (y) =0,\label{eq:chiFOC1} \end{equation} \begin{equation} \sum_{y\in Y}p_{h}\left(y\right)x\left(y\right)=\nu,\label{eq:chiFOC2} \end{equation} \begin{equation} \sum_{y\in Y}p_{l}\left(y\right)x\left(y\right)=\nu-\Delta.\label{eq:chiFOC3} \end{equation} for all $y\in Y$. Consider $\left\{ y_{i}\right\} _{i\in I}$ an ordering of $Y$ such that $\left\{ \ell (y_i ) \right\} _{i\in I}$ is increasing. Then distributions $\left\{ p_{h}\left(y_{i}\right)\right\} _{i\in I}$ and $\left\{ p_{l}\left(y_{i}\right)\right\} _{i\in I}$ are ordered in terms of the monotone likelihood ratio property (MLRP). It then follows that $\left\{ x\left(y_{i}\right)\right\} _{i\in I}$ is decreasing (increasing) if $\mu>0$ ($\mu<0$), which implies that $\Delta$ must be strictly positive (negative). As a consequence, $sign\left(\mu\right)=sign\left(\Delta\right)$ (item iv). If $\left(\lambda,\mu,x\left(\cdot\right)\right)$ solve $\left(\ref{eq:chiFOC1}\right)-\left(\ref{eq:chiFOC3}\right)$, then by the implicit function theorem the system has a unique continuously differentiable solution $\left(\lambda^{\nu',\Delta'},\mu^{\nu',\Delta'},x\left(\cdot\mid \nu',\Delta'\right)\right)$ for $\left(\nu',\Delta'\right)$ in an open neighborhood of $\left(\nu,\Delta\right)$. 
Therefore $\chi$ is continuously differentiable at $\left(\nu,\Delta\right)$, and its derivative is given by \begin{equation} \left[\begin{array}{c} \frac{\partial}{\partial \nu}\chi\left(\nu,\Delta\right)\\ \frac{\partial}{\partial\Delta}\chi\left(\nu,\Delta\right) \end{array}\right] =\left[ \begin{array}{c} \lambda^{\nu,\Delta}-\mu^{\nu,\Delta}\\ \mu^{\nu,\Delta} \end{array} \right]. \label{eq:EnvelopeChiFunction} \end{equation} Continuous differentiability of $\left(\lambda^{\nu,\Delta},\mu^{\nu,\Delta}\right)$ implies that $\chi\left(\cdot\right)$ is twice continuously differentiable at $\left(\nu,\Delta\right)$ (item iii). Finally, simple differentiation implies \[ \frac{\partial^{2}\chi\left(\nu,\Delta\right)}{\partial \nu\partial\Delta} = \frac{\sum_{y}\frac{p_{l}\left(y\right)}{\psi''\left(x\left(y\right)\right)}-\sum_{y}\frac{p_{h}\left(y\right)}{\psi''\left(x\left(y\right)\right)}}{\left(\sum_{y}\frac{p_{h}\left(y\right)}{\psi''\left(x\left(y\right)\right)}\right)\left(\sum_{y}\frac{\left[p_{l}\left(y\right)\right]^{2}}{p_{h}\left(y\right)\psi''\left(x\left(y\right)\right)}\right)+\left(\sum_{y}\frac{p_{l}\left(y\right)}{\psi''\left(x\left(y\right)\right)}\right)^{2}}. \] Now assume $\psi'''>0$, $\left\{ \frac{1}{\psi''\left(x\left(y_{i}\right)\right)}\right\}_{i\in I}$ is increasing (decreasing) in $i$ if, only if, $\Delta>0$ ($\Delta<0$). Since $\left\{ p_{h}\left(y_{i}\right)\right\} _{i\in I}$ and $\left\{ p_{l}\left(y_{i}\right)\right\} _{i\in I}$ are MLRP ordered, we have that \[ sign\left(\frac{\partial^{2}\chi\left(\nu,\Delta\right)}{\partial \nu\partial\Delta}\right)=sign\left(\Delta\right). \] An analogous argument proves the cases $\psi'''=0$ and $\psi'''<0$ (item v). Item (vi) follows from (\ref{eq:chiFOC1}) and (\ref{eq:EnvelopeChiFunction}). \end{proof} \begin{lem} \label{lem:chi-derivatives-foc} The solution to $\mathcal{P}^A$ for the pair $(\nu, \Delta)$ satisfies \begin{equation*} \frac{1}{ u' (\zeta(y)) } = \chi_\nu (\nu, \Delta) + \chi_\Delta (\nu, \Delta) \left[ 1 - \ell (y) \right]. \end{equation*} \end{lem} \begin{proof} Follows from Lemma \ref{Lem:Appendix_Chi_Properties}, with the solution in terms of utility flows and consumption being connected via $ \frac{1}{ u' (\zeta(y)) } = \psi'(x(y)). $ \end{proof} The following statement connects variation in marginal cost of utility and distortions to the levels of utility and distortions and will be instrumental in the analysis of Subsection \ref{subsec:realization-independent-contracts}. \begin{lem} \label{lemma:single-crossing_chi}Suppose $\chi\left(\cdot\right)$ is strictly convex and $\frac{\partial^{2}\chi}{\partial \nu\partial\Delta}>0$, then \[ \left\{ \begin{array}{c} \chi_{\nu}\left(\nu,\Delta\right)\geq\chi_{\nu}\left(\nu',\Delta'\right),\\ \chi_{\Delta}\left(\nu,\Delta\right)\leq\chi_{\Delta}\left(\nu',\Delta'\right) \end{array}\right\} \Rightarrow\left\{ \begin{array}{c} \nu\geq \nu',\\ \Delta\leq\Delta' \end{array}\right\} . \] Additionally, if the left conditions hold with strict inequalities, then the implications also hold with strict inequalities. 
\end{lem} \begin{proof} Since $\chi$ is twice continuously differentiable, the following holds: \[ \chi_{\nu}\left(\nu,\Delta\right)-\chi_{\nu}\left(\nu',\Delta'\right)=\left(\nu-\nu'\right)\int_{0}^{1}\chi_{\nu\nu}\left(\iota\left(\alpha\right)\right)d\alpha+\left(\Delta-\Delta'\right)\int_{0}^{1}\chi_{\nu\Delta}\left(\iota\left(\alpha\right)\right)d\alpha\geq0, \] \[ \chi_{\Delta}\left(\nu,\Delta\right)-\chi_{\Delta}\left(\nu',\Delta'\right)=\left(\nu-\nu'\right)\int_{0}^{1}\chi_{\nu\Delta}\left(\iota\left(\alpha\right)\right)d\alpha+\left(\Delta-\Delta'\right)\int_{0}^{1}\chi_{\Delta\Delta}\left(\iota\left(\alpha\right)\right)d\alpha\leq0, \] where $\iota\left(\alpha\right)=\alpha\left(\nu,\Delta\right)+\left(1-\alpha\right)\left(\nu',\Delta'\right)$, for $\alpha\in\left[0,1\right]$. Using $\frac{\partial^{2}\chi}{\partial\nu\partial\Delta}>0$, these imply \[ \left(\Delta-\Delta'\right)\left[\frac{\int_{0}^{1}\chi_{\nu\Delta}\left(\iota\left(\alpha\right)\right)d\alpha}{\int_{0}^{1}\chi_{\nu\nu}\left(\iota\left(\alpha\right)\right)d\alpha}-\frac{\int_{0}^{1}\chi_{\Delta\Delta}\left(\iota\left(\alpha\right)\right)d\alpha}{\int_{0}^{1}\chi_{\nu\Delta}\left(\iota\left(\alpha\right)\right)d\alpha}\right]\geq0. \] However, convexity and second-order continuous differentiability of $\chi\left(\cdot\right)$ imply that the function $\Gamma$, defined over a neighborhood of $\left(0,0\right)$ by $\left(\nu_{0},\Delta_{0}\right)\mapsto\int_{0}^{1}\chi\left(\iota\left(\alpha\right)+\left(\nu_{0},\Delta_{0}\right)\right)d\alpha$, is also convex and twice continuously differentiable. Strict convexity implies that \[ \left|\Gamma''\left(0,0\right)\right|=\left(\int_{0}^{1}\chi_{\nu\nu}\left(\iota\left(\alpha\right)\right)d\alpha\right)\left(\int_{0}^{1}\chi_{\Delta\Delta}\left(\iota\left(\alpha\right)\right)d\alpha\right)-\left(\int_{0}^{1}\chi_{\nu\Delta}\left(\iota\left(\alpha\right)\right)d\alpha\right)^{2}>0. \] This implies that $\Delta \leq \Delta'$, which together with $\chi_\nu (\nu, \Delta) \geq \chi_\nu (\nu',\Delta')$ implies $\nu \geq \nu'$. Moreover, if the left inequalities in the Lemma hold strictly, we have similarly that $\Delta<\Delta'$ and $\nu > \nu'$. \end{proof} \subsection*{Proof of Lemma \ref{prop:Sufficient_supermodularity}} \label{proof:conditions-supermodularity} \begin{proof} The equivalence of (i) and (iii) follows from Lemma \ref{Lem:Appendix_Chi_Properties}-(v). The second order derivative of $\psi=u^{-1}$ is given by \begin{eqnarray*} \psi''\left(x\right) & = & \left(-\frac{u''\left(\psi\left(x\right)\right)}{u'\left(\psi\left(x\right)\right)}\right)\frac{1}{u'\left(\psi\left(x\right)\right)^{2}}. \end{eqnarray*} The absolute risk aversion of utility $u\left(\cdot\right)$ at $c\in\mathbb{R}_{+}$ is given by \( r_u\left(c\right)\equiv-\frac{u''\left(c\right)}{u'\left(c\right)}. \) Therefore, writing $c=\psi\left(x\right)$ and differentiating $\psi''$ once more, using $\frac{dc}{dx}=\psi'\left(x\right)=1/u'\left(c\right)$, gives \[ \psi'''\left(x\right)=\frac{3\left[u''\left(c\right)\right]^{2}-u'''\left(c\right)u'\left(c\right)}{\left[u'\left(c\right)\right]^{5}}=\frac{r_u'\left(c\right)+2\left[r_u\left(c\right)\right]^{2}}{\left[u'\left(c\right)\right]^{3}}. \] This implies the equivalence between (ii) and (iii). \end{proof} \section{Utility and distortion dynamics} In this section, we focus throughout on a particular $t=1,\dots,T-1$ and $\phi^{t-1} \in \Phi^{t-1}$ and study the problem of reallocating both flow utilities and distortions across periods $t$ and $t+1$, following a series of announcements $\hat{\theta}^t=h^t$ and signals $\phi^{t-1}$.
For notational brevity, we will omit the dependence of distortions and flow utilities on $\phi^{t-1}$, denoting $\nu_t(\phi^{t-1})$ and $\nu_{t+1}(\phi^{t-1},\phi)$ simply as $\nu_t$ and $\nu_{t+1}(\phi)$, for example. Define problem $\mathcal{P}^I$ as \begin{equation*} \min_{(\nu,\Delta,(\nu'(\phi),\Delta'(\phi),\nu^l(\phi))_{\phi \in \Phi}) \in \bar{A}} \chi (\nu, \Delta) + \delta \sum_{\phi \in \Phi} p_h(\phi) \left[ \pi_{hh} \chi \left( \nu'(\phi), \Delta'(\phi) \right) + \pi_{hl} \chi(\nu^l(\phi), 0) \right] \end{equation*} subject to: \begin{equation} \label{eq:Intertemporal_promise_keep} \nu + \delta \sum_{\phi \in \Phi} p_h(\phi) \left[ \pi_{hh} \nu'(\phi) + \pi_{hl} \nu^l(\phi) \right] = V_t(h^{t-1},h), \end{equation} \begin{equation} \label{eq:Intertemporal_IC_t} \nu -\Delta + \delta \sum_{\phi \in \Phi} p_l(\phi) \left[ \pi_{lh} \nu'(\phi) + \pi_{ll} \nu^l(\phi) \right] = V_t(h^{t-1},l), \end{equation} and \begin{multline} \label{Intertemporal_iC_t1} \nu'(\phi) -\Delta'(\phi) + \delta \sum_{\phi' \in \Phi} p_l(\phi') \left[ \pi_{lh} V_{t+2}(h^{t+1},(\phi,\phi'),h) + \pi_{ll} V_{t+2}(h^{t+1},(\phi,\phi'),l) \right] = \\ \nu^l(\phi) + \sum_{t' = t+2}^T \delta^{t'-t-1} u(c_{t+1}(\phi)), \end{multline} where the set $\bar{A}$ is defined as \begin{equation*} \bar{A} \equiv \left\{ (\nu,\Delta,(\nu'(\phi),\Delta'(\phi),\nu^l(\phi))_{\phi \in \Phi}) \mid (\nu, \Delta), (\nu'(\phi), \Delta'(\phi)), (\nu^l(\phi), 0) \in A, \forall\phi\in \Phi \right\}. \end{equation*} \begin{lem} The vector \begin{equation*} \left( \nu_t , \Delta_t, \left( \nu_{t+1} (\phi) , \Delta_{t+1}(\phi), \nu^l_{t+1}(\phi) \right)_{\phi \in \Phi} \right) \end{equation*} solves problem $\mathcal{P}^I$. \end{lem} \begin{proof} First notice that, from Proposition \ref{prop:Optimal_solves_Chi}, the vector $\left( \nu_t , \Delta_t, \left( \nu_{t+1} (\phi) , \Delta_{t+1}(\phi), \nu^l_{t+1}(\phi) \right)_{\phi \in \Phi} \right)$ is feasible in $\mathcal{P}^I$. For any $(\nu,\Delta,(\nu'(\phi),\Delta'(\phi),\nu^l(\phi))_{\phi \in \Phi}) \in \bar{A}$, consider a new mechanism that is identical to the optimal mechanism $M$ except for changing:\\ (i) $z_t(h^{t-1},h)$ to $\zeta(\nu,\Delta)$,\\ (ii) $z_{t+1}(h^{t},\phi,h)$ to $\zeta(\nu'(\phi),\Delta'(\phi))$,\\ (iii) consumption in $t+1$ following signals $(\phi^{t-1},\phi)$ and announcements $(h^t,l)$ to $\psi(\nu^l(\phi))$. All incentive constraints in the relaxed problem for announcements in $t' \geq t+2$ are unaffected. All incentive constraints in the relaxed problem for announcements in $t' \leq t-1$ are unaffected since (\ref{eq:Intertemporal_promise_keep}) guarantees that the continuation payoff of the consumer at the start of period $t$ is unchanged. The period $t$ incentive constraint, following $\phi^{t-1}$, is guaranteed by (\ref{eq:Intertemporal_IC_t}), while all period $t+1$ incentive constraints are guaranteed by (\ref{Intertemporal_iC_t1}). Finally, the mechanism modification only affects the firm's expected discounted profits via the expression in the objective function of problem $\mathcal{P}^I$. Optimality of $M$ implies the result. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:intertemporal_condition_general}] Using equations (\ref{eq:Intertemporal_promise_keep})-(\ref{Intertemporal_iC_t1}), we can simplify problem $\mathcal{P}^I$ to one where only flow utilities in period $t+1$ are chosen, by finding expressions for the distortions $(\Delta,(\Delta'(\phi))_{\phi \in \Phi})$ and the period $t$ flow utility $\nu$ in terms of $(\nu'(\phi))_{\phi \in \Phi}$.
By assumption, the solution of this problem is interior and hence the following local optimality conditions must be satisfied: \begin{equation} \label{Chi_FOC_1} \chi_\nu^1 + \chi_\Delta^1 \left[ 1 - \frac{\pi_{lh}}{\pi_{hh}} \ell(\phi) \right] = \chi_\nu^2(\phi) + \chi_\Delta^2(\phi), \end{equation} and \begin{equation} \label{Chi_FOC_2} \chi_\nu^1 + \chi_\Delta^1 \left[ 1 - \frac{\pi_{ll}}{\pi_{hl}} \ell(\phi) \right] = \chi_\nu^{2,l}(\phi) - \frac{\pi_{hh}}{\pi_{hl}} \chi_\Delta^2 (\phi), \end{equation} where $\chi_k^1 \equiv \chi_k(\nu_t,\Delta_t)$ and $\chi_k^2(\phi) \equiv \chi_k(\nu_{t+1}(\phi),\Delta_{t+1}(\phi))$, for $k\in \left\{ \nu,\Delta \right\}$, and $\chi_\nu^{2,l}(\phi) \equiv \chi_\nu(\nu^l_{t+1}(\phi),0)$. Multiplying (\ref{Chi_FOC_1}) by $\pi_{hh}p_h(\phi)$ and (\ref{Chi_FOC_2}) by $\pi_{hl} p_h(\phi)$, adding both equations, and finally summing across signals $\phi \in \Phi$ gives us (\ref{eq:Intertemporal_util_Chi_general}). Subtracting (\ref{Chi_FOC_1}) from (\ref{Chi_FOC_2}), multiplying the result by $p_h(\phi)$ and summing over $\phi \in \Phi$ gives us (\ref{eq:Intertemporal_delta_Chi_general}). \end{proof} \subsection{Realization-independent mechanisms} For brevity, we now define $\chi_k^t \equiv \chi_k(\nu_t,\Delta_t)$ and $\chi^{t,l}_\nu \equiv \chi_\nu(\nu^l_t,0)$ for $k\in \left\{ \nu,\Delta \right\}$ and $t=1,\dots,T$. We also introduce the notation \begin{equation*} V^d_t \equiv V_t(h^{t-1},h) - V_t(h^{t-1},l), \end{equation*} which, using the binding period $t$ incentive constraint, satisfies \begin{equation} \label{eq:Delta_cont_util} V^d_t = \Delta_t + \delta (\pi_{hh} - \pi_{lh}) V^d_{t+1}, \end{equation} with $V^d_{T+1}=0$, if $T<\infty$. \begin{lem} \label{lem:Condition_monotonicity_dynamics} For any $t=2,\dots,T$, if $V_h > V_l$ and the optimal mechanism is interior and satisfies $\chi^t_\nu > \chi_\nu^{t,l}$, then it must satisfy: \begin{equation*} \nu_{t-1} < \nu_t, \end{equation*} and \begin{equation*} \Delta_{t-1} > \Delta_t. \end{equation*} \end{lem} \begin{proof} The intertemporal optimality condition (\ref{eq:Intertemporal_util_Chi_RI}) implies that \begin{equation*} \chi^t_\nu > \chi_\nu^{t-1} > \chi_\nu^{t,l}, \end{equation*} while combining $\chi^t_\nu > \chi_\nu^{t,l}$ with optimality condition (\ref{eq:Intertemporal_delta_Chi_RI}) gives us \begin{equation*} \chi_\Delta^{t-1} < \chi_\Delta^{t}. \end{equation*} Hence, we have that \begin{align*} \chi_\nu(\nu_t, \Delta_t) > \chi_\nu(\nu_{t-1}, \Delta_{t-1}),\\ \chi_\Delta(\nu_t, \Delta_t) < \chi_\Delta(\nu_{t-1}, \Delta_{t-1}), \end{align*} which, using Lemma \ref{lemma:single-crossing_chi}, imply the result. \end{proof} \begin{lem} \label{lem:Recursive_Chi} For any $t=2,\dots,T$, if $V_h > V_l$ then an interior optimal mechanism satisfies: \\ (i) $\chi^t_\nu > \chi_\nu^{t,l}$,\\ (ii) $V^d_{t-1} > V^d_t$. \end{lem} \begin{proof} We prove this statement by induction. First, consider $t=T$. In this case we have that \begin{equation*} \Delta_T = \nu_T - \nu^l_T, \end{equation*} and, since Proposition \ref{prop:Optimal_solves_Chi} guarantees that $\Delta_T>0$, supermodularity implies (i) as: \begin{equation*} \chi_\nu(\nu_T,\Delta_T) \geq \chi_\nu(\nu_T,0) > \chi_\nu(\nu^l_T,0). \end{equation*} Lemma \ref{lem:Condition_monotonicity_dynamics} then implies that $\Delta_{T-1}> \Delta_T$. Property (ii) follows since $V^d_T=\Delta_T$ and $V^d_{T-1} = \Delta_{T-1} + \delta (\pi_{hh} - \pi_{lh})\Delta_T$. Now suppose that properties (i)-(ii) hold for all $t'=t+1,\dots,T$. We start by showing that property (i) holds.
The continuation utility of a high-type consumer satisfies \begin{equation} \label{eq:Cont_util_high} V_t(h^{t-1},h) = \nu_t + \delta \left[ \pi_{hh} V_{t+1}(h^t,h) + \pi_{hl} V_{t+1} (h^t,l) \right], \end{equation} while the continuation utility of a low-type consumer satisfies \begin{equation} \label{eq:Cont_util_low} V_t(h^{t-1},l) = \sum_{\tau=t}^T \delta^{\tau - t} \nu^l_t. \end{equation} Substituting equations (\ref{eq:Cont_util_high}) and (\ref{eq:Cont_util_low}) into (\ref{eq:Delta_cont_util}) gives us the following, after some manipulation: \begin{equation} \label{eq:Induction_manipulate} \Delta_t = (\nu_t - \nu^l_t) + \sum_{\tau>t}\delta^{\tau - t} (\nu^l_{t+1} - \nu^l_t) +\delta\pi_{lh}V_{t+1}^d. \end{equation} Suppose, by way of contradiction, that \begin{equation*} \chi^t_\nu < \chi_\nu^{t,l}. \end{equation*} But, given supermodularity of $\chi$, this inequality requires \begin{equation} \label{eq:Order_nu_contradiction} \nu^l_t > \nu_t. \end{equation} Also, from our inductive hypothesis, we know that $\chi^{t+1}_\nu > \chi^{t+1,l}_\nu$, which together with (\ref{eq:Intertemporal_util_Chi_RI}) implies that \begin{equation} \label{eq:Order_nu_contradiction2} \chi^{t,l}_\nu > \chi^{t}_\nu = \pi_{hh} \chi^{t+1}_\nu + \pi_{hl} \chi^{t+1,l}_\nu > \chi^{t+1,l}_\nu \implies \nu^l_t > \nu^l_{t+1}. \end{equation} Combining (\ref{eq:Induction_manipulate}), (\ref{eq:Order_nu_contradiction}) and (\ref{eq:Order_nu_contradiction2}), we have \begin{equation*} \Delta_t < \delta\pi_{lh} V^d_{t+1}, \end{equation*} and, using (\ref{eq:Delta_cont_util}) once again, we have that \begin{equation*} V^d_t \leq \delta \pi_{hh}V^d_{t+1}, \end{equation*} which contradicts property (ii), which holds at period $t+1$ by our inductive assumption. We now prove property (ii). Since property (i) holds for any $t' \geq t$, Lemma \ref{lem:Condition_monotonicity_dynamics} implies that $\Delta_{t'-1}>\Delta_{t'}$ for all $t \leq t' \leq T$. Now notice that \begin{equation*} V^d_t = \sum_{\tau=t}^T \left[ \delta (\pi_{hh} - \pi_{lh}) \right]^{\tau - t} \Delta_\tau, \end{equation*} and hence we have \begin{equation*} V^d_{t-1} - V^d_t = \sum_{s=0}^{T-t} \left[ \delta (\pi_{hh} - \pi_{lh}) \right]^{s} \left( \Delta_{t+s-1} - \Delta_{t+s} \right) + \left[ \delta (\pi_{hh} - \pi_{lh}) \right]^{T-t} \Delta_{T}, \end{equation*} which is strictly positive since the sequence $\left\{ \Delta_\tau \right\}_{\tau=t-1}^T$ is strictly decreasing and $\Delta_T > 0$. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:RI_monotonicity}] \label{Proof_prop_4} \\ Follows directly from Lemmas \ref{lem:Condition_monotonicity_dynamics} and \ref{lem:Recursive_Chi}. \end{proof} \section{Competitive analysis} The two firms are labeled $A$ and $B$. Let $V^E=(V^E_l, V^E_h)$ denote the equilibrium utility levels of the consumer conditional on her initial type, and $V^j=(V^j_l, V^j_h)$ denote the vector describing the utility levels obtained by the consumer with each possible initial type when accepting the equilibrium offer of firm $j = A,B$. Finally, we denote the equilibrium profit of firm $j=A,B$ as $\Pi^*_j$. \begin{lem} \label{lem:RS_unique} Any pure strategy PBE has outcome $M^{V^*}$. \end{lem} \begin{proof} The proof is divided into four parts. I) $\Pi(V^E) = 0$ and both firms make zero profits in equilibrium. Optimality of the consumer's acceptance strategy implies that, for $j=A,B$ and $i=l,h$, the following hold:\\ - $V^j_i \leq V^E_i$, \\ - $V^j_i = V^E_i$ if $j$'s offer is accepted with positive probability by type $i$.
Hence, if firm $j$'s offer is accepted by a consumer with type $i$ with positive probability, its continuation profits are at most \begin{equation*} \Pi_i(V^E_i,V^j_{i'}) \leq \Pi_i(V^E_i,V^E_{i'}), \end{equation*} as increasing the utility of type $i' \neq i$ increases the profit opportunities from the consumer with type $i$. This means that the total profits obtained by the firms are at most \begin{equation*} \sum_{i=l,h}\mu_i\Pi_i(V^E) = \Pi(V^E). \end{equation*} However, by offering mechanism $M^{V'}$ with $V' = V^E +(\epsilon, \epsilon)$, for $\epsilon>0$ sufficiently small, each firm can guarantee profits $\Pi(V')$. In equilibrium, the offer made by each firm must therefore yield profits at least as large as those from deviating to $M^{V'}$. Since the profit function $\Pi$ is continuous, we have $\lim_{\epsilon \rightarrow 0} \Pi(V^E + (\epsilon, \epsilon)) = \Pi(V^E)$. Combining the two implications, we have \begin{equation*} \Pi(V^E) \geq \Pi^*_A + \Pi^*_B \geq 2\Pi(V^E). \end{equation*} This implies that $\Pi(V^E) = 0$ and that both firms make zero profits. II) $V^E_i \geq V^{FI}_l$, for $i=l,h$. First suppose that, by way of contradiction, $V^E_l < V^{FI}_l$. In this case firm $A$ could guarantee positive profits by offering a mechanism with non-contingent constant consumption satisfying \begin{equation*} u(\bar c)\sum_{t=1}^T\delta^{t-1} = V^E_l +\epsilon, \end{equation*} for $\epsilon>0$ sufficiently small, since it makes positive profits from the $l$-type consumer and, if the $h$-type consumer were to choose this contract, it would also generate positive profits since the discounted average income of the $h$-type consumer is strictly higher than that of the $l$-type consumer. Finally, if $V^E_h < V^{FI}_l$, a similar profitable deviation exists. III) $\Pi_h(V^E) \leq 0$. Suppose, by way of contradiction, that $\Pi_h(V^E) > 0$. This implies that $V^E_h < V^{FI}_h$ and hence convexity of the feasible utility set $\mathcal{V}$ implies that, for $\epsilon >0$ sufficiently small, $(V^E_l,V^E_h + \epsilon) \in \mathcal{V}$. Also, we know that $V^E_l \geq V^{FI}_l>\sum_{t=1}^T \delta^{t-1} u(0)$ and hence incentive compatibility implies that both types' equilibrium utilities are strictly above $\sum_{t=1}^T \delta^{t-1} u(0)$. As a consequence, an incentive compatible mechanism in which the utility of both consumers is reduced exists, i.e., there exists $V' \in \mathcal{V}$ such that $V'_i < V^E_i$, for $i=l,h$. The existence of the feasible utility vectors $(V^E_l,V^E_h + \epsilon)$, for $\epsilon>0$, and $V'$, together with convexity of $\mathcal{V}$, implies that, for $\varepsilon>0$ sufficiently small, there exists a utility vector $\tilde{V} \in \mathcal{V}$ such that \begin{gather*} V^E_l - \varepsilon < \tilde{V}_l < V^E_l,\\ V^E_h + \varepsilon > \tilde{V}_h > V^E_h. \end{gather*} Considering $\varepsilon$ sufficiently small, a deviation to offer $M^{\tilde{V}}$ leads to profits approximately equal to \begin{equation*} \mu_h \Pi_h(V^E)>0, \end{equation*} which contradicts point I. IV) Points II and III imply that total profits are non-positive, conditional on any realization of the consumer's initial type. Combined with I, this implies that \begin{equation*} \Pi_i(V^E) = 0, \end{equation*} for both $i=l,h$. Using II, we must have $V^E_l = V^{FI}_l$. Finally, since $\Pi_h(V^E_l , \cdot)$ is strictly decreasing on $(V^{FI}_l,\infty) \cap \mathcal{V}$, the unique possible value of $V^E_h$ is $V^*_h$.
\end{proof} \begin{lem} \label{lem:RS_existence} A pure strategy equilibrium exists if, and only if, \begin{equation*} {u'\left[ u^{-1}\left( c^{FI}_l \right) \right]} { \frac{\partial_{+}\Pi_{h}\left(V^{*}\right)}{\partial V_{l}} } < \frac{\mu_{l}}{\mu_{h}}. \end{equation*} \end{lem} \begin{proof} (If) Consider the following strategy profile. Both firms offer mechanism $M^{V^*}$ and consumers follow optimal strategies that entail truth-telling whenever optimal and treat firms symmetrically. Given the definition of $V^*$, no firm has a profitable deviation in which a single type is attracted to its offer. This is trivially the case for type $l$. For type $h$, such a deviation would require that it offer a utility pair $V'$ with $V'_h>V^*_h$ and $V'_l > V^*_l$, which implies $\Pi_h(V')<\Pi_h(V^E) = 0$. Hence, this strategy profile constitutes a pure strategy equilibrium if, and only if, no alternative mechanism can attract both types and generate strictly positive expected profits. But given concavity of $\Pi(\cdot)$, we can find $V' \geq V^*$ such that \begin{equation*} \Pi(V') > \Pi(V^*) \end{equation*} if, and only if, we can find a pair $(d_h,d_l)\geq 0$ such that \begin{equation*} \frac{\partial \Pi}{\partial V_l} (V^*) d_l + \frac{\partial_+ \Pi}{\partial V_h} (V^*) d_h >0, \end{equation*} where we have used the fact that $\Pi(\cdot )$ is differentiable in $V_l$ whenever $V_l<V_h$. Since $\frac{\partial_+ \Pi}{\partial V_h} (V^*) < 0$, such a pair exists if, and only if, $\frac{\partial \Pi}{\partial V_l} (V^*)>0$. The condition $\frac{\partial \Pi}{\partial V_l} (V^*)>0$ fails under the inequality in the lemma since, for $V$ with $V_l<V_h$, \begin{equation*} \Pi(V) = \mu_l \Pi^{FI}(V_l) + \mu_h \Pi_h(V_l,V_h). \end{equation*} (Only if) If the condition in the Lemma fails, we can find $V' \geq V^*$ such that $\Pi(V') > \Pi(V^*)$. Now consider any pure strategy equilibrium with outcome $M^{V^*}$. The offer $M^{V'}$ is then a profitable deviation for either firm. \end{proof} \end{document}
\begin{document} \title{\textbf{Quantum Communication Complexity}} \thispagestyle{empty} \begin{abstract} Can quantum communication be more efficient than its classical counterpart? Holevo's theorem rules out the possibility of communicating more than $n$ bits of classical information by the transmission of $n$ quantum bits---unless the two parties are entangled, in which case twice as many classical bits can be communicated but no more. In~apparent contradiction, there are distributed computational tasks for which quantum communication cannot be simulated efficiently by classical means. In~extreme cases, the effect of transmitting quantum bits cannot be achieved classically short of transmitting an exponentially larger number of bits. In a similar vein, can entanglement be used to save on classical communication? It~is well known that entanglement on its own is useless for the transmission of information. Yet, there are distributed tasks that cannot be accomplished at all in a classical world when communication is not allowed, but that become possible if the non-communicating parties share prior entanglement. This leads to the question of how expensive it is, in terms of classical communication, to provide an exact simulation of the spooky power of entanglement. \end{abstract} \section{Introduction} It is well known that the use of quantum information allows for tasks that would be provably impossible in a classical world, such as the transmission of unconditionally confidential information between parties that share only a short secret key~\cite{BB84,sciam}. Apart from this \emph{quantum cryptography}, are there advantages to be gained in setting up an infrastructure that would facilitate the transmission of quantum information? In~particular, are there advantages to be gained in terms of communication \emph{efficiency}? A~different but related question concerns quantum entanglement: can entangled parties make better use of a classical communication channel than their non-entangled counterparts? Better still: can entangled parties benefit from their entanglement if they are not allowed any form of direct communication? There are good reasons to believe at first that the answer to all the above questions is negative. In~particular, Holevo's theorem~\cite{holevo} states that no more than $n$ bits of expected classical information can be communicated between unentangled parties by the transmission of $n$ quantum bits---henceforth called \emph{qubits}---regardless of the coding scheme that could be used. If~the communicating parties share prior entanglement, twice as much classical information can be transmitted~\cite{densecoding}, but no more. This applies even if the communication is not restricted to be \mbox{one-way}~\cite{CvDNT}. It~is thus reasonable to expect that no significant savings in communication can be achieved by the transmission of qubits, and possibly no savings at all if the communicating parties do not share prior entanglement. As~for the last question, it is well known that entanglement alone cannot be used to signal information---otherwise faster-than-light communication would be possible and causality would be violated---and thus it would seem that entanglement is useless if it is not supplemented by direct forms of communication. Here~we survey striking results to the effect that all the intuition in this paragraph is wrong. 
After a review of classical communication complexity in Section~\ref{class}, we consider in Section~\ref{alayao} the situation in which quantum communication is allowed. In~Section~\ref{alaCB}, we revert to classical communication but allow unlimited prior entanglement between the communicating parties. Section~\ref{spooky} investigates in more detail the power of prior entanglement when no direct communication is allowed to take place, which we call spooky communication complexity. In~Section~\ref{simul}, we determine how expensive it is to simulate the effect of entanglement in a purely classical world. Finally, we conclude with open problems in Section~\ref{concl}. Although we do not cover the important topic of lower bounds for quantum communication complexity, we encourage the reader to consult~\mbox{\cite{kremer,CvDNT}} for early results and~\cite{BdW} for powerful new techniques. \section{Classical Communication Complexity}\label{class} Let~\mbox{Alice} and Bob be two distant parties who wish to collaborate on a common task that depends on distributed inputs. More precisely, let $X$, $Y$ and $Z$ be sets and consider a function \mbox{$f:X \times Y \rightarrow Z$}. Assume Alice and Bob are given some \mbox{$x \in X$} and \mbox{$y \in Y$}, respectively, and their goal is to compute \mbox{$z=f(x,y)$}. Sometimes, we add a \emph{promise} \mbox{$P(x,y)$} for some Boolean function~$P$, in which case Alice and Bob are required to compute the correct answer \mbox{$f(x,y)$} only when~\mbox{$P(x,y)$} holds. Whether or not there is a promise, the obvious recipe is for Alice to communicate $x$ to Bob, which allows him to compute~$z$. Once he has obtained it, Bob can communicate $z$ back to Alice if both parties need to know the answer. If~we are concerned with the amount of communication required to achieve this task---paying no attention to the computing effort involved in the process---could there be more efficient solutions for some functions~$f$\,? For~\emph{all} functions? The answer is obviously positive for the first question. For~example, if \mbox{$X=Y=\{0,1\}^n$} for some integer~$n$, \mbox{$Z=\{0,1\}$} and \mbox{$f(x,y)$} is defined to be 0 if and only if $x$ and $y$ have the same Hamming weight (the same number of~1s), it suffices for Alice to communicate the Hamming weight of $x$ to Bob for him to verify the condition. Thus, about $\lg n$ bits\,\footnote{\,The symbol ``$\lg$''~is used to denote the base-two logarithm.} of communication are sufficient for this task, which is much more economical than if Alice had transmitted all $n$ bits of her input to Bob. The~\mbox{answer} to the second question, however, is negative: There are functions for which the obvious solution \emph{is} optimal. For~instance, $n$ bits of communication are necessary and sufficient in the worst case for Bob to decide whether or not Alice's input is the same as~his. This unsurprising statement is not easy to prove, but a host of techniques have been developed to handle that kind of question. See~\cite{KN97} for a survey. A more interesting scenario takes place when we do not insist that the correct \mbox{answer} be obtained with certainty. If~we accept a small \mbox{error} probability, we can do significantly better on the above-mentioned equality-testing problem. Let~\mbox{$\varepsilon>0$} be the tolerated error probability and let $p$ be the smallest prime number larger than~$n/\varepsilon$. Let $\mathbb F$ be the finite field with $p$ elements. 
Upon receiving their inputs $x$ and~$y$, \mbox{Alice} forms polynomial \mbox{$P(z)=x_1+x_2 z+x_3 z^2+\cdots+x_n z^{n-1}$} over $\mathbb F$ and Bob forms \mbox{$Q(z)=y_1+y_2 z+y_3 z^2+\cdots+y_n z^{n-1}$}. Then, Alice chooses a random element \mbox{$w \in \mathbb F$}. She~computes $v=P(w)$ and transmits both $w$ and $v$ to~Bob, who computes $Q(w)$ and compares the answer with~$v$. If~\mbox{$Q(w) \neq v$}, it has been established that \mbox{$x \neq y$}. \mbox{Otherwise}, Bob can claim with confidence that \mbox{$x=y$} because two distinct polynomials of degree smaller than~$n$ cannot agree on more than~$n$ distinct points and therefore the proportion of points in~$\mathbb F$ on which $P(z)$ and $Q(z)$ agree must be less than \mbox{$n/\#{\mathbb F} = n/p < \varepsilon$}. Note that \mbox{$p \leq 2 n/\varepsilon$} since there is always a prime number between any number and its double, and hence each of $w$ and $v$ can be written with no more than \mbox{$2+\lg n + \lg \oneovereps$} bits. The~communication complexity of this protocol is therefore at most twice this many bits, which is much less than $n$ for any fixed error probability $\varepsilon$ when $n$ is large enough. In the classical model of communication complexity, it is often allowed for Alice and Bob to \emph{share random variables} even though one may argue that this does not make much sense from a mathematical point of view. In~this scenario, we assume that Alice draws a random bit string (or integer) according to some specific distribution---or~sometimes even a random real number---and she tells Bob the outcome of this draw in an \emph{initialization phase}. This communication is not accounted for in the complexity of the protocol because it takes place \emph{before} Alice and Bob are given their respective inputs. When correctness of the protocol is analysed for a given input, probabilities are taken over the possible choices of that random variable, as if it had been chosen \emph{after} the inputs were determined\, \footnote{\,Note that drawing shared random bits once Alice and Bob are separated makes perfect sense in a quantum world if they share entanglement: they simply have to measure corresponding qubits from \mbox{$\ket{\Phi^{+}}=\oosrt\ket{00}+\oosrt\ket{11}$} pairs in the computational basis.}. In~this model, the equality-testing problem can be solved with error probability $\varepsilon$ with only \mbox{$m=\lceil \, \lg 1/\varepsilon \rceil$} bits of communication, regardless of the value of~$n$. In~the initialization phase, Alice and Bob share $m$ random bit strings \mbox{$a_1$, $a_2$, \ldots, $a_m$}\,, each of length~$n$. Once they receive their inputs $x$ and~$y$, Alice computes \mbox{$b_i=x \cdot a_i$} for each~$i$, where $x \cdot a$ is the \emph{inner product}\, \footnote{\,To compute the inner product between two bit strings of equal length, line them up one under the other and count the number of positions in which they both have a~1. If~this number is even, the inner product is~0; otherwise it is~1. Mathematically, it is the exclusive~or (sum modulo~2) of the bitwise \textsc{and} of the two strings.} between bit strings $x$ and~$a$. Alice transmits \mbox{$b_1$, $b_2$, \ldots, $b_m$} to Bob, who verifies whether or not \mbox{$b_i=y \cdot a_i$} for each~$i$. If~not, it has been established that \mbox{$x \neq y$}. Otherwise, Bob can claim with confidence that \mbox{$x=y$} because the probability of error of this strategy is \mbox{$2^{-m} \leq \varepsilon$} since it is~$\pbhalf$ independently for each~$i$. 
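To make the first of these randomized protocols concrete, the following Python sketch (our own illustration, not part of the original presentation; the parameters $n$ and $\varepsilon$ are chosen arbitrarily) implements the polynomial fingerprinting protocol over~$\mathbb F_p$: Alice sends the pair $(w,P(w))$ and Bob accepts the claim \mbox{$x=y$} if and only if \mbox{$Q(w)=P(w)$}.
\begin{verbatim}
import random

def next_prime(m):
    # Smallest prime larger than m (trial division; fine for small parameters).
    def is_prime(q):
        if q < 2:
            return False
        d = 2
        while d * d <= q:
            if q % d == 0:
                return False
            d += 1
        return True
    p = m + 1
    while not is_prime(p):
        p += 1
    return p

def alice_message(x, p, rng):
    # Alice evaluates P(w) = x_1 + x_2*w + ... + x_n*w^(n-1) over F_p at a random w.
    w = rng.randrange(p)
    return w, sum(xi * pow(w, i, p) for i, xi in enumerate(x)) % p

def bob_accepts(y, p, w, v):
    # Bob accepts the claim "x = y" if and only if Q(w) = v.
    return sum(yi * pow(w, i, p) for i, yi in enumerate(y)) % p == v

rng = random.Random(0)
n, eps = 64, 0.01
p = next_prime(int(n / eps))          # p <= 2n/eps by Bertrand's postulate
x = [rng.randrange(2) for _ in range(n)]
y = list(x); y[17] ^= 1               # y differs from x in a single position

print(bob_accepts(x, p, *alice_message(x, p, rng)))  # True: equal inputs never rejected
trials = 10_000
false_accepts = sum(bob_accepts(y, p, *alice_message(x, p, rng)) for _ in range(trials))
print(false_accepts / trials, "<", eps)               # empirical error rate, below eps
\end{verbatim}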
\section{Quantum Communication Complexity}\label{alayao} The topic of classical communication complexity was introduced and first studied by Andrew Yao in 1979~\cite{yaoc}. Almost 15 years elapsed before the same pioneer thought of asking how the situation might change if Alice and Bob were allowed to exchange quantum rather than classical bits~\cite{yaoq}. It~seems at first that no savings in communication are to be expected at all because of Holevo's theorem~\cite{holevo}, which states that no more than $n$ bits of expected classical information can be communicated between unentangled parties by the transmission of $n$ qubits. (It~was implicit in Yao's original model that Alice and Bob were not allowed to share prior entanglement in the initialization phase.) The first hint that quantum communication could be more efficient than classical communication was given in August~1997 by Richard Cleve, Wim van Dam, Michael Nielsen and Alain Tapp~\cite{CvDNT} in a probabilistic setting\, \footnote{\,For~the sake of historical completeness, it is easy to modify a protocol given three months earlier by Buhrman, Cleve and van~Dam~\cite{BCvD} (original \texttt{quant-ph} version) to achieve a similar goal, and indeed this is done in the final version of that paper, which is to appear in \textit{SIAM Journal on Computing}.}. Alice and Bob are given two-bit vectors $x_1 x_2$ and $y_1 y_2$, respectively. They must both decide whether \mbox{$x_1 y_1 + x_2 y_2$} is even or odd, and they are restricted to two bits of communication. Shared random variables are allowed. It~is proven in~\cite{CvDNT} that no classical protocol for this task can give the correct answer with a probability better than~\pbfrac{7}{9}. Yet, if two quantum bits of communication are allowed, instead of two classical bits, the correct answer can be obtained with an improved probability of~\pbfrac{4}{5}. A more convincing case for the superiority of quantum communication came the following year when Harry Buhrman, Richard Cleve and Avi Wigderson~\cite{BCW} proved that quantum communication can be \emph{exponentially} better than classical communication in the error-free model, provided the inputs respect a given promise, and almost quadratically better in the bounded-error promise-free model. Subsequently, Ran Raz proved that an exponential advantage exists for quantum communication even in the bounded-error promise-problem model~\cite{raz}. The first exponential separation~\cite{BCW} was inspired by the \mbox{famous} Deutsch--Jozsa \mbox{problem}~\cite{DJ}, which was used in 1992 to show for the first time that quantum computers could be exponentially faster than classical computers in an oracle-based~\cite{oracle} error-free setting. More specifically, Buhrman, Cleve and Wigderson defined the \mbox{following} \mbox{scenario}, where $\Delta(x,y)$ denotes the \emph{Hamming distance} between bit strings $x$ and~$y$, which is the number of bit \mbox{positions} on which they differ. Let~$k$ be an integer, \mbox{$n=2^k$}, \mbox{$X=Y=\{0,1\}^n$} and \mbox{$Z=\{0,1\}$}. Function \mbox{$f:X \times Y \rightarrow Z$} is the equality function: \mbox{$f(x,y)=1$} if and only if \mbox{$x=y$}. We~have seen already that this communication complexity problem \mbox{requires} $n$ bits of classical communication if errors are not tolerated. Now, we introduce the promise $P(x,y)$, which is defined to be \textsf{true} if and only if \mbox{$\Delta(x,y) \in \{0,n/2\}$}. 
In~other words, $P(x,y)$ holds if and only if either \mbox{$x=y$} or $x$ and $y$ differ on exactly half their positions. It~is proven in~\cite{BCW} that the error-free equality-testing problem requires at least $cn$ classical bits of communication, for some real \mbox{positive} constant~$c$ and all sufficiently large~$n$, even when the correct answer is required only when the promise holds. (The~hard part of that proof is taken from~\cite{FR}.) Even though quantum communication cannot be significantly more efficient than classical communication for the straight equality-testing problem~\cite{BdW}, it is shown in~\cite{BCW} that it can be solved with certainty using as few as $k$ quantum bits of communication whenever the promise holds, which is exponentially better than the $cn$ bits that would be required in a classical scenario. A~quantum protocol for this problem is easily derived from the first example of ``spooky communication'' given in Section~\ref{spooky}. This exponential ``superiority'' of quantum over classical communication is not \mbox{entirely} convincing because it vanishes as soon as we tolerate an arbitrarily small probability of error. Indeed, we have seen that the equality-testing problem can be solved with a \emph{constant} number of bits of classical communication, for any fixed error probability, when shared random variables are allowed\,\footnote{\,Even if we do not allow shared random variables, the problem can be solved with \mbox{$2k+c$} bits of classical communication, where \mbox{$k=\lg n$} and $c$ is a constant that depends only on the error probability.}. The other problem featured in~\cite{BCW} is more interesting, even though the quantum superiority is merely \mbox{almost} quadratic, because it applies in the more realistic bounded-error model. Consider the following scenario. Alice and Bob are very busy and they would like to find a time when they are simultaneously free for lunch. They each have an engagement calendar, which we think of as an \mbox{$n$--bit} string $x$ (resp~$y$), where \mbox{$x_i=1$} (resp.~\mbox{$y_i=1$}) means that Alice (resp.~Bob) is free for lunch on day~$i$. Mathematically, they want to find an index~$i$ such that \mbox{$x_i=y_i=1$} or establish that such an index does not exist. Balasubramanian Kalyanasundaram and Georg Schnitger~\cite{KS} proved in 1987 that this task requires at least $cn$ classical bits of expected communication in the worst case, for some real positive constant~$c$ and all sufficiently large~$n$, even when the answer is only required to be correct with probability at least~\pbfrac{2}{3}. Intuitively, this means that lunch cannot be scheduled short of exchanging a constant fraction of the appointment calendar. In~sharp contrast, it is shown in~\cite{BCW} that this problem can be solved with the exchange of at most \mbox{$d \sqrt{n} \lg{n}$} quantum bits for some constant $d$ and all sufficiently large~$n$. This is accomplished by implementing a distributed version of Grover's quantum search algorithm~\cite{grover} in which we search for a~$1$ in the bitwise \textsc{and} of $x$ and~$y$. A~\mbox{$\lceil\,\lg n\rceil$--qubit} quantum register is shuttled back and forth between Alice and Bob for each of the approximately $\sqrt{n}$ iterations of Grover's algorithm. \section{Substituting Entanglement for Communication}\label{alaCB} A slightly different model was introduced by Richard Cleve and Harry Buhrman~\cite{CB}. \mbox{Assume} Alice and Bob are restricted to communicating classical information. 
Are~there tasks for which they could save on the required amount of communication if they share prior entanglement? Again, it is tempting to think that this is not possible because entanglement cannot be used to increase the capacity of a classical channel: Alice cannot communicate more than $n$ expected bits of classical information to Bob if less than $n$ bits are actually transmitted between them---even if they are allowed two-way communication and unlimited use of entanglement. And again, this intuition is wrong. In their original paper~\cite{CB}, Cleve and Buhrman were able to show that entanglement can be used to save one bit of classical communication, but only in a three-party scenario. Still, this was the very first example of a distributed task that could be solved more efficiently (in~terms of communication) in our quantum world than would be possible in a sad classical world, because it predates~\cite{CvDNT} by four months. Subsequently, Buhrman, Cleve and van~Dam~\cite{BCvD} found a \emph{two-party} distributed problem that can be solved with a probability of success exceeding $85\%$ if prior shared entanglement is available, whereas the probability of success in a classical world could not exceed $75\%$ with the same amount of communication, even if shared random variables are allowed. The first problem for which communication complexity could be reduced by more than a constant additive amount was also discovered in this shared-entanglement \mbox{scenario}, rather than in Yao's original qubit-transmission scenario described in the previous section: Harry Buhrman, Wim van Dam, Peter H{\o}yer and Alain Tapp~\cite{BvDHT} gave a \mbox{$k$--party} distributed task that requires roughly \mbox{$k \lg k$} bits of communication in a classical world, yet it can be carried out with exactly $k$ bits of classical communication if the parties are allowed to share prior entanglement. The exponential and almost-quadratic improvements mentioned in the \mbox{previous} section~\cite{BCW,raz} apply just as well in the shared-entanglement scenario. This is obvious since the effect of any protocol that requires the communication of $\ell$ quantum bits can be achieved by the transmission of~$2\ell$ classical bits---through \emph{quantum teleportation}~\cite{teleport}---provided $\ell$ bits of shared entanglement are available. There is yet another natural communication complexity scenario, in which the \mbox{parties} are allowed to share prior entanglement \emph{and} to communicate quantum bits. For~the sake of brevity, we shall not elaborate on this approach here. \section{Spooky Communication Complexity}\label{spooky} An even more intriguing question is to determine if entanglement can be used \emph{instead~of} communication. Are there tasks that would be impossible to achieve in a classical world if Alice and Bob were not allowed to communicate, yet those tasks can be performed without \emph{any} form of communication provided the participants share prior entanglement? In~the words of Alain Tapp, this would provide a form of \emph{pseudo telepathy} because it would give the \emph{illusion} of communication between Alice and Bob when in fact no such communication takes place. And indeed there would be no communication because entanglement cannot be used to signal information: nothing Alice can do locally on her quantum system can cause a measurable change in Bob's, no matter how they are entangled. 
A moment's thought suffices to realize that pseudo telepathy is possible if we are content with probabilistic tasks. Define the \emph{EPR~task} as follows. Once separated, Alice and Bob are each given a real number, $x$ and~$y$ respectively, between 0 and~$\pi$. They are each to produce a single bit: $a$~for Alice and $b$ for Bob. Alice's output must be equally likely to be 0 or~1, and so must Bob's output, but the required correlation is that \mbox{$a=b$} with probability \mbox{$\cos^2(x-y)$}. It~is precisely the essence of Bell's theorem~\cite{bell} that such correlations cannot be established in a classical world if communication between Alice and Bob is not allowed, even if the inputs are restricted to binary choices \mbox{$x \in \{0,\pi/6\}$} and \mbox{$y \in \{0,5\pi/6\}$}. Yet, it is easy for participants who share a \mbox{$\ket{\Phi^{+}}=\oosrt\ket{00}+\oosrt\ket{11}$} state to realize this task. If~we think of this state as a pair of entangled polarized photons, it suffices for Alice and Bob to measure their photons at polarization angles~$x$ and~$y$, respectively, and the outcomes of the measurements provide the required outputs $a$ and~$b$. But is pseudo telepathy possible when there is a deterministic criterion to decide whether the goal has been achieved and when errors are not tolerated? This brings us to our last form of communication complexity, which we call \emph{spooky communication \mbox{complexity}}. As~usual, let $X$, $Y$ and $Z$ be sets and consider a function \mbox{$f:X \times Y \rightarrow Z$} such that it is not possible to compute the value of $f(x,y)$ with certainty from knowledge of either $x$ or~$y$ alone. It~follows that if Alice and Bob are given \mbox{$x \in X$} and \mbox{$y \in Y$}, respectively, they cannot compute $f(x,y)$ without communication. Can such a function $f$ exist so that Alice and Bob---or~at least one of them---could compute $f(x,y)$ nevertheless \mbox{provided} the participants share prior entanglement? Of~course not, since this would allow for faster-than-light communication! Thus, we have to define spooky communication \mbox{complexity} in a more subtle way, through a \emph{relation} rather than a function. Let $X$, $Y$, $A$ and $B$ be sets and consider a relation \mbox{$R \subseteq X \times Y \times A \times B$}. In~an \emph{initialization phase}, Alice and Bob are allowed to discuss strategy and share random variables. They are also allowed to share entanglement. After Alice and Bob are physically separated, they are given \mbox{$x \in X$} and \mbox{$y \in Y$}, respectively. Without being allowed any form of communication, their goal is to produce \mbox{$a \in A$} and \mbox{$b \in B$}, respectively, such that \mbox{$(x,y,a,b) \in R$}. We~say that \emph{spooky communication}, or~\emph{pseudo telepathy}, takes place if this task could not be fulfilled with certainty in a classical world, whereas it can, provided Alice and Bob share prior entanglement. The~spooky communication \emph{complexity} is measured by the number of bits of entanglement that are required to succeed in the worst case. The~\emph{spooky advantage} is defined as the function that relates the spooky complexity to the number of classical bits of communication that would be needed in the worst case to succeed in the classical setting. The first example of spooky communication was provided by Gilles Brassard, Richard Cleve and Alain Tapp~\cite{BCT} as yet another variation on the Deutsch--Jozsa problem~\cite{DJ}. 
Let~$k$ be an integer, \mbox{$n=2^k$}, \mbox{$X=Y=\{0,1\}^n$} and \mbox{$A=B=\{0,1\}^k$}. The~\emph{Deutsch--Jozsa \mbox{relation}} $R$ is defined as follows, where $\Delta(x,y)$ denotes again the Hamming distance between $x$ and~$y$. \[ (x,y,a,b) \in R ~~\Longleftrightarrow~~ \left\{ \begin{array}{l} x=y \mbox{ and } a=b, \mbox{ or} \\[1ex] \Delta(x,y)=n/2 \mbox{ and } a \neq b, \mbox{ or} \\[1ex] \Delta(x,y) \not\in \{0,n/2\} \end{array} \right. \] In other words, Alice and Bob are promised that either their inputs are the same, or that they differ on exactly half the bits. They must produce identical outputs if and only if their inputs are the same. But if the promise is not fulfilled, there are no conditions on what Alice and Bob produce. The challenge comes from the fact that the outputs $a$ and $b$ must be exponentially shorter than the inputs $x$ and~$y$. It is not immediate that the Deutsch--Jozsa relation requires communication to be established in the classical setting. After all, it \emph{can} be established when \mbox{$n=2$} (easily) and~\mbox{$n=4$} (think about it!). But~it is proven in~\cite{BCT} that there exists a positive constant $c$ such that the Deutsch--Jozsa relation cannot be \mbox{established} classically with fewer than $cn$ bits of communication provided $n$ is sufficiently large, based on the similar lower bound from~\cite{BCW} that we had mentioned in Section~\ref{alayao}. On~the other hand, we show below that the Deutsch--Jozsa relation \emph{can} be established in the spooky setting with as few as $k$ bits of entanglement. It~follows that the spooky advantage of this problem is exponential because $n$ is exponential in~$k$. To establish the Deutsch--Jozsa relation, Alice creates a \mbox{$2k$-qubit} register in state \[ \sum_{z \in \{0,1\}^k} \, 2^{-k/2} \, \ket{z,z} \, , \] which is the same as $k$ pairs in state \ket{\Phi^{+}} up to ordering of the qubits. She keeps the first $k$ qubits of that register and gives the other $k$ qubits to Bob. After Alice and Bob are separated, they receive their respective inputs $x$ and~$y$. To~each integer~$i$, \mbox{$1 \le i \le n$}, associate the bit string \mbox{$z_i \in \{0,1\}^k$} that represents number \mbox{$i-1$} in binary. Now, Alice applies to her register the unitary transformation that maps \ket{z_i} to \mbox{$(-1)^{x_i} \, \ket{z_i}$} for each~$i$, and Bob does the same to his register, but with $y_i$ instead of~$x_i$. This produces joint state \looseness=-1 \[ \sum_{i=1}^{n} \, 2^{-k/2} \, (-1)^{x_i} (-1)^{y_i} \ket{z_i,z_i} ~=~ \sum_{i=1}^{n} \, 2^{-k/2} \, (-1)^{x_i \oplus y_i} \ket{z_i,z_i} \, . \] Next, Alice applies the Walsh--Hadamard transform, which sends \ket{0} to \mbox{$\oosrt \ket{0} + \oosrt \ket{1}$} and \ket{1} to \mbox{$\oosrt \ket{0} - \oosrt \ket{1}$}, to each of the $k$ qubits of her register, and Bob does the same on his register. Finally, Alice and Bob measure their registers in the computational basis. The~resulting classical strings, $a$ and~$b$, are their final output. It~is proven in~\cite{BCT} that this process accomplishes the required job. \section{Classical Simulation of Entanglement}\label{simul} Once we have established that pseudo telepathy is possible, the next natural question is to determine how much classical communication is necessary and sufficient to simulate the effect of $k$ bits of entanglement. 
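Before addressing this question, we note that the protocol of the previous section is easily verified numerically for small~$k$. The following \textsc{NumPy} sketch (our own illustration, not taken from~\cite{BCT}) represents the joint state of the two $k$-qubit registers as a $2^k\times 2^k$ array of real amplitudes, applies the local phases and the Walsh--Hadamard transforms, and checks that the measured outputs establish the Deutsch--Jozsa relation whenever the promise holds.
\begin{verbatim}
import numpy as np

def hadamard(k):
    # k-fold tensor power of the 2x2 Hadamard gate
    H1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    H = np.array([[1.0]])
    for _ in range(k):
        H = np.kron(H, H1)
    return H

def outcome_probs(x, y, k):
    # Joint amplitudes M[z, z'] of Alice's and Bob's k-qubit registers,
    # starting from sum_z 2^{-k/2} |z, z> with the local phases applied.
    n = 1 << k
    M = np.zeros((n, n))
    for i in range(n):
        M[i, i] = (-1.0) ** (x[i] ^ y[i]) / np.sqrt(n)
    H = hadamard(k)
    M = H @ M @ H            # Walsh-Hadamard on both registers (H is symmetric)
    return M ** 2            # P[a, b] = probability that Alice gets a and Bob gets b

k, n = 3, 8
rng = np.random.default_rng(0)
x = rng.integers(0, 2, n)

P = outcome_probs(x, x, k)                     # identical inputs
assert np.isclose(np.trace(P), 1.0)            # => a = b with certainty

y = x.copy()
y[rng.choice(n, n // 2, replace=False)] ^= 1   # inputs at Hamming distance n/2
P = outcome_probs(x, y, k)
assert np.isclose(np.trace(P), 0.0)            # => a != b with certainty
print("Deutsch-Jozsa relation established without communication.")
\end{verbatim}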
It~follows from the previous section (and Bell's theorem~\cite{bell} for small values of~$k$) that at least $c2^k$ bits are required, for some constant \mbox{$c>0$} and all \mbox{$k \ge 1$}. But are that many bits of classical communication \emph{sufficient} to simulate \emph{everything} that can be accomplished with $k$ bits of entanglement? In fact, can the effect of entanglement be simulated \emph{at all} with a finite amount of classical communication? As~the simplest possible example, can the EPR task, as defined in Section~\ref{spooky}, be simulated by classical communication? In~particular, we must have \mbox{$a=b$} if \mbox{$x=y$} and \mbox{$a \neq b$} if \mbox{$|x-y|=\pi/2$}. Surely, it is not possible for Alice to communicate her input $x$ to Bob, for this would require an infinite amount of communication. Yet, it is shown in~\cite{BCT} that \emph{four} bits of classical communication are sufficient in the worst case for an exact simulation of the EPR task, provided Alice and Bob are allowed to share a continuous real random variable in the initialization phase---admittedly an unreasonable proposition. The essence of the idea is best explained if we further restrict the inputs $x$ and $y$ to be between $0$ and~$1$, rather than between $0$ and~$\pi$. In~this case, \emph{a~single bit} of classical communication suffices to simulate the EPR task exactly. This restriction is somewhat bizarre since it translates to requiring the angles to be between $0$ and approximately $57.3$ degrees. Without this restriction, a rather painful piecewise construction has to be implemented, as explained in~\cite{BCT}, and we need four bits of classical communication to take care of the various possible cases. In the initialization phase, Alice and Bob share a Boolean variable~$c$, which is equally likely to be 0 or~1, and a continuous real variable $r$ chosen uniformly in the interval~\mbox{$(0,1)$}. After they are separated, they receive their angles $x$ and $y$, respectively. Alice outputs \mbox{$a=c$}, a random bit as required. Then, she tells Bob whether~\mbox{$r<x$} with a single bit of classical communication. This allows Bob to determine whether or not $r$ lies in between $x$ and~$y$. In~case it does not, Bob outputs the same bit \mbox{$b=c$} as Alice. Note in particular that if $x=y$, then we get $a=b$ with certainty, as required. On~the other hand, if $r$ does lie between $x$ and $y$, then Bob outputs \mbox{$b=1-c$}, a bit complementary to Alice's, with probability \mbox{$\sin(2|y-r|)$}; otherwise he outputs \mbox{$b=c$} just like Alice. The probability that $a=b$ is calculated as an integral over the various possibilities for~$r$, as if it had been chosen after $x$ and $y$ are fixed. For~simplicity, assume that \mbox{$x \le y$}. The probability that \mbox{$a=b$} is~1 if \mbox{$0 \le r \le x$} or \mbox{$y \le r \le 1$} since, in that case, $r$~does not lie between $x$ and~$y$. Otherwise, if \mbox{$x<r<y$}, the probability that \mbox{$a=b$} is \mbox{$1-\sin (2(y-r))$}. Therefore, the global probability that \mbox{$a=b$} is \[ \int_{r=0}^{x} 1\,\mbox{\rm d} r + \int_{r=x}^{y} [ 1-\sin(2(y-r)) ] \,\mbox{\rm d} r + \int_{r=y}^{1} 1\,\mbox{\rm d} r \] \[ = \half + \half \cos(2(y-x)) = \cos^2(x-y) \, ,\] as required. It~is tempting to ``improve'' on this approach and make it work for all angles between $0$ and $\pi$, simply with an appropriate change in the probability function \mbox{$\sin (2|y-r|)$} that determines whether or not Bob will output the same bit as Alice when $r$ lies between $x$ and~$y$. 
Unfortunately, any such attempt will result in ``probabilities'' that are either negative or greater than~1\,! It is shown in~\cite{BCT} how to simulate an arbitrary von Neumann measurement with only \emph{eight} classical bits of communication in the worst case, but it is left as an open question to determine whether or not the effect of an arbitrary positive-operator-valued measurement (\textsc{povm}) can be simulated with a bounded amount of classical communication in the worst case. A different approach to the classical simulation of entanglement was taken independently by Michael Steiner~\cite{steiner}, who showed that the EPR task can be simulated exactly with significantly fewer \emph{expected} bits of classical communication, provided we accept that there be no upper limit on the required amount of communication in the case of bad luck. Steiner's technique was subsequently refined by Nicolas Cerf, Nicolas Gisin and Serge Massar~\cite{CGM}, who showed that as few as 1.19 expected bits of classical communication suffice to simulate exactly an arbitrary von Neumann measurement. Even an arbitrary \textsc{povm} can be simulated by this technique, at the expected cost of 6.38 bits of classical communication. Building on~\mbox{\cite{steiner,CGM}}, Serge Massar, Dave Bacon, Nicolas Cerf and Richard Cleve~\cite{MBCC} discovered that the exact classical simulation of quantum entanglement can be achieved without any need for shared random variables, provided we are satisfied with an \emph{expected} bounded amount of classical communication. In~particular, they show how to simulate the effect of an arbitrary \textsc{povm} on one bit of entanglement with less than 20 bits of expected classical communication. Conversely, they also show that the exact simulation of quantum entanglement with a worst-case bounded amount of classical communication (as in the scenario of~\cite{BCT} described earlier in this section) is \emph{not} possible without an infinite amount of shared randomness. Finally, the question asked at the beginning of this section is almost resolved in~\cite{BCT}. It~is still unknown whether there exists a constant~$c$ such that $c2^k$ bits of classical communication are sufficient to simulate exactly the effect of $k$ bits of entanglement for all values of~$k$. However, it is shown in~\cite{BCT} that as few as \mbox{$(3k+6)2^k$} expected bits of classical communication suffice to simulate the outcome of any \textsc{povm} that Alice and Bob could perform on their respective shares of $k$ bits of entanglement. This simulation protocol does not require Alice and Bob to share random variables in the initialization phase. No~similar results are known for worst-case bounded communication even if we allow the sharing of continuous random variables. \section{Conclusions and Open Problems}\label{concl} We have seen a variety of scenarios according to which quantum mechanics allows for a significant improvement in the efficiency of communication, compared to what would be possible in a classical world. This is surprising because the transmission of $n$ quantum bits cannot serve to communicate more than $n$ classical bits of information, and because quantum entanglement on its own cannot be used to communicate at all. Perhaps the most interesting aspect of quantum communication complexity is that the advantage provided by quantum mechanics has been established rigorously. 
This is in sharp contrast with the field of quantum \emph{computing}, in which it is merely \emph{believed} that quantum mechanics allows for an exponential speedup in some computational tasks, such as the factorization of large numbers~\cite{shor}. Indeed, it has not yet been ruled out that there might exist an efficient factorization algorithm for the classical computer. Several interesting questions are still open. The exponential advantage of quantum communication over classical communication has been established only in the case of promise problems, in both the error-free~\cite{BCW} and bounded-error~\cite{raz} scenarios. Could there be a (total) function \mbox{$f:X \times Y \rightarrow Z$}, where \mbox{$X=Y=\{0,1\}^n$}, for which the distributed computation of $f$ would be exponentially more efficient with quantum communication compared to classical communication? We have seen at the end of Section~\ref{alaCB} that the amount of classical communication required for the accomplishment of a distributed task in the presence of unlimited entanglement cannot be more than twice the amount of quantum communication that would suffice for the same task, because quantum teleportation can be used to transmit quantum bits through a classical channel. How about the other direction? Could there be a task that can be accomplished with a small amount of classical communication in the presence of unlimited entanglement, but that would require a much larger amount of quantum communication if prior entanglement were not available? We have seen in Section~\ref{spooky} that the Deutsch--Jozsa relation can be established classically without communication when \mbox{$n=2$} or \mbox{$n=4$}, but not when $n$ is arbitrarily large. But how large must ``large''~be? In~particular, can it be established for~\mbox{$n=8$}? It~is~\mbox{interesting} to note that the Deutsch--Jozsa relation becomes easier and easier to \emph{fake} when $n$ becomes larger. Indeed, if Alice and Bob share $k$ random variables \mbox{$t_1$, $t_2$, \ldots, $t_k \in \{0,1\}^n$} in the initialization phase, and if they output \mbox{$a_i = x \cdot t_i$}, and \mbox{$b_i = y \cdot t_i$}, respectively, their probability of being caught with \mbox{$a=b$} if in fact \mbox{$\Delta(x,y)=n/2$} goes down as~$2^{-k}$. A~nice open question is to determine the task on $n$ input and $k$ output bits that can be handled with certainty given sufficient entanglement and no communication, but for which the probability of success would be as small as possible in a classical world. Finally, several open questions are given in Section~\ref{simul} concerning the classical simulation of quantum entanglement. Is~it possible to achieve the EPR task with fewer than four bits of classical communication in the worst case? Is~it possible to simulate an arbitrary \textsc{povm} with a \emph{worst-case} bounded amount of classical communication? How much classical communication is sufficient in the worst case to simulate the effect of $k$ bits of entanglement? In~the expected case? We~have seen how classical communication can be used to simulate entanglement for tasks that did not involve classical communication in the quantum setting. How about the classical simulation of tasks that use not only quantum entanglement but also classical (or perhaps quantum) communication? \end{document}
\begin{document} \title{Efficient Approximation Algorithms for String Kernel Based Sequence Classification} \begin{abstract} Sequence classification algorithms, such as SVM, require a definition of a distance (similarity) measure between two sequences. A commonly used notion of similarity is the number of matches between $k$-mers ($k$-length subsequences) in the two sequences. Extending this definition, by considering two $k$-mers to match if their distance is at most $m$, yields better classification performance. This, however, makes the problem computationally much more complex. Known algorithms to compute this similarity have computational complexity that renders them applicable only for small values of $k$ and $m$. In this work, we develop novel techniques to efficiently and accurately estimate the pairwise similarity score, which enables us to use much larger values of $k$ and $m$, and get higher predictive accuracy. This opens up a broad avenue of applying this classification approach to audio, images, and text sequences. Our algorithm achieves excellent approximation performance with theoretical guarantees. In the process we solve an open combinatorial problem, which was posed as a major hindrance to the scalability of existing solutions. We give analytical bounds on the quality and runtime of our algorithm and report its empirical performance on real world biological and music sequence datasets. \end{abstract} \section{Introduction}\label{sec:intro} Sequence classification is a fundamental task in pattern recognition, machine learning, and data mining with numerous applications in bioinformatics, text mining, and natural language processing. Detecting protein homology (shared ancestry measured from similarity of their sequences of amino acids) and predicting protein fold (functional three-dimensional structure) are essential tasks in bioinformatics. Sequence classification algorithms have been applied to both of these problems with great success \cite{cheng2006machine,kuang2005profile,kuksa2009scalable,leslie2002spectrum,leslie2002mismatch,leslie2004fast,sonnenburg2005large}. Music data, a real-valued signal, when discretized using vector quantization of MFCC features, is another flavor of sequential data \cite{tzanetakis2002musical}. Sequence classification has been used for recognizing genres of music sequences with no annotation and identifying artists from albums \cite{kuksa2008fast,kuksa2009scalable,kuksa2012generalized}. Text documents can also be considered as sequences of words from a language lexicon. Categorizing texts into classes based on their topics is another application domain of sequence classification \cite{Kuksa2011PhdThesis,kuksa2010spatial}. While general purpose classification methods may be applicable to sequence classification, huge lengths of sequences, large alphabet sizes, and large scale datasets prove to be rather challenging for such techniques. Furthermore, we cannot directly apply classification algorithms devised for vectors in metric spaces because in almost all practical scenarios sequences have varying lengths unless some mapping is done beforehand. In one of the more successful approaches, the variable-length sequences are represented as fixed dimensional feature vectors. A feature vector typically is the spectrum (counts) of all $k$-length substrings ($k$-mers) present exactly \cite{leslie2002spectrum} or inexactly (with up to $m$ mismatches) \cite{leslie2002mismatch} within a sequence. 
A {\em kernel function} is then defined that takes as input a pair of feature vectors and returns a real-valued similarity score between the pair (typically the inner product of the respective spectra). The matrix of pairwise similarity scores (the kernel matrix) thus computed is used as input to a standard {\em support vector machine} (SVM) \cite{cristianini2000introduction, vapnik1998statistical} classifier, resulting in excellent classification performance in many applications \cite{leslie2002mismatch}. In this setting $k$ (the length of the substrings used as the basis of the feature map) and $m$ (the mismatch parameter) are independent variables directly related to the classification accuracy and the time complexity of the algorithm. It has been established that using larger values of $k$ and $m$ improves classification performance \cite{Kuksa2011PhdThesis, kuksa2009scalable}. On the other hand, the runtime of kernel computation by the efficient trie-based algorithm \cite{leslie2002mismatch,shawe2004kernel} is $O(k^{m+1} |\Sigma|^m (|X|+|Y|))$ for two sequences $X$ and $Y$ over alphabet $\Sigma$. Computation of the mismatch kernel between two sequences $X$ and $Y$ reduces to the following two problems. i) Given two $k$-mers $\alpha$ and $\beta$ that are at Hamming distance $d$ from each other, determine the size of the intersection of the $m$-mismatch neighborhoods of $\alpha$ and $\beta$ ($k$-mers that are at distance at most $m$ from both of them). ii) For $0\leq d \leq \min\{2m,k\}$ determine the number of pairs of $k$-mers $(\alpha,\beta) \in X\times Y$ such that the Hamming distance between $\alpha$ and $\beta$ is $d$. In the best known algorithm \cite{kuksa2009scalable} the former problem is addressed by precomputing the intersection size in constant time for $m\leq 2$ only. For the latter problem, a sorting and enumeration based technique is proposed that has computational complexity $O(2^k (|X|+|Y|))$, which makes it applicable only for moderately large values of $k$ (and of course still limited to $m\leq 2$). In this paper, we completely resolve the combinatorial problem (problem i) for all values of $m$. We prove a closed form expression for the size of the intersection of $m$-mismatch neighborhoods that lets us precompute these values in $O(m^3)$ time (independent of $|\Sigma|$, $k$, and the lengths and number of sequences). For the latter problem we devise an efficient approximation scheme inspired by the theory of locality sensitive hashing to accurately estimate the number of $k$-mer pairs between the two sequences that are at distance $d$. Combining the above two, we design a polynomial time approximation algorithm for kernel computation. We provide probabilistic guarantees on the quality of our algorithm and analytical bounds on its runtime. Furthermore, we test our algorithm on several real world datasets with large values of $k$ and $m$ to demonstrate that we achieve excellent predictive performance. Note that string kernel based sequence classification was previously not feasible for this range of parameters. \section{Related Work}\label{section:background} In the computational biology community, pairwise alignment similarity scores were traditionally used as the basis for classification, such as the local and global alignment \cite{cristianini2000introduction,waterman1991computer}. String kernel based classification was introduced in \cite{watkins1999dynamic,haussler1999convolution}. 
Extending this idea, \cite{watkins1999dynamic} defined the {\em gappy $n$-gram kernel} and used it in conjunction with SVM \cite{vapnik1998statistical} for text classification. The main drawback of this approach is that the runtime of kernel evaluation depends quadratically on the lengths of the sequences. An alternative model of string kernels represents sequences as fixed dimensional vectors of counts of occurrences of $k$-mers in them. These include the $k$-spectrum \cite{leslie2002spectrum} and substring \cite{Vishwanathan2002fast} kernels. This notion is extended to count inexact occurrences of patterns in sequences, as in the mismatch \cite{leslie2002mismatch} and profile \cite{kuang2005profile} kernels. In this transformed feature space SVM is used to learn class boundaries. This approach yields excellent classification accuracies \cite{kuksa2009scalable}, but the computational complexity of kernel evaluation remains a daunting challenge \cite{Kuksa2011PhdThesis}. The exponential dimension ($|\Sigma|^k$) of the feature space for both the $k$-spectrum kernel and the $k,m$-mismatch kernel makes explicit transformation of strings computationally prohibitive. SVM does not require the feature vectors explicitly; it only uses pairwise dot products between them. A trie-based strategy to implicitly compute kernel values for pairs of sequences was proposed in \cite{leslie2002spectrum} and \cite{leslie2002mismatch}. A $(k,m)$-mismatch tree is introduced, which is a rooted $|\Sigma|$-ary tree of depth $k$, where each internal node has a child corresponding to each symbol in $\Sigma$ and every leaf corresponds to a $k$-mer in $\Sigma^k$. The runtime for computing the $k,m$-mismatch kernel value between two sequences $X$ and $Y$, under this trie-based framework, is $O((|X| + |Y|)k^{m+1}|\Sigma|^m)$, where $|X|$ and $|Y|$ are the lengths of the sequences. This makes the algorithm feasible only for small alphabet sizes and a very small number of allowed mismatches. The $k$-mer based kernel framework has been extended in several ways by defining different string kernels such as restricted gappy kernel, substitution kernel, wildcard kernel \cite{leslie2004fast}, cluster kernel \cite{weston2003cluster}, sparse spatial kernel \cite{kuksa2008fast}, abstraction-augmented kernel \cite{kuksa2010semi}, and generalized similarity kernel \cite{kuksa2012generalized}. For literature on large scale kernel learning and kernel approximation see \cite{Yang2012Nystrom,Bach2005Predictive,Drineas2005Nystrom,Rahimi2007Random,Rahimi2008Weighted,Williams2000Using} and references therein. \section{Algorithm for Kernel Computation} \label{section:algo} In this section we formulate the problem, describe our algorithm, and analyze its runtime and quality. \textbf{$k$-spectrum and $k,m$-mismatch kernel:} Given a sequence $X$ over alphabet $\Sigma$, the $k,m$-mismatch spectrum of $X$ is a $|\Sigma|^k$-dimensional vector, $\Phi_{k,m}(X)$, of the number of times each possible $k$-mer occurs in $X$ with at most $m$ mismatches. Formally, \begin{equation} \label{Eq:MismatchSpectrum} \Phi_{k,m}(X) = \left( \Phi_{k,m}(X)[\gamma]\right)_{\gamma \in \Sigma^k} = \left( \sum_{\alpha \in X} I_m(\alpha,\gamma)\right)_{\gamma \in \Sigma^k}, \end{equation} where $I_m(\alpha,\gamma)=1$ if $\alpha$ belongs to the set of $k$-mers that differ from $\gamma$ by at most $m$ mismatches, i.e., the Hamming distance between $\alpha$ and $\gamma$ satisfies $d(\alpha,\gamma)\leq m$, and $I_m(\alpha,\gamma)=0$ otherwise. Note that for $m=0$, this is known as the \textit{$k$-spectrum} of $X$. 
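To illustrate the definition, the following brute-force Python sketch (our own illustration; it enumerates all of $\Sigma^k$ and is therefore practical only for very small $|\Sigma|$ and $k$) computes $\Phi_{k,m}(X)$ directly.
\begin{verbatim}
from itertools import product

def mismatch_spectrum(X, k, m, alphabet="ACGT"):
    """Brute-force (k,m)-mismatch spectrum: for every gamma in Sigma^k, count
    the k-mers of X within Hamming distance m of gamma.  Exponential in k;
    intended only to illustrate the definition."""
    kmers = [X[i:i + k] for i in range(len(X) - k + 1)]
    spectrum = {}
    for gamma in ("".join(t) for t in product(alphabet, repeat=k)):
        spectrum[gamma] = sum(
            sum(a != b for a, b in zip(alpha, gamma)) <= m for alpha in kmers
        )
    return spectrum

# Example: with m = 0 this reduces to the ordinary k-spectrum (exact k-mer counts).
phi = mismatch_spectrum("ACGTACGG", k=3, m=1)
print(phi["ACG"])   # number of 3-mers of the sequence within 1 mismatch of "ACG"
\end{verbatim}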
The {\em $k,m$-mismatch} kernel value for two sequences $X$ and $Y$ (the mismatch spectrum similarity score) \cite{leslie2002mismatch} is defined as: \begin{align}\label{Eq:MK} &K(X,Y|k,m) = \langle \Phi_{k,m}(X),\Phi_{k,m}(Y)\rangle = \sum_{\gamma \in \Sigma^k} \Phi_{k,m}(X)[\gamma] \Phi_{k,m}(Y)[\gamma]\notag\\ &= \sum_{\gamma \in \Sigma^k} \sum_{\alpha \in X} I_m(\alpha,\gamma) \sum_{\beta \in Y} I_m(\beta,\gamma) = \sum_{\alpha \in X} \sum_{\beta \in Y} \sum_{\gamma \in \Sigma^k} I_m(\alpha,\gamma) I_m(\beta,\gamma). \end{align} For a $k$-mer $\alpha$, let $N_{k,m}(\alpha)=\{\gamma\in \Sigma^k: d(\alpha,\gamma)\leq m\}$ be the {\em $m$-mutational neighborhood} of $\alpha$. Then for a pair of sequences $X$ and $Y$, the $k,m$-mismatch kernel given in eq.~(\ref{Eq:MK}) can be equivalently computed as follows \cite{kuksa2009scalable}: \begin{align}\label{Eq:MK1} K(X,Y|k,m) =&\sum_{\alpha \in X} \sum_{\beta \in Y} \sum_{\gamma \in \Sigma^k} I_m(\alpha,\gamma) I_m(\beta,\gamma)\notag{} \\ =& \sum_{\alpha \in X} \sum_{\beta \in Y} |N_{k,m}(\alpha) \cap N_{k,m}(\beta)| = \sum_{\alpha \in X} \sum_{\beta \in Y} {\mathfrak{I}}_m(\alpha,\beta), \end{align} where ${\mathfrak{I}}_m(\alpha,\beta) = |N_{k,m}(\alpha) \cap N_{k,m}(\beta)|$ is the size of the intersection of the $m$-mutational neighborhoods of $\alpha$ and $\beta$. We use the following two facts. \begin{fact}\label{independence} ${\mathfrak{I}}_m(\alpha,\beta)$, the size of the intersection of the $m$-mismatch neighborhoods of $\alpha$ and $\beta$, is a function of $k$, $m$, $|\Sigma|$ and $d(\alpha,\beta)$ and is independent of the actual $k$-mers $\alpha$ and $\beta$ or the actual positions where they differ. (See Section \ref{subsec:closedform}.) \end{fact} \begin{fact}\label{min2mk} If $d(\alpha,\beta) > 2m$, then ${\mathfrak{I}}_m(\alpha,\beta) = 0$. \end{fact} In view of the above two facts we can rewrite the kernel value (\ref{Eq:MK1}) as \begin{equation} \label{Eq:MK2} K(X,Y|k,m) = \sum_{\alpha \in X} \sum_{\beta \in Y} {\mathfrak{I}}_m(\alpha,\beta) = \sum_{i=0}^{\min\{2m,k\}} M_i\cdot {\cal I}_i,\end{equation} where ${\cal I}_i={\mathfrak{I}}_m(\alpha,\beta)$ when $d(\alpha,\beta)=i$ and $M_i$ is the number of pairs of $k$-mers $(\alpha,\beta)$ such that $d(\alpha,\beta)=i$, where $\alpha\in X$ and $\beta\in Y$. Note that the bounds on the last summation follow from Fact \ref{min2mk} and the fact that the Hamming distance between two $k$-mers is at most $k$. Hence the problem of kernel evaluation is reduced to computing the $M_i$'s and evaluating the ${\cal I}_i$'s. \subsection{Closed form for Intersection Size}\label{subsec:closedform} Let $N_{k,m}(\alpha,\beta)$ be the intersection of the $m$-mismatch neighborhoods of $\alpha$ and $\beta$, i.e. $$N_{k,m}(\alpha,\beta) = N_{k,m}(\alpha) \cap N_{k,m}(\beta).$$ As defined earlier, $|N_{k,m}(\alpha,\beta)| = {\mathfrak{I}}_m(\alpha,\beta)$. Let $N_q(\alpha) = \{\gamma \in \Sigma^k : d(\alpha,\gamma) = q\}$ be the set of $k$-mers that differ from $\alpha$ in exactly $q$ indices. Note that $N_q(\alpha) \cap N_r(\alpha) = \emptyset$ for all $q\neq r$. Using this and defining $n^{qr}(\alpha,\beta)=|N_q(\alpha) \cap N_r(\beta)|$, $$N_{k,m}(\alpha,\beta) = \bigcup_{q=0}^m \bigcup_{r=0}^m N_q(\alpha) \cap N_r(\beta)\;\;\; \text{ and } \;\;\; {\mathfrak{I}}_m(\alpha,\beta)=\sum_{q=0}^m \sum_{r=0}^m n^{qr}(\alpha,\beta).$$ We now give a formula to compute $n^{ij}(\alpha,\beta)$. Let $s =|\Sigma|$. 
\begin{theorem} Given two $k$-mers $\alpha$ and $\beta$ such that $d(\alpha,\beta)=d$, we have that $$n^{ij}(\alpha,\beta)= \sum_{t=0}^{\frac{i+j-d}{2}}{2d - i-j+2t\choose d-(i-t)} {d \choose i+j-2t-d} (s -2)^{i+j-2t-d} {k-d\choose t} (s-1)^t.$$ \end{theorem} \begin{proof} $n^{ij}(\alpha,\beta)$ can be interpreted as the number of ways to make $i$ changes in $\alpha$ and $j$ changes in $\beta$ to get the same string. For clarity, we first deal with the case $d(\alpha,\beta)=0$, i.e., both strings are identical. We wish to find $n^{ij}(\alpha,\beta)=|N_i(\alpha) \cap N_j(\beta)|$. It is clear that in this case $i=j$, otherwise making $i$ and $j$ changes to the same string will not result in the same string. Hence $n^{ij}= {k\choose i} (s-1)^i$. Second, we consider $\alpha,\beta$ such that $d(\alpha,\beta)=k$. Clearly $k\geq i$ and $k\geq j$. Moreover, since the two strings do not agree at any index, the character at every index has to be changed in at least one of $\alpha$ or $\beta$. This gives $k\leq i+j$. Now for a particular index $p$, $\alpha[p]$ and $\beta[p]$ can go through any one of the following three changes. Let $\alpha [p]=x$, $\beta [p]=y$. (I) Both $\alpha [p]$ and $\beta [p]$ may change from $x$ and $y$, respectively, to some common character $z$ (distinct from both $x$ and $y$). Let $l_1$ be the count of indices going through this type of change. (II) $\alpha [p]$ changes from $x$ to $y$; call the count of these $l_2$. (III) $\beta [p]$ changes from $y$ to $x$; let this count be $l_3$. It follows that $$i = l_1+l_2\,, \;\;\; j = l_1+l_3\,, \;\;\; l_1+l_2+l_3 = k.$$ This results in $l_1 = i+j-k$. Since $l_1$ is the count of indices at which the characters of both strings change, we have $s-2$ character choices for each such index and ${k \choose i+j- k}$ possible combinations of indices for $l_1$. From the remaining $l_2 + l_3 = 2k-i-j$ indices, we choose $l_2=k-j$ indices in ${2k-i-j \choose k-j}$ ways and change the characters at these indices of $\alpha$ to the characters of $\beta$ at the respective indices. Finally, we are left with only the $l_3$ remaining indices and we change them according to the definition of $l_3$. Thus the total number of strings we get after making $i$ changes in $\alpha$ and $j$ changes in $\beta$ is $$\left(s-2\right)^{i+j-k} {k \choose i+j- k} {2k-i-j \choose k-j}.$$ Now we consider general strings $\alpha$ and $\beta$ of length $k$ with $d(\alpha, \beta)=d$. Without loss of generality assume that they differ in the first $d$ indices. We parameterize the system in terms of the number of changes that occur in the last $k-d$ indices of the strings, i.e., let $t$ be the number of indices that go through a change in the last $k-d$ indices. The number of possible such changes is \begin{equation}\label{same} {k-d\choose t} (s-1)^t.\end{equation} Let us call the first $d$-length substrings of the two strings $\alpha'$ and $\beta'$. There are $i-t$ characters to be changed in $\alpha'$ and $j-t$ in $\beta'$. As reasoned above, we have $ d\leq (i-t)+(j-t)\implies t\leq \frac{i+j-d}{2}$. In this setup we get $i-t = l_1+l_2 $, $j-t = l_1+l_3 $, $l_1+l_2+l_3 = d $ and $l_1 = (i-t)+(j-t)-d$. 
We immediately get that for a fixed $t$, the total number of resultant strings after making $i-t$ changes in $\alpha'$ and $j-t$ changes in $\beta'$ is \begin{equation}\label{diff} {2d - (i-t) -(j-t)\choose d-(i-t)} {d \choose (i-t)+(j-t) - d} (s-2)^{(i-t)+(j-t) - d}.\end{equation} For a fixed $t$, every substring counted in \eqref{same} combined with every substring counted in \eqref{diff} gives a required string obtained after $i$ and $j$ changes in $\alpha$ and $\beta$, respectively. The statement of the theorem follows. \end{proof} \begin{corollary}\label{thmIntersection} The runtime of computing ${\cal I}_d$ is $O(m^3)$, independent of $k$ and $|\Sigma|$. \end{corollary} This is so because, if $d(\alpha,\beta)=d$, ${\cal I}_d = \sum\limits_{q=0}^m \sum\limits_{r=0}^m n^{qr}(\alpha,\beta)$ and each $n^{qr}(\alpha,\beta)$ can be computed in $O(m)$ time. \subsection{Computing $M_i$}\label{subsec:computeMd} Recall that given two sequences $X$ and $Y$, $M_i$ is the number of pairs of $k$-mers $(\alpha,\beta)$ such that $d(\alpha,\beta)=i$, where $\alpha\in X$ and $\beta\in Y$. Formally, the problem of computing $M_i$ is as follows: \begin{problem}\label{problem:1} Given $k$, $m$, and two sets of $k$-mers $S_X$ and $S_Y$ (the sets of $k$-mers extracted from the sequences $X$ and $Y$, respectively) with $|S_X|=n_X$ and $|S_Y| = n_Y$, compute $$M_i = |\{(\alpha,\beta) \in S_X\times S_Y : d(\alpha,\beta) = i\}| \;\; \text{ for } 0\leq i \leq \min\{2m,k\}.$$ \end{problem} Note that the brute force approach to compute $M_i$ requires $O(n_X\cdot n_Y\cdot k)$ comparisons. Let $\mathcal{Q} _k(j)$ denote the set of all $j$-sets of $\{1,\ldots,k\}$ (subsets of indices). For $\theta \in \mathcal{Q} _k(j)$ and a $k$-mer $\alpha$, let $\alpha|_{\theta}$ be the $j$-mer obtained by selecting the characters at the $j$ indices in $\theta$. Let $f_{\theta}(X,Y)$ be the number of pairs of $k$-mers in $S_X\times S_Y$ that agree on $\theta$, defined as follows: $$f_{\theta}(X,Y) = |\{(\alpha,\beta) \in S_X\times S_Y : d(\alpha|_{\theta},\beta|_{\theta}) = 0\}|.$$ We use the following important observations about $f_{\theta}$. \begin{fact} For $0\leq i \leq k$ and $\theta \in \mathcal{Q} _k(k-i)$, if $d(\alpha|_{\theta},\beta|_{\theta}) = 0$, then $d(\alpha,\beta) \leq i$. \end{fact} \begin{fact}\label{fact6} For $0\leq i \leq k$ and $\theta \in \mathcal{Q}_k(k-i)$, $f_{\theta}(X,Y)$ can be computed in $O(kn\log n)$ time. \end{fact} This can be done by first lexicographically sorting the $k$-mers in each of $S_X$ and $S_Y$ by the indices in $\theta$. The pairs in $S_X\times S_Y$ that are the same at the indices in $\theta$ can then be enumerated in one linear scan over the sorted lists. Let $n=n_X+n_Y$; the runtime of this computation is $O(k(n+|\Sigma|))$ if we use counting sort (as in \cite{kuksa2009scalable}) or $O(kn\log n)$ for mergesort (since $\theta$ has $O(k)$ indices). Since this procedure is repeated many times, we refer to it as the \texttt{SORT-ENUMERATE} subroutine. We define \begin{equation}\label{Eq:Fiformula} F_i(X,Y) = \sum_{\theta \in \mathcal{Q}_k(k-i)} f_{\theta}(X,Y).\end{equation} \begin{lemma} \begin{equation} \label{FalternateForm} F_i(X,Y) = \sum_{j=0}^i {k-j \choose k-i}M_j.\end{equation} \end{lemma} \begin{proof} Let $(\alpha,\beta)$ be a pair that contributes to $M_j$, i.e., $d(\alpha,\beta) = j$. 
Then for every $\theta \in \mathcal{Q}_k(k-i)$ that has all its indices within the $k-j$ positions where $\alpha$ and $\beta$ agree, the pair $(\alpha,\beta)$ is counted in $f_{\theta}(X,Y)$. The number of such $\theta$'s is ${k-j \choose k-i}$, hence $M_j$ is counted ${k-j \choose k-i}$ times in $F_i(X,Y)$, yielding the required equality. \end{proof} \begin{corollary}\label{corrMi} $M_i$ can readily be computed as: $ M_i = F_i(X,Y) - \sum\limits_{j=0}^{i-1} {k-j \choose k-i}M_j$. \end{corollary} By definition, $F_i(X,Y)$ can be computed with ${k\choose k-i} ={k\choose i}$ $f_{\theta}$ computations. Let $t=\min\{2m,k\}$. $K(X,Y|k,m)$ can be evaluated by \eqref{Eq:MK2} after computing $M_i$ (by Corollary \ref{corrMi}) and ${\cal I}_i$ (by Corollary \ref{thmIntersection}) for $0\leq i\leq t$. The overall complexity of this strategy thus is $$\left(\sum_{i=0}^t {k\choose i} (k-i)(n\log n + n)\right) + O(n) = O(k\cdot 2^{k-1} \cdot n\log n).$$ We now give our algorithm to approximate $K(X,Y|k,m)$, followed by its explanation and its analysis. \begin{algorithm}[h] \caption{: Approximate-Kernel($S_X$,$S_Y$,$k$,$m$,$\epsilon$,$\delta$,$B$)} \label{algo:approxKernel} \begin{algorithmic}[1] \State ${\cal I}, M' \gets \Call{zeros}{t+1}$ \State $\sigma \gets \epsilon\cdot \sqrt{\delta}$ \State Populate ${\cal I}$ using Corollary \ref{thmIntersection} \For{$i =0$ to $t$} \State $\mu_F \gets 0$ \State $iter \gets 1$ \State $var_F \gets \infty$ \While{$var_F > \sigma^2 \wedge iter < B$} \label{line9} \State $\theta \gets \Call{random}{{k\choose k-i}}$ \State $\mu_F \gets \dfrac{\mu_F \cdot (iter - 1) + \Call{sort-enumerate}{S_X,S_Y,k,\theta}}{iter}$ \Comment{Application of Fact \ref{fact6}} \State $var_F \gets \Call{variance}{\mu_F,var_F,iter}$ \Comment{Compute online variance} \State $iter \gets iter + 1$ \EndWhile \State $F'[i] \gets \mu_F\cdot {k\choose k-i}$ \State $M'[i]\gets F'[i]$ \For{$j = 0$ to $i-1$} \Comment{Application of Corollary \ref{corrMi}} \State $M'[i] \gets M'[i] - {k-j \choose k-i}\cdot M'[j]$ \EndFor \EndFor \State $K' \gets \Call{sumproduct}{M',{\cal I}}$ \Comment{Applying Equation \eqref{Eq:MK2}} \State \Return $K'$ \end{algorithmic} \end{algorithm} Algorithm \ref{algo:approxKernel} takes $\epsilon, \delta \in (0,1)$, and $B \in \mathbb{Z}^+$ as input parameters; the first two control the accuracy of the estimate while $B$ is an upper bound on the sample size. We use \eqref{Eq:Fiformula} to estimate $F_i=F_i(X,Y)$ with an online sampling algorithm, where we choose $\theta\in\mathcal{Q}_k(k-i)$ uniformly at random and compute the online mean and variance of the estimate for $F_i$. We continue to sample until the variance is below the threshold ($\sigma^2=\epsilon^2 \delta$) or the sample size reaches the upper bound $B$. We scale up our estimate by the population size and use it to compute $M_i'$ (the estimate of $M_i$) using Corollary \ref{corrMi}. These $M_i'$'s, together with the precomputed exact values of the ${\cal I}_i$'s, are used to compute our estimate, $K'(X,Y|k,m,\sigma,\delta,B)$, for the kernel value using \eqref{Eq:MK2}. First we give an analytical bound on the runtime of Algorithm \ref{algo:approxKernel}, then we provide guarantees on its performance. \begin{theorem}\label{runtime} The runtime of Algorithm \ref{algo:approxKernel} is bounded above by $O(k^2 n\log n)$.
\end{theorem} \begin{proof} Observe that throughout the execution of the algorithm there are at most $tB$ computations of $f_{\theta}$, each of which by Fact \ref{fact6} needs $O(kn\log n)$ time. Since $B$ is an absolute constant and $t\leq k$, we get that the total runtime of the algorithm is $O(k^2n\log n)$. Note that in practice the while loop in line \ref{line9} is rarely executed for $B$ iterations; the deviation is within the desired range much earlier. \end{proof} Let $K' = K'(X,Y|k,m,\epsilon,\delta,B)$ be our estimate (the output of Algorithm \ref{algo:approxKernel}) for $K = K(X,Y|k,m)$. \begin{theorem}\label{unbiasedEst} $K'$ is an unbiased estimator of the true kernel value, i.e. $E(K') = K$. \end{theorem} \begin{proof} For this we need the following result, whose proof is deferred. \begin{lemma}\label{lem:unbiasedKernel} $E(M_i') = M_i$. \end{lemma} By Line 17 of Algorithm \ref{algo:approxKernel}, $E(K') =E( \sum_{i=0}^{t} {\cal I}_i M_i').$ Using the fact that the ${\cal I}_i$'s are constants and Lemma \ref{lem:unbiasedKernel} we get that $$E(K') = \sum_{i=0}^{t} {\cal I}_i E(M_i')= \sum_{i=0}^{\min\{2m,k\}} {\cal I}_i M_i = K.$$\end{proof} \begin{theorem}\label{error} For any $0< \epsilon, \delta < 1$, Algorithm \ref{algo:approxKernel} is an $(\epsilon {\cal I}_{max}, \delta)-$additive approximation algorithm, i.e. $Pr(|K-K'| \geq \epsilon {\cal I}_{max} ) < \delta$, where ${\cal I}_{max} = \max_{i}\{{\cal I}_i\}$. \end{theorem} Note that these are very loose bounds; in practice we get approximations far better than these bounds. Furthermore, though ${\cal I}_{max}$ could be large, it is only a fraction of one of the terms in the summation for the kernel value $K(X,Y|k,m)$. \begin{proof} Let $F'_i$ be our estimate for $F_i\left( X,Y\right) = F_i$. We use the following bound on the variance of $K'$ that is proved later. \begin{lemma}\label{lem:kernelVariance} $Var(K') \leq \delta(\epsilon\cdot{\cal I}_{max})^2.$ \end{lemma} By Lemma \ref{lem:unbiasedKernel} we have $E(K') = K$; hence, by Lemma \ref{lem:kernelVariance}, $Pr[|K' - K| \geq \epsilon {\cal I}_{max}]$ is at most $Pr[|K' - E(K')| \geq \frac{1}{\sqrt{\delta}}\sqrt{Var(K')}]$. By Chebyshev's inequality, this latter probability is at most $\delta$. Therefore, Algorithm \ref{algo:approxKernel} is an $(\epsilon {\cal I}_{max}, \delta)-$additive approximation algorithm. \end{proof} \begin{proof}(Proof of Lemma \ref{lem:unbiasedKernel}) We prove it by induction on $i$. The base case ($i=0$) is true as we compute $M'[0]$ exactly, i.e. $M'[0] = M[0]$. Suppose $E(M'_j) = M_j$ for $0\leq j \leq i-1$. Let $iter$ be the number of iterations for $i$; after the execution of Line 10 we get $$F'[i] = \mu_F {k\choose k-i} = \dfrac{\sum_{r=1}^{iter} f_{\theta_r}(X,Y)}{iter} {k\choose k-i},$$ where $\theta_r$ is the random $(k-i)$-set chosen in the $r$th iteration of the while loop. Since $\theta_r$ is chosen uniformly at random we get that \begin{equation}\label{Eq:expectedOurF} E(F'[i]) = E(\mu_F) {k\choose k-i} = E(f_{\theta_r}(X,Y)) {k\choose k-i} = \dfrac{F_i(X,Y)}{{k\choose k-i}} {k\choose k-i} = F_i(X,Y).\end{equation} After the loop on Line 15 is executed we get that $E(M'[i]) = F_i(X,Y) - \sum\limits_{j=0}^{i-1} {k-j \choose k-i}E(M_j')$. Using $E(M_j')=M_j$ (the inductive hypothesis) in \eqref{FalternateForm} we get that $E(M_i') = M_i$.
\end{proof} \begin{proof} (Proof of Lemma \ref{lem:kernelVariance}) After the execution of the while loop in Algorithm \ref{algo:approxKernel}, we have $F_i' = \sum\limits_{j=0}^i {k-j \choose k-i}M_j'.$ We use the following fact that follows from basic calculations. \begin{fact}\label{fact:varLinComb} Suppose $X_0,\ldots,X_t$ are random variables and let $S= \sum_{i=0}^t a_iX_i$, where $a_0,\ldots,a_t$ are constants. Then $$Var(S) = \sum_{i=0}^t a_i^2Var(X_i) + 2\sum_{i=0}^t\sum_{j=i+1}^t a_ia_jCov(X_i,X_j).$$ \end{fact} Using Fact \ref{fact:varLinComb} and the definitions of ${\cal I}_{max}$ and $\sigma$ we get that $$Var(K')=\sum_{i=0}^t {{\cal I}_i}^2 Var( M_i') +2\sum_{i=0}^t\sum_{j=i+1}^t{\cal I}_i{\cal I}_j Cov(M_i', M_j')$$ $$\leq {\cal I}_{max}^2 \left[\sum_{i=0}^t Var(M_i') +2\sum_{i=0}^t\sum_{j=i+1}^t Cov( M_i', M_j')\right] \leq {\cal I}_{max}^2 Var(F_t') \leq {\cal I}_{max}^2 \sigma^2 =\delta(\epsilon\cdot{\cal I}_{max})^2.$$ The second inequality follows from the following relation, derived from the definition of $F_t'$ and Fact \ref{fact:varLinComb}. \begin{equation}\label{Eq:VarourF} Var(F_t')=\sum_{i=0}^t {k-i\choose k-t}^2 Var(M_i') +2\sum_{i=0}^t\sum_{j=i+1}^t{k-i\choose k-t}{k-j\choose k-t} Cov( M_i', M_j').\end{equation} \end{proof} \section{Evaluation}\label{section:experimental} We study the performance of our algorithm in terms of runtime, quality of the kernel estimates, and predictive accuracy on standard benchmark sequence datasets (Table \ref{tab:datasets}). For the range of parameters feasible for existing solutions, we generated kernel matrices both by the algorithm of \cite{kuksa2009scalable} (exact) and by our algorithm (approximate). These experiments were performed on an Intel Xeon machine ($8$ cores, $2.1$ GHz, $32$ GB RAM) using the same experimental settings as in \cite{kuksa2009scalable,kuksa2010spatial, kuksa2014efficient}. Since our algorithm is applicable to a significantly wider range of $k$ and $m$, we also report classification performance with large $k$ and $m$. For our algorithm we used $B\in \{300,500\}$ and $\sigma \in\{0.25,0.5\}$ with no significant difference in the results, as implied by the theoretical analysis. In all reported results $B=300$ and $\sigma=0.5$. In order to perform comparisons, for a few combinations of parameters we generated exact kernel matrices of each dataset on a much more powerful machine (a cluster of $20$ nodes, each having $24$ CPUs with $2.5$ GHz speed and $128$ GB RAM). Sources for the datasets and the source code are available online\footnote{\url{https://github.com/mufarhan/sequence_class_NIPS_2017}}. \begin{table}[H] \vskip-.1in \centering\caption {Datasets description} \label{tab:datasets} \begin{tabular}{llllll} \hline Name & Task & Classes&Seq.&Av.Len.
& Evaluation\\ \hline\hline Ding-Dubchak \cite{ding2001multi} & protein fold recognition & $27$ & $694$ & $169$ & 10-fold CV \\ \hline SCOP \cite{conte2000scop,weston2005semi}& protein homology detection & $54$ & $7329$ & $308$ & 54 binary class.\\ \hline Music \cite{li2003comparative,tzanetakis2002musical} & music genre recognition & $10$ & $1000$ & $2368$ & 5-fold CV \\ \hline Artist20 \cite{ellis2007classifying,kuksa2014efficient} & artist identification & $20$ & $1413$ & $9854$ & 6-fold CV \\ \hline ISMIR \cite{kuksa2014efficient} & music genre recognition & $6$ & $729$ & $10137$ & 5-fold CV\\ \hline \end{tabular} \vskip-.1in \end{table} {\bf \em Running Times:} We report the difference in running times for kernel generation in Figure \ref{fig:runtime1}. Exact kernels are generated using the code provided by the authors of \cite{kuksa2009scalable,kuksa2012generalized} for $8\leq k\leq 16$ and $m=2$ only. We achieve significant speedups for large values of $k$ (for $k=16$ we get one order of magnitude gain in computational efficiency on all datasets). The running times of the two algorithms are $O(2^k n)$ and $O(k^2 n \log n)$, respectively. We can use larger values of $k$ without an exponential penalty, which is visible in the fact that in all graphs, as $k$ increases, the growth of the running time of the exact algorithm is linear (on the log scale), while that of our algorithm tends to taper off. \begin{figure} \caption{Log-scaled plot of the running time of approximate and exact kernel generation for $m=2$} \label{fig:runtime1} \end{figure} {\bf \em Kernel Error Analysis:} We show that despite the reduction in runtimes, we get excellent approximations of the kernel matrices. In Table \ref{tab:ErrorAnalysis} we report a point-to-point error analysis of the approximate kernel matrices. We compare our estimates with the exact kernels for $m=2$. For $m> 2$ we report statistical error analyses. More precisely, we evaluate differences with principal submatrices of the exact kernel matrix. These principal submatrices are selected by randomly sampling $50$ sequences and computing their pairwise kernel values. We report errors for four datasets; the fifth one, not included for space reasons, showed no difference in error. From Table \ref{tab:ErrorAnalysis} it is evident that our empirical performance is significantly better than the theoretical bounds proved on the errors in our estimates. \begin{table} \centering \vskip-.05in \caption {Mean absolute error (MAE) and root mean squared error (RMSE) of approximate kernels. For $m>2$ we report the average MAE and RMSE of three random principal submatrices of size $50 \times 50$} \label{tab:ErrorAnalysis} \begin{tabular}{|c|c|c||c|c||c|c||c|c|} \hline & \multicolumn{2}{c||}{Music Genre}&\multicolumn{2}{c||}{ISMIR}&\multicolumn{2}{c||}{Artist20}&\multicolumn{2}{c|}{SCOP} \\ \hline $(k,m) $&RMSE&MAE&RMSE&MAE&RMSE&MAE&RMSE&MAE \\ \hline\hline $(10,2)$&$0$&$0$&$0$&$0$&$ 0 $&$ 0 $&$1.3$E$-6$&$ 9.0$E$-8$ \\ \hline $(12,2)$&$0$&$0$&$0$&$0$&$ 0 $&$ 0 $&$1.4$E$-6$&$ 1.0$E$-8$ \\ \hline $(14,2)$&$2.0$E$-8$&$0$&$2.0$E$-8$&$0$&$3.3$E$-8$&$1.3$E$-8$&$2.9$E$-6$&$ 1.3$E$-8 $ \\ \hline $(16,2)$&$1.3$E$-8$&$0$&$4.0$E$-8$&$3.3$E$-9$&$ $&$ $&$2.9$E$-6$&$1.0$E$-8 $ \\ \hline $(12,6)$&$1.97$E$-5$&$8.5$E$-7$&$ $&$ $&$ $&$ $&$ 2.4$E$-4 $&$ 1.8$E$-5 $ \\ \hline \end{tabular} \vskip-.05in \end{table} {\bf \em Prediction Accuracies:} We compare the outputs of SVM on the exact and approximate kernels using the publicly available SVM implementation LIBSVM \cite{CC01a}.
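As an aside for readers reproducing this comparison, the snippet below sketches how a precomputed (exact or approximate) kernel matrix can be fed to an SVM; the experiments in this paper drive LIBSVM directly, and here we use scikit-learn's LIBSVM-based \texttt{SVC} purely as an illustration, with array names that are ours.
\begin{verbatim}
# Illustration only (not the paper's pipeline): training an SVM on a
# precomputed kernel matrix such as the approximate K(X, Y | k, m).
from sklearn.svm import SVC

def fit_and_predict(K_train, y_train, K_test):
    # K_train: (n_train, n_train) kernel values between training sequences
    # K_test : (n_test, n_train) kernel values between test and training sequences
    clf = SVC(kernel="precomputed", C=1.0)  # C chosen arbitrarily for this sketch
    clf.fit(K_train, y_train)
    return clf.predict(K_test)
\end{verbatim}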
We computed exact kernel matrices by the brute force algorithm for a few combinations of parameters for each dataset on the much more powerful machine. Generating these kernels took days; we generated them only to compare the classification performance of our algorithm with that of the exact one. We demonstrate that our predictive accuracies are sufficiently close to those obtained with exact kernels in Table \ref{tab:SCOP_DD} (bio-sequences) and Table \ref{tab:Music_Accuracy} (music). The parameters used for reporting classification performance are chosen in order to maintain comparability with previous studies. Similarly, all measurements are made as in \cite{kuksa2009scalable,kuksa2012generalized}; for instance, for music genre classification we report results of $10$-fold cross-validation (see Table \ref{tab:datasets}). For our algorithm we used $B=300$ and $\sigma = 0.5$, and we report the average performance over three independent runs. \begin{table}[H] \vskip-.1in \centering\caption {Classification performance comparisons on SCOP (ROC) and Ding-Dubchak (Accuracy)} \label{tab:SCOP_DD} \begin{tabular}{|c||cc|cc||cc|} \hline & \multicolumn{4}{c||}{SCOP} & \multicolumn{2}{c|}{Ding-Dubchak}\\ \hline & \multicolumn{2}{c|}{Exact} & \multicolumn{2}{c||}{Approx} & Exact & Approx\\ \hline $k,m$ & ROC & ROC50 & ROC & ROC50 &\multicolumn{2}{c|}{Accuracy} \\ \hline $8,2$ & $ 88.09 $ & $ 38.71 $ & $ 88.05 $ & $ 38.60 $ &$34.01$ & $31.65$\\ \hline $10,2$ & $ 81.65 $ & $ 28.18 $ & $ 80.56 $ & $ 26.72 $ & $28.1$ & $26.9$\\ \hline $12,2$ & $ 71.31 $ & $ 23.27 $ & $ 66.93 $ & $ 11.04 $ &$27.23$ & $26.66$ \\ \hline $14,2$ & $ 67.91 $ & $ 7.78 $ & $ 63.67 $ & $ 6.66 $ & $25.5$ & $25.5$\\ \hline $16,2$ & $ 64.45 $ & $ 6.89 $ & $ 61.64 $ & $ 5.76 $ &$25.94$ & $25.03$ \\ \hline $10,5$ & $ 91.60 $ & $ 53.77 $ & $ 91.67 $ & $ 54.1 $ & $45.1$ & $43.80$ \\ \hline $10,7$ & $ 90.27 $ & $ 48.18 $ & $ 90.30 $ & $ 48.44 $ & $58.21$ & $57.20$\\ \hline $12,8$ & $ 91.44 $ & $ 50.54 $ & $ 90.97 $ & $ 52.08 $ & $58.21$ &$57.83$ \\ \hline \end{tabular} \vskip-.07in \end{table} \begin{table}[H] \vskip-.15in \centering\caption {Classification error comparisons on the music datasets with exact and estimated kernels} \label{tab:Music_Accuracy} \begin{tabular}{|l||l|l|l|l|l|l|} \hline & \multicolumn{2}{c|}{Music Genre} & \multicolumn{2}{c|}{ISMIR} & \multicolumn{2}{c|}{Artist20} \\ \hline $k,m $ & Exact & Estimate & Exact & Estimate & Exact & Estimate \\ \hline $10,2$ & $ 61.30\pm3.3 $ & $ 61.30\pm3.3 $ & $ 54.32\pm1.6 $ & $ 54.32\pm1.6 $ & $ 82.10\pm2.2 $ & $ 82.10\pm2.2 $ \\ \hline $14,2$ & $ 71.70\pm3.0 $ & $ 71.70\pm3.0 $ & $ 55.14\pm1.1 $ & $ 55.14\pm1.1 $ & $ 86.84\pm1.8 $ & $ 86.84\pm1.8 $ \\ \hline $16,2$ & $ 73.90\pm1.9 $ & $ 73.90\pm1.9 $ & $ 54.73\pm1.5 $ & $ 54.73\pm1.5 $ & $ 87.56\pm1.8 $ & $ 87.56\pm1.8 $ \\ \hline $10,7$ & $37.00\pm3.5$ & $ 37.00\pm3.5 $ & $ $ & $ 27.16\pm1.6 $ & $ 55.75\pm4.7 $ & $55.75\pm4.7$ \\ \hline $12,6$ & $ 54.20\pm2.7 $ & $ 54.13\pm2.9 $ & $52.12\pm2.0 $ & $ 52.08\pm1.5 $ & $ 79.57\pm2.4$ & $ 80.00\pm2.6$ \\ \hline $12,8$ & $43.70\pm3.2 $ & $ 44.20\pm3.2 $ & $47.03\pm 2.6 $ & $ 47.41\pm2.4 $ & $ $ & $67.57\pm3.6$ \\\hline \end{tabular} \vskip-.06in \end{table} \section{Conclusion} In this work we devised an efficient algorithm for the evaluation of string kernels based on inexact matching of subsequences ($k$-mers). We derived a closed-form expression for the size of the intersection of the $m$-mismatch neighborhoods of two $k$-mers.
Another significant contribution of this work is a novel statistical estimate of the number of $k$-mer pairs at a fixed distance between two sequences. Although large values of the parameters $k$ and $m$ were known to yield better classification results, the known algorithms are not feasible even for moderately large values. Using the two above-mentioned results, our algorithm efficiently approximates kernel matrices with probabilistic bounds on the accuracy. Evaluation on several challenging benchmark datasets for large $k$ and $m$ shows that we achieve state-of-the-art classification performance with an order of magnitude speedup over existing solutions. \end{document}
\begin{document} \begin{frontmatter} \title{Comments on the ``Optimal strategy of deteriorating items with capacity constraints under two-levels of trade credit policy''} \author[1st_address]{Sunil Tiwari\href{https://orcid.org/0000-0002-0499-2794}{\includegraphics[scale=.6]{figures/orcid.png}}} \corref{mycorrespondingauthor} \cortext[mycorrespondingauthor]{Corresponding author} \ead{[email protected]} \author[2nd_address]{Masih Fadaki} \ead{[email protected]} \author[3rd_address]{Anuj Kumar Sharma} \ead{[email protected]} \address[1st_address]{Department of Industrial Systems Engineering and Management, National University of Singapore, 1 Engineering Drive 2, 117576, Singapore} \address[2nd_address]{School of Business IT \& Logistics, RMIT University, Melbourne, VIC 3000, Australia} \address[3rd_address]{Department of Mathematics, Shyam Lal College, University of Delhi, India} \begin{abstract} This technical note rectifies mathematical and conceptual errors present in \citet{liao2014}. \citet{liao2014} proposed an EOQ model under a two-level trade credit policy with limited storage capacity, whereby the supplier provides a permissible delay period ($M$) to the retailer, and the retailer in turn offers a permissible delay period ($N$) (where $M>N$) to its customers. In the current technical note, we point out some defects of their model, from a mathematical and logical viewpoint, regarding both the interest charged and the interest earned. Furthermore, as an example, one of the affected numerical results is re-evaluated.\\ \end{abstract} \begin{keyword} Trade credit financing \sep Permissible delay in payments \sep Deterioration \sep Limited storage capacity \sep Inventory. \end{keyword} \end{frontmatter} \section{Introduction}\label{sec:Intr} To reflect the real business environment and to address the inter-dependencies between the operations and financing processes, there has recently been a trend of optimising inventory while trade credit is also taken into consideration. \citet{liao2014} presented an inventory model for perishable items with two levels of trade credit and limited storage capacity. In this two-echelon supply chain inventory model, a supplier provides the products to a retailer and the retailer sells them to its customers. Under two levels of trade credit, the supplier offers a fixed credit period to the retailer to settle the account, and the retailer in turn provides some trade credit to its customers. Beyond the allowable credit period, the retailer is charged at the agreed interest rate, while she earns interest on the customer's outstanding amount if it is not paid within the allowable delay period. While the study contributed an interesting idea by considering the capacity constraint in the context of inventory optimisation under a trade credit policy, we have identified a major issue in the development of the mathematical expression for the interest earned in the second case ($N < T \le M$). Furthermore, a number of minor issues, including an error in the Notations section and an incorrect mathematical expression for the annual deteriorating cost, are spotted. Since the outlined major issue has a significant impact on the numerical results and the corresponding recommendations, we also amend the impacted computational results.
\section{Notations} We have reviewed the notations of the original paper and compared the provided notations with the terms used in the paper. A minor issue exists in the notations with respect to the formula for $W^*$. The unit selling price of the item is denoted by '$s$' in the notation list, while $W^*$ is formulated as $pDM+\frac{pI_eDM^2}{2}$, in which '$p$' is used for the unit selling price. This expression should be updated to $sDM+\frac{sI_eDM^2}{2}$. \section{Model} Suppose there is a delay in payment time, denoted as $M$. The inventory level $I(t)$ at a time $t\in \left[0,{\rm \; }T \right]$ can be formulated as: \begin{flalign} \frac{dI\left(t\right)}{dt} +\theta I\left(t\right)=-D;{\rm \; \; \; \; }0\le t\le T \end{flalign} Given the boundary condition $I(T)=0$, we get \begin{flalign} I\left(t\right)=\frac{D}{\theta } \left(e^{\theta \left(T -t\right)} -1\right);{\rm \; \; \; \; }0\le t\le T \end{flalign} At time $t = 0$, $I(0) = Q$ and thus: \begin{align} \label{eq:Qj} Q &= \frac{D}{\theta } \left(e^{\theta T } -1\right) \\ \Rightarrow \; \frac{dQ }{dT } &= De^{\theta T } \notag \end{align} Considering Equation \ref{eq:Qj}, the time at which the inventory $W$ is exhausted ($T_{a}$) can be computed as: \begin{align} \label{eq:Ta} W &=\frac{D}{\theta } \left(e^{\theta T_{a} } -1\right) \\ \Rightarrow T_{a} & =\frac{1}{\theta } \ln \left(\frac{\theta W}{D} +1\right) \nonumber \end{align} \subsection{Estimation of total annual cost components} There is another minor issue regarding the mathematical expression for the annual deteriorating cost, as it has been developed as $\frac{Dh(e^{\theta T}-\theta T -1)}{\theta T}$ in \citeauthor{liao2014}'s \citeyearpar{liao2014} model; however, the correct term is: \begin{align*} D_{T } &=\frac{c\left(Q -DT \right)}{T } \\ &=\frac{cD}{\theta T } \left\{e^{\theta T } -\theta T -1\right\} \end{align*} \textbf{The interest earned per year, Case 2: $N < T \le M$.} The incomplete term in \citeauthor{liao2014}'s \citeyearpar{liao2014} model pertains to the computation of the annual interest earned for the second case, where $N < T \le M$. This is the case in which \citet{liao2014} did not consider the interest earned from $N$ to $T$. As depicted in Figure \ref{fig:case2}, the annual interest earned should be calculated for both periods, $N$ to $T$ and $T$ to $M$.
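As a side remark, the inventory-level expressions derived in this section are easy to verify symbolically. The following is a minimal sketch (our own sanity check, using \texttt{sympy}; the symbol names are ours and not part of the original model):
\begin{verbatim}
# Minimal symbolic sanity check (ours) of the inventory dynamics above.
import sympy as sp

t, T, D, theta, W = sp.symbols('t T D theta W', positive=True)

I = D/theta*(sp.exp(theta*(T - t)) - 1)                # candidate inventory level I(t)
assert sp.simplify(sp.diff(I, t) + theta*I + D) == 0   # dI/dt + theta*I = -D holds
assert sp.simplify(I.subs(t, T)) == 0                  # boundary condition I(T) = 0

Q = sp.simplify(I.subs(t, 0))                          # order quantity Q above
T_a = sp.solve(sp.Eq(W, D/theta*(sp.exp(theta*t) - 1)), t)[0]
# substituting T_a back recovers W, consistent with the expression for T_a above
assert sp.simplify(D/theta*(sp.exp(theta*T_a) - 1) - W) == 0
print(Q, T_a)
\end{verbatim}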
They formulated the interest earned for this period as $\frac{sI_{e}D (2MT-N^2-T^2)}{2T }$; however, the correct interest earned for this case should be formulated as: \begin{align*} &=\frac{sI_{e} \left(DN+DT \right)\left(T -N\right)}{2T } +\frac{1}{T } \left\{sDT+\frac{sDI_{e} \left(T^{2} -N^{2} \right)}{2} \right\}\left(M -T \right)I_{e} \\ &=\frac{sDI_{e} \left(T^{2} -N^{2} \right)+\left(2sDT+sD\left(T^{2} -N^{2} \right)I_{e} \right)I_{e} \left(M -T \right)}{2T } \end{align*} \begin{figure} \caption{Demonstration of the annual interest earned for Case 2.} \label{fig:case2} \end{figure} \subsection{Implications of the corrected term on the total annual cost} Putting together all components of the retailer's total annual cost, it can be expressed as: \begin{quote} \begin{tabular}{ll} \textit{TC}(\textit{T}) = & "ordering cost + deterioration cost + stock-holding cost in RW \\ & + stock-holding cost in OW + interest paid -- interest earned" \end{tabular} \end{quote} Given the possible values of \textit{N}, \textit{M}, and \textit{T}, the retailer's total annual cost is estimated for the following configurations: \textbf{Configuration 1: $T_{a} <N < M < W^*$} Regarding this configuration, the total annual cost can be estimated as follows: \[TC\left(T \right)=\left\{\begin{array}{l} {TC_{1} \left(T \right);{\rm \; \; \; \; }if{\rm \; }0<T \le T_{a} } \\ {TC_{2} \left(T \right);{\rm \; \; \; \; }if{\rm \; }T_{a} <T \le N} \\ {TC_{3} \left(T \right);{\rm \; \; \; \; }if{\rm \; }N<T \le M } \\ {TC_{4} \left(T \right);{\rm \; \; \; \; }if{\rm \; }M<T \le W^* } \\ {TC_{5} \left(T \right);{\rm \; \; \; \; }if{\rm \; }W^* <T } \end{array}\right. \] In this configuration, the total cost for the third time window ($N < T \le M$) should be amended to: \begin{flalign} TC_{3} \left(T \right) &= \frac{A}{T } +\frac{D\left(k+c\theta \right)}{\theta ^{2} T } \left(e^{\theta T } -\theta T -1\right)-\frac{\left(k-h\right)}{\theta ^{2} T } \left\{D\left(e^{\theta T_{a} } -\theta T_{a} -1\right)+\theta ^{2} W\left(T -T_{a} \right)\right\} \\ &\hphantom{{}=} -\frac{1}{2T } \left\{sDI_{e} \left(T^{2} -N^{2} \right)+\left(2sDT +sD\left(T^{2} -N^{2} \right)I_{e} \right)I_{e} \left(M -T \right)\right\} & \nonumber \end{flalign} \textbf{Configuration 2: $N<T_{a} <M < W^*$} With respect to this configuration, the total annual cost can be estimated as: \[TC\left(T \right)=\left\{\begin{array}{l} {TC_{1} \left(T \right);{\rm \; \; \; \; }if{\rm \; }0<T \le N} \\ {TC_{6} \left(T \right);{\rm \; \; \; \; }if{\rm \; }N<T \le T_{a} } \\ {TC_{3} \left(T \right);{\rm \; \; \; \; }if{\rm \; }T_{a} <T \le M } \\ {TC_{4} \left(T \right);{\rm \; \; \; \; }if{\rm \; }M <T {\rm \; } \le W^* } \\ {TC_{5} \left(T \right);{\rm \; \; \; \; }if{\rm \; }W^* <T {\rm \; }} \end{array}\right.
\] where \begin{flalign} TC_{3} \left(T \right) &= \frac{A}{T } +\frac{D\left(k+c\theta \right)}{\theta ^{2} T } \left(e^{\theta T } -\theta T -1\right)-\frac{\left(k-h\right)}{\theta ^{2} T } \left\{D\left(e^{\theta T_{a} } -\theta T_{a} -1\right)+\theta ^{2} W\left(T -T_{a} \right)\right\} \\ &\hphantom{{}=} -\frac{1}{2T } \left\{sDI_{e} \left(T^{2} -N^{2} \right)+\left(2sDT +sD\left(T^{2} -N^{2} \right)I_{e} \right)I_{e} \left(M -T \right)\right\} & \nonumber \end{flalign} \subsection{Probing the convexity of the total annual cost function} In this section, we investigate the convexity of the total annual cost function $TC_3(T)$. \begin{theorem} \label{thrm:T1} $TC_{3} \left(T \right)$ is convex on $T >0$. \end{theorem} \begin{proof} Considering the first and second derivatives of the revised $TC_3(T)$: \begin{flalign*} TC_{3} \left(T \right) &=\frac{A}{T } +\frac{D\left(k+c\theta \right)}{\theta ^{2} T } \left(e^{\theta T } -\theta T -1\right)-\frac{\left(k-h\right)}{\theta ^{2} T } \left\{D\left(e^{\theta T_{a} } -\theta T_{a} -1\right)+\theta ^{2} W\left(T -T_{a} \right)\right\} & \\ &\hphantom{{}=}-\frac{1}{2T } \left\{DsI_{e} \left(T^{2} -N^{2} \right)+\left(2DT s+Ds\left(T^{2} -N^{2} \right)I_{e} \right)I_{e} \left(M -T \right)\right\} & \end{flalign*} \begin{flalign*} TC_{3}^{'} \left(T \right) &=\frac{1}{\left(\theta T \right)^{2} } \left[-\theta ^{2} A+D\left(k+c\theta \right)\left(\theta T e^{\theta T } -e^{\theta T } +1\right)-D\left(k-h\right)\left(\theta T_{a} e^{\theta T_{a} } -e^{\theta T_{a} } +1\right)\right] & \\ &\hphantom{{}=}-\frac{DsI_{e} }{2T^{2} } \left\{\left(N^{2} -T^{2} \right)+\left(1+T I_{e} \right)+\left(N^{2} +T^{2} \right)I_{e} \right\} & \end{flalign*} \begin{flalign*} TC_{3}^{''} \left(T \right) &=\frac{2A}{T^{3} } +\frac{D\left(k+c\theta \right)}{\theta ^{2} T^{3} } \left\{\theta ^{2} T^{2} e^{\theta T } -2\theta T e^{\theta T } +2e^{\theta T } -2\right\}+\frac{2D\left(k-h\right)}{\theta ^{2} T^{3} } \left(\theta T_{a} e^{\theta T_{a} } -e^{\theta T_{a} } +1\right) & \\ &\hphantom{{}=}+\frac{DsI_{e} }{2T^{3} } \left\{T^{2} +N^{2} +I_{e} \left(N^{2} -T^{2} +2T^{3} \right)\right\} & \end{flalign*} \begin{flalign*} TC_{3}^{'} \left(T \right) &= TC_{2}^{'} \left(T \right)-\frac{DsI_{e} }{2T^{2} } \left\{\left(N^{2} -T^{2} \right)+\left(1+T I_{e} \right)+\left(N^{2} +T^{2} \right)I_{e} \right\} & \end{flalign*} \begin{flalign} \label{eq:thrm3.5} TC_{3}^{''} \left(T \right) &=TC_{2}^{''} \left(T \right)+\frac{DsI_{e} }{2T^{3} } \left\{T^{2} +N^{2} +I_{e} \left(N^{2} -T^{2} +2T^{3} \right)\right\} & \end{flalign} \begin{lemma} \label{lemma:forT3} $T^{2} +N^{2} +I_{e} \left(N^{2} -T^{2} +2T^{3} \right)>0$ \end{lemma} \begin{proof} Let \begin{equation} \label{eq:lemma3.6} f(T )=T^{2} +N^{2} +I_{e} \left(N^{2} -T^{2} +2T^{3} \right) \end{equation} \[f^{'} (T )=2T -2I_{e} T +6I_{e} T^{2} \] \[f^{'} (T )=2T (1-I_{e} )+6I_{e} T^{2} \] Since $0<I_{e} <1$, we have $f^{'} (T )>0$ for $T>0$. This implies that $f(T )$ is an increasing function of $T$, and $f(0)=N^{2} (1+I_{e} )>0$. Hence, $f(T )>0$ when $T >0$. This completes the proof of Lemma \ref{lemma:forT3}.
\end{proof} \textit{Continuing the proof of Theorem \ref{thrm:T1}.} From Equations \ref{eq:thrm3.5} and \ref{eq:lemma3.6}: \begin{flalign*} TC_{3}^{''} \left(T \right) &=TC_{2}^{''} \left(T \right)+\frac{DsI_{e} }{2T^{3} } f(T ) & \end{flalign*} From Lemma \ref{lemma:forT3} and the convexity of $TC_{2} \left(T \right)$ (see \citet{liao2014}), it follows that $TC_{3}^{''} \left(T \right)>0$. Hence, $TC_{3} \left(T \right)$ is convex on $T >0$. This completes the proof. \end{proof} \section{Updated Decision Rule} From Theorem \ref{thrm:T1}, we get \begin{flalign*} TC_{3}^{'} \left(T \right) &=\frac{1}{\left(\theta T \right)^{2} } \left[-\theta ^{2} A+D\left(k+c\theta \right)\left(\theta T e^{\theta T } -e^{\theta T } +1\right)-D\left(k-h\right)\left(\theta T_{a} e^{\theta T_{a} } -e^{\theta T_{a} } +1\right)\right] & \\ &\hphantom{{}=}-\frac{DsI_{e} }{2T^{2} } \left\{\left(N^{2} -T^{2} \right)+\left(1+T I_{e} \right)+\left(N^{2} +T^{2} \right)I_{e} \right\} & \end{flalign*} After simplifying the above equation, we get \begin{flalign*} TC_{3}^{'} (T ) &=\frac{1}{(\theta T )^{2} } [-\theta ^{2} A+D(k+c\theta )(\theta T e^{\theta T } -e^{\theta T } +1)-D(k-h)(\theta T_{a} e^{\theta T_{a} } -e^{\theta T_{a} } +1) & \\ &\hphantom{{}=}-\frac{DsI_{e}\theta^{2} }{2} \{(N^{2} -T^{2} )+(1+T I_{e} )+(N^{2} +T^{2} )I_{e} \} ] & \end{flalign*} Now, setting $T=M$ in the above equation, we get \begin{flalign*} TC_{3}^{'} (M ) &=\frac{1}{(\theta M )^{2} } [-\theta ^{2} A+D(k+c\theta )(\theta M e^{\theta M } -e^{\theta M } +1)-D(k-h)(\theta T_{a} e^{\theta T_{a} } -e^{\theta T_{a} } +1) & \\ &\hphantom{{}=}-\frac{DsI_{e}\theta^{2} }{2} \{(N^{2} -M^{2} )+(1+M I_{e} )+(N^{2} +M^{2} )I_{e} \}] & \end{flalign*} \begin{flalign*} TC_{3}^{'} \left(M \right) &=\frac{\Delta_3}{\left(\theta M \right)^{2} } & \end{flalign*} where \begin{flalign*} \Delta_3 &=[-\theta ^{2} A+D(k+c\theta )(\theta M e^{\theta M } -e^{\theta M } +1)-D(k-h)(\theta T_{a} e^{\theta T_{a} } -e^{\theta T_{a} } +1) & \\ &\hphantom{{}=}-\frac{DsI_{e}\theta^{2} }{2} \{(N^{2} -M^{2} )+(1+M I_{e} )+(N^{2} +M^{2} )I_{e} \}] & \end{flalign*} \section{Implications on the Final Result} In Tables 1 and 2 of Liao's study, wherever the optimal value of $T$ is obtained from $\mathrm{TC_3}$, the optimal value of $T$, the corresponding decision rule, and the value of the total cost need to be updated. To illustrate this matter, we have re-calculated the values of the optimal $T$ and the corresponding total cost for row 3 of Table 1 in Liao's study (Table \ref{tab:implications}). \textbf{Parameters of the experiment regarding row 3 of Table 1} \begin{tabular}{lllp{0.8in}llll} $\theta$&=&0.00001&& $h$&=&1& \\ $D$&=&9000000&& $\mathrm{I_e}$&=&0.000005& \\ $A$&=&1224.04585&& $\mathrm{I_p}$&=&0.15& \\ $c$&=&1.9999&& $N$&=&0.0161& \\ $s$&=&2&& $M$&=&0.0165& \\ $k$&=&1.1&& $W$&=&144900& \\ \end{tabular} \begin{table}[h] \centering \caption{Implications of the updated $\mathrm{TC_3}$ on the final result.} \label{tab:implications} \scalebox{0.8}{ \begin{tabular}{@{}lcccccccccccl@{}} \toprule & Table & Row No & Dec.
Rule & $T_a$ & $W^*$ & $\Delta_1$ & $\Delta_2$ & $\Delta_3$ & $\Delta_4$ & $\Delta_5$ & $T^*$ & TVC($T^*$) \\ \midrule Liao & 1 & 3 & (B1) & 0.0161 & 0.0165 & \textless{}0 & \textless{}0 & \textgreater{}0 & \textless{}0 & \textless{}0 & $\mathrm{T^*_3}$=0.0161 & 148020 \\ Updated & & & (B2) & & & & & $\sim$0 & & \textgreater{}0 & $\mathrm{T^*_3}$=1.000129 & 4936419.16 \\ \bottomrule \end{tabular} } \end{table} \section{Conclusion} In this technical note, we reviewed \citeauthor{liao2014}'s \citeyearpar{liao2014} model, in which an inventory model was developed by considering that the storage capacity is limited and that there are two levels of trade credit. The unavoidable complexity emanating from the difference between the permissible delay periods governing the interest earned and the interest charged requires the decision maker to seek an optimal inventory model. We identified some minor issues and one major issue regarding the formulation of $\mathrm{TC_3}$, as the original study did not consider the interest earned from $N$ to $T$. The errors have been corrected and the impact of the updated terms on the computational results has been presented. \section*{\refname} \footnotesize{ } \end{document}
\begin{document} \title[Tilting modules, polytopes, and Catalan numbers]{Tilting Modules over the Path Algebra of Type $\mathbb{A}$, Polytopes, and Catalan Numbers} \author{Lutz Hille} \address{Mathematisches Institut, Universit\"at M\"unster, Einsteinstrasse 62, D-48149 M\"unster, Germany} \email{[email protected]} \dedicatory{Dedicated to Helmut Strade on the occasion of his 70th birthday} \subjclass[2010]{Primary 16G20, 16G99, 05A19, 05A10, 05A99; Secondary 52B20, 52B99, 17B99} \keywords{Quiver, path algebra, Dynkin diagram, root system, convex hull, tilting module, support tilting module, $2$-support tilting module, polytope, volume, Catalan number, tilting sequence, rooted tree, Dyck path} \maketitle \date{} \begin{abstract} It is well known that the number of tilting modules over a path algebra of type $\mathbb{A}_n$ coincides with the Catalan number $C_n$. Moreover, the number of support tilting modules of type $\mathbb{A}_n$ is the Catalan number $C_{n+1}$. We show that the convex hull $C(\mathbb{A}_n)$ of all roots of a root system of type $\mathbb{A}_n$ is a polytope with integral volume $(n+1)C_n={2n\choose n}$. Moreover, we associate to the set of tilting modules and to the set of support tilting modules certain polytopes and show that their volumes coincide with the number of those modules, respectively. Finally, we show that these polytopes can be defined just using the root system and relate their volumes so that we can derive the above results in a new way. \end{abstract} \section{Introduction} We consider a quiver $Q$ of type $\mathbb{A}$ with $n$ vertices; more details can be found in Section 2. An indecomposable representation of $Q$ can be identified with an interval $[i,j]$ representing the support of the dimension vector $(0,\ldots,0,1,\ldots,1,0,\ldots,0)$. We associate to $Q$ two series of polytopes, the $C$-- and the $P$--series, which both come in three versions. The first series of polytopes $C^+(Q)\subset C^{\mathrm{clus}}(Q)\subset C(Q)$ consists just of the convex hulls of certain roots in the root system of type $\mathbb{A}$. Thus these polytopes are defined independently of the orientation of $Q$. All modules in this note should be understood as modules over the path algebra of type $\mathbb{A}$. A tilting module is a particular module satisfying certain genericity properties. We always identify modules over the path algebra with representations of the corresponding quiver. The second series of polytopes $P^+(Q)\subset P^{\mathrm{clus}}(Q)\subset P(Q)$ consists of the union of certain simplices $\sigma_T$ associated to tilting modules $T$ or some generalizations of these. Note that the first series of polytopes only depends on the underlying graph of $Q$, whereas the second one depends on a chosen orientation of the quiver. However, we will show that the polytopes in the second series also only depend on the underlying graph of $Q$. Thus we will write $C(Q)$ or $C(\mathbb{A}_n)$ interchangeably, but we have to write $P(Q)$ until we have proven that the latter definition is independent of the orientation of the quiver. The principal goal of this paper is to compare both types of polytopes. In fact, we show that they coincide (see Theorem \ref{Thmequal}). Moreover, we obtain the number of certain versions of tilting modules as the volume of the corresponding polytope, where we use a certain normalization of the euclidean volume. A second aim of this paper is to further simplify the counting by passing to tilting sequences.
This approach is explained in Section 5. Here we consider an additional order on the indecomposable direct summands, and then the counting gets even easier, namely, we just obtain a bijection between the tilting sequences and a certain symmetric group (see Theorem \ref{ThmTiltSequ}). In fact, in this paper we do not use any representation theory other than some geometric interpretations of representation-theoretic notions. For any dimension vector $d$ there is exactly one rigid (or generic) module that is dense in the corresponding representation space (see Section \ref{sectArep} or also \cite{BaurHille} for more details), and we will work with this rigid module. Note that we use the integral volume in this paper (see also Section 6 for more explanations), that is, the volume $\mathrm{vol}\,\,\Delta$ of any simplex $\Delta$ generated by an integral basis is $1$. Thus our volume is just $n!$ times the usual euclidean volume. The advantage of this definition is that the volume of any lattice polytope is an integer. To be precise we define certain positive numbers (not depending on the orientation by Theorem \ref{Thmequal}) $$ t^+(Q)=\mathrm{vol}\, P^+(Q)\,,\quad t^{\mathrm{clus}}(Q)=\mathrm{vol}\, P^{\mathrm{clus}}(Q)\,,\quad t(Q)=\mathrm{vol}\, P(Q)\,. $$ In order to get an interpretation of these numbers, we need to consider several versions of tilting modules. Note that a tilting module $T=\oplus_{i=1}^nT(i)$ is a direct sum of $n$ pairwise non-isomorphic indecomposable modules $T(i)$ satisfying $\mathrm{Ext}^1(T,T)=0$. For an indecomposable representation $[i,j]$ the support is just the interval $[i,j]$, for a direct sum of such modules the support is just the union of the supports of the indecomposable direct summands. Thus, the support of a module (or a representation) is the support of its dimension vector (see Section 2 for details). A support (or cluster) tilting module is a module $T$ that is a tilting module if restricted to its support. A $2$--support tilting module is a module $T$ together with a decomposition $T=T^+\oplus T^-$ such that both $T^+$ and $T^-$ are support tilting modules and the supports are a disjoint union of the vertices of the quiver $Q$ of type $\mathbb{A}_n$. Thus, any 2-support tilting module $T$ defines a subset $I$ of the set $Q_0$ of vertices of $Q$ such that $I$ is the support of $T^+$ and $Q_0\setminus I$ is the support of $T^-$. We define $\mathcal{T}^+(Q)$, $\mathcal{T}^{\mathrm{clus}}(Q)$, and $\mathcal{T}(Q)$, respectively, as the set (of isomorphism classes) of tilting modules, the set (of isomorphism classes) of support tilting modules, and the set (of isomorphism classes) of 2-support tilting modules. By using the results in \cite{VolHille} we obtain the following interpretation of the volumes of the polytopes associated to tilting modules and their generalizations. \begin{Thm}\label{Thmnumbervol} $$ t^+(Q)=\sharp\mathcal{T}^+(Q)\,,\quad t^{\mathrm{clus}}(Q)=\sharp\mathcal{T}^{\mathrm{clus}}(Q)\,,\quad t(Q)=\sharp\mathcal{T}(Q)\,. $$ \end{Thm} Note that the classification of tilting modules $\mathcal{T}^+(Q)$ over quivers $Q$ of type $\mathbb{A}_n$ is well known. A description using trees for the directed orientation can be found in \cite{VolHille}. This leads to a recursion formula for $t^+(Q)$. In particular, the recursion formula in Theorem \ref{Thmcompare} is the same as the recursion formula for the number of $3$--regular trees with $n+1$ leaves and one root (see Section \ref{sectDyck}). 
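The counts just mentioned are also easy to check by machine for small $n$. The following is a small enumeration sketch (ours, not from the paper) that computes $t^+(\mathbb{A}_n)$ by the Catalan recursion and obtains $t^{\mathrm{clus}}(\mathbb{A}_n)$ and $t(\mathbb{A}_n)$ by summing over supports; it presumes that a (support) tilting module on a disconnected support is simply a product over the connected segments of type $\mathbb{A}$, and it reproduces the values $C_n$, $C_{n+1}$, and ${2n\choose n}$ stated in the introduction.
\begin{verbatim}
# Small enumeration (ours) cross-checking the counts for type A_n:
# tilting modules (C_n), support tilting modules (C_{n+1}),
# 2-support tilting modules (binomial(2n, n)).
from itertools import combinations
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def t_plus(n):
    """Number of tilting modules of type A_n (Catalan number C_n)."""
    if n <= 1:
        return 1
    return sum(t_plus(i) * t_plus(n - 1 - i) for i in range(n))

def segments(subset):
    """Lengths of the maximal runs of consecutive vertices in the subset."""
    runs, prev, length = [], None, 0
    for v in sorted(subset):
        if prev is not None and v == prev + 1:
            length += 1
        else:
            if length:
                runs.append(length)
            length = 1
        prev = v
    if length:
        runs.append(length)
    return runs

def t_restricted(subset):
    """t^+ of A_n restricted to a vertex subset: product over its segments."""
    prod = 1
    for seg in segments(subset):
        prod *= t_plus(seg)
    return prod

for n in range(1, 8):
    vertices = range(1, n + 1)
    subsets = [frozenset(c) for r in range(n + 1)
               for c in combinations(vertices, r)]
    t_clus = sum(t_restricted(I) for I in subsets)
    t_two = sum(t_restricted(I) * t_restricted(frozenset(vertices) - I)
                for I in subsets)
    assert t_plus(n) == comb(2 * n, n) // (n + 1)        # C_n
    assert t_clus == comb(2 * n + 2, n + 1) // (n + 2)   # C_{n+1}
    assert t_two == comb(2 * n, n)                       # (n+1) C_n
\end{verbatim}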
The key point for the correspondence in type $\mathbb{A}$ is Theorem \ref{Thmequal} that does not generalize to the other Dynkin quivers apart from type $\mathbb{C}$ and is certainly wrong for euclidean and wild quivers. Let us briefly comment on the non-simply laced types. There is the notion of a path algebra, where we use two fields, one being a field extension of degree two (respectively, three for Dynkin type $\mathbb{G}_2$) of the other one. This is certainly more technical and will be explained in our forthcoming paper \cite{PolDynkHille} in detail. We also note that most of the combinatorics related to the Catalan numbers is already known for type $\mathbb{A}$, but in all other cases it is unknown. The next theorem shows that for type $\mathbb{A}$ the polytopes in the $P$--series are independent of the chosen orientation. Thus we can simply write $P(\mathbb{A}_n)$ instead of $P(Q)$. \begin{Thm}\label{Thmequal} $$ C(Q)=P(Q)\,,\quad C^{\mathrm{clus}}(Q)=P^{\mathrm{clus}}(Q)\,,\quad C^{+}(Q)=P^{+}(Q)\,. $$ \end{Thm} By using decompositions of the polytopes we get several recursion formulas relating the three different polytopes. Note that $t(\mathbb{A}_1)=t^{\mathrm{clus}}(\mathbb{A}_1)=2t^+(\mathbb{A}_1)=2$. Consequently, already these formulas determine the numbers $t(\mathbb{A}_n)$, $t^{\mathrm{clus}}(\mathbb{A}_n)$, and $t^+(\mathbb{A}_n)$ uniquely, by induction. However, we can also compute $t(\mathbb{A}_n)={2n\choose n}=(2n)!/n!n!$ directly, which gives even more ways to determine these numbers. The next result is a standard decomposition that holds for any quiver $Q$. In Section \ref{sectArep} we will present two examples illustrating these formulas. \begin{Thm}\label{Thmcompare} $$ t(\mathbb{A}_n)=\sum_{I\subseteq Q_0} t^+(\mathbb{A}_n|_I) t^+(\mathbb{A}_n|_{Q_0\setminus I})\,, $$ $$ t^{\mathrm{clus}}(\mathbb{A}_n)=\sum_{i\in Q_0}t^+(\mathbb{A}_n|_{Q_0\setminus\{i\}})=\sum_{i=0}^{n-1}t^+(\mathbb{A}_i)t^+(\mathbb{A}_{n-1-i})\,. $$ \end{Thm} For the $C$--series of polytopes we can prove the following formulas directly by determining the facets and their volumes, where the polytope $C(\mathbb{A}_n)$ is the simplest one to consider. \begin{Thm}\label{Thmcompute} $$ \mathrm{vol}\, C(\mathbb{A}_n)={2n\choose n}=(n+1)C_n\,,\quad\mathrm{vol}\, C^{\mathrm{clus}}(\mathbb{A}_n)={2n+2\choose n+1}/(n+2)=C_{n+1}\,, $$ $$\mathrm{vol}\, C^{+}(\mathbb{A}_n)={2n\choose n}/(n+1)=C_n\,. $$ \end{Thm} In type $\mathbb{A}$ we have one dimension vector $(1,1,\ldots,1)$ corresponding to the interval $[1,n]$. This provides us with yet another recursion formula which is easy to prove for the directed orientation. \begin{Thm}\label{Thmcompute1} $$ t^+(\mathbb{A}_n)=\sum_{i=1}^n t^+(\mathbb{A}_n|_{Q_0\setminus\{ i\}})\,. $$ \end{Thm} Since there are several ways to prove these theorems, we give a short outline of their proofs. One way to prove the results uses induction over the facets and another way uses the fact that a quiver of type $\mathbb{A}$ has precisely one sincere root. However, by using a purely combinatorial approach, we obtain a different proof by simply using counting arguments as explained in Stanley's book on enumerative combinatorics \cite{Stanley}. We also like to mention that the same idea works for arbitrary Dynkin quivers. However, in this case the polytope $P(Q)$ is not convex, and thus does not coincide with $C(Q)$, which is the convex hull of all the roots of the root system. 
This makes the above formulas more complicated, and for the details we refer the reader to our forthcoming paper \cite{PolDynkHille}. We would also like to point out that the number of tilting modules for arbitrary Dynkin quivers has recently been computed in \cite{Ringeletal} by completely different methods. Moreover, the polytope $P(Q)$ (and its variations) can be defined even for arbitrary infinite quivers $Q$. This way we would get a simplicial complex that is a triangulation of a certain quadric in the real Grothendieck group $K_0(Q)_{\mathbb{R}}$ of the category of representations of $Q$. The outline of the paper is as follows. After the introduction we start with some basic representation theory in Section 2. Here we only recall a few facts that are well known. Details and further references can be found in \cite{BaurHille}. In Section 3 we prove the first three theorems and in Section 4 we proceed with the last two theorems. In Section 5 we modify the problem slightly. Instead of tilting modules we will consider tilting (or full strongly exceptional) sequences. Finally, in the last section we will give some combinatorial interpretation of the results using Stanley's book \cite{Stanley}. \noindent {\bf Acknowledgments.} This work started during a stay of the author in Bielefeld at the SFB 701 'Spectral Structures and Topological Methods in Mathematics'. He would like to thank Henning Krause for the invitation and the stimulating working conditions. Moreover, this work was supported by SPP 1388 'Representation Theory'. The author is also indebted to Friedrich Knop for several hints concerning the combinatorics of the Catalan numbers and to Claus Michael Ringel for discussing further combinatorial aspects of the Catalan numbers. Finally, he is grateful to Karin Baur, J\"org Feldvoss, and the referee for many helpful comments on various drafts of this paper. \section{Representations of $\mathbb{A}_n$}\label{sectArep} In this section we always consider a quiver $Q$ of type $\mathbb{A}_n$, that is, a Dynkin diagram of type $\mathbb{A}_n$, where we choose an orientation of any edge between the vertices $i$ and $ i + 1$. For every oriented edge $\alpha\in Q_1$, called arrow, we define its starting point to be $s(\alpha)$ and its terminal point to be $t(\alpha)$. A representation of $Q$ consists of $n$ finite dimensional vector spaces $V_i$ over a fixed field $k$, where $i=1,\ldots,n$, with linear maps $V(\alpha):V_{s(\alpha)}\longrightarrow V_{t(\alpha)}$. Note that either $s(\alpha)+1=t(\alpha)$ or $s(\alpha)-1=t(\alpha)$. The dimension vector $\mathrm{\underline{dim}\,} V$ of the representation $V=\{V_i\}$ is defined as $\mathrm{\underline{dim}\,} V=(\dim V_1,\ldots,\dim V_n)$. A {\sl sincere root} is a dimension vector with $\dim V_i\not= 0$ for all vertices $i$. The set of all representations of a quiver with the usual homomorphisms forms an abelian category of global dimension one that has enough projective and also enough injective representations. Note that any representation $V$ has a short projective resolution $0\longrightarrow P^1\longrightarrow P^0\longrightarrow V\longrightarrow 0$ and $\mathrm{Ext}^1(V,W)$ is defined as the cokernel of the induced map $\mathrm{Hom}\,(P^0,W)\longrightarrow\mathrm{Hom}\,(P^1,W)$. The kernel of this map is the set of all homomorphisms $\mathrm{Hom}\,(V,W)$. 
A representation $V$ is {\sl simple} if it has no proper subrepresentations, it is {\sl indecomposable} if it has no non-trivial decomposition into a direct sum of two representations, and it is {\sl rigid} if $\mathrm{Ext}^1(V,V)=0$. The latter condition has also a natural interpretation in the space $\mathcal{R}(Q;d)$ of all representations of dimension vector $d$ consisting just of all possible linear maps $$ \mathcal{R}(Q;d)=\bigoplus_{\alpha\in Q_1}\mathrm{Hom}\,(k^{d(s(\alpha))},k^{d(t(\alpha))})\,. $$ The group $G(d)=\prod\mathrm{GL}(d_i)$ acts on $\mathcal{R}(Q;d)$ via base change, and $V$ is an element of the dense orbit over an algebraic closure precisely when $\mathrm{Ext}^1(V,V)=0$. Since $\mathcal{R}(Q; d)$ is irreducible (as an affine space), there is at most one rigid representation $M(d)$ of dimension vector $d$. Conversely, when $Q$ is a Dynkin quiver, in particular, when $Q$ is of type $\mathbb{A}$, there are only finitely many orbits. Consequently, for any dimension vector $d$ up to isomorphism there is precisely one representation that has dimension vector $d$ and is rigid. Using this fact, we can define an equivalence relation on the possible dimension vectors as follows. We say $d$ and $d'$ are {\sl equivalent} provided the indecomposable direct summands of $M(d)$ and $M(d')$ coincide (up to positive multiplicity). Any module $M(d)$ can have at most $n$ pairwise non-isomorphic indecomposable direct summands. Let us assume $M(d)$ has just $n-1$ such indecomposable direct summands (thus it is almost complete). Then there are at most two complements $M_1$ and $M_2$, meaning that $M(d) \oplus M_1$ is rigid, $M(d)\oplus M_2$ is rigid, and neither $M_1$ nor $M_2$ is already a direct summand of $M(d)$. Then it is known that (up to renumbering) there is an exact sequence $M_1\longrightarrow M\longrightarrow M_2$, with $M$ consisting of direct summands of $M(d)$. Such a sequence is called {\sl exchange sequence}. In type $\mathbb{A}$, $M$ has at most two indecomposable summands. These sequences play a crucial role for the recursive construction of all tilting modules. In fact, for any equivalence class of dimension vectors $d$ there exists a unique $d'$ in this class so that $M(d')$ has no multiple indecomposable direct summands. Such a module is called {\sl basic}. The maximal ones among those modules are the tilting modules that contribute to the volume of the polytope $P(Q)$. All the other ones correspond to certain faces. We are dealing with the maximal ones, all others contribute with volume $0$. The principal aim of this paper is to determine the number of maximal equivalence classes $M(d)$ by using polytopes together with their basic representations. The situation becomes quite elementary if $Q$ is a quiver of type $\mathbb{A}_n$ with its directed orientation. The details can already be found in \cite{VolHille}. If $Q$ is of type $\mathbb{A}_n$ with another orientation, the situation is slightly different, however the approach in \cite{BaurHille} can be modified as follows. Instead of diagrams with all connections on the top, we use for arrows from $i+1$ to $i$ connections at the bottom of the diagram. This even gives a constructive way to compute $M(d)$ for any $d$ and any orientation of the quiver. Finally, we define the cones $\sigma$ associated to tilting modules, support tilting modules, and $2$-support tilting modules, respectively. 
We start with any module $M$ and its decomposition into indecomposable direct summands $M=\oplus M(i)^{a(i)}$, where $a(i)$ is the multiplicity of the indecomposable direct summand $M(i)$ in $M$. For such an $M$ we define $\sigma_M=\mathrm{conv}\, \{\mathrm{\underline{dim}\,} M(i), 0 \mid i \in I\}$ to be the convex hull of zero and the dimension vectors of the indecomposable direct summands. Note that the multiplicities $a(i)>0$ don't play a role in the definition. Also note that the dimension vectors of the indecomposable direct summands of a tilting module $T$ form an integral basis of $\mathbb{Z}^n$, and thus $\sigma_T$ is a simplex which has volume $1$ by our definition of the volume. If $T$ is a support tilting module, we add the negative standard basis of the complement of the support of $T$, thus we define $\sigma_T=\mathrm{conv}\,\{\mathrm{\underline{dim}\,} T(i),-e_j, 0\mid i\in I,j\in J\}$, where $T=\oplus T(i)^{a(i)}$ and $J$ is the complement of the support of $T$. Finally, for a $2$-support tilting module $T$ we decompose both $T^+$ and $T^-$ as $T^+= \oplus_{i\in I}T(i)^{a(i)}$, $T^-=\oplus_{j\in J}T(j)^{a(j)}$ and define $\sigma_T=\mathrm{conv}\,\{\mathrm{\underline{dim}\,} T(i),-\mathrm{\underline{dim}\,} T(j),0\mid i\in I, j\in J\}$. We illustrate the construction of the simplices and the two series of polytopes by two examples, namely, quivers of types $\mathbb{A}_2$ and $\mathbb{A}_3$. {\sc Example 1.} Let $Q$ be a quiver of type $\mathbb{A}_2$. Then there is just one orientation up to a permutation of the two vertices. The dimension vectors of the indecomposable representations are $(1,0)$, $(1,1)$, and $(0,1)$. Thus for the roots we get $$ \Phi^+=\{(1,0),(1,1),(0,1)\}\,,\qquad\Phi^{\mathrm{clus}}=\{(1,0),(1,1),(0,1),(-1,0),(0,-1)\}\,,\mbox{ and } $$ $$ \Phi=\{(1,0),(1,1),(0,1),(-1,0),(-1,-1),(0,-1)\}\,. $$ The convex hull $C^+(Q)$ of $\Phi^+$ has volume $2$, the convex hull $C^{\mathrm{clus}}(Q)$ of $\Phi^{\mathrm{clus}}$ is a pentagon of volume $5$, and the convex hull $C(Q)$ of $\Phi$ is a hexagon of volume $6$. The following pairs of roots, together with zero, form a simplex in $P^+(Q)$: $(1,0),(1,1)$, and $(1,1),(0,1) $. In the polytope $P^{\mathrm{clus}}(Q)$ we have three additional simplices defined by the pairs $(0,1),(-1,0)$, $(-1,0),(0,-1)$, and $(0,-1),(1,0)$. The second of these simplices is replaced in $P(Q)$ by two simplices defined by $(-1,0),(-1,-1)$ and $(-1,-1), (0,-1)$. {\sc Example 2.} The case $\mathbb{A}_3$ is more complicated, since we have two, essentially different, orientations in the quiver $Q$. We first describe the parts independent of the orientation, a difference only occurs for the polytopes of the $P$--series. For the roots we obtain $$ \Phi^+=\{(1,0,0),(0,1,0),(0,0,1),(1,1,0),(0,1,1),(1,1,1)\}\,, $$ $$ \Phi^{\mathrm{clus}}=\Phi^+\cup\{(-1,0,0),(0,-1,0),(0,0,-1)\}\,,\quad\Phi=\Phi^+\cup-\Phi^+\,. $$ The corresponding convex hulls have volume $5$ for $C^+(Q)$, volume $14$ for $C^{\mathrm{clus}}(Q)$, and volume $20$ for $C(Q)$. This can easily be seen from the decomposition of the $P$--series. We have to chose an orientation for this and describe the triples defining a simplex for the directed orientation first: $$ \begin{array}{l} \{(1,0,0),(1,1,0),(1,1,1)\}\,,\{(0,1,0),(1,1,0),(1,1,1)\}\,,\{(1,0,0),(1,1,1),(0,0,1)\}\,,\\ \{(0,1,1),(0,1,0),(1,1,1)\}\,,\{(1,1,1),(0,1,1),(0,0,1)\}\,. \end{array} $$ For the other orientation we get two simplices replaced by two others, however, the union of both pairs is the same. 
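The volumes in Example 1 (and, looking ahead, in Example 2 below) can also be checked numerically. The following sketch (ours, not from the paper) computes the normalized volume, i.e., the euclidean volume of the convex hull times $n!$; for the $C^+$--polytope we include the origin, as in the definition of the simplices $\sigma_T$.
\begin{verbatim}
# Numerical cross-check (ours) of the volumes in Examples 1 and 2:
# normalized volume = euclidean volume of the convex hull times n!.
from math import factorial
import numpy as np
from scipy.spatial import ConvexHull

def nvol(points):
    pts = np.array(points, dtype=float)
    return ConvexHull(pts).volume * factorial(pts.shape[1])

pos2 = [(1, 0), (1, 1), (0, 1)]                           # positive roots of A_2
neg2 = [tuple(-x for x in r) for r in pos2]
print(nvol(pos2 + [(0, 0)]),                              # C^+(A_2)    -> 2
      nvol(pos2 + [(-1, 0), (0, -1)]),                    # C^clus(A_2) -> 5
      nvol(pos2 + neg2))                                  # C(A_2)      -> 6

pos3 = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
        (1, 1, 0), (0, 1, 1), (1, 1, 1)]                  # positive roots of A_3
neg3 = [tuple(-x for x in r) for r in pos3]
print(nvol(pos3 + [(0, 0, 0)]),                           # C^+(A_3)    -> 5
      nvol(pos3 + [(-1, 0, 0), (0, -1, 0), (0, 0, -1)]),  # C^clus(A_3) -> 14
      nvol(pos3 + neg3))                                  # C(A_3)      -> 20
\end{verbatim}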
For this we replace the second and the fourth one by $$ \{(1,1,0),(0,1,0),(0,1,1)\}\mbox{ and }\{(1,1,0),(1,1,1),(0,1,1)\}\,. $$ For the polytope $P^{\mathrm{clus}}(Q)$ we need to add for any pair of roots, obtained by deleting $(1,1,1)$, a simplex, and we also need to add for any simple root one simplex. Thus we get for example $\{(1,0,0), (1,1,0),(0,0,-1)\}$ for the first simplex in $P^+(Q)$, and we get $(1,0,0),(0,-1,0),(0,0,-1)$ for the simple root $(1,0,0)$. Finally, we add $(-1,0,0),(0,-1,0),(0,0,-1)$ and obtain for the volume $5+5+ 3+1=14$. In a similar way we get for the volume of $P(Q)$ the sum $5+5+5+5=20$. Alternatively, using the computation of the volume with counting the facets and their volume for $C(Q)$, the volume of $P(Q)$ is $2\times 6+8=20$. This comes from the fact that $C(Q)$ has $6$ squares and $8$ triangles as facets. \section{Proofs of Theorems \ref{Thmnumbervol}--\ref{Thmcompare}} We first prove Theorem \ref{Thmnumbervol}: It is well known that for any tilting module $T=\oplus T(i)$ the dimension vectors $\mathrm{\underline{dim}\,} T(i)$ form a $\mathbb{Z}$--basis of $\mathbb{Z}^n$. Thus $\mathrm{vol}\,\sigma_T=1$. Thus we have proven the following lemma. \begin{Lemma} For any tilting module, any support tilting module, and any $2$--support tilting module $T$ we have $\mathrm{vol}\,\sigma_T=1$. \end{Lemma} It is therefore sufficient to show that $\mathrm{vol}\,(\sigma_T\cap\sigma_{T'})=0$ for any two different tilting modules $T$ and $T'$. This follows from the definition of the volume in \cite{VolHille}. Crucial for our computation is now Theorem \ref{Thmequal}, which is not true for arbitrary Dynkin quivers. Note that any non-trivial exchange sequence $$ 0\longrightarrow T(i)\longrightarrow\oplus T(j)\longrightarrow T(i)'\longrightarrow 0 $$ has at most two middle terms. Interpreting this as the relation $\mathrm{\underline{dim}\,} T(i)+\mathrm{\underline{dim}\,} T(i)'=\sum\mathrm{\underline{dim}\,} T(j)$, we see that $P^+(Q)$ is convex, strictly convex at the common facet for one middle term, and flat for two middle terms. Consequently, $P^+(Q)$ is convex precisely when there are at most two middle terms for any exchange relation. Note that such a relation corresponds to two simplices $\sigma_T$ and $\sigma_{T'}$ with a common facet. If $P^+(Q)$ is convex, then so are $P(Q)^{\mathrm{clus}}$ and $P(Q)$. Since $P(Q)$ is convex and has the roots as its vertices, it must coincide with $C(Q)$. The same argument works for $P^+(Q)$ and $P^{\mathrm{clus}}(Q)$. The proof of Theorem \ref{Thmcompare} follows from the decomposition of the polytope $P^{\mathrm{clus}} (Q)$ with respect to the possible quadrants. Any subset $I$ of the vertices $Q_0$ of $Q$ defines the quadrant consisting of non-negative entries, whenever the index is not in $I$, and non-positive otherwise. The volume of $P^{\mathrm{clus}}(\mathbb{A}_n)$ intersected with this quadrant has the same volume as $P^+(\mathbb{A}_n)$ intersected with the corresponding face. Thus its volume coincides with the volume of $P^+(\mathbb{A}_n|_{Q_0 \setminus I})$. A similar argument holds for $P(\mathbb{A}_n)$. Here we need to determine again the volume in the quadrant defined by $I$. The volume in this case coincides with the product of the volume of $P^+(\mathbb{A}_n|_{Q_0\setminus I})$ and the volume of $P^+(\mathbb{A}_n|_I)$. Theorems \ref{Thmcompute} and \ref{Thmcompute1} can also be proven by using the combinatorial interpretation in Section 6. Alternatively, one could have used induction over the facets. 
Since these proofs need some detailed computations, we defer them to the next section. \section{Proofs of Theorems \ref{Thmcompute} and \ref{Thmcompute1}} We first recall the recursion formula $t^+(\mathbb{A}_n)=\sum_{i=0}^{n-1}t^+(\mathbb{A}_i)t^+(\mathbb{A}_{n-1-i})$. From this we get one of the standard recursions of the Catalan numbers $C_n$, since $t^+(\mathbb{A}_0)=t^+(\mathbb{A}_1)=1$. This is the same recursion as for the number of trees in $B_n$, which will be considered in the last section. Further details can be found in Stanley's book \cite{Stanley}. We start with the analogous formula for the polytope $P^+(\mathbb{A}_n)$ for the directed orientation. \begin{Lemma} $\mathrm{vol}\, P^+(\mathbb{A}_n)=\sum\limits_{i=1}^n\mathrm{vol}\, P^+(\mathbb{A}_n|_{Q_0\setminus\{i\}})$. \end{Lemma} This simply uses the fact that there exists precisely one sincere root for $\mathbb{A}_n$. Thus, the volume of $P^+(\mathbb{A}_n)$ is just the sum of the volumes of the facets of $P^+(\mathbb{A}_n)$ not containing $0$. This is obvious for the quiver of type $\mathbb{A}_n$ with its directed orientation, since every tilting module contains the projective injective module (having the sincere root as dimension vector) as a direct summand. Since $P(\mathbb{A}_n) = C(\mathbb{A}_n)$ (independent of the orientation of the arrows) we get the same formula for the volume. Taking away the projective injective direct summand, we obtain a partial tilting module with support at $\mathbb{A}_n|_{Q_0\setminus\{i\}}$ for precisely one vertex $i$ of the quiver $\mathbb{A}_n$. Such a partial tilting module corresponds to a facet (defined by $d_i=0$). Thus, each tilting module corresponds to precisely one facet in $P^+(\mathbb{A}_n)$ not containing $0$. Moreover, the volume of $\sigma_T$ and the volume of each of its faces (in particular, of the facet from which we delete the sincere root) are always $1$. Hence we obtain the formula in the lemma above, and the last formula in Theorem \ref{Thmcompute} follows directly. In the next step we compute the volumes of all the polytopes $P^*(\mathbb{A}_n)=C^*(\mathbb{A}_n)$. This can be done in several ways. We give a combinatorial approach using certain paths from $(0,0)$ to $(2n,0)$ later. First, we use the volumes of the facets of $C(\mathbb{A}_n)$. \begin{Lemma} $\mathrm{vol}\, C(\mathbb{A}_n)=\sum\limits_{i=1}^n{n-1\choose i-1}{n+1\choose i}={2n\choose n}$. \end{Lemma} We start by explaining the first equality. Each hyperplane $d_i=1$ contains precisely one facet $F_i$, and each facet is in the orbit under the symmetric group $S_{n+1}$ (that is, the Weyl group of the root system $\mathbb{A}_n$) of precisely one such facet. Thus we need to compute the orbit of $F_i$ and the volume of the facet $F_i$. The volume is just ${n-1\choose i-1}$ and the orbit has exactly ${n+1\choose i}$ elements. This shows the first equality. The second equality follows by repeatedly applying the Pascal recursion for binomial coefficients (it is an instance of the Vandermonde identity): $$ {2n\choose n}={2n-1\choose n-1}+{2n-1\choose n}={2n-2\choose n-2}+2{2n-2\choose n-1}+ {2n-2\choose n}=\ldots $$ This proves the first formula in Theorem \ref{Thmcompute}. The formula in Theorem \ref{Thmcompute1}, as well as the second formula in Theorem \ref{Thmcompute}, is just the Catalan recursion. This finishes the proofs of Theorems \ref{Thmcompute} and \ref{Thmcompute1}. In the next section we will need another formula, which will be used to determine the number of tilting (or full strongly exceptional) sequences and to relate them to our combinatorial description.
Note that each of the $n+1$ summands on the right hand side of the following lemma equals $n!$, so the right hand side is $n+1$ times $n!$. \begin{Lemma}\label{Loverline} $(n+1)!=0!n!+1!(n-1)!n+2!(n-2)!\frac{n(n-1)}{2}+\ldots=\sum\limits_{k=0}^{n}{n\choose k}k!(n-k)!$ \end{Lemma} \section{Tilting sequences} If we replace a tilting module by an ordered tuple of modules compatible with non-vanishing homomorphisms, we get an even simpler formula. We define $\overline{\mathcal{T}}^+(Q)$ to be the set of tilting sequences, i.e., tuples $(T(1),\ldots,T(n))$ satisfying two conditions: $T=\oplus T(i)$ has no self-extensions and $\mathrm{Hom}\,(T(j),T(i))=0$ for all $j>i$. Note that a tilting sequence is also called a full strongly exceptional sequence of modules. In a similar way we define support tilting sequences and $2$--support tilting sequences. Then we get the following formulas for the corresponding numbers, as we will see below. \begin{Thm}\label{Thmoverline}\label{ThmTiltSequ} $$ \overline t(\mathbb{A}_n)=(n+1)!=0!n!+n\,1!(n-1)!+\frac{n(n-1)}{2}\,2!(n-2)!+\ldots\,, $$ $$ \overline t^+(\mathbb{A}_n)=n!\,. $$ \end{Thm} The result follows from the interpretation of the map considered in \cite{VolHille}. Define the set $B_n$ as the set of all $3$--regular trees with $n+1$ leaves and one root. There is a natural map $S_n\longrightarrow B_n$ from the symmetric group to the set of all those trees. The number of elements in the preimage of a given tree under this map is just the number of tilting sequences (interpreted as trees with a compatible order on the inner vertices) defining the same tilting module (interpreted as a tree). Thus we have a bijection between tilting sequences and elements of the symmetric group. In order to define the corresponding polytope, we extend the positive roots to $\overline\Phi^+$ consisting of the positive roots for $\mathbb{A}_n$ {\sl together with all sums of orthogonal roots}. In case $Q$ is of type $\mathbb{A}_3$ we just add the vector $(1,0,1)$. In terms of dimension vectors we simply consider all non-zero vectors with entries $0$ and $1$. Using this set as vertices, we can define $\overline P^+(\mathbb{A}_n)$ as the convex hull of zero and $\overline\Phi^+$. In a similar way, we define $\overline P(\mathbb{A}_n)$ as the convex hull of $\overline\Phi^+$ and $-\overline\Phi^+$, and $\overline P^{\mathrm{clus}}(\mathbb{A}_n)$ as the convex hull of $\overline\Phi^+$ and the negative simple roots. To complete the picture, we also need to define a simplex $\overline\sigma_T$ for every tilting sequence $T$. This can be done as follows. Whenever we have a sequence $(T(i(1)),T(i(2)),\ldots,T(i(r)))$ of indecomposable direct summands of $T$ with $i(1)<i(2)<\ldots<i(r)$ whose components are pairwise incomparable (no homomorphisms and no extensions between different members), we consider the vertices $\mathrm{\underline{dim}\,} T(i(1)),\mathrm{\underline{dim}\,} T(i(1))+\mathrm{\underline{dim}\,} T(i(2)),\ldots,\mathrm{\underline{dim}\,} T(i(1))+\ldots+ \mathrm{\underline{dim}\,} T(i(r))$. In this way we get different simplices for different tilting sequences, and the union of all simplices $\overline\sigma$ associated with the tilting sequences for a given tilting module $T$ is just the simplex $\sigma_T$ of $T$. Thus, the number of tilting sequences is just the volume of $\overline P^+(\mathbb{A}_n)$, the number of support tilting sequences is the volume of $\overline P^{\mathrm{clus}}(\mathbb{A}_n)$, and the number of $2$--support tilting sequences is the volume of $\overline P(\mathbb{A}_n)$. The first and the last of these numbers have been computed in Theorem \ref{ThmTiltSequ}.
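As an illustrative cross-check of these counts (an editorial sketch, not part of the original argument), the normalized volumes quoted above can be reproduced numerically for small $n$, assuming that the volume is $n!$ times the Euclidean volume and that the positive hulls contain the origin, as in the definition of the simplices $\sigma_T$; the computation below uses Python with the convex-hull routine of \textsc{SciPy}.
\begin{verbatim}
# Numerical cross-check (editorial sketch) of the volumes for A_2 and A_3,
# assuming the normalized volume = n! * Euclidean volume and that the
# positive hulls include the origin.
from itertools import product
from math import comb, factorial

import numpy as np
from scipy.spatial import ConvexHull

def nvol(points, n):
    """Normalized volume: n! times the Euclidean volume of the convex hull."""
    return round(factorial(n) * ConvexHull(np.array(points, float)).volume)

def pos_roots(n):
    """Positive roots of A_n as 0/1 vectors with consecutive ones."""
    return [[1 if i <= k <= j else 0 for k in range(n)]
            for i in range(n) for j in range(i, n)]

for n in (2, 3):
    pos = pos_roots(n)
    neg = [[-x for x in r] for r in pos]
    neg_simple = [[-1 if k == i else 0 for k in range(n)] for i in range(n)]
    zero = [[0] * n]
    ovl = [list(v) for v in product((0, 1), repeat=n) if any(v)]  # overline Phi^+
    ovl_neg = [[-x for x in v] for v in ovl]
    print(n,
          nvol(pos + zero, n),              # C^+ :    2, 5   (Examples 1, 2)
          nvol(pos + neg_simple + zero, n), # C^clus : 5, 14
          nvol(pos + neg, n),               # C :      6, 20 = binom(2n, n)
          nvol(ovl + zero, n),              # overline C^+ :  n!
          nvol(ovl + ovl_neg, n),           # overline C :    (n+1)!
          comb(2 * n, n), factorial(n), factorial(n + 1))
\end{verbatim}
For $n=2,3$ this reproduces the values $2,5,6$ and $5,14,20$ from Examples 1 and 2, as well as $n!$ and $(n+1)!$ for the $\overline P$--series.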
From the polytope $\overline P^+(\mathbb{A}_n)$ we can form $\overline P(\mathbb{A}_n)$ as the join of $\overline P^+(\mathbb{A}_n)$ with $-\overline P^+(\mathbb{A}_n)$. This defines a polytope $\overline P(\mathbb{A}_n)$, and its volume is the number of $2$--support tilting sequences. The counting in the above theorem then computes the volume of $\overline P(\mathbb{A}_n)$ from the volume of $\overline P^+(\mathbb{A}_n)$ using the formula for the volumes of the facets. Thus we have to compute the facets (in particular, those in all quadrants except the positive and the negative ones) and their volumes. The facets in the positive quadrant correspond to elements of the symmetric group, and each facet contributes volume $1$. A facet in a quadrant corresponding to a subset $I$ of the vertices of the quiver corresponds to a facet for $\mathbb{A}_n|_I$ and an opposite facet for $\mathbb{A}_n|_{Q_0 \setminus I}$. Thus Lemma \ref{Loverline} computes the volume of $\overline P(\mathbb{A}_n)$ from that of $\overline P^+(\mathbb{A}_n)$, and Theorem \ref{Thmoverline} is proven. Summarizing, we have the following result. \begin{Thm} The polytope $\overline P(\mathbb{A}_n)=\overline C(\mathbb{A}_n)$ has as its volume the number of $2$--support tilting sequences, namely $(n+1)!$. Moreover, the number of tilting sequences for $\mathbb{A}_n$ is $n!$, that is, the volume of $\overline P^+(\mathbb{A}_n)=\overline C^+(\mathbb{A}_n)$. \end{Thm} \section{Some further comments}\label{sectDyck} In this final section we present some further explanation of our use of the notion of volume. This method is inspired by toric geometry and lattice polytopes. Moreover, we give another interpretation of the computation of the volume using Stanley's exercise on combinatorial interpretations of the Catalan numbers (see Exercise 6.19 in \cite{Stanley}). Very surprisingly, it not only gives an interpretation of the volumes of $P^+(\mathbb{A}_n)$ and $P^{\mathrm{clus}}(\mathbb{A}_n)$ (where the Catalan numbers occur naturally), but also of the volume of $P(\mathbb{A}_n)$, if we modify Dyck paths so that they correspond to the $2$--support tilting modules. \subsection{The integral volume} Note that any cube of the form $[0,1]^n$ has volume one in the Euclidean metric and can be decomposed into $n!$ simplices, all of volume $1$. This is just a recursive computation. For $n=1,2$ the claim is obvious. Then proceed by induction, and observe that the cube has $n$ facets containing $0$. (In fact, we could use any vertex instead of $0$.) By induction, the formula holds for the facets, and consequently, for the convex hull of the facet and $(1,\ldots,1)$ (a pyramid over the facet). Now one checks that the $n$ pyramids over the $n$ facets decompose the cube into $n$ pyramids of volume $(n-1)!$. Thus the cube has volume $n!$. \subsection{Rooted trees and tilting modules} We consider the set $B_n$ of rooted $3$--regular trees with $n+1$ leaves (examples can be found in \cite{VolHille}). If we consider the quiver $\mathbb{A}_n$ with its directed orientation, then we can identify $\mathcal{T}^+(Q)$ with $B_n$ (see \cite{VolHille}). Thus, we can compute the number of tilting modules using a standard recursion formula for the number of trees. Take such a tree $S$ and take the unique vertex connected to the root. Decompose $S$ into the two connected components $S^+$ and $S^-$ obtained from deleting this vertex. Then we get $$ C_n=\sharp B_n=\sum_{i=0}^{n-1}\sharp B_i\sharp B_{n-1-i}\,,\quad\sharp B_0=\sharp B_1=1\,.
$$ This is one of the standard recursion formulas for the Catalan numbers $C_n$. \subsection{Dyck paths} Dyck paths can be used for a combinatorial description of the volumes of the polytopes. A {\sl Dyck path} is a path from $(0,0)$ to $(2n,0)$ using only steps $(1,-1)$ or $(1,1)$ so that the path never goes below the $x$--axis (meaning that the second coordinate of any point on the path is non-negative). We denote the set of Dyck paths by $D^+_n$. For given $n$ the number of Dyck paths coincides with the number of possible bracketings of an expression with $n+1$ inputs. Moreover, this can be identified with the elements in $B_n$ and with the vertices of the associahedron. For the combinatorics we refer to the famous exercise in Stanley's book \cite{Stanley}, where we use only $5$ of the $66$ interpretations (in fact, even more can be found on Stanley's homepage). If we consider arbitrary paths from $(0,0)$ to $(2n,0)$ with steps $(1,1)$ or $(1,-1)$ (without the condition of staying above the $x$--axis) we obtain a set $D_n$ that has ${2n\choose n}$ elements. An interpretation in terms of $2$--support tilting modules is obtained as follows. Whenever the path stays above the $x$--axis, we identify the corresponding Dyck path with its tree, and thus with a direct summand $T^+$ of $T$. Whenever the path stays below the $x$--axis, we identify the corresponding path with $T^-$. This bijection identifies paths in $D_n$ with $2$--support tilting modules for $\mathbb{A}_n$. Consequently, we have computed the volume of $P(\mathbb{A}_n)$ as the number of elements in $D_n$, which is ${2n\choose n}$. \end{document}
\begin{document} \preprint{APS/123-QED} \title{Additivity of R\"{o}ntgen term and recoil-induced correction to spontaneous emission rate} \author{Anwei Zhang} \email{[email protected]} \affiliation{School of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240, China} \author{Danying Yu} \affiliation{State Key Laboratory of Advanced Optical Communication Systems and Networks, School of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240, China} \begin{abstract} For a moving atom, the spontaneous emission rate is modified due to the contributions from two factors: the R\"{o}ntgen interaction term and the recoil effect induced by the emitted photon. Here we investigate the emission rate of a uniformly moving atom near a perfectly conducting plate and obtain the corrections induced by these two factors. We find that the corrections individually induced by the R\"{o}ntgen term and the recoil effect can simply be added to give the total correction to the decay rate. Moreover, it is shown that the R\"{o}ntgen term gives a positive correction, while the recoil effect induces a negative correction. Our work paves the way towards future studies of the light-matter interaction for moving particles in quantum optics. \end{abstract} \maketitle The spontaneous emission of an atom is of fundamental importance in atomic physics and quantum optics \cite{book}. Such a quantum process has been studied extensively with atoms in various physical systems, including cavities \cite{2,3}, photonic crystals \cite{4,5}, and metamaterials \cite{6}. However, in most previous investigations the atom is treated as stationary or fixed. Under this treatment, the conservation of momentum in the process of emitting or absorbing a photon by the atom is neglected. In order to describe the spontaneous emission accurately, one should take the motion of the atom into account. For spontaneous decay from an atom undergoing nonrelativistic motion, one needs to consider two additional factors. One is the R\"{o}ntgen interaction term \cite{c1, c2, c30, c3}, which describes the coupling between the motion of the atom, the electric dipole moment, and the magnetic component of the radiation field. The inclusion of the R\"{o}ntgen interaction term cures the violation of the gauge invariance of radiation-induced mechanical forces and of energy-momentum conservation in the atom-field Hamiltonian under the dipole approximation \cite{c1}. The other is the recoil shift \cite{15}, induced by the emission of the photon, which modifies the momentum and energy of the atom. It makes the atom no longer exactly resonant with the emitted photon, in contrast to the case of a stationary atom. The recoil shift adds to the Doppler shift and has been measured at the kHz level with saturated absorption spectroscopy \cite{hall}. In this paper, we study the spontaneous decay of a uniformly moving atom near a perfectly conducting plate as a boundary, where corrections from both the R\"{o}ntgen term and the recoil shift are included. We find that the total correction to the spontaneous emission rate in this system equals the sum of the corrections obtained by including the R\"{o}ntgen term and the recoil shift separately. Moreover, the R\"{o}ntgen term gives a positive correction, while the recoil effect gives a negative one. We discuss decay rates for such a moving atom with the dipole oriented along different directions.
Our work reveals details of the spontaneous emission of a moving atom near a boundary, and therefore provides insight into this aspect of light-matter interaction in quantum optics. We start by considering a two-level atom with transition frequency $\omega_{0}$ and mass $m$ moving near an infinite, perfectly conducting plate located at $z=0$ (see Fig.~\ref{figure.0}). The total Hamiltonian of our system under nonrelativistic conditions can be written as $H=H_0+H_I$, where the free Hamiltonian $H_0$ contains the atomic centre-of-mass kinetic energy, the internal atomic energy, and the field energy: \begin{equation}\label{1} H_0=\frac{\hat{\textbf{P}}^{2}}{2m}+\omega_0\hat{\sigma}^{+}\hat{\sigma}^{-}+\sum_{\textbf{k}\lambda}\omega_{\textbf{k}} \hat{a}^{\dag}_{\textbf{k}\lambda}\hat{a}_{\textbf{k}\lambda}. \end{equation} Here $\hat{\textbf{P}}$ denotes the canonical momentum and $\hat{\sigma}^{+}=|e\rangle\langle g|$ ($\hat{\sigma}^{-}=|g\rangle\langle e|$) is the raising (lowering) operator for the atomic transition. The last term in Eq.~(\ref{1}) describes the quantized electromagnetic field with the creation operators $\hat{a}^{\dag}_{\textbf{k}\lambda}$ and annihilation operators $\hat{a}_{\textbf{k}\lambda}$ for a photon with the wave vector $\textbf{k}$, the frequency $\omega_{\textbf{k}}$, and the polarization $\lambda=1,2$. We use the natural units $\hbar=c=1$ throughout the text for simplicity. The atom-field interaction Hamiltonian $H_I$ is given by \cite{c2,c3} \begin{equation}\label{2} H_I=-\hat{\textbf{d}}\cdot \hat{\textbf{E}}(\textbf{r})-\frac{1}{2m}\{\hat{\textbf{P}}\cdot [\hat{\textbf{B}}(\textbf{r})\times \hat{\textbf{d}}]+[\hat{\textbf{B}}(\textbf{r})\times \hat{\textbf{d}}]\cdot \hat{\textbf{P}}\}, \end{equation} where $\hat{\textbf{d}}=\textbf{d}(\hat{\sigma}^{+}+\hat{\sigma}^{-})$ denotes the atomic dipole operator with dipole transition moment $\textbf{d}$, and $\hat{\textbf{E}}(\textbf{r})$, $\hat{\textbf{B}}(\textbf{r})$ are the electric field and magnetic field operators at the position $\textbf{r}$ of the atom, respectively. The interaction described by Eq.~(\ref{2}) includes the R\"{o}ntgen term, i.e., the second term, which results from the motion of the atom.
In the interaction picture, the effective interaction Hamiltonian can be written as \begin{eqnarray}\label{3} H_I(t)=&-&(\hat{\sigma}^{+}e^{i\omega_{0}t}+\hat{\sigma}^{-}e^{-i\omega_{0}t})\bigg\{\textbf{d}\cdot \hat{\textbf{E}}(\textbf{r},t)\nonumber \\ &+&\frac{1}{2m}\{\hat{\textbf{P}}\cdot [\hat{\textbf{B}}(\textbf{r},t)\times \textbf{d}] +[\hat{\textbf{B}}(\textbf{r},t)\times \textbf{d}]\cdot \hat{\textbf{P}}\}\bigg\}.\nonumber \\ \end{eqnarray} \begin{figure} \caption{Schematic illustration for a two-level atom moving at a constant velocity $v$ parallel to an infinitely perfectly conducting plate.} \label{figure.0} \end{figure} In the presence of the infinite perfectly conducting plate placed at $z=0$, the mode functions of the vector potential $\textbf{A}$ can be written as \cite{book} \begin{eqnarray}\label{4} (A_{\textbf{k}\lambda})_{x} &=& i\frac{\varepsilon_{k}}{\omega_{\textbf{k}}} (e_{\textbf{k}\lambda})_{x}e^{i\textbf{k}_{\parallel}\cdot\textbf{r}-i\omega_{\textbf{k}}t}\sin(k_{z}z), \nonumber\\ (A_{\textbf{k}\lambda})_{y} &=& i\frac{\varepsilon_{k}}{\omega_{\textbf{k}}} (e_{\textbf{k}\lambda})_{y}e^{i\textbf{k}_{\parallel}\cdot\textbf{r}-i\omega_{\textbf{k}}t}\sin(k_{z}z), \nonumber\\ (A_{\textbf{k}\lambda})_{z} &=&\frac{\varepsilon_{k}}{\omega_{\textbf{k}}} (e_{\textbf{k}\lambda})_{z}e^{i\textbf{k}_{\parallel}\cdot\textbf{r}-i\omega_{\textbf{k}}t} \cos(k_{z}z), \end{eqnarray} where $\varepsilon_{k}=\sqrt{\omega_{\textbf{k}}/(\varepsilon_{0}V)}$, $\varepsilon_{0}$ is the permittivity of vacuum, $V$ is the quantization volume, $\textbf{e}_{\textbf{k}1}=i(-k_{y}/k_{\parallel},k_{x}/k_{\parallel},0)$, $\textbf{e}_{\textbf{k}2}=[-k_{x}k_{z}/(k_{\parallel}\omega_{\textbf{k}}), -k_{y}k_{z}/(k_{\parallel}\omega_{\textbf{k}}),k_{\parallel}/\omega_{\textbf{k}}]$, and $\textbf{k}_{\parallel}=(k_{x},k_{y},0)$ with $k_{\parallel}=\sqrt{k^{2}_{x}+k^{2}_{y}}$. It can be verified that the mode functions in Eq.~(\ref{4}) satisfy the boundary condition, the orthogonality condition, and the Coulomb gauge condition. Thus we obtain the negative-frequency components of the electric field and magnetic field operators \begin{eqnarray}\label{5} \hat{E}^{-}_{i}(\textbf{r},t) &=&-i \sum_{\textbf{k}\lambda} \omega_{\textbf{k}}(A_{\textbf{k}\lambda})^{\ast}_{i}\hat{a}^{\dag}_{\textbf{k}\lambda}, \nonumber\\ \hat{B}^{-}_{x}(\textbf{r},t) &=& -i \sum_{\textbf{k}\lambda}\varepsilon_{k}(\textbf{h}\times \textbf{e}_{\textbf{k}\lambda})^{\ast}_{x}e^{-i\textbf{k}_{\parallel}\cdot\textbf{r}+i\omega_{\textbf{k}}t} \cos(k_{z}z)\hat{a}^{\dag}_{\textbf{k}\lambda},\nonumber\\ \hat{B}^{-}_{y}(\textbf{r},t) &=& -i \sum_{\textbf{k}\lambda}\varepsilon_{k}(\textbf{h}\times \textbf{e}_{\textbf{k}\lambda})^{\ast}_{y}e^{-i\textbf{k}_{\parallel}\cdot\textbf{r}+i\omega_{\textbf{k}}t} \cos(k_{z}z)\hat{a}^{\dag}_{\textbf{k}\lambda},\nonumber\\ \hat{B}^{-}_{z}(\textbf{r},t) &=&- \sum_{\textbf{k}\lambda}\varepsilon_{k}(\textbf{h}\times \textbf{e}_{\textbf{k}\lambda})^{\ast}_{z}e^{-i\textbf{k}_{\parallel}\cdot\textbf{r}+i\omega_{\textbf{k}}t} \sin(k_{z}z)\hat{a}^{\dag}_{\textbf{k}\lambda}.\nonumber\\ \end{eqnarray} Here $\textbf{h}=\textbf{k}/\omega_{\textbf{k}}$ is the unit vector along the wave vector. The positive-frequency components are the Hermitian conjugates of the corresponding negative-frequency components. We consider the atom initially prepared in the excited state and in the canonical momentum state $|\textbf{p}\rangle$, while the field is in the vacuum state. Therefore, we can write the initial state of the system as $|i\rangle=|e,\textbf{p},0\rangle$.
Due to the term $\hat{\sigma}^{-}\hat{a}^{\dag}_{\textbf{k}\lambda}$ in the interaction Hamiltonian, the state of the moving atom evolves into its ground state by emitting a photon, which gives the final state of the system $|f\rangle=|g,\textbf{p}-\textbf{k},1_{\textbf{k}}\rangle$. The time derivative of the transition probability, evaluated in the first-order approximation, gives the spontaneous decay rate of the atom: \begin{eqnarray}\label{6} \Gamma&=&\partial_{t}\sum_{\textbf{k}\lambda}\mid\int^{t}_{0}d t^{\prime}\langle f|H_I(t^{\prime})|i\rangle\mid^{2}\nonumber\\ &=&2\mathrm{Re}\sum_{\textbf{k}\lambda}\int^{t}_{0}d t^{\prime}\langle i|H_I(t)|f\rangle\langle f|H_I(t^{\prime})|i\rangle. \end{eqnarray} In order to calculate the decay rate, we consider the case that the atom moves parallel to the plate, i.e., $\textbf{p}=(p,0,0)$. If we choose the dipole transition moment $\textbf{d}=(0,0,d)$, which points along the direction normal to the boundary, we have the equality $\langle f|\hat{\textbf{P}}\cdot [\hat{\textbf{B}}(\textbf{r},t)\times \textbf{d}] +[\hat{\textbf{B}}(\textbf{r},t)\times \textbf{d}]\cdot \hat{\textbf{P}}|i\rangle=d\langle f|2\hat{B}^{-}_{y}p-i\partial_{x}\hat{B}^{-}_{y}+i\partial_{y}\hat{B}^{-}_{x}|i\rangle= d\langle f|(2p-k_{x})\hat{B}^{-}_{y}+k_{y}\hat{B}^{-}_{x}|i\rangle$. Here the formula $[\hat{P}_{i},\hat{B}_{j}]=-i\partial_{i}\hat{B}_{j}$ is used. We then rewrite Eq.~(\ref{6}) as \begin{equation}\label{7} \Gamma = 2\pi d^{2}\sum_{\textbf{k}\lambda}\varepsilon^{2}_{k}F^{(1)}_{\lambda}(\omega_{\textbf{k}}) \delta(\omega_0-\omega_{\textbf{k}}+k_{x}v)\cos^{2}(k_{z}z), \end{equation} where $F^{(1)}_{\lambda}(\omega_{\textbf{k}})=|(e_{\textbf{k}\lambda})_{z}+(p-h_{x}\omega_{\textbf{k}}/2)(\textbf{h}\times \textbf{e}_{\textbf{k}\lambda})_{y}/m+h_{y}\omega_{\textbf{k}}(\textbf{h}\times \textbf{e}_{\textbf{k}\lambda})_{x}/(2m)|^{2}$ and $v(\ll 1)$ is the velocity of the atom. In the derivation, we have used the relation $x=vt$ and the Markov approximation. For our atom moving with nonrelativistic velocity $\textbf{v}$, conservation of momentum and energy gives \cite{aa} \begin{eqnarray} \textbf{k}+\Delta \textbf{p} &=& 0, \nonumber\\ \Delta E_{a}+\Delta \textbf{p}\cdot\textbf{v}+\omega_{ \textbf{k}} &=& 0, \end{eqnarray} where $\Delta E_{a}=-\omega_{0}$ is the change of the atom's internal energy and $\Delta \textbf{p}\cdot\textbf{v}=-\textbf{k}\cdot\textbf{v}=-k_{x}v$ describes the change of the atom's kinetic energy, $(\textbf{p}-\textbf{k})^{2}/2m-\textbf{p}^{2}/2m=-(ph_{x}\omega_{\textbf{k}}-\omega^{2}_{\textbf{k}}/2)/m$. Here $ph_{x}\omega_{\textbf{k}}/m$ and $\omega^{2}_{\textbf{k}}/(2m)$ denote the Doppler shift and the recoil shift \cite{c3,15}, respectively. The $\delta$-function in Eq.~(\ref{7}) reflects the conservation of energy, as expected from Fermi's golden rule. Here the quantity $\textbf{v}$ is not equal to $\textbf{p}/m$, as a consequence of the difference between the kinetic momentum $m d\hat{\textbf{r}}/dt$ and the canonical momentum $\hat{\textbf{P}}$ \cite{c2}.
We now take the continuum limit, i.e., $V\rightarrow\infty$, and obtain \begin{equation}\label{8} \Gamma=\frac{d^{2}}{(2\pi)^{2}\varepsilon_{0}}\sum_{\lambda}\int d\Omega\frac{\omega^{3}_{+}F^{(1)}_{\lambda}(\omega_{+})}{G(\omega_{+})} \cos^{2}(h_{z}z\omega_{+}), \end{equation} where $d\Omega$ is the solid angle, $\omega_{+}=h_{x} p-m+\sqrt{\left(m-h_{x} p\right)^{2}+2 \omega_{0}m }$ is the positive root of the equation $\omega_0-\omega_{\textbf{k}}+k_{x}v=0$, and $G(\omega_{+})=\partial_{\omega_{\textbf{k}}}|\omega_0- \omega_{\textbf{k}}+k_{x}v|_{\omega_{+}}=1+(\omega_{+}-p h_{x})/m$. We assume that the atom is heavy enough, i.e., $m\gg p, \omega_{\textbf{k}}, \omega_{0}$. Then we can expand $F^{(1)}_{\lambda}(\omega_{+})$ to first order in $1/m$ and sum the expanded expressions over the polarizations $\lambda$ to obtain \begin{equation}\label{9} \sum_{\lambda}F^{(1)}_{\lambda}(\omega_{+})=1-h^{2}_{z}+\left[\left(h^{2}_{x}+h^{2}_{y}\right)\omega_{+} -2ph_{x}\right]/m. \end{equation} Here the relation $\sum_{\lambda}(e_{\textbf{k}\lambda})_{i}(e_{\textbf{k}\lambda})^{\ast}_{j}=\delta_{ij}-h_{i}h_{j}$ is used. \begin{figure*} \caption{The correction to the spontaneous emission rate in units of $\Gamma_{0}$.} \label{figure.1} \end{figure*} By substituting Eq.~(\ref{9}) into Eq.~(\ref{8}) and then integrating over the solid angle to first order in $1/m$, we derive the decay rate \begin{equation}\label{10} \Gamma=\Gamma_{bz}-\frac{3\Gamma_{0}\omega_{0}}{2m}\bigg(1+\frac{\sin Z}{Z}\bigg), \end{equation} where $\Gamma_{bz}=\Gamma_{0}\left(1-3\frac{Z\cos Z-\sin Z}{Z^{3}}\right)$ gives the spontaneous decay rate of a fixed atom with the dipole transition moment along the $z$-direction in the presence of the boundary \cite{book}. Here $\Gamma_{0}=\omega^{3}_{0}d^{2}/(3\pi\varepsilon_{0})$ is the decay rate of an atom fixed in free space. $Z$ is a dimensionless parameter, which reads \begin{equation}\label{n2} Z=2 z \omega_{0}. \end{equation} The second term, i.e., $\Gamma_{ct} =-\frac{3\Gamma_{0}\omega_{0}}{2m}\left(1+\frac{\sin Z}{Z}\right)$, in Eq.~(\ref{10}) is the correction to the decay rate, which includes the contributions from both the R\"{o}ntgen term and the recoil of the emitted photon. It is interesting to note that the contributions from these two terms are additive. If only the R\"{o}ntgen term is involved in the calculation, the same procedure gives the correction to the decay rate \begin{equation} \Gamma_{cR}=\Gamma_{bz}\omega_{0}/m, \end{equation} which gives a positive contribution to the decay rate [see Fig.~\ref{figure.1}(a)]. If instead we only consider the recoil effect, the correction to the decay rate is \begin{equation} \Gamma_{cr}=\frac{\Gamma_{0}\omega_{0}}{m}\left[-\frac{5}{2}+3\frac{2Z\cos Z-(Z^{2}+2)\sin Z}{2Z^{3}}\right], \end{equation} which gives a negative contribution to the decay rate. One finds that $\Gamma_{ct} = \Gamma_{cR} +\Gamma_{cr}$, showing that the sum of the R\"{o}ntgen-term-induced correction and the recoil-induced correction gives the total correction to the spontaneous decay rate. Note that since Eq.~(\ref{10}) is obtained by expanding the integrand in Eq.~(\ref{8}) to first order in $1/m$, there is the constraint $o(m^{2})Z^{2}/m^{2}\ll1$, i.e., $Z$ cannot be extended to infinity and our approximate results are only valid when the atom is near the plate.
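The additivity $\Gamma_{ct}=\Gamma_{cR}+\Gamma_{cr}$ can also be checked symbolically from the expressions quoted above. The following snippet is an illustrative editorial check (not taken from the paper), assuming the \textsc{SymPy} library; all corrections are expressed in units of $\Gamma_{0}\omega_{0}/m$.
\begin{verbatim}
# Editorial symbolic check (not from the paper): for the z-oriented dipole,
# the Roentgen-term and recoil corrections add up to the total correction.
# All quantities are in units of Gamma_0 * omega_0 / m.
import sympy as sp

Z = sp.symbols('Z', positive=True)

Gamma_bz = 1 - 3 * (Z * sp.cos(Z) - sp.sin(Z)) / Z**3     # Gamma_bz / Gamma_0
Gamma_cR = Gamma_bz                                       # Roentgen-term correction
Gamma_cr = (-sp.Rational(5, 2)
            + 3 * (2 * Z * sp.cos(Z) - (Z**2 + 2) * sp.sin(Z)) / (2 * Z**3))
Gamma_ct = -sp.Rational(3, 2) * (1 + sp.sin(Z) / Z)       # total correction, Eq. (10)

print(sp.simplify(Gamma_cR + Gamma_cr - Gamma_ct))        # prints 0
\end{verbatim}
The analogous check for the dipole along the $y$- or $x$-direction proceeds in the same way with the corresponding expressions given below.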
We have discussed the case of the moving atom near the boundary with the dipole transition moment along the $z$-direction. Next, we consider cases with the dipole along the $y$-direction and $x$-direction, separately. By setting the dipole along the $y$-direction, i.e., $\textbf{d}=(0,d,0)$, we have the equality: $\langle f|\hat{\textbf{P}}\cdot [\hat{\textbf{B}}(\textbf{r},t)\times \textbf{d}] +[\hat{\textbf{B}}(\textbf{r},t)\times \textbf{d}]\cdot \hat{\textbf{P}}|i\rangle=-d\langle f|2\hat{B}^{-}_{z}p-i\partial_{x}\hat{B}^{-}_{z}+i\partial_{z}\hat{B}^{-}_{x}|i\rangle$. We can therefore derive the decay rate as \begin{eqnarray}\label{11} \Gamma&=& 2\pi d^{2}\sum_{\textbf{k}\lambda}\varepsilon^{2}_{k}F^{(2)}_{\lambda}(\omega_{\textbf{k}}) \delta(\omega_0-\omega_{\textbf{k}}+k_{x}v)\sin^{2}(k_{z}z)\nonumber\\ &=&\Gamma_{by}-\frac{3\Gamma_{0}\omega_{0}}{2m}\bigg(1-\frac{\cos Z}{2}-\frac{\sin Z}{2Z}\bigg), \end{eqnarray} where $F^{(2)}_{\lambda}(\omega_{\textbf{k}})=|(e_{\textbf{k}\lambda})_{y}-(p-h_{x} \omega_{\textbf{k}}/2)(\textbf{h}\times \textbf{e}_{\textbf{k}\lambda})_{z}/m-h_{z} \omega_{\textbf{k}}(\textbf{h}\times \textbf{e}_{\textbf{k}\lambda})_{x}/(2m)|^{2}$ and $\Gamma_{by}=\Gamma_{0}\left[1-3\frac{Z\cos Z+(Z^{2}-1)\sin Z}{2Z^{3}}\right]$ gives the spontaneous decay rate of a fixed atom with the dipole transition moment along the $y$-direction in the presence of the plate \cite{book}. Here $\sum_{\lambda}F^{(2)}_{\lambda}(\omega_{\textbf{k}})=1-h^{2}_{y}+\left[\left(h^{2}_{x}+h^{2}_{z}\right) \omega_{\textbf{k}}-2ph_{x}\right]/m$ is used. The second term, i.e., $\Gamma_{ct}=-\frac{3\Gamma_{0}\omega_{0}}{2m}\left(1-\frac{\cos Z}{2}-\frac{\sin Z}{2Z}\right)$, in Eq.~(\ref{11}) is the correction to the spontaneous decay rate. If we only consider the R\"{o}ntgen term, it reduces to \begin{equation} \Gamma_{cR}=\Gamma_{by}\omega_{0}/m, \end{equation} and if we only consider the recoil effect, it becomes \begin{equation} \Gamma_{cr}=\frac{\Gamma_{0}\omega_{0}}{m }\left[-\frac{5}{2}+\frac{3}{4}\left(\frac{Z^{2}+2}{Z^{2}}\cos Z+\frac{3Z^{2}-2}{Z^{3}}\sin Z\right)\right]. \end{equation} Similarly, we find that the R\"{o}ntgen-term-induced correction and the recoil-induced correction can be simply added to obtain the total correction to the spontaneous decay rate in this case [see Fig.~\ref{figure.1}(b)]. Moreover, we can see that the recoil effect also decreases the decay rate, while the R\"{o}ntgen term increases the decay rate. When the dipole transition moment $\textbf{d}=(d,0,0)$ is along the direction of motion (the $x$-direction), $\langle f|\hat{\textbf{P}}\cdot [\hat{\textbf{B}}(\textbf{r},t)\times \textbf{d}] +[\hat{\textbf{B}}(\textbf{r},t)\times \textbf{d}]\cdot \hat{\textbf{P}}|i\rangle=d\langle f|-i\partial_{y}\hat{B}^{-}_{z}+i\partial_{z}\hat{B}^{-}_{y}|i\rangle$. The resulting spontaneous decay rate is \begin{eqnarray}\label{12} \Gamma&=& 2\pi d^{2}\sum_{\textbf{k}\lambda}\varepsilon^{2}_{k}F^{(3)}_{\lambda}(\omega_{\textbf{k}}) \delta(\omega_0-\omega_{\textbf{k}}+k_{x}v)\sin^{2}(k_{z}z)\nonumber\\ &=&\Gamma_{bx}-\frac{3\Gamma_{0}\omega_{0}}{2m}\bigg(1-\frac{\cos Z}{2}-\frac{\sin Z}{2Z}\bigg), \end{eqnarray} where $F^{(3)}_{\lambda}(\omega_{\textbf{k}})=|(e_{\textbf{k}\lambda})_{x}-h_{y} \omega_{\textbf{k}}(\textbf{h}\times \textbf{e}_{\textbf{k}\lambda})_{z}/(2m)+h_{z} \omega_{\textbf{k}}(\textbf{h}\times \textbf{e}_{\textbf{k}\lambda})_{y}/(2m)|^{2}$ and $\Gamma_{bx}=\Gamma_{by}$.
Here we have used $\sum_{\lambda}F^{(3)}_{\lambda}(\omega_{\textbf{k}})=1-h^{2}_{x}+\left(h^{2}_{y}+h^{2}_{z}\right) \omega_{\textbf{k}}/m$. We plot the corresponding corrections for the case of the dipole along the $x$-direction in Fig.~\ref{figure.1}(c), which show the same features as those in Fig.~\ref{figure.1}(b) for the dipole along the $y$-direction. One notices that, when the dipole transition moment is parallel to the plate, the emission rates of the atom (fixed or moving) are the same for the $x$- and $y$-directions. Moreover, the recoil-induced and the R\"{o}ntgen-term-induced corrections are the same as those for the dipole transition moment along the $y$-direction [see Fig.~\ref{figure.1}(c)]. Thus, the motion does not break the symmetry of the spontaneous emission rate for an atom with the dipole in a plane parallel to the boundary. In conclusion, we have studied the spontaneous emission rate of a moving atom near an infinite, perfectly conducting plate. Different directions of the dipole moment of the atom have been considered. The corrections from the R\"{o}ntgen term and the recoil effect have been explored. It has been found that the total correction is the sum of the individual contributions. The individual contributions induced solely by the R\"{o}ntgen term and the recoil effect are positive and negative, respectively. Moreover, it is interesting to note that the motion of the atom does not break the equality of the spontaneous emission rates for dipoles along different directions within the plane parallel to the plate. Our work points towards interesting future directions in quantum optics associated with the light-matter interaction for moving atoms. A.Z. would like to thank L. Yuan for revising this paper. \end{document}
\begin{document} \title{Limitations on sharing Bell nonlocality between sequential pairs of observers} \author{Shuming Cheng} \affiliation{The Department of Control Science and Engineering, Tongji University, Shanghai 201804, China} \affiliation{Shanghai Institute of Intelligent Science and Technology, Tongji University, Shanghai 201804, China} \affiliation{Institute for Advanced Study, Tongji University, Shanghai, 200092, China} \author{Lijun Liu} \affiliation{College of Mathematics and Computer Science, Shanxi Normal University, Linfen 041000, China} \author{ Travis J. Baker} \affiliation{Centre for Quantum Computation and Communication Technology (Australian Research Council), Centre for Quantum Dynamics, Griffith University, Brisbane, QLD 4111, Australia} \author{Michael J. W. Hall} \affiliation{Department of Theoretical Physics, Research School of Physics, Australian National University, Canberra ACT 0200, Australia} \date{\today} \begin{abstract} We give strong analytic and numerical evidence that, under mild measurement assumptions, two qubits cannot both be recycled to generate Bell nonlocality between multiple independent observers on each side. This is surprising, as under the same assumptions it is possible to recycle just one of the qubits an arbitrarily large number of times [P. J. Brown and R. Colbeck, Phys. Rev. Lett. \textbf{125}, 090401 (2020)]. We derive corresponding `one-sided monogamy relations' that rule out two-sided recycling for a wide range of parameters, based on a general tradeoff relation between the strengths and maximum reversibilities of qubit measurements. We also show if the assumptions are relaxed to allow sufficiently biased measurement selections, then there is a narrow range of measurement strengths that allows two-sided recycling for two observers on each side, and propose an experimental test. Our methods may be readily applied to other types of quantum correlations, such as steering and entanglement, and hence to general information protocols involving sequential measurements. \end{abstract} \maketitle \paragraph{Introduction---} It is always of interest to determine ways in which physical resources can be usefully exploited. One such resource is quantum entanglement, which is critical to information tasks such as quantum teleportation~\cite{Bennett93} and secure quantum key distribution~\cite{Ekert91}. It was recently shown, in a network scenario, that multiple pairs of independent observers can exploit the same entangled state by weakly measuring and passing along its components~\cite{Silva15}. This has generated great interest both theoretically~\cite{Mal16,Curchod17,Tavakoli18,Bera18,Sasmal18,Shenoy19,Das19,Saha19,Kumari19,Brown20,Maity20,Bowles20,Roy20} and experimentally~\cite{Schiavon17,Hu18,Choi20,Foletto20,Feng20,Foletto21}. As a notable example, it is possible to use two entangled qubits to generate Bell nonlocal correlations between a first observer holding the first qubit and each one of an arbitrarily long sequence of independent observers that hold the second qubit in turn~\cite{Brown20}. This recycling of the second qubit allows the first observer to implement device-independent information protocols, such as secure quantum key distribution~\cite{Ekert91,Ekert14} and randomness generation~\cite{Curchod17,Pironio10,Foletto21}, with each one of the other observers. \begin{figure*} \caption{Sequential Bell nonlocality with multiple observers on each side. 
A source S generates two qubits on each run, which are received by observers Alice~1 and Bob~1 ($A_1$ and $B_1$ in the main text). Each makes one of two local measurements on their qubit with equal probabilities (e.g., $X$ or $X'$); records their result (e.g., $x$ or $x'$); and passes their qubit on to independent observers Alice~2 and Bob~2, respectively ($A_2$ and $B_2$ in the main text). It is known that Alice~1 can demonstrate Bell nonlocality with each of an arbitrary number of Bobs in this way, via recycling of the second qubit~\cite{Brown20}.} \label{fig:fig1} \end{figure*} In contrast, we show here that there are surprisingly strong limitations on recycling {\it both} qubits in this way, so as to generate Bell nonlocality for {\it multiple} observers on each side. This is so even for just two observers on each side (see Fig.~\ref{fig:fig1}). In particular, we give strong numerical evidence for the conjecture that if two observers independently make one of two equally-likely two-valued measurements on their qubits, as in~\cite{Brown20}, then a second pair of observers cannot observe Bell nonlocality if the first pair does. This limitation, to one-sided recycling, may be regarded as a type of sharing monogamy, which we demonstrate analytically for a wide range of parameters via corresponding `one-sided monogamy relations'. The physical intuition behind such limitations is that if the measurements made by the first pair of observers in this scenario are sufficiently strong to demonstrate Bell nonlocality, then they are also sufficiently irreversible to leave the qubits in a Bell-local state. Correspondingly, the analytic results rely on a tradeoff between the strength and maximum reversibility of general two-valued qubit measurements, as shown below, which is of some interest in its own right. We show that the validity of the conjecture only requires explicit consideration of the 16-parameter set of observables measured by the first pair of observers, on a 1-parameter class of pure initial states, making a numerical test feasible. Further, we obtain analytic monogamy relations for two 14-parameter subsets, corresponding to the observables having either equal strengths or orthogonal measurement directions for each side. These monogamy relations hold for all initial states with maximally-mixed marginals, and for arbitrary initial states if the observables are unbiased. Finally, by allowing the first pair of observers to select one of their measurements with high probability ($>90\%$), we show that it becomes possible for each of the observers on one side to generate Bell nonlocality with each of those on the other side, via a judicious choice of measurement strengths, and a corresponding experimental test is proposed. However, while this restores a degree of symmetry to qubit recycling, it is at the cost of a significant asymmetry in the selection of measurements, which strongly limits, e.g., the randomness that can be generated in device-independent protocols. A number of details and generalisations are left to the Supplemental Material~\cite{SM} and a forthcoming companion paper~\cite{Cheng21}. \paragraph{One-sided monogamy conjecture---} Before proceeding to details, we formally state the main conjecture and preview the numerical evidence.
First, if an observer $A$ ($B$) measures either of two observables $X$ or $X'$ ($Y$ or $Y'$), with outcomes labelled by $\pm1$, then Bell nonlocality is characterised by the value of the Clauser-Horne-Shimony-Holt (CHSH) parameter~\cite{Clauser69,Brunner14} \begin{equation} \label{chsh} S(A,B):= \langle XY\rangle+\langle XY'\rangle+\langle X'Y\rangle - \langle X'Y'\rangle . \end{equation} In particular, a violation of the CHSH inequality $S(A,B)\leq2$ implies that there is no local hidden variable model for the correlations between the measurement outcomes, guaranteeing the security of device-independent protocols such as quantum key distribution and randomness generation. Our results strongly support the following conjecture, which significantly limits the recycling of qubits used for the generation of Bell nonlocality.\\ {\bf Conjecture:} {\it If observers $A_1$ and $B_1$ independently make one of two equally-likely measurements on a first and second qubit, respectively, and the qubits are passed on to observers $A_2$ and $B_2$, respectively, then the pairs $(A_j,B_k)$ and $(A_{j'},B_{k'})$ can each violate the CHSH inequality only if they share a common observer, i.e., } \begin{equation} \label{conjecture} S(A_j,B_k),\, S(A_{j'},B_{k'})>2~~{\rm only~if}~ j=j'~{\rm or}~k=k'. \end{equation} Thus, the conjecture asserts that sequential Bell nonlocality is only possible via a fixed observer on one side in this scenario, as in~\cite{Brown20}, but {\it not} for multiple observers on each side. In particular, it implies that at most one of the pairs $(A_1,B_1)$ and $(A_2,B_2)$ can violate the CHSH inequality, and similarly at most one of the pairs $(A_1,B_2)$ and $(A_2,B_1)$. Here $A_1$ corresponds to Alice~1 in Fig.~\ref{fig:fig1}, etc. Numerical evidence for the conjecture is illustrated in Fig.~\ref{fig:fig2}. We note that the conjecture does not generally extend to, e.g., Bell inequalities with more measurements per observer on higher-dimensional systems~\cite{adan}. \begin{figure*} \caption{ One-sided monogamy for CHSH Bell nonlocality. The possibility of violating the CHSH inequality (a) by both $(A_1,B_1)$ and $(A_2,B_2)$, and (b) by both $(A_1,B_2)$ and $(A_2,B_1)$, is tested via the values of $|S(A_1,B_1)|$ and the proxy quantities $S^*(A_1, B_2), S^*(A_2, B_1), S^*(A_2,B_2)$ in Eqs.~(\ref{chsh}) and (\ref{smax22})--(\ref{s21}).} \label{fig:fig2} \end{figure*} \paragraph{Qubit measurements---} To proceed, consider measurement of a (generalised) two-valued observable described by a positive operator valued measure (POVM) $\{X_+,X_-\}$. Thus, $X_\pm\geq 0$ with $X_++X_-=\mathbbm{1}$, and the observable is equivalently represented by the operator $X:= X_+ - X_-$, with $-\mathbbm{1}\leq X\leq \mathbbm{1}$. For qubits, $X$ can be decomposed as \begin{equation} X = {\mathcal B} \mathbbm{1} + \mathcal{S}\bm \sigma\cdot\bm x , \label{measurement} \end{equation} with respect to the Pauli spin operator basis $\boldsymbol{\sigma}\equiv (\sigma_1, \sigma_2, \sigma_3)$, with $\mathcal{S}\geq 0$ and $|\bm x|:= (\bm x\cdot \bm x)^{1/2} =1$. Here ${\mathcal B}$ is the {\it outcome bias} of the observable, $\mathcal{S}$ is its {\it strength}~\cite{Shenoy19} (or information gain~\cite{Silva15}), and $\bm x$ is a direction associated with the observable. For projective observables one has ${\mathcal B}=0$ and $\mathcal{S}=1$, while for the trivial observable with POVM $\{\mathbbm{1},0\}$ one has ${\mathcal B}=1$ and $\mathcal{S}=0$.
More generally, ${\mathcal B}$ is the difference of the $+1$ and $-1$ outcome probabilities for the maximally-mixed state $\rho=\half\mathbbm{1}$, and $|{\mathcal B}|+ \mathcal{S}\leq 1$. In the context of sequential measurements, as in Fig.~\ref{fig:fig1}, the effect of a measurement on the subsequent state of a qubit is important. For example, an observation of $X\equiv\{X_+,X_-\}$ can be implemented by the square-root measurement that takes the state $\rho$ to \begin{align} \phi(\rho):=X_+^{1/2}\rho X_+^{1/2} + X_-^{1/2}\rho X_-^{1/2}. \end{align} More generally, any measurement of $X$ takes $\rho$ to $\phi_G(\rho) = \phi_+(X_+^{1/2}\rho X_+^{1/2}) + \phi_-(X_-^{1/2}\rho X_-^{1/2})$, for two quantum channels $\phi_+$ and $\phi_-$~\cite{Brown20}, equivalent to first carrying out the square-root measurement and then applying a quantum channel that may depend on the outcome. Now, a quantum channel is reversible if and only if it is unitary, implying the square-root measurement map $\phi$ can be recovered from $\phi_G$ only if $\phi_\pm$ are unitary transformations, and indeed the same unitary transformation if the outcome is unknown (as is the case for independent sequential measurements). Hence, the square-root measurement $\phi$ is optimal, in the sense of being the maximally reversible measurement of $X$ (up to a unitary transformation), and we follow~\cite{Brown20} in confining attention to such measurements. To quantify the degree of maximum reversibility, we note that explicit calculation gives~\cite{SM} \begin{align} \phi(\rho) &= \p{x}\rho\p{x} + \p{-x}\rho\p{-x}+ \mathcal R\left(\p{x}\rho\p{-x}+\p{-x}\rho\p{x}\right), \label{phirho} \end{align} where $P_{\bm x}=\half(\mathbbm{1}+\bm\sigma\cdot\bm x)$ denotes the projection onto unit spin direction $\bm x$, and \begin{equation} \mathcal R := \half\sqrt{(1+{\mathcal B})^2-\mathcal{S}^2} + \half\sqrt{(1-{\mathcal B})^2-\mathcal{S}^2} . \label{rdef} \end{equation} Thus, the off-diagonal elements of $\rho$ in the $\bm \sigma\cdot \bm x$ basis are scaled by $\mathcal R$, with $\mathcal R=0$ for projective observables (${\mathcal B}=0, \mathcal{S}=1)$, and $\mathcal R=1$ for trivial observables ($\mathcal{S}=0$). Hence, $\mathcal R$ is a natural measure of the {\it maximum reversibility} associated with the measurement of a given observable (it also upper bounds the `quality factor' $\mathcal F$ of a class of unbiased weak qubit measurements~\cite{Silva15,SM}). For convenience we will often simply refer to $\mathcal R$ as the reversibility in what follows. \paragraph{Tradeoff between strength and reversibility---} Equation~(\ref{rdef}) is a general relation connecting outcome bias, strength, and maximum reversibility, and implies the fundamental tradeoff relation~\cite{SM} \begin{equation} \mathcal R^2+\mathcal{S}^2 \leq 1 ,\label{tradeoff} \end{equation} between reversibility and strength. Equality holds for any unbiased observable, i.e., for ${\mathcal B}=0$. This tradeoff is very useful for studying the shareability of Bell nonlocality via sequential measurements, and may be used to reinterpret the information-disturbance relation given in~\cite{Banaszek01} (in the case of qubit measurements)~\cite{SM}. Equations~(\ref{phirho}) and~(\ref{tradeoff}) also suggest a natural definition of ``minimal decoherence'', ${\cal D}=\sqrt{ 1 - \mathcal R^2}\geq \mathcal{S}$. 
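As an illustrative numerical check of Eqs.~(\ref{phirho}) and (\ref{rdef}) (an editorial sketch, not code from the paper or its Supplemental Material), one can apply the square-root measurement map to a random qubit state and confirm that the coherences in the $\bm\sigma\cdot\bm x$ basis are scaled by $\mathcal R$, and that the tradeoff~(\ref{tradeoff}) holds; Python with \textsc{NumPy} and \textsc{SciPy} is assumed.
\begin{verbatim}
# Editorial check (not from the paper) of Eqs. (phirho), (rdef) and (tradeoff):
# the square-root measurement of X = B*1 + S*sigma.x scales coherences by R.
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]

x = rng.normal(size=3); x /= np.linalg.norm(x)            # measurement direction
S = rng.uniform(0, 1)                                     # strength
B = rng.uniform(-(1 - S), 1 - S)                          # bias, with |B| + S <= 1
sx = sum(xi * si for xi, si in zip(x, sig))
Xp = 0.5 * ((1 + B) * np.eye(2) + S * sx)                 # POVM element X_+
Xm = 0.5 * ((1 - B) * np.eye(2) - S * sx)                 # POVM element X_-

v = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = v @ v.conj().T; rho /= np.trace(rho)                # random qubit state

phi = sqrtm(Xp) @ rho @ sqrtm(Xp) + sqrtm(Xm) @ rho @ sqrtm(Xm)

R = 0.5 * (np.sqrt((1 + B)**2 - S**2) + np.sqrt((1 - B)**2 - S**2))  # Eq. (rdef)
Px, Pm = 0.5 * (np.eye(2) + sx), 0.5 * (np.eye(2) - sx)              # P_{+x}, P_{-x}
pred = Px @ rho @ Px + Pm @ rho @ Pm + R * (Px @ rho @ Pm + Pm @ rho @ Px)

print(np.allclose(phi, pred), R**2 + S**2 <= 1 + 1e-12)   # True True
\end{verbatim}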
\paragraph{Simplifying the conjecture---} Fortunately, the validity of the conjecture does not require explicit consideration of all possible initial states, nor of all possible observables measured by the four observers. For example, since the CHSH parameters $S(A_j,B_k)$ in Eq.~(\ref{chsh}) are convex-linear in the initial state $\rho$ shared by $A_1$ and $B_1$, only pure initial states $\rho=|\psi\rangle\langle\psi|$ need be considered to test the joint ranges of the parameters for any given measurements. We can further restrict to the 1-parameter class \begin{equation} \label{pure} |\psi\rangle= \cos\alpha |0\rangle\otimes |0\rangle + \sin \alpha |1\rangle\otimes |1\rangle,~~~~ \alpha\in[0,\pi/2] \end{equation} in the $\sigma_3$-basis $\{|0\rangle,|1\rangle\}$, as Bell nonlocality is invariant under local unitaries~\cite{Brunner14}. Moreover, once $A_1$ and $B_1$'s observables $X,X',Y,Y'$ have been specified, the optimal choices for $A_2$ and $B_2$ are uniquely determined, on each run, by which observer on the other side they are trying to generate Bell nonlocality with. For example, it follows from Eq.~(\ref{phirho}) that if $A_1$ and $B_1$ choose between their measurements with equal probabilities, as per the conjecture, and $T$ denotes the initial spin correlation matrix (with coefficients $T_{jk}:=\tr{\rho\,\sigma_j\otimes\sigma_k}$), then the correlation matrix shared by $A_2$ and $B_2$ is $KTL$, with~\cite{SM} \begin{equation} \label{barka} K:= \half(\mathcal R_{X}+\mathcal R_{X'})I_3 + \half(1-\mathcal R_{X})\bm x\bm x^\top + \half(1-\mathcal R_{X'})\bm x'\bm x'^\top \end{equation} \begin{equation} \label{barkb} L:= \half(\mathcal R_{Y}+\mathcal R_{Y'})I_3 + \half(1-\mathcal R_{Y})\bm y\bm y^\top + \half(1-\mathcal R_{Y'})\bm y'\bm y'^\top \nonumber \end{equation} (here $\mathcal R_X$ denotes the reversibility associated with observable $X$, etc.). It follows immediately from the Horodecki criterion that $A_2$ and $B_2$ can violate the CHSH inequality if and only if~\cite{Horodecki95} \begin{equation} \label{smax22} S^*(A_2,B_2):= 2\sqrt{s_1(KTL)^2 + s_2(KTL)^2} > 2, \end{equation} where $s_1(M)$ and $s_2(M)$ denote the two largest singular values of matrix $M$. Similarly, it can be shown that $(A_1,B_2)$ and $(A_2,B_1)$ can violate the CHSH inequality if and only if~\cite{SM} \begin{align} S^*(A_1, B_2) &:= \left| ({\mathcal B}_X+{\mathcal B}_{X'})L\bm b + LT^\top(\tilde{\bm x}+\tilde{\bm x}')\right| \nonumber\\ &~~~+\left| ({\mathcal B}_X-{\mathcal B}_{X'})L\bm b + LT^\top(\tilde{\bm x}-\tilde{\bm x}')\right| \label{s12} \\ S^*(A_2, B_1) &:= \left| ({\mathcal B}_Y+{\mathcal B}_{Y'})K\bm a + KT(\tilde{\bm y}+\tilde{\bm y}')\right| \nonumber\\ &~~~+\left| ({\mathcal B}_Y-{\mathcal B}_{Y'})K\bm a + KT(\tilde{\bm y}-\tilde{\bm y}')\right| \label{s21} \end{align} are greater than 2, respectively, where $\tilde{\bm x}:=\mathcal{S}_X\bm x, \tilde{\bm x}':=\mathcal{S}_{X'}{\bm x}'$, etc, and $\bm a$ and $\bm b$ are the initial Bloch vectors of the first and second qubits. Hence, to verify the conjecture one need only consider the values of $S(A_1,B_1)$ and the proxy quantities $S^*(A_1, B_2), S^*(A_2, B_1), S^*(A_2,B_2)$, over the 16-parameter set of observables $X,X',Y,Y'$ and a 1-parameter set of initial states. Moreover, if the observables are unbiased (i.e., ${\mathcal B}_X={\mathcal B}_{X'}={\mathcal B}_Y={\mathcal B}_{Y'}=0$), then it may be shown that the conjecture need only be verified for a 9-parameter set of observables on a single maximally entangled state~\cite{SM}. 
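To illustrate how these quantities can be evaluated in practice, the following minimal sketch (an editorial illustration under the simplifying assumptions of unbiased observables, equal strengths and a singlet state; it is not the authors' optimisation code) computes $S(A_1,B_1)$ from Eq.~(\ref{chsh}) and the proxy $S^*(A_2,B_2)$ from Eqs.~(\ref{barka}) and (\ref{smax22}), using \textsc{NumPy}.
\begin{verbatim}
# Editorial sketch: evaluate S(A1,B1) and the proxy S*(A2,B2) for unbiased
# observables of equal strength on a singlet state (T = -I), using the
# square-root measurements and Eqs. (barka) and (smax22).
import numpy as np

def K_matrix(R, Rp, u, up):
    # Eq. (barka): averaged map acting on the spin correlations for one side
    return (0.5 * (R + Rp) * np.eye(3)
            + 0.5 * (1 - R) * np.outer(u, u)
            + 0.5 * (1 - Rp) * np.outer(up, up))

def chsh(T, x, xp, y, yp, s):
    # Eq. (chsh) with <XY> = s^2 * x.T T y for unbiased observables of strength s
    c = lambda u, v: s * s * (u @ T @ v)
    return c(x, y) + c(x, yp) + c(xp, y) - c(xp, yp)

s = 0.9                                     # common strength of A1's and B1's observables
R = np.sqrt(1 - s**2)                       # reversibility, saturating Eq. (tradeoff)
x, xp = np.array([1., 0, 0]), np.array([0., 1, 0])
y, yp = (x + xp) / np.sqrt(2), (x - xp) / np.sqrt(2)
T = -np.eye(3)                              # singlet spin correlation matrix

S11 = chsh(T, x, xp, y, yp, s)
K, L = K_matrix(R, R, x, xp), K_matrix(R, R, y, yp)
sv = np.linalg.svd(K @ T @ L, compute_uv=False)
S22 = 2 * np.sqrt(sv[0]**2 + sv[1]**2)      # Eq. (smax22)
print(abs(S11), S22)                        # about 2.29 and 1.46
\end{verbatim}
Consistently with the conjecture, the first pair violates the CHSH inequality in this example while the proxy for the second pair stays below the local bound of $2$.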
Note that prior work~\cite{Silva15,Mal16,Curchod17,Tavakoli18,Bera18,Sasmal18,Shenoy19,Das19,Saha19,Kumari19,Brown20,Maity20,Bowles20, Roy20,Schiavon17,Hu18,Choi20,Foletto20,Feng20,Foletto21} is restricted to unbiased observables. \paragraph{Evidence for conjecture---}Our numerical evidence is based on performing optimisations of the proxy quantities $S^*(A_2,B_2)$ and $S^*(A_2, B_1)$ under the constraint that the respective quantities $S(A_1,B_1)$ and $S^*(A_1, B_2)$ achieve a minimum fixed value. These maximisations over the 17-dimensional parameter space were computed using a constrained differential evolution algorithm~\cite{Sto97,Lam02} implemented in the \textsc{SciPy} library~\cite{Vir20} (see Section IV of the Supplemental Material~\cite{SM} for details). All data points found through optimisation---illustrated as the blue dots in Fig.~\ref{fig:fig2}---lie strictly outside the grey regions, providing strong evidence that the conjecture holds. Moreover, we can directly prove the conjecture in a number of scenarios, using tradeoff relation~(\ref{tradeoff}). For example, for the case of equal strengths for each side, i.e., $\mathcal{S}_X=\mathcal{S}_{X'}$ and $\mathcal{S}_{Y}=\mathcal{S}_{Y'}$, the one-sided monogamy relation \begin{equation} \label{monog1} S^*(A_1, B_2)^2 + S^*(A_2,B_1)^2 \leq 8 \end{equation} holds for all states $\rho$ with maximally mixed marginals, i.e., $\bm a=\bm b=0$, and for arbitrary states if the observables are unbiased~\cite{SM}, implying Eq.~(\ref{conjecture}) holds for these pairs. The upper bound corresponds to the red curve on the right hand panel of Fig.~\ref{fig:fig2}. This relation also holds for the alternative case of orthogonal measurement directions on each side, i.e., $\bm x\cdot\bm x'=\bm y\cdot\bm y'=0$~\cite{SM}. One-sided monogamy relations for the pairs $(A_1,B_1)$ and $(A_2,B_2)$ can also be derived, such as \begin{equation} \label{monog2} |S(A_1, B_1)| + S^*(A_2,B_2) \leq 4 \end{equation} for the case of unbiased observables and equal strengths. The upper bound corresponds to the red curve on the left hand panel of Fig.~\ref{fig:fig2}, and can be improved to $16/(3\sqrt{2})$ for the case of orthogonal directions. However, since these relations require considerably more work (including a significant generalisation of the Horodecki criterion), they are only derived under {\it joint} strength and orthogonality assumptions in~\cite{SM}, with the general cases left to a companion paper~\cite{Cheng21}. \paragraph{Sequential Bell nonlocality via biased measurement selections---} The conjecture and above results require that observers $A_1$ and $B_1$ each select their measurements with equal probabilities. However, if they instead select between making a relatively weak measurement, with sufficiently high probability, and a relatively strong measurement, with correspondingly low probability, then the average disturbance to their state can be small enough to allow {\it all four pairs} to generate Bell nonlocality. To show this, suppose that $A_1$ measures $X$ and $X'$ with probabilities $1-\epsilon$ and $\epsilon$, and $B_1$ similarly measures $Y$ and $Y'$ with probabilities $1-\epsilon$ and $\epsilon$, on a singlet state (i.e., $T=-I$).
Suppose further that $X$ and $Y$ are measured with strengths $\mathcal{S}_X=\mathcal{S}_Y=\mathcal{S}$ and reversibilities $\mathcal R_X=\mathcal R_Y=\mathcal R$; that $X'$ and $Y'$ are projective, with strengths $\mathcal{S}_{X'}=\mathcal{S}_{Y'}=1$ and reversibilities $\mathcal R_{X'}=\mathcal R_{Y'}=0$; and the measurement directions are the optimal CHSH directions~\cite{Clauser69} (i.e., $\bm x$ and $\bm x'$ are orthogonal with $\bm y=(\bm x+\bm x')/\sqrt{2}$ and $\bm y'=(\bm x-\bm x')/\sqrt{2}$). Then $|S(A_1,B_1)|= (\mathcal{S}+1)^2/\sqrt{2}$ from Eq.~(\ref{chsh}), implying $A_1$ and $B_1$ can violate the CHSH inequality only if this is larger than 2, i.e., only if \begin{equation} \label{rplus} \mathcal{S} > 8^{1/4}-1 \sim 0.682. \end{equation} Further, the state shared by $A_2$ and $B_2$ has spin correlation matrix $T=-K_\epsilon L_\epsilon$, with $K_\epsilon = (1-\epsilon)[ \mathcal R I + (1-\mathcal R) \bm x\bm x^\top ] + \epsilon \bm x'\bm x'^\top$, and a similar expression for $L_\epsilon$ with $\bm x,\bm x'$ replaced by $\bm y,\bm y'$, implying via Eq.~(\ref{smax22}) that $A_2$ and $B_2$ can violate the CHSH inequality only if~\cite{SM} \begin{equation} \label{rmin} \mathcal R > \frac{[\sqrt{2}-(1-\epsilon)^2]^{1/2}-\epsilon}{1-\epsilon}> (\sqrt{2}-1)^{1/2} \sim 0.644 . \end{equation} Equations~(\ref{tradeoff}), (\ref{rplus}) and (\ref{rmin}) yield a narrow range of strengths, $0.682<\mathcal{S}<0.765$, over which both $(A_1,B_1)$ and $(A_2,B_2)$ can generate Bell nonlocality by recycling both qubits. Further, this range constrains the probability of making the projective measurement to~\cite{SM} \begin{equation} \epsilon<\epsilon_{\max}\sim 7.9\% . \end{equation} The pairs $(A_1,B_2)$ and $(A_2,B_1)$ can also generate Bell nonlocality under these conditions~\cite{SM}. \paragraph{Discussion---} Strong numerical and analytic evidence has been given to support an unexpected one-sided monogamy conjecture, which limits sequential violation of the CHSH inequality to one-sided qubit recycling if observers make unbiased measurement selections. Conversely, allowing sufficiently biased selections permits a narrow range of measurement strengths within which two-sided qubit recycling is possible. We propose testing the latter experimentally, as it in principle permits four independent pairs of observers to generate Bell nonlocality, and hence to carry out device-independent quantum information protocols such as randomness generation, via recycling of a two-qubit state. Generalisations of our methods are given in~\cite{Cheng21}, and we expect these methods can also be readily applied to the sequential sharing of quantum properties such as entanglement~\cite{Horodecki09}, Einstein-Podolsky-Rosen steering~\cite{Uola20}, and random access codes~\cite{Mohan19,Anwer20,Das21}. We also hope to find a more rigorous justification for restricting to square-root measurements. \acknowledgements We thank Yong Wang, Xinhui Li, Jie Zhu, Mengjun Hu, Ad\'an Cabello and two anonymous referees for helpful discussions and comments. S. C.~is supported by the Fundamental Research Funds for the Central Universities (No.~22120210092) and the National Natural Science Foundation of China (No.~62088101). L. L.~is supported by National Natural Science Foundation of China (No.~61703254). T. J. B.
is supported by the Australian Research Council Centre of Excellence CE170100012, and acknowledges the support of the Griffith University eResearch Service \& Specialised Platforms Team and the use of the High Performance Computing Cluster ``Gowonda'' to complete this research. \\ {\it Note:} Monogamy for the pairs $(A_1,B_1), (A_2,B_2)$, for the very special case of measurements of unbiased observables on a singlet state in the optimal CHSH directions and with equal strengths for each side, has been noted independently in a recent work~\cite{Jie21}. \begin{thebibliography}{99} \bibitem{Bennett93} C. H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres, and W. K. Wootters, Teleporting an unknown quantum state via dual classical and Einstein-Podolsky-Rosen channels, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.70.1895}{Phys. Rev. Lett. \textbf{70}, 1895 (1993).} \bibitem{Ekert91} A. K. Ekert, Quantum cryptography based on Bell’s theorem, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.67.661}{Phys. Rev. Lett. \textbf{67}, 661 (1991).} \bibitem{Silva15} R. Silva, N. Gisin, Y. Guryanova, and S. Popescu, Multiple Observers can Share the Nonlocality of Half of an Entangled Pair by Using Optimal Weak Measurements, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.114.250401}{Phys. Rev. Lett. \textbf{114}, 250401 (2015).} \bibitem{Mal16} S. Mal, A. Majumdar, and D. Home, Sharing of nonlocality of a single member of an entangled pair of qubits is not possible by more than two unbiased observers on the other wing, \href{https://www.mdpi.com/2227-7390/4/3/48}{Mathematics \textbf{4}, 48 (2016).} \bibitem{Curchod17} F. J. Curchod, M. Johansson, R. Augusiak, M. J. Hoban, P. Wittek, and A. Acín, Unbounded randomness certification using sequences of measurements, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.95.020102}{Phys. Rev. A \textbf{95}, 020102(R) (2017).} \bibitem{Tavakoli18} A. Tavakoli and A. Cabello, Quantum predictions for an unmeasured system cannot be simulated with a finite-memory classical system, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.97.032131}{Phys. Rev. A \textbf{97}, 032131 (2018).} \bibitem{Bera18} A. Bera, S. Mal, A. Sen(De), and U. Sen, Witnessing bipartite entanglement sequentially by multiple observers, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.98.062304}{Phys. Rev. A \textbf{98}, 062304 (2018).} \bibitem{Sasmal18} S. Sasmal, D. Das, S. Mal, and A. S. Majumdar, Steering a single system sequentially by multiple observers, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.98.012305}{Phys. Rev. A \textbf{98}, 012305 (2018).} \bibitem{Shenoy19} A. Shenoy H., S. Designolle, F. Hirsch, R. Silva, N. Gisin, and N. Brunner, Unbounded sequence of observers exhibiting Einstein-Podolsky-Rosen steering, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.99.022317}{Phys. Rev. A \textbf{99}, 022317 (2019).} \bibitem{Das19} D. Das, A. Ghosal, S. Sasmal, S. Mal, and A. S. Majumdar, Facets of bipartite nonlocality sharing by multiple observers via sequential measurements, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.99.022305}{Phys. Rev. A \textbf{99}, 022305 (2019).} \bibitem{Saha19} S. Saha, D. Das, S. Sasmal, D. Sarkar, K. Mukherjee, A. Roy, and S. S. Bhattacharya, Sharing of tripartite nonlocality by multiple observers measuring sequentially at one side, \href{https://link.springer.com/article/10.1007/s11128-018-2161-x}{Quantum Inf. Process.
\textbf{18}, 42 (2019).} \bibitem{Kumari19} A. Kumari and A. K. Pan, Sharing nonlocality and nontrivial preparation contextuality using the same family of Bell expressions, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.100.062130}{Phys. Rev. A \textbf{100}, 062130 (2019).} \bibitem{Brown20} P. J. Brown and R. Colbeck, Arbitrarily Many Independent Observers Can Share the Nonlocality of a Single Maximally Entangled Qubit Pair, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.125.090401}{Phys. Rev. Lett. \textbf{125}, 090401 (2020).} \bibitem{Maity20} A. G. Maity, D. Das, A. Ghosal, A. Roy, and A. S. Majumdar, Detection of genuine tripartite entanglement by multiple sequential observers, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.101.042340}{Phys. Rev. A \textbf{101}, 042340 (2020).} \bibitem{Bowles20} J. Bowles, F. Baccari, and A. Salavrakos, Bounding sets of sequential quantum correlations and device-independent randomness certification, \href{https://quantum-journal.org/papers/q-2020-10-19-344/}{Quantum \textbf{4}, 344 (2020).} \bibitem{Roy20} S. Roy, A. Kumari, S. Mal, and A. S. De, Robustness of Higher Dimensional Nonlocality against dual noise and sequential measurements, \href{https://arxiv.org/abs/2012.12200}{arXiv:2012.12200 (2020).} \bibitem{Schiavon17} M. Schiavon, L. Calderaro, M. Pittaluga, G. Vallone, and P. Villoresi, Three-observer Bell inequality violation on a two-qubit entangled state, \href{https://iopscience.iop.org/article/10.1088/2058-9565/aa62be/meta?casa_token=LvvpM2YDF24AAAAA:iviUPtPBsEr-gk-bjEm9tfC6vkt5IJ1vqWNMmMmAMyiUEETYLGOzMGcaFLEfKUUOyJsNBIwxgQ}{Quantum Sci. Technol. \textbf{2}, 015010 (2017).} \bibitem{Hu18} M. J. Hu, Z. Y. Zhou, X. M. Hu, C. F. Li, G. C. Guo, and Y. S. Zhang, Observation of non-locality sharing among three observers with one entangled pair via optimal weak measurement, \href{https://www.nature.com/articles/s41534-018-0115-x}{Npj. Quantum. Inform. \textbf{4}, 63 (2018).} \bibitem{Choi20} Y.-H. Choi, S. Hong, T. Pramanik, H.-T. Lim, Y.-S. Kim, H. Jung, S.-W. Han, S. Moon, and Y.-W. Cho, Demonstration of simultaneous quantum steering by multiple observers via sequential weak measurements, \href{https://www.osapublishing.org/optica/abstract.cfm?uri=optica-7-6-675}{Optica \textbf{7}, 675-679 (2020).} \bibitem{Foletto20} G. Foletto, L. Calderaro, A. Tavakoli, M. Schiavon, F. Picciariello, A. Cabello, P. Villoresi, and G. Vallone, Experimental Certification of Sustained Entanglement and Nonlocality after Sequential Measurements, \href{https://journals.aps.org/prapplied/abstract/10.1103/PhysRevApplied.13.044008}{Phys. Rev. Applied \textbf{13}, 044008 (2020).} \bibitem{Feng20} T. Feng, C. Ren, Y. Tian, M. Luo, H. Shi, J. Chen, and X. Zhou, Observation of nonlocality sharing via not-so-weak measurements, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.102.032220}{Phys. Rev. A \textbf{102}, 032220 (2020).} \bibitem{Foletto21} G. Foletto, M. Padovan, M. Avesani, H. Tebyanian, P. Villoresi, and G. Vallone, Experimental Test of Sequential Weak Measurements for Certified Quantum Randomness Extraction, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.103.062206}{Phys. Rev. A \textbf{103}, 062206 (2021).} \bibitem{Ekert14} A. Ekert and R.
Renner, The ultimate physical limits of privacy, \href{https://www.nature.com/articles/nature13132}{Nature (London) \textbf{507}, 443 (2014).} \bibitem{Pironio10} S. Pironio, A. Acín, S. Massar, A. B. de la Giroday, D. N. Matsukevich, P. Maunz, S. Olmschenk, D. Hayes, L. Luo, T. A. Manning, and C. Monroe, Random numbers certified by Bell’s theorem, \href{https://www.nature.com/articles/nature09008}{Nature (London) \textbf{464}, 1021 (2010).} \bibitem{SM} See supplementary material for detailed information, including Refs.~\cite{Choudhary13,Horodecki96,Bhatia97,Gol15}. \bibitem{Cheng21} S. Cheng, L. Liu, and M. J. W. Hall, in preparation. \bibitem{Clauser69} J. F. Clauser, M. A. Horne, A. Shimony, and R. A. Holt, Proposed Experiment to Test Local Hidden-Variable Theories,\href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.23.880}{Phys. Rev. Lett. \textbf{23}, 880 (1969).} \bibitem{Brunner14} N. Brunner, D. Cavalcanti, S. Pironio, V. Scarani, and S. Wehner, Bell nonlocality, \href{https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.86.419}{Rev. Mod. Phys. \textbf{86}, 419 (2014).} \bibitem{adan} Ad\'an Cabello, private communication. \bibitem{Banaszek01} K. Banaszek, Fidelity Balance in Quantum Operations, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.86.1366}{Phys. Rev. Lett. \textbf{86}, 1366 (2001).} \bibitem{Choudhary13} S. K. Choudhary, T. Konrad, and H. Uys, Implementation schemes for unsharp measurements with trapped ions, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.87.012131}{Phys. Rev. A \textbf{87}, 012131 (2013).} \bibitem{Horodecki95} R. Horodecki, P. Horodecki, and M. Horodecki, Violating Bell inequality by mixed spin-$1/2$ states: necessary and sufficient condition,\href{https://www.sciencedirect.com/science/article/abs/pii/037596019500214N}{Phys. Lett. A \textbf{200}, 340 (1995).} \bibitem{Sto97} R.~Storn and K.~Price, \newblock Differential Evolution –A Simple and Efficient Heuristic for global Optimization over Continuous Spaces, \newblock \href{https://doi.org/10.1023/A:1008202821328}{{Journal of Global Optimization}, \textbf{11} (4):341--359, (1997).} \bibitem{Lam02} J.~Lampinen, \newblock A constraint handling approach for the differential evolution algorithm, \newblock In \href{https://ieeexplore.ieee.org/document/1004459}{{\em Proceedings of the Evolutionary Computation on 2002. CEC '02. Proceedings of the 2002 Congress - Volume 02}, CEC '02, pages 1468--1473, USA, 2002. IEEE Computer Society.} \bibitem{Vir20} P.~Virtanen, R.~Gommers, T.~E.~Oliphant \emph{et al.}~SciPy 1.0: fundamental algorithms for scientific computing in Python, \href{https://www.nature.com/articles/s41592-019-0686-2}{Nat. Methods \textbf{17}, 261–272 (2020).} \bibitem{Gol15} S.~Golchi and J.~L. Loeppky, \newblock Monte Carlo based Designs for Constrained Domains, \newblock \href{https://arxiv.org/abs/1512.07328}{ arXiv:1512.07328 (2015).} \bibitem{Horodecki09} R. Horodecki, P. Horodecki, M. Horodecki, and K. Horodecki, Quantum entanglement, \href{https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.81.865}{Rev. Mod. Phys. \textbf{81}, 865 (2009).} \bibitem{Uola20} R. Uola, A. C. S. Costa, H. C. Nguyen, and O. G\"uhne, Quantum steering, \href{https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.92.015001}{Rev. Mod. Phys. \textbf{92}, 015001 (2020).} \bibitem{Mohan19} K. Mohan, A. Tavakoli, and N. 
Brunner, Sequential random access codes and self-testing of quantum measurement instruments, \href{https://iopscience.iop.org/article/10.1088/1367-2630/ab3773/meta}{New J. Phys. \textbf{21}, 083034 (2019).} \bibitem{Anwer20} H. Anwer, S. Muhammad, W. Cherifi, N. Miklin, A. Tavakoli, and M. Bourennane, Experimental Characterization of Unsharp Qubit Observables and Sequential Measurement Incompatibility via Quantum Random Access Codes, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.125.080403}{Phys. Rev. Lett. \textbf{125}, 080403 (2020).} \bibitem{Das21} D. Das, A. Ghosal, S. Kanjilal, A. G. Maity, and A. Roy, Unbounded pairs of observers can achieve quantum advantage in random access codes with a single pair of qubits, \href{https://arxiv.org/abs/2101.01227}{arXiv:2101.01227 (2021).} \bibitem{Horodecki96} R. Horodecki and M. Horodecki, Information-theoretic aspects of inseparability of mixed states, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.54.1838}{Phys. Rev. A {\bf 54}, 1838 (1996).} \bibitem{Bhatia97} R. Bhatia, {\it Matrix Analysis}, Springer, New York, 1997. \bibitem{Jie21} J. Zhu, M.-J. Hu, G.-C. Guo, C.-F. Li, and Y.-S. Zhang, Einstein-Podolsky-Rosen Steering in Two-sided Sequential Measurements with One Entangled Pair, \href{https://arxiv.org/abs/2102.02550}{arXiv:2102.02550 (2021).} \end{thebibliography} \setcounter{equation}{0} \renewcommand{\theequation}{S.\arabic{equation}} \setcounter{figure}{0} \renewcommand{\thefigure}{S\arabic{figure}} \setcounter{page}{1} \renewcommand{\thepage}{Supplemental Material -- \arabic{page}/10} \section{\large{SUPPLEMENTAL MATERIAL}} \section{I. Bias, strength and maximum reversibility of two-valued qubit measurements} \subsection{A. Measurement strength and outcome bias} Recall from the main text that a general two-valued qubit observable, with POVM $\{X_+,X_-\}$, is equivalently represented by the operator \begin{equation} \label{xrep} X = X_+-X_- = {\cal B} \mathbbm{1} + {\cal S}\bm \sigma\cdot\bm x , \end{equation} where $\mathcal{S}\geq 0$ and ${\mathcal B}$ denote the {\it strength} and {\it outcome bias} of the observable, respectively, and $\bm x$ is the associated measurement direction, with unit norm $|\bm x|:= (\bm x\cdot\bm x)^{1/2} =1$. The POVM elements are determined uniquely by $X$ via $X_\pm=\half(\mathbbm{1}\pm X)$, and the positivity requirement $X_\pm\geq 0$ is equivalent to $-\mathbbm{1}\leq X\leq \mathbbm{1}$, i.e., to the condition \begin{equation} |{\mathcal B}|+\mathcal{S}\leq 1 \end{equation} on the strength and bias. We note that $\mathcal{S}$ is also referred to as `information gain' and denoted by $G$~[3]. However, we prefer to use the alternative term `strength'~[9], in part to distinguish it from the average information gain $G$ defined by Banaszek~[30] (see also Sec.~I.C below). Another possible terminology is `sharpness'~[31]. It is convenient for later purposes to write $X$ in the form \begin{equation} \label{eig} X := x_+P_{\bm x} + x_-P_{-\bm x}, \end{equation} where \begin{equation} P_{\bm x}:=\frac{\mathbbm{1}+\bm\sigma\cdot\bm x}{2} \end{equation} denotes the projection onto spin direction $\bm x$. Hence, \begin{equation} \label{bias} x_\pm={\mathcal B}\pm\mathcal{S}, \end{equation} and \begin{equation} \label{Xpm} X_\pm = \frac{1\pm x_+}{2}\p{x}+\frac{1\pm x_-}{2}\p{-x} . \end{equation} \subsection{B.
Maximum reversibility} To obtain Eqs.~(5) and~(6) of the main text, note from Eq.~(\ref{Xpm}) that \begin{equation} \label{root} X_\pm^{1/2} = \sqrt{\frac{1\pm x_+}{2}}\p{x}+\sqrt{\frac{1\pm x_-}{2}}\p{-x}. \end{equation} Hence, the square-root measurement operation corresponding to $X$ takes state $\rho$ to the state \begin{align} \phi(\rho)&:=X_+^{1/2}\rho X_+^{1/2} + X_-^{1/2}\rho X_-^{1/2}\nonumber\\ &= \left(\frac{1+x_+}{2}+\frac{1-x_+}{2}\right)\p{x}\rho\p{x}\nonumber\\ &~+ \left(\frac{1+x_-}{2}+\frac{1-x_-}{2}\right)\p{-x}\rho\p{-x}\nonumber\\ &~+\half\sqrt{(1+x_+)(1+x_-)}\left(\p{x}\rho\p{-x}+\p{-x}\rho\p{x}\right)\nonumber\\ &~+\half\sqrt{(1-x_+)(1-x_-)}\left(\p{x}\rho\p{-x}+\p{-x}\rho\p{x}\right)\nonumber\\ &= \p{x}\rho\p{x} + \p{-x}\rho\p{-x}+ \mathcal R\left(\p{x}\rho\p{-x}+\p{-x}\rho\p{x}\right), \label{rhoprime} \end{align} as per Eq.~(5), with the maximum reversibility $\mathcal R$ given by \begin{equation} \label{reverse} \mathcal R :=\half\sqrt{(1+x_+)(1+x_-)} + \half\sqrt{(1-x_+)(1-x_-)} . \end{equation} Equation~(\ref{bias}) then yields \begin{equation} \mathcal R = \half\sqrt{(1+{\mathcal B})^2-\mathcal{S}^2} + \half\sqrt{(1-{\mathcal B})^2-\mathcal{S}^2} \label{reversibility} \end{equation} as per Eq.~(6) of the main text. The interpretation of $\mathcal R$ as the {\it maximum} reversibility of any measurement of $X$ is directly supported by a class of unbiased weak qubit measurements considered by Silva {\it et al.}~[3]. For these measurements, comparing Eq.~(\ref{xrep}) above with Eqs.~(3) and~(42) of~[3], \begin{equation} {\mathcal B}=0,\qquad \mathcal{S} = \frac{1-e^{-2{\rm Re}(a)}}{1+e^{-2{\rm Re}(a)}}, \end{equation} where $a$ is a complex parameter with nonnegative real part. This corresponds to a maximum reversibility \begin{equation} \mathcal R = \sqrt{1-\mathcal{S}^2} = \frac{2e^{-{\rm Re}(a)}}{1+e^{-2{\rm Re}(a)}} \end{equation} via Eq.~(\ref{reversibility}). Further, from Eqs.~(4) and~(44) of~[3] the post-measurement state is given by \begin{equation} \phi_a(\rho) := \p{x}\rho\p{x} + \p{-x}\rho\p{-x}+ {\mathcal F}\left(\p{x}\rho\p{-x}+\p{-x}\rho\p{x}\right) \end{equation} where $\mathcal F$ is the `quality factor' defined by \begin{equation} {\mathcal F}:= \frac{2e^{-{\rm Re}(a)} |\cos {\rm Im}(a)|}{1+e^{-2{\rm Re}(a)}} . \end{equation} Thus, the off-diagonal elements of the post-measurement state are scaled by a factor $\mathcal F\leq\mathcal R$ (with equality for ${\rm Im}(a)=0$), and so are clearly less reversible than the square-root measurement in general, as expected. It further follows that optimal measurements of this type, with ${\mathcal F}={\mathcal F}_{\max}=\mathcal R$, correspond to square root measurements. \subsection{C. Fundamental tradeoff relation} \label{sec:tradeoff} Squaring each side of Eq.~(\ref{reversibility}), then rearranging and squaring again, leads to the tradeoff relation \begin{equation} \label{rssum} \mathcal R^2+\mathcal{S}^2 = 1- {\mathcal B}^2\left(\frac{1}{\mathcal R^2}-1\right)\leq 1 \end{equation} between strength and reversibility, as per Eq.~(7) of the main text. This tradeoff is not only very helpful for studying the shareability of Bell nonlocality via sequential measurements, but is also of interest more generally.
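As a quick consistency check of the algebra above, the following minimal \textsc{Python} snippet (illustrative only, and not part of the original analysis) samples admissible pairs $({\mathcal B},\mathcal{S})$, computes $\mathcal R$ from Eq.~(\ref{reversibility}), and verifies Eq.~(\ref{rssum}) numerically.
\begin{verbatim}
# Numerical check (sketch) of the tradeoff relation R^2 + S^2 = 1 - B^2 (1/R^2 - 1),
# with R the maximum reversibility computed from the bias B and strength S.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(10):
    B = rng.uniform(-1.0, 1.0)
    S = rng.uniform(0.0, 1.0 - abs(B))       # positivity requirement |B| + S <= 1
    R = 0.5*np.sqrt((1 + B)**2 - S**2) + 0.5*np.sqrt((1 - B)**2 - S**2)
    lhs = R**2 + S**2
    rhs = 1 - B**2*(1/R**2 - 1)
    assert np.isclose(lhs, rhs) and lhs <= 1 + 1e-12
\end{verbatim}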
For example, for a general quantum measurement on a $d$-dimensional system, Banaszek defines a corresponding `mean operation fidelity' $F$, related to the disturbance caused by the measurement, and a `mean estimation fidelity' $G$, related to the average information gain or quality of the measurement, and shows that these satisfy the general information-disturbance relation~[30] \begin{equation} \sqrt{F-\frac{1}{d+1}}\leq\sqrt{G-\frac{1}{d+1}}+\sqrt{(d-1)(\frac{2}{d+1}-G)}. \nonumber \end{equation} For a two-valued qubit measurement with Kraus operators $M_\pm$, corresponding to the POVM $\{X_\pm=M^\dagger_\pm M_\pm\}$, these quantities may be calculated explicitly, yielding \begin{align} 6F &= 2 + \left|\tr{M_+}\right|^2 + \left|\tr{M_-}\right|^2 \nonumber\\ &= 2 + \left|\tr{U_+X_+^{1/2}}\right|^2 + \left|\tr{U_-X_-^{1/2}}\right|^2 \nonumber\\ &\leq 2 + \tr{X_+^{1/2}}^2 + \tr{X_-^{1/2}}^2 \nonumber\\ &= 2\mathcal R + 4 \label{fdef} \end{align} (where the second line uses the polar decomposition $M_\pm=U_\pm X_\pm^{1/2}$ for unitary operators $U_\pm$ and the last line follows via Eqs.~(\ref{root}) and~(\ref{reverse})), and \begin{equation} 6G = 2 + \lambda_{\max}(X_+) + \lambda_{\max}(X_-) = \mathcal{S}+3, \end{equation} where $\lambda_{\max}(M)$ denotes the maximum eigenvalue of $M$. Thus, $F$ and $G$ can be reinterpreted in terms of the maximum reversibility and strength of the measurement for this case (note also the {\it maximum} reversibility property of $\mathcal R$ is emphasised by the inequality in Eq.~(\ref{fdef}), which is saturated for the square-root measurement $M_\pm=X_\pm^{1/2}$). Further, our tradeoff relation~(\ref{rssum}) for $\mathcal R$ and $\mathcal{S}$ can be rewritten as \begin{equation} \nonumber 2+2\mathcal R \leq 2+2\sqrt{1-\mathcal{S}^2} = \left(\sqrt{1+\mathcal{S}}+\sqrt{1-\mathcal{S}}\right)^2, \end{equation} which, on taking the square root and substituting the above expressions for $F$ and $G$, yields \begin{equation} \sqrt{6F-2} \leq \sqrt{6G-2} + \sqrt{4-6G}. \end{equation} Thus, the fundamental tradeoff relation is equivalent to Banaszek's information-disturbance relation for qubit measurements. Generalisations and further applications of tradeoff relation~(\ref{rssum}) will be discussed elsewhere~[26]. \section{II. Spin correlation matrix, Bloch vectors, and the forms of $K$ and $L$} The spin correlation matrix $T$ and Bloch vectors $\bm a, \bm b$ of a two-qubit state $\rho$ are defined by \begin{equation} T_{jk}:=\tr{\rho (\sigma_j\otimes\sigma_k)}, \end{equation} \begin{equation} a_j:=\tr{\rho(\sigma_j\otimes\mathbbm{1})}, ~~b_j:=\tr{\rho(\mathbbm{1}\otimes\sigma_j)}, \end{equation} for $j,k=1,2,3$. To calculate the effect of a maximally reversible measurement of $X$ by $A_1$ on these quantities, note first that substituting $\p{x}=\half(\mathbbm{1}+\bm\sigma\cdot\bm x)$ in Eq.~(\ref{rhoprime}) gives \begin{align} \phi(\rho)= \half(\rho+\bm\sigma\cdot\bm x\rho\bm\sigma\cdot\bm x) +\half\mathcal R(\rho - \bm\sigma\cdot\bm x\rho\bm\sigma\cdot\bm x).
\end{align} Thus, $\phi(\mathbbm{1})=\mathbbm{1}$ (i.e., the map is unital), and using the identity \begin{align} (\bm\sigma\cdot\bm x)\sigma_j(\bm\sigma\cdot\bm x)&= x_kx_l\sigma_k\sigma_j\sigma_l\nonumber \\ &=x_kx_l\sigma_k(\delta_{jl} + i\epsilon_{jlm}\sigma_m) \nonumber\\ &=x_j(\bm \sigma\cdot\bm x)+ ix_kx_l \epsilon_{jlm}(\delta_{km}+i\epsilon_{kmn}\sigma_n)\nonumber\\ &=x_j(\bm \sigma\cdot\bm x) +x_kx_l \epsilon_{jlm}\epsilon_{knm}\sigma_n\nonumber\\ &=x_j(\bm \sigma\cdot\bm x) +x_kx_l(\delta_{jk}\delta_{ln}-\delta_{jn}\delta_{kl})\sigma_n\nonumber\\ &=2x_j(\bm \sigma\cdot\bm x) - \sigma_j , \end{align} with summation over repeated indices, gives \begin{equation} \phi(\sigma_j) = x_j(\bm \sigma\cdot\bm x) + \mathcal R(\sigma_j -x_j\bm \sigma\cdot\bm x). \end{equation} Thus, noting from Eq.~(\ref{rhoprime}) that the map is self-dual, i.e., $\tr{\phi(M)N}=\tr{M\phi(N)}$, a measurement of $X$ on the first qubit of a two-qubit state $\rho$ changes the spin correlation matrix $T$ to $T^X$, with \begin{align} T^X_{jk} &= \tr{ (\phi\otimes I)(\rho)\,(\sigma_j\otimes\sigma_k)} =\tr{\rho \,\phi(\sigma_j)\otimes \sigma_k}\nonumber\\ &= \langle [ x_j(\bm \sigma\cdot\bm x) + \mathcal R(\sigma_j -x_j\bm \sigma\cdot\bm x)]\otimes\sigma_k\rangle\nonumber\\ &= \mathcal R\langle \sigma_j\otimes\sigma_k\rangle +(1-\mathcal R)x_jx_l\langle \sigma_l\otimes\sigma_k\rangle \nonumber\\ &= \mathcal R T_{jk} + (1-\mathcal R)x_jx_lT_{lk}. \end{align} Hence, \begin{equation} \label{tx} T^X = K^XT,\qquad K^X:= \mathcal R_{X} I_3 +(1-\mathcal R_{X})\bm x\bm x^\top , \end{equation} where $I_3$ is the $3\times3$ identity matrix, and we have replaced $\mathcal R$ by $\mathcal R_{X}$ to indicate we are referring to the measurement $X$. One similarly finds that the Bloch vector $\bm a$ changes to $K^X\bm a$, while the Bloch vector $\bm b$ of the second qubit is, of course, unchanged by a measurement on the first. Similarly, if observer $B_1$ measures the POVM $Y\equiv\{Y_+,Y_-\}$, then the spin correlation matrix changes to \begin{equation} \label{ty} T^Y = TL^Y,\qquad L^Y:= \mathcal R_{Y} I_3 +(1-\mathcal R_{Y})\bm y\bm y^\top, \end{equation} and the Bloch vector of the second qubit changes to $L^Y\bm b$. Further, if {\it both} observers make a measurement, then the spin matrix and Bloch vectors $\bm a,\bm b$ transform to \begin{equation} \label{txy} T^{XY} = K^XTL^Y, ~~ K^X\bm a,~~ L^Y\bm b, \end{equation} respectively. Finally, if $A_1$ and $B_1$ each measure one of two POVMs, $X$ or $X'$ and $Y$ or $Y'$, with equal probabilities, it follows that $K^X$ and $L^Y$ above are replaced by the `average' matrices $K$ and $L$ defined by \begin{align} K&:= \half(K^X+K^{X'}),~~~ L&:= \half(L^Y+L^{Y'}), \label{barka} \end{align} as per Eq.~(9) of the main text. \section{III. Simplifying the conjecture} \subsection{A. Derivation of $S^*(A_1,B_2)$ and $S^*(A_2,B_1)$} We first demonstrate that, for given observables $X,X',Y,Y'$ measured by $A_1$ and $B_1$, $A_2$ and $B_1$ can violate the CHSH inequality if and only if $S^*(A_2,B_1)>2$ as per Eq.~(12) of the main text, and similarly that $A_1$ and $B_2$ can violate the CHSH inequality if and only if $S^*(A_1,B_2)>2$. This greatly simplifies obtaining numerical and analytic evidence for the conjecture stated in the main text. In particular, for the pair $(A_2,B_1)$, suppose that $A_2$ measures observables corresponding to $W={\mathcal B}_{W}\mathbbm{1}+\mathcal{S}_{W}\bm \sigma \cdot \bm w$ and $W'={\mathcal B}_{W'}\mathbbm{1}+\mathcal{S}_{W'}\bm \sigma\cdot \bm w'$.
Hence, the CHSH parameter for $(A_2,B_1)$ follows from Eqs.~(\ref{chsh}) and~(\ref{measurement}) as \begin{align} S(A_2,B_1)&=\an{W\otimes (Y+Y')}+\an{W'\otimes(Y-Y')} \nonumber \\ &= {\mathcal B}_{W}\an{Y+Y'}+\mathcal{S}_W\an{\bm \sigma\cdot\bm w \otimes (Y+Y')} \nonumber \\ &~+{\mathcal B}_{W'}\an{Y-Y'}+\mathcal{S}_{W'}\an{\bm \sigma\cdot\bm w' \otimes (Y-Y')}. \end{align} This is linear in the biases ${\mathcal B}_{W}$ and ${\mathcal B}_{W'}$, and hence, recalling that $\mathcal{S}+|{\mathcal B}|\leq1$, it achieves its extremal values for fixed strengths $\mathcal{S}_W,\mathcal{S}_{W'}$ at ${\mathcal B}_{W}=\alpha(1-\mathcal{S}_W)$ and ${\mathcal B}_{W'}=\beta(1-\mathcal{S}_{W'})$, for $\alpha,\beta=\pm1$, yielding \begin{align} S(A_2,B_1)\leq& \max_{\alpha,\beta} f_{\alpha\beta} \label{fab} \end{align} with \begin{align} f_{\alpha\beta}&:=\alpha(1- \mathcal{S}_{W})\an{ Y+Y'}+\mathcal{S}_W\an{\bm \sigma\cdot\bm w \otimes (Y+Y')} \nonumber \\ &~+\beta(1-\mathcal{S}_{W'})\an{Y-Y'}+\mathcal{S}_{W'}\an{\bm \sigma\cdot\bm w' \otimes (Y-Y')}. \nonumber \end{align} Again, since $f_{\alpha\beta}$ is linear in the measurement strengths $\mathcal{S}_{W}, \mathcal{S}_{W'}$, its extreme values must be achieved at $\mathcal{S}_{W}, \mathcal{S}_{W'} \in\{0, 1\}$. Hence, we only need to analyse its values at these points. First, for $\mathcal{S}_{W}=\mathcal{S}_{W'}=0$ we have $f_{\alpha\beta}=(\alpha+\beta)\an{Y}+(\alpha-\beta)\an{Y'}\leq |\alpha+\beta|+|\alpha-\beta|= 2$, and so the CHSH inequality cannot be violated for this choice. Second, for $\mathcal{S}_W=0$ and $\mathcal{S}_{W'}=1$ we have \begin{align} f_{\alpha\beta}&=\alpha\an{Y+Y'}+\an{\bm \sigma\cdot\bm w' \otimes (Y-Y')}\nonumber\\ & =2\alpha\big(\an{P_{\alpha\bm w'}\otimes Y}+\an{P_{-\alpha\bm w'}\otimes Y'}\big)\leq 2,\nonumber \end{align} where $P_{\bm x}=\half(\mathbbm{1}+\bm\sigma\cdot\bm x)$ denotes the projection onto unit spin direction $\bm x$, and so the CHSH inequality again cannot be violated for this choice, nor, by symmetry, for the choice $\mathcal{S}_W=1$ and $\mathcal{S}_{W'}=0$. Thus, it is only possible for $(A_2,B_1)$ to violate the inequality for the remaining choice $\mathcal{S}_{W}=\mathcal{S}_{W'}=1$, for which we have, via Eq.~(\ref{fab}), \begin{align} S(A_2,B_1)&\leq\an{\bm w \cdot \bm \sigma \otimes (Y+Y')}+\an{\bm w' \cdot \bm \sigma \otimes (Y-Y')}. \end{align} Note that equality holds for $W=\bm\sigma\cdot \bm w$ and $W'=\bm\sigma\cdot \bm w'$. Hence, $(A_2,B_1)$ can violate the CHSH inequality if and only if they can violate it via $A_2$ making projective measurements. A similar result holds for $(A_1,B_2)$ by symmetry. Moreover, we can find the optimal projective measurements for $A_2$ to make (and similarly for $B_2$), as follows.
First, for projective measurements $W=\bm\sigma\cdot \bm w$ and $W'=\bm\sigma\cdot \bm w'$, we have \begin{align} S(A_2,B_1)&=\an{W \otimes (Y+Y')}+\an{W' \otimes (Y-Y')}\nonumber\\ &= ({\mathcal B}_Y+{\mathcal B}_{Y'})\langle W\rangle + \an{W \otimes \bm \sigma \cdot(\tilde{\bm y}+\tilde{\bm y}') } \nonumber\\ &~+({\mathcal B}_Y-{\mathcal B}_{Y'})\langle W'\rangle + \an{W' \otimes \bm \sigma \cdot(\tilde{\bm y}-\tilde{\bm y}') } \nonumber\\ &= \bm w\cdot \left[ ({\mathcal B}_Y+{\mathcal B}_{Y'})K\bm a + KT(\tilde{\bm y}+\tilde{\bm y}')\right] \nonumber\\ &~+\bm w'\cdot \left[ ({\mathcal B}_Y-{\mathcal B}_{Y'})K\bm a + KT(\tilde{\bm y}-\tilde{\bm y}')\right] \nonumber\\ &\leq \left| ({\mathcal B}_Y+{\mathcal B}_{Y'})K\bm a + KT(\tilde{\bm y}+\tilde{\bm y}')\right| \nonumber\\ &~+\left| ({\mathcal B}_Y-{\mathcal B}_{Y'})K\bm a + KT(\tilde{\bm y}-\tilde{\bm y}')\right|\nonumber\\ &=S^*(A_2,B_1), \end{align} where $\tilde{\bm y}:=\mathcal{S}_Y \bm y$, $\tilde{\bm y}':=\mathcal{S}_{Y'} \bm y'$, $\bm a$ is $A_1$'s Bloch vector for the initial shared state, and $T$ is the spin correlation matrix of the initial shared state. We have used the fact that $A_2$'s Bloch vector is $K\bm a$, and the spin correlation matrix for the state shared by $A_2$ and $B_1$ is $KT$ (see Sec.~II above). Equality holds in the last line by choosing $\bm w$ to be the unit vector in the $({\mathcal B}_Y+{\mathcal B}_{Y'})K\bm a + KT(\tilde{\bm y}+\tilde{\bm y}')$ direction and $\bm w'$ to be the unit vector in the $({\mathcal B}_Y-{\mathcal B}_{Y'})K\bm a + KT(\tilde{\bm y}-\tilde{\bm y}')$ direction. Hence, $A_2$ and $B_1$ can violate the CHSH inequality if and only if $S^*(A_2,B_1)>2$, as claimed in the main text. It may similarly be shown that $A_1$ and $B_2$ can violate the CHSH inequality if and only if $S^*(A_1,B_2)>2$. \subsection{B. Optimality of the singlet state for \\unbiased observables} \label{sec:singlet} Consider now the case that the observables are unbiased (i.e., ${\mathcal B}_X={\mathcal B}_{X'}={\mathcal B}_Y={\mathcal B}_{Y'}=0$). Prior work~[3--22] has in fact been restricted to this case. We show that validity of the conjecture can then be reduced to testing it on the singlet state for a 9-parameter subset of observables. First, using Eq.~(1) of the main text and Eqs.~(\ref{tx})--(\ref{txy}) above, it follows for unbiased observables that the CHSH parameters for each pair $(A_j,B_k)$ are convex-linear in the spin correlation matrix $T$ of the initial state (and are independent of the Bloch vectors). Further, any physical spin correlation matrix $T$ can be written as a convex combination of the spin correlation matrices of maximally entangled states, as follows from the proof of Proposition~1 of~[42] (in particular, $T$ can be expressed as a mixture of the four Bell states corresponding to a basis in which $T$ is diagonal). Now, any maximally entangled spin correlation matrix can be written as $T_{\rm me}=R'T_0R''^\top$, where $T_0=-I$ is the spin correlation matrix of the singlet state and $R', R''$ are local rotations of the first and second qubits. Hence, since the set of possible measurements is invariant under such rotations, it follows that searching the CHSH parameters over all measurements for a given $T_{\rm me}$ is equivalent to searching over all measurements for $T_0$, i.e., for the singlet state (corresponding to taking $\alpha=-\pi/4$ in Eq.~(8) of the main text).
Moreover, noting that the singlet state is invariant under equal local rotations on each side, i.e., with $R'=R''$, the measurement direction $\bm x$ for $X$ can be fixed without loss of generality, as can the plane spanned by measurement directions $\bm x$ and $\bm x'$. Hence, the directions corresponding to $X,X',Y,Y'$ that need to be considered, for the purposes of the conjecture, form a 5-parameter set (the angle between $\bm x$ and $\bm x'$ in the given plane, and the angles specifying $\bm y$ and $\bm y'$). Finally, for unbiased observables the only remaining free parameters are the four measurement strengths, $\mathcal{S}_{X}, \mathcal{S}_{X'}, \mathcal{S}_{Y}, \mathcal{S}_{Y'}$. Hence, the conjecture need only be tested for this case, whether numerically or analytically, for a 9-parameter subset of observables on a fixed maximally-entangled state, as claimed in the main text. \section{IV. Numerical evidence for \\the conjecture} As described in the main text, verification of our conjecture only requires consideration of the values of $S(A_1,B_1)$ and the proxy quantities $S^*(A_1,B_2)$, $S^*(A_2,B_1)$, and $S^*(A_2,B_2)$, in Eqs.~(10)--(12) of the main text, for the one-parameter set of pure initial states in Eq.~(8). These quantities are determined by a set of 17 parameters: 4 for each observable $X,X',Y,Y'$ as per Eq.~(3) of the main text, in addition to one for the state. For the first case, illustrated in Fig.~2(a), the conjecture claims that the pairs $(A_1,B_1)$ and $(A_2,B_2)$ cannot both demonstrate CHSH Bell nonlocality. To test the conjecture for this case, we seek solutions to the problem \begin{equation} \begin{aligned} & \underset{\alpha,X,X',Y,Y'}{\text{max}} & & S^*(A_2,B_2)\\ & \hspace{14pt} \text{s. t.} & & |S(A_1,B_1)| \geq s, \end{aligned} \label{eq:passon_numerical_problem} \end{equation} where the quantities $S(A_1,B_1)$ and $S^*(A_2,B_2)$ are defined in Eqs.~(1) and (10) of the main text respectively, and the parameter $s\in[0,2\sqrt{2}]$ is fixed for each numerical test. For $s\geq2$, the conjecture requires that the solution to this problem does not exceed 2. Varying $s$ allows investigation of the trade-off between the quantities achievable by each pair of observers. Finding a global optimum for this problem is difficult, since it does not have any particular structure that permits efficient solution in reasonable time. Therefore, these numerical optimisations were performed using a constrained differential evolution (DE) solver~[33, 34] implemented in \textsc{SciPy}~[35]. The DE solver is a stochastic global search algorithm which operates by evolving a population of candidate solutions. For our problem, each population member is a real-valued 17-dimensional vector, which is evolved by mutation, crossover, and selection processes, until a termination criterion is met~[33]. The constraints for the problem are handled using the approach detailed in~[34]. The population size, rate of mutation and crossover probability are control parameters chosen for each optimisation, which can impact convergence to a solution; see the package documentation~[35] for further details. In Fig.~2(a), we sample 400 equally spaced values of $s$ from the interval $[0,2\sqrt{2}]$, and solve problem \eqref{eq:passon_numerical_problem} for each. These are solved by the DE algorithm with a population size of $400$.
To help speed up convergence of the algorithm (particularly for large values of $s$), the initial population is chosen from a Monte Carlo sample of 200 members satisfying the constraint in \eqref{eq:passon_numerical_problem}, generated using an algorithm presented in~[36], in addition to 200 randomly generated vectors. The following solver parameters were chosen for this set of optimisations: $10^{-5}$ tolerance, `best1bin' strategy, 0.7 recombination rate and a dithering mutation rate sampled from [0.5,0.7] each iteration. Once the DE solver converged to a solution, a local least-squares optimiser was applied to confirm that the solution was located at a local extremum. The solutions correspond to the blue data points in Fig.~2(a). It is evident that when $s\geq2$, the maximum value of the proxy quantity $S^*(A_2,B_2)$ never exceeds 2, supporting the monogamy conjecture for this grouping of observers. It should be noted that, for each fixed value of $s$, the numerical maximum is found when the constraint in \eqref{eq:passon_numerical_problem} is attained with equality. Note that for $S(A_1,B_1)=s\leq2$, Fig.~2(a) indicates that it is always possible for the proxy quantity $S^*(A_2,B_2)$ to attain the maximum quantum value of $2\sqrt{2}$. This is indeed the case: suppose that $A_1$ and $B_1$ share a singlet state and measure the trivial observables $X=X'=Y=Y'={\mathcal B} \mathbbm{1}$, with bias ${\mathcal B}\in[-1,1]$ and strength $\mathcal{S}=0$, without disturbing the state. Then we have $S(A_1,B_1) = 2{\mathcal B}^2$, which ranges over all values in $[0,2]$. Further, the state remains unchanged ($K=L=I$), so that $S^*(A_2,B_2) = 2\sqrt{2}$ can be achieved by performing the optimal CHSH measurements (this example is generalised in Sec.~V below). Conversely, however, Fig.~2(a) indicates that the range of $|S(A_1,B_1)|$ is restricted for values $S^*(A_2,B_2)\leq2$. This asymmetry is due to plotting the maximum values of the proxy quantity $S^*(A_2,B_2)$, which only depends on the choice of $X,X',Y,Y'$ (significantly reducing the number of search parameters), rather than the values of $S(A_2,B_2)$ (see also Sec.~V below). Similar results were calculated for the pairs $(A_1,B_2)$ and $(A_2,B_1)$. These are illustrated as blue points in Fig.~2(b). Here, we solve the analogous problem \begin{equation} \begin{aligned} & \underset{\alpha,X,X',Y,Y'}{\text{max}} & & S^*(A_2,B_1) \\ & \hspace{14pt} \text{s. t.} & & S^*(A_1,B_2) \geq s, \end{aligned} \label{eq:disordered_numerical_problem} \end{equation} where $S^*(A_1,B_2)$ and $S^*(A_2,B_1)$ are defined in Eqs.~(11) and (12) of the main text, and we confine $s$ to vary over $[2,2\sqrt{2}]$. The latter restriction is possible due to the symmetry of the problem under interchanging the roles of the Alices and Bobs, which implies that the trade-off curve between the two proxy quantities must be symmetric about the line $S^*(A_1,B_2)=S^*(A_2,B_1)$. Again, we solve this problem with the DE algorithm parameters listed above, this time with tolerance set to $10^{-8}$, for 500 equally spaced values of $s$. The initial random population consisted of 400 members, one of which was chosen to correspond to the case where the pair $(A_1, B_2)$ achieves maximal CHSH violation on a singlet state, which significantly improved convergence time. These results once again support the conjecture, since the extrema found for $S^*(A_2,B_1)$ never exceed $2$ over the range of $s$ (see Fig.~2(b)).
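For concreteness, the following is a minimal \textsc{Python} sketch of an optimisation of the form~\eqref{eq:passon_numerical_problem}; it is illustrative only and is not the code used to produce Fig.~2. For simplicity it restricts to unbiased observables and the singlet state, as justified in Sec.~III.B (so only four strengths and four directions enter), and it uses the zero-Bloch-vector form $S^*(A_2,B_2)=2[s_1(KTL)^2+s_2(KTL)^2]^{1/2}$ employed in Secs.~V and~VI; all variable names are illustrative.
\begin{verbatim}
# Sketch of the constrained differential-evolution search described above,
# restricted to unbiased observables on the singlet state (T = -I).
import numpy as np
from scipy.optimize import differential_evolution, NonlinearConstraint

def unit(theta, phi):
    return np.array([np.sin(theta)*np.cos(phi),
                     np.sin(theta)*np.sin(phi),
                     np.cos(theta)])

def unpack(p):
    S = p[:4]                                              # strengths S_X, S_X', S_Y, S_Y'
    d = [unit(p[4 + 2*i], p[5 + 2*i]) for i in range(4)]   # directions x, x', y, y'
    return S, d

def S11(p):
    # CHSH parameter <X(Y+Y')> + <X'(Y-Y')> for the singlet, with X = S_X sigma.x, etc.
    (SX, SXp, SY, SYp), (x, xp, y, yp) = unpack(p)
    return -(SX*SY*x.dot(y) + SX*SYp*x.dot(yp) + SXp*SY*xp.dot(y) - SXp*SYp*xp.dot(yp))

def S22_proxy(p):
    # S*(A2,B2) = 2 sqrt(s1(KTL)^2 + s2(KTL)^2) for zero Bloch vectors, with the
    # averaged matrices K, L of Eq. (9) and R = sqrt(1 - S^2) for unbiased observables.
    (SX, SXp, SY, SYp), (x, xp, y, yp) = unpack(p)
    RX, RXp, RY, RYp = [np.sqrt(1 - s*s) for s in (SX, SXp, SY, SYp)]
    K = 0.5*(RX*np.eye(3) + (1 - RX)*np.outer(x, x)
             + RXp*np.eye(3) + (1 - RXp)*np.outer(xp, xp))
    L = 0.5*(RY*np.eye(3) + (1 - RY)*np.outer(y, y)
             + RYp*np.eye(3) + (1 - RYp)*np.outer(yp, yp))
    sv = np.linalg.svd(-K @ L, compute_uv=False)           # T = -I for the singlet
    return 2*np.sqrt(sv[0]**2 + sv[1]**2)

s = 2.2                                                    # one fixed constraint value
bounds = [(0, 1)]*4 + [(0, np.pi), (0, 2*np.pi)]*4         # 4 strengths + 4 directions
con = NonlinearConstraint(lambda p: abs(S11(p)), s, 2*np.sqrt(2))
res = differential_evolution(lambda p: -S22_proxy(p), bounds, constraints=(con,),
                             strategy='best1bin', popsize=40, tol=1e-5,
                             mutation=(0.5, 0.7), recombination=0.7, seed=1)
print('max S*(A2,B2) subject to |S(A1,B1)| >=', s, ':', -res.fun)
\end{verbatim}
The full search of Fig.~2 additionally varies the four outcome biases and the state parameter $\alpha$, and uses the larger population size and mixed initialisation described above.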
Again, note that these numerical maxima are found when equality is satisfied for the constraint in \eqref{eq:disordered_numerical_problem}. Finally, it is seen from Fig.~2 that the one-sided monogamy relations in Eqs.~(13) and~(14) of the main text do not hold for all possible choices of observables by $A_1$ and $B_1$, since some points lie above the red curves. However, it is of interest to ask whether these relations might hold for the special case of unbiased observables, i.e., ${\mathcal B}_X={\mathcal B}_{X'}={\mathcal B}_Y={\mathcal B}_{Y'}=0$. For this case only the singlet state need be considered (see Sec.~III.B above), and hence we randomly sampled over $10^{12}$ points for this state, to investigate this question. The results support a conjecture that the one-sided monogamy relation $|S(A_1,B_1)|+S^*(A_2, B_2)\leq 4$ in Eq.~(14) holds for {\it all} unbiased observables, i.e., even without making equal strength and/or orthogonality assumptions. In contrast, the monogamy relation $S^*(A_1, B_2)^2+S^*(A_2, B_1)^2\leq 8$ in Eq.~(13) was found to be numerically violated for some choices of unbiased observables that violate these assumptions. \section{V. One-sided monogamy relations} \label{sec:monog} Eq.~(2) in the conjecture given in the main text is equivalent to the requirement that the CHSH parameters satisfy the general one-sided monogamy relations \begin{align} & \big| |S(A_1,B_1)|+|S(A_2,B_2)|-6\big|\nonumber\\ &~~~+\big||S(A_1,B_1)|-|S(A_2,B_2)|\big|\geq 2, \\ &\big||S(A_1,B_2)| + |S(A_2,B_1)| - 6 \big|\nonumber\\ &~~~+ \big| |S(A_1,B_2)| - |S(A_2,B_1)| \big|\geq 2. \end{align} In particular, the first relation rules out values of $S(A_1,B_1)$ and $S(A_2,B_2)$ that are both greater than 2, and the second relation similarly rules out values of $S(A_1,B_2)$ and $S(A_2,B_1)$ that are both greater than 2. Note that these relations are saturated (up to the maximum value of $2\sqrt{2}$). For example, if $A_1$ and $B_1$ measure the trivial observables $X=X'=Y=Y'={\mathcal B}\mathbbm{1}$, then $S(A_1,B_1)=2{\mathcal B}^2$ ranges over $[0,2]$ while the lack of disturbance allows $|S(A_2,B_2)|$ to range over the full quantum range $[0,2\sqrt{2}]$. The converse result is obtained if instead $A_2$ and $B_2$ measure these trivial observables, and a similar saturation is obtained via $(A_1,B_2)$ and $(A_2,B_1)$ making trivial measurements. It follows from the main text that the conjecture is also equivalent to the above relations with $S(A_1,B_2), S(A_2,B_1), S(A_2,B_2)$ replaced by the proxy quantities $S^*(A_1,B_2), S^*(A_2,B_1), S^*(A_2,B_2)$, corresponding to requiring the points in Fig.~2 of the main text to lie outside the shaded regions. Here we derive the (less general but stronger) one-sided monogamy relations for the proxy quantities discussed in the main text. These hold for the cases of (i) unbiased observables, i.e., with \begin{equation} \label{zerobias} {\mathcal B}_X={\mathcal B}_{X'}={\mathcal B}_Y={\mathcal B}_{Y'}=0, \end{equation} (note that prior work~[3-22] is confined to this case), and/or (ii) states with zero Bloch vectors, i.e., with \begin{equation} \label{bloch} \bm a=\bm b=\bm 0. \end{equation} (which includes all maximally entangled states), in combination with any of several mild measurement assumptions. \subsection{A.
One-sided monogamy relation for \\$(A_1,B_2)$ and $(A_2,B_1)$} Here we prove the monogamy relation in Eq.~(13) of the main text, i.e., \begin{equation} \label{monog1str} S^*(A_1, B_2)^2 + S^*(A_2,B_1)^2 \leq 8 , \end{equation} for each of the cases in Eqs.~(\ref{zerobias}) and~(\ref{bloch}), under the additional assumption of equal measurement strengths for each side. We also prove this relation holds under the alternative additional assumption of orthogonal measurement directions for each side. It follows that the proxy quantities $S^*(A_1, B_2)$ and $S^*(A_2, B_1)$ cannot both violate the CHSH inequality under such restrictions. Hence, as per Sec.~III.A above, neither can both $S(A_1, B_2)$ and $S(A_2,B_1)$, thus confirming the conjecture under these restrictions. \subsubsection{1. Convexity considerations} To derive the monogamy relations, some convexity properties are needed to simplify the dependence of the quantities on the spin correlation matrices. First, note, for either of the above cases in Eqs.~(\ref{zerobias}) and~(\ref{bloch}), that Eqs.~(11) and (12) of the main text simplify to \begin{align} S^*(A_1, B_2)&=|LT^\top(\tilde{\bm x}+\tilde{\bm x}')| + |LT^\top (\tilde{\bm x}-\tilde{\bm x}')|, \\ S^*(A_2, B_1)&=|KT(\tilde{\bm y}+\tilde{\bm y}')| + |KT (\tilde{\bm y}-\tilde{\bm y}')|, \end{align} where $\tilde{\bm x}=\mathcal{S}_X\bm x$, etc. Importantly, these quantities are convex-linear with respect to the initial spin correlation matrix $T$. Further, the latter can always be written as a convex-linear combination $T=\sum_j w_jT_j$ of at most four spin correlation matrices $T_j$, corresponding to maximally-entangled states~[42] (specifically, to the four Bell states defined by the local basis sets in which $T$ is diagonal). Moreover, any maximally entangled state is related to the singlet state by local rotations, implying that $T_j=R_j' T_0 R_j''^\top=-R'_j R_j''^\top=:-R_j$, where $R_j', R_j''$ and $R_j=R_j' R_j''^\top$ are rotation matrices, and $T_0=-I$ is the spin correlation matrix of the singlet state. Hence, since $|z|$ is a convex function, \begin{align} S^*(A_1, B_2)&\leq \sum_j w_j \left[ |LR_j^\top(\tilde{\bm x}+\tilde{\bm x}')| + |LR_j^\top (\tilde{\bm x}-\tilde{\bm x}')| \right] \nonumber\\ &\leq \max_R \{ |LR^\top(\tilde{\bm x}+\tilde{\bm x}')| + |LR^\top (\tilde{\bm x}-\tilde{\bm x}')|\}, \label{simp12} \end{align} where the maximum is over all rotations $R$, and similarly \begin{equation} \label{simp21} S^*(A_2, B_1)\leq \max_R \{ |KR(\tilde{\bm y}+\tilde{\bm y}')| + |KR (\tilde{\bm y}-\tilde{\bm y}')| \} . \end{equation} These results will be used in obtaining Eq.~(\ref{monog1str}) for equal measurement strengths. Further, since $|z|^2$ is a convex function it also follows that \begin{align} &S^*(A_1, B_2)^2 + S^*(A_2,B_1)^2 \nonumber\\ &\leq \max_R \left\{ \left[ |KR(\tilde{\bm y}+\tilde{\bm y}')| + |KR (\tilde{\bm y}-\tilde{\bm y}')|\right]^2 \right.\nonumber\\ &\qquad~~~~ + \left. \left[ |LR^\top(\tilde{\bm x}+\tilde{\bm x}')| + |LR^\top (\tilde{\bm x}-\tilde{\bm x}')| \right]^2 \right\} . \label{simpsquare} \end{align} This result will be used in obtaining Eq.~(\ref{monog1str}) for orthogonal measurement directions. \subsubsection{2.
Equal strengths for each side} We now make the additional assumption that the measurements $X$ and $X'$ by $A_1$ have equal strengths, and similarly for the measurements $Y$ and $Y'$ by $B_1$, i.e., \begin{equation} \mathcal{S}_X=\mathcal{S}_{X'},~~ \mathcal{S}_Y=\mathcal{S}_{Y'}. \label{equalrev} \end{equation} Choosing $R$ to be the rotation saturating Eq.~(\ref{simp12}) then yields \begin{align} S^*(A_1, B_2)^2 \leq & \mathcal{S}_X^2 \left(|LR^\top(\bm x+\bm x')|+ |LR^\top(\bm x-\bm x')|\right)^2\nonumber \\ =& 4 \mathcal{S}_X^2 \left(|LR^\top \bm x_1|\cos\theta+ |LR^\top \bm x_2|\sin\theta \right)^2\nonumber \\ \leq & 4 \mathcal{S}_X^2 \left(\bm x_1^\top R L^\top L R^\top \bm x_1+\bm x_2^\top RL^\top L R^\top \bm x_2\right) \nonumber \\ \leq & 4 \mathcal{S}_X^2 \left[ s_1(L^\top L)+s_2(L^\top L)\right] \nonumber \\ =& \mathcal{S}_X^2 \left[(1+\mathcal R_Y)^2+(1+\mathcal R_{Y'})^2\right. \nonumber\\ &~~~\left. +2(1-\mathcal R_Y)(1-\mathcal R_{Y'})(\bm y\cdot\bm y')^2 \right] \nonumber \\ \leq & \mathcal{S}_X^2 \left[(1+\mathcal R_Y)^2+(1+\mathcal R_{Y'})^2 \right. \nonumber\\ &~~~\left. +2(1-\mathcal R_Y)(1-\mathcal R_{Y'}) \right] \nonumber \\ \leq & \mathcal{S}_X^2 \left[(1+\mathcal R_Y)^2+(1+\mathcal R_{Y'})^2 \right. \nonumber\\ &~~~\left. +(1-\mathcal R_Y)^2+(1-\mathcal R_{Y'})^2 \right] \nonumber \\ =& 2\mathcal{S}_X^2 \left[2+ \mathcal R_Y^2+\mathcal R_{Y'}^2 \right].\label{Sa1b2} \end{align} Here, $2\cos\theta \,\bm x_1 := \bm x+\bm x'$ and $2\sin\theta\, \bm x_2:=\bm x - \bm x'$ are orthogonal unit vectors defined via the half-angle $\theta$ between $\bm x$ and $\bm x'$ (implying that $R^\top\bm x_1$ and $R^\top \bm x_2$ are similarly orthogonal), and the singular values of $L^\top L$ (equivalent to the eigenvalues thereof) have been calculated via Eq.~(9) of the main text. We similarly find, via Eq.~(\ref{simp21}), that \begin{equation} S^*(A_2, B_1)^2\leq 2\mathcal{S}_Y^2 \left[2+ \mathcal R_X^2+\mathcal R_{X'}^2 \right]. \label{Sa2b1} \end{equation} Finally, noting from the fundamental tradeoff relation~(\ref{rssum}) that \begin{align} \mathcal{S}_X\leq \min\{\rt{1-\mathcal R^2_X}, \rt{1-\mathcal R^2_{X'}}\},\\ \mathcal{S}_Y\leq \min\{\rt{1-\mathcal R^2_Y}, \rt{1-\mathcal R^2_{Y'}}\}, \end{align} Eqs.~(\ref{Sa1b2}) and~(\ref{Sa2b1}) yield \begin{align} &S^*(A_1, B_2)^2+ S^*(A_2, B_1)^2 \nonumber \\ &\leq 2\mathcal{S}_X^2 \left[(1+\mathcal R_Y^2)+(1+\mathcal R_{Y'}^2) \right]\nonumber \\ &~~~+2 \mathcal{S}_Y^2\left[(1+\mathcal R^2_X)+(1+\mathcal R_{X'}^2) \right] \nonumber \\ &\leq 2(1-\mathcal R^2_X)(1+\mathcal R^2_Y)+2(1-\mathcal R_Y^2)(1+\mathcal R_{X}^2)\nonumber \\ &~~~+2(1-\mathcal R^2_{X'})(1+\mathcal R^2_{Y'})+2(1-\mathcal R_{Y'}^2)(1+\mathcal R_{X'}^2) \nonumber \\ &= 4(1-\mathcal R^2_X\mathcal R^2_Y)+4(1-\mathcal R^2_{X'}\mathcal R_{Y'}^2) \nonumber \\ &\leq 8, \end{align} as claimed in Eq.~(13) of the main text and Eq.~(\ref{monog1str}) above. \subsubsection{3. Orthogonal measurement directions for each side} We now drop the equal strength assumption~(\ref{equalrev}), and instead assume that observables $X$ and $X'$ have orthogonal measurement directions, as do observables $Y$ and $Y'$, i.e., that \begin{equation} \label{ortho} \bm x \cdot \bm x'=0,\qquad \bm y \cdot \bm y'=0. \end{equation} We first show that we only need to consider the case where $\bm x,\bm x',R\bm y,R\bm y'$ lie in the same plane, for any rotation $R$ in Eq.~(\ref{simpsquare}).
In particular, defining $\bm x'':=\bm x\times \bm x', \bm y'':=\bm y\times \bm y'$, note it follows from Eq.~(9) of the main text and the orthogonality condition~(\ref{ortho}) that \begin{equation} K =\frac{1+\mathcal R_{X'}}{2} \bm x\bm x^\top + \frac{1+\mathcal R_{X}}{2}\bm x'\bm x'^\top + \frac{\mathcal R_X+\mathcal R_{X'}}{2} \bm x''\bm x''^\top \end{equation} \begin{equation} \nonumber L =\frac{1+\mathcal R_{Y'}}{2} \bm y\bm y^\top + \frac{1+\mathcal R_{Y}}{2}\bm y'\bm y'^\top + \frac{\mathcal R_Y+\mathcal R_{Y'}}{2} \bm y''\bm y''^\top . \end{equation} Hence, for a given rotation matrix $R$, one finds again using the orthogonality condition that \begin{align} |KR(\tilde{\bm y}\pm\tilde{\bm y}')|^2 &= \frac14\left\{ (1+\mathcal R_{X'})^2[\bm x\cdot R(\tilde{\bm y}\pm\tilde{\bm y}')]^2 \right. \nonumber\\ &~~~+ (1+\mathcal R_{X})^2[\bm x'\cdot R(\tilde{\bm y}\pm\tilde{\bm y}')]^2 \nonumber\\ &~~~+ \left. (\mathcal R_X+\mathcal R_{X'})^2[\bm x''\cdot R(\tilde{\bm y}\pm\tilde{\bm y}')]^2 \right\} . \label{kr} \end{align} Since $\mathcal R_X,\mathcal R_{X'}\leq 1$ it follows that the third term has the smallest weighting factor, so that $|KR(\tilde{\bm y}\pm\tilde{\bm y}')|$ is maximised for any $R$ by choosing directions such that $\bm x''$ is orthogonal to $R(\tilde{\bm y}\pm\tilde{\bm y}')$, i.e., such that $\bm x,\bm x'$ lie in the same plane as $R\bm y, R\bm y'$. One similarly finds that $|LR^\top(\tilde{\bm x}\pm\tilde{\bm x}')|$ is maximised by choosing directions such that $\bm y,\bm y'$ lie in the same plane as $R^\top\bm x, R^\top \bm x'$, i.e., again such that $\bm x,\bm x'$ lie in the same plane as $R\bm y, R\bm y'$. Note that the latter two vectors are also orthogonal to each other. Thus, choosing $R$ to be the rotation saturating Eq.~(\ref{simpsquare}), and introducing the parameter $\beta$ to characterise the relative angles between (coplanar) $\bm x, \bm x'$ and $R\bm y, R\bm y'$, i.e., \begin{equation} R\bm y =\bm x\cos \beta + \bm x'\sin \beta,~R\bm y' =\bm x\sin \beta - \bm x'\cos \beta . \end{equation} and \begin{equation} \bm x = R\bm y\cos \beta + R\bm y'\sin \beta,~\bm x' = R\bm y\sin \beta - R\bm y'\cos \beta , \end{equation} we find via Eq.~(\ref{kr}) that \begin{align} |KR(\tilde{\bm y}\pm\tilde{ \bm y}')|^2&= \frac14 (1+\mathcal R_{X'})^2[\bm x\cdot R(\mathcal{S}_Y\bm y\pm\mathcal{S}_{Y'}\bm y')]^2 \nonumber\\ &+ \frac14 (1+\mathcal R_{X})^2[\bm x'\cdot R(\mathcal{S}_Y\bm y\pm\mathcal{S}_{Y'}\bm y')]^2\nonumber \\ &=\frac14 (1+\mathcal R_{X'})^2 (\mathcal{S}_Y\cos\beta\pm\mathcal{S}_{Y'}\sin\beta)^2 \nonumber\\ &+\frac14 (1+\mathcal R_{X})^2 (\mathcal{S}_Y\sin\beta\mp\mathcal{S}_{Y'}\cos\beta)^2. 
\end{align} Hence, using $(a+b)^2 \leq (a+b)^2+(a-b)^2=2(a^2+b^2)$ and the fundamental tradeoff relation~(\ref{rssum}) yields \begin{align} &(|KR(\tilde{\bm y}+\tilde{ \bm y}')|+|KR(\tilde{\bm y}-\tilde{ \bm y}')|) ^2 \nonumber\\ &\leq 2 (|KR(\tilde{\bm y}+\tilde{ \bm y}')|^2+| KR(\tilde{\bm y}-\tilde{ \bm y}')|^2) \nonumber \\ &= (1+\mathcal R_{X'})^2 (\mathcal{S}_Y^2\cos^2\beta+\mathcal{S}_{Y'}^2\sin^2\beta) \nonumber\\ &~~+ (1+\mathcal R_{X})^2 (\mathcal{S}_Y^2\sin^2\beta + \mathcal{S}_{Y'}^2\cos^2\beta) \nonumber \\ &\leq (1+\mathcal R_{X'})^2 (1-\mathcal R_Y^2) \cos^2\beta + (1+\mathcal R_{X'})^2 (1-\mathcal R_{Y'}^2)\sin^2\beta \nonumber \\ &~~+ (1+\mathcal R_{X})^2 (1-\mathcal R_Y^2)\sin^2\beta + (1+\mathcal R_{X})^2 (1-\mathcal R_{Y'}^2) \cos^2\beta\nonumber \\ &= (1+\mathcal R_X)^2 (1-\mathcal R_Y^2) + (1+\mathcal R_{X'})^2 (1-\mathcal R_{Y'}^2) \nonumber \\ &~~+\left[(1+\mathcal R_{X'})^2-(1+\mathcal R_{X})^2\right] (\mathcal R_{Y'}^2-\mathcal R_{Y}^2) \cos^2\beta . \label{SSa2b1} \end{align} Similarly, we obtain \begin{align} &(|LR^\top(\tilde{\bm x}+\tilde{\bm x}')| + |LR^\top (\tilde{\bm x}-\tilde{\bm x}')|)^2 \nonumber \\ &\leq (1-\mathcal R_{X}^2)(1+\mathcal R_Y)^2+(1-\mathcal R_{X'}^2)(1+\mathcal R_{Y'})^2 \nonumber \\ &~~+(\mathcal R_{X'}^2-\mathcal R_X^2)\left[(1+\mathcal R_{Y'})^2-(1+\mathcal R_{Y})^2\right] \cos^2\beta. \label{SSa1b2} \end{align} Substituting Eqs.~(\ref{SSa2b1}) and (\ref{SSa1b2}) into Eq.~(\ref{simpsquare}) then gives \begin{align} &S^*(A_1, B_2)^2 + S^*(A_2, B_1)^2 \nonumber \\ &\leq (1-\mathcal R_Y^2)(1+\mathcal R_X)^2+(1-\mathcal R_{Y'}^2)(1+\mathcal R_{X'})^2 \nonumber \\ &~~+(1-\mathcal R_{X}^2)(1+\mathcal R_Y)^2+(1-\mathcal R_{X'}^2)(1+\mathcal R_{Y'})^2 \nonumber \\ &~~+(\mathcal R_{Y'}^2-\mathcal R_{Y}^2)\left[(1+\mathcal R_{X'})^2-(1+\mathcal R_{X})^2\right] \cos^2\beta \nonumber \\ &~~+(\mathcal R_{X'}^2-\mathcal R_X^2)\left[(1+\mathcal R_{Y'})^2-(1+\mathcal R_{Y})^2\right] \cos^2\beta \nonumber\\ &= P + (\mathcal R_X-\mathcal R_{X'})(\mathcal R_{Y}-\mathcal R_{Y'})Q\cos^2\beta, \label{pq} \end{align} where \begin{align} P:= & (1-\mathcal R_Y^2)(1+\mathcal R_X)^2+(1-\mathcal R_{Y'}^2)(1+\mathcal R_{X'})^2 \nonumber \\ &+(1-\mathcal R_{X}^2)(1+\mathcal R_Y)^2+(1-\mathcal R_{X'}^2)(1+\mathcal R_{Y'})^2 \end{align} and \begin{align} Q:= &(\mathcal R_X+\mathcal R_{X'}+2)(\mathcal R_{Y}+\mathcal R_{Y'})\nonumber\\ & + (\mathcal R_X+\mathcal R_{X'})(\mathcal R_{Y}+\mathcal R_{Y'}+2).
\end{align} Finally, noting that $P,Q\geq0$, if $(\mathcal R_X-\mathcal R_{X'})(\mathcal R_{Y}-\mathcal R_{Y'}) \leq 0$, then Eq.~(\ref{pq}) is maximised for $\cos^2\beta=0$, implying \begin{align} &S^*(A_1, B_2)^2 + S^*(A_2, B_1)^2 \nonumber \\ &\leq (1-\mathcal R_Y^2)(1+\mathcal R_X)^2+(1-\mathcal R_{Y'}^2)(1+\mathcal R_{X'})^2 \nonumber \\ &~~+(1-\mathcal R_{X}^2)(1+\mathcal R_Y)^2+(1-\mathcal R_{X'}^2)(1+\mathcal R_{Y'})^2 \nonumber \\ &\leq 2\max_{\mathcal R_X,\mathcal R_Y}\left[(1-\mathcal R_Y^2)(1+\mathcal R_X)^2+(1-\mathcal R_{X}^2)(1+\mathcal R_Y)^2 \right] \nonumber \\ & = 8, \label{ABA'B'} \end{align} while if $(\mathcal R_X-\mathcal R_{X'})(\mathcal R_{Y}-\mathcal R_{Y'}) \geq 0$, then Eq.~(\ref{pq}) is maximised for $\cos^2\beta=1$, implying \begin{align} &S^*(A_1, B_2)^2 + S^*(A_2, B_1)^2 \nonumber \\ &\leq (1-\mathcal R_{Y'}^2)(1+\mathcal R_X)^2+(1-\mathcal R_{X}^2)(1+\mathcal R_{Y'})^2 \nonumber \\ &~~+(1-\mathcal R_{Y}^2)(1+\mathcal R_{X'})^2+(1-\mathcal R_{X'}^2)(1+\mathcal R_{Y})^2 \nonumber \\ &\leq 2\max_{\mathcal R_X,\mathcal R_{Y'}}\left[(1-\mathcal R_{Y'}^2)(1+\mathcal R_X)^2+(1-\mathcal R_{X}^2)(1+\mathcal R_{Y'})^2\right] \nonumber \\ &= 8. \label{AB'A'B} \end{align} Thus, in either case we again obtain the one-sided monogamy relation in Eq.~(\ref{monog1str}), as claimed in the main text. \subsection{B. One-sided monogamy relation for \\$(A_1,B_1)$ and $(A_2,B_2)$} We now prove the one-sided monogamy relation in Eq.~(14) of the main text, under the combined assumptions of unbiased observables, and equal strengths and orthogonal measurement directions for each side. In fact, for this combination the upper bound can be improved to \begin{equation} |S(A_1,B_1)|+S^*(A_2,B_2) \leq \frac{16}{3\sqrt{2}}\sim 3.7<4, \end{equation} as noted in the main text. Hence, $S(A_1, B_1)$ and the proxy quantity $S^*(A_2, B_2)$ cannot both violate the CHSH inequality, implying that neither can both $S(A_1, B_1)$ and $S(A_2,B_2)$ (see main text), thus confirming the conjecture for this case. One-sided monogamy relations for more general cases, in which either of the equal strength or orthogonality assumptions is dropped, will be derived in a forthcoming paper~[26], via a significant generalisation of the Horodecki criterion. First, under the above assumptions it follows from Sec.~III.B that the result only needs to be proved for the singlet state, i.e., for $T=-I$ and $\bm a=\bm b=0$. But for this state the CHSH parameter for the pair $(A_1,B_1)$ can be calculated via Eqs.~(1) and~(3) of the main text and Eqs.~(\ref{zerobias}), (\ref{equalrev}) and~(\ref{ortho}) above to give \begin{align} |S(A_1,B_1)|&=\mathcal{S}_X \mathcal{S}_Y| \bm x\cdot\bm y + \bm x\cdot\bm y' + \bm x'\cdot\bm y - \bm x'\cdot\bm y'| \nonumber\\ &\leq \mathcal{S}_X\mathcal{S}_Y \left[ |\bm x\cdot (\bm y + \bm y')|+ |\bm x'\cdot (\bm y - \bm y')|\right] \nonumber\\ &\leq\mathcal{S}_X\mathcal{S}_Y \left[ |\bm y + \bm y'|+ |\bm y - \bm y'|\right] \nonumber\\ &=2\sqrt{2}\mathcal{S}_X\mathcal{S}_Y \nonumber\\ &= 2\sqrt{2}\,\sqrt{1-\mathcal R_X^2}\,\sqrt{1-\mathcal R_Y^2} , \label{s11bound} \end{align} where the last line follows by noting that the fundamental tradeoff relation~(\ref{rssum}) is saturated for unbiased observables. Moreover, from the Horodecki criterion in Eq.~(10) of the main text, it follows for the singlet state that \begin{align} S^*(A_2,B_2)^2&\leq 4[s_1(KL)^2 + s_2(KL)^2]\nonumber\\ &\leq 4[s_1(K)^2s_1(L)^2 +s_2(K)^2s_2(L)^2], \end{align} using Theorem~IV.2.5 of Ref.~[43].
Further, the matrices $K$ and $L$ follow from Eq.~(9) of the main text under the equal strengths assumption as (again noting the saturation of tradeoff relation~(\ref{rssum})) \begin{equation} K=\mathcal R_XI_3 + \half(1-\mathcal R_X)(\bm x\bm x^\top + \bm x'\bm x'^\top), \end{equation} \begin{equation} L=\mathcal R_YI_3 + \half(1-\mathcal R_Y)(\bm y\bm y^\top + \bm y'\bm y'^\top). \end{equation} The orthogonality assumption then yields $s_1(K)=s_2(K)=\half(1+\mathcal R_X)$ and $s_1(L)=s_2(L)=\half(1+\mathcal R_Y)$ by inspection, and so \begin{align} S^*(A_2,B_2)^2&\leq \half (1+\mathcal R_X)^2 (1+\mathcal R_Y)^2. \label{s22bound} \end{align} Finally, defining $f(x)=2\sqrt{1-x^2}$ and $g(x):=1+x$, Eqs.~(\ref{s11bound}) and~(\ref{s22bound}) give \begin{align} &|S(A_1,B_1)| + |S^*(A_2,B_2)|\nonumber\\ &\qquad\leq \frac{ f(\mathcal R_X)f(\mathcal R_Y)+g(\mathcal R_X)g(\mathcal R_Y)}{\sqrt{2}} \nonumber\\ &\qquad \leq\frac{\sqrt{f(\mathcal R_X)^2+g(\mathcal R_X)^2}\sqrt{f(\mathcal R_Y)^2+g(\mathcal R_Y)^2} }{\sqrt{2}} \nonumber\\ &\qquad \leq \max_\mathcal R \frac{f(\mathcal R)^2+g(\mathcal R)^2}{\sqrt{2}} \nonumber\\ &\qquad= \max_\mathcal R \frac{16- (1-3\mathcal R)^2}{3\sqrt{2}} \nonumber\\ &\qquad= \frac{16}{3\sqrt{2}}, \end{align} as claimed, where the second inequality follows via $\bm m\cdot\bm n\leq |\bm m||\bm n|$ for $\bm m=(f(\mathcal R_X),g(\mathcal R_X))$ and $\bm n=(f(\mathcal R_Y),g(\mathcal R_Y))$. Note that the maximum is achieved for \begin{equation} \mathcal{S}_X=\mathcal{S}_Y =\frac{2\sqrt{2}}{3}\sim 0.943, ~~~ \mathcal R_X=\mathcal R_Y=\frac13, \end{equation} and the optimal CHSH directions. \section{VI. Biased measurement selections} Recall that, in the example of the main text, $X$ and $Y$ are each selected with probability $1-\epsilon$ and measured with strengths $\mathcal{S}_X=\mathcal{S}_Y=\mathcal{S}$ and reversibilities $\mathcal R_X=\mathcal R_Y=\mathcal R$; $X'$ and $Y'$ are selected with probability $\epsilon$ and are projective, with strengths $\mathcal{S}_{X'}=\mathcal{S}_{Y'}=1$ and reversibilities $\mathcal R_{X'}=\mathcal R_{Y'}=0$; and the measurement directions correspond to the optimal CHSH directions~[27], i.e., $\bm x$ and $\bm x'$ are orthogonal with $\bm y=(\bm x+\bm x')/\sqrt{2}$ and $\bm y'=(\bm x-\bm x')/\sqrt{2}$. The biasing of the measurement selections modifies the average matrices $K$ and $L$ in Eq.~(\ref{barka}) to \begin{align} K_\epsilon &:= (1-\epsilon)K^X+\epsilon K^{X'}\nonumber\\ &= (1-\epsilon)[ \mathcal R I + (1-\mathcal R) \bm x\bm x^\top ] + \epsilon \bm x'\bm x'^\top , \label{keps} \\ L_\epsilon &:= (1-\epsilon)L^Y+\epsilon L^{Y'}\nonumber\\ &= (1-\epsilon)[ \mathcal R I + (1-\mathcal R) \bm y\bm y^\top ] + \epsilon \bm y'\bm y'^\top , \label{leps} \end{align} as per the main text. For the optimal CHSH directions this gives, in the $\{\bm x,\bm x',\bm x\times\bm x'\}$ basis, \begin{equation} K_\epsilon= \begin{pmatrix} 1-\epsilon & 0 &0\\ 0 & \mathcal R+\epsilon(1-\mathcal R) &0 \\ 0 & 0 & (1-\epsilon)\mathcal R \end{pmatrix} , \end{equation} \begin{equation} L_\epsilon= \begin{pmatrix} \frac{1+(1-\epsilon)\mathcal R}{2} & \frac{1-(1-\epsilon)\mathcal R}{2}-\epsilon & 0 \\\frac{1-(1-\epsilon)\mathcal R}{2}-\epsilon & \frac{1+(1-\epsilon)\mathcal R}{2} & 0\\ 0 & 0 & (1-\epsilon)\mathcal R \end{pmatrix} . 
\end{equation} Only the principal $2\times2$ submatrices of $K_\epsilon$ and $L_\epsilon$ contribute to the two largest singular values of $K_\epsilon L_\epsilon$ (the smallest value is $(1-\epsilon)\mathcal R$), implying $s_1(K_\epsilon L_\epsilon)^2+s_2(K_\epsilon L_\epsilon)^2$ is given by the trace of the $2\times2$ principal submatrix of $(K_\epsilon L_\epsilon)(K_\epsilon L_\epsilon)^\top=K_\epsilon L_\epsilon^2 K_\epsilon$, which may be evaluated to give \begin{align} s_1(K_\epsilon L_\epsilon)^2+s_2(K_\epsilon L_\epsilon)^2 &= \half \left[1+\mathcal R^2 -2\epsilon(1-\mathcal R+\mathcal R^2)\right.\nonumber\\ &\qquad~ \left. +\epsilon^2(2-2\mathcal R+\mathcal R^2) \right]^2 . \end{align} It immediately follows via Eq.~(10) of the main text that $A_2$ and $B_2$ can violate the CHSH inequality only if \begin{align} S^*(A_2,B_2) &= \sqrt{2} \left[1+\mathcal R^2 -2\epsilon(1-\mathcal R+\mathcal R^2)\right.\nonumber\\ &\qquad~~~ \left. +\epsilon^2(2-2\mathcal R+\mathcal R^2) \right] >2 , \end{align} which is equivalent to \begin{equation} \label{rminus} \mathcal R >\mathcal R_-(\epsilon) := \frac{[\sqrt{2}-(1-\epsilon)^2]^{1/2}-\epsilon}{1-\epsilon} , \end{equation} as per Eq.~(16) of the main text. This is monotonic increasing in $\epsilon$, implying in particular that one must have \begin{equation} \mathcal R>\mathcal R_-(0)=(\sqrt{2}-1)^{1/2} \sim 0.64359 . \end{equation} This corresponds, via tradeoff relation~(\ref{rssum}), to the upper bound \begin{equation} \label{smax} \mathcal{S}<\mathcal{S}_{\max}:= \sqrt{1-\mathcal R_-(0)^2} =(2-\sqrt{2})^{1/2}\sim 0.7654. \end{equation} on the strength $\mathcal{S}$, as noted in the main text. Note that the requirement $\mathcal R_-(\epsilon)\leq1$ implies that the probability $\epsilon$ of selecting a projective measurement $X'$ or $Y'$ cannot be arbitrarily large. In particular, Eq.~(\ref{rminus}) can be inverted to give \begin{equation} \label{epsr} \epsilon_-(\mathcal R)=\frac{1-\mathcal R+ \mathcal R^2-\sqrt{\sqrt{2}(2-2 \mathcal R+ \mathcal R^2)-1}}{2 -2 \mathcal R+ \mathcal R^2} , \end{equation} which is a monotonic increasing function of $\mathcal R$, and hence violation of the CHSH inequality by $A_2$ and $B_2$ is only possible at all if \begin{equation} \epsilon< \epsilon_-(1)=1-(\sqrt{2}-1)^{1/2} \sim 0.356406 . \end{equation} Moreover, for {\it both} pairs $(A_1,B_1)$ and $(A_2,B_2)$ to violate the CHSH inequality one further requires that the strength satisfy \begin{equation} \label{smin} \mathcal{S}> \mathcal{S}_{\min}:=8^{1/4}-1\sim 0.6818, \end{equation} as per Eq.~(16) of the main text, which together with tradeoff relation~(\ref{rssum}) and Eq.~(\ref{rminus}) implies that $\mathcal R$ and $\epsilon$ must satisfy \begin{equation} \label{range} \mathcal R_-(\epsilon) <\mathcal R < \mathcal R_+ , \end{equation} with \begin{equation} \mathcal R_+:=\sqrt{1-\mathcal{S}_{\min}^2} = 2^{3/4}\sqrt{2^{1/4}-1} \sim 0.7315. \end{equation} Eq.~(\ref{range}) can clearly be satisfied for sufficiently small values of $\epsilon$, i.e., for sufficiently large selection biases (since $\mathcal R_-(0)<\mathcal R_+$). Hence, it is possible to have two-qubit recycling of Bell nonlocality for sufficiently biased measurement selections. The maximum possible value of $\epsilon$ that allows such a violation of one-sided monogamy follows from Eqs.~(\ref{epsr}) and~(\ref{range}) as \begin{equation} \epsilon_{\max}=\epsilon_-(\mathcal R_+) \sim 0.0794626\sim 7.9\% , \end{equation} as per Eq.~(17) of the main text. 
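The thresholds quoted above are straightforward to reproduce numerically; the following short Python sketch (ours, not part of the derivation) simply evaluates Eqs.~(\ref{rminus}) and~(\ref{epsr}) and the associated constants.
\begin{verbatim}
import numpy as np

def R_minus(eps):
    # Threshold reversibility of Eq. (rminus).
    return (np.sqrt(np.sqrt(2) - (1 - eps) ** 2) - eps) / (1 - eps)

def eps_minus(R):
    # Inverse of R_minus, Eq. (epsr).
    a = 2 - 2 * R + R ** 2
    return (1 - R + R ** 2 - np.sqrt(np.sqrt(2) * a - 1)) / a

print("R_-(0)   =", R_minus(0.0))                     # ~0.64359
print("S_max    =", np.sqrt(1 - R_minus(0.0) ** 2))   # ~0.7654
print("eps_-(1) =", eps_minus(1.0))                   # ~0.356406
S_min  = 8 ** 0.25 - 1                                # ~0.6818
R_plus = np.sqrt(1 - S_min ** 2)                      # ~0.7315
print("eps_max  =", eps_minus(R_plus))                # ~0.0795
print("check    :", np.isclose(R_minus(eps_minus(R_plus)), R_plus))
\end{verbatim}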
Finally, as noted in the main text, the pairs $(A_1,B_2)$ and $(A_2,B_1)$ can also violate a CHSH inequality under the above conditions. This is expected, on the grounds that these pairs share qubits of which only one has been recycled following measurement, so that it should be even easier for them to violate a Bell inequality than it is for $(A_2,B_2)$, as both qubits of the latter pair have been recycled. We demonstrate this explicitly below by considering the optimal case that $\epsilon$ approaches zero. In particular, in this limit $K$ and $L$ in Eqs.~(10) and~(11) of the main text are replaced by $K_0$ and $L_0$ in Eqs.~(\ref{keps}) and~(\ref{leps}). Further, for the optimal CHSH directions one finds \begin{equation} K_0\tilde{\bm y}=\frac{\mathcal{S}}{\sqrt{2}}\begin{pmatrix} 1\\ \mathcal R \end{pmatrix},~~ K_0\tilde{\bm y}'=\frac{1}{\sqrt{2}}\begin{pmatrix} 1\\ -\mathcal R \end{pmatrix} , \end{equation} and hence, noting that the Bloch vectors $\bm a$ and $\bm b$ vanish for the singlet state, Eq.~(10) of the main text is replaced by \begin{align} S^*(A_1,B_2) =&\frac{1}{\sqrt{2}}\left| \begin{pmatrix} 1+\mathcal{S} \\ -\mathcal R(1-\mathcal{S}) \end{pmatrix} \right| + \frac{1}{\sqrt{2}}\left| \begin{pmatrix} 1-\mathcal{S} \\ -\mathcal R(1+\mathcal{S}) \end{pmatrix} \right|\nonumber \\ =& \frac{1}{\sqrt{2}} \left[ (1+\mathcal{S})^2+\mathcal R^2(1-\mathcal{S})^2\right]^{1/2} \nonumber\\ &+\frac{1}{\sqrt{2}} \left[ (1-\mathcal{S})^2+\mathcal R^2(1+\mathcal{S})^2\right]^{1/2} \end{align} in the limit $\epsilon\rightarrow0$. Using the tradeoff relation~(\ref{rssum}) then gives \begin{align} S^*(A_1,B_2)\leq & \frac{1}{\sqrt{2}} \left[ (1+\mathcal{S})^2+(1-\mathcal{S}^2)(1-\mathcal{S})^2\right]^{1/2} \nonumber\\ &+ \frac{1}{\sqrt{2}} \left[ (1-\mathcal{S})^2+(1-\mathcal{S}^2)(1+\mathcal{S})^2\right]^{1/2} \end{align} in this limit. One similarly finds \begin{equation} L_0\tilde{\bm x}=\frac{\mathcal{S}}{2}\begin{pmatrix} 1+\mathcal R \\ 1-\mathcal R \end{pmatrix},~~ L_0\tilde{\bm x}'=\frac{1}{2}\begin{pmatrix} 1-\mathcal R\\ 1+\mathcal R \end{pmatrix}, \end{equation} which yields the same upper bound for $S^*(A_2,B_1)$ via Eq.~(11) of the main text. Hence, both pairs can violate a CHSH inequality if this upper bound is greater than 2, corresponding to \begin{equation} \mathcal{S} < \mathcal{S}_0 := \sqrt{\sqrt{3}-1} \sim 0.8556, \end{equation} or equivalently to \begin{equation} \mathcal R > \mathcal R_0:= \sqrt{2-\sqrt{3}} \sim 0.5176. \end{equation} Noting that $\mathcal{S}_0>\mathcal{S}_{\max}$ in Eq.~(\ref{smax}), it follows that both pairs $(A_1,B_2)$ and $(A_2,B_1)$ can violate the CHSH inequality if $(A_2,B_2)$ can, as claimed. \end{document}
\begin{document} \baselineskip=17pt \title{Egyptian Fractions with odd denominators} \author{ Christian Elsholtz\footnote{ Institut f\"ur Analysis und Zahlentheorie, Graz University of Technology, Kopernikusgasse 24, A-8010 Graz, Austria. E-mail: [email protected]}} \date{} \maketitle \renewcommand{\thefootnote}{} \footnote{2010 \emph{Mathematics Subject Classification}: 11D68, 11D72.} \footnote{\emph{Key words and phrases}: Egyptian fractions; number of solutions of Diophantine equations.} \renewcommand{\thefootnote}{\arabic{footnote}} \setcounter{footnote}{0} \begin{abstract} The number of solutions of the diophantine equation $\sum_{i=1}^k \frac{1}{x_i}=1$, in particular when the $x_i$ are distinct odd positive integers, is investigated. The number of solutions $S(k)$ in this case is, for odd $k$: \[\exp \left( \exp \left( c_1\, \frac{k}{\log k}\right)\right) \leq S(k) \leq \exp \left( \exp \left(c_2\, k \right)\right) \] with some positive constants $c_1$ and $c_2$. This improves upon an earlier lower bound of $S(k) \geq \exp \left( (1+o(1))\frac{\log 2}{2} k^2\right)$. \end{abstract} \section{Introduction} In this paper we study the number of solutions of the diophantine equation \begin{equation}{\label{eq:main}} \sum_{i=1}^k \frac{1}{x_i}=1, \end{equation} in particular where the $x_i$ are subject to some restrictions, such as all $x_i$ being distinct odd positive integers. Let us first review what is known for distinct positive integers, without further restriction: Let \[{\cal X}_k=\{(x_1, x_2, \ldots,x_k): \sum_{i=1}^k \frac{1}{x_i}=1, \quad 0<x_1<x_2< \cdots < x_k\}.\] It is known that \begin{equation}{\label{eq:bounds}} \exp \left( \exp \left(((\log 2)(\log 3) +o(1))\frac{k}{\log k}\right)\right) \leq |{\cal X}_k| \leq c_0^{(\frac{5}{3}+\varepsilon) \, 2^{k-3} }, \end{equation} where $c_0=1.264\ldots$ is $\lim_{n \rightarrow \infty} u_n^{1/2^n}$, $u_1=1$, $u_{n+1}=u_n (u_n+1)$. The lower bound is due to Konyagin \cite{konyagin}, the upper bound is due to Browning and Elsholtz \cite{browning-elsholtz}. Earlier results on the upper and lower bounds were due to S\'{a}ndor \cite{sandor} and Erd\H{o}s, Graham and Straus (see \cite{erdosandgraham}, page 32). The set of solutions has also been investigated with various restrictions on the variables $x_i$. A quite general and systematic investigation of expansions of $\frac{a}{b}$ as a sum of unit fractions with restricted denominators is due to Graham \cite{graham}. Elsholtz, Heuberger, Prodinger \cite{elsholtz-heuberger-prodinger} gave an asymptotic formula for the number of solutions of ({\ref{eq:main}}), with two main terms, when the $x_i$ are (not necessarily distinct) powers of a fixed integer $t$. Another prominent case is when all denominators $x_i$ are odd. Sierpi\'{n}ski \cite{sierpinski} proved that a nontrivial solution exists. It is known that for $k=9$ there are exactly 5 solutions, and for $k=11$, there are exactly 379,118 solutions (see \cite{shiu, arce-nazario-castro-figueroa}). Chen, Elsholtz and Jiang \cite{chen-elsholtz-jiang} showed that for odd denominators $x_i$ the number of solutions of (\ref{eq:main}) grows in $k$, with a lower bound of $\sqrt{2}^{k^2(1+o(1))}$. Other types of restrictions on the denominator have been studied, e.g. by Croot \cite{croot} and Martin \cite{Martin}. The number of solutions of the equation $\frac{m}{n}=\sum_{i=1}^k\frac{1}{x_i}$ has also been estimated by Elsholtz and Tao \cite{elsholtz-tao}.
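As an aside, the counts quoted above for small odd $k$ are easy to reproduce computationally. The following Python sketch (ours, for illustration only) verifies one classical $9$-term decomposition of $1$ into distinct odd unit fractions, and counts all solutions for $k=9$ by a naive pruned depth-first search; the case $k=11$ is considerably more expensive.
\begin{verbatim}
from fractions import Fraction

# A classical 9-term decomposition of 1 into distinct odd unit fractions.
sol = [3, 5, 7, 9, 11, 15, 35, 45, 231]
assert sum(Fraction(1, x) for x in sol) == 1

def count(k, rem=Fraction(1), start=3):
    """Number of ways to write rem as a sum of k unit fractions with
    distinct odd denominators >= start (naive pruned depth-first search)."""
    if k == 0:
        return 1 if rem == 0 else 0
    if rem <= 0:
        return 0
    x = max(start, -(-rem.denominator // rem.numerator))  # smallest x with 1/x <= rem
    if x % 2 == 0:
        x += 1
    total = 0
    while Fraction(k, x) >= rem:   # k terms of size at most 1/x can still reach rem
        total += count(k - 1, rem - Fraction(1, x), x + 2)
        x += 2
    return total

print(count(9))   # 5, matching the count quoted above (may take a while)
\end{verbatim}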
In this paper we take inspiration from the proof of Chen et al.~\cite{chen-elsholtz-jiang} for odd denominators, and the proof of Konyagin \cite{konyagin} for lower bounds in the case of unrestricted $x_i$. As Konyagin's proof makes crucial use of ingenious identities, involving a lot of even numbers, it seems unclear whether one can generalize it to odd integers. Here is our main result: \begin{theorem} Let $s\geq 1$ and let $\{p_1, \ldots, p_s\}$ denote a set of primes, and let $P=p_1 \cdots p_s$ be squarefree. Let $k$ be sufficiently large. Moreover, if $P$ is even, let $k$ be odd. Let \[{\cal X}_{k,P}=\{(x_1, x_2, \ldots,x_k): \sum_{i=1}^k \frac{1}{x_i}=1, \text{ with distinct positive } x_i \equiv \pm 1\bmod P\}.\] There is some positive constant $c(P)$ such that the following holds: \[ |{\cal X}_{k,P}| \geq \exp \left( \exp \left( c(P) \frac{k}{\log k}\right)\right). \] \end{theorem} The case $P=2$ is the case of odd denominators: \begin{corollary} Let $k$ be odd, and \[{\cal X}_{k,\rm{odd}}=\{(x_1, x_2, \ldots,x_k): \sum_{i=1}^k \frac{1}{x_i}=1, \text{ with odd distinct positive } x_i\}.\] There is some positive constant $c$ such that the following holds: \[ |{\cal X}_{k,{\rm{odd}}}| \geq \exp \left( \exp \left( c\, \frac{k}{\log k}\right)\right). \] \end{corollary} For comparison, an upper bound of type $\exp \left(\exp (c_2\, k)\right)$ follows from the unrestricted case, see ({\ref{eq:bounds}}). \section{Proof} \begin{lemma}{\label{lem:primitive}} Let $P>1$ be a squarefree integer. Let $\omega(n)$ denote the number of distinct prime factors of $n$, and $d(n)$ the number of divisors of $n$. The following holds: $\omega(P^m - 1) \geq d(m)- 6$. \end{lemma} \begin{proof} Due to a result of Bang, Zsigmondy, Birkhoff and Vandiver (see e.g. Schinzel \cite{schinzel}), it is known that for $n > 6$ the values of $P^n-1$ have at least one \emph{primitive} prime factor. (A prime factor of the sequence $P^n-1$ is primitive if it divides $P^n-1$, but does not divide any $P^m-1$ with $m<n$.) Let $m=m_1m_2$. For each divisor $m_1$ one has the factorization \[P^m-1=(P^{m_1}-1)(P^{m_1 m_2-m_1}+ P^{m_1 m_2-2m_1}+ \cdots +P^{m_1}+1),\] hence the number of prime factors of $P^m-1$ is at least the sum of the number of primitive prime factors of $P^{m_1}-1$, for all possible divisors $m_1$ of $m$. Since primitive prime factors belonging to distinct divisors $m_1$ are distinct, and at most six divisors (namely those with $m_1\leq 6$) may fail to contribute one, it follows that $\omega(P^m-1)\geq d(m)-6$. \end{proof} \begin{lemma}{\label{lemma:wigert}} For $X \geq 3$, there exists a natural number $m <X$ such that $d(m) > \exp \left((\ln 2 + o(1)) \frac{\ln X}{\ln \ln X}\right)$ as $X \rightarrow \infty$. \end{lemma} This follows from a theorem of Wigert \cite{Wigert:1907}, but can also be seen directly. Let $P_r= \prod_{i=1}^r q_i$ be the product of the first $r$ primes $q_i$, and choose $m=P_r$ if $P_r \leq X < P_{r+1}$. Then $d(m)=2^r= \exp \left((\ln 2 + o(1))\frac{\ln m}{\ln \ln m}\right)= \exp \left((\ln 2 + o(1)) \frac{\ln X}{\ln \ln X}\right)$. Taking the first $r$ odd primes, one can also find an odd number $m$ of this type. \begin{lemma}{\label{lemma:albada-lint-progressions}} For every $a,b,n_0\in \N$ the following holds: every positive integer can be written as a finite sum of distinct fractions of the form $\frac{1}{an+b},\, n \geq n_0$. \end{lemma} This result with $n_0=0$ was originally proved by van Albada and van Lint \cite{albada-lint}. The result for general $n_0$ easily follows by using the progression $a'n+b'=an+(a n_0+b), n \geq 0$.
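As a quick numerical sanity check of Lemma~\ref{lem:primitive} (not needed for the argument), the inequality $\omega(P^m-1)\geq d(m)-6$ can be verified for small squarefree $P$ and small $m$ with a few lines of Python, using SymPy for the factorisations:
\begin{verbatim}
from sympy import primefactors, divisor_count

# Spot-check: omega(P^m - 1) >= d(m) - 6 for small squarefree P and small m.
for P in (2, 3, 5, 6, 7, 10):
    for m in range(1, 13):
        omega = len(primefactors(P ** m - 1))
        assert omega >= divisor_count(m) - 6, (P, m)
print("Lemma 1 verified on the sampled range")
\end{verbatim}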
As an easy consequence we have: \begin{lemma}{\label{lemma:consequence}} There exist distinct positive integers \[l_1, \ldots, l_{r_1}, m_1, \ldots,m_{r_2},n_1, \ldots, n_{r_3},\] all larger than $1$, in the residue class $1 \bmod 3P(P^2-1)$ such that the following holds: \[ \sum_{i=1}^{r_1}\frac{1}{l_i}=P-2,\qquad \sum_{i=1}^{r_2}\frac{1}{m_i}=1, \qquad \sum_{i=1}^{r_3}\frac{1}{n_i}=P, \] \end{lemma} If $P=2$, then $r_1=0$, otherwise $r_1,r_2,r_3>0$. Moreover, it is clear that $r_2 \equiv 1 \bmod P$. \begin{proof}[Proof of Theorem] The idea employed in \cite{chen-elsholtz-jiang} and \cite{konyagin} is to write $1$ as a sum of fractions where one denominator has a large number of divisors, and to split this fraction recursively into several fractions, where (at least) one of these has again a large number of divisors. Here we show that it is possible to have, for any given $t \in \N$, the fraction $\frac{1}{P^{t}-1}$ as one of these fractions. Let us start with the trivial decomposition \[1=\frac{1}{P-1} + \frac{P-2}{P-1}.\] In order to avoid that the denominator $P-1$ occurs more than once we use Lemma \ref{lemma:consequence} to write the integer $P-2$ as a sum of distinct unit fractions, with $l_i>1$: $P-2 =\sum_{i=1}^{r_1} \frac{1}{l_i}$. Next we observe that any fraction $\frac{1}{P^n-1}$ can be decomposed to obtain a sum of unit fractions containing a) $\frac{1}{P^{2n}-1}$ or b) $\frac{1}{P^{n+1}-1}$. \[(a)\quad \frac{1}{P^n-1}=\frac{1}{P^n+1}+\frac{1}{P^{2n}-1} + \sum_{i=1}^{r_2} \frac{1}{(P^{2n}-1)m_i}.\] By Lemma \ref{lemma:consequence} \[ 1=\sum_{i=1}^{r_2} \frac{1}{m_i}, \quad m_i\equiv 1 \bmod 3P(P^2-1), m_i >1 \text{ and distinct}.\] Note that all occurring denominators are distinct, with the possible exception that $P^n+1=P^{2n}-1$ holds if $P=2,n=1$. In this case, one rewrites $\frac{1}{P+1}=\frac{1}{3}=\sum_{i=1}^{r_2} \frac{1}{3m_i}$. These denominators have not been used before, as the $l_i$ or $m_i$ are congruent to $1 \bmod 3$, whereas the new denominators $3m_i$ are not. \[ (b)\quad \frac{1}{P^n-1}=\frac{1}{P^{n+1}-1}+\frac{P-1}{(P^{n}-1)(P^{n+1}-1)} + \frac{P-1}{(P^{n+1}-1)}.\] Note that these three fractions are unit fractions, as the denominators are divisible by $P-1$. These three fractions are distinct, unless $n=1$. In this case the fraction $\frac{1}{P^2-1}$ occurs twice and one of these is rewritten as $\frac{1}{P^2-1}=\sum_{i=1}^{r_2} \frac{1}{(P^2-1)m_i}$. These denominators have not been used before, as the previous denominators $l_i$ and $m_i$ were by construction congruent to $1 \bmod P^2-1$. Also, $P^n+1, P^{2n}-1, (P^{2n}-1)m_i$ are new. For constructing a solution with $\frac{1}{P^t-1}$ we write $t$ in binary. The first binary digit is of course $1$. For the positions $i\geq 2$ we perform two different types of steps, corresponding to (a) and (b) above:\\ 1) If the $i$-th leading position is a 0, then we take the ``doubling'' a).\\ 2) If the $i$-th leading position is a 1, then we first take the doubling a), followed by an ``addition'' b), For example, if $t=53=110101_2$ and starting from left to right: \[ \begin{array}{rrrrrrrrr} i=1|&&2|&3|&&4|&5|&&6|\\ 1|&&1|&0|&&1|&0|&&1\\ |&a&b|&a|&a&b|&a|&a&b\\ n=1|&2&3|&6|&12&13|&26|&52&53\\ \end{array}\] Generally, any integer $t$ can be obtained in at most $2\frac{\log t}{\log 2}$ such steps a) or b). In other words, starting from $n=1$ we can obtain a decomposition \[1=\frac{1}{P^t-1}+\sum_{i=1}^{k'-1} \frac{1}{x_i}\] with $k'=O(r_1+r_2\log t+r_3)=O_P(\log t)$ unit fractions. 
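The two splitting identities a) and b) above are elementary; as a sanity check (ours, for illustration only), they can be verified exactly with rational arithmetic, using that $\sum_i 1/m_i=1$, so that the $m_i$-terms in a) contribute exactly one further $1/(P^{2n}-1)$:
\begin{verbatim}
from fractions import Fraction as F

# Exact check of the splitting identities (a) and (b) for small P and n.
for P in (2, 3, 5, 7):
    for n in range(1, 8):
        a, b = P ** n - 1, P ** (n + 1) - 1
        # (a): since sum_i 1/m_i = 1, the m_i-terms add one extra 1/(P^(2n)-1)
        assert F(1, a) == F(1, P ** n + 1) + 2 * F(1, P ** (2 * n) - 1)
        # (b): both fractions with numerator P-1 reduce to unit fractions,
        #      because P-1 divides P^k - 1 for every k
        assert a % (P - 1) == 0 and b % (P - 1) == 0
        assert F(1, a) == F(1, b) + F(P - 1, a * b) + F(P - 1, b)
print("identities (a) and (b) hold for the sampled range")
\end{verbatim}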
Observe that all denominators have been rearranged to be distinct. We next come to the most crucial step, which determines the number of solutions: \begin{lemma} Let $\sum_{i=1}^{r_3} \frac{1}{n_i}=P$ (by Lemma \ref{lemma:consequence}). \begin{itemize} \item[a)] For any divisor $d|(P^t-1)$ the following is an identity. \[\frac{1}{P^t-1}=\frac{1}{P^t-1+Pd} +\sum_{i=1}^{r_3} \frac{1}{\frac{P^t-1}{d} (P^t-1+Pd)n_i}.\] \item[b)] The number of divisors $d|P^t-1$ with $ d\equiv 1 \bmod P$ is at least $2^{\frac{\omega(P^t-1)}{P}}$. \item[c)] If $d\equiv 1 \bmod P$, then {\emph{all}} denominators are $\pm 1 \bmod P$. \end{itemize} \end{lemma} Parts a) and c) are easy to verify. For part b) observe that among any $P$ prime factors $p_k$ of $P^t-1$ (all of which are coprime to $P$) there is at least one nonempty subset whose product is $\equiv 1\bmod P$. Indeed, the sequence $a_1=p_1, a_2=p_1p_2, ..., a_{P}=\prod_{k=1}^{P} p_k$ must have two members $a_i, a_j$, say, which are congruent modulo $P$. Then $\frac{a_j}{a_i}= \prod_{k=i+1}^j p_k\equiv 1 \bmod P$. Therefore, the number of divisors $d \equiv 1 \bmod P$ is at least $2^{\frac{\omega(P^t-1)}{P}}$. (Clearly, this argument can be refined (see e.g. \cite{Drmota-Skalba}), but this would not improve our final result.) All solutions produced in this way are distinct, as each solution has a unique denominator $P^t-1+Pd$. Moreover, as all these denominators are greater than $P^t$, and as in our application $t$ will be chosen large, these new denominators are greater than those that have been used before. We choose $t$ as a product of the first primes. By Lemma {\ref{lemma:wigert}} the number of divisors of $t$, and hence the number of solutions, satisfies: \[ |{\cal X}_{k,P}|\geq 2^{\omega(P^t-1)/P}\geq 2^{(d(t)-6)/P}\geq 2^{\exp \left( \frac{\log 2 +o(1)}{P} \frac{\log t}{\log \log t} \right) }\geq \exp\left( \exp (c(P) k/\log k)\right).\] Recall that the number of fractions is $k=O_P(\log t)$. Finally, let us comment on the condition that $k$ is odd when $P$ is even (see the statement of the Theorem). By multiplying equation (\ref{eq:main}) by its common denominator and reducing modulo $P$, it is clear that this condition is necessary. The condition is also sufficient: in view of step a) we can replace one fraction by $r_2+2$ fractions. Again, by the same argument $r_2 \equiv 1 \bmod P$, so that effectively we replace one fraction by 3 fractions (modulo $P$). Iterating this, we can reach any residue class for $k$ modulo $P$ when $P$ is odd, and all odd residue classes when $P$ is even. The number of extra fractions required is $O(P\, r_2)=O_P(1)$. This does not influence the overall result. In any case, the theorem is valid for sufficiently large $k\geq k_P$, with this necessary and sufficient congruence obstruction. \end{proof} \begin{rem} We have not worked out the constant $c(P)$. One may observe that $c(P)$ might be as small as $\frac{1}{r_2}$. To estimate $r_2$ one observes that $\sum_{i=1}^{r_2} \frac{1}{i\, 3P(P^2-1)-1}\geq \sum_{i=1}^{r_2} \frac{1}{m_i}\approx \frac{\log r_2}{3P(P^2-1)} > P$ must hold. Hence $r_2$ appears to be at least of exponential growth in $P$. Taking denominators $x_i$ only coprime to $P$, but not necessarily restricted to $x_i \equiv \pm 1 \bmod 3P(P^2-1)$, would improve this constant $c(P)$. \end{rem} I would like to thank Sergei Konyagin for insightful comments on the problem during a conference at CIRM (Luminy). The paper was completed during a very pleasant stay at Forschungsinstitut Mathematik (FIM) at ETH Z\"urich. \end{document}
\begin{document} \title{Determinism and Computational Power of Real Measurement-based Quantum Computation} \author[1]{Simon Perdrix \thanks{\texttt{[email protected]}}} \author[2]{Luc Sanselme \thanks{\texttt{[email protected]}}} \affil[1]{\small CNRS, LORIA, Universit\'e de Lorraine, Inria Project Team Carte, France} \affil[2]{\small LORIA, CNRS, Universit\'e de Lorraine, Inria Project Team Caramba, Lyc\'ee Poincar\'e, France} \pagestyle{fancy} \lhead{Determinism and Computational Power of Real MBQC} \rhead{S. Perdrix \& L. Sanselme} \date{} \maketitle \begin{abstract} Measurement-based quantum computing (MBQC) is a universal model for quantum computation. The combinatorial characterisation of determinism in this model, which is driven by measurements and hence fundamentally probabilistic, is the cornerstone of most of the breakthrough results in this field. The most general known sufficient condition for a deterministic MBQC to be driven is that the underlying graph of the computation has a particular kind of flow called Pauli flow. The necessity of the Pauli flow was an open question. We show that Pauli flow is not necessary, providing several counterexamples. We prove however that Pauli flow is necessary for determinism in the \emph{real} MBQC model, an interesting and useful fragment of MBQC. We explore the consequences of this result for real MBQC and its applications. Real MBQC, and more generally real quantum computing, is known to be universal for quantum computation. Real MBQC has been used for interactive proofs by McKague. The two-prover case corresponds to real MBQC on bipartite graphs. While (complex) MBQC on bipartite graphs is universal, the universality of real MBQC on bipartite graphs was an open question. We show that real bipartite MBQC is not universal by proving that all measurements of a real bipartite MBQC can be parallelised, leading to constant-depth computations. As a consequence, McKague's techniques cannot lead to two-prover interactive proofs. \end{abstract} \section{Introduction} Measurement-based quantum computing \cite{RB01,RBB03} (MBQC for short) is a universal model for quantum computation. Not only is this model very promising in terms of physical realisations of a quantum computer \cite{Petal,Wetal}, MBQC also has several theoretical advantages, e.g. parallelisation of quantum operations \cite{BKP10,BK07} (logarithmic separation with the traditional model of quantum circuits), blind quantum computing \cite{BFK08} (a protocol for delegated quantum computing), fault tolerant quantum computing \cite{raussendorf2006fault}, simulation \cite{delfosse2015wigner}, contextuality \cite{raussendorf2013contextuality}, interactive proofs \cite{McKague,BFK08}. In MBQC, a computation consists of performing local quantum measurements over a large entangled resource state. The resource state is described by a graph -- using the so-called graph state formalism \cite{HEB04}. The \emph{tour de force} of this model is to tame the fundamental non-determinism of the quantum measurements: the number of possible outputs of a measurement-based computation on a given input is exponential in the number of measurements, and each of these {branches} of the computation is produced with an exponentially small probability.
The only known technique to make such a fundamentally probabilistic computation exploitable is to implement a correction strategy which makes the overall computation deterministic: it does not affect the probability for each branch of the computation to occur, but it guarantees that all the branches produce the same output. The existence of a {correction} strategy relies on the structure of the entanglement of the quantum state on which the measurements are performed. Deciding whether a given resource state allows determinism is a central question in MBQC. Several sufficient conditions for determinism have been introduced. First, the notion of \emph{causal flow} was introduced in \cite{DK06}: if the graph describing the entangled resource state has a causal flow then a deterministic MBQC can be driven on this resource. Causal flow has been generalized to a weaker condition called Generalized flow (Gflow) which is also sufficient for determinism. Gflow has been proved to be necessary for a robust variant of determinism when, roughly speaking, there are no Pauli measurements, a special class of quantum measurements (see section \ref{sec:MBQC} for details) \cite{BKMP07}. In the same paper, the authors have introduced a weaker notion of flow called Pauli flow, allowing some measurements to be Pauli measurements. Pauli flow is the weakest known sufficient condition for determinism, and its necessity was a crucial open question, as the characterisation of determinism in MBQC is the cornerstone of most of the applications of MBQC. In section \ref{sec:MBQC}, we present the MBQC model, and the tools that come with it. Our first contribution is to provide a simpler characterisation of the Pauli flow (Proposition \ref{prop:sim}), with three instead of nine conditions to satisfy for the existence of a Pauli flow. Our main contribution is to prove in section \ref{sec:robdet} that the Pauli flow is not necessary in general -- by pointing out several counterexamples -- but is actually necessary for \emph{real} MBQC (Theorem \ref{thm:paulinec}). Real MBQC is a restriction of MBQC where only real observables are used, i.e.~observables whose eigenstates are quantum states that can be described using real numbers. Quantum mechanics, and hence models of quantum computation, are traditionally based on complex numbers. Real quantum computing is universal for quantum computation \cite{BV97a} and has recently been used crucially in the study of contextuality and of simulation by means of quantum computing by state injection \cite{delfosse2015wigner}. Real MBQC \cite{MP13} may lead to several other applications. One of them is an interactive proof protocol built by McKague \cite{McKague}. McKague introduced a protocol where a verifier using a polynomial number of quantum provers can perform a computation, with the guarantee that, if a prover has cheated, the verifier will be able to detect it. An open question left by McKague in \cite{McKague} is whether this approach can lead to an interactive proof protocol with only two quantum provers. We answer this question negatively in section \ref{subs:intpro}. Our third contribution is to point out the existence of a kind of super-normal form for Pauli flow in real MBQC on bipartite graphs (Lemma \ref{lem:8}). This result enables us to prove in Theorem \ref{thm:constdepth} that real MBQC on bipartite graphs is not very powerful: all measurements of a real bipartite MBQC can be parallelised.
As a consequence, only problems that can be solved in constant depth can be solved using real bipartite MBQC. \section{Measurement-based quantum computation, Generalized Flow and Pauli Flow} \label{sec:MBQC} \noindent{\bf Notations.} We assume the reader is familiar with quantum computing notations; otherwise one can refer to Appendix \ref{QC} or to \cite{NC00}. We will use the following set/graph notations. First of all, the \emph{symmetric difference} of two sets $A$ and $B$ will be denoted $A \Delta B := (A\cup B)\setminus (A\cap B)$. We will use intensively the \emph{open} and \emph{closed neighbourhood}. Given a simple undirected graph $G=(V,E)$, for any $u\in V$, $N(u) := \{v \in V~|~ (u,v)\in E\}$ is the (open) neighbourhood of $u$, and $N[u]:=N(u)\cup \{u\}$ is the closed neighbourhood of $u$. For any subset $A$ of $V$, $\odd A:= \Delta_{v\in A} N(v)$ (resp. $\codd A:= \Delta_{v\in A} N[v]$) is the odd (resp. odd closed) neighbourhood of $A$. Also, we will use the notion of \emph{extensive maps}. A map $f:A\to 2^B$, with $A\subseteq B$, is extensive if the transitive closure of $\{(u,v): v\in f(u)\}$ is a strict partial order. We say that $f$ is extensive with respect to a strict partial order $\prec$ if $(v\in f(u)\Rightarrow u\prec v)$. \subsection{MBQC, concretely, abstractly} In this section, a brief description of measurement-based quantum computation is given; a more detailed introduction can be found in \cite{DKP07,DKPP09}. Starting from a low-level description of measurement-based quantum computation using the so-called patterns of the Measurement-Calculus -- an assembly language composed of 4 kinds of commands: creation of ancillary qubits, entangling operation, measurement and correction -- we end up with a graph theoretical description of the computation and in particular of the underlying entangled resource of the computation. \subsection{Measurement-Calculus patterns: an assembly language} An assembly language for MBQC is the Measurement-Calculus \cite{DKP07,DKPP09}: a pattern is a sequence of commands, where each command is one of the following: \\-- $N_u$: initialisation of a fresh qubit $u$ in the state $\ket +=\frac{\ket 0+\ket 1}{\sqrt 2}$; \\-- $E_{u,v}$: entangling two qubits $u$ and $v$ by applying the Control-Z operation $\Lambda Z:\ket{x,y}\mapsto (-1)^{xy}\ket {x,y}$ to the qubits $u$ and $v$; \\-- $M_u^{\lambda_u,\alpha_u}$: measurement of qubit $u$ according to the observable $\mathcal O_{\lambda_u, \alpha_u}$ described below; \\-- $X_u^{s_v}$ (resp. $Z_u^{s_v}$): a correction which consists of applying Pauli $X:\ket x\mapsto \ket {1-x}$ (resp. $Z:\ket x\mapsto (-1)^x\ket x$) to qubit $u$ iff $s_v$ (the classical outcome of the measurement of qubit $v$) is $1$. A pattern is subject to some basic well-formedness conditions, e.g.: no operation can be applied to a qubit $u$ after $u$ has been measured; a correction cannot depend on a signal $s_u$ if qubit $u$ has not yet been measured. The qubits which are not initialised using the $N$ command are the input qubits, and those which are not measured are the output qubits. The measurement of a qubit $u$ is characterized by a subset $\lambda_u\subset\{X,Y,Z\}$ of one or two Pauli operators, and an angle $\alpha_u\in [0,2\pi)$: \\ -- when $\lambda_u=\{M\}$ is a singleton, $u$ is measured according to $\mathcal O_{\lambda_u, \alpha_u}:=M$ if $\alpha_u=0$ or $\mathcal O_{\lambda_u, \alpha_u}:=-M$ if $\alpha_u=\pi$.
\\-- when $|\lambda_u|=2$, $u$ is measured in the $\lambda_u$-plane of the Bloch sphere with an angle $\alpha_u$, i.e.~according to the observable: $$\mathcal O_{\lambda_u, \alpha_u}:=\begin{cases} \cos(\alpha_u)X_u+\sin(\alpha_u)Y_u& \text{if $\lambda_u = \{X,Y\}$}\\ \cos(\alpha_u)Y_u+\sin(\alpha_u)Z_u&\text{if $\lambda_u = \{Y,Z\}$}\\ \cos(\alpha_u)Z_u+\sin(\alpha_u)X_u& \text{if $\lambda_u = \{Z,X\}$} \end{cases} $$ Measurement of qubit $u$ produces a classical outcome $(-1)^{s_u}$ where $s_u\in \{0,1\}$ is called the \emph{signal}, or simply the \emph{classical outcome} with a slight abuse of notation. \subsection{A graph-based representation} In the Measurement-Calculus, the patterns are equipped with an equational theory which captures some basic invariant properties, e.g.~two operations acting on distinct qubits commute, or $E_{u,v}$ is equivalent to $E_{v,u}$. It is easy to show using the equations of the Measurement-Calculus that any pattern can be transformed into an equivalent pattern of the form: $$ \left(\prod^\prec_{u\in \comp O} Z^{s_u}_{\mathtt z(u)}X^{s_u}_{\mathtt x(u)}M_u^{\lambda_u,\alpha_u}\right)\left( \prod_{(u,v)\in G}E_{u,v}\right)\left( \prod_{u\in \comp I}N_u\right)$$ where $G=(V,E)$ is a simple undirected graph, $I,O\subseteq V$ are respectively the input and output qubits, and $\mathtt x, \mathtt z: \comp{O} \to 2^{V}$ are two extensive maps, i.e.~the relation $\prec$ defined as the transitive closure of $\{(u,v): v\in \mathtt x(u) \cup \mathtt z(u)\}$ is a strict partial order. Notice that $O^c:= V\setminus O$ and $X^{s_u}_{\mathtt x(u)} := \prod_{v\in\mathtt x(u)} X_v^{s_u}$. Moreover the product $\prod_{(u,v)\in G}$ means that the indices are the edges of $G$; in particular each edge is taken once. The septuple $(G,I,O,\lambda, \alpha, \mathtt x, \mathtt z)$ is a graph-based representation which entirely captures the semantics of the corresponding pattern. We simply call such a septuple an MBQC. \subsection{Semantics and Determinism} An MBQC $(G,I,O,\lambda, \alpha, \mathtt x, \mathtt z)$ has a fundamentally probabilistic evolution with potentially $2^{|\comp O|}$ possible branches as the computation consists of $|\comp O|$ measurements. For any $s\in \{0,1\}^{|\comp O|}$, let $A_s :\mathbb C^{\{0,1\}^{I}}\to \mathbb C^{\{0,1\}^O}$ be $$A_s(\ket \varphi)= \left(\prod^\prec_{u\in \comp O} Z^{s_u}_{\mathtt z(u)}X^{s_u}_{\mathtt x(u)}\bra {\varphi^{\lambda_u, \alpha_u}_{s_u}}_u\right)\left( \!\prod_{(u,v)\in G}\!\!\!\!\Lambda Z_{u,v}\right)\left(\ket \varphi\otimes \frac{\sum_{x\in \{0,1\}^{I^c}} \ket x}{\sqrt{2^{|I^c|}}}\right)$$ where $\ket{\varphi^{\lambda_u, \alpha_u}_{s_u}}$ is the eigenvector of $\mathcal O_{\lambda_u,\alpha_u}$ associated with the eigenvalue $(-1)^{s_u}$. Given an initial state $\ket \varphi\in \mathbb C^{\{0,1\}^I}$ and $s\in \{0,1\}^{\comp O}$, the outcome of the computation is the state $A_s \ket \varphi$ (up to a normalisation factor), with probability $\bra \varphi A^\dagger_sA_s\ket \varphi$. In other words the MBQC implements the cptp-map\footnote{A completely positive trace-preserving map describes the evolution of a quantum system whose state is represented by a density matrix. See for instance \cite{NC00} for details.} $\rho \mapsto \sum_{s\in \{0,1\}^{\comp O}} A_s \rho A^\dagger_s$. Among all the possible measurement-based quantum computations, those which are deterministic are of particular importance.
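As an illustration of these branch maps (a minimal numerical sketch of ours, not taken from the Measurement-Calculus literature), the following Python code assembles the maps $A_s$ for the simplest one-input, one-output pattern $X_2^{s_1}M_1^{\{X,Y\},\alpha}E_{1,2}N_2$, with an arbitrary angle $\alpha$, and checks that $\sum_s A_s^\dagger A_s=I$ and that the corrected branches coincide, whereas the uncorrected ones do not.
\begin{verbatim}
import numpy as np

# Branch maps A_s of the pattern  X_2^{s_1} M_1^{{X,Y},alpha} E_{1,2} N_2
# (qubit 1 = input, qubit 2 = output), assembled from the definition above.
alpha = 0.7                                      # an arbitrary measurement angle
plus  = np.array([1.0, 1.0]) / np.sqrt(2)        # |+>
CZ    = np.diag([1, 1, 1, -1]).astype(complex)   # Lambda Z on qubits (1,2)
X     = np.array([[0, 1], [1, 0]], dtype=complex)
I2    = np.eye(2, dtype=complex)
N     = np.kron(I2, plus.reshape(2, 1))          # |phi>  ->  |phi> (x) |+>_2

def A(s, correct=True):
    # <phi_s|_1: dual of the eigenvector of cos(a)X + sin(a)Y with eigenvalue (-1)^s
    bra  = np.array([1, (-1) ** s * np.exp(-1j * alpha)]) / np.sqrt(2)
    proj = np.kron(bra.reshape(1, 2), I2)        # destructive measurement of qubit 1
    corr = np.linalg.matrix_power(X, s) if correct else I2
    return corr @ proj @ CZ @ N

assert np.allclose(sum(A(s).conj().T @ A(s) for s in (0, 1)), I2)  # trace preserving
assert np.allclose(A(0), A(1))                         # corrected branches agree
assert not np.allclose(A(0, False), A(1, False))       # uncorrected branches differ
print("the corrected pattern is strongly deterministic")
\end{verbatim}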
In particular, deterministic MBQC are those which are used to simulate quantum circuits (cornerstone of the proof that MBQC is a universal model of quantum computation), or to implement a quantum algorithm. An MBQC $(G,I,O,\lambda, \alpha, \mathtt x, \mathtt z)$ is {\bf deterministic} if the output of the computation does not depend on the classical outcomes obtained during the computation: for any input state $\ket \varphi\in \mathbb C^{\{0,1\}^I} $ and branches $s,s'\in \{0,1\}^{\comp O}$, $A_s\ket \varphi$ and $A_{s'}\ket \varphi$ are proportional. Notice that the semantics of a deterministic MBQC $(G,I,O,\lambda, \alpha, \mathtt x, \mathtt z)$ is entirely defined by a single branch, e.g. the branch $A_{0^{|\comp O|}}$. Moreover, this particular branch $A_{0^{|\comp O|}}$ is correction-free by construction (indeed all corrections are controlled by a signal, which is $0$ in this particular branch). As a consequence, intuitively, when the evolution is deterministic, the corrections are only used to make the overall evolution deterministic but have no effect on the actual semantics of the evolution. Thus the correction can be abstracted away leading to the notion of {\bf abstract MBQC} $(G,I,O,\lambda, \alpha)$. There is however a caveat when the branch $A_{0^{|\comp O|}}$ is $0$: for instance $M_1^{X,\pi}N_1N_2$ and $Z_2^{s_1}M_1^{X,\pi}N_1N_2$ are both deterministic\footnote{In both cases the unique measurement consists of measuring a qubit in state $\ket +$ according to the observable $-X$ which produces the signal $s_1=1$ with probability $1$.} and share the same abstract open graph, however they do not have the same semantics: the outcome of the former pattern is $\frac{\ket 0+\ket 1}{\sqrt 2}$, whereas the outcome of the latter is $\frac{\ket 0-\ket 1}{\sqrt 2}$. To avoid these pathological cases and guarantee that the corrections can be abstracted away, a stronger notion of determinism has been introduced in \cite{BKMP07}: an MBQC is {\bf strongly deterministic} when all the branches are not only proportional but equal up to a global phase. The strongness assumption guarantees that for any input state $\ket \varphi$, $A_{0^{|\comp O|}}\ket \varphi$ is non zero, and thus guarantees that the overall evolution is entirely described by the correction-free branch, or in other words by the knowledge of the abstract MBQC $(G,I,O,\lambda, \alpha)$. Whereas deterministic MBQC are not necessarily invertible (e.g. $M^{(X,0)}_1N_2$ which maps any state $\ket \varphi$ to the state $\ket +$), strongly deterministic MBQC correspond to the invertible deterministic quantum evolutions: they implement isometries ($\exists U : \mathbb C^{\{0,1\}^{I}}\to \mathbb C^{\{0,1\}^O}$ s.t. $U^\dagger U=I$ and $\forall s\in \{0,1\}^{|\comp O|}$, $\exists \theta$ s.t. $A_s= {2^{-|\comp O|}}{e^{i\theta}}U$). We consider a variant of strong determinism which is robust to variation of the angles of measurements (which is a continuous parameter, so a priori subject to small variations in an experimental setting for instance), and to partial computation i.e., roughly speaking if one aborts the computation, the partial outcome does not depend on the branch of the computation. 
\begin{definition}[Robust Determinism] $(G,I,O,\lambda, \alpha,\mathtt x, \mathtt z)$ is robustly deterministic if for any lowerset $S\subseteq O^c$ and for any $\beta:S\to [0,2\pi)$, $(G,I,O\cup S^c,\lambda\restrict{S}, \beta,\mathtt x \restrict S, \mathtt z \restrict S)$ is strongly deterministic, where $S$ is a lowerset for the partial order induced by $\mathtt x$ and $\mathtt z$: $\forall v\in S, \forall u\in O^c$, $v\in \mathtt x(u)\cup \mathtt z(u) \Rightarrow u\in S$. \end{definition} The notion of \emph{robust determinism} we introduce is actually a shortcut for \emph{uniform, strong and stepwise determinism}, which has already been extensively studied in the context of measurement-based quantum computing \cite{BKMP07,DKPP09,MP08-icalp}. A central question in measurement-based quantum computation is to decide whether an abstract MBQC can be implemented deterministically: given $(G,I,O,\lambda, \alpha)$, do there exist correction strategies $\mathtt x, \mathtt z$ such that $(G,I,O,\lambda, \alpha,$ $\mathtt x, \mathtt z)$ is (robustly) deterministic? This question is related to the power of postselection in quantum computing: allowing postselection, one can select the correction-free branch and thus implement any abstract MBQC $(G,I,O,\lambda, \alpha)$. Postselection is a priori a non-physical evolution, but in the presence of a correction strategy it can be simulated using measurements and corrections. The robustness assumption allows one to abstract away the angles and focus on the so-called {\bf open graph} $(G,I,O,\lambda)$, i.e.~essentially the initial entanglement. For which initial entanglement -- or in other words, for which resource state -- can a deterministic evolution be performed? This is a fundamental question about the structures and the computational power of entanglement. Several graphical conditions for determinism have been introduced: causal flow, Generalized flow (Gflow) and Pauli flow \cite{DK06,BKMP07,DKPP09}. These are graphical conditions on open graphs which are sufficient to guarantee the existence of a robust deterministic evolution. Gflow has been proved to be a necessary condition for robust determinism in the Pauli-free case (i.e.~for any open graph $(G,I,O,\lambda)$ s.t. $\forall u\in O^c$, $|\lambda_u|=2$). The necessity of Pauli flow was an open question\footnote{In \cite{BKMP07}, an example of deterministic MBQC with no Pauli flow is given. This is however not a counterexample to the necessity of the Pauli flow as the example is not robustly deterministic. More precisely, not all the branches of the computation occur with the same probability: with the notation of Figure 8 in \cite{BKMP07}, if the measurements of qubits 4, 6 and 8 produce the outcome 0, then the measurement of qubit 10 produces the outcome 0 with probability 1.}. In this paper we show that Pauli flow fails to be necessary in general, but is however necessary for real MBQC, i.e.~when $\forall u\in O^c$, $\lambda_u\subseteq \{X,Z\}$. In the next section, we review the graphical sufficient conditions for determinism. \subsection{Graphical Conditions for Determinism} Several flow conditions have been introduced to guarantee robust determinism. Causal flow was the first sufficient condition for determinism \cite{DK06}. This condition has been extended to Generalized flow (Gflow) and Pauli flow \cite{BKMP07}.
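The flow conditions below are phrased in terms of the odd neighbourhoods $\odd A$ and $\codd A$ introduced above; the following small Python helper (ours, purely for experimentation on toy open graphs) computes them and searches by brute force for candidate correction sets.
\begin{verbatim}
from itertools import combinations

# Odd and closed-odd neighbourhoods over a toy graph (adjacency given as sets).
def odd(A, N):
    out = set()
    for v in A:
        out ^= N[v]
    return out

def codd(A, N):
    out = set()
    for v in A:
        out ^= N[v] | {v}
    return out

def subsets(S):
    S = sorted(S)
    for r in range(len(S) + 1):
        yield from (set(c) for c in combinations(S, r))

# Example: the path 1 - 2 - 3, with input 1 and output 3.
N = {1: {2}, 2: {1, 3}, 3: {2}}
print(odd({1, 3}, N))                                  # set(): the two copies of 2 cancel
print([p for p in subsets({2, 3}) if 1 in odd(p, N)])  # candidate sets p with 1 in Odd(p)
\end{verbatim}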
Our first contribution is to provide a simpler description of the Pauli flow, equivalent to the original one (see appendix \ref{app:prop1}): \begin{property} \label{prop:sim} $(G,I,O,\lambda)$ has a Pauli flow iff there exist a strict partial order $<$ over $O^c$ and $p : O^c \to 2^{I^c}$ s.t. $\forall u \in O^c$, \begin{eqnarray*} (c_X)\quad X\in \lambda_u&\Rightarrow & u\in \odd{p(u)}\setminus \left( \bigcup_{\substack{v \geq u \\ v \notin O\cup\{u\}}} \odd{p(v)} \right)\\ (c_Y)\quad Y\in \lambda_u&\Rightarrow & u\in \codd{p(u)}\setminus \left( \bigcup_{\substack{v \geq u \\ v \notin O\cup\{u\}}} \codd{p(v)} \right)\\ (c_Z)\quad Z\in \lambda_u&\Rightarrow & u\in {p(u)}\setminus \left( \bigcup_{\substack{v \geq u \\ v \notin O\cup\{u\}}} {p(v)} \right) \end{eqnarray*} \noindent where $v\ge u$ iff $\neg (v<u)$. \end{property} \begin{remark} Notice that the existence of a Pauli flow forces the input qubits to be measured in the $\{X,Y\}$-plane: If $(G,I,O,\lambda)$ has a Pauli flow then for any $u\in I \cap O^c$, $u\notin p(u)$ since $p(u)\subseteq I^c$. This implies, according to condition $(c_Z)$, that $Z\notin \lambda_u$. \end{remark} Gflow and causal flow are special instances of Pauli flow: a Pauli flow is a Gflow when all measurements are performed in a plane (i.e. $\forall u$, $|\lambda_u|=2$); a causal flow \cite{DK06} is nothing but a Gflow $(p,<)$ such that $\forall u, |p(u)|=1$. Gflow has been proved to be a necessary and sufficient condition for robust determinism: \begin{theorem}[\cite{BKMP07}]\label{thm:gflow} Given an abstract MBQC $(G,I,O,\lambda, \alpha)$ such that $\forall u\in O^c, |\lambda_u|=2$, {$(G,I,O,\lambda)$ has a Gflow $(p,<)$ if and only if there exist $\mathtt x, \mathtt z$ extensive with respect to $<$ s.t. $(G,I,O,\lambda, \alpha, \mathtt x, \mathtt z)$ is robustly deterministic.} \end{theorem} Pauli flow is the most general known sufficient condition for robust determinism: \begin{theorem}[\cite{BKMP07}] \label{thm:Paulisuf}If $(G,I,O,\lambda)$ has a Pauli flow $(p,<)$, then for any $ \alpha : O^c\to [0,2\pi)$, $(G,I,O,\lambda,\alpha, \mathtt x,\mathtt z)$ is robustly deterministic where $\forall u\in O^c$, \begin{align*} \mathtt x(u)&= \{v\in p(u) ~|~ u< v\}\\ \mathtt z(u)&= \{v\in \odd{p(u)}~|~ u< v\} \end{align*} \end{theorem} Is there a converse? This is the purpose of the next section. \section{Characterising Robust Determinism} \label{sec:robdet} In this section, we show the main result of the paper: Pauli flow is necessary for robust determinism in the real case, i.e.~when all the measurements are in the $\{X,Z\}$-plane ($\forall u, \lambda_u\subseteq \{X,Z\}$). We investigate in the subsequent sections the consequences of this result for real MBQC, which is a universal model of quantum computation with several crucial applications. A \textbf{real open graph} $(G,I,O,\lambda)$ is an open graph such that $\forall u\in O^c, \lambda_u\subseteq \{X,Z\}$. We define similarly \textbf{real abstract MBQC} and \textbf{real MBQC}. Pauli flow conditions on real open graphs can be simplified as follows: \begin{property} A real open graph $(G,I,O,\lambda)$ has a Pauli flow iff there exist a strict partial order $<$ over $O^c$ and $p : O^c \to 2^{I^c}$ s.t.
$\forall u \in O^c$, \begin{eqnarray*} (i)\quad X\in \lambda_u&\Rightarrow & u\in \odd{p(u)}\setminus \left( \bigcup_{\substack{v\geq u \\ v \notin O\cup\{u\} }} \odd{p(v)} \right)\\ (ii)\quad Z\in \lambda_u&\Rightarrow & u\in {p(u)}\setminus \left( \bigcup_{\substack{v\geq u \\ v \notin O\cup\{u\} }} {p(v)} \right)\\ \end{eqnarray*} \end{property} \begin{theorem} \label{thm:paulinec} Given a real abstract MBQC $(G,I,O,\lambda, \alpha)$, {$(G,I,O,\lambda)$ has a Pauli flow $(p,\prec)$ if and only if there exist $\mathtt x, \mathtt z$ extensive with respect to $\prec$ s.t. $(G,I,O,\lambda,$ $ \alpha, \mathtt x, \mathtt z)$ is robustly deterministic.} \end{theorem} The proof of Theorem \ref{thm:paulinec} is given in appendix. The proof is fundamentally different from the proof that Gflow is necessary for Pauli-free robust determinism (Theorem \ref{thm:gflow} in \cite{BKMP07}). Roughly speaking, the proof that Pauli flow is necessary goes as follows: first we fix the inputs to be either $\ket 0$ or $\ket +$ and all the measurements to be Pauli measurements (i.e.~if $\lambda_u=\{X,Z\}$ we fix the measurement of $u$ to be either $X$ or $Z$). For each of these choices the computation can be described in the so-called stabilizer formalism which allows one to point out the constraints the corrections should satisfy for each of these particular choices of inputs and measurements. Then, as the corrections of a robust deterministic MBQC should not depend on the choice of the inputs and the angles of measurements, one can combine the constraints the corrections should satisfy and show that they coincide with the Pauli flow conditions. \begin{remark} We consider in this paper a notion of real MBQC which corresponds to a constraint on the measurements ($\forall u\in O^c, \lambda_u\in \{X,Z\}$), it can also be understood as an additional constraint on the inputs: the input of the computation is in $\mathbb R^I$ instead of $\mathbb C^I$. This distinction might be important, for instance the pattern $M_1^YN_2$ is strongly deterministic on real inputs but not on arbitrary complex inputs. It turns out that the proof of Theorem \ref{thm:paulinec} only consider real inputs, and as a consequence is valid in both cases (i.e.~when both inputs and measurements are real ; or when inputs are complex and measurements are in the $\{X,Z\}$-plane). \end{remark} Pauli flow is necessary for real robust determinism. This property is specific to real measurements: Pauli flow is not necessary in general even when the measurements are restricted to one of the other two planes of measurements. In the following $\{X,Y\}$-MBQC (resp. $\{Y,Z\}$-MBQC) refers to MBQC where all measurements are performed in the $\{X,Y\}$-plane (resp. $\{Y,Z\}$-plane). \begin{property} There exists robustly deterministic $\{X,Y\}$-MBQC (resp. $\{Y,Z\}$-MBQC) $(G,I,O,\lambda, \alpha, \mathtt x,\mathtt z)$ such that $(G,I,O,\lambda)$ has no Pauli flow $(p,\prec)$ where $\mathtt x$ and $\mathtt z$ are extensive with respect to $\prec$. \end{property} \begin{proof} We consider the pattern $\mathcal P = Z_3^{s_2}M_2^{X,0}X_2^{s_1}M_1^{\{X,Y\},\alpha}E_{1,2}E_{1,3}N_1N_2N_3$ which is an implementation of the $\{X,Y\}$-MBQC given in Fig \ref{fig:counterexample} (the other example is similar). Notice that the correction $X_2^{s_1}$ is useless as qubit $2$ is going to be measured according to $M^X$. Thus $\mathcal P$ has the same semantics as $\mathcal P' = Z_3^{s_2}M_2^{X,0}M_1^{\{X,Y\},\alpha}E_{1,2}E_{1,3}N_1N_2N_3$. 
Notice in $\mathcal P'$ that the two measurements commute since there is no dependency between them, leading to the pattern $\mathcal P'' = M_1^{\{X,Y\},\alpha}Z_3^{s_2}M_2^{X,0}E_{1,2}E_{1,3}N_1N_2N_3$. It is easy to check that $\mathcal P''$ has a Pauli flow and so is robustly deterministic. All but the stepwise property are preserved by the transformations from $\mathcal P''$ to $\mathcal P$. Notice that $\mathcal P'$ is not stepwise deterministic as $M_1^{\{X,Y\},\alpha}E_{1,2}E_{1,3}N_1N_2N_3$ is not deterministic. However, $\mathcal P$ enjoys the stepwise property since $X_2^{s_1}M_1^{\{X,Y\},\alpha}E_{1,2}E_{1,3}N_1N_2N_3$ has a Pauli flow and so is robustly deterministic. Finally, it is easy to show that the open graph has no Pauli flow $(p,\prec)$ such that $1\prec 2$, which is necessary to guarantee that $\mathtt x$ is extensive with respect to $\prec$. \end{proof} \begin{figure} \caption{Robustly deterministic $\{X,Y\}$-MBQC (and $\{Y,Z\}$-MBQC) examples with no Pauli flow compatible with the correction order.\label{fig:counterexample}} \end{figure} \begin{remark}It is the last step of the proof of Theorem \ref{thm:paulinec} that fails for the examples of Figure \ref{fig:counterexample}. For instance in the $\{X,Y\}$-MBQC example, in both cases of Pauli measurements of qubit $1$ (according to $X$ or according to $Y$), a Pauli flow exists, sharing the same partial order $1\prec 2$. However the two Pauli flows are distinct and neither of them is a Pauli flow when qubit $1$ is measured in the $\{X,Y\}$-plane. \end{remark} \begin{remark} The examples given in Figure \ref{fig:counterexample} do have a Pauli flow but with a partial order not compatible with the order of measurements. It is important that the orders of the flow and of the measurements coincide in order to guarantee that the depth of the flow (longest increasing sequence) corresponds to the depth of the MBQC. Because of the logarithmic separation between the quantum circuit model and MBQC in terms of depth (e.g. PARITY can be computed with a constant quantum depth MBQC but requires a logarithmic depth quantum circuit) \cite{BKP10}, it is also important that Pauli flow characterises not only the ability to perform a robust deterministic evolution, but also the depth of such an evolution. There exists an efficient polynomial-time algorithm which, given an open graph, computes a Gflow of optimal depth (when it exists) \cite{MP08-icalp}; the existence of such an algorithm in the Pauli case is an open question. \end{remark} \section{Applications: Computational Power of Real Bipartite MBQC} In this section we focus on real MBQC whose underlying graph is bipartite (real bipartite MBQC for short). Bipartite graphs (or equivalently 2-colorable graphs) play an important role in MBQC; for instance, the square grid is universal for quantum computing: any quantum circuit can be simulated by an MBQC whose underlying graph is a square grid. The brickwork graph \cite{BFK08} is bipartite and universal for $\{X,Y\}$-MBQC. Regarding real MBQC, the (non-bipartite) triangular grid is universal for real MBQC \cite{MP13} but there is no known universal family of bipartite graphs. We show in this section that there is no universal family of bipartite graphs for real MBQC, by showing that any real bipartite MBQC can be done in constant depth. \subsection{Real bipartite MBQC in constant depth} In this section we show that real bipartite MBQC can always be parallelised: \begin{theorem}\label{thm:constdepth} All measurements of a robustly deterministic real bipartite MBQC can be performed in parallel.
\end{theorem} The rest of the section is dedicated to the proof of Theorem \ref{thm:constdepth}. According to Theorem \ref{thm:paulinec}, a real MBQC is robustly deterministic if and only if the underlying open graph has a Pauli flow. To prove that all the measurements can be performed in parallel in the bipartite case we point out the existence of a particular correction strategy which ensures that each measurement is corrected using output qubits only. \begin{lemma}\label{lem:8} Given a bipartite graph $G$, $I, O\subseteq V(G) $ and $\lambda:O^c\to \{\{X\}, \{Z\},$ $ \{X,Z\}\}$, if $(G,I,O,\lambda)$ has a Pauli flow then there exists $p:O^c\to 2^{I^c}$ s.t.: \begin{eqnarray*} \odd{p(u)} \setminus (O\cup \lambda^{-1}(\{Z\})) & =& \{u\}\setminus \lambda^{-1}(\{Z\})\\ {p(u)} \setminus (O\cup \lambda^{-1}(\{X\})) & = &\{u\}\setminus \lambda^{-1}(\{X\}) \end{eqnarray*} \end{lemma} \begin{proof} See appendix \ref{app:lemma8}. \end{proof} This particular correction strategy corresponds to a kind of \emph{super-normal form}. Indeed it is known that Gflow can be put into the so-called $Z$- or $X$-normal form but not both at the same time (see \cite{Hamrit2015} for details). Lemma \ref{lem:8} shows, roughly speaking, that the Pauli flow in the real bipartite case can be put in both normal forms at the same time. \begin{proof}[Proof of Theorem \ref{thm:constdepth}] Given a robustly deterministic real bipartite MBQC $(G,I,O,\lambda, \alpha, \mathtt x, \mathtt z)$, according to Theorem \ref{thm:paulinec}, $(G,I,O,\lambda)$ has a Pauli flow, so according to Lemma \ref{lem:8} there exists $p$ s.t. $ \odd{p(u)} \setminus (O\cup \lambda^{-1}(\{Z\})) = \{u\}\setminus \lambda^{-1}(\{Z\})$ and ${p(u)} \setminus (O\cup \lambda^{-1}(\{X\})) = \{u\}\setminus \lambda^{-1}(\{X\})$. Notice that $(p,\emptyset)$ is a Pauli flow for $(G,I,O,\lambda)$, thus according to Theorem \ref{thm:Paulisuf}, $(G,I,O,\lambda, \alpha, \mathtt x', \mathtt z')$ is robustly deterministic where $\mathtt x'= u\mapsto p(u)\setminus (\lambda^{-1}(\{X\}) \cup \{u\})$ and $\mathtt z'= u\mapsto \odd{p(u)}\setminus (\lambda^{-1}(\{Z\}) \cup \{u\})$. Both $(G,I,O,\lambda, \alpha, \mathtt x, \mathtt z)$ and $(G,I,O,\lambda, \alpha, \mathtt x', \mathtt z')$ implement the same computation, and $\forall u\in O^c$, $\mathtt x'(u)\subseteq O$ and $\mathtt z'(u)\subseteq O$, which implies that all measurements of the latter MBQC can be performed in parallel. \end{proof} \subsection{Interactive proofs} \label{subs:intpro} The starting point of our work was a sentence of McKague in \cite{McKague}. In the future work section, McKague wonders how his work could be used to build an interactive proof with only two provers. The problem that McKague wants to solve is the following. We imagine a classical verifier, which is a computer with classical resources, who wants to perform a computation using some non-communicating quantum provers. The quantum provers are computers with quantum resources. In fact, the classical verifier wants to carry out this computation using the quantum power of the provers. In this model, the difficult point is that we want the verifier to be able to detect cheating behavior of the provers. The model should guarantee to the verifier that the result of the computation made by the provers is correct: if a prover has cheated and not computed what he was asked, the verifier should be able to detect it.
In this model the provers cannot communicate with one another: each prover can try to cheat on his own, but he cannot do so by exchanging information with the others. McKague, in \cite{McKague}, proves that it is possible to design a protocol in which the computation can be performed by the classical verifier using a polynomial number of quantum provers. To achieve this goal, McKague uses two main tools, one of them being measurement-based quantum computation in the $(X,Z)$-plane. Mhalla and Perdrix, in \cite{MP13}, prove that there exists a grid that enables universal computation in the $(X,Z)$-plane. Usually the $(X,Y)$-plane, the first known to allow universal computation, is preferred. In his work, McKague needs the $(X,Z)$-plane: to be able to detect cheating behavior, McKague needs to compute over the reals. The conjugation operation that can be performed in the other planes is an obstacle to detecting certain kinds of cheating. In his future work section, McKague argues that most of his work could be used to improve his result to the use of only two provers. The main difficulty he points out is to build a bipartite graph to compute with. His self-testing technique, which is the second important tool of his work, can be applied only if the graph has no odd cycle. Therefore, the question we wanted to answer was whether one could build a universal bipartite grid for the $(X,Z)$-plane. Our Theorem \ref{thm:constdepth} shows that in the real case a bipartite graph is not very powerful for computation: it is far from being universal. Therefore, at best, new techniques will be needed to adapt McKague's method to interactive proofs with two provers. \section{Conclusion and future work} In this paper, we have made substantial steps towards understanding the MBQC model. The first important one is the equivalence, for real MBQC, between being robustly deterministic and having a Pauli flow. Since this equivalence does not hold for the $\{X,Y\}$- and $\{Y,Z\}$-planes, a natural question is how the Pauli flow definition can be modified to obtain a characterisation of determinism in these cases. A by-product of the characterisation of robust determinism for real MBQC is the low computational power of real bipartite MBQC. It would be interesting to compare the computational power of real bipartite MBQC and of commuting quantum circuits. There are some good reasons to think that the power of real bipartite MBQC is exactly the same as that of commuting quantum circuits. Taking a global view of the MBQC domain, beyond the advances made in this paper, a good direction for further research is to better understand the specificity of each plane in the power of the MBQC model, and how the ability to perform a deterministic computation is linked to this power. Finally, another open question is the existence of an efficient algorithm for deciding whether a given open graph has a Pauli flow, and which produces a Pauli flow of optimal depth when it exists. Such an algorithm exists for Gflow~\cite{MP08-icalp}. \begin{thebibliography}{10} \bibitem{BV97a} E.~Bernstein and U.~Vazirani. \newblock Quantum complexity theory. \newblock {\em SIAM J. Comput.}, 26{:}1411--1478, 1997. \bibitem{BFK08} Anne Broadbent, Joseph Fitzsimons, and Elham Kashefi. \newblock Universal blind quantum computation. \newblock In {\em 50th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2009}, 2009. \newblock URL: \url{http{:}//www.citebase.org/abstract?id=oai{:}arXiv.org{:}0807.4154}.
\bibitem{BK07} Anne Broadbent and Elham Kashefi. \newblock Parallelizing quantum circuits. \newblock {\em Theoretical Computer Science}, 410(26){:}2489--2510, 2009. \bibitem{BKMP07} Daniel~E. Browne, Elham Kashefi, Mehdi Mhalla, and Simon Perdrix. \newblock Generalized flow and determinism in measurement-based quantum computation. \newblock {\em New Journal of Physics (NJP)}, 9(8), 2007. \newblock URL: \url{http{:}//iopscience.iop.org/1367-2630/9/8/250/fulltext/}. \bibitem{BKP10} Daniel~E. Browne, Elham Kashefi, and Simon Perdrix. \newblock Computational depth complexity of measurement-based quantum computation. \newblock In {\em Theory of Quantum Computation, Communication, and Cryptography (TQC'10)}, volume 6519, pages 35--46. LNCS, 2011. \bibitem{DK06} Vincent Danos and Elham Kashefi. \newblock Determinism in the one-way model. \newblock {\em Physical Review A}, 74(052310), 2006. \bibitem{DKP07} Vincent Danos, Elham Kashefi, and Prakash Panangaden. \newblock The measurement calculus. \newblock {\em J. ACM}, 54(2), 2007. \bibitem{DKPP09} Vincent Danos, Elham Kashefi, Prakash Panangaden, and Simon Perdrix. \newblock {\em Extended Measurement Calculus}. \newblock Cambridge University Press, 2010. \bibitem{delfosse2015wigner} Nicolas Delfosse, Philippe~Allard Guerin, Jacob Bian, and Robert Raussendorf. \newblock Wigner function negativity and contextuality in quantum computation on rebits. \newblock {\em Physical Review X}, 5(2){:}021003, 2015. \bibitem{Hamrit2015} Nidhal Hamrit and Simon Perdrix. \newblock {\em Reversibility in Extended Measurement-Based Quantum Computation}, pages 129--138. \newblock Springer International Publishing, Cham, 2015. \bibitem{HEB04} M.~Hein, J.~Eisert, and H.~J. Briegel. \newblock Multi-party entanglement in graph states. \newblock {\em Physical Review A}, 69{:}062311, 2004. \newblock URL: \url{doi{:}10.1103/PhysRevA.69.062311}. \bibitem{McKague} Matthew McKague. \newblock Interactive proofs for bqp via self-tested graph states. \newblock {\em Theory of Computing}, 12(3){:}1--42, 2016. \bibitem{MP08-icalp} Mehdi Mhalla and Simon Perdrix. \newblock Finding optimal flows efficiently. \newblock In {\em the 35th International Colloquium on Automata, Languages and Programming (ICALP), LNCS}, volume 5125, pages 857--868, 2008. \bibitem{MP13} Mehdi Mhalla and Simon Perdrix. \newblock {G}raph {S}tates, {P}ivot {M}inor, and {U}niversality of ({X}, {Z})-{M}easurements. \newblock {\em {I}nternational {J}ournal of {U}nconventional {C}omputing}, 9(1-2){:}153--171, 2013. \bibitem{NC00} M.~A. Nielsen and I.~L. Chuang. \newblock {\em Quantum computation and quantum information}. \newblock Cambridge University Press, New York, NY, USA, 2000. \bibitem{Petal} Robert Prevedel, Philip Walther, Felix Tiefenbacher, Pascal Bohi, Rainer Kaltenbaek, Thomas Jennewein, and Anton Zeilinger. \newblock High-speed linear optics quantum computing using active feed-forward. \newblock {\em Nature}, 445(7123){:}65--69, January 2007. \newblock URL: \url{http{:}//dx.doi.org/10.1038/nature05346}, \href {http{:}//dx.doi.org/10.1038/nature05346} {\mathsf {Path}th{doi{:}10.1038/nature05346}}. \bibitem{raussendorf2013contextuality} Robert Raussendorf. \newblock Contextuality in measurement-based quantum computation. \newblock {\em Physical Review A}, 88(2){:}022322, 2013. \bibitem{RB01} Robert Raussendorf and Hans~J. Briegel. \newblock A one-way quantum computer. \newblock {\em Phys. Rev. Lett.}, 86{:}5188--5191, 2001. \bibitem{RBB03} Robert Raussendorf, Daniel~E. Browne, and Hans~J. Briegel. 
\newblock Measurement-based quantum computation with cluster states. \newblock {\em Physical Review A}, 68{:}022312, 2003. \newblock URL: \url{http{:}//arxiv.org/abs/quant-ph/0301052}. \bibitem{raussendorf2006fault} Robert Raussendorf, Jim Harrington, and Kovid Goyal. \newblock A fault-tolerant one-way quantum computer. \newblock {\em Annals of physics}, 321(9){:}2242--2270, 2006. \bibitem{Wetal} Philip Walther, Kevin~J. Resch, Terry Rudolph, Emmanuel Schenck, Harald Weinfurter, Vlatko Vedral, Markus Aspelmeyer, and Anton Zeilinger. \newblock Experimental one-way quantum computing. \newblock {\em Nature}, 434(7030){:}169--176, March 2005. \newblock URL: \url{http{:}//dx.doi.org/10.1038/nature03347}, \href{http{:}//dx.doi.org/10.1038/nature03347}{doi{:}10.1038/nature03347}. \end{thebibliography} \appendix \section{Quantum Computing in a Nutshell}\label{QC} The state of a given finite set (or register) $A$ of qubits is a unit vector $\ket \varphi \in \mathbb C^{\{0,1\}^A}$. The so-called classical states $\{\ket x:x\in \{0,1\}^A\}$ of the register $A$ form an orthonormal basis of $\mathbb C^{\{0,1\}^A}$, thus any state $\ket \varphi$ of $A$ can be described as $\ket \varphi = \sum_{x\in \{0,1\}^A}\alpha_x\ket x$ s.t. $\sum_{x\in \{0,1\}^A}|\alpha_x|^2 =1$. Given two distinct registers $A$ and $B$, if the state of $A$ is $\ket \varphi = \sum_{x\in \{0,1\}^A}\alpha_x\ket x$ and the state of $B$ is $\ket \psi = \sum_{y\in \{0,1\}^B}\beta_y\ket y$, then the state of the overall register $A\cup B$ is $\ket \varphi \otimes \ket \psi = \sum_{x\in \{0,1\}^A,y\in \{0,1\}^B}\alpha_x\beta_y\ket {xy}$, where $xy$ is the concatenation of $x$ and $y$. The adjoint of a state $\ket \varphi = \sum_{x\in \{0,1\}^A}\alpha_x\ket x\in \mathbb C^{\{0,1\}^A} $ is the linear form $\bra \varphi =(\ket{\varphi})^\dagger =\sum_{x\in \{0,1\}^A}\alpha^*_x\bra x : \mathbb C^{\{0,1\}^A} \to \mathbb C$, where $\forall x,y\in \{0,1\}^A$, $\bra x \ket y = \delta_{x,y}$. Any quantum evolution can be decomposed into a sequence of \emph{isometries} and \emph{measurements}: An isometry $U : \mathbb C^{\{0,1\}^A} \to \mathbb C^{\{0,1\}^B}$ is a linear map s.t.~$U^\dagger U = I$, (i.e.~$\forall x\in \{0,1\}^A, (U\ket x)^\dagger (U\ket x) = 1$), which transforms the state $\ket \varphi$ into $U\ket \varphi$. Famous examples of isometries are the unitary evolutions, which correspond to the case $|A|=|B|$. The simplest examples of unitary transformations are the so-called one-qubit Pauli operators $X$, $Y$, $Z$: $X= \ket x\mapsto \ket {1-x}$, $Z = \ket x\mapsto (-1)^x \ket x$ and $Y = iXZ$. An example of an isometry which is not a unitary evolution is, given a one-qubit state $\ket \psi \in \mathbb C^{\{0,1\}^{\{u\}}}$ and a register $A$ s.t. $u\notin A$, the map $\ket \psi_u : \mathbb C^{\{0,1\}^A} \to \mathbb C^{\{0,1\}^{A\cup \{u\}}} = \ket \varphi \mapsto \ket \varphi\otimes \ket{\psi}$ which consists of adding a qubit $u$ in the state $\ket \psi$ to the register $A$. A measurement is a fundamentally probabilistic evolution which produces a classical outcome and transforms the state of the quantum system. We consider in this paper only destructive measurements, which means that the measured qubit is consumed by the measurement: measuring a qubit $u$ of a register $A$ transforms the state $\ket \varphi \in \mathbb C^{\{0,1\}^A}$ into a state $\ket \psi \in \mathbb C^{\{0,1\}^{A\setminus \{u\}}}$. Moreover, we will consider only one-qubit measurements, also called local measurements. A 1-qubit measurement is characterised by an observable $\mathcal O$, i.e. 
a Hermitian operator acting on one qubit. We assume $\mathcal O$ has two distinct eigenvalues $1$ and $-1$. Let $\ket {\varphi_0}$ and $\ket {\varphi_1}$ be the corresponding eigenvectors. A measurement according to $\mathcal O$ of a qubit $u$ of a register $A$ in the state $\ket \psi\in \mathbb C^{\{0,1\}^A}$ produces the classical outcome $0$ (resp. $1$) and the state $\frac{\bra{\varphi_0}_u \ket\psi}{\sqrt{\bra \psi \ket {\varphi_0}_u\bra{\varphi_0}_u \ket\psi}}$ (resp. $\frac{\bra{\varphi_1}_u \ket\psi}{\sqrt{\bra \psi \ket {\varphi_1}_u\bra{\varphi_1}_u \ket\psi}}$) with probability $\bra \psi \ket {\varphi_0}_u\bra{\varphi_0}_u \ket\psi$ (resp. $\bra \psi \ket {\varphi_1}_u\bra{\varphi_1}_u \ket\psi$), where $\bra {\varphi_1}_u : \mathbb C^{\{0,1\}^{A\cup \{u\}}} \to \mathbb C^{\{0,1\}^A}$ is the adjoint of $\ket {\varphi_1}_u$. A quantum evolution composed of $k$ 1-qubit measurements and $n$ isometries (in any order) has $2^k$ possible evolutions and is hence represented by $2^k$ linear maps $L_s$ indexed by the possible sequences of classical outcomes. The quantum evolution should satisfy the condition $\sum_{s\in \{0,1\}^k}L^\dagger_sL_s = I$. It can be obtained as the composition of isometries and measurements as follows: a measurement is a pair $\{\bra {\varphi_0}, \bra {\varphi_1}\}$, an isometry $U$ is a singleton $\{U\}$ and the composition of two quantum evolutions is $\{L_s:s\in \{0,1\}^k\}\circ\{M_t:t\in \{0,1\}^m\} = \{L_sM_t :s\in \{0,1\}^k, t\in \{0,1\}^m\}$. A probability distribution of quantum states, say $\{(\ket {\varphi_i}, p_i)\}_i$, can be represented as a density matrix $\rho = \sum_{i}p_i\ket {\varphi_i}\bra{\varphi_i}$. Two probability distributions of quantum states leading to the same density matrix are indistinguishable. A quantum evolution $\{L_s : s\in \{0,1\}^k\}$ transforms $\rho$ into $\sum_{s\in \{0,1\}^k}L_s\rho L^\dagger_s$. \section{Proof of property \ref{prop:sim}} Pauli flow has been introduced in \cite{BKMP07}, as follows: \begin{definition}[Pauli Flow \cite{BKMP07}] An open graph state $(G,I,O,\lambda)$ has a \emph{Pauli flow} if there exists a map $p: O^c \rightarrow 2^{I^c}$ and a strict partial order $<$ over $O^c$ such that $\forall u,v\in O^c$, \\---(P1)\ if $v\in p(u)$, $u\neq v$, and $\lambda_v\notin \{\{X\},\{Y\}\}$ then $u<v$, \\---(P2)\ if $v\le u$, $u\neq v$, and $\lambda_v\notin \{\{Y\},\{Z\}\}$ then $v\notin \odd{p(u)}$, \\---(P3)\ if $v\le u$, $u\neq v$, and $\lambda_v=\{Y\}$ then $v\in p(u) \Leftrightarrow v\in \odd{p(u)}$, \\---(P4)\ if $\lambda_u= \{X,Y\}$ then $u\notin p(u)$ and $u\in \odd{p(u)}$, \\---(P5)\ if $\lambda_u=\{X,Z\}$ then $u\in p(u)$ and $u\in \odd{p(u)}$, \\---(P6)\ if $\lambda_u=\{Y,Z\}$ then $u\in p(u)$ and $u\notin \odd{p(u)}$, \\---(P7)\ if $\lambda_u=\{X\}$ then $u\in \odd{p(u)}$, \\---(P8)\ if $\lambda_u=\{Z\}$ then $u\in p(u)$, \\---(P9)\ if $\lambda_u=\{Y\}$ then either:\;\; $u\notin p(u) \; \& \; u\in \odd{p(u)}$ \;\; or \;\; $u\in p(u) \; \& \; u\notin \odd{p(u)}$. \noindent where $u\le v$ iff $\neg (v<u)$. \end{definition} \label{app:prop1} First of all, (P9) can be simplified to: if $\lambda_u=\{Y\}$ then $u\in \codd{p(u)}$. Let us now rewrite the block (P4) to (P9). Using (P4), (P5) and (P7), we can say that $X \in \lambda_u \Rightarrow u \in \odd{p(u)}$. Also, (P4), (P6) and (P9) enable us to show that $Y \in \lambda_u \Rightarrow u \in \codd{p(u)}$, and (P5), (P6) and (P8) that $Z \in \lambda_u \Rightarrow u \in p(u)$. 
Conversely, we can go back as easily to properties (P4) to (P9) from $X\in \lambda_u \Rightarrow u\in \odd{p(u)}$, $Y\in \lambda_u \Rightarrow u\in \codd{p(u)}$ and $Z\in \lambda_u \Rightarrow u\in {p(u)}$. To achieve the proof, we need to show that given a $u \in O^c$, for all $v \in O^c$, (P1), (P2) and (P3) are equivalent to the fact that if $v \leq u$ and $v \neq u$, then: \\---(Q1) $X \in \lambda_v \Rightarrow v \notin \odd{p(u)}$, \\---(Q2) $Y \in \lambda_v \Rightarrow v \notin \codd{p(u)}$, \\---(Q3) $Z \in \lambda_v \Rightarrow v \notin p(u)$. This equivalence is easier to prove once (P1), (P2) and (P3) are simplified to: \\---(P1') for $v \leq u$ and $v \neq u$, $\lambda_v \notin \{\{X\},\{Y\}\} \Rightarrow v \notin p(u)$, \\---(P2') for $v \leq u$ and $v \neq u$, $\lambda_v \notin \{\{Y\},\{Z\}\} \Rightarrow v \notin \odd{p(u)}$, \\---(P3') for $v \leq u$ and $v \neq u$, $\lambda_v = \{Y\}$, $v \in p(u) \Leftrightarrow v \in \odd{p(u)}$. The end of the proof is a proof by exhaustion. To prove (Q1) from (P1'), (P2') and (P3'), note that if $X \in \lambda_v$, then $\lambda_v$ is $\{X\}$, $\{X,Y\}$ or $\{X,Z\}$. (P2') enables us to conclude. To prove (Q2), note that if $Y \in \lambda_v$, then $\lambda_v$ is $\{Y\}$, $\{X,Y\}$ or $\{Y,Z\}$. In the first case, we can conclude from (P3'); in the other two, the combination of (P1') and (P2') does the trick. The third case, (Q3), goes the same way. Conversely, let us show that we can prove (P1') from (Q1), (Q2) and (Q3). The proofs of (P2') and (P3') follow the same sketch. If $\lambda_v \notin \{\{X\},\{Y\}\}$, then $\lambda_v$ is $\{Z\}$ or one of the three planes. If $\lambda_v$ is $\{Z\}$, $\{X,Z\}$ or $\{Y,Z\}$, then we get the result from (Q3). If $\lambda_v$ is $\{X,Y\}$, then we know from (Q1) that $v \notin \odd{p(u)}$ and from (Q2) that $v \notin \codd{p(u)}$: that is sufficient to ensure that $v \notin p(u)$. That ends the proof. \section{Proof of Theorem \ref{thm:paulinec}} \label{app:thempaulinec} $[\Rightarrow]$: Theorem \ref{thm:Paulisuf}. $[\Leftarrow]$: We order the vertices of $G$ according to the order of the measurements: $V=\{v_0, \ldots , v_{n-1}\}$ s.t. $v_i\prec v_j \Rightarrow i<j$. For any $k\in [0,n)$, let $V_k = \{v_k, \ldots, v_{n-1}\}$. For any $S\subseteq I$, let the input qubits in $S$ be $\ket 0$ and those in $I\setminus S$ be $\ket +$. Moreover, for any $u\in O^c$, let $M_u$ be a Pauli measurement of qubit $u$ with $M_u \in \lambda_u$. The initial state -- before the first measurement -- is $\ket {\varphi_0} = \ket 0_S\otimes (\prod_{u,v \in S^c s.t. (u,v)\in G} \Lambda Z_{u,v})\ket +_{S^c}$. We are going to use some technical claims to build the proof; their proofs are given below. The following claims exhibit Pauli operators which depend on the measurements performed during the computation, and which stabilize the intermediate states obtained during the computation: ~\\ \noindent{\bf Claim 1.} There exist $n$ independent\footnote{$P^{(0)}, \ldots, P^{(n-1)}$ are independent if none of these Pauli operators can be obtained as the product of the other ones, even up to a global phase.} Pauli operators $P^{(0)}, \ldots, P^{(n-1)}:\mathbb C^{\{0,1\}^V}$ $\to \mathbb C^{\{0,1\}^V}$ s.t. $\forall i\in [0,n)$, $P^{(i)}\ket{\varphi_0} = \ket {\varphi_0}$ and $\forall j<i$, $M_{v_j}$ and $P^{(i)}$ commute. ~\\ \noindent{\bf [Proof of Claim 1]} For all $u\in V$, let $R^{(u)} = \begin{cases}Z_u&\text{if $u\in S$}\\ X_uZ_{N_G(u)}&\text{otherwise}\end{cases}$. 
The initial state $\ket {\varphi_0}$ is stabilized by $\mathcal S = \langle R^{(u)}\rangle_{u \in V}$, i.e.~$\ket {\varphi_0}$ is the unique state (up to an irrelevant global phase) such that $\forall u\in V(G)$, $R^{(u)}\ket {\varphi_0} = \ket {\varphi_0}$. We use the following Gauss-elimination-like algorithm to produce some new generators $(P^{(u)})_{u\in V}$ of $\mathcal S$ which satisfy that $\forall v_i\in O^c, \forall v_j\in V$, if $i<j$ then $M_{v_i}$ and $P^{(v_j)}$ commute: \indent For all $u\in V, P^{(u)}\leftarrow R^{(u)}$.\\ \indent For all $i\in [0,|V|-1]$:\\ \indent\indent let $A = \{j ~|~i\le j \text{ and $M_{v_i}$ and $P^{(v_j)}$ anticommute}\}$.\\ \indent\indent If $A\neq \emptyset$, let $i_0\in A$ \\ \indent\indent \indent for all $j\in A\setminus \{i_0\}$, $P^{(v_j)} \leftarrow P^{(v_j)}.P^{(v_{i_0})}$\\ \indent\indent \indent $P^{(v_i)}\leftrightarrow P^{(v_{i_0})}$. $\Box$ ~\\ \noindent{\bf Claim 2.} After $k$ measurements and the corresponding corrections, the state $\ket{\varphi_k}$ of the system\footnote{To simplify the proof we assume that the measurements are non destructive, which means that after, say, a $Z$-measurement the measured qubit remains and is either in state $\ket 0$ or $\ket 1$ depending on the outcome of the measurement. As a consequence, for any $k$, $\ket{\varphi_k}$ is a $n$-qubit state.} satisfies: $\forall i<k$, $M_{v_i}\ket{\varphi_k}=\pm\ket{\varphi_k}$ and $\forall i\ge k$, $P^{(i)}\ket{\varphi_k}=\ket{\varphi_k}$. ~\\ \noindent{\bf [Proof of Claim 2.]} Since the first $k$ qubits of $\ket {\varphi_k}$ have been measured according to $M_{v_0},...,$ $M_{v_{k-1}}$, for any $i<k$, $M_{v_i}\ket{\varphi_k} = (-1)^{s_i}\ket {\varphi_k}$ where $s_i\in\{0,1\}$ is the classical outcome of the measurement of qubit $v_i$. To prove that $\forall i\ge k$, $P^{(i)}\ket{\varphi_k}=\ket{\varphi_k}$, notice that if a quantum state is a fixpoint of some operator $P$, the measurement of this state according to an observable which commutes with $P$ produces, whatever the classical outcome is, a quantum state which is also a fixpoint of $P$. Thus, since $P^{(i)}$ stabilizes the initial state $\ket{\varphi_0}$ and commutes with the first $k$ measurements, it stabilizes $\ket{\varphi_k}$. $\Box$ ~\\ \noindent{\bf Claim 3.} For any $k$, and any $n$-qubit Pauli operator $P$ s.t.~$P\ket {\varphi_k} = \pm\ket {\varphi_k}$, $\exists B_S\subseteq S^c$, $\exists D_S\subseteq S$, $\exists F_S\subseteq V_k^c$ s.t.~$P=\pm X_{B_S}Z_{\odd{B_S}\Delta D_S}\prod_{u\in F_S}M_u$. ~\\ \noindent{\bf [Proof of Claim 3.]} Claim 2 provides $n$ independent Pauli operators which stabilize $\ket {\varphi_k}$, thus $P$ must be a product of these operators: $\exists F_S \subseteq V_k^c$, $\exists Q_S\subseteq [k,n)$, s.t. $P = \pm \prod_{u\in F_S} M_u \prod_{i\in Q_S} P^{(i)}$. According to Claim 1, each $P^{(i)}$ is a product $\prod_{u\in \Gamma_i} R^{(u)}$ of the generators $R^{(u)} = \begin{cases}Z_u&\text{if $u\in S$}\\ X_uZ_{N_G(u)}&\text{otherwise}\end{cases}$. As a consequence, $P=\pm X_{B_S}Z_{\odd{B_S}\Delta D_S}\prod_{u\in F_S}M_u$, where $D_S = \big(\Delta_{i\in Q_S} \Gamma_i\big)\cap S$ and $B_S = \big(\Delta_{i\in Q_S} \Gamma_i\big)\cap S^c$. $\Box$ At some step $k$ of the computation, by the strongness hypothesis, the two possible outcomes of the measurement according to $M_{v_{k}}$ occur with probability $1/2$. Thus, thanks to the stepwise determinism hypothesis, there exists a real state $\ket \varphi$ on qubits $V\setminus \{v_{k}\}$ and $\theta\in[0,2\pi)$ s.t. 
$$ \ket {\varphi_k} = \frac1{\sqrt 2}(\ket \uparrow_{v_{k}} \otimes \ket \varphi_{V\setminus \{v_{k}\}} + e^{i\theta} \ket \downarrow_{v_{k}} \otimes X_{\mathtt x(v_{k})} Z_{\mathtt z(v_{k})} \ket \varphi_{V\setminus \{v_{k}\}})$$ where $\ket{\uparrow}\in \{\ket 0, \frac{\ket 0+\ket 1}{\sqrt 2}\}$ and $\ket \downarrow\in \{\ket 1, \frac{\ket 0-\ket 1}{\sqrt 2}\}$ are the eigenvectors of $M_{v_{k}}$. Since $\ket {\varphi_k}$, $\ket{\uparrow}$, $\ket{\downarrow}$ and $\ket{\varphi}$ are real states, $e^{i\theta}=(-1)^r$ for some $r\in \{0,1\}$. Let $T$ be the Pauli operator s.t. $T\ket \uparrow = - \ket \downarrow$ and $T\ket \downarrow = (-1)^{|\mathtt x(v_{k})\cap \mathtt z(v_{k}) |}\ket \uparrow$. Since $(-1)^r M_{v_{k}}T_{v_{k}}X_{\mathtt x(v_{k})} Z_{\mathtt z(v_{k})} \ket {\varphi_k} = \ket {\varphi_k}$, according to claim 3, $\exists B_S\subseteq S^c, D_S\subseteq S, F_S\subseteq V_{k}^c$ s.t.~$M_{v_{k}}T_{v_{k}}X_{\mathtt x(v_{k})} Z_{\mathtt z(v_{k})} = \pm X_{B_S} Z_{Odd(B_S)\Delta D_S} \prod_{u\in F_S} M_u$. Thus, $$ T_{v_{k}}X_{\mathtt x(v_{k})} Z_{\mathtt z(v_{k})} = \pm X_{B_S} Z_{Odd(B_S)\Delta D_S} \prod_{u\in F'_S} M_u$$ with $ B_S\subseteq S^c, D_S\subseteq S, F'_S\subseteq V_{k+1}^c$. The equation above, involving $\mathtt x(v_{k})$ and $\mathtt z(v_{k})$, is the main ingredient to recover the Pauli flow conditions. However, this equation depends a priori on the choice of the measurements and the initial states. The following claims show how to get rid of this dependency. ~\\ \noindent{\bf Claim 4.} $B_S, D_S$, and $F'_S$ do not depend on $S$. Therefore, using the notation $B$, $D$ and $F$ respectively, we can notice that $D = D_{\emptyset} =\emptyset$, and $B=B_I\subseteq I^c$. ~\\ \noindent{\bf [Proof of Claim 4.]} For any $S,S'\subseteq I$, \\$X_{B_S\Delta B_{S'}} Z_{Odd(B_S\Delta B_{S'})\Delta D_S \Delta D_{S'}} \prod_{u\in F'_S\Delta F'_{S'}} M_u = \pm I$, so $ \prod_{u\in F'_S\Delta F'_{S'}} M_u = $\\ $\pm X_{B_S\Delta B_{S'}} Z_{Odd(B_S\Delta B_{S'})\Delta D_S \Delta D_{S'}} $. \noindent Since all $M_u{\in} \{X,Z\}$, the product on the RHS of the latter equation should only produce $X$ and $Z$ (but no $XZ$), thus $(B_S\Delta B_{S'}) \cap ({Odd(B_S\Delta B_{S'})\Delta D_S \Delta D_{S'}})=\emptyset$, which is equivalent to $(B_S\Delta B_{S'}) \cap (D_S \Delta D_{S'}) = (B_S\Delta B_{S'}) \cap ({Odd(B_S\Delta B_{S'})})$. Thus $|(B_S\Delta B_{S'}) \cap (D_S \Delta D_{S'})|=0\bmod 2$, since for any set $A$, $|A\cap Odd(A)| = 0\bmod 2$. To prove that $F'_S = F'_{S'}$ we exhibit a particular input state such that the initial state is an eigenvector of $\prod_{u\in F'_S\Delta F'_{S'}} M_u$. This implies that when the measurements are performed, the last measurement of $F'_S\Delta F'_{S'}$ is going to be deterministic and thus contradicts the strongness assumption. As a consequence, $F'_S\Delta F'_{S'}$ must be empty. The input state is constructed as follows: The qubits in $(B_S\Delta B_{S'})\cap I$ are initialised in $\ket +$, the others in $\ket 0$. Since $|(B_S\Delta B_{S'})\cap (D_S\Delta D_{S'})|=0\bmod 2$ there exists a partition of $(B_S\Delta B_{S'})\cap (D_S\Delta D_{S'})$ into pairs of qubits $P=\{(u_i,v_i)\}_i$. For each pair in $P$, $\Lambda Z$ is applied on the corresponding qubits. 
The input state is then a fixpoint of $ X_{(B_S\Delta B_{S'})\cap I} Z_{D_S \Delta D_{S'}}$, thus after the entangling stage the overall state (including input and non input qubits) is an eigenstate of $X_{B_S\Delta B_{S'}} Z_{Odd(B_S\Delta B_{S'})\Delta D_S \Delta D_{S'}}$, which implies that the measurement according to $\prod_{u\in F'_S\Delta F'_{S'}} M_u$ is not strong. As a consequence, $F'_S=F'_{S'}$, which implies $B_S=B_{S'}$ and $D_S=D_{S'}$. $\Box$ ~\\ \noindent{\bf Claim 5.} $F$ and $B$ do not depend on the choice of Pauli measurements. ~\\ \noindent{\bf[Proof of Claim 5.]} If $F$ and $B$ depend on the choice of the Pauli measurements, then there exist two choices which differ on a single measurement and differ on at least one of the sets $F$, $B$. Let $(M_u)_{u\in O^c}$ and $(M'_u)_{u\in O^c}$ be these two choices and $u_0\in O^c$ s.t. $\forall u\neq u_0$, $M_u=M'_u$ and $M_{u_0}\neq M'_{u_0}$. Let $B$, $F$ and $B'$, $F'$ be the sets associated with these two choices of measurements. We have $X_{B\Delta B'} Z_{Odd(B\Delta B')} = \prod_{u\in F\setminus F'}M_u\prod_{u\in F'\setminus F}M'_u\prod_{u\in F\cap F'}M_uM'_u$. Notice that $\prod_{u\in F\cap F'}M_uM'_u=$ $\begin{cases}I&\text{if $u_0\notin F\cap F'$}\\X_{u_0}Z_{u_0}&\text{if $u_0\in F\cap F'$}\end{cases}$. Since $|({B\Delta B'})\cap Odd(B\Delta B')|=0\bmod 2$, we know that $\prod_{u\in F\cap F'}M_uM'_u=I$. \\As a consequence, $X_{B\Delta B'} Z_{Odd(B\Delta B')} = \prod_{u\in F\Delta F'}M_u$. Using arguments similar to those used above, one can provide a particular input such that the state is a fixpoint of $\prod_{u\in F\Delta F'}M_u$, which implies that, if $F\Delta F'\neq \emptyset$, the last measurement of $F\Delta F'$ is deterministic, contradicting the strongness hypothesis. Thus $F=F'$ and, as a consequence, $B=B'$. $\Box$ We are now able to build a Pauli flow. Since $F$ does not depend on the choice of the measurements, the basis of measurement of the qubits in $F$ must not vary, i.e.~$F\subseteq \lambda^{-1}(\{X\})\cup \lambda^{-1}(\{Z\})$. As a consequence, defining $F_X = F\cap\lambda^{-1}(\{X\})$ and $F_Z = F\cap\lambda^{-1}(\{Z\})$, we have $T_{v_{k}}X_{\mathtt x(v_{k})} Z_{\mathtt z(v_{k})} = \pm X_{B\Delta F_X} Z_{Odd(B)\Delta F_Z }$. Defining $p(v_{k}):=B$, one can check that for any partial order $\prec$ with respect to which $\mathtt x$ and $\mathtt z$ are extensive, $(p,\prec)$ is a Pauli flow. Indeed, if $X\in \lambda_{v_{k}}$, $T$ anti-commutes with $X$, thus $v_{k} \in Odd(p(v_{k}))$. Similarly, if $Z\in \lambda_{v_{k}}$, $v_{k}\in p(v_{k})$. Let $u\le v_{k}$, $u\neq v_{k}$. Since $\mathtt x$ and $\mathtt z$ are extensive, this implies that $u\notin \mathtt x(v_{k})\cup\mathtt z(v_{k})$. If $u\in Odd(p(v_{k}))$, then $u\in F_Z$, so $u\in \lambda^{-1}(\{Z\})$, which implies that $X\notin \lambda_u$. So $X\in \lambda_u \Rightarrow u\notin Odd(p(v_{k}))$. Similarly, $Z\in \lambda_u \Rightarrow u\notin p(v_{k})$. $\Box$ \section{Proof of Lemma \ref{lem:8}} \label{app:lemma8} Since $(G,I,O,\lambda)$ has a Pauli flow, there exists an order $<$ and $g:O^c\to 2^{I^c}$ s.t. $X\in \lambda_u \Rightarrow u\in \odd{g(u)}\setminus \left( \displaystyle{ \bigcup_{\substack{v \geq u \\ v \neq u}} \odd{g(v)}} \right)$ and $Z\in \lambda_u \Rightarrow u\in {g(u)}\setminus \left( \displaystyle{ \bigcup_{\substack{v \geq u \\ v \neq u}} {g(v)}} \right)$. Since $G$ is bipartite, let $V_0,V_1$ be a bipartition of $V(G)$ s.t. 
$V_0$ and $V_1$ are independent sets, and let $R_i = V_i\setminus O$ and $p: O^c\to 2^{I^c}$ be defined as follows: $\forall i\in \{0,1\}$, and $\forall u\in R_i$, $$p(u):= g(u)\oplus \left(\!\bigoplus_{\substack{v\in \odd{g(u)}\setminus \{u\}\\ \textup{s.t.} X\in \lambda_v}}\!\!\!\!\!\!\!\!\!\!p_Z(v)\right) \oplus \left(\!\bigoplus_{\substack{v\in g(u)\setminus \{u\} \\\textup{s.t.} Z\in \lambda_v}}\!\!\!\!p_X(v)\right) $$ where $\forall i\in \{0,1\}\forall v\in R_i$, $p_X(v)=p(v)\cap V_i$, and $p_Z(v)=p(v)\cap V_{1-i}$. The inductive definition of $p$ is well founded as the definition of $p(u)$ only depends on $p(v)$ with $u<v$. Let $u$ be maximal for $<$, $\odd{p(u)}=\odd{g(u)}$. For any $v<u$, $v\in (O\cup \lambda^{-1}(\{Z\}))^c$ implies $X\in \lambda_v$, so according to the Pauli flow condition, $v\notin \odd{p(u)}$. Moreover, $u\notin \lambda^{-1}(\{Z\}) \Leftrightarrow X\in \lambda_u \Rightarrow u\in \odd{p(u)}$, thus $\odd{p(u)}\setminus ((O\cup \lambda^{-1}(\{Z\})) = \{u\}\setminus \lambda^{-1}(\{Z\})$. Similarly ${p(u)} \setminus (O\cup \lambda^{-1}(\{X\})) = \{u\}\setminus \lambda^{-1}(\{X\})$. By induction, for a given $u\in O^c$, assume the property is satisfied for all $v\in O^c$ s.t. $u<v$ which implies: \begin{eqnarray*} \odd{p_X(v)} \setminus (O\cup \lambda^{-1}(\{Z\})) & =& \emptyset\\ \odd{p_Z(v)} \setminus (O\cup \lambda^{-1}(\{Z\})) & =& \{v\}\setminus \lambda^{-1}(\{Z\})\\ {p_X(v)} \setminus (O\cup \lambda^{-1}(\{X\})) & = &\{v\}\setminus \lambda^{-1}(\{X\})\\ {p_Z(v)} \setminus (O\cup \lambda^{-1}(\{X\})) & = &\emptyset \end{eqnarray*} \begin{align*} &p(u)\setminus (O\cup \lambda^{-1}(\{Z\})) = g(u)\setminus (O\cup \lambda^{-1}(\{Z\})) \oplus \\ & \left(\bigoplus_{\substack{v\in \odd{g(u)}\setminus \{u\} \\\textup{s.t.} X\in \lambda_v}}\!\!\!\!\!\!\!\!\!\!\!p_Z(v)\setminus (O\cup \lambda^{-1}(\{Z\})) \right) \oplus \left(\bigoplus_{\substack{v\in g(u)\setminus \{u\}\\ \textup{s.t.} Z\in \lambda_v}}\!\!\!\!\!\!\!p_X(v)\setminus (O\cup \lambda^{-1}(\{Z\})) \right)\\ &=g(u)\setminus (O\cup \lambda^{-1}(\{Z\})) \oplus \left(\bigoplus_{\substack{v\in g(u)\setminus \{u\}\\ \textup{s.t.} Z\in \lambda_v}}\{v\}\setminus (O\cup \lambda^{-1}(\{Z\})) \right)\\ &=g(u)\setminus (O\cup \lambda^{-1}(\{Z\})) \oplus (g(u)\setminus \{u\})\setminus (O\cup \lambda^{-1}(\{Z\})\\ &=\{u\}\setminus (O\cup \lambda^{-1}(\{Z\})) =\{u\}\setminus \lambda^{-1}(\{Z\}) \end{align*} Similarly,\begin{align*} &\odd{p(u)}\setminus (O\cup \lambda^{-1}(\{Z\})) = \odd{g(u)}\setminus (O\cup \lambda^{-1}(\{Z\})){\oplus} \\ &\!\!\left(\!\!\bigoplus_{\substack{v\in \odd{g(u)}\setminus \{u\} \\\textup{s.t.} X\in \lambda_v}}\hspace{-0.8cm}\odd{p_Z(v)}\setminus (O\cup \lambda^{-1}(\{Z\})) \!\!\right) \!\!{\oplus}\!\! \left(\!\!\bigoplus_{\substack{v\in g(u)\setminus \{u\} \\\textup{s.t.} Z\in \lambda_v}}\hspace{-0.45cm}\odd{p_X(v)}\setminus (O\cup \lambda^{-1}(\{Z\}))\!\! \right)\\ &=\odd{g(u)}\setminus (O\cup \lambda^{-1}(\{Z\})) \oplus \left(\bigoplus_{\substack{v\in \odd{g(u)}\setminus \{u\}\\ \textup{s.t.} X\in \lambda_v}}\{v\}\setminus (O\cup \lambda^{-1}(\{Z\})) \right)\\ &=\odd{g(u)}\setminus (O\cup \lambda^{-1}(\{Z\})) \oplus (\odd{g(u)}\setminus \{u\})\setminus (O\cup \lambda^{-1}(\{Z\})\\ &=\{u\}\setminus (O\cup \lambda^{-1}(\{Z\})) =\{u\}\setminus \lambda^{-1}(\{Z\}) \end{align*} \end{document}
\begin{document} \title[Clifford-Valued Fractal Interpolation]{Clifford-Valued Fractal Interpolation} \author{Peter R. Massopust} \address{Center of Mathematics, Technical University of Munich, Boltzmannstr. 3, 85748 Garching b. Munich, Germany} \email{[email protected]} \begin{abstract} In this short note, we merge the areas of hypercomplex algebras with that of fractal interpolation and approximation. The outcome is a new holistic methodology that allows the modelling of phenomena exhibiting a complex self-referential geometry and which require for their description an underlying algebraic structure. \end{abstract} \keywords{Iterated function system (IFS), Banach space, fractal interpolation, Clifford Algebra, Clifford Analysis} \subjclass{28A80, 11E88, 15A66, 41A30, 46E15} \maketitle \section{Introduction} In this short note, we merge two areas of mathematics: the theory of hypercomplex algebras as exemplified by Clifford algebras and the theory of fractal approximation or interpolation. In recent years, hypercomplex methodologies have found their way into many applications, one of which is digital signal processing. See, for instance, \cite{A,SW} and the references given therein. The main idea is to use the multidimensionality of hypercomplex algebras to model signals with multiple channels or images with multiple color values and to use the underlying algebraic structure of such algebras to operate on these signals or images. The results of these algebraic or analytic operations are again elements of the hypercomplex algebra. This holistic approach cannot be performed in finite dimensional vector spaces as these do not possess an intrinsic algebraic structure. On the other hand, the concept of fractal interpolation has been employed successfully in numerous applied situations over the last decades. The main purpose of fractal interpolation or approximation is to take into account complex geometric self-referential structures and to employ approximants that are well-suited to model these types of structures. These approximants or interpolants are elements of vector spaces and cannot be operated on in an algebraic way to produce the same type of object. Hence the need for an extension of fractal interpolation to the hypercomplex setting. An initial investigation into the novel concept of hypercomplex iterated function systems was already undertaken in \cite{m5}, albeit along a different direction. The structure of this paper is as follows. In Section 2, we give a brief introduction to Clifford algebras and mention a few items from Clifford analysis. In the third section, we review some techniques and state relevant results from the theory of fractal interpolation in Banach spaces. These techniques are then applied in Section 4 to a Clifford algebraic setting. The next section briefly mentions a special case of Clifford-valued fractal interpolation, namely that based on paravector-valued functions. In the last section, we provide a brief summary and mention future research directions. \section{A Brief Introduction to Clifford Algebra and Analysis}\label{sec2} In this section, we provide a terse introduction to the concepts of Clifford algebra and analysis and introduce only those items that are relevant for the purposes of this paper. For more details about Clifford algebra and analysis, the interested reader is referred to, for instance, \cite{BRS,Clifford,Clifford2,GHS,Hyper,Krav} and to, e.g., \cite{CSS,CSS2,GHS1,HQW} for its ramifications. 
To this end, denote by $\{e_1, \ldots, e_n\}$ the canonical basis of the Euclidean vector space $\mathbb{R}^n$. The real Clifford algebra, $\mathbb{R}_n$, generated by $\mathbb{R}^n$ is defined by the multiplication rules \begin{equation}\label{eq2.1} e_i e_j + e_j e_i = -2 \delta_{ij}, \quad i,j\in \{1,\ldots, n\} =: \mathbb{N}_n, \end{equation} where $\delta_{ij}$ is the Kronecker symbol. An element $x\in \mathbb{R}_n$ can be represented in the form $x = \sum\limits_{A} x_A e_A$ with $x_A\in \mathbb{R}$ and $\{e_A : A\subseteq \mathbb{N}_n\}$, where $e_A := e_{i_1} e_{i_2} \cdots e_{i_m}$, $1\leq i_1 < \cdots < i_m \leq n$, and $e_\emptyset =: e_0 := 1$. Thus, the dimension of $\mathbb{R}_n$ regarded as a real vector space is $2^n$. The rules defined in \eqref{eq2.1} make $\mathbb{R}_n$ into an, in general, noncommutative algebra, i.e., a real vector space together with a bilinear operation $\mathbb{R}_n \times \mathbb{R}_n\to\mathbb{R}_n$. A conjugation on Clifford numbers is defined by $\overline{x} := \sum\limits_{A} x_A \overline{e}_A$ where $\overline{e}_A := \overline{e}_{i_m} \cdots \overline{e}_{i_1}$ with $\overline{e}_i := -e_i$ for $i\in\mathbb{N}_n$, and $\overline{e}_0 := e_0 = 1$. In this context, one also has \begin{equation}\label{eq2} e_0 e_0 = e_0 = 1\quad\text{and}\quad e_0 e_i = e_i e_0 = e_i. \end{equation} The Clifford norm of the Clifford number $x = \sum\limits_{A} x_A e_A$ is defined by \[ |x| := \left(\sum\limits_{A\subseteq\mathbb{N}_n} |x_A|^2\right)^{1/2}. \] In the following, we consider Clifford-valued functions $f:G \subseteq\mathbb{R}^m\to\mathbb{R}_n$, where $G$ is a nonempty open domain. For this purpose, let $X$ be $G$ or any suitable subset of $\overline{G}$. Denote by $\mathcal{F}(X)$ any of the following function spaces: $C^k (X), C^{k,\alpha}(X), L^p(X), W^{s,p}(X), B^s_{p,q}(X), F^s_{p,q}(X)$ where \begin{enumerate} \item $C^k (X)$, $k\in\mathbb{N}_0 := \{0\}\cup\mathbb{N}$, is the Banach space of $k$-times continuously differentiable $\mathbb{R}$-valued functions; \item $C^{k,\alpha}(X)$, $k\in\mathbb{N}_0$, $0 < \alpha \leq 1$, is the Banach space of $k$-times continuously differentiable $\mathbb{R}$-valued functions whose $k$-th derivative is Hölder continuous with Hölder exponent $\alpha$; \item $L^p(X)$, $1\leq p < \infty$, are the Lebesgue spaces on $X$; \item $W^{s,p}(X)$, $s\in\mathbb{N}$ or $s>0$, $1\leq p < \infty$, are the Sobolev-Slobodeckij spaces; \item $B^s_{p,q}(X)$, $1\leq p,q < \infty$, $s>0$, are the Besov spaces; \item $F^s_{p,q}(X)$, $1\leq p,q < \infty$, $s>0$, are the Triebel-Lizorkin spaces. \end{enumerate} The real vector space $\mathcal{F}(X, \mathbb{R}_n)$ of $\mathbb{R}_n$-valued functions over $X$ is defined by \[ \mathcal{F}(X, \mathbb{R}_n) := \mathcal{F}(X) \otimes_\mathbb{R} \mathbb{R}_n. \] This linear space becomes a Banach space when endowed with the norm \[ \n{f} := \left(\sum_{A\subseteq\mathbb{N}_n} \n{f_A}^2_{\mathcal{F}(X)}\right)^{1/2}. \] It is known \cite[Remark 2.2. and Proposition 2.3.]{GS} that $f\in \mathcal{F}(X,\mathbb{R}_n)$ iff \begin{equation}\label{eq2.3} f = \sum_{A\subseteq\mathbb{N}_n} f_A e_A \end{equation} with $f_A\in \mathcal{F}(X)$. Furthermore, functions in $\mathcal{F}n$ inherit all the topological properties such as continuity and differentiability from the functions $f_A\in\mathcal{F}(X)$. \section{Some Results From Fractal Interpolation Theory}\label{sec3} In this section, we briefly summarize fractal interpolation and the Read-Bajraktarevi\'{c} operator. 
For a more detailed introduction to fractal geometry and its subarea of fractal interpolation, the interested reader is referred to the following, albeit incomplete, list of references: \cite{B1,B2,BHVV,bhm,bhm2,bedford,DLM,dubuc1,dubuc2,F,H,LDV,massopust1,SB,SB1}. To this end, let $X$ be a nonempty bounded subset of the Banach space $\mathbb{R}^m$. Suppose we are given a finite family $\{L_i\}_{i = 1}^{N}$ of injective nontrivial contractions $X\to X$ generating a partition of $X$ in the sense that \begin{align} &\forall\;i, j\in \mathbb{N}_N, i\neq j: L_i(X)\cap L_j(X) = \emptyset;\label{c1}\\ &X = \bigcup_{i=1}^N L_i(X).\label{c2} \end{align} For simplicity, we write $X_i := L_i(X)$. Here and in the following, we always assume that $1<N\in\mathbb{N}$. The purpose of fractal interpolation is to obtain a unique global function \[ \psi: X = \bigcup\limits_{i=1}^N X_i\to\mathbb{R} \] belonging to some prescribed Banach space of functions $\mathcal{F}(X)$ and satisfying $N$ functional equations of the form \begin{equation}\label{psieq} \psi (L_i (x)) = q_i (x) + s_i (x) \psi (x), \quad x\in X,\ i\in \mathbb{N}_N, \end{equation} where for each $i\in\mathbb{N}_N$, $q_i\in\mathcal{F}(X)$ and $s_i:X\to\mathbb{R}$ are given functions. In addition, we require that $s_i$ is bounded and satisfies $s_i \cdot f\in\mathcal{F}(X)$ for any $f\in\mathcal{F}(X)$, i.e., $s_i$ is a multiplier for $\mathcal{F}(X)$. It is worthwhile mentioning that Eqn. \eqref{psieq} reflects the self-referential or fractal nature of the global function $\psi$. The idea behind obtaining $\psi$ is to consider \eqref{psieq} as a fixed point equation for an associated affine operator acting on $\mathcal{F}(X)$ and to show that the fixed point - should it exist - is unique. (Cf. also \cite{SB}.) For this purpose, define an affine operator $T: \mathcal{F}(X)\to \mathcal{F}(X)$, called a Read-Bajraktarevi\'{c} (RB) operator, by \begin{equation}\label{eq3.4} T f := q_i\circ L_i^{-1} + s_i\circ L_i^{-1}\cdot f\circ L_i^{-1}, \end{equation} on $X_i$, $i\in \mathbb{N}_N$, or, equivalently, by \begin{align*} T f &= \sum_{i=1}^N q_i\circ L_i^{-1}\, \chi_{X_i} + \sum_{i=1}^N s_i\circ L_i^{-1}\cdot f\circ L_i^{-1}\, \chi_{X_i}\\ &= T(0) + \sum_{i=1}^N s_i\circ L_i^{-1}\cdot f\circ L_i^{-1}\, \chi_{X_i},\quad x\in X, \end{align*} where $\chi_S$ denotes the characteristic or indicator function of a set $S$: $\chi_S(x) = 1$, if $x\in S$, and $\chi_S(x) = 0$, otherwise. Then, \eqref{psieq} is equivalent to the fixed point equation $T\psi = \psi$. The existence of a unique fixed point follows from the Banach Fixed Point Theorem once it has been shown that $T$ is a contraction on $\mathcal{F}(X)$. The RB operator $T$ is a contraction on $\mathcal{F}(X)$ if there exists a constant $\gamma_{\mathcal{F}(X)}\in [0,1)$ such that for all $f,g\in\mathcal{F}(X)$ \begin{align*} \n{Tf - Tg}_{\mathcal{F}(X)} &= \n{\sum_{i=1}^N s_i\circ L_i^{-1}\cdot (f-g)\circ L_i^{-1}\, \chi_{X_i}}_{\mathcal{F}(X)}\\ & \leq \gamma_{\mathcal{F}(X)} \n{f - g}_{\mathcal{F}(X)} \end{align*} holds. Here, $\n{\cdot}_{\mathcal{F}(X)}$ denotes the norm on $\mathcal{F}(X)$. Should such a unique fixed point $\psi$ exist, then it is termed a \emph{fractal function of type $\mathcal{F}(X)$}, as its graph is in general a fractal set. 
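To make the fixed point construction concrete, the following is a minimal numerical sketch (not part of the original exposition) that iterates an RB operator of the form \eqref{eq3.4} on $X=[0,1]$ with $N=2$. The particular maps $L_i$, functions $q_i$ and constants $s_i$ used below are illustrative assumptions, chosen so that $\max_i|s_i|<1$ and $T$ is a contraction in the sup-norm.
\begin{verbatim}
# Minimal sketch of the Read-Bajraktarevic iteration on X = [0,1], N = 2.
# All concrete choices of L_i, q_i, s_i below are illustrative assumptions.
import numpy as np

L     = [lambda x: 0.5 * x, lambda x: 0.5 * x + 0.5]   # L_i : X -> X_i
L_inv = [lambda y: 2.0 * y, lambda y: 2.0 * y - 1.0]   # L_i^{-1} : X_i -> X
q     = [lambda x: x,       lambda x: 1.0 - x]         # q_i in F(X)
s     = [0.3, -0.4]                                    # constant multipliers, |s_i| < 1

def T(f, x):
    # (T f)(x) = q_i(L_i^{-1}(x)) + s_i * f(L_i^{-1}(x))  for x in X_i
    i = 0 if x < 0.5 else 1
    y = L_inv[i](x)
    return q[i](y) + s[i] * f(y)

grid = np.linspace(0.0, 1.0, 1025)
psi = np.zeros_like(grid)                  # start the iteration from f = 0
for _ in range(40):                        # contraction => convergence to psi
    prev = psi.copy()
    f = lambda y: np.interp(y, grid, prev)
    psi = np.array([T(f, x) for x in grid])

# check the self-referential equation psi(L_i(x)) = q_i(x) + s_i * psi(x)
x0 = 0.37
lhs = np.interp(L[0](x0), grid, psi)
rhs = q[0](x0) + s[0] * np.interp(x0, grid, psi)
print(abs(lhs - rhs))                      # small, up to grid/interpolation error
\end{verbatim}
Since the multipliers are constant with modulus strictly less than one, the contraction estimate displayed above applies verbatim, and the iterates converge to the unique fractal function $\psi$ satisfying \eqref{psieq}.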
Now, let $\mathcal{F}(X)$, for an appropriate $X$, denote one of the following Banach spaces of functions: the Lebesgue spaces $L^p(X)$, the smoothness spaces $C^k(X)$, Hölder spaces $C^{k,\alpha}(X)$, Sobolev-Slobodeckij spaces $W^{s,p}(X)$, Besov spaces $B^s_{p,q}(X)$, and Triebel-Lizorkin spaces $F^s_{p,q}(X)$. The following results were established in a series of papers \cite{PRM,massopust, massopust1,m2,m3}. \begin{theorem} Let $L_i$, $i\in\mathbb{N}_N$, be defined as in \eqref{c1} and \eqref{c2}. Further let $q_i\in\mathcal{F}(X)$ and let $s_i:X\to\mathbb{R}$ be bounded and a pointwise multiplier for $\mathcal{F}(X)$. Define $T:\mathcal{F}(X)\to\mathcal{F}(X)$ as in \eqref{eq3.4}. Then there exists a constant $\gamma_\mathcal{F}\in [0,1)$ depending on $m$, the indices defining $\mathcal{F}(X)$, $\Lip(L_i)$, and $\n{s_i}_{L^\infty}$ such that $\n{T f - T g} \leq \gamma_{\mathcal{F}}\; \n{f - g}$, for all $f,g\in\mathcal{F}(X)$. Hence, $T$ has a unique fixed point $\psi\in\mathcal{F}(X)$ which is referred to as a fractal function of class $\mathcal{F}(X)$. \end{theorem} \section{Clifford-Valued Fractal Interpolation}\label{sec4} In this section, we introduce the novel concept of Clifford-valued fractal interpolation. To this end, we refer back to Section \ref{sec2} and the definition of $X$ and $\mathcal{F}(X)$. We consider here only the case $m=1$ and leave the extension to higher dimensions to the diligent reader. Depending on which function space $\mathcal{F}(X)$ represents, $X$ is either an open, half-open or closed interval of finite length. Assume that there exist $N$, $1<N\in\mathbb{N}$, nontrivial contractive injections $L_i:X\to X$ such that $\{L_1( X), \ldots, L_N( X)\}$ forms a partition of $ X$, i.e., that \begin{enumerate} \item[(P1)]\label{P1} $L_i( X) \cap L_j( X) = \emptyset$, for $i\neq j$; \item[(P2)]\label{P2} $ X = \bigcup\limits_{i\in\mathbb{N}_N} L_i( X)$. \end{enumerate} As above, we write $X_i := L_i(X)$, $i\in\mathbb{N}_N$. On the spaces $\mathcal{F}n$, we define an RB operator $\wt{T}$ as follows. Let $f\in\mathcal{F}n$ with $f = \sum\limits_{A\subseteq\mathbb{N}_n} f_A e_A$, where $f_A\in\mathcal{F}(X)$, for all $A\subseteq\mathbb{N}_n$. Let $T:\mathcal{F}(X)\to\mathcal{F}(X)$ be an RB operator of the form \eqref{eq3.4}. Then, \begin{equation}\label{eq4.1} \wt{T}f := \sum\limits_{A\subseteq\mathbb{N}_n} T(f_A) e_A\in\mathcal{F}n, \end{equation} provided that $T(f_A)\in\mathcal{F}(X)$ for all $A\subseteq\mathbb{N}_n$. Under the latter assumption and the supposition that $T$ is contractive on $\mathcal{F}(X)$ with Lipschitz constant $\gamma_{\mathcal{F}(X)}$, we obtain for $f,g\in\mathcal{F}n$ \begin{align*} \n{\wt{T}f - \wt{T}g}^2 &= \sum\limits_{A\subseteq\mathbb{N}_n} \n{Tf_A - Tg_A}_{\mathcal{F}(X)}^2\\ &\leq \gamma_{\mathcal{F}(X)}^2 \sum\limits_{A\subseteq\mathbb{N}_n} \n{f_A - g_A}^2_{\mathcal{F}(X)}\\ &= \gamma_{\mathcal{F}(X)}^2 \n{f-g}^2. \end{align*} Hence, $\wt{T}$ is also contractive on $\mathcal{F}n$ with the same Lipschitz constant $\gamma_{\mathcal{F}(X)}$. The following diagram illustrates the above approach. \begin{equation}\label{diagram} \begin{CD} \mathcal{F}(X) @>T>> \mathcal{F}(X)\\ @VV\otimes_\mathbb{R} \mathbb{R}_nV @VV\otimes_\mathbb{R} \mathbb{R}_nV\\ \mathcal{F}n @>\wt{T}>> \mathcal{F}n \end{CD} \end{equation} The next theorem summarizes the main result. \begin{theorem}\label{thm4.1} Let $X\subset\mathbb{R}$ be as mentioned above. 
Further, let nontrivial injective contractions $L_i:X\to X$, $i\in\mathbb{N}_N$, be given such that (P1) and (P2) are satisfied. Let $\mathcal{F}(X)$ be any one of the function spaces defined in Section \ref{sec2}. On the space $\mathcal{F}n = \mathcal{F}(X)\otimes_\mathbb{R} \mathbb{R}_n$ define an RB operator $\wt{T}:\mathcal{F}n$ $\to\mathcal{F}n$ by \[ \wt{T} f := \sum\limits_{A\subseteq\mathbb{N}_n} T(f_A) e_A, \] where $T:\mathcal{F}(X)\to\mathcal{F}(X)$ is an RB operator of the form \eqref{eq3.4}. If $T:\mathcal{F}(X)\to\mathcal{F}(X)$ is a contractive RB operator on $\mathcal{F}(X)$ with Lipschitz constant $\gamma_{\mathcal{F}(X)}$, then $\wt{T}$ is also contractive on $\mathcal{F}n$ with the same Lipschitz constant. Furthermore, the unique fixed point $\psi\in\mathcal{F}n$ satisfies the Clifford-valued self-referential equation \[ \psi (L_i (x)) = q_i (x) + s_i (x) \psi (x), \quad x\in X,\ i\in \mathbb{N}_N. \] \end{theorem} \begin{proof} The validity of these statements follows directly from the above elaborations. \end{proof} For the sake of completeness, we now list the Lipschitz constants $\gamma_{\mathcal{F}(X)}$ for the function spaces listed in Section \ref{sec2} in the case $m=1$. In each case, the condition is $\gamma_{\mathcal{F}(X)} < 1$. Note that the expressions are different for the case $m>1$. \begin{enumerate} \item $C^k (X)$: $\gamma_{C^k(X)} = \max\{\Lip (L_i)^{-(k+1)} \n{s_i}_{L^\infty} : i\in\mathbb{N}_N\}$. \vskip 5pt\noindent \item $C^{k,\alpha}(X)$: $\gamma_{C^{k,\alpha} (X)} = \max\{\Lip (L_i)^{-(k+\alpha)} \n{s_i}_{L^\infty} : i\in\mathbb{N}_N\}$. \vskip 5pt\noindent \item $L^p(X)$: $\gamma_{L^p(X)} = \displaystyle{\sum\limits_{i\in\mathbb{N}_N}} \Lip (L_i) \n{s_i}^p_{L^\infty}$. \vskip 5pt\noindent \item $W^{s,p}(X)$: $\gamma_{W^{s,p}(X)} = \displaystyle{\sum\limits_{i\in\mathbb{N}_N}} \Lip (L_i)^{1 - s p} \n{s_i}^p_{L^\infty}$. \vskip 5pt\noindent \item $B^s_{p,q}(X)$: $\gamma_{B^s_{p,q}(X)} = \displaystyle{\sum\limits_{i\in\mathbb{N}_N}} \Lip (L_i)^{(1/p - s) q} \n{s_i}^q_{L^\infty}$. \vskip 5pt\noindent \item $F^s_{p,q}(X)$: $\gamma_{F^s_{p,q}(X)} = \displaystyle{\sum\limits_{i\in\mathbb{N}_N}} \Lip (L_i)^{1 - s p} \n{s_i}^p_{L^\infty}$. \vskip 5pt\noindent \end{enumerate} The geometric interpretation of $\wt{T}$ is immediate: Each of the functions $f_A$ is contracted by $T$ along the direction in $\mathbb{R}_n$ determined by $e_A$. There is no mixing taking place between different directions. This provides a holistic representation of features that require such a structure, for instance, multichannel data or multicolored images. \section{Paravector-Valued Functions}\label{sec5} An important subspace of $\mathbb{R}_n$ is the space of paravectors. These are Clifford numbers of the form $x = x_0 + \sum\limits_{i=1}^n x_i e_i$. The subspace of paravectors is denoted by $\mathbb{A}_{n+1} := \mathrm{span\,}_\mathbb{R}\{e_0, e_1, \ldots, e_n\} = \mathbb{R}\oplus \mathbb{R}^n$. Given a Clifford number $x\in \mathbb{R}_n$, we assign to $x$ its paravector part by means of the mapping $\pi: \mathbb{R}_n\to \mathbb{A}_{n+1}$, $x \mapsto x_0 + \sum\limits_{i=1}^n x_i e_i$. Note that each paravector $x$ can be identified with an element $(x_0, x_1, \ldots, $ $x_n) =: (x_0, \boldsymbol{x})\in \mathbb{R}\times \mathbb{R}^n$. For many applications in Clifford theory, one therefore identifies $\mathbb{A}_{n+1}$ with $\mathbb{R}^{n+1}$. Although these two sets are identical as point sets, they differ considerably in their algebraic structures. 
For instance, every nonzero $x\in \mathbb{A}_{n+1}$ has an inverse whereas there is no such object for a vector $v\in\mathbb{R}^{n+1}$. We also notice that $\mathbb{A}_{n+1}$ is not necessarily closed under multiplication unless a multiplication table \cite{Hyper} is defined or $n=3$, in which case $\mathbb{A}_4 = \mathbb{H}$, the non-commutative division algebra of quaternions. $\mathbb{A}_{n+1}$ endowed with a multiplication table produces in general a nonassociative noncommutative algebra. See \cite{A} for an investigation of the suitability of such algebras in the area of digital signal processing. The scalar part, $\Sc$, and vector part, $\Ve$, of a paravector $\mathbb{A}_{n+1}\ni x = x_0 + \sum\limits_{i=1}^n x_i e_i$ are given by $x_0$ and $\boldsymbol{x} = \sum\limits_{i=1}^n x_i e_i$, respectively. Given a Clifford number $x\in \mathbb{R}_n$, we assign to $x$ its paravector part, $\PV(x)$, by means of the mapping $\pi: \mathbb{R}_n\to \mathbb{A}_{n+1}$, $x \mapsto x_0 + \sum\limits_{i=1}^n x_i e_i =: \PV(x)$. A function $f:\mathbb{A}_{n+1}\to\mathbb{A}_{n+1}$ is called a paravector-valued function. Any such function is of the form \begin{equation}\label{eq5.1} f(x) = f_0 (x) + \sum\limits_{i=1}^n f_i (x) e_i, \end{equation} where $f_a:\mathbb{R}\times\mathbb{R}^{n}\to\mathbb{R}$, $a\in\{0,1,\ldots, n\}$. The expression \eqref{eq5.1} for a paravector-valued function can also be written in the more succinct form \[ f (x_0 + \boldsymbol{x}) = f_0(x_0, |\boldsymbol{x}|) + \omega (\boldsymbol{x})f_1(x_0,|\boldsymbol{x}|), \] where now $f_0, f_1:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ and $\omega (\boldsymbol{x}) := \frac{\boldsymbol{x}}{|\boldsymbol{x}|}\in \mathbb{S}^{n-1}$ with $\mathbb{S}^{n-1}$ denoting the unit sphere in $\mathbb{R}^n$. For some properties of paravector-valued functions, see, for instance \cite{GS,sproessig}. Prominent examples of paravector-valued functions are the exponential and sine functions \cite{sproessig} for $x\in \mathbb{A}_{n+1}$: \begin{align*} \exp (x) =& \exp(x_0)\left(\cos \abs{\boldsymbol{x}} + \omega(\boldsymbol{x}) \sin \abs{\boldsymbol{x}}\right),\\ \sin (x) = & \sin x_0 \cosh \abs{\boldsymbol{x}} + \omega(\boldsymbol{x}) \cos x_0 \sinh \abs{\boldsymbol{x}}. \end{align*} A large class of paravector-valued functions is given by right-linear transformations. To this end, let $M_k (\mathbb{A}_{n+1})$ be the right module of $k\times k$-matrices over $\mathbb{A}_{n+1}$. Every element $H = (H_{ij})$ of $M_k (\mathbb{A}_{n+1})$ induces a right linear transformation $L: \mathbb{A}_{n+1}^k\to \mathbb{R}_n^k$ via $L(x) = H x$ defined by $L(x)_i = \sum\limits_{j=1}^k H_{ij} x_j$, $H_{ij}\in \mathbb{A}_{n+1}$. To obtain an endomorphism $\mathcal{L}:\mathbb{A}_{n+1}^k\to\mathbb{A}_{n+1}^k$, we set $\mathcal{L}(x)_i := \pi(L(x)_i)$, $i=1, \ldots, k$. In this case, we write $\mathcal{L} = \pi\circ L$. For example, if $n=3$ (the case of real quaternions), then $L: \mathbb{A}_{4}^k\to \mathbb{A}_{4}^k$ and thus $\mathcal{L} = L$. Theorem \ref{thm4.1} also applies to paravector-valued functions and thus provides a framework for paravector-valued fractal interpolation; the relevant associated function spaces for appropriate $X$ are defined in a fashion analogous to the above. To this end, let $\mathcal{F}(X)$ be, for instance, one of the function spaces listed in Section \ref{sec2}. 
Then, \[ \mathcal{F}(X, \mathbb{A}_{n+1}) := \mathcal{F}(X)\otimes_\mathbb{R} \mathbb{A}_{n+1} \] and an element $f$ of $\mathcal{F}(X, \mathbb{A}_{n+1})$ has therefore the form \[ f = \sum_{k=0}^n f_k e_k. \] Theorem \ref{thm4.1} then asserts the existence of a paravector-valued function $\psi\in \mathcal{F}(X,\mathbb{A}_{n+1})$ of self-referential nature: \[ \psi (L_i (x)) = q_i (x) + s_i (x) \psi (x), \quad x\in X,\ i\in \mathbb{N}_N, \] where the functions $q_i$ and $s_i$ have the same meaning as in Section \ref{sec4}. \section{Brief Summary and Further Research Directions} In this short note, we have initiated the extension of fractal interpolation to a hypercomplex setting. The main idea was to define fractal interpolants along the different directions defined by a Clifford algebra $\mathbb{R}_n$ and use the underlying algebraic structure to manipulate the hypercomplex fractal object to yield another hypercomplex fractal object. There are several extensions of this initial approach: \begin{enumerate} \item Define -- under suitable conditions -- RB operators acting directly on appropriately defined function spaces $\mathcal{F}(X, \mathbb{R}_n)$ instead of resorting to the ``component'' RB operators. \item Provide a \emph{local} version of the defined hypercomplex fractal interpolation in the sense first defined in \cite{BH} and further investigated in, e.g., \cite{bhm,m2}. \item Construct \emph{nonstationary} approaches to Clifford-valued fractal interpolation in the spirit of \cite{m4}. \item Extend the notion of hypercomplex fractal interpolation to systems of function systems as described in \cite{DLM,LDV}. \end{enumerate} \begin{thebibliography}{99} \bibitem{A} Alsmann, D. On Families of $2^N$-Dimensional Hypercomplex Algebras Suitable for Digital Image Processing. \textit{14th European Signal Processing Conference (EUSIPCO 2006)}, 1--4. \bibitem{B1} Barnsley, M.F. \textit{Fractals Everywhere}. Dover Publications Inc., 2012. \bibitem {B2} Barnsley, M.F. Fractal functions and interpolation. \textit{Constr. Approx.} \textbf{1986}, \textit{2}, 303--329. \bibitem{BHVV} Barnsley, M.F., Harding, B., Vince, C., Viswanathan, P. Approximation of rough functions. \textit{J. Approx. Th.} \textbf{2016}, \textit{209}, 23--43. \bibitem{bhm} Barnsley, M.F., Hegland, M., Massopust, P.R. Numerics and Fractals. \textit{Bull. Inst. Math. Acad. Sin. (N.S.)} \textbf{2014}, \textit{9(3)}, 389--430. \bibitem{bhm2} Barnsley, M.F., Hegland, M., Massopust, P.R. Self-referential functions. https://arxiv.org/abs/1610.01369 \bibitem{BH} Barnsley, M.F., Hurd, L.P. \textit{Fractal Image Compression}. AK Peters Ltd., Wellesly, MA, 1993. \bibitem{bedford} Bedford, T., Dekking, M., Keane, M. Fractal image coding techniques and contraction operators. \textit{Delft University of Technology Report} \textbf{1992}, \textit{92-93}. \bibitem{BRS} Brackx, F.; Delanghe, R.; Sommen, F. \textit{Clifford Analysis}. Pitman Books, 1982. \bibitem{Clifford} Ab\l amowicz, R., Sobczyk, G. \textit{Lectures on Clifford (Geometric) Algebras and Applications}. Birkh\"auser, New York (2004). \bibitem{CSS} Colombo, F.; Sabadini, I.; Struppa, D.C. \textit{Noncommutative Functional Calculus: Theory and Applications of Slice Hyperholomorphic Functions}. Birkhäuser Verlag, 2010. \bibitem{CSS2} Colombo, F.; Sabadini, I.; Struppa, D.C. \textit{Entire Slice Regular Functions}. Springer Verlag, 2016. \bibitem{Clifford2} Delanghe, R., Sommen, F., Sou\v{c}ek, V. 
\textit{Clifford Algebra and Spinor-Valued Functions. A Function Theory for the Dirac Operator}. Springer, Dordrecht (1992). \bibitem{DLM} Dira, N., Levin, D., Massopust, P. Attractors of trees of maps and of sequences of maps between spaces and applications to subdivision. \textit{J. Fixed Point Theory Appl.} \textbf{2020}, \textit{22}(14), 1--24. \bibitem{dubuc1} Dubuc, S. Interpolation through an iterative scheme. \textit{J. Math. Anal. Appl.} \textbf{1986}, \textit{114(1)}, 185--204. \bibitem{dubuc2} Dubuc, S. Interpolation fractale. In \textit{Fractal Geomety and Analysis}, J. B\'elais and S. Dubuc, eds., Kluwer Academic Publishers, Dordrecht, The Netherlands, 1989. \bibitem{F} Falconer, K. \textit{Fractal Geometry: Mathematical Foundations and Applications}. 3rd ed. Wiley, 2014. \bibitem{GHS} G\"urlebeck, K.; Habetha, K.; Spr\"o{\ss}ig, W. \textit{Holomorphic Functions in the Plane and $n$-dimensional Space}, Birkh\"auser Verlag, 2000. \bibitem{GHS1} G\"urlebeck, K.; Habetha, K.; Spr\"o{\ss}ig, W. \textit{Application of Holomorphic Functions in Two and Higher Dimensions}, Birkh\"auser Verlag, 2016. \bibitem{GS} G\"urlebeck, K.; Spr\"o{\ss}ig, W. \textit{Quaternionic and Clifford Calculus for Physicists and Engineers}, John Wiley \& Sons,1997. \bibitem{H} Hutchinson, J.E. Fractals and self-similarity. \textit{Indiana Univ. Math. J.} \textbf{1981}, \textit{30}, 713--747. \bibitem{HQW} Huang, S.; Qiao, Y.Y.; Wen, G.C. \textit{Real and Complex Clifford Analysis}. Springer Verlag, 2006. \bibitem{Hyper} Kantor, I.L., Solodovnik, A.S. \textit{Hypercomplex Numbers. An Elementary Introduction to Algebras.} Springer Verlag, New York, 1989. \bibitem{Krav} Kravchenko, V. \textit{Applied Quaternionic Analysis}, Heldermann Verlag: Lemgo, Germany, 2003. \bibitem{LDV} Levin, D.; Dyn, N.; Viswanathan, P. Non-stationary versions of fixed-point theory, with applications to fractals and subdivision. \textit{J. Fixed Point Theory Appl.} \textbf{2019}, \textit{21}, 1--25. \bibitem{PRM} Massopust, P.R. Fractal functions and their applications. \textit{Chaos, Solitons and Fractals}, \textbf{1997}, \textit{8(2)}, 171--190. \bibitem {massopust} Massopust, P.R. \textit{Interpolation and Approximation with Splines and Fractals,} Oxford University Press: Oxford, USA, 2010. \bibitem{massopust1} Massopust, P.R. \textit{Fractal Functions, Fractal Surfaces, and Wavelets}, 2nd ed., Academic Press: San Diego, USA, 2016. \bibitem{m2} Massopust, P.R. Local fractal functions and function spaces. \textit{Springer Proceedings in Mathematics \& Statistics: Fractals, Wavelets and their Applications} \textbf{2014}, Vol. 92, 245--270. \bibitem{m3} Massopust, P.R. Local Fractal Functions in Besov and Triebel-Lizorkin Spaces. \textit{J. Math. Anal. Appl.} \textbf{2016}, \textit{436}, 393 -- 407. \bibitem{m4} Massopust, P.R. Non-Stationary Fractal Interpolation. \textit{Mathematics} \textbf{2019}, \textit{7(8)}, 1 -- 14. \bibitem{m5} Massopust, P.R. Hypercomplex Iterated Function Systems. To appear in \textit{Current Trends in Analysis, its Applications and Computation.} Proceedings of the 12th ISAAC Congress, Aveiro, Portugal, 2019, Cereijeras, P.; Reissig, M.; Sabadini, I.; Toft, J. (eds.), Birkh\"auser. \bibitem{SB} Serpa, C.; Buescu, J. Constructive solutions for systems of iterative functional equations. \textit{Constructive Approx.}, \textbf{2017}, \textit{45(2)}, 273--299. \bibitem{SB1} Serpa, C.; Buescu, J. Compatibility conditions for systems of iterative functional equations with non-trivial contact sets. 
\textit{Results Math.} \textbf{2021}, \textit{2}, 1--19. \bibitem{sproessig} Spr{\"o}ssig, W.: On Operators and Elementary Functions in Clifford Analysis. Zeitschrift f\"ur Analysis und ihre Anwendung \textbf{19}, No.2, 349--366 (1999). \bibitem{SW} Schutte, H.-D.; Wenzel, J. Hypercomplex numbers in digital signal processing. \it{IEEE International Symposium on Circuits and Systems} \textbf{1990}, DOI: 10.1109\/ISCAS.1990.112431. \end{thebibliography} \end{document}
\begin{document} \title{The local metric dimension of the lexicographic product of graphs} \begin{abstract} The metric dimension is quite a well-studied graph parameter. Recently, the adjacency dimension and the local metric dimension have been introduced and studied. In this paper, we give a general formula for the local metric dimension of the lexicographic product $G \circ \mathcal{H}$ of a connected graph $G$ of order $n$ and a family $\mathcal{H}$ composed of $n$ graphs. We show that the local metric dimension of $G \circ \mathcal{H}$ can be expressed in terms of the true twin equivalence classes of $G$ and the local adjacency dimension of the graphs in $\mathcal{H}$. \end{abstract} {\it Keywords:} Local metric dimension; local adjacency dimension; lexicographic product graphs. {\it AMS Subject Classification numbers:} 05C12; 05C76 \section{Introduction} \label{sectIntro} A metric generator of a metric space $(X,d)$ is a set $S\subset X$ of points in the space with the property that every point of $X$ is uniquely determined by the distances from the elements of $S$, \textit{i.e.}, for every $x,y\in X$, there exists $z\in S$ such that $d(x,z)\ne d(y,z)$ \cite{MR0268781}. In this case we say that $z$ \emph{distinguishes} the pair $x,y$. Given a simple and connected graph $G=(V,E)$, we consider the function $d_G:V\times V\rightarrow \mathbb{N}\cup \{0\}$, where $d_G(x,y)$ is the length of a shortest path between $x$ and $y$ and $\mathbb{N}$ is the set of positive integers. Then $(V,d_G)$ is a metric space since $d_G$ satisfies $(i)$ $d_G(x,x)=0$ for all $x\in V$, $(ii)$ $d_G(x,y)=d_G(y,x)$ for all $x,y \in V$ and $(iii)$ $d_G(x,y)\le d_G(x,z)+d_G(z,y)$ for all $x,y,z\in V$. A set $S\subset V$ is said to be a \emph{metric generator} for $G$ if any pair of vertices of $G$ is distinguished by some element of $S$. A minimum cardinality metric generator is called a \emph{metric basis}, and its cardinality the \emph{metric dimension} of $G$, denoted by $\dim(G)$. The notion of metric dimension of a graph was introduced by Slater in \cite{Slater1975}, where metric generators were called \emph{locating sets}. Harary and Melter independently introduced the same concept in \cite{Harary1976}, where metric generators were called \emph{resolving sets}. Applications of this invariant to the navigation of robots in networks are discussed in \cite{Khuller1996} and applications to chemistry in \cite{Johnson1993,Johnson1998}. Several variations of metric generators, including resolving dominating sets \cite{Brigham2003}, independent resolving sets \cite{Chartrand2003}, local metric sets \cite{Okamoto2010}, strong resolving sets \cite{Sebo2004}, adjacency resolving sets \cite{JanOmo2012}, $k$-metric/adjacency generators \cite{Estrada-Moreno2014a,Estrada-Moreno2013}, simultaneous (strong) metric generators \cite{EstrGarcRamRodr2015,Ramirez-Cruz-Rodriguez-Velazquez_2014}, etc., have since been introduced and studied. A set $S$ of vertices in a connected graph $G$ is a \emph{local metric generator} for $G$ (also called local metric set for $G$ \cite{Okamoto2010}) if every two adjacent vertices of $G$ are distinguished by some vertex of $S$. A minimum local metric generator is called a \emph{local metric basis} for $G$ and its cardinality, the \emph{local metric dimension} of $G$, is denoted by $\dim_l(G)$. 
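As a concrete illustration of the definitions just given, the following is a small brute-force sketch (not part of the original text) that computes $\dim_l(G)$ directly from the definition; the adjacency-list encoding and the example graph are assumptions made only for this illustration.
\begin{verbatim}
# Brute-force local metric dimension from the definition (illustrative sketch).
from itertools import combinations

def bfs_distances(adj):
    # all-pairs shortest-path distances of a connected graph, by BFS
    dist = {}
    for s in adj:
        d = {s: 0}
        frontier = [s]
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v not in d:
                        d[v] = d[u] + 1
                        nxt.append(v)
            frontier = nxt
        dist[s] = d
    return dist

def is_local_metric_generator(S, adj, dist):
    # every pair of adjacent vertices must be distinguished by some w in S
    return all(any(dist[w][u] != dist[w][v] for w in S)
               for u in adj for v in adj[u] if u < v)

def local_metric_dimension(adj):
    dist = bfs_distances(adj)
    vertices = sorted(adj)
    for k in range(len(vertices) + 1):
        for S in combinations(vertices, k):
            if is_local_metric_generator(S, adj, dist):
                return k, S

# Example: the odd cycle C_5 is not bipartite, so one vertex is not enough.
C5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [0, 3]}
print(local_metric_dimension(C5))   # -> (2, (0, 1)), i.e. dim_l(C_5) = 2
\end{verbatim}
Replacing the restriction to adjacent pairs in \texttt{is\_local\_metric\_generator} by all pairs of vertices yields the ordinary metric dimension $\dim(G)$ instead.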
The concept of adjacency generator\footnote{Adjacency generators were called adjacency resolving sets in \cite{JanOmo2012}.} was introduced by Jannesari and Omoomi in \cite{JanOmo2012} as a tool to study the metric dimension of lexicographic product graphs. A set $S\subset V$ of vertices in a graph $G=(V,E)$ is said to be an \emph{adjacency generator} for $G$ if for every two vertices $x,y\in V-S$ there exists $s\in S$ such that $s$ is adjacent to exactly one of $x$ and $y$. A minimum cardinality adjacency generator is called an \emph{adjacency basis} of $G$, and its cardinality the \emph{adjacency dimension} of $G$, denoted by $\operatorname{\mathrm{adim}}(G)$ \cite{JanOmo2012}. The concepts of \emph{local adjacency generator}, \emph{local adjacency basis} and \emph{local adjacency dimension} are defined by analogy, and the local adjacency dimension of a graph $G$ is denoted by $\operatorname{\mathrm{adim}}_l(G)$. This concept was studied further by Fernau and Rodr\'{i}guez-Vel\'{a}zquez in \cite{RV-F-2013,MR3218546}, where they introduced the study of local adjacency generators and showed that the (local) metric dimension of the corona product of a graph of order $n$ and some non-trivial graph $H$ equals $n$ times the (local) adjacency dimension of $H$. As a consequence of this strong relation they showed that the problem of computing the local metric dimension and the (local) adjacency dimension of a graph is NP-hard. As pointed out in \cite{RV-F-2013,MR3218546}, any adjacency generator for a graph $G=(V,E)$ is also a metric generator in a suitably chosen metric space. Given a positive integer $t$, we define the distance function $d_{G,t}:V\times V\rightarrow \mathbb{N}\cup \{0\}$, where \begin{equation}\label{distinguishAdj} d_{G,t}(x,y)=\min\{d_G(x,y),t\}. \end{equation} Then any metric generator for $(V,d_{G,t})$ is a metric generator for $(V,d_{G,t+1})$ and, as a consequence, the metric dimension of $(V,d_{G,t+1})$ is less than or equal to the metric dimension of $(V,d_{G,t})$. In particular, the metric dimension of $(V,d_{G,1})$ equals $|V|-1$, the metric dimension of $(V,d_{G,2})$ equals $\operatorname{\mathrm{adim}}(G)$ and, if $G$ has diameter $D(G)$, then $d_{G,D(G)}=d_G$ and so the metric dimension of $(V,d_{G,D(G)})$ equals $\dim(G)$. Notice that when using the metric $d_{G,t}$ the concept of metric generator need not be restricted to the case of connected graphs, as for any pair of vertices $x,y$ belonging to different connected components of $G$ we can assume that $d_G(x,y)=+\infty$ and so $d_{G,t}(x,y)=t$. Notice that $S$ is an adjacency generator for $G$ if and only if $S$ is an adjacency generator for its complement $\overline{G}$. This is justified by the fact that, given an adjacency generator $S$ for $G$, for every $x,y\in V- S$ there exists $s\in S$ such that $s$ is adjacent to exactly one of $x$ and $y$, and the same property then holds for $S$ in $\overline{G}$. Thus, $\operatorname{\mathrm{adim}}(G)=\operatorname{\mathrm{adim}} (\overline{G}).$ From the definitions of the different variants of generators, we can observe: an adjacency generator is a metric generator; a metric generator is a local metric generator; a local adjacency generator is a local metric generator; and an adjacency generator is a local adjacency generator.
These facts show that the following inequalities hold for any graph $G$: \begin{enumerate}[{\rm (i)}] \item $\dim(G)\leq \operatorname{\mathrm{adim}}(G)$; \item $\dim_l(G)\leq \dim(G)$; \item $\dim_l(G)\leq \operatorname{\mathrm{adim}}_l(G)$; \item $\operatorname{\mathrm{adim}}_l(G)\leq \operatorname{\mathrm{adim}} (G)$. \end{enumerate} Moreover, if $D(G)\le 2$, then $\dim(G)= \operatorname{\mathrm{adim}}(G)$ and $\dim_l(G)=\operatorname{\mathrm{adim}}_l(G)$. The radius of a graph $G$ is denoted by $r(G)$. The following result describes situations with very small or large local adjacency dimensions. \begin{theorem}{\rm \cite{RV-F-2013}}\label{LocalAdjDimSmall-Large} Let $G$ be a non-empty graph of order $t$. The following assertions hold. \begin{enumerate}[{\rm (i)}] \item $\operatorname{\mathrm{adim}}_l(G)=1$ if and only if $G$ is a bipartite graph having only one non-trivial connected component $G^*$ and $r(G^*)\le 2$. \item $\operatorname{\mathrm{adim}}_l( G)= t-1$ if and only if $G\cong K_{t}$. \end{enumerate} \end{theorem} The remainder of the paper is structured as follows. After introducing some useful notation and terminology in Section~\ref{sectPrelim}, we extensively discuss our main results in Section~\ref{sectMain}. Then, Section~\ref{sectAdjvsK1} is devoted to show how all previous results presented in Section~\ref{sectMain} in terms of the local adjacency dimension can be expressed in terms of the local metric dimension of graphs of the form $K_1+H$. Finally, we show in Section~\ref{sectAdjProduct} that the methodology used for studying the local metric dimension can be applied to the case of the local adjacency dimension of lexicographic product graphs. In particular, we discuss when the values of both dimensions coincide. \section{Preliminary concepts}\label{sectPrelim} Throughout the paper, we will use the notation $K_n$, $K_{r,n-r}$, $C_n$, $N_n$ and $P_n$ for complete graphs, complete bipartite graphs, cycle graphs, empty graphs and path graphs of order $n$, respectively. We use the notation $u \sim v$ if $u$ and $v$ are adjacent and $G \cong H$ if $G$ and $H$ are isomorphic graphs. For a vertex $v$ of a graph $G$, $N_G(v)$ will denote the set of neighbours or \emph{open neighbourhood} of $v$ in $G$, \emph{i.e.} $N_G(v)=\{u \in V(G):\; u \sim v\}$. The \emph{closed neighbourhood} of $v$, denoted by $N_G[v]$, equals $N_G(v) \cup \{v\}$. If there is no ambiguity, we will simply write $N(v)$ or $N[v]$. Two vertices $x,y\in V(G)$ are \textit{true twins} in $G$ if $N_G[x]=N_G[y]$. The subgraph of $G$ induced by a set $S$ of vertices is denoted by $\langle S\rangle_G$. If there is no ambiguity, we will simply write $\langle S\rangle$. The length of a shortest cycle (if any) in a graph $G$ is called the girth of $G$, and it is denoted by $\mathtt{g}(G)$. Acyclic graphs are considered to have infinite girth. From now on we denote by $\mathcal{G}$ the set of graphs $H$ satisfying that for every local adjacency basis $B$, there exists $v\in V(H)$ such that $B\subseteq N_H(v)$. Notice that the only local adjacency basis of an empty graph $N_r$ is the empty set, and so $N_r\in \mathcal{G}$. Moreover, $K_1\cup K_2\in \mathcal{G}$. In fact, a non-connected graph $H\in \mathcal{G}$ if and only if $H\cong N_r$ or $H\cong N_r\cup G$, where $G$ is a connected graph in $\mathcal{G}$. We denote by $\Phi$ the family of empty graphs. Notice that $ \Phi\subset \mathcal{G}$. On the other hand, it is readily seen that no graph of radius greater than or equal to four belongs to $\mathcal{G}$. 
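Membership in $\mathcal{G}$ can likewise be tested mechanically on small graphs, by listing all local adjacency bases with respect to the truncated metric $d_{H,2}$ of (\ref{distinguishAdj}) and checking whether each of them is contained in some open neighbourhood. The sketch below is, again, only an illustration and is not part of the subsequent arguments; it assumes \texttt{networkx}, and the function names are ours.
\begin{verbatim}
# Sketch: adim_l(H) and membership in the family G defined above.
from itertools import combinations
import networkx as nx

def truncated_dist(H, t=2):
    # d_{H,t}(x,y) = min{d_H(x,y), t}; unreachable pairs get the value t
    d = dict(nx.all_pairs_shortest_path_length(H))
    V = list(H.nodes)
    return {u: {v: min(d[u].get(v, t), t) for v in V} for u in V}

def is_local_adjacency_generator(H, S, d2):
    return all(any(d2[s][x] != d2[s][y] for s in S) for x, y in H.edges)

def local_adjacency_bases(H):
    d2 = truncated_dist(H)
    V = list(H.nodes)
    for k in range(len(V) + 1):
        bases = [set(S) for S in combinations(V, k)
                 if is_local_adjacency_generator(H, set(S), d2)]
        if bases:
            return bases   # all local adjacency bases (minimum cardinality k)

def in_family_G(H):
    # H is in G iff every local adjacency basis B satisfies B <= N_H(v) for some v
    return all(any(B <= set(H[v]) for v in H.nodes)
               for B in local_adjacency_bases(H))

H = nx.cycle_graph(4)
print(len(local_adjacency_bases(H)[0]), in_family_G(H))   # 1 True
\end{verbatim}
For $H\cong C_4$ this reports $\operatorname{\mathrm{adim}}_l(C_4)=1$, in agreement with Theorem \ref{LocalAdjDimSmall-Large} (i), and confirms that $C_4\in \mathcal{G}$.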
As we will see in Lemma \ref{lemmaGirth}, if $H\in \mathcal{G}$ is a connected graph different from a tree, then $\mathtt{g}(H)\le 6$. \subsection{The lexicographic product $G \circ \mathcal{H}$} Let $G$ be a graph of order $n$, and let $\mathcal{H}=\{H_1,H_2,\ldots,H_n\}$ be an ordered family composed by $n$ graphs. The \emph{lexicographic product} of $G$ and $\mathcal{H}$ is the graph $G \circ \mathcal{H}$, such that $V(G \circ \mathcal{H})=\bigcup_{u_i \in V(G)} (\{u_i\} \times V(H_i))$ and $(u_i,v_r)(u_j,v_s) \in E(G \circ \mathcal{H})$ if and only if $u_iu_j \in E(G)$ or $i=j$ and $v_rv_s \in E(H_i)$. Figure \ref{figExampleOfLexiFamily} shows the lexicographic product of $P_3$ and the family composed by $\{P_4,K_2,P_3\}$, and the lexicographic product of $P_4$ and the family $\{H_1,H_2,H_3,H_4\}$, where $H_1 \cong H_4 \cong K_1$ and $H_2 \cong H_3 \cong K_2$. In general, we can construct the graph $G\circ\mathcal{H}$ by taking one copy of each $H_i\in\mathcal{H}$ and joining by an edge every vertex of $H_i$ with every vertex of $H_j$ for every $u_i u_j\in E(G)$. Note that $G\circ\mathcal{H}$ is connected if and only if $G$ is connected. \begin{figure} \caption{The lexicographic product graphs $P_3\circ\{P_4,K_2,P_3\}$ (left) and $P_4\circ\{H_1,H_2,H_3,H_4\}$ (right), where $H_1 \cong H_4 \cong K_1$ and $H_2 \cong H_3 \cong K_2$.}\label{figExampleOfLexiFamily} \end{figure} In particular, if $H_i \cong H$ for every $H_i \in \mathcal{H}$, then $G \circ \mathcal{H}$ is a standard lexicographic product graph, which is denoted as $G \circ H$ for simplicity. Another particular case of lexicographic product graphs is the join graph. The \emph{join graph} $G+H$\label{g join} is defined as the graph obtained from disjoint graphs $G$ and $H$ by taking one copy of $G$ and one copy of $H$ and joining by an edge each vertex of $G$ with each vertex of $H$ \cite{Harary1969,Zykov1949}. Note that $G+H\cong K_2\circ\{G,H\}$. The join operation is commutative and associative. Now, for the sake of completeness, Figure \ref{ex join} illustrates two examples of join graphs. \begin{figure} \caption{Two join graphs: $P_4+C_3 \cong K_2\circ \{P_4,C_3\}$ (left) and the complete $3$-partite graph $K_{2,2,2}$ (right).}\label{ex join} \end{figure} Moreover, complete $k$-partite graphs, $K_{p_1,p_2,...,p_k}\cong K_k \circ \{N_{p_1},N_{p_2},\dots ,N_{p_k}\}\cong N_{p_1}+N_{p_2}+\cdots +N_{p_k}$, are typical examples of join graphs. The particular case illustrated in Figure \ref{ex join} (right-hand side) is none other than the complete $3$-partite graph $K_{2,2,2}$. The relation between distances in a lexicographic product graph and those in its factors is presented in the following remark, for which it is necessary to recall (\ref{distinguishAdj}). \begin{remark}\label{remarkDistLexi} If $G$ is a connected graph and $(u_i,b)$ and $(u_j,d)$ are vertices of $G\circ \mathcal{H}$, then $$d_{G\circ \mathcal{H}}((u_i,b),(u_j,d))=\left\{\begin{array}{ll} d_G(u_i,u_j), & \mbox{if $i\ne j$,} \\ \\ d_{H_i,2}(b,d), & \mbox{if $i=j$.} \end{array} \right.\\ $$ \end{remark} We point out that the remark above was stated in \cite{Hammack2011,Imrich2000} for the case where $H_i\cong H$ for all $H_i\in \mathcal{H}$. The lexicographic product has been studied from different points of view in the literature. One of the most common lines of research focuses on finding relationships between the value of some invariant in the product and that of its factors. In this sense, we can find in the literature a large number of investigations on diverse topics.
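For concreteness, the construction just described can also be carried out mechanically. The following sketch (an illustration only; it assumes \texttt{networkx}, and the function name \texttt{lexicographic\_product\_family} is ours) builds $G\circ\mathcal{H}$ from $G$ and a family $\mathcal{H}$, encoding the vertex $(u_i,v)$ as the pair \texttt{(i, v)}, and reproduces the product $P_3\circ\{P_4,K_2,P_3\}$ of Figure \ref{figExampleOfLexiFamily}.
\begin{verbatim}
# Sketch: the lexicographic product of G with a family {H_1,...,H_n}.
import networkx as nx

def lexicographic_product_family(G, family):
    # family: dict mapping each vertex u_i of G to the graph H_i
    P = nx.Graph()
    for i, Hi in family.items():
        P.add_nodes_from((i, v) for v in Hi.nodes)
        P.add_edges_from(((i, v), (i, w)) for v, w in Hi.edges)   # copy of H_i
    for i, j in G.edges:                       # join copies along edges of G
        P.add_edges_from(((i, v), (j, w))
                         for v in family[i].nodes for w in family[j].nodes)
    return P

G = nx.path_graph(3)                                   # P_3 with vertices 0,1,2
family = {0: nx.path_graph(4), 1: nx.complete_graph(2), 2: nx.path_graph(3)}
P = lexicographic_product_family(G, family)            # P_3 o {P_4, K_2, P_3}
print(P.number_of_nodes(), nx.is_connected(P))         # 9 True
\end{verbatim}
In particular, the resulting product is connected, in agreement with the observation above that $G\circ\mathcal{H}$ is connected if and only if $G$ is connected.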
In particular, the metric dimension and related parameters have been studied in \cite{Estrada-Moreno2014b,Feng2012a, JanOmo2012, Kuziak2014, Ramirez_Estrada_Rodriguez_2015, Saputro2013}. For more information on product graphs we suggest the books \cite{Hammack2011,Imrich2000}. In order to state our main result (Theorem \ref{DimLocalLexicInTermsAdjaLoc}) we need to introduce some additional notation. Let $\mathcal{U}=\{U_1,U_2,\dots , U_k\}$ be the set of non-singleton true twin equivalence classes of a graph $G$. For the remainder of this paper we will assume that $G$ is connected and has order $n\ge 2$, and $\mathcal{H}=\{H_1,\ldots,H_n\}$. We now define the following sets and parameters: \begin{itemize} \item $T(G)=\bigcup_{j=1}^kU_j$. \item $V_E=\left\{u_i\in V(G)-T(G) :\, H_i \in \Phi \right\}$. \item $I=\{u_i\in V(G):\, H_i\in \mathcal{G}\}$. \item For any $I_j=I\cap U_j\ne \emptyset $, we can choose some $u\in I_j$ and set $I'_j=I_j-\{u\}$. We define the set $X_E=I-\bigcup_{I_j'\ne \emptyset}I_j'$. \item We say that two vertices $u_i,u_j \in X_E$ satisfy the relation $\mathcal{R}$ if and only if $u_i \sim u_j$ and $d_G(u,u_i)=d_G(u,u_j)$ for all $u\in V(G)-(V_E \cup \{u_i,u_j\})$. \item We define $\mathcal{A}$ as the family of sets $A\subseteq X_E$ such that for every pair of vertices $u_i,u_j\in X_E$ satisfying $\mathcal{R}$ there exists a vertex in $A$ that distinguishes them. \item $\varrho(G,\mathcal{H})=\displaystyle\min_{A\in \mathcal{A}}\left\{|A|\right\}.$ \end{itemize} \begin{figure} \caption{The graph $G\circ{\cal H}$, where $G$ is the right-hand graph of Figure \ref{figExampleOfLexiFamily}, $H_1\cong H_6\cong N_2$, $H_2\cong P_4$ and $H_3\cong H_4\cong H_5\cong K_2$.}\label{Ex notation Lexicografico} \end{figure} With the aim of clarifying what this notation means, we proceed to show an example where we explain the role of these parameters when constructing a local metric generator $W$ for a lexicographic product graph. Let $G$ be the right-hand graph shown in Figure \ref{figExampleOfLexiFamily} and let $\mathcal{H}$ be the family composed by the graphs $H_1\cong H_6\cong N_2$, $H_2\cong P_4$, $H_3\cong H_4\cong H_5\cong K_2$. Figure \ref{Ex notation Lexicografico} shows the graph $G\circ{\cal H}$. Consider any $H_i\notin\Phi$. Note that the restriction of any local metric basis of $G\circ{\cal H}$ to the vertices of $\langle \{u_i\} \times V(H_i) \rangle \cong H_i$ must be a local adjacency generator for $\langle \{u_i\} \times V(H_i) \rangle$, as two adjacent vertices of $\langle \{u_i\} \times V(H_i) \rangle$ are not distinguished by any vertex outside $\{u_i\}\times V(H_i)$, so we can assume that the black-coloured vertices belong to $W$. Moreover, $U_1=\{u_2,u_3\}$ and $U_2=\{u_4,u_5\}$ are the non-singleton true twin equivalence classes of $G$. Since $u_4,u_5 \in I\cap U_2$, we have that no pair of non-black-coloured vertices in $(\{u_4\}\times V(H_4))\cup(\{u_5\}\times V(H_5))$ is distinguished by any black-coloured vertex, so we add to $W$ the grey-coloured vertex corresponding to the copy of $H_4$ and, by analogy, we add to $W$ the grey-coloured vertex corresponding to the copy of $H_2$. Besides, note that the white-coloured vertices of the copies of $H_3$ and $H_5$ are only distinguished by themselves and by vertices from the copies of $H_1$ and $H_6$, so we need to add one more vertex to $W$, e.g. the grey-coloured vertex in the copy of $H_1$. Note that, according to our previous definitions, we have $V_E=\{u_1,u_6\}$ and we take $I'_1= \{u_2\}$ and $I'_2=\{u_4\}$. Thus, $X_E=\{u_1,u_3,u_5,u_6\}$.
Therefore, since $u_1\in X_E$ distinguishes the pair $u_3,u_5$, the sole pair of vertices from $X_E$ satisfying $\mathcal{R}$, we take $A=\{u_1\}$ and conclude that $\varrho(G,\mathcal{H})=1$. Notice that, $\displaystyle\sum_{i=1}^{6}\operatorname{\mathrm{adim}}_{l}(H_i)=4$, $\displaystyle\sum_{I\cap U_j\ne \emptyset}(|I\cap U_j|-1)=2$ and $\dim_l(G\circ\mathcal{H})=\displaystyle\sum_{i=1}^{6}\operatorname{\mathrm{adim}}_{l}(H_i)+ \sum_{I\cap U_j\ne \emptyset}(|I\cap U_j|-1)+\varrho(G,\mathcal{H})=7.$ \section{Main results}\label{sectMain} \begin{theorem}\label{DimLocalLexicInTermsAdjaLoc} Let $G$ be a connected graph of order $n\ge 2$, let $\{U_1,U_2,\dots , U_k\}$ be the set of non-singleton true twin equivalence classes of $G$ and let $\mathcal{H}=\{H_1,\ldots,H_n\}$ be a family of graphs. Then $$\dim_l(G\circ\mathcal{H})=\displaystyle\sum_{i=1}^{n}\operatorname{\mathrm{adim}}_{l}(H_i)+ \sum_{I\cap U_j\ne \emptyset}(|I\cap U_j|-1)+\varrho(G,\mathcal{H}).$$ \end{theorem} \begin{proof} We will first construct a local metric generator for $G\circ\mathcal{H}$. To this end, we need to introduce some notation. Let $V(G)=\{u_1,\ldots,u_n\}$ and let $S_i$ be a local adjacency basis of $H_i$, where $i\in \{1,\dots, n\}$. For any $I_j=I\cap U_j\ne \emptyset $, we choose $u\in I_j$ and set $I'_j=I_j-\{u\}$. Now, for every $u_i\in I'_j\ne \emptyset $, let $v_i\in V(H_i)$ such that $S_i\subseteq N_{H_i}(v_i)$. Finally, we consider a set $A\subseteq X_E$ achieving the minimum in the definition of $\varrho(G,\mathcal{H})$ and, for each $u_i\in A$, we choose one vertex $y_i\in V(H_i)-S_i$ such that $S_i\subseteq N_{H_i}(y_i)$. We claim that the set $$S=\left( \bigcup_{S_i\ne \emptyset} (\{u_i\}\times S_i) \right) \cup \left( \bigcup_{I'_j\ne \emptyset }\{(u_i,v_i):\; u_i\in I'_j \}\right)\cup \left( \bigcup_{u_i\in A} \{(u_i,y_i)\}\right)$$ is a local metric generator for $G\circ\mathcal{H}$. We differentiate the following four cases for two adjacent vertices $(u_i,v),$ $(u_j,w)\in V(G\circ\mathcal{H})-S$.\\ \\Case 1. $i=j$. In this case $v\sim w$. Since $S_i$ is a local adjacency basis of $H_i$, there exists $x\in S_i$ such that $d_{H_i,2}(x,v)\ne d_{H_i,2}(x,w)$ and so for $(u_i,x)\in \{u_i\}\times S_i\subset S$ we have $d_{G\circ\mathcal{H}}((u_i,x),(u_i,v))= d_{H_i,2}(x,v)\ne d_{H_i,2}(x,w)= d_{G\circ\mathcal{H}}((u_i,x),(u_i,w))$.\\ \\Case 2. $i\ne j$, $u_i,u_j\in U_l$ and $u_i\not \in I_l $. For any $y\in S_i-N_{H_i}(v)$ we have that $(u_i,y)\in \{u_i\}\times S_i\subseteq S$ and $d_{G\circ\mathcal{H}}((u_i,y),(u_i,v)) =2\ne 1= d_{G\circ\mathcal{H}}((u_i,y),(u_j,w))$. \\ \\Case 3. $i\ne j$, $u_i,u_j\in U_l$ and $u_i , u_j \in I_l $. If $v=v_i$ and $w=v_j$, then $(u_i,v_i)\in S$ or $(u_j,v_j)\in S$. If $v\ne v_i$ or $w\ne v_j$ (say $v\ne v_i$) then either $S_i \subseteq N_{H_i}(v)$, in which case $d_{G\circ\mathcal{H}}((u_i,v_i),(u_i,v)) = 2\ne 1= d_{G\circ\mathcal{H}}((u_i,v_i),(u_j,w))$, or there exists $y\in S_i-N_{H_i}(v)$ such that $(u_i,y)\in \{ u_i\}\times S_i \subseteq S$ and $d_{G\circ\mathcal{H}}((u_i,y),(u_i,v)) = 2\ne 1= d_{G\circ\mathcal{H}}((u_i,y),(u_j,w))$.\\ \\Case 4. $i\ne j$ and $N_G[u_i]\ne N_G[u_j]$. Notice that, in this case, $u_i\sim u_j$. If $u_i\not \in I$, then $S_i\ne \emptyset$ and there exists $y\in S_i-N_{H_i}(v)$ such that $(u_i,y)\in \{ u_i\}\times S_i \subseteq S$ and $d_{G\circ\mathcal{H}}((u_i,y),(u_i,v)) = 2\ne 1= d_{G\circ\mathcal{H}}((u_i,y),(u_j,w))$. Now, assume that $u_i,u_j \in I$. 
If $u_i\in I'_l$ or $u_j\in I'_l$ for some $l$ (say $u_i\in I'_l$), then $d_{G\circ\mathcal{H}}((u_i,v_i),(u_i,v))=2\ne 1=d_{G\circ\mathcal{H}}((u_i,v_i),(u_j,w)) $ or there exists $y\in S_i$ such that $d_{G\circ\mathcal{H}}((u_i,y),(u_i,v))=2\ne 1=d_{G\circ\mathcal{H}}((u_i,y),(u_j,w)) $. Finally, if $u_i,u_j\notin \bigcup I'_l$, then by the construction of $S$ there exists $u_l\in A\cup (V(G)-X_E)$ such that $d_G(u_l,u_i)\ne d_G(u_l,u_j)$. Since $u_l\in \{x: (x,y)\in S\}$, there exists $y\in V(H_l)$ such that $d_{G\circ\mathcal{H}}((u_l,y),(u_i,v))\ne d_{G\circ\mathcal{H}}((u_l,y),(u_j,w)) $. In conclusion, $S$ is a local metric generator for $G\circ\mathcal{H}$ and, as a result, $$\dim_l(G\circ\mathcal{H})\le |S|=\displaystyle\sum_{i=1}^{n}\operatorname{\mathrm{adim}}_{l}(H_i)+ \sum_{I_j\ne \emptyset}(|I_j|-1)+\varrho(G,\mathcal{H}).$$ It remains to show that $\displaystyle\dim_l(G\circ\mathcal{H})\ge \displaystyle\sum_{i=1}^{n}\operatorname{\mathrm{adim}}_{l}(H_i)+ \sum_{I_j\ne \emptyset}(|I_j|-1)+\varrho(G,\mathcal{H}).$ To this end, we take a local metric basis $W$ of $G\circ\mathcal{H}$ and for every $u_i\in V(G)$ we define the set $W_i=\{y: (u_i,y)\in W\}$. As for any $u_i\in V(G)$ and two adjacent vertices $v,w\in V(H_i)$, no vertex outside $\{u_i\}\times W_i$ distinguishes $(u_i,v)$ and $(u_i,w)$, we can conclude that $W_i$ is a local adjacency generator for $H_i$. Hence, \begin{equation}\label{condAll} |W_i|\ge \operatorname{\mathrm{adim}}_l(H_i),\text{ for all }i\in \{1,\dots, n\}. \end{equation} Now suppose, for the purpose of contradiction, that there exist $u_i,u_j\in I\cap U_l$ such that $|W_i|= \operatorname{\mathrm{adim}}_l(H_i)$ and $|W_j|= \operatorname{\mathrm{adim}}_l(H_j)$. In such a case, there exist $v_i\in V(H_i)-W_i$ and $v_j\in V(H_j)-W_j$ such that $W_i\subseteq N_{H_i}(v_i)$ and $W_j\subseteq N_{H_j}(v_j)$, so that no vertex of $W$ distinguishes the adjacent vertices $(u_i,v_i)$ and $(u_j,v_j)$ of $G\circ\mathcal{H}$, which is a contradiction. Hence, if $|I\cap U_l|\ge 2$, then $|\{u_i\in I\cap U_l:\; |W_i|\ge \operatorname{\mathrm{adim}}_l(H_i)+1\}|\ge |I\cap U_l|-1$ and, as a consequence, \begin{equation}\label{condTT} \sum_{u_i\in I\cap T(G)}|W_i|\ge \sum_{u_i\in I\cap T(G)}\operatorname{\mathrm{adim}}_{l}(H_i)+ \sum_{I\cap U_j\ne \emptyset}(|I\cap U_j|-1). \end{equation} On the other hand, assume that $\varrho(G,\mathcal{H})\neq 0$. We claim that \begin{equation}\label{condVarrho} \sum_{u_j\in X_E}|W_j|\geq \sum_{u_j\in X_E}\operatorname{\mathrm{adim}}_l(H_j)+\varrho(G,\mathcal{H}). \end{equation} To see this, we will prove that for any pair of vertices $u_i,u_j$ satisfying $\mathcal{R}$ there exists $u_r\in X_E$ such that $|W_r|\geq \operatorname{\mathrm{adim}}_l(H_r)+1$. If $|W_i|=\operatorname{\mathrm{adim}}_l(H_i)+1$ or $|W_j|=\operatorname{\mathrm{adim}}_l(H_j)+1$, then we are done. Suppose that $|W_i|=\operatorname{\mathrm{adim}}_l(H_i)$ and $|W_j|=\operatorname{\mathrm{adim}}_l(H_j)$. Since $W_i$ and $W_j$ are local adjacency bases of $H_i$ and $H_j$, respectively, there exist $v\in V(H_i)$ and $w\in V(H_j)$ such that $\{u_i\}\times W_i\subseteq N_{\langle \{u_i\}\times V(H_i) \rangle}(u_i,v)$ and $\{u_j\}\times W_j\subseteq N_{\langle \{u_j\}\times V(H_j) \rangle}(u_j,w)$. Thus, there exists $(u_r,y)\in \{u_r\}\times W_r$, $r\ne i,j$, which distinguishes the pair $(u_i,v), (u_j,w)$, and so $d_G(u_r,u_i)\ne d_G(u_r,u_j)$. Hence, since $u_i,u_j$ satisfy $\mathcal{R}$, we can claim that $u_r\in V_E\subseteq X_E$ and so $|W_r|>0=\operatorname{\mathrm{adim}}_l(H_r)$. In consequence, \eqref{condVarrho} holds.
Therefore, \eqref{condAll}, \eqref{condTT} and \eqref{condVarrho} lead to $$\displaystyle\dim_l(G\circ\mathcal{H}) =\sum_{i=1}^{n}|W_i|\ge \displaystyle\sum_{i=1}^{n}\operatorname{\mathrm{adim}}_{l}(H_i)+ \sum_{I\cap U_j\ne \emptyset}(|I\cap U_j|-1)+\varrho(G,\mathcal{H}), $$ as required. \end{proof} From now on we proceed to obtain some particular cases of this main result. To begin with, we consider the case $\varrho(G,\mathcal{H})=0$. \begin{corollary}\label{Rocero} Let $G$ be a connected graph of order $n\ge 2$ and let $\mathcal{H}=\{H_1,\ldots,H_n\}$ be a family of graphs. If for any pair of adjacent vertices $u_i,u_j\in V(G)$, not belonging to the same true twin equivalence class, $H_i\notin \mathcal{G}$ or $H_j\notin \mathcal{G}$, or there exists $u_l\in V(G)$ such that $H_l\notin \Phi$ and $d_G(u_l,u_i)\ne d_G(u_l,u_j)$, then $$\dim_l(G\circ\mathcal{H})=\displaystyle\sum_{i=1}^{n}\operatorname{\mathrm{adim}}_{l}(H_i)+ \sum_{I\cap U_j\ne \emptyset}(|I\cap U_j|-1).$$ \end{corollary} In particular, if $\mathcal{H}\cap \Phi =\emptyset$, then $\varrho(G,\mathcal{H}) =0$, and so we can state the following result, which is a particular case of Corollary \ref{Rocero}. \begin{remark} For any connected graph $G$ of order $n\ge 2$ and any family $\mathcal{H}=\{H_1,\ldots,H_n\}$ composed by non-empty graphs, $$\dim_l(G\circ\mathcal{H})=\displaystyle\sum_{i=1}^{n}\operatorname{\mathrm{adim}}_{l}(H_i)+ \sum_{I\cap U_j\ne \emptyset}(|I\cap U_j|-1).$$ \end{remark} If $G\cong K_n$, then $\sum_{I\cap U_j\ne \emptyset}(|I\cap U_j|-1)=\max\{0,|I|-1\}$, $|X_E|\in\{0,1\}$, which implies that $\varrho(G,\mathcal{H})=0$, and so Theorem \ref{DimLocalLexicInTermsAdjaLoc} leads to the following. \begin{corollary} For any integer $n \ge 2$ and any family $\mathcal{H}=\{H_1,\ldots,H_n\}$ of graphs, $$\dim_l(K_n\circ\mathcal{H})=\displaystyle\sum_{i=1}^{n}\operatorname{\mathrm{adim}}_{l}(H_i)+\max\{0,|I|-1\}.$$ Furthermore, the following assertions hold for a graph $H$. \begin{itemize} \item If $H\in \mathcal{G}$, then $\dim_l(K_n\circ H)=n\cdot \operatorname{\mathrm{adim}}_{l}(H)+n-1.$ \item If $H\notin \mathcal{G}$, then $\dim_l(K_n\circ H)=n\cdot \operatorname{\mathrm{adim}}_{l}(H).$ \end{itemize} \end{corollary} Notice that, in the general case, $\sum_{I\cap U_j\ne \emptyset}(|I\cap U_j|-1)=0$ if and only if each true twin equivalence class of $G$ contains at most one vertex $u_i$ such that $H_i\in \mathcal{G}$. Thus, we can state the following corollary. \begin{corollary} Let $G$ be a connected graph of order $n\ge 2$ and let $\mathcal{H}=\{H_1,\ldots,H_n\}$ be a family of graphs. Then $\dim_l(G\circ\mathcal{H})=\displaystyle\sum_{i=1}^{n}\operatorname{\mathrm{adim}}_{l}(H_i)$ if and only if for every two adjacent vertices $u_i,u_j\in I$, not belonging to the same true twin equivalence class, there exists $u\in V(G)-(V_E\cup \{u_i,u_j\})$ such that $d_G(u,u_i)\ne d_G(u,u_j)$ and each true twin equivalence class of $G$ contains at most one vertex $u_i$ such that $H_i\in \mathcal{G}$. \end{corollary} A particular case of the result above is stated in the next remark. \begin{remark} Let $G$ be a connected bipartite graph of order $n\ge 2$ and let $\mathcal{H}=\{H_1,\ldots,H_n\}$ be a family of graphs. 
If $\mathcal{H}\not\subseteq \mathcal{G}$, then $$\dim_l(G\circ\mathcal{H})=\displaystyle\sum_{i=1}^{n}\operatorname{\mathrm{adim}}_{l}(H_i).$$ \end{remark} \begin{corollary} Let $G$ be a connected bipartite graph of order $n$, let $H$ be a non-empty graph, and let $\mathcal{H}$ be a family composed by $n$ graphs. If $ \mathcal{H}- \Phi=\{H\}$, then \[\dim_l(G\circ\mathcal{H})=\left\{ \begin{array}{ll} \operatorname{\mathrm{adim}}_l(H)+1,& \text{if $H\in \mathcal{G}$};\\ \\ \operatorname{\mathrm{adim}}_l(H), & \text{otherwise.} \end{array} \right. \] \end{corollary} \begin{proof} If $G\cong K_2$, then $\varrho(G,\mathcal{H}) =0$, $\displaystyle\sum_{I\cap U_j\ne \emptyset}(|I\cap U_j|-1)=1$ whenever $H\in \mathcal{G}$, and $\displaystyle\sum_{I\cap U_j\ne \emptyset}(|I\cap U_j|-1)=0$ whenever $H\not \in \mathcal{G}$. On the other hand, if $G\not \cong K_2$, then $\displaystyle\sum_{I\cap U_j\ne \emptyset}(|I\cap U_j|-1)=0$, $\varrho(G,\mathcal{H}) =1$ whenever $H\in \mathcal{G}$, and $\varrho(G,\mathcal{H}) =0$ whenever $H\not \in \mathcal{G}$. Since in any case $ \displaystyle\sum_{i=1}^{n}\operatorname{\mathrm{adim}}_{l}(H_i)=\operatorname{\mathrm{adim}}_{l}(H)$, the result follows from Theorem \ref{DimLocalLexicInTermsAdjaLoc}. \end{proof} Our next result concerns the case of a family $\mathcal{H}$ composed by empty graphs. \begin{remark}\label{DimLocalLex=dimlocalG} For any connected graph $G$ of order $n\ge 2$ and any family $\mathcal{H}$ composed by $n$ graphs, $$\dim_{l}(G\circ\mathcal{H})\ge\dim_l(G).$$ In particular, if $\mathcal{H}\subset \Phi$, then $$\dim_{l}(G\circ\mathcal{H})=\dim_l(G).$$ \end{remark} \begin{proof} Let $W$ be a local metric basis of $G\circ\mathcal{H}$ and let $W_G=\{u:\;(u,v)\in W\}$ be the projection of $W$ onto $G$. If there exist two adjacent vertices $u_i,u_j\in V(G)-W_G$ not distinguished by any vertex in $W_G$, then no pair of vertices $(u_i,v)\in \{u_i\}\times V(H_i)$, $(u_j,w)\in \{u_j\}\times V(H_j)$ is distinguished by elements of $W$, which is a contradiction. Thus, $W_G$ is a local metric generator for $G$, so $\dim_l(G\circ\mathcal{H})=|W|\ge|W_G|\ge\dim_l(G)$. Now, we assume that $\mathcal{H}\subset \Phi$ and proceed to show that $\dim_{l}(G\circ\mathcal{H})\le\dim_l(G).$ Let $A$ be a local metric basis of $G$. For each $H_l\in \mathcal{H}$ we select one vertex $y_l$ and we define the set $A'=\{(u_l,y_l): u_l\in A\}$. Let $(u_i,v)$ and $(u_j,w)$ be two adjacent vertices of $G\circ\mathcal{H}$. Since $u_i\sim u_j$, there exists $u_l\in A$ such that $d_G(u_i,u_l)\ne d_G(u_j,u_l)$. Now, if $l\ne i,j$, then we have $d_{G\circ\mathcal{H}}((u_l,y_l),(u_i,v))=d_G(u_i,u_l) \ne d_G(u_j,u_l)=d_{G\circ\mathcal{H}}((u_l,y_l),(u_j,w))$. If $l=i$, then $d_{G\circ\mathcal{H}}((u_l,y_l),(u_i,v))=2 \ne 1=d_{G\circ\mathcal{H}}((u_l,y_l),(u_j,w))$. Since the case $l=j$ is analogous to the previous one, we can conclude that $A'$ is a local metric generator for $G\circ\mathcal{H}$ and, as a consequence, $\dim_{l}(G\circ\mathcal{H})\le \dim_l(G)$. Therefore, the proof is complete. \end{proof} In general, the converse of the second statement in Remark \ref{DimLocalLex=dimlocalG} does not hold. For instance, we take $G$ as the graph shown in Figure \ref{figExampleNoIfandOnlyif-DimLocalLex=dimlocalG}, $H_1\cong H_5\cong K_2$ and $H_2, H_3, H_4\in \Phi$.
In this case, we have that, for instance, $\{u_1,u_5\}$ is a local metric basis of $G$, whereas for any $y\in V(H_1)$ and $y'\in V(H_5)$, the set $\{(u_1,y),(u_5,y')\}$ is a local metric basis of $G\circ {\cal H}$, so $\dim_{l}(G\circ\mathcal{H})=\dim_l(G)=2$. \begin{figure} \caption{The set $\{u_1, u_5\}$ is a local metric basis of $G$.}\label{figExampleNoIfandOnlyif-DimLocalLex=dimlocalG} \end{figure} As a direct consequence of Theorems \ref{LocalAdjDimSmall-Large} and \ref{DimLocalLexicInTermsAdjaLoc} we deduce the following two results. \begin{theorem} Let $G$ be a connected graph of order $n\ge 2$ and let $\mathcal{H}=\{H_1,\ldots,H_n\}$ be a family composed by non-empty graphs. Then $\dim_l(G\circ\mathcal{H})=n$ if and only if each true twin equivalence class of $G$ contains at most one vertex $u_i$ such that $H_i\in \mathcal{G}$ and each $H_i\in \mathcal{H}$ is a bipartite graph having only one non-trivial connected component $H_i^*$ and $r(H_i^*)\le 2$. \end{theorem} \begin{theorem} Let $G$ be a connected graph of order $n\ge 2$ with no true twins and let $\mathcal{H}=\{H_1,\ldots,H_n\}$ be a family composed by non-empty graphs of order $n_i$. Then $\dim_l(G\circ\mathcal{H})=\displaystyle \sum_{i=1}^nn_i-n$ if and only if $H_i\cong K_{n_i}$, for all $H_i\in \mathcal{H}$. \end{theorem} \section{The local adjacency dimension of ${H}$ versus the local metric dimension of $K_1+H$}\label{sectAdjvsK1} From now on we denote by $\mathcal{G}'$ the set of graphs $H$ satisfying that there exists a local metric basis of $K_1+H$ which contains the vertex of $K_1$. \begin{proposition}\label{EquivDimAdjLocaDimLocK_1+H} Let $H$ be a graph. Then $H\in \mathcal{G}' $ if and only if $H\in \mathcal{G} $. \end{proposition} \begin{proof} Let $H\in \mathcal{G}'$ and let $B$ be a local metric basis of $\langle u\rangle+H$ such that $u\in B$. Since $u$ does not distinguish any pair of vertices of $H$, $B-\{u\}$ is a local adjacency generator for $H$, and so $\dim_l(\langle u\rangle+H)-1\ge \operatorname{\mathrm{adim}}_l(H)$. Now, if there exists a local adjacency basis $A$ of $H$ such that $A\not \subseteq N_H(v)$ for all $v\in V(H)$, then $A$ is a local metric basis of $\langle u\rangle+H$ and so $\dim_l(\langle u\rangle+H)=\operatorname{\mathrm{adim}}_l(H)$, which is a contradiction. Therefore, $H\in \mathcal{G}$. Now, let $H\in \mathcal{G} $. Suppose that there exists a local metric basis $W$ of $\langle u\rangle+H$ such that $u\not \in W$. In such a case, for every vertex $x\in V(H)$ there exists $y\in W$ such that $y\not \in N_H(x)$, which implies that $W$ is not a local adjacency basis of $H$, as $H\in \mathcal{G} $. Thus, since $W$ is a local adjacency generator for $H$, we conclude that $\dim_l(\langle u\rangle+H)=|W| \ge \operatorname{\mathrm{adim}}_l(H)+1$. Therefore, for any local adjacency basis $A$ of $H$, the set $A\cup \{u\}$ is a local metric basis of $\langle u\rangle+H$, and so $H\in \mathcal{G}'$. \end{proof} \begin{theorem}{\rm \cite{RV-F-2013}} \label{RelacAdimLocal-LocalDimK1+H} Let $H$ be a non-empty graph. The following assertions hold.
\begin{enumerate}[{\rm (i)}] \item If $H\not \in \mathcal{G}'$, then $\operatorname{\mathrm{adim}}_l( H)= \dim_l(K_1+H).$ \item If $H \in \mathcal{G}'$, then $\operatorname{\mathrm{adim}}_l( H)= \dim_l(K_1+H)-1.$ \item If $H$ has radius $r(H)\ge 4$, then $\operatorname{\mathrm{adim}}_l( H)= \dim_l(K_1+H).$ \end{enumerate} \end{theorem} As the following result shows, we can express all our previous results in terms of the local adjacency dimension of the graphs $K_1+H_i$, where $H_i\in \mathcal{H}$, i.e., Theorem \ref{DimLocalLexicInTermsK_1+H} is analogous to Theorem \ref{DimLocalLexicInTermsAdjaLoc}. \begin{theorem}\label{DimLocalLexicInTermsK_1+H} Let $G$ be a connected graph of order $n\ge 2$, and $\mathcal{H}=\{H_1,\ldots,H_n\}$ a family of graphs. Then $$\dim_l(G\circ\mathcal{H})=\displaystyle\sum_{i=1}^{n}\dim_{l}(K_1+H_i)- \tau+\varrho(G,\mathcal{H}),$$ where $\tau$ is the number of non-singleton true twin equivalence classes of $G$ having at least one vertex $u_i$ such that $H_i\in \mathcal{G}' $ . \end{theorem} \begin{proof} Notice that, by Proposition \ref{EquivDimAdjLocaDimLocK_1+H}, the parameter $\varrho(G,\mathcal{H})$ can be redefined in terms of $\mathcal{G}'$. The result immediately follows from Proposition \ref{EquivDimAdjLocaDimLocK_1+H} and Theorems \ref{DimLocalLexicInTermsAdjaLoc} and \ref{RelacAdimLocal-LocalDimK1+H}. \end{proof} \begin{lemma}\label{lemmaGirth} Let $H$ be a connected graph different from a tree. If $H\in \mathcal{G}$, then $\mathtt{g}(H)\le 6$. \end{lemma} \begin{proof} Let $A$ be local adjacency basis of $H$. Since $H\in \mathcal{G}$, we consider $v$ as the vertex of $H$ such that $A\subseteq N_H(v)$. Let $N_i(v)=\{u\in V(H):\; d_H(v,u)=i\}$. Since $A\subseteq N_1(v)$, we have that $N_3(v)$ is an independent set and $N_i(v)=\emptyset$, for all $i\ge 4$. Therefore, $\mathtt{g}(H)\le 6$. \end{proof} By Proposition \ref{EquivDimAdjLocaDimLocK_1+H}, Theorem \ref{DimLocalLexicInTermsK_1+H} and Lemma \ref{lemmaGirth} we can derive the following consequence of Theorem \ref{DimLocalLexicInTermsK_1+H} (or equivalently, Theorem \ref{DimLocalLexicInTermsAdjaLoc}). \begin{corollary}\label{CoroFirstInequalityTight} Let $G$ be a connected graph of order $n\ge 2$, and $\mathcal{H}=\{H_1,\ldots,H_n\}$ a family composed by connected graphs. If each $H_i\in \mathcal{H}$ has radius $r(H_i)\ge 4$, or $H_i$ is not a tree and it has girth $\mathtt{g}(H_i)\ge 7$, then $$\dim_l(G\circ\mathcal{H})=\displaystyle\sum_{i=1}^{n}\dim_{l}(K_1+H_i)=\sum_{i=1}^{n}\operatorname{\mathrm{adim}}_{l}(H_i).$$ \end{corollary} \begin{proposition} {\rm \cite{RV-F-2013}} \label{DimAdLocCycle} For any integer $n\ge 4$, $\operatorname{\mathrm{adim}}_l( C_n)= \left\lceil\frac{n}{4}\right\rceil$. \end{proposition} From Corollary \ref{CoroFirstInequalityTight} and Proposition \ref{DimAdLocCycle} we deduce the following result. \begin{proposition} Let $G$ be a connected graph of order $t\ge 2$, and $\mathcal{H}=\{C_{n_1},\ldots,C_{n_t}\}$ a family composed by cycles of order at least $7$. Then $$\dim_l(G\circ\mathcal{H})=\displaystyle\sum_{i=1}^{t} \left\lceil\frac{n_i}{4}\right\rceil.$$ \end{proposition} \section{On the local adjacency dimension of lexicogra\-phic product graphs}\label{sectAdjProduct} By a simple transformation of Theorem \ref{DimLocalLexicInTermsAdjaLoc} we obtain an analogous result on the local adjacency dimension of lexicographic product graphs, which we will state without proof. To this end, we consider again some of our previous notation. 
As above, let $\{U_1,U_2,\dots , U_k\}$ be the set of non-singleton true twin equivalence classes of a connected graph $G$ of order $n\ge 2$, and let $\mathcal{H}=\{H_1,\ldots,H_n\}$ be a family of graphs. Recall that $V_E=\left\{u_i\in V(G)-T(G) :\, H_i \in \Phi \right\}$, $I=\{u_i\in V(G):\, H_i\in \mathcal{G}\}$ and, for any $I_j=I\cap U_j\ne \emptyset $, we can choose some $u\in I_j$ and set $I'_j=I_j-\{u\}$. Moreover, recall that $X_E=I-\bigcup_{I_j'\ne \emptyset}I_j'$. Now, we say that two vertices $u_i,u_j \in X_E$ satisfy the relation $\mathcal{R}'$ if and only if $u_i \sim u_j$ and $d_{G,2}(u,u_i)=d_{G,2}(u,u_j)$ for all $u\in V(G)-(V_E \cup \{u_i,u_j\})$. We define $\mathcal{A}'$ as the family of sets $A\subseteq X_E$ such that for every pair of vertices $u_i,u_j\in X_E$ satisfying $\mathcal{R}'$ there exists a vertex in $A$ that distinguishes them. Finally, we define $\varrho'(G,\mathcal{H})=\displaystyle\min_{A\in \mathcal{A}'}\left\{|A|\right\}.$ \begin{theorem}\label{AdjDimLocalLexicInTermsAdjaLoc} Let $G$ be a connected graph of order $n\ge 2$, let $\{U_1,U_2,\dots , U_k\}$ be the set of non-singleton true twin equivalence classes of $G$ and let $\mathcal{H}=\{H_1,\ldots,H_n\}$ be a family of graphs. Then $$\operatorname{\mathrm{adim}}_l(G\circ\mathcal{H})=\displaystyle\sum_{i=1}^{n}\operatorname{\mathrm{adim}}_{l}(H_i)+ \sum_{I\cap U_j\ne \emptyset}(|I\cap U_j|-1)+\varrho'(G,\mathcal{H}).$$ \end{theorem} Let $G\cong P_4$ where $V(P_4)=\{u_1,u_2,u_3,u_4\}$ and $u_i\sim u_{i+1}$, for $i\in \{1,2,3\}$. If $H_1\cong H_2\cong H_4 \cong P_3$ and $H_3\cong N_3$, then $\dim_l(G\circ\mathcal{H})=3 <4=\operatorname{\mathrm{adim}}_{l}(G\circ\mathcal{H})$. Notice that $\varrho(G,\mathcal{H})=0$ and $\varrho'(G,\mathcal{H})=1$. However, if $ H_2\cong H_3 \cong P_3$ and $H_1\cong H_4\cong N_3$, then $\varrho(G,\mathcal{H})=\varrho'(G,\mathcal{H})=1$ and $\dim_l(G\circ\mathcal{H})=3 =\operatorname{\mathrm{adim}}_{l}(G\circ\mathcal{H})$. We already know that for any graph $G$ of diameter less than or equal to two, $\dim_l(G)=\operatorname{\mathrm{adim}}_l(G)$. However, the previous example shows that the above mentioned equality is not restricted to graphs of diameter at most two, as $D(G\circ\mathcal{H})=D(P_4)=3$. Notice that $\varrho'(G,\mathcal{H})\ge\varrho(G,\mathcal{H})$, which is a direct consequence of Theorems~\ref{DimLocalLexicInTermsAdjaLoc} and~\ref{AdjDimLocalLexicInTermsAdjaLoc}, as well as the fact that $\operatorname{\mathrm{adim}}_l(G)\ge\dim_l(G)$ for any graph $G$. The next result corresponds to the case $\varrho(G,\mathcal{H})=\varrho'(G,\mathcal{H})$. \begin{theorem}\label{FangoIguales} Let $G$ be a connected graph of order $n\ge 2$, and $\mathcal{H}=\{H_1,\ldots,H_n\}$ a family of graphs. Then $\dim_l(G\circ\mathcal{H})=\operatorname{\mathrm{adim}}_l(G\circ\mathcal{H})$ if and only if $\varrho(G,\mathcal{H})=\varrho'(G,\mathcal{H})$. \end{theorem} We now characterize the case $\varrho(G,\mathcal{H})=\varrho'(G,\mathcal{H})=0$. The symmetric difference of two sets $U$ and $W$ will be denoted by $U \triangledown W$. \begin{theorem}\label{lexiSame_local-Metric_Adj_dim} Let $G$ be a connected graph of order $n\ge 2$ and let $\mathcal{H}=\{H_1,\ldots,H_n\}$ be a family of graphs. Then the following assertions are equivalent.
\begin{enumerate}[{\rm (i)}] \item $\dim_l(G\circ\mathcal{H})=\operatorname{\mathrm{adim}}_{l}(G\circ\mathcal{H})=\displaystyle\sum_{i=1}^{n}\operatorname{\mathrm{adim}}_{l}(H_i)+ \sum_{I\cap U_j\ne \emptyset}(|I\cap U_j|-1).$ \item For any pair of adjacent vertices $u_i,u_j\in V(G)$, not belonging to the same true twin equivalence class, $H_i\notin \mathcal{G}$ or $H_j\notin \mathcal{G}$, or there exists $u_l\in N_G(u_i)\triangledown N_G(u_j)$ where $H_l$ is not empty. \end{enumerate} \end{theorem} \begin{proof} By Theorems \ref{DimLocalLexicInTermsAdjaLoc}, \ref{AdjDimLocalLexicInTermsAdjaLoc} and \ref{FangoIguales}, we only need to show that $\varrho'(G,\mathcal{H})=0$ if and only if (ii) holds. \noindent $((i)\Rightarrow (ii))$ If $\varrho'(G,\mathcal{H})=0$, then for every two adjacent vertices $u_i,u_j\in I$, not belonging to the same true twin equivalence class, there exists $u_l\in V(G)-(V_E\cup \{u_i,u_j\})$ such that $d_{G,2}(u_l,u_i)\ne d_{G,2}(u_l,u_j)$, which implies that $u_l\in N_G(u_i)\triangledown N_G(u_j)$ and $H_l$ is not empty. Now, if $u_i,u_j\not \in I$, then $H_i\notin \mathcal{G}$ or $H_j\notin \mathcal{G}$.\\ \noindent $((ii)\Rightarrow (i))$ If for any pair of adjacent vertices $u_i,u_j\in V(G)$, not belonging to the same true twin equivalence class, $H_i\notin \mathcal{G}$ or $H_j\notin \mathcal{G}$, or there exists $u_l\in N_G(u_i)\triangledown N_G(u_j)$ where $H_l$ is not empty, then no pair of adjacent vertices satisfy $\mathcal{R}'$ and $V(G)-X_E$ is a local adjacency generator for $G$, which implies that $\varrho'(G,\mathcal{H})=0$. \end{proof} \end{document}
\begin{document} \title{On higher genus Welschinger invariants of del Pezzo surfaces} \author{Eugenii Shustin} \date{} \maketitle \begin{abstract} The Welschinger invariants of real rational algebraic surfaces count real rational curves which represent a given divisor class and pass through a generic conjugation-invariant configuration of points. No invariants counting real curves of positive genera are known in general. We indicate particular situations when Welschinger-type invariants counting real curves of positive genera can be defined. We also prove the positivity and give asymptotic estimates for such Welschinger-type invariants for several del Pezzo surfaces of degree $\ge2$ and suitable real nef and big divisor classes. In particular, this yields the existence of real curves of given genus and of given divisor class passing through any appropriate configuration of real points on the given surface. \end{abstract} \section{Introduction} Welschinger invariants serve as genus zero open Gromov-Witten invariants. For real rational symplectic manifolds \cite{Welschinger:2003,Welschinger:2005}, they count real rational pseudo-holomorphic curves, realizing a given homology class, passing through a generic conjugation-invariant configuration of points, and equipped with weights $\pm1$. In the case of real del Pezzo surfaces, Welschinger invariants count real algebraic rational curves. A more general approach used by J. Solomon allowed him to also define invariants that count real curves of positive genera with fixed complex and real structure of the normalization \cite[Theorem 1.3]{Solomon:2006}. However, so far no general invariant way to count real curves of positive genera (without fixing their complex and real structure) has been found. In particular, it follows from \cite[Theorem 3.1]{Itenberg_Kharlamov_Shustin:2003} that if we do not fix the complex and real structure of the normalization, then even the count of real plane elliptic curves of any degree $\ge4$ equipped with Welschinger signs is {\it not invariant} of the choice of the point constraints. The main goal of this note is to indicate situations in which the ``bad" bifurcation of type \cite[Theorem 3.1]{Itenberg_Kharlamov_Shustin:2003} does not occur and in which Welschinger-type invariants of positive genera can be defined.
So, in Section \ref{pg} we introduce higher genus invariants of real del Pezzo surfaces with a disconnected real point set and prove that they indeed do not depend on the choice of point constraints and on variation of the surface. In Section \ref{sec2} we compute new invariants in several examples and exhibit a series of invariants, which are positive and are asymptotically comparable with Gromov-Witten invariants. In particular, this yields the existence of real curves of given genus and of given divisor class passing through any appropriate configuration of real points on the given surface. It is worth mentioning that \cite[Theorem 1]{Itenberg_Kharlamov_Shustin:2009} states that the count of tropical curves of any genus with appropriate tropical Welschinger signs is invariant of the choice of tropical point constraints for any toric surface. The reason why the ``bad" bifurcation does not appear in the tropical limit is discussed in \cite{Mikhalkin:2011,Mikhalkin:2011a}. In our consideration we intensively use techniques of \cite{Itenberg_Kharlamov_Shustin:2012} and \cite{Itenberg_Kharlamov_Shustin:2013}; for the reader's convenience, in Appendices 1 and 2, we present needed statements from these works in the form applicable to curves of arbitrary genus. \section{Invariant count of real curves of positive genera}\label{pg} Let $X$ be a real del Pezzo surface with a nonempty real point set ${\mathbb R} X$. Denote by ${\operatorname{Pic}}^{\mathbb R}(X)\subset{\operatorname{Pic}}(X)$ the subgroup of real divisor classes. For any connected component $G\subset{\mathbb R} X$, one can define a homomorphism ${\operatorname{bh}}_G:{\operatorname{Pic}}^{\mathbb R}(X)\to H_1(G;{\mathbb Z}/2)$ (cf. \cite{Borel_Haefliger:1961}), which sends an effective divisor class $D\in{\operatorname{Pic}}^{\mathbb R}(X)$ to the class $[{\mathbb R} C\cap G]\in H_1(G;{\mathbb Z}/2)$, where $C\in|D|$ is any real curve. Indeed, it can be viewed as the composition of the homomorphisms $$H^{{\operatorname{conj}}}_2(X)\to H_2(X/{\operatorname{conj}},{\mathbb R}X;{\mathbb Z}/2)\to H_1({\mathbb R}X;{\mathbb Z}/2) \to H_1(G;{\mathbb Z}/2)$$ given by $[\sigma]\mapsto[\sigma/{\operatorname{conj}}]\mapsto[\partial(\sigma/ {\operatorname{conj}})]\mapsto[(\partial(\sigma/{\operatorname{conj}})) \cap G]$. It follows that, for each $D\in{\operatorname{Pic}}^{\mathbb R}(X)$, there is a well-defined value $\left({\operatorname{bh}}_G(D)\right)^2\in\{0,1\}$. Suppose that ${\mathbb R} X$ contains at least $g+1$ connected components $F_0,...,F_g$ for some $g\ge1$. Put ${\widehat{F}}=F_0\cup...\cup F_g$, ${\underline{F}}= (F_0,...,F_g)$. We say that a divisor class $D\in{\operatorname{Pic}}^{\mathbb R}(X)$ is ${\widehat{F}}$-compatible, if, for any connected component $G\subset{\mathbb R} X\setminus{\widehat{F}}$, one has ${\operatorname{bh}}_G(D)=0$. Note that ${\widehat{F}}$-compatible divisor classes $D\in{\operatorname{Pic}}^{\mathbb R}(X)$ satisfy $$DK_X\equiv\sum_{i=0}^g\left({\operatorname{bh}}_{F_i}(D)\right)^2\mod2\ .$$ For any tuple $(r_0,...,r_g,m)$ of nonnegative integers, introduce the space ${\mathcal P}_{{\underline{r}},m}(X,{\underline{F}})$ (where ${\underline{r}}=(r_0,...,r_g)$) of configurations of $r_0+...+r_g+2m$ distinct points of $X$ such that $r_i$ of them belong to $F_i$, $i=0,...,g$, and the others form $m$ complex conjugate pairs. 
Choose any conjugation-invariant class $\varphi\in H_2(X\setminus{\widehat{F}};{\mathbb Z}/2)$ and pick a big and nef, ${\widehat{F}}$-compatible divisor class $D\in{\operatorname{Pic}}^{\mathbb R}(X)$ such that \begin{equation}p_a(D)=(D^2+DK_X)/2+1\ge g\quad\text{and}\quad -DK_X\ge g+1-\sum_{i=0}^g\left({\operatorname{bh}}_{F_i}(D)\right)^2\ . \label{e5}\end{equation} Then there exist nonnegative integers $r_0,...,r_g,m$ such that \begin{equation} r_0+...+r_g+2m=-DK_X+g-1,\quad r_i\equiv\left({\operatorname{bh}}_{F_i}(D) \right)^2+1\mod2,\ i=0,...,g\ . \label{e6}\end{equation} The existence of such $r_0,...,r_g,m$ is guaranteed by the second inequality in (\ref{e5}) together with the congruence $DK_X\equiv\sum_{i=0}^g\left({\operatorname{bh}}_{F_i}(D)\right)^2\mod2$ noted above. If $X$ is sufficiently generic in its deformation class, and ${\boldsymbol w}\in{\mathcal P}_{{\underline{r}},m}(X, {\underline{F}})$ is generic, then the set ${\mathcal C}^{\mathbb R}_g(X,D,{\boldsymbol w})$ of real irreducible curves $C\in|D|$ of genus $g$, passing through ${\boldsymbol w}$, is finite and consists of only immersed curves (see Lemma \ref{ln12}). Furthermore, each curve $C\in{\mathcal C}^{\mathbb R}_g (X,D,{\boldsymbol w})$ has a one-dimensional real branch in each of the components $F_0,...,F_g$ of ${\mathbb R} X$. In particular, this yields that $C\setminus{\mathbb R} C$ consists of two connected complex conjugate components, and we denote one of them by $C_{1/2}$. For any vector ${\underline{\varepsilon}}= (\varepsilon_0,...,\varepsilon_g)$ with $\varepsilon_i=\pm1$, $i=0,...,g$, put $$W_{g,{\underline{r}}}(X,D,{\underline{F}}, {\underline{\varepsilon}},\varphi,{\boldsymbol w})= \sum_{C\in{\mathcal C}^{\mathbb R}_g(X,D,{\boldsymbol w})}(-1)^{s(C,{\underline{F}},{\underline{\varepsilon}}) +C_{1/2}\circ\varphi}\ ,$$ where $s(C,{\underline{F}},{\underline{\varepsilon}})$ is defined as follows: if $C$ is nodal, then this is the number of those real nodes of $C$ in ${\widehat{F}}$ which in $F_i$ are represented in real local coordinates as $x^2+\varepsilon_iy^2=0$, $i=0,...,g$ (a node of type $x^2+y^2=0$ is called {\it solitary}, and a node of type $x^2-y^2=0$ {\it non-solitary}), and if $C$ is not nodal, we locally deform each germ $(C,z)$, $z\in{\operatorname{Sing}}(C)$, moving its components to a general position in an equivariant way, and then count real nodes as in the nodal case. Since $C$ is immersed, the number $s(C,{\underline{F}},{\underline{\varepsilon}})\mod2$ does not depend on the choice of local deformations of $C$. Our main result is the following analog of Welschinger's theorem \cite{Welschinger:2003,Welschinger:2005} (see also \cite{Itenberg_Kharlamov_Shustin:2012}). \begin{theorem}\label{t1} Let $X$ be a real del Pezzo surface, $F_0,...,F_g$ connected components of ${\mathbb R} X$ for some $g\ge1$, $D\in{\operatorname{Pic}}^{\mathbb R}(X)$ a nef and big, ${\widehat{F}}$-compatible divisor class satisfying (\ref{e5}), $r_0,...,r_g,m$ nonnegative integers satisfying (\ref{e6}), and $\varphi\in H_2(X\setminus {\widehat{F}};{\mathbb Z}/2)$ a conjugation-invariant class. Let ${\underline{\varepsilon}}=(\varepsilon_0,...,\varepsilon_g)$, $\varepsilon_i=\pm1$, $i=0,...,g$. Then the following hold. (1) The numbers $W_{g,{\underline{r}}}(X,D,{\underline{F}}, {\underline{\varepsilon}},\varphi,{\boldsymbol w})$ do not depend on the choice of a generic configuration ${\boldsymbol w}\in{\mathcal P}_{{\underline{r}},m} (X,{\underline{F}})$ (which further on will be omitted in the notation).
(2) If tuples $(X,D,\underline{F},\varphi)$ and $(X',D', \underline{F}',\varphi')$ are deformation equivalent, then $$ W_{g,{\underline{r}}}(X,D,{\underline{F}},{\underline{\varepsilon}}, \varphi)=W_{g,{\underline{r}}}(X',D',\underline{F}',{\underline{\varepsilon}},\varphi')\ . $$ \end{theorem} \begin{corollary}\label{c1} Under the hypotheses of Theorem \ref{t1}, for any generic configuration ${\boldsymbol w}\in{\mathcal P}_{{\underline{r}},m} (X,{\underline{F}})$, $$|W_{g,{\underline{r}}}(X,D,{\underline{F}},{\underline{\varepsilon}}, \varphi)|\le\#{\mathcal C}^{\mathbb R}_g(X,D,{\boldsymbol w})\le {\operatorname{GW}}_g(X,D)\ ,$$ where ${\operatorname{GW}}_g$ is the genus $g$ Gromov-Witten invariant. In particular, if $W_{g,{\underline{r}}}(X,D,{\underline{F}}, {\underline{\varepsilon}},\varphi)\ne0$, then through any generic configuration ${\boldsymbol w}\in{\mathcal P}_{{\underline{r}},m}(X,{\underline{F}})$, one can trace a real curve $C\in|D|$ of genus $g$. \end{corollary} In Section \ref{sec2}, we exhibit several examples in which our invariants do not vanish, and therefore prove the existence of real curves of positive genera passing through prescribed configurations of real points. {\bf Proof of Theorem \ref{t1}.} The proof follows the lines of \cite{Itenberg_Kharlamov_Shustin:2012}, where the case of rational curves has been treated in full detail in the algebraic geometry framework. We only indicate the principal points of the argument, referring to Appendix 1, which contains all the needed statements from \cite{Itenberg_Kharlamov_Shustin:2012}. The strategy of the proof is to verify that the studied enumerative numbers remain constant in general variations of the point constraints ${\boldsymbol w}\in{\mathcal P}_{{\underline{r}},m}(X,{\underline{F}})$ and of the surface $X$. Let us fix a surface $X$ and consider the space ${\mathcal P}^{\mathbb C}_n(X)$ of $n$-tuples of distinct points of $X$. Let $n=r_0+...+r_g+2m=-DK_X+g-1$. Then ${\mathcal P}_{{\underline{r}},m}(X,{\underline{F}})\subset{\mathcal P}^{\mathbb C}_n(X)$. Introduce the characteristic variety $${\operatorname{Ch}}_n^{\mathbb C}(X,D)=\left\{{\boldsymbol w}\in{\mathcal P}_n^{\mathbb C}(X) \ \Bigg|\ \begin{array}{l}\text{there exists a Riemann surface} \ S_g\ \text{of genus}\ g,\\ \text{an immersion}\ \nu:S_g\to X\ \text{and an}\ n\text{-tuple}\ {\boldsymbol p}\ \text{of distinct points of}\ S_g\\ \text{such that}\ \nu({\boldsymbol p})={\boldsymbol w},\ \nu(S_g)\in|D|,\ \text{and}\ h^1(S_g,{\mathcal N}_{S_g}^\nu(- {\boldsymbol p}))>0\end{array}\right\} ,$$ where ${\mathcal N}_{S_g}^\nu=\nu^*{\mathcal T}X/{\mathcal T}S_g$ is the normal bundle. If $p_a(D)>g$, this is a hypersurface in ${\mathcal P}^{\mathbb C}_n(X)$. As pointed out in \cite[Theorem 3.1]{Itenberg_Kharlamov_Shustin:2003}, the invariance of Welschinger numbers fails when the (moving) configuration ${\boldsymbol w}$ hits ${\operatorname{Ch}}_n^{\mathbb C}(X,D)$. Our key observation is that this event does not happen in our situation. \begin{lemma}\label{l1} Under the hypotheses of Theorem \ref{t1}, ${\mathcal P}_{{\underline{r}},m}(X,{\underline{F}})\cap{\operatorname{Ch}}_n^{\mathbb C}(X,D)=\emptyset$. \end{lemma} {\bf Proof.} Let $\nu:S_g\to X$ be a conjugation-invariant immersion and ${\boldsymbol p}\subset S_g$ a conjugation-invariant $n$-tuple such that $C=\nu(S_g)\in{\mathcal C}^{\mathbb R}_g(X,D,{\boldsymbol w})$, where ${\boldsymbol w}=\nu({\boldsymbol p})\in{\mathcal P}_{{\underline r},m}(X,{\underline F})$. Suppose that $h^1(S_g,{\mathcal N}_{S_g}^\nu(-{\boldsymbol p}))>0$.
Then by Riemann-Roch $h^0(S_g,{\mathcal N}_{S_g}^\nu(-{\boldsymbol p}))>0$ (indeed, $\deg{\mathcal N}_{S_g}^\nu(-{\boldsymbol p})=-DK_X+2g-2-n=g-1$, hence $\chi({\mathcal N}_{S_g}^\nu(-{\boldsymbol p}))=0$ and $h^0=h^1>0$). It is well known that $\nu_*{\mathcal N}_{S_g}^\nu={\mathcal J}_C^{cond} \otimes{\mathcal O}_X(D)$, where ${\mathcal J}_C^{cond} ={\operatorname{Ann}}(\overline{\mathcal O}_C/{\mathcal O}_C)$ is the conductor ideal sheaf on $C$ (see details, e.g., in \cite[Section 4.2.4]{Dolgachev:2013}). Hence $$H^0(C,{\mathcal J}_C^{cond}(-{\boldsymbol w})\otimes {\mathcal O}_X(D))\simeq H^0(S_g,{\mathcal N}_{S_g}^\nu(-{\boldsymbol p}))\ne0$$ (in the case of $w=z\in{\operatorname{Sing}}(C)$ for some point $w\in{\boldsymbol w}$, we define the twisted sheaf ${\mathcal J}_C^{cond}(-{\boldsymbol w})$ as the limit when $w$ specializes to the point $z$ along a component of the germ $(C,z)$). A real nonzero element of $H^0(C,{\mathcal J}_C^{cond}(-{\boldsymbol w})\otimes{\mathcal O}_X(D))$ defines a real curve $C'\ne C$ in the linear system $|D|$, which intersects $C$ at each singular point $z\in{\operatorname{Sing}}(C)$ with multiplicity $\ge2\delta(C,z)$ (see \cite[Section 4.2.4]{Dolgachev:2013}) and at each point of ${\boldsymbol w}$. In view of congruence (\ref{e6}), $C'$ must intersect $C$ in (at least) one additional point in each component $F_0,...,F_g$, and hence \begin{eqnarray}CC'&\ge&\sum_{i=0}^g(r_i+1)+2m+2\delta(C) \nonumber\\ &=&(-DK_X+g-1)+(g+1)+(D^2+DK_X+2-2g)=D^2+2\ ,\label{e2} \end{eqnarray} which is a contradiction. $\blacksquare$ By Lemmas \ref{ln11}-\ref{ln16}, in a general smooth equivariant deformation ${\boldsymbol w}_t$, $t\in[0,1]$, of ${\boldsymbol w}={\boldsymbol w}_0$ in ${\mathcal P}_{{\underline{r}},m}(X,{\underline{F}})$, for $t$ in the complement to a finite set $\Phi\subset[0,1]$, the curve collection ${\mathcal C}^{\mathbb R}_g(X,D,{\boldsymbol w})$ consists of immersed Riemann surfaces of genus $g$, and the values $t\in\Phi$ correspond to degeneration of some curves of ${\mathcal C}^{\mathbb R}_g(X,D,{\boldsymbol w})$ either into nonimmersed, birational images of Riemann surfaces of genus $g$, or into curves listed in Lemma \ref{ln16}. Lemmas \ref{l1} and \ref{p2X} yield that the numbers $W_{g,{\underline{r}}}(X,D,{\underline{F}},{\underline{\varepsilon}},\varphi,{\boldsymbol w})$ do not change as $t$ varies along any of the components of $[0,1]\setminus\Phi$. Next, we can suppose that the configuration ${\boldsymbol w}_t$ is in general position on each of the finitely many curves $C=\nu(\hat C)$, where $[\nu:\hat C\to X,{\boldsymbol p}]\in {\mathcal C}^{\mathbb R}_g(X,D,{\boldsymbol w}_t)$, $t\in\Phi$. Then the constancy of the numbers $W_{g,{\underline{r}}}(X,D,{\underline{F}},{\underline{\varepsilon}},\varphi,{\boldsymbol w})$ in these bifurcations follows from Lemmas \ref{p2X}, \ref{ln2}, and \ref{leg}. The proof of statement (2) of Theorem \ref{t1} amounts to the verification of the constancy of the number $W_{g,{\underline{r}}}(X,D,{\underline{F}},{\underline{\varepsilon}}, \varphi,{\boldsymbol w})$ when $X$ smoothly bifurcates through a uninodal del Pezzo surface (see Section \ref{sec-dpudp} in Appendix 1). The treatment is based on the use of an appropriate real version of the Abramovich-Bertram-Vakil formula \cite{Abramovich_Bertram:2001}, \cite[Theorem 4.2]{Vakil:2000}, and it literally coincides with that in \cite[Section 4]{Itenberg_Kharlamov_Shustin:2013}, while the key points in this consideration are Lemmas \ref{ln12}(2ii) and \ref{lem-abv}.
$\blacksquare$ \section{Examples}\label{sec2} \subsection{Small divisors} \begin{proposition}\label{p1} Suppose that the data $X,g,{\underline{F}},D,{\underline{r}},\varphi$ satisfy the hypotheses of Theorem \ref{t1}. (1) If $p_a(D)=g$, then $W_{g,{\underline{r}}}(X,D,{\underline{F}}, {\underline{\varepsilon}}, \varphi)=(-1)^{C_{1/2}\circ\varphi}$, where $C$ is any smooth curve from $|D|$. (2) If $p_a(D)=g+1$, then $$W_{g,{\underline{r}}}(X,D,{\underline{F}},{\underline{\varepsilon}}, \varphi)=\begin{cases}\sum_{i=0}^g\varepsilon_i(r_i+1-\chi(F_i)),\ & \text{if}\ \varphi=0,\\ \sum_{i=0}^g\varepsilon_i(r_i+1-\chi(F_i))-\chi({\mathbb R} X\setminus {\widehat{F}}),\ & \text{if}\ \varphi=[{\mathbb R} X\setminus{\widehat{F}}] \end{cases}$$ \end{proposition} {\bf Proof.} The first formula is evident, since the point constraints define a unique smooth curve. In the second case, the point constraints define a pencil of curves in $|D|$ which, by a B\'ezout argument similar to (\ref{e2}), have, in addition to ${\boldsymbol w}$, an extra common point in each of the components $F_0,...,F_g$; hence the result follows from the Morse formula after blowing up all $\sum_{i=0}^g(r_i+1)$ real common points of the pencil. $\blacksquare$ \begin{example}\label{e1} Suppose that $X$ is a two-component real cubic surface in ${\mathbb P}^3$, $F_0\simeq{\mathbb R} P^2$, $F_1\simeq S^2$, and let $g=1$. Then (see \cite{Segre:1942}) $X$ contains precisely three real $(-1)$-curves $E_1,E_2,E_3$ such that ${\mathbb R} E_1\cup{\mathbb R} E_2\cup{\mathbb R} E_3\subset F_0$, and each real effective, big and nef divisor can be represented as $D=m_1E_1+m_2E_2+m_3E_3$ with $0<2m_i\le m_1+m_2+m_3$, $i=1,2,3$. In particular, $-K_X=E_1+E_2+E_3$. Since $p_a(-2K_X-E_i)=2$, $i=1,2,3$, we have $$W_{1,{\underline{r}}}(X,-2K_X-E_i,{\underline{F}},(\varepsilon_0, \varepsilon_1),0)=\varepsilon_0r_0+\varepsilon_1(r_1-1)$$ for any $r_0+r_1+2m=5$, $r_0\equiv0\mod2$, $r_1\equiv1\mod2$, $\varepsilon_0, \varepsilon_1=\pm1$. \end{example} \subsection{Invariants of del Pezzo surfaces of degree $\ge2$} Starting with the celebrated papers by Mikhalkin \cite{Mikhalkin:2005} and Welschinger \cite{Welschinger:2005}, the problem of computation and analysis of the behavior of (genus zero) Welschinger invariants of real rational symplectic four-folds, in particular real del Pezzo surfaces, has been addressed in a series of papers (see, e.g., \cite{Arroyo_Brugalle_Medrano:2011, Brugalle:2014,Brugalle_Puignau:2013, Horev_Solomon:2012,Itenberg_Kharlamov_Shustin:2003, Itenberg_Kharlamov_Shustin:2005,Itenberg_Kharlamov_Shustin:2009, Itenberg_Kharlamov_Shustin:2013,Shustin:2006, Welschinger:2007}). Some of the techniques developed there apply to the computation of the higher genus invariants introduced in Section \ref{pg}. In this section, we demonstrate examples of computations obtained by properly modified methods of \cite{Itenberg_Kharlamov_Shustin:2013}. Similarly to \cite{Itenberg_Kharlamov_Shustin:2013}, we stress the positivity and the asymptotic behavior of our invariants, which, in particular, yield the existence of real curves of positive genus passing through appropriate real point configurations. Real del Pezzo surfaces are classified up to deformation equivalence by their degree and the topology of the real point set (see \cite{Degtyarev_Kharlamov:2002}).
In degree $\ge2$, we have the following surfaces $X$ with a disconnected real part: of degree $4$ with ${\mathbb R} X\simeq 2S^2$, of degree $3$ with ${\mathbb R} X\simeq{\mathbb R} P^2{\perp{\hskip-0.4cm}\perp} S^2$, and of degree $2$ with ${\mathbb R} X\simeq{\mathbb R} P^2{\perp{\hskip-0.4cm}\perp}{\mathbb R} P^2$, $({\mathbb R} P^2\#{\mathbb R} P^2){\perp{\hskip-0.4cm}\perp} S^2$, $2S^2$, $3S^2$, or $4S^2$ ({\it cf.}, for instance, \cite[Section 5.1]{Itenberg_Kharlamov_Shustin:2013}). For all of them we can define elliptic invariants, for the last two types also invariants of genus $2$, and for the very last one also invariants of genus $3$. \begin{proposition}\label{p2} Let $X$ be a real del Pezzo surface of degree $\ge2$ such that ${\mathbb R} X$ contains (at least) two connected components $F_0,F_1$ and let $D\in{\operatorname{Pic}}^{\mathbb R}(X)$ be a nef and big divisor class, satisfying relations (\ref{e5}) for $g=1$. Then the following statements hold. \begin{enumerate}\item[(i)] For any nonnegative integers $r_0,r_1$ satisfying (\ref{e6}) with $m=0$ and for any conjugation-invariant class $\varphi\in H_2(X\setminus(F_0\cup F_1); {\mathbb Z}/2)$, the invariants $W_{1,(r_0,r_1)}(X,D,(F_0,F_1),(1,1), \varphi)$ do not depend on the choice of the pair $r_0,r_1$ (thus, further on we omit the subindex $(r_0,r_1)$ in the notation). \item[(ii)] If $X$ is not of degree $2$ with ${\mathbb R} X\simeq2S^2$, then \begin{equation}W_1(X,D,(F_0,F_1),(1,1),0)>0\ ,\label{e7}\end{equation} and \begin{equation}\lim_{k\to\infty}\frac{\log W_1(X,kD, (F_0,F_1),(1,1),0)}{k\log k}=\lim_{k\to\infty} \frac{\log {\operatorname{GW}}_0(X,kD)}{k\log k}=-DK_X\ .\label{e10}\end{equation} \item[(iii)] If $X$ is of degree $2$ with ${\mathbb R} X\simeq2S^2$, then \begin{equation}W_1(X,D,(F_0,F_1),(1,1),0)+W_{1,(-DK_X-1,1)} (X,D,(F_0,F_1),(1,-1),0)>0\ ,\label{e12} \end{equation} and $$\lim_{k\to\infty}\frac{\log\Big(W_1(X,kD,(F_0,F_1),(1,1),0)+ W_{1,(-kDK_X-1,1)}(X,kD,(F_0,F_1),(1,-1),0)\Big)} {k\log k}$$ \begin{equation}=\lim_{k\to\infty}\frac{\log {\operatorname{GW}}_0(X,kD)}{k\log k}=-DK_X\ .\label{e13} \end{equation} \end{enumerate} \end{proposition} Statement (iii) of Proposition \ref{p2} can be generalized to genus $2$ and $3$ invariants of the surfaces $X$ of degree $2$ with ${\mathbb R} X\simeq3S^2$ or $4S^2$: \begin{proposition}\label{p3} (1) Let $X$ be a real del Pezzo surface of degree $2$ with ${\mathbb R} X\simeq 3S^2$ or $4S^2$, $F_0,F_1,F_2$ three distinct connected components of ${\mathbb R} X$, $D\in{\operatorname{Pic}}^{\mathbb R}(X)$ a nef and big divisor class satisfying relation (\ref{e5}) with $g=2$, $r_0,r_1$ odd positive integers satisfying $r_0+r_1=-DK_X$. Let ${\underline{r}}'=(r_0,r_1,1)$, ${\underline{F}}'=(F_0,F_1,F_2)$.
Then the invariant $$W_{2,{\underline{r}}'}(X,D,{\underline{F}}',(1,1,\pm1)):=W_{2, {\underline{r}}'}(X,D,{\underline{F}}',(1,1,1),0)+W_{2, {\underline{r}}'}(X,D,{\underline{F}}',(1,1,-1),0)$$ does not depend on the choice of odd $r_0,r_1$ subject to $r_0+r_1=-DK_X$ (so, further on the subindex ${\underline{r}}'$ will be omitted), and it satisfies $$W_2(X,D,{\underline{F}}',(1,1,\pm1))>0$$ and $$\lim_{k\to\infty}\frac{\log W_2(X,kD,{\underline{F}}',(1,1,\pm1))} {k\log k}=\lim_{k\to\infty}\frac{\log {\operatorname{GW}}_0(X,kD)}{k\log k}=-DK_X\ .$$ (2) Let $X$ be a real del Pezzo surface of degree $2$ with ${\mathbb R} X\simeq4S^2$, $F_0,F_1,F_2,F_3$ be the connected components of ${\mathbb R} X$, and $D\in{\operatorname{Pic}}^{\mathbb R}(X)$ be a nef and big divisor class satisfying relation (\ref{e5}) with $g=3$, $r_0,r_1$ odd positive integers satisfying $r_0+r_1=-DK_X$. Let ${\underline{r}}''=(r_0,r_1,1,1)$, ${\underline{F}}''=(F_0,F_1,F_2,F_3)$. Then the invariant $$W_{3,{\underline{r}}''}(X,D,{\underline{F}}'',(1,1,\pm1,\pm1)):= \sum_{\varepsilon_2,\varepsilon_3=\pm1}W_{3, {\underline{r}}''}(X,D,{\underline{F}}'',(1,1,\varepsilon_2,\varepsilon_3),0)$$ does not depend on the choice of odd $r_0,r_1$ subject to $r_0+r_1=-DK_X$ (so, further on the subindex ${\underline{r}}''$ will be omitted), and it satisfies $$W_3(X,D,{\underline{F}}'',(1,1,\pm1,\pm1))>0$$ and $$\lim_{k\to\infty}\frac{\log W_3(X,kD,{\underline{F}}'',(1,1,\pm1,\pm1))} {k\log k}=\lim_{k\to\infty}\frac{\log {\operatorname{GW}}_0(X,kD)}{k\log k}=-DK_X\ .$$ \end{proposition} \begin{corollary}\label{c2} (1) Under the hypotheses of Proposition \ref{p2}(ii) (respectively, \ref{p2}(iii)), through any generic configuration ${\boldsymbol w}\in{\mathcal P}_{(r_0,r_1),0}(X,(F_0,F_1))$ (respectively, ${\boldsymbol w}\in{\mathcal P}_{(-DK_X-1,1),0}(X,(F_0,F_1))$) one can draw a real elliptic curve $C\in|D|$ such that $C\supset{\boldsymbol w}$. (2) Under the hypotheses of Proposition \ref{p3}(1) (respectively, \ref{p3}(2)), through any generic configuration ${\boldsymbol w}\in{\mathcal P}_{{\underline{r}}',0}(X, {\underline{F}}')$ (respectively, ${\mathcal P}_{{\underline{r}}'',0}(X,{\underline{F}}'')$) one can draw a real curve $C\in|D|$ of genus $2$ (respectively, $3$) such that $C\supset{\boldsymbol w}$. \end{corollary} \subsection{Proof of Proposition \ref{p2}}\label{sec2.3} By blowing up suitable real points, we reduce the consideration to surfaces of degree $2$ only. To treat this case, we use real versions of the Abramovich-Bertram-Vakil formula and the Caporaso-Harris-type formulas developed in \cite{Itenberg_Kharlamov_Shustin:2013}, as well as their direct extensions to elliptic curves. We subsequently prove statements (i), (ii), and (iii).
\subsubsection{Proof of statement (i)}\label{sec-i} Using Theorem \ref{t1} and the construction of \cite[Sections 4.2 and 5.2]{Itenberg_Kharlamov_Shustin:2013}, we can assume that $X$ is a generic real fiber of an {\it elliptic ABV family} (in the terminology of \cite[Section 5.2]{Itenberg_Kharlamov_Shustin:2013}), which is the following flat, conjugation-invariant family of surfaces $\pi:{\mathfrak X}\to({\mathbb C},0)$: \begin{itemize}\item ${\mathfrak X}$ is a smooth three-fold; \item all fibers ${\mathfrak X}_t$, $t\ne0$, are del Pezzo of degree $2$; the fibers ${\mathfrak X}_t$, $t\in({\mathbb R},0)\setminus\{0\}$, are real, equivariantly deformation equivalent to $X$; \item the central fiber is ${\mathfrak X}_0=Y\cup Z$, where $Y$ and $Z$ are smooth real surfaces transversally intersecting along a smooth real rational curve $E$ which satisfies ${\mathbb R} E\ne\emptyset$ and is such that $(Y,E)$ is a nodal del Pezzo pair of degree $K_Y^2=2$ (i.e., $K_YE=0$, $-K_YC>0$ for any irreducible curve $C\ne E$, and $(E^2)_Y=-2$, {\it cf.} \cite[Section 4]{Itenberg_Kharlamov_Shustin:2013}), $Z$ is a real quadric surface with ${\mathbb R} Z\simeq S^2$, in which $E$ is a hyperplane section (representing the divisor class $-K_Z/2$), \item ${\mathbb R} E$ divides some connected component $F$ of ${\mathbb R} Y$ into two parts $F_+,F_-$ so that the components $(F_0)_t,(F_1)_t$ of ${\mathbb R}{\mathfrak X}_t$ (corresponding to the given components $F_0,F_1$ of ${\mathbb R} X$), merge as $t\to0$ to $F_+$ and $F_-$, respectively. \end{itemize} By \cite[Proposition 24]{Itenberg_Kharlamov_Shustin:2013}, ${\operatorname{Pic}}^{\mathbb R}(X)$ is naturally embedded into ${\operatorname{Pic}}^{\mathbb R}(Y)$ as the orthogonal complement of $E$. Note also that the given class $\varphi\in H_2(X\setminus(F_0\cup F_1);{\mathbb Z}/2)$ can be naturally identified with a conjugation-invariant class in $H_2(Y\setminus F;{\mathbb Z}/2)$ (which we denote also by $\varphi$). For a configuration ${\boldsymbol w}$ of $-DK_X=-DK_Y$ points in $F$ such that $r_0$ of them lie in $F_+$ and $r_1$ other points lie in $F_-$ (we call such a configuration an {\it $(r_0,r_1)$-configuration}), denote by ${\mathcal C}^{\mathbb R}_1(Y,D,{\boldsymbol w})$ the set of real elliptic curves $C\in|D|_Y$ passing through ${\boldsymbol w}$. By \cite[Proposition 2.1]{Shoval_Shustin:2013}, this is a finite set which consists of only immersed curves. Since $DE=0$, any curve $C\in{\mathcal C}^{\mathbb R}_1(Y,D,{\boldsymbol w})$ has two one-dimensional real branches, in particular, $C\setminus{\mathbb R} C$ splits into two connected components, one of which we denote by $C_{1/2}$. Using \cite[Lemma 7]{Itenberg_Kharlamov_Shustin:2013}, we can replace each nonnodal singular point of any curve $C\in{\mathcal C}^{\mathbb R}_1(Y,D,{\boldsymbol w})$ by its local nodal equigeneric deformation and then correctly define the number \begin{equation} W_{1,(r_0,r_1)}(Y,D,F,\varphi,{\boldsymbol w})=\sum_{C\in{\mathcal C}^{\mathbb R}_1(Y,D,{\boldsymbol w})}(-1)^{s(C;F)+C_{1/2}\circ\varphi}\ ,\label{e4} \end{equation} where $s(C;F)$ is the number of solitary nodes of $C$ in $F$. 
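For orientation, let us note the immediate specialization of (\ref{e4}) to the trivial class $\varphi=0$ (nothing beyond (\ref{e4}) itself is used here): since $C_{1/2}\circ\varphi=0$ when $\varphi=0$, the count reduces to a Welschinger-type signed count governed by the solitary nodes lying in $F$ alone, $$W_{1,(r_0,r_1)}(Y,D,F,0,{\boldsymbol w})=\sum_{C\in{\mathcal C}^{\mathbb R}_1(Y,D,{\boldsymbol w})}(-1)^{s(C;F)}\ .$$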
\begin{lemma}\label{l2} There exists an $(r_0,r_1)$-configuration ${\boldsymbol w}$ such that $$W_{1,(r_0,r_1)}(X,D,(F_0,F_1),(1,1),\varphi)=W_{1,(r_0,r_1)}(Y,D,F,\varphi,{\boldsymbol w})\ .$$ \end{lemma} {\bf Proof.} Take ${\boldsymbol w}$ to be a $D_0$-CH-configuration in the sense of Appendix B, where $D_0\ge D$ is a suitable real effective divisor, and $|{\boldsymbol w}\cap F_+|=r_0$, $|{\boldsymbol w}\cap F_-|=r_1$. Extend ${\boldsymbol w}$ to a family of configurations ${\boldsymbol w}_t\subset{\mathbb R}{\mathfrak X}_t$, $t\in({\mathbb R},0)$, and note that each elliptic curve $C_t\in{\mathcal C}_1^{\mathbb R}({\mathfrak X}_t, D,{\boldsymbol w}_t)$ degenerates as $t\to 0$ either to an elliptic curve $C_0\in{\mathcal C}_1^{\mathbb R}(Y,D,{\boldsymbol w})$, or to the union of an elliptic curve $C'_0\in{\mathcal C}_1^{\mathbb R}(Y,D-mE,{\boldsymbol w})$, $m>0$, and $2m$ generating lines of the quadric $Z$ attached to $2m$ intersection points of $C'_0$ with $E$ ({\it cf.} \cite[Lemma 22]{Itenberg_Kharlamov_Shustin:2013}). However, by Lemma \ref{lem-ch}, all intersection points of $C'_0$ with $E$ are real, and hence the above union of the generating lines of $Z$ is not real. Hence the latter degeneration of $C_t$ is not possible, and we are done. $\blacksquare$ \begin{lemma}\label{l3} If ${\boldsymbol w}$ is the $(r_0,r_1)$-configuration from Lemma \ref{l2} then \begin{equation} W_{1,(r_0,r_1)}(Y,D,F,\varphi,{\boldsymbol w})=W_{Y,E,\varphi+[{\mathbb R} Y\setminus F]}(D-E,0,2e_1,0)\ ,\label{e3} \end{equation} where the right-hand side is an ordinary $w$-number as defined in \cite[Section 3.6]{Itenberg_Kharlamov_Shustin:2013}. \end{lemma} {\bf Proof.} By the construction of Appendix B, ${\boldsymbol w}=\{w_i\}_{i\in J}$, where $J\subset\{1,...,N\}$, $|J|=r_0+r_1$. Let $k=\max J$. Consider degenerations of the curves $C\in{\mathcal C}_1^{\mathbb R}(Y,D,{\boldsymbol w})$ induced by the deformation of ${\boldsymbol w}$, in which ${\boldsymbol w}'={\boldsymbol w}\setminus\{w_k\}$ stays fixed, and $w_k$ specializes along the arc $L_k$ to the point $z_k\in E$ (see details in Appendix B). By \cite[Proposition 2.6(2)]{Shoval_Shustin:2013}, any curve $C\in{\mathcal C}^{\mathbb R}_1(Y,D,{\boldsymbol w})$ degenerates \begin{enumerate}\item[(a)] either into the union $C'\cup E$, where $C'\in|D-E|$ is a real immersed elliptic curve, passing through ${\boldsymbol w}'$, intersecting $E$ at one point, and having there a smooth branch quadratically tangent to $E$, \item[(b)] or into the union $C''\cup E$, where $C''\in|D-E|$ is a real immersed rational curve, passing through ${\boldsymbol w}'$ and transversally intersecting $E$ in two distinct real points. \end{enumerate} By \cite[Proposition 2.8(2)]{Shoval_Shustin:2013}, each curve $C'\cup E$ in item (a) gives rise to two curves in ${\mathcal C}^{\mathbb R}_1(Y,D,{\boldsymbol w})$, which are distinguished by (two) deformation patterns given in \cite[Lemma 2.10(2)]{Shoval_Shustin:2013}, and which have opposite Welschinger signs (see \cite[Proposition 6.1(i)]{Shustin:2006}), and therefore do not contribute to $W_{1,(r_0,r_1)}(Y,D,F,\varphi,{\boldsymbol w})$. In turn, each curve $C''\cup E$ in item (b) gives rise to one curve in ${\mathcal C}^{\mathbb R}_1(Y,D,{\boldsymbol w})$.
Furthermore, these curves $C''$ are counted by the number $W_{Y,E,\varphi+[{\mathbb R} Y\setminus F]}(D-E,0,2e_1,0)$ with the same signs as the number $W_{1,(r_0,r_1)}(Y,D,F,\varphi,{\boldsymbol w})$ counts the corresponding deformed curves in ${\mathcal C}^{\mathbb R}_1(Y,D,{\boldsymbol w})$ ({\it cf.} the right-hand sides in (\ref{e4}) and \cite[Formulas (3) and (4)]{Itenberg_Kharlamov_Shustin:2013}), and hence (\ref{e3}) follows. $\blacksquare$ Statement (i) of Proposition \ref{p2} is an immediate consequence of Lemmas \ref{l2} and \ref{l3}. \begin{remark}\label{r1} Lemmas \ref{l2} and \ref{l3} allow one to compute all considered invariants $W_1(X,D,(F_0,F_1),(1,1),\varphi)$ via the recursive formula in \cite[Theorem 2]{Itenberg_Kharlamov_Shustin:2013}. In Table \ref{tab1} we present several values of this invariant for $D=-2K_X$ and various real del Pezzo surfaces $X$ (compared with the corresponding Gromov-Witten invariants of genus $1$, {\it cf.} \cite[Examples 4.2 and 6.7]{Brugalle:2014}). \begin{table} \begin{center} \begin{tabular}{|l||c|c|c|c|c|c|c|} \hline $\deg X$ & 4 & 3 & 2 & 2 & 2 & 2 & 2 \\ \hline ${\mathbb R} X$ & {$2S^2$} & {${\mathbb R}P^2 {\perp{\hskip-0.4cm}\perp} S^2$} & {$2{\mathbb R}P^2$} & {$({\mathbb R} P^2\#{\mathbb R} P^2){\perp{\hskip-0.4cm}\perp} S^2$} & {$2S^2$} & {$3S^2$} & {$4S^2$} \\ \hline $W_1(X,-2K)$ & 112 & 36 & 12 & 12 & 4 & 8 & 16 \\ \hline $GW_1(X,-2K)$ & 12300 & 1740 & 204 & 204 & 204 & 204 & 204 \\ \hline \end{tabular} \end{center} \caption{Elliptic invariants of del Pezzo surfaces of degree $\ge2$}\label{tab1} \end{table} \end{remark} \subsubsection{Proof of the positivity relation (\ref{e7})} By Lemmas \ref{l2} and \ref{l3}, to prove (\ref{e7}), it is enough to show that \begin{equation}W_{Y,E,[{\mathbb R} Y\setminus F]}(D-E,0,2e_1,0)>0\ .\label{e9}\end{equation} First, we prove an auxiliary inequality. Denote by ${\mathbb Z}_+^\infty$ the semigroup of vectors $\alpha=(\alpha_1,\alpha_2,...)$ with countably many nonnegative integer coordinates such that $\|\alpha\|=\sum_i\alpha_i<\infty$, and denote by ${\mathbb Z}_+^{\infty,odd}\subset{\mathbb Z}_+^\infty$ the subsemigroup of vectors $\alpha$ such that $\alpha_{2i}=0$ for all $i\ge1$. By $e_1,e_2,...$ we denote the standard unit vectors in ${\mathbb Z}^\infty_+$. Put $I\alpha=\sum_{i\ge1}i\alpha_i$ for $\alpha\in{\mathbb Z}^\infty_+$. \begin{lemma}\label{l4} For any real nodal del Pezzo pair $(Y,E)$, introduced in Section \ref{sec-i}, any nef divisor class $D'\in{\operatorname{Pic}}^{\mathbb R}(Y)$ such that $D'E\ge0$ and $-D'K_Y>0$, and any vectors $\alpha,\beta\in{\mathbb Z}_+^{\infty,odd}$ such that $I(\alpha+\beta)=D'E$, one has \begin{equation}W_{Y,E,[{\mathbb R} Y\setminus F]}(D',\alpha,\beta,0)\ge0\ ,\label{e8}\end{equation} where $W_{Y,E,[{\mathbb R} Y\setminus F]}(D',\alpha,\beta,0)$ is an ordinary $w$-number as defined in \cite[Section 3.6]{Itenberg_Kharlamov_Shustin:2013}. \end{lemma} {\bf Proof.} For those pairs $(Y,E)$ which come from real del Pezzo surfaces $X$ with ${\mathbb R} X\simeq S^2{\perp{\hskip-0.4cm}\perp}({\mathbb R} P^2\#{\mathbb R} P^2)$, ${\mathbb R} P^2{\perp{\hskip-0.4cm}\perp}{\mathbb R} P^2$, or $3S^2$, the claim follows from \cite[Lemma 39]{Itenberg_Kharlamov_Shustin:2013}. Thus, we only need to consider the case ${\mathbb R} X\simeq4S^2$.
Via the anticanonical map $X\to{\mathbb P}^2$, the considered surface $X$ is represented as the double covering of ${\mathbb P}^2$ ramified along a real smooth quartic curve $Q_X$ having four ovals (see Figure \ref{f1}(a)), whereas ${\mathbb R} X$ doubly covers the four disks bounded by the ovals. In turn, the family ${\mathfrak X}$ can be obtained via the blow-up of the node of the double covering of the trivial family ${\mathbb P}^2\times({\mathbb C},0)$ ramified along an inscribed family of quartics with the nodal central quartic $Q_Y$ shown in Figure \ref{f1}(b). To prove (\ref{e8}), we use induction on $R_Y(D',\beta):=-(K_Y+E)D'+\|\beta\|-1$. The base of induction is provided by \cite[Proposition 9(1)]{Itenberg_Kharlamov_Shustin:2013}, where all nonzero values are equal to $1$. For the induction step, we apply the suitably modified formula (6) from \cite[Theorem 2(2)]{Itenberg_Kharlamov_Shustin:2013}. In the right-hand side of \cite[Formula (6)]{Itenberg_Kharlamov_Shustin:2013}, the summands of the first sum and the factors in the second sum, which correspond to real divisor classes $D^{(i)}$ (in the notation of \cite{Itenberg_Kharlamov_Shustin:2013}), are nonnegative by the induction assumption, whereas the factors corresponding to pairs of conjugate divisor classes may be negative. More precisely, these factors correspond to pairs of conjugate $(-1)$-curves in $Y$ intersecting $E$. They can be viewed as follows ({\it cf.} \cite[Remark 23]{Itenberg_Kharlamov_Shustin:2013}): there are exactly six tangents to the quartic curve $Q_Y$ (Figure \ref{f1}(b)) passing through the node; they are all real, and each one is covered by a pair of conjugate $(-1)$-curves in $Y$ intersecting in a real solitary node, which projects to the tangency point on $Q_Y$. Thus, a pair of $(-1)$-curves covering any of the two tangents to the real nodal branch of $Q_Y$ contributes factor $(-1)$, while a pair of $(-1)$-curves covering any of the four tangents to the smooth ovals of $Q_Y$ contributes factor $1$. Each summand of the second sum in the right-hand side of \cite[Formula (6)]{Itenberg_Kharlamov_Shustin:2013} can be written as $(l+1)A_mB_{2l+m}$, where all the factors corresponding to pairs of conjugate $(-1)$-curves are separated in $A_m$, where $m$ is the number of factors, and the sum of the divisor classes appearing in the remaining part $B_{2l+m}$ equals $D'-E-(2l+m)(K_Y+E)$. By \cite[Theorem 2(1g)]{Itenberg_Kharlamov_Shustin:2013}, any pair of $(-1)$-curves appears in $A_m$ at most once. Thus, an easy computation converts \cite[Formula (6)]{Itenberg_Kharlamov_Shustin:2013} into $$W_{Y,E,[{\mathbb R} Y\setminus F]}(D',\alpha,\beta,0)=\sum_{j\ge 1,\ \beta_j>0}W_{Y,E,[{\mathbb R} Y\setminus F]}(D',\alpha+e_j,\beta-e_j,0)+B_0+2B_1+B_2\ ,$$ which completes the proof in view of $B_0,B_1,B_2\ge0$ (by the induction assumption). $\blacksquare$ \begin{figure} \caption{Ramification quartics} \label{f1} \end{figure} Let us check that $D-E$ is nef on $Y$. By \cite[Lemma 35(ii)]{Itenberg_Kharlamov_Shustin:2013}, it is enough to show that $(D-E)E\ge0$ and $(D-E)E'\ge0$ for any $(-1)$-curve $E'$. We have $(D-E)E=DE-E^2=2$. For $(-1)$-curves disjoint from $E$, we have $(D-E)E'=DE'\ge0$ by the nefness of $D$. Any $(-1)$-curve $E'$ intersecting $E$ satisfies $E'E=1$, and hence is not real (any real divisor has even intersection with $E$, since $[{\mathbb R} E]=0\in H_1({\mathbb R} Y)$). Furthermore, $DE'>0$.
Indeed, otherwise, $D$ would be disjoint both from $E'$ and from its complex conjugate $\overline E'$; thus, $D(E'+\overline E')=D(E+E'+\overline E')=0$, which, in view of $\max\{\dim|E'+\overline E'|,\dim|E+E'+\overline E'|\}=1$, would contradict the assumption $D^2>0$. So, we conclude that $(D-E)E'=DE'-EE'=DE'-1\ge0$. To complete the proof of (\ref{e7}), we establish a slightly stronger statement than (\ref{e9}). \begin{lemma}\label{l5} For any real nodal del Pezzo pair $(Y,E)$ of degree $\ge2$ with ${\mathbb R} E\ne\emptyset$ dividing some connected component $F$ of ${\mathbb R} Y$, and any nef divisor class $D'\in{\operatorname{Pic}}^{\mathbb R}(Y)$ such that $D'E=2$, one has $$W_{Y,E,[{\mathbb R} Y\setminus F]}(D',0,2e_1,0)>0\ .$$ \end{lemma} {\bf Proof.} We apply induction on $-D'K_Y$. By \cite[Lemma 35(ii)]{Itenberg_Kharlamov_Shustin:2013}, $D'$ is nef on $X$. Since $D'\ne0$, it is effective on $X$, and is represented by a smooth curve (see, for instance, \cite[Theorems 3, 4, and Remark 3.1.4(B,C)]{Greuel_Lossen_Shustin:1998}, where the condition $p_a(D)\ge0$ trivially follows from \cite[Formula (3.1.2)]{Greuel_Lossen_Shustin:1998}), and hence $-D'K_Y=-D'K_X>0$. Furthermore, $-D'K_Y\ne1$. Indeed, otherwise, by the genus formula, $(D')^2\equiv-D'K_Y=1\mod2$, that is, $(D')^2\ge1$, and thus $p_a(D')\ge1$. However, $-D'K_X=1$ and $\dim|-K_X|\ge1$ would imply that a general curve $C\in|D'|_X$ is rational, which is a contradiction. Hence $-D'K_X\ge2$. Suppose that $-D'K_X=-D'K_Y=2$. This yields $-D'(K_Y+E)=0$, which ({\it cf.} \cite[Lemma 35(iii)]{Itenberg_Kharlamov_Shustin:2013}) leaves only the case $K_Y^2=2$ and $D'=-K_Y-E$, represented by a smooth rational curve, which finally yields $W_{Y,E,[{\mathbb R} Y\setminus F]}(D',0,2e_1,0)=1$. Suppose that $-D'K_Y>2$. By the genus formula, $(D')^2>0$. Then $D'E'>0$ for any $(-1)$-curve $E'$ intersecting $E$ ({\it cf.} the argument in the proof of the nefness of $D-E$ above). If $D'$ is disjoint from a real $(-1)$-curve $E'$ such that $E'E=0$, we blow down $E'$. If $D'$ is disjoint from a nonreal $(-1)$-curve $E'$ such that $E'E=0$, then $E'\overline E'=0$ (since otherwise $D'$ would be disjoint from curves in the one-dimensional linear system $|E'+\overline E'|$ contrary to $(D')^2>0$), and then we blow down both $E'$ and $\overline E'$. After finitely many such steps we arrive at a real nodal del Pezzo pair $(Y',E)$ of degree $\ge 2$ and a nef and big divisor class $D'\in{\operatorname{Pic}}^{\mathbb R}(Y')$ such that $D'E=2$, $-D'K_{Y'}=-D'K_Y$, and $D'E'>0$ for any $(-1)$-curve in $Y'$. It follows that $(D'+K_{Y'})E=2$, and that $D'+K_{Y'}$ nonnegatively intersects any $(-1)$-curve on $Y'$. Hence $D'+K_{Y'}$ is nef on $Y'$. Since $-(D'+K_{Y'})K_{Y'}<-D'K_{Y'}=-D'K_Y$, we have $W_{Y',E,[{\mathbb R} Y'\setminus F']}(D'+K_{Y'},0,2e_1,0)>0$, where $F'\subset{\mathbb R} Y'$ is the image of $F$. Then, by \cite[Formula (6)]{Itenberg_Kharlamov_Shustin:2013} and by Lemma \ref{l4}, $$W_{Y,E,[{\mathbb R} Y\setminus F]}(D',0,2e_1,0)=W_{Y',E,[{\mathbb R} Y'\setminus F']}(D',0,2e_1,0)$$ $$\ge W_{Y',E,[{\mathbb R} Y'\setminus F']}(D'+K_{Y'},0,2e_1,0)\cdot W_{Y',E,[{\mathbb R} Y'\setminus F']}(-K_{Y'}-E,0,2e_1,0)>0\ ,$$ where $W_{Y',E,[{\mathbb R} Y'\setminus F']}(-K_{Y'}-E,0,2e_1,0)=1$, because $p_a(-K_{Y'}-E)=0$, and hence a general curve in $|-K_{Y'}-E|_{Y'}$ is smooth rational.
$\blacksquare$ \subsubsection{Proof of the asymptotic relation (\ref{e10})} It is enough to show that \begin{equation}\log W_1(X,kD,(F_0,F_1),(1,1),0)\ge(-DK_X)k\log k+O(k)\ , \label{e11}\end{equation} since by Lemmas \ref{l2} and \ref{l3}, and by \cite[Theorem 1]{Itenberg_Kharlamov_Shustin:2005}, $$\log W_1(X,kD,(F_0,F_1),(1,1),0)=\log W_{Y,E,[{\mathbb R} Y\setminus F]}(kD-E,0,2e_1,0)$$ $$\le \log{\operatorname{GW}}_0(X,kD)=(-DK_X)k\log k+O(k)\ .$$ Using Lemmas \ref{l4} and \ref{l5}, and \cite[Formula (6)]{Itenberg_Kharlamov_Shustin:2013}, we derive for any $k\ge2$ $$W_{*}(kD-E,0,2e_1,0)\ge\frac{1}{2}\sum_{i=1}^{k-1}\frac{(-kDK_Y-2)!}{(-iDK_Y-1)!\,(-(k-i)DK_Y-1)!}\cdot4\cdot W_{*}(iD-E,0,2e_1,0)\cdot W_{*}((k-i)D-E,0,2e_1,0)\ ,$$ where the asterisk stands for the subindex $(Y,E,[{\mathbb R} Y\setminus F])$. This inequality yields that the positive sequence $$a_n=\frac{W_{*}(nD-E,0,2e_1,0)}{(-nDK_Y)!},\quad n\ge 1\ ,$$ satisfies the relation $a_n\ge\lambda\sum_{i=1}^{n-1}a_i$ with some absolute constant $\lambda>0$. By \cite[Lemma 38]{Itenberg_Kharlamov_Shustin:2013}, $a_n\ge\xi_1\xi_2^n$, $n\ge1$, with some positive $\xi_1,\xi_2$. Hence $\log W_{*}(kD-E,0,2e_1,0)\ge\log\big((-kDK_Y)!\big)+k\log\xi_2+O(1)=(-DK_X)k\log k+O(k)$ by Stirling's formula, which leads to (\ref{e11}). \subsubsection{Proof of statement (iii)}\label{sec-iii} Let $X$ be a real del Pezzo surface of degree $2$ with ${\mathbb R} X\simeq 2S^2$ and $D\in{\operatorname{Pic}}^{\mathbb R}(X)$ a nef and big divisor class. So, $F_0\simeq F_1\simeq S^2$, and we let $r_0=-DK_X-1$, $r_1=1$. Since all such surfaces are equivariantly deformation equivalent and in view of Theorem \ref{t1}(2), we can suppose that $X$ is a fiber ${\mathfrak X}'_\tau$, $\tau>0$, of a flat conjugation-invariant family ${\mathfrak X}'\to({\mathbb C},0)$ of surfaces, along which the component $F_1$ collapses to an isolated real nodal point so that in a neighborhood of the node the family is representable as $x_1^2+x_2^2+x_3^2=\tau$. Following \cite[Section 4.2]{Itenberg_Kharlamov_Shustin:2013}, we perform the base change $\tau=t^2$ and blow up the node, finally obtaining a conjugation-invariant family (called a {\it $3$-unscrew} ${\mathfrak X}\to({\mathbb C},0)$ in \cite[Section 4.2]{Itenberg_Kharlamov_Shustin:2013}) with the central fiber ${\mathfrak X}_0=Y\cup Z$, where $E=Y\cap Z$ is a smooth real rational curve with ${\mathbb R} E=\emptyset$, $(Y,E)$ being a real nodal del Pezzo pair with ${\mathbb R} Y\simeq S^2$, and $Z$ is a quadric surface in which $E$ represents the divisor class $-K_Z/2$ and which has the real part ${\mathbb R} Z\simeq S^2$. Pick a generic point $w_0\in{\mathbb R} Z$ and a generic configuration ${\boldsymbol w}'\subset{\mathbb R} Y$ of $-DK_Y-1=-DK_X-1$ distinct points in ${\mathbb R} Y$, and extend $\{w_0\}\cup{\boldsymbol w}'$ to smooth equivariant sections $t\mapsto{\boldsymbol w}_t$ of the family ${\mathfrak X}\to({\mathbb C},0)$. We can suppose that the curves of the sets ${\mathcal C}^{\mathbb R}_1({\mathfrak X}_t,D,{\boldsymbol w}_t)$, $t>0$, form disjoint equisingular families. Their limits at $t=0$ are as follows.
\begin{lemma}\label{l6} The limit at $t=0$ of any family $C_t\in{\mathcal C}^{\mathbb R}_1({\mathfrak X}_t,D,{\boldsymbol w}_t)$, $t>0$, is a curve $C_0=C\cup(C'\cup C'')$, where \begin{enumerate}\item[(i)] $C\subset Y$ is a real rational curve in the linear system $|D-mE|_Y$ for some $m\ge1$, which passes through ${\boldsymbol w}'$ and transversally intersects $E$ in $m$ distinct pairs of complex conjugate points, \item[(ii)] the curve $C'\subset Z$ is smooth rational, representing the divisor class $-K_Z/2$, passing through $w_0$, and intersecting $E$ at some pair of complex conjugate points of $C\cap E$; the curve $C''$ consists of $(m-1)$ pairs of complex conjugate lines that generate the two rulings of $Z$ and pass through $(C\cap E)\setminus(C'\cap E)$.\end{enumerate} Furthermore, any curve $C\cup(C'\cup C'')$ as above is a limit of a unique family $C_t\in{\mathcal C}^{\mathbb R}_1({\mathfrak X}_t,D,{\boldsymbol w}_t)$, $t>0$. \end{lemma} {\bf Proof.} The part $C_0\cap Z$ is a nonempty real curve passing through $w_0$. It then belongs to the linear system $|mE|_Z$ for some $m\ge1$, and hence $C=C_0\cap Y$ belongs to $|D-mE|_Y$. Since $C\supset{\boldsymbol w}'$, the dimension count in \cite[Proposition 2.1]{Shoval_Shustin:2013} and the genus bound yield that either $C$ is irreducible of genus $0$ or $1$, or $C$ consists of two components, one rational and one elliptic. In both cases, the components of $C$ are real and intersect $E$ in pairs of complex conjugate points. Note that $C$ has no elliptic component. Indeed, otherwise, the curve $C_0\cap Z$ would consist of lines from the rulings of $Z$ and would not hit a generic point $w_0\in{\mathbb R} Z$, since the family of real elliptic curves in $|D-mE|_Y$ passing through ${\boldsymbol w}'$ has real dimension one (see \cite[Proposition 2.1]{Shoval_Shustin:2013}). Hence $C$ is real, irreducible, rational, and intersects $E$ in $m$ distinct pairs of complex conjugate points. The asserted structure of $C_0\cap Z$ follows immediately. The existence and uniqueness of a family $C_t\in{\mathcal C}^{\mathbb R}_1({\mathfrak X}_t,D,{\boldsymbol w}_t)$, $t>0$, with a prescribed limit $C\cup(C'\cup C'')$ satisfying conditions (i), (ii), follow, for instance, from \cite[Theorem 2.8]{Shustin_Tyomkin:2006}. $\blacksquare$ Observe that the curves $C_t$ coming from a limit curve $C_0=C\cup(C'\cup C'')$ with $C\in|D-mE|_Y$ have precisely $m-1$ solitary nodes in the component $(F_1)_t \subset{\mathbb R}{\mathfrak X}_t$ and no other real nodes. Hence, $$W_1(X,D,(F_0,F_1),(1,1),0)=\sum_{m\ge1}(-1)^{m-1}2^{m-1}mW(Y,D-mE,{\boldsymbol w}')\ ,$$ $$W_{1,(-DK_X-1,1)}(X,D,(F_0,F_1),(1,-1),0)=\sum_{m\ge1}2^{m-1}mW(Y,D-mE,{\boldsymbol w}')\ ,$$ where $W(Y,D-mE,{\boldsymbol w}')=\sum_C(-1)^{s(C)}$ with $C$ running over all real rational curves in the linear system $|D-mE|_Y$ passing through ${\boldsymbol w}'$, and $s(C)$ is the total number of solitary nodes of $C$.
Thus, we obtain $$W_1(X,D,(F_0,F_1),(1,1),0)+W_{1,(-DK_X-1,1)}(X,D,(F_0,F_1),(1,-1),0)$$ $$=\sum_{m\ge1}2^{2m-1}(2m-1)W(Y,D-(2m-1)E,{\boldsymbol w}')\ .$$ On the other hand, it follows from \cite[Theorem 6(2) and Proposition 35]{Itenberg_Kharlamov_Shustin:2013} that $$W(X,D',F_0,[F_1])=2\sum_{m\ge1}2^{2m-1}W(Y,D'-(2m-1)E,{\boldsymbol w}')$$ for any divisor class $D'\in{\operatorname{Pic}}^{\mathbb R}(X)$, where $$W(X,D',F_0,[F_1])=\sum_{C\in{\mathcal C}^{\mathbb R}_0(X,D',{\boldsymbol w}')}(-1)^{s(C;F_0)}$$ is the (rational) Welschinger invariant (in the notation of \cite{Itenberg_Kharlamov_Shustin:2013}). So, $$W_1(X,D,(F_0,F_1),(1,1),0)+W_{1,(-DK_X-1,1)}(X,D,(F_0,F_1),(1,-1),0)$$ \begin{equation}=\frac{1}{2}W(X,D,F_0,[F_1])+\sum_{m\ge1}W(X,D-2mE,F_0,[F_1])\ ,\label{e14}\end{equation} and we immediately derive relations (\ref{e12}), (\ref{e13}) from the positivity and asymptotics of the Welschinger invariants $W(X,D',F_0,[F_1])$ established in \cite[Theorem 7]{Itenberg_Kharlamov_Shustin:2013}. \subsection{Proof of Proposition \ref{p3}} Our argument is completely parallel to that in the proof of statement (iii) of Proposition \ref{p2} in Section \ref{sec-iii}. First, we construct a conjugation-invariant family ${\mathfrak X}\to({\mathbb C},0)$ of surfaces along which the component $F_g$ (where $g=2$ or $3$) collapses, and $X$ degenerates into the union of a real nodal del Pezzo surface and a quadric surface, intersecting along a real rational curve $E$ with the empty real part. Then, similarly to (\ref{e14}) we derive $$W_{2,{\underline{r}}'}(X,D,{\underline{F}}',(1,1,\pm1))=\frac{1}{2}W_1(X,D,(F_0,F_1),(1,1),0)$$ \begin{equation}+ \sum_{m\ge1}W_1(X,D-2mE,(F_0,F_1),(1,1),0)\label{e15}\end{equation} and $$W_{3,{\underline{r}}''}(X,D,{\underline{F}}'',(1,1,\pm1,\pm1))=\frac{1}{2}W_{2,{\underline{r}}'} (X,D,{\underline{F}}',(1,1,\pm1))$$ \begin{equation}+ \sum_{m\ge1}W_{2,{\underline{r}}'}(X,D-2mE,{\underline{F}}',(1,1,\pm1),0)\ ,\label{e16}\end{equation} provided we establish the following analog of the vanishing statement in \cite[Proposition 35]{Itenberg_Kharlamov_Shustin:2013}: \begin{lemma}\label{l7} (1) Let $X,D,r_0,r_1$ be as in Proposition \ref{p3}(1). Then $$W_1(X,D,(F_0,F_1),(1,1),[F_2])=0\ .$$ (2) Let $X,D,r_0,r_1$ be as in Proposition \ref{p3}(2). Then $$W_2(X,D,(F_0,F_1,F_2),(1,1,\varepsilon_2),[F_3])=0,\quad\varepsilon_2=\pm1\ .$$ \end{lemma} Observe that formula (\ref{e15}) and Proposition \ref{p2}(i,ii) yield the first statement of Proposition \ref{p3}, and subsequently formula (\ref{e16}) yields the second statement of Proposition \ref{p3}. {\bf Proof of Lemma \ref{l7}.} We prove the first statement; the second one can be proved in the same way. One can check that the assumption $p_a(D)\ge2$ yields $-DK_X>2$; thus, we can assume that $r_1>1$. As in Section \ref{sec-i}, we consider an elliptic ABV family ${\mathfrak X}\to({\mathbb C},0)$ such that the components $F_1,F_2$ of $X={\mathfrak X}_t$ ($t>0$) degenerate into $F\cup {\mathbb R} Z$, where $F\simeq {\mathbb R} Z\simeq S^2$, $F\setminus{\mathbb R} E=F_+\cup F_-$, ${\mathbb R} Z\setminus{\mathbb R} E={\mathbb R} Z_+\cup{\mathbb R} Z_-$, and we suppose that the limit of $F_1$ (respectively, $F_2$) is $F_+\cup{\mathbb R} Z_+$ (respectively, $F_-\cup{\mathbb R} Z_-$).
Then (for an appropriate $D_0\in{\operatorname{Pic}}^{\mathbb R}(Y)$, $D_0\ge D$) we choose a $D_0$-CH-configuration ${\boldsymbol w}_0$ of $-DK_X$ real points on $Y$: $r_0$ points on the component of ${\mathbb R} Y$ that is the limit of $F_0$ (which we again denote by $F_0$) and $r_1$ points in $F_+$. Similarly to Lemma \ref{l2}, we have $$W_1(X,D,(F_0,F_1),(1,1),[F_2])=W_1(Y,D,(F_0,F_+),(1,1),{\boldsymbol w}_0)\ ,$$ where $$W_{1,(r_0,r_1)}(Y,D,(F_0,F_+),(1,1),{\boldsymbol w}_0)=\sum_{C\in{\mathcal C}_1^{\mathbb R}(Y,D,{\boldsymbol w}_0)}(-1)^{s(C;F_0\cup F)}\ .$$ As in the proof of Lemma \ref{l3}, we specialize a suitable point $w\in {\boldsymbol w}_0\cap F_+$ to ${\mathbb R} E$, and then each curve $C\in {\mathcal C}_1^{\mathbb R}(Y,D,{\boldsymbol w}_0)$ will degenerate into the union $C'\cup E$, where $C'\in|D-E|$ is a real immersed elliptic curve, passing through ${\boldsymbol w}'={\boldsymbol w}_0\setminus\{w\}$, intersecting $E$ at one point, and having there a smooth branch quadratically tangent to $E$ (the other option (b) mentioned in the proof of Lemma \ref{l3} is not possible, since $C\cap F_-$ is finite). By \cite[Proposition 2.8(2)]{Shoval_Shustin:2013}, each curve $C'\cup E$ gives rise to two curves in ${\mathcal C}^{\mathbb R}_1(Y,D,{\boldsymbol w}_0)$, which are distinguished by (two) deformation patterns given in \cite[Lemma 2.10(2)]{Shoval_Shustin:2013}, and which have opposite Welschinger signs (see \cite[Proposition 6.1(i)]{Shustin:2006}), and therefore do not contribute to $W_{1,(r_0,r_1)}(Y,D,(F_0,F_+),(1,1),{\boldsymbol w}_0)$. $\blacksquare$ \section*{Funding} The research has been supported by the German-Israeli Foundation, grant no. 1174-197.6/2011, and by the Hermann-Minkowski-Minerva Center for Geometry at Tel Aviv University. {\small \section{Appendix A: Degeneration and deformation of curves on rational surfaces}\label{sec-aa} \subsection{Moduli spaces of curves} Let $\Sigma$ be a smooth projective rational surface and $D\in{\operatorname{Pic}}(\Sigma)$. Denote by $\overline{\mathcal M}_{g,n}(\Sigma,D)$, $g\ge0$, the space of isomorphism classes of pairs $(\nu:\hat C\to\Sigma,{\boldsymbol p})$, where $\hat C$ is either a Riemann surface of genus $g$ or a connected reducible nodal curve of arithmetic genus $g$, $\nu_*\hat C\in|D|$, ${\boldsymbol p}=(p_1,...,p_n)$ is a sequence of distinct smooth points of $\hat C$, and each component $C'$ of $\hat C$ of genus $g'$, which is contracted by $\nu$, contains at least $3-2g'$ special points. This moduli space is a projective scheme (see \cite{Fulton_Pandharipande:1997}), and there are natural morphisms $$\Phi_{\Sigma,D}:\overline{\mathcal M}_{g,n}(\Sigma,D)\to|D|,\quad [\nu:\hat C\to\Sigma,{\boldsymbol p}]\mapsto\nu_*\hat C\ ,$$ $${\operatorname{Ev}}:\overline{\mathcal M}_{g,n}(\Sigma,D)\to\Sigma^n,\quad [\nu:\hat C\to\Sigma,{\boldsymbol p}]\mapsto\nu({\boldsymbol p})\ .$$ For any subscheme ${\mathcal V}\subset\overline{\mathcal M}_{g,n}(\Sigma, D)$, define the {\it intersection dimension} ${\operatorname{idim}}\mathcal V$ of $\mathcal V$ as follows: $${\operatorname{idim}}{\mathcal V}=\dim(\Phi_{\Sigma,D}\times{\operatorname{Ev}})({\mathcal V})\ ,$$ where the latter value is the maximum of the dimensions of all irreducible components.
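Let us also record an elementary consequence of this definition, which is used tacitly when passing between $\dim$ and ${\operatorname{idim}}$ bounds in the lemmas below: since $(\Phi_{\Sigma,D}\times{\operatorname{Ev}})({\mathcal V})$ is the image of ${\mathcal V}$ under a morphism, $${\operatorname{idim}}{\mathcal V}\le\dim{\mathcal V}$$ for every subscheme ${\mathcal V}\subset\overline{\mathcal M}_{g,n}(\Sigma,D)$, while Lemma \ref{ln15} below gives the equality $\dim{\mathcal V}={\operatorname{idim}}{\mathcal V}$ for germs of irreducible subschemes of ${\mathcal M}^{br}_{g,n}(\Sigma,D)$.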
Put \begin{eqnarray} {\mathcal M}_{g,n}^{br}(\Sigma,D)&=&\{[\nu:\hat C\to\Sigma,{\boldsymbol p}]\ \in {\mathcal M}_{g,n}(\Sigma,D)\ |\ \hat C\ \text{is smooth, and}\nonumber\\ & &\qquad\qquad\qquad\qquad\qquad \nu\ \text{is birational onto}\ \nu(\hat C)\},\nonumber\\ {\mathcal M}_{g,n}^{im}(\Sigma,D)&=&\{[\nu:\hat C\to\Sigma,{\boldsymbol p}] \in{\mathcal M}_{g,n}^{br}(\Sigma,D)\ |\ \nu\ \text{is an immersion}\}\ .\nonumber \end{eqnarray} Denote by $\overline{{\mathcal M}^{br}_{g,n}}(\Sigma,D)$ the closure of ${\mathcal M}^{br}_{g,n}(\Sigma,D)$ in $\overline{\mathcal M}_{g,n}(\Sigma,D)$, and introduce also the space $${\mathcal M}'_{g,n}(\Sigma,D)=\{[\nu:\hat C\to\Sigma,{\boldsymbol p}] \in \overline{{\mathcal M}^{br}_{g,n}}(\Sigma,D)\ |\ \hat C\ \text{is smooth}\}\ .$$ \begin{lemma}\label{ln15} For any element $[\nu]=[\nu:\hat C\to\Sigma,{\boldsymbol p}]\in{\mathcal M}_{g,n}^{br}(\Sigma,D)$, the map $\Phi_{\Sigma,D}\times{\operatorname{Ev}}$ is injective in a neighborhood of $[\nu]$, and, for the germ at $[\nu]$ of any irreducible subscheme ${\mathcal V}\subset {\mathcal M}_{g,n}^{br}(\Sigma,D)$, we have $$\dim {\mathcal V}={\operatorname{idim}} {\mathcal V}\ .$$ \end{lemma} \subsection{Curves on del Pezzo and uninodal del Pezzo surfaces}\label{sec-dpudp} Let $\Sigma$ be the plane ${\mathbb P}^2$ blown up at eight distinct points. Denote by ${\mathcal D}$ the Kodaira-Spencer-Kuranishi space of all complex structures on the smooth four-fold $\Sigma$, factorized by the action of diffeomorphisms homotopic to the identity. It contains an open dense subset ${\mathcal D}^{\operatorname{DP}}$ consisting of del Pezzo surfaces (of degree $1$), that is, surfaces with an ample effective anticanonical class. We call a surface $Y\in{\mathcal D}$ {\it uninodal del Pezzo} if it contains a smooth rational $(-2)$-curve $E_Y$, and $-K_Y C>0$ for each irreducible curve $C\ne E_Y$ (in particular, $C^2\ge-1$). Denote by ${\mathcal D}^{\operatorname{DP}}(A_1)\subset{\mathcal D}$ the subspace formed by uninodal del Pezzo surfaces. Observe that ${\mathcal D}^{\operatorname{DP}}(A_1)$ has codimension $1$ in ${\mathcal D}$, and ${\mathcal D}\setminus({\mathcal D}^{\operatorname{DP}}\cup{\mathcal D}^{\operatorname{DP}}(A_1))$ is of codimension $\ge2$ in ${\mathcal D}$. Throughout this section we use the notation $$n=-DK_\Sigma+g-1\ .$$ \begin{lemma}\label{ln11} If $\Sigma$ is a smooth rational surface and $-DK_{\Sigma}>0$, then the space ${\mathcal M}^{im}_{g,0}(\Sigma,D)$ is either empty, or is a smooth variety of dimension $n$. \end{lemma} {\bf Proof.} Let $[\nu:\hat C\to\Sigma]\in{\mathcal M}_{g,0}^{im}(\Sigma,D)$. The Zariski tangent space to ${\mathcal M}_{g,0}^{im}(\Sigma,D)$ at $[\nu]$ can be identified with $H^0(\hat C,{\mathcal N}^{\nu}_{\hat C})$. Since \begin{equation}\deg{\mathcal N}^{\nu}_{\hat C}=-DK_\Sigma+2g-2>2g-2\ ,\label{en7}\end{equation} we have \begin{equation}h^1(\hat C,{\mathcal N}^{\nu}_{\hat C})=0\ ,\label{e20}\end{equation} and hence ${\mathcal M}_{g,0}^{im}(\Sigma,D)$ is smooth at $[\nu]$ and is of dimension \begin{equation}h^0(\hat C,{\mathcal N}^{\nu}_{\hat C})=\deg{\mathcal N}^{\nu}_{\hat C}-g+1=-DK_\Sigma+g-1=n\ .\label{en8}\end{equation} $\blacksquare$ \begin{lemma}\label{ln12} (1) Let $\Sigma\in{\mathcal D}^{{\operatorname{DP}}}$ and $-DK_{\Sigma}>0$. Then, the following holds: \begin{enumerate}\item[(i)] The space ${\mathcal M}_{g,0}^{br}(\Sigma,D)$ is either empty or satisfies $\dim{\mathcal M}_{g,0}^{br}(\Sigma,D)\le n$.
\item[(ii)] If either $g > 0$ or $D\ne-K_\Sigma$, then ${\mathcal M}_{g,0}^{im}(\Sigma,D)\subset{\mathcal M}_{g,0}^{br,n}(\Sigma,D)$ is an open dense subset, where ${\mathcal M}_{g,0}^{br,n}(\Sigma,D)$ denotes the union of the components of ${\mathcal M}_{g,0}^{br}(\Sigma,D)$ of dimension $n$. \item[(iii)] There exists an open dense subset $U^{\operatorname{DP}} \subset{\mathcal D}^{{\operatorname{DP}}}$ such that, if $\Sigma \in U^{\operatorname{DP}}$, then ${\mathcal M}_{0,0}(\Sigma,-K_\Sigma)$ consists of twelve elements, each corresponding to a rational nodal curve.\end{enumerate} (2) There exists an open dense subset $U^{\operatorname{DP}}(A_1)\subset{\mathcal D}^{\operatorname{DP}}(A_1)$ such that if $\Sigma\in U^{\operatorname{DP}}(A_1)$ and $-DK_{\Sigma}>0$, then \begin{enumerate} \item[(i)] ${\operatorname{idim}}{\mathcal M}'_{g,0}(\Sigma,D)\le n$; \item[(ii)] a generic element $[\nu:\hat C\to\Sigma]$ of any irreducible component ${\mathcal V}$ of ${\mathcal M}'_{g,0}(\Sigma,D)$, such that ${\operatorname{idim}}{\mathcal V}=n$, is an immersion, and the divisor $\nu^*(E_\Sigma)$ consists of $DE_\Sigma$ distinct points. \end{enumerate} \end{lemma} {\bf Proof.} Let $\Sigma\in{\mathcal D}^{{\operatorname{DP}}}\cup{\mathcal D}^{\operatorname{DP}}(A_1)$. All the statements for the case of an effective $-K_\Sigma-D$ immediately follow from elementary properties of plane lines, conics, and cubics. In particular, a general element of ${\mathcal D}^{\operatorname{DP}}\setminus U^{\operatorname{DP}}$ is the plane blown up at eight generic points on a cuspidal cubic. So, in the sequel we suppose that $-K_\Sigma-D$ is not effective. In view of Lemma \ref{ln11}, to complete the proof of statements (1) and (2i) it is enough to show that $$\dim ({\mathcal M}_{g,0}^{br,n}(\Sigma,D)\setminus{\mathcal M}_{g,0}^{im}(\Sigma,D))<n\quad\text{and}\quad\dim({\mathcal M}'_{g,0} (\Sigma,D)\setminus{\mathcal M}^{br,n}_{g,0}(\Sigma,D))<n\ .$$ Note, first, that, in the case $n=0$, we have $g=0$ and $-DK_\Sigma=1$, and the curves $C\in\Phi_{\Sigma,D}({\mathcal M}_{g,0}^{br,0}(\Sigma,D))$ are nonsingular due to the bound \begin{equation}-DK_\Sigma\ge (C\cdot C')(z)\ge s\ ,\label{en6}\end{equation} coming from the intersection of $C$ with a curve $C'\in|-K_\Sigma|$ passing through a point $z\in C$, where $C$ has multiplicity $s$. Thus, further on we suppose that $n>0$. Let ${\mathcal V}_2$ be an irreducible component of ${\mathcal M}_{g,0}^{br,n}(\Sigma,D)\setminus {\mathcal M}_{g,0}^{im}(\Sigma,D)$, $[\nu:\hat C\to\Sigma]\in{\mathcal V}_2$ a generic element, and let $\nu$ have $s\ge1$ critical points of multiplicities $m_1\ge...\ge m_s\ge2$. In particular, bound (\ref{en6}) gives \begin{equation}-DK_\Sigma\ge m_1\ .\label{en17}\end{equation} Then ({\it cf.} \cite[First formula in the proof of Corollary 2.4]{Caporaso_Harris:1998}), $$\dim{\mathcal V}_2\le h^0(\hat C,{\mathcal N}^{\nu}_{\hat C}/{\operatorname{Tors}}({\mathcal N}^{\nu}_{\hat C}))\ ,$$ where the normal sheaf ${\mathcal N}^{\nu}_{\hat C}$ on $\hat C$ is defined as the cokernel of the map $d\nu:{\mathcal T}\hat C\to\nu^*{\mathcal T}\Sigma$, and ${\operatorname{Tors}}(*)$ is the torsion sheaf.
It follows from \cite[Lemma 2.6]{Caporaso_Harris:1998} ({\it cf.} also the computation in \cite[Page 363]{Caporaso_Harris:1998}) that $\deg{\operatorname{Tors}}({\mathcal N}^{\nu}_{\hat C})=\sum_i(m_i-1)$, and hence \begin{equation}\deg{\mathcal N}^{\nu}_{\hat C}/{\operatorname{Tors}}({\mathcal N}^{\nu}_{\hat C})=-DK_\Sigma+2g-2-\sum_{i=1}^s(m_i-1)\ ,\label{en24}\end{equation} which yields \begin{eqnarray}\dim{\mathcal V}_2 &\le& h^0(\hat C,{\mathcal N}^{\nu}_{\hat C}/{\operatorname{Tors}}({\mathcal N}^{\nu}_{\hat C}))\nonumber\\ &=&\max\{\deg{\mathcal N}^{\nu}_{\hat C}/{\operatorname{Tors}}({\mathcal N}^{\nu}_{\hat C})-g+1,\ g\}\overset{\text{(\ref{en17})}}{\le}n-(m_1-1)<n\ .\label{en5}\end{eqnarray} Let ${\mathcal V}$ be an irreducible component of ${\mathcal M}'_{g,0}(\Sigma,D)\setminus{\mathcal M}_{g,0}^{br,n}(\Sigma,D)$. Then a generic element $[\nu:\hat C\to\Sigma]\in{\mathcal V}$ satisfies $\nu_*\hat C=sC$ for some $s\ge2$ and some reduced, irreducible curve $C\subset\Sigma$. It follows from the Riemann-Hurwitz formula that $g(C)-1\le\frac{1}{s}(g-1)$, and hence $${\operatorname{idim}}{\mathcal V}\le-CK_\Sigma+g(C)-1\le\frac{1}{s}(-DK_\Sigma+g-1)<-DK_\Sigma+g-1=n\ .$$ To complete the proof of (2ii), let us assume that $\dim{\mathcal V}=n$ and the divisor $\nu^*(E_\Sigma)$ contains a multiple point $sz$, $s\ge2$. In view of $DE_\Sigma \ge s$ and $(-K_\Sigma - E_\Sigma)D\ge0$ (recall that $D$ is irreducible and $-K_\Sigma-D$ is not effective), we have $-DK_\Sigma\ge s$. Furthermore, $T_{[\nu]}{\mathcal V}$ can be identified with a subspace of $H^0(\hat C,{\mathcal N}_{\hat C}^{\nu}(-(s-1)z))$ ({\it cf.} \cite[Remark on page 364]{Caporaso_Harris:1998}). Since $$\deg {\mathcal N}_{\hat C}^{\nu}(-(s-1)z)=-DK_\Sigma+2g-1-s \overset{-DK_\Sigma\ge s}{>}2g-2\ ,$$ we have $$H^1(\hat C,{\mathcal N}_{\hat C}^{\nu}(-(s-1)z))=0\ ,$$ and hence $$ \dim{\mathcal V}\le h^0(\hat C,{\mathcal N}_{\hat C}^{\nu}(-(s-1)z))=n-(s-1)<n$$ contrary to the assumption $\dim{\mathcal V}=n$. $\blacksquare$ \begin{lemma}\label{ln17} There exists an open dense subset $V^{{\operatorname{DP}}}\subset{\mathcal D}^{{\operatorname{DP}}}$ such that, for each $\Sigma\in V^{{\operatorname{DP}}}$, the set of effective divisor classes $D\in{\operatorname{Pic}}(\Sigma)$ satisfying $-DK_\Sigma=1$ is finite, the set of rational curves in the corresponding linear systems $|D|$ is finite, and any two such rational curves $C_1,C_2$ either coincide, or are disjoint, or intersect in $C_1C_2$ distinct points. \end{lemma} {\bf Proof}. For the proof see \cite[Lemma 10]{Itenberg_Kharlamov_Shustin:2012}.
$\blacksquare$ \begin{lemma}\label{ln16} For each surface $\Sigma\in U^{\operatorname{DP}}\cap V^{{\operatorname{DP}}}$, each divisor class $D \in {\operatorname{Pic}}(\Sigma)$ with $-DK_\Sigma>0$ and $D^2\ge -1$, and each irreducible component ${\mathcal V}$ of $\overline{{\mathcal M}^{br,n}_{g,0}}(\Sigma,D)\setminus{\mathcal M}_{g,0}^{br,n}(\Sigma,D)$ with ${\operatorname{idim}}{\mathcal V}=n-1$, a generic element $[\nu:\hat C\to\Sigma]\in{\mathcal V}$ is such that \begin{enumerate}\item[(i)] either $\hat C=\hat C_1\cup\hat C_2$ with $\hat C_1,\hat C_2$ smooth Riemann surfaces of genera $g_1,g_2$, respectively, such that $g=g_1+g_2$; furthermore, $|\hat C_1\cap\hat C_2|=1$, $[\nu|_{\hat C_i}:\hat C_i\to\Sigma]\in{\mathcal M}_{g_i,0}^{im}(\Sigma,D_i)$, where $C_1=\nu(\hat C_1)\ne C_2=\nu(\hat C_2)$, $D_1D_2>0$, and $-D_iK_\Sigma>0$, $D_i^2\ge -1$ for each $i=1,2$, and, in addition, at any point $z\in C_1\cap C_2$, any component of $(C_1,z)$ intersects any component of $(C_2,z)$ transversally; \item[(ii)] or $D=-2K_\Sigma$, $g=0$, $\hat C=\hat C_1\cup\hat C_2$, $|\hat C_1\cap\hat C_2|=1$, $\nu|_{\hat C_1}$ and $\nu|_{\hat C_2}$ are immersions of $\hat C_1\simeq\hat C_2\simeq{\mathbb P}^1$ onto the same uninodal curve $C\in|-K_\Sigma|$; \item[(iii)] or $D=-2K_\Sigma$, $\hat C$ is a smooth elliptic curve, $\nu:\hat C\to C=\nu(\hat C)$ is an unramified double covering. \end{enumerate} Furthermore, $\nu$ is always an immersion ({\it i.e.}, a local isomorphism onto the image), and the germ of $\overline{{\mathcal M}^{br,n}_{g,0}}(\Sigma,D)$ at $[\nu]$ is smooth. \end{lemma} {\bf Proof.} Let ${\mathcal V}$ be an irreducible component of $({\mathcal M}'_{g,0}(\Sigma,D)\cap\overline{{\mathcal M}^{br,n}_{g,0}} (\Sigma,D))\setminus{\mathcal M}^{br,n}_{g,0}(\Sigma,D)$ such that ${\operatorname{idim}}{\mathcal V}=n-1$ (${\operatorname{idim}}{\mathcal V}$ cannot be bigger by Lemma \ref{ln12}(i)). Then its generic element $[\nu:\hat C\to\Sigma]$ is such that $\nu_*\hat C=sC$ with a reduced, irreducible $C$, $s\ge2$. By the Riemann-Hurwitz formula, $g-1=s(g(C)-1)+\rho/2$, where $\rho$ is the total ramification index of the map $\nu^\vee:\hat C\to C^\vee$, $C^\vee$ being the normalization of $C$. By Lemma \ref{ln12}(i), $${\operatorname{idim}}{\mathcal V}=n-1=-sCK_\Sigma+g-2\le-CK_\Sigma+g(C)-1\ ,$$ which together with the above Riemann-Hurwitz formula yields $$(s-1)(-CK_\Sigma+g(C)-1)+\frac{\rho}{2}\le1\ .$$ It follows that \begin{itemize}\item either $s=2$, $g=g(C)=1$, $-CK_\Sigma=1$, $\rho=0$, and hence $C\in|-K_\Sigma|$, which corresponds to case (iii) of the statement; \item or $s=2$, $-CK_\Sigma=1$, and $g(C)=0$, which yields $g=0$ and, in view of the adjunction formula, $C^2=-1$ or $C^2\ge1$; neither case is possible: the former one is excluded by the assumption $D^2\ge-1$, whereas the latter one leaves only the option that $C\in|-K_\Sigma|$ is a uninodal curve; however, in such a case the map $\nu$ cannot be deformed into an element of ${\mathcal M}^{br}_{0,0}(\Sigma,-2K_\Sigma)$, since the deformed map would birationally send ${\mathbb P}^1$ onto a curve with $\delta$-invariant $\ge 4$ in a neighborhood of the node of $C$, which is bigger than the arithmetic genus, $((-2K_\Sigma)^2+(-2K_\Sigma)K_\Sigma)/2+1=2$. \end{itemize} Now let $[\nu:\hat C\to\Sigma]$ be a generic element of an irreducible component ${\mathcal V}$ of $\overline{{\mathcal M}^{br,n}_{g,0}}(\Sigma,D)\setminus{\mathcal M}'_{g,0}(\Sigma,D)$ with ${\operatorname{idim}}{\mathcal V}=n-1$.
Then $\hat C$ has $s\ge2$ components $\hat C_1,...,\hat C_s$ of genera $g_1,...,g_s$, respectively, and $l\ge s-1$ nodes. It follows that $g=g_1+...+g_s+l-s+1$, and, by Lemma \ref{ln12}(i), $${\operatorname{idim}}{\mathcal V}=n-1=-DK_\Sigma+g-2\le-DK_\Sigma+(g_1+...+g_s)-s\ ,$$ and hence $l=1$, $s=2$, $g=g_1+g_2$. By Lemma \ref{ln12}(ii), both $\nu|_{\hat C_1}$ and $\nu|_{\hat C_2}$ are immersions. Note that the case $\nu(\hat C_1)=\nu(\hat C_2)$ is possible only when $D_1=D_2$, $g_1=g_2$, and $-D_1K_\Sigma+g_1-1=0$. Since $D_1^2=D_2^2\ge1$ in view of the adjunction formula and the condition $D^2\ge-1$, we are left with the case $D_1=D_2=-K_\Sigma$, and $C=\nu(\hat C_1)=\nu(\hat C_2)\in|-K_\Sigma|$ a rational curve with a unique node $z$. The map $\nu$ takes the germ $(\hat C,\hat z)$ isomorphically onto the germ $(C,z)$, since, otherwise, we would get a deformed map $\nu$ with an image whose $\delta$-invariant is $\ge 4$, which is bigger than its arithmetic genus, $((-2K_\Sigma)^2+(-2K_\Sigma)K_\Sigma)/2+1=2$. Suppose now that $C_1=\nu(\hat C_1)\ne C_2=\nu(\hat C_2)$. Let us show that $C_1$ and $C_2$ intersect transversally as claimed in statement (i), which would imply that $\nu$ is an immersion. If $-D_1K_\Sigma+g_1-1=-D_2K_\Sigma+g_2-1=0$, then $g_1=g_2=0$ and $-D_1K_\Sigma=-D_2K_\Sigma=1$, which allows one to apply Lemma \ref{ln17}. If $-D_1K_\Sigma+g_1-1>0$, then we can vary $\nu|_{\hat C_1}$ in ${\mathcal M}_{g_1,0}^{im}(\Sigma,D_1)$ and achieve the required transversality as we did in the proof of Lemma \ref{ln12}(2ii). Finally, the proof of the smoothness of the germ of $\overline{{\mathcal M}^{br,n}_{g,0}}(\Sigma,D)$ at $[\nu]$ literally coincides with that in \cite[Lemma 11]{Itenberg_Kharlamov_Shustin:2012}. $\blacksquare$ \begin{lemma}\label{p2X} Let $\Sigma\in U^{\operatorname{DP}}$, $g\ge0$, and $D\in{\operatorname{Pic}}(\Sigma)$ be an effective divisor class such that $n=-DK_\Sigma+g-1\ge1$. Let ${\boldsymbol w}=(w_1,...,w_n)$ be a sequence of $n$ distinct points in $\Sigma$, let $\sigma_i$ be smooth curve germs in $\Sigma$ centered at $w_i$, $n'<i\le n$, for some $n'<n$, ${\boldsymbol w}'=(w_i)_{1\le i\le n'}$, and let \begin{eqnarray}\overline{{\mathcal M}_{g,n}^{br}}(\Sigma,D;{\boldsymbol w}',\{\sigma_i\}_{n'<i\le n})&=&\{[\nu:\hat C\to\Sigma,{\boldsymbol p}]\in\overline{{\mathcal M}_{g,n}^{br}}(\Sigma,D)\ : \nonumber\\ & & \quad \nu(p_i)=w_i \ \text{for}\, \ 1\le i\le n',\ \nu(p_i)\in \sigma_i, \,\text{for}\, \ n'<i\le n\}\ .\nonumber\end{eqnarray} (1) Suppose that $[\nu:\hat C\to\Sigma,{\boldsymbol p}]$ either belongs to $\overline{{\mathcal M}_{g,n}^{br}}(\Sigma,D;{\boldsymbol w})\cap{\mathcal M}_{g,n}^{im}(\Sigma,D)$, or is as in Lemma \ref{ln16}(iii). If \begin{equation}H^1(\hat C,{\mathcal N}^\nu_{\hat C}(-{\boldsymbol p}))=0\ , \label{eelra1}\end{equation} then ${\operatorname{Ev}}$ sends the germ of $\overline{{\mathcal M}_{g,n}^{br}}(\Sigma,D;{\boldsymbol w}',\{\sigma_i\}_{n'<i\le n})$ at $[\nu:\hat C\to\Sigma,{\boldsymbol p}]$ diffeomorphically onto $\prod_{n'<i\le n}\sigma_i$.
(2) Suppose that $[\nu:\hat C\to\Sigma,{\boldsymbol p}]\in\overline{{\mathcal M}_{g,n}^{br}}(\Sigma,D;{\boldsymbol w})$ is such that \begin{itemize}\item $[\nu:\hat C\to\Sigma]\in\overline{{\mathcal M}_{g,0}^{br}}(\Sigma,D)$ is as in Lemma \ref{ln16}(i), \item $n'\ge-D_1K_\Sigma+g_1-1$, $\#({\boldsymbol p}\cap \hat C_1)=-D_1K_\Sigma+g_1-1$, $\#({\boldsymbol p}\cap \hat C_2)=-D_2K_\Sigma+g_2$, the point sequences $(w_i)_{1\le i<-D_1K_\Sigma}$ and $(w_i)_{-D_1K_\Sigma\le i\le n}$ are generic on the curves $C_1=\nu_*\hat C_1$ and $C_2=\nu_*\hat C_2$, respectively, and the germs $\sigma_i$, $n'<i\le n$, cross $C_2$ transversally.\end{itemize} Then ${\operatorname{Ev}}$ sends the germ of $\overline{{\mathcal M}_{g,n}^{br}}(\Sigma,D;{\boldsymbol w}',\{\sigma_i\}_{n'<i\le n})$ at $[\nu:\hat C\to\Sigma,{\boldsymbol p}]$ diffeomorphically onto $\prod_{n'<i\le n}\sigma_i$. \end{lemma} {\bf Proof.} The first statement follows from the fact that ${\operatorname{Ev}}$ diffeomorphically sends the (smooth by Lemma \ref{ln11}) germ of $\overline{{\mathcal M}_{g,n}^{br}}(\Sigma,D)$ at $[\nu:\hat C\to\Sigma, {\boldsymbol p}]$ onto the germ of $\Sigma^n$ at ${\boldsymbol w}$. In view of $\dim\overline{{\mathcal M}_{g,n}^{br}}(\Sigma,D)=2n$ (see Lemma \ref{ln12}(i)), it is sufficient to show that the Zariski tangent space to ${\operatorname{Ev}}^{-1}({\boldsymbol w})$ is zero-dimensional, which is equivalent to \begin{equation}h^0(\hat C,{\mathcal N}_{\hat C}^\nu(-{\boldsymbol p}))=0\ , \label{eK5}\end{equation} which in turn immediately follows from (\ref{en8}) and (\ref{eelra1}). In the second case, by Lemma \ref{ln16}, the germ of $\overline{{\mathcal M}_{g,n}^{br}}(\Sigma,D)$ at $[\nu:\hat C\to\Sigma,{\boldsymbol p}]$ is smooth. The general position of the points ${\boldsymbol w}$ on the curve $C_1\cup C_2$ yields (\ref{eelra1}), which similarly to the preceding paragraph suffices for the required diffeomorphism ({\it cf.} proof of \cite[Lemma 12(2)]{Itenberg_Kharlamov_Shustin:2012}). $\blacksquare$ Consider a proper submersion $\widetilde\Sigma\to({\mathbb C},0)$ of a smooth three-fold $\widetilde\Sigma$ such that $\Sigma=\widetilde\Sigma_0\in U^{\operatorname{DP}}(A_1)$ and $\widetilde\Sigma_t\in U^{\operatorname{DP}}$ for all $t\ne 0$. Choose a divisor class $D\in{\operatorname{Pic}}(\Sigma)$ such that $-DK_\Sigma>0$ and a nonnegative integer $g$. Let ${\boldsymbol w}_t\in\widetilde\Sigma_t^n$, $t\in({\mathbb C},0)$, be a smooth family of configurations of distinct points such that ${\boldsymbol w}={\boldsymbol w}_0$ is disjoint from the $(-2)$-curve $E_\Sigma\subset\Sigma$.
\begin{lemma}\label{lem-abv} There exists an open dense subset ${\mathcal U}_n\subset\Sigma^n$ such that, if ${\boldsymbol w}\in {\mathcal U}_n$, then \begin{enumerate}\item[(i)] for any $m\ge0$ and any element $[\nu:\hat C\to\Sigma,{\boldsymbol p}]\in {\mathcal M}'_{g,n}(\Sigma,D-mE_\Sigma)$ with $\nu({\boldsymbol p})={\boldsymbol w}$, the map $\nu$ is an immersion, and the divisor $\nu^*(E_\Sigma)\subset\hat C$ consists of $DE_\Sigma+2m$ distinct points; \item[(ii)] each element $[\nu:\hat C\cup(\hat E_1\cup...\cup\hat E_m)\to\Sigma,{\boldsymbol p}] \in\overline{\mathcal M}_{g,n}(\Sigma,D)$ such that \begin{itemize}\item $[\nu:\hat C\to\Sigma,{\boldsymbol p}]\in{\mathcal M}'_{g,n}(\Sigma,D-mE_\Sigma)$, $\nu({\boldsymbol p}) ={\boldsymbol w}$, \item $\hat E_1\simeq...\simeq\hat E_m\simeq{\mathbb P}^1$ and $\nu$ takes each of $\hat E_1,...,\hat E_m$ isomorphically onto $E_\Sigma$, \item $\hat E_i\cap\hat E_j=\emptyset$ as $i\ne j$, and $|\hat C\cap\hat E_i|=1$ for all $i=1,...,m$, \end{itemize} admits an extension to a smooth family $[\nu_t:\hat C_t\to\widetilde\Sigma_t,{\boldsymbol p}_t] \in\overline{\mathcal M}_{g,n}(\widetilde\Sigma_t,D)$, $t\in({\mathbb C},0)$, where $\nu_t({\boldsymbol p}_t)={\boldsymbol w}_t$ and $[\nu_t:\hat C_t\to\widetilde\Sigma_t,{\boldsymbol p}_t] \in{\mathcal M}^{im}_{g,n}(\widetilde\Sigma_t,D)$, \item[(iii)] the set of families introduced in item (ii) is in one-to-one correspondence with each of the sets ${\mathcal C}_g(\widetilde\Sigma_t,D,{\boldsymbol w}_t)$, $t\ne0$. \end{enumerate} \end{lemma} {\bf Proof.} The statement follows from Lemma \ref{ln12}(2) and \cite[Theorem 4.2]{Vakil:2000}. $\blacksquare$ \subsection{Deformation of isolated curve singularities} \subsubsection{Local equigeneric deformations} Let $\Sigma$ be a smooth algebraic surface, and let $z$ be an isolated singular point of a curve $C\subset\Sigma$. Denote by $J_{C,z}\subset{\mathcal O}_{C,z}$ the Jacobian ideal, and by $J^{cond}_{C,z}\subset{\mathcal O}_{C,z}$ the local conductor ideal, defined as ${\operatorname{Ann}}({\mathcal O}_{C^\vee}/{\mathcal O}_{C,z})$, where $C^\vee\to(C,z)$ is the normalization. If $f(x,y)=0$ is an equation of $(C,z)$ in some local coordinates $x,y$ in $(\Sigma,z)$, then $$J_{C,z}=\langle f_x,f_y\rangle,\quad J^{cond}_{C,z}=\{g\in{\mathcal O}_{C,z}\ :\ {\operatorname{ord}}(g|_{C_i})\ge{\operatorname{ord}}(f'|_{C_i})-{\operatorname{mt}}(C_i)+1,\ i=1,...,m\}\ ,$$ where $C_1,...,C_m$ are all the components of $(C,z)$, $f'=\alpha f_x+\beta f_y$ is a generic polar, and ${\operatorname{mt}}(C_i)$ is the intersection number of $C_i$ with a generic smooth line through $z$ ({\it cf.} \cite[Section 4.2.4]{Dolgachev:2013}). Let $B_{C,z}$ be the base of a semiuniversal deformation of the germ $(C,z)$. This base can be identified with ${\mathcal O}_{C,z}/J_{C,z}\simeq{\mathbb C}^{\tau(C,z)}$, where $\tau(C,z)$ is the Tjurina number. Denote by $B^{\,eg}_{C,z} \subset B_{C,z}$ the equigeneric locus that parameterizes local deformations of $(C,z)$ with constant $\delta$-invariant equal to $\delta(C,z)$. The following lemma presents the properties of $B^{\,eg}_{C,z}$ that we will need. \begin{lemma}\label{eg1} The locus $B^{\,eg}_{C,z}$ is irreducible and has codimension $\delta(C,z)$ in $B_{C,z}$. The subset $B^{\,eg,im}_{C,z} \subset B^{\,eg}_{C,z}$ that parameterizes the immersed deformations is open and dense in $B^{\,eg}_{C,z}$, and consists only of smooth points of $B^{\,eg}_{C,z}$.
The subset $B^{\,eg,nod}_{C,z} \subset B^{\,eg}_{C,z}$ that parameterizes the nodal deformations is also open and dense. The complement $B^{\,eg}_{C,z}\setminus B^{\,eg,nod}_{C,z}$ is the closure of three codimension-one strata: $B^{\,eg}_{C,z}(A_2)$ that parameterizes deformations with one cusp $A_2$ and $\delta(C,z)-1$ nodes, $B^{\,eg}_{C,z}(A_3)$ that parameterizes deformations with one tacnode $A_3$ and $\delta(C,z)-2$ nodes, and $B^{\,eg}_{C,z}(D_4)$ that parameterizes deformations with one ordinary triple point $D_4$ and $\delta(C,z)-3$ nodes. The tangent cone $T_0B^{\,eg}_{C,z}$ (defined as the limit of the tangent spaces at points of $B^{\,eg,im}_{C,z}$) can be identified with $J^{cond}_{C,z}/J_{C,z}$. \end{lemma} {\bf Proof.} The statement follows from \cite[Item (iii) on page 435, Theorem 1.4, Theorem 4.15, and Proposition 4.17]{Diaz_Harris:1984}. $\blacksquare$ \subsubsection{Local invariance of Welschinger numbers} Suppose now that $\Sigma$ possesses a real structure, $C$ is a real curve, and $z$ is a real singular point of $C$. Let $b\in B^{\,eg,im}_{C,z}$ be a real point, and let $C_b$ be the corresponding fiber of the semiuniversal deformation of the germ $(C,z)$. Choose a real point $b'\in B^{\,eg,nod}_{C,z}$ sufficiently close to $b$ and define the Welschinger signs $$W^+_b=(-1)^{s_+(C_{b'})},\quad W^-_b=(-1)^{s_-(C_{b'})}\ ,$$ where $s_+(C_{b'})$ (respectively, $s_-(C_{b'})$) is the number of solitary (respectively, non-solitary) nodes of $C_{b'}$. \begin{lemma}\label{new_lemma1} The Welschinger signs $W^+_b$ and $W^-_b$ do not depend on the choice of a real point $b'\in B^{\,eg,nod}_{C,z}$ sufficiently close to $b$. \end{lemma} {\bf Proof.} Straightforward. $\blacksquare$ \begin{lemma}\label{ln2} Let $L_t$, $t\in(-\varepsilon,\varepsilon)\subset{\mathbb R}$, be a continuous one-parameter family of conjugation-invariant affine subspaces of $B_{C,z}$ of dimension $\delta(C,z)$ such that \begin{itemize}\item $L_0$ passes through the origin and is transversal to $T_0B^{\,eg}_{C,z}$, \item $L_t\cap B^{\,eg}_{C,z}\subset B^{\,eg,im}_{C,z}$ for each $t \in (-\varepsilon, \varepsilon) \setminus \{0\}$. \end{itemize} Then, \begin{enumerate} \item[(i)] the intersection $L_t\cap B^{\,eg}_{C,z}$ is finite for each $t \in (-\varepsilon', \varepsilon') \setminus \{0\}$, where $\varepsilon' > 0$ is sufficiently small; \item[(ii)] the functions $W^\pm(t)=\sum_{b\in L_t\cap{\mathbb R} B^{\,eg}_{C,z}}W^\pm_b$ are constant in $(-\varepsilon', \varepsilon') \setminus \{0\}$, where $\varepsilon' > 0$ is sufficiently small. \end{enumerate} \end{lemma} {\bf Proof.} The statement follows from \cite[Lemma 15]{Itenberg_Kharlamov_Shustin:2012}. $\blacksquare$ \subsubsection{Global transversality conditions} If $C \subset \Sigma$ is a curve with isolated singularities, we consider the joint semiuniversal deformation for all singular points of $C$. The base of this deformation, the equigeneric locus, and the tangent cone to this locus at the point corresponding to $C$ are as follows: $$ B_C=\prod_{z\in{\operatorname{Sing}}(C)}B_{C,z},\quad B^{\,eg}_C=\prod_{z \in {\operatorname{Sing}}(C)}B^{\,eg}_{C,z}, \quad T_0B^{\,eg}_C=\prod_{z \in {\operatorname{Sing}}(C)}T_0B^{\,eg}_{C,z}\ .$$ \begin{lemma}\label{leg} Let $[\nu:\hat C\to\Sigma]\in {\mathcal M}^{br}_{g,0}(\Sigma,D)$ and $C=\nu(\hat C)$. Assume that $n=-DK_\Sigma+g-1>0$.
There exists an open dense subset ${\mathcal U}\subset C^n$ such that any ${\boldsymbol p}\in{\mathcal U}$ consists of $n$ distinct points, the image ${\boldsymbol w}=\nu({\boldsymbol p})$ is an $n$-tuple of distinct nonsingular points of $C$, and \begin{equation} H^0(C,{\mathcal J}^{cond}_C(-{\boldsymbol w})\otimes {\mathcal O}_\Sigma(D))=0\ .\label{eeg}\end{equation} Let ${\boldsymbol w}$ satisfy (\ref{eeg}), $|D|_{\boldsymbol w}\subset|D|$ be the linear subsystem of curves passing through ${\boldsymbol w}$, and $\Lambda({\boldsymbol w})\subset B_C$ be the natural image of $|D|_{\boldsymbol w}$. \begin{enumerate} \item[(1)] One has ${\operatorname{codim}}_{B_C}\Lambda({\boldsymbol w})=\dim B^{\,eg}_C$, and $\Lambda({\boldsymbol w})$ intersects $T_0B^{\,eg}_C$ transversally. \item[(2)] For any $n$-tuple ${\boldsymbol w}' \in \Sigma^n$ sufficiently close to ${\boldsymbol w}$ and such that $\Lambda({\boldsymbol w}')$ intersects $B^{\,eg}_C$ transversally and only at smooth points, the natural map from the germ of ${\mathcal M}_{g,n}(\Sigma,D)$ at $[\nu:\hat C\to\Sigma,{\boldsymbol p}]$ to $B^{\,eg}_C$ gives rise to a bijection between the set of elements $[\nu':\hat C'\to \Sigma,{\boldsymbol p}']$ such that $\nu' ({\boldsymbol p}') = {\boldsymbol w}'$ on one side and $\Lambda({\boldsymbol w}')\cap B^{\,eg}_C$ on the other side. \end{enumerate} \end{lemma} {\bf Proof.} The existence of the required set ${\mathcal U}$ immediately follows from the relation \begin{equation} h^0(C,{\mathcal J}^{cond}_C\otimes {\mathcal O}_\Sigma(D))=n\ ,\label{eelra7}\end{equation} since imposing the $n$ generic point constraints one by one reduces $h^0$ to zero. To prove (\ref{eelra7}) we use the fact that ${\mathcal J}^{cond}_C=\nu_*{\mathcal O}_{\hat C}(- \Delta)$, where $\Delta\subset\hat C$ is the so-called double-point divisor, whose degree is $\deg\Delta=2\sum_{z\in{\operatorname{Sing}}(C)}\delta(C,z)$ (see, e.g., \cite[Section 2.4]{Caporaso_Harris:1998} or \cite[Section 4.2.4]{Dolgachev:2013}). Thus, $$\deg({\mathcal J}^{cond}_C\otimes {\mathcal O}_\Sigma(D))=D^2-2\sum_{z\in{\operatorname{Sing}}(C)}\delta(C,z)=-DK_\Sigma+2g-2>2g-2\ ,$$ where the second equality follows from the adjunction formula $D^2+DK_\Sigma=2p_a(C)-2$ and the genus formula $g=p_a(C)-\sum_{z\in{\operatorname{Sing}}(C)}\delta(C,z)$; hence $$h^1(\hat C,{\mathcal J}^{cond}_C\otimes {\mathcal O}_\Sigma(D))=0\quad\text{and}\quad h^0(\hat C,{\mathcal J}^{cond}_C\otimes {\mathcal O}_\Sigma(D))=-DK_\Sigma+2g-2-g+1=n\ .$$ The dimension and the transversality in statement (1) mean that the pull-back of $T_0B^{\,eg}_C$ to $|D|$ intersects $|D|_{\boldsymbol w}$ transversally and only at one point, which, in view of the identification of $T_0B^{\,eg}_C$ with $\prod_{z \in {\operatorname{Sing}}(C)}J^{cond}_{C,z}/J_{C,z}$, reduces to (\ref{eeg}), since ${\mathcal J}^{cond}_C$ can equivalently be regarded as the ideal sheaf of the zero-dimensional subscheme of $C$, defined at all singular points $z\in{\operatorname{Sing}}(C)$ by the local conductor ideals $J^{cond}_{C,z}$. (2) The second statement of the lemma immediately follows from the first one. $\blacksquare$ \section{Appendix B: CH-configurations of points on real uninodal del Pezzo surfaces}\label{sec-ab} Let $\Sigma$ be a uninodal del Pezzo surface of degree $\ge2$. Pick an effective divisor class $D\in{\operatorname{Pic}}(\Sigma)$ represented by a curve not containing $E_\Sigma$ as a component, and choose an integer $g\ge0$ and two vectors $\alpha,\beta\in{\mathbb Z}^\infty_+$ such that $I(\alpha+\beta)=DE_\Sigma$.
Fix a sequence ${\boldsymbol w}$ of $\|\alpha\|$ distinct points in general position on $E_\Sigma$ and a positive function $T:{\boldsymbol w}\to{\mathbb Z}$ such that $|T^{-1}(i)|=\alpha_i$, $i\ge1$. Denote by ${\mathcal M}'_{g,\|\alpha\|}(\Sigma,D,\alpha,\beta,{\boldsymbol w},T)$ the space of elements $[\nu:\hat C\to\Sigma,{\boldsymbol p}]\in{\mathcal M}'_{g,\|\alpha\|}(\Sigma,D,{\boldsymbol w})$ such that \begin{itemize}\item $\nu^*({\boldsymbol w})=\sum_{p\in{\boldsymbol p}}T(\nu(p))\cdot p$, \item $\nu^*(E_\Sigma\setminus{\boldsymbol w})=\sum_{q\in\hat C\setminus{\boldsymbol p}}k_q\cdot q$, where the number of the coefficients $k_q$ equal to $i$ is $\beta_i$ for all $i\ge1$. \end{itemize} \begin{lemma}\label{lem-sm} If ${\mathcal M}'_{g,\|\alpha\|}(\Sigma,D,\alpha,\beta,{\boldsymbol w},T)\ne\emptyset$, then \begin{enumerate}\item[(i)] ${\operatorname{idim}}{\mathcal M}'_{g,\|\alpha\|}(\Sigma,D,\alpha,\beta,{\boldsymbol w},T) \le n=-D(K_\Sigma+E_\Sigma)+g+\|\beta\|-1$, \item[(ii)] a general element $[\nu:\hat C\to\Sigma,{\boldsymbol p}]$ of any component of ${\mathcal M}'_{g,\|\alpha\|}(\Sigma,D,\alpha,\beta,{\boldsymbol w},T)$ of intersection dimension $n$ is an immersion, and the curve $C=\nu(\hat C)$ is nonsingular along $E_\Sigma$. \end{enumerate} \end{lemma} {\bf Proof.} The statement follows from \cite[Proposition 2.1]{Shoval_Shustin:2013}. $\blacksquare$ Now let $\Sigma$ be a real uninodal del Pezzo surface with a real $(-2)$-curve $E_\Sigma$ such that ${\mathbb R}E_\Sigma\ne\emptyset$. Pick an effective divisor class $D_0\in{\operatorname{Pic}}(\Sigma)$, represented by a real curve not containing $E_\Sigma$ as component, and such that $N=\dim|D_0|>0$. Denote by ${\operatorname{Prec}}(D_0)$ the set of real effective divisor classes $D\in{\operatorname{Pic}}(\Sigma)$, represented by real curves not containing $E_\Sigma$ as component, and such that $D_0\ge D$. Notice that $\dim|D|\le N$. Let $z_1,...,z_N$ be a sequence of $N$ distinct points in general position on ${\mathbb R}E_\Sigma$, and let $z_i(t)$, $t\in[0,1]$, be a smooth path in ${\mathbb R}\Sigma$ transversal to ${\mathbb R}E_\Sigma$ at $z_i(0)=z_i$, $i=1,...,N$. We shall construct a sequence of points $w_i=z_i(t_i)$, $0<t_i<1$, $i=1,...,N$, called a $D_0$-CH-configuration ({\it cf.} \cite[Section 3.5.2]{Itenberg_Kharlamov_Shustin:2013}). We perform the construction inductively on $k=1,...,N$. Assume that we have defined $t_i$, $i<k$, and then construct $t_k$ in the following procedure. Given any data $D\in{\operatorname{Prec}}(D_0)$, $g\ge0$, $\alpha,\beta\in{\mathbb Z}^\infty_+$ such that $I(\alpha+\beta)=DE_\Sigma$ and $1\le n=-D(K_\Sigma+E_\Sigma)+g+\|\beta\|-1\le k$, and given any subsets $J_1\subset\{1,...,k-1\}$, $J_2\subset\{k+1,...,N\}$ such that $|J_1|=n-1$, $|J_2|=\|\alpha\|$, we impose the following condition: \noindent for $t\in(0,t_k]$, the sets \begin{equation} \{[\nu:\hat C\to\Sigma,{\boldsymbol p}]\in{\mathcal M}'_{g,\|\alpha\|} (\Sigma,D,\alpha,\beta,\{z_i\}_{i\in J_2},T)\ |\ w_i\in\nu(\hat C),\ i\in J_1,\ z_k(t)\in\nu(\hat C)\}\ ,\label{eelra100}\end{equation} are finite of a capacity independent of $t$, and all their elements are presented by immersions. The existence of such $t_k\in(0,1)$ follows from the fact that there are only finitely many tuples $(D,g,\alpha,\beta,J_1,J_2)$, for which the sets (\ref{eelra100}), considered for arbitrary $t\in(0,1)$, are nonempty. \begin{lemma}\label{lem-ch} In the above notations, let ${\boldsymbol w}$ be a $D_0$-CH-configuration. 
Suppose that $D\in{\operatorname{Prec}}(D_0)$, $g\ge0$, $\alpha,\beta\in {\mathbb Z}^\infty_+$ satisfy $I(\alpha+\beta)=DE_\Sigma$ and $0\le n(D, g,\beta)=-D(K_\Sigma+E_\Sigma)+g+\|\beta\|-1 \le N$. Then, for any disjoint sets $J_1,J_2\subset\{1,...,N\}$ such that $|J_1|=n(D,g,\beta)$, $|J_2|=\|\alpha\|$, $\max J_1<\min J_2$, and for any real element of the set \begin{equation}\{[\nu:\hat C\to\Sigma,{\boldsymbol p}]\in{\mathcal M}'_{g,\|\alpha\|} (\Sigma,D,\alpha,\beta,\{z_i\}_{i\in J_2},T)\ |\ w_i\in\nu(\hat C),\ i\in J_1\} \label{eset}\end{equation} the divisor $\nu^*(E_\Sigma)\subset\hat C$ is supported only at real points. \end{lemma} {\bf Proof.} We use induction on $n=n(D,g,\beta)$. The case $n=0$ necessarily yields $g=0$ (see \cite[Proposition 2.5]{Shoval_Shustin:2013}), and the desired statement then follows from \cite[Lemma 3]{Itenberg_Kharlamov_Shustin:2013}. If $n>0$, we pick $k=\max J_1$ and consider degenerations of a real element of the set (\ref{eset}) in the family $[\nu_t:\hat C_t\to\Sigma,{\boldsymbol p}_t]$, $t\in(0,t_k]$, corresponding to the specialization of the point $w_k$ to $z_k\in E_\Sigma$ along the arc $L_k$. By \cite[Proposition 2.6]{Shoval_Shustin:2013}, the limit of this family is \begin{itemize}\item either $[\nu_0:\hat C_0\to\Sigma,{\boldsymbol p}_0]\in {\mathcal M}'_{g,\|\alpha\|+1}(\Sigma,D,\alpha+e_m,\beta-e_m,\{z_i\}_{i\in J_2\cup\{k\}},T_0)$, where $T_0\big|_{J_2}=T$, $T_0(z_k)=m$, which geometrically means that one of the nonfixed intersection points of $\nu(\hat C)$ with $E_\Sigma$ of multiplicity $m$ becomes fixed at the position $z_k$; the limit element satisfies $n(D,g,\beta-e_m)=n-1$; hence by the induction assumption all points of $\nu_0(\hat C_0)\cap E_\Sigma$ are real, and so are the points of $\nu(\hat C)\cap E_\Sigma$; \item or $[\nu_0:\hat C_0\to\Sigma,{\boldsymbol p}_0]$ such that $\hat C_0$ splits into components $E_0$, $\hat C_1,...,\hat C_m$ so that $\nu_0:E_0\to E_\Sigma$ is an isomorphism, the elements $[\nu_0:\hat C_j\to\Sigma,{\boldsymbol p}_j]\in{\mathcal M}'_{g_j,\|\alpha^{(j)}\|}(\Sigma,D_j,\alpha^{(j)},\beta^{(j)},T_j)$ satisfy $n(D_j,g_j,\beta^{(j)})<n$, $\nu_0({\boldsymbol p}_j)\subset\{z_i\ |\ i\in J_2\}$ for all $j=1,...,m$, and, moreover, the divisor $\nu_t^*(E_\Sigma)$ is supported at the (real) points $\nu_t^{-1}(z_i)$, $1\le i\le N$, and in a slightly deformed proper subset of the set $\nu_0^{-1}(E_\Sigma\setminus\{z_1,...,z_N\})$; thus, if $\nu_0:\hat C_j\to\Sigma$ is real, then by the induction assumption, $\nu^{-1}_0(E_\Sigma)\cap\hat C_j$ is real, and hence a small smooth deformation of any of its proper subsets is real too; if $\nu_0:\hat C_j\to\Sigma$ is not real (in which case its complex conjugate must be present in the splitting as well), then we have $n=0$ and $g=0$, which by \cite[Lemma 3]{Itenberg_Kharlamov_Shustin:2013} implies that $E_\Sigma\cap\nu_0(\hat C_j)$ is just one point $z$; furthermore, in the deformation, this node smooths out, and the deformed curve does not intersect $E_\Sigma$ in a neighborhood of $z$. \end{itemize} $\blacksquare$ {\it Address}: School of Mathematical Sciences, Tel Aviv University, Ramat Aviv, 69978 Tel Aviv, Israel. {\it E-mail}: {\tt [email protected]} \end{document}
\begin{document} \title[Sketching the proof for sublattices] {Homomorphisms and principal congruences\\ of bounded lattices. II. \\Sketching the proof for sublattices} \author{G. Gr\"{a}tzer} \email[G. Gr\"atzer]{[email protected]} \address{Department of Mathematics\\ University of Manitoba\\ Winnipeg, MB R3T 2N2\\ Canada} \urladdr[G. Gr\"atzer]{http://server.maths.umanitoba.ca/homepages/gratzer/} \date{Jan. 16, 2015} \subjclass[2010]{Primary: 06B10.} \keywords{bounded lattice, principal congruence, sublattice, ordered set, isotone map.} \begin{abstract} A recent result of G. Cz\'edli relates the ordered set of principal congruences of a bounded lattice $L$ with the ordered set of principal congruences of a~bounded sublattice $K$ of $L$. In this note, I sketch a new proof. \end{abstract} \maketitle \section{Introduction} \label{S:Introduction} We start by stating the main result of my paper \cite{gG14}; see also Section 10-6 of \cite{LTS1} and Part VI of~\cite{CFL2}. \begin{theorem}\label{T:bounded} Let $P$ be a bounded ordered set. Then there is a bounded lattice~$K$ such that $P \iso \Princ K$. \end{theorem} The bibliography lists a number of papers related to this result. In particular, G.~Cz\'edli~\cite{gC15} and~\cite{gC17} extended this result to a bounded lattice $L$ and a bounded sub\-lattice $K$. In this case, the map \[ \ext(K,L) \colon \conK{x, y} \mapsto \conL{x, y} \text{\quad for $x,y \in K$}, \] is a bounded isotone map of $\Princ K$ into $\Princ L$. This map is $\set{0}$-separating, that is, $\zero_K$ is the only principal congruence of $K$ mapped by $\ext(K,L)$ to~$\zero_L$. Now we state Cz\'edli's result. \begin{theorem}\label{T:Czedli} Let $P$ and $Q$ be bounded ordered sets. Let $\gy$ be an isotone $\set{0}$-sepa\-rating bounded map from $P$ into $Q$. Then there exist a bounded lattice~$L$ and a bounded sublattice~$K$ of~$L$ representing $P$, $Q$, and $\gy$ as $\Princ K$, $\Princ L$, and $\ext(K,L)$ up to isomorphism. \end{theorem} Note that if $K = L$, then $\ext(K,L)$ is the identity map on $\Princ K = \Princ L$, so Theorem~\ref{T:bounded} follows from Theorem~\ref{T:Czedli} with $P = Q$ and $\gy$ the identity map. In this short note, I sketch a proof of Theorem~\ref{T:Czedli} by modifying the proof of~Theorem~\ref{T:bounded}. G. Cz\'edli~\cite{gC15} translates the problem to the highly technical tools of his paper~\cite{gC16} and also uses some results of that paper. Since \cite{gC16} deals with another subject matter and it is quite long, \cite{gC15} is not easy to understand; this is surely not the shortest way to prove Theorem~\ref{T:Czedli}. In G. Gr\"atzer~\cite{gG15b}, a result stronger than Theorem~\ref{T:Czedli} is proved but the proof is based explicitly on \cite{gC15} and, consequently, on \cite{gC16}. Finally, G.~Cz\'edli~\cite{gC17} is another long and quite technical paper; it proves a more general result but the special case of the construction needed by Theorem~\ref{T:Czedli} is not easy to derive from it. These facts motivate the present note, which is short and provides an easy way to understand the construction and the idea of the proof. We start by sketching the proof of Theorem~\ref{T:bounded} to make this note somewhat self-contained. For the background of this topic, see the books \cite{CFL} and \cite{LTS1}, and especially my most recent book \cite{CFL2}. \subsection*{Notation} We use the notation as in \cite{CFL2}. You can find the complete \emph{Part I. 
A Brief Introduction to Lattices} and \emph{Glossary of Notation} \noindent of \cite{CFL2} at \verb+tinyurl.com/lattices101+ \section{Sketching the proof of Theorem~\ref{T:bounded}} \label{S:bounded} Let $P$ be an ordered set with bounds $0$ and $1$. Let $P^- = P - \set{0,1}$ and let $P^{\scriptscriptstyle^{\parallel}}$ denote those elements of $P^-$ that are not comparable to any other element of~$P^-$. We construct the lattice $\Frame P$ consisting of the elements \text{$o$, $i$}, the elements $a_p \neq b_p$, for every $p \in P^-$, and $a_0 = b_0$, $a_1 = b_1$. These elements are ordered as in Figure~\ref{F:F}. \begin{figure} \caption{The lattice $\Frame P$.} \caption{The lattice $S(p < q)$ for $p < q \in P$.} \label{F:F} \label{F:Snew} \end{figure} We then construct the lattice $K$ (of Theorem~\ref{T:bounded}) by inserting the lattice $S(p < q)$ of Figure~\ref{F:Snew} into $\Frame P$ for all $p < q$ in $P$. For $p \in P^{\scriptscriptstyle^{\parallel}}$, let $C_p = \set{o < a_p < b_p < i}$. We define the set \begin{equation*} K = \UUm{S(p < q)}{p < q \in P^-} \uu \UUm{C_p}{p \in P^{\scriptscriptstyle^{\parallel}}}\uu \set{a_0, a_1}. \end{equation*} To show that $K$ is a lattice, we define the joins and meets in $K$ by the nine rules for the two operations given in \cite{gG14}. The first six are the obvious rules ($\Frame P$, the $S(p < q)$-s, and the $C_p$-s are sublattices, and so on), so we only repeat the last three. They deal with the join and meet of $x$ and $y$, where $x \in S(u < v)$ and $y \in S(w < z)$ and $\set{u,v} \neq \set{w,z}$. In most cases $x$ and $y$ are complementary, except if $S(u < v) \ii S(w < z) \neq \set{o,i}$. This can only happen in three ways, as described by the three rules that follow. \begin{enumeratei} \item[(vii)] Let $x \in S(q < p) - S(p < q')$ and $y \in S(p < q') - S(q < p)$. We form $x \jj y$ and $x \mm y$ in $K$ in the lattice $L_{\tup{C}}$, see Figure \ref{F:C}. \begin{figure} \caption{The lattice $L_{\tup{C}}$.} \caption{The lattice $L_{\tup{V}}$.} \caption{The lattice $L_{\tup{H}}$.} \label{F:C} \label{F:V} \label{F:H} \end{figure} \item[(viii)] Let $x \in S(p < q) - S(p < q')$ and $y \in S(p < q') - S(p < q)$ with $q \neq q'$. We~form $x \jj y$ and $x \mm y$ in $K$ in the lattice $L_{\tup{V}}$, see Figure~\ref{F:V}. \item[(ix)] Let $x \in S(q < p) - S(q' < p)$ and $y \in S(q' < p) - S(q < p)$ with $q \neq q'$. We~form $x \jj y$ and $x \mm y$ in $K$ in the lattice $L_{\tup{H}}$, see Figure~\ref{F:H}. \end{enumeratei} A congruence $\bga > \zero$ of a bounded lattice $L$ is \emph{Bound Isolating} (BI, for short), if $\set{0}$ and $\set{1}$ are congruence blocks of~$\bga$. With a BI congruence $\bgb$ of the lattice $K$, we associate a subset of the ordered set $P^-$: \[ \Base(\bgb) = \setm{p \in P^-}{\cngd a_p=b_p(\bgb)}. \] Then $\Base(\bgb)$ is a down set of $P^-$, and the correspondence $ \gg \colon \bgb \to \Base(\bgb) $ is an order-preserving bijection between the ordered set of BI congruences of~$K$ and the ordered set of down sets of $P^-$. We extend $\gg$ by $\zero \to \set{0}$ and $\one \to P$. Then $\gg$ is an isomorphism between $\Con (K)$ and $\Down^- P$, the ordered set of nonempty down sets of~$P$, verifying Theorem \ref{T:bounded}. It is now easy to compute that the map defined by \[ p \mapsto \begin{cases} \con{a_p,b_p} &\text{for $p \in P - \set{1}$;}\\ \one &\text{for $p = 1$} \end{cases} \] is the isomorphism $P \iso \Princ K$, as required in Theorem~\ref{T:bounded}.
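To illustrate the correspondence, it may help to spell it out on principal congruences (this merely restates the two isomorphisms just described): since $p \leq q$ in $P^-$ holds if and only if $\con{a_p, b_p} \leq \con{a_q, b_q}$, we have \[ \Base(\con{a_q, b_q}) = \setm{p \in P^-}{\cngd a_p=b_p(\con{a_q, b_q})} = \setm{p \in P^-}{p \leq q}, \] so under $\gg$ the principal BI congruences of $K$ correspond exactly to the principal down sets of $P^-$.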
\section{Sketching the proof of Theorem~\ref{T:Czedli}} \label{S:Czedl} Let $P$, $Q$, and $\gy$ be given as in Theorem~\ref{T:Czedli}. We form the bounded ordered set $R = P \uu Q$, a~dis\-joint union with $0_P, 0_Q$ and $1_P, 1_Q$ identified. So $R$ is a bounded ordered set containing $P$ and $Q$ as bounded ordered subsets. Observe that $\Frame P$ is a bounded sublattice of $\Frame R$. For $p < q$ in $P^-$ and for $p < q$ in $Q^-$, we insert $S(p < q)$, see Figure~\ref{F:Snew}, into $\Frame R$ so that $\con{a_p, b_p} < \con{a_q, b_q}$ will hold. Also, for $p \in P^-$, we insert $S(p < \gy p)$ as a sublattice; note that $\gy p \in Q^-$. Let $L^+$ denote the ordered set we obtain. We slim $L^+$ down to the ordered set~ $L$ by deleting all the elements of the form $x_{p, \gy p}$ for $p \in P^-$. Since $x_{p, \gy p}$ is not join-reducible, the ordered set $L$ is a lattice (but it is neither a sublattice nor a quotient of $L^+$). The joins and meets of any two elements $u$ and $v$ in~$L$ are the same as in $L^+$, except for meets of the form $u \mm v = x_{p, \gy p}$, where $u \parallel v$ and $p \in P^-$; in this case, $u \mm v = (x_{p, \gy p})_*$, the unique element covered by $x_{p, \gy p}$ in $L^+$. Now we can prove Theorem~\ref{T:Czedli} as we verified Theorem~\ref{T:bounded} in Section~\ref{S:bounded}. We define $K$ as the bounded sublattice of $L$ built on $\Frame P$. Observe that $\con{a_p, b_p} = \con{a_{\gy p}, b_{\gy p}}$, since $[a_p, b_p]$ is (three step) projective to $[a_{\gy p}, b_{\gy p}]$, so all principal BI congruences of $L$ are of the form $\con{a_q, b_q}$ for $q \in Q^-$. It is now easy to compute that the map $\ext(K,L)$ corresponds to $\gy$, as required in Theorem~\ref{T:Czedli}. \end{document}
\begin{document} \begin{abstract} For weighted Toeplitz operators ${\mathcal T}^N_\varphi$ defined on spaces of holomorphic functions in the unit ball, we derive regularity properties of the solutions $f$ to the integral equation ${\mathcal T}^N_\varphi(f)=h$ in terms of the regularity of the symbol $\varphi$ and the data $h$. As an application, we deduce that if $f\not\varepsilonquiv0$ is a function in the Hardy space $H^1$ such that its argument $\bar f/f$ is in a Lipschitz space on the unit sphere ${\mathbb S}$, then $f$ is also in the same Lipschitz space, extending a result of K. Dyakonov to several complex variables. \varepsilonnd{abstract} \maketitle \section{Introduction} The goal of this paper is to study the regularity of solutions to certain equations related to weighted Toeplitz operators in several complex variables. We will start by stating some particular cases of the main results in this paper, which involve classical spaces and integral operators and illustrate the object of this paper, although they can be applied in a more general setting. Let ${\mathbb B}$ denote the open unit ball in ${\mathbb C}^n$ and ${\mathbb S}$ its boundary. In the one variable setting ($n=1$), ${\mathbb B}$ and ${\mathbb S}$ will also be denoted by ${\mathbb D}$ and $\mathbb{T}$, respectively. For any $\tau>0$, $\mathcal Lambda_\tau=\mathcal Lambda_\tau({\mathbb S})$ is the classical Lipschitz-Zygmund space on ${\mathbb S}$. If $\varphi\in \mathcal Lambda_\tau$, we consider the Toeplitz operator ${\mathcal T}_\varphi:H^1\rightarrow H^1$, defined by ${\mathcal T}_\varphi(f)(z):=\mathcal{P}(\varphi f)(z)$, where $\mathcal{P}$ is the Cauchy projection, given by $$ \mathcal{P}(\partialsi)(z):=\int_{{\mathbb S}} \frac{\partialsi(\zeta)}{(1-\bar \zeta z)^{n}}d\sigma(\zeta)\qquad(\partialsi\in L^1({\mathbb S})). $$ Here $d\sigma$ denotes the normalized Lebesgue measure on ${\mathbb S}$. We point out that ${\mathcal T}_\varphi$ maps $H^1$ to itself because $\varphi\in \mathcal Lambda_\tau$. For this scale of Lispchitz spaces we prove the following result: \begin{thm} \label{thm:IntrLip} Let $\tau>0$ and $\varphi\in \mathcal Lambda_\tau$ be a non-vanishing function on ${\mathbb S}$. If $f\in H^1$ and ${\mathcal T}_\varphi(f)\in \mathcal Lambda_\tau$, then $f\in\mathcal Lambda_\tau$. \varepsilonnd{thm} This result extends \cite[Theorem 3.1]{KD}, which deals with the case $n=1$ and the regularity of the solutions to the equation ${\mathcal T}_\varphi(f)=0$. We remark that if we drop the condition $0\notin\varphi({\mathbb S})$, then Theorem \ref{thm:IntrLip} is not true in general. Indeed, we only need to consider the symbol $\varphi(\zeta)=(1-\zeta_1)^\tau$ and the function $f(\zeta)=(1-\zeta_1)^{-\tau}$ with $0<\tau<n$ (to ensure that $f\in H^1$). As in the one variable case (see \cite{KD}), the above theorem implies some interesting properties of the holomorphic Lipschitz functions. For instance, \begin{cor}\label{cor:Intarg0} If $f\in H^1$, $\varphi\in \mathcal Lambda_\tau$, such that $0\notin \varphi({\mathbb S})$ and $\varphi f\in \mathcal Lambda_\tau+\ker \mathcal{P}\subset L^1({\mathbb S})$, then $f\in\mathcal Lambda_\tau$. \varepsilonnd{cor} In particular, we have: \begin{cor}\label{cor:Intarg} If $f\in H^1\setminus\{0\}$ and its argument function $\varphi=\bar f/f$ is in $\mathcal Lambda_\tau$, then $f\in\mathcal Lambda_\tau$. \varepsilonnd{cor} The preceding corollary is proved in~{\cite{KD}} for $n=1$. 
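To make the role of the assumption $0\notin\varphi({\mathbb S})$ explicit, let us spell out the computation behind the example mentioned after Theorem~\ref{thm:IntrLip}. For $\varphi(\zeta)=(1-\zeta_1)^\tau$ and $f(\zeta)=(1-\zeta_1)^{-\tau}$ with $0<\tau<n$, we have $\varphi f\equiv1$ on ${\mathbb S}$, so
$$
{\mathcal T}_\varphi(f)=\mathcal{P}(1)=1\in\Lambda_\tau,
$$
since the Cauchy projection reproduces constants, while $f$ is unbounded on ${\mathbb S}$ and hence does not belong to $\Lambda_\tau$. Here, of course, $\varphi$ vanishes at the point $(1,0,\dots,0)$, so $0\in\varphi({\mathbb S})$.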
In this paper we prove the above results and extend them to weighted Toeplitz operators associated to more general symbols. We will denote by $\Gamma_\tau=\Gamma_\tau({\mathbb S}),\,\tau>0$, the Lipschitz-Zygmund space on ${\mathbb S}$ with respect to the pseudodistance $d(\zeta,\varepsilonta)=|1-\bar\zeta\varepsilonta|$ (see Subsection \ref{sec:G} for precise definitions). Since $|1-\bar\zeta\varepsilonta|\le|\zeta-\varepsilonta|$, it is clear that $\Gamma_\tau$ is a subspace of $\mathcal Lambda_\tau$. For a positive integer $k$, and real numbers $0<\tau_0\le \tau<k$, $0<\tau_0< 1/2$, we will consider spaces $G^{\tau_0}_{\tau,k}({\mathbb B})\subset \mathcal Lambda_{\tau_0}(\bar {\mathbb B})\cap \mathcal{C}^k({\mathbb B})$, whose restrictions to ${\mathbb S}$ contain the space $\Gamma_\tau$, and also the space $\mathcal Lambda_\tau$, for $\tau>1/2$. Moreover, they satisfy that their intersection with the space $H=H({\mathbb B})$ of holomorphic functions on ${\mathbb B}$, coincides with the Lipschitz-Zygmund space of holomorphic functions on ${\mathbb B}$, denoted by $B^\infty_\tau$. These holomorphic spaces $B^\infty_\tau$ are characterized in terms of the growth of the derivatives (see Subsection \ref{sec:G} for a precise definition and their main properties). For $N>0$, let $d\nu_N(z):=c_N(1-|z|^2)^{N-1}d\nu(z)$, where $\nu$ is the Lebesgue measure on ${\mathbb B}$ and $c_N=\frac{\Gamma(n+N)}{n!\Gamma(N)}$, so that $\nu_N({\mathbb B})=1$. Let $L^1_N:=L^1(B,d\nu_N)$ and consider the weighted Bergman projection $\mathcal{P}^N:L^1_N\to H$ defined by $$ \mathcal{P}^N(\partialsi)(z):=\int_{{\mathbb B}} \frac{\partialsi(w)}{(1-\bar w z)^{n+N}}\,d\nu_N(w). $$ Let $B^1_{-N}:=L^1_N\cap H$ and for $\varphi\in L^{\infty}({\mathbb B})$, define the weighted Toeplitz operator ${\mathcal T}^N_\varphi:B^1_{-N}\rightarrow H$ by ${\mathcal T}^N_\varphi(f):=\mathcal{P}^N(\varphi f)$. Since $\lim_{N\searrow 0} \mathcal{P}^N(\partialsi)=\mathcal{P}(\partialsi)$ (see \cite[\S\,0.3]{Bea-Bur}), we extend these definitions to $N=0$, by $\mathcal{P}^0=\mathcal{P}$ and ${\mathcal T}_\varphi^0={\mathcal T}_\varphi$, $\varphi\in L^\infty({\mathbb S})$. In these cases, the operators are defined on $L^1({\mathbb S})$ and $H^1$, respectively. The next two theorems are the main results of this paper. \begin{thm}\label{thm:kerNInt} Let $\varphi\in G^{\tau_0}_{\tau,k}$ such that $0\notin\varphi({\mathbb S})$. If $f\in B^1_{-N}$ satisfies ${\mathcal T}^N_{\varphi}(f)=h\in B^\infty_\tau$, then $f\in B^\infty_\tau$ and $\|f\|_{B_\tau^\infty}\le C\left(\|f\|_{B^1_{-N}}+\|h\|_{B^\infty_\tau}\right)$, where $C>0$ is a finite constant only depending on $\varphi$, $N>0$ and $n$. In particular, $\|f\|_{B_\tau^\infty}\leq C\|f\|_{B_{-N}^1}$, for any $f\in\ker {\mathcal T}^N_{\varphi}$. \varepsilonnd{thm} The corresponding statement for the case $N=0$ is: \begin{thm}\label{thm:ker0Int} Let $\varphi$ be the restriction to ${\mathbb S}$ of a function in $G^{\tau_0}_{\tau,k}$ and let $f\in H^1$. If $0\notin \varphi({\mathbb S})$ and ${\mathcal T}_\varphi (f)=h\in B^\infty_\tau$, then $f\in B^\infty_\tau$ and $\|f\|_{B_\tau^\infty}\le C\left(\|f\|_{H^1}+\|h\|_{B^\infty_\tau}\right)$, where $C>0$ is a finite constant only depending on $\varphi$ and $n$. In particular, $\|f\|_{B_\tau^\infty}\leq C\|f\|_{H^1}$, for any $f\in\ker {\mathcal T}_{\varphi}$. \varepsilonnd{thm} The preceding theorem was proved in \cite[Theorem 3.1]{KD} for $n=1$, $\varphi\in\mathcal Lambda_\tau$ and $h=0$. 
Note that the inequalities in the above theorems are in fact equivalences due to the continuity of both the Toeplitz operator and the embeddings $B^\infty_\tau\subset H^1\subset B^1_{-N}$. For $\tau>1/2$, the restriction to ${\mathbb S}$ of $G_{\tau,k}^{\tau_0}$ contains the space $\mathcal Lambda_\tau$, hence Theorem~{\ref{thm:ker0Int}} includes the result of Theorem \ref{thm:IntrLip} for these cases. However, the same techniques used to prove the above theorems allow us to extend this result to the whole scale of spaces $\mathcal Lambda_\tau$. \begin{cor} If either $f\in B^1_{-N}$, for $N>0$, or $f\in H^1$ and $N=0$, $\varphi\in G_{\tau,k}^{\tau_0}$, $0\notin\varphi({\mathbb S})$ and $\varphi f\in G_{\tau,k}^{\tau_0}+\ker \mathcal{P}^N$, then $f\in B^\infty_\tau$. \varepsilonnd{cor} In particular if $g\in H^1$, then $\overline{g-g(0)}\in\ker\mathcal{P}$, and therefore we have: \begin{cor} If $f,g\in H^1$ satisfy $\varphi=\bar g/f\in \Gamma_{\tau}$ and $0\notin\varphi({\mathbb S})$, then $f,g\in B^\infty_\tau$. \varepsilonnd{cor} The preceding result generalizes Corollary~{\ref{cor:Intarg},} and extends~\cite[Corollary 3.2]{KD} to dimension $n>1$. The paper is organized as follows. In Section~2 we state some properties of spaces considered in this paper and we also recall some integral representation formulas used in the proof of the main theorems. In Section~3 we state our main technical theorem (Theorem~{\ref{thm:kerT1}}) from which we deduce Theorems~{\ref{thm:kerNInt}} and~{\ref{thm:ker0Int}} and its corollaries. We also construct some counterexamples. Finally, Theorem~{\ref{thm:kerT1}} is proved in Section~4. \section{Preliminaries} \subsection{Notations} Throughout the paper, the letter $C$ will denote a positive constant, which may vary from place to place. The notation $f(z)\lesssim g(z)$ means that there exists $C>0$, which does not depend on $z$, $f$ and $g$, such that $f(z)\le C g(z)$. We write $f(z)\approx g(z)$ when $f(z)\lesssim g(z)$ and $g(z)\lesssim f(z)$. Let $\partial_j:=\frac{\partial\,}{\partial z_j}$, for $j=1,\dots,n$. For any multiindex $\alphapha$, i.e. $\alphapha=(\alphapha_1,\cdots,\alphapha_n)\in{\mathbb N}^n$, where ${\mathbb N}$ is the set of non-negative integers, let $|\alphapha|:=\sum_{j=1}^{n}\alphapha_j$ and $\partial_\alphapha:=\frac{\partial^{|\alphapha|}}{\partial z^\alphapha}=\partial_1^{\alphapha_1}\cdots\partial_n^{\alphapha_n}$. We write $|\partial^k\varphi|:=\sum_{|\alphapha|=k}|\partial_\alphapha\varphi|$ and $|d^k\varphi|:=\sum_{|\alphapha|+|\beta|=k}|\partial_\alphapha\bar\partial_\beta\varphi|$. When $n>1$, we also consider the complex tangential differential operators $\mathcal{D}_{i,j}:=\bar z_i\partial_j-\bar z_j\partial_i$ and $\displaystyle{|\partial_T\varphi|:=\sum_{1\le i<j\le n}|\mathcal{D}_{i,j}\varphi|}$. For $n=1$, we write $|\partial_T\varphi|:=0$. We also introduce the following two functions which will be used in the definition of the spaces $G^{\tau_0}_{\tau,k}$. For $\varphi\in \mathcal{C}^1({\mathbb B})$ , let \begin{equation}\label{eqn:def:pphi} \partialphi(z):=(1-|z|^2)|\bar\partial \varphi(z)|+(1-|z|^2)^{1/2}|\bar\partial_T \varphi(z)|. \varepsilonnd{equation} For $t\in{\mathbb R}$, let $\omega_t$ be the function on ${\mathbb B}$ defined by \begin{equation} \label{eqn:omega} \omega_t(z):=\left\{ \begin{array}{ll} (1-|z|^2)^{\min(t,0)},& \mbox{if $t\ne0$,}\\ \log\frac{e}{1-|z|^2},& \mbox{if $t=0$.} \varepsilonnd{array}\right. 
\varepsilonnd{equation} \subsection{Holomorphic Besov spaces} Let $1\le p<\infty$, $k\in{\mathbb N}$ and $\delta>0$. The weighted Sobolev space $L^p_{k,\delta}$ is the completion of the space $\mathcal{C}^\infty(\bar {\mathbb B})$, endowed with the norm $$ \|\partialsi\|_{L^p_{k,\delta}}:={\mathbb B}ig\{\sum_{j=0}^k \int_{{\mathbb B}}|d^j \partialsi(z)|^p(1-|z|^2)^{\delta p-1}d\nu(z){\mathbb B}ig\}^{1/p}. $$ When $k=0$, we will just write $L_\delta^p=L^p_{0,\delta}$. We extend this definition to the case $p=\infty$, so that $L^\infty_{k,\delta}$ is the subspace of functions $\partialsi$ in the Sobolev space $L^1_{k,\delta+1}$ satisfying $$ \|\partialsi\|_{L^\infty_{k,\delta}}:=\sum_{j=0}^k\sup_{z\in {\mathbb B}} |d^j \partialsi(z)|(1-|z|^2)^{\delta}<\infty. $$ If $1\le p\le\infty$ and $s\in{\mathbb R}$, the holomorphic Besov space $B^p_s$ is defined to be $B^p_s:=H\cap L^p_{k,k-s}$, for some $k\in{\mathbb N}$, $k>s$. It is well known that $\|\cdot\|_{L^p_{m,m-s}}$ and $\|\cdot\|_{L^p_{k,k-s}}$ are equivalent norms on $B^p_s$, for any $m,k\in{\mathbb N}$, $m,k>s$. Note that if $s=0$, then $B^\infty_0$ is the Bloch space and if $s>0$ then $B^\infty_s$ coincides with the space of holomorphic functions on ${\mathbb B}$ whose boundary values are in the corresponding Lipschitz-Zygmund space $\mathcal Lambda_s$ (see the next subsection for the precise definitions of these last two spaces). \begin{prop}[{\cite[Theorems 5.13,14]{Bea-Bur}}] \label{prop:embed} Let $1\le p\le q\le \infty$ and let $s,t\in{\mathbb R}$. Then: \begin{enumerate} \item \label{item:embed2} If $s>t$, then $B^p_s\subset B^p_t$. \item For any $\varepsilon>0$, $B^1_0\subset H^1\subset B^1_{-\varepsilon}$. \item \label{item:embed3} If $s-n/p=t-n/q$, then $B^1_{s+n/p'}\subset B^p_s\subset B^q_t\subset B^\infty_{s-n/p}$. \varepsilonnd{enumerate} \varepsilonnd{prop} \subsection{The space $G_{\tau,k}^{\tau_0}$}\label{sec:G} In this section we define the spaces $G_{\tau,k}^{\tau_0}$ and we state some of their main properties. \begin{defn} \label{def:G} Let $0<\tau_0\le \tau$, $\tau_0< 1/2$, and let $k>\tau$ be an integer. The space $G^{\tau_0}_{\tau,k}$ consists of all functions $\varphi\in \mathcal{C}^{k}({\mathbb B})\cap \mathcal{C}(\bar {\mathbb B})$ satisfying $$ \|\varphi\|_{G^{\tau_0}_{\tau,k}} :=\sum_{j=0}^k\sup_{z\in {\mathbb B}} \frac{|\partial^j \varphi(z)|}{\omega_{\tau-j}(z)}+\sup_{z\in {\mathbb B}}\frac{\partialphi(z)}{(1-|z|^2)^{\tau_0}}<\infty, $$ \varepsilonnd{defn} Note that $H\cap G^{\tau_0}_{\tau,k}=B^\infty_\tau$, and $G^{\tau_0}_{\tau,k}\cdot L^p_\delta\subset L^p_\delta$, since $G^{\tau_0}_{\tau,k}\subset L^{\infty}({\mathbb B})$. The following embedding is a consequence of the definition of $G^{\tau_0}_{\tau,k}$ and the fact that $(1-|z|^2)^s\le (1-|z|^2)^t$ and $\omega_s(z)\lesssim \omega_t(z)$, if $s>t$. \begin{lem} \label{lem:embedG} $G^{\tau_0}_{\tau,k}\subset G^{\vartheta_0}_{\vartheta,m}$, provided that $\vartheta_0\le\tau_0$, $\vartheta\le\tau$ and $m\le k$. \varepsilonnd{lem} In order to obtain multiplicative properties of the spaces $G^{\tau_0}_{\tau,k}$, we first state some properties of the function $\omega_t$. \begin{lem}\label{lem:poteomegas} Let $a,b\in{\mathbb R}$. Then $$ (1-|z|^2)^a\,\omega_b(z) \lesssim (1-|z|^2)^c, $$ for every $c\in{\mathbb R}$ such that $c<a$ and $c\le a+b$. 
\varepsilonnd{lem} \begin{proof} Just note that $$ (1-|z|^2)^a\,\omega_b(z) = \left\{ \begin{array}{ll} (1-|z|^2)^{\min(a+b,a)}, & \mbox{if $b\ne 0$} \\ (1-|z|^2)^a \log\frac{e}{1-|z|^2}, & \mbox{if $b= 0$} \varepsilonnd{array} \right\} \lesssim (1-|z|^2)^c, $$ for every $c\in{\mathbb R}$ such that $c<a$ and $c\le a+b$. \varepsilonnd{proof} \begin{lem}\label{lem:prodomegas} Let $\vartheta,\tau>0$, $k\in{\mathbb R}$ and $m\in{\mathbb Z}$ such that $m\ge0$. Then $$ S^{\vartheta,\tau}_{m,k}:= \sum_{i=0}^{m} \omega_{\vartheta-i}\,\omega_{\tau+i-k} \lesssim \omega_{\vartheta-m}+\omega_{\tau-k}. $$ \varepsilonnd{lem} \begin{proof} We estimate the different products $\omega_{\vartheta-i}\,\omega_{\tau+i-k}$ as follows:\vspace*{5pt} \noindent $\bullet$ If $i>k-\tau$, then $\omega_{\vartheta-i}\,\omega_{\tau+i-k}=\omega_{\vartheta-i}\lesssim \omega_{\vartheta-m}$, since $\vartheta-m\le\vartheta-i$. \vspace*{5pt} \noindent $\bullet$ If $i<\vartheta$, then $\omega_{\vartheta-i}\,\omega_{\tau+i-k}=\omega_{\tau+i-k}\lesssim \omega_{\tau-k}$, since $\tau-k\le\tau+i-k$. \vspace*{5pt} \noindent $\bullet$ If $i>\vartheta$ and $i\le k-\tau$, then $$ \omega_{\vartheta-i}(z)\omega_{\tau+i-k}(z) =(1-|z|)^{\vartheta-i}\omega_{\tau+i-k}(z)\lesssim (1-|z|)^{\tau-k}=\omega_{\tau-k}(z), $$ by Lemma~{\ref{lem:poteomegas}}, since $\tau-k<\tau-k+\vartheta=(\vartheta-i)+(\tau+i-k)$ and $\tau-k\le -i<-\vartheta<0$. \vspace*{5pt} \noindent $\bullet$ If $i=\vartheta$ and $i<k-\tau$, then $$ \omega_{\vartheta-i}(z)\omega_{\tau+i-k}(z)=(1-|z|)^{\tau+i-k}\omega_0(z)\lesssim (1-|z|)^{\tau-k}=\omega_{\tau-k}(z), $$ by Lemma~{\ref{lem:poteomegas}}, since $\tau-k<\tau-k+\vartheta=\tau+i-k<0$. \vspace*{5pt} \noindent $\bullet$ If $\vartheta=i=k-\tau$, then $$ \omega_{\vartheta-i}(z)\omega_{\tau+i-k}(z)=\omega_0(z)^2\lesssim (1-|z|)^{\tau-k}=\omega_{\tau-k}(z), $$ since $\tau-k=-\vartheta<0$. \varepsilonnd{proof} \begin{prop} \label{prop:prodGs} $$ \|\varphi\partialsi\|_{G^{\vartheta_0}_{\vartheta,m}}\lesssim \|\varphi\|_{G^{\tau_0}_{\tau,k}}\|\partialsi\|_{G^{\vartheta_0}_{\vartheta,m}} \qquad(\varphi\in G^{\tau_0}_{\tau,k},\,\partialsi\in G^{\tau_0}_{\vartheta,m}), $$ provided that $\vartheta_0\le\tau_0$, $\vartheta\le\tau$ and $m\le k$. \varepsilonnd{prop} \begin{proof} If $\alphapha\in{\mathbb N}^n$, $|\alphapha|=l\le m$, then \begin{eqnarray*} |\partial_\alphapha(\varphi\partialsi)| &\lesssim& \sum_{\beta+\gamma=\alphapha}|\partial_\beta\varphi||\partial_\gamma\partialsi| \lesssim \|\varphi\|_{G^{\tau_0}_{\tau,k}}\|\partialsi\|_{G^{\vartheta_0}_{\vartheta,m}}S^{\vartheta,\tau}_{l,l}\\ &\lesssim& \|\varphi\|_{G^{\tau_0}_{\tau,k}}\|\partialsi\|_{G^{\vartheta_0}_{\vartheta,m}}\omega_{\vartheta-l}, \varepsilonnd{eqnarray*} since, by Lemma~{\ref{lem:prodomegas}}, $S^{\vartheta,\tau}_{l,l}\lesssim\omega_{\vartheta-l}+\omega_{\tau-l}\lesssim\omega_{\vartheta-l}$. On the other hand, $$ \widetilde{\varphi\partialsi}(z)\le|\varphi(z)|\widetilde{\partialsi}(z)+\widetilde{\varphi}(z)|\partialsi(z)| \lesssim \|\varphi\|_{G^{\tau_0}_{\tau,k}}\|\partialsi\|_{G^{\vartheta_0}_{\vartheta,m}}(1-|z|^2)^{\vartheta_0}, $$ and the proof is complete. \varepsilonnd{proof} Our next goal is to show the connection between the spaces $G^{\tau_0}_{\tau,k}$ and both the non-isotropic Lipschitz-Zygmund spaces $\Gamma_\tau$ and the classical Lipschitz-Zygmund spaces $\mathcal Lambda_\tau$. 
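Before turning to this, we note a particular case of Proposition~{\ref{prop:prodGs}} that will be used later: taking $\vartheta_0=\tau_0$, $\vartheta=\tau$ and $m=k$ shows that $G^{\tau_0}_{\tau,k}$ is an algebra, that is,
$$
\|\varphi\psi\|_{G^{\tau_0}_{\tau,k}}\lesssim\|\varphi\|_{G^{\tau_0}_{\tau,k}}\,\|\psi\|_{G^{\tau_0}_{\tau,k}}\qquad(\varphi,\psi\in G^{\tau_0}_{\tau,k}).
$$
This is the form in which the proposition is applied in Section~4, for instance to products such as $\frac{\chi}{\varphi}\,h$.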
If $0<\tau<1$, the classical Lipschitz-Zygmund space on ${\mathbb S}$, $\mathcal Lambda_\tau=\mathcal Lambda_{\tau}({\mathbb S})$, with respect to the Euclidean metric consists of all the functions $\varphi \in\mathcal{C}({\mathbb S})$ such that $$ \|\varphi\|_{\mathcal Lambda_\tau}:=\|\varphi\|_{\infty}+\sup_{\substack{\zeta,\varepsilonta\in S\\ \zeta\ne\varepsilonta}} \frac{|\varphi(\zeta)-\varphi(\varepsilonta)|}{|\zeta-\varepsilonta|^\tau}<\infty. $$ If $k$ is a positive integer and $k<\tau<k+1$, then $\mathcal Lambda_\tau=\mathcal Lambda_{\tau}({\mathbb S})$ consists of all the functions $\varphi\in \mathcal{C}^k({\mathbb S})$ such that $$ \|\varphi\|_{\mathcal Lambda_\tau}:=\|\varphi\|_{\mathcal{C}^k}+\sum_{|\alphapha|+|\beta|=k} \|\partial_\alphapha \bar\partial_\beta\varphi\|_{\mathcal Lambda_{\tau-k}({\mathbb S})}<\infty. $$ When $\tau$ is a positive integer, $\mathcal Lambda_\tau$ is defined analogously by using second order differences. The spaces $\mathcal Lambda_\tau({\mathbb B})$ are defined in a similar way. The main properties of the spaces $\mathcal Lambda_\tau$ can be found, for instance, in the expository paper~{\cite{Kr1}.} It is well known (see~{\cite[\S\,15]{Kr1})} that a continuous function $\varphi$ is in $\mathcal Lambda_\tau$ if and only if, for some (any) integer $k>\tau$, its harmonic extension $\Phi$ on ${\mathbb B}$ satisfies \begin{equation}\label{eqn:lipder} \sup_{z\in {\mathbb B}}(1-|z|^2)^{k-\tau}|d^k\Phi(z)| <\infty. \varepsilonnd{equation} We recall that if~{\varepsilonqref{eqn:lipder}} holds for some function $\varphi\in\mathcal{C}^k({\mathbb B})$, then $\varphi\in\mathcal Lambda_\tau({\mathbb B})$ (see~{\cite[Theorem 15.7]{Kr1}}). This fact and the estimate $|d\varphi(z)|\lesssim \|\varphi\|_{G^{\tau_0}_{\tau,k}}(1-|z|^2)^{\tau_0-1}$ give \begin{prop} \label{prop:GLip0} $G^{\tau_0}_{\tau,k}\subset \mathcal Lambda_{\tau_0}({\mathbb B})$. \varepsilonnd{prop} We also consider the Lipschitz-Zygmund space on ${\mathbb S}$ with respect to the pseudodistance $d(\zeta,\varepsilonta)=|1-\bar\zeta\varepsilonta|$, which is denoted by $\Gamma_{\tau}({\mathbb S})$. If $0<\tau<1/2$, this space is defined just as $\mathcal Lambda_\tau$ but replacing the Euclidean distance $|\zetaeta-\varepsilonta|$ by $d(\zeta,\varepsilonta)$. For values $\tau\ge 1/2$ the definition is given in terms of Lipschitz conditions of certain complex tangential derivatives (see \cite[pp.\,670-1]{Bo-Br-Gr} and the references therein for the precise definitions and main properties). We recall that if $f\in H({\mathbb B})$ has boundary values $f^*$, then $f^*$ in $\mathcal Lambda_\tau$, if and only if $f^*\in\Gamma_\tau$ (see \cite{Ste} or \cite[\S 6.4]{Ru} or \cite[\S 8.8]{Kr3} and the references therein. See also \cite[pp.\,670-1]{Bo-Br-Gr}). If $0<\tau<n$, the functions in $\Gamma_\tau$ can be described in terms of their invariant harmonic extensions. In this case, we have that $\varphi$ is in $\Gamma_\tau$ if and only if, for some (any) integer $k>\tau$, its invariant harmonic extension $\Phi$ on ${\mathbb B}$ satisfies~{\varepsilonqref{eqn:lipder}}. This characterization fails to be true when $\tau\ge n$ (see~{\cite[Chapter 6]{Kr2}} for more details). Similarly to what happens in the holomorphic case, the complex tangential derivatives of the functions in the space $\Gamma_\tau$ are more regular, in the sense that $\mathcal{D}_{ij}\varphi\in \Gamma_{\tau-1/2}$ for $i,j=1,\dots,n$. The next results relate the spaces $\mathcal Lambda_\tau$ and $\Gamma_\tau$ to $G_{\tau,k}^{\tau_0}$. 
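For later use we record the elementary comparison between the pseudodistance $d(\zeta,\eta)=|1-\bar\zeta\eta|$ and the Euclidean distance on ${\mathbb S}$: since $|\zeta|=|\eta|=1$, we have $1-\bar\zeta\eta=\sum_j\bar\zeta_j(\zeta_j-\eta_j)$, so the Cauchy-Schwarz inequality and the identity $|\zeta-\eta|^2=2\,{\rm Re}\,(1-\bar\zeta\eta)$ give
$$
\tfrac12\,|\zeta-\eta|^2\le|1-\bar\zeta\eta|\le|\zeta-\eta|\qquad(\zeta,\eta\in{\mathbb S}).
$$
The second inequality is the one behind the inclusion $\Gamma_\tau\subset\Lambda_\tau$ noted in the introduction, while the first one gives, for $0<\tau<1$ (where both spaces are defined by first order differences), the inclusion $\Lambda_\tau\subset\Gamma_{\tau/2}$ used in the proof of Theorem~{\ref{thm:LipE}} below.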
\begin{prop}\label{prop:GLip1}\hspace*{\fill}\mbox{ } \noindent a) If $n=1$ then the harmonic extension of a function in $\mathcal Lambda_\tau$ belongs to any space $G_{\tau,k}^{\tau_0}$. \noindent b) If $n>1$ and $\tau>1/2$ then every $\varphi\in\mathcal Lambda_\tau$ is the restriction of a function $\Phi\in G_{\tau,k}^{\tau_0}$. Namely, for any integer $k>\tau$, the harmonic extension $\Phi$ of $\varphi$ satisfies that: \begin{itemize} \item $\Phi\in G_{\tau,k}^{\tau-1/2}$, when $1/2<\tau<1$. \item $\Phi\in G_{\tau,k}^{\tau_0}$, for any $0<\tau_0<1/2$, when $\tau\ge 1$. \varepsilonnd{itemize} \varepsilonnd{prop} \begin{cor} \label{cor:Glip1} If either $n=1$ or $n>1$ and $\tau>1/2$, then every $\varphi\in\mathcal Lambda_\tau$ is the restriction of a function $\Phi\in G_{\tau,k}^{\tau_0}$, for some $0<\tau_0$ and for any integer $k$. \varepsilonnd{cor} \begin{prop} \label{prop:GLip2} If $n>1$, $0<\tau<n$ and $\varphi\in\Gamma_\tau$, then, for any integer $k>\tau$, its invariant harmonic extension $\Phi$ satisfies that: \begin{itemize} \item $\Phi\in G^{\tau}_{\tau,k}$, when $0<\tau<1/2$. \item $\Phi\in G^{\tau_0}_{1,k}$, for any $0<\tau_0<1/2$, when $\tau\ge 1/2$. \varepsilonnd{itemize} \varepsilonnd{prop} \begin{cor} \label{cor:Glip2} Every $\varphi\in \Gamma_\tau$, $\tau>0$, is the restriction of a function $\Phi\in G_{\tau,k}^{\tau_0}$, for some $0<\tau_0$ and for any integer $k$. \varepsilonnd{cor} \subsection{Representation formulas and estimates} In this subsection we recall some well-known results on the integral representation formulas obtained in \cite{Char}. We begin by introducing the following nonnegative integral kernels and their corresponding integral operators. \begin{defn} Let $N,M,L\in{\mathbb R}$ such that $N>0$ and $L<n$. Then $$ \mathcal{K}^{N}_{M,L}(w,z):=\frac{(1-|w|^2)^{N-1}}{|1-\bar w z|^{M}D(w,z)^L}, \quad(z, w\in \bar{\mathbb B},\,\,z\ne w), $$ where $D(w,z):=|1-\bar w z|^2-(1-|w|^2)(1-|z|^2)$. The associated integral operator is also denoted by $\mathcal{K}^{N}_{M,L}$: $$ \mathcal{K}^{N}_{M,L}(\partialsi)(z):=\int_{{\mathbb B}} \mathcal{K}^{N}_{M,L}(w,z)\partialsi(w)\,d\nu(w). $$ \varepsilonnd{defn} Note that $D(w,z)=|(w-z)\bar z|^2+(1-|z|^2)|w-z|^2$, so, for every $z\in {\mathbb B}$ such that $1-|z|^2\ge \delta>0$, we have that \begin{equation}\label{estimate:kernel:K} \mathcal{K}^{N}_{M,L}(w,z)\simeq \left\{\begin{array}{ll} |w-z|^{-2L}, & \mbox{ if $|w-z|<(1-|z|)/2$,}\\ (1-|w|^2)^{N-1}, & \mbox{ if $|w-z|\ge(1-|z|)/2$.} \varepsilonnd{array}\right. \varepsilonnd{equation} \begin{thm}[{\cite{Char}}]\label{thm:rep} Let $N>0$. Then every function $\partialsi\in\mathcal{C}^1(\bar {\mathbb B})$ decomposes as \begin{equation} \label{eqn:rep} \partialsi= \mathcal{P}^N(\partialsi)+\mathcal{K}^N(\bar\partial \partialsi), \varepsilonnd{equation} where $$ \mathcal{K}^N(\bar\partial \partialsi)(z):=\int_{{\mathbb B}}\mathcal{K}^N(w,z)\wedge\bar\partial\partialsi(w) $$ and $\mathcal{K}^N(w,z)$ is an $(n,n-1)$-form (on $w$) of class $\mathcal{C}^{\infty}$ on ${\mathbb B}\times {\mathbb B}$ outside its diagonal. In particular if $\partialsi$ is holomorphic on ${\mathbb B}$ then $\partialsi=\mathcal{P}^N(\partialsi)$. Moreover, $\mathcal{K}^N(w,z)$ satisfies the estimate \begin{equation}\label{eqn:estimate:kernel:K} |\mathcal{K}^N(w,z)\wedge\bar\partial\partialsi(w)|\lesssim K^N_{N-n+1,n-1/2}(w,z) \partialpsi(w), \varepsilonnd{equation} for any $\partialsi\in\mathcal{C}^1({\mathbb B})$, where $\partialpsi$ is defined as in~{\varepsilonqref{eqn:def:pphi}}. 
\varepsilonnd{thm} Then, it is clear that, \begin{equation}\label{eqn:estimates:P:K} |\mathcal{\mathcal{P}}^N(\partialsi)|\lesssim {\mathcal K}^N_{n+N,0}(|\partialsi|) \quad\mbox{ and }\quad |\mathcal{{\mathcal K}}^N(\bar\partial\partialsi)|\lesssim {\mathcal K}^N_{N-n+1,n-1/2}(\partialpsi). \varepsilonnd{equation} \begin{rem}\label{rem:thm:rep} The above representation formula will be applied in a more general setting to functions $\partialsi=\varphi f$ where $\varphi\in G^{\tau_0}_{\tau,k}$ and either $f\in B^1_{-N}$, for $N>0$, or $f\in H^1$, for $N=0$. The validity of the formula for this class of functions is obtained by applying the dominated convergence theorem and Theorem~{\ref{thm:rep}} to the functions $\partialsi_r(z)=\partialsi(rz)$. \varepsilonnd{rem} \begin{lem}[{\cite[Lemma I.1]{Char}}] \label{lem:estK} $$ \int_{{\mathbb B}} \mathcal{K}^{N}_{M,L}(w,z)d\nu(w)\lesssim\omega_t(z), $$ where $t:=n+N-M-2L$ is the so-called {\sf type} of the kernel $\mathcal{K}^{N}_{M,L}$. \varepsilonnd{lem} Observe that from the above estimate we deduce that if ${\mathcal K}^N_{M,L}$ is a kernel of type 0, $0<\delta<N$ and $\partialsi(z)=(1-|z|^2)^{-\delta}$, then ${\mathcal K}^N_{M,L}(\partialsi)\lesssim \partialsi$. As a consequence of that result and Schur's lemma we have: \begin{lem} \label{lem:estK:Lp} If ${\mathcal K}^N_{M,L}$ is a kernel of type 0 and $0<\delta<N$, then ${\mathcal K}^{N}_{M,L}$ maps boundedly $L^p_{\delta}$ to itself. \varepsilonnd{lem} By applying H\"older's inequality we deduce the following pointwise estimate of the operators $K_{M,L}^N$, which will be often used in the forthcoming sections. \begin{lem} \label{lem:HoldK} Let $N\ge 0$, $\tau>0$, $p\geq 1$ and $0<\varepsilon<N+\tau$. Then $\displaystyle{\left({\mathcal K}^{N+\tau}_{N-n+1,n-1/2}(|\partialsi|)\right)^p\lesssim {\mathcal K}^{(N+\tau-\varepsilon)p}_{Np-n+1,n-1/2}(|\partialsi|^p).}$ \varepsilonnd{lem} In the next lemma we state some differentiation formulas for both operators $\mathcal{P}^N$ and $\mathcal{K}^N$. \begin{lem} \label{lem:derPN} Let $N\ge 0$, $\alphapha\in{\mathbb N}^n$ and $k=|\alphapha|$. \begin{enumerate} \item \label{item:derPN1} If $\partialsi\in \mathcal{C}^{k}(\bar {\mathbb B})$, then $\partial_\alphapha \mathcal{P}^{N}(\partialsi)=\mathcal{P}^{N+k}(\partial_\alphapha\partialsi)$. \item \label{item:derPN2} If $\partialsi\in \mathcal{C}^{k+1}(\bar {\mathbb B})$, then $\partial_\alphapha \mathcal{K}^{N}(\bar\partial\partialsi)=\mathcal{K}^{N+k}(\bar\partial\partial_\alphapha\partialsi)$. \varepsilonnd{enumerate} \varepsilonnd{lem} \begin{proof} These results are well known (see, for instance, \cite[\S\,5]{Br-Or}). For the sake of completeness, we give a brief sketch of the proof. For $N>0$, {\varepsilonqref{item:derPN1}}~follows from the equation $\frac{\partial }{\partial z_j}\mathcal{P}^N(w,z)=$ $\frac{\partial }{\partial w_j}\mathcal{P}^{N+1}(w,z)$ and integration by parts, while~{\varepsilonqref{item:derPN2}} is just a direct consequence of~{\varepsilonqref{eqn:rep}} and~{\varepsilonqref{item:derPN1}}. The case $N=0$ is deduced from the corresponding formulas for $N>0$ by taking $N\searrow 0$. \varepsilonnd{proof} \begin{rem} \label{rem:derPN} The above differentiation formulas will be applied to functions $\partialsi=\varphi f$ where $\varphi\in G^{\tau_0}_{\tau,k}$ and $f\in B^\infty_s$, $s>0$. The validity of the formulas in this more general setting can be shown by applying Lemma~{\ref{lem:derPN}} to $\partialsi_r(z)=\partialsi(rz)$ and the dominated convergence theorem. 
\varepsilonnd{rem} Now we state some regularity properties related to the integral operator $\mathcal{P}^N$. \begin{prop} \label{prop:contG} \hspace*{\fill}\mbox{ } \begin{enumerate} \item If $0<\delta<N$ and $1\le p<\infty$, then $\mathcal{P}^N$ maps continuously $L^p_\delta$ in $B^p_{-\delta}$. \item If $N\ge 0$, then $\mathcal{P}^N$ maps continuously $G^{\tau_0}_{\tau,k}$ in $B^\infty_\tau$. \item If $N=0$, then $\mathcal{P}$ maps continuously $\mathcal Lambda_{\tau}$ in $B^\infty_\tau$. \varepsilonnd{enumerate} \varepsilonnd{prop} \begin{proof} The proof of (i) can be found in~{\cite[Theorem 2.10]{Zhu}.} The proof of (ii) reduces to show that every $\partialsi\in G^{\tau_0}_{\tau,k}$ satisfies $$ |\partial^k\mathcal{P}^N(\partialsi)(z)|= |\mathcal{P}^{N+k}(\partial^k\partialsi)(z)|\lesssim K^{N+\tau}_{n+N+k,0}(1)(z)\lesssim (1-|z|^2)^{\tau-k}, $$ which follows from Lemmas~{\ref{lem:derPN}} and~{\ref{lem:estK}.} Assertion (iii) can be found in \cite[\S\, 6.4]{Ru}. \varepsilonnd{proof} \begin{prop} \label{prop:contTN} If $\varphi\in G^{\tau_0}_{\tau,k}$, then ${\mathcal T}^N_\varphi$ maps boundedly $B^1_{-N}$ to itself, for $N>0$, and $H^1$ to itself, for $N=0$. \varepsilonnd{prop} \begin{proof} The second assertion is a consequence of $G^{\tau_0}_{\tau,k}\subset \mathcal Lambda_{\tau_0}$ and {\cite[Theorem 6.5.4]{Ru},} so let us prove the first one. Assume $N>0$. By Proposition~{\ref{prop:GLip0},} $\varphi\in\mathcal Lambda_{\tau_0}({\mathbb B})$, so $|\varphi(w)-\varphi(z)|\lesssim|w-z|^{\tau_0}\lesssim|1-\bar w z|^{\tau_0/2}$. Then, since ${\mathcal T}^N_\varphi(f)(z)={\mathcal T}^N_{\varphi-\varphi(z)}(f)(z)+\varphi(z)f(z)$, we have that $$ |{\mathcal T}^N_\varphi(f)|\lesssim{\mathcal K}^N_{n+N-\tau_0/2,0}(|f|)+|f|. $$ By Fubini's Theorem and Lemma~{\ref{lem:estK},} $\|{\mathcal K}^N_{n+N-\tau_0/2,0}(|f|)\|_{L^1_N}\lesssim\|f\|_{L^1_N}$. Therefore $\|{\mathcal T}^N_\varphi(f)\|_{L^1_N}\lesssim\|f\|_{L^1_N}$, and the proof is complete. \varepsilonnd{proof} \section{Toeplitz operators with symbols in $G^{\tau_0}_{\tau,k}$} In this section we state a general theorem from which we will deduce the results stated in the introduction. The proof of this general theorem will be postponed to the next section. Observe that if the functions $\varphi\in G^{\tau_0}_{\tau,k}$ and $f\in B^1_{-N}$, $N\ge0$, satisfy the equation ${\mathcal T}^N_{\varphi}(f)=h\in B^\infty_\tau$, then, taking into account Remark~{\ref{rem:thm:rep}}, formula~{\varepsilonqref{eqn:rep}} gives that \begin{equation} \label{eqn:form210} \varphi f={\mathcal K}^N(f\bar\partial\varphi)+h. \varepsilonnd{equation} Note that, by \varepsilonqref{eqn:estimate:kernel:K}, $$|{\mathcal K}^N(f\bar\partial\varphi)|\lesssim {\mathcal K}^N_{N-n+1,n-1/2}(|f|\tilde \varphi) \lesssim {\mathcal K}^{N+\tau_0}_{N-n+1,n-1/2}(|f|)$$ and therefore by~{\varepsilonqref{estimate:kernel:K}}, ${\mathcal K}^N(f\bar\partial\varphi)$ is pointwise defined even if $f\in B^1_{-N_0}$, for some $N<N_0<N+\tau_0$. This fact and the inclusion $H^1\subset B^1_{-N_0}$ for any $N_0>0$, allow us to unify the proofs of Theorems~{\ref{thm:kerNInt}} and~{\ref{thm:ker0Int},} using the following result: \begin{thm}\label{thm:kerT1} Let $N\ge0$ and let $\varphi\in G^{\tau_0}_{\tau,k}$ be such that $0\notin\varphi({\mathbb S})$. If $0<N_0<N+\tau_0$, $f\in B^1_{-N_0}$ and $h\in B_\tau^\infty$ satisfy \varepsilonqref{eqn:form210}, then $f\in B^\infty_{\tau}$ and $\|f\|_{B_\tau^\infty}\lesssim\|f\|_{B^1_{-N_0}}+\|h\|_{B^\infty_\tau}$. 
\varepsilonnd{thm} Now we easily deduce Theorems~{\ref{thm:kerNInt}} and~{\ref{thm:ker0Int}} all at once: \begin{thm} \label{thm:kerT2} Let $\varphi\in G^{\tau_0}_{\tau,k}$ be such that $0\notin \varphi({\mathbb S})$. \begin{enumerate} \item If $f\in B^1_{-N}$ and ${\mathcal T}^N_{\varphi}(f)\in B^\infty_\tau$, for some $N>0$, then $f\in B^\infty_{\tau}$ and $\|f\|_{B_\tau^\infty}\lesssim\|f\|_{B^1_{-N}}+\|h\|_{B^\infty_\tau}$. \item If $f\in H^1$ and ${\mathcal T}_{\varphi}(f)\in B^\infty_\tau$, then $f\in B^\infty_{\tau}$ and $\|f\|_{B_\tau^\infty}\lesssim\|f\|_{H^1}+\|h\|_{B^\infty_\tau}$. \varepsilonnd{enumerate} \varepsilonnd{thm} \begin{proof} As we pointed out at the beginning of the section, if ${\mathcal T}^N_{\varphi}(f)=h\in B^\infty_\tau$, $N\ge0$, then $\varphi$ and $f$ satisfy \varepsilonqref{eqn:form210}. Therefore (i) directly follows from Theorem~{\ref{thm:kerT1}} (case $N>0$). By Proposition \ref{prop:embed}, $H^1\subset B^1_{-t}$, for every $t>0$, and, in particular, $H^1\subset B^1_{-N_0}$, for every $0<N_0<\tau_0$, so (ii) also follows from Theorem~{\ref{thm:kerT1}} (case $N=0$). \varepsilonnd{proof} As an immediate consequence of Theorem~{\ref{thm:kerT2}} we obtain the following corollaries. \begin{cor}\label{cor:kerT20} Let $\tau>0$ and assume that $\varphi$ satisfy that $0\notin\varphi({\mathbb S})$, and one of the following conditions: \begin{enumerate} \item $n=1$ and $\varphi\in \mathcal Lambda_\tau$. \item $n>1$, $\tau>\frac12$ and $\varphi\in \mathcal Lambda_\tau$. \item $n>1$, $\tau\leq \frac12$ and $\varphi\in \Gamma_\tau$. \varepsilonnd{enumerate} If $f\in H^1$ and ${\mathcal T}_{\varphi}(f)\in B^\infty_\tau$, then $f\in B^\infty_{\tau}$. \varepsilonnd{cor} \begin{proof} This is a consequence of Theorem \ref{thm:kerT2} and Corollaries \ref{cor:Glip1} and \ref{cor:Glip2}. \varepsilonnd{proof} \begin{cor} Let $\varphi\in G^{\tau_0}_{\tau,k}$ be such that $0\notin \varphi({\mathbb S})$. \begin{enumerate} \item If $N>0$ and $f\in B^1_{-N}$ satisfies that $\varphi f\in G_{\tau,k}^{\tau_0}+\ker \mathcal{P}^N$, then $f\in B^\infty_{\tau}$. \item If $f\in H^1$ satisfies that $\varphi f\in G_{\tau,k}^{\tau_0}+\ker \mathcal{P}$, then $f\in B^\infty_{\tau}$. \varepsilonnd{enumerate} \varepsilonnd{cor} \begin{proof} This is a consequence of Theorem~{\ref{thm:kerT2}} and Proposition~{\ref{prop:contG}(ii)}. \varepsilonnd{proof} \begin{cor} If $\varphi\in G^{\tau_0}_{\tau,k}$, then $\ker({\mathcal T}_{\varphi}^N-\lambda{\mathcal I})\subset B^\infty_\tau$, for any $\lambda\in{\mathbb C}\setminus\varphi({\mathbb S})$ and $N\ge0$. In particular, $\ker {\mathcal T}^N_\varphi\subset B^\infty_\tau$, whenever $0\notin\varphi({\mathbb S})$. \varepsilonnd{cor} \begin{proof} Since ${\mathcal T}_{\varphi}^N-\lambda{\mathcal I}={\mathcal T}_{\varphi-\lambda}^N$, it directly follows from Theorem~{\ref{thm:kerT2}}. \varepsilonnd{proof} \begin{rem} If the condition $0\notin \varphi({\mathbb S})$ is omitted, then $\ker{\mathcal T}^N_\varphi$ is not necessarily contained in $B^\infty_\tau$. For $n>1$, this result follows by taking $\varphi(z)=\bar z_1$, and observing that $\ker{\mathcal T}^N_\varphi$ contains any function in $B^1_{-N}$, if $N>0$, ($H^1$, if $N=0$), which does not depend on the first variable. 
For $n=1$ we may consider the symbol $\varphi(z)=\bar z^{\,m+1}(1-z)^{m+\alphapha}$ and the function $f(z)=(1-z)^{-\alphapha}$, where $0<\alphapha<1$ and $m\in{\mathbb N}$ such that $m+\alphapha\ge\tau$, which satisfy $\varphi\in G^{\tau_0}_{\tau,k}$ and $f\in\ker{\mathcal T}^N_\varphi\setminus B^{\infty}_{\tau}$. \varepsilonnd{rem} Now we extend Corollary \ref{cor:kerT20}. \begin{thm} \label{thm:LipE} Let $\tau>0$ and let $\varphi\in \mathcal Lambda_\tau$ be a non-vanishing function on ${\mathbb S}$. If $f\in H^1$ and ${\mathcal T}_\varphi(f)\in B^\infty_\tau$, then $f\in B^\infty_\tau$. \varepsilonnd{thm} \begin{proof} If $\tau>1/2$, the result is just a consequence of Corollary~{\ref{cor:Glip1}} and part (ii) of Theorem~{\ref{thm:kerT2}.} Now assume that $\tau\le 1/2$. Since $|w-z|^2\le 2|1-\bar w z|$, we have that $\mathcal Lambda_\tau\subset \Gamma_{\tau/2}$. And then Corollary~{\ref{cor:Glip2}} and part (ii) of Theorem~{\ref{thm:kerT2}} show that $f\in B^\infty_{\tau/2}$. Thus $|\partial f(z)|\lesssim (1-|z|^2)^{\tau/2-1}$, but we want to prove that $|\partial f(z)|\lesssim (1-|z|^2)^{\tau-1}$, or equivalently $|\Phi(z)\partial f(z)|\lesssim (1-|z|^2)^{\tau-1}$, $\Phi$ being the harmonic extension of $\varphi$ to ${\mathbb B}$. (Recall that, since $0\not\in\varphi({\mathbb S})$, there is $0<r<1$ so that $|\Phi(z)|\simeq 1$ for $r\le|z|\le 1$.) In order to show the estimate note that $|d\Phi(z)|\lesssim (1-|z|^2)^{\tau-1}$, which implies that $|\Phi(z)-\Phi(w)|\lesssim|z-w|^{\tau}\lesssim|1-\bar w z|^{\tau/2}$, for $z,w\in {\mathbb B}$. On the other hand, since $f\in B^\infty_{\tau/2}$, $\partial_jf\in B^1_{-1}$ so $\partial_jf=\mathcal{P}^1(\partial_jf)$ and therefore $$ \Phi(z) \partial_j f(z)=\mathcal{P}^1((\Phi(z)-\Phi)\partial_jf)(z)+\mathcal{P}^1(\partial_j(\Phi f))(z)-\mathcal{P}^1(f\partial_j\Phi)(z). $$ By Lemma~{\ref{lem:derPN},} $\mathcal{P}^1(\partial_j(\Phi f))=\partial_j{\mathcal T}_{\varphi}f$. Hence $$ |\Phi(z)\partial_j f(z)| \lesssim {\mathcal K}^{\tau/2}_{n+1-\tau/2,0}(1)(z) + (1-|z|^2)^{\tau-1} + {\mathcal K}^{\tau}_{n+1,0}(1)(z), $$ and then Lemma~{\ref{lem:estK}} shows that $|\Phi(z)\partial_j f(z)|\lesssim (1-|z|^2)^{\tau-1}$. \varepsilonnd{proof} Since $\mathcal{P}$ maps $\mathcal Lambda_\tau$ to $B^\infty_\tau$, we deduce \begin{cor} If $f\in H^1$ and $\varphi\in \mathcal Lambda_\tau$ satisfy $0\notin\varphi({\mathbb S})$ and $\varphi f\in \mathcal Lambda_\tau+\ker \mathcal{P}$, then $f\in B^\infty_{\tau}$. \varepsilonnd{cor} Now we obtain Corollary~{\ref{cor:Intarg}}: \begin{cor}\label{cor:Intarg:generalized} If $f,g\in H^1\setminus\{0\}$ satisfy $\varphi=\bar g/f\in \mathcal Lambda_{\tau}$ and $0\notin\varphi({\mathbb S})$, then $f,g\in B^\infty_\tau$. In particular, if $f\in H^1\setminus\{0\}$ and its argument function $\bar f/f$ is in $\mathcal Lambda_{\tau}$, then $f\in B^\infty_\tau$. \varepsilonnd{cor} \begin{proof} Since ${\mathcal T}_\varphi(f)=\mathcal{P}(\bar g)=\overline{g(0)}\in B^\infty_\tau$, Theorem~{\ref{thm:LipE}} shows that $f\in B^\infty_\tau$. Therefore $\bar g=f\varphi\in\mathcal Lambda_{\tau}$ and hence $g\in B^\infty_\tau$. \varepsilonnd{proof} \section{Proof of Theorem~{\ref{thm:kerT1}}} This section is devoted to the proof of Theorem~{\ref{thm:kerT1}}. It is splitted into three steps composed of several lemmas that will give succesive improvements on the regularity of the solutions to the equation ${\mathcal T}^N_{\varphi}(f)=h$. 
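In other words, the proof upgrades the regularity of a solution $f$ of \eqref{eqn:form210} along the chain
$$
B^1_{-N_0}\ \Longrightarrow\ B^1_{-t}\ \ \text{for all } t>0\ \Longrightarrow\ B^\infty_{-t}\ \ \text{for all } t>0\ \Longrightarrow\ H^\infty\ \Longrightarrow\ B^\infty_\tau,
$$
each implication being obtained by feeding the information already available on $f$ back into \eqref{eqn:form210} and estimating the term ${\mathcal K}^N(f\bar\partial\varphi)$ by means of the kernel estimates of Section~2.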
First we will show that any solution $f$ to \varepsilonqref{eqn:form210} which is in $B^1_{-{N_0}}$ is in fact in any $B^1_{-t}$, $t>0$. Then we will obtain that the solution is in $B_{-t}^\infty$ for any $t>0$, and finally we will deduce that it is in $B_\tau^\infty$. Throughout this section we will assume that $\varphi$ and $h$ satisfy the hypotheses of Theorem~{\ref{thm:kerT1}.} Since $|\varphi(\zeta)|\ge \rho>0$ on ${\mathbb S}$, we can choose $r_0$ such that $|\varphi(z)|\ge \rho/2>0$ on the corona $C=\{\,z\in {\mathbb B}\,:\,r_0\le|z|\le1\,\}$. Let $\chi$ be a real $\mathcal{C}^\infty$-function on ${\mathbb C}^n$ supported on the corona $C_0=\{\,z\in {\mathbb B}\,:\,r_0\le |z|\le 1+r_0\,\}$, such that $0\le\chi\le1$ and $\chi\varepsilonquiv1$ on a neighborhood of ${\mathbb S}$. Then~{\varepsilonqref{eqn:form210}} shows that \begin{equation} \label{eqn:form21} f=\dfrac{\chi}{\varphi}{\mathcal K}^N(f\bar\partial\varphi)+\dfrac{\chi}{\varphi}h+(1-\chi)f. \varepsilonnd{equation} The function $(1-\chi)f$ is a $\mathcal{C}^\infty$ function with compact support on ${\mathbb B}$. It is easy to prove that $\dfrac{\chi}{\varphi}\in G^{\tau_0}_{\tau,k}$, and so $\dfrac{\chi}{\varphi}h\in G^{\tau_0}_{\tau,k}$, by Proposition~{\ref{prop:prodGs}.} Therefore $\displaystyle{(1-\chi)f+\dfrac{\chi}{\varphi}h\in G^{\tau_0}_{\tau,k}}$ and \begin{equation}\label{eqn:regh} \|(1-\chi)f+\dfrac{\chi}{\varphi}h\|_{ G^{\tau_0}_{\tau,k}}\le C_\varphi\left(\|f\|_{B^1_{-N_0}}+\|h\|_{B^\infty_\tau}\right) \varepsilonnd{equation} Hence, in order to prove that $f\in B^\infty_\tau$, we have just to show that $\dfrac{\chi}{\varphi}{\mathcal K}^N(f\bar\partial\varphi)\in G^{\tau_0}_{\tau,k}$. \vspace*{12pt} \noindent {\bf\sc Step 1.} The first couple of lemmas will show that $f\in B^1_{-t}$, for any $t>0$. \begin{lem}\label{lem1:thm:kerT1} Let $f\in B^1_{-s}$, for some $0<s<N+\tau_0$, and assume it satisfies~{\varepsilonqref{eqn:form210}.} \begin{enumerate} \item If $s\le\tau_0$ then $f\in B^1_{-t}$, for every $t>0$. \item If $s>\tau_0$ then $f\in B^1_{-(s-\tau_0)}$. \varepsilonnd{enumerate} \varepsilonnd{lem} \begin{proof} First note that~{\varepsilonqref{eqn:estimates:P:K}} shows that $$ |{\mathcal K}^N(f\bar\partial\varphi)|=|{\mathcal K}^N(\bar\partial(f\varphi))|\lesssim {\mathcal K}^N_{N-n+1,n-\frac12}(|f|\partialphi), $$ and so \begin{equation}\label{eqn:KN4} |{\mathcal K}^N(f\bar\partial\varphi)|\lesssim\|\varphi\|_{G^{\tau_0}_{\tau,k}}{\mathcal K}^{N+\tau_0}_{N-n+1,n-1/2}(|f|). \varepsilonnd{equation} By integrating and using Fubini's Theorem, for any $t>0$ we have that $$ \|{\mathcal K}^N(f\bar\partial\varphi)\|_{L^1_t}\lesssim \|\varphi\|_{G^{\tau_0}_{\tau,k}}\int_{{\mathbb B}}|f(w)|g_t(w)\,d\nu(w), $$ where $$ g_t(w)=(1-|w|^2)^{N+\tau_0-1}\int_{{\mathbb B}}{\mathcal K}^t_{N-n+1,n-1/2}(z,w)\,d\nu(z). $$ Now Lemmas~{\ref{lem:estK}} and~{\ref{lem:poteomegas}} show that \begin{equation}\label{eqn:gestimate} g_t(w)\lesssim(1-|w|^2)^{N+\tau_0-1}\omega_{t-N}(w)\lesssim(1-|w|^2)^{s-1}, \varepsilonnd{equation} provided that $s\le t+\tau_0$. (recall that $s<N+\tau_0$). Therefore, if $s\le t+\tau_0$, \begin{equation}\label{eqn:lem41} \|{\mathcal K}^N(f\bar\partial\varphi)\|_{L^1_t}\lesssim\|\varphi\|_{G^{\tau_0}_{\tau,k}}\|f\|_{L^1_s}, \varepsilonnd{equation} so, by~{\varepsilonqref{eqn:form21}} and \varepsilonqref{eqn:regh}, $f\in B^1_{-t}$. 
We conclude that:\vspace*{4pt} \noindent (i) If $s\le\tau_0$ then $s\le t+\tau_0$ and~\eqref{eqn:gestimate} holds for every $t>0$, and hence $f\in B^1_{-t}$, for every $t>0$. \vspace*{6pt} \noindent (ii) If $s>\tau_0$ then~\eqref{eqn:gestimate} holds for $t=s-\tau_0$, and consequently $f\in B^1_{-(s-\tau_0)}$. \end{proof} \begin{lem}\label{lem2:thm:kerT1} If $f\in B^1_{-N_0}$ satisfies~{\eqref{eqn:form210}} then $f\in B^1_{-t}$, for every $t>0$. \end{lem} \begin{proof} For $N_0\le\tau_0$ the result follows directly from Lemma~{\ref{lem1:thm:kerT1} \textbf{(i)}}. So assume that $N_0>\tau_0$. Let $k$ be the greatest positive integer such that $k\tau_0<N_0$. Then $k\tau_0<N_0\le(k+1)\tau_0$. Now, since $f\in B^1_{-N_0}$, Lemma~{\ref{lem1:thm:kerT1}\textbf{(ii)}} implies that $f\in B^1_{-(N_0-\tau_0)}$, so $f\in B^1_{-(N_0-2\tau_0)}$, \dots, so $f\in B^1_{-(N_0-k\tau_0)}$. But $0<N_0-k\tau_0\le\tau_0$ and therefore Lemma~{\ref{lem1:thm:kerT1}\textbf{(i)}} shows that $f\in B^1_{-t}$, for every $t>0$. \end{proof} \begin{rem}\label{rem:constantstep1} Observe that the above arguments, \eqref{eqn:form21} and \eqref{eqn:lem41} give in particular the estimate $\|f\|_{B^1_{-t}}\lesssim \|f\|_{B^1_{-N_0}}+\|h\|_{B^\infty_\tau}$. \end{rem} \vspace*{12pt} \noindent {\bf\sc Step 2.} The next couple of lemmas will show that the function $f$ is in $B^\infty_{-t}$, for any $t>0$. We follow the ideas in \cite{KD}. \begin{lem}\label{lem3:thm:kerT1} Let $f\in B^p_{-s}$, for some $1\le p<\infty$ and for every $s>0$. If $f$ satisfies~{\eqref{eqn:form210}} then $f\in B^q_{-s}$, for every $s>0$ and for every $q$ such that $p<q<\infty$ and $\frac1p-\frac{\tau_0}n<\frac1q$. \end{lem} \begin{proof} If $-t<-s<0$, then $B_{-s}^p\subset B_{-t}^p$. Consequently, we only have to prove the lemma for $s$ sufficiently small. Let $p<q<\infty$ and $0<\varepsilon<N+\tau_0$. Assume $f$ satisfies~{\eqref{eqn:form210}}. Then, as we have shown in the proof of Lemma~{\ref{lem1:thm:kerT1}}, \eqref{eqn:KN4} holds, and so Lemma~{\ref{lem:HoldK}} gives $$ |{\mathcal K}^N(f\bar\partial\varphi)|^q\lesssim \|\varphi\|_{G^{\tau_0}_{\tau,k}}^q {\mathcal K}^{(N+\tau_0-\varepsilon)q}_{Nq-n+1,n-\frac12}(|f|^q). $$ By Proposition~{\ref{prop:embed}(iii)}, $B^p_{-s}\subset B^\infty_{-s-n/p}$ and $|f(w)|\lesssim \|f\|_{B^p_{-s}}(1-|w|^2)^{-s-\frac{n}p}$, which implies that $$ |f(w)|^q=|f(w)|^{q-p}|f(w)|^p\lesssim \|f\|_{B^p_{-s}}^{q-p}(1-|w|^2)^{(p-q)(s+\frac{n}p)}|f(w)|^p, $$ and, by integrating, we get $$ {\mathcal K}^{(N+\tau_0-\varepsilon)q}_{Nq-n+1,n-\frac12}(|f|^q)\lesssim \|f\|_{B^p_{-s}}^{q-p}{\mathcal K}^{N(\varepsilon,s)}_{M,L}(|f|^p), $$ where $N(\varepsilon,s)=(N+\tau_0-\varepsilon)q+(p-q)\left(s+\frac{n}p\right) =sp+(N-s)q+nq\left(\frac{\tau_0-\varepsilon}{n}-\frac1{p}+\frac1{q}\right)$, $M=Nq-n+1$ and $L=n-1/2$. Therefore $$ \|{\mathcal K}^N(f\bar\partial\varphi)\|_{L^q_s}\lesssim \|\varphi\|_{G^{\tau_0}_{\tau,k}}\|f\|_{B^p_{-s}}^{1-\frac{p}q}I_{\varepsilon,s}, \,\,\mbox{ where }\,\,I_{\varepsilon,s}=\|{\mathcal K}^{N(\varepsilon,s)}_{M,L}(|f|^p)\|_{L^1_{sq}}.
$$ Thus we only have to prove that $I_{\varepsilon,s}^q\lesssim\|f\|_{B^p_{-s}}^p$, for $\varepsilon,s>0$ small enough and $\frac1p-\frac{\tau_0}n<\frac1q$, because then the previous estimate shows that $\|{\mathcal K}^N(f\bar\partial\varphi)\|_{L^q_s}\lesssim\|\varphi\|_{G^{\tau_0}_{\tau,k}}\|f\|_{B^p_{-s}}$ and hence, by~{\eqref{eqn:form21}} and \eqref{eqn:regh}, we conclude that $f\in B^q_{-s}$. In order to estimate $I_{\varepsilon,s}^q$, first apply Fubini's Theorem to get $$I_{\varepsilon,s}^q=\int_{{\mathbb B}}|f(w)|^p(1-|w|^2)^{N(\varepsilon,s)-1} \left(\int_{{\mathbb B}}{\mathcal K}^{sq}_{M,L}(z,w)\,d\nu(z)\right)\,d\nu(w),$$ and since $n+sq-M-2L=(s-N)q$, then apply Lemma~{\ref{lem:estK}} to obtain $$ I_{\varepsilon,s}^q\lesssim \int_{{\mathbb B}}|f(w)|^p(1-|w|^2)^{N(\varepsilon,s)-1}\omega_{(s-N)q}(w)\,d\nu(w). $$ Now we consider two cases: \vspace*{6pt} \noindent {\sf Case $N=0$.} Then $(s-N)q=sq>0$ so $$ I_{\varepsilon,s}^q\lesssim\int_{{\mathbb B}}|f(w)|^p(1-|w|^2)^{N(\varepsilon,s)-1}\,d\nu(w)\le\|f\|^p_{B^p_{-s}}, $$ provided that $N(\varepsilon,s)>sp$, which holds for $\varepsilon,s>0$ small enough and $\frac1{p}-\frac1{q}<\frac{\tau_0}n$, since $$ \lim_{\varepsilon,s\searrow0}(N(\varepsilon,s)-sp)=nq\{\tfrac{\tau_0}n-(\tfrac1p-\tfrac1q)\}>0. $$ \noindent {\sf Case $N>0$.} Let $0<s<N$. Then $(s-N)q<0$ and so $$ I_{\varepsilon,s}^q\lesssim\int_{{\mathbb B}}|f(w)|^p(1-|w|^2)^{N(\varepsilon,s)+(s-N)q-1}\,d\nu(w)\le\|f\|^p_{B^p_{-s}}, $$ provided that $N(\varepsilon,s)+(s-N)q>sp$, which holds for $\varepsilon>0$ small enough and $\frac1{p}-\frac1{q}<\frac{\tau_0}n$, since $$ N(\varepsilon,s)+(s-N)q-sp=nq\{\tfrac{\tau_0-\varepsilon}n-(\tfrac1p-\tfrac1q)\}>0. $$ And the proof is complete. \end{proof} \begin{lem}\label{lem4:thm:kerT1} Let $f\in B^1_{-s}$, for every $s>0$. If $f$ satisfies~{\eqref{eqn:form210}} then $f\in B^{\infty}_{-t}$, for every $t>0$. \end{lem} \begin{proof} Let $k$ be the greatest positive integer such that $k\frac{\tau_0}{2n}<1$. Then $k\frac{\tau_0}{2n}<1\le(k+1)\frac{\tau_0}{2n}$. Let $$ p_j=\frac1{1-j\frac{\tau_0}{2n}}\qquad(j=0,\dots,k). $$ Then $p_j\ge1$, for $j=0,\dots,k$, and $\frac1{p_j}=\frac1{p_{j-1}}-\frac{\tau_0}{2n}>\frac1{p_{j-1}}-\frac{\tau_0}{n}$, for $j=1,\dots,k$. Now, since $f\in B^{p_0}_{-s}$, for every $s>0$, and $f$ satisfies~{\eqref{eqn:form210}}, Lemma~{\ref{lem3:thm:kerT1}} shows that $f\in B^{p_1}_{-s}$ so $f\in B^{p_2}_{-s}$, \dots, so $f\in B^{p_k}_{-s}$, for every $s>0$. But $\frac1{p_k}-\frac{\tau_0}{2n}=1-(k+1)\frac{\tau_0}{2n}\le0$ and therefore Lemma~{\ref{lem3:thm:kerT1}} once again shows that $f\in B^q_{-s}$, for every $q>p_k$ and every $s>0$. Since $B^q_{-s}\subset B^{\infty}_{-s-\frac{n}q}$, by Proposition~{\ref{prop:embed}(\ref{item:embed3})}, we conclude that $f\in B^{\infty}_{-t}$, for every $t>0$. \end{proof} \begin{rem}\label{rem:constantstep2} Observe that the above arguments and \eqref{eqn:form21} give the estimate $\|f\|_{B^\infty_{-t}}\lesssim\|f\|_{B^1_{-t}}+\|h\|_{B^\infty_\tau}$. \end{rem} \vspace*{12pt} \noindent {\bf\sc Step 3.} In what follows we will finally deduce that $f\in B_\tau^\infty$. \begin{lem}\label{lem5:thm:kerT1} Let $f\in B^{\infty}_{-t}$, for every $t>0$. If $f$ satisfies~{\eqref{eqn:form210}} then $f\in H^{\infty}$. \end{lem} \begin{proof} Since $f$ satisfies~{\eqref{eqn:form210}}, \eqref{eqn:KN4} holds, as we have shown in the proof of Lemma~{\ref{lem1:thm:kerT1}}.
But $$ {\mathcal K}^{N+\tau_0}_{N-n+1,n-\frac12}(|f|)\lesssim\|f\|_{B^{\infty}_{-t}}{\mathcal K}^{N+\tau_0-t}_{N-n+1,n-\frac12}(1) $$ and, by Lemma~{\ref{lem:estK}}, ${\mathcal K}^{N+\tau_0-t}_{N-n+1,n-\frac12}(1)\lesssim\omega_{\tau_0-t}\lesssim 1$, for any $0<t<\tau_0$. Therefore $\|{\mathcal K}^N(f\bar\partial\varphi)\|_{\infty}\lesssim\|\varphi\|_{G^{\tau_0}_{\tau,k}}\|f\|_{B^{\infty}_{-t}}$, and, by~{\eqref{eqn:form21}} and \eqref{eqn:regh}, $f\in H^\infty$. \end{proof} In order to prove that $f\in B^\infty_\tau$, we will use the following formula. \begin{lem} \label{lem:der35} Let $N\ge 0$, $\varphi\in G^{\tau_0}_{\tau,k}$ and $f\in H^\infty$. Then \begin{equation}\label{eqn:repder} \varphi \partial_\alpha f=\partial_\alpha \mathcal{P}^{N}(\varphi f)-\sum_{\substack {\beta+\gamma=\alpha\\|\beta|<k}} c_{\alpha,\beta} \mathcal{P}^{N+k}(\partial_\gamma\varphi \partial_\beta f)+{\mathcal K}^{N+k}(\bar\partial\varphi\partial_\alpha f), \end{equation} for every $\alpha\in{\mathbb N}^n$, where $k=|\alpha|$ and $c_{\alpha,\beta}=\alpha!/(\beta!\gamma!)$. \end{lem} \begin{proof} First assume that $\varphi\in \mathcal{C}^{k}(\bar {\mathbb B})$ and $f\in H(\bar {\mathbb B})$. By Theorem \ref{thm:rep} we have that $\varphi \partial_\alpha f=\mathcal{P}^{N+k}(\varphi \partial_\alpha f)+{\mathcal K}^{N+k}(\bar\partial\varphi\partial_\alpha f)$. Moreover, $$ \varphi\partial_\alpha f = \partial_\alpha(\varphi f)-\sum_{\substack{\beta+\gamma=\alpha\\|\beta|<k}}c_{\alpha,\beta}\partial_\gamma\varphi \partial_\beta f, $$ and Lemma~{\ref{lem:derPN}} shows that $\mathcal{P}^{N+k}( \partial_\alpha (\varphi f))=\partial_\alpha \mathcal{P}^{N}(\varphi f)$. Hence we obtain~{\eqref{eqn:repder}}. By a standard approximation argument (based on the dominated convergence theorem), we deduce the general case from the regular case just proved above. \end{proof} \begin{lem}\label{lem6:thm:kerT1} \hspace*{\fill}\mbox{ } \begin{enumerate} \item If $f\in H^{\infty}$ satisfies~{\eqref{eqn:form210}}, then for every $0<t\le\tau_0$, $f\in B^{\infty}_t$. \item Let $f\in B^{\infty}_s$, for some $0<s<\tau$. If $f$ satisfies~{\eqref{eqn:form210}}, then for every $0<t\le\min(\tau,s+\tau_0)$, $f\in B^{\infty}_t$. \end{enumerate} \end{lem} \begin{proof} Let $f$ and $t$ be as in either (i) or (ii), and assume $f$ satisfies~{\eqref{eqn:form210}}. Let $k\in{\mathbb Z}$ and $\alpha\in{\mathbb N}^n$ such that $|\alpha|=k>\tau$. We want to prove that $|\partial_{\alpha}f(z)|\lesssim(1-|z|^2)^{t-k}$. Since \begin{equation}\label{eqn:est:partial:alpha:f} |\partial_{\alpha}f|\le \|\chi/\varphi\|_{\infty}\,|\varphi\partial_{\alpha}f|+(1-\chi)|\partial_{\alpha}f|, \end{equation} we only need to estimate $|\varphi\partial_{\alpha}f|$. Lemma~{\ref{lem:der35}}, {\eqref{eqn:form210}} and~{\eqref{eqn:estimates:P:K}} show that \begin{equation}\label{eqn:est:varphi:partial:alpha:f} |\varphi\partial_{\alpha}f|\lesssim |\partial_{\alpha}h| + {\mathcal K}^{N+k}_{n+N+k,0}(F_{\alpha}) + {\mathcal K}^{N+k}_{N+k-n+1,n-\frac12}(|\partial_{\alpha}f|\,|\bar\partial\varphi|), \end{equation} where $\displaystyle{F_{\alpha}=\sum_{\substack{\beta+\gamma=\alpha\\|\beta|<k}} |\partial_{\beta}f|\,|\partial_{\gamma}\varphi|}$.
Note that we only need to prove that $$ |\partial_{\alpha}f(w)|\,|\bar\partial\varphi(w)|\lesssim(1-|w|^2)^{t-k} \,\,\mbox{ and }\,\,\,\, F_{\alpha}(w)\lesssim(1-|w|^2)^{t-k}, $$ because then~{\eqref{eqn:est:varphi:partial:alpha:f}} and Lemma~{\ref{lem:estK}} show that \begin{eqnarray*} |(\varphi\partial_{\alpha}f)(z)| &\lesssim& |\partial_{\alpha}h(z)| +{\mathcal K}^{N+t}_{N+k-n+1,n-\frac12}(1)(z)+{\mathcal K}^{N+t}_{n+N+k,0}(1)(z)\\ &\lesssim& \omega_{t-k}(z)+\omega_{\tau-k}(z)=(1-|z|^2)^{t-k}, \end{eqnarray*} and therefore, by~{\eqref{eqn:est:partial:alpha:f}}, we conclude that $|\partial_{\alpha}f(z)|\lesssim (1-|z|^2)^{t-k}.$ \vspace*{6pt} \noindent (i) Let $f\in H^{\infty}$. Then $|\partial_{\beta}f(w)|\lesssim\|f\|_{\infty}(1-|w|^2)^{-|\beta|}$, for every multiindex $\beta$. So $$ |\partial_{\alpha}f(w)|\,|\bar\partial\varphi(w)|\lesssim \|\varphi\|_{G^{\tau_0}_{\tau,k}}\|f\|_{\infty}(1-|w|^2)^{\tau_0-k} \lesssim \|\varphi\|_{G^{\tau_0}_{\tau,k}}\|f\|_{\infty}(1-|w|^2)^{t-k}, $$ for every $t\in{\mathbb R}$ such that $t\le\tau_0$. Next Lemma~{\ref{lem:poteomegas}} shows that \begin{equation*}\begin{split} F_{\alpha}(w) &\lesssim\|\varphi\|_{G^{\tau_0}_{\tau,k}}\|f\|_{\infty}\sum_{i=0}^{k-1}(1-|w|^2)^{-i}\omega_{\tau-k+i}(w) \\&\lesssim\|\varphi\|_{G^{\tau_0}_{\tau,k}}\|f\|_{\infty}(1-|w|^2)^{t-k}, \end{split}\end{equation*} for every $t\in{\mathbb R}$ such that $t<1$ and $t\le\tau$. \vspace*{6pt} \noindent (ii) Let $f\in B^{\infty}_s$, for some $0<s<\tau$. Then $|\partial_{\beta}f(z)|\lesssim\|f\|_{B^{\infty}_s}\omega_{s-|\beta|}(z)$, for every multiindex $\beta$, so Lemma \ref{lem:poteomegas} gives that \begin{equation*}\begin{split} |\partial_{\alpha}f(w)|\,|\bar\partial\varphi(w)| &\lesssim\|f\|_{B^{\infty}_s}\|\varphi\|_{G^{\tau_0}_{\tau,k}}(1-|w|^2)^{s+\tau_0-k}\\ &\le\|f\|_{B^{\infty}_s}\|\varphi\|_{G^{\tau_0}_{\tau,k}}(1-|w|^2)^{t-k}, \end{split}\end{equation*} for every $t\in{\mathbb R}$ such that $t\le s+\tau_0<s+1$, and Lemma~{\ref{lem:prodomegas}} shows that \begin{eqnarray*} F_{\alpha}(w) &\lesssim& \|\varphi\|_{G^{\tau_0}_{\tau,k}}\|f\|_{B^{\infty}_s}S^{s,\tau}_{k-1,k}(w) \\ &\lesssim& \|\varphi\|_{G^{\tau_0}_{\tau,k}}\|f\|_{B^{\infty}_s}\{\omega_{s+1-k}(w)+(1-|w|^2)^{\tau-k}\} \\ &\lesssim& \|\varphi\|_{G^{\tau_0}_{\tau,k}}\|f\|_{B^{\infty}_s}(1-|w|^2)^{t-k}, \end{eqnarray*} for every $t\in{\mathbb R}$ such that $t\le s+1$ and $t\le\tau$. \end{proof} \begin{lem}\label{lem7:thm:kerT1} If $f\in H^{\infty}$ satisfies~{\eqref{eqn:form210}} then $f\in B^{\infty}_\tau$. \end{lem} \begin{proof} Let $f\in H^{\infty}$. If $\tau=\tau_0$ the result is immediate from Lemma~{\ref{lem6:thm:kerT1} (i)}. So assume that $\tau_0<\tau$, and let $k\ge1$ be the greatest integer such that $k\tau_0<\tau$. Since $f\in H^{\infty}$, Lemma~{\ref{lem6:thm:kerT1} (i)} shows that $f\in B^{\infty}_{\tau_0}$, and then Lemma~{\ref{lem6:thm:kerT1} (ii)} implies that $f\in B^{\infty}_{2\tau_0}$, so $f\in B^{\infty}_{3\tau_0}$, \dots, so $f\in B^{\infty}_{k\tau_0}$, and finally one further application of Lemma~{\ref{lem6:thm:kerT1} (ii)} gives $f\in B^{\infty}_{\tau}$. \end{proof} \begin{rem}\label{rem:constantstep3} Observe that the above arguments and \eqref{eqn:form21} give the estimate $\|f\|_{B^\infty_{\tau}}\lesssim \|f\|_{B^\infty_{-t}}+\|h\|_{B^\infty_\tau}$.
\end{rem} \begin{rem} The remarks \ref{rem:constantstep1}, \ref{rem:constantstep2} and \ref{rem:constantstep3} show that $$ \|f\|_{B^\infty_{\tau}}\lesssim \|f\|_{B^1_{-N_0}}+\|h\|_{B^\infty_\tau}. $$ Note that the reverse estimate always holds. This follows from the continuous embedding $B^\infty_\tau\subset B^1_{-N_0}$ and the estimate $\|h\|_{B^\infty_{\tau}}\le \|{\mathcal T}^N_\varphi\|\|f\|_{B^\infty_{\tau}}$. Therefore $$ \|f\|_{B^\infty_{\tau}}\approx\|f\|_{B^1_{-N_0}}+\|h\|_{B^\infty_\tau}. $$ \end{rem} \begin{thebibliography}{12} \bibitem{Ah-Br} P. Ahern, J. Bruna: Maximal and area integral characterizations of Hardy-Sobolev spaces in the unit ball of ${\mathbb C}^n$. Rev. Mat. Iberoamericana \textbf{4} (1988), no. 1, 123--153. \bibitem{Bea} F. Beatrous: Estimates for derivatives of holomorphic functions in pseudoconvex domains. Math. Z. \textbf{191} (1986), 91--116. \bibitem{Bea-Bur} F. Beatrous and J. Burbea: Sobolev spaces of holomorphic functions in the ball. Dissertationes Math. \textbf{276} (1989). \bibitem{Bo-Br-Gr} A. Bonami, J. Bruna and S. Grelier: On Hardy, BMO and Lipschitz spaces of invariantly harmonic functions in the unit ball. Proc. London Math. Soc. (3) \textbf{77} (1998), no. 3, 665--696. \bibitem{Br-Or} J. Bruna, J.M. Ortega: Interpolation by holomorphic functions smooth to the boundary in the unit ball of ${\mathbb C}^n$. Math. Ann. \textbf{274} (1986), no. 4, 527--575. \bibitem{Char} Ph. Charpentier: Formules explicites pour les solutions minimales de l'\'equation $\bar\partial u=f$ dans la boule et dans le polydisque de ${\mathbb C}^n$. Ann. Inst. Fourier (Grenoble) \textbf{30} (1980), 121--154. \bibitem{KD} K.M. Dyakonov: Toeplitz operators and arguments of analytic functions. Math. Ann. \textbf{344} (2009), no. 2, 353--380. \bibitem{Kr1} S.G. Krantz: Lipschitz spaces, smoothness of functions, and approximation theory. Exposition. Math. \textbf{1} (1983), no. 3, 193--260. \bibitem{Kr2} S.G. Krantz: Partial differential equations and complex analysis. Studies in Advanced Mathematics. CRC Press, Boca Raton, FL, 1992. \bibitem{Kr3} S.G. Krantz: Function theory of several complex variables. Reprint of the 1992 edition. AMS Chelsea Publishing, Providence, RI, 2001. \bibitem{Or-Fa1} J.M. Ortega and J. F\`abrega: Pointwise multipliers and decomposition theorems in analytic Besov spaces. Math. Z. \textbf{235} (2000), 53--81. \bibitem{Ru} W. Rudin: Function theory in the unit ball of ${\mathbb C}^n$. Springer-Verlag, New York, 1980. \bibitem{Ste} E.M. Stein: Singular integrals and estimates for the Cauchy-Riemann equations. Bull. Amer. Math. Soc. \textbf{79} (1973), 440--445. \bibitem{Zhu} K. Zhu: Spaces of holomorphic functions in the unit ball. Graduate Texts in Mathematics, \textbf{226}. Springer-Verlag, New York, 2005. \end{thebibliography} \end{document}
\begin{document} \title{\bf A note on limit of first eigenfunctions of $p$-Laplacian on graphs} \date{} \author{Huabin Ge} \address{Huabin Ge, Department of Mathematics, Beijing Jiaotong University, Beijing, 100044, China} \email{[email protected]} \thanks{H. Ge is supported by NSFC (China) under grant no. 11871094.} \author{Bobo Hua} \address{Bobo Hua, School of Mathematical Sciences, LMNS, Fudan University, Shanghai 200433, China} \email{[email protected]} \thanks{B. Hua is supported by NSFC (China) under grant no. 11831004 and grant no. 11826031. } \author{Wenfeng Jiang} \address{Wenfeng Jiang, School of Mathematics (Zhuhai), Sun Yat-Sen University, Zhuhai, China} \email{wen\[email protected]} \thanks{W. Jiang is supported by Chinese Universities Scientific Fund(74120-31610002)} \maketitle \begin{abstract} We study the limit of first eigenfunctions of (discrete) $p$-Laplacian on a finite subset of a graph with Dirichlet boundary condition, as $p\to 1.$ We prove that up to a subsequence, they converge to a summation of characteristic functions of Cheeger cuts of the graph. We give an example to show that the limit may not be a characteristic function of a single Cheeger cut. \end{abstract} \setcounter{section}{-1} \section{Introduction} The spectral theory of linear Laplacian, called Laplace-Beltrami operator, on a domain of an Euclidean space or a Riemannian manifold is extensively studied in the literature, see e.g. \cite{CH53}. The $p$-Laplacians are nonlinear generalizations of the linear Laplacian, which corresponds to the case $p=2$. These are nonlinear elliptic operators which possess many analogous properties as the linear Laplacian. A graph consists of a set of vertices and a set of edges. The Laplacian on a finite graph is a finite dimensional linear operator, see e.g. \cite{Chung10}, which emerges from the discretization of the Laplace-Beltrami operator of a manifold, the Cayley graph of a discrete group, data sciences and many others. Compared to continuous Laplacians, one advantage of discrete Laplacians is that one can calculate the eigenvalues of the Laplacian on a finite graph which are in fact eigenvalues of a finite matrix. Cheeger \cite{Cheeger70} defined an isoperimetric constant, now called Cheeger constant, on a compact manifold and used it to estimate the first non-trivial eigenvalue of the Laplacian, see also \cite{K}. These were generalized to graphs by Alon-Milman \cite{Alon-Milman} and Dodziuk \cite{Dodziuk84} respectively. These estimates, called Cheeger estimates, are very useful in the spectral theory. The spectral theory for discrete $p$-Laplacians was studied by \cite{Yamasaki79,Amghibech03,Takeuchi03,Hein-Buhler10,Keller-Mugnolo16}. It turns out that this theory unifies these constants involved in the Cheeger estimate. The Cheeger constant of a finite graph is in fact the first non-trivial eigenvalue of $1$-Laplacian, see Chang, Hein et al. \cite{Chang16,Chang-Shao-Zhang17,Hein-Buhler10}. So that the Cheeger estimate can be regarded as the eigenvalue relation for $p$-Laplacians, $p=1$ and $p=2.$ In this paper, we study first eigenvalues and eigenfunctions of the $p$-Laplacian on a subgraph of a graph with Dirichlet boundary condition. Let $G= (V, E)$ be a simple, undirected, locally finite graph, where $V$ is the vertex set and $E$ is the edge set. 
Two vertices $x,y$ are called neighbors, denoted by $x\sim y$, if $\{x,y\}\in E.$ Let $$\mu:V \to (0, \infty), x\mapsto \mu_x,$$ be the vertex measure on $V$ and $$w: E \to (0, \infty), \{x,y\}\mapsto w_{xy}=w_{yx},$$ be the edge measure on $E.$ The quadruple $(V,E,\mu,w)$ is called a weighted graph. For a subset of $V$ (a subset of $E,$ resp.) we denote by $|\cdot|_\mu$ ($|\cdot|_w,$ resp.) the $\mu$-measure ($w$-measure, resp.) of the set. Let $\Omega$ be a finite subset of $V.$ We denote by $C_0(\Omega)$ the set of functions on $V$ which vanish on $V\setminus \Omega.$ For $p>1,$ we define the $p$-Laplacian with Dirichlet boundary condition, called the Dirichlet $p$-Laplacian, on $\Omega$ as $$\Delta_p u(x)=\frac{1}{\mu_x}\sum_{y\in V: y\sim x}w_{xy}|u(y)-u(x)|^{p-2}(u(y)-u(x)),\quad x\in \Omega, u\in C_0(\Omega).$$ We call a pair $(\lambda, u)\in {\mathbb R}\times C_0(\Omega)$ satisfying \begin{equation}\label{defi:eigenequation}-\Delta_p u(x)=\lambda |u(x)|^{p-2}u(x),\quad \forall x\in \Omega\end{equation} an eigenvalue and an eigenfunction of the Dirichlet $p$-Laplacian. The smallest eigenvalue of the Dirichlet $p$-Laplacian is called the first eigenvalue, denoted by $\lambda_{1,p}(\Omega)$, and the associated eigenfunction is called the first eigenfunction, denoted by $u_p.$ For $p\geq 1,$ the $p$-Dirichlet energy is defined as $$E_p(u):=\sum_{\{x,y\}\in E}w_{xy}|u(y)-u(x)|^p,\quad u\in C_0(\Omega),$$ and the modified $p$-Dirichlet energy is defined as $$\widetilde{E_p}(u):=\frac{E_p(u)}{|u|_p^p},\quad \forall u\in C_0(\Omega)\setminus \{0\}.$$ We consider the variational problem \begin{equation}\label{defil:variation}\widetilde{E_p}:C_0(\Omega)\setminus\{0\}\to {\mathbb R}.\end{equation} It is easy to see that for $p>1,$ critical points and critical values of this problem satisfy \eqref{defi:eigenequation}. The spectral theory for the $1$-Laplacian was developed by Chang and Hein. Eigenvalues and eigenfunctions of the Dirichlet $1$-Laplacian are defined as critical values and critical points of the variational problem \eqref{defil:variation} for $p=1.$ We denote by $\lambda_{1,1}(\Omega)$ the first eigenvalue of the Dirichlet $1$-Laplacian on $\Omega.$ This yields the Rayleigh quotient characterization, for $p\geq 1,$ \begin{equation}\label{eq:Rayleigh}\lambda_{1,p}(\Omega)=\inf_{u\in C_0(\Omega), u\neq 0}\widetilde{E_p}(u).\end{equation} First eigenfunctions of the Dirichlet $p$-Laplacian have some nice properties, see \cite{BH}. \begin{thm} For a finite connected subset $\Omega$ of $V$ and $p>1,$ a first eigenfunction $u$ on $\Omega$ has fixed sign, i.e. $u>0$ on $\Omega$ or $u<0$ on $\Omega.$ Moreover, the first eigenvalue is simple, i.e. for any two first eigenfunctions $u$ and $v$ on $\Omega,$ $u=c v$ for some $c\neq 0.$ \end{thm} By this result, there is a unique first eigenfunction on $\Omega$ satisfying $$u>0\quad \mathrm{on}\ \Omega,\quad |u|_p=1,$$ which is called the normalized first eigenfunction of the $p$-Laplacian. We introduce the definition of the Cheeger constant on $\Omega.$ For any subset $D\subset V,$ the boundary of $D$ is defined as $\partial D:=\{\{x,y\}\in E: x\in D, y\in V\setminus D\}.$ \begin{definition} The Cheeger constant on $\Omega$ is defined as $$ h(\Omega)=\inf_{D\subset \Omega} \frac{|\partial D|_w}{|D|_\mu}. $$ A subset $D$ of $\Omega$ is called a Cheeger cut if $$ \frac{|\partial D|_w}{|D|_\mu}=h(\Omega). $$ \end{definition} We prove the main result in the following; a brief computational illustration of the quantities just defined is sketched first (it is not used anywhere in the proofs).
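The following short script (a minimal sketch in Python, included only as an illustration of the definitions above; the graph, the weights and all variable names are hypothetical and are not taken from this paper) computes $h(\Omega)$ and a Cheeger cut by direct enumeration of the subsets of a small $\Omega.$
\begin{verbatim}
# Brute-force computation of the Cheeger constant h(Omega) on a small
# weighted graph.  Hypothetical toy data; illustration only.
from itertools import combinations

# Path graph v0 - v1 - v2 - v3 - v4 with Omega = {v1, v2, v3};
# v0 and v4 play the role of the Dirichlet boundary.
omega = ['v1', 'v2', 'v3']
mu = {'v1': 1.0, 'v2': 1.0, 'v3': 1.0}            # vertex measure on Omega
w = {frozenset(e): 1.0 for e in                    # edge measure
     [('v0', 'v1'), ('v1', 'v2'), ('v2', 'v3'), ('v3', 'v4')]}

def boundary_weight(D):
    """w-measure of the edge boundary of D (edges with one endpoint in D)."""
    D = set(D)
    return sum(wt for e, wt in w.items() if len(D & set(e)) == 1)

def cheeger(omega, mu):
    """Return (h(Omega), a Cheeger cut) by enumerating all nonempty subsets."""
    candidates = ((boundary_weight(D) / sum(mu[x] for x in D), set(D))
                  for r in range(1, len(omega) + 1)
                  for D in combinations(omega, r))
    return min(candidates, key=lambda pair: pair[0])

print(cheeger(omega, mu))  # for this path: h(Omega) = 2/3, cut = {v1, v2, v3}
\end{verbatim}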
\begin{theorem}\label{thm:main1} Let $\Omega$ be a finite connected subset of $V.$ For any sequence $\{p_i\}_{i=1}^\infty$ satisfying $p_i>1, p_i\to 1$, let $u_i$ be the corresponding normalized first eigenfunction of the $p_i$-Laplacian. Then there is a subsequence $\{p_{i_k}\}$ such that $\{u_{i_k}\}_k$ converges and $$ \lim_{k \to \infty}u_{i_k}=\sum_{n=1}^N c_n\mathds{1}_{A_n}, $$ where $A_n,$ $1\leq n\leq N$, are Cheeger cuts of $\Omega$ satisfying $A_{N}\subsetneq A_{N-1}\subsetneq \cdots \subsetneq A_1, $ $\mathds{1}_{A_n}$ are characteristic functions on $A_n$ and $c_n>0.$ \end{theorem} We have the following corollary. \begin{corollary}\label{coro:1} For a finite connected subset $\Omega$ of $V,$ suppose that the Cheeger cut of $\Omega$ is unique. Then $$\lim_{p \to 1}u_{p}=\frac{1}{|A|_\mu} \mathds{1}_A,$$ where $A$ is the Cheeger cut of $\Omega.$ \end{corollary} In connection with these results, we have the following open problems. \begin{problem}\label{prob:1} In the general case, is it true that the normalized first eigenfunction $u_p$ converges as $p\to 1$? \end{problem} \begin{problem}\label{prob:2} What are the limits of the normalized first eigenfunctions in Theorem~\ref{thm:main1}? \end{problem} For Problem~\ref{prob:1}, since the Cheeger cuts need not be unique, see e.g. Example~\ref{exam:1} in Section~\ref{sec:example}, one needs new ideas to prove the result. For Problem~\ref{prob:2}, one might hope that the limit of a sequence of normalized first eigenfunctions is a characteristic function of a single Cheeger cut, as in Corollary~\ref{coro:1}. By investigating Example~\ref{exam:1} in Section~\ref{sec:example}, we show that this is not true in general. This indicates that the result in Theorem~\ref{thm:main1} cannot be improved to the characteristic function of a single Cheeger cut. The paper is organized as follows: In the next section, we prove the main result, Theorem~\ref{thm:main1}. In Section~\ref{sec:example}, we construct an example to show the sharpness of Theorem~\ref{thm:main1}. \section{Proof of Theorem~\ref{thm:main1}} Let $(V,E,\mu,w)$ be a weighted graph and let $\Omega$ be a finite connected subset of $V,$ i.e. for any $x,y\in \Omega$ there is a path, $x=x_0\sim x_1\sim \cdots \sim x_k=y$, connecting $x$ and $y$ with $x_i\in \Omega,\forall 1\leq i\leq k-1.$ We need some lemmas. \begin{lemma} $$\lim_{p\to 1}\lambda_{1,p}(\Omega)=\lambda_{1,1}(\Omega).$$ \end{lemma} \begin{proof} Note that for any $u\in C_0(\Omega),$ $u\neq 0,$ $$\widetilde{E_p}(u)\to \widetilde{E_1}(u),\quad p\to 1.$$ Hence the lemma follows from the Rayleigh quotient characterization \eqref{eq:Rayleigh}. \end{proof} \begin{lemma}\label{lem:1.2} For any $p\geq 1,$ $$ \lambda_{1,p}(\Omega)\leq h(\Omega). $$ \end{lemma} \begin{proof} As $\Omega$ is a finite subset, we may choose $D\subset \Omega$ such that $ \frac{|\partial D|_w}{|D|_\mu}=h(\Omega). $ Consider the characteristic function of $D,$ $$\mathds{1}_D(x)=\left\{\begin{array}{ll}1,& x\in D,\\ 0,& x\in V\setminus D.\end{array}\right.$$ Then by \eqref{eq:Rayleigh}, $$\lambda_{1,p}(\Omega)\leq \widetilde{E_p}(\mathds{1}_D)=\frac{|\partial D|_w}{|D|_\mu}=h(\Omega).$$ This proves the lemma.
\end{proof} \begin{lemma}\label{lem:equivalent} $\lambda_{1,1}(\Omega)= h(\Omega).$ \end{lemma} \begin{proof} By Lemma~\ref{lem:1.2}, it suffices to prove that $\lambda_{1,1}(\Omega)\geq h(\Omega).$ For any $u\in C_0(\Omega)$ with $|u|_1=1,$ let $g=|u|.$ For any $\sigma\geq 0,$ set $\Omega_{\sigma}:=\{x\in \Omega: g(x)>\sigma\}$ and $G(\sigma):=|\partial \Omega_\sigma|_w.$ Then $$ G(\sigma)=\sum_{e=\{x,y\}\in E,g(x)\leq \sigma< g(y)}w_{xy}. $$ Hence \begin{eqnarray*} \int_{0}^{+\infty}G(\sigma) d\sigma&=&\int_{0}^{+\infty}\sum_{e=\{x,y\}\in E,g(y)>g(x)}w_{xy} \mathds{1}_{[g(x),g(y))}(\sigma)d\sigma\notag\\ &=&\sum_{e=\{x,y\}\in E,g(y)>g(x)}w_{xy}\int_{0}^{+\infty} \mathds{1}_{[g(x),g(y))}(\sigma)d\sigma\notag\\ &=& \sum_{e=\{x,y\}\in E,g(y)>g(x)}w_{xy}|g(x)-g(y)|=E_1(g). \end{eqnarray*} Moreover, by the definition of $h(\Omega)$, $G(\sigma)=|\partial \Omega_{\sigma}|_w\geq h(\Omega) |\Omega_{\sigma}|_\mu,\ \forall \sigma\geq 0.$ This yields that \begin{align}\label{main-proof-2} E_1(u)\geq E_1(g)=&\int_{0}^{+\infty}G(\sigma)d\sigma\geq h(\Omega) \int_{0}^{+\infty}| \Omega_{\sigma}|_\mu\,d\sigma\\ =&h(\Omega)|u|_1=h(\Omega).\notag \end{align} By taking the infimum over $u\in C_0(\Omega)$ with $|u|_1=1$ on the left hand side, we prove the lemma by \eqref{eq:Rayleigh}. \end{proof} By the definition, $u$ is called a first eigenfunction of the $1$-Laplacian on $\Omega$ if $$\widetilde{E_1}(u)=\inf_{v\in C_0(\Omega), v\neq 0}\widetilde{E_1}(v)=\lambda_{1,1}(\Omega).$$ Since $\widetilde{E_1}(|u|)\leq \widetilde{E_1}(u),$ $|u|$ is also a first eigenfunction of the $1$-Laplacian on $\Omega.$ \begin{lemma}\label{lem:structure} Let $u$ be a nonnegative first eigenfunction of the $1$-Laplacian on $\Omega.$ Then for any $\sigma\in[0,\max_{\Omega}u),$ $\Omega_\sigma:=\{x\in \Omega: u>\sigma\}$ is a Cheeger cut. Moreover, \begin{equation}\label{eq:structure1} u=\sum_{n=1}^N c_n\mathds{1}_{A_n}, \end{equation} where $A_n,$ $1\leq n\leq N$, are Cheeger cuts of $\Omega$ satisfying $A_{N}\subsetneq A_{N-1}\subsetneq \cdots \subsetneq A_1,$ and $c_n>0.$ \end{lemma} \begin{proof} Let $\{a_i\}_{i=0}^M$ be the range of the function $u,$ such that $$0=a_0<a_1<\cdots<a_M=\max_{x\in V}u(x).$$ Then for any $\sigma\in [a_i, a_{i+1}),$ $0\leq i\leq M-1,$ $$\Omega_\sigma=\Omega_{a_i}.$$ By the proof of Lemma~\ref{lem:equivalent}, see \eqref{main-proof-2}, \begin{eqnarray*}h(\Omega)|u|_1&=&\lambda_{1,1}(\Omega)|u|_1=E_1(u)=\int_{0}^{+\infty}G(\sigma)d\sigma=\sum_{i=0}^{M-1}(a_{i+1}-a_i)|\partial \Omega_{a_i}|_w\\ &\geq& h(\Omega) \sum_{i=0}^{M-1}(a_{i+1}-a_i)|\Omega_{a_i}|_\mu=h(\Omega)|u|_1.\end{eqnarray*} Hence for any $0\leq i\leq M-1,$ $$|\partial \Omega_{a_i}|_w=h(\Omega)|\Omega_{a_i}|_\mu.$$ Therefore for any $\sigma\in[0,a_M),$ $\Omega_\sigma$ is a Cheeger cut. For the other assertion, note that $$u=a_1\mathds{1}_{\Omega_{a_0}}+(a_2-a_1)\mathds{1}_{\Omega_{a_1}}+\cdots+(a_M-a_{M-1})\mathds{1}_{\Omega_{a_{M-1}}}.$$ This proves the result. \end{proof} Now we are ready to prove Theorem~\ref{thm:main1}.
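Before turning to the proof, we record a small numerical illustration of the convergence described in Theorem~\ref{thm:main1} (it plays no role in the arguments). The sketch below, written in Python with NumPy/SciPy, minimizes the Rayleigh quotient \eqref{eq:Rayleigh} on a hypothetical path graph for several values of $p$ approaching $1$; the graph, the optimizer and the tolerances are illustrative choices only and are not taken from this paper.
\begin{verbatim}
# Minimize the Rayleigh quotient E_p(u)/|u|_p^p over C_0(Omega) for p near 1
# and inspect the limit of the normalized minimizer.  Toy data; not a proof.
import numpy as np
from scipy.optimize import minimize

# Path graph v0 - v1 - v2 - v3 - v4; Omega = {v1, v2, v3}, boundary value 0.
mu = np.array([1.0, 1.0, 1.0])                 # vertex measure on Omega
edges = [(3, 0), (0, 1), (1, 2), (2, 3)]       # index 3 encodes the boundary

def rayleigh(u, p):
    vals = np.append(u, 0.0)                   # append the boundary value 0
    E = sum(abs(vals[i] - vals[j]) ** p for i, j in edges)   # unit weights
    return E / np.sum(mu * np.abs(u) ** p)

def normalized_first_eigenfunction(p):
    res = minimize(rayleigh, x0=np.ones(len(mu)), args=(p,),
                   method='Nelder-Mead',
                   options={'xatol': 1e-10, 'fatol': 1e-12, 'maxiter': 50000})
    u = np.abs(res.x)
    u /= np.sum(mu * u ** p) ** (1.0 / p)      # enforce |u|_p = 1
    return res.fun, u

for p in [2.0, 1.5, 1.1, 1.01]:
    lam, u = normalized_first_eigenfunction(p)
    print(p, round(lam, 4), np.round(u, 4))
\end{verbatim}
For this graph the Cheeger cut is unique (it is $\Omega$ itself, with $h(\Omega)=2/3$), so by Corollary~\ref{coro:1} the computed minimizers should approach $\frac{1}{3}\mathds{1}_{\Omega}$ as $p\to 1$, while the computed Rayleigh quotients should approach $h(\Omega)$.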
\begin{proof}[Proof of Theorem~\ref{thm:main1}] Note that $$|u_i|_{p_i}=1,\quad \widetilde{E_{p_i}}(u_i)=\lambda_{1,p_i}(\Omega).$$ For any $x\in \Omega,$ $\mu_x|u_i(x)|^{p_i}\leq |u_i|_{p_i}^{p_i}=1,$ so the sequence $\{u_i\}_i$ is uniformly bounded on $\Omega.$ Since $\Omega$ is a finite set, there is a subsequence $\{p_{i_k}\}_k$ of $\{p_i\}_i$ and $u\in C_0(\Omega)$ such that $$u_{i_k}\to u,\quad \mathrm{pointwise\ on\ \Omega}.$$ Hence $$u\geq 0,\quad |u|_1=\lim_{k\to \infty}|u_{i_k}|_{p_{i_k}}=1,$$$$\quad E_1(u)=\lim_{k\to \infty}E_{p_{i_k}}(u_{i_k})=\lim_{k\to \infty}\lambda_{1,p_{i_k}}(\Omega)=\lambda_{1,1}(\Omega).$$ This yields that $u$ is a nonnegative normalized first eigenfunction of the $1$-Laplacian. The theorem then follows from Lemma~\ref{lem:structure}. \end{proof} We now deduce Corollary~\ref{coro:1}. \begin{proof}[Proof of Corollary~\ref{coro:1}] It suffices to prove that for any sequence $p_i\to 1,$ $p_i>1,$ there is a subsequence $p_{i_k}$ such that $$\lim_{k \to \infty}u_{p_{i_k}}=\frac{1}{|A|_\mu} \mathds{1}_A,$$ where $A$ is the Cheeger cut of $\Omega.$ By the proof of Theorem~\ref{thm:main1}, there is a subsequence $p_{i_k}$ and a nonnegative normalized first eigenfunction $u$ of the $1$-Laplacian such that $$\lim_{k \to \infty}u_{p_{i_k}}=u.$$ Since the Cheeger cut of $\Omega$ is unique, by \eqref{eq:structure1}, $u=c\mathds{1}_A.$ Since $|u|_1=1,$ $c=\frac{1}{|A|_\mu}.$ This proves the corollary. \end{proof} \section{An example}\label{sec:example} In this section, we give an example to show that the limit of the first eigenfunction of $\Delta_p$ may not be the characteristic function of a single set. \begin{example}\label{exam:1} Consider the graph $G$ as in Fig.~\ref{Fig:1}, with edge weights $w_{xy}=1$ for any $x\sim y$ and vertex weights $\mu_{x_1}=\mu_{x_2}=2,\mu_{y_1}=\mu_{y_2}=4.$ Let $\Omega=\{x_1,x_2,y_1,y_2\}.$ By enumeration, the Cheeger cuts for $h(\Omega)$ are $$\{x_1,x_2\},\{x_1,x_2,y_1\},\{x_1,x_2,y_2\},\{x_1,x_2,y_1,y_2\}.$$ \end{example} \begin{figure} \caption{Example $G$} \label{Fig:1} \end{figure} We calculate the normalized first eigenfunctions $u_p$ of the $p$-Laplacian on $\Omega$ for $p>1.$ It is easy to see that there is a symmetry, $T:\Omega\to \Omega,$ such that $$T(x_1)=x_2,T(x_2)=x_1,T(y_1)=y_2,T(y_2)=y_1,$$ preserving the Dirichlet boundary condition. Since the first eigenfunction is positive and unique up to constant multiplication, $u_p(x_1)=u_p(x_2),u_p(y_1)=u_p(y_2),$ see \cite{BH}. For convenience, we write $u_p$ as a vector $$u_p=(u_p(x_1),u_p(x_2),u_p(y_1),u_p(y_2)),$$ and by scaling we set $$v_p=\frac{1}{u_p(x_1)}u_p=(1,1,t_p,t_p).$$ Note that $v_p$ is also a first eigenfunction of the $p$-Laplacian on $\Omega.$ Setting $x=t_p,$ the eigen-equation gives \begin{equation}\left\{\begin{array}{ll}-\frac{1}{2}|1-x|^{p-2}(x-1)=\lambda_{1,p},\\ -\frac14(3|x|^{p-2}(-x)+|1-x|^{p-2}(1-x))=\lambda_{1,p} x^{p-1}. \end{array}\right.
\end{equation} By the first equation, $x<1.$ Plugging the first equation into the second and dividing by $x^{p-1}$, we get $$2(1-x)^q+(\frac1x-1)^q-3=0,$$ where $q=p-1.$ Set $$f(x,q)=2(1-x)^q+(\frac1x-1)^q-3,\quad \forall x\in (0,1], q>0.$$ For any $q>0,$ since $f(\cdot, q)$ is monotonically decreasing, $f(1,q)=-3$ and $f(x,q)\to +\infty$ as $x\to 0+$, there is a unique solution to $f(x,q)=0$, denoted by $x_q.$ Note that $x_q=t_p.$ For fixed $x\in (0,1),$ we write $a=1-x,$ $b=\frac1x-1,$ and set $$g(q)=f(x,q)=2 a^q+b^q-3.$$ For $q\geq 0,$ $$g'(q)=2 a^q\ln a+b^q\ln b ,\quad g''(q)=2 a^q(\ln a)^2+ b^q(\ln b)^2\geq 0,$$ where the derivatives are understood as right derivatives at $q=0.$ Hence $g$ is convex on $[0,+\infty),$ which yields that for $q\geq 0,$ \begin{equation}\label{eq:exam1}g(q)\geq g(0)+g'(0)q= \ln(a^2 b)\, q.\end{equation} One can show that \begin{equation*}\left\{\begin{array}{ll}a^2 b>1,& x<\hat{x},\\ a^2 b=1,& x=\hat{x},\\ a^2 b<1,& x>\hat{x},\\ \end{array}\right. \end{equation*} where $\hat{x}$ is the real solution of $(1-x)^3=x,$ given by $$\hat{x}=1-\sqrt[3]{\frac{\sqrt{93}+9}{18}}+\sqrt[3]{\frac{\sqrt{93}-9}{18}}\approx 0.31767.$$ Hence by \eqref{eq:exam1}, for any $x<\hat{x},$ $$g(q)\geq \ln(a^2 b)\, q>0,\quad q>0.$$ Therefore $x_q\geq \hat{x}.$ For any $x>\hat{x},$ by the Taylor expansion of $g$ at $q=0,$ $$g(q)= \ln(a^2 b)\, q+o(q),\quad q\to 0.$$ Hence, since $a^2b<1,$ for sufficiently small $q,$ $$g(q)<0,$$ which yields that $x_q<x.$ This implies that $\limsup_{q\to 0} x_q\leq x.$ By passing to the limit $x\to \hat{x}+,$ we get $$\limsup_{q\to 0} x_q\leq \hat{x}.$$ Combining this with $x_q\geq \hat{x},$ we obtain $$\lim_{q\to 0} x_q=\hat{x}.$$ Hence, as $p\to 1,$ $$v_p=(1,1, t_p,t_p)\to (1,1, \hat{x},\hat{x}),$$ and therefore $$u_p=\frac{v_p}{|v_p|_p}\to \frac{1}{4+8\hat{x}}(1,1, \hat{x},\hat{x})\approx (0.15287,0.15287,0.04856,0.04856).$$ Hence the limit is not a characteristic function of a single set. \end{document}
\begin{document} \title {Invariant Differential Operators on Nonreductive Homogeneous Spaces} \author{Tom H. Koornwinder} \date{} \maketitle \begin{abstract}\noindent A systematic exposition is given of the theory of invariant differential operators on a not necessarily reductive homogeneous space. This exposition is modelled on Helgason's treatment of the general reductive case and the special nonreductive case of the space of horocycles. As a final application the differential operators on (not a priori reductive) isotropic pseudo-Riemannian spaces are characterized. \end{abstract} \bLP {\sl MSC2000 classification\/}: 43A85 (primary); 17B35, 22E30, 58J70 (secondary). \bLP {\sl Key words and phrases\/}: invariant different operators; nonreductive homogeneous spaces; space of horocycles; isotropic pseudo-Riemannian spaces. \bLP {\sl Note\/}: This is an essentially unchanged electronic version of Report ZW 153/81, Mathematisch Centrum, Amsterdam, 1981; MR 82g:43011. \section{Introduction} Let $G$ be a Lie group and $H$ a closed subgroup. Let $\gog$ and $\goh$ denote the corresponding Lie algebras. Suppose that the coset space $G/H$ is {\sl reductive}, i.e., there is a complementary subspace $\gom$ to $\goh$ in $\gog$ such that $\Ad_G(H)\gom\subset \gom$. Let $\DD(G/H)$ denote the algebra of $G$-invariant differential operators on $G/H$. The main facts about $\DD(G/H)$ are summarized below (cf.\ HELGASON [3, Ch.III], [4, Cor. X.2.6, Theor. X.2.7], [6, \S 2]). Let $\DD(G)$ be the algebra of left invariant differential operators on $G$, $\DD_H(G)$ the subalgebra of operators which are right invariant under $H$ and $S(\gog)$ the complexified symmetric algebra over $\gog$. Let $\lambda\colon S(\gog)\to\DD(G)$ denote the symmetrization mapping. $I(\gom)$ denotes the set of $\Ad_G(H)$-invariants in $S(\gom)$. Then \begin{equation} \DD_H(G)=\DD(G)\goh\cap \DD_H(G)\oplus\lambda(I(\gom)). \end{equation} Let $\pi\colon G\to G/H$ be the natural mapping. Let $C^\infty_H(G)$ consist of the $C^\infty$-functions on $G$ which are right invariant under $H$. Write ${\tilde f}:=f\circ\pi$ ($f\in C^\infty(G/H)$) and $(D_uf)^{\sim} :=u{\tilde f}$ ($f\in C^\infty(G/H)$, $u\in\DD_H(G)$). Then $D_u\in\DD(G/H)$. \begin{thm}\quad The mapping $u\mapsto D_u$ is an algebra homomorphism from $\DD_H(G)$ onto $\DD(G/H)$ with kernel $\DD(G)\goh\cap\DD_H(G)$. The mapping $P\mapsto D_{\lambda(P)}\colon I(\gom)\to\DD(G/H)$ is a linear bijection. \end{thm} Theorem 1.1. is of basic importance for the analysis on symmetric spaces. In particular, it can be shown that $\DD(G/H)$ is commutative if $G/H$ is a pseudo-Riemannian symmetric space which admits a relatively invariant measure. In its most general form this result was proved by DUFLO [1] in an algebraic way. G. van Dijk kindly communicated a short analytic proof of Duflo's result to me (unpublished). In [1] DUFLO used generalizations of (1.1) and Theorem 1.1 to the case of homogeneous line bundles over $G/H$. These can be proved by only minor changes of Helgason's original proofs. There exist nonreductive coset spaces $G/H$ for which $\DD(G/H)$ is still commutative. For instance, let $G$ be a connected real semisimple Lie group and let $M$ and $N$ be the usual subgroups of $G$. Then $G/MN$ is the space of horocycles and $\DD(G/MN)$ is commutative. In order to prove this, formula (1.1) and Theorem 1.1 have to be adapted to the nonreductive case. 
While HELGASON [5, \S 4], [6, \S 3] has done this in an ad hoc way for the special coset spaces under consideration, it is the purpose of the present note to give a more systematic exposition of the theory of $\DD(G/H)$ for a not necessarily reductive coset space. Furthermore, following Duflo, the theory will be developed for invariant differential operators on homogeneous line bundles over $G/H$. As a final application we will characterize $\DD(G/H)$ for isotropic pseudo-Riemannian symmetric spaces $G/H$ without a priori knowledge that $G/H$ is reductive. Throughout HELGASON [4] will be our standard reference. \section{Development of the general theory} Let $G$ be a Lie group with Lie algebra $\gog$. For $X\in \gog$ define the vector field ${\tilde X}$ on $G$ by \begin{equation} ({\tilde X}f)(g):=\frac{d}{dt}f(g\exp tX)\big|_{t=0}\,, \quad f\in C^\infty(G), \ g\in G. \end{equation} Then the mapping $X\mapsto {\tilde X}$ is an isomorphism from $\gog$ onto the Lie algebra of left invariant vector fields on $G$. Throughout this section let $X_1,\dots,X_n$ be a fixed basis of $\gog$. For a finite-dimensional real vector space $V$ the symmetric algebra $S(V)$ is defined as the algebra of all polynomials with complex coefficients on $V^*$, the dual of $V$. Let $S^m(V)$ respectively $S_m(V)$ ($m=0,1,2,\dots\;$) denote the space of homogeneous polynomials of degree $m$ on $V^*$, respectively of polynomials of degree $\le m$ on $V^*$. Thus $S^m(G)$ is spanned by the monomials $X_{i_1}X_{i_2}\dots X_{i_m}$ ($i_1,\dots,i_m\in \{1,\dots, n\}$). Let $\DD(G)$ be the algebra of left invariant differential operators on $G$ with complex coefficients. For $P\in S(\gog)$ define an operator $\lambda(P)$ on $C^\infty(G)$ by \begin{equation} (\lambda(P)f)(g):=P\left(\frac{\partial}{\partial t_1},\cdots,\frac{\partial}{\partial t_n}\right)f(g\exp(t_1X_1+\cdots+t_nX_n))\Big|_{t_1=\cdots=t_n=0}, \end{equation} where \[ P\left(\frac{\partial}{\partial t_1},\cdots,\frac{\partial}{\partial t_n}\right):=\frac{\partial^m}{\partial t_{i_1}\cdots \partial t_{i_m}}\quad \mbox{for $P=X_{i_1}\cdots X_{i_m}$.} \] It is proved in [4, Prop. II.1.9 and p. 392] that: \begin{prop} The mapping $P\mapsto\lambda(P)$ is a linear bijection from $S(\gog)$ onto $\DD(G)$. It satisfies \begin{equation} \lambda(Y^m)={\tilde Y}^m, \quad Y\in \gog; \end{equation} \begin{equation} \lambda(Y_1\dots Y_m)=\frac{1}{m!}\sum_{\sigma\in S_m} {\tilde Y}_{\sigma(1)}\dots{\tilde Y}_{\sigma(m)}, \quad Y_1,\dots, Y_m\in \gog. \end{equation} The definition of $\lambda$ is independent of the choice of the basis of $\gog$. \end{prop} The mapping $\lambda$ is called {\sl symmetrization}. The Lie algebra $\gog$ is embedded as a subspace of $\DD(G)$ under the mapping $X\to{\tilde X}$. Any homomorphism from $\gog$ to $\gog$ uniquely extends to a homomorphism from $\DD(G)$ to $\DD(G)$ and any linear mapping from $\gog$ to $\gog$ uniquely extends to a homorphism from $S(\gog)$ to $S(\gog)$. In particular, for $g\in G$, the automorphism $\Ad(g)$ of $\gog$ uniquely extends to automorphisms of both $S(\gog)$ and $\DD(G)$ and \begin{equation} \lambda(\Ad(g)P)=\Ad(g)\lambda(P), \quad P\in S(\gog), \ g\in G. \end{equation} For $g,g_1\in G, \ f\in C^\infty(G), \ D\in\DD(G)$ write \[f^{R(g)}(g_1):=f(g_1g); \quad D^{R(g)}f:=(Df^{R(g^{-1})})^{R(g)}.\] Then \begin{equation} \Ad(g)D=D^{R(g^{-1})}, \quad D\in\DD(G), \ g\in G. \end{equation} Let $H$ be a closed subgroup of $G$ and let $\goh$ be the corresponding subalgebra. 
Let $\gom$ be a subspace of $\gog$ complementary to $\goh$. Let $X_1,\dots, X_r$ be a basis of $\gom$ and $X_{r+1},\dots, X_n$ a basis of $\goh$. Let $\chi$ be a character of $H$, i.e.\ a continuous homomorphism from $H$ to the multiplicative group $\CC\backslash\{0\}$. Throughout this section, $H$, $\gom$, the basis and $\chi$ will be assumed fixed. Let $\pi\colon G\to G/H$ be the canonical mapping. Write $0:=\pi(e)$. Let \begin{equation} C^\infty_{H,\chi}(G):=\{f\in C^\infty(G)\mid f(gh)=f(g)\chi(h^{-1}), \ g\in G, \ h\in H\}. \end{equation} Sometimes we will assume that $\chi$ has an extension to a character on $G$. This assumption clearly holds if $\chi\equiv 1$ on $H$, but it does not hold for general $H$ and $\chi$. For instance, if $G=SU(2)$ or $SL(2,\RR)$ and $H=SO(2)$ then nontrivial characters on $H$ do not extend to characters on $G$. If $\chi$ extends to a character on $G$ then we define a linear bijection $f\mapsto{\tilde f}\colon C^\infty(G/H)\to C^\infty_{H,\chi}(G)$ by \begin{equation} {\tilde f}(g):=f(\pi(g))\chi(g^{-1}), \quad g\in G. \end{equation} \begin{lem} Let $P\in S(\gom)$. If $\lambda(P)f=0$ for all $f\in C^\infty_{H,\chi}(G)$ then $P=0$. \end{lem} \noindent \Proof For each $f\in C^\infty(G/H)$ we can find $F\in C^\infty_{H,\chi}(G)$ such that \[F(\exp(t_1X_1+\cdots+t_rX_r))=f(\exp(t_1X_1+\cdots+t_rX_r)\cdot 0)\] for $(t_1,\dots,t_r)$ in some neighbourhood of $(0,\dots,0)$. Hence \[ 0=(\lambda(P)F)(e) =P\left(\frac{\partial}{\partial t_1},\cdots,\frac{\partial}{\partial t_r}\right)f(\exp(t_1X_1+\cdots+t_rX_r)\cdot 0)\Big|_{t_1=\cdots=t_r=0} \] for all $f\in C^\infty(G/H)$, so $P=0$. $\Box$ Let the differential of $\chi$ also be denoted by $\chi$. Let $\goh^\CC$ be the complexification of $\goh$. Let \begin{equation} \goh^\chi:=\{X+\chi(X)\mid X\in \goh^\CC\}\subset\DD(G). \end{equation} Clearly, $Df=0$ if $f\in C^\infty_{H,\chi}(G)$ and $D\in \goh^\chi$. Let $\DD(G)\goh^\chi$ be the linear span of all $vw$ with $v\in\DD(G), \ w\in \goh^\chi$. Observe that, by Proposition 2.1, ${\tilde Y}_1\dots{\tilde Y}_m\in\lambda(S_m(\gog))$ for $Y_1,\dots, Y_m\in \gog$. The following proposition was proved in [4, Lemma X.2.5] for $\chi\equiv 1$. \begin{prop} There are the direct sum decompositions \begin{equation} \lambda(S_m(\gog))=\lambda(S_{m-1}(\gog))\goh^\chi\oplus\lambda(S_m(\gom)) \end{equation} and \begin{equation} \DD(G)=\DD(G)\goh^\chi\oplus\lambda(S(\gom)). \end{equation} \end{prop} \noindent \Proof First we prove by complete induction with respect to $m$ that \[\lambda(S_m(\gog))\subset\lambda(S_{m-1}(\gog))\goh^\chi+ \lambda(S_m(\gom)).\] This clearly holds for $m=0$. Suppose it is true for $m<d$. Let \[P=X_1^{d_1}\cdots X_n^{d_n}, \quad d_1+\cdots+d_n=d.\] If $d_{r+1}+\cdots+d_n=0$, then $P\in S_d(\gom)$, so $\lambda(P)\in\lambda(S_d(\gom))$. If $d_{r+1}+\cdots+d_n>0$ then, by (2.4), $\lambda(P)$ is a linear combination of certain elements ${\tilde Y}_1\cdots{\tilde Y}_d$ with $Y_i\in \goh$ for at least one $i$, so \[\lambda(P)\in\lambda(S_{d-1}(\gog))\goh^\CC+ \lambda(S_{d-1}(\gog))\subset\lambda(S_{d-1}(\gog))\goh^\chi+ \lambda(S_{d-1}(\gog)).\] Now apply the induction hypothesis. This yields (2.10) and (2.11) (use Proposition 2.1), except for the directness. To prove the directness of the sum (2.11), suppose that $P\in S(\gom)$ and $\lambda(P)\in \DD(G)\goh^\chi$. Then $\lambda(P)f=0$ for all $f\in C^\infty_{H,\chi}(G)$, so $P=0$ by Lemma 2.2. $\Box$ \begin{lem} Let $D\in \DD(G)$.
Then $Df=0$ for all $f\in C^\infty_{H,\chi}(G)$ if and only if $D\in\DD(G)\goh^\chi$. \end{lem} \noindent \Proof Apply Proposition 2.3 and Lemma 2.2. $\Box$ Let us define \begin{equation} \DD_{H,\chi,{\rm mod}}(G):=\{D\in\DD(G)\mid \Ad(h)D-D\in\DD(G)\goh^\chi \ \mbox{for all} \ h\in H\}. \end{equation} This definition is motivated by the following lemma. \begin{lem} Let $D\in\DD(G)$. Then the following two statements are equivalent. \begin{itemize} \item[{\rm(i)}] $D\in\DD_{H,\chi,{\rm mod}}(G)$. \item[{\rm(ii)}] $f\in C^\infty_{H,\chi}(G)\Rightarrow Df\in C^\infty_{H,\chi}(G)$. \end{itemize} \end{lem} \noindent \Proof Let $D\in \DD(G)$. If $f\in C^\infty_{H,\chi}(G), \ h\in H$ then \begin{itemize} \item [($\star$)]\qquad\qquad $(Df)^{R(h)}=D^{R(h)}f^{R(h)}=\chi(h^{-1})D^{R(h)}f$. \end{itemize} \noindent First assume (i). If $f\in C^\infty_{H,\chi}(G), \ h\in H$, then $(D^{R(h)}-D)f=(\Ad(h)D-D)f=0$, so combination with ($\star$) yields $(Df)^{R(h)}=\chi(h^{-1})Df$, i.e., $Df\in C^\infty_{H,\chi}(G)$. Conversely, assume (ii). If $f\in C^\infty_{H,\chi}(G), \ h\in H$, then $(Df)^{R(h)}=\chi(h^{-1})Df$, so combination with ($\star$) yields $(D^{R(h)}-D)f=0$. Hence $\Ad(h)D-D=D^{R(h)}-D\in \DD(G)\goh^\chi$ by Lemma 2.4. $\Box$ {}From the preceding results the following theorem is now obvious. \begin{thm} \quad\par \begin{itemize} \item [\rm(a)] $\DD_{H,\chi,{\rm mod}}(G)$ is a subalgebra of $\DD(G)$. \item [\rm(b)] $\DD(G)\goh^\chi$ is a two-sided ideal in $\DD_{H,\chi,{\rm mod}}(G)$. \item [\rm(c)] There is the direct sum decomposition. \begin{equation} \DD_{H,\chi,{\rm mod}}(G)= \DD(G)\goh^\chi\oplus\lambda(S(\gom))\cap \DD_{H,\chi,{\rm mod}}(G). \end{equation} \item [\rm(d)] Define the mappings $A$ and $B$ by \begin{eqnarray*} u\stackrel{A}{\longmapsto} u({\rm mod}\; \DD(G)\goh^\chi)&\stackrel{B}{\longmapsto}& u\big|_{C^\infty_{H,\chi}(G)}\colon\\ \lambda(S(\gom))\cap\DD_{H,\chi,{\rm mod}}(G)&\stackrel{A}{\longrightarrow}& \DD_{H,\chi,{\rm mod}}(G)/\DD(G)\goh^\chi \stackrel{B}{\longrightarrow}\DD_{H,\chi,{\rm mod}} \Big|_{C^\infty_{H,\chi}(G)}. \end{eqnarray*} Then $A$ is a linear bijection and $B$ is an algebra isomorphism onto. \end{itemize} \end{thm} Define the mapping $\sigma\colon \gog\to \gom$ by \begin{equation} \sigma(X+Y):=X, \quad X\in \gom, \ Y\in \goh. \end{equation} Consider $S(\gom)$ as a subalgebra of $S(\gog)$. Thus, if $P\in S(\gom)$ and $h\in H$, then $\Ad(h)P\in S(\gog)$ and $\sigma\circ \Ad(h)P\in S(\gom)$ are well-defined. By an application of (2.4) we see that, if $Q\in S_m(\gog)$, then \begin{equation} \lambda(\sigma Q-Q)\in\lambda(S_{m-1}(g))+\DD(G)\goh^\chi. \end{equation} Define the algebra \begin{equation} I_{\rm mod}(\gom):= \{P\in S(\gom)\mid\sigma\circ \Ad(h)P=P \ \mbox{for all} \ h\in H\}. \end{equation} \begin{lem} Let $P\in S(\gom)$ such that $\lambda(P)\in\DD_{H,\chi,{\rm mod}}(G)$. Write $P=P^m+P_{m-1}$, where $P^m\in S^m(\gom), \ P_{m-1}\in S_{m-1}(\gom)$. Then $P^m\in I_{{\rm mod}}(\gom)$. \end{lem} \noindent \Proof $\lambda(\Ad(h)P-P)\in\DD(G)\goh^\chi$ by (2.12). Hence \[\lambda(\Ad(h)P^m-P^m)\in\lambda(S_{m-1}(\gog))+\DD(G)\goh^\chi.\] So \[\lambda(\sigma\circ \Ad(h)P^m-P^m)\in\lambda(S_{m-1}(\gog))+ \DD(G)\goh^\chi\subset\lambda(S_{m-1}(\gom))+\DD(G)\goh^\chi,\] where we used (2.16) and (2.10). By directness of the decomposition (2.10): \[\sigma\circ \Ad(h)P^m-P^m\in S_{m-1}(\gom).\] Hence $\sigma\circ \Ad(h)P^m-P^m$, being homogeneous of degree $m$, is the zero polynomial. 
$\Box$ \begin{prop} If $\lambda(I_{\rm mod}(\gom))\subset\DD_{H,\chi,{\rm mod}}(G)$ then \[\lambda(I_{\rm mod}(\gom))=\lambda(S(\gom))\cap\DD_{H,\chi,{\rm mod}}(G)\] and the mapping \[D\mapsto D\Big|_{C^\infty_{H,\chi}(G)}\colon\lambda(I_{{\rm mod}}(\gom))\to \DD_{H,\chi,{\rm mod}}(G)\Big|_{C^\infty_{H,\chi}(G)}\] is a linear bijection. \end{prop} \noindent \Proof Use complete induction with respect to the degree of $P\in S(\gom)$ in order to prove that $P\in I_{\rm mod}(\gom)$ if $\lambda(P)\in\DD_{H,\chi,{\rm mod}}(G)$ (apply Lemma 2.7). The second implication in the proposition follows from Theorem 2.6(d). $\Box$ Suppose for the moment that $\chi$ extends to a character on $\gog$ and remember the mapping $f\to{\tilde f}$ defined by (2.8). For $u\in\DD_{H,\chi,{\rm mod}}(G)$ define an operator $D_u$ acting on $C^\infty(G/H)$ by \begin{equation} (D_uf)^\sim:=u{\tilde f}, \quad f\in C^\infty(G/H). \end{equation} Then $\supp(D_uf)\subset\supp(f)$, hence, by Peetre's theorem (cf.\ for instance NARASIMHAN [7, \S 3.3]), $D_u$ is a differential operator on $G/H$. One easily shows that $D_u\in\DD(G/H)$, the space of $G$-invariant differential operators on $G/H$. \begin{thm} Suppose that $\chi$ extends to a character on $G$. Then the mapping \[u\Big|_{C^\infty_{H,\chi}(G)}\stackrel{C}{\longmapsto}D_u\colon\quad \DD_{H,\chi,{\rm mod}}(G)\Big|_{C^\infty_{H,\chi}(G)}\stackrel{C}{\longrightarrow}\DD(G/H)\] is an algebraic isomorphism onto. \end{thm} \noindent \Proof Clearly, $C$ is an isomorphism into. In order to prove the surjectivity let $D\in\DD(G/H)$. Then there is a polynomial $P\in S(\gom)$ such that \[(Df)(g\cdot 0)= P\left(\frac{\partial}{\partial t_1},\cdots,\frac{\partial}{\partial t_r}\right)f(g\exp(t_1X_1+\cdots+t_rX_r)\cdot 0\Big|_{t_1=\cdots=t_r=0}\] for all $f\in C^\infty(G/H)$ and for $g=e$. By the $G$-invariance of $D$ this formula holds for all $g\in G$. By (2.8) and (2.2) this becomes \[\chi(Df)^\sim=\lambda(P)(\chi{\tilde f}), \quad\mbox{i.e.,}\quad (Df)^\sim=(\chi^{-1}\lambda(P)\circ\chi)({\tilde f}).\] Clearly, $\chi^{-1}\lambda(P)\circ\chi\in\DD(G)$ and, by Lemma 2.5, we have $\chi^{-1}\lambda(P)\circ\chi\in\DD_{H,\chi{\rm mod}}(G)$. Thus, by (2.17), $D=D_{\chi^{-1}\lambda(P)\circ\chi}$. $\Box$ Suppose now that the coset space $G/H$ is {\sl reductive}, i.e., $\gom$ can be chosen such that $\Ad(h)\gom\subset \gom$ for all $h\in H$. {}From now on assume that $\gom$ is chosen in this way. Let \begin{eqnarray} \DD_H(G)&:=&\{D\in\DD(G)\mid \Ad(h)D=D\quad \mbox{for all} \ h\in H\},\\ I(\gom)&:=& \{P\in S(\gom)\mid \Ad(h)P=P\quad \mbox{for all} \ h\in H\}. \end{eqnarray} Then \[\lambda(S(\gom))\cap\DD_{H,\chi,{\rm mod}}(G)= \lambda(I(\gom))\subset\DD_H(G).\] Hence (2.13) becomes \begin{equation} \DD_{H,\chi,{\rm mod}}(G)=\DD(G)\goh^\chi\oplus\lambda(I(\gom)). \end{equation} We obtain from Theorems 2.6 and 2.9: \begin{thm} Let $G/H$ be reductive. Then: \begin{itemize} \item[\rm (a)] $\DD_H(G)$ is a subalgebra of $\DD(G)$. \item[\rm (b)] $\DD(G)\goh^\chi\cap \DD_H(G)$ is a two-sided ideal in $\DD_H(G)$. \item[\rm (c)] There is a direct sum decomposition \begin{equation} \DD_H(G)=\DD(G)\goh^\chi\cap\DD_H(G)\oplus\lambda(I(\gom)). 
\end{equation} \item[\rm (d)] Define the mappings $A, B$ and $C$ ($C$ only if $\chi$ extends to a character on $G$) by \begin{eqnarray*} u&\stackrel{A}{\longmapsto}& u({\rm mod}\,\DD(G)\goh^\chi\cap\DD_H(G)) \stackrel{B}{\longmapsto} u\Big|_{C^\infty_{H,\chi}(G)}\stackrel{C}{\longmapsto} D_u\colon \\ \lambda(I(\gom))&\stackrel{A}{\longrightarrow}& \DD_H(G)/(\DD(G)\goh^\chi\cap\DD_H(G))\stackrel{B}{\longrightarrow} \DD_H(G)\Big|_{C^\infty_{H,\chi}(G)}\stackrel{C}{\longrightarrow}\DD(G/H). \end{eqnarray*} Then $A$ is a linear bijection and $B$ and $C$ are algebra isomorphisms onto. \end{itemize} \end{thm} The case $\chi\equiv 1$ of Theorem 2.10 can be found in HELGASON [4, Cor. X.2.6 and Theor. X.2.7]. See DUFLO [1] for the general case. \section{Application to $\DD(G/N)$ and $\DD(G/MN)$} Let $G$ be a connected noncompact real semisimple Lie group. We recall some of the structure theory of $G$ (cf.\ [3, Ch.VI]): \noindent $\gog_0$ : Lie algebra of $G$.\\ $\gog$ : complexification of $\gog_0$.\\ $\theta$ : Cartan involution of $\gog_0$, extended to an automorphism of $\gog$.\\ $\gog_0=\gok_0+\gop_0$: corresponding Cartan decomposition of $\gog_0$.\\ $\goh_{\gop_0}$: maximal abelian subspace of $\gop_0$, $A$ the corresponding analytic subgroup.\\ $\goh_0$ : maximal abelian subalgebra of $\gog_0$ extending $\goh_{\gop_0}$.\\ $\goh_{\gok_0}:=\goh_0\cap\gok_0$,\quad $\goh_\gok$ its complexification.\\ $\goh$ : complexification of $\goh_0$; this is a Cartan subalgebra of $\gog$.\\ $\Delta$ : set of roots of $\gog$ with respect to $\goh$; the roots are real on $i\goh_{\gok_0}+\goh_{\gop_0}$. \noindent Introduce compatible orderings on $\goh^*_{\gop_0}$ and $(i\goh_{\gok_0}+\goh_{\gop_0})^*$. \noindent $\Delta^+$ : set of positive roots.\\ $P_+$ : set of positive roots not vanishing on $\goh_{\gop_0}$.\\ $P_-$ : set of positive roots vanishing on $\goh_{\gop_0}$.\\ $\gog^{\alpha}$ : root space in $\gog$ of $\alpha\in\Delta$.\\ $\gon$ : $\sum_{\alpha\in P_+}\, \gog^\alpha$.\\ $\gon_0 :=\gon\cap \gog_0$.\\ $N$ : analytic subgroup of $G$ corresponding to $\gon_0$.\\ $M$ : centralizer of $\goh_{\gop_0}$ in $G$, $M_0$ its identity component.\\ $\gom_0$ : Lie algebra of $M$.\\ $\gom$ : complexification of $\gom_0$; then \begin{equation} \gom=\goh_\gok+\sum_{\alpha\in P_-} (\gog^\alpha+\gog^{-\alpha}). \end{equation} \begin{prop} The coset spaces $G/MN$ and $G/N$ are not reductive. \end{prop} \noindent \Proof Suppose that $G/MN$ is reductive. Then there is an $\ad_\gog(\gom+\gon)$-invariant subspace $\gor$ of $\gog$ complementary to $\gom+\gon$. Let $\alpha\in P_+$ and let $X$ be a nonzero element of $\gog^\alpha$. For $H\in \goh$ write $H=W_H+Y_H+Z_H$ with $W_H\in \gor$, $Y_H\in \gom$, $Z_H\in \gon$. Then, for each $H\in \goh$: \[\alpha(H)X=[W_H+Y_H+Z_H,X]\] so \[\alpha(H)X-[Y_H,X]-[Z_H,X]=[W_H,X]\in \gor\cap(\gom+\gon),\] so \[[Y_H,X]+[Z_H,X]=\alpha(H)X.\] It follows from (3.1) that \[[Y_H,X]+[Z_H,X]\in\sum_{\beta\in\Delta\atop \beta\neq\alpha}\, \gog^\beta.\] Hence $\alpha(H)X=0$ for all $H\in \goh$, so $\alpha=0$. This is a contradiction. In the case $G/N$ the proof is almost the same: take $\gor$ $\ad_\gog(\gon)$-invariant and complementary to $\gon$ and $Y_H=0$. $\Box$ HELGASON [5, p. 676] states without proof that $G/MN$ is not in general reductive. Let $\gol_0$ be the orthogonal complement of $\gom_0$ in $\gok_0$ with respect to the Killing form on $\gog_0$.
In order to apply Proposition 2.8 and Theorem 2.9 to $\DD(G/MN)$ and $\DD(G/N)$ we take $\gol_0+\goh_{\gop_0}$ respectively $\gok_0+\goh_{\gop_0}$ as complementary subspaces of $\gom_0+\gon_0$ respectively $\gon_0$ in $\gog_0$. Now we have \begin{eqnarray} I_{\rm mod}(\gol_0+\goh_{\gop_0})&=&S(\goh_{\gop_0}),\\ I_{\rm mod}(\gok_0+\goh_{\gop_0})&=&S(\gom_0+\goh_{\gop_0}). \end{eqnarray} (3.2) is proved in HELGASON [5, Lemma 4.2], and (3.3) is obtained by only slight modifications of that proof. It follows from Lemma 2.5 that \[\lambda(S(\goh_{\gop_0}))\subset\DD_{MN,1,{\rm mod}}(G)\] and \[\lambda(S(\gom_0+\goh_{\gop_0}))\subset\DD_{N,1,{\rm mod}}(G),\] since $M$ centralizes $\goh_{\gop_0}$ and $\gom_0+\goh_{\gop_0}$ normalizes $\gon_0$. Consider $\DD(A)$ and $\DD(M_0A)$ as subalgebras of $\DD(G)$. Then $\DD(A)\subset\DD_{MN,1,{\rm mod}}(G)$ and $\DD(M_0A)\subset\DD_{N,1,{\rm mod}}(G)$. It follows by application of Proposition 2.8 and Theorem 2.9 that: \begin{thm} The mapping $u\mapsto D_u$ (cf. (2.17)) is an algebra isomorphism from $\DD(A)$ onto $\DD(G/MN)$ and from $\DD(M_0A)$ onto $\DD(G/N)$. In particular, $\DD(G/MN)$ is a commutative algebra. \end{thm} The statements about $\DD(G/MN)$ are in HELGASON [5, Theorem 4.1]. FARAUT [2, p.~393] observes that Helgason's result can be extended to the context of pseudo-Riemannian symmetric spaces. A special case of Theorem 3.2 can be formulated in the situation that $G$ is a connected complex semisimple Lie group. Let $\gog$ be its (complex) Lie algebra and put: \noindent $\gou$ : compact real form of $\gog$.\\ $\goa$ : maximal abelian subalgebra of $\gou$.\\ $\goh:= \goa+i\goa$; this is a Cartan subalgebra of $\gog$.\\ $\Delta$ : set of roots of $\gog$ with respect to $\goh$.\\ $\Delta^+$ : set of positive roots with respect to some ordering.\\ $\gog^\alpha$ : root space of $\alpha\in\Delta$.\\ $\gon:=\sum_{\alpha\in\Delta^+}\gog^\alpha$,\quad $N$ the corresponding analytic subgroup.\\ $\gog^\RR:=\gog$ considered as a real Lie algebra.\\ $\goh^\RR:=\goh$ considered as a real subalgebra. \noindent Then $\gog^\RR=\gou+i\goa+\gon$ is an Iwasawa decomposition for $\gog^\RR$ (cf.\ [4, Theorem VI.6.3]) and $\goa$ is the centralizer of $i\goa$ in $\gou$. Hence we obtain from Theorem~3.2: \begin{thm}\quad The mapping $P\mapsto D_{\lambda(P)}$ is an algebra isomorphism from $S(\goh^\RR)$ onto $\DD(G/N)$. In particular, $\DD(G/N)$ is commutative. \end{thm} This theorem was proved by HELGASON [6, Lemma 3.3] without use of Theorem 3.2. \section{Application to isotropic spaces} We preserve the notation and conventions of Section 2. First we prove an extension of [4, Cor. X.2.8] to the case that $G/H$ is not necessarily reductive. In the following, $A$ and $B$ are as in Theorem 2.6(d). \begin{lem} If the algebra $I_{\rm mod}(\gom)$ is generated by $P_1,\dots,P_l$ and if there are $Q_1,\dots,Q_l\in S(\gom)$ such that ${\rm degree}(P_i-Q_i)<{\rm degree}\, P_i$ and $\lambda(Q_i)\in\DD_{H,\chi,{\rm mod}}(G)$ then the algebra \[\DD_{H,\chi,{\rm mod}}(G)\Big|_{C^\infty_{H,\chi}(G)}\] is generated by $BA\lambda(Q_1),\dots,BA\lambda(Q_l)$. \end{lem} \noindent \Proof We prove by complete induction with respect to $m$ that, for each $P\in S_m(\gom)$ with $\lambda(P)\in \DD_{H,\chi,{\rm mod}}(G), \ BA\lambda(P)$ depends polynomially on $BA\lambda(Q_1),\dots, BA\lambda(Q_l)$. In view of Theorem 2.6 this will prove the lemma. Suppose the above property holds up to $m-1$. Let $P\in S_m(\gom)$ such that $\lambda(P)\in\DD_{H,\chi,{\rm mod}}(G)$.
By using Lemma 2.7 we find that $P=\Pi(P_1,\dots, P_l) \pmod {S_{m-1}(\gom)}$ for some polynomial $\Pi$ in $l$ indeterminates. Hence, $P=\Pi(Q_1,\dots, Q_l) \pmod {S_{m-1}(\gom)}$, \begin{eqnarray*} \lambda(P)&=&\lambda(\Pi(Q_1,\dots,Q_l)) \pmod{\lambda(S_{m-1}(\gom))}\\ &=&\Pi(\lambda(Q_1),\dots,\lambda(Q_l)) \pmod{\lambda(S_{m-1}(\gog))},\\ \lambda(P)&-&\Pi(\lambda(Q_1),\dots,\lambda(Q_l))\in\lambda(S_{m-1}(\gog)) \cap\DD_{H,\chi,{\rm mod}}(G). \end{eqnarray*} By Theorem 2.6 and formula (2.10) we have \[BA\lambda(P)-\Pi(BA\lambda(Q_1),\dots, BA\lambda(Q_l))=BA\lambda(P')\] for some $P'\in S_{m-1}(\gom)$ such that $\lambda(P')\in\DD_{H,\chi,{\rm mod}}(G)$. Now apply the induction hypothesis.\\ \mbox{} $\Box$ Let $\tau$ denote the action of $G$ on $G/H$. Its differential $d\tau$ yields an action of $H$ on the tangent space $(G/H)_0$ to $G/H$ at $0$. \begin{thm} Suppose there is a nondegenerate $d\tau(H)$-invariant bilinear form $\langle\cdot,\cdot\rangle$ on $(G/H)_0$ of signature $(r_1,r_2) \ (r_1+r_2=r, \ r_1\ge r_2)$ such that, for each $\lambda>0$, $d\tau(H)$ acts transitively on $\{X\in (G/H)_0\mid\langle X,\, X\rangle=\lambda\}$ (or on the connected components of these hyperbolas if $r_1=r_2=1$). Let $\Delta$ be the Laplace-Beltrami operator on $G/H$ corresponding to the $G$-invariant pseudo-Riemannian structure on $G/H$ associated with $\langle\cdot,\cdot\rangle$. Then the algebra $\DD(G/H)$ is generated by $\Delta$, and hence commutative. \end{thm} \noindent \Proof Choose a complementary subspace $\gom$ to $\goh$ in $\gog$. The mapping $d\pi$ identifies the $H$-spaces $\gom$ (under $\sigma\circ \Ad_G(H)$) and $(G/H)_0$ (under $d\tau(H)$) with each other. Transplant the form $\langle\cdot,\cdot\rangle$ to $\gom$ and choose an orthonormal basis $X_1,\dots,X_r$ of $\gom$: $\langle X_i,\, X_j\rangle= \varepsilon_i\delta_{ij}$, \ $\varepsilon_i=1$ or $-1$ for $i\le r_1$ or $i>r_1$, respectively. Then the algebra $I_{\rm mod}(\gom)$ is generated by $\sum^r_{i=1}\varepsilon_iX_i^2$. It follows from the proof of Theorem 2.9 that $\Delta=D_{\lambda(P)}$ with $P\in S(\gom)$ of degree 2 such that $\lambda(P)\in\DD_{H,1,{\rm mod}}(G)$. Thus, by Lemma 2.7, we get \[P=c\sum_{i=1}^r\, \varepsilon_iX_i^2\pmod{S_1(\gom)}\] with $c\neq 0$. Now apply Lemma 4.1 and Theorem 2.9. $\Box$ Theorem 4.2 extends [4, Prop.\ X.2.10], where the case that $G/H$ is a Riemannian symmetric space of rank $1$ is considered. A pseudo-Riemannian manifold $M$ is called {\sl isotropic} if for each $x\in M$ and for tangent vectors $X,Y\neq 0$ at $x$ with $\langle X,\, X\rangle=\langle Y,\, Y\rangle$ there is an isometry of $M$ fixing $x$ which sends $X$ to $Y$. Connected isotropic spaces can be written as homogeneous spaces $G/H$ satisfying the conditions of Theorem 4.2 with $G$ being the full isometry group (cf. WOLF [8, Lemma 11.6.6]). It follows from Wolf's classification [8, Theorem 12.4.5] that such spaces are symmetric and reductive. However, our proof of Theorem 4.2 does not use this fact. \vskip 0.5truecm \noindent {\obeylines\parindent 9.7truecm \addressfont {\sladdressfont present address:} Korteweg-de Vries Institute for Mathematics Universiteit van Amsterdam Plantage Muidergracht 24 1018 TV Amsterdam, The Netherlands email: {\ttaddressfont [email protected]}} \end{document}
\begin{document} \title{\bf Semiparametric Efficient G-estimation with Invalid Instrumental Variables} \maketitle \begin{abstract} The instrumental variable method is widely used in the health and social sciences for identification and estimation of causal effects in the presence of potential unmeasured confounding. In order to improve efficiency, multiple instruments are routinely used, leading to concerns about bias due to possible violation of the instrumental variable assumptions. To address this concern, we introduce a new class of g-estimators that are guaranteed to remain consistent and asymptotically normal for the causal effect of interest provided that at least $\gamma$ out of $K$ candidate instruments are valid, for $\gamma\leq K$ set by the analyst {\it ex ante}, without necessarily knowing the identities of the valid and invalid instruments. We provide formal semiparametric efficiency theory supporting our results. Both simulation studies and applications to the UK Biobank data demonstrate the superior empirical performance of our estimators compared to competing methods. \end{abstract} {\bf Keywords: Causal inference; g-estimation; Instrumental variable; Multiply robust; Semiparametric theory; Unmeasured confounding} \section{Introduction} \label{sec: intro} Mendelian randomization is an instrumental variable (IV) approach to causal inference with growing popularity in epidemiological studies. In Mendelian randomization studies, one aims to establish a causal relationship between a given exposure and an outcome of interest in the presence of possible unmeasured confounding, by leveraging one or more genetic markers defining the IV \citep{Davey-Smith:2003aa,Lawlor:2008aa}. In order to be a valid IV, a genetic marker must satisfy the following key conditions: (a) It must be associated with the exposure; (b) It must be independent of any unmeasured confounder of the exposure-outcome relationship; and (c) It cannot have a direct effect on the outcome variable not fully mediated by the exposure in view. Possible violation or near violation of assumption (a), known as the weak instrumental variable problem, poses an important challenge in Mendelian randomization, as individual genetic effects on complex traits can be weak. Violation of assumption (b) can also occur due to linkage disequilibrium or population stratification \citep{Lawlor:2008aa}. Assumption (c), also known as the exclusion restriction, is rarely credible in the Mendelian randomization context as it requires a complete understanding of the biological mechanism by which each genetic marker influences the outcome. Such a priori knowledge may be unrealistic in practice due to the possible existence of unknown pleiotropic effects of the genetic markers \citep{little2003mendelian,Davey-Smith:2003aa,Lawlor:2008aa}. There has been tremendous interest in the development of methods to detect and account for violations of IV assumptions (a)-(c), primarily in multiple-IV settings under standard linear outcome and exposure models. The literature addressing violations of assumption (a) is arguably the most developed and extends to possibly nonlinear models under a generalized method of moments framework; notable papers of this rich literature include \citet{Staiger:1997aa,stock2000gmm,Stock:2002aa,Chao:2005aa,Newey:2009aa}.
A growing literature has likewise emerged on methods to address violations of assumptions (b) and (c), a representative sample of which includes \citet{kolesar2015identification,Bowden:2015aa,Bowden:2016aa,Kang:2016aa,hartwig2017robust,qi2019mendelian, Windmeijer:2019aa,zhao2020statistical,morrison2020mendelian,liu2020mendelian,kang2020two,tchetgen2021genius}, and \cite{ye2021genius}. The current paper is most closely related to the works of \citet{Kang:2016aa,Guo:2018aa} and \cite{Windmeijer:2019aa}. Specifically, \cite{Kang:2016aa} developed a penalized regression approach that can recover valid inferences about the causal effect of interest provided fewer than 50\% of the genetic markers are invalid instruments (known as the majority rule); \citet{Windmeijer:2019aa} improved on the penalized approach, including a proposal for standard error estimation lacking in \citet{Kang:2016aa}. In an alternative approach, \citet{Han:2008aa} established that the median of multiple estimators of the effect of exposure, obtained using one instrument at a time, is a consistent estimator under a majority rule and an assumption that instruments cannot have direct effects on the outcome unless the instruments are uncorrelated. \citet{Guo:2018aa} proposed two-stage hard thresholding (TSHT) with voting, which is consistent for the causal effect under linear outcome and exposure models, and a plurality rule which can be considerably weaker than the majority rule. The plurality rule is defined in terms of regression parameters encoding (i) the association of each invalid instrument with the outcome and (ii) the association of the corresponding instrument with the exposure. The condition effectively requires that the number of valid instruments is greater than the largest number of invalid instruments with equal ratio of regression coefficients (i) and (ii). Furthermore, they provide a simple construction for 95\% confidence intervals to obtain inferences about the exposure causal effect which are guaranteed to have correct coverage under the plurality rule. Importantly, in these works, a candidate instrument may be invalid either because it violates the exclusion restriction, or because it shares an unmeasured common cause with the outcome, i.e. either (b) or (c) fails. Both the penalized approach and the median estimator may be inconsistent if the majority rule fails, while TSHT may be inconsistent if the plurality rule fails. It is important to note that because confidence intervals for the causal effect of the exposure obtained by \citet{Windmeijer:2019aa} and \citet{Guo:2018aa} rely on a consistent model selection procedure, such confidence intervals fail to be uniformly valid over the entire model space \citep{leeb2008sparse,Guo:2018aa}. Interestingly, \citet{Kang:2016aa,Guo:2018aa} and \citet{Windmeijer:2019aa} effectively tackle the task of obtaining valid inferences about a confounded causal effect in the presence of invalid instruments as a model selection problem. Specifically, they aim to correctly identify which candidate instruments are invalid, while simultaneously obtaining valid Mendelian randomization inferences using the selected valid instruments, arguably a more challenging task than may be required to obtain causal inferences that are robust to the presence of invalid instruments without necessarily knowing which instruments are invalid.
In this paper, we propose novel methods that by-pass the model selection step altogether to deliver a regular and asymptotically linear g-estimator \citep{robins1992estimating,robins1994estimation,vansteelandt2003causal} of the causal parameter of interest, provided that at least $\gamma$ out of $K$ candidate instruments are valid, for $1\leq\gamma\leq K$ set by the analyst, without necessarily knowing their identities. A necessary trade-off, in which more stringent IV relevance requirements are exchanged for less stringent IV validity requirements, is revealed by decreasing the value of $\gamma$. We characterize a semiparametric efficiency bound which cannot be improved upon by any regular and asymptotically linear estimator which shares the robustness property of our class of estimators. Although Mendelian randomization is used as a motivating example throughout the paper, the development of methodology to adequately address the issue of invalid instruments remains a priority for several disciplines, including biostatistics, epidemiology, econometrics and sociology, to which our results equally apply. \section{Preliminaries} \subsection{Data and notation} \label{sec:notation} Suppose that $(O_1,...,O_n)$ are independent, identically distributed observations of $O=(Y,A,\boldsymbol{Z})$ from a target population, where $Y$ is an outcome of interest, $A$ is an exposure and $\boldsymbol{Z}=(Z_1,...,Z_K)$ comprises $K$ candidate instruments. Following the tradition in the IV literature \citep{robins1994correcting,angrist1995identification,angrist1996identification, heckman1997instrumental}, to ease exposition we will focus on the canonical case of binary instruments $\boldsymbol{Z}$ taking values in $\mathcal{Z}=\{0,1\}^K$, but the framework can be extended readily to candidate instruments with arbitrary finite support. In the context of Mendelian randomization studies, $Z_k$ represents a binary coding of the minor allele variant at the $k$-th single nucleotide polymorphism location which defines the $k$-th candidate instrument. For any index set $\alpha\subseteq \{1,...,K\}$, let $\boldsymbol{Z}_{\alpha}=(Z_s:s\in\alpha)$ and $\boldsymbol{Z}_{-\alpha}=(Z_s:s \not\in\alpha)$. Let ${\rm I\!H}$ denote the Hilbert space of one-dimensional functions of $\boldsymbol{Z}$ with mean zero and finite variance, equipped with the covariance inner product. For a set $J$, let $|J|$ denote its cardinality. For a vector $\upsilon$, let $\upsilon^ {\mathrm{\scriptscriptstyle T}} $ denote its transpose, $\upsilon^{\otimes 2}=\upsilon\upsilon^{ {\mathrm{\scriptscriptstyle T}} }$, $\upsilon \odot w$ its Hadamard product with another vector $w$ of the same dimension, and $||\upsilon||_0$ its zero-norm, that is, the number of nonzero elements in $\upsilon$. In addition, let $\widehat{E}_n\{g(O)\}=n^{-1}\sum_{i=1}^n g(O_i)$ denote the empirical mean operator. \subsection{Causal model} To define the causal effects of interest under the potential outcomes framework \citep{Neyman:1923a,Rubin:1974}, let $Y(a,\mathbf{z})\in {\rm I\!R}$ denote the potential outcome that would be observed had the exposure and instruments been set to the levels $a\in {\rm I\!R}$ and $\mathbf{z}\in \mathcal{Z}$ respectively, and let $A(\mathbf{z})\in {\rm I\!R}$ denote the potential exposure had the instruments been set to the value $\mathbf{z}\in \mathcal{Z}$.
It is well known that, even when all of the candidate instruments in $\boldsymbol{Z}$ are valid, average causal effects cannot be identified from the observed data law without further restrictions \citep{pearl2009causality}. A common structural assumption in the invalid IV literature is the additive linear, constant effects model of {\cite{holland1988causal}}, extended to allow multiple invalid instruments {\citep{small2007sensitivity}}. \begin{assumption} \label{assp:alice} For two possible values of the exposure $a^{\prime}$, $a$ and a possible value of the instruments $\mathbf{z}\in\mathcal{Z}$, $$Y{(a^{\prime},\mathbf{z})}-Y{(a,\mathbf{0})}=\beta^{\ast} (a^{\prime}-a)+\psi(\mathbf{z}).$$ \end{assumption} \noindent The unknown parameter of interest is $\beta^{\ast}\in{\rm I\!R}$, which encodes the homogeneous causal effect per unit change in the exposure. Here $\psi(\cdot)$ is an unknown function satisfying $\psi(\mathbf{0})=0$ which encodes the direct effects of the instruments on the outcome; changing the instruments from the reference level $\mathbf{0}$ to $\mathbf{z}$ results in a direct effect of $\psi(\mathbf{z})$ on the outcome. Assumption \ref{assp:alice} rules out any interactions between the causal effect of the exposure and the direct effects of the instruments on the outcome, but allows for arbitrary interactions among the direct effects of the instruments. We formalize the definition of valid instruments in conjunction with assumption \ref{assp:alice} as follows. \begin{definition} \label{def:valid} Suppose assumption \ref{assp:alice} holds with $K$ candidate instruments. For any index set $\alpha\subseteq \{1,...,K\}$, we say that the candidate instruments $\boldsymbol{Z}_{\alpha}$ are valid while the remaining instruments $\boldsymbol{Z}_{-\alpha}$ are invalid if the causal model $\mathcal{C}(\alpha)$ defined by $E\{Y(0,0)|\boldsymbol{Z}\}=E\{Y(0,0)|\boldsymbol{Z}_{-\alpha}\}$ and $\psi(\boldsymbol{Z})=\psi(\boldsymbol{Z}_{-\alpha})$ holds almost surely. \end{definition} \noindent Definition \ref{def:valid} is closely related to the basic conditions which define valid instruments under the potential outcomes framework \citep{robins1994correcting, angrist1996identification}. Specifically, when there is only one candidate binary instrument $Z$, assumption (b) of no unmeasured confounding of the exposure-outcome relationship is typically stated as $Y(a,z)$ and $A(z)$ being independent of $Z$ for any levels of the exposure $a$ and instrument $z$, which implies $E\{Y(0,0)|Z\}=E\{Y(0,0)\}$. In addition, the exclusion restriction is typically stated as the condition $Y(a,1)=Y(a,0)$ for any level of the exposure $a$, which implies that $\psi(Z)=0$ and hence does not depend on the value of $Z$. Suppose we have oracle knowledge of the valid and invalid instruments so that $\mathcal{C}(\tilde{\alpha})$ is known to hold for some $\tilde{\alpha}\subseteq \{1,...,K\}$. Assumption \ref{assp:alice} yields the following under-identified semiparametric observed data model, \begin{align} \label{eq:semi} Y=\beta^{\ast} A+\psi(\boldsymbol{Z})+E\{Y{(0,0)}|\boldsymbol{Z}\}+\varepsilon, \quad E(\varepsilon|\boldsymbol{Z})=0, \end{align} where $\varepsilon=Y{(0,0)}-E\{Y(0,0)|\boldsymbol{Z}\}$.
Therefore the causal model $\mathcal{C}(\tilde{\alpha})$ implies the observed data model $\mathcal{M}(\tilde{\alpha})$ defined by the conditional mean independence restriction \begin{align*} E(Y-\beta^{\ast} A\mid\boldsymbol{Z}) =E(Y-\beta^{\ast} A\mid\boldsymbol{Z}_{-\tilde{\alpha}}). \end{align*} \noindent G-estimators of $\beta^{\ast}$ developed in the context of additive and multiplicative structural mean models \citep{robins1992estimating,robins1994estimation} may be constructed in model $\mathcal{M}(\tilde{\alpha})$ based on the unconditional form \begin{align} \label{eq:g} E\{d(\boldsymbol{Z})(Y-\beta^{\ast} A)\}=E\{d(\boldsymbol{Z})E(Y-\beta^{\ast} A|\boldsymbol{Z})\}=E\{d(\boldsymbol{Z})E(Y-\beta^{\ast} A|\boldsymbol{Z}_{-\tilde{\alpha}})\}=0, \end{align} for any $d(\boldsymbol{Z})\in \{h(\boldsymbol{Z}): E\{h(\boldsymbol{Z})|\boldsymbol{Z}_{-\tilde{\alpha}}\}=0\}\cap{{\rm I\!H}}$. In the absence of oracle knowledge about $\tilde{\alpha}$, identification and estimation of $\beta^{\ast}$ has been an area of active research \citep{kolesar2015identification, bowden2016consistent, Kang:2016aa,Windmeijer:2019aa,Guo:2018aa}. The next section briefly discusses these references to motivate our proposed approach. \begin{remark} Assumption \ref{assp:alice} may be weakened to accommodate heterogeneous causal effects at the individual level; see \citet{angrist1995two}, \citet{hernan2006instruments}, \citet{small2007sensitivity} and the discussion in \citet{Kang:2016aa}. For example, the observed data implication $\mathcal{C}({\alpha})\implies \mathcal{M}({\alpha})$ continues to hold if we replace assumption \ref{assp:alice} with the following additive structural mean model \citep{robins1994correcting}, $$E\{Y{(a^{\prime},\mathbf{z})}-Y{(a,\mathbf{0})}\mid \boldsymbol{Z}=\mathbf{z}\}=\beta(a^{\prime}-a)+\psi(\mathbf{z}),$$ where $\beta$ now encodes the homogeneous {\it average} additive causal effect within subpopulations defined by the values of the candidate instruments. Nonetheless, to ease exposition and comparison with prior works in the invalid IV literature, we will focus on the constant effects model in assumption \ref{assp:alice}. A homogeneity condition akin to assumption \ref{assp:alice} for the outcome model, or a similar condition for the exposure model, is necessary for point identification of average causal effects; see \cite{robins1994correcting}, \cite{angrist1996identification}, \cite{hernan2006instruments} and \cite{wang2018bounded}. Crucially, assumption \ref{assp:alice} is guaranteed to hold under the sharp null hypothesis of no additive causal effect, in which case a distribution-free test of the null becomes feasible. Partial identification of causal effects with invalid instruments under a fully nonparametric model that does not impose assumption \ref{assp:alice} is discussed in Section \ref{sec:partial}. \end{remark} \subsection{Prior work on identification and estimation of model parameters} \label{sec:prior} In addition to assumption \ref{assp:alice}, \cite{Kang:2016aa}, \cite{Windmeijer:2019aa} and \cite{Guo:2018aa} assume the linear models \begin{equation} \begin{split} \label{li} E\{Y{(0,0)}|\boldsymbol{Z}\}=\sum_{k=1}^K s^{\ast}_k Z_k,\quad \psi(\boldsymbol{Z})=\sum_{k=1}^K t^{\ast}_k Z_k, \end{split} \end{equation} which rule out interactions among the direct effects of the instruments.
Violation of assumptions (b) or (c) for the $k$-th candidate instrument is encoded by $s^{\ast}_k\neq 0$ or $t^{\ast}_k\neq 0$, respectively. This definition of valid instruments is a special case of definition \ref{def:valid} and yields the under-identified single-equation linear model in \citet{wooldridge2010econometric}, \begin{equation} \begin{split} \label{lout} Y=\beta^{\ast} A+\sum_{k=1}^K \underbrace{(s^{\ast}_k+t^{\ast}_k)}_{=\pi^{\ast}_k} Z_k+ \varepsilon_1,\quad E(\varepsilon_1|\boldsymbol{Z})=0. \end{split} \end{equation} Thus $\pi^{\ast}_k=0$ if the $k$-th candidate instrument is valid, and $\tilde{\alpha}=\{k:\pi^{\ast}_k=0\}$. In conjunction with the linear exposure model \begin{equation} \begin{split} \label{eq:exposure} A=\sum_{k=1}^K \xi^{\ast}_k Z_k + \varepsilon_2,\quad E( \varepsilon_2|\boldsymbol{Z})=0, \end{split} \end{equation} where $(\varepsilon_1,\varepsilon_2)$ may be correlated due to potential unmeasured confounding, the reduced form for (\ref{lout}) is \begin{equation*} \begin{split} Y=\sum_{k=1}^K \underbrace{(\beta^{\ast}\xi^{\ast}_k+\pi^{\ast}_k)}_{=\Gamma^{\ast}_k} Z_k+ (\varepsilon_1+\beta^{\ast}\varepsilon_2),\quad E(\varepsilon_1+\beta^{\ast}\varepsilon_2|\boldsymbol{Z})=0. \end{split} \end{equation*} The parameters $\{\xi^{\ast}_k,\Gamma^{\ast}_k: k=1,...,K\}$ are identified and can be estimated by regressing $A$ and $Y$ on the candidate instruments, respectively. Therefore identification of $\beta^{\ast}$ entails finding conditions on the model parameter space so that there is a unique, invertible mapping between $\{\beta^{\ast},\pi^{\ast}_k: k=1,...,K\}$ and $\{\xi^{\ast}_k,\Gamma^{\ast}_k:k=1,...,K\}$ through the relationship $\Gamma^{\ast}_k=\beta^{\ast}\xi^{\ast}_k+\pi^{\ast}_k$. Assuming all the candidate instruments are relevant in the sense that $|\{k:\xi^{\ast}_k\neq 0\}|=K$, a necessary and sufficient condition is that the valid instruments form a plurality defined by the parameter values, $|\{k:\pi^{\ast}_k= 0\}|>\max_{c\neq 0}|\{k:\pi^{\ast}_k/\xi^{\ast}_k=c\}|$ \citep{hartwig2017robust,Guo:2018aa}. A sufficient condition is $|\{k:\pi^{\ast}_k= 0\}|\geq \lceil K/2 \rceil$, also known as the majority rule \citep{Kang:2016aa,Windmeijer:2019aa}. Under these identifying conditions, the parameters $\{\beta^{\ast},\pi^{\ast}_k:k=1,...,K\}$ can be jointly estimated using penalized regression methods, so that $\tilde{\alpha}$ can in principle be recovered. Consistency of these selection procedures generally also depends on the parameter space being ``well-separated'' \citep{zhao2006model,https://doi.org/10.1111/j.1467-9868.2011.00771.x,Kang:2016aa,Windmeijer:2019aa,Guo:2018aa}. \section{Multiply robust identification and estimation with invalid instruments} \label{sec:indentification} \subsection{Framework} The majority rule is a popular condition that is conceptually easy to interpret and widely adopted \citep{bowden2016consistent, Kang:2016aa,Windmeijer:2019aa}. A natural extension of interest is to identify $\beta^{\ast}$ when at least $\gamma$ out of the $K$ candidate instruments are valid, for any integer value $1\leq\gamma\leq K$ specified by the analyst. The value of $\gamma$ may be determined by subject matter expertise. For example, a geneticist can provide a rough gauge on the maximum number of invalid instruments in Mendelian randomization studies.
Alternatively, even when such {\it a priori} knowledge is lacking, one may wish to conduct a sensitivity analysis in which a consistent estimator is obtained assuming at least $\gamma$ valid instruments, over a range of values for $\gamma$, to reflect uncertainty about its true value \citep{rosenbaum2010design}. A key challenge in pursuing this direction under the linear causal and exposure models discussed in Section \ref{sec:prior} is that identification of $\beta^{\ast}$ is entwined with parameter values which also encode the validity of instruments. For inference, the recent work by \cite{kang2020two} establishes valid confidence intervals for $\beta^{\ast}$ when at least $\gamma$ instruments are valid under the linear causal model (\ref{li}), by taking unions of the confidence intervals constructed over all possible valid instrument sets $\alpha\in\{\ell \subseteq\{1,...,K\}:|\ell|= \gamma\}$. However, identification of $\beta^{\ast}$ may fail even when the analyst has a significant degree of {\it ex ante} confidence that at least $\gamma$ out of the $K$ candidate instruments are valid, for some $\gamma<K/2$, if the plurality rule is violated. This represents a significant gap in the invalid IV literature, which has essentially restricted identification and estimation of causal effects to $\gamma=\lceil K/2 \rceil$. This paper is the first to establish identification of $\beta^{\ast}$ in a union of multiple causal models under assumption \ref{assp:alice}, which formalizes one's prior belief that at least $\gamma$ out of the $K$ candidate instruments are valid, without {\it ex ante} knowledge of which ones are. Specifically, given a collection $\{\mathcal{A}({\alpha}):\alpha\in\mathcal{Q}\}$ of causal or observed data models indexed by elements of a finite set $\mathcal{Q}$, let $\cup_{\alpha\in\mathcal{Q}}\mathcal{A}(\alpha)$ denote the union model in which at least one of the models in $\{\mathcal{A}({\alpha}):\alpha\in\mathcal{Q}\}$ holds. \begin{definition} \label{def:union1} Suppose assumption \ref{assp:alice} holds with $K$ candidate instruments. We say that at least $\gamma$ out of the $K$ candidate instruments are valid if the union causal model $\cup_{\alpha\in\{\ell \subseteq\{1,...,K\}:|\ell|\geq \gamma\}}\mathcal{C}(\alpha)$ holds. \end{definition} \noindent Identification of $\beta^{\ast}$ is accomplished by exploiting the implication on the observed data, which is given in the next result. \begin{proposition} \label{prop:implication} Suppose assumption \ref{assp:alice} holds. The union causal model $\cup_{\alpha\in\{\ell \subseteq\{1,...,K\}:|\ell|\geq \gamma\}}\mathcal{C}(\alpha)$ implies the union observed data model $\cup_{\alpha\in\{\ell \subseteq\{1,...,K\}:|\ell|= \gamma\}}\mathcal{M}(\alpha)$. \end{proposition} \subsection{Identification and semiparametric efficiency bound} The causal parameter of interest $\beta^{\ast}$ is identified in the union observed data model $\cup_{\alpha\in\{\ell \subseteq\{1,...,K\}:|\ell|= \gamma\}}\mathcal{M}(\alpha)$ if it is identified in $\mathcal{M}({\alpha})$ for each $\alpha\in\{\ell \subseteq\{1,...,K\}:|\ell|= \gamma\}$.
The key idea of our proposed approach is that the unconditional moment condition (\ref{eq:g}) holds in $\mathcal{M}({\alpha})$ for each $\alpha\in\{\ell \subseteq\{1,...,K\}:|\ell|= \gamma\}$, for any index function that lies in the subspace $$d(\boldsymbol{Z})\in{\rm I\!H}_{\gamma}=\{h(\boldsymbol{Z}): E\{h(\boldsymbol{Z})|\boldsymbol{Z}_{-\alpha}\}=0\text{ for each }\alpha\in\{\ell \subseteq\{1,...,K\}:|\ell|= \gamma\}\}\cap{{\rm I\!H}}.$$ To ease exposition in characterizing the subspace ${\rm I\!H}_{\gamma}$, we first assume joint independence of the candidate instruments, which we later relax in Section \ref{sec:corr}. \begin{assumption} \label{assp:indp} The distribution of the candidate instruments is non-degenerate and factorizes as $\Pr(\boldsymbol{Z}\leq\mathbf{z})=\prod_{s=1}^K \Pr(Z_s\leq z_s)$ for any $\mathbf{z}\in\mathcal{Z}$. \end{assumption} \noindent Assumption \ref{assp:indp} may be made plausible in Mendelian randomization studies by performing linkage disequilibrium clumping, using the well-established genetic analysis software PLINK \citep{purcell2007plink}, to choose an independent set of genetic variants as candidate instruments. \begin{example} \label{ex:1} To fix ideas, suppose assumption \ref{assp:indp} holds with $K=2$ candidate instruments. Then ${\rm I\!H}$ is spanned by a set of $2^2-1$ orthogonal functions, \begin{align*} {\rm I\!H}&=\text{span}(\{Z_1-\mu^{\ast}_1,Z_2-\mu^{\ast}_2,(Z_1-\mu^{\ast}_1)(Z_2-\mu^{\ast}_2)\}), \end{align*} where $\mu^{\ast}_k=E(Z_k)$ for $k=1,...,K$. In addition, suppose the analyst has a significant degree of {\it ex ante} confidence that at least $\gamma=1$ of the candidate instruments is valid, in which case we hope to characterize the subspace ${\rm I\!H}_{1}=\{h(\boldsymbol{Z}): E\{h(\boldsymbol{Z})|\boldsymbol{Z}_{-\alpha}\}=0\text{ for each }\alpha\in\{\{1\},\{2\}\}\}\cap{{\rm I\!H}}=\{h(\boldsymbol{Z}): E\{h(\boldsymbol{Z})|\boldsymbol{Z}_{k}\}=0\text{ for each }k=1,2\}\cap{{\rm I\!H}}$. It is straightforward to verify that only the demeaned two-way interaction $(Z_1-\mu^{\ast}_1)(Z_2-\mu^{\ast}_2)$ satisfies $E\{(Z_1-\mu^{\ast}_1)(Z_2-\mu^{\ast}_2)\mid Z_k\}=0$ for each $k=1,2$. Therefore, the moment condition $$E\{(Z_1-\mu^{\ast}_1)(Z_2-\mu^{\ast}_2)(Y-\beta^{\ast} A)\}=0$$ holds in $\mathcal{M}(\alpha)$ for each $\alpha\in\{\{1\},\{2\}\}$, that is, in the union observed data model $\cup_{\alpha\in\{\{1\},\{2\}\}}\mathcal{M}(\alpha)$. Since ${\rm I\!H}=\text{span}(\{Z_1-\mu^{\ast}_1,Z_2-\mu^{\ast}_2\})\oplus\text{span}(\{(Z_1-\mu^{\ast}_1)(Z_2-\mu^{\ast}_2)\})$, we can conclude that ${\rm I\!H}_{1}=\text{span}(\{(Z_1-\mu^{\ast}_1)(Z_2-\mu^{\ast}_2)\})=\{\theta (Z_1-\mu^{\ast}_1)(Z_2-\mu^{\ast}_2):\theta\in {\rm I\!R}\}$. \end{example} Intuitively, because at most $K-\gamma$ of the candidate instruments are invalid, each of the demeaned interactions of order at least $K-\gamma+1$ necessarily involves at least one valid instrument, without {\it ex ante} knowledge about their identities. We generalize this result for arbitrary values of $K$ and $1\leq\gamma\leq K$ below. { \begin{theorem} \label{prop:mr1} Suppose assumption \ref{assp:indp} holds with $K$ candidate instruments.
Then $${\rm I\!H}_{\gamma}=\text{span}\left(\left\{\Pi_{s\in\alpha}(Z_s-\mu^{\ast}_s):\alpha\in\{\ell \subseteq\{1,...,K\}:K-\gamma+1\leq |\ell|\leq K\}\right\}\right),$$ and accordingly the dimension of ${\rm I\!H}_{\gamma}$ is $d_{\gamma}=\sum_{j=K-\gamma+1}^K{K \choose j}$. Without loss of generality, we consider the enumeration $\{\alpha(1),...,\alpha(d_{\gamma})\}$ of the elements in $\left\{\ell \subseteq\{1,...,K\}:K-\gamma+1\leq |\ell|\leq K\right\}$ in some fixed order and let $\mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})=\{\Pi_{s\in\alpha(1)}(Z_s-\mu^{\ast}_s),...,\Pi_{s\in\alpha(d_{\gamma})}(Z_s-\mu^{\ast}_s)\}^ {\mathrm{\scriptscriptstyle T}} $ where $\boldsymbol{\mu}^{\ast}=(\mu^{\ast}_1,...,\mu^{\ast}_K)^ {\mathrm{\scriptscriptstyle T}} $. Then the causal parameter of interest $\beta^{\ast}$ is identified in the union observed data model $\cup_{\alpha\in\{\ell \subseteq\{1,...,K\}:|\ell|= \gamma\}}\mathcal{M}(\alpha)$ as the unique solution to the $d_{\gamma}$ unconditional moment restrictions \begin{align} \label{eq:ident} E\{\mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})(Y-\beta A)\}=\mathbf{0}, \end{align} provided that \begin{align} \label{eq:ident2} \biggl\| \frac{\partial E\{\mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})(Y-\beta A)\}}{\partial \beta}\biggr\| _0=||E\{\mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})A\}||_0> 0. \end{align} \end{theorem} Condition (\ref{eq:ident2}) requires at least one of the $d_{\gamma}$ functions in $\mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})$ to be correlated with the exposure, which is a rank condition for identification with invalid instruments. Note that this requirement becomes less stringent as $\gamma$ increases. In fact, if we believe that all of the $K$ candidate instruments are valid, then we need at least one out of the $d_{K}=2^K-1$ main and interaction terms to be correlated with the exposure. The semiparametric identification framework outlined in Theorem \ref{prop:mr1} is particularly instructive as it reveals a fundamental trade-off between the strength of the required rank condition and the number of valid instruments one is willing to assume in a given analysis. In practice, the appropriateness of the IV relevance assumption in question for a given value of $\gamma$ is empirically testable, as later illustrated in the UK Biobank data analysis in Section \ref{sec:UKB}. \begin{remark} In conjunction with Proposition \ref{prop:implication}, Theorem \ref{prop:mr1} establishes multiply robust causal identification, because $\beta^{\ast}$ is identified as long as one of the causal models in the collection $\{\mathcal{C}({\alpha}):\alpha\in\{\ell \subseteq\{1,...,K\}:|\ell|\geq \gamma\}\}$ holds.
The multiple robustness result of Theorem \ref{prop:mr1} is distinct from prior literature which primarily concerns nuisance model specification under a single causal model \citep{10.2307/2669930,vansteelandt2008multiply,tchetgen2012semiparametric,molina2017multiple,babino2019multiple}, whereas our notion of multiply robust inference concerns identification in the union of causal models. To the best of our knowledge, Theorem \ref{prop:mr1} constitutes the first instance of a multiple robustness property for causal identification. \end{remark} In this paper, we consider regular and asymptotically linear estimators of $\beta^{\ast}$ implicitly defined by the $d_{\gamma}$ moment conditions (\ref{eq:ident}), in the semiparametric model restricted only by assumption \ref{assp:indp} and condition (\ref{eq:ident2}). An estimator is regular in the semiparametric model if it converges to its limiting distribution in a locally uniform manner in any regular parametric submodel therein; see \cite{bickel1993efficient} and \cite{newey1990semiparametric} for more precise definitions. Regularity of an estimator is often considered desirable, as the estimator's limiting distribution is not affected by small vanishing perturbations of the data generating process and thus the resulting confidence intervals are uniformly valid over the entire model space. \begin{theorem} \label{prop:mr2} Let $\beta^{\ast}$ be defined by the $d_{\gamma}$ moment conditions (\ref{eq:ident}). Under assumption \ref{assp:indp} and condition (\ref{eq:ident2}), any regular and asymptotically linear estimator $\widehat{\beta}(\gamma)$ of $\beta^{\ast}$ must satisfy the following asymptotic expansion: \begin{align*} {n}^{1/2} \{ \widehat{\beta}(\gamma)-\beta^{\ast}\} = E\{ \theta^ {\mathrm{\scriptscriptstyle T}} \mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})A\}^{-1} {n}^{1/2}\widehat{E}_n \{ \theta^ {\mathrm{\scriptscriptstyle T}} \mathbf{G}_{\gamma}(O;\beta^{\ast},\boldsymbol{\mu}^{\ast})\}+o_{p}\left( 1\right) , \end{align*} for any $d_{\gamma}$-dimensional real vector $\theta$ that satisfies $E\{\theta^ {\mathrm{\scriptscriptstyle T}} \mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})A\}\neq 0$, where $\mathbf{G}_{\gamma}(O;\beta,\boldsymbol{\mu})=\mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu})\odot \{(Y-\beta A)+\mathbf{A}_{\gamma}(\boldsymbol{Z};\beta)\}$ and $\mathbf{A}_{\gamma}(\boldsymbol{Z};\beta)$ is a $d_{\gamma}$-dimensional augmentation vector with the $j$-th entry equal to \begin{align*} \sum_{1\leq r \leq |\alpha(j)|}\left\{\sum_{\alpha\in\{\ell\subseteq\alpha(j):|\ell|=r\}}(-1)^{r} E(Y-\beta A\mid \boldsymbol{Z}_{-\alpha})\right\}.
\end{align*} Furthermore, the efficient regular and asymptotically linear estimator admits the above expansion with $\theta_{opt}=E\{\mathbf{G}^{\otimes 2}_{\gamma}(O;\beta^{\ast},\boldsymbol{\mu}^{\ast})\}^{-1}E\{\mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})A\}$, so that the asymptotic variance lower bound for all regular estimators of $\beta^{\ast}$ is given by $$\mathcal{V}_{\gamma}=[E\{A\mathbf{D}^ {\mathrm{\scriptscriptstyle T}} _{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})\} E\{\mathbf{G}^{\otimes 2}_{\gamma}(O;\beta^{\ast},\boldsymbol{\mu}^{\ast})\}^{-1}E\{\mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})A\}]^{-1}.$$ \end{theorem} {Because $\mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})$ is a subvector of $\mathbf{D}_{\gamma^{\prime}}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})$ for any $\gamma < \gamma^{\prime} $, the corresponding efficiency bounds satisfy $\mathcal{V}_{\gamma}\geq \mathcal{V}_{\gamma^{\prime}}$. Theorem \ref{prop:mr2} therefore provides a formal quantification of the efficiency cost one must incur in exchange for increased causal robustness, as $\cup_{\alpha\in\{\ell \subseteq\{1,...,K\}:|\ell|\geq \gamma\}}\mathcal{C}(\alpha)$ subsumes $\cup_{\alpha\in\{\ell \subseteq\{1,...,K\}:|\ell|\geq \gamma^{\prime}\}}\mathcal{C}(\alpha)$. It also establishes that the IV relevance condition (\ref{eq:ident2}) is in fact necessary for the existence of a regular and asymptotically linear estimator of $\beta^{\ast}$.} Unless the candidate instruments are under the control of the investigator through some known allocation scheme such as randomisation, their marginal means $\boldsymbol{\mu}^{\ast}$ are typically unknown in observational studies. Under assumption \ref{assp:indp}, we can obtain the nonparametric estimator $\widehat{\boldsymbol{\mu}}$ which solves \begin{align} \label{eq:nuis} \widehat{E}_n\{(Z_1-\mu_1,...,Z_K-\mu_K)^ {\mathrm{\scriptscriptstyle T}} \}=0. \end{align} The augmentation vector $\mathbf{A}_{\gamma}(\boldsymbol{Z};\beta)$ may be viewed as an adjustment to the influence function arising from nonparametric estimation of the nuisance parameters $\boldsymbol{\mu}^{\ast}$. To illustrate, suppose assumption \ref{assp:indp} holds with $K=2$ candidate instruments and the analyst believes that at least $\gamma=1$ of the candidate instruments is valid, in which case $\mathbf{D}_{1}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})$ comprises only the demeaned two-way interaction $(Z_1-\mu^{\ast}_1)(Z_2-\mu^{\ast}_2)$; see Example \ref{ex:1}. By Theorem \ref{prop:mr2}, any influence function for estimation of $\beta^{\ast}$ defined by the moment condition $E\{(Z_1-\mu^{\ast}_1)(Z_2-\mu^{\ast}_2)(Y-\beta A)\}=0$ is, up to multiplicative constants, of the form \begin{align} \label{eq:if} (Z_1-\mu^{\ast}_1)(Z_2-\mu^{\ast}_2)\{Y-\beta^{\ast}A-E(Y-\beta^{\ast}A|Z_1)-E(Y-\beta^{\ast}A|Z_2)+E(Y-\beta^{\ast}A)\}.
\end{align} Here the three outcome regressions in the augmentation term correspond to adjustments arising from nonparametric estimation of $\mu^{\ast}_1$, $\mu^{\ast}_2$ and $\mu^{\ast}_1 \mu^{\ast}_2$. Furthermore, it can be verified that the moment condition \begin{align} \label{eq:alt} E\{Y-\beta^{\ast}A-E(Y-\beta^{\ast}A|Z_1)-E(Y-\beta^{\ast}A|Z_2)+E(Y-\beta^{\ast}A)\mid \boldsymbol{Z}\}=0, \end{align} holds in $\mathcal{M}(\alpha)$ for each $\alpha\in\{\{1\},\{2\}\}$, which represents an alternative identification approach involving outcome regressions. When nonparametric methods are used to estimate these outcome regressions, estimators based on moment condition (\ref{eq:alt}) also have influence functions of the form in (\ref{eq:if}). \subsection{G-estimation via two-stage least squares} \label{sec:g-estimation} \cite{newey1999two} and \cite{ackerberg2014asymptotic} showed that a form of semiparametric optimally weighted generalized method of moments (GMM) estimator of \cite{Hansen:1982aa} with $\widehat{\boldsymbol{\mu}}$ plugged in achieves the efficiency bound $\mathcal{V}_{\gamma}$. In this paper, following \citet{robins2000robust} and \citet{okui2012doubly}, we propose estimation of $\beta^{\ast}$ via the following standard two-stage least squares procedure, which may be readily implemented using existing off-the-shelf software.\\ {\it Step 1.} Regress $A$ on $\{1,\mathbf{D}^ {\mathrm{\scriptscriptstyle T}} _{\gamma}(\boldsymbol{Z};\widehat{\boldsymbol{\mu}})\}^ {\mathrm{\scriptscriptstyle T}} $ by ordinary least squares to obtain the fitted values $ \widehat{A}=\widehat{\theta}_{0}+\widehat{\theta}^ {\mathrm{\scriptscriptstyle T}} \mathbf{D}_{\gamma}(\boldsymbol{Z};\widehat{\boldsymbol{\mu}}). $\\ {\it Step 2.} Regress $Y$ on $(1,\widehat{A})^ {\mathrm{\scriptscriptstyle T}} $ by ordinary least squares to obtain the estimated slope parameter $\widehat{\beta}_{g}(\gamma)$.\\ It follows from the results of \citet{robins2000robust} and \citet{okui2012doubly} that $\widehat{\beta}_g(\gamma)$ admits the asymptotic linear expansion in Theorem \ref{prop:mr2} indexed by $\theta_{g}=E\{\mathbf{D}^{\otimes 2}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})\}^{-1}E\{\mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})A\}$, the population least squares coefficient vector from regressing $A$ on $\mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})$. For inference, the standard sandwich estimator of the asymptotic variance treating $\widehat{\boldsymbol{\mu}}$ as known is conservative \citep{robins1992estimating,vansteelandt2003causal}, albeit readily available as an output from existing software which implements two-stage least squares. A convenient alternative for obtaining standard errors is to implement the nonparametric bootstrap, which appropriately accounts for all sources of uncertainty. } \section{Simulation Studies} \label{sec:simulation} \subsection{Simulation design} We perform Monte Carlo simulations to investigate the numerical performance of the proposed g-estimation method. The instruments $\boldsymbol{Z}=(Z_1,...,Z_K)^{ {\mathrm{\scriptscriptstyle T}} }$ are generated from independent Bernoulli distributions with success probability $p=0.8$.
Similar to the study designs in \citet{Guo:2018aa} and \citet{Windmeijer:2019aa}, the outcome and exposure are generated without covariates from \begin{align*} Y&=\beta A + \pi^{ {\mathrm{\scriptscriptstyle T}} } \boldsymbol{Z}+\varepsilon_1,\\ A&= \xi^{ {\mathrm{\scriptscriptstyle T}} } f(\boldsymbol{Z})+\varepsilon_2, \end{align*} where $f(\boldsymbol{Z})$ is a vector comprising all main and interaction terms of $\boldsymbol{Z}$, and \begin{align*} \begin{pmatrix}\varepsilon_1\\ \varepsilon_2 \end{pmatrix} \sim N \begin{bmatrix} \begin{pmatrix} 0\\ 0 \end{pmatrix}\!\!,& \begin{pmatrix} 1 & 0.25 \\ 0.25 & 1 \end{pmatrix} \end{bmatrix}. \end{align*} We set the number of candidate instruments to be $K=5$, the causal effect parameter $\beta=1$, and vary (1) the sample size $n=10000$ or $50000$; (2) the number of valid instruments given by $K-||\pi||_0$; and (3) the IV strength $\xi=(1,...,1)^TC$ for $C=0.6$ or $1$. Under the above data generating mechanism, the coefficients indexing the linear model $A={\xi}^{ {\mathrm{\scriptscriptstyle T}} } \boldsymbol{Z}+{\varepsilon_2}$, which omits interactions, are also of the form $(1,...,1)^T{C}^{\prime}$ for some constant ${C}^{\prime}$. For (2), we first set $\pi=(0,0,0,0.2,0.2)^T$, encoding three valid instruments so that the majority rule of \citet{Kang:2016aa} holds. Then we consider the case where only two instruments are valid, and set $\pi=(0,0,0.1,0.2,0.3)^T$ or $\pi=(0,0,0.2,0.2,0.2)^T$. The plurality rule of \citet{Guo:2018aa} holds in the former design but is violated in the latter. We use the R package \texttt{AER} \citep{AppliedEconometricswithR} to implement $\widehat{\beta}_{g}(2)$ (the g-estimator). A sandwich estimate of the asymptotic variance is available directly as an output from the function \texttt{ivreg}. We compare the g-estimator with the oracle two-stage least squares (2SLS) estimator using only the valid instruments, the naive 2SLS estimator using $\boldsymbol{Z}$, the post-adaptive Lasso (Post-Lasso) estimator of \citet{Windmeijer:2019aa}, which uses adaptive Lasso tuned with an initial median estimator to select valid instruments, and the two-stage hard thresholding (TSHT) estimator of \citet{Guo:2018aa}. The oracle 2SLS estimator is included as a benchmark. It requires {\it a priori} knowledge of which of the putative instruments are valid, and thus is infeasible in practice. \subsection{Monte Carlo results} Table \ref{tab:ident} summarizes the estimation results for the setting with $C=0.6$ based on 1000 replications. When the majority rule holds, both Post-Lasso and TSHT perform nearly as well as the oracle 2SLS in terms of absolute bias, coverage and variance. When the majority rule is violated but the plurality rule holds, TSHT performs as well as the oracle 2SLS once $n=50000$. As Post-Lasso relies on the majority rule condition for consistency, it performs poorly in this setting. When both the majority and the plurality rules are violated, Post-Lasso and TSHT exhibit noticeable bias and poor coverage. In agreement with theory, naive 2SLS suffers from poor coverage throughout, regardless of sample size. The proposed g-estimator performs well across all simulation settings, including those in which both the majority and the plurality rules are violated, but with higher variance than the other methods. The absolute bias and variance of the estimators are generally smaller with the stronger IV strength $C=1$; see the Supplementary Material for estimation results for this setting.
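To make the two regression steps of Section \ref{sec:g-estimation} concrete, the following minimal Python sketch (our own illustration, assuming only \texttt{numpy}; all function names and the toy data generating process are invented for exposition and are neither the code nor the exact design used to produce Table \ref{tab:ident}) builds the demeaned-interaction instruments $\mathbf{D}_{\gamma}(\boldsymbol{Z};\widehat{\boldsymbol{\mu}})$ and computes $\widehat{\beta}_{g}(\gamma)$ by two-stage least squares.
\begin{verbatim}
import itertools
import numpy as np

rng = np.random.default_rng(0)

def demeaned_interactions(Z, gamma):
    # Columns of D_gamma(Z; mu-hat): products of the demeaned instruments
    # over all index sets of size at least K - gamma + 1.
    K = Z.shape[1]
    Zc = Z - Z.mean(axis=0)          # plug in the nonparametric mu-hat
    cols = [np.prod(Zc[:, list(idx)], axis=1)
            for r in range(K - gamma + 1, K + 1)
            for idx in itertools.combinations(range(K), r)]
    return np.column_stack(cols)

def g_estimator(Y, A, Z, gamma):
    # Step 1: OLS of A on (1, D) to obtain the fitted values A_hat.
    D = demeaned_interactions(Z, gamma)
    X1 = np.column_stack([np.ones(len(A)), D])
    A_hat = X1 @ np.linalg.lstsq(X1, A, rcond=None)[0]
    # Step 2: OLS of Y on (1, A_hat); the slope is beta-hat_g(gamma).
    X2 = np.column_stack([np.ones(len(Y)), A_hat])
    return np.linalg.lstsq(X2, Y, rcond=None)[0][1]

# Tiny toy example (NOT the design of Table 1): K = 3 binary instruments,
# instrument 3 invalid (direct effect 0.2), true beta = 1.  The exposure
# depends on the two-way interactions so that the rank condition for
# identification holds with gamma = 2 (at least two valid instruments).
n = 5000
Z = rng.binomial(1, 0.5, size=(n, 3)).astype(float)
Zc = Z - 0.5
eps = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.25], [0.25, 1.0]], size=n)
A = Z.sum(axis=1) + 2.0 * (Zc[:, 0] * Zc[:, 1] + Zc[:, 0] * Zc[:, 2]
                           + Zc[:, 1] * Zc[:, 2]) + eps[:, 1]
Y = 1.0 * A + 0.2 * Z[:, 2] + eps[:, 0]
print(g_estimator(Y, A, Z, gamma=2))   # should be close to 1
\end{verbatim}
The simulations reported above instead compute $\widehat{\beta}_{g}(\gamma)$ with the R package \texttt{AER} via \texttt{ivreg}; the sketch only mirrors Steps 1 and 2 on a small artificial example.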
\subsection{Further Monte Carlo results} The saturated linear exposure model in the simulation study was chosen to ease comparison with existing methods. The Supplementary Material also contains Monte Carlo results under two additional data generating mechanisms of interest, namely (i) sparse linear exposure models in which not all the interaction terms are present, and (ii) semiparametric single index models $E(A|\boldsymbol{Z})=g(\xi^{ {\mathrm{\scriptscriptstyle T}} }\boldsymbol{Z})$ where $g(\cdot)$ is a link function. The proposed g-estimator is able to deliver the desired level of confidence interval coverage across the range of simulation settings considered, in line with our theory. Notably, the other competing methods all fail to have correct confidence interval coverage when both the majority and plurality rules are violated. \begin{table} \begin{center} \caption{Comparison of methods with a continuous exposure. The two rows of results for each summary statistic correspond to sample sizes of $n = 10000$ and $n = 50000$, respectively.} \label{tab:ident} \begin{tabular}{cccccc} \toprule & G-estimator & Oracle 2SLS & Naive 2SLS & Post-lasso & TSHT \\ \hline & \multicolumn{5}{c}{Majority rule holds}\\ $|\text{Bias}|$ & 0.007 & 0.000& 0.014 & 0.000 & 0.000 \\ & 0.002 & 0.000 & 0.014 & 0.000 & 0.000 \\ $\sqrt{\text{Var}}$ & 0.019 & 0.002 & 0.002 & 0.003 & 0.003 \\ &0.009 & 0.001 & 0.001 & 0.001 & 0.002 \\ $\sqrt{\text{EVar}}$ & 0.019 & 0.002 & 0.002 & 0.003 & 0.003 \\ & 0.009 & 0.001 & 0.001 & 0.001 & 0.001 \\ Cov95 & 0.929 & 0.951 & 0.000 & 0.940 & 0.950 \\ & 0.943 & 0.945 & 0.000 & 0.947 & 0.947 \\ & \multicolumn{5}{c}{Majority rule violated but plurality rule holds}\\ $|\text{Bias}|$ & 0.032 & 0.000& 0.021 & 0.014 & 0.009 \\ & 0.017 & 0.000 & 0.021 & 0.012 & 0.000 \\ $\sqrt{\text{Var}}$ & 0.058 & 0.003 & 0.002 & 0.009 & 0.009 \\ &0.043 & 0.001 & 0.001 & 0.007 & 0.003 \\ $\sqrt{\text{EVar}}$ & 0.068 & 0.003 & 0.002 & 0.003 & 0.003 \\ & 0.047 & 0.001 & 0.001 & 0.001 & 0.001 \\ Cov95 & 0.935 & 0.938 & 0.000 & 0.118 & 0.424 \\ & 0.933 & 0.951 & 0.000 & 0.000 & 0.944 \\ & \multicolumn{5}{c}{Both majority and plurality rules violated}\\ $|\text{Bias}|$ & 0.032 & 0.000& 0.021 & 0.035 & 0.035 \\ & 0.017 & 0.000 & 0.021 & 0.036 & 0.035 \\ $\sqrt{\text{Var}}$ & 0.057 & 0.003 & 0.002 & 0.003 & 0.003 \\ &0.043 & 0.001 & 0.001 & 0.001 & 0.002 \\ $\sqrt{\text{EVar}}$ & 0.068 & 0.003 & 0.002 & 0.002 & 0.002 \\ & 0.047 & 0.001 & 0.001 & 0.001 & 0.001 \\ Cov95 & 0.933 & 0.941 & 0.000 & 0.000 & 0.000 \\ & 0.933 & 0.950 & 0.000 & 0.000 & 0.000 \\ \hline \end{tabular} \begin{tablenotes} \item {\noindent \small Note: $|\text{Bias}|$ and $\sqrt{\text{Var}}$ are the Monte Carlo absolute bias and standard deviation of the point estimates, $\sqrt{\text{EVar}}$ is the square root of the mean of the variance estimates, and Cov95 is the coverage proportion of the 95\% confidence intervals, based on 1000 repeated simulations. Zeros denote values smaller than $0.0005$.} \end{tablenotes} \end{center} \end{table} \section{An Application to the UK Biobank Data} \label{sec:UKB} The UK Biobank is a large-scale ongoing prospective cohort study that recruited around 500,000 participants aged 40--69 at recruitment from 2006 to 2010. Participants provided biological samples, completed questionnaires, underwent assessments, and had nurse-led interviews.
Follow-up is chiefly through cohort-wide linkages to National Health Service data, including electronic, coded death certificate, hospital, and primary care data \citep{sudlow2015uk}. To reduce confounding bias due to population stratification, we restrict our analysis to people of genetically verified white British descent, as in previous studies \citep{Tyrrell2016}. We are interested in estimating the causal effects of body mass index (BMI) on systolic blood pressure (SBP) and diastolic blood pressure (DBP), respectively. Participants who are taking anti-hypertensive medication based on self-report are excluded; the sample size for the final analysis is 292,757. We use the top 10 independent single nucleotide polymorphisms (SNPs) that are strongly associated with BMI at the genome-wide significance level $p < 5\times 10^{-8}$ \citep{Locke:2015aa}. We then implement the proposed g-estimator $\widehat{\beta}_{g}(\gamma)$ over a range of values for $\gamma$, along with the following methods: naive two-stage least squares (2SLS), TSHT \citep{Guo:2018aa}, sisVIVE \citep{Kang:2016aa} and Post-lasso \citep{Windmeijer:2019aa}. The data analysis results are summarized in Table \ref{tab:ukb}. For the BMI-SBP relationship, TSHT selected all 10 SNPs as valid instruments, which explains why its point estimate is similar to those of naive 2SLS and $\widehat{\beta}_{g}(10)$. Interestingly, for the BMI-DBP relationship, $\widehat{\beta}_{g}(4)$ delivers a point estimate of $0.649$, which is larger than those of competing methods. When $\gamma=6$ or 8, and thus more than half of the SNPs are assumed to be valid, the point estimate of $\widehat{\beta}_{g}(\gamma)$ is similar to those of sisVIVE and Post-lasso, both of which require the 50\% rule for consistency. We also observe that the standard errors of $\widehat{\beta}_{g}(\gamma)$ decrease substantially when the value of $\gamma$ increases, which is in agreement with theory. In particular, when $\gamma \leq 3$, the $p$-values for the F-test of the first-stage regression for weak instruments are not significant at the $0.05$ level, and hence the point estimates are likely to be biased due to weak instruments. For this reason, we do not report results for $\gamma \leq 3$. We further perform the Hausman homogeneity test \citep{hausman1978specification} to compare the point estimates of $\widehat{\beta}_{g}(\gamma)$ when $\gamma = 6$, $8$ or $10$ versus that of $\widehat{\beta}_{g}(4)$. Under the null hypothesis $H_0: \gamma=4$, $\widehat{\beta}_{g}(4)$ is efficient under homogeneity restrictions while $\widehat{\beta}_{g}(\gamma)$ remains consistent for $\gamma> 4$. For the BMI-SBP relationship, we did not find statistically significant differences at the $0.05$ level for any of the three pairwise tests. For the BMI-DBP relationship, we find that the point estimates of $\widehat{\beta}_{g}(\gamma)$ for $\gamma=8$ or 10 are statistically different from that of $\widehat{\beta}_{g}(4)$ at the $0.05$ level, which suggests that inferences under $\gamma=4$ may be most reliable. Nonetheless, we suggest reporting results as in Table \ref{tab:ukb} over a range of values for $\gamma$ to reflect uncertainty about its true value \citep{rosenbaum2010design}.
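For completeness, the sketch below (again our own, assuming \texttt{numpy} and \texttt{scipy}; it is not the diagnostic code used for the analysis above) shows one way to compute the kind of first-stage F-test for weak instruments referred to in the preceding paragraph, with \texttt{D} denoting the matrix of demeaned-interaction instruments built as in the earlier sketch.
\begin{verbatim}
import numpy as np
from scipy import stats

def first_stage_F(A, D):
    # Overall F-test of the first-stage OLS regression of the exposure A on
    # the q demeaned-interaction instruments in D (plus an intercept); a
    # simple empirical check of the IV relevance (rank) condition for a
    # given choice of gamma.
    n, q = D.shape
    X = np.column_stack([np.ones(n), D])
    resid = A - X @ np.linalg.lstsq(X, A, rcond=None)[0]
    rss1 = resid @ resid                      # full first-stage model
    rss0 = ((A - A.mean()) ** 2).sum()        # intercept-only model
    F = ((rss0 - rss1) / q) / (rss1 / (n - q - 1))
    return F, stats.f.sf(F, q, n - q - 1)     # statistic and p-value
\end{verbatim}
Large $p$-values, as reported above for $\gamma\leq 3$, indicate that the chosen interaction set is too weakly associated with the exposure for reliable inference.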
\begin{table}[!htbp] \caption{Point estimates and standard errors (in parentheses) of the causal effects of body mass index on systolic blood pressure and diastolic blood pressure in the UK Biobank participants of European descent ($K=10$, $n=292,757$).} \label{tab:ukb} \begin{center} \begin{tabular}{lcccccccc} \hline \rule{0pt}{3ex} & Naive 2SLS & TSHT & sisVIVE & Post-lasso & $\widehat{\beta}_{g}(4)$ & $\widehat{\beta}_{g}(6)$ & $\widehat{\beta}_{g}(8)$ & $\widehat{\beta}_{g}(10)$\\ \hline & \multicolumn{8}{c}{Systolic blood pressure}\\ & 0.447 & 0.447& 0.604 & 0.628 & 0.543 & 0.523 & 0.530 & 0.496 \\ & (0.109) & (0.109) & $-$ &(0.133) & (0.296) & (0.135) & (0.090) & (0.090) \\ \rule{0pt}{3ex} & \multicolumn{8}{c}{Diastolic blood pressure}\\ & 0.254 & 0.296 & 0.386 & 0.459 & 0.649 & 0.410 & 0.363 & 0.349 \\ & (0.059) & (0.059) & $-$ & (0.065) &(0.147) & (0.071) & (0.048) &(0.048)\\ \hline \end{tabular} \end{center} \end{table} { \section{Extensions} \label{sec:extensions} \subsection{Correlated and continuous candidate instruments} \label{sec:corr} The results from Section \ref{sec:indentification} can be generalized to an arbitrary distribution of the candidate instruments. \begin{proposition} \label{prop3} Suppose there are $K$ candidate instruments with joint probability mass function $\Pr(\mathbf{Z}=\mathbf{z})$ and marginal probability mass function $\Pr(Z_k=z_k)$ for the $k$-th instrument, $k=1,...,K$. Then the parameter $\beta^{\ast}$ is identified in the union observed data model $\cup_{\alpha\in\{\ell \subseteq\{1,...,K\}:|\ell|= \gamma\}}\mathcal{M}(\alpha)$ as the unique solution to the unconditional moment restrictions \begin{align} \label{eq:ident3} E\{W(\boldsymbol{Z})\mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})(Y-\beta A)\}=\mathbf{0}, \end{align} where $W(\mathbf{z})=\prod_{k=1}^K \Pr(Z_k=z_k)/\Pr(\mathbf{Z}=\mathbf{z})$, provided that \begin{align} \label{eq:ident4} \biggl\| \frac{\partial E\{W(\boldsymbol{Z})\mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})(Y-\beta A)\}}{\partial \beta}\biggr\| _0=||E\{W(\boldsymbol{Z})\mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})A\}||_0> 0. \end{align} \end{proposition} \noindent The weights $W(\boldsymbol{Z})$ formally account for possible dependence among the candidate instruments and in fact reduce to $1$ for all units under assumption \ref{assp:indp}, recovering the result in Theorem \ref{prop:mr1}. When $\boldsymbol{Z}$ contains continuous components, ${\rm I\!H}_{\gamma}$ is an infinite-dimensional Hilbert space. We can adopt the general strategy proposed in \citet{NEWEY1993419} (see also \citet{tchetgen2010doubly}) by taking a basis system $\{\phi_{j}(\boldsymbol{Z}),j=1,2,...\}$ of functions dense in ${\rm I\!H}_{\gamma}$, such as tensor products of trigonometric, wavelet or polynomial bases. In practice we can perform two-stage least squares using the empirical version of $[\phi_{1}(\boldsymbol{Z}),...,\phi_{J}(\boldsymbol{Z})]^ {\mathrm{\scriptscriptstyle T}} $ as the vector of instruments, for some $J< \infty$.
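As a rough numerical illustration of the weights appearing in Proposition \ref{prop3} (a sketch of our own, assuming \texttt{numpy}; the plug-in frequency estimates and the suggestion to rescale the interaction instruments before the two-stage least squares of Section \ref{sec:g-estimation} are our simplifications rather than the efficient procedure), $W(\mathbf{z})$ can be estimated empirically when the candidate instruments are discrete.
\begin{verbatim}
import numpy as np

def empirical_weights(Z):
    # W(z) = prod_k Pr(Z_k = z_k) / Pr(Z = z), estimated by plug-in
    # frequencies for discrete (here binary) candidate instruments.
    n, _ = Z.shape
    marg = Z.mean(axis=0)                                    # Pr(Z_k = 1)
    p_marg = np.prod(np.where(Z == 1, marg, 1 - marg), axis=1)
    _, inv, counts = np.unique(Z, axis=0, return_inverse=True,
                               return_counts=True)
    p_joint = counts[inv] / n                                # Pr(Z = z)
    return p_marg / p_joint

# One simple (not necessarily efficient) use of the weights: rescale the
# interaction instruments and rerun the same two regression steps, e.g.
#   V = empirical_weights(Z)[:, None] * demeaned_interactions(Z, gamma)
# followed by Step 1 (A on (1, V)) and Step 2 (Y on (1, A_hat)).
\end{verbatim}
Under assumption \ref{assp:indp} the estimated weights are all approximately $1$, so this reweighting approximately recovers the unweighted procedure.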
\subsection{Measured baseline covariates} In many applications the IV assumptions may be credible only after conditioning on a sufficiently rich set of measured baseline covariates $\boldsymbol{X}$, in the sense that within strata of $\boldsymbol{X}$ the candidate instruments may be viewed as being randomized through some natural or quasi-experiment \citep{hernan2006instruments}. {In the Supplementary Material, we provide the conditional analogues of Theorems \ref{prop:mr1} and \ref{prop:mr2} under such settings. When $\boldsymbol{X}$ comprises several variables with finite support, saturated covariate main effects can be specified for the marginal distributions of the candidate instruments given covariates, as well as for the exposure and outcome regressions in the two-stage least squares procedure of Section \ref{sec:g-estimation}. This becomes infeasible in moderately sized samples when $\boldsymbol{X}$ is high dimensional or contains numerous continuous components, as the data are too sparse to conduct stratified estimation \citep{robins1997toward}. One dimension reduction strategy is to specify parametric working models for the covariate effects. Improved robustness against model misspecification is achieved through doubly robust estimators, which remain root-$n$ consistent for the causal effect of interest $\beta^{\ast}$ if either a model for the covariate main effects on the outcome or a model for the marginal distributions of the candidate instruments given covariates is correctly specified, but not necessarily both \citep{robins1994correcting,okui2012doubly,vansteelandt2018improving}. Another approach uses various data-adaptive statistical or machine learning methods to estimate the covariate effects. Recent works by \citet{Chernozhukov2018ddml,chernozhukov2016locally} established rate conditions on such flexible learners under which the resulting debiased machine learning estimators of $\beta^{\ast}$ remain root-$n$ consistent even when the complexity of the nuisance parameter space is no longer limited by classical conditions (e.g. Donsker classes), by exploiting Neyman orthogonality of the influence functions, which translates into reduced sensitivity to local variation in the nuisance parameters. The impact of regularization bias and overfitting in the estimation of the nuisance parameters is further mitigated via cross-fitting. We outline the implementation of both the doubly robust and the cross-fitted debiased machine learning estimators in the Supplementary Material.} \subsection{Partial identification} \label{sec:partial} The proposed approach allows for partial identification of average causal effects with discrete or bounded continuous outcomes and possibly invalid instruments, even in the absence of assumption \ref{assp:alice}. The key observation that allows one to obtain valid bounds for a causal effect of interest, assuming that at least $\gamma$ out of the $K$ candidate instruments are valid, is that, as pointed out in Section \ref{sec:indentification}, this assumption logically implies that all higher-order interactions involving at least $K-\gamma+1$ candidate instruments must in fact satisfy the exclusion restriction, since such interactions can involve at most $K-\gamma$ invalid instruments and hence at least one valid instrument. With these interactions defined, one may then proceed with any of the existing IV bounds in the literature to obtain partial inference for a variety of causal effects \citep{swanson2018partial}.
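For instance, with $K=3$ candidate instruments of which at least $\gamma=2$ are assumed valid, any product of $K-\gamma+1=2$ or more demeaned candidate instruments, namely $(Z_1-\mu^{\ast}_1)(Z_2-\mu^{\ast}_2)$, $(Z_1-\mu^{\ast}_1)(Z_3-\mu^{\ast}_3)$, $(Z_2-\mu^{\ast}_2)(Z_3-\mu^{\ast}_3)$ and $(Z_1-\mu^{\ast}_1)(Z_2-\mu^{\ast}_2)(Z_3-\mu^{\ast}_3)$, involves at most one invalid and at least one valid instrument, and may therefore be supplied as a valid instrument to standard IV bounding procedures.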
The approach also readily extends to binary, count and censored survival outcome settings simply by re-coding the candidate instruments and using the derived interactions as valid instruments in an appropriate IV analysis for the given type of outcome, as available in the causal inference literature \citep{robins1994correcting,abadie2003semiparametric,vansteelandt2003causal,tan2010marginal,tchetgen2015instrumental,wang2018bounded,liu2020identification}. } \section{Discussion} \label{sec: discussion} In closing, we acknowledge certain limitations of the proposed semiparametric method. First, the approach may be vulnerable to weak-instrument bias, which may occur if the exposure is only weakly dependent on the interactions. However, although not pursued here, in principle one can leverage the existing literature on many weak instruments to address this challenge \citep{Newey:2009aa, ye2021genius}. Second, in observational studies the IV assumptions may only hold conditional on a high dimensional set of baseline covariates and the multiple candidate instruments may be dependent, thus requiring the methods briefly introduced in Section \ref{sec:corr}. In such settings, the need to model the joint density of the candidate instruments presents several computational and inferential challenges, which we plan to address in future work. \begin{center} {\bf \Large Supplementary Material for ``Semiparametric Efficient G-estimation with Invalid Instrumental Variables''} {\large Baoluo Sun$^1$, Zhonghua Liu$^2$ and Eric Tchetgen Tchetgen$^3$} \\ {$^1$Department of Statistics and Data Science, National University of Singapore\\ $^2$Department of Statistics and Actuarial Science, University of Hong Kong\\ $^3$Department of Statistics and Data Science, The Wharton School of the University of Pennsylvania\\ } \end{center} \setcounter{equation}{0} \renewcommand{\theequation}{S\arabic{equation}} \setcounter{table}{0} \renewcommand{\thetable}{S\arabic{table}} \setcounter{figure}{0} \renewcommand{\thefigure}{S\arabic{figure}} \setcounter{theorem}{0} \renewcommand{\thetheorem}{S\arabic{theorem}} \section*{Proof of Proposition 1} The union causal model $\cup_{\alpha\in\{\ell \subseteq\{1,...,K\}:|\ell|\geq \gamma\}}\mathcal{C}(\alpha)$ implies the union observed data model $\cup_{\alpha\in\{\ell \subseteq\{1,...,K\}:|\ell|\geq \gamma\}}\mathcal{M}(\alpha)$. Furthermore, $\cup_{\alpha\in\{\ell \subseteq\{1,...,K\}:|\ell|>\gamma\}}\mathcal{M}(\alpha)$ implies $\cup_{\alpha\in\{\ell \subseteq\{1,...,K\}:|\ell|=\gamma\}}\mathcal{M}(\alpha)$, so that $\cup_{\alpha\in\{\ell \subseteq\{1,...,K\}:|\ell|\geq \gamma\}}\mathcal{C}(\alpha)$ implies $\cup_{\alpha\in\{\ell \subseteq\{1,...,K\}:|\ell|=\gamma\}}\mathcal{M}(\alpha)$. \section*{Proof of Theorem 1} To ease notation, let $\bar{Z}_{k}=Z_k-\mu^{\ast}_k$ denote the demeaned $k$-th instrument. Suppose assumption 2 holds for $K$ candidate instruments. For a fixed value $1\leq\gamma\leq K$, consider the following orthogonal direct sum decomposition, \begin{align*} {\rm I\!H}=&\text{span}\left(\left\{\Pi_{s\in\alpha}\bar{Z}_s:\alpha\in\{\ell \subseteq\{1,...,K\}:1\leq |\ell|\leq K-\gamma \}\right\}\right)\\ &\oplus\text{span}\left(\left\{\Pi_{s\in\alpha}\bar{Z}_s:\alpha\in\{\ell \subseteq\{1,...,K\}:K-\gamma+1\leq |\ell|\leq K\}\right\}\right).
\end{align*} Consider any element $h(\boldsymbol{Z})\in \text{span}\left(\left\{\Pi_{s\in\alpha}\bar{Z}_s:\alpha\in\{\ell \subseteq\{1,...,K\}:K-\gamma+1\leq |\ell|\leq K\}\right\}\right)$. For each $\alpha_1\in\left\{\ell \subseteq\{1,...,K\}:K-\gamma+1\leq |\ell|\leq K\}\right\}$ and $\alpha_2\in\left\{\ell \subseteq\{1,...,K\}:|\ell|=\gamma\}\right\}$, $|\alpha_1|>K-\gamma=|\alpha^c_2|$, where $\alpha^c_2=\{k:k\not \in\alpha_2\}$ denotes the complement of the set $\alpha_2$. Therefore the set $\alpha_1\setminus \alpha^c_2=\alpha_1\cap \alpha_2$ is non-empty and \begin{align*} E\{ \Pi_{s\in \alpha_1} \bar{Z}_s \rvert \boldsymbol{Z}_{-\alpha_2}\}&=E\{ \Pi_{s\in \alpha_1\cap \alpha_2} \bar{Z}_s \Pi_{s\in \alpha_1\cap \alpha_2^c} \bar{Z}_s \rvert \boldsymbol{Z}_{-\alpha_2}\}=\Pi_{s\in \alpha_1\cap \alpha_2^c} \bar{Z}_s E\{ \Pi_{s\in \alpha_1\cap \alpha_2} \bar{Z}_s \}=0, \end{align*} so that $h(\boldsymbol{Z})\in {\rm I\!H}_{\gamma}$. On the other hand, any element $\upsilon\in{\rm I\!H}_{\gamma}$ can be expressed as $\upsilon=\upsilon_1+\upsilon_2$ with $\upsilon_1\in \text{span}\left(\left\{\Pi_{s\in\alpha}\bar{Z}_s:\alpha\in\{\ell \subseteq\{1,...,K\}:1\leq |\ell|\leq K-\gamma \}\right\}\right)$ and $$\upsilon_2\in \text{span}\left(\left\{\Pi_{s\in\alpha}\bar{Z}_s:\alpha\in\{\ell \subseteq\{1,...,K\}:K-\gamma+1\leq |\ell|\leq K\}\right\}\right).$$ For each $\alpha_1\in\left\{\ell \subseteq\{1,...,K\}:1\leq |\ell|\leq K-\gamma\}\right\}$, $|\alpha_1|\leq K-\gamma$. Therefore there exists a value of $\alpha_2\in\left\{\ell \subseteq\{1,...,K\}:|\ell|=\gamma\}\right\}$ such that $\alpha_1\subseteq \alpha_2^c$ and \begin{align*} E\{ \Pi_{s\in \alpha_1} \bar{Z}_s\rvert \boldsymbol{Z}_{-\alpha_2}\}= \Pi_{s\in \alpha_1} \bar{Z}_s\neq 0, \end{align*} almost surely. This implies that $\upsilon_1=0$ and $\upsilon=\upsilon_2$. This proves the first part of the theorem and implies that the $d_{\gamma}$ (non-redundant) unconditional moment restrictions (6) hold in the union observed data model $\cup_{\alpha\in\{\ell \subseteq\{1,...,K\}:|\ell|= \gamma\}}\mathcal{M}(\alpha)$. A necessary and sufficient rank condition to ensure uniqueness of the solution to (6) is given by (7). \section*{Proof of Theorem 2} We follow closely the semiparametric theory of \cite{newey1990semiparametric}, \cite{Bickel:1993} and \cite{tsiatis2007semiparametric} in deriving the influence function of any regular and asymptotically linear estimator $\widehat{\beta}(\gamma)$ of $\beta^{\ast}$ defined by the moment restrictions (6). Consider a parametric path $t$ for the joint distribution of $O=(Y,A,\boldsymbol{Z})$ restricted by assumption 2, with joint density function given by $$f_t(y,a,\mathbf{z})=f_t(y,a|\boldsymbol{Z}=\mathbf{z})\Pi_{s=1}^K{\Pr}_t(Z_s=z_s),$$ and $f_0(y,a,\mathbf{z})=f(y,a,\mathbf{z})$. The resulting score function is given by $$S_t(y,a,\mathbf{z})=S_t(y,a|\mathbf{z})+\sum_{s=1}^KS_t(z_s).$$ The moment restrictions in (6) are equivalent to the requirement that the moment restriction \begin{align} \label{eq:moment} \theta^{\mathrm{\scriptscriptstyle T}} E\{\mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})(Y-\beta A)\}={0}, \end{align} holds for any $d_{\gamma}$-dimensional real vector $\theta$.
If the estimator $\widehat{\beta}(\gamma)$ is regular and asymptotically linear with influence function $\varphi(O)$, then $\partial \beta_t/\partial t\mid_{t=0}=E\{\varphi(O)S(O)\}$ \citep[Theorem 3.2]{tsiatis2007semiparametric}. Differentiating under the integral of (\ref{eq:moment}) yields \begin{align*} \frac{\partial \beta_t}{\partial t}\biggr\rvert_{t=0}=-Q^{-1}\left[\theta^{\mathrm{\scriptscriptstyle T}} E\{\mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})(Y-\beta^{\ast} A)S(O)\}+\frac{\partial \theta^{\mathrm{\scriptscriptstyle T}} E\{\mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}_t)(Y-\beta^{\ast} A)\}}{\partial t}\biggr\rvert_{t=0}\right], \end{align*} where $Q=-\theta^{\mathrm{\scriptscriptstyle T}} E\{\mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})A\}$ and $\mu_t=\{E_t(Z_1),...,E_t(Z_K)\}^{\mathrm{\scriptscriptstyle T}}$. The $j$-th entry in $\mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}_t)(Y-\beta^{\ast} A)$ is the function $\Pi_{s\in\alpha(j)}\{Z_s-E_t(Z_s)\}(Y-\beta^{\ast} A)$, with corresponding path-wise derivative \begin{align*} &\frac{\partial [\Pi_{s\in\alpha(j)}\{Z_s-E_t(Z_s)\}(Y-\beta^{\ast}A)]}{\partial t}\biggr\rvert_{t=0}\\ &=\sum_{1\leq r \leq |\alpha(j)|}\left[\sum_{\alpha\in\{\ell\subseteq\alpha(j):|\ell|=r\}}(-1)^r \frac{\partial E\{\Pi_{s\not\in\alpha,s\in\alpha(j)}Z_sE_t(\Pi_{s\in\alpha}Z_s)(Y-\beta^{\ast} A)\}}{\partial t} \biggr\rvert_{t=0}\right]\\ &=\sum_{1\leq r \leq |\alpha(j)|}\left[\sum_{\alpha\in\{\ell\subseteq\alpha(j):|\ell|=r\}}(-1)^r E[\Pi_{s\not\in\alpha,s\in\alpha(j)}Z_sE\{\Pi_{s\in\alpha}Z_s S(\boldsymbol{Z}_{\alpha}\mid \boldsymbol{Z}_{-\alpha})\mid \boldsymbol{Z}_{-\alpha}\}(Y-\beta^{\ast} A)]\right]\\ &=\sum_{1\leq r \leq |\alpha(j)|}\left[\sum_{\alpha\in\{\ell\subseteq\alpha(j):|\ell|=r\}}(-1)^r E\{\Pi_{s\in\alpha(j)}Z_s(Y-\beta^{\ast} A\mid\boldsymbol{Z}_{-\alpha} )S(\boldsymbol{Z}_{\alpha}\mid \boldsymbol{Z}_{-\alpha})\}\right]\\ &=\sum_{1\leq r \leq |\alpha(j)|}\left[\sum_{\alpha\in\{\ell\subseteq\alpha(j):|\ell|=r\}}(-1)^r E\{\Pi_{s\in\alpha(j)}(Z_s-\mu^{\ast}_s)(Y-\beta^{\ast} A\mid\boldsymbol{Z}_{-\alpha} )S(O)\}\right]. \end{align*} Therefore $$ \frac{\partial \beta_t}{\partial t}\biggr\rvert_{t=0}=-Q^{-1}\theta^{\mathrm{\scriptscriptstyle T}} E[\mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})\odot \{(Y-\beta^{\ast} A)+\mathbf{A}_{\gamma}(\boldsymbol{Z};\beta^{\ast})\}S(O)],$$ and the class of influence functions is given by $$\mathcal{F}=\left\{ E\{\theta^{\mathrm{\scriptscriptstyle T}} \mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})A\}^{-1}\theta^{\mathrm{\scriptscriptstyle T}} \mathbf{G}_{\gamma}(O;\beta^{\ast},\boldsymbol{\mu}^{\ast}):\quad \theta\in{\rm I\!R}^{d_{\gamma}}, E\{\theta^{\mathrm{\scriptscriptstyle T}} \mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})A\}\neq 0 \right\},$$ which proves the first part of Theorem 2.
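For concreteness, and consistent with the expansion above, the entry of the augmentation vector $\mathbf{A}_{\gamma}(\boldsymbol{Z};\beta^{\ast})$ (defined in the main text) corresponding to $\alpha(j)=\{1,2\}$ is $$-E(Y-\beta^{\ast} A\mid \boldsymbol{Z}_{-\{1\}})-E(Y-\beta^{\ast} A\mid \boldsymbol{Z}_{-\{2\}})+E(Y-\beta^{\ast} A\mid \boldsymbol{Z}_{-\{1,2\}}),$$ which makes explicit the inclusion-exclusion structure over the subsets $\alpha\subseteq\alpha(j)$ appearing in the path-wise derivative.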
By Theorem 5.3 of \cite{Newey1994_handbook}, the optimal linear combination in terms of asymptotic variance is indexed by $\theta_{opt}$ which satisfies the generalized information equality $$E\{\theta^{\mathrm{\scriptscriptstyle T}} \mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})A\}=E\{\theta^{\mathrm{\scriptscriptstyle T}}\mathbf{G}^{\otimes 2 }_{\gamma}(O;\beta^{\ast},\boldsymbol{\mu}^{\ast})\theta_{opt}\},\quad \text{ for all }\theta\in{\rm I\!R}^{d_{\gamma}}.$$ Equivalently, $\theta^{\mathrm{\scriptscriptstyle T}}E\{\mathbf{G}^{\otimes 2 }_{\gamma}(O;\beta^{\ast},\boldsymbol{\mu}^{\ast})\theta_{opt}-\mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})A\}=0 \text{ for all } \theta\in{\rm I\!R}^{d_{\gamma}}.$ In particular, taking $\theta=E\{\mathbf{G}^{\otimes 2 }_{\gamma}(O;\beta^{\ast},\boldsymbol{\mu}^{\ast})\theta_{opt}-\mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})A\}$ yields $$E\{\mathbf{G}^{\otimes 2 }_{\gamma}(O;\beta^{\ast},\boldsymbol{\mu}^{\ast})\theta_{opt}-\mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})A\}^{\mathrm{\scriptscriptstyle T}} E\{\mathbf{G}^{\otimes 2 }_{\gamma}(O;\beta^{\ast},\boldsymbol{\mu}^{\ast})\theta_{opt}-\mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})A\}=0,$$ which implies that $$\theta_{opt}=E\{\mathbf{G}^{\otimes 2}_{\gamma}(O;\beta^{\ast},\boldsymbol{\mu}^{\ast})\}^{-1}E\{\mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})A\}.$$ The asymptotic variance lower bound for all regular estimators of $\beta^{\ast}$ with influence functions in $\mathcal{F}$ is therefore given by \begin{align*} \mathcal{V}_{\gamma}&=E\{\theta^{\mathrm{\scriptscriptstyle T}}_{opt}\mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})A\}^{-1}\theta^{\mathrm{\scriptscriptstyle T}}_{opt} E\{\mathbf{G}^{\otimes 2}_{\gamma}(O;\beta^{\ast},\boldsymbol{\mu}^{\ast})\}\theta_{opt}E\{\theta^{\mathrm{\scriptscriptstyle T}}_{opt}\mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})A\}^{-1}\\ &=[E\{A\mathbf{D}^{\mathrm{\scriptscriptstyle T}}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})\} E\{\mathbf{G}^{\otimes 2}_{\gamma}(O;\beta^{\ast},\boldsymbol{\mu}^{\ast})\}^{-1}E\{\mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})A\}]^{-1}.
\end{align*} \section*{Proof of Proposition 2} It follows from the proof of Theorem 1 that the $d_{\gamma}$ unconditional moment restrictions \begin{align*} E\{W(\boldsymbol{Z})\mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})(Y-\beta A)\}=\tilde{E}\{\mathbf{D}_{\gamma}(\boldsymbol{Z};\boldsymbol{\mu}^{\ast})(Y-\beta A)\}=\mathbf{0}, \end{align*} hold in the union observed data model $\cup_{\alpha\in\{\ell \subseteq\{1,...,K\}:|\ell|= \gamma\}}\mathcal{M}(\alpha)$, where the expectation $\tilde{E}(\cdot)$ is taken with respect to the joint probability mass function $\Pi_{s=1}^K \Pr(Z_s=z_s)$. \section*{Incorporating Measured Covariates} Suppose that $(O_1,...,O_n)$ are independent, identically distributed observations of $O=(Y,A,\boldsymbol{Z},\boldsymbol{X})$ from a target population, where $Y$ is an outcome of interest, $A$ is an exposure, $\boldsymbol{Z}=(Z_1,...,Z_K)$ comprises $K$ candidate instruments and $\boldsymbol{X}=(X_1,...,X_p)$ comprises $p$ measured baseline covariates with support in $\mathcal{X}$. We assume that the following additive linear, constant effects model holds with $K$ candidate instruments. \\ {\it Assumption $1^{\prime}$. For two possible values of the exposure $a^{\prime}$, $a$ and a possible value of the instruments $\mathbf{z}\in\mathcal{Z}$, $$Y{(a^{\prime},\mathbf{z})}-Y{(a,\mathbf{0})}=\beta^{\ast} (a^{\prime}-a)+\psi(\mathbf{z},\mathbf{x}),$$ which allows for arbitrary interactions among the direct effects of the instruments within strata defined by the value of the baseline covariates $\mathbf{x}\in \mathcal{X}$.}\\ For any index set $\alpha\subseteq \{1,...,K\}$, we say that the candidate instruments $\boldsymbol{Z}_{\alpha}$ are valid while the remaining instruments $\boldsymbol{Z}_{-\alpha}$ are invalid if the causal model $\mathcal{C}^{\prime}(\alpha)$ defined by $E\{Y(0,0)|\boldsymbol{Z},\boldsymbol{X}=\mathbf{x}\}=E\{Y(0,0)|\boldsymbol{Z}_{-\alpha},\boldsymbol{X}=\mathbf{x}\}$ and $\psi(\boldsymbol{Z},\boldsymbol{X}=\mathbf{x})=\psi(\boldsymbol{Z}_{-\alpha},\boldsymbol{X}=\mathbf{x})$ holds almost surely for any $\mathbf{x}\in\mathcal{X}$, which implies the observed data model $\mathcal{M}^{\prime}(\alpha)$ defined by the conditional mean independence restriction $$E\{Y-\beta^{\ast}A|\boldsymbol{Z},\boldsymbol{X}=\mathbf{x}\}=E\{Y-\beta^{\ast}A|\boldsymbol{Z}_{-\alpha},\boldsymbol{X}=\mathbf{x}\},$$ for any $\mathbf{x}\in\mathcal{X}$. We say that { at least } $\gamma$ out of the $K$ candidate instruments are valid if the union causal model $\cup_{\alpha\in\{\ell \subseteq\{1,...,K\}:|\ell|\geq \gamma\}}\mathcal{C}^{\prime}(\alpha)$ holds, which implies the union observed data model $\cup_{\alpha\in\{\ell \subseteq\{1,...,K\}:|\ell|= \gamma\}}\mathcal{M}^{\prime}(\alpha)$. In addition, we assume conditional independence of the candidate instruments.\\ {\it Assumption $2^{\prime}$.
The conditional distribution of the candidate instruments is non-degenerate and factorizes as $\Pr(\boldsymbol{Z}\leq\mathbf{z}\mid \boldsymbol{X}=\mathbf{x})=\prod_{s=1}^K \Pr(Z_s\leq z_s\mid \boldsymbol{X}=\mathbf{x})$ for any $\mathbf{z}\in\mathcal{Z}$ and $\mathbf{x}\in\mathcal{X}$.}\\ The proof of the following result is similar to the proofs of Theorems 1 and 2.\\ {\it Theorem $3$. Suppose assumption $2^{\prime}$ holds with $K$ candidate instruments and without loss of generality consider the enumeration $\{\alpha(1),...,\alpha(d_{\gamma})\}$ of the elements in $\left\{\ell \subseteq\{1,...,K\}:K-\gamma+1\leq |\ell|\leq K\}\right\}$ in some fixed order. Let $\mathbf{D}_{\gamma}(\boldsymbol{Z},\boldsymbol{X};\boldsymbol{\mu}^{\ast})=[\Pi_{s\in\alpha(1)}\{Z_s-\mu^{\ast}_s(\boldsymbol{X})\},...,\Pi_{s\in\alpha(d_{\gamma})}\{Z_s-\mu^{\ast}_s(\boldsymbol{X})\}]^{\mathrm{\scriptscriptstyle T}}$ where $\mu^{\ast}_s(\boldsymbol{X})=E(Z_s\mid \boldsymbol{X})$ for $s=1,...,K$. Then the causal parameter of interest $\beta^{\ast}$ is identified in the union observed data model $\cup_{\alpha\in\{\ell \subseteq\{1,...,K\}:|\ell|= \gamma\}}\mathcal{M}^{\prime}(\alpha)$ as the unique solution to the $d_{\gamma}$ conditional moment restrictions \begin{align} \label{eq:identx} E\{\mathbf{D}_{\gamma}(\boldsymbol{Z},\boldsymbol{X};\boldsymbol{\mu}^{\ast})(Y-\beta A)\mid \boldsymbol{X}\}=\mathbf{0}, \end{align} provided that \begin{align} \label{eq:ident2x} \biggr|\biggr| \frac{\partial E\{\mathbf{D}_{\gamma}(\boldsymbol{Z},\boldsymbol{X};\boldsymbol{\mu}^{\ast})(Y-\beta A)\mid \boldsymbol{X}=\mathbf{x}\}}{\partial \beta}\biggr|\biggr|_0=||E\{\mathbf{D}_{\gamma}(\boldsymbol{Z},\boldsymbol{X};\boldsymbol{\mu}^{\ast})A\mid \boldsymbol{X}=\mathbf{x}\}||_0> 0, \end{align} for at least one value $\mathbf{x}\in\mathcal{X}$.
Furthermore, any regular and asymptotically linear estimator $\widehat{\beta}(\gamma)$ of $\beta^{\ast}$ defined by (\ref{eq:identx}) under assumption $2^\prime$ and condition (\ref{eq:ident2x}) must satisfy the following asymptotic expansion: \begin{align*} {n}^{1/2} \{ \widehat{\beta}(\gamma)-\beta^{\ast}\} = E\{\theta^{\mathrm{\scriptscriptstyle T}}(\boldsymbol{X})\mathbf{D}_{\gamma}(\boldsymbol{Z},\boldsymbol{X};\boldsymbol{\mu}^{\ast})A\}^{-1} {n}^{1/2}\widehat{E}_n \{\theta^{\mathrm{\scriptscriptstyle T}}(\boldsymbol{X})\mathbf{G}_{\gamma}(O;\beta^{\ast},\boldsymbol{\mu}^{\ast})\}+o_{p}\left( 1\right), \end{align*} for any $d_{\gamma}$-dimensional vector function $\theta(\boldsymbol{X})$ that satisfies $E\{\theta^{\mathrm{\scriptscriptstyle T}}(\boldsymbol{X})\mathbf{D}_{\gamma}(\boldsymbol{Z},\boldsymbol{X};\boldsymbol{\mu}^{\ast})A\}\neq 0$, where $\mathbf{G}_{\gamma}(O;\beta,\boldsymbol{\mu})=\mathbf{D}_{\gamma}(\boldsymbol{Z},\boldsymbol{X};\boldsymbol{\mu})\odot \{(Y-\beta A)+\mathbf{A}_{\gamma}(\boldsymbol{Z},\boldsymbol{X};\beta)\}$ and $\mathbf{A}_{\gamma}(\boldsymbol{Z},\boldsymbol{X};\beta)$ is a $d_{\gamma}$-dimensional augmentation vector with the $j$-th entry equal to \begin{align*} \sum_{1\leq r \leq |\alpha(j)|}\left\{\sum_{\alpha\in\{\ell\subseteq\alpha(j):|\ell|=r\}}(-1)^{r} E(Y-\beta A\mid \boldsymbol{Z}_{-\alpha},\boldsymbol{X})\right\}. \end{align*} The efficient regular and asymptotically linear estimator admits the above expansion with $\theta_{opt}(\boldsymbol{X})=E\{\mathbf{G}^{\otimes 2}_{\gamma}(O;\beta^{\ast},\boldsymbol{\mu}^{\ast})\mid \boldsymbol{X}\}^{-1}E\{\mathbf{D}_{\gamma}(\boldsymbol{Z},\boldsymbol{X};\boldsymbol{\mu}^{\ast})A\mid \boldsymbol{X}\}$, so that the asymptotic variance lower bound for all regular estimators of $\beta^{\ast}$ is given by $$\mathcal{V}_{\gamma}=[E\{A\mathbf{D}^{\mathrm{\scriptscriptstyle T}}_{\gamma}(\boldsymbol{Z},\boldsymbol{X};\boldsymbol{\mu}^{\ast})\mid \boldsymbol{X}\} E\{\mathbf{G}^{\otimes 2}_{\gamma}(O;\beta^{\ast},\boldsymbol{\mu}^{\ast})\mid \boldsymbol{X}\}^{-1}E\{\mathbf{D}_{\gamma}(\boldsymbol{Z},\boldsymbol{X};\boldsymbol{\mu}^{\ast})A\mid \boldsymbol{X}\}]^{-1}.$$ } \section*{Doubly robust and debiased machine learning estimation} \subsection*{Doubly robust estimator} When $\boldsymbol{X}$ is of moderate to high dimension relative to the sample size, one dimension reduction strategy is to specify working models $\{\mu_s(\boldsymbol{X};\eta_s): s=1,...,K\}$ and $\mathbf{A}_{\gamma}(\boldsymbol{Z},\boldsymbol{X};\beta,\psi)$ indexed by finite-dimensional parameters $\eta=(\eta^{\mathrm{\scriptscriptstyle T}}_1,...,\eta^{\mathrm{\scriptscriptstyle T}}_K)^{\mathrm{\scriptscriptstyle T}}$ and $\psi$ respectively, which constrain the nuisance parameter space.
A doubly robust estimator $\widehat{\beta}_{dr}(\gamma)$ is obtained as follows.\\ {\it Step 1.} Solve the score equation $$\widehat{E}_n\left[\left\{{Z_s}/{\mu_s(\boldsymbol{X};\eta_s)}-{(1-Z_s)}/{(1-\mu_s(\boldsymbol{X};\eta_s))}\right\}{\partial\mu_s(\boldsymbol{X};\eta_s) }/{\partial \eta_s}\right]=\mathbf{0},$$ to obtain $\widehat{\eta}_s$ for each $s=1,...,K$. Let $\widehat{\boldsymbol{\mu}}(\boldsymbol{X})=\{\mu_s(\boldsymbol{X};\widehat{\eta}_s):s=1,...,K\}$. \\ {\it Step 2.} For a user-specified positive semi-definite $d_{\gamma}\times d_{\gamma}$ weighting matrix $\Omega$, \begin{align*} \widehat{\beta}_{dr}(\gamma)=\arg\min_{\beta} \widehat{E}_n\{\mathbf{G}^{\mathrm{\scriptscriptstyle T}}_{\gamma}(O;\beta,\widehat{\boldsymbol{\mu}},\widehat{\psi}(\beta))\}\Omega \widehat{E}_n\{\mathbf{G}_{\gamma}(O;\beta,\widehat{\boldsymbol{\mu}},\widehat{\psi}(\beta))\}, \end{align*} where $\mathbf{G}_{\gamma}(O;\beta,\widehat{\boldsymbol{\mu}},\widehat{\psi}(\beta))=\mathbf{D}_{\gamma}(\boldsymbol{Z},\boldsymbol{X};\widehat{\boldsymbol{\mu}})\odot \{Y-\beta A +\mathbf{A}_{\gamma}(\boldsymbol{Z},\boldsymbol{X};\beta,\widehat{\psi}(\beta))\}$ and, for each fixed value of $\beta$, $\widehat{\psi}(\beta)$ solves $\widehat{E}_n[\{Y-\beta A+\mathbf{A}_{\gamma}(\boldsymbol{Z},\boldsymbol{X};\beta,\psi) \}\partial \mathbf{A}_{\gamma}(\boldsymbol{Z},\boldsymbol{X};\beta,\psi)/\partial \psi]=\mathbf{0}$.\\ Asymptotic normality of $\widehat{\beta}_{dr}(\gamma)$ holds by invoking the $n^{-1/2}$ asymptotic expansion for $\widehat{\beta}_{dr}(\gamma)-\beta^{\dag}$ allowing for model misspecification of the nuisance parameters \citep{white1982maximum}, where $\beta^{\dag}$ denotes the probability limit of $\widehat{\beta}_{dr}(\gamma)$. In addition, let $\boldsymbol{\mu}^{\dag}(\boldsymbol{X})$ and $\mathbf{A}^{\dag}_{\gamma}(\boldsymbol{Z},\boldsymbol{X};\beta^{\dag})$ denote the probability limits of $\widehat{\boldsymbol{\mu}}(\boldsymbol{X})$ and $\mathbf{A}_{\gamma}(\boldsymbol{Z},\boldsymbol{X};\beta^{\dag},\widehat{\psi}(\beta^{\dag}))$ respectively. It follows from Theorem 4 that $\beta^{\dag}=\beta^{\ast}$ if either the model $\{\mu_s(\boldsymbol{X};\eta_s): s=1,...,K\}$ or $\mathbf{A}_{\gamma}(\boldsymbol{Z},\boldsymbol{X};\beta,\psi)$ is correctly specified, but not necessarily both. \\ { {\it Theorem 4. The $d_{\gamma}$ moment conditions $${E}[\mathbf{D}_{\gamma}(\boldsymbol{Z},\boldsymbol{X};{\boldsymbol{\mu}}^{\dag})\odot \{Y-\beta^{\ast} A +\mathbf{A}^{\dag}_{\gamma}(\boldsymbol{Z},\boldsymbol{X};\beta^{\ast})\}]=\mathbf{0},$$ hold if at least one of the following holds: (i) ${\boldsymbol{\mu}}^{\dag}={\boldsymbol{\mu}}^{\ast}$ or (ii) $\mathbf{A}^{\dag}_{\gamma}(\boldsymbol{Z},\boldsymbol{X};\beta)=\mathbf{A}_{\gamma}(\boldsymbol{Z},\boldsymbol{X};\beta)$.
}\\ {\it Proof: Let $$\mathbf{A}^{\dag}_{\gamma,j}(\boldsymbol{Z},\boldsymbol{X};\beta)=\sum_{1\leq r \leq |\alpha(j)|}\left\{\sum_{\alpha\in\{\ell\subseteq\alpha(j):|\ell|=r\}}(-1)^{r} E^{\dag}(Y-\beta A\mid \boldsymbol{Z}_{-\alpha},\boldsymbol{X})\right\},$$ denote the $j$-th entry of $\mathbf{A}^{\dag}_{\gamma}(\boldsymbol{Z},\boldsymbol{X};\beta)$. If ${\boldsymbol{\mu}}^{\dag}={\boldsymbol{\mu}}^{\ast}$, then the $j$-th moment condition \begin{align*} E&[\Pi_{s\in\alpha(j)}\{Z_s-\mu^{\ast}_s(\boldsymbol{X})\}\{Y-\beta^{\ast}A+\mathbf{A}^{\dag}_{\gamma,j}(\boldsymbol{Z},\boldsymbol{X};\beta^{\ast})\}] \\ &=E[\Pi_{s\in\alpha(j)}\{Z_s-\mu^{\ast}_s(\boldsymbol{X})\}\{Y-\beta^{\ast}A+\mathbf{A}^{\dag}_{\gamma,j}(\boldsymbol{Z},\boldsymbol{X};\beta^{\ast})-\mathbf{A}^{\dag}_{\gamma,j}(\boldsymbol{Z},\boldsymbol{X};\beta^{\ast})+\mathbf{A}_{\gamma,j}(\boldsymbol{Z},\boldsymbol{X};\beta^{\ast})\}]\\ &=0, \end{align*} where the second equality arises from the fact that for any $1\leq r \leq |\alpha(j)|$ and $\alpha\in\{\ell\subseteq\alpha(j):|\ell|=r\}$, there is at least one instrument indexed by $\zeta\in\alpha\subseteq \alpha(j)$ such that \begin{align*} E&[\{Z_\zeta-\mu^{\ast}_\zeta(\boldsymbol{X})\}\Pi_{s\in\alpha(j),s\neq \zeta}\{Z_s-\mu^{\ast}_s(\boldsymbol{X})\} E^{\dag}(Y-\beta A\mid \boldsymbol{Z}_{-\alpha},\boldsymbol{X})]\\ &=E[\{Z_\zeta-\mu^{\ast}_\zeta(\boldsymbol{X})\}]E[\Pi_{s\in\alpha(j),s\neq \zeta}\{Z_s-\mu^{\ast}_s(\boldsymbol{X})\} E^{\dag}(Y-\beta A\mid \boldsymbol{Z}_{-\alpha},\boldsymbol{X})]=0, \end{align*} and similarly $E[\{Z_\zeta-\mu^{\ast}_\zeta(\boldsymbol{X})\}\Pi_{s\in\alpha(j),s\neq \zeta}\{Z_s-\mu^{\ast}_s(\boldsymbol{X})\} E(Y-\beta A\mid \boldsymbol{Z}_{-\alpha},\boldsymbol{X})]=0$.
On the other hand, if $\mathbf{A}^{\dag}_{\gamma}(\boldsymbol{Z},\boldsymbol{X};\beta)=\mathbf{A}_{\gamma}(\boldsymbol{Z},\boldsymbol{X};\beta)$ then the $j$-th moment condition \begin{align*} E&[\Pi_{s\in\alpha(j)}\{Z_s-\mu^{\dag}_s(\boldsymbol{X})\}\{Y-\beta^{\ast}A+\mathbf{A}_{\gamma,j}(\boldsymbol{Z},\boldsymbol{X};\beta^{\ast})\}] \\ &=E[\Pi_{s\in\alpha(j)}\{Z_s-\mu^{\ast}_s(\boldsymbol{X})\}\{Y-\beta^{\ast}A+\mathbf{A}_{\gamma,j}(\boldsymbol{Z},\boldsymbol{X};\beta^{\ast})\}] =0, \end{align*} where the second equality follows from the fact that for any $\upsilon\subseteq\alpha(j)$ with cardinality $|\upsilon|=\kappa$, \begin{align*} E&[\{\Pi_{s\not \in\upsilon,s\in\alpha(j)}Z_s\}\{\Pi_{r\in\upsilon}\mu^{\dag}_r(\boldsymbol{X})\}\{Y-\beta^{\ast}A+\mathbf{A}_{\gamma,j}(\boldsymbol{Z},\boldsymbol{X};\beta^{\ast})\}]\\ &=E[\{\Pi_{s\not \in\upsilon,s\in\alpha(j)}Z_s\}\{\Pi_{r\in\upsilon}\mu^{\dag}_r(\boldsymbol{X})\}E\{Y-\beta^{\ast}A+\mathbf{A}_{\gamma,j}(\boldsymbol{Z},\boldsymbol{X};\beta^{\ast})\mid \boldsymbol{Z}_{-\upsilon},\boldsymbol{X}\}]\\ &=E\Biggr[\{\Pi_{s\not \in\upsilon,s\in\alpha(j)}Z_s\}\{\Pi_{r\in\upsilon}\mu^{\dag}_r(\boldsymbol{X})\}\Biggr\{E(Y-\beta A\mid \boldsymbol{Z}_{-\upsilon},\boldsymbol{X})\\ &\phantom{===}+\sum_{1\leq r \leq |\alpha(j)|}\sum_{\alpha\in\{\ell\subseteq\alpha(j):|\ell|=r\}}(-1)^{r} E(Y-\beta A\mid \boldsymbol{Z}_{-\alpha\cup\upsilon},\boldsymbol{X})\Biggr\}\Biggr]\\ &=E\Biggr[\{\Pi_{s\not \in\upsilon,s\in\alpha(j)}Z_s\}\{\Pi_{r\in\upsilon}\mu^{\dag}_r(\boldsymbol{X})\}\Biggr\{\sum_{t=0}^{\kappa}(-1)^{t}{\kappa \choose t}E(Y-\beta A\mid \boldsymbol{Z}_{-\upsilon},\boldsymbol{X})\\ &\phantom{===}+\sum_{r>\kappa}\sum_{\{\alpha\cup \upsilon\} \in \{\ell\subseteq\alpha(j):|\ell|=r\}}\sum_{t=0}^{\kappa}(-1)^t{\kappa \choose t}E(Y-\beta A\mid \boldsymbol{Z}_{-\alpha\cup\upsilon},\boldsymbol{X})\Biggr\}\Biggr]\\ &=0. \end{align*} } } \subsection*{Debiased machine learning estimator} The cross-fitted debiased machine learning approach uses various flexible and data-adaptive methods to estimate the functions $\{\mu^{\ast}_s(\boldsymbol{X}): s=1,...,K\}$ and $\{\mathbf{m}_y(\boldsymbol{Z},\boldsymbol{X}),\mathbf{m}_a(\boldsymbol{Z},\boldsymbol{X})\}$, where $\mathbf{m}_y(\boldsymbol{Z},\boldsymbol{X})$, $\mathbf{m}_a(\boldsymbol{Z},\boldsymbol{X})$ are $d_{\gamma}$-dimensional vectors with the $j$-th entries equal to \begin{align*} \sum_{1\leq r \leq |\alpha(j)|}\left\{\sum_{\alpha\in\{\ell\subseteq\alpha(j):|\ell|=r\}}(-1)^{r} E(Y\mid \boldsymbol{Z}_{-\alpha},\boldsymbol{X})\right\};\\ \sum_{1\leq r \leq |\alpha(j)|}\left\{\sum_{\alpha\in\{\ell\subseteq\alpha(j):|\ell|=r\}}(-1)^{r} E(A\mid \boldsymbol{Z}_{-\alpha},\boldsymbol{X})\right\}, \end{align*} respectively. Hence, $\mathbf{A}_{\gamma}(\boldsymbol{Z},\boldsymbol{X};\beta)=\mathbf{m}_y(\boldsymbol{Z},\boldsymbol{X})- \beta \mathbf{m}_a(\boldsymbol{Z},\boldsymbol{X})$. Let $(I_j)_{j=1}^{J}$ be a $J$-fold random partition of the observation indices $\{1,2,...,n\}$. A debiased machine learning estimator $\widehat{\beta}_{dml}(\gamma)$ is obtained as follows.
\\ {\it Step 1.} For each $j=1,2,...,J$, obtain $\widehat{\boldsymbol{\mu}}^{(j)}$, $\widehat{\mathbf{m}}^{(j)}_y$ and $\widehat{\mathbf{m}}^{(j)}_a$ by machine learning (e.g. with probability random forests for binary responses and regression random forests for continuous responses) of the functions $\boldsymbol{\mu}^{\ast}$, $\mathbf{m}_y(\boldsymbol{Z},\boldsymbol{X})$ and $\mathbf{m}_a(\boldsymbol{Z},\boldsymbol{X})$ respectively, using all observations not in $I_j$.\\ {\it Step 2.} For a user-specified positive semi-definite $d_{\gamma}\times d_{\gamma}$ weighting matrix $\Omega$, the cross-fitted debiased machine learning estimator is given by $$\widehat{\beta}_{dml}(\gamma)=\arg\min_{\beta}\widehat{\mathbf{G}}^{\mathrm{\scriptscriptstyle T}}_{\gamma}(O;\beta)\Omega\widehat{\mathbf{G}}_{\gamma}(O;\beta),$$ where $\widehat{\mathbf{G}}_{\gamma}(O;\beta)=n^{-1}\sum_{j=1}^J\sum_{i\in I_j} \widehat{\mathbf{G}}^{(j)}_{\gamma}(O_i;\beta)$ and $\widehat{\mathbf{G}}^{(j)}_{\gamma}(O;\beta)=\mathbf{D}_{\gamma}(\boldsymbol{Z},\boldsymbol{X};\widehat{\boldsymbol{\mu}}^{(j)})\odot [Y+\widehat{\mathbf{m}}^{(j)}_y(\boldsymbol{Z},\boldsymbol{X})-\beta\{A+\widehat{\mathbf{m}}^{(j)}_a(\boldsymbol{Z},\boldsymbol{X})\}]$. Under general regularity conditions established by \citet{Chernozhukov2018ddml,chernozhukov2016locally}, $\widehat{\beta}_{dml}(\gamma)$ is a root-$n$ consistent estimator of $\beta^{\ast}$ if all the nuisance parameters are estimated with mean-squared error rates diminishing faster than $n^{-1/4}$. Such rates are achievable for many highly data-adaptive machine learning methods, including LASSO, gradient boosting trees, random forests or ensembles of these methods. \section*{Additional Monte Carlo results} \subsection*{Varying Degree of Exposure Interactions} \noindent It is of interest to investigate the performance of the estimators under settings where the exposure model is sparse. In addition to the main effects, we randomly select $\Delta=30$\% or 60\% of both the lower and higher order interactions of order $\geq ||\pi||_0+1$ to have nonzero coefficients in $\tau$. However, this sampling scheme may render the plurality rule invalid even when $\pi=(0,0,0.1,0.2,0.3)^{\mathrm{\scriptscriptstyle T}}$, due to potential asymmetric interactions in the exposure model. Therefore, we perform Monte Carlo simulations corresponding to the two cases $\pi=(0,0,0,0.2,0.2)^{\mathrm{\scriptscriptstyle T}}$ and $\pi=(0,0,0.2,0.2,0.2)^{\mathrm{\scriptscriptstyle T}}$ only, the latter of which represents violation of both the majority and plurality rules. The estimation results are reported in Tables \ref{tab:int} and \ref{tab:int2}. As expected, the proposed g-estimator has smaller absolute bias and lower variance as the degree of interactions in the exposure model increases, with empirical coverage close to the nominal level throughout. \subsection*{Single Index Models} We present further Monte Carlo results for exposures generated under the class of semiparametric single index models $E(A|Z)=g(\xi^{\mathrm{\scriptscriptstyle T}} Z)$ where $g(\cdot)$ is a link function. Specifically, the continuous exposure is generated without covariates from $ A= \exp(\xi^{\mathrm{\scriptscriptstyle T}} \boldsymbol{Z})+\varepsilon_2.
$ We also consider the important setting of a binary exposure generated from $ A= I(-3+\xi^{ {\mathrm{\scriptscriptstyle T}} } \boldsymbol{Z}+\varepsilon_2>0), $ where $I(\cdot)$ is the indicator function. We fix $C=1$, while all other simulation settings are unchanged. Tables \ref{tab:log} and \ref{tab:probit} summarize the estimation results based on 1000 replications. Remarkably, the proposed g-estimator performs well, even though in both settings only main effects are included in the linear predictor function, as exposure interactive effects are induced on the additive scale by the nonlinear link function, rather than specified explicitly {\it a priori}. All other conclusions are qualitatively similar to those drawn when the exposure is generated under the identity link with interactions. \begin{table} \begin{center} \caption{Comparison of methods with {continuous exposure generated under the identity link $(C=1.0)$}. The two rows of results for each estimator correspond to sample sizes of $n = 10000$ and $n = 50000$ respectively.} \label{tab:ident2} \begin{tabular}{cccccccccccc} \toprule & G-estimator & Oracle 2SLS & Naive 2SLS &Post-Lasso & TSHT \\ \mathcal{M}athcal{H}line\noalign{ } & \mathcal{M}ulticolumn{5}{c}{Majority rule holds}\\ $|\text{Bias}|$ & 0.004 & 0.000& 0.008 & 0.000 & 0.000 \\ & 0.001 & 0.000 & 0.009 & 0.000 & 0.000 \\ $\sqrt{\text{Var}}$ &0.012 & 0.001 & 0.001 & 0.002 & 0.002 \\ &0.006 & 0.001 & 0.001 & 0.001 & 0.001 \\ $\sqrt{\text{EVar}}$ & 0.011 & 0.001 & 0.001 & 0.002 & 0.001 \\ & 0.006 &0.001 & 0.001 & 0.001 & 0.001 \\ Cov95 &0.931 & 0.951 & 0.000 & 0.939 & 0.950 \\ & 0.941 &0.945 & 0.001 & 0.947 &0.947 \\%\cline{2-6}\noalign{ } & \mathcal{M}ulticolumn{5}{c}{Majority rule violated but plurality rule holds}\\ $|\text{Bias}|$ & 0.018 & 0.000& 0.013 & 0.009 & 0.005 \\ & 0.010 & 0.000 & 0.013 & 0.007 & 0.000 \\ $\sqrt{\text{Var}}$ & 0.035 & 0.002 & 0.001 & 0.005 & 0.005 \\ &0.026 & 0.001 & 0.001 & 0.004 & 0.002 \\ $\sqrt{\text{EVar}}$ & 0.041 & 0.002 & 0.001 & 0.005 & 0.005 \\ & 0.028 & 0.001 & 0.001 & 0.001 & 0.001 \\ Cov95 & 0.937 & 0.937 & 0.000 & 0.119 & 0.425 \\ & 0.936 & 0.951 & 0.000 & 0.000 & 0.944 \\%\cline{2-6}\noalign{ } & \mathcal{M}ulticolumn{5}{c}{Both majority and plurality rules violated}\\ $|\text{Bias}|$ & 0.019& 0.000& 0.013 & 0.021 & 0.021 \\ & 0.010 & 0.000 & 0.013 & 0.021 & 0.021 \\ $\sqrt{\text{Var}}$ & 0.035 & 0.002 & 0.001 & 0.002 & 0.002 \\ &0.026 & 0.001 & 0.001 & 0.001 & 0.001 \\ $\sqrt{\text{EVar}}$ & 0.041 & 0.002 & 0.001 & 0.001 & 0.002 \\ & 0.028 & 0.001 & 0.001 & 0.001 & 0.001 \\ Cov95 & 0.936 & 0.940 & 0.000 & 0.000 & 0.000 \\ & 0.936 & 0.950 & 0.000 & 0.000 & 0.000 \\ \mathcal{M}athcal{H}line \end{tabular} \begin{tablenotes} \item {\noindent \small Note: $|\text{Bias}|$ and $\sqrt{\text{Var}}$ are the Monte Carlo absolute bias and standard deviation of the point estimates, $\sqrt{\text{EVar}}$ is the square root of the mean of the variance estimates and Cov95 is the coverage proportion of the 95\% confidence intervals, based on 1000 repeated simulations. Zeros denote values smaller than $.0005$.} \end{tablenotes} \end{center} \end{table} \begin{table} \begin{center} \caption{ Comparison of methods with {continuous exposure generated under the identity link $(C=0.6)$}, and varying degree of interactive effects. 
The two rows of results for each estimator correspond to sample sizes of $n = 10000$ and $n = 50000$ respectively.} \label{tab:int} \begin{tabular}{cccccccccccc} \toprule & G-estimator & Oracle 2SLS & Naive 2SLS &Post-Lasso & TSHT \\ \mathcal{M}athcal{H}line\noalign{ } & \mathcal{M}ulticolumn{5}{c}{Majority rule holds, $\Delta=30$\%}\\ $|\text{Bias}|$ &$0.029$ & 0.000 & $0.039$ & $0.002$ & $0.002$ \\ &$0.009$ & 0.000 & $0.039$ &0.000 & 0.000 \\ $\sqrt{\text{Var}}$ &$0.056$ & $0.006$ & $0.007$ & $0.009$ & $0.010$ \\ &$0.030$ & $0.003$ & $0.006$ & $0.003$ & $0.003$ \\ $\sqrt{\text{EVar}}$ &$0.057$ & $0.007$ & $0.005$ & $0.007$ & $0.007$ \\ &$0.031$ & $0.003$ & $0.002$ & $0.003$ & $0.003$ \\ Cov95 & $0.919$ & $0.961$ & 0.000 & $0.937$ & $0.919$ \\ &$0.940$ & $0.941$ & 0.000 & $0.941$ & $0.943$ \\ & \mathcal{M}ulticolumn{5}{c}{Majority rule holds, $\Delta=60$\%}\\ $|\text{Bias}|$ &$0.012$ & 0.000 & $0.022$ & 0.000 & 0.000 \\ &$0.003$ & 0.000 & $0.022$ & 0.000& 0.000 \\ $\sqrt{\text{Var}}$ &$0.032$ & $0.004$ & $0.003$ & $0.004$ & $0.005$ \\ &$0.016$ & $0.002$ & $0.002$ & $0.002$ & $0.003$ \\ $\sqrt{\text{EVar}}$ &$0.031$ & $0.004$ & $0.003$ & $0.004$ & $0.004$ \\ &$0.016$ & $0.002$ & $0.001$ & $0.002$ & $0.002$ \\ Cov95 &$0.927$ & $0.953$ & 0.000 & $0.954$ & $0.951$ \\ &$0.948$ & $0.950$ & 0.000 & $0.945$ & $0.944$ \\ & \mathcal{M}ulticolumn{5}{c}{Both majority and plurality rules violated, $\Delta=30$\%}\\ $|\text{Bias}|$ &$0.098$ & 0.000 & $0.059$ & $0.093$ & $0.085$ \\ &$0.070$ & 0.000 & $0.059$ & $0.094$ & $0.054$ \\ $\sqrt{\text{Var}}$ &$0.143$ & $0.009$ & $0.007$ & $0.016$ & $0.029$ \\ &$0.125$ & $0.004$ & $0.006$ & $0.011$ & $0.037$ \\ $\sqrt{\text{EVar}}$ &$0.166$ & $0.009$ & $0.005$ & $0.007$ & $0.008$ \\ &$0.137$ & $0.004$ & $0.002$ & $0.003$ & $0.003$ \\ Cov95 &$0.923$ & $0.943$ & 0.000 & 0.000 & $0.024$ \\ &$0.922$ & $0.941$ & 0.000 & 0.000 & $0.205$ \\ & \mathcal{M}ulticolumn{5}{c}{Both majority and plurality rules violated, $\Delta=60$\%}\\ $|\text{Bias}|$ &$0.054$ & 0.000 & $0.034$ & $0.055$ & $0.054$ \\ &$0.030$ & 0.000 & $0.034$ & $0.055$ & $0.042$ \\ $\sqrt{\text{Var}}$ &$0.091$ & $0.005$ & $0.004$ & $0.006$ & $0.009$ \\ &$0.072$ & $0.002$ & $0.002$ & $0.004$ & $0.017$ \\ $\sqrt{\text{EVar}}$ &$0.106$ & $0.005$ & $0.003$ & $0.004$ & $0.004$ \\ &$0.077$ & $0.002$ & $0.001$ & $0.002$ & $0.002$ \\ Cov95 &$0.936$ & $0.946$ & 0.000 & 0.000 & 0.000 \\ &$0.925$ & $0.959$ & 0.000 & 0.000 & $0.043$ \\ \mathcal{M}athcal{H}line \end{tabular} \begin{tablenotes} \item {\noindent \small Note: See the footnote of Table \ref{tab:ident2}.} \end{tablenotes} \end{center} \end{table} \begin{table} \begin{center} \caption{Comparison of methods with {continuous exposure generated under the identity link $(C=10.0)$}, and varying degree of interactive effects. 
The two rows of results for each estimator correspond to sample sizes of $n = 10000$ and $n = 50000$ respectively.} \label{tab:int2} \begin{tabular}{cccccccccccc} \toprule & G-estimator & Oracle 2SLS & Naive 2SLS &Post-Lasso & TSHT \\ \mathcal{M}athcal{H}line\noalign{ } & \mathcal{M}ulticolumn{5}{c}{Majority rule holds, $\Delta=30$\%}\\ $|\text{Bias}|$ &$0.015$ & 0.000 & $0.024$ & $0.001$ & $0.001$ \\ &$0.005$ & 0.000 & $0.024$ & 0.000&0.000 \\ $\sqrt{\text{Var}}$ &$0.034$ & $0.004$ & $0.004$ & $0.006$ & $0.006$ \\ &$0.018$ & $0.002$ & $0.003$ & $0.002$ & $0.002$ \\ $\sqrt{\text{EVar}}$ &$0.035$ & $0.004$ & $0.003$ & $0.004$ & $0.004$ \\ &$0.019$ & $0.002$ & $0.001$ & $0.002$ & $0.002$ \\ Cov95 &$0.932$ & $0.961$ & 0.000 & $0.937$ & $0.919$ \\ &$0.942$ & $0.941$ & 0.000 & $0.941$ & $0.942$ \\ & \mathcal{M}ulticolumn{5}{c}{Majority rule holds, $\Delta=60$\%}\\ $|\text{Bias}|$ &$0.006$ & 0.000 & $0.013$ & 0.000 & 0.000 \\ &$0.002$ & 0.000 & $0.013$ & 0.000 & 0.000 \\ $\sqrt{\text{Var}}$ &$0.020$ & $0.002$ & $0.002$ & $0.002$ & $0.003$ \\ &$0.009$ & $0.001$ & $0.001$ & $0.001$ & $0.002$ \\ $\sqrt{\text{EVar}}$ &$0.019$ & $0.002$ & $0.002$ & $0.002$ & $0.002$ \\ &$0.009$ & $0.001$ & $0.001$ & $0.001$ & $0.001$ \\ Cov95 &$0.932$ & $0.953$ & 0.000 & $0.954$ & $0.950$ \\ &$0.950$ & $0.950$ & 0.000 & $0.945$ & $0.944$ \\ & \mathcal{M}ulticolumn{5}{c}{Both majority and plurality rules violated, $\Delta=30$\%}\\ $|\text{Bias}|$ &$0.056$ & 0.000 & $0.036$ & $0.056$ & $0.051$ \\ &$0.040$ & 0.000 & $0.035$ & $0.056$ & $0.033$ \\ $\sqrt{\text{Var}}$ &$0.088$ & $0.005$ & $0.004$ & $0.010$ & $0.017$ \\ &$0.077$ & $0.002$ & $0.003$ & $0.007$ & $0.022$ \\ $\sqrt{\text{EVar}}$ &$0.104$ & $0.005$ & $0.003$ & $0.004$ & $0.005$ \\ &$0.084$ & $0.002$ & $0.001$ & $0.002$ & $0.002$ \\ Cov95 &$0.935$ & $0.944$ & 0.000 & 0.000 & $0.023$ \\ &$0.931$ & $0.943$ & 0.000 & 0.000 & $0.205$ \\ & \mathcal{M}ulticolumn{5}{c}{Both majority and plurality rules violated, $\Delta=60$\%}\\ $|\text{Bias}|$ &$0.030$ & 0.000 & $0.020$ & $0.033$ & $0.033$ \\ &$0.017$ & 0.000 & $0.020$ & $0.033$ & $0.025$ \\ $\sqrt{\text{Var}}$ &$0.055$ & $0.003$ & $0.002$ & $0.003$ & $0.006$ \\ &$0.044$ & $0.001$ & $0.001$ & $0.002$ & $0.010$ \\ $\sqrt{\text{EVar}}$ &$0.064$ & $0.003$ & $0.002$ & $0.002$ & $0.003$ \\ &$0.047$ & $0.001$ & $0.001$ & $0.001$ & $0.001$ \\ Cov95 &$0.944$ & $0.946$ & 0.000& 0.000 & 0.000 \\ &$0.929$ & $0.959$ & 0.000 & 0.000 & $0.041$ \\ \mathcal{M}athcal{H}line \end{tabular} \begin{tablenotes} \item {\noindent\small Note: See the footnote of Table \ref{tab:ident2}.} \end{tablenotes} \end{center} \end{table} \begin{table} \begin{center} \caption{Comparison of methods with {continuous exposure generated under the log link}. 
The two rows of results for each estimator correspond to sample sizes of $n = 10000$ and $n = 50000$ respectively.} \label{tab:log} \begin{tabular}{cccccccccccc} \toprule & G-estimator & Oracle 2SLS & Naive 2SLS &Post-Lasso & TSHT \\ \mathcal{M}athcal{H}line\noalign{ } & \mathcal{M}ulticolumn{5}{c}{Majority rule holds}\\ $|\text{Bias}|$ &$0.000$ & $0.000$ & $0.002$ & $0.000$& $0.000$ \\ &$0.000$ & $0.000$ & $0.002$ & $0.000$&$0.000$ \\ $\sqrt{\text{Var}}$ &$0.001$ & $0.000$ & $0.000$& $0.000$ & $0.000$ \\ &$0.001$ & $0.000$ & $0.000$ &$0.000$ &$0.000$ \\ $\sqrt{\text{EVar}}$ &$0.001$ & $0.000$ & $0.000$& $0.000$ & $0.000$ \\ &$0.001$ & $0.000$& $0.000$ & $0.000$ & $0.000$ \\ Cov95 &$0.939$ & $0.951$ & $0.000$ & $0.938$ & $0.950$ \\ &$0.948$ & $0.945$ & $0.000$ & $0.947$ & $0.946$ \\ & \mathcal{M}ulticolumn{5}{c}{Majority rule violated but plurality rule holds}\\ $|\text{Bias}|$ &$0.002$ & $0.000$ & $0.003$ & $0.002$ & $0.001$ \\ &$0.000$ & $0.000$ & $0.003$ & $0.001$ & $0.000$ \\ $\sqrt{\text{Var}}$ &$0.006$ & $0.000$ & $0.000$ & $0.001$ & $0.001$ \\ &$0.003$ & $0.000$ & $0.000$ & $0.001$ & $0.000$ \\ $\sqrt{\text{EVar}}$ &$0.006$ & $0.000$ &$0.000$& $0.000$& $0.000$ \\ &$0.003$ & $0.000$ &$0.000$&$0.000$ &$0.000$\\ Cov95 &$0.944$ & $0.939$ &$0.000$ & $0.118$ & $0.423$ \\ &$0.956$ & $0.951$ & $0.000$ & $0.000$& $0.944$ \\ & \mathcal{M}ulticolumn{5}{c}{Both majority and plurality rules violated}\\ $|\text{Bias}|$ &$0.002$ & $0.000$ & $0.003$ & $0.004$ & $0.004$ \\ &$0.000$ &$0.000$& $0.003$ & $0.004$ & $0.004$ \\ $\sqrt{\text{Var}}$ &$0.006$ & $0.000$ & $0.000$ & $0.000$ &$0.000$ \\ &$0.003$ & $0.000$ & $0.000$ & $0.000$ & $0.000$ \\ $\sqrt{\text{EVar}}$ &$0.006$ & $0.000$ & $0.000$ & $0.000$ & $0.000$\\ &$0.003$ & $0.000$ & $0.000$ & $0.000$& $0.000$ \\ Cov95 &$0.940$ & $0.939$ &$0.000$ &$0.000$ & $0.000$ \\ &$0.956$ & $0.950$ & $0.000$ &$0.000$ & $0.000$ \\ \mathcal{M}athcal{H}line \end{tabular} \begin{tablenotes} \item {\noindent \small Note: See the footnote of Table \ref{tab:ident2}.} \end{tablenotes} \end{center} \end{table} \begin{table} \begin{center} \caption{Comparison of methods with {binary exposure generated under the probit link}. 
The two rows of results for each estimator correspond to sample sizes of $n = 10000$ and $n = 50000$ respectively.} \label{tab:probit} \begin{tabular}{cccccccccccc} \toprule & G-estimator & Oracle 2SLS & Naive 2SLS &Post-Lasso & TSHT \\ \mathcal{M}athcal{H}line\noalign{ } & \mathcal{M}ulticolumn{5}{c}{Majority rule holds}\\ $|\text{Bias}|$ &$0.065$ & $0.002$ & $0.300$ & $0.002$ & $0.000$ \\ &$0.015$ & $0.002$ & $0.302$ & $0.002$ & $0.004$ \\ $\sqrt{\text{Var}}$ &$0.187$ & $0.051$ & $0.039$ & $0.055$ & $0.059$ \\ &$0.090$ & $0.023$ & $0.018$ & $0.023$ & $0.032$ \\ $\sqrt{\text{EVar}}$ &$0.185$ & $0.051$ & $0.039$ & $0.051$ & $0.052$ \\ &$0.091$ & $0.023$ & $0.017$ & $0.022$ & $0.023$ \\ Cov95 &$0.928$ & $0.952$ & $0.000$ & $0.941$ & $0.948$ \\ &$0.954$ & $0.947$ & $0.000$ & $0.948$ & $0.946$ \\ & \mathcal{M}ulticolumn{5}{c}{Majority rule violated but plurality rule holds}\\ $|\text{Bias}|$ &$0.377$ & $0.003$ & $0.450$ & $0.302$ & $0.203$ \\ &$0.196$ & $0.003$ & $0.453$ & $0.252$ & $0.008$ \\ $\sqrt{\text{Var}}$ &$0.704$ & $0.062$ & $0.040$ & $0.196$ & $0.195$ \\ &$0.535$ & $0.028$ & $0.018$ & $0.147$ & $0.053$ \\ $\sqrt{\text{EVar}}$ &$1.077$ & $0.062$ & $0.039$ & $0.056$ & $0.062$ \\ &$0.561$ & $0.028$ & $0.017$ & $0.026$ & $0.027$ \\ Cov95 &$0.932$ & $0.939$ & $0.000$ & $0.110$ & $0.396$ \\ &$0.925$ & $0.951$ & $0.000$ & $0.000$ & $0.945$ \\ & \mathcal{M}ulticolumn{5}{c}{Both majority and plurality rules violated}\\ $|\text{Bias}|$ &$0.378$ & $0.003$ & $0.450$ & $0.749$ & $0.751$ \\ &$0.196$ & $0.003$ & $0.453$ & $0.753$ & $0.752$ \\ $\sqrt{\text{Var}}$ &$0.704$ & $0.062$ & $0.039$ & $0.055$ & $0.055$ \\ &$0.536$ & $0.028$ & $0.018$ & $0.023$ & $0.030$ \\ $\sqrt{\text{EVar}}$ &$1.085$ & $0.062$ & $0.039$ & $0.051$ & $0.051$ \\ &$0.561$ & $0.028$ & $0.017$ & $0.022$ & $0.023$ \\ Cov95 &$0.933$ & $0.939$ & $0.000$ & $0.000$ &$0.000$ \\ &$0.924$ & $0.951$ & $0.000$ & $0.000$ & $0.000$ \\ \mathcal{M}athcal{H}line \end{tabular} \begin{tablenotes} \item {\noindent \small Note: See the footnote of Table \ref{tab:ident2}.} \end{tablenotes} \end{center} \end{table} \end{document}
\begin{document} \maketitle \begin{abstract} We consider a non-local free energy functional, modelling a competition between entropy and pairwise interactions reminiscent of the second order virial expansion, with applications to nematic liquid crystals as a particular case. We build on previous work on understanding the behaviour of such models within the large-domain limit, where minimisers converge to minimisers of a quadratic elastic energy with manifold-valued constraint, analogous to harmonic maps. We extend this work to establish H\"older bounds for (almost-)minimisers on bounded domains, and demonstrate stronger convergence of (almost-)minimisers away from the singular set of the limit solution. The proof techniques bear analogy with recent work on singularly perturbed energy functionals, in particular in the context of the Ginzburg-Landau and Landau-de Gennes models. \end{abstract} \numberwithin{equation}{section} \numberwithin{definition}{section} \numberwithin{theorem}{section} \numberwithin{remark}{section} \numberwithin{goal}{section} \section{Introduction} \subsection{Variational models of liquid crystals} Liquid crystalline systems are those which sit outside of the classical solid-liquid-gas trichotomy. While there is a plethora of different systems classified as liquid crystals, they can be broadly described as fluid systems in which the molecules admit long range order in certain degrees of freedom. This is in contrast to classical fluids, which lack long range correlations between molecules. The fluidity of these systems makes them ``soft'', that is, easily susceptible to external influences such as fields or stresses, whilst the long range ordering permits anisotropic electrostatic and optical behaviour. These two properties combined make them ideal for a variety of technological applications, as their anisotropy is exploitable whilst their softness makes them easy to manipulate. The simplest liquid crystalline system is the nematic liquid crystal. These are systems of elongated molecules, often idealised as having axial symmetry, which form phases with no long range positional order, but in which the long axes of the molecules are generally well aligned over larger length scales. Even in the well studied case of nematics there is a variety of models that one may use to study their theoretical behaviour, where the choice of model usually depends on the length scales considered and the type of defects one wishes to observe. One of the earliest and most studied free energy functionals in continuum modelling is the Oseen-Frank model~\cite{frank1958liquid}. In its simplest formulation, we consider a prescribed domain~$\Omega\subseteq\mathbb{R}^3$ and a unit vector field as our continuum variable $n\colon\Omega\to\mathbb{S}^2$, interpreted as the local alignment axis of the molecules.
As molecules are assumed to be (statistically) head-to-tail symmetric, we interpret the configurations~$n$, $-n$ as physically equivalent, {that is, they are two mathematically distinct representations of the same physical state.} In the simplified {\it one-constant approximation}, we look for minimisers of the free energy \begin{equation}\label{refOFOneConstant} \int_\Omega \frac{K}{2} \abs{\nabla n(x)}^2 \, \mathrm{d} x, \end{equation} subject to certain boundary conditions, although more general formulations are possible. The problem has attracted interest not only from the liquid crystal community, but also from the mathematical community, as the prototypical harmonic map problem. If a prescribed Dirichlet boundary condition has non-zero degree, then by necessity any $n$ satisfying it must admit discontinuities, meaning that defects/singularities are an unavoidable part of the model's study. More generally, one may consider an Oseen-Frank energy in which different modes of deformation are penalised to different extents. Neglecting the saddle-splay null-Lagrangian term, this gives a free energy of the form \begin{equation}\label{eqOF} \int_\Omega \frac{K_1}{2}(\nabla \cdot n(x))^2 + \frac{K_2}{2}(n(x)\cdot \nabla \times n(x))^2 + \frac{K_3}{2}|n(x)\times \nabla\times n(x)|^2\,\mathrm{d} x. \end{equation} The constants $K_1$, $K_2$, $K_3$ are known as the Frank constants, and represent the penalisations of splay, twist, and bend deformations respectively. In the case where $K_1=K_2=K_3=K$, we recover the one-constant approximation~\eqref{refOFOneConstant}. It is natural to ask whether such a free energy can be justified. While the original formulation was more phenomenological in nature, based solely on symmetry arguments and a small-deformation assumption, attempts have been made to identify the Oseen-Frank model as a large-domain limit of a more fundamental model, the Landau-de Gennes model \cite{de1995physics,mottram2014introduction}. In the Landau-de Gennes model, the continuum variable is the {\it Q-tensor}, corresponding to the normalised second moment of a one-particle distribution function. Explicitly, if the distribution of the long axes of the molecules in a small neighbourhood of a point $x\in\Omega$ is described by a probability distribution $f(x, \, \cdot)\colon\mathbb{S}^2\to[0, \, +\infty)$, we define the Q-tensor at the point $x$ as \begin{equation} Q(x) = \int_{\mathbb{S}^2} f(x, \, p) \left(p\otimes p-\frac{1}{3}I\right)\,\mathrm{d} p. \end{equation} As molecules are assumed to be head-to-tail symmetric, a molecule is as likely to have orientation $p\in\mathbb{S}^2$ as $-p$, so that $f(x, \, p)=f(x, \, -p)$. For this reason the first moment of $f(x, \, \cdot)$ always vanishes, making the Q-tensor the first non-trivial moment containing information on molecular alignment. Q-tensors are, by their definition, traceless, symmetric, $3\times 3$ matrices. We denote this set by \begin{equation} \text{Sym}_0(3)=\left\{Q\in\mathbb{R}^{3\times 3}\colon Q=Q^T,\, \text{Trace}(Q)=0\right\}. \end{equation} The Q-tensor contains more information than the director field: it does not force the interpretation of axially symmetric ordering about a single axis (less symmetric configurations are permitted), and the degree of orientational ordering is permitted to vary.
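As a simple illustration, if the molecules near~$x$ are all aligned with a fixed unit vector~$n$, so that $f(x,\,\cdot)$ concentrates (in an idealised limit) on $\{\pm n\}$, then $Q(x)=n\otimes n-\frac{1}{3}I$, whereas weaker but still axially symmetric alignment about~$n$ yields $Q(x)=s\left(n\otimes n-\frac{1}{3}I\right)$ with $0<s<1$, the uniaxial form described below.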
Depending on its eigenvalues, a Q-tensor comes in one of three varieties.
\begin{itemize}
\item If all eigenvalues are equal, then $Q=0$, and we say that~$Q$ is {\it isotropic}, representative of a disordered system. In particular, if $f$ is a uniform distribution on $\mathbb{S}^2$, then $Q=0$.
\item If two eigenvalues are equal and the third is distinct, we say~$Q$ is {\it uniaxial}. A uniaxial Q-tensor can be written as $Q=s\left(n\otimes n-\frac{1}{3}I\right)$, for a scalar $s$ and unit vector $n$. We interpret $n$ as the favoured direction of alignment, and $s$ as a measure of the degree of ordering of molecules about $n$.
\item If all three eigenvalues are distinct, we say that $Q$ is {\it biaxial}.
\end{itemize}
The corresponding free energy to be minimised is
\begin{equation}
\int_\Omega \psi_b(Q(x)) + W(Q(x),\nabla Q(x))\,\mathrm{d} x.
\end{equation}
The function $\psi_b\colon\text{Sym}_0(3)\to\mathbb{R}\cup\{+\infty\}$ is a frame indifferent bulk potential, which may be taken as a polynomial or the Ball-Majumdar singular potential (\cite{ball2010nematic}, and further discussion in \eqref{hp:Hfirst}--\eqref{hp:Hlast}). Its main characteristic is that, in the cases considered, it is minimised on the set
\begin{equation} \label{NN}
\NN =\left\{Q\in\text{Sym}_0(3)\colon \textrm{there exists } n\in \mathbb{S}^2 \textrm{ such that } Q=s_0\left(n\otimes n-\frac{1}{3}I\right)\right\},
\end{equation}
with~$s_0$ a temperature, concentration and material dependent constant. The elastic energy~$W$ is minimised when $\nabla Q=0$. While many forms are possible, by symmetry the only frame-indifferent, quadratic energy that depends only on the gradient of~$Q$ is of the form
\begin{equation}
W(\nabla Q) = \frac{L_1}{2} Q_{ij,k}Q_{ij,k} + \frac{L_2}{2} Q_{ij,k}Q_{ik,j} + \frac{L_3}{2}Q_{ij,j}Q_{ik,k},
\end{equation}
where Einstein summation notation is used. While Oseen-Frank represents nematic defects as discontinuities in the continuum variable, the Landau-de Gennes approach admits a different description, where nematic point defects are typically described as a melting of nematic order, that is~$Q=0$. This permits smooth configurations to describe defects. In an appropriate large-domain limit of a rescaled problem, the contributions of the bulk energy become overwhelming, and we expect minimisers to converge to minimisers of a constrained problem, where we minimise the elastic energy
\begin{equation}
\int_\Omega W(\nabla Q)\,\mathrm{d} x,
\end{equation}
subject to the constraint that $Q(x)\in\NN$ almost everywhere. In the case where~$Q=s_0\left(n\otimes n-\frac{1}{3}I\right)$ almost everywhere for some~$n\in W^{1,2}(\Omega,\mathbb{S}^2)$, we say that~$Q$ is {\it orientable}, and the problem, in the presence of Dirichlet boundary conditions that are $\NN$-valued almost everywhere, becomes equivalent to that of minimising the energy \eqref{eqOF} for~$n$. The constants~$L_i$ and~$K_i$ are related, in the case of Dirichlet boundary conditions where null-Lagrangian terms may be neglected, as
\begin{equation}
\begin{split}
\frac{1}{s_0^2}K_1=& 2L_1+L_2+L_3,\\
\frac{1}{s_0^2}K_2=&2L_1,\\
\frac{1}{s_0^2}K_3=&2L_1+L_2+L_3.
\end{split}
\end{equation}
An energy purely quadratic in $\nabla Q$ cannot give rise to three independent elastic constants in the Oseen-Frank model, with the so-called ``cubic term'' $Q_{ij}Q_{kl,i}Q_{kl,j}$ often being used to fill the degeneracy. Such a term does not arise from the model we will consider, although a more complex variant taking into account molecular length scales has been proposed to avoid this issue~\cite{creyghton2018scratching}.

Studying the convergence of minimisers of Landau-de Gennes towards the Oseen-Frank limit has attracted interest, with Majumdar and Zarnescu showing global $W^{1,2}$ convergence and uniform convergence away from singular sets in the one-constant case~\cite{majumdar2010landau}, Nguyen and Zarnescu proving convergence results in stronger topologies~\cite{NguyenZarnescu}, Contreras, Lamy and Rodiac generalising the approach to other harmonic-map problems~\cite{contreras2018convergence}, and further extensions by Contreras and Lamy~\cite{contreras2018singular} and Canevari, Majumdar and Stroffolini~\cite{canevari2019minimizers} to more general elastic energies. In other settings, the $W^{1,2}$-convergence does not hold globally but only locally, away from the singular sets, due to topological obstructions carried by the boundary data and/or the domain (see e.g.~\cite{BaumanParkPhillips,GolovatyMontero, pirla3, IgnatJerrard}). Recently, Di Fratta, Robbins, Slastikov and Zarnescu found higher-order Landau-de Gennes corrections to the Oseen-Frank functional, in two dimensional domains, by studying the $\Gamma$-expansion of the Landau-de Gennes functional in the large-domain limit~\cite{diFrattaetal2020}. The problem holds many parallels to the now-classical Ginzburg-Landau problem~\cite{bethuel1994ginzburg}. Other singular limits and qualitative features of Landau-de Gennes solutions have been studied too; see, for instance,~\cite{ContrerasLamy-Biaxial, Fratta16, HenaoMajumdarPisante, INSZ-Instability, kitavtsev2016liquid, INSZ-Hedgehog, INSZ-2dstability, INSZ-symmetry} and the references therein.

While Landau-de Gennes has proven an effective model in many situations, there are still open questions as to how one may justify the model in a rigorous way. While one may use Landau-de Gennes, in appropriate situations, to justify Oseen-Frank, a rigorous justification of Landau-de Gennes itself is lacking. Historically it was justified on a phenomenological basis, but other work has been able to derive Landau-de Gennes as a gradient expansion of a non-local mean field model~\cite{garlea2017landau,han2015microscopic}. Justification by formal gradient expansions leaves open the question as to the consistency of minimisers of the original free energy with minimisers of its approximation, that is, are minimisers of the approximate model necessarily good approximations of the minimisers of the original problem? To this end, recent work has been focused on rigorous asymptotic analysis of non-local free energies, which similarly produce the Oseen-Frank model in a large-domain limit~\cite{liu2017oseen,liu2018small,taylor2018oseen,taylor2019gamma}. These approaches ``bypass'' the intermediate and non-rigorous derivation of Landau-de Gennes. This is analogous to recent investigations into peridynamics, a formulation of elasticity based on non-local interactions.
These formulations of elasticity bear mathematical similarity with the mean-field theory approach, where stress-strain relations are described in terms of non-local operators on the deformation map, rather than derivatives as in the more classical formulations of elasticity~\cite{bellido2015hyperelasticity,silling2008convergence}.

The classical density functional theory we will consider in this work is based on a simplified competition between an entropic contribution to the energy, favouring disorder, and an interaction energy, favouring order. The models themselves are justified as a second order truncation of the virial expansion in the dilute regime, based on long-range attractive interactions in the style of Maier and Saupe \cite{maier1959einfache} and with mathematical similarity to the model of Onsager \cite{onsager1949effects}. Explicitly, given the one-particle distribution function $f(x,\cdot)$ in a neighbourhood of $x$, we define a free energy functional
\begin{equation}\label{eqMeanField}
\begin{split}
&k_BT\rho_0 \int_{\Omega\times\mathbb{S}^2} f(x, \, p)\ln f(x, \, p) \,\mathrm{d} x\,\mathrm{d} p \\
&\qquad\qquad\qquad - \frac{\rho_0^2}{2} \int_{\Omega\times\mathbb{S}^2} \int_{\Omega\times\mathbb{S}^2} f(x, \, p)f(y, \, q) \mathcal{K}(x-y,\, p, \, q)\,\mathrm{d} x\,\mathrm{d} p\,\mathrm{d} y\,\mathrm{d} q.
\end{split}
\end{equation}
Here $\rho_0>0$ is the number density of particles in space, $k_B$~the Boltzmann constant, $T>0$~the temperature, and $\mathcal{K}(z, \, p, \, q)$ denotes the interaction energy of particles with orientations $p$, $q$ and with centres of mass separated by a vector~$z$. The first, entropic term is convex and readily shown to be minimised at a uniform distribution, that is, an isotropic disordered system. The nature of the second, pairwise interaction term is that nearby particles prefer to be aligned with each other. We see that temperature and concentration mediate the competition between these opposing mechanisms.

Recent work has established the Oseen-Frank energy~\eqref{eqOF} as a large-domain limit of the energy~\eqref{eqMeanField} under certain assumptions, in which the elastic constants~$K_i$ can be related to second moments of the interaction kernel. Previous work has established weaker modes of convergence, while in this work we will establish stronger convergence of minimisers away from defect sets, analogous to the approach taken by Majumdar and Zarnescu for the Landau-de Gennes model~\cite{majumdar2010landau}. {\BBB~The results, however, will require stronger assumptions on the regularity and decay of the interaction kernel than those of \cite{taylor2018oseen}, owing to the need for more precise control on the decay of various integral quantities.}

\subsection{Simplification of the model and non-dimensionalisation}

Here and throughout the sequel, we consider the more general case where molecules admit an internal degree of freedom $p$ in a manifold $\mathcal{M}$. We will employ a macroscopic order parameter $u\in\mathbb{R}^m$ to emphasise that the analysis is not limited to the concrete case of nematic liquid crystals.
Through most of the paper, we consider the case where~$f$ is prescribed on $\left(\mathbb{R}^3\setminus\frac{1}{\eps}\Omega\right)\times\mathcal{M}$, where~$\Omega$ is a non-dimensional reference domain and~$\eps>0$ is a small parameter, representative of the inverse of a large length scale of the domain. In Section~\ref{sect:bd}, we relax this assumption and study a minimisation problem where~$f$ is prescribed only in a neighbourhood of the domain, of suitable thickness. We consider the free energy
\begin{equation}
\begin{split}
&\tilde{\mathcal{G}}_\eps(f) = k_BT\rho_0 \int_{\frac{1}{\eps}\Omega\times\mathcal{M}} f(x, \, p)\ln f(x, \, p)\,\mathrm{d} x\,\mathrm{d} p \\
&\qquad\qquad\qquad -\frac{\rho_0^2}{2} \int_{\mathbb{R}^3\times\mathcal{M}} \int_{\mathbb{R}^3\times\mathcal{M}} f(x,\, p)f(y, \, q) \mathcal{K}(x-y, \, p, \, q)\,\mathrm{d} x\,\mathrm{d} p\,\mathrm{d} y\,\mathrm{d} q.
\end{split}
\end{equation}
To simplify the problem, we take the interaction energy to be of the form
\begin{equation}
\mathcal{K}(z, \, p, \, q)=K(z)\sigma(p)\cdot \sigma(q),
\end{equation}
where~$\sigma\in L^\infty(\mathcal{M},\mathbb{R}^m)$ is some ``microscopic order parameter'', and $K\colon\mathbb{R}^3\to\mathbb{R}^{m\times m}$ is a symmetric tensor field, which will satisfy certain technical conditions (see~\eqref{hp:Kfirst}--\eqref{hp:Klast} in Section~\ref{sect:setting}). By applying Fubini's theorem we may then introduce a ``macroscopic order parameter'' $u\in L^\infty(\mathbb{R}^3,\mathbb{R}^m)$ by
\begin{equation}
u(x) = \int_{\mathcal{M}}f(x, \, p)\sigma(p)\,\mathrm{d} p,
\end{equation}
and re-write the interaction energy as
\begin{equation}
\begin{split}
&-\frac{\rho_0^2}{2} \int_{\mathbb{R}^3\times\mathcal{M}} \int_{\mathbb{R}^3\times\mathcal{M}} f(x, \, p)f(y, \, q)\mathcal{K}(x-y, \, p, \, q) \,\mathrm{d} x\,\mathrm{d} p\,\mathrm{d} y\,\mathrm{d} q \\
&\qquad\qquad\qquad = -\frac{\rho_0^2}{2} \int_{\mathbb{R}^3}\int_{\mathbb{R}^3} K(x-y)u(x)\cdot u(y)\,\mathrm{d} x\,\mathrm{d} y.
\end{split}
\end{equation}
While it is not possible to write the entropic term explicitly in terms of $u$, we may provide a lower bound by means of the singular Ball-Majumdar/Katriel potential and its extensions~\cite{ball2010nematic,katriel1986free,taylor2016maximum}, namely
\begin{equation}
\int_{\frac{1}{\eps}\Omega\times\mathcal{M}} f(x, \, p)\ln f(x, \, p)\,\mathrm{d} x\,\mathrm{d} p \geq \int_{\frac{1}{\eps}\Omega}\psi_s(u(x))\,\mathrm{d} x,
\end{equation}
where the function $\psi_s\colon\mathbb{R}^m\to\mathbb{R}\cup\{+\infty\}$ is defined by
\begin{equation} \label{BallMajumdar}
\psi_s(u)=\min\left\{\int_{\mathcal{M}}f(p)\ln f(p)\,\mathrm{d} p \colon f\geq 0 \textrm{ a.e.,} \, \int_{\mathcal{M}} f(p)\,\mathrm{d} p=1, \, \int_{\mathcal{M}} f(p)\sigma(p)\,\mathrm{d} p = u\right\} \! ,
\end{equation}
where by convention~$\psi_s(u)=+\infty$ when the constraint set is empty.
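For orientation, we recall how the motivating nematic example fits into this framework: one may take $\mathcal{M}=\mathbb{S}^2$ and $\sigma(p)=p\otimes p-\frac{1}{3}I$ (identified with a vector in $\mathbb{R}^m$, $m=5$, via an orthonormal basis of $\text{Sym}_0(3)$), so that $u(x)$ is precisely the Q-tensor of the previous subsection. In this case the constraint set in \eqref{BallMajumdar} is non-empty, and $\psi_s(u)$ finite, exactly when the eigenvalues of~$u$ lie in the open interval $\left(-\frac{1}{3},\,\frac{2}{3}\right)$; see~\cite{ball2010nematic}.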
Note that the minimisation problem \eqref{BallMajumdar} is strictly convex, thus solutions are necessarily unique, and we may define~$f_u$ to be the corresponding minimiser for $u\in \QQ=\left\{u: \psi_s(u)<+\infty\right\}$. That is,
\begin{equation}
f_u=\text{arg min}\left\{\int_{\mathcal{M}}f(p)\ln f(p)\,\mathrm{d} p \colon f\geq 0 \textrm{ a.e.,} \, \int_{\mathcal{M}} f(p)\,\mathrm{d} p=1, \, \int_{\mathcal{M}} f(p)\sigma(p)\,\mathrm{d} p = u\right\}.
\end{equation}
The precise definition of~$\psi_s$ will be unimportant in this work, and we employ any function~$\psi_s$ satisfying certain technical assumptions in the sequel (see~\eqref{hp:Hfirst}--\eqref{hp:Hlast} in Section~\ref{sect:setting}). We in fact have the result that $f^*$ is a minimiser of~$\tilde{\mathcal{G}}_\eps$ if and only if, for $u^*(x)=\int_{\mathcal{M}}f^*(x, \, p)\sigma(p)\,\mathrm{d} p$, $u^*$ is a minimiser of
\begin{equation} \label{energy-1}
\tilde{\mathcal{F}}_\eps(u)=k_BT\rho_0\int_{\frac{1}{\eps}\Omega}\psi_s(u(x))\,\mathrm{d} x-\frac{\rho_0^2}{2}\int_{\mathbb{R}^3}\int_{\mathbb{R}^3}K(x-y)u(x)\cdot u(y)\,\mathrm{d} x\,\mathrm{d} y,
\end{equation}
with $f^*=f_{u^*}$. This is readily seen by writing the minimisation as a two-step process, first minimising over all $f$ with a given macroscopic order parameter $u$, and then minimising over $u$, noting that the first minimisation may be performed pointwise almost everywhere in $\mathbb{R}^3$, as in \cite{taylor2018oseen}. That is to say, we have a simpler, macroscopic energy with equivalent minimisers. By introducing the change of variables
\[
x=\frac{x^\prime}{\eps}, \quad y=\frac{y^\prime}{\eps}, \quad u^\prime(x^\prime)=u(x), \quad \eps^\prime := \frac{\eps}{\rho_0^{1/3}}, \quad K^\prime(x^\prime)=\frac{1}{k_BT}K(\eps^\prime x),
\]
and a (non-dimensional) constant~$C_{\eps^\prime}$ to be specified later, we rescale the domain and obtain the free energy we will consider for the remainder of this work, namely
\begin{equation} \label{energy0}
\begin{split}
E_{\eps^\prime}(u^\prime) &:= \frac{\eps}{k_BT\rho_0^{1/3}} \tilde{\mathcal{F}}_\eps(u) + C_{\eps^\prime}\\
&= \frac{1}{{\eps^\prime}^2} \int_{\Omega} \psi_s(u^\prime(x^\prime))\,\mathrm{d} x^\prime - \frac{1}{2{\eps^\prime}^5}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} K^\prime\left(\frac{x^\prime-y^\prime}{\eps^\prime}\right)u^\prime(x^\prime)\cdot u^\prime(y^\prime)\,\mathrm{d} x^\prime\,\mathrm{d} y^\prime + C_{\eps^\prime}.
\end{split}
\end{equation}
The additive constant~$C_{\eps^\prime}$ is irrelevant for the purpose of minimisation; however, we will make a specific choice of~$C_{\eps^\prime}$ (see Equation~\eqref{C_eps} below) for analytical convenience. We will consider the regime $\eps^\prime\to 0$ in this work.
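For the reader's convenience, we record how the prefactors in~\eqref{energy0} arise; this is a routine verification and uses nothing beyond the change of variables above. Since $\mathrm{d} x=\eps^{-3}\,\mathrm{d} x^\prime$ and $K(x-y)=k_BT\,K^\prime\!\left(\frac{x^\prime-y^\prime}{\eps^\prime}\right)$ with the definitions above, we have
\begin{equation*}
\begin{split}
\frac{\eps}{k_BT\rho_0^{1/3}}\cdot k_BT\rho_0\int_{\frac{1}{\eps}\Omega}\psi_s(u(x))\,\mathrm{d} x &= \frac{\rho_0^{2/3}}{\eps^2}\int_{\Omega}\psi_s(u^\prime(x^\prime))\,\mathrm{d} x^\prime = \frac{1}{{\eps^\prime}^2}\int_{\Omega}\psi_s(u^\prime(x^\prime))\,\mathrm{d} x^\prime,\\
\frac{\eps}{k_BT\rho_0^{1/3}}\cdot\frac{\rho_0^2}{2}\int_{\mathbb{R}^3}\int_{\mathbb{R}^3}K(x-y)u(x)\cdot u(y)\,\mathrm{d} x\,\mathrm{d} y &= \frac{\rho_0^{5/3}}{2\eps^5}\int_{\mathbb{R}^3}\int_{\mathbb{R}^3}K^\prime\!\left(\frac{x^\prime-y^\prime}{\eps^\prime}\right)u^\prime(x^\prime)\cdot u^\prime(y^\prime)\,\mathrm{d} x^\prime\,\mathrm{d} y^\prime\\
&=\frac{1}{2{\eps^\prime}^5}\int_{\mathbb{R}^3}\int_{\mathbb{R}^3}K^\prime\!\left(\frac{x^\prime-y^\prime}{\eps^\prime}\right)u^\prime(x^\prime)\cdot u^\prime(y^\prime)\,\mathrm{d} x^\prime\,\mathrm{d} y^\prime,
\end{split}
\end{equation*}
because ${\eps^\prime}^2=\rho_0^{-2/3}\eps^2$ and ${\eps^\prime}^5=\rho_0^{-5/3}\eps^5$.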
From the definition of $\eps^\prime$, this limit may be interpreted in two ways: one in which the characteristic length scale of the domain, $\frac{1}{\eps}$, becomes large, and one in which the density $\rho_0$ becomes large. However, as the energy we consider is based on the second order virial expansion, which is explicitly a model for dilute regimes, we interpret the limit $\eps^\prime\to 0$ as the former, that is, a large-domain limit. In the sequel we omit the primes and consider~\eqref{energy0} as our free energy functional to be minimised at scale~$\eps>0$.

\section{Technical assumptions and main results}
\label{sect:setting}

Let~$\text{Sym}(m)$ be the space of~$(m\times m)$-symmetric matrices, with real coefficients. Given an interaction kernel~$K\colon\mathbb{R}^3\to\text{Sym}(m)$ and~$\eps> 0$, we define~$K_\eps(z) := \eps^{-3} K(\eps^{-1}z)$ for any~$z\in\mathbb{R}^3$. Then, we may rewrite the functional~\eqref{energy0} as
\begin{equation} \label{energy}
\begin{split}
E_\eps(u) := - \frac{1}{2\eps^2}\int_{\mathbb{R}^3\times\mathbb{R}^3} K_\eps(x-y)u(x)\cdot u(y) \, \mathrm{d} x \, \mathrm{d} y + \frac{1}{\eps^2} \int_\Omega\psi_s(u(x)) \, \mathrm{d} x + C_\eps,
\end{split}
\end{equation}
where~$u\colon\mathbb{R}^3\to\mathbb{R}^m$ is the macroscopic order parameter, $\Omega\subseteq\mathbb{R}^3$ is a bounded, smooth domain, and $\psi_s\colon\mathbb{R}^m\to [0, \, +\infty]$ is any convex potential that satisfies the assumptions~\eqref{hp:Hfirst}--\eqref{hp:Hlast} below (for instance, the Ball-Majumdar/Katriel potential defined by~\eqref{BallMajumdar}).

\paragraph*{Assumptions on the kernel~$K$.} Our assumptions on the kernel~$K$ are reminiscent of~\cite{taylor2018oseen}. We define~$g(z) := \lambda_{\min}(K(z))$ for any~$z\in\mathbb{R}^3$, where~$\lambda_{\min}(K)$ denotes the minimum eigenvalue of~$K$.
\begin{enumerate}[label=(K\textsubscript{\arabic*}), ref=K\textsubscript{\arabic*}]
\item \label{hp:K_decay} \label{hp:Kfirst} $K\in W^{1,1}(\mathbb{R}^3, \, \text{Sym}(m))$.
\item \label{hp:K_even} $K$ is even, that is $K(z) = K(-z)$ for a.e.~$z\in\mathbb{R}^3$.
\item \label{hp:g} $g\geq 0$ a.e. on~$\mathbb{R}^3$, and there exist positive numbers~$\rho_1 < \rho_2$, $k_*$ such that~$g\geq k_*$ a.e.~on~$B_{\rho_2}\setminus B_{\rho_1}$.
\item \label{hp:g_decay} $g\in L^1(\mathbb{R}^3)$ and {\BBB there exists~$q>7/2$ such that $\int_{\mathbb{R}^3} g(z) \abs{z}^q \mathrm{d} z <+\infty$.}
\item \label{hp:lambda_max} There exists a positive constant~$C$ such that $\lambda_{\max}(K(z))\leq Cg(z)$ for a.e.~$z\in\mathbb{R}^3$ (where~$\lambda_{\max}(K)$ denotes the maximum eigenvalue of~$K$).
\item \label{hp:nabla_K} \label{hp:Klast} {\BBB There exists~$\nu > 1$ such that}
\[
\int_{\mathbb{R}^3} \norm{\nabla K(z)} \abs{z}^{\nu} \mathrm{d} z <+\infty,
\]
where~$\norm{\nabla K(z)}^2 := \partial_\alpha K_{ij}(z) \, \partial_\alpha K_{ij}(z)$.
\end{enumerate}
For physically meaningful systems, the tensor $K$ must respect frame invariance.
In the case of nematic liquid crystals, where the order parameter is a traceless symmetric matrix~$Q$, frame indifference implies that the bilinear form must necessarily be of the form
\begin{equation}
K(z)Q_1\cdot Q_2 = f_1(|z|)Q_1\cdot Q_2 + f_2(|z|)Q_1z\cdot Q_2z + f_3(|z|) (Q_1z\cdot z)(Q_2z\cdot z),
\end{equation}
for all $Q_1,Q_2 \in \text{Sym}_0(3)$, where~$f_1$, $f_2$, $f_3$ are real-valued functions defined on $[0, \, +\infty)$ \cite{taylor2018oseen}. It is clear that, by appropriate choices of~$f_1$, $f_2$, $f_3$ which are~$C^1$ and have sufficient decay at infinity, the previous assumptions can be satisfied. This family of bilinear forms includes the simplified interaction energy
\begin{equation}
K(z)Q_1\cdot Q_2=\varphi(|z|)Q_1\cdot Q_2
\end{equation}
{\BBB for a suitable function $\varphi$, which includes the setting of~\cite{liu2017oseen,liu2018small}, where $\varphi$ is taken to be rapidly decaying and $C^\infty$; these are stronger assumptions than we shall consider. Furthermore, \cite[Equation (3.43)]{bowick2017mathematics} considers a kernel $K$ with the same structure, albeit with a slower decay of $\varphi$ than our analysis would permit.}

{\BBB~We remark that the integrability requirements in \eqref{hp:g_decay} and \eqref{hp:nabla_K} and the regularity requirement in \eqref{hp:K_decay}, although weaker than the assumptions in the earlier work \cite{liu2017oseen}, are stronger than those of \cite{taylor2018oseen}, and permit more delicate control of the integral estimates needed to show convergence of minimisers in a stronger sense. (See also Remark~\ref{rk:assumptions} below.)}

\paragraph*{Assumptions on the singular potential~$\psi_s$.}
\begin{enumerate}[label=(H\textsubscript{\arabic*}), ref=H\textsubscript{\arabic*}]
\item \label{hp:psi_s} \label{hp:Hfirst} $\psi_s\colon\mathbb{R}^m\to {\BBB (-\infty, \, +\infty]}$ is a convex function.
\item \label{hp:Q} The domain of~$\psi_s$, $\QQ := \psi_s^{-1}({\BBB\mathbb{R}})\subseteq\mathbb{R}^m$, is a non-empty, bounded open set {\BBB that contains~$0$} and~$\psi_s\in C^2(\QQ)$.
\item \label{hp:unif_conv} There exists a constant~$c>0$ such that $\nabla^2\psi_s(y)\chi\cdot\chi\geq c\abs{\chi}^2$ for any~$y\in\QQ$ and any~$\chi\in\mathbb{R}^m$.
\item \label{hp:blow-up} There holds $\psi_s(y)\to+\infty$ as~$\dist(y, \, \partial\QQ)\to 0$.
\end{enumerate}
We define the ``bulk potential''~$\psi_b\colon\QQ\to\mathbb{R}$ in terms of~$K$ and~$\psi_s$, as
\begin{equation} \label{psib}
\psi_b(y) := \psi_s(y) - \frac{1}{2}\left(\int_{\mathbb{R}^3} K(z)\, \mathrm{d} z\right)y\cdot y + c_0 \quad \textrm{for any } y\in\QQ,
\end{equation}
where~$c_0\in\mathbb{R}$ is a constant, uniquely determined by imposing that $\inf\psi_b = 0$. We make the following assumptions on~$\psi_b$:
\begin{enumerate}[label=(H\textsubscript{\arabic*}), ref=H\textsubscript{\arabic*}, resume]
\item \label{hp:NN} The set $\NN := \psi_b^{-1}(0)\subseteq\QQ$ is a compact, smooth, connected manifold without boundary.
\item \label{hp:non_degeneracy} \label{hp:Hlast} For any~$y\in\NN$ and any unit vector~$\xi\in\mathbb{S}^{m-1}$ that is orthogonal to~$\NN$ at~$y$, we have $\nabla^2\psi_b(y)\xi\cdot\xi>0$.
\end{enumerate}

\begin{remark} \label{rk:trivial}
If the norm of~$\int_{\mathbb{R}^3}K(z)\, \mathrm{d} z$ is smaller than the constant~$c$ given by~\eqref{hp:unif_conv}, then the function~$\psi_b$ is {\BBB strictly} convex and hence, its zero-set~$\NN$ reduces to a point. This happens, for example, in the sufficiently high temperature regime, independently of the precise form of~$K$. Nevertheless, our arguments remain valid in this case, too.
\end{remark}

\begin{remark} \label{rk:BallMajumdar}
The Ball-Majumdar/Katriel potential, defined by~\eqref{BallMajumdar}, satisfies the conditions~\eqref{hp:Hfirst}--\eqref{hp:Hlast}. \eqref{hp:psi_s},~\eqref{hp:Q},~\eqref{hp:blow-up}, and~\eqref{hp:NN} follow from \cite{ball2010nematic}, apart from the $C^2$ smoothness of $\psi_s$ which is implicitly proven in \cite{katriel1986free} via an inverse function theorem argument, although not stated. \eqref{hp:unif_conv} is proven in \cite{taylor2016maximum}. With this choice of the potential, the set~$\NN :=\psi_b^{-1}(0)$ is either a point or the manifold given by~\eqref{NN} (see~\cite[Section~4]{ball2010nematic}). In both cases, \eqref{hp:non_degeneracy} is satisfied (see~\cite[Proposition 4.2]{li2015local}).
\end{remark}

\paragraph*{The admissible class and an equivalent expression for the free energy.} We complement the minimisation of the functional~\eqref{energy} by prescribing~$u$ on~$\mathbb{R}^3\setminus\Omega$. We take a map~$u_{\mathrm{bd}}\in H^1(\mathbb{R}^3, \, \mathbb{R}^m)$ such that
\begin{equation} \label{hp:bd} \tag{BD}
u_{\mathrm{bd}}(x)\in\QQ \quad \textrm{for a.e. } x\in\mathbb{R}^3\setminus\Omega, \qquad u_{\mathrm{bd}}(x)\in\NN \quad \textrm{for a.e. } x\in\Omega,
\end{equation}
and we define the admissible class
\begin{equation} \label{A}
\mathscr{A} := \left\{u\in L^\infty(\mathbb{R}^3, \, \QQ)\colon \psi_s(u)\in L^1(\Omega), \ u = u_{\mathrm{bd}} \textrm{ a.e. on } \mathbb{R}^3\setminus\Omega\right\} \!.
\end{equation}
\begin{remark}
{\BBB In order for the assumption~\eqref{hp:bd} to be satisfied, it is necessary that the trace of~$u_{\mathrm{bd}}$ on~$\partial\Omega$ takes its values in the manifold~$\NN$. If~$\NN$ is simply connected, then any boundary value that belongs to~$H^{1/2}(\partial\Omega, \, \NN)$ admits an extension in~$H^1(\Omega, \, \NN)$; this follows from~\cite[Theorem~6.2]{HardtLin}. However, when~$\NN$ is multiply connected (for instance, when~$\NN$ is the real projective plane, as in the applications to liquid crystals) there exist boundary values in~$H^{1/2}(\partial\Omega, \, \NN)$ that do not have any extension in~$H^1(\Omega, \, \NN)$.
(See e.g.~\cite{bethuel2014, MironescuVanSchaftingen-trace} for results on the extension problem for manifold-valued Sobolev maps.) On the other hand, $\QQ$ is a convex set that contains~$\NN$ and~$0$, so any boundary value in~$H^{1/2}(\partial\Omega, \, \NN)$ has an extension in~$H^1(\mathbb{R}^3\setminus\Omega, \, \QQ)$.}
\end{remark}

In the class~$\mathscr{A}$, the functional~$E_\eps$ has an alternative expression. For any~$y\in\mathbb{R}^m$, we use the abbreviated notation~$y^{\otimes 2} := y\otimes y$. We choose
\begin{equation} \label{C_eps}
C_\eps := \frac{c_0}{\eps^2} \abs{\Omega} + \frac{1}{2\eps^2} \int_{\mathbb{R}^3\setminus\Omega} \left(\int_{\mathbb{R}^3}K(z)\,\mathrm{d} z\right) \cdot u_{\mathrm{bd}}(x)^{\otimes 2} \,\mathrm{d} x,
\end{equation}
where~$\abs{\Omega}$ denotes the volume of~$\Omega$ and~$c_0\in\mathbb{R}$ is the same number as in~\eqref{psib}. The constant~$C_\eps$ only depends on~$\eps$, $\Omega$, $K$ and~$u_{\mathrm{bd}}$, so it does not affect minimisers of the functional. By applying the algebraic identity
\[
-2K(x-y)u(x)\cdot u(y) = K(x-y)\cdot(u(x) - u(y))^{\otimes 2} - K(x-y)\cdot u(x)^{\otimes 2} - K(x-y)\cdot u(y)^{\otimes 2}
\]
and using~\eqref{psib}, \eqref{C_eps}, we re-write~\eqref{energy} as
\begin{equation} \label{energy2}
\begin{split}
E_\eps(u) = \frac{1}{4\eps^2}\int_{\mathbb{R}^3\times\mathbb{R}^3} K_\eps(x-y) \cdot \left(u(x) - u(y)\right)^{\otimes 2} \, \mathrm{d} x \, \mathrm{d} y +\frac{1}{\eps^2} \int_\Omega\psi_b(u(x)) \, \mathrm{d} x
\end{split}
\end{equation}
for any~$u\in\mathscr{A}$. We note that the free energy admits parallels to the Landau-de Gennes energy, with the right-hand term being a corresponding bulk energy and the left-hand term acting as a non-local analogue of the elastic energy, which we shall see is reclaimed in a precise way in the asymptotic limit as~$\eps\to 0$.

Let~$L$ be the unique symmetric fourth-order tensor that satisfies
\begin{equation} \label{L}
L\xi\cdot \xi := \frac{1}{4}\int_{\mathbb{R}^3} K(z) \cdot (\xi z)^{\otimes 2}\, \mathrm{d} z \qquad \textrm{for any } \xi\in\mathbb{R}^{m\times 3}.
\end{equation}
{\BBB The right-hand side of~\eqref{L} is well-defined and finite for any~$\xi\in\mathbb{R}^{m\times 3}$ because~$K$ has finite second moment, thanks to the assumption~\eqref{hp:g_decay}.} Coordinate-wise, $L$ is defined by
\[
L_{ij\alpha\beta} = \frac{1}{4} \int_{\mathbb{R}^3} K_{\alpha\beta}(z)\,z_i\,z_j\,\mathrm{d} z
\]
for any~$i$, $j\in\{1, \, 2, \, 3\}$ and~$\alpha$, $\beta\in\{1, 2, \, \ldots, \, m\}$.
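As a simple illustration, consider the isotropic kernels discussed above (cf.~\cite{liu2017oseen,liu2018small}), $K_{\alpha\beta}(z)=\varphi(\abs{z})\,\delta_{\alpha\beta}$ for a suitable non-negative scalar function~$\varphi$ with finite second moment. Then
\begin{equation*}
L_{ij\alpha\beta} = \frac{\delta_{\alpha\beta}}{4}\int_{\mathbb{R}^3}\varphi(\abs{z})\,z_i\,z_j\,\mathrm{d} z = \frac{\delta_{\alpha\beta}\,\delta_{ij}}{12}\int_{\mathbb{R}^3}\varphi(\abs{z})\abs{z}^2\,\mathrm{d} z,
\end{equation*}
so that $L\nabla u\cdot\nabla u$ is a constant multiple of $\abs{\nabla u}^2$ and the limit functional defined below reduces, for orientable $\NN$-valued maps $u=s_0\left(n\otimes n-\frac{1}{3}I\right)$, to a multiple of the one-constant Oseen-Frank energy~\eqref{refOFOneConstant}.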
Let~$E_0\colon\mathscr{A}\to[0, \, +\infty]$ be given as
\begin{equation} \label{limit}
E_0(u) :=
\begin{cases}
\displaystyle\int_{\Omega} L\nabla u \cdot \nabla u & \textrm{if } u\in H^1(\Omega, \, \NN)\cap\mathscr{A} \\
+\infty &\textrm{otherwise.}
\end{cases}
\end{equation}
By assumption~\eqref{hp:bd}, the set~$H^1(\Omega, \, \NN)\cap\mathscr{A}$ is non-empty and hence, the functional~$E_0$ is not identically equal to~$+\infty$. Taylor~\cite{taylor2018oseen} proved that, as~$\eps\to 0$, the functional~$E_\eps$ $\Gamma$-converges to~$E_0$ with respect to the $L^2$-topology. In particular, up to subsequences, minimisers~$u_\eps$ of~$E_\eps$ in the class~$\mathscr{A}$ converge $L^2$-strongly to a minimiser $u_0$ of~$E_0$ in~$\mathscr{A}$. Our aim is to prove a convergence result for minimisers, in a stronger topology.

\paragraph*{Main results.} Given a Borel set~$G\subseteq\mathbb{R}^3$ and~$u\in L^\infty(G, \, \QQ)$, we define
\begin{equation} \label{loc_energy}
F_\eps(u, \, G) := \frac{1}{4\eps^2}\int_{G\times G} K_\eps(x-y) \cdot \left(u(x) - u(y)\right)^{\otimes 2} \, \mathrm{d} x \, \mathrm{d} y +\frac{1}{\eps^2} \int_{G} \psi_b(u(x)) \, \mathrm{d} x.
\end{equation}
For any~$\mu\in (0, \, 1)$, we denote the $\mu$-H\"older semi-norm of~$u$ on~$G$ as
\[
[u]_{C^\mu(G)} := \sup_{x, \, y\in G, \ x\neq y} \frac{\abs{u(x) - u(y)}}{\abs{x-y}^\mu}.
\]
\begin{mainthm}[Uniform~$\eta$-regularity] \label{th:Holder}
Assume that~\eqref{hp:Kfirst}--\eqref{hp:Klast}, \eqref{hp:Hfirst}--\eqref{hp:Hlast} and~\eqref{hp:bd} are satisfied. Then, there exist positive numbers~$\eta$, $\eps_*$, $M$ and~$\mu\in (0, \, 1)$ such that for any ball~$B_{r_0}(x_0)\subseteq\Omega$, any~$\eps\in (0, \, \eps_* r_0)$, and any minimiser~$u_\eps$ of~$E_\eps$ in~$\mathscr{A}$ such that
\[
r_0^{-1} F_\eps(u_\eps, \, B_{r_0}(x_0)) \leq \eta^2
\]
there holds
\[
r_0^{\mu} \, [u_\eps]_{C^\mu(B_{r_0/2}(x_0))} \leq M.
\]
\end{mainthm}
As a corollary, we deduce a convergence result for minimisers of~$E_\eps$, in the locally uniform topology. We recall that any minimiser~$u_0$ for the limit functional~\eqref{limit} in~$\mathscr{A}$ is smooth in~$\Omega\setminus S[u_0]$, where
\begin{equation} \label{singularset}
S[u_0] := \left\{x\in\Omega\colon \liminf_{\rho\to 0} \rho^{-1} \int_{B_\rho(x)}\abs{\nabla u_0}^2 > 0 \right\} \!.
\end{equation}
Moreover, $S[u_0]$ is a closed set of zero total length (see e.g.~\cite{HKL, Luckhaus}).
\begin{mainthm} \label{th:conv}
Assume that the conditions~\eqref{hp:Kfirst}--\eqref{hp:Klast}, \eqref{hp:Hfirst}--\eqref{hp:Hlast} and~\eqref{hp:bd} are satisfied. Let~$u_\eps$ be a minimiser of~$E_\eps$ in~$\mathscr{A}$.
Then, up to extraction of a (non-relabelled) subsequence, we have
\[
u_\eps \to u_0 \qquad \textrm{locally uniformly in } \Omega\setminus S[u_0],
\]
where $u_0$ is a minimiser of the functional~\eqref{limit} in~$\mathscr{A}$.
\end{mainthm}

The strategy of the proof for Theorem~\ref{th:Holder} is inspired by~\cite{contreras2018singular}. Under the assumption~$F_\eps(u_\eps, \, B_1)\leq\eta^2$, we obtain an algebraic decay for the mean oscillation of~$u_\eps$, that is
\begin{equation} \label{Campanato}
\fint_{B_\rho} \abs{u_\eps - \fint_{B_\rho} u_\eps}^2 \leq C\rho^{2\mu}
\end{equation}
for any~$\rho\in (0, \, 1)$ and some positive constants~$C$, $\mu$ that do not depend on~$\rho$, $\eps$. If the radius~$\rho$ is large enough, {\BBB i.e.~$\rho\geq\eps^\gamma$ for some suitable~$\gamma\in (0, \, 1)$, we obtain an algebraic decay for~$F_\eps(u_\eps, \, B_\rho)$ as a function of~$\rho$ by adapting analogous arguments for the limit functional~$E_0$ (cf.~Luckhaus' partial regularity results in~\cite{Luckhaus});} then, we deduce~\eqref{Campanato} via a suitable Poincar\'e inequality (Proposition~\ref{prop:Poincare}). On the other hand, {\BBB if~$\rho\leq\eps^\gamma$} we obtain~\eqref{Campanato} from the Euler-Lagrange equations for~$E_\eps$ (Proposition~\ref{prop:EL}). The inequality~\eqref{Campanato} immediately implies the desired bound on the H\"older norm of~$u_\eps$, by Campanato embedding. Once Theorem~\ref{th:Holder} is proven, Theorem~\ref{th:conv} follows via the Ascoli-Arzel\`a theorem.

\begin{remark} \label{rk:assumptions}
{\BBB As we observed before, if we are interested in weaker modes of convergence for the minimisers (e.g., $L^2$-convergence), then we may replace~\eqref{hp:g_decay} and~\eqref{hp:nabla_K} with the weaker condition that~$g\in L^1(\mathbb{R}^3)$ and~$g$ has finite second moment, as in~\cite{taylor2018oseen}. However, \eqref{hp:g_decay} and~\eqref{hp:nabla_K} play a very important r\^ole for us; both of them are used in the proof of the estimate~\eqref{Campanato} for small radii, $\rho\leq\eps^\gamma$. We do not know whether Theorems~\ref{th:Holder} and~\ref{th:conv} remain true under weaker assumptions.}
\end{remark}

\section{Preliminary results}

\subsection{The Euler-Lagrange equations}

Throughout the paper, we denote by~$C$ several constants that depend only on~$\Omega$, $K$, $m$, $\psi_s$ and~$u_{\mathrm{bd}}$. We write~$A\lesssim B$ as a short-hand for~$A\leq CB$. We also define $g_\eps(z) := \eps^{-3}g(\eps^{-1}z)$ for~$z\in\mathbb{R}^3$ (where, we recall, $g(z)$ is the minimum eigenvalue of~$K(z)$) and
\begin{equation} \label{Lambda}
\Lambda:= \nabla\psi_s\colon\QQ\to\mathbb{R}^m.
\end{equation}
\begin{proposition} \label{prop:EL}
Consider the free energy~$E_\eps$, given by~\eqref{energy}, with $u=u_{\mathrm{bd}}$ on $\mathbb{R}^3\setminus \Omega$.
Then there exists a minimiser~$u_\eps\in L^\infty(\Omega, \, \QQ)$ (identified with its extension by~$u_{\mathrm{bd}}$ to~$\mathbb{R}^3$), and it satisfies the Euler-Lagrange equation
\begin{equation} \label{EL}
\Lambda(u_\eps(x)) = \int_{\mathbb{R}^3} K_\eps(x-y)u_\eps(y)\,\mathrm{d} y
\end{equation}
for a.e.~$x\in\Omega$.
\end{proposition}
\begin{proof}
By neglecting the additive constant in~\eqref{energy} and multiplying by~$\eps^2$, without loss of generality we may consider the functional
\[
\mathcal{F}(u) := \int_\Omega \psi_s(u(x))\,\mathrm{d} x - \frac{1}{2}\int_{\mathbb{R}^3}\int_{\mathbb{R}^3} K_\eps(x-y)u(x)\cdot u(y)\,\mathrm{d} x\,\mathrm{d} y
\]
instead of~$E_\eps$. To show existence, we use a direct method argument. First we show that the bilinear form admits a uniform bound. As $u_{\mathrm{bd}}\in L^2(\mathbb{R}^3, \, \QQ)$ and~$u$ admits uniform~$L^\infty$-bounds on~$\Omega$, we have that $u\in L^2(\mathbb{R}^3, \, \overline{\QQ})$ and that $\norm{u}_{L^2(\mathbb{R}^3)}$ is bounded uniformly. We thus have the estimate
\begin{equation}
\begin{split}
\int_{\mathbb{R}^3}\int_{\mathbb{R}^3} &|K_\eps(x-y)u(x)\cdot u(y)|\,\mathrm{d} x\,\mathrm{d} y\\
&\lesssim \int_{\mathbb{R}^3}\int_{\mathbb{R}^3} g_\eps(x-y)|u(x)||u(y)|\,\mathrm{d} x\,\mathrm{d} y\\
&= \int_{\mathbb{R}^3}\int_{\mathbb{R}^3} \left(g_\eps(x-y)^{\frac{1}{2}}|u(x)|\right) \left(g_\eps(x-y)^{\frac{1}{2}}|u(y)|\right)\,\mathrm{d} x\,\mathrm{d} y\\
&\leq \left(\int_{\mathbb{R}^3}\int_{\mathbb{R}^3}g_\eps(x-y)|u(x)|^2\,\mathrm{d} x\,\mathrm{d} y\right)^\frac{1}{2}\left(\int_{\mathbb{R}^3}\int_{\mathbb{R}^3}g_\eps(x-y)|u(y)|^2\,\mathrm{d} x\,\mathrm{d} y\right)^\frac{1}{2}\\
&= \norm{g_\eps}_{L^1(\mathbb{R}^3)}\norm{u}_{L^2(\mathbb{R}^3)}^2 = \norm{g}_{L^1(\mathbb{R}^3)}\norm{u}_{L^2(\mathbb{R}^3)}^2,
\end{split}
\end{equation}
where we have used the assumption~\eqref{hp:lambda_max} in the first inequality and the Cauchy-Schwarz inequality in the second. The singular function~$\psi_s$ admits a pointwise lower bound, hence the functional~$\mathcal{F}$ admits a global lower bound. To show that the admissible set is non-empty, simply take $u(x)=u_0\in \QQ$ for all $x\in \Omega$ (and $u=u_{\mathrm{bd}}$ on $\mathbb{R}^3\setminus\Omega$), so that $\psi_s(u(x))$ is a finite constant on~$\Omega$.

The uniform $L^\infty$ bounds on $u$ imply that we have $L^\infty$ weak-* compactness of a minimising sequence. As~$\psi_s$ is strictly convex, we have weak-* lower semicontinuity of the entropic term. It suffices to show weak-* lower semicontinuity of the bilinear term. First we split the bilinear term into the ``boundary'' and ``bulk'' contributions. That is, we write $u=u_{\mathrm{bd}}\,\chi_{\mathbb{R}^3\setminus\Omega} + u\,\chi_{\Omega}$, where~$\chi_{\mathbb{R}^3\setminus\Omega}$ and~$\chi_\Omega$ are the characteristic functions of~$\mathbb{R}^3\setminus\Omega$ and~$\Omega$ respectively.
As $K_\eps*(u_{\mathrm{bd}} \, \chi_{\mathbb{R}^3\setminus\Omega})\in L^1(\Omega)$, if $u_j\overset{*}{\rightharpoonup}u$, then
\begin{equation}
\int_\Omega u_j(x) \, K_\eps*(u_{\mathrm{bd}} \, \chi_{\mathbb{R}^3\setminus\Omega})(x)\,\mathrm{d} x \to \int_\Omega u(x) \, K_\eps*(u_{\mathrm{bd}} \, \chi_{\mathbb{R}^3\setminus\Omega})(x)\,\mathrm{d} x.
\end{equation}
The second term requires a little more care. Following~\cite[Corollary 4.1]{eveson1995compactness}, the map $L^\infty(\Omega)\ni u\mapsto K_\eps*(u\,\chi_{\Omega})$ is $L^\infty$-to-$L^1$ compact if and only if the set $\left\{K_\eps(x-\cdot)\,\chi_\Omega\colon x\in\Omega\right\}$ is relatively $L^1$-compact. This is immediate, however, as $\Omega$ is a bounded set and $K_\eps$ is integrable. Therefore the map
\begin{equation}
u\mapsto \int_\Omega\int_\Omega K_\eps(x-y)u(x)\cdot u(y)\,\mathrm{d} x\,\mathrm{d} y
\end{equation}
is in fact continuous with respect to the weak-* $L^\infty$ topology, and therefore the entire bilinear term is weak-* continuous as well. Therefore the energy functional is weak-* lower semicontinuous, and minimisers exist by the direct method.

To show that minimisers satisfy the Euler-Lagrange equation, we note that if $u$ has finite energy, then the measure of the set $\{x\in\Omega : u(x)\in\partial\QQ\}$ is zero. In particular, we may define $U_\delta=\left\{x\in\Omega :\psi_s(u(x))< 1/\delta\right\}$, and we have that
\begin{equation}
\Omega = \Gamma\cup \bigcup\limits_{\delta>0}U_\delta,
\end{equation}
where $\Gamma$ is a null set. By Assumption~\eqref{hp:blow-up}, for every $\delta>0$, there exists some~$\gamma>0$ so that if $\psi_s(\tilde{u})< 1/\delta$, then~$\dist(\tilde{u}, \, \partial\QQ)>\gamma$. In particular, for $\phi\in L^\infty(\mathbb{R}^3, \, \mathbb{R}^m)$ supported on~$U_\delta$ and~$\eta$ sufficiently small, $u+\eta\phi$ is bounded away from~$\partial\QQ$ on~$U_\delta$. Therefore we may take variations without issue, as
\begin{equation*}
\begin{split}
\frac{1}{\eta}\left(\mathcal{F}(u+\eta\phi)-\mathcal{F}(u)\right) &= \int_{U_\delta}\frac{1}{\eta}\left(\psi_s(u(x)+\eta\phi(x))-\psi_s(u(x))\right)\,\mathrm{d} x \\
& -\frac{1}{2}\int_{\mathbb{R}^3}\int_{\mathbb{R}^3}K_\eps(x-y)\cdot\left(2\phi(x)\otimes u(y)+\eta\phi(x)\otimes \phi(y)\right)\,\mathrm{d} x\,\mathrm{d} y.
\end{split}
\end{equation*}
Now we have no issue taking the limit as~$\eta\to 0$, as~$\psi_s$ is~$C^2$ away from $\partial\QQ$, to give
\begin{equation*}
\begin{split}
\lim \limits_{\eta\to 0}\frac{1}{\eta}\left(\mathcal{F}(u+\eta\phi)-\mathcal{F}(u)\right) =&\int_{U_\delta}\Lambda(u(x))\cdot \phi(x)\,\mathrm{d} x - \int_{\mathbb{R}^3}\int_{\mathbb{R}^3}K_\eps(x-y)u(y)\,\mathrm{d} y\cdot\phi(x)\,\mathrm{d} x\\
=&\int_{U_\delta}\left(\Lambda(u(x))-\int_{\mathbb{R}^3} K_\eps(x-y)u(y)\,\mathrm{d} y\right)\cdot \phi(x)\,\mathrm{d} x,
\end{split}
\end{equation*}
recalling that $\phi(x)=0$ outside of $U_\delta$. As $\phi$ was arbitrary, this implies that $u$ satisfies
\begin{equation}
\Lambda(u(x))=\int_{\mathbb{R}^3} K_\eps(x-y)u(y)\,\mathrm{d} y
\end{equation}
on~$U_\delta$, and since $\delta$ was arbitrary, this implies that $u$ satisfies the Euler-Lagrange equation outside of $\Gamma$, which is of measure zero.
\end{proof}

The Euler-Lagrange equations are particularly useful when used in combination with the following property.
\begin{lemma} \label{lemma:nabla_inv}
The map~$\Lambda\colon\QQ\to\mathbb{R}^m$ is invertible and its inverse is of class~$C^1$. Moreover,
\begin{equation} \label{nabla_inv}
\sup_{z\in\mathbb{R}^m} \norm{\nabla(\Lambda^{-1})(z)} \leq c^{-1},
\end{equation}
where~$c$ is the constant given by~\eqref{hp:unif_conv}, and
\begin{equation} \label{Lambda_blowup}
\abs{\Lambda(y)}\to +\infty \qquad \textrm{as } \dist(y, \, \partial\QQ)\to 0.
\end{equation}
\end{lemma}
\begin{proof}
To prove~\eqref{Lambda_blowup}, it suffices to note that $\psi_s$ is a closed proper convex function which is $C^1$ on an open domain, so by applying classical results from convex analysis \cite[Theorem 25.1, Theorem 26.1]{rockafellar1970convex}, we see that $\psi_s$ satisfies the property of {\it essential smoothness}, which implies~\eqref{Lambda_blowup}. Moreover, as $\psi_s$ is also strictly convex on a bounded open domain, $\psi_s$ is a convex function of Legendre type, which yields that $\Lambda(\QQ)=\mathbb{R}^m$ \cite[Corollary 13.3.1]{rockafellar1970convex}, and that $\Lambda$ is a continuous bijection from $\QQ$ onto $\Lambda(\QQ)$ \cite[Theorem 26.5]{rockafellar1970convex}. The $C^1$ regularity of $\Lambda^{-1}$ follows immediately from the inverse function theorem, as $\psi_s$ is strongly convex.
\end{proof}

The Euler-Lagrange equation~\eqref{EL} and Lemma~\ref{lemma:nabla_inv} have important consequences in terms of regularity and ``strict physicality'' of minimisers --- that is, the image of~$u_\eps$ does not touch the boundary of the physically admissible set~$\QQ$.
\begin{proposition} \label{prop:physicality}
Minimisers~$u_\eps$ of the functional~$E_\eps$ in the class~$\mathscr{A}$ are Lipschitz-continuous on~$\Omega$, with~$\norm{\nabla u_\eps}_{L^\infty(\Omega)}\lesssim\eps^{-1}$. Moreover, there exists a number~$\delta>0$ such that for any~$\eps>0$ and any~$x\in\Omega$,
\begin{equation} \label{physicality}
\dist(u_\eps(x), \, \partial\QQ) \geq\delta.
\end{equation}
\end{proposition}
\begin{proof}
The minimiser~$u_\eps$ takes values in the bounded set~$\QQ$ and hence, $\|u_\eps\|_{L^\infty(\mathbb{R}^3)}\leq C$, where the constant~$C$ does not depend on~$\eps$. Moreover, $\|K_\eps\|_{L^1(\mathbb{R}^3)} = \|K\|_{L^1(\mathbb{R}^3)} < +\infty$. Then, by applying Young's inequality to~\eqref{EL}, we obtain
\[
\norm{\Lambda(u_\eps)}_{L^\infty(\Omega)} \leq \|K_\eps\|_{L^1(\mathbb{R}^3)} \|u_\eps\|_{L^\infty(\mathbb{R}^3)} \leq C.
\]
On the other hand, we have~$\abs{\Lambda(z)}\to+\infty$ as~$z\to\partial\QQ$ by~\eqref{Lambda_blowup} and hence, \eqref{physicality} follows. Since we have assumed that~$K\in W^{1,1}(\mathbb{R}^3, \, \text{Sym}(m))$, from the Euler-Lagrange equation~\eqref{EL} we deduce
\[
\norm{\nabla(\Lambda\circ u_\eps)}_{L^\infty(\Omega)} = \norm{\nabla K_\eps * u_\eps}_{L^\infty(\Omega)} \leq \eps^{-1} \norm{\nabla K}_{L^1(\mathbb{R}^3)} \norm{u_\eps}_{L^\infty(\mathbb{R}^3)} < + \infty.
\]
By Lemma~\ref{lemma:nabla_inv}, we conclude that~$\norm{\nabla u_\eps}_{L^\infty(\Omega)}\lesssim\eps^{-1}$.
\end{proof}

\subsection{A Poincar\'e-type inequality for~$F_\eps$} \label{sect:Poincare}

The goal of this section is to prove the following inequality on~$F_\eps$. We recall that the functional~$F_\eps$ is defined in~\eqref{loc_energy}.
\begin{proposition} \label{prop:Poincare}
There exists~$\eps_1 > 0$ such that, for any~$u\in L^\infty(\mathbb{R}^3, \, \mathbb{R}^m)$, any~$\rho >0$, any~$x_0\in\mathbb{R}^3$ and any~$\eps\in (0, \, \eps_1\rho]$, there holds
\[
\fint_{B_{\rho/2}(x_0)} \abs{u - \fint_{B_{\rho/2}(x_0)}u}^2 \lesssim \rho^{-1} F_\eps(u, \, B_\rho(x_0)).
\]
\end{proposition}
To simplify the proof of Proposition~\ref{prop:Poincare}, we will take advantage of the scaling properties of~$F_\eps$: if $u_\rho\colon B_1\to\mathbb{R}^m$ is defined by~$u_\rho(x):= u(\rho x + x_0)$ for~$x\in B_1$, then a change of variables gives
\begin{align}
\rho^{-1} F_\eps(u, \, B_\rho(x_0)) &= F_{\eps/\rho}(u_\rho, \, B_1). \label{scaling}
\end{align}
In the proof of Proposition~\ref{prop:Poincare}, we will adapt arguments from~\cite{taylor2018oseen}. By assumption~\eqref{hp:g}, there exist positive numbers~$\rho_1 < \rho_2$, $k$ such that $g\geq k$ a.e.~on~$B_{\rho_2}\setminus B_{\rho_1}$.
Let $\varphi\in C^\infty_\mathrm{c}(B_{\rho_2}\setminus B_{\rho_1})$ be a non-negative, radial function (i.e.~$\varphi(z) = \tilde{\varphi}(\abs{z})$ for~$z\in\mathbb{R}^3$) such that $\int_{\mathbb{R}^3}\varphi(z)\, \mathrm{d} z = 1$. Since~$g$ is bounded away from zero on the support of~$\varphi$, there holds
\[
\varphi + \abs{\nabla\varphi} \leq C g \qquad \textrm{pointwise a.e.~on } \mathbb{R}^3,
\]
for some constant~$C$ that depends on~$g$ and~$\varphi$; however, $\varphi$ is fixed once and for all, and so is~$C$. We define~$\varphi_\eps(z) :=\eps^{-3}\varphi(\eps^{-1}z)$ for any~$z\in\mathbb{R}^3$ and~$\eps>0$. Then, $\varphi_\eps\in C^\infty_{\mathrm{c}}(\mathbb{R}^3)$ is non-negative, even, satisfies $\int_{\mathbb{R}^3}\varphi_\eps(z)\, \mathrm{d} z = 1$ and
\begin{equation} \label{mollify}
\varphi_\eps + \eps \abs{\nabla\varphi_\eps} \leq C g_\eps \qquad \textrm{pointwise a.e.~on } \mathbb{R}^3.
\end{equation}
\begin{lemma} \label{lemma:mollify-Morrey}
There exists~$\eps_2 > 0$ such that, for any~$u\in L^\infty(B_1, \, \mathbb{R}^m)$ and any~$\eps\in (0, \, \eps_2]$, there holds
\[
\int_{B_{1/2}} \abs{\nabla(\varphi_\eps * u)}^2 \lesssim \eps^{-2} \int_{B_1\times B_1} K_\eps(x - y)\cdot \left(u(x) - u(y)\right)^{\otimes 2} \mathrm{d} x\, \mathrm{d} y.
\]
\end{lemma}
\begin{proof}
We adapt the arguments from~\cite[Lemma~2.1 and Proposition~2.1]{taylor2018oseen}. We define
\[
I(y, \, z) := \int_{B_{1/2}} \nabla\varphi_\eps(x-y)\cdot \nabla\varphi_\eps(x-z) \, \mathrm{d} x \qquad \textrm{for } y, \, z\in\mathbb{R}^3.
\]
We express the gradient of~$\varphi_\eps* u$ as~$\nabla(\varphi_\eps * u) = (\nabla\varphi_\eps) * u$. By applying the identity $2a\cdot b = -\abs{a - b}^2 + \abs{a}^2 + \abs{b}^2$, we obtain
\[
\begin{split}
\int_{B_{1/2}} \abs{\nabla (\varphi_\eps * u)(x)}^2\mathrm{d} x &= \int_{\mathbb{R}^3\times\mathbb{R}^3} u(y)\cdot u(z) \, I(y, \, z) \, \mathrm{d} y \, \mathrm{d} z \\
&= \underbrace{-\frac{1}{2}\int_{\mathbb{R}^3\times\mathbb{R}^3} \abs{u(y) - u(z)}^2 I(y, \, z) \, \mathrm{d} y\, \mathrm{d} z}_{=: I_1} \\
&+ \underbrace{\frac{1}{2}\int_{\mathbb{R}^3\times\mathbb{R}^3} \abs{u(y)}^2 I(y, \, z) \, \mathrm{d} y\, \mathrm{d} z}_{=: I_2} + \underbrace{\frac{1}{2}\int_{\mathbb{R}^3\times\mathbb{R}^3} \abs{u(z)}^2 I(y, \, z) \, \mathrm{d} y\, \mathrm{d} z}_{=: I_3}
\end{split}
\]
We first consider the term~$I_2$. Since~$\varphi_\eps$ is compactly supported, we have $\int_{\mathbb{R}^3}\nabla\varphi_\eps(z)\,\mathrm{d} z = 0$.
Therefore,
\[
I_2 = \frac{1}{2}\int_{B_{1/2}\times\mathbb{R}^3} \abs{u(y)}^2 \nabla \varphi_\eps(x-y)\cdot \left( \int_{\mathbb{R}^3} \nabla \varphi_\eps(x-z) \, \mathrm{d} z \right) \mathrm{d} x\, \mathrm{d} y = 0,
\]
and likewise~$I_3 = 0$.

Now, we consider~$I_1$. The gradient~$\nabla\varphi_\eps$ is supported in a ball of radius~$C\eps$, where $C$ is an~$\eps$-independent constant. This implies
\[
\begin{split}
I_1 &\leq \frac{1}{2}\int_{B_{1/2 + C\eps}\times B_{1/2 + C\eps}} \abs{u(y) - u(z)}^2 \abs{\int_{B_{1/2}} \nabla\varphi_\eps(x - y) \cdot \nabla\varphi_\eps(x - z) \, \mathrm{d} x} \, \mathrm{d} y\, \mathrm{d} z \\
&\leq \int_{B_{1/2}\times B_{1/2 + C\eps}\times B_{1/2 + C\eps}} \abs{u(y) - u(x)}^2 \abs{\nabla\varphi_\eps(x - y)} \abs{\nabla\varphi_\eps(x - z)} \, \mathrm{d} x \, \mathrm{d} y\, \mathrm{d} z \\
&\qquad + \int_{B_{1/2}\times B_{1/2 + C\eps}\times B_{1/2 + C\eps}} \abs{u(x) - u(z)}^2 \abs{\nabla\varphi_\eps(x - y)} \abs{\nabla\varphi_\eps(x - z)} \, \mathrm{d} x \, \mathrm{d} y\, \mathrm{d} z \\
&\leq 2\norm{\nabla\varphi_\eps}_{L^1(\mathbb{R}^3)} \int_{B_{1/2}\times B_{1/2 + C\eps}} \abs{u(y) - u(x)}^2 \abs{\nabla\varphi_\eps(y - x)}\,\mathrm{d} x\, \mathrm{d} y,
\end{split}
\]
where we have used the elementary inequality $\abs{u(y)-u(z)}^2\leq 2\abs{u(y)-u(x)}^2+2\abs{u(x)-u(z)}^2$ in the second inequality. Thanks to~\eqref{mollify}, we obtain
\[
\begin{split}
I_1 \lesssim \eps^{-2} \norm{g}_{L^1(\mathbb{R}^3)} \int_{B_{1/2}\times B_{1/2 + C\eps}} \abs{u(y) - u(x)}^2 g_\eps(y - x) \, \mathrm{d} x\, \mathrm{d} y.
\end{split}
\]
For~$\eps$ sufficiently small we have~$1/2 + C\eps<1$, and the lemma follows.
\end{proof}

Given two sets~$A\subseteq\mathbb{R}^3$, $A^\prime\subseteq\mathbb{R}^3$, we write~$A\subset\!\subset A^\prime$ when the \emph{closure} of~$A$ is contained in~$A^\prime$.
\begin{lemma} \label{lemma:mollify-L2}
Let~$A$, $A^\prime$ be open sets such that $A\subset\!\subset A^\prime\subseteq\mathbb{R}^{3}$. Then, there exists~$\eps_3 = \eps_3(A, \, A^\prime)$ such that, for any~$u\in L^\infty(A^\prime, \, \mathbb{R}^m)$ and any~$\eps\in (0, \, \eps_3]$, there holds
\[
\int_{A} \abs{u - \varphi_\eps * u}^2 \lesssim \int_{A^\prime\times A^\prime} K_\eps(x - y)\cdot \left(u(x) - u(y)\right)^{\otimes 2} \mathrm{d} x\, \mathrm{d} y.
\]
\end{lemma}
\begin{proof}
Since~$\int_{\mathbb{R}^3}\varphi_\eps(z)\,\mathrm{d} z = 1$, we have
\[
I := \int_{A} \abs{u(x) - (\varphi_\eps*u)(x)}^2\mathrm{d} x = \int_{A} \abs{\int_{\mathbb{R}^3} \varphi_\eps(x-y) \left(u(x) - u(y)\right) \mathrm{d} y}^2 \mathrm{d} x.
\]
We apply Jensen's inequality with respect to the probability measure $\varphi_\eps(x-y)\, \mathrm{d} y$:
\[
I \leq \int_{A} \left(\int_{\mathbb{R}^3} \varphi_\eps(x-y) \abs{u(x) - u(y)}^2 \mathrm{d} y\right) \mathrm{d} x.
\]
Because the support of~$\varphi_\eps$ is contained in a ball of radius~$C\eps$, where~$C$ is an~$\eps$-independent constant, the integrand is equal to zero if~$x\in A$ and $\dist(y, \, A) > C\eps$. By applying~\eqref{mollify}, we obtain
\[
I \leq \int_{A\times \{y\in\mathbb{R}^3\colon \dist(y, \, A)\leq C\eps\}} g_\eps(x-y) \abs{u(x) - u(y)}^2 \mathrm{d} x \, \mathrm{d} y
\]
and, if~$\eps\leq C^{-1}\dist(A, \, \partial A^\prime)$, the lemma follows.
\end{proof}

\begin{proof}[Proof of Proposition~\ref{prop:Poincare}]
Due to the scaling property~\eqref{scaling}, it suffices to prove that
\begin{equation} \label{Poincare1}
\fint_{B_{1/2}} \abs{u - \fint_{B_{1/2}} u}^2 \lesssim F_{\eps/\rho}(u, \, B_1)
\end{equation}
for any~$u\in L^\infty(\mathbb{R}^3, \, \mathbb{R}^m)$ and any~$\eps$, $\rho$ with~$\eps/\rho$ sufficiently small. The triangle inequality and the elementary inequality $(a + b + c)^2 \leq 3(a^2 + b^2 + c^2)$ imply
\[
\fint_{B_{1/2}} \abs{u - \fint_{B_{1/2}} u}^2 \leq 6 \fint_{B_{1/2}} \abs{u - \varphi_{\eps/\rho} * u}^2 + 3 \fint_{B_{1/2}} \abs{\varphi_{\eps/\rho} * u - \fint_{B_{1/2}}\varphi_{\eps/\rho} * u}^2.
\]
Thanks to the Poincar\'e inequality, we obtain
\[
\fint_{B_{1/2}} \abs{u - \fint_{B_{1/2}} u}^2 \lesssim \int_{B_{1/2}} \abs{u - \varphi_{\eps/\rho} * u}^2 + \int_{B_{1/2}} \abs{\nabla(\varphi_{\eps/\rho} * u)}^2 .
\]
If $\eps/\rho$ is sufficiently small, Lemma~\ref{lemma:mollify-Morrey} and Lemma~\ref{lemma:mollify-L2} give
\[
\fint_{B_{1/2}} \abs{u - \fint_{B_{1/2}} u}^2 \lesssim \left((\eps/\rho)^2 + 1 \right) F_{\eps/\rho}(u, \, B_1),
\]
so~\eqref{Poincare1} follows.
\end{proof}

\subsection{Localised $\Gamma$-convergence for the non-local term}

The $\Gamma$-convergence of the functional~$E_\eps$, as~$\eps\to 0$, was studied in~\cite{taylor2018oseen}. In this section, we adapt the arguments of~\cite{taylor2018oseen} to prove a localised $\Gamma$-convergence result. We focus on the interaction part of the free energy only, since this is all we need in the proof of Theorem~\ref{th:Holder}. {\BBB We denote by $F_\eps^\nl$ the non-local interaction part of~$F_\eps$, given by
\begin{equation} \label{locintenergy}
\begin{split}
F^\nl_\eps(u, \, G) :=& F_\eps(u, \, G) - \frac{1}{\eps^2}\int_G \psi_b(u(x)) \, \mathrm{d} x \\
=& \frac{1}{4\eps^2} \int_{G\times G} K_\eps(x - y) \cdot \left(u(x) - u(y)\right)^{\otimes 2} \,\mathrm{d} x \, \mathrm{d} y
\end{split}
\end{equation}
for any~$u\in L^\infty(\mathbb{R}^3, \, \QQ)$ and any Borel set~$G\subseteq\mathbb{R}^3$.}
\begin{proposition} \label{prop:Gamma-liminf}
{\BBB Let~$\rho>0$, $x_0\in\mathbb{R}^3$, and let~$v_\eps\in L^2(B_\rho(x_0), \, \mathbb{R}^m)$, $v_0\in H^1(B_\rho(x_0), \, \mathbb{R}^m)$ be such that $v_\eps\to v_0$ strongly in $L^2(B_\rho(x_0))$ as~$\eps\to 0$.
Then, for any open set~$G\subseteq B_{\rho}(x_0)$ we have
\begin{equation} \label{liminf}
\int_{G} L\nabla v_0\cdot\nabla v_0 \leq \liminf_{\eps\to 0} F_\eps^\nl(v_\eps, \, G).
\end{equation}
\end{proposition}

\begin{proposition} \label{prop:Gamma-limsup}
Let~$\rho>0$, $x_0\in\mathbb{R}^3$. Let $v_\eps\in H^1(B_\rho(x_0), \, \mathbb{R}^m)$, $v_0\in H^1(B_\rho(x_0), \, \mathbb{R}^m)$ be such that $v_\eps\to v_0$ strongly in $H^1(B_\rho(x_0))$ as~$\eps\to 0$. Then
\begin{equation*}
\limsup\limits_{\eps\to 0} F_\eps^\nl(v_\eps, \, B_\rho(x_0)) \leq \int_{B_\rho(x_0)} L\nabla v_0\cdot \nabla v_0 .
\end{equation*}
\end{proposition}

In the proofs of Propositions~\ref{prop:Gamma-liminf} and~\ref{prop:Gamma-limsup}, we will use the following notation. Given a vector~$w\in\mathbb{R}^3\setminus\{0\}$ and a function~$u$ defined on a subset of~$\mathbb{R}^3$, we define the difference quotient
\[
D_w u(x) := \frac{u(x+w) - u(x)}{\abs{w}}
\]
for any $x$ in the domain of~$u$ such that $x+w$ belongs to the domain of~$u$. If~$\abs{w}\leq h$, $u\in H^1(B_{\rho+h})$, and $|\cdot|_*$ is any seminorm on~$\mathbb{R}^{m}$, then
\begin{equation} \label{diffq}
\int_{B_\rho} |D_{\eps w}u(x)|_*^2\, \mathrm{d} x \leq \int_{B_{\rho+\eps h}}|(\hat{w}\cdot\nabla)u(x)|_*^2\,\mathrm{d} x
\end{equation}
where~$\hat{w} := w/\abs{w}$. This follows from the same argument as, e.g.,~\cite[Lemma 7.23]{gilbarg2015elliptic}, which treats the case of the standard Euclidean norm; the proof relies only on the convexity of the seminorm and requires no further structure. For convenience, we give the proof of Proposition~\ref{prop:Gamma-limsup} first.

\begin{proof}[Proof of Proposition~\ref{prop:Gamma-limsup}]
We assume that~$x_0 = 0$. Using a reflection across the boundary of~$B_\rho$ and a cut-off function, we define~$v_\eps$ and~$v_0$ on~$\mathbb{R}^3\setminus B_\rho$, in such a way that $v_\eps\in H^1(\mathbb{R}^3, \, \mathbb{R}^m)$, $v_0\in H^1(\mathbb{R}^3, \, \mathbb{R}^m)$ and $v_\eps\to v_0$ strongly in~$H^1(\mathbb{R}^3)$. Let~$t>0$ be a parameter. We have
\begin{equation*}
\begin{split}
&\frac{1}{4\eps^2} \int_{B_\rho}\int_{B_\rho} K_\eps(x-y)\cdot\left(v_\eps(x)-v_\eps(y)\right)^{\otimes 2}\,\mathrm{d} x\,\mathrm{d} y\\
&\leq \frac{1}{4}\int_{B_\rho}\int_{\mathbb{R}^3}|z|^2K(z)\cdot\left(D_{\eps z}v_\eps(x)\right)^{\otimes 2}\,\mathrm{d} z \,\mathrm{d} x\\
&= \frac{1}{4}\int_{B_\rho}\int_{B_\frac{t}{\eps}}|z|^2K(z)\cdot\left(D_{\eps z}v_\eps(x)\right)^{\otimes 2}\,\mathrm{d} z \,\mathrm{d} x + \frac{1}{4} \int_{B_\rho}\int_{\mathbb{R}^3\setminus B_\frac{t}{\eps}}|z|^2K(z)\cdot\left(D_{\eps z}v_\eps(x)\right)^{\otimes 2}\,\mathrm{d} z\,\mathrm{d} x.
\end{split}
\end{equation*}
To estimate the first integral at the right-hand side, we exchange the order of integration and, for any~$z$, we apply~\eqref{diffq} to the seminorm~$\abs{\xi}_*^2 := \abs{z}^2K(z) \cdot \xi^{\otimes 2}$; for the second integral, we apply~\eqref{hp:lambda_max}:
\begin{equation} \label{DSJ1}
\begin{split}
&\frac{1}{4\eps^2} \int_{B_\rho}\int_{B_\rho} K_\eps(x-y)\cdot\left(v_\eps(x)-v_\eps(y)\right)^{\otimes 2}\,\mathrm{d} x\,\mathrm{d} y\\
&\leq \frac{1}{4} \int_{\mathbb{R}^3}\int_{B_{\rho+t}}K(z)\cdot\left((z\cdot \nabla) v_\eps(x)\right)^{\otimes 2}\,\mathrm{d} x\,\mathrm{d} z +C\int_{B_\rho}\int_{\mathbb{R}^3\setminus B_\frac{t}{\eps}}g(z)|z|^2|D_{\eps z}v_\eps(x)|^2\,\mathrm{d} z\,\mathrm{d} x\\
&\stackrel{\eqref{L}}{=} \int_{B_{\rho+t}}L\nabla v_\eps (x)\cdot \nabla v_\eps(x) \,\mathrm{d} x + C\int_{B_\rho}\int_{\mathbb{R}^3\setminus B_\frac{t}{\eps}}g(z)|z|^2|D_{\eps z}v_\eps(x)|^2\,\mathrm{d} z\,\mathrm{d} x.
\end{split}
\end{equation}
We now estimate the latter summand independently. For $z \in \mathbb{R}^3\setminus B_\frac{t}{\eps}$, we have $|\eps z|^2>t^2$, so
\[
|D_{\eps z}v_\eps(x)|^2 \leq \frac{1}{t^2} \abs{v_\eps(x+\eps z) - v_\eps(x)}^2 \leq \frac{2}{t^2} \left(\abs{v_\eps(x+\eps z)}^2 + \abs{v_\eps(x)}^2\right).
\]
Therefore, by applying the Fubini theorem, we may estimate
\begin{equation} \label{DSJ2}
\begin{split}
\int_{B_\rho}\int_{\mathbb{R}^3\setminus B_\frac{t}{\eps}}g(z)|z|^2|D_{\eps z}v_\eps(x)|^2\,\mathrm{d} z\,\mathrm{d} x &\leq \frac{4\norm{v_\eps}_{L^2(\mathbb{R}^3)}^2}{t^2}\int_{\mathbb{R}^3\setminus B_\frac{t}{\eps}}g(z)|z|^2\,\mathrm{d} z .
\end{split}
\end{equation}
As $g$ has finite second moment and $\norm{v_\eps}_{L^2(\mathbb{R}^3)}\leq C$, for fixed $t$ we must have that
\begin{equation} \label{DSJ3}
\lim\limits_{\eps \to 0}\frac{4\norm{v_\eps}^2_{L^2(\mathbb{R}^3)}}{t^2}\int_{\mathbb{R}^3\setminus B_\frac{t}{\eps}}g(z)|z|^2\,\mathrm{d} z=0.
\end{equation}
Combining~\eqref{DSJ1}, \eqref{DSJ2} and~\eqref{DSJ3} gives
\begin{equation*}
\limsup\limits_{\eps \to 0} \frac{1}{4\eps^2} \int_{B_\rho}\int_{B_\rho} K_\eps(x-y) \cdot\left(v_\eps(x)-v_\eps(y)\right)^{\otimes 2}\,\mathrm{d} y\,\mathrm{d} x \leq \limsup\limits_{\eps\to 0} \int_{B_{\rho+t}} L\nabla v_\eps(x) \cdot \nabla v_\eps(x)\,\mathrm{d} x.
\end{equation*}
As $v_\eps\to v_0$ in $H^1(\mathbb{R}^3)$, this implies
\begin{equation*}
\lim\limits_{\eps\to 0} \int_{B_{\rho+t}} L\nabla v_\eps(x) \cdot \nabla v_\eps(x)\,\mathrm{d} x = \int_{B_{\rho+t}} L\nabla v_{0}(x) \cdot \nabla v_{0}(x)\,\mathrm{d} x.
\end{equation*}
Therefore we have
\begin{equation*}
\limsup\limits_{\eps \to 0} \frac{1}{4\eps^2} \int_{B_\rho}\int_{B_\rho}K_\eps(x-y)\cdot\left(v_\eps(x)-v_\eps(y)\right)^{\otimes 2}\,\mathrm{d} y\,\mathrm{d} x \leq \int_{B_{\rho+t}} L\nabla v_{0}(x) \cdot \nabla v_{0}(x)\,\mathrm{d} x,
\end{equation*}
and passing to the limit as~$t\to 0$ in the right-hand side gives the desired result.
\end{proof}

\begin{proof}[Proof of Proposition~\ref{prop:Gamma-liminf}]
Again, we assume that~$x_0 = 0$. Without loss of generality, we may assume that
\begin{equation} \label{DSJ0}
\liminf_{\eps\to 0} \frac{1}{4\eps^2} \int_{G\times G} K_\eps(x - y) \cdot \left(v_\eps(x) - v_\eps(y)\right)^{\otimes 2} \,\mathrm{d} x \, \mathrm{d} y < + \infty,
\end{equation}
otherwise there is nothing to prove. Up to extraction of a (non-relabelled) subsequence, we may also assume that the left-hand side of~\eqref{DSJ0} is actually a limit. Let $G\subseteq B_{\rho}$ be open and~$G^\prime\subset\!\subset G$. Then we may write
\begin{equation} \label{DSJ4}
\begin{split}
\frac{1}{4\eps^2} \int_G\int_G & K_\eps(x-y)\cdot \left(v_\eps(x)-v_\eps(y)\right)^{\otimes 2}\,\mathrm{d} y\,\mathrm{d} x\\
&\geq \frac{1}{4\eps^2} \int_{G^\prime}\int_{G} K_\eps(x-y)\cdot \left(v_\eps(x)-v_\eps(y)\right)^{\otimes 2}\,\mathrm{d} y\,\mathrm{d} x\\
&= \frac{1}{4} \int_{G^\prime}\int_{\frac{G-x}{\eps}} |z|^2K(z)\cdot \left(D_{\eps z}v_\eps(x)\right)^{\otimes 2}\,\mathrm{d} z\,\mathrm{d} x.
\end{split}
\end{equation}
Let~$G^c := \mathbb{R}^3\setminus G$ and~$\delta := \dist(G^\prime, \, G^c)>0$. We note that
\begin{equation*}
\begin{split}
\left|\int_{G^\prime}\int_{\left(\frac{G-x}{\eps}\right)^c} |z|^2K(z)\cdot \left(D_{\eps z}v_\eps(x)\right)^{\otimes 2}\,\mathrm{d} z\,\mathrm{d} x\right| \lesssim \int_{G^\prime}\int_{B_{\frac{\delta}{\eps}}^c}g(z)|z|^2|D_{\eps z}v_\eps(x)|^2\,\mathrm{d} z\,\mathrm{d} x,
\end{split}
\end{equation*}
which, by the previous estimates (see~\eqref{DSJ2}, \eqref{DSJ3}), converges to zero as~$\eps\to 0$. This means
\begin{equation} \label{DSJ5}
\begin{split}
\liminf\limits_{\eps\to 0}&\int_{G^\prime}\int_{\frac{G-x}{\eps}} |z|^2K(z)\cdot \left(D_{\eps z}v_\eps(x)\right)^{\otimes 2}\,\mathrm{d} z\,\mathrm{d} x\\
&=\liminf\limits_{\eps\to 0}\int_{G^\prime}\int_{\mathbb{R}^3} |z|^2K(z)\cdot \left(D_{\eps z}v_\eps(x)\right)^{\otimes 2}\,\mathrm{d} z\,\mathrm{d} x.
\end{split}
\end{equation}
Furthermore, this quantity can be written as an $L^2$ norm, by defining $w_\eps\colon G^\prime\times\mathbb{R}^3\to\mathbb{R}^m$ by $w_\eps(z,x):=|z|K^\frac{1}{2}(z)D_{\eps z}v_\eps(x)$.
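Indeed, since $\abs{K^{\frac{1}{2}}(z)\,\xi}^2 = K(z)\cdot\xi^{\otimes 2}$ for any~$\xi\in\mathbb{R}^m$ (with~$K^{\frac{1}{2}}(z)$ the symmetric square root of~$K(z)$, as above), there holds
\[
\norm{w_\eps}^2_{L^2(G^\prime\times\mathbb{R}^3)} = \int_{G^\prime}\int_{\mathbb{R}^3} |z|^2 K(z)\cdot \left(D_{\eps z}v_\eps(x)\right)^{\otimes 2} \mathrm{d} z\,\mathrm{d} x,
\]
which is the integral appearing in~\eqref{DSJ5}.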
Thanks to~\eqref{DSJ0}, we immediately see that $w_\eps$ is bounded in~$L^2$; hence, it admits a weakly $L^2$-convergent subsequence $w_j:=w_{\eps_j}$, with $\eps_j\to 0$, whose weak $L^2$-limit we denote by~$w_0$. Furthermore, choosing the subsequence so that it realises the $\liminf$ in~\eqref{DSJ5}, the weak lower semicontinuity of the norm gives
\begin{equation} \label{DSJ6}
\liminf\limits_{\eps\to 0}\int_{G^\prime}\int_{\mathbb{R}^3} |z|^2K(z)\cdot \left(D_{\eps z}v_\eps(x)\right)^{\otimes 2}\,\mathrm{d} z\,\mathrm{d} x =\liminf\limits_{j\to\infty} \norm{w_j}^2_{L^2(G^\prime\times\mathbb{R}^3)} \geq \norm{w_0}^2_{L^2(G^\prime\times\mathbb{R}^3)} \! .
\end{equation}
It remains to identify the limit~$w_0$. We may do this by integrating against test functions. Let~$\phi\in C^\infty_{\mathrm{c}}(G^\prime\times \mathbb{R}^3)$. There exists some~$R_0>0$ such that, for any~$(y, \, z)\in\mathbb{R}^3\times\mathbb{R}^3$ with~$|z|>R_0$, $\phi(y, \, z)=0$. Furthermore, there exists some $\delta>0$ so that if $\dist(y, \, (G^\prime)^c)<\delta$, then~$\phi(y, \, z)=0$. In particular, if~${\eps_j}< \frac{\delta}{R_0}$ and~$(x - \eps_j z, \, z)\in\mathrm{supp}(\phi)$, then~$x\in G^\prime$. Therefore
\begin{equation*}
\begin{split}
\langle w_j, \, \phi\rangle &=\int_{G^\prime}\int_{\mathbb{R}^3}\phi(x, \, z)|z|K^\frac{1}{2}(z)D_{{\eps_j} z}v_{\eps_j}(x)\,\mathrm{d} z\,\mathrm{d} x\\
&= \frac{1}{{\eps_j}}\int_{G^\prime}\int_{\mathbb{R}^3}\Big(\phi(x-{\eps_j} z, \, z)-\phi(x, \, z)\Big)K^\frac{1}{2}(z)v_{\eps_j}(x)\,\mathrm{d} z\,\mathrm{d} x,
\end{split}
\end{equation*}
and we may exploit the fact that
\[
\frac{1}{{\eps_j}}\Big(\phi(x-{\eps_j} z, \, z)-\phi(x, \, z)\Big) \to (-z\cdot \nabla_x )\phi(x, \, z) \qquad \textrm{uniformly on } G^\prime\times\mathbb{R}^3 \textrm{ as } j\to+\infty,
\]
together with the assumed $L^2$-convergence $v_{\eps_j}\to v_0$, to obtain
\begin{equation*}
\begin{split}
\lim\limits_{j\to\infty} \langle w_j, \, \phi\rangle &= \lim\limits_{j\to\infty}\frac{1}{\eps_j}\int_{G^\prime}\int_{\mathbb{R}^3}\Big(\phi(x-{\eps_j} z, \, z)-\phi(x, \, z)\Big)K^\frac{1}{2}(z)v_{\eps_j}(x)\,\mathrm{d} z\,\mathrm{d} x\\
&=\int_{G^\prime}\int_{\mathbb{R}^3}(-z\cdot \nabla_x )\phi(x, \, z)K^\frac{1}{2}(z)v_0(x)\,\mathrm{d} z\,\mathrm{d} x\\
&=\int_{G^\prime}\int_{\mathbb{R}^3}\phi(x, \, z)K^\frac{1}{2}(z)(z\cdot \nabla )v_0(x)\,\mathrm{d} z\,\mathrm{d} x=\langle w_0, \, \phi\rangle.
\end{split}
\end{equation*}
Therefore $w_0(x, \, z)=K^\frac{1}{2}(z)(z\cdot \nabla )v_0(x)$ and, by~\eqref{DSJ4}, \eqref{DSJ5}, \eqref{DSJ6}, we have
\begin{equation*}
\begin{split}
\liminf\limits_{\eps\to 0} &\frac{1}{4\eps^2}\int_G\int_G K_\eps(x-y)\cdot \left(v_\eps(x)-v_\eps(y)\right)^{\otimes 2}\,\mathrm{d} y\,\mathrm{d} x\\
&\geq \frac{1}{4} \liminf\limits_{j\to\infty} \norm{w_j}^2_{L^2(G^\prime\times \mathbb{R}^3)}\\
&\geq \frac{1}{4}\norm{w_0}^2_{L^2(G^\prime\times \mathbb{R}^3)}\\
&= \frac{1}{4}\int_{G^\prime}\int_{\mathbb{R}^3}K(z)\cdot\Big((z\cdot \nabla )v_0(x)\Big)^{\otimes 2}\,\mathrm{d} z\,\mathrm{d} x\\
&\stackrel{\eqref{L}}{=} \int_{G^\prime}L\nabla v_0(x)\cdot \nabla v_0(x)\,\mathrm{d} x.
\end{split}
\end{equation*}
As the set $G^\prime\subset\!\subset G$ was arbitrary, by monotonicity the lower bound~\eqref{liminf} holds.
\end{proof}

\subsection{Other auxiliary results}

In this section, we collect some auxiliary results that will be useful in the proof of Theorem~\ref{th:Holder}. Our first result is a remark on the kernel~$K$, which will be used repeatedly. We will use the notation~$a_\eps = \mathrm{o}(b_\eps)$ if there exists a positive sequence~$c_\eps$, depending on~$K$ and~$\QQ$ only, such that~$\abs{a_\eps} \leq c_\eps \abs{b_\eps}$ and~$c_\eps\to 0$ as~$\eps\to 0$.

\begin{lemma} \label{lemma:decayK}
For any~$\sigma>0$, there holds
\[
\sup_{x\in\mathbb{R}^3}\left(\frac{1}{\eps^2} \int_{\{y\in\mathbb{R}^3\colon\abs{x - y}\geq\sigma\}} g_\eps(x - y) \, \mathrm{d} y\right) = \mathrm{o}\!\left(\frac{\eps^{q-2}}{\sigma^q}\right)
\]
where~$q>7/2$ is the number given by Assumption~\eqref{hp:g_decay}.
\end{lemma}

\begin{proof}
By the change of variable~$y = x + \eps z$, we obtain
\[
\begin{split}
\frac{1}{\eps^2} \int_{\{y\in\mathbb{R}^3\colon\abs{x - y}\geq\sigma\}} g_\eps(x - y) \, \mathrm{d} y &= \frac{1}{\eps^2} \int_{\{z\in\mathbb{R}^3\colon\abs{z}\geq \sigma/\eps\}} g(z) \, \mathrm{d} z \\
&\leq \frac{\eps^{q-2}}{\sigma^q} \int_{\{z\in\mathbb{R}^3\colon\abs{z}\geq \sigma/\eps\}} g(z) \abs{z}^q \, \mathrm{d} z .
\end{split}
\]
By assumption~\eqref{hp:g_decay}, $g$ has finite moment of order~$q$, so the lemma follows.
\end{proof}

Given a function~$u\in L^\infty(\mathbb{R}^3, \, \QQ)$ and Borel sets~$G\subseteq\mathbb{R}^3$, $G^\prime\subseteq\mathbb{R}^3$, we define
\begin{equation} \label{locdefect}
\Gamma_\eps(u, \, G, \, G^\prime) := \frac{1}{2\eps^2}\int_{G\times G^\prime} K_\eps(x-y) \cdot \left(u(x) - u(y)\right)^{\otimes 2} \, \mathrm{d} x \, \mathrm{d} y .
\end{equation}
In the terminology introduced by Alberti and Bellettini~\cite{AlbertiBellettini}, $\Gamma_\eps$ is called the `locality defect'.
Indeed, if~$G$ and~$G^\prime$ are disjoint and~$F^\nl_\eps$ is defined as in~\eqref{locintenergy}, then
\begin{equation} \label{locdefsplit}
F^\nl_\eps(u, \, G \cup G^\prime) = F^\nl_\eps(u, \, G) + F^\nl_\eps(u, \, G^\prime) + \Gamma_\eps(u, \, G, \, G^\prime)
\end{equation}
because~$K$ is assumed to be an even function (by~\eqref{hp:K_even}). Moreover, for any~$G\subseteq\mathbb{R}^3$, $G^\prime\subseteq\mathbb{R}^3$ we have $\Gamma_\eps(u, \, G, \, G^\prime) = \Gamma_\eps(u, \, G^\prime, \, G)$. Given a set~$G\subseteq\mathbb{R}^3$ and a number~$\sigma>0$, we define
\begin{equation} \label{partialsigma}
\partial_\sigma G := \left\{x\in\Omega\colon \dist(x, \, \partial G)<\sigma\right\} \!.
\end{equation}
Our next result is an estimate for the `locality defect'~$\Gamma_\eps$.

\begin{lemma} \label{lemma:locdefect}
Let~$u\in L^\infty(\mathbb{R}^3, \, \QQ)$, let~$G\subseteq G^\prime \subseteq \mathbb{R}^3$ be bounded Borel sets, and let~$\sigma>0$. Then,
\begin{equation*} \label{glue0}
\Gamma_\eps(u, \, G, \, G^\prime\setminus G) \lesssim F_\eps^\nl(u, \, \partial_\sigma G) + \mathrm{o}\!\left(\frac{\eps^{q-2}}{\sigma^q}\right) \inf_{\zeta\in\mathbb{R}^m} \norm{u - \zeta}^2_{L^2(G^\prime)}
\end{equation*}
where~$q>7/2$ is given by~\eqref{hp:g_decay}.
\end{lemma}

\begin{proof}
If two points~$x\in G$, $y\in G^\prime\setminus G$ satisfy~$\abs{x - y}\leq\sigma$, then necessarily~$x\in\partial_\sigma G$, $y\in\partial_\sigma G$. Then,
\[
\begin{split}
\Gamma_\eps(u, \, G, \, G^\prime\setminus G) &\leq 2F^\nl_\eps(u, \, \partial_\sigma G) \\
&\qquad + \underbrace{\frac{1}{2\eps^2} \int_{\{x\in G^\prime, \, y\in G^\prime\colon \abs{x - y} \geq \sigma\}} K_\eps(x-y) \cdot \left(u(x) - u(y)\right)^{\otimes 2} \, \mathrm{d} x \, \mathrm{d} y}_{=: I}
\end{split}
\]
We need to estimate the term~$I$. Let~$\zeta\in\mathbb{R}^m$ be a constant.
The inequality $\abs{u(x) - u(y)}^2 \leq 2\abs{u(x) - \zeta}^2 + 2\abs{u(y) - \zeta}^2$, and the assumption that~$K$ is even (see~\eqref{hp:K_even}), imply
\begin{equation} \label{locdefect1}
\begin{split}
I&\lesssim \frac{1}{2\eps^2} \int_{\{x\in G^\prime, \, y\in G^\prime\colon \abs{x - y} \geq \sigma\}} g_\eps(x - y)\abs{u(x) - u(y)}^2 \,\mathrm{d} x \, \mathrm{d} y \\
&\lesssim \frac{2}{\eps^2} \int_{\{x\in G^\prime, \, y\in G^\prime\colon \abs{x - y} \geq \sigma\}} g_\eps(x - y)\abs{u(x) - \zeta}^2 \,\mathrm{d} x \, \mathrm{d} y .
\end{split}
\end{equation}
Then, by Lemma~\ref{lemma:decayK}, we obtain
\[
I \lesssim \mathrm{o}\!\left(\frac{\eps^{q-2}}{\sigma^q}\right) \inf_{\zeta\in\mathbb{R}^m} \norm{u - \zeta}^2_{L^2(G^\prime)}
\]
and the lemma follows.
\end{proof}

\begin{lemma} \label{lemma:diffGamma}
Let~$u\in L^\infty(\mathbb{R}^3, \, \QQ)$, $\xi\in L^\infty(\mathbb{R}^3, \, \QQ)$, let~$G\subseteq \mathbb{R}^3$ be a bounded Borel set such that~$u=\xi$ a.e. in~$\mathbb{R}^3\setminus G$, and let~$\sigma>0$. Then,
\begin{equation*}
\begin{split}
\Gamma_\eps(\xi, \, G, \, \mathbb{R}^3\setminus G) - \Gamma_\eps(u, \, G, \, \mathbb{R}^3\setminus G) \lesssim F_\eps^\nl(\xi, \, \partial_\sigma G) + \mathrm{o}\!\left(\frac{\eps^{q-2}}{\sigma^q}\right) \abs{G}^{1/2} \norm{\xi - u}_{L^2(G)}
\end{split}
\end{equation*}
where~$q>7/2$ is given by~\eqref{hp:g_decay}.
\end{lemma}

\begin{remark} \label{rk:diffGamma}
The right-hand side of Lemma~\ref{lemma:diffGamma} contains the $L^2$-norm of~$\xi - u$, \emph{not} the $L^2$-norm squared. This loss of a power will be responsible for additional technicalities later on in the proof (see Lemma~\ref{lemma:decay} below).
\end{remark}

\begin{proof}[Proof of Lemma~\ref{lemma:diffGamma}]
Let~$H_\sigma := \{(x, \, y)\in G\times(\mathbb{R}^3\setminus G) \colon \abs{x-y}\geq\sigma\}$.
We have
\[
G\times (\mathbb{R}^3\setminus G)\subseteq (\partial_\sigma G\times \partial_\sigma G)\cup H_\sigma
\]
and hence,
\[
\begin{split}
&\Gamma_\eps(\xi, \, G, \, \mathbb{R}^3\setminus G) - \Gamma_\eps(u, \, G, \, \mathbb{R}^3\setminus G) \\
&\qquad \leq 2F_\eps^\nl(\xi, \, \partial_\sigma G) + \underbrace{\frac{1}{2\eps^2} \int_{H_\sigma} K_\eps(x-y) \cdot\left(\left(\xi(x) - u(y)\right)^{\otimes 2} - \left(u(x) - u(y)\right)^{\otimes 2}\right) \mathrm{d} x \, \mathrm{d} y}_{=:I}
\end{split}
\]
Using the identity $K_\eps(x-y)\cdot (a^{\otimes 2} - b^{\otimes 2}) = K_\eps(x-y)(a - b)\cdot(a + b)$, we obtain
\[
\begin{split}
I &= \frac{1}{2\eps^2} \int_{H_\sigma} K_\eps(x-y) \left(\xi(x) - u(x)\right) \cdot \left(\xi(x) + u(x) - 2u(y)\right) \mathrm{d} x \, \mathrm{d} y .
\end{split}
\]
Since~$u$, $\xi$ take their values in the bounded set~$\QQ$, we deduce
\[
\begin{split}
I &\lesssim \frac{1}{\eps^2} \int_{H_\sigma} g_\eps(x-y) \abs{\xi(x) - u(x)} \mathrm{d} x \, \mathrm{d} y \\
&\lesssim \sup_{x\in G} \left( \frac{1}{\eps^2} \int_{\{y\in\mathbb{R}^3\colon\abs{x-y}\geq\sigma\}} g_\eps(x-y) \mathrm{d} y \right) \norm{\xi - u}_{L^1(G)} .
\end{split}
\]
By applying Lemma~\ref{lemma:decayK} and the H\"older inequality at the right-hand side, the result follows.
\end{proof}

\begin{lemma} \label{lemma:glue}
Let~$G\subseteq\mathbb{R}^3$ be a Borel set. Let~$u_1\in L^\infty(\mathbb{R}^3, \, \QQ)$, $u_2\in L^\infty(\mathbb{R}^3, \, \QQ)$. Then,
\[
F_\eps^\nl(u_2, \, G) \lesssim F_\eps^\nl(u_1, \, G) + \frac{1}{\eps^2} \norm{u_2 - u_1}^2_{L^2(G)} .
\]
\end{lemma}

\begin{proof}
By writing~$u_2(x) - u_2(y) = (u_1(x) - u_1(y)) + (u_2(x) - u_1(x)) + (u_1(y) - u_2(y))$, and using that~$K$ is even (by assumption~\eqref{hp:K_even}), we obtain
\begin{equation*}
\begin{split}
F^\nl_\eps(u_2, \, G) &\lesssim F^\nl_\eps(u_1, \, G) + \frac{2}{\eps^2}\int_{G\times G} K_\eps(x - y)\cdot \left(u_2(x) - u_1(x)\right)^{\otimes 2} \mathrm{d} x \, \mathrm{d} y
\end{split}
\end{equation*}
and the lemma follows.
\end{proof}

\begin{lemma} \label{lemma:glueH1}
Let~$G\subset\!\subset G^\prime\subset\!\subset\mathbb{R}^3$ be open sets and~$\sigma\in (0, \, \dist(G, \, \partial G^\prime))$. Then, for any~$u\in H^1(G^\prime, \, \QQ)$, we have
\[
\begin{split}
F_\eps^\nl(u, \, G) &\lesssim \int_{G\cup\partial_\sigma G} \abs{\nabla u}^2 + \mathrm{o}\!\left(\frac{\eps^{q-2}}{\sigma^q}\right) \inf_{\zeta\in\mathbb{R}^m} \norm{u - \zeta}^2_{L^2(G)}
\end{split}
\]
where~$q>7/2$ is the number given by~\eqref{hp:g_decay}.
\end{lemma}

\begin{proof}
This lemma is a variant of Proposition~\ref{prop:Gamma-limsup}.
We have
\begin{equation*}
\begin{split}
F_\eps^\nl(u, \, G) &= \frac{1}{4\eps^2}\int_{\{x\in G, \, y\in G\colon \abs{x - y}\leq\sigma\}} K_\eps(x-y)\cdot\left(u(x) - u(y)\right)^{\otimes 2} \,\mathrm{d} x\, \mathrm{d} y \\
&\qquad + \frac{1}{4\eps^2} \int_{\{x\in G, \, y\in G\colon \abs{x - y}\geq\sigma\}} K_\eps(x-y)\cdot\left(u(x) - u(y)\right)^{\otimes 2} \,\mathrm{d} x\, \mathrm{d} y =: I_1 + I_2 .
\end{split}
\end{equation*}
For the first term~$I_1$, we may repeat the very same argument from the proof of Proposition~\ref{prop:Gamma-limsup}, which gives
\begin{equation*}
\begin{split}
I_1 \leq \frac{1}{4\eps^2}\int_{G\times B_{\sigma/\eps}} K(z)\cdot\left(u(x + \eps z) - u(x)\right)^{\otimes 2} \,\mathrm{d} x\, \mathrm{d} z \lesssim \int_{G\cup\partial_\sigma G} \abs{\nabla u}^2 .
\end{split}
\end{equation*}
On the other hand, the estimate~\eqref{locdefect1} in the proof of Lemma~\ref{lemma:locdefect} shows that
\[
I_2 \lesssim \mathrm{o}\!\left(\frac{\eps^{q-2}}{\sigma^q}\right) \inf_{\zeta\in\mathbb{R}^m} \norm{u - \zeta}^2_{L^2(G)} \qedhere
\]
\end{proof}

The next lemma is an estimate on the potential~$\psi_b$.

\begin{lemma} \label{lemma:psib}
For any~$\delta>0$, there exists a constant~$C_\delta>0$ such that, for any~$y_1\in\QQ$, $y_2\in\QQ$ with $\dist(y_2, \, \partial\QQ)\geq\delta$, we have
\begin{equation} \label{in_psib}
\psi_b(y_2) \leq C_\delta \left(\psi_b(y_1) + \abs{y_1 - y_2}^2\right).
\end{equation}
\end{lemma}

\begin{proof}
The assumption~\eqref{hp:non_degeneracy} implies, via a Taylor expansion and a compactness argument, that there exist~$\gamma>0$, $\kappa_1>0$, $\kappa_2>0$ so that if $\dist(y, \, \NN)<\gamma$, then
\begin{equation} \label{nondeg}
\kappa_1 \dist^2(y, \, \NN)\leq \psi_b(y)\leq \kappa_2 \dist^2(y, \, \NN).
\end{equation}
To prove the result, we distinguish three cases:
\begin{enumerate}
\item $\dist(y_1, \, \NN)\geq \frac{1}{2}\gamma$;
\item $\dist(y_1, \, \NN)< \frac{1}{2}\gamma$, $\dist(y_2, \, \NN)\geq \gamma$;
\item $\dist(y_1, \, \NN)< \frac{1}{2}\gamma$, $\dist(y_2, \, \NN)< \gamma$.
\end{enumerate}
In case~(1), any such~$y_1$ satisfies $\psi_b(y_1)>c_1$ for a constant $c_1>0$ (that depends on~$\gamma$), as~$y_1$ is bounded away from the minimising manifold. We furthermore have~$\psi_b(y_2)\leq c_2$ because $\dist(y_2, \, \partial\QQ)\geq\delta$ (and the constant~$c_2$ depends on~$\delta$). Therefore the inequality~\eqref{in_psib} holds trivially with $C_\delta=\frac{c_2}{c_1}$. In case~(2), since $\dist(y_1, \, \NN)<\frac{1}{2}\gamma$ and $\dist(y_2, \, \NN)\geq \gamma$, we must have $|y_1-y_2|^2\geq \frac{1}{4}\gamma^2$, and we then use the upper bound on $\psi_b(y_2)$ as before.
In case~(3), we note that, since $y_1$, $y_2$ are both sufficiently close to~$\NN$,
\begin{equation*}
\begin{split}
\psi_b(y_2) &\stackrel{\eqref{nondeg}}{\lesssim} \dist^2(y_2, \, \NN) \lesssim \dist^2(y_1, \, \NN) + |y_1-y_2|^2 \stackrel{\eqref{nondeg}}{\lesssim} \psi_b(y_1)+|y_1-y_2|^2. \qedhere
\end{split}
\end{equation*}
\end{proof}

Finally, we conclude this section with interpolation (or extension) results. The first one is a classical interpolation result for~$H^1$-maps; it constructs a suitable map in an annulus, with prescribed values on the boundary.

\begin{lemma}[\cite{Luckhaus,contreras2018convergence}] \label{lemma:interpolation}
For any~$M>0$, there exists~$\eta = \eta(M)>0$ such that the following statement holds. Let~$\QQ_0\subset\!\subset\QQ$ be a convex, open set that contains~$\NN$. Let~$\rho$, $\lambda$ be positive numbers with~$\lambda < \rho$, and let~$u\in H^1(\partial B_\rho, \, \QQ_0)$, $v\in H^1(\partial B_\rho, \, \NN)$ be such that
\[
\int_{\partial B_\rho} \left(\abs{\nabla u}^2 + \abs{\nabla v}^2\right) \mathrm{d}\mathscr{H}^2\leq M, \qquad \int_{\partial B_\rho} \abs{u - v}^2 \,\mathrm{d}\mathscr{H}^2 \leq \eta\lambda^2.
\]
Then, there exists a map $w\in H^1(B_{\rho} \setminus B_{\rho - \lambda}, \, \QQ_0)$ such that $w(x) = u(x)$ for $\mathscr{H}^2$-a.e.~$x\in\partial B_\rho$, $w(x) = v(\rho x/(\rho - \lambda))$ for $\mathscr{H}^2$-a.e.~$x\in\partial B_{\rho - \lambda}$, and
\begin{gather*}
\int_{B_\rho\setminus B_{\rho-\lambda}} \abs{\nabla w}^2 \lesssim \lambda \int_{\partial B_\rho} \left(\abs{\nabla u}^2 + \abs{\nabla v}^2 + \frac{\abs{u - v}^2}{\lambda^2}\right)\mathrm{d}\mathscr{H}^2 \\
\int_{B_\rho\setminus B_{\rho-\lambda}} \psi_b(w) \lesssim \lambda \int_{\partial B_\rho} \psi_b(u)\,\mathrm{d}\mathscr{H}^2
\end{gather*}
\end{lemma}

\begin{remark} \label{rk:interpolation}
Lemma~\ref{lemma:interpolation}, in case~$\psi_b=0$, was first proven by Luckhaus~\cite[Lemma~1]{Luckhaus}. Up to a scaling, the statement given here is essentially the same as~\cite[Lemma~B.2]{contreras2018convergence}. However, in~\cite{contreras2018convergence} the potential is assumed to be finite and smooth on the whole of~$\mathbb{R}^m$, while our potential~$\psi_b$ is singular out of~$\QQ$. Nevertheless, the proof carries over to our setting. Indeed, the map~$w$ constructed in~\cite{contreras2018convergence} takes values in a neighbourhood of~$\NN$, whose thickness can be made arbitrarily small by choosing~$\eta$ small (see also~\cite[Lemma~1]{Luckhaus}). In particular, we can make sure that the image of~$w$ is contained in the set~$\QQ_0$, where the function~$\psi_b$ is finite and smooth, and the arguments of~\cite{contreras2018convergence} carry over. Incidentally, Lemma~\ref{lemma:interpolation} crucially depends on the non-degeneracy assumption~\eqref{hp:non_degeneracy} for the bulk potential~$\psi_b$.
\end{remark}

We give a variant of Lemma~\ref{lemma:interpolation} which is adapted to our non-local setting.
\begin{lemma} \label{lemma:nonlocinterp}
Let~$\QQ_0\subset\!\subset\QQ$ be an open convex set. Let~$u_\eps$, $u\in L^\infty(\mathbb{R}^3, \, \QQ_0)$ and~$u^*_\eps$, $u^*\in H^1(B_{1/2}, \, \NN)$ satisfy the following conditions:
\begin{gather}
M_\eps := \int_{B_{1/2}} \abs{\nabla u^*_\eps}^2 + F_\eps(u_\eps, \, B_1) + \norm{u^*_\eps - u_\eps}^2_{L^2(B_{1/2})} \quad \textrm{is bounded} \label{hp:interp3} \\
u_\eps\to u \ \textrm{ strongly in } L^2(B_{1/2}), \quad u^*_\eps\to u^* \label{hp:interp1} \ \textrm{ strongly in } H^1(B_{1/2}) \textrm{ as } \eps\to 0 \\
u^* = u \quad \textrm{ a.e. in } B_1\setminus B_s \ \textrm{ for some } s\in (1/4, \, 1/2) \label{hp:interp2}
\end{gather}
Let~$\sigma\in (0, \, 1/10)$. Then, up to extraction of a (non-relabelled) subsequence, there exist maps~$\xi_\eps\in L^\infty(\mathbb{R}^3, \, \QQ_0)$ and radii~$r$, $t$ with~$s < r < t < 1/2$ that satisfy the following conditions:
\begin{enumerate}[label=(\roman*)]
\item $\xi_\eps = u_\eps$ a.e.~in~$\mathbb{R}^3\setminus B_t$;
\item ${\xi_\eps}_{|B_r}\in H^1(B_r, \, \QQ_0)$ and~${\xi_\eps}_{|B_r}\to u^*_{|B_r}$ strongly in~$H^1(B_r)$;
\item there holds
\[
\begin{split}
&F_\eps(\xi_\eps, \, B_t) + \Gamma_\eps(\xi_\eps, \, B_t, \, \mathbb{R}^3\setminus B_t) - \Gamma_\eps(u_\eps, \, B_t, \, \mathbb{R}^3\setminus B_t) \\
&\qquad\qquad \leq F_\eps^\nl(\xi_\eps, \, B_r) + C\int_{B_r\setminus B_{r-\sigma}} \abs{\nabla\xi_\eps}^2 + C\sigma M_\eps + \mathrm{o}\!\left(\frac{\eps^{q-2} M_\eps^{1/2}}{\sigma^q}\right)
\end{split}
\]
where~$q>7/2$ is given by~\eqref{hp:g_decay}.
\end{enumerate}
\end{lemma}

As we will see in the proof, the maps~$\xi_\eps$ agree with~$u^*_\eps$ on~$B_r$, up to rescaling and interpolation near the boundary of~$B_r$.

\begin{proof}[Proof of Lemma~\ref{lemma:nonlocinterp}]
We split the proof into several steps.

\medskip
\begin{step}[Construction of~$\xi_\eps$]
Let~$\varphi_\eps\in C^{\infty}_{\mathrm{c}}(\mathbb{R}^3)$ be a sequence of mollifiers, defined as in~\eqref{mollify}. Lemma~\ref{lemma:mollify-Morrey} implies
\begin{equation} \label{energybd}
\int_{B_{1/2}} \abs{\nabla(\varphi_\eps * u_\eps)}^2 \lesssim F_\eps(u_\eps, \, B_1) \stackrel{\eqref{hp:interp3}}{\leq} M_\eps
\end{equation}
for~$\eps$ small enough. Let~$N\geq 1$ be an integer such that
\[
\frac{1}{18\sigma} \leq N \leq \frac{1}{6\sigma} .
\]
Such a number exists, because of the assumption that~$0 < \sigma < 1/10$. We divide the annulus~$B_{1/2}\setminus\bar{B}_{s}$ into~$N$ concentric sub-annuli:
\[
A_i := B_{s + i\frac{1/2 - s}{N}} \setminus \bar{B}_{s + (i-1)\frac{1/2 - s}{N}} \qquad \textrm{for } i = 1, \, 2, \, \ldots, \, N.
\]
We have
\[
\sum_{i=1}^N \left(F_\eps(u_\eps, \, A_i) + \int_{A_i} \abs{\nabla(\varphi_\eps * u_\eps)}^2 + \int_{A_i} \abs{\nabla u^*_\eps}^2\right) \stackrel{\eqref{energybd}}{\lesssim} M_\eps .
\]
As a consequence, for any~$\eps$ we can choose an index~$i(\eps)$ such that
\begin{equation} \label{comp1}
F_\eps(u_\eps, \, A_{i(\eps)}) + \int_{A_{i(\eps)}} \abs{\nabla(\varphi_\eps * u_\eps)}^2 + \int_{A_{i(\eps)}} \abs{\nabla u^*_\eps}^2 \lesssim \frac{M_\eps}{N} \lesssim \sigma M_\eps .
\end{equation}
Passing to a subsequence, we may also assume that all the indices~$i(\eps)$ are the same, so from now on, we write~$i$ instead of~$i(\eps)$. We take positive numbers~$a < b$ such that $A^\prime := B_b\setminus \bar{B}_a\subset\!\subset A_i$ and~$b - a > 5\sigma$. Then, Lemma~\ref{lemma:mollify-L2} gives
\begin{equation} \label{comp2}
\frac{1}{\eps^2} \int_{A^\prime} \abs{\varphi_\eps * u_\eps - u_\eps}^2 \lesssim F_\eps(u_\eps, \, A_i) \stackrel{\eqref{comp1}}{\lesssim} \sigma M_\eps
\end{equation}
for~$\eps$ small enough. We have assumed that~$u_\eps$ takes its values in the convex set~$\QQ_0\subset\!\subset\QQ$; it follows that the image of~$\varphi_\eps*u_\eps$ is contained in~$\overline{\QQ_0}\subset\!\subset\QQ$. Thus, we may apply Lemma~\ref{lemma:psib} to estimate the integral of~$\psi_b(\varphi_\eps *u_\eps)$:
\begin{equation} \label{comp3}
\begin{split}
\frac{1}{\eps^2} \int_{A^\prime} \psi_b(\varphi_\eps *u_\eps) &\lesssim \frac{1}{\eps^2} \int_{A^\prime} \left(\psi_b(u_\eps) + \abs{\varphi_\eps * u_\eps - u_\eps}^2\right) \stackrel{\eqref{comp2}}{\lesssim} \sigma M_\eps .
\end{split}
\end{equation}
Using Fatou's lemma, we see that
\begin{equation*}
\begin{split}
&\int_a^b \left(\liminf_{\eps\to 0} \int_{\partial B_r} \abs{\nabla u^*_\eps}^2 + \abs{\nabla(\varphi_\eps * u_\eps)}^2 + \frac{1}{\eps^2} \psi_b(\varphi_\eps * u_\eps) \, \mathrm{d}\mathscr{H}^2 \right)\mathrm{d} r \\
&\qquad\qquad\qquad \leq \liminf_{\eps\to 0} \int_{A^\prime} \left(\abs{\nabla u^*_\eps}^2 + \abs{\nabla(\varphi_\eps * u_\eps)}^2 + \frac{1}{\eps^2} \psi_b(\varphi_\eps * u_\eps)\right) \stackrel{\eqref{comp1}, \, \eqref{comp3}}{\lesssim} \sigma M_\eps .
\end{split}
\end{equation*}
By the Fubini theorem, there exists a radius~$r\in (a + \sigma, \, b - 3\sigma)$ and a (non-relabelled) subsequence~$\eps\to 0$ such that
\begin{equation} \label{comp4}
\begin{split}
&\int_{\partial B_r} \left(\abs{\nabla u^*_\eps}^2 + \abs{\nabla(\varphi_\eps * u_\eps)}^2 + \frac{1}{\eps^2} \psi_b(\varphi_\eps * u_\eps) \right)\, \mathrm{d}\mathscr{H}^2 \lesssim M_\eps
\end{split}
\end{equation}
By a similar argument, we may also assume that
\begin{equation} \label{comp5}
\begin{split}
\int_{\partial B_r} \abs{\varphi_\eps * u_\eps - u^*_\eps}^2 \,\mathrm{d}\mathscr{H}^2 \lesssim \eps^2 M_\eps + \frac{1}{\sigma} \int_{A^\prime} \abs{ u^*_\eps - u_\eps}^2 .
\end{split}
\end{equation}
Due to~\eqref{hp:interp1} and~\eqref{hp:interp2}, we have~$ u^*_\eps - u_\eps \to u^* - u$ in~$L^2(A^\prime)$ and~$ u^* = u$ in~$A^\prime$, so the right-hand side of~\eqref{comp5} tends to zero. Let
\begin{equation} \label{def_lambda}
\lambda_\eps := \left(\eps^2 M_\eps + \frac{1}{\sigma} \int_{A^\prime} \abs{u^*_\eps - u_\eps}^2\right)^{1/4} > 0 .
\end{equation}
We have~$\lambda_\eps\to 0$ as~$\eps\to 0$. Moreover, for this choice of~$\lambda_\eps$, the assumptions of Lemma~\ref{lemma:interpolation} are satisfied for~$\eps$ small enough. By applying Lemma~\ref{lemma:interpolation}, we construct a map $w_\eps\in H^1(B_{r} \setminus B_{r - \lambda_\eps}, \, \QQ_0)$ such that $w_\eps(x) = (\varphi_\eps * u_\eps)(x)$ for~$x\in\partial B_r$, $w_\eps(x) = u^*_\eps(r x/(r - \lambda_\eps))$ for~$x\in\partial B_{r - \lambda_\eps}$, and
\begin{equation} \label{comp6}
\begin{split}
&\int_{B_r\setminus B_{r-\lambda_\eps}} \left(\abs{\nabla w_\eps}^2 + \frac{1}{\eps^2} \psi_b(w_\eps)\right) \lesssim \lambda_\eps M_\eps + \frac{1}{\lambda_\eps} \left(\eps^2 M_\eps + \frac{1}{\sigma} \int_{A^\prime} \abs{ u^*_\eps - u_\eps}^2\right)
\end{split}
\end{equation}
and
\begin{equation} \label{competitorbulk}
\frac{1}{\eps^2} \int_{B_r\setminus B_{r-\lambda_\eps}} \psi_b(w_\eps) \lesssim \frac{\lambda_\eps}{\eps^2} \int_{\partial B_r} \psi_b(w_\eps) \, \mathrm{d}\mathscr{H}^2 \lesssim \lambda_\eps M_\eps .
\end{equation}
The right-hand sides of~\eqref{comp6}, \eqref{competitorbulk} converge to zero as~$\eps\to 0$. Finally, we take~$t\in (r + 2\sigma, \, b - \sigma)$ and we define
\begin{equation} \label{def_xi}
\xi_\eps(x) :=
\begin{cases}
u_\eps(x) & \textrm{if } x\in\mathbb{R}^3\setminus B_t \\
(\varphi_\eps * u_\eps)(x) & \textrm{if } x\in B_t\setminus B_r \\
w_\eps(x) & \textrm{if } x\in B_r\setminus B_{r - \lambda_\eps}\\
u^*_\eps\left(\dfrac{rx}{r-\lambda_\eps}\right) & \textrm{if } x\in B_{r - \lambda_\eps}
\end{cases}
\end{equation}
By construction, $\xi_\eps=u_\eps$ out of~$B_t$.
Moreover, a routine computation, based on~\eqref{comp6}, shows that $\xi_\eps\to u^*$ strongly in~$H^1(B_r)$.
\end{step}

\medskip
\begin{step}[Bounds on~$\norm{\xi_\eps - u_\eps}_{L^2(B_t)}$]
By construction, we have
\begin{equation*}
\begin{split}
\norm{\xi_\eps - u_\eps}_{L^2(B_t\setminus B_r)} \leq \norm{\varphi_\eps * u_\eps - u_\eps}_{L^2(A^\prime)} \stackrel{\eqref{comp2}}{\lesssim} \eps \sigma^{1/2} M_\eps^{1/2} .
\end{split}
\end{equation*}
On the other hand,
\begin{equation*}
\begin{split}
\norm{\xi_\eps - u_\eps}_{L^2(B_r)} \lesssim \norm{w_\eps - \tau_\eps u^*_\eps}_{L^2(B_r\setminus B_{r-\lambda_\eps})} + \norm{\tau_\eps u^*_\eps - u^*_\eps}_{L^2(B_r)} + \norm{ u^*_\eps - u_\eps}_{L^2(B_r)}
\end{split}
\end{equation*}
where~$\tau_\eps u^*_\eps(x) := u^*_\eps(rx/(r - \lambda_\eps))$. We recall that $\norm{ u^*_\eps - u_\eps}_{L^2(B_{1/2})}\leq M_\eps^{1/2}$. The norm of~$w_\eps - \tau_\eps u^*_\eps$ can be estimated using the Poincar\'e inequality and~\eqref{comp6}, because we know that~$w_\eps = \tau_\eps u^*_\eps$ on~$\partial B_{r-\lambda_\eps}$:
\begin{equation*}
\begin{split}
\norm{w_\eps - \tau_\eps u^*_\eps} _{L^2(B_r\setminus B_{r-\lambda_\eps})} &\lesssim \lambda_\eps \norm{\nabla(w_\eps - \tau_\eps u^*_\eps)} _{L^2(B_r\setminus B_{r-\lambda_\eps})} \lesssim \left(\lambda_\eps + \frac{\lambda_\eps^{1/2}}{\sigma^{1/2}}\right) M_\eps^{1/2} .
\end{split}
\end{equation*}
A classical argument (see e.g.
\cite[Lemma~7.23]{gilbarg2015elliptic} for an analogous result) gives
\begin{equation*}
\begin{split}
\norm{\tau_\eps u^*_\eps - u^*_\eps}_{L^2(B_r)} \lesssim \lambda_\eps \norm{\nabla u^*_\eps}_{L^2(B_r)} \lesssim \lambda_\eps M_\eps^{1/2} .
\end{split}
\end{equation*}
Combining the inequalities above, and recalling that~$\lambda_\eps\to 0$, we obtain
\begin{equation} \label{compL2}
\norm{\xi_\eps - u_\eps}_{L^2(B_t)}\lesssim M_\eps^{1/2} .
\end{equation}
As a consequence, we deduce
\begin{equation*}
\begin{split}
\inf_{\zeta\in\mathbb{R}^m} \norm{\xi_\eps - \zeta}_{L^2(B_t)} &\leq \norm{\xi_\eps - u_\eps}_{L^2(B_t)} + \inf_{\zeta\in\mathbb{R}^m} \norm{u_\eps - \zeta}_{L^2(B_t)}\\
&\lesssim M_\eps^{1/2} + \inf_{\zeta\in\mathbb{R}^m} \norm{u_\eps - \zeta}_{L^2(B_t)}
\end{split}
\end{equation*}
and hence, by Proposition~\ref{prop:Poincare},
\begin{equation} \label{compL2bis}
\begin{split}
\inf_{\zeta\in\mathbb{R}^m} \norm{\xi_\eps - \zeta}_{L^2(B_t)} &\lesssim M_\eps^{1/2} + F_\eps(u_\eps, \, B_1)^{1/2} \lesssim M_\eps^{1/2} .
\end{split}
\end{equation}
\end{step}

\medskip
\begin{step}[Bounds on~$F_\eps(\xi_\eps, \, B_t)$]
Since~$K_\eps$ is an even function (see~\eqref{hp:K_even}), we have
\begin{equation} \label{interp7}
\begin{split}
F_\eps^\nl(\xi_\eps, \, B_t) &\leq F_\eps^\nl(\xi_\eps, \, B_r) + F_\eps^\nl(\xi_\eps, \, B_t\setminus B_r) + \Gamma_\eps(\xi_\eps, \, B_r, \, B_t\setminus B_r) \\
&\leq F_\eps^\nl(\xi_\eps, \, B_r) + F_\eps^\nl(\varphi_\eps* u_\eps, \, A^\prime) + \Gamma_\eps(\xi_\eps, \, B_r, \, B_t\setminus B_r)
\end{split}
\end{equation}
where~$\Gamma_\eps(\xi_\eps, \, B_r, \, B_t\setminus B_r)$ is defined as in~\eqref{locdefect}. Lemma~\ref{lemma:glue} implies
\begin{equation} \label{interp8}
\begin{split}
F_\eps^\nl(\varphi_\eps* u_\eps, \, A^\prime) &\lesssim F_\eps^\nl(u_\eps, \, A^\prime) + \frac{1}{\eps^2} \norm{\varphi_\eps *u_\eps - u_\eps}^2_{L^2(A^\prime)} \stackrel{\eqref{comp1}, \ \eqref{comp2}}{\lesssim} \sigma M_\eps .
\end{split}
\end{equation}
Now, we estimate~$\Gamma_\eps(\xi_\eps, \, B_r, \, B_t\setminus B_r)$. By construction, we have $a + \sigma < r < t < b - \sigma$ and hence $\partial_\sigma B_r\subseteq A^\prime$.
We apply Lemma~\ref{lemma:locdefect}, Lemma~\ref{lemma:glueH1} and~\eqref{compL2bis}:
\begin{equation} \label{interp9}
\begin{split}
\Gamma_\eps(\xi_\eps, \, B_r, \, B_t\setminus B_r) &\lesssim F_\eps^\nl(\xi_\eps, \, \partial_{\sigma/2} B_r) + \mathrm{o}\!\left(\frac{\eps^{q - 2}}{\sigma^q}\right) \inf_{\zeta\in\mathbb{R}^m} \norm{\xi_\eps - \zeta}_{L^2(B_t)}^2 \\
&\lesssim \int_{\partial_{\sigma} B_r} \abs{\nabla\xi_\eps}^2 + \mathrm{o}\!\left(\frac{\eps^{q - 2}}{\sigma^q}\right) \inf_{\zeta\in\mathbb{R}^m} \norm{\xi_\eps - \zeta}_{L^2(B_t)}^2 \\
&\lesssim \int_{\partial_{\sigma} B_r} \abs{\nabla\xi_\eps}^2 + \mathrm{o}\!\left(\frac{\eps^{q - 2}M_\eps}{\sigma^q}\right)
\end{split}
\end{equation}
The gradient term at the right-hand side can be further estimated by~\eqref{comp1}:
\begin{equation} \label{interp9.5}
\begin{split}
\int_{\partial_{\sigma} B_r} \abs{\nabla\xi_\eps}^2 \lesssim \int_{B_r\setminus B_{r-\sigma}} \abs{\nabla\xi_\eps}^2 + \int_{A^\prime} \abs{\nabla(\varphi_\eps * u_\eps)}^2 \lesssim \int_{B_r\setminus B_{r-\sigma}} \abs{\nabla\xi_\eps}^2 + \sigma M_\eps
\end{split}
\end{equation}
Combining~\eqref{interp7}, \eqref{interp8}, \eqref{interp9} and~\eqref{interp9.5}, we obtain
\begin{equation} \label{interp10}
\begin{split}
F^\nl_\eps(\xi_\eps, \, B_t) \leq F^\nl_\eps(\xi_\eps, \, B_r) &+ \int_{B_r\setminus B_{r-\sigma}} \abs{\nabla\xi_\eps}^2 + \sigma M_\eps + \mathrm{o}\!\left(\frac{\eps^{q - 2} M_\eps}{\sigma^q}\right)
\end{split}
\end{equation}
We estimate the local term of the energy, i.e.~the integral of~$\psi_b(\xi_\eps)$ in~$B_t$. By construction, $\xi_\eps$ restricted to~$B_{r-\lambda_\eps}$ takes its values in~$\NN$. As a consequence,
\begin{equation} \label{interp11}
\begin{split}
\frac{1}{\eps^2} \int_{B_t} \psi_b(\xi_\eps) &= \frac{1}{\eps^2} \int_{B_t\setminus B_r} \psi_b(\varphi_\eps * u_\eps) + \frac{1}{\eps^2} \int_{B_r \setminus B_{r - \lambda_\eps}}\psi_b(w_\eps) \stackrel{\eqref{comp3}, \ \eqref{competitorbulk}}{\lesssim} \sigma M_\eps
\end{split}
\end{equation}
\end{step}

\medskip
\begin{step}[Bounds on~$\Gamma_\eps(\xi_\eps, \, B_t, \, \mathbb{R}^3\setminus B_t)$]
By construction, $\xi_\eps = u_\eps$ out of~$B_t$.
Then, Lemma~\ref{lemma:diffGamma} implies
\begin{equation*}
\begin{split}
\Gamma_\eps(\xi_\eps, \, B_t, \, \mathbb{R}^3\setminus B_t) - \Gamma_\eps(u_\eps, \, B_t, \, \mathbb{R}^3\setminus B_t) &\lesssim F^\nl_\eps(\xi_\eps, \, \partial_\sigma B_t) + \mathrm{o}\!\left(\frac{\eps^{q-2}}{\sigma^q}\right) \norm{\xi_\eps - u_\eps}_{L^2(B_t)}
\end{split}
\end{equation*}
Reasoning as in~\eqref{interp8}, and applying~\eqref{comp1}, we deduce
\[
F^\nl_\eps(\xi_\eps, \, \partial_\sigma B_t) \lesssim \sigma M_\eps
\]
Then, due to~\eqref{compL2}, we conclude that
\begin{equation} \label{interp12}
\begin{split}
&\Gamma_\eps(\xi_\eps, \, B_t, \, \mathbb{R}^3\setminus B_t) - \Gamma_\eps(u_\eps, \, B_t, \, \mathbb{R}^3\setminus B_t) \lesssim \sigma M_\eps + \mathrm{o}\!\left(\frac{\eps^{q-2} M_\eps^{1/2}}{\sigma^q}\right)
\end{split}
\end{equation}
Combining~\eqref{interp10}, \eqref{interp11} and~\eqref{interp12}, we obtain the estimate~(iii) in the statement of the lemma, and the proof is complete. \qedhere
\end{step}
\end{proof}

\begin{remark} \label{rk:nonlocinterp}
In addition to~\eqref{hp:interp3}, \eqref{hp:interp1} and~\eqref{hp:interp2}, suppose there exist points~$\zeta_\eps\in\NN$, a positive sequence $\eta_\eps\to 0$ and maps~$v\in H^1(B_{1/2}, \, \mathbb{R}^m)$, $v^*\in H^1(B_{1/2}, \, \mathbb{R}^m)$ such that~$v = v^*$ out of~$B_s$, $M_\eps\lesssim \eta^2_\eps$ and
\[
\frac{u_\eps - \zeta_\eps}{\eta_\eps} \to v \ \textrm{strongly in } L^2(B_{1/2}), \quad \frac{u^*_\eps - \zeta_\eps}{\eta_\eps} \to v^* \ \textrm{strongly in } H^1(B_{1/2})
\]
as~$\eps\to 0$. Then, the same sequence~$\xi_\eps$ constructed above satisfies
\[
\frac{\xi_\eps - \zeta_\eps}{\eta_\eps} \to v^* \qquad \textrm{strongly in } L^2(B_r),
\]
\emph{so long as} we choose~$\lambda_\eps$ in a suitable way (see in particular Equations~\eqref{def_xi} and~\eqref{comp6}). For instance, instead of~\eqref{def_lambda}, we may take
\[
\lambda_\eps := \left(\eps^2 \frac{M_\eps}{\eta^2_\eps} + \frac{1}{\sigma \eta_\eps^2} \int_{A^\prime} \abs{u^*_\eps - u_\eps}^2\right)^{1/4}
\]
\end{remark}

\section{Proof of the main results}

\subsection{A compactness result for $\omega$-minimisers}

The goal of this section is to prove a compactness result for minimisers of~$E_\eps$, subject to variable ``boundary conditions'', as~$\eps\to 0$. For later convenience, we state our result in terms of ``almost minimisers'' --- or, more precisely, $\omega$-minimisers, as defined below. This will be useful to study variants of our original minimisation problem, as we will do in Section~\ref{sect:bd}.

\begin{definition} \label{def:omegamin}
Let~$\Omega\subseteq\mathbb{R}^3$ be a bounded domain.
Let~$\omega\colon [0, \, +\infty)\to [0, \, +\infty)$ be an increasing function such that $\omega(s) \to 0$ as~$s\to 0$. We say that a function~$u\in L^\infty(\mathbb{R}^3, \, \QQ)$ is an $\omega$-minimiser of~$E_\eps$ in~$\Omega$ if, for any ball~$B_\rho(x_0)\subseteq\Omega$ and any~$v\in L^\infty(\mathbb{R}^3, \, \QQ)$ such that~$v = u$ a.e.~on~$\mathbb{R}^3\setminus B_\rho(x_0)$, there holds
\[
E_\eps(u) \leq E_\eps(v) + \omega(\eps) \, \rho.
\]
\end{definition}

By definition, a minimiser for~$E_\eps$ in the class~$\mathscr{A}$ defined by~\eqref{A} is also an~$\omega$-minimiser in~$\Omega$, for any~$\omega\geq 0$. Moreover, $\omega$-minimisers behave nicely with respect to scaling. Given~$u\in L^\infty(\mathbb{R}^3, \, \QQ)$, an increasing function~$\omega\colon [0, \, +\infty)\to [0, \, +\infty)$, $x_0\in\mathbb{R}^3$ and~$\rho>0$, we define $u_\rho\colon\mathbb{R}^3\to\QQ$ and~$\omega_\rho\colon [0, \, +\infty)\to [0, \, +\infty)$ as~$u_\rho(y) := u(x_0 + \rho y)$ for~$y\in\mathbb{R}^3$ and~$\omega_\rho(s) := \omega(\rho s)$ for~$s\geq 0$, respectively. A scaling argument implies the following.

\begin{lemma} \label{lemma:scaleomega}
If~$u$ is an~$\omega$-minimiser for~$E_\eps$ in a bounded domain~$\Omega\subseteq\mathbb{R}^3$, then~$u_\rho$ is an~$\omega_\rho$-minimiser for~$E_{\eps/\rho}$ in~$(\Omega - x_0)/\rho$.
\end{lemma}

\begin{proposition} \label{prop:compactness}
Let~$\Omega\subseteq\mathbb{R}^3$ be a bounded domain. Let~$\omega\colon [0, \, +\infty)\to [0, \, +\infty)$ be an increasing function such that $\omega(s) \to 0$ as~$s\to 0$. Let~$u_\eps$ be a sequence of $\omega$-minimisers of~$E_\eps$ in~$\Omega$. Suppose that there exists an open, convex set~$\QQ_0\subset\!\subset\QQ$ such that~$u_\eps(x)\in\QQ_0$ for any~$\eps>0$ and a.e.~$x\in\Omega$. Let~$B_\rho(x_0)\subseteq\Omega$ be a ball such that $\sup_{\eps>0} F_\eps(u_\eps, \, B_\rho(x_0))<+\infty$. Then, up to extraction of a non-relabelled subsequence, $u_\eps$ converge $L^2(B_{\rho/2}(x_0))$-strongly to a map~$u_0\in H^1(B_{\rho/2}(x_0), \, \NN)$, which minimises the functional
\[
w\in H^1(B_{\rho/2}(x_0), \, \NN) \mapsto \int_{B_{\rho/2}(x_0)} L\nabla w \cdot \nabla w
\]
subject to its own boundary conditions. Moreover, for any~$s\in(0, \, \rho/2)$ there holds
\begin{equation} \label{strongconv}
\lim_{\eps\to 0} F_\eps(u_\eps, \, B_s(x_0)) = \int_{B_s(x_0)} L\nabla u_0\cdot\nabla u_0 .
\end{equation}
\end{proposition}

Proposition~\ref{prop:compactness} differs from the results in~\cite{taylor2018oseen} in that no ``boundary condition'' is prescribed: each~$u_\eps$ minimises the functional~$E_\eps$ (possibly up to a small error, which is quantified by the function~$\omega$) subject to its own ``boundary condition''.
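Such small errors arise naturally from lower-order perturbations of the energy. For instance (a model situation; the perturbation~$P_\eps$ is introduced here only for illustration): if~$u$ minimises a perturbed functional~$E_\eps + P_\eps$ among all~$v\in L^\infty(\mathbb{R}^3, \, \QQ)$ with~$v = u$ a.e.~on~$\mathbb{R}^3\setminus B_\rho(x_0)$, for every ball~$B_\rho(x_0)\subseteq\Omega$, and the perturbation~$P_\eps$ satisfies
\[
\abs{P_\eps(v) - P_\eps(u)} \leq \omega(\eps)\,\rho \qquad \textrm{whenever } v = u \ \textrm{a.e.~on } \mathbb{R}^3\setminus B_\rho(x_0),
\]
then~$u$ is an~$\omega$-minimiser of~$E_\eps$ in~$\Omega$, because
\[
E_\eps(u) \leq E_\eps(v) + P_\eps(v) - P_\eps(u) \leq E_\eps(v) + \omega(\eps)\,\rho
\]
for any such competitor~$v$.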
\begin{proof}[Proof of Proposition~\ref{prop:compactness}]
If~$u$ is an~$\omega$-minimiser of~$E_\eps$ in~$\Omega$ and~$B_\rho(x_0)\subseteq\Omega$, then~$u_\rho$ is an~$\omega_\rho$-minimiser for~$E_{\eps/\rho}$ in~$(\Omega - x_0)/\rho$. Since we have assumed that~$\Omega$ is bounded, the radius~$\rho$ is bounded too --- say, $\rho\leq R_0$, where~$R_0$ depends only on~$\Omega$. The function~$\omega$ is increasing, so~$\omega_\rho(s) = \omega(\rho s) \leq\omega(R_0 s) =: \omega_0(s)$. As a consequence, $u_\rho$ is also an~$\omega_0$-minimiser. Since~$\omega_0$ is independent of~$\rho$, by a scaling argument (see Equation~\eqref{scaling}) we may assume without loss of generality that~$\rho = 1$ and~$x_0 = 0$.

Let~$\varphi_\eps\in C^{\infty}_{\mathrm{c}}(\mathbb{R}^3)$ be defined as in Section~\ref{sect:Poincare}. Lemma~\ref{lemma:mollify-Morrey} and Lemma~\ref{lemma:mollify-L2} imply that
\begin{equation*}
\int_{B_{1/2}} \abs{\nabla(\varphi_\eps * u_\eps)}^2 \lesssim F_\eps(u_\eps, \, B_1), \qquad \int_{B_{1/2}} \abs{\varphi_\eps * u_\eps - u_\eps}^2 \lesssim \eps^2 F_\eps(u_\eps, \, B_1)
\end{equation*}
for~$\eps$ small enough. Since~$F_\eps(u_\eps, \, B_1)$ is bounded, we can extract a (non-relabelled) subsequence so that $\varphi_\eps * u_\eps\rightharpoonup u_0$ weakly in~$H^1(B_{1/2})$ and $u_\eps\to u_0$ strongly in~$L^2(B_{1/2})$. The map~$u_0$ takes its values in~$\NN$, because
\[
\int_{B_{1/2}} \psi_b(u_0) \leq \liminf_{\eps\to 0} \int_{B_{1/2}} \psi_b(u_\eps) \leq \liminf_{\eps\to 0} \, \eps^2 F_\eps(u_\eps, \, B_1) = 0
\]
by the Fatou lemma. We must show that
\begin{equation} \label{comp-min}
\int_{B_{1/2}} L\nabla u_0\cdot\nabla u_0 \leq \int_{B_{1/2}} L\nabla v\cdot\nabla v
\end{equation}
for any~$v\in H^1(B_{1/2}, \, \NN)$ such that $v = u_0$ on~$\partial B_{1/2}$. By an approximation argument, it suffices to prove~\eqref{comp-min} in case~$v = u_0$ in a neighbourhood of~$\partial B_{1/2}$. Therefore, we fix~$s\in (0, \, 1/2)$ and we take a map~$v\in H^1(B_{1/2}, \, \NN)$ such that $v = u_0$ on~$B_{1/2}\setminus\bar{B}_s$. The map~$v$ is not an admissible competitor for~$u_\eps$, because in general~$u_0 \neq u_\eps$ on~$\mathbb{R}^3\setminus B_1$. However, we may construct suitable competitors by applying Lemma~\ref{lemma:nonlocinterp}. We can indeed do so, because we have assumed that all the~$u_\eps$'s take their values in an open, convex set~$\QQ_0\subset\!\subset\QQ$. Let~$\sigma\in (0, \, 1/10)$.
By applying Lemma~\ref{lemma:nonlocinterp} (with~$u_\mathbf{e}ps^* = v$ for any~$\mathbf{e}ps$), we find maps~$\xi_\mathbf{e}ps\in L^\infty(\mathbf{m}athbb{R}^3, \, \mathbf{Q}Q_0)$ and radii~$r$, $t$ with~$\mathbf{m}ax(s, \, 1/4) < r < t < 1/2$, so that $\xi_\mathbf{e}ps = u_\mathbf{e}ps$ a.e.~in~$\mathbf{m}athbb{R}^3\setminus B_t$, $\xi_\mathbf{e}ps\to v$ strongly in~$H^1(B_r)$ and \begin{equation*} \begin{split} F_\mathbf{e}ps(\xi_\mathbf{e}ps, \, B_t) &+ \Gamma_\mathbf{e}ps(\xi_\mathbf{e}ps, \, B_t, \, \mathbf{m}athbb{R}^3\setminus B_t) - \Gamma_\mathbf{e}ps(u_\mathbf{e}ps, \, B_t, \, \mathbf{m}athbb{R}^3\setminus B_t) \\ &\leq F_\mathbf{e}ps^\mathbf{m}athbf{n}l(\xi_\mathbf{e}ps, \, B_r) + C\int_{B_r\setminus B_{r-\sigma}}\abs{\mathbf{m}athbf{n}abla\xi_\mathbf{e}ps}^2 + C\sigma + \mathrm{o}\!\left(\frac{\mathbf{e}ps^{q-2}}{\sigma^q}\right) \mathbf{e}nd{split} \mathbf{e}nd{equation*} (where~$q>7/2$ is given by~\mathbf{e}qref{hp:g_decay}). Since~$u_\mathbf{e}ps$ is an~$\mathrm{o}mega$-minimiser for~$E_\mathbf{e}ps$ and~$\xi_\mathbf{e}ps = u_\mathbf{e}ps$ out of~$B_t$, we have \begin{equation*} \begin{split} F_\mathbf{e}ps(u_\mathbf{e}ps, \, B_t) + \Gamma_\mathbf{e}ps(u_\mathbf{e}ps, \, B_t, \, \mathbf{m}athbb{R}^3\setminus B_t) &\leq F_\mathbf{e}ps(\xi_\mathbf{e}ps, \, B_t) + \Gamma_\mathbf{e}ps(\xi_\mathbf{e}ps, \, B_t, \, \mathbf{m}athbb{R}^3\setminus B_t) + \mathrm{o}mega(\mathbf{e}ps) \mathbf{e}nd{split} \mathbf{e}nd{equation*} and hence, \begin{equation} \label{comp7} \begin{split} F_\mathbf{e}ps(u_\mathbf{e}ps, \, B_t) &\leq F_\mathbf{e}ps^\mathbf{m}athbf{n}l(\xi_\mathbf{e}ps, \, B_r) + C\int_{B_r\setminus B_{r-\sigma}}\abs{\mathbf{m}athbf{n}abla\xi_\mathbf{e}ps}^2 + C\sigma + \mathrm{o}\!\left(\frac{\mathbf{e}ps^{q-2}}{\sigma^q}\right) + \mathrm{o}mega(\mathbf{e}ps) \mathbf{e}nd{split} \mathbf{e}nd{equation} We apply Proposition~\ref{prop:Gamma-liminf} and Proposition~\ref{prop:Gamma-limsup} to pass to the limit in both sides of~\mathbf{e}qref{comp7}, first as~$\mathbf{e}ps\to 0$, then as~$\sigma\to 0$. We obtain \begin{equation*} \int_{B_r} L\mathbf{m}athbf{n}abla u_0{\mathrm{c}}dot\mathbf{m}athbf{n}abla u_0 \leq \int_{B_r} L\mathbf{m}athbf{n}abla v{\mathrm{c}}dot\mathbf{m}athbf{n}abla v \mathbf{e}nd{equation*} which implies~\mathbf{e}qref{comp-min}. In case~$v = u_0$, the same argument shows that \begin{equation} \label{comp-lim1} \begin{split} \limsup_{\mathbf{e}ps\to 0} F_\mathbf{e}ps(u_\mathbf{e}ps, \, B_r) \leq \int_{B_r} L\mathbf{m}athbf{n}abla u_0{\mathrm{c}}dot\mathbf{m}athbf{n}abla u_0 \mathbf{e}nd{split} \mathbf{e}nd{equation} On the other hand, Proposition~\ref{prop:Gamma-liminf} implies \begin{equation} \label{comp-lim2} \begin{split} \int_{G} L\mathbf{m}athbf{n}abla u_0{\mathrm{c}}dot\mathbf{m}athbf{n}abla u_0 \leq \liminf_{\mathbf{e}ps\to 0} F_\mathbf{e}ps(u_\mathbf{e}ps, \, G) \qquad \textrm{for any open set } G\subseteq B_{1/2} \mathbf{e}nd{split} \mathbf{e}nd{equation} Combining~\mathbf{e}qref{comp-lim1} with~\mathbf{e}qref{comp-lim2}, we deduce~\mathbf{e}qref{strongconv}. 
} \end{proof} \subsection{A decay lemma for~$F_\eps$} The aim of this section is to prove a decay property for the localised energy~$F_\eps$: \begin{lemma} \label{lemma:decay} {\BBB There exist~$\eta>0$, $\theta\in (0, \, 1/2)$ and~$\eps_*>0$ such that, for any ball~$B_{\rho}(x_0)\subseteq\Omega$, any~$\eps\in (0, \, \eps_*\rho)$ and any minimiser~$u_\eps$ of~$E_\eps$ in~$\mathscr{A}$ such that \[ F_\eps(u_\eps, \, B_{\rho}(x_0)) \leq \eta^2 \rho, \] there holds \begin{equation} \label{decay} \begin{split} & F_\eps(u_\eps, \, B_{\theta\rho}(x_0)) \leq \frac{\theta}{2} F_\eps(u_\eps, \, B_{\rho}(x_0)) + \left(\frac{\eps}{\rho}\right)^{2q-4} \rho. \end{split} \end{equation} } \end{lemma} {\BBB Compared with analogous results in the regularity theory for Oseen-Frank minimisers --- see, for instance, Proposition~1 in~\cite{Luckhaus} or Theorem~2.4 in~\cite{HKL} --- the estimate~\eqref{decay} contains an extra term at the right-hand side. This additional term controls the contributions from the `locality defect'~$\Gamma_\eps$, defined by~\eqref{locdefect} (cf. Lemma~\ref{lemma:diffGamma} and Remark~\ref{rk:diffGamma}). However, this term will introduce additional issues in the proof of Theorem~\ref{th:Holder}, which we are only able to resolve in case~$q>7/2$.} As a first step towards the proof of Lemma~\ref{lemma:decay}, we check that the limit tensor~$L$ (defined by~\eqref{L}) is elliptic. \begin{proposition} \label{prop:L} There exists a constant~$\lambda>1$ such that \begin{equation*} \lambda^{-1} |\xi|^2 \leq L\xi\cdot \xi \leq \lambda|\xi|^2 \end{equation*} for all $\xi\in\mathbb{R}^{m\times 3}$. \end{proposition} \begin{proof} The upper bound follows immediately, as \begin{equation*} \begin{split} 4L\xi\cdot \xi = \int_{\mathbb{R}^3}K(z)(\xi z)\cdot (\xi z)\,\mathrm{d} z \lesssim \int_{\mathbb{R}^3}g(z)|\xi z|^2\,\mathrm{d} z \lesssim \left(\int_{\mathbb{R}^3}g(z)|z|^2\,\mathrm{d} z\right)|\xi|^2 \end{split} \end{equation*} and the constant at the right-hand side is finite, due to~\eqref{hp:g_decay}. For the lower bound, recall that $g$ is non-negative and satisfies $g(z)\geq k$ for $\rho_1<|z|<\rho_2$.
Then we have that \begin{equation*} \begin{split} 4L\xi{\mathrm{c}}dot \xi = \int_{\mathbf{m}athbb{R}^3} K_{ij}(z) z_\alpha z_\beta \, \xi_{i,\alpha} \, \xi_{j,\beta}\,\mathrm{d} z &\geq \int_{\mathbf{m}athbb{R}^3} g(z)z_\alpha z_\beta \, \xi_{i,\alpha} \, \xi_{i,\beta}\,\mathrm{d} z \\ &\geq k \int_{B_{\rho_2}\setminus B_{\rho_1}}z_\alpha z_\beta\,\mathrm{d} z \, \xi_{i,\alpha} \, \xi_{i,\beta} \mathbf{e}nd{split} \mathbf{e}nd{equation*} We may evaluate the inner integral as \begin{equation*} \begin{split} \int_{B_{\rho_2}\setminus B_{\rho_1}} z_\alpha z_\beta\,\mathrm{d} z =& \int_{\rho_1}^{\rho_2} \int_{\mathbf{m}athbb{S}^2} r^2p_\alpha p_\beta \,\mathrm{d} p\,\mathrm{d} r\\ =&\int_{\rho_1}^{\rho_2}r^2\,\mathrm{d} r \int_{\mathbf{m}athbb{S}^2} p_\alpha p_\beta \,\mathrm{d} p\\ =& \left(\frac{\rho_2^3-\rho_1^3}{3}\right)\frac{4\mathbf{p}i}{3}\mathrm{d}elta_{\alpha\beta} \mathbf{e}nd{split} \mathbf{e}nd{equation*} This gives a lower bound on the bilinear form as \begin{equation*} L\xi{\mathrm{c}}dot \xi \geq \frac{k\mathbf{p}i(\rho_2^3-\rho_1^3)}{9}\mathrm{d}elta_{\alpha\beta}\xi_{i\alpha}\xi_{i\beta}=\frac{k\mathbf{p}i(\rho_2^3-\rho_1^3)}{9}|\xi|^2 \qedhere \mathbf{e}nd{equation*} \mathbf{e}nd{proof} {\BBB We will prove Lemma~\ref{lemma:decay} by blow-up, adapting Luckhaus' arguments from~{\mathrm{c}}ite{Luckhaus}. By a scaling argument (see~\mathbf{e}qref{scaling}), we may assume without loss of generality that $x_0=0$ and~$\rho=1$. Suppose, towards a contradiction, that Lemma~\ref{lemma:decay} does not hold. Then, for any choice of the parameter~$\theta\in (0, \, 1/2)$, we find a sequence~$\mathbf{e}ps_j\to 0$ and minimisers~$u_j$ such that \begin{gather} \mathbf{e}ta_j^2 := F_{\mathbf{e}ps_j}(u_j, \, B_1) \to 0 \qquad \textrm{as } j\to+\infty, \label{smallenergy} \\ F_{\mathbf{e}ps_j} (u_j, \, B_\theta) > \frac{\theta\mathbf{e}ta_j^2}{2} + \mathbf{e}ps_j^{2q-4} \label{nodecay} \mathbf{e}nd{gather} We denote by~$\mathbf{m}athrm{T}_\zeta\mathbf{m}athbb{N}N$ the tangent space to the manifold~$\mathbf{m}athbb{N}N$ at a point~$\zeta\in\mathbf{m}athbb{N}N$ (regarded as a linear subspace of~$\mathbf{m}athbb{R}^m$, i.e.~$0\in\mathbf{m}athrm{T}_{\zeta}\mathbf{m}athbb{N}N$). We recall that, for any point~$y\in\mathbf{m}athbb{R}^m$ sufficiently close to the manifold~$\mathbf{m}athbb{N}N$, there exists a unique point~$\mathbf{p}i(y)\in\mathbf{m}athbb{N}N$ such that~$\mathrm{d}ist(y, \, \mathbf{m}athbb{N}N) = \abs{y - \mathbf{p}i(y)}$. Moreover, there exists a constant~$\mathrm{d}elta_*(\mathbf{m}athbb{N}N) > 0$ such that the map~$y \mathbf{m}apsto \mathbf{p}i(y)$ is smooth (with bounded derivatives) in \begin{equation} \label{nearestproj} \mathbf{m}athscr{U} :=\{z\in\mathbf{m}athbb{R}^m{\mathrm{c}}olon \mathrm{d}ist(z, \, \mathbf{m}athbb{N}N)\leq \mathrm{d}elta_*(\mathbf{m}athbb{N}N)\} \mathbf{e}nd{equation} (see e.g.~{\mathrm{c}}ite[Section~2.12.3]{Simon-Harmonic}). \begin{lemma} \label{lemma:average} For any~$j\in\mathbf{m}athbb{N}$ large enough, there exists a constant~$\zeta_j\in\mathbf{m}athbb{N}N$ and a measurable set~$G_j\subseteq B_{1/2}$ such that \begin{gather} \int_{B_{1/2}} \abs{u_j(x) - \zeta_j}^2 \, \mathrm{d} x \lesssim \mathbf{e}ta_j^2 \label{average1} \\ \mathbf{e}ta_j^{-1} \int_{B_{1/2}\setminus G_j} \mathrm{d}ist(u_j(x) - \zeta_j, \, \mathbf{m}athrm{T}_{\zeta_j}\mathbf{m}athbb{N}N) \, \mathrm{d} x \to 0 \label{average2} \mathbf{e}nd{gather} and~$\abs{G_j}\to 0$ as~$j\to+\infty$. \mathbf{e}nd{lemma} \begin{proof} Let~$\tilde{\zeta}_j := \fint_{B_{1/2}} u_j$. 
By the Poincar\'e-type inequality, Proposition~\ref{prop:Poincare}, we have \begin{equation} \label{avrg1} \int_{B_{1/2}} \abs{u_j - \tilde{\zeta}_j}^2 \lesssim F_{\mathbf{e}ps_j}(u_j, \, B_1) \stackrel{\mathbf{e}qref{smallenergy}}{=} \mathbf{e}ta_j^2. \mathbf{e}nd{equation} On the other hand, Proposition~\ref{prop:physicality} and Lemma~\ref{lemma:psib} imply \begin{equation} \label{avrg2} \mathbf{p}si_b(\tilde{\zeta}_j) \lesssim \fint_{B_{1/2}} \mathbf{p}si_b(u_j) + \fint_{B_{1/2}} \abs{u_j - \tilde{\zeta}_j}^2 \stackrel{\mathbf{e}qref{avrg1}}{\lesssim} (\mathbf{e}ps_j^2 + 1) \mathbf{e}ta_j^2 \to 0. \mathbf{e}nd{equation} In particular, the distance between~$\tilde{\zeta}_j$ and~$\mathbf{m}athbb{N}N$ tends to zero as~$j\to+\infty$. Then, for~$j$ large enough, the projection~$\zeta_j := \mathbf{p}i(\tilde{\zeta}_j)\in\mathbf{m}athbb{N}N$ is well-defined. Moreover, recalling~\mathbf{e}qref{nondeg}, we have \begin{equation} \label{avrg3} \abs{\zeta_j - \tilde{\zeta}_j}^2 = \mathrm{d}ist^2(\tilde{\zeta}_j, \, \mathbf{m}athbb{N}N) \lesssim \mathbf{p}si_b(\tilde{\zeta}_j) \stackrel{\mathbf{e}qref{avrg2}}{\lesssim} \mathbf{e}ta_j^2. \mathbf{e}nd{equation} The estimate~\mathbf{e}qref{average1} follows from~\mathbf{e}qref{avrg1} and~\mathbf{e}qref{avrg3}. Let us consider the set \[ G_j := \left\{x\in B_{1/2}{\mathrm{c}}olon \mathrm{d}ist(u_j(x), \, \mathbf{m}athbb{N}N) \geq \mathrm{d}elta_*(\mathbf{m}athbb{N}N) \right\} \] where~$\mathrm{d}elta_*(\mathbf{m}athbb{N}N)$ is the constant from~\mathbf{e}qref{nearestproj}. On the set~$G_j$, the function~$\mathbf{p}si_b(u_j)$ is bounded from below by a strictly positive constant, which depends only on~$\mathrm{d}elta_*(\mathbf{m}athbb{N}N)$ and~$\mathbf{p}si_b$. Then, \[ \abs{G_j} \lesssim \int_{B_1} \mathbf{p}si_b(u_j) \lesssim \mathbf{e}ps_j^2 \, F_{\mathbf{e}ps_j}(u_j, \, B_1) = \mathbf{e}ps_j^2 \, \mathbf{e}ta_j^2 \to 0. \] Moreover, the projection~$\mathbf{p}i{\mathrm{c}}irc u_j$ is well-defined on~$B_{1/2}\setminus G_j$ and \[ \begin{split} \int_{B_{1/2}\setminus G_j} \mathrm{d}ist(u_j - \zeta_j, \, \mathbf{m}athrm{T}_{\zeta_j}\mathbf{m}athbb{N}N) &\leq \int_{B_{1/2}\setminus G_j} \mathrm{d}ist(\mathbf{p}i{\mathrm{c}}irc u_j - \zeta_j, \, \mathbf{m}athrm{T}_{\zeta_j}\mathbf{m}athbb{N}N) + \int_{B_{1/2}\setminus G_j} \abs{u_j - \mathbf{p}i{\mathrm{c}}irc u_j} \\ &\stackrel{\mathbf{e}qref{nondeg}}{\lesssim} \int_{B_{1/2}\setminus G_j} \mathrm{d}ist(\mathbf{p}i{\mathrm{c}}irc u_j - \zeta_j, \, \mathbf{m}athrm{T}_{\zeta_j}\mathbf{m}athbb{N}N) + \int_{B_{1}} \mathbf{p}si_b^{1/2}(u_j) \\ &\lesssim \int_{B_{1/2}\setminus G_j} \mathrm{d}ist(\mathbf{p}i{\mathrm{c}}irc u_j - \zeta_j, \, \mathbf{m}athrm{T}_{\zeta_j}\mathbf{m}athbb{N}N) + \mathbf{e}ps_j \, \mathbf{e}ta_j \\ \mathbf{e}nd{split} \] Therefore, in order to prove~\mathbf{e}qref{average2}, it suffices to show that \begin{equation} \label{average3} \mathbf{e}ta_j^{-1} \int_{B_{1/2}\setminus G_j} \mathrm{d}ist(\mathbf{p}i{\mathrm{c}}irc u_j - \zeta_j, \, \mathbf{m}athrm{T}_{\zeta_j}\mathbf{m}athbb{N}N) \to 0 \qquad \textrm{as } j\to+\infty. \mathbf{e}nd{equation} To this end, we fix a large number~$M > 0$ and we consider the sets \[ A_j := \left\{x\in B_{1/2}\setminus G_j{\mathrm{c}}olon \abs{(\mathbf{p}i{\mathrm{c}}irc u_j)(x) - \zeta_j} \leq M\mathbf{e}ta_j \right\} \!, \qquad B_j := B_{1/2}\setminus (G_j{\mathrm{c}}up A_j). \] We first estimate the contribution from the set~$A_j$. 
Since the manifold $\mathbf{m}athbb{N}N$ is compact and smooth, we have \begin{equation} \label{avrg4} \mathrm{d}ist(y-z, \, \mathbf{m}athrm{T}_{z}\mathbf{m}athbb{N}N) \lesssim \abs{y - z}^2 \qquad \textrm{for any } y\in\mathbf{m}athbb{N}N, \ z\in\mathbf{m}athbb{N}N \mathbf{e}nd{equation} (For~$y$ sufficiently close to~$z$, say~$\abs{y - z}\leq\mathbf{e}ta_0 = \mathbf{e}ta_0(\mathbf{m}athbb{N}N)$, the inequality~\mathbf{e}qref{avrg4} can be obtained by writing~$\mathbf{m}athbb{N}N$ as the graph of a smooth function, locally around~$z$, and using a Taylor expansion. If~$\abs{y - z}\geq\mathbf{e}ta_0$, we remark that the left-hand side of~\mathbf{e}qref{avrg4} is bounded from above because~$\mathbf{m}athbb{N}N$ is compact.) Then, \begin{equation}\label{avrg5} \begin{split} \mathbf{e}ta_j^{-1} \int_{A_j} \mathrm{d}ist(\mathbf{p}i{\mathrm{c}}irc u_j - \zeta_j, \, \mathbf{m}athrm{T}_{\zeta_j}\mathbf{m}athbb{N}N) \lesssim \mathbf{e}ta_j^{-1} \int_{A_j} |\mathbf{p}i{\mathrm{c}}irc u_j - \zeta_j|^2 \leq M^2 \mathbf{e}ta_j \abs{A_j} \to 0 \mathbf{e}nd{split} \mathbf{e}nd{equation} as~$j\to+\infty$. Now, we estimate the contribution from~$B_j$. By construction, the image~$u_j(B_{1/2}\setminus G_j)$ is contained in the neighbourhood~$\mathbf{m}athscr{U}$ given by~\mathbf{e}qref{nearestproj}. Moreover, $\mathbf{p}i$ is Lipschitz-continuous on~$\mathbf{m}athscr{U}$ and~$\mathbf{p}i(\zeta_j) = \zeta_j$, so \begin{equation} \label{avrg5.5} \abs{(\mathbf{p}i{\mathrm{c}}irc u_j)(x) - \zeta_j} \lesssim \abs{u_j(x) - \zeta_j} \qquad \textrm{for a.e. } x\in B_{1/2}\setminus G_j \mathbf{e}nd{equation} By definition, we have $\abs{\mathbf{p}i{\mathrm{c}}irc u_j - \zeta_j}\geq M\mathbf{e}ta_j$ on~$B_j$ and hence, \begin{equation} \label{avrg6} \abs{B_j} \lesssim M^{-2} \mathbf{e}ta_j^{-2} \int_{B_{1/2}\setminus G_j} \abs{\mathbf{p}i{\mathrm{c}}irc u_j - \zeta_j}^2 \stackrel{\mathbf{e}qref{avrg5.5}}{\lesssim} M^{-2} \mathbf{e}ta_j^{-2} \int_{B_{1/2}\setminus G_j} \abs{u_j - \zeta_j}^2 \stackrel{\mathbf{e}qref{average1}}{\lesssim} M^{-2} \mathbf{e}nd{equation} By applying the inequality $\mathrm{d}ist(y-z, \, \mathbf{m}athrm{T}_{z}\mathbf{m}athbb{N}N)\leq |y - z|$, which holds for any~$y\in\mathbf{m}athbb{N}N$, $z\in\mathbf{m}athbb{N}N$, we obtain \begin{align*} \mathbf{e}ta_j^{-1} \int_{B_j} \mathrm{d}ist(\mathbf{p}i{\mathrm{c}}irc u_j - \zeta_j, \, \mathbf{m}athrm{T}_{\zeta_j}\mathbf{m}athbb{N}N) \leq \mathbf{e}ta_j^{-1} \int_{B_j} \abs{\mathbf{p}i{\mathrm{c}}irc u_j - \zeta_j} \stackrel{\mathbf{e}qref{avrg5.5}}{\lesssim} \mathbf{e}ta_j^{-1} \int_{B_j} \abs{u_j - \zeta_j} \mathbf{e}nd{align*} and hence, \begin{equation} \label{avrg7} \mathbf{e}ta_j^{-1} \int_{B_j} \mathrm{d}ist(\mathbf{p}i{\mathrm{c}}irc u_j - \zeta_j, \, \mathbf{m}athrm{T}_{\zeta_j}\mathbf{m}athbb{N}N) \lesssim \mathbf{e}ta_j^{-1} \abs{B_j}^{1/2} \left(\int_{B_{1/2}} \abs{u_j - \zeta_j}^2\right)^{1/2} \stackrel{\mathbf{e}qref{average1}, \, \mathbf{e}qref{avrg6}}{\lesssim} M^{-1} \mathbf{e}nd{equation} By combining~\mathbf{e}qref{avrg5} with~\mathbf{e}qref{avrg7}, and passing to the limit as~$M\to+\infty$, we deduce~\mathbf{e}qref{average3}. \mathbf{e}nd{proof} Let~$\zeta_j\in\mathbf{m}athbb{N}N$, $G_j\subseteq B_{1/2}$ be as in Lemma~\ref{lemma:average}. We consider the maps $v_j{\mathrm{c}}olon B_{1/2}\to\mathbf{m}athbb{R}^3$ given by \[ v_j := \frac{1}{\mathbf{e}ta_j} \left(u_j - \zeta_j\right) \] Thanks to~\mathbf{e}qref{average1}, the sequence~$v_j$ is bounded in~$L^2(B_{1/2})$. 
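Explicitly, the $L^2$ bound on~$v_j$ follows by combining the definition of~$v_j$ with~\eqref{average1}:
\[
\int_{B_{1/2}} \abs{v_j}^2 = \frac{1}{\eta_j^2} \int_{B_{1/2}} \abs{u_j - \zeta_j}^2 \lesssim 1 .
\]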
\begin{lemma} \label{lemma:L2conv} There exist a (non-relabelled) subsequence, a point~$\zeta\in\mathbf{m}athbb{N}N$ and a map~$v\in H^1(B_{1/2}, \, \mathbf{m}athbb{R}^m)$ such that $\zeta_j\to \zeta$, $v_j\to v$ strongly in~$L^2(B_{1/2})$ and a.e.~as~$j\to+\infty$. Moreover, \begin{equation} \label{tg} v(x)\in \mathbf{m}athrm{T}_\zeta\mathbf{m}athbb{N}N \qquad \textrm{for a.e. } x\in B_{1/2}. \mathbf{e}nd{equation} \mathbf{e}nd{lemma} \begin{proof} Let~$\mathbf{v}arphi_{\mathbf{e}ps_j}$ be a sequence of mollifiers, as in~\mathbf{e}qref{mollify}. Since~$\zeta_j$ is constant, we have $\mathbf{v}arphi_\mathbf{e}ps*\zeta_j = \zeta_j \int_{\mathbf{m}athbb{R}^3} \mathbf{v}arphi_{\mathbf{e}ps_j} = \zeta_j$ and hence, by Lemma~\ref{lemma:mollify-Morrey}, \[ \int_{B_{1/2}} \abs{\mathbf{m}athbf{n}abla (\mathbf{v}arphi_{\mathbf{e}ps_j}*v_j)}^2 = \frac{1}{\mathbf{e}ta_j^2} \int_{B_{1/2}} \abs{(\mathbf{m}athbf{n}abla \mathbf{v}arphi_{\mathbf{e}ps_j})*u_j}^2 \lesssim \frac{F_{\mathbf{e}ps_j}(u_j, \, B_1)}{\mathbf{e}ta_j^2} \stackrel{\mathbf{e}qref{smallenergy}}{=} 1 \] Moreover, by Lemma~\ref{lemma:mollify-L2}, \[ \int_{B_{1/2}} \abs{v_j - \mathbf{v}arphi_{\mathbf{e}ps_j}*v_j}^2 = \frac{1}{\mathbf{e}ta_j^2} \int_{B_{1/2}} \abs{u_j - \mathbf{v}arphi_{\mathbf{e}ps_j}*u_j}^2 \lesssim \frac{\mathbf{e}ps_j^2 \, F_{\mathbf{e}ps_j}(u_j, \, B_1)}{\mathbf{e}ta_j^2} \stackrel{\mathbf{e}qref{smallenergy}}{=} \mathbf{e}ps_j^2 \to 0 \] Then, we may extract a subsequence in such a way that $\mathbf{v}arphi_{\mathbf{e}ps_j}*v_j\rightharpoonup v$ weakly in~$H^1(B_{1/2})$ and~$v_j \to v$ strongly in~$L^2(B_{1/2})$ and a.e. We may also assume that~$\zeta_j\to \zeta\in\mathbf{m}athbb{N}N$, because~$\mathbf{m}athbb{N}N$ is compact. It remains to prove~\mathbf{e}qref{tg}. Let~$\mathrm{d}elta > 0$ be a small parameter. By Lemma~\ref{lemma:average}, we know that~$\abs{G_j} \to 0$, so we may extract a subsequence~$j_k\to+\infty$ in such a way that $\sum_{k\in\mathbf{m}athbb{N}} |G_{j_k}| \leq \mathrm{d}elta$. Let~$G := {\mathrm{c}}up_{k\in\mathbf{m}athbb{N}} G_{j_k}$. The estimate~\mathbf{e}qref{average2} implies \[ \int_{B_{1/2}\setminus G} \mathrm{d}ist(v_{j_k}, \, \mathbf{m}athrm{T}_{\zeta_{j_k}}\mathbf{m}athbb{N}N) \leq \int_{B_{1/2}\setminus G_{j_k}} \mathrm{d}ist(v_{j_k}, \, \mathbf{m}athrm{T}_{\zeta_{j_k}}\mathbf{m}athbb{N}N) \to 0 \] as~$k\to+\infty$. By Fatou lemma, we deduce that~$v(x)\in\mathbf{m}athrm{T}_{\zeta}\mathbf{m}athbb{N}N$ for a.e.~$x\in B_{1/2}\setminus G$. This implies \[ \abs{\left\{x\in B_{1/2}{\mathrm{c}}olon v(x)\mathbf{m}athbf{n}otin\mathbf{m}athrm{T}_\zeta\mathbf{m}athbb{N}N\right\}} \leq \abs{G} \leq \sum_{k\in\mathbf{m}athbb{N}} \abs{G_{j_k}} \leq \mathrm{d}elta. \] Since~$\mathrm{d}elta>0$ is arbitrary, \mathbf{e}qref{tg} follows. \mathbf{e}nd{proof} Next, we show that~$v$ minimises the gradient energy associated with the tensor~$L$, subject to its own boundary conditions. \begin{lemma} \label{lemma:minim} Let~$v^*\in H^1(B_{1/2}, \, \mathbf{m}athrm{T}_\zeta\mathbf{m}athbb{N}N)$ be such that $v^* = v$ on~$\mathbf{p}artial B_{1/2}$ (in the sense of traces). 
Then,
\begin{equation} \label{blmin}
\int_{B_{1/2}} L\nabla v\cdot \nabla v \leq \int_{B_{1/2}} L\nabla v^*\cdot\nabla v^* .
\end{equation}
Moreover, for any~$s\in(0, \, 1/2)$ there holds
\begin{equation} \label{blstrongconv}
\lim_{j\to+\infty} \eta_j^{-2} F_{\eps_j}(u_j, \, B_s) = \int_{B_s} L\nabla v\cdot\nabla v .
\end{equation}
\end{lemma} \begin{proof} By a density argument, it suffices to prove~\eqref{blmin} in case $v^* = v$ in a neighbourhood of~$\partial B_{1/2}$. Let us fix~$s\in (0, \, 1/2)$ and~$v^*\in H^1(B_{1/2}, \, \mathrm{T}_{\zeta}\NN)$ such that~$v^*=v$ a.e.~in~$B_{1/2}\setminus\bar{B}_s$. For any~$a>0$, we let $\mathscr{U}_a := \{y\in\mathbb{R}^m\colon \dist(y, \, \NN)\leq a\}$. Let~$z\in\NN$, $R>0$ and~$w\in\mathrm{T}_z\NN$ be such that~$\abs{w}\leq R$. Since~$\nabla\pi(z)$ is the orthogonal projection onto~$\mathrm{T}_z\NN$ (see e.g.~\cite[Section~2.12.3]{Simon-Harmonic}), we have \begin{equation} \label{pi1} \abs{z + \eta_jw - \pi(z + \eta_jw)} \lesssim \eta_j^2 \, R^2 \, \|\nabla^2\pi\|_{L^\infty(\mathscr{U}_{\eta_j R})} \end{equation} Moreover, for any~$X\in\mathrm{T}_z\NN$ we have \begin{equation} \label{pi2} \abs{X - \nabla\pi(z + \eta_jw)X} \leq \norm{\nabla\pi(z) - \nabla\pi(z+\eta_jw)}\abs{X} \lesssim \eta_j R \abs{X}\, \|\nabla^2\pi\|_{L^\infty(\mathscr{U}_{\eta_j R})} \end{equation} We choose a positive sequence~$R_j\to+\infty$ such that~$\eta_j R_j^2\to 0$ and we define \[ v^*_j := \frac{R_jv^*}{\max(R_j, \, \abs{v^*})}, \qquad u^*_j := \pi(\zeta_j + \eta_j v^*_j). \] Using~\eqref{pi1} and~\eqref{pi2}, a routine computation shows that \[ \frac{u^*_j - \zeta_j}{\eta_j} \to v^* \qquad \textrm{strongly in } H^1(B_{1/2}). \] Now, we apply Lemma~\ref{lemma:nonlocinterp} (and Remark~\ref{rk:nonlocinterp}).
For any~$\sigma\in (0, \, 1/10)$, we find radii~$r$, $t$ with~$\mathbf{m}ax(1/2, \, s) < r < t < 1/2$ and maps~$\xi_j\in L^\infty(\mathbf{m}athbb{R}^3, \, \mathbf{Q}Q)$ such that $\xi_j = u_j$ a.e.~in~$\mathbf{m}athbb{R}^3\setminus B_t$, \begin{equation} \label{Xi} \Xi_j := \frac{\xi_j - \zeta_j}{\mathbf{e}ta_j} \to v^* \qquad \textrm{strongly in } H^1(B_{r}) \mathbf{e}nd{equation} and \[ \begin{split} F_{\mathbf{e}ps_j}(\xi_j, \, B_t) &+ \Gamma_{\mathbf{e}ps_j}(\xi_j, \, B_t, \, \mathbf{m}athbb{R}^3\setminus B_t) - \Gamma_{\mathbf{e}ps_j}(u_j, \, B_t, \, \mathbf{m}athbb{R}^3\setminus B_t) \\ &\leq F_{\mathbf{e}ps_j}^\mathbf{m}athbf{n}l(\xi_j, \, B_r) + C\int_{B_r\setminus B_{r-\sigma}} \abs{\mathbf{m}athbf{n}abla\xi_j}^2 + C\sigma \mathbf{e}ta_j^2 + \mathrm{o}\!\left(\frac{\mathbf{e}ps_j^{q-2} \, \mathbf{e}ta_j}{\sigma^q}\right) \mathbf{e}nd{split} \] Since~$u_j$ is a minimiser of~$E_{\mathbf{e}ps_j}$, we have \[ F_{\mathbf{e}ps_j}(u_j, \, B_t) + \Gamma_{\mathbf{e}ps_j}(u_j, \, B_t, \, \mathbf{m}athbb{R}^3\setminus B_t) \leq F_{\mathbf{e}ps_j}(\xi_j, \, B_t) + \Gamma_{\mathbf{e}ps_j}(\xi_j, \, B_t, \, \mathbf{m}athbb{R}^3\setminus B_t) \] and hence, \[ F_{\mathbf{e}ps_j}(u_j, \, B_t) \leq F_{\mathbf{e}ps_j}^\mathbf{m}athbf{n}l(\xi_j, \, B_r) + C\int_{B_r\setminus B_{r-\sigma}} \abs{\mathbf{m}athbf{n}abla\xi_j}^2 + C\sigma \mathbf{e}ta_j^2 + \mathrm{o}\!\left(\frac{\mathbf{e}ps_j^{q-2} \, \mathbf{e}ta_j}{\sigma^q}\right) \] We divide both sides of this inequality by~$\mathbf{e}ta^2_j$ and obtain \begin{equation} \label{blmin1} \begin{split} F^\mathbf{m}athbf{n}l_{\mathbf{e}ps_j}(v_j, \, B_t) &\leq \frac{1}{\mathbf{e}ta^2_j} F_{\mathbf{e}ps_j}(u_j, \, B_t) \\ &\leq F_{\mathbf{e}ps_j}^\mathbf{m}athbf{n}l\left(\Xi_j, \, B_r\right) + C\int_{B_r\setminus B_{r-\sigma}} \abs{\mathbf{m}athbf{n}abla\Xi_j}^2 + C\sigma + \mathrm{o}\!\left(\frac{\mathbf{e}ps_j^{q-2}}{\sigma^q \, \mathbf{e}ta_j}\right) \mathbf{e}nd{split} \mathbf{e}nd{equation} The assumptions~\mathbf{e}qref{smallenergy}, \mathbf{e}qref{nodecay} imply that~$\mathbf{e}ta_j^2 \geq \mathbf{e}ps_j^{2q-4}$, that is $\mathbf{e}ta_j \geq \mathbf{e}ps_j^{q-2}$. Then, recalling~\mathbf{e}qref{Xi}, Proposition~\ref{prop:Gamma-liminf} and Proposition~\ref{prop:Gamma-limsup}, we may pass to the limit in~\mathbf{e}qref{blmin1}, first as~$j\to+\infty$, then as~$\sigma\to 0$. We obtain \[ \int_{B_t} L\mathbf{m}athbf{n}abla v{\mathrm{c}}dot\mathbf{m}athbf{n}abla v \leq \limsup_{j\to +\infty} \frac{1}{\mathbf{e}ta^2_j} F_{\mathbf{e}ps_j}(u_j, \, B_t) \leq \int_{B_t} L\mathbf{m}athbf{n}abla v^*{\mathrm{c}}dot\mathbf{m}athbf{n}abla v^* \] and~\mathbf{e}qref{blmin} follows. By choosing~$v^* = v$, we also deduce~\mathbf{e}qref{blstrongconv}, exactly as in Proposition~\ref{prop:compactness}. \mathbf{e}nd{proof} Now, we are in position to complete the proof of Lemma~\ref{lemma:decay}. \begin{proof}[Proof of Lemma~\ref{lemma:decay}] The assumption~\mathbf{e}qref{smallenergy}, Lemma~\ref{lemma:L2conv} and Proposition~\ref{prop:Gamma-liminf} imply that \begin{equation} \label{minim0} \int_{B_{1/2}}L\mathbf{m}athbf{n}abla v{\mathrm{c}}dot\mathbf{m}athbf{n}abla v \leq \liminf_{j\to+\infty} \frac{1}{\mathbf{e}ta_j^2} F_{\mathbf{e}ps_j}(u_j, \, B_{1/2})\leq 1 \mathbf{e}nd{equation} By Lemma~\ref{lemma:minim}, $v\in H^1(B_{1/2}, \, \mathbf{m}athrm{T}_\zeta\mathbf{m}athbb{N}N)$ minimises a quadratic functional among maps with values in the linear space~$\mathbf{m}athrm{T}_\zeta\mathbf{m}athbb{N}N$. 
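To pass from this minimality property to the linear system below, one can argue by outer variations; we assume here, as the quadratic form suggests (only the symmetric part of~$L$ plays a role), that~$L$ is symmetric. Since~$\mathrm{T}_\zeta\NN$ is a linear space, $v + t\phi$ is an admissible competitor for any~$\phi\in H^1_0(B_{1/2}, \, \mathrm{T}_\zeta\NN)$ and any~$t\in\mathbb{R}$, so
\[
0 = \frac{\mathrm{d}}{\mathrm{d} t}\bigg|_{t=0} \int_{B_{1/2}} L\nabla(v + t\phi)\cdot\nabla(v + t\phi) = 2\int_{B_{1/2}} L\nabla v\cdot\nabla\phi ,
\]
which is the weak formulation of the system below.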
In particular, $v$ is a solution of the system \[ -\mathrm{d}iv\left(L\mathbf{m}athbf{n}abla v\right) = 0 \qquad \textrm{in } B_{1/2} \] This system has constant coefficients and is (strongly) elliptic, by Proposition~\ref{prop:L}. Then, elliptic regularity theory (see e.g.~{\mathrm{c}}ite[Section~III.2, Theorem~2.1, Remarks~2.2 and~2.3]{Giaquinta83}) implies that~$v$ is smooth and there exists a number~$\theta\in (0, \, 1)$ (depending only on~$L$) such that \begin{equation*} \int_{B_{\theta}}L\mathbf{m}athbf{n}abla v{\mathrm{c}}dot\mathbf{m}athbf{n}abla v \leq \frac{\theta}{4} \int_{B_{1/2}}L\mathbf{m}athbf{n}abla v{\mathrm{c}}dot\mathbf{m}athbf{n}abla v \stackrel{\mathbf{e}qref{minim0}}{\leq} \frac{\theta}{4} \mathbf{e}nd{equation*} However, \mathbf{e}qref{nodecay} and~\mathbf{e}qref{blstrongconv} imply \begin{equation*} \int_{B_{\theta}}L\mathbf{m}athbf{n}abla v{\mathrm{c}}dot\mathbf{m}athbf{n}abla v = \lim_{j\to+\infty} \frac{1}{\mathbf{e}ta_j^2} F_{\mathbf{e}ps_j}(u_j, \, B_{1/2}) \geq \frac{\theta}{2} \mathbf{e}nd{equation*} Thus, we have obtained a contradiction, and the lemma follows. \mathbf{e}nd{proof} } \subsection{Proof of Theorem~\ref{th:Holder} and Theorem~\ref{th:conv}} \begin{proof}[Proof of Theorem~\ref{th:Holder}] Let~$\mathbf{e}ta$, $\theta$ and~$\mathbf{e}ps_*$ be given by Lemma~\ref{lemma:decay}. Let~$B_{r_0}(x_0)\subset\!\subset\Omega$ be a ball, $\mathbf{e}ps\in (0, \, \mathbf{e}ps_* r_0)$, and let~$u_\mathbf{e}ps$ be a minimiser of~$E_\mathbf{e}ps$ in~$\mathbf{m}athscr{A}$ such that \begin{equation} \label{Holder0} F_\mathbf{e}ps(u_\mathbf{e}ps, \, B_{r_0}(x_0)) \leq \mathbf{e}ta^2\,r_0. \mathbf{e}nd{equation} By a scaling argument, using~\mathbf{e}qref{scaling}, we can assume without loss of generality that~$x_0 = 0$ and~$r_0 = 1$. {\BBB We fix a parameter~$\gamma\in (0, \, 1)$.} \setcounter{step}{0} \begin{step}[Campanato estimate for radii~$\rho\geq\mathbf{e}ps^\gamma$] {\BBB In this case, the result follows by Lemma~\ref{lemma:decay} combined with a classical iteration argument. Let~$k\geq 1$ be an integer such that~$\theta^k\geq \mathbf{e}ps^\gamma$. Thanks to~\mathbf{e}qref{Holder0}, we can apply Lemma~\ref{lemma:decay} iteratively, first on the ball~$B_1$, then on~$B_\theta$, on~$B_{\theta^2}$, and so on. We obtain \begin{equation} \label{holder-largerad1} F_\mathbf{e}ps(u_\mathbf{e}ps, \, B_{\theta^k}) \leq \frac{\theta^k}{2^k} F_\mathbf{e}ps(u_\mathbf{e}ps, \, B_1) + \mathbf{e}ps^{2q - 4}\theta^{k-1} \sum_{j=0}^{k-1} 2^{-j} \theta^{(4 - 2q)(k - 1 - j)} \mathbf{e}nd{equation} At each step of the iteration, the assumptions of Lemma~\ref{lemma:decay} remain satisfied, so long as~$\theta^k\geq \mathbf{e}ps^\gamma$ and~$\mathbf{e}ps$ is small enough. Indeed, \mathbf{e}qref{holder-largerad1} implies \[ F_\mathbf{e}ps(u_\mathbf{e}ps, \, B_{\theta^k}) \leq \frac{\theta^k}{2^k} F_\mathbf{e}ps(u_\mathbf{e}ps, \, B_1) + \mathbf{e}ps^{2q - 4}\,\theta^{(k-1)(5 - 2q)} \sum_{j=0}^{k-1} \left(\frac{\theta^{2q-4}}{2}\right)^j \] The series at the right-hand side converges, because $\theta \leq 1/2$, and its sum is less than~$2$. 
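For completeness, \eqref{holder-largerad1} can be checked by induction on~$k$: the case~$k = 1$ is~\eqref{decay} with~$\rho = 1$ and, if~\eqref{holder-largerad1} holds for some~$k\geq 1$ (and the smallness assumption of Lemma~\ref{lemma:decay} is preserved along the iteration, as explained above), then~\eqref{decay}, applied on the ball~$B_{\theta^k}$, i.e.~with~$\rho = \theta^k$, gives
\[
F_\eps(u_\eps, \, B_{\theta^{k+1}}) \leq \frac{\theta}{2} F_\eps(u_\eps, \, B_{\theta^k}) + \left(\frac{\eps}{\theta^k}\right)^{2q-4}\theta^k
\leq \frac{\theta^{k+1}}{2^{k+1}} F_\eps(u_\eps, \, B_1) + \eps^{2q-4}\,\theta^{k}\sum_{j=0}^{k} 2^{-j}\theta^{(4-2q)(k-j)} ,
\]
which is~\eqref{holder-largerad1} with~$k$ replaced by~$k+1$; the new term~$(\eps/\theta^k)^{2q-4}\theta^k = \eps^{2q-4}\theta^k\,\theta^{(4-2q)k}$ accounts for the summand~$j = 0$.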
The assumption~\mathbf{e}qref{Holder0} implies \begin{equation} \label{holder-largerad2} \begin{split} F_\mathbf{e}ps(u_\mathbf{e}ps, \, B_{\theta^k}) \leq \theta^k \left(\frac{\mathbf{e}ta^2}{2^k} + 2\theta^{2q-5} \left(\frac{\mathbf{e}ps}{\theta^k}\right)^{2q-4} \right) \mathbf{e}nd{split} \mathbf{e}nd{equation} Under the assumption that~$\theta^k\geq\mathbf{e}ps^\gamma$, we can further estimate the right-hand side as \begin{equation*} \begin{split} F_\mathbf{e}ps(u_\mathbf{e}ps, \, B_{\theta^k}) &\leq \theta^k \left(\frac{\mathbf{e}ta^2}{2^k} + 2\theta^{2q-5} \, \mathbf{e}ps^{(2q - 4)(1 - \gamma)}\right) \mathbf{e}nd{split} \mathbf{e}nd{equation*} and we can make sure that $F_\mathbf{e}ps(u_\mathbf{e}ps, \, B_{\theta^k}) \leq \mathbf{e}ta^2\,\theta^k$ by taking~$\mathbf{e}ps$ small enough (depending on~$\mathbf{e}ta$, $\theta$ and~$\gamma$ only). Now, take a radius~$\rho$ such that~$\mathbf{e}ps^\gamma \leq \rho < 1$. Let~$k = k(\rho)\geq 0$ be the unique integer such that $\theta^{k+1} \leq \rho < \theta^k$. The inequality~\mathbf{e}qref{holder-largerad2} implies \begin{equation} \label{holder-largerad3} \begin{split} F_\mathbf{e}ps(u_\mathbf{e}ps, \, B_{\rho}) &\leq \frac{\rho}{\theta} \left(\frac{\mathbf{e}ta^2}{2^k} + 2\theta^{2q-5} \left(\frac{\rho^{1/\gamma}}{\rho}\right)^{2q - 4} \right) \leq C\rho \left(\rho^{\alpha} + \rho^{(2q-4)(1/\gamma - 1)}\right) \mathbf{e}nd{split} \mathbf{e}nd{equation} for~$\alpha := \log 2/\abs{\log\theta} \in (0, \, 1)$ and some constant~$C$ that depends only on~$\mathbf{e}ta$, $\theta$. We assume that \begin{equation} \label{holderalpha} 0 < \gamma < \frac{2q - 4}{\alpha + 2q - 4} \mathbf{e}nd{equation} which implies $\alpha < (2q - 4)(1/\gamma - 1)$. By applying Proposition~\ref{prop:Poincare}, and possibly modifying the value of~$C$, from~\mathbf{e}qref{holder-largerad3} we deduce \begin{equation} \label{Holder-largeradii} \fint_{B_{\rho}} \abs{u_\mathbf{e}ps - \fint_{B_{\rho}}u_\mathbf{e}ps}^2 \leq C\rho^{\alpha} \qquad \textrm{for } \mathbf{e}ps^\gamma \leq \rho < 1. \mathbf{e}nd{equation} } \mathbf{e}nd{step} \begin{step}[Campanato estimate for radii~$\rho\leq\mathbf{e}ps^\gamma$] {\BBB We need to show that an estimate similar to~\mathbf{e}qref{Holder-largeradii} holds for~$\rho<\mathbf{e}ps^\gamma$ as well. To this end, we consider the number~$\mathbf{m}athbf{n}u>1$ given by Assumption~\mathbf{e}qref{hp:nabla_K} and we define \begin{equation} \label{beta,p} p := 3 + \alpha - \frac{\alpha}{\mathbf{m}athbf{n}u}, \qquad \beta := 1 - \frac{\alpha}{(\alpha + 3)\mathbf{m}athbf{n}u} \mathbf{e}nd{equation} We have $p > 3$, $0 < \beta < 1$. We claim that we can choose the parameter~$\gamma$ so as to satisfy~\mathbf{e}qref{holderalpha} and \begin{equation} \label{betap1} \beta < \gamma \mathbf{e}nd{equation} Indeed, a straightforward algebraic manipulation shows that \begin{equation} \label{betap2} 1 - \frac{\alpha}{(\alpha + 3)\mathbf{m}athbf{n}u} < \frac{2q - 4}{\alpha + 2q - 4} \qquad \mathscr{L}ongleftrightarrow \qquad \mathbf{m}athbf{n}u < \frac{\alpha + 2q - 4}{\alpha + 3} \mathbf{e}nd{equation} The assumption that~$q > 7/2$ guarantees that $(\alpha + 2q - 4)/(\alpha + 3) > 1$. Moreover, we have assumed that~$\mathbf{m}athbf{n}abla K$ is integrable (see~\mathbf{e}qref{hp:K_decay}), so there is no loss of generality in taking a smaller value for~$\mathbf{m}athbf{n}u$ (so long as~$\mathbf{m}athbf{n}u > 1$). 
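For the reader's convenience, we spell out the manipulation behind~\eqref{betap2}: since
\[
1 - \frac{2q - 4}{\alpha + 2q - 4} = \frac{\alpha}{\alpha + 2q - 4} ,
\]
the first inequality in~\eqref{betap2} is equivalent to $\frac{\alpha}{\alpha + 2q - 4} < \frac{\alpha}{(\alpha + 3)\nu}$, that is, to~$(\alpha + 3)\nu < \alpha + 2q - 4$, which is precisely the second inequality in~\eqref{betap2}.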
Therefore, we may assume that~$\mathbf{m}athbf{n}u>1$ satisfies~\mathbf{e}qref{betap2} and hence, we can choose~$\gamma$ that satisfies~\mathbf{e}qref{holderalpha} and~\mathbf{e}qref{betap1}.} Let \[ m_\mathbf{e}ps :=\fint_{B_{2\mathbf{e}ps^\beta}} u_\mathbf{e}ps \] and let~${\mathrm{c}}hi_\mathbf{e}ps$ be the characteristic function of the ball~$B_{\mathbf{e}ps^\beta}$. Since~$\mathbf{m}athbf{n}abla K_\mathbf{e}ps$ has zero average, from the Euler-Lagrange equation (Proposition~\ref{prop:EL}) we obtain \[ \begin{split} \mathbf{m}athbf{n}abla(\mathscr{L}ambda{\mathrm{c}}irc u_\mathbf{e}ps) &= (\mathbf{m}athbf{n}abla K_\mathbf{e}ps) * (u_\mathbf{e}ps - m_\mathbf{e}ps) \\ &= ({\mathrm{c}}hi_\mathbf{e}ps\mathbf{m}athbf{n}abla K_\mathbf{e}ps) * (u_\mathbf{e}ps - m_\mathbf{e}ps) + \left((1 - {\mathrm{c}}hi_\mathbf{e}ps)\mathbf{m}athbf{n}abla K_\mathbf{e}ps\right) * (u_\mathbf{e}ps - m_\mathbf{e}ps). \mathbf{e}nd{split} \] Let~$\tilde{{\mathrm{c}}hi}_\mathbf{e}ps$ be the characteristic function of the ball~$B_{2\mathbf{e}ps^\beta}$. Since~${\mathrm{c}}hi_\mathbf{e}ps\mathbf{m}athbf{n}abla K_\mathbf{e}ps$ is supported on~$B_{\mathbf{e}ps^\beta}$, we deduce \[ \begin{split} \mathbf{m}athbf{n}abla(\mathscr{L}ambda{\mathrm{c}}irc u_\mathbf{e}ps) &= ({\mathrm{c}}hi_\mathbf{e}ps\mathbf{m}athbf{n}abla K_\mathbf{e}ps) * (\tilde{{\mathrm{c}}hi}_\mathbf{e}ps(u_\mathbf{e}ps - m_\mathbf{e}ps)) + \left((1 - {\mathrm{c}}hi_\mathbf{e}ps)\mathbf{m}athbf{n}abla K_\mathbf{e}ps\right) * (u_\mathbf{e}ps - m_\mathbf{e}ps) \quad \textrm{in } B_{\mathbf{e}ps^\beta}. \mathbf{e}nd{split} \] We apply H\"older's inequality, and then Young's inequality for the convolution: \begin{equation} \label{Holder1} \begin{split} \mathbf{m}athbf{n}orm{\mathbf{m}athbf{n}abla (\mathscr{L}ambda{\mathrm{c}}irc u_\mathbf{e}ps)}_{L^p(B_{\mathbf{e}ps^\beta})} &\lesssim \|({\mathrm{c}}hi_\mathbf{e}ps\mathbf{m}athbf{n}abla K_\mathbf{e}ps) *(\tilde{{\mathrm{c}}hi}_\mathbf{e}ps(u_\mathbf{e}ps - m_\mathbf{e}ps))\|_{L^p(\mathbf{m}athbb{R}^3)} \\ &\qquad\qquad + \mathbf{e}ps^{3\beta/p} \mathbf{m}athbf{n}orm{((1-{\mathrm{c}}hi_\mathbf{e}ps)\mathbf{m}athbf{n}abla K_\mathbf{e}ps) * (u_\mathbf{e}ps - m_\mathbf{e}ps)}_{L^\infty(\mathbf{m}athbb{R}^3)} \\ &\lesssim \mathbf{m}athbf{n}orm{\mathbf{m}athbf{n}abla K_\mathbf{e}ps}_{L^1(B_{\mathbf{e}ps^\beta})} \mathbf{m}athbf{n}orm{u_\mathbf{e}ps - m_\mathbf{e}ps}_{L^p(B_{2\mathbf{e}ps^\beta})} \\ &\qquad\qquad + \mathbf{e}ps^{3\beta/p} \mathbf{m}athbf{n}orm{\mathbf{m}athbf{n}abla K_\mathbf{e}ps}_{L^1(\mathbf{m}athbb{R}^3\setminus B_{\mathbf{e}ps^\beta})} \mathbf{m}athbf{n}orm{u_\mathbf{e}ps - m_\mathbf{e}ps}_{L^\infty(\mathbf{m}athbb{R}^3)} \mathbf{e}nd{split} \mathbf{e}nd{equation} We bound the terms at the right-hand side. {\BBB We have~$\beta < \gamma$, so~$2\mathbf{e}ps^\beta\geq\mathbf{e}ps^\gamma$ and hence} \begin{equation*} \mathbf{m}athbf{n}orm{u_\mathbf{e}ps - m_\mathbf{e}ps}^2_{L^2(B_{2\mathbf{e}ps^\beta})} \lesssim \mathbf{e}ps^{(3 + \alpha)\beta} \mathbf{e}nd{equation*} {\BBB due to~\mathbf{e}qref{Holder-largeradii}.} Since $\|u_\mathbf{e}ps\|_{L^\infty(\mathbf{m}athbb{R}^3)}\leq C$, by interpolation we obtain \begin{equation} \label{Holder2} \begin{split} \mathbf{m}athbf{n}orm{u_\mathbf{e}ps - m_\mathbf{e}ps}_{L^p(B_{2\mathbf{e}ps^\beta})} &\lesssim \mathbf{m}athbf{n}orm{u_\mathbf{e}ps - m_\mathbf{e}ps}_{L^2(B_{2\mathbf{e}ps^\beta})}^{2/p} \lesssim \mathbf{e}ps^{(3 + \alpha)\beta/p}. 
\mathbf{e}nd{split} \mathbf{e}nd{equation} By a change of variable, we have \begin{equation} \label{Holder3} \mathbf{m}athbf{n}orm{\mathbf{m}athbf{n}abla K_\mathbf{e}ps}_{L^1(B_{\mathbf{e}ps^\beta})} \leq \mathbf{e}ps^{-1}\mathbf{m}athbf{n}orm{\mathbf{m}athbf{n}abla K}_{L^1(\mathbf{m}athbb{R}^3)} \mathbf{e}nd{equation} and \begin{equation} \label{Holder4} \begin{split} \mathbf{m}athbf{n}orm{\mathbf{m}athbf{n}abla K_\mathbf{e}ps}_{L^1(\mathbf{m}athbb{R}^3\setminus B_{\mathbf{e}ps^\beta})} &= \mathbf{e}ps^{-1}\int_{\mathbf{m}athbb{R}^3\setminus B_{\mathbf{e}ps^{\beta-1}}} \mathbf{m}athbf{n}orm{\mathbf{m}athbf{n}abla K (z)} \mathrm{d} z \\ &\leq \mathbf{e}ps^{-1}\int_{\mathbf{m}athbb{R}^3\setminus B_{\mathbf{e}ps^{\beta-1}}} \mathbf{m}athbf{n}orm{\mathbf{m}athbf{n}abla K (z)} {\BBB \frac{\abs{z}^\mathbf{m}athbf{n}u}{\mathbf{e}ps^{\mathbf{m}athbf{n}u\beta - \mathbf{m}athbf{n}u}} \, \mathrm{d} z }\\ &\leq {\BBB \mathbf{e}ps^{\mathbf{m}athbf{n}u - \mathbf{m}athbf{n}u\beta - 1}} \int_{\mathbf{m}athbb{R}^3} \mathbf{m}athbf{n}orm{\mathbf{m}athbf{n}abla K (z)} \abs{z}^{\BBB \mathbf{m}athbf{n}u} \mathrm{d} z, \mathbf{e}nd{split} \mathbf{e}nd{equation} where the integral at the right-hand side is finite by Assumption~\mathbf{e}qref{hp:nabla_K}. Combining~\mathbf{e}qref{Holder1}, \mathbf{e}qref{Holder2}, \mathbf{e}qref{Holder3} and~\mathbf{e}qref{Holder4}, and using that $u_\mathbf{e}ps$ is bounded in~$L^\infty(\mathbf{m}athbb{R}^3)$, we obtain \begin{equation*} \mathbf{m}athbf{n}orm{\mathbf{m}athbf{n}abla (\mathscr{L}ambda{\mathrm{c}}irc u_\mathbf{e}ps)}_{L^p(B_{\mathbf{e}ps^\beta})} \lesssim \mathbf{e}ps^{(3 + \alpha)\beta/p - 1} + {\BBB \mathbf{e}ps^{3\beta/p + \mathbf{m}athbf{n}u - \mathbf{m}athbf{n}u\beta - 1} } \mathbf{e}nd{equation*} By simple algebra, from~\mathbf{e}qref{beta,p} we obtain \begin{equation*} \frac{(3+\alpha)\beta}{p} - 1 = {\BBB \frac{3\beta}{p} + \mathbf{m}athbf{n}u - \mathbf{m}athbf{n}u\beta - 1} = 0 \mathbf{e}nd{equation*} so $\|\mathbf{m}athbf{n}abla(\mathscr{L}ambda{\mathrm{c}}irc u_\mathbf{e}ps)\|_{L^p(B_{\mathbf{e}ps^\beta})}$ is bounded. Thanks to~\mathbf{e}qref{nabla_inv}, we deduce that $\|\mathbf{m}athbf{n}abla u_\mathbf{e}ps\|_{L^p(B_{\mathbf{e}ps^\beta})}$ is bounded too. Since~$p>3$, {\BBB we have the Sobolev embedding $W^{1,p}(B_{\mathbf{e}ps^\beta})\hookrightarrow C^\mathbf{m}u(B_{\mathbf{e}ps^\beta})$, where \begin{equation} \label{Holder6} \mathbf{m}u := 1 - \frac{3}{p} = {\BBB \frac{\alpha(\mathbf{m}athbf{n}u - 1)}{\alpha(\mathbf{m}athbf{n}u - 1) + 3\mathbf{m}athbf{n}u}} \mathbf{e}nd{equation} Moreover, the constant~$C$ in the Sobolev inequality $[u]_{C^\mathbf{m}u(B_r)} \leq C\|\mathbf{m}athbf{n}abla u\|_{L^p(B_r)}$ is independent of~$r>0$, as demonstrated by a scaling argument. Then, we obtain \begin{equation*} [u_\mathbf{e}ps]_{C^\mathbf{m}u(B_{\mathbf{e}ps^\beta})} \leq C \mathbf{e}nd{equation*} and hence, } \begin{equation} \label{Holder-smallradii} \begin{split} \fint_{B_\rho}\abs{u_\mathbf{e}ps - \fint_{B_\rho}u_\mathbf{e}ps}^2 &\lesssim \rho^{2\mathbf{m}u} \qquad \textrm{for any } \rho\in (0, \, \mathbf{e}ps^\beta]. 
\mathbf{e}nd{split} \mathbf{e}nd{equation} \mathbf{e}nd{step} \begin{step}[Conclusion] By combining~\mathbf{e}qref{Holder-largeradii}, \mathbf{e}qref{Holder6} and~\mathbf{e}qref{Holder-smallradii}, we deduce that \[ \fint_{B_\rho}\abs{u_\mathbf{e}ps - \fint_{B_\rho}u_\mathbf{e}ps}^2 \leq C \rho^{\mathbf{m}in(\alpha, \, 2\mathbf{m}u)} = C \rho^{2\mathbf{m}u} \] for any radius~$\rho\in (0, \, 1)$ and for some constant~$C>0$ that does not depend on~$\mathbf{e}ps$, $\rho$. Then, Campanato embedding gives an $\mathbf{e}ps$-independent bound on the $\mathbf{m}u$-H\"older semi-norm of~$u_\mathbf{e}ps$ on~$B_{1/2}$. This completes the proof. \qedhere \mathbf{e}nd{step} \mathbf{e}nd{proof} \begin{proof}[Proof of Theorem~\ref{th:conv}] Let~$u_\mathbf{e}ps$ be a minimiser of~$E_\mathbf{e}ps$ in~$\mathbf{m}athscr{A}$. By the results of~{\mathrm{c}}ite{taylor2018oseen}, there exists a (non-relabelled) subsequence such that $u_\mathbf{e}ps\to u_0$ strongly in~$L^2(\Omega)$, where~$u_0$ is a minimiser of the limit functional~\mathbf{e}qref{limit}. Take a point~$x_0\in\Omega\setminus S[u_0]$, where~$S[u_0]$ is defined by~\mathbf{e}qref{singularset}. By definition of~$S[u_0]$, there exists a number~$r_0>0$ such that \[ r_0^{-1}\int_{B_{r_0}(x_0)} L\mathbf{m}athbf{n}abla u_0{\mathrm{c}}dot\mathbf{m}athbf{n}abla u_0 \leq \frac{\mathbf{e}ta^2}{2}, \] where~$\mathbf{e}ta$ is given by Theorem~\ref{th:Holder}. Proposition~\ref{prop:compactness} implies \[ r_0^{-1} F_\mathbf{e}ps(u_\mathbf{e}ps, \, B_{r_0}(x_0)) \leq \mathbf{e}ta^2 \] for any~$\mathbf{e}ps$ small enough and hence, by Theorem~\ref{th:Holder}, $[u_\mathbf{e}ps]_{C^\mathbf{m}u(B_{r_0}(x_0))}$ is uniformly bounded. Then, Ascoli-Arzel\`a's theorem implies that $u_\mathbf{e}ps\to u_0$ uniformly in~$B_{r_0}(x_0)$. \mathbf{e}nd{proof} \section{Generalisation to finite-thickness boundary conditions} \label{sect:bd} In this section, we discuss a variant of the minimisation problem, where we prescribe~$u$ in a neighbourhood of~$\mathbf{p}artial\Omega$ only. Let~$\Omega_\mathbf{e}ps\supset\!\supset\Omega$ be a larger domain, possibly depending on~$\mathbf{e}ps$. We consider the functional \begin{equation} \label{energytilde} \begin{split} \tilde{E}_\mathbf{e}ps(u) := - \frac{1}{2\mathbf{e}ps^2}\int_{\Omega_\mathbf{e}ps\times\Omega_\mathbf{e}ps} K_\mathbf{e}ps(x-y)u(x){\mathrm{c}}dot u(y) \, \mathrm{d} x \, \mathrm{d} y + \frac{1}{\mathbf{e}ps^2} \int_\Omega\mathbf{p}si_s(u(x)) \, \mathrm{d} x + {\BBB \tilde{C}_\mathbf{e}ps,} \mathbf{e}nd{split} \mathbf{e}nd{equation} {\BBB where~$\tilde{C}_\mathbf{e}ps$ is a constant. The value of~$\tilde{C}_\mathbf{e}ps$ is irrelevant for the purposes of minimisation, but we will make a specific choice of~$\tilde{C}_\mathbf{e}ps$ (see~\mathbf{e}qref{Ctilde} below) for convenience. } As before, we take a map~$u_{\mathbf{m}athrm{bd}}\in H^1(\mathbf{m}athbb{R}^3, \, \mathbf{Q}Q)$ that satisfies~\mathbf{e}qref{hp:bd} and define the admissible class \begin{equation} \label{Aeps} \tilde{\mathbf{m}athscr{A}}_\mathbf{e}ps := \left\{u\in L^\infty(\Omega_\mathbf{e}ps, \, \mathbf{Q}Q){\mathrm{c}}olon \mathbf{p}si_s(u)\in L^1(\Omega), \ u = u_{\mathbf{m}athrm{bd}} \textrm{ a.e. on } \Omega_\mathbf{e}ps\setminus\Omega\right\} \!. \mathbf{e}nd{equation} The thickness of the boundary layer~$\Omega_\mathbf{e}ps\setminus\Omega$ must be related to the decay properties of the kernel~$K$. 
More precisely, in addition to~\mathbf{e}qref{hp:Kfirst}--\mathbf{e}qref{hp:Klast}, \mathbf{e}qref{hp:Hfirst}--\mathbf{e}qref{hp:Hlast}, we assume that \begin{enumerate}[label = (K$^\mathbf{p}rime$), ref = K$^\mathbf{p}rime$] \item \label{hp:Omega_eps} {\BBB There exist numbers~$q > 7/2$, $\beta\in\left(0, \, 1 - \frac{7}{2q}\right)$ and~$\tau>0$ such that \begin{gather*} \int_{\mathbf{m}athbb{R}^3} g(z)\abs{z}^{q} \mathrm{d} z < +\infty \mathbf{e}nd{gather*} and~$\mathrm{d}ist(\Omega, \, \mathbf{p}artial\Omega_\mathbf{e}ps) \geq\tau\mathbf{e}ps^\beta$ for any~$\mathbf{e}ps> 0$.} \mathbf{e}nd{enumerate} \begin{remark} \label{rk:Omega_eps} In case the kernel~$K$ is compactly supported, we can allow for a boundary layer of thickness proportional to~$\mathbf{e}ps$. More precisely, we can replace the assumption~\mathbf{e}qref{hp:Omega_eps} with the following: there exists~$R_0>0$ such that $\mathbf{m}athrm{supp}(K)\subseteq B_{R_0}$ and~$\mathrm{d}ist(\Omega, \, \mathbf{p}artial\Omega_\mathbf{e}ps)\geq R_0\mathbf{e}ps$ for any $\mathbf{e}ps > 0$. The proofs in this case remain essentially unchanged. \mathbf{e}nd{remark} Under these assumptions, we can prove the analogues of Theorems~\ref{th:Holder} and~\ref{th:conv}. \begin{theorem} \label{th:Holder-tilde} Assume that the conditions~\mathbf{e}qref{hp:Kfirst}--\mathbf{e}qref{hp:Klast}, \mathbf{e}qref{hp:Hfirst}--\mathbf{e}qref{hp:Hlast}, \mathbf{e}qref{hp:bd} and~\mathbf{e}qref{hp:Omega_eps} are satisfied. Then, there exist positive numbers~$\mathbf{e}ta$, $\mathbf{e}ps_*$, $M$ and~$\mathbf{m}u\in (0, \, 1)$ such that, for any ball~$B_{r_0}(x_0)\subset\!\subset\Omega$, any~$\mathbf{e}ps\in (0, \, \mathbf{e}ps_* r_0)$, and any minimiser~$\tilde{u}_\mathbf{e}ps$ of~$\tilde{E}_\mathbf{e}ps$ in~$\tilde{\mathbf{m}athscr{A}}_\mathbf{e}ps$, there holds \[ r_0^{-1} F_\mathbf{e}ps(\tilde{u}_\mathbf{e}ps, \, B_{r_0}(x_0)) \leq \mathbf{e}ta \qquad\mathscr{L}ongrightarrow\qquad r_0^{\mathbf{m}u} \, [\tilde{u}_\mathbf{e}ps]_{C^\mathbf{m}u(B_{r_0/2}(x_0))} \leq M. \] \mathbf{e}nd{theorem} \begin{theorem} \label{th:conv-tilde} Assume that the conditions~\mathbf{e}qref{hp:Kfirst}--\mathbf{e}qref{hp:Klast}, \mathbf{e}qref{hp:Hfirst}--\mathbf{e}qref{hp:Hlast}, \mathbf{e}qref{hp:bd} and~\mathbf{e}qref{hp:Omega_eps} are satisfied. Let~$\tilde{u}_\mathbf{e}ps$ be a minimiser of~$\tilde{E}_\mathbf{e}ps$ in~$\tilde{\mathbf{m}athscr{A}}_\mathbf{e}ps$. Then, up to extraction of a (non-relabelled) subsequence, we have \[ \tilde{u}_\mathbf{e}ps \to u_0 \qquad \textrm{locally uniformly in } \Omega\setminus S[u_0], \] where $u_0$ is a minimiser of the functional~\mathbf{e}qref{limit} in~$\mathbf{m}athscr{A}$ and~$S[u_0]$ is defined by~\mathbf{e}qref{singularset}. \mathbf{e}nd{theorem} The proofs of Theorem~\ref{th:Holder-tilde} and~\ref{th:conv-tilde} are largely similar to those of Theorem~\ref{th:Holder} and~\ref{th:conv}. {\BBB First, we show that the energy~$\tilde{E}_\mathbf{e}ps$ has an alternative expression, which is analogous to~\mathbf{e}qref{energy2}. 
We define \begin{equation} \label{Ctilde} \tilde{C}_\mathbf{e}ps := \frac{c_0}{\mathbf{e}ps^2}\abs{\Omega} + \frac{1}{2\mathbf{e}ps^2} \int_{\Omega_\mathbf{e}ps\setminus\Omega} \left(\int_{(\Omega_\mathbf{e}ps - x)/\mathbf{e}ps} K(z)\,\mathrm{d} z\right) {\mathrm{c}}dot u_{\mathbf{m}athrm{bd}}(x)^{\mathrm{o}times 2} \, \mathrm{d} x, \mathbf{e}nd{equation} where~$c_0$ is the same number as in~\mathbf{e}qref{psib}, and \begin{equation} \label{Heps} H_\mathbf{e}ps(x) := \frac{1}{2\mathbf{e}ps^2} \int_{\mathbf{m}athbb{R}^3\setminus(\Omega_\mathbf{e}ps - x)/\mathbf{e}ps} K(z) \, \mathrm{d} z \qquad \textrm{for any } x\in\mathbf{m}athbb{R}^3. \mathbf{e}nd{equation} \begin{lemma} \label{lemma:Etilde2} For any~$u\in\tilde{\mathbf{m}athscr{A}}_\mathbf{e}ps$, we have \[ \begin{split} \tilde{E}_\mathbf{e}ps(u) &= \frac{1}{4\mathbf{e}ps^2} \int_{\Omega_\mathbf{e}ps\times\Omega_\mathbf{e}ps} K_\mathbf{e}ps(x-y) {\mathrm{c}}dot\left(u(x) - u(y)\right)^{\mathrm{o}times 2} \mathrm{d} x \, \mathrm{d} y \\ &\qquad\qquad\qquad + \int_{\Omega} H_\mathbf{e}ps(x){\mathrm{c}}dot u(x)^{\mathrm{o}times 2} \, \mathrm{d} x + \frac{1}{\mathbf{e}ps^2} \int_{\Omega} \mathbf{p}si_b(u(x)) \, \mathrm{d} x \mathbf{e}nd{split} \] \mathbf{e}nd{lemma} \begin{proof} We inject the algebraic identity \[ -2K(x-y)u(x){\mathrm{c}}dot u(y) = K(x-y){\mathrm{c}}dot(u(x) - u(y))^{\mathrm{o}times 2} - K(x-y){\mathrm{c}}dot u(x)^{\mathrm{o}times 2} - K(x-y){\mathrm{c}}dot u(y)^{\mathrm{o}times 2} \] in the expression for~$\tilde{E}_\mathbf{e}ps$. Since~$K$ is even, we obtain \[ \begin{split} \tilde{E}_\mathbf{e}ps(u) &= F^\mathbf{m}athbf{n}l_\mathbf{e}ps(u, \, \Omega_\mathbf{e}ps) - \frac{1}{2\mathbf{e}ps^2} \int_{\Omega_\mathbf{e}ps\times\Omega_\mathbf{e}ps} K_\mathbf{e}ps(x -y){\mathrm{c}}dot u(x)^{\mathrm{o}times 2} \,\mathrm{d} x + \frac{1}{\mathbf{e}ps^2} \int_{\Omega} \mathbf{p}si_s(u(x)) \, \mathrm{d} x + \tilde{C}_\mathbf{e}ps \\ &= F^\mathbf{m}athbf{n}l_\mathbf{e}ps(u, \, \Omega_\mathbf{e}ps) - \frac{1}{2\mathbf{e}ps^2} \int_{\Omega} \left(\int_{(\Omega_\mathbf{e}ps - x)/\mathbf{e}ps} K(z) \, \mathrm{d} z\right) {\mathrm{c}}dot u(x)^{\mathrm{o}times 2} \,\mathrm{d} x \\ &\qquad - \frac{1}{2\mathbf{e}ps^2} \int_{\Omega_\mathbf{e}ps\setminus\Omega} \left(\int_{(\Omega_\mathbf{e}ps - x)/\mathbf{e}ps} K(z) \, \mathrm{d} z\right) {\mathrm{c}}dot u_{\mathbf{m}athrm{bd}}(x)^{\mathrm{o}times 2} \,\mathrm{d} x + \frac{1}{\mathbf{e}ps^2} \int_\Omega \mathbf{p}si_s(u(x)) \, \mathrm{d} x + \tilde{C}_\mathbf{e}ps \mathbf{e}nd{split} \] where~$F^\mathbf{m}athbf{n}l_\mathbf{e}ps$ is defined by~\mathbf{e}qref{locintenergy}. Due to~\mathbf{e}qref{Ctilde}, we deduce \[ \tilde{E}_\mathbf{e}ps(u) = F^\mathbf{m}athbf{n}l_\mathbf{e}ps(u, \, \Omega_\mathbf{e}ps) - \frac{1}{2\mathbf{e}ps^2} \int_{\Omega} \left(\int_{(\Omega_\mathbf{e}ps - x)/\mathbf{e}ps} K(z) \, \mathrm{d} z\right) {\mathrm{c}}dot u(x)^{\mathrm{o}times 2} \,\mathrm{d} x + \frac{1}{\mathbf{e}ps^2} \int_\Omega \left(\mathbf{p}si_s(u(x)) + c_0\right) \mathrm{d} x \] and, thanks to~\mathbf{e}qref{psib} and~\mathbf{e}qref{Heps}, the lemma follows. 
\mathbf{e}nd{proof} } \begin{lemma} \label{lemma:ELtilde} For any~$\mathbf{e}ps$, there exists a minimiser~$\tilde{u}_\mathbf{e}ps$ for~$\tilde{E}_\mathbf{e}ps$ in~$\tilde{\mathbf{m}athscr{A}}_\mathbf{e}ps$ and it satisfies the Euler-Lagrange equation, \begin{equation} \label{ELtilde} \mathscr{L}ambda(\tilde{u}_\mathbf{e}ps(x)) = \int_{\Omega_\mathbf{e}ps} K_\mathbf{e}ps(x-y)\tilde{u}_\mathbf{e}ps(y)\,\mathrm{d} y \mathbf{e}nd{equation} for a.e.~$x\in\Omega$. \mathbf{e}nd{lemma} The proof of Lemma~\ref{lemma:ELtilde} is identical to that of Proposition~\ref{prop:EL}, so we skip it for brevity. We remark that the equation~\mathbf{e}qref{ELtilde} can be written as \[ \mathscr{L}ambda(\tilde{u}_\mathbf{e}ps) = K_\mathbf{e}ps * (\tilde{u}_\mathbf{e}ps{\mathrm{c}}hi_\mathbf{e}ps) \qquad \textrm{a.e. on } \Omega, \] where~${\mathrm{c}}hi_\mathbf{e}ps$ is the characteristic function of~$\Omega_\mathbf{e}ps$. In particular, the uniform strict physicality of~$\tilde{u}_\mathbf{e}ps$ follows from~\mathbf{e}qref{ELtilde}, exactly as in Proposition~\ref{prop:physicality}. \begin{lemma} \label{lemma:H} Under the assumption~\mathbf{e}qref{hp:Omega_eps}, {\BBB the tensor field~$H_\mathbf{e}ps$ defined by~\mathbf{e}qref{Heps} satisfies \[ \sup_{x\in\Omega} \mathbf{m}athbf{n}orm{H_\mathbf{e}ps(x)} = \mathrm{o}(\mathbf{e}ps^{q - \beta q - 2}) \] as~$\mathbf{e}ps\to 0$.} \mathbf{e}nd{lemma} {\BBB Under the assumption~\mathbf{e}qref{hp:Omega_eps}, we have~$\beta < 1 - 7/(2q)$, which implies~$q - \beta q - 2 > 3/2$. In particular, $H_\mathbf{e}ps$ converges to zero uniformly in~$\Omega$ as~$\mathbf{e}ps\to 0$.} \begin{proof}[Proof of Lemma~\ref{lemma:H}] For any~$x\in\Omega$, we have {\BBB $B_{\tau\mathbf{e}ps^\beta}(x)\subseteq\Omega_\mathbf{e}ps$} by \mathbf{e}qref{hp:Omega_eps}, and hence {\BBB $B_{\tau\mathbf{e}ps^{\beta-1}}\subseteq(\Omega_\mathbf{e}ps - x)/\mathbf{e}ps$}, {\BBB $\mathbf{m}athbb{R}^3\setminus (\Omega_\mathbf{e}ps - x)/\mathbf{e}ps \subseteq \mathbf{m}athbb{R}^3\setminus B_{\tau\mathbf{e}ps^{\beta - 1}}$.} Then, the definition~\mathbf{e}qref{Heps} of~$H_\mathbf{e}ps$ gives \[ \begin{split} \mathbf{m}athbf{n}orm{H_\mathbf{e}ps(x)} &\leq \frac{1}{\mathbf{e}ps^2} \int_{\mathbf{m}athbb{R}^3\setminus (\Omega_\mathbf{e}ps - x)/\mathbf{e}ps} \mathbf{m}athbf{n}orm{K(z)} \mathrm{d} z \\ &\leq \frac{1}{\mathbf{e}ps^2} {\BBB \int_{\mathbf{m}athbb{R}^3\setminus B_{\tau\mathbf{e}ps^{\beta-1}}} \mathbf{m}athbf{n}orm{K(z)} \mathrm{d} z }\\ &\leq {\BBB \frac{\mathbf{e}ps^{q - \beta q - 2}}{\tau^q} \int_{\mathbf{m}athbb{R}^3\setminus B_{\tau\mathbf{e}ps^{\beta-1}}} \mathbf{m}athbf{n}orm{K(z)} \abs{z}^q \, \mathrm{d} z} \mathbf{e}nd{split} \] {\BBB and the lemma follows, due to~\mathbf{e}qref{hp:Omega_eps}}. \mathbf{e}nd{proof} \begin{lemma} \label{lemma:omega} Let~$\tilde{u}_\mathbf{e}ps$ be a minimiser for~$\tilde{E}_\mathbf{e}ps$ in~$\tilde{\mathbf{m}athscr{A}}_\mathbf{e}ps$, identified with its extension by~$u_{\mathbf{m}athrm{bd}}$ to~$\mathbf{m}athbb{R}^3$. 
Then, $\tilde{u}_\mathbf{e}ps$ is an~$\mathrm{o}mega$-minimiser for~$E_\mathbf{e}ps$ in~$\Omega$, where {\BBB $\mathrm{o}mega(s) := C s^{q - \beta q - 2}$, $C>0$ is a constant that depends only on~$\Omega$, $K$, $\mathbf{Q}Q$ and~$q$, $\beta$ are given by~\mathbf{e}qref{hp:Omega_eps}.} \mathbf{e}nd{lemma} {\BBB Lemma~\ref{lemma:omega} guarantees that our compactness result, Proposition~\ref{prop:compactness}, applies to minimisers of~$\tilde{E}_\mathbf{e}ps$.} \begin{proof}[Proof of Lemma~\ref{lemma:omega}] We write~$\Omega_\mathbf{e}ps^c := \mathbf{m}athbb{R}^3\setminus\Omega_\mathbf{e}ps$. Let~$\tilde{u}_\mathbf{e}ps$ be a minimiser for~$\tilde{E}_\mathbf{e}ps$ in~$\tilde{\mathbf{m}athscr{A}}_\mathbf{e}ps$. Let~$B := B_\rho(x_0)\subseteq\Omega$ be a ball, and let~$v\in L^\infty(\mathbf{m}athbb{R}^3, \, \mathbf{Q}Q)$ be such that~$v = \tilde{u}_\mathbf{e}ps$~a.e.~on~$\mathbf{m}athbb{R}^3\setminus B$. By comparing~\mathbf{e}qref{energy} with~\mathbf{e}qref{energytilde}, and using~\mathbf{e}qref{hp:K_even}, we obtain \begin{equation} \label{omega0} \begin{split} &E_\mathbf{e}ps(v) - \tilde{E}_\mathbf{e}ps(v) {\BBB - C_\mathbf{e}ps + \tilde{C}_\mathbf{e}ps} \\ &= -\frac{1}{\mathbf{e}ps^2} \int_{\Omega_\mathbf{e}ps\times\Omega_\mathbf{e}ps^c} K_\mathbf{e}ps(x-y)v(x){\mathrm{c}}dot v(y)\,\mathrm{d} x\,\mathrm{d} y -\frac{1}{2\mathbf{e}ps^2} \int_{\Omega_\mathbf{e}ps^c\times\Omega_\mathbf{e}ps^c} K_\mathbf{e}ps(x-y)v(x){\mathrm{c}}dot v(y)\,\mathrm{d} x\,\mathrm{d} y\\ &= -\frac{1}{\mathbf{e}ps^2} \int_{B\times\Omega_\mathbf{e}ps^c} K_\mathbf{e}ps(x-y)v(x){\mathrm{c}}dot u_{\mathbf{m}athrm{bd}}(y)\,\mathrm{d} x\,\mathrm{d} y -\frac{1}{\mathbf{e}ps^2} \int_{(\Omega\setminus B)\times\Omega_\mathbf{e}ps^c} K_\mathbf{e}ps(x-y)\tilde{u}_\mathbf{e}ps(x){\mathrm{c}}dot u_{\mathbf{m}athrm{bd}}(y)\,\mathrm{d} x\,\mathrm{d} y \\ &\qquad\qquad\qquad -\frac{1}{2\mathbf{e}ps^2} \int_{\Omega_\mathbf{e}ps^c\times\Omega_\mathbf{e}ps^c} K_\mathbf{e}ps(x-y)u_{\mathbf{m}athrm{bd}}(x){\mathrm{c}}dot u_{\mathbf{m}athrm{bd}}(y)\,\mathrm{d} x\,\mathrm{d} y \mathbf{e}nd{split} \mathbf{e}nd{equation} The second and third integral at the right-hand side are independent of~$v$. We bound the first integral by making the change of variable $y = x + \mathbf{e}ps z$ and applying Fubini theorem: \[ \begin{split} \frac{1}{\mathbf{e}ps^2} \abs{\int_{B\times\Omega_\mathbf{e}ps^c} K_\mathbf{e}ps(x-y)v(x){\mathrm{c}}dot u_{\mathbf{m}athrm{bd}}(y)\,\mathrm{d} x\,\mathrm{d} y} \leq \mathbf{m}athbf{n}orm{u_{\mathbf{m}athrm{bd}}}_{L^\infty(\mathbf{m}athbb{R}^3)} \int_{B} \mathbf{m}athbf{n}orm{H_\mathbf{e}ps(x)} \abs{v(x)} \,\mathrm{d} x \mathbf{e}nd{split} \] where~$H_\mathbf{e}ps$ is defined by~\mathbf{e}qref{Heps}. 
Since~$u_{\mathrm{bd}}$ and~$v$ both take values in the bounded set~$\QQ$, and since~$\abs{B}\lesssim\rho^3\lesssim\rho$, by Lemma~\ref{lemma:H} {\BBB we have} \begin{equation} \label{omega1} \begin{split} \frac{1}{\eps^2} \abs{\int_{B\times\Omega_\eps^c} K_\eps(x-y)v(x)\cdot u_{\mathrm{bd}}(y)\,\mathrm{d} x\,\mathrm{d} y} {\BBB \lesssim \eps^{q - \beta q - 2}\rho.} \end{split} \end{equation} From~\eqref{omega0} and~\eqref{omega1}, we deduce \begin{equation} \label{omega2} \begin{split} E_\eps(\tilde{u}_\eps) - \tilde{E}_\eps(\tilde{u}_\eps) \leq E_\eps(v) - \tilde{E}_\eps(v) + {\BBB C \eps^{q - \beta q - 2}\rho.} \end{split} \end{equation} On the other hand, \begin{equation} \label{omega3} \tilde{E}_\eps(\tilde{u}_\eps) \leq \tilde{E}_\eps(v), \end{equation} because~$\tilde{u}_\eps$ is a minimiser for~$\tilde{E}_\eps$ and~$v\in\tilde{\mathscr{A}}_\eps$. Combining~\eqref{omega2} and~\eqref{omega3}, the lemma follows. \end{proof} {\BBB Finally, we prove the analogue of the decay lemma, Lemma~\ref{lemma:decay}. } \begin{lemma} \label{lemma:decaytilde} {\BBB There exist~$\eta>0$, $\theta\in (0, \, 1/2)$ and~$\eps_*>0$ such that, for any ball~$B_{\rho}(x_0)\subseteq\Omega$, any~$\eps\in (0, \, \eps_*\rho)$ and any minimiser~$\tilde{u}_\eps$ of~$\tilde{E}_\eps$ in~$\tilde{\mathscr{A}}_\eps$ such that \[ F_\eps(\tilde{u}_\eps, \, B_{\rho}(x_0)) \leq \eta^2 \rho, \] there holds \begin{equation} \label{decaytilde} \begin{split} & F_\eps(\tilde{u}_\eps, \, B_{\theta\rho}(x_0)) \leq \frac{\theta}{2} F_\eps(\tilde{u}_\eps, \, B_{\rho}(x_0)) + \left(\frac{\eps}{\rho}\right)^{2q - 2\beta q - 4} \rho. \end{split} \end{equation} } \end{lemma} {\BBB The estimate~\eqref{decaytilde} is slightly worse than the corresponding estimate from Lemma~\ref{lemma:decay}, i.e.~Equation~\eqref{decay}, because the exponent of~$\eps/\rho$ at the right-hand side is strictly less than~$2q - 4$. Nevertheless, \eqref{decaytilde} is still enough for our purposes. Indeed, the assumption~$\beta< 1 - 7/(2q)$ implies that $\bar{q} := q - \beta q > 7/2$. Then, the proofs of Theorem~\ref{th:Holder} and Theorem~\ref{th:conv} carry over; it suffices to replace all the occurrences of~$q$ with~$\bar{q}$. In particular,} once Lemma~\ref{lemma:decaytilde} is proved, Theorem~\ref{th:Holder-tilde} and Theorem~\ref{th:conv-tilde} follow. \begin{proof}[Proof of Lemma~\ref{lemma:decaytilde}] {\BBB Suppose that~$B_\rho(x_0)$ is a ball contained in~$\Omega$ and that~$u\colon\Omega_\eps\to\QQ$. We consider~$U_\rho := (\Omega - x_0)/\rho$, $U_{\eps,\rho} := (\Omega_\eps - x_0)/\rho$ and define $u_{\rho}\colon U_\rho\to\QQ$ as $u_{\rho}(y) := u(x_0 + \rho y)$.
Then, \[ \begin{split} \rho^{-1}\tilde{E}(u) = F_{\mathbf{e}ps/\rho}^\mathbf{m}athbf{n}l(u_\rho, \, U_{\mathbf{e}ps,\rho}) + \frac{1}{(\mathbf{e}ps/\rho)^2} \int_{U_\rho} \mathbf{p}si_b(u_\rho(x)) \, \mathrm{d} x + \int_{U_\rho} H_{\mathbf{e}ps/\rho}^{\rho, x_0}(x) {\mathrm{c}}dot u_\rho(x)^{\mathrm{o}times 2} \, \mathrm{d} x \mathbf{e}nd{split} \] where~$F^\mathbf{m}athbf{n}l_\mathbf{e}ps$ is defined by~\mathbf{e}qref{locintenergy} and \[ H_{\mathbf{e}ps/\rho}^{\rho, x_0}(x) := \frac{1}{(\mathbf{e}ps/\rho)^2} \int_{\mathbf{m}athbb{R}^3\setminus (\Omega_\mathbf{e}ps - x_0 - \rho x)/\mathbf{e}ps}K(z) \, \mathrm{d} z = \rho^2 H_{\mathbf{e}ps}(x_0 + \rho x) \] Lemma~\ref{lemma:H} implies that \[ \sup_{x\in U_\rho}\|H_{\mathbf{e}ps/\rho}^{\rho, x_0}(x)\| = \mathrm{o}(\rho^2 \mathbf{e}ps^{q - \beta q - 2}) = \mathrm{o}\!\left((\mathbf{e}ps/\rho)^{q - \beta q - 2}\right) \] As a consequence, if we prove the lemma in case~$x_0 = 0$, $\rho = 1$, then the general statement will follow, by a scaling argument. Suppose the lemma does not hold. Then, for any~$\theta\in (0, \, 1/2)$, there exist a sequence~$\mathbf{e}ps_j\to 0$ and minimisers~$\tilde{u}_j$ of~$\tilde{E}_{\mathbf{e}ps_j}$ such that \begin{gather} \mathbf{e}ta_j^2 := F_{\mathbf{e}ps_j}(\tilde{u}_j, \, B_1) \to 0 \qquad \textrm{as } j\to+\infty, \label{smallenergytilde} \\ F_{\mathbf{e}ps_j} (\tilde{u}_j, \, B_\theta) > \frac{\theta\mathbf{e}ta_j^2}{2} + \mathbf{e}ps_j^{2q - 2\beta q - 4} \geq \frac{\theta\mathbf{e}ta_j^2}{2} + \mathbf{e}ps_j^{2q - 4} \label{nodecaytilde} \mathbf{e}nd{gather} Lemma~\ref{lemma:average} and Lemma~\ref{lemma:L2conv} carry over, so there exists a sequence of points~$\zeta_j\in\mathbf{m}athbb{N}N$ such that, up to a non-relabelled subsequence, $\zeta_j\to\zeta$ and \begin{equation} \label{decaytilde0} \tilde{v}_j := \frac{\tilde{u}_j - \zeta_j}{\mathbf{e}ta_j}\to \tilde{v} \qquad \textrm{strongly in } L^2(B_{1/2}) \textrm{ and a.e.} \mathbf{e}nd{equation} Moreover, the limit map~$\tilde{v}$ belongs to~$H^1(B_{1/2}, \, \mathbf{m}athrm{T}_{\zeta}\mathbf{m}athbb{N}N)$. Let~$s\in (0, \, 1/2)$ and let~$\tilde{v}^*\in H^1(B_{1/2}, \, \mathbf{m}athrm{T}_\zeta\mathbf{m}athbb{N}N)$ be such that~$\tilde{v}^* = \tilde{v}$ a.e.~in~$B_{1/2}\setminus\bar{B}_s$. We construct admissible competitors~$\tilde{\xi}_\mathbf{e}ps\in L^\infty(\Omega_\mathbf{e}ps, \, \mathbf{Q}Q)$ exactly as before, by applying Lemma~\ref{lemma:nonlocinterp}. We recall that there exist radii~$r$, $t$ with~$\mathbf{m}ax(1/2, \, s) < r < t < 1/2$ such that~$\tilde{\xi}_\mathbf{e}ps = \tilde{u}_\mathbf{e}ps$ out of~$B_t$ and \begin{equation} \label{decaytilde2} \frac{\tilde{\xi}_j - \zeta_j}{\mathbf{e}ta_j} \to \tilde{v}^* \qquad \textrm{strongly in } H^1(B_r) \mathbf{e}nd{equation} However, $\tilde{u}_j$ is now a minimiser of~$\tilde{E}_{\mathbf{e}ps_j}$, not of~$E_{\mathbf{e}ps_j}$. As a result, we have the inequality \[ \begin{split} F_{\mathbf{e}ps_j}(\tilde{u}_j, \, B_t) + \Gamma_{\mathbf{e}ps_j}(\tilde{u}_j, \, B_t, \, \Omega_\mathbf{e}ps\setminus B_t) &\leq F_{\mathbf{e}ps_j}(\tilde{\xi}_j, \, B_t) + \Gamma_{\mathbf{e}ps_j}(\tilde{\xi}_j, \, B_t, \, \Omega_\mathbf{e}ps\setminus B_t) \\ &\qquad + \int_{B_t} H_{\mathbf{e}ps_j}(x){\mathrm{c}}dot \left(\tilde{\xi}_j(x)^{\mathrm{o}times 2} - \tilde{u}_j(x)^{\mathrm{o}times 2}\right) \mathrm{d} x, \mathbf{e}nd{split} \] with an additional term at the right-hand side. (We recall that~$\Gamma_\mathbf{e}ps$ is defined by~\mathbf{e}qref{locdefect}.) 
 We claim that
 \begin{equation} \label{decaytilde1}
 I_j := \frac{1}{\eta^2_j} \int_{B_t} H_{\eps_j}(x) \cdot \left(\tilde{\xi}_j(x)^{\otimes 2} - \tilde{u}_j(x)^{\otimes 2}\right) \mathrm{d} x\to 0
 \end{equation}
 as~$j\to+\infty$. Once~\eqref{decaytilde1} is proved, the rest of the arguments carry over. Equation~\eqref{Heps} implies that the matrix~$H_\eps(x)$ is symmetric, for any~$x\in B_1$. Then,
 \[ I_j = \frac{1}{\eta^2_j} \int_{B_t} H_{\eps_j}(x) \left(\tilde{\xi}_j(x) - \tilde{u}_j(x)\right) \cdot \left(\tilde{\xi}_j(x) + \tilde{u}_j(x)\right) \mathrm{d} x. \]
 As both~$\tilde{\xi}_j$ and~$\tilde{u}_j$ take their values in the bounded set~$\QQ$, the H\"older inequality gives
 \[ I_j \lesssim \eta_j^{-2} \|H_{\eps_j}\|_{L^\infty(B_t)} \left(\|\tilde{\xi}_j - \zeta_j\|_{L^2(B_{1/2})} + \|\tilde{u}_j - \zeta_j\|_{L^2(B_{1/2})}\right). \]
 We can further estimate the right-hand side by applying~\eqref{decaytilde0}, \eqref{decaytilde2} and Lemma~\ref{lemma:H}:
 \[ I_j = \mathrm{o}\!\left(\frac{\eps_j^{q - \beta q - 2}}{\eta_j}\right). \]
 However, \eqref{smallenergytilde} and~\eqref{nodecaytilde} imply that~$\eta_j^2 \geq \eps_j^{2q - 2\beta q - 4}$, so~$\eta_j \geq \eps_j^{q - \beta q - 2}$, and~\eqref{decaytilde1} follows.
\end{proof}

\paragraph*{Acknowledgements.}
The authors are grateful to Arghir D. Zarnescu, who brought the problem to their attention. Part of this work was carried out while the authors were visiting the \emph{Centro Internazionale per la Ricerca Matematica} (CIRM) in Trento (Italy), supported by the Research-in-Pairs programme; the authors would like to thank the CIRM for its hospitality. This research was partially supported by the Basque Government through the BERC 2018-2021 programme, and by the Spanish Ministry of Economy and Competitiveness MINECO through the BCAM Severo Ochoa excellence accreditation SEV-2017-0718 and through the project MTM2017-82184-R, funded by (AEI/FEDER, UE) and with acronym ``DESFLU''. G.~C. was partially supported by GNAMPA-INdAM.

\end{document}
\begin{document}

\title{The compression body graph has infinite diameter}
\author{Joseph Maher, Saul Schleimer}
\date{\today}

\begin{abstract}
We show that the compression body graph has infinite diameter. \\
Subject code: 37E30, 20F65, 57M50.
\end{abstract}

\tableofcontents

\section{Introduction}

The curve complex is a simplicial complex whose vertices are isotopy classes of simple closed curves, and whose simplices are spanned by simple closed curves which may be realized disjointly in the surface. In this paper, we consider a related complex, the \emph{compression body graph}, defined by Biringer and Vlamis \cite{bv}*{page~94}, which we shall denote by $\mathcal{H}(S)$. Vertices of this graph are isomorphism classes of \emph{marked compression bodies}. A marked compression body $(U, f)$ is a non-trivial compression body $U$ together with a choice of homeomorphism $f \colon \partial_+ U \to S$, up to isotopy, from the upper boundary of the compression body to the surface $S$. Two marked compression bodies $(U, f)$ and $(V, g)$ are \emph{isomorphic} if they differ by a homeomorphism of the compression body. More precisely, there must be a homeomorphism $h \colon U \to V$ such that $f = g \circ h \vert_{\partial_+ U}$. Two compression bodies are \emph{adjacent} if one is contained in the other. More precisely, the compression body graph is a poset as follows: suppose that $(U, f)$ and $(V, g)$ are marked compression bodies, and $U' \subseteq U$ is a subcompression body of $U$. If there is a homeomorphism $h \colon U' \to V$ such that $f = g \circ h \vert_{\partial_+ U'}$, then $V < U$. Biringer and Vlamis \cite{bv}*{Theorem~1.1}, following Ivanov, showed that the simplicial automorphism group of this graph is equal to the mapping class group. We show:

\begin{theorem} \label{Thm:InfDiam}
The compression body graph $\mathcal{H}(S)$ is an infinite diameter Gromov hyperbolic metric space.
\end{theorem}

The compression body graph $\mathcal{H}(S)$ is quasi-isometric to the metric space obtained by electrifying the curve complex along each \emph{disc set}: the set of all simple closed curves that bound discs in a specific compression body. We will write $\pi_\mathcal{Y}$ for the inclusion map $X \hookrightarrow X_\mathcal{Y}$, which we shall also refer to as the \emph{projection map}. As the electrification of a Gromov hyperbolic space along uniformly quasi-convex subsets is Gromov hyperbolic, the compression body graph is Gromov hyperbolic. This follows from work of Bowditch \cite{bowditch-rel-hyp}, Kapovich and Rafi \cite{kapovich-rafi} and Masur and Minsky \cite{mm1, mm3}. See Section~\ref{section:outline} for further details.

We also study the action of the mapping class group on $\mathcal{H}(S)$. In Lemma~\ref{lemma:coarse} we show that for every genus $g \geqslant 2$, there are elements of the mapping class group of $S$ which act loxodromically on $\mathcal{H}(S)$. Furthermore, in Corollary~\ref{corollary:johnson} we show that every subgroup of the Johnson filtration contains elements which act loxodromically on $\mathcal{H}(S)$. In the final section, we use our methods to give an alternate proof of Theorem~\ref{T:bjm}: the stable lamination of a pseudo-Anosov element is contained in the limit set of a compression body $V$ if and only if some power of the pseudo-Anosov extends over a non-trivial subcompression body of $V$.
This was originally shown by Biringer, Johnson and Minsky \cite{bjm}*{Theorem 1}, using techniques from hyperbolic three-manifolds. It has also been shown by Ackermann \cite{ack}*{Theorem 1}, extending the methods of Casson and Long \cite{casson-long}. We also weaken the hypotheses of a result of Lubotzky, Maher and Wu \cite{lmw}*{Theorem 1}, showing that a random Heegaard splitting is hyperbolic with probability tending to one exponentially quickly, for a larger class of random walks than those considered in \cite{lmw}. The results of this paper have been used by Agol and Freedman~\cite{agol-freedman}, Burton and Purcell~\cite{burton-purcell}*{Theorem 4.12}, Dang and Purcell~\cite{dang-purcell}*{Theorem 1.2} and Ma and Wang~\cite{ma-wang}. Finally, Theorem~\ref{Thm:InfDiam} implies that many graphs, formed by considering compression bodies of restricted topological type, are also of infinite diameter; see Section~\ref{section:restricted} for further details.

\subsection*{Acknowledgements} \label{section:Acknowledgments}
We thank Ian Biringer for enlightening conversations. The first author acknowledges support from the Simons Foundation and PSC-CUNY. The second author acknowledges support from the EPSRC (EP/I028870/1). The authors further acknowledge the support of the NSF (DMS-1440140) while they were both in residence at the MSRI in Berkeley, California, during the Fall 2016 semester. The authors would like to thank the referee for helpful comments and suggestions.

\section{Outline} \label{section:outline}

We now give a brief outline of the main argument. The curve complex is Gromov hyperbolic, and the disc sets are quasi-convex. Thus, by electrifying the curve complex along the disc sets, we obtain a Gromov hyperbolic space $\mathcal{C}_\mathcal{D}(S)$ which is quasi-isometric to the compression body graph $\mathcal{H}(S)$. The electrification of a Gromov hyperbolic space along uniformly quasi-convex subsets is Gromov hyperbolic:

\begin{theorem} \label{theorem:quotient}
Let $X$ be a Gromov hyperbolic space and let $\mathcal{Y} = \{ Y_i\}_{i \in I}$ be a collection of uniformly quasi-convex subsets of $X$. Then $X_\mathcal{Y}$, the electrification of $X$ with respect to $\mathcal{Y}$, is Gromov hyperbolic. Furthermore, for any constants $K$ and $c$, there are constants $K'$ and $c'$ such that a $(K, c)$-quasi-geodesic in $X$ projects to a reparameterized $(K', c')$-quasi-geodesic in $X_\mathcal{Y}$.
\end{theorem}

The first statement is due to Bowditch \cite{bowditch-rel-hyp}*{Proposition 7.12}, and both statements follow immediately from Kapovich and Rafi \cite{kapovich-rafi}*{Corollary 2.4}. Masur and Minsky showed that the curve complex is Gromov hyperbolic \cite{mm1}*{Theorem 1.1}, and that the disc sets are quasi-convex \cite{mm3}*{Theorem~1.1}. We shall write $\mathcal{C}(S)$ for the curve graph of the surface $S$, and $\mathcal{D}$ for the collection of all disc sets in $\mathcal{C}(S)$. Therefore $\mathcal{C}_\mathcal{D}(S)$ denotes the curve complex electrified along disc sets, which is quasi-isometric to the compression body graph $\mathcal{H}(S)$. These two quasi-isometric spaces are Gromov hyperbolic, and have infinite diameter by Theorem~\ref{Thm:InfDiam}. Furthermore, work of Dowdall and Taylor \cite{dow-tay}*{Theorem 3.2} shows that the Gromov boundary of $\mathcal{C}_\mathcal{D}(S)$ is homeomorphic to the subset of $\partial \mathcal{C}(S)$ consisting of the complement of the limit sets of disc sets.
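As a toy illustration of electrification (the precise definition, with a cone point for each subset joined by edges of length one half, is given in Section~\ref{section:prelim}), the following Python sketch cones off a subset of a finite path graph and compares distances before and after; the graph, the subset and all names are invented for illustration only and play no role in the arguments of this paper.
\begin{verbatim}
# Toy illustration of electrifying a graph along a family of subsets.
# Edge lengths are 1, cone edges 1/2, so each coned subset has diameter 1.
import heapq, itertools

def dijkstra(adj, source):
    """Shortest-path distances from source in a weighted graph."""
    dist = {source: 0.0}
    counter = itertools.count()          # tie-breaker so the heap never compares vertices
    heap = [(0.0, next(counter), source)]
    while heap:
        d, _, v = heapq.heappop(heap)
        if d > dist.get(v, float("inf")):
            continue
        for w, length in adj[v]:
            nd = d + length
            if nd < dist.get(w, float("inf")):
                dist[w] = nd
                heapq.heappush(heap, (nd, next(counter), w))
    return dist

def electrify(edges, subsets):
    """Add a cone vertex y_i for each subset Y_i, joined to Y_i by edges of length 1/2."""
    new_edges = list(edges)
    for i, Y in enumerate(subsets):
        cone = ("cone", i)
        new_edges += [(cone, y, 0.5) for y in Y]
    return new_edges

def as_adj(edges):
    adj = {}
    for u, v, length in edges:
        adj.setdefault(u, []).append((v, length))
        adj.setdefault(v, []).append((u, length))
    return adj

# A path 0 - 1 - ... - 6 with unit edges, electrified along Y_0 = {1, 2, 3, 4, 5}.
path_edges = [(k, k + 1, 1.0) for k in range(6)]
X = as_adj(path_edges)
X_Y = as_adj(electrify(path_edges, [{1, 2, 3, 4, 5}]))

print(dijkstra(X, 0)[6])    # 6.0 in the original graph
print(dijkstra(X_Y, 0)[6])  # 3.0: enter Y_0 at 1, cross the cone, leave at 5
\end{verbatim}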
Any quasi-geodesic ray $\gamma \colon \mathbb{N} \to \mathcal{C}(S)$ projects to a reparameterized, and possibly finite diameter, quasi-geodesic ray in the electrification $\mathcal{C}_\mathcal{D}(S)$. In Section~\ref{section:stability}, we prove our stability result for quasi-geodesics in the curve complex $\mathcal{C}(S)$: if $\pi_\mathcal{H} \circ \gamma$ has finite diameter image in $\mathcal{C}_\mathcal{D}(S)$, then there is a constant $k$ and a handlebody $V$ such that $\gamma$ is contained in a $k$-neighbourhood of the disc set $\mathcal{D}(V)$ in $\mathcal{C}(S)$. In particular, this means that the ending lamination corresponding to $\gamma$ lies in the limit set of the disc set $\mathcal{D}(V)$. However, for a given handlebody $V$, the limit set of its disc set $\mathcal{D}(V)$ in the boundary of the curve complex has measure zero. As there are only countably many disc sets, the union of their limit sets also has measure zero. Therefore, there is a full measure set of minimal laminations disjoint from the union of the disc sets. Any one of these gives rise to a geodesic ray whose image in the electrification $\mathcal{C}_\mathcal{D}(S)$ has infinite diameter. In particular, $\mathcal{C}_\mathcal{D}(S)$ has infinite diameter.

We now give a brief sketch of the proof of the stability result. Suppose there is a geodesic ray $\gamma$ in the curve complex $\mathcal{C}(S)$, and a sequence of handlebodies $V_i$, such that the initial segment of $\gamma$ of length $i$ is contained in a $k'$-neighbourhood of $\mathcal{D}(V_i)$. In particular, for any $i$, there are infinitely many disc sets $\mathcal{D}(V_j)$ passing within distance $k'$ of both $\gamma_0$ and $\gamma_i$.

Recall that given two simple closed curves $a$ and $b$ on a surface $S$, we may \emph{surger} $a$, along an innermost arc of $b$, to produce a new curve $a'$, disjoint from $a$, and with smaller geometric intersection number with $b$. This is illustrated in Figure~\ref{fig:arc}. By iterating this procedure, we obtain a surgery sequence, which gives rise to a reparameterized quasi-geodesic in $\mathcal{C}(S)$ from $a$ to $b$. We call this a \emph{curve surgery sequence}. If the simple closed curves $a$ and $b$ bound discs in a handlebody $V$, then we may surger $a$ along innermost bigons of $b$, to produce a surgery sequence connecting $a$ and $b$ in which every surgery curve bounds a disc in $V$. We shall call such a surgery sequence a \emph{disc surgery sequence}.

Recall that a \emph{train track} on a surface $S$ is a smoothly embedded graph, such that the edges at each vertex are all mutually tangent, and there is at least one edge in each of the two possible directed tangent directions. Furthermore, there are no complementary regions which are nullgons, monogons, bigons or annuli. A \emph{split} of a train track $\tau$ is a new train track $\tau'$ obtained by one of the local modifications illustrated in Figure~\ref{pic:split}. We say that a sequence of train tracks $\{ \tau_j \}$ is a \emph{splitting sequence} if each $\tau_{j+1}$ is obtained as a split of $\tau_j$. A key result we use from \cite{mm3}*{page 319} says, roughly speaking, that we may choose a surgery sequence $\{ D_j \}$ connecting two discs in a common compression body such that there is a corresponding train track splitting sequence $\{ \tau_j \}$, so that for each $j$, the disc $D_j$ is dual to the train track $\tau_j$, and meets it in a single point at the switch.
We state a precise version of this as Proposition~\ref{prop:splitting seq} below.

Recall that given an essential subsurface $X$ contained in $S$, there is a \emph{subsurface projection map} from a subset of $\mathcal{C}(S)$ to $\mathcal{C}(X)$. Roughly speaking, this map sends a simple closed curve $a$, meeting $X$ essentially, to a simple closed curve $a' \subset X$ which is disjoint from some arc of $a \cap X$. See Section~\ref{section:prelim} for a precise definition. We may extend the definition of subsurface projection from simple closed curves to train tracks, by instead projecting the \emph{vertex cycles} of the train track $\tau$. See Section~\ref{section:tt} for further details. The collection of vertex cycles $\{ \Lambda(\tau_j) \}$ for a splitting sequence $\{ \tau_j \}$ projects to a reparameterized quasi-geodesic in $\mathcal{C}(S)$.

We say that three (ordered) points $x$, $y$ and $z$ in a metric space satisfy the \emph{reverse triangle inequality} with constant $K$ if $d(x, y) + d(y, z) \leqslant d(x, z) + K$. By the Morse lemma, given constants $\delta$, $Q$ and $c$, there is a constant $K$, such that if $y$ lies on a $(Q, c)$-quasi-geodesic between $x$ and $z$ in a $\delta$-hyperbolic space, then $x$, $y$ and $z$ satisfy the reverse triangle inequality. Given three (ordered) points $x$, $y$ and $z$, we say that $y$ is \emph{$K$-intermediate} with respect to $x$ and $z$ if $x$, $y$ and $z$ satisfy the reverse triangle inequality, and furthermore $d(x, y) \geqslant K$ and $d(y, z) \geqslant K$. Given a train track sequence $\{ \tau_j \}$, we say that a particular train track $\tau_j$ is \emph{$K$-intermediate} if the vertex cycles of $\tau_j$ are distance at least $K$ in the curve complex from the vertex cycles of both $\tau_0$ and $\tau_n$.

If $K$ is sufficiently large, then for any two train track sequences $\{ \tau_j \}$ and $\{ \tau'_j \}$ starting near $\gamma_0$ and ending near $\gamma_r$, for any subsurface $X$ in $S$, and for any pair of $K$-intermediate train tracks $\tau_j$ and $\tau'_k$, the distance between the subsurface projections of $\tau_j$ and $\tau'_k$ in $\mathcal{C}(X)$ is bounded in terms of $d_X(\gamma_0, \gamma_r)$. In particular, using the Masur--Minsky distance formula, we find that every train track sequence connecting $\gamma_0$ and $\gamma_r$ passes within a bounded marking distance of any $K$-intermediate train track. As the marking complex is locally finite, infinitely many train track sequences must share a common train track, and this implies that infinitely many of the handlebodies $V_i$ must share a common disc $D_r$. Furthermore, the boundary of $D_r$ lies close to the geodesic from $\gamma_0$ to $\gamma_r$.

We iterate this argument to obtain the following. Choose an increasing sequence of numbers $\{ r_n \}$, with the difference between consecutive numbers fixed and sufficiently large. At each stage, we may assume we have passed to an infinite subset of the handlebodies, so that each contains the discs $D_{r_1}, \ldots, D_{r_n}$. These discs lie in a common compression body $W_n$, and the geodesic from $\gamma_0$ to $\gamma_{r_n}$ is contained in a bounded neighbourhood of $\mathcal{D}(W_n)$. The compression bodies $W_n$ form an increasing sequence $W_n < W_{n+1}$, which eventually stabilizes to a constant sequence $W$.
The entire geodesic ray $\gamma$ is then contained in a bounded neighbourhood of $\mathcal{D}(W)$, by the stability result.

In the next section, Section~\ref{section:prelim}, we review the results we will use, and set up some notation. In Section~\ref{section:stability}, we provide the details of the proof of the stability result. In Section~\ref{section:diameter}, we use the stability result to prove Theorem~\ref{Thm:InfDiam}. In Section~\ref{section:loxodromics}, we show that there are many loxodromic isometries of the compression body graph. In the final section, Section~\ref{section:applications}, we give several applications.

\section{Preliminaries} \label{section:prelim}

In this section we review some background material and set up some notation.

\subsection{The mapping class group}

Let $S$ be a compact connected oriented surface, possibly with boundary. The \emph{mapping class group} $\text{Mod}(S)$ is the group of homeomorphisms of $S$, up to isotopy. We shall fix a finite generating set for the mapping class group, and we shall write $d_{\text{Mod}}(g, h)$ for the corresponding word metric on $\text{Mod}(S)$. We say a simple closed curve in $S$ is \emph{peripheral} if it cobounds an annulus together with one of the boundary components of $S$. We say a simple closed curve in $S$ is \emph{essential} if it does not bound a disc and is not peripheral.

Given two finite collections of essential curves $\mu$ and $\mu'$, we may extend the geometric intersection number from single curves to finite collections by
\[ i(\mu, \mu') = \sum_{a \in \mu, \, a' \in \mu'} i(a, a'). \]
We say a set of curves $\mu$ is a \emph{marking} of $S$ if the set of curves \emph{fills} the surface: that is, for all curves $a \in \mathcal{C}(S)$ we have $i(a, \mu) > 0$. We say a marking $\mu$ is an \emph{$L$-marking} if $i(\mu, \mu) \leqslant L$. We say two markings $\mu$ and $\mu'$ are \emph{$L'$-adjacent} if $i(\mu, \mu') \leqslant L'$. Define $\mathcal{M}_{L, L'}(S)$ to be the graph whose vertices are (isotopy classes of) $L$-markings and whose edges connect pairs of $L$-markings which are $L'$-adjacent. Masur and Minsky \cite{mm2}*{Section 7.1} show that, for sufficiently large constants $L$ and $L'$, the \emph{marking graph} $\mathcal{M}(S) = \mathcal{M}_{L, L'}(S)$ is locally finite, connected and quasi-isometric to the Cayley graph of the mapping class group $(\text{Mod}(S), d_\text{Mod})$. We shall fix a pair of constants $L$ and $L'$ with this property for the remainder of this paper, and will suppress these constants from our notation.

The \emph{complex of curves} $\mathcal{C}(S)$ is a simplicial complex whose vertices are isotopy classes of essential curves, and whose simplices are spanned by disjoint (perhaps after isotopy) curves. We need to modify this definition for certain low-complexity surfaces, as we now describe; the modified edge criteria are summarized in the sketch below. In the case of a once-holed torus, we connect two curves by an edge if they have geometric intersection number one. In the case of a four-holed sphere, we connect two curves by an edge if they have geometric intersection number two. Finally, if the surface $S$ is an annulus $A$, then we define the curve complex $\mathcal{C}(A)$ as follows: the vertices consist of properly embedded essential arcs up to homotopies fixing the endpoints, and two arcs are connected by an edge if they may be realized disjointly in the interior of the annulus.
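For concreteness, the edge criteria for the non-annular cases above can be phrased as a predicate on the geometric intersection number; the following Python sketch records this bookkeeping only, with ad hoc string labels for the surface types, and does not attempt to compute intersection numbers of actual curves.
\begin{verbatim}
# Edge criteria in the curve complex for the low-complexity cases described above,
# phrased as a predicate on the geometric intersection number i(a, b).
# The surface-type labels are ad hoc strings used only in this sketch.

def span_edge(surface_type, i_ab):
    if surface_type == "once-holed torus":
        return i_ab == 1
    if surface_type == "four-holed sphere":
        return i_ab == 2
    # Generic case: two vertices are joined when the curves can be made disjoint.
    return i_ab == 0

print(span_edge("generic", 0),
      span_edge("once-holed torus", 1),
      span_edge("four-holed sphere", 1))   # True True False
\end{verbatim}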
We shall consider the complex of curves as a metric space in which each edge has length one. We will write $d_S$ for distance in the complex of curves. We extend the definition of the metric from points to finite sets by setting
\[ d_S(A, B) = \min_{a \in A, \, b \in B} d_S(a, b). \]

Suppose that $S$ is not an annulus. Then a subsurface $Y \subset S$ is \emph{peripheral} if it is an annulus with peripheral boundary. A subsurface $Y \subset S$ is \emph{essential} if it is not peripheral and if all of its boundary components are essential or peripheral. Let $Y \subset S$ be an essential subsurface. We will write $d_Y$ for the metric in $\mathcal{C}(Y)$. Let $Y_\varnothing$ be the set of vertices of $\mathcal{C}(S)$ corresponding to essential curves which may be isotoped to be disjoint from $Y$. There is a coarsely well defined \emph{subsurface projection} $\pi_Y \colon \mathcal{C}(S) - Y_\varnothing \to \mathcal{C}(Y)$, which we now define. We say a properly embedded arc in a surface $S$ is \emph{essential} if it does not bound a properly embedded bigon together with a subarc of the boundary of $S$. Let $\mathcal{AC}(S)$ be the \emph{arc and curve complex} of $S$: the simplicial complex whose vertices are isotopy classes of essential curves and properly embedded essential simple arcs. Thus $\mathcal{AC}(S)$ contains $\mathcal{C}(S)$ as a subcomplex, and this inclusion is a quasi-isometry. Let $S^Y$ be the cover of $S$ corresponding to $Y$. The surface $Y$ is homeomorphic to the Gromov closure of $S^Y$, so we may identify $\mathcal{AC}(Y)$ with $\mathcal{AC}(S^Y)$. For any essential curve or arc $a$, let $a^Y$ be the full preimage of $a$ in $S^Y$. Define $\pi_Y(a)$ to be the set of essential components of $a^Y$ in $S^Y$. So $\pi_Y(a)$ is either empty, or is a simplex in $\mathcal{AC}(Y)$. As $\mathcal{AC}(Y)$ is quasi-isometric to $\mathcal{C}(Y)$, this gives a coarsely well-defined map to $\mathcal{C}(Y)$ in the latter case.

We define the \emph{cut-off function} $\lfloor \cdot \rfloor_c$ on $\mathbb{R}$ by $\lfloor x \rfloor_c = x$ if $x \geqslant c$ and zero otherwise. Given a set $X$, two functions $f$ and $g$ on $X \times X$, and two constants $K \geqslant 1$ and $c > 0$, we say that $f$ and $g$ are \emph{$(K, c)$-coarsely equivalent}, denoted $f \approx_{(K, c)} g$, if for all $x$ and $y$ in $X$ we have
\[ \tfrac{1}{K} f(x, y) - \tfrac{c}{K} \leqslant g(x, y) \leqslant K f(x, y) + c. \]
We now state the \emph{distance estimate} due to Masur and Minsky.

\begin{theorem} \cite{mm2}*{Theorem 6.12} \label{theorem:mm dist}
For any surface $S$ there is a constant $M_0$, such that for any $M \geqslant M_0$, there are constants $K$ and $c$, such that for any markings $\mu$ and $\nu$,
\[ d_{\mathcal{M}(S)}( \mu, \nu) \approx_{(K, c)} \sum_X \lfloor d_X( \mu, \nu ) \rfloor_M. \]
\end{theorem}

That is, the distance in the marking complex is coarsely equivalent to the (cut-off) sum of subsurface projections. Note that there are only finitely many non-zero terms on the right-hand side.

\subsection{Compression bodies and disc sets}

Let $S$ be a closed connected oriented surface of genus $g$. A \emph{compression body} $V$ is a compact orientable three-manifold obtained from $S \times I$ by attaching two-handles to $S \times \{ 0 \}$, and capping off any newly created two-sphere components with three-balls.
In particular, if $S$ is a two-sphere, we do not cap off $S \times \{ 1 \}$. The genus $g$ surface $S \times \{ 1 \}$ is the \emph{upper boundary} $\partial_+ V$, while the other boundary components make up the \emph{lower boundary} $\partial_- V$. The lower boundary need not be connected. If $\partial_- V$ is empty, then $V$ is a \emph{handlebody}. The \emph{trivial compression body} has no two-handles, and is homeomorphic to $S \times I$. A \emph{marked compression body} is a pair $(V, f)$ where $V$ is a compression body and $f \colon \partial_+ V \to S$ is a homeomorphism. Two marked compression bodies $(V, f)$ and $(W, g)$ are \emph{isomorphic} if there is a homeomorphism $h \colon V \to W$ such that $g \circ (h |_{\partial_+ V}) = f$. In what follows, we will suppress the marking $f$ and assume that $\partial_+ V$ and $S$ are actually equal.

Biringer and Vlamis define the \emph{compression body graph} $\mathcal{H}(S)$ to be the graph whose vertices are isomorphism classes of non-trivial marked compression bodies. Here $V$ and $W$ are adjacent if either $V < W$ or $W < V$ \cite{bv}*{page 94}. They show that the simplicial automorphism group of $\mathcal{H}(S)$ is equal to the mapping class group $\text{Mod}(S)$ \cite{bv}*{Theorem 1.1} for genus at least three. In genus two, the mapping class group surjects onto the automorphism group, with kernel generated by the hyperelliptic involution.

We say a disc $D$ in a marked compression body $(V, f)$ is \emph{essential} if it is properly embedded in $V$. In this case, its boundary is an essential curve in the upper boundary $\partial_+ V$. We say an essential curve in $S$ bounds a disc in $V$ if it is the image under $f$ of the boundary of an essential disc in $V$.

\begin{definition}
The \emph{disc set} $\mathcal{D}(V)$ of a marked compression body $(V, f)$ is the collection of all essential curves in $S$ which bound discs in $V$.
\end{definition}

A subset $Y$ of a geodesic metric space is \emph{$Q$-quasi-convex} if for any pair of points $x$ and $y$ in $Y$, any geodesic connecting $x$ and $y$ is contained in a $Q$-neighbourhood of $Y$. Masur and Minsky \cite{mm3}*{Theorem 1.1} showed that there is a $Q$ such that the disc set $\mathcal{D}(V)$ is a $Q$-quasi-convex subset of $\mathcal{C}(S)$.

Given a metric space $(X, d)$, and a collection of subsets $\mathcal{Y} = \{ Y_i \}_{i \in I}$, the \emph{electrification} of $X$ with respect to $\mathcal{Y}$ is the metric space $X_\mathcal{Y}$ obtained by adding a new vertex $y_i$ for each set $Y_i$, and coning off $Y_i$ by attaching edges of length $\tfrac{1}{2}$ from each $y \in Y_i$ to $y_i$; in particular, the image of each set $Y_i$ in $X_\mathcal{Y}$ has diameter one. We shall write $\mathcal{D}(S)$ to denote the collection of all disc sets of all non-trivial compression bodies $V$ with boundary $S$. The compression body graph $\mathcal{H}(S)$ is quasi-isometric to the curve complex electrified along all the disc sets, namely $\mathcal{C}(S)_{\mathcal{D}(S)}$. We remark that there is another natural space quasi-isometric to $\mathcal{C}(S)_{\mathcal{D}(S)}$, known as the \emph{graph of handlebodies}, whose vertices are isomorphism classes of marked handlebodies and whose edges are distinct pairs $\{ V, W \}$ for which $\mathcal{D}(V)$ has non-empty intersection with $\mathcal{D}(W)$.

\subsection{Surgery sequences}

In this section we recall the definition of surgery sequences for simple closed curves and for discs.
Since we restrict our attention to \emph{closed} surfaces, our discussion is simpler than the more general case of compact surfaces with boundary. In subsequent sections we may abuse notation by referring to these as just surgery sequences, if it is clear from context whether we mean curves or discs.

\begin{definition}
Let $a$ be an essential curve in $S$, and let $b'$ be a simple arc whose endpoints lie on $a$, and whose interior is disjoint from $a$. Furthermore, suppose that $b'$ is essential in $S - a$. The endpoints of $b'$ divide $a$ into two arcs with common endpoints, $a'$ and $a''$, say. A simple closed curve $c$ is said to be produced by \emph{(arc) surgery of $a$ along $b'$} if $c$ is homotopic to either of the simple closed curves $a' \cup b'$ or $a'' \cup b'$.
\end{definition}

\begin{definition}
Let $a$ and $b$ be essential simple closed curves in minimal position with $i(a, b) \geqslant 2$. An \emph{innermost arc} of $b$ with respect to $a$ is a subarc $b' \subset b$ whose endpoints lie on $a$, and whose interior is disjoint from $a$. A simple closed curve $c$ is said to be produced by \emph{(curve) surgery of $a$ along $b$} if $c$ is produced by surgery of $a$ along $b'$, for some choice of innermost arc $b'$ in $b$.
\end{definition}

If $c$ is produced by surgery of $a$ along $b$, then the number of intersections of $c$ with $b$ is strictly less than the number of intersections of $a$ with $b$.

\begin{definition}
Given a pair of essential simple closed curves $a$ and $b$, a \emph{(curve) surgery sequence} connecting $a$ and $b$ is a sequence of simple closed curves $\{ a_i \}_{i = 0}^{n-1}$, such that $a_0 = a$, the final curve $a_{n-1}$ is disjoint from $b$, and each $a_{i+1}$ is produced from $a_i$ by a surgery of $a_i$ along $b$.
\end{definition}

\begin{definition}
Let $D$ and $E$ be essential discs in a compression body $V$ in minimal position, and let $E'$ be an outermost bigon of $E$ with respect to the arcs of $D \cap E$. The arc of intersection between $E'$ and $D$ divides $D$ into two discs, $D'$ and $D''$, say. We say that a disc $F$ is produced by \emph{(disc) surgery of $D$ along $E'$} if $F$ is homotopic to either of the discs $D' \cup E'$ or $D'' \cup E'$, for some choice of outermost bigon $E'$ contained in $E$.
\end{definition}

The disc $F$ produced by surgery of $D$ along $E'$ is disjoint from $D$, and is essential. To see this, suppose that the disc $F$ is inessential, that is, $\partial F$ bounds a disc $C$ in $S$. We can then ambiently isotope $\partial E' \cap S$ across $C$, reducing the number of intersections between $\partial D$ and $\partial E$, a contradiction.

\begin{definition}
Given a pair of essential discs $D$ and $E$ in minimal position, a \emph{(disc) surgery sequence} connecting $D$ and $E$ is a sequence of essential properly embedded discs $\{ D_i \}_{i = 0}^{n-1}$, such that $D_0 = D$, the final disc $D_{n-1}$ is disjoint from $E$, and each $D_{i+1}$ is produced from $D_i$ by a (disc) surgery of $D_i$ along $E$.
\end{definition}

We remark that if $\{ D_i \}_{i=0}^{n-1}$ is a (disc) surgery sequence, then $\{ \partial D_i \}_{i=0}^{n-1}$ is a (curve) surgery sequence.

\begin{proposition}
Let $V$ be a compression body. Any two curves in its disc set $\mathcal{D}(V)$ are connected by a disc surgery sequence.
\end{proposition}

\begin{proof}
Let $D$ and $E$ be two essential discs in a compression body $V$, and assume they intersect in minimal position. Choose an outermost bigon $E'$ of $E$ with respect to the arcs $D \cap E$. We may surger the disc $D$ along $E'$, which produces two new discs in $V$ disjoint from $D$. Call one of these $D_1$. The disc $D_1$ is disjoint from $D$ and has fewer intersections with $E$. This process terminates after finitely many disc surgeries, and gives a disc surgery sequence $\{ D_i \}_{i=0}^{n-1}$ connecting $D$ and $E$.
\end{proof}

\subsection{Train tracks} \label{section:tt}

We briefly recall some of the results we use about train tracks on surfaces. For more details see, for example, Penner and Harer \cite{ph}. A \emph{pre-train track} $\tau$ is a smoothly embedded finite graph in $S$ such that the edges at each vertex are all mutually tangent, and there is at least one edge in each of the two possible directed tangent directions. The vertices are commonly referred to as \emph{switches} and the edges as \emph{branches}. We will always assume that all switches have valence at least three. A trivalent switch is illustrated below in Figure~\ref{fig:switch}. If none of the complementary regions of $\tau$ in $S$ are nullgons, monogons, bigons or annuli, then we say that $\tau$ is a \emph{train track}. Up to the action of the mapping class group, there are only finitely many train tracks in $S$.

\begin{figure}[H]
\begin{center}
\begin{tikzpicture}[scale=0.3]
\draw [thick] (20,13) node [left] {$a$} -- (22.5, 13) .. controls (23.5,13) and (25,13) .. (28,10) node [right] {$b$};
\draw [thick] (20,13) -- (22.5, 13) .. controls (23.5,13) and (25,13) .. (28,16) node [right] {$c$};
\end{tikzpicture}
\end{center}
\caption{A trivalent switch for a train track.}
\label{fig:switch}
\end{figure}

An assignment of non-negative numbers to the branches of $\tau$, known as \emph{weights}, satisfies the \emph{switch equality} if, at each switch, the sums of the weights in the two possible directed tangent directions are equal: that is, $a = b + c$ in Figure~\ref{fig:switch} above. A weighted train track defines a measured lamination on the surface. We say that the corresponding lamination is \emph{carried} by the train track. A train track $\tau$ determines a polytope of projectively measured laminations $P(\tau) \subset \mathbb{P}\mathrm{ML}(S)$ carried by $\tau$. Let $\Lambda(\tau)$ be the set of vertices of $P(\tau)$. Every $v \in \Lambda(\tau)$ gives a \emph{vertex cycle}: a simple closed curve, carried by $\tau$, that puts weight at most two on each branch of $\tau$. It follows that the set $\Lambda(\tau)$ gives a finite set of curves in $\mathcal{C}(S)$, and these curves have bounded pairwise geometric intersection numbers. A train track $\tau$ is \emph{filling} if $\Lambda(\tau)$ is a marking. As there are only finitely many train tracks up to the action of the mapping class group, these bounds depend only on the surface $S$.

A simple closed curve $a$ is \emph{dual} to a track $\tau$ if $a$ misses the switches of $\tau$, crosses the branches of $\tau$ transversely, and forms no bigons with $\tau$. We say that a simple closed curve $a$ is \emph{switch-dual} to $\tau$ if $a$ meets the train track transversely at exactly one point of $\tau$, which is a switch, and forms no bigons with $\tau$.
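Once a train track is recorded combinatorially, the switch equalities are straightforward to verify mechanically; the following Python sketch does this for an invented example with a single trivalent switch, as in Figure~\ref{fig:switch}. The branch names and weights are illustrative only.
\begin{verbatim}
# Checking the switch equalities for a weighted train track, recorded combinatorially.
# Each switch is a pair of lists of branch ends (one list per tangent direction);
# a branch may appear more than once if both of its ends come in on the same side.
# The example below is invented for illustration.

def satisfies_switch_equalities(switches, weights):
    """True if, at every switch, the weights on the two sides have equal sums."""
    return all(sum(weights[b] for b in side_a) == sum(weights[b] for b in side_b)
               for side_a, side_b in switches)

# A trivalent switch as in Figure fig:switch: branch a on one side, b and c on the other.
switches = [(["a"], ["b", "c"])]

print(satisfies_switch_equalities(switches, {"a": 5, "b": 2, "c": 3}))  # True: 5 = 2 + 3
print(satisfies_switch_equalities(switches, {"a": 4, "b": 2, "c": 3}))  # False
\end{verbatim}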
A track $\tau$ is \emph{large} if all components of $S - \tau$ are discs or peripheral annuli. A track $\tau$ is \emph{recurrent} if for every branch $b \subset \tau$ there is a curve $\alpha \in P(\tau)$ putting positive weight on $b$. A track $\tau$ is \emph{transversely recurrent} if for every branch $b \subset \tau$ there is a curve $\beta$ dual to $\tau$, such that $\beta$ crosses $b$.

\begin{lemma} \label{Lem:TracksFinite}
Given a surface $S$, there is a constant $N$ with the following property. For any marking $\mu$ of $S$, there are at most $N$ non-isotopic filling train tracks $\tau$ with $\Lambda(\tau) = \mu$.
\end{lemma}

\begin{proof}
There are only finitely many isotopy classes of filling train tracks up to the action of the mapping class group. If $\tau = g \sigma$, for some element of the mapping class group $g \in \text{Mod}(S)$, and $\Lambda(\tau) = \Lambda(\sigma) = \mu$, then $g$ preserves the collection of curves $\mu$. Note that the collection of curves $\mu$ fills $S$ and has finite total self-intersection number depending only on $S$. We deduce that there are only finitely many such mapping classes $g$.
\end{proof}

A \emph{split} of a train track $\tau$ produces a new train track $\sigma$ by one of the local modifications illustrated in Figure~\ref{pic:split} below. Here a subset of $\tau$ diffeomorphic to the top configuration is replaced with one of the lower three configurations.

\begin{figure}[H]
\begin{center}
\begin{tikzpicture}[scale=0.3]
\def\tangent{ \draw[thick] (-2, 1) .. controls (-1.5, 0.25) and (-1, 0) .. (0, 0); }
\def\segment{ \tangent \draw[thick] (0, 0) -- (4, 0); \begin{scope}[xshift=4cm,x=-1cm] \tangent \end{scope} }
\segment
\begin{scope}[y=-1cm] \segment \end{scope}
\def\split{ \begin{scope}[yshift=-5cm] \segment \end{scope} \begin{scope}[yshift=-6cm,y=-1cm] \segment \end{scope} }
\split
\begin{scope}[xshift=-10cm] \split \draw[thick] (0, -6) .. controls (2, -6) and (2, -5) .. (4, -5); \end{scope}
\begin{scope}[xshift=10cm] \split \draw[thick] (0, -5) .. controls (2, -5) and (2, -6) .. (4, -6); \end{scope}
\end{tikzpicture}
\end{center}
\caption{Splitting a train track.}
\label{pic:split}
\end{figure}

A \emph{shift} for a train track is the local modification given in Figure~\ref{pic:shift}.

\begin{figure}[H]
\begin{center}
\begin{tikzpicture}[scale=0.3]
\def\tangent{ \draw[thick] (-2, 1) .. controls (-1.5, 0.25) and (-1, 0) .. (0, 0); }
\def\segment{ \tangent \draw[thick] (0, 0) -- (4, 0); \begin{scope}[xshift=4cm,x=-1cm] \tangent \end{scope} }
\tangent
\draw [thick] (-4, 0) -- (6, 0);
\begin{scope}[y=-1cm, xshift=4cm] \tangent \end{scope}
\begin{scope}[xshift=12cm]
\begin{scope}[y=-1cm] \tangent \end{scope}
\draw [thick] (-4, 0) -- (6, 0);
\begin{scope}[xshift=4cm] \tangent \end{scope}
\end{scope}
\end{tikzpicture}
\end{center}
\caption{Shifting for a train track.}
\label{pic:shift}
\end{figure}

A \emph{tie neighbourhood} $N(\tau)$ for a train track $\tau$ is a union of rectangles, as follows. For each switch $s$ there is a rectangle $R(s)$, and for each branch $b$ there is a rectangle $R(b)$.
All rectangles are foliated by vertical arcs called \emph{ties}. We glue a vertical side of a branch rectangle to a subset of a vertical side of a switch rectangle, as determined by the combinatorics of $\tau$. See Figure~\ref{fig:tie} for a local picture near a trivalent switch.

\begin{figure}[H]
\begin{center}
\begin{tikzpicture}[scale=0.5]
\foreach \x in {0,0.5,...,9} { \draw (\x, 0) -- (\x, 6); };
\filldraw [white] (6, 2) rectangle (9,4);
\draw (0, 0) -- (9, 0) -- (9, 2) -- (6, 2) -- (6, 4) -- (9, 4) -- (9, 6) -- (0, 6) -- cycle;
\draw [thick] (0, 3) -- (1, 3) .. controls (3, 3) and (5, 1) .. (7, 1) -- (9, 1);
\draw [thick] (0, 3) -- (1, 3) .. controls (3, 3) and (5, 5) .. (7, 5) -- (9, 5);
\end{tikzpicture}
\end{center}
\caption{A tie neighbourhood for a train track.}
\label{fig:tie}
\end{figure}

We say a train track or simple closed curve $\sigma$ is \emph{carried} by $\tau$ if $\sigma$ may be isotoped to lie in $N(\tau)$, such that $\sigma$ is transverse to the ties of $N(\tau)$. We denote this by either $\sigma \prec \tau$ or $\tau \succ \sigma$. If $\sigma$ is a train track obtained by splitting and shifting $\tau$, then $\tau \succ \sigma$. A \emph{train track carrying sequence} is a sequence of train tracks $\tau_0 \succ \tau_1 \succ \cdots$, such that each $\tau_i$ carries $\tau_{i+1}$. We may also denote a train track splitting sequence by $\{ \tau_i \}_{i = 0}^n$, where $n \in \mathbb{N}_0 \cup \{ \infty \}$. We say that $\{ \tau_i \}_{i = 0}^n$ is \emph{$K$-connected} if
\[ d_S( \Lambda(\tau_i), \Lambda(\tau_{i+1})) \leqslant K \]
for all $i$.

Masur and Minsky \cite{mm3}*{Theorem 1.3} showed that train track splitting sequences give rise to reparameterized quasi-geodesics in the curve complex $\mathcal{C}(S)$. Using this, Masur, Mosher and Schleimer show that train track splitting sequences give reparameterized quasi-geodesics under subsurface projection.

\begin{theorem} \cite{mms}*{Theorem 5.5} \label{theorem:qg}
For any surface $S$ with $\xi(S) \geqslant 1$ there is a constant $Q = Q(S)$ with the following property: for any sliding and splitting sequence $\{ \tau_i \}^n_{i=0}$ of birecurrent train tracks in $S$ and for any essential subsurface $X \subset S$, if $\pi_X( \tau_i ) \not = \varnothing$ then the sequence $\{ \pi_X ( \Lambda ( \tau_i ) ) \}^n_{i=0}$ is a $Q$-reparameterized quasi-geodesic in the curve complex $\mathcal{C}(X)$.
\end{theorem}

We abuse notation by writing $d_X(\tau, \cdot)$ for $d_X(\pi_X(\Lambda(\tau)), \cdot)$, where $X$ is an essential subsurface of $S$. Given a train track carrying sequence $\{ \tau_i \}_{i = 0}^n$, we say that a train track $\tau_i$ in the sequence is \emph{$K$-intermediate} if $d_S(\tau_0, \tau_i) \geqslant K$ and $d_S(\tau_i, \tau_n) \geqslant K$.
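Both notions of $K$-intermediacy used in this paper are purely metric conditions, so they can be phrased as short predicates against an abstract distance function; in the Python sketch below the space and the distance values are illustrative stand-ins (points on the real line), not the curve complex.
\begin{verbatim}
# Predicates for the reverse triangle inequality and K-intermediacy, written
# against an abstract distance function d; the example distances are invented.

def reverse_triangle(d, x, y, z, K):
    """d(x, y) + d(y, z) <= d(x, z) + K, i.e. y lies K-close to 'between' x and z."""
    return d(x, y) + d(y, z) <= d(x, z) + K

def K_intermediate(d, x, y, z, K):
    """y is K-intermediate for x and z: reverse triangle inequality plus
    d(x, y) >= K and d(y, z) >= K."""
    return reverse_triangle(d, x, y, z, K) and d(x, y) >= K and d(y, z) >= K

# Toy example: points on the real line with the usual distance, where every
# monotone triple satisfies the reverse triangle inequality with K = 0.
d = lambda p, q: abs(p - q)
print(K_intermediate(d, 0, 7, 20, 5))   # True: 7 is at least 5 from both ends
print(K_intermediate(d, 0, 2, 20, 5))   # False: too close to the endpoint 0
\end{verbatim}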
We will use the following consequence of Theorem~\ref{theorem:qg}:

\begin{theorem} \label{Thm:MMS}
There is a constant $B$, depending only on $S$, such that for any constant $A$, and any two carrying sequences $\{ \tau_i \}_{i = 0}^n$ and $\{ \sigma_i \}_{i = 0}^{m}$ with $d_S(\tau_0, \sigma_0) \leqslant A$ and $d_S(\tau_n, \sigma_{m}) \leqslant A$, for any pair of $(A + B)$-intermediate train tracks $\tau_i$ and $\sigma_j$, and any subsurface $X \subset S$,
\[ d_X(\tau_i, \sigma_j) \leqslant d_X(\tau_0, \tau_n) + B. \]
\end{theorem}

To prove Theorem~\ref{Thm:MMS} we will also need the following \emph{bounded geodesic image theorem} of Masur and Minsky.

\begin{theorem}\cite[Theorem 3.1]{mm2}\label{theorem:bounded image}
For any surface $S$, and for any essential subsurface $X$ of $S$, there is a constant $M_X$, such that for any geodesic $\gamma$ in $\mathcal{C}(S)$, all of whose vertices intersect $X$ non-trivially, the projected image of $\gamma$ in $\mathcal{C}(X)$ has diameter at most $M_X$.
\end{theorem}

\begin{proof}[Proof of Theorem~\ref{Thm:MMS}]
If there is a constant $B$, depending on $S$, such that $d_X(\tau_i, \sigma_j) \leqslant B$, then we are done. Otherwise, by the bounded geodesic image theorem, $\partial X$ is also $(A+B)$-intermediate. Then $d_X(\tau_0, \sigma_0) \leqslant M_X$ and $d_X(\tau_n, \sigma_m) \leqslant M_X$. By Theorem~\ref{theorem:qg}, there is a constant $Q$, such that the images of both $\{ \Lambda(\tau_i) \}_{i=0}^n$ and $\{ \Lambda(\sigma_i) \}_{i=0}^{m}$ under $\pi_X$ are reparameterized $Q$-quasi-geodesics, and furthermore, their endpoints are distance at most $A_2$ apart in $\mathcal{C}(X)$. By the Morse property for quasi-geodesics, there is a constant $A_3$, depending only on $M_X$, $Q$ and $\delta$, and hence only on the surface $S$, such that $\{ \Lambda(\tau_i) \}_{i=0}^n$ and $\{ \Lambda(\sigma_i) \}_{i=0}^{m}$ are Hausdorff distance at most $A_3$ apart, and so the distance between $\pi_X(\tau_i)$ and $\pi_X(\sigma_j)$ is at most $d_X(\tau_0, \tau_n) + A_3$. The result follows, choosing $B = \max \{ A_1 + O(\delta), A_3\}$, which depends only on the topology of the surface $S$.
\end{proof}

We will abuse notation and say that an essential disc $E$ is \emph{carried} by a train track $\tau$ if $\partial E$ is carried by $\tau$. We will use the following result of Masur and Minsky \cite{mm3}*{page 309}.

\begin{proposition} \label{prop:splitting seq}
Let $D$ and $E$ be essential discs contained in a compression body $V$. Then there is a surgery sequence $\{D_i\}_{i = 0}^{n-1}$ with $D = D_0$ and $D_{n-1}$ disjoint from $E$, and a carrying sequence $\{\tau_i\}_{i = 0}^n$, with the following properties.
\begin{enumerate}
\item The train track $\tau_i$ has only one switch, for all $0 \leqslant i \leqslant n$.
\item The boundary of $D_i$ is switch-dual to the track $\tau_i$, for all $0 \leqslant i < n$.
\item The disc $E$ is carried by the train track $\tau_i$, for all $0 \leqslant i \leqslant n$.
\end{enumerate}
\end{proposition}

Masur and Minsky \cite{mm3} work in a more general setting, allowing for surfaces with boundary components, and state a weaker version of property 2, namely that $d_S(D_i, \tau_i) \leqslant 5$.
However, in the case of closed surfaces, the statement we need follows from the proof of \cite{mm3}*{Lemma 4.1}. We provide a review of their work in the appendix for the convenience of the reader. Finally, we observe:

\begin{proposition} \label{prop:large}
For every surface $S$, there is a constant $K$ such that for any curves $\partial D$ and $\partial E$ in $\mathcal{D}$, and any carrying sequence $\{ \tau_i \}_{i=0}^n$ connecting them, any $K$-intermediate train track in $\{ \tau_i \}_{i=0}^n$ is birecurrent and filling.
\end{proposition}

\begin{proof}
For all $i$, the curve $\partial E$ runs over every edge of $\tau_i$, so $\tau_i$ is recurrent. The train track $\tau_i$ has only one switch, and $\partial D_i$ crosses $\tau_i$ exactly once at that switch, so $\tau_i$ is transversely recurrent. If $\tau_i$ is not filling, then there is a curve $a$ disjoint from the set of vertex cycles $\Lambda(\tau_i)$. Then $a$ is disjoint from every collection of simple closed curves produced from positive integer valued sums of the vertex cycles $\Lambda(\tau_i)$, and in particular $\tau_i$ cannot carry a pair of curves which fill $S$. However, the train track $\tau_i$ carries $\partial E$. As long as $d_S(\Lambda(\tau_i), \partial E) \geqslant 3$, there is a vertex cycle at distance at least $3$ from $\partial E$, so $\tau_i$ carries a pair of curves which fill $S$, and hence $\Lambda(\tau_i)$ fills. Therefore a $K$-intermediate train track $\tau_i$ is filling, for any $K \geqslant 3$.
\end{proof}

\subsection{Laminations}

If $f$ is a pseudo-Anosov homeomorphism of $S$ which extends over a compression body $V$, then the stable lamination of $f$ is a limit of discs of $V$. On the other hand, we have the following:

\begin{proposition} \label{prop:lamination}
There is a minimal lamination in $\mathbb{P}\mathrm{ML}(S)$ which does not lie in the limit set of a disc set $\mathcal{D}(V)$, for any compression body $V$.
\end{proposition}

\begin{proof}
Recall that $\mathbb{P}\mathrm{ML}(S)$ is the space of projectively measured laminations in $S$. Fix a handlebody $V$. Define the \emph{limit set} $\overline{\mathcal{D}}(V)$ of $V$ to be the closure of $\mathcal{D}(V)$, considered as a subset of $\mathbb{P}\mathrm{ML}(S)$. Following Masur \cite{masur}*{Theorem 1.2}, we define $Z(V)$, the \emph{zero set} of $\overline{\mathcal{D}}(V)$, to be the set of laminations $\mu \in \mathbb{P}\mathrm{ML}(S)$ for which there is some $\nu \in \overline{\mathcal{D}}(V)$ with geometric intersection number $i(\mu, \nu)$ equal to zero. Note that $Z(V)$ is closed in $\mathbb{P}\mathrm{ML}(S)$. Masur \cite{masur}*{Theorem 1.2} shows that $Z(V)$ has empty interior in $\mathbb{P}\mathrm{ML}(S)$, and Gadre \cite{gadre}*{Theorem A.1} shows that it has measure zero. Therefore, the complement of the countable union $\cup_V Z(V)$ has full measure in $\mathbb{P}\mathrm{ML}(S)$. The set of minimal laminations also has full measure in $\mathbb{P}\mathrm{ML}(S)$, so there is at least one minimal lamination disjoint from the union of the limit sets of the disc sets.
\end{proof}

We remark that this argument also works using harmonic measure instead of Lebesgue measure, by work of Kaimanovich and Masur \cite{km}*{Theorem 2.2.4} and Maher \cite{maher_heegaard}*{Theorem 3.1}.

\section{Stability} \label{section:stability}

\begin{theorem} \label{Thm:Ray}
Suppose $\gamma \colon \mathbb{N} \to \mathcal{C}(S)$ is a geodesic ray.
The diameter of the image of $\pi_{\mathcal{H}(S)} \circ \gamma$ is finite if and only if there is a compression body $V$ and a constant $k$ so that the image of $\gamma$ lies in a $k$-neighbourhood of $\mathcal{D}(V)$.
\end{theorem}

The backward direction is immediate. In this section we show the forward direction. A standard coarse geometry argument, using the hyperbolicity of the curve complex together with the quasi-convexity of $\mathcal{D}(V)$ inside $\mathcal{C}(S)$, reduces the forward direction of Theorem~\ref{Thm:Ray} to the following statement, which we call the stability hypothesis.

\begin{theorem}[Stability hypothesis] \label{Thm:Stability}
Given a surface $S$ and a constant $k$, there is a constant $k' \geqslant k$ with the following property. Suppose that $\gamma$ is a geodesic ray in $\mathcal{C}(S)$, and $V_i$ is a sequence of compression bodies such that, for all $i$, the segment $\gamma|[0,i]$ lies in a $k$-neighbourhood of $\mathcal{D}(V_i)$. Then there is a non-trivial compression body $W$, contained in infinitely many of the $V_i$, such that $\gamma$ is contained in a $k'$-neighbourhood of $\mathcal{D}(W)$.
\end{theorem}

We first show the following.

\begin{lemma} \label{lemma:close}
There is a constant $K$, which depends on $S$, such that for any two essential simple closed curves $a$ and $b$ in $S$ with $d_S(a, b) \geqslant 3K$, there is a constant $N(a, b, K)$ such that for any collection $\{ V_i \}_{i=1}^N$ of compression bodies with $d_S(a, \mathcal{D}(V_i)) \leqslant K$ and $d_S(b, \mathcal{D}(V_i)) \leqslant K$, there are at least two compression bodies $V_i$ and $V_j$ which share a common simple closed curve $c$. Furthermore, the curve $c$ is $K$-intermediate for $a$ and $b$.
\end{lemma}

For fixed $K$, the constant $N$ is coarsely equivalent to the smallest marking distance between any markings containing $a$ and $b$.

\begin{proof}[Proof of Lemma~\ref{lemma:close}]
For each compression body $V_i$, choose curves $a_i$ and $b_i$ in $\mathcal{D}(V_i)$ such that $d_S(a, a_i) \leqslant K$ and $d_S(b, b_i) \leqslant K$. The discs bounded by $a_i$ and $b_i$ in $V_i$ determine a disc surgery sequence $\{ D^i_j \}_{j=0}^{n_i - 1}$, with $\partial D^i_0 = a_i$ and $\partial D^i_{n_i - 1}$ disjoint from $b_i$, and a train track carrying sequence $\{ \tau^i_j \}_{j=0}^{n_i - 1}$. By Proposition~\ref{prop:large} there is a $K$ such that for all $i, j$, every $K$-intermediate train track $\tau^i_j$ is filling and birecurrent. By Theorem~\ref{Thm:MMS}, and the bounded geodesic image theorem, there is a constant $K_1$ such that for all $K$-intermediate train tracks $\tau^{i_1}_{j_1}$ and $\tau^{i_2}_{j_2}$, and any subsurface $Y \subset S$,
\[ d_Y( \tau^{i_1}_{j_1}, \tau^{i_2}_{j_2}) \leqslant d_Y(a, b ) + K_1. \]
Therefore, using the Masur--Minsky distance formula, Theorem~\ref{theorem:mm dist}, with cutoff $M$ larger than $K_1 + M_0$, there is a constant $N_1$ such that
\[ d_{\mathcal{M}}( \tau^{i_1}_{j_1}, \tau^{i_2}_{j_2} ) \leqslant N_1. \]
By Lemma~\ref{Lem:TracksFinite} there are at most $N_2$ train tracks $\tau$ with $\Lambda(\tau) = \mu$, for any marking $\mu$. As $d_{\mathcal{M}}$ is a proper metric, for any $K$-intermediate track $\tau^i_j$, there are at most $N_3 = N_2 \norm{ B_{\mathcal{M}}(\Lambda(\tau^i_j), N_1) }$ train tracks with markings within distance $N_1$ of $\Lambda(\tau^i_j)$.
Thus, if there are at least $N_3 + 1$ compression bodies, at least two of them must share a common train track. For each train track $\tau^i_j$, there is a unique simple closed curve that is switch-dual to $\tau^i_j$ and bounds a disc in the compression body $V_i$. Therefore, if there are at least $N = N_3 + 1$ compression bodies, at least two of them must have a disc in common.
\end{proof}

We now complete the proof of Theorem~\ref{Thm:Stability}.

\begin{proof}[Proof of Theorem~\ref{Thm:Stability}]
Choose a subsequence $(n_k)_{k \in \mathbb{N}_0}$ with $n_0 = 0$ such that $d_S(\gamma(n_{k}), \gamma(n_{k+1})) \geqslant 3 K$ for all $k$. By Lemma~\ref{lemma:close}, there is an infinite subset of the compression bodies $V_i$ which all contain a common simple closed curve $\partial D_0$ which is $K$-intermediate for $\gamma(n_0)$ and $\gamma(n_1)$. We may pass to this infinite subset, and then apply Lemma~\ref{lemma:close} to $\gamma(n_1)$ and $\gamma(n_2)$, producing a simple closed curve $\partial D_1$, which is $K$-intermediate for $\gamma(n_1)$ and $\gamma(n_2)$, and such that $\partial D_0$ and $\partial D_1$ simultaneously compress in infinitely many compression bodies.

Isotope $\partial D_0$ and $\partial D_1$ in $S$ so they realize their geometric intersection number. By work of Casson and Long \cite[Proof of Lemma 2.2]{casson-long}, the curves $\partial D_0$ and $\partial D_1$ simultaneously compress in a handlebody $V$ if and only if there is a pairing on the points of $\partial D_0 \cap \partial D_1$ that is simultaneously unlinked on $\partial D_0$ and on $\partial D_1$. By the previous paragraph, we know that there is at least one such pairing. There are only finitely many such pairings, so we may pass to a further subsequence of the $V_i$ in which $\partial D_0$ and $\partial D_1$ compress in all of the $V_i$, and with the same pairing. It follows that these $V_i$ share a common non-trivial compression body $W_1$, containing $\partial D_0$ and $\partial D_1$.

We now iterate the argument, finding simultaneous compressions $\partial D_1, \partial D_2, \partial D_3, \ldots$ for a descending chain of subsequences of $\{V_i\}$. This gives rise to an ascending chain of compression bodies $W_1 \subset W_2 \subset W_3 \subset \ldots$. However, an ascending chain of compression bodies must stabilize after finitely many steps. If there is some $m$ so that $W_m$ is a handlebody, then infinitely many of the $V_i$ are equivalent. If there is some $m$ so that $W_m = W_n$ for all $n > m$, then the compression body $W = W_m$ is contained in infinitely many of the $V_i$. In either case we have completed the proof of Theorem~\ref{Thm:Stability}.
\end{proof}

\section{Infinite diameter} \label{section:diameter}

We will use the following result of Klarreich \cite[Theorem 1.3]{klarreich}. See also Hamenst\"adt \cite{ham}.

\begin{theorem}\cite[Theorem 1.3]{klarreich}\label{theorem:boundary}
The Gromov boundary of the complex of curves $\mathcal{C}(S)$ is homeomorphic to the space of minimal foliations on $S$.
\end{theorem}

We may now complete the proof of Theorem~\ref{Thm:InfDiam}.

\begin{proof}[Proof of Theorem~\ref{Thm:InfDiam}]
Fix a geodesic ray $\gamma \colon \mathbb{N} \to \mathcal{C}(S)$, and set $\gamma_i = \gamma(i)$. Applying Theorem~\ref{theorem:boundary}, let $\lambda$ be the ending lamination associated to $\gamma$.
That is, after fixing a hyperbolic metric on $S$, and after replacing all curves by their geodesic representatives, the lamination $\lambda$ is obtained from any Hausdorff limit of the $\gamma_i$ by deleting isolated leaves. We say that the $\gamma_i$ \emph{superconverge} to $\lambda$. By Proposition~\ref{prop:lamination}, we may assume that $\lambda$ does not lie in the zero set of any compression body. Suppose that $\pi_{\mathcal{H}(S)} \circ \gamma$ has finite diameter. By Theorem~\ref{Thm:Ray} there is a compression body $V$, a constant $k$, and a sequence of meridians $\partial D_i \in \mathcal{D}(V)$ so that $d_S(\gamma_i, \partial D_i) \leqslant k$. Kobayashi's Lemma \cite{kobayashi}*{Proposition 2.2} implies that the $\partial D_i$ also superconverge to $\lambda$. It follows that any accumulation point of the $\partial D_i$, taken in $\mathbb{P}\mathrm{ML}(S)$, is supported on $\lambda$. So picking any measure of full support for $\lambda$ realizes $\lambda$ as an element of $Z(V)$, contradicting our initial choice of $\lambda$.
\end{proof}

\section{Compression bodies of restricted topological type} \label{section:restricted}

Consider the poset $\mathcal{CB}(S)$ of compression bodies, ordered by inclusion, as described above. We can take the quotient by the action of the mapping class group of the upper boundary surface $S$. This gives the poset of topological types of compression bodies, which we shall denote $\mathcal{CB}_{\mathrm{top}}(S)$. Let $A$ be a non-empty subset of $\mathcal{CB}_{\mathrm{top}}(S)$. We say that $A$ is \emph{downwardly closed} if $A$ is closed under passing to subcompression bodies. Define the \emph{restricted compression body graph} $\mathcal{CB}(S, A)$ to be the subposet of $\mathcal{CB}(S)$ where we require all vertices to have topological type lying in $A$.

\begin{theorem} \label{Thm:restrict}
Let $A$ be a downwardly closed, connected, non-empty subset of topological types of compression bodies. Then the restricted compression body graph $\mathcal{CB}(S, A)$ is an infinite diameter Gromov hyperbolic metric space.
\end{theorem}

\noindent Our methods apply in this generality, but in fact Theorem~\ref{Thm:restrict} also follows quickly from Theorem~\ref{Thm:InfDiam}.

\begin{proof}[Proof of Theorem~\ref{Thm:restrict}]
If $A \subset B$ are downwardly closed subsets of $\mathcal{CB}_{\mathrm{top}}(S)$, then the inclusion $\mathcal{CB}(S,A) \subset \mathcal{CB}(S,B)$ is simplicial and is coarsely onto. As a special case, $\mathcal{CB}(S, A) \subset \mathcal{H}(S)$ is simplicial and coarsely onto. Thus $\mathcal{CB}(S, A)$ has infinite diameter.
\end{proof}

\section{Loxodromics} \label{section:loxodromics}

Let $G$ be a group acting by isometries on a Gromov hyperbolic space $(X, d_X)$ with basepoint $x_0$. We say that an element $h \in G$ acts \emph{loxodromically} if the translation length
\[ \tau(h) = \lim_{n \to \infty} \tfrac{1}{n} d_X( x_0, h^n x_0) \]
is positive. This implies that $h$ has a unique pair of fixed points $\{\lambda^+_h, \lambda^-_h\}$ in the Gromov boundary $\partial X$. Masur and Minsky \cite{mm1} showed that for the action of the mapping class group on the curve complex, an element is loxodromic if and only if it is pseudo-Anosov.
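Translation length is an asymptotic quantity, and can be estimated from finitely many displacements of a basepoint; the following Python sketch does this for toy isometries of the real line, which stand in for an actual group action on the curve complex and are chosen purely for illustration.
\begin{verbatim}
# Numerically estimating the translation length tau(h) = lim d(x0, h^n x0) / n
# for toy isometries of the real line; all choices here are illustrative.

def translation_length_estimate(apply_h, d, x0, n):
    """Return d(x0, h^n x0) / n, which converges to tau(h) as n grows."""
    x = x0
    for _ in range(n):
        x = apply_h(x)
    return d(x0, x) / n

d = lambda p, q: abs(p - q)
h = lambda p: p + 3          # a loxodromic isometry of R with translation length 3
elliptic = lambda p: 1 - p   # an involution (reflection): bounded orbit, length 0

for n in (1, 10, 100):
    print(translation_length_estimate(h, d, 0.0, n),
          translation_length_estimate(elliptic, d, 0.0, n))
# left column stays at 3.0; right column is 1.0, 0.0, 0.0, tending to 0
\end{verbatim}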
We say two loxodromic isometries $h_1$ and $h_2$ are \emph{independent} if their pairs of fixed points $\{ \lambda^+_{h_1}, \lambda^-_{h_1} \}$ and $\{ \lambda^+_{h_2}, \lambda^-_{h_2} \}$ in the Gromov boundary $\partial X$ are disjoint.

We remark that there are many pseudo-Anosov elements for which no power extends over any compression body.
For example, if a power of $g$ extends over a compression body, then some power of $g$ preserves a rational subspace of first homology.
However, generic elements of $\operatorname{Sp}(2g, \mathbb{Z})$ do not do this; see for example \cites{dt,rivin}.
We now give an alternate geometric argument to show the existence of pseudo-Anosov elements for which no power extends over a compression body.

\begin{lemma}\label{lemma:pA}
Let $S$ be a closed surface of genus at least two.
Then there is a mapping class group element $h \in \text{Mod}(S)$ which acts loxodromically on the compression body graph $\mathcal{H}(S)$.
\end{lemma}

We prove the following, more general result.
Let $G$ act by isometries on $X$, and let $Y$ be a quasi-convex subset of $X$.
We say that $G$ \emph{acts loxodromically} on $(X, Y)$ if $G$ contains a loxodromic element $g$ whose quasi-axis is contained in a bounded neighbourhood of $Y$.

\begin{lemma}\label{lemma:coarse}
Let $G$ be a group acting by isometries on a Gromov hyperbolic space $X$, and let $\mathcal{Y}$ be a collection of uniformly quasi-convex subsets of $X$.
Furthermore, suppose that $G$ acts with unbounded orbits on $X_\mathcal{Y}$, and there is a quasi-convex set $Y \in \mathcal{Y}$ such that $G$ acts loxodromically on $(X, Y)$.
Then $G$ contains an element which acts loxodromically on $X_\mathcal{Y}$.
\end{lemma}

We now show that Lemma \ref{lemma:coarse} implies Lemma \ref{lemma:pA}.

\begin{proof}[Proof of Lemma \ref{lemma:pA}]
Let $G = \text{Mod}(S)$ be the mapping class group acting on $X = \mathcal{C}(S)$, the complex of curves, which is Gromov hyperbolic.
Let $\mathcal{Y}$ consist of the collection of disc sets of compression bodies, which is a collection of uniformly quasi-convex subsets of $X$.
The space $X_\mathcal{Y} = \mathcal{H}(S)$ has infinite diameter by Theorem \ref{Thm:InfDiam}.
The mapping class group acts coarsely transitively on $\mathcal{H}(S)$, and so in particular acts with unbounded orbits.
Let $V$ be a handlebody and pick a pair of discs $D$ and $D'$ whose boundaries $\alpha$ and $\alpha'$ fill $S$.
We define $f$ to be the product of a right Dehn twist on $\alpha$, followed by a left Dehn twist on $\alpha'$.
The mapping class group element $f$ is pseudo-Anosov by work of Thurston \cite{thurston}.
The element $f$ extends over the compression body $V$, and the images of $\alpha$ under powers of $f$ form a quasi-axis for $f$, which is contained in $\mathcal{D}(V)$, so $f$ acts loxodromically on $\mathcal{D}(V)$.
Lemma \ref{lemma:coarse} then implies that $G$ contains an element which acts loxodromically on $\mathcal{H}(S)$.
\end{proof}

The proof we present relies on the fact that $(K, c)$-quasi-geodesics in the curve complex $\mathcal{C}(S)$ project to reparameterized $(K', c')$-quasi-geodesics in the compression body graph $\mathcal{H}(S)$, where $K'$ and $c'$ depend only on $K$ and $c$.
We will use the following properties of coarse negative curvature.
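For later reference (these facts are standard, and are recorded here only for convenience), recall that $\gp{x}{y}{z}$ denotes the Gromov product of $y$ and $z$ based at $x$; explicitly,
\[ \gp{x}{y}{z} = \tfrac{1}{2} \big( d_X(x, y) + d_X(x, z) - d_X(y, z) \big), \]
which, in a $\delta$-hyperbolic space, agrees with the distance from $x$ to any geodesic $[y, z]$ up to an error depending only on $\delta$.
As a sample computation in this spirit, used implicitly in the proof of Proposition \ref{prop:bounded gp} below: if $p \in [y, x]$ and $q \in [x, z]$ lie at distances $s$ and $t$ from $x$ respectively, then
\[ d_X(p, q) \geqslant d_X(y, q) - d_X(y, p) \geqslant \big( d_X(y, z) - d_X(q, z) \big) - \big( d_X(y, x) - s \big) = s + t - 2 \gp{x}{y}{z}, \]
so the concatenation of the geodesics $[y, x]$ and $[x, z]$ is a $(1, 2\gp{x}{y}{z})$-quasi-geodesic.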
We omit the proofs of Propositions \ref{prop:npp diam}, \ref{prop:gromov product} and \ref{prop:qg gromov} below, as they are elementary exercises in coarse geometry.

We say a path $\gamma$ is a \emph{$(K, c, L)$-local quasi-geodesic} if every subpath of $\gamma$ of length $L$ is a $(K, c)$-quasi-geodesic.
The following theorem gives a classical ``local to global'' property for negative curvature.

\begin{theorem}\cite{cdp}*{Chapter 3, Th\'eor\`eme 1.4, page 25} \label{theorem:localqg}
Given constants $\delta, K$ and $c$, there are constants $K', c'$ and $L_0$, such that for any $L \geqslant L_0$, any $(K, c , L)$-local quasi-geodesic in a $\delta$-hyperbolic space $X$ is a $(K', c')$-quasi-geodesic.
\end{theorem}

In fact, we will make use of the following special case of this result.
A \emph{piecewise geodesic} is a path $\gamma$ which is a concatenation of geodesic segments $\gamma_i = [x_i, x_{i+1}]$.
We say a piecewise geodesic $\gamma$ has \emph{$R$-bounded Gromov products} if $\gp{x_i}{x_{i-1}}{x_{i+1}} \leqslant R$ for all $i$.

\begin{proposition}\label{prop:bounded gp}
Given constants $\delta$ and $R$, there are constants $L, K$ and $c$ such that if $\gamma$ is a piecewise geodesic in a $\delta$-hyperbolic space, with $R$-bounded Gromov products, and $\norm{\gamma_i} \geqslant L$ for all $i$, then $\gamma$ is a $(K, c)$-quasi-geodesic.
\end{proposition}

\begin{proof}
Consider a subpath of $\gamma$ which is a concatenation of two geodesic segments $[x_{i - 1}, x_i]$ and $[x_i, x_{i + 1}]$.
Then by the definition of the Gromov product, this subpath is a $(1, 2 \gp{x_i}{x_{i-1}}{x_{i+1}} )$-quasi-geodesic: that is, a $(1, 2R)$-quasi-geodesic.
As each subpath of $\gamma$ of length at most $L$ is contained in at most $2$ geodesic subsegments, $\gamma$ is therefore a $(1, 2R, L)$-local quasi-geodesic.
By Theorem \ref{theorem:localqg}, given $\delta, 1$ and $2R$, there are constants $L, K$ and $c$, such that any $(1, 2R, L)$-local quasi-geodesic is a $(K, c)$-quasi-geodesic, as required.
\end{proof}

We now record the following property of quasiconvex sets inside a hyperbolic space.
This follows from thin triangles and we omit the proof.

\begin{proposition}\label{prop:npp diam}
Given constants $\delta$ and $Q$, there is a constant $R_0$, such that any constant $R \geqslant R_0$ has the following properties: let $Y$ and $Y'$ be $Q$-quasiconvex sets in a $\delta$-hyperbolic space $X$.
If the distance between $Y$ and $Y'$ is at least $R$, then the nearest point projection of $Y$ to $Y'$ has diameter at most $R$.
Furthermore, if $Y''$ is a $Q$-quasiconvex set distance at least $R$ from $Y$, and distance at most $R$ from $Y'$, then the distance between the nearest point projections of $Y'$ and $Y''$ to $Y$ is at most $R$.
\end{proposition}

Let $\mathcal{Y}$ be a collection of $Q$-quasi-convex sets in a $\delta$-hyperbolic space $X$, such that $X_\mathcal{Y}$ has infinite diameter.
Let $\mathcal{Z} = \{ Z_i \}_{i \in N}$ be an ordered subcollection of $\mathcal{Y}$, where $N$ is a set of consecutive integers in $\mathbb{Z}$.
If $N = \mathbb{Z}$, then we say that $\mathcal{Z}$ is \emph{bi-infinite}.
We say that $\mathcal{Z}$ is \emph{$L$-well-separated} if $d_X(Z_i, Z_{i+1}) \geqslant L$ for all $i$, and furthermore the distance between the nearest point projections of $Z_{i-1}$ and $Z_{i+1}$ to $Z_i$ is also at least $L$.
Then $\mathcal{Z}$ determines a collection of piecewise geodesics as follows.
For each $Z_i$, let $p_i$ be a point in the nearest point projection of $Z_{i-1}$ to $Z_i$, and let $q_i$ be a point in the nearest point projection of $Z_{i+1}$ to $Z_i$.
Let $\gamma_\mathcal{Z}$ be a path formed from the concatenation of the geodesic segments $[p_i, q_i]$ and $[q_i, p_{i+1}]$.
We will call such a path $\gamma_\mathcal{Z}$ a \emph{$\mathcal{Z}$-piecewise geodesic}.

\begin{proposition}\label{prop:gromov product}
Given constants $\delta$ and $Q$, there is a constant $R$, such that for any $Q$-quasi-convex set $Z$ in a $\delta$-hyperbolic space $X$, and for any points $x$ in $X$ and $z$ in $Z$, with $p$ a nearest point in $Z$ to $x$, the Gromov product satisfies $\gp{p}{x}{z} \leqslant R$, where $R$ depends only on $\delta$ and $Q$.
\end{proposition}

We now show that an $L$-well-separated collection $\mathcal{Z}$ of ordered $Q$-quasi-convex sets gives rise to a natural family of quasi-geodesics.

\begin{proposition}\label{prop:well-separated}
Given constants $\delta$ and $Q$, there are constants $K, c$ and $L_0$, such that for any $L \geqslant L_0$, and any collection $\mathcal{Z}$ of $L$-well-separated ordered $Q$-quasi-convex sets in a $\delta$-hyperbolic space $X$, any $\mathcal{Z}$-piecewise geodesic is a $(K, c)$-quasi-geodesic.
\end{proposition}

\begin{proof}
By Proposition \ref{prop:npp diam}, there is a constant $R_1$, which only depends on $\delta$ and $Q$, such that if $Z$ and $Z'$ are two $Q$-quasi-convex sets distance at least $R_1$ apart, then the nearest point projection of $Z$ to $Z'$ has diameter at most $R_1$.
By Proposition \ref{prop:gromov product}, there is a constant $R_2$, depending only on $\delta$ and $Q$, such that for any $x \in X$ and $z \in Z$, with $p$ a nearest point in $Z$ to $x$, we have $\gp{p}{x}{z} \leqslant R_2$.
For two adjacent segments $[p_i, q_i], [q_i, p_{i+1}]$ in a $\mathcal{Z}$-piecewise geodesic, $q_i$ need not be the closest point on $Z_i$ to $p_{i+1}$, but by Proposition \ref{prop:npp diam}, it is distance at most $R_1$ from the nearest point $q'_i$ on $Z_i$ to $p_{i+1}$.
By the definition of the Gromov product, if $d_X(q_i, q'_i) \leqslant R_1$, then the difference between the Gromov products $\gp{q_i}{p_i}{p_{i+1}}$ and $\gp{q'_i}{p_i}{p_{i+1}}$ is at most $R_1$, and so any $\mathcal{Z}$-piecewise geodesic has $(R_1+R_2)$-bounded Gromov products.
So, by Proposition \ref{prop:bounded gp}, there are constants $L, K$ and $c$, such that if every segment of a $\mathcal{Z}$-piecewise geodesic has length at least $L$, it is a $(K, c)$-quasi-geodesic.
\end{proof}

We say that an ordered set $\mathcal{Z} = \{ Z_i\}_{i \in \mathbb{Z}}$ of uniformly quasi-convex subsets of $X$ is \emph{$(L, M)$-$\mathcal{Y}$-separated} if $\mathcal{Z}$ is $L$-well-separated in $X$, and $d_{X_\mathcal{Y}}(\pi(Z_i), \pi(Z_{i+1})) \geqslant M$ for all $i$.
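We note in passing (a small observation, not needed in what follows) that since the natural map $X \to X_\mathcal{Y}$ is $1$-Lipschitz, the second condition already controls distances in $X$:
\[ d_X(Z_i, Z_{i+1}) \geqslant d_{X_\mathcal{Y}}(\pi(Z_i), \pi(Z_{i+1})) \geqslant M, \]
so when $M \geqslant L$ only the condition on nearest point projections in the definition of $L$-well-separatedness imposes an additional requirement in $X$.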
\begin{proposition}\label{prop:qg gromov}
Given constants $\delta, K$ and $c$ there is a constant $R$, such that for any reparameterized $(K, c)$-quasi-geodesic $\gamma \colon \mathbb{R} \to X$ in a $\delta$-hyperbolic space $X$, for any three numbers $r \leqslant s \leqslant t$, the Gromov product $\gp{\gamma(s)}{\gamma(r)}{\gamma(t)} \leqslant R$.
\end{proposition}

\begin{proposition}\label{prop:bi-infinite}
Given constants $\delta$ and $Q$, there are constants $K, c, L_0$ and $M_0$, such that for any collection $\mathcal{Y}$ of $Q$-quasi-convex sets, for any $L \geqslant L_0$ and $M \geqslant M_0$, and any bi-infinite collection $\mathcal{Z}$ of $(L, M )$-$\mathcal{Y}$-separated sets, the image of any $\mathcal{Z}$-piecewise geodesic in $X_\mathcal{Y}$ is a bi-infinite reparameterized $(K, c)$-quasi-geodesic.
\end{proposition}

\begin{proof}
By Proposition \ref{prop:well-separated}, given constants $\delta_1$ and $Q$, there are constants $K_1, c_1$ and $L_1$ such that for any collection $\mathcal{Z}$ of $L$-well-separated $Q$-quasi-convex sets, with $L \geqslant L_1$, any $\mathcal{Z}$-piecewise geodesic $\gamma$ is a $(K_1, c_1)$-quasi-geodesic in $X$.
By Theorem \ref{theorem:quotient} there are constants $\delta_2, K_2$ and $c_2$ such that the image of $\gamma$ in $X_\mathcal{Y}$ is a reparameterized $(K_2, c_2)$-quasi-geodesic in the $\delta_2$-hyperbolic space $X_\mathcal{Y}$.
By Proposition \ref{prop:qg gromov}, given constants $\delta_2, K_2$ and $c_2$, there is a constant $R$, such that for any $i \leqslant j \leqslant k$, there is a bound on their Gromov product in $X_\mathcal{Y}$: that is, $\gp{\pi(p_j)}{\pi(p_i)}{\pi(p_k)}^{X_\mathcal{Y}} \leqslant R$.
Therefore, by Proposition \ref{prop:bounded gp}, given constants $\delta_2$ and $R$, there are constants $K_2, c_2$ and $M$ such that as long as $d_{X_\mathcal{Y}}(\pi(p_i), \pi(p_{i+1})) \geqslant M$, the piecewise geodesic in $X_\mathcal{Y}$ formed from geodesic segments $[\pi(p_{i}), \pi(p_{i+1})]$ is a $(K_2, c_2)$-quasi-geodesic, and in particular is bi-infinite.
So for any collection $\mathcal{Z}$ of $(L, M)$-$\mathcal{Y}$-separated sets, the image of any $\mathcal{Z}$-piecewise geodesic in $X_\mathcal{Y}$ is a reparameterized bi-infinite $(K_2, c_2)$-quasi-geodesic.
\end{proof}

We may now complete the proof of Lemma \ref{lemma:coarse}.

\begin{proof}[Proof (of Lemma \ref{lemma:coarse}).]
Let $\delta$ be the constant of hyperbolicity for $X$, and let $Q$ be a constant such that all sets $Y \in \mathcal{Y}$ are $Q$-quasiconvex.
Let $f$ be an element of $G$ which acts loxodromically on $Y \in \mathcal{Y}$.
The isometry $f$ acts elliptically on $X_\mathcal{Y}$, coarsely fixing $Y$.
In particular, there is a constant $A$, depending only on $\delta$ and $Q$, such that $d_{X_\mathcal{Y}}(Y, f^\ell Y) \leqslant A$ for all $\ell \in \mathbb{Z}$.

Given $\delta$ and $Q$, let $R_0$ be the constant from Proposition \ref{prop:npp diam}, and choose $R \geqslant R_0 + A$.
In particular, for any two sets $Y$ and $Y'$ in $\mathcal{Y}$, distance at least $R$ apart in $X$, the nearest point projection of $Y'$ to $Y$ has diameter at most $R$, and if $Y''$ is another set in $\mathcal{Y}$, distance at least $R$ from $Y$ and at most $R$ from $Y'$, then the nearest point projections of $Y'$ and $Y''$ to $Y$ are distance at most $R$ apart.
Consider a pair of numbers $L$ and $M$ with $M \geqslant L \geqslant R$.
The group $G$ acts with unbounded orbits on $X_\mathcal{Y}$, so for any such number $M$, there is a group element $g \in G$ such that $d_{X_\mathcal{Y}}(Y, g Y) \geqslant M + A$.
This implies that $d_X(Y, g Y) \geqslant M + A$, and equivalently, $d_X(Y, g^{-1} Y) \geqslant M + A$.
As $d_{X_\mathcal{Y}}(Y, f^\ell g Y) = d_{X_\mathcal{Y}}(f^{- \ell}Y, g Y)$ this implies that for all $\ell$, $d_{X_\mathcal{Y}}(Y, f^\ell g Y) \geqslant M$, and equivalently, that for all $\ell$, $d_{X_\mathcal{Y}}(Y, g^{-1} f^{-\ell} Y) \geqslant M$.
As the map from $X$ to $X_\mathcal{Y}$ is $1$-Lipschitz, this implies that $d_{X}(Y, f^\ell g Y) \geqslant M$, and equivalently $d_{X}(Y, g^{-1} f^{-\ell} Y) \geqslant M$.
As we have chosen $M \geqslant R$, the nearest point projection of $f^\ell g Y$ to $Y$ has diameter at most $R$.
Similarly, the nearest point projection of $g^{-1} f^{-\ell} Y$ to $Y$ has diameter at most $R$.
Furthermore, for all $\ell$ and $\ell'$, $d_X( g^{-1} f^{-\ell} Y, g^{-1} f^{-\ell'} Y ) \leqslant A$, and so for all $\ell$ and $\ell'$, the nearest point projections of $g^{-1} f^{-\ell} Y$ and $g^{-1} f^{-\ell'} Y$ to $Y$ are distance at most $R$ apart.
Therefore, as $f$ acts loxodromically on $Y$, for any number $L \geqslant R$ there is a sufficiently large number $\ell$ such that $\pi_Y(g^{-1} f^{-\ell} Y)$ and $\pi_Y(f^\ell g Y)$ are distance at least $L$ apart.
This is illustrated schematically in Figure \ref{pic:npp Y}.

\begin{figure}[H]
\begin{center}
\begin{tikzpicture}
\draw [thick] (0, 0) -- (10, 0) node [right] {$Y$};
\draw [triangle 45-triangle 45] (2, 0) -- node [midway, right] {$\geqslant M + A$} (2, 2);
\draw [thick] (1, 2) -- (3, 2) node [above] {$g Y$};
\draw [triangle 45-triangle 45] (3, 0) -- node [midway, right] {$\geqslant M$} (3, -2);
\draw [thick] (2, -2) -- (4, -2) node [above] {$g^{-1} f^{-\ell} Y$};
\draw [triangle 45-triangle 45] (8, 0) -- node [midway, right] {$\geqslant M$} (8, 2);
\draw [thick] (7, 2) -- (9, 2) node [above] {$f^\ell g Y$};
\draw [triangle 45-triangle 45] (3, 0) -- node [midway, above] {$\geqslant L$} (8, 0);
\end{tikzpicture}
\end{center}
\caption{Nearest point projections of $g^{-1} f^{-\ell} Y$ and $f^\ell g Y$ to $Y$ in $X$.}
\label{pic:npp Y}
\end{figure}

Set $h = f^\ell g$.
Then by Proposition \ref{prop:bi-infinite}, for $L$ and $M$ sufficiently large, the set $\mathcal{Z} = \{ h^n Y\}_{n \in \mathbb{Z}}$ is a bi-infinite collection of $(L, M)$-$\mathcal{Y}$-separated sets, with the property that any $\mathcal{Z}$-piecewise geodesic quasi-axis for $h$ projects to a bi-infinite quasi-axis for $h$ in $X_\mathcal{Y}$, and so $h$ acts loxodromically on $X_\mathcal{Y}$, as required.
\end{proof}

The following corollary implies that every subgroup in the Johnson filtration contains an element which acts loxodromically on $\mathcal{H}$.

\begin{corollary}\label{corollary:johnson}
Let $X$ be a $\delta$-hyperbolic space, and let $\mathcal{Y}$ be a collection of uniformly quasi-convex sets, such that the electrification $X_\mathcal{Y}$ has infinite diameter.
Let $G$ be a group which acts on $X$ by isometries, and which contains an element which acts loxodromically on $X_\mathcal{Y}$.
Let $H$ be a normal subgroup of $G$, which contains an element which acts loxodromically on $X$.
Then $H$ contains an element which acts loxodromically on $X_\mathcal{Y}$.
\end{corollary}

We will use the following property of independent loxodromics.

\begin{proposition}\label{prop:schottky}
Let $g$ and $h$ be independent loxodromics on a $\delta$-hyperbolic space $X$.
Then there is a constant $m$, such that $g^m$ and $h^m$ freely generate a free group, all of whose non-trivial elements act loxodromically on $X$.
Furthermore, the orbit map applied to this free group is a quasi-isometric embedding.
\end{proposition}

\begin{proof}[Proof (of Corollary \ref{corollary:johnson})]
Suppose that $g \in G$ acts loxodromically on $X_\mathcal{Y}$, and $h \in H$ acts loxodromically on $X$.
If $h$ acts loxodromically on $X_\mathcal{Y}$, then we are done.
If not, then $g$ and $h$ are independent as loxodromics acting on $X$.
We may choose $\{ h^n x_0 \}_{n \in \mathbb{Z}}$ as a quasi-axis $\gamma_h$ for $h$.
Note that the axis is quasi-convex in $X$.

Given constants $L$ and $M$, we will show that there are positive integers $l$ and $m$, such that if $f = g^l h^m g^{-l} h^m$, the ordered collection of sets $\mathcal{Z} = \{ Z_n = f^n \gamma_h \}_{n \in \mathbb{Z}}$ is a bi-infinite collection of uniformly quasi-convex $(L, M)$-$\mathcal{Y}$-separated sets in $X$.
As $\mathcal{Z}$ consists of translates of a single quasi-convex set, the $Z_i$ are uniformly quasi-convex.
For $l$ sufficiently large, the nearest point projection of $f \gamma_h = g^l h^m g^{-l} \gamma_h$ to $\gamma_h$ lies in a bounded neighbourhood of some vertex of $\gamma_h$, say $x_0$.
Similarly, for $l$ and $m$ sufficiently large, the nearest point projection of $f^{-1} \gamma_h$ lies in a bounded neighbourhood of $h^{-m} x_0$.
By choosing $m$ sufficiently large, this implies that
\[ d_X( \pi_{\gamma_h} ( f \gamma_h ), \pi_{\gamma_h}( f^{-1} \gamma_h) ) \geqslant L. \]
Then as $\mathcal{Z}$ is $f$-invariant, it is $L$-well-separated in $X$.

As $g$ and $h$ are independent, the nearest point projection of $\gamma_h$ to $\gamma_g$ has bounded diameter in $X$, so the nearest point projection of $\pi(\gamma_h)$ to $\pi(\gamma_g)$ also has bounded diameter in $X_\mathcal{Y}$.
As $g$ acts loxodromically on $X_\mathcal{Y}$, for any constant $M$ there is a positive integer $l$ such that $d_{X_\mathcal{Y}}(\gamma_h, f \gamma_h) \geqslant M$, and again as $\mathcal{Z}$ is $f$-invariant, this implies that $\mathcal{Z}$ is $M$-$\mathcal{Y}$-separated.
Proposition \ref{prop:bi-infinite} then implies that $f$ acts loxodromically on $X_\mathcal{Y}$.
\end{proof}

We now show that if two pseudo-Anosov elements $f$ and $g$ act loxodromically on the compression body graph $\mathcal{H}(S)$, and act independently on $\mathcal{C}(S)$, then they also act independently on $\mathcal{H}(S)$.

\begin{proposition}
Let $X$ be a $\delta$-hyperbolic space, and let $\mathcal{Y}$ be a collection of uniformly quasi-convex subsets of $X$.
Let $f$ and $g$ act loxodromically on $X_\mathcal{Y}$, and let them be independent as loxodromics acting on $X$.
Then they are independent loxodromics acting on $X_\mathcal{Y}$.
\end{proposition}

\begin{proof}
Let $\lambda^\pm_f$ be the fixed points of $f$ in $\partial X$, and let $\lambda^\pm_g$ be the fixed points of $g$ in $\partial X$.
As $f$ and $g$ act independently on $X$, all of these points are distinct.
Let $\gamma$ be a geodesic from $\lambda^+_f$ to $\lambda^+_g$.
The image of $\gamma$ under projection to $X_\mathcal{Y}$ is a reparameterized quasi-geodesic connecting the fixed points of $f$ and $g$ in $\partial X_\mathcal{Y}$ with at least one point in the interior of $X_\mathcal{Y}$.
Thus $\pi \circ \gamma$ is a bi-infinite reparameterized quasi-geodesic connecting the fixed points, so they are distinct.
\end{proof}

\section{Applications} \label{section:applications}

As an application of our methods, we give an alternative proof of a result of Biringer, Johnson and Minsky \cite{bjm}*{Theorem 1.1}, and a slightly stronger version of a result of Lubotzky, Maher and Wu \cite{lmw}*{Theorem 2}, using results of Maher and Tiozzo \cite{maher-tiozzo}.

\begin{theorem}\label{T:bjm}
Let $g$ be a pseudo-Anosov element of the mapping class group of a closed orientable surface.
Then the stable lamination $\lambda^+_g$ of $g$ is contained in the limit set for a compression body $V$ if and only if the unstable lamination $\lambda^-_g$ is contained in the limit set for $V$, and furthermore, there is a compression body $V' \subseteq V$ such that some power of $g$ extends over $V'$.
\end{theorem}

\begin{proof}
Suppose the stable lamination $\lambda^+_g$ for the pseudo-Anosov element $g$ lies in the limit set of a compression body $V_1$.
Let $\lambda^-_g$ be the unstable lamination for $g$, and let $\gamma$ be a geodesic axis for $g$ in $\mathcal{C}(S)$.
Choose a basepoint $\gamma(0)$ on $\gamma$, and assume that $\gamma$ is parameterized by distance from $\gamma_0$ in $\mathcal{C}(S)$, with $\lim_{n \to \infty} \gamma(n) = \lambda^+_g$.
Consider the sequence of compression bodies $W_i = g^{-i} V_1$.
Possibly after re-indexing the $W_i$, we may assume that the geodesic from $\gamma(0)$ to $\gamma(-i)$ is contained in a $k$-neighbourhood of $\mathcal{D}(W_i)$, where $k$ depends only on the constant of hyperbolicity and the quasi-convexity constants for the disc sets.
We may therefore apply Theorem \ref{Thm:Stability} above, which then implies that there is a compression body $V_2$, contained in infinitely many of the $W_i$.
Furthermore, the geodesic ray from $\gamma(0)$ to $\lambda^-_g$ is contained in a bounded neighbourhood of $\mathcal{D}(V_2)$.
In particular, $V_2 \subseteq g^{k_1} V_1$ for some $k_1$, and we may assume that $k_1 \not = 0$ as $V_2$ is contained in infinitely many $g^i V_1$.

We may then repeat this procedure using positive powers of $g$, to produce a compression body $V_3$, contained in infinitely many of the $g^i V_2$, such that $\lambda_g^+$ is contained in the limit set of $\mathcal{D}(V_3)$.
Iterating this procedure produces a descending chain of compression bodies $V_{n+1} \subseteq g^{k_n} V_{n}$, which has the property that the $V_i$ have limit sets containing $\lambda^+_g$ if $i$ is odd, and $\lambda^-_g$ if $i$ is even.
As before, we may assume that the integers $k_n$ are not zero.
A descending chain of compression bodies must eventually stabilize with $V_n = g^{k_n} V_{n+1}$ for some $n$ and $k_n \not = 0$.
Therefore, there is a single compression body $V = V_n$ whose limit set contains both $\lambda^+_g$ and $\lambda^-_g$, as required.
Finally, we observe that as $k_n \not = 0$, some power of $g$ extends over this compression body.
\end{proof}

Lubotzky, Maher and Wu showed that a random walk on the mapping class group gives a hyperbolic manifold with a probability that tends to one exponentially quickly, assuming that the probability distribution $\mu$ generating the random walk is \emph{complete}: that is, the limit set of the subgroup generated by the support of $\mu$ is dense in Thurston's boundary for the mapping class group, $\mathcal{PML}$.
As the compression body graph is an infinite diameter Gromov hyperbolic space, we may apply the results of Maher and Tiozzo \cite{maher-tiozzo} to replace this hypothesis with the assumption that the support of $\mu$ contains a pair of independent loxodromic elements for the action of the subgroup on the compression body graph.

We shall write $w_n$ for a random walk of length $n$ on the mapping class group of a closed orientable surface of genus $g$, generated by a probability distribution $\mu$, and $M(w_n)$ for the corresponding \emph{random Heegaard splitting}: that is, the $3$-manifold obtained by using the resulting mapping class group element $w_n$ as the gluing map for a Heegaard splitting.

\begin{proposition}
Every isometry of the compression body graph $\mathcal{H}$ is either elliptic or loxodromic.
\end{proposition}

\begin{proof}
Biringer and Vlamis \cite{bv}*{Theorem 1.1} showed that the isometry group of $\mathcal{H}$ is equal to the mapping class group.
If $g$ is not pseudo-Anosov, then $g$ acts elliptically on the curve complex, and hence acts elliptically on $\mathcal{H}$.

Let $g$ be a pseudo-Anosov element of the mapping class group.
Let $\gamma$ be an axis for $g$ in the curve complex $\mathcal{C}(S)$, and let $\gamma(0)$ be a choice of basepoint for $\gamma$.
Suppose that $g$ does not act loxodromically.
It follows that there is a sequence of compression bodies $V_i$ whose nearest point projections to $\gamma$ have diameters tending to infinity.
Recall that the disc sets $\mathcal{D}(V_i)$ are uniformly quasi-convex.
Thus there is a constant $k$, depending on the quasi-convexity constants of the disc sets and the hyperbolicity constant of the curve complex $\mathcal{C}(S)$, such that (after passing to a subsequence and re-indexing) the intersection of $\mathcal{D}(V_i)$ with a $k$-neighbourhood of $\gamma$ has diameter at least $i$.
By translating the disc sets by powers of $g$, and possibly passing to a further subsequence and re-indexing, we may assume that both $\gamma(0)$ and $\gamma(i)$ are distance at most $k$ from $\mathcal{D}(g^{n_i} V_i)$.
We may further relabel the disc sets and just write $V_i$ for the given translate $g^{n_i} V_i$.

We may now apply our stability result, Theorem \ref{Thm:Stability}.
This implies that there is a single compression body $W$ contained in infinitely many of the $V_i$, and so in particular the positive limit point $\lambda^+_g$ of $g$ is contained in the limit set of $\mathcal{D}(W)$.
Thus, by Theorem \ref{T:bjm}, there is a compression body $W'$ such that some power of $g$ extends over $W'$.
Thus $g$ acts elliptically, and we are done.
\end{proof}

\begin{theorem}
Let $\mu$ be a probability distribution on the mapping class group of a closed orientable surface of genus $g$, whose support has bounded image in the compression body graph $\mathcal{H}$, and which contains two independent pseudo-Anosov elements whose stable laminations do not lie in the limit set of any compression body.
Then the probability that $M(w_n)$ is hyperbolic and of Heegaard genus $g$ tends to one exponentially quickly.
\end{theorem}

\begin{proof}
Maher and Tiozzo \cite{maher-tiozzo}*{Theorem 1.2} show that given a countable group $G$ acting non-elementarily on a separable Gromov hyperbolic space $(X, d_X)$, a finitely supported random walk on $G$ has positive drift with exponential decay: that is, there are constants $L > 0, K \geqslant 0$, and $c < 1$ such that
\[ \mathbb{P} \left[ d_X(x_0, w_n x_0) \geqslant Ln \right] \geqslant 1 - Kc^n. \]
We may apply this result to the mapping class group $G$ acting on the compression body graph $\mathcal{H}$.
A subgroup of $G$ acts \emph{non-elementarily} on $\mathcal{H}$ if it contains two independent loxodromic elements.
Geodesics in $\mathcal{C}(S)$ project to reparameterized quasi-geodesics in $\mathcal{H}$, so by Theorem \ref{Thm:Ray}, if the stable and unstable laminations of a pseudo-Anosov element $h$ do not lie in the limit set of some compression body, then the image of the axis of $h$ in $\mathcal{C}_\mathcal{D}(S)$ is a bi-infinite quasi-axis for the action of $h$ on the compression body graph $\mathcal{H}$, and so in particular $h$ acts loxodromically on $\mathcal{H}$.
If $h_1$ and $h_2$ are two pseudo-Anosov elements, which act loxodromically on the compression body graph $\mathcal{H}$, and act independently on the curve complex $\mathcal{C}(S)$, then in fact they act independently on the compression body graph $\mathcal{H}$, as the geodesic from $\lambda^+_{h_1}$ to $\lambda^+_{h_2}$ in the curve complex $\mathcal{C}(S)$ projects to a reparameterized quasi-geodesic in $\mathcal{H}$, which has infinite diameter by Theorem \ref{Thm:Ray}.
Finally, we observe that the compression body graph $\mathcal{H}$ is a countable simplicial complex, and so is separable: that is, it has a countable dense subset, and so the hypotheses of \cite[Theorem 1.2]{maher-tiozzo} are satisfied.

The \emph{Hempel distance} of a Heegaard splitting $M(h)$ is the distance in the curve complex between the disc sets of the two handlebodies.
This is bounded below by distance in the compression body graph $d_{\mathcal{H}}(x_0, w_n x_0)$.
Since distance in the compression body graph has positive drift with exponential decay, so does Hempel distance.
Finally, we observe that if the Hempel distance is at least three, then $M(w_n)$ is hyperbolic, by work of Hempel \cite{hempel}*{Corollaries 3.7 and 3.8}, and Perelman's proof of Thurston's geometrization conjecture \cite{morgan-tian}.
If the splitting distance is greater than $2g$, then the given Heegaard splitting is a minimal genus Heegaard splitting for $M(w_n)$, by work of Scharlemann and Tomova \cite{st}*{page 594}.
\end{proof}

\appendix
\section{Train tracks}

In this section we review the work of Masur and Minsky \cite{mm3}.
They prove the version of Proposition \ref{prop:splitting seq} we require, but we find it convenient to write down an argument which differs from theirs in certain details.

Let $a$ and $b$ be two essential simple closed curves in $S$ in minimal position.
A \emph{subarc} of a simple closed curve is a closed connected subinterval.
We will only ever consider subarcs of $a$ or $b$ whose endpoints lie in $a \cap b$.
Given a pair of curves $a$ and $b$ in minimal position, a \emph{bicorn} is a simple closed curve, denoted by $c = (a_i, b_i)$, consisting of the union of one subarc $a_i$ of $a$ and one subarc $b_i$ of $b$.

A subarc of $a$ or $b$ is \emph{innermost} with respect to $a \cap b$ if its endpoints lie in $a \cap b$, and its interior is disjoint from $a \cap b$.
We may abuse notation by referring to these innermost subarcs as components of either $a - b$ or $b - a$, though in fact we wish to include their endpoints.
An innermost arc $b_i$ of $b$ with respect to $a \cap b$ is a \emph{returning arc} if both endpoints lie on the same side of $a$ in $S$.
This is illustrated on the left hand side of Figure \ref{fig:arc} below.

\begin{figure}[H]
\begin{center}
\begin{tikzpicture}[scale=0.4]
\draw (0, 0) -- (10, 0) node [below] {$a$};
\draw (0.333, -2) node [right] {$b$};
\foreach \i in {0.333,1.666,...,10}{ \draw (\i,-2) -- (\i, 2); }
\draw [very thick] (3,0) -- (3,2) .. controls (3,3) and (4,4) .. (5, 4) .. controls (6,4) and (7,3) .. (7, 2) node [right] {$b_i$} -- (7, 0);
\begin{scope}[xshift=14cm]
\draw [thick, red] (0, 0) -- (2.5, 0) -- (2.5, 2) .. controls (2.5, 3.5) and (4, 4.5) .. (5, 4.5) .. controls (6, 4.5) and (7.5, 3.5) .. (7.5, 2) -- (7.5, 0) -- (10, 0) node [below] {$c' \strut$};
\draw [thick, red] (3.5,0) -- (3.5, 1.5) .. controls (3.5, 2.5) and (4, 3.5) .. (5, 3.5) .. controls (6, 3.5) and (6.5, 2.5) .. (6.5, 1.5) -- (6.5, 0) node [below] {$c \strut$} -- cycle;
\foreach \i in {0.333,1.666,...,10}{ \draw (\i,-2) -- (\i, 2); }
\draw [very thick] (3,0) -- (3,2) .. controls (3,3) and (4,4) .. (5, 4) .. controls (6,4) and (7,3) .. (7, 2) -- (7, 0);
\end{scope}
\end{tikzpicture}
\end{center}
\caption{An innermost arc $b_i$ of $b$ forming a returning arc for $a$.}
\label{fig:arc}
\end{figure}

Given a simple closed curve $a$ and a returning arc $b_i$, we may produce a new simple closed curve by \emph{arc surgery} of $a$ with respect to $b_i$.
There are two possible bicorns, $c$ and $c'$, formed from the union of $b_i$ with one of the two subarcs of $a$ with endpoints $\partial b_i$.
These are illustrated on the right hand side of Figure \ref{fig:arc}, where we have isotoped the replacement curves to be disjoint from $b_i$ for clarity.
We say a bicorn is \emph{returning} if $b_i$ is a returning arc for $a$.
We say a sequence of returning bicorns $(a_i, b_i)$ is \emph{nested} if $a_{i+1} \subset a_{i}$, and $b_{i+1}$ is a returning arc for $c_i$, for all $i$.
An adjacent pair in a sequence of returning bicorns is illustrated in Figure \ref{pic:nested bicorns}.

\begin{figure}[H]
\begin{center}
\begin{tikzpicture}[scale=0.4]
\draw (-2, 0) -- (17, 0) node [below] {$a$};
\draw (1, -2) node [right] {$b$};
\foreach \i in {-1,0,...,11}{ \draw (\i,-2) -- (\i, 2); }
\draw [ultra thick, rounded corners=6pt] (3,0) -- (3, 4) -- (16, 4) node [right] {$b_{i+1}$} -- (16, -2) -- (14, -2) -- (14, 3) -- (7,3) -- (7, 0);
\draw [line width=0.25cm, white] (0,0) -- (0,2) .. controls (0,4) and (3,6) .. (5, 6) .. controls (7,6) and (10,4) .. (10, 2) -- (10, 0);
\draw [ultra thick, rounded corners=6pt] (0,0) -- (0,2) .. controls (0,4) and (3,6) .. (5, 6) ..
controls (7,6) and (10,4) .. (10, 2) node [right] {$b_i$} -- (10, 0);
\draw [thick] (0, 0) -- (10, 0) node [below left] {$a_i$};
\draw (7,0) node [below left, circle, fill=white, inner sep=0] {$a_{i+1}$};
\draw [ultra thick] (0, 0) -- (10, 0);
\end{tikzpicture}
\end{center}
\caption{Nested bicorns.}
\label{pic:nested bicorns}
\end{figure}

\noindent
For a nested bicorn sequence, each pair of adjacent bicorns $c_i$ and $c_{i+1}$ may be made disjoint after a small isotopy.

Let $D$ and $E$ be essential embedded discs in minimal position in a compression body.
Then $a = \partial D$ and $b = \partial E$ is a pair of essential simple closed curves in minimal position.
We say a disc $F$ contained in $D \cup E$ is a \emph{bicorn disc} if the boundary of the disc $F$ is a bicorn in $a \cup b$.
We omit the proof of the following observation.

\begin{proposition}
Let $D$ and $E$ be essential embedded discs in minimal position in a compression body, and let $F_i$ be a bicorn disc in $D \cup E$ with boundary $a_i \cup b_i$.
Then there is an arc $\gamma_i$ in $D \cap E$ such that $F_i$ is the union of a subdisc $D_i$ of $D$ bounded by $a_i \cup \gamma_i$ and a subdisc $E_i$ of $E$ bounded by $b_i$ and $\gamma_i$, which only intersect along $\gamma_i$.
\end{proposition}

A \emph{nested bicorn disc sequence} is a nested bicorn sequence in which every bicorn $c_i = (a_i, b_i)$ bounds a disc $F_i = (D_i, E_i)$, and furthermore $D_{i+1} \subset D_i$ for all $i$.

Two essential simple closed curves $a$ and $b$ in minimal position in $S$, and a subinterval $a_i \subset a$ with endpoints in $a \cap b$, determine a pre-train track $\tau_i'$ with a single switch, as follows: discard $a - a_i$, collapse $a_i$ to a point, and smooth the tangent vectors as illustrated in Figure \ref{pic:collapse}.
The dashed line labelled $a$ on the right hand side of Figure \ref{pic:collapse} is not part of the pre-train track $\tau_i'$.
Rather, it is drawn for comparison with the left hand diagram.

\begin{figure}[H]
\begin{center}
\begin{tikzpicture}[scale=0.4]
\draw [thick] (-3, -4) -- (-3, 4);
\draw [thick] (-1, -4) -- (-1, 4);
\draw [thick] (0, -4) -- (0, 4);
\draw [thick] (1, -4) -- (1, 4) node [right] {$b$};
\begin{scope}[xshift=+10cm]
\draw [dashed] (-4, 0) -- (4, 0) node [below] {$a$};
\draw [thick] (-3, -4) -- (-3, 4);
\draw [thick] (0, -4) -- (0, 4);
\draw [thick] (0, 0) .. controls (0.1, 1) .. (1.5, 4) node [right] {$\tau_i'$};
\draw [thick] (0, 0) .. controls (-0.1, 1) .. (-1.5, 4);
\draw [thick] (0, 0) .. controls (0.1, -1) .. (1.5, -4);
\draw [thick] (0, 0) .. controls (-0.1, -1) .. (-1.5, -4);
\end{scope}
\draw (-4, 0) -- (4, 0) node [below] {$a$};
\draw (0.5, -0.5) node [circle, fill=white, inner sep=0] {$a_i$};
\draw [ultra thick] (-1, 0) -- (1, 0);
\end{tikzpicture}
\end{center}
\caption{Smoothing intersections of $a$ and $b$ by collapsing $a_i$.}
\label{pic:collapse}
\end{figure}

A pair of simple closed curves $a$ and $b$ in minimal position divides the surface $S$ into a number of complementary regions, whose boundaries consist of alternating innermost subarcs of $a$ and $b$.
We say a complementary region is a \emph{rectangle} if it is a disc, whose boundary consists of exactly four innermost subarcs.
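We record here, for convenience, the standard Euler characteristic count that is invoked in the proof of Proposition \ref{prop:discs} below: if $a$ and $b$ intersect, then not every complementary region of $a \cup b$ can be a rectangle.
Indeed, the graph $a \cup b$ has $V = |a \cap b|$ four-valent vertices, and hence $E = 2V$ edges.
If every complementary region were a rectangle, then these regions would give $S$ a CW structure in which counting edge-sides yields $4F = 2E$, so that
\[ \chi(S) = V - E + F = V - 2V + V = 0, \]
contradicting $\chi(S) = 2 - 2g < 0$ for a closed surface $S$ of genus at least two.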
Given a pre-train track $\tau$ contained in a surface $S$, a \emph{bigon collapse} is a homotopy of the surface, supported in a neighbourhood of a bigon, which maps the bigon to a single arc, as illustrated in Figure \ref{pic:bigon collapse}.

\begin{figure}[H]
\begin{center}
\begin{tikzpicture}[scale=0.4]
\draw [thick] (0, 0) -- (1, 0) .. controls (2, 0) and (2, 1) .. (3, 1) .. controls (4, 1) and (4, 0) .. (5, 0) -- (6, 0);
\begin{scope}[yscale=-1]
\draw [thick] (0, 0) -- (1, 0) .. controls (2, 0) and (2, 1) .. (3, 1) .. controls (4, 1) and (4, 0) .. (5, 0) -- (6, 0);
\end{scope}
\draw [thick] (9,0) --(15, 0);
\end{tikzpicture}
\end{center}
\caption{Collapsing a bigon.}
\label{pic:bigon collapse}
\end{figure}

\noindent
We break the proof of Proposition \ref{prop:splitting seq} into the following three propositions.
We say a pre-track \emph{collapses} to a train track if there is a sequence of bigon collapses which produces a train track.
We say a bicorn $(a_i, b_i)$ is \emph{non-degenerate} if there is a non-rectangular component of $S - (a \cup b)$ whose boundary intersects $a - a_i$.
Non-degeneracy allows us to collapse a pre-track to a track, as follows.

\begin{proposition}\label{prop:collapse}
Let $a$ and $b$ be simple closed curves in minimal position, and let $(a_i, b_i)$ be a non-degenerate bicorn.
Then the pre-track $\tau'_i$ determined by $a_i$ and $b$ bigon collapses to a train track $\tau_i$.
Furthermore, $\tau_i$ is switch dual to the bicorn $c_i$ determined by $(a_i, b_i)$.
\end{proposition}

A bicorn sequence $(a_i, b_i)$ is \emph{non-degenerate} if every bicorn $(a_i, b_i)$ is non-degenerate.
If the initial bicorn $(a_1, b_1)$ is non-degenerate, then this implies that every subsequent bicorn is non-degenerate.

\begin{proposition}\label{prop:nested}
Let $a$ and $b$ be simple closed curves in minimal position, let $(a_i, b_i)$ be a collapsible nested bicorn sequence, and let $\tau_i$ be the corresponding train tracks.
Then the $\tau_i$ form a carrying sequence of train tracks: that is, $\tau_{i+1} \prec \tau_{i}$ for all $i$.
\end{proposition}

\begin{proposition}\label{prop:discs}
Let $a$ and $b$ be simple closed curves which bound discs in a compression body $V$.
Then there is a collapsible nested bicorn disc sequence $\{ F_i \}_{i = 1}^n$ with bicorn boundaries $\{ c_i \}_{i=1}^n$, as follows.
If we define $c_0 = a$, then the sequence of simple closed curves $\{ c_i \}_{i=0}^n$ is a disc surgery sequence connecting $a$ and $b$.
\end{proposition}

We now define a \emph{rectangular tie neighbourhood} for a train track $\tau$ with a single switch.
This is a regular neighbourhood of $\tau$ in the surface $S$ which is foliated by intervals transverse to $\tau$, with a decomposition as a union of foliated rectangles with disjoint interiors, with the following properties.
There is a single rectangle containing the switch, which we shall call the \emph{switch rectangle}.
There is one rectangle for each branch, which we shall call a \emph{branch rectangle}.
The branch rectangles have disjoint closures.

\begin{figure}[H]
\begin{center}
\begin{tikzpicture}[scale=0.1]
\draw (0, -15) node [below] {The switch rectangle in the center is shaded.};
\filldraw [draw=black,fill=lightgray] (-15,-15) rectangle (15, 15);
\foreach \x in {-15,-13,...,15} { \draw (\x, -15) -- (\x, 15); };
\draw [thick] (0, 0) ..
controls (5, 0) and (10, 10) .. (15, 10);
\draw [thick] (0, 0) .. controls (5, 0) and (10, -10) .. (15, -10);
\draw [thick] (0, 0) -- (-15, 0);
\draw [thick] (0, 0) .. controls (-5, 0) and (-10, 12) .. (-15, 12);
\draw [thick] (0, 0) .. controls (-5, 0) and (-10, -12) .. (-15, -12);
\draw (15, 5) rectangle (31, 15);
\foreach \x in {15,17,...,31} { \draw (\x, 5) -- (\x, 15); };
\draw [thick] (15, 10) -- (31, 10);
\begin{scope}[yshift=-20cm]
\draw (15, 15) rectangle (31, 5);
\foreach \x in {15,17,...,31} { \draw (\x, 5) -- (\x, 15); };
\draw [thick] (15, 10) -- (31, 10);
\end{scope}
\draw (-15, 3) rectangle (-31, -3);
\foreach \x in {-15,-17,...,-31} { \draw (\x, 3) -- (\x, -3); };
\draw [thick] (-15, 0) -- (-31, 0);
\begin{scope}[yshift=-12cm]
\draw (-15, 3) rectangle (-31, -3);
\foreach \x in {-15,-17,...,-31} { \draw (\x, 3) -- (\x, -3); };
\draw [thick] (-15, 0) -- (-31, 0);
\end{scope}
\begin{scope}[yshift=12cm]
\draw (-15, 3) rectangle (-31, -3);
\foreach \x in {-15,-17,...,-31} { \draw (\x, 3) -- (\x, -3); };
\draw [thick] (-15, 0) -- (-31, 0);
\end{scope}
\end{tikzpicture}
\end{center}
\caption{A rectangular tie neighbourhood for a train track.}
\label{fig:rtie}
\end{figure}

We define a \emph{rectangular tie neighbourhood} for a pre-track with a single switch as above, except that we allow a single rectangle to contain multiple parallel branches.
We observe that parallel branches in a single rectangle make up the boundaries of bigons in the pre-track, and these may be collapsed by a homotopy supported in the union of the switch rectangle and the rectangle containing the branches.
In particular, if all bigon complementary regions are contained in the rectangular tie neighbourhood, then collapsing all bigons produces a train track, for which the rectangular tie neighbourhood of the pre-track is a rectangular tie neighbourhood for the train track.

We now define a collection of foliated rectangles determined by $a$ and $b$.
All of the rectangular tie neighbourhoods we construct will be subcollections of these rectangles.
Isotope $a$ and $b$ into minimal position.
For each intersection point $x$ in $a \cap b$ we take a rectangular neighbourhood $R_x$, which we shall call a \emph{vertex rectangle}.
The rectangle has four \emph{corners}, one in each of the quadrants formed by the local intersection of $a$ and $b$, and alternating sides parallel to $a$ and $b$.
We foliate $R_x$ by arcs parallel to $a \cap R_x$, so that the two sides of the rectangle parallel to $a$ are leaves of the foliation.
This is illustrated below in Figure \ref{fig:rx}.

\begin{figure}[H]
\begin{center}
\caption{A rectangle $R_x$ determined by a point $x \in a \cap b$.}
\label{fig:rx}
\begin{tikzpicture}[scale=0.5]
\filldraw [thick,color=black,fill=lightgray] (-2,-2) rectangle (2, 2);
\foreach \y in {-2,-1.5,...,2} { \draw (-2, \y) -- (2, \y); };
\draw [thick, red] (-3, 0) -- (3, 0) node [below] {$a$};
\draw [thick, red] (0, -3) -- (0, 3) node [right] {$b$};
\draw (0.45, -0.45) node [red, circle, fill=lightgray, inner sep=0] {$x$};
\draw (2,-2) node [below right] {$R_x$};
\end{tikzpicture}
\end{center}
\end{figure}

Now suppose that $\alpha$ is a component of $a - b$, with endpoints $x$ and $y$.
We define a rectangle $R_\alpha$, called an \emph{$a$-rectangle}, as follows.
It is a rectangle in $S$, containing $\alpha - (R_x \cup R_y)$, two of whose sides consist of the sides of $R_x$ and $R_y$ which are parallel to $b$ and intersect $\alpha$.
The other two sides consist of properly embedded arcs parallel to $\alpha - (R_x \cup R_y)$.
We shall foliate this rectangle with arcs parallel to $a \cap R_\alpha$ such that the two sides parallel to $\alpha$ are leaves of the foliation.
This is illustrated in Figure \ref{pic:Ralpha}.

\begin{figure}[H]
\begin{center}
\begin{tikzpicture}[scale=0.4]
\draw [thick,black] (-2,-2) rectangle (2, 2);
\foreach \y in {-2,-1,...,2} { \draw (-2, \y) -- (2, \y); };
\begin{scope}[xshift=16cm]
\draw [thick,black] (-2,-2) rectangle (2, 2);
\foreach \y in {-2,-1,...,2} { \draw (-2, \y) -- (2, \y); };
\end{scope}
\draw [thick, color=black, fill=lightgray] (2,-2) rectangle (14, 2);
\foreach \y in {-2,-1,...,2} { \draw (2, \y) -- (14, \y); };
\draw [thick, red] (-4, 0) -- (20, 0) node [below] {$a$};
\draw [thick, red] (0, -4) -- (0, 4) node [right] {$b$};
\draw [thick, red] (16, -4) -- (16, 4) node [right] {$b$};
\draw (0,0) node [red, below right] {$x$};
\draw (16,0) node [red, below right] {$y$};
\draw (8, 0) node [red, below] {$\alpha$};
\draw (8, -2) node [below] {$R_\alpha$};
\draw (-2, -2) node [below] {$R_x$};
\draw (18, -2) node [below] {$R_y$};
\end{tikzpicture}
\end{center}
\caption{Rectangles determined by a component $\alpha$ of $a - b$.}
\label{pic:Ralpha}
\end{figure}

We do the same for components $\beta$ of $b - a$, but this time foliated by arcs crossing $\beta$ exactly once, as illustrated in Figure \ref{pic:Rbeta}.
We shall call these rectangles \emph{$b$-rectangles}.

\begin{figure}[H]
\begin{center}
\begin{tikzpicture}[scale=0.25]
\draw [fill=lightgray] ([shift=(0:10cm)]8,2) arc (0:180:10cm);
\draw [fill=white] ([shift=(0:6cm)]8,2) arc (0:180:6cm);
\draw [thick,black] (-2,-2) rectangle (2, 2);
\foreach \y in {-2,-1,...,2} { \draw (-2, \y) -- (2, \y); };
\begin{scope}[xshift=16cm]
\draw [thick,black] (-2,-2) rectangle (2, 2);
\foreach \y in {-2,-1,...,2} { \draw (-2, \y) -- (2, \y); };
\end{scope}
\foreach \x in {0,10,...,180} { \draw (8,2) ++(\x:6cm) -- ++(\x:4cm); };
\draw [thick] ([shift=(0:6cm)]8,2) arc (0:180:6cm);
\draw [thick] ([shift=(0:10cm)]8,2) arc (0:180:10cm);
\draw [thick, red] ([shift=(0:8cm)]8,2) arc (0:180:8cm);
\draw [thick, red] (-4, 0) -- (4, 0);
\draw [thick, red, dashed] (4, 0) -- (14, 0);
\draw [thick, red] (12, 0) -- (20, 0) node [below] {$a$};
\draw [thick, red] (0, -4) -- (0, 2);
\draw [thick, red] (16, -4) node [right] {$b$} -- (16, 2);
\draw (16.75, 3.2) node [red, circle, fill=lightgray, inner sep=0] {$\beta$};
\draw (0.75, -0.75) node [red, circle, fill=white, inner sep=0] {$x$};
\draw (16.75, -0.75) node [red, circle, fill=white, inner sep=0] {$y$};
\draw (-2, -2) node [below] {$R_x$};
\draw (18, -2) node [below] {$R_y$};
\draw (8, 8) node [below] {$R_\beta$};
\end{tikzpicture}
\end{center}
\caption{Rectangles determined by a component $\beta$ of $b - a$.}
\label{pic:Rbeta}
\end{figure}

Finally, suppose that $f$ is a rectangle component of $S - (a \cup b)$.
The \emph{face rectangle} $R_f$ is a foliated rectangle lying inside $f$, whose sides consist of the four sides of the $a$- and $b$-rectangles which meet $f$.
The foliation consists of arcs parallel to $a$, such that the two $a$-sides of the rectangle are leaves of the foliation.
This is illustrated below in Figure \ref{pic:Rf}.

\begin{figure}[H]
\begin{center}
\begin{tikzpicture}[scale=0.3]
\draw [thick, color=black, fill=lightgray] (2,2) rectangle (14, 8);
\draw [thick,black] (-2,-2) rectangle (2, 2);
\foreach \y in {-2,-1,...,2} { \draw (-2, \y) -- (2, \y); };
\begin{scope}[xshift=16cm]
\draw [thick,black] (-2,-2) rectangle (2, 2);
\foreach \y in {-2,-1,...,2} { \draw (-2, \y) -- (2, \y); };
\end{scope}
\foreach \y in {-2,-1,...,2} { \draw (2, \y) -- (14, \y); };
\draw [thick, red] (-4, 0) -- (20, 0) node [below] {$a$};
\draw (0.75,-1) node [red, circle, fill=white, inner sep=0] {$w$};
\draw (16.75,-1) node [red, circle, fill=white, inner sep=0] {$z$};
\draw (8, -1) node [red, circle, fill=white, inner sep=0] {$\alpha'$};
\draw (8, -2) node [below] {$R_{\alpha'}$};
\draw (-2, 5) node [left] {$R_\beta$};
\draw (18, 5) node [right] {$R_{\beta'}$};
\draw (-2, -2) node [below] {$R_w$};
\draw (18, -2) node [below] {$R_z$};
\draw [thick, red] (0, 4) -- (0, 6);
\draw [thick, red] (16, 4) -- (16, 6);
\draw [thick, black] (-2, 2) -- (-2, 8);
\draw [thick, black] (2, 2) -- (2, 8);
\draw [thick, black] (14, 2) -- (14, 8);
\draw [thick, black] (18, 2) -- (18, 8);
\foreach \y in {3,4,...,7} { \draw (-2, \y) -- (18, \y); };
\draw [thick, black] (2, -2) -- (14, -2);
\draw [thick, black] (2, 12) -- (14, 12);
\draw [thick, red] (0, -4) -- (0, 4) node [right, circle, fill=white, inner sep=0] {$\beta$};
\draw [thick, red] (16, -4) -- (16, 4) node [right, circle, fill=white, inner sep=0] {$\beta'$};
\begin{scope}[yshift=+10cm]
\draw [thick,black] (-2,-2) rectangle (2, 2);
\foreach \y in {-2,-1,...,2} { \draw (-2, \y) -- (2, \y); };
\begin{scope}[xshift=16cm]
\draw [thick,black] (-2,-2) rectangle (2, 2);
\foreach \y in {-2,-1,...,2} { \draw (-2, \y) -- (2, \y); };
\end{scope}
\foreach \y in {-2,-1,...,2} { \draw (2, \y) -- (14, \y); };
\draw [thick, red] (-4, 0) -- (20, 0) node [below] {$a$};
\draw [thick, red] (0, -4) -- (0, 4) node [right] {$b$};
\draw [thick, red] (16, -4) -- (16, 4) node [right] {$b$};
\draw (0.75,-1) node [red, circle, fill=white, inner sep=0] {$x$};
\draw (16.75,-1) node [red, circle, fill=white, inner sep=0] {$y$};
\draw (8, -1) node [red, circle, fill=white, inner sep=0] {$\alpha$};
\draw (8, 2) node [above] {$R_\alpha$};
\draw (-2, 2) node [above] {$R_x$};
\draw (18, 2) node [above] {$R_y$};
\end{scope}
\draw (8, 5) node [circle, fill=lightgray, inner sep=0] {$R_f$};
\end{tikzpicture}
\end{center}
\caption{A face rectangle determined by a rectangular face $f$ of $S - ( a \cup b )$.}
\label{pic:Rf}
\end{figure}

We shall denote the resulting foliation of this subset of $S$ by $\mathcal{F}$.
This foliates all of $S$ except for the (slightly shrunken) non-rectangular regions of $S - (a \cup b)$.
We now prove Proposition \ref{prop:collapse}.
We will construct a foliated region in $S$ which is a union of rectangles, and show that it is a tie neighbourhood for a train track with a single switch.

\begin{proof}[Proof (of Proposition \ref{prop:collapse}).]
The foliated region $\mathcal{F}$ is a union of foliated rectangles.
We will build a rectangular tie neighbourhood $\mathcal{F}_i$ for the pre-track $\tau'_i$ which will consist of a subcollection of these rectangles, and which may contain branch rectangles with multiple parallel branches.
There will be no complementary regions of $\mathcal{F}_i$ which are rectangles, so the pre-track $\tau'_i$ will collapse to a train track $\tau_i$.

The foliated region $\mathcal{F}_i$ consists of all vertex rectangles and all $b$-rectangles, together with all the face rectangles which are contained in rectangular components of $S - (a_i \cup b)$, as well as all $a$-rectangles adjacent to an included face rectangle.
Let $a^+_i$ be the maximal subarc of $a$ with endpoints in $a \cap b$ contained in the connected component of $\mathcal{F}_i \cap a$ containing $a_i$.
In particular $a_i \subseteq a^+_i$.
As the bicorn $(a_i, b_i)$ is non-degenerate, there is at least one non-rectangle region of $S - (a \cup b)$ with a boundary edge in $a - a_i$.
Any non-rectangle region of $S - (a \cup b)$ is contained in a non-rectangle region of $S - (a_i \cup b)$, so $a^+_i$ does not consist of all of $a$, and therefore is an interval.
The union of the vertex rectangles and the $a$-rectangles in $\mathcal{F}_i$ meeting $a^+_i$ is a regular neighbourhood of $a^+_i$, and is a foliated rectangle containing the unique switch of the pre-track $\tau'_i$.
We shall denote this rectangle by $R^+_i$.
This will be the switch rectangle for the rectangular tie neighbourhood.
We must now show that all components of $\mathcal{F}_i - R^+_i$ are foliated rectangles (perhaps containing multiple branches of $\tau'_i$) and that no component of $S - \mathcal{F}_i$ is a rectangle.

If a component of $\mathcal{F}_i - R^+_i$ has no face rectangles, then it is a union of $b$-rectangles and vertex rectangles.
Let $\mathcal{B}$ be the union of the vertex rectangles and the $b$-rectangles; so $\mathcal{B}$ is a regular neighbourhood of $b$.
The intersection $\mathcal{B} \cap R^+_i$ consists of a subset of the vertex rectangles.
Thus $\mathcal{B} - R^+_i$ is a union of rectangles with disjoint closures.
We will refer to the components of $\mathcal{B} - R^+_i$ as \emph{$b$-strips}.
Each $b$-strip is a rectangle whose boundary consists of four edges: two parallel to $a$, and contained in $\partial R^+_i$, and the other two parallel to $b$, and parallel to properly embedded arcs in the complement of $R^+_i$.
We say two $b$-strips are \emph{parallel} if their $b$-parallel edges cobound a rectangle in $S - R^+_i$.

We say two face rectangles are \emph{vertically adjacent} if they border a common $a$-rectangle not in $R^+_i$.
We say a maximal collection of vertically adjacent rectangles, together with the $a$-rectangles between them, is a \emph{face strip}.
A face strip is a rectangle whose boundary consists of four edges, two parallel to $a$ and contained in $\partial R^+_i$, and two parallel to $b$, which are properly embedded parallel arcs in the complement of $R^+_i$.
We say two face strips are \emph{parallel} if the $b$-parallel edges are parallel arcs in the complement of $R^+_i$.
We say a face strip is \emph{parallel} to a $b$-strip if their $b$-parallel edges cobound a rectangle in $S - R^+_i$.
We observe that a face strip is adjacent to two $b$-strips, one on each side, and all three strips are parallel to each other.

A component of $\mathcal{F}_i - R^+_i$ which contains a face rectangle is a union of face strips and $b$-strips.
Each pair of face strips is separated by a $b$-strip, and the observation in the paragraph above ensures that all of the strips are parallel.

We now verify that all of the components of $\mathcal{F}_i - R^+_i$ are disjoint.
Any two components consisting only of $b$-strips are disjoint.
Let $R_1$ and $R_2$ be a pair of components of $\mathcal{F}_i - R^+_i$ which have a corner in common.
Each corner lies in the boundary of a face rectangle, a $b$-rectangle, an $a$-rectangle and a vertex rectangle.
Suppose the face rectangle lies in one of the components, say $R_1$.
Then the $b$-rectangle also lies in $R_1$.
If the vertex and $a$-rectangles do not lie in $R_1$, then they lie in $R^+_i$, so there can be no rectangle $R_2$ intersecting $R_1$, a contradiction.
So neither $R_1$ nor $R_2$ contains the face rectangle.
If the $a$-rectangle lies in $R_1$, then so must the face rectangle beside it, so the $a$-rectangle is also not contained in either $R_1$ or $R_2$.
If $R_1$ contains the $b$-rectangle, then it either contains the vertex rectangle, or the vertex rectangle lies in $R^+_i$.
In either case, none of the other rectangles can lie in $R_2$, a contradiction.
\end{proof}

Proposition \ref{prop:nested} follows from work of Penner and Harer \cite{ph}.

\begin{proof}[Proof (of Proposition \ref{prop:nested}).]
Any closed train route in $\tau_{i+1}$ is a union of arcs of $b$ starting and ending at $a_{i+1}$.
As $a_{i+1} \subset a_i$, it is also a train route in $\tau_i$.
The train track $\tau_{i+1}$ may then be obtained from $\tau_i$ by splitting and shifting, by Penner and Harer \cite{ph}*{Theorem 2.4.1}.
\end{proof}

Before we prove Proposition \ref{prop:discs} we need the following observation.

\begin{proposition} \label{prop:bicorn essential}
Suppose that $a$ and $b$ are essential simple closed curves in minimal position.
Then any bicorn contained in $a \cup b$ is an essential simple closed curve.
\end{proposition}

Proposition \ref{prop:bicorn essential} is standard and we omit the proof.

\begin{proof}[Proof (of Proposition \ref{prop:discs}).]
Let $a = \partial D$ and $b = \partial E$.
We first show how to choose an initial non-degenerate bicorn $(a_1, b_1)$.
By an Euler characteristic argument, there must be at least one complementary region of $a \cup b$ which is not a rectangle.
Let $\alpha$ be a component of $a - b$ which lies in the boundary of one of the non-rectangular regions of the complement of $a \cup b$.
Let $b_1 \subset b$ be a choice of returning arc determined by an outermost disc $E_1$ in $E$, and let $a_1$ be the subinterval of $a$:
\begin{itemize}
\item with the same endpoints as $b_1$,
\item which is disjoint from $\alpha$.
\end{itemize}
The bicorn $c_1$ determined by $(a_1, b_1)$ bounds a disc corresponding to disc surgery of $D$ along $E_1$, and we shall denote this disc by $F_1$.
This disc $F_1$ is essential by Proposition \ref{prop:bicorn essential}.

Now suppose we have constructed the $i$-th nested bicorn disc $F_i = (D_i, E_i)$ with bicorn boundary $c_i = (a_i, b_i)$.
The intersection $E \cap F_i$ is equal to $E_i \cup (E \cap D_i)$.
Choose $E_{i+1}$ to be an outermost disc of $E$ with respect to $E \cap D_i$, which is not $E_i$.
Then $\gamma_{i+1} = E_{i+1} \cap D_i$ bounds a disc in $D_i$ which we shall choose to be $D_{i+1}$. We shall set $F_{i+1} = D_{i+1} \cup E_{i+1}$, which has bicorn boundary $c_{i+1} = (a_{i+1}, b_{i+1})$, where $a_{i+1} \subset a_i$ is taken to be $D_{i+1} \cap a$ and $b_{i+1}$ is taken to be $E_{i+1} \cap b$. The disc $F_{i+1}$ is essential by Proposition \ref{prop:bicorn essential}, and furthermore, is obtained from disc surgery of $F_i$ along $E_{i+1}$. We have therefore constructed the next disc in the nested bicorn disc sequence, as required. \end{proof}

\begin{bibdiv}
\begin{biblist}
\bib{ack}{article}{ author={Ackermann, Robert}, title={An alternative approach to extending pseudo-Anosovs over compression bodies}, journal={Algebr. Geom. Topol.}, volume={15}, date={2015}, number={4}, pages={2383--2391}, issn={1472-2747}, }
\bib{agol-freedman}{article}{ author={Agol, Ian}, author={Freedman, Michael}, title={Embedding Heegaard decompositions}, date={2019}, eprint={arXiv:1906.03244}, }
\bib{bjm}{article}{ author={Biringer, Ian}, author={Johnson, Jesse}, author={Minsky, Yair}, title={Extending pseudo-Anosov maps into compression bodies}, journal={J. Topol.}, volume={6}, date={2013}, number={4}, pages={1019--1042}, issn={1753-8416}, }
\bib{bv}{article}{ author={Biringer, Ian}, author={Vlamis, Nicholas G.}, title={Automorphisms of the compression body graph}, journal={J. Lond. Math. Soc. (2)}, volume={95}, date={2017}, number={1}, pages={94--114}, issn={0024-6107}, }
\bib{bowditch-rel-hyp}{article}{ author={Bowditch, Brian H.}, title={Relatively hyperbolic groups}, journal={Internat. J. Algebra Comput.}, volume={22}, date={2012}, number={3}, pages={1250016, 66}, issn={0218-1967}, }
\bib{burton-purcell}{article}{ author={Burton, Stephan D.}, author={Purcell, Jessica S.}, title={Geodesic systems of tunnels in hyperbolic 3-manifolds}, journal={Algebr. Geom. Topol.}, volume={14}, date={2014}, number={2}, pages={925--952}, issn={1472-2747}, }
\bib{casson-long}{article}{ author={Casson, A. J.}, author={Long, D. D.}, title={Algorithmic compression of surface automorphisms}, journal={Invent. Math.}, volume={81}, date={1985}, number={2}, pages={295--303}, issn={0020-9910}, }
\bib{cdp}{book}{ author={Coornaert, M.}, author={Delzant, T.}, author={Papadopoulos, A.}, title={G\'eom\'etrie et th\'eorie des groupes}, language={French}, series={Lecture Notes in Mathematics}, volume={1441}, publisher={Springer-Verlag, Berlin}, date={1990}, pages={x+165}, }
\bib{dang-purcell}{article}{ author={Dang, Vinh}, author={Purcell, Jessica}, title={Cusp shape and tunnel number}, eprint={arXiv:1711.03693}, date={2017}, }
\bib{dow-tay}{article}{ author={Dowdall, Spencer}, author={Taylor, Samuel J.}, title={The co-surface graph and the geometry of hyperbolic free group extensions}, journal={J. Topol.}, volume={10}, date={2017}, number={2}, pages={447--482}, issn={1753-8416}, }
\bib{dt}{article}{ author={Dunfield, Nathan M.}, author={Thurston, William P.}, title={Finite covers of random 3-manifolds}, journal={Invent. Math.}, volume={166}, date={2006}, number={3}, pages={457--521}, issn={0020-9910}, }
\bib{gadre}{article}{ author={Lustig, Martin}, author={Moriah, Yoav}, title={Are large distance Heegaard splittings generic?}, note={With an appendix by Vaibhav Gadre}, journal={J. Reine Angew. Math.}, volume={670}, date={2012}, pages={93--119}, issn={0075-4102}, }
\bib{ham}{article}{ author={Hamenst{\"a}dt, Ursula}, title={Train tracks and the Gromov boundary of the complex of curves}, conference={ title={Spaces of Kleinian groups}, }, book={ series={London Math. Soc. Lecture Note Ser.}, volume={329}, publisher={Cambridge Univ. Press}, place={Cambridge}, }, date={2006}, pages={187--207}, }
\bib{hempel}{article}{ author={Hempel, John}, title={3-manifolds as viewed from the curve complex}, journal={Topology}, volume={40}, date={2001}, number={3}, pages={631--657}, issn={0040-9383}, }
\bib{km}{article}{ author={Kaimanovich, Vadim A.}, author={Masur, Howard A.}, title={The Poisson boundary of the mapping class group}, journal={Invent. Math.}, volume={125}, date={1996}, number={2}, pages={221--264}, issn={0020-9910}, }
\bib{kapovich-rafi}{article}{ author={Kapovich, Ilya}, author={Rafi, Kasra}, title={On hyperbolicity of free splitting and free factor complexes}, journal={Groups Geom. Dyn.}, volume={8}, date={2014}, number={2}, pages={391--414}, issn={1661-7207}, }
\bib{klarreich}{article}{ author={Klarreich, E.}, title={The boundary at infinity of the curve complex and the relative Teichm\"uller space}, }
\bib{kobayashi}{article}{ author={Kobayashi, Tsuyoshi}, title={Heights of simple loops and pseudo-Anosov homeomorphisms}, conference={ title={Braids}, address={Santa Cruz, CA}, date={1986}, }, book={ series={Contemp. Math.}, volume={78}, publisher={Amer. Math. Soc.}, place={Providence, RI}, }, date={1988}, pages={327--338}, }
\bib{lmw}{article}{ author={Lubotzky, Alexander}, author={Maher, Joseph}, author={Wu, Conan}, title={Random methods in 3-manifold theory}, journal={Tr. Mat. Inst. Steklova}, volume={292}, date={2016}, number={Algebra, Geometriya i Teoriya Chisel}, pages={124--148}, issn={0371-9685}, isbn={5-7846-0137-7}, isbn={978-5-7846-0137-7}, }
\bib{ma-wang}{article}{ author={Ma, Jiming}, author={Wang, Jiajun}, title={Quasi-homomorphisms on mapping class groups vanishing on a handlebody group}, date={2017}, }
\bib{maher_heegaard}{article}{ author={Maher, Joseph}, title={Random Heegaard splittings}, journal={J. Topol.}, volume={3}, date={2010}, number={4}, pages={997--1025}, issn={1753-8416}, }
\bib{maher-tiozzo}{article}{ author={Maher, Joseph}, author={Tiozzo, Giulio}, title={Random walks on weakly hyperbolic groups}, journal={Journal f\"ur die reine und angewandte Mathematik (Crelle's Journal), to appear}, date={2017}, }
\bib{masur}{article}{ author={Masur, Howard}, title={Measured foliations and handlebodies}, journal={Ergodic Theory Dynam. Systems}, volume={6}, date={1986}, number={1}, pages={99--116}, issn={0143-3857}, }
\bib{mm1}{article}{ author={Masur, Howard A.}, author={Minsky, Yair N.}, title={Geometry of the complex of curves. I. Hyperbolicity}, journal={Invent. Math.}, volume={138}, date={1999}, number={1}, pages={103--149}, issn={0020-9910}, }
\bib{mm2}{article}{ author={Masur, H. A.}, author={Minsky, Y. N.}, title={Geometry of the complex of curves. II. Hierarchical structure}, journal={Geom. Funct. Anal.}, volume={10}, date={2000}, number={4}, pages={902--974}, issn={1016-443X}, }
\bib{mm3}{article}{ author={Masur, Howard A.}, author={Minsky, Yair N.}, title={Quasiconvexity in the curve complex}, conference={ title={In the tradition of Ahlfors and Bers, III}, }, book={ series={Contemp. Math.}, volume={355}, publisher={Amer. Math. Soc.}, place={Providence, RI}, }, date={2004}, pages={309--320}, }
\bib{mms}{article}{ author={Masur, Howard}, author={Mosher, Lee}, author={Schleimer, Saul}, title={On train-track splitting sequences}, journal={Duke Math. J.}, volume={161}, date={2012}, number={9}, pages={1613--1656}, issn={0012-7094}, }
\bib{masur-schleimer}{article}{ author={Masur, Howard}, author={Schleimer, Saul}, title={The geometry of the disk complex}, journal={J. Amer. Math. Soc.}, volume={26}, date={2013}, number={1}, pages={1--62}, issn={0894-0347}, }
\bib{morgan-tian}{book}{ author={Morgan, John}, author={Tian, Gang}, title={Ricci flow and the Poincar\'e conjecture}, series={Clay Mathematics Monographs}, volume={3}, publisher={American Mathematical Society}, place={Providence, RI}, date={2007}, pages={xlii+521}, isbn={978-0-8218-4328-4}, }
\bib{ph}{book}{ author={Penner, R. C.}, author={Harer, J. L.}, title={Combinatorics of train tracks}, series={Annals of Mathematics Studies}, volume={125}, publisher={Princeton University Press}, place={Princeton, NJ}, date={1992}, pages={xii+216}, isbn={0-691-08764-4}, isbn={0-691-02531-2}, }
\bib{rivin}{article}{ author={Rivin, Igor}, title={Walks on groups, counting reducible matrices, polynomials, and surface and free group automorphisms}, journal={Duke Math. J.}, volume={142}, date={2008}, number={2}, pages={353--379}, issn={0012-7094}, }
\bib{st}{article}{ author={Scharlemann, Martin}, author={Tomova, Maggy}, title={Alternate Heegaard genus bounds distance}, journal={Geom. Topol.}, volume={10}, date={2006}, pages={593--617}, issn={1465-3060}, }
\bib{thurston}{article}{ author={Thurston, William P.}, title={On the geometry and dynamics of diffeomorphisms of surfaces}, journal={Bull. Amer. Math. Soc. (N.S.)}, volume={19}, date={1988}, number={2}, pages={417--431}, issn={0273-0979}, }
\end{biblist}
\end{bibdiv}

\vskip 20pt \noindent Joseph Maher \\ CUNY College of Staten Island and CUNY Graduate Center \\ \url{[email protected]} \\

\noindent Saul Schleimer \\ University of Warwick \\ \url{[email protected]}

\end{document}
\begin{document} \begin{abstract} Let $R$ be a ring satisfying a polynomial identity and let $\delta$ be a derivation of $R$. We show that if $N$ is the nil radical of $R$ then $\delta(N)\subseteq N$ and the Jacobson radical of $R[x;\delta]$ is equal to $N[x;\delta]$. As a consequence, we have that if $R$ is locally nilpotent then $R[x;\delta]$ is locally nilpotent. This affirmatively answers a question of Smoktunowicz and Ziembowski. \end{abstract} \keywords{Differential polynomial ring, PI ring, Jacobson radical} \maketitle \section{Introduction} Let $R$ be a ring (not necessarily unital) and let $\delta$ be a derivation of $R$. We recall that the differential polynomial ring $R[x;\delta]$ is, as a set, given by all polynomials of the form $a_n x^n+\cdots +a_1 x +a_0$ with $n\ge 0$, $a_0,\ldots ,a_n\in R$. Multiplication is given by $xa=ax+\delta(a)$ for $a\in R$, extended using associativity and linearity. There has been a lot of interest in studying the Jacobson radical of the ring $R[x;\delta]$ \cite{Am, BG, FKM, Jor, SZ, TWC}. In the case when $\delta$ is the zero derivation---that is, when $R[x;\delta]=R[x]$---Amitsur \cite{Am} showed that the Jacobson radical, $J(R[x])$, of $R[x]$ is precisely $N[x]$ where $N$ is a nil ideal of $R$ given by $N=J(R[x])\cap R$. At the opposite end of the spectrum, Ferrero, Kishimoto, and Motose \cite{FKM} showed that when $R$ is commutative, $J(R[x;\delta])\cap R$ is a nil ideal and $J(R[x;\delta])=(J(R[x;\delta])\cap R)[x;\delta]$. It is still unknown whether $J(R[x;\delta])$ is equal to $(0)$ when $R$ has no nonzero nil ideals. A surprising recent development comes from the work of Smoktunowicz and Ziembowski \cite{SZ}. We recall that a ring $R$ is locally nilpotent if every finitely generated subring of $R$ is a nilpotent ring. Smoktunowicz and Ziembowski negatively answered a question of Sheshtakov \cite[Question 1.1]{SZ} by constructing an example of a locally nilpotent ring $R$ such that $R[x;\delta]$ is not equal to its own Jacobson radical. In addition to this, they asked \cite[p. 2]{SZ} whether Sheshtakov's question has an affirmative answer if one assumes, in addition, that $R$ satisfies a polynomial identity (PI ring, for short). In this paper we show that this is indeed the case. Our main result is the following theorem. \begin{thm} \label{NilRing} Let $R$ be a locally nilpotent ring satisfying a polynomial identity and let $\delta$ be a derivation of $R$. Then $R[x;\delta]$ is locally nilpotent. \end{thm} In particular, this result shows that $R[x;\delta]$ is equal to its own Jacobson radical under the hypotheses from the statement of Theorem \ref{NilRing}. This gives an affirmative answer to a question of Smoktunowicz and Ziembowski \cite[p. 2]{SZ}. We note that the analogue of Theorem \ref{NilRing} need not hold if we form a skew polynomial extension of $R$ using an automorphism $\sigma$ instead of a derivation $\delta$. For example, consider the ring $R=M/M^2$ where $M$ is the maximal ideal $(t_n\colon n\in \mathbb{Z})$ of $\mathbb{C}[t_n\colon n\in \mathbb{Z}]$ and let $\sigma$ be the automorphism of $R$ given by $\sigma(t_i+M^2)=t_{i+1}+M^2$. Then $R$ is commutative and $R^2=0$ but $t_0x\in R[x;\sigma]$ is not nilpotent. As a corollary of Theorem \ref{NilRing}, we obtain---in the characteristic zero case---a result that can be thought of as an extension of a result of Ferrero, Kishimoto, and Motose \cite{FKM} to polynomial identity rings.
(Our result does not hold in the positive characteristic case, however; we give examples which show this.) We recall that in a ring satisfying a polynomial identity, we have a two-sided nil ideal called the \emph{nil radical}. This ideal is the sum of all right nil ideals \cite[Proposition 1.6.9 and Corollary 1.6.18]{Ro} and is locally nilpotent \cite{K}. In general, it is unknown whether the sum of left nil ideals is again nil---this is the famous K\"othe conjecture. \begin{thm} \label{cor: J} Let $R$ be a unital polynomial identity algebra over a field of characteristic zero and let $\delta$ be a derivation of $R$. Then if $N$ is the nil radical of $R$ then $\delta(N)\subseteq N$ and $J(R[x;\delta])=N[x;\delta]$. In particular, $J(R[x;\delta])=(J(R[x;\delta])\cap R)[x;\delta]$. \end{thm} We give examples that show that the containment $\delta(N)\subseteq N$ need not hold if the characteristic zero hypothesis is dropped; in particular, the equality $J(R[x;\delta])=N[x;\delta]$ need not hold without this hypothesis. The outline of this paper is as follows. In Section 2, we give some results from combinatorics on words. In Section 3, we use these combinatorial results to prove Theorems \ref{NilRing} and \ref{cor: J}. We note that throughout this paper, we have opted to work with rings that are not necessarily unital---the reason for this is that the question of Smoktunowicz and Ziembowski was asked for such rings. On the other hand, we occasionally refer to results that sometimes implicitly assume that the involved rings are unital. In practice, this does not create any issues: to a non-unital ring $R$, one can create an overring $S$ of $R$ with identity in which $R$ sits as a two-sided ideal and has the property that $S/R$ is a homomorphic image of $\mathbb{Z}$ (possibly $\mathbb{Z}$, itself). By working with the ring $S$, one can generally apply any results stated for unital rings to $S$ and then show they are inherited by $R$. Since this is generally straightforward, we make no mention of this other than here. \section{Combinatorics on words} In this section, we give some results on combinatorics on words that will be useful to us. We begin by recalling some of the basic notions we will use. For any set $A$, let $A^+$ denote the free semigroup on $A$. We will refer to the elements of $A^+$ as \emph{words}. \ For any $u\in A^+$, we let $u_i\in A$ denote the $i$-th letter of $u$. A \emph{subword} of $u$ is a contiguous string of letters of $u$, possibly empty. We say that a subword $v$ of $u$ is a \emph{prefix} if $u=vw$ for some word $w$, possibly empty; we say that $v$ is a \emph{suffix} if $u=wv$ for some, possibly empty, subword $w$ of $u$. We will be interested in the case when $A=\mathbb{N}:=\{0,1,\ldots \}$. Let $S_n$ be the symmetric group on $n$ letters. Define ${\rm weight}\colon\mathbb{N}^+\to \mathbb{N}$ as follows. If $u\in \mathbb{N}^+$ is a word of length $n$ then we define \[ {\rm weight}(u) := \min\left\{\sum_{i=1}^n (n+1 - i) u_{\sigma(i)}\mid \sigma\in S_n\right\}. \] For a natural number $k$ and a word $u\in \mathbb{N}^+$ of length $n$, we say that $u$ is $k$-\emph{valid} if ${\rm weight}(u) \le k{n+1\choose 2}$. Roughly speaking, this says that the average letter of $u$ is not too large compared to $k$. Let ${\bf b}=(b_0,b_1,b_2,\ldots )$ be a sequence of natural numbers. We say that $u\in\mathbb{N}^+$ is ${\bf b}$-bounded if for each $m\in\mathbb{N}$, every subword of length $b_m$ contains at least one letter greater than $m$. Let $(B,<)$ be a poset. 
We place a partial order $\prec$ on $B^+$ as follows. Let $u,v\in B^+$. Then $u$ and $v$ are incomparable if one is a prefix of the other. Otherwise, we compare them lexicographically using the order from $B$. We will be most interested in this when $B$ is the natural numbers and we will make use of this induced order on $\mathbb{N}^+$. We say that a finite sequence of words $\{v_i\}_{i=1}^d\subseteq B^+$ is $d$-\emph{decreasing} if $$v_1 \succ v_2 \succ \cdots \succ v_d.$$ We say that a word $u\in B^+$ has a $d$-decreasing subword if we can express $u = v w_1 w_2\cdots w_d x$ where $\{w_i\}_{i=1}^d$ is a $d$-decreasing subsequence. We observe that every word trivially contains a $0$-decreasing subsequence. \begin{prop} \label{CoW} Let ${\bf b}=(b_0,b_1,\ldots )$ be a sequence of natural numbers, let $d$ and $k$ be positive integers, and let $\varepsilon\in (0,1]$. Then there exist natural number constants $M=M(d,{\bf b},k,\varepsilon)$ and $N=N(d,{\bf b},k,\varepsilon)$ such that if $u\in\mathbb{N}^+$ is a $k$-valid, ${\bf b}$-bounded word of length $n \ge N$, then the subword of $u$ consisting of the last $\lfloor{\varepsilon n}\rfloor$ letters contains a $d$-decreasing subsequence $\{w_i\}_{i=1}^d$ where the first letter of $w_i$ is less than $M$ for $i=1,\ldots ,d$. \end{prop} \begin{proof} We proceed by induction on $d$. The case when $d = 0$ is vacuous and we may take $M=M(0,b,k,\varepsilon)=N(0,b,k,\varepsilon)=1$. Suppose now that the proposition is true for all nonnegative integers $\le d$. We take \begin{equation} M_1 = M\left(d,{\bf b},k,\varepsilon/2\right)\qquad {\rm and}\qquad N_1 = N\left(d,{\bf b},k,\varepsilon/2\right). \end{equation} We pick a positive integer $M_2$ satisfying: \begin{enumerate} \item[(i)] $M_2 > M_1$; \item[(ii)] $M_2 > 8b_{M_1}^2 k\varepsilon^{-2}$. \end{enumerate} We have $$ M_2 {(\varepsilon n/2 -1)/b_{M_1}\choose 2} \sim M_2\varepsilon^2 b_{M_1}^{-2}n^2/8 \ge kn^2.$$ It follows that there is a natural number $N_2>N_1$ such that whenever $n>N_2$ we have \begin{equation} \label{eq: MM} M_2 {(\varepsilon n/2 -1)/b_{M_1} \choose 2} > k{n+1\choose 2}. \end{equation} Let $u\in \mathbb{N}^+$ be a $k$-valid, ${\bf b}$-bounded word of length $n\ge N_2$. We write $u = vwx$, where $wx$ is of length $\lfloor{\varepsilon n}\rfloor$ and $x$ is of length $\lfloor{\varepsilon n/2}\rfloor$. We decompose $w$ into subwords of length $b_{M_1}$ as follows. We write $w = y_1\cdots y_j y_{j+1}$ where each of $y_1,\ldots ,y_j$ has length $b_{M_1}$ and $y_{j+1}$ has length less than $b_{M_1}$ (possibly zero). By construction, \begin{equation} j = \left\lfloor{\frac{\lfloor{\varepsilon n}\rfloor - \lfloor{\varepsilon n/2} \rfloor}{b_{M_1}}}\right\rfloor. \end{equation} Since $y_i$ has length $b_{M_1}$ for $i\in \{1,\ldots ,j\}$, it must contain a letter $a_i$ with $a_i\ge M_1$. We claim that there exists some $i\in \{1,\ldots ,j\}$ such that $a_i< M_2$. To see this, suppose that this is not the case. Then $u$ contains at least $j$ letters that are each at least $M_2$. Since $u$ contains $j$ letters that are at least $M_2$, we have $${\rm weight}(u)\ge j M_2 + (j-1) M_2 + \cdots + M_2 = M_2{j+1\choose 2}\ge M_2 {(\varepsilon n/2 -1)/b_{M_1} \choose 2}.$$ But Equation (\ref{eq: MM}) gives that this contradicts the fact that $u$ is a $k$-valid word. We conclude that $w$ must contain a letter $a$ with $M_1 \le a < M_2$. We write $w = bc$ where $c$ is a word whose first letter is $a$. 
By the inductive hypothesis, we can write $x = p v_1 \cdots v_d q$ where $v_1\succ \cdots \succ v_d$ and the first letter of $v_i$ is strictly less than $M_1$ for $i\in \{1,\ldots ,d\}$. We then have that $wx = bcpv_1\cdots v_d q$, and by construction $$cp \succ v_1 \succ \cdots \succ v_d$$ is a $(d+1)$-decreasing subsequence where the first letter of each word in the sequence is less than $M_2$. The result now follows taking $$N(d+1,{\bf b},k,\varepsilon) = N_2 \qquad {\rm and}\qquad M(d+1,{\bf b},k,\varepsilon) = M_2.$$ \end{proof} \begin{cor} \label{CoW2} Let ${\bf b}=(b_0,b_1,\ldots )$ be a sequence of natural numbers and let $d$ and $k$ be positive integers. Then there exists a natural number $N=N(d,{\bf b},k)$ such that if $u\in\mathbb{N}^+$ is a $k$-valid, ${\bf b}$-bounded word of length $n \ge N$, then $u$ contains a $d$-decreasing subword.\end{cor} \begin{proof} We take $N(d,{\bf b},k) = N(d,{\bf b},k,1)$ from Proposition \ref{CoW}. \end{proof} \section{Proofs of Theorems \ref{NilRing} and \ref{cor: J}} In this section we prove our main results. We begin with a simple lemma that will allow us to apply our combinatorial results from the preceding section. \begin{lem}\label{DerLem} Let $R$ be a ring, let $T=\{a_1,\ldots ,a_m\}$ be a finite subset of $R$, and let $\delta$ be a derivation of $R$. If $n$ and $k$ are natural numbers and $p_1,\ldots ,p_{n+1}$ are nonnegative integers that are at most $k$ then the product $$a_{i_0}x^{p_1}a_{i_1}x^{p_2}\cdots a_{i_n}x^{p_{n+1}}$$ in $R[x;\delta]$ can be written as a $\mathbb{Z}$-linear combination of elements of the form $$a_{i_0}\delta^{j_1}(a_{i_1})\delta^{j_2}(a_{i_2})\cdots \delta^{j_{n}}(a_{i_n})x^M,$$ where $M$ is a nonnegative integer and $j_1j_2\cdots j_n\in \mathbb{N}^+$ is $k$-valid. \end{lem} \begin{proof} Using the formula \begin{equation} x^d a = \sum_{j=0}^d {d\choose j} \delta^j(a) x^{d-j}, \end{equation} for $d\ge 0$ and $a\in R$, it is straightforward to see that $$a_{i_0}x^{p_1}a_{i_1}x^{p_2}\cdots a_{i_n}x^{p_{n+1}}$$ can be expressed as a $\mathbb{Z}$-linear combination of elements of the form $$a_{i_0}\delta^{j_1}(a_{i_1})\cdots \delta^{j_n}(a_{i_n}) x^{p_1+p_2+\cdots +p_n+p_{n+1}-j_1-\cdots -j_n}$$ where we have $j_i \le p_1+\cdots +p_i-j_1-\cdots -j_{i-1}$ for $i=1,\ldots ,n$. In particular, we have $$j_1+\cdots +j_i \le p_1+\cdots +p_i\le ki$$ for $i=1,\ldots ,n$. Summing over all $i$ then gives $$\sum_{i=1}^n (n+1-i) j_i \le \sum_{i=1}^n ki = k {n+1\choose 2}.$$ Thus the word $j_1j_2\cdots j_n\in \mathbb{N}^+$ is necessarily $k$-valid. The result follows. \end{proof} \begin{proof}[Proof of Theorem \ref{NilRing}] Let $S=\{p_1(x),\ldots ,p_m(x)\}$ be a finite subset of $R[x;\delta]$. We wish to show that there is a natural number $N=N(S)$ such that $S^{N+1}=0$; i.e., every product of $N+1$ elements of $S$ is equal to zero. There is a finite subset $T=\{a_1,\ldots ,a_t\}$ of $R$ and a natural number $k$ such that $S\subseteq T+Tx+\cdots +Tx^k$. Let $S_0=T\cup Tx\cup \cdots \cup Tx^k$. Then every element of $S^n$ can be expressed as a sum of elements of $S_0^n$ and hence it is sufficient to show that there is a natural number $N$ such that $S_0^{N+1}=0$. To show that $S_0^{N+1}=0$, it is enough to show that $$Tx^{p_1}Tx^{p_2}\cdots Tx^{p_{N+1}}=0$$ for every sequence $(p_1,\ldots ,p_{N+1})\in \{0,\ldots ,k\}^{N+1}$. For each $n\ge 0$, we let $T_n=T\cup \delta(T)\cup \cdots \cup \delta^n(T)\subseteq R$. Then since $R$ is locally nilpotent, there exists a natural number $b_n$ such that $T_n^{b_n}=0$.
We let ${\bf b}=(b_0,b_1,b_2,\ldots )$. By Lemma \ref{DerLem}, if $(p_1,\ldots ,p_{N+1})\in \{0,\ldots ,k\}^{N+1}$ then we have that $$Tx^{p_1}Tx^{p_2}\cdots Tx^{p_{N+1}}$$ can be written as a $\mathbb{Z}$-linear combination of elements of the form $$a_{i_0}\delta^{j_1}(a_{i_1})\cdots \delta^{j_{N}}(a_{i_{N}}) x^M$$ where $M$ is a nonnegative integer and $j_1j_2\cdots j_N$ is a word that is $k$-valid. Moreover, whenever $j_1j_2\cdots j_N$ is not ${\bf b}$-bounded we trivially have $$a_{i_0}\delta^{j_1}(a_{i_1})\cdots \delta^{j_{N}}(a_{i_{N}}) =0,$$ since it necessarily contains a factor from $T_n^{b_n}$ for some $n\ge 0$. In particular, it is sufficient to show that there is some natural number $N$ such that all elements of $R$ of the form $$a_{i_0}\delta^{j_1}(a_{i_1})\cdots \delta^{j_{N}}(a_{i_{N}}),$$ with $j_1j_2j_3\cdots j_N\in \mathbb{N}^+$ a $k$-valid and ${\bf b}$-bounded word, are zero. Let $d$ be the PI degree of $R$ and let $N=N(d,{\bf b},k)$ be as in the statement of Corollary \ref{CoW2}. We claim that whenever $j_1j_2j_3\cdots j_N$ is a $k$-valid and ${\bf b}$-bounded word we have $a_{i_0}\delta^{j_1}(a_{i_1})\cdots \delta^{j_{N}}(a_{i_{N}})=0$. To see this, suppose towards a contradiction that this is not the case and let $j_1\cdots j_N$ be the lexicographically smallest (i.e., the smallest word with respect to $\prec$) $k$-valid and ${\bf b}$-bounded word of length $N$ such that there exists $(i_0,\ldots ,i_N)\in \{1,\ldots ,t\}^{N+1}$ such that $a_{i_0}\delta^{j_1}(a_{i_1})\cdots \delta^{j_{N}}(a_{i_{N}})$ is nonzero. Given a subword $y=j_sj_{s+1}\cdots j_{s+r}$ of $j_1\cdots j_N$, we define $$f(y)=\delta^{j_s}(a_{i_s})\cdots \delta^{j_{s+r}}(a_{i_{s+r}})\in R.$$ By Corollary \ref{CoW2}, we can write $j_1\cdots j_N=uw_1w_2\cdots w_d v$ with $$w_1\succ w_2 \succ \cdots \succ w_d.$$ Furthermore, we have that $R$ satisfies a homogeneous multilinear polynomial identity of degree $d$ \cite[Proposition 13.1.9]{MR}: \[ X_1\cdots X_d = \sum_{\substack{\sigma\in S_d \\ \sigma\neq {\rm id}}} c_\sigma X_{\sigma(1)}\cdots X_{\sigma(d)} \] with $c_\sigma \in\mathbb{Z}$ for each $\sigma\in S_d\setminus \{{\rm id} \}$. Taking $X_i=f(w_i)$ for $i=1,\ldots, d$ we see that $$f(w_1)f(w_2)\cdots f(w_d) \ = \ \sum_{\substack{\sigma\in S_d \\ \sigma\neq{\rm id}}} c_\sigma f(w_{\sigma(1)})\cdots f(w_{\sigma(d)}).$$ Hence \begin{equation} \label{eq: sigma} a_{i_0}\delta^{j_1}(a_{i_1})\cdots \delta^{j_{N}}(a_{i_{N}}) = \sum_{\substack{\sigma\in S_d \\ \sigma\neq {\rm id}}} c_\sigma a_{i_0}f(u)f(w_{\sigma(1)})\cdots f(w_{\sigma(d)})f(v). \end{equation} By construction, for $\sigma\in S_d$ with $\sigma\neq {\rm id}$ we have $a_{i_0}f(u)f(w_{\sigma(1)})\cdots f(w_{\sigma(d)})f(v)$ is an element of the form $a_{i_0} \delta^{j_{\tau(1)}}(a_{i_{\tau(1)}})\cdots \delta^{j_{\tau(N)}}(a_{i_{\tau(N)}})$ with $\tau\in S_N$ such that $j_{\tau(1)}\cdots j_{\tau(N)}$ is lexicographically less than $j_1\cdots j_N$. We note that, by definition, permutations of $k$-valid words are again $k$-valid. Thus if we also have that $j_{\tau(1)}\cdots j_{\tau(N)}$ is ${\bf b}$-bounded then we must have $a_{i_0} \delta^{j_{\tau(1)}}(a_{i_{\tau(1)}})\cdots \delta^{j_{\tau(N)}}(a_{i_{\tau(N)}})=0$ by minimality of $j_1\cdots j_N$. On the other hand, if $j_{\tau(1)}\cdots j_{\tau(N)}$ is not ${\bf b}$-bounded then $a_{i_0} \delta^{j_{\tau(1)}}(a_{i_{\tau(1)}})\cdots \delta^{j_{\tau(N)}}(a_{i_{\tau(N)}})$ contains a factor that lies in $T_n^{b_n}$ for some $n\ge 0$ and hence it is zero.
Thus we have shown that in either case we have $a_{i_0} \delta^{j_{\tau(1)}}(a_{i_{\tau(1)}})\cdots \delta^{j_{\tau(N)}}(a_{i_{\tau(N)}})$ is zero for all applicable $\tau$, and so from Equation (\ref{eq: sigma}) we see $$a_{i_0}\delta^{j_1}(a_{i_1})\cdots \delta^{j_{N}}(a_{i_{N}})=0,$$ a contradiction. It follows that $S^{N+1}=0$. \end{proof} We now deduce Theorem \ref{cor: J} from Theorem \ref{NilRing}. \begin{proof}[Proof of Theorem \ref{cor: J}] Since $R$ is PI, the sum of all nil right ideals is a nil two-sided locally nilpotent ideal $N$, which is called the nil radical of $R$ (see Rowen \cite[Proposition 1.6.9 and Corollary 1.6.18]{Ro} and Kaplansky \cite{K}). We claim that $\delta(N)\subseteq N$. To see this, suppose to the contrary that $\delta(N)\not\subseteq N$. Then there is some $a\in N$ such that $\delta(a)\not\in N$. Then either $\delta(a)$ is not nilpotent or there is some $r\in R$ such that $\delta(a)r$ is not nilpotent. In the latter case, we have $\delta(ar)=\delta(a)r+a\delta(r)\equiv \delta(a)r~(\bmod ~N)$ and so if $\delta(a)r$ is not nilpotent then neither is $\delta(ar)$. Hence in either case we see we can find an element $b\in N$ such that $\delta(b)$ is not nilpotent. By assumption, there is some $n\ge 2$ such that $b^n=0$. It is straightforward to show that there exist nonnegative integers $c_{j_1,\ldots ,j_n}$ such that \begin{equation} \label{eq: b} 0=\delta^n(b^n) = \sum_{j_1+\cdots +j_n=n} c_{j_1,\ldots ,j_n} \delta^{j_1}(b)\cdots \delta^{j_n}(b). \end{equation} Moreover, we have that $c_{1,1,\ldots ,1}\ge 1$ and hence is nonzero in $R$ since $R$ is a unital algebra over a field of characteristic zero. Observe that if $j_1+\cdots +j_n=n$ and $(j_1,\ldots ,j_n)\neq (1,1,\ldots ,1)$ then $j_i=0$ for some $i$ and so $\delta^{j_1}(b)\cdots \delta^{j_n}(b)\in RbR\subseteq N$. Hence Equation (\ref{eq: b}) gives $\delta(b)^n \in N$. Since $N$ is a nil ideal, it follows that $\delta(b)$ is nilpotent, a contradiction. Thus $\delta(N)\subseteq N$. Notice that $N$ is locally nilpotent and so the subring $N[x;\delta]$ of $R[x;\delta]$ is a locally nilpotent ideal of $R[x;\delta]$ by Theorem \ref{NilRing}. It follows that $N[x;\delta]\subseteq J(R[x;\delta])$. To show that $J(R[x;\delta])$ is equal to $N[x;\delta]$, it suffices to show that $(R/N)[x;\delta]$ has zero Jacobson radical. A result of Tsai, Wu, and Chuang \cite{TWC} gives that if $S$ is a PI ring with zero nil radical then the Jacobson radical of $S[x;\delta]$ is zero. The result now follows. \end{proof} We note that the characteristic zero hypothesis is essential in Theorem \ref{cor: J}. For example, if $p>0$ is a prime number and we let $R=\mathbb{F}_p[T]/(T^p)$ and let $t$ denote the image of $T$ in $R$, then $R$ has a unique derivation satisfying $\delta(t)=1$. It is clear that $R$ is commutative (and hence PI) and that the nil radical of $R$ is not closed under application of $\delta$ (since $t$ is in the nil radical and $\delta(t)=1$). In fact, the only proper ideal of $R$ closed under application of $\delta$ is easily seen to be $(0)$. In this case, the result of Ferrero, Kishimoto, and Motose \cite{FKM} gives that $S:=J(R[x;\delta])\cap R$ is a nil ideal of $R$ that is closed under $\delta$ and thus we see that $S$ is necessarily zero. They also show that $J(R[x;\delta])=S[x;\delta]$ and so the Jacobson radical is zero in this case. \end{document}
\begin{document} \title[Unrestricted virtual braids and crystallographic braid groups]{Unrestricted virtual braids and crystallographic braid groups} \author[Paolo Bellingeri]{Paolo Bellingeri} \address{Normandie Univ, UNICAEN, CNRS, LMNO, 14000 Caen, France} \email{[email protected]} \author[John Guaschi]{John Guaschi} \address{Normandie Univ, UNICAEN, CNRS, LMNO, 14000 Caen, France} \email{[email protected]} \author[Stavroula Makri]{Stavroula Makri} \address{Normandie Univ, UNICAEN, CNRS, LMNO, 14000 Caen, France} \email{[email protected]} \date{\today} \subjclass{Primary 20F36; Secondary 20H15} \keywords{Braid groups, virtual and welded braid groups, unrestricted virtual braid groups} \begin{abstract} We show that the crystallographic braid group $B_n/[P_n,P_n]$ embeds naturally in the group of unrestricted virtual braids $UVB_n$, we give new proofs of known results about the torsion elements of $B_n/[P_n,P_n]$, and we characterise the torsion elements of $UVB_n$. \end{abstract} \maketitle \section{Introduction} Let $n\geq 1$. The group of unrestricted virtual braids, denoted throughout this paper by $UVB_n$, was introduced by Kauffman and Lambropoulou in~\cite{KaL} as the analogue of fused links in the setting of braids. The classification of fused links is now well known. Such links are distinguished by their \emph{virtual linking number}, see for instance~\cite{N}, where they are considered as the closure of unrestricted virtual braids, as well as~\cite{ABMW} for their classification in terms of Gauss diagrams. Unrestricted virtual braid groups occur as natural quotients of virtual and welded braid groups. They appear for instance in~\cite{KMRW} in the study of local representations of welded braid groups, where they are called \emph{symmetric loop braid groups}, and they may be decomposed as a semi-direct product of a right-angled Artin group, which is in fact the pure subgroup $UVP_n$ of $UVB_n$, by the symmetric group $S_n$~\cite{BBD}. The main aim of this paper is to characterise the torsion elements of $UVB_n$ using this decomposition, namely to show that any element of finite order is a conjugate of an element of $S_n$ by an element of $UVP_n$. The structure of this paper is as follows. In Section~\ref{sec:uvb}, we give presentations of the virtual braid groups $VB_n$, of the welded braid groups $WB_n$, and of $UVB_n$. In this way, $UVB_n$ may be viewed as a quotient of both $VB_n$ and $WB_n$. We also recall two important results of~\cite{BBD} that describe the structure of the pure unrestricted virtual braid group $UVP_n$ as a direct sum of copies of the free group $F_2$ on two generators, which, as mentioned above, allows us to decompose $UVB_n$ as a semi-direct product of $UVP_n$ and $S_n$ in a natural way. A similar decomposition holds for $VB_n$ and $WB_n$, but the canonical homomorphism $\eta \colon\thinspace B_n \ensuremath{\longrightarrow} UVB_n$, where $B_n$ is the Artin braid group, is not injective, which is in contrast with the nature of the corresponding homomorphisms for $VB_n$ and $WB_n$. In Section~\ref{sec:cryst}, we study the image of $\eta$, and in Proposition~\ref{prop:cris}, we prove that it is isomorphic to the quotient $B_n/[P_n,P_n]$, where $P_n$ is the pure Artin braid group, and $[P_n,P_n]$ is its commutator subgroup. This quotient has been the subject of recent study, one of the reasons being that it is a crystallographic group~\cite{GGO17,GGO19,GGO20}.
The results of~\cite{GGO17} have been generalised to quasi-Abelianised quotients of complex reflection groups~\cite{BM,M16}, and to surface braid groups~\cite{GGOP}. This enables us to give an alternative proof in Proposition~\ref{prop:cris3} and Remark~\ref{rem:uvbnhn} of the fact that $B_n/[P_n,P_n]$ embeds in the semi-direct product $P_n/[P_n,P_n] \rtimes S_n$. In the final section of the paper, Section~\ref{sec:torsionuvbn}, we study the torsion elements of $UVB_n$. We apply the embedding of Proposition~\ref{prop:cris} to give a new combinatorial proof in Theorem~\ref{thm:ord2} of the fact that there are no elements of even order in $B_n/[P_n,P_n]$, and in Theorem~\ref{thm:thm_of_torsion}, we characterise the torsion elements of $UVB_n$ by showing that every such element is conjugate to an element of $S_n$ by an element of $UVP_n$. To our knowledge, it is not known whether an analogous result holds for $VB_n$ and $WB_n$. \section{Unrestricted virtual braids}\label{sec:uvb} In order to define unrestricted virtual braid groups, in this section we first recall the notions of virtual and welded braid groups by exhibiting their usual group presentations. \begin{defin}\label{def:vbn} The \emph{virtual braid group} $VB_n$ is the group defined by the group presentation: \noindent Generators: $\{\sig{1}, \ldots, \sig{n-1}, \rr{1}, \ldots, \rr{n-1}\}$ \noindent Relations: \begin{align*} \sig{\ii} \sig{i+1} \sig{\ii} &= \sig{i+1}\sig{\ii} \sig{i+1} & & \text{for $\ii=1,\dots,\nn-2$} \tag{BR1}\label{eq:br1}\\ \sig{\ii} \sig{\jj} &= \sig{\jj} \sig{\ii} & & \text{for $\lvert \ii - \jj \rvert \geq 2$} \tag{BR2}\label{eq:br2}\\ \rr\ii \rr{i+1} \rr\ii &= \rr{i+1} \rr\ii\rr{i+1} & & \text{for $\ii=1,\dots,\nn-2$} \tag{SR1}\label{eq:sr1}\\ \rr\ii \rr\jj &= \rr\jj \rr\ii & & \text{for $\lvert \ii - \jj \rvert \geq 2$}\tag{SR2}\\ \rrq\ii2 &= 1 & & \text{for $\ii=1,\dots,\nno$} \tag{SR3}\label{eq:sr3}\\ \sig{i} \rr{j} &= \rr{j} \sig{i} & & \text{for $\lvert \ii - \jj \rvert \geq 2$}\tag{MR1}\label{eq:mr1}\\ \rr{i} \rr{i+1} \sig{i} &= \sig{i+1} \rr{i} \rr{i+1} & & \text{for $\ii=1,\dots,\nn-2$.} \tag{MR2}\label{eq:mr2} \end{align*} \end{defin} A diagrammatic description of generators and relations of $VB_n$ may be found for instance in~\cite{Bar-0,BP,Kam}. For a topological interpretation, we refer the reader to~\cite{Cis}, and for an algebraic one (in terms of actions on root systems) to~\cite{BPT}. Note that the relations~(\ref{eq:br1})--(\ref{eq:br2}) (resp.\ (\ref{eq:sr1})--(\ref{eq:sr3})) correspond to the usual relations of the Artin braid group $B_{n}$ (resp.\ the symmetric group $S_{n}$) for the set $\{\sig{1}, \ldots, \sig{n-1} \}$ (resp.\ for the set $\{\rr{1}, \ldots, \rr{n-1}\}$), and the remaining relations~(\ref{eq:mr1})--(\ref{eq:mr2}) are `mixed' in the sense that they involve generators of both of these sets. Recall that the pure braid group $P_n$ is the kernel of the homomorphism $\pi\colon\thinspace B_n\longrightarrow S_n$ defined on the generators $\sig{1}, \ldots, \sig{n-1}$ of $B_{n}$ by $\pi(\sigma_i)=s_i$ for all $i=1,\ldots, n-1$, where $s_i$ is the transposition $(i,i+1)$. Analogously, we define the \emph{virtual pure braid group}, denoted by $VP_\nn$, to be the kernel of the homomorphism $\pi_{VP}\colon\thinspace VB_\nn \longrightarrow S_\nn$ that for all $\ii = 1, 2, \dots, n-1$, maps the generators $\sig{\ii}$ and $\rr\ii$ to $s_\ii$. A group presentation for $VP_\nn$ is given in~\cite{Bar-0}.
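We note in passing that, since the generators $\rho_i$ are involutions by relation~(\ref{eq:sr3}), relation~(\ref{eq:mr2}) may equivalently be written as a conjugation relation:
\begin{equation*}
\sigma_{i+1}=\rho_i\rho_{i+1}\,\sigma_i\,\rho_{i+1}\rho_i \qquad \text{for $i=1,\dots,n-2$;}
\end{equation*}
a similar use of conjugation by products of the generators $\rho_k$ appears in the definition of the elements $\lambda_{i,j}$ in~(\ref{eq:lambda}) below.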
Let $\iota\colon\thinspace S_n \longrightarrow VB_n$ be the homomorphism defined by $\iota(s_i) = \rr{i}$ for $i=1, \ldots, n-1$. Since $\iota$ is a section for $\pi_{VP}$, it follows that $\iota$ is injective and that $VB_n$ is a semi-direct product of $VP_n$ by $S_n$. The canonical homomorphism $\eta\colon\thinspace B_n \longrightarrow VB_n$ defined by $\eta(\sigma_i) = \sigma_i$ for $i=1, \ldots, n-1$ is injective~\cite{BP,G,Kam}. The welded braid group $WB_n$ may be defined as a quotient of $VB_n$ by adding the following family of relations to the presentation given in Definition~\ref{def:vbn}: \begin{equation} \rr\ii \sig{\ii+1} \sig\ii = \sig{\ii+1} \sig\ii \rr{\ii+1},\ \text{for $\ii = 1, \dots, \nn-2$,} \tag{\text{OC}}\label{eq:oc} \end{equation} where OC stands for `Over Commuting'. Welded braid groups have several different equivalent definitions~\cite{D}. In particular, they may be defined as \emph{basis conjugating automorphisms} of free groups \cite{FRR}, from which it follows that the homomorphism $\eta\colon\thinspace B_n \longrightarrow WB_n$ defined by $\eta(\sigma_i) = \sigma_i$ for $i=1, \ldots, n-1$ is injective~\cite{Kam}. As in the case of virtual braid groups, the \emph{welded pure braid group}, denoted by $WP_\nn$, is defined to be the kernel of the homomorphism $\pi_{WP}\colon\thinspace WB_\nn \longrightarrow S_\nn$ given by sending the generators $\sig{\ii}$ and $\rr\ii$ of $WB_\nn$ to $s_\ii$ for all $\ii = 1, 2, \dots, n-1$. A group presentation for $WP_\nn$ may be found in~\cite{BH,D,Su}. Abusing notation, we define $\iota\colon\thinspace S_n \longrightarrow WB_n$ to be also the homomorphism defined by $\iota(s_i) = \rr{i}$ for $i=1, \ldots, n-1$. As in the virtual case, $\iota$ is a section for $\pi_{WP}$, and therefore $\iota$ is injective and $WB_n$ is a semi-direct product of $WP_n$ by $S_n$. Note that the symmetrical relations: \begin{equation} \rr{\ii+1} \sig{\ii} \sig{\ii+1} = \sig\ii \sig{\ii+1} \rr{\ii},\ \text{for $\ii = 1, \dots, \nn-2$}, \tag{\text{UC}}\label{eq:uc} \end{equation} where UC stands for `Under Commuting', do not hold in~$\WB\nn$ (see for instance~\cite{BS,Kam}). Consequently, the following group, which was first defined by Kauffman and Lambropoulou in~\cite{KaL}, is a proper quotient of $WB_n$. \begin{defin}\label{D:Unrestricted} The \emph{unrestricted virtual braid group} $UVB_n$ is the group defined by the following group presentation: \noindent Generators: $\{\sig{1}, \ldots, \sig{n-1}, \rr{1}, \ldots, \rr{n-1}\}$ \noindent Relations: the seven relations~(\ref{eq:br1})--(\ref{eq:br2}), (\ref{eq:sr1})--(\ref{eq:sr3}) and~(\ref{eq:mr1})--(\ref{eq:mr2}) of Definition~\ref{def:vbn}, plus the relations of types~(\ref{eq:oc}) and~(\ref{eq:uc}). \end{defin} As in the case of $VB_n$, let $\pi_{UVP} \colon\thinspace UVB_n\longrightarrow S_n$ be the homomorphism defined by $\pi_{UVP}(\sigma_i)=\pi_{UVP}(\rho_i)=s_i$ for $i=1,\ldots,n-1$. The kernel of $\pi_{UVP}$ is the \emph{unrestricted virtual pure braid group}, denoted by $UVP_n$. 
For $1\leq i,j\leq n$, $i\neq j$, we define the elements $\lambda_{i,j}$ of $UVP_n$ as follows: \begin{equation}\label{eq:lambda} \begin{cases} \lambda_{i,i+1}=\rho_i\sigma_i^{-1} & \text{for $i=1,\ldots,n-1$}\\ \lambda_{i+1,i}=\sigma_i^{-1}\rho_i & \text{for $i=1,\ldots,n-1$}\\ \lambda_{i,j}=\rho_{j-1}\rho_{j-2}\cdots\rho_{i+1}\lambda_{i,i+1} \rho_{i+1}\cdots\rho_{j-2} \rho_{j-1}& \text{for $1\leq i<j-1\leq n-1$}\\ \lambda_{j,i}=\rho_{j-1}\rho_{j-2}\cdots\rho_{i+1}\lambda_{i+1,i}\rho_{i+1}\cdots\rho_{j-2}\rho_{j-1}& \text{for $1\leq i<j-1\leq n-1$.} \end{cases} \end{equation} \begin{thm}[Bardakov--Bellingeri--Damiani~\cite{BBD}]\label{thm:tss} The group $UVP_n$ admits the following presentation:\\ Generators: $\lambda_{i,j}$, where $1\leq i,j\leq n$, $i\neq j$.\\ Relations: The generators commute pairwise except for the pairs $\lambda_{i,j}$ and $\lambda_{j,i}$ for all $1\leq i,j\leq n$, $i\neq j$. \end{thm} It follows from Theorem~\ref{thm:tss} that the group $UVP_n$ is a right-angled Artin group that is isomorphic to the direct product of the ${n(n-1)/2}$ free groups $F_{i,j}$ of rank $2$, where $1\leq i<j\leq n$, and where $\{ \lambda_{i,j}, \lambda_{j,i} \}$ is a basis of $F_{i,j}$. By convention, if $1\leq i,j\leq n$ then we set $F_{j,i}=F_{i,j}$. As for $VB_n$, let $\iota\colon\thinspace S_n \longrightarrow UVB_n$ be the homomorphism defined by $\iota(s_i) = \rr{i}$ for $i=1, \ldots, n-1$. Then $\iota$ is injective, and is a section for $\pi_{UVP}$. This leads to the following natural decomposition of $UVB_n$. \begin{thm}[Bardakov--Bellingeri--Damiani~\cite{BBD}]\label{thm:tss2} The group $UVB_n$ is isomorphic to the semi-direct product $UVP_n\rtimes S_n$, where $S_n$ acts on $UVP_n$ by permuting the indices of the elements of the generating set of $UVP_n$ given in Theorem~\ref{thm:tss}. More precisely, for all $\lambda_{i,j} \in UVP_n$, where $1\leq i, j \leq n$, $i\neq j$, and for all $s\in S_n$ we have: \begin{equation}\label{eq:conjuvpn} \iota(s)\,\lambda_{i,j}\,\iota(s)^{-1}=\lambda_{s(i),s(j)}. \end{equation} \end{thm} \section{Crystallographic groups}\label{sec:cryst} As we mentioned in Section~\ref{sec:uvb}, the Artin braid group $B_n$ embeds naturally in $VB_n$ and $WB_n$ via the injective homomorphism $\eta$ defined in each of these two cases. However this is no longer the case for the canonical homomorphism $\eta\colon\thinspace B_n \longrightarrow UVB_n$ defined by $\eta(\sigma_i) = \sigma_i$ for $i=1, \ldots, n-1$. In this section, we show that the image of $B_n$ under the homomorphism $\eta$ is isomorphic to the quotient of $B_n$ by the commutator subgroup $[P_n, P_n]$. This quotient was introduced by Tits in~\cite{Ti} as \emph{groupes de Coxeter \'etendus}, and has been studied more recently by various authors (see for example~\cite{BM,Di,GGO17,GGO19,GGO20,M16}). \begin{prop} \label{prop:cris} Let $\eta\colon\thinspace B_n\longrightarrow UVB_n$ be the canonical homomorphism defined by $\eta(\sigma_i)=\sigma_i$ for $1\leq i\leq n-1$. Then $\eta(B_n)$ is isomorphic to the group $B_n/[P_n, P_n]$. \end{prop} \begin{proof} Consider the homomorphism $\eta\colon\thinspace B_n\longrightarrow UVB_n$. 
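As a simple check, note that $\pi_{UVP}(\lambda_{i,i+1})=\pi_{UVP}(\rho_i\sigma_i^{-1})=s_i s_i^{-1}=1$, and similarly $\pi_{UVP}(\lambda_{i+1,i})=1$, so that these elements, and hence their conjugates $\lambda_{i,j}$ of~(\ref{eq:lambda}), do indeed lie in the normal subgroup $UVP_n$. Moreover, the simplest instance of~(\ref{eq:conjuvpn}) may be verified directly from~(\ref{eq:lambda}) and relation~(\ref{eq:sr3}):
\begin{equation*}
\iota(s_i)\,\lambda_{i,i+1}\,\iota(s_i)^{-1}=\rho_i(\rho_i\sigma_i^{-1})\rho_i^{-1}=\sigma_i^{-1}\rho_i=\lambda_{i+1,i}=\lambda_{s_i(i),s_i(i+1)}.
\end{equation*}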
Since $P_n=\ensuremath{\operatorname{\text{Ker}}}(\pi)$ (resp.\ $UVP_n=\ensuremath{\operatorname{\text{Ker}}}(\pi_{UVP})$), where $\pi\colon\thinspace B_n\longrightarrow S_n$ (resp.\ $\pi_{UVP}\colon\thinspace UVB_n \longrightarrow S_n$) is defined by $\pi(\sigma_i)=s_i$ (resp.\ $\pi_{UVP}(\sigma_i)= \pi_{UVP}(\rho_i)=s_i$) for all $1\leq i\leq n-1$, it follows from the definition of $\eta\colon\thinspace B_n\longrightarrow UVB_n$ that $\pi=\pi_{UVP} \circ \eta$, and thus $\pi=\pi_{UVP}\left\lvert_{\eta(B_{n})}\right. \circ \eta$. Since $\pi$ is surjective, the homomorphism $\pi_{UVP}\left\lvert_{\eta(B_{n})}\right.\colon\thinspace \eta(B_{n})\longrightarrow S_{n}$ is too. From the equality $\pi=\pi_{UVP} \circ \eta$, we obtain the following commutative diagram of short exact sequences: \begin{equation}\label{eq:cd1} \begin{tikzcd} 1 \arrow[r] & P_{n} \arrow[r] \arrow[d, "\eta\left\lvert_{P_n}\right."] & B_{n} \arrow[r, "\pi"] \arrow[d, "\eta"] & S_{n} \arrow[r] \ar[equal]{d} & 1\\ 1 \arrow[r] & UVP_{n} \arrow[r] & UVB_{n} \arrow[r, "\pi_{UVP}"] & S_{n} \arrow[r] & 1. \end{tikzcd} \end{equation} We claim that $\eta(P_{n})=\ensuremath{\operatorname{\text{Ker}}}(\pi_{UVP}\left\lvert_{\eta(B_{n})}\right.)$. To prove the claim, the exactness of~(\ref{eq:cd1}) implies that $\eta(P_{n}) \subset \ensuremath{\operatorname{\text{Ker}}}(\pi_{UVP}\left\lvert_{\eta(B_{n})}\right.)$. Conversely, let $x\in \ensuremath{\operatorname{\text{Ker}}}(\pi_{UVP}\left\lvert_{\eta(B_{n})}\right.)$. Then $x\in \eta(B_{n})$, and there exists $y\in B_{n}$ such that $\eta(y)=x$. The commutativity of~(\ref{eq:cd1}) implies that $y\in P_{n}$, and so $x\in \eta(P_{n})$, so $\ensuremath{\operatorname{\text{Ker}}}(\pi_{UVP}\left\lvert_{\eta(B_{n})}\right.) \subset \eta(P_{n})$, and the claim follows. Considering $\eta$ and its restriction to $P_{n}$, we obtain the following commutative diagram of short exact sequences: \begin{equation}\label{eq:cdeta} \begin{tikzcd} &&1 \arrow[d] & 1 \arrow[d]\\ 1 \arrow[r] & \ensuremath{\operatorname{\text{Ker}}}(\eta\left\lvert_{P_n}\right.) \arrow[d] \arrow[r] & P_n \arrow[d] \arrow[r, "\eta\left\lvert_{P_n}\right."] & \eta(P_n)\arrow[r] \arrow[d] \arrow[r] & 1 \\ 1 \arrow[r] & \ensuremath{\operatorname{\text{Ker}}}(\eta) \arrow[r] &B_n \arrow[d, "\pi"]\arrow[r, "\eta"] & \eta(B_n)\arrow[d, "\pi_{UVP}\left\lvert_{\eta(B_{n})}\right."]\ar[r] & 1.\\ &&S_n \arrow[d] \ar[equal]{r} & S_n \arrow[d]\\ &&1 & 1 \end{tikzcd} \end{equation} Note that the exactness of the rightmost column of~(\ref{eq:cdeta}) follows from the claim of the previous paragraph. By a standard diagram-chasing argument, we conclude from~(\ref{eq:cdeta}) that $\ensuremath{\operatorname{\text{Ker}}}(\eta)= \ensuremath{\operatorname{\text{Ker}}}(\eta\left\lvert_{ P_n}\right.)$. Applying the isomorphism $UVB_n \cong UVP_n\rtimes S_n$ of Theorem~\ref{thm:tss2} and using equation~(\ref{eq:lambda}), we see that $\eta(\sigma_i)=\lambda_{\ii, \ii+1}\inv \rho_{i}$. Recall that the pure braid group $P_n$ is generated by the set $\{a_{ij} \mid 1 \leq i < j \leq \nn\}$, where: \begin{gather*} a_{\ii,\ii+1} = \sigg\ii{2}, \\ a_{\ii, \jj} = \sig{\jj-1} \sig{\jj-2} \cdots \sig{\ii+1} \sigg\ii{2} \siginv{\ii+1} \cdots \siginv{\jj-2} \siginv{\jj-1} \mbox{,} \quad \mbox{for } \ii+1 < \jj \leq \nn. \end{gather*} In~\cite[Corollary 2.8]{BBD} it was shown that $\eta(a_{\ii, \jj})=\lambda_{\ii, \jj}^{-1} \lambda_{\jj, \ii}^{-1}$ for $1 \leq \ii+1 < \jj \leq \nn$. 
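The same formula holds for $j=i+1$, as one may check directly from~(\ref{eq:lambda}), (\ref{eq:conjuvpn}) and relation~(\ref{eq:sr3}):
\begin{equation*}
\eta(a_{i,i+1})=\eta(\sigma_i)^{2}=\lambda_{i,i+1}^{-1}\rho_i\,\lambda_{i,i+1}^{-1}\rho_i=\lambda_{i,i+1}^{-1}\bigl(\rho_i\lambda_{i,i+1}^{-1}\rho_i^{-1}\bigr)\rho_i^{2}=\lambda_{i,i+1}^{-1}\lambda_{i+1,i}^{-1}.
\end{equation*}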
From the construction, the elements $\eta(a_{i,j})$ commute pairwise, so $\eta(P_n)$ is Abelian, and we deduce that $[P_n, P_n]$ is contained in $\ensuremath{\operatorname{\text{Ker}}}(\eta\left\lvert_{ P_n}\right.)$. Further, $\eta(P_n)$ is isomorphic to $P_n/[P_n,P_n]=\Z^{n(n-1)/2}$ (see~\cite[Corollary 2.8]{BBD}). It follows that $[P_n, P_n]$ coincides with $\ensuremath{\operatorname{\text{Ker}}}(\eta\left\lvert_{ P_n}\right.)$. The statement follows by combining this with the fact that $\ensuremath{\operatorname{\text{Ker}}}(\eta)= \ensuremath{\operatorname{\text{Ker}}}(\eta\left\lvert_{ P_n}\right.)$ and using the exactness of~(\ref{eq:cdeta}). \end{proof} \begin{rem} We can rephrase Proposition~\ref{prop:cris} by saying that the canonical homomorphism $\eta\colon\thinspace B_n \ensuremath{\longrightarrow} UVB_n$ factors through an embedding of $B_n/[P_n, P_n]$ into $UVB_n$. On the other hand, by~\cite[Theorem~2.5]{CO}, the canonical homomorphism from $B_n$ to $VB_n$ induces an embedding of $B_n/[P_n, P_n]$ in $VB_n/[VP_n,VP_n]$. Since $VB_n/[VP_n,VP_n]$ is isomorphic to $UVB_n/[UVP_n,UVP_n]$~\cite[Theorem~4.1]{CO}, the canonical homomorphism from $B_n$ to $UVB_n$ gives rise to an embedding of the quotient $B_n/[P_n, P_n]$ in $UVB_n/[UVP_n,UVP_n]$. We therefore conclude that the composition of the embedding of $B_n/[P_n, P_n]$ in $UVB_n$ defined in the proof of Proposition~\ref{prop:cris} and the canonical projection $q_n\colon\thinspace UVB_n \ensuremath{\longrightarrow} UVB_n/[UVP_n,UVP_n]$ is also injective. \end{rem} We recall that an abstract group $\Gamma$ is said to be \emph{crystallographic} if it can be realised as an extension of a free Abelian subgroup of $\Gamma$ of maximal rank by a finite group~\cite[Theorem 2.1.4]{Dek}. Using the following result, in~\cite{GGO17}, Gon\c{c}alves--Guaschi--Ocampo proved that the group $B_n/[P_n, P_n]$ is a crystallographic group. \begin{prop}[Gon\c{c}alves--Guaschi--Ocampo~\cite{GGO17}]\label{prop:ggo} Let $n\geq 2$. Then the following sequence: \begin{equation}\label{eq:cryst} \begin{tikzcd} 1 \arrow[r] & \mathbb{Z}^{n(n-1)/2} \arrow[r] &B_n/[P_n, P_n] \arrow[r, "\widehat{\pi}"] & S_n \ar[r] & 1, \end{tikzcd} \end{equation} is short exact, where $\widehat{\pi}$ is the homomorphism induced by $\pi\colon\thinspace B_n\longrightarrow S_n$. \end{prop} Note that the sequence~(\ref{eq:cryst}) does not split~\cite{Di,GGO17}. It follows from Propositions~\ref{prop:cris} and~\ref{prop:ggo} that the group $\eta(B_n)$ is a crystallographic subgroup of $UVB_n$. We remark also that $\operatorname{\text{Im}}(\eta)$ is not normal in $UVB_n$, and that the normal closure of $\operatorname{\text{Im}}(\eta)$ is of index $2$ in $UVB_n$. This follows from the fact that in the quotient of $UVB_n$ by the normal closure of $\operatorname{\text{Im}}(\eta)$, the generators $\rho_i$ are identified to a single element due to relations~(\ref{eq:oc}) and~(\ref{eq:uc}) of Definition~\ref{D:Unrestricted}. The group $UVB_n$ contains other crystallographic groups of rank $n(n-1)/2$ that are not isomorphic to $\eta(B_n)$. One such example is given in the following proposition, which is a straightforward consequence of Theorems~\ref{thm:tss} and~\ref{thm:tss2}. \begin{prop} \label{prop:cris2} Let $C_n$ be the subgroup of $UVB_n$ generated by $\{ \lambda_{i,j} \lambda_{j,i}^{-1} \}$ for $1\leq i<j\leq n$ and by $\rho_i$ for $i=1, \ldots, n-1$, and let $H_n$ be the subgroup of $C_n$ generated by $\{ \lambda_{i,j} \lambda_{j,i}^{-1} \}$ for $1\leq i<j\leq n$.
The restriction of $\pi_{UVP}$ to $C_n$ defines the following split exact sequence: \begin{equation*} \begin{tikzcd} 1 \arrow[r] & H_n \arrow[r] &C_n \arrow[r, "\pi_{UVP}{\big|}_{C_n}"] & S_n \ar[r] & 1, \end{tikzcd} \end{equation*} In particular $C_n$ is crystallographic. \end{prop} \begin{prop} \label{prop:cris3} Let $H_n$ be the subgroup of $UVB_n$ defined in the statement of Proposition~\ref{prop:cris2}, let $\langle \! \langle H_n \rangle \! \rangle$ denote its normal closure in $UVB_n$, and let $\{x_{i,j}\}_{1\le i<j\le n}$ be a set of generators of $\mathbb{Z}^{n(n-1)/2}$. \begin{enumerate}[(a)] \item\label{it:cris3a} The quotient $UVB_n/\langle \! \langle H_n \rangle \! \rangle$ is isomorphic to the semi-direct product $\mathbb{Z}^{n(n-1)/2} \rtimes S_n$, where for any $s \in S_n$, $s \cdot x_{i,j}= x_{s(i),s(j)}$, and where we take $x_{i,j}=x_{j,i}$. \item\label{it:cris3b} Let $\theta\colon\thinspace B_n \longrightarrow UVB_n/\langle \! \langle H_n \rangle \! \rangle$ be the composition of $\eta\colon\thinspace B_n \longrightarrow UVB_n$ and the projection $\pi_H\colon\thinspace UVB_n \longrightarrow UVB_n/\langle \! \langle H_n \rangle \! \rangle$. The group $\theta(B_n)$ is isomorphic to $B_n/[P_n, P_n]$. \end{enumerate} \end{prop} \begin{proof} Part~(\ref{it:cris3a}) follows from Theorems~\ref{thm:tss} and~\ref{thm:tss2}, which imply that $UVB_n/\langle \! \langle H_n \rangle \! \rangle = UVP_n/\langle \! \langle H_n \rangle \! \rangle \rtimes S_n$. The identification $\lambda_{i,j} = \lambda_{j,i}$ for $i \not= j$ implies that $UVP_n/\langle \! \langle H_n \rangle \! \rangle=\mathbb{Z}^{n(n-1)/2}$, and it also defines the induced action of $S_n$ on this quotient. The proof of Proposition~\ref{prop:cris} can be adapted to prove part~(\ref{it:cris3b}) (the image of $P_n$ by $\theta$ is a free Abelian group of rank $n(n-1)/2$, and is thus isomorphic to $P_n/[P_n,P_n]$). \end{proof} \begin{rem}\label{rem:uvbnhn} The quotient $UVB_n/\langle \!\langle H_n \rangle \!\rangle$ coincides with the semi-direct product considered in~\cite[Section 2.7]{Ti} and in the Remark following~\cite[Proposition 5.12]{M09}. Thus Proposition~\ref{prop:cris3} yields a new proof of the fact that the crystallographic braid group $B_n/[P_n, P_n]$ embeds in the semi-direct product $\mathbb{Z}^{n(n-1)/2} \rtimes S_n$. \end{rem} \section{Torsion elements of $UVB_n$}\label{sec:torsionuvbn} Let $n\geq 3$. In~\cite{GGO17}, it was shown that $B_n/[P_n, P_n]$ has torsion, but that it possesses no elements of even order. Viewing $B_n/[P_n, P_n]$ as the subgroup $\operatorname{\text{Im}}(\eta)$ of $UVB_n$ via Proposition~\ref{prop:cris}, we may give an alternative proof of this latter fact. We do this by first characterising the elements of $UVB_n$ of order $2$, and then showing that these elements do not belong to $\operatorname{\text{Im}}(\eta)$. \begin{thm}\label{thm:ord2} Let $n\geq 3$. \begin{enumerate}[(a)] \item\label{it:ord2a} An element $v\in UVB_{n}$ is of order $2$ if and only if there exist $\rho\in \operatorname{\text{Im}}(\iota)$ of order $2$ and $g\in UVP_{n}$ such that $v=g\rho g^{-1}$. \item\label{it:ord2b} The elements of order $2$ of $UVB_n$ do not belong to $\operatorname{\text{Im}}(\eta)$. In particular, $B_n/[P_n, P_n]$ has no elements of even order. \end{enumerate} \end{thm} \begin{proof} Let $n\geq 3$. \begin{enumerate}[(a)] \item First observe that the given condition is clearly sufficient. Conversely, suppose that $v\in UVB_{n}$ is of order $2$.
Identifying $UVB_n$ with the internal semi-direct product $UVP_n\rtimes \operatorname{\text{Im}}(\iota)$ using Theorem~\ref{thm:tss2}, there exist unique elements $w\in UVP_{n}$ and $\rho \in \operatorname{\text{Im}}(\iota)$ such that $v=w\rho$, and the pair $(w,\rho)$ is non trivial. Further, the fact that $UVP_{n}$ is torsion free implies that $\rho \neq 1$. Since $v$ is of order $2$, we have $1=v^{2}=(w\rho)^{2}=w\ldotp \rho w \rho\inv \ldotp \rho^{2}$, where $w\ldotp \rho w \rho\inv\in UVP_n$ and $\rho^{2}\in \operatorname{\text{Im}}(\iota)$, from which we deduce using Theorem~\ref{thm:tss2} that $\rho$ is also of order $2$. Hence $\pi_{UVP}(\rho) \in S_{n}$ is also of order $2$, and thus may be written as a non-trivial product of disjoint transpositions. So there exists $\tau\in \operatorname{\text{Im}}(\iota)$ such that $\pi_{UVP}(\tau\rho\tau\inv)=\sigma$, where $\sigma=(1,2)(3,4)\cdots (m,m+1)$, and $1\leq m\leq n-1$ is odd. Setting $\widetilde{v}=\tau v \tau\inv$, $\widetilde{w}=\tau w \tau\inv$ and $\widetilde{\rho}=\tau \rho \tau\inv$, it follows that $\widetilde{v}= \widetilde{w} \widetilde{\rho}$, where $\widetilde{v}\in UVB_n$ is of order $2$, $\widetilde{w}\in UVP_{n}$, $\widetilde{\rho} \in \operatorname{\text{Im}}(\iota)$ is of order $2$, $\pi_{UVP}(\widetilde{\rho})= \sigma$, and $\widetilde{w}\ldotp \widetilde{\rho}\widetilde{w}\widetilde{\rho}\inv=1$. For $1\leq i<j\leq n$, let $\pi_{i,j}\colon\thinspace UVP_{n} \longrightarrow F_{i,j}$ denote the projection of $UVP_{n}$ onto $F_{i,j}$ (see Theorem~\ref{thm:tss} and the comments that follow it). Applying~(\ref{eq:conjuvpn}), we observe that conjugation by an element of $UVB_n$ permutes the $\{ F_{i,j}\}_{1\leq i<j\leq n}$. Let $T$ denote the subset of $n(n-1)/2$ transpositions of $S_{n}$, and let $T_{\sigma}=\{ (i,i+1)\in T \,\mid\, i\in \{ 1,3,\ldots,m\}\}$. Then the subgroup $\langle\sigma\rangle$ of $S_{n}$ of order $2$ acts on $T$ by conjugation, and if $1\leq i<j\leq n$, the orbit $\mathcal{O}(i,j)$ of $(i,j)$ is equal to $\{(i,j),(\sigma(i),\sigma(j))\}$, which is equal to $\{ (i,j)\}$ if and only if either $(i,j)\in T_{\sigma}$ or if $m+2\leq i<j\leq n$, and contains two elements otherwise. Further, for all $1\leq i<j\leq n$, $\sigma(i)<\sigma(j)$ if and only if $(i,j)\notin T_{\sigma}$. Let $\mathcal{T}$ be a transversal for this action of $\langle\sigma\rangle$ on $T$. Using Theorem~\ref{thm:tss}, we obtain: \begin{equation}\label{eq:tildew} \widetilde{w}=\prod_{(i,j)\in \mathcal{T}}\prod_{(k,l)\in \mathcal{O}(i,j)} w_{k,l}(\lambda_{k,l}, \lambda_{l,k}), \end{equation} where $w_{k,l}=w_{k,l}(\lambda_{k,l}, \lambda_{l,k})\in F_{k,l}$, and this decomposition is unique up to permutation of the factors. 
So: \begin{align} 1&= \widetilde{w}\ldotp \widetilde{\rho} \widetilde{w} \widetilde{\rho}\,\inv=\left( \prod_{(i,j)\in \mathcal{T}}\prod_{(k,l)\in \mathcal{O}(i,j)} w_{k,l}(\lambda_{k,l}, \lambda_{l,k})\right) \widetilde{\rho} \left( \prod_{(i,j)\in \mathcal{T}}\prod_{(k,l)\in \mathcal{O}(i,j)} w_{k,l}(\lambda_{k,l}, \lambda_{l,k})\right)\widetilde{\rho}\,\inv\notag\\ &= \left( \prod_{(i,j)\in \mathcal{T}}\prod_{(k,l)\in \mathcal{O}(i,j)} w_{k,l}(\lambda_{k,l}, \lambda_{l,k})\right) \left( \prod_{(i,j)\in \mathcal{T}}\prod_{(k,l)\in \mathcal{O}(i,j)} w_{k,l}(\lambda_{\sigma(k),\sigma(l)}, \lambda_{\sigma(l), \sigma(k)})\right)\notag\\ &= \prod_{(i,j)\in \mathcal{T}}\prod_{(k,l)\in \mathcal{O}(i,j)} w_{k,l}(\lambda_{k,l}, \lambda_{l,k}) \ldotp w_{\sigma(k),\sigma(l)}(\lambda_{k,l}, \lambda_{l,k}),\label{eq:wrho} \end{align} where we have used the fact that $\sigma^{2}=1$. Note also that if $(i,j)\in T_{\sigma}$ then $j=\sigma(i)>\sigma(j)=i$, and in this case, any term of the form $w_{\sigma(i),\sigma(j)}(\lambda_{i,j}, \lambda_{j,i})$ in~(\ref{eq:wrho}) should be interpreted as $w_{i,j}(\lambda_{j,i}, \lambda_{i,j})$. The expression~(\ref{eq:wrho}) is written with respect to the direct product structure of $UVP_{n}$. It follows from Theorem~\ref{thm:tss} that $w_{i,j}(\lambda_{i,j}, \lambda_{j,i}) \ldotp w_{\sigma(i),\sigma(j)}(\lambda_{i,j}, \lambda_{j,i})=1$ for all $1\leq i< j\leq n$, and hence: \begin{equation}\label{eq:wsigmaij} w_{\sigma(i),\sigma(j)}(\lambda_{\sigma(i),\sigma(j)}, \lambda_{\sigma(j),\sigma(i)})= w_{i,j}^{-1}(\lambda_{\sigma(i),\sigma(j)}, \lambda_{\sigma(j),\sigma(i)}). \end{equation} Suppose that $(i,j)\in T_{\sigma}$. Then $w_{i,j}(\lambda_{i,j}, \lambda_{j,i}) \ldotp w_{i,j}( \lambda_{j,i},\lambda_{i,j})=1$. Writing $a=\lambda_{i,j}$, $b=\lambda_{j,i}$ and $w_{i,j}(a,b)=a^{k_{1}}b^{l_{1}}\cdots a^{k_{m}}b^{l_{m}}$, where $m\geq 0$, $k_{1},l_{1},\ldots, k_{m},l_{m}\in \Z$ and $l_{1},\ldots, k_{m}\neq 0$, it follows from the relation $w_{i,j}(a,b)\ldotp w_{i,j}(b,a)=1$ that $l_{q}=-k_{m+1-q}$ for all $q=1,\ldots,m$. Taking $y_{i,j}(a,b)=a^{k_{1}}b^{-k_{m}}\cdots a^{k_{m/2}}b^{-k_{(m+2)/2}}$ (resp.\ $y_{i,j}(a,b)=a^{k_{1}}b^{-k_{m}}\cdots a^{k_{(m-1)/2}}b^{-k_{(m+3)/2}} a^{k_{(m+1)/2}}$) if $m$ is even (resp.\ odd), we see that $w_{i,j}(a,b)=y_{i,j}(a,b) \ldotp y_{i,j}\inv(b,a)$. Thus $w_{i,j}(\lambda_{i,j}, \lambda_{j,i})= y_{i,j}(\lambda_{i,j}, \lambda_{j,i}) \ldotp y_{i,j}^{-1}(\lambda_{j,i},\lambda_{i,j})$. Setting $z_{i,j}=w_{i,j}$ if $(i,j)\notin T_{\sigma}$ and $z_{i,j}=y_{i,j}$ if $(i,j)\in T_{\sigma}$, it follows from~(\ref{eq:tildew}) and~(\ref{eq:wsigmaij}) that: \begin{equation}\label{eq:tildev} \widetilde{v}=\widetilde{w}\widetilde{\rho}=\left(\prod_{(i,j)\in \mathcal{T}} z_{i,j}(\lambda_{i,j}, \lambda_{j,i}) z_{i,j}^{-1}(\lambda_{j,i}, \lambda_{i,j}) \right)\widetilde{\rho}. \end{equation} Now the terms $z_{i,j}(\lambda_{i,j}, \lambda_{j,i})$ (resp.\ $z_{i,j}^{-1}(\lambda_{j,i}, \lambda_{i,j})$) appearing in~(\ref{eq:tildev}) commute pairwise, and setting $\widetilde{g}=\prod_{(i,j)\in \mathcal{T}} z_{i,j}(\lambda_{i,j}, \lambda_{j,i})\in UVP_{n}$, we obtain: \begin{equation*} \widetilde{v}=\left(\prod_{(i,j)\in \mathcal{T}} z_{i,j}(\lambda_{i,j}, \lambda_{j,i}) \right)\left(\prod_{(i,j)\in \mathcal{T}} z_{i,j}^{-1}(\lambda_{j,i}, \lambda_{i,j}) \right)\widetilde{\rho}=\widetilde{g} \widetilde{\rho}\, \widetilde{g}^{-1}.
\end{equation*} Hence $v=\tau^{-1} \widetilde{v} \tau=g \rho g^{-1}$, where $\rho \in \operatorname{\text{Im}}(\iota)$ is of order $2$ and $g=\tau^{-1}\widetilde{g}\tau \in UVP_{n}$, which proves that the condition of part~(\ref{it:ord2a}) is also necessary. \item We start by characterising the elements of $\operatorname{\text{Im}}(\eta)$. Let $v\in UVB_n$. Then $\pi_{UVP}(v)\in S_n$, and so $\pi_{UVP}(v)=s_{i_1}\cdots s_{i_r}$, where $r\geq 0$, and for all $j=1,\ldots,r$, $s_{i_j}\in \{ (k,k+1) \,\mid\, k=1,\ldots, n-1\}$. Hence $v=w\rho$, where $w\in UVP_n$ and $\rho= \iota(\pi_{UVP}(v))=\rho_{i_1}\cdots \rho_{i_r}$. Let $\beta=\sigma_{i_1}\cdots \sigma_{i_r}\in B_n$. Then by~(\ref{eq:lambda}), we have $\eta(\beta)=\sigma_{i_1}\cdots \sigma_{i_r}=\lambda_{i_1,i_1+1}^{-1}\rho_{i_1} \cdots \lambda_{i_r,i_r+1}^{-1}\rho_{i_r}=y \rho_{i_1}\cdots \rho_{i_r}=y \rho \in \operatorname{\text{Im}}(\eta)$, where $y\in UVP_n$. It follows that: \begin{equation*} v\in \operatorname{\text{Im}}(\eta) \Longleftrightarrow (\eta(\beta))^{-1} v\in \operatorname{\text{Im}}(\eta) \Longleftrightarrow \rho^{-1} y^{-1} w \rho \in \operatorname{\text{Im}}(\eta). \end{equation*} Now $y^{-1} w \in UVP_n$, so $\rho^{-1} y^{-1} w \rho \in UVP_n$, and it follows that $v\in \operatorname{\text{Im}}(\eta)$ if and only if $\rho^{-1} y^{-1} w \rho\in \eta(P_n)$. From Section~\ref{sec:cryst}, $P_n$ is generated by the set $\{a_{ij} \,\mid\, 1 \leq i < j \leq n\}$, and since $\eta(a_{ij})=\lambda_{i,j}^{-1} \lambda_{j,i}^{-1}$ for all $1\leq i<j\leq n$, we see that $v\in \operatorname{\text{Im}}(\eta)$ if and only if $\rho^{-1} y^{-1} w \rho$ belongs to the free Abelian subgroup $\Gamma$ of $UVP_n$ of rank $n(n-1)/2$ generated by the set $\{\lambda_{i,j}^{-1} \lambda_{j,i}^{-1} \,\mid\, 1 \leq i < j \leq n\}$. In particular, if $1\leq i<j\leq n$ and $\varepsilon_{i,j}\colon\thinspace UVP_n \longrightarrow \Z$ denotes the evaluation homomorphism defined by $\varepsilon_{i,j}(\lambda_{k,l})=1$ if $\{k,l\}=\{i,j\}$ and $\varepsilon_{i,j}(\lambda_{k,l})=0$ otherwise, where $1\leq k,l\leq n$, $k\neq l$, then $\varepsilon_{i,j}(\Gamma)=2\Z$ for all $1\leq i<j\leq n$. Now suppose on the contrary that $\operatorname{\text{Im}}(\eta)$ possesses an element $v$ of order $2$. From part~(\ref{it:ord2a}), there exist $g\in UVP_n$ and $\rho\in \operatorname{\text{Im}}(\iota)$ of order $2$ such that $v=g\rho g^{-1}$. Then $\pi_{UVP}(\rho)$ is also of order $2$, and so may be written as a non-trivial product of transpositions whose supports are pairwise disjoint. So there exist $1\leq r\leq n/2$ and distinct elements $i_1,j_1,\ldots,i_r,j_r$ of $\{1,\ldots,n\}$ such that $i_k<j_k$ for all $1\leq k \leq r$ and $\rho=\iota((i_1,j_1)\cdots (i_r,j_r))$. Let $\tau \in \operatorname{\text{Im}}(\iota)$ be such that $\pi_{UVP}(\tau \rho\tau^{-1})=(1,2) \cdots (2r-1,2r)$, let $\widetilde{\rho}= \tau \rho\tau^{-1} \in \operatorname{\text{Im}}(\iota)$, let $\widehat{\tau}\in B_n$ be such that $\pi(\widehat{\tau})= \pi_{UVP}(\tau)$, let $\beta = \sigma_1\sigma_3 \cdots \sigma_{2r-1}$, and let $\widehat{\beta} = \widehat{\tau}^{-1} \beta \widehat{\tau}$. Then $\widetilde{\rho}=\rho_1\rho_3 \cdots \rho_{2r-1}$ and $\eta(\beta)=\sigma_1\sigma_3 \cdots \sigma_{2r-1}=\lambda_{1,2}^{-1}\lambda_{3,4}^{-1}\cdots \lambda_{2r-1,2r}^{-1} \widetilde{\rho}$ by~(\ref{eq:lambda}) and Theorem~\ref{thm:tss}. Since $v\in \operatorname{\text{Im}}(\eta)$, it follows that $(\eta(\widehat{\beta}))^{-1}\ldotp v \in \operatorname{\text{Im}}(\eta)$. 
Hence: \begin{equation}\label{eq:etav} (\eta(\widehat{\beta}))^{-1}\ldotp v = \eta(\widehat{\tau}^{-1}) \widetilde{\rho}^{\,-1} \eta(\widehat{\tau}) \ldotp \eta(\widehat{\tau}^{-1}) \lambda_{2r-1,2r} \cdots \lambda_{3,4} \lambda_{1,2} \eta(\widehat{\tau}) \ldotp g\rho g^{-1}. \end{equation} Now using~(\ref{eq:cd1}), we have $\pi_{UVP}(\eta(\widehat{\tau}))= \pi(\widehat{\tau})=\pi_{UVP}(\tau)$, so: \begin{equation}\label{eq:etaconj} \eta(\widehat{\tau}^{-1}) \lambda_{2r-1,2r} \cdots \lambda_{3,4} \lambda_{1,2} \eta(\widehat{\tau})= \lambda_{i_r,j_r} \cdots \lambda_{i_2,j_2} \lambda_{i_1,j_1}. \end{equation} It follows from~(\ref{eq:etav}) and~(\ref{eq:etaconj}) that: \begin{align} (\eta(\widehat{\beta}))^{-1}\ldotp v &=\eta(\widehat{\tau}^{-1}) \widetilde{\rho}^{\,-1} \eta(\widehat{\tau}) \rho\ldotp \rho^{-1} \lambda_{i_r,j_r} \cdots \lambda_{i_2,j_2} \lambda_{i_1,j_1} \rho\ldotp \rho^{-1} g\rho g^{-1}\notag\\ &=\eta(\widehat{\tau}^{-1}) \widetilde{\rho}^{\,-1} \eta(\widehat{\tau}) \rho \lambda_{j_r,i_r} \cdots \lambda_{j_2,i_2} \lambda_{j_1,i_1} \rho^{-1} g\rho \ldotp g^{-1}.\label{eq:etav2} \end{align} Since $\pi_{UVP}(\eta(\widehat{\tau}))= \pi(\widehat{\tau})=\pi_{UVP}(\tau)$, there exists $z\in UVP_n$ such that $\eta(\widehat{\tau})=\tau z$, and hence: \begin{align} (\eta(\widehat{\beta}))^{-1}\ldotp v &= z^{-1} \tau^{-1} \widetilde{\rho}^{\,-1} \tau z \rho \ldotp \lambda_{j_r,i_r} \cdots \lambda_{j_2,i_2} \lambda_{j_1,i_1} \ldotp \rho^{-1} g\rho \ldotp g^{-1}\notag\\ &=z^{-1}\ldotp \rho^{-1} z \rho \ldotp \lambda_{j_r,i_r} \cdots \lambda_{j_2,i_2} \lambda_{j_1,i_1} \ldotp \rho^{-1} g\rho \ldotp g^{-1}.\label{eq:prodzg} \end{align} The expression on the right-hand side of~(\ref{eq:prodzg}) is written as a product of elements of $UVP_n$, which we will now project onto $F_{i_1,j_1}$. Let $\alpha(\lambda_{i_1,j_1}, \lambda_{j_1,i_1})=\pi_{i_1,j_1}(z)$ and $\beta(\lambda_{i_1,j_1}, \lambda_{j_1,i_1})=\pi_{i_1,j_1}(g)$. Using~(\ref{eq:conjuvpn}) and the fact that $\pi_{UVP}(\rho)=(i_1,j_1)\cdots (i_r,j_r)$, by~(\ref{eq:prodzg}), we have: \begin{equation*} \pi_{i_1,j_1}((\eta(\widehat{\beta}))^{-1}\ldotp v)=(\alpha(\lambda_{i_1,j_1}, \lambda_{j_1,i_1}))^{-1} \ldotp \alpha(\lambda_{j_1,i_1}, \lambda_{i_1,j_1}) \ldotp \lambda_{j_1,i_1} \ldotp \beta(\lambda_{j_1,i_1},\lambda_{i_1,j_1}) \ldotp (\beta(\lambda_{i_1,j_1}, \lambda_{j_1,i_1}))^{-1}, \end{equation*} from which it follows that $\varepsilon_{i_1,j_1}((\eta(\widehat{\beta}))^{-1}\ldotp v)=1$. We conclude from the previous paragraph that $(\eta(\widehat{\beta}))^{-1}\ldotp v \notin \Gamma$, and thus $v\notin \operatorname{\text{Im}}(\eta)$, which yields a contradiction. Therefore $\operatorname{\text{Im}}(\eta)$ contains no elements of order $2$, and we deduce from Proposition~\ref{prop:cris} that $B_n/[P_n,P_n]$ has no elements of even order.\qedhere \end{enumerate} \end{proof} \begin{prop}\label{prop:torsuvbn} If $n\geq 3$ then any torsion element of $UVB_n$ belongs to the normal closure of $\operatorname{\text{Im}}(\iota)$ in $UVB_n$. \end{prop} \begin{proof} From Definition~\ref{D:Unrestricted}, one may check that the quotient of $UVB_n$ by the normal closure of $\operatorname{\text{Im}}(\iota)$ in $UVB_n$ is isomorphic to $\Z$. If $g$ is a torsion element of $UVB_n$, its image in this quotient is thus trivial, in other words $g$ belongs to the normal closure of $\operatorname{\text{Im}}(\iota)$ in $UVB_n$. \end{proof} One may show in the same manner that the statement of Proposition~\ref{prop:torsuvbn} also holds for $VB_n$ and $WB_n$.
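To illustrate Theorem~\ref{thm:ord2}(\ref{it:ord2a}) in the simplest case, take $n=2$, so that $UVP_{2}=F_{1,2}$ is the free group of rank $2$ generated by $\lambda_{1,2}$ and $\lambda_{2,1}$, and $\operatorname{\text{Im}}(\iota)=\{1,\rho_{1}\}$. Since conjugation by $\rho_{1}=\iota((1,2))$ exchanges $\lambda_{1,2}$ and $\lambda_{2,1}$ by Theorem~\ref{thm:tss2}, the element
\begin{equation*}
v=\lambda_{1,2}\,\rho_{1}\,\lambda_{1,2}^{-1}=\lambda_{1,2}\lambda_{2,1}^{-1}\,\rho_{1}
\end{equation*}
is of order $2$, does not belong to $\operatorname{\text{Im}}(\iota)$ because its $UVP_{2}$-component $\lambda_{1,2}\lambda_{2,1}^{-1}$ is non-trivial, and is conjugate to $\rho_{1}$ by the element $\lambda_{1,2}\in UVP_{2}$, in accordance with Theorem~\ref{thm:ord2}(\ref{it:ord2a}).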
In the case of $UVB_n$, we may strengthen the results of Theorem~\ref{thm:ord2}(\ref{it:ord2a}) and Proposition~\ref{prop:torsuvbn}. \begin{thm}\label{thm:thm_of_torsion} Let $n\geq 2$, and let $w$ be a torsion element of $UVB_n$ of order $r$. Then there exists $s_w \in S_n$ of order $r$ such that $w$ is conjugate to $\iota(s_w)$ by an element of $UVP_n$. \end{thm} Before proving Theorem~\ref{thm:thm_of_torsion}, we define some notation, and we make study the action of the symmetric group $S_n$ on the group $UVP_n$ that will used in the proof. If $\tau\in S_n$, let $\operatorname{\text{Supp}}(\tau)$ denote its support. Let $s\in S_n$, let $o(s)$ denote the order of $s$ and let $G_s$ denote the cyclic subgroup of $S_n$ of order $o(s)$ generated by the element $s$. By Theorem~\ref{thm:tss2}, the permutation $s$ acts on $UVP_n=\langle\lambda_{i,j}\ |\ 1\leq i\neq j\leq n\rangle$ by permuting the indices of the elements $\lambda_{i,j}$. To simplify the notation, if $g \in UVP_n$ and $s\in S_n$, we set $s(g)= \iota(s) g \iota(s)^{-1}$, and we shall identify $s$ with its image $\iota(s)$ in $UVB_n$. Let $s=s_1 \cdots s_{m(s)}$ be the cycle decomposition of $s$, where $1\leq m(s)\leq n$, and the subsets $\operatorname{\text{Supp}}(s_1),\ldots, \operatorname{\text{Supp}}(s_{m(s)})$ form a partition of the set $\{1,\ldots, n \}$ (in particular, some of the $\operatorname{\text{Supp}}(s_j)$, where $1\leq j\leq m(s)$, may be singletons). Let $I=\{ (i, j) \mid 1\le i\neq j \le n\}$. The action of $G_s$ on $I$ gives rise to a partition of $I$ in $n(s)$ disjoint orbits, where $n(s)\in \N$, and the cardinality of each such orbit is either the order of one cycle or is the least common multiple of the orders of two cycles of $s$. If $(i, j)\in I$, let $\mathcal{O}(i,j)$ denote its orbit, and let $\lvert \mathcal{O}(i,j)\rvert$ denote the cardinality of $\mathcal{O}(i,j)$. Clearly, $\lvert \mathcal{O}(j,i)\rvert= \lvert \mathcal{O}(i,j)\rvert$ for all $(i,j)\in I$. Further, if $\mathcal{O}(j,i)=\mathcal{O}(i,j)$ then there exist $1\leq q\leq m(s)$ and $p\in \N$ such that $i,j\in \operatorname{\text{Supp}}(s_q)$, $s_q^p(i)=j$ and $s_q^p(j)=i$, from which it follows that $\lvert \mathcal{O}(i,j)\rvert= o(s_q)$ is even, and if $(k,l)\in I$ then $(k,l)\in \mathcal{O}(i,j)$ if and only if $(l,k)\in \mathcal{O}(i,j)$. This gives rise to a partition $\sqcup_{k=1}^{n(s)} \mathcal{O}_k$ of $I$, where $n(s)\in \N$, that dominates the partition given by the action of $G_s$ on $I$, and for $k=1,\ldots, n(s)$, $\mathcal{O}_k$ is defined by the property that if $(i,j)\in I$, then $(i,j)\in \mathcal{O}_k$ if and only if $\mathcal{O}_k=\mathcal{O}(i,j) \cup \mathcal{O}(j,i)$. Note that $n(s)$ is the number of orbits of the action of $G_s$ on the set $\{ \{i, j\} \mid 1\le i\neq j \le n\}$ of \emph{unordered} pairs of elements of $\{ 1,\ldots,n\}$. Since $\lvert \mathcal{O}(j,i)\rvert= \lvert \mathcal{O}(i,j)\rvert$, and $\mathcal{O}(i,j)$ is even if $\mathcal{O}(j,i)=\mathcal{O}(i,j)$, it follows that for all $1\leq k\leq n(s)$, the cardinality $\lvert\mathcal{O}_k\rvert$ of $\mathcal{O}_k$ is even, so $\lvert\mathcal{O}_k\rvert=2n_k$ for some $n_k\in \N$. Let $F_{\mathcal{O}_k} =\bigoplus_{(i, j) \in \mathcal{O}_k} F_{i,j}$, where as in Section~\ref{sec:uvb}, we identify $F_{i,j}$ with $F_{j,i}$. Then: \begin{equation}\label{eq:decompFOk} UVP_n= \bigoplus_{1\leq k\leq n(s)} F_{\mathcal{O}_k}. \end{equation} The following lemma is folklore. 
\begin{lem}\label{prep:lemma1} Let $F_2=F_2 (x, y)$ be the free group of rank $2$ freely generated by $\{ x, y\}$, and let $\alpha\colon\thinspace F_2 \ensuremath{\longrightarrow} F_2$ be the involutive automorphism defined by $\alpha(x)=y$ and $\alpha(y)=x$. Let $w \in F_2$ be such that $w \alpha(w)=1$. Then there exists $u \in F_2$ such that $w=u \alpha(u^{-1})$. \end{lem} \begin{proof} Let $w \in F_2$ be such that $w \alpha(w)=1$. If $w=1$ then we may take $u=1$. So suppose that $w\neq 1$. Since $w\alpha(w)=1$, $w$ cannot be written in reduced form as a word starting and ending with a non-zero power of the same generator. By replacing $w$ by $\alpha(w)$ if necessary, we may thus suppose that $w=x^{\varepsilon_1}y^{\varepsilon_2}\cdots y^{\varepsilon_{2t-1}}y^{\varepsilon_{2t}}\neq1$, where $\varepsilon_1,\varepsilon_2,\dots,\varepsilon_{2t} \in \Z \setminus\{0\}$, and thus: \begin{equation*} 1=w \alpha(w)=\underbrace{\big(x^{\varepsilon_1}y^{\varepsilon_2}\cdots x^{\varepsilon_{2t-1}}y^{\varepsilon_{2t}}\big)}_{\neq1}\cdot\underbrace{\big(y^{\varepsilon_1}x^{\varepsilon_2}\cdots y^{\varepsilon_{2t-1}}x^{\varepsilon_{2t}}\big)}_{\neq1}\in F_2. \end{equation*} It follows that $ \varepsilon_{2t-j} = - \varepsilon_{j+1}$ for $j=0,\ldots,2t-1$, from which we conclude that $w= u \alpha(u^{-1})$, where $u= x^{\varepsilon_1}y^{\varepsilon_2}\cdots y^{\varepsilon_{t}}$. \end{proof} In what follows, we shall identify $S_n$ with its image in $UVB_n$ by $\iota$. In particular, if $s\in S_n$, then $\iota(s)$ shall be denoted simply by $s$. \begin{lem}\label{prep:lemma2} Let $n\geq 2$, let $s\in S_n$, let $\gamma_1 \in F_{\mathcal{O}_l}$, where $1\leq l\leq n(s)$, and let $w_1=\gamma_1 s \in UVB_n$. If $w_1$ is of finite order then it is conjugate to $s$ by an element of $F_{\mathcal{O}_l}$. \end{lem} \begin{proof} Let $s\in S_n$, $\gamma_1 \in F_{\mathcal{O}_l}$, where $1\leq l\leq n(s)$, and $w_1\in UVB_n$ be as in the statement. By reordering the orbits $\mathcal{O}_{1},\ldots, \mathcal{O}_{n(s)}$ if necessary, we may suppose that $l=1$, so $\gamma_1 \in F_{\mathcal{O}_1}$. The result is clear if $o(w_1)=1$. If $o(w_1)=2$, then by Theorem~\ref{thm:ord2}(\ref{it:ord2a}), there exists $g\in UVP_n$ such that $w_1=gsg^{-1}$. With respect to the decomposition~(\ref{eq:decompFOk}), let $g=\delta_1 \cdots \delta_{n(s)}$, where for $k=1,\ldots, n(s)$, $\delta_k \in F_{\mathcal{O}_k}$. Since $gsg^{-1}=\gamma_1 s$, we have: \begin{equation}\label{eq:gamma1a} \gamma_1 =\delta_1 \cdots \delta_{n(s)} \ldotp s \, (\delta_1 \cdots \delta_{n(s)})^{-1}s^{-1} = \delta_1 \cdots \delta_{n(s)} \ldotp s \delta_{n(s)}^{-1}s^{-1} \cdots s \delta_{1}^{-1}s^{-1}\in F_{\mathcal{O}_1}. \end{equation} Now for all $k=1,\ldots, n(s)$, $F_{\mathcal{O}_k}$ is invariant under conjugation by $s$, and identifying the components of~(\ref{eq:gamma1a}) with respect to~(\ref{eq:decompFOk}), we see that $\delta_k \ldotp s \delta_k^{-1}s^{-1}=1$ for all $2\leq k\leq n(s)$, or in other words, $\delta_k$ and $s$ commute. It follows from~(\ref{eq:gamma1a}) that $\gamma_1=\delta_1 \ldotp s \delta_1^{-1}s^{-1}$, and thus $w_1=\gamma_1 s=\delta_1 s \delta_1^{-1}$, where $\delta_1\in F_{\mathcal{O}_1}$, as required. So suppose that $o(w_1)\geq 3$. For all $m\in \N$, we have: \begin{equation}\label{eq:w1m} w_1^m= (\gamma_1 s)^m= \gamma_1s\gamma_1s^{-1}s^2\gamma_1s^{-2} \cdots s^{m-1}\gamma_1s^{-(m-1)}s^m=\gamma_1 s(\gamma_1) \cdots s^{m-1}(\gamma_1) s^m. 
\end{equation} Since $UVP_n$ is torsion free, it follows from~(\ref{eq:w1m}) that $o(w_1)=o(s)$. As we mentioned above, $F_{\mathcal{O}_1}$ is invariant under conjugation by $s$, so for all $j=0,1,\ldots, m-1$, the term $s^j(\gamma_1)= s^j \gamma_1 s^{-j}$ of~(\ref{eq:w1m}) belongs to $F_{\mathcal{O}_1}$. Let $(i_0,j_0) \in \mathcal{O}_1$, where $1\leq i_0<j_0\leq n$. If $\mathcal{O}(i_0,j_0)\neq \mathcal{O}(j_0,i_0)$ (resp.\ $\mathcal{O}(i_0,j_0)= \mathcal{O}(j_0,i_0)$), let $\varepsilon=1$ (resp.\ $\varepsilon=2$). Then $\lvert \mathcal{O}(i_0,j_0)\rvert=\varepsilon n_1$, and $\lvert \mathcal{O}_1\rvert=2n_1$, $F_{\mathcal{O}_1}=\bigoplus_{(i, j) \in \mathcal{O}_1} F_{i,j}=\bigoplus_{q=0}^{n_1-1} F_{s^q(i_0),s^q(j_0)}$, and for $j=0, \ldots, n_1-1$, there exists $u_j \in F_{s^j(i_0),s^j(j_0)}$ such that $\gamma_1=u_0\cdots u_{n_1-1}$. Hence $u_0, \ldots, u_{n_1-1}$ belong to distinct free factors of $UVP_n$, so they commute pairwise. Further: \begin{equation}\label{eq:skuj} \text{$s^k(u_j)\in F_{s^{j+k}(i_0),s^{j+k}(j_0)}$ for all $k\in \Z$ and $j=0, \ldots, n_1-1$,} \end{equation} where the superscript $j+k$ is taken modulo $n_1$. In particular, if $v \in F_{s^q(i_0),s^q(j_0)}$ for some $q=0,\ldots, n_1-1$, then $s^{n_1}(v) = v$ (resp.\ $s^{n_1}(v) = \alpha(v)$, where $\alpha\colon\thinspace F_{s^q(i_0),s^q(j_0)} \ensuremath{\longrightarrow} F_{s^q(i_0),s^q(j_0)}$ is the automorphism that exchanges $\lambda_{s^q(i_0),s^q(j_0)} $ and $\lambda_{s^q(j_0),s^q(i_0)}$, as in Lemma~\ref{prep:lemma1}), and thus $s^{\varepsilon n_1}(v) = v$. It follows from this, the fact that $o(w_1)=o(s)$ and~(\ref{eq:w1m}) that: \begin{align*} 1&=w_1^{\varepsilon n_1 o(s)}=(w_1^{\varepsilon n_1})^{o(s)}=(\gamma_1 s(\gamma_1) \cdots s^{\varepsilon n_1-1}(\gamma_1) s^{\varepsilon n_1})^{o(s)}=(\gamma_1 s(\gamma_1) \cdots s^{\varepsilon n_1-1}(\gamma_1))^{o(s)} s^{\varepsilon n_1 o(s)}\\ &=(\gamma_1 s(\gamma_1) \cdots s^{\varepsilon n_1-1}(\gamma_1))^{o(s)}. \end{align*} Since $UVP_n$ is torsion free, we conclude that $\gamma_1 s(\gamma_1)\cdots s^{\varepsilon n_1-1}(\gamma_1)=1$. Hence: \begin{equation}\label{eq:prodsu} u_0\cdots u_{n_1-1} s\big(u_0\cdots u_{n_1-1}\big) \cdots s^{\varepsilon n_1-1}\big(u_0\cdots u_{n_1-1}\big)=1. \end{equation} Applying~(\ref{eq:skuj}), and projecting~(\ref{eq:prodsu}) into $F_{i_0,j_0}$, we see that: \begin{equation}\label{eq:prodsin1} 1 =\prod_{i=0}^{\varepsilon-1} s^{in_1}(u_0) s^{in_1+1}\big( u_{n_1-1}\big)\cdots s^{(i+1)n_1-1}\big(u_1 \big)= \prod_{i=0}^{\varepsilon-1} s^{in_1}\big( u_0 s\big(u_{n_1-1}\big)\cdots s^{n_1-1}\big(u_1 \big)\big). \end{equation} We deduce from~(\ref{eq:prodsin1}) (resp.\ from~(\ref{eq:prodsin1}) and Lemma~\ref{prep:lemma1}) that $u_{0}=\prod_{j=1}^{n_1-1}s^{n_1-j}\big( u_j^{-1} \big)$ (resp.\ that there exists $\widetilde{u} \in F_{i_0,j_0}$ such that $u_{0}=\widetilde{u} s^{n_1}(\widetilde{u}^{-1}) \big(\prod_{j=1}^{n_1-1}s^{n_1-j}\big( u_j^{-1} \big)\big)$). Using the fact that $u_1,\ldots, u_{n_1-1}$ commute pairwise, it follows that: \begin{equation}\label{eq:gamma1} \gamma_{1}= u_{0}u_{1}\cdots u_{n_1-1}= \beta s^{n_1-1}\big(u_1^{-1}\big)\cdots s\big(u_{n_1-1}^{-1} \big) \cdot u_{n_1-1} \cdots u_1, \end{equation} where $\beta=1$ (resp.\ $\beta=\widetilde{u} s^{n_1}(\widetilde{u}^{-1})$). In what follows, if $a,b\in UVB_n$, we shall write $a\sim b$ if $a$ and $b$ are conjugate by an element of $UVP_n$.
Let us show by induction that: \begin{equation}\label{eq:gamma1s} \gamma_1\cdot s \sim \beta s^{n_1-1}\big(u_1^{-1}\big)\cdots s^{t+1}\big(u_{n_1-(t+1)}^{-1} \big) s^t\big(u_{n_1-t}^{-1} \big) \cdot u_{n_1-t} \cdots u_1 \cdot s \end{equation} for all $t=1,\ldots, n_1$. If $t=1$ then the result follows directly from~(\ref{eq:gamma1}). So suppose that~(\ref{eq:gamma1s}) holds for some $1\leq t\leq n_1-1$. Let $M_0= \prod_{c=1}^{n_1-1} s^{c}(\widetilde{u})$, and for $1\leq t\leq n_1-1$, let $M_t=\prod_{c= 1}^{t-1}s^{t-c}\big(u_{n_1-t} \big)$. Note that $M_1=1$, and that $M_0=1$ if $\mathcal{O}(i_0,j_0)\neq \mathcal{O}(j_0,i_0)$. By~(\ref{eq:skuj}), for $c=1,\ldots,t-1$, $s^{t-c}\big(u_{n_1-t} \big)\in F_{s^{n_1-c}(i_0),s^{n_1-c}(j_0)}$, so for $1\leq t\leq n_1-1$, the factors of $M_t$ commute pairwise, and $M_t\in \bigoplus_{c=n_1-t+1}^{n_1-1} F_{s^{c}(i_0),s^{c}(j_0)}$. Thus: \begin{equation}\label{eq:mtst} M_t^{-1}s^t\big(u_{n_1-t}^{-1} \big)=\left(\prod_{c= 1}^{t-1}s^{t-c}\big(u_{n_1-t} \big)\right)^{-1} s^t\big(u_{n_1-t}^{-1} \big)= \left(\prod_{c=1}^{t-1}s^{c}\big(u_{n_1-t}^{-1} \big)\right) s^t\big(u_{n_1-t}^{-1} \big)=\prod_{c=1}^{t}s^{c}\big(u_{n_1-t}^{-1} \big). \end{equation} and since $s^t\big(u_{n_1-t}^{-1}\big)\in F_{i_0,j_0}$ and $u_{n_1-t} \cdots u_1\in \bigoplus_{i=1}^{n_1-t} F_{s^{i}(i_0),s^{i}(j_0)}$, we see that $M_t^{-1}s^t\big(u_{n_1-t}^{-1} \big)$ commutes with $u_{n_1-t} \cdots u_1$. So by~(\ref{eq:gamma1s}) and~(\ref{eq:mtst}), we have: \begin{align} \gamma_1\cdot s &\sim \beta s^{n_1-1}\big(u_1^{-1}\big)\cdots s^{t+1}\big(u_{n_1-(t+1)}^{-1} \big) M_t M_t^{-1} s^t\big(u_{n_1-t}^{-1} \big) \cdot u_{n_1-t} \cdots u_1 \cdot s\notag\\ &= \beta s^{n_1-1}\big(u_1^{-1}\big)\cdots s^{t+1}\big(u_{n_1-(t+1)}^{-1} \big) M_t \cdot u_{n_1-t} \cdots u_1 \left(\prod_{c=1}^{t}s^{c}\big(u_{n_1-t}^{-1} \big)\right) \cdot s\notag\\ &= \beta s^{n_1-1}\big(u_1^{-1}\big)\cdots s^{t+1}\big(u_{n_1-(t+1)}^{-1} \big) \left(\prod_{c=1}^{t} s^{t-c}\big(u_{n_1-t} \big)\right) u_{n_1-(t+1)} \cdots u_1 \cdot s \cdot \prod_{c=0}^{t-1}s^{c}\big(u_{n_1-t}^{-1} \big),\label{eq:gamma1scom} \end{align} where we have used the fact that $M_t \cdot u_{n_1-t}=\prod_{c=1}^{t} s^{t-c}\big(u_{n_1-t} \big)$. Now by~(\ref{eq:skuj}), $\prod_{c=1}^{t} s^{t-c}\big(u_{n_1-t} \big)\in \bigoplus_{c=n_1-t}^{n_1-1} F_{s^{c}(i_0),s^{c}(j_0)}$, and since $\beta s^{n_1-1}\big(u_1^{-1}\big)\cdots s^{t+1}\big(u_{n_1-(t+1)}^{-1} \big)\in F_{i_0,j_0}$, we see that these two terms commute, and it follows from~(\ref{eq:gamma1scom}) that: \begin{equation}\label{eq:gamma1scom2} \gamma_1\cdot s \sim \left(\prod_{c=1}^{t} s^{t-c}\big(u_{n_1-t} \big)\right) \beta s^{n_1-1}\big(u_1^{-1}\big)\cdots s^{t+1}\big(u_{n_1-(t+1)}^{-1} \big) u_{n_1-(t+1)} \cdots u_1 \cdot s \cdot \prod_{c=0}^{t-1}s^{c}\big(u_{n_1-t}^{-1} \big). \end{equation} One may check that $\prod_{c=1}^{t} s^{t-c}\big(u_{n_1-t} \big)$ is the inverse of $\prod_{c=0}^{t-1}s^{c}\big(u_{n_1-t}^{-1} \big)$, and using~(\ref{eq:gamma1scom2}), we conclude that~(\ref{eq:gamma1s}) holds for $t+1$. Taking $t=n_1$ in~(\ref{eq:gamma1s}), we obtain $\gamma_1\cdot s \sim \beta \cdot s$. If $\mathcal{O}(i_0,j_0)\neq \mathcal{O}(j_0,i_0)$ then $\beta=1$, and the statement of the lemma holds in this case. So suppose that $\mathcal{O}(i_0,j_0)= \mathcal{O}(j_0,i_0)$. Then $\widetilde{u}\in F_{i_0,j_0}$, and for $c=1,\ldots, n_1-1$, $s^c(\widetilde{u})\in F_{s^c(i_0),s^c(j_0)}$. 
Thus the factors of $\widetilde{u} M_0=\prod_{c=0}^{n_1-1} s^{c}(\widetilde{u})$ commute pairwise, and hence $M_0^{-1} s^{n_1}(\widetilde{u}^{-1})= \prod_{c=1}^{n_1} s^{c}(\widetilde{u}^{-1})$. It follows that: \begin{align*} \gamma_1\cdot s \sim \beta \cdot s &=\widetilde{u} s^{n_1}(\widetilde{u}^{-1})\cdot s = \widetilde{u} M_0 M_0^{-1} s^{n_1}(\widetilde{u}^{-1})\cdot s=\left(\prod_{c=0}^{n_1-1} s^{c}(\widetilde{u})\right) \left(\prod_{c=1}^{n_1} s^{c}(\widetilde{u}^{-1})\right) \cdot s\\ &=\left(\prod_{c=0}^{n_1-1} s^{c}(\widetilde{u})\right) \cdot s \cdot \left(\prod_{c=0}^{n_1-1} s^{c}(\widetilde{u}^{-1})\right). \end{align*} Now $\prod_{c=0}^{n_1-1} s^{c}(\widetilde{u})$ may be seen to be the inverse of $\prod_{c=0}^{n_1-1} s^{c}(\widetilde{u}^{-1})$, and therefore $\gamma_1\cdot s \sim s$ also in this case, which completes the proof of the lemma. \end{proof} This enables us to prove Theorem~\ref{thm:thm_of_torsion}. \begin{proof}[Proof of Theorem~\ref{thm:thm_of_torsion}] Let $w\in UVB_n$ be a torsion element of order $r$. By Theorems~\ref{thm:tss} and~\ref{thm:tss2} there exist (unique) $u\in UVP_n$ and $s\in S_n$ such that $w=us$. As in~(\ref{eq:w1m}), we see that $u\cdot s(u)\cdots s^{r-1}(u)=1$ in $UVP_n$ and $s^r=1$. Further, from~(\ref{eq:decompFOk}), $u=\gamma_1\cdots\gamma_{n(s)}$, where for $l=1,\ldots, n(s)$, $\gamma_l \in F_{\mathcal{O}_l}$. Let $l\in \{1,\ldots, n(s)\}$. Since $F_{\mathcal{O}_l}$ is invariant under conjugation by $s$, it follows from~(\ref{eq:decompFOk}) that $\gamma_l s(\gamma_l)\cdots s^{r-1}(\gamma_l)=1$ in $F_{\mathcal{O}_l}$. Since $\gamma_l s(\gamma_l)\cdots s^{r-1}(\gamma_l)$ may also be written as $(\gamma_l s)^r$ using~(\ref{eq:w1m}), we conclude from Lemma~\ref{prep:lemma2} that $\gamma_l s$ is conjugate to $s$ by an element of $F_{\mathcal{O}_l}$, or in other words, there exists $\Lambda_l\in F_{\mathcal{O}_l}$ such that $\gamma_l s=\Lambda_{l}\cdot s\cdot \Lambda_{l}^{-1}$. Let us prove by reverse induction that for all $m=1,\ldots, n(s)+1$: \begin{equation}\label{eq:Lambdam} w= \Lambda_{n(s)}\Lambda_{n(s)-1} \cdots \Lambda_{m} \gamma_1 \cdots \gamma_{m-1}\cdot s \cdot \Lambda_{m}^{-1} \cdots \Lambda_{n(s)-1}^{-1}\Lambda_{n(s)}^{-1}. \end{equation} If $m=n(s)+1$ then~(\ref{eq:Lambdam}) follows directly from the fact that $w=us=\gamma_1\cdots \gamma_{n(s)}\cdot s$. So suppose that~(\ref{eq:Lambdam}) holds for some $m\in \{2,\ldots, n(s)+1 \}$. Then using~(\ref{eq:decompFOk}) and the facts that $\gamma_1 \cdots \gamma_{m-2}\in \bigoplus_{i=1}^{m-2} F_{\mathcal{O}_i}$ and $\gamma_{m-1}\in F_{\mathcal{O}_m-1}$, we obtain: \begin{align*} w &= \Lambda_{n(s)}\Lambda_{n(s)-1} \cdots \Lambda_{m} \gamma_1 \cdots \gamma_{m-2} (\gamma_{m-1} \cdot s) \cdot \Lambda_{m}^{-1} \cdots \Lambda_{n(s)-1}^{-1}\Lambda_{n(s)}^{-1}\\ &= \Lambda_{n(s)}\Lambda_{n(s)-1} \cdots \Lambda_{m} \gamma_1 \cdots \gamma_{m-2} \Lambda_{m-1} \cdot s \cdot \Lambda_{m-1}^{-1} \Lambda_{m}^{-1} \cdots \Lambda_{n(s)-1}^{-1}\Lambda_{n(s)}^{-1}\\ &= \Lambda_{n(s)}\Lambda_{n(s)-1} \cdots \Lambda_{m} \Lambda_{m-1} \gamma_1 \cdots \gamma_{m-2} \cdot s \cdot \Lambda_{m-1}^{-1} \Lambda_{m}^{-1} \cdots \Lambda_{n(s)-1}^{-1}\Lambda_{n(s)}^{-1}, \end{align*} which proves~(\ref{eq:Lambdam}) for $m-1$. Taking $m=1$, we see that $w=\Lambda \cdot s \cdot \Lambda^{-1}$, where $\Lambda=\Lambda_{n(s)}\cdots \Lambda_{1}\in UVP_n$, and this completes the proof of the theorem. 
\end{proof} \begin{rem} It is an open question whether a result similar to that of Theorem~\ref{thm:thm_of_torsion} holds for $VB_n$ and for $WB_n$. \end{rem} \begin{acknow} The first and third authors are supported by the French project `AlMaRe' (ANR-19-CE40-0001-01). \end{acknow} \end{document}
\begin{document} \title{Remark on the N-barrier method for a class of autonomous elliptic systems } \begin{abstract} In this note, we aim to extend the previous work on an N-barrier maximum principle (\cite{hung2015n,hung2015maximum}) to a more general class of systems of two equations. Moreover, an N-barrier maximum principle for systems of three equations is established. \end{abstract} \section{N-barrier maximum principle for two equations}\label{sec: NBMP 2 eqns} We are concerned with the autonomous elliptic system in $\mathbb{R}$: \begin{equation}\label{eqn: L-V BVP before scaled without BCs} \begin{cases} d_1\,u_{xx}+\theta\,u_{x}+u\,f(u,v)=0, \quad x\in\mathbb{R}, \\ \\ d_2\,v_{xx}\hspace{0.8mm}+\theta\,v_{x}+v\,g(u,v)=0,\quad x\in\mathbb{R}, \end{cases} \end{equation} which arises from the study of traveling waves in the following reaction-diffusion system (\cite{Volperts94TWS-Parabolic}): \begin{equation}\label{eqn: compet L-V sys of 2 species with diffu} \begin{cases} u_t=d_1\,u_{yy}+u\,f(u,v), \quad y\in\mathbb{R},\quad t>0,\\ \\ \hspace{0.7mm}v_t=d_2\,v_{yy}+v\,g(u,v), \quad y\in\mathbb{R},\quad t>0.\\ \end{cases} \end{equation} A positive solution $(u(x),v(x))$ of \eqref{eqn: L-V BVP before scaled without BCs} corresponds to a traveling wave solution $(u,v)(y,t)=(u,v)(x)$, $x=y-\theta\,t$, of \eqref{eqn: compet L-V sys of 2 species with diffu}. Here $d_1$ and $d_2$ represent the diffusion rates and $\theta$ is the propagation speed of the traveling wave. Throughout, we assume, unless otherwise stated, that the following hypotheses on $f(u,v)\in C^{0}(\mathbb{R^{+}}\times\mathbb{R^{+}})$ and $g(u,v)\in C^{0}(\mathbb{R^{+}}\times\mathbb{R^{+}})$ are satisfied: \begin{itemize} \item [$\mathbf{[H1]}$] the unique solution of $f(u,v)=g(u,v)=0$ and $u,v>0$ is $(u,v)=(u^\ast,v^\ast)$; \item [$\mathbf{[H2]}$] the unique solution of $f(u,0)=0$ and $u>0$ is $u=u_1>0$; the unique solution of $f(0,v)=0, v>0$ is $v=v_1>0$; the unique solution of $g(u,0)=0, u>0$ is $u=u_2>0$; the unique solution of $g(0,v)=0, v>0$ is $v=v_2>0$; \item [$\mathbf{[H3]}$] as $u,v>0$ are sufficiently large, $f(u,v), g(u,v)<0$; as $u,v>0$ are sufficiently small, $f(u,v), g(u,v)>0$; \item [$\mathbf{[H4]}$] the two curves \begin{equation} \mathcal{C}_f=\Big\{ (u,v)\;\big|\; f(u,v)=0,\;u,v\geq0\Big\} \end{equation} and \begin{equation} \mathcal{C}_g=\Big\{ (u,v)\;\big|\; g(u,v)=0,\;u,v\geq0\Big\} \end{equation} lie completely within the region $\mathcal{R}$, which is enclosed by the $u$-axis, the $v$-axis, the line $\bar{\mathcal{L}}$ given by \begin{equation} \bar{\mathcal{L}}=\big\{ (u,v)\;\big|\; \frac{u}{\bar{u}}+\frac{v}{\bar{v}}=1,\; u,v\geq0 \big\} \end{equation} and the line $\underaccent\bar{\mathcal{L}}$ given by \begin{equation} \underaccent\bar{\mathcal{L}}=\big\{ (u,v)\;\big|\; \frac{u}{\underaccent\bar{u}}+\frac{v}{\underaccent\bar{v}}=1,\;u,v\geq0\big\}, \end{equation} for some $\bar{u}>\underaccent\bar{u}>0$, $\bar{v}>\underaccent\bar{v}>0$. \end{itemize} In this note, the following boundary value problem for \eqref{eqn: L-V BVP before scaled without BCs} is studied: \begin{equation}\label{eqn: L-V BVP before scaled} \begin{cases} d_1\,u_{xx}+\theta\,u_{x}+u\,f(u,v)=0, \quad x\in\mathbb{R}, \\ d_2\,v_{xx}\hspace{0.8mm}+\theta\,v_{x}+v\,g(u,v)=0,\quad x\in\mathbb{R}, \\ (u,v)(-\infty)=\text{\bf e}_{-},\quad (u,v)(+\infty)= \text{\bf e}_{+}, \end{cases} \end{equation} where $\text{\bf e}_{-}, \text{\bf e}_{+} =(0,0), (u_1,0), (0,v_2)$, or $(u^\ast,v^\ast)$.
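For instance, hypotheses $\mathbf{[H1]}$$\sim$$\mathbf{[H4]}$ are modeled on classical Lotka-Volterra competition nonlinearities of the type considered in \cite{hung2015n,hung2015maximum},
\begin{equation*}
f(u,v)=\sigma_1-c_{11}\,u-c_{12}\,v, \qquad g(u,v)=\sigma_2-c_{21}\,u-c_{22}\,v,
\end{equation*}
with $\sigma_i, c_{ij}>0$ and with parameters such that the two lines $f=0$ and $g=0$ intersect at a single point in the open first quadrant. In this case $u_1=\sigma_1/c_{11}$, $v_1=\sigma_1/c_{12}$, $u_2=\sigma_2/c_{21}$, $v_2=\sigma_2/c_{22}$, and one may check that $\mathbf{[H4]}$ holds with any choice of $\underaccent\bar{u}\leq\min(u_1,u_2)$, $\underaccent\bar{v}\leq\min(v_1,v_2)$, $\bar{u}\geq\max(u_1,u_2)$ and $\bar{v}\geq\max(v_1,v_2)$ for which $\bar{u}>\underaccent\bar{u}$ and $\bar{v}>\underaccent\bar{v}$.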
We call a solution $(u(x),v(x))$ of \eqref{eqn: L-V BVP before scaled} an $(\text{\bf e}_{-},\text{\bf e}_{+})$-wave. As in \cite{hung2015n,hung2015maximum}, adding $\alpha$ times the first equation in \eqref{eqn: L-V BVP before scaled} to $\beta$ times the second leads to an equation involving $p(x)=\alpha\,u(x)+\beta\,v(x)$ and $q(x)=d_1\,\alpha\,u(x)+d_2\,\beta\,v(x)$, i.e. \begin{align}\label{eq: q''+p'+f+g=0 before scale} 0&=\alpha\,\big(d_1\,u_{xx}+\theta\,u_{x}+u\,f(u,v)\big)+ \beta\,\big(d_2\,v_{xx}+\theta\,v_{x}+v\,g(u,v)\big)\notag\\[3mm] &=q''(x)+\theta\,p'(x)+ \alpha\,u\,f(u,v)+ \beta\,v\,g(u,v)\notag\\[3mm] &=q''(x)+\theta\,p'(x)+F(u,v), \end{align} where $\alpha$, $\beta>0$ are \textit{arbitrary} constants and $F(u,v)=\alpha\,u\,f(u,v)+\beta\,v\,g(u,v)$. To show the main result, we begin with a useful lemma. \begin{lem}\label{lem: F(u,v)=0 is contained in R} Under $\mathbf{[H1]}$$\sim$$\mathbf{[H4]}$, the curve $F(u,v)=0$ in the first quadrant of the $uv$-plane, i.e.\ \begin{equation} \mathcal{C}_F=\Big\{ (u,v)\;\big|\; F(u,v)=0,\;u,v\geq0\Big\} \end{equation} lies completely within the region $\mathcal{R}$. \end{lem} \begin{proof} The proof is elementary and is thus omitted here. \end{proof} We are now in a position to prove \begin{thm}[\textbf{N-Barrier Maximum Principle for Systems of Two Equations}]\label{thm: N-Barrier Maximum Principle for 2 Species} Assume $\mathbf{[H1]}$$\sim$$\mathbf{[H4]}$ hold. Suppose that there exists a positive solution $(u(x),v(x))$ of the boundary value problem \begin{equation} \begin{cases} d_1\,u_{xx}+\theta\,u_{x}+u\,f(u,v)=0, \quad x\in\mathbb{R}, \\ d_2\,v_{xx}\hspace{0.8mm}+\theta\,v_{x}+v\,g(u,v)=0,\quad x\in\mathbb{R}, \\ (u,v)(-\infty)=\text{\bf e}_{-},\quad (u,v)(+\infty)= \text{\bf e}_{+}, \end{cases} \end{equation} where $\text{\bf e}_{-}, \text{\bf e}_{+} =(0,0), (u_1,0), (0,v_2)$, or $(u^\ast,v^\ast)$. For any $\alpha,\beta>0$, let $p(x)=\alpha\,u(x)+\beta\,v(x)$ and $q(x)=\alpha\,d_1\,u(x)+\beta\,d_2\,v(x)$. We have \begin{equation}\label{eqn: upper and lower bounds of p} \underaccent\bar{p} \leq p(x)\leq \bar{p}, \quad x\in\mathbb{R}, \end{equation} where \begin{equation} \bar{p}=\max\big(\alpha\,\bar{u},\beta\,\bar{v}\big)\,\frac{\max(d_1,d_2)}{\min(d_1,d_2)} \end{equation} and \begin{equation} \underaccent\bar{p}=\min\big(\alpha\,\underaccent\bar{u},\beta\,\underaccent\bar{v}\big)\,\frac{\displaystyle\min(d_1,d_2)}{\displaystyle\max(d_1,d_2)}\,\chi , \end{equation} where \begin{equation} \chi = \begin{cases} 1, \quad \text{if} \quad \text{\bf e}_{\pm}\neq(0,0),\\ 0, \quad \text{if} \quad \text{\bf e}_{\pm}=(0,0). \end{cases} \end{equation} \end{thm} \begin{proof} We employ \textit{the N-barrier method} developed in \cite{hung2015n,hung2015maximum} to show \eqref{eqn: upper and lower bounds of p}. It is readily seen that once appropriate \textit{N-barriers} are constructed, the upper and lower bounds in \eqref{eqn: upper and lower bounds of p} are obtained in exactly the same way as in \cite{hung2015n,hung2015maximum}. To prove the lower bound, we can construct an N-barrier which starts either from the point $(\underaccent\bar{u},0)$ or $(0,\underaccent\bar{v})$. Due to Lemma~\ref{lem: F(u,v)=0 is contained in R}, the N-barrier stays away from the region $\mathcal{R}$. Then the lower bound can be proved by contradiction as in \cite{hung2015n,hung2015maximum}.
On the other hand, the upper bound can be shown in the same manner by constructing an N-barrier starting either from the point $(\bar{u},0)$ or $(0,\bar{v})$. In addition, it is clear that when $\text{\bf e}_{+}=(0,0)$ or $\text{\bf e}_{-}=(0,0)$, only the trivial lower bound $p(x)\geq0$ can be given. This completes the proof. \end{proof} \setcounter{equation}{0} \setcounter{figure}{0} \section{N-barrier maximum principle for three equations}\label{sec: NBMP 3 eqns} In this section, the following assumptions on $f(u,v,w)\in C^{0}(\mathbb{R^{+}}\times\mathbb{R^{+}}\times\mathbb{R^{+}})$, $g(u,v,w)\in C^{0}(\mathbb{R^{+}}\times\mathbb{R^{+}}\times\mathbb{R^{+}})$, and $h(u,v,w)\in C^{0}(\mathbb{R^{+}}\times\mathbb{R^{+}}\times\mathbb{R^{+}})$ are assumed to be satisfied: \begin{itemize} \item [$\mathbf{[A1]}$] \textbf{\emph{(Unique coexistence state)}} $(u,v,w)=(u^\ast,v^\ast,w^\ast)$ is the unique solution of \begin{equation} \begin{cases} f(u,v,w)=0,\quad u,v,w>0, \\ g(u,v,w)=0,\quad u,v,w>0, \\ h(u,v,w)=0,\quad u,v,w>0. \\ \end{cases} \end{equation} \item [$\mathbf{[A2]}$] \textbf{\emph{(Competitively exclusive states)}} For some $u_i,v_i,w_i>0$ $(i=1,2,3)$ with $(\Lambda_1-\Lambda_2)^2+(\Lambda_1-\Lambda_3)^2+(\Lambda_2-\Lambda_3)^2\neq0$ $(\Lambda=u,v,w)$, \begin{equation} f(u_1,0,0)=f(0,v_1,0)=f(0,0,w_1)=0, \end{equation} \begin{equation} g(u_2,0,0)=g(0,v_2,0)=g(0,0,w_2)=0, \end{equation} \begin{equation} h(u_3,0,0)=h(0,v_3,0)=h(0,0,w_3)=0. \end{equation} \item [$\mathbf{[A3]}$] \textbf{\emph{(Logistic-growth nonlinearity)}} For $u,v,w>0$, when $u,v,w$ are sufficiently large \begin{equation} f(u,v,w),g(u,v,w),h(u,v,w)<0, \end{equation} and \begin{equation} f(u,v,w),g(u,v,w),h(u,v,w)>0, \end{equation} when $u,v,w$ are sufficiently small. \item [$\mathbf{[A4]}$] \textbf{\emph{(Covering pentahedron)}} The three surfaces \begin{equation} \mathcal{S}_f=\big\{ (u,v,w)\;\big|\; f(u,v,w)=0,\;u,v,w\geq0\big\}, \end{equation} \begin{equation} \mathcal{S}_g=\big\{ (u,v,w)\;\big|\; g(u,v,w)=0,\;u,v,w\geq0\big\}, \end{equation} \begin{equation} \mathcal{S}_h=\big\{ (u,v,w)\;\big|\; h(u,v,w)=0,\;u,v,w\geq0\big\}, \end{equation} lie completely within the pentahedron (a polyhedron with five faces) $\mathcal{PH}$, which is enclosed by the $uv$-plane, the $uw$-plane, and the $vw$-plane as well as the planes \begin{equation} \bar{\mathcal{P}}=\big\{ (u,v,w)\;\big|\; \frac{u}{\bar{u}}+\frac{v}{\bar{v}}+\frac{w}{\bar{w}}=1,\; u,v,w\geq0 \big\} \end{equation} and \begin{equation} \underaccent\bar{\mathcal{P}}=\big\{ (u,v,w)\;\big|\; \frac{u}{\underaccent\bar{u}}+\frac{v}{\underaccent\bar{v}}+\frac{w}{\underaccent\bar{w}}=1,\;u,v,w\geq0\big\} \end{equation} for some $\bar{u}>\underaccent\bar{u}>0$, $\bar{v}>\underaccent\bar{v}>0$, $\bar{w}>\underaccent\bar{w}>0$. \end{itemize} We can prove in a similar manner that an N-barrier maximum principle remains true for systems of three equations. \begin{thm}[\textbf{N-Barrier Maximum Principle for Systems of Three Equations}]\label{thm: N-barrier Maximum Principle for 3 species} Assume $\mathbf{[A1]}$$\sim$$\mathbf{[A4]}$ are satisfied.
Suppose that there exists a positive solution $(u(x),v(x),w(x))$ of the boundary value problem \begin{equation}\label{eqn: L-V BVP before scaled 3 eqns} \begin{cases} d_1\,u_{xx}+\theta\,u_{x}+u\,f(u,v,w)=0, \quad x\in\mathbb{R}, \\ d_2\,v_{xx}\hspace{0.8mm}+\theta\,v_{x}+v\,g(u,v,w)=0,\quad x\in\mathbb{R}, \\ d_3\,w_{xx}\hspace{0.0mm}+\theta\,w_{x}+w\,h(u,v,w)=0,\quad x\in\mathbb{R}, \\ (u,v,w)(-\infty)=\text{\bf e}_{-},\quad (u,v,w)(+\infty)= \text{\bf e}_{+}, \end{cases} \end{equation} where $\text{\bf e}_{-}, \text{\bf e}_{+} =(0,0,0), (u_1,0,0), (0,v_2,0), (0,0,w_3)$, or $(u^\ast,v^\ast,w^\ast)$. For any $\alpha,\beta,\gamma>0$, let $p(x)=\alpha\,u(x)+\beta\,v(x)+\gamma\,w(x)$. We have \begin{equation} \min\big(\alpha\,\underaccent\bar{u},\beta\,\underaccent\bar{v},\gamma\,\underaccent\bar{w}\big) \frac{\min(d_1,d_2,d_3)}{\max(d_1,d_2,d_3)}\,\chi \leq p(x)\leq \max\big(\alpha\,\bar{u},\beta\,\bar{v},\gamma\,\bar{w}\big) \frac{\max(d_1,d_2,d_3)}{\min(d_1,d_2,d_3)} \end{equation} for $x\in\mathbb{R}$, where \begin{equation} \chi = \begin{cases} 1, \quad \text{if} \quad \text{\bf e}_{\pm}\neq(0,0,0),\\ 0, \quad \text{if} \quad \text{\bf e}_{\pm}=(0,0,0). \end{cases} \end{equation} \end{thm} \end{document}
\begin{document} \renewcommand*{\thefootnote}{\fnsymbol{footnote}} \begin{center} \Large{\textbf{Intermittency in the small-time behavior of L\'evy processes}}\\ Danijel Grahovac$^1$\footnote{[email protected]}\\ \end{center} \begin{flushleft} \footnotesize{ $^1$ Department of Mathematics, University of Osijek, Trg Ljudevita Gaja 6, 31000 Osijek, Croatia} \end{flushleft} \textbf{Abstract: } In this paper we consider convergence of moments in the small-time limit theorems for L\'evy processes. We provide precise asymptotics for all the absolute moments of positive order. The convergence of moments in limit theorems holds typically only up to some critical moment order and higher order moments decay at different rate. Such behavior is known as intermittency and has been encountered in some limit theorems. \textbf{Keywords: } small-time limit theorems; L\'evy processes; absolute moments; intermittency \section{Introduction} In classical limit theorems, one typically studies the behavior of some form of aggregated process as time tends to infinity. For example, if $\{\xi_i, \, i \in {\mathbb N}\}$ is an i.i.d.~sequence and $(a_\lambda) \subset {\mathbb R}$, $a_\lambda\to \infty$, then it is well known that the class of weak limits of \begin{equation}\label{e:intr1} \left\{ \frac{1}{a_\lambda} \sum_{i=1}^{\lfloor \lambda t \rfloor} \xi_i, \, t \geq 0 \right\}, \quad \text{ as } \lambda\to \infty, \end{equation} coincides with the class of self-similar L\'evy processes, namely, Brownian motion or infinite variance strictly stable processes. If $\xi_i$ are infinitely divisible, then one-dimensional marginals of \eqref{e:intr1} may be identified with $a_\lambda^{-1} X(\lfloor \lambda t \rfloor)$ for a L\'evy process $X$ with increments distributed as $\xi_i$. The type of the limiting process depends solely on the tail behavior of the distribution of $\xi_i$. If $\xi_i$ is infinitely divisible, then this behavior is related to the behavior of the L\'evy measure of $\xi_i$ at infinity. Alternatively, one may consider limit theorems for a small-time limiting scheme (or ``zooming in'' as termed by \cite{ivanovs2018zooming}), where one considers for a L\'evy process $X$ the limit \begin{equation}\label{e:intr2} \left\{ \frac{1}{a_\lambda} X (\lambda t ), \, t \geq 0 \right\}, \quad \text{ as } \lambda\to 0. \end{equation} Although the class of possible limits is the same as for \eqref{e:intr1}, the domains of attraction now crucially depend on the behavior of the L\'evy measure of $X$ near zero (see \cite{doney2002stability,maller2008convergence,ivanovs2018zooming}). In this paper we investigate small-time behavior of absolute moments of L\'evy processes. Assuming that \eqref{e:intr2} converges to some non-trivial process $\widehat{X}$, we show that the normalized absolute moments $a_\lambda^{-q} \mathbb{E} |X(\lambda t)|^q$ converge to the absolute moments of the limit, but this typically holds only up to some critical moment order $\alpha$. Absolute moments of the order $q$ greater than $\alpha$ cannot be normalized with $a_\lambda^{q}$. This implies that the $L^q$ norms of $X(\lambda t)$ decay at different rates for different range of $q$. Such a behavior is known as intermittency (see \cite{GLST2016JSP,GLST2019Bernoulli}) and resembles a similar phenomenon appearing in solutions of some stochastic partial differential equations (SPDE) (see e.g.~\cite{carmona1994parabolic,gartner2007geometric,zel1987intermittency,khoshnevisan2014analysis,chong2018almost,chong2019intermittency}). 
Intermittency in SPDEs usually involves the long-term behavior of solutions, corresponding to $\lambda \to \infty$, and in limit theorems one is likewise interested in the limits of aggregated processes as $\lambda \to \infty$ (see e.g.~\cite{GLT2019Limit,GLT2019LimitInfVar,GLT2019MomInfVar,grahovac2018intermittency}). Here, however, we find an instance of intermittent behavior appearing in the small-time limit. We note that there exists a large body of literature related to the small-time behavior of L\'evy processes: \cite{bertoin2008passage,maller2009small,maller2008convergence,doney2002stability,aurzada2013small,maller2015strong,ivanovs2018zooming}. Small-time behavior of moments of L\'evy processes has been investigated in \cite{figueroa2008small} (see also \cite{woerner2003variational,jacod2007asymptotic,asmussen2001approximations}). However, these results apply only to moments of order greater than the Blumenthal-Getoor index of the process. We provide here the asymptotics for the full range of positive order moments provided that \eqref{e:intr2} converges to a non-trivial limit. In \cite{deng2015shift}, bounds are established for absolute moments of general L\'evy processes applicable both for small and large time. The paper is organized as follows. We start with some preliminaries in Section \ref{sec2} and discuss the notion of small-time intermittency in Section \ref{sec3}. In Section \ref{sec4} we establish the asymptotic behavior of moments. In Section \ref{sec5} we briefly discuss the so-called multifractal formalism, which relates the behavior of moments to the sample path properties of the process. \section{Preliminaries}\label{sec2} Throughout the paper, $X=\{X(t), \, t \geq 0\}$ will denote a L\'evy process. Let $\Psi$ be its characteristic exponent $\log \mathbb{E} \left[ e^{i \zeta X(t)} \right] = t \Psi (\zeta)$ for $t\geq 0$, $\zeta \in {\mathbb R}$, which by the L\'evy-Khintchine formula has the following form \begin{equation}\label{LKformula} \Psi(\zeta) = i \gamma \zeta - \frac{\sigma^2}{2} \zeta^2 + \int_{{\mathbb R}} \left(e^{i\zeta x} - 1 - i \zeta x \mathbf{1}_{\{|x|\leq 1\}} \right) \Pi(dx), \end{equation} with $\gamma \in {\mathbb R}$, $\sigma\geq 0$ and $\Pi$ a measure on ${\mathbb R} \backslash \{0\}$ such that $\int (1\wedge x^2) \Pi(dx) < \infty$. We refer to $(\gamma,\sigma,\Pi)$ as the characteristic triplet. The L\'evy process has paths of bounded variation if and only if $\sigma=0$ and $\int_{|x|\leq 1}|x| \Pi(dx)<\infty$. If $\int_{|x|\leq 1}|x| \Pi(dx)<\infty$, then \eqref{LKformula} may be written in the form \begin{equation}\label{LKformulabv} \Psi(\zeta) = i \gamma' \zeta - \frac{\sigma^2}{2} \zeta^2 + \int_{{\mathbb R}} \left(e^{i\zeta x} - 1\right) \Pi(dx), \end{equation} and $\gamma'$ will be referred to as the drift. For more details see \cite{sato1999levy,kyprianou2014fluctuations,bertoin1998levy}. Let $\{Y(t), \, t\geq 0\}$ be a general real-valued process. It is said to be self-similar with index $H>0$ if for any $c>0$ it holds that $\{Y(ct)\} \overset{d}{=} \{c^H Y(t)\}$, where $\{\cdot\} \overset{d}{=} \{\cdot\}$ denotes the equality of the finite dimensional distributions of two processes.
If $X$ is a self-similar L\'evy process, then $X$ is one of the following: \begin{itemize} \item Brownian motion in which case $H=1/2$ and the characteristic triplet is $(0,\sigma,0)$, \item linear drift in which case $H=1$ and the characteristic triplet is $(\gamma,0,0)$, $\gamma\neq 0$, \item strictly $\alpha$-stable L\'evy process with $0<\alpha<2$ in which case $H=1/\alpha$ and the characteristic triplet is $(\gamma,0,\Pi)$ with \begin{equation*} \Pi(dx)=c_+ x^{-1-\alpha} \1_{\{x>0\}} dx + c_- |x|^{-1-\alpha} \1_{\{x<0\}} dx, \end{equation*} for some $c_+,c_- \geq0$, $c_++c_->0$ and, additionally, $c_+=c_-$ if $\alpha=1$, while $\gamma$ is given by $\gamma = (c_+-c_-)/(1-\alpha)$. \end{itemize} It follows from Lamperti's theorem (see \cite{lamperti1962semi}, \cite[Thm.~2.8.5]{pipiras2017long}) that the class of self-similar L\'evy processes coincides with the class of weak limits of $\{\sum_{i=1}^{\lfloor \lambda t \rfloor} \xi_i/a(\lambda), \, t \geq 0\}$ as $\lambda\to \infty$, where $\{\xi_i, \, i \in {\mathbb N}\}$ is an i.i.d.~sequence and $a(\lambda)$ is a nonstochastic function such that $a(\lambda) \to \infty$ as $\lambda \to \infty$. By \cite[Theorem 16.14]{kallenberg2002foundations}, a necessary and sufficient condition for such convergence to some L\'evy process $\widehat{X}=\{\widehat{X}(t), \, t \geq 0\}$ is that $\sum_{i=1}^{n} \xi_i/a_n \to^d \widehat{X}(1)$. The characterization of domains of attraction follows from \cite[Thm.~7.35.2]{gnedenko1954limit}, \cite{shimura1990strict} and \cite{ivanovs2018zooming} for the convergence to a linear drift. In a small-time limiting scheme (or ``zooming in'' as termed by \cite{ivanovs2018zooming}) one considers the limit \begin{equation}\label{e:limscheme} \left\{ \frac{Y(ts)}{a(t)}, \, s \geq 0 \right\} \overset{fdd}{\to} \{Z(s), \, s \geq 0\}, \quad \text{ as } t \to 0, \end{equation} for some processes $\{Y(s), \, s \geq 0\}$, $\{Z(s), \, s \geq 0\}$, a nonstochastic function $a(t)$ such that $a(t)\to 0$ as $t\to 0$, and with convergence in the sense of convergence of all finite-dimensional distributions. By adapting the proof of Lamperti's theorem, one can show that if \eqref{e:limscheme} holds with $Z$ stochastically right-continuous and non-trivial (not identically $0$), then $Z$ must be self-similar with some index $H>0$ and $a(t)$ is regularly varying at zero with index $H$, which we will denote by $a(t)\in \mathrm{RV}^0(H)$ (see \cite[Thm.~1]{ivanovs2018zooming}). \section{Small-time intermittency}\label{sec3} Intermittency typically refers to an unusual asymptotic behavior of moments. In \cite{GLST2019Bernoulli}, intermittency is defined as a change in the rate of growth of the $L^q$ norm of the process as time tends to $\infty$. More precisely, for a general real-valued process $Y=\{Y(t),\, t \geq 0\}$ define the scaling function at point $q \in {\mathbb R}$ as \begin{equation}\label{deftauInf} \tau^\infty(q) = \tau_Y^\infty(q) = \lim_{t\to \infty} \frac{\log \mathbb{E} |Y(t)|^q}{\log t}. \end{equation} The process $Y$ is then intermittent if \begin{equation*} q \mapsto \frac{\tau_Y^\infty(q)}{q} = \lim_{t\to \infty} \frac{\log \left\lVert Y(t) \right\rVert_q}{\log t} \end{equation*} has points of strict increase, where $\left\lVert Y(t) \right\rVert_q= \left( \mathbb{E} |Y(t)|^q \right)^{1/q}$, which is the $L^q$ norm if $q\geq 1$.
The scaling function \eqref{deftauInf} is tailored for measuring the rate of growth of moments as time tends to $\infty$ for a process whose moments grow roughly as a power function of time. For an $H$-self-similar process, for example, $\tau^\infty(q)=Hq$ for $q$ in the range of finite moments. In the limiting scheme \eqref{e:limscheme}, clearly $Y(t) \to^P 0$ as $t \to 0$. In this setting we want to measure the rate of decay of absolute moments to zero as $t\to 0$, in analogy with \eqref{deftauInf}. Hence, we define the scaling function as \begin{equation}\label{deftau0} \tau^0(q) = \tau_Y^0(q) = \lim_{t\to 0} \frac{\log \mathbb{E} |Y(t)|^q}{\log (1/t)}, \end{equation} where we assume the limit exists, possibly equal to $\infty$. If $\mathbb{E}|Y(t)|^q=\infty$ for $t\geq t_0$, then $\tau^0(q)=\infty$. If $Y$ is an $H$-self-similar process, then $\tau^0(q)=-Hq$ for $q$ in the range of finite moments. We divide by $\log(1/t)$ instead of $\log t$ in \eqref{deftau0} for convenience. Namely, we have that $\tau_Y^0(q)=\tau_{Y'}^\infty(q)$, where $\tau_{Y'}^\infty$ is the scaling function \eqref{deftauInf} of the process $Y'(t)=Y(1/t)$. From this fact and \cite[Prop.~2.1]{GLST2016JSP}, we immediately get that $\tau^0$ is convex, hence continuous. Moreover, $q \mapsto \frac{\tau^0(q)}{q}$ is nondecreasing on $\mathcal{D}_{\tau^0}=\{q \in {\mathbb R} : \tau^0(q)<\infty\}$. Following \cite{GLST2016JSP,GLST2019Bernoulli}, we say that the process $\{Y(t), \, t \geq 0\}$ with the scaling function $\tau^0$ given by \eqref{deftau0} is intermittent (in the small-time limit) if there exist $p, r \in \mathcal{D}_{\tau^0}$ such that \begin{equation*} \frac{\tau^0(p)}{p} < \frac{\tau^0(r)}{r}. \end{equation*} The following proposition explains some implications of intermittency related to limit theorems. The proof follows directly from \cite[Thm.~1]{GLST2019Bernoulli} and Lamperti's theorem for the limiting scheme \eqref{e:limscheme} \cite[Thm.~1]{ivanovs2018zooming}. \begin{prop}\label{prop:limitLamp} Suppose that $Y$ satisfies \eqref{e:limscheme} for some stochastically right-continuous and non-trivial process $Z$. Then $Z$ is $H$-self-similar and the scaling function \eqref{deftau0} of $Y$ is $\tau_Y^0(q)=-Hq$ for every $q$ such that, as $t\to 0$ \begin{equation}\label{e:limitmom} a(t)^{-q} \mathbb{E}| Y(ts)|^q \to \mathbb{E} |Z(s)|^q, \quad \forall s \geq 0. \end{equation} \end{prop} Proposition \ref{prop:limitLamp} shows that for intermittent processes obeying a limit theorem in the sense of \eqref{e:limscheme}, the convergence of moments as in \eqref{e:limitmom} must fail to hold for some range of $q$. \section{Small-time moment asymptotics of L\'evy processes}\label{sec4} Domains of attraction in the limiting scheme \eqref{e:limscheme} have been characterized in \cite{doney2002stability,maller2008convergence,ivanovs2018zooming}. Suppose $X$ is a L\'evy process with characteristic exponent $\Psi$ such that \begin{equation}\label{e:LPlimit} \left\{ \frac{X(t s)}{a(t)}, \, s \geq 0 \right\} \overset{fdd}{\to} \{\widehat{X}(s), \, s \geq 0\}, \quad \text{ as } t \to 0.
\end{equation} Then $\widehat{X}$ must be a L\'evy process too with some characteristic exponent $\widehat{\Psi}$; the convergence extends to convergence in the Skorokhod space of c\`adl\`ag functions \cite[Cor.~VII.3.6]{jacod1987limit} and is equivalent to $a(t)^{-1} X(t) \overset{d}{\to} \widehat{X}(1)$, as $t \to 0$, which in turn is equivalent to \begin{equation}\label{e:psiconv} t \Psi(a(t)^{-1} \zeta) \to \widehat{\Psi}(\zeta), \quad \text{ as } t \to 0, \ \forall \zeta \in {\mathbb R}. \end{equation} Moreover, if $\widehat{X}$ is non-trivial, then it is an $H$-self-similar L\'evy process for some $H\in [1/2,\infty)$ and $a(t) \in \mathrm{RV}^0(H)$. In \cite[Thm.~2]{ivanovs2018zooming}, a complete characterization of domains of attraction is provided in terms of the characteristic triplet $(\gamma, \sigma, \Pi)$ of the L\'evy process $X$. Some sufficient conditions can also be found in \cite[Prop.~2.3]{bisewski2019zooming}. In the following we denote the characteristic triplet of $X$ by $(\gamma, \sigma, \Pi)$, the characteristic triplet of $\widehat{X}$ by $(\widehat{\gamma}, \widehat{\sigma}, \widehat{\Pi})$ and we assume \eqref{e:LPlimit} holds. We also define indices \begin{align} \beta^0 &:= \inf \left\{ \beta \geq 0 : \int_{|x|\leq 1} |x|^\beta \Pi(dx) < \infty \right\},\label{beta0}\\ \beta^\infty &:= \sup \left\{ \beta \geq 0 : \int_{|x|> 1} |x|^\beta \Pi(dx) < \infty \right\}.\label{betainf} \end{align} The index $\beta^0$ is known as the Blumenthal-Getoor index \cite{blumenthal1961sample} and we must have $\beta^0 \in [0,2]$ since $\Pi$ is a L\'evy measure. On the other hand, $\beta^\infty \in [0, \infty]$ is related to the tail behavior of the distribution of $X(1)$. In particular, $\mathbb{E} |X(1)|^q<\infty$ for $q<\beta^\infty$ and $\mathbb{E} |X(1)|^q=\infty$ for $q>\beta^\infty$. \begin{lemma}\label{lemma:q<beta} If $X$ and $\widehat{X}$ are L\'evy processes such that \eqref{e:LPlimit} holds, then for $0<q< (1/H) \wedge \beta^\infty$, $\{ a(t)^{-q} |X(ts)|^q \}$ is uniformly integrable for every $s\geq 0$ and hence \begin{equation*} a(t)^{-q} \mathbb{E} |X(ts)|^q \to \mathbb{E} |\widehat{X}(s)|^q, \quad \text{ as } t \to 0. \end{equation*} \end{lemma} \begin{proof} It is enough to show that $\{ a(t)^{-q} \mathbb{E} |X(ts)|^q \}$ is bounded for arbitrary $0<q<(1/H) \wedge \beta^\infty$: indeed, boundedness of the moments of some order $q'\in(q,(1/H) \wedge \beta^\infty)$ implies uniform integrability of $\{ a(t)^{-q} |X(ts)|^q \}$, and the convergence of moments then follows from \eqref{e:LPlimit}. Let $X^{(t)} (s)=a(t)^{-1} X(ts)$ and let $(\gamma^{(t)}, \sigma^{(t)}, \Pi^{(t)})$ denote the characteristic triplet of the L\'evy process $\{ X^{(t)} (s), \, s \geq 0 \}$. Decompose $X^{(t)}$ into independent factors \begin{equation}\label{e:proofdecomposition} X^{(t)}(s) = \gamma^{(t)} s + X_1^{(t)}(s) + X_2^{(t)} (s), \end{equation} where $X_1^{(t)}$ is a L\'evy process with triplet $(0,\sigma^{(t)}, \Pi_1^{(t)})$, $\Pi_1^{(t)} (dx) = \1_{\{|x|\leq 1\}} \Pi^{(t)}(dx)$, and $X_2^{(t)}$ is a L\'evy process with triplet $(0,0, \Pi_2^{(t)})$, $\Pi_2^{(t)} (dx) = \1_{\{|x|> 1\}} \Pi^{(t)}(dx)$. By the elementary inequality $|a+b|^r\leq 2^r(|a|^r+|b|^r)$ we have from \eqref{e:proofdecomposition} \begin{equation*} \mathbb{E} |X^{(t)} (s)|^q \leq C \left( |\gamma^{(t)} s|^q + \mathbb{E} |X_1^{(t)} (s)|^q + \mathbb{E} |X_2^{(t)} (s)|^q \right). \end{equation*} We now show each of these terms is bounded.
By \eqref{e:psiconv} and \cite[Thm.~15.14]{kallenberg2002foundations}, $\gamma^{(t)}$, $\sigma^{(t)}$ and $\int_{|x|\leq 1} x^2 \, \Pi^{(t)}(dx)$ have finite limits as $t \to 0$ (see also \cite[Lem.~4.8]{bisewski2019zooming}, \cite[Prop.~4.1]{maller2008convergence}). In particular, $\{\gamma^{(t)}\}$ is bounded. Moreover, $\mathbb{E} |X_1^{(t)} (s)|^q$ is bounded. Indeed, since $q \leq 2$, by Jensen's inequality and \cite[Exa.~25.12]{sato1999levy} we have \begin{equation*} \mathbb{E} |X_1^{(t)} (s)|^q = \mathbb{E} \left( |X_1^{(t)} (s)|^2 \right)^{q/2} \leq \left( \mathbb{E} |X_1^{(t)} (s)|^2 \right)^{q/2} = \left( s \left(\sigma^{(t)}\right)^2 + s \int_{|x|\leq 1} x^2 \Pi^{(t)} (dx) \right)^{q/2}. \end{equation*} For $\mathbb{E} |X_2^{(t)} (s)|^q$ we proceed similarly as in the proof of \cite[Lem.~4.9]{bisewski2019zooming}. By \cite[Lem.~4.8]{bisewski2019zooming} it holds that \begin{equation}\label{e:xpPIconv} \int_{|x|>1} |x|^q \Pi^{(t)}(dx) \to \int_{|x|>1} |x|^q \widehat{\Pi}(dx) < \infty. \end{equation} Although the statement of \cite[Lem.~4.8]{bisewski2019zooming} is for $t=1/n$, the same proof is valid for general $t$. Note that $X_2^{(t)}$ is a compound Poisson process with intensity parameter $\lambda^{(t)}=\Pi^{(t)}((1,\infty))+\Pi^{(t)}((-\infty,-1))$ and jump distribution $\1_{\{|x|> 1\}} \Pi^{(t)}(dx)/\lambda^{(t)}$. If $N^{(t)}(s)$ is a Poisson random variable with parameter $s\lambda^{(t)}$ and $\xi^{(t)}$ is a generic jump, then by conditioning on the number of jumps and using Minkowski's inequality we get for $q \geq 1$ \begin{align*} \mathbb{E} |X_2^{(t)} (s)|^q &\leq \mathbb{E} (N^{(t)}(s))^q \mathbb{E} |\xi^{(t)}|^q \leq \mathbb{E} (N^{(t)}(s))^2 \frac{1}{\lambda^{(t)}} \int_{|x|>1} |x|^q \Pi^{(t)}(dx)\\ &=\left( s + \lambda^{(t)} s^2 \right) \int_{|x|>1} |x|^q \Pi^{(t)}(dx), \end{align*} and this is bounded since $\lambda^{(t)}$ is bounded and \eqref{e:xpPIconv} holds. For $q<1$ we use the inequality $(a+b)^q\leq a^q+b^q$, $a,b\geq 0$ and get \begin{equation*} \mathbb{E} |X_2^{(t)} (s)|^q \leq \mathbb{E} N^{(t)}(s) \mathbb{E} |\xi^{(t)}|^q =s \int_{|x|>1} |x|^q \Pi^{(t)}(dx), \end{equation*} which is bounded by \eqref{e:xpPIconv}. \end{proof} Following \cite[Cor.~1]{ivanovs2018zooming}, we have that $\beta^0=1/H$ unless $\sigma>0$ or $X$ has paths of bounded variation with $\gamma'\neq 0$. Lemma \ref{lemma:q<beta} covers the moment asymptotics for $q<\beta^0 \wedge \beta^\infty$. For $\beta^0<q<\beta^\infty$ we have the following lemma, which is based on the results of \cite{figueroa2008small} (see also \cite{woerner2003variational}). \begin{lemma}\label{lemma:q>beta} Let $X$ be a L\'evy process with characteristic triplet $(\gamma, \sigma, \Pi)$ such that $\beta^0<\beta^\infty$ and let $I=(\beta^0, \beta^\infty)$, where $\beta^0$ and $\beta^\infty$ are given by \eqref{beta0} and \eqref{betainf}. \begin{enumerate}[(i)] \item If $\sigma=0$ and $\Pi \equiv 0$, then as $t \to 0$ \begin{equation*} t^{-q} \mathbb{E} |X(ts)|^q \to s^q |\gamma|^q, \quad \text{ for } q\in I. \end{equation*} \item If $\sigma\neq 0$ and $\Pi \equiv 0$, then as $t \to 0$ \begin{equation*} t^{-\frac{q}{2}} \mathbb{E} |X(ts)|^q \to s^{\frac{q}{2}} \mathbb{E} | \mathcal{N}(0,\sigma^2)|^q, \quad \text{ for } q\in I, \end{equation*} where $\mathbb{E} | \mathcal{N}(0,\sigma^2)|^q$ is the $q$-th absolute moment of the normal distribution with mean $0$ and variance $\sigma^2$.
\item If $\sigma = 0$ and $\Pi \not\equiv 0$, then as $t \to 0$ \begin{equation}\label{s0Pinot0case} t^{-1} \mathbb{E} |X(ts)|^q \to s \int_{\mathbb R} |x|^q \Pi(dx), \quad \text{ for } q\in I \cap (1,\infty). \end{equation} In the case $\beta^0<1$, if $\gamma'=0$ in \eqref{LKformulabv}, then \eqref{s0Pinot0case} also holds for $q\leq 1$, while if $\gamma'\neq 0$, then as $t \to 0$ \begin{equation}\label{s0Pinot0case2} t^{-q} \mathbb{E} |X(ts)|^q \to s^q |\gamma'|^q, \quad \text{ for } q\in I \cap (0,1). \end{equation} \item If $\sigma \neq 0$ and $\Pi \not\equiv 0$, then as $t \to 0$ \begin{equation*} \begin{cases} t^{-1} \mathbb{E} |X(ts)|^q \to s \int_{\mathbb R} |x|^q \Pi(dx),& \quad \text{ for } q\in I\cap (2,\infty),\\ t^{-1} \mathbb{E} |X(ts)|^q \to s \sigma^2 + s \int_{\mathbb R} |x|^2 \Pi(dx),& \quad \text{ for } q=2 \text{ and } q\in I,\\ t^{-\frac{q}{2}} \mathbb{E} |X(ts)|^q \to s^{\frac{q}{2}} \mathbb{E} | \mathcal{N}(0,\sigma^2)|^q,& \quad \text{ for } q \in I\cap (0,2). \end{cases} \end{equation*} \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate}[(i)] \item is obvious. \item If we decompose $X$ as $X(t)=\gamma t + X_1(t)$ where $X_1$ is Brownian motion, then by self-similarity \begin{equation*} t^{-\frac{q}{2}} \mathbb{E} |X(ts)|^q = \mathbb{E} |\gamma s t^{\frac{1}{2}} + s^{\frac{1}{2}} X_1(1)|^q \to s^{\frac{q}{2}} \mathbb{E} | \mathcal{N}(0,\sigma^2)|^q. \end{equation*} \item That \eqref{s0Pinot0case} holds for $q>1$ and for $q\leq 1$ if $\gamma'=0$ follows from \cite[Thm.~1.1]{figueroa2008small}. If $\gamma'\neq 0$, we decompose $X$ as $X(t)=\gamma' t + X_2(t)$, where $X_2$ is a L\'evy process with triplet $(\gamma-\gamma',0,\Pi)$. Note that the drift term of $X_2$ is $0$ and by the previous case we have that $t^{-1} \mathbb{E} |X_2(ts)|^q \to s \int_{\mathbb R} |x|^q \Pi(dx)$ and hence $t^{-q} \mathbb{E} |X_2(ts)|^q \to 0$ for $q<1$. It follows that $|t^{-1} X_2(ts)|^q \to^P 0$ and $\{|t^{-1} X_2(ts)|^q\}$ is uniformly integrable, but then so is $|\gamma' s + t^{-1} X_2(ts)|^q$, which gives \eqref{s0Pinot0case2}. \item The cases $q>2$ and $q=2$ follow from \cite[Thm.~1.1]{figueroa2008small}. For $q<2$ we decompose $X$ into independent components as $X(t)=X_1(t)+X_2(t)$ where $X_1$ is Brownian motion and $X_2$ is a L\'evy process with triplet $(\gamma,0,\Pi)$. We have that $t^{-\frac{1}{2}} X_1(ts) \overset{d}{=} s^{\frac{1}{2}} X_1(1)$ and $t^{-\frac{q}{2}} \mathbb{E} |X_2(ts)|^q \to 0$, by self-similarity and case (iii), respectively. Hence, $|t^{-\frac{1}{2}} X_2(ts)|^q \to^P 0$ and $\{|t^{-\frac{1}{2}} X_2(ts)|^q\}$ is uniformly integrable, and so is $|t^{-\frac{1}{2}}X_1(ts) + t^{-\frac{1}{2}} X_2(ts)|^q$, which completes the proof. \end{enumerate} \end{proof} By combining the results of Lemmas \ref{lemma:q<beta} and \ref{lemma:q>beta} we get the scaling function of a L\'evy process satisfying \eqref{e:LPlimit}. \begin{theorem}\label{thm:tau0} Suppose that $X$ and $\widehat{X}$ are L\'evy processes such that \eqref{e:LPlimit} holds, let $(\gamma,\sigma,\Pi)$ denote the characteristic triplet of $X$ and $\tau^0$ its scaling function \eqref{deftau0}. \begin{enumerate}[(i)] \item Suppose that $\widehat{X}$ is Brownian motion. If $\Pi\not \equiv 0$, then for $q \in (0,\beta^\infty)$ \begin{equation*} \tau^0(q)=\begin{cases} -\frac{1}{2} q, & \ \text{ if } 0<q\leq 2,\\ -1, & \ \text{ if } q>2. \end{cases} \end{equation*} If $\Pi \equiv 0$, then $\tau^0(q)=-q/2$ for $q \in (0,\beta^\infty)$.
\item Suppose that $\widehat{X}$ is a linear drift. If $\Pi\not \equiv 0$, then for $q \in (0,\beta^\infty)$ \begin{equation*} \tau^0(q)=\begin{cases} -q, & \ \text{ if } 0<q\leq 1,\\ -1, & \ \text{ if } q>1. \end{cases} \end{equation*} If $\Pi \equiv 0$, then $\tau^0(q)=-q$ for $q \in (0,\beta^\infty)$. \item If $\widehat{X}$ is a strictly stable L\'evy process, then for $q \in (0,\beta^\infty)$ \begin{equation*} \tau^0(q)=\begin{cases} -\frac{1}{\beta^0} q, & \ \text{ if } 0<q\leq \beta^0,\\ -1, & \ \text{ if } q > \beta^0. \end{cases} \end{equation*} \end{enumerate} \end{theorem} \begin{proof} From Proposition \ref{prop:limitLamp} (i) we have that $\tau^0(q)=-Hq$ for $0<q< (1/H) \wedge \beta^\infty$. \begin{enumerate}[(i)] \item By Lemma \ref{lemma:q<beta}, $\tau^0(q)=-q/2$ for $0<q<2$. If $\Pi\not \equiv 0$, then by Lemma \ref{lemma:q>beta} (iii) and (iv) we have $\tau^0(q)=-1$ for $q\geq 2$. If $\Pi \equiv 0$, then we must have $\sigma\neq 0$ by \cite[Thm.~2]{ivanovs2018zooming}. From Lemma \ref{lemma:q>beta} (ii) then $\tau^0(q)=-q/2$ for $q\geq 2$. \item By Lemma \ref{lemma:q<beta}, $\tau^0(q)=-q$ for $0<q<1$. By \cite[Thm.~2]{ivanovs2018zooming} we must have $\sigma=0$. For $q>1$ we use Lemma \ref{lemma:q>beta} (iii) if $\Pi\not \equiv 0$ and Lemma \ref{lemma:q>beta} (i) if $\Pi\equiv 0$. That $\tau^0(1)=-1$ follows by continuity of $\tau^0$, since $\tau^0$ is convex. \item In this case we must have $\sigma=0$, $\Pi\not \equiv 0$ and, in the bounded variation case, $\gamma'=0$ (see \cite[Thm.~2]{ivanovs2018zooming}), and also $\beta^0=1/H$ by \cite[Cor.~1]{ivanovs2018zooming}. From Lemma \ref{lemma:q<beta} it follows that $\tau^0(q)=-q/\beta^0$ for $0<q<\beta^0$ and by Lemma \ref{lemma:q>beta} (iii) $\tau^0(q)=-1$ for $q>\beta^0$. By continuity of $\tau^0$ then $\tau^0(\beta^0)=-1$. \end{enumerate} \end{proof} The scaling functions obtained in Theorem \ref{thm:tau0} are plotted in Figure \ref{fig:tau0}. If under the assumptions of Theorem \ref{thm:tau0} we put \begin{equation*} \alpha = \begin{cases} 2, & \ \text{ if } \widehat{X} \text{ is Brownian motion},\\ 1, & \ \text{ if } \widehat{X} \text{ is linear drift},\\ \beta^0, & \ \text{ if } \widehat{X} \text{ is strictly stable process},\\ \end{cases} \end{equation*} then, when $\Pi\not \equiv 0$, $\tau^0$ can be written in a unified way for $q \in (0,\beta^\infty)$ as \begin{equation*} \tau^0(q)=\begin{cases} -\frac{1}{\alpha} q, & \ \text{ if } 0<q\leq \alpha,\\ -1, & \ \text{ if } q>\alpha. \end{cases} \end{equation*} Provided that the range of finite moments is large enough, the L\'evy process exhibits small-time intermittency as soon as $\Pi\not \equiv 0$. Namely, we must have $\beta^\infty>\alpha$ in order to capture the non-linearity of $\tau^0$, so that \begin{equation*} \frac{\tau^0(q)}{q} = \begin{cases} -\frac{1}{\alpha}, & \ \text{ if } 0<q\leq \alpha,\\ -\frac{1}{q}, & \ \text{ if } \alpha < q < \beta^\infty, \end{cases} \end{equation*} is strictly increasing on $(\alpha,\beta^\infty)$. If $\beta^\infty<\alpha$, which in particular implies that the variance of $X(1)$ is infinite, then $\tau^0$ is linear on the range of finite moments and there is no intermittency. We note that Theorem \ref{thm:tau0} (and Lemmas \ref{lemma:q<beta} and \ref{lemma:q>beta}) establishes the moment asymptotics for \textit{every} finite absolute moment of any L\'evy process satisfying \eqref{e:LPlimit}. 
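To illustrate the above dichotomy on a simple example, let $X(t)=B(t)+N(t)$ be the sum of a standard Brownian motion and an independent standard Poisson process. Then $\sigma=1>0$, $\Pi=\delta_1\not\equiv 0$ and $\beta^\infty=\infty$, condition \eqref{e:LPlimit} holds with $\widehat{X}$ a Brownian motion, and case (i) of Theorem \ref{thm:tau0} gives \begin{equation*} \tau^0(q)=\begin{cases} -\frac{1}{2} q, & \ \text{ if } 0<q\leq 2,\\ -1, & \ \text{ if } q>2, \end{cases} \end{equation*} so that $\tau^0(q)/q$ is strictly increasing on $(2,\infty)$ and such a process exhibits small-time intermittency.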
The assumption \eqref{e:LPlimit} is very weak and is satisfied by almost every L\'evy process of practical interest. The most notable exceptions are driftless processes with no Gaussian component such that $\overline{\Pi}(x)=\Pi((x,\infty))+\Pi((-\infty,-x)) \in \mathrm{RV}^0(0)$. Particular examples of such processes are the driftless compound Poisson process, the driftless gamma process and the driftless variance gamma process. For more details and examples see \cite{bisewski2019zooming}. \begin{figure} \caption{The scaling function \eqref{deftau0} in cases (i), (ii) and (iii) of Theorem \ref{thm:tau0}.} \label{fig:tau0} \end{figure} \section{A note on the multifractal formalism}\label{sec5} The multifractal formalism first appeared in the field of turbulence theory (\cite{frisch1985fully,frisch1995turbulence}) and relates local and global scaling properties of an object. For stochastic processes, one may describe local regularity by the roughness of the sample paths, measured by pointwise H\"older exponents. Let $t_0 \in [0,\infty)$ and $\gamma>0$. We say that a function $f: [0,\infty) \to \mathbb{R}$ is $C^{\gamma}(t_0)$ if there exist a constant $C>0$ and a polynomial $P_{t_0}$ of degree at most $\lfloor \gamma \rfloor$ such that for all $t$ in some neighborhood of $t_0$ it holds that $|f(t)-P_{t_0}(t)| \leq C |t - t_0|^{\gamma}$. If the Taylor polynomial of this degree exists, then $P_{t_0}$ is that Taylor polynomial. Thus, if $P_{t_0}$ is constant, then $P_{t_0} \equiv f(t_0)$. This happens, in particular, when $\gamma<1$ (see \cite{riedi2003multifractal}). It is clear that if $f \in C^{\gamma}(t_0)$, then $f \in C^{\gamma'}(t_0)$ for each $0<\gamma'<\gamma$. The pointwise H\"older exponent of the function $f$ at $t_0$ is $H(t_0)= \sup \left\{ \gamma : f \in C^{\gamma}(t_0) \right\}$. In multifractal analysis one studies the sets $S_h=\{ t : H(t)=h \}$ consisting of the points in the domain where $f$ has H\"older exponent $h$. In the interesting examples, the sets $S_h$ are fractal, and one typically uses the Hausdorff dimension $\dim_H S_h$ of these sets to measure their size. The mapping $h \mapsto d(h)=\dim_H S_h$ is called the spectrum of singularities (also the multifractal or Hausdorff spectrum), with the convention that $\dim_H \emptyset=-\infty$. The set of $h$ such that $d(h)\neq - \infty$ is referred to as the support of the spectrum. A function $f$ is said to be multifractal if the support of its spectrum is non-trivial, in the sense that it is not a one-point set. When considered for a stochastic process, the H\"older exponents are random variables and the $S_h$ are random sets. However, in many cases the spectrum is deterministic, that is, almost every sample path has the same spectrum. Moreover, the spectrum is usually homogeneous, in the sense that it is the same when considered over any nonempty subset $A\subset [0,\infty)$. It is well-known that the sample paths of Brownian motion are monofractal, i.e.~$d(1/2)=1$ and $d(h)=-\infty$ for $h\neq 1/2$. However, other L\'evy processes typically have multifractal paths \cite{jaffard1999levy,balanca2013}. Namely, if there is no Gaussian component and $\beta^0>0$, then the spectrum of singularities is a.s. \begin{equation}\label{specLP} d(h) = \begin{cases} \beta^0 h, & \text{if } \ h \in [0, 1/\beta^0],\\ -\infty, & \text{if } \ h > 1/\beta^0. 
\end{cases} \end{equation} The multifractal formalism states that the singularity spectrum of a function or a process can be obtained from some function $\zeta$ by computing \begin{equation}\label{formalism} d(h)= \inf_q \left( hq - \zeta(q) +1\right). \end{equation} Here $\zeta$ quantifies some global property of the process and its definition depends on the context. In \cite{frisch1985fully}, $\zeta$ is defined as the asymptotic scaling of moments of increments, which would correspond to taking $\zeta(q) = \lim_{t\to 0} \frac{\log \mathbb{E} |X(t)|^q}{\log t}$. In the theory of multifractal processes, such a $\zeta$ is referred to as the scaling function and describes the asymptotic scale invariance of the moments (see e.g.~\cite{muzy2002multifractal,muzy2013random,allez2013lognormal,rhodes2014levy}). In scenario (iii) of Theorem \ref{thm:tau0} we would have that \begin{equation*} \zeta(q)=\begin{cases} \frac{1}{\beta^0} q, & \ \text{ if } 0<q\leq \beta^0,\\ 1, & \ \text{ if } \beta^0 < q < \beta^\infty. \end{cases} \end{equation*} It is then easy to check that the multifractal formalism \eqref{formalism} holds for L\'evy processes, at least when there is no Brownian component. It remains an open problem whether it is possible to derive \eqref{specLP} directly from Theorem \ref{thm:tau0}. \end{document}
\begin{document} \title[Counterexamples to Strichartz Estimates] {Counterexamples to Strichartz estimates for the magnetic Schr\"odinger equation} \author{Luca Fanelli} \address{Luca Fanelli: Universidad del Pais Vasco, Departamento de Matem\'aticas, Apartado 644, 48080, Bilbao, Spain} \email{[email protected]} \author{Andoni Garcia} \address{Andoni Garcia: Universidad del Pais Vasco, Departamento de Matem\'aticas, Apartado 644, 48080, Bilbao, Spain} \email{[email protected]} \thanks{The second author is supported by the grant BFI06.42 of the Basque Government} \begin{abstract} In space dimension $n\geq3$, we consider the magnetic Schr\"odinger Hamiltonian $H=-(\nabla-iA(x))^2$ and the corresponding Schr\"odinger equation \begin{equation*} i\partial_tu+Hu=0. \end{equation*} We show some explicit examples of potentials $A$, with less than Coulomb decay, for which Strichartz estimates for this equation fail, in the whole range of Schr\"odinger admissibility. \end{abstract} \date{\today} \subjclass[2000]{35L05, 58J45.} \keywords{ Strichartz estimates, dispersive equations, Schr\"odinger equation, magnetic potential} \maketitle \section{Introduction}\label{sec:introd} Recently, a great deal of attention has been devoted to the study of the precise decay rates of solutions of dispersive equations. A family of a priori estimates, including time-decay, Strichartz, local smoothing and Morawetz estimates, is in a sense the core of the linear and nonlinear theory, with immediate applications to local and global well-posedness, scattering and low-regularity Cauchy problems. This kind of problem concerns some of the fundamental dispersive equations in Quantum Mechanics, including, among others, the Schr\"odinger, wave, Klein-Gordon and Dirac equations. Strichartz estimates first appeared in \cite{S}, by R. Strichartz; later, the basic framework from the point of view of PDEs was given in the well-known paper \cite{GV} by J. Ginibre and G. Velo, and then completed in \cite{KT} by M. Keel and T. Tao, who proved the endpoint estimates. The Strichartz estimates for the Schr\"odinger equation are the following: \begin{equation}\label{eq:strifree} \left\|e^{it\Delta}f\right\|_{L^p_tL^q_x}\leq C\|f\|_{L^2_x}, \end{equation} for any couple $(p,q)$ satisfying the Schr\"odinger admissibility condition \begin{equation}\label{eq:admis} \frac2p=\frac n2-\frac nq, \qquad p\geq2, \quad p\neq2\ \text{if }n=2, \end{equation} where $n$ is the space dimension. Their importance in applications naturally leads one to consider perturbations of the Schr\"odinger operator $H_0=-\Delta$ by linear lower-order terms; in this paper, we deal with a purely magnetic Schr\"odinger Hamiltonian $H$ of the form \begin{equation}\label{eq:hamiltonianintr} H=-(\nabla-iA(x))^2, \end{equation} where $A:\mathbb{R}^n\to\mathbb{R}^n$ is a magnetic potential, describing the interaction of a free particle with an external magnetic field. The magnetic field $B$, which is the physically measurable quantity, is given by \begin{equation}\label{eq:B} B\in\mathcal M_{n\times n}, \qquad B=DA-(DA)^t, \end{equation} i.e. it is the anti-symmetric gradient of the vector field $A$ (or, in geometrical terms, the differential $dA$ of the 1-form which is canonically associated to $A$). 
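As a simple illustration (a model case, not one of the potentials considered below), if $A(x)=Mx$ for a constant anti-symmetric matrix $M$, then $DA=M$ and \begin{equation*} B=DA-(DA)^t=M-M^t=2M, \end{equation*} i.e. the associated magnetic field is constant; for example, in dimension $n=3$ the choice $A(x)=(x_2,-x_1,0)$ gives $Bv=(2v_2,-2v_1,0)$, which is consistent with the identification \eqref{eq:B3} below, since $\text{curl}A=(0,0,-2)$.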
In dimension $n=3$ the action of $B$ on vectors is identified with the vector field $\text{curl}A$, namely \begin{equation}\label{eq:B3} Bv=\text{curl}A\times v, \qquad n=3, \end{equation} where the cross denotes the vector product in $\mathbb{R}^3$. The investigation of Strichartz estimates for solutions of the magnetic Schr\"odinger equation \begin{equation}\label{eq:schromagnintr} \begin{cases} i\partial_tu(t,x)+Hu(t,x)=0 \\ u(0,x)=f(x) \end{cases} \end{equation} is currently an active area of research. In recent years, this problem has been studied in the papers \cite{DF,EGS1,EGS,GST}, by functional calculus techniques involving the Spectral Theorem and resolvent estimates. Later, in \cite{DFVV}, Strichartz estimates, including the endpoint, were obtained as a consequence of the local smoothing estimates proved in \cite{FV} via integration by parts. The results in the above-mentioned papers suggest that the decay $|A|\sim|x|^{-1}$, which is that of the Coulomb potential, should be a threshold for the validity of Strichartz estimates. Indeed, the behavior of $A$, in all those results, is of the type $|A|\sim|x|^{-1-\epsilon}$, as $|x|\to\infty$. Since there is no heuristic, based on scaling arguments, showing that the Coulomb decay is critical for Strichartz estimates, this remains a conjecture as long as no counterexamples to Strichartz estimates are available. For electric Schr\"odinger Hamiltonians of the type $-\Delta+V(x)$, some explicit counterexamples are given in the paper \cite{GVV}. In that setting, the critical decay for the electric potential $V$ is the inverse-square one, $|V|\sim|x|^{-2}$; the counterexamples in \cite{GVV} are based on potentials of the form \begin{equation*} V(x)=(1+|x|^2)^{-\frac\alpha2}\omega\left(\frac{x}{|x|}\right), \qquad 0<\alpha<2, \end{equation*} where $\omega$ is a non-negative scalar function, homogeneous of degree 0, which has a non-degenerate minimum point $P\in S^{n-1}$. Moreover, it is crucial there to assume that $\omega(P)=0$. The main idea is to approximate $H$, by a second-order Taylor expansion, with a harmonic oscillator. The condition $\alpha<2$ is then responsible for the lack of global (in time) dispersion. In this paper, we produce some explicit examples of magnetic potentials $A$, with less than Coulomb decay, for which Strichartz estimates for equation \eqref{eq:schromagnintr} fail. Before stating the main results, we introduce some notation. Let us consider the $2\times2$ anti-symmetric matrix \begin{equation*} \sigma := \left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array}\right). \end{equation*} For any even $n=2k\in\mathbb{N}$, we denote by $\Omega_n$ the $n\times n$ anti-symmetric matrix formed by $k$ diagonal blocks of $\sigma$, in the following way: \begin{equation}\label{eq:Omega} \Omega_n := \left( \begin{array}{cccc} \sigma & 0 & \cdots & 0 \\ 0 & \sigma & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & \cdots & 0 & \sigma \end{array} \right). \end{equation} Our first result is for odd space dimensions. \begin{theorem}\label{thm:odd} Let $n\geq3$ be an odd number and let us consider the following anti-symmetric $n\times n$-matrix \begin{equation}\label{eq:Modd} M := \left( \begin{array}{cc} \Omega_{n-1} & 0 \\ 0 & 0 \end{array} \right), \end{equation} where $\Omega_{n-1}$ is the $(n-1)\times(n-1)$-matrix defined in \eqref{eq:Omega}. Let $A$ be the following vector field: \begin{equation}\label{eq:A} A(x)=|x|^{-\alpha}Mx, \qquad 1<\alpha<2. 
\end{equation} Then the solutions of the magnetic Schr\"odinger equation \eqref{eq:schromagnintr} cannot satisfy Strichartz estimates, for any Schr\"odinger admissible couple $(p,q)\neq(\infty,2)$. \end{theorem} The analogous result in even dimensions is the following. \begin{theorem}\label{thm:even} Let $n\geq4$ be an even number; let $\mathbf{0}$ be the null $2\times2$-block and let us consider the following anti-symmetric $n\times n$-matrix \begin{equation}\label{eq:Meven} M := \left( \begin{array}{cc} \Omega_{n-2} & 0 \\ 0 & \mathbf{0} \end{array} \right), \end{equation} where $\Omega_{n-2}$ is the $(n-2)\times(n-2)$-matrix defined in \eqref{eq:Omega}. Let $A$ be the following vector field: \begin{equation}\label{eq:AA} A(x)=|x|^{-\alpha}Mx, \qquad 1<\alpha<2. \end{equation} Then the solutions of the magnetic Schr\"odinger equation \eqref{eq:schromagnintr} cannot satisfy Strichartz estimates, for any Schr\"odinger admissible couple $(p,q)\neq(\infty,2)$. \end{theorem} \begin{remark}\label{rem:l2} The $L^\infty L^2$ estimate, which is not disproved by Theorems \ref{thm:odd} and \ref{thm:even}, clearly holds, by conservation of the $L^2$-norm for solutions of \eqref{eq:schromagnintr}. \end{remark} \begin{remark}\label{rem:generalize} As will be clear from the proofs of the previous theorems, for our counterexamples it is sufficient to consider potentials which are not singular at the origin, of the form $A=\langle x\rangle^{-\alpha}Mx$, where $\langle x\rangle=(1+|x|^2)^{1/2}$. Indeed, the reason why Strichartz estimates fail is that these potentials, for $1<\alpha<2$, do not decay fast enough at infinity. It is also possible to generalize the family of potentials producing counterexamples, by considering, instead of the above matrix $\sigma$, the following one \begin{equation*} \widetilde\sigma := \left( \begin{array}{cc} 0 & \omega\left(\frac x{|x|}\right) \\ -\omega\left(\frac x{|x|}\right) & 0 \end{array} \right), \end{equation*} where $\omega$ is a scalar function, homogeneous of degree 0, and constructing $\Omega_n$, $M$ and $A$ as above. \end{remark} \begin{remark}\label{rem:weak} Following the notation in \cite{FV}, we denote by $B_\tau$ the tangential component of $B=DA-(DA)^t$, namely $B_\tau(x)=\frac{x}{|x|}B(x)$. It is easy to see that, in the cases \eqref{eq:A}, \eqref{eq:AA}, we have $B_\tau=0$ (see also Examples 1.6 and 1.7 in \cite{FV}). Hence, by Theorems 1.9 and 1.10 in \cite{FV}, weak-dispersive estimates, including local smoothing, hold for equation \eqref{eq:schromagnintr}, independently of the decay rate $\alpha$. In fact, these are relevant examples for which weak dispersion holds, but Strichartz estimates do not. \end{remark} \begin{remark}\label{rem:alfa} The potentials in \eqref{eq:A}, \eqref{eq:AA} have the decay $|A|\sim|x|^{1-\alpha}$ and the statements are given in the range $\alpha\in(1,2)$. In the case $\alpha<1$, since the potential $A$ does not decay at infinity, the spectrum of $H$ contains eigenvalues; hence, due to the presence of $L^2$-eigenfunctions for $H$, global dispersive estimates obviously fail. The case $\alpha=2$ remains open: no results are known, either positive or negative. \end{remark} \begin{remark}\label{rem:dimension} The counterexamples in the previous theorems cover the dimensions $n\geq3$. The dimension $n=1$ is not meaningful for magnetic potentials, because by a gauge transformation it is always possible to pass from \eqref{eq:hamiltonianintr} to the free Hamiltonian $H_0=-\partial^2/\partial x^2$. 
On the other hand, the problem of Strichartz estimates for \eqref{eq:schromagnintr}, in dimension $n=2$, remains completely open. \end{remark} The idea of the proof of Theorems \ref{thm:odd}, \ref{thm:even} is analogous to the one in \cite{GVV}. By homogeneity and the algebraic form of $A$ in \eqref{eq:A}, \eqref{eq:AA}, it is possible to approximate the Hamiltonian $H$ in \eqref{eq:hamiltonianintr} with the dimensionless operator \begin{equation}\label{eq:LL} T:=-(\nabla-i\Omega y)^2+|y|^2, \end{equation} where $\Omega=\Omega_{n-1}$ if $n$ is odd, and $\Omega=\Omega_{n-2}$ if $n$ is even. We recall that the operator \begin{equation*} T_0:=-(\nabla-i\Omega y)^2 \end{equation*} has compact resolvent and its spectrum is purely discrete. Actually, it is the analogue of the harmonic oscillator in the setting of magnetic fields, and its eigenvalues define the energy levels which are usually referred to as the {\it Landau levels} (see e.g. \cite{CFKS}). The operator $T$ in \eqref{eq:LL} has the same spectral properties as $T_0$, since the quadratic form $|y|^2$ is positive definite (see e.g. \cite{M}). The rest of the paper is organized as follows. In Sections \ref{sec:odd}, \ref{sec:even}, starting from an eigenfunction of $T$ we construct and estimate some approximate solutions to equation \eqref{eq:schromagnintr} (see Lemmas \ref{lem:odd}, \ref{lem:even}). The proofs of Theorems \ref{thm:odd} and \ref{thm:even} are carried out in Sections \ref{sec:proofodd} and \ref{sec:proofeven}, respectively. \section{Preliminaries for the odd-dimensional case}\label{sec:odd} Throughout this Section, $n$ will be an odd number, $n\geq3$. On the even-dimensional space $\mathbb{R}^{n-1}$, let us consider again the operator $T$ defined in \eqref{eq:LL}. Observe that the explicit expansion of $T$ is the following: \begin{equation}\label{eq:epo1} T:=-\Delta + 2i (y_{2}, -y_{1}, \dots ,y_{n-1}, -y_{n-2}) \cdot \nabla + 2|y|^2, \end{equation} for $y\in\mathbb{R}^{n-1}$. Since the quadratic form $|y|^2$ is positive definite and $\Omega$ is anti-symmetric, $T$ has compact resolvent and in particular its spectrum reduces to a discrete set of eigenvalues (see e.g. \cite{M}). Moreover, any eigenfunction $v(y)$ solving \begin{equation}\label{eq:epo2} Tv(y) = \lambda v(y), \end{equation} for some eigenvalue $\lambda\in\mathbb{R}$, is such that \begin{equation}\label{eq:lp} v(y) \in \bigcap _{p=1}^{\infty} L^{p}(\mathbb{R}^{n-1}), \qquad \nabla v \in \bigcap _{p=1}^{\infty} L^{p}(\mathbb{R}^{n-1}). \end{equation} Let us fix an eigenvalue $\lambda\in\mathbb{R}$ and a corresponding eigenfunction $v$; by scaling $v$ we define a function $\omega$ on $\mathbb{R}^{n-1}\times(0,+\infty)$ in the following way: \begin{equation}\label{eq:omega1} \omega(x) := v\left(\frac{y}{\sqrt{z^{\alpha}}}\right), \qquad x:=(y,z) \in \mathbb{R}^{n-1} \times (0, \infty), \end{equation} where $\alpha$ is the exponent in \eqref{eq:A}. By a direct computation, we see that $\omega$ satisfies \begin{equation}\label{eq:epo4} -\left(\nabla-\frac{i}{z^{\alpha}}M(y,0)^t\right)^2\omega + \frac{1}{z^{2\alpha}}|y|^2\omega = \frac{\lambda}{z^{\alpha}} \omega, \end{equation} where $M$ is the matrix defined in \eqref{eq:Modd}. Let us now introduce the time-dependent function \begin{equation}\label{eq:epo5} W(t,y,z) = e^{i( \lambda t /z^{\alpha})}\omega(y,z), \qquad (t,y,z) \in \mathbb{R} \times \mathbb{R}^{n-1} \times (0, \infty). 
\end{equation} By direct computations, it turns out that $W$ solves \begin{equation}\label{eq:epo10} i\partial_{t}W - \left(\nabla - \frac{i}{z^{\alpha}}M(y,0)^t\right)^2 W + \frac{1}{z^{2\alpha}}|y|^2W = F, \end{equation} where \begin{align}\label{eq:F} F(t,y,z) = & \frac{e^{i( \lambda t /z^{\alpha})}}{z^{2}} \left\{\left[ \frac{\alpha^{2}\lambda^{2}t^{2}}{z^{2\alpha}}-\frac{\alpha(\alpha + 1)i \lambda t}{z^{\alpha}}\right] v\left(\frac{y}{\sqrt{z^{\alpha}}} \right) \right. \\ & \left. - G\left(\frac{y}{\sqrt{z^{\alpha}}}\right)\cdot\left[\frac{\alpha^{2}i \lambda t}{z^{\alpha}} + \frac{\alpha(\alpha + 2)}{4}\right] - \frac{\alpha^{2}}{4} H\left(\frac{y}{\sqrt{z^{\alpha}}}\right)\right\}, \nonumber \end{align} with \begin{equation}\label{eq:epo7} G(y) = y \cdot \nabla_{y}v(y), \qquad H(y) = y D^{2}_{y}v(y)\cdot y. \end{equation} We now introduce a real-valued cutoff function $\psi \in C^{\infty}_0(\mathbb{R})$ with the following properties: \begin{equation}\label{eq:psi} \psi(z) = 0\quad \text{for }|z| > 1, \qquad \psi(z) = 1\quad \text{for }|z| < 1/2. \end{equation} Let us fix a parameter $\gamma \in (1/2,1)$, and for any $R>0$ let us denote by \begin{equation}\label{eq:psiR} \psi_{R}(z) := \psi\left(\frac{z - R}{R^{\gamma}}\right). \end{equation} With these notations, we truncate $W$ as follows: \begin{equation}\label{eq:WR} W_{R}(t,y,z) := W(t,y,z)\psi_R(z)\psi\left(\frac{|y|^{2}}{z^{2}}\right). \end{equation} Again, a direct computation shows that $W_R$ solves the Cauchy problem \begin{equation}\label{eq:epo17} \begin{cases} i\partial_tW_R-(\nabla-\frac{i}{z^\alpha}M(y,0)^t)^2W_{R} +\frac{|y|^2}{z^{2\alpha}}W_R = F_R \\ W_{R}(0,y,z) = f_{R}(y,z). \end{cases} \end{equation} Here the initial datum is given by \begin{equation}\label{eq:epo18} f_{R}(y,z) = \psi_{R}(z)\psi\left(\frac{|y|^{2}}{z^{2}}\right)\omega(y,z), \end{equation} and \begin{equation}\label{eq:epo19} F_{R}(t,y,z) = \psi_{R}\psi F + G_{R}, \end{equation} where $F$ is given by \eqref{eq:F} and $G_R$ has the form \begin{align}\label{eq:G} G_{R}(t,y,z) = &e^{i( \lambda t /z^{\alpha})}\left\{-\omega\left[\frac{2(n - 1)}{z^{2}}\psi_{R}\psi' + \frac{4|y|^{2}}{z^{4}}\psi_{R}\psi'' + \psi''_{R}\psi - \frac{4|y|^{2}}{z^{3}}\psi'_{R}\psi' \right.\right. \\ &\left.+ \frac{4|y|^{4}}{z^{6}}\psi_{R}\psi'' + \frac{6|y|^{2}}{z^{4}}\psi_{R}\psi' - \frac{2\alpha i\lambda t}{z^{\alpha+1}}\psi'_{R}\psi + \frac{4\alpha i\lambda t|y|^{2}}{z^{\alpha+4}}\psi_{R}\psi'\right] \nonumber \\ & \left.- G\left(\frac{y}{\sqrt{z^\alpha}}\right)\cdot\left[ \frac{4}{z^{2}}\psi_{R}\psi' - \frac{\alpha}{z}\psi^{'}_{R}\psi + \frac{2\alpha |y|^{2}}{z^{4}}\psi_{R}\psi'\right]\right\}, \nonumber \end{align} and $G$ is defined by \eqref{eq:epo7}. The main result of this Section is the following. \begin{lemma}\label{lem:odd} Let $n\geq 3$ be an odd number, $p,q\in(1,\infty)$, and $\gamma\in(1/2,1)$; then we have \begin{equation}\label{eq:epo21} \|f_{R}\|_{L^{2}_{x}} \leq C R^{(\alpha(n-1)+2\gamma)/4}, \end{equation} \begin{equation}\label{eq:epo22} \|W_{R}\|_{L^{p}_{T}L^{q}_{x}} \geq C T^{1/p} R^{(\alpha(n-1)+2\gamma)/2q}, \end{equation} \begin{equation}\label{eq:epo23} \|F_{R}\|_{L^{p}_{T}L^{q}_{x}} \leq C T^{1/p} R^{(\alpha(n-1)+2\gamma)/2q}\max\{R^{-2\gamma}, T^{2}R^{-(2\alpha+2)}\}, \end{equation} for all $R > 2$, $T > 0$, and some constant $C = C(q,\gamma) > 0$. 
In particular, if $(p,q)\neq(\infty,2)$ is a Schr\"odinger admissible couple and $\beta > 0$, then the following estimates hold \begin{equation}\label{eq:epo25} \frac{\|W_{R}\|_{L^{p}((0,R^{\beta}); L^{q}_{x})}}{\|f_{R}\|_{L^{2}_{x}}} \geq CR^{(\beta n - (\alpha(n-1)+2\gamma))/np}, \end{equation} \begin{equation}\label{eq:epo26} \frac{\|W_{R}\|_{L^{p}((0,R^{\beta}); L^{q}_{x})}}{\|F_{R}\|_{L^{p'}((0,R^{\beta}); L^{q'}_{x})}} \geq CR^{\kappa}, \end{equation} for any $R>2$, where \begin{equation*} \kappa = \kappa(n, \gamma, \beta, p) = 2\left(\frac{\beta n - (\alpha(n-1)+2\gamma)}{np}\right) +\min\{2\gamma - \beta, 2\alpha + 2 - 3\beta\} \end{equation*} and the constants $C > 0$ do not depend on $R$. \end{lemma} \begin{proof} For a given function $L(y)$, we denote by \begin{equation}\label{eq:epo28} \Lambda(y,z) := L\left(\frac{y}{\sqrt{z^{\alpha}}}\right), \end{equation} for any $(y,z) \in \mathbb{R}^{n-1} \times \mathbb{R}.$ In order to prove \eqref{eq:epo21} and \eqref{eq:epo22}, it is sufficient to show that, if $0\neq L\in \bigcap _{p=1}^{\infty} L^{p}(\mathbb{R}^{n-1})$, then the following estimates hold: \begin{equation}\label{eq:epo29} cR^{(\alpha(n-1)+2\gamma)/2q} \leq \|\Lambda\psi_{R}\psi\|_{L^{q}_{x}} \leq CR^{(\alpha(n-1)+2\gamma)/2q}, \end{equation} for all $R>1$, where $c = c(q,L) > 0$ and $C = C(q,L) > 0$ and $\psi$ and $\psi_{R}$ are defined by \eqref{eq:psi}, \eqref{eq:psiR}. Indeed, \eqref{eq:epo21} and \eqref{eq:epo22} follow by \eqref{eq:epo29}, with the choice $L(y)=v(y)$. In order to prove \eqref{eq:epo29}, observe that, by the properties of $\psi_{R}$ and $\psi$, we have \begin{align*} & \int_{R-R^{\gamma}/2}^{R+R^{\gamma}/2}dz\int_{|y| < \frac{\sqrt{z^{2-\alpha}}}{\sqrt{2}}}|L(y)|^{q}z^{(n-1)\alpha/2}dy \\ &\ \ \ = \int_{R-R^{\gamma}/2}^{R+R^{\gamma}/2}dz\int_{|y| < \frac{z}{\sqrt{2}}}|\Lambda|^{q}dy \leq \int_{\mathbb{R}^{n}}|\Lambda\psi_{R}\psi|^{q}dydz \leq \int_{R-R^{\gamma}}^{R+R^{\gamma}}dz\int_{|y| < z}|\Lambda|^{q}dy \\ &\ \ \ = \int_{R-R^{\gamma}}^{R+R^{\gamma}}dz\int_{|y| < \sqrt{z^{2-\alpha}}}|L(y)|^{q}z^{(n-1)\alpha/2}dy; \end{align*} this implies (\ref{eq:epo29}). With the same argument as above, we can prove the following estimates: \begin{align*} &cR^{(\alpha(n-1)+2\gamma)/2q} \leq \|\Lambda\psi_{R}\psi^{'}\|_{L^{q}_{x}} \leq CR^{(\alpha(n-1)+2\gamma)/2q} \\ &cR^{(\alpha(n-1)+2\gamma)/2q - \gamma} \leq \|\Lambda\psi^{'}_{R}\psi\|_{L^{q}_{x}} \leq CR^{(\alpha(n-1)+2\gamma)/2q - \gamma} \\ &cR^{(\alpha(n-1)+2\gamma)/2q - \gamma} \leq \|\Lambda\psi^{'}_{R}\psi^{'}\|_{L^{q}_{x}} \leq CR^{(\alpha(n-1)+2\gamma)/2q - \gamma} \\ &cR^{(\alpha(n-1)+2\gamma)/2q} \leq \|\Lambda\psi_{R}\psi^{''}\|_{L^{q}_{x}} \leq CR^{(\alpha(n-1)+2\gamma)/2q} \\ &cR^{(\alpha(n-1)+2\gamma)/2q - 2\gamma} \leq \|\Lambda\psi^{''}_{R}\psi\|_{L^{q}_{x}} \leq CR^{(\alpha(n-1)+2\gamma)/2q - 2\gamma} \end{align*} Now we can pass to the proof of \eqref{eq:epo23}. It is sufficient to use the estimates \begin{equation}\label{eq:epo32} \|\psi_{R}\psi F\|_{L^{p}_{T}L^{q}_{x}} \leq C T^{1/p} R^{(\alpha(n-1)+2\gamma)/2q}\max\{R^{-2}, T^{2}R^{-(2\alpha+2)}\}, \end{equation} \begin{equation}\label{eq:epo33} \|G_{R}\|_{L^{p}_{T}L^{q}_{x}} \leq C T^{1/p} R^{(\alpha(n-1)+2\gamma)/2q}\max\{R^{-2\gamma}, R^{-(6-2\alpha)}, TR^{-(\alpha+1+\gamma)}, TR^{-4}\}, \end{equation} which we are going to prove under the conditions $1/2 < \gamma < 1$ and $1 < \alpha < 2$. 
Looking at the structure of F, it is easy to see that, in order to deduce \eqref{eq:epo32}, it is sufficient to estimate the norms of functions of the following type: \begin{equation} \frac{t}{z^{\alpha+2}} \Lambda\psi_{R}\psi, \qquad \frac{t^{2}}{z^{2\alpha+2}} \Lambda\psi_{R}\psi, \qquad \frac{1}{z^{2}} \Lambda\psi_{R}\psi, \end{equation} where $\Lambda$ is defined by \eqref{eq:epo28} and L may change in different expressions. Let us consider the first term. Due to the properties of $\psi_{R}$, on the support of $\frac{t}{z^{\alpha+2}} \Lambda\psi_{R}\psi$ we have that $t/z^{\alpha+2} \leq Ct/R^{\alpha+2}$; hence we obtain \begin{equation}\label{eq:epo35} \|\frac{t}{z^{\alpha+2}} \Lambda\psi_{R}\psi\|_{L^{p}_{T}L^{q}_{x}} \leq CT^{1+1/p}\frac{1}{R^{\alpha+2}}\|\Lambda\psi_{R}\psi\|_{L^{q}_{x}} \leq CT^{1+1/p}R^{(\alpha(n-1)+2\gamma)/2q-(\alpha+2)} \end{equation} where we have used \eqref{eq:epo29} in the last estimate. With the same arguments, we can deduce that \begin{align*} &\|\frac{t^{2}}{z^{2\alpha+2}} \Lambda\psi_{R}\psi\|_{L^{p}_{T}L^{q}_{x}} \leq CT^{2+1/p}R^{(\alpha(n-1)+2\gamma)/2q-(2\alpha+2)} \\ &\|\frac{1}{z^{2}} \Lambda\psi_{R}\psi\|_{L^{p}_{T}L^{q}_{x}} \leq CT^{1/p}R^{(\alpha(n-1)+2\gamma)/2q - 2}. \end{align*} Notice that the norm estimate in \eqref{eq:epo35} is the geometric mean of the estimates above, so it cannot be the largest one; this remark proves \eqref{eq:epo32}. Analogously, looking at the structure of $G_{R}$ it is easy to see that the following estimates imply \eqref{eq:epo33}: \begin{align*} &\|\frac{1}{z^{2}} \Lambda\psi_{R}\psi^{'}\|_{L^{p}_{T}L^{q}_{x}} \leq CT^{1/p}R^{(\alpha(n-1)+2\gamma)/2q - 2},\\ &\|\frac{|y|^{2}}{z^{4}} \Lambda\psi_{R}\psi^{''}\|_{L^{p}_{T}L^{q}_{x}} = \|\frac{1}{z^{4-\alpha}}\Theta\psi_{R}\psi^{''}\|_{L^{p}_{T}L^{q}_{x}} \leq CT^{1/p}R^{(\alpha(n-1)+2\gamma)/2q - (4-\alpha)},\\ &\|\Lambda\psi^{''}_{R}\psi\|_{L^{p}_{T}L^{q}_{x}} \leq CT^{1/p}R^{(\alpha(n-1)+2\gamma)/2q - 2\gamma},\\ &\|\frac{|y|^{2}}{z^{3}}\Lambda\psi^{'}_{R}\psi^{'}\|_{L^{p}_{T}L^{q}_{x}} \leq CT^{1/p}R^{(\alpha(n-1)+2\gamma)/2q - (3-\alpha+\gamma)},\\ &\|\frac{|y|^{4}}{z^{6}} \Lambda\psi_{R}\psi^{''}\|_{L^{p}_{T}L^{q}_{x}} = \|\frac{1}{z^{6-2\alpha}}\Psi\psi_{R}\psi^{''}\|_{L^{p}_{T}L^{q}_{x}} \leq CT^{1/p}R^{(\alpha(n-1)+2\gamma)/2q - (6-2\alpha)},\\ &\|\frac{|y|^{2}}{z^{4}} \Lambda\psi_{R}\psi^{'}\|_{L^{p}_{T}L^{q}_{x}} = \|\frac{1}{z^{4-\alpha}}\Theta\psi_{R}\psi^{'}\|_{L^{p}_{T}L^{q}_{x}} \leq CT^{1/p}R^{(\alpha(n-1)+2\gamma)/2q - (4-\alpha)},\\ &\|\frac{t}{z^{\alpha+1}}\Lambda\psi^{'}_{R}\psi\|_{L^{p}_{T}L^{q}_{x}} \leq CT^{1+1/p}R^{(\alpha(n-1)+2\gamma)/2q - (\alpha+1+\gamma)},\\ &\|\frac{t|y|^{2}}{z^{\alpha+4}} \Lambda\psi_{R}\psi^{'}\|_{L^{p}_{T}L^{q}_{x}} = \|\frac{t}{z^{4}}\Theta\psi_{R}\psi^{'}\|_{L^{p}_{T}L^{q}_{x}} \leq CT^{1+1/p}R^{(\alpha(n-1)+2\gamma)/2q - 4},\\ &\|\frac{1}{z}\Lambda\psi^{'}_{R}\psi\|_{L^{p}_{T}L^{q}_{x}} \leq CT^{1/p}R^{(\alpha(n-1)+2\gamma)/2q - (1+\gamma)}. \end{align*} Here we denoted by \begin{equation}\label{eq:tetapsi} \Theta(y,z) := \left(\frac{|y|}{\sqrt{z^{\alpha}}}\right)^{2} L\left(\frac{y}{\sqrt{z^{\alpha}}}\right), \quad \Psi(y,z) := \left(\frac{|y|}{\sqrt{z^{\alpha}}}\right)^{4} L\left(\frac{y}{\sqrt{z^{\alpha}}}\right), \end{equation} for $(y,z) \in \mathbb{R}^{n-1} \times \mathbb{R}.$ In order to conclude the proof of the Lemma, it is now sufficient to remark that estimates \eqref{eq:epo25} and \eqref{eq:epo26} follow from \eqref{eq:epo21}, \eqref{eq:epo22}, and \eqref{eq:epo23}, where we choose $T = R^{\beta}$. 
\end{proof} \section{Proof of Theorem \ref{thm:odd}}\label{sec:proofodd} We can now prove Theorem \ref{thm:odd}. For any $x\in\mathbb{R}^n$, we write $x=(y,z)$, where $y=(y_1,\dots,y_{n-1})\in\mathbb{R}^{n-1}$, $z\in\mathbb{R}$. Since $M$ is anti-symmetric, we have that $\text{div}A\equiv0$ and the explicit expansion of $H$ in \eqref{eq:hamiltonianintr} is given by \begin{equation}\label{eq:Hexpl} H=-\Delta+2iA\cdot\nabla+|A|^2. \end{equation} By homogeneity we have \begin{equation}\label{eq:omog} A(y,z) \cdot \nabla u =z^{1-\alpha}A\left(\frac{y}{z}, 1\right) \cdot \nabla u, \end{equation} for all $(y,z) \in \mathbb{R}^{n-1} \times (0, \infty)$. Let $P = (0,\dots,0,1)$; since $A(P)=0$ and $DA(P)=M$, performing the first-order Taylor expansion of $A$ around $P$ in \eqref{eq:omog} we get \begin{align}\label{eq:epo39} 2iA(y,z)\cdot\nabla u & = 2iz^{1-\alpha}\left\{M\left(\frac{y}{z}, 0\right)^{t} + R_{1}\left(\frac{y}{z}\right)\right\}\cdot\nabla u \\ & = 2iz^{-\alpha}M\left(y,0\right)^{t}\cdot\nabla u + 2iz^{1-\alpha}R_{1}\left(\frac{y}{z}\right)\cdot\nabla u, \nonumber \end{align} where the rest $R_{1}$ satisfies \begin{equation}\label{eq:epo42} \left|R_{1}\left(\frac{y}{z}\right)\right| \leq C\frac{|y|^{2}}{z^{2}}, \end{equation} for all $(y,z)\in\mathbb{R}^{n-1}\times(0,+\infty)$ such that $|y|<|z|$. Analogously, for $|A|^2$ we have \begin{equation}\label{eq:omog2} |A|^{2}(y,z) = z^{2-2\alpha}|A|^{2}\left(\frac{y}{z}, 1\right). \end{equation} Notice that $|A|^2(P)=0=\nabla(|A|^2)(P)$, and moreover \begin{equation*} D^2(|A|^2)(P)=2M^tM = 2\left( \begin{array}{cc} I_{n-1} & 0 \\ 0 & 0 \end{array} \right), \end{equation*} where $I_{n-1}$ denotes the identity $(n-1)\times(n-1)$-matrix. Hence, performing the second-order Taylor expansion of $|A|^2$ around $P$ in \eqref{eq:omog2}, we obtain \begin{equation}\label{eq:epo50} |A|^2(y,z)=\frac{2}{z^{2\alpha}}|y|^2 + z^{2-2\alpha}R_{2}\left(\frac{y}{z}\right), \end{equation} where the rest $R_2$ satisfies \begin{equation}\label{eq:epo51} \left|R_{2}\left(\frac{y}{z}\right)\right| \leq C\frac{|y|^{3}}{z^{3}}, \end{equation} provided $|y| < |z|$. We can now select a couple $(\lambda, v(y))$ which satisfies the eigenvalue problem \eqref{eq:epo2}; hence, from now on, the functions $W_{R}$, $f_{R}$, and $F_{R}$ are fixed by \eqref{eq:WR}, \eqref{eq:epo18} and \eqref{eq:epo19}. Due to \eqref{eq:Hexpl}, \eqref{eq:epo39}, \eqref{eq:epo50}, the following auxiliary equation is naturally related to our Cauchy problem \eqref{eq:schromagnintr} \begin{align}\label{eq:epo69} & i\partial_{t}u_{R}-\left(\nabla- \frac{i}{z^{\alpha}} M(y,0)^t\right)^2 u_R \\ &\ \ \ + \frac{1}{z^{2\alpha}}|y|^2u_R + 2i z^{1-\alpha}R_{1}\left(\frac{y}{z}\right) \nabla u_R + z^{2-2 \alpha}R_{2}\left(\frac{y}{z}\right)u_{R} =\tilde{F}_{R}, \nonumber \end{align} with initial datum \begin{equation}\label{eq:epo69datum} u_{R}(0, y, z) = f_{R}, \end{equation} where \begin{equation}\label{eq:epo70} \tilde{F}_{R}(t,y,z) = \chi_{(0, R^{\beta})}(t)\left\{F_{R} + 2i z^{1-\alpha}R_{1}\left(\frac{y}{z}\right)\nabla W_{R} + z^{2-2 \alpha}R_{2}\left(\frac{y}{z}\right)W_{R}\right\}, \end{equation} $\beta$ is the same as in Lemma \ref{lem:odd}, and $f_{R}$, $F_{R}$ are given by \eqref{eq:epo18}, \eqref{eq:epo19}. Notice that, due to \eqref{eq:epo17}, \eqref{eq:epo70}, the solution $u_R$ of the Cauchy problem \eqref{eq:epo69}-\eqref{eq:epo69datum} coincides with the solution $W_R$ of \eqref{eq:epo17} for small times $t\in(0,R^\beta)$. We prove the following crucial lemma. 
\begin{lemma}\label{lem:mainodd} Let $(p,q)$ be a Schr\"odinger admissible couple, $(p,q)\neq(\infty,2)$, and let $(p',q')$ be the dual couple. The following estimate holds: \begin{equation}\label{eq:epo71} \frac{\|W_{R}\|_{L^{p}((0,R^{\beta});L^{q}_{x})}}{\|\tilde{F}_{R}\|_{L^{p'}((0,R^{\beta});L^{q'}_{x})}} \geq CR^{\delta}, \qquad \forall 1/2 < \gamma < 1, \end{equation} where \begin{align}\label{eq:epo72} & \delta =\delta(n, \alpha, \gamma, \beta, p) = 2\left(\frac{\beta n - (\alpha(n-1)+2\gamma)}{np}\right)\\ &+ \min\left\{2\gamma - \beta, 2\alpha + 2 -3\beta, \frac{\alpha}{2} + 1 - \beta, 3 - \frac{\alpha}{2} - \beta, \gamma + 1 - \beta, \alpha + 2 - 2\beta\right\}, \nonumber \end{align} and $\gamma$ is the same as in \eqref{eq:psiR}. \end{lemma} \begin{proof} Due to \eqref{eq:epo26}, we just need to estimate the rest terms in \eqref{eq:epo70}. For the term containing $R_2$, we have \begin{align*} & \|z^{2-2\alpha}R_{2}\left(\frac{y}{z}\right)W_{R}\|_{L^{p'}((0, T);L^{q'}_{x})} \\ &\ \ \ \leq CT^{1/p'}\|\frac{|y|^{3}}{z^{2\alpha+1}}\omega\psi_{R}\psi\|_{L^{q'}_{x}} \leq CT^{1/p'}R^{-(\alpha/2+1)}\|M\psi_{R}\psi\|_{L^{q'}_{x}} \\ &\ \ \ \leq CT^{1/p'}R^{-(\alpha/2+1)+(\alpha(n-1)+2\gamma)/2q'}, \end{align*} for any $R > 1$, and $1/2 < \gamma < 1$, where $\omega$ is the rescaled eigenfunction in \eqref{eq:omega1}; here we denoted by \begin{equation}\label{eq:MM} M(y,z) = \left(|y|/\sqrt{z^{\alpha}}\right)^{3} v\left(y/\sqrt{z^{\alpha}}\right), \end{equation} and we used \eqref{eq:epo29} with $L(y) = |y|^{3}v(y)$ at the last step. Hence for $t\in(0,R^\beta)$ we obtain \begin{equation}\label{eq:epo53} \|z^{2-2\alpha}R_{2}\left(\frac{y}{z}\right)W_{R}\|_{L^{p'}((0, R^{\beta});L^{q'}_{x})} \leqslant CR^{\beta/p'-(\alpha/2+1)+(\alpha(n-1)+2\gamma)/2q'}, \end{equation} for any $\beta > 0$. Let us continue with the term corresponding to $R_{1}$ in \eqref{eq:epo70}. We need to estimate \begin{equation}\label{eq:WR2} \|z^{1-\alpha}R_{1}\left(\frac{y}{z}\right)\nabla W_{R}\|_{L^{p'}((0, T);L^{q'}_{x})} \end{equation} Notice that, by \eqref{eq:WR} we have \begin{equation}\label{eq:WR3} \nabla W_{R} = W\psi\nabla\psi_{R} + W\psi_{R}\nabla \psi+(\nabla W)\psi_{R}\psi; \end{equation} hence we can treat separately the three terms in \eqref{eq:WR2}. First we estimate \begin{align*}\label{eq:epo54} &\|z^{1-\alpha}R_{1}\left(\frac{y}{z} \right)W\psi\nabla\psi_{R}\|_{L^{p'}((0, T);L^{q'}_{x})} \\ &\ \ \ \leq CT^{1/p'}\|\frac{|y|^{2}}{z^{\alpha+1}}\omega\psi^{'}_{R}\psi\|_{L^{q'}_{x}} \leq CT^{1/p'}R^{-1}\|\Theta\psi^{'}_{R}\psi\|_{L^{q'}_{x}} \\ &\ \ \ \leq CT^{1/p'}R^{-(\gamma+1)+(\alpha(n-1)+2\gamma)/2q'}, \end{align*} for any $R > 1$, and $1/2 < \gamma < 1$, where $\Theta$ is the same as in \eqref{eq:tetapsi}. Therefore \begin{equation}\label{eq:epo55} \|z^{1-\alpha}R_{1}\left(\frac{y}{z} \right)W\psi\nabla\psi_{R}\|_{L^{p'}((0, R^{\beta});L^{q'}_{x})} \leqslant CR^{\beta/p'-(\gamma+1)+(\alpha(n-1)+2\gamma)/2q'}, \end{equation} for any $\beta > 0$. Analogously, since \begin{equation*} |\nabla \psi| \leqslant C\frac{|y|}{z^{2}}\psi^{'}\quad\text{provided that}\quad|y| < |z|, \end{equation*} we estimate \begin{align*} & \|z^{1-\alpha}R_{1}\left(\frac{y}{z} \right)W\psi_{R}\nabla \psi\|_{L^{p'}((0, T);L^{q'}_{x})} \\ &\ \ \ \leq CT^{1/p'}\|\frac{|y|^{3}}{z^{\alpha+3}}\omega\psi_{R}\psi^{'}\|_{L^{q'}_{x}} \leq CT^{1/p'}R^{-(3-\alpha/2)}\|M\psi_{R}\psi^{'}\|_{L^{q'}_{x}} \\ &\ \ \ \leq CT^{1/p'}R^{-(3-\alpha/2)+(\alpha(n-1)+2\gamma)/2q'}, \end{align*} for any $R > 1$, and $1/2 < \gamma < 1$. 
Hence \begin{equation}\label{eq:epo57} \|z^{1-\alpha}R_{1}\left(\frac{y}{z} \right)W\psi_{R}\nabla \psi\|_{L^{p'}((0, R^{\beta});L^{q'}_{x})} \leqslant CR^{\beta/p'-(3-\alpha/2)+(\alpha(n-1)+2\gamma)/2q'}, \end{equation} for any $\beta > 0$. For the remaining term, notice that by \eqref{eq:epo5} we have \begin{align}\label{eq:epo58} & \|z^{1-\alpha}R_{1}\left(\frac{y}{z}\right)\psi_{R}\psi\nabla W\|_{L^{p'}((0, T);L^{q'}_{x})} \\ &\ \leq \|z^{1-\alpha}\frac{t}{z^{\alpha+1}}R_{1} \left(\frac{y}{z}\right)\omega\psi_{R}\psi\|_{L^{p'}((0, T);L^{q'}_{x})} + \|z^{1-\alpha}R_{1}\left(\frac{y}{z}\right)\psi_{R}\psi\nabla \omega\|_{L^{p'}((0, T);L^{q'}_{x})}. \nonumber \end{align} For the first term in \eqref{eq:epo58} we have \begin{align*}\label{eq:epo60} & \|z^{1-\alpha}\frac{t}{z^{\alpha+1}}R_{1}\left(\frac{y}{z}\right) \omega\psi_{R}\psi\|_{L^{p'}((0, T);L^{q'}_{x})} \\ &\ \ \ \leq CT^{1+1/p'}\|\frac{|y|^{2}}{z^{2\alpha+2}}\omega\psi_{R}\psi\|_{L^{q'}_{x}} \leq CT^{1+1/p'}R^{-(\alpha+2)}\|\Theta\psi_{R}\psi\|_{L^{q'}_{x}} \\ & \ \ \ \leq CT^{1+1/p'}R^{-(\alpha+2)+(\alpha(n-1)+2\gamma)/2q'}, \end{align*} for any $R > 1$, and $1/2 < \gamma < 1$. Therefore \begin{equation}\label{eq:1} \|z^{1-\alpha}\frac{t}{z^{\alpha+1}}R_{1}\left(\frac{y}{z}\right) \omega\psi_{R}\psi\|_{L^{p'}((0, R^\beta);L^{q'}_{x})} \leq CR^{\beta+\frac\beta{p'}-(\alpha+2)+(\alpha(n-1)+2\gamma)/2q'} \end{equation} for any $\beta > 0$. For the second term in \eqref{eq:epo58}, we proceed as follows: \begin{align}\label{eq:epo61} & \|z^{1-\alpha}R_{1}\left(\frac{y}{z}\right)\psi_{R}\psi\nabla \omega\|_{L^{p'}((0, T);L^{q'}_{x})} \\ &\ \ \ \leq CT^{1/p'}\|\frac{|y|^{2}}{z^{\alpha+1}}\psi_{R}\psi\nabla \omega\|_{L^{q'}_{x}} \leq CT^{1/p'}\|\frac{|y|^{2}}{z^{3\alpha/2+1}}\psi_{R}\psi(\nabla v)\left(\frac{y}{\sqrt{z^{\alpha}}}\right)\|_{L^{q'}_{x}} \nonumber \\ &\ \ \ \leq CT^{1/p'}R^{-(\alpha/2+1)}\|\widetilde{M}\psi_{R}\psi\|_{L^{q'}_{x}}, \nonumber \end{align} where \begin{equation*} \widetilde{M}(y,z) = \left(|y|/\sqrt{z^{\alpha}}\right)^{2} (\nabla v)\left(y/\sqrt{z^{\alpha}}\right). \end{equation*} Explicitly we have \begin{align*}\label{eq:epo62} \|\widetilde{M}\psi_{R}\psi\|_{L^{q'}_{x}}^{q'} & = \int_{R-R^{\gamma}}^{R+R^{\gamma}}dz\int_{|y| < z}\left| \left(\frac{|y|}{\sqrt{z^{\alpha}}}\right)^{2} (\nabla v)\left(\frac{y}{\sqrt{z^{\alpha}}}\right)\right|^{q'}dy \\ & = \int_{R-R^{\gamma}}^{R+R^{\gamma}}dz\int_{|y| < \sqrt{z^{2-\alpha}}}\left||y|^{2}(\nabla v)(y)\right|^{q'}z^{(n-1)\alpha/2}dy \\ &\leq CR^{(\alpha(n-1)+2\gamma)/2}, \end{align*} for all $R>1$, and $1/2<\gamma<1$. Hence by \eqref{eq:epo61} we obtain \begin{equation}\label{eq:2} \|z^{1-\alpha}R_{1}\left(\frac{y}{z}\right)\psi_{R}\psi\nabla \omega\|_{L^{p'}((0, R^\beta);L^{q'}_{x})} \leq CR^{\frac{\beta}{p'}-(\alpha/2+1)+\frac{\alpha(n-1)+2\gamma}{2q'}}, \end{equation} for any $\beta>0$. In conclusion, by \eqref{eq:epo58}, \eqref{eq:1} and \eqref{eq:2} we obtain \begin{align}\label{eq:epo65} & \|z^{1-\alpha}R_{1}\left(\frac{y}{z}\right)\psi_{R}\psi\nabla W\|_{L^{p'}((0, R^{\beta});L^{q'}_{x})} \\ &\ \ \leq CR^{\beta/p'+(\alpha(n-1)+2\gamma)/2q'}\max\{R^{\beta-(\alpha+2)}, R^{-(\alpha/2+1)}\}, \nonumber \end{align} for any $\beta>0$. 
Hence, by \eqref{eq:WR3}, \eqref{eq:epo55}, \eqref{eq:epo57} and \eqref{eq:epo65} we can conclude that \begin{align}\label{eq:epo66} &\|z^{1-\alpha}R_{1}\left(\frac{y}{z}\right)\nabla W_{R}\|_{L^{p'}((0, R^{\beta});L^{q'}_{x})} \\ &\ \ \ \leq CR^{\beta/p'+(\alpha(n-1)+2\gamma)/2q'}\max \{R^{-(\gamma+1)}, R^{\alpha/2-3},R^{\beta-(\alpha+2)}, R^{-(\alpha/2+1)}\}, \nonumber \end{align} for any $\beta > 0$. Now, by \eqref{eq:epo53}, \eqref{eq:epo66} we get \begin{align}\label{eq:epo67} &\|z^{2-2\alpha}R_{2}\left(\frac{y}{z}\right)W_{R} + z^{1-\alpha}R_{1}\left(\frac{y}{z}\right)\nabla W_{R} \|_{L^{p'}((0, R^{\beta});L^{q'}_{x})} \\ &\ \ \ \leq CR^{\beta/p'+(\alpha(n-1)+2\gamma)/2q'}\max \{R^{-(\gamma+1)}, R^{\alpha/2-3},R^{\beta-(\alpha+2)}, R^{-(\alpha/2+1)}\}, \nonumber \end{align} for any $\beta > 0$. By \eqref{eq:epo70}, we have \begin{align}\label{eq:quasi} & \frac{\|W_{R}\|_{L^{p}((0,R^{\beta});L^{q}_{x})}}{\|\tilde{F}_{R}\|_{L^{p'} ((0,R^{\beta});L^{q'}_{x})}} \\ &\ \ \geq \frac{\|W_{R}\|_{L^{p}((0,R^{\beta});L^{q}_{x})}}{\|F_{R}\|_{L^{p'} ((0,R^{\beta});L^{q'}_{x})}+\|z^{2-2\alpha} R_{2}\left(\frac{y}{z}\right)W_{R} + z^{1-\alpha}R_{1}\left(\frac{y}{z}\right)\nabla W_{R} \|_{L^{p'}((0, R^{\beta});L^{q'}_{x})}}. \nonumber \end{align} Finally, the thesis \eqref{eq:epo71} follows from \eqref{eq:epo22}, \eqref{eq:epo23}, \eqref{eq:epo67} and \eqref{eq:quasi}, when the couple $(p,q)$ satisfies the Schr\"odinger admissibility condition. \end{proof} Now let us come back to the inhomogeneous Cauchy problem \eqref{eq:epo69}-\eqref{eq:epo69datum}. Notice that \begin{equation}\label{eq:epo74} \|u_{R}\|_{L^{p}_{t}L^{q}_{x}} \geq \|u_{R}\|_{L^{p}((0, R^{\beta}); L^{q}_{x})} = \|W_{R}\|_{L^{p}((0, R^{\beta}); L^{q}_{x})}, \end{equation} from which it follows that \begin{equation}\label{eq:epo75} \frac{\|u_{R}\|_{L^{p}_{t}L^{q}_{x}}}{\|f_{R}\|_{L^{2}_{x}} + \|\tilde{F}_{R}\|_{L^{p'}_{t}L^{q'}_{x}}} \geq \frac{ \|W_{R}\|_{L^{p}((0, R^{\beta}); L^{q}_{x})}}{\|f_{R}\|_{L^{2}_{x}} + \|\tilde{F}_{R}\|_{L^{p'}((0, R^{\beta}); L^{q'}_{x})}} \end{equation} \noindent Notice that \eqref{eq:epo25} implies that for any $1 \leqslant p < \infty, \quad 1/2 < \gamma < 1$, \begin{equation}\label{eq:epo76} \frac{\|W_{R}\|_{L^{p}((0, R^{\beta}); L^{q}_{x})}}{\|f_{R}\|_{L^{2}_{x}}} \to +\infty, \end{equation} as $R\to+\infty$, provided that $\beta > (\alpha(n-1)+2\gamma)/n$. On the other hand, the function $\delta(n, \gamma, \beta, p)$ defined in \eqref{eq:epo72}, varies continuously with $\alpha$, and in particular \begin{align}\label{eq:epo77} &\delta\left(n, \gamma, \frac{\alpha(n-1)+2\gamma}{n}, p\right) \\ & =\min\left\{\frac{(n - 1)(2\gamma - \alpha)}{n}, \frac{(2 - \alpha)n + 3\alpha - 6\gamma}{n}, \frac{(2 - \alpha)n + 2\alpha - 4\gamma}{2n}, \right. \nonumber \\ &\ \ \ \ \left. \frac{3(2 - \alpha)n + 2\alpha - 4\gamma}{2n}, \frac{n(\gamma + 1) - (n - 1)\alpha - 2\gamma}{n}, \frac{(2 - \alpha)n + 2\alpha - 4\gamma}{n}\right\}. \nonumber \end{align} Hence $\delta$ is strictly positive if \begin{equation}\label{eq:range} \frac\alpha2 < \gamma < \frac\alpha2 + \frac{(2 - \alpha)n}6. \end{equation} Since $1<\alpha<2$, the range given by \eqref{eq:range} contains some $\gamma \in (1/2, 1)$; hence, by choosing $\beta > (\alpha(n-1)+2\gamma)/n$, we obtain $\delta>0$. 
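To fix ideas with a concrete (and arbitrary) choice of parameters, take $n=3$, $\alpha=3/2$ and $\gamma=7/8$: condition \eqref{eq:range} reads $3/4<\gamma<1$, so this choice is admissible and lies in $(1/2,1)$, and $(\alpha(n-1)+2\gamma)/n=19/12$; at $\beta=19/12$ the minimum appearing in \eqref{eq:epo72} equals \begin{equation*} \min\left\{\tfrac{1}{6},\ \tfrac{1}{4},\ \tfrac{1}{6},\ \tfrac{2}{3},\ \tfrac{7}{24},\ \tfrac{1}{3}\right\}=\tfrac{1}{6}>0, \end{equation*} so that $\delta$ remains strictly positive for every $\beta$ slightly larger than $19/12$.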
The positivity of $\delta$, together with \eqref{eq:epo71}, gives \begin{equation}\label{eq:epo78} \frac{\|W_{R}\|_{L^{p}((0,R^{\beta});L^{q}_{x})}}{\|\tilde{F}_{R}\|_{L^{p'}((0,R^{\beta});L^{q'}_{x})}} \to +\infty, \end{equation} as $R\to+\infty$, and by \eqref{eq:epo75}, \eqref{eq:epo76}, \eqref{eq:epo78} we conclude that \begin{equation}\label{eq:epo79} \frac{\|u_{R}\|_{L^{p}_{t}L^{q}_{x}}}{\|f_{R}\|_{L^{2}_{x}} + \|\tilde{F}_{R}\|_{L^{p'}_{t}L^{q'}_{x}}} \to +\infty, \end{equation} as $R\to+\infty$. The last inequality shows that the Strichartz estimates \begin{equation}\label{eq:epo80} \|u\|_{L^{p}_{t}L^{q}_{x}} \leqslant C\left(\|f\|_{L^{2}_{x}} + \|F\|_{L^{p'}_{t}L^{q'}_{x}}\right) \end{equation} cannot be satisfied by solutions of the inhomogeneous Schr\"odinger equation \begin{equation}\label{eq:epo81} \begin{cases} i\partial_{t}u-\Delta u + 2iA(x)\cdot \nabla u+|A|^{2}(x)u=F \\ u(x,0)=f(x), \end{cases} \end{equation} where the potential $A$ is given by \eqref{eq:A}. In order to disprove Strichartz estimates for the corresponding homogeneous Schr\"odinger equation in the non-endpoint case $p > 2$, it is sufficient to apply a $TT^{*}$-argument, by using the standard Christ-Kiselev Lemma (see \cite{CK}). Finally, the endpoint estimate $p=2$ fails as well; indeed, if it were true, then by interpolation with the mass conservation (i.e. the $L^\infty L^2$-estimate) the non-endpoint estimates would also be satisfied. This completes the proof of Theorem \ref{thm:odd}. \section{Preliminaries for the even-dimensional case}\label{sec:even} Before the proof of Theorem \ref{thm:even}, we start with some preliminary computations, analogous to the ones performed in Section \ref{sec:odd}. Throughout this Section, $n=2k$ will be an even number, $n\geq4$. On the even-dimensional space $\mathbb{R}^{n-2}$, let us again consider an eigenfunction $v$ of the operator $T$ corresponding to an eigenvalue $\lambda$, as in \eqref{eq:epo2}, \eqref{eq:lp}. By scaling $v$ we can define a function $\omega$ on $\mathbb{R}^n$ as follows: \begin{equation} \omega(x) := v\left(\frac{y}{\sqrt{|z|^{\alpha}}}\right), \qquad x:=(y,z) \in \mathbb{R}^{n-2} \times (\mathbb{R}^{2} \setminus \{0\}), \end{equation} where $\alpha$ is the exponent in \eqref{eq:AA}. It is easy to see that $\omega$ satisfies \begin{equation}\label{eq:epe4} -\left(\nabla- \frac{i}{|z|^{\alpha}}M(y,0,0)^t\right)^2\omega + \frac{1}{|z|^{2\alpha}}|y|^2\omega = \frac{\lambda}{|z|^{\alpha}} \omega, \end{equation} for any $(y,z) \in \mathbb{R}^{n-2}\times (\mathbb{R}^{2} \setminus \{0\})$, where $M$ is the matrix in \eqref{eq:Meven}. Let us now introduce the time-dependent function \begin{equation}\label{eq:epe5} W(t,y,z) = e^{i( \lambda t /|z|^{\alpha})}\omega(y,z), \end{equation} for $(t,y,z) \in \mathbb{R} \times \mathbb{R}^{n-2} \times (\mathbb{R}^{2}\setminus \{0\})$. As in the odd-dimensional case, a direct computation shows that $W$ solves \begin{equation}\label{eq:epe10} i\partial_{t}W - \left(\nabla- \frac{i}{|z|^{\alpha}}M(y,0,0)^t\right)^2 W + \frac{1}{|z|^{2\alpha}}|y|^2W = F, \end{equation} where $F$ is given by \begin{align}\label{eq:Feven} F(t,y,z) = & e^{i( \lambda t /|z|^{\alpha})}\left\{ \left[\frac{\alpha^{2}\lambda^{2}t^{2}}{|z|^{2\alpha+2}}-\frac{\alpha^{2}i \lambda t}{|z|^{\alpha+2}}\right] v\left(\frac{y}{\sqrt{|z|^{\alpha}}} \right) \right. \\ & \left. 
- G\left(\frac{y}{\sqrt{|z|^{\alpha}}}\right)\cdot\left[\frac{\alpha^{2}i \lambda t}{|z|^{\alpha+2}} - \frac{\alpha^{2}}{4|z|^{2}}\right] - \frac{\alpha^{2}}{4|z|^{2}} H\left(\frac{y}{\sqrt{|z|^{\alpha}}}\right) \right\}, \nonumber \end{align} with \begin{equation}\label{eq:epe7} G(y) = y \cdot \nabla_{y}v(y), \qquad H(y) = yD^{2}v(y)\cdot y. \end{equation} Now we introduce a smooth real-valued cutoff $\psi\in C^{\infty}(\mathbb{R})$ satisfying \begin{equation*} \psi(s) = 0 \quad \text{for } |s| > 1, \qquad \psi(s) = 1 \quad \text{for } |s| < 1/2. \end{equation*} Let us fix a parameter $\gamma \in (1/2,1)$ and denote by \begin{equation}\label{eq:psiReven} \psi^{1}_{R}(z) := \psi\left(\frac{z_{1} - R}{R^{\gamma}}\right), \qquad \psi^{2}_{R}(z) := \psi\left(\frac{z_{2} - R}{R^{\gamma}}\right), \end{equation} for $z=(z_1,z_2)\in\mathbb{R}^2$. We truncate $W$ as follows: \begin{equation}\label{eq:WReven} W_{R}(t,y,z) := W(t,y,z)\psi^{1}_{R}(z)\psi^{2}_{R}(z) \psi\left(\frac{|y|^{2}}{|z|^{2}}\right). \end{equation} Again, by direct computations it turns out that $W_R$ solves the Cauchy problem \begin{equation}\label{eq:epe17} \begin{cases} i\partial_{t}W_{R}-\left(\nabla-\frac{i}{|z|^{\alpha}}M(y,0,0)^t\right)^2W_{R} + \frac{1}{|z|^{2\alpha}}|y|^2W_{R} = F_{R} \\ W_{R}(0,y,z) = f_{R}(y,z). \end{cases} \end{equation} Here the initial datum is given by \begin{equation}\label{eq:epe18} f_{R}(y,z) = \psi^{1}_{R}(z)\psi^{2}_{R}(z) \psi\left(\frac{|y|^{2}}{|z|^{2}}\right)\omega(y,z), \end{equation} while $F_R$ is given by \begin{equation}\label{eq:epe19} F_{R}(t,y,z) = \psi^{1}_{R}\psi^{2}_{R}\psi F + G_{R}, \end{equation} where $F$ is defined in \eqref{eq:Feven} and $G_R$ has the explicit form \begin{align}\label{eq:Geven} G_{R}(t,y,z) = &e^{i( \lambda t /|z|^{\alpha})}\left\{-\omega\left[\frac{2(n - 2)}{|z|^{2}}\psi^{1}_{R}\psi^{2}_{R}\psi^{'} + \frac{4|y|^{2}}{|z|^{4}}\psi^{1}_{R}\psi^{2}_{R}(\psi''+\psi') + \psi^{2}_{R}\psi^{1''}_{R}\psi \right.\right. \\ & + \psi^{1}_{R}\psi^{2''}_{R}\psi - \frac{4|y|^{2}}{|z|^{4}}\left(\psi^{1'}_{R}\psi^{2}_{R}\psi^{'}z_{1} +\psi^{1}_{R}\psi^{2'}_{R}\psi^{'}z_{2}\right) + \frac{4|y|^{4}}{|z|^{6}}\psi^{1}_{R}\psi^{2}_{R}\psi^{''} \nonumber \\ & \left. - \frac{2\alpha i\lambda t}{|z|^{\alpha+2}}\left(\psi^{2}_{R}\psi^{1'}_{R}\psi z_{1} +\psi^{1}_{R}\psi^{2'}_{R}\psi z_{2}\right) + \frac{4\alpha i\lambda t|y|^{2}}{|z|^{\alpha+4}}\psi^{1}_{R}\psi^{2}_{R}\psi^{'}\right] \nonumber \\ &- G\left(\frac{y}{\sqrt{|z|^\alpha}}\right)\cdot\left[ \frac{4}{|z|^{2}}\psi^{1}_{R}\psi^{2}_{R}\psi^{'} + \frac{2\alpha |y|^{2}}{|z|^{4}}\psi^{1}_{R}\psi^{2}_{R}\psi^{'} \right. \nonumber \\ &\left.\left. - \frac{\alpha}{|z|^{2}}\left(\psi^{1'}_{R}\psi^{2}_{R}\psi z_{1} + \psi^{1}_{R}\psi^{2'}_{R}\psi z_{2}\right)\right]\right\}. \nonumber \end{align} The main result of this Section is the following. 
\begin{lemma}\label{lem:even} Let $n\geq4$ be an even number, $p, q \in(1,\infty)$, and $\gamma\in(1/2,1)$; then we have \begin{equation}\label{eq:epe21} \|f_{R}\|_{L^{2}_{x}} \leq C R^{(\alpha(n-2)+4\gamma)/4}, \end{equation} \begin{equation}\label{eq:epe22} \|W_{R}\|_{L^{p}_{T}L^{q}_{x}} \geq C T^{1/p} R^{(\alpha(n-2)+4\gamma)/2q}, \end{equation} \begin{equation}\label{eq:epe23} \|F_{R}\|_{L^{p}_{T}L^{q}_{x}} \leqslant C T^{1/p} R^{(\alpha(n-2)+4\gamma)/2q}\max\{R^{-2\gamma}, T^{2}R^{-(2\alpha+2)}\}, \end{equation} for all $R>2$, $T>0$, and some constant $C=C(q,\gamma)>0$. In particular, if $(p,q)\neq(\infty,2)$ is a Schr\"odinger admissible couple and $\beta > 0$, then the following estimates hold \begin{equation}\label{eq:epe25} \frac{\|W_{R}\|_{L^{p}((0,R^{\beta}); L^{q}_{x})}}{\|f_{R}\|_{L^{2}_{x}}} \geq CR^{(\beta n - (\alpha(n-2)+4\gamma))/np}, \end{equation} \begin{equation}\label{eq:epe26} \frac{\|W_{R}\|_{L^{p}((0,R^{\beta}); L^{q}_{x})}}{\|F_{R}\|_{L^{p'}((0,R^{\beta}); L^{q'}_{x})}} \geq CR^{\kappa}, \end{equation} for any $R>2$, where \begin{equation} \kappa = \kappa(n, \gamma, \beta, p) = 2\left(\frac{\beta n - (\alpha(n-2)+4\gamma)}{np}\right) +\min\{2\gamma - \beta, 2\alpha + 2 - 3\beta\} \end{equation} and the constants $C > 0$ do not depend on $R$. \end{lemma} \begin{remark}\label{rem:noproof} Notice that the statement of Lemma \ref{lem:even} is almost the same as the one of Lemma \ref{lem:odd} in the odd-dimensional case. The only difference is in the numerology of the exponents, and it is due to the dimensional gap between the cutoffs in \eqref{eq:WR}, \eqref{eq:WReven}. Since the proof is completely analogous (with more terms to be estimated, due to \eqref{eq:Geven}, but all of the same type), we omit the straightforward details. \end{remark} \section{Proof of Theorem \ref{thm:even}}\label{sec:proofeven} We can pass to the proof of Theorem \ref{thm:even}, which is completely analogous to the one of Theorem \ref{thm:odd}. For $x\in\mathbb{R}^n$, we will write $x=(y,z)$, where $y=(y_1,\dots,y_{n-2})\in\mathbb{R}^{n-2}$ and $z=(z_1,z_2)\in\mathbb{R}^2$. Let us write again the expansion \begin{equation}\label{eq:Hexpleven} H=-\Delta+2iA\cdot\nabla+|A|^2. \end{equation} By homogeneity we have \begin{equation} A(y,z) = |z|^{1-\alpha}A\left(\frac{y}{|z|}, 0, 1\right), \end{equation} for all $(y,z) \in \mathbb{R}^{n-2}\times (\mathbb{R}^{2}\setminus \{0\})$. Let us fix again the point $P=(0,\dots,0,1)$, as an arbitrary direction in the degeneracy plane generated by the axes $z_1$, $z_2$. By Taylor expanding around $P$, we get the analogues of formulas \eqref{eq:epo39} and \eqref{eq:epo50}, i.e. \begin{align}\label{eq:epe39} 2iA(y,z)\cdot\nabla u & = 2i|z|^{1-\alpha}\left\{M\left(\frac{y}{|z|}, 0, 0\right)^{t} + R_{1}\left(\frac{y}{|z|}\right)\right\}\cdot\nabla u \\ & = 2i|z|^{-\alpha}M\left(y, 0, 0\right)^{t}\cdot\nabla u + 2i|z|^{1-\alpha}R_{1}\left(\frac{y}{|z|}\right)\cdot\nabla u, \nonumber \end{align} \begin{equation}\label{eq:epe50} |A|^2(y,z)=\frac{2}{|z|^{2\alpha}}|y|^2 + |z|^{2-2\alpha}R_{2}\left(\frac{y}{|z|}\right), \end{equation} where now $M$ is the matrix in \eqref{eq:Meven} and the rest terms satisfy \begin{equation}\label{eq:epe42} \left|R_{1}\left(\frac{y}{|z|}\right)\right| \leq C\frac{|y|^{2}}{|z|^{2}}, \qquad \left|R_{2}\left(\frac{y}{|z|}\right)\right| \leq C\frac{|y|^{3}}{|z|^{3}}, \end{equation} provided $|y|<|z|$. 
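As a concrete example (reported here only for illustration), in the lowest even dimension $n=4$ the matrix \eqref{eq:Meven} and the potential \eqref{eq:AA} read \begin{equation*} M=\left( \begin{array}{cccc} 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right), \qquad A(y,z)=\big(|y|^2+|z|^2\big)^{-\frac\alpha2}(y_2,-y_1,0,0), \end{equation*} so that $A$ vanishes identically on the plane $\{y=0\}$ spanned by the $z_1$, $z_2$ axes; in particular $A(P)=0$ and $DA(P)=M$ at the point $P=(0,0,0,1)$ fixed above, exactly as in the odd-dimensional case.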
As in Section \ref{sec:proofodd}, we select a couple $(\lambda, v(y))$ solving the eigenvalue problem \eqref{eq:epo2}; again, this fixes the functions $W_{R}$, $f_{R}$, and $F_{R}$ which we use in the sequel. As in the odd-dimensional case, due to \eqref{eq:Hexpleven}, \eqref{eq:epe39}, \eqref{eq:epe50}, it is natural to consider the following auxiliary equation \begin{align}\label{eq:epe69} & i\partial_{t}u_{R}-\left(\nabla- \frac{i}{|z|^{\alpha}} M(y,0,0)^t\right)^2 u_R \\ &\ \ \ + \frac{1}{|z|^{2\alpha}}|y|^2u_R + 2i |z|^{1-\alpha}R_{1}\left(\frac{y}{|z|}\right) \nabla u_R + |z|^{2-2 \alpha}R_{2}\left(\frac{y}{|z|}\right)u_{R} =\tilde{F}_{R}, \nonumber \end{align} with initial datum \begin{equation}\label{eq:epe69datum} u_{R}(0, y, z) = f_{R}, \end{equation} where \begin{equation}\label{eq:epe70} \tilde{F}_{R}(t,y,z) = \chi_{(0, R^{\beta})}(t)\left\{F_{R} + 2i |z|^{1-\alpha}R_{1}\left(\frac{y}{|z|}\right)\nabla W_{R} + |z|^{2-2 \alpha}R_{2}\left(\frac{y}{|z|}\right)W_{R}\right\}, \end{equation} $\beta$ is the same as in Lemma \ref{lem:even}, and $f_{R}$, $F_{R}$ are given by \eqref{eq:epe18}, \eqref{eq:epe19}. Again, notice that, due to \eqref{eq:epe17}, \eqref{eq:epe70}, the solution $u_R$ of the Cauchy problem \eqref{eq:epe69}-\eqref{eq:epe69datum} coincides with the solution $W_R$ of \eqref{eq:epe17} for small times $t\in(0,R^\beta)$. The crucial result is the following. \begin{lemma}\label{lem:maineven} Let $(p,q)$ be a Schr\"odinger admissible couple, $(p,q)\neq(\infty,2)$, and let $(p',q')$ be the dual couple. The following estimate holds: \begin{equation}\label{eq:epe71} \frac{\|W_{R}\|_{L^{p}((0,R^{\beta});L^{q}_{x})}}{\|\tilde{F}_{R}\|_{L^{p'} ((0,R^{\beta});L^{q'}_{x})}} \geq CR^{\delta}, \qquad \forall 1/2 < \gamma < 1, \end{equation} where \begin{align}\label{eq:epe72} & \delta =\delta(n, \alpha, \gamma, \beta, p) = 2\left(\frac{\beta n - (\alpha(n-2)+4\gamma)}{np}\right)\\ &+ \min\left\{2\gamma - \beta, 2\alpha + 2 -3\beta, \frac{\alpha}{2} + 1 - \beta, 3 - \frac{\alpha}{2} - \beta, \gamma + 1 - \beta, \alpha + 2 - 2\beta\right\}, \nonumber \end{align} and $\gamma$ is the same as in \eqref{eq:psiReven}. \end{lemma} \begin{remark} Notice that the only difference in the statements of Lemmas \ref{lem:mainodd}, \ref{lem:maineven} is in the first terms of formulas \eqref{eq:epo72}, \eqref{eq:epe72} for the exponents $\delta$. This is due to the dimensional difference between the cutoff functions in \eqref{eq:psiR} and \eqref{eq:psiReven}. \end{remark} \begin{proof}[Proof of Lemma \ref{lem:maineven}] The proof is completely analogous to the one of Lemma \ref{lem:mainodd}. Again, due to \eqref{eq:epe26}, it is sufficient to estimate the rest terms in \eqref{eq:epe70}. With the same type of computations made in the odd-dimensional case, we get the analogue of estimate \eqref{eq:epo67}, which now reads \begin{align}\label{eq:epe67} &\||z|^{2-2\alpha}R_{2}\left(\frac{y}{|z|}\right)W_{R} + |z|^{1-\alpha}R_{1}\left(\frac{y}{|z|}\right)\nabla W_{R} \|_{L^{p'}((0, R^{\beta});L^{q'}_{x})} \\ &\ \ \ \leq CR^{\beta/p'+(\alpha(n-2)+4\gamma)/2q'}\max \{R^{-(\gamma+1)}, R^{\alpha/2-3},R^{\beta-(\alpha+2)}, R^{-(\alpha/2+1)}\}, \nonumber \end{align} for any $\beta > 0$. 
By \eqref{eq:epe70}, we have \begin{align}\label{eq:quasieven} & \frac{\|W_{R}\|_{L^{p}((0,R^{\beta});L^{q}_{x})}}{\|\tilde{F}_{R}\|_{L^{p'} ((0,R^{\beta});L^{q'}_{x})}} \\ & \geq \frac{\|W_{R}\|_{L^{p}((0,R^{\beta});L^{q}_{x})}}{\|F_{R}\|_{L^{p'} ((0,R^{\beta});L^{q'}_{x})}+\|\frac{|z|^2}{|z|^{2\alpha}} R_{2}\left(\frac{y}{|z|}\right)W_{R} + |z|^{1-\alpha}R_{1}\left(\frac{y}{|z|}\right)\nabla W_{R} \|_{L^{p'}((0, R^{\beta});L^{q'}_{x})}}. \nonumber \end{align} Finally, the thesis \eqref{eq:epe71} follows from \eqref{eq:epe22}, \eqref{eq:epe23}, \eqref{eq:epe67} and \eqref{eq:quasieven}, when the couple $(p,q)$ satisfies the Schr\"odinger admissibility condition. \end{proof} In order to conclude the proof of Theorem \ref{thm:even}, let us come back to the inhomogeneous Cauchy problem \eqref{eq:epe69}-\eqref{eq:epe69datum}. We have \begin{equation}\label{eq:epe74} \|u_{R}\|_{L^{p}_{t}L^{q}_{x}} \geq \|u_{R}\|_{L^{p}((0, R^{\beta}); L^{q}_{x})} = \|W_{R}\|_{L^{p}((0, R^{\beta}); L^{q}_{x})}, \end{equation} from which it follows that \begin{equation}\label{eq:epe75} \frac{\|u_{R}\|_{L^{p}_{t}L^{q}_{x}}}{\|f_{R}\|_{L^{2}_{x}} + \|\tilde{F}_{R}\|_{L^{p'}_{t}L^{q'}_{x}}} \geq \frac{ \|W_{R}\|_{L^{p}((0, R^{\beta}); L^{q}_{x})}}{\|f_{R}\|_{L^{2}_{x}} + \|\tilde{F}_{R}\|_{L^{p'}((0, R^{\beta}); L^{q'}_{x})}} \end{equation} \noindent Notice that \eqref{eq:epe25} implies that for any $1 \leq p < \infty, \quad 1/2 < \gamma < 1$, \begin{equation}\label{eq:epe76} \frac{\|W_{R}\|_{L^{p}((0, R^{\beta}); L^{q}_{x})}}{\|f_{R}\|_{L^{2}_{x}}} \to +\infty, \end{equation} as $R\to+\infty$, provided that $\beta > (\alpha(n-2)+4\gamma)/n$. On the other hand, the function $\delta(n, \gamma, \beta, p)$ defined in \eqref{eq:epe72}, varies continuously with $\alpha$, and in particular \begin{align}\label{eq:epe77} &\delta\left(n, \gamma, \frac{\alpha(n-2)+4\gamma}{n}, p\right) \\ & =\min\left\{\frac{(n - 2)(2\gamma - \alpha)}{n}, \frac{(2 - \alpha)n + 6\alpha - 12\gamma}{n}, \frac{(2 - \alpha)n + 4\alpha - 8\gamma}{2n}, \right. \nonumber \\ &\ \ \ \ \left. \frac{3(2 - \alpha)n + 4\alpha - 8\gamma}{2n}, \frac{n(\gamma + 1) - (n - 2)\alpha - 4\gamma}{n}, \frac{(2 - \alpha)n + 4\alpha - 8\gamma}{n}\right\}. \nonumber \end{align} Hence $\delta$ is strictly positive if \begin{equation}\label{eq:rangeeven} \frac\alpha2 < \gamma < \frac\alpha2 + \frac{(2 - \alpha)n}{12}. \end{equation} Since $1<\alpha<2$, the range given by \eqref{eq:rangeeven} contains some $\gamma \in (1/2, 1)$; hence, by choosing $\beta > (\alpha(n-2)+4\gamma)/n$, we obtain $\delta>0$. This remark, together with \eqref{eq:epe71}, gives \begin{equation}\label{eq:epe78} \frac{\|W_{R}\|_{L^{p}((0,R^{\beta});L^{q}_{x})}}{\|\tilde{F}_{R}\|_{L^{p'}((0,R^{\beta});L^{q'}_{x})}} \to +\infty, \end{equation} as $R\to+\infty$, and by \eqref{eq:epe75}, \eqref{eq:epe76}, \eqref{eq:epe78} we conclude that \begin{equation}\label{eq:epe79} \frac{\|u_{R}\|_{L^{p}_{t}L^{q}_{x}}}{\|f_{R}\|_{L^{2}_{x}} + \|\tilde{F}_{R}\|_{L^{p'}_{t}L^{q'}_{x}}} \to +\infty, \end{equation} as $R\to+\infty$. The proof now proceeds exactly as in the odd case. \end{document}
\begin{document} \title{Birational geometry of moduli spaces of rank 2 logarithmic connections} \begin{abstract} In this paper, we describe the birational structure of moduli spaces of rank 2 logarithmic connections with the fixed trace connection.\footnote[0]{$\textit{Key words and phrases}$. logarithmic connection, parabolic structure, apparent singularities.}\footnote[0]{2020 $\textit{Mathematics Subject Classification.}$ Primary 14D20; Secondary 34M55} \end{abstract} \section{Introduction} Let $C$ be a smooth projective curve over the field of complex numbers $\mathbb{C}$ of genus $g$ and $\mathbf{t}=\{t_1,\ldots,t_n\}$ be a set of $n$ distinct points on $C$. Let $L$ be a line bundle on $C$ with degree $d$ and $M(r,L)$ be the moduli space of stable vector bundles of rank $r$ on $C$ with the determinant $L$. Let $P^{\boldsymbol{\alpha}}(r,L)$ be the moduli space of $\boldsymbol{\alpha}$-semistable parabolic bundles of rank $r$ over $(C,\mathbf{t})$ with the determinant $L$. Boden and Yokogawa~\cite{BY} showed that $P^{\boldsymbol{\alpha}}(r,L)$ is a rational variety in the case of certain flag structures including full flag structures. After that, King and Schofield proved that $M(r,L)$ is rational when $r$ and $d$ are coprime. Parabolic connections consist of a vector bundle, a connection and a flag structure, so we can consider the analogous problem, that is, whether moduli spaces of parabolic connections whose pair of determinant and trace connection is isomorphic to a fixed one are rational or not. When $g=0,1$ and $r=2$, it is known that the answer is affirmative for certain weights (\cite{LS}\cite{FL}\cite{FLM}). In this paper, we show that the answer is affirmative for certain weights and any $d$ when $g\geq 2$ and $r=2$. In order to state the main theorem precisely, we introduce some notation. Let $\boldsymbol{\nu}=(\nu^\pm_i)_{1 \leq i \leq n}$ be a collection of complex numbers satisfying $\sum_{i=1}^n(\nu^+_i+\nu^-_i)=-d$. Let $\boldsymbol{\alpha}=\{\alpha^{(i)}_1, \alpha^{(i)}_2\}_{1 \leq i \leq n}$ be a collection of rational numbers such that for all $i=1, \ldots, n$, $0 < \alpha^{(i)}_1 < \alpha^{(i)}_2 <1$. Let $(L,\nabla_L)$ be a pair of a line bundle on $C$ with $\deg L=d$ and a logarithmic connection $\nabla_L$ over $L$ which has the residue data $\textrm{res}_{t_i}(\nabla_L)=\nu^+_i+\nu^-_i$ for each $i$. Let $M^{\boldsymbol{\alpha}}(\boldsymbol{\nu},(L,\nabla_L))$ be the moduli space of rank 2 $\boldsymbol{\alpha}$-stable $\boldsymbol{\nu}$-parabolic connections over $(C,\mathbf{t})$ whose determinant and trace connection are isomorphic to $(L,\nabla_L)$ and whose local exponents are $\boldsymbol{\nu}$. Inaba~\cite{In} showed that $M^{\boldsymbol{\alpha}}(\boldsymbol{\nu},(L,\nabla_L))$ is a smooth irreducible variety if \begin{equation} g=1, n\geq 2\ \text{or}\ g \geq 2, n\geq 1. \end{equation} By elementary transformations, we can change the degree $d$ freely. When $d=2g-1$, by the theory of apparent singularities~\cite{SS}, we can define the rational map \[ \textrm{App}: M^{\boldsymbol{\alpha}}(\boldsymbol{\nu},(L,\nabla_L)) \cdots \rightarrow \mathbb{P}H^0(C,L \otimes \Omega_C^1(D)). \] The map which forgets connections induces a rational map \[ \textrm{Bun}: M^{\boldsymbol{\alpha}}(\boldsymbol{\nu},(L,\nabla_L)) \cdots \rightarrow P^{\boldsymbol{\alpha}}(2,L). \] Let $V_0$ and $M^{\boldsymbol{\alpha}}(\boldsymbol{\nu},(L,\nabla_L))^0$ be the open subsets of $P^{\boldsymbol{\alpha}}(2,L)$ and $M^{\boldsymbol{\alpha}}(\boldsymbol{\nu},(L,\nabla_L))$ defined in Section 4, respectively.
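As a consistency check, note that since $\deg (L\otimes\Omega_C^1(D))=4g-3+n>2g-2$, the Riemann--Roch theorem gives
\[
\dim \mathbb{P}H^0(C,L\otimes\Omega_C^1(D))=(4g-3+n)+1-g-1=3g+n-3,
\]
which coincides with $\dim P^{\boldsymbol{\alpha}}(2,L)$; by the dimension formulas recalled in Sections 2 and 3, both sides of the rational map $\textrm{App}\times\textrm{Bun}$ considered below therefore have dimension $6g+2n-6$.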
From the arguments in Section 4, we obtain an open immersion $V_0 \hookrightarrow \mathbb{P}H^1(C,L^{-1}(-D))$ and a proper subvariety $\Sigma \subset \mathbb{P}H^0(C,L\otimes \Omega_C^1(D)) \times \mathbb{P}H^1(C,L^{-1}(-D))$. Then the following theorem holds. \begin{theorem} Under the condition (1), assume $d=2g-1$, $\sum_{i=1}^{n}\nu^-_i \neq 0$ and $\sum_{i=1}^n(\alpha^{(i)}_2-\alpha^{(i)}_1)<1$. Then the map \[ \textrm{App}\times\textrm{Bun} \colon M^{\boldsymbol{\alpha}}(\boldsymbol{\nu},(L,\nabla_L))^0 \longrightarrow (\mathbb{P}H^0(C,L \otimes \Omega_C^1(D)) \times V_0)\,\setminus\,\Sigma \] is an isomorphism. Hence, the rational map \[ \textrm{App}\times\textrm{Bun} \colon M^{\boldsymbol{\alpha}}(\boldsymbol{\nu},(L,\nabla_L)) \ \cdots \rightarrow |L \otimes \Omega_C^1(D)| \times P^{\boldsymbol{\alpha}}(2,L) \] is birational. \end{theorem} \begin{cor} Under the condition (1), suppose $\sum_{i=1}^n(\alpha^{(i)}_2-\alpha^{(i)}_1)<1$. Then $M^{\boldsymbol{\alpha}}(\boldsymbol{\nu},(L,\nabla_L))$ is a rational variety for any $d$. \end{cor} Sections 2 and 3 contain a brief summary of parabolic bundles and parabolic connections, respectively. In Section 4, we study the birational structure of certain moduli spaces of rank 2 logarithmic connections. Firstly, we define the distinguished open subset $V_0$ of the moduli space of parabolic bundles. Secondly, we introduce two rational maps, that is, the bundle map and the apparent map. The apparent map was defined in general genus and rank by Saito and Szab\'o~\cite{SS}. Thirdly, we prove the main theorem. This proof is based on the proof of Theorem 4.3 in \cite{LS}. Finally, we show that $\textrm{App} \times \textrm{Bun}$ is birational in another way. \section{Parabolic bundles} In this section, we recall basic definitions and known facts on rank 2 parabolic bundles and their moduli spaces. For the case of general rank and parabolic structures, see \cite{MS}. \subsection{Parabolic bundles and $\boldsymbol{\alpha}$-stability} Let $C$ be a smooth projective curve of genus $g$ and $\mathbf{t}=\{t_1, \ldots, t_n\}$ be a set of $n$ distinct points on $C$. For any algebraic vector bundle $E$ on $C$, we set $E|_{t_i}=E \otimes (\mathcal{O}_C/\mathcal{O}_C(-t_i))$. \begin{definition} A quasi-parabolic bundle of rank 2 over $(C,\mathbf{t})$ is a pair $(E,l_*=\{l^{(i)}_*\}_{1 \leq i \leq n})$ consisting of the following data: \begin{itemize} \setlength{\itemsep}{0cm} \item[(1)] $E$ is a rank 2 algebraic vector bundle on $C$. \item[(2)] $l^{(i)}_*$ is a filtration $E|_{t_i} = l^{(i)}_0 \supsetneq l^{(i)}_{1} \supsetneq l^{(i)}_2 =\{0\}$ such that $\dim l^{(i)}_1=1$. \end{itemize} \end{definition} The set of filtrations $ l_*=\{l^{(i)}_*\}_{1\leq i \leq n}$ is said to be a parabolic structure on the vector bundle $E$. \\ A weight $\boldsymbol{\alpha}=\{\alpha^{(i)}_1, \alpha^{(i)}_2\}_{1 \leq i \leq n}$ is a collection of rational numbers such that for all $i=1, \ldots, n$, $0 < \alpha^{(i)}_1 < \alpha^{(i)}_2 <1$. \begin{definition} Let $(E,l_*)$ be a quasi-parabolic bundle and $F$ be a subbundle of $E$. The parabolic degree of $F$ associated with $\boldsymbol{\alpha}$ is defined by \[ \textrm{par\,deg}_{\boldsymbol{\alpha}}F := \deg F + \sum_{i=1}^{n}\sum_{j=1}^{2} \alpha^{(i)}_j \dim ((F|_{t_i} \cap l^{(i)}_{j-1})/(F|_{t_i} \cap l^{(i)}_{j})). \] \end{definition} \begin{definition} A quasi-parabolic bundle $(E,l_*)$ is $\boldsymbol{\alpha}$-semistable (resp.
$\boldsymbol{\alpha}$-stable) if for any sub line bundle $F \subsetneq E$, the inequality \[ \textrm{par\,deg}_{\boldsymbol{\alpha}}F \underset{(\text{resp}. \ <)}{\leq} \frac{\textrm{par\,deg}_{\boldsymbol{\alpha}}E}{2} \] holds. \end{definition} \begin{lemma} A quasi-parabolic bundle $(E,l_*)$ is $\boldsymbol{\alpha}$-semistable (resp. $\boldsymbol{\alpha}$-stable) if and only if for any sub line bundle $F \subsetneq E$, the inequality \[ \deg E -2 \deg F + \sum_{F|_{t_i}\neq l^{(i)}_1}(\alpha^{(i)}_2-\alpha^{(i)}_1)- \sum_{F|_{t_i}= l^{(i)}_1}(\alpha^{(i)}_2-\alpha^{(i)}_1) \underset{(\text{resp}. \ >)}{\geq}0 \] holds. \end{lemma} \begin{proof} If $F|_{t_i}\neq l^{(i)}_1$, then we have \[ \dim ((F|_{t_i} \cap l^{(i)}_{j-1})/(F|_{t_i} \cap l^{(i)}_{j}))=\left\{ \begin{array}{ll} 1 &j=1 \\ 0 &j=2 \end{array} \right. \] by definition. If $F|_{t_i} = l^{(i)}_1$, then we also have \[ \dim ((F|_{t_i} \cap l^{(i)}_{j-1})/(F|_{t_i} \cap l^{(i)}_{j}))=\left\{ \begin{array}{ll} 0 &j=1 \\ 1 &j=2 \end{array} \right.. \] Therefore, we conclude the equivalence. \end{proof} \subsection{The moduli space of rank 2 quasi-parabolic bundles} Let us fix $(C,\mathbf{t})$. Let $P^{\boldsymbol{\alpha}}_{(C,\mathbf{t})}(d)$ denote the moduli space of rank 2 $\boldsymbol{\alpha}$-semistable quasi-parabolic bundles over $(C,\mathbf{t})$ of degree $d$. \begin{theorem}(Mehta and Seshadri [Theorem 4.1~\cite{MS}]). The moduli space $P^{\boldsymbol{\alpha}}_{(C,\mathbf{t})}(d)$ is an irreducible normal projective variety of dimension $4g+n-3$. Moreover, if $(E,l_*)$ is $\boldsymbol{\alpha}$-stable, then $P^{\boldsymbol{\alpha}}_{(C,\mathbf{t})}(d)$ is smooth at the point corresponding to $(E,l_*)$. \end{theorem} Let $\textrm{Pic}^d C$ be the Picard variety of degree $d$, which is the set of isomorphism classes of line bundles of degree $d$ on $C$. Then we can define the morphism \[ \det \colon P^{\boldsymbol{\alpha}}_{(C,\mathbf{t})}(d) \longrightarrow \textrm{Pic}^d C ,\ (E,l_*) \mapsto \det E \] where $\det E=\bigwedge^2 E$. For each $L \in \textrm{Pic}^d C$, set $ P^{\boldsymbol{\alpha}}_{(C,\mathbf{t})}(L) = \text{det}^{-1}(L)$, i.e., \[ P^{\boldsymbol{\alpha}}_{(C,\mathbf{t})}(L)= \{(E,l_*) \in P^{\boldsymbol{\alpha}}_{(C,\mathbf{t})}(d) \mid \det E \simeq L\}. \] \begin{theorem} The moduli space $P^{\boldsymbol{\alpha}}_{(C,\mathbf{t})}(L)$ is an irreducible normal projective variety of dimension $3g+n-3$ if it is not empty. Moreover, if $(E,l_*)$ is $\boldsymbol{\alpha}$-stable, then $P^{\boldsymbol{\alpha}}_{(C,\mathbf{t})}(L)$ is smooth at the point corresponding to $(E,l_*)$. \end{theorem} \section{Parabolic connections} In this section, we review basic definitions and known facts on rank 2 parabolic connections and their moduli spaces. For the case of general rank, see \cite{In}. Let us fix $(C,\mathbf{t})$ and let $D=t_1+ \cdots + t_n$ denote the effective divisor associated with $\mathbf{t}$. \subsection{Parabolic connections and $\boldsymbol{\alpha}$-stability} \begin{definition} Let us fix $\lambda \in \mathbb{C}$. We call a pair $(E, \nabla)$ a logarithmic $\lambda$-connection over $(C,\mathbf{t})$ when $E$ is an algebraic vector bundle on $C$ and $\nabla \colon E \rightarrow E \otimes \Omega_C^1(D)$ satisfies the $\lambda$-twisted Leibniz rule, i.e., for all local sections $a \in \mathcal{O}_C, \sigma \in E$, \[ \nabla(a\sigma)=\sigma \otimes\lambda\cdot da+ a\nabla(\sigma). \] \end{definition} Let $(E,\nabla)$ be a rank 2 logarithmic $\lambda$-connection over $(C,\mathbf{t})$.
Then we can define the residue matrix $\textrm{res}_{t_i}(\nabla) \in \textrm{End}(E|_{t_i}) \simeq M_2(\mathbb{C})$. The set $\{\nu^+_i,\nu^-_i\}$ of eigenvalues of the residue matrix $\textrm{res}_{t_i}(\nabla)$ is called the set of local exponents of $\nabla$ at $t_i$. The following lemma follows from the residue theorem. \begin{lemma}(Fuchs relation) Let $(E,\nabla)$ be a logarithmic $\lambda$-connection over $(C,\mathbf{t})$ and $\boldsymbol{\nu}=(\nu^\pm_i)_{1 \leq i \leq n}$ be the local exponents of $\nabla$. Then we have \[ \sum_{i=1}^{n}(\nu^+_i+\nu^-_i) = -\lambda \deg E. \] \end{lemma} For each $n \geq 1$, $d \in \mathbb{Z}$ and $\lambda \in \mathbb{C}$, we set \[\mathcal{N}^{(n)}(d,\lambda):=\left\{ (\nu^\pm_i)_{1 \leq i \leq n} \in \mathbb{C}^{2n} \middle|\,\sum_{i=1}^{n}(\nu^+_i+\nu^-_i)=-\lambda d\right\}. \] \begin{definition} Let us fix $\boldsymbol{\nu}=(\nu^\pm_i)_{1 \leq i \leq n} \in \mathcal{N}^{(n)}(d,\lambda)$. A $\boldsymbol{\nu}$-parabolic $\lambda$-connection over $(C,\mathbf{t})$ is a collection $(E,\nabla, l_*=\{l^{(i)}_*\}_{1\leq i \leq n})$ consisting of the following data: \begin{itemize} \setlength{\itemsep}{0cm} \item[(1)]$(E,\nabla)$ is a logarithmic $\lambda$-connection over $(C,\mathbf{t})$. \item[(2)]$l^{(i)}_*$ is a parabolic structure of $E$ such that for any $i,j$, $\dim (l^{(i)}_j/l^{(i)}_{j+1})=1 $ and for any $i$, $(\textrm{res}_{t_i}(\nabla)-\nu^+_i \textrm{id})(l^{(i)}_1)=\{0\}, \, (\textrm{res}_{t_i}(\nabla)-\nu^-_i\textrm{id})(E|_{t_i}) \subset l^{(i)}_{1}$. \end{itemize} When $\lambda=1$, a $\boldsymbol{\nu}$-parabolic $\lambda$-connection is called a $\boldsymbol{\nu}$-parabolic connection. \end{definition} \begin{remark} When $\lambda=0$, $\boldsymbol{\nu}$-parabolic $\lambda$-connections are nothing but $\boldsymbol{\nu}$-parabolic Higgs bundles. \end{remark} A weight $\boldsymbol{\alpha}=\{\alpha^{(i)}_1, \alpha^{(i)}_2\}_{1 \leq i \leq n}$ is a collection of rational numbers such that for all $i=1, \ldots, n$, $0 < \alpha^{(i)}_1 < \alpha^{(i)}_2 <1$. \begin{definition} A $\boldsymbol{\nu}$-parabolic $\lambda$-connection $(E,\nabla,l_*)$ is $\boldsymbol{\alpha}$-stable (resp. $\boldsymbol{\alpha}$-semistable) if for any sub line bundle $F \subsetneq E$ satisfying $\nabla(F) \subset F \otimes \Omega_C^1(D)$, the inequality \[ \textrm{par\,deg}_{\boldsymbol{\alpha}}F \underset{(\text{resp}. \ \leq)}{<} \frac{\textrm{par\,deg}_{\boldsymbol{\alpha}}E}{2} \] holds. \end{definition} \begin{remark} Assume that for any collection $(\epsilon_i)_{1 \leq i \leq n}$ with $\epsilon_i \in \{+,-\}$, $\sum_{i=1}^n\nu^{\epsilon_i}_i \notin \mathbb{Z}$. Then a $\boldsymbol{\nu}$-parabolic connection $(E,\nabla,l_*)$ is irreducible, that is, for any sub line bundle $F \subset E$, $\nabla(F) \nsubseteq F \otimes \Omega_C^1(D)$. In fact, if a sub line bundle $F \subset E$ satisfies $\nabla(F) \subset F \otimes \Omega_C^1(D)$, then there exists a collection $(\epsilon_i)_{1 \leq i \leq n}$ with $\epsilon_i \in \{+,-\}$ such that $\sum_{i=1}^n\nu^{\epsilon_i}_{i} \in \mathbb{Z}$ by Lemma 3.2. Moreover, $(E,\nabla,l_*)$ is $\boldsymbol{\alpha}$-stable for any weight $\boldsymbol{\alpha}$ by irreducibility.
\end{remark} \subsection{The moduli space of rank 2 parabolic connections} For a fixed $(C,\mathbf{t})$, $\boldsymbol{\nu} \in \mathcal{N}^{(n)}(d):=\mathcal{N}^{(n)}(d,1)$ and $\boldsymbol{\alpha}$, let $M^{\boldsymbol{\alpha}}_{(C,\mathbf{t})}(\boldsymbol{\nu}, d)$ be the coarse moduli space of rank 2 $\boldsymbol{\alpha}$-stable $\boldsymbol{\nu}$-parabolic connections over $(C,\mathbf{t})$. \begin{theorem}(Inaba, Iwasaki and Saito~\cite{IIS1}~\cite{IIS2}, Inaba~\cite{In}) $M^{\boldsymbol{\alpha}}_{(C,\mathbf{t})}(\boldsymbol{\nu}, d)$ is an irreducible smooth quasi-projective variety of dimension $8g+2n-6$ if it is nonempty. \end{theorem} \begin{remark} Assume that for any $(\epsilon_i)_{1 \leq i \leq n}$ with $\epsilon_i \in \{+,-\}$, $\sum_{i=1}^n\nu^{\epsilon_i}_i \notin \mathbb{Z}$. Then the moduli space $M^{\boldsymbol{\alpha}}_{(C,\mathbf{t})}(\boldsymbol{\nu}, d)$ does not depend on the weight $\boldsymbol{\alpha}$ by Remark 3.2. \end{remark} For $\boldsymbol{\nu}=(\nu^\pm_i)_{1\leq i \leq n} \in \mathcal{N}^{(n)}(d,\lambda)$, we set $\textrm{tr}(\boldsymbol{\nu})=(\nu^+_i+\nu^-_i)_{1\leq i \leq n}$. Let $(L,\nabla_L)$ be a $\textrm{tr}(\boldsymbol{\nu})$-parabolic $\lambda$-connection, i.e., $L$ is a line bundle on $C$ and $\nabla_L \colon L \rightarrow L \otimes \Omega_C^1(D) $ satisfies the $\lambda$-twisted Leibniz rule and $\textrm{res}_{t_i}\nabla_L = \nu^+_i+\nu^-_i$ for each $i$. For a fixed $(C,\mathbf{t})$, $\boldsymbol{\nu} \in \mathcal{N}^{(n)}(d), \boldsymbol{\alpha}$, and $(L, \nabla_L)$, we set \[ M^{\boldsymbol{\alpha}}_{(C,\mathbf{t})}(\boldsymbol{\nu}, (L,\nabla_L)) :=\{(E,\nabla,l_*)\in M^{\boldsymbol{\alpha}}_{(C,\mathbf{t})}(\boldsymbol{\nu}, d) \mid (\det E, \textrm{tr} \nabla)\simeq (L,\nabla_L)\}. \] \begin{theorem}(Inaba [Proposition 5.1, 5.2, 5.3~\cite{In}]). When $g=0,r\geq 2, n \geq 4$ or $g=1,n\geq 2$ or $g\geq 2, n \geq 1$, $M^{\boldsymbol{\alpha}}_{(C,\mathbf{t})}(\boldsymbol{\nu}, (L,\nabla_L))$ is an irreducible smooth quasi-projective variety of dimension $6g+2n-6$ if it is nonempty. \end{theorem} \subsection{Elementary transformations} Let $(E,\nabla,l_*)$ be a $\boldsymbol{\nu}$-parabolic connection. We consider the parabolic structure at $t_i$ \[ E|_{t_i} = l^{(i)}_0 \supsetneq l^{(i)}_1 \supsetneq l^{(i)}_2=0. \] For the natural homomorphism $\delta_i \colon E \rightarrow E|_{t_i}/l^{(i)}_1$, let $E'=\textrm{Ker}\, \delta_i$. Then $E'$ is a locally free sheaf of rank 2 on $C$ and $\nabla$ induces a logarithmic connection $\nabla' \colon E' \longrightarrow E' \otimes \Omega_C^1(D)$. We consider a parabolic structure of $E'$ at $t_i$. The natural homomorphism $\pi \colon E' \rightarrow l^{(i)}_1$ induces two exact sequences \[ 0 \longrightarrow E(-t_i) \overset{\iota}{\longrightarrow }E' \overset{\pi}{\longrightarrow } l^{(i)}_1 \longrightarrow 0 \] \[ 0 \longrightarrow E|_{t_i}/l^{(i)}_1(-t_i) \longrightarrow E'|_{t_i} \longrightarrow l^{(i)}_1 \longrightarrow 0. \] Set $l'^{(i)}_1=\iota( E|_{t_i}/l^{(i)}_1(-t_i))$, then $\{l'^{(i)}_j\}_j$ defines a parabolic structure of $E'$ at $t_i$. \begin{lemma} For $\boldsymbol{\nu} \in \mathcal{N}^{(n)}(d)$, we define $\boldsymbol{\nu}'=(\nu'^\pm_m)_{1 \leq m \leq n}$ as \[ \nu'^\epsilon_m=\left\{ \begin{array}{lll} \nu^\epsilon_m & m\neq i \\ \nu^-_i +1& m=i,\epsilon=+\\ \nu^+_i& m=i, \epsilon=- \ . \end{array} \right. \] Then the collection $(E',\nabla',l'_*)$ is a $\boldsymbol{\nu}'$-parabolic connection over $(C,\mathbf{t})$ and $\deg E'=\deg E-1$.
\end{lemma} \begin{proof} Let $\{e_1, e_2\}$ be a set of local generators of $E$ around $t_i$ which satisfies $l^{(i)}_1=\mathbb{C}\bar{e}_1$. Here $\bar{e}_j$ denotes the image of $e_j$ by the natural map $E \rightarrow E|_{t_i}$. By definition, we have $\textrm{res}_{t_i}(\nabla)(\bar{e}_1)=\nu^+_i \bar{e}_1$ and $\textrm{res}_{t_i}(\nabla)(\bar{e}_2)\equiv\nu^-_i \bar{e}_2\mod l^{(i)}_1$. Let $m_{t_i} \subset \mathcal{O}_{C,t_i}$ denote the maximal ideal and $z_i$ be a generator of $m_{t_i}$. By definition, the set $\{e_1, z_ie_2\}$ is a set of local generators of $E'$ at $t_i$. Set $e'_1=e_1$ and $e'_2=z_ie_2$. Then the image of $e'_2$ by the natural morphism $E' \rightarrow E'|_{t_i}$ generates $l'^{(i)}_1$. Moreover, we see that near $t_i$, \[ \nabla'(e'_2)=\nabla(z_ie_2)=z_ie_2 \otimes \frac{dz_i}{z_i} +z_i\nabla(e_2), \] so we obtain $\textrm{res}_{t_i}(\nabla')(\bar{e}'_2) = (1+\nu^-_i)\bar{e}'_2$. Therefore, the collection $(E',\nabla',l'_*)$ is a $\boldsymbol{\nu}'$-parabolic connection. The second assertion follows from the following exact sequence \[ 0 \longrightarrow E' \longrightarrow E \longrightarrow E|_{t_i}/l^{(i)}_1 \longrightarrow 0. \] \end{proof} \begin{definition} For a $\boldsymbol{\nu}$-parabolic connection $(E,\nabla,l_*)$, we call the $\boldsymbol{\nu}'$-parabolic connection $(E',\nabla',l'_*)$ which is given by the above way the lower transformation of $(E,\nabla,l_*)$ at $t_i$ and we set \[ \textrm{elm}^-_{t_i}(E,\nabla, l_*):=(E',\nabla',l'_*). \] \end{definition} Let $E''=E \otimes \mathcal{O}_C(t_i)$ and $\nabla'' \colon E'' \rightarrow E'' \otimes \Omega_C^1(D)$ be the natural logarithmic connection. $l''_*$ is the parabolic structure induced by $l_*$. By the same calculation as above, it follows that $(E'',\nabla'', l''_*)$ is a $\boldsymbol{\nu}''$-parabolic connection over $(C,\mathbf{t})$ where \[ \nu''^\epsilon_m=\left\{ \begin{array}{ll} \nu^\epsilon_m & m\neq i \\ \nu^\epsilon_i -1 & m=i \end{array}. \right. \] We set $\mathbf{b}_i(E,\nabla,l_*)=(E'',\nabla'', l''_*)$. \begin{definition} For a $\boldsymbol{\nu}$-parabolic connection $(E,\nabla,l_*)$, we define the upper transformation of $(E,\nabla, l_*)$ at $t_i$ by \[ \textrm{elm}^+_{t_i}(E,\nabla,l_*)=\textrm{elm}^-_{t_i}\circ \mathbf{b}_i((E,\nabla,l_*)). \] \end{definition} The following lemma follows by definitions of $\textrm{elm}^-_{t_i}$ and $\mathbf{b}_i$. \begin{lemma} For $\boldsymbol{\nu} \in \mathcal{N}^{(n)}(d)$, we set \[ \nu'''^\epsilon_m=\left\{ \begin{array}{lll} \nu^\epsilon_m & m\neq i \\ \nu^-_i & m=i,\epsilon=+ \\ \nu^+_i - 1 & m=i, \epsilon=- \end{array} \right. \] and $(E''',\nabla''',l'''_*)=\textrm{elm}^+_{t_i}(E,\nabla,l_*)$. Then the collection $(E''',\nabla''',l'''_*)$ is a $\boldsymbol{\nu}'''$-parabolic connection over $(C,\mathbf{t})$ and $\deg E'''=\deg E+1$. \end{lemma} \begin{proposition} Elementaly transformations $\sigma=\textrm{elm}^\pm_{t_i}$ give an isomorphism between two moduli spaces of stable parabolic connections \[ \sigma \colon M_{(C,\mathbf{t})}^\alpha(\boldsymbol{\nu},r,d) \overset{\sim}{\longrightarrow} M_{(C,\mathbf{t})}^\alpha(\sigma^*(\boldsymbol{\nu}),r,d'). \] Here $\sigma^*(\boldsymbol{\nu})$ and $d'$ are respectively local exponents and degree given in Lemma 3.7 and Lemma 3.10. Therefore, when we consider some caracteristics of moduli spaces of stable parabolic connections, we can freely change degree. 
\end{proposition} \section{The birational structure of moduli spaces of parabolic connections} In this setion, we describe the birational structure of moduli spaces of rank 2 parabolic connections. Throughout this section, we assume that $g \geq 1$ and $d=2g-1$. Let us fix a line bundle $L$ with degree $d$. Then we have $H^1(C,L)=\{0\}$ and by Riemann-Roch Theorem, $\dim H^0(C,L)=d+1-g=g$. Let us fix a weight $\boldsymbol{\alpha}=\{\alpha^{(i)}_1, \alpha^{(i)}_2\}_{1 \leq i \leq n}$ and set $w_i=\alpha^{(i)}_2-\alpha^{(i)}_1$. \subsection{The distinguished open subset of the moduli space of parabolic bundles} \begin{lemma} Assume $\sum_{i=1}^{n}w_i <1$. For a quasi-parabolic bundle $(E,l_*)$, the following conditions are equivalent: \begin{itemize} \setlength{\itemsep}{0cm} \item[(i)] $(E,l_*)$ is $\boldsymbol{\alpha}$-semistable. \item[(ii)] $(E,l_*)$ is $\boldsymbol{\alpha}$-stable. \item[(iii)] $E$ is stable. \end{itemize} \end{lemma} \begin{proof} If $(E,l_*)$ is $\boldsymbol{\alpha}$-semistable but not $\boldsymbol{\alpha}$-stable, then there is a sub line bundle $F \subset E$ such that \[ \deg E -2 \deg F =\sum_{F|_{t_i}= l^{(i)}_1}w_i- \sum_{F|_{t_i}\neq l^{(i)}_1}w_i. \] The left hand side is odd, but \begin{equation} \left| \sum_{F|_{t_i}= l^{(i)}_1}w_i- \sum_{F|_{t_i}\neq l^{(i)}_1}w_i\right|<\sum_{i=1}^n w_i<1. \end{equation} It is a contradiction. So condition (i) and (ii) are equivalent. If $(E,l_*)$ is $\boldsymbol{\alpha}$-stable, then for all sub line bundle $F \subset E$, the inequality \begin{equation} 2 \deg F < \deg E + \sum_{F|_{t_i}\neq l^{(i)}_1}w_i- \sum_{F|_{t_i}= l^{(i)}_1}w_i \end{equation} holds. From (2), it follows that \begin{equation} \deg E +\sum_{F|_{t_i}\neq l^{(i)}_1}w_i- \sum_{F|_{t_i}= l^{(i)}_1}w_i > \deg E -1. \end{equation} From (3) and (4), we obtain \[ 2\deg F \leq \deg E-1 < \deg E. \] Hence, $E$ is stable. Conversely, if $E$ is stable, then we can prove that $(E,l_*)$ is $\boldsymbol{\alpha}$-stable by the above argument. \end{proof} \begin{lemma} Suppose a vector bundle $E$ on $C$ satisfies following conditions: \begin{itemize} \item[(i)] $E$ is an extension of $L$ by $\mathcal{O}_C$, that is, $E$ fits into an exact sequence \[ 0 \longrightarrow \mathcal{O}_C \longrightarrow E \longrightarrow L \longrightarrow 0. \] \item[(ii)] $\dim H^0(C,E)=1$. \end{itemize} Then $E$ is stable. \end{lemma} \begin{proof} If $E$ is not stable, then there exists a sub line bundle $F \subset E$ such that $\deg F \geq g$. Since $\dim H^0(C,F)-\dim H^1(C,F)=\deg F +1-g \geq 1$, we have $\dim H^0(C,F) \geq 1$, hence we have an inclusion $\mathcal{O}_C \hookrightarrow E$. By assumption (ii), we have an unique inclusion $\mathcal{O}_C \subset F \subset E$ and this inclusion induces the injection $F/\mathcal{O}_C \hookrightarrow E/\mathcal{O}_C \simeq L$. Since $L$ is torsion free, so one conclude that $F/\mathcal{O}_C =0$, that is, $F \simeq \mathcal{O}_C$. This contradicts the fact that $\deg F \geq g \geq 1$. \end{proof} \begin{proposition} For an element $b \in H^1(C,L^{-1})$, let \begin{equation} 0 \longrightarrow \mathcal{O}_C \longrightarrow E_b \longrightarrow L \longrightarrow 0 \end{equation} be the exact sequence obtained by the extension of $L$ by $\mathcal{O}_C$ with the extension class $b$. Then $\dim H^0(C,E_b)=1$ if and only if the natural cup-product map \[ \langle \ , b \rangle \colon H^0(C,L)\longrightarrow H^1(C,\mathcal{O}_C) \] is an isomorphism. Moreover, $\dim H^0(C,E_b)=1$ for a generic element $b\in H^1(C,L^{-1})$. 
\end{proposition} \begin{proof} Since $H^1(C,L)=\{0\}$, from the exact sequence (5), we obtain the following exact sequence \[ 0 \longrightarrow H^0(C,\mathcal{O}_C) \longrightarrow H^0(C,E_b) \longrightarrow H^0(C,L) \overset{\langle \ ,b\rangle}{\longrightarrow} H^1(C,\mathcal{O}_C) \longrightarrow H^1(C,E_b) \longrightarrow 0 \] Here we note that by definition of the extension with $b$ the connecting homomorphism $\delta \colon H^0(C,L) \rightarrow H^1(C,\mathcal{O}_C)$ is given by $\langle\ ,b \rangle$. Since $\dim H^0(C,E_b)=\dim H^1(C,E_b)+ \deg E_b+2(1-g) = \dim H^1(C,E_b)+1$, the first assertion follows from the above exact sequence. We show the second assersion. We set \[ Z:=\{(s,b) \in H^0(C,L) \times H^1(C,L^{-1}) \mid \langle s,b\rangle =0\}. \] Since $\deg L \otimes \Omega_C^1 = 4g-3 \geq 2g-1$, we have $H^1(C,L\otimes \Omega_C^1)=\{0\}$ and \[ \dim H^1(C,L^{-1}) = \dim H^0(C,L \otimes \Omega_C^1)^* = \deg L \otimes \Omega_C^1 +1-g=3g-2. \]Hence, it is sufficient to show that $\dim Z=3g-2$. In fact, if $\dim Z=3g-2$, then for generic $b \in H^1(C,L^{-1})$, we have $\dim q^{-1}(b)=0$ and it means $q^{-1}(b)=\{(0,b)\}$. Here $q \colon Z \rightarrow H^1(C,L^{-1})$ is the projection. Let $p \colon Z \rightarrow H^0(C,L)$ be the projection. We show that for any $s \in H^0(C,L)\setminus \{0\}$ , $\dim p^{-1}(s)=2g-2$. A section $\sigma \in H^0(C,\Omega_C^1)$ induces the diagram \[ \xymatrix@R=30pt@C=40pt{ H^0(C,L) \times H^1(C,L^{-1}) \ar[r]^-{\langle\ ,\ \rangle }\ar[d]_{\otimes\sigma \times \textrm{id}} &H^1(C,\mathcal{O}_C) \ar[d]^{\otimes\sigma} \\ H^0(C,L \otimes \Omega_C^1) \times H^1(C,L^{-1}) \ar[r]^-{\langle\ ,\ \rangle'} &H^1(C,\Omega_C^1) } \] where the above and below map are natural cup-products and the left and right map are natural maps induced by $\sigma$. Note that $\langle\ , \ \rangle'$ is nondegenerate. Set $s \in H^0(C,L)\setminus\{0\}$. For $b \in H^1(C,L^{-1})$, $\langle s,b \rangle=0$ if and only if for all $\sigma \in H^0(C,\Omega_C^1)$, $\langle s \otimes\sigma , b \rangle'=\langle s,b \rangle\otimes\sigma=0$. Since the set \[ \{ s\otimes \sigma \mid \sigma \in H^0(C,\Omega_C^1) \} \simeq H^0(C,\Omega_C^1) \] is a $g$ dimentional subspace of $H^0(C,L \otimes \Omega_C^1)$ and by nondegeneracy of $\langle\ ,\ \rangle'$, the set \[ \{b \in H^1(C,L^{-1}) \mid \langle s,b \rangle=0 \} \] defines a $2g-2$ dimentional subspace of $H^1(C,L^{-1})$. We therefore obtain $\dim p^{-1}(s)=2g-2$. So we conclude $\dim Z=3g-2$. \end{proof} \begin{proposition} Let $\sum_{i=1}^{n}w_i < 1$. Let $V_0 \subset P^{\boldsymbol{\alpha}}(L)=P^{\boldsymbol{\alpha}}_{(C,\mathbf{t})}(L)$ be the subset which consits of all elements $(E,l_*) \in P^{\boldsymbol{\alpha}}(L)$ satisfying following conditions: \begin{itemize} \setlength{\itemsep}{0cm} \item[(i)] $E$ is an extension of $L$ by $\mathcal{O}_C$. \item[(ii)] $\dim H^0(C,E)=1$. \item[(iii)] For any $i$, $\mathcal{O}_C|_{t_i} \neq l^{(i)}_1$. Here $\mathcal{O}_C|_{t_i}$ is identified with the image by an injection $\mathcal{O}_C|_{t_i} \hookrightarrow E|_{t_i}$. \end{itemize} Then $V_0$ is a nonempty Zariski open subset of $P^{\boldsymbol{\alpha}}(L)$. \end{proposition} \begin{proof} Let $E$ be a vector bundle on $C$ satisfying conditions (i) and (ii). Then we have $\det E \simeq L$ from (i) and $E$ is stable by Lemma 4.2. Let $M_L$ denote the moduli space of rank 2 stable vector bundles on $C$ with the determinat $L$. First, we show that the subset of $M_L$ consising of vector bundles which satisfy (i) and (ii) is open. 
Since rank and degree are coprime, $M_L$ has the universal family $\mathcal{E}$. Set \[ V=\{x \in M_L\mid \dim H^0(C,\mathcal{E}|_{C \times x})=1\}, \] then $V$ is an open subset of $M_L$ by upper semicontinuity of dimentions. Let $q \colon C \times V \rightarrow V$ be the natural projection. By Corollaly 12.9 in \cite{Ha}, $q_*\mathcal{E}$ is an invertible sheaf on $V$ and for any $x \in V$, $(q_*\mathcal{E})|_x$ is naturaly isomorphic to $H^0(C,\mathcal{E}|_{C \times x})$. Hence $q^*q_*\mathcal{E}$ is an invertible sheaf on $C \times V$ and a natural homomorphism $ \iota \colon q^*q_*\mathcal{E} \rightarrow \mathcal{E}$ is injective. By definition, for any $x \in V$, we have $(q^*q_*\mathcal{E})|_{C \times x} \simeq H^0(C, \mathcal{E}|_{C \times x})\otimes_{\mathbb{C}}\mathcal{O}_{C \times x} \simeq\mathcal{O}_{C}$ and $\iota|_{c \times x} \colon \mathcal{O}_C \simeq (q^*q_*\mathcal{E})|_{C \times x}\rightarrow \mathcal{E}|_{C \times x}$ is not zero. Set \[ Y=\{(c,x) \in C \times V \mid \text{$\iota|_{(c,x)} \colon \mathcal{O}_C|_c \simeq (q^*q_*\mathcal{E})|_{(c \times x)}\rightarrow \mathcal{E}|_{(c,x)}$ is zero.}\} \] and $V'=V\setminus q(Y)$, then $Y$ is a closed subset of $C \times V$ and $V'$ is an open subset of $V$. If $x \in V'$, then we obtain $\mathcal{E}|_{C \times x}/\mathcal{O}_C \simeq L$ , that is, $\mathcal{E}|_{C \times x}$ is an extension of $L$ by $\mathcal{O}_C$. Therefore, $V'$ consists of all isomorphism classes of vector bundles satisfying the conditions (i) and (ii), and $V'$ is an open subset of $M_L$. Moreover, by Proposition 4.3, $V'$ is not empty. Second, we prove that $V_0$ is open. By Lemma 4.1, we obtain \[ P^{\boldsymbol{\alpha}}(L)\simeq \mathbb{P}(\mathcal{E}|_{t_1\times M_L})\times_{M_L} \mathbb{P}(\mathcal{E}|_{t_2\times M_L})\times_{M_L} \cdots \times_{M_L} \mathbb{P}(\mathcal{E}|_{t_n\times M_L}). \] For each $t_i$, by projectivization of $\iota|_{t_i \times V'} \colon (q^*q_*\mathcal{E})|_{t_i \times V'} \rightarrow \mathcal{E}|_{t_i \times V'}$, we obtain a morphism $\hat{l}^{(i)}_1 \colon V'\rightarrow \mathbb{P}(\mathcal{E}|_{t_i \times V'})$ such that for all $x \in V'$, $\hat{l}^{(i)}_1(x)$ is the point associated with the image by the immersion $\mathcal{O}_C \hookrightarrow \mathcal{E}|_{C \times x}$ at $t_i$. Let $\varpi \colon P^{\boldsymbol{\alpha}}(L) \rightarrow M_L$ be the natural forgetful map and $p_i \colon P^{\boldsymbol{\alpha}}(L) \rightarrow \mathbb{P}(\mathcal{E}|_{t_i \times M_L})$ be the natural projection. Set \[ V_0=\varpi^{-1}(V')\setminus\bigcup^n_{i=1}p_i^{-1}(\hat{l}^{(i)}_1(V')). \] Then $V_0$ is an open subset of $P^{\boldsymbol{\alpha}}(L)$ and the set of all isomorphism classes of parabolic bundles satisfying (i), (ii) and (iii). \end{proof} We introduce another expression of $V_0$. For $b\in H^1(C,L^{-1})$, let \[ 0 \longrightarrow \mathcal{O}_C \longrightarrow E_b \longrightarrow L \longrightarrow 0 \] be the exact sequence associated with $b$. We set \[ U:=\{b \in H^1(C,L^{-1}) \mid \dim H^0(C,E_b)=1\} \] and then $U$ is an open subset and $0 \notin U$ by Proposition 4.3. The natural homomorphism $\psi \colon H^1(C,L^{-1}(-D)) \rightarrow H^1(C,L^{-1})$ induces the morphism \[ \tilde{\psi} \colon \mathbb{P}H^1(C,L^{-1}(-D)) \setminus\,\mathbb{P}\textrm{Ker}\, \psi \longrightarrow\mathbb{P}H^1(C,L^{-1}). \] Let $\tilde{U} \subset \mathbb{P}H^1(C,L^{-1})$ be the open subset associated with $U$ and $\tilde{V}=\tilde{\psi}^{-1}(\tilde{U})$. 
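Note that the fact that $0 \notin U$, stated above, can also be verified directly: for $b=0$ the extension splits, so that $E_0 \simeq \mathcal{O}_C \oplus L$ and
\[
\dim H^0(C,E_0)=\dim H^0(C,\mathcal{O}_C)+\dim H^0(C,L)=1+g \geq 2.
\]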
Suppose $(E,l_*) \in P^{\boldsymbol{\alpha}}(L)$ satisfies conditions (i), (ii) and (iii) of Proposition 4.4. Let $b \in H^1(C,L^{-1})$ be the element associated with an exact sequence \[ 0 \longrightarrow \mathcal{O}_C \longrightarrow E \longrightarrow L \longrightarrow 0 \] and $[b]$ be the point in $\mathbb{P}H^1(C,L^{-1})$ associated with the subspace generated by $b$. By assumption, we have $b \in U$. Let $\{U_i\}_i$ be an open covering of $C$ and $(c_{ij})_{i,j}, c_{ij}= c_i/c_j$ be transition functions of $L$ over $\{U_i\}_i$. Let $e^i_1$ be the restricton of a global section $\mathcal{O}_C \hookrightarrow E$ on $U_i$ and $e^i_2$ be a local section of $E$ on $U_i$ whose image by the natural map $E \rightarrow E|_{t_i}$ generates $l^{(k)}_1$ at each $t_k \in U_i$. For generators $e^i_1$ and $e^i_2$, transition matrixes $M_{i,j}$ is denoted by \[ M_{ij} = \begin{pmatrix} 1 & b'_{ij} \\ 0 & c_{ij} \end{pmatrix} \] where $b'=(b'_{ij}c_j)_{i,j} \in H^1(C,L^{-1}(-D))$. Then we have $\tilde{\psi}([b'])=[b]$, and so $[b'] \in \tilde{V}$. By using the above argument, we can correspand $[b'] \in \tilde{V}$ to an isomorphism class of a parabolic bundle satisfying all conditions of Proposition 4.4. Thus we conclude $V_0 \simeq \tilde{V}$. \subsection{Two rational maps} Let us fix $\boldsymbol{\nu} \in \mathcal{N}^{(n)}(d)$ and a $\textrm{tr}(\boldsymbol{\nu})$-parabolic connection $\nabla_L$ over $L$. Let $V_0$ be the open subset of $P^{\boldsymbol{\alpha}}(L)$ defined in Proposition 4.4. We set \begin{align*} M^{\boldsymbol{\alpha}}(\boldsymbol{\nu},(L,\nabla_L)) &:=M_{(C,\mathbf{t})}^{\boldsymbol{\alpha}}(\boldsymbol{\nu},(L,\nabla_L)) \\ M^{\boldsymbol{\alpha}}(\boldsymbol{\nu},(L,\nabla_L))^0 &:=\{(E,\nabla,l_*) \in M^{\boldsymbol{\alpha}}(\boldsymbol{\nu},(L,\nabla_L)) \mid (E,l_*) \in V_0\}. \end{align*} Let the morphism \[ \textrm{Bun}\colon M^{\boldsymbol{\alpha}}(\boldsymbol{\nu},(L,\nabla_L))^0 \longrightarrow V_0 \] be the forgetful map which sends $(E,\nabla,l_*)$ to $(E,l_*)$. We can extend this map to the rational map \[ \textrm{Bun} \colon M^{\boldsymbol{\alpha}}(\boldsymbol{\nu},(L,\nabla_L)) \cdots \rightarrow P^{\boldsymbol{\alpha}}(L). \] Assume $\sum_{i=1}^{n}\nu^-_i \neq 0$. For each $(E,\nabla,l_*) \in M^{\boldsymbol{\alpha}}(\boldsymbol{\nu},(L,\nabla_L))^0$, $E$ has the unique sub line bundle which is isomorphic to the trivial line bundle. Suppose $\nabla(\mathcal{O}_C) \subset \mathcal{O}_C \otimes \Omega_C^1(D)$. Then we obtain $\sum_{i=1}^{n}\nu^-_i = 0$ by Fuchs relation because $\mathcal{O}_C|_{t_i} \cap l^{(i)}_1=\{0\}$ for any $i$. It is a contradiction. Therefore, $\nabla(\mathcal{O}_C) \subsetneq \mathcal{O}_C \otimes \Omega_C^1(D)$ and we can define the nonzero section $\varphi_\nabla \in H^0(C,L \otimes \Omega_C^1(D))$ by the composition \[ \mathcal{O}_C \hookrightarrow E \xrightarrow{\nabla} E\otimes \Omega_C^1(D) \rightarrow E/\mathcal{O}_C \otimes \Omega_C^1(D) \simeq L \otimes \Omega_C^1(D). \] We define the morphism \[ \textrm{App} \colon M^{\boldsymbol{\alpha}}(\boldsymbol{\nu},(L,\nabla_L))^0 \longrightarrow \mathbb{P}H^0(C,L\otimes \Omega_C^1(D)) \simeq |L\otimes \Omega_C^1(D)| \] by $\textrm{App}((E,\nabla,l_*))=[\varphi_\nabla]$. Here $[\varphi_\nabla]$ is the point in $\mathbb{P}H^0(C,L\otimes \Omega_C^1(D))$ associated with the subspace of $H^0(C,L\otimes \Omega_C^1(D))$ generated by $\varphi_\nabla$. 
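Note that, although $\nabla$ itself is not $\mathcal{O}_C$-linear, the composition defining $\varphi_\nabla$ is: writing $s$ for the image of $1$ under the inclusion $\mathcal{O}_C \hookrightarrow E$, the Leibniz rule gives, for a local function $a$,
\[
\nabla(a s)= s \otimes da + a\nabla(s),
\]
and $s \otimes da$ maps to zero in $(E/\mathcal{O}_C) \otimes \Omega_C^1(D) \simeq L \otimes \Omega_C^1(D)$, so $\varphi_\nabla$ is indeed a well-defined global section of $L \otimes \Omega_C^1(D)$.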
We can extend this map to the rational map \[ \textrm{App} \colon M^{\boldsymbol{\alpha}}(\boldsymbol{\nu},(L,\nabla_L)) \cdots \rightarrow |L\otimes \Omega_C^1(D)|. \] \subsection{Main theorem} Let \[ \langle\ ,\ \rangle \colon H^0(C,L\otimes \Omega_C^1(D)) \times H^1(C,L^{-1}(-D)) \longrightarrow H^1(C,\Omega_C^1) \] be the natural cup-product. This cup-product is nondegenerate. \begin{theorem} Assume $\sum_{i=1}^{n}\nu^-_i \neq 0$ and $\sum_{i=1}^nw_i < 1$. Let us define the subvariety $\Sigma \subset \mathbb{P}H^0(C,L\otimes \Omega_C^1(D)) \times \mathbb{P}H^1(C,L^{-1}(-D))$ by \[ \Sigma=\{([s],[b])\mid \langle s , b \rangle=0\}. \] Then tha map \[ \textrm{App}\times\textrm{Bun} \colon M^{\boldsymbol{\alpha}}(\boldsymbol{\nu},(L,\nabla_L))^0 \longrightarrow (\mathbb{P}H^0(C,L \otimes \Omega_C^1(D)) \times V_0)\,\setminus\,\Sigma \] is an isomorphism. Therefore, the rational map \[ \textrm{App}\times\textrm{Bun} \colon M^{\boldsymbol{\alpha}}(\boldsymbol{\nu},(L,\nabla_L)) \ \cdots \rightarrow |L \otimes \Omega_C^1(D)| \times P^{\boldsymbol{\alpha}}(L) \] is birational. In particular, $M^{\boldsymbol{\alpha}}(\boldsymbol{\nu},(L,\nabla_L))$ is a rational variety. \end{theorem} Before showing this theorem, we prove the following lemma. \begin{lemma} Let $(E,l_*) \in V_0$ and $b \in H^1(C,L^{-1})$ be an element associated with an extension \[ 0 \longrightarrow \mathcal{O}_C \longrightarrow E \longrightarrow L \longrightarrow 0. \] Then the natural cup-product map \[ \langle\ ,b \rangle' \colon H^0(C,\Omega_C^1) \longrightarrow H^1(C,L^{-1}\otimes \Omega_C^1) \] is an isomorphism. In particular, for an element $b' \in H^1(C,L^{-1}(-D))$ associated with $(E,l_*)$, the composition of the natural cup-product map and natural homomorphism \[ H^0(C,\Omega_C^1) \xrightarrow{\langle\ ,b' \rangle''}H^1(C,L^{-1}(-D)\otimes \Omega_C^1) \longrightarrow H^1(C,L^{-1} \otimes \Omega_C^1) \] is also an isomorphism. \end{lemma} \begin{proof} By Serre duality, we have $H^0(C,\Omega_C^1) \simeq H^1(C,\mathcal{O}_C)^*$ and $\ H^1(C,L^{-1} \otimes \Omega_C^1)\simeq H^0(C,L)^*$. So it suffices to prove that the natural cup-product map \[ \langle\ ,b \rangle''' \colon H^0(C,L) \longrightarrow H^1(C,\mathcal{O}_C) \] is an isomorphism, and it is nothing but the first assersion of Proposition 4.3. Second assersion follows from the following diagram. \[ \xymatrix@R=30pt@C=40pt{ H^0(C,\Omega_C^1) \times H^1(C,L^{-1}(-D)) \ar[r]^-{\langle\ ,\ \rangle'' }\ar[d] &H^1(C,L^{-1}(-D)\otimes\Omega_C^1) \ar[d]\\ H^0(C,\Omega_C^1) \times H^1(C,L^{-1}) \ar[r]^-{\langle\ ,\ \rangle'} &H^1(C,L^{-1}\otimes\Omega_C^1) } \] \end{proof} \begin{proof}[Proof of Theorem 4.5] Firstly, we show that for any $\gamma \in H^0(C,L \otimes \Omega_C^1 (D))$ and $b' \in H^1(C,L^{-1}(-D))$ such that the quasi-parabolic bundle $(E,l_*)$ associated with $b'$ is in $V_0$, there exist unique $\lambda \in \mathbb{C}$ and an unique $\lambda\boldsymbol{\nu}$-parabolic $\lambda$-connection $(E,\nabla,l_*)$ such that $\textrm{tr} \nabla=\lambda\nabla_L$ and $\varphi_\nabla=\gamma$. Let $\{U_i\}_i$ be an open covering of $C$ and $(c_{ij})_{i,j}, c_{ij}= c_i/c_j$ be transition functions of $L$ over $\{U_i\}_i$. Let $e^i_1$ be the restricton of a global section $\mathcal{O}_C \hookrightarrow E$ on $U_i$ and $e^i_2$ be a local section of $E$ on $U_i$ whose image $\bar{e}^i_2$ by the natural map $E \rightarrow E|_{t_i}$ generates $l^{(k)}_1$ at each $t_k \in U_i$. 
For local generators $e^i_1$ and $e^i_2$, we can denote transition matrixes of $E$ by \[ M_{ij} = \begin{pmatrix} 1 & b'_{ij} \\ 0 & c_{ij} \end{pmatrix} \] where $b'=(b'_{ij}c_j)_{i,j} \in H^1(C,L^{-1}(-D))$. A logarithmic $\lambda$-connection $\nabla$ is given in $U_i$ by $\lambda d + A_i$ \[ A_i= \begin{pmatrix} \alpha_i& \beta_i \\ \gamma_i & \delta_i \end{pmatrix} \in M_2(\Omega_C^1(D)(U_i)) \] with the compatibility condition \[ \lambda dM_{ij}+A_iM_{ij}=M_{ij}A_j \] on each intersection $U_i \cap U_j$. By using elements of matrixes, this condition is written by \begin{equation} \left\{ \begin{array}{ll} \dfrac{\gamma_i}{c_i}- \dfrac{\gamma_j}{c_j}=0 \\ \alpha_i-\alpha_j=b'_{ij}\gamma_j \\ \delta_i-\delta_j=-b'_{ij}\gamma_j-\lambda \dfrac{dc_{ij}}{c_{ij}} \\ c_i\beta_i-c_j\beta_j=-(\lambda c_jdb'_{ij}+(b'_{ij}c_j)(\alpha_i-\delta_j)). \end{array} \right. \end{equation} If $(E,\nabla,l_*)$ is a $\lambda\boldsymbol{\nu}$-parabolic $\lambda$-connection, then for each point $t_i$, $\nabla$ satisfies the residual condition \begin{equation} \textrm{res}_{t_k}(A_i)= \begin{pmatrix} \lambda \nu_k^- & 0 \\ * & \lambda \nu_k^+ \end{pmatrix} \end{equation} at each $t_k \in U_i$ because $\bar{e}^i_2$ generates $l^{(k)}_1$. $\nabla_L$ is denoted in $U_i$ by $d+\omega_i$ with the compatibility condition \begin{equation} dc_{ij}+c_{ij}\omega_i=c_{ij}\omega_j \end{equation} on each $U_i \cap U_j$. If $\textrm{tr} \nabla = \lambda\nabla_L$, then the equation \begin{equation} \alpha_i+\delta_i=\lambda\omega_i \end{equation} holds. When $\nabla$ is denoted in $U_i$ by $\lambda d + A_i$, we have $\varphi_\nabla=(\gamma_i/c_i)_i\in H^0(C,L \otimes \Omega_C^1(D))$. So if $\varphi_\nabla=\gamma$, then we have \begin{equation} (\gamma_i/c_i)_i=\gamma. \end{equation} We show that there exist $\lambda \in \mathbb{C}$ and $\alpha_i,\beta_i,\gamma_i,\delta_i \in \Omega_C^1(D)(U_i)$ satisfying the conditions (6), (7), (9) and (10) uniquely. Step 1: we find $\gamma_i$. From (10), we have to set $\gamma_i=c_i\gamma$. Step 2: we find $\alpha_i$. Fix a section $\alpha^0_i \in \Omega_C^1(D)(U_i)$ which have the residue data $\textrm{res}_{t_k}(\alpha^0_i)=\nu^-_k$ at each $t_k \in U_i$. The cocycle $(\alpha^0_i-\alpha^0_j)_{i,j}$ defines an element of $H^1(C,\Omega_C^1)$. If $(\alpha^0_i-\alpha^0_j)_{i,j}$ is zero in $H^0(C,\Omega_C^1)$, then there exist sections $\tilde{\alpha}_i \in \Omega_C^1(U_i)$ on each $i$ such that $\alpha^0_i-\alpha^0_j=\tilde{\alpha}_i-\tilde{\alpha}_j$ for any $i,j$. $(\alpha^0_i-\tilde{\alpha_i})_i$ defines a global logarithmic 1-form whose sum of residues $\sum_{i=1}^n \nu^-_i$ is not zero. This contradicts the residue theorem. Therefore, the cocycle $(\alpha^0_i-\alpha^0_j)_{i,j}$ is a generator of $H^1(C,\Omega_C^1)$ and there is the unique $\lambda \in \mathbb{C}$ such that $\lambda(\alpha^0_i-\alpha^0_j)_{i,j}=(b'_{ij}\gamma_j)_{i,j}$. Let $\tilde{\alpha}_i \in \Omega_C^1(U_i)$ be sections such that \[ \tilde{\alpha}_i-\tilde{\alpha}_j=b'_{ij}\gamma_j-\lambda(\alpha^0_i-\alpha^0_j) \] for any $i,j$. Set $\alpha_i=\lambda \alpha^0_i+\tilde{\alpha}_i$, then $(\alpha_i)_i$ is a solution of the second equation of (6) and have the residue data $\textrm{res}_{t_k}(\alpha_i)=\lambda\nu^-_k$. Note that $(\alpha_i)_i$ is still uniquely determined. Actually the difference of two solutions of the second equation of (6) having the same residue data defines a global 1-form and now $\dim H^0(C,\Omega_C^1) \geq g \geq 1$. Step 3: we find $\delta_i$. 
From (9), we have to set $\delta_i=\lambda\omega_i-\alpha_i$. It is clear that $(\delta_i)_i$ is a solution of the third equation of (6) and has the residue data $\textrm{res}_{t_k}(\delta_i)=\lambda\nu^+_k$. $\delta_i$ is uniquely determined by $\alpha_i$. Step 4: we find $\beta_i$ and show that $\alpha_i$ is uniquely determined. From (8) and (9), $(-(\lambda c_jdb'_{ij}+(b'_{ij}c_j)(\alpha_i-\delta_j)))_{i,j} $ defines a cocycle of $H^1(C,L^{-1}\otimes \Omega_C^1)$. Since the linear map $\langle\ , b'\rangle'' \colon H^0(C,\Omega_C^1)\rightarrow H^1(C,L^{-1}\otimes \Omega_C^1)$ is an isomorphism by Lemma 4.6, there exists a unique global 1-form $\zeta=(\zeta_i/c_i)_i \in H^0(C,\Omega_C^1)$ such that \[ \langle2\zeta ,b'\rangle''=(2b'_{ij}\zeta_j)_{i,j}=-(\lambda c_jdb'_{ij}+(b'_{ij}c_j)(\alpha_i-\delta_j))_{i,j} \] in $H^1(C,L^{-1}\otimes \Omega_C^1)$. Hence, we obtain \[ -(\lambda c_jdb'_{ij}+(b'_{ij}c_j)((\alpha_i+\zeta_i/c_i)-(\delta_j-\zeta_j/c_j)))_{i,j}=0 \] and there exist $\tilde{\beta}_i\in (L^{-1}\otimes \Omega_C^1)(U_i)$ such that \[ \tilde{\beta}_i-\tilde{\beta}_j=-(\lambda c_jdb'_{ij}+(b'_{ij}c_j)((\alpha_i+\zeta_i)-(\delta_j-\zeta_j))) \] for any $i,j$. Set $\beta_i=\tilde{\beta}_i/c_i$, then $(\beta_i)_i$ is a solution of the fourth equation of (6) and has the residue data $\textrm{res}_{t_k}(\beta_i)=0$. Since $H^0(C, L^{-1}\otimes \Omega_C^1)\simeq H^1(C,L)^* = \{0\}$, $\beta_i$ is uniquely determined. When $\lambda=0$, the cocycle $(b'_{ij}\gamma_j)_{i,j}$ is zero because $\alpha_i \in \Omega_C^1(U_i)$. Conversely, assume $(b'_{ij}\gamma_j)_{i,j}=0$. Then there exist $\tilde{\alpha}_i \in \Omega_C^1(U_i)$ for each $i$ such that $\alpha_i-\alpha_j=b'_{ij}\gamma_j=\tilde{\alpha}_i-\tilde{\alpha}_j$. The cocycle $(\alpha_i - \tilde{\alpha}_i)_i$ defines a global logarithmic 1-form on $C$. By the residue theorem, we have \[ \sum_{i=1}^n\lambda \nu^-_i=0. \] By assumption, we obtain $\lambda=0$. Therefore, the morphism \[ \textrm{App}\times\textrm{Bun} \colon M^{\boldsymbol{\alpha}}(\boldsymbol{\nu},(L,\nabla_L))^0 \longrightarrow (\mathbb{P}H^0(C,L \otimes \Omega_C^1(D)) \times V_0)\,\setminus\,\Sigma \] is bijective. By Zariski's main theorem (for example, see Chapter 3, \S 9, Proposition 1 in \cite{Mu}), $\textrm{App} \times \textrm{Bun}$ is an isomorphism. \end{proof} The following proposition is the same as Proposition 4.6 in \cite{LS} and follows by using the same argument as in its proof. \begin{proposition} Suppose $\sum_{i=1}^n\nu^-_i=0$. Then $M^{\boldsymbol{\alpha}}(\boldsymbol{\nu},(L,\nabla_L))^0$ is isomorphic to the total space of the cotangent bundle $T^*V_0$ and the map $\textrm{Bun} \colon M^{\boldsymbol{\alpha}}(\boldsymbol{\nu},(L,\nabla_L))^0 \rightarrow V_0$ corresponds to the natural projection $T^*V_0 \rightarrow V_0$. Moreover, the section $\nabla_0 \colon V_0 \rightarrow M^{\boldsymbol{\alpha}}(\boldsymbol{\nu},(L,\nabla_L))^0$ corresponding to the zero section $V_0 \rightarrow T^*V_0$ is given by those reducible connections preserving the destabilizing subbundle $\mathcal{O}_C$. \end{proposition} \subsection{Another proof} We will show that $\textrm{App} \times \textrm{Bun}$ is a birational map in another way. First, we show the existence of a parabolic connection over a parabolic bundle. The following lemma is an analogue of Lemma 2.5 in \cite{FL}. \begin{lemma} For each $(E,l_*) \in P^{\boldsymbol{\alpha}}(L)$, there is a $\boldsymbol{\nu}$-parabolic connection $(E,\nabla, l_*)$ such that $\textrm{tr} \nabla \simeq \nabla_L$.
\end{lemma} \begin{proof} Let $\{U_i\}_i$ be an open covering of $C$ and $\nabla'_i$ be a logarithmic connection on $U_i$ satisfying $(\textrm{res}_{t_k}(\nabla'_i)-\nu^+_k\textrm{id})(l^{(k)}_1)=0, (\textrm{res}_{t_k}(\nabla'_i)-\nu^-_k\textrm{id})(E|_{t_i}) \subset l^{(k)}_1$ at each $t_k \in U_i$ and $\textrm{tr} \nabla'_i=\nabla_L|_{U_i}$. We define sheaves $\mathcal{E}_0$ and $\mathcal{E}_1$ on $C$ by \begin{align*} \mathcal{E}^0 &:=\{s \in \mathcal{E}nd(E) \mid \text{$\textrm{tr} (s)=0$ and $s_{t_i}(l^{(i)}_1) \subset l^{(i)}_1$ for any $i$}\} \\ \mathcal{E}^1 &:=\{s \in \mathcal{E}nd(E) \otimes \Omega_C^1(D) \mid \text{$\textrm{tr} (s)=0$ and $\textrm{res}_{t_i}(s)(l^{(i)}_j)\subset l^{(i)}_{j+1}$ for any $i, j$}\}. \end{align*} Then the isomorphism $\mathcal{E}^1 \simeq (\mathcal{E}^0)^\vee \otimes \Omega_C^1$ holds. Differences $\nabla'_i-\nabla'_j$ define the cocycle \[ (\nabla'_i-\nabla'_j)_{i,j} \in H^1(C,\mathcal{E}^1). \] By Serre duality and simpleness of $E$, we obtain \[ H^1(C,\mathcal{E}^1) \simeq H^0(C, \mathcal{E}^0)^* = \{0\}. \] Hence, there exist $\Phi_i \in \mathcal{E}^1(U_i)$ for each $i$ such that $\nabla'_i-\nabla'_j=\Phi_i-\Phi_j$. Set $\nabla_i=\nabla'_i-\Phi_i$. Then $(\nabla_i)_i$ defines a $\boldsymbol{\nu}$-parabolic connection $\nabla$ over $(E,l_*)$ satisfying $\textrm{tr} \nabla \simeq \nabla_L$. \end{proof} For a quasi-parabolic bundle $(E,l_*) \in V_0$, let us fix a $\boldsymbol{\nu}$-parabolic connection $(E,\nabla,l_*) \in \textrm{Bun}^{-1}((E,l_*))$. Let $(E,\nabla', l_*) \in \textrm{Bun}^{-1}((E,l_*))$ be another $\boldsymbol{\nu}$-parabolic connection. Then $\nabla'-\nabla$ is a global section of $\mathcal{E}^1$ which is the sheaf defined in the proof of Lemma 4.8. Therefore, we have the isomorphism $\textrm{Bun}^{-1}((E,l_*)) \simeq \nabla+H^0(C,\mathcal{E}^1)$. For a section $\Theta \in H^0(\mathcal{E}nd(E) \otimes \Omega_C^1(D))$, we define the section $\varphi_\Theta \in H^0(C,L \otimes \Omega_C^1(D))$ by the composition \[ \mathcal{O}_C \hookrightarrow E \xrightarrow{\Theta} E\otimes \Omega_C^1(D) \rightarrow E/\mathcal{O}_C \otimes \Omega_C^1(D) \simeq L \otimes \Omega_C^1(D) \] and define the map \[ \varphi \colon H^0(C,\mathcal{E}nd(E) \otimes \Omega_C^1(D)) \longrightarrow H^0(C,L \otimes \Omega^1(D)) \] by $\varphi(\Theta)=\varphi_\Theta$. It is crealy linear. Let us define the sheaf $\mathcal{F}^1$ by \[ \mathcal{F}^1 =\{s \in \mathcal{E}nd(E) \otimes \Omega_C^1(D) \mid \text{$\textrm{res}_{t_i}(s)(l^{(i)}_j)\subset l^{(i)}_{j+1}$ for all $i, j$}\}. \] Assume $\Theta \in H^0(C,\mathcal{F}^1)$ satisfies $\varphi_\Theta=0$, that is, $\Theta(\mathcal{O}_C) \subset \mathcal{O}_C \otimes \Omega_C^1(D)$. By definitions of $V_0$ and $\mathcal{E}^1$, we obtain $\textrm{res}_{t_i}(\Theta)(\mathcal{O}_C|_{t_i}) \subset \mathcal{O}_C|_{t_i} \cap l^{(i)}_1=\{0\}$ for any $i$. Hence, we have $\Theta(\mathcal{O}_C) \subset \mathcal{O}_C \otimes \Omega_C$, that is, $\Theta|_{\mathcal{O}_C}$ is a global section of $\Omega_C^1$. \begin{lemma} The linear map $H^0(C,\mathcal{F}^1)\cap\textrm{Ker}\, \varphi \rightarrow H^0(C,\Omega_C^1),\ \Theta \mapsto \Theta|_{\mathcal{O}_C}$ is an isomorphism. \end{lemma} \begin{proof} For $\mu \in H^0(C,\Omega_C^1)$, we define $\Theta=\textrm{id}_E\otimes \mu$. Then we have $\Theta \in H^0(C,\mathcal{F}^1)\cap\textrm{Ker}\, \varphi$ and $\Theta|_{\mathcal{O}_C}=\mu$. The linear map is hence surjective. We show that the map is injective. 
If $\Theta \in H^0(C,\mathcal{F}^1)\cap\textrm{Ker}\, \varphi$ satisfies $\Theta|_{\mathcal{O}_C}=0$, then $\Theta$ induces the homomorphism $\hat{\Theta} \colon L \simeq E/\mathcal{O}_C \rightarrow E \otimes \Omega_C^1(D)$. Moreover, $\textrm{res}_{t_i}(\Theta)=0$ for any $i$ (indeed, $\textrm{res}_{t_i}(\Theta)$ vanishes on $\mathcal{O}_C|_{t_i}$ and on $l^{(i)}_1$, which together span $E|_{t_i}$), and this implies $\textrm{res}_{t_i}(\hat{\Theta})=0$, so we obtain $\hat{\Theta}(L) \subset E \otimes \Omega_C^1$. Since $\textrm{rank}\, E=2$, we have isomorphisms $E^\vee \simeq E \otimes (\det E)^{-1} \simeq E \otimes L^{-1}$. By this isomorphism and Serre duality, \[ \textrm{Hom}(L,E \otimes \Omega_C^1) \simeq H^0(C,L^{-1} \otimes E \otimes \Omega_C^1)\simeq H^0(C,E^\vee \otimes \Omega_C^1) \simeq H^1(C,E)^*=\{0\}. \] Hence we obtain $\hat{\Theta}=0$ and this implies $\Theta=0$. \end{proof} \begin{proof}[Another proof of the second assertion of Theorem 4.5] We show that for each $(E,l_*) \in V_0$, the morphism \[ \textrm{App} \colon \textrm{Bun}^{-1}((E,l_*)) \longrightarrow \mathbb{P} H^0(C,L\otimes \Omega_C^1(D)) \] is injective. Let us fix a $\boldsymbol{\nu}$-parabolic connection $(E,\nabla,l_*) \in \textrm{Bun}^{-1}((E,l_*))$. If there exists $\Theta \in H^0(C,\mathcal{E}^1)$ such that $\varphi_\nabla=\varphi_\Theta$, then $\nabla-\Theta$ is a $\boldsymbol{\nu}$-parabolic connection with $\varphi_{\nabla-\Theta}=0$, which is a contradiction. Thus, we have \[ \{\varphi_\Theta \mid \Theta \in H^0(C,\mathcal{E}^1)\} \cap \mathbb{C}\varphi_\nabla =\{0\}. \] Hence, we only need to show that the linear map $\varphi \colon H^0(C,\mathcal{E}^1) \rightarrow H^0(C,L \otimes \Omega_C^1(D))$ is injective. Suppose a section $\Theta \in H^0(C,\mathcal{E}^1)$ satisfies $\varphi_\Theta=0$. By the proof of Lemma 4.9, there is a section $\mu \in H^0(C,\Omega_C^1)$ such that $\Theta = \textrm{id}_E \otimes \mu$. Since $\textrm{tr}\, \Theta =0$, we get $\mu=0$ and this means $\Theta=0$. \end{proof} \section*{Acknowledgments} The author would like to thank Professor Masa-Hiko Saito and Kota Yoshioka for useful comments and valuable discussions. He also thanks Professor Masa-Hiko Saito for reading this paper carefully. \end{document}
\begin{document} \frontmatter \setstretch{1.3} \fancyhead{} \rhead{\thepage} \lhead{} \pagestyle{fancy} \newcommand{\HRule}{\rule{\linewidth}{0.5mm}} \hypersetup{pdftitle={\ttitle}} \hypersetup{pdfsubject=\subjectname} \hypersetup{pdfauthor=\authornames} \hypersetup{pdfkeywords=\keywordnames} \begin{titlepage} \begin{center} \textsc{\LARGE \univnameA}\\ \textsc{\LARGE \univnameB}\\[1.5cm] \textsc{\Large Master Thesis}\\[0.5cm] \HRule \\[0.4cm] {\huge \bfseries \ttitle}\\[0.4cm] \HRule \\[1.5cm] \begin{minipage}{0.4\textwidth} \begin{flushleft} \large \emph{Author:}\\ \authornames \end{flushleft} \end{minipage} \begin{minipage}{0.4\textwidth} \begin{flushright} \large \emph{Supervisors:} \\ \href{http://www.science.uva.nl/math/People/show_person.php?Person_id=Eisner-Lobova+T.}{\supnameAa}\\ \href{http://staff.science.uva.nl/~alejan/}{\supnameAb} \\% Supervisor name - remove the \href bracket to remove the link \href{http://www.im.uj.edu.pl/instytut/pracownik?id=219}{\supnameB} \end{flushright} \end{minipage}\\[3cm] \large \textit{A thesis submitted in fulfilment of the requirements\\ for the degree of \degreename}\\[0.3cm] \textit{in the}\\[0.4cm] \deptnameA,\ \univnameA \\ \deptnameB,\ \univnameB \\[2cm] {\large \today}\\[4cm] \end{center} \end{titlepage} \Declaration{ \addtocontents{toc}{ } I, \authornames, declare that this thesis titled \emph{``\ttitle''} is my own. I confirm that: \begin{itemize} \item[\tiny{$\blacksquare$}] This work was done wholly or mainly while in candidature for a research degree at these Universities. \item[\tiny{$\blacksquare$}] Where I have consulted the published work of others, this is always clearly attributed. \item[\tiny{$\blacksquare$}] Where I have quoted from the work of others, the source is always given. With the exception of such quotations, this thesis is entirely my own work. \item[\tiny{$\blacksquare$}] I have acknowledged all main sources of help. \end{itemize} Signed:\\ \rule[1em]{25em}{0.5pt} Date:\\ \rule[1em]{25em}{0.5pt} } \clearpage \pagestyle{empty} \null \textit{``Thanks to my solid academic training, today I can write hundreds of words on virtually any topic without possessing a shred of information, which is how I got a good job in journalism."} \begin{flushright} Dave Barry \end{flushright} \null \clearpage \addtotoc{Abstract} \abstract{\addtocontents{toc}{ } Ultrafilters are very useful and versatile objects with applications throughout mathematics: in topology, analysis, combinatorics, model theory, and even the theory of social choice. Proofs based on ultrafilters tend to be shorter and more elegant than their classical counterparts. In this thesis, we survey some of the most striking ways in which ultrafilters can be exploited in combinatorics and ergodic theory, with a brief mention of model theory. In the initial sections, we establish the basics of the theory of ultrafilters in the hope of keeping our exposition as self-contained as possible, and then proceed to specific applications. Important combinatorial results we discuss are the theorems of Hindman, van der Waerden and Hales-Jewett. Each of them asserts essentially that in a finite partition of, respectively, the natural numbers or words over a finite alphabet, one cell retains much of the combinatorial structure. We next turn to results in ergodic theory, which rely strongly on combinatorial preliminaries.
They assert essentially that certain sets of return times are combinatorially rich. We finish by presenting the ultrafilter proof of Arrow's famous Impossibility Theorem and the construction of the ultraproduct in model theory. } \clearpage \setstretch{1.3} \acknowledgements{\addtocontents{toc}{ } Any advances that are made in this thesis would not have been possible without the guidance and help from the supervisors under whom the author had the privilege to work. Many thanks go to Pavel Zorin-Kranich, who was de facto an informal supervisor of this thesis, to Miko\l{}aj Fr\k{a}czyk for the illuminating discussions and his unwavering enthusiasm, and to the StackExchange community for providing an endless supply of answers to the endless stream of questions produced during our work. We are also grateful to Professor Bergelson for expressing an interest in our research, and providing some useful remarks. Last, but not least, the author wishes to thank his close ones for the continual support and understanding during the time of writing of this thesis. The \LaTeX\ template to which this thesis owes its appearance was created by Steven Gunn and Sunil Patel, and is distributed under the Creative Commons License CC BY-NC-SA 3.0 at \url{http://www.latextemplates.com}. } \clearpage \pagestyle{fancy} \lhead{\emph{Contents}} \tableofcontents \mainmatter \pagestyle{fancy} \input{ ChapterIntro} \input{ ChapterA} \input{ ChapterC} \input{ ChapterB} \input{ ChapterD} \addtocontents{toc}{ } \backmatter \nocite{*} \label{Bibliography} \lhead{\emph{Bibliography}} \bibliographystyle{alpha} \bibliography{Bibliography} \addcontentsline{toc}{chapter}{Index} \index{FP|seealso{$\mathsf{IP}$-set}} \index{FU|seealso{$\mathsf{IP}$-set}} \index{FS|seealso{$\mathsf{IP}$-set}} \index{topology!limit|see{limit}} \index{topology!topological semigroup|see{semigroup!topological semigroup}} \index{topology!ultrafilter|see{ultrafilter!topological structure}} \index{symmetric discrete derivative|see{discrete derivative!symmetric}} \index{T@$\mathbb{T}$|see{torus}} \index{IP@$\mathsf{IP}$|seealso{idempotent}} \printindex \end{document}
\begin{document} \title{Commutativity and ideals in algebraic crossed products} \date{January 31, 2007} \author{Johan {\"O}inert} \address{Department of Mathematics and Computer Science\\ University of Antwerp, Middelheimlaan 1\\ B-2020 Antwerp, Belgium {\rm and} Centre for Mathematical Sciences, Lund University, Box 118, SE-221 00 Lund, Sweden} \thanks{This work was partially supported by the Crafoord Foundation, The Royal Physiographic Society in Lund, The Swedish Royal Academy of Sciences, The Swedish Foundation of International Cooperation in Research and Higher Education (STINT) and ``LieGrits'', a Marie Curie Research Training Network funded by the European Community as project MRTN-CT 2003-505078} \email{[email protected]} \author{Sergei D. Silvestrov} \address{Centre for Mathematical Sciences, Lund University, Box 118, SE-221 00 Lund, Sweden} \email{[email protected]} \subjclass[2000]{16S35, 16W50, 16D25, 16U70} \keywords{Crossed products, graded rings, ideals, maximal commutativity} \begin{abstract} We investigate properties of commutative subrings and ideals in non-commutative algebraic crossed products for actions by arbitrary groups. A description of the commutant of the base coefficient subring in the crossed product ring is given. Conditions for commutativity and maximal commutativity of the commutant of the base subring are provided in terms of the action as well as in terms of the intersection of ideals in the crossed product ring with the base subring, especially taking into account both the case of base rings without non-trivial zero-divisors and the case of base rings with non-trivial zero-divisors. \end{abstract} \maketitle \section{Introduction} The description of commutative subrings and subalgebras and of ideals in non-commutative rings and algebras is an important direction of investigation for any class of non-commutative algebras or rings, because it allows one to relate representation theory, non-commutative properties, graded structures, ideals and subalgebras, homological and other properties of non-commutative algebras to spectral theory, duality, algebraic geometry and topology naturally associated with the commutative subalgebras. In representation theory, for example, one of the keys to the construction and classification of representations is the method of induced representations. The underlying structures behind this method are the semi-direct products or crossed products of rings and algebras by various actions. When a non-commutative ring or algebra is given, one looks for a subring or a subalgebra such that its representations can be studied and classified more easily, and such that the whole ring or algebra can be decomposed as a crossed product of this subring or subalgebra by a suitable action. Then the representations for the subring or subalgebra are extended to representations of the whole ring or algebra using the action and its properties. A description of representations is most tractable for commutative subrings or subalgebras, since these are, via spectral theory and duality, directly connected to algebraic geometry, topology or measure theory. 
If one has found a way to present a non-commutative ring or algebra as a crossed product of a commutative subring or subalgebra by some action on it of the elements from outside the subring or subalgebra, then it is important to know whether this subring or subalgebra is maximal abelian or, if not, to find a maximal abelian subring or subalgebra containing the given subalgebra, since if the selected subring or subalgebra is not maximal abelian, then the action will not be entirely responsible for the non-commutative part as one would hope, but will also have the commutative trivial part taking care of the elements commuting with everything in the selected commutative subring or subalgebra. This maximality of a commutative subring or subalgebra and associated properties of the action are intimately related to the description and classifications of representations of the non-commutative ring or algebra. Little is known in general about connections between properties of the commutative subalgebras of crossed product rings and algebras and properties of the action. A remarkable result in this direction is known, however, in the context of crossed product $C^*$-algebras. When the algebra is described as the crossed product $C^*$-algebra $C(X) \rtimes_{\alpha} \mathbb{Z}$ of the algebra of continuous functions on a compact Hausdorff space $X$ by an action of $\mathbb{Z}$ via the composition automorphism associated with a homeomorphism $\sigma : X \to X$, it is known that $C(X)$ sits inside the $C^*$-crossed product as a maximal abelian subalgebra if and only if for every positive integer $n$, the set of points in $X$ having period $n$ under iterations of $\sigma$ has no interior points \cite[Theorem 5.4]{TomiyamaNotes1}, \cite[Corollary 3.3.3]{TomiyamaBook}, \cite[Theorems 2.8, 5.2]{TomiyamaNotes2}, \cite[Proposition 4.14]{Zeller-Meier}, \cite[Lemma 7.3.11]{LiBing-Ren}. This condition is equivalent to the action of $\mathbb Z$ on $X$ being topologically free in the sense that the non-periodic points of $\sigma$ are dense in $X$. In \cite{svesildej}, a purely algebraic variant of the crossed product allowing for more general classes of algebras than merely continuous functions on compact Hausdorff spaces serving as ``base coefficient algebras'' in the crossed products was considered. In the general set theoretical framework of a crossed product algebra $A \rtimes_{\alpha}\mathbb{Z}$ of an arbitrary subalgebra $A$ of the algebra $\mathbb{C}^X$ of complex-valued functions on a set $X$ (under the usual pointwise operations) by $\mathbb{Z}$ acting on $A$ via a composition automorphism defined by a bijection of $X$, the essence of the matter is revealed. Topological notions are not available here and thus the condition of freeness of the dynamics as described above is not applicable, so that it has to be generalized in a proper way in order to be equivalent to the maximal commutativity of $A$. In \cite{svesildej} such a generalization was provided by involving separation properties of $A$ with respect to the space $X$ and the action for significantly more arbitrary classes of base coefficient algebras and associated spaces and actions. The (unique) maximal abelian subalgebra containing $A$ was described as well as general results and examples and counterexamples on equivalence of maximal commutativity of $A$ in the crossed product and the generalization of topological freeness of the action. 
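For a standard illustration of this topological condition, consider the rotation $\sigma : x \mapsto x + \theta \pmod{1}$ on the circle $X = \mathbb{T}$. If $\theta$ is irrational, then no point of $X$ is periodic, the action is topologically free and $C(X)$ is maximal abelian in $C(X) \rtimes_{\alpha} \mathbb{Z}$; if instead $\theta = p/q$ is rational (in lowest terms), then every point has period $q$, so the set of points of period $q$ is all of $X$ and in particular has interior points, and $C(X)$ fails to be maximal abelian.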
In this article, we bring these results and interplay into a more general algebraic context of crossed product rings (or algebras) for crossed systems with arbitrary group actions and twisting cocycle maps \cite{MoGR}. We investigate the connections with the ideal structure of a general crossed product ring, describe the center of crossed product rings, describe the commutant of the base coefficient subring in a crossed product ring of a general crossed system, and obtain conditions for maximal commutativity of the commutant of the base subring in terms of the action as well as in terms of intersection of ideals in the crossed product ring with the base subring, specially taking into account both the case of base rings without non-trivial zero-divisors and the case of base rings with non-trivial zero-divisors. \section{Preliminaries} In this section, we recall the basic objects and the notation, from \cite{MoGR}, which are necessary for the understanding of the rest of this article. \subsection*{Gradings} Let $G$ be a group with unit element $e$. The ring $\mathcal{R}$ is \emph{$G$-graded} if there is a family $\{\mathcal{R}_\sigma\}_{\sigma \in G}$ of additive subgroups $\mathcal{R}_\sigma$ of $\mathcal{R}$ such that $\mathcal{R}=\bigoplus_{\sigma \in G} \mathcal{R}_\sigma$ and $\mathcal{R}_\sigma \mathcal{R}_\tau \subseteq \mathcal{R}_{\sigma \tau}$ (\emph{strongly $G$-graded} if, in addition, $\supseteq$ also holds) for every $\sigma, \tau \in G$. \subsection*{Crossed products} If $\mathcal{R}$ is a unital ring, then $U(\mathcal{R})$ denotes the group of multiplication invertible elements of $\mathcal{R}$. A unital $G$-graded ring $\mathcal{R}$ is called a \emph{$G$-crossed product} if $U(\mathcal{R})\cap \mathcal{R}_\sigma \neq \emptyset$ for every $\sigma \in G$. Note that every $G$-crossed product is strongly $G$-graded, as explained in \cite[p.2]{MoGR}. \subsection{Crossed systems}\label{crossedsystem} \begin{defn} A $G$-crossed system is a quadruple $\{\mathcal{A},G,\sigma,\alpha\}$, consisting of a unital associative ring $\mathcal{A}$, a group $G$ (with unit element $e$), a map \begin{displaymath} \sigma : G \to \aut(\mathcal{A}) \end{displaymath} and a \emph{$\sigma$-cocycle} map \begin{displaymath} \alpha : G \times G \to U(\mathcal{A}) \end{displaymath} such that for any $x,y,z\in G$ and $a\in \mathcal{A}$ the following conditions hold: \begin{itemize} \item[(i)] $\sigma_x(\sigma_y(a)) = \alpha(x,y) \, \sigma_{xy}(a) \, \alpha(x,y)^{-1}$ \,; \item[(ii)] $\alpha(x,y) \, \alpha(xy,z) = \sigma_x(\alpha(y,z)) \, \alpha(x,yz)$ \,; \item[(iii)] $\alpha(x,e)=\alpha(e,x)=1_{\aalg}$ \, . \end{itemize} \end{defn} \begin{rem}\label{zeroandone} Note that, by combining conditions (i) and (iii), we get $\sigma_e(\sigma_e(a))=\sigma_e(a)$ for all $a\in \mathcal{A}$. Furthermore, $\sigma_e : \mathcal{A} \to \mathcal{A}$ is an automorphism and hence $\sigma_e = \identity_{\mathcal{A}}$. Also note that, from the definition of $\aut(\mathcal{A})$, we have $\sigma_g(0_{\aalg})=0_{\aalg}$ and $\sigma_g(1_{\aalg})=1_{\aalg}$ for any $g \in G$. \end{rem} \begin{rem}\label{conditionsAcomm} From condition (i) it immediately follows that $\sigma$ is a group homomorphism if $\mathcal{A}$ is commutative or if $\alpha$ is trivial. \end{rem} \begin{defn} Let $\overline{G}$ be a copy (as a set) of $G$. 
Given a $G$-crossed system $\{\mathcal{A},G,\sigma,\alpha\}$, we denote by $\aalg \rtimes_\alpha^{\sigma} G$ the free left $\mathcal{A}$-module having $\overline{G}$ as its basis and in which the multiplication is defined by \begin{eqnarray}\label{leftmodulemultiplication} (a_1 \overline{x})(a_2 \overline{y}) = a_1 \sigma_x(a_2) \, \alpha(x,y) \, \overline{xy} \end{eqnarray} for all $a_1,a_2 \in \mathcal{A}$ and $x,y \in G$. Elements of $\aalg \rtimes_\alpha^{\sigma} G$ may be expressed as formal sums $\sum_{g\in G} a_g \overline{g}$ where $a_g \in \mathcal{A}$ and $a_g = 0_{\aalg}$ for all but a finite number of $g \in G$. Explicitly, this means that the addition and multiplication of two arbitrary elements $\sum_{s\in G} a_s \overline{s}, \sum_{t\in G} b_t \overline{t} \in \aalg \rtimes_\alpha^{\sigma} G$ is given by \begin{eqnarray} \sum_{s\in G} a_s \overline{s} + \sum_{t\in G} b_t \overline{t} &=& \sum_{g\in G} (a_g + b_g) \overline{g} \nonumber \\ \left( \sum_{s\in G} a_s \overline{s} \right) \left( \sum_{t\in G} b_t \overline{t} \right) &=& \sum_{(s,t) \in G \times G} (a_s \overline{s})(b_t \overline{t}) = \sum_{(s,t) \in G \times G} a_s \, \sigma_s(b_t) \, \alpha(s,t) \, \overline{st} \nonumber \\ &=& \sum_{g\in G} \left( \sum_{\{(s,t) \in G \times G \mid st=g\}} a_s \, \sigma_s(b_t) \, \alpha(s,t) \right) \overline{g} \label{product}. \end{eqnarray} \end{defn} \begin{rem} The ring $\mathcal{A}$ is unital, with unit element $1_{\aalg}$, and it is easy to see that $(1_{\aalg} \, \overline{e})$ is the multiplicative identity in $\aalg \rtimes_\alpha^{\sigma} G$. \end{rem} By abuse of notation, we shall sometimes let $0$ denote the zero element in $\aalg \rtimes_\alpha^{\sigma} G$. The proofs of the two following propositions can be found in \cite[Proposition 1.4.1, p.11]{MoGR} and \cite[Proposition 1.4.2, pp.12-13]{MoGR} respectively. \begin{prop} Let $\{\mathcal{A},G,\sigma,\alpha\}$ be a $G$-crossed system. Then $\aalg \rtimes_\alpha^{\sigma} G$ is an associative ring (with the multiplication defined in \eqref{leftmodulemultiplication}). Moreover, this ring is $G$-graded, $\aalg \rtimes_\alpha^{\sigma} G = \bigoplus_{g\in G} \, \mathcal{A} \overline{g}$, and it is a $G$-crossed product. \end{prop} \begin{prop} Every $G$-crossed product $\mathcal{R}$ is of the form $\aalg \rtimes_\alpha^{\sigma} G$ for some ring $\mathcal{A}$ and some maps $\sigma,\alpha$. \end{prop} \begin{rem} If $k$ is a field and $\mathcal{A}$ is a $k$-algebra, then so is $\aalg \rtimes_\alpha^{\sigma} G$. \end{rem} \noindent The base coefficient ring $\mathcal{A}$ is naturally embedded as a subring into $\aalg \rtimes_\alpha^{\sigma} G$. Consider the canonical isomorphism \begin{displaymath} \iota : \mathcal{A} \hookrightarrow \aalg \rtimes_\alpha^{\sigma} G, \quad a \mapsto a \overline{e}. \end{displaymath} We denote by $\tilde{\mathcal{A}}$ the image of $\mathcal{A}$ under $\iota$ and by $\mathcal{A}^G = \{ a\in \mathcal{A} \, \mid \, \sigma_s(a)=a, \,\, \forall s\in G\}$ the \emph{fixed ring} of $\mathcal{A}$. \begin{rem}\label{AcommAtildecomm} Obviously, $\mathcal{A}$ is commutative if and only if $\tilde{\mathcal{A}}$ is commutative. \end{rem} \begin{exmp} Let $\mathcal{A}$ be commutative and $\mathcal{B}=\aalg \rtimes_\alpha^{\sigma} G$ a crossed product. For $x\in G$ and $c,d \in \mathcal{A}$ we may write \begin{eqnarray*} (c \, \overline{x})(d \, \overline{e}) = c \, \sigma_x(d) \, \overline{x} = (\sigma_x(d) \, \overline{e})(c \, \overline{x}). 
\end{eqnarray*} Let $b=c \,\overline{x}$, $a = d\overline{e}$ and $f : \mathcal{B} \to \mathcal{B}$ be a map defined by $f = \iota \circ \sigma_x \circ \iota^{-1}$. Then the above relation may be written as \begin{displaymath} b \, a = f(a) \, b \end{displaymath} which is a re-ordering formula frequently appearing in physical applications. \end{exmp} \section{Commutativity in $\aalg \rtimes_\alpha^{\sigma} G$} From the definition of the product in $\aalg \rtimes_\alpha^{\sigma} G$, given by \eqref{product}, we see that two elements $\sum_{s\in G} a_s \overline{s}$ and $\sum_{t\in G} b_t \overline{t}$ commute if and only if \begin{equation}\label{twoelementscommute} \sum_{\{(s,t) \in G \times G \mid st=g\} } a_s \, \sigma_s(b_t) \, \alpha(s,t) = \sum_{\{(s,t) \in G \times G \mid st=g\}} b_s \, \sigma_s(a_t) \, \alpha(s,t) \end{equation} for each $g \in G$. The crossed product $\aalg \rtimes_\alpha^{\sigma} G$ is in general non-commutative and in the following proposition we give a description of its center. \begin{prop}\label{thecenter} The center of $\aalg \rtimes_\alpha^{\sigma} G$ is as follows \begin{eqnarray*} Z(\aalg \rtimes_\alpha^{\sigma} G) &=& \Big\{ \sum_{g\in G} r_g \, \overline{g} \,\, \Big\lvert \,\, r_{ts^{-1}} \, \alpha(ts^{-1},s) = \sigma_s(r_{s^{-1}t}) \, \alpha(s,s^{-1}t),\\ & & \hspace{55pt} r_s \, \sigma_s(a) = a \, r_s, \quad \forall a\in \mathcal{A}, \,\, (s,t)\in G \times G \Big\}. \end{eqnarray*} \end{prop} \begin{proof} Let $\sum_{g\in G} r_g \, \overline{g} \in \aalg \rtimes_\alpha^{\sigma} G$ be an element which commutes with every element of $\aalg \rtimes_\alpha^{\sigma} G$. Then, in particular $\sum_{g\in G} r_g \, \overline{g}$ must commute with $a \, \overline{e}$ for every $a \in \mathcal{A}$. From \eqref{twoelementscommute} we immediately see that this implies $r_s \, \sigma_s(a)=a \, r_s$ for every $a\in \mathcal{A}$ and $s\in G$. Furthermore, $\sum_{g\in G} r_g \, \overline{g}$ must commute with $1_{\aalg} \, \overline{s}$ for any $s \in G$. This yields \begin{eqnarray*} \sum_{t\in G} r_{ts^{-1}} \, \alpha(ts^{-1},s) \, \overline{t} = [gs= t] = \sum_{g\in G} r_g \, \alpha(g,s) \, \overline{gs} = \sum_{g \in G} r_g \, \sigma_g(1_{\aalg}) \, \alpha(g,s) \, \overline{gs}\\ = \left( \sum_{g \in G} r_g \, \overline{g} \right) (1_{\aalg} \, \overline{s}) = (1_{\aalg} \, \overline{s}) \left( \sum_{g \in G} r_g \, \overline{g} \right) = \sum_{g \in G} 1_{\aalg} \, \sigma_s (r_g) \, \alpha(s,g) \, \overline{sg} \\ = \sum_{g \in G} \sigma_s (r_g) \, \alpha(s,g) \, \overline{sg} = [t=sg] = \sum_{t \in G} \sigma_{s} (r_{s^{-1}t}) \, \alpha(s,s^{-1}t) \, \overline{t} \end{eqnarray*} and hence, for each $(s,t) \in G \times G$, we have $r_{ts^{-1}} \, \alpha(ts^{-1},s) = \sigma_s(r_{s^{-1}t}) \, \alpha(s,s^{-1}t)$.\\ Conversely, suppose that $\sum_{g\in G} r_g \, \overline{g} \in \aalg \rtimes_\alpha^{\sigma} G$ is an element satisfying $r_s \, \sigma_s(a) = a \, r_s$ and $r_{ts^{-1}} \, \alpha(ts^{-1},s) = \sigma_s(r_{s^{-1}t}) \, \alpha(s,s^{-1}t)$ for every $a\in \mathcal{A}$ and $(s,t)\in G\times G$. Let $\sum_{s \in G} a_s \, \overline{s} \in \aalg \rtimes_\alpha^{\sigma} G$ be arbitrary. 
Then \begin{eqnarray*} \left( \sum_{g \in G} r_g \, \overline{g} \right) \left(\sum_{s \in G} a_s \, \overline{s} \right) = \sum_{(g,s) \in G \times G} r_g \, \sigma_g(a_s) \, \alpha(g,s) \, \overline{gs} = \sum_{(g,s) \in G \times G} a_s \, r_g \, \alpha(g,s) \, \overline{gs} \\ = [gs=t] = \sum_{(t,s) \in G \times G} a_s \, (r_{ts^{-1}} \, \alpha(ts^{-1},s)) \, \overline{t} = \sum_{(t,s) \in G \times G} a_s \, \sigma_s(r_{s^{-1}t}) \, \alpha(s,s^{-1}t) \, \overline{t} \\ = [t=sg] = \sum_{(g,s) \in G \times G} a_s \, \sigma_s(r_{g}) \, \alpha(s,g) \, \overline{sg} = \left(\sum_{s \in G} a_s \, \overline{s} \right) \left( \sum_{g \in G} r_g \, \overline{g} \right) \end{eqnarray*} and hence $\sum_{g \in G} r_g \, \overline{g}$ commutes with every element of $\aalg \rtimes_\alpha^{\sigma} G$. \end{proof} A few corollaries follow from Proposition \ref{thecenter}, showing how a successive addition of restrictions on the corresponding $G$-crossed system leads to a simplified description of $Z(\aalg \rtimes_\alpha^{\sigma} G)$. \begin{cor}[Center of a twisted group ring] Let $\sigma \equiv \identity_{\aalg}$. Then, the center of $\aalg \rtimes_\alpha^{\sigma} G$ is as follows \begin{eqnarray*} Z(\aalg \rtimes_\alpha^{\sigma} G) &=& \Big\{ \sum_{g\in G} r_g \, \overline{g} \,\, \Big\vert \,\, r_s \in Z(\mathcal{A}), \quad r_{ts^{-1}} \, \alpha(ts^{-1},s) = r_{s^{-1}t} \, \alpha(s,s^{-1}t), \\ & & \hspace{170pt} \forall (s,t)\in G \times G \Big\}. \end{eqnarray*} \end{cor} \begin{cor} Let $G$ be abelian and $\alpha$ symmetric\footnote{Symmetric in the sense that $\alpha(x,y)=\alpha(y,x)$ for every $(x,y) \in G \times G$.}. Then, the center of $\aalg \rtimes_\alpha^{\sigma} G$ is as follows \begin{eqnarray*} Z(\aalg \rtimes_\alpha^{\sigma} G) = \Big\{ \sum_{g\in G} r_g \, \overline{g} \,\, \Big\lvert \,\, r_s \, \sigma_s(a) = a \, r_s, \quad r_s \in \mathcal{A}^G, \quad \forall a\in \mathcal{A}, \,\, s\in G \Big\}. \end{eqnarray*} \end{cor} \begin{cor}\label{centerspecial} Let $\mathcal{A}$ be commutative, $G$ abelian and $\alpha \equiv 1_{\aalg}$. Then, the center of $\aalg \rtimes_\alpha^{\sigma} G$ is as follows \begin{eqnarray*} Z(\aalg \rtimes_\alpha^{\sigma} G) = \Big\{ \sum_{g\in G} r_g \, \overline{g} \,\, \Big\vert \,\, r_s \in \mathcal{A}^G, \,\, \sigma_s(a) - a \in \ann(r_s), \,\, \forall a\in \mathcal{A}, \, s\in G \Big\}. \end{eqnarray*} \end{cor} \begin{rem} Note that in the proof of Proposition \ref{thecenter}, the property that the image of $\alpha$ is contained in $U(\mathcal{A})$ is not used and therefore the proposition is true in greater generality. Consider the case when $\mathcal{A}$ is an integral domain and let $\alpha$ take its values in $\mathcal{A} \setminus \{0_{\aalg}\}$. In this case it is clear that $r_s \, \sigma_s(a) = a \, r_s$ for all $a\in \mathcal{A}$ $\Longleftrightarrow$ $r_s ( \sigma_s(a) - a) = 0$ for all $a\in \mathcal{A}$ $\Longleftrightarrow r_s = 0$ for $s \not \in \sigma^{-1}(\identity_{\aalg}) = \{g \in G \mid \sigma_g = \identity_{\aalg} \}$. After a change of variable via $x=s^{-1}t$ the first condition in the description of the center may be written as $\sigma_s(r_x) \, \alpha(s,x) = r_{sxs^{-1}} \, \alpha(sxs^{-1},s)$ for all $(s,x) \in G \times G$. From this relation we conclude that $r_x=0$ if and only if $r_{sxs^{-1}}=0$, and hence it is trivially satisfied if we put $r_x = 0$ whenever $x \not \in \sigma^{-1}(\identity_{\aalg})$. This case has been presented in \cite[Proposition 2.2]{NeOyYu} with a more elaborate proof. 
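For a simple illustration of this conclusion, take for instance $\mathcal{A}=\mathbb{C}[x]$, $G=(\mathbb{Z},+)$, $\sigma_n : P(x) \mapsto P(x+n)$ and $\alpha \equiv 1_{\aalg}$. In this case $\sigma^{-1}(\identity_{\aalg})=\{0\}$ and $\mathcal{A}^G=\mathbb{C}$, so the conditions above force $r_s=0$ for all $s \neq 0$ and $r_0 \in \mathcal{A}^G = \mathbb{C}$, that is, \begin{displaymath} Z(\aalg \rtimes_\alpha^{\sigma} G) = \mathbb{C} \, \overline{e} \cong \mathbb{C}. \end{displaymath}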
\end{rem} The final corollary describes the exceptional situation when $Z(\aalg \rtimes_\alpha^{\sigma} G)$ coincides with $\aalg \rtimes_\alpha^{\sigma} G$, that is when $\aalg \rtimes_\alpha^{\sigma} G$ is commutative. \begin{cor} $\aalg \rtimes_\alpha^{\sigma} G$ is commutative if and only if all of the following hold \begin{itemize} \item[(i)] $\mathcal{A}$ is commutative, \item[(ii)] $\sigma_s = \identity_{\aalg}$ for each $s\in G$, \item[(iii)] $G$ is abelian, \item[(iv)] $\alpha$ is symmetric. \end{itemize} \end{cor} \begin{proof} Suppose that $Z(\aalg \rtimes_\alpha^{\sigma} G)=\aalg \rtimes_\alpha^{\sigma} G$. Then, $\tilde{\mathcal{A}} \subseteq \aalg \rtimes_\alpha^{\sigma} G = Z(\aalg \rtimes_\alpha^{\sigma} G)$ and hence (i) follows by Remark \ref{AcommAtildecomm}. By the assumption, $1_{\aalg} \, \overline{s} \in Z(\aalg \rtimes_\alpha^{\sigma} G)$ for any $s\in G$ and by Proposition \ref{thecenter} we see that $\sigma_s = \identity_{\aalg}$ for every $s\in G$, and hence (ii). For any $(x,y) \in G \times G$ we have $\alpha(x,y) \, \overline{xy} = (1_{\aalg} \overline{x})(1_{\aalg} \overline{y}) = (1_{\aalg} \overline{y})(1_{\aalg} \overline{x}) = \alpha(y,x) \, \overline{yx}$, but $\alpha(x,y) \neq 0_{\aalg}$ which implies $xy=yx$ and also $\alpha(x,y)=\alpha(y,x)$, which shows (iii) and (iv). The converse statement of the corollary is easily verified. \end{proof} \section{The commutant of $\tilde{\mathcal{A}}$ in $\aalg \rtimes_\alpha^{\sigma} G$} From now on we shall assume that $G \neq \{e\}$. As we have seen, $\tilde{\mathcal{A}}$ is a subring of $\aalg \rtimes_\alpha^{\sigma} G$ and we define its commutant by \begin{displaymath} \comm(\tilde{\mathcal{A}}) = \{b \in \aalg \rtimes_\alpha^{\sigma} G \, \mid \, ab = ba, \quad \forall a \in \tilde{\mathcal{A}} \}. \end{displaymath} \noindent Theorem \ref{commutantcomm} tells us exactly when an element of $\aalg \rtimes_\alpha^{\sigma} G$ lies in $\comm(\tilde{\mathcal{A}})$. \begin{thm}\label{commutantcomm} The commutant of $\tilde{\mathcal{A}}$ in $\aalg \rtimes_\alpha^{\sigma} G$ is as follows \begin{eqnarray*} \comm(\tilde{\mathcal{A}}) = \left\{ \sum_{s \in G} r_s \overline{s} \in \aalg \rtimes_\alpha^{\sigma} G \,\, \Big\lvert \,\, r_s \, \sigma_s(a) = a \, r_s, \quad \forall a \in \mathcal{A} , \, s\in G \right\}. \end{eqnarray*} \end{thm} \begin{proof} The proof is established through the following sequence of equivalences: \begin{eqnarray*} \sum_{s \in G} r_s \overline{s} \in \comm(\tilde{\mathcal{A}}) \Longleftrightarrow \left( \sum_{s \in G} r_s \overline{s} \right) \left( a \overline{e} \right) = \left( a \overline{e} \right) \left( \sum_{s \in G} r_s \overline{s} \right), \quad \forall a \in \mathcal{A} \\ \Longleftrightarrow \sum_{s \in G} r_s \, \sigma_s(a) \, \alpha(s,e) \, \overline{se} = \sum_{s \in G} a \, \sigma_e(r_s) \, \alpha(e,s) \, \overline{es}, \quad \forall a \in \mathcal{A} \\ \Longleftrightarrow \sum_{s \in G} r_s \, \sigma_s(a) \, \overline{s} = \sum_{s \in G} a \, r_s \, \overline{s}, \quad \forall a \in \mathcal{A} \\ \Longleftrightarrow \text{For each } s \in G: \, \, r_s \, \sigma_s(a) = a \, r_s, \quad \forall a \in \mathcal{A} \end{eqnarray*} Here we have used the fact that $\alpha(s,e)=\alpha(e,s)=1_{\aalg}$ for all $s\in G$. The above equivalence can also be deduced directly from \eqref{twoelementscommute}. \end{proof} In the case when $\mathcal{A}$ is commutative we get the following description of the commutant as a special case of Theorem \ref{commutantcomm}. 
\begin{cor}\label{commutantcomm2} If $\mathcal{A}$ is commutative, then the commutant of $\tilde{\mathcal{A}}$ in $\aalg \rtimes_\alpha^{\sigma} G$ is \begin{eqnarray*} \comm(\tilde{\mathcal{A}}) = \left\{ \sum_{s \in G} r_s \overline{s} \in \aalg \rtimes_\alpha^{\sigma} G \,\, \Big\lvert \,\, \sigma_s(a)-a \in \ann(r_s), \quad \forall a \in \mathcal{A} , \, s\in G \right\}. \end{eqnarray*} \end{cor} When $\mathcal{A}$ is commutative it is clear that $\tilde{\mathcal{A}} \subseteq \comm(\tilde{\mathcal{A}})$. Using the explicit description of $\comm(\tilde{\mathcal{A}})$ in Corollary \ref{commutantcomm2}, we are now able to state exactly when $\tilde{\mathcal{A}}$ is maximal commutative, i.e. $\comm(\tilde{\mathcal{A}})=\tilde{\mathcal{A}}$. \begin{cor}\label{maxabelianequivstatement} Let $\mathcal{A}$ be commutative. Then $\tilde{\mathcal{A}}$ is maximal commutative in $\aalg \rtimes_\alpha^{\sigma} G$ if and only if, for each pair $(s,r_s) \in (G \setminus\{e\}) \times (\mathcal{A} \setminus \{0_{\aalg}\})$, there exists $a\in \mathcal{A}$ such that $\sigma_s(a)-a \not \in \ann(r_s) = \{ c \in \mathcal{A} \, \mid \, r_s \cdot c = 0_{\aalg} \}$. \end{cor} \begin{exmp}\label{introtillchristians} In this example we follow the notation of \cite{svesildej}. Let $\sigma : X \to X$ be a bijection on a non-empty set $X$, and $A \subseteq \mathbb{C}^X$ an algebra of functions, such that if $h \in A$ then $h \circ \sigma \in A$ and $h \circ \sigma^{-1} \in A$. Let $\tilde{\sigma} : \mathbb{Z} \to \aut(A)$ be defined by $\tilde{\sigma}_n : f \mapsto f \circ \sigma^{\circ (-n)}$ for $f\in A$. We now have a $\mathbb{Z}$-crossed system (with trivial $\tilde{\sigma}$-cocycle) and we may form the crossed product $A \rtimes_{\tilde{\sigma}} \mathbb{Z}$. Recall the definition of the set $\sep_A^n(X) = \{x \in X \, \mid \, \exists h \in A, \text{ s.t. } h(x) \neq (\tilde{\sigma}_n(h))(x) \}$. Corollary \ref{maxabelianequivstatement} is a generalization of \cite[Theorem 3.5]{svesildej} and the easiest way to see this is by negating the statements. Suppose that $A$ is not maximal commutative in $A \rtimes_{\tilde{\sigma}} \mathbb{Z}$. Then, by Corollary \ref{maxabelianequivstatement}, there exists a pair $(n,f_n) \in (\mathbb{Z} \setminus \{0\}) \times (A \setminus \{0\})$ such that $\tilde{\sigma}_n(g)-g \in \ann(f_n)$ for every $g \in A$, i.e. $\supp(\tilde{\sigma}_n(g)-g) \cap \supp(f_n) = \emptyset$ for every $g \in A$. In particular, this means that $f_n$ is identically zero on $\sep_A^n(X)$. However, $f_n \in A \setminus \{0\}$ is not identically zero on $X$ and hence $\sep_A^n(X)$ is not a \emph{domain of uniqueness} (as defined in \cite[Definition 3.2]{svesildej}). The converse can be proved similarly. \end{exmp} \begin{cor}\label{commutantzerodiv} Let $\mathcal{A}$ be commutative. If for each $s \in G \setminus \{e\}$ it is always possible to find some $a \in \mathcal{A}$ such that $\sigma_s(a)-a$ is not a zero-divisor in $\mathcal{A}$, then $\tilde{\mathcal{A}}$ is maximal commutative in $\aalg \rtimes_\alpha^{\sigma} G$. \end{cor} The next corollary is a consequence of Corollary \ref{maxabelianequivstatement} and shows how maximal commutativity of the base coefficient ring in the crossed product has an impact on the non-triviality of the action $\sigma$. \begin{cor}\label{maxabeliansigma} Let the subring $\tilde{\mathcal{A}}$ be maximal commutative in $\aalg \rtimes_\alpha^{\sigma} G$, then $\sigma_g \neq \identity_{\aalg}$ for all $g\in G \setminus \{e\}$. 
\end{cor} The description of the commutant $\comm(\tilde{\mathcal{A}})$ from Corollary \ref{commutantcomm2} can be further refined in the case when $\mathcal{A}$ is an integral domain. \begin{cor}\label{commutantcomm3} If $\mathcal{A}$ is an integral domain\footnote{By an \emph{integral domain} we shall mean a commutative ring with an additive identity $0_{\aalg}$ and a multiplicative identity $1_{\aalg}$ such that $0_{\aalg} \neq 1_{\aalg}$, in which the product of any two non-zero elements is always non-zero.}, then the commutant of $\tilde{\mathcal{A}}$ in $\aalg \rtimes_\alpha^{\sigma} G$ is \begin{eqnarray*} \comm(\tilde{\mathcal{A}}) = \Big\{ \sum_{s\in \sigma^{-1}(\identity_{\aalg}) } r_s \overline{s} \in \aalg \rtimes_\alpha^{\sigma} G \,\, \Big\lvert \,\, r_s \in \mathcal{A} \Big\} \end{eqnarray*} where $\sigma^{-1}(\identity_{\aalg}) = \{g \in G \mid \sigma_g = \identity_{\aalg} \}$. \end{cor} The following corollary can be derived directly from Corollary \ref{maxabeliansigma} together with either Corollary \ref{commutantzerodiv} or Corollary \ref{commutantcomm3}. \begin{cor}\label{maxabeliansigmaconverse} Let $\mathcal{A}$ be an integral domain. Then $\sigma_g \neq \identity_{\aalg}$ for all $g\in G \setminus \{e\}$ if and only if $\tilde{\mathcal{A}}$ is maximal commutative in $\aalg \rtimes_\alpha^{\sigma} G$. \end{cor} \begin{rem}\label{injectivityofsigma} Recall that when $\mathcal{A}$ is commutative, $\sigma$ is a group homomorphism. Thus, to say that $\sigma_g \neq \identity_{\aalg}$ for all $g\in G\setminus\{e\}$ is just another way of saying that $\ker(\sigma)=\{e\}$ or, equivalently, that $\sigma$ is injective. \end{rem} \begin{exmp} Let $\mathcal{A} =\mathbb{C}[x_1,\ldots,x_n]$ be the polynomial ring in $n$ commuting variables $x_1,\ldots,x_n$ and $G = S_n$ the symmetric group on $n$ elements. An element $\tau \in S_n$ is a permutation which maps the sequence $(1,\ldots,n)$ into $(\tau(1),\ldots,\tau(n))$. The group $S_n$ acts on $\mathbb{C}[x_1,\ldots,x_n]$ in a natural way. To each $\tau \in S_n$ we may associate a map $\mathcal{A} \to \mathcal{A}$, which sends any polynomial $f(x_1,\ldots,x_n) \in \mathbb{C}[x_1,\ldots,x_n]$ into a new polynomial $g$, defined by $g(x_1,\ldots,x_n) = f(x_{\tau(1)},\ldots,x_{\tau(n)})$. It is clear that each such mapping is a ring automorphism on $\mathcal{A}$. Let $\sigma$ be the embedding $S_n \hookrightarrow \aut(\mathcal{A})$ and $\alpha \equiv 1_{\aalg}$. Note that $\mathbb{C}[x_1,\ldots,x_n]$ is an integral domain and that $\sigma$ is injective. Hence, by Corollary \ref{maxabeliansigmaconverse} and Remark \ref{injectivityofsigma} it is clear that the embedding of $\mathbb{C}[x_1,\ldots,x_n]$ is maximal commutative in $\mathbb{C}[x_1,\ldots,x_n] \rtimes^{\sigma} S_n$. \end{exmp} One might want to describe properties of the $\sigma$-cocycle in the case when $\tilde{\mathcal{A}}$ is maximal commutative, but unfortunately this will lead to a dead end. The explanation for this is revealed by condition (iii) in the definition of a $G$-crossed system, where we see that $\alpha(e,g)=\alpha(g,e)=1_{\aalg}$ for all $g \in G$ and hence we are not able to extract any interesting information about $\alpha$ by assuming that $\tilde{\mathcal{A}}$ is maximal commutative. At the end of \cite[Remark 1.4.3, part 2]{MoGR} it is explained that in a \emph{twisted group ring} $\mathcal{A} \rtimes_\alpha G$, i.e. with $\sigma \equiv \identity_{\aalg}$, $\tilde{\mathcal{A}}$ can never be maximal commutative (for $G \neq \{e\}$). 
When $\mathcal{A}$ is commutative, this follows immediately from Corollary \ref{maxabeliansigma}. We will now give a sufficient condition for $\comm(\tilde{\mathcal{A}})$ to be commutative. \begin{prop}\label{commcomm} Let $\mathcal{A}$ be a commutative ring, $G$ an abelian group and $\alpha$ symmetric. Then $\comm(\tilde{\mathcal{A}})$ is commutative. \end{prop} \begin{proof} Let $\sum_{s \in G} r_s \overline{s}$ and $\sum_{t \in G} p_t \overline{t}$ be arbitrary elements of $\comm(\tilde{\mathcal{A}})$. Then, by our assumptions and Corollary \ref{commutantcomm2} we get \begin{eqnarray*} \left(\sum_{s \in G} r_s \, \overline{s} \right) \left( \sum_{t \in G} p_t \, \overline{t} \right) &=& \sum_{(s,t) \in G \times G} r_s \, \sigma_s(p_t) \, \alpha(s,t) \, \overline{st} = \sum_{(s,t) \in G \times G} r_s \, p_t \, \alpha(s,t) \, \overline{st} \\ &=& \sum_{(s,t) \in G \times G} p_t \, \sigma_t(r_s) \, \alpha(t,s) \, \overline{ts} = \left( \sum_{t \in G} p_t \, \overline{t} \right) \left(\sum_{s \in G} r_s \, \overline{s} \right). \end{eqnarray*} This shows that $\comm(\tilde{\mathcal{A}})$ is commutative. \end{proof} This proposition is a generalization of \cite[Proposition 2.1]{svesildej} from a function algebra to an arbitrary unital associative commutative ring $\mathcal{A}$, from $\mathbb{Z}$ to an arbitrary abelian group $G$ and from a trivial to a possibly non-trivial symmetric $\sigma$-cocycle $\alpha$. \begin{rem} By using Proposition \ref{commcomm} and the arguments made in Example \ref{introtillchristians} it is clear that Corollary \ref{commutantcomm2} is a generalization of \cite[Theorem 3.3]{svesildej}. Furthermore, we see that Corollary \ref{centerspecial} is a generalization of \cite[Theorem 3.6]{svesildej}. \end{rem} \section{Ideals in $\aalg \rtimes_\alpha^{\sigma} G$} In this section we describe properties of the ideals in $\aalg \rtimes_\alpha^{\sigma} G$ in connection with maximal commutativity and properties of the action $\sigma$. \begin{thm}\label{killerproof} Let $\mathcal{A}$ be commutative. Then \begin{displaymath} I \cap \comm(\tilde{\mathcal{A}}) \neq \{0\} \end{displaymath} for every non-zero two-sided ideal $I$ in $\aalg \rtimes_\alpha^{\sigma} G$. \end{thm} \begin{proof} Let $\mathcal{A}$ be commutative. Then $\tilde{\mathcal{A}}$ is also commutative. Let $I \subseteq \aalg \rtimes_\alpha^{\sigma} G$ be an arbitrary non-zero two-sided ideal in $\aalg \rtimes_\alpha^{\sigma} G$. \\ \noindent \emph{Part 1:} \\ For each $g \in G$ we may define a \emph{translation-deformation operator} \begin{displaymath} T_g : \aalg \rtimes_\alpha^{\sigma} G \to \aalg \rtimes_\alpha^{\sigma} G, \quad \sum_{s \in G} a_s \overline{s} \mapsto \left(\sum_{s \in G} a_s \overline{s}\right) (1_{\aalg} \overline{g}). \end{displaymath} Note that, for any $g \in G$, $I$ is invariant\footnote{By \emph{invariant} we mean that the set is \emph{closed} under this operation.} under $T_g$. We have \begin{displaymath} T_g \left( \sum_{s \in G} a_s \overline{s} \right) = \left(\sum_{s \in G} a_s \overline{s}\right) (1_{\aalg} \overline{g}) = \sum_{s \in G} a_s \, \sigma_s(1_{\aalg}) \, \alpha(s,g) \overline{sg} = \sum_{s \in G} a_s \, \alpha(s,g) \overline{sg} \end{displaymath} for every $g\in G$. It is important to note that if $a_s \neq 0_{\aalg}$, then $a_s \, \alpha(s,g) \neq 0_{\aalg}$ and hence this operation does not kill coefficients; it only translates and deforms them. 
If we have a non-zero element $\sum_{s \in G} a_s \overline{s}$ for which $a_e =0_{\aalg}$, then we may pick some non-zero coefficient, say $a_p$, and apply the operator $T_{p^{-1}}$ to end up with \begin{displaymath} T_{p^{-1}} \left( \sum_{s \in G} a_s \overline{s} \right) = \sum_{s \in G} a_s \, \alpha(s,p^{-1}) \overline{sp^{-1}} := \sum_{t \in G} d_t \overline{t}. \end{displaymath} This resulting element will then have the following properties: \begin{itemize} \item $d_e = a_p \, \alpha(p,p^{-1}) \neq 0_{\aalg}$, \item $\# \{ s\in G \, \mid \, a_s \neq 0_{\aalg} \} = \# \{ s\in G \, \mid \, d_s \neq 0_{\aalg} \}$. \end{itemize} \noindent \emph{Part 2:} \\ Next we define a \emph{kill operator} \begin{displaymath} D_a : \aalg \rtimes_\alpha^{\sigma} G \to \aalg \rtimes_\alpha^{\sigma} G, \quad \sum_{s \in G} a_s \overline{s} \mapsto (a \overline{e}) \left( \sum_{s \in G} a_s \overline{s} \right) - \left( \sum_{s \in G} a_s \overline{s} \right) (a \overline{e}) \end{displaymath} for each $a \in \mathcal{A}$. Note that, for each $a \in \mathcal{A}$, $I$ is invariant under $D_a$. By assumption $\mathcal{A}$ is commutative and hence the above expression can be simplified: \begin{eqnarray*} D_a \left( \sum_{s \in G} a_s \overline{s} \right) &=& (a \overline{e}) \left( \sum_{s \in G} a_s \overline{s} \right) - \left( \sum_{s \in G} a_s \overline{s} \right) (a \overline{e}) \\ &=& \left( \sum_{s \in G} a \, \sigma_{e}(a_s) \, \alpha(e,s) \, \overline{es} \right) - \left( \sum_{s \in G} a_s \, \sigma_{s}(a) \, \alpha(s,e) \, \overline{se} \right) \\ &=& \sum_{s \in G} \underbrace{a \, a_s}_{=a_s \, a} \overline{s} - \sum_{s \in G} a_s \, \sigma_{s}(a) \, \overline{s} = \sum_{s \in G} a_s \, (a-\sigma_{s}(a)) \overline{s} \\ &=& \sum_{s \neq e} a_s \, (a-\sigma_{s}(a)) \overline{s} = \sum_{s \neq e} d_s \overline{s}. \end{eqnarray*} The operators $\{D_a\}_{a \in \mathcal{A}}$ all share the property that they kill the coefficient in front of $\overline{e}$. Hence, if $a_e \neq 0_{\aalg}$, then the number of non-zero coefficients of the resulting element will always be reduced by at least one. Note that $\comm(\tilde{\mathcal{A}}) = \bigcap_{a \in \mathcal{A}} \ker(D_a)$. This means that for each non-zero $\sum_{s \in G} a_s \overline{s}$ in $\aalg \rtimes_\alpha^{\sigma} G \setminus \comm(\tilde{\mathcal{A}})$ we may always choose some $a \in \mathcal{A}$ such that $\sum_{s \in G} a_s \overline{s} \not \in \ker(D_a)$. By choosing such an $a$ we note that, using the same notation as above, we get \begin{displaymath} \# \{s \in G \, \mid \, a_s \neq 0_{\aalg} \} \geq \# \{s \in G \, \mid \, d_s \neq 0_{\aalg} \} \geq 1 \end{displaymath} for each non-zero $\sum_{s \in G} a_s \overline{s} \in \aalg \rtimes_\alpha^{\sigma} G \setminus \comm(\tilde{\mathcal{A}})$. \\ \\ \noindent \emph{Part 3:} \\ The ideal $I$ is assumed to be non-zero, which means that we can pick some non-zero element $\sum_{s\in G} r_s \overline{s} \in I$. If $\sum_{s\in G} r_s \overline{s} \in \comm(\tilde{\mathcal{A}})$, then we are finished, so assume that this is not the case. Note that $r_s \neq 0_{\aalg}$ for only finitely many $s \in G$. Recall that the ideal $I$ is invariant under $T_g$ and $D_a$ for all $g \in G$ and $a\in \mathcal{A}$. We may now use the operators $\{T_g\}_{g\in G}$ and $\{D_a\}_{a\in \mathcal{A}}$ to generate new elements of $I$. 
More specifically, we may use the $T_g$:s to translate our element $\sum_{s\in G} r_s \overline{s}$ into a new element which has a non-zero coefficient in front of $\overline{e}$ (if needed) after which we use the $D_a$ operator to kill this coefficient and end up with yet another new element of $I$ which is non-zero but has a smaller number of non-zero coefficients. We may repeat this procedure and in a finite number of iterations arrive at an element of $I$ which lies in $\comm(\tilde{\mathcal{A}}) \setminus \tilde{\mathcal{A}}$ and if not we continue the above procedure until we reach an element which is of the form $b \, \overline{e}$ with some non-zero $b \in \mathcal{A}$. In particular $\tilde{\mathcal{A}} \subseteq \comm(\tilde{\mathcal{A}})$ and hence $I \cap \comm(\tilde{\mathcal{A}}) \neq \{0\}$. \end{proof} The embedded base ring $\tilde{\mathcal{A}}$ is maximal commutative if and only if $\tilde{\mathcal{A}} = \comm(\tilde{\mathcal{A}})$ and hence we have the following corollary. \begin{cor}\label{maxcommcorollary} Let the subring $\tilde{\mathcal{A}}$ be maximal commutative in $\aalg \rtimes_\alpha^{\sigma} G$. Then \begin{displaymath} I \cap \tilde{\mathcal{A}} \neq \{0\} \end{displaymath} for every non-zero two-sided ideal $I$ in $\aalg \rtimes_\alpha^{\sigma} G$. \end{cor} \begin{prop}\label{leftidealprop} Let $I$ be a subset of $\mathcal{A}$ and define \begin{displaymath} J = \left\{ \sum_{s\in G} a_s \, \overline{s} \in \aalg \rtimes_\alpha^{\sigma} G \, \mid \, a_s \in I \right\}. \end{displaymath} Then the following assertions hold: \begin{itemize} \item[(i)] If $I$ is a right ideal in $\mathcal{A}$, then $J$ is a right ideal in $\aalg \rtimes_\alpha^{\sigma} G$. \item[(ii)] If $I$ is a two-sided ideal in $\mathcal{A}$ such that $I \subseteq \mathcal{A}^G$, then $J$ is a two-sided ideal in $\aalg \rtimes_\alpha^{\sigma} G$. \end{itemize} \end{prop} \begin{proof} If $I$ is a (possibly one sided) ideal in $\mathcal{A}$, it is clear that $J$ is an additive subgroup of $\aalg \rtimes_\alpha^{\sigma} G$.\\ (i). Let $I$ be a right ideal in $\mathcal{A}$. Then \begin{eqnarray*} \left( \sum_{s\in G} a_s \, \overline{s} \right) \left( \sum_{t \in G} b_t \, \overline{t} \right) &=& \sum_{(s,t) \in G \times G} \underbrace{ a_s \, \sigma_s(b_t) \, \alpha(s,t) }_{\in I} \overline{st} \in J \end{eqnarray*} for arbitrary $\sum_{s\in G} a_s \, \overline{s} \in J$ and $\sum_{t \in G} b_t \, \overline{t} \in \aalg \rtimes_\alpha^{\sigma} G$ and hence $J$ is a right ideal. \\ (ii). Let $I$ be a two-sided ideal in $\mathcal{A}$ such that $I \subseteq \mathcal{A}^G$. By (i) it is clear that $J$ is a right ideal. Let $\sum_{s\in G} a_s \, \overline{s} \in J$ and $\sum_{t \in G} b_t \, \overline{t} \in \aalg \rtimes_\alpha^{\sigma} G$ be arbitrary. Then \begin{eqnarray*} \left( \sum_{t \in G} b_t \, \overline{t} \right) \left( \sum_{s\in G} a_s \, \overline{s} \right) = \sum_{(t,s) \in G \times G} b_t \, \sigma_t(a_s) \, \alpha(t,s) \overline{ts} = \sum_{(t,s) \in G \times G} \underbrace{ b_t \, a_s \, \alpha(t,s) }_{\in I} \overline{ts} \in J \end{eqnarray*} which shows that $J$ is also a left ideal. \end{proof} \begin{thm}\label{explicitideal} Let $\sigma : G \to \aut(\mathcal{A})$ be a group homomorphism and $N$ be a normal subgroup of $G$, contained in $\sigma^{-1}(\identity_{\aalg}) = \{g\in G \mid \sigma_g = \identity_{\aalg}\}$. Let $\varphi : G \to G/N$ be the quotient group homomorphism and suppose that $\alpha$ is such that $\alpha(s,t)=1_{\aalg}$ whenever $s \in N$ or $t \in N$. 
Furthermore, suppose that there exists a map $\beta : G/N \times G/N \to U(\mathcal{A})$ such that $\beta(\varphi(s),\varphi(t))= \alpha(s,t)$ for each $(s,t) \in G \times G$. Let $I$ be the ideal in $\aalg \rtimes_\alpha^{\sigma} G$ generated by an element $\sum_{s\in N} a_s \, \overline{s}$ for which the coefficients (of which all but finitely many are zero) satisfy $\sum_{s\in N} a_s = 0_{\aalg}$. Then, \begin{displaymath} I \cap \tilde{\mathcal{A}} = \{0\}. \end{displaymath} \end{thm} \begin{proof} Let $I \subseteq \aalg \rtimes_\alpha^{\sigma} G$ be the ideal generated by an element $\sum_{s\in N} a_s \, \overline{s}$, which satisfies $\sum_{s\in N} a_s = 0_{\aalg}$. The quotient homomorphism $\varphi : G \to G/N, \quad s \mapsto sN$ satisfies $\ker(\varphi)=N$. By assumption, the map $\sigma$ is a group homomorphism and $\sigma(N)= \identity_{\aalg}$. Hence by the universal property, see for example \cite[p.16]{Lang}, there exists a unique group homomorphism $\rho$ making the following diagram commute: \[ \xymatrix{ G \ar[r]^-\sigma \ar[d]_\varphi & \aut(\mathcal{A}) \\ G/N \ar@{-->}[ur]_\rho} \] \noindent By assumption there exists $\beta$ such that $\beta(\varphi(s),\varphi(t))= \alpha(s,t)$ for each $(s,t) \in G \times G$. One may verify that $\beta$ is a $\rho$-cocycle and hence we can define a new crossed product $\mathcal{A} \rtimes_\beta^{\rho} G/N$. We now define $\Gamma$ to be the map \begin{displaymath} \Gamma : \aalg \rtimes_\alpha^{\sigma} G \to \mathcal{A} \rtimes_\beta^{\rho} G/N, \quad \sum_{s \in G} a_s \overline{s} \mapsto \sum_{s \in G} a_s \overline{\varphi(s)} \end{displaymath} and show that it is a ring homomorphism. For any two elements $\sum_{s \in G} a_s \overline{s}$ and $\sum_{t \in G} b_t \overline{t}$ in $\aalg \rtimes_\alpha^{\sigma} G$, the additivity of $\Gamma$ follows by \begin{eqnarray*} \Gamma \left(\sum_{s \in G} a_s \overline{s} + \sum_{t \in G} b_t \overline{t} \right) &=& \Gamma \left(\sum_{s \in G} (a_s + b_s) \overline{s} \right) = \sum_{s \in G} (a_s + b_s) \overline{\varphi(s)} \\ &=& \sum_{s \in G} a_s \overline{\varphi(s)} + \sum_{t \in G} b_t \overline{\varphi(t)} = \Gamma \left(\sum_{s \in G} a_s \overline{s} \right) + \Gamma \left(\sum_{t \in G} b_t \overline{t} \right) \end{eqnarray*} and due to the assumptions, the multiplicativity follows by \begin{eqnarray*} \Gamma \left(\sum_{s \in G} a_s \overline{s} \sum_{t \in G} b_t \overline{t} \right) &=& \Gamma \left(\sum_{(s,t) \in G \times G} a_s \sigma_s(b_t) \, \alpha(s,t) \, \overline{st} \right) \\ &=& \sum_{(s,t) \in G \times G} \, a_s \, \sigma_s(b_t) \, \alpha(s,t) \, \overline{\varphi(st)} \\ &=& \sum_{(s,t) \in G \times G} \, a_s \, \rho_{\varphi(s)}(b_t) \, \alpha(s,t) \, \overline{\varphi(s) \varphi(t)} \\ &=& \sum_{(s,t) \in G \times G} \, a_s \, \rho_{\varphi(s)}(b_t) \, \beta(\varphi(s),\varphi(t))\, \overline{\varphi(s) \varphi(t)} \\ &=& \left(\sum_{s \in G} a_s \overline{\varphi(s)} \right) \left(\sum_{t \in G} b_t \overline{\varphi(t)} \right) = \Gamma \left(\sum_{s \in G} a_s \overline{s} \right) \, \Gamma \left(\sum_{t \in G} b_t \overline{t} \right) \\ \end{eqnarray*} and hence $\Gamma$ defines a ring homomorphism. We shall note that the generator of $I$ is mapped onto zero, i.e. 
\begin{displaymath} \Gamma \left( \sum_{s \in N} a_s \, \overline{s} \right) = \sum_{s \in N} a_s \, \overline{\varphi(s)} = \sum_{s \in N} a_s \, \overline{N} = \left( \sum_{s \in N} a_s \right) \, \overline{N} = 0_{\aalg} \, \overline{N} = 0 \end{displaymath} and hence $\Gamma \lvert_{I} \equiv 0$. Furthermore, we see that \begin{displaymath} \Gamma \left(b \overline{e} \right) = 0 \quad \Longrightarrow \quad b \overline{\varphi(e)}=0 \Leftrightarrow b \overline{N}=0 \Leftrightarrow b = 0_{\aalg} \Leftrightarrow b\overline{e} = 0 \end{displaymath} and hence $\Gamma\lvert_{\tilde{\mathcal{A}}}$ is injective. We may now conclude that if $c \in I \cap \tilde{\mathcal{A}}$, then $\Gamma(c)=0$ and so necessarily $c=0$. This shows that $I \cap \tilde{\mathcal{A}} = \{0\}$. \end{proof} When $\mathcal{A}$ is commutative, $\sigma$ is automatically a group homomorphism and we get the following corollary. \begin{cor}\label{explicitideal2} Let $\mathcal{A}$ be commutative and $N \subseteq \sigma^{-1}(\identity_{\aalg}) = \{g\in G \mid \sigma_g = \identity_{\aalg}\}$ a normal subgroup of $G$. Let $\varphi : G \to G/N$ be the quotient group homomorphism and suppose that $\alpha$ is such that $\alpha(s,t)=1_{\aalg}$ whenever $s \in N$ or $t \in N$. Furthermore, suppose that there exists a map $\beta : G/N \times G/N \to U(\mathcal{A})$ such that $\beta(\varphi(s),\varphi(t))= \alpha(s,t)$ for each $(s,t) \in G \times G$. Let $I$ be the ideal in $\aalg \rtimes_\alpha^{\sigma} G$ generated by an element $\sum_{s\in N} a_s \, \overline{s}$ for which the coefficients (of which all but finitely many are zero) satisfy $\sum_{s\in N} a_s = 0_{\aalg}$. Then, \begin{displaymath} I \cap \tilde{\mathcal{A}} = \{0\}. \end{displaymath} \end{cor} When $\alpha \equiv 1_{\aalg}$ there is no need to assume that $\mathcal{A}$ is commutative in order for $\sigma$ to be a group homomorphism. In this case we may choose $\beta \equiv 1_{\aalg}$. Thus, by Theorem \ref{explicitideal} we have the following corollaries. \begin{cor}\label{explicitidealtrivial} Let $\alpha \equiv 1_{\aalg}$ and $N \subseteq \sigma^{-1}(\identity_{\aalg}) = \{g\in G \mid \sigma_g = \identity_{\aalg}\}$ be a normal subgroup of $G$. Let $I$ be the ideal in $\aalg \rtimes_\alpha^{\sigma} G$ generated by an element $\sum_{s\in N} a_s \, \overline{s}$ for which the coefficients (of which all but finitely many are zero) satisfy $\sum_{s\in N} a_s = 0_{\aalg}$. Then, \begin{displaymath} I \cap \tilde{\mathcal{A}} = \{0\}. \end{displaymath} \end{cor} \begin{cor}\label{nontrivialintersectionimplytrivialsigma} Let $\alpha \equiv 1_{\aalg}$. Then the following implication holds: \begin{center} {\rm(i)} $Z(G) \cap \sigma^{-1}(\identity_{\aalg}) \neq \{e\}$. \\ $\Downarrow$ \\ {\rm (ii)} For each $g\in Z(G) \cap \sigma^{-1}(\identity_{\aalg})$, the ideal $I_g$ generated by the element $\sum_{n\in \mathbb{Z}} a_n \, \overline{g^n}$ for which $\sum_{n \in \mathbb{Z}} a_n =0_{\aalg}$ has the property $I_g \cap \tilde{\mathcal{A}} = \{0\}$. \end{center} \end{cor} \begin{proof} Suppose that there exists a $g \in (Z(G) \cap \sigma^{-1}(\identity_{\aalg})) \setminus\{e\}$. Let $I_g \subseteq \aalg \rtimes_\alpha^{\sigma} G$ be the ideal generated by $\sum_{n\in \mathbb{Z}} a_n \, \overline{g^n}$, where $\sum_{n \in \mathbb{Z}} a_n =0_{\aalg}$. The element $g$ commutes with each element of $G$ and hence the cyclic subgroup $N = \langle g \rangle$ generated by $g$ is normal in $G$ and, since $\sigma$ is a group homomorphism, $N \subseteq \sigma^{-1}(\identity_{\aalg})$. 
Hence $I_g \cap \tilde{\mathcal{A}} = \{0\}$ by Corollary \ref{explicitidealtrivial}. \end{proof} \begin{cor}\label{idealtoaction} Let $\alpha \equiv 1_{\aalg}$ and $G$ be abelian. Then the following implication holds: \begin{center} {\rm (i)} $I \cap \tilde{\mathcal{A}} \neq \{0\}$, for all non-zero two-sided ideals $I$ in $\aalg \rtimes_\alpha^{\sigma} G$. \\ $\Downarrow$ \\ {\rm(ii)} $\sigma_g \neq \identity_{\aalg}$ for all $g \in G \setminus \{e\}$. \end{center} \end{cor} \begin{proof}[Proof by contraposition] Since $G$ is abelian, $G=Z(G)$. Suppose that (ii) is false, i.e. there exists $g \in G \setminus \{e\}$ such that $\sigma_g = \identity_{\aalg}$. Pick such a $g$ and let $I_g \subseteq \aalg \rtimes_\alpha^{\sigma} G$ be the ideal generated by $1_{\aalg} \, \overline{e} - 1_{\aalg} \, \overline{g}$. Then obviously $I_g \neq \{0\}$ and by Corollary \ref{nontrivialintersectionimplytrivialsigma} we get $I_g \cap \tilde{\mathcal{A}} = \{0\}$ and hence (i) is false. This concludes the proof. \end{proof} \begin{exmp} We should note that in the proof of Corollary \ref{idealtoaction} one could have chosen the ideal in many different ways. The ideal generated by $1_{\aalg} \overline{e} - 1_{\aalg} \overline{g} + 1_{\aalg} \overline{g^2} - 1_{\aalg} \overline{g^3} + \ldots + 1_{\aalg} \overline{g^{2n}} - 1_{\aalg} \overline{g^{2n+1}} = (1_{\aalg} \overline{e} - 1_{\aalg} \overline{g}) \sum_{k=0}^{n} 1_{\aalg} \, \overline{g^{2k}}$ is contained in the ideal $I_g$, generated by $1_{\aalg} \overline{e} - 1_{\aalg} \overline{g}$, and therefore it has zero intersection with $\tilde{\mathcal{A}}$ if $I_g \cap \tilde{\mathcal{A}} = \{0\}$. Also note that for $\alpha \equiv 1_{\aalg}$ we may always write \begin{displaymath} 1_{\aalg} \overline{e} - 1_{\aalg} \overline{g^n} = (1_{\aalg} \overline{e} - 1_{\aalg} \overline{g})(\sum_{k=0}^{n-1} 1_{\aalg} \, \overline{g^k}) \end{displaymath} and hence $1_{\aalg} \overline{e} - 1_{\aalg} \overline{g}$ is a zero-divisor in the crossed product $\aalg \rtimes_\alpha^{\sigma} G$ whenever $g$ is a torsion element. \end{exmp} \begin{exmp} We now give an example of how one may choose $\beta$ as in Theorem \ref{explicitideal}. Let $N \subseteq \sigma^{-1}(\identity_{\aalg})$ be a normal subgroup of $G$ such that for $g\in N$, $\alpha(s,g)=1_{\aalg}$ for all $s\in G$ and let $\alpha$ be symmetric. Since $\alpha$ is the $\sigma$-cocycle map of a $G$-crossed system, we get \begin{eqnarray*} \alpha(g,s) \, \alpha(gs,t) = \sigma_g(\alpha(s,t)) \, \alpha(g,st) \Longleftrightarrow \alpha(g,s) \, \alpha(gs,t) = \alpha(s,t) \, \alpha(g,st) \\ \Longleftrightarrow \alpha(gs,t) = \alpha(s,t) \end{eqnarray*} for all $(s,t) \in G \times G$. Using the last equality and the symmetry of $\alpha$ we immediately see that \begin{displaymath} \alpha(g s, h t)= \alpha(s,t) \quad \forall s,t \in G \end{displaymath} for all $g,h \in N$. The last equality means that $\alpha$ is constant on pairs of cosets of $N$ (right and left cosets coincide by normality of $N$). It is therefore clear that we can define $\beta : G/N \times G/N \to U(\mathcal{A})$ by $\beta(\varphi(s),\varphi(t)) := \alpha(s,t)$ for any $s,t \in G$. \end{exmp} \begin{thm}\label{idealtomaxcomm} Let $\mathcal{A}$ be an integral domain, $G$ an abelian group and $\alpha \equiv 1_{\aalg}$. Then the following implication holds: \begin{center} (i) $I \cap \tilde{\mathcal{A}} \neq \{0\}$, for every non-zero two-sided ideal $I$ in $\aalg \rtimes_\alpha^{\sigma} G$. 
\\ $\Downarrow$ \\ (ii) $\tilde{\mathcal{A}}$ is a maximal commutative subring in $\aalg \rtimes_\alpha^{\sigma} G$. \end{center} \end{thm} \begin{proof} This follows from Corollary \ref{maxabeliansigmaconverse} and Corollary \ref{idealtoaction}. \end{proof} \begin{exmp}[The quantum torus] Let $q \in \mathbb{C} \setminus \{0, 1\}$ and denote by $\mathbb{C}_q[x,x^{-1},y,y^{-1}]$ the \emph{twisted Laurent polynomial ring} in two non-commuting variables under the twisting \begin{eqnarray}\label{qformula} y \, x = q \, x \, y. \end{eqnarray} The ring $\mathbb{C}_q[x,x^{-1},y,y^{-1}]$ is known as the \emph{quantum torus}. Now let \begin{itemize} \item $\mathcal{A} = \mathbb{C}[x,x^{-1}]$, \item $G = (\mathbb{Z},+)$, \item $\sigma_n : P(x) \mapsto P(q^n x)$, $n \in G$, \item $\alpha(s,t) = 1_{\aalg}$ for all $s,t \in G$. \end{itemize} It is easily verified that $\sigma$ and $\alpha$ together satisfy conditions (i)-(iii) of a $G$-crossed system and it is not hard to see that $\aalg \rtimes_\alpha^{\sigma} G \cong \mathbb{C}_q[x,x^{-1},y,y^{-1}]$. In the current example, $\mathcal{A}$ is an integral domain, $G$ is abelian, $\alpha \equiv 1_{\aalg}$ and hence all the conditions of Theorem \ref{idealtomaxcomm} are satisfied. Note that the commutation relation \eqref{qformula} implies \begin{eqnarray}\label{qmulti} y^n \, x^m = q^{mn} \, x^m \, y^n, \quad \forall n,m \in \mathbb{Z}. \end{eqnarray} It is important to distinguish between two different cases: \begin{case}[$q$ is a root of unity] Suppose that $q^n = 1$ for some $n\neq 0$. From equality \eqref{qmulti} we note that $y^n \in Z(\mathbb{C}_q[x,x^{-1},y,y^{-1}])$ and hence $\mathbb{C}[x,x^{-1}]$ is not maximal commutative in $\mathbb{C}_q[x,x^{-1},y,y^{-1}]$. Thus, according to Theorem \ref{idealtomaxcomm}, there must exist some non-zero ideal $I$ which has zero intersection with $\mathbb{C}[x,x^{-1}]$. \end{case} \begin{case}[$q$ is not a root of unity] Suppose that $q^n \neq 1$ for all $n \in \mathbb{Z} \setminus \{0\}$. One can show that this implies that $\mathbb{C}_q[x,x^{-1},y,y^{-1}]$ is \emph{simple}. This means that the only non-zero ideal is $\mathbb{C}_q[x,x^{-1},y,y^{-1}]$ itself and this ideal obviously intersects $\mathbb{C}[x,x^{-1}]$ non-trivially. Hence, by Theorem \ref{idealtomaxcomm}, we conclude that $\mathbb{C}[x,x^{-1}]$ is maximal commutative in $\mathbb{C}_q[x,x^{-1},y,y^{-1}]$. \end{case} \end{exmp} \section{Ideals, intersections and zero-divisors} Let $D$ denote the subset of zero-divisors in $\mathcal{A}$ and note that $D$ is always non-empty since $0_{\aalg} \in D$. By $\tilde{D}$ we denote the image of $D$ under the embedding $\iota$. \begin{thm}\label{idealszerodivisors} Let $\mathcal{A}$ be commutative. Then the following implication holds: \begin{center} (i) $I \cap \left( \tilde{\mathcal{A}} \setminus \tilde{D} \right) \neq \emptyset$, for every non-zero two-sided ideal $I$ in $\aalg \rtimes_\alpha^{\sigma} G$. \\ $\Downarrow$ \\ (ii) $D \cap \mathcal{A}^G = \{ 0_{\aalg} \}$, i.e. the only zero-divisor that is fixed under all automorphisms is $0_{\aalg}$. \end{center} \end{thm} \begin{proof}[Proof by contraposition] Let $\mathcal{A}$ be commutative. Suppose that $D \cap \mathcal{A}^G \neq \{ 0_{\aalg} \}$. Then there exists some $c \in D \setminus \{0_{\aalg}\}$ such that $\sigma_s(c)=c$ for all $s \in G$. There is also some $d \in D \setminus \{0_{\aalg}\}$ such that $c \cdot d = 0_{\aalg}$. 
Consider the ideal \begin{displaymath} \ann(c) = \{a \in \mathcal{A} \, \mid \, a\cdot c = 0_{\aalg} \} \end{displaymath} of $\mathcal{A}$. It is clearly non-empty since we always have $0_{\aalg} \in \ann(c)$ and $d \in \ann(c)$. Let $\theta$ be the quotient homomorphism \begin{displaymath} \theta : \mathcal{A} \to \mathcal{A} / \ann(c), \quad a \mapsto a + \ann(c). \end{displaymath} Let us define a map $\rho : G \to \aut(\mathcal{A} / \ann(c))$ by \begin{displaymath} \quad \rho_s(a + \ann(c)) = \sigma_s(a) + \ann(c) \end{displaymath} for all $a + \ann(c) \in \mathcal{A}/\ann(c)$. Note that $\ann(c)$ is invariant under $\sigma_s$ for all $s \in G$ and thus it is easily verified that $\rho$ is a well-defined automorphism on $\mathcal{A}/\ann(c)$. By introducing the function \begin{displaymath} \beta : G \times G \to U(\mathcal{A}/\ann(c)), \quad (s,t) \mapsto (\theta \circ \alpha)(s,t) \end{displaymath} it is easy to verify that $\{\mathcal{A}/\ann(c),G,\rho,\beta\}$ is in fact a $G$-crossed system. Now consider the map \begin{displaymath} \Gamma : \aalg \rtimes_\alpha^{\sigma} G \to \mathcal{A}/\ann(c) \rtimes_\beta^{\rho} G, \quad \sum_{s \in G} a_s \overline{s} \mapsto \sum_{s \in G} \theta(a_s) \overline{s}. \end{displaymath} For any two elements $\sum_{s \in G} a_s \overline{s}, \sum_{t \in G} b_t \overline{t} \in \aalg \rtimes_\alpha^{\sigma} G$ the additivity of $\Gamma$ follows by \begin{eqnarray*} \Gamma \left(\sum_{s \in G} a_s \overline{s} + \sum_{t \in G} b_t \overline{t} \right) &=& \Gamma \left(\sum_{s \in G} (a_s + b_s) \overline{s} \right) = \sum_{s \in G} \theta(a_s + b_s) \overline{s} \\ &=& \sum_{s \in G} \theta(a_s) \overline{s} + \sum_{t \in G} \theta(b_t) \overline{t} = \Gamma \left(\sum_{s \in G} a_s \overline{s} \right) + \Gamma \left(\sum_{t \in G} b_t \overline{t} \right) \end{eqnarray*} and due to the assumptions, the multiplicativity follows by \begin{eqnarray*} \Gamma \left(\sum_{s \in G} a_s \overline{s} \sum_{t \in G} b_t \overline{t} \right) &=& \Gamma \left(\sum_{(s,t) \in G \times G} a_s \, \sigma_s(b_t) \, \alpha(s,t) \, \overline{st} \right) \\ &=& \sum_{(s,t) \in G \times G} \, \theta(a_s \, \sigma_s(b_t) \, \alpha(s,t) ) \, \overline{st} \\ &=& \sum_{(s,t) \in G \times G} \, \theta(a_s) \, \theta(\sigma_s(b_t)) \, \theta(\alpha(s,t)) \, \overline{s t} \\ &=& \sum_{(s,t) \in G \times G} \theta(a_s) \, \rho_s( \theta(b_t) ) \, \beta(s,t) \, \overline{s t} \\ &=& \left(\sum_{s \in G} \theta(a_s) \overline{s} \right) \left(\sum_{t \in G} \theta(b_t) \overline{t} \right) = \Gamma \left(\sum_{s \in G} a_s \overline{s} \right) \, \Gamma \left(\sum_{t \in G} b_t \overline{t} \right) \end{eqnarray*} where have used that $\beta = \theta \circ \alpha$ and $\theta(\sigma_s(b_t)) = \rho_s( \theta(b_t))$ for all $b_t \in \mathcal{A}$ and $s\in G$. This shows that $\Gamma$ is a ring homomorphism. Now, pick some $g \neq e$ and let $I$ be the ideal generated by $d\, \overline{g}$. Clearly $I \neq \{0\}$ and we see that $\Gamma\lvert_{I} \equiv 0$. Note that $\ker(\theta) = \ann(c)$ and in particular we have \begin{displaymath} \Gamma(a \overline{e}) = 0 \Longrightarrow a \in \ann(c). \end{displaymath} Take $m \, \overline{e} \in I \cap \left( \tilde{\mathcal{A}} \setminus \tilde{D} \right)$. Then $\Gamma(m \, \overline{e}) = 0$ and hence $m \in \ann(c) \subseteq D$, which is a contradiction. This shows that \begin{displaymath} I \cap \left( \tilde{\mathcal{A}} \setminus \tilde{D} \right) = \emptyset \end{displaymath} and by contrapositivity this concludes the proof. 
\end{proof} \begin{exmp}[The truncated quantum torus] Let $q \in \mathbb{C} \setminus \{0, 1\}$, $m \in \mathbb{N}$ and consider the ring \begin{eqnarray*} \frac{\mathbb{C}[x,y,y^{-1}]}{(y \, x - q \, x \, y \, , \, x^m)} \end{eqnarray*} which is commonly referred to as the \emph{truncated quantum torus}. It is easily verified that this ring is isomorphic to $\aalg \rtimes_\alpha^{\sigma} G$ with \begin{itemize} \item $\mathcal{A} = \mathbb{C}[x]/(x^m)$, \item $G = (\mathbb{Z},+)$, \item $\sigma_n : P(x) \mapsto P(q^n x)$, $n \in G$, \item $\alpha(s,t) = 1_{\aalg}$ for all $s,t \in G$. \end{itemize} One should note that in this case $\mathcal{A}$ is commutative, but not an integral domain. In fact, the zero-divisors in $\mathbb{C}[x]/(x^m)$ are precisely those polynomials where the constant term is zero, i.e. $p(x)=\sum_{i=0}^{m-1} a_i \, x^i$, with $a_i \in \mathbb{C}$, such that $a_0=0$. It is also important to remark that, unlike the \emph{quantum torus}, $\aalg \rtimes_\alpha^{\sigma} G$ is never simple (for $m>1$). In fact we always have a chain of two-sided ideals \begin{displaymath} \frac{\mathbb{C}[x,y,y^{-1}]}{(y \, x - q \, x \, y \, , \, x^m)} \supset \langle x \rangle \supset \langle x^2 \rangle \supset \ldots \supset \langle x^{m-1} \rangle \supset \{0\} \end{displaymath} independent of the value of $q$. Moreover, the two-sided ideal $J = \langle x^{m-1} \rangle$ is contained in $\comm(\mathbb{C}[x]/(x^m))$ and contains elements outside of $\mathbb{C}[x]/(x^m)$. Hence we conclude that $\mathbb{C}[x]/(x^m)$ is not maximal commutative in $\frac{\mathbb{C}[x,y,y^{-1}]}{(y \, x - q \, x \, y \, , \, x^m)}$. When $q$ is a root of unity, with $q^n=1$ for some $n < m$, we are able to say more. Consider the polynomial $p(x) = x^n$, which is a non-trivial zero-divisor in $\mathbb{C}[x]/(x^m)$. For every $s \in \mathbb{Z}$ we see that $p(x)=x^n$ is fixed under the automorphism $\sigma_s$ and therefore, by Theorem \ref{idealszerodivisors}, we conclude that there exists a non-zero two-sided ideal in $\frac{\mathbb{C}[x,y,y^{-1}]}{(y \, x - q \, x \, y \, , \, x^m)}$ such that its intersection with $\tilde{\mathcal{A}} \setminus \tilde{D}$ is empty. \end{exmp} \section{Comments to the literature} The literature contains several different types of intersection theorems for group rings, Ore extensions and crossed products. Typically these theorems rely on heavy restrictions on the coefficient rings and the groups involved. We shall now give references to some interesting results in the literature. It was proven in \cite[Theorem 1, Theorem 2]{Rowen} that the center of a semiprimitive (semisimple in the sense of Jacobson \cite{Jacobson}) P.I. ring, respectively of a semiprime P.I. ring, has a non-zero intersection with every non-zero ideal in such a ring. For crossed products satisfying the conditions in \cite[Theorem 2]{Rowen}, this offers a more precise result than Theorem \ref{killerproof} since $Z(\aalg \rtimes_\alpha^{\sigma} G) \subseteq \comm(\tilde{\mathcal{A}})$. However, not every crossed product is semiprime or a P.I. ring, and this justifies the need for Theorem \ref{killerproof}. In \cite[Lemma 2.6]{LorenzPassman} it was proven that if the coefficient ring $\mathcal{A}$ of a crossed product $\aalg \rtimes_\alpha^{\sigma} G$ is prime, $P$ is a prime ideal in $\aalg \rtimes_\alpha^{\sigma} G$ such that $P \cap \tilde{\mathcal{A}} = 0$ and $I$ is an ideal in $\aalg \rtimes_\alpha^{\sigma} G$ properly containing $P$, then $I \cap \tilde{\mathcal{A}} \neq 0$.
Furthermore, in \cite[Proposition 5.4]{LorenzPassman} it was proven that the crossed product $\aalg \rtimes_\alpha^{\sigma} G$ with $G$ abelian and $\mathcal{A}$ a $G$-prime ring has the property that, if $G_{\inn}=\{e\}$, then every non-zero ideal in $\aalg \rtimes_\alpha^{\sigma} G$ has a non-zero intersection with $\tilde{\mathcal{A}}$. It was shown in \cite[Corollary 3]{FisherMontgomery} that if $\mathcal{A}$ is semiprime and $G_{\inn}=\{e\}$, then every non-zero ideal in $\aalg \rtimes_\alpha^{\sigma} G$ has a non-zero intersection with $\tilde{\mathcal{A}}$. In \cite[Lemma 3.8]{LorenzPassman2} it was shown that if $\mathcal{A}$ is a $G$-prime ring, $P$ a prime ideal in $\aalg \rtimes_\alpha^{\sigma} G$ with $P\cap \tilde{\mathcal{A}}=0$ and if $I$ is an ideal in $\aalg \rtimes_\alpha^{\sigma} G$ properly containing $P$, then $I\cap \tilde{\mathcal{A}} \neq 0$. In \cite[Proposition 2.6]{MontgomeryPassman} it was shown that if $\mathcal{A}$ is a prime ring and $I$ is a non-zero ideal in $\aalg \rtimes_\alpha^{\sigma} G$, then $I \cap (\mathcal{A} \rtimes_{\alpha}^{\sigma} G_{\inn}) \neq 0$. In \cite[Proposition 2.11]{MontgomeryPassman} it was shown that for a crossed product $\aalg \rtimes_\alpha^{\sigma} G$ with prime ring $\mathcal{A}$, every non-zero ideal in $\aalg \rtimes_\alpha^{\sigma} G$ has a non-zero intersection with $\tilde{\mathcal{A}}$ if and only if $C^t[G_{\inn}]$ is $G$-simple and in particular if $|G_{\inn}| < \infty$, then every non-zero ideal in $\aalg \rtimes_\alpha^{\sigma} G$ has a non-zero intersection with $\tilde{\mathcal{A}}$ if and only if $\aalg \rtimes_\alpha^{\sigma} G$ is prime. Corollary \ref{maxcommcorollary} shows that if $\tilde{\mathcal{A}}$ is maximal commutative in $\aalg \rtimes_\alpha^{\sigma} G$, without any further conditions on the coefficient ring or the group, we are able to conclude that every non-zero ideal in $\aalg \rtimes_\alpha^{\sigma} G$ has a non-zero intersection with $\tilde{\mathcal{A}}$. In the theory of group rings (crossed products with no action or twisting) the intersection properties of ideals with certain subrings have played an important role and are studied in depth in for example \cite{FormanekLichtman}, \cite{LorenzPassman3} and \cite{Passman2}. Some further properties of intersections of ideals and homogeneous components in graded rings have been studied in for example \cite{CohenMontgomery}, \cite{MarubNauweOysta}. For ideals in Ore extensions there are interesting results in \cite[Theorem 4.1]{Irving1D} and \cite[Lemma 2.2, Theorem 2.3, Corollary 2.4]{LaunLenaRiga}, explaining a correspondence between certain ideals in the Ore extension and certain ideals in its coefficient ring. Given a domain $\mathcal{A}$ of characteristic $0$ and a non-zero derivation $\delta$ it is shown in \cite[Proposition 2.6]{Irving2D} that every non-zero ideal in the Ore extension $R= \mathcal{A} [x; \delta]$ intersects $\mathcal{A}$ in a non-zero $\delta$-invariant ideal. Similar types of intersection results for ideals in Ore extension rings can be found in for example \cite{LeroyMatczuk} and \cite{McConnellRobson}. The results in this article appeared initially in the preprint \cite{OinSil}.\\ \noindent \textbf{Acknowledgements.} We are grateful to Marcel de Jeu, Christian Svensson, Theodora Theohari-Apostolidi and especially Freddy Van Oystaeyen for useful discussions on the topic of this article. \end{document}
\begin{document} \newtheorem{theorem}{Theorem}[section] \newtheorem{thm}[theorem]{Theorem} \newtheorem{prop}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{remark}[theorem]{Remark} \newcommand{{\mathbb C}}{{\mathbb C}} \newcommand\BB{{\mathcal B}} \newcommand\FF{{\mathcal F}} \newcommand\VV{{\mathcal V}} \newcommand\WW{{\mathcal W}} \newcommand\Z{{\mathbb Z}} \newcommand{\overline}{\overline} \title[An almost complex Castelnuovo de Franchis theorem]{An almost complex Castelnuovo de Franchis theorem} \author{Indranil Biswas} \address{School of Mathematics, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400005, India} \email{[email protected]} \author{Mahan Mj} \address{School of Mathematics, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400005, India} \email{[email protected]} \subjclass[2000]{32Q60, 14F45} \keywords{Almost complex structure, Castelnuovo--de Franchis theorem, Riemann surface, fundamental group} \date{} \begin{abstract} Given a compact almost complex manifold, we prove a Castelnuovo--de Franchis type theorem for it. \end{abstract} \maketitle \section{Introduction} Given a smooth complex projective variety $X$, the classical Castelnuovo-de Franchis theorem \cite{cas, df} associates to an isotropic subspace of $H^0(X,\, \Omega^1_X)$ of dimension greater than one an irrational pencil on $X$. The topological nature of this theorem was brought out by Catanese \cite{cat}, who established a bijective correspondence between subspaces $\widetilde{U} \,\subset\, H^1(X, {\mathbb C})$ of the form $U \oplus \overline{U}$ with $U$ being a maximal isotropic subspace of $H^1(X, {\mathbb C})$ of dimension $g\, \geq\, 1$ and irrational fibrations on $X$ of genus $g$. The purpose of this note is to emphasize this topological content further. We extract topological hypotheses that allow the theorem to go through when we have only an almost complex structure. In what follows, $M$ will be a compact smooth manifold of dimension $2k$. Let $$ J_M\, :\, TM\,\longrightarrow\, TM $$ be an almost complex structure on $M$, meaning $J_M\circ J_M\,=\, -\text{Id}_{TM}$. \begin{definition} We shall say that a collection of closed complex $1$--forms $\omega_1, \cdots, \omega_n$ on $M$ are in {\it general position} if \begin{enumerate} \item the zero-sets $$ Z(\omega_i)\,:=\, \{x\, \in\, M\, \mid\, \omega_i(x)\,=\, 0\}\, \subset\, M $$ are smooth embedded submanifolds, and \item these submanifolds $Z(\omega_i)$ intersect transversally. \end{enumerate} \end{definition} We are now in a position to state the main theorem of this note. \begin{theorem} \label{main} Let $M$ be a compact smooth $2k$--manifold equipped with an almost complex structure $J_M$. Let $\omega_1\, , \cdots\, , \omega_g$ be closed complex $1$--forms on $M$ linearly independent over ${\mathbb C}$, with $g\, \geq\, 2$, such that \begin{itemize} \item each $\omega_i$ is of type $(1\, ,0)$, meaning $\omega_i(J_M (v))\,=\, \sqrt{-1}\cdot\omega_i(v)$ for all $v\,\in\, TM$, and \item $\omega_i$ are in general position with $\omega_i \wedge \omega_j \,=\, 0$ for all $i\, , j$. \end{itemize} Then there exists a smooth almost holomorphic map $f \,:\, M\,\longrightarrow\, C$ to a compact Riemann surface of genus at least $g$, and there are linearly independent holomorphic $1$--forms $\eta_1\, , \cdots\, , \eta_g$ on $C$, such that $\omega_i \,=\, f^\ast \eta_i$ for all $i$. 
\end{theorem} \section{Leaf space and almost complex blow-up}\label{se2} \subsection{Leaf space} Assume that $\omega_i$ are forms as in Theorem \ref{main}. Since $\omega_i \wedge \omega_j \,=\, 0$ for all $1\,\leq\, i\, , j\,\leq\, g$, it follows that there are complex valued smooth functions $f_{i,j}$ such that \begin{equation}\label{e1} \omega_i \,=\, f_{i,j} \omega_j \end{equation} wherever $\omega_j \,\neq\, 0$. Hence the collection $$\WW\,=\,\{ \omega_1, \cdots, \omega_g \}$$ determines a complex line subbundle of the complexified cotangent bundle $(T^*M)\otimes{\mathbb C}$ over the open subset $$ V\,:=\, M \setminus \bigcap_{i=1}^g Z(\omega_i)\,\subset\, M\, . $$ \begin{lemma}\label{cod2} Let $\FF\,:=\, \{v\, \in\, TV\,\mid\, \omega_i(v)\,=\, 0\,~~ \forall ~~ 1\,\leq\, i\, \leq\, g\}\,\subset\, TV$ be the distribution on $V$ defined by $\WW$. Then $\FF$ is integrable and defines a foliation of real codimension two on $V$. \end{lemma} \begin{proof} For any $x\, \in\, M$ and $1\,\leq\, i\, \leq\, g$, consider the $\mathbb R$--linear homomorphism $$ \omega^x_i\, :\, T_xM \, \longrightarrow\, {\mathbb C}\, ,~~~\, ~~ v\, \longmapsto\, \omega_i(x)(v)\, . $$ Since $\omega_i$ is of type $(1\, ,0)$, if $\omega_i(x)\, \not=\, 0$, then $\omega^x_i$ is surjective. Also, $\omega^x_j$ is a scalar multiple of $\omega^x_i$ because $\omega_i\wedge \omega_j\,=\, 0$. Therefore, we conclude that the distribution $\FF$ on $V$ is of real codimension two. Since $d\omega_i\,=\, 0$ for all $i$, it follows that the distribution $\FF$ is integrable. \end{proof} \begin{remark}\label{even} Since $Z(\omega_i)$ are smooth embedded submanifolds by hypothesis, and $\omega_i$ are of type $(1\, ,0)$, it follows that $Z(\omega_i)$ are almost complex submanifolds of $M$, meaning $J_M$ preserves the tangent subbundle $TZ(\omega_i)\, \subset\, (TM)\vert_{Z(\omega_i)}$. Consequently, all the intersections of the $Z(\omega_i)$'s are also almost complex submanifolds and are therefore even dimensional. Clearly, each $Z(\omega_i)$ has complex codimension at least one (real codimension at least two) as an almost complex submanifold. Therefore, the complement $M \setminus V$ has complex codimension at least two. \end{remark} \subsection{Almost complex blow-up} We shall need an appropriate notion of blow-up in our context. Suppose $K\,\subset\, M$ is a smooth embedded submanifold of $M$ of dimension $2j$ such that $J_M (TK) \,=\, TK$. By Remark \ref{even}, the submanifold $\bigcap_{i=1}^g Z(\omega_i)$ satisfies these conditions. Note that $J_M$ induces an automorphism $$ J_{M/K}\, :\, ((TM)\vert_K)/TK\,\longrightarrow\, ((TM)\vert_K)/TK $$ of the quotient bundle over $K$. Since $J_M$ is an almost complex structure it follows that $J_{M/K}\circ J_{M/K}\,=\, - \text{Id}_{((TM)\vert_K)/TK}$. Therefore, $((TM)\vert_K)/TK$ is a complex vector bundle on $K$ of rank $k-j$. We would like to replace $K$ by the (complex) projectivized normal bundle ${\mathbb P}(((TM)\vert_K)/TK)$ which will be called the \textbf{almost complex blow-up} of $M$ along $K$. For notational convenience, the intersection $\bigcap_{i=1}^g Z(\omega_i)$ will be denoted by $\mathcal Z$. We first projectivize $(TM)\vert_{\mathcal Z}$ to get a ${\mathbb C}{\mathbb P}^{k-1}$ bundle over $\mathcal Z$; this ${\mathbb C}{\mathbb P}^{k-1}$ bundle will be denoted by $\BB$.
So $\BB$ parametrizes the space of all (real) two dimensional subspaces of $(TM)\vert_{\mathcal Z}$ preserved by $J_M$ (such two dimensional subspaces are precisely the complex lines in $(TM)\vert_{\mathcal Z}$ equipped with the complex vector bundle structure defined by $J_M$). Let $$\pi\,:\, \BB\,\longrightarrow\, \mathcal Z$$ be the natural projection. For notational convenience, the pulled back vector bundle $\pi^\ast ((TM)\vert_{\mathcal Z})$ will be denoted by $\pi^\ast TM$. Let $$ T_\pi\, :=\, {\rm kernel}(d\pi)\, \subset\, T\BB $$ be the relative tangent bundle, where $d\pi\, :\, T\BB\, \longrightarrow\, \pi^\ast T{\mathcal Z}$ is the differential of $\pi$. The map $\pi$ is almost holomorphic, and $T_\pi$ has the structure of a complex vector bundle. It is known that the complex vector bundle $T_\pi$ is identified with the vector bundle $Hom_{\mathbb C}({\mathcal L},\, (\pi^\ast TM)/{\mathcal L})\,=\, ((\pi^\ast TM)/{\mathcal L})\otimes_{\mathbb C} {\mathcal L}^*$, where $$ {\mathcal L}\, \subset\, \pi^\ast TM $$ is the tautological real vector bundle of rank two; note that both ${\mathcal L}$ and $(\pi^\ast TM)/{\mathcal L}$ have structures of complex vector bundles given by $J_M$, and the above tensor product and homomorphisms are both over $\mathbb C$. Therefore, the pullback $\pi^\ast TM$ splits as $${\mathcal L}\oplus ((\pi^\ast TM)/{\mathcal L}) \,=\, {\mathcal L}\oplus (T_{\pi}\otimes {\mathcal L})\, . $$ The image of the zero section of the complex line bundle ${\mathcal L} \,\longrightarrow\, \BB$ is identified with $\BB$, and the normal bundle of $\BB\, \subset\, {\mathcal L}$ is identified with ${\mathcal L}$. A small deleted normal neighborhood $U_{\BB}$ of $\BB$ in ${\mathcal L}$ can be identified with a deleted neighborhood $U$ of $\mathcal Z$ in $M$. Let $\overline{U}_{\BB}\,:=\, U_{\BB}\bigcup \BB$ be the neighborhood of $\BB$ in ${\mathcal L}$ (it is no longer a deleted neighborhood), where $\BB$ is again identified with the image of the zero section of $\mathcal L$. In the disjoint union $(M\setminus{\mathcal Z})\sqcup \overline{U}_{\BB}$, we may identify $U$ with $U_{\BB}$. The resulting topological space will be called the {\bf almost complex blow-up} of $M$ along ${\mathcal Z}$. \begin{remark}\mbox{} \begin{enumerate} \item Consider the foliation on the deleted neighborhood $U$ of $\mathcal Z$ in $M$ defined by $\mathcal F$. Let ${\mathbb L}_U$ denote the leaf space for it. After identifying $\BB$ with the image of the zero section of $\pi^\ast TM$, we obtain locally, from a neighborhood of $\BB$, a map to the leaf space ${\mathbb L}_U$. \item Note that the notion of an almost complex blow-up above is well-defined up to the choice of an identification of $U$ with $U_{\BB}$. Therefore, the construction yields a well-defined almost complex manifold, up to an isomorphism. \end{enumerate} \end{remark} \section{Proof of Theorem \ref{main}} We will first show that the leaves of $\FF$ in Section \ref{se2} are proper embedded submanifolds of $V\,=\, M\setminus {\mathcal Z}$. Suppose some leaf ${\mathbb L}_0$ does not satisfy the above property. Then there exists $x \,\in \,V$ and a neighborhood $U_x$ of $x$ such that $U_x\bigcap{\mathbb L}_0$ contains infinitely many leaves (of $\FF$) accumulating at $x$. Let $\sigma$ be a smooth path starting at $x$ transverse to $\FF$. 
Thus for every $\epsilon\,> \,0$ there exist \begin{enumerate} \item distinct (local) leaves $F_1\, , F_2\,\subset\, U_x$ of the foliation $\FF$, such that (globally) $F_1, F_2 \,\subset\, {\mathbb L}_0$, \item points $y_j \,\in\, F_j$, $j\,=\, 1\, ,2$, \item a sub-path $\sigma_{12}\,\subset\,\sigma \,\subset\, U_x$ joining $y_1$ and $y_2$, and \item a path $\tau_{12}\,\subset\, {\mathbb L}_0$ joining $y_1$ and $y_2$ such that $$\vert\int_{\sigma_{12}} \omega_i\vert \,< \,\epsilon\, \ \ ~ \forall\ i\, . $$ \end{enumerate} Since $\FF$ is in the kernel of each $\omega_i$, $$\vert\int_{\tau_{12}} \omega_i\vert \, = \,0, \ \ ~ \forall\ i\, . $$ Setting $\epsilon$ smaller than the absolute value of any non-zero period of the $\omega_i$'s, it follows that $$\int_{\tau_{12}\cup\sigma_{12}} \omega_i\, =\,0~\ \ \forall \ \ 1\,\leq\, i\, \leq\, g\, .$$ Hence $$\int_{\sigma_{12}} \omega_i\, =\,0~\ \ \forall \ \ 1\,\leq\, i\, \leq\, g\, .$$ Taking limits we obtain that $\omega_i (\sigma^\prime (0)) \,= \,0$; but this contradicts the choice that $\sigma$ is transverse to $\FF$. Therefore, the leaves of $\FF$ are proper embedded submanifolds of $V$. Consequently, the leaf space $$D\,=\, V/\langle \FF\rangle$$ for the foliation $\FF$ is a smooth 2-manifold. Let $$q\,:\, V\,\longrightarrow\, D$$ be the quotient map. \begin{remark} \label{referee} The referee kindly pointed out to us the following considerably simpler proof of the fact that leaves of $\FF$ are closed in $V$: By equation \eqref{e1}, we have $\omega_i \,=\, f_{i,j} \omega_j$. Since $\omega_i$ are closed, $$d(f_{i,j}) \wedge \omega_j \,=\, 0, \ \ \forall\ i\, , j\, .$$ The functions $f_{i,j}$ define a map $f\,:\, V \,\longrightarrow\, {\mathbb C} P^1$. Since $d(f_{i,j}) \wedge \omega_j \,=\, 0,$ it follows that $f$ is constant on the leaves of the foliation $\FF$ defined by the forms $\omega_i$, $1\,\leq\, i\,\leq\, g$. \end{remark} Next, let $$\xi \,:=\, TV/ \FF$$ be the quotient complex line bundle on $V$. Then $\xi$ carries a natural flat partial connection $\mathcal D$ along the leaves (cf. \cite{La}); this $\mathcal D$ is known as the Bott partial connection. It is straightforward to check that $\mathcal D$ preserves the complex structure on $\xi$. Indeed, this follows immediately from the fact that $\omega_i$ are of type $(1\, ,0)$. Hence $\xi$ induces an almost complex structure on $D$. Since $D$ is two--dimensional, this almost complex structure is integrable giving $D$ the structure of a Riemann surface. Further, there exist closed integral $(1\, ,0)$ forms $\eta_i$, $1\,\leq\, i\,\leq\, g$, on $D$ such that $$\omega_i\,= \,q^\ast \eta_i\, .$$ \subsection{Removing indeterminacy} Since $Z(\omega_i)$ are mutually transverse, we can construct the almost complex blow-ups of $M$ along $\mathcal Z$ successively along the $\mu$--fold intersections, $\mu\,=\,2\, , \cdots\, , g$, to obtain $\widehat M$ such that \begin{enumerate} \item there is an extension of the line bundle $\xi$ to all of $\widehat M$, \item there is a well-defined smooth map $$\widehat{q}\,:\, \widehat{M} \,\longrightarrow\, D$$ extending $q$, and \item the blown-up locus has the structure of complex analytic ${\mathbb C}{\mathbb P}^1$--bundles as usual (due to transversality). \end{enumerate} The Riemann surface $D$ is compact because $\widehat M$ is so. Further, $D$ has genus greater than one as $g \,> \,1$. So any complex analytic map from ${\mathbb C}{\mathbb P}^1$ to $D$ must be constant.
Therefore, $\widehat q$ actually induces a smooth map from $M$ to $D$, and we may assume that the indeterminacy locus of $q$ was empty to start off with, or equivalently that $q$ extends to a smooth map $\overline{q} \,:\, M\,\longrightarrow\, D$. This furnishes the required conclusion and completes the proof of Theorem \ref{main}. \section{Refinements and consequences} \subsection{Stein factorization} We now proceed as in the proof of the classical Castelnuovo-de Franchis Theorem, \cite[p. 24, Theorem 2.7]{abc}, to deduce {\it a posteriori} that $D$ is a Stein factorization of a map to a compact Riemann surface. Define $$h \,:\, V\,\longrightarrow\, {\mathbb C}{\mathbb P}^{g-1}\, , ~\ \ ~ m\,\longmapsto\, [\omega_1 (m)\, :\, \cdots \,:\, \omega_g (m)]\, .$$ Suppose $\omega_i(m) \,\neq\, 0$. Then there exists a small neighborhood $U(m)$ such that $$h(x)\,=\, [f_{1,i} (x)\,:\, \cdots \,:\,f_{i-1,i} (x)\,:\, 1\, :\, f_{i+1,i} (x) \,: \,\cdots \,:\,f_{g,i} (x)]\, , ~\ \forall\ x \,\in\, U(m)\, ,$$ where $f_{l,j}$ are defined in \eqref{e1}. Since $\omega_j \wedge \omega_i \,=\, 0$ and $d\omega_j\,=\, 0$ for all $i\, ,j$, it follows that $df_{j,i} \wedge \omega_i \,= \,0$. Consequently, each $f_{j,i}$ is constant on the leaves of $\FF$. Hence $h$ has a (complex) one dimensional image, so the image of $h$ is a Riemann surface $C$. Further, $h$ induces a holomorphic map $$h_1\,:\, D\,\longrightarrow\, C\, .$$ We note that $D$ may be thought of as the Stein factorization of $h$, and $h$ factors as $h\,=\, h_1\circ q$. \subsection{Further generalizations} Let $M$ be a compact manifold, and let ${\mathcal S}\,\subset\, TM$ be a nonsingular foliation such that $M$ has a flat transversely almost complex structure. This means that we have an automorphism $$ J\, :\, TM/{\mathcal S}\, \longrightarrow\, TM/{\mathcal S} $$ such that \begin{enumerate} \item $J\circ J\,=\, -\text{Id}_{TM/{\mathcal S}}$, and \item $J$ is flat with respect to the Bott partial connection on $TM/{\mathcal S}$. \end{enumerate} Let $\omega_i$, $1\,\leq\, i\, \leq\, g$, be linearly independent smooth sections of $(TM/{\mathcal S})^*\otimes{\mathbb C}$ such that \begin{enumerate} \item each $\omega_i$ is flat with respect to the connection on $(TM/{\mathcal S})^*\otimes{\mathbb C}$ induced by the Bott partial connection on $TM/{\mathcal S}$, \item each $\omega_i$ is of type $(1\, ,0)$, meaning the corresponding homomorphism $TM/{\mathcal S}\, \longrightarrow\, M\times\mathbb C$ is $\mathbb C$--linear, \item each $\omega_i$ is closed when considered as a complex $1$--form on $M$ using the composition $$ TM\otimes{\mathbb C}\,\longrightarrow\, (TM/{\mathcal S})\otimes{\mathbb C} \,\stackrel{\omega_i}{\longrightarrow}\, M\times\mathbb C $$ \item $\omega_i\wedge\omega_j\,=\, 0$ for all $i\, ,j$, and \item $\omega_j$ are in general position. \end{enumerate} Theorem \ref{main} can be generalized to this set--up. \begin{remark} The only real use we have made of the hypothesis in Theorem \ref{main} that $\omega_i$'s are in general position is to ensure that $\mathcal Z$ has complex codimension at least two. This is what allows us to remove indeterminacies in the smooth category (as opposed to the algebraic or complex analytic categories, where complex codimension greater than one follows naturally). Any hypothesis that ensures that the indeterminacy locus has complex codimension greater than one would suffice. \end{remark} \end{document}
\begin{document} \selectlanguage{english} \thispagestyle{empty} \raggedbottom \vspace*{2mm} \centerline{\bfseries\Large QBism, Quantum Nonlocality, and} \vspace*{2mm} \centerline{\bfseries\Large the Objective Paradox} \vspace*{3mm} \centerline{{\bfseries Gerold Gr\"{u}ndler}\,\footnote{\href{mailto:[email protected]}{email:\ [email protected]}}} \vspace*{1mm} \centerline{\small Astrophysical Institute Neunhof, N\"{u}rnberg, Germany} \vspace*{7mm} \noindent\parbox{\textwidth}{\small\hspace{1em}The Quantum-Bayesian interpretation of quantum theory claims to eliminate the question of quantum nonlocality. This claim is not justified, because the question of non-locality does not arise due to any interpretation of quantum theory, but due to objective experimental facts. We define the notion ``objective paradox'' and explain, comparing QBism and the Copenhagen interpretation, how avoidance of any paradox results in poor explanatory power of an interpretation if there actually exists an objective paradox.} \vspace*{5mm} \section{Overview} Quantum Bayesianism\!\cite{Fuchs:qbism1,Fuchs:qbismlocality}, or QBism for short, is an interpretation of quantum theory which claims to \al remove the paradoxes, conundra, and pseudo\bz{-}problems that have plagued quantum foundations for the past nine decades\arp\!\cite{Fuchs:qbismlocality}, and in particular \al eliminates `quantum nonlocality'.\arp\!\cite{Fuchs:qbismlocality} The latter claim is challenged in section\:\ref{sec:nonlocality}\,. In section\:\ref{sec:explpower} the discussion is extended to a general consideration regarding the (relatively poor) explanatory power of QBism. In section\:\ref{sec:objparadox} the notion \al objective paradox\ar is defined. We point out how the anxious avoidance of any paradox --- even an objective paradox --- causes QBism's deficit of explanatory power. \section{Quantum Nonlocality}\label{sec:nonlocality} We consider the gedanken\bz{-}experiment described by Einstein, Podolsky, and Rosen\!\cite{Einstein:EPR} in the simplified variant proposed by Bohm\!\cite{Bohm:quantmech}: An unstable system is prepared in a singlet state. It decays into two fragments with \mbox{$\text{spin}=1/2$} each. At position $x_A$, Alice measures by means of a Stern\bz{-}Gerlach magnet the spin of one of the fragments along the $z$\bz{-}axis. At position $x_B$, Bob measures by means of another Stern\bz{-}Gerlach magnet the spin of the other fragment along the $z$\bz{-}axis. We assume that the measurements at $x_A$ and $x_B$ are space\bz{-}like separated, such that Alice and Bob cannot get any information about the other's result before they have completed their own measurement. In the sequel we will discuss the experiment in a reference frame in which Alice has completed her measurement before Bob starts his measurement. This arbitrary choice of reference system is of no relevance regarding the question of nonlocality.
Based on their knowledge of the experimental setup, Alice and Bob both describe the spin state of the singlet system, as long as they have not yet made their measurements, by the state function\pagebreak[1] \begin{align} \sqrt{\frac{1}{2}}\,\Big(\, |\downarrow\,\rangle _A\otimes |\uparrow\,\rangle _B-|\uparrow\,\rangle _A\otimes |\downarrow\,\rangle _B\,\Big)\ .\label{kjsjngnhsfd} \end{align} Depending on the result $\uparrow _{A}$ or $\downarrow _{\,A}$ of her measurement, Alice assigns a state function to her own fragment \begin{subequations}\label{jmkdnmgfii}\begin{align} \uparrow _{A}\ &\longrightarrow\ |\uparrow\,\rangle _A &\downarrow _{\,A}\ &\longrightarrow\ |\downarrow\,\rangle _A\ , \end{align} and --- due to application of \eqref{kjsjngnhsfd} --- she assigns at the same time a state function to Bob's fragment: \begin{align} \uparrow _{A}\ &\longrightarrow\ |\downarrow\,\rangle _B &\downarrow _{\,A}\ &\longrightarrow\ |\uparrow\,\rangle _B\label{jmkdnmgfiib} \end{align}\end{subequations} The state assignment \eqref{jmkdnmgfiib} to Bob's fragment just matches Alice's conditional probabilistic assumptions \begin{align} P(\,\uparrow _{B}|\uparrow _{A})&=0&P(\,\downarrow _{\,B}|\uparrow _{A})&=1\notag\\ P(\,\uparrow _{B}|\downarrow _{\,A})&=1&P(\,\downarrow _{\,B}|\downarrow _{\,A})&=0 \end{align} on Bob's result, which again are based on her knowledge of the fact that the primary system has been prepared in a singlet state. To EPR\!\cite{Einstein:EPR}, Alice's state assignment \eqref{jmkdnmgfiib} to Bob's fragment seemed a severe problem. They argued: If Alice can predict the outcome of Bob's measurement with certainty ($P=1$), then there must exist an element of reality (something `out there'), which causes Bob's measurement result. But according to quantum theory, only the overall singlet spin function \eqref{kjsjngnhsfd} exists as long as neither Alice nor Bob has started their measurements, while specific spin states of the fragments do not yet exist. Only when the fragments arrive at the Stern\bz{-}Gerlach magnets --- but not earlier! --- does Nature decide for either $\uparrow _A$ and $\downarrow _{\,B}$, or for $\downarrow _{\,A}$ and $\uparrow _B$\,. Alice and Bob cannot use the correlation of their measurement results to exchange information at superluminal speed. Thus with regard to Alice and Bob, special relativity theory is not violated. But Nature herself has the information on the spin settings of the fragments available over space\bz{-}like distance, with no time delay at all, as proved by the perfect correlation of Alice's and Bob's observations. One is tempted to say that Nature, when setting the spins in the moment of measurement, seems to be exempted from the restrictions of special relativity theory and to act non\bz{-}locally.
Timpson\!\cite[sec.\,2.3]{Timpson:qbism} emphasizes that notions like \al knowledge\ar or \al information\ar have an objective character, as they can be objectively right or wrong. On the other hand, notions like \al assumption\ar or \al belief\ar denote something truly subjective. We probably understand Heisenberg correctly if we assume that he wanted to characterize the subjective element represented by the state function as \al belief\ar of the physicist, but not as \al knowledge\arp , in contrast to the objective element represented by the state function (\mbox{i.\,e.}\ the Aristotelian tendencies).} of the system, which of course are subjective in so far as they may be different for different observers. [\,\dots{}] we may say that the transition from the `possible' to the `actual' takes place as soon as the interaction of the object with the measuring device, and thereby with the rest of the world\footnote{Heisenberg was well aware of the importance of decoherence for the measurement process, long before Zeh and Zurek started to consider the issue in the seventies. If in doubt, read Heisenberg's 1955 article\!\cite{Heisenberg:CopInt}.}, has come into play; it is not connected with the act of registration of the result by the mind of the observer.\footnote{Note that Heisenberg here clearly addresses and removes the issue of \al Wigner's friend\arp , many years before Wigner\!\cite{Wigner:Freund} invented the alleged problem. QBism as well removes the issue, but in a basically different manner: In Heisenberg's point of view, the measurement result comes into being as soon as the instrument (which according to the Copenhagen interpretation must necessarily be described by the methods and notions of classical physics) has registered the result, \mbox{i.\,e.}\ already before the friend reads the result from the display, and a fortiori before he communicates the result to Wigner. In the QBism point of view, Wigner may describe the primary quantum object, the instrument, and the friend as an entangled quantum system, and the measurement result comes into being for Wigner only in the moment when he receives his friend's report.} The discontinuous change in the probability function, however, takes place with the act of registration, because it is the discontinuous change of our knowledge in the instant of registration that has its image in the discontinuous change of the probability function.\ar \end{addmargin} The Aristotelian tendencies are not hard facts, but anyway they are something `out there', not merely assumptions or beliefs of a physicist. When Alice does her measurement, then \al the transition from the `possible' to the `actual' takes place\arp , and the Aristotelian tendencies `out there' are changed from \eqref{kjsjngnhsfd} to \eqref{jmkdnmgfii}. This of course does mean that Alice, due to her measurement, triggers an objective change at Bob's place, with no time retardation. The measurement on the entangled system \eqref{kjsjngnhsfd} clearly has a non\bz{-}local character in Heisenberg's point of view. Bohr, on the other hand, never made (to the best of my knowledge) any ontological statement on the quantum\bz{-}theoretical state function.\footnote{The often quoted sentence \al There is no quantum world. There is only an abstract quantum physical description. It is wrong to think that the task of physics is to find out how nature \emph{is}.
Physics concerns what we can \emph{say} about nature.\arp , reported by Petersen\!\!\cite{Petersen:BohrPhilos}, is strongly misleading if taken out of context. In that article, Petersen explains: \al Traditional philosophy has accustomed us to regard language as something secondary and reality as something primary. Bohr considered this attitude toward the relation between language and reality inappropriate. When one said to him that it must be reality which, so to speak, lies beneath language, and of which language is a picture, he would reply, `We are suspended in language in such a way that we cannot say what is up and what is down. The word reality is also a word, a word which we must learn to use correctly.' Bohr was not puzzled by ontological problems or by questions as to how concepts are related to reality. Such questions seemed sterile to him.\ar Bohr always insisted that the quantum objects and the measuring devices applied for their observation together constitute the individual quantum phenomena. Only in their entirety can quantum phenomena reasonably be discussed by human beings. Bohr never answered yes or no to the question whether (merely imagined but not observed) quantum objects are \al real\ar \emph{without} the (classical) measurement instruments which are needed for their observation. Any answer to that question seemed pointless to Bohr, and a misuse of language.} He neither objected to nor approved of Heisenberg's partial identification of the state function with objective Aristotelian tendencies `out there' (the potentia of Aristotle's philosophy); he simply --- and obviously very deliberately --- was silent about this question. But he always emphasized the wholeness of quantum phenomena. In his reply\!\cite{Bohr:EPR} to EPR, Bohr spoke of phenomena \al where we have to do with a feature of \emph{individuality} completely foreign to classical physics.\ar He emphasized the word individuality by italics, and he left no doubt that he meant this notion literally.\footnote{(Latin) individual\,=\,not divisible} Bohr was convinced that the measuring instruments, which physicists apply for the observation of a quantum object, must be considered an integral, not separable part of the individual quantum phenomenon. In the EPR\bz{-}gedankenexperiment discussed above, the individual quantum phenomenon extends from Alice's Stern\bz{-}Gerlach magnet to Bob's Stern\bz{-}Gerlach magnet. When Alice does a measurement, then the \emph{whole} individual quantum phenomenon is affected. In Bohr's eyes, it would not be correct to say that Alice's measurement affects only her fragment. There only exists the one indivisible system \eqref{kjsjngnhsfd}, onto which Alice's instrument works, and only as an effect of her completed measurement can we reasonably speak of two new quantum systems \eqref{jmkdnmgfii}, which have been created by Alice's measurement out of the primary quantum system \eqref{kjsjngnhsfd}. Thus, while their wordings are different, Heisenberg and Bohr agree on the non\bz{-}local character of quantum phenomena, and thereby offer an explanation for the EPR\bz{-}correlations. This explanation may not please everybody, but at least it is an explanation, and it is a clear (affirmative) answer to the question of quantum nonlocality. To Einstein, Podolsky, and Rosen, on the other hand, the assumption of nonlocality seemed unacceptable.
From the perfect correlations, they concluded that the fragments actually must have well\bz{-}defined spin states $\uparrow $ or $\downarrow $ along the $z$\bz{-}axis all the time, independent of any measurement. They deemed quantum theory incomplete, \mbox{i.\,e.}\ they assumed that \eqref{kjsjngnhsfd}, while being correct, must be completed by insertion of `hidden variables', which reflect the spin states of the fragments, and determine the results of Alice's and Bob's measurements. We do not need to accept this explanation, but at least it is an explanation, and it is a clear (negative) answer to the question of quantum nonlocality. QBism, however, works around the question of nonlocality by means of a quite strange and radical measure: This interpretation postulates that the one and only purpose of quantum theory is to help an agent to organize and optimize his\bz{/}her personal beliefs regarding his\bz{/}her future experiences, which of course will happen at his\bz{/}her future place, but not somewhere else. Fuchs et\,al.\!\cite{Fuchs:qbismlocality} explain: \begin{addmargin}{1.5em} \al QBist quantum mechanics is local because its entire purpose is to enable any single agent to organize her own degrees of belief about the contents of her own personal experience. No agent can move faster than light: the space\bz{-}time trajectory of any agent is necessarily timelike. [\dots ] Quantum mechanics, in the QBist interpretation, cannot assign correlations, spooky or otherwise, to space\bz{-}like separated events, since they cannot be experienced by any single agent. Quantum mechanics is thus explicitly local in the QBist interpretation. And that’s all there is to it.\ar \end{addmargin} Now, EPR\bz{-}correlations exist not only in gedanken\bz{-}experiments. They have been experimentally confirmed; see for example \cite{Hensen:Belltest,Giustina:Belltest,Shalm:Belltest}. EPR raised a reasonable question when they asked how Nature brings about the perfect correlations: whether Nature really acts non\bz{-}locally over space\bz{-}like distances, or whether there actually exist hidden variables, or whether there is any further possible explanation. No interpretation of quantum theory is obliged to offer an explanation for the experimentally confirmed correlations. It's perfectly fine if \al no comment!\ar is QBism's only answer to the question of nonlocality. But it is not sensible to say that with QBism \al the issue of nonlocality simply does not arise\arp\!\cite{Fuchs:qbismlocality}. It \emph{does} arise, independent of any interpretation of quantum theory, due to hard experimental facts. \section{Explanatory Power}\label{sec:explpower} The QBist point of view, that Alice's assignment of state functions \eqref{jmkdnmgfiib} exists only for Alice at her place, but does not affect anything at Bob's place, merely says what does \emph{not} cause the perfect correlation. When Alice uses quantum theory to assign the state function \eqref{jmkdnmgfiib} to Bob's fragment, she makes implicit use of the non\bz{-}local character of quantum theory. Of course she may shrug her shoulders and never ask what is behind the baffling power of quantum theory to supply her locally with beliefs regarding the results of space\bz{-}like distant measurements, which mysteriously turn out true with no exception and help her to win every bet.
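As an aside, the impossibility of superluminal signalling discussed in section\:\ref{sec:nonlocality} can be made explicit by a standard textbook computation; the reduced density operator $\hat{\rho} _B$ of Bob's fragment and the partial trace over Alice's fragment used for it are introduced only for this remark and play no role in the rest of this article. Tracing the projector onto the state function \eqref{kjsjngnhsfd} over Alice's fragment gives
\begin{align}
\hat{\rho} _B\;=\;\frac{1}{2}\,\Big(\,|\uparrow\,\rangle _B\langle\,\uparrow\,| \,+\, |\downarrow\,\rangle _B\langle\,\downarrow\,|\,\Big)\ ,\notag
\end{align}
and exactly the same $\hat{\rho} _B$ results when Alice's (to Bob unknown) measurement result is averaged over according to \eqref{jmkdnmgfiib}, each outcome occurring with probability $1/2$. Bob's local statistics are therefore unchanged by whatever Alice does; only the comparison of both measurement records reveals the perfect correlation.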
Paraphrasing Caves, Fuchs, and Schack\!\cite[sec.\,VI,\,par.\,3--5]{Fuchs:qbism2}, this is the statement of QBism: \begin{addmargin}{1.5em} Doesn’t the perfect correlation demand an explanation independent of Alice’s belief? Alice has put together all her experience, prior beliefs, previous measurement outcomes, her knowledge of physics and in particular quantum theory, all to predict the perfect correlation of her results with Bob's. Why would she want any further explanation? What could be added to her belief of certainty? She has consulted the world \emph{in every way she can} to reach this belief; the world offers no further stamp of approval for her belief beyond all the factors that she has already considered. \end{addmargin} These sentences indicate a misunderstanding. Asking \al Why would she want any further explanation?\arp , Caves, Fuchs, and Schack deny that QBism actually has deprived Alice of \emph{any} explanation for the perfect correlations. The belief of certainty and her unrestricted trust in the correctness and reliability of the abstract mathematical machinery of quantum theory cannot replace anything which could rightly be named an \al explanation\arp . All reasonable interpretations of quantum theory (QBism no doubt belongs to this group) are identical with regard to the experimentally verifiable\bz{/}falsifiable consequences which can be derived from them. Thus reasonable interpretations differ not by being right or wrong (they all are right); they merely differ in the metaphysics, \mbox{i.\,e.}\ in the intelligible pictures (positivists would say in the distracting illusions) they are offering to give us a framework in which we can sort our experiences. The coherence of this metaphysical framework can then give us the \emph{feeling of understanding} the phenomena, which is much more than merely being able to compute and correctly predict future measurement results, and thereby sorting and optimizing one's personal beliefs regarding one's future personal experiences. Timpson\!\cite[sec.\,4.2]{Timpson:qbism} points out a general \al explanatory deficit problem\ar of QBism, and presents, besides many other examples, the question why some solids are good electrical conductors while others are insulators. Independent of any interpretation, we can apply the mathematical machinery of quantum theory, and find out whether the Fermi surface lies within a partially filled conduction band (then this solid is a conductor) or between two bands and far away from both (then the solid is an insulator). If we follow the QBism interpretation, then the mathematical result helps us to update our personal beliefs and informs us which bets we should accept and which bets we had better reject, `and that's all there is to it.' With other interpretations, which consider the state function as representing something `out there', we get a much richer picture: We see electrons, which can (or cannot), due to an externally applied voltage and\bz{/}or interactions with phonons, be excited into a free energy level and then move almost unimpeded through the solid. No such pictures exist with QBism, because in that interpretation the state function does not represent anything `out there'. This deficit of plausible pictures of course does not affect the predictive power with regard to future measurement results.
Thus it is a matter of taste whether we, following Timpson\!\cite{Timpson:qbism}, complain of QBism's lack of explanatory power or whether we rather appreciate that QBism spares us much of the metaphysical ballast of other interpretations. Note by the way that the question whether some field like the $\psi $\bz{-}field of quantum theory should be reified and considered to represent something `out there' does not only turn up in the interpretation of quantum theory. Consider for example Maxwell's electromagnetic field. Does it \al exist\arp ? It is created by the dynamics of charged particles, and the one and only method to observe it is through the observation of charged test particles. Thus the electromagnetic field may rightly be considered to be nothing but an abstract mathematical formalism, which helps us to organize our beliefs regarding the question how the dynamics of some charged particles will impact the dynamics of some other charged particles. Still the mental picture of an electromagnetic field really existing `out there' is of such convincing explanatory power, and such a valuable help to develop our intuition about electromagnetic interactions, that many of us would strongly hesitate to give it up. E.\,T.\,Jaynes\!\cite{Jaynes:omelette} once described the formalism of quantum theory as \begin{addmargin}{1.5em} \al a peculiar mixture describing in part realities of Nature, in part incomplete human information about Nature --- all scrambled up by Heisenberg and Bohr into an omelette that nobody has seen how to unscramble. Yet we think that the unscrambling is a prerequisite for any further advance in basic physical theory. For, if we cannot separate the subjective and objective aspects of the formalism, we cannot know what we are talking about\arp .\end{addmargin} Caves, Fuchs, and Schack may rightly claim that they unscrambled the egg, at least with regard to the interpretation of the state function. They purged it of all objective content (Heisenberg's Aristotelian tendencies out there, Bohr's objective individuality of quantum phenomena), and kept nothing but the subjective beliefs of an agent. But this success comes at a high price: At the same time, they discarded a large part of the explanatory metaphysical power of the Copenhagen interpretation, without replacing it by anything better. What is so bad about the inseparable mixture of objective facts and subjective beliefs in quantum theory? We acquired our cognitive capabilities during millions of years of evolution. Hence it is no surprise that these capabilities don't fit a type of phenomena (\mbox{i.\,e.}\ quantum phenomena) which human beings first encountered in the twentieth century. The `paradoxes, conundra, and pseudo\bz{-}problems' turning up in the discussion of quantum phenomena are not caused by any interpretation of quantum theory. Instead they are caused by the mismatch between objective facts `out there' and our human type of cognition, which is not prepared to cope with quantum phenomena. We have a theory which perfectly matches our cognitive capabilities: classical physics. A fictitious theory which would perfectly match the objective world `out there' would be completely useless, because no human being would be able to understand it. Quantum theory can only be useful if it spans the gap between the objective world and our cognitive capabilities.
To meet this demand, quantum theory must necessarily combine elements of objective aspects of the world and elements of the human type of thinking, just as a bridge over a river must have bridgeheads on both banks to serve its purpose. And just as no part of the bridge can be assigned to the left or to the right bank of the river, but each part is necessarily tied to both banks, the parts of quantum theory cannot be separated into elements which belong to the objective world and other elements which belong to the human observer. Instead, quantum theory can only meet its purpose if each of its parts reflects both the objective reality `out there' and the cognitive capabilities and type of thinking of the human observer. The inseparability of the subjective and objective aspects of the formalism is not a problem, but a necessary feature of quantum theory. Once we had the mathematical machinery of quantum theory, found by ingenious guessing, we next needed some interpretation, because we felt that a perfectly working mathematical machinery is not sufficient to give us the feeling of true understanding. Bohr and Heisenberg did an excellent job when they worked out the Copenhagen interpretation\!\cite{Heisenberg:CopInt} to reconcile the objective world `out there' with the cognitive capabilities of human brains. Good ideas for further improvement are welcome. QBism, however, simply amputated the objective part from the interpretation of the state function (while keeping, with no modification, the full mathematical machinery with its `scrambled objective and subjective elements'), and left us with a torso of marginal explanatory power, a bridge with only one head on one bank. \section{Accepting/denying an objective paradox}\label{sec:objparadox} In the well\bz{-}known experiment of Tonomura et\,al.\cite{Tonomura:electroninterference}, single electrons go one by one through the biprism with two openings, and still the observation points of the electrons in the detector plane add up after many experimental runs to an interference pattern. The state function, which quantum theory assigns to each single electron, evolves through both openings of the biprism. If the state function is interpreted as representing something objectively existing out there and really moving through \emph{both} openings of the biprism in each single run of the experiment, then we get a mental picture which offers an explanation for the interference pattern. According to the Copenhagen interpretation, the electron may indeed be imagined as a wave moving through both openings. But at the same time the Copenhagen interpretation reminds us that the validity of the wave picture is limited by the complementary picture of a particle, and that it depends on our choice of the particular experimental arrangement whether the wave\bz{-}like or the particle\bz{-}like character of the electron is elicited. Actually, in the experiment of Tonomura et\,al., the electron is elicited as a wave while moving through the biprism, but as a particle when being observed in the detector plane. We thus arrive, in the case of this experiment, not at a consistent classical picture of wave \emph{or} particle, but at the complementary picture of wave \emph{and} particle. If we could have nice classical pictures, then hardly anybody would reject them.
But if we can only have such strange and paradoxical complementary pictures, then some of us prefer to skip those dubious \al explanations\ar completely and to stick to well\bz{-}behaved Bayesian probabilities, while others think that strange explanations are at least better than no explanations. It's a matter of taste. It's \emph{really} nothing but a matter of taste, as either interpretation leads to identical experimentally testable consequences, and to identical expectation values for future measurements. QBism claims \al that it removes the paradoxes [\,\dots ] that have plagued quantum foundations for the past nine decades.\arp\!\cite{Fuchs:qbismlocality} The Greek word paradox means that something is incompatible with the human way of thinking. Is it really an advantage if an interpretation of quantum theory can avoid any paradox? It may \emph{not} be an advantage if there really is an objective mismatch between the reality of quantum phenomena and the cognitive capabilities of human beings, caused by the fact that quantum phenomena were irrelevant during the many millions of years of human evolution. Such a mismatch could be named an \al objective paradox\arp , which cannot be removed, because we can change the contents of our thinking, but we cannot change our way of thinking. If such an objective paradox really exists, shouldn't an interpretation of quantum theory rather acknowledge it, appropriately reflect it, and somehow come to terms with it, instead of trying to shift it aside? \al The Copenhagen interpretation of quantum theory starts from a paradox.\ar is the first sentence of Heisenberg's article\cite{Heisenberg:CopInt}. And some pages later he explains: \begin{addmargin}{1.5em} \al There is no use in discussing what could be done if we were other beings than we are. At this point we have to realize, as von Weizs\"{a}cker has put it, that `Nature is earlier than man, but man is earlier than natural science'. The first part of the sentence justifies classical physics, with its ideal of complete objectivity. The second part tells us why we cannot escape the paradox of quantum theory, namely, the necessity of using the classical concepts.\ar \end{addmargin} The Copenhagen interpretation acknowledges the existence of an objective paradox, and reflects it by the introduction of complementary explanatory pictures. QBism, on the other hand, targets --- and indeed accomplishes! --- an interpretation which is completely free of paradoxes. For this purpose it reduces the interpretation of the state function to representing the subjective assumptions of an agent, but not anything objectively existing `out there'. As an inevitable trade\bz{-}off, the denial of the objective paradox thereby deprives QBism of almost all explanatory power. The nonlocality of Nature, encoded in the objective individuality of quantum phenomena in Bohr's wording, or in objective Aristotelian tendencies in Heisenberg's wording, may seem quite strange to many of us. Indeed, these ideas \emph{really are very strange} if considered in human brains, whose cognitive capabilities have been shaped by millions of years of interaction with an environment which can appropriately be described in terms of classical physics.\footnote{How could Aristotle conceive the objective tendencies, even though he never observed a quantum phenomenon? Isn't this an indication that the objective tendencies actually are not something `out there', but a subjective element of human cognition?
Probably the best answer to this question is to note that the request for a separation of objective and subjective elements is essentially nonsense, both with regard to the mathematical formalism and with regard to the interpretation of physical theories.} But is the wordless shrug of shoulders offered by QBism really better? Well, that's a matter of taste. \end{document}
\begin{document} \title{ 3D-VAR for Parametrized Partial Differential Equations: A Certified Reduced Basis Approach \thanks{This work was supported by the Excellence Initiative of the German federal and state governments and the German Research Foundation through Grant GSC 111.} } \titlerunning{A certified reduced basis approach for 3D-\blue{VAR}} \author{ Nicole Aretz-Nellesen \and Martin A. Grepl \and \\ Karen Veroy } \institute{ Nicole Aretz-Nellesen \at Aachen Institute for Advanced Study in Computational Engineering Science (AICES), RWTH Aachen University, Schinkelstra{\ss}e 2, 52062 Aachen, Germany \\ \email{[email protected]} \and Martin A. Grepl \at Numerical Mathematics (IGPM), RWTH Aachen University, Templergraben 55, 52056 Aachen, Germany \\ \email{[email protected]} \and Karen Veroy \at Aachen Institute for Advanced Study in Computational Engineering Science (AICES) and Faculty of Civil Engineering, RWTH Aachen University, Schinkelstraße 2, 52062 Aachen \\ \email{[email protected]} } \date{Received: date / Accepted: date} \maketitle \begin{abstract} In this paper, we propose a reduced order approach for 3D variational data assimilation governed by parametrized partial differential equations. In contrast to the classical 3D-\blue{VAR} formulation that penalizes the measurement error directly, we present a modified formulation that penalizes the experimentally-observable misfit in the measurement space. Furthermore, we include a model correction term that allows us to obtain an improved state estimate. We begin by discussing the influence of the measurement space on the amplification of noise and prove a necessary and sufficient condition for the identification of a ``good'' measurement space. We then propose a certified reduced basis (RB) method for the estimation of the model correction, the state prediction, the adjoint solution and the observable misfit with respect to the true state for real-time and many-query applications. {\it A posteriori} bounds are proposed for the error in each of these approximations. Finally, we introduce different approaches for the generation of the reduced basis spaces and the stability-based selection of measurement functionals. The 3D-VAR method and the associated certified reduced basis approximation are tested in a parameter and state estimation problem for a steady-state thermal conduction problem with unknown parameters and unknown Neumann boundary conditions. \keywords{variational data assimilation \and 3D-VAR \and model correction \and reduced basis method \and state estimation \and parameter estimation \and {\it a posteriori} error estimation } \end{abstract} \section{Introduction} \label{sec:introduction} In numerical simulations, mathematical models --- such as ordinary or partial differential equations (PDEs) --- are widely used to predict the state or behavior of a physical system. The goal of variational data assimilation is to improve state predictions through the incorporation of measurement data, e.g., experimental observations, into the mathematical model.
\blue{Variational data assimilation is prevalent in meteorology \cite{DT1986,Lorenc1986,Lorenc1981} and oceanography \cite{Bennett,Bennett_2002}, for example in weather forecasting and ocean circulation modeling.} Prominent examples include the 3D- and 4D-\blue{VAR} methods, which weigh deviations from a prior best-knowledge (initial) state against differences with respect to measurement data; see the recent texts~\cite{DataAssimilation,Reich} and references therein for a discussion of variational data assimilation. Whereas 4D-\blue{VAR} considers dynamical systems (i.e. three space dimensions plus time), and usually aims to estimate the (unknown) initial condition of the system, 3D-\blue{VAR} considers the stationary case. In this paper, we propose a certified reduced order approach for a modified 3D-VAR method for parametrized, coercive PDEs. Compared to the classical 3D-\blue{VAR} method, our modified formulation penalizes the experimentally-observable misfit in the measurement space instead of the difference in the measurements. Furthermore, we account for an imperfect model by introducing a model bias in the formulation, similar to the weak-constraint 4D-\blue{VAR} approach~\cite{Tremolet2006}. The proposed method makes data-informed modifications to a best-knowledge background model to generate a compromise between the original model and the observed measurements. The method thereby accounts for the physical integrity of the state prediction and can be used to estimate unknown model properties and parameters along with the state. The 3D-\blue{VAR} problem --- and variational data assimilation in general --- is usually cast as an optimization problem and has very close connections to optimal control theory~\cite{VH2006}. In addition, the optimality condition has a saddle-point structure which can be analyzed using standard functional analysis arguments. Based on these observations we can identify necessary and sufficient properties of the measurement space that \textit{increase} the stability of the 3D-VAR formulation and provide a practical procedure for the generation of a measurement space meeting these conditions. This paper builds upon the model-data weak approach presented in~\cite{YANO2013937}, the parametrized-background data-weak (PBDW) approach to variational data assimilation introduced in~\cite{PBDW,PBDW2}, and the certified reduced-order approach for 4D-\blue{VAR} in~\cite{4DVAR}. The model-data weak formulation in \cite{YANO2013937} finds a state estimate by minimizing the distance between state and observed state while penalizing model corrections. This method indirectly accepts any kind of model modification in the dual space and can thereby account for unpredicted behaviour. In the parametrized-background data-weak (PBDW) approach to variational data assimilation introduced in \cite{PBDW,PBDW2}, a state estimate is obtained by projecting an observation from the measurement space onto a space featuring model properties, such as an RB space that approximates the solution manifold of a parametrized PDE. The PBDW framework has recently been extended with adaptive \cite{Taddei17,2017arXiv171209594M} and localization \cite{Taddei_localization} strategies, and has been recast in a multispace setting \cite{doi:10.1137/15M1025384}. In addition to the discussions in the PBDW literature, the optimal selection of measurements with greedy orthogonal matching pursuit (OMP) algorithms in state estimation has been analyzed recently in~\cite{Olga}. 
We build upon this work in the construction of our measurement space. The connection and distinctions between 3D-\blue{VAR} and the PBDW formulation have already been discussed in~\cite{PBDW,Taddei17}. We also note that the PBDW approach is related to the generalized empirical interpolation method~\cite{Maday2013,Maday_gEIM_2015} and gappy-POD~\cite{WILLCOX2006208}; see~\cite{PBDW,Taddei17} for a thorough discussion. We will show that --- under certain assumptions on the spaces {\it and} in the limit of the regularization parameter going to infinity --- our modified 3D-\blue{VAR} formulation is equivalent to the PBDW formulation. However, there are also decisive differences between the two formulations. After a brief discussion of the mathematical background in Section~\ref{sec:preliminaries}, we present the following main contributions: \begin{itemize} \item We introduce a data-weak reformulation of the classical 3D-VAR method in Section \ref{sec:truth}. We present a detailed analysis of the method's stability with respect to the properties of the original best-knowledge model, the model bias, and the choice of the measurement space. We show that our method is stable in the sense of Hadamard independently of the choice of measurement functionals and uniformly over the parameter domain. Furthermore, we identify a necessary and sufficient property of the measurement space that further increases the stability and restricts the amplification of measurement noise when the 3D-VAR method emphasizes closeness to the data over the best-knowledge model. \item In Section \ref{sec:RB}, we present the RB method for the 3D-\blue{VAR} problem and develop online computationally efficient approximations and {\it a posteriori} error bounds for the state estimate, the adjoint solution, the model correction, and the misfit. This significant reduction in computational complexity makes the 3D-\blue{VAR} method feasible for repeated computations with measurement noise and varying parameters. Furthermore, the RB approximation aids in the efficient selection of the measurements. \item In Section \ref{sec:spaces}, we integrate the theoretical results from both the stability and error analyses to propose an iterative selection of the measurement functionals. More specifically, we propose an algorithm that employs the OMP measurement selection from \cite{Olga} in a greedy manner over the parameter domain to generate the measurement space. The space can be tailored specifically to the stable estimation of parameters and model properties of the 3D-VAR method from measurement data. We also discuss a construction of the reduced basis spaces which does not require the measurements to be known {\it a priori}. \end{itemize} In Section \ref{sec:experiments} we present numerical results for a steady-state thermal conduction problem with uncertain parameters and unknown Neumann boundary condition, and investigate the influence of measurement noise on the estimation of the uncertain parameters and boundary condition. \section{Preliminaries} \label{sec:preliminaries} In this section, we specify the mathematical framework within which we cast the 3D-VAR method in the following section. We introduce the required spaces and forms, and list all our assumptions on them. We start with the state space $\mathcal{Y}$, which we take to be a Hilbert space with inner product $\mySPY{\cdot}{\cdot}$ and induced norm $\mathcal{N}ormY{\cdot}$.
Likewise, we consider a Hilbert space $\mathcal{U}$ for the regularizing model correction, with norm $\mathcal{N}ormU{\cdot}$ induced by the inner product $\mySPU{\cdot}{\cdot}$. In the numerical implementation, these spaces are typically finite-dimensional subspaces --- in the RB literature usually referred to as ``truth'' spaces --- of Sobolev- or $L^2$-spaces, e.g. finite element spaces. For notational convenience, we omit the explicit distinction between the infinite-dimensional and finite-dimensional settings in the following since the results presented hold for both cases (with the appropriate definitions). The RB approximation in section \ref{sec:RB} utilizes closed subspaces $\mathcal{Y}RB \subset \mathcal{Y}$ and $\mathcal{U}RB \subset \mathcal{U}$ that represent the most dominant model dynamics. For the incorporation of measurement data, we additionally consider a nontrivial, finite-dimensional subspace $\mathcal{T} \subset \mathcal{Y}$ as measurement space and let the operator $\myProj{\mathcal{T}} : \mathcal{Y} \rightarrow \mathcal{T}$ denote the orthogonal projection onto this space. \blue{We assume $\dim \mathcal{T} < \infty$, but note that most of our analysis holds for a generic closed subspace $\mathcal{T} \subset \mathcal{Y}$.} \begin{remark}\label{rmk:T} \blue{We} briefly \blue{explain how the space $\mathcal{T}$} can be linked to physical measurements: Suppose we are given $L < \infty$ linearly independent measurement functionals $g_l \in \mathcal{Y}'$, $l \in \{1,...,L\}$. We can then choose the hierarchical space $\mathcal{T}$ as the span of the Riesz representations of the measurement functionals, see \cite{BENNETT1985129,PBDW}, via \begin{align} \label{eq:def:T} \mathcal{T} ~=~\text{span}\{~\tau_l \in \mathcal{Y}:~1 \le l \le L \text{ and } \mySPY{\tau_l}{\tau} = g_l(\tau)~\forall~\tau \in \mathcal{Y}~\}. \end{align} For any state $y \in \mathcal{Y}$, the projection $\myProj{\mathcal{T}}y$ is then the only state in $\mathcal{T}$ that yields the same measurements as $y$, i.e. $g_l(\myProj{\mathcal{T}}y) = g_l(y) \in \mathbb{R}$ for $l=1,...,L$. In this context, we can consider $\myProj{\mathcal{T}}y$ to be the experimentally observed part of $y$. Due to the linear independence of the measurement functionals, any set $\mathbf{m} = (m_l)_{l=1}^L \in \mathbb{R}^L$ is obtained as the measurement data of exactly one state \blue{$y_{\rm{d}}(\mathbf{m})$} in $\mathcal{T}$\blue{, and -- due to the projection theorem -- of all states in $y_{\rm{d}}(\mathbf{m}) + \mathcal{T}^{\perp}$. More specifically, t}wo states in $\mathcal{Y}$ yield the same measurements if and only if their difference lies in $\mathcal{T}^{\perp}$. In practice, the number $L$ of measurements should be kept small to limit experimental expenses. \end{remark} We let $\mathcal{C} \subset \mathbb{R}^d$ be a compact set of all admissible parameters. For this set, we introduce three (possibly) parameter-dependent forms, namely $f_{\rm{bk},\mu} \in \mathcal{Y}'$ and the bilinear forms $\mya : \mathcal{Y} \times \mathcal{Y} \rightarrow \mathbb{R}$ and $\myb : \mathcal{U} \times \mathcal{Y} \rightarrow \mathbb{R}$.
\blue{Throughout this paper, the index $\rm{bk}$ stands for ``best-knowledge," whereas the index $\mu$ signifies the dependence on a parameter $\mu \in \mathcal{C}$.} The first two forms represent the best-knowledge model dynamics \begin{align} \label{eq:bk} \text{find } \myybk \in \mathcal{Y} \text{ such that } \mya(\myybk,\psi) &= f_{\rm{bk},\mu}(\psi) \qquad \forall~\psi \in \mathcal{Y}, \end{align} whereas for $u \in \mathcal{U}$, $\myb(u,\cdot) \in \mathcal{Y}'$ denotes the induced model modification. We make the assumptions that $\mya$ is uniformly coercive over $\mathcal{C}$\footnote{Throughout this paper, we adopt the notational convention that infima and suprema exclude elements with norm 0.}, \begin{align}\label{eq:def:alpha} \exists~\underline{\alpha}_a > 0 \text{ s.t. } \myalpha := \inf _{y \in \mathcal{Y}} \frac{\mya(y,y)}{\mathcal{N}ormY{y}^2} ~\ge~ \underline{\alpha}_a \quad \forall \mu \in \mathcal{C}, \end{align} and that both bilinear forms $\mya$ and $\myb$ are uniformly bounded, i.e., \begin{equation}\label{eq:uniformBound:ab} \begin{aligned} &\exists~\overline{\gamma}_a > 0 \text{ s.t. } 0 ~<~ \mygammaa := \sup_{y \in \mathcal{Y}} \sup_{z \in \mathcal{Y}} \frac{\mya(y,z)}{\mathcal{N}ormY{y} \mathcal{N}ormY{z}} ~\le~ \overline{\gamma}_a ~<~ \infty \quad \forall \mu \in \mathcal{C},\\ &\exists~ \overline{\gamma}_b > 0 \text{ s.t. } 0 ~<~ \mygammab := \sup_{u \in \mathcal{U}}\sup_{y \in \mathcal{Y}} \frac{\myb(u,y)}{\mathcal{N}ormU{u}\mathcal{N}ormY{y}} ~\le~ \overline{\gamma}_b ~<~ \infty \quad \forall \mu \in \mathcal{C}. \end{aligned} \end{equation} The conditions \eqref{eq:def:alpha} and \eqref{eq:uniformBound:ab} guarantee that the best-knowledge model \eqref{eq:bk} and the 3D-VAR formulation introduced in the next section are uniformly stable and that the error of the RB approximation is quasi-optimal with uniformly bounded constants over $\mathcal{C}$. In anticipation of the a posteriori error estimation procedure, we additionally presuppose that we can compute a lower bound $\myalphaLB \le \myalpha$ and an upper bound $\mybUB \ge \mygammab$ at reasonably low cost for all $\mu \in \mathcal{C}$, e.g., through the min-$\theta$-approach or the successive constraint method~\cite{SCM,RB_Patera_2007}. For the efficient computation of the RB solution and a posteriori error bounds in an offline-online procedure, we make the assumption that the $\mu$-dependent forms $\mya$, $\myb$ and $f_{\rm{bk},\mu}$ are affine in functions of the parameter, i.e., \begin{equation} \label{eq:affine} \begin{aligned} \mya~=~ \textstyle\sum\limits _{\vartheta = 1} ^{\Theta_a} \theta^{\vartheta}_a(\mu) a^{\vartheta} \qquad \myb~=~ \sum\limits _{\vartheta = 1} ^{\Theta_b} \theta^{\vartheta}_b(\mu) b^{\vartheta} \qquad f_{\rm{bk},\mu}~=~ \sum\limits_{\vartheta = 1} ^{\Theta_f} \theta^{\vartheta}_f(\mu) f^{\vartheta}_{\rm{bk}}, \end{aligned} \end{equation} where the coefficient functions $\theta_a$, $\theta_b$, $\theta_f : \mathcal{C} \rightarrow \mathbb{R}$ are continuous in the parameter $\mu$, and the bilinear forms $a^{\vartheta}$, $b^{\vartheta} $ as well as the linear forms $f^{\vartheta}_{\rm{bk}}$ are parameter-independent. In the nonaffine case, (generalized) empirical interpolation techniques may be used to construct an affine approximation \cite{Barrault,Grepl_nonaffine,Maday_gEIM_2015,Maday2013}. \section{3D-VAR Formulation} \label{sec:truth} In the following, we introduce the 3D-VAR formulation for a coercive, parameter-dependent PDE. 
The method aims at finding a weighted compromise between a best-knowledge model and measurement data by making a data-informed perturbation of the model. After a reformulation as a saddle-point problem, a stability analysis shows how the choice of measurement functionals influences the amplification of errors in the measurements and the approximation of the best-knowledge source term. This leads to a practical criterion for the selection of suitable measurement functionals. \subsection{Problem Statement} \label{sec:intro3DVAR} The main goal in data assimilation is to estimate the (unknown) state $y_{\rm{true}} \in \mathcal{Y}$ of a physical system. To this end, we suppose that a best-knowledge mathematical model exists in terms of the underlying elliptic PDE \eqref{eq:bk} whose unique solution $\myybk$ provides a first approximation of $y_{\rm{true}}$. However, any model can only provide an approximation to the underlying physics due to simplifications in the derivation of the governing PDE and boundary conditions, or due to approximations and uncertainties in the system geometry or the loading $f_{\rm{bk},\mu}$. Since $\myybk \neq y_{\rm{true}}$ in general, we thus introduce a perturbation of the best-knowledge problem \eqref{eq:bk} in order to allow for a better approximation of the state $y_{\rm{true}}$. Here, we specifically consider model corrections of the form $\myb(u,\cdot) \in \mathcal{Y} '$ with $u \in \mathcal{U}$, leading to the modified problem: \begin{equation}\label{eq:forward} \begin{aligned} &\text{For a given \blue{model modification }}u_0 \in \mathcal{U} \text{ find } y_{\mu} = y_{\mu}(u_0) \in \mathcal{Y} \text{ s.t. }\\ &\mya(y_{\mu},\psi) = f_{\rm{bk},\mu} (\psi) + \myb(u_0,\psi) \quad \forall~\psi \in \mathcal{Y}. \end{aligned} \end{equation} Within this framework, we now aim to construct an improved approximation of $y_{\rm{true}}$ by finding a ``good'' model correction $u \in \mathcal{U}$. To do this, we incorporate knowledge \blue{of the true state $y_{\rm{true}}$ in the form of an approximation $y_{\rm{d}} \in \mathcal{T}$ to $\myProj{\mathcal{T}}y_{\rm{true}}$. In the notation of Remark \ref{rmk:T}, this approximation might be $y_{\rm{d}} = y_{\rm{d}}(\mathbf{m}_{\rm{d}}) \in \mathcal{T}$, resulting from measurement data $\mathbf{m}_{\rm{d}} \in \mathbb{R}^L$, $(\mathbf{m}_{\rm{d}})_l = g_l(y_{\rm{true}}) + \varepsilon_l$ of the true state with noise $\varepsilon_l \in \mathbb{R}$. To approximate $y_{\rm{true}} \in \myProj{\mathcal{T}}y_{\rm{true}} + \mathcal{T}^{\perp} \approx y_{\rm{d}} + \mathcal{T}^{\perp}$, the} model correction $u \in \mathcal{U}$ should thus be chosen such that the solution $y_{\mu}(u)$ of the modified model \eqref{eq:forward} for $u_0 = u$ is close to $\blue{y_{\rm{d}} + \mathcal{T}^{\perp}}$. \blue{By the projection theorem, this corresponds to choosing $u$ such that} the experimentally-observable misfit $d_{\mu}(u) = y_{\rm{d}}-\myProj{\mathcal{T}}y_{\mu}(u)$ between \blue{$y_{\mu}(u)$ and} the data state $y_{\rm{d}}$, measured in the $\mathcal{Y}$-norm, \blue{is} small. At the same time, however, we want the model correction, measured in $\mathcal{N}ormU{u}$, to be small such that the state \blue{$y_{\mu}(u)$} stays close to our best-knowledge model \eqref{eq:bk}. Note that \blue{$y_{\mu}(0) = \myybk$} for $u_0 = 0 \in \mathcal{U}$ in~\eqref{eq:forward}.
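As a brief illustration only (in our own notation, not that of our implementation), a discretized counterpart of the modified problem \eqref{eq:forward} amounts to a single linear solve per model correction; the matrix and vector names below are assumptions made solely for this sketch.
\begin{verbatim}
# Minimal sketch (our own notation): discrete analogue of the modified
# forward problem: A(mu) y = f_bk(mu) + B(mu) u0.
import numpy as np

def forward_state(A_mu, f_bk_mu, B_mu, u0):
    """Return the perturbed state y_mu(u0); u0 = 0 recovers the
    best-knowledge state y_bk(mu)."""
    return np.linalg.solve(A_mu, f_bk_mu + B_mu @ u0)
\end{verbatim}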
The 3D-VAR formulation thus takes the form \begin{equation}\label{prob:truth} \begin{aligned} & \min _{(u,y,d) \in \, \mathcal{U} \times \mathcal{Y} \times \mathcal{T}} \frac{1}{2} \mathcal{N}ormU{u}^2 + \frac{\lambda}{2} \mathcal{N}ormY{d}^2 ~\text{ s.t. }\\ & \left. \begin{array}{rll} \mya(y,\psi) &= f_{\rm{bk},\mu} (\psi) + \myb(u,\psi) &\quad \forall~\psi \in \mathcal{Y},\\ \mySP{y+d}{\tau} &= \mySP{y_{\rm{d}}}{\tau} &\quad \forall ~\tau \in \mathcal{T}, \end{array} \right. \end{aligned} \end{equation} where the regularization parameter $\lambda > 0$ quantitatively expresses the trust in the original model \eqref{eq:bk} over the validity of the measurements: a small factor $\lambda$ prioritizes proximity to the original model, whereas a large $\lambda$ favours closeness to the data state $y_{\rm{d}}$. By specifically considering model corrections and the distance in the measurement space $\mathcal{T}$ (instead of the difference in measurements), we follow the model-data weak approach to variational data assimilation introduced in \cite{YANO2013937}. If we denote the optimal solution by $(\myu,\myy,\myd) \in \mathcal{U} \times \mathcal{Y} \times \mathcal{T}$, then $\myy = y_{\mu}(\myu)$ is the solution of the perturbed model \eqref{eq:forward} with model correction $u_0 = \myu$, and $\myd = y_{\rm{d}}-\myProj{\mathcal{T}}\myy$ is the experimentally-observable misfit between the data state $y_{\rm{d}}$ and the 3D-VAR solution state $\myy$. The minimization over $\mathcal{N}ormU{u}$ in the cost function enforces that the model correction $\myu$ contains only components that actively decrease the misfit, i.e. $\myu$ is perpendicular to the closed subspace \begin{align} \label{eq:def:U0} \mathcal{U}_0(\mu) := \{~ u \in \mathcal{U}:~ \myProj{\mathcal{T}}y_{\mu} = 0 \text{ where } \mya(y_{\mu},\psi) ~=~ \myb(u,\psi)~\forall \psi \in \mathcal{Y}~\}. \end{align} If the measurements can be reproduced in the modified model \eqref{eq:forward}, that is if $y_{\rm{d}} = \myProj{\mathcal{T}}y_{\mu}(u_{\rm{true}})$ for a solution $y_{\mu}(u_{\rm{true}})$ of \eqref{eq:forward} with $u_0 = u_{\rm{true}} \in \mathcal{U}$, then $\frac{1}{2}\mathcal{N}ormU{u_{\rm{true}}}^2$ is a natural, $\lambda$-independent upper bound for the cost function in \eqref{prob:truth}. Therefore, $\mathcal{N}ormY{\myd} \in \mathcal{O}(\lambda^{-1/2})$, and $\mathcal{N}ormU{\myu} \le \mathcal{N}ormU{u_{\rm{true}}}$ is bounded independently of $\lambda$. \begin{remark} Formally, the 3D-VAR problem \eqref{prob:truth} is a Tikhonov regularisation \begin{align}\label{eq:Tikhonov} \min _{u \in \mathcal{U}}\frac{1}{\lambda} \mathcal{N}ormU{u}^2 + \mathcal{N}ormY{Q_{\mu}u-y_{\rm{d}} +\myProj{\mathcal{T}}\myybk} ^2 \end{align} with regularisation parameter $1/\lambda$ and bounded linear operator $Q_{\mu} : \mathcal{U} \rightarrow \mathcal{T}$, $Q_{\mu}u_0 := \myProj{\mathcal{T}}y_{\mu}(u_0)$, where $y_{\mu}(u_0) \in \mathcal{Y}$ is the unique solution of \eqref{eq:forward} with source term $f_{\rm{bk},\mu} = 0$. With standard theory for Tikhonov regularisations (see e.g.~\cite{Tikhonov_1995}) there exists a unique solution $\myu = \myu(\lambda,y_{\rm{d}}) \in \mathcal{U}$ to \eqref{eq:Tikhonov} and hence to the 3D-VAR problem \eqref{prob:truth} for all data states $y_{\rm{d}} \in \mathcal{T}$ and all $\lambda > 0$. In particular, the solution depends continuously on the data with $\mathcal{N}ormU{\myu(\lambda,y_{\rm{d}})} \le \sqrt{\lambda} \mathcal{N}ormY{y_{\rm{d}} - \myProj{\mathcal{T}}\myybk}$.
Thus, the 3D-VAR problem \eqref{prob:truth} is well-posed. \end{remark} \begin{remark} \label{rmk:convergence} \blue{Since $\text{range}(Q_{\mu}) \subset \mathcal{T}$ and $\dim \mathcal{T} < \infty$, we can exploit} the connection between Tikhonov regularisations and generalized inverses~\cite{Groetsch_1977} \blue{to} obtain that for $\lambda \rightarrow \infty$ the solution $\myu(\lambda,y_{\rm{d}})$ converges to the generalized inverse $Q_{\mu}^+ (y_{\rm{d}}- \myProj{\mathcal{T}}\myybk)$\blue{, which is the unique solution to \begin{align*} \min _{u \in \mathcal{U}} \mathcal{N}ormU{u} \quad \text{s.t. } \quad u \in \text{arg} \min _{v \in \mathcal{U}} \mathcal{N}ormY{Q_{\mu}v-y_{\rm{d}}+ \myProj{\mathcal{T}}\myybk}. \end{align*} If the true state $y_{\rm{true}}$ can be completely described by \eqref{eq:forward} for a model modification $u_0 = u_{\rm{true}}$, and if in addition $y_{\rm{d}} = \myProj{\mathcal{T}}y_{\rm{true}}$ is an unbiased observation (e.g. from noise-free measurements), then $\mathcal{N}ormY{Q_{\mu}u-y_{\rm{d}}+ \myProj{\mathcal{T}}\myybk} = 0$ if and only if $u \in u_{\rm{true}} + \mathcal{U}_0(\mu)$. The 3D-VAR model correction $\myu$ then converges to $\myProj{\mathcal{U}_0 ^{\perp}(\mu)} u_{\rm{true}}$. } \end{remark} \subsection{Saddle-Point Formulation} \label{sec:tSaddlePoint} We next recast the 3D-VAR minimization \eqref{prob:truth} as a saddle-point problem. Although we already know that the minimization is well-posed as a Tikhonov regularisation for any fixed parameter $\mu$, the saddle-point structure has several benefits: First, the analytic properties known for saddle-point problems --- such as Brezzi's theorem --- allow us to quantify the structural influences of the original best-knowledge model \eqref{eq:bk}, the model perturbation $\myb$, and the measurement space $\mathcal{T}$ on the amplification of errors in $y_{\rm{d}}$ and $f_{\rm{bk},\mu}$. Second, the analysis shows that the 3D-VAR problem is uniformly well-posed over the parameter domain, enabling the use of the method in parameter estimation. Third, in the discretized setting we obtain a linear system with saddle-point structure for which various numerical methods exist. We note that for the solution $(\myu,\myy,\myd) \in \mathcal{U} \times \mathcal{Y} \times \mathcal{T}$ of \eqref{prob:truth}, the experimentally-observable misfit, $\myd = y_{\rm{d}}-\myProj{\mathcal{T}}\myy$, between the 3D-VAR solution $\myy$ and the observed state $y_{\rm{d}}$ can be computed {\it a posteriori} from $\myu$ and $\myy$. We therefore omit the explicit dependence on $\myd$ in the saddle-point formulation, which results not only in a smaller system but also in improved stability constants. For an approach which explicitly includes $\myd$, we refer to \cite{Masterarbeit_Nicole}. We next define the Hilbert space $\mathcal{H} := \mathcal{U} \times \mathcal{Y}$ with inner product $\mySP[\mathcal{H}]{(u,y)}{(\phi,\psi)} := \mySPU{u}{\phi} + \mySPY{y}{\psi}$ and induced norm $\mathcal{N}orm[\mathcal{H}]{\cdot} := \sqrt{\mySP[\mathcal{H}]{\cdot}{\cdot}}$.
Furthermore, we define the bilinear and linear forms \begin{equation}\label{eq:defABG} \begin{aligned} A : \mathcal{H} \times \mathcal{H} \rightarrow \mathbb{R}: && A((u,y),(\phi,\psi)) & := \mySPU{u}{\phi} + \lambda \mySPY{\myProj{\mathcal{T}}y}{\myProj{\mathcal{T}}\psi} \\ \myB : \mathcal{H} \times \myY \rightarrow \mathbb{R}: && \myB((u,y),\psi) & := \mya(y,\psi) - \myb(u,\psi) \\ F : \mathcal{H} \rightarrow \mathbb{R}: && F ((\phi,\psi)) &:= \lambda\mySPY{y_{\rm{d}}}{\psi} \\ \myG : \myY \rightarrow \mathbb{R}: && \myG (\psi) &:= f_{\rm{bk},\mu}(\psi).\\ \end{aligned} \end{equation} Note that $F$, $\myG$ are linear with $\mathcal{N}orm[\mathcal{H}']{F} = \lambda\mathcal{N}ormY{y_{\rm{d}}}$ and $\mathcal{N}orm[\myY']{\myG} = \mathcal{N}orm[\mathcal{Y}']{f_{\rm{bk},\mu}}$. Furthermore, $A$ is symmetric and positive semi-definite, and both $A$ and $\myB$ are bilinear with bounded continuity constants \begin{equation}\label{eq:def:gammaAB} \begin{aligned} \gamma_{\rm{A}} &~:=~ \sup_{h \in \mathcal{H}} \sup_{k \in \mathcal{H}} \frac{A(h,k)}{\mathcal{N}orm[\mathcal{H}]{h} \mathcal{N}orm[\mathcal{H}]{k}} ~=~ \max\{1,\lambda\} \\ \mygammaB &~:=~ \sup_{h \in \mathcal{H}} \sup_{y \in \myY} \frac{\myB(h,y)}{\mathcal{N}orm[\mathcal{H}]{h} \mathcal{N}ormY{y}} ~\le~ \sqrt{\mygammaa^2 + \mygammab^2}. \\ \end{aligned} \end{equation} With the definitions in \eqref{eq:defABG}, the 3D-VAR minimization \eqref{prob:truth} is equivalent to \begin{align*} \min _{h \in \mathcal{H}} A(h,h) - F(h) \text{ such that } \myB(h,\psi) = \myG(\psi) \text{ for all } \psi \in \mathcal{Y}. \end{align*} Employing a Lagrangian approach, we obtain the associated necessary, and in our setting sufficient, first-order optimality conditions~\cite{Hinze_book_optimizationPDE,Reyes_book}: Given $\mu \in \mathcal{C}$ find $(h^*eins,h^*zwei)=((\myu,\myy),h^*zwei) \in \mathcal{H} \times \myY$ such that \begin{equation}\label{prob:truth:sp} \begin{aligned} A(h^*eins,k) + \myB(k,h^*zwei) &= F(k) && \forall~k \in \mathcal{H}, \\ \myB(h^*eins,\psi) &= \myG(\psi) && \forall~\psi \in \mathcal{Y}. \end{aligned} \end{equation} \subsection{Stability Analysis} \label{sec:tStability} The {\it a priori} stability of the 3D-Var formulation is closely linked to the choice of the measurement space $\mathcal{T}$. The following stability analysis thus provides a criterion for choosing the measurement space $\mathcal{T}$ in section \ref{sec:greedyOMP}. Furthermore, it allows us to link the properties of the modified model \eqref{eq:forward} and the choice of the measurement space $\mathcal{T}$ to the amplification of noise in the 3D-Var solution. We start with the Ladyzhenskaya-Babu\v{s}ka-Brezzi (LBB) condition\blue{, for which we prove the lower bound $\beta_B^{\rm{LB}}(\mu) := \myalpha$. In the sequel, the superscripts LB and UB shall refer to lower and upper bounds, respectively.} \begin{theorem} \label{thm:tInfSup} \begin{align*} \mybeta ~:=~ \inf \limits_{y \in \mathcal{Y} } \sup \limits_{h \in \mathcal{H} } \frac{\myB(h,y)}{\mathcal{N}orm[\mathcal{H}]{h}\mathcal{N}ormY{y}} ~\ge~ \beta_B^{\rm{LB}}(\mu) ~:=~\myalpha ~\ge~ \underline{\alpha}_a ~>~ 0. 
\end{align*} \end{theorem} \begin{proof} For any $y \in \mathcal{Y}\setminus \{0\}$, we obtain \begin{align*} \sup \limits_{h \in \mathcal{H} } \frac{\myB(h,y)}{\mathcal{N}orm[\mathcal{H}]{h}\mathcal{N}ormY{y}} = \sup \limits_{(\phi,\psi) \in \mathcal{H} } \frac{\mya(\psi,y)-\myb(\phi,y)}{\mathcal{N}orm[\mathcal{H}]{(\phi,\psi)}\mathcal{N}ormY{y}} \ge \frac{\mya(y,y)}{\mathcal{N}ormY{y}^2} \ge \myalpha, \end{align*} where we have chosen $(\phi,\psi) = (0,y)$ and inserted the coercivity condition \eqref{eq:def:alpha}. \qed \end{proof} We now turn to verifying the coercivity of $A$ on the nullspace of $\myB$, given by \begin{equation}\label{def:HeinsNull} \begin{aligned} \mathcal{H}^0(\mu) := ~&\{~h \in \mathcal{H}:~\myB(h,\psi) = 0 \quad \forall~\psi \in \mathcal{Y}~\} \\ = ~&\{~(u,y) \in \mathcal{U} \times \mathcal{Y}:~\mya(y,\psi) = \myb(u,\psi) \quad \forall~ \psi \in \mathcal{Y} ~\} \subset \mathcal{H}. \end{aligned} \end{equation} Note that $\mathcal{H}^0(\mu)$ is a closed subspace of $\mathcal{H}$. For a concise notation, we introduce \begin{equation}\label{eq:def:Ymu} \begin{aligned} \mathcal{Y}mu := ~&\{~y \in \mathcal{Y}:~\exists u \in \mathcal{U} \text{ s.t. } (u,y) \in \mathcal{H}^0(\mu)~\} \\ = ~&\{~y \in \mathcal{Y}:~\exists u \in \mathcal{U} \text{ s.t. } \mya(y,\psi) = \myb(u,\psi) \quad \forall~ \psi \in \mathcal{Y}~\} \end{aligned} \end{equation} as the space of all states in $\mathcal{H}^0(\mu)$. Note that $\mathcal{Y}mu \neq \{0\}$ since $\mygammab > 0$ by \eqref{eq:uniformBound:ab}. With $A((0,\psi),(0,\psi))= 0$ for all $\psi \in \mathcal{T} ^{\perp}$, $A$ is in general not $\mathcal{H}$-coercive. For the application of Brezzi's Theorem \cite{brezzi}, it is, however, already sufficient for $A$ to be $\mathcal{H}^0(\mu)$-coercive. This property follows from the coercivity of $\mya$ as shown in the following theorem. \begin{theorem}\label{thm:tCoercivity} For $\mu \in \mathcal{C}$, let $\mydelta$ be the coercivity constant of $A$ on $\mathcal{H}^0(\mu)$, i.e. \begin{align} \label{eq:def:delta} \mydelta ~:=~ \inf _{h \in \mathcal{H}^0(\mu)} \frac{A(h,h)}{~\mathcal{N}orm[\mathcal{H}]{h}^2}. \end{align} Define the ratios \begin{align}\label{eq:def:eta} e_{q}ainf := \inf _{(u,y) \in \mathcal{H}^0(\mu)} \frac{\mathcal{N}ormY{y}}{\mathcal{N}ormU{u}} \ge 0 \quad \text{and} \quad e_{q}asup := \sup _{(u,y) \in \mathcal{H}^0(\mu)} \frac{\mathcal{N}ormY{y}}{\mathcal{N}ormU{u}} > 0, \end{align} and the inf-sup constant \begin{align} \label{eq:def:kappa} \mykappa := \inf _{y \in \mathcal{Y}mu}~ \sup _{\tau \in \mathcal{T}}~ \frac{\mySPY{y}{\tau}}{\mathcal{N}ormY{y}\mathcal{N}ormY{\tau}} = \inf _{y \in \mathcal{Y}mu} \frac{\mathcal{N}ormY{\myProj{\mathcal{T}}y}}{\mathcal{N}ormY{y}}~\ge~ 0. \end{align} Then \begin{equation}\label{eq:def:deltaLB} \alpha^{\rm{LB}}_{A}(\lambda,e_{q}ainf, e_{q}asup, \mykappa) := \left\{ \begin{array}{ll} \frac{1 + \lambda \mykappa^2 e_{q}asup^2 }{1 + e_{q}asup^2}, & \qquad \text{if } \lambda \mykappa^2 \le 1 \\ \frac{1 + \lambda \mykappa^2 e_{q}ainf^2 }{1 + e_{q}ainf^2}, & \qquad \text{otherwise} \end{array} \right. \end{equation} is a positive lower bound to the $\mathcal{H}^0(\mu)$-coercivity constant of $A$, i.e. \begin{align}\label{eq:bound:deltaLB} \mydelta ~\ge~ \alpha^{\rm{LB}}_{A}(\lambda, e_{q}ainf, e_{q}asup, \mykappa) ~\ge~ \frac{1}{1+e_{q}asup^2} ~>~ 0. 
\end{align} \end{theorem} \begin{proof} Since $\mya$ is coercive and $\mygammab > 0$ by assumption \eqref{eq:uniformBound:ab}, there exists at least one element $(u,y) \in \mathcal{H}^0(\mu)$ with $y \neq 0$; hence $e_{q}asup > 0$. Let $0 \neq h = (u,y) \in \mathcal{H}^0(\mu)$ be arbitrary. We note that $y \in \mathcal{Y}mu$. With the definitions \eqref{eq:def:eta} and \eqref{eq:def:kappa}, we obtain $e_{q}ainf \mathcal{N}ormU{u} \le \mathcal{N}ormY{y} \le e_{q}asup \mathcal{N}ormU{u}$ and $\mathcal{N}ormY{\myProj{\mathcal{T}}y} \ge \mykappa \mathcal{N}ormY{y}$, respectively. If $\lambda \mykappa^2 \le 1$, then \begin{align*} A(h,h) &= \mathcal{N}ormU{u}^2 + \lambda \mathcal{N}ormY{\myProj{\mathcal{T}}y}^2 \\ &\ge \mathcal{N}ormU{u}^2 + \lambda \mykappa^2 \mathcal{N}ormY{y}^2 \\ &\ge (1-xe_{q}asup^2) ~ \mathcal{N}ormU{u}^2 + (\lambda \mykappa^2 + x)~ \mathcal{N}ormY{y}^2 \\ &\ge \min \{1-xe_{q}asup^2,~ \lambda \mykappa^2+x\} ~ \mathcal{N}orm[\mathcal{H}]{h}^2 \end{align*} for arbitrary $x \in [0,e_{q}asup^{-2}]$. The minimum is maximal when both terms are equal, which is the case for $x = \frac{1 - \lambda \mykappa^2}{1 + e_{q}asup^2} < e_{q}asup^{-2}$. The minimum then takes the value $\alpha^{\rm{LB}}_{A}(\lambda,e_{q}ainf, e_{q}asup, \mykappa)$. Now suppose $\lambda \mykappa^2 > 1$. If $y = 0$, then $e_{q}ainf = 0$ and $A(h,h) =\mathcal{N}ormU{u}^2 = \mathcal{N}orm[\mathcal{H}]{h}^2$. For $y \neq 0$, we obtain for any $x \in [0, \lambda \mykappa^2]$: \begin{align*} A(h,h) &= \mathcal{N}ormU{u}^2 + \lambda \mathcal{N}ormY{\myProj{\mathcal{T}}y}^2 \\ &\ge \mathcal{N}ormU{u}^2 + \lambda \mykappa^2 \mathcal{N}ormY{y}^2 \\ &\ge (1+xe_{q}ainf^2) ~ \mathcal{N}ormU{u}^2 + (\lambda \mykappa^2 -x )~ \mathcal{N}ormY{y}^2 \\ &\ge \min \{1+xe_{q}ainf^2,~ \lambda \mykappa^2 - x\} \mathcal{N}orm[\mathcal{H}]{h}^2. \end{align*} With $x = \frac{\lambda \mykappa^2-1}{1 + e_{q}ainf^2}$ we obtain the second part of $\alpha^{\rm{LB}}_{A}$. \qed \end{proof} We briefly comment on the meaning of some of the constants. While the inf-sup constant $\mykappa$ relates to the experimental design, the ratios $e_{q}ainf$ and $e_{q}asup$ relate to the modelling process. The former depends on the choice of measurements, i.e., the measurement space $\mathcal{T}$, and reflects how well model modifications can be distinguished based on changes in $\mathcal{T}$. The latter ratios $e_{q}ainf$ and $e_{q}asup$ reflect how much change in the state can minimally and maximally be evoked by a model modification in $\mathcal{U}$, respectively. As already mentioned in the proof of Theorem \ref{thm:tCoercivity}, $e_{q}asup > 0$ due to $\mygammab > 0$. Note that if $e_{q}asup = 0$ were to hold, $\mathcal{U}$ would have no effect on the best-knowledge model \eqref{eq:bk}. The ratio $e_{q}ainf$ equals zero if there exists a superfluous search direction $u \in \mathcal{U}$ whose physical influence $\myb(u,\cdot)$ is not sufficiently captured by $\mathcal{Y}$. To express this relation, we define, for nontrivial spaces $\mathcal{V} \subset \mathcal{U}$, $\mathcal{W} \subset \mathcal{Y}$, the inf-sup constant \begin{align}\label{eq:def:infsup:b} \beta_b(\mathcal{V},\mathcal{W}) := \inf _{u \in \mathcal{V}} \sup _{w \in \mathcal{W}} \frac{\myb(u,w)}{\mathcal{N}ormU{u} \mathcal{N}ormY{w}} \ge 0. \end{align} We can then bound $e_{q}ainf$ by $\frac{\beta_b(\mathcal{U},\mathcal{Y}mu)}{\mygammaa} \le e_{q}ainf \le \frac{\beta_b(\mathcal{U},\mathcal{Y}mu)}{\myalpha}$.
The inf-sup constant $\beta_b(\mathcal{U},\mathcal{Y}mu)$ thus provides information on the quality of the model modifications for any parameter $\mu \in \mathcal{C}$ and may prove useful in the design of possible model modifications. In order to quantify the influence of the noise on the 3D-VAR solution, we first present a result concerning the behavior of $\mydelta$. \blue{ To describe asymptotic behavior in $\lambda$, we make use of the Landau symbol $\Theta$ (see, e.g., \cite{cormen2009introduction}): A function $f$ lies in the class $\Theta(g)$ if $g$ is an asymptotically tight bound, i.e. there exist $c_1, c_2 > 0$ and a $\lambda_0 \ge 0$ such that $0 \le c_1 g(\lambda) \le f(\lambda) \le c_2 g(\lambda)$ for all $\lambda \ge \lambda_0$. } \begin{theorem} \label{thm:equiv} For $\mu \in \mathcal{C}$, either $\mydelta \le 1$ for all $\lambda > 0$ or $\mydelta \in \Theta(\lambda)$ \blue{with $\lambda_0 > 0$.} The latter is equivalent to both $e_{q}ainf > 0$ and $\mykappa > 0$ being satisfied. \end{theorem} \begin{proof} By Theorem \ref{thm:tCoercivity} and \eqref{eq:def:gammaAB}, \begin{align} \label{eq:proofThmEquiv:1} 0 < \frac{1}{1+e_{q}asup^2} \le \alpha^{\rm{LB}}_{A}(\lambda,e_{q}ainf, e_{q}asup, \mykappa) \le \mydelta \le \gamma_{\rm{A}} = \max\{1,\lambda\}, \end{align} where $\alpha^{\rm{LB}}_{A}(\lambda,e_{q}ainf, e_{q}asup, \mykappa) \in \Theta(\lambda)$ if and only if $e_{q}ainf > 0$ and $\mykappa > 0$. It is hence sufficient to show that $\mydelta \le 1$ for all $\lambda > 0$ when $e_{q}ainf = 0$ or $\mykappa = 0$. \blue{We note that $\lambda_0 > 0$ stems from the upper bound $\mydelta \le \max\{1,\lambda\}$ in \eqref{eq:proofThmEquiv:1}.} We suppose first that $e_{q}ainf = 0$. Let $\lambda > 0$ be fixed. With \eqref{eq:def:eta}, there exists a sequence $h_n = (u_n,y_n) \in \mathcal{H}^0(\mu)$, $n \in \mathbb{N}$, such that $\mathcal{N}ormU{u_n} = 1$ and $\mathcal{N}ormY{y_n} \rightarrow 0$ for $n \rightarrow \infty$. Then $\mydelta \le \frac{A(h_n,h_n)}{~\mathcal{N}orm[\mathcal{H}]{h_n}^2} \le \frac{1 + \lambda \mathcal{N}ormY{y_n}^2}{1 + \mathcal{N}ormY{y_n}^2} \rightarrow 1$ for $n \rightarrow \infty$. If $\mykappa = 0$, there exists $h_n = (u_n,y_n) \in \mathcal{H}^0(\mu)$, $n \in \mathbb{N}$, with $\mathcal{N}ormY{y_n} = 1$ and, for $n \rightarrow \infty$, $\mathcal{N}ormY{\myProj{\mathcal{T}}{y_n}} \rightarrow 0$. In particular $\mathcal{N}ormU{u_n}>0$ for all $n \in \mathbb{N}$. Then \begin{align*} \mydelta \le \frac{A(h_n,h_n)}{\mathcal{N}orm[\mathcal{H}]{h_n}^2} &= \frac{\mathcal{N}ormU{u_n}^2}{\mathcal{N}ormU{u_n}^2 + \mathcal{N}ormY{y_n}^2} + \lambda \frac{\mathcal{N}ormY{\myProj{\mathcal{T}}y_n}^2}{\mathcal{N}ormU{u_n}^2 + \mathcal{N}ormY{y_n}^2}, \end{align*} where the first term is bounded by $(1+e_{q}ainf^2)^{-1} \le 1$ and the second converges to $0$ for $n \rightarrow \infty$ and any fixed $\lambda$. \qed \end{proof} \blue{Theorem \ref{thm:equiv} shows that the lower bound $\alpha^{\rm{LB}}_{A}(\lambda,e_{q}ainf, e_{q}asup, \mykappa)$ under-predicts the true $\mathcal{H}^0(\mu)$-coercivity constant $\mydelta$ only by a bounded multiplicative factor, i.e. $\mydelta \in \Theta(\alpha^{\rm{LB}}_{A}(\lambda,e_{q}ainf, e_{q}asup, \mykappa))$. We thus gain insight into how the main behaviour of $\mydelta$ is influenced by the quantities $e_{q}ainf$, $e_{q}asup$, and $\mykappa$.
By optimizing these quantities in the modelling process and with the selection of measurements, we have a direct way of influencing $\mydelta$ and thereby the stability of the whole 3D-VAR system \eqref{prob:truth:sp}. } We next recall that $\mybeta$ and $\mydelta$ are uniformly bounded away from zero as shown in Theorems~\ref{thm:tInfSup} and \ref{thm:tCoercivity}, respectively. It then follows from Brezzi's Theorem that the 3D-VAR formulation is uniformly well-posed over the parameter domain for all source functions in $\mathcal{H}'$ and $\myY'$~\cite{brezzi,Brezzi_book}. \blue{For the particular choice in \eqref{eq:defABG}, we get the following stability bounds from Theorem 5.2, p. 38, in \cite{Brezzi_book} after inserting $\mathcal{N}orm[\mathcal{H}']{F} = \lambda\mathcal{N}ormY{y_{\rm{d}}}$ and $\mathcal{N}orm[\myY']{\myG} = \mathcal{N}orm[\mathcal{Y}']{f_{\rm{bk},\mu}}$:} \begin{theorem}\label{thm:tStability} Problem \eqref{prob:truth:sp} has a unique solution $(h^*eins,h^*zwei) \in \mathcal{H} \times \myY$, and \begin{subequations}\label{eq:tStability} \begin{align} \mathcal{N}orm[\mathcal{H}]{h^*eins} &~~\le~~ \frac{\lambda~\mathcal{N}ormY{y_{\rm{d}}}}{\mydelta} +\frac{\mathcal{N}orm[\mathcal{Y}']{f_{\rm{bk},\mu}}}{\mybeta}\bigg{(}\frac{\gamma_{\rm{A}}}{\mydelta}+1\bigg{)} \label{eq:tStability:h}\\ \mathcal{N}orm[\myY]{h^*zwei} &~~\le~~ \frac{\lambda~\mathcal{N}ormY{y_{\rm{d}}}}{\mybeta}\bigg{(}\frac{\gamma_{\rm{A}}}{\mydelta}+1\bigg{)}~ + \frac{\gamma_{\rm{A}}~\mathcal{N}orm[\mathcal{Y}']{f_{\rm{bk},\mu}} }{\mybeta^2} \bigg{(}\frac{\gamma_{\rm{A}}}{\mydelta}+1\bigg{)} \label{eq:tStability:p} \end{align} \end{subequations} \end{theorem} We note that the effect of experimental noise on the solution is governed by the stability coefficients in front of $\mathcal{N}ormY{y_{\rm{d}}}$. Since $\gamma_{\rm{A}} = \max\{1,\lambda\}$, they scale like $\Theta(\lambda / \mydelta)$ in \eqref{eq:tStability:h} and $\Theta(\lambda^2 / \mydelta)$ in \eqref{eq:tStability:p} for $\lambda \ge 1$. Furthermore, we know from Theorem~\ref{thm:equiv} that $\mydelta$ is of order $\Theta(\lambda)$ for $\lambda \rightarrow \infty$ if and only if $e_{q}ainf > 0$ and $\mykappa > 0$. It thus follows that the stability coefficients for the model modification and state variable $h^*eins$ are bounded from above independently of $\lambda$ in this case, and that the stability coefficients for the adjoint $h^*zwei$ scale like $\Theta(\lambda)$. If $e_{q}ainf = 0$ or $\mykappa = 0$, the scaling is significantly worse with $\Theta(\lambda)$ and $\Theta(\lambda^2)$, respectively. Provided $e_{q}ainf > 0$, we should thus choose $\mathcal{T}$ such that the inf-sup constant $\mykappa$ is maximized. We discuss such a stability-based generation of the measurement space in section \ref{sec:greedyOMP}. \begin{remark} We briefly comment on the link between the inf-sup constant $\mykappa$, the PBDW formulation, and the 3D-VAR solution. Suppose $\mathcal{Y}mu$ is closed, as is the case if $\dim \mathcal{U} < \infty$, and suppose further that $\mykappa > 0$. Let $\myybk \in \mathcal{Y}$ be the solution of the best-knowledge problem \eqref{eq:bk}. Then the minimization \begin{align*} \min _{y \in \mathcal{Y}mu, d \in \mathcal{T}} \mathcal{N}ormY{d}^2 \text{ such that } \mySPY{y + d}{\tau} = \mySPY{y_{\rm{d}} - \myybk}{\tau} ~\forall ~\tau \in \mathcal{T} \end{align*} has a unique, $\mu$-dependent solution $(y_{\infty,\mu}hat,e_{q}aPBDW) \in \mathcal{Y}mu \times \mathcal{T}$.
This problem is the PBDW formulation \cite{PBDW} over the best-knowledge space $\mathcal{Y}mu$, where the basis functions of $\mathcal{U}$ act as parameters. For an extensive analysis of state estimation with the PBDW approach, we refer to the literature \cite{PBDW,PBDW2,doi:10.1137/15M1025384,2017arXiv171209594M}. \blue{ With this notation and the discussion in Remark \ref{rmk:convergence}, the model correction $\myu = \myu(\lambda)$ hence converges for $\lambda \rightarrow \infty$ to the unique element $u_{\infty,\mu}$ with minimal norm among all $u \in \mathcal{U}$ with $(u,y_{\infty,\mu}hat) \in \mathcal{H}^0(\mu)$. } If the inf-sup condition $\beta_b(\mathcal{U},\mathcal{Y}mu) > 0$ holds, \blue{or equivalently} $e_{q}ainf > 0$, \blue{then the limit} $u_{\infty,\mu}$ is already uniquely defined by the property $(u_{\infty,\mu},y_{\infty,\mu}) \in \mathcal{H}^0(\mu)$. In either case, we have $\myy = \myy(\lambda) \rightarrow y_{\infty,\mu} := \myybk + y_{\infty,\mu}hat$ and $\myd(\lambda) = y_{\rm{d}} - \myProj{\mathcal{T}}\myy(\lambda) \rightarrow e_{q}aPBDW$ for $\lambda \rightarrow \infty$. It has been shown in \cite{PBDW,doi:10.1137/15M1025384} that if $y_{\rm{d}} = \myProj{\mathcal{T}}y_{\rm{true}}$, then \begin{align*} \mathcal{N}ormY{\myProj{\mathcal{Y}mu}(y_{\rm{true}}-y_{\infty,\mu})} &\le \frac{1}{\mykappa} \inf_{d \in \mathcal{T} \cap \mathcal{Y}mu^{\perp}} \inf _{y \in \myybk+\mathcal{Y}mu} \mathcal{N}ormY{y_{\rm{true}}-(y + d)} \\ \mathcal{N}ormY{y_{\rm{true}}-(y_{\infty,\mu} + e_{q}aPBDW)} &\le \frac{1}{\mykappa} \inf_{d \in \mathcal{T} \cap \mathcal{Y}mu^{\perp}} \inf _{y \in\myybk + \mathcal{Y}mu} \mathcal{N}ormY{y_{\rm{true}}-(y + d)}, \end{align*} and $\mykappa^{-1}$ is the optimal class performance for the second bound (see \cite{doi:10.1137/15M1025384}). The approximation of $y_{\rm{true}}$ through the 3D-VAR solution therefore also profits in the limit $\lambda \rightarrow \infty$ from a large inf-sup constant $\mykappa > 0$. \end{remark} \begin{remark} We note that the PBDW approach requires $\dim \mathcal{T} \ge \dim \mathcal{U}$ measurements of the true state for the problem to be well-posed. The 3D-VAR method, on the other hand, remains well-posed for any measurement space --- although a ``good'' choice for $\mathcal{T}$ may increase its stability. Additionally, the regularisation parameter $\lambda$ provides a direct control over the amplification of noise. By changing $\lambda$, the 3D-VAR method can be adjusted for different noise levels. A comparatively small $\lambda$-value may result in a better state estimate than a large $\lambda$-value given noisy data. \end{remark} \subsection{Numerical Solution} For the numerical solution of the 3D-VAR saddle-point system \eqref{prob:truth:sp}, the spaces $\mathcal{U}$ and $\mathcal{Y}$ are replaced with finite-dimensional approximation spaces, e.g. conforming finite element spaces. Let $(\phi_m)_{m=1}^{\mathcal{M}} \subset \mathcal{U}$ and $(\psi_n)_{n=1}^{\mathcal{N}} \subset \mathcal{Y}$ be bases of $\mathcal{U}$ and $\mathcal{Y}$, respectively. For $\mathcal{T}$ we consider the basis $(\tau_l)_{l=1}^L$ which can be represented in terms of the basis of $\mathcal{Y}$.
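As a brief aside, and purely as an illustrative sketch in our own notation: in the spirit of Remark \ref{rmk:T}, the coefficient vectors of such a basis of $\mathcal{T}$ can be computed from the Gram matrix of the $\mathcal{Y}$-basis and the functional values $g_l(\psi_i)$; the names below are assumptions made only for this sketch.
\begin{verbatim}
# Minimal sketch (our own notation): coefficient vectors of the Riesz
# representers tau_l of the measurement functionals g_l in the basis of Y,
# obtained from (tau_l, psi_i)_Y = g_l(psi_i), i.e. M_Y tau_l = g_l.
import numpy as np

def measurement_basis(M_Y, G):
    """M_Y: Gram matrix of the Y-basis, (M_Y)_{i,j} = (psi_j, psi_i)_Y;
    G: array of shape (N, L) with G[i, l] = g_l(psi_i).
    Returns an (N, L) array whose columns are the tau_l coefficients."""
    return np.linalg.solve(M_Y, G)
\end{verbatim}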
The 3D-VAR solution $(h^*eins,h^*zwei) = (\myu,\myy,\myw) \in \mathcal{U} \times \mathcal{Y} \times \mathcal{Y}$ is then uniquely defined by the basis coefficients of the model modification, state, and adjoint arguments represented by $\myu = \sum _{m = 1} ^{\mathcal{M}} \mathbf{u}^{*}_m \phi_m$, $\myy = \sum _{n = 1} ^{\mathcal{N}} \mathbf{y}^{*}_n \psi_n$, and $\myw = \sum _{n = 1} ^{\mathcal{N}} \mathbf{p}^{*}_n \psi_n$, respectively. The coefficient vector $(\mathbf{u}^{*},\mathbf{y}^{*},\mathbf{p}^{*}) \in \mathbb{R}^{\mathcal{M} + 2\mathcal{N}}$ is the unique solution of the linear $(\mathcal{M}+2\mathcal{N})\times(\mathcal{M}+2\mathcal{N})$ saddle-point system \begin{equation} \label{eq:numImp:t} \begin{aligned} \left( \begin{array}{ccc} \textbf{U} & \textbf{0} & ~\textbf{B}^T(\mu)~ \\ \textbf{0} & \lambda\textbf{P} & \textbf{A}^T(\mu) \\ ~\textbf{B}(\mu)~ & ~\textbf{A}(\mu)~ & \textbf{0} \end{array} \right) \left( \begin{array}{c} \mathbf{u}^{*} \\ \mathbf{y}^{*} \\ \mathbf{p}^{*} \end{array} \right) = \left( \begin{array}{c} \textbf{0}\\ \lambda \mathbf{d}^{*}ata \\ \mathbf{f}_{\rm{bk}}(\mu) \end{array} \right). \end{aligned} \end{equation} Here, $\mathbf{U} \in \mathbb{R}^{\mathcal{M} \times \mathcal{M}}$ is the mass matrix on $\mathcal{U}$, $\mathbf{P} \in \mathbb{R}^{\mathcal{N} \times \mathcal{N}}$ is the matrix representation of the state projection $\myProj{\mathcal{T}}$, $\mathbf{A}(\mu) \in \mathbb{R}^{\mathcal{N} \times \mathcal{N}}$ is the stiffness matrix, and $\mathbf{B}(\mu) \in \mathbb{R}^{\mathcal{N} \times \mathcal{M}}$ is the model modification matrix. The entries are given by $\mathbf{U}_{i,j} = \mySPU{\phi_j}{\phi_i}$, $\mathbf{P}_{i,j} = \mySPY{\myProj{\mathcal{T}} \psi_j}{\myProj{\mathcal{T}} \psi_i}$, $\mathbf{A}_{i,j}(\mu) = \mya(\psi_j,\psi_i)$ and $\mathbf{B}_{i,j}(\mu) = -\myb(\phi_j,\psi_i)$. On the right side of the equation, we have the best-knowledge source term vector $\mathbf{f}_{\rm{bk}}(\mu) \in \mathbb{R}^{\mathcal{N}}$ given by $\mathbf{f}_{{\rm{bk}},i}(\mu) = f_{\rm{bk},\mu}(\psi_i)$, and the data vector $\mathbf{d}^{*}ata \in \mathbb{R}^{\mathcal{N}}$ with entries $(\mathbf{d}^{*}ata)_i = \mySPY{y_{\rm{d}}}{\psi_i}$. The latter can be obtained from the measurement data $\mathbf{m}_{\rm{d}}$: For $\mathbf{T} \in \mathbb{R}^{L \times L}$, $\mathbf{T}_{i,j} := g_i(\tau_j)$ and $\mathbf{S}\in \mathbb{R}^{\mathcal{N} \times L}$, $\mathbf{S}_{i,j}:= \mySPY{\tau_j}{\psi_i}$, we have $\mathbf{d}^{*}ata = (\mathbf{S}\mathbf{T}^{-1}) \mathbf{m}_{\rm{d}}$. We note that the affine nature \eqref{eq:affine} of the bilinear and linear forms $\mya$, $\myb$ and $f_{\rm{bk},\mu}$ transfers to the corresponding matrices $\mathbf{A}(\mu)$, $\mathbf{B}(\mu)$ and $\mathbf{f}_{\rm{bk}}(\mu)$. Hence, for different parameters they can be assembled from precomputed, $\mu$-independent matrices, e.g. the stiffness matrix $\textbf{A}(\mu)$ has the form $\textbf{A}(\mu) = \sum _{\vartheta = 1} ^{\Theta_a} \theta_a^{\vartheta}(\mu) \textbf{A}^{\vartheta} \in \mathbb{R}^{\mathcal{N} \times \mathcal{N}} \text{ with } (\textbf{A}^{\vartheta})_{i,j} = a^{\vartheta}(\psi_j,\psi_i) \text{ for } \vartheta = 1,...,\Theta_a$. \section{RB Approximation} \label{sec:RB} The stability analysis in section \ref{sec:tStability}, in particular Theorem \ref{thm:tStability}, indicates that the 3D-VAR problem \eqref{prob:truth} is well-posed for every parameter $\mu \in \mathcal{C}$ with uniformly bounded stability constants.
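To fix ideas before introducing the reduction, the following minimal sketch (our own helper names and assumed notation, not part of the method's implementation) assembles and solves the truth system \eqref{eq:numImp:t} for a single parameter value, assuming the blocks $\mathbf{U}$, $\mathbf{P}$, $\mathbf{A}(\mu)$, $\mathbf{B}(\mu)$, $\mathbf{f}_{\rm bk}(\mu)$ and the data vector have already been assembled.
\begin{verbatim}
# Minimal sketch (our own notation): one truth solve of the 3D-VAR
# saddle-point system, with the block layout of the equation above.
import numpy as np

def solve_truth_3dvar(U, P, A_mu, B_mu, f_bk_mu, d_data, lam):
    """U: M x M, P: N x N, A_mu: N x N, B_mu: N x M (as defined above),
    d_data: data vector of length N, lam: regularization parameter."""
    M, N = U.shape[0], A_mu.shape[0]
    K = np.block([
        [U,                np.zeros((M, N)), B_mu.T],
        [np.zeros((N, M)), lam * P,          A_mu.T],
        [B_mu,             A_mu,             np.zeros((N, N))],
    ])
    rhs = np.concatenate([np.zeros(M), lam * d_data, f_bk_mu])
    sol = np.linalg.solve(K, rhs)
    return sol[:M], sol[M:M + N], sol[M + N:]   # u*, y*, p* coefficients
\end{verbatim}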
Yet, the implementation of the 3D-VAR method involves solving a large linear system of equations for each parameter $\mu$, where the $\mu$-dependent parts of the block matrix need to be assembled anew whenever $\mu$ changes. For real-time and many-query applications over the parameter domain, the computational cost hence becomes large. We address these problems by introducing an RB scheme: We first derive an RB formulation of the 3D-VAR method and apply the extensive stability analysis of section \ref{sec:tStability} to the RB spaces to emphasize the connection to the measurement space. We comment briefly on the numerical implementation of the RB method, before we provide an error analysis of the difference between the solutions of the original (truth) 3D-VAR problem \eqref{prob:truth} and its RB approximation. We derive computationally efficient {\it a posteriori} error bounds for the error in the model modification, state estimate, adjoint solution and the misfit between observations in the measurement space. \blue{We briefly comment on the notation. We require several reduced basis quantities in this section which are direct analogues of their truth counterparts, e.g., of the inf-sup constant $\beta_{\cal T}(\mu)$. In order to avoid various repetitions, quantities with an indexed $()_{\cal{R}}$ are defined as before with the FE spaces replaced by the corresponding RB spaces.} \subsection{RB Problem Statement} \label{sec:rbStability} The main contributions to the computational cost of the 3D-VAR method \eqref{prob:truth} come from the evaluation of the PDE over the state space $\mathcal{Y}$ in the direction of all possible model modifications $\mathcal{U}$. If the set of model modifications is restricted to the most essential directions, and the solution of the PDE is approximated efficiently with a reduced order model, then the 3D-VAR saddle-point system reduces in size and may be solved faster. To this end, we assume that we are given closed, non-trivial subspaces $\mathcal{U}RB \subset \mathcal{U}$ and $\mathcal{Y}RB \subset \mathcal{Y}$, which we refer to as RB spaces. In practice, $\mathcal{U}RB$ and $\mathcal{Y}RB$ are typically low-dimensional. In section \ref{sec:spaces}, we discuss how they can be chosen based on the intrinsic structure of the PDE. Note that we do not consider a reduction of the measurement space $\mathcal{T}$, i.e. the RB problem is based on the same data and no measurements are ignored. Our reduced basis 3D-VAR problem then has the form \begin{equation}\label{prob:RB} \begin{aligned} &\min _{(u_{\rm{R}},y_{\rm{R}},d) \in \mathcal{U}RB \times \mathcal{Y}RB \times \mathcal{T}} \frac{1}{2} \mathcal{N}ormU{u_{\rm{R}}}^2 + \frac{\lambda}{2} \mathcal{N}ormY{d}^2 \text{ s.t. } \\ &\left. \begin{array}{rll} \mya(y_{\rm{R}},\psi_{\rm{R}}) &= f_{\rm{bk},\mu} (\psi_{\rm{R}}) + \myb(u_{\rm{R}},\psi_{\rm{R}}) & \quad \forall~\psi_{\rm{R}} \in \mathcal{Y}RB \\ \mySPY{y_{\rm{R}}+d}{\tau} &= \mySPY{y_{\rm{d}}}{\tau} & \quad \forall ~\tau \in \mathcal{T}. \end{array} \right.
\end{aligned} \end{equation} Similar to before, the reduced basis minimization \eqref{prob:RB} can be written as an equivalent saddle-point problem over the spaces $\mathcal{H}RB := \mathcal{U}RB \times \mathcal{Y}RB$ and $\mathcal{Y}RB$: Given $\mu \in \mathcal{C}$, find $(h^*einsRB,h^*zweiRB) = ((\myuRB,\myyRB),\mywRB) \in \mathcal{H}RB \times \mathcal{Y}RB$ such that \begin{equation}\label{prob:RB:sp} \begin{aligned} A(h^*einsRB,k_{\rm{R}}) + \myB(k_{\rm{R}},h^*zweiRB) &= F(k_{\rm{R}}) && \forall~k_{\rm{R}} \in \mathcal{H}RB \\ \myB(h^*einsRB,\psi_{\rm{R}}) &= \myG(\psi_{\rm{R}}) && \forall~\psi_{\rm{R}} \in \mathcal{Y}RB. \end{aligned} \end{equation} The forms $A$, $\myB$, $F$ and $\myG$ are the ones defined in \eqref{eq:defABG} restricted to the respective spaces. Following standard nomenclature in the RB literature, we distinguish between the original 3D-VAR formulation \eqref{prob:truth} and its RB approximation \eqref{prob:RB} by referring to the former as the truth problem and the latter as the RB problem. On a structural level, the only formal difference between the truth and the RB problem is that $\mathcal{T}$ is a closed subspace of the state space $\mathcal{Y}$, while in general $\mathcal{T} \not \subset \mathcal{Y}RB$. However, this property remained unused within the stability analysis in section \ref{sec:tStability}, which therefore directly applies to the RB case when $\mathcal{U}$ and $\mathcal{Y}$ are formally replaced with $\mathcal{U}RB$ and $\mathcal{Y}RB$. We therefore omit a detailed stability analysis as well as the {\it a priori} error analysis. The latter directly follows from standard a-priori theory for Galerkin projections of saddle-point problems \cite{brezzi,Brezzi_book}. More specifically, in the present case we can even apply the results for symmetric problems with a coercive bilinear form $A$ from~\cite{Gerner_2012,Gerner_phd}. \subsection{Computational Procedure for the RB System} In the finite-dimensional setting and after a preparatory offline phase, the RB problem \eqref{prob:RB:sp} can be solved independently of the dimensions $\mathcal{N} = \dim \mathcal{Y}$ and $\mathcal{M} = \dim \mathcal{U}$ of the high-fidelity spaces. For given basis functions $(\phi_{\rm{R,} m})_{m=1}^M$ of $\mathcal{U}RB$ and $(\psi_{\rm{R,} n})_{n=1}^N$ of $\mathcal{Y}RB$, the basis coefficients $(\mathbf{u}^{*}RB,\mathbf{y}^{*}RB,\mathbf{p}^{*}RB) \in \mathbb{R}^{M+2N}$ of $(h^*einsRB,h^*zweiRB) = (\myuRB,\myyRB,\mywRB) \in \mathcal{U} \times \mathcal{Y} \times \mathcal{Y}$ can be computed by solving the linear system \begin{equation} \label{eq:RB:numImp} \begin{aligned} \left( \begin{array}{ccc} \textbf{U}_{\rm{R}} & \textbf{0} & ~\textbf{B}_{\rm{R}}(\mu)~ \\ \textbf{0} & \lambda\textbf{P}_{\rm{R}} & \textbf{A}_{\rm{R}}^T(\mu) \\ ~\textbf{B}_{\rm{R}}^T(\mu)~ & ~\textbf{A}_{\rm{R}}(\mu)~ & \textbf{0} \end{array} \right) \left( \begin{array}{c} \mathbf{u}^{*}RB \\ \mathbf{y}^{*}RB \\ \mathbf{p}^{*}RB \end{array} \right) = \left( \begin{array}{c} \textbf{0}\\ \lambda \mathbf{d}^{*}ataRB \\ \mathbf{f}_{\rm{bk}}RB(\mu) \end{array} \right). \end{aligned} \end{equation} Once $\mathcal{U}RB$ and $\mathcal{Y}RB$ have been chosen, e.g. with the approaches presented in section \ref{sec:spaces}, the matrices and vectors can be obtained from the previous ones in \eqref{eq:numImp:t} by multiplication with the basis representations of $(\phi_{\rm{R,}m})_{m=1}^{\mathcal{M}}$ in $(\phi_m)_{m=1}^M$, and $(\psi_{\rm{R,}n})_{n=1}^N$ in $(\psi_n)_{n=1}^{\mathcal{N}}$. 
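The following sketch, again in our own assumed notation, illustrates this projection step: the columns of the (hypothetical) arrays Phi_R and Psi_R are taken to hold the truth-basis coefficients of the RB basis functions of $\mathcal{U}RB$ and $\mathcal{Y}RB$, respectively, and the reduced blocks are obtained by congruence transformations.
\begin{verbatim}
# Minimal sketch (our own notation): Galerkin projection of the truth
# blocks onto the RB bases. Applied componentwise to the mu-independent
# terms of the affine decomposition, this is an offline computation.
import numpy as np

def project_to_rb(U, P, A_mu, B_mu, f_bk_mu, Phi_R, Psi_R):
    U_R = Phi_R.T @ U @ Phi_R        # reduced mass matrix on the U-RB space
    P_R = Psi_R.T @ P @ Psi_R        # reduced projection block
    A_R = Psi_R.T @ A_mu @ Psi_R     # reduced stiffness matrix
    B_R = Psi_R.T @ B_mu @ Phi_R     # reduced model-modification block
    f_R = Psi_R.T @ f_bk_mu          # reduced best-knowledge source
    return U_R, P_R, A_R, B_R, f_R
\end{verbatim}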
This projection needs to be performed only once, since the $\mu$-dependent forms $\textbf{A}_{\rm{R}}(\mu)$, $\textbf{B}_{\rm{R}}(\mu)$ and $\mathbf{f}_{\rm{bk}}RB(\mu)$ can be assembled from small, stored, $\mu$-independent matrices in $\mathcal{O}(Q_aN^2+Q_bMN+Q_fN)$ within the online phase thanks to the affine decomposition~\eqref{eq:affine}. For any measurement data $\mathbf{m}_{\rm{d}} \in \mathbb{R}^L$, the vector $\mathbf{d}^{*}ataRB$ can be obtained via $\mathbf{d}^{*}ataRB = (\mathbf{S}_{\rm{R}}\mathbf{T}^{-1}) \mathbf{m}_{\rm{d}}$, where $\mathbf{T} \in \mathbb{R}^{L \times L}$, $\mathbf{T}_{i,j} := g_i(\tau_j)$ and $\mathbf{S}_{\rm{R}}\in \mathbb{R}^{N \times L}$, $(\mathbf{S}_{\rm{R}})_{i,j}:= \mySPY{\tau_j}{\psi_{\rm{R},i}}$ for the basis $(\tau_l)_{l=1}^L$ of $\mathcal{T}$. Assuming that $\mathbf{S}_{\rm{R}}\mathbf{T}^{-1}$ is precomputed, each new measurement set $\mathbf{m}_{\rm{d}}$ incurs a computational cost of order $\mathcal{O}(NL)$. Once assembled, the saddle-point system \eqref{eq:RB:numImp} can be solved in $\mathcal{O}((M+2N)^3)$ operations. Altogether, the online solves are completely independent of the dimensionality of the truth spaces. \subsection{{\it A Posteriori} Error Estimation} \label{sec:aposteriori} Given the truth and RB saddle-point problems \eqref{prob:truth:sp} and \eqref{prob:RB:sp}, we may follow the approach in \cite{Gerner_2012,gerner_siam_sp} to derive {\it a posteriori} error bounds for $h^*eins-h^*einsRB$ and $h^*zwei-h^*zweiRB$. However, these bounds provide no individual information on the error between the model corrections $\myu$ and $\myuRB$, or between the state estimates $\myy$ and $\myyRB$. We therefore pursue the approach from \cite{Kaercher_optCon} to develop separate error bounds for the errors $e_{u} := \myu - \myuRB \in \mathcal{U}$ in the model correction, $e_{y} := \myy - \myyRB \in \mathcal{Y}$ in the state estimate, and $e_{p} := \myw - \mywRB \in \mathcal{Y}$ in the adjoint solution. In addition, we provide an error bound for the difference $e_{d} := \myd - \mydRB \in \mathcal{T}$ between the misfits $\myd := y_{\rm{d}} - \myProj{\mathcal{T}}\myy \in \mathcal{T}$ and $\mydRB := y_{\rm{d}} - \myProj{\mathcal{T}}\myyRB \in \mathcal{T}$. We first define the residual functions \begin{equation} \label{eq:def:r} \begin{aligned} &r_{u} : \mathcal{U} \rightarrow \mathbb{R}: &&r_u(\phi) := \myb(\phi,\mywRB) - \mySPU{\myuRB}{\phi}, \\ &r_{p} : \mathcal{Y} \rightarrow \mathbb{R}: &&r_p(\psi) := \lambda \mySPY{\psi}{\mydRB} - \mya(\psi,\mywRB), \\ &r_{y} : \mathcal{Y} \rightarrow \mathbb{R}: &&r_y(\psi) := f_{\rm{bk},\mu}(\psi) + \myb(\myuRB,\psi) - \mya(\myyRB,\psi), \end{aligned} \end{equation} all three of which are linear, continuous and $\mu$-dependent. By subtracting \eqref{prob:RB:sp} from \eqref{prob:truth:sp} and varying over the whole spaces $\mathcal{U}$, $\mathcal{Y}$, $\mathcal{T}$ individually, the error tuple $(e_{u},e_{y},e_{d},e_{p}) \in \mathcal{U} \times \mathcal{Y} \times \mathcal{T} \times \mathcal{Y}$ solves the variational system \begin{subequations}\label{eq:errPDE} \begin{align} \mySPU{e_{u}}{\phi} - \myb(\phi,e_{p}) &= r_{u}(\phi) && \forall ~ \phi \in \mathcal{U}, \label{eq:errPDE:1}\\ \mya(\psi,e_{p}) - \lambda \mySPY{e_{d}}{\psi} &= r_{p}(\psi) && \forall ~\psi \in \mathcal{Y}, \label{eq:errPDE:2}\\ \mya(e_{y},\psi) - \myb(e_{u},\psi) &= r_{y}(\psi) && \forall ~ \psi \in \mathcal{Y}, \label{eq:errPDE:3}\\ \mySPY{e_{y}}{\tau} + \mySPY{e_{d}}{\tau} &= 0 && \forall ~ \tau \in \mathcal{T}.
\label{eq:errPDE:4} \end{align} \end{subequations} The first three equations provide alternative representations of $r_{u}$, $r_{p}$ and $r_{y}$, which can be used to bound their norms in terms of the approximation errors by \begin{equation}\label{eq:normbound:r} \begin{aligned} \mathcal{N}orm[\myU']{r_{u}} &~~\le~~ \mathcal{N}ormU{e_{u}} + \mygammab ~ \mathcal{N}ormY{e_{p}} \\ \mathcal{N}orm[\myY']{r_{p}} &~~\le~~ \mygammaa~\mathcal{N}ormY{e_{p}} + \lambda \mathcal{N}ormY{e_{d}} \\ \mathcal{N}orm[\myY']{r_{y}} &~~\le~~ \mygammaa~\mathcal{N}ormY{e_{y}} + \mygammab~\mathcal{N}ormU{e_{u}}. \end{aligned} \end{equation} Hence, small errors are reflected in small residual norms; if the truth and the RB solutions coincide, then $r_{u}$, $r_{p}$ and $r_{y}$ vanish. Following the alternative approach in \cite{Kaercher_optCon}, we obtain the following: \begin{theorem} \label{thm:aPos:ErrB} Let $\mu \in \mathcal{C}$ be given and define \begin{equation*} \begin{aligned} p_u &~:=~ \mathcal{N}orm[\myU']{r_{u}} + \frac{\mygammab}{\myalpha} ~\mathcal{N}orm[\myY']{r_{p}} && q_u ~:=~ \frac{2}{\myalpha} \mathcal{N}orm[\myY']{r_{p}}\mathcal{N}orm[\myY']{r_{y}} + \frac{\lambda}{4 \myalpha^2} \mathcal{N}orm[\myY']{r_{y}}^2 \\ p_d &~:=~ \frac{1}{\myalpha}\mathcal{N}orm[\myY']{r_{y}} && q_d ~:=~ \frac{2}{\lambda\myalpha} \mathcal{N}orm[\myY']{r_{p}}\mathcal{N}orm[\myY']{r_{y}} + \frac{1}{4\lambda}p_u^2. \end{aligned} \end{equation*} Then for the unique solution $(e_{u},e_{y},e_{d},e_{p}) \in \mathcal{U} \times \mathcal{Y} \times \mathcal{T} \times \mathcal{Y}$ of \eqref{eq:errPDE}, we have \begin{equation} \label{eq:apos:indiv} \begin{aligned} \mathcal{N}ormU{e_{u}} &~\le~ \frac{1}{2} p_u + \sqrt{\frac{1}{4}p_u^2 + q_u} && \quad \mathcal{N}ormY{e_{y}} ~\le~ \frac{1}{\myalpha}\mathcal{N}orm[\myY']{r_{y}} + \frac{\mygammab}{\myalpha} \mathcal{N}ormU{e_{u}} \\ \mathcal{N}ormY{e_{d}} &~\le~ \frac{1}{2} p_d + \sqrt{\frac{1}{4}p_d^2 + q_d} && \quad \mathcal{N}ormY{e_{p}} ~\le~ \frac{1}{\myalpha} \mathcal{N}orm[\myY']{r_{p}} + \frac{\lambda}{\myalpha} \mathcal{N}ormY{e_{d}}. \end{aligned} \end{equation} \end{theorem} \begin{proof} To bound $\mathcal{N}ormY{e_{p}}$, we use the coercivity \eqref{eq:def:alpha} of $\mya$, and set $\psi := e_{p}$ in \eqref{eq:errPDE:2} to obtain \begin{equation} \label{eq:apos:proof:w} \begin{aligned} \myalpha \, \mathcal{N}ormY{e_{p}} ~\le~ \frac{\mya (e_{p},e_{p})}{\mathcal{N}ormY{e_{p}}} ~=~ \frac{r_{p}(e_{p}) + \lambda \mySPY{e_{d}}{e_{p}}}{\mathcal{N}ormY{e_{p}}} ~\le~ \mathcal{N}orm[\myY']{r_{p}} + \lambda \, \mathcal{N}ormY{e_{d}}. \end{aligned} \end{equation} The bound for $\mathcal{N}ormY{e_{y}}$ is derived similarly by choosing $\psi := e_{y} \in \mathcal{Y}$ in equation \eqref{eq:errPDE:3}; we thus get \begin{equation} \label{eq:apos:proof:y} \begin{aligned} \myalpha \, \mathcal{N}ormY{e_{y}} \le \frac{\mya (e_{y},e_{y})}{\mathcal{N}ormY{e_{y}}} = \frac{r_{y}(e_{y}) + \myb(e_{u},e_{y})}{\mathcal{N}ormY{e_{y}}} \le \mathcal{N}orm[\myY']{r_{y}} + \mygammab \, \mathcal{N}ormU{e_{u}}.
\end{aligned} \end{equation} For $\mathcal{N}ormU{e_{u}}$ and $\mathcal{N}ormY{e_{d}}$, we start with the choice $\phi = e_{u} \in \mathcal{U}$ in \eqref{eq:errPDE:1}, \begin{equation} \label{eq:apos:proof:u} \begin{aligned} \mathcal{N}ormU{e_{u}}^2 &\stackrel{\eqref{eq:errPDE:1}}{~~=~~} r_{u}(e_{u}) + \myb(e_{u},e_{p}) \\ &\stackrel{\eqref{eq:errPDE:3}}{~~=~~} r_{u}(e_{u}) - r_{y}(e_{p}) + \mya(e_{y},e_{p}) \\ &\stackrel{\eqref{eq:errPDE:2}}{~~=~~} r_{u}(e_{u}) - r_{y}(e_{p}) + r_{p}(e_{y}) + \lambda \mySPY{e_{d}}{e_{y}} \\ &\stackrel{\eqref{eq:errPDE:4}}{~~=~~} r_{u}(e_{u}) - r_{y}(e_{p}) + r_{p}(e_{y}) - \lambda \mathcal{N}ormY{e_{d}}^2 \\ &\stackrel{~}{~~\le ~~} \mathcal{N}orm[\myU']{r_{u}} \mathcal{N}ormU{e_{u}} + \mathcal{N}orm[\myY']{r_{y}} \mathcal{N}ormY{e_{p}} + \mathcal{N}orm[\myY']{r_{p}} \mathcal{N}ormY{e_{y}} - \lambda \mathcal{N}ormY{e_{d}}^2. \end{aligned} \end{equation} The use of the previous bounds \eqref{eq:apos:proof:w} and \eqref{eq:apos:proof:y} in the second and third term yields \begin{equation} \label{eq:apos:proof:res} \begin{aligned} &\mathcal{N}ormU{e_{u}}^2 + \lambda \mathcal{N}ormY{e_{d}}^2 \\ & ~\le~ (\mathcal{N}orm[\myU']{r_{u}} + \frac{ \mygammab}{\myalpha}~\mathcal{N}orm[\myY']{r_{p}} )\mathcal{N}ormU{e_{u}}+\frac{ 2 \mathcal{N}orm[\myY']{r_{p}}\mathcal{N}orm[\myY']{r_{y}}}{\myalpha} + \frac{\lambda\mathcal{N}orm[\myY']{r_{y}} }{\myalpha}\mathcal{N}ormY{e_{d}}. \end{aligned} \end{equation} For the last term, we have $\frac{\lambda}{\myalpha}\mathcal{N}orm[\myY']{r_{y}} \mathcal{N}ormY{e_{d}} \le \frac{\lambda}{4 \myalpha^2} \mathcal{N}orm[\myY']{r_{y}}^2 + \lambda \mathcal{N}ormY{e_{d}}^2$ by Young's inequality. After subtracting $\lambda \mathcal{N}ormY{e_{d}}^2$ from both sides of \eqref{eq:apos:proof:res}, we have \begin{align*} \mathcal{N}ormU{e_{u}}^2 ~~\le~~ (\mathcal{N}orm[\myU']{r_{u}} + \frac{\mygammab}{\myalpha}~\mathcal{N}orm[\myY']{r_{p}}) \mathcal{N}ormU{e_{u}} + \frac{2}{\myalpha} \mathcal{N}orm[\myY']{r_{p}}\mathcal{N}orm[\myY']{r_{y}} + \frac{\lambda \mathcal{N}orm[\myY']{r_{y}}^2}{4 \myalpha^2}, \end{align*} which yields the bound for $\mathcal{N}ormU{e_u}$ via the quadratic formula. For the bound of $\mathcal{N}ormY{e_{d}}$ we use Young's inequality on the first term in the inequality \eqref{eq:apos:proof:res} to get \begin{align*} (\mathcal{N}orm[\myU']{r_{u}} + \frac{\mygammab}{\myalpha} ~\mathcal{N}orm[\myY']{r_{p}}) \mathcal{N}ormU{e_{u}} ~~\le ~~ \frac{1}{4} (\mathcal{N}orm[\myU']{r_{u}} + \frac{\mygammab}{\myalpha}~\mathcal{N}orm[\myY']{r_{p}})^2 + \mathcal{N}ormU{e_{u}}^2. \end{align*} Then, $\mathcal{N}ormU{e_{u}}^2$ can be subtracted from both sides of \eqref{eq:apos:proof:res}, which leaves \begin{align*} \lambda \mathcal{N}ormY{e_{d}}^2 \le \frac{\lambda \mathcal{N}orm[\myY']{r_{y}}}{\myalpha} \mathcal{N}ormY{e_{d}} + \frac{2}{\myalpha} \mathcal{N}orm[\myY']{r_{p}}\mathcal{N}orm[\myY']{r_{y}} + \frac{1}{4} (\mathcal{N}orm[\myU']{r_{u}} + \frac{\mygammab}{\myalpha} ~\mathcal{N}orm[\myY']{r_{p}})^2. \end{align*} The bound for $\mathcal{N}ormY{e_d}$ can now also be obtained via the quadratic formula. \qed \end{proof} The error bounds in Theorem \ref{thm:aPos:ErrB} can be used in the finite-dimensional setting to bound the approximation error without computing the truth solution. First note that the bounds are monotonically decreasing in the coercivity constant $\myalpha$, which may therefore be replaced by its efficiently computable lower bound $\myalphaLB \le \myalpha$.
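
As an illustration, the finite-dimensional evaluation of the bounds in Theorem~\ref{thm:aPos:ErrB} from precomputed residual dual norms is a direct transcription of the formulas above. The following schematic Python sketch (names are illustrative; the inputs are the residual dual norms together with a computable lower bound for $\myalpha$ and an upper bound for $\mygammab$) is not part of our implementation:
\begin{verbatim}
import math

def apos_bounds(ru, rp, ry, alpha_LB, gamma_b_UB, lam):
    # ru, rp, ry: dual norms of the residuals r_u, r_p, r_y;
    # alpha_LB, gamma_b_UB: computable surrogates for alpha and gamma_b.
    a, g = alpha_LB, gamma_b_UB
    p_u = ru + g / a * rp
    q_u = 2.0 / a * rp * ry + lam / (4.0 * a**2) * ry**2
    p_d = ry / a
    q_d = 2.0 / (lam * a) * rp * ry + p_u**2 / (4.0 * lam)

    Delta_u = 0.5 * p_u + math.sqrt(0.25 * p_u**2 + q_u)   # bound for e_u
    Delta_d = 0.5 * p_d + math.sqrt(0.25 * p_d**2 + q_d)   # bound for e_d
    Delta_y = ry / a + g / a * Delta_u                      # bound for e_y
    Delta_p = rp / a + lam / a * Delta_d                    # bound for e_p
    return Delta_u, Delta_y, Delta_d, Delta_p
\end{verbatim}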
Analogously, $\mygammab$ can be replaced with its upper bound $\mybUB$. Due to the affine parameter dependence assumed in \eqref{eq:affine}, the residual norms can be computed from \eqref{eq:def:r} in an offline-online procedure independent of the space dimensions $\mathcal{N} = \dim \mathcal{Y}$ and $\mathcal{M} = \dim \mathcal{U}$. Their computation is a standard procedure in the RB literature and is therefore omitted; see, e.g., \cite{RB_Patera_2007}. To summarize, we then obtain {\it a posteriori} bounds $\myDu$, $\myDy$, $\myDd$ and $\myDw$ for $\mathcal{N}ormU{e_{u}}$, $\mathcal{N}ormY{e_{y}}$, $\mathcal{N}ormY{e_{d}}$, and $\mathcal{N}ormY{e_{p}}$ that can be computed quickly and efficiently. We note that the boundedness \eqref{eq:normbound:r} of the residual norms with respect to the errors ensures a correspondence between the {\it a posteriori} error bounds and the behaviour of the true error. \section{Space Construction} \label{sec:spaces} In the previous sections, we have assumed the measurement space $\mathcal{T}$ and the RB spaces $\mathcal{U}RB$ and $\mathcal{Y}RB$ to be given. In this section, we present ideas on how the previously discussed properties concerning stability and approximation quality may be used for constructing the spaces. More specifically, we first introduce a greedy-OMP algorithm for the stability-based generation of $\mathcal{T}$, and subsequently present an approach for the construction of the reduced basis spaces. \subsection{Greedy-OMP Algorithm} \label{sec:greedyOMP} In some applications, prior information can allow for a low-dimensional model correction space $\mathcal{U}$. One reason would be that, due to the known presence of noise, the focus of interest lies in the large-scale behaviour of the model correction rather than in noise-sensitive details and oscillations. Such a low-dimensional choice can be of great benefit, since it can counteract noise amplification already for low-dimensional measurement spaces, as indicated by our stability analysis in section \ref{sec:tStability}. In the following, we propose a greedy-OMP algorithm that chooses a measurement space $\mathcal{T}$ such that 1) the influence of noise upon the 3D-VAR model correction and state estimates is bounded independently of $\lambda$ and $\mu$, and 2) solutions of the modified model \eqref{eq:forward} are distinguishable for different parameters. We presuppose that $M := \dim \mathcal{U}$ is small and that $\mathcal{Y}$ can sufficiently capture the modifications induced by $\mathcal{U}$ so that $e_{q}ainf > 0$ holds uniformly on $\mathcal{C}$. Then, by the stability analysis in section \ref{sec:tStability}, the amplification of noise in the 3D-VAR model correction and state solution $h^*eins$ is bounded independently of $\lambda$ if $\mykappa > 0$. For any fixed parameter $\mu$, we can compute the space $\mathcal{Y}mu$ of state modifications by considering the forward problem \begin{align}\label{eq:greedyOMP:1} \text{For }u_0 \in \mathcal{U} \text{ find } y_{\mu}(u_0)\in \mathcal{Y} \text{ s.t. }\mya(y_{\mu}(u_0),\psi) = \myb(u_0,\psi) \quad \forall ~\psi \in \mathcal{Y}. \end{align} Then $\mathcal{Y}mu = \text{span}(y_{\mu}(\phi_m))_{m=1}^M$ for any basis $(\phi_m)_{m=1}^M$ of $\mathcal{U}$. We assume to be given a library $\mathcal{L} \subset \mathcal{Y}'$, from which we may select different measurement functionals $g_l \in \mathcal{L}$ whose Riesz representations then span $\mathcal{T}$, see \eqref{eq:def:T}.
For any fixed parameter $\mu$, a generalized OMP algorithm can be applied to the space $\mathcal{Y}mu$ to iteratively expand $\mathcal{T}$ and --- under some restrictions upon the library --- enforce $\mykappa > 0$ for $L = \dim \mathcal{T} \ge M = \dim \mathcal{Y}mu$ (see \cite{Olga}). Theoretically, $\mathcal{T}$ can be expanded by repeating this procedure for sufficiently many parameters. However, the need to solve the forward problem \eqref{eq:greedyOMP:1} for $u_0 = \phi_m$, $m=1,...,M$, and for each parameter incurs a high computational cost. We can improve this procedure with our RB approximation: We set $\mathcal{U}RB :=\mathcal{U}$ and assume that we have an RB space $\mathcal{Y}RB \subset \mathcal{Y}$ with $e_{q}ainfRB > 0$ that sufficiently approximates $\mathcal{Y}mu$ in the following sense: For any $\mu \in \mathcal{C}$ there exists an $\varepsilon_{\mu}$, $0 \le \varepsilon_{\mu} \ll 1$, with: \begin{equation} \label{eq:YmuApproxProperty} \text{If } (u,y) \in \mathcal{H}^0(\mu) \text{ and } (u,y_{\rm{R}}) \in \mathcal{H}RB^0(\mu) \text{ then } \mathcal{N}ormY{y - y_{\rm{R}}} \le \varepsilon_{\mu} \mathcal{N}ormY{y}. \end{equation} We emphasize that the RB spaces here may be considered as completely separate from the RB 3D-VAR method. We use the same notation as in section \ref{sec:RB} only for simplicity, as it avoids the needless repetition of definitions and results. For ${u_0} \in \blue{\mathcal{U} = \mathcal{U}RB}$, let $y({u_0}) \in \mathcal{Y}mu$ and $y_{\rm{R}}({u_0})\in \mathcal{Y}muRB$ denote the unique elements with $({u_0},y({u_0})) \in \mathcal{H}^0(\mu)$ and $({u_0},y_{\rm{R}}({u_0})) \in \mathcal{H}RB^0(\mu)$ respectively. Then \begin{align}\label{eq:greedyOMP:2} \frac{\mathcal{N}ormY{y_{\rm{R}}({u_0})}}{\mathcal{N}ormY{y({u_0})}} \ge \frac{\mathcal{N}ormY{y({u_0})} - \mathcal{N}ormY{y({u_0})-y_{\rm{R}}({u_0})}}{\mathcal{N}ormY{y({u_0})}} \ge 1-\varepsilon_{\mu}. \end{align} The inf-sup constant $\mykappa$ on the high fidelity space $\mathcal{Y}$ can then be bounded with respect to $\beta_{\myT,\myYRB}mu$ on the RB space $\mathcal{Y}RB$ via \begin{align*} \mykappa =& \inf _{{u_0} \in \blue{\mathcal{U}}} \sup _{\tau \in \mathcal{T}}\frac{\mySPY{y({u_0})}{\tau}}{\mathcal{N}ormY{y({u_0})}\mathcal{N}ormY{\tau}} \\ =& \inf _{{u_0} \in \blue{\mathcal{U}}} \sup _{\tau \in \mathcal{T}} \frac{\mySPY{y_{\rm{R}}({u_0})}{\tau}}{\mathcal{N}ormY{y({u_0})} \mathcal{N}ormY{\tau}} + \frac{\mySPY{y({u_0})-y_{\rm{R}}({u_0})}{\tau}}{\mathcal{N}ormY{y({u_0})} \mathcal{N}ormY{\tau}}\\ \ge & \inf _{{u_0} \in \blue{\mathcal{U}}} \frac{\mathcal{N}ormY{y_{\rm{R}}({u_0})}}{\mathcal{N}ormY{y({u_0})}}\sup _{\tau \in \mathcal{T}} \frac{\mySPY{y_{\rm{R}}({u_0})}{\tau}}{\mathcal{N}ormY{y_{\rm{R}}({u_0})} \mathcal{N}ormY{\tau}} - \varepsilon_{\mu} \\ \ge & ~(1-\varepsilon_{\mu}) \inf _{{u_0} \in \blue{\mathcal{U} = \mathcal{U}RB}} \sup _{\tau \in \mathcal{T}} \frac{\mySPY{y_{\rm{R}}({u_0})}{\tau}}{\mathcal{N}ormY{y_{\rm{R}}({u_0})} \mathcal{N}ormY{\tau}} - \varepsilon_{\mu}\\ = & ~(1-\varepsilon_{\mu}) \beta_{\myT,\myYRB}mu - \varepsilon_{\mu}. \end{align*} Hence, $\mathcal{T}$ is a stabilizing choice for the truth 3D-VAR method if the inf-sup condition $\beta_{\myT,\myYRB}mu > 0$ holds and $\varepsilon_{\mu}$ is sufficiently small. The greedy-OMP algorithm \ref{alg:greedyOMP} generates a measurement space $\mathcal{T}$ from the library $\mathcal{L} \subset \mathcal{Y}'$ to increase $\beta_{\myT,\myYRB}mu$ over a training set $\Xi _{\rm{train}} \subset \mathcal{C}$.
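
The quantity $\beta_{\myT,\myYRB}mu$ that drives the greedy selection is an inf-sup constant between two finite-dimensional subspaces and can be evaluated by linear algebra alone. As a schematic illustration (Python/NumPy; the names are placeholders and the sketch assumes that $\mathcal{Y}$-orthonormal basis vectors of $\mathcal{Y}muRB$ and of $\mathcal{T}$ are available as the columns of \texttt{Ymu\_R} and \texttt{T\_basis}, with \texttt{X} the Gram matrix of the $\mathcal{Y}$ inner product):
\begin{verbatim}
import numpy as np

def inf_sup_constant(Ymu_R, T_basis, X):
    # With Y-orthonormal bases of the two subspaces, the inf-sup
    # constant inf_y sup_tau (y,tau)_Y / (||y||_Y ||tau||_Y) equals
    # the smallest singular value of the cross-Gramian, and is zero
    # whenever T is too small to control the trial space.
    G = Ymu_R.T @ X @ T_basis      # cross-Gramian (y_i, tau_j)_Y
    m, L = G.shape
    if L < m:
        return 0.0
    return np.linalg.svd(G, compute_uv=False)[-1]
\end{verbatim}
This is the quantity that is minimized over the training set in each iteration of the algorithm below.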
The initial computation of $\mathcal{Y}muRB$ for $\mu \in \Xi _{\rm{train}}$ \blue{(line \ref{line:compYmuRB}), and the searches for $y_L$ (line \ref{line:OMP1}) and $\mu_{L+1}$ (line \ref{line:findNext})} in each iteration require only online operations, i.e., they are independent of the FE dimensions. The selection of the measurement functional \blue{$g_L$ (line \ref{line:OMP2})} depends on \blue{the FE-dimension} $\mathcal{N} = \dim \mathcal{Y}$ and the size of the library, \blue{and the expansion of $\mathcal{T}$ (line \ref{line:expansion}) needs $\mathcal{N}$-dependent computations for the Riesz representation and orthonormalization. However, both (lines \ref{line:OMP2} and \ref{line:expansion})} need to be performed only once per iteration. In the algorithm, \blue{the measurement functional $g_L$ is chosen in lines \ref{line:OMP1} and \ref{line:OMP2} so that the expansion of $\mathcal{T}$ in line \ref{line:expansion} increases $\beta_{\myT,\myYRB}mu[\mu_{L}]$. This is an application, to the space $\mathcal{Y}muRB[\mu_L]$, of the worst-case OMP algorithm in \cite{Olga}, which is in turn a greedy generalization of classical OMP algorithms.} Other selection strategies for increasing the inf-sup constant through the choice of measurement functionals \blue{exist and can be substituted in lines \ref{line:OMP1}-\ref{line:OMP2}; we refer to \cite{Olga} and \cite{PBDW} for further options and both analytical and numerical comparisons.} \begin{algorithm} \caption{Greedy Orthogonal Matching Pursuit}\label{alg:greedyOMP} \begin{algorithmic}[1] \Require $\Xi _{\rm{train}} \subset \mathcal{C}$ training set, $\mathcal{L} \subset \mathcal{Y}'$ library, $\beta_0> 0$ target value, $\mu_1 \in \Xi _{\rm{train}}$, $L_{\rm{max}}$ \State Compute $\mathcal{Y}muRB$ for each $\mu \in \Xi _{\rm{train}}$ \label{line:compYmuRB} \State $\mathcal{T} \gets \{0\}$, $L \gets 0$, $\beta \gets 0$ \While {$\beta \le \beta_0$ \textbf{ and } $L \le L_{\rm{max}}$} \State $L \gets L+1$ \State \blue{$y_L \gets \text{arg} \max \{\mathcal{N}ormY{y - \myProj{\mathcal{T}}y}:~ y \in \mathcal{Y}muRB[\mu_L], \mathcal{N}ormY{y} = 1 \}$} \label{line:OMP1} \State \blue{$g_L \gets \text{arg} \max _{g \in \mathcal{L}} |g(y_L - \myProj{\mathcal{T}}y_L)| / \mathcal{N}orm[\mathcal{Y}']{g}$} \label{line:OMP2} \State Expand $\mathcal{T}$ with the Riesz representation of $g_L$ \label{line:expansion} \State Find $\mu_{L+1} \in \text{arg} \min _{\mu \in \Xi _{\rm{train}}} \beta_{\myT,\myYRB}mu$ \label{line:findNext} \State $\beta \gets \beta_{\myT,\myYRB}mu[\mu_{L+1}]$ \EndWhile \end{algorithmic} \end{algorithm} If the 3D-VAR method is to be employed for parameter estimation, then $\mathcal{T}$ should be able to distinguish between solutions of the modified model \eqref{eq:forward} for different parameters. To this end, we define, for $\mu,\nu \in \mathcal{C}$, \begin{align*} \beta_{\mathcal{T}}(\mu,\nu) := \inf _{y \in \mathcal{Y}mu[(\mu,\nu)]} \sup _{\tau \in \mathcal{T}} \frac{\mySPY{y}{\tau}}{\mathcal{N}ormY{y}\mathcal{N}ormY{\tau}} \text{~ with ~} \mathcal{Y}mu[(\mu,\nu)] := \text{span} \{~\myybk[\mu],\myybk[\nu],\mathcal{Y}mu[\mu],\mathcal{Y}mu[\nu]~\}. \end{align*} Here, $\myybk[\mu]$ and $\myybk[\nu]$ are the best-knowledge solutions of \eqref{eq:bk} for the respective parameters. If $y_{\mu} = y_{\mu}(u_1)$ and $y_{\nu} = y_{\nu}(u_2)$ are solutions of the modified model \eqref{eq:forward} for parameters $\mu,\nu \in \mathcal{C}$ and modifications $u_1,u_2 \in \mathcal{U}$, then $y_{\mu},y_{\nu} \in \mathcal{Y}mu[(\mu,\nu)]$.
The experimentally observable difference $\mathcal{N}ormY{\myProj{\mathcal{T}}(y_{\mu}-y_{\nu})} \ge \beta_{\mathcal{T}}(\mu,\nu)\mathcal{N}ormY{y_{\mu}-y_{\nu}}$ is hence bounded from below relative to their actual distance. For parameter estimation, $\mathcal{T}$ should thus be chosen such that $\beta_{\mathcal{T}}(\mu,\nu) > 0$ on $\mathcal{C} \times \mathcal{C}$. We can generate $\mathcal{T}$ similarly to before, with a slight modification of the greedy-OMP algorithm \ref{alg:greedyOMP}: In addition to $\beta_{\myT,\myYRB}mu$, we consider \begin{align}\label{eq:def:kappaRB:mu:nu} \beta_{\mathcal{T}\rm{,R}}(\mu,\nu) := \inf _{y \in \mathcal{Y}muRB[(\mu,\nu)]} \sup _{\tau \in \mathcal{T}} \frac{\mySPY{y}{\tau}}{\mathcal{N}ormY{y}\mathcal{N}ormY{\tau}} \end{align} over a training set in $\mathcal{C}\times\mathcal{C}$, where $\mathcal{Y}muRB[(\mu,\nu)] := \text{span} \{~\myybkRB[\mu],\myybkRB[\nu],\mathcal{Y}muRB[\mu],\mathcal{Y}muRB[\nu]~\}$ with RB approximations $\myybkRB[\mu]$ and $\myybkRB[\nu]$ of the best-knowledge states $\myybk[\mu]$ and $\myybk[\nu]$. Note that this additionally requires $\mathcal{Y}RB$ to approximate the best-knowledge model \eqref{eq:bk}. The spaces $\mathcal{Y}muRB[(\mu,\nu)]$ can be computed efficiently online from the RB space $\mathcal{Y}RB$. It is therefore possible to consider different sets of training parameters in the course of the algorithm to reduce the required amount of memory. \subsection{Construction of Reduced Basis Spaces} \label{sec:stepwise} We next consider \blue{the} construction of the reduced basis spaces $\mathcal{Y}RB$ and $\mathcal{U}RB$. If the measurement space $\mathcal{T}$ and the measurements --- and thus the data state $y_{\rm{d}} \in \mathcal{T}$ --- are known {\it a priori}, we can directly use a greedy approach similar to the one for optimal control problems in~\cite{Kaercher_optCon}, see \cite{Masterarbeit_Nicole}. However, here we propose a different approach which does not require $\mathcal{T}$ or $y_{\rm{d}}$ to be known {\it a priori}. By varying in \eqref{prob:truth:sp} separately over the spaces $\mathcal{U}$ and $\mathcal{Y}$, and inserting $\myd = y_{\rm{d}} - \myProj{\mathcal{T}}\myy$, the following system is obtained for the model correction $\myu$, state $\myy$, and adjoint solution $\myw$: \begin{subequations} \label{eq:t:pde} \begin{align} \mySPU{\myu}{\phi} - \myb(\phi,\myw) &= 0 && \forall ~\phi \in \mathcal{U} \label{eq:t:pde:1}\\ \mya(\psi,\myw) - \lambda \mySPY{\psi}{\myd} &= 0 && \forall ~\psi \in \mathcal{Y} \label{eq:t:pde:2}\\ \mya(\myy,\psi) - \myb(\myu,\psi) &= f_{\rm{bk},\mu}(\psi) && \forall ~ \psi \in \mathcal{Y} \label{eq:t:pde:3}\\ \mySPY{\myy + \myd}{\tau} &= \mySPY{y_{\rm{d}}}{\tau} && \forall ~ \tau \in \mathcal{T}. \label{eq:t:pde:4} \end{align} \end{subequations} We assume that $\mathcal{U}RB \subset \mathcal{U}$ is fixed and low-dimensional, so that $\myu$ can be approximated well enough in $\mathcal{U}RB$ for the expected range of measurement data and the desired level of detail. \blue{As $\myu$ is not known,} the RB space $\mathcal{Y}RB$ needs to provide good approximations \blue{to the solution of \eqref{eq:t:pde:3} for each model modification in $\mathcal{U}RB$ relative to its norm.
We realise this by constructing a state space $\mathcal{Y}_y \subset \mathcal{Y}$ as} an RB approximation of the forward problem \begin{equation} \begin{aligned} &\text{For } \mu \in \mathcal{C} \text{ and } f \in \{~f_{\rm{bk},\mu}\} \cup \{~ \myb(\phi_{\rm{R,}m},\cdot):~1\le m\le M ~\} \subset \mathcal{Y}':\\ &\text{Find } y \in \mathcal{Y} \text{ s.t. } \mya(y,\psi) = f(\psi) \quad \forall ~ \psi \in \mathcal{Y}. \end{aligned} \end{equation} If $\mathcal{T}$ is not fixed by the experimental design, it can now be generated from $\mathcal{Y}_y$ with the greedy-OMP algorithm \ref{alg:greedyOMP}. \blue{Given $\mathcal{T}$ and f}ollowing an argument analogous to the one used for the generation of $\mathcal{Y}_y$, we can obtain an adjoint space $\mathcal{Y}_p \subset \mathcal{Y}$ if we iteratively replace $\myd$ in \eqref{eq:t:pde:2} with orthonormal basis functions $(\tau_l)_{l=1}^L$ of $\mathcal{T}$ and perform an RB approximation of each equation over the parameter domain $\mathcal{C}$. By considering a relative target accuracy, we can use $\lambda=1$ for the RB \blue{space generation, but note that since t}he approximation quality of the adjoint solution $\myw$ scales with $\lambda$, the target accuracy \blue{for} the RB \blue{adjoint} approximation should be chosen with respect to the largest regularisation parameter $\lambda$ that the RB 3D-VAR method is expected to be used for. \blue{Finally, we set} $\mathcal{Y}RB:= \mathcal{Y}_y + \mathcal{Y}_p$. \blue{T}he following sketch \blue{summarises this} consecutive construction of the spaces: \begin{center} \begin{minipage}[c]{0.95\textwidth} \begin{tikzpicture}[node distance = 2.5cm, auto] \node (l1) {$\mathcal{U}RB$}; \node [right of=l1](l2){$\mathcal{Y}_y$}; \node [right of=l2](l3){$\mathcal{T}$}; \node [right of=l3](l4){$\mathcal{Y}_p$}; \node [right of=l4](l5){$\mathcal{Y}RB=\mathcal{Y}_y+\mathcal{Y}_p$.}; \path [line] (l1) -- node {\eqref{eq:t:pde:3}} (l2); \path [line] (l2) -- node {\textit{Alg.} \ref{alg:greedyOMP}} (l3); \path [line] (l3) -- node {\eqref{eq:t:pde:2}} (l4); \path [line] (l4) -- (l5); \end{tikzpicture} \end{minipage} \end{center} \blue{T}his stepwise selection of the spaces avoids two possible drawbacks: First, the data state $y_{\rm{d}}$ need not be known at the start of the offline phase; the RB spaces are hence not influenced by measurement noise. Second, once the spaces are constructed, they can be used repeatedly for different measurement data $y_{\rm{d}}$ without necessitating additional offline costs. As $\mathcal{Y}RB= \mathcal{Y}_y + \mathcal{Y}_p$ comprises $M+L+1$ RB approximation spaces, $\mathcal{Y}RB$ may become large. The target accuracy and the training set should thus be chosen carefully to reduce the offline computational time. Once the data state $y_{\rm{d}}$ becomes known, $\mathcal{Y}RB$ can be condensed by following the two-step RB approach in~\cite{EHK+2012}, i.e., by subsequently applying a greedy algorithm to derive spaces of smaller dimensions. \section{Numerical Results} \label{sec:experiments} In this section we present numerical experiments to verify our theoretical results. All computations were performed with MATLAB\textsuperscript{\textregistered} on a computer with a 2.5 GHz Intel Core i5 processor and 4 GB of RAM.
\subsection{Model Description} We consider the steady-state temperature distribution $y$ of a thermal block $\Omega = (0,1)\times(0,1)$ that comprises up to three different material properties within $\Omega_1 = (\text{\small $\frac{1}{4}$},\text{\small$\frac{3}{4}$}) \times (\text{\small$\frac{1}{4}$},\text{\small $\frac{3}{4}$})$, $\Omega_2 = (0,1)\times(\text{\small$\frac{1}{2}$},1) \setminus \Omega_1$, and $\Omega_{0} = (0,1)\times(0,\text{\small$\frac{1}{2}$}) \setminus \Omega_1 $. We consider parameters $\mu = (\mu_1,\mu_2) \in \mathcal{C} := [\frac{1}{10},10]^2$, where $\mu_i$, $i \in \{1,2\}$, is the ratio of the thermal conductivity of $\Omega_i$ to $\Omega_0$. We divide the outer boundary $\partial \Omega = \Gamma_{\rm{in}} \cup \Gamma_{\rm{D}} \cup \Gamma_{\rm{N}}$, with $\Gamma_{\rm{N}} = \{0,1\} \times [0,1]$, $\Gamma_{\rm{D}} = [0,1]\times \{1\} $, and $\Gamma_{\rm{in}} = [0,1] \times \{0\}$, and impose different boundary conditions on each: $\nabla y \cdot n = 0$ a.e. on $\Gamma_{\rm{N}}$ (zero Neumann flux), $y|_{\Gamma_{\rm{D}}} = 0$ a.e. on $\Gamma_{\rm{D}}$ (zero Dirichlet), and $\nabla y \cdot n = u$ a.e. on $\Gamma_{\rm{in}}$ (prescribed Neumann flux). Here, $n$ is the outer unit normal to $\Omega$ and $u$ is a yet unspecified function in $\mathcal{U}cont := L^2(\Gamma_{\rm{in}})$. Across the subdomain interfaces, we require that both the temperature $y$ and the heat flux are continuous. An outline of this setup is provided in Figure \ref{fig:model}(a). \begin{figure} \caption{(a) Illustration of the domain and the imposed outer boundary conditions. (b) Greedy placement of the measurement sensors.} \label{fig:model} \end{figure} For the state space, we define $\mathcal{Y}cont := \{\, y \in \Hkp[1]:\, y|_{\Gamma_{\rm{D}}} = 0\, \}$ with inner product $\mySP[\mathcal{Y}cont]{y}{w} := \mySPY{y}{w} := \int_{\Omega} \nabla y(\mathbf{x}) \cdot \nabla w(\mathbf{x}) d\mathbf{x}$. For the space discretization in the state variable we use a linear finite element space $\mathcal{Y}$ of dimension 4,210. The bilinear forms $\mya : \mathcal{Y}cont \times \mathcal{Y}cont \rightarrow \mathbb{R}$ and $b : \mathcal{U}cont \times \mathcal{Y}cont \rightarrow \mathbb{R}$ associated with the thermal block problem are then given by \begin{align*} \mya(y,w) := \sum_{i=0}^2 \mu_i \int _{\Omega_i} \nabla y(\mathbf{x}) \cdot \nabla w(\mathbf{x})~d\mathbf{x} \quad \quad \quad b(u,w) := \int _{\Gamma_{\rm{in}}} u(\mathbf{x})~w(\mathbf{x})~dS(\mathbf{x}), \end{align*} where $\mu_0 := 1$. Both forms are continuous, and $\mya$ is coercive with $\myalpha = \min \{1,\mu_1,\mu_2\} \ge 0.1$. For the state and parameter estimation, we assume that the unknown true state $y_{\rm{true}} \in \mathcal{Y}$ is given as the unique solution of the weak PDE \begin{align}\label{eq:numExp:true} \mya[\mu _{\rm{true}}](y_{\rm{true}}, \psi) = b(u_{\rm{true}},\psi) \qquad \qquad \forall ~\psi \in \mathcal{Y}, \end{align} with $u_{\rm{true}}(x_1):= 1.5+0.3\sin(2\pi x_1)$ for $x_1 \in \Gamma_{\rm{in}}$, and $\mu _{\rm{true}} = (7,0.3)$. For the numerical computation of this state we approximate $u_{\rm{true}}$ with 69 linear finite elements on $\Gamma_{\rm{in}}$. For the best-knowledge model, we presume a steady, homogeneous heat inflow in the form of a uniform Neumann flux $u_{\rm{start}} \equiv 1$ on $\Gamma_{\rm{in}}$. We hence obtain $f_{\rm{bk},\mu} = b(u_{\rm{start}},\cdot) \in \mathcal{Y}'$.
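
To make the discrete setting explicit, the generation of the true state from \eqref{eq:numExp:true} might be sketched as follows. The snippet is a schematic Python/NumPy illustration (not our MATLAB implementation); the subdomain stiffness matrices \texttt{A0}, \texttt{A1}, \texttt{A2} and the boundary coupling matrix \texttt{B} are assumed to have been assembled beforehand by a finite element code and are not specified here.
\begin{verbatim}
import numpy as np

def assemble_a(mu, A0, A1, A2):
    # Affine decomposition of the thermal-block form:
    # a_mu(y,w) = 1*a_0(y,w) + mu_1*a_1(y,w) + mu_2*a_2(y,w).
    return A0 + mu[0] * A1 + mu[1] * A2

def true_state(A0, A1, A2, B, u_true_coeffs, mu_true=(7.0, 0.3)):
    # Discrete counterpart of a_mu(y_true, psi) = b(u_true, psi),
    # where B maps Neumann-flux coefficients on Gamma_in to a load vector.
    A = assemble_a(mu_true, A0, A1, A2)
    return np.linalg.solve(A, B @ u_true_coeffs)
\end{verbatim}
In practice the matrices are sparse and a sparse factorization would be used; the dense solve above is kept only for brevity.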
In this numerical experiment, we aim to estimate $\mu _{\rm{true}}$, $u_{\rm{true}}$ and $y_{\rm{true}}$ from a few measurements of $y_{\rm{true}}$. \subsection{Space Generation with Prior Knowledge} \label{sec:exp:generation} In this section, we first specify the framework for the application of the 3D-VAR method. We then use the approach described in the previous section to generate the RB spaces and the measurement space $\mathcal{T}$. Due to the diffusion of heat, state changes brought forth by local details in the Neumann flux smooth out as the distance to the boundary $\Gamma_{\rm{in}}$ increases. The observation of such details would hence necessitate the placement of observation functionals very close to the boundary, and the local reconstruction of the Neumann flux would then be sensitive to noise. For the model modifications, we hence make the educated guess that $u_{\rm{true}}$ can be approximated sufficiently in the space $\mathbb{P}_3(\Gamma_{\rm{in}})$ of polynomials on $\Gamma_{\rm{in}}$ with degree smaller or equal to 3; accordingly, we fix $\mathcal{U} := \mathbb{P}_3(\Gamma_{\rm{in}})$ equipped with the $L^2(\Gamma_{\rm{in}})$ inner product. As a consequence of this decision, we accept that we can approximate $u_{\rm{true}}$ only up to the accuracy $\mathcal{N}orm[L^2(\Gamma_{\rm{in}})]{\myProj{\mathcal{U}^{\perp}} u_{\rm{true}}} \approx $1.9877e-02. Similarly, for the true parameter $\mu _{\rm{true}}$, $y_{\rm{true}}$ can only be approximated up to $\mathcal{N}ormY{\myProj{\mathcal{Y}mu[\mu _{\rm{true}}]^{\perp}}y_{\rm{true}}} \approx$ 4.7576e-03. We next generate the measurement space $\mathcal{T}$ and the RB spaces $\mathcal{U}RB$ and $\mathcal{Y}RB$. Since we already restricted $\mathcal{U}$ to a small dimension, we chose $\mathcal{U}RB := \mathcal{U}$ without discarding any further model modifications. The state space $\mathcal{Y}_y \subset \mathcal{Y}$ is generated from the first four Legendre polynomials by a weak greedy algorithm over a $41\times 41$ regular training grid on the logarithmic parameter domain with target accuracy $10^{-5}$ relative to the norm of the state solution. The algorithm terminated with $\dim \mathcal{Y}_y = 64$. We note that if we were to use this state space for a $\mu$-independent PBDW state estimate of $y_{\rm{true}}$, we would need at least 64 measurement functionals for well-posedness, whereas any number of measurements is sufficient for well-posedness of the 3D-VAR method. The measurement space $\mathcal{T}$ was obtained from $\mathcal{Y}_y$ with the greedy-OMP algorithm \ref{alg:greedyOMP}, which selected 16 measurement functionals $g_l \in \mathcal{Y}'$, $l \in \{ 1,...,16\}$. The library consists of Gaussian functionals in $\mathcal{Y}'$ with standard deviation $0.01$ and centres in a $97\times 97$ regular grid on $(0.02,0.98)^2 \subset \Omega$. We use the target value $\beta_0 := 0.5$ for the inf-sup constants $\beta_{\myT,\myYRB}mu$ and $\beta_{\mathcal{T},\rm{R}}(\mu,\nu)$ with parameters $\mu$, $\nu$ in a $21 \times 21$ grid on $\mathcal{C}$. Figure \ref{fig:spaces}(a) shows the development of the minimal inf-sup constants $\beta_{\myT,\myYRB}mu$ and $\beta_{\myT,\myYRB}mu[\mu,\nu]$ over the respective training set as the greedy-OMP algorithm \ref{alg:greedyOMP} expands $\mathcal{T}$. Due to the margin at the domain boundary for the placement of the sensors, we cannot expect the inf-sup constant to reach 1 asymptotically.
Since $\min_{\mu} \beta_{\myT,\myYRB}mu$ exceeds the target value already for five measurement functionals, most measurements have been chosen to increase $\beta_{\myT,\myYRB}mu[\mu,\nu]$ over $\mathcal{C}^2$. The centres of the chosen measurement functionals are indicated in Figure \ref{fig:model}(b). The four measurements near $\Gamma_{\rm{in}}$ are most important for determining the optimal model correction, whereas the other measurements matter most for estimating unknown parameters. In Figure \ref{fig:spaces}(b), the $\mathcal{H}^0(\mu)$-coercivity constant $\alpha_A^0(\mu _{\rm{true}},\lambda)$ of $A$ and the corresponding lower bound $\alpha^{\rm{LB}}_{A}$ from Theorem \ref{thm:tCoercivity} are plotted over $\lambda$ for the high-fidelity spaces. We observe that the lower bound closely tracks the behaviour of $\mydelta$. Asymptotically, $\mydelta$ is larger than $\alpha^{\rm{LB}}_{A}$ by a factor of 1.0325. \begin{figure} \caption{(a) Development of the target variables of the greedy-OMP algorithm versus the dimension of $\mathcal{T}$. (b) The coercivity constant $\alpha_A^0(\mu _{\rm{true}},\lambda)$ and its lower bound $\alpha^{\rm{LB}}_{A}$ plotted over $\lambda$.} \label{fig:spaces} \end{figure} To finish the generation of $\mathcal{Y}RB$, we expand the state space $\mathcal{Y}_y$ with an adjoint space $\mathcal{Y}_p$ with $\dim \mathcal{Y}_p = 95$. This space is computed from an orthonormal basis of $\mathcal{T}$ with an RB approximation of the adjoint equation \eqref{eq:t:pde:2}. Since prior tests indicate that due to the local influence of the measurement functionals the weak greedy algorithm only chooses training parameters very close to the boundary $\partial \mathcal{C}$, the computation of $\mathcal{Y}_p$ was done using a relative target accuracy of $10^{-5}$ on 40 regularly spaced training parameters on $\partial \mathcal{C}$ in the logarithmic plane. After the computation of $\mathcal{Y}_p$, the target accuracy was confirmed on a fine test grid over the whole parameter domain. Altogether, the offline phase for the RB space generation, including the selection of appropriate measurement functionals, finished after 463 seconds with $\dim \mathcal{U}RB = 4$, $\dim \mathcal{T} = 16$ and $\dim \mathcal{Y}RB = 159$. \subsection{Reduced Basis Approximation} In this section, we evaluate 1) the computational efficiency of the RB solution, 2) its accuracy and the effectivity of the {\it a posteriori} error bounds, and 3) the effects of noise on the approximation of $u_{\rm{true}}$. To this end, we take measurements $\mathbf{m}_{\rm{d}} = (g_l(y_{\rm{true}}))_{l=1}^{16} \in \mathbb{R}^{16}$ of the true state to compute $y_{\rm{d}} = \myProj{\mathcal{T}}y_{\rm{true}}$. Noisy measurements are constructed by adding to each measurement a random variable drawn from the probability distribution $\mathcal{N}(0,0.01^2)$ before the computation of $y_{\rm{d}}$. This corresponds to a noise level of approximately 1.5\% in each measurement, relative to the difference between the measurements and those of the best-knowledge state $\myybk[\mu _{\rm{true}}]$. First, we evaluate the \blue{computational efficiency} of the RB method \blue{in comparison to} the truth 3D-VAR method: For 200 random parameters in $\mathcal{C}$ and $\lambda = 100$, the truth 3D-VAR solution was computed from different noisy measurements. The mean computation time was 7.08 $s$. The RB 3D-VAR solution was then computed over the same parameters with the same noisy data for comparison.
With an average online computation time of 4.2 $ms$ for the computation of the RB 3D-VAR solution and 1.3 $ms$ for the computation of the {\it a posteriori} error bounds, the RB method showed a mean speedup of 1,276. \begin{figure} \caption{Mean relative error \blue{(blue, dashed, o-marks)} and mean relative {\it a posteriori} error bound versus $\dim \mathcal{U}RB^j$ (see text).} \label{fig:exp:effectivity} \end{figure} To assess the effectivity of the {\it a posteriori} error bounds, we generate additional pairs $(\mathcal{U}RB^j,\mathcal{Y}RB^j) \subset \mathcal{U} \times \mathcal{Y}$, $j \in \{1,2,3\}$ of RB spaces with $\mathcal{U}RB^j = \mathbb{P}_{j-1}(\Gamma_{\rm{in}})$ the span of polynomials on $\Gamma_{\rm{in}}$ of degree smaller or equal to $j-1$. We kept the measurement space $\mathcal{T}$ fixed, but otherwise followed the approach as outlined in section \ref{sec:exp:generation} to obtain $\mathcal{Y}RB^j$. For the same parameters and the same noisy data as before, we evaluate the error between the truth 3D-VAR solution and the RB solution on these additional spaces. Figure \ref{fig:exp:effectivity} shows the mean error and mean {\it a posteriori} error bound relative to the norm of the solution variable versus the dimension $j = \dim \mathcal{U}RB^j$ of the RB model correction space. Additionally, the maximum relative error and the corresponding relative error bound are indicated. As the polynomial $x^3$ cannot be approximated in $\mathcal{U}RB^j$ for $j \le 3$, the large portion of $x^3$ in the truth 3D-VAR model modification $\myu$ leads to the strong decrease in the error for $\dim \mathcal{U}RB = 4$. We observe that the effectivity of the error bounds is generally small and constant, except for the case $\mathcal{U}RB = \mathcal{U}$ where $e_{u}$ and $e_{d}$ lie below the natural limits of the error bounds $\myDu$ and $\myDd$ dictated by the square root of the machine accuracy. For different regularisation parameters $\lambda \le 10^3$, we observe {\it a posteriori} error bounds of a similar magnitude. \begin{figure} \caption{Qualitative behaviour of the 3D-VAR solution as an approximation of $u_{\rm{true}}$ for different regularisation parameters $\lambda$ and noisy data.} \label{fig:utrue:approx} \end{figure} Given that the RB model correction $\myuRB$ approximates $\myu$ so accurately, we use the RB method to investigate how well the qualitative, global behaviour of the true Neumann flux $u_{\rm{true}}$ can be reflected by the 3D-VAR method if the true parameter $\mu _{\rm{true}}$ is provided. We hence compute $\myuRB[\mu _{\rm{true}}]$ for 100 different noisy data sets and for regularisation parameters $\lambda = 10^i$, $i=0,...,3$. The results are shown in Figure \ref{fig:utrue:approx}. We observe that for $\lambda = 1$, we obtain a good compromise between $u_{\rm{true}}$ and our initial guess $u_{\rm{start}} \equiv 1$ in the best-knowledge model, and the measurement error leads only to small deviations. As we increase $\lambda$ in favour of the biased measurements, the estimates start to differ, but, as can be seen when comparing \blue{$\lambda = 10^2$} to \blue{$\lambda = 10^3$}, the difference to the noise-free solution remains bounded. \subsection{Parameter Estimation} \label{sec:paraEst} We finally test three different approaches for parameter estimation with the 3D-VAR method to obtain approximations for $\mu _{\rm{true}} = (7,0.3)$, $u_{\rm{true}}$ and $y_{\rm{true}}$; to assess the estimation quality in general, we first use noise-free data. As the involved minimization process takes a long time to converge, we employ the RB 3D-VAR method for comparison.
Finally, we evaluate the sensitivity of the parameter estimate with respect to noise in the data. We define three different cost functionals $J_i^{\lambda} : \mathcal{C} \rightarrow \mathbb{R}$ by \begin{equation} \label{eq:J:truth} \begin{aligned} \myJeins(\mu) &:= \frac12 \mathcal{N}ormU{\myu}^2 + \frac{\lambda}{2} \mathcal{N}ormY{\myd}^2 \qquad \qquad \myJdrei(\mu) := \frac12 \mathcal{N}ormY{\myd}^2 \\ \myJzwei(\mu) &:= \frac12 \mathcal{N}orm[2]{\mathbf{v}_{\rm{d}}-(g_l(\myy))_{l=1}^{16}}^2 \end{aligned} \end{equation} where $(\myu,\myy,\myw)\in \mathcal{U} \times \mathcal{Y} \times \mathcal{Y}$ is the 3D-VAR truth solution \eqref{prob:truth:sp} and $\myd = y_{\rm{d}}-\myProj{\mathcal{T}}\myy$ is the observable misfit. We then approximate $\mu _{\rm{true}}$ with \begin{align} \mymui ~\in~ \text{arg} ~ \min _{\mu \in \mathcal{C}} \myJi (\mu), \qquad i \in \{1,2,3\}. \label{eq:min:J} \end{align} We thus obtain a bilevel optimization problem whose inner optimization requires the solution of the 3D-VAR problem. For $i=2$, \eqref{eq:min:J} finds the parameter $\mymuzwei \in \mathcal{C}$ for which the measurements $(g_l(\myy[\mymuzwei]))_{l=1}^{16}$ are closest to the actual measurement values $\mathbf{v}_{\rm{d}}$. In contrast, for $i=3$ the state $\myy[\mymudrei]$ is closest to $y_{\rm{d}} + \mathcal{T} ^{\perp}$ over all 3D-VAR state solutions. Due to our prior choice for $\mathcal{U}$ with $u_{\rm{true}} \notin \mathcal{U}$, the measurements are not necessarily attainable and we cannot expect $\mymui$ to converge to $\mu _{\rm{true}}$ for $\lambda \rightarrow \infty$. We first consider the truth parameter estimation problem. The problems in \eqref{eq:min:J} for $i \in \{1,2,3\}$ were solved within $\mathcal{C}$ from the starting point $(\mu_1,\mu_2)=(1,1)$ using the MATLAB function \texttt{fminsearch}\blue{, which uses the gradient-free simplex search algorithm described in \cite{simplex}}. We chose a target accuracy of 1e-12 in both the minimum value and the minimizing parameter. Each minimization took between 25 and 28 minutes. Table \ref{table:muapprox:truth} shows the results obtained for $\lambda = 10^j$, $j=0,...,3$. Generally, the parameter estimate improves as $\lambda$ increases and the 3D-VAR solution favours closeness to the measurement data, with $i=3$ providing the best parameter estimate. However, its accuracy appears to stagnate for $\lambda > 10^2$, presumably because $u_{\rm{true}} \notin \mathcal{U}$. The other two problems yielded comparable results for $\lambda = 1$, but did not improve as fast as the third for larger $\lambda$. Concerning $u_{\rm{true}}$ and $y_{\rm{true}}$, we observe that the approximation through the 3D-VAR solution at the parameter estimate $\mymui$ improves with increasing $\lambda$ and comes close to the best-fit errors $\mathcal{N}orm[L^2(\Gamma_{\rm{in}})]{\myProj{\mathcal{U}^{\perp}} u_{\rm{true}}} \approx $1.9877e-02 and $\mathcal{N}ormY{\myProj{\mathcal{Y}mu[\mu _{\rm{true}}]^{\perp}}y_{\rm{true}}} \approx$ 4.7576e-03, which resulted from limiting $\mathcal{U}$ to $\mathbb{P}_3(\Gamma_{\rm{in}})$. Note that even though the parameter estimate for $i=3$ becomes slightly worse when changing from $\lambda = 10^2$ to $10^3$, the approximation of $u_{\rm{true}}$ and $y_{\rm{true}}$ still improves.
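
The outer minimization \eqref{eq:min:J} was carried out with MATLAB's \texttt{fminsearch}; as an illustration only, an analogous sketch using SciPy's gradient-free Nelder--Mead implementation might read as follows. The function \texttt{solve\_3dvar} is a placeholder for a (truth or RB) 3D-VAR solver returning the squared misfit norm for the current parameter, and the box constraint $\mu \in \mathcal{C}$ is omitted for brevity.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def estimate_parameter(J, mu0=(1.0, 1.0)):
    # Gradient-free simplex search, analogous to MATLAB's fminsearch.
    res = minimize(lambda mu: J(np.asarray(mu)), x0=np.asarray(mu0),
                   method="Nelder-Mead",
                   options={"xatol": 1e-12, "fatol": 1e-12})
    return res.x

def make_J3(solve_3dvar, lam):
    # Cost functional of type J_3: each evaluation solves one
    # 3D-VAR problem for the current parameter mu.
    def J3(mu):
        misfit_normsq = solve_3dvar(mu, lam)   # placeholder solver
        return 0.5 * misfit_normsq
    return J3
\end{verbatim}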
\begin{table} \caption{ The parameter estimates obtained from solving the minimizations \eqref{eq:min:J} for different $\lambda$, the logarithmic distance to the true parameter $\mu _{\rm{true}}$, the number of function evaluations, and the approximation quality of $u_{\rm{true}}$ and $y_{\rm{true}}$ for $\mu = \mymui$ through $u_{\mu}:=u_{\rm{start}}+\myu$ and $\myy$. } \label{table:muapprox:truth} \begin{tabular}{cc|cccccc} \hline\noalign{ } $i$ & $\lambda$ & $(\mymui)_1$ & $(\mymui)_2$& log. dist. & eval. & $\mathcal{N}ormU{u_{\rm{true}} - u_{\mymui}}$ & $\mathcal{N}ormY{y_{\rm{true}}-\myy[\mymui]}$ \\ \noalign{ }\hline\noalign{ } 1 & $10^0$ & 4.7545 & 0.2173 & 2.1878e-01 & 235 & 4.5524e-01 & 2.3916e-01 \\ & $10^1$ & 6.0571 & 0.2666 & 8.1061e-02 & 224 & 1.9754e-01 & 9.8988e-02 \\ & $10^2$ & 6.8618 & 0.2952 & 1.1115e-02 & 219 & 3.5157e-02 & 1.5374e-02 \\ & $10^3$ & 6.9896 & 0.2997 & 8.1349e-04 & 226 & 2.1062e-02 & 5.0379e-03 \\ \noalign{ }\hline\noalign{ } 2 & $10^0$ & 3.2061 & 0.2370 & 3.5420e-01 & 228 & 4.0448e-01 & 2.7434e-01 \\ & $10^1$ & 5.6337 & 0.2928 & 9.4896e-02 & 225 & 1.1741e-01 & 7.9138e-02 \\ & $10^2$ & 6.8335 & 0.2997 & 1.0461e-02 & 243 & 2.6032e-02 & 1.1198e-02 \\ & $10^3$ & 6.9876 & 0.3001 & 7.9396e-04 & 238 & 2.1109e-02 & 5.0631e-03 \\ \noalign{ }\hline\noalign{ } 3 & $10^0$ & 5.6402 & 0.2513 & 1.2134e-01 & 226 & 3.4210e-01 & 1.9846e-01 \\ & $10^1$ & 6.9380 & 0.2980 & 4.8032e-03 & 232 & 1.0528e-01 & 5.2142e-02 \\ & $10^2$ & 7.0038 & 0.3002 & 3.3277e-04 & 235 & 2.5843e-02 & 9.1612e-03 \\ & $10^3$ & 7.0047 & 0.3002 & 3.9092e-04 & 228 & 2.1138e-02 & 5.0527e-03 \\ \noalign{ }\hline \end{tabular} \end{table} To speed up the parameter estimation, we replace the truth 3D-VAR problem with its RB approximation and consider the ``reduced'' cost functionals \begin{equation}\label{eq:J:RB} \begin{aligned} \myJeinsRB(\mu) &:= \frac12 \mathcal{N}ormU{\myuRB(\mu)}^2 + \frac{\lambda}{2} \mathcal{N}ormY{\mydRB(\mu)}^2 \quad \qquad \myJdreiRB(\mu) := \frac12 \mathcal{N}ormY{\mydRB(\mu)}^2 \\ \myJzweiRB(\mu) &:= \frac12 \mathcal{N}orm[2]{\mathbf{v}_{\rm{d}}-(g_l(\myyRB(\mu)))_{l=1}^{16}}^2 \\ \end{aligned} \end{equation} where, for $\mu \in \mathcal{C}$, $(\myuRB,\myyRB,\mywRB)\in \mathcal{U}RB \times \mathcal{Y}RB \times \mathcal{Y}RB$ is the RB 3D-VAR solution and $\mydRB = y_{\rm{d}}-\myProj{\mathcal{T}}\myyRB$ is the corresponding misfit. We then take \begin{align} \mymuiRB &~\in~ \text{arg} ~ \min _{\mu \in \mathcal{C}} \myJiRB (\mu) && i \in \{1,2,3\}, \label{eq:min:JRB} \end{align} as the RB parameter estimate. The distance between $\mymuiRB$ and $\mymui$ using noise-free data, as well as the computational time for the RB parameter estimate and the corresponding speedup compared to using \eqref{eq:J:truth}, are listed in Table \ref{table:muapprox:noise}(a). The approximation of $\mymui$ through $\mymuiRB$ was very precise, with a maximal logarithmic distance of 3.3e-08 in the parameter domain. In each case, the minimization finished within 1.1 s, \blue{resulting in} speedup factors between 1,498 and 1,680 compared to \blue{the truth evaluation} \eqref{eq:J:truth} \blue{requiring 25--28 min per parameter estimate}.
\blue{ The speedup thus justifies the initial offline cost of 7.7 min for the RB space generation, especially when considering that the RB spaces need only be generated once and can then be used ({\it i}) for all combinations of $\lambda$ and $i$ in \eqref{eq:min:JRB} (and also for other cost functions), thereby allowing for conclusions to be drawn from the comparison of multiple parameter estimates; and ({\it ii}) for repeated parameter estimation with different measurement data. } \begin{table} \caption{ (a) Parameter estimation with the RB 3D-VAR method for noise-free data: Distance to the truth parameter estimate $\mymui$ in the logarithmic parameter plane, the computational time, and the speedup compared to \eqref{eq:J:truth}. (b) Minimum, mean and maximum distance on the logarithmic parameter plane between RB parameter estimates obtained from noisy data, and the unbiased RB parameter estimate.} \label{table:muapprox:noise} \begin{tabular}{cc|ccc|ccc} \hline\noalign{ } ~ & ~ & \multicolumn{3}{c}{(a)} & \multicolumn{3}{c}{(b)} \\ ~ & ~ & \multicolumn{3}{c}{noise-free data} & \multicolumn{3}{c}{noisy data (log. dist. to noise-free $\mymuiRB$)} \\ $i$ & $\lambda$ & dist. to $\mymui$ & time [s] & speedup & min & mean & max \\ \noalign{ }\hline\noalign{ } 1 & $10^0$ & 3.2603e-08 & 1.1017 & 1498.8 & 9.5939e-04 & 2.1314e-02 & 8.1847e-02 \\ & $10^1$ & 4.6456e-09 & 1.0347 & 1527.1 & 1.8072e-03 & 2.4407e-02 & 9.0962e-02 \\ & $10^2$ & 7.4937e-09 & 1.0147 & 1489.7 & 2.0184e-03 & 2.6270e-02 & 9.5310e-02 \\ & $10^3$ & 6.7160e-10 & 1.0178 & 1576.8 & 1.7981e-03 & 2.6575e-02 & 9.5909e-02 \\ \noalign{ }\hline\noalign{ } 2 & $10^0$ & 4.5371e-09 & 0.9358 & 1679.5 & 7.2490e-04 & 1.3379e-02 & 4.4470e-02 \\ & $10^1$ & 1.4731e-08 & 0.9443 & 1662.7 & 1.6227e-03 & 2.0544e-02 & 6.9932e-02 \\ & $10^2$ & 5.5204e-09 & 1.0675 & 1608.7 & 2.7511e-03 & 2.4773e-02 & 8.4723e-02 \\ & $10^3$ & 2.9233e-09 & 1.0386 & 1587.1 & 2.9854e-03 & 2.5363e-02 & 8.6687e-02 \\ \noalign{ }\hline\noalign{ } 3 & $10^0$ & 2.1966e-08 & 0.9785 & 1591.7 & 1.0941e-03 & 2.3011e-02 & 8.7283e-02 \\ & $10^1$ & 1.3851e-08 & 0.9963 & 1666.6 & 2.4658e-03 & 2.6271e-02 & 9.5628e-02 \\ & $10^2$ & 2.9369e-09 & 0.9434 & 1710.8 & 1.7993e-03 & 2.6604e-02 & 9.5979e-02 \\ & $10^3$ & 2.9860e-09 & 0.9914 & 1613.8 & 1.7759e-03 & 2.6611e-02 & 9.5978e-02 \\ \noalign{ }\hline \end{tabular} \end{table} Given the precise approximation quality, we then use the RB method to obtain parameter estimates from noisy data for 100 different noise vectors. Table \ref{table:muapprox:noise}(b) provides the minimum, mean and maximum distance between the RB parameter estimates $\mymuiRB$ obtained from noisy data and those obtained from noise-free data. We observe that the noise influenced the parameter estimate to a similar extent independently of $\lambda$ and $i$. Further analysis showed that there was more deviation in the first component $(\mymuiRB)_1$ than in $(\mymuiRB)_2$, and that the mean over the biased parameter estimates converged to the noise-free $\mymuiRB$ as the number of samples increased. \section{Conclusion} In this paper, we have proposed and analyzed a data-weak variant of the 3D-VAR method for parametrized PDEs. Through a data-informed perturbation of the model, the method generates an intermediate state between the best-knowledge model solution and the observation in the measurement space. We have reformulated the 3D-VAR method as a saddle-point problem and performed a stability analysis to reveal the relationship between the 3D-VAR method, the model, and the measurement space.
In particular, we showed a necessary and sufficient condition on the design of the measurement space that results in improved stability and the mitigation of noise amplification. This condition can be used in the modelling process and in the experimental design for the selection of suitable model modifications and measurement functionals. For an efficient computation in a parametrized real-time or many-query setting, a certified RB method was introduced. We developed {\it a posteriori} error bounds for the computationally efficient approximation of the error in the model modification, state, adjoint, and observable misfit between the truth and the RB solution. We proposed a greedy-OMP algorithm for choosing the measurement space and a construction of the RB spaces which does not require the measurement data to be known {\it a priori}. We presented numerical results for parameter and state estimation for a steady heat conduction problem with uncertain parameters and an unknown Neumann boundary condition. The numerical results confirm the validity of our approach as well as the theoretical findings. \end{document}
\begin{document} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \theoremstyle{definition} \newtheorem{remark}[theorem]{Remark} \newtheorem{example}[theorem]{Example} \title{Complements of tori and Klein bottles in the\\4-sphere that have hyperbolic structure} \shorttitle{Complements of tori and Klein bottles with hyperbolic structure} \author{Dubravko Ivan\v si\'c\\John G. Ratcliffe\\Steven T. Tschantz} \asciiauthors{Dubravko Ivansic, John G. Ratcliffe and Steven T. Tschantz} \shortauthors{Ivan\v si\'c, Ratcliffe and Tschantz} \coverauthors{Dubravko Ivan\noexpand\v si\noexpand\'c\\John G. Ratcliffe\\Steven T. Tschantz} \address{{\rm DI:}\qua Department of Mathematics and Statistics, Murray State University\\Murray, KY 42071, USA\\ \\{\rm JR and ST:}\qua Department of Mathematics, Vanderbilt University\\Nashville, TN 37240, USA} \asciiaddress{DI: Department of Mathematics and Statistics\\Murray State University, Murray, KY 42071, USA\\JR and ST: Department of Mathematics, Vanderbilt University\\Nashville, TN 37240, USA} \asciiemail{[email protected], [email protected], [email protected]} \gtemail{\mailto{[email protected]}, \mailto{[email protected]}, \mailto{[email protected]}} \begin{abstract} Many noncompact hyperbolic 3-manifolds are topologically\break complements of links in the 3-sphere. Generalizing to dimension 4, we construct a dozen examples of noncompact hyperbolic 4-manifolds, all of which are topologically complements of varying numbers of tori and Klein bottles in the 4-sphere. Finite covers of some of those manifolds are then shown to be complements of tori and Klein bottles in other simply-connected closed 4-manifolds. All the examples are based on a construction of Ratcliffe and Tschantz, who produced 1171 noncompact hyperbolic 4-manifolds of minimal volume. Our examples are finite covers of some of those manifolds. \end{abstract} \asciiabstract{ Many noncompact hyperbolic 3-manifolds are topologically complements of links in the 3-sphere. Generalizing to dimension 4, we construct a dozen examples of noncompact hyperbolic 4-manifolds, all of which are topologically complements of varying numbers of tori and Klein bottles in the 4-sphere. Finite covers of some of those manifolds are then shown to be complements of tori and Klein bottles in other simply-connected closed 4-manifolds. All the examples are based on a construction of Ratcliffe and Tschantz, who produced 1171 noncompact hyperbolic 4-manifolds of minimal volume. Our examples are finite covers of some of those manifolds.} \primaryclass{57M50, 57Q45} \keywords{Hyperbolic 4-manifolds, links in the 4-sphere, links in simply-connected closed 4-manifolds} \maketitle \section{Introduction} \label{introduction} Let $\hn$ be the $n$-dimensional hyperbolic space and let $G$ be a discrete subgroup of $\Isom\hn$, the isometries of $\hn$. If $G$ is torsion-free, then $M=\hn/G$ is a hyperbolic manifold of dimension $n$. A hyperbolic manifold of interest in this paper is, in addition, complete, noncompact and of finite volume; the term ``hyperbolic manifold'' will be understood to include those additional properties. Such a manifold $M$ is the interior of a compact manifold with boundary $\mbar$. Every boundary component $E$ of $\mbar$ is a compact flat (Euclidean) manifold, i.e. a manifold of the form $\rnm/K$, where $K$ is a discrete subgroup of $\Isom\rnm$, the isometries of $\rnm$.
For simplicity, we sometimes inaccurately call $E$ a boundary component of $M$ (rather than $\mbar$, whose boundary component it really is). When $n=3$, $M$ is the interior of a compact 3-manifold $\mbar$ whose boundary consists of tori and Klein bottles. When all the boundary components are tori, $M$ may be diffeomorphic to $S^3$ with several closed solid tori removed, which in turn is diffeomorphic to $S^3-A$, where $A$ is the link consisting of the core circles of the solid tori. Indeed, it is a well known fact that many hyperbolic 3-manifolds are link complements in $S^3$. The first example of a generalization of this situation to dimension 4 was given by Ivan\v si\'c~\cite{Ivansic3}, where it was shown that $\tilde M_{1011}$, the orientable double cover of one of the 1049 nonorientable hyperbolic 4-manifolds that Ratcliffe and Tschantz constructed \cite{Ratcliffe-Tschantz}, is a complement of 5 tori in $S^4$. Finite covers of $\tilde M_{1011}$ were also shown to be complements of a collection of tori in a simply-connected 4-manifold with even Euler characteristic. In this paper, we find 11 additional examples of hyperbolic 4-manifolds that are complements in $S^4$ of a set of tori or a combination of tori and Klein bottles. The list of examples is in Theorem~\ref{mainthm}. Like $\tilde M_{1011}$, all of them are orientable double covers of some of Ratcliffe and Tschantz's nonorientable manifolds. Proving that those manifolds are complements in $S^4$ is typically trickier than in \cite{Ivansic3}, since the many symmetries of $M_{1011}$ enabled an easier computation. By taking finite covers of some of the link complements we also find additional examples of hyperbolic 4-manifolds that are complements of a collection of tori and Klein bottles in a simply-connected 4-manifold with even Euler characteristic. Example~\ref{m1091supp} displays 8 different families of examples, illustrating the richness of such objects, many more of which likely exist. Manifolds $\tilde M_{1091}$ and $\tilde M_{1011}$ were used by Ratcliffe and Tschantz~\cite{Ratcliffe-Tschantz2} to construct the first examples (infinitely many) of aspherical homology 4-spheres. Three other examples in this paper can be used for the same purpose, see Example~\ref{aspherical}. When looking for examples of link complements in $S^4$ among the orientable double covers of the 1049 nonorientable manifolds, the main difficulty is to reduce the number of potential examples with which to experiment. We use a homological and a group-theoretical criterion (see \S\ref{examples}) to rule out all except 49 manifolds from having the desired property. Then we show that some of the remaining 49 manifolds have orientable double covers that are complements of tori and Klein bottles in the 4-sphere. Essentially, we use group presentations to show that the manifold $N$ resulting from closing up the boundary components of the double covers is simply-connected. This, along with the fact that the Euler characteristic $\chi(N)$ is~2, guarantees that $N$ is homeomorphic to $S^4$. Ratcliffe and Tschantz constructed their manifolds by side-pairings of a hyperbolic noncompact right-angled 24-sided polyhedron, making computations with them laborious, especially if they have to be done repeatedly. 
The two criteria were checked using a computer, but we emphasize that the results of this paper do not depend on a computer calculation, since it is verified by hand that some of the ``promising'' examples indeed have a double cover that is a complement in the 4-sphere. \section{Some preliminary facts} \label{prelims} Let $M$ be a hyperbolic $n$-manifold. We say that $M$ {\it is a (codimension-2) complement in} $N$ if $M=N-A$, where $N$ is a closed $n$-manifold and $A$ is a closed $(n-2)$-submanifold of $N$ that has a tubular neighborhood in $N$ and has as many components as $\bd\mbar$. Then every component $E$ of $\bd\mbar$, being the boundary of a tubular neighborhood, must be an $S^1$-bundle over a component of $A$. The $(n-1)$-manifold $E$ is flat. The following theorem (see \cite{Ivansic2} or \cite{Apanasov} Theorem~6.40) summarizes when a flat manifold is an $S^1$-bundle and, when it is, what its $S^1$-fibers are. \begin{theorem} \label{bundlesref} Let $E=\rnm/K$ be a compact flat $(n-1)$-manifold, where $K$ is a discrete subgroup of $\Isom\rnm$. Then $E$ is an $S^1$-bundle over a manifold $B$ if and only if there exists a translation $t\in K$ such that $\left<t\right>$ is a normal subgroup of $K$ and $t$ is not a power of any element of $K$ other than $t^{\pm 1}$. If the translation is given by $x\mapsto x+v$ and any element of~$K$ by $x\mapsto Ax+a$, where $A\in O(n-1)$, $a\in\rnm$, the normality of $\left<t\right>$ can be expressed as $Av=\pm v$ for every $A$ such that $Ax+a$ is an element of $K$. Furthermore, if $n-1\ne 4,5$ (we are interested in $n-1=3$ in this paper), the manifold $B$ above is a flat manifold. \qed \end{theorem} We say that a translation $t$ is {\it normal in $E$} if $\left<t\right>$ is a normal subgroup, and we say it is {\it primitive in $E$} if it is not a power of any other element of $K$. By Theorem~\ref{bundlesref}, if $M$ is a hyperbolic 4-manifold that is a complement in $N$, it must be a complement of flat 2-manifolds, that is, tori and Klein bottles. We are interested in the case when $N=S^4$. When $M=N-A$, an immediate consequence of the fact that components of $A$ are flat is that $\chi(M)=\chi(N)$, so examples of complements in $S^4$ must be searched for among manifolds with $\chi(M)=2$. Nonorientable examples of Ratcliffe and Tschantz are very suitable for this purpose, since their Euler characteristic is 1: passing to the orientable double cover will give us the required Euler characteristic. Once we have found a hyperbolic manifold $M$ all of whose boundary components are \makebox{$S^1$-bundles}, we have $M=N-A$, where $N$ is the closed manifold obtained by gluing the corresponding disc bundles to $\mbar$ and $A$ is the union of their zero-sections. To show that $N$ is a topological $S^4$, it suffices to show that $\pi_1 N=1$ and $\chi(N)=2$ (see \cite{Gompf-Stipsicz} or \cite{Freedman-Quinn}). The following application of van Kampen's theorem shows how to compute $\pi_1 N$ from $\pi_1 M$; details are in \cite{Ivansic3}. \begin{proposition} \label{presentationofN} Let $M$ be a complement in $N$ and let $\bd\mbar=E_1\cup\dots\cup E_m$. If $t_1,\dots,t_m\in \pi_1 M$ represent fibers of the $S^1$-bundles $E_1,\dots,E_m$ then $\pi_1 N = \pi_1 M/ \ncl t_1,\dots,t_m \ncr$, where $\ncl A \ncr$ denotes the normal closure of a subset $A\subset \pi_1 M$. In other words, if we have a presentation for $M$, the presentation for $N$ is obtained by adding relations $t_1=1,\dots,t_m=1$. \qed \end{proposition} Hyperbolic manifolds $M$ of interest in this paper are given by side-pairings of a hyperbolic finite-volume noncompact polyhedron $Q$.
Side-pairings determine sets of points on the polyhedron, called {\it cycles}, that are identified in the manifold. Boundary components of $\mbar$ correspond to cycles of ideal vertices of $Q$. If $[v_i]$ is the cycle of an ideal vertex corresponding to $E_i$, then $\stab v_i$ is isomorphic to $\pi_1 E_i$, making the inclusion $\pi_1 E_i\to\pi_1 M$ injective. Elements of $\pi_1 E_i$ are parabolic isometries; thus, they preserve horospheres centered at $v_i$ and act like Euclidean isometries on them. Let $\Gamma$ be a discrete subgroup of $\Isom\hn$ with fundamental polyhedron $P$. Then $\Gamma$ is generated by side-pairings of $P$, isometries that send a side of $P$ to another side of $P$. If $s$ is an isometry sending side $S$ of $P$ to $S'$, then, if we start moving from $P$ and pass through~$S$, we wind up in $s^{-1}P$. Similarly, if we take a path from $P$ to $\gamma P$ that passes through interiors of translates of sides $S_1,\dots,S_m$ whose side-pairings are $s_1,\dots,s_m$, then $\gamma=s_1^{-1}\dots s_m^{-1}$. Furthermore, if $G$ is a finite-index subgroup of $\Gamma$ with transversal $X$ (that is, a set of right-coset representatives, so $\Gamma=\cup_{x\in X} Gx$), then the fundamental polyhedron of $G$ is $Q=\cup_{x\in X} xP$, where elements of $X$ can be chosen so that $Q$ is connected. The fundamental groups $G$ of Ratcliffe and Tschantz's manifolds are all minimal-index torsion-free subgroups of a certain reflection group $\Gamma$, which is generated by reflections in the sides of a 10-sided right-angled polyhedron $P$. The side-pairings generating $\Gamma$ are, of course, reflections in the sides of $P$. Every subgroup $G$ corresponding to a Ratcliffe-Tschantz manifold has the same transversal in $\Gamma$, a finite group $K$ isomorphic to $\z_2^4$ and generated by reflections in the four coordinate planes in the ball model of $\hiv$. The fundamental polyhedron of $G$ is $Q=KP$, a 24-sided regular right-angled hyperbolic polyhedron that we call the {\it 24-cell}. Let $f$ be a side-pairing of $Q$ that sends side $R$ to side $R'$. As was shown in \cite{Ratcliffe-Tschantz} and \cite{Ivansic3}, $f$ has to have the form $f=kr=r'k$, where $k\in K$ is an element such that $k(R)=R'$, and $r$, $r'$ are reflections in $R$, $R'$, respectively. We will call $k$ the {\it $K$-part of $f$}. Thus, it is enough to specify elements $k\in K$ in order to define a side-pairing of $Q$. As is clear from Theorem~\ref{bundlesref} and Proposition~\ref{presentationofN}, we will need the ability to recognize when a composite of side-pairings is a translation. The following proposition, proved in \cite{Ivansic3}, will be used for that purpose: \begin{proposition} \label{moving} Let $\Gamma$ be generated by reflections in the sides of a polyhedron $P$ and let $G$ be a finite-index subgroup of $\Gamma$ so that its transversal is a finite group $K$, making $Q=KP$ the fundamental polyhedron of $G$. If a path from $Q$ to $gQ$ passes through translates of sides $R_1,\dots,R_m$ of $Q$, then $g$ can be written as $g=(r_m\dots r_1)(k_1^{-1}\dots k_m^{-1})$, where $r_i$ is a reflection in a translate of $R_i$ and $k_i\in K$ is the $K$-part of the side-pairing of $R_i$. \qed \end{proposition} In this paper we work with presentations of groups and their subgroups. If a group $G$ is given by presentation $\left<Y\,|\, R\right>$, and $H$ is a finite-index subgroup of $G$ with Schreier transversal~$X$ (i.e.
$G=\cup_{x\in X} Hx$ and every initial segment of an element $x\in X$ is also an element of $X$), then the presentation of $H$ can be derived from the presentation of $G$. Let $g\mapsto \overline g$ be the map $G\to X$ sending $g$ to its coset representative, that is $Hg=H\overline g$, and let $\gamma:G\to H$ be the map $\gamma(g)=g{\overline g}^{-1}$. \begin{proposition}[The Reidemeister-Schreier method, \cite{Lyndon-Schupp} Proposition 4.1] \label{rs} The presentation of~$H$ is given by $\left<Z\, | \, S\right>$, where $Z=\{\gamma(xy)\ne 1\ | \ x\in X, y\in Y\}$ and $S$ is the set of all words $xrx^{-1}$ written in terms of elements of $Z$, for all $x\in X$ and $r\in R$. \qed \end{proposition} \section{The 24-cell, its side-pairings and cycle relations} \label{24cell} Working in the ball model of $\hiv$, let $S_{****}$ be one of the 24 spheres of radius 1 centered at a point in $\riv$ whose coordinates have two zeroes and two $\pm 1$'s. The string $*$$*$$*$$*$ consists of symbols $+$,$-$,0 and identifies the sphere by its center. For example, $S_{+0-0}$ is the sphere centered at $(1,0,-1,0)$. These spheres are orthogonal to $\bd\hiv$, hence they determine hyperplanes in $\hiv$. The 24-sided polyhedron $Q$ is defined to be the intersection of the half-spaces that are bounded by those hyperplanes and contain the origin. The side of $Q$ lying on the sphere $S_{****}$ is also denoted by $S_{****}$. All the dihedral angles of $Q$ are $\pi/2$ and its 24 ideal vertices are $v_{\pm000}=(\pm 1,0,0,0)$, $v_{0\pm00}=(0,\pm 1,0,0)$, $v_{00\pm0}=(0,0,\pm 1,0)$, $v_{000\pm}=(0,0,0,\pm 1)$ and $v_{\pm\pm\pm\pm}=(\pm 1/2,\pm 1/2,\pm 1/2,\pm 1/2)$. The polyhedron $Q$ has no real vertices. Note that two sides intersect if and only if their identifying strings have equal nonzero entries in one position and the positions of the remaining nonzero entries are different. Two sides touch at $\bd\hiv$ when they have equal nonzero entries in one position and opposite nonzero entries in another position, or if the positions where they have nonzero entries are complementary. Furthermore, an ideal vertex is on a side if its Euclidean distance from the center of the sphere defining the side is 1. In the case of vertices whose label has one nonzero entry, a vertex is on a side if and only if the side has an equal nonzero entry in the same position as the vertex; in the case of vertices $v_{\pm\pm\pm\pm}$, a vertex is on a side if and only if the nonzero entries in the label of the side coincide with the entries in the label of the vertex in the same positions. For example, $S_{0-+0}$ and $S_{+-00}$ intersect, $S_{0-+0}$ and $S_{0+0+}$ are disjoint, the side $S_{0+0-}$ has six ideal vertices $v_{0+00}$, $v_{000-}$, $v_{\pm+\pm-}$; $S_{0-+0}$ and $S_{0--0}$ touch at $v_{0-00}$, and $S_{0-+0}$ and $S_{-00-}$ touch at $v_{--+-}$. As mentioned in \S\ref{prelims}, every Ratcliffe-Tschantz example has side-pairing isometries of the form~$rk$, where $k\in\z_2^4$ is a composite of reflections in the coordinate planes \mbox{$x_1=0,\dots,x_4=0$}. The label $\pm$$\pm$$\pm$$\pm$ identifies an element of $K$; for example, $k_{+--+}$ is the composite of reflections in $x_2$ and $x_3$. Ratcliffe and Tschantz have found in \cite{Ratcliffe-Tschantz} that all sides whose labels have nonzero entries in the same two positions must have the same $K$-part for their side-pairing.
\rk{Side-pairing labeling convention} Group the letters $a,\dots,l$ and the sides of $Q$ as follows: \begin{center} \begin{tabular}{c} $\{a,b,S_{\pm\pm00}\}$, $\{c,d,S_{\pm0\pm0}\}$, $\{e,f,S_{0\pm\pm0}\}$,\\ $\{g,h,S_{\pm00\pm}\}$, $\{i,j,S_{0\pm0\pm}\}$, $\{k,l,S_{00\pm\pm}\}$. \end{tabular} \end{center} The two letters in each group denote the side-pairings of the four sides from the group. The first letter always pairs the side labeled $++$, and the second letter pairs the side whose label is the next unused one, where the symbols $\pm\pm$ have been ordered in the dictionary order: $++$, $+-$, $-+$, $--$. The encoding of the side-pairing transformation is done as in \cite{Ratcliffe-Tschantz}. A string of six characters from $\{1,\dots,9, A,\dots,F\}$, one for each group above, encodes the $K$-part of each side-pairing. Each character stands for a hexadecimal number, which is turned into binary form, and the {\it order of digits is reversed}. Converting every 0 into a plus and every 1 into a minus yields the label of $k$. For example, $1$ stands for $k_{-+++}$; $C=12$ stands for $k_{++--}$. (See Example~\ref{m56}.) The presentation of a group $G$ generated by side-pairings of a polyhedron is obtained by applying Poincar\'e's polyhedron theorem (\cite{Ratcliffe} Theorem 11.2.2). Finding presentations of the groups and identifying usable translations in parabolic groups are crucial to our proof of Theorem~\ref{mainthm}. An efficient method for these two tasks is essential if one is to deal with a dozen examples, so we elaborate on it. Let $C$ be a small enough horosphere centered at $v_{++++}$ that intersects only the sides of~$Q$ that contain $v_{++++}$. Then $Q_C=C\cap Q$ is a cube with opposing sides \mbox{$S_{++00}$ and $S_{00++}$}, \mbox{$S_{+0+0}$ and $S_{0+0+}$}, $S_{0++0}$ and $S_{+00+}$ (see \figref{cube}). The tiling of $\hiv$ by translates of~$Q$ intersected with $C$ gives a tiling of $C$ by cubes, one of which is $Q_C$. \begin{figure} \caption{The cube $Q_C$ when $C$ is centered at $v_{++++}$} \label{cube} \end{figure} We use the term {\it ridge} to denote a codimension-2 face of a polyhedron, while an {\it edge} is a 1-face. Every edge of the cube $Q_C$ represents a ridge of $Q$. Circling around the edge once gives a cycle relation for the presentation in Poincar\'e's theorem. To easily determine which sides we pass through, use \figref{ridgecycles}. The left column depicts the intersections of the tiling of $C$ with the three planes that pass through the center of $Q_C$ and are parallel to its sides. The side-pairings in that figure are the ones for manifold $M_{56}$ from Example~\ref{m56}. Let $s$ be the side-pairing sending $S$ to $S'$, $r_S$ the reflection in $S$ and let $k_S$ be the $K$-part of $s$. Consider a ridge $U\cap S$, where $U$ is another side of $Q$. The ridge $U\cap S$ is sent via $s$ to a ridge $V\cap S' $, where $V$ is another side of~$Q$. Now $V\cap S'=k_S r_S (U\cap S) =k_S (U\cap S)=k_S(U)\cap S'$, so we conclude $V=k_S(U)$. Therefore, adjacent to $Q$ on the other side of $S$ is $s^{-1}Q$, whose sides $s^{-1}V$ and $s^{-1}S'$ meet at the ridge $U\cap S=s^{-1}(V\cap S')$. \begin{figure} \caption{Diagrams that produce cycle relations for $M_{56}$} \label{ridgecycles} \end{figure} The observation $V=k_S(U)$ allows us to easily find the labelings on all the struts in the diagrams in the left column of \figref{ridgecycles} (labelings inside the square come from $Q_C$). For example, consider the sides around the vertex labeled 1 in the topmost diagram.
The bottom left strut is the translate of the side $k_{S_{+00+}}(S_{0+0+})=k_{-++-}(S_{0+0+})=S_{0+0-}$, whose side-pairing is $j$. The left bottom strut is the translate of $k_{S_{0+0+}}(S_{+00+})=k_{--++}(S_{+00+})=S_{-00+}$, whose side-pairing is $h^{-1}$. Cycle relations are words obtained by circling an edge of $Q_C$ and successively adding the side-pairings at the end of the word. Care must be taken to add $s^{-1}$ at the end of the word whenever we {\it exit} through a side labeled $s$ and to add $s$ whenever we {\it enter} into a side labeled~$s$. Going in the direction of the arrow around vertex 1 we exit through $g$ and $j$ and then enter into $h^{-1}$ and $i$, so the cycle relation is $g^{-1}j^{-1}h^{-1}i=1$. We thus obtain 12 relations for $G$ from the first column of \figref{ridgecycles}. Because $Q$ has 96 ridges that fall into 24 ridge cycles, each corresponding to a relation, 12 further relations are needed. They are found by going around other edges in the tiling of $C$. If we reflect the three planes we used for sections of $Q_C$ in the sides of $Q_C$ parallel to them, we get new sections of the tiling represented by diagrams in the middle and right columns of \figref{ridgecycles}. The fact $V=k_S(U)$ from above is now used to see that the labelings on the new diagrams are obtained by applying $k_S$ to every label in the original diagram, where $S$ is the side in which the section plane was reflected. The middle and right columns do not include struts because cycle relations follow from only the labels on the square part and the cycle relations for the left column. If $A=\{S_1,kS_1,S_2,kS_2\}$ is the set of four sides whose labels all have the same nonzero positions, and are hence paired by the same element $k\in K$, we note that any element $k'\in K$ acts on $A$ and, due to commutativity of $K$, preserves or interchanges the subsets of $A$ containing paired sides. Along with the fact that $k'^2=1$ for every $k'\in K$, this means that the action of $k'$ on $A$ can be inferred just from knowing $k'S_1$. For example, in the third row of \figref{ridgecycles}, we see that by applying $k_{S_{0++0}}$ to the sides labeled $i$ and $a$ we get sides labeled $j$ and~$a^{-1}$. We infer that applying $k_{S_{0++0}}$ to sides labeled $j$ and $b$ gives sides labeled $i$ and $b^{-1}$. Thus, the cycle relation for vertex 2 in the new diagram is the cycle relation $i^{-1}b^{-1}ia$ for vertex~2 in the old diagram with letters replaced in the above pattern, obtaining $j^{-1}bja^{-1}$. Now, if a cycle relation coming from the new diagram is the inverse, a cyclic permutation or a combination of the two of a corresponding cycle relation in the old diagram, we have found the same ridge cycle, and have to reflect the section plane in the other side of the cube $Q_C$. We argue that the nine diagrams thus obtained will contain all the cycle relations (multiple times, in fact). Note that in $Q$ every ridge is an ideal triangle, two of whose ideal vertices are of type $v_{\pm\pm\pm\pm}$. For example, the ridge $R=S_{0+0+}\cap S_{0++0}$ is an ideal triangle with vertices $v_{0+00}$, $v_{++++}$ and $v_{-+++}$ and edges $E_1=S_{0+0+}\cap S_{0++0}\cap S_{00++}$, $E_2=S_{0+0+}\cap S_{0++0}\cap S_{++00}$, and $E_3=S_{0+0+}\cap S_{0++0}\cap S_{-+00}$. The intersections of translates of edges of $Q$ with $C$ are the vertices in the tiling of $C$. An edge of the cube $Q_C$ is an arc in the ideal triangle going from one edge of the triangle to another.
Because elements of $K$ do not change the nonzero positions in the labeling of sides, only $E_2$ and $E_3$ might be in the same cycle of edges. Thus, in the manifold $\hiv/G$ the ideal triangle might only have its edges $E_2$ and $E_3$ identified, since no points on $E_1$ are identified. If ridge $R$ and the ridge represented by reflection of $R\cap C$ in the side $S_{00++}$ of $Q_C$ were in the same cycle, the segment consisting of $R\cap C$ and its reflection would project to a smooth curve in the ideal triangle that crosses $E_1$ twice, which is impossible. Since in $Q$ there are only 8 ridges of type $S_{0\pm0\pm}\cap S_{0\pm\pm0}$ we conclude that every one of them is either in the cycle of $R$, or in the cycle of the ridge represented by the reflection of $C\cap R$ in $S_{00++}$. Reasoning similarly for all the other ridge cycles we see that the opportunity for cycle relation repetitions occurs only for ridges represented by vertices labeled 2 and 3 in the middle column. Often there are no repetitions: in Example~\ref{m56} we needed three section planes only in the first row. Now we describe how to identify and choose suitable translations in parabolic subgroups. Let $G_v$ be the parabolic group that is the stabilizer of an ideal vertex $v$ of $Q$, and let $C$ be a horosphere centered at $v$. The intersection $Q_C=Q\cap C$ is always a Euclidean cube. Position a coordinate system with origin in the center of $Q_C$ and axes perpendicular to its sides, and consider the tiling of $C$ by translates of $Q_C$. To simplify language, we identify a cube~$gQ_C$ and the element $g$. According to Proposition~\ref{moving}, by considering a path from $Q_C$ to~$gQ_C$ that passes only through sides of cubes (e.g. a path that is piecewise parallel to the coordinate axes), we may write $g=(r_m\dots r_1)(k_1^{-1}\dots k_m^{-1})$. Since $r_m\dots r_1$ preserves~$v$, $g\in G_v$ if and only if $k_1^{-1}\dots k_m^{-1}v=v$. \begin{figure} \caption{Translations in $8Q_C$ for manifold $M_{56}$} \label{trans1-4} \end{figure} Let $v=v_{+000}$, $v_{0+00}$, $v_{00+0}$ or $v_{000+}$. The coordinate axes perpendicular to sides of $Q_C$ may be identified with coordinate axes $x_1$, $x_2$, $x_3$ and $x_4$, in the arrangement suggested by \figref{trans1-4}. The 4-symbol labels on the axes indicate which side we pass through if we move from the center of $Q_C$ in the direction of the arrow. Parallel sides of each $Q_C$ have the same nonzero positions but differ in sign in positions other than the nonzero position of the vertex label (a side and a vertex have the same symbol in the nonzero position of the vertex label when the vertex is on the side). Moving two steps in the directions of the axes produces translations since $k_1=k_2$ in every case. Let $T_v$ be the subgroup of $G_v$ generated by those three translations and let $TG_v$ be the translation subgroup of $G_v$. For our purposes we look for translations only in $8Q_C$, the $2\times 2 \times 2$ block of cubes that is the fundamental polyhedron for~$T_v$. A Euclidean transformation $Ax+a$ is a translation if and only if its rotational part~$A$ is the identity. We get the rotational part of $r_m\dots r_1 k_1^{-1}\dots k_m^{-1}$ by replacing $r_i$ by an element~$k_{r_i}$ that represents the reflection in the coordinate plane parallel to the reflection plane of~$r_i$ and computing the product. Note that it is enough to find the rotational parts of the three cubes adjacent to $Q_C$ in the direction of the three axes.
Any other rotational part is obtained by multiplying those three: every time we move by one step in the direction of an axis, we multiply by the rotational part of one of the three cubes that corresponds to that axis. The dots in \figref{trans1-4} represent centers of cubes comprising~$8Q_C$; the dot is filled if it corresponds to an element preserving~$v$. Labels on the dots give the rotational part of the group element corresponding to the dot. The three zero positions in the vertex label give the rotational part of the transformation, while the nonzero position tells whether the transformation preserves $v$. For example, in the topmost diagram in \figref{trans1-4} the label $+$$+$$-$$+$ on a dot reveals an element that preserves $v$ whose rotational part is a reflection in the $x_3$-plane. The label $-$$-$$+$$+$ reveals that the corresponding element does not preserve~$v$. Arrows in the middle column indicate translations found in $8Q_C$; they correspond to dots labeled $+$$+$$+$$+$. The right column of the diagram computes the group elements corresponding to the dots of interest. This is done by considering which translates of sides we pass through as we go from~$Q_C$ to the dot. \begin{figure} \caption{Translations in $64Q_C$ for manifold $M_{56}$} \label{trans5} \end{figure} Let $v=v_{++++}$; then $Q_C$ is the cube from \figref{cube}. Here the condition $k_1^{-1}\dots k_m^{-1}v=v$ becomes $k_1^{-1}\dots k_m^{-1}=1$, since $K$ acts freely on the set of vertices $v_{\pm\pm\pm\pm}$. Moving four steps in the directions of the three axes produces translations because $k_1=k_3$, $k_2=k_4$ and $r_1\dots r_4$ is a composite of reflections in four parallel planes. Let $T_v$ be the subgroup of~$G_v$ generated by those three translations and $TG_v$ be the translation subgroup of $G_v$. For our purposes, we look for translations in $64Q_C$, the $4\times 4 \times 4$ block of cubes that is the fundamental polyhedron for $T_v$. It is clear that we get a translation if and only if we move an even number of steps in the direction of each axis, and have $k_1^{-1}\dots k_m^{-1}=1$. Therefore, we have to check $k_1^{-1}\dots k_m^{-1}=1$ for the 7 cubes obtained by moving $Q_C$ by a combination of two units in every direction. The dots in \figref{trans5} depict the centers of some cubes in~$64Q_C$, and the ones at the vertices of the cube are the interesting ones. Labels indicate the products of the $K$-parts, so a $+$$+$$+$$+$ label means the dot corresponds to a translation. At right we compute the group elements corresponding to the dots of interest. One of the translations is omitted, since it is the sum of the other two in the diagram. \section{Examples of ``link'' complements in the 4-sphere} \label{examples} In this section we show that 12 nonorientable Ratcliffe-Tschantz manifolds have orientable double covers that are complements of tori and Klein bottles in $S^4$. First of all, we need criteria to rule out manifolds that cannot have this property. The first criterion is: \begin{proposition} \label{crithom} Let $M$ be a complement of $r$ tori and $s$ Klein bottles in $S^4$. Then (coefficients are in $\z$): $$ H_1 M = \z^r \oplus \z_2^s,\ H_2 M =\z^{2r} \oplus \z^s,\ H_3 M= \z^{r+s-1}. $$ \end{proposition} \begin{proof} The proof is a simple application of Alexander's duality: if $A$ is a compact submanifold of $S^4$, then $\tilde H_i(S^4-A)\cong \tilde H^{3-i}(A)$.
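To spell the computation out, if $A$ is a disjoint union of $r$ tori and $s$ Klein bottles, then \begin{displaymath} \tilde H^0(A)=\z^{r+s-1},\qquad \tilde H^1(A)=\z^{2r}\oplus\z^s,\qquad \tilde H^2(A)=\z^r\oplus\z_2^s, \end{displaymath} the $\z_2$-summands coming from the nonorientability of the Klein bottles, and the duality isomorphism gives the groups in the statement.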
\end{proof} The other criterion was developed in \cite{Ivansic3} and exploits a homomorphism that exists for every group $G$ of a Ratcliffe-Tschantz manifold. It is defined by $\phi:G\to \z_2^6$, $a,b\mapsto e_1$, $c,d\mapsto e_2,\dots, k,l\mapsto e_6$, where $e_i$ is a canonical generator of $\z_2^6$. \begin{proposition} \label{critphi} Let $T$ be the set of all translations in $G$. If $\dim_{\z_2}\left<\phi(T)\right> <5$ then the orientable double cover of $M$ is not a complement in the 4-sphere. \qed \end{proposition} Using a computer to test the 1149 nonorientable Ratcliffe-Tschantz manifolds for both criteria, we found that only 49 remained eligible to have double-cover complements in $S^4$, namely, manifolds numbered \begin{displaymath} \begin{array}{rrrrrrrrrrrrr} 23& 25 & 28& 29& 32& 33& 35& 36& 40& 46& 47& 51& 56 \\ 60& 71 & 72 & 73 & 74& 80& 92& 93 & 94& 96& 109 & 112 &116\\ 121& 131& 132& 139& 170& 171& 231& 235& 236& 251& 292& 296& 297\\ 299&426& 427& 434& 435& 1011& 1091& 1092& 1094& 1095 &&& \end{array} \end{displaymath} If the double cover of any of these manifolds is a complement in $S^4$, the potential link structure is visible from its homology by Proposition~\ref{crithom}. The computer calculation showed that only twelve different link structures are possible. We then proceeded to prove (manually) that for each of those link structures there is at least one manifold from the above list whose orientable double cover is the complement of such a link in $S^4$. Those manifolds are listed in Theorem~\ref{mainthm}. Most likely, other manifolds from the list above have the same property, but it is not easy to ascertain that we will be getting examples different from the ones in Theorem~\ref{mainthm}, since orientable double covers of nonisometric manifolds may be isometric. Let $G$ be any group, $H$ a normal subgroup of $G$ and let $\ncl A\ncr_H$ be the normal closure of a set $A\subset H$ in $H$. If $G$ is the biggest group under consideration, $\ncl A \ncr$ or $\ncl A \ncr_G$ will denote the normal closure of $A$ in $G$. Clearly, $\ncl A\ncr_H$ is the set of all elements of the form $\prod h_i a_i h_i^{-1}$, where $h_i\in H$, $a_i\in A\cup A^{-1}$. If $X$ is a transversal of $H$ in $G$ it is easy to see that for a set $A\subset H$, $\ncl A \ncr_G=\ncl xAx^{-1},\ x\in X\ncr_H$. Note that $\ncl A \ncr_G$ is a subgroup of $H$, since $H$ is normal in $G$. While $\ncl A \ncr_H\subseteq \ncl A \ncr_G$, in general, $\ncl A \ncr_H \ne \ncl A \ncr_G$. For example, if $H=\left<a\right>*\left< b\right>$, $G=H\rtimes\z_2$, where $\z_2$ acts on $H$ by swapping the generators $a$ and $b$, then clearly $\ncl a \ncr_G=H$, while $\ncl a \ncr_H$ is a subgroup of $H$ with infinite index. The following proposition will prove useful. \begin{proposition} \label{normclosures} Let $H$ be a normal subgroup of $G$ with transversal $X$. Suppose a set $A$ has the property that $\ncl A\ncr_H=\ncl A\ncr_G$. Let $B\subset H$ have the property: for every $b\in B$ and every $x\in X$, $[xbx^{-1}]=[b^{\pm1}]$, where $[\ ]$ denotes classes in $G/\ncl A \ncr_G$. Then $\ncl A\cup B \ncr_H=\ncl A\cup B \ncr_G$. \end{proposition} \begin{proof} Elements of type $hxa(hx)^{-1}$ and $hxb(hx)^{-1}$ generate the subgroup $\ncl A\cup B\ncr_G$, where $a\in A\cup A^{-1}$, $b\in B\cup B^{-1}$, $x\in X$, $h\in H$. Now $\ncl xAx^{-1},\ x\in X \ncr_H=\ncl A \ncr_G=\ncl A \ncr_H$ implies that every element $xax^{-1}$ is a product of elements of type $hah^{-1}$, so every $hxa(hx)^{-1}=h(xax^{-1})h^{-1}$ has the same property. 
Since $[xbx^{-1}]=[b^{\pm1}]$ in $G/\ncl A \ncr_G$ means that $xbx^{-1}=b^{\pm1}u$, for some $u\in \ncl A \ncr_G=\ncl A \ncr_H$, $xbx^{-1}$ is a product of $b^{\pm1}$ and elements of type $hah^{-1}$, making $hxb(hx)^{-1}$ a product of elements of type $hbh^{-1}$ and $hah^{-1}$. Therefore, $\ncl A\cup B\ncr_G$ is generated by elements of type $hah^{-1}$ and $hbh^{-1}$, so $\ncl A\cup B\ncr_G=\ncl A\cup B\ncr_H$. \end{proof} \begin{theorem} \label{mainthm} The orientable double covers of the nonorientable Ratcliffe-Tschantz manifolds listed in \tableref{table1} are complements of the indicated combination of tori and Klein bottles in a manifold that is homeomorphic to the 4-sphere.\end{theorem} \begin{table}\small\anchor{table1} \begin{center} \begin{tabular}{l|l} The orientable double & is a complement \\ cover of manifold & in $S^4$ of:\\ \hline no.~1011 & \hskip5pt 5 tori \\ no.~71 & \hskip5pt 6 tori\\ no.~23 & \hskip5pt 7 tori\\ no.~1092 & \hskip5pt 8 tori\\ no.~1091 & \hskip5pt 9 tori \\ no.~231 & \hskip5pt 3 tori, 4 Klein bottles \\ no.~112& \hskip5pt 4 tori, 2 Klein bottles \\ no.~56& \hskip5pt 4 tori, 3 Klein bottles \\ no.~92 & \hskip5pt 5 tori, 1 Klein bottle\\ no.~51 & \hskip5pt 5 tori, 2 Klein bottles \\ no.~40& \hskip5pt 6 tori, 1 Klein bottle \\ no.~36 & \hskip5pt 6 tori, 2 Klein bottles\\ \end{tabular} \end{center} \nocolon\caption{}\label{table1} \end{table} \begin{remark} Note that Theorem~\ref{mainthm} claims that double covers of various Ratcliffe-Tschantz manifolds are complements in a manifold that is {\it homeomorphic} to the 4-sphere, rather than diffeomorphic. This is because we used Freedman's theory, applicable only to the topological category, to recognize a 4-sphere. It is still unknown whether manifolds homeomorphic to the 4-sphere are diffeomorphic to the $S^4$ with the standard differentiable structure. Using Kirby diagrams of a 4-manifold, Ivan\v si\'c has shown that the topological $S^4$ inside which $\tilde M_{1011}$ is a complement of 5 tori is indeed diffeomorphic to the standard differentiable $S^4$. We anticipate that the same is true for the examples treated here. \end{remark} \begin{proof} Let $M$ be a nonorientable Ratcliffe-Tschantz manifold, $\tilde M$ its orientable double cover, $G=\pi_1 M$, $H=\pi_1 \tilde M$. Clearly, $H$ is the index-2 subgroup of $G$ containing all the orientation preserving isometries. If we can find primitive normal translations $t_1,\dots,t_m$, one in each boundary component of $\tilde M$, then $\tilde M$ is a complement in a closed manifold $N$. As was shown in \cite{Ivansic2}, the Euler characteristics of $\tilde M$ and $N$ are equal. By Proposition~\ref{presentationofN} $\pi_1 N=H/\ncl t_1,\dots,t_m\ncr_H$. The classification of 4-dimensional simply-connected manifolds (see \cite{Freedman-Quinn} or \cite{Gompf-Stipsicz}) gives that if $\pi_1 N=1$ and $\chi(N)=2$, then $N$ is homeomorphic to $S^4$. Since $\chi(\tilde M)=2$, it will be enough to show that $H/\ncl t_1,\dots,t_m\ncr_H=1$. The presentation of $G$ has 12 generators and 24 relations so the presentation of the \mbox{index-2} subgroup $H$ obtained by the Reidemeister-Schreier method will have 23 generators and 48 relations, very tedious to compute with. Thus, the challenge is to show that $H/\ncl t_1,\dots,t_m\ncr_H=1$ while working with the presentation of $G$ as much as possible. The ten homeomorphism types of flat 3-manifolds are identified by the letters A--J, where the manifolds are ordered in the same way as in \cite{Hantzsche-Wendt} or \cite{Wolf}. 
Every nonorientable Ratcliffe-Tschantz manifold that passes both criteria from above has boundary components of type A, B, G, H, I or J. Type A is the 3-torus and type B is an (orientable) $S^1$-bundle over a Klein bottle. The nonorientable types G and H have type A as their orientable double cover and the nonorientable types I and J have type B as their orientable double cover. Holonomy groups of boundary components of $M$ are computed in the course of searching for suitable translations. They help determine whether chosen translations are normal in a boundary component and they distinguish types A, B, G or H, and I or J. Although not essential for our proof, types G and H can further be distinguished by the fact that at least one generator of the translation subgroup of a manifold of type H is not a normal translation. There is a criterion that distinguishes I and J as well. Note that orientable boundary components of $M$ lift to two homeomorphic boundary components in $\tilde M$ and nonorientable ones lift to their orientable double covers. Thus, knowing the boundary components of $M$ immediately gives the link structure that $\tilde M$ is a complement of (a torus or a Klein bottle for every component of type A or B, respectively). Translations in boundary components of $\tilde M$ have to be carefully chosen in order to be $S^1$-fibers, to produce $H/\ncl t_1,\dots,t_m\ncr_H=1$ and to allow us to work in $G$. It was shown in \cite{Ivansic3} that the homomorphism $\phi$ from Proposition~\ref{critphi} sends to 0 every subgroup $T_v$ in~\S\ref{24cell}, and that $H/\ncl t_1,\dots,t_m\ncr_H=1$ implies $\dim_{\z_2}\left<\phi(t_1),\dots,\phi(t_m)\right>\ge 5$. Therefore, one should choose translations in $8Q_C$ or $64Q_C$ whenever possible, since one risks otherwise that $\dim_{\z_2} \left<\phi(t_1),\dots,\phi(t_m)\right>$ falls short of 5. A translation of type $t=t'+t''$, where $t'\in T_v$ and $t''\in 8Q_C$ or $64Q_C$, is a possible choice, but we avoid such choices because their length in terms of the generators $a$--$l$ is large, possibly complicating the computation. Let $E$ be a boundary component of $M$ and $\tilde E$ a component of its lift in $\tilde M$. In order for $\tilde M$ to be a complement, we must choose a translation $t$ that is normal and primitive in $\tilde E$. Note that $t$ need not be normal in $E$, but it is very useful if it is when $E$ is nonorientable. For in that case, there is an orientation-reversing $x\in G$ so that $xtx^{-1}=t^{\pm 1}$, which by Proposition~\ref{normclosures} implies $\ncl A, t\ncr_H=\ncl A, t \ncr_G$ for any subset $A$ of $H$ satisfying $\ncl A \ncr_H=\ncl A\ncr_G$. This allows us to continue computing in $G$. We introduce a term for this property: we say that $t$ is {\it self-conjugate} if there exists an orientation-reversing $x\in G$ so that $xtx^{-1}=t^{\pm 1}$. Checking that $t$ is primitive in $\tilde E$ is easy because $\tilde E$ is either of type A or B. In the first case, $t$ must be the shortest translation in its direction; in the second, it must not point in the same direction as the axis of rotation by $\pi$ that generates the holonomy group. If $E$ is orientable, its lift has two components $\tilde E$ and $\tilde E'$, whose fundamental groups are conjugate by an orientation-reversing $x\in G$. Choosing a primitive normal translation $t$ in $\tilde E$ gives the obvious choice $xtx^{-1}$ for $\tilde E'$.
This choice works most of the time, but on occasion, like in Example~\ref{m36}, one must choose a different translation in $\tilde E'$ in order to satisfy the condition $\dim_{\z_2}\left<\phi(t_1),\dots,\phi(t_m)\right>\ge 5$. To prove the theorem, each example needs to be handled separately. We show three of the most difficult examples in decreasing detail and indicate how to proceed on others. \begin{example} \label{m56} The side-pairing of manifold $M_{56}$ is encoded by 13D935. According to our side-pairing convention, the side-pairings are (the $K$-parts are under the arrow): \begin{displaymath} \begin{array}{ll} S_{++00} \arrtop{a}{-+++} S_{-+00}\hskip10pt & S_{+-00} \arrtop{b}{-+++} S_{--00}\hskip10pt \\ \rule[18pt]{0pt}{0pt} S_{+0+0} \arrtop{c}{--++} S_{-0+0}\hskip10pt & S_{+0-0} \arrtop{d}{--++} S_{-0-0} \\ \rule[18pt]{0pt}{0pt} S_{0++0} \arrtop{e}{-+--} S_{0+-0}\hskip10pt & S_{0-+0} \arrtop{f}{-+--} S_{0--0}\hskip10pt \\ \rule[18pt]{0pt}{0pt} S_{+00+} \arrtop{g}{-++-} S_{-00-}\hskip10pt & S_{+00-} \arrtop{h}{-++-} S_{-00+} \\ \rule[18pt]{0pt}{0pt} S_{0+0+} \arrtop{i}{--++} S_{0-0+}\hskip10pt & S_{0+0-} \arrtop{j}{--++} S_{0-0-} \hskip10pt \\ \rule[18pt]{0pt}{0pt} S_{00++} \arrtop{k}{-+-+} S_{00-+}\hskip10pt & S_{00+-} \arrtop{l}{-+-+} S_{00--} \end{array} \end{displaymath} The orientation-reversing generators are $c$, $d$ and $g$--$l$ (those whose $K$-part has an even number of minuses in its label). We use diagrams in \figref{ridgecycles} to find the cycle relations as described in \S\ref{24cell}. The arrangement of groups of relators below corresponds to the arrangement of diagrams used to obtain them and the numbers 1--4 are labels of the vertices in the diagrams corresponding to each cycle relation. The cycle relations marked by $\bullet$ are repetitions of those from the left column, so we reflect the section plane in $S_{00++}$ and obtain the two new relations in the right column. \begin{displaymath} \begin{array}{rrr} \begin{array}{rr} 1)& g^{-1}j^{-1}h^{-1}i=1 \\ 2)& g^{-1}ch^{-1}c=1\\ 3)& e^{-1}j^{-1}fi= 1\\ 4)& e^{-1}dfc=1 \end{array} & \begin{array}{r} hj^{-1}gi=1 \\ \bullet\ hc^{-1}gc^{-1}=1\\ \bullet\ e^{-1}j^{-1}fi= 1\\ e^{-1}d^{-1}fc^{-1}=1 \end{array} & \begin{array}{r} \\ hd^{-1}gd^{-1}=1\\ ej^{-1}f^{-1}i=1\\ \ \end{array} \\ &&\\ \begin{array}{rr} 1)& g^{-1}l^{-1}h^{-1}k=1\\ 2)& g^{-1}ah^{-1}a=1\\ 3)& e^{-1}le^{-1}k=1\\ 4)& e^{-1}aea=1 \end{array} & \begin{array}{r} hl^{-1}gk = 1\\ hb^{-1}gb^{-1}=1\\ f^{-1}lf^{-1}k=1\\ f^{-1}b^{-1}fb^{-1}=1 \end{array} & \\ &&\\ \begin{array}{rr} 1)& i^{-1}k^{-1}ik=1\\ 2)& i^{-1}bia=1\\ 3)& c^{-1}k^{-1}d^{-1}k =1\\ 4)& c^{-1}bc^{-1}a=1 \end{array} & \begin{array}{r} j^{-1}ljl^{-1}=1\\ j^{-1}b^{-1}ja^{-1}=1\\ dlcl^{-1}=1\\ db^{-1}da^{-1}=1 \end{array} & \end{array} \end{displaymath} The diagrams in Figures~\ref{trans1-4} and~\ref{trans5} find the translations of interest for $M_{56}$. The following table identifies boundary component types and the translations. $\tilde M_{56}$ has boundary components of type BBBAAAA, making it a complement of 3 Klein bottles and 4 tori. Boundary components $E_1$--$E_4$ are the ones corresponding to ideal vertices $v_{+000}$--$v_{000+}$, while $E_5$ corresponds to vertex $v_{++++}$. Elements of the holonomy group are identified by their planes of reflection, for example, $x_2 x_3$ is the composite of reflections in planes $x_2$ and $x_3$. \begin{center} \begin{tabular}{c|c|c|c|c} comp. 
& type & translations in $8Q_C$ or $64 Q_C$ & holonomy & self-conj?\\ \hline\rule[15pt]{0pt}{0pt} $E_1$ & $I$ & none & $x_2$, $x_3$, $x_2x_3$& n/a\\ $E_2$ & $B$ & $a$ & $x_1x_4$ & no\\ $E_3$ & $H$ & $e^{-1}dl$ & $x_2$ & no\\ $E_4$ & $G$ & $i^{-1}k^{-1}$ & $x_1$ & yes\\ $E_5$ & $A$ & $c^{-1}i$, $a^{-1}k^{-1}eg$ & trivial & no \end{tabular} \end{center} \figref{trans1-4} shows why $e^{-1}dl$ is not normal in $E_3$: reflecting the vector in the third cube in the plane $x_2=0$ produces a nonparallel vector. Similarly, $i^{-1}k^{-1}$ is normal in $E_4$ because it is invariant under reflection in the plane $x_1=0$. Note that there are no translations in $E_1$ that are in the cube $8Q_C$, so we use one of the translations $a^{-1}b$, $c^{-1}d^{-1}$, or $g^{-1}h$ that generate $T_{v_{+000}}$. The nontrivial element of holonomy of $\tilde E_1$ is a rotation in the $x_2 x_3$-plane. All three translations are thus normal in $\tilde E_1$, but the third one is not primitive, since it is a power of a slide-rotation. Make the following choices for translations: \begin{displaymath} \begin{array}{lllll} t_1=c^{-1}d, & t_2=a, & t_3=e^{-1}dl, & t_4=i^{-1}k^{-1}, & t_5=c^{-1}i, \\ & t'_2=ct_2c^{-1}, & & & t'_5=c(a^{-1}k^{-1}eg)c^{-1}. \end{array} \end{displaymath} Because $t_1$ and $t_4$ are self-conjugate, we have $\ncl t_1,t_2,t_4 \ncr_G=\ncl t_1,t_2,t'_2,t_4 \ncr_H$. Adding $t_1=t_2=t_4=1$ to the relators we get $d=c$, $k=i^{-1}$ and $a=1$, which immediately yields $b=1$ and $h=g^{-1}$, so the presentation for $G/\ncl t_1,t_2,t_4 \ncr$ has generators $c$, $e$, $f$, $g$, $i$, $j$, $l$ and relations (repeating or trivial relations omitted): \begin{displaymath} \left\{ \begin{array}{r} g^{-1}j^{-1}gi=1\\ g^{-1}cgc=1\\ e^{-1}j^{-1}fi=1\\ e^{-1}cfc=1\\ ej^{-1}f^{-1}i=1\\ e^{-1}c^{-1}fc^{-1}=1 \end{array} \right. \hskip10pt \left\{ \begin{array}{r} g^{-1}l^{-1}gi^{-1}=1\\ e^{-1}le^{-1}i^{-1}=1\\ f^{-1}lf^{-1}i^{-1}=1 \end{array} \right. \hskip10pt \left\{ \begin{array}{r} c^{-1}ic^{-1}i^{-1}=1\\ c^{-2}=1\\ j^{-1}ljl^{-1}=1\\ clcl^{-1}=1 \end{array} \right. \end{displaymath} Taking quotients is equivalent to adding relators, so, as is customary in working with group presentations, we usually omit class notation. Taking into account $c^2=1$, note that one relation in the group at right says that $i$ and $c$ commute in $G/\ncl t_1,t_2,t_4 \ncr$, and one relation in the left group says $g$ and $c$ commute ($c=c^{-1}$). But then $ct_5c^{-1}=cc^{-1}ic^{-1}=t_5$, fulfilling conditions of Proposition~\ref{normclosures}, so we conclude $\ncl t_1,t_2,t_4,t_5 \ncr_G=\ncl t_1,t_2,t'_2,t_4,t_5\ncr_H$. In the group $G/\ncl t_1,t_2,t_4,t_5 \ncr$, we now have $i=c$, so $j=gig^{-1}=gcg^{-1}=c$ and $l=gi^{-1}g^{-1}=gc^{-1}g^{-1}=c$, which in conjunction with $e^{-1}le^{-1}i^{-1}=1$ implies $ e^{-1}ce^{-1}c^{-1}=1$, in other words, $cec^{-1}=e^{-1}$. Since now $t_3=e^{-1}$, we have $ct_3c^{-1}=ce^{-1}c^{-1}=e=t_3^{-1}$, so Proposition~\ref{normclosures} yields $\ncl t_1,t_2,t_3,t_4,t_5 \ncr_G$ $=\ncl t_1,t_2,t'_2,t_3,t_4,t_5 \ncr_H$. The presentation of the group $G/ \ncl t_1,t_2,t_3,t_4,t_5\ncr$ is obtained from the above relations combined with $i=c$ and $e=1$ (following from $t_5=t_3=1$). Then \mbox{$f=1$} follows. Using $j=l=c$, the presentation simplifies to \begin{displaymath} \left< c,g\, |\, c^2=1,\ cgc^{-1}=g\right>=\z\oplus\z_2. \end{displaymath} In this group, $t'_5=ca^{-1}k^{-1}egc^{-1}=c1c1gc^{-1}=gc^{-1}=gc$, so $ct'_5c^{-1}=t'_5$.
Proposition~\ref{normclosures} says $\ncl t_1,t_2,t_3,t_4,t_5,t'_5\ncr_G=\ncl t_1,t_2,t'_2,t_3,t_4,t_5,t'_5 \ncr_H$. Now $G/ \ncl t_1,t_2,t_3,t_4,t_5,t'_5\ncr=\left<c\,|\,c^2=1\right>=\z_2$, so $H/\ncl t_1,t_2,t'_2,t_3,t_4,t_5,t'_5\ncr_H=H/ \ncl t_1,t_2,t_3,t_4,t_5,t'_5 \ncr_G=1$. Therefore, $\tilde M_{56}$ is a complement in the 4-sphere. \end{example} \begin{example} \label{m1091} The side-pairing of manifold $M_{1091}$ is encoded by 53RR35. All the generators are orientation reversing. The presentation of $G$ is given by the following relations, which are grouped by row of diagrams like in \figref{ridgecycles}. \begin{displaymath} \begin{array}{rrr} \left\{ \begin{array}{r} g^{-1}jh^{-1}i=1\\ g^{-1}dh^{-1}c=1\\ e^{-1}jf^{-1}i=1\\ e^{-1}df^{-1}c=1\\ hjgi=1\\ hc^{-1}gd^{-1}=1\\ f^{-1}je^{-1}i=1\\ f^{-1}c^{-1}e^{-1}d^{-1}=1 \end{array} \right. & \left\{ \begin{array}{r} g^{-1}lh^{-1}k=1\\ g^{-1}bh^{-1}a=1\\ e^{-1}lfk=1\\ e^{-1}bfa=1\\ hlgk=1\\ ha^{-1}gb^{-1}=1\\ fle^{-1}k=1\\ fa^{-1}e^{-1}b^{-1}=1 \end{array} \right. & \left\{ \begin{array}{r} i^{-1}k^{-1}ik=1\\ i^{-1}bia=1\\ c^{-1}k^{-1}d^{-1}k=1\\ c^{-1}bd^{-1}a=1\\ jlj^{-1}l^{-1}=1\\ ja^{-1}j^{-1}b^{-1}=1\\ dlcl^{-1}=1\\ da^{-1}cb^{-1}=1 \end{array} \right. \end{array} \end{displaymath} The table below lists the translations of interest to us in the boundary components. The boundary components of $\tilde M_{1091}$ are AAAAAAAAA, making it a complement of 9 tori. The boundary component $E_6$ is the one corresponding to vertex $v_{+++-}$, for which the search for translations is the same as for $v_{++++}$. \noindent \begin{center} \begin{tabular}{c|c|c|c|c} comp. & type & translations in $8Q_C$ or $64 Q_C$ & holonomy & self-conj?\\ \hline\rule[15pt]{0pt}{0pt} $E_1$ & $A$ & $c^{-1}h$, $a^{-1}h$, $c^{-1}b$ & trivial & no\\ $E_2$ & $H$ & $e^{-1}j$ & $x_3$ & no\\ $E_3$ & $H$ & $e^{-1}l$ & $x_2$ & no\\ $E_4$ & $G$ & $i^{-1}k^{-1}$ & $x_1$ & yes\\ $E_5$ & $A$ & $a^{-1}k, c^{-1}i$, $e^{-1}g$ & trivial & no\\ $E_6$ & $A$ & $a^{-1}l$, $c^{-1}j$, $e^{-1}h$ & trivial & no \end{tabular} \end{center} Choose the translations: \begin{displaymath} \begin{array}{llllll} t_1=c^{-1}b, & t_2=e^{-1}j, & t_3=e^{-1}l, & t_4=i^{-1}k^{-1}, & t_5=c^{-1}i, & t_6=e^{-1}h,\\ t'_1=ct_1c^{-1}, & & & & t'_5=ct_5c^{-1}, & t'_6=ct_6c^{-1}. \end{array} \end{displaymath} Due to self-conjugacy of $t_4$, $\ncl t_1,t_4,t_5,t_6\ncr_G=\ncl t_1,t'_1,t_4,t_5,t'_5,t_6,t'_6\ncr_H$. In what follows, we refer to the relations above as entries in an $8\times 3$ matrix. In the group $G/\ncl t_1,t_4,t_5,t_6\ncr $ we have $k^{-1}=i=c=b$ and $h=e$, so equation 43 gives $d=a$ and equation 23 gives $b=a^{-1}$. Equations 32 and 42 convert to $l=ea^{-1}f^{-1}$ and $a^{-1}=ea^{-1}f^{-1}$, so $l=a^{-1}$. Similarly, comparing 11 and 21 gives $j=a$. The presentation of $G/\ncl t_1,t_4,t_5,t_6\ncr $ then reduces to generators $a$,$e$,$g$ and $f$ and the relations, grouped as above (third column reduces to trivial relations): \begin{displaymath} \begin{array}{rr} \left\{ \begin{array}{r} g^{-1}ae^{-1}a^{-1}=1\\ e^{-1}af^{-1}a^{-1}=1\\ eaga^{-1}=1\\ f^{-1}ae^{-1}a^{-1}=1 \end{array} \right. & \left\{ \begin{array}{r} g^{-1}a^{-1}e^{-1}a=1\\ e^{-1}a^{-1}fa=1\\ ea^{-1}ga=1\\ fa^{-1}e^{-1}a=1 \end{array} \right. 
\end{array} \end{displaymath} In this array equations 11 and 41 give $f=g$, equations 22 and 32 imply $e=e^{-1}$, and equations 12 and 42 give $g=f^{-1}$, further simplifying the presentation to \begin{displaymath} \left< a,e,f\ |\ e^2=f^2=1,\ aea^{-1}=f,\ afa^{-1}=e \right>\cong (\z_2 * \z_2)\rtimes \z, \end{displaymath} where $\z$ acts on $\z_2*\z_2$ by swapping the generators. One can now show (next paragraph) that $H/\ncl t_1,t_4,t_5,t_6\ncr\cong \z\oplus \z$, where the generators are $[t_2]=e^{-1}a$ and $[t_3]=e^{-1}a^{-1}$. Let $H_1=H/\ncl t_1,t'_1,t_4,t_5,t'_5,t_6,t'_6\ncr_H$. We now have \begin{multline*} H/\ncl t_1,t'_1,t_2,t_3,t_4,t_5,t'_5,t_6,t'_6\ncr_H=\\ \left(H/\ncl t_1,t'_1,t_4,t_5,t'_5,t_6,t'_6\ncr_H\right)/\ncl [t_2],[t_3] \ncr_{H_1}=\\ \left(H/\ncl t_1,t_4,t_5,t_6\ncr \right)/\ncl [t_2],[t_3] \ncr_{H_1}=\\ \left( \left<[t_2]\right> \oplus \left< [t_3]\right>\right) /\ncl [t_2],[t_3] \ncr_{H_1}=1. \end{multline*} To show that $H/\ncl t_1,t_4,t_5,t_6\ncr\cong \z\oplus \z$ one can use either the explicit isomorphism of $G/\ncl t_1,t_4,t_5,t_6\ncr$ with $(\z_2 * \z_2)\rtimes \z$, or apply the Reidemeister-Schreier method to the presentation above using $\{1,e\}$ as the transversal. In the latter case, we get the presentation \begin{displaymath} \left< a_0,f_0,a_1,f_1\ | \ f_0f_1=a_0a_1^{-1}f_0^{-1}=a_0f_1a_1^{-1}=a_1a_0^{-1}f_1^{-1}=a_1f_0a_0^{-1}=1\right>, \end{displaymath} where $a_0=ae^{-1}$, $f_0=fe^{-1}$, $a_1=ea$, and $f_1=ef$. This easily reduces to $\left< a_0, a_1\, |\, a_0a_1=a_1a_0 \right>$. We also note $[t_2]=e^{-1}a=ea=a_1$ and $[t_3]=e^{-1}a^{-1}=(ae^{-1})^{-1}=a_0^{-1}$. \end{example} \begin{example} \label{m36} The side-pairing of manifold $M_{36}$ is encoded by 1468AF. The orientation-reversing generators are $e$,$f$,$i$,$j$,$k$,$l$. The presentation of $G$ is given by the following relations. \begin{displaymath} \begin{array}{rrr} \left\{ \begin{array}{r} g^{-1}j^{-1}g^{-1}i=1\\ g^{-1}c^{-1}gc=1\\ e^{-1}jf^{-1}i=1\\ e^{-1}cfc=1\\ h^{-1}j^{-1}h^{-1}i=1\\ h^{-1}d^{-1}hd=1\\ ej^{-1}fi^{-1}=1\\ e^{-1}dfd=1 \end{array} \right. & \left\{ \begin{array}{r} g^{-1}l^{-1}h^{-1}k=1\\ g^{-1}a^{-1}ha=1\\ e^{-1}le^{-1}k=1\\ e^{-1}b^{-1}ea=1\\ g^{-1}kh^{-1}l^{-1}=1\\ gb^{-1}h^{-1}b=1\\ f^{-1}k^{-1}f^{-1}l^{-1}=1\\ f^{-1}b^{-1}fa=1 \end{array} \right. & \left\{ \begin{array}{r} i^{-1}l^{-1}i^{-1}k=1\\ i^{-1}b^{-1}ia=1\\ c^{-1}ld^{-1}k=1\\ c^{-1}a^{-1}da=1\\ jkjl^{-1}=1\\ ja^{-1}j^{-1}b=1\\ c^{-1}kd^{-1}l=1\\ cb^{-1}d^{-1}b=1 \end{array} \right. \end{array} \end{displaymath} \noindent The following table identifies boundary component types and translations of interest. The boundary components of $\tilde M_{36}$ are AAAABBAA making it a complement of 6 tori and 2 Klein bottles. \begin{center} \begin{tabular}{c|c|c|c|c} comp. & type & translations in $8Q_C$ or $64 Q_C$ & holonomy & self-conj?\\ \hline\rule[15pt]{0pt}{0pt} $E_1$ & $A$ & $c^{-1}$, $g^{-1}$ & trivial & no\\ $E_2$ & $A$ & $a^{-1}$, $e^{-1}j$ & trivial & no\\ $E_3$ & $J$ & none & $x_1$, $x_2$, $x_1x_2$ & n/a\\ $E_4$ & $J$ & none & $x_1$, $x_2$, $x_1x_2$ & n/a\\ $E_5$ & $A$ & $c^{-1}i^{-1}eg$, $a^{-1}k^{-1}ci$, $a^{-1}k^{-1}eg$ & trivial & no \end{tabular} \end{center} Choose the translations: \begin{displaymath} \begin{array}{lllll} t_1=a^{-1}b, & t_2=e^{-1}j, & t_3= e^{-1}f^{-1}, & t_4=i^{-1}j^{-1}, & t_5=a^{-1}k^{-1}ci, \\ t'_1=kck^{-1}, & t'_2=eae^{-1}, & & & t'_5=k(c^{-1}i^{-1}eg)k^{-1}. 
\end{array} \end{displaymath} Note that for boundary component $E_1$, although there are two translations in $8Q_C$, we choose $a^{-1}b\in\ker\phi$ because neither $c$ nor $g$ is self-conjugate. Note also that in boundary components $E_3$ and $E_4$ there are no translations in $8Q_C$ so we are forced to choose translations in $\ker \phi$. Lifts of $E_3$ and $E_4$ are of type $B$, so care must be taken that the translations are primitive in the lifts. In other words, the translations must be in the planes of rotation (in this case, the plane $x_1x_2$). Since $t_3$ and $t_4$ are normal in nonorientable $E_3$ and $E_4$, they are self-conjugate, so $\ncl t_3, t_4\ncr_H=\ncl t_3, t_4\ncr_G$. In $G/ \ncl t_3,t_4\ncr$ we have $f=e^{-1}$ and $j=i^{-1}$. Equations 42 and 82 then give $eae^{-1}=b$, $ebe^{-1}=a$, so $e(a^{-1}b)e^{-1}=b^{-1}a=(a^{-1}b)^{-1}$, that is, $et_1e^{-1}=t_1^{-1}$. Equation 31 reads as $e^{-1}i^{-1}ei=1$, so $et_2e^{-1}=t_2$ because $e$ and $j=i^{-1}$ commute. Then $ \ncl t_1,t_2,t_3,t_4\ncr_H=\ncl t_1,t_2,t_3,t_4\ncr_G$. In $G/ \ncl t_1,t_2,t_3,t_4\ncr$ we have $a=b$, which gives $eae^{-1}=a$, implying \begin{displaymath} \ncl t_1,t_2,t'_2,t_3,t_4\ncr_H=\ncl t_1,t_2,t'_2,t_3,t_4\ncr_G. \end{displaymath} In $G/\ncl t_1,t_2,t'_2,t_3,t_4\ncr$ we then have $a=1$ which implies $b=1$, $h=g$ and $d=c$. Equation 32 gives $l=ek^{-1}e$. Simplifying the presentation, we get \begin{multline*} G/\ncl t_1,t_2,t'_2,t_3,t_4\ncr =\left< c,e,g,k\ | \ egeg=g^{-1}c^{-1}gc=\right.\\ \left. e^{-1}ce^{-1}c=kgekeg=k^2=c^{-1}ekec^{-1}k=1\right>. \end{multline*} Unfortunately, the remaining translations $t'_1=kck^{-1}$, $t_5=kce^{-1}$ and $t'_5=kc^{-1}e^2gk^{-1}$ cannot easily be shown to be self-conjugate in this group, so we resort to the Reidemeister-Schreier method to get $H/ \ncl t_1,t_2,t'_2,t_3,t_4\ncr$ before we continue. Using $\{1,k\}$ as the transversal, it is not hard to see that the presentation for $H/ \ncl t_1,t_2,t'_2,t_3,t_4\ncr$ has generators $c_0=c$, $g_0=g$, $e_0=ek^{-1}$, $c_1=kck^{-1}$, $g_1=kgk^{-1}$, $e_1=ke$ and relations \begin{displaymath} \begin{array}{rrrrr} e_0g_1e_1g_0=1 & g_0^{-1}c_0^{-1}g_0c_0=1 & e_1^{-1}c_1e_0^{-1}c_0=1 & g_1e_1^2g_0=1 & c_0^{-1}e_0^2c_1^{-1}=1\\ e_1g_0e_0g_1=1 & g_1^{-1}c_1^{-1}g_1c_1=1 & e_0^{-1}c_0e_1^{-1}c_1=1 & g_0e_0^2g_1=1 & c_1^{-1}e_1^2c_0^{-1}=1 \end{array} \end{displaymath} The translations take the form $t'_1=c_1$, $t_5=c_1e_0^{-1}$, $t'_5=c_1^{-1}e_1e_0g_1$. Adding $t'_1=t_5=t'_5=1$ to the above relations immediately gives $c_1=e_0=1$, $g_1=e_1^{-1}$. Relation~15 then reduces to $c_0=1$ so~13 now says $e_1=1$, hence $g_1=1$. Then $g_0=1$ follows from relation~21, therefore, $H/\ncl t_1,t'_1,t_2,t'_2,t_3,t_4,t_5,t'_5\ncr_H=1$. In this example, infinitely many other choices for $t_5$ and $t'_5$ also give the 4-sphere. If we only add $c_1=1$ to the above equations, we get $c_0=e_0^2$, $e_1=e_0$ and $g_1=e_0^{-2}g_0^{-1}$, and the presentation simplifies to $\left<e_0, g_0\,|\,e_0g_0e_0^{-1}g_0^{-1}=1\right>$. Thus, $H/\ncl t_1,t'_1,t_2,t'_2,t_3,t_4\ncr_H=\left<e_0\right>\oplus \left<g_0\right>$. The basis elements of $\pi_1 \tilde E_5=\z^3$ are $v_1=c^{-1}i^{-1}eg$, $v_2=a^{-1}k^{-1}ci$, and $v_3=a^{-1}k^{-1}eg$; they are sent to $g_0$, $e_0^{-1}$ and $e_0g_0$. The conjugates $v'_i=kv_i k^{-1}$ generate $\pi_1\tilde E'_5$; they are sent to $g_0^{-1}$, $e_0$ and $e_0^{-1}g_0^{-1}$. 
Any primitive translations in $\tilde E_5$, $\tilde E'_5$ then have the form $t_5=pv_1-qv_2+rv_3$ or $t'_5=-p' v'_1+q' v'_2-r' v'_3$, where $\gcd(p,q,r)=\gcd(p',q',r')=1$. If $\psi:H\to H/\ncl t_1,t'_1,t_2,t'_2,t_3,t_4\ncr_H$ is the quotient map, then $\psi(t_5)=(q+r, p+r)$ and $\psi(t'_5)=(q'+r',p'+r')$, as written in terms of the basis $\{e_0, g_0\}$. Then \begin{displaymath} H/\ncl t_1,t'_1,t_2,t'_2,t_3,t_4,t_5,t'_5\ncr_H=1 \Longleftrightarrow \left| \begin{array}{cc} q+r & q'+r'\\ p+r & p'+r' \end{array} \right|=\pm1. \end{displaymath} Infinitely many choices of $p$, $q$, $r$, $p'$, $q'$, $r'$ exist that satisfy the last condition. For example, setting $r=r'=0$, it is easy to see that the resulting equation $qp'-pq'=1$ can be satisfied for an arbitrary relatively prime pair of numbers $p$ and $q$ once suitable $p'$ and $q'$ are found ($p'$ and $q'$ are consequently relatively prime). \end{example} \begin{example} \label{otherex} For the remaining examples from Theorem~\ref{mainthm}, the proofs proceed in a similar way. In \tableref{table2} we list the choices for translations that make the orientable double cover a complement in $S^4$. Types of boundary components are identified in the order we have used here; note that this order is different from the tables in \cite{Ratcliffe-Tschantz}. The translations are listed in the order one sets them equal to 1. The letters RS, if applicable, stand in the place where we were forced to pass to a presentation of $H/\ncl \text{translations}\ncr$ via the Reidemeister-Schreier method. Examples that do not employ it are typically straightforward. Appearances of translations labeled $t'$ indicate that the corresponding boundary component lifted to two components in the orientable double cover.\end{example} \begin{table}[ht!]\small\anchor{table2} \begin{center} \begin{tabular}{c|c|c|c} number & side-pairing & boundary & translation choice\\ \hline\rule[15pt]{0pt}{0pt} 1011 & 14FF28 & GGGGG & $t_1=c$, $t_2=a$, $t_3=k$, $t_4=i$, \\ &&& $t_5=e^{-1}g$ (see \cite{Ivansic3})\\ \hline\rule[15pt]{0pt}{0pt} 71 & 13EB34 & GGHGA & $t_1=a^{-1}h$, $t_2=a$, $t_4=k$, \\ &&& $t_5=c^{-1}i$, $t'_5=ct_5c^{-1}$, $t_3=e^{-1}l$ \\ \hline\rule[15pt]{0pt}{0pt} 23 & 1569A4 & GAGAH & $t_1=c^{-1}h$, $t_2=a$, $t'_2=ct_2c^{-1}$,\\ &&& $t_3=e^{-1}d^{-1}$, $t_4=k$,\\ &&& $t'_4=ct_4c^{-1}$, $t_5=c^{-1}i^{-1}eg$ \\ \hline\rule[15pt]{0pt}{0pt} 1092 & 53FFCA & AGGAGG & $t_1=c^{-1}h$, $t'_1=ct_1c^{-1}$, $t_2=a^{-1}i^{-1}$, \\ &&& $t_3=c^{-1}k^{-1}$, $t_4=i^{-1}h^{-1}$, $t'_4=ct_4c^{-1}$ \\ &&& $t_5=e^{-1}g$, $t_6=e^{-1}h$\\ \hline\rule[15pt]{0pt}{0pt} 231 & 1569F4 & GBGBH & $t_1=c^{-1}h$, $t_2=a$, $t'_2=ct_2c^{-1}$,\\ &&& $t_3=e^{-1}d^{-1}$, $t_4=k$, $t'_4=ct_4c^{-1}$,\\ &&& $t_5=a^{-1}k^{-1}ci^{-1}eg$ \\ \hline\rule[15pt]{0pt}{0pt} 112 & 13C874 & GGHBG & $t_1=g$, $t_2=a$, $t_4=k$, $t'_4=ct_4c^{-1}$, \\ &&& $t_5=c^{-1}ieg$, $t_3=e^{-1}d^{-1}l$\\ \hline\rule[15pt]{0pt}{0pt} 92 & 1348EC & GAHIH & $t_1=g$, $t_4=k^{-1}l^{-1}$,\\ &&& $t_2=e$, $t_3=e^{-1}d^{-1}l$,\\ &&& RS, $t'_2=cac^{-1}$, $t_5=c^{-1}jak$\\ \hline\rule[15pt]{0pt}{0pt} 51 & 156A9C & HGABG & $t_2=a$, $t_3=c^{-1}l$, $t'_3=ct_3c^{-1}$,\\ &&& $t_4=i^{-1}h^{-1}$, $t'_4=ct_4c^{-1}$,\\ &&& $t_5=c^{-1}i^{-1}f^{-1}g$, $t_1=c^{-1}ag^{-1}$\\ \hline\rule[15pt]{0pt}{0pt} 40 & 143CF9 & GAGAJ & $t_1=c$, $t_3=e^{-1}k^{-1}$, $t_5=e^{-1}h^{-1}fg$,\\ &&& $t_4=g^{-1}l$, $t_2=a$,\\ &&& RS, $t'_2=i(e^{-1}j)i^{-1}$, $t'_4=i(i^{-1}h)i^{-1}$\\ \end{tabular} \end{center} \nocolon\caption{}\label{table2} \end{table} This completes the proof of
Theorem~\ref{mainthm}. \end{proof} \eject We conclude with some related examples. \begin{example} \label{m1091supp} (Examples of complements in simply-connected manifolds with higher Euler characteristic)\qua It was shown in \cite{Ivansic3} that index-$n$ cyclic covers of $\tilde M_{1011}$ are complements of $4n+1$ tori in a simply-connected closed manifold $N$ with Euler characteristic $2n$. We show that many other manifolds from Theorem~\ref{mainthm} have an analogous property. Two propositions from~\cite{Ivansic3} are used for this purpose. Let $M$ be a hyperbolic manifold with boundary components $E_1,\dots,E_m$ and let $H=\pi_1 M$, $K_i=\pi_1 E_i$. If we take the cover corresponding to a normal subgroup $H_0$ of $H$, \mbox{\cite{Ivansic3} Proposition~2.3} asserts that the number of path-components of $p^{-1}(E_i)$, where $p$ is the covering map, is equal to the index of the image of $K_i$ in $H/H_0$ under the quotient map. Now let $t_i\in \pi_1 E_i$ be a translation normal in $E_i$, $i=1,\dots,m$. Setting $H_0=\ncl t_1,\dots,t_m\ncr_H$, suppose that $H/H_0$ is a finite group of order $l$, and that $t_i$ is primitive in $K_i \cap H_0$. Then \mbox{\cite{Ivansic3} Proposition~2.4} states that the index-$l$ cover of $M$ corresponding to the subgroup $H_0$ is a complement inside a simply-connected closed manifold. From Example~\ref{m56} it is easily seen that $H/\ncl t_1,t_2,t'_2,t_3,t_4,t_5\ncr_H=\z=\left<gc\right>=\left<[t'_5]\right>$. It follows that $H/\ncl t_1,t_2,t'_2,t_3,t_4,t_5, (t'_5)^m\ncr_H=\z_m$, so the above discussion shows that an $m$-fold cover of $\tilde M_{56}$ is a complement in a simply-connected 4-manifold $N$ with $\chi(N)=2m$. To find the link type, consider the generators of boundary components of $\tilde M_{56}$ and their images under the quotient map $H\to H/\ncl t_1,t_2,t'_2,t_3,t_4,t_5\ncr_H$: \begin{center} \begin{tabular}{c|c|c|c} component & type & generators & images in $\z=\left<gc\right>$ \\ \hline\rule[15pt]{0pt}{0pt} $\tilde E_1$ & B &$a^{-1}b$, $g^{-1}h$, $c^{-1}h$ & $1$, $g^{-2}=(gc)^{-2}$, $gc$\\ $\tilde E_2$ & B & $a$, $e$, $i^{-1}j$ & $1$, $1$, $1$\\ $\tilde E'_2$ & B &$cac^{-1}$, $cec^{-1}$, $c(i^{-1}j)c^{-1}$& $1$, $1$, $1$ \\ $\tilde E_3$ & A & $c^{-2}$, $e^{-1}dl$, $e^{-1}f$ & $1$, $1$, $1$ \\ $\tilde E_4$ & A & $g^{-1}h^{-1}$, $i^{-2}$, $i^{-1}k^{-1}$ & $1$, $1$, $1$ \\ $\tilde E_5$ & A & $a^{-1}k^{-1}a^{-1}k$, $a^{-1}k^{-1}eg$, $c^{-1}i$ & $1$, $gc$, $1$\\ $\tilde E'_5$ & A & $c(a^{-1}k^{-1}a^{-1}k)c^{-1}$, & $1$, $gc$, $1$\\ && $c(a^{-1}k^{-1}eg)c^{-1}$, $c(c^{-1}i)c^{-1}$ &\\ \end{tabular} \end{center} The results quoted above imply that the number of components in the lift of a boundary component $\tilde E$ is simply the order of the group $\z/\left<A, [t'_5]^m\right>$, where $A$ is the set of images of the generators of $\pi_1 \tilde E$. Those orders are, respectively, 1, $m$, $m$, $m$, $m$, 1, 1. Let $H_0=\ncl t_1,t_2,t'_2,t_3,t_4,t_5, (t'_5)^m\ncr_H$. Note that $(t'_5)^m$ is primitive in $H_0\cap \pi_1\tilde E'_5$, since powers smaller than $m$ of $t'_5$ are not in $H_0$ because they map to nontrivial elements of $H/H_0=\z_m$. The only generator of $\pi_1 \tilde E_1$ with nontrivial holonomy is $c^{-1}h$, mapping under the restriction $\psi: \pi_1\tilde E_1\to \z_m$ to the generator of $\z_m$. If $m$ is even, elements of $\ker\psi$ will all have trivial holonomy, meaning that $\tilde E_1$ lifts to a collection of manifolds of type $A$; if $m$ is odd, some elements of $\ker\psi$ will have nontrivial holonomy, so $\tilde E_1$ will lift to a collection of manifolds of type $B$.
Putting everything together, we see that an $m$-fold cyclic cover of $\tilde M_{56}$ is a complement inside a simply-connected closed manifold $N$ with $\chi(N)=2m$ of \begin{center} \begin{tabular}{l} $2m+3$ tori and $2m$ Klein bottles, if $m$ is even;\\ $2m+2$ tori and $2m+1$ Klein bottles, if $m$ is odd. \end{tabular} \end{center} At the end of Example~\ref{m36} we saw that $H/\ncl t_1,t'_1,t_2,t'_2,t_3,t_4\ncr_H=\z\oplus\z$, where $\pi_1\tilde E_5$ and $\pi_1 \tilde E'_5$ map surjectively to $\z\oplus\z$ under the quotient map. This now allows for many choices of $t_5$ and $t'_5$ that make $H/\ncl t_1,t'_1,t_2,t'_2,t_3,t_4,t_5,t'_5\ncr_H$ finite. With notation from Example~\ref{m36} it is clear that \begin{displaymath} H/\ncl t_1,t'_1,t_2,t'_2,t_3,t_4,t_5,t'_5\ncr_H\text{ is finite} \Longleftrightarrow \left| \begin{array}{cc} q+r & q'+r'\\ p+r & p'+r' \end{array} \right|\ne0. \end{displaymath} Note that $t_5$ and $t'_5$ do not have to be primitive in $\pi_1 \tilde E_5$ or $\pi_1 \tilde E'_5$, so there is no condition on $\gcd(p,q,r)$ or $\gcd(p',q',r')$. It turns out $t_5$ and $t'_5$ are always primitive in $H_0\cap \pi_1 \tilde E_5$ and $H_0\cap \pi_1 \tilde E'_5$, where $H_0=\ncl t_1,t'_1,t_2,t'_2,t_3,t_4,t_5,t'_5\ncr_H$. The number of boundary components of the cover of $\tilde M_{36}$ corresponding to $H_0$ is determined by considering the order of the groups \begin{displaymath} \z\oplus\z/\left< (q+r,p+r), (q'+r',p'+r'), A\right>, \end{displaymath} where $A$ is the set of images of generators of boundary components of $\tilde M_{56}$. Care must be taken to ascertain whether boundary components of type B lift to type B or A. Among the many possibilities here, we just consider the case $q=p'=r=r'=0$, with $p$ and $q'$ arbitrary. Under those conditions, one can see that $\tilde M_{36}$ has a finite cover with deck group $\z_p\oplus \z_{q'}$ that is the complement in a simply-connected manifold $N$ with $\chi(N)=2pq'$~of \begin{center} \begin{tabular}{l} $(1+2p)\gcd(2,q')+\gcd(2p,q')+2+2p$ tori, if $q'$ is even;\\ $(1+2p)\gcd(2,q')+\gcd(2p,q')+2$ tori and $2p$ Klein bottles, if $q'$ is odd. \end{tabular} \end{center} The table below lists other examples of complements in a simply-connected manifold $N$ that we found using the same method. The last column indicates which translations different from the ones in Example~\ref{otherex} are set equal to~1 to cause $H/\ncl \text{translations}\ncr_H$ to be a finite group. The cover is then the one corresponding to the kernel of $H\to H/\ncl \text{translations}\ncr_H$. The Euler characteristic of $N$ is always twice the cardinality of the group of deck transformations. \begin{center} \begin{tabular}{c|c|l|c} A cover & with & is a complement in a simply- & Altered choice\\ of mfd. & deck group & connected closed manifold of: & of translations\\ \hline \rule[15pt]{0pt}{0pt} $\tilde M_{1091}$ & $\z_m\oplus\z_n $ & $4\gcd(m,n)+5$ tori & $t_2^m$ and $t_3^n$\\ \hline \rule[15pt]{0pt}{0pt} $\tilde M_{71}$ & $\z_m$ & $3m+3$ tori & $t_3^m$\\ \hline \rule[15pt]{0pt}{0pt} $\tilde M_{112}$ & $\z_m$ & $3m+3$ tori, $m$ even & $t_3^m$\\ && $3m+1$ tori, 2 K. bottles, $m$ odd &\\ \hline \rule[15pt]{0pt}{0pt} $\tilde M_{92}$ & $\z_m$ & $4m+2$ tori, $m$ even & $t_5^m$ \\ && $4m+1$ tori, 1 K. bottle, $m$ odd &\\ \hline \rule[15pt]{0pt}{0pt} $\tilde M_{92}$ & $\z_m$ & $3m+3$ tori, $m$ even & $(t'_2)^m$ \\ && $3m+2$ tori, 1 K. bottle, $m$ odd &\\ \hline \rule[15pt]{0pt}{0pt} $\tilde M_{40}$ & $\z_m$ & $2m+4$ tori, $m$ K. 
bottles & $(t'_2)^m$ or $(t'_4)^m$\\ \end{tabular} \end{center} \end{example} \begin{example} \label{aspherical} (More examples of aspherical homology spheres)\qua Ratcliffe and Tschantz produced the first examples (infinitely many) of aspherical 4-manifolds that are homology spheres. The construction in~\cite{Ratcliffe-Tschantz2} used $\tilde M_{1011}$: by choosing fibers $t_i$ of boundary components $E_i$ as in Example~\ref{otherex} and filling them in with a disc, one gets a manifold $N$ with $\pi_1 N=1$, thus $H_1 N=0$. By Proposition~\ref{crithom}, $H_1 \tilde M_{1011}=\z^5$, generated by the fibers $t_1,\dots,t_5$. If the fibers are altered, $\pi_1 N$ will likely fail to be trivial. However, one can retain $H_1 N=0$ if $t_i$ is replaced with $t_i+s_i$, where $s_i$ is a carefully chosen element of $H_1 E_i=\z^3$ that can have arbitrarily large length. When fibers are chosen to be sufficiently long, the manifold $N$ supports a metric of nonpositive curvature and is therefore aspherical ($N$ is a homology sphere because $H_1 N=0$ and $\chi(N)=2$). We note that this construction goes through for any of the orientable double covers from Theorem~\ref{mainthm} whose boundary components are all 3-tori (i.e. which are complements of a collection of tori in $S^4$), namely $\tilde M_{71}$, $\tilde M_{23}$, $\tilde M_{1092}$ and $\tilde M_{1091}$. \end{example} \Addresses\recd \end{document}
\begin{document} \title{Vertex Partitions into an Independent Set and a Forest with Each Component Small} \abstract{For each integer $k\ge 2$, we determine a sharp bound on $\textrm{mad}(G)$ such that $V(G)$ can be partitioned into sets $I$ and $F_k$, where $I$ is an independent set and $G[F_k]$ is a forest in which each component has at most $k$ vertices. For each $k$ we construct an infinite family of examples showing our result is best possible. Our results imply that every planar graph $G$ of girth at least 9 (resp. 8, 7) has a partition of $V(G)$ into an independent set $I$ and a set $F$ such that $G[F]$ is a forest with each component of order at most 3 (resp. 4, 6). Hendrey, Norin, and Wood asked for the largest function $g(a,b)$ such that if $\textrm{mad}(G)<g(a,b)$ then $V(G)$ has a partition into sets $A$ and $B$ such that $\textrm{mad}(G[A])<a$ and $\textrm{mad}(G[B])<b$. They specifically asked for the value of $g(1,b)$, i.e., the case when $A$ is an independent set. Previously, the only values known were $g(1,4/3)$ and $g(1,2)$. We find $g(1,b)$ whenever $4/3< b<2$. } \section{Introduction} An \Emph{$(I,F_k)$-coloring} for a graph $G$ is a partition of $V(G)$ into sets $I$ and $F$ such that $I$ is an independent set and $F$ induces a forest in which each component has at most $k$ vertices. The \emph{average degree of $G$} is $2|E(G)|/|V(G)|$. The \emph{maximum average degree of $G$}, denoted \Emph{$\textrm{mad}(G)$}, is the maximum, taken over all subgraphs $H$, of the average degree of $H$. In this paper, we prove a sufficient condition for a graph $G$ to have an $(I,F_k)$-coloring, in terms of $\textrm{mad}(G)$. \begin{thm} \label{mad-thm} For each integer $k\ge 2$, let $$ f(k):=\left\{\begin{array}{ll}\aside{$f(k)$} 3-\frac3{3k-1} & \mbox{$k$ even} \\ 3-\frac3{3k-2} & \mbox{$k$ odd} \\ \end{array} \right. $$ If $\textrm{mad}(G)\le f(k)$, then $G$ has an $(I,F_k)$-coloring. \end{thm} Theorem~\ref{mad-thm} is best possible. For each integer $k\ge 2$ there exists an infinite family of graphs with maximum average degree approaching $f(k)$ (from above) such that none of these graphs has an $(I,F_k)$-coloring. Note that $f(3)=\frac{18}7$, $f(4)=\frac{30}{11}$, and $f(6)=\frac{48}{17}$. Each planar graph $G$ with girth $g$ has $\textrm{mad}(G)<\frac{2g}{g-2}$. So Theorem~\ref{mad-thm} implies that every planar graph $G$ of girth at least 9 (resp. 8, 7) has a partition of $V(G)$ into an independent set $I$ and a set $F$ where $G[F]$ is a forest with each tree of order at most 3 (resp. 4, 6); for girth 9, this is best possible, since~\cite[Corollary 4]{EMOP} constructs girth 9 planar graphs with no $(I,F_2)$-coloring. This strengthens results in~\cite{DMP,NS-mfmc}. Theorem~\ref{mad-thm} is implied by a more general result below, our Main Theorem. Before introducing definitions and notation to state it, we briefly discuss related work. Choi, Dross, and Ochem~\cite{CDO} studied a variant of $(I,F_k)$-colorings where they did not require the components of $G[F_k]$ to be acyclic, but only to have order at most $k$. They proved that $G$ has such a coloring whenever $\textrm{mad}(G) < \frac{8}{3}(1-\frac{1}{3k+1})$. Theorem~\ref{mad-thm} allows a weaker hypothesis (and a stronger conclusion). Moreover, the sharpness argument for Theorem~\ref{mad-thm} (see Lemma~\ref{sharp-critical-lem}) does not use that $G[F_k]$ is acyclic, and therefore Theorem~\ref{mad-thm} is also a sharp result for this variant of the problem.
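As a purely illustrative aside before we continue with related work (it plays no role in any of our arguments), the definition of an $(I,F_k)$-coloring is easy to test by exhaustive search on very small graphs. The following Python sketch does exactly this; it assumes the third-party \texttt{networkx} package, and all function and variable names in it are ours.
\begin{verbatim}
# Illustrative only: brute-force test for (I, F_k)-colorings of small
# graphs; exponential in |V(G)|, so only suitable for tiny examples.
import itertools
import networkx as nx

def has_IFk_coloring(G, k):
    """True iff V(G) has a partition (I, F) with I independent and
    G[F] a forest in which every component has at most k vertices."""
    nodes = list(G.nodes)
    for r in range(len(nodes) + 1):
        for I in itertools.combinations(nodes, r):
            I = set(I)
            if any(u in I and v in I for u, v in G.edges):
                continue                      # I is not independent
            H = G.subgraph([v for v in nodes if v not in I])
            if H.number_of_nodes() == 0:
                return True                   # F is empty
            if nx.is_forest(H) and all(len(c) <= k
                    for c in nx.connected_components(H)):
                return True
    return False

print(has_IFk_coloring(nx.complete_graph(4), 2))  # False
print(has_IFk_coloring(nx.cycle_graph(5), 2))     # True
\end{verbatim}
For instance, the two checks above confirm that $K_4$ has no $(I,F_k)$-coloring for any $k$ (its vertex set has no partition into an independent set and a forest at all), while $C_5$ has an $(I,F_2)$-coloring, consistent with Theorem~\ref{mad-thm} since $\textrm{mad}(C_5)=2\le f(2)$.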
Dross, Montassier, and Pinlou~\cite{DMP} studied a different variant of $(I,F_k)$-colorings, where $G[F_k]$ has bounded maximum degree, but perhaps not bounded order (earlier related results are in~\cite{BK-defective} and~\cite{BIMOR}). Under hypotheses very similar to those in Theorem~\ref{mad-thm}, they proved that $G$ has such a coloring. These results, too, are strengthened by Theorem~\ref{mad-thm}. We can also view Theorem~\ref{mad-thm} in a more general context. Hendrey, Norin, and Wood~\cite[Problem \#14]{HNW} asked for the largest function \Emph{$g(a,b)$} such that if $\textrm{mad}(G)<g(a,b)$ then $V(G)$ has a partition into sets $A$ and $B$ such that $\textrm{mad}(G[A])<a$ and $\textrm{mad}(G[B])<b$. They specifically asked for the value of $g(1,b)$, which corresponds to the case that $A$ is an independent set. Nadara and Smulewicz~\cite{NS-mfmc} used maximum flows to give a short proof that $g(1,b)\ge b+1$ and $g(2,b)\ge b+2$. However, the only exact values previously known\footnote{Borodin, Kostochka, and Yancey~\cite{BKY-improper} also showed that $g(4/3,4/3)=14/5$.} were $g(1,4/3)$ and $g(1,2)$ (see~\cite{BK-improper} for $g(1,4/3)$ and see below for $g(1,2)$). We find the value of $g(1,b)$ whenever $4/3\le b<2$. We also study a related function \Emph{$\tilde{g}(a,b)$}. This is the largest value for which there is a finite set $\mathcal{G}_{a,b}$ of graphs such that if $\textrm{mad}(G)<\tilde{g}(a,b)$ and $G$ has no graph in $\mathcal{G}_{a,b}$ as a subgraph, then $V(G)$ has a partition into sets $A$ and $B$ where $\textrm{mad}(G[A])<a$ and $\textrm{mad}(G[B])<b$. That is, $\tilde{g}(a,b)$ is the minimum value such that there is an infinite family of graphs $G_j$ with $\textrm{mad}(G_j)$ approaching $\tilde{g}(a,b)$ from above (as $j\to \infty$) and each $V(G_j)$ has no partition $A,B$ with $\textrm{mad}(G_j[A])<a$ and $\textrm{mad}(G_j[B])<b$. Clearly, $g(a,b)\le \tilde{g}(a,b)$, and sometimes this inequality is strict. In~\cite{CY-IF} we observed that $g(1,2)=3$. The lower bound follows from degeneracy.\footnote{Given a vertex $v$ of degree at most 2, by induction we partition $G-v$ into sets $I$ and $F$ such that $I$ is independent and $G[F]$ is a forest. If $v$ has no neighbor in $I$, then we add $v$ to $I$. Otherwise, we add it to $F$.} The upper bound $g(1,2)\le 3$ comes from $K_4$. However, $K_4$ is the single obstruction to strengthening this bound. In fact, we proved that $\tilde{g}(1,2)=3.2$. Each component of a graph $G$ with $\textrm{mad}(G)<2$ is a forest. Thus, a partition of $V(G)$ into sets $I$ and $F$ with $\textrm{mad}(G[I])<1$ and $\textrm{mad}(G[F])<2-2/(k+1)$ is precisely an $(I,F_k)$-partition. In the present paper, we show that $g(1,2-2/(k+1))=\tilde{g}(1,2-2/(k+1))=f(k)$ for every integer $k\ge 2$ (here $f(k)$ is as defined in Theorem~\ref{mad-thm}). This is particularly interesting because $\tilde{g}(1,2)=3.2$, but $\tilde{g}(1,b)<3$ for every $b<2$. A \Emph{precoloring} of $G$ is a partition of $V(G)$ into sets $U_0, U_1, \ldots, U_{k-1}, F_1, F_2, \ldots, F_k$\aaside{$U_0,\ldots,U_{k-1}$}{4mm} \aaside{$F_1,\ldots,F_k, I$}{8mm}, and $I$. Intuitively, we think of a vertex in $F_j$ as being already colored $F$ and having an additional $j-1$ (fake) neighbors that are also already colored $F$. So, for example, if a vertex is in $F_k$ then we cannot color any of its neighbors in $\bigcup_{j=0}^{k-1}U_j$ with $F$, since this would create a component colored $F$ with at least $k+1$ vertices. 
Similarly, a vertex $v$ in $U_j$ is uncolored, but has $j$ fake neighbors that are colored $F$. So coloring $v$ with $F$ would create a component colored $F$ with $j+1$ vertices. An $(I,F_k)$-coloring of a precolored graph $G$ is an $(I,F_k)$-coloring $(I',F')$ of the underlying (not precolored) graph $G$ such that $I\subseteq I'$, $\cup_{j=1}^k F_j\subseteq F'$ and each component of $G[F']$ has at most $k$ vertices \emph{including} any fake neighbors arising from the precoloring. A graph $G$ is \emph{precolored trivially} if $U_0=V(G)$, so $U_1=\cdots=U_{k-1}=F_1=\cdots=F_k=I=\emptyset$. A precolored graph $G$ is \Emph{$(I,F_k)$-critical} if $G$ has no $(I,F_k)$-coloring, but every proper subgraph of $G$ does and, furthermore, for any vertex precolored $U_j$ or $F_j$, reducing $j$ by 1 allows an $(I,F_k)$-coloring of $G$. So Theorem~\ref{mad-thm} is equivalent to saying that every (trivially precolored) $(I,F_k)$-critical graph $G$ has $\textrm{mad}(G)>f(k)$. To facilitate a proof by induction, we want to extend Theorem~\ref{mad-thm} to allow other precolorings. However, a vertex in $U_j$ (with $j>0$) or in $F_j$ imposes more constraints on an $(I,F_k)$-coloring than one in $U_0$. Intuitively, a vertex in $V(G)\setminus U_0$ should ``count more'' toward the average degree than one in $U_0$. This motivates weighting vertices differently, as we do below. (In Section~\ref{gadgets-sec}, we explain our choice of weights.) \begin{defn} \label{rho-defn} \label{coeff-defn} For each integer $k\ge 2$, let \begin{itemize} \item $C_E:=\{ 3k-1\mbox{ for $k$ even, } 3k-2\mbox{ for $k$ odd}\}$;\aside{$C_E$} \item $C_{U,0}:=\frac{3C_E-3}2$; \item $C_{U,j} := C_{U,0} - 3j = \frac{3C_E-3}2-3j$ for $0 < j \leq k$;\aside{$C_{U,j}$} \item $C_{F,j} := C_{U,j-1} + C_{I} - C_E = C_E - 3j$ for $1 \leq j \leq \lfloor \frac{k+1}2 \rfloor$;\aside{$C_{F,j}$} \item $C_{F,j} := C_{U,\flr{\frac{k-1}2}} + C_{U,\ceil{\frac{k-1}2}} + C_{U,j-\lfloor \frac{k+3}2 \rfloor} - 3C_E = 3(k-j)$ for $\lfloor \frac{k+3}2 \rfloor \leq j\le k$; and \item $C_I := C_{U,0} + C_{F,k} - C_E = \frac{C_E - 3}2$.\aside{$C_I$} \end{itemize} \end{defn} \begin{mainthm} Fix an integer $k\ge 2$. Let $$ \rho_G^k(R):=\sum_{j=0}^{k-1}C_{U,j}|U_j\cap R|+\sum_{j=1}^kC_{F,j}|F_j\cap R|+C_I|I\cap R|-C_E|E(G[R])|, \aside{$\rho^k_G$}$$ \nopagebreak for each $R\subseteq V(G)$. If a precolored graph $G$ is $(I,F_k)$-critical, then $\rho_G^k(V(G))\le -3$. \end{mainthm} Now is a good time to define more terminology and notation. We typically write $\rho^k$, rather than $\rho^k_G$, when there is no danger of confusion. We also write \Emph{coloring} to mean $(I,F_k)$-coloring. An \Emph{$F$-component} is a component of $G[F]$ (either for an $(I,F_k)$-coloring of a graph $G$ or for a precoloring of $G$, where $F=\cup_{j=1}^kF_j$). We will often want to move a vertex $v$ from $U_a$ to $U_{a+b}$ or from $F_a$ to $F_{a+b}$, for some integers $a$ and $b$. Informally, we call this ``adding $b$ $F$-neighbors to $v$''. If an uncolored vertex $v$ ever has $k$ or more $F$-neighbors, then we recolor $v$ with $I$ (since coloring $v$ with $F$ would create an $F$-component with at least $k+1$ vertices, which is forbidden); see Lemma~\ref{noIFk-lem} and the comment after it. Note the following easy proposition. \begin{prop} The Main Theorem implies Theorem~\ref{mad-thm}. \end{prop} \begin{proof} Observe that $\frac{2C_{U,0}}{C_E}=f(k)$, as defined in Theorem~\ref{mad-thm}. 
Thus, if $G$ is precolored trivially, then the condition $\rho^k(V(G))\ge 0$ is equivalent to $\frac{2|E(G)|}{|V(G)|}\le f(k)$. By the Main Theorem, each $(I,F_k)$-critical graph $G$ has $\rho^k(V(G))\le -3$. Thus, if $\textrm{mad}(G)\le f(k)$, then $\rho(R)\ge 0$ for all $R\subseteq V(G)$; so $G$ contains no $(I,F_k)$-critical subgraph. Hence, $G$ has an $(I,F_k)$-coloring. \end{proof} The proof of the Main Theorem differs somewhat depending on whether $k$ is even or odd. However, the two cases are similar. Thus, we begin the proof (for all $k$) in Section~\ref{proof-start-sec}. In Section~\ref{k-even-sec} we conclude it for $k$ even, and in Section~\ref{k-odd-sec} we conclude it for $k$ odd. Before proving the Main Theorem, we discuss the sharpness examples and the gadgets that motivate our weights in Definition~\ref{rho-defn}. We then conclude the introduction with a brief overview of the potential method. \subsection{Sharpness Examples} \label{sharpness-sec} \begin{figure} \caption{Top: The sharpness example $G_{k,3}$.} \label{sharpness-fig} \end{figure} \begin{example} We write \emph{add a pendent 3-cycle at a vertex $z$} to mean identify $z$ with a vertex of a new 3-cycle. \emph{Adding $\ell$ pendent 3-cycles at $z$} means repeating this $\ell$ times. Similarly, \emph{adding a 2-thread from $y$ to $z$} means adding new vertices $y'$ and $z'$ and new edges $yy',y'z',z'z$. (Adding $\ell$ 2-threads is defined analogously.) We form an $(I,F_k)$-critical graph \Emph{$G_{k,t}$} as follows (Figure~\ref{sharpness-fig} shows $G_{k,3}$). Start with vertices $v_0,\ldots,v_t$, $w_0,\ldots,w_t$, $x_0,\ldots,x_t$, where $\{v_j,w_j,x_j\}$ induces $K_3$ for each $j\in\{0,\ldots,t\}$. Now add $\flr{\frac{k-2}2}$ pendent 3-cycles at $v_0$, $\flr{\frac{k-1}2}$ pendent 3-cycles at $w_0$, and $\flr{\frac{k}2}$ pendent 3-cycles at $x_0$. For each $j\in\{1,\ldots,t\}$, add $\flr{\frac{k-2}2}$ 2-threads from $v_{j-1}$ to $v_j$, $\flr{\frac{k-1}2}$ 2-threads from $v_{j-1}$ to $w_j$, and $\flr{\frac{k}2}$ 2-threads from $v_{j-1}$ to $x_j$. Finally, add a single pendent 3-cycle at $v_t$. \end{example} The proof that $G_{k,t}$ is $(I,F_k)$-critical is a bit tedious, but we include it below for completeness. It is not needed for the proof of our Main Theorem, so the reader should feel free to skim (or skip) it. Intuitively, if we start to color $G_{k,t}$ from the left, each $v_j$ will be in an $F$-component of order $k$; but for $v_t$, due to the extra pendent 3-cycle, we get an $F$-component of order $k+1$, a contradiction. When we delete some edge $e$, at some point we are able to use $I$ on some $v_{j'}$, and we continue using $I$ on each $v_j$ with $j\ge j'$. The coloring of $G_{k,t}-e$ is some combination of the two colorings at the bottom of Figure~\ref{sharpness-fig}. (It is interesting to note that the family $G_{2,t}$ is precisely those sharpness examples given by Borodin and Kostochka in~\cite{BK-improper}.) \begin{lem} $G_{k,t}$ is $(I,F_k)$-critical for all integers $k\ge 2$ and $t\ge 0$. \label{sharp-critical-lem} \end{lem} \begin{proof} Let $G^j_{k,t}$ denote the subgraph of $G_{k,t}$ induced by $v_0,\ldots,v_j,w_0,\ldots,w_j,x_0,\ldots,x_j$ along with their pendent 3-cycles and any 2-threads between them. We show by induction that $G^j_{k,t}$ has an $(I,F_k)$-coloring for each $j<t$; furthermore, in each such coloring $v_j$ is in an $F$-component of order $k$. Consider $G^0_{k,t}$.
Because of their pendent 3-cycles, $w_0$ and $x_0$ will have at least $\flr{\frac{k-1}2}$ and $\flr{\frac{k}2}$ $F$-neighbors (respectively) in every coloring of $G^0_{k,t}$. If both $w_0$ and $x_0$ are colored $F$, then they lie in an $F$-component of order at least $\flr{\frac{k-1}2} + \flr{\frac{k}2}+2 = k+1$, a contradiction. So one of $w_0$ and $x_0$ must be colored $I$. Thus, $v_0$ is colored $F$; so $v_0$ lies in an $F$-component of order at least $\flr{\frac{k-2}2}+\flr{\frac{k-1}2}+2=k$. To see that $G^0_{k,t}$ has a coloring, color $x_0$ with $I$ and $v_0$ and $w_0$ with $F$. For each 3-cycle pendent at $v_0$ or $w_0$, use $I$ on one vertex and $F$ on the other. For each 3-cycle pendent at $x_0$, use $F$ on both vertices. This proves the base case. Now we consider the induction step. Since $v_{j-1}$ is in an $F$-component of order $k$ in $G^{j-1}_{k,t}$, each neighbor of $v_{j-1}$ on a 2-thread to $\{v_j,w_j,x_j\}$ must be colored $I$; thus, each of \emph{their} neighbors must be colored $F$. Now the analysis is nearly identical to that for $j=0$. To extend the coloring to all of $G^j_{k,t}$, color $x_j$ with $I$ and color $v_j$ and $w_j$ with $F$. If we instead tried to color $v_j$ with $I$, then $w_j$ and $x_j$ must both be colored $F$, so they lie in an $F$-component of order $\flr{\frac{k}2}+\flr{\frac{k-1}2}+2=k+1$, a contradiction. To see that $G_{k,t}$ has no coloring, note that such a coloring would have $v_t$ in an $F$-component of order $k$ (as in the induction step above). However, due to the extra pendent 3-cycle at $v_t$, this creates an $F$-component of order $k+1$, a contradiction. Finally, we show that $G_{k,t}$ is $(I,F_k)$-critical. That is, for each $e\in E(G_{k,t})$, the subgraph $G_{k,t}-e$ has a coloring. By induction we first prove the stronger statement that if $e\in E(G^{j-1}_{k,t})$, then $G^j_{k,t}-e$ has a coloring with $v_j$ colored $I$. (The intuition is that once we get this for some $j'$, then we can ensure it for all $j\ge j'$, so we can finish the coloring.) Afterward, we use this to prove that $G_{k,t}-e$ has an $(I,F_k)$-coloring for every $e\in E(G_{k,t})$. Base case: $j=1$. If $e$ is not on a pendent 3-cycle at $v_0$, then $G^{j-1}_{k,t}-e$ has a coloring in which $v_0$ is colored $I$, as follows. Either (a) $e\in \{v_0w_0,v_0x_0\}$, so we can color two vertices in $\{v_0,w_0,x_0\}$ with $I$ or (b) $e=w_0x_0$ or $e$ is on a 3-cycle pendent at $w_0$ or $x_0$, so we can color both $w_0$ and $x_0$ with $F$. If we can color $v_0$ with $I$, then we extend to $G^1_{k,t}-e$ by using $F$ on all neighbors of $v_0$ on 2-threads, using $I$ on $v_1$ and neighbors of $w_1$ and $x_1$ on 2-threads, and using $F$ on all remaining vertices. Assume instead that $e$ is on a pendent 3-cycle at $v_0$. Now we color both endpoints of $e$ with $I$, so that $v_0$ is in an $F$-component of order only $k-1$. This enables us to use $F$ on some neighbor of $v_0$ on a 2-thread to $x_1$ (and use $I$ on its neighbor on that 2-thread). Now we use $F$ on $w_1$ and $x_1$, and use $I$ on $v_1$. This finishes the base case. The induction step is nearly identical to the base case. Suppose $e\in E(G^{j-1}_{k,t})$. If $e\in E(G^{j-2}_{k,t})$, then $G^{j-1}_{k,t}-e$ has a coloring in which $v_{j-1}$ uses $I$. We extend it to $G^j_{k,t}-e$ in exactly the same way as extending the coloring of $G^0_{k,t}-e$ to $G^1_{k,t}-e$ above. Otherwise $e\in E(G^{j-1}_{k,t})\setminus E(G^{j-2}_{k,t})$.
Recall, from above, that $G^{j-2}_{k,t}$ has a coloring, and it has $v_{j-2}$ in an $F$-component of order $k$. Now the extension to $G^{j-1}_{k,t}$ is nearly identical to coloring $G^0_{k,t}-e$ (from the base case at the start of the proof). This proves our stronger statement by induction. Finally, we prove that $G_{k,t}-e$ has a coloring for every $e\in E(G_{k,t})$. If $e$ is not on the 3-cycle pendent at $v_t$, then we can color $G_{k,t}-e$ with $I$ on $v_t$, so the extra pendent 3-cycle does not matter. If $e$ is on the pendent 3-cycle, then we color so that $v_t$ is in an $F$-component of order $k$ without the extra 3-cycle. However, now $v_t$ has only a single neighbor on that pendent 3-cycle, so we color that neighbor with $I$ and the remaining vertex with $F$. \end{proof} \subsection{Gadgets: Where the Coefficients Come From} \label{gadgets-sec} Here we explain our choice of weights in Definition~\ref{rho-defn}: $C_E$, $C_{U,j}$, $C_{F,j}$, $C_I$. Everything starts with our sharpness examples in Section~\ref{sharpness-sec}. We must choose $C_{U,0}$ and $C_E$ so that all of these examples have the same potential, i.e., $\rho^k(G_{k,t+1})=\rho^k(G_{k,t})$ for all positive $t$. Note that $|V(G_{k,t+1})|-|V(G_{k,t})|=3+2(\flr{\frac{k}2}+\flr{\frac{k-1}2}+\flr{\frac{k-2}2})=C_E$ and $|E(G_{k,t+1})|-|E(G_{k,t})|=3+3(\flr{\frac{k}2}+\flr{\frac{k-1}2}+\flr{\frac{k-2}2})=C_{U,0}$. \emph{This} is how we chose $C_E$ and $C_{U,0}$. For each of $I$, $F_j$ and $U_j$ ($j>0$) we construct a gadget, consisting of edges and vertices in $U_0$. Each gadget has a specified vertex $v$, and the gadget simulates $v$ having the desired precoloring; see Figure~\ref{gadgets-fig}. The easiest of these is $U_1$. The gadget is simply a 3-cycle. Suppose we add a pendent 3-cycle $C$ at any vertex $v$. In any coloring of $G$ (with $C$ added), at least one neighbor of $v$ on $C$ is colored $F$. Further, if $v$ is colored $F$, then we can color the remaining vertices of $C$ so that exactly one is in $F$. Thus, this gadget precisely simulates $v$ being in $U_1$. For each larger $j$, the gadget for $U_j$ simply adds $j$ pendent 3-cycles at $v$. Alternatively, we can define the gadgets recursively, where adding a pendent 3-cycle moves a vertex from $U_j$ to $U_{j+1}$. \begin{figure} \caption{Gadgets to simulate precoloring.} \label{gadgets-fig} \end{figure} But how do we simulate a vertex in $F_1$? It is simpler (surprisingly) to start with the gadget for $F_k$. This is just the subgraph of $G_{k,t}$ induced by $v_0,w_0,x_0$ and their pendent 3-cycles. Precisely, it is formed from a $K_3$ by adding $\flr{\frac{k-2}2}$ pendent 3-cycles at $v$ and adding $\flr{\frac{k-1}2}$ and $\flr{\frac{k}2}$ pendent 3-cycles at the two other vertices of the $K_3$; see the left end of Figure~\ref{sharpness-fig}. In the proof of Lemma~\ref{sharp-critical-lem}, we showed that any coloring of this subgraph must have $v_0$ in an $F$-component of order $k$. The potential of this subgraph is 0, so $C_{F,k}=0$. The gadget for $I$ is simply an edge to a vertex in $F_k$. So $C_I = C_{U,0}-C_E+C_{F,k} = \frac{C_E-3}2$. Finally, the gadget for $F_1$ is an edge to a vertex in $I$. So $C_{F,1}=C_{U,0}+C_I-C_E=C_E-3$. Adding a pendent 3-cycle at the specified vertex of the gadget for $F_j$ yields a gadget for $F_{j+1}$ and lowers the gadget's potential by 3 (it adds two vertices of $U_0$ and three edges, and $2C_{U,0}-3C_E=-3$). So we are tempted to say that $C_{F,j+1}=C_{F,j}-3$ for all $j$; but this is not quite right! We must simulate each $F_j$ as efficiently as possible. We can do slightly better for $F_{j'}$ when $j'=\flr{\frac{k+3}2}$.
The best gadget for $F_{j'}$ is shown in Figure~\ref{gadgets-fig}; it is formed from the gadget for $F_k$ by \emph{removing} $k-j'$ pendent 3-cycles at $v_0$. This gadget gives $C_{F,j'}=3k-3j'$ (rather than $C_E-3j'$, which we get if we build up from the gadget for $F_1$). Now for each $j>j'$, we add $j-j'$ pendent 3-cycles at $v$. Thus, $C_{F,j}=3k-3j$ for all $j\ge j'$. It is enlightening to notice that the Main Theorem is logically equivalent to its restriction to graphs that are precolored trivially. Since this is not needed for our proof of the Main Theorem, we are content to provide only a proof sketch. \begin{equiv-lem} The Main Theorem is true if and only if it is true when restricted to graphs with no precolored vertices. \end{equiv-lem} \begin{proof}[Proof Sketch] The case with a trivial precoloring is clearly implied by the general case. Now we show the reverse implication. Suppose the Main Theorem is false for some specific value of $k$. Let $G$ be a counterexample; among all counterexamples, choose one that minimizes $|V(G)|$. We will construct another counterexample $\widehat{G}$ (for the same value of $k$) with $U_0=V(\widehat{G})$. If $G$ has a vertex $v$ precolored $I$, then we form $G'$ from $G-v$ by coloring each neighbor of $v$ (in $G$) with $F$. Since $G$ is $(I,F_k)$-critical, so is $G'$. Since $G'$ is smaller than $G$, we know that $\rho^k_{G'}(V(G'))\le -3$. It is straightforward to check that $\rho^k_G(V(G))\le \rho^k_{G'}(V(G'))\le -3$ (see Lemma~\ref{noIFk-lem} for details); so $G$ is not a counterexample, a contradiction. Thus, $I=\emptyset$. Now we form a graph $\widehat{G}$ from $G$ by identifying each vertex $v\in V(G)$ colored $U_j$ or $F_j$ with the vertex $v$ in the corresponding gadget (and removing the precoloring). It is easy to check that $-2\le \rho^k_G(V(G)) = \rho^k_{\widehat{G}}(V(\widehat{G}))$; indeed, this is exactly why we chose the values we did for $C_{U,j}$ and $C_{F,j}$. So all that remains is to show that $\widehat{G}$ is $(I,F_k)$-critical. First, note that each gadget precisely simulates the precoloring. That is, every $(I,F_k)$-coloring of the gadget for each $U_j$ either gives $v$ at least $j$ $F$-neighbors or it colors $v$ with $I$; furthermore, some coloring of the gadget for $U_j$ colors $v$ with $I$ and some other coloring of the gadget for $U_j$ colors $v$ with $F$ and gives $v$ exactly $j$ $F$-neighbors. Similarly, every $(I,F_k)$-coloring of the gadget for each $F_j$ colors $v$ with $F$ and puts it in an $F$-component of order at least $j$; and some coloring of the gadget for $F_j$ colors $v$ with $F$ and puts it in a component of order exactly $j$. Second, note that deleting any edge from the gadget for $U_j$ allows a coloring in which $v$ has at most $j-1$ $F$-neighbors. Similarly, deleting any edge from the gadget for $F_j$ allows a coloring in which $v$ is in an $F$-component of order at most $j-1$. Thus, $\widehat{G}$ is $(I,F_k)$-critical. \end{proof} Since the Main Theorem is equivalent to its restriction to graphs with trivial precolorings, what is the point of allowing precolorings? The point is to order the graphs in a way that is more useful for induction (note that $|V(\widehat{G})|>|V(G)|$, so allowing precolorings enables us to simulate $\widehat{G}$ with a precolored graph $G$ that is smaller than $\widehat{G}$). In fact, we could write the whole proof without precolorings, but the partial order on the graphs needed for that version would be much harder to understand and keep track of.
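To make the coefficients of Definition~\ref{rho-defn} concrete, the short Python sketch below (purely illustrative; it is not used anywhere in the proof, and all names in it are ours) computes them for a given $k$ and verifies, for small $k$, several identities noted above: $2C_{U,0}/C_E=f(k)$, $C_{F,1}=C_E-3$, $C_{F,k}=0$, $C_I=C_{U,0}+C_{F,k}-C_E$, that the gadget for $F_k$ has potential $0$, and that the vertex and edge increments of the sharpness examples $G_{k,t}$ equal $C_E$ and $C_{U,0}$.
\begin{verbatim}
# Illustrative only: the coefficients C_E, C_{U,j}, C_{F,j}, C_I and the
# potential of a trivially precolored graph, with checks for small k.

def coefficients(k):
    C_E = 3*k - 1 if k % 2 == 0 else 3*k - 2
    C_U = {j: (3*C_E - 3)//2 - 3*j for j in range(k + 1)}
    C_I = (C_E - 3)//2
    C_F = {j: (C_E - 3*j if j <= (k + 1)//2 else 3*(k - j))
           for j in range(1, k + 1)}
    return C_E, C_U, C_F, C_I

def rho_trivial(k, n_vertices, n_edges):
    """rho^k of a trivially precolored graph (every vertex in U_0)."""
    C_E, C_U, _, _ = coefficients(k)
    return C_U[0]*n_vertices - C_E*n_edges

for k in range(2, 11):
    C_E, C_U, C_F, C_I = coefficients(k)
    f_k = 3 - 3/C_E                      # equals f(k), since C_E is
                                         # 3k-1 (k even) or 3k-2 (k odd)
    assert abs(2*C_U[0]/C_E - f_k) < 1e-9
    assert C_F[1] == C_E - 3 and C_F[k] == 0
    assert C_I == C_U[0] + C_F[k] - C_E
    s = k//2 + (k - 1)//2 + (k - 2)//2   # pendent 3-cycles per stage
    assert 3 + 2*s == C_E                # |V(G_{k,t+1})| - |V(G_{k,t})|
    assert 3 + 3*s == C_U[0]             # |E(G_{k,t+1})| - |E(G_{k,t})|
    # The gadget for F_k has C_E vertices, C_{U,0} edges, potential 0.
    assert rho_trivial(k, C_E, C_U[0]) == 0
\end{verbatim}
In particular, $\rho^k$ of a trivially precolored graph is $C_{U,0}|V(G)|-C_E|E(G)|$, which is exactly the quantity compared with $0$ in the proof that the Main Theorem implies Theorem~\ref{mad-thm}.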
\subsection{The Potential Method: A Brief Introduction} \label{potential-sec} The function $\rho^k$ is called the \Emph{potential function}, and the proof technique we employ in this paper is called the \EmphE{potential method}{3mm}. Here we give a brief overview. The essential first step in any proof using the potential method is to find an infinite family of sharpness examples. These examples determine a sharp necessary condition on $\textrm{mad}(G)$. So we use them to choose the coefficients $C_{U,0}$ and $C_E$, which define $\rho$. The necessary generalization (allowing precoloring and specifically all the different options $U_j$ and $F_j$) varies with the problem. For some problems, we do not use precoloring at all. In one case we allowed parallel edges~\cite{CY-IF}. Whenever a generalization allows precolorings, the coefficients are all determined by the gadgets, as discussed in the previous section (so it is essential to find the gadgets with highest potential). Behind every proof using the potential method is a typical proof using reducibility and discharging. Consider, for example, Theorem~\ref{mad-thm}. Suppose we are aiming to prove that theorem and we want to show that a certain configuration $H$ is reducible. Typically, we color $G-V(H)$ by induction and then show how to extend the coloring to $V(H)$. The reason we can color $G-V(H)$ by induction is that, by definition, $\textrm{mad}(G-V(H))\le \textrm{mad}(G)$; since $G-V(H)$ is smaller than $G$, the theorem holds for $G-V(H)$. \emph{The heart of the potential method is to show that we can slightly modify $G-V(H)$ before we color it by induction.} This modification (say, adding some $F$-neighbors) enables us to require more of our coloring of $G-V(H)$. Since this coloring of $G-V(H)$ is more constrained, we may be able to extend it to $V(H)$, even if we could not do so for an arbitrary $(I,F_k)$-coloring of $G-V(H)$. To make all of this precise, we need a lower bound on $\rho^k(R)$ for all $R\subsetneq V(G)$. Such a bound is called a Gap Lemma. Our modifications may lower $\rho^k(R)$, but if we can ensure that even this lowered potential is at least $-2$ for all $R$, then we know by induction that $G'$ cannot contain an $(I,F_k)$-critical subgraph, so it must have an $(I,F_k)$-coloring. Once we have proved that various configurations are reducible, we use discharging to show that a (hypothetical, smallest) counterexample $G$ to our Main Theorem cannot exist. We assign charge so that the assumption $\rho^k(V(G))\ge -2$ implies that the sum of all initial charges is at most 4. (This is analogous, for graphs with $\textrm{mad}<\alpha$, to using the initial charge $\textrm{ch}(v):=d(v)-\alpha$.) As a first step, we show that each vertex ends with nonnegative charge. With a bit more work, we show that if $G$ has no coloring, then its total charge exceeds 4, so $G$ is not a counterexample. Our proof of the Main Theorem naturally translates into a polynomial-time algorithm. This is typical of proofs using the potential method. The translation is mostly straightforward. The least obvious step is efficiently finding a set of minimum potential, which can be done using a max-flow/min-cut algorithm. We discuss algorithms at length in~\cite[Sections~2.3 and 5]{CY-IF}. \section{Starting the Proof of the Main Theorem} \label{proof-start-sec} Fix an integer $k\ge 2$. In what follows, we typically write $\rho$ rather than $\rho^k$. 
We say that a graph $G_1$ is \Emph{smaller} than a graph $G_2$ if either (a) $|V(G_1)|<|V(G_2)|$ or (b) $|V(G_1)|=|V(G_2)|$ and $|E(G_1)|<|E(G_2)|$. Assume that the Main Theorem is false for $k$. Let $G$ be a smallest counterexample. In this section, we prove a number of lemmas restricting the structure of $G$. \begin{lem} $I\cup U_k \cup F_k=\emptyset$. \label{noI-lem} \label{noFk-lem} \label{noUk-lem} \label{noIFk-lem} \end{lem} \begin{proof} Assume, to the contrary, that $I\cup U_k\cup F_k\ne\emptyset$. First, suppose there exists $v\in F_k$. Form $G'$ from $G$ by deleting $v$ and adding each neighbor of $v$ to $I$. For each $R'\subseteq V(G')$, subgraph $G'[R']$ has an $(I,F_k)$-coloring if and only if $G[R'\cup\{v\}]$ does. Since $G$ is $(I,F_k)$-critical, so is $G'$. Since $G'$ is smaller than $G$, by the minimality of $G$, we have $\rho_{G'}(V(G'))\le -3$. However, now $\rho_G(V(G)) \le \rho_{G'}(V(G'))+(C_{U,0}-C_I-C_E)d(v) = \rho_{G'}(V(G'))\le-3$. Thus, $G$ is not a counterexample. Suppose instead there exists $v\in I$. Form $G'$ from $G$ by deleting $v$ and adding each neighbor of $v$ to $F$ (note that $d(v)\ge 1$, since otherwise a coloring of $G-v$, which exists because $G$ is critical, extends to $G$ by coloring $v$ with $I$). For each $R'\subseteq V(G')$, subgraph $G'[R']$ has an $(I,F_k)$-coloring if and only if $G[R'\cup\{v\}]$ does. Since $G$ is $(I,F_k)$-critical, so is $G'$. Since $G'$ is smaller than $G$, by the minimality of $G$ we have $\rho_{G'}(V(G'))\le -3$. Coloring a vertex in $U_j$ with $F$ moves it to $F_{j+1}$, so decreases its potential by $C_{U,j}-C_{F,j+1}\le \frac{3C_E-3}2-3j-(C_E-3(j+1)) = \frac{C_E+3}2$. So $\rho_G(V(G)) \le \rho_{G'}(V(G'))+(\frac{C_E+3}2)d_G(v)-C_Ed_G(v)+C_I =\rho_{G'}(V(G'))+(\frac{3-C_E}2)d(v)+\frac{C_E-3}2\le \rho_{G'}(V(G'))\le -3$. Thus, $G$ is not a counterexample. Finally, suppose there exists $v\in U_k$. Form $G'$ from $G$ by coloring $v$ with $I$. For each $R'\subseteq V(G')$, subgraph $G'[R']$ has an $(I,F_k)$-coloring if and only if $G[R']$ does. Since $G$ is $(I,F_k)$-critical, so is $G'$. Note that $\rho_{G'}(V(G'))=\rho_G(V(G))-C_{U,k}+C_I > \rho_G(V(G))$. Now repeating the argument in the previous paragraph shows that $G$ is not a smallest counterexample. \end{proof} At various points in our proof, we will construct a graph $G'$ from some subgraph of $G$ by adding $F$-neighbors to one or more vertices. If this ever produces an uncolored vertex $v$ with at least $k$ $F$-neighbors, then we recolor $v$ with $I$, as in the final paragraph of the previous proof. \begin{lem} \label{no two adjacent F} For each edge $vw$, at least one of $v$ and $w$ is in $U$. \label{noFedge-lem} \end{lem} \begin{proof} Suppose, to the contrary, that $v \in F_i$ and $w \in F_j$. Form $G'$ from $G$ by contracting edge $vw$ to create a new vertex $v*w \in F_{i+j}$. Further, for each vertex $x$ adjacent to both $v$ and $w$, remove edges $vx$ and $wx$ and put $x$ into $I$. Contracting edge $vw$ decreases potential by $(C_{F,i}+C_{F,j}-C_E)-C_{F,i+j}\le 0$. Putting a vertex $x$ into $I$ and deleting two incident edges decreases potential by at most $C_{U,0}-2C_E-C_I=-C_E$; that is, it increases potential by at least $C_E$. Since $G'$ is smaller than $G$, we have $\rho_{G'}(V(G'))\le -3$. Thus, $\rho_{G}(V(G))\le \rho_{G'}(V(G'))\le -3$. So $G$ is not a counterexample. \end{proof} \begin{lem} \label{delta2-lem} For each $v\in V(G)$, either $d(v)\ge 2$ or $v\in F_j$ with $j\ge \flr{\frac{k+3}2}$. \end{lem} \begin{proof} Assume, to the contrary, that $d(v)\le 1$ and $v\notin F_j$ with $j\ge \flr{\frac{k+3}2}$.
Since $G$ is critical, it is connected, so $d(v)=1$; denote the unique neighbor of $v$ by $w$. If $v$ is uncolored, then color $G-v$ by the minimality of $G$. Now extend this coloring to $G$ by coloring $v$ with the color not used on $w$. So assume, by Lemma~\ref{noIFk-lem}, that $v$ is precolored $F_j$ for some $j\in \{1,\ldots,\flr{\frac{k+1}2}\}$. Lemma~\ref{noFedge-lem} implies that $w\in U_{\ell}$ for some $\ell$. Form $G'$ from $G-v$ by increasing the number of $F$-neighbors of $w$ by $j$. Note that $\rho_G(V(G))-\rho_{G'}(V(G'))\le C_{U,\ell}+C_{F,j}-C_E-C_{U,\ell+j}=0$. (If the new total number of $F$-neighbors of $w$ is at least $k$, then we color $w$ with $I$.) For each $R'\subseteq V(G')$, subgraph $G'[R']$ has an $(I,F_k)$-coloring if and only if $G[R'\cup\{v\}]$ does. Since $G$ is $(I,F_k)$-critical, so is $G'$. Since $G'$ is smaller than $G$, by the minimality of $G$, we have $\rho_{G'}(V(G'))\le -3$. However, now $\rho_G(V(G))\le\rho_{G'}(V(G'))\le-3$. Thus, $G$ is not a counterexample. \end{proof} Recall, from Section~\ref{potential-sec}, that the heart of any proof using the potential method is its gap lemmas. Our next definition plays a crucial role in the first of these. \begin{defn} \label{G'-defn} Given $R\subsetneq V(G)$ and an $(I,F_k)$-coloring $\vph$ of $G[R]$, we construct $G':=H(G,R,\vph)$\aside{$G'$, $H(G,R,\vph)$} as follows; see Figure~\ref{G'-fig}. Let $\overline{R}:=V(G)\setminus R$. Let $\nabla(R)\aside{$\overline{R}, \nabla(R)$}:=\{v\in R: \exists w\in \overline{R}, vw\in E(G)\}$. To form $G'$ from $G$, delete $R$ and add two new vertices $v_F,v_I$, where $v_F$ is precolored $F_k$ and $v_I$ is precolored $I$. (So $G'[\overline{R}]\cong G[\overline{R}]$.) For each $vw\in E(G)$ with $w\in \overline{R}$, $v\in R$ and $\vph(v)=F$, add to $G'$ the edge $wv_F$. For each $vw\in E(G)$ with $w\in \overline{R}$, $v\in R$ and $\vph(v)=I$, add to $G'$ the edge $wv_I$. Finally, delete $v_F$ or $v_I$ if it has no incident edges. So $V(G')\subseteq \overline{R}\cup\{v_F,v_I\}$. In each case, let $X:=V(G')\setminus \overline{R}$. \end{defn} \begin{figure} \caption{The construction of $G'$ from $G$, $R$, and $\vph$ in Definition~\ref{G'-defn}.} \label{G'-fig} \end{figure} \begin{lem}[Weak Gap Lemma] If $R\subsetneq V(G)$ and $|R|\ge 1$, then $\rho(R)\ge 1$. \label{weak-gap-lem} \end{lem} \begin{proof} Suppose, to the contrary, that there exists such an $R$ with $\rho(R)\le 0$. Choose $R$ to minimize $\rho(R)$. By Lemma~\ref{noFk-lem}, $F_k=\emptyset$. So each vertex has positive potential. Thus, $|R|\ge 2$ and $R$ induces at least one edge. Since $G$ is critical, $G[R]$ has an $(I,F_k)$-coloring $\vph$. Let $G':=H(G,R,\vph)$. If $G'$ has an $(I,F_k)$-coloring $\vph'$, then the union of $\vph$ and $\vph'$ is an $(I,F_k)$-coloring of $G$ (since each edge from $R$ to $\overline{R}$ has endpoints with opposite colors). So $G'$ has a critical subgraph $G''$; let $R':=V(G'')$ (it is possible that some vertices in $R'$ have fewer $F$-neighbors in $G''$ than in $G'$). Note that $|V(G')|\le |V(G)|$ and $|E(G')|<|E(G)|$; thus, $G'$ is smaller than $G$. As a result, $G''$ is smaller than $G$. Thus, $\rho_{G'}(R')\le \rho_{G''}(R')\le -3$. Since $G'[X]$ is edgeless, $\rho_{G'}(X')\ge 0$ for every $X'\subseteq X$. Now \begin{align} \rho_G((R'\setminus X)\cup R) & \le \rho_{G'}(R')-\rho_{G'}(R'\cap X)+\rho_G(R) \nonumber \\ & \le -3 + \rho_G(R) \\ & < \rho_G(R).
\nonumber \end{align} Since $\rho_G((R'\setminus X)\cup R)<\rho_G(R)$ and we chose $R$ to minimize $\rho_G(R)$, this implies that $(R'\setminus X)\cup R=V(G)$. But now $\rho(V(G))\le -3$, so $G$ is not a counterexample. \end{proof} The Strong Gap Lemma, which we prove next, is one of the most important lemmas in the paper. Very roughly, the proof mirrors that of the Weak Gap Lemma, but it is much more nuanced, which allows us to prove a far stronger lower bound (one that grows linearly with $k$). \begin{lem}[Strong Gap Lemma] If $R\subsetneq V(G)$ and $G[R]$ contains an edge, then $\rho(R) \geq \frac{C_E - 3}2$. \end{lem} Before proving the lemma formally, we give a proof sketch. Choose \Emph{$R$} to minimize $\rho(R)$ among $R\subsetneq V(G)$ such that $G[R]$ contains an edge. For the sake of contradiction, assume that $\rho(R) < \frac{C_E-3}2$; by integrality, $\rho(R)\leq \frac{C_E-5}2$. Let $t := \left\lfloor \frac{\rho(R) +2}3 \right\rfloor$\aside{$t$}. Again, by integrality, $3t \geq \rho(R)$. By the Weak Gap Lemma, $t \geq 1$. We essentially repeat the proof of the Weak Gap Lemma, but more carefully. In that proof it was crucial that $\rho_{G'}(V(G')\setminus\overline{R})\ge \rho_G(R)$. To ensure this now, we will show that $\rho_{G'}(V(G')\setminus \overline{R})\ge \frac{C_E-5}2$. To do this, before using induction to get an $(I,F_k)$-coloring $\vph$ of $G[R]$, we modify $G[R]$ slightly, to get a graph $G_R$. Denote $\nabla(R)$ by $v_1, \ldots, v_s$. We must ensure that in the coloring $\vph$ of $G[R]$ the components colored $F$ containing $v_1,\ldots,v_s$ do not each contain $k$ vertices. Specifically, if $F^1,...,F^m$ are the $F$-components of $\vph$ containing vertices $v_1,\ldots,v_s$, then we want to maximize $\sum_{j=1}^m(k-|F^j|)$. When constructing $G'$, this will allow us to create vertices $v_j$ that are precolored $F_{|F^j|}$, rather than $F_k$. When $j\le \lfloor\frac{k-2}2\rfloor$, recall that $C_{F,k-j}=3j$. Thus, to ensure that $\rho(X)\ge \rho(R)$, it suffices to have $\sum_{j=1}^m(k-|F^j|)\ge t$, since then $\rho(X)\ge \sum_{j=1}^m3(k-|F^j|)\ge 3t \ge \rho(R)$, as desired. We construct $G_R$ from $G[R]$ by adding ``fake'' neighbors precolored $F$ to vertices in $\nabla(R)$; in total, we must add at least $t$ such fake $F$-neighbors. More formally, we move vertices from $F_{a_j}$ to $F_{b_j}$ where $\sum b_j = t+\sum a_j$. The reason that we can color the resulting graph $G_R$ is that we chose $R$ to minimize $\rho(R)$. In particular, $\rho_G(Y)\ge \rho_G(R)$ for all $Y\subseteq R$ (that induces at least one edge). Thus, $\rho_{G_R}(Y)\ge \rho_G(Y)-3t\ge \rho_G(R)-3\lfloor\frac{\rho_G(R)+2}3\rfloor\ge-2$. Thus, $Y$ cannot induce a critical graph in $G_R$ or some subgraph of it; so, $G_R$ is colorable. Making all this precise requires more details, which we give below in Case 2. \begin{proof} We exactly repeat the first paragraph above; in particular, we define $R$ and $t$ as above. Before proceeding to the main case, we handle the easy case that $\rho(\nabla(R)) < \rho(R)$. \textbf{Case 1: $\boldsymbol{\rho(\nabla(R)) < \rho(R)}$.} By our choice of $R$, we know that $G[\nabla(R)]$ is edgeless; also $R\setminus \nabla(R)\ne\emptyset$. That is, $\nabla(R)$ is an independent separating set. Moreover, each vertex of $\nabla(R)$ is colored $F$, since $\min\{C_{U,k-1},C_I\} \ge \min\{\frac{3k-3}2,\frac{C_E-3}2\}> \frac{C_E-5}2\ge \rho(R)$. Form $\tilde{G}$ from $G$ by moving each vertex of $\nabla(R)$ into $F_k$. 
For each $S\subseteq V(G)$ such that $G[S]$ contains an edge, we have $\rho_{\tilde{G}}(S)\ge \rho_G(S) - \rho_G(\nabla(R)) > \rho_G(S)-\rho_G(R)\ge 0$. Furthermore, $\rho_{\tilde{G}}(S)\ge 0$ for each $S\subseteq V(G)$ such that $G[S]$ is edgeless, since each vertex has nonnegative potential. Thus, every proper induced subgraph of $\tilde{G}$ has an $(I,F_k)$-coloring. Denote the components of $G - \nabla(R)$ by $C^1, C^2, \ldots, C^r$. For each $j$, by induction we have an $(I,F_k)$-coloring of $\tilde{G}[C^j \cup \nabla(R)]$. The union of these colorings is a coloring of $G$, which contradicts that $G$ is a counterexample. \textbf{Case 2: $\boldsymbol{\rho(\nabla(R)) \geq \rho(R)}$}. Now we show how to form $G_R$ from $G[R]$ so that our $(I,F_k)$-coloring $\vph$ of $G_R$ ensures $\rho_{G'}(V(G')\setminus \overline{R})\ge \rho_G(R)$. Denote $\nabla(R)$ by \Emph{$v_1,\ldots,v_s$}. First suppose that some $v_{\ell}$ is uncolored; say $v_{\ell}\in U_{p_{\ell}}$. To form $G_R$ from $G[R]$, we move $v_{\ell}$ to $U_{p_{\ell}+t}$; if $p_{\ell}+t>k-1$, then we instead move $v_{\ell}$ to $I$. (We leave all other vertices in $\nabla(R)$ unchanged.) Now assume that each $v_j\in \nabla(R)$ is colored $F$. Say $v_j\in F_{p_j}$ for each $v_j\in \nabla(R)$. We pick nonnegative integers $\ell_j$ iteratively as follows. Let $\ell_j := \min\{k-p_j, t - \sum_{j''<j} \ell_{j''}\}$. Note that $\rho(\{v_j\})\le 3(k-p_j)$ for all $j$. So, if $\sum \ell_j\le t-1$, then $\rho(\nabla(R))\le 3(t-1)<\rho(R)$; this contradicts the case we are in. Thus, $\sum \ell_j=t$ (also, $\ell_j \geq 0$ for all $j$). Form $G_R$ from $G[R]$ by moving each $v_j$ into $F_{p_j+\ell_j}$. We claim $G_R$ has an $(I,F_k)$-coloring. Since $G_R$ is smaller than $G$, this will hold by induction once we show that $\rho_{G_R}(R') \geq -2$ for each $R' \subseteq R$. Assume, to the contrary, that $\rho_{G_R}(R') \leq -3$, for some $R'$. Now $$\rho_G(R') \le \rho_{G_R}(R')+3t \leq -3 + 3t = 3\left(\left\lfloor \frac{\rho_G(R) +2}3 \right\rfloor - 1\right) < \rho_G(R).$$ By our choice of $R$, this implies that $R'$ is edgeless. But this contradicts $\rho_{G_R}(R')\le -3$, since each vertex contributes nonnegative potential. Thus, $G_R$ has the desired $(I,F_k)$-coloring $\vph$. We construct $G'$ from $G$, $R$, and $\vph$ as follows. As described above, $G'$ contains $G[\overline{R}]$, to which we add new vertices that we call $X$. Let $F^1, F^2, \ldots, F^m$ denote the components of $F$ in $\vph$ that contain at least one vertex of $\nabla(R)$. For each $F^j$, let $(k-\ell_j')$ be the number of vertices in $F^j$ when $\vph$ is viewed as a coloring of $G[R]$ (not $G_R$); when constructing $G'$, add to $X$ a vertex $v_{F,j} \in F_{k-\ell_j'}$. If $\vph$ uses $I$ on one or more vertices in $\nabla(R)$, then add to $G'$ a single vertex $v_I \in I$. Next, we must show that $\rho_{G'}(X) \geq \rho_G(R)$. Recall that $X$ denotes the vertices in $G'$ that are not in $G$. By construction, $G'[X]$ is edgeless, so $\rho_{G'}(X) = \sum_{v_j\in X}\rho_{G'}(v_j)$. If $v_I\in X$, then $\rho_{G'}(X)\ge \rho_{G'}(\{v_I\}) = C_I = \frac{C_E-3}2 > \rho_G(R)$, so we are done. Thus, we assume that $v_I\notin X$. Essentially, we want to show that each $v_j\in X\cap F_{k-\ell'_j}$ adds $3\ell'_j$ to $\rho_{G'}(X)$. Since $\sum \ell'_j\ge t$, we get $\rho_{G'}(X) = \sum\rho_{G'}(\{v_j\}) = \sum 3\ell'_j \ge 3t \ge \rho_G(R)$. But there is a small complication. 
We only have $\rho_{G'}(\{v_j\}) = 3\ell'_j$ when $\ell'_j\le \lceil \frac{k-3}2 \rceil$; otherwise $\rho_{G'}(\{v_j\})=C_E-3(k-\ell'_j)$, which is $3\ell'_j-1$ when $k$ is even and $3\ell'_j-2$ when $k$ is odd. If $\ell'_j\ge \lceil \frac{k-1}2 \rceil$ for at least two values of $j$, then $\rho_{G'}(X)\ge 2(C_E-3(k-\lceil\frac{k-1}2\rceil)) \ge \frac{C_E-5}2\ge \rho_G(R)$, as desired. So assume that $\ell'_j\ge \lceil\frac{k-1}2\rceil$ for at most one value of $j$. If $k$ is even, then $\rho_G(R)\le \frac{C_E-5}2 = \frac{3k-6}2$, so $t = \lfloor \frac{3k-2}6 \rfloor = \lfloor\frac{k-2}2 \rfloor$. Thus, either $\ell'_j\le \lceil\frac{k-1}2\rceil$ for each $j$, or $\sum \ell'_j>t$. In both cases, $\rho_{G'}(X)\ge \rho_G(R)$. Assume instead that $k$ is odd. If $\rho_G(R)<\frac{C_E-5}2$, then $t\le \lfloor \frac{k-2}2\rfloor$, and the analysis is similar to that above for $k$ even. So we instead assume that $\rho_G(R)=\frac{C_E-5}2$ and $\ell'_{i}=\frac{k-1}2=t$ for some $i$ (with $\ell'_j=0$ for all other $j$). But in this case, $\rho_{G'}(X)=3t-2$ and $\rho_G(R)=\frac{3k-7}2 = \frac{3k-3}2-2 = 3t-2$. So, again $\rho_{G'}(X)\ge \rho_G(R)$, as desired. The graph $G'$ is smaller than $G$, since by construction $|V(G')| \leq |V(G)|$ (equality may be possible if $G[R] \cong K_{1,s-1}$) and $|E(G')| < |E(G)|$, since $G[R]$ contains an edge. Each vertex $v \in \overline{R}$ has at most one neighbor in $R$ since otherwise $$\rho(R \cup \{v\}) \leq \rho(R) + C_{U,0} - 2C_E \leq \rho(R) - \frac{C_E + 3}2 \le \frac{C_E-5}2-\frac{C_E+3}2 =-4.$$ If $R\cup \{v\}=V(G)$, then $\rho(V(G))\le -4$, which contradicts that $G$ is a counterexample. Otherwise, $R\cup\{v\}\subsetneq V(G)$ and $\rho(R\cup\{v\})<\rho(R)$, which contradicts our choice of $R$. So each $v\in\overline{R}$ has at most one neighbor in $R$. This means that $G'$ does not have an $(I,F_k)$-coloring, since such a coloring could be combined with $\vph$ to produce an $(I,F_k)$-coloring of $G$. So $G'$ contains an $(I,F_k)$-critical subgraph $G''$. Let $W'' := V(G'')$, and by induction $\rho_{G''}(W'') \leq -3$. Because $G$ is $(I,F_k)$-critical (and thus does not contain proper $(I,F_k)$-critical subgraphs) $W'' \cap X \neq \emptyset$. Since $G'[X]$ is edgeless, $\rho_{G'}(X')\ge 0$ for all $X'\subseteq X$. Let $W := (W''\setminus X) \cup R$. By submodularity, \begin{equation}\label{big gap case 2 eqn} \rho_G(W) \leq \rho_{G'}(W'') - \rho_{G'}(X \cap W'') + \rho_G(R) \leq (-3) - (0) + \rho_G(R). \end{equation} By our choice of $R$, this implies that $W = V(G)$. We are then in one of two cases, each of which improves the bound in \eqref{big gap case 2 eqn}. If $X \subset W''$, then $X\cap W''=X$, so we use the prior result that $\rho_{G'}(X) \geq \rho_G(R)$ to strengthen \eqref{big gap case 2 eqn} and conclude that $\rho_G(V(G))=\rho_G(W) \le \rho_{G'}(W'') \leq -3$, which is a contradiction. So assume that $X \setminus W'' \neq \emptyset$. Because $W = V(G)$, we have $\overline{R} \subset W''$. By construction, every vertex in $X$ has a neighbor in $\overline{R}$ in $G'$, and therefore at least one edge with an endpoint in $R$ and the other endpoint in $\overline{R}$ was not accounted for in \eqref{big gap case 2 eqn}. Thus, \eqref{big gap case 2 eqn} improves to $\rho_G(W) \leq \rho_G(R) -3 - C_E \le -\frac{C_E+11}2 < -3$, which is a contradiction. This finishes Case 2, which completes the proof. \end{proof} It will be convenient to write $U^i_j$\aside{$U^i_j, F^i_j$} for the set of vertices with degree $i$ in $U_j$; similarly for $F^i_j$. 
When we do discharging, vertices in $U^2_j$ will need lots of charge, particularly when $j$ is small. This motivates our next lemma. It says that when $j$ is small enough, such vertices do not exist. \begin{lem} \label{getting rid of very small charge} If $U_j^2\ne \emptyset$, then $j\ge \frac{C_E-7}6$. \end{lem} \begin{proof} Assume, to the contrary, that there exists $j\le \frac{C_E-9}6$ and $v\in U_j^2$. Denote the neighbors of $v$ by $v_1$ and $v_2$. Our basic plan is to delete $v$ and add $j+1$ $F$-neighbors to each of $v_1$ and $v_2$; call this new graph $G'$. We show that $G'$ has an $(I,F_k)$-coloring $\vph'$, and extend $\vph'$ to $G$ as follows. If both $v_1$ and $v_2$ are colored with $F$, then color $v$ with $I$. Otherwise, color $v$ with $F$. It is easy to see this yields an $(I,F_k)$-coloring of $G$, a contradiction. Mainly, we need to show that $\rho_{G'}(R')\ge -2$ for all $R'\subseteq V(G')$, which we do by the Strong Gap Lemma. This proves that $G'$ has the desired $(I,F_k)$-coloring. We also need to handle the possibility that our construction of $G'$ creates a component of $F$ with more than $k$ vertices. \textbf{Case 1: For each $\bs{v_i\in N(v)}$ either $\bs{v_i\in U}$ or else $\bs{v_i\in F_{\ell_i}}$ and $\bs{\ell_i+j+1\le k}$.} We follow the outline above, but need to clarify a few details. If adding $j+1$ $F$-neighbors to some $v_i\in U$ results in $v_i$ having at least $k$ $F$-neighbors, then we instead color $v_i$ with $I$. By design, we do not create any vertices in $U$ with more than $k-1$ $F$-neighbors or vertices in $F$-components of order more than $k$. We also need to check that we do not create any edges with both endpoints colored $I$. By Lemma~\ref{noI-lem}, no vertex of $G$ is colored $I$. So we only need to check that we do not use $I$ on both $v_1$ and $v_2$ when $v_1v_2\in E(G)$. Suppose that we do. Assume that $v_1\in U_{\ell_1}$ and $v_2\in U_{\ell_2}$. So $\ell_1+j+1\ge k$ and $\ell_2+j+1\ge k$. Now $\rho_G(\{v,v_1,v_2\})=C_{U,\ell_1}+C_{U,\ell_2}+C_{U,j}-3C_E = \frac{9C_E-9}2-3(j+\ell_1+\ell_2)-3C_E = \frac{3C_E-9}2-3(j+\ell_1+1)-3(\ell_2-1)\le \frac{C_E-9}2-3(\ell_2-1)\le \frac{C_E-9}2-3(k-2-\frac{C_E-9}6)=C_E-3-3k<-3$. This contradicts the Weak Gap Lemma. Thus, $G'$ has a valid precoloring. Now we must show that $\rho_{G'}(R')\ge -2$ for all $R'\subseteq V(G')$. If $G[R']$ is edgeless, then clearly $\rho(R')\ge 0$. So assume $G[R']$ has at least one edge. If $R'\cap N(v)=\emptyset$, then $\rho_{G'}(R')=\rho_G(R')\ge 1$, by the Weak Gap Lemma. Instead suppose that $|R'\cap N(v)|=1$. By the Strong Gap Lemma, $\rho_{G'}(R')\ge \rho_G(R')-3(j+1)\ge \frac{C_E-3}2-3(j+1)\ge \frac{C_E-3}2-3\frac{C_E-3}6=0$. Finally, suppose that $|R'\cap N(v)|=2$. Now the Weak Gap Lemma (and the fact that $\rho_G(V(G))\ge -2$) gives \begin{align*} \rho_{G'}(R')&\ge \rho_G(R'\cup\{v\})+2C_E-C_{U,j}-3(j+1)2\\ &= \rho_G(R'\cup\{v\})+2C_E-(\frac{3C_E-3}2-3j)-6(j+1)\\ &= \rho_G(R'\cup\{v\})+\frac{C_E+3}2-3j-6\\ &\ge \rho_G(R'\cup \{v\})+\frac{C_E}2+\frac{3}2-\frac{C_E-9}2-6\\ &= \rho_G(R'\cup \{v\})\\ &\ge -2. \end{align*} \textbf{Case 2: There exists $\bs{v_i\in N(v)}$ such that $\bs{v_i\in F_{\ell_i}}$ and $\bs{j+\ell_i\ge k}$.} If $v_1$ and $v_2$ are both precolored $F$, then we simply delete $v$ (since we can extend $\vph'$ to $G$ by coloring $v$ with $I$). So, we assume that $v_1\in F_{{\ell}_1}$ with $j+{\ell}_1\ge k$ and $v_2\in U_{{\ell}_2}$. Now we simply delete $v$ and color $v_2$ with $F$. 
We must again ensure that $\rho_{G'}(R')\ge -2$ for all $R'\subseteq V(G')$. If $v_2\notin R'$, then $\rho_{G'}(R')=\rho_G(R')\ge 1$. So, assume that $v_2\in R'$. If $G'[R']$ is edgeless, then clearly $\rho_{G'}(R')\ge 0$. So assume that $G'[R']$ has at least one edge. Now, similar to above: \begin{align*} \rho_{G'}(R')&\ge \rho_G(R'\cup\{v,v_1\})+2C_E-C_{F,{\ell}_1}-C_{U,j}-C_{U,{\ell}_2}+C_{F,{\ell}_2+1}\\ &\ge \rho_G(R'\cup\{v,v_1\})+2C_E-3(k-{\ell}_1)-({3C_E-3}-3(j+{\ell}_2))+(C_E-3({\ell}_2+1))\\ &= \rho_G(R'\cup\{v,v_1\})-3k+3{\ell}_1+3j+3{\ell}_2-3{\ell}_2\\ &= \rho_G(R'\cup\{v,v_1\})-3k+3(j+{\ell}_1)\\ &\ge\rho(G'\cup\{v,v_1\})\\ &\ge -2. \end{align*} \par \end{proof} It will turn out that when $j> \frac{C_E-5}6$ vertices in $U_j^2$ will have nonnegative initial charge. By Lemma~\ref{getting rid of very small charge}, we know that $U_j^2=\emptyset$ when $j< \frac{C_E-7}6$. Thus, to finish the proof we focus on the vertices in $U^2_j$ when $j=\frac{C_E-5}6$ (in Section~\ref{k-even-sec}, where $k$ is even) and when $j=\frac{C_E-7}6$ (in Section~\ref{k-odd-sec}, where $k$ is odd). \section{Finishing the Proof when $k$ is Even} \label{k-even-sec} Throughout this section, $k$ is always even. Recall that when $k$ is even $C_E=3k-1$.\aside{$C_E$} We let $\ell:=\frac{C_E-5}6=\frac{3k-6}6 = \frac{k}2-1$.\aside{$\ell$} \begin{lem} \label{no2-threads-k-even} $G$ does not contain adjacent vertices $v$ and $w$ with $v,w\in U_\ell^2$. \end{lem} \begin{proof} Assume the lemma is false. Let $v'$ and $w'$ denote the remaining neighbors of $v$ and $w$, respectively (possibly $v'=w'$). By symmetry between $v'$ and $w'$, we assume that $v'\notin F_j$ with $j\ge k-\ell$ (otherwise $\rho(\{v,w,v',w'\})\le 2C_{F,k-\ell}+2C_{U,\ell}-3C_E = 6\ell+2(\frac{3C_E-3}2-3\ell)-3C_E=-3$, which contradicts the Weak Gap Lemma). Form $G'$ from $G\setminus\{v,w\}$ by adding $\ell+1$ $F$-neighbors to $v'$. If $v'$ now has at least $k$ $F$-neighbors, then move $v'$ to $I$. (By our assumption on $v'$, we know that $v'$ is not in an $F$-component of order at least $k+1$.) Fix $R'\subseteq V(G')$. If $G'[R']$ has no edges, then $\rho_{G'}(R')\ge 0$, since each individual vertex has nonnegative potential. If $v'\notin R'$, then $\rho_{G'}(R')=\rho_{G}(R')\ge 1$, by the Weak Gap Lemma. Assume instead that $v'\in R'$ and $G[R']$ contains at least one edge. By the Strong Gap Lemma, $\rho_{G'}(R')\ge \rho_G(R')-3(\ell+1)\ge \frac{C_E-3}2-3(\ell+1)=\frac{C_E-3}2-\frac{C_E-5+6}2=-2$. Thus, by minimality, $G'$ has an $(I,F_k)$-coloring $\vph'$. We extend $\vph'$ to $v$ and $w$ as follows. If $\vph'(v')=I$, then color $v$ with $F$ and color $w$ with the color unused on $w'$. Similarly, if $\vph'(w')=I$, then color $w$ with $F$ and color $v$ with the color unused on $v'$. (If $\vph'(v')=\vph'(w')=I$, then $v$ and $w$ lie in an $F$-component with order $2(\ell)+2=\frac{C_E-5}3+2 = 2(\frac{k}2-1)+2=k$.) Suppose instead that $\vph'(v')=\vph'(w')=F$. Now color $w$ with $I$ and $v$ with $F$. Note that this is an $(I,F_k)$-coloring of $G$, because of the extra $F$-neighbors of $v'$ in $G'$. \end{proof} Now we use discharging to show that $G$ cannot exist. We define our initial charge function so that our assumption $\rho(V(G))\ge -2$ gives an upper bound on the sum of the initial charges. (Recall the values of $C_{U,j}$ and $C_{F,j}$ from Definition~\ref{coeff-defn}. By Lemma~\ref{noI-lem}, $I=\emptyset$.) 
Precisely, let \begin{itemize} \item $\textrm{ch}(v):=C_Ed(v)-2C_{U,j}=C_Ed(v)-2(\frac{3C_E-3}2-3j)$ \aside{$\textrm{ch}(v)$}\\ \mbox{\!~~~~~~~~~}$= C_E(d(v)-3)+3+6j$ for each $v\in U_j$; and \item $\textrm{ch}(v):=C_Ed(v)-2C_{F,j}=C_Ed(v)-2(C_E-3j)$ \\ \mbox{\!~~~~~~~~~}$=C_E(d(v)-2)+6j$ for each $v\in F_j$ with $j\le \ell+1$; and \item $\textrm{ch}(v):=C_Ed(v)-2C_{F,j}\ge C_Ed(v)-2(3k-3(\ell+2))$ \\ \mbox{\!~~~~~~~~~}$=C_Ed(v)-3k+6=C_E(d(v)-1)+5$ for each $v\in F_j$ with $j\ge \ell+2$\\ \mbox{\!~~~~~~~~~~~~}(and this inequality is strict when $j>\ell+2$). \end{itemize} This definition of $\textrm{ch}(v)$ yields the inequality \begin{align} \sum_{v\in V(G)}\textrm{ch}(v) = -2\rho(V(G))\le 4. \label{charge-sum-ineq-even} \end{align} \begin{table}[!h] \centering $ \begin{array}{c||c|c|c|c|c|c|c|c} d(v) & U_0 & U_1 & F_1 & U_{\ell} & U_{\ell+1} & U_{\ell+2} & F_{\ell+2}\\ \hline 1 & & & & & & & 4\\ 2 & & & 4 & 0 & 2 & 8 \\ 3 & 0 & 6 & C_E + 3 \\ 4 & C_E-1 & C_E+5 & 2C_E+2 \\ \end{array} $ \caption{ Lower bounds on the final charges (when $k$ is even).\label{k-even-charge-table}} \end{table} We use a single discharging rule, and let \Emph{$\textrm{ch}^*(v)$} denote the charge at $v$ after discharging. \begin{itemize} \item[(R1)] Each vertex in $U^2_{\ell}$ takes 1 from each neighbor. \end{itemize} \begin{lem} After discharging by (R1) above, each vertex $v$ with an entry in Table~\ref{k-even-charge-table} has $\textrm{ch}^*(v)$ at least as large charge as shown. Each other vertex $v$ has $\textrm{ch}^*(v)\ge 5$. \label{k-even-easy-charge-lem} \end{lem} \begin{proof} Note that $\textrm{ch}^*(v)\ge \textrm{ch}(v)-d(v)$ for all $v\in V(G)$. If $v\in U_j$, then $\textrm{ch}^*(v)\ge C_E(d(v)-3)+3+6j-d(v)=(C_E-1)(d(v)-3)+6j$. If $v\in F_j$ and $j\le \ell+1$, then $\textrm{ch}^*(v) \ge C_E(d(v)-2)+6j-d(v)=(C_E-1)(d(v)-2)+6j-2$. If $v\in F_j$ and $j\ge \ell+2$, then $\textrm{ch}^*(v)\ge C_E(d(v)-1)+5-d(v) = (C_E-1)(d(v)-1)+4$ (and this inequality is strict when $j>\ell+2$). By Lemma~\ref{noI-lem}, $I=\emptyset$; by Lemma~\ref{getting rid of very small charge}, $U^2_j=\emptyset$ when $j<\ell$ . By Lemma~\ref{delta2-lem}, each $v\in V(G)$ has $d(v)\ge 2$ unless $v\in F^1_j$ with $j\ge \ell+2$. If $v\in U^2_{\ell+1}$, then $\textrm{ch}^*(v)\ge -C_E+1+(C_E-5+6)=2$. Thus, if $v\notin U_{\ell}^2$, then the lemma follows from what is above. By Lemma~\ref{no2-threads-k-even}, if $v\in U^2_{\ell}$, then $v$ does not give away any charge. So $v$ finishes with $\textrm{ch}(v)+2(1) = -C_E+3+6\ell+2(1) = -C_E+5+(C_E-5)=0$. \end{proof} \begin{cor} $V(G)\subseteq U^2_{\ell}\cup U^2_{\ell+1}\cup U^3_0\cup U^4_0 \cup F^1_{\ell+2} \cup F^2_1$ (with $U^4_0=\emptyset$ when $k\ge 4$) and $2|U^2_{\ell+1}|+4|U^4_0|+4|F^1_{\ell+2}| + 4|F^2_1|\le 4$. \end{cor} \begin{proof} This follows directly from Lemma~\ref{k-even-easy-charge-lem} and~\eqref{charge-sum-ineq-even}. \end{proof} \begin{lem} $G$ has an $(I,F_k)$-coloring, and is thus not a counterexample. \end{lem} \begin{proof} We now construct an $(I,F_k)$-coloring of $G$. We color each $v\in U_{\ell}^2$ with $I$ and each $v\notin U^2_{\ell}$ with $F$. By Lemma~\ref{no2-threads-k-even}, we know that $U^2_{\ell}$ is an independent set. So we only must check that $G-U^2_{\ell}$ is a forest in which each component has order at most $k$. Suppose that $G-U^2_{\ell}$ contains a cycle, $C$. Clearly $C$ has no vertex in $U^4_0\cup F_1^2$, since such a vertex would end with charge at least 6, a contradiction. (Also, $C$ has no vertex in $F^1_{\ell+2}$.) 
Furthermore, each vertex in $U^2_{\ell+1}\cup U^3_0$ on such a cycle would end with charge at least 2. Since $G$ is simple, $C$ has length at least 3, so its vertices end with charge at least 6, a contradiction. Thus, $G-U^2_{\ell}$ is acyclic. If $U^4_0\cup F^1_{\ell+2} \cup F_1^2\ne \emptyset$, then $U^2_{\ell+1}=\emptyset$ and $|U^4_0\cup F^1_{\ell+2} \cup F^2_1|=1$. Furthermore, $G$ is a bipartite graph with $U^2_{\ell}$ as one part and $U_0^3\cup U^4_0 \cup F^1_{\ell+2} \cup F^2_1$ as another (otherwise $G$ has total charge at least 5, a contradiction). So $G$ has an $(I,F_k)$-coloring using $I$ on $U^2_{\ell}$ and $F$ on $U^3_0\cup U^4_0 \cup F^1_{\ell+2} \cup F^2_1$. Assume instead that $U^4_0 \cup F^1_{\ell+2} \cup F^2_1=\emptyset$. Recall that $G-U^2_{\ell}$ is a forest. Let $T$ denote a component of this forest, let $n_2:=|U^2_{\ell+1}\cap V(T)|$, and let $n_3:=|U^3_0\cap V(T)|$. The number of edges incident to $T$ is $(\sum_{v\in V(T)}d(v))-2|E(T)|=2n_2+3n_3-2(n_2+n_3-1)=n_3+2$. Recall that $T$ gives away 1 along each such edge. Each vertex counted by $n_3$ begins with 3, and each vertex counted by $n_2$ begins with 4. Thus the total final charge of vertices of $T$ is $4n_2+3n_3-(n_3+2) = 4n_2+2n_3-2$. Since $G$ has total charge at most 4, either $n_2=1$ and $n_3\le 1$ or else $n_2=0$ and $n_3\le 3$. Now color all vertices of $T$ with $F$, except when $n_2=0$, $n_3=3$, and $k=2$. In that case, the total final charge of $T$ is 4, so every other component of $G-U^2_{\ell}$ is an isolated vertex in $U^3_0$. Now color the leaves of $T$ with $F$ and the center vertex, say $v$, with $I$. Also recolor the neighbor of $v$ outside of $T$ with $F$. \end{proof} \section{Finishing the Proof when $k$ is Odd} \label{k-odd-sec} \subsection{Reducible Configurations when $k$ is Odd} \label{k-odd-reduc-sec} Throughout this section, $k$ is always odd. Recall that when $k$ is odd $C_E=3k-2$.\aside{$C_E$} Further, let $\ell:=\frac{C_E-7}6=\frac{3k-9}6=\frac{k-3}2$.\aside{$\ell$} (Note that $C_E$ and $\ell$ are defined differently from the previous section.) We will frequently use the fact that $2\ell+3=k$. \begin{lem} \label{no2-threads-k-odd} $G$ does not contain adjacent vertices $v$ and $w$ with $v\in U_\ell^2$ and $w\in U_\ell^2\cup U_{\ell+1}^2$. \end{lem} \begin{proof} Assume the lemma is false. Let $v'$ and $w'$ denote the remaining neighbors of $v$ and $w$, respectively (possibly $v'=w'$). Form $G'$ from $G\setminus\{v,w\}$ by adding $\ell+1$ $F$-neighbors to $v'$. (Suppose this puts $v'$ in an $F$-component of order at least $k+1$. In this case, $\rho(\{v',v,w\})\le C_{F,k-\ell}+C_{U,\ell}+C_{U,\ell+1}-2C_E = 3\ell+(\frac{3C_E-3}2-3\ell)+(\frac{3C_E-3}2-3(\ell+1))-2C_E = 3C_E-3-2C_E-3(\ell+1) = C_E-3-3(\frac{C_E-7}6+1) = \frac{C_E-5}2$, which contradicts the Strong Gap Lemma. So $v'$ is not in an $F$-component of order at least $k+1$.) Now we show that $\rho_{G'}(R')\ge -2$ for all $R'\subseteq V(G')$. Fix some $R'\subseteq V(G')$. If $v'\notin R'$, then $\rho_{G'}(R')=\rho_{G}(R')\ge 1$, by the Weak Gap Lemma. If $G'[R']$ has no edges, then $\rho_{G'}(R')\ge 0$, since each coefficient in Definition~\ref{rho-defn} is nonnegative. Assume instead that $v'\in R'$ and $G[R']$ has at least one edge. By the Strong Gap Lemma, $\rho_{G'}(R')\ge \rho_G(R')-3(\ell+1)\ge \frac{C_E-3}2-3(\ell+1)=\frac{C_E-3}2-\frac{C_E-7+6}2=-1$. Thus, $G'$ has an $(I,F_k)$-coloring $\vph'$. We extend $\vph'$ to $v$ and $w$ as follows. 
If $\vph'(v')=I$, then color $v$ with $F$ and color $w$ with the color unused on $w'$. Similarly, if $\vph'(w')=I$, then color $w$ with $F$ and color $v$ with the color unused on $v'$. (If $\vph'(v')=\vph'(w')=I$, then $v$ and $w$ lie in an $F$-component with order at most $2(\ell)+3=\frac{C_E-7}3+3 = \frac{3k-9}3+3=k$.) Suppose instead that $\vph'(v')=\vph'(w')=F$. Now color $w$ with $I$ and $v$ with $F$. Note that this is an $(I,F_k)$-coloring of $G$, because of the extra $F$-neighbors of $v'$ in $G'$. \end{proof} \begin{lem} \label{no3withall2s-lem} $G$ does not contain a vertex $v\in U_0^3$ with all three neighbors in $U_\ell^2$. \end{lem} \begin{proof} Suppose the lemma is false. Form $G'$ from $G$ by deleting $v$ and its three 2-neighbors. Since $G$ is critical, $G'$ has an $(I,F_k)$-coloring $\vph'$. Now we extend $\vph'$ to all of $G$. Color each 2-neighbor of $v$ with the color unused on its neighbor in $G'$. If all three 2-neighbors of $v$ are colored $F$, then color $v$ with $I$. Otherwise, color $v$ with $F$. This produces an $(I,F_k)$-coloring of $G$ (because $2\ell+3=k$). \end{proof} \begin{lem} \label{noadj3swith32nbrs-lem} $G$ does not contain adjacent vertices $v, w\in U_0^3$ such that $v$ has two neighbors in $U^2_\ell$ and $w$ has at least one neighbor in $U^2_\ell$. \end{lem} \begin{proof} Suppose the lemma is false. Denote the 2-neighbors of $v$ by $x$ and $y$, and denote a 2-neighbor of $w$ in $U^2_{\ell}$ by $z$. Denote by $w'$, $x'$, $y'$, and $z'$ the remaining neighbors of $w$, $x$, $y$, and $z$ (other than $v$, $w$, and $z$); see Figure~\ref{noadj3swith32nbrs-fig}. We want to form $G'$ from $G$ by deleting $y$ and contracting both edges incident to $z$; however, this creates parallel edges when $w'z'\in E(G)$, so we consider two cases. Before doing that, we briefly consider the possibility that $y=z$. If $y=z$, then by criticality we color $G-\{v,w,x,y/z\}$. To extend the coloring to $G$, we color $w$ with the color unused on $w'$ and color $x$ with the color unused on $x'$. If both $w$ and $x$ are colored $F$, then we color $v$ with $I$; otherwise, we color $v$ with $F$. Finally, if both $v$ and $w$ are colored $F$, then we color $y/z$ with $I$; otherwise, we color $y/z$ with $F$. It is easy to check that this coloring has no cycle colored $F$ and no edge with both endpoints colored $I$. It also has no $F$-component of size larger than $2\ell+3=k$. Thus, we assume $y\ne z$. \textbf{Case 1: $\bs{w'z'\notin E(G)}$.} Form $G'$ from $G$ by deleting $y$ and contracting both edges incident to $z$; the new vertex $w\!*\!z'$ formed from $w$ and $z'$ inherits the precoloring of $z'$. Consider $R'\subseteq V(G')$. If $w*z'\notin R'$, then $\rho_{G'}(R')=\rho_G(R')\ge 1$, by the Weak Gap Lemma. If $G'[R']$ has no edges, then $\rho_{G'}(R')\ge 0$, since each individual vertex has nonnegative potential. So assume that $w*z'\in R'$ and $G'[R']$ has at least one edge. Now \begin{align*} \rho_{G'}(R')&=\rho_G((R'\setminus\{w*z'\})\cup\{w,z,z'\})-C_{U,\ell}-C_{U,0}+2C_E\\ &=\rho_G((R'\setminus\{w*z'\})\cup\{w,z,z'\})-(3C_E-3-3\ell)+2C_E\\ &=\rho_G((R'\setminus\{w*z'\})\cup\{w,z,z'\})-C_E+3+\frac{C_E-7}2\\ &=\rho_G((R'\setminus\{w*z'\})\cup\{w,z,z'\})-\frac{C_E+1}2\\ &\ge -2, \end{align*} where the final inequality holds because the Strong Gap Lemma gives $\rho_G(R'\setminus\{w*z'\})\cup\{w,z,z'\})\ge \frac{C_E-3}2$. Thus, $G'$ has an $(I,F_k)$-coloring $\vph'$. 
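(As a quick sanity check of the arithmetic above, which is ours and not part of the original argument: take the smallest odd value $k=3$, so that $C_E=3k-2=7$ and $\ell=\frac{k-3}2=0$. Then $C_{U,0}=C_{U,\ell}=\frac{3C_E-3}2=9$, so the correction term in the display is $$-C_{U,\ell}-C_{U,0}+2C_E=-9-9+14=-4=-\frac{C_E+1}2,$$ and the Strong Gap Lemma bound $\frac{C_E-3}2=2$ indeed gives $\rho_{G'}(R')\ge 2-4=-2$.)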
\textbf{Case 2: $\bs{w'z'\in E(G)}$.} Again form $G'$ from $G$ by deleting $y$ and contracting both edges incident to $z$; the new vertex $w*z'$ formed from $w$ and $z'$ inherits the precoloring of $z'$. Since $w'z'\in E(G)$, this creates parallel edges between $w'$ and $w*z'$. If one of $w'$ and $w*z'$ is colored with $F$, then delete both of the parallel edges and color the other endpoint with $I$. (By Lemma~\ref{noFedge-lem}, at least one of $w'$ and $z'$ is not colored $F$. ) If neither $w'$ nor $w*z'$ is colored with $F$, then we delete one edge between $w'$ and $w*z'$ and add $\frac{k-1}2$ $F$-neighbors to each of them. (It is not possible that each of $w'$ and $w*z'$ ends with at least $k$ $F$-neighbors, so gets recolored $I$, since in that case $\rho_G(\{w,z,w',z'\})$ violates the Strong Gap Lemma.) Now we must show that $\rho_{G'}(R')\ge -2$ for all $R'\subseteq V(G')$. If $R'\cap\{w',w*z'\}=\emptyset$, then $\rho_{G'}(R')=\rho_G(R')\ge 1$ by the Weak Gap Lemma. So, we assume that $R'\cap\{w',w*z'\}\ne\emptyset$. We will compute $\rho_{G'}(R')-\rho_G((R'\setminus\{w*z'\})\cup\{w,z,w',z'\})$. For convenience, let $\alpha:=-2C_{U,0}-C_{U,\ell}+C_I+3C_E$. We have 5 cases to consider. \begin{enumerate} \item We added $F$-neighbors to both $w'$ and $w*z'$ and $|R'\cap\{w',w*z'\}|=2$. Now $\rho_{G'}(R')-\rho_G((R'\setminus\{w*z'\})\cup\{w,z,w',z'\}) = -C_{U,0}-C_{U,\ell}+3C_E-3(k-1) = \alpha+C_{U,0}-C_I-3(k-1) = \alpha+\frac{3C_E-3}2-\frac{C_E-3}2-(C_E-1)=\alpha+1$. \item We added $F$-neighbors to both $w'$ and $w*z'$ and $|R'\cap\{w',w*z'\}|=1$. Now $\rho_{G'}(R')-\rho_G((R'\setminus\{w*z'\})\cup\{w,z,w',z'\}) \ge -2C_{U,0}-C_{U,\ell}+4C_E-\frac{3k-3}2 = \alpha+C_E-C_I-\frac{3k-3}2 = \alpha+C_E-\frac{C_E-3}2-\frac{C_E-1}2=\alpha+2$. \item We moved $w'$ or $w*z'$ to $I$ and $|R'\cap\{w',w*z'\}|=2$. Now $\rho_{G'}(R')-\rho_G((R'\setminus\{w*z'\})\cup\{w,z,w',z'\}) \ge -2C_{U,0}-C_{U,\ell}+C_I+3C_E = \alpha$. \item We moved $w'$ or $w*z'$ to $I$ and $R'$ contains the one we moved to $I$, but not the other. Now $\rho_{G'}(R')-\rho_G((R'\setminus\{w*z'\})\cup\{w,z,w',z'\}) \ge -2C_{U,0}-C_{U,\ell}-C_{F,1}+C_I+4C_E = \alpha-C_{F,1}+C_E \ge \alpha+3$. \item We moved $w'$ or $w*z'$ to $I$ and $R'$ contains the one we did not move to $I$, but not the other. Now $\rho_{G'}(R')-\rho_G((R'\setminus\{w*z'\})\cup\{w,z,w',z'\}) \ge -2C_{U,0}-C_{U,\ell}+4C_E = \alpha-C_I+C_E > \alpha$. \end{enumerate} Note that $\alpha = -2C_{U,0}-C_{U,\ell}+3C_E+C_I = -3C_E+3-(C_E+2)+3C_E+\frac{C_E-3}2 = -\frac{C_E+1}2$. Now, by the Strong Gap Lemma, $\rho_{G'}(R')\ge \rho_G((R'\setminus\{w*z'\})\cup\{w,z,w',z'\})-\frac{C_E+1}2 = \frac{C_E-3}2-\frac{C_E+1}2=-2$. Thus, $G'$ again has an $(I,F_k)$-coloring $\vph'$. We will show how to extend $\vph'$ to $G$ (after possibly modifying it a bit). We first extend $\vph'$ to an $(I,F_k)$-coloring $\vph$ of $G-y$ by uncontracting the two edges incident to $z$, coloring both $w$ and $z'$ with $\vph'(w\!*\!z')$, and coloring $z$ with the opposite color. \begin{figure} \caption{Forming $G'$ from $G$ in the proof of Lemma~\ref{noadj3swith32nbrs-lem} \label{noadj3swith32nbrs-fig} \end{figure} Suppose that $\vph(y')=I$. If $\vph(v)=I$, then we color $y$ with $F$ and are done. So assume $\vph(v)=F$. If $\vph(w)=\vph(x)=I$, then we again color $y$ with $F$ and are done. If $\vph(w)=\vph(x)=F$, then we recolor $v$ with $I$ and are done as above. So assume that exactly one of $w$ and $x$ uses $I$ in $\vph$ and the other uses $F$. First suppose that $\vph(w)=I$ and $\vph(x)=F$. 
If $\vph(x')=I$, then we color $y$ with $F$ and are done. Instead assume that $\vph(x')=F$. Now we recolor $x$ with $I$ and color $y$ with $F$. Thus, we assume instead that $\vph(w)=F$ and $\vph(x)=I$. If both neighbors of $w$ other than $v$ are colored $I$, then we color $y$ with $F$ and are done. So assume that $z$ is the only neighbor of $w$ colored $I$. Let $s_1$ and $s_2$ denote the orders of the $F$-components of $\vph$ that contain $w$ and $z'$, respectively. If $s_1\le k-(\ell+1)$, then we color $y$ with $F$. If $s_2\le k-(\ell+1)$, then we recolor $z$ with $F$, recolor $w$ with $I$, and color $y$ with $F$. The key observation is that one of these two inequalities must hold. Suppose not. The $F$-component in $\vph'$ containing $w*z$ shows that $k\ge s_1+s_2-1$. If both inequalities above fail, then $k\ge s_1+s_2-1 \ge (k-(\ell+1)+1)+(k-(\ell+1)+1)-1 = 2k-2\ell-1 = 2k-(k-3)-1=k+2$, which is a contradiction. Suppose instead that $\vph(y')=F$. If $\vph(v)=F$, then we color $y$ with $I$ and are done. Assume instead that $\vph(v)=I$. First suppose $w'$ and $z$ are colored $I$. Now recolor $v$ with $F$ and color $y$ with $I$; finally, if $x'$ is colored $F$, then recolor $x$ with $I$. This gives an $(I,F_k)$-coloring of $G$. Suppose instead that $w'$ is colored $F$. Let $s_1$ and $s_2$ denote the orders of the $F$-components of $\vph$ that contain $w$ and $z'$, respectively. Suppose that $s_1\le k-(\ell+2)$. Color $y$ with $I$, recolor $v$ with $F$, and if $\vph(x')=F$, then recolor $x$ with $I$. This gives an $(I,F_k)$-coloring of $G$. Suppose instead that $s_2\le k-(\ell+1)$. Again color $y$ with $I$, recolor $v$ with $F$, and if $\vph(x')=F$, then recolor $x$ with $I$. Finally, recolor $w$ with $I$ and recolor $z$ with $F$. Again, this gives an $(I,F_k)$-coloring of $G$. The key observation is that one of these two inequalities must hold; the proof is identical to that in the previous paragraph, except that the first inequality is tighter by 1. \end{proof} \subsection{Discharging when $k$ is Odd} \label{k-odd-discharging-sec} Now we use discharging to show that $G$ cannot exist. It is helpful to remember that $I=\emptyset$, by Lemma~\ref{noI-lem}, and $U_j^2=\emptyset$ when $j<\ell$, by Lemma~\ref{getting rid of very small charge}. Furthermore, by Lemma~\ref{delta2-lem}, each $v\in V(G)$ satisfies $d(v)\ge 2$ unless $v\in F_j$ with $j\ge \frac{k+3}2$. We define our initial charge function so that our assumption $\rho(V(G))\ge -2$ gives an upper bound on the sum of the initial charges. (Recall the values of $C_{U,j}$ and $C_{F,j}$ from Definition~\ref{coeff-defn}.) Precisely, let \begin{itemize} \item $\textrm{ch}(v):=C_Ed(v)-2C_{U,j}=C_Ed(v)-2(\frac{3C_E-3}2-3j)$ \aside{$\textrm{ch}(v)$}\\ \mbox{\!~~~~~~~~~}$= C_E(d(v)-3)+3+6j$ for each $v\in U_j$; and \item $\textrm{ch}(v):=C_Ed(v)-2C_{F,j}=C_Ed(v)-2(C_E-3j)$ \\ \mbox{\!~~~~~~~~~}$=C_E(d(v)-2)+6j$ for each $v\in F_j$ with $j\le \frac{k+1}2$; and \item $\textrm{ch}(v):=C_Ed(v)-2C_{F,j}\ge C_Ed(v)-2(3k-\frac{3k+9}2)$ \\ \mbox{\!~~~~~~~~~}$=C_Ed(v)-3k+9=C_E(d(v)-1)+7$ for each $v\in F_j$ with $j\ge \frac{k+3}2$. \end{itemize} This definition of $\textrm{ch}(v)$ yields the inequality \begin{align} \sum_{v\in V(G)}\textrm{ch}(v) = -2\rho(V(G))\le 4. 
\label{charge-sum-ineq-odd} \end{align} \begin{table}[!h] \centering $ \begin{array}{c||c|c|c|c|c|c|c|c} d(v) & U_0 & U_1 & U_2 & F_1 & F_2 & U_{\ell} & U_{\ell+1} & U_{\ell+2}\\ \hline 2 & & & & 2 & 8 & 0 & 0 & 4\\ 3 & 0 & 3 & 9 & C_E & C_E+6\\ 4 & C_E-5 & C_E+1 & C_E+7 & 2C_E-2 \end{array} $ \caption{ Lower bounds on the final charges (when $k$ is odd).\label{k-odd-charge-table}} \end{table} We use two discharging rules, and let \Emph{$\textrm{ch}^*(v)$} denote the charge at $v$ after discharging. \begin{enumerate} \item[(R1)] Each $v\in U_\ell^2$ (2-vertex) takes 2 from each neighbor. \item[(R2)] Each $v\in U_0^3$ (3-vertex) with two neighbors in $U_\ell^2$ takes 1 from its other neighbor. \end{enumerate} \begin{lem} After discharging with rules (R1) and (R2) above, each vertex $v$ with an entry in Table~\ref{k-odd-charge-table} has $\textrm{ch}^*(v)$ at least as large charge as shown. Each other vertex $v$ has $\textrm{ch}^*(v)\ge 5$. \label{easy-charge-lem} \end{lem} \begin{proof} Note that $\textrm{ch}^*(v)\ge \textrm{ch}(v)-2d(v)$ for all $v\in V(G)$. If $v\in U_j$, then $\textrm{ch}^*(v)\ge C_E(d(v)-3)+3+6j-2d(v)=(C_E-2)(d(v)-3)+6j-3$. If $v\in F_j$ and $j\le \frac{k+1}2$, then $\textrm{ch}^*(v) \ge C_E(d(v)-2)+6j-2d(v)=(C_E-2)(d(v)-2)+6j-4$. If $v\in F_j$ and $j\ge \frac{k+3}2$, then $\textrm{ch}^*(v)\ge C_E(d(v)-1)+7-2d(v)=(C_E-2)(d(v)-1)+5$. If $v\notin U_{\ell}^2\cup U_{\ell+1}^2\cup U_0^3$, then the lemma follows from what is above. If $v\in U_{\ell}^2$, then $v$ has no neighbors in $U_{\ell}^2\cup U^2_{\ell+1}$, by Lemma~\ref{no2-threads-k-odd}. Thus, $\textrm{ch}^*(v)=-4+2(2)=0$. If $v\in U_{\ell+1}^2$, then $v$ has no neighbors in $U^2_{\ell}$, by Lemma~\ref{no2-threads-k-odd}. Thus, $\textrm{ch}^*(v)\ge 2-2(1)=0$. Finally, suppose that $v\in U_0^3$. By Lemma~\ref{no3withall2s-lem}, $v$ does not have three neighbors in $U^2_{\ell}$. A vertex in $U_0^3$ is \Emph{needy} if it has two neighbors in $U^2_{\ell}$. By Lemma~\ref{noadj3swith32nbrs-lem}, a vertex in $U_0^3$ cannot have both a neighbor in $U^2_{\ell}$ and a needy 3-neighbor. Thus, we have $\textrm{ch}^*(v)\ge \min\{3-2,3-2(2)+1,3-3(1)\}=0$. \end{proof} \begin{cor} $V(G)=U^2_{\ell}\cup U^2_{\ell+1}\cup U^2_{\ell+2}\cup F_1^2\cup U_0^3\cup U_1^3\cup U_0^4$. Furthermore $4|U^2_{\ell+2}|+2|F_1^2|+3|U_1^3|+(C_E-5)|U_0^4|\le 4$. (In particular, $U_0^4=\emptyset$ when $k\ge 5$.) \label{few-vertex-types-cor} \end{cor} \begin{proof} This corollary follows directly from Lemma~\ref{easy-charge-lem} and~\eqref{charge-sum-ineq-odd}. \end{proof} If we knew that $\sum_{v\in V(G)}\textrm{ch}(v)<0$, then Lemma~\ref{easy-charge-lem} would yield a contradiction. However, we only know that $\sum_{v\in V(G)}\textrm{ch}(v)\le 4$, so we are not done yet. We will now try to construct the desired coloring. We show that we can do this unless $\sum_{v\in V(G)}\textrm{ch}(v)> 4$, which gives the desired contradiction. Our basic plan is to color all of $U^2_{\ell}$ with $I$. This will force all neighbors of $U^2_{\ell}$ into $F$. Furthermore, all but a constant number of vertices in $V(G)\setminus U^2_{\ell}$ will go into $F$. To do this, we consider the components of $G\setminus U^2_{\ell}$. All but a constant number of these have size at most 4, and all have size at most 8. \begin{lem} Each component of $G\setminus U^2_{\ell}$ is one of the 30 shown below in Figures~\ref{cases1234-fig}-\ref{case6T678-fig}, and has final charge as shown. (The coloring of vertices as black and white can be ignored for now.) 
\end{lem} \begin{proof} Let $J$ be a component of $G\setminus U_{\ell}^2$. Let $\textrm{ch}^*(J):=\sum_{v\in V(J)}\textrm{ch}^*(v)$. We will prove that if $J$ is some component other than one of those shown, then either $G$ contains a reducible configuration or $\textrm{ch}^*(J)>4$; both possibilities yield a contradiction. \textbf{Case 1: $\boldsymbol{V(J)\cap U_0^4\ne \emptyset}$.} (By Corollary~\ref{few-vertex-types-cor}, this is possible only when $k=3$.) Assume $v\in V(J)\cap U_0^4$. If $V(J)=\{v\}$, then we are done. Otherwise, $\textrm{ch}^*(v)\ge 3$. So, by Table~\ref{k-odd-charge-table}, we know $V(J)\setminus \{v\}\subseteq U_0^3\cup U_{\ell+1}^2$. Let $w$ be a neighbor of $v$ in $J$. If $w\in U_{\ell+1}^2$, then $\textrm{ch}^*(J)\ge \textrm{ch}^*(v)+\textrm{ch}^*(w)\ge 4+1$, a contradiction. The same is true if $w\in U_0^3$ unless $w$ is needy (recall that $w$ cannot have both a neighbor in $U_{\ell}^2$ and a needy 3-neighbor, by Lemma~\ref{noadj3swith32nbrs-lem}). If $v$ has at most two needy 3-neighbors, then we are done. Otherwise, $\textrm{ch}^*(v)\ge 5$, a contradiction. \textbf{Case 2: $\boldsymbol{V(J)\cap U_{\ell+2}^2\ne \emptyset}$.} Assume $v\in V(J)\cap U_{\ell+2}^2$. If $V(J)=\{v\}$, then we are done. Otherwise, $\textrm{ch}^*(v)\ge 5$, a contradiction. \begin{figure} \caption{The 9 possible components of $G\setminus U_\ell^2$ in Cases 1--4.}\label{cases1234-fig} \end{figure} \textbf{Case 3: $\boldsymbol{V(J)\cap F_1^2\ne \emptyset}$.} Assume $v\in V(J)\cap F_1^2$. If $V(J)=\{v\}$, then we are done. Otherwise, let $w$ be a neighbor of $v$ in $J$. If $w\in U_{\ell+1}^2$, then $\textrm{ch}^*(J)\ge \textrm{ch}^*(v)+\textrm{ch}^*(w)\ge 4+1$, a contradiction. Thus, we must have $w\in U_0^3$. If $w$ is not needy, then $\textrm{ch}^*(v)+\textrm{ch}^*(w)\ge 4+1$, a contradiction. Thus, $v$ has one or two needy neighbors (and this is all of $J$). \textbf{Case 4: $\boldsymbol{V(J)\cap U_1^3\ne \emptyset}$.} Assume $v\in V(J)\cap U_1^3$. If $V(J)=\{v\}$, then we are done. Otherwise, let $w$ be a neighbor of $v$. If $w$ is not a needy 3-neighbor of $v$, then $\textrm{ch}^*(v)\ge 5$, a contradiction. Further, $v$ has at most one needy 3-neighbor. Thus, we are done. \begin{figure} \caption{The 4 possible components of $G\setminus U_{\ell}^2$ in Case 5.}\label{case5-fig} \end{figure} \textbf{Case 5: $\boldsymbol{V(J)\subseteq U_{\ell+1}^2\cup U_0^3}$ and $J$ contains a cycle.} Let $C$ be a cycle in $J$; see Figure~\ref{case5-fig}. It is easy to check that each cycle vertex finishes with charge at least 1; thus $|C|\le 4$. If $|C|=4$, then each cycle vertex is in $U_0^3$ and has a neighbor in $U_{\ell}^2$. Thus, $J\cong C_4$. Now suppose $|C|=3$. If $C$ contains a vertex in $U_{\ell+1}^2$, then it contains exactly one such vertex, and its other two vertices are in $U_0^3$, each with a neighbor in $U_{\ell}^2$. So $J\cong C_3$ (with a single vertex in $U_{\ell+1}^2$). So assume $C$ is a 3-cycle with all vertices in $U_0^3$. If no vertex on $C$ has a neighbor in $J\setminus C$, then we are done. Otherwise, exactly one cycle vertex does, and its neighbor in $J\setminus C$ is a needy 3-vertex. \textbf{Case 6: $\boldsymbol{V(J)\subseteq U_{\ell+1}^2\cup U_0^3}$ and $J$ is a tree.} Let $T:=J$. Let $n_2:=|U_{\ell+1}^2\cap V(T)|$ and $n_3:=|U_0^3\cap V(T)|$. Recall that no vertex in $U_{\ell+1}^2$ has a neighbor in $U_{\ell}^2$, by Lemma~\ref{no2-threads-k-odd}. So each leaf of $T$ is in $U_0^3$. Form $T'$ from $T$ by replacing each path whose internal vertices lie in $U_{\ell+1}^2$ by an edge. So $|V(T')|=n_3$.
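(For instance, and purely as an illustration of ours rather than part of the original proof: if $T$ is the path $u_1\,a\,u_2$ with $a\in U_{\ell+1}^2$ and $u_1,u_2\in U_0^3$, then the path $u_1\,a\,u_2$ is replaced by the single edge $u_1u_2$, so $T'$ is an edge with $n_3=2$ and $n_2=1$; this is one of the $|T'|=2$ possibilities treated below.)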
Let $\textrm{ch}^*(T'):=\textrm{ch}^*(T)-2|U_{\ell+1}^2\cap V(T)|$. Note that $\textrm{ch}^*(T')$ is precisely the sum of charges that would have ended on $T'$ if it had appeared in $G$ when we did the discharging. Since each vertex of $T'$ has degree 3 in $G$, the number of edges (externally) incident to $T$ is $3|T'|-\sum_{v\in T'}d_{T'}(v)=3|T'|-2(|T'|-1)=|T'|+2$. Since $\textrm{ch}(T')=3|T'|$, and $T'$ sends 2 along each incident edge, we have $\textrm{ch}^*(T')=3|T'|-2(|T'|+2)=|T'|-4=n_3-4$. Since $\textrm{ch}^*(T)\le 4$, we get that $n_3\le 8$. Recall that a vertex in $U_0^3$ with three neighbors in $U_{\ell}^2$ is reducible, by Lemma~\ref{no3withall2s-lem}. So $n_3\ge 2$. Note that $\textrm{ch}^*(T)=n_3-4+2n_2\le 4$. So $n_2 \le \frac{8-n_3}2$. For brevity, we henceforth denote $|V(T')|$ by $|T'|$. We consider the seven possibilities when $|T'|\in\{2,\ldots,8\}$. Suppose $|T'|=2$. By Lemma~\ref{noadj3swith32nbrs-lem}, the edge of $T'$ must be subdivided in $T$ by one or more vertices of $U_{\ell+1}^2$. We have $n_2\le 3$, which gives the 3 possibilities in Figure~\ref{case6T2-fig}. \begin{figure} \caption{The 3 possible components of $G\setminus U_\ell^2$ in Case 6 when $|T'|=2$.}\label{case6T2-fig} \end{figure} Suppose $|T'|=3$. By Lemma~\ref{noadj3swith32nbrs-lem}, no vertex in $U_0^3$ has both a needy 3-neighbor and a neighbor in $U_{\ell}^2$. Thus, each edge of $T'$ must be subdivided in $T$ by a vertex in $U_{\ell+1}^2$. Recall that $n_2\le \frac{8-n_3}2$. So $n_3=3$ and $n_2=2$. \begin{figure} \caption{The 7 possible components of $G\setminus U_{\ell}^2$ in Case 6 when $|T'|\in\{3,4,5\}$.}\label{case6T345-fig} \end{figure} Suppose $|T'|=4$. The only 4-vertex trees are $K_{1,3}$ and $P_4$. Recall that $n_2\le \frac{8-n_3}2=2$. If $T'\cong P_4$, then $T$ must contain a vertex in $U_{\ell+1}^2$ adjacent to each leaf. There is a unique such tree, a 6-vertex path with each neighbor of a leaf in $U_{\ell+1}^2$ (and the four other vertices in $U_0^3$). So assume $T'\cong K_{1,3}$. Now $n_2\in\{0,1,2\}$. This results in 1, 1, and 2 possibilities, with orders 4, 5, and 6, respectively. Suppose $|T'|=5$. The only 5-vertex subcubic trees are $P_5$ and $K_{1,3}$ with an edge subdivided. Now $n_2\le 1$. Thus, we cannot have $T'\cong P_5$, since then $T$ would have a vertex in $U_0^3$ with both a needy 3-neighbor and a neighbor in $U_{\ell}^2$, which contradicts Lemma~\ref{noadj3swith32nbrs-lem}. So $T'$ is formed from $K_{1,3}$ by subdividing a single edge. Now we have a single possibility for $T$, which is formed from $K_{1,3}$ by subdividing a single edge twice. Suppose $|T'|=6$. Now $n_2\le 1$. There are 4 subcubic trees on 6 vertices. However, two of them contain two copies of a leaf adjacent to a vertex of degree 2 (in the tree). Neither of these is a valid option for $T'$, by Lemma~\ref{noadj3swith32nbrs-lem}. Thus, either $T'$ is formed by subdividing a single edge of $K_{1,3}$ twice or else $T'$ is a double-star (adjacent 3-vertices, with 4 leaves). The first option yields one case, and the second yields 3 cases (since we might not add a vertex of $U_{\ell+1}^2$). Suppose $|T'|=7$. Now $n_2=0$; that is, $T'=T$. Thus, each leaf of $T'$ must be adjacent to a vertex of degree 3 in $T'$. Since $T'$ has a 3-vertex, it has at least 3 leaves. Since each leaf has a neighbor of degree 3 in $T$, tree $T$ has at least two 3-vertices. There is a single possibility. Suppose $|T'|=8$. The analysis is nearly the same as when $|T'|=7$. Now $T'$ must contain at least 4 leaves and at least two 3-vertices.
Either $T'$ has 5 leaves and three 3-vertices or else $T'$ has 4 leaves, two 2-vertices, and two 3-vertices. Each case gives a single possibility. \end{proof} \begin{figure} \caption{The 7 possible components of $G\setminus U_{\ell}^2$ in Case 6 when $|T'|\in\{6,7,8\}$.}\label{case6T678-fig} \end{figure} \begin{lem} $G$ has an $(I,F_k)$-coloring, and is thus not a counterexample. \end{lem} \begin{proof} We now construct an $(I,F_k)$-coloring of $G$. As we described above, our plan is to color all vertices of $U_{\ell}^2$ with $I$ (since they form an independent set, by Lemma~\ref{no2-threads-k-odd}). For each possible acyclic component $J$ of $G\setminus U_{\ell}^2$, shown in Figures~\ref{cases1234-fig}, \ref{case6T2-fig}, \ref{case6T345-fig}, and \ref{case6T678-fig}, we show how to extend this coloring to $J$. Those vertices drawn as white are colored with $F$ and those drawn as black are colored with $I$. Doing this preserves that $I$ is an independent set and $G[F]$ is a forest with at most $k$ vertices in each component. The only complication is the four possible components $J$ that contain a cycle, shown in Figure~\ref{case5-fig}. In fact, the second and fourth of these are fine. Suppose instead that $J\in\{C_3,C_4\}$ with all vertices in $U_0^3$. Now we color one vertex $v$ of $J$ with $I$ (and the rest with $F$). To preserve that $I$ is an independent set, we recolor the neighbor $w$ of $v$ in $U_{\ell}^2$ with $F$. We must ensure that $w$ does not become part of an $F$-component with more than $k$ vertices. Since $\textrm{ch}^*(J)\ge 3$, every other component $J'$ of $G\setminus U_{\ell}^2$ has $\textrm{ch}^*(J')\leq 1$; in particular, this is true of the component containing the neighbor of $w$ other than $v$. So $J'$ is either $K_{1,3}$ (with all vertices in $U_0^3$) or else $P_3$ (with its center vertex in $U_{\ell+1}^2$ and leaves in $U_0^3$). In each case for $J'$, the subgraph induced by its vertices colored $F$ is an independent set. Thus, recoloring $w$ with $F$ creates a tree colored $F$ with at most 2 vertices. \end{proof} \section*{Acknowledgments} Thank you to two anonymous referees who both provided helpful feedback. In particular, one referee read the paper extremely carefully and caught numerous typos, inconsistencies, and errors, and also suggested various improvements in the presentation. \end{document}
\begin{document} \title[Equivalent and attained version of Hardy's inequality in $\mathbb{R}^n$]{Equivalent and attained version\\ of Hardy's inequality in $\mathbb{R}^n$} \author{D.~Cassani \and B.~Ruf \and C.~Tarsi } \address[Daniele Cassani]{\newline\indent Dip. di Scienza e Alta Tecnologia \newline\indent Universit\`{a} degli Studi dell'Insubria \newline\indent and \newline\indent RISM--Riemann International School of Mathematics \newline\indent via G.B.~Vico 46, 21100 - Varese, Italy \newline\indent\texttt{[email protected]} } \address[Bernhard Ruf and Cristina Tarsi]{\newline \indent Dip. di Matematica \newline\indent Universit\`a degli Studi di Milano\newline\indent via C.~Saldini 50, 20133 - Milano, Italy\newline\indent \texttt{[email protected]}\newline\indent\texttt{[email protected]} } \date{\today} \begin{abstract} We investigate connections between Hardy's inequality in the whole space $\mathbb{R}^n$ and embedding inequalities for Sobolev-Lorentz spaces. In particular, we complete previous results due to \cite[Alvino]{Alv1} and \cite[Talenti]{Ta} by establishing optimal embedding inequalities for the Sobolev-Lorentz quasinorm $\|\nabla\,\cdot\,\|_{p,q}$ also in the range $p < q<\infty$, which remained essentially open since \cite{Alv1}. \par \noindent Attainability of the best embedding constants is also studied, as well as the limiting case when $q=\infty$. Here, we surprisingly discover that the Hardy inequality is equivalent to the corresponding Sobolev-Marcinkiewicz embedding inequality. Moreover, the latter turns out to be attained by the so-called ``ghost'' extremal functions of \cite[Brezis-Vazquez]{BV}, in striking contrast with the Hardy inequality, which is never attained. In this sense, our functional approach seems to be more natural than the classical Sobolev setting, answering a question raised in \cite{BV}. \end{abstract} \maketitle \section{Introduction} \noindent The classical Hardy inequality for smooth compactly supported functions in $\Omega\subseteq\mathbb{R}^n$ and for $1<p<n$, reads as follows \begin{equation}\label{Hi}\tag{$H_p$} \left(\frac{n-p}{p}\right)^p\int_{\Omega}\frac{|u|^p}{|x|^p}dx\leq \int_{\Omega} |\nabla u|^p dx \end{equation} where the constant in the left hand side of \eqref{Hi} is sharp for any sufficiently smooth domain containing the origin. Actually, Hardy proved in 1925 the one dimensional version of \eqref{Hi}, see \cite{KMP,D} for a historical insight into the subject. The original result has been extended and generalized by many authors in several directions which break through different aspects of Analysis, Geometry and PDE, among which we mention \cite{Ma,OK,M,BM,BMS,BV,CF,FT,GM,MMP}. \par While much progress has been achieved in understanding \eqref{Hi} and its generalizations, a basic question raised by Brezis and Vazquez in \cite{BV} on the attainability of the best constant in \eqref{Hi} has not been given a full answer yet. Indeed, in \cite{BV,Ma} it was found that additional lower order terms are admissible on the left hand side of \eqref{Hi}, as long as $\Omega$ stays bounded, and an extensive literature has been devoted to searching for such {\it remainder terms} in Hardy and Hardy-type inequalities, see \cite{FT,CF} and references therein. This phenomenon yields an obstruction to the attainability of the best constant in \eqref{Hi}, provided the domain $\Omega$ contains the origin. 
When $\Omega=\mathbb{R}^n$ the existence of a suitable class of remainders has been recently established in \cite{CF, SanoTakahashi}, see also \cite{GhoussoubMoradifam}. As mentioned, the presence of remainders prevents the Hardy inequality from being attained, and we refer also to the recent papers \cite{pincho1,pincho2} for a deeper understanding of this phenomenon. In particular, the Euler-Lagrange equation corresponding to the equality case in Hardy's inequality has no solution in the Sobolev space $\mathcal{D}^{1,p}(\mathbb{R}^n)$, defined as the completion of $C_c^\infty(\mathbb R^n)$ with respect to the norm $\|\nabla \, \cdot \, \|_p$; however, it is explicitly solved by a class of functions which do not belong to this space. The lack of a proper function space setting was pointed out in \cite{BV} and has inspired our work since the very beginning. \par Another interesting aspect of \eqref{Hi} is its equivalence to the optimal Sobolev embedding for the space $\mathcal{D}^{1,p}(\mathbb{R}^n)$ in the context of Lorentz spaces, namely \begin{equation}\label{A_in}\tag{$A_{p,p}$} \|u\|_{p^\ast, p}\leq S_{n,p}\|\nabla u\|_p \end{equation} which was obtained by Alvino in \cite{Alv1}; see also Peetre \cite{P}. The constant $$ S_{n,p}=\frac{p}{n-p}\frac{\left[\Gamma(1+n/2)\right]^{\frac{1}{n}}}{\sqrt{\pi}}= \frac{p}{n-p}\,\omega_n^{-\frac 1n} $$ is best possible and the embedding given by \eqref{A_in} is optimal with regard to the target space $L^{{p^*},p}$, which is the smallest possible among all rearrangement invariant target spaces \cite{BS,EKP} ($\Gamma$ denotes the standard Euler Gamma function and $\omega_n$ stands for the measure of the unit ball in $\mathbb{R}^n$). In this sense, \eqref{A_in} yields the optimal version of the Sobolev embedding theorem. \par The equivalence between \eqref{Hi} and \eqref{A_in} is a consequence of the P\'olya-Szeg\"o principle and the Hardy-Littlewood inequality, by which the left hand side of \eqref{Hi} does not increase under radially decreasing symmetrization and, when $u$ is radially decreasing, is equal, up to a normalizing constant, to the $p$-th power of the left hand side of \eqref{A_in}. \par \noindent Alvino in \cite{Alv1} actually proved the following inequalities \begin{equation}\label{A_in2}\tag{$A_{p,q}$} \|u\|_{p^*,q}\leq \frac{p}{n-p}\,\omega_n^{-\frac 1n} \|\nabla u\|_{p,q}, \quad 1\leq p<n \end{equation} related to the Sobolev-Lorentz embedding \begin{equation}\label{emb} \mathcal{D}_H^1L^{p,q}(\mathbb R^n) \hookrightarrow L^{p^*,q}(\mathbb R^n) \end{equation} with the restriction $$ 1\leq q \leq p\, , $$ see also \cite{CRT}. The homogeneous Sobolev-Lorentz space $\mathcal{D}_H^1L^{p,q}(\mathbb R^n)$ is obtained as the closure of smooth compactly supported functions with respect to the Lorentz quasi-norm $\|\nabla\,\cdot\,\|_{p,q}$. Note that the validity of the embedding \eqref{emb} for $1\leq q\leq +\infty$ is well known in the interpolation theory literature; more direct and shorter proofs can be found in \cite{Ta, ALT}. \par \noindent Let us point out that the embedding constant in \eqref{A_in2} is sharp, and it does not depend on the second Lorentz index $q$. Moreover, up to a normalizing factor, it turns out to be the Hardy constant. \vspace*{0.3cm} \subsection*{Main results} $ $ \par \noindent Our first goal is to extend the validity of the embedding inequality \eqref{A_in} to the values $p < q\leq \infty$, still preserving the optimal constant, thus completing the results of Alvino \cite{Alv1} and Talenti \cite{Ta} to the whole range $1 \le q \le \infty$.
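Throughout the statements below the sharp constant is written as $\frac{p}{n-p}\,\omega_n^{-\frac 1n}$; as an elementary check (ours, not taken from \cite{Alv1}), this agrees with the expression for $S_{n,p}$ above, since $\omega_n=\pi^{n/2}/\Gamma(1+n/2)$ and hence $$ \omega_n^{-\frac 1n}=\Big(\frac{\pi^{n/2}}{\Gamma(1+n/2)}\Big)^{-\frac 1n}=\frac{\left[\Gamma(1+n/2)\right]^{\frac 1n}}{\sqrt{\pi}}. $$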
\par In the case $q=\infty$ the functional setting is somewhat delicate, as no Meyers-Serrin type result holds for the corresponding homogeneous spaces. Thus, let us define for $1\leq p<\infty$ and $1\leq q \leq \infty$ the space $$ \mathcal{D}_W^1L^{p,q}(\mathbb R^n):=\left\{u\in L^{p^*,q}(\mathbb R^n): |\nabla u| \in L^{p,q}(\mathbb R^n)\right\}\ . $$ Then it turns out that for $q<\infty$ \begin{equation*} \mathcal{D}_H^1L^{p,q}(\mathbb R^n)= \mathcal{D}_W^1L^{p,q}(\mathbb R^n)=: \mathcal{D}^1L^{p,q}(\mathbb R^n) \end{equation*} whereas for $q=\infty$ one has $$\mathcal{D}_H^1L^{p,\infty}(\mathbb R^n)\subsetneq \mathcal{D}_W^1L^{p,\infty}(\mathbb R^n), $$ see \cite{C} and Section \ref{preliminaries} for more details. \par \begin{thm}\label{thm generAlv} Let $1\leq p<n$, $p< q\leq \infty$. Then the following inequality holds for any $u\in \mathcal{D}_W^1L^{p,q}(\mathbb R^n)$ \begin{equation}\label{SobLorineq-R^n}\tag{$A_{p,q}$} \|u\|_{p^{\ast},q}\leq \frac{p}{n-p}\,\omega_n^{-\frac 1n} \|\nabla u\|_{p,q} \end{equation} where the constant $\frac{p}{n-p}\,\omega_n^{-\frac 1n}$ is sharp. \end{thm} \par \noindent Then, surprisingly, we establish the equivalence between $(A_{p,\infty})$ and $(H_p)$. \par \begin{thm}\label{thm equivalence} Let $1\leq p<n$. Then, Hardy's inequality \begin{equation}\label{hardyequiv}\tag{$H_{p}$} \Big(\frac{n-p}{p}\Big)^p\int_{\mathbb R ^n}\frac{|u|^p}{|x|^p}dx\leq \int_{\mathbb R^n} |\nabla u|^p dx \end{equation} holds for any $u\in\mathcal{D}^{1,p}(\mathbb R^n)$ if and only if the Sobolev-Marcinkiewicz embedding inequality \begin{equation}\label{weakA}\tag{$A_{p,\infty}$} \|v\|_{p^{\ast},\infty}\leq \frac{p}{n-p}\,\omega_n^{-\frac 1n} \|\nabla v\|_{p,\infty} \end{equation} holds for any $v\in \mathcal{D}_W^1L^{p,\infty}(\mathbb R^n)$. \end{thm} \par Finally we study the attainability of $(A_{p,q})$. In particular, in the limiting case $q=\infty$, ($A_{p,\infty}$) turns out to be attained, in striking contrast to Hardy's inequality, despite their equivalence established in Theorem \ref{thm equivalence}. \begin{thm}\label{thm attainedornot} Let $1\leq p<n$ and $1\le q\leq \infty$. Then, the sharp constant in \eqref{SobLorineq-R^n} is attained if and only if $q=+\infty$. Moreover, an extremal function for $(A_{p,\infty})$ in $\mathcal D^1_WL^{p,\infty}$ is given by \begin{equation*} \psi(x)= |x|^{-\frac{n-p}{p}} \end{equation*} \end{thm} \par \noindent {\bf Remark:} The extremal function in Theorem \ref{thm attainedornot} is exactly the ``ghost'' extremal function of \cite[Brezis-Vazquez]{BV}. \par \subsection*{Overview} $ $ \par \noindent In Section \ref{preliminaries} we recall for convenience some well known facts and prove a few preliminary results. Then, in Section \ref{H_implies_A} we prove Theorem \ref{thm generAlv} by showing that \eqref{A_in2} for $p<q\leq \infty$ can be obtained as a consequence of \eqref{A_in}, which is actually equivalent to Hardy's inequality. The proof relies on suitable scaling properties, whereas the sharpness of the embedding constants is proved by inspection. As a byproduct of Theorem \ref{thm generAlv}, the sharp Marcinkiewicz type inequality $(A_{p,\infty})$ in $\mathcal{D}^1_WL^{p,\infty}$ is a consequence of $(A_{p,p})$, that is, of the Hardy inequality \eqref{Hi}. In Section \ref{weaktostrong}, we surprisingly also prove the converse, namely that the validity of $(A_{p,\infty})$ in $\mathcal{D}^1_WL^{p,\infty}$ implies Hardy's inequality \eqref{Hi} in $\mathcal{D}^{1,p}$.
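In this connection, the following short formal computation (ours, not reproduced from the paper, and written in the notation of Section \ref{preliminaries}) may help to see why the ghost profile saturates $(A_{p,\infty})$: for $\psi(x)=|x|^{-\frac{n-p}p}$ one has $\psi^*(t)=(t/\omega_n)^{-\frac{n-p}{np}}$ and $|\nabla\psi|^*(t)=\frac{n-p}p\,(t/\omega_n)^{-\frac1p}$, so that \begin{equation*} \|\psi\|_{p^*,\infty}=\sup_{t>0}t^{\frac1{p^*}}\psi^*(t)=\omega_n^{\frac1{p^*}}, \qquad \|\nabla\psi\|_{p,\infty}=\sup_{t>0}t^{\frac1p}|\nabla\psi|^*(t)=\frac{n-p}p\,\omega_n^{\frac1p}, \end{equation*} whence $\|\psi\|_{p^*,\infty}=\frac p{n-p}\,\omega_n^{\frac1{p^*}-\frac1p}\|\nabla\psi\|_{p,\infty}=\frac p{n-p}\,\omega_n^{-\frac1n}\|\nabla\psi\|_{p,\infty}$, which is equality in $(A_{p,\infty})$.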
In Section \ref{att} we prove that the best constant in $(A_{p,q})$ is never attained as long as $q<\infty$, but is attained at the endpoint of the Lorentz scale $q=\infty$. This is in striking contrast with Hardy's inequality, which is never attained despite being equivalent to $(A_{p,\infty})$. In this sense our functional framework, namely the Sobolev-Marcinkiewicz space $\mathcal{D}_W^1L^{p,\infty}$, seems to be more natural than the classical $\mathcal{D}^{1,p}$ setting in the tradition of Hardy type inequalities. This phenomenon sheds light on the importance of considering the pair (inequality, functional setting), as the information carried can be shared differently between the two components through equivalent formulations. Finally, in the Appendix we recall and adapt to our situation a by now standard technique for reducing embedding problems to the radial case, initially developed in \cite[Alvino-Lions-Trombetti]{ALT}. \par \section{Preliminaries}\label{preliminaries} \noindent For the convenience of the reader, let us briefly recall some basic facts on Lorentz spaces \cite{L} which will be widely used throughout the paper. \\ \noindent For a measurable function $u: \Omega \to \mathbb R^+$, let $u^*$ denote its decreasing rearrangement, which is defined as the distribution function of the distribution function $\mu_{u}$ of $u$, namely $$ u^*(s)=|\{t\in[0,+\infty)\,:\,\mu_{u}(t)>s\}|=\sup\{t>0\,:\, \mu_{u}(t)>s\}, \quad s\in [0, |\Omega|] $$ whereas the spherically symmetric rearrangement $u^{\#}(x)$ of $u$ can be defined as $$ u^{\#}(x)=u^{*}(\omega_n |x|^n), \quad x\in \Omega^{\#} $$ where $\Omega^{\#}\subset \mathbb{R}^n$ is the open ball centered at the origin which satisfies $|\Omega^{\#}|=|\Omega|$ and $\omega_n$ is the measure of the unit ball of $\mathbb R^n$. Clearly, $u^*$ is a nonnegative, non-increasing and right-continuous function on $[0,\infty)$. Moreover, the (nonlinear) rearrangement operator enjoys the following properties: \begin{enumerate} \item[i)] Positively homogeneous: $(\lambda u)^*=|\lambda|u^*,\quad\lambda\in\mathbb R$; \item[ii)] Sub-additive: $(u+v)^*(t+s)\leq u^*(t)+v^*(s),\quad t,s\geq 0$; \item[iii)] Monotone: $0\leq u(x)\leq v(x) \hbox{ a.e.~in } \Omega \Rightarrow u^{\ast}(t)\leq v^{\ast}(t),\quad t\in (0,|\Omega|)$; \item[iv)] $u$ and $u^*$ are equidistributed and in particular (Cavalieri's principle) $$\int_\Omega A(|u(x)|)\,dx=\int_0^{|\Omega|}A(u^*(s))\,ds$$ for any continuous function $A:[0,\infty]\rightarrow [0,\infty]$, nondecreasing and such that $A(0)=0$; \item[v)] The following inequality holds (Hardy-Littlewood): $$\int_\Omega u(x)v(x)\,dx\leq\int_0^{|\Omega|}u^*(s)v^*(s)\,ds$$ provided the integrals involved are well defined. \item[vi)] The map $u\mapsto u^*$ preserves Lipschitz regularity, namely $$ {}^{\ast}\colon Lip(\Omega)\longrightarrow Lip(0,|\Omega|). $$ \end{enumerate} \noindent Then, the Lorentz space $L^{p,q}(\Omega)$ is a rearrangement invariant Banach space \cite{BS} which can be defined as follows $$ L^{p,q}(\Omega) := \bigg\{ u : \Omega \to \mathbb R \ \hbox{measurable} \ \big| \ \|u\|_{p,q} := \Big(\int_0^{\infty}\left(u^{\ast}(t)t^{1/{p}}\right)^q\frac{dt}{t}\Big)^{\frac{1}{q}} < \infty \bigg\} $$ where the quantity $\|u\|_{p,q}$ is a quasi-norm which admits an equivalent norm.
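To illustrate the definition (our example, not taken from the paper): for the power function $u(x)=|x|^{-\frac np}$ on $\mathbb R^n$ one has $u^*(t)=(t/\omega_n)^{-\frac1p}$, so $u^*(t)\,t^{1/p}\equiv\omega_n^{1/p}$ and the integral defining $\|u\|_{p,q}$ diverges for every finite $q$; such borderline power functions will reappear below as typical elements of the weak space $L^{p,\infty}$.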
One clearly has $L^{p,p} = L^p$ and furthermore, with respect to the second index, Lorentz spaces satisfy the following inclusions (Lorentz scale) \begin{equation}\label{lor} L^{p,q_1} \subsetneq L^{p,q_2} \ , \ \hbox{ if } \ 1 \le q_1 < q_2 \le \infty \ \end{equation} For $q=\infty$ we obtain the so-called Marcinkiewicz or weak-$L^p$ space, whose quasi-norm is defined as follows \begin{equation}\label{wqn} \|u\|_{p,\infty}:=\sup_{t>0} t^{\frac{1}{p}}u^*(t)\ \end{equation} \noindent Notice that in particular one has $L^{p^*,p}\subsetneq L^{p^*,p^*}=L^{p^*}$. \vspace*{0.2cm} \noindent Sobolev-Lorentz spaces generalize classical Sobolev spaces. First order Sobolev-Lorentz spaces can be defined either as the closure of smooth compactly supported functions $u$ with respect to the norm $\|\nabla u\|_{p,q}+\|u\|_{p,q}$, or as the set of functions in $L^{p,q}(\Omega)$ whose distributional gradient also belongs to $L^{p,q}$. For the general theory of Sobolev-Lorentz spaces we refer to \cite{C} and references therein, and to \cite{CP1} for more general Sobolev spaces realized on rearrangement invariant Banach spaces. \noindent Here we focus on {\emph{homogeneous Sobolev-Lorentz}} spaces defined for $1\leq p<+\infty$ and $1\leq q\leq +\infty$ by \begin{equation*} \mathcal D_H^1L^{p,q}=\mathrm{cl}\big\{u\in \mathcal C^{\infty}_c(\mathbb R^n): \|\nabla u\|_{p,q}<\infty\big\}\ \end{equation*} Since $\mathcal D_H^1L^{p,q}\hookrightarrow L^{p^*,q}$, as a consequence of \cite{Alv1, Ta}, one may also consider the alternative definition given by \begin{equation*} \mathcal D_W^1L^{p,q}=\left\{u\in L^{p^*,q}(\mathbb R^n): \|\nabla u\|_{p,q}<\infty\right\}\ . \end{equation*} It turns out that the two spaces coincide as long as $q<\infty$ \cite[Section 4]{C}: $$ \mathcal D_W^1L^{p,q}= \mathcal D_H^1L^{p,q}=:\mathcal D^1L^{p,q} \quad \hbox{ if } 1\leq p,q<\infty $$ whereas in the {\emph{limiting case }} $q=\infty$, we have $$ \mathcal D_H^1L^{p,\infty} \subsetneq \mathcal D_W^1L^{p,\infty} $$ A function belonging to $\mathcal D_W^1L^{p,\infty}\setminus \mathcal D_H^1L^{p,\infty}$ is given by $u(x)=|x|^{-\frac{n-p}{p}}$ (\cite[Prop.~4.7]{C}). \noindent Sobolev-Lorentz spaces enjoy scaling invariance properties. As a consequence, the inequalities \eqref{SobLorineq-R^n}, and in particular the Hardy inequality \eqref{Hi}, are invariant under the action of the group of dilations, as established in the following \begin{prop}\label{dilation_inv} Let $\lambda>0$, $1\leq p<n$, $1\leq q\leq \infty$ and $u_\lambda(x):=u(\lambda x)$. Then, the following quotients \begin{eqnarray*} \displaystyle \frac{\|\nabla u_{\lambda}\|_{p,q}}{\|u_{\lambda}\|_{p^*,q}}\ \ , \quad &&u\in \mathcal{D}^1L^{p,q}(\mathbb{R}^n) \\ &&\\ \displaystyle \frac{\displaystyle\int_{\mathbb R^n} \frac{|u_{\lambda}|^p}{|x|^p}\,dx}{\|\nabla u_{\lambda}\|_p^p}\ \ , && u\in \mathcal D^{1,p}(\mathbb{R}^n) \end{eqnarray*} are constant with respect to $\lambda$. \end{prop} \begin{proof} Let us first consider the case $q<\infty$.
We have $u^\sharp_\lambda(|x|)=u^\sharp(\lambda |x|)$, $u^*_\lambda(t)=u^*(\lambda^n t)$ and \begin{align*} \|u_\lambda\|_{p^*,q}^q &=\int_{0}^{+\infty}\left[u^*_\lambda (t)t^{\frac 1{p^*}}\right]^q\frac{dt}{t}=\int_{0}^{+\infty}\left[u^* (\tau)(\lambda^{-n}\tau)^{\frac 1{p^*}}\right]^q\frac{d \tau}{\tau}\\ &=\lambda^{-\frac{nq}{p^*}} \|u\|_{p^*,q}^q \end{align*} From $\nabla u_\lambda(x)=\lambda (\nabla u)(\lambda x)$, we have $|\nabla u_\lambda|^\sharp(|x|)=\lambda |\nabla u|^\sharp(\lambda |x|)$ and $|\nabla u_\lambda|^*(t)= \lambda |\nabla u|^*(\lambda^n t)$. Hence \begin{align*} \|\nabla u_\lambda\|_{p,q}^q &=\int_{0}^{+\infty}\left[|\nabla u_\lambda|^* (t)t^{\frac 1{p}}\right]^q\frac{dt}{t}=\int_{0}^{+\infty}\left[\lambda |\nabla u|^* (\tau)(\lambda^{-n}\tau)^{\frac 1{p}}\right]^q\frac{d \tau}{\tau}\\ &=\lambda^{(-\frac{n}{p}+1)q} \|\nabla u\|_{p,q}^q \end{align*} and the first claim follows as $(-\frac{n}{p}+1)=-\frac{n}{p^*}$\ .\\ When $q=\infty$ we have \begin{align*} \|u_\lambda\|_{p^*,\infty}&=\sup_{t}t^{1/p^*}u_\lambda^*(t)= \lambda^{-n/p^*}\sup_{\tau}\tau^{1/p^*}u^*(\tau)\\ &= \lambda^{-n/p^*}\|u\|_{p^*, \infty} \end{align*} as well as \begin{align*} \|\nabla u_\lambda\|_{p,\infty}&=\sup_{t}t^{1/p}|\nabla u_\lambda|^*(t)= \lambda^{-n/p+1}\sup_{\tau}\tau^{1/p}|\nabla u|^*(\tau)\\ &= \lambda^{-n/p+1}\|\nabla u\|_{p, \infty} \end{align*} and the first claim follows as above. \noindent The second claim follows by observing that $$ \int_{\mathbb R^n}\frac{|u_\lambda|^p}{|x|^p}\,dx= \lambda^{-n+p} \int_{\mathbb R^n}\frac{|u|^p}{|x|^p}\,dx $$ together with $\|\nabla u_\lambda\|_p^p=\lambda^{-n+p}\|\nabla u\|_p^p$, which is the case $q=p$ of the computation above. \end{proof} \section{Proof of Theorem \ref{thm generAlv} }\label{H_implies_A} \noindent Next we will prove that the sharp embedding inequality $(A_{p,p})$ implies all the embedding inequalities $(A_{p,q})$, $p<q\leq +\infty$, still preserving the sharp embedding constants. The proof strongly relies on the reduction to the radial case, which is a rather delicate issue in the case $q>p$: indeed, the argument used in \cite{Alv1} for $q<p$ is based on a generalization of the P\'olya-Szeg\"o result, which cannot be applied here. However, following the approach of \cite{ALT}, one can prove that for any $u\in \mathcal C^{\infty}_c(\mathbb R^n)$ there exists $v\in \mathcal{D}^{1,\sharp}L^{p,q}(\mathbb R^n)$, namely a radial, monotone decreasing $v\in \mathcal{D}^{1}L^{p,q}(\mathbb R^n)$, such that $$ \|v\|_{p^*,q}\geq \|u\|_{p^*,q} \quad \hbox{ and } \quad \|\nabla v\|_{p,q}\leq \|\nabla u\|_{p,q} $$ This fact allows us to restrict to radial decreasing functions also in the case $p<q<+\infty$. Though the argument is standard by now, we outline the details in the Appendix. \par \noindent The proof of Theorem \ref{thm generAlv} is based on scaling arguments. Let us divide the proof into two steps. \noindent \textbf{Step 1.} The case $p<q<\infty$.
\noindent Let $ u\in \mathcal{D}^1L^{p,q}(\mathbb R^n)$ such that $u=u^\sharp$ and define the radially decreasing function $$ v(x):=\left[u(|x|^{\frac pq})\right]^{\frac qp}=\left[u^*(|x|^{\frac{np}q}\Omegan)\right]^{\frac qp} $$ so that $$ v^{\sharp}(|x|)=\left[u(|x|^{\frac pq})\right]^{\frac qp}\ , \ v^*(t)=v^{\sharp}\big((\textstyle{\fracrac{t}{\Omegan}})^{1/n}\big) =\left[u\big((\textstyle{\fracrac{t}{\Omegan}})^{\frac{p}{qn}}\big)\right]^{\frac qp}=\Big[u^*(t^{\frac pq}\Omegan^{\frac{q-p}{q}})\Big]^{\frac qp} $$ \noindent One has $v\in L^{p^*,p}(\mathbb{R}^n)$, indeed \begin{eqnarray}\label{vnorm2} \nonumber \|v\|_{p^*,p}&=&\left\{\int_0^{\infty}\left[v^*(t)t^{1/p^*}\right]^{p}\frac{dt}{t}\right\}^{1/p}\\ \nonumber &=& \left\{\int_0^{\infty}\left[(u^*(t^{\frac pq}\Omegan^{\frac{q-p}{q}}))^{\frac qp}t^{1/ p^*}\right]^{ p}\frac{dt}{t}\right\}^{1/p}\\ \nonumber &=&\left\{\int_0^{\infty}\left[u^*(t^{\frac pq}\Omegan^{\frac{q-p}{q}})t^{\frac{n-p}{nq}}\right]^{q}\frac{dt}{t}\right\}^{1/p}\\ \nonumber &=& \left(\frac qp\right)^{\frac 1p}\left\{\int_0^{\infty}\left[u^*(\tau)\tau^{\frac{1}{p^*}}\Omegan^{\frac{p-q}{q p^*}}\right]^{q} \frac{d\tau}{\tau}\right\}^{1/p}\\ \nonumber&=& \left(\frac qp\right)^{\frac 1p}\Omegan^{\frac{p-q}{p p^*}} \left\{\int_0^{\infty}\left[u^*(\tau)\tau^{\frac{1}{p^*}}\right]^{q}\frac{d\tau}{\tau}\right\}^{1/p}\\ &=& \left(\frac qp\right)^{\frac 1p}\Omegan^{\frac{p-q}{p p^*}}\|u\|^{\frac qp}_{p^*, q} \end{eqnarray} Moreover, $v\in \mathcal{D}^{1,p}(\mathbb{R}^n)$ as one has $$ |\nabla v(x)|=\big|u(|x|^{\frac pq})\big|^{\frac{q-p}{p}}\big|\nabla u(|x|^{\frac pq})\big||x|^{\frac{p-q}{q}} $$ so that \begin{eqnarray*} \|\nabla v\|_{p} &=&\left\{\int_{\mathbb R^n}|\nabla v|^{p}dx\right\}^{1/p}\\ &=&\left\{\int_{\mathbb R^n}\left[|u(|x|^{\frac pq})|^{\frac{q-p}{p}}|\nabla u(|x|^{\frac pq})||x|^{\frac{p-q}{q}}\right]^{p}dx\right\}^{1/p} \end{eqnarray*} here the fact $q>p$ is crucial. Next apply the (generalized) Hardy-Littlewood inequality to have $$ \int_{\mathbb R^n}|f(x)g(x)h(x)|dx\leq \int_0^{\infty}f^*(t)g^*(t)h^*(t)dt=\int_{\mathbb R^n}f^{\sharp}(x)g^{\sharp}(x)h^{\sharp}(x)dx $$ Since $q>p$ the following hold \begin{eqnarray*} \left(|u(|x|^{\frac pq})|^{\frac{q-p}{p}}\right)^{\sharp}(|y|)&=&|u|^{\frac{q-p}{p}}(|y|^{\frac pq})\\ \left(|x|^{\frac{p-q}{q}}\right)^{\sharp}(|y|)&=&|y|^{\frac{p-q}{q}}\\ |\nabla u(|x^{\frac pq}|)|^{\sharp}(|y|)&=&|\nabla u|^{\sharp}(|y|^{\frac pq})\ . 
\end{eqnarray*} Thus \begin{eqnarray*} \|\nabla v\|_{p}&\leq &\left\{\int_{\mathbb R^n}\left[ |u|^{\frac{q-p}{p}}(|x|^{\frac pq})|\nabla u|^{\sharp}(|x|^{\frac pq})|x|^{\frac{p-q}{q}}\right]^{p}dx\right\}^{1/p}\\ &=&\left\{\frac{q}{p}\int_{\mathbb R^n}\left[ |u|^{\frac{q-p}{p}}(|y|)|\nabla u|^{\sharp}(|y|)|y|^{\frac{p-q}{p}}\right]^{p}|y|^{n\frac{q-p}{p}}dy\right\}^{1/p}\\ &=&\left(\frac qp\right)^{\frac 1p}\left\{\int_{\mathbb R^n}|u|^{q-p}(|y|)(|\nabla u |^{\sharp})^{p}(|y|)||y|^{\frac{(n-p)(q-p)}{p}}dy\right\}^{1/p} \end{eqnarray*} Let us now multiply and divide by $|y|^{n\frac{q-p}{q}}$ to get \begin{eqnarray}\label{nablavnorm2} \nonumber \|\nabla v\|_{p}&\leq& \left(\frac qp\right)^{\frac 1p}\left\{\int_{\mathbb R^n}\left[|u|^{q-p}|y|^{\frac{(n-p)(q-p)}{p}}\cdot |y|^{-n\frac{q-p}{q}} \right]\left[(|\nabla u|^{\sharp})^{p}(|y|)|y|^{n\frac{q-p}{q}}\right]dy\right\}^{1/p}\\ \nonumber &=&\left(\frac qp\right)^{\frac 1p}\left\{\int_{\mathbb R^n}\left[|u|^{q-p}\cdot |y|^{\frac{(q-p)[q(n-p)-np]}{qp}}\right]\left[(|\nabla u|^{\sharp})^{p}(|y|)|y|^{n\frac{q-p}{q}}\right]dy\right\}^{1/p}\\ \nonumber &\leq& \left(\frac qp\right)^{\frac 1p} \||u|^{q-p}\cdot |y|^{\frac{(q-p)[q(n-p)-np]}{qp}}\|^{1/p}_{\frac{q}{q-p}} \|(|\nabla u|^{\sharp})^{p}(|y|)|y|^{n\frac{q-p}{q}}\|^{1/p}_{\frac qp}\\ \nonumber & =& \left(\frac qp\right)^{\frac 1p} \left\{\int_{\mathbb R^n} |u|^q |y|^{\frac{q(n-p)-np}{p}} dy\right\}^{\frac{q-p}{qp}}\left\{\int_{\mathbb R^n} (|\nabla u|^{\sharp})^q |y|^{n\frac{q-p}{p}} dy\right\}^{\frac{1}{q}}\\ \nonumber & =& \left(\frac qp\right)^{\frac 1p}\left\{\Omegan^{-\frac{q(n-p)-np}{np}}\int_0^{\infty} (u^*)^q t^{\frac{q(n-p)-np}{np}} dt\right\}^{\frac{q-p}{qp}}\cdot \\ \nonumber &&\qquad \qquad \cdot \left\{\Omegan^{-\frac{q-p}{p}}\int_0^{\infty} (|\nabla u|^{*})^q t^{\frac{q-p}{p}} dt\right\}^{\frac{1}{q}} \\ &=&\left(\frac qp\right)^{\frac 1p} \Omegan^{-\frac{(n-p)(q-p)}{np^2}}\|u\|_{p^*,q}^{\frac{q-p}{p}}\cdot\|\nabla u\|_{p,q} \end{eqnarray} Now combine Alvino's inequality $$ \|v\|_{p^*, p}\leq \frac{p}{n-p}\Omegan^{-1/n}\|\nabla v\|_{p} $$ with \eqref{vnorm2} and \eqref{nablavnorm2} to obtain \begin{eqnarray*} \|u\|_{p^*,q}&=&\left(\frac pq\right)^{\frac 1q}\Omegan^{\frac{(q-p)(n-p)}{nqp}}\|v\|_{p^*, p}^{\frac{p}{q}}\\ &\leq &\left(\frac pq\right)^{\frac 1q}\Omegan^{\frac{(q-p)(n-p)}{nqp}}\left(\frac{p}{n-p}\right)^{\frac pq}\Omegan^{-\frac p{nq}}\|\nabla v\|_{p}^{\frac pq}\\ &\leq & \left(\frac{p}{n-p}\right)^{\frac pq}\Omegan^{-\frac p{nq}} \|u\|_{p^*,q}^{\frac{q-p}{q}}\cdot\|\nabla u\|_{p,q}^{\frac pq} \end{eqnarray*} and thus our claim \begin{eqnarray*} \|u\|_{p^*,q}&\leq & \frac{p}{n-p}\,\Omegan^{-\frac 1n} \|\nabla u\|_{p,q} \end{eqnarray*} \noindent \textbf{Step 2.} The case $q=\infty$. \noindent Let $ u\in \mathcal{D}_W^{1}L^{p,\infty}(\mathbb R^n)$. Let us define the auxiliary function $$ v(r)=r^{n/p^*}u^{\sharp}(r) $$ Then \begin{equation}\label{vinfty} \|u\|_{p^*, \infty}=\sup_{t>0} t^{1/p^*}u^*(t)=\Omegan^{1/p^*}\sup_{r>0}r^{n/p^*}u^\sharp(r)=\Omegan^{1/p^*}\|v\|_{\infty} \end{equation} Since $v$ has finite $L^\infty$ norm, it coincides with the limit of the $L^\gamma$ norm of $v$, as $\gamma \to +\infty$. 
For $1<\tilde p <n$ by applying inequality \eqref{SobLorineq-R^n} we have \begin{align*} \|v\|_\gamma^\gamma&=\int_0^{+\infty}(u^\sharp)^\gamma(r) r^{\fracrac{n-p}{p}\gamma}dr=\int_0^{+\infty}\left[u^\sharp(r)r^{\fracrac{n-p}{p}+\fracrac{1}{\gamma}}\right]^{\gamma}\fracrac{dr}{r}\\ &=\int_0^{+\infty}\left[u^\sharp(r)r^{n/\tilde{p}^*}\right]^{\gamma}\fracrac{dr}{r} =\left[ n\Omegan^{\gamma/\tilde{p}^*}\right]^{-1}\|u\|_{\tilde p^*, \gamma}^\gamma \quad \left(\hbox{where } \fracrac{n}{\tilde p}=\fracrac np + \fracrac 1{\gamma}\right)\\ &\leq \left[ n\Omegan^{\gamma/\tilde{p}^*}\right]^{-1}\left[ \fracrac{\tilde p}{n-\tilde p}\Omegan^{-1/n}\right]^\gamma \|\nabla u\|_{\tilde p, \gamma}^\gamma \\ &= \left[ n\Omegan^{\gamma/\tilde{p}^*}\right]^{-1}\left[ \fracrac{\tilde p}{n-\tilde p}\Omegan^{-1/n}\right]^\gamma n\Omegan^{\gamma/\tilde p} \int_0^1\left[|\nabla u|^\sharp r^{n/\tilde p}\right]^\gamma \fracrac{dr}{r}\\ &=\Omegan^{-\gamma/\tilde p^* + \gamma/\tilde p}\left[\fracrac{\tilde p}{n-\tilde p}\Omegan^{-1/n}\right]^\gamma\int_0^1\left[|\nabla u|^\sharp r^{n/ p}\right]^\gamma dr \quad \left(\hbox{since } \fracrac{n\gamma}{\tilde p}-1=\fracrac{n \gamma}{p}\right)\\ &= \Omegan^{-\gamma/\tilde p^* + \gamma/\tilde p}\left[\fracrac{\tilde p}{n-\tilde p}\Omegan^{-1/n}\right]^\gamma \||\nabla u|^\sharp r^{n/p}\|_\gamma^\gamma \end{align*} Combining the last inequality with \eqref{vinfty} we obtain \begin{multline*} \|u\|_{p^*, \infty}=\Omegan^{1/p^*}\|v\|_{\infty}= \Omegan^{1/p^*}\cdot\lim_{\gamma\to +\infty}\|v\|_\gamma\\ \leq \Omegan^{1/p^*}\cdot\lim_{\gamma \to +\infty}\Omegan^{-1/\tilde p^* + 1/\tilde p-1/n} \fracrac{\tilde p}{n-\tilde p} \||\nabla u|^\sharp r^{n/p}\|_\gamma\\ =\Omegan^{ 1/ p-1/n} \fracrac{ p}{n- p} \||\nabla u|^\sharp r^{n/p}\|_\infty \end{multline*} since $\tilde p\to p$ and $\tilde p ^*\to p^*$, as $\gamma \to \infty$. Recalling that $$ \||\nabla u|^\sharp r^{n/p}\|_\infty= \Omegan^{-1/p}\|\nabla u\|_{p,\infty} $$ the claim follows. \subsection{Best constants}\label{sharpness} \noindent The proof of Theorem \ref{thm generAlv} will be complete if we prove that the constant $\fracrac{p}{n-p}\Omegan^{-1/n}$ appearing in \eqref{SobLorineq-R^n} is sharp for any $1\leq p<q\leq \infty$. \noindent Notice that for $q=+\infty$, the sharpness will be a consequence of the attainability of the constant which will be proved in Section \ref{att}, hence we consider only the case $q<+\infty$. For this purpose we have to check that the maximizing sequence introduced in \cite{Alv1} for the case $1\leq q\leq p$ actually works also in the case $p< q \leq +\infty $. \noindent Consider the radial decreasing function $$ v_{\varepsilon}(x):=\left\{ \begin{array}{ll} \displaystyle |x|^{-\fracrac{n-p}{p}+\varepsilon}, & \hbox{if } |x|<1 \\ \\ 1-\left(\fracrac{n-p}{p}-\varepsilon\right)(|x|-1), & \hbox{if } 1\leq |x| < 1+\fracrac 1{\fracrac{n-p}{p}-\varepsilon} \\\end{array} \right. $$ whose gradient is given by $$ |\nabla v_{\varepsilon}(x)|:=\left\{ \begin{array}{ll} \left(\fracrac{n-p}{p}-\varepsilon\right)\displaystyle |x|^{-\fracrac{n}{p}+\varepsilon}, & \hbox{if } |x|<1 \\ \\ \left(\fracrac{n-p}{p}-\varepsilon\right), & \hbox{if } 1\leq |x| < 1+\fracrac 1{\fracrac{n-p}{p}-\varepsilon} \\\end{array} \right. $$ which is a decreasing radial function. 
One has
\begin{equation*}
\|\nabla v_{\varepsilon}\|_{p,q}^q=n\Omegan^{\frac qp}\left(\frac{n-p}{p}-\varepsilon\right)^q\frac 1q \bigg[\frac{1}{\varepsilon}+\frac{p}{q}\Big(\frac{1}{\frac{n-p}{p}-\varepsilon}+1\Big)^{\frac np q}-\frac pq\bigg]
\end{equation*}
and
\begin{multline*}
\|v_{\varepsilon}\|_{p^*,q}^q= n\Omegan^{\frac q{p^*}}\frac{1}{\varepsilon q}+ n\Omegan^{\frac q{p^*}}\int_1^{1+\frac 1{\frac{n-p}{p}-\varepsilon}}\left[1-\left(\frac{n-p}{p}-\varepsilon\right)(r-1)\right]^q r^{\frac{n-p}{p}q}\frac{dr}{r}\\
\leq n\Omegan^{\frac q{p^*}}\frac{1}{\varepsilon q}+ n\Omegan^{\frac q{p^*}}\int_1^{1+\frac 1{\frac{n-p}{p}-\varepsilon}}r^{\frac{n-p}{p}q}\frac{dr}{r}\\
= n\Omegan^{\frac q{p^*}}\frac{1}{\varepsilon q}+ n\Omegan^{\frac q{p^*}}\frac{pq}{n-p}\bigg[\Big(\frac{1}{\frac{n-p}{p}-\varepsilon}+1\Big)^{\frac{n-p}p q}-1\bigg]
\end{multline*}
so that
\begin{multline*}
\frac{\|\nabla v_{\varepsilon}\|_{p,q}^q}{\|v_{\varepsilon}\|_{p^*,q}^q}\geq \Omegan^{\frac qn} \frac{\left(\frac{n-p}{p}-\varepsilon\right)^q\frac 1q \bigg[\frac{1}{\varepsilon}+\frac{p}{q}\Big(\frac{1}{\frac{n-p}{p}-\varepsilon}+1\Big)^{\frac np q}-\frac pq\bigg]}{\frac{1}{\varepsilon q}+ \frac{pq}{n-p}\bigg[\Big(\frac{1}{\frac{n-p}{p}-\varepsilon}+1\Big)^{\frac{n-p}p q}-1\bigg]}\longrightarrow \Omegan^{\frac qn} \Big(\frac{n-p}{p}\Big)^q
\end{multline*}
as $\varepsilon\to 0$, which proves the sharpness of the constant.
\section{Proof of Theorem \ref{thm equivalence}}\label{weaktostrong}
\noindent As a byproduct of Theorem \ref{thm generAlv} we have proved the implication $(H_p)\Rightarrow (A_{p,\infty})$. We next prove the converse.
\par
\noindent Suppose that $(A_{p,\infty})$ holds, namely
\begin{equation}\label{weakineq}
\|v\|_{p^\ast, \infty}\leq \frac{p}{n-p}\Omegan^{-\frac 1n}\|\nabla v\|_{p, \infty}, \qquad v \in \mathcal D_W^1L^{p,\infty}(\mathbb R^n)
\end{equation}
Then we want to prove Hardy's inequality $(H_p)$ for any function $u \in \mathcal D^{1,p}$. Actually, thanks to the P\'olya-Szeg\"o and Hardy-Littlewood inequalities, we may restrict ourselves to proving the validity of $(H_p)$ for any $u\in \mathcal D^{1,p, \sharp}$, that is, by density, for the class of radially decreasing Lipschitz functions with compact support such that $\|\nabla u\|_p<+\infty$. By Proposition \ref{dilation_inv}, we may also assume that $u$ has support in $B_1$, the unit ball centered at the origin.
\noindent Let us define an auxiliary radial function $v$ as follows
$$
v(r)=\int_{r}^{1}\rho^{-\frac np}\int_{\rho}^{1}|u'(t)|^pt^{n-1}dt\,d\rho
$$
Then $v\in \mathcal C^1(B_1\setminus \{0\})$ and
$$
v'(r)= -r^{-\frac np}\int_{r}^1|u'(t)|^pt^{n-1}dt,\qquad v(1)=0, \,\, \lim_{r\to 0^+}v(r)=+\infty
$$
so that $v$ is radially decreasing and also $|v'|=|\nabla v|$ is radially decreasing.
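\noindent In particular, directly from the definition of $v$ we have
$$
|v'(r)|\,r^{\frac np}=\int_r^1|u'(t)|^p\,t^{n-1}\,dt \qquad \hbox{for } 0<r<1,
$$
so that the function $r\mapsto |v'(r)|\,r^{\frac np}$ is non-increasing and
$$
\sup_{0<r<1}|v'(r)|\,r^{\frac np}=\int_0^1|u'(t)|^p\,t^{n-1}\,dt .
$$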
\noindent Hence
\begin{multline} \label{weaknorm_nablav}
\|\nabla v\|_{p,\infty}=\Omegan^{\frac 1p}\sup_{B_1}|\nabla v|^{\sharp}|x|^{\frac np}= \Omegan^{\frac 1p}\sup_{0<r<1}|v'|r^{\frac np} \\
= \Omegan^{\frac 1p}\int_0^1|u'|^{p}\rho^{n-1}\,d\rho= \Omegan^{\frac 1p-1}\|\nabla u\|_p^p
\end{multline}
Moreover, $v\in L^{p^*, \infty}$ since
\begin{multline*}
\|v\|_{p^*, \infty}=\Omegan^{\frac 1{p^*}}\sup_{0<r<1}|v|r^{\frac{n-p}p}= \Omegan^{\frac 1{p^*}}\sup_{0<r<1}r^{\frac{n-p}p} \int_{r}^{1}\rho^{-\frac np}\int_{\rho}^{1}|u'(t)|^pt^{n-1}dt\,d\rho\\
= \Omegan^{\frac 1{p^*}}\sup_{0<r<1}r^{\frac{n-p}p} \int_{r}^{1}|u'(t)|^pt^{n-1}dt\int_{r}^{t}\rho^{-\frac np}\,d\rho\\
=\frac{p}{n-p} \Omegan^{\frac 1{p^*}} \sup_{0<r<1}r^{\frac{n-p}p} \int_{r}^{1}|u'(t)|^pt^{n-1}\left(r^{-\frac{n-p}{p}}-t^{-\frac{n-p}{p}}\right)dt\\
\leq \frac{p}{n-p} \Omegan^{\frac 1{p^*}} \sup_{0<r<1} \int_{r}^{1}|u'(t)|^pt^{n-1}dt = \frac{p}{n-p} \Omegan^{\frac 1{p^*}-1} \|\nabla u\|_p^p<\infty
\end{multline*}
thus we have proved $v\in \mathcal D^{1}_WL^{p, \infty}(\mathbb{R}^n)$.
\noindent Now the idea is to estimate from below the norm $\|v\|_{p^*, \infty}$ with the left hand side of Hardy's inequality involving the function $u$. Since
$$
\|v\|_{p^*, \infty}=\Omegan^{\frac 1{p^*}}\sup_{0<r<1}|v|r^{\frac{n-p}p}
$$
we have to estimate the quantity
$$
|v|r^{\frac{n-p}p}=r^{\frac{n-p}p}\int_{r}^1\rho^{-\frac np}\int_{\rho}^1|u'(t)|^pt^{n-1}dt\,d\rho
$$
from below with Hardy's integral involving $u$. Since $-u'(t)=|u'(t)|$, we have
\begin{multline}\label{u^p}
u^p(\rho)=\left[\int_{\rho}^1|u'|dt\right]^p = \left[\int_{\rho}^1|u'|t^{\frac{p-1}{p}\frac np}t^{-\frac{p-1}{p}\frac np}dt\right]^p\\
\leq \int_{\rho}^1|u'|^pt^{\frac{p-1}{p}n}dt\left[\int_{\rho}^1t^{-\frac np}dt\right]^{p-1}\\
\leq \left(\frac{p}{n-p}\right)^{p-1}\rho^{-\frac{n-p}{p}\,(p-1)}\int_{\rho}^1|u'|^pt^{\frac{p-1}{p}n}dt\ .
\end{multline}
A key step in the proof is now the evaluation of the following limit, obtained by applying de l'H\^opital's theorem (note that we have an indeterminate form ${\infty}/{\infty}$, since $n>p$):
\begin{eqnarray*}
&&\quad\lim_{r\to 0^+}r^{\frac{n-p}{p}}\int_r^1 z^{-\frac np} \int_z^1 u^p(\rho)\rho^{n-p-1}d\rho dz\\
&&\\
&& = \lim_{r\to 0^+}\frac{\int_r^1 z^{-\frac np} \int_z^1 u^p(\rho) \rho^{n-p-1}d\rho dz}{r^{-\frac{n-p}{p}}}\\
&&\\
&&= \frac{p}{n-p} \lim_{r\to 0^+}\frac{ r^{-\frac np} \int_r^1 u^p(\rho)\rho^{n-p-1}d\rho}{r^{-\frac{n}{p}}}\\
&&= \frac{p}{n-p} \int_0^1 u^p(\rho)\rho^{n-p-1}d\rho
\end{eqnarray*}
and thanks to \eqref{u^p}, we have
\begin{multline*}
\int_0^1 u^p(\rho)\rho^{n-p-1}d\rho=\frac{n-p}{p} \lim_{r\to 0^+}r^{\frac{n-p}{p}}\int_r^1 z^{-\frac np} \int_z^1 u^p(\rho)\rho^{n-p-1}d\rho dz\\
\leq \frac{n-p}{p} \sup_{0<r<1}r^{\frac{n-p}{p}}\int_r^1 z^{-\frac np} \int_z^1 u^p(\rho)\rho^{n-p-1}d\rho dz \\
\leq \left(\frac{p}{n-p}\right)^{p-2}\sup_{0<r<1} r^{\frac{n-p}{p}}\int_r^1 z^{-\frac np} \int_z^1 \rho^{-\frac{n-p}{p}\,(p-1)}\rho^{n-p-1}\int_{\rho}^1|u'|^pt^{\frac{p-1}{p}n}dtd\rho dz\\
= \left(\frac{p}{n-p}\right)^{p-2}\sup_{0<r<1} r^{\frac{n-p}{p}}\int_r^1 z^{-\frac np} \int_z^1 \rho^{\frac{n}{p}-2}\int_{\rho}^1|u'|^pt^{\frac{p-1}{p}n}dtd\rho dz\ .
\end{multline*}
By Fubini's theorem, we reverse the order of integration in the last integral to get
\begin{multline*}
\left(\frac{p}{n-p}\right)^{p-2}\sup_{0<r<1} r^{\frac{n-p}{p}}\int_r^1 z^{-\frac np} \int_z^1 |u'|^pt^{\frac{p-1}{p}n} \int_z^t\rho^{\frac{n}{p}-2}d\rho dt dz\\
\leq\left(\frac{p}{n-p}\right)^{p-1}\sup_{0<r<1} r^{\frac{n-p}{p}}\int_r^1 z^{-\frac np} \int_z^1 |u'|^pt^{\frac{p-1}{p}n}t^{\frac{n}{p}-1}\,dt\,dz \\
=\left(\frac{p}{n-p}\right)^{p-1}\sup_{0<r<1} r^{\frac{n-p}{p}}v(r)
\end{multline*}
where we have used the fact that $\frac{p-1}{p}n+\frac{n}{p}-1=n-1$.
\noindent We conclude the proof by applying the embedding inequality \eqref{weakineq},
\begin{eqnarray*}
\int_{\mathbb R^n}\frac{u^p}{|x|^p}dx&=&\Omegan \int_0^1 u^p(\rho)\rho^{n-p-1}d\rho\\
&\leq& \Omegan\left(\frac{p}{n-p}\right)^{p-1}\sup_{0<r<1} r^{\frac{n-p}{p}}v(r)\\
&=& \Omegan\left(\frac{p}{n-p}\right)^{p-1}\Omegan^{-\frac 1{p^*} }\|v\|_{p^*, \infty}\\
&\leq& \Omegan^{1-\frac 1{p^*}}\left(\frac{p}{n-p}\right)^p\Omegan^{-\frac 1n}\|\nabla v\|_{p, \infty} \\
&=& \Omegan^{1-\frac 1p}\left(\frac{p}{n-p}\right)^p\|\nabla v\|_{p, \infty}\\
&=&\left(\frac{p}{n-p}\right)^p\|\nabla u\|_p^p
\end{eqnarray*}
which is Hardy's inequality.
\section{Proof of Theorem \ref{thm attainedornot}}\label{att}
\noindent Here we discuss the attainability of the sharp embedding constant in \eqref{SobLorineq-R^n}. Observe that for $q=+\infty$, the best embedding constant in $(A_{p, \infty})$ is attained by the function $\psi=|x|^{-\frac{n-p}{p}}$, which is radially decreasing together with its gradient $|\nabla \psi|=\frac{n-p}{p}|x|^{-n/p}$. Hence
$$
\|\psi\|_{p^*, \infty}=\Omegan^{1/p^*}\sup_{r>0}r^{-\frac{n-p}{p}}\cdot r^{\frac n{p^*}}= \Omegan^{1/p^*}
$$
and
$$
\|\nabla \psi\|_{p, \infty}=\Omegan^{1/p}\frac{n-p}{p}\sup_{r>0}r^{-\frac{n}{p}}\cdot r^{\frac n{p}}= \Omegan^{1/p}\frac{n-p}{p}\ .
$$
Note that actually we have a whole family of extremal functions, due to the invariance by dilation proved in Proposition \ref{dilation_inv}. Moreover, there are plenty of maximizers for $(A_{p, \infty})$ in $\mathcal D^1_WL^{p^*,q}(\mathbb{R}^n)$, since it is enough to have the same local asymptotic behavior as $\psi$.
\noindent We next consider the case $p<q<+\infty$. We will argue by contradiction, proving that the sharp embedding constant is never attained. Let us suppose that the inequality \eqref{SobLorineq-R^n} is attained for some $q$ with $p<q<+\infty$. Following the lines of Section \ref{H_implies_A}, we have at least one radially decreasing maximizer $u\in \mathcal D^1L^{p,q}(\mathbb{R}^n)$, namely a function $u$ such that
\begin{equation}\label{extremal}
\|u\|_{p^*,q}=\frac{p}{n-p}\Omegan^{-\frac 1n}\|\nabla u\|_{p,q}\ .
\end{equation}
Next define
$$
v(x):=\left[u(|x|^{\frac pq})\right]^{\frac qp}=\left[u^*(|x|^{\frac{np}q}\Omegan)\right]^{\frac qp}
$$
so that
$$
v^{\sharp}(|x|)=\left[u(|x|^{\frac pq})\right]^{\frac qp}, \quad v^*(t)=v^{\sharp}\left((\textstyle{\frac{t}{\Omegan}})^{1/n}\right) =\left[u\left((\textstyle{\frac{t}{\Omegan}})^{\frac{p}{qn}}\right)\right]^{\frac qp}=\left[u^*(t^{\frac pq}\Omegan^{\frac{q-p}{q}})\right]^{\frac qp}\
$$
By \eqref{vnorm2}, one has $v\in L^{p^*,p}(\mathbb{R}^n)$ with
$$
\|v\|_{p^*,p}= \left(\frac qp\right)^{\frac 1p}\Omegan^{\frac{p-q}{p p^*}}\|u\|^{\frac qp}_{p^*,q}
$$
and $\nabla v\in L^p$, with
$$
\|\nabla v\|_{p}\leq \left(\frac qp\right)^{\frac 1p} \Omegan^{-\frac{(n-p)(q-p)}{np^2}}\|u\|_{p^*,q}^{\frac{q-p}{p}}\cdot\|\nabla u\|_{p,q}\ .
$$
By \eqref{extremal} we obtain
$$
\|\nabla v\|_{p}\leq \left(\frac qp\right)^{\frac 1p} \left(\frac{p}{n-p}\right)^{\frac{q-p}{p}} \Omegan^{-\frac{(q-p)}{p^2}}\|\nabla u\|_{p,q}^{\frac{q}{p}}
$$
and in turn
\begin{multline*}
\|v\|_{p^*,p}= \left(\frac qp\right)^{\frac 1p}\Omegan^{\frac{p-q}{p p^*}}\|u\|^{\frac qp}_{p^*,q}=\left(\frac qp\right)^{\frac 1p}\left(\frac{p}{n-p}\right)^{\frac qp}\Omegan^{-\frac 1n -\frac {q-p}{p^2}}\|\nabla u\|^{\frac qp}_{p,q}\\
\geq \left(\frac qp\right)^{\frac 1p}\left(\frac{p}{n-p}\right)^{\frac qp}\Omegan^{-\frac 1n -\frac {q-p}{p^2}} \left(\frac qp\right)^{-\frac 1p} \left(\frac{p}{n-p}\right)^{-\frac{q-p}{p}} \Omegan^{\frac{(q-p)}{p^2}} \|\nabla v\|_{p}\\
=\frac{p}{n-p} \Omegan^{-\frac 1n}\|\nabla v\|_{p}\ .
\end{multline*}
Together with Alvino's inequality this implies
$$
\|v\|_{p^*,p}= \frac{p}{n-p} \Omegan^{-\frac 1n}\|\nabla v\|_{p}
$$
and, since $v=v^\sharp$, one has
$$
\int_{\mathbb R^n} \frac{v^p}{|x|^p}\ dx = \left(\frac{p}{n-p}\right)^p \int_{\mathbb R^n} |\nabla v|^p dx
$$
which contradicts the non-attainability of Hardy's inequality.
\appendix
\section{Reduction to the radial case}
\noindent Here we follow \cite{ALT}. Let $u\in \mathcal D^1L^{p,q}$, $u\neq 0$, smooth and compactly supported. Notice that, because of the invariance by dilation, we can also prescribe the measure of the support. We aim at proving that there exists $v\in \mathcal{D}^{1,\sharp}L^{p,q}$ such that $\|v\|_{p^*,q}\geq \|u\|_{p^*,q}$ and $\|\nabla v\|_{p,q}\leq \|\nabla u\|_{p,q}$. This leads to the following maximization problem
$$
\max\left\{\|v\|_{p^*, q} : v\in W_0^1L^{p,q}(\Omega), |\nabla v| \leq f \in L^{p,q} \,\,\hbox{ a.e. in }\Omega, f^*=|\nabla u|^*\right\}\geq \|u\|_{p^*,q}
$$
(the last inequality is trivial: set $v\equiv u, f\equiv|\nabla u|$). It is known that for any $f\geq 0$, $f\in L^{p,q}(\Omega)$ there exists a maximal nonnegative sub-solution $v\in W^1L^{p,q}_0(\Omega)$ of the problem
\begin{equation}\label{sub}
|\nabla v| \leq f
\end{equation}
(see \cite{Li} Prop.~7.2, p.~164, where the statement is proved for $f\in W_0^{1,p}$ but can be generalized thanks to the monotonicity of the decreasing rearrangement).
\noindent Consider the maximization problem
$$
I(u)=\{\sup \|v\|_{p^*,q} : v\hbox{ enjoys \eqref{sub}, } f\geq 0, f\in L^{p,q} \hbox{ and } f^\sharp=|\nabla u|^\sharp\}\geq \|u\|_{p^*,q}
$$
It was proved in \cite{GN} that if $v$ satisfies \eqref{sub}, with $f\in L^{p,q}$ such that $f^\sharp=|\nabla u|^\sharp$, then
$$
v^*(t)\leq \frac{1}{n\Omegan^{1/n}}\int_t^{|\Omega|} s^{1/n}F(s)\frac{ds}s
$$
for some positive $ F\in L^{p,q}(0, |\Omega|)$ such that $F(\Omegan |x|^n)\prec |\nabla u|^\sharp (|x|) $ where, we recall,
$$
f\prec g \,\, \Longleftrightarrow \,\, \left\{
\begin{array}{ll}
\displaystyle \int_0^t f^\ast(s)ds \leq \int_0^t g^\ast(s)ds , & \hbox{ for } t\in [0, |\Omega|] \\ \\
\displaystyle \int_0^{|\Omega|} f^\ast(s)ds = \int_0^{|\Omega|} g^\ast(s)ds & \\
\end{array}
\right.
$$
The relation $f\prec g$ is known as the Hardy--Littlewood--P\'olya relation between $f$ and $g$. In particular one has $f^{**}(t)\leq g^{**}(t)$ for any $t$ (see \cite{GN,BS, ALT} for the definition of $\prec$ and its properties). It turns out that
$$
I(u)\leq J(u)
$$
where $J(u)$ is the following relaxed maximization problem
\begin{multline*}
J(u)=\Big\{\sup \|w\|_{p^*,q} : w(|x|)=\frac{1}{n\Omegan^{1/n}}\int_{\Omegan |x|^n}^{|\Omega|} F(s)s^{1/n}\frac {ds}s,\\
F\in L^{p,q}(0,|\Omega|): F(\Omegan |x|^n)\prec |\nabla u|^\sharp (|x|), F\geq 0\Big\}
\end{multline*}
By direct calculations we have
\begin{equation}\label{redutc1}
\|w\|_{p^*,q}\leq C_{n,q} \| F\|_{p,q}\leq C\|\nabla u\|_{p,q}
\end{equation}
\noindent Consider the following class
$$
K(|\nabla u|^\sharp):=\{F\in L^{p,q}(0, |\Omega|): F\geq 0, F(\Omegan |x|^n)\prec |\nabla u|^\sharp(|x|)\}
$$
for which the following properties are proved in \cite{ALT}:
\begin{itemize}
\item \ $K(|\nabla u|^\sharp)$ is a convex, weakly compact and closed set in $L^{p,q}(0, |\Omega|)$;
\item \ $K(|\nabla u|^\sharp)$ is the weak closure, in $L^{p,q}(0, |\Omega|)$, of the set of positive functions $f$ such that $f^\sharp=|\nabla u|^\sharp$;
\item \ any extreme point of $K(|\nabla u|^\sharp)$ (namely, any $F$ for which there do not exist $F_1, F_2 \in K$, $F_1 \neq F_2$, with $F=\frac{F_1+F_2}{2}$) is equimeasurable with $|\nabla u|$, that is, $F^*= |\nabla u|^*$.
\end{itemize}
(actually the result in \cite{ALT} is proved in $L^p(\Omega)$, but it can be straightforwardly generalized).
\noindent Thanks to the previous properties, if $w_j$ is any maximizing sequence for $J(u)$, then $|\nabla w_j(|x|)|=F_j(\Omegan |x|^n)$ is uniformly bounded in $L^{p,q}(\Omega)$ (or, equivalently, $F_j(s)$ is uniformly bounded in $L^{p,q}(0,|\Omega|)$) so that $F_j(s)$ converges weakly in $L^{p,q}(0, |\Omega|)$ to some $F_0(s)\in K(|\nabla u|^\sharp)$, and $F_j(\Omegan |x|^n)$ converges weakly in $L^{p,q}(\Omega)$ to $F_0(\Omegan |x|^n)$. As a consequence, up to a subsequence, $\{w_j\}$ converges pointwise for $x\neq 0$, and also weakly in $L^{p^*,q}(\Omega)$, to the associated function $w_0\in L^{p^*,q}(\Omega)$:
$$
w_j(|x|)\stackrel{L^{p^*,q}(\Omega)}{\rightharpoonup} w_0(|x|)=\frac{1}{n\Omegan^{1/n}}\int_{\Omegan |x|^n}^{|\Omega|} F_0(s)s^{1/n}\frac {ds}s
$$
Once we prove that $w_j\to w_0$ in $L^{p^*,q}(\Omega)$, then $w_0$ will be a maximum point for $J(u)$, and hence an extreme point of $K(|\nabla u|^{\sharp})$.
As a consequence, $|\nabla w_0|^{\sharp }(|x|) \equiv F^*_0 (\Omegan |x|^n)= |\nabla u|^\sharp (|x|)$ and thus $|\nabla w_0|^*(s)\equiv F^*_0(s)= |\nabla u|^* (s)$, and our claim follows:
$$
w_0\in \mathcal D^{1,\sharp}L^{p,q}: \quad \|w_0\|_{p^*,q}=J(u)\geq I(u)\geq \|u\|_{p^*,q}, \quad \|\nabla w_0\|_{p,q}=\|\nabla u\|_{p,q}
$$
\noindent In order to prove the strong convergence of $w_j$ in $L^{p^*,q}(\Omega)$, let us focus on its gradient $F_j(\Omegan |x|^n)$, whose Lorentz quasi-norm is given by
$$
\|\nabla w_j\|_{L^{p,q}(\Omega)}^q=\|F_j\|_{L^{p,q}(0, |\Omega|)}^q=\int_0^{|\Omega|}\left[ F_j^{*}t^{\frac 1p}\right]^q\frac{dt}{t}
$$
\noindent Let us recall the maximal function defined for a measurable function $f$ as follows
$$f^{**}(t):=\frac{1}{t}\int_0^tf^*(s)\, ds$$
which defines an equivalent norm in $L^{p,q}(\Omega)$, as long as $p>1$, by
$$|||f|||_{p,q}=\|t^{\frac{1}{p}-\frac{1}{q}}f^{**}(t)\|_{L^q(0,|\Omega|)}$$
Since $F_j(\Omegan |x|^n)\prec |\nabla u|^\sharp (|x|)$, we have
$$
F_j^{*}(t)\leq F_j^{**}(t)\leq |\nabla u|^{**}(t), \qquad (F^*_j)^{q}(t)=(F^q_j)^{*}(t)\leq (F_j^q)^{**}(t)\leq (|\nabla u|^q)^{**}(t),
$$
and hence
$$
(F_j^q)^{*}(t)t^{\frac qp-1}\leq (|\nabla u|^q)^{**}(t) t^{\frac qp-1} \in L^{1}(0, |\Omega|), \quad \hbox{since } |\nabla u|\in L^{p,q }
$$
On the other hand, since $F_j(s)$ converges weakly to $F_0(s)$ in $L^{p,q}(0, |\Omega|)$, we get
$$
\int_0^{|\Omega|}F_j^*(t)dt= \int_0^{|\Omega|}F_j^*(t)\cdot 1 dt\longrightarrow \int_0^{|\Omega|}F_0^*(t)dt
$$
so that, up to a subsequence, $F_j^*(t)$ converges in measure and a.e. to $F_0^*(t)$. We can then apply the Lebesgue dominated convergence theorem to the sequence $(F_j^q)^{*}(t)t^{\frac qp-1}$, obtaining
$$
\|F_j\|_{L^{p,q}(0, |\Omega|)}\longrightarrow \|F_0\|_{L^{p,q}(0, |\Omega|)}
$$
and finally $|\nabla w_j|=F_j(\Omegan |x|^n)\to |\nabla w_0|= F_0(\Omegan |x|^n)$ strongly in $L^{p,q}(\Omega)$. From the embedding $W_0^1L^{p,q}\hookrightarrow L^{p^*,q}$, we have $w_j\to w_0$ strongly in $L^{p^*,q}(\Omega)$.
\par
\end{document}
\begin{document}
\title[Suitable sets for paratopological groups] {Suitable sets for paratopological groups}
\author{Fucai Lin*}
\address{Fucai Lin: 1. School of mathematics and statistics, Minnan Normal University, Zhangzhou 363000, P. R. China; 2. Fujian Key Laboratory of Granular Computing and Application, Minnan Normal University, Zhangzhou 363000, China}
\email{[email protected]; [email protected]}
\author{Alex Ravsky}
\address{Alex Ravsky: Pidstryhach Institute for Applied Problems of Mechanics and Mathematics, 3b Naukova str., 79060, Lviv, Ukraine}
\email{[email protected]}
\author{Tingting Shi}
\address{Tingting Shi: School of mathematics and statistics, Minnan Normal University, Zhangzhou 363000, P. R. China}
\email{[email protected]}
\thanks{The first author is supported by the Key Program of the Natural Science Foundation of Fujian Province (No: 2020J02043), the NSFC (No. 11571158), the lab of Granular Computing, the Institute of Meteorological Big Data-Digital Fujian and Fujian Key Laboratory of Data Science and Statistics.}
\thanks{* Corresponding author}
\keywords{paratopological group, suitable set, saturated paratopological group, topological group}
\subjclass[2020]{22A15, 54H99, 54H11}
\date{\today}
\begin{abstract}
A subset $S$ of a paratopological group $G$ is a {\it suitable set} for $G$ if $S$ is a discrete subspace of $G$, $S\cup \{e\}$ is closed, and the subgroup $\langle S\rangle$ of $G$ generated by $S$ is dense in $G$. Suitable sets in topological groups were studied by many authors. The aim of the present paper is to provide a start-up for a general investigation of suitable sets for paratopological groups, examining to what extent we can (by proving propositions) or cannot (by constructing examples) generalize to paratopological groups results which hold for topological groups, and to pose a few challenging questions for possible future research. We shall discuss when paratopological groups of different classes have suitable sets. Namely, we consider paratopological groups (in particular, countable ones) satisfying different separation axioms, paratopological groups which are compact-like spaces, and saturated (in particular, precompact) paratopological groups. We also consider the permanence of the property of having a suitable set with respect to (open or dense) subgroups, products, and extensions.
\end{abstract}
\maketitle
\section{Introduction}
A {\em paratopological group} is a group $G$ endowed with a topology $\tau$ such that the group operation $\cdot:G\times G\to G$ is continuous. In this case $\tau$ is called a {\it semigroup topology} on $G$. If, additionally, the operation of taking the inverse is continuous, then $G$ is a {\it topological group}. A classical example of a paratopological group failing to be a topological group is the {\it Sorgenfrey line} $\mathbb S$, that is, the additive group of real numbers endowed with the Sorgenfrey topology generated by the base consisting of the half-intervals $[a,b)$, $a<b$. Whereas the investigation of topological groups is already a fundamental branch of topological algebra (see, for instance,~\cite{Pon},~\cite{DikProSto}, and~\cite{AT}), paratopological groups are not as well studied and have a more variable structure. Basic properties of paratopological groups, compared with the properties of topological groups, are described in the book \cite{AT} by Arhangel'skii and Tkachenko, in the PhD thesis of the second author~\cite{Rav3}, in the papers~\cite{Rav} and~\cite{Rav2}, and in the survey \cite{Tka5} by Tkachenko.
Suitable sets were considered in the context of Galois cohomology by Tate (see~\cite{Doa}) and in the context of free profinite groups by Mel'nikov~\cite{Mel}. Later Hofmann and Morris promoted the concept of suitable set in their seminal paper~\cite{HM1990} and in their well-known monograph on compact groups. Since then many authors have studied suitable sets in topological groups; see, for instance,~\cite{CG-F} and \cite{Tka97}. Fundamental results were obtained by Comfort et al. in~\cite{CMRS1998} and Dikranjan et al. in~\cite{DTT1999} and in~\cite{DTT2000}. Many examples of topological groups without suitable sets were constructed in~\cite{Tka97}. As far as we know, the first few results on suitable sets for paratopological groups were obtained by Guran, see~\cite{G2003} and Theorem 13.3 of~\cite{BP2003}. The aim of the present paper is to provide a start-up for a general investigation of this topic, examining to what extent we can (by proving propositions) or cannot (by constructing examples) generalize to paratopological groups results which hold for topological groups, and to pose a few challenging questions for possible future research.
A subset $S$ of a paratopological group $G$ is a {\it suitable set} for $G$ if $S$ is a discrete subspace of $G$, $S\cup \{e\}$ is closed, and the subgroup $\langle S\rangle$ of $G$ generated by $S$ is dense in $G$, see~\cite{G2003}. Let $\mathcal{S}$ (respectively, $\mathcal{S}_{c}$) be the class of paratopological groups $G$ having a suitable (respectively, closed suitable) set. It turns out that very often a suitable set of a group $G$ generates $G$. This fact suggests devoting special attention to the classes $\mathcal{S}_g$ (respectively, $\mathcal{S}_{cg}$) of paratopological groups $G$ having a suitable (respectively, closed suitable) set which generates $G$.
In the paper we shall discuss when paratopological groups of different classes have suitable sets. Whereas the constructed examples are usually specific to paratopological groups, many of the positive results presented here are counterparts of those from the papers~\cite{CMRS1998} and~\cite{DTT1999}. Nevertheless, Propositions~\ref{prop:CCnotCS}, \ref{p0new}, \ref{t2}, \ref{t2.2}, and \ref{t2.3} may be new even for topological groups.
Section~\ref{sec:sep} is devoted to paratopological groups (in particular, to countable ones) satisfying different separation axioms. Generalizing Proposition 1.4 from~\cite{CMRS1998}, in Proposition~\ref{tt} we show that any paratopological group with a suitable set is a $T_1$-space or a two-element group. Examples~\ref{ex:ZT1notT2} and~\ref{ex:T1nS} provide infinite $T_1$ non-Hausdorff paratopological groups with and without a suitable set, respectively.
Section~\ref{sec:compactlike} is devoted to paratopological groups which are compact-like spaces. We observe that Theorem 3.2 from~\cite{DTT1999} can be generalized to paratopological groups. That is, if $\kappa$ is an infinite non-measurable cardinal, $\lambda=2^{\kappa}$, and $G$ is a $T_1$ paratopological group such that $d(G)\le\lambda$ and the group $G^\lambda$ is not countably compact, then $G^\lambda$ has a closed suitable set. In Proposition~\ref{prop:CCnotCS} we show that if $G$ is a countably compact infinite locally finite $T_1$ paratopological group without non-trivial convergent sequences then $G$ has no suitable set.
In Example~\ref{e1} we show that a non-Hausdorff $T_1$ periodic countable sequentially pracompact Abelian paratopological group constructed by Banakh in~\cite[Example 3.18]{BR2020} belongs to $\mathcal{S}_{cg}$. Complementing Proposition~\ref{prop:CCnotCS}, in Example~\ref{e2} we show that a feebly compact non-countably compact Baire Hausdorff paratopological group such that all its countable subsets are closed, constructed by Sanchis and Tkachenko in the proof of~\cite[Theorem 2]{ST2012}, belongs to $\mathcal{S}_{cg}$. Extending Proposition 2.7 from~\cite{DTT1999}, in Proposition~\ref{p0new} we show that if $G$ is a countably compact $T_1$ paratopological group with a suitable set, $H$ is a Hausdorff paratopological group, and $f:G\to H$ is a continuous homomorphism with dense image, then $H$ has a suitable set.
Section~\ref{sec:saturated} is devoted to saturated (in particular, to precompact) paratopological groups. In Proposition~\ref{t4} we show that if $G$ is a saturated Hausdorff paratopological group and $S$ is a (closed) suitable set for the group reflexion $G^{\flat}$ of $G$, then $S$ is a (closed) suitable set for the group $G$ too. Guran in~\cite{G2003} announced that every Hausdorff separable non-precompact paratopological group $G$ has a closed suitable set. Complementing this, in Proposition~\ref{l15+} we show that any non-feebly compact precompact separable $T_1$ paratopological group has a closed suitable set. We also show that each Hausdorff locally separable saturated first countable (non-pre\-com\-pact) paratopological group has a (closed) suitable set, see Corollary~\ref{c5+}. In Example~\ref{ex:GflatwoSS} we provide a zero-dimensional saturated $T_1$ paratopological group $G$ which is a sum of two of its closed discrete subsets such that the group reflexion $G^{\flat}$ has no suitable set. Adapting Theorem 3.6.I from~\cite{DTT1999}, in Proposition~\ref{ttt+} we show that every non-feebly compact non-precompact saturated $T_1$ paratopological group $G$ with a dense strictly $\sigma$-discrete subspace has a closed suitable set. In Proposition~\ref{qdpart+} we show that every non-precompact saturated $T_1$ paratopological group which is a $\sigma$-space has a suitable set.
Section~\ref{sec:perm} is devoted to the permanence of the property of having a suitable set with respect to (open or dense) subgroups, products, and extensions. We observe that Theorem 4.2 from~\cite{CMRS1998} can be extended to paratopological groups, that is, if an open subgroup of a $T_1$ paratopological group $G$ has a (closed) suitable set, then $G$ has a (closed) suitable set. In Section~\ref{sec:perm2} we show, in particular, that if a collectionwise Hausdorff paratopological group $G$ has a suitable set $S$ and $H$ is a (sequentially) dense subgroup of $G$, then $H$ has a suitable set of size at most $|S|\cdot\aleph_0$, provided that $G$ is hereditarily collectionwise Hausdorff (see Proposition~\ref{t2}) or regular with countable pseudocharacter (see Proposition~\ref{t2.3}). In Section~\ref{sec:perm3} we observe the following. Theorem 4.3 from~\cite{CMRS1998} can be generalized to paratopological groups as follows. Let $\Gamma$ be a non-empty family of $T_1$ paratopological groups such that each member of it has a suitable set. Then the product of $\Gamma$ has a suitable set contained in the $\sigma$-product of $\Gamma$. It follows that both the $\sigma$- and the $\Sigma$-product of $\Gamma$ have a suitable set. Lemma 3.1 from~\cite{DTT1999} can be generalized to paratopological groups as follows.
If a $T_1$ paratopological group $G$ contains a closed discrete set $A$ such that $|A|\geq d(G)$ then $G\times G\in \mathcal{S}_{c}$. Moreover, if $|A|=|G|$ then $G\times G\in \mathcal{S}_{cg}$. Complementing Proposition~\ref{l1} and extending Lemma 2.3 from~\cite{DTT1999} to paratopological groups, in Proposition~\ref{prop:DPTGSS} we show that if $G$ is a $T_1$ paratopological group with a suitable set then $d(G)\leq \psi(G)e(G)$. Moreover, if $G$ has a closed suitable set then $d(G)\le e(G)$.
\section{Separation axioms in paratopological groups}\label{sec:sep}
These axioms describe specific structural properties of spaces. Basic separation axioms and relations between them are considered in~\cite[Section 1.5]{Eng}. It is well known that each topological group is completely regular. On the other hand, simple examples show that none of the implications $T_0\Rightarrow T_1 \Rightarrow T_2 \Rightarrow T_3$ holds for paratopological groups (see \cite[Examples 1.6-1.8]{Rav} and page 5 of either of the papers \cite{Rav2} and \cite{Tka5}) and there are only a few backwards implications between different separation axioms, see~\cite[Section 1]{Rav2} or~\cite[Section 2]{Tka5}. Nevertheless, Banakh and Ravsky proved in~\cite{BR2016} that every $T_0$ (semi)regular paratopological group is Tychonoff.
The following proposition generalizes Proposition 1.4 from~\cite{CMRS1998} to paratopological groups.
\begin{proposition}\label{tt}
Any paratopological group with a suitable set is a $T_1$-space or a two-element group.
\end{proposition}
\begin{proof}
Let $S$ be a suitable set of a paratopological group $G$ and $s$ be any element of $S$. The definition of a suitable set implies that $\overline{\{s\}}\subset \{s,e\}$. If $\overline{\{s\}}$ or $\overline{\{e\}}$ is a one-point set then $G$ is a $T_1$-space. Otherwise $\overline{\{s\}}=\overline{\{e\}}=\{s,e\}$. It follows that $s^2\in \overline{\{s\}}$, so $s^2=e$ and $G=\overline{\langle S\rangle}=\{s,e\}$.
\end{proof}
By Proposition~\ref{tt}, any topological group with at least three elements and a suitable set is a Tychonoff space. On the other hand, the following simple example shows that an infinite paratopological group with a suitable set can fail to be Hausdorff.
\begin{example}\label{ex:ZT1notT2}
Let $\vec{\mathbb Z}$ be the group $\mathbb Z$ endowed with the topology generated by the base $\{U_{n,m}:n,m\in\mathbb Z\}$, where $U_{n,m}= \{k\in\mathbb Z:k=n\mbox{ or } k\ge m\}$ for each $n,m\in\mathbb Z$. Then $\vec{\mathbb Z}$ is a non-Hausdorff paratopological group with a suitable set $\{-1,0\}$. Indeed, the neighborhoods $U_{-1,1}$ and $U_{0,1}$ witness that the closed set $\{-1,0\}$ is discrete, and $-1$ generates the whole group $\mathbb Z$.
\end{example}
{\it All spaces below are supposed to be $T_{1}$}, unless the opposite is stated. According to~\cite[5.3]{BR2020}, topologies $\tau$ and $\sigma$ on a set $X$ are defined to be {\it cowide} if for any nonempty sets $U\in\tau$ and $V\in\sigma$ the intersection $U\cap V$ is nonempty. A topology cowide to itself is called {\it wide}. Spaces with a wide topology are well known as {\it irreducible} spaces, see~\cite{AM},~\cite{mDon}. A subset $U$ of a space is {\it regular open} if $U=\operatorname{int}\overline{U}$. Stone \cite{Sto} and Kat\v{e}tov \cite{Kat} considered the topology $\tau_{sr}$ on a space $(X,\tau)$ generated by the base consisting of all regular open sets of the space $(X,\tau)$. The space $(X,\tau_{sr})$ is called the {\it semiregularization} of the space $(X,\tau)$. It is easy to show that the semiregularization of a Hausdorff space is Hausdorff.
Moreover, the semiregularization of a Hausdorff paratopological group is a regular paratopological group, see~\cite[Example 1.9]{Rav2}, \cite[p. 31]{Rav3} or \cite[p. 28]{Rav3}.
\begin{example}\label{ex:T1nS}
There exists a non-Hausdorff paratopological group without a suitable set. There exists a paratopological group generated by its closed discrete subset such that the semiregularization of the group has no suitable set.
\end{example}
\begin{proof}
According to~\cite[Theorem 2.4.a]{DTT1999} there exists a non-separable Lindel\"of topological linear space $(L,\tau)$ of countable pseudocharacter. The Cartesian product $G$ of $L$ and the group $\vec{\mathbb Z}$ from Example~\ref{ex:ZT1notT2} is a non-separable Lindel\"of paratopological group of countable pseudocharacter. By Proposition~\ref{prop:DPTGSS}, $G$ has no suitable set.
We can consider $L$ as a linear space over the field $\mathbb Q$. Let $B$ be a basis of this space. Pick any vector $e\in B$ and define a (unique) linear map $s:L\to\mathbb Q$ such that $s(e)=1$ and $s(B\setminus\{e\})=0$. Let $T=\{x\in L:s(x)\ge 0\}$. Let $\sigma$ be the semigroup topology on $L$ consisting of all sets $U\subset L$ such that for every $u\in U$ there exists $x\in T$ such that $u+x+T\subset U$, see~\cite[5.2]{BR2020}. Let $\tau\vee\sigma$ be the supremum of the topologies $\tau$ and $\sigma$; it is generated by the base $\{U\cap V:U\in\tau, V\in\sigma\}$. Put $S=s^{-1}(0)\cup\{e/n:n\in\mathbb N\}$. Clearly, $\langle S\rangle=L$, and, using that $s(y)\le 1$ for each $y\in S$, we can easily show that $S$ is closed and discrete in $(L,\sigma)$ and so in $(L,\tau\vee\sigma)$. Since $T$ is dense in $L$, by~\cite[Proposition 5.36]{BR2020}, the topologies $\tau$ and $\sigma$ are cowide. Since $L=T-T$, by~\cite[Proposition 5.35]{BR2020}, the topology $\sigma$ is wide. Thus, by~\cite[Lemma 5.31]{BR2020}, $(\tau\vee\sigma)_{sr}=\tau_{sr}=\tau$. That is, the semiregularization of the group $(L,\tau\vee\sigma)$ is $(L,\tau)$, which has no suitable set.
\end{proof}
\subsection{Countable paratopological groups}\label{sec:countable}
Recall that a group $G$ endowed with a topology is called {\it left topological} provided each left shift $x\mapsto gx$, $g\in G$, is a continuous map. Clearly, each paratopological group is left topological. By Theorem 2.2 from~\cite{CMRS1998}, each countable Hausdorff topological group is generated by a closed discrete set. Guran generalized this result to Hausdorff left topological groups, announced it in~\cite{G2003} (where groups are supposed to be Hausdorff) and presented it at a seminar; Banakh presented a proof in Theorem 13.3 of~\cite{BP2003}\footnote{We clarified these details via a personal communication with Guran.}. Since an arbitrary locally finite group $G$, which is not finitely generated, endowed with the cofinite topology (consisting of the empty set and all subsets $A$ of $G$ such that $G\setminus A$ is finite) is a left topological group without a suitable set, the following question remains unanswered.
\begin{question}\label{qc}
Does each countable paratopological group have a suitable set?
\end{question}
\section{Paratopological groups which are compact-like spaces}\label{sec:compactlike}
Different classes of compact-like spaces and the relations between them provide a well-known investigation topic in general topology, see, for instance, the basic reference \cite[Chap. 3]{Eng} and the general works \cite{Vau}, \cite{Ste}, \cite{DRRT}, \cite{Mat}, \cite{Lip}. In the present paper we consider the following compact-like spaces.
A space $X$ is {\it countably compact} if each of its infinite subsets $A$ has an accumulation point $x$ (the latter means that each neighborhood of $x$ contains infinitely many points of the set $A$) or, equivalently, if each countable open cover of $X$ has a finite subcover. A space is {\it feebly compact} if each locally finite family of nonempty open subsets of the space is finite. A space is pseudocompact if and only if it is Tychonoff and feebly compact, see \cite[Theorem 3.10.22]{Eng}. According to~\cite{GutRav}, a space $X$ is {\it sequentially pracompact} if it contains a dense set $D$ such that each sequence of points of the set $D$ has a convergent subsequence. Each countably compact and each sequentially pracompact space is feebly compact.
Sections 1.4 and 1.5 of~\cite{BR2020} list different classes of compact-like spaces and paratopological groups and consider relations between these classes. We remark that the investigation of compact-like paratopological groups is motivated by the problem of automatic continuity of the inversion in a paratopological group. The interested reader can find known results and references on this subject in Section 5.1 of~\cite{Rav3}, Section 3 of the survey~\cite{Tka5}, the Introduction of~\cite{AlaSan}, and in~\cite{BR2020}. Section 1.6 of~\cite{BR2020} provides a brief survey of the topic.
A cardinal $\kappa$ is called {\it measurable} if there is an $\aleph_{0}$-complete non-principal ultrafilter on $\kappa$. It is consistent with ZFC that there are no measurable cardinals. Moreover, under $V=L$ all cardinals are non-measurable, see~\cite{Dra74}.
\begin{proposition}
Let $\kappa$ be an infinite non-measurable cardinal and $\lambda=2^{\kappa}$. Let $G$ be a paratopological group such that $d(G)\le\lambda$. If the group $G^\lambda$ is not countably compact then it has a closed suitable set.
\end{proposition}
\begin{proof}
The proof is similar to that of~\cite[Theorem 3.2]{DTT1999}.
\end{proof}
A group is {\it locally finite} if each of its finitely generated subgroups is finite. Clearly, each periodic Abelian group is locally finite.
\begin{proposition}\label{prop:CCnotCS}
Let $G$ be a countably compact infinite locally finite paratopological group without non-trivial convergent sequences. Then $G$ has no suitable set.
\end{proposition}
\begin{proof}
Suppose for a contradiction that $S$ is a suitable set for $G$. Since $G$ is infinite and locally finite, $S$ is infinite. Since $G$ is countably compact and $S$ is a suitable set, $S\cup\{e\}$ is a countably compact space whose only non-isolated point is $e$. It follows that any sequence of distinct points of $S$ converges to $e$, a contradiction.
\end{proof}
Proposition~\ref{prop:CCnotCS} is closely related to the following question by van Douwen~\cite{vDou} from 1980, which is, according to references from~\cite[p. 2]{HvMR-GS}, considered central in the theory of topological groups.
\begin{question}
Is there an infinite countably compact topological group without non-trivial convergent sequences?
\end{question}
For a history of related results see the introductions of~\cite{Tom2019} and~\cite{HvMR-GS}. In particular, in 1976 Hajnal and Juh\'asz in~\cite{HJ} constructed the first example of such a group under the Continuum Hypothesis. In 2004 Garc\'ia-Ferreira, Tomita, and Watson in~\cite{G-FTW} obtained such a group from a selective ultrafilter.
In 2009 Szeptycki and Tomita in~\cite{ST} showed that in the Random model there exists such a group, giving the first example that does not depend on selective ultrafilters. Finally, Hru\v s\'ak, van Mill, Ramos-Garc\'ia, and Shelah obtained a positive answer to van Douwen's question in ZFC, see~\cite{HvMR-GS} for ``a presentable draft of the proof''. The group built there is Boolean, and on p.~17 the authors remark that they do not know how to modify (in ZFC) their construction to get a non-torsion example; they formulate the question of the existence of such a group in ZFC. We denote by TT the axiomatic assumption of the existence of such a torsion-free Abelian group. An example of such a group was constructed by Tkachenko~\cite{Tka} under the Continuum Hypothesis. Later, the Continuum Hypothesis was weakened to Martin's Axiom for $\sigma$-centered posets by Tomita in~\cite{Tom1996}, for countable posets in \cite{KTW}, and finally to the existence of continuum many incomparable selective ultrafilters in \cite{MT}.
The proof of~\cite[Lemma 6.4]{BDG} implies that under TT there exists a group topology $\tau$ on the free Abelian group $F$ generated by the set $\mathfrak c$ with the following property: for each countably infinite subset $M$ of the group $F$ there exists an element $\alpha \in \overline M\cap\mathfrak c$ such that $M$ is contained in the subgroup $\langle \alpha \rangle$ of $F$ generated by the set $\alpha$. Moreover, in~\cite[Example 3.22]{BR2020}, a topology $\sigma\supset\tau$ is constructed on $F$ such that $(F,\sigma)$ is a countably compact non-regular paratopological group, which is not a topological group and has the property mentioned above. The latter implies that both $(F,\tau)$ and $(F,\sigma)$ have no suitable set.
\begin{example}\label{e1}
There exists a non-Hausdorff periodic countable sequentially pracompact Abelian paratopological group $G\in\mathcal{S}_{cg}$.
\end{example}
\begin{proof}
The group $G$ was constructed by Banakh in~\cite[Example 3.18]{BR2020} as follows. For each positive integer $n$ let $C_n$ be the set $\{0,\dots,n-1\}$ endowed with a binary operation $\oplus$ such that $x\oplus y\equiv x+y \mod n$ for any $x,y\in C_n$, that is, $C_n$ is a cyclic group of order $n$. Let $G=\bigoplus\limits_{n=1}^\infty C_n$ be the direct sum of the groups $C_n$. Let $\mathcal F$ be the family of all non-decreasing unbounded functions from $\omega\setminus\{0\}$ to $\omega$. For each $f\in\mathcal F$ put
$$O_f=\{0\}\cup\{x\in G:\exists n\in\mathbb N\; (0<x(n)<f(n)\mbox{ and }\forall m>n\;x(m)=0)\}.$$
It is easy to check that the family $\{O_f:f\in\mathcal F\}$ is a base at the zero of a semigroup topology $\tau$ on $G$. For each positive integer $m\ge 2$ consider the function $\delta_m\in G$ such that $\delta_m(x)$ equals $1$ if $x=m$ and equals $0$ otherwise. We claim that $S=\{0\}\cup\{-\delta_{m}: m\ge 2\}$ is a closed suitable set. Indeed, since $G$ is the direct sum of the cyclic groups $C_m$ and $\delta_m$ is a generator of $C_m$ for each $m\geq 2$, we have $\langle S\rangle=G$. Let $f: \omega\setminus\{0\}\rightarrow\omega$ be the function such that $f(1)=0$ and $f(i)=i-2$ for any $i\geq 2$. It is easy to see that the set $(x+O_{f})\cap S$ is finite for any $x\in G$. Since $G$ is a $T_1$ space, it follows that $S$ is a closed discrete subset of $G$.
\end{proof}
The following result complements Proposition~\ref{prop:CCnotCS}.
\begin{example}\label{e2}
There exists a feebly compact non-countably compact Baire Hausdorff paratopological group $G\in \mathcal{S}_{cg}$ such that all its countable subsets are closed.
\end{example}
\begin{proof}
Let $G$ be the feebly compact non-countably compact Baire Hausdorff Abelian paratopological group with all countable subsets closed constructed by Sanchis and Tkachenko in the proof of~\cite[Theorem 2]{ST2012}. They also constructed an open subsemigroup $C$ of the group $G$ such that $C^{-1}$ is a closed discrete subspace of $G$ and $G=CC^{-1}$, so $G\in \mathcal{S}_{cg}$.
\end{proof}
The following result extends Proposition 2.7 from~\cite{DTT1999}.
\begin{proposition}\label{p0new}
Let $G$ be a countably compact paratopological group with a suitable set, let $H$ be a Hausdorff paratopological group, and let $f:G\to H$ be a continuous homomorphism with dense image. Then $H$ has a suitable set.
\end{proposition}
\begin{proof}
Let $S$ be a suitable set for $G$. The group $\langle f(S)\rangle=f(\langle S\rangle)$ is dense in $H$. We claim that $f(S)\cup\{e\}$ is a compact space with all points of $S'=f(S)\setminus\{e\}$ isolated. Indeed, let $V$ be any open neighborhood of $e$ in $H$. Suppose for a contradiction that the set $f(S)\setminus V$ is infinite. Then the set $S\setminus f^{-1}(V)$ is infinite too. Since the space $G$ is countably compact, the set $S\setminus f^{-1}(V)$ has an accumulation point $x\in G\setminus f^{-1}(V)$; this is impossible, since the only possible accumulation point of the set $S$ is $e\in f^{-1}(V)$. If $f(S)=\{e\}$ then $H=\overline{\langle f(S)\rangle}=\{e\}$. Otherwise $S'$ is a suitable set for $H$, because $\langle f(S)\rangle=\langle S'\rangle$, $S'\cup\{e\}=f(S)\cup\{e\}$ is a compact subset of the Hausdorff space $H$ and so is closed in it, and all points of $S'$ are isolated in $S'$.
\end{proof}
\section{Saturated paratopological groups}\label{sec:saturated}
Following Guran, we call a paratopological group $G$ {\it saturated} if for each neighborhood $U$ of the identity of $G$ its inverse $U^{-1}$ has non-empty interior in $G$~\cite{BR2018}. By {\it the group reflexion} $G^\flat=(G,\tau^\flat)$ of a paratopological group $(G,\tau)$ we understand the group $G$ endowed with the strongest topology $\tau^\flat\subset\tau$ turning $G$ into a topological group. This topology admits a categorial description: $\tau^\flat$ is the unique topology on $G$ such that\begin{itemize}
\item $(G,\tau^\flat)$ is a topological group;
\item the identity homomorphism $\operatorname{id}:(G,\tau)\to(G,\tau^\flat)$ is continuous;
\item for each continuous group homomorphism $h:G\to H$ into a topological group $H$, the homomorphism $h\circ \operatorname{id}^{-1}:G^\flat\to H$ is continuous.
\end{itemize}
Observe that the group reflexion of the Sorgenfrey line $\mathbb S$ is the usual real line $\mathbb R$~\cite{BR2018}. By~\cite[Proposition 3]{BR2001}, for a saturated paratopological group $(G,\tau)$, a base at the identity of the topology $\tau^\flat$ consists of the sets $UU^{-1}$, where $U$ runs over open neighborhoods of $e$ in $G$. It follows (see also~\cite[Corollary 2]{BR2001}) that if $G$ is Hausdorff then its group reflexion $G^\flat$ is Hausdorff as well. Theorem 5 from~\cite{BR2001} implies the following
\begin{lemma}\label{l:Th5BR2001}
Let $G$ be a saturated paratopological group. Then each non-empty open subset of $G$ contains a non-empty open subset of the (not necessarily $T_1$) group $G^\flat$.
\end{lemma}
\begin{proposition}\label{t4}
Let $G$ be a saturated Hausdorff paratopological group and let $S$ be a (closed) suitable set for the group $G^{\flat}$.
Then $S$ is a (closed) suitable set for the group $G$.
\end{proposition}
\begin{proof}
Since the identity map from $G$ to $G^\flat$ is continuous, $S$ is a (closed) discrete subspace of $G$ and $S\cup \{e\}$ is closed in $G$. By Lemma~\ref{l:Th5BR2001}, the set $\langle S\rangle$, being dense in $G^\flat$, is also dense in $G$.
\end{proof}
\subsection{Precompact paratopological groups}
Precompact paratopological groups are both compact-like and saturated (see \cite[Proposition 3.1]{Rav2} for the proof of the latter). Recall that a paratopological group $G$ is {\it precompact} if for each neighborhood $U$ of the identity of $G$, there exists a finite subset $F$ of $G$ such that $FU=G$. By Proposition 2.1 from~\cite{Rav2}, a paratopological group $G$ is precompact iff for each neighborhood $U$ of the identity of $G$, there exists a finite subset $F$ of $G$ such that $UF=G$.
Precompact topological groups are exactly subgroups of compact topological groups, see, for instance,~\cite[Corollary 3.7.17]{AT}, which implies that a subgroup of a precompact topological group is precompact. But a subgroup of a precompact paratopological group can fail to be precompact. Moreover, by Corollary 5 from~\cite{BR2003}, subgroups of precompact Hausdorff paratopological groups are exactly the so-called Bohr-separated Hausdorff paratopological groups. The latter class includes, for instance, all locally compact Hausdorff topological groups and all locally convex Hausdorff linear topological spaces (or, more generally, all locally quasi-convex Abelian Hausdorff topological groups, see \cite{Aus} or \cite{Ba}). On the other hand, a dense subgroup of a precompact paratopological group is precompact, see \cite[Lemma 1]{T2013}. The following simple proposition also holds.
\begin{proposition}
An open subgroup $H$ of a precompact paratopological group $G$ is precompact.
\end{proposition}
\begin{proof}
Pick any open neighborhood $U$ of $e$ in $H$. Since $H$ is open in $G$, the set $U$ is open in $G$, hence there exists a finite subset $F$ of $G$ such that $FU=G$. Since $(F\setminus H)U\cap H\subset (F\setminus H)H\cap H=\varnothing$, we obtain $H\subset (F\cap H)U$.
\end{proof}
\begin{proposition}\label{l15+}
Let $G$ be a non-feebly compact precompact paratopological group with a countable dense subgroup $P$. Then there exists a subset $L$ of $P$ such that $L$ is discrete and closed in $G$ and $\langle L\rangle=P$. In particular, $L$ is a suitable set for $G$, and thus $G\in\mathcal{S}_{c}$.
\end{proposition}
\begin{proof}
Since the space $G$ is not feebly compact, there is an infinite locally finite sequence $\{V_n\}$ of non-empty open subsets of $G$. By~\cite[Proposition 3.1]{Rav2}, the group $G$ is saturated. By Lemma~\ref{l:Th5BR2001}, for each $n$ there exists a non-empty open in $G^\flat$ set $W_n\subset V_n$. Put $U_n=\bigcup_{k\ge n} W_k$. Then $\{U_n\}$ is a non-increasing sequence of non-empty sets, open in $G^\flat$, which is locally finite in $G$. Let $\{x_{n}: n\in\omega\}$ be an enumeration of elements of $P$. As in the proof of~\cite[Lemma 3.5]{DTT1999} we can construct by induction an increasing sequence $\{L_{k}: k\in\omega\}$ of finite subsets of $P$ such that for each $k\in\omega$ we have $x_{k}\in \langle L_{k}\rangle$, $L_{k+1}\setminus L_{k}\subset U_{k}$, and $G=\langle L_{k}\rangle U_{k}$. Since the family $\{U_n\}$ is locally finite in $G$, it follows that $L=\bigcup_{n\in\omega} L_n$ is a closed discrete subset of $G$ and $\langle L\rangle=P$.
\end{proof}
\subsection{Non-precompact paratopological groups}
For a Hausdorff paratopological group $G$ the {\it Hausdorff number} of $G$, denoted by $Hs(G)$, is the minimum cardinal number $\kappa$ such that for every neighborhood $U$ of $e$ there exists a family $\gamma$ of neighborhoods of $e$ such that $|\gamma|\le\kappa$ and $\bigcap_{V\in\gamma} VV^{-1}\subset U$, see~\cite{Tka2009}.
\begin{proposition}\label{c5}
Let $G$ be a Hausdorff locally separable saturated (non-precompact) pa\-ra\-to\-po\-lo\-gi\-cal group such that $Hs(G)\psi(G)\le\omega$. Then $G$ has a (closed) suitable set.
\end{proposition}
\begin{proof}
Since a base at the identity of the topology $\tau^\flat$ consists of the sets $UU^{-1}$, where $U$ runs over open neighborhoods of $e$ in $G$, we have $\psi(G^\flat)\le Hs(G)\psi(G)\le\omega$. Since $G$ is saturated and locally separable, by Lemma~\ref{l:Th5BR2001} the group $G^{\flat}$ is locally separable; moreover, if the group $G$ is non-precompact, then the group $G^{\flat}$ is non-precompact as well. By Theorem 5.14 (Corollary 5.9) from~\cite{CMRS1998}, the group $G^{\flat}$ has a (closed) suitable set. By Proposition~\ref{t4}, $G$ has a (closed) suitable set.
\end{proof}
\begin{corollary}\label{c5+}
Each Hausdorff locally separable saturated first countable (non-pre\-com\-pact) paratopological group has a (closed) suitable set.
\end{corollary}
\begin{example}
The Sorgenfrey line $\mathbb{S}$ is a Hausdorff first countable hereditarily Lindel\"of (by \cite[Exercise 3.8.A.c]{Eng}) saturated paratopological group. Any two-element subset $\{x,y\}$ of $\mathbb R$ such that $x/y$ is irrational is a suitable set both for $\mathbb R$ and $\mathbb S$. By Proposition~\ref{t2.2}, each dense subgroup of $\mathbb{S}$ has a countable closed suitable set. However, the subgroup $\mathbb Q$ of $\mathbb{S}$ or $\mathbb{R}$ has no finite suitable set, because each finitely generated subgroup of $\mathbb{Q}$ is discrete in $\mathbb R$.
\end{example}
\begin{question}
Does every Hausdorff locally separable first-countable paratopological group $G$ have a suitable set?
\end{question}
Recall that it is announced in~\cite{G2003} that every Hausdorff separable non-precompact paratopological group $G$ has a closed suitable set.
\begin{remark}
By \cite[Corollary 3.14]{DTT1999}, a free (Abelian) topological group over a metrizable space $X$ has a closed suitable set. On the other hand, Theorem 3.8 from~\cite{CMRS1998} implies that the (Markov) free topological group $F(\beta D)$ over the \v{C}ech-Stone compactification of an uncountable discrete space $D$ has no suitable set. By Example 2.13 from~\cite{DTT1999}, there exists a dense open pseudocompact subspace $Y$ of $\beta\mathbb N\setminus\mathbb N$ such that the free Abelian topological group $A(Y)$ over the space $Y$ has a closed suitable set. Therefore, the free Abelian topological group $A(\beta\mathbb N\setminus\mathbb N)$ over $\beta\mathbb N\setminus\mathbb N$ does not have a suitable set, but it contains a dense subgroup $A(Y)\in\mathcal S_c$.
Usually, a free (Abelian) paratopological group over a space $X$ (see, for instance,~\cite{PN2006} for the definitions) is much more asymmetric than the free topological group over $X$. In the proof of~\cite[Proposition 3.5]{PN2006} Pyrch and Ravsky showed that for any non-empty space $X$, the set $-X$ is a closed suitable set for the (Markov) free Abelian paratopological group $A_p(X)$.
A similar result holds for the (Markov) free paratopological group over $X$, which, by~\cite[Theorem 4.13]{EN2012}, contains $X^{-1}$ as a closed subset and a discrete subspace. In particular, if $X=\beta D$ then, by Proposition 2.2 from~\cite{PN2006}, the group reflexion of this group is topologically isomorphic to the free topological group over $X$, which has no suitable set by Theorem 3.8 from~\cite{CMRS1998}.
\end{remark}
\begin{example}\label{ex:GflatwoSS}
There exists a zero-dimensional saturated paratopological group $G$ which is a sum of two of its closed discrete subsets such that the group reflexion $G^{\flat}$ has no suitable set.
\end{example}
\begin{proof}
According to~\cite[Theorem 2.4.a]{DTT1999} there exists a non-separable Lindel\"of locally convex linear topological space $L$ of countable pseudocharacter. Put $G=L\times\mathbb R$. Let $\mathscr{B}$ be a local base at the identity of the group $L$ consisting of closed convex sets $U$ such that $U=-U$. For each natural $n$ and each $U\in\mathscr{B}$ put ${\check U}_n=\{(x,t)\in L\times\mathbb R:0\le t< 1/n, x\in tU\}$. It is easy to check that the family $\check{\mathscr{B}}=\{{\check U}_n:U\in\mathscr{B}, n\in\mathbb N\}$ satisfies the Pontrjagin conditions (see, for instance,~\cite[Proposition 1.1]{Rav}) for a local base at the identity of a semigroup topology on the group $G$. Denote this topology by $\tau$.
We claim that $\check{\mathscr{B}}$ consists of closed subsets of the group $(G,\tau)$, so the latter is zero-dimensional. Indeed, let $U$ be any closed convex neighborhood of the identity of $L$, let $n$ be any natural number, and let $(y,s)$ be any point of $G\setminus {\check U}_n$. If $s\ge\tfrac 1n$ then $((y,s)+{\check U}_n)\cap {\check U}_n=\varnothing$. If $s<0$ then $((y,s)+{\check U}_m)\cap {\check U}_n=\varnothing$ for each $m$ with $s+\tfrac 1m<0$. Assume that $0\le s<\tfrac 1n$. Since $(y,s)\not\in {\check U}_n$ and $0\le s<\tfrac 1n$, we have $y\not\in sU$. Since the set $sU$ is closed in $L$, there exists $m\in\mathbb N$ such that $y\not\in\left(s+\tfrac 1m\right)U$. We claim that $((y,s)+{\check U}_{2m})\cap {\check U}_n=\varnothing$. Indeed, suppose for a contradiction that there exist $u,u'\in U$ and $0\le t<\tfrac 1n$, $0\le t'<\tfrac 1{2m}$, such that $(y,s)+(t'u',t')=(tu,t)$. Clearly, $t+t'\ne 0$. Since $U=-U$, we have
$$y=tu-t'u'=(t+t')\frac{tu+t'(-u')}{t+t'}\in (s+2t')U\subset \left(s+\frac 1m\right)U,$$
a contradiction.
It is easy to see that the group $(G,\tau)$ is saturated and its group reflexion $G^\flat$ is the Cartesian product $L\times\mathbb R$. So $G^\flat$ is a non-separable space of countable pseudocharacter. Since the space $\mathbb R$ is $\sigma$-compact, the space $G^\flat$ is Lindel\"of, see~\cite[Corollary 3.8.10]{Eng}. By Proposition~\ref{prop:DPTGSS}, $G^\flat$ has no suitable set.
Pick any non-zero vector $x_0\in L$ and consider the line $\ell=\mathbb R(x_0,1)=\{(tx_0,t):t\in\mathbb R\}$, which is closed in $G^\flat$ and so in $(G,\tau)$. It is easy to check that for any $x\in\ell$ and any $x_0\not\in U\in\mathscr{B}$ we have $(x+{\check U}_1)\cap\ell=\{x\}$, so $\ell$ is a discrete subset of $(G,\tau)$. The group $L'=L\times \{0\}$ is closed in $G^\flat$ and so in $(G,\tau)$. Clearly, for any $x\in L'$ and any $U\in\mathscr{B}$ we have $(x+{\check U}_1)\cap L'=\{x\}$, so $L'$ is a discrete subset of $(G,\tau)$. Finally, we have $G=\ell+L'$. In particular, $\ell\cup L'$ is a closed discrete generating subset of $G$.
\end{proof}
A subset $Y$ of a space $X$ is {\it strictly $\sigma$-discrete} (in $X$) if $Y$ is a countable union of closed discrete subsets of $X$~\cite{DTT1999}. We can adapt Theorem 3.6.I from~\cite{DTT1999} to saturated paratopological groups as follows.
\begin{proposition}\label{ttt+}
Every non-feebly compact non-precompact saturated paratopological group $G$ with a dense strictly $\sigma$-discrete subspace has a closed suitable set.
\end{proposition}
\begin{proof}
By Lemma~\ref{l:Th5BR2001}, the group $G^\flat$ is non-precompact too. That is, $G^\flat$ has a neighborhood $U$ of the identity such that $FU\ne G$ for any finite subset $F$ of $G$. Pick an arbitrary symmetric neighborhood $V$ in $G^\flat$ of the identity such that $V^4\subset U$. It can be shown (see, for instance,~\cite[Remark 5.4]{CMRS1998}) that the family $\{aV:a\in A\}$ is discrete in $G^\flat$ for some countably infinite subset $A$ of $G$. For each $a\in A$ let $H_a$ be a closed discrete subspace of $G$ such that $H=\bigcup_{a\in A} H_a$ is dense in $G$. Then $F_a=(H_a\cap V) \cup\{e\}$ is a closed discrete subspace of $G$ for each $a\in A$, and the set $F=\bigcup_{a\in A} F_a$ is dense in $V$. Since $\{aF_a: a\in A\}$ is a locally finite family of closed discrete subsets, by Lemma 6.1 from~\cite{CMRS1998} its union $S$ is closed and discrete. Since $F$ is dense in $V$ and $F\cup A\subset\langle S \rangle$, the subgroup $\langle S \rangle$ is dense in the subgroup $\langle V\cup A\rangle$ of $G^\flat$. So $S$ is a closed suitable set for the open subgroup $\langle V\cup A\rangle$ of $G$. By Proposition~\ref{t6}, the group $G$ has a closed suitable set as well.
\end{proof}
A regular space is {\it a $\sigma$-space} provided it has a $\sigma$-discrete (equivalently, $\sigma$-locally finite) network, see~\cite[4.3]{Gru}. If a topological group is a $\sigma$-space then, by Corollary 3.10 from~\cite{DTT1999}, it has a suitable set. This suggests the following
\begin{question}\label{qd}
Does every regular paratopological group which is a $\sigma$-space have a suitable set?
\end{question}
The following proposition provides a partial affirmative answer to Question~\ref{qd}.
\begin{proposition}\label{qdpart+}
Every non-precompact saturated paratopological group $G$ which is a $\sigma$-space has a suitable set.
\end{proposition}
\begin{proof}
If $G$ is non-feebly compact then $G$ has a suitable set by Proposition~\ref{ttt+}. If $G$ is feebly compact then $G$ is a topological group by Proposition 3.15 from~\cite{BR2020} and so $G$ has a suitable set by Corollary 3.10 from~\cite{DTT1999}.
\end{proof}
An {\it almost topological group} is a paratopological group $(G,\tau)$ admitting a Hausdorff group topology $\gamma\subset\tau$ and a local base $\mathscr{B}$ at the identity $e$ of the group $(G, \tau)$ such that the set $U\setminus \{e\}\in\gamma$ for each $U\in\mathscr{B}$. In this case we say that $G$ is an almost topological group with a {\it structure} $(\tau, \gamma, \mathscr{B})$~\cite{Fer2012}.
\begin{remark}
If in the proof of Example~\ref{ex:GflatwoSS} we take as $\mathscr{B}$ a local base at the identity of the group $L$ consisting of {\it open} convex sets, we obtain a Hausdorff saturated paratopological group $(G,\tau)$ which is a sum of two of its closed discrete subsets such that $(\tau,\tau^\flat,\check{\mathscr{B}})$ is a structure of an almost topological group on $G$, but the group reflexion $G^{\flat}=(G,\tau^\flat)$ has no suitable set.
\end{remark} \section{The permanence properties}\label{sec:perm} \subsection{Open subgroups} Similarly to the proof of \cite[Theorem 4.2]{CMRS1998}, we can show the following \begin{proposition}\label{t6} If an open subgroup of a paratopological group $G$ has a (closed) suitable set, then $G$ has a (closed) suitable set. \end{proposition} \begin{remark} (cf.,~\cite[Theorem 4.7]{CMRS1998}). Let $G$ be any paratopological group and $G'$ be the group $G$ endowed with the discrete topology. Then the group $G\times G'$ is generated by a closed discrete set $G\times \{e\}\cup \{(x,x):x\in G\}$ and $G$ is naturally isomorphic to an open subgroup $G\times \{e\}$ of $G\times G'$. \end{remark} \subsection{Dense subgroups}\label{sec:perm2} A subset $D$ of a space $X$ is called {\it strongly discrete} if each point $x\in D$ has a neighborhood $U_x$ such that the family $\{U_x:x\in D\}$ is {\em discrete}, that is, each point of $X$ has a neighborhood intersecting at most one set of the family. It is easy to see that each strongly discrete subset of a space $X$ is discrete and closed in $X$. A space is called {\it collectionwise Hausdorff} if each of its closed discrete subsets is strongly discrete. A subset $D$ of a space $X$ is called {\it sequentially dense} (in $X$) if each point $x\in X$ is a limit of a sequence of points of $D$. Clearly, each dense subset of a Fr\'{e}chet-Urysohn space is sequentially dense. A dense subgroup of a topological group $G$ with a suitable set need not have a suitable set even when $G$ is compact, see \cite[Corollary 2.9]{DTT1999}. \begin{proposition}\label{t2} Let $G$ be a hereditarily collectionwise Hausdorff paratopological group with a suitable set $S$ and $H$ be a sequentially dense subgroup of $G$. Then $H$ has a suitable set of size at most $|S|\cdot\aleph_0$. \end{proposition} \begin{proof} Since the space $G\setminus\{e\}$ is collectionwise Hausdorff, each point $x\in S$ has a neighborhood $U_x\not\ni e$ in $G$ such that the family $\{U_x:x\in S\}$ is discrete in $G\setminus\{e\}$. If $x\in H$ then let $S_x=\{x\}$, otherwise let $S_x$ be a sequence of distinct points of $U_x\cap H$, converging to $x$. Since $\{S_x:x\in S\}$ is a locally finite family of closed discrete subsets of the space $G\setminus\{e\}$, by Lemma 6.1 from~\cite{CMRS1998} its union $S'$ is closed and discrete in $G\setminus\{e\}$. Since $\overline{\langle S'\rangle}\supset S$ and $\overline{\langle S\rangle}=G$, $\overline{\langle S'\rangle}=G$. It follows that $S'$ is a suitable set for $H$. \end{proof} Similarly we can prove the following \begin{proposition}\label{t2.2} Let $G$ be a collectionwise Hausdorff paratopological group with a closed suitable set $S$ and $H$ be a sequentially dense subgroup of $G$. Then $H$ has a closed suitable set of size at most $|S|\cdot\aleph_0$. \end{proposition} \begin{corollary}\label{t2c} Let $G$ be a metrizable paratopological group with a (closed) suitable set $S$ and $H$ be a dense subgroup of $G$. Then $H$ has a (closed) suitable set of size at most $|S|\cdot\aleph_0$. \end{corollary} \begin{proof} It suffices to remark that $H$ is sequentially dense in $G$ and $G$ is collectionwise normal, by Theorems 5.1.3 and 5.1.18 from~\cite{Eng}. \end{proof} \begin{proposition}\label{t2.3} Let $G$ be a regular collectionwise Hausdorff paratopological group of countable pseudocharacter with a suitable set $S$ and $H$ be a sequentially dense subgroup of $G$. Then $H$ has a suitable set of size at most $|S|\cdot\aleph_0$.
\end{proposition} \begin{proof} Let $\{U_n:n\in\omega\}$ be a non-increasing sequence of open subsets of $G$ such that $U_0=G$ and $\bigcap_{n\in\omega}U_n=\{e\}$. Let $n\in\omega$ be any number. Put $V_n=U_n\setminus \overline{U_{n+2}}$. Since the space $G$ is collectionwise Hausdorff, each point $x$ of the closed discrete subspace $S\cap V_n$ of $G$ has a neighborhood $U_{x,n}\subset V_n$ such that the family $\{U_{x,n}:x\in S\cap V_n\}$ is discrete. If $x\in H$ then let $S_{x,n}=\{x\}$, otherwise let $S_{x,n}$ be a sequence of distinct points of $U_{x,n}\cap H$, converging to $x$. It is easy to see that $\{S_{x,n}:n\in\omega,\, x\in S\cap V_n\}$ is a locally finite family of closed discrete subsets of the space $G\setminus\{e\}$, so, by Lemma 6.1 from~\cite{CMRS1998}, its union $S'$ is closed and discrete in $G\setminus\{e\}$. Since $\overline{\langle S'\rangle}\supset S$ and $\overline{\langle S\rangle}=G$, $\overline{\langle S'\rangle}=G$. It follows that $S'$ is a suitable set for $H$. \end{proof} \subsection{Products}\label{sec:perm3} Let $G$ be the product of a family $\Gamma=\{G_i\colon i\in I\}$ of paratopological groups. The $\Sigma$-product ($\sigma$-product) of $\Gamma$ is the subgroup of $G$ consisting of all elements with at most countably (finitely) many non-identity coordinates. Similarly to the proof of \cite[Theorem 4.1]{CMRS1998}, we can show the following \begin{proposition} Let $\Gamma$ be a non-empty family of paratopological groups such that each member of it has a suitable set. Then the product of $\Gamma$ has a suitable set contained in the $\sigma$-product of $\Gamma$. It follows that both the $\sigma$- and the $\Sigma$-product of $\Gamma$ have a suitable set. \end{proposition} The proof of the following proposition is similar to that of~\cite[Lemma 3.1]{DTT1999}. \begin{proposition}\label{l1} If a paratopological group $G$ contains a closed discrete set $A$ such that $|A|\geq d(G)$ then $G\times G\in \mathcal{S}_{c}$. Moreover, if $|A|=|G|$ then $G\times G\in \mathcal{S}_{cg}$. \end{proposition} The {\it pseudocharacter} $\psi(X)$ of a space $X$ is the smallest infinite cardinal $\kappa$ such that each point of $X$ is an intersection of at most $\kappa$ open subsets of $X$, and the {\it extent} $e(X)$ is the supremum of cardinalities of closed discrete subspaces of $X$. The following proposition complements Proposition~\ref{l1} and its proof is similar to the proof of~\cite[Lemma 2.3]{DTT1999}. \begin{proposition}\label{prop:DPTGSS} If $G$ is a paratopological group with a suitable set then $d(G)\leq \psi(G)e(G)$. Moreover, if $G$ has a closed suitable set then $d(G)\le e(G)$. \end{proposition} \begin{proof} Since the second claim is obvious, we prove only the first. Let $S$ be a suitable set for $G$. For each open neighborhood $U$ of the identity, $S\setminus U$ is a closed discrete subspace of $G$, so $|S\setminus U|\le e(G)$. Let $\gamma$ be a family of open subsets in $G$ such that $\bigcap\gamma=\{e\}$ and $|\gamma|\le\psi(G)$. Then $S\setminus\{e\}=\bigcup\{S\setminus U: U\in\gamma\}$, so $|S|\le \psi(G)e(G)$. Since $S$ is a suitable set for $G$, $\langle S\rangle$ is a dense subset of $G$ of size at most $\psi(G)e(G)$. \end{proof} \subsection{Extensions} The following question is a counterpart of Problem 3.12 from~\cite{DTT2000} for paratopological groups. \begin{question} Let $G$ be a paratopological group and $H$ a closed normal subgroup of $G$. If $H$ and $G/H$ have suitable sets, does $G$ have a suitable set?
\end{question} {\bf Acknowledgments.} The authors thank Igor Guran for consultations and Taras Banakh for sharing the book~\cite{BP2003} with them and for other help. \end{document}
\begin{document} \title{Categorical Saito theory, II: Landau-Ginzburg orbifolds} \author[Junwu Tu]{ Junwu Tu} \address{Junwu Tu, Institute of Mathematical Science, ShanghaiTech University, Shanghai 201210, China.} \email{[email protected]} \begin{abstract} {\sc Abstract:} Let $W\in \mathbb{C}[x_1,\cdots,x_N]$ be an invertible polynomial with an isolated singularity at the origin, and let $G\subset {{\sf SL}}_N\cap (\mathbb{C}^*)^N$ be a finite diagonal and special linear symmetry group of $W$. In this paper, we use the category ${\sf MF}_G(W)$ of $G$-equivariant matrix factorizations and its associated VSHS to construct a $G$-equivariant version of Saito's theory of primitive forms. We prove there exists a canonical categorical primitive form of ${\sf MF}_G(W)$ characterized by $G_W^{{\sf max}}$-equivariance. Conjecturally, this $G$-equivariant Saito theory is equivalent to the genus zero part of the FJRW theory under LG/LG mirror symmetry. In the marginal deformation direction, we verify this for the FJRW theory of $\big(\frac{1}{5}(x_1^5+\cdots+x_5^5),\mathbb{Z}/5\mathbb{Z}\big)$ with its mirror dual B-model Landau-Ginzburg orbifold $\big(\frac{1}{5}(x_1^5+\cdots+x_5^5), (\mathbb{Z}/5\mathbb{Z})^4\big)$. In the case of the quintic family $\mathcal{W}=\frac{1}{5}(x_1^5+\cdots+x_5^5)-\psi x_1x_2x_3x_4x_5$, we also prove a comparison result of B-model VSHS's conjectured by Ganatra-Perutz-Sheridan~\cite{GPS}. \end{abstract} \maketitle \setcounter{tocdepth}{1} \section{Introduction} \paragraph{{\bf Backgrounds and Motivations.}} This is a sequel to our previous work~\cite{Tu} devoted to defining and studying what might be called a ``$G$-equivariant Saito theory of primitive forms''. Indeed, in {\em loc. cit.} we realized Saito's theory through Barannikov's notion~\cite{Bar} of Variation of Semi-infinite Hodge Structures (VSHS) associated with the category of matrix factorizations. Thus, to define a $G$-equivariant version of it, we simply consider the VSHS's associated with the category of $G$-equivariant matrix factorizations. The motivation to develop such a theory is, in the author's view, quite substantial. First of all, for an invertible polynomial $W$, it is proved in~\cite{HLSW}~\cite{LLSS} that Saito's theory of $W$ is mirror dual to the FJRW theory~\cite{FJRW} of $(W^t, G^{t,{\sf max}})$, i.e. the dual polynomial with its maximal diagonal symmetry group. But the FJRW theory is defined for all subgroups of $G^{t,{\sf max}}$ containing the diagonal symmetry group $J^t$. To make sense of mirror symmetry for $(W^t, G^t)$ with $J^t\subset G^t\subset G^{t,{\sf max}}$, one needs a $G$-equivariant Saito theory of $W$. A first case study was initiated by He-Li-Li~\cite{HLL}. Secondly, Saito's theory (when $G$ is trivial) is generically semisimple, while adding a non-trivial group $G$ may produce a non-semisimple theory. For example, this includes Calabi-Yau hypersurfaces through the so-called Landau-Ginzburg/Calabi-Yau correspondence. If the genus zero theory is not semisimple, one cannot define its higher genus theory using Givental-Teleman's construction. This motivates our third point of studying the category of $G$-equivariant matrix factorizations. Indeed, there is an alternative approach to extract categorical enumerative invariants developed in~\cite{Cos1}\cite{Cos2}\cite{CalTu}, which by construction works in all genera. A comparison result between that approach and the current VSHS approach is not yet known.
We refer to~\cite[Conjecture 1.8]{CLT} for a precise formulation of this conjectural comparison. \paragraph{{\bf A $G$-equivariant Saito theory.}} Let $W\in \mathbb{C}[x_1,\cdots,x_N]$ be an invertible polynomial with an isolated singularity at the origin, and let $G\subset {{\sf SL}}_N\cap (\mathbb{C}^*)^N$ be a finite diagonal and special linear symmetry group of $W$. Consider the category ${\sf MF}_G(W)$ of $G$-equivariant matrix factorizations. Our construction of a $G$-equivariant Saito theory proceeds in two steps (done in Section~\ref{sec:hodge} and Section~\ref{sec:splitting} respectively): \begin{itemize} \item[1.] We first construct in Theorem~\ref{thm:construction-vshs} a primitive and polarized VSHS associated with ${\sf MF}_G(W)$. This VSHS provides us with the necessary framework to define Saito's notion of primitive forms. Note that in order to construct such a VSHS, a crucial ingredient is the Hodge-to-de-Rham degeneration property of the category ${\sf MF}_G(W)$. This was proved by Halpern-Leistner and Pomerleano in~\cite[Corollary 2.26]{HalPom}. \item[2.] We then apply a bijection result (Theorem~\ref{thm:bijection2}) proved in~\cite{AT} to construct categorical primitive forms through certain linear algebra data: splittings of the non-commutative Hodge filtration. For the category ${\sf MF}_G(W)$, we prove in Theorem~\ref{thm:canonical} that there exists a canonical splitting of the non-commutative Hodge filtration. Through the aforementioned bijection, this yields a canonical choice of categorical primitive form of ${\sf MF}_G(W)$. \end{itemize} These two steps yield a VSHS ${\mathscr V}^{{\sf MF}_G(W)}$ together with a canonical primitive form $\zeta^{{\sf MF}_G(W)}$. From this data, one obtains a Frobenius manifold by a standard construction~\cite{SaiTak}. Conjecturally, this Frobenius manifold is mirror dual to the genus zero part of the FJRW theory associated with the dual orbifold $(W^t, G^t)$. In~\cite{HLSW}, the conjecture is proved when $G=\{\id\}$ and $G^t=G^{t,{\sf max}}_{W^t}$. We also remark that Step 1 above, i.e. the construction of the VSHS, holds in a much more general setup, as the degeneration result~\cite[Corollary 2.26]{HalPom} is proved for any $W$ with proper critical locus, and with an arbitrary finite symmetry group. Step 2 is the main reason we restrict ourselves to invertible polynomials. \paragraph{{\bf Non-commutative calculations in the case of cubics.}} In Section~\ref{sec:cubic} we work out an example of the $G$-equivariant Saito theory when $W=\frac{1}{3}(x^3_1+x_2^3+x_3^3)$ is the Fermat cubic and $G=\mathbb{Z}/3\mathbb{Z}$ acts diagonally. In this section, all computations are done non-commutatively in the sense that we work with Hochschild and cyclic chain complexes. The non-commutative calculation is needed to perform higher genus computations of categorical enumerative invariants as defined by Costello~\cite{Cos1}\cite{Cos2}, see also~\cite{CalTu}. \paragraph{{\bf Calculations in the case of quintics.}} Section~\ref{sec:quintic} uses the comparison result in our previous work~\cite{Tu} to deduce two main applications of the $G$-equivariant Saito theory: \begin{itemize} \item In Theorem~\ref{thm:b-model-comparison}, we prove that the categorical VSHS is isomorphic to the geometric one constructed by Griffiths in the case of the mirror quintic family. The proof relies on the calculations of VSHS's related to projective hypersurfaces by Griffiths~\cite{Gri} and Carlson-Griffiths~\cite{CarGri}.
This $B$-model comparison result is needed in Ganatra-Perutz-Sheridan's work~\cite{GPS}, which aims to establish a conceptual proof of the classical mirror symmetry conjecture via the categorical equivalence proved by Sheridan~\cite{She2}. In~\cite[Conjecture 1.14]{GPS} a general conjecture was stated about the comparison between categorical VSHS's and Griffiths' VSHS's. Theorem~\ref{thm:b-model-comparison} does not yield this conjecture in full generality, but suffices for Ganatra-Perutz-Sheridan's application in the case of quintic mirror symmetry. \item Theorem~\ref{thm:mirror} proves an LG/LG mirror symmetry result. Namely, for the Fermat quintic $W=\frac{1}{5}(x_1^5+\cdots+x_5^5)$, we match the equivariant Saito theory of ${\sf MF}_{(\mathbb{Z}/5\mathbb{Z})^4}(W)$ (defined using the canonical primitive form $\zeta^{{\sf MF}_{(\mathbb{Z}/5\mathbb{Z})^4}(W)}$) with the genus zero FJRW theory of $(W, \mathbb{Z}/5\mathbb{Z})$, both in the marginal deformation direction. The proof is based on our previous comparison result~\cite{Tu} and a technique of computing primitive forms developed by Li-Li-Saito in~\cite{LLS}. Once the primitive form is calculated, the associated genus zero prepotential function is matched easily with Chiodo-Ruan's calculation of the FJRW invariants~\cite{ChiRua}. \end{itemize} \paragraph{{\bf Conventions.}} We follow Sheridan's sign conventions and notations in~\cite{She}. In particular, the notation $\mu_n$ ($n\geq 1$) stands for the higher $A_\infty$ products in the shifted sign convention. For a sign $|a|\in \mathbb{Z}/2\mathbb{Z}$, denote by $|a|'=|a|+1\in \mathbb{Z}/2\mathbb{Z}$ its shift. Furthermore, as in {{\sl loc. cit.}}, for a $\mathbb{Z}$-graded vector space/module $M$, the notation $M[[u]]$ stands for the $u$-adic completion (with $\deg(u)=-2$) of $M[u]$ in the category of $\mathbb{Z}$-graded vector spaces/modules. For example, for the ground field the completion has no effect: $\mathbb{K}[[u]]=\mathbb{K}[u]$. Following~\cite[Section 3]{She}, for a Hochschild cochain $\varphi\in C^*(A)$ of an $A_\infty$-algebra, the notations $b^{1|1}(\varphi)$, $B^{1|1}(\varphi)$, $\iota(\varphi)$, $L_\varphi$ stand for certain actions of Hochschild cochains on Hochschild chains $C_*(A)$ or cyclic chains $C_*(A)[[u]]$, explicitly defined by \begin{align*} b^{1|1}(\varphi)(a_0|\cdots|a_n):= & \sum (-1)^{|\varphi|'\cdot (|a_0|'+\cdots+|a_j|')} \mu\big( a_0,\ldots,\varphi(a_{j+1},\ldots),\overbrace{\ldots,a_k}\big) | \ldots a_n,\\ B^{1|1}(\varphi)(a_0|\cdots|a_n):= & \sum (-1)^{|\varphi|'\cdot (|a_0|'+\cdots+|a_j|')} {\mathbf 1}|a_0|\ldots|\varphi(a_{j+1},\ldots)|\overbrace{\ldots,a_n},\\ \iota(\varphi):= & b^{1|1}(\varphi)+uB^{1|1}(\varphi),\\ L_\varphi(a_0|a_1|\cdots |a_n):= & \sum (-1)^{|\varphi|'\cdot (|a_0|'+\cdots+|a_j|')} a_0|a_1|\cdots|a_j|\varphi(a_{j+1},\cdots,a_{j+l})|\cdots|a_n\\ &+\sum \varphi(\overbrace{a_0,\cdots,a_{l-1}})|a_l|\cdots |a_n. \end{align*} \paragraph{{\bf Acknowledgment.}} The author is deeply indebted to Lino Amorim, Andrei C\u ald\u araru, Yunfeng Jiang, Si Li, Sasha Polishchuk, Yefeng Shen and Nick Sheridan for useful discussions. It was Si Li who suggested that requiring $G_W^{\sf max}$-equivariance should fix a canonical primitive form, which is precisely the content of Theorem~\ref{thm:canonical}. Sasha Polishchuk pointed out to the author the reference~\cite{HalPom} on the Hodge-to-de-Rham degeneration property.
Special thanks to Andrei C\u ald\u araru, Rahul Pandharipande, and Nick Sheridan as organizers of a beautiful workshop held at the Forschungsinstitut f\"ur Mathematik, ETH Z\"urich, where the author had the great opportunity to present the results of this paper. In particular, Theorem~\ref{thm:b-model-comparison} was inspired by Sheridan's talk at the workshop; he also suggested a careful treatment of the convergence issues in Section~\ref{sec:quintic}. \section{Non-commutative Hodge theory}~\label{sec:hodge} In this section, we recall the categorical construction of VSHS's. The important notion of VSHS's was introduced by Barannikov~\cite{Bar}. For its categorical construction, our primary references are~\cite{She}\cite{Shk2}\cite{Shk3}\cite{Shk4}\cite{CLT}. \paragraph{{\bf VSHS's.}} Let $(R,\mathfrak{m})$ be a complete regular local ring of finite type over $\mathbb{K}$. A polarized VSHS over $R$ is given by the following data: \begin{itemize} \item A $\mathbb{Z}/2\mathbb{Z}$-graded free $R[[u]]$-module $\mathcal{E}$ of finite rank. \item A {{\sl flat}}, degree zero, meromorphic connection $\nabla$ with a simple pole along $u=0$ in the $R$-direction, and at most an order two pole at $u=0$ in the $u$-direction. In other words, we have \[ \nabla_X: {\mathscr E} \rightarrow u^{-1} {\mathscr E}, \; X\in {\sf Der}(R), \; \;\; \nabla_{\frac{\partial}{\partial u}}: {\mathscr E} \rightarrow u^{-2} {\mathscr E}.\] \item An $R$-linear, $u$-sesquilinear, and $\nabla$-constant pairing $\langle-,-\rangle_{{\sf hres}}: \mathcal{E}\otimes \mathcal{E} \rightarrow R[[u]]$, such that its restriction to $u=0$ is non-degenerate. \end{itemize} We refer to~\cite[Section 2]{Tu} for more details. \begin{Definition} A VSHS $\big({\mathscr E},\nabla,\langle-,-\rangle\big)$ over a regular local ring $R=\mathbb{K}[[t_1,\cdots,t_\mu]]$ is called primitive if there exists a section $\zeta\in {\mathscr E}$ such that the map $\rho^\zeta: {\sf Der}_R \rightarrow {\mathscr E}/u{\mathscr E}$ defined by \[\rho^\zeta( \frac{\partial}{\partial t_j}) := \pi\big( u\nabla_{\frac{\partial}{\partial t_j}} \zeta \big)\] is an isomorphism. Here $\pi: {\mathscr E}\rightarrow {\mathscr E}/u{\mathscr E}$ is the canonical projection map. \end{Definition} \paragraph{{\bf Primitive VSHS's from Non-commutative geometry.}} Let $A$ be a $\mathbb{Z}/2\mathbb{Z}$-graded, finite-dimensional, smooth cyclic $A_\infty$ algebra of parity $N\in\mathbb{Z}/2\mathbb{Z}$. For $\epsilon\in \mathbb{Z}/2\mathbb{Z}$, denote by $HH_{[\epsilon]}(A)$ (respectively $HH^{[\epsilon]}(A)$) the parity $\epsilon$ part of the Hochschild homology (cohomology). The cyclic pairing on $A$ induces an isomorphism $A\rightarrow A^\vee[-N]$ of $A_\infty$-bimodules, which yields an isomorphism \[ HH^*(A) \cong HH^{*+N} (A^\vee).\] The right hand side is naturally isomorphic to the shifted dual of the Hochschild homology $HH_{*-N}(A)^\vee$. There is also a degree zero pairing naturally defined on Hochschild homology, known as the Mukai pairing \[ \langle-,-\rangle_{{\sf Muk}}: HH_*(A)\otimes HH_*(A) \rightarrow \mathbb{K}.\] Since $A$ is smooth and proper, this pairing is non-degenerate. Thus, it induces an isomorphism $HH_{*}(A) \cong HH_{-*}(A)^\vee$.
Putting these together, we obtain a chain of isomorphisms: \begin{equation}~\label{def:D} D: HH^*(A) \cong HH^{*+N} (A^\vee)\cong HH_{*-N}(A)^\vee \cong HH_{N-*}(A) \end{equation} Let us denote by $\omega:=D({\mathbf 1})$ the image of the unit element ${\mathbf 1}\in HH^*(A)$. It was shown in~\cite{AT} that the duality map $D$ is an isomorphism of $HH^*(A)$-modules, i.e. it intertwines the cup product with the cap product. Assume that $A$ satisfies the Hodge-to-de-Rham degeneration property. It was shown in~\cite{Iwa}\cite{Tu2} that the formal deformation theory of such $A$ is unobstructed. Let $\{\varphi_1,\cdots,\varphi_\mu\}$ be a basis of the even Hochschild cohomology $HH^{[0]}(A)$, and set \[ R:=\mathbb{K}[[t_1,\ldots,t_\mu]].\] Let ${\mathscr A}$ denote the universal family of deformations of $A$ as a $\mathbb{Z}/2\mathbb{Z}$-graded unital $A_\infty$-algebra, parametrized by the ring $R$. By construction, the Kodaira-Spencer map \begin{align*} {\sf KS}: {\sf Der}_\mathbb{K}(R) & \rightarrow HH^{[0]}({\mathscr A}),\\ \frac{\partial}{\partial t_j} & \mapsto [\frac{\partial}{\partial t_j}\mu^{\mathscr A}] \end{align*} is an isomorphism. Here $\mu^{\mathscr A}= \prod_n \mu_n^{\mathscr A}$ is the collection of $A_\infty$-structure maps of the family ${\mathscr A}$. In this section, we prove the following \begin{Theorem}~\label{thm:construction-vshs} Let the notations be as in the previous paragraph. Then the parity $N$ part of the negative cyclic homology $HC^-_{[N]}({\mathscr A})$ of the universal family carries a natural primitive VSHS over $R$. \end{Theorem} \begin{Proof} Recall that the VSHS on $HC^-_{[N]}({\mathscr A})$ was constructed by taking \begin{itemize} \item[--] in the $t$-direction, Getzler's connection~\cite{Get}; \item[--] in the $u$-direction, the connection operator defined in~\cite{KonSoi}\cite{KKP}\cite{Shk}\cite{CLT}; \item[--] the categorical higher residue pairing $\langle-,-\rangle_{{\sf hres}}$ defined in~\cite{Shk2}\cite{She}. \end{itemize} The compatibility of the above data holds in general, see for example~\cite{CLT} and~\cite{She}. The non-degeneracy of the pairing is due to Shklyarov~\cite{Shk2}. What remains to be proved is the local freeness of $HC^-_{[N]}({\mathscr A})$ as an $R[[u]]$-module, and its primitivity. We will first argue that the Hochschild cohomology $HH^{[0]}({\mathscr A})$ is a locally free $R$-module. In~\cite{Tu2}, it was proved that the DGLA $C^*(A)[1]$ is homotopy abelian, i.e. there is an $L_\infty$ quasi-isomorphism \[ {\mathscr U}: HH^*(A)[1] \rightarrow C^*(A)[1]\] where the left hand side is endowed with the trivial DGLA structure. Tensoring with $R$ and pushing forward the universal Maurer-Cartan element $\beta:=\sum_{j=1}^\mu t_j \varphi_j$ yields an $L_\infty$ quasi-isomorphism \[ {\mathscr U}^\beta: HH^*(A)[1]\otimes R \rightarrow C^*({\mathscr A})[1].\] This proves that $HH^*({\mathscr A})\cong HH^*(A)\otimes R$, which is clearly a locally free $R$-module of finite rank. In particular, its even degree part $HH^{[0]}({\mathscr A})$ is also a locally free $R$-module of finite rank. Next, we shall relate Hochschild homology with Hochschild cohomology via the duality isomorphism~\ref{def:D}.
Indeed, it was proved in~\cite{Tu2} that the universal family ${\mathscr A}$ can be chosen, after a gauge transformation, to be a cyclic universal family, with respect to the cyclic inner product $\langle-,-\rangle: A^{\otimes 2}\rightarrow \mathbb{K}$ on $A$, extended $R$-linearly to ${\mathscr A}$. Thus, there exists a duality map \[ D: HH^{[0]}({\mathscr A})\cong \big( HH_{[N]}({\mathscr A}) \big)^\vee \cong HH_{[N]}({\mathscr A}).\] By the Nakayama lemma, $D$ is an isomorphism. It remains to prove that $HC^-_{[N]}({\mathscr A})$ is a locally free $R[[u]]$-module of finite rank. We may use the homological perturbation lemma for this. Choose a homotopy retraction between \[ HH_*(A)[[u]]\cong HC^-_*(A) \cong \big( C_*(A)[[u]], b+uB\big).\] The first isomorphism exists as we assume the Hodge-to-de-Rham degeneration property. Tensoring with $R$ and $\mathfrak{m}$-adically completing the result yields an $R[[u]]$-linear homotopy retraction \[ HH_*(A) \otimes R[[u]] \cong \big( C_*(A)[[u]], b+uB\big) \widehat{\otimes} R.\] Here $\widehat{\otimes}$ stands for the $\mathfrak{m}$-adically completed tensor product. Adding the perturbation $L_{{\mathscr U}_*\beta}$ to the right hand side yields exactly the complex $C_*({\mathscr A})[[u]]$ which calculates the negative cyclic homology of ${\mathscr A}$. After homological perturbation we obtain \[ \big( HH_*(A) \otimes R[[u]], \delta \big) \cong \big( C_*(A)[[u]]\widehat{\otimes} R, b+L_{{\mathscr U}_*\beta}+uB\big).\] We claim that the differential $\delta$ is equal to zero. Indeed, let us consider the $u$-filtration on both sides, and consider the following commutative diagram. \[\begin{CD} HH_*(A)\otimes R @>>> HH_*(A) \otimes R[[u]]/u^{k+1} @>>> HH_*(A) \otimes R[[u]]/u^k\\ @VVV @VVV @VVV \\ HH_*({\mathscr A}) @>>> H^*\big(C_*(A)[[u]]\widehat{\otimes} R/u^{k+1}\big) @>>> H^*\big(C_*(A)[[u]]\widehat{\otimes} R/u^k\big) \end{CD}\] The point is that the homology of $C_*({\mathscr A})$, i.e. $HH_*({\mathscr A})$, is a locally free $R$-module of rank $\dim_\mathbb{K} HH_*(A)$ as we proved earlier, which implies that the induced differential $\delta$ on the upper left corner $HH_*(A)\otimes R$ is trivial. Induction on $k$ yields that the differential $\delta\equiv 0$. The primitivity of this VSHS is obtained by choosing a primitive element \[ \zeta:= \mbox{ any lift of } \; D({\mathbf 1}),\] with $D$ the duality map. Note that such a lift exists since $HC^-_{[N]}({\mathscr A})$ is locally free. To verify the primitivity of $\zeta$, we compute \begin{align*} \rho^\zeta(\frac{\partial}{\partial t_j}) &= \pi \big( u\nabla_{\frac{\partial}{\partial t_j}}\zeta\big)\\ &= -{\sf KS}(\frac{\partial}{\partial t_j})\cap D({\mathbf 1})\\ &= D\big({\sf KS}(-\frac{\partial}{\partial t_j})\big). \end{align*} Since both $D$ and the Kodaira-Spencer map are isomorphisms, we deduce that $\rho^\zeta$ is also an isomorphism. \end{Proof} \begin{Lemma} Let $HC^-_{[N]}({\mathscr A})$ be the primitive VSHS over $R$ defined in Theorem~\ref{thm:construction-vshs}.
Then there exists a unique vector field ${\sf Eu}\in {{\sf Der}}_\mathbb{K}(R)$ such that the differential operator $\nabla_{u\frac{d}{du}}+\nabla_{\sf Eu}$ is regular, i.e. it has no poles. \end{Lemma} \begin{Proof} Recall from~\cite{CLT} that the $u$-connection is of the form \[ \nabla_{u\frac{d}{du}}= u\frac{d}{du}+\frac{\Gamma}{2}+\frac{\iota(\prod_{n} (2-n)\mu_n)}{2u}.\] Thus, the $u^{-1}$-component of the operator $\nabla_{u\frac{d}{du}}$ is given by capping with the Hochschild cohomology class \[ [\frac{\prod_{n} (2-n)\mu_n}{2}] \in HH^*({\mathscr A}).\] As such, if we set ${\sf Eu}:={\sf KS}^{-1} [\frac{\prod_{n} (2-n)\mu_n}{2}]$, the combined operator $\nabla_{u\frac{d}{du}}+\nabla_{\sf Eu}$ is regular. If ${\sf Eu}'$ is another such vector field, we deduce that ${\sf KS}({\sf Eu}-{\sf Eu}')=0$, but ${\sf KS}$ is an isomorphism, so ${\sf Eu}-{\sf Eu}'=0$. \end{Proof} \begin{Definition}~\label{defi:primitive} An element $\zeta \in HC^-_*({\mathscr A})$ is called a primitive form of the polarized VSHS $HC^-_*({\mathscr A})$ if it satisfies the following conditions: \begin{itemize} \item[P1.] (Primitivity) The map defined by \[ \rho^\zeta: {\sf Der} (R) \rightarrow HC^-_*({\mathscr A})/uHC^-_*({\mathscr A}), \;\; \rho^\zeta(v):=[u\cdot \nabla^{{\sf Get}}_v\zeta]\] is an isomorphism. \item[P2.] (Orthogonality) For any tangent vectors $v_1, v_2\in {\sf Der} (R)$, we have \[\langle u\nabla^{{\sf Get}}_{v_1} \zeta, u\nabla^{{\sf Get}}_{v_2} \zeta\rangle \in R.\] \item[P3.] (Holonomicity) For any tangent vectors $v_1,v_2,v_3\in {\sf Der} (R)$, we have \[ \langle u\nabla^{{\sf Get}}_{v_1} u\nabla^{{\sf Get}}_{v_2} \zeta, u\nabla^{{\sf Get}}_{v_3} \zeta\rangle \in R\oplus u\cdot R.\] \item[P4.] (Homogeneity) There exists a constant $r\in \mathbb{K}$ such that \[ (\nabla_{u\frac{\partial}{\partial u}}+\nabla_{{\sf Eu}}^{{\sf Get}})\zeta= r\zeta.\] \end{itemize} \end{Definition} \paragraph{{\bf Splittings of the Hodge filtration.}} The definition of primitive forms is admittedly quite complicated. We use an idea due to Saito~\cite{Sai1}\cite{Sai2} to construct primitive forms via certain linear algebra data known as splittings of the Hodge filtration. We first describe this notion more precisely. The filtration defined by \[ F^k HC_*^-(A):= u^k\cdot HC_*^-(A), \;\; k\geq 0\] is known as the non-commutative Hodge filtration on the negative cyclic homology of $A$. Denote by \[ \pi: HC_*^-(A) \rightarrow HH_*(A)\] the natural projection map. \begin{Definition}~\label{defi-splitting} A splitting of the Hodge filtration of $A$ is a linear map \[ s: HH_{*}(A) \rightarrow HC^-_{*}(A),\] such that it satisfies \begin{itemize} \item[S1.] {{\sl Splitting property.}} $\pi\circ s=\id$; \item[S2.] {{\sl Lagrangian property.}} For any two classes $\alpha,\beta\in HH_{*}(A)$, we have $$\langle \alpha,\beta \rangle = \langle s(\alpha), s(\beta)\rangle_{\mathsf{hres}}.$$ \item[S3.] {{\sl Homogeneity.}} A splitting $s$ is called a good splitting if $\nabla_{u^2\frac{\partial}{\partial u}} \mathrm{Im} (s) \subset \mathrm{Im} (s) \bigoplus u \cdot \mathrm{Im} (s)$. \item[S4.] {{\sl $\omega$-Compatibility.}} Recall that $\omega=D({\mathbf 1})$ under the duality map~\ref{def:D}.
A splitting $s: HH_*(A) \rightarrow HC_*^-(A)$ of the Hodge filtration is called {{\sl $\omega$-compatible}} if $\nabla_{u\frac{\partial}{\partial u}} s(\omega) - r\cdot s(\omega)\in u^{-1} \mathrm{Im} (s)$ for some constant $r\in \mathbb{K}$. \end{itemize} \end{Definition} The following result can be used to construct categorical primitive forms. See~\cite{AT} for a detailed proof. Its commutative version was proved in~\cite{LLS}. The proof of the non-commutative version is almost identical to that of the commutative one. \begin{Theorem}~\label{thm:bijection2} Let $HC^-_*({\mathscr A})$ be a polarized primitive VSHS constructed as in Theorem~\ref{thm:construction-vshs}. Let $\omega=D({\mathbf 1})\in HH_*(A)$. Then there exists a natural bijection between the following two sets \begin{align*} {\mathscr P} &:= \left\{ \zeta\in HC^-_*({\mathscr A}) \mid \zeta \mbox{ is a primitive form such that } \zeta|_{t=0,u=0}=\omega\right\},\\ {\mathscr S} &:= \left\{ s: HH_{[N]}(A)\rightarrow HC_{[N]}^-(A) \mid s \mbox{ is an $\omega$-compatible good splitting of parity $[N]$}\right\}. \end{align*} Note that in the second set ${\mathscr S}$, the definition of a splitting of parity $[N]$ is the same as that in Definition~\ref{defi-splitting}, simply replacing $HH_*$ and $HC^-_*$ with $HH_{[N]}$ and $HC_{[N]}^-$. \end{Theorem} \paragraph{{\bf Saturated Calabi-Yau $A_\infty$-categories.}} Let ${\mathscr C}$ be a saturated (i.e. compact, smooth and compactly generated) Calabi-Yau $A_\infty$-category. Assume also that ${\mathscr C}$ satisfies the Hodge-to-de-Rham degeneration property. We may choose a split generator $E\in {\mathscr C}$, and apply the construction of VSHS's described above to the $A_\infty$-algebra ${{\sf End}}_{\mathscr C}(E)$. By the Morita invariance of all the structures involved in the construction (see~\cite{She}), the resulting VSHS is independent of the generator $E$, up to isomorphism of VSHS's. For this reason, we shall denote by $${\mathscr V}^{{\mathscr C}}:= \Big( HC^-_*\big( {{\sf End}}_{\mathscr C}(E)\big),\nabla,\langle-,-\rangle_{\sf hres}\Big)$$ this VSHS, suppressing its dependence on $E$. \section{Canonical splittings of invertible LG orbifolds}~\label{sec:splitting} Let $W\in \mathbb{C}[x_1,\cdots,x_N]$ be an invertible polynomial. We refer to~\cite[Section 2.1]{HLSW} and~\cite[Section 1.2]{Kra} for the definition of invertible polynomials. Denote by \[ G_W^{{\sf max}}:=\{ (\lambda_1,\cdots,\lambda_N)\in (\mathbb{C}^*)^N \mid W(\lambda_1 x_1,\cdots,\lambda_Nx_N)=W\}\] the group of maximal diagonal symmetries of $W$. Fix a subgroup $G\subset G_W^{{\sf max}}\cap {{\sf SL}}_N$. In this section, we consider the category ${\sf MF}_G(W)$ of $G$-equivariant matrix factorizations of $W$. Since $G_W^{{\sf max}}$ is a commutative group, if $\varphi$ is a $G$-equivariant morphism, then $g\varphi$ remains a $G$-equivariant morphism for any $g\in G_W^{{\sf max}}$. Thus the maximal symmetry group $G_W^{{\sf max}}$ still acts on ${\sf MF}_G(W)$. The main result of this section is the following \begin{Theorem}~\label{thm:canonical} There exists a unique $G^{{\sf max}}_W$-equivariant good splitting in the sense of Definition~\ref{defi-splitting} of the non-commutative Hodge filtration of ${\sf MF}_G(W)$. Furthermore, it is also compatible with the Calabi-Yau structure determined by $dx_1\wedge\cdots\wedge dx_N$.
\end{Theorem} \begin{remark} For the Hodge-to-de-Rham degeneration property of ${\sf MF}_G(W)$, Kaledin's theorem~\cite{Kal} does not apply, as the category ${\sf MF}_G(W)$ is only $\mathbb{Z}/2\mathbb{Z}$-graded. Nevertheless, this degeneration is proved in~\cite[Corollary 2.26]{HalPom}. It is also known that the category ${\sf MF}_G(W)$ is a saturated, Calabi-Yau dg-category, see~\cite[Theorem 2.5.2]{PolVai}. Thus, we may apply Theorem~\ref{thm:construction-vshs} to obtain a primitive VSHS ${\mathscr V}^{{\sf MF}_G(W)}$. The above theorem, combined with Theorem~\ref{thm:bijection2}, implies that there exists a canonical primitive form \[ \zeta^{{\sf MF}_G(W)}\in {\mathscr V}^{{\sf MF}_G(W)}.\] \end{remark} The proof of Theorem~\ref{thm:canonical} occupies the rest of the section. The existence is proved as in~\cite{LLS}. Our contribution is the uniqueness. We remark that when $G$ is trivial, the canonical splitting already appears in the works~\cite{HLSW}~\cite{LLSS} in the context of LG/LG mirror symmetry. \paragraph{{\bf Reduction to commutative theory.}} Let us first assume that the group $G$ is trivial. Recall from our previous work~\cite{Tu} that the set of splittings of the non-commutative Hodge filtration of ${\sf MF}(W)$ is naturally in bijection with the set of splittings in the sense of Saito~\cite{Sai1}~\cite{Sai2}. Namely, one replaces \begin{align*} HH_*({\sf MF}(W)) & \mapsto {\sf Jac} (W) dx_1\cdots dx_N\\ HC_*^-({\sf MF}(W)) & \mapsto H^*\big( \Omega_{\mathbb{C}^N}^*[[u]], dW+ud_{DR}\big)\\ \nabla_{u\frac{\partial}{\partial u}} &\mapsto \nabla_{u\frac{\partial}{\partial u}}:= u\frac{\partial}{\partial u} - \frac{N}{2} -\frac{1}{u}\cdot W\\ \langle-,-\rangle_{{\sf hres}} & \mapsto \langle-,-\rangle_{{\sf hres}} \mbox{\; (higher residue pairing of Saito)}\\ \omega & \mapsto dx_1\cdots dx_N \end{align*} \paragraph{{\bf Weights and homological degrees.}} The key ingredient in proving Theorem~\ref{thm:canonical} is the existence of a rational grading. Recall that by the definition of an invertible polynomial (see for example~\cite{HLSW}) there exist rational weights $\wt (x_j)=q_j$, $j=1,\ldots,N$, such that $q_j\in \mathbb{Q}\cap (0,\frac{1}{2}]$. With respect to these weights, the polynomial $W$ is quasi-homogeneous of weight $1$, or equivalently we have \begin{equation}~\label{eq:W} W= \sum_{j=1}^N q_j x_j \frac{\partial W}{\partial x_j}. \end{equation} We define a rational grading on the twisted de Rham complex $ \big( \Omega_{\mathbb{C}^N}^*[[u]], dW+ud_{DR}\big) $ by \[ \deg (x_i) = -2 q_i, \;\; \deg (dx_i) = -2 q_i +1, \;\; \deg (u) = -2.\] With respect to this degree, the operator $dW+ud_{DR}$ is of degree $-1$. \begin{Lemma}~\label{lem:omega} The cohomology class $[dx_1\cdots dx_N]\in H^*\big( \Omega_{\mathbb{C}^N}^*[[u]], dW+ud_{DR}\big)$ satisfies \[ \nabla_{u\frac{\partial}{\partial u}} [dx_1\cdots dx_N]= -\frac{c_W}{2} [dx_1\cdots dx_N],\] where the constant $c_W:=\sum_{j=1}^N(1-2q_j)$ is known as the central charge of $W$.
\end{Lemma} \begin{proof} We compute using Equation~\ref{eq:W}: \begin{align*} \nabla_{u\frac{\partial}{\partial u}} [dx_1\cdots dx_N] & =- \frac{N}{2} [dx_1\cdots dx_N] - \frac{1}{u} \sum_{j=1}^N q_j x_j \frac{\partial W}{\partial x_j} [dx_1\cdots dx_N]\\ &=- \frac{N}{2} [dx_1\cdots dx_N] - \frac{1}{u} \sum_{j=1}^N q_j [x_j (-1)^{j-1}dW\wedge dx_1\cdots \widehat{dx_j}\cdots dx_N]\\ &= - \frac{N}{2} [dx_1\cdots dx_N] - \frac{1}{u} \sum_{j=1}^N (-1)^j q_j u \cdot [d_{DR} (x_jdx_1\cdots \widehat{dx_j}\cdots dx_N)]\\ & = -\frac{N}{2} [dx_1\cdots dx_N] + \frac{1}{u} \sum_{j=1}^N q_j u\cdot [dx_1\cdots dx_N]\\ & = -\frac{c_W}{2} [dx_1\cdots dx_N]. \end{align*} \end{proof} Let $f\in \mathbb{C}[x_1,\ldots,x_N]$ be a quasi-homogeneous polynomial. Computing $ \nabla_{u\frac{\partial}{\partial u}} [f\cdot dx_1\cdots dx_N]$ as in the above proof yields \[ \nabla_{u\frac{\partial}{\partial u}} [f\cdot dx_1\cdots dx_N]= -\frac{\deg( [f\cdot dx_1\cdots dx_N])}{2} \cdot [f\cdot dx_1\cdots dx_N]. \] Thus, the homogeneity condition S3 in Definition~\ref{defi-splitting} indeed requires that the splitting map $s: {\sf Jac} (W) dx_1\cdots dx_N\rightarrow H^*\big( \Omega_{\mathbb{C}^N}^*[[u]], dW+ud_{DR}\big)$ be degree preserving. The existence of a splitting in the rationally graded case is proved in general in~\cite{LLS}. In fact, in {\em loc. cit.} the authors wrote down a non-empty parameter space for the set of splittings. The $\omega$-compatibility condition follows from the above Lemma~\ref{lem:omega}. In general, a homogeneous splitting of $f\cdot dx_1\cdots dx_N$ takes the form \[ f\cdot dx_1\cdots dx_N+u\cdot f_1\cdot dx_1\cdots dx_N +u^2 \cdot f_2\cdot dx_1\cdots dx_N +\cdots\] with $\deg(f_k\cdot dx_1\cdots dx_N)-\deg(f\cdot dx_1\cdots dx_N)=2k$. Assume a lifting of the form $$\sum_{0\leq j\leq k-1} f_j dx_1\cdots dx_N \cdot u^j$$ (with $f_0=f$) up to order $u^{k-1}$ is given; then the set of liftings to order $u^k$ is a torsor over $\big( {\sf Jac} (W) dx_1\cdots dx_N \big)_{2k+\deg(f\cdot dx_1\cdots dx_N)}$, the degree $\big(\deg(f\cdot dx_1\cdots dx_N)+2k\big)$-part of the Hochschild homology. Thus to prove Theorem~\ref{thm:canonical} in the case when the group $G$ is trivial, it suffices to prove the following \begin{Proposition}~\label{prop:vanishing} Let $W$ be an invertible polynomial with maximal diagonal symmetry group $G^{{\sf max}}_W$. Then for any degrees $a,b\in \mathbb{Q}$ such that $a\neq b$, we have \[ {{\sf Hom}}_{\mathbb{K}[G^{{\sf max}}]}\big( ({\sf Jac} (W) dx_1\cdots dx_N )_a, ({\sf Jac} (W) dx_1\cdots dx_N )_b\big)=0.\] In other words, there exist no non-trivial $\mathbb{K}[G^{{\sf max}}]$-equivariant maps between homogeneous components of different degrees. \end{Proposition} \begin{Proof} It is known that every invertible polynomial $W$ decomposes as \[ W=W_1+\cdots+W_l,\] with each summand a so-called atomic polynomial, which is of one of three types: Fermat, Chain, or Loop. We first prove the proposition for each of the three basic types. {{\bf (i) Fermat type.}} Let $W=x^n$. The weight is $q=\frac{1}{n}$. The Hochschild homology is given by \[ {{\sf Jac}}(W)dx =\langle dx, \cdots, x^{n-2}dx \rangle,\] with the degree given by $\deg( x^kdx )= \frac{n-2k-2}{n}$ for $0\leq k\leq n-2$. In this case $G^{{\sf max}}_W=\langle g \rangle$ is generated by $g=e^{2\pi i /n}$.
The generator $g$ acts by \[ g(x^kdx) = e^{2\pi i \cdot \frac{k+1}{n}} x^kdx.\] Thus $g$ acts on the space of $\mathbb{C}$-linear maps ${{\sf Hom}}\big( ({\sf Jac} (W) dx )_a, ({\sf Jac} (W) dx )_b\big)$ by $e^{2\pi i\cdot \frac{a-b}{2}}$. But since the difference between two different weights $a$ and $b$ satisfies \[-1< \frac{a-b}{2}<1,\] the factor $e^{2\pi i\cdot \frac{a-b}{2}}\neq 1$ for any $a\neq b$. This settles the Fermat case. {{\bf (ii) Chain type.}} For the Chain and Loop types, the exponent matrix and its inverse play an important role in the proof. We refer to~\cite[Section 5.1]{HLSW} for details used here. Let $W=x_1^{a_1}+x_1x_2^{a_2}+\cdots + x_{N-1} x_N^{a_N}$ be a chain type invertible polynomial. Denote by $E_W$ its exponent matrix \[ \begin{bmatrix} a_1 & 0 & \cdots & 0 & 0\\ 1 & a_2 & \cdots & 0&0\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & \cdots & 1 & a_N \end{bmatrix}\] For a row vector $r=(r_1,\cdots, r_N)$, we write $x^r:= x_1^{r_1}\cdots x_N^{r_N}$. We use a basis of ${{\sf Jac}}(W)dx$ by Krawitz~\cite{Kra} to perform the computation. This special basis consists of monomials $x^rdx$ with $0\leq r_j\leq a_j-1$ and $r\neq (*,\cdots,*,k, a_{N-2l}-1, \cdots, 0, a_{N-2}-1,0, a_N-1)$ with $k\geq 1$. The weights $q_j$ are related to the matrix $E_W^{-1}$ by $(q_1,\ldots,q_N)^{{\sf t}}= E_W^{-1} \cdot (1,1,1,\cdots,1,1)^{{\sf t}}$, which implies that \[ \frac{\deg(x^{r'}dx)-\deg(x^{r}dx)}{2} = (r-r')\cdot E_W^{-1} \cdot (1,1,1,\cdots,1,1)^{{\sf t}}.\] Furthermore, the $G^{{\sf max}}$-equivariance condition is equivalent to $(r-r')\cdot E_W^{-1}\in \mathbb{Z}^N$, since the column vectors of $E_W^{-1}$ generate the group $G^{{\sf max}}$. Without loss of generality, let us assume that $\deg(x^{r'}dx)>\deg(x^{r}dx)$. Then we have that \[ \frac{\deg(x^{r'}dx)-\deg(x^{r}dx)}{2} = (r-r')\cdot E_W^{-1} \cdot (1,1,1,\cdots,1,1)^{{\sf t}} \in \mathbb{Z}_{>0}.\] This implies that at least one entry in $(r-r')\cdot E_W^{-1}$ is greater than or equal to $1$. But on the other hand, we may explicitly compute the matrix $E_W^{-1}$: \[ \big(E_W^{-1}\big)_{ij} = (-1)^{i+j} \prod_{j\leq l \leq i} \frac{1}{a_l} \quad \mbox{for } i\geq j, \quad \mbox{and } \big(E_W^{-1}\big)_{ij}=0 \mbox{ otherwise}.\] If the $j$-th entry in $(r-r')\cdot E_W^{-1}$ is greater than or equal to $1$, we get \begin{equation}~\label{eq:geq1} \sum_{j\leq i\leq N} (-1)^{i+j} \frac{(r_i-r_i')}{\prod_{j\leq l \leq i} a_l} \geq 1. \end{equation} However, since $|r_i-r_i'|\leq a_i-1$, we also have \begin{align*} |\sum_{j\leq i\leq N} (-1)^{i+j} \frac{(r_i-r_i')}{\prod_{j\leq l \leq i} a_l} | & \leq \sum_{j\leq i\leq N} \frac{(a_i-1)}{\prod_{j\leq l \leq i} a_l} \\ &= \frac{a_j-1}{a_j}+\frac{a_{j+1}-1}{a_ja_{j+1}}+\frac{a_{j+2}-1}{a_ja_{j+1}a_{j+2}}+\cdots\\ &= \frac{a_ja_{j+1}-1}{a_ja_{j+1}}+\frac{a_{j+2}-1}{a_ja_{j+1}a_{j+2}}+\cdots\\ &=\frac{a_ja_{j+1}a_{j+2}-1}{a_ja_{j+1}a_{j+2}}+\cdots\\ &< 1. \end{align*} Thus the inequality~\ref{eq:geq1} cannot hold, which proves the statement for the Chain type atomic polynomials. {{\bf (iii) Loop type.}} In this case, the exponent matrix is given by \[ \begin{bmatrix} a_1 & 0 & \cdots & 0 & 1\\ 1 & a_2 & \cdots & 0&0\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & \cdots & 1 & a_N \end{bmatrix}\] Its determinant is $D=\prod_{k=1}^N a_k-(-1)^N$. Its inverse is \[ (E_W^{-1})_{ij}= \begin{cases} (-1)^{N+i+j} \frac{\prod_{k=i+1}^{j-1}a_k}{D}, & \; i<j\\ (-1)^{i+j} \frac{\prod_{k=i+1}^N a_k \cdot \prod_{l=1}^{j-1} a_l }{D}, & \; i\geq j \end{cases},\] with the convention that the empty product is $1$.
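For instance, as a quick sanity check of the above inverse formula (which we include only as an illustration), take $N=2$ and consider the two-variable loop polynomial $W=x_1^{a_1}x_2+x_1x_2^{a_2}$ with $a_1,a_2\geq 2$. Then \[ E_W=\begin{bmatrix} a_1 & 1\\ 1 & a_2\end{bmatrix},\qquad D=a_1a_2-(-1)^2=a_1a_2-1,\qquad E_W^{-1}=\frac{1}{a_1a_2-1}\begin{bmatrix} a_2 & -1\\ -1 & a_1\end{bmatrix},\] in agreement with the general formula displayed above.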
As in the chain type case, we calculate using two basis elements $x^rdx$ and $x^{r'}dx$, with $r=(r_1,\ldots,r_N)$ and $r'=(r'_1,\ldots,r'_N)$ such that $0\leq r_j,r_j'\leq a_j-1, \; (\forall 1\leq j\leq N)$. As in the chain type, it suffices to show that every entry in $(r-r')\cdot E_W^{-1}$ is strictly less than $1$. Indeed, its $j$-th entry is \begin{align*} & \sum_{1\leq i< j} (-1)^{N+i+j} (r_i-r_i') \frac{\prod_{k=i+1}^{j-1}a_k}{D} +\sum_{j\leq i\leq N} (-1)^{i+j}(r_i-r_i') \frac{\prod_{k=i+1}^N a_k \cdot \prod_{l=1}^{j-1} a_l }{D} \\ \leq & \frac{ \prod_{k=1}^{j-1}a_k - \prod_{k=2}^{j-1} a_k + \cdots +a_{j-1}-1}{D} + \frac{ \big( \prod_{k=j}^N a_k - \prod_{k=j+1}^N a_k +\cdots -1\big) \prod_{l=1}^{j-1} a_l}{D}\\ = & \frac{ (\prod_{k=1}^{j-1}a_k -1 ) + (\prod_{k=j}^N a_k -1)\cdot \prod_{l=1}^{j-1} a_l}{D}\\ = & \frac{ \prod_{k=1}^N a_k -1 }{D}. \end{align*} If $N$ is odd, the fraction $\frac{ \prod_{k=1}^N a_k -1 }{D}= \frac{ \prod_{k=1}^N a_k -1 }{\prod_{k=1}^N a_k +1}<1$, and we are done. If $N$ is even, the fraction $\frac{ \prod_{k=1}^N a_k -1 }{D}=1$. In the above inequality, equality can only hold when (with $N$ even) \[ (-1)^{i+j}(r_i-r_i')= a_i -1, \;\; \forall 1\leq i\leq N.\] If this is the case, one then verifies that \[ (r-r')\cdot E_W^{-1} = ((-1)^{j+1},(-1)^j,\ldots,1,\ldots,(-1)^{j+N}),\] i.e. the right hand side is the alternating vector with entries $\pm 1$ whose $j$-th entry is $1$. But then this implies that $ \frac{\deg(x^{r'}dx)-\deg(x^{r}dx)}{2} = (r-r')\cdot E_W^{-1} \cdot (1,1,1,\cdots,1,1)^{{\sf t}}=0$ since $N$ is even, which contradicts the assumption that $\deg(x^rdx)\neq \deg(x^{r'}dx)$. This proves the case of the Loop type invertible polynomials. To finish the proof, for a general invertible polynomial $W=W_1+\cdots+W_l$, we observe that \[ G_W^{{\sf max}} = G_{W_1}^{{\sf max}} \times \cdots \times G_{W_l}^{{\sf max}}.\] Furthermore, the Hochschild homology also decomposes as \[ {{\sf Jac}}(W)={{\sf Jac}}(W_1)\otimes\cdots\otimes {{\sf Jac}}(W_l).\] Using that $G_W^{{\sf max}}$ is commutative, the result easily follows from the atomic cases. \end{Proof} \paragraph{{\bf Canonical splitting with an orbifolding group $G\subset G_W^{{\sf max}}\cap {{\sf SL}}_N$.}} To prove Theorem~\ref{thm:canonical} with an orbifolding group $G$, the idea is to use the localization formula for Hochschild invariants to reduce the statement to the individual sectors. For this purpose, we need the following \begin{Lemma}~\label{lem:invariants} Let $W$ be an invertible polynomial. Let $g\in G^{{\sf max}}$ be a diagonal symmetry of $W$. Denote by $V^g$ the fixed point locus of the action $g: V\rightarrow V$, where $V=\mathbb{C}^N$. Then $W|_{V^g}$ is also an invertible polynomial. \end{Lemma} \begin{Proof} Assume that $W=W_1+\cdots+W_l$ decomposes into a sum of atomic polynomials $W_j$. Since $g$ is a diagonal symmetry, it also decomposes as $g=(g_1,\cdots, g_l)$, which shows that the restriction $W|_{V^g}$ is the sum of the restrictions of the $W_j$'s to the invariant subspaces of the $g_j$'s. Thus it suffices to prove the statement for atomic polynomials. \begin{itemize} \item For the Fermat type $W=x^n$, the fixed subspace is either $\mathbb{C}$ or $0$, thus clearly invertible. \item For the chain type $W=x_1^{a_1}+x_1x_2^{a_2}+\cdots + x_{N-1} x_N^{a_N}$, notice that if $g(x_i)=x_i$ with $i\geq 2$, then it implies that $g(x_{i-1})=x_{i-1}$.
Thus if we set $l(g)$ to be the maximal index such that $g(x_{l(g)})=x_{l(g)}$, we have that \[ V^g=\mathbb{C}^{l(g)}_{x_1,\cdots,x_{l(g)}}.\] It is clear that $W|_{V^g}$ is again atomic and of chain type. \item For the loop type, one can show that the fixed subspace $V^g$ is either the whole space $V$ or $0$, thus clearly invertible. \end{itemize} \end{Proof} \paragraph{{\bf Finishing the proof of Theorem~\ref{thm:canonical}.}} Recall the following localization formula: \begin{equation}~\label{eq:localization} HH_*({\sf MF}_G(W))\cong \bigoplus_{g\in G} \big( HH_*({\sf MF}(W|_{V^g}))\big)_{G}. \end{equation} Similarly for its negative cyclic homology. Again, since $G_W^{\sf max}$ is a diagonal symmetry group, requiring a splitting to be $G_W^{\sf max}$-equivariant is the same as requiring that the splitting be $G_{W|_{V^g}}^{{\sf max}}$-equivariant on each $g$-invariant subspace. Now the uniqueness follows from the uniqueness on each $g$-invariant subspace, which holds by Lemma~\ref{lem:invariants} and the case of a trivial orbifolding group treated above. \section{The cubic family}~\label{sec:cubic} As an example, we work out the case of the Fermat cubic \[ W=\frac{1}{3}(x_1^3+x_2^3+x_3^3)\] endowed with the orbifold group $G=\mathbb{Z}/3\mathbb{Z}=\langle (\zeta_3,\zeta_3,\zeta_3)\rangle$ acting diagonally on the $x$-variables, with $\zeta_3=e^{2\pi i/3}$. The purpose of this section is to demonstrate the computability of non-commutative Hodge structures and categorical primitive forms in the simplest Calabi-Yau case. Overall, we have chosen to provide a schematic flow of how the non-commutative computation could be done, while the tedious details of most computations are omitted. To write down explicit formulas, it is convenient to use the shuffle and cyclic shuffle operations, denoted by ${\sf sh}$ and ${\sf sh}^c$ respectively. We refer to~\cite[Section 2.2]{Shk2} for their definitions. \paragraph{{\bf The $A_\infty$-algebra structure.}} By our previous work~\cite{Tu}, using Kontsevich's deformation quantization formula one can write down an $A_\infty$-algebra structure on the $\mathbb{Z}/2\mathbb{Z}$-graded vector space \[ {{\sf End}}(\mathbb{C}^{\sf stab})= \Lambda^*(\epsilon_1,\epsilon_2,\epsilon_3),\] i.e. the exterior algebra generated by three odd variables $\epsilon_1, \epsilon_2, \epsilon_3$. We write $\epsilon_{12}=\epsilon_1\wedge\epsilon_2$, and similarly $\epsilon_{213}=\epsilon_2\wedge\epsilon_1\wedge \epsilon_3$. We shall denote this $A_\infty$-algebra by $E$. For the Hochschild invariants of $E$, one can use the inverse of the HKR map to obtain the following \begin{Proposition}~\label{prop:basis} The Hochschild homology of $E$ is an $8$-dimensional vector space, concentrated in odd degree. Explicitly, a basis is given by \begin{align*} HH_*(E) & = {{\sf span}}\big( [ \epsilon_{123}], [\epsilon_{123}|\epsilon_1],[\epsilon_{123}|\epsilon_2], [\epsilon_{123}|\epsilon_3],\\ &[\epsilon_{123}|{\sf sh} (\epsilon_1,\epsilon_2)], [\epsilon_{123}|{\sf sh}(\epsilon_1,\epsilon_3)], [ \epsilon_{123}|{\sf sh}(\epsilon_2,\epsilon_3)], [\epsilon_{123}|{\sf sh}(\epsilon_1,\epsilon_2,\epsilon_3)]\big). \end{align*} \end{Proposition} \begin{remark} The above formula for the generators of the Hochschild homology can also be obtained using the Kunneth product~\cite{Shk}. Indeed, let $A$ and $B$ be two strictly unital $A_\infty$ algebras.
We have the Kunneth map $\times : C_*(A)\otimes C_*(B) \rightarrow C_*(A\otimes B)$ defined by \[ (a_0|a_1|\cdots|a_n)\times (b_0|b_1|\cdots|b_m) = (-1)^* a_0\otimes b_0| {{\sf sh}}\big( a_1\otimes{\mathbf 1}|\cdots|a_n\otimes {\mathbf 1} |{\mathbf 1}\otimes b_1|\cdots|{\mathbf 1}\otimes b_m\big).\] Here the sign $*=|b_0|\cdot (|a_1|'+\cdots+|a_n|')$. One can apply the formula by considering $E\cong (A_3\otimes A_3) \otimes A_3$ with $A_3$ the $A_\infty$ algebra associated with $\frac{1}{3}x^3$. This approach is a bit {\sl ad hoc} since the tensor product of $A_\infty$-algebras is rather complicated to define, and may not be an associative operation on triple tensor products. Nevertheless, the Kunneth formula seems to still hold. \end{remark} \paragraph{{\bf The semi-direct product $E\rtimes \mathbb{Z}/3\mathbb{Z}$.}} The orbifold group $\mathbb{Z}/3\mathbb{Z}$ acts on $E$ diagonally on the variables $\epsilon_1$, $\epsilon_2$ and $\epsilon_3$. Since Kontsevich's deformation quantization is invariant under the affine transformation group, the $A_\infty$-algebra structure on $E$ is therefore $\mathbb{Z}/3\mathbb{Z}$-equivariant. Thus, we may form the semi-direct product $A_\infty$-algebra $E\rtimes \mathbb{Z}/3\mathbb{Z}$. By the general orbifolding construction of compact generators, this latter algebra is Morita equivalent to the category ${\sf MF}_G(W)$. The Hochschild homology of $E\rtimes \mathbb{Z}/3\mathbb{Z}$ can be computed using the localization formula~\ref{eq:localization}. In this formula, we used the $G$-coinvariants, which are isomorphic to the $G$-invariants since our base field $\mathbb{K}$ has characteristic zero. Throughout the section, we shall work with $G$-invariants instead of coinvariants. Since the dimension $N=3$ is odd, Formula~\ref{eq:localization} implies that \[ HH_{\sf odd}(E\rtimes \mathbb{Z}/3\mathbb{Z}) \cong \big( HH_{\sf odd}(E) \big)^{\mathbb{Z}/3\mathbb{Z}} ={{\sf span}} \big( [ \epsilon_{123}],[\epsilon_{123}|{\sf sh}(\epsilon_1,\epsilon_2,\epsilon_3)]\big).\] \paragraph{{\bf Explicit formula of the canonical splitting.}} The maximal diagonal symmetry group of $W$ is given by $G_W^{\sf max}=(\mathbb{Z}/3\mathbb{Z})^3$. We may write down an explicit formula for the unique $G_W^{{\sf max}}$-equivariant splitting of $E\rtimes \mathbb{Z}/3\mathbb{Z}$. These formulas are obtained through the so-called cyclic Kunneth map~\cite{Shk2}. Explicitly, the canonical splitting is given by \begin{align}~\label{eq:s} \begin{split} s([\epsilon_{123}])&=\sum_{i\geq 0, j\geq 0, k\geq 0} (-1)^{i+j+k} d_id_jd_k\; \epsilon_{123}|{\sf sh} ( \epsilon_1^{3i}, \epsilon_2^{3j}, \epsilon_3^{3k}) u^{i+j+k}\\ &+\sum_{i\geq 0, j\geq 0, k\geq 0} (-1)^{i+j+k+1}d_id_jd_k\; \epsilon_3|{\sf sh} \big( {\sf sh}^c(\epsilon_1^{3i+1},\epsilon_2^{3j+1}), \epsilon_3^{3k}\big) u^{i+j+k+1}\\ &+ \sum_{i\geq 0, j\geq 0, k\geq 0} (-1)^{i+j+k}d_id_jd_k \; {\mathbf 1}|{\sf sh}^c\big( \epsilon_{12}|{\sf sh}(\epsilon_1^{3i}, \epsilon_2^{3j}), \epsilon_3^{3k+1}\big) u^{i+j+k+1}. \end{split} \end{align} Here $d_i:=\prod_{1\leq l\leq i} (3l-2)$, and $d_0=1$.
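As a quick consistency check (which we record only for the reader's convenience), note that the only term in Equation~\ref{eq:s} carrying no positive power of $u$ is the $(i,j,k)=(0,0,0)$ term of the first sum, which equals $\epsilon_{123}$. Hence \[ s([\epsilon_{123}])=\epsilon_{123}+O(u),\] in agreement with the splitting property S1 of Definition~\ref{defi-splitting}.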
A similar formula reads \begin{align}~\label{eq:omega} \begin{split} & s([\epsilon_{123}|{\sf sh}(\epsilon_1,\epsilon_2,\epsilon_3)])\\ =&\sum_{i\geq 0, j\geq 0, k\geq 0} (-1)^{i+j+k} c_ic_jc_k\; \epsilon_{123}|{\sf sh} ( \epsilon_1^{3i+1}, \epsilon_2^{3j+1}, \epsilon_3^{3k+1}) u^{i+j+k}\\ &+\sum_{i\geq 0, j\geq 0, k\geq 0} (-1)^{i+j+k+1}c_ic_jc_k\; \epsilon_3|{\sf sh} \big( {\sf sh}^c(\epsilon_1^{3i+2},\epsilon_2^{3j+2}), \epsilon_3^{3k+1}\big) u^{i+j+k+1}\\ &+ \sum_{i\geq 0, j\geq 0, k\geq 0} (-1)^{i+j+k}c_ic_jc_k \; {\mathbf 1}|{\sf sh}^c\big( \epsilon_{12}|{\sf sh}(\epsilon_1^{3i+1}, \epsilon_2^{3j+1}), \epsilon_3^{3k+2}\big) u^{i+j+k+1}, \end{split} \end{align} with $c_i:=\prod_{1\leq l\leq i} (3l-1)$, and $c_0=1$. One can check directly that the map $s$ defined above is $G_W^{\sf max}$-equivariant with respect to the diagonal action of $G_W^{\sf max}$ on the elements $\epsilon_1$, $\epsilon_2$ and $\epsilon_3$. Hence by Theorem~\ref{thm:canonical}, this is the canonical splitting of the non-commutative Hodge filtration. \paragraph{{\bf Deformations of the Fermat cubic.}} By the localization formula for the Hochschild cohomology, we have \[ HH^{{\sf even}}(E)^{\mathbb{Z}/3\mathbb{Z}}\cong HH^{{\sf even}} (E\rtimes \mathbb{Z}/3\mathbb{Z}).\] Hence all deformations of $E\rtimes \mathbb{Z}/3\mathbb{Z}$ arise from $\mathbb{Z}/3\mathbb{Z}$-invariant deformations of $E$. A universal family can be realized by deforming $W$ to \[ \mathcal{W}:= W+t_0+t\cdot x_1x_2x_3,\] and then applying Kontsevich's deformation quantization to $\mathcal{W}$ to obtain a family of $A_\infty$-algebras $E_\mathcal{W}$. From this family of $A_\infty$-algebras, taking negative cyclic homology yields a VSHS on $HC_*^-(E_\mathcal{W})$ over the formal power series ring $\mathbb{C}[[t_0,t]]$. One can show that there is an isomorphism of VSHS's~\footnote{See Proposition~\ref{prop:equiv} for an analogous statement and its proof.}: \[ {\mathscr V}^{{\sf MF}_{\mathbb{Z}/3\mathbb{Z}}(W)} \cong HC_*^-(E_\mathcal{W})^{\mathbb{Z}/3\mathbb{Z}}.\] In the remaining part of the section, we use the above isomorphism and Theorem~\ref{thm:bijection2} to compute the primitive form $\zeta$ associated with the canonical splitting in Equations~\ref{eq:s} and~\ref{eq:omega}. Since the $t_0$-direction does not affect the main computation, we shall set $t_0=0$ from now on. We begin with the Kodaira-Spencer class of the above family. Denote by \[ \mu(t)_n: (E_\mathcal{W}[1])^{\otimes n} \rightarrow E_\mathcal{W}[1]\] the $A_\infty$-structure maps obtained from deformation quantization. Then we have \begin{Lemma} For any multi-index $J$, we have the following identities: \begin{align*} \frac{\partial}{\partial t}\mu(t)_3\big(\epsilon_{\sigma(1)}\wedge \epsilon_J,\epsilon_{\sigma(2)},\epsilon_{\sigma(3)}\big)&= \frac{1}{6}\epsilon_J, \;\;\forall \sigma\in \Sigma_3.\\ \frac{\partial}{\partial t}\mu(t)_n\big(\epsilon_{i_1}\wedge \epsilon_J,\epsilon_{i_2},\cdots,\epsilon_{i_n}\big)&=0, \;\;\forall n\geq 4, \;\;\forall 1\leq i_j\leq 3.
\end{align*} \end{Lemma} The Getzler-Gauss-Manin connection acts on cyclic chains of $E_{\mathcal{W}}$ by the formula \begin{align*} \nabla^{\sf Get}_{\frac{\partial}{\partial t}} (\xi)&= \frac{\partial}{\partial t}(\xi)-u^{-1}\iota\big(\frac{\partial\mu(t)}{\partial t}\big)(\xi),\\ \iota\big(\frac{\partial\mu(t)}{\partial t}\big)(\xi)&= b^{1|1}(\frac{\partial\mu(t)}{\partial t}; \xi)+B^{1|1}(\frac{\partial\mu(t)}{\partial t}; \xi). \end{align*} For simplicity, we set $A:=\iota(\frac{\partial}{\partial t}\mu(t))$. \begin{Lemma} There exists an isomorphism of differential modules over $\mathbb{K}[[t]]$: \[ e^{tA/u}: \big(HP_*(E)\widehat{\otimes}_\mathbb{K}\mathbb{K}[[t]], \frac{\partial}{\partial t}\big) \rightarrow \big( HP_*(E_{\mathcal{W}}), \nabla^{\sf Get}_{\frac{\partial}{\partial t}}\big).\] In other words, the above isomorphism trivializes the differential module $HP_*(E_\mathcal{W})$. \end{Lemma} \begin{Lemma}~\label{lem:A-action} For any positive integers $I, J, K$, we have \begin{align*} A\big( \epsilon_{123}|{\sf sh}(\epsilon_1^I, \epsilon_2^J, \epsilon_3^K)\big)&= \epsilon_{123}|{\sf sh}(\epsilon_1^{I-1}, \epsilon_2^{J-1}, \epsilon_3^{K-1}),\\ A\big( \epsilon_3|{\sf sh}(\epsilon_1|\epsilon_2, \epsilon_1^I, \epsilon_2^J, \epsilon_3^K)\big) &=(I+\frac{1}{2})\cdot\epsilon_3|{\sf sh}(\epsilon_1^I,\epsilon_2^J,\epsilon_3^{K-1})+\epsilon_3|{\sf sh}(\epsilon_1|\epsilon_2,\epsilon_1^{I-1}, \epsilon_2^{J-1}, \epsilon_3^{K-1}),\\ A\big( {\mathbf 1}|{\sf sh}(\epsilon_{12}|\epsilon_3, \epsilon_1^I, \epsilon_2^J, \epsilon_3^K)\big)&= \frac{1}{2}\cdot \epsilon_1|{\sf sh}(\epsilon_1^{I-1},\epsilon_2^J,\epsilon_3^K)-\frac{1}{2}\cdot \epsilon_2|{\sf sh}(\epsilon_1^I,\epsilon_2^{J-1},\epsilon_3^K)\\ &+ {\mathbf 1} |{\sf sh}(\epsilon_{12}|\epsilon_3, \epsilon_1^{I-1},\epsilon_2^{J-1},\epsilon_3^{K-1})+{\mathbf 1} |{\sf sh} (\epsilon_{12}, \epsilon_1^{I-1}, \epsilon_2^{J-1}, \epsilon_3^K). \end{align*} \end{Lemma} \paragraph{{\bf Computation of the primitive form.}} Let us write \[ s_0:= s\big(\epsilon_{123}\big),\;\; \omega:= s\big(\epsilon_{123}|{\sf sh} (\epsilon_1,\epsilon_2,\epsilon_3)\big)\] for the negative cyclic homology classes of $E$ in Equations~\ref{eq:s} and~\ref{eq:omega}. The class $\omega$ induces a trace map \[ {\sf Tr}_\omega: E \rightarrow \mathbb{K}, \;\; {\sf Tr}_\omega(\epsilon_{123})=1, \mbox{\; and $0$ otherwise.}\] The graded symmetric bilinear form $\langle\epsilon_I,\epsilon_J\rangle={\sf Tr}_\omega(\epsilon_{I}\wedge\epsilon_J)$ is non-degenerate, which implies that $\omega$ defines a Calabi-Yau structure on $E$. As we have proved in the previous section, the canonical splitting is $\omega$-compatible. The canonical splitting $s$ determines a primitive form of the VSHS on $HC_{{\sf odd}}^-(E_{\mathcal{W}})^{\mathbb{Z}/3\mathbb{Z}}$ by Theorem~\ref{thm:bijection2}. We briefly recall this construction. Indeed, the splitting $s$ defines a subspace \[ L^s:= \bigoplus_{l\geq 1} u^{-l} \cdot {\sf Im} (s) \subset \big(HP_*(E)\big)^{\mathbb{Z}/3\mathbb{Z}},\] complementary to the canonical subspace $(HC_*^-(E))^{\mathbb{Z}/3\mathbb{Z}}\subset (HP_*(E))^{\mathbb{Z}/3\mathbb{Z}}$.
We can parallel transport the subspace $L^s$ using the Getzler-Gauss-Manin connection to nearby fibers, which yields a direct sum decomposition \[ HP_*(E_\mathcal{W})^{\mathbb{Z}/3\mathbb{Z}} \cong HC_*^-(E_\mathcal{W})^{\mathbb{Z}/3\mathbb{Z}}\oplus (L^s)^{{\sf flat}}.\] Here and in the following, we use a superscript ${{\sf flat}}$ to denote the Getzler-Gauss-Manin flat extension of periodic cyclic homology classes. Equivalently, this direct sum decomposition corresponds to a projection map \[ \Pi^s : HP_*(E_\mathcal{W})^{\mathbb{Z}/3\mathbb{Z}} \rightarrow HC_*^-(E_\mathcal{W})^{\mathbb{Z}/3\mathbb{Z}}\] which splits the canonical inclusion map. Then, the primitive form associated with the splitting $s$ is given by \[ \zeta:= \Pi^s\big( \omega^{{\sf flat}}\big).\] That is, the primitive form $\zeta$ is simply the projection of the flat extension $\omega^{{\sf flat}}$, which {\sl a priori} is only a periodic cyclic homology class, along the parallel transported splitting. Using Lemma~\ref{lem:A-action} to compute $s_0^{{\sf flat}}$ and $\omega^{{\sf flat}}$ yields \begin{align*} s_0^{{\sf flat}} &= \sum_{n\geq 0} \frac{ (-1)^n (d_n)^3 t^{3n}}{(3n)!} \epsilon_{123} + O(u),\;\; d_n=\prod_{1\leq i\leq n} (3i-2), \;\; d_0=1,\\ \omega^{{\sf flat}} &= \sum_{n\geq 0} \frac{ (-1)^n (c_n)^3 t^{3n+1}}{(3n+1)!} \epsilon_{123} u^{-1} + O(u^0),\;\; c_n=\prod_{1\leq i\leq n} (3i-1), \;\; c_0=1. \end{align*} Let us set $g(t):=\sum_{n\geq 0} \frac{ (-1)^n (d_n)^3 t^{3n}}{(3n)!}$ and $h(t):=\sum_{n\geq 0} \frac{ (-1)^n (c_n)^3 t^{3n+1}}{(3n+1)!}$. Observe that $g(t)=1+O(t)$, hence it is invertible in $\mathbb{K}[[t]]$. Using $s_0^{{\sf flat}}$ to kill the $u^{-1}$-term of $\omega^{{\sf flat}}$ shows that \[ \zeta:= \omega^{{\sf flat}} - \frac{h(t)}{g(t)} s_0^{{\sf flat}} u^{-1} \in O(u^0).\] \paragraph{{\bf The flat coordinate $\tau$.}} The primitive form induces a flat structure on the deformation space. In this flat structure, as pointed out in~\cite[Section 3.2.2]{LLSS}, the flat coordinate can be read off from the coefficient of the $u^{-1}$-term in $\omega^{{\sf flat}}$, which gives $\tau= \frac{h(t)}{g(t)}$. Using this flat coordinate, we compute the genus zero prepotential function by the formula \[ \partial_i\partial_j\partial_k {\mathscr F} := \langle u\nabla_{\partial_i} u \nabla_{\partial_j} u \nabla_{\partial_k} \zeta, \zeta \rangle_{{\sf hres}},\] where the vector fields $\partial_i$, $\partial_j$, $\partial_k$ are either $\partial/\partial t_0$ or $\partial/\partial \tau$. We obtain ${\mathscr F}= \frac{1}{2}t_0^2\tau$, confirming that for the category ${\sf MF}_G(W)$ (which, by Orlov~\cite{Orl}, is equivalent to the derived category of coherent sheaves on the elliptic curve $W=0$ in ${\mathbb{CP}}^2$) there are no higher non-vanishing invariants in genus zero. \section{The quintic family}~\label{sec:quintic} In this section, we study VSHS's associated with quintic families. A notable difference from the discussion in Section~\ref{sec:hodge} is that in this section, we shall work with VSHS's over an actual geometric base $B$, rather than a formal base. Another difference is that we no longer require primitivity. We shall emphasize these differences whenever it is necessary. Recall that the mirror quintic family $[\mathfrak{X}/(\mathbb{Z}/5\mathbb{Z})^3]$ is defined as follows.
First, consider a one-parameter family of smooth Calabi-Yau hypersurfaces $\mathfrak{X}\subset B \times {\mathbb{CP}}^4$ defined by the equation \[ \mathcal{W}(\psi,x):= \frac{1}{5}( x_1^5+x_2^5+x_3^5+x_4^5+x_5^5)-\psi x_1x_2x_3x_4x_5,\] with $\psi\in B=\mathbb{C}\backslash\{1,e^{2\pi i/5}, e^{4\pi i/5}, e^{6\pi i/5}, e^{8\pi i/5}\}$. Denote by $\pi: \mathfrak{X}\rightarrow B$ the canonical projection map. Next, we observe that the group \[ (\mathbb{Z}/5\mathbb{Z})^3=\{ (a_1,a_2,a_3,a_4,a_5)\in (\mathbb{Z}/5\mathbb{Z})^5 \mid a_1+\cdots+a_5=0, a_5=0\}\] naturally acts on $\mathfrak{X}$ by sending $x_j \; (j=1,\ldots,5)$ to $e^{2 a_j \pi i/5} \cdot x_j$. We thus obtain a family of smooth Calabi-Yau orbifolds $[\mathfrak{X}/(\mathbb{Z}/5\mathbb{Z})^3]$. Recall the following lemma from~\cite[Lemma 2.7]{GPS}. \begin{Lemma}~\label{lem:equivalence} A $\mathbb{Z}$-graded polarized VSHS $({\mathscr E},\nabla,\langle-,-\rangle_{{\sf hres}})$ of parity $N$ over a base space $B$ is equivalent to the following data: \begin{itemize} \item[--] A locally free, finite rank, $\mathbb{Z}/2\mathbb{Z}$-graded $S$-module ${\mathscr V}\cong {\mathscr V}_{[0]}\oplus {\mathscr V}_{[1]}$. \item[--] A flat connection $\nabla$ on each ${\mathscr V}_\sigma$. \item[--] A decreasing filtration $F^\bullet {\mathscr V}_{\sigma}$ on each ${\mathscr V}_\sigma$ that satisfies Griffiths' transversality $\nabla F^p{\mathscr V}_{\sigma}\subset F^{p-1}{\mathscr V}_{\sigma}$. \item[--] A covariantly constant bilinear pairing \[ (-,-): {\mathscr V}_\sigma\otimes {\mathscr V}_\sigma \rightarrow S\] such that $(\alpha,\beta)=(-1)^N(\beta,\alpha)$, such that $(F^p{\mathscr V}_\sigma,F^q{\mathscr V}_\sigma)=0$ if $p+q>0$, and such that the induced pairing $(-,-): {\sf Gr}^p_F{\mathscr V}_\sigma\otimes {\sf Gr}^{-p}_F{\mathscr V}_\sigma \rightarrow S$ is non-degenerate for all $p$. \end{itemize} \end{Lemma} \begin{proof} We refer to~\cite[Lemma 2.7]{GPS} for the proof. For later use, we recall the following from {\sl loc. cit.}. First, we form the localization bundle $\widetilde{{\mathscr E}}:={\mathscr E}[u^{-1}]$. Notice that we have the periodicity isomorphism $\widetilde{{\mathscr E}}_k \stackrel{u\cdot}{\longrightarrow} \widetilde{{\mathscr E}}_{k-2}$. Associated with the polarized VSHS $({\mathscr E},\nabla,\langle-,-\rangle_{{\sf hres}})$, one sets \begin{itemize} \item ${\mathscr V}_\sigma := \widetilde{{\mathscr E}}_k$ with any $k$ such that $k\pmod{2}=\sigma$. \item The bilinear pairing is given by \begin{equation}~\label{eq:bilinear} (\alpha,\beta):= {\sqrt{-1}}^{\,k} \langle \tilde{\alpha},\tilde{\beta} \rangle_{\sf hres} \end{equation} if $\tilde{\alpha}\in \widetilde{{\mathscr E}}_k$ and $\tilde{\beta}\in \widetilde{{\mathscr E}}_{-k}$. Note that the right hand side lies inside $S$ for degree reasons. \end{itemize} \end{proof} We shall freely use this equivalence in the rest of the section. For a $\mathbb{Z}$-graded VSHS $({\mathscr E},\nabla,\langle-,-\rangle_{{\sf hres}})$, to simplify notation, denote by $\big({\mathscr E}_{\sigma},\nabla,(-,-)\big) \; (\sigma\in \{ {[0]}, {[1]}\})$ the corresponding $\mathbb{Z}/2\mathbb{Z}$-graded module obtained via the lemma above. \paragraph{{\bf Griffiths' construction of VSHS's.}} Returning to the orbifold $[\mathfrak{X}/(\mathbb{Z}/5\mathbb{Z})^3]$, there are two VSHS's naturally associated with this data: one geometric and the other categorical.
We first recall the geometric construction of VSHS, essentially due to Griffiths~\cite{Gri1}. Roughly speaking, one first constructs a resolution~\cite{Roa}, say $\widetilde{\mathfrak{X}}$, of the orbifold $[\mathfrak{X}/(\mathbb{Z}/5\mathbb{Z})^3]$. Denote by $\widetilde{\pi}: \widetilde{\mathfrak{X}} \rightarrow B$ the projection map to the base. Define a VSHS using the equivalent data described in Lemma~\ref{lem:equivalence} as follows. The underlying $\mathbb{Z}/2\mathbb{Z}$-graded bundle is purely odd and given by $R^3\widetilde{\pi}_*\mathbb{C}\otimes_\mathbb{C} S$, the middle cohomology of the family $\widetilde{\pi}: \widetilde{\mathfrak{X}}\rightarrow B$. It is endowed with the classical Hodge filtration $F^\bullet$ and the Gauss-Manin connection $\nabla$. The polarization $(-,-)$ is the intersection form on middle cohomology. It turns out that in this case the VSHS can be described in terms of $\mathfrak{X}$ by taking $(\mathbb{Z}/5\mathbb{Z})^3$-invariants. Indeed, we have a natural isomorphism \begin{equation}~\label{eq:iso1} \big( R^3\pi_*\mathbb{C}\otimes_\mathbb{C} S \big)^{(\mathbb{Z}/5\mathbb{Z})^3} \cong R^3\widetilde{\pi}_*\mathbb{C}\otimes_\mathbb{C} S \end{equation} of VSHS's over $B$. Indeed, consider the pull-back map via $p: \widetilde{\mathfrak{X}}\rightarrow [ \mathfrak{X}/(\mathbb{Z}/5\mathbb{Z})^3]$. The morphism $p^*$ intertwines the Hodge filtration and the Gauss-Manin connection. For the polarization, we have \[ (p^*x, p^*y)= 5^3\cdot (x,y), \;\; \forall x, y \in \big( R^3\pi_*\mathbb{C}\otimes_\mathbb{C} S \big)^{(\mathbb{Z}/5\mathbb{Z})^3}.\] The factor $5^3$ is a constant, which shows that $5^{3/2}\cdot p^*$ is an isomorphism between VSHS's. \paragraph{{\bf Categorical construction of VSHS's.}} The second construction of VSHS over $B$ is analogous to the one described in Section~\ref{sec:hodge}, with the difference that we work over $S=\Gamma(B,{\mathscr O}_B)$, the ring of functions on $B$, instead of over a formal ring. More precisely, denote by $A:=\mathbb{C}[\epsilon_1,\epsilon_2,\epsilon_3,\epsilon_4,\epsilon_5]$ the super-commutative ring generated by $5$ odd variables. Recall that, as in~\cite{Tu}, we may use Kontsevich's deformation quantization formula to obtain an $S$-linear $A_\infty$-algebra structure on $A\otimes_\mathbb{C} R$. Indeed, let ${\mathscr U}: T_{\sf poly}(A)[1] \rightarrow C^*(A)[1]$ be Kontsevich's $L_\infty$ quasi-isomorphism. Tensoring it with $S$ yields (which we still denote by ${\mathscr U}$) \[ {\mathscr U}: T_{\sf poly}(A)[1]\otimes_{\mathbb{C}} S \rightarrow C^*(A\otimes_\mathbb{C} S)[1].\] The element $\mathcal{W}=\frac{1}{5}( x_1^5+x_2^5+x_3^5+x_4^5+x_5^5)-\psi x_1x_2x_3x_4x_5\in S[x_1,x_2,x_3,x_4,x_5]$ is obviously a Maurer-Cartan element of the left hand side; its push-forward \[ {\mathscr U}_*\mathcal{W}:= \sum_{k\geq 1} \frac{1}{k!}{\mathscr U}_k(\mathcal{W}^k)\] is a Maurer-Cartan element of $C^*(A\otimes_\mathbb{C} S)[1]$, which by definition is an ($S$-linear) $A_\infty$-algebra structure on $A\otimes_\mathbb{C} S$. We remark that the above infinite sum is well-defined, since for each element $a_1\otimes\cdots\otimes a_n\in A[1]^{\otimes n}$ the evaluation $ \sum_{k\geq 1} \frac{1}{k!}{\mathscr U}_k(\mathcal{W}^k)(a_1,\ldots,a_n)$ is in fact a finite sum: by Kontsevich's explicit formula, each $\mathcal{W}$ acts on the $a$'s by a $5$-th order differential operator.
Denote this $S$-linear $A_\infty$-algebra by $A^{\mathcal{W}}$, indicating that it is a deformation of $A$ using the Maurer-Cartan element $\mathcal{W}$. Note that as a $\mathbb{Z}/2\mathbb{Z}$-graded $S$-module, it is simply $A\otimes_\mathbb{C} S$. Since Kontsevich's deformation quantization formula is invariant under any affine symmetry, the maximal diagonal symmetry group $(\mathbb{Z}/5\mathbb{Z})^4:=\{ (a_1,a_2,a_3,a_4,a_5)\in (\mathbb{Z}/5\mathbb{Z})^5 \mid a_1+\cdots+a_5=0\}$ acts on the $A_\infty$-algebra $A^\mathcal{W}$. Denote by $A^\mathcal{W}\rtimes (\mathbb{Z}/5\mathbb{Z})^4$ the associated semi-direct product $A_\infty$-algebra. The relationship of these $A_\infty$-algebras with matrix factorizations is the following. For each fixed $\psi\in B$, it was proved in~\cite{Tu} that the $A_\infty$-algebra $A^{\mathcal{W}_\psi}$ is a minimal model of the dg-algebra ${{\sf End}}(\mathbb{C}^{\sf stab})$ for the Koszul matrix factorization $\mathbb{C}^{\sf stab}\in {\sf MF}(\mathcal{W}_\psi)$, which generates the category ${\sf MF}(\mathcal{W}_\psi)$ by a result of Dyckerhoff~\cite{Dyc}. The second VSHS over $B$ is given by the negative cyclic homology $HC_{{\sf odd}}^-\big( A^\mathcal{W}\rtimes (\mathbb{Z}/5\mathbb{Z})^4 \big)$. As in Section~\ref{sec:hodge}, we put on it the categorical higher residue pairing, the $u$-connection and the Getzler connection. As in the geometric case, one can realize this VSHS as the $(\mathbb{Z}/5\mathbb{Z})^4$-invariant part of the VSHS $HC^-(A^\mathcal{W})$. \begin{Proposition}~\label{prop:equiv} There exists an isomorphism of VSHS's over $B$: \begin{equation}~\label{eq:iso2} HC_{\sf odd}^-\big( A^\mathcal{W}\rtimes (\mathbb{Z}/5\mathbb{Z})^4 \big)\cong \big( HC^-( A^\mathcal{W})\big)^{(\mathbb{Z}/5\mathbb{Z})^4}. \end{equation} \end{Proposition} \begin{proof} For simplicity, denote $G=(\mathbb{Z}/5\mathbb{Z})^4$ and denote by $G^\vee={\sf Hom}(G,\mathbb{C}^*)$ its dual group. There exists a quasi-isomorphism of chain complexes: \begin{align*} \gamma: \big(C_*(A^\mathcal{W}\rtimes G)\big)_{G^\vee} & \rightarrow C_*(A^\mathcal{W})_G\\ \gamma ( a_0\rtimes g_0|\cdots|a_n\rtimes g_n) &= \begin{cases} a_0|g_0(a_1)|\cdots|g_0g_1\ldots g_{n-1}(a_n) \mbox{\;\; if $g_0\cdots g_n=\id$},\\ 0\mbox{\;\; otherwise} \end{cases} \end{align*} The fact that $\gamma$ is a quasi-isomorphism follows from the localization formula~\ref{eq:localization} by observing that the condition $g_0\cdots g_n=\id$ precisely picks up the component $g=\id$ in this formula. It is straightforward to check that $\gamma$ intertwines both the $u$-connection and the Getzler connection in the $\psi$-direction. For the pairing, one verifies that \[ |G|\cdot \langle \gamma(x), \gamma(y) \rangle_{\sf hres}= \langle x, y \rangle_{\sf hres}.\] Thus $|G|^{1/2}\cdot \gamma$ is an isomorphism of VSHS's. \end{proof} The VSHS $HC^-( A^\mathcal{W})$ can be compared with Saito's original VSHS on the twisted de Rham complex. The following result was proved in our previous work~\cite{Tu} in the formal setting. We reprove it here over the ring $S$.
\begin{Lemma}~\label{lem:comparison} There exists an isomorphism \[ \Psi: HC^-(A^\mathcal{W}) \cong H^*\big( \Omega_{B\times \mathbb{C}^5/B}^*[[u]], d\mathcal{W}+ud_{DR}\big)\] of VSHS's, where the right hand side is Saito's VSHS with connection operators given by $\nabla_{\frac{d}{d\psi}}:= \frac{d}{d\psi} - \frac{1}{u} \cdot x_1x_2x_3x_4x_5$, and $\nabla_{\frac{d}{du}}:= \frac{d}{du}-\frac{5}{2u}-\frac{\mathcal{W}}{u^2}.$ \end{Lemma} \begin{proof} For simplicity, denote by ${\mathscr V}^{{\sf Saito}}=H^*\big( \Omega_{B\times \mathbb{C}^5/B}^*[[u]], d\mathcal{W}+ud_{DR}\big)$. Recall from~\cite{Tu} that the isomorphism $\Psi$ is, up to scalar, given by the following composition \[ HC^-(A^\mathcal{W}) \stackrel{{\mathscr U}^{{\sf Sh},\mathcal{W}}_0}{\longrightarrow} H^*\big( \Omega_A^*\otimes_{\mathbb{C}} R[[u]], L_{\mathcal{W}}+ud_{DR}\big) \rightarrow ({\mathscr V}^{{\sf Saito}})^\vee \leftarrow {\mathscr V}^{{\sf Saito}}.\] The second arrow is always an isomorphism. The third arrow is induced from Saito's higher residue pairing, which is an isomorphism due to its non-degeneracy. The first map ${\mathscr U}^{{\sf Sh},\mathcal{W}}_0$ is given by a deformation of the Tsygan formality map using the Maurer-Cartan element $\mathcal{W}$. It is standard in deformation theory to show that it is an isomorphism in the formal setting. However, in our setting, we prove the stronger statement that it remains an isomorphism over $S[[u]]$. Indeed, first observe that the map \[ {\mathscr U}^{{\sf Sh},\mathcal{W}}_0=\sum_{k\geq 0} \frac{1}{k!} {\mathscr U}^{\sf Sh}_k(\mathcal{W}^k)\] is in fact a finite sum when applied to any element $a_0|a_1|\cdots|a_n\in A\otimes A[1]^{\otimes n}$, because $\mathcal{W}$ acts by a $5$-th order differential operator. To argue that $${\mathscr U}^{{\sf Sh},\mathcal{W}}_0: C_*(A^\mathcal{W})[[u]] \rightarrow\big( \Omega_A^*\otimes_{\mathbb{C}} R[[u]], L_{\mathcal{W}}+ud_{DR}\big) $$ is a quasi-isomorphism, it suffices to prove that \[ {\mathscr U}^{{\sf Sh},\mathcal{W}}_0: C_*(A^\mathcal{W}) \rightarrow \big( \Omega_A^*\otimes_{\mathbb{C}} R, L_{\mathcal{W}}\big)\] is a quasi-isomorphism of mixed complexes. To this end, we consider an exhaustive filtration on both sides by the number of $\epsilon$'s. By definition, both the differentials and the map ${\mathscr U}^{{\sf Sh},\mathcal{W}}_0$ preserve this filtration, hence ${\mathscr U}^{{\sf Sh},\mathcal{W}}_0$ induces a map of the associated spectral sequences on both sides. The $E^1$-page of the left hand side can be computed using the Hochschild-Kostant-Rosenberg isomorphism for the algebra $A$, which yields exactly $\big( \Omega_A^*\otimes_{\mathbb{C}} R, L_{\mathcal{W}}\big)$. The $E^1$-page of the right hand side remains itself since $L_\mathcal{W}$ always reduces the number of $\epsilon$'s. Thus, we see that ${\mathscr U}^{{\sf Sh},\mathcal{W}}_0$ induces an isomorphism at the $E^1$-page, which proves that it is a quasi-isomorphism. \end{proof} \begin{Lemma}~\label{lem:Z-grading} The $\mathbb{Z}/5\mathbb{Z}$-invariant part $HC^-(A^\mathcal{W})^{\mathbb{Z}/5\mathbb{Z}}$ is a $\mathbb{Z}$-graded VSHS. \end{Lemma} \begin{proof} For this, we use the previous comparison result.
Using the homogeneity of $\mathcal{W}$ and calculating as in Lemma~\ref{lem:omega}, we have \[ \nabla_{u\frac{d}{du}} (\alpha)= -\frac{\deg(\alpha)}{2} \alpha,\] with the degrees given by \begin{equation}~\label{eq:degree} \deg(x_j) = -2/5,\; \deg(dx_j) = 3/5,\; \deg (u) = -2, \; \deg(\psi) =0. \end{equation} Now, consider the diagonal action of $\mathbb{Z}/5\mathbb{Z}$ on $\mathcal{W}$. Since $\mathcal{W}$ has an isolated singularity at $x=0$, the cohomology group $H^*\big( \Omega_{B\times \mathbb{C}^5/B}^*[[u]], d\mathcal{W}+ud_{DR}\big)$ is concentrated in differential form degree $5$. Consider a cohomology class $\alpha$ of the form \[ \alpha_0 dx_1\cdots dx_5+ u \alpha_1 dx_1\cdots dx_5 + \cdots,\] with the $\alpha_i$ in $\mathbb{C}[x_1,\ldots,x_5]\otimes_\mathbb{C} S$. The action of $\mathbb{Z}/5\mathbb{Z}$ fixes $dx_1\cdots dx_5$ and $u$. This implies that if $\alpha$ is a $\mathbb{Z}/5\mathbb{Z}$-invariant class, then the polynomials $\alpha_i$ are also invariant. In other words, the polynomial degrees (in the variables $x$) of the $\alpha_i$ are all divisible by $5$. This further implies that for a homogeneous $\alpha$ we have \[ \deg (\alpha) = \deg (\alpha_0) + \deg (dx_1\cdots dx_5) = -\frac{2}{5}(\mbox{ polynomial degree of $\alpha_0$})+3 \in 2\mathbb{Z}+1.\] This shows that $HC^-(A^\mathcal{W})^{\mathbb{Z}/5\mathbb{Z}}$ is $\mathbb{Z}$-graded; in fact, it is concentrated in odd integer degrees. \end{proof} We may split the group $(\mathbb{Z}/5\mathbb{Z})^4\cong (\mathbb{Z}/5\mathbb{Z})\oplus (\mathbb{Z}/5\mathbb{Z})^3$ with $\mathbb{Z}/5\mathbb{Z}=\langle (1,1,1,1,1)\rangle$. By Proposition~\ref{prop:equiv}, we have \[HC_{\sf odd}^-\big( A^\mathcal{W}\rtimes (\mathbb{Z}/5\mathbb{Z})^4 \big)\cong \big( HC^-( A^\mathcal{W})\big)^{(\mathbb{Z}/5\mathbb{Z})^4} \cong \Big( \big( HC^-( A^\mathcal{W})\big)^{(\mathbb{Z}/5\mathbb{Z})} \Big)^{(\mathbb{Z}/5\mathbb{Z})^3}.\] Since the VSHS $ \big( HC^-( A^\mathcal{W})\big)^{(\mathbb{Z}/5\mathbb{Z})} $ is already $\mathbb{Z}$-graded, so is $HC_{\sf odd}^-\big( A^\mathcal{W}\rtimes (\mathbb{Z}/5\mathbb{Z})^4 \big)$. Denote by $HC^-\big(A^\mathcal{W}\rtimes (\mathbb{Z}/5\mathbb{Z})^4 \big)_{[1]}$ the equivalent description of this VSHS as in Lemma~\ref{lem:equivalence}. \begin{Theorem}~\label{thm:b-model-comparison} With the notations as above, there exists an isomorphism \[ \Phi: HC^-\big(A^\mathcal{W}\rtimes (\mathbb{Z}/5\mathbb{Z})^4 \big)_{[1]} \cong R^3\widetilde{\pi}_*\mathbb{C}\otimes_\mathbb{C} S,\] as VSHS's over $B=\mathbb{C}\backslash\{1,e^{2\pi i/5}, e^{4\pi i/5}, e^{6\pi i/5}, e^{8\pi i/5}\}$. \end{Theorem} \begin{proof} The idea of the proof is the following. The main work is to prove that there exists an isomorphism of VSHS's: \[ HC^-\big( A^{\mathcal{W}}\rtimes (\mathbb{Z}/5\mathbb{Z})\big)_{[1]} \cong R^3{\pi}_*\mathbb{C}\otimes_\mathbb{C} S.\] Then we further take the $(\mathbb{Z}/5\mathbb{Z})^3$-invariants on both sides and use the isomorphisms in~\ref{eq:iso1} and~\ref{eq:iso2} to deduce the result.
First, observe that there exists an isomorphism of VSHS's $$HC_{\sf odd}^-\big(A^{\mathcal{W}}\rtimes (\mathbb{Z}/5\mathbb{Z})\big)\cong HC^-( A^\mathcal{W})^{\mathbb{Z}/5\mathbb{Z}},$$ proved in the same way as Proposition~\ref{prop:equiv}. Taking the $\mathbb{Z}/5\mathbb{Z}$-invariants of $\Psi$ in Lemma~\ref{lem:comparison}, we obtain an isomorphism of $\mathbb{Z}$-graded VSHS's (still denoted by $\Psi$): \[ \Psi: \big( HC^-(A^\mathcal{W})\big)^{\mathbb{Z}/5\mathbb{Z}} \cong H^*\big( \Omega_{B\times \mathbb{C}^5/B}^*[[u]], d\mathcal{W}+ud_{DR}\big)^{\mathbb{Z}/5\mathbb{Z}}.\] To proceed from here, we construct a map \[\Theta: H^*\big( \Omega_{B\times \mathbb{C}^5/B}^*[[u]], d\mathcal{W}+ud_{DR}\big)^{\mathbb{Z}/5\mathbb{Z}}_{[1]}\rightarrow R^3\pi_*\mathbb{C}\otimes_\mathbb{C} S\] via Griffiths' residue construction. Indeed, denote by ${{\sf Eu}}:= \sum_{j=1}^5 x_j \frac{\partial}{\partial x_j}$ the Euler vector field. For a homogeneous element of the form $\alpha=[\alpha_0 dx_1\cdots dx_5+ u \alpha_1 dx_1\cdots dx_5 + \cdots]$, we define \begin{align*} \Theta(\alpha) &:= (-1)^{l_0}\cdot {{\sf Res}} \Big( \sum_i (-1)^i (l_i-1)! \frac{\iota_{{\sf Eu}}( \alpha_i dx_1\cdots dx_5)}{\mathcal{W}^{l_i}}\Big),\\ l_i &:= \frac{5-\deg(\alpha_idx_1\cdots dx_5)}{2}. \end{align*} Note that by the previous discussion $l_i\in \mathbb{Z}$. The residue operator ${\sf Res}$ is defined in~\cite{Gri}. We need to verify that $\Theta$ vanishes on $(d\mathcal{W}+ud_{DR})$-exact terms. It suffices to show that for a homogeneous $4$-form $\beta\in \Omega^4_{B\times \mathbb{C}^5/B}$, we have $\Theta(d\mathcal{W}\wedge \beta + ud_{DR}\beta)=0$. Indeed, setting $l= \frac{5-\deg (d\mathcal{W}\wedge\beta)}{2}=\frac{6-\deg(\beta)}{2}$, we compute \begin{align*} &\Theta(d\mathcal{W}\wedge \beta + ud_{DR}\beta)\\ = & {{\sf Res}} \Big((-1)^l (l-1)!\frac{\iota_{{\sf Eu}} (d\mathcal{W}\wedge \beta)}{\mathcal{W}^l}- (-1)^l(l-2)!\frac{\iota_{{\sf Eu}}(d_{DR}\beta)}{\mathcal{W}^{l-1}}\Big)\\ =&(-1)^l {{\sf Res}} \Big( (l-1)!\frac{L_{{\sf Eu}} \mathcal{W}\wedge \beta -d\mathcal{W}\wedge \iota_{{\sf Eu}}\beta }{\mathcal{W}^l} \Big) - (-1)^l{{\sf Res}} \Big( (l-2)! \frac{L_{{\sf Eu}}\beta-d_{DR}\iota_{{\sf Eu}}\beta}{\mathcal{W}^{l-1}}\Big) \end{align*} We use $L_{\sf Eu}\mathcal{W}=5\mathcal{W}$, and furthermore, for the $4$-form $\beta$, its degree and its polynomial degree are related by $\deg(\beta) = - \frac{2}{5} ( \mbox{ polynomial degree of $\beta$} ) + \frac{12}{5}$. Thus we obtain $L_{\sf Eu} \beta= \frac{20-5\deg(\beta)}{2}\,\beta=5\cdot \frac{4-\deg(\beta)}{2}\,\beta=5(l-1)\,\beta$. Substituting these two Lie derivatives into the above formula yields: \begin{align*} &\Theta(d\mathcal{W}\wedge \beta + ud_{DR}\beta)\\ = & (-1)^l {{\sf Res}} \Big( (l-2)!\frac{d_{DR}\iota_{{\sf Eu}}\beta}{\mathcal{W}^{l-1}}-(l-1)!\frac{d\mathcal{W}\wedge \iota_{{\sf Eu}}\beta }{\mathcal{W}^l}\Big)\\ = & (-1)^l {{\sf Res}} \Big( d_{DR} [ (l-2)!\frac{\iota_{{\sf Eu}}\beta}{\mathcal{W}^{l-1}}] \Big)\\ = & 0 \end{align*} The last equality holds because the residue map is defined in terms of integrals over cycles, which vanish on exact forms. In~\cite{Gri}, Griffiths had already shown that $\Theta$ is an isomorphism, and that it respects the Hodge filtration.
In the following, we verify that $\Theta$ also intertwines the connection operators, and that it matches the pairings up to a constant factor. We first check the connection operator. Indeed, for $\alpha=[\alpha_0 dx_1\cdots dx_5+ u \alpha_1 dx_1\cdots dx_5 + \cdots]$, we have \begin{align*} & \Theta\nabla_{\frac{d}{d\psi}}(\alpha)\\ =& \Theta ( [\frac{d}{d\psi}\alpha]- [x_1x_2x_3x_4x_5\cdot \alpha] )\\ =& (-1)^{l_0}{{\sf Res}} \Big( \sum_i (-1)^{i} (l_i-1)! \frac{\iota_{{\sf Eu}}( \frac{d}{d\psi}\alpha_i dx_1\cdots dx_5)}{\mathcal{W}^{l_i}}\Big) \\ &+(-1)^{l_0}{{\sf Res}} \Big( \sum_i (-1)^i l_i! \frac{\iota_{{\sf Eu}}( x_1x_2x_3x_4x_5\alpha_i dx_1\cdots dx_5)}{\mathcal{W}^{l_i+1}}\Big)\\ = & (-1)^{l_0} {{\sf Res}} \Big( \sum_i (-1)^{i} (l_i-1)! \frac{d}{d\psi} \big(\frac{\iota_{{\sf Eu}}( \alpha_i dx_1\cdots dx_5)}{\mathcal{W}^{l_i}}\big)\Big)\\ =& \nabla_{\frac{d}{d\psi}}\Theta(\alpha) \end{align*} Next, we check that $\Theta$ matches the pairing $(-,-)$ on $H^*\big( \Omega_{B\times \mathbb{C}^5/B}^*[[u]], d\mathcal{W}+ud_{DR}\big)^{\mathbb{Z}/5\mathbb{Z}}$ with the intersection pairing on $R^3\pi_*\mathbb{C}\otimes_\mathbb{C} S$, up to some universal constant. This follows from Carlson-Griffiths' calculation of the intersection pairing through the residue map~\cite[Theorem 3]{CarGri}. To state their result, let $fdx_1\ldots dx_5,gdx_1\ldots dx_5\in \big(\Omega^5_{B\times \mathbb{C}^5/B}\big)^{\mathbb{Z}/5\mathbb{Z}}$ be two homogeneous $5$-forms, and set \[a:=-\frac{\deg(f)}{2}, \;\;\; b:=-\frac{\deg(g)}{2},\] with the degrees of $f$ and $g$ computed as in Equation~\ref{eq:degree}. Then Carlson-Griffiths' formula says that \[{\sf Res}( \frac{\iota_{\sf Eu} (fdx_1\ldots dx_5)}{\mathcal{W}^{a+1}}) \cap {\sf Res}( \frac{\iota_{\sf Eu} (gdx_1\ldots dx_5)}{\mathcal{W}^{b+1}})= c_{a,b}\cdot {\sf Res}_0 \begin{bmatrix} fgdx_1\ldots dx_5\\ \partial_1\mathcal{W},\ldots,\partial_5\mathcal{W} \end{bmatrix} \] where the constant $c_{a,b}:= (-1)^{a(a+1)/2+b(b+1)/2+3+b} \frac{5}{a!b!}$.
From this formula, and from the fact that the pairing is non-zero only when $a+b=3$, we have \begin{align*} &\Theta(fdx_1\ldots dx_5) \cap \Theta(gdx_1\ldots dx_5) \\ =& (-1)^{a+b}a!b!{\sf Res}( \frac{\iota_{\sf Eu} (fdx_1\ldots dx_5)}{\mathcal{W}^{a+1}}) \cap {\sf Res}( \frac{\iota_{\sf Eu} (gdx_1\ldots dx_5)}{\mathcal{W}^{b+1}})\\ =& (-1)^{a+b}a!b!c_{a,b}\cdot {\sf Res}_0 \begin{bmatrix} fgdx_1\ldots dx_5\\ \partial_1\mathcal{W},\ldots,\partial_5\mathcal{W} \end{bmatrix} \\ =& (-1)^{\frac{(a+b)(a+b+1)}{2}+(a+b)b}5\cdot {\sf Res}_0 \begin{bmatrix} fgdx_1\ldots dx_5\\ \partial_1\mathcal{W},\ldots,\partial_5\mathcal{W} \end{bmatrix} \\ =& (-1)^b 5\cdot {\sf Res}_0 \begin{bmatrix} fgdx_1\ldots dx_5\\ \partial_1\mathcal{W},\ldots,\partial_5\mathcal{W} \end{bmatrix} \end{align*} On the other hand, by Lemma~\ref{lem:equivalence} and $\deg(fdx_1\ldots dx_5)=3-2a$, we have \begin{align*} &( fdx_1\ldots dx_5, gdx_1\ldots dx_5 ) \\ =& (\sqrt{-1})^{3-2a}\cdot {\sf Res}_0 \begin{bmatrix} fgdx_1\ldots dx_5\\ \partial_1\mathcal{W},\ldots,\partial_5\mathcal{W} \end{bmatrix} \end{align*} Comparing the two identities yields that \[ \Theta(fdx_1\ldots dx_5) \cap \Theta(gdx_1\ldots dx_5)= \sqrt{-1}\cdot 5\cdot (fdx_1\ldots dx_5, gdx_1\ldots dx_5 ).\] In conclusion, by composing $\Psi$ (after taking $\mathbb{Z}/5\mathbb{Z}$-invariants) with $\Theta$ we obtain an isomorphism \[ \Theta\Psi: \big( HC^-(A^\mathcal{W})\big)^{\mathbb{Z}/5\mathbb{Z}}_{[1]} \cong R^3\pi_*\mathbb{C}\otimes_\mathbb{C} S\] which intertwines the connection operators, and preserves the pairing, up to a constant factor. From this, we deduce that the two VSHS's are isomorphic. Further taking $(\mathbb{Z}/5\mathbb{Z})^3$-invariants on both sides, we obtain an isomorphism of VSHS's: \[ \big( HC^-(A^\mathcal{W})\big)_{[1]}^{(\mathbb{Z}/5\mathbb{Z})^4} \cong \big( R^3\pi_*\mathbb{C}\otimes_\mathbb{C} S\big)^{(\mathbb{Z}/5\mathbb{Z})^3}. \] Using the isomorphisms~\ref{eq:iso1} and~\ref{eq:iso2}, the result follows. \end{proof} \begin{remark} Ganatra-Perutz-Sheridan~\cite[Conjecture 1.14]{GPS} conjectured that for a smooth proper algebraic variety over a formal punctured disk, the VSHS associated with the derived category of coherent sheaves is isomorphic to Griffiths' VSHS. Theorem~\ref{thm:b-model-comparison} confirms this conjecture in the case of the mirror quintic family. To state this more precisely, consider the inclusion of rings \[ i: S \hookrightarrow \mathbb{C}((q))=\mathbb{C}[[q]][q^{-1}], \;\; q:=\psi^{-1}.\] Extension of scalars from $S$ to $\mathbb{C}((q))$ yields an isomorphism of VSHS's: \[ \Phi: HC^-\Big(\big(A^\mathcal{W}\otimes_S \mathbb{C}((q))\big)\rtimes (\mathbb{Z}/5\mathbb{Z})^4 \Big)_{[1]} \cong R^3\widetilde{\pi}_* \mathbb{C}((q)).\] Here we slightly abuse notation and still write $\widetilde{\pi}: \widetilde{\mathfrak{X}}\rightarrow {{\sf Spec\,}} \mathbb{C}((q))$ for the pull-back of $ \widetilde{\mathfrak{X}}\rightarrow B={{\sf Spec\,}} S$ along the extension $i$. The $A_\infty$-algebra $A^\mathcal{W}\otimes_S \mathbb{C}((q))$ may be seen as a minimal model of ${\sf End} ( \mathbb{C}((q))^{{\sf stab}})$, with $\mathbb{C}((q))^{{\sf stab}}$ a compact generator in the category of matrix factorizations ${\sf MF}(\mathcal{W})$ of $\mathcal{W}$ defined over the field $\mathbb{C}((q))$.
Then, by Orlov's Calabi-Yau/Landau-Ginzburg correspondence~\cite{Orl}, there exists an equivalence \[D^b({\sf coh}([\mathfrak{X}/(\mathbb{Z}/5\mathbb{Z})^3]))\cong {\sf MF}_{(\mathbb{Z}/5\mathbb{Z})^4}(\mathcal{W}),\] both of which are $\mathbb{C}((q))$-linear categories. Thus the categorical VSHS's associated with these two categories are isomorphic. By Theorem~\ref{thm:b-model-comparison}, the right hand side VSHS indeed matches the geometric VSHS, hence we deduce that the VSHS associated with the category $D^b({\sf coh}([\mathfrak{X}/(\mathbb{Z}/5\mathbb{Z})^3]))$ is also isomorphic to $R^3\widetilde{\pi}_* \mathbb{C}((q))$, as conjectured by Ganatra-Perutz-Sheridan. \end{remark} \paragraph{{\bf Calculation of the canonical primitive form.}} In this subsection, we calculate the canonical primitive form associated with the category ${\sf MF}_{(\mathbb{Z}/5\mathbb{Z})^4}(W)$, restricted to the $\psi$-direction only. More precisely, we shall again work in the formal setting, i.e. over the ring $\widehat{S}:=\mathbb{C}[[\psi]]$. In Theorem~\ref{thm:construction-vshs}, we obtained a primitive VSHS ${\mathscr V}^{{\sf MF}_{(\mathbb{Z}/5\mathbb{Z})^4}(W)}$, over a formal base $M={{\sf Spec}}\mathbb{C}[[HH^{[0]}\big({\sf MF}_{(\mathbb{Z}/5\mathbb{Z})^4}(W)\big)^\vee]]$. By Theorem~\ref{thm:canonical} we also obtain a canonical primitive form \[ \zeta^{{\sf MF}_{(\mathbb{Z}/5\mathbb{Z})^4}(W)} \in {\mathscr V}^{{\sf MF}_{(\mathbb{Z}/5\mathbb{Z})^4}(W)}.\] The goal of this section is to compute $\zeta^{{\sf MF}_{(\mathbb{Z}/5\mathbb{Z})^4}(W)}$, restricted to the marginal direction $\psi$, given by the dual coordinate of the Hochschild cohomology class $[x_1\cdots x_5]\in HH^{[0]}\big( {\sf MF}_{(\mathbb{Z}/5\mathbb{Z})^4}(W)\big)$. The maximal diagonal symmetry group of the Fermat quintic is $G_W^{{\sf max}}=(\mathbb{Z}/5\mathbb{Z})^5$. We describe the canonical $G_W^{{\sf max}}$-equivariant splitting using the comparison isomorphism (Lemma~\ref{lem:comparison} and Proposition~\ref{prop:equiv}): \[ HC^-_{\sf odd}\big({\sf MF}_{(\mathbb{Z}/5\mathbb{Z})^4}(W)\big) \cong H^*\big( \Omega_{\mathbb{C}^5}^*[[u]],dW+ud_{DR}\big)^{(\mathbb{Z}/5\mathbb{Z})^4}.\] In this model, the canonical splitting is simply given by the following basis vectors \[ s_i:= (x_1x_2x_3x_4x_5)^i dx_1dx_2dx_3dx_4dx_5, \;\; i=0,1,2,3.\] The flat extensions of $s_i$ along the $\psi$-direction can be computed by means of the following lemma. \begin{Lemma}~\label{lem:trivial} Let $\widehat{B}:={{\sf Spec \,}} \mathbb{C}[[\psi]]$. Then there exists an isomorphism of differential modules \[ e^{(x_1x_2x_3x_4x_5)\psi/u}: H^*\big( \Omega_{\mathbb{C}^5}^*((u)),dW+ud_{DR}\big)\otimes_\mathbb{C} \widehat{S} \rightarrow H^*\big( \Omega_{\mathbb{C}^5\times \widehat{B}/\widehat{B}}^*((u)),d\mathcal{W}+ud_{DR}\big).\] Here the left hand side is endowed with the trivial connection $\frac{d}{d\psi}$, while the right hand side is endowed with the Gauss-Manin connection $\nabla^{{\sf GM}}_{\frac{d}{d\psi}}$.
\end{Lemma} Following a method of Li-Li-Saito~\cite{LLS}, we can compute the primitive form $\zeta$ associated with the splitting $s$ through its defining property that \[ e^{\frac{(x_1x_2x_3x_4x_5)\psi}{u}}s_0= \zeta +\bigoplus_{k\geq 1} u^{-k} \widehat{S}\cdot e^{\frac{(x_1x_2x_3x_4x_5)\psi}{u}}\big({{\sf Im}}\, s\big).\] Observe that we have \begin{align*} &(x_1x_2x_3x_4x_5)^n dx_1dx_2dx_3dx_4dx_5 \\ & = x_1^{n-4}(x_2x_3x_4x_5)^n dW\wedge dx_2\cdots dx_5\\ &= -u\cdot (n-4)\cdot x_1^{n-5}(x_2x_3x_4x_5)^n dx_1dx_2dx_3dx_4dx_5\\ &= u\cdot (n-4) \cdot x_1^{n-5} x_2^{n-4} (x_3x_4x_5)^n dW\wedge dx_1dx_3\cdots dx_5\\ &= u^2\cdot (n-4)^2\cdot (x_1x_2)^{n-5}(x_3x_4x_5)^n dx_1dx_2dx_3dx_4dx_5\\ &= \cdots\\ &= -u^5(n-4)^5(x_1x_2x_3x_4x_5)^{n-5} dx_1dx_2dx_3dx_4dx_5. \end{align*} We take the differential $5$-form $$dx_1dx_2dx_3dx_4dx_5\in H^*\big( \Omega_{\widehat{B}\times \mathbb{C}^5/\widehat{B}}^*[[u]],d\mathcal{W}+ud_{DR}\big)^{(\mathbb{Z}/5\mathbb{Z})^4},$$ pull it back through the isomorphism $ e^{\frac{(x_1x_2x_3x_4x_5)\psi}{u}}$ in Lemma~\ref{lem:trivial} and compute with the above formula to obtain \begin{equation}~\label{eq:decomposition} e^{-\frac{(x_1x_2x_3x_4x_5)\psi}{u}} dx_1dx_2dx_3dx_4dx_5= \omega_0(\psi)\cdot s_0-u^{-1}\omega_1(\psi)\cdot s_1+u^{-2}\omega_2(\psi)\cdot s_2 -u^{-3}\omega_3(\psi)\cdot s_3 \end{equation} with the power series $\omega_i(\psi)$ defined by \[ \omega_i(\psi):= \sum_{k\geq 0} \frac{[(5k-4+i)(5k-9+i)\cdots (1+i)]^5}{(5k+i)!} \psi^{5k+i}.\] Applying the flat extension operator $e^{\frac{(x_1x_2x_3x_4x_5)\psi}{u}}$ to Equation~\ref{eq:decomposition} above yields \[ dx_1dx_2dx_3dx_4dx_5= \omega_0(\psi)\cdot s^{\sf flat}_0-u^{-1}\omega_1(\psi)\cdot s^{\sf flat}_1+u^{-2}\omega_2(\psi)\cdot s^{\sf flat}_2 -u^{-3}\omega_3(\psi)\cdot s^{\sf flat}_3,\] which shows that we have the following decomposition of $s_0^{\sf flat}$: \[ s_0^{\sf flat}= \frac{1}{\omega_0}dx_1dx_2dx_3dx_4dx_5+ u^{-1}\frac{\omega_1}{\omega_0}\cdot s^{\sf flat}_1-u^{-2}\frac{\omega_2}{\omega_0}\cdot s^{\sf flat}_2 +u^{-3}\frac{\omega_3}{\omega_0}\cdot s^{\sf flat}_3.\] The primitive form $\zeta$ associated with the splitting, by the correspondence in Theorem~\ref{thm:bijection2}, is simply the positive part of $s^{\sf flat}_0$, and thus we deduce that \[ \zeta= \frac{1}{\omega_0}\cdot dx_1dx_2dx_3dx_4dx_5.\] This calculation proves the following LG/LG mirror symmetry result for the $B$-model orbifold $(W,(\mathbb{Z}/5\mathbb{Z})^4)$ and its mirror dual $A$-model $(W,\mathbb{Z}/5\mathbb{Z})$. \begin{Theorem}~\label{thm:mirror} Let ${\mathscr F}_0$ denote the $g=0$ prepotential function associated with the category ${\sf MF}_{(\mathbb{Z}/5\mathbb{Z})^4}(W)$ endowed with its canonical splitting. Then, the restriction of ${\mathscr F}_0$ to the marginal deformation parameter in the flat coordinate $\tau:=\frac{\omega_1}{\omega_0}$ (or the mirror map) is equal to the genus zero FJRW prepotential function of $(W, \mathbb{Z}/5\mathbb{Z})$ restricted to the marginal direction. \end{Theorem} \begin{proof} Equation~\ref{eq:decomposition} matches Chiodo-Ruan's calculation~\cite[Theorem 4.1.6]{ChiRua} of the $I$-function associated with the genus zero FJRW theory of $(W, \mathbb{Z}/5\mathbb{Z})$. The procedure to obtain the prepotential function from the $I$-function is formal; see for example~\cite[Section 3]{ChiRua}. \end{proof} \end{document}
\begin{document} \title{On the codimension growth of almost nilpotent Lie algebras} \author[D. Repov\v s and M. Zaicev] {Du\v san Repov\v s and Mikhail Zaicev} \address{Du\v san Repov\v s \\Faculty of Mathematics and Physics, and Faculty of Education, University of Ljubljana, P.~O.~B. 2964, Ljubljana, 1001 Slovenia} \email{[email protected]} \address{Mikhail Zaicev \\Department of Algebra\\ Faculty of Mathematics and Mechanics\\ Moscow State University \\ Moscow, 119992 Russia} \email{[email protected]} \thanks{The first author was partially supported by the Slovenian Research Agency grants P1-0292-0101 and J1-2057-0101. The second author was partially supported by RFBR grant No 09-01-00303a} \keywords{Polynomial identity, Lie algebra, codimensions, exponential growth} \subjclass[2010]{Primary 17C05, 16P90; Secondary 16R10} \begin{abstract} We study codimension growth of infinite dimensional Lie algebras over a field of characteristic zero. We prove that if a Lie algebra $L$ is an extension of a nilpotent algebra by a finite dimensional semisimple algebra then the PI-exponent of $L$ exists and is a positive integer. \end{abstract} \date{\today} \maketitle \section{Introduction}\label{intr} We consider algebras over a field $F$ of characteristic zero. Given an algebra $A$, we can associate to it the sequence of its codimensions $\{c_n(A)\}, n=1,2,\ldots~$. If $A$ is an associative algebra with a non-trivial polynomial identity then $c_n(A)$ is exponentially bounded \cite{Reg72}, while $c_n(A)=n!$ if $A$ is not PI. For a Lie algebra $L$ the sequence $\{c_n(L)\}$ is in general not exponentially bounded (see, for example, \cite{P}). Nevertheless, the class of Lie algebras with exponentially bounded codimensions is quite wide. It includes, in particular, all finite dimensional algebras \cite{B-Dr,GZ-TAMS2010}, Kac-Moody algebras \cite{Z1,Z2}, infinite dimensional simple Lie algebras of Cartan type \cite{M1}, the Virasoro algebra and many others. When $\{c_n(A)\}$ is exponentially bounded, the upper and the lower limits of the sequence $\sqrt[n]{c_n(A)}$ exist, and the natural question arises: does the ordinary limit $\lim_{n\to\infty} \sqrt[n]{c_n(A)}$ exist? In the 1980s Amitsur conjectured that for any associative PI algebra such a limit exists and is a non-negative integer. This conjecture was confirmed in \cite{GZ1,GZ2}. For Lie algebras a series of positive results was obtained for finite dimensional algebras \cite{GRZ1,GRZ2,Z3}, for algebras with nilpotent commutator subalgebras \cite{PM} and for some other classes (see \cite{M2}). On the other hand, it was shown in \cite{ZM} that there exists a Lie algebra $L$ with $$ 3.1 < \liminf_{n\to\infty} \sqrt[n]{c_n(L)} \le \limsup_{n\to\infty} \sqrt[n]{c_n(L)} < 3.9. $$ This algebra $L$ is soluble and almost nilpotent, i.e. it contains a nilpotent ideal of finite codimension. Almost nilpotent Lie algebras are close in some sense to finite dimensional algebras. For instance, they have the Levi decomposition under some natural restrictions (see \cite[Theorem 6.4.8]{Baht}), satisfy a Capelli identity, have exponentially bounded codimension growth, etc. Almost nilpotent Lie algebras play an important role in the theory of codimension growth since all minimal soluble varieties of a finite basic rank with almost polynomial growth are generated by almost nilpotent Lie algebras. Two of them have exponential growth with ratio $2$ and one is of exponential growth with ratio $3$. In the present paper we prove the following results.
{\bf Theorem 1.} {\em Let $L$ be an almost nilpotent Lie algebra over a field $F$ of characteristic zero. If $N$ is the maximal nilpotent ideal of $L$ and $L/N$ is semisimple then the PI-exponent of $L$, $$ \mbox{exp}(L)=\lim_{n\to \infty}\sqrt[n]{c_n(L)}, $$ exists and is a positive integer.} Recall that a Lie algebra $L$ is said to be special (or SPI) if it is a Lie subalgebra of some associative PI-algebra. {\bf Theorem 2.} {\em Let $L$ be an almost nilpotent soluble special Lie algebra over a field $F$ of characteristic zero. Then the PI-exponent of $L$ exists and is a positive integer.} Note that the speciality condition in Theorem \ref{t2} is necessary since the counterexample constructed in \cite{ZM} is a finitely generated almost nilpotent soluble Lie algebra satisfying a Capelli identity of low rank. Nevertheless, its PI-exponent $\mbox{exp}(L)$ exists (see \cite{VMZ}) but is not an integer since $\mbox{exp}(L)\approx 3.6$. Note also that Theorem \ref{t1} generalizes the main result of \cite{GRZ2} and gives an alternative and easier proof of the integrality of the PI-exponent in the finite dimensional case considered in \cite{GRZ2}. \section{Preliminaries} Let $L$ be a Lie algebra over $F$. We shall omit Lie brackets in the product of elements of $L$ and write $ab$ instead of $[a,b]$. We shall also denote the right-normed product $a(b(c\cdots d)\ldots)$ as $abc\cdots d$. One can find all basic notions of the theory of identities of Lie algebras in \cite{Baht}. Let $\bar F$ be an extension of $F$ and $\bar L=L\otimes_F\bar F$. It is not difficult to check that $c_n(\bar L)$ over $\bar F$ coincides with $c_n(L)$ over $F$. Hence it is sufficient to prove our results only for algebras over an algebraically closed field. Let now $X$ be a countable set of indeterminates and let $Lie(X)$ be the free Lie algebra generated by $X$. A Lie polynomial $f=f(x_1,\ldots, x_n)\in Lie(X)$ is an identity of a Lie algebra $L$ if $f(a_1,\ldots, a_n)=0$ for any $a_1,\ldots, a_n\in L$. It is known that the set of all identities of $L$ forms a T-ideal $Id(L)$ of $Lie(X)$, i.e. an ideal stable under all endomorphisms of $Lie(X)$. Denote by $P_n=P_n(x_1,\ldots,x_n)$ the subspace of all multilinear polynomials in $x_1,\ldots,x_n$ in $Lie(X)$. Then the intersection $P_n\cap Id(L)$ is the set of all identities of $L$ that are multilinear in $x_1,\ldots,x_n$. Since $char~F=0$, the union $(P_1\cap Id(L))\cup (P_2\cap Id(L))\cup\ldots$ completely defines all identities of $L$. An important numerical invariant of the set of all identities of $L$ is the sequence of codimensions $$ c_n(L)=\dim P_n(L) \quad {\rm where}\quad P_n(L)=\frac{P_n}{P_n\cap Id(L)}, \quad n=1,2,\ldots~. $$ One of the main tools for studying the asymptotics of $\{c_n(L)\}$ is the theory of representations of the symmetric group $S_n$ (see \cite{JK} for details). Given a multilinear polynomial $f=f(x_1,\ldots, x_n)\in P_n$, one can define \begin{equation}\label{e0} \sigma f(x_1,\ldots, x_n)=f(x_{\sigma(1)},\ldots, x_{\sigma(n)}). \end{equation} Clearly, (\ref{e0}) induces an $S_n$-action on $P_n$. Hence $P_n$ is an $FS_n$-module and $P_n\cap Id(L)$ is its submodule.
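For instance, for $n=2$ the space $P_2$ is spanned by the single commutator $[x_1,x_2]$ (since $[x_2,x_1]=-[x_1,x_2]$), the transposition $(12)$ acts on it by $-1$, and hence $P_2$ affords the sign character $\chi_{(1,1)}$; in particular, $c_2(L)\le 1$ for every Lie algebra $L$.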
Then $P_n(L)=\frac{P_n}{P_n\cap Id(L)}$ is also an $FS_n$-module. Since $F$ is of characteristic zero, $P_n(L)$ is completely reducible, \begin{equation}\label{e1} P_n(L)=M_1\oplus\cdots\oplus M_t \end{equation} where $M_1,\ldots,M_t$ are irreducible $FS_n$-modules and the number $t$ of summands on the right hand side of (\ref{e1}) is called the $n$th colength of $L$, $$ l_n(L)=t. $$ Recall that any irreducible $FS_n$-module is isomorphic to some minimal left ideal of the group algebra $FS_n$, which can be constructed as follows. Let $\lambda\vdash n$ be a partition of $n$, i.e. $\lambda=(\lambda_1,\ldots,\lambda_k)$ where $\lambda_1\ge\ldots\ge\lambda_k$ are positive integers and $\lambda_1+\cdots+\lambda_k=n$. The Young diagram $D_\lambda$ corresponding to $\lambda$ is the tableau $$ D_\lambda=\; \begin{array}{|c|c|c|c|c|c|c|} \hline & & \cdots & & & \cdots & \\ \hline & & \cdots & \\ \cline{1-4}\vdots \\ \cline{1-1} \\ \cline{1-1} \end{array}, $$ containing $\lambda_1$ boxes in the first row, $\lambda_2$ boxes in the second row, and so on. A Young tableau $T_\lambda$ is the Young diagram $D_\lambda$ with the integers $1,2,\ldots,n$ in the boxes. Given a Young tableau $T_\lambda$, denote by $R_{T_\lambda}$ the row stabilizer of $T_\lambda$, i.e. the subgroup of all permutations $\sigma\in S_n$ permuting symbols only inside their rows. Similarly, $C_{T_\lambda}$ is the column stabilizer of $T_\lambda$. Denote $$ R(T_\lambda)=\sum_{\sigma\in R_{T_\lambda}} \sigma~,\quad C(T_\lambda)=\sum_{\tau\in C_{T_\lambda}} ({\rm sgn}~\tau)\tau~,\quad e_{T_\lambda}=R(T_\lambda)C(T_\lambda). $$ Then $e_{T_\lambda}$ is an essential idempotent of the group algebra $FS_n$, that is, $e_{T_\lambda}^2=\alpha e_{T_\lambda}$ where $\alpha\in F$ is a non-zero scalar. It is known that $FS_n e_{T_\lambda}$ is an irreducible left $FS_n$-module. We denote its character by $\chi_\lambda$. Moreover, if $M$ is an $FS_n$-module with the character \begin{equation}\label{e2} \chi(M)=\sum_{\mu\vdash n} m_\mu \chi_\mu \end{equation} then $m_\lambda\ne 0$ in (\ref{e2}) for given $\lambda\vdash n$ if and only if $e_{T_\lambda}M\ne 0$. If $M=P_n(L)$ for a Lie algebra $L$ then the $n$th cocharacter of $L$ is \begin{equation}\label{e3} \chi_n(L)=\chi(P_n(L))=\sum_{\lambda\vdash n} m_\lambda \chi_\lambda \end{equation} and then \begin{equation}\label{e4} l_n(L)=\sum_{\lambda\vdash n} m_\lambda,\quad c_n(L)=\sum_{\lambda\vdash n} m_\lambda d_\lambda \end{equation} where $m_\lambda$ are as in (\ref{e3}) and $$ d_\lambda=\deg \chi_\lambda=\dim FS_n e_{T_\lambda}. $$ Recall that a Lie algebra $L$ satisfies the Capelli identity of rank $t$ if every multilinear polynomial $f(x_1,\ldots,x_n), n\ge t,$ alternating on some $x_{i_1},\ldots,x_{i_t}$, $\{i_1,\ldots,i_t\}\subseteq\{1,\ldots,n\}$, is an identity of $L$. It is known (see, for example, \cite[Theorem 4.6.1]{GZbook}) that $L$ satisfies the Capelli identity of rank $t+1$ if and only if all $m_\lambda$ in (\ref{e4}) are zero as soon as $D_\lambda$ has more than $t$ rows, i.e. $\lambda_{t+1}\ne 0$. A useful reduction in the proof of the existence of the PI-exponent is given by the following remark. \begin{lemma}\label{l1} Let $L$ be an almost nilpotent Lie algebra with the maximal nilpotent ideal $N$. Let $\dim L/N=p$ and let $N^q=0$. Then \begin{itemize} \item[(1)] $L$ satisfies the Capelli identity of rank $p+q$; and \item[(2)] the colength $l_n(L)$ is a polynomially bounded function of $n$.
\end{itemize} \end{lemma} {\em Proof.} Choose an arbitrary basis $e_1,\ldots, e_p$ of $L$ modulo $N$ and an arbitrary basis $\{b_\alpha\}$ of $N$. If $f=f(x_1,\ldots,x_n)$ is a multilinear polynomial then $f$ is an identity of $L$ if and only if $f$ vanishes under all evaluations $\{x_1,\ldots,x_n\}\rightarrow B=\{e_1,\ldots,e_p\}\cup \{b_\alpha\}$. Suppose $n\ge p+q$ and $f$ is alternating on $x_1,\ldots,x_{p+q}$. If $\varphi: \{x_1,\ldots,x_n\}\rightarrow B$ is an evaluation such that $\varphi(x_i)=\varphi(x_j)$ for some $1\le i<j\le p+q$ then $\varphi(f)=0$ since $f$ is alternating on $x_i,x_j$. On the other hand, if any $e_i$ appears among $y_1=\varphi(x_1),\ldots, y_{p+q}=\varphi(x_{p+q})$ at most once then $\{y_1,\ldots,y_{p+q}\}$ contains at least $q$ basis elements from $N$. Hence $\varphi(f)=0$ since $N^q=0$, and we have proved the first claim of the lemma. The second assertion now follows from the results of \cite{ZM2}. $\Box$ As a consequence of Lemma \ref{l1} we get the following: \begin{lemma}\label{l2} If $L$ is an almost nilpotent Lie algebra then the sequence $\{c_n(L)\}$ is exponentially bounded. \end{lemma} {\em Proof.} By Lemma \ref{l1}, there exist an integer $t$ and a polynomial $f(n)$ such that $m_\lambda=0$ in (\ref{e4}) for all $\lambda\vdash n$ with $\lambda_{t+1} \ne 0$ and $l_n(L)=\sum_{\lambda\vdash n}m_\lambda \le f(n)$. It is well-known (see, for example, \cite[Corollary 4.4.7]{GZbook}) that $d_\lambda=\deg\chi_\lambda \le t^n$ if $\lambda=(\lambda_1,\ldots,\lambda_k)$ and $k\le t$. Hence we get from (\ref{e4}) the upper bound $$ c_n(L)\le f(n) t^n $$ and the proof is complete. $\Box$ \section{The upper bound for PI-exponent} The exponential upper bound for codimensions obtained in Lemma \ref{l2} is not precise. In order to prove the existence and integrality of $\mbox{exp}(L)$ we shall find a positive integer $d$ such that $\overline{\mbox{exp}}(L)\le d$ and $\underline{\mbox{exp}}(L)\ge d$. Let $L$ be a Lie algebra with a maximal nilpotent ideal $N$ and a finite dimensional semisimple factor-algebra $G=L/N$. Fix a decomposition of $G$ into the sum of simple components $$ G=G_1\oplus\cdots\oplus G_m $$ and denote by $\varphi_1,\ldots,\varphi_m$ the canonical projections of $L$ to $G_1,\ldots,G_m$, respectively. Let $g_1,\ldots,g_k$ be elements of $L$ such that for some $\{i_1,\ldots,i_k\}\subseteq\{1,\ldots,m\}$ one has $$ \varphi_{i_t}(g_t)\ne 0,\quad \varphi_{i_j}(g_t)=0\quad {\rm for ~all}\quad j\ne t,~1\le t\le k. $$ For any non-zero product $M$ of $g_1,\ldots,g_k$ and some $u_1,\ldots,u_t\in N$ we define the height of $M$ as $$ ht(M)=\dim G_{i_1}+\cdots+\dim G_{i_k}. $$ Now we are ready to define a candidate for the PI-exponent of $L$ as \begin{equation}\label{e5} d=d(L)=\max\{ht(M)\vert 0\ne M\in L\}. \end{equation} In order to get an upper bound for $\overline{\mbox{exp}}(L)$ we define the following multialternating polynomials. Let $Q_{r,k}$ be the set of all polynomials $f$ such that \begin{itemize} \item[(1)] $f$ is multilinear, $n=\deg f\ge rk$, $$ f=f(x_1^1,\ldots,x_r^1,\ldots,x_1^k,\ldots,x_r^k,y_1,\ldots, y_s) $$ where $rk+s=n$; and \item[(2)] $f$ is alternating on each set $x_1^i,\ldots,x_r^i, 1\le i\le k$. \end{itemize} We shall use the following lemma (see \cite[Lemma 6]{Z3}). \begin{lemma}\label{l3} If $f\equiv 0$ is an identity of $L$ for any $f\in Q_{d+1,k}$ for some $d,k$ then $\overline{\mbox{exp}}(L)\le d$.
\end{lemma} $\Box$ Note that Lemma 6 in \cite{Z3} was proved for a finite dimensional Lie algebra $L$. In fact, it is sufficient to assume that $L$ satisfies a Capelli identity and that $l_n(L)$ is polynomially bounded. \begin{lemma}\label{l4} Let $L$ be an almost nilpotent Lie algebra and $d=d(L)$ as defined in (\ref{e5}). Then $\overline{\mbox{exp}}(L)\le d$. \end{lemma} {\em Proof}. Let $N$ be the maximal nilpotent ideal of $L$ and let $N^p=0$. We shall show that any polynomial from $Q_{d+1,p}$ is an identity of $L$ and then apply Lemma \ref{l3}. Given $1\le i\le m$, we fix a basis $B_i$ of $L$ modulo $\widetilde G_1+\cdots+ \widetilde G_{i-1}+\widetilde G_{i+1}+\cdots+\widetilde G_m+N$ where $\widetilde G_j$ is the full preimage of $G_j$ under the canonical homomorphism $L\to L/N$. In other words, $|B_i|=\dim G_i$, $\varphi_i(B_i)$ is a basis of $G_i$ and $\varphi_j(B_i)=0$ for any $1\le j\ne i\le m$. Fix also a basis $C$ of $N$ and let $B=C\cup B_1\cup\cdots\cup B_m$. Then a multilinear polynomial $f$ is an identity of $L$ if and only if $\varphi(f)=0$ for any evaluation $\varphi: X\to B$. Suppose $f=f(x_1^1,\ldots,x_{d+1}^1,\ldots, x_1^p,\ldots,x_{d+1}^p, y_1,\ldots,y_s) \in Q_{d+1,p}$ is multilinear and alternating on each set $\{x_1^i,\ldots,x_{d+1}^i\}$, $1\le i \le p$. Given $1\le i\le p$, first consider an evaluation $\rho:X\to L$ such that $\rho(x_j^i)= b_j\in B_{t_j}$, $1\le j\le d+1$, with $$ \dim G_{t_1}+\cdots+ \dim G_{t_{d+1}}\ge d+1 $$ in $L/N=G$. Then, by the definition of $d$, any product of elements of $B$ containing the factors $b_1,\ldots, b_{d+1}$ is zero, hence $\rho(f)=0$. If $\dim G_{t_1}+\cdots + \dim G_{t_{d+1}}\le d$ then $b_1,\ldots, b_{d+1}$ are linearly dependent modulo $N$, say, $b_{d+1}=\alpha_1 b_1+\cdots+\alpha_d b_d+w$, $w\in N$. Then the value $\rho(f)$ is the same as that of $\rho':X\to L$ where $\rho'(x_1^i)=\rho(x_1^i)=b_1, \ldots,\rho'(x_d^i)=\rho(x_d^i)=b_d$, $\rho'(x_{d+1}^i)=w$, since $f$ is alternating on $x_1^i,\ldots,x_{d+1}^i$. It follows that for any evaluation $\rho: X\to B$ one should take at least one value $\rho(x_j^i)$ in $N$ for each $i=1,\ldots,p$; otherwise $\rho(f)=0$. But in this case $\rho(f)$ is also zero since $\rho(f)\in N^p=0$, and we have completed the proof. $\Box$ \section{The lower bound for PI-exponent} As in the previous section, let $L$ be an almost nilpotent Lie algebra with the maximal nilpotent ideal $N$ and suppose that the semisimple finite dimensional factor-algebra is $G=L/N=G_1\oplus\cdots\oplus G_m$ where $G_1,\ldots,G_m$ are simple. \begin{lemma}\label{l5} Given an algebra $L$ as above, there exist positive integers $q$ and $s$ such that for any $r=tq, t=1,2,\ldots$, and for any integer $j\ge s$ one can find a multilinear polynomial $h_t=h_t(x_1^1,\ldots,x_d^1,\ldots,x_1^r,\ldots,x_d^r,y_1,\ldots, y_{j})$ such that: \begin{itemize} \item[(1)] $h_t$ is alternating on each set $\{x_1^i,\ldots,x_d^i\}$, $1\le i\le r$; and \item[(2)] $h_t$ is not an identity of $L$, \end{itemize} where $d=d(L)$ is defined in (\ref{e5}). \end{lemma} {\em Proof.} Let $B_1,\ldots, B_m$ be as in Lemma \ref{l4}. Then by the definition of $d$ (up to reindexing of $G_1,\ldots,G_m$) there exist $b_1\in B_1,\ldots, b_k\in B_k$, $a_1,\ldots,a_p\in L$ such that for some multilinear monomial $w(z_1,\ldots, z_{k+p})$ the value $$ w(b_1,\ldots, b_k,a_1,\ldots,a_p) $$ is non-zero and $\dim G_1+\cdots+\dim G_k=d$ in $G=L/N$.
Recall that for the adjoint representation of $G_i$ there exists a central polynomial (see \cite[Theorem 12.1]{R}), i.e. an associative multilinear polynomial $g_i$ which assumes only scalar values on $ad~x_\alpha, x_\alpha \in G_i$. Moreover, $g_i$ is not an identity of the adjoint representation of $G_i$ and it depends on $q$ disjoint alternating sets of variables of order $d_i=\dim G_i$. That is, \begin{equation}\label{e6} g_i=g_i(x_{1,d_i}^1,\ldots,x_{d_i,d_i}^1,\ldots,x_{1,d_i}^q,\ldots,x_{d_i,d_i}^q) \end{equation} is skew-symmetric on each $\{x_{1,d_i}^j,\ldots,x_{d_i,d_i}^j\}$ and for some $a_{1,d_i}^1,\ldots,a_{d_i,d_i}^q\in G_i$ the equality $$ g_i(ad~a_{1,d_i}^1,\ldots,ad~a_{d_i,d_i}^q)(c_i)=c_i $$ holds for any $c_i\in G_i$. Clearly, $a_{1,d_i}^j,\ldots, a_{d_i,d_i}^j$ are linearly independent for any fixed $1\le j \le q$ since $g_i$ is alternating on $\{x_{1,d_i}^j,\ldots, x_{d_i,d_i}^j\}$. Hence there exists an evaluation of (\ref{e6}) in $L$ with all $\widetilde a_{\beta\gamma}^\alpha, \widetilde c_i$ in $B_i$ such that \begin{equation}\label{e7} g_i(ad~\widetilde a_{1,d_i}^1,\ldots,ad~\widetilde a_{d_i,d_i}^q) (\widetilde c_i)\equiv \widetilde c_i({\rm mod}~N). \end{equation} On the other hand, if at least one of $\widetilde a_{\beta\gamma}^\alpha$ lies in $B_j$, $j\ne i$, or in $N$ then \begin{equation}\label{e8} g_i(ad~\widetilde a_{1,d_i}^1,\ldots,ad~\widetilde a_{d_i,d_i}^q) (\widetilde c_i)\equiv 0~({\rm mod}~N). \end{equation} Since we can apply $g_i$ several times, the integer $q$ can be taken to be the same for all $i=1,\ldots,k$. Moreover, it follows from (\ref{e7}), (\ref{e8}) that for any $t=1,2,\ldots$ there exists a multilinear Lie polynomial $$ f_i^t=f_i^t(x_{1,d_i}^1,\ldots,x_{d_i,d_i}^1,\ldots,x_{1,d_i}^{tq}, \ldots,x_{d_i,d_i}^{tq},y_i) $$ alternating on each set $x_{1,d_i}^j,\ldots,x_{d_i,d_i}^j$, $1\le j\le tq$, such that $$ f_i^t(\widetilde a_{1,d_i}^1,\ldots,\widetilde a_{d_i,d_i}^{tq}, \widetilde c_i)\equiv \widetilde c_i ({\rm mod}~N) $$ for some $\widetilde a_{1,d_i}^1,\ldots,\widetilde a_{d_i,d_i}^{tq}\in B_i$ and for any $\widetilde c_i\in B_i$. Recall that the monomial $w=w(z_1,\ldots,z_{k+p})$ has a non-zero evaluation $$ \bar w=w(b_1,\ldots,b_k,a_1,\ldots,a_p) $$ in $L$ with $b_1\in B_1,\ldots,b_k\in B_k$. Replacing $z_i$ by $f_i^t$ in $w$ for all $i=1,\ldots, k$ and alternating the result we obtain a polynomial $$ h_t=Alt~w(f_1^t(x_{1,d_1}^1,\ldots,x_{d_1,d_1}^{tq},y_1),\ldots, f_k^t(x_{1,d_k}^1,\ldots,x_{d_k,d_k}^{tq},y_k), z_{k+1},\ldots, z_{k+p}), $$ where $Alt=Alt_1\cdots Alt_{tq}$ and $Alt_j$ denotes the total alternation on the variables $$ x_{1,d_1}^j,\ldots,x_{d_1,d_1}^{j},\ldots,x_{1,d_k}^j,\ldots,x_{d_k,d_k}^{j}. $$ Now if $\bar w=w(b_1,\ldots,b_k,a_1,\ldots,a_p)\in N^i\setminus N^{i+1}$ for some integer $i\ge 0$ in $L$ then according to (\ref{e7}), (\ref{e8}) we get $$ \rho(h_t)\equiv \left(d_1!\cdots d_k!\right)^{tq}\bar w ({\rm mod}~N^{i+1}), $$ where $\rho:X\to L$ is the evaluation with $\rho(x^\alpha_{\beta\gamma})=\widetilde a^\alpha_{\beta\gamma}$, $\rho(y_j)=b_j$, $\rho(z_{k+j})=a_j$. In particular, $h_t$ is not an identity of $L$. Renaming the variables $x^\alpha_{\beta\gamma}, y_1,\ldots, y_k,z_{k+1},\ldots,z_{k+p}$ we obtain the required polynomial $h_t$ with $s=k+p$.
In order to get a similar multialternating polynomial $h_t$ for $k+p+1$ we replace the initial monomial $w=w(z_1,\ldots,z_{k+p})$ by $w'=w'(z_1,\ldots,z_{k+p+1}) = w(z_1z_{k+p+1},z_2,\ldots,z_{k+p})$. Since $G_1$ is simple we have $G_1^2=G_1$. Hence there exists an element $a_{p+1}\in B_1$ such that $w'(b_1,\ldots,b_k,a_1,\ldots, a_{p+1})=w(b_1a_{p+1},b_2,\ldots,b_k, a_1,\ldots,a_p)\ne 0$. Continuing this process we obtain a similar $h_t$ for all integers $k+p+2,k+p+3,\ldots~$. $\Box$

Using the multialternating polynomials constructed in the previous lemma we obtain the following lower bound for the codimensions.

\begin{lemma}\label{l6}
Let $L,q$ and $s$ be as in Lemma \ref{l5}. Then there exists a constant $C>0$ such that
$$
c_n(L)\ge \frac{1}{Cn^{2d}}\cdot d^n
$$
for all $tq+s\le n\le tq+s+q-1$ and for all $t=1,2,\ldots~$.
\end{lemma}

{\em Proof}. Given $t$ and $s\le s'\le s+q-1$, consider the polynomial $h_t$ constructed in Lemma \ref{l5}. Then $n=\deg h_t=tqd+s'$ and $h_t$ depends on $tq$ alternating sets of indeterminates of order $d$. Denote by $M$ the $FS_n$-submodule of $P_n(L)$ generated by $h_t$. Let $n_0=tqd$ and let the subgroup $S_{n_0}\subseteq S_n$ act on the $tqd$ alternating indeterminates $x_1^1,\ldots,x_d^{tq}$. Then $M_0=FS_{n_0}h_t$ is a nonzero subspace of $M$. Obviously,
\begin{equation}\label{e9}
c_n(L)\ge\dim M\ge \dim M_0.
\end{equation}
Consider the character of $M_0$ and its decomposition into irreducible components
\begin{equation}\label{e10}
\chi(M_0)=\sum_{\lambda\vdash n_0} m_\lambda\chi_\lambda.
\end{equation}
By Lemma \ref{l1} the algebra $L$ satisfies a Capelli identity of rank $d_0\ge \dim L/N\ge d$. Hence $m_\lambda=0$ in (\ref{e10}) as soon as the height $ht(\lambda)$ of $\lambda$, i.e. the number of rows of the Young diagram $D_\lambda$, is bigger than $d_0$.

Now we prove that for any multilinear polynomial $f=f(x_1,\ldots,x_n)$ and for any partition $\lambda\vdash n_0$ with $\lambda_{d+1}\ge p$, where $N^p=0$, the polynomial $e_{T_\lambda}f$ is an identity of $L$. Since $e_{T_\lambda}=R(T_\lambda)C(T_\lambda)$, it is sufficient to show that $h=h(x_1,\ldots, x_n)= C(T_\lambda) f$ is an identity of $L$. Note that the set $\{x_1,\ldots,x_{n_0}\}$ is a disjoint union
$$
\{x_1,\ldots,x_{n_0}\}=X_0\cup X_1\cup\ldots\cup X_p
$$
where $|X_1|,\ldots, |X_p|\ge d+1$ and $h$ is alternating on each of $X_1,\ldots,X_p$, i.e. $h\in Q_{d+1,p}$. As was shown in the proof of Lemma \ref{l4}, $h$ is an identity of $L$. It follows that $m_\lambda\ne 0$ in (\ref{e10}) for $\lambda\vdash n_0$ only if $\lambda_{d+1}<p$. In particular,
\begin{equation}\label{e11}
n_0-(\lambda_1+\cdots+\lambda_d)\le(d_0-d)p.
\end{equation}
By the construction of the essential idempotent $e_{T_\lambda}$ any polynomial $e_{T_\lambda}f(x_1,\ldots,x_{n_0})$ is symmetric in the $\lambda_1$ variables corresponding to the first row of $T_\lambda$. Since $h_t$ depends on $tq$ alternating sets of variables it follows that $e_{T_\lambda}h_t=0$ for any $\lambda\vdash n_0$ with $\lambda_1\ge tq+1$. Denote $c_1=(d_0-d)p$. If $m_\lambda\ne 0$ in (\ref{e10}) for $\lambda\vdash n_0$, $\lambda=(\lambda_1,\ldots,\lambda_k)$, then $k\le d_0$ and
\begin{equation}\label{e12}
\lambda_{d-1}\le\ldots\le \lambda_1\le tq.
\end{equation}
If $\lambda_d<tq-c_1$ then combining (\ref{e11}) and (\ref{e12}) we get $\lambda_{d+1}+\cdots+ \lambda_k=n_0-(\lambda_1+\cdots+\lambda_d) \le c_1$ and
$$
n_0=(\lambda_1+\cdots+\lambda_{d-1})+\lambda_d+(\lambda_{d+1}+\cdots+\lambda_k) < tq(d-1)+tq-c_1+c_1=tqd=n_0,
$$
a contradiction. Hence $\lambda_d\ge tq-c_1$ and the Young diagram $D_\lambda$ contains the rectangular diagram $D_\mu$ where
$$
\mu=(\underbrace{tq-c_1,\ldots,tq-c_1}_{d})
$$
is a partition of $n_1=d(tq-c_1)=n_0-c_1d = n-s'-c_1d \ge n-s-q-c_1d+1$ since $s'\le s+q-1$. From the hook formula for the dimensions of irreducible $S_n$-representations (see \cite[Proposition 2.2.8]{GZbook}) and from the Stirling formula for factorials it easily follows that
$$
d_\mu=\deg \chi_\mu>\frac{d^{n_1}}{n_1^{2d}}
$$
for all sufficiently large $n$. Since $\dim M_0\ge d_\lambda\ge d_\mu$ and $n_1\ge n-c_2$ for the constant $c_2=s+q+c_1d-1$, we conclude from (\ref{e9}) that
$$
c_n(L)> \frac{d^{n}}{Cn^{2d}}
$$
where $C=d^{c_2}$, and we are done. $\Box$

\section{Existence of PI-exponents}

It follows from Lemma \ref{l6} that
$$
\underline{\mbox{exp}}(L) = \liminf_{n\to \infty}\sqrt[n]{c_n(L)}\ge d.
$$
Combining this inequality with Lemma \ref{l4} we get the following

\begin{theorem}\label{t1}
Let $L$ be an almost nilpotent Lie algebra over a field $F$ of characteristic zero. If $N$ is the maximal nilpotent ideal of $L$ and $L/N$ is semisimple then the PI-exponent of $L$,
$$
\mbox{exp}(L)=\lim_{n\to \infty}\sqrt[n]{c_n(L)},
$$
exists and is a positive integer.
\end{theorem}

Now consider the case of soluble almost nilpotent special Lie algebras.

\begin{theorem}\label{t2}
Let $L$ be an almost nilpotent soluble special Lie algebra over a field $F$ of characteristic zero. Then the PI-exponent of $L$ exists and is a positive integer.
\end{theorem}

{\em Proof}. Let $L$ be a special soluble Lie algebra with a nilpotent ideal $N$ of finite codimension. By Lemma \ref{l2} the algebra $L$ satisfies a Capelli identity of some rank. Then the variety $V={\rm Var~}L$ generated by $L$ has finite basis rank \cite{Z4}, that is, $L$ has the same identities as some $k$-generated Lie algebra $L_k\in V$. Clearly, $\underline{\mbox{exp}}(L)=\underline{\mbox{exp}}(L_k)$ and $\overline{\mbox{exp}}(L) = \overline{\mbox{exp}}(L_k)$. Since $L$ is soluble it follows that $L_k$ is a finitely generated soluble Lie algebra in the special variety $V$. By \cite[Proposition 6.3.2, Theorem 6.4.6]{Baht} we have $(L_k^2)^t=0$ for some $t \ge 1$. In this case $\mbox{exp}(L)$ exists and is a non-negative integer \cite{PM}. $\Box$
\end{document}
\begin{document} \title{Experimental realization of Br\"{u}schweiler's algorithm in a homo-nuclear system \thanks{Published in The Journal of Chemical Physics 117, 3310 (2002).}} \author{Li Xiao$^{a,b}$, G. L. Long$^{a,b,c,d}$\thanks{corresponding author, [email protected]}, Hai-Yang Yan$^{a,b}$, Yang Sun$^{e,a,b}$} \address{$^a$Department of Physics, Tsinghua University, Beijing 100084, China\\ $^b$ Key Laboratory For Quantum Information and Measurements, Beijing 100084, China\\ $^c$ Center for Atomic and Molecular Nanosciences, Tsinghua University, Beijing 100084, China\\ $^d$Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100084, China\\ $^e$Department of Physics and Astronomy, University of Tennessee, Knoxville, TN 37996, U.S.A.} \maketitle

\begin{abstract}
Compared with classical search algorithms, Grover's quantum algorithm [Phys. Rev. Lett. {\bf 79}, 325 (1997)] achieves a quadratic speedup, and Br\"{u}schweiler's hybrid quantum algorithm [Phys. Rev. Lett. {\bf 85}, 4815 (2000)] achieves an exponential speedup. In this paper, we report the experimental realization of the Br\"{u}schweiler algorithm in a 3-qubit NMR ensemble system. The pulse sequences for the algorithm are given, and the measurement method is improved over that used by Br\"{u}schweiler: instead of quantitatively measuring the spin projection of the ancilla bit, we utilize the shape of the ancilla bit spectrum. By simply judging whether the corresponding peaks in the ancilla bit spectrum point downward or upward, the bit value of the marked state can be read out; the geometric nature of this read-out makes the results more robust against errors.
\end{abstract}

\noindent{PACS numbers: 03.67.Lx, 82.56.Jn, 76.60.-k.}

\section{Introduction}
Quantum algorithms are central to quantum computing. This is illustrated by Deutsch and Josza's algorithm, which demonstrates the distinct advantage of quantum computing \cite{r3}. Two other famous quantum algorithms, closely related to practical applications of quantum computation, are Shor's factoring algorithm \cite{r1} and Grover's quantum search algorithm \cite{r2}. The factorization of a large number into prime factors is a difficult mathematical problem because existing classical algorithms require a time exponential in the size of the input. Shor's quantum algorithm drastically reduces this to polynomial time. Another example is searching for marked items in an unsorted database. Many scientific and practical problems can be cast as such a search problem, so it is a very important subject. Classically, it can only be done by exhaustive search. Unlike Shor's algorithm, Grover's quantum algorithm achieves only a quadratic speedup over classical algorithms, namely, the number of queries is reduced from $O(N)$ to $O(\sqrt{N})$. However, it has been proven that Grover's algorithm is optimal for quantum computing \cite{zalka99}. The restriction imposed by the optimality theorem can be circumvented by going beyond standard quantum computation, and an exponential speedup may then be achieved. Using nonlinear quantum mechanics, Abrams and Lloyd \cite{abramlloyd} constructed a quantum algorithm that achieves an exponential speedup. However, the applicability of nonlinear quantum mechanics is still under investigation, let alone the realization of their algorithm.
Recently, by using multiple-quantum operator algebra, Br\"{u}schweiler put forward a hybrid quantum search algorithm that combines ideas from DNA computing with quantum computing \cite{r4}. The new algorithm achieves an exponential speedup in searching for an item in an unsorted database, while requiring the same amount of resources as effective-pure-state quantum computing. There are several known schemes for quantum computers, such as cooled ions \cite{r5}, cavity QED \cite{r6}, nuclear magnetic resonance \cite{r7} and so on. The NMR technique is mature, and many quantum algorithms have been realized using NMR systems \cite{r8,r9,r10,r11,r12,r13,rlong}. Many studies show that the NMR system is particularly suitable for the realization of algorithms in which ensembles of nuclear spin systems are involved. Strictly speaking, the Br\"{u}schweiler algorithm is not a pure quantum algorithm, and thus its NMR realization is free from the debate \cite{rschack,rlaf} about the quantum nature of NMR computation using effective pure states. Because this algorithm is exponentially fast, it takes a much shorter time to finish a search problem, which also makes the algorithm more robust against decoherence.

In this paper, we report the experimental realization of the Br\"{u}schweiler algorithm in a 3-qubit homo-nuclear system. In the procedure, we have improved the measurement method used by Br\"{u}schweiler in his paper \cite{r4}. Instead of measuring the ancilla bit's spin polarization, we utilize the shapes of the ancilla bit's spectra; by judging whether the corresponding peaks in the spectrum point downward or upward, the bit value of the marked state can be read out. Since this geometric property of the spectrum is easy to recognize, the algorithm becomes more tolerant to errors. Our paper is organized as follows. After this introduction, we briefly describe Br\"{u}schweiler's original algorithm in section \ref{s2}, and then we introduce our modification of the Br\"{u}schweiler algorithm in section \ref{s3}. In section \ref{s4}, we present the details of the pulse sequences of the algorithm and the results of our experiment. Finally, a summary is given.

\section{Br\"{u}schweiler's algorithm}
\label{s2}NMR techniques lie far ahead of other proposed quantum computing technologies. However, in recent years the pace of development has slowed. Much effort has gone into preparing effective pure states, but compared with a pure-state quantum computer there is no essential speedup. Br\"{u}schweiler's idea may shed light on this area: he takes advantage of the mixed-state nature of the NMR system and achieves an exponential speedup in searching an unsorted database. For convenience in the following discussion, we briefly repeat the main idea of Br\"{u}schweiler's algorithm (for details, see Refs. \cite{r4,r16}).

As is well known, the preparation of the effective pure state is one of the most troublesome parts of an NMR quantum computing experiment. Moreover, the effective pure state also sets a restriction on the number of qubits \cite{r14,r15}. The effective pure state is represented by the density operator
\begin{equation}
\rho =(1-\varepsilon )2^{-n}{\bf \hat 1}+\varepsilon \left| 00...0\right\rangle \left\langle 00...0\right| .
\label{e1}
\end{equation}
At room temperature, under the high temperature approximation we have
\begin{equation}
\varepsilon =\frac{nh\nu}{2^nkT}.
\label{e2} \end{equation} In Eq. (\ref{e1}), the second term's contribution to the outcome is scaled by the factor $\varepsilon $, which decreases exponentially with $n$, namely, the number of qubit, but the first term has no contribution at all \cite{contribution}. In NMR ensemble system, the state can be represented by density operators which are linear combinations of direct products of spin polarization operators \cite{r7,r16}. In a strong external magnetic field, the eigenstates of the Zeeman Hamiltonian \begin{equation} |\phi _{in}\rangle=\left| 001...01\right\rangle \>=\left| \alpha \alpha \beta ...\alpha \beta \right\rangle , \end{equation} are mapped on states in the spin Liouville space \begin{equation} \sigma _{in}=\left| \phi \right\rangle \left\langle \phi \right| ={ I} _1^\alpha { I}_2^\alpha { I}_3^\beta ...{ I}_{n-1}^\alpha { I} _n^\beta , \label{sigmain} \end{equation} where \begin{equation} { I}_k^\alpha =\left| \alpha ^k\right\rangle \left\langle \alpha ^k\right| ={\frac 12}({\bf 1}_k+2{I}_{kz})=\left[ \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right] , \end{equation} \begin{equation} { I}_k^\beta =\left| \beta ^k\right\rangle \left\langle \beta ^k\right| ={ \frac 12}({\bf 1}_k-2{ I}_{kz})=\left[ \begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array} \right] , \end{equation} represent respectively spin up and spin down state of the spin. Usually, the oracle or query is a computable function $f$: $f(x)=0$ for all $x$ except for $x=z$ which is the item that we want to find out for which $f(z)=1$. Usually, the oracle can be expressed as a permutation operation which is a unitary operation $U_f$, implemented using logic gates \cite{r4}. In Br\"{u} schweiler algorithm, an extra bit(also called the ancilla bit) is used and its state is represented by ${ I}_0$. The output of the oracle is stored on the ancilla bit ${ I}_0$ whose state is prepared in the $\alpha $ state at the beginning. The output of $f$ can be represented by an expectation value of ${ I}_{0z}$ for a pure state \begin{equation} f=F({ I}_0^\alpha \sigma _{in})={\frac 12}-Tr(U_f{ I}_0^\alpha \sigma _{in}U_f^{+}{ I}_{0z}). \end{equation} If $\sigma _{in}$ happens to satisfy the oracle, then $I_0^\alpha $ is changed to $I_0^\beta $. This gives the value of the trace equal to $-1/2$, and hence $f$ equals to 1. The input of $f$ can be an mixed state of the form $\rho =\sum_{j=1}^N\;{ I}_0^\alpha \;\sigma _j$ where $\sigma _j$ is one of the form in Eq. (\ref{sigmain}): \begin{equation} f=\sum\limits_{j=1}^NF({ I}_0^\alpha \;\sigma _j)=F(\sum\limits_{j=1}^N { I}_0^\alpha \;\sigma _j)+{\frac{{N-1}}2}. \label{oracle} \end{equation} The oracle is applied simultaneously to all the components in the NMR ensemble. The oracle operation is quantum mechanical. Br\"{u}schweiler put forward two versions of search algorithm. We adopt his second version. The essential of the Br\"{u}schweiler algorithm is as follow: suppose that the unsorted database has $N=2^n$ number of items. We need $n$ qubit system to represents these $2^n$ items. The algorithm contains $n$ oracle queries each followed by an measurement: \newline (1) Each time, ${ I}_0^\alpha { I}_k^\alpha $ ($k=1,2,...,n$) is prepared. In fact, the input state ${ I}_0^\alpha ...{\bf 1}...I_k^\alpha ...{\bf 1}...$ is a highly mixed state\cite{r16}. In the following text, the identity operator will be omitted. 
This Liouville operator actually represents the $2^{n-1}$ items of the database that are encoded in the mixed state:
\begin{eqnarray}
{ I}_0^\alpha { I}_k^\alpha &=&{ I}_0^\alpha ({ I}_1^\alpha +{ I}_1^\beta )...({ I}_{k-1}^\alpha +{ I}_{k-1}^\beta ){ I}_k^\alpha ({ I}_{k+1}^\alpha +{ I}_{k+1}^\beta )... ({ I}_n^\alpha +{ I}_n^\beta ) \nonumber \\
&=&\sum_{\gamma _1,...,\gamma _{k-1},\gamma _{k+1},...,\gamma _n=\alpha ,\beta }{ I}_0^\alpha { I}_1^{\gamma _1}{ I}_2^{\gamma _2}...{ I}_{k-1}^{\gamma _{k-1}}{ I}_k^\alpha { I}_{k+1}^{\gamma _{k+1}}...{ I}_n^{\gamma _n} \nonumber \\
&=&{ I}_0^\alpha \sum_{i_1,...,i_{k-1},i_{k+1},...,i_n=0,1}\left| i_1i_2...i_{k-1}0i_{k+1}...i_n\right\rangle \left\langle i_1i_2...i_{k-1}0i_{k+1}...i_n\right| .
\label{state}
\end{eqnarray}
This mixed state contains half of the items in the database, namely those whose $k$-th bit is set to $\alpha$. The other half of the database, with the $k$-th bit equal to $\beta$ (or 1), is not included.

(2) The oracle function is applied to the system. As seen in eq. (\ref{oracle}), the operation is applied simultaneously to all the basis states. If the $k$-th bit of the marked state is $0$, then the marked state is contained in eq. (\ref{state}). One of the $2^{n}$ terms in equation (\ref{state}) satisfies the oracle, and the oracle changes the sign of the ancilla bit from $\alpha$ to $\beta$. If one measures the ancilla spin after the function $f$, the value will be $f=(2^{n}-1)\times 1/2+1/2-(2^{n}-2)\times 1/2=1$. If the $k$-th bit of the marked state is 1, then the state (\ref{state}) will not contain the marked item. Upon the operation of the function $f$, there is no flip of the ancilla bit. A measurement of the ancilla bit's spin $I_{0z}$ will yield $f=1/2\times (2^{n}-1)+1/2-(2^{n})\times 1/2=0$. However, even without obtaining the value of $f$, we can determine the marked state by measuring the ancilla bit's spin. If one measures the ancilla spin after the oracle, the value will be $(2^{n-1}-1)\times 1/2-1/2=N/4-1$ if the $k$-th bit of the marked state is 0. If the $k$-th bit of the marked state is 1, then the state (\ref{state}) will not contain the marked item, there is no flip of the ancilla bit upon the operation of the oracle, and a measurement of the ancilla bit's spin $I_{0z}$ will yield $1/2\times (2^{n-1})=N/4$. Therefore, by measuring the ancilla bit's spin one actually reads out the $k$-th bit of the marked state.

(3) By repeating the above procedure for $k$ from 1 to $n$, one can find out each bit value of the marked state.

In the following, we give a simple example with $N=4$ to illustrate the algorithm; this example is realized in the experiment. The example is for demonstration only, as the advantage of the algorithm becomes apparent when the number of qubits is large. Suppose the unsorted database with four items $\{00,01,10,11\}$ is represented by the Zeeman eigenstates of the two spins ${ I}_1$, ${ I}_2$. The item $z=10$ is the one we want. That is to say, $f=1$ for $z=10$, which is expressed as ${ I}_1^\beta { I}_2^\alpha $. For the other three items, $\{00({ I}_1^\alpha { I}_2^\alpha )$, $01({ I}_1^\alpha { I}_2^\beta )$, $11({ I}_1^\beta { I}_2^\beta )\}$, $f=0$. The function $f$ can be realized by the permutation illustrated in Fig. 1 (similar to Figure 2 in Ref. \cite{r4}). The extra qubit ${ I}_0^\alpha $ is included in the permutation. First we prepare the mixed state ${ I}_0^\alpha { I}_1^\alpha $, which is the sum ${ I}_0^\alpha { I}_1^\alpha { I}_2^\alpha +{ I}_0^\alpha { I}_1^\alpha { I}_2^\beta $.
Then the permutation described in Fig. 1 is applied to this mixed state. Since the first bit of the marked state is 1, the permutation has no effect on the ancilla bit, because the state ${ I}_0^\alpha { I}_1^\alpha $ obviously does not contain the marked state. ${ I}_0^\alpha { I}_1^\alpha { I}_2^\alpha $ and ${ I}_0^\alpha { I}_1^\alpha { I}_2^\beta $ each contribute $1/2$ to the spin of the ancilla bit. Upon measurement of the ancilla bit's spin, the intensity will be $2\times 1/2=1$ unit. This tells us that the first bit of the marked item is 1 (in the state ${ I}_1^\beta $). Secondly, we prepare another state, ${ I}_0^\alpha { I}_2^\alpha ={ I}_0^\alpha { I}_1^\alpha { I}_2^\alpha +{ I}_0^\alpha { I}_1^\beta { I}_2^\alpha $. We obtain the output ${ I}_0^\alpha { I}_1^\alpha { I}_2^\alpha +{ I}_0^\beta { I}_1^\beta { I}_2^\alpha $ after the action of the permutation $f$. Measuring the spin of the ancilla bit, we get $0$, since ${ I}_0^\alpha { I}_1^\alpha { I}_2^\alpha $ and ${ I}_0^\beta { I}_1^\beta { I}_2^\alpha $ contribute to the spin measurement equally but with opposite signs. This tells us that the second bit is $0$ (in the state ${ I}_2^\alpha $). After these two measurements, we have obtained the marked state. In the actual experiment, we have modified the measuring part of the algorithm: we read out the bit values by looking at the shape of the ancilla bit spectrum, which is clearer and more concise.

\section{Modification to the original algorithm}
\label{s3}Without measuring ${ I}_{0z}$, we can distinguish the state of the ancilla bit by the shape of its spectrum. Because the different initial states ${ I}_0^\alpha { I}_k^\alpha $ all have the same form, differing only in the subscript $k$, it is natural that the ${ I}_0$ spectrum will have similar shapes for ${ I}_0^\gamma { I}_{k_1}^\delta $ and ${ I}_0^\gamma { I}_{k_2}^\delta $. We use the shape of the spectrum of the state ${ I}_0^\alpha { I}_k^\alpha $, $k=1,2,\cdots$, as a reference. First, the phase of ${ I}_0^\alpha { I}_1^\alpha $ is set so that the peaks of the spectrum point up. In this NMR system, the ${ I}_0$ bit has $J$ coupling to both ${ I}_1$ and ${ I}_2$, and there are only two peaks in the ${ I}_0$ spectrum for the state ${ I}_0^\alpha { I}_1^\alpha $; the positions of the two peaks are determined by the order of the nuclei (first, second, $\cdots$). The ${ I}_0$ spectrum of ${ I}_0^\alpha { I}_1^\alpha $ before the operation of the permutation $U_f$ is given in Fig. 2(a). After the permutation operation, we measure the spectrum of ${ I}_0$ in the new states again. If the shape of the spectrum is the same as the one before the oracle, i.e., the two peaks still point up, then the permutation operation has not changed the state ${ I}_0^\alpha { I}_k^\alpha $, and this means that the $k$-th bit value of the marked state $z$ is $1$. If the $k$-th bit of the marked state is 0, the ancilla bit flips after the operation of the permutation $U_f$. This can be seen from the density matrices before and after the query operation $U_f$.
Before the query is evaluated on the mixed state ${ I}_0^\alpha { I}_1^\alpha $, the density matrix(apart from a multiple of the identity matrix and a scaling factor) is \begin{equation} \rho _{01in}=\left( \begin{array}{cccccccc} 0.5 & 0 & 0 & 0 & 0.5 & 0 & 0 & 0 \\ 0 & 0.5 & 0 & 0 & 0 & 0.5 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0.5 & 0 & 0 & 0 & 0.5 & 0 & 0 & 0 \\ 0 & 0.5 & 0 & 0 & 0 & 0.5 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right) , \end{equation} After the query, the matrix at the acquisition is \begin{equation} \rho _{01out}=\left( \begin{array}{cccccccc} 0.5 & 0 & 0 & 0 & 0.5 & 0 & 0 & 0 \\ 0 & 0.5 & 0 & 0 & 0 & 0.5 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0.5 & 0 & 0 & 0 & 0.5 & 0 & 0 & 0 \\ 0 & 0.5 & 0 & 0 & 0 & 0.5 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right) . \end{equation} When we measure the spectrum of ancilla bit ${ I}_0$, the left peak, corresponding matrix element $51$ and the right peak, corresponding the matrix element $62$, do not change. This indicates that the shape of the spectrum does not change. As for the second step, before the query is evaluated on the mixed state ${ I}_0^\alpha { I}_2^\alpha $, the outcome matrix is \begin{equation} \rho _{01in}=\left( \begin{array}{cccccccc} 0.5 & 0 & 0 & 0 & 0.5 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0.5 & 0 & 0 & 0 & 0.5 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0.5 & 0 & 0 & 0 & 0.5 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0.5 & 0 & 0 & 0 & 0.5 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right) , \end{equation} and after the query, the matrix becomes \begin{equation} \rho _{02out}=\left( \begin{array}{cccccccc} 0.5 & 0 & 0 & 0 & 0.5 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0.5 & 0 & 0 & 0 & -0.5 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0.5 & 0 & 0 & 0 & 0.5 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & -0.5 & 0 & 0 & 0 & 0.5 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right) . \end{equation} The left peak( $(51)$ matrix element) does not change, but the right peak, ((72) matrix element) changes sign. Thus the right peak of the spectrum will be downward. This method of ''reading out'' the bit of the marked state is effective. Since it depends on the shape of the spectrum, a topological quantity, it is insensitive to errors as compared to the quantitative measurement of the spin of the ancilla bit. \section{The realization of the algorithm in NMR experiment} \label{s4}We implemented Br\"{u}hweiler algorithm in a 3 qubit homonuclear NMR system. The physical system used in the experiment is $ ^{13}C $ labeled alanine $^{13}C^1H_3-^{13}C^0H(NH_2^{+})-^{13}C^2OOH$. The solvent is $D_2O$. The experiment is performed in a Bruker Avance DRX500 spectrometer. The parameters of the sample were determined by experiment to be: $J_{02}=54.2$Hz, $J_{01}=35.1$Hz and $J_{12}=1.7$Hz. In the experiment, $^1H$ is decoupled throughout the whole process. $^{13}C^0$, $^{13}C^1$ and $^{13}C^2$ are used as the 3 qubits, whose state are represented by ${ I}_0$, ${ I}_1$, ${ I}_2$ respectively. $ ^{13}C^0$ is used as the ancilla bit and the result of the oracle is stored on it, $^{13}C^1$ and $^{13}C^2$ are the second and third qubit respectively. We assume the marked item is $10$. Firstly, the state ${ I}_0^\alpha { I}_1^\alpha $ is prepared. 
It is achieved by a sequence of selective and non-selective pulses, and $J$-coupling evolution. We begin our experiment from thermal equilibrium state. This thermal state is expressed as, \begin{equation} \sigma (0_{-})=I_z^0+I_z^1+I_z^2. \label{thermal} \end{equation} The input state $I_0^\alpha I_1^\alpha$ can be written as ${1\over 2}({1\over 2}{\bf 1}+I^0_z+I^1_z+2 I^0_zI^1_z) $. The identity operator does not contribute signals in NMR, and a scale factor is irrelevant, thus $I_0^\alpha I_1^\alpha$ is equivalent to $I^0_z+I^1_z+2 I^0_zI^1_z$. The pulse sequence\cite{r12,r18,r19} \begin{equation} \left( \frac \pi 2\right) _y^2\Rightarrow Grad\Rightarrow \left( \frac \pi 4 \right) _x^{0,1}\Rightarrow \tau \Rightarrow \left( \frac \pi 6\right) _{-y}^{0,1}\Rightarrow Grad \label{pulseinitial} \end{equation} applied to the thermal state produces this input state \begin{equation} \sigma(0_{+})={\sqrt{6} \over 4}(I^0_z+I^1_z+2 I^0_z I^1_z). \label{stateop1} \end{equation} However in our experiment only the spectrum of $I_0$ is needed, and only $J$ coupling between qubit 0 and 1 is retained, a simplified pulse sequence is actually used in the present experiment to prepare an equivalent input state: \begin{equation} \left( \frac \pi 2\right) _y^0\Rightarrow \tau' \Rightarrow \left( \frac \pi 2\right) _x^0 \Rightarrow \left( \frac \pi 4\right) _{-y}^0\Rightarrow Grad. \label{pulsesecond} \end{equation} Here the subscripts denote the directions of the radio frequency pulse, and the superscripts denote the nuclei on which the radio frequency are operated. Two numbers at the superscript mean that the pulse are applied simultaneously to two nuclei(In actual experiment, the pulses are applied in sequence. Because the duration of the pulse is very short, they can be regarded as simultaneous). $Grad$ refers to applying gradient field. $\tau =1/(2J_{01})$ or $\tau'=1/(4J_{01})$ is the free evolution time during which nuclear $ ^{13}C^2$ is decoupled. The second pulse sequence is operated more easily, because only selective to ${ I}_0$ is considered. Pulse sequence (\ref{pulsesecond}) transforms the thermal state (\ref{thermal}) into \begin{eqnarray} \sigma(0_{+})={1\over 2} (I^0_z+I^1_z+2I^0_zI^1_z)+{1\over 2}I^1_z+I^2_z. \label{stateop2} \end{eqnarray} States (\ref{stateop1}) and (\ref{stateop2}) are equivalent, because ${1\over 2}I^1_z$ and $I^2_z$ does not contribute to the $I^0$ spectrum, and the scaling factor does not matter. The oracle, represented as a permutation $f$ is applied to this initial state: ${ I}_0^\alpha \ { I}_1^\alpha $. Then result of the oracle operation is stored on the ancilla bit ${ I}_0$, that is, the state of the $^{13}C^0$ indicates the state of the first bit of the marked item. Specifically the expression of the unitary operation corresponding to the permutation $f$ is \begin{equation} U_f=\left( \begin{array}{cccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{array} \right) . \end{equation} The permutation $U_f$ can be completed using a sequence logic gates given in Fig. 3. The left one is CNOT gate and the right one is Toffoli gate. The pulse sequence can be found out in Ref.\cite{r17,r18}. The pulse sequence will be very complex if we write according to the network although it is very rigorous. 
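To make the density-operator bookkeeping of the preceding sections concrete, the following short script (our own illustrative sketch, not the pulse-level NMR implementation; the basis ordering $|i_0 i_1 i_2\rangle$ and the helper names are assumptions of the sketch) builds the permutation $U_f$ for the marked item $10$, applies it to the mixed inputs ${I}_0^\alpha {I}_1^\alpha$ and ${I}_0^\alpha {I}_2^\alpha$, and evaluates $\langle I_{0z}\rangle$, reproducing the values $N/4$ and $N/4-1$ quoted above.
\begin{verbatim}
import numpy as np

# Basis ordering |i0 i1 i2>, index = 4*i0 + 2*i1 + i2,
# with 0 <-> alpha (spin up) and 1 <-> beta (spin down).
def ket(i0, i1, i2):
    v = np.zeros(8)
    v[4 * i0 + 2 * i1 + i2] = 1.0
    return v

# Permutation U_f for the marked item z = 10: it flips the ancilla i0
# exactly when (i1, i2) = (1, 0), i.e. it swaps |010> and |110>.
U = np.eye(8)
U[[2, 6]] = U[[6, 2]]

# Ancilla spin operator I_0z (in units of hbar).
I0z = 0.5 * np.diag([1, 1, 1, 1, -1, -1, -1, -1])

# Mixed input I_0^alpha I_k^alpha: equal mixture of the basis states with
# the ancilla and the k-th data bit both in the alpha state.
def rho_in(k):
    states = [(0, i1, i2) for i1 in (0, 1) for i2 in (0, 1)
              if (i1, i2)[k - 1] == 0]
    return sum(np.outer(ket(*s), ket(*s)) for s in states)

for k in (1, 2):
    rho_out = U @ rho_in(k) @ U.T
    # N/4 = 1 if the k-th bit of the marked item is 1, N/4 - 1 = 0 if it is 0.
    print(k, np.trace(rho_out @ I0z))
# prints "1 1.0" and "2 0.0", so the marked item is 10.
\end{verbatim}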
Since we assume that there is only one marked state and only the spectrum of $I_0$ is needed, the function of $U_f$ can be realized by the pulse sequence
\begin{equation}
\left( \frac \pi 2\right) _y^0\Rightarrow \tau \Rightarrow \left( \frac \pi 2 \right) _x^0,
\end{equation}
where $\tau =1/(2J_{01})$. After the operation of the oracle, we measure the spectrum of the ancilla bit. This pulse sequence achieves the same result as the gate network shown in Fig. 3:
\begin{eqnarray}
\begin{array}{ccc}
I^\alpha_0I^{\alpha}_1=I^\alpha_0I^\alpha_1I^\alpha_2+I^\alpha_0I^\alpha_1I^\beta_2 & \rightarrow & I^\alpha_0I^\alpha_1I^\alpha_2+I^\alpha_0I^\alpha_1I^\beta_2\\
I^\alpha_0I^{\alpha}_2=I^\alpha_0I^\alpha_1I^\alpha_2+I^\alpha_0I^\beta_1I^\alpha_2 & \rightarrow & I^\alpha_0I^\alpha_1I^\alpha_2+I^\beta_0I^\beta_1I^\alpha_2
\end{array}
\end{eqnarray}
Secondly, the initial state ${ I}_0^\alpha { I}_2^\alpha $ is prepared. There are two ways to prepare this initial state. One method is to use a pulse sequence as in Eq. (\ref{pulseinitial}) or (\ref{pulsesecond}), exchanging 1 with 2 in the superscripts. Another method is to apply the swap operation of Ref. \cite{r16},
\begin{equation}
\left( \frac \pi 2\right) _y^{1,2}\Rightarrow \tau _1\Rightarrow \left( \frac \pi 2\right) _x^{1,2}\Rightarrow \tau _1\Rightarrow \left( \frac \pi 2 \right) _{-y}^{1,2},
\label{swap}
\end{equation}
to the initial state ${ I}_0^\alpha { I}_1^\alpha $, which yields the state ${ I}_0^\alpha { I}_2^{\alpha }$. In the experiment, we adopt the second approach. The swap operation is important for generalizing the experiment to systems with more qubits, as we discuss later. Then we apply the permutation $U_f$ again, and the result of the oracle is stored in the ancilla bit $^{13}C^0$. The spectra of ${ I}_0 $ after the oracle query $U_f $ is operated on ${ I}_0^\alpha { I}_1^\alpha $ and ${ I}_0^\alpha { I}_2^\alpha $ are given in Fig. 2(b) and Fig. 2(c), respectively. We can see clearly that the former has the same shape as the reference spectrum, while in the latter the right peak is flipped. This tells us that the first bit and the second bit of the marked state are $1$ and $0$, respectively. Thus the marked state is $10$. We also notice that there are small differences between the spectra before and after the permutation operation for ${ I}_0^\alpha{ I}_1^\alpha$. These are expected, owing to imperfections caused by field inhomogeneity, errors in the selective pulses, and errors in the evolution of the chemical shift.

\section{Summary}
\label{s5}In summary, we have successfully demonstrated the Br\"{u}schweiler algorithm in a 3-qubit homo-nuclear NMR system. Pulse sequences are given. A new method for ``reading out'' the bit value of the marked state is proposed and realized. The number of iterations required by this algorithm is very small, which is particularly favorable for resisting decoherence, especially in NMR systems at room temperature. Another advantage of this algorithm is its robustness against errors: the shape of the spectrum used for reading out the bit of the marked state has a distinctive feature and can easily be distinguished from others. A further advantage is that the probability of finding the marked state is 100\%. It should be pointed out that there are still several issues to be addressed in generalizing the search machine to systems with more qubits. First, one must find a suitable molecule to act as the quantum computer.
According to Br\"{u} schweiler's original algorithm, the ancilla qubit ${ I}_0^\alpha $ must interact with every other qubit. However, in a molecule, the interaction between remote nuclear spins is very weak. This may be overcome by the swap operation as given in Ref. \cite{r16}. Using swap operation, we can prepare any initial state ${ I}_0^\alpha { I}_k^\alpha $ without the direct interaction between spin ${ I}_0^\alpha $ and spin ${ I}_k^\alpha $. And, the qubit also can be read out easily from the shape of the spectrum. All these are under consideration in our future work. The authors thank Prof. X. Z. Zeng, Prof. M.L. Liu and Dr. J. Luo for help in preparing the NMR sample and the use of selective pulses. Helpful discussions with Dr. P. X. Wang are gratefully acknowledged. This work is supported in part by China National Science Foundation, the National Fundamental Research Program, Contract No. 001CB309308 , the Hang-Tian Science Foundation. {\Large References } \begin{references} \bibitem{r3} D. Deutsch and R. Josza, {\it Proc. R. Soc. London A.,} {\bf 439}, 553(1992). \bibitem{r1} P. Shor, {\it in Proceedings of the 35}$^{th}${\it \ Annual Symposium on the Foundations of Computer Science}, edited by S. Goldwasser(IEEE Computer Society, Los Alamitos) P116. \bibitem{r2} L. K. Grover, {\it Phys. Rev. Lett.,} {\bf 79}, 325(1997). \bibitem{zalka99} C. Zalka, {\it Phys. Rev. A}{\bf , 60}, 2746(1999). \bibitem{abramlloyd} D. Abrams and S. Lloyd, {\it Phys. Rev. Lett}, {\bf 81} , 3992(1998). \bibitem{r4} R. Br\"{u}schweiler, {\it Phys. Rev. Lett.}, {\bf 85}, 4815(2000). \bibitem{r5} C. Monroe, D. M. Meekhof, and B. E. King, {\it Phys. Rev. Lett. }, {\bf 75}, 4714(1995). \bibitem{r6} Q. A.Turchette, C. J. Hood, W. Lange, H. Mabuchi, and H. J. Kimble, {\it Phys. Rev. Lett.}, {\bf 75}, 4710(1995) \bibitem{r7} R. R. Ernst, G. Bodenhausen, and A. Wokaun, ``{\it Principles of Nuclear Magnetic Resonance in One and Two Dimensions}'' (Oxford University Press, 1987) \bibitem{r8} D. G. Cory, A. F. Fahmy, and T. F. Havel, {\it Proc. Natl. Acad. Sci.} USA, {\bf 94}, 1634(1997) \bibitem{r9} N. A. Gersenfeld and I. L. Chuang, {\it Science}, {\bf 275}, 350(1997) \bibitem{r10} J. A. Jones and M. Mosca, {\it J. Chem. Phys.,} {\bf 109}, 1648(1998) \bibitem{r11} I. L. Chuang, V. L. M. Vandersypen, X. L. Zhou, D. W. Leung \& S. Lloyd, {\it Nature}, {\bf 393}, 143(1998) \bibitem{r12} J. A. Jones, {\it Prog. Nucl. Mag. Res. Sp.}, {\bf 38}, 325 (2001) \bibitem{r13} J. A. Jones, {\it Phys. Chem. Comm.} {\bf 11}, 1 (2001) \bibitem{rlong} G. L. Long, H. Y. Yan, Y. S. Li, C. C. Tu, J. X. Tao, H. M. Chen, M. L. Liu, X. Zhang, J. Luo, L. Xiao, and X. Z. Zeng, {\it Phys. Lett. A,} {\bf 286}, 121(2001). \bibitem{rschack} S. L. Braunstein, C. M. Caves, R. Josza, N. Linden, S. Popescu, and R. Schack, {\it Phys. Rev. Lett.}, {\bf 83},1054(1999). \bibitem{rlaf} R. Laflamme and D. G. Cory,``{\it NMR quantum information processing and entanglement}'', quant-ph/0110029 \bibitem{r14} W. S. Warren, {\it Science}, {\bf 277}, 1688(1997) \bibitem{r15} N. A. Gershenfeld and I. L. Chuang, {\it Science}, {\bf 277}, 1689(1997). \bibitem{r16} Z. L. Madi, R.Br\"{u}schweiler and R. R. Ernst, {\it J. Chem. Phys.}, {\bf 109}, 10603(1998) \bibitem{contribution} I. L. Chuang, N. Gershenfeld, M. Kubinec, and D. W. Leung, {\it Proc. R. Soc. London A.,} {\bf 454}, 447(1998) \bibitem{r17} A. Barenco, C. H. Bennett, R. Cleve, D. P. DiVincenzo, N. Margolus, P. Shor, T. Sleator, J. A. Smolin, and H. Weinfurter, {\it Phys. Rev. 
A}, {\bf 52}, 3457(1995) \bibitem{r18} J. A. Jones, R. H. Hansen, and M. Mosca, {\it J. Magn. Res.}, {\bf 135}, 353(1998) \bibitem{r19} M. Pravia, E. Fortunato, Y. Weinstein, M. D. Price, G. Teklemariam, R. J. Nelson, Y. Sharf, S. Somaroo, C. H. Tseng, T. F. Havel, and D. G. Cory, {\it Concepts Magn. Reson.}, {\bf 11}, 225 (1999). \end{references}
\begin{figure} \caption{Representation of the oracle $U_f$ acting on the spins $I_1$, $I_2$; its function corresponds to a permutation, and the result of the query is stored on $I_0$.} \end{figure}
\begin{figure} \caption{Experimental realization of the Br\"{u}schweiler algorithm: spectra of the ancilla bit $I_0$, (a) for the state ${I}_0^\alpha {I}_1^\alpha$ before the oracle, and after the oracle acting on (b) ${I}_0^\alpha {I}_1^\alpha$ and (c) ${I}_0^\alpha {I}_2^\alpha$.} \end{figure}
\begin{figure} \caption{Network for realizing the oracle $U_f.$} \end{figure}
\end{document}
\begin{document}
\title[]{Some remarks on Calabi-Yau and Hyper-K\"ahler foliations}
\date{\today}
\author[G.~Habib]{Georges Habib}
\address{Lebanese University \\ Faculty of Sciences II \\ Department of Mathematics\\ P.O. Box 90656 Fanar-Matn \\ Lebanon}
\email[G.~Habib]{[email protected]}
\author[L.~Vezzoni]{Luigi Vezzoni}
\address{Dipartimento di Matematica \\ Universit\`a di Torino\\ Via Carlo Alberto 10\\ 10123 Torino\\ Italy}
\email[L.~Vezzoni]{[email protected]}
\subjclass[1991]{53C10; 53D10; 53C25}
\keywords{Riemannian foliations, transverse structures, Sasakian manifolds}
\thanks{This work was partially supported by G.N.S.A.G.A. of I.N.d.A.M. and by the project FIRB \lq\lq Geometria Differenziale e Teoria Geometrica delle Funzioni\rq\rq\,.}
\begin{abstract}
We study Riemannian foliations whose transverse Levi-Civita connection $\nabla$ has special holonomy. In particular, we focus on the case where ${\rm Hol}(\nabla)$ is contained either in ${\rm SU}(n)$ or in ${\rm Sp}(n)$. We prove a Weitzenb\"{o}ck formula involving complex basic forms on K\"ahler foliations and we apply this formula to point out some properties of transverse Calabi-Yau structures. This allows us to prove that links provide examples of compact simply-connected contact Calabi-Yau manifolds. Moreover, we show that a simply-connected compact manifold with a K\"ahler foliation admits a transverse hyper-K\"ahler structure if and only if it admits a compatible transverse hyper-Hermitian structure. This latter result is the \lq\lq foliated version'' of a theorem proved by Verbitsky in \cite{verb}. In the last part of the paper we adapt our results to the Sasakian case, showing in addition that a compact Sasakian manifold has trivial transverse holonomy if and only if it is a compact quotient of the Heisenberg Lie group.
\end{abstract}
\maketitle
\tableofcontents

\section{Introduction}
Riemannian foliations were introduced by B. Reinhart in \cite{Rein} and are a natural generalization of Riemannian submersions. Roughly speaking, a {\em Riemannian foliation} on a manifold $M$ is a decomposition of $M$ into submanifolds given by local Riemannian submersions to a base Riemannian manifold $T$ whose metric is invariant under the transition maps. Riemannian foliations are characterized by the existence of a Riemannian metric on the whole manifold whose restriction to the normal bundle depends only on the transverse variables of a local chart. One of the basic tools for studying the geometry of Riemannian foliations is the holonomy group of the so-called {\em transverse Levi-Civita connection} $\nabla$. This connection is defined as the pull-back of the Levi-Civita connection of the base manifold by the local submersions. Many additional structures on a foliated manifold can be described in terms of the holonomy group of $\nabla$. For instance, {\em transverse K\"ahler structures} are defined as Riemannian foliations having the holonomy group of $\nabla$ contained in ${\rm U}(n)$, where $q=2n$ is the codimension of the foliation. K\"ahler foliations play an important role in many different geometrical contexts: for instance, Sasakian structures and Vaisman metrics induce a K\"ahler foliation. In this paper, we investigate the geometry of Riemannian foliations having special transverse holonomy.
In particular, we focus on the case of a foliated manifold having either ${\rm Hol}(\nabla)\subseteq {\rm SU}(n)$ or ${\rm Hol}(\nabla)\subseteq {\rm Sp}(n)$. The case ${\rm Hol}(\nabla)\subseteq {\rm SU}(n)$ corresponds to the geometry of {\em Calabi-Yau} foliations, while ${\rm Hol}(\nabla)\subseteq {\rm Sp}(n)$ corresponds to {\em hyper-K\"ahler} foliations. Among other reasons, our study is motivated by El Kacimi's paper \cite{EKA} containing the foliated version of the Calabi-Yau theorem. Examples of Calabi-Yau foliations are provided by submersions over Calabi-Yau manifolds, desingularizations of Calabi-Yau orbifolds and contact Calabi-Yau structures (see \cite{TV}), while examples of hyper-K\"ahler foliations can be obtained by considering submersions over hyper-K\"ahler manifolds, desingularizations of hyper-K\"ahler orbifolds, $3$-cosymplectic structures (see e.g. \cite[Section 13.1]{BGlibro}) and the connected sum of some copies of $S^2\times S^3$ (see \cite{4,cuadros} and the last paragraph of the present paper).

As a first result of the paper, we provide a Weitzenb\"ock formula for K\"ahler foliations (see theorem \ref{W} in section \ref{Weitzenbock}). This formula allows us to establish some analogies between foliated Calabi-Yau manifolds and classical Calabi-Yau manifolds (see section \ref{sectionSU(n)}). As the main result about hyper-K\"ahler foliations, we prove that every simply-connected compact manifold carrying a K\"ahler foliation and a compatible transverse hyper-complex structure actually admits a hyper-K\"ahler foliation. This is the foliated version of a theorem of Verbitsky (see \cite{verb}). A key ingredient in the proof of this last result is the existence and uniqueness of a special connection having skew-symmetric transverse torsion on every manifold carrying a Hermitian foliation. The existence of this connection on contact metric manifolds was shown by Friedrich and Ivanov in \cite{FI}. In the last part of the paper we consider Sasakian manifolds. We prove that a transversally flat compact Sasakian manifold is always a compact quotient of the Heisenberg group (see theorem \ref{1}); we point out that some links provide examples of compact simply-connected contact Calabi-Yau manifolds, and we adapt some results proved in the first part to the Sasakian case.

\bigbreak\noindent{\it Acknowledgments.} The authors are grateful to Charles P. Boyer, Thomas Brun Madsen, Gueo Grancharov, Valentino Tosatti and Misha Verbitsky for useful conversations, suggestions and remarks.

\section{Preliminaries}
In this section, we recall some basic material about foliations, transverse structures and basic cohomology; we refer to \cite{T, molino, BGlibro} and the references therein for detailed expositions of these topics.

\subsection{Transverse structures on foliations}
Let $M$ be a smooth manifold. A foliation $\mathcal F$ on $M$ of codimension $q$ can be defined as an open cover $\{U_k\}$ of $M$ together with a family of submersions $f_k\colon U_k\to T$ over a manifold $T$ of dimension $q$ (called the base of the foliation) such that whenever $U_j\cap U_k\neq \emptyset$ there exists a diffeomorphism $\gamma_{jk}\colon f_j( U_j\cap U_k)\to f_k(U_j\cap U_k)$ such that
$$
f_{k}=\gamma_{jk}\circ f_j\,.
$$
The basic examples of foliations are provided by global submersions.
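As a simple illustration (a standard example, recalled here only for orientation), the Hopf fibration $\pi\colon S^3\to S^2$ is a global Riemannian submersion when $S^3$ carries the round metric and the base is the sphere of radius $1/2$; its fibers define a one-dimensional Riemannian foliation on $S^3$ for which one may take
$$
f_k=\pi|_{U_k}\,,\qquad \gamma_{jk}={\rm id}\,,
$$
and whose transverse metric is the round metric of $S^2(1/2)$. This foliation is generated by the Reeb vector field of the standard Sasakian structure on $S^3$ recalled below.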
A {\em transverse structure} on a foliated manifold $(M,\mathcal{F})$ is by definition a geometric structure on the base manifold $T$ which is invariant under the transition maps $\gamma_{jk}$. A foliation is called {\em Riemannian} if it admits a transverse Riemannian metric $g_Q$. In contrast to the non-foliated case, the existence of a transverse metric is not always guaranteed.

Given a foliated manifold $(M,\mathcal{F})$, we denote by $L$ the subbundle of $TM$ induced by $\mathcal{F}$; by $Q$ the normal bundle $TM/L$; by $\pi\colon TM\to Q$ the natural projection, and for a section $X$ of $TM$ we usually set $X_Q:=\pi(X)$. A transverse metric on $\mathcal{F}$ induces a metric $g_Q$ along the fibers of $Q$ which always satisfies the so-called {\em holonomy invariant} condition
\begin{equation}\label{bundle-like}
\mathcal{L}_Xg_Q=0
\end{equation}
for every $X\in \Gamma(L)$, where $\mathcal{L}$ denotes the Lie derivative. By using the projection $\pi$ we can regard $g_Q$ as a symmetric tensor on $M$ which can always be ``completed'' to a global metric $g$ on $M$. This means that there exists a Riemannian metric $g$ on $M$ such that
$$
g(X,Y)=g_Q(X_Q,Y_Q)
$$
for all sections $X,Y$ of $L^{\perp}$. Such a global metric $g$ can be explicitly defined by starting from an arbitrary Riemannian metric $g'$ on $M$ and by setting
$$
g(X,Y):=g'(X_L,Y_L)+g_Q(X_Q,Y_Q)
$$
for every $X,Y\in \Gamma(TM)$, where the subscript $L$ here denotes the projection onto $\Gamma(L)$ with respect to $g'$. On the other hand, given a foliated Riemannian manifold $(M,\mathcal{F},g)$, the metric $g$ always induces a metric $g_Q$ along the fibers of $Q:=TM/L\simeq L^{\perp}$. Such a $g_Q$ makes $\mathcal{F}$ a Riemannian foliation if and only if it satisfies the {\em holonomy invariant} condition \eqref{bundle-like}. In this case $g$ is called a {\em bundle-like} metric.

Given a Riemannian foliation $(\mathcal{F},g_Q)$ on a manifold $M$, some special transverse structures can be characterized in terms of the holonomy of the so-called {\em transversal Levi-Civita connection}. This connection is defined as the unique connection $\nabla$ on $Q$ preserving $g_Q$ and having vanishing transverse torsion, i.e.
$$
\nabla_{X}Y_Q-\nabla_{Y}X_Q=[X,Y]_Q
$$
for all $X,Y\in \Gamma(TM)$. The connection $\nabla$ can be defined in an explicit way in terms of the Levi-Civita connection $\nabla^g$ of a bundle-like metric $g$ inducing $g_Q$ by setting
\begin{equation}
\nabla_{X}s=
\begin{cases}
[X,\sigma(s)]_{Q}& \mbox{if } X\in \Gamma(L),\\
(\nabla^g_{X}\sigma(s))_{Q}& \mbox{if } X\in \Gamma(L^{\perp}),
\end{cases}
\end{equation}
for every $s\in \Gamma(Q)$, where $\sigma\colon \Gamma(Q)\to \Gamma(L^{\perp})$ is the natural isomorphism (see e.g. \cite{T}). This last description of $\nabla$ does not depend on the choice of the metric $g$. The curvature $R^\nabla$ of $\nabla$ vanishes along the leaves of $\mathcal{F}$ (see \cite{T}): that is, $X\lrcorner R^\nabla=0$ for all $X\in \Gamma(L)$.
That allows us to define the {\em transverse Ricci tensor} as
$$
{\rm Ric^{\nabla}}(s_1,s_2)=\sum_{k=1}^qg_{Q}(R^{\nabla}(\tilde e_k,\tilde s_1)s_2,e_k)
$$
for every $s_1,s_2\in \Gamma(Q)$, where $\{e_k\}$ is an arbitrary orthonormal frame of $\Gamma(Q)$ and $\tilde e_k$ and $\tilde s_1$ are arbitrary vector fields on $M$ projecting onto $e_k$ and $s_1$, respectively.

Now we recall the definition of the {\em basic cohomology complex}. A differential form $\alpha$ on a foliated manifold $(M,\mathcal{F})$ is called {\em basic} if it is constant along the leaves of $\mathcal{F}$, i.e. if it satisfies
$$
X\lrcorner \alpha=0\,,\quad X\lrcorner d\alpha=0\,,
$$
where $X\lrcorner$ denotes the contraction along $X\in \Gamma(L)$. It is straightforward to see that the exterior derivative $d$ preserves the set of basic forms $\Lambda_B(M)$, and its restriction $d_B$ to this set is used to define the so-called basic cohomology groups $H^r_B(M)$ (see e.g. \cite{molino, T}).

From now until the end of this paragraph, we assume that $M$ is compact and $\mathcal{F}$ is transversally oriented. The transverse orientation of $\mathcal{F}$ induces a {\em basic Hodge star operator}
$$
*_B\colon \Lambda_B^r(M)\to \Lambda_B^{q-r}(M)
$$
in the usual way. Moreover, the so-called {\em characteristic form} $\chi_\mathcal{F}$ is defined as the volume form of the leaves and is locally given by
$$
\chi_\mathcal{F}(X_1,\dots,X_p)={\rm det}\left( g(X_i,E_j)\right)\,,\mbox{ for } X_r\in \Gamma(TM),
$$
where $\{E_k\}_{k=1,\cdots,p}$ is a local oriented orthonormal frame of $\Gamma(L)$ and $p$ is the rank of $L$. Thus, one may define a natural scalar product on the set of basic forms by setting
\begin{equation}\label{scalarbasic}
(\alpha,\beta)_B:=\int_M\alpha\wedge *_B\beta\wedge\chi_\mathcal{F}.
\end{equation}
Let $\delta_B$ be the formal adjoint of $d_B$ with respect to the scalar product \eqref{scalarbasic} and $\Delta_B:=d_B\delta_B+\delta_Bd_B$ the {\em basic Laplacian operator}. Then $\Delta_B$ is a transversally elliptic operator and by \cite{EHS} its kernel has finite dimension. Moreover, in view of the basic Hodge theorem, $H^r_B(M)$ is isomorphic to $\mathcal{H}^r_B(M):=\ker \Delta_B\cap \Lambda_B^r(M)$ \cite{EK-Hc2,K-To15}. Therefore the basic cohomology groups have finite dimensions, but in general they do not satisfy Poincar\'e duality (see \cite{Ca} for an example). The latter is guaranteed when the foliation is {\em taut}, i.e. when the leaves are minimal with respect to a suitable bundle-like metric or, equivalently in view of \cite{Ma}, when the top basic cohomology group $H^{q}_B(M)$ is non-trivial (and then it is $1$-dimensional).

Given a foliated manifold with a bundle-like metric $(M,\mathcal{F},g)$, the {\em mean curvature vector field} is defined as
$$
H=\sum_{l=1}^p (\nabla^g_{E_l} E_l)_Q,
$$
where $\{E_l\}_{l=1,\cdots,p}$ is an arbitrary orthonormal frame of $\Gamma(L)$. The $1$-form $\kappa$ dual to $H$ is usually called the {\em mean curvature form}. Notice that the leaves of $\mathcal{F}$ are minimal with respect to $g$ if and only if $\kappa$ vanishes. In view of \cite{domin}, it is always possible to find a compatible bundle-like metric $g$ on $M$ whose induced $\kappa$ is basic.
Moreover, when $\kappa$ is basic it is automatically closed (see \cite{Kamber,T}) and consequently on a compact simply-connected manifold every Riemannian foliation is taut (see \cite{T}). Moreover, the forms $\kappa$ and $\chi_\mathcal{F}$ are related by the following formula arising from \cite{Ru}
\begin{equation}\label{Ru}
\alpha\wedge d\chi_\mathcal{F}= -\alpha\wedge \kappa \wedge \chi_\mathcal{F}
\end{equation}
holding for every basic form $\alpha$ of degree $q-1$. Hence if $(\mathcal{F},g_Q)$ is an orientable taut Riemannian foliation there exists a $p$-form $\chi_\mathcal{F}$ which restricts to a volume along the leaves and satisfies
$$
\alpha\wedge d\chi_\mathcal{F}=0
$$
for every basic $(q-1)$-form $\alpha$.

\subsection{Transverse K\"ahler structures}
In this section, we focus on transverse Hermitian and transverse K\"ahler structures. Let $(M,\mathcal{F},g_Q)$ be a manifold equipped with a Riemannian foliation. A {\em transverse complex} structure on $(M,\mathcal{F})$ is an endomorphism $J$ of $Q$ such that
$$
J^2=-{\rm Id}_Q\,,
$$
(i.e. a transverse almost complex structure). The pair $(g_Q,J)$ is said to be a {\em transverse Hermitian} structure if and only if
$$
g_Q(J\cdot,J\cdot)=g_Q(\cdot,\cdot)\,.
$$
In this case, the triple $(\mathcal{F},g_Q,J)$ is called a {\em Hermitian foliation}. The basic $2$-form $\omega$ obtained as the pull-back to $M$ of the skew-symmetric tensor $g_Q(J\cdot,\cdot)$ is usually called the {\em fundamental form} of the foliation and it is closed if and only if $(g_Q,J)$ is induced by a transverse K\"ahler structure. In analogy to the non-foliated case, the condition $d\omega=0$ writes in terms of the transverse Levi-Civita connection as $\nabla J=0$.

Given a transverse complex structure $J$ on a foliated manifold $(M,\mathcal{F})$, the complexified normal bundle $Q^{\C}:=Q\otimes \C$ splits into the two eigenbundles $Q^{1,0}$ and $Q^{0,1}$ corresponding to the eigenvalues $i$ and $-i$ of $J$ and we have
$$
\Lambda^r(Q^*)\otimes \C=\mathop\bigoplus\limits_{i+j=r}\Lambda^{i,j}(Q)\,.
$$
Since the space of smooth sections of $\Lambda^r(Q^*)$ is isomorphic to the space of $r$-forms $\alpha$ on $M$ satisfying $X\lrcorner \alpha=0$ for every $X\in \Gamma(L)$, $J$ induces an operator (which we still denote by $J$) on complex basic forms. Therefore we have the natural splitting
$$
\Lambda_B^r(M,\C)=\mathop\bigoplus\limits_{i+j=r}\Lambda_B^{i,j}(M),
$$
and the complex extension of $d_B$ splits as
$$
d_B=\partial_B+\bar\partial_B
$$
where
$$
\partial _{B}\colon \Lambda_{B}^{i,j}(M)\rightarrow \Lambda _{B}^{i+1,j}(M)\,,\quad \bar{\partial}_{B}:\Lambda _{B}^{i,j}(M)\rightarrow \Lambda _{B}^{i,j+1}(M)\,.
$$
In analogy to the non-foliated case, we have $\partial _{B}^2=\bar{\partial}_{B}^2=0$, $\partial _{B}\bar{\partial}_{B}+\bar{\partial}_{B}\partial _{B}=0.$ Moreover, if $(g_Q,J)$ is a transverse K\"ahler structure, the transverse Ricci tensor of $g_Q$ is $J$-invariant and thus induces the {\em transverse Ricci form} $\rho_B$. This latter is the closed basic form obtained as the pull-back of $\frac{1}{2\pi}{\rm Ric}^{\nabla}(J\cdot,\cdot)$ to $\Lambda^2_B(M)$.
According to the non-foliated case, one has
$$
\rho_B(\cdot,\cdot)=-\frac{i}{2\pi}\partial_B\bar\partial_B {\rm log}(G)
$$
where $G={\rm det}(g_{r\bar s})$ and the functions $g_{r\bar s}$ are computed with respect to suitable transverse complex coordinates. The class of $\rho_B$ in $H_B^2(M,\R)$ is by definition the {\em first basic Chern class} of $(\mathcal{F},J)$ and it is usually denoted by $c_B^1$. The following important theorem is due to El Kacimi and provides a foliated version of the celebrated Calabi-Yau theorem:

\begin{theorem}[\cite{EKA}]\label{transvcalabiyau}
Let $(M,\mathcal{F},g_Q,J)$ be a compact manifold endowed with a taut K\"ahler foliation and let $\omega$ be its fundamental form. If $c_B^1$ is represented by a real basic $(1,1)$-form $\rho'_B$, then $\rho'_B$ is the basic Ricci form of a unique transverse K\"ahler form $\omega'$ in the same basic cohomology class as $\omega$. In particular, if $c_B^1=0$, then there exists a transverse K\"ahler metric having vanishing transverse Ricci tensor.
\end{theorem}

\begin{rem}\label{remarkCY}
{\em Let $(M,\mathcal{F},g_Q,J)$ be a Hermitian foliation of real codimension $2n$. Then the first Chern class of $K:=\Lambda^{n,0}(Q)$ vanishes if and only if there exists a nowhere vanishing section $\eta$ of $\Lambda^{n,0}(Q)$. Such an $\eta$ induces a nowhere vanishing section $\psi$ of $\Lambda^{2n}(M,\C)$ satisfying $X\lrcorner \psi=0$ for every section $X$ of $L$. Since $\psi$ is not necessarily basic, the condition $c^1(K)=0$ does not imply $c^1_B(\mathcal{F})=0$, and it is quite natural to ask whether the existence of a nowhere vanishing basic $(n,0)$-form implies $c^1_B(\mathcal{F})=0$. This fact is certainly true if $M$ is simply-connected, since in this case the same argument as in the non-foliated case (see e.g. \cite{Audin}) allows us to prove that $\rho=-i\partial_B\bar \partial_B f$, where $f=g_Q(\eta,\bar \eta)$ is the pointwise norm of the form $\eta\in \Lambda_B^{n,0}(M)$ corresponding to $\psi$.}
\end{rem}

An important class of transverse K\"ahler structures is provided by Sasakian manifolds \cite{einstein1}. The latter are characterized by the existence of a unit Killing vector field $\xi$ on a $(2n+1)$-dimensional Riemannian manifold $(M,g)$ such that the tensor field $\Phi$ defined for $X\in \Gamma(TM)$ as $\Phi(X)=\nabla^g_X\xi$ satisfies
\begin{enumerate}
\item[1.] $\Phi^2=-{\rm Id}_{TM}+\xi^\flat\otimes \xi,$
\item[2.] $(\nabla^g_X \Phi)(Y)=g(\xi,Y)X-g(X,Y)\xi,$
\end{enumerate}
where $X, Y$ are vector fields in $\Gamma(TM)$. The vector field $\xi$ is called the {\em Reeb vector field} and generates a $1$-dimensional Riemannian foliation $\mathcal{F}$ such that the restriction of $\Phi$ to $\xi^\perp$ gives a transverse K\"ahler structure with vanishing mean curvature. Usually a Sasakian structure is denoted by a quadruple $(\xi,\eta,\Phi,g)$, where $\eta=\xi^\flat$ is the $1$-form dual to $\xi$. For the geometry of Sasakian manifolds we refer to \cite{Sas60,SH62, BGlibro} and the references therein, whilst for the Sasakian version of theorem \ref{transvcalabiyau} we refer to \cite{BGM} and \cite{BGN}. Many interesting examples of non-K\"ahlerian complex spaces carry a K\"ahler foliation.
For instance, Vaisman manifolds, Calabi-Eckmann manifolds and some Oeljeklaus-Toma manifolds carry K\"ahler foliations whose transverse K\"ahler form is exact (see e.g. \cite{dragomir, ornea} and the references therein). Other interesting examples of K\"ahler foliations come from the physical study of A-branes (see \cite{K,HO}) and are obtained by considering a coisotropic submanifold of a K\"ahler manifold. A submanifold $N\hookrightarrow (M,\omega,J,g)$ of a K\"ahler manifold is called {\em coisotropic} if
\begin{equation}\label{coso}
TN^{\omega}\subset TN
\end{equation}
where
$$
\left(T_{y}N\right)^{\omega}=\{v\in T_yM\,\,:\,\, \omega(v,w)=0\,\,\mbox{ for all }w\in T_yN \}\,.
$$
The coisotropic condition \eqref{coso} implies that $\mathcal{F}_y:=(T_yN)^{\omega}$ is a distribution on $N$ which is integrable by the closedness of $\omega$. Moreover, the complex structure $J$ always preserves the orthogonal complement of the bundle $L$ induced by $\mathcal{F}$ and we have the following
\begin{prop}\label{coisotropic}
The metric $g$ is always bundle-like with respect to $\mathcal{F}$ and $\mathcal{F}$ is a locally trivial K\"ahler foliation.
\end{prop}
\begin{proof}
The metric $g$ is bundle-like if and only if
$$
\mathcal{L}_{X}g(X_1,X_2)=0
$$
for every $X\in \Gamma(L)$ and $X_1,X_2\in \Gamma(L^\perp)$. Such a relation can be read in terms of the Levi-Civita connection $\nabla^g$ of $g$ as
$$
g(\nabla^g_{X_1}X,X_2)=-g(\nabla^g_{X_2}X,X_1)\,.
$$
Now it is enough to observe that the K\"ahler condition $\nabla^g\omega=0$ implies
$$
g(\nabla^g_{X_1}X,X_2)=0
$$
since $g(\nabla^g_{X_1}X,X_2)=\omega(\nabla^g_{X_1}X,JX_2)$ and
$$
0=(\nabla^g_{X_1}\omega)(X,JX_2)=X_1\omega(X,JX_2)-\omega(\nabla^g_{X_1}X,JX_2)-\omega(X,\nabla^g_{X_1}JX_2)= -\omega(\nabla^g_{X_1}X,JX_2)\,.
$$
\end{proof}

\section{A Weitzenb\"{o}ck formula for transverse K\"ahler structures}\label{Weitzenbock}
In this section, we establish a transverse Weitzenb\"{o}ck formula for complex-valued basic forms on K\"ahler foliations and we also derive some vanishing results. We closely follow the classical computations in the non-foliated case as described in \cite{moroianu}.
Let $(M,\mathcal{F},g_Q,J)$ be a compact manifold with a K\"ahler foliation of codimension $q$ and let $g$ be a bundle-like metric on $M$ inducing $g_Q$. In view of \cite{domin}, we may assume that the mean curvature form $\kappa$ of the foliation is basic (otherwise we can work with the basic component of $\kappa$). From now until the end of this section, we identify the bundle $Q$ with its dual $Q^*$ and vectors with $1$-forms by using the transverse metric. In analogy to the non-foliated case, the two operators $\partial _{B}$ and $\bar{\partial}_{B}$ can be written in terms of the transverse Levi-Civita connection $\nabla$ of $(\mathcal{F},g_Q)$ as
$$
\partial_{B}=\frac{1}{2}\sum_{j=1}^q(e_{j}+iJe_{j})\wedge \nabla_{e_{j}}\,\,\text{and}\,\,\bar{\partial}_{B}=\frac{1}{2}\sum_{j=1}^q(e_{j}-iJe_{j})\wedge \nabla_{e_{j}},
$$
where $\{e_j\}_{j=1,\cdots,q}$ is a local orthonormal frame of $\Gamma(Q)$. In \cite{HabRi}, the authors established a Bochner-Weitzenb\"ock formula for the Laplacian operator corresponding to the {\em twisted derivative} $\tilde{d}_B=d_B-\frac{1}{2}\kappa\wedge$.
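Note that for every basic form $\alpha$ an elementary computation, which uses only the Leibniz rule for $d_B$ and the fact that $\kappa$ is a $1$-form, gives
$$
\tilde{d}_B^{\,2}\alpha=d_B\Big(d_B\alpha-\frac{1}{2}\kappa\wedge\alpha\Big)-\frac{1}{2}\kappa\wedge\Big(d_B\alpha-\frac{1}{2}\kappa\wedge\alpha\Big)=-\frac{1}{2}\,(d_B\kappa)\wedge\alpha\,.
$$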
As mentioned before, the mean curvature is $d_B$-closed; in particular we have that $\tilde{d}_B^2=0$. In the same spirit as \cite{HabRi}, we modify the operators $\partial_{B}$ and $\bar{\partial}_{B}$ by introducing the following two twisted operators
\begin{equation*}
\tilde{\partial}_{B}=\frac{1}{2}\sum_{j=1}^q(e_{j}+iJe_{j})\wedge \nabla _{e_{j}}-\frac{1}{4}(\kappa +iJ\kappa )\wedge \,\,\text{and}\,\, \tilde{\bar{\partial}}_{B}=\frac{1}{2}\sum_{j=1}^q(e_{j}-iJe_{j})\wedge \nabla _{e_{j}}-\frac{1}{4}(\kappa -iJ\kappa )\wedge.
\end{equation*}
We readily have that $\tilde{d}_{B}=\tilde{\partial}_{B}+\tilde{\bar{\partial}}_{B}$, and the formal adjoint of $\tilde{\bar{\partial}}_{B}$ with respect to the scalar product \eqref{scalarbasic} can be written as
\begin{equation*}
\left(\tilde{\bar{\partial}}_{B}\right)^{\ast }=-\frac{1}{2}\sum_{j=1}^q(e_{j}+iJe_{j}) \lrcorner\, \nabla _{e_{j}}+\frac{1}{4}(\kappa +iJ\kappa )\lrcorner\,.
\end{equation*}
Now we state the main result of this section:
\begin{theorem}\label{W}
Let $(M,g,\mathcal{F},J)$ be a compact Riemannian manifold endowed with a K\"ahler foliation. Assume that the mean curvature $\kappa$ is basic-harmonic. Then the following Weitzenb\"{o}ck-type formula holds
$$
\begin{aligned}
2((\tilde{\bar{\partial}}_{B})^{\ast }\tilde{\bar{\partial}}_{B}+\tilde{\bar{\partial}}_{B}(\tilde{\bar{\partial}}_{B})^{\ast }) =&\,\nabla ^{\ast }\nabla +\frac{1}{4}|\kappa |^{2}+\mathfrak{R}+\frac{1}{4}(e_{j}-iJe_{j})\wedge(\nabla _{e_{j}}\kappa +iJ(\nabla _{e_{j}}\kappa ))\lrcorner \\
&\,+\frac{i}{2}(Je_{\ell },\nabla _{e_{\ell }}\kappa )-\frac{1}{4}(\nabla_{e_{\ell }}\kappa -iJ(\nabla _{e_{\ell }}\kappa ))\wedge (e_{\ell}+iJe_{\ell })\lrcorner\,,
\end{aligned}
$$
where the third term $\mathfrak{R}$ is
\begin{equation*}
\mathfrak{R}=\frac{i}{2}R^\nabla(Je_{j},e_{j})-\frac{1}{2}(e_{j}-iJe_{j})\wedge (e_{\ell }+iJe_{\ell })\lrcorner\, R^\nabla(e_{j},e_{\ell }).
\end{equation*}
\end{theorem}
\begin{proof}
Let $p$ be a fixed point of $M$ and let $\{e_i\}_{i=1,\cdots,q}$ be an orthonormal frame of $\Gamma(Q)$ which we may assume to be parallel at $p$.
Then working at $p$ we have
$$
\begin{aligned}
2(\tilde{\bar{\partial}}_{B})^{\ast}\tilde{\bar{\partial}}_{B} =\,&(\tilde{\bar{\partial}}_{B})^{\ast}\left( \sum_{j=1}^q(e_{j}-iJe_{j})\wedge \nabla _{e_{j}}-\frac{1}{2}(\kappa -iJ\kappa )\wedge \right)\\
=&-\frac{1}{2}\sum_{\ell=1}^q(e_{\ell }+iJe_{\ell })\lrcorner\, \nabla _{e_{\ell }}\left( \sum_{j=1}^q(e_{j}-iJe_{j})\wedge \nabla _{e_{j}}-\frac{1}{2}(\kappa -iJ\kappa )\wedge \right)\\
&+\frac{1}{4}(\kappa +iJ\kappa )\lrcorner \left( \sum_{j=1}^q(e_{j}-iJe_{j})\wedge \nabla _{e_{j}}-\frac{1}{2}(\kappa -iJ\kappa )\wedge \right)
\end{aligned}
$$
which yields
$$
\begin{aligned}
2(\tilde{\bar{\partial}}_{B})^{\ast }\tilde{\bar{\partial}}_{B} =&-\frac{1}{2}\sum_{\ell=1}^q(e_{\ell }+iJe_{\ell })\lrcorner \left( \sum_{j=1}^q(e_{j}-iJe_{j})\wedge \nabla _{e_{\ell }}\nabla _{e_{j}}-\frac{1}{2} (\nabla _{e_{\ell }}\kappa -iJ(\nabla _{e_{\ell }}\kappa ))\wedge \,-\frac{1}{2}(\kappa -iJ\kappa )\wedge \nabla _{e_{\ell }}\right)  \\
&+\frac{1}{4}(\kappa +iJ\kappa )\lrcorner \left( \sum_{j=1}^q(e_{j}-iJe_{j})\wedge \nabla _{e_{j}}-\frac{1}{2}(\kappa -iJ\kappa )\wedge \right).
\end{aligned}
$$
Then we get
$$
\begin{aligned}
2(\tilde{\bar{\partial}}_{B})^{\ast}\tilde{\bar{\partial}}_{B} =&-\frac{1}{2}(2\delta ^{\ell j}-2ig(Je_{j},e_{\ell }))\nabla _{e_{\ell }}\nabla _{e_{j}}+ \frac{1}{2}(e_{j}-iJe_{j})\wedge (e_{\ell }+iJe_{\ell })\lrcorner\, \nabla _{e_{\ell }}\nabla _{e_{j}}  \\
&+\frac{1}{2}g(\nabla _{e_{\ell }}\kappa ,e_{\ell })+\frac{i}{2}(Je_{\ell },\nabla _{e_{\ell }}\kappa )-\frac{1}{4}(\nabla _{e_{\ell }}\kappa -iJ(\nabla _{e_{\ell }}\kappa ))\wedge (e_{\ell }+iJe_{\ell })\lrcorner \\
&+\frac{1}{2}g(\kappa ,e_{\ell })\nabla _{e_{\ell }}-\frac{i}{2}g(J\kappa ,e_{\ell })\nabla _{e_{\ell }}-\frac{1}{4}(\kappa -iJ\kappa )\wedge (e_{\ell }+iJe_{\ell })\lrcorner\, \nabla _{e_{\ell }}  \\
&+\frac{1}{2}g(\kappa ,e_{j})\nabla _{e_{j}}+\frac{i}{2}g(J\kappa ,e_{j})\nabla _{e_{j}}-\frac{1}{4}(e_{j}-iJe_{j})\wedge (\kappa +iJ\kappa )\lrcorner\, \nabla _{e_{j}}  \\
&-\frac{1}{4}|\kappa |^{2}+\frac{1}{8}(\kappa -iJ\kappa )\wedge (\kappa +iJ\kappa )\lrcorner
\end{aligned}
$$
which gives
$$
\begin{aligned}
2(\tilde{\bar{\partial}}_{B})^{\ast}\tilde{\bar{\partial}}_{B} =&\,\nabla ^{\ast }\nabla +ig(Je_{j},e_{\ell })\nabla _{e_{\ell }}\nabla _{e_{j}}+\frac{1}{2}(e_{j}-iJe_{j})\wedge (e_{\ell }+iJe_{\ell })\lrcorner\, \nabla _{e_{\ell }}\nabla _{e_{j}}  \\
&+\frac{1}{2}\mathrm{div}_Q(\kappa )+\frac{i}{2}g(Je_{\ell },\nabla _{e_{\ell }}\kappa )-\frac{1}{4}(\nabla _{e_{\ell }}\kappa -iJ(\nabla _{e_{\ell }}\kappa ))\wedge (e_{\ell }+iJe_{\ell })\lrcorner   \\
&-\frac{1}{4}(\kappa -iJ\kappa )\wedge (e_{\ell }+iJe_{\ell })\lrcorner\, \nabla _{e_{\ell }}-\frac{1}{4}(e_{j}-iJe_{j})\wedge (\kappa +iJ\kappa )\lrcorner\, \nabla _{e_{j}}  \\
&-\frac{1}{4}|\kappa |^{2}+\frac{1}{8}(\kappa -iJ\kappa )\wedge (\kappa +iJ\kappa )\lrcorner,
\end{aligned}
$$
(here and in what follows we omit the summation symbol). In the last equality above, we have made use of the relation
$$
\nabla^*\nabla=-\sum_{j=1}^q\nabla_{e_j}\nabla_{e_j}+\nabla_{\kappa}.
$$
Since we are assuming that the mean curvature form is basic-harmonic, the divergence of $\kappa$ equals the square of its norm. Thus, we have
$$
\begin{aligned}
2(\tilde{\bar{\partial}}_{B})^{\ast }\tilde{\bar{\partial}}_{B} =&\,\nabla ^{\ast }\nabla +\frac{1}{4}|\kappa |^{2}+\frac{i}{2} g(Je_{j},e_{\ell })R^\nabla(e_{\ell },e_{j})+\frac{1}{2}(e_{j}-iJe_{j})\wedge \left( (e_{\ell }+iJe_{\ell })\lrcorner\, \nabla _{e_{\ell }}\nabla _{e_{j}}\right) \\
&+\frac{i}{2}g(Je_{\ell },\nabla _{e_{\ell }}\kappa )-\frac{1}{4}(\nabla _{e_{\ell }}\kappa -iJ(\nabla _{e_{\ell }}\kappa ))\wedge (e_{\ell }+iJe_{\ell })\lrcorner \\
&-\frac{1}{4}(\kappa -iJ\kappa )\wedge (e_{\ell }+iJe_{\ell })\lrcorner\, \nabla _{e_{\ell }}-\frac{1}{4}(e_{j}-iJe_{j})\wedge (\kappa +iJ\kappa )\lrcorner\, \nabla _{e_{j}}+\frac{1}{8}(\kappa -iJ\kappa )\wedge (\kappa +iJ\kappa )\lrcorner \\
=&\,\nabla ^{\ast }\nabla +\frac{1}{4}|\kappa |^{2}+\frac{i}{2} R^\nabla(Je_{j},e_{j})+\frac{1}{2}(e_{j}-iJe_{j})\wedge (e_{\ell }+iJe_{\ell })\lrcorner\, \nabla _{e_{j}}\nabla _{e_{\ell }} \\
&+\frac{1}{2}(e_{j}-iJe_{j})\wedge (e_{\ell }+iJe_{\ell })\lrcorner\, R^\nabla(e_{\ell },e_{j})+\frac{i}{2}g(Je_{\ell },\nabla _{e_{\ell }}\kappa )-\frac{1}{4}(\nabla _{e_{\ell }}\kappa -iJ(\nabla _{e_{\ell }}\kappa ))\wedge (e_{\ell }+iJe_{\ell })\lrcorner \\
&-\frac{1}{4}(\kappa -iJ\kappa )\wedge (e_{\ell }+iJe_{\ell })\lrcorner\, \nabla _{e_{\ell }}-\frac{1}{4}(e_{j}-iJe_{j})\wedge (\kappa +iJ\kappa )\lrcorner\, \nabla _{e_{j}}+\frac{1}{8}(\kappa -iJ\kappa )\wedge (\kappa +iJ\kappa )\lrcorner,
\end{aligned}
$$
which finally gives
$$
\begin{aligned}
2(\tilde{\bar{\partial}}_{B})^{\ast }\tilde{\bar{\partial}}_{B} =&\,\nabla ^{\ast }\nabla +\frac{1}{4}|\kappa |^{2}+\mathfrak{R}+\frac{1}{2} (e_{j}-iJe_{j})\wedge (e_{\ell }+iJe_{\ell })\lrcorner\, \nabla _{e_{j}}\nabla _{e_{\ell }}+\frac{i}{2}(Je_{\ell },\nabla _{e_{\ell }}\kappa ) \\
&-\frac{1}{4}(\nabla _{e_{\ell }}\kappa -iJ(\nabla _{e_{\ell }}\kappa ))\wedge (e_{\ell }+iJe_{\ell })\lrcorner -\frac{1}{4}(\kappa -iJ\kappa )\wedge (e_{\ell }+iJe_{\ell })\lrcorner\, \nabla _{e_{\ell }}\\
&-\frac{1}{4} (e_{j}-iJe_{j})\wedge (\kappa +iJ\kappa )\lrcorner\, \nabla _{e_{j}} +\frac{1}{8}(\kappa -iJ\kappa )\wedge (\kappa +iJ\kappa )\lrcorner\,.
\end{aligned}
$$
On the other hand,
$$
\begin{aligned}
2\tilde{\bar{\partial}}_{B}(\tilde{\bar{\partial}}_{B})^{\ast } =&\,\tilde{\bar{\partial}}_{B}\left(-(e_{\ell }+iJe_{\ell })\lrcorner\, \nabla _{e_{\ell }}+ \frac{1}{2}(\kappa +iJ\kappa )\lrcorner \right) \\
=&\,\frac{1}{2}\left\{(e_{j}-iJe_{j})\wedge \nabla _{e_{j}}\left(-(e_{\ell }+iJe_{\ell })\lrcorner +\frac{1}{2}(\kappa +iJ\kappa )\lrcorner \right)\right\} \\
&-\frac{1}{4}(\kappa -iJ\kappa )\wedge \left(-(e_{\ell }+iJe_{\ell })\lrcorner\, \nabla _{e_{\ell }}+\frac{1}{2}(\kappa +iJ\kappa )\lrcorner \right) \\
=&-\frac{1}{2}(e_{j}-iJe_{j})\wedge (e_{\ell }+iJe_{\ell })\lrcorner\, \nabla _{e_{j}}\nabla _{e_{\ell }}+\frac{1}{4}(e_{j}-iJe_{j})\wedge (\nabla _{e_{j}}\kappa +iJ(\nabla _{e_{j}}\kappa ))\lrcorner \\
&+\frac{1}{4}(e_{j}-iJe_{j})\wedge (\kappa +iJ\kappa )\lrcorner\, \nabla _{e_{j}}+\frac{1}{4}(\kappa -iJ\kappa )\wedge (e_{\ell }+iJe_{\ell })\lrcorner\, \nabla _{e_{\ell }} \\
&-\frac{1}{8}(\kappa -iJ\kappa )\wedge (\kappa +iJ\kappa )\lrcorner.
\end{aligned}
$$
Thus, by taking the sum of the last two equations we find
$$
\begin{aligned}
2((\tilde{\bar{\partial}}_{B})^{\ast }\tilde{\bar{\partial}}_{B}+\tilde{\bar{\partial}}_{B}(\tilde{\bar{\partial}}_{B})^{\ast }) = \,&\nabla ^{\ast }\nabla +\frac{1}{4}|\kappa |^{2}+\mathfrak{R}+\frac{1}{4}(e_{j}-iJe_{j})\wedge (\nabla _{e_{j}}\kappa +iJ(\nabla _{e_{j}}\kappa ))\lrcorner \\
&+\frac{i}{2}(Je_{\ell },\nabla _{e_{\ell }}\kappa )-\frac{1}{4}(\nabla _{e_{\ell }}\kappa -iJ(\nabla _{e_{\ell }}\kappa ))\wedge (e_{\ell }+iJe_{\ell })\lrcorner\,,
\end{aligned}
$$
as required.
\end{proof}
The previous theorem has the following remarkable consequence when applied to $(p,0)$-forms:
\begin{cor}
Under the hypotheses of theorem $\ref{W}$, for every form $\alpha\in \Lambda_B^{p,0}(M)$ we have
\begin{equation*}
2(\tilde{\bar{\partial}}_{B})^{\ast }\tilde{\bar{\partial}}_{B}\alpha=\nabla ^{\ast }\nabla \alpha+\frac{1}{4}|\kappa |^{2}\alpha+\frac{i}{2}\sum_{j=1}^qR^\nabla(Je_{j},e_{j})\alpha- \frac{i}{2}{\rm div}_Q(J\kappa)\alpha\,.
\end{equation*}
\end{cor}
Another consequence of theorem \ref{W} is the following
\begin{theorem}\label{vanishing}
Let $(M,g,\mathcal{F},J)$ be a compact manifold endowed with a K\"ahler foliation. If the transverse Ricci curvature is negative definite, then $\mathcal{F}$ has only trivial transversally holomorphic vector fields.
\end{theorem}
\begin{proof}
We show that every $\xi \in \Lambda_{B}^{1,0}(M)$ satisfying $\bar{\partial}_{B}\xi=0$ is trivial. Such a $\xi$ satisfies $\tilde{\bar{\partial}}_{B}\xi =-\frac{1}{4}(\kappa -iJ\kappa )\wedge \xi$ and hence $|\tilde{\bar{\partial}}_{B}\xi |_{H}^{2}= \frac{1}{8}|\kappa |^{2}|\xi |^{2}$. By taking the product with $\xi$ in the Weitzenb\"{o}ck formula and integrating over $M$ we get
\begin{equation*}
\frac{1}{4}\int_{M}|\kappa |^{2}|\xi |_{H}^{2}v_g=\int_{M}|\nabla \xi |^{2}v_g+ \frac{1}{4}\int_{M}|\kappa |^{2}|\xi |_{H}^{2}v_g-\int_{M}H({\rm Ric}^\nabla(\xi ),\xi )v_{g}.
\end{equation*}
In the above identity, we have used $\frac{1}{2}\sum_{j=1}^qR^\nabla(Je_{j},e_{j})\xi ={\rm Ric}^\nabla(J\xi )=i{\rm Ric}^\nabla(\xi)$. The assumption that the Ricci curvature is negative definite implies the statement.
\end{proof}
We point out that theorem \ref{vanishing} was previously obtained by S.D. Jung in \cite{jung} using a different method.
\begin{theorem}\label{SUn}
Let $(M,g,\mathcal{F},J)$ be a compact manifold endowed with a K\"ahler foliation. If the transverse Ricci curvature vanishes, then every transversally holomorphic form is parallel. Moreover, if the transverse curvature of $M$ is positive definite, then every transversally holomorphic $(p,0)$-form on $M$ is trivial.
\end{theorem}
\begin{proof}
Let $\gamma$ be a transversally holomorphic $(p,0)$-form. Then $\bar\partial_B\gamma=0$ and thus $\tilde{\bar \partial}_B\gamma=-\frac{1}{4}(\kappa-iJ\kappa)\wedge\gamma$, which implies $|\tilde{\bar \partial}_B\gamma|_H^2=\frac{1}{8}|\kappa|^2|\gamma|^2$. Using the Weitzenb\"ock formula and the identities
$$
R^\nabla(X,Y)\gamma=R^\nabla(X,Y)e_k\wedge e_k\lrcorner\,\gamma
$$
and
$$
\frac{1}{2}\sum_{j=1}^q R^\nabla(Je_j,e_j)e_k={\rm Ric}^\nabla(Je_k),
$$
we get the first part of the statement. The second part of the theorem can be obtained following the same proof as in the non-foliated case (see e.g. \cite{moroianu}).
\end{proof}

\section{Transverse Calabi-Yau structures}\label{sectionSU(n)}
In this short section, we consider {\em transverse Calabi-Yau} structures. Such structures can be defined as Riemannian foliations having transverse holonomy contained in ${\rm SU}(n)$ and have been studied in \cite{moro1,moro2}, where it is proved that in the {\em taut} case the moduli space is a smooth Hausdorff manifold of finite dimension (i.e. a generalization of the Bogomolov-Tian-Todorov theorem \cite{Bogo, tian, todorov} to the foliated case). Moreover, Calabi-Yau foliations can be used for desingularizing Calabi-Yau orbifolds.
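Equivalently, by the standard holonomy principle, a transverse Calabi-Yau structure of complex codimension $n$ may be described as a transverse K\"ahler structure $(g_Q,J)$ with fundamental form $\omega$ together with a basic $(n,0)$-form $\epsilon$ satisfying
$$
\nabla\epsilon=0\,,\qquad \epsilon\wedge\bar{\epsilon}=c_n\,\omega^{n}\,,
$$
where $c_n$ is a nonzero constant depending only on $n$ and on the chosen normalization of $\epsilon$.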
The following proposition is a direct consequence of theorem \ref{SUn}:
\begin{prop}\label{transvholomorphic}
On a compact manifold endowed with a Calabi-Yau foliation, every transversally holomorphic form is parallel with respect to the transverse Levi-Civita connection.
\end{prop}
Moreover, we have the following:
\begin{prop}\label{SU(n)}
Let $(M,\mathcal{F},g_Q,J)$ be a compact simply-connected manifold carrying a K\"ahler foliation. Assume $\rho_Q=0$; then ${\rm Hol}(\nabla)$ is contained in ${\rm SU}(n)$ and $\mathcal{F}$ is a Calabi-Yau foliation.
\end{prop}
\begin{proof}
Let $(\mathcal{F},g_Q,J)$ be a K\"ahler foliation, let $K:=\Lambda^{n,0}(Q)$ and let $R^K$ be the curvature of the connection induced by the transverse Levi-Civita connection on $K$. Fix a global section $\psi$ of $K$. Then a standard computation yields
$$
R^{K}_{X,Y}\psi=\rho_Q(X,Y)\psi
$$
for every pair of smooth vector fields on $M$. Hence if we assume $\rho_Q=0$, then we have $R^K=0$, which is equivalent to requiring that ${\rm Hol}^0(\nabla)\subseteq {\rm SU}(n)$. Hence when $M$ is simply-connected, we have ${\rm Hol}(\nabla)\subseteq {\rm SU}(n)$, as required.
\end{proof}
Combining the El Kacimi theorem \ref{transvcalabiyau} with the last proposition, we get the following
\begin{cor}\label{4.4}
Let $(\mathcal{F},g_Q,J)$ be a K\"ahler foliation on a compact simply-connected manifold $M$ and assume $c_B^1=0$. Then there exists a unique transverse K\"ahler form $\omega'$ in the same basic cohomology class as the fundamental form of $g_Q$ whose transverse Levi-Civita connection $\nabla'$ satisfies ${\rm Hol}(\nabla')\subseteq {\rm SU}(n)$.
\end{cor}

\section{Transverse Hyper-K\"ahler structures}
A particular class of Calabi-Yau foliations is provided by hyper-K\"ahler foliations. The latter are characterized by a triple $(J_1,J_2,J_3)$ of transverse complex structures on a foliated manifold $(M,\mathcal{F},g_Q)$ such that $(\mathcal{F},g_Q,J_r)$ is a K\"ahler foliation for every $r=1,2,3$ and the quaternionic relation
\begin{equation}\label{quaternion}
J_1J_2=-J_2J_1=J_3
\end{equation}
is satisfied. Requiring the existence of such a structure on a foliated manifold $(M,\mathcal{F})$ is equivalent to requiring the existence of a transverse metric having transverse holonomy group contained in ${\rm Sp}(n).$ Moreover, if $(\mathcal{F},g_Q,J_1)$ is a K\"ahler foliation, then it is {\em transversally hyper-K\"ahler} if and only if there exists a basic $(2,0)$-form $\Omega$ such that
$$
\Omega^n\neq 0\,,\qquad \nabla\Omega=0\,.
$$
If $(J_1,J_2,J_3)$ is simply a triple of transverse complex structures compatible with a fixed transverse metric and satisfying \eqref{quaternion}, we refer to $(g_Q,J_1,J_2,J_3)$ as a {\em transverse hyper-Hermitian} structure. The main result of this section is the following theorem, which is the foliated counterpart of the main theorem in \cite{verb}
\begin{theorem}\label{mainluigi}
Let $(M,\mathcal{F},g_Q,J_1)$ be a compact simply-connected manifold carrying a K\"ahler foliation. Assume that there exists a pair $(J_2,J_3)$ of transverse complex structures such that $(J_1,J_2,J_3)$ is a transverse hyper-Hermitian structure. Then there exists a transverse metric $g'_Q$ on $(M,\mathcal{F})$ having holonomy contained in ${\rm Sp}(n)$.
\end{theorem}
We divide the proof into a sequence of lemmas.
The first one is a generalization of theorem 8.2 of \cite{FI} to the foliated non-contact case:
\begin{lemma}\label{ivanov}
For every Hermitian foliation $(\mathcal{F},g_Q,J)$ on a manifold $M,$ there exists a unique connection $\widetilde{\nabla}$ on $Q$ preserving $(g_Q,J)$ and such that
\begin{enumerate}
\item[1.] $g_Q(T(X_Q,Y_Q),Z_Q)=-g(T(X_Q,Z_Q),Y_Q)$ for every $X,Y,Z\in \Gamma(TM);$
\item[2.] $\widetilde{\nabla}_{\xi}s=\nabla_{\xi}s$ for every $s\in \Gamma(Q)$ and $\xi\in \Gamma(L)$,
\end{enumerate}
where $T$ is the transverse torsion tensor
$$
T(X,Y)=\widetilde{\nabla}_XY_Q-\widetilde{\nabla}_{Y}X_Q-[X,Y]_{Q}\,.
$$
\end{lemma}
\begin{proof}
Let $g$ be a bundle-like metric on $M$ inducing $g_Q$ and identify $Q$ with the orthogonal complement of $L$ in $TM$. We define $\widetilde{\nabla}$ explicitly as
\begin{equation}\label{N'}
g(\widetilde{\nabla}_{Z}X,Y)=\begin{cases}
\begin{array}{lcl}
g_Q(\nabla_{Z}X,Y)+\frac12 d\omega(JZ,JX,JY)& \mbox{if} &Z\in\Gamma(Q),\\
g_Q(\nabla_{Z}X,Y)& \mbox{if}& Z\in \Gamma(L)\,,
\end{array}
\end{cases}
\end{equation}
where $\omega$ is the fundamental form of $(g_Q,J)$. First of all, we observe that the connection $\widetilde{\nabla}$ described by formula \eqref{N'} satisfies the two conditions of the statement (here we use that $g_Q(T(X,Y),Z)=-d\omega(JX,JY,JZ)$). On the other hand, the new connection satisfies $\widetilde{\nabla}g_Q=0$, since $\nabla$ preserves $g_Q$, and the second formula in \eqref{N'} forces $\widetilde{\nabla}_{\xi}J=0$ for $\xi \in \Gamma(L)$. Moreover, if $X,Y,Z$ lie in $\Gamma(Q)$, a direct computation gives
\begin{equation}\label{domega}
2g_Q((\nabla_{Z}J)X,Y)=d\omega(X,JY,JZ)+d\omega(JX,Y,JZ)
\end{equation}
which implies that $\widetilde{\nabla}$ preserves $J$. This proves the first part of the statement.
We shall now prove the uniqueness. Assume that we have a connection $\widetilde{\nabla}$ preserving $(g_Q,J)$ and satisfying conditions 1 and 2 of the statement. Then we can write
\begin{equation}\label{newconnection}
g_Q(\widetilde{\nabla}_{Z}X,Y)=g_Q(\nabla_{Z}X,Y)+\frac12 S(Z,X,Y)
\end{equation}
for a tensor $S$ defined on $TM\times Q\times Q$. Since $\nabla$ and $\widetilde{\nabla}$ both preserve $g_Q$, the tensor $S$ is skew-symmetric in the last two entries, i.e.
$$
S(Z,X,Y)=-S(Z,Y,X)\,.
$$
Item 2 implies $S(\xi,X,Y)=0$ for all $\xi\in\Gamma(L)$. An easy computation involving the torsion of $\widetilde{\nabla}$, formula \eqref{newconnection} and item 1 implies
$$
S(X,Y,Z)=S(Y,Z,X)
$$
for all $X,Y,Z\in\Gamma(Q)$. Using that $\widetilde{\nabla}$ preserves the structure $J$, it is easy to show that the following relation holds
\begin{equation}\label{Sd}
S(Z,JX,Y)+S(Z,X,JY)=-2g_Q((\nabla_Z J)X,Y)
\end{equation}
for $X,Y,Z\in \Gamma(Q)$. By considering the cyclic sum in \eqref{Sd} we get
$$
\mathfrak{S}_{Z,X,Y}S(Z,JX,Y)=-\mathfrak{S}_{Z,X,Y}g_Q((\nabla_Z J)X,Y)=-d\omega(Z,X,Y)
$$
and
$$
\mathfrak{S}_{Z,X,Y}S(JZ,X,JY)=d\omega(JZ,JX,JY).
$$
Now the integrability of $J$ gives
$$
S(X, Y, Z)=S(JX,JY, Z)+S(JX, Y, JZ)+S(X, JY , JZ)
$$
which implies
$$
S(X, Y, Z)=d\omega(JZ,JX,JY)\,,
$$
as required.
\end{proof}
\begin{lemma}\label{anticommute}
Let $(\mathcal{F},g_Q,J_1,J_2,J_3)$ be a transverse hyper-Hermitian foliation and denote by $\partial_k$ the $\partial_B$ operator with respect to $J_k$. If $\tilde{\partial}_2:=-J_2^{-1}\bar \partial_1 J_2$, then
\begin{equation}\label{---}
\partial_1\tilde{\partial}_2=-\tilde{\partial}_2\partial_1\,.
\end{equation}
\end{lemma}
\begin{proof}
Let $g$ be a bundle-like metric on $M$ inducing $g_Q$. We identify $Q$ with the orthogonal complement of $L$ in $TM$. The proof of the statement can then be obtained by standard algebraic computations. It is enough to check \eqref{---} on basic functions and on basic $(1,0)$-forms. We show how things work for functions and omit the proof for $1$-forms.
Let $f$ be a basic function and let $Z,W$ be two smooth sections of $Q^{1,0}$. Then we have
$$
\begin{aligned}
(\partial_1\tilde \partial_2f)(Z,W)&=\,Z(\tilde \partial_2f(W))-W(\tilde \partial_2f(Z))- \tilde \partial_2f([Z,W])\\
&=\,ZJ_2W(f)-WJ_2Z(f)-J_2[Z,W](f)
\end{aligned}
$$
and
$$
(\tilde{\partial}_2\partial_1f)(Z,W)=J_2ZW(f)-J_2WZ(f)+J_2[J_2Z,J_2W](f)\,.
$$
Therefore
$$
\begin{aligned}
(\partial_1\tilde \partial_2+\tilde{\partial}_2\partial_1)(f)(Z,W)=&\,\left(ZJ_2W-WJ_2Z-J_2[Z,W] +J_2ZW-J_2WZ+J_2[J_2Z,J_2W]\right)(f)\\
=&\,J_2N_{J_2}(Z,W)(f)=0\,,
\end{aligned}
$$
as required.
\end{proof}
\begin{lemma}\label{fundpre}
Let $(M,\mathcal{F},g_Q,J_1)$ be a compact manifold with a taut Calabi-Yau foliation of codimension $4n$. Assume that there exists a pair $(J_2,J_3)$ of transverse complex structures such that $(J_1,J_2,J_3)$ is a transverse hyper-complex structure. Then $g_Q$ has transverse holonomy contained in ${\rm Sp}(n)$.
\end{lemma}
\begin{proof}
Let $g_1$ be the transverse Riemannian metric
$$
g_1(\cdot,\cdot)=\frac12 g_Q(\cdot,\cdot)+\frac12 g_Q(J_2\cdot,J_2\cdot)\,.
$$
This metric is compatible with each $J_k$. For each $k=1,2,3,$ we denote by $F_k$ the fundamental form of $(\mathcal{F},g_1, J_k)$ and let
$$
\Omega_1:=\frac12\left(F_2+i F_3\right).
$$
The basic form $\Omega_1$ is of type $(2,0)$ with respect to $J_1$ and satisfies
$$
\Omega_1^{n}\neq 0\,.
$$
First of all we show that $\Omega_1$ is $\partial_1$-closed, where $\partial_1$ denotes the $\partial_B$-operator with respect to $J_1$. Let $\Omega_2$ be the $(2,0)$-component of $\omega$ with respect to $J_2$. The form $\Omega_2$ is basic and from the closedness of $\omega$ we get
$$
\partial_2\Omega_2=0,
$$
where $\partial_2$ denotes the $\partial_B$-operator computed with respect to $J_2$. On the other hand, an easy computation yields
$$
\Omega_2=\frac{1}{2}(F_1-i F_3)\,.
$$
Taking into account the formula
\begin{equation*}
\partial_k\gamma=\frac{1}{2}\big(d+(-1)^r\, i\, J_k\, d\, J_k\big)\gamma
\end{equation*}
holding for every complex basic form $\gamma$ of degree $r$ (see e.g. \cite{GP} for the non-foliated case), we deduce that the condition $\partial_2\Omega_2=0$ forces
$$
J_1\,dF_1=J_3\,dF_3\,.
$$
Let $\tilde \nabla^k$ be the connection with skew-symmetric torsion induced by $(g_1,J_k)$ as in lemma \ref{ivanov}.
Formula \eqref{N'} implies that all the $\tilde \nabla^k$'s have the same torsion; hence we find $\tilde\nabla^1=\tilde\nabla^3$, and the condition $J_2=J_3J_1$ implies $\tilde\nabla^1J_2=0$. Therefore
$$
\tilde\nabla^1=\tilde\nabla^2=\tilde\nabla^3
$$
which gives $J_2dF_2=J_3 dF_3$, and thus we obtain that $\partial_1 \Omega_1=0.$ Hence $\Omega_1$ satisfies
$$
\Omega_1^n\neq 0\,,\quad \partial_1\Omega_1=0\,.
$$
\noindent Since $M$ is compact and $\mathcal{F}$ is taut, we can use the transverse Hodge theory to write
$$
\Omega_1=\Omega+\partial_1 \alpha,
$$
where $\Omega$ is the $g_Q$-basic harmonic component of $\Omega_1$. Since $\Omega$ is transversally holomorphic, theorem \ref{SUn} implies that $\nabla\Omega=0$. In particular the norm of $\Omega^n$ is constant. In order to finish the proof, we have to show that $\Omega^n_p\neq 0$ for every $p\in M$. Assume by contradiction that $\Omega_p^n=0$ at a point $p\in M$ and let $\psi=\Omega_1^n$. Since the norm of $\Omega^n$ is constant, $\Omega^n\equiv 0$ and
$$
\psi=\partial_1 \beta
$$
for a basic form $\beta$. The last step consists in showing that $\psi$ cannot be $\partial_1$-exact. We will obtain this result by adapting the last step in the proof of the main theorem in \cite{verb} to our case.
Taking into account the isomorphism $\Lambda_B^{2n,1}(M)\cong\Lambda_B^{2n,0}(M)\wedge \Lambda_B^{0,1}(M)$, there exists a basic $(1,0)$-form $\theta$ which is $\partial_1$-closed such that
\begin{equation}\label{verb}
\partial_1 \bar \psi=\theta \wedge \bar \psi\,.
\end{equation}
Let us consider the complex
\begin{equation} \label{como}
\Lambda^{0,0}_B(M)\xrightarrow{\partial_1+\frac12 \theta}\Lambda^{1,0}_B(M)\xrightarrow{\partial_1+ \frac12\theta}\Lambda^{2,0}_B(M)\xrightarrow{\partial_1+\frac12\theta}\cdots
\end{equation}
where
$$
\left(\partial_1+\frac12 \theta\right)\alpha=\partial_1\alpha+\frac12\theta\wedge\alpha\,.
$$
Since $\theta$ is $\partial_1$-closed, we have $\left(\partial_1+\frac12\theta\right)^2=0$. In view of \cite[thm. 2.8.7]{EKA} the cohomology of the complex \eqref{como} is finite-dimensional and its cohomology groups can be identified with the kernel of the Laplacian operator associated with $D_1=\partial_1+\frac12\theta$. Now, following the approach of \cite{verb2}, we observe that the operator
$$
L\colon \Lambda^{*,0}_B(M)\to \Lambda^{*+2,0}_B(M)
$$
defined as $\beta\mapsto \beta\wedge \Omega_1$ preserves the kernel of $D_1D_1^*+D_1^*D_1$, where
$$
D_1^*(\beta):=\partial_1^*\beta+\frac12 *_B(\bar \theta\wedge*_B\beta)=-*_B\bar\partial_1(*_B\beta)+\frac12 \bar\theta\lrcorner\,\beta
$$
is the formal adjoint to $D_1$ with respect to the scalar product \eqref{scalarbasic} induced by $g_1$.
This basically comes from the following three identities, which will be proved afterwards:
\begin{eqnarray}
\label{VV3}&& LD_1-D_1L=0\,,\\
\label{VV1}&& D_1D_2+ D_2D_1=0\,,\\
\label{VV2}&& L D_1^*- D_1^* L=\,D_2\,,
\end{eqnarray}
where
$$
D_2(\beta)=-J_2^{-1}\bar \partial_1J_2(\beta)+\frac12 J_2(\bar \theta)\wedge \beta=\tilde\partial_2(\beta)+\frac12 J_2(\bar \theta)\wedge \beta\,.
$$
Indeed, assuming that equalities \eqref{VV3}, \eqref{VV1} and \eqref{VV2} hold, we write
$$
\begin{aligned}
L(D_1D_1^*+D_1^*D_1)=&\,L D_1D_1^*+LD_1^*D_1=D_1 L D_1^*+D_1^*LD_1+D_2D_1\\
=&\,D_1 D_1^*L+D_1D_2+D_1^*D_1L+D_2D_1=(D_1D_1^*+D_1^*D_1)L\,,
\end{aligned}
$$
as required. In particular the map $[f]\mapsto [f\psi]$ induces an isomorphism
$$
\ker\, D_1\cap \Lambda^{0,0}_B(M)\to \frac{\Lambda_B^{2n,0}(M)}{D_1(\Lambda^{2n-1,0}_B(M))}.
$$
Assume by contradiction that $\psi=\partial_1\beta$. We can certainly find a nowhere vanishing closed basic $(2n,0)$-form $\eta$ such that
$$
\bar \psi=g\bar \eta
$$
for a nowhere vanishing basic function $g$. Then
$$
\theta \wedge \bar \psi=\frac{1}{g}\partial_1(g)\wedge \bar \psi\,,
$$
i.e. $\partial_1g=g\theta$. Let $f=g^{-\frac{1}{2}}$; then $f$ gives a non-trivial cohomology class in $\ker D_1\cap \Lambda^{0,0}_B(M)$. Moreover, $[f\psi]=0$, since
$$
D_1(f\beta)=f\psi\,.
$$
Hence $\psi$ cannot be $\partial_1$-exact and the claim follows.
In order to finish the proof it remains to show that formulas \eqref{VV3}, \eqref{VV1}, \eqref{VV2} hold. For the first one, we simply have
$$
D_1L\gamma= \partial_1(\gamma\wedge \Omega_1)+ \frac12 \theta\wedge \gamma\wedge \Omega_1 = \partial_1 \gamma\wedge\Omega_1+\left(\frac12 \theta\wedge \gamma \right)\wedge \Omega_1=LD_1\gamma
$$
for every $\gamma\in \Lambda^{k,0}_B(M)$. The proof of the other two formulas is a bit more involved and requires lemma \ref{anticommute} together with some linear-algebra computations carried out as in \cite{verb}. First of all, it is easy to show that
$$
J_2\bar\psi=\psi\,.
$$
Therefore $\bar \partial_1(J_2\bar \psi)=\bar \theta \wedge \psi$, which implies the useful relation
$$
\tilde \partial_2 \bar\psi= J_2\bar \theta\wedge \bar\psi\,,
$$
where $\tilde \partial_2$ is defined in lemma \ref{anticommute}. Now applying \eqref{---} to $\bar\psi$ we easily get formula \eqref{VV1}. Formula \eqref{VV2} is equivalent to
\begin{equation}
\big(L D_1^*- D_1^* L\big)\beta-\frac12\,J_2(\bar\theta)\wedge \beta=\tilde \partial_2(\beta),
\end{equation}
which it suffices to check on smooth basic functions and on $\tilde \partial_2$-closed $(1,0)$-forms.
Let $f$ be a smooth basic function; then
$$
LD_1^*f=0
$$
and
$$
\begin{aligned}
-D_1^*Lf-\frac12 fJ_2\bar\theta=&\,*_B(\bar \partial_1(*_Bf\Omega_1))-\frac12 f\bar\theta\lrcorner\, \Omega_1-\frac12 fJ_2\bar\theta\\
=&\,*_B\left( \bar \partial_1(f)\wedge (*_B\Omega_1)+ f \bar \theta\wedge *_B\Omega_1\right)-fJ_2\bar\theta=J_2(\bar\partial_1(f)) =\,\tilde\partial_2f
\end{aligned}
$$
where we have used that $*_B\Omega_1=n(\frac{1}{n!})^2\Omega_1^n\wedge\bar\Omega_1^{n-1}$ and the natural identity
$$
*_B(\bar Z \wedge*_B\Omega_1)=\bar Z \lrcorner\, \Omega_1=J_2\bar Z.
$$
Now we prove \eqref{VV2} for a basic $\tilde\partial_2$-closed $(1,0)$-form $\alpha$. We have
$$
LD_1^*\alpha= \left(\partial_1^{*} \alpha\right)\,\Omega_1+\frac12 *_B(\bar \theta\wedge *_B\alpha)\,\Omega_1=\left(\partial_1^{*} \alpha\right)\Omega_1+\frac12 g_1(\bar\theta,\alpha)\Omega_1\,,
$$
$$
-D_1^*L\alpha=-\partial_1^{*} (\alpha\wedge \Omega_1)-\frac12*_B\left( \bar \theta\wedge *_B(\alpha\wedge \Omega_1)\right)=-\partial_1^{*} (\alpha\wedge \Omega_1)-\frac12\bar\theta\lrcorner\,(\alpha\wedge\Omega_1)\,,
$$
$$
D_2\alpha=\frac12 J_2(\bar \theta)\wedge \alpha.
$$
Now, taking into account that $\alpha$ is $\tilde\partial_2$-closed, we also have
$$
\begin{cases}
\left(\partial_1^{*} \alpha\right)\,\Omega_1=-*_B(\bar \theta\wedge *_B\alpha)\,\Omega_1\,,\\
\partial_1^{*} (\alpha\wedge \Omega_1)=-*_B\left( \bar \theta\wedge *_B(\alpha\wedge \Omega_1)\right)
\end{cases}
$$
and therefore
$$
\begin{aligned}
\left(L D_1^*- D_1^* L\right)\alpha=&\,-\frac12 *_B(\bar \theta\wedge *_B\alpha)\,\Omega_1+\frac12 *_B\left( \bar \theta\wedge *_B(\alpha\wedge \Omega_1)\right) \\
=&\,\frac12 \bar\theta\lrcorner\,(\alpha\wedge\Omega_1)-\frac12(\bar\theta\lrcorner\,\alpha)\Omega_1=-\frac12 \alpha\wedge(\bar\theta\lrcorner\,\Omega_1)=\frac12 J_2(\bar\theta)\wedge\alpha,
\end{aligned}
$$
i.e.
$$
\left(L D_1^*- D_1^* L\right)\alpha=D_2 \alpha\,,
$$
as required.
\end{proof}
Now we are ready to prove theorem \ref{mainluigi}.
\begin{proof}[Proof of theorem $\ref{mainluigi}$]
Assume that there exists a pair $(J_2,J_3)$ of transverse complex structures as in the statement. Let
\begin{equation*}
\Omega(\cdot,\cdot):=\frac12\left(g_Q(J_2\cdot,\cdot)+ig_Q(J_3\cdot,\cdot)\right).
\end{equation*}
This form can be regarded as a basic form of type $(2,0)$ with respect to $J_1$; therefore $\psi:=\Omega^{n}$ is a nowhere vanishing {\em basic} $(2n,0)$-form and remark \ref{remarkCY} implies that the first basic Chern class of $(M,\mathcal{F},J_1)$ vanishes.
Since $M$ is simply-connected, the foliation is taut and theorem \ref{transvcalabiyau} ensures the existence of a transverse metric $g'_Q$ whose transverse Levi-Civita connection $\nabla'$ has holonomy contained in ${\rm SU}(2n)$. Hence $(M,\mathcal{F},g_Q',J_1,J_2,J_3)$ satisfies the hypotheses of lemma \ref{fundpre} and $g'_Q$ has transverse holonomy group contained in ${\rm Sp}(n)$.
\end{proof}
We remark that in the non-foliated case the statement of theorem \ref{mainluigi} holds without assuming that $M$ is simply-connected (see \cite{verb}). Indeed, every compact K\"ahler manifold with vanishing first Chern class can be covered by a K\"ahler manifold with holomorphically trivial canonical bundle, and this fact allows one to drop the simply-connectedness assumption. Unfortunately, it seems that a similar construction cannot be performed in the foliated case.

Examples of hyper-K\"ahler foliations are provided by submersions onto hyper-K\"ahler manifolds. Other examples are given in the next section in the setting of Sasakian manifolds. Moreover, a naive way of constructing hyper-K\"ahler foliations consists in generalizing proposition \ref{coisotropic} to the hyper-K\"ahler case. Indeed, let $(M,g,J_1,J_2,J_3)$ be a hyper-K\"ahler manifold with induced fundamental forms $(\omega_1,\omega_2,\omega_3)$ and let $i\colon N\hookrightarrow M$ be a submanifold. Assume that $N$ is coisotropic with respect to both $\omega_1$ and $\omega_2$ and
$$
(TN)^{\omega_1}=(TN)^{\omega_2}\,.
$$
In this case $\mathcal{F}_p:=(T_pN)^{\omega_1}$ induces a locally trivial hyper-K\"ahler foliation on $N$.

\section{Sasakian manifolds with special transverse holonomy}
In this section, we adapt the results of the previous sections to the case of Sasakian manifolds with special transverse holonomy. In addition to the previous part of the paper, we also consider the case of a foliated manifold with trivial transverse holonomy group.
\subsection{Riemannian foliations with ${\rm Hol}(\nabla)=0$}
We have the following result:
\begin{theorem}
Let $(M,\mathcal{F},g_Q)$ be a compact manifold endowed with a Riemannian foliation of codimension $q$. The transverse holonomy group is trivial if and only if $M$ is the total space of a fibration over the flat torus $\mathbb{T}^q.$
\end{theorem}
\begin{proof}
Since the transverse holonomy group is trivial, there exists a global parallel orthonormal frame $\{s_i\}_{i=1,\cdots,q}$ of sections of $Q$. Thus the $1$-forms $\omega_i$ dual to $s_i$, regarded as $1$-forms on $M$, are $d$-closed and linearly independent at each point. This means that the foliation $\mathcal{F}$ is an $\R^q$-Lie foliation and hence, in view of \cite[p.154]{Go}, $M$ is the total space of a fibration over the flat torus.
\end{proof}
Now we consider Sasakian manifolds with trivial transverse holonomy. The key example is the following:\\

\noindent Let $\mathfrak{h}_{2n+1}$ be the $(2n+1)$-dimensional Heisenberg Lie algebra whose structure equations are given by the choice of a cobasis $\{e^i\}$ satisfying
$$
\begin{cases}
& de^k=0\,,\quad k=1,\dots, 2n\,,\\
& de^{2n+1}=e^1\wedge e^2+e^{3}\wedge e^4+\dots+ e^{2n-1}\wedge e^{2n}\,.
\end{cases}
$$
(Shortly, $\mathfrak{h}_{2n+1}$ has structure equations $(0,\dots,0,12+34+\dots+(2n-1)(2n))$.)
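For instance, in the lowest-dimensional case $n=1$ the structure equations above reduce to
$$
de^1=de^2=0\,,\qquad de^3=e^1\wedge e^2\,,
$$
so that $\eta=e^3$ satisfies $\eta\wedge d\eta=e^3\wedge e^1\wedge e^2\neq 0$ and $e_3$ is the corresponding Reeb vector field; the transverse geometry is that of the flat K\"ahler plane, which is consistent with the triviality of the transverse holonomy recalled below.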
The simply-connected Lie group ${\rm H}$ associated to $\mathfrak{h}_{2n+1}$ has the natural invariant Sasakian structure
$$
\xi=e_{2n+1}\,,\quad \eta=e^{2n+1}\,,\quad g=\sum e^k\otimes e^k\,,\quad \Phi=e^1\otimes e_2-e^2\otimes e_1+\dots +e^{2n-1}\otimes e_{2n}- e^{2n}\otimes e_{2n-1},
$$
where $\{e_i\}$ is the basis dual to $\{e^i\}$. It is standard to check that such a Sasakian structure satisfies ${\rm Hol}(\nabla)=0$. Hence if $\Gamma\subseteq {\rm H}$ is a co-compact lattice, the compact manifold $M=\Gamma \backslash {\rm H}$ inherits a natural $\nabla$-flat Sasakian structure. The next result says that every $\nabla$-flat compact Sasakian manifold is of the form $M=\Gamma\backslash {\rm H}$, for some lattice $\Gamma$.
\begin{theorem}\label{1}
Let $(M,\xi,\eta,\Phi,g)$ be a compact Sasakian manifold. Then the holonomy group of $\nabla$ is trivial if and only if $M$ is a compact quotient of the odd-dimensional Heisenberg Lie group ${\rm H}$ by a lattice and $(\xi,\eta,\Phi,g)$ lifts to an invariant Sasakian structure on ${\rm H}$.
\end{theorem}
\begin{proof}
Let $(M,\xi,\eta,\Phi,g)$ be a Sasakian manifold. The holonomy group of $\nabla$ is trivial if and only if there exists a global transverse unitary frame $\{Z_r\}$ satisfying
$$
\nabla_{\ov{Z}_r}Z_k=\nabla_{Z_r}Z_k=\nabla_{\xi}Z_k=\nabla_{\xi}\ov{Z}_k=0\,,\quad r,k=1,\dots,n\,.
$$
The conditions $\nabla_{Z_r}Z_k=\nabla_{Z_r}{\ov Z}_k=0$ say that
$$
\nabla_{Z_r}^gZ_k=0\,,\quad \nabla_{\ov{Z}_r}^gZ_k=-i\delta_{rk}\,\xi
$$
while the condition $\nabla_{\xi}Z_k=0$ can be rewritten in terms of brackets as
$$
[Z_k,\xi]=0\,,\quad k=1,\dots,n\,.
$$
It follows that there exists a frame $\{X_i\}$ on $M$ such that $[X_i,X_j]=\sum \lambda_{ij}^k\, X_k$ for some constants $\lambda_{ij}^k$. In view of \cite{palais}, $M$ can be written as a quotient of a simply-connected nilpotent Lie group $N$ by a lattice. The vector fields $\{Z_i,\xi\}$ lift to invariant vector fields on $N$. Let $\{\zeta^i,\eta\}$ be the frame dual to $\{Z_i,\xi\}$; then
$$
d\zeta^i=0\,,\quad d\eta=i\,\sum_k \zeta^{k}\wedge \ov{\zeta}^k
$$
and $N$ is the Heisenberg Lie group ${\rm H}$, as required.
\end{proof}
\begin{rem}{\em Notice that, from the point of view of transverse geometry, manifolds associated to the Heisenberg group play in Sasakian geometry the role that complex tori play in K\"ahler geometry.}
\end{rem}
\subsection{Sasakian manifolds with ${\rm Hol}(\nabla)\subseteq{\rm SU}(n)$}\label{sakSu(n)}
The geometry of Sasakian manifolds satisfying ${\rm Hol}(\nabla)\subseteq {\rm SU}(n)$ was studied in \cite{TV}, where these manifolds were named {\em contact Calabi-Yau}. The main result of \cite{TV} is a generalization of the McLean theorem (see \cite{Maclean}) to the Sasakian context, where the role of Lagrangian submanifolds is played by certain special Legendrian immersions. Indeed, a {\em contact Calabi-Yau} manifold can be defined as a $(2n+1)$-dimensional Sasakian manifold $(M,\xi,\eta,\Phi,g)$ with an additional structure given by a basic closed transverse complex volume form $\epsilon$.
In analogy to the classical Calabi-Yau case, the real part of $\epsilon$ is a calibration on $M$ (see \cite{HL}) whose calibrated submanifolds are given by $n$-dimensional smooth embeddings $i\colon N\hookrightarrow M$ satisfying
$$
\begin{cases}
i^*(\eta)=0\\
i^*(\Im\mathfrak{m}\,\epsilon)=0\,.
\end{cases}
$$
The condition $i^*(\eta)=0$ says that $N$ is a {\em Legendrian} submanifold and for this reason such submanifolds were named {\em special Legendrian} in \cite{TV}. The main result of \cite{TV} is the following:
\begin{theorem}[Tomassini-Vezzoni \cite{TV}]
The moduli space of compact special Legendrian submanifolds isotopic to a fixed one is always a $1$-dimensional smooth manifold.
\end{theorem}
\noindent Some further results about special Legendrian submanifolds with boundary are pointed out in \cite{lucking}.
In this section, we observe that links provide examples of simply-connected compact contact Calabi-Yau manifolds. In order to show this, we consider the following two results arising from section \ref{sectionSU(n)} and from the Sasakian version of the El Kacimi theorem (see \cite{BGN}):
\begin{prop}\label{SU(n)Sak}
Let $(M,\xi,\eta,\Phi,g)$ be a compact simply-connected null Sasaki $\eta$-Einstein manifold. Then ${\rm Hol}(\nabla)$ is contained in ${\rm SU}(n)$.
\end{prop}
Here we recall that in Sasakian geometry {\em null $\eta$-Einstein} means that the transverse Ricci tensor vanishes.
\begin{prop}\label{imp}
Let $(M^{2n+1},\xi,\eta,\Phi,g)$ be a compact simply-connected Sasaki manifold with $c_B^1(\mathcal{F})=0$. Then there exists a basic $1$-form $\zeta$ on $(M,\xi)$ and a unique Sasakian structure $(M,\xi',\eta',\Phi',g')$ such that
$$
\xi'=\xi\,,\quad \eta'=\eta+\zeta\,,\quad \Phi'=\Phi-\zeta\otimes\xi\circ\Phi\,,\quad g'=d\eta'\circ({\rm Id}\otimes \Phi')+\eta'\otimes \eta'
$$
and the transverse holonomy group of the metric $g'$ is contained in ${\rm SU}(n).$
\end{prop}
It is known that links provide examples of simply-connected null Sasakian $\eta$-Einstein manifolds. More precisely, given a link $L_f=C_{f}\cap S^{2n-1}$, where $f=(f_1,\dots, f_p)$ are independent weighted homogeneous polynomials of degrees $(d_1,\dots,d_p)$ and weights $(w_1,\dots, w_p)$, the link $L_f$ is $(n-p-1)$-connected (see \cite{L}) and inherits a natural $\eta$-Einstein Sasakian structure $(\xi,\eta,\Phi,g)$ induced by the weighted Sasakian structure of the sphere. Since $L_f$ is null whenever $\sum (d_i-w_i)=0$ (see e.g. \cite{BGlibro}), by making use of proposition \ref{imp} we infer
\begin{prop}\label{links}
Let $L_f$ be a link, where $f=(f_1,\dots, f_p)$ are independent weighted homogeneous polynomials of degrees $(d_1,\dots,d_p)$ and weights $(w_1,\dots, w_p)$, and assume $\sum (d_i-w_i)=0$. Then $L_f$ carries a contact Calabi-Yau structure.
\end{prop}
\subsubsection{A link with $G_2$-geometry}
Let $(M,\xi,\eta,\Phi,g,\epsilon)$ be a $7$-dimensional {\em contact Calabi-Yau manifold} and consider the $3$-form
$$
\sigma=\eta\wedge d\eta+\Re\mathfrak{e}\,\epsilon.
$$
Then $\sigma$ induces a $G_2$-structure on $M$.
Since
$$
d*\sigma= 0,
$$
this induced $G_2$-structure is always {\em co-calibrated}. A similar construction can be carried out for $7$-dimensional $3$-Sasakian manifolds (see \cite{AF}). Therefore, proposition \ref{links} readily implies
\begin{cor}
Let $L_f$ be a $7$-dimensional link as in proposition $\ref{links}$. Then $L_f$ has a co-calibrated $G_2$-structure.
\end{cor}
\subsection{Sasakian manifolds with ${\rm Hol}(\nabla)\subseteq{\rm Sp}(n)$}
In this subsection we translate theorem \ref{mainluigi} to the context of Sasakian manifolds. The condition ${\rm Hol}(\nabla)\subseteq {\rm Sp}(n)$ for a Sasakian manifold $(M,\xi,\eta,\Phi_1,g)$ is equivalent to requiring the existence of a pair $\Phi_2,\Phi_3\in {\rm End}(TM)$ such that
\begin{equation}\label{sas}
(\xi,\eta,\Phi_k,g) \mbox{ is a Sasakian structure for } k=2,3
\end{equation}
and the transverse quaternionic relations
\begin{equation}\label{quater}
\Phi_1\Phi_2=-\Phi_2\Phi_1\,,\quad \Phi_1\Phi_2=\Phi_3
\end{equation}
are satisfied. Conditions \eqref{sas} can be alternatively rewritten as
\begin{eqnarray}
&& \label{3} \Phi_k^2=-{\rm I}+\eta\otimes \xi\\
&& \label{4} N_{\Phi_k}=0\\
&& g(\Phi_k \cdot,\Phi_k \cdot)=g(\cdot,\cdot)-\eta(\cdot)\eta(\cdot)
\end{eqnarray}
where $N_{\Phi_k}$ is the Nijenhuis tensor
$$
N_{\Phi_k}(X,Y)=[\Phi_kX,\Phi_kY]-\Phi_k[\Phi_kX,Y]-\Phi_k[X,\Phi_kY]+\Phi_k^2[X,Y]\,.
$$
The following result is the generalization of theorem \ref{mainluigi} to the Sasakian context:
\begin{theorem}\label{sasaki}
Let $(M,\xi,\eta,\Phi_1,g)$ be a compact simply-connected $(4n+1)$-dimensional Sasakian manifold. Assume that there exists a pair $\{J_2,J_3\}$ of transverse complex structures inducing, together with $(g,\Phi_1)$, a transverse hyper-Hermitian structure. Then there exists a Sasakian structure $(\xi,\eta',\Phi',g')$ on $M$ having transverse holonomy contained in ${\rm Sp}(n)$.
\end{theorem}
\begin{proof}
The existence of $\{J_2,J_3\}$ implies that the first basic Chern class of $(M,\xi,\eta,\Phi_1,g)$ is zero. Then, in view of proposition \ref{imp}, there exists a Sasakian structure $\mathcal{S}'=(\xi,\eta',\Phi',g')$ on $M$ with
$$
\eta'=\eta+\zeta\,,\quad \Phi'=\Phi_1-\zeta\otimes\xi\circ\Phi_1\,,\quad g'=d\eta'\circ({\rm Id}\otimes \Phi')+\eta'\otimes \eta'
$$
having transverse holonomy contained in ${\rm SU}(2n)$. Lemma \ref{fundpre} implies that the transverse Levi-Civita connection of $g'$ has holonomy contained in ${\rm Sp}(n)$, as required.
\end{proof}
Note that, since ${\rm Sp}(1)={\rm SU}(2)$, proposition \ref{imp} implies that in dimension $5$ every compact simply-connected Sasakian manifold satisfying ${\rm Hol}(\nabla)\subseteq {\rm Sp}(1)$ is in fact a null Sasaki $\eta$-Einstein manifold. These kinds of manifolds are classified in \cite{cuadros}, where it is shown that a $5$-dimensional simply-connected compact manifold admits a null Sasaki $\eta$-Einstein structure if and only if it is obtained as a connected sum of $k$ copies of $S^2\times S^3$, where $k=3,\dots,9$.

\begin{thebibliography}{12}

\bibitem{AF} Agricola I., Friedrich T.: 3-Sasakian manifolds in dimension seven, their spinors and $G_2$-structures. {\em J. Geom. Phys.} {\bf 60} (2010), no. 2, 326--332.
\bibitem{Audin} Audin M. and Lafontaine J. (editors): {\em Holomorphic Curves in Symplectic Geometry}, vol. {\bf 117} of Progress in Mathematics, Birkh\"auser, 1994.
\bibitem{Bogo} Bogomolov F.A.: Hamiltonian K\"ahlerian manifolds, {\em Dokl. Akad. Nauk SSSR} {\bf 243} (1978), 1101--1104.
\bibitem{einstein1} Boyer C. P., Galicki K.: On Sasakian-Einstein geometry. {\em Internat. J. Math.} {\bf 11} (2000), no. 7, 873--909.
\bibitem{BGlibro} Boyer C. P., Galicki K.: {\em Sasakian geometry}. Oxford Mathematical Monographs. Oxford University Press, Oxford, 2008. xii+613 pp.
\bibitem{4} Boyer C. P., Galicki K., Koll\'ar J.: Einstein metrics on spheres. {\em Ann. of Math.} {\bf 162} (2005), no. 1, 557--580.
\bibitem{BGM} Boyer C. P., Galicki K., Matzeu P.: On Eta-Einstein Sasakian Geometry. {\em Commun. Math. Phys.} {\bf 262} (2006), 177--208.
\bibitem{BGN} Boyer C. P., Galicki K., Nakamaye M.: On the geometry of Sasakian-Einstein 5-manifolds. {\em Math. Ann.} {\bf 325} (2003), 485--524.
\bibitem{Ca} Carri\`ere Y.: Flots riemanniens. Journ\'ees sur les structures transverses des feuilletages, Toulouse, {\em Ast\'erisque} {\bf 116} (1984).
\bibitem{cuadros} Cuadros J.: Null Sasaki eta-Einstein Structures in Five Manifolds. {\tt arXiv:0909.4581}.
\bibitem{dragomir} Dragomir S., Ornea L.: {\em Locally conformal K\"ahler geometry.} Progress in Mathematics, {\bf 155}. Birkh\"auser Boston, Inc., Boston, MA, 1998. xiv+327 pp.
\bibitem{domin} Dom\'inguez D.: Finiteness and tenseness theorems for Riemannian foliations, {\em Amer. J. Math.} {\bf 120} (1998), 1237--1276.
\bibitem{EKA} El Kacimi Alaoui A.: Op\'erateurs transversalement elliptiques sur un feuilletage riemannien et applications. {\em Compositio Math.} {\bf 73} (1990), 57--106.
\bibitem{EKA2} El Kacimi Alaoui A.: Towards a basic index theory. {\em Dirac operators: yesterday and today}, 251--261, Int. Press, Somerville, MA, 2005.
\bibitem{EK-Hc2} El Kacimi Alaoui A., Hector G.: D\'ecomposition de Hodge sur l'espace des feuilles d'un feuilletage riemannien, {\em Ann. Inst. Fourier (Grenoble)} {\bf 36} (1986), 207--227.
\bibitem{EHS} El Kacimi Alaoui A., Hector G., Sergiescu V.: La cohomologie basique d'un feuilletage riemannien est de dimension finie, {\em Math. Z.} {\bf 188} (1985), 593--599.
\bibitem{F} Friedrich Th.: $G_2$-manifolds with parallel characteristic torsion, {\em J. Diff. Geom. Appl.} {\bf 25} (2007), 632--648.
\bibitem{FKM} Friedrich Th., Kath I., Moroianu A., Semmelmann U.: On nearly parallel $G_2$-structures, {\em J. Geom. Phys.} {\bf 23} (1997), 256--286.
\bibitem{FI} Friedrich Th., Ivanov S.: Parallel spinors and connections with skew symmetric torsion in string theory. {\em Asian Journ. Math.} {\bf 6} (2002), 303--336.
\bibitem{Go} Godbillon C.: {\em Feuilletages, \'etudes g\'eom\'etriques.} Basel, Birkh\"auser, 1991.
\bibitem{GP} Grantcharov G., Poon Y.-S.: Geometry of hyper-K\"ahler connection with torsion. {\em Commun. Math. Phys.} {\bf 213} (2000), 19--37.
\bibitem{HabRi} Habib G., Richardson K.: Modified differentials and basic cohomology for Riemannian foliations. To appear in {\em J. Geom. Anal.}
\bibitem{HL} Harvey R., Lawson H.B., Jr.: Calibrated geometries. {\em Acta Math.} {\bf 148} (1982), 47--157.
\bibitem{HO} Oh Y.-G., Park J.-S.: Deformations of coisotropic submanifolds and strong homotopy Lie algebroids. {\em Invent. Math.} {\bf 161} (2005), no. 2, 287--360.
\bibitem{jung} Jung S.D.: Transversal infinitesimal automorphisms on K\"ahler foliations. {\tt arXiv:1106.0358v1}.
\bibitem{Kamber} Kamber F. W., Tondeur Ph.: {\em Foliations and metrics.} Differential geometry (College Park, Md., 1981/1982), 103--152, Progr. Math., 32, Birkh\"auser Boston, Boston, MA, 1983.
\bibitem{K-To15} Kamber F. W., Tondeur Ph.: Foliations and harmonic forms, Harmonic mappings, twistors and $\sigma$-models. {\em Adv. Ser. Math. Phys.} {\bf 4}, World Sci. Publishing, Singapore (1988), 15--25.
\bibitem{K} Kapustin A., Orlov D.: Remarks on A-branes, mirror symmetry, and the Fukaya category. {\em J. Geom. Phys.} {\bf 48} (2003), no. 1, 84--99.
\bibitem{L} Looijenga E.J.N.: {\em Isolated Singular Points of Complete Intersections.} London Mathematical Society Lecture Note Series, vol. 77, Cambridge University Press, Cambridge (1984).
\bibitem{lucking} Lu G., Chen X.: Deformations of Special Legendrian Submanifolds with Boundary. {\tt arXiv:1106.5086}.
\bibitem{Maclean} McLean R. L.: Deformations of Calibrated Geometries. {\em Comm. Anal. Geom.} {\bf 6} (1998), 705--747.
\bibitem{Ma} Masa X.: Duality and minimality in Riemannian foliations. {\em Comment. Math. Helv.} {\bf 67} (1992), 17--27.
\bibitem{molino} Molino P.: {\em Riemannian foliations.} Progress in Mathematics, {\bf 73}. Birkh\"auser, Boston, 1988.
\bibitem{moro1} Moriyama T.: Deformations of transverse Calabi-Yau structures on foliated manifolds. {\em Publ. Res. Inst. Math. Sci.} {\bf 46} (2010), no. 2, 335--357.
\bibitem{moro2} Moriyama T.: The moduli space of transverse Calabi-Yau structures on foliated manifolds. {\em Osaka J. Math.} {\bf 48} (2011), no. 2, 383--413.
\bibitem{moroianu} Moroianu A.: {\em Lectures on K\"ahler geometry.} London Mathematical Society Student Texts, {\bf 69}. Cambridge University Press, Cambridge, 2007.
\bibitem{ornea} Ornea L., Verbitsky M.: Oeljeklaus-Toma manifolds admitting no complex subvarieties. {\em Math. Res. Lett.} {\bf 18} (2011), no. 4, 747--754.
\bibitem{palais} Palais R. S.: A global formulation of the Lie theory of transformation groups. {\em Mem. Amer. Math. Soc.} No. 22 (1957).
\bibitem{Rein} Reinhart B.: Foliated manifolds with bundle-like metrics. {\em Ann. Math.} {\bf 69} (1959), 119--132.
\bibitem{Ru} Rummler H.: Quelques notions simples en g\'eom\'etrie riemannienne et leurs applications aux feuilletages compacts. {\em Comment. Math. Helv.} {\bf 54} (1979), no. 2, 224--239.
\bibitem{Sas60} Sasaki S.: On differentiable manifolds with certain structures which are closely related to almost contact structure, I. {\em T\^ohoku Math. J.} (2) {\bf 12} (1960), 459--476.
\bibitem{SH62} Sasaki S., Hatakeyama Y.: On differentiable manifolds with certain structures which are closely related to almost contact structure, II. {\em T\^ohoku Math. J.} (2) {\bf 13} (1961), 281--294.
\bibitem{tian} Tian G.: Smoothness of the universal deformation space of compact Calabi-Yau manifolds and its Petersson-Weil metric. {\em Mathematical aspects of string theory (San Diego, Calif., 1986)}, Adv. Ser. Math. Phys., vol. 1, World Sci. Publishing, Singapore (1987), 629--646.
\bibitem{todorov} Todorov A.N.: The Weil--Petersson geometry of the moduli space of ${\rm SU}(n\geq 3)$ (Calabi-Yau) manifolds I, {\em Comm. Math. Phys.} {\bf 126} (1989), 325--346.
\bibitem{TV} Tomassini A., Vezzoni L.: Contact Calabi-Yau manifolds and Special Legendrian submanifolds. {\em Osaka J. Math.} {\bf 45} (2008), 127--147.
\bibitem{T} Tondeur Ph.: {\em Geometry of Foliations.} Birkh\"auser, Boston, 1997.
\bibitem{verb2} Verbitsky M.: HyperK\"ahler manifolds with torsion, supersymmetry and Hodge theory. {\em Asian J. Math.} {\bf 6} (2002), no. 4, 679--712.
\bibitem{verb} Verbitsky M.: Hypercomplex structures on K\"ahler manifolds. {\em Geom. Funct. Anal.} {\bf 15} (2005), no. 6, 1275--1283.
\end{thebibliography} \end{document}
\begin{document} \title[Generalized degenerate Bernoulli numbers and polynomials]{Generalized degenerate Bernoulli numbers and polynomials arising from Gauss hypergeometric function} \author{Taekyun Kim $^{1,\dagger}$} \address{$^{1}$ Department of Mathematics, Kwangwoon University, Seoul 139-701, Republic of Korea} \email{[email protected]$^{\dagger}$, [email protected]$^{\ddagger}$, [email protected]} \author{DAE SAN KIM $^{2}$*} \address{$^{2}$ Department of Mathematics, Sogang University, Seoul 121-742, Republic of Korea} \email{[email protected]*} \author{Lee-Chae Jang $^{3}$**} \address{$^{3}$ Graduate School of Education, Konkuk University, Seoul 143-701, Republic of Korea} \email{[email protected]**} \author{Hyunseok Lee $^{1,\ddagger}$} \author{Hanyoung Kim $^{1,\dagger\dagger}$} \subjclass[2010]{11B68; 11B73; 11B83; 33C05} \keywords{generalized degenerate Bernoulli numbers; generalized degenerate Bernoulli polynomials; degenerate type Eulerian numbers} \maketitle \begin{abstract} In a previous paper, Rahmani introduced a new family of $p$-Bernoulli numbers and polynomials by means of the Gauss hypergeometric function. Motivated by this paper and as a degenerate version of those numbers and polynomials, we introduce the generalized degenerate Bernoulli numbers and polynomials again by using the Gauss hypergeometric function. In addition, we introduce the degenerate type Eulerian numbers as a degenerate version of Eulerian numbers. For the generalized degenerate Bernoulli numbers, we express them in terms of the degenerate Stirling numbers of the second kind, of the degenerate type Eulerian numbers, of the degenerate $p$-Stirling numbers of the second kind and of an integral on the unit interval. As to the generalized degenerate Bernoulli polynomials, we represent them in terms of the degenerate Stirling polynomials of the second kind. \end{abstract} \section{Introduction} As the first degenerate versions of some special numbers, Carlitz introduced the degenerate Stirling, Bernoulli and Euler numbers in [3]. In recent years, degenerate versions of many special polynomials and numbers have been investigated by means of various different tools including generating functions, combinatorial methods, umbral calculus, $p$-adic analysis, differential equations, special functions, probability theory and analytic number theory. Here we would like to remark that studying degenerate versions of some special polynomials and numbers has yielded many interesting arithmetic and combinatorial results (see [7-13 and references therein]) and has potential to find many applications to diverse areas in science and engineering as well as in mathematics. For example, it was shown in [10,11] that both the degenerate $\lambda$-Stirling polynomials of the second kind and the $r$-truncated degenerate $\lambda$-Stirling polynomials of the second kind appear in the expressions of the probability distributions of appropriate random variables. Also, we would like to emphasize that studying degenerate versions is applied not only to polynomials but also to transcendental functions. Indeed, the degenerate gamma functions were introduced and some interesting results were derived in [9]. \par In [14], Rahmani introduced a new family of $p$-Bernoulli numbers and polynomials by means of the Gauss hypergeometric function which reduce to the classical Bernoulli numbers and polynomials for $p=0$. 
Motivated by that paper and as a degenerate version of those numbers and polynomials, in this paper we introduce the generalized degenerate Bernoulli numbers and polynomials again in terms of the Gauss hypergeometric function which reduce to the Carlitz degenerate Bernoulli numbers and polynomials for $p=0$. In addition, we introduce the degenerate type Eulerian numbers as a degenerate version of Eulerian numbers. The aim of this paper is to study the generalized degenerate Bernoulli numbers and polynomials and to show their connections to other special numbers and polynomials. Among other things, for the generalized degenerate Bernoulli numbers we express them in terms of the degenerate Stirling numbers of the second kind , of the degenerate type Eulerian numbers, of the degenerate $p$-Stirling numbers of the second kind and of an integral on the unit interval. As to the generalized degenerate Bernoulli polynomials, we represent them in terms of the degenerate Stirling polynomials of the second kind. For the rest of this section, we recall the necessary facts that are needed throughout this paper. For any $\lambda\in\mathbb{R}$, the degenerate exponential functions are defined by \begin{equation} e_{\lambda}^{x}(t)=\sum_{n=0}^{\infty}(x)_{n,\lambda}\frac{t^{n}}{n!},\quad e_{\lambda}(t)=e_{\lambda}^{1}(t),\quad(\mathrm{see}\ [6,9]),\label{1} \end{equation} where $(x)_{0,\lambda}=1,\ (x)_{n,\lambda}=x(x-\lambda)\cdots(x-(n-1)\lambda)$, $(n\ge 1)$. Note that $\displaystyle\lim_{\lambda\rightarrow 0}e^{x}_{\lambda}(t)=e^{xt}\displaystyle$. \par Let $\log_{\lambda}(t)$ be the compositional inverse function of $e_{\lambda}(t)$ with $\log_{\lambda}\big(e_{\lambda}(t)\big)=e_{\lambda}\big(\log_{\lambda}(t)\big)=t$. Then we have \begin{equation} \log_{\lambda}(1+t)=\sum_{n=1}^{\infty}\lambda^{n-1}(1)_{n,1/\lambda}\frac{t^{n}}{n!},\quad(\mathrm{see}\ [7]).\label{2} \end{equation} In [7], the degenerate Stirling numbers of the first kind are defined by \begin{equation} (x)_{n}=\sum_{l=0}^{n}S_{1,\lambda}(n,l)(x)_{l,\lambda},\quad(n\ge 0),\label{3} \end{equation} where $(x)_{0}=1,\ (x)_{n}=x(x-1)(x-2)\cdots(x-n+1)$, $(n\ge 1)$. \par As the inversion formula of $\eqref{3}$, the degenerate Stirling numbers of the second kind are defined by \begin{equation} (x)_{n,\lambda}=\sum_{k=0}^{n}S_{2,\lambda}(n,k)(x)_{k},\quad(n\ge 0),\quad(\mathrm{see}\ [7]).\label{4} \end{equation} From \eqref{3} and \eqref{4}, we note that \begin{equation} \frac{1}{k!}\big(\log_{\lambda}(1+t)\big)^{k}=\sum_{n=k}^{\infty}S_{1,\lambda}(n,k)\frac{t^{n}}{n!},\label{5} \end{equation} and \begin{equation} \frac{1}{k!}\big(e_{\lambda}(t)-1\big)^{k}=\sum_{n=k}^{\infty}S_{2,\lambda}(n,k)\frac{t^{n}}{n!},\quad(k\ge 0),\quad(\mathrm{see}\ [7]).\label{6} \end{equation} It is well known that the Gauss hypergeometric function is given by \begin{equation} \pFq{2}{1}{a,b}{c}{x}=\sum_{k=0}^{\infty}\frac{\langle a\rangle_{k}\langle b\rangle_{k}}{\langle c\rangle_{k}}\frac{x^{k}}{k!},\quad(\mathrm{see}\ [1,2,12]),\label{7} \end{equation} where $\langle a\rangle_{0}=1,\ \langle a\rangle_{k}=a(a+1)\cdots(a+k-1),\ (k\ge 1)$. 
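To make the preceding definitions concrete, here is a short computational sketch (added for illustration only; it is not part of the original text). It checks with SymPy that the Taylor coefficients of $e_{\lambda}^{x}(t)=(1+\lambda t)^{x/\lambda}$ are $(x)_{n,\lambda}/n!$ as in \eqref{1}, and that the numbers $S_{2,\lambda}(n,k)$ read off from \eqref{6} satisfy the inversion formula \eqref{4}. The truncation order $N$ and all function names are arbitrary choices of the sketch.
\begin{verbatim}
# Minimal illustrative check of (1), (4) and (6); truncation order N is arbitrary.
import sympy as sp

t, lam, x = sp.symbols('t lambda x')
N = 5

def deg_falling(y, n):
    # (y)_{n,lambda} = y(y - lambda)...(y - (n-1)lambda), with (y)_{0,lambda} = 1
    out = sp.Integer(1)
    for j in range(n):
        out *= (y - j*lam)
    return out

def falling(y, n):
    # ordinary falling factorial (y)_n = y(y-1)...(y-n+1)
    out = sp.Integer(1)
    for j in range(n):
        out *= (y - j)
    return out

# identity (1): n-th Taylor coefficient of e_lambda^x(t) = (1 + lambda t)^{x/lambda}
ser = sp.series((1 + lam*t)**(x/lam), t, 0, N).removeO()
for n in range(N):
    assert sp.expand(ser.coeff(t, n) - deg_falling(x, n)/sp.factorial(n)) == 0

# S_{2,lambda}(n,k) read off from the generating function (6) ...
e_lam = sp.series((1 + lam*t)**(1/lam), t, 0, N + 1).removeO()
def S2(n, k):
    f = sp.expand((e_lam - 1)**k / sp.factorial(k))
    return sp.expand(f.coeff(t, n) * sp.factorial(n))

# ... must satisfy the inversion formula (4): (x)_{n,lambda} = sum_k S2(n,k) (x)_k
for n in range(N):
    rhs = sum(S2(n, k) * falling(x, k) for k in range(n + 1))
    assert sp.expand(deg_falling(x, n) - rhs) == 0

print("identities (1), (4), (6) check out up to order", N)
\end{verbatim}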
\par Pfaff's transformation formula is given by \begin{equation} \pFq{2}{1}{a,b}{c}{x}=(1-x)^{-a}\pFq{2}{1}{a,c-b}{c}{\frac{x}{x-1}},\quad(\mathrm{see}\ [1,2]),\label{8} \end{equation} and Euler's transformation formula is given by \begin{equation} \pFq{2}{1}{a,b}{c}{x}=(1-x)^{c-a-b}\pFq{2}{1}{c-a,c-b}{c}{x},\quad(\mathrm{see}\ [1,2]).\label{9} \end{equation} The Eulerian number $\eulerian{n}{k}$ is the number of permutations of $\{1,2,3,\dots,n\}$ having $k$ ascents. The Eulerian numbers are given explicitly by the finite sum \begin{equation} \eulerian{n}{k}=\sum_{j=0}^{k+1}(-1)^{j}\binom{n+1}{j}(k-j+1)^{n},\quad(n, k \ge 0,\ n \ge k), \label{10} \end{equation} and \begin{equation} \sum_{k=0}^{n}\eulerian{n}{k}=n!,\quad(\mathrm{see}\ [4,5]).\label{11} \end{equation} For $n,m\ge 0$, we have \begin{equation} \eulerian{n}{m}=\sum_{k=0}^{n-m}S_{2}(n,k)\binom{n-k}{m}(-1)^{n-k-m}k!,\quad(\mathrm{see}\ [5]),\label{12} \end{equation} and \begin{equation} x^{n}=\sum_{k=0}^{n}\eulerian{n}{k}\binom{x+k}{n},\quad(\mathrm{see}\ [4,5]).\label{13} \end{equation} Recently, the degenerate Stirling polynomials of the second kind were defined by \begin{equation} \frac{1}{k!}\big(e_{\lambda}(t)-1\big)^{k}e_{\lambda}^{x}(t)=\sum_{n=k}^{\infty}S_{2,\lambda}(n,k|x)\frac{t^{n}}{n!},\quad(k\ge 0),\quad(\mathrm{see}\ [8]).\label{14} \end{equation} Thus, by \eqref{14}, we get \begin{align} S_{2,\lambda}(n,k|x)\ &=\ \sum_{l=k}^{n}\binom{n}{l}S_{2,\lambda}(l,k)(x)_{n-l,\lambda},\quad(\mathrm{see}\ [8]), \label{15} \\ &=\ \sum_{l=0}^{n}\binom{n}{l}S_{2,\lambda}(l,k)(x)_{n-l,\lambda},\quad(n\ge 0).\nonumber \end{align} For $x=0$, $S_{2,\lambda}(n,k)=S_{2,\lambda}(n,k|0)$, $(n,k\ge 0, n\ge k)$, are called the degenerate Stirling numbers of the second kind.\par Carlitz introduced the degenerate Bernoulli polynomials given by \begin{equation} \frac{t}{e_{\lambda}(t)-1}e_{\lambda}^{x}(t)=\sum_{n=0}^{\infty}\beta_{n,\lambda}(x)\frac{t^{n}}{n!},\quad(\mathrm{see}\ [3]). \label{16} \end{equation} When $x=0$, $\beta_{n,\lambda}=\beta_{n,\lambda}(0)$, $(n\ge 0)$, are called the degenerate Bernoulli numbers. \par \section{Generalized degenerate Bernoulli numbers} By \eqref{1} and \eqref{2}, we get \begin{align} \frac{t}{e_{\lambda}(t)-1}\ &=\ \frac{1}{e_{\lambda}(t)-1}\sum_{n=1}^{\infty}\lambda^{n-1}(1)_{n,1/\lambda}\frac{1}{n!}\big(e_{\lambda}(t)-1\big)^{n}\label{17} \\ &=\ \sum_{k=0}^{\infty}\frac{\lambda^{k}(1)_{k+1,1/\lambda}}{k+1}\cdot\frac{1}{k!}\big(e_{\lambda}(t)-1\big)^{k}\nonumber \\ &=\ \sum_{k=0}^{\infty}\frac{\lambda^{k}(1)_{k+1,1/\lambda}}{k+1}\sum_{n=k}^{\infty}S_{2,\lambda}(n,k)\frac{t^{n}}{n!} \nonumber \\ &=\ \sum_{n=0}^{\infty}\bigg(\sum_{k=0}^{n}\frac{\lambda^{k}(1)_{k+1,1/\lambda}}{k+1}S_{2,\lambda}(n,k)\bigg)\frac{t^{n}}{n!}.\nonumber \end{align} Therefore, by \eqref{16} and \eqref{17}, we obtain the following theorem. \begin{theorem} For $n\ge 0$, we have \begin{displaymath} \beta_{n,\lambda}=\sum_{k=0}^{n}\frac{\lambda^{k}(1)_{k+1,1/\lambda}}{k+1}S_{2,\lambda}(n,k).
\end{displaymath} \end{theorem} Replacing $t$ by $\log_{\lambda}(1+t)$ in \eqref{16}, we get \begin{align} \frac{\log_{\lambda}(1+t)}{e_{\lambda}(\log_{\lambda}(1+t))-1}\ &=\ \sum_{k=0}^{\infty}\beta_{k,\lambda}\frac{1}{k!}\big(\log_{\lambda}(1+t)\big)^{k}\label{18} \\ &=\ \sum_{k=0}^{\infty}\beta_{k,\lambda}\sum_{n=k}^{\infty}S_{1,\lambda}(n,k)\frac{t^{n}}{n!} \nonumber \\ &=\ \sum_{n=0}^{\infty}\bigg(\sum_{k=0}^{n}S_{1,\lambda}(n,k)\beta_{k,\lambda}\bigg)\frac{t^{n}}{n!}.\nonumber \end{align} On the other hand, by \eqref{2}, we get \begin{align} \frac{\log_{\lambda}(1+t)}{e_{\lambda}(\log_{\lambda}(1+t))-1}\ &=\ \frac{1}{t}\log_{\lambda}(1+t)\ =\ \frac{1}{t}\sum_{n=1}^{\infty}\lambda^{n-1}(1)_{n,1/\lambda}\frac{t^{n}}{n!} \label{19} \\ &=\ \sum_{n=0}^{\infty}\frac{\lambda^{n}(1)_{n+1,1/\lambda}}{n+1}\frac{t^{n}}{n!}.\nonumber \end{align} Therefore, by \eqref{18} and \eqref{19}, we obtain the following theorem. \begin{theorem} For $n\ge 0$, we have \begin{displaymath} \sum_{k=0}^{n}S_{1,\lambda}(n,k)\beta_{k,\lambda}=\frac{1}{n+1}\lambda^{n}(1)_{n+1,1/\lambda}. \end{displaymath} \end{theorem} From \eqref{16} and \eqref{17}, we note that \begin{align} \sum_{n=0}^{\infty}\beta_{n,\lambda}\frac{t^{n}}{n!}\ &=\ \frac{1}{e_{\lambda}(t)-1}\sum_{n=1}^{\infty}\lambda^{n-1}(1)_{n,1/\lambda}\frac{1}{n!}\big(e_{\lambda}(t)-1\big)^{n}\label{20} \\ &=\ \sum_{n=0}^{\infty}\frac{(-1)^{n}(1)_{n+1,1/\lambda}\lambda^nn!}{(n+1)!}\frac{(1-e_{\lambda}(t))^{n}}{n!}\nonumber \\ &=\ \sum_{n=0}^{\infty}\frac{\langle 1-\lambda\rangle_{n}\langle 1\rangle_{n}}{\langle 2\rangle_{n}}\frac{(1-e_{\lambda}(t))^{n}}{n!}\ \nonumber\\ &=\ \pFq{2}{1}{1-\lambda,1}{2}{1-e_{\lambda}(t)}.\nonumber \end{align} In view of \eqref{20}, we may consider the {\it{generalized degenerate Bernoulli numbers}} given in terms of Gauss hypergeometric function by \begin{equation} \pFq{2}{1}{1-\lambda,1}{p+2}{1-e_{\lambda}(t)}=\sum_{n=0}^{\infty}\beta_{n,\lambda}^{(p)}\frac{t^{n}}{n!},\label{21} \end{equation} where $p\in\mathbb{Z}$ with $p\ge -1$. When $p=0$, $\beta_{n,\lambda}^{(0)}=\beta_{n,\lambda},\ (n\ge 0)$. \par Let us take $p=-1$ in \eqref{21}. Then we have \begin{align} \sum_{n=0}^{\infty}\beta_{n,\lambda}^{(-1)}\frac{t^{n}}{n!}\ &=\ \pFq{2}{1}{1-\lambda,1}{1}{1-e_{\lambda}(t)}\nonumber \\ &=\ \sum_{n=0}^{\infty}\frac{(\lambda-1)_{n}}{n!}\big(e_{\lambda}(t)-1\big)^{n}\ =\ \sum_{n=0}^{\infty}\binom{\lambda-1}{n}\big(e_{\lambda}(t)-1\big)^{n}\label{22}\\ &=\ e_{\lambda}^{\lambda-1}(t)\ =\ \sum_{n=0}^{\infty}(\lambda-1)_{n,\lambda}\frac{t^{n}}{n!}.\nonumber \end{align} By comparing the coefficients on the both sides of \eqref{22}, we get \begin{equation} \beta_{n,\lambda}^{(-1)}=(\lambda-1)_{n,\lambda},\quad(n\ge 0). 
\label{23} \end{equation} From \eqref{21}, we note that \begin{align} \sum_{n=0}^{\infty}\beta_{n,\lambda}^{(p)}\frac{t^{n}}{n!}\ &=\ \pFq{2}{1}{1-\lambda,1}{p+2}{1-e_{\lambda}(t)}\ =\ \sum_{k=0}^{\infty}\frac{\langle 1-\lambda\rangle_{k}\langle 1\rangle_{k}}{\langle p+2\rangle_{k}}\frac{(1-e_{\lambda}(t))^{k}}{k!}\label{24} \\ &=\ (p+1)!\sum_{k=0}^{\infty}\frac{\lambda^{k}(1)_{k+1,1/\lambda}k!}{(p+k+1)!}\frac{1}{k!}\big(e_{\lambda}(t)-1\big)^{k} \nonumber \\ &=\ (p+1)!\sum_{k=0}^{\infty}\frac{\lambda^{k}(1)_{k+1,1/\lambda}k!}{(p+k+1)!}\sum_{n=k}^{\infty}S_{2,\lambda}(n,k)\frac{t^{n}}{n!} \nonumber \\ &=\ \sum_{n=0}^{\infty}\bigg(\sum_{k=0}^{n}\frac{\lambda^{k}(1)_{k+1,1/\lambda}}{\binom{p+k+1}{p+1}}S_{2,\lambda}(n,k)\bigg)\frac{t^{n}}{n!}.\nonumber \end{align} Therefore, by comparing the coefficients on both sides of \eqref{24}, we obtain the following theorem. \begin{theorem} For $n\ge 0$ and $ p\ge -1$, we have \begin{displaymath} \beta_{n,\lambda}^{(p)}= \sum_{k=0}^{n}\frac{\lambda^{k}(1)_{k+1,1/\lambda}}{\binom{p+k+1}{p+1}}S_{2,\lambda}(n,k). \end{displaymath} \end{theorem} From \eqref{6}, we get \begin{align} \sum_{n=k}^{\infty}S_{2,\lambda}(n,k)\frac{t^{n}}{n!}\ &=\ \frac{1}{k!}\big(e_{\lambda}(t)-1\big)^{k}\ =\ \frac{1}{k!}\sum_{l=0}^{k}\binom{k}{l}(-1)^{k-l}e_{\lambda}^{l}(t) \label{25} \\ &=\ \sum_{n=0}^{\infty}\bigg(\frac{1}{k!}\sum_{l=0}^{k}\binom{k}{l}(-1)^{k-l}(l)_{n,\lambda}\bigg)\frac{t^{n}}{n!}. \nonumber \end{align} By \eqref{25}, we get \begin{equation} \sum_{l=0}^{k}\binom{k}{l}(-1)^{k-l}(l)_{n,\lambda}=\left\{\begin{array}{cc} k!S_{2,\lambda}(n,k), & \textrm{if $n\ge k$}, \\ 0, & \textrm{otherwise.} \end{array}\right. \label{26} \end{equation} Let $\triangle$ be a difference operator with $\triangle f(x)=f(x+1)-f(x)$. Then we have \begin{displaymath} \triangle^{n}f(x)=\sum_{k=0}^{n}\binom{n}{k}(-1)^{n-k}f(x+k). \end{displaymath} From \eqref{26}, we have \begin{equation} k!S_{2,\lambda}(n,k)=\triangle^{k}(0)_{n,\lambda},\quad(n,k\ge 0, n\ge k).\label{27} \end{equation} In light of \eqref{12}, we may consider the {\it{degenerate type Eulerian numbers}} given by \begin{equation} (-1)^{n-m}\eulerian{n}{m}_{\lambda}=\sum_{k=0}^{n-m}\lambda^{k}(1)_{k+1,1/\lambda}\binom{n-k}{m}\frac{\triangle^{k}(0)_{n,\lambda}}{k!}.\label{28} \end{equation} By \eqref{27} and \eqref{28}, we get \begin{equation} (-1)^{n-m}\eulerian{n}{m}_{\lambda}=\sum_{k=0}^{n-m}\lambda^{k}(1)_{k+1,1/\lambda}\binom{n-k}{m}S_{2,\lambda}(n,k).\label{29} \end{equation} We observe that \begin{align} \sum_{k=0}^{n}\lambda^{k}(1)_{k+1,1/\lambda}S_{2,\lambda}(n,k)(t+1)^{n-k}\ &=\ \sum_{k=0}^{n}\lambda^{k}(1)_{k+1,1/\lambda}S_{2,\lambda}(n,k)\sum_{m=0}^{n-k}\binom{n-k}{m}t^{m}\label{30} \\ &=\ \sum_{m=0}^{n}\bigg(\sum_{k=0}^{n-m}\lambda^{k}(1)_{k+1,1/\lambda}S_{2,\lambda}(n,k)\binom{n-k}{m}\bigg)t^{m} \nonumber \\ &=\ \sum_{m=0}^{n}(-1)^{n-m}\eulerian{n}{m}_{\lambda}t^{m}. 
\nonumber \end{align} From \eqref{30} and Theorem 3, we note that \begin{align} \beta_{n,\lambda}^{(p)}\ &=\ \sum_{k=0}^{n}\lambda^{k}(1)_{k+1,1/\lambda}\binom{p+k+1}{k}^{-1}S_{2,\lambda}(n,k) \label{31} \\ &=\ (p+1)\sum_{k=0}^{n}\lambda^{k}(1)_{k+1,1/\lambda}S_{2,\lambda}(n,k)\int_{0}^{1}t^{p}(1-t)^{k}dt\nonumber\\ &=\ (p+1)\int_{0}^{1}\sum_{k=0}^{n}\lambda^{k}(1-t)^{n}t^{p}(1)_{k+1,1/\lambda}S_{2,\lambda}(n,k)\bigg(1+\frac{t}{1-t}\bigg)^{n-k}dt \nonumber \\ &=\ (p+1)\int_{0}^{1}(1-t)^{n}t^{p}\sum_{k=0}^{n}\eulerian{n}{k}_{\lambda}(-1)^{n-k}\bigg(\frac{t}{1-t}\bigg)^{k}dt \nonumber \\ &=\ (p+1)\sum_{k=0}^{n}\eulerian{n}{k}_{\lambda}(-1)^{n-k}\int_{0}^{1} (1-t)^{n-k}t^{p+k}dt\nonumber \\ &=\ (p+1)\sum_{k=0}^{n}\eulerian{n}{k}_{\lambda}(-1)^{n-k}\frac{(n-k)!(p+k)!}{(p+n+1)!}\nonumber \\ &=\ \frac{p+1}{n+p+1}\sum_{k=0}^{n}\eulerian{n}{k}_{\lambda}(-1)^{n-k}\binom{p+n}{p+k}^{-1}. \nonumber \end{align} Therefore, by \eqref{31}, we obtain the following theorem. \begin{theorem} For $n,p\ge 0$, we have \begin{displaymath} \beta_{n,\lambda}^{(p)}= \frac{p+1}{n+p+1}\sum_{k=0}^{n}\eulerian{n}{k}_{\lambda}(-1)^{n-k}\binom{p+n}{p+k}^{-1}. \end{displaymath} \end{theorem} Let $r$ be a positive integer. The unsigned $r$-Stirling number of the first kind ${n \brack k}_{r}$ is the number of permutations of the set $[n]=\{1,2,3,\dots,n\}$ with exactly $k$ disjoint cycles in such a way that the numbers $1,2,3,\dots,r$ are in distinct cycles, while the $r$-Stirling number of the second kind ${n \brace k}_{r}$ counts the number of partitions of the set $[n]$ into $k$ non-empty disjoint subsets in such a way that the numbers $1,2,3,\dots,r$ are in distinct subsets. In [13], Kim-Kim-Lee-Park introduced the unsigned degenerate $r$-Stirling numbers of the first kind ${n \brack k}_{r,\lambda}$ as a degenerate version of ${n \brack k}_{r}$ and the degenerate $r$-Stirling number of the second kind ${n \brace k}_{r,\lambda}$ as a degenerate version of ${n \brace k}_{r}$. It is known that the degenerate $r$-Stirling numbers of the second kind are given by \begin{equation} (x+r)_{n,\lambda}=\sum_{k=0}^{n}{n+r \brace k+r}_{r,\lambda}(x)_{k},\quad(n \ge 1). \label{32} \end{equation} From \eqref{32}, we note that \begin{equation} \frac{1}{k!}\big(e_{\lambda}(t)-1\big)^{k}e_{\lambda}^{r}(t)=\sum_{n=k}^{\infty}{n+r \brace k+r}_{r,\lambda}\frac{t^{n}}{n!},\quad(k\ge 0,\ r \ge 1). 
\label{33} \end{equation} By the Euler's transformation formula in \eqref{9}, we get \begin{align} \sum_{n=0}^{\infty}\beta_{n,\lambda}^{(p)}\frac{t^{n}}{n!}\ &=\ \pFq{2}{1}{1-\lambda,1}{p+2}{1-e_{\lambda}(t)} \label{34} \\ &=\ e_{\lambda}^{p+\lambda}(t)\sum_{k=0}^{\infty}\frac{\langle p+1+\lambda\rangle_{\lambda}\langle p+1\rangle_{k}}{\langle p+2\rangle_{\lambda}}\frac{(1-e_{\lambda}(t))^{k}}{k!}\nonumber \\ &=\ \frac{p+1}{\lambda^{p}\langle 1\rangle_{p+1,1/\lambda}}\sum_{k=0}^{\infty}\frac{\lambda^{k+p}\langle 1\rangle_{p+k+1,1/\lambda}}{p+k+1}\frac{(-1)^{k}}{k!}\big(e_{\lambda}(t)-1\big)^{k}e_{\lambda}^{p+k}(t)\nonumber\\ &=\ \frac{p+1}{\lambda^{p}\langle 1\rangle_{p+1,1/\lambda}}\sum_{k=0}^{\infty}\frac{\lambda^{k+p}\langle 1\rangle_{p+k+1,1/\lambda}}{p+k+1}(-1)^{k}\sum_{m=k}^{\infty}{m+p \brace k+p}_{p,\lambda}\frac{t^{m}}{m!}\sum_{l=0}^{\infty}(k)_{l,\lambda}\frac{t^l}{l!} \nonumber \\ &=\ \sum_{m=0}^{\infty}\frac{p+1}{\lambda^{p}\langle 1\rangle_{p+1,1/\lambda}}\sum_{k=0}^{m}\frac{\lambda^{k+p}\langle 1\rangle_{p+k+1,1/\lambda}}{p+k+1}(-1)^{k}{m+p \brace k+p}_{p,\lambda}\frac{t^{m}}{m!}\sum_{l=0}^{\infty}(k)_{l,\lambda}\frac{t^l}{l!}\nonumber \\ &=\ \sum_{n=0}^{\infty}\sum_{m=0}^{n}\binom{n}{m}\frac{p+1}{\langle 1\rangle_{p+1,1/\lambda}}\sum_{k=0}^{m}\frac{(-\lambda)^{k}}{p+k+1} \langle 1\rangle_{p+k+1,1/\lambda}{m+p \brace k+p}_{p,\lambda}(k)_{n-m,\lambda}\frac{t^{n}}{n!}, \nonumber \end{align} where $\langle x\rangle_{0,\lambda}=1,\ \langle x\rangle_{n,\lambda}=x(x+\lambda)\cdots(x+(n-1)\lambda),\ (n\ge 1)$. \par Therefore, we obtain the following theorem. \begin{theorem} For $n\ge 1$ and $p\ge 0$, we have \begin{equation*} \beta_{n,\lambda}^{(p)}=\frac{p+1}{\langle 1\rangle_{p+1,1/\lambda}} \sum_{m=0}^{n}\sum_{k=0}^{m}\binom{n}{m}\frac{(-\lambda)^{k}}{p+k+1} \langle 1\rangle_{p+k+1,1/\lambda}{m+p \brace k+p}_{p,\lambda}(k)_{n-m,\lambda}. \end{equation*} \end{theorem} Note that \begin{displaymath} \lim_{\lambda\rightarrow 0}\beta_{n,\lambda}^{(p)}=\frac{p+1}{p!}\sum_{m=0}^{n}\sum_{k=0}^{m}\binom{n}{m}(-1)^{k}\frac{(p+k)!}{p+k+1}{m+p \brace k+p}_{p}(k)_{n-m}. \end{displaymath} From Theorem 3, we have \begin{align} \sum_{n=0}^{\infty}\beta_{n,\lambda}^{(p)}\frac{t^{n}}{n!}\ &=\ \sum_{n=0}^{\infty}\bigg(\sum_{k=0}^{n}\frac{\lambda^{k}(1)_{k+1,1/\lambda}}{\binom{p+k+1}{p+1}}S_{2,\lambda}(n,k)\bigg)\frac{t^{n}}{n!}\label{35} \\ &=\ \sum_{k=0}^{\infty}\frac{\lambda^{k}(1)_{k+1,1/\lambda}}{\binom{p+k+1}{p+1}}\frac{1}{k!}\big(e_{\lambda}(t)-1\big)^{k}\nonumber \\ &=\ (p+1)\sum_{k=0}^{\infty}\frac{p!k!}{(k+p+1)!}\lambda^{k}(1)_{k+1,1/\lambda}\frac{1}{k!}\big(e_{\lambda}(t)-1\big)^{k} \nonumber \\ &=\ (p+1)\sum_{k=0}^{\infty}\frac{(-1)^{k}\lambda^{k}(1)_{k+1,1/\lambda}}{k!}\big(1-e_{\lambda}(t)\big)^{k}\int_{0}^{1}(1-x)^{p}x^{k}dx\nonumber \\ &=\ (p+1)\sum_{k=0}^{\infty}(-1)^{k}\binom{\lambda-1}{k}\big(1-e_{\lambda}(t)\big)^{k}\int_{0}^{1}(1-x)^{p}x^{k}dx\nonumber \\ &=\ (p+1)\int_{0}^{1}(1-x)^{p}\big(1-x(1-e_{\lambda}(t))\big)^{\lambda-1}dx. \nonumber \end{align} Therefore, we obtain the following theorem. \begin{theorem} For $p\ge 0$, we have \begin{displaymath} \sum_{n=0}^{\infty}\beta_{n,\lambda}^{(p)}\frac{t^{n}}{n!}=(p+1)\int_{0}^{1}(1-x)^{p}\big(1-x(1-e_{\lambda}(t))\big)^{\lambda-1}dx. \end{displaymath} \end{theorem} \section{Generalized degenerate Bernoulli polynomials} In this section, we consider the {\it{generalized degenerate Bernoulli polynomials}} which are derived from the Gauss hypergeometric function. 
In light of \eqref{21}, we define the generalized degenerate Bernoulli polynomials by \begin{equation} \sum_{n=0}^{\infty}\beta_{n,\lambda}^{(p)}(x)\frac{t^{n}}{n!}=\pFq{2}{1}{1-\lambda,1}{p+2}{1-e_{\lambda}(t)}e_{\lambda}^{x}(t). \label{36} \end{equation} When $x=0$, $\beta_{n,\lambda}^{(p)}(0)=\beta_{n,\lambda}^{(p)}$, $(n\ge 0)$. Thus, by \eqref{36}, we get \begin{align} \sum_{n=0}^{\infty}\beta_{n,\lambda}^{(p)}(x)\frac{t^{n}}{n!}\ &=\ \pFq{2}{1}{1-\lambda,1}{p+2}{1-e_{\lambda}(t)}e_{\lambda}^{x}(t) \label{37} \\ &=\ \sum_{l=0}^{\infty}\beta_{l,\lambda}^{(p)}\frac{t^{l}}{l!}\sum_{m=0}^{\infty}(x)_{m,\lambda}\frac{t^{m}}{m!}\nonumber \\ &=\ \sum_{n=0}^{\infty}\bigg(\sum_{l=0}^{n}\binom{n}{l}\beta_{l,\lambda}^{(p)}(x)_{n-l,\lambda}\bigg)\frac{t^{n}}{n!}. \nonumber \end{align} Therefore, by comparing the coefficients on both sides of \eqref{37}, we obtain the following theorem. \begin{theorem} For $n\ge 0$, we have \begin{displaymath} \beta_{n,\lambda}^{(p)}(x)=\sum_{l=0}^{n}\binom{n}{l}\beta_{l,\lambda}^{(p)}(x)_{n-l,\lambda}. \end{displaymath} \end{theorem} From \eqref{36}, we note that \begin{align*} \sum_{n=1}^{\infty}\frac{d}{dx}\beta_{n,\lambda}^{(p)}(x)\frac{t^{n}}{n!}\ &=\ \pFq{2}{1}{1-\lambda,1}{p+2}{1-e_{\lambda}(t)}\frac{d}{dx}e_{\lambda}^{x}(t) \\ &=\ \frac{1}{\lambda}\log(1+\lambda t)\pFq{2}{1}{1-\lambda,1}{p+2}{1-e_{\lambda}(t)}e_{\lambda}^{x}(t) \\ &=\ \frac{1}{\lambda}\sum_{l=1}^{\infty}\frac{(-1)^{l-1}\lambda^{l}}{l}t^{l}\sum_{m=0}^{\infty}\beta_{m,\lambda}^{(p)}(x)\frac{t^{m}}{m!} \\ &=\ \sum_{n=1}^{\infty}\bigg(\sum_{l=1}^{n}\frac{(-\lambda)^{l-1}}{l}\frac{n!\beta_{n-l,\lambda}^{(p)}(x)}{(n-l)!}\bigg)\frac{t^{n}}{n!} \end{align*} Thus, we have \begin{displaymath} \frac{d}{dx}\beta_{n,\lambda}^{(p)}(x)= \sum_{l=1}^{n}(-\lambda)^{l-1}(l-1)!\binom{n}{l}\beta_{n-l,\lambda}^{(p)}(x). \end{displaymath} \begin{proposition} For $n\ge 1$, we have \begin{displaymath} \frac{d}{dx}\beta_{n,\lambda}^{(p)}(x)= \sum_{l=1}^{n}(-\lambda)^{l-1}(l-1)!\binom{n}{l}\beta_{n-l,\lambda}^{(p)}(x). \end{displaymath} \end{proposition} By \eqref{14}, we easily get \begin{align*} \sum_{n=k}^{\infty}S_{2,\lambda}(n,k|x)\frac{t^{n}}{n!}\ &=\ \frac{1}{k!}\big(e_{\lambda}(t)-1\big)^{k}e_{\lambda}^{x}(t) \\ &=\ \frac{1}{k!}\sum_{l=0}^{k}\binom{k}{l}(-1)^{k-l}e_{\lambda}^{l+x}(t) \\ &=\ \sum_{n=0}^{\infty}\bigg(\frac{1}{k!}\sum_{l=0}^{k}(-1)^{k-l}(l+x)_{n,\lambda}\bigg)\frac{t^{n}}{n!}. \end{align*} Thus we have \begin{equation} \frac{1}{k!}\sum_{l=0}^{k}\binom{k}{l}(-1)^{k-l}(l+x)_{n,\lambda}=\left\{\begin{array}{ccc} S_{2,\lambda}(n,k|x), & \textrm{if $n\ge k$,}\\ 0, & \textrm{otherwise.} \end{array}\right. \label{38} \end{equation} From \eqref{38}, we note that \begin{displaymath} S_{2,\lambda}(n,k|x)=\frac{1}{k!}\triangle^{k}(x)_{n,\lambda},\quad(n\ge k). \end{displaymath} \begin{lemma} For $n,k\ge 0$ with $n\ge k$, we have \begin{displaymath} S_{2,\lambda}(n,k|x)=\frac{1}{k!}\triangle^{k}(x)_{n,\lambda},\quad(n\ge k). 
\end{displaymath} \end{lemma} Now, we observe that \begin{align} \sum_{n=0}^{\infty}\beta_{n,\lambda}^{(p)}(x)\frac{t^{n}}{n!}\ &=\ \sum_{k=0}^{\infty}\frac{(p+1)!k!}{(p+k+1)!}\lambda^{k}(1)_{k+1,1/\lambda}\frac{1}{k!}\big(e_{\lambda}(t)-1\big)^{k}e_{\lambda}^{x}(t) \label{39} \\ &=\ \sum_{k=0}^{\infty}\frac{(1)_{k+1,1/\lambda}\lambda^{k}}{\binom{p+k+1}{p+1}}\sum_{n=k}^{\infty}S_{2,\lambda}(n,k|x)\frac{t^{n}}{n!} \nonumber \\ &=\ \sum_{n=0}^{\infty}\bigg(\sum_{k=0}^{n}\frac{(1)_{k+1,1/\lambda}\lambda^{k}}{\binom{p+k+1}{p+1}}S_{2,\lambda}(n,k|x)\bigg)\frac{t^{n}}{n!}.\nonumber \end{align} Therefore, by \eqref{39}, we obtain the following theorem. \begin{theorem} For $n\ge 0$, we have \begin{displaymath} \beta_{n,\lambda}^{(p)}(x)= \sum_{k=0}^{n}\frac{(1)_{k+1,1/\lambda}\lambda^{k}}{\binom{p+k+1}{p+1}}S_{2,\lambda}(n,k|x). \end{displaymath} \end{theorem} \begin{remark} Let $p$ be a nonnegative integer. Then, by Theorem 7 and \eqref{36}, we easily get \begin{align*} &\ \beta_{n,\lambda}^{(p)}(x+y)=\sum_{k=0}^{n}\binom{n}{k}\beta_{k,\lambda}^{(p)}(x)(y)_{n-k,\lambda},\quad(n \ge 0),\nonumber \\ &\ \beta_{n,\lambda}^{(p)}(x+1)-\beta_{n,\lambda}^{(p)}(x)\ =\ \sum_{k=0}^{n-1}\binom{n}{k}\beta_{k,\lambda}^{(p)}(x)(1)_{n-k,\lambda},\quad(n \ge 1),\nonumber \\ &\ \beta_{n,\lambda}^{(p)}(mx)\ =\ \sum_{k=0}^{n}\binom{n}{k}\beta_{k,\lambda}^{(p)}(x)(m-1)^{n-k}(x)_{n-k,\lambda/m-1},\quad(n \ge 0, m \ge 2). \nonumber \end{align*} \end{remark} \section{Conclusion} In recent years, degenerate versions of many special polynomials and numbers have been investigated by means of various different tools including generating functions, combinatorial methods, umbral calculus, $p$-adic analysis, differential equations, special functions, probability theory and analytic number theory. Studying degenerate versions of some special polynomials and numbers, which was initiated by Carlitz in [3], has yielded many interesting arithmetic and combinatorial results (see [7-13 and references therein]) and has the potential to find many applications in diverse areas.\par A new family of $p$-Bernoulli numbers and polynomials, which reduce to the classical Bernoulli numbers and polynomials for $p=0$, was introduced by Rahmani in [14] by means of the Gauss hypergeometric function. Motivated by that paper, we were interested in finding a degenerate version of those numbers and polynomials. Indeed, the generalized degenerate Bernoulli numbers and polynomials, which reduce to the Carlitz degenerate Bernoulli numbers and polynomials for $p=0$, were introduced again in terms of the Gauss hypergeometric function. In addition, the degenerate type Eulerian numbers were introduced as a degenerate version of Eulerian numbers. \par In this paper, we expressed the generalized degenerate Bernoulli numbers in terms of the degenerate Stirling numbers of the second kind, of the degenerate type Eulerian numbers, of the degenerate $p$-Stirling numbers of the second kind and of an integral on the unit interval. In addition, we represented the generalized degenerate Bernoulli polynomials in terms of the degenerate Stirling polynomials of the second kind. \par It is one of our future projects to continue pursuing this line of research. Namely, by studying degenerate versions of some special polynomials and numbers, we want to find their applications in mathematics, science and engineering.
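As a small sanity check of the results above (added here for illustration; it is not part of the original paper), the following SymPy sketch compares the numbers $\beta_{n,\lambda}^{(p)}$ produced by Theorem 3 with the coefficients of the defining series \eqref{21}. The sample values $\lambda=2/3$ and $p=2$, the truncation order and all names are assumptions of the sketch.
\begin{verbatim}
# Illustrative cross-check of Theorem 3 against the generating function (21).
import sympy as sp

t = sp.symbols('t')
lam = sp.Rational(2, 3)   # sample value of lambda (assumption)
p = 2                     # sample value of p (assumption)
N = 6                     # truncation order (assumption)

# degenerate exponential e_lambda(t) = (1 + lambda t)^{1/lambda}, truncated
e_lam = sp.expand(sp.series((1 + lam*t)**(1/lam), t, 0, N + 1).removeO())

def rising(a, k):          # <a>_k = a(a+1)...(a+k-1), with <a>_0 = 1
    out = sp.Rational(1)
    for j in range(k):
        out *= (a + j)
    return out

def deg_falling(y, k, mu):  # (y)_{k,mu} = y(y-mu)...(y-(k-1)mu)
    out = sp.Rational(1)
    for j in range(k):
        out *= (y - j*mu)
    return out

def S2_lam(n, k):           # degenerate Stirling numbers of the 2nd kind, from (6)
    f = sp.expand((e_lam - 1)**k / sp.factorial(k))
    return f.coeff(t, n) * sp.factorial(n)

# series (21): 2F1(1 - lambda, 1; p + 2; 1 - e_lambda(t)), truncated
z = sp.expand(1 - e_lam)
gf = sp.expand(sum(rising(1 - lam, k) * rising(1, k) / rising(p + 2, k)
                   * z**k / sp.factorial(k) for k in range(N + 1)))

for n in range(N):
    from_series = gf.coeff(t, n) * sp.factorial(n)
    from_theorem3 = sum(lam**k * deg_falling(1, k + 1, 1/lam)
                        / sp.binomial(p + k + 1, p + 1) * S2_lam(n, k)
                        for k in range(n + 1))
    assert sp.simplify(from_series - from_theorem3) == 0

print("Theorem 3 matches the series (21) up to order", N)
\end{verbatim}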
{\bf Acknowledgements} The authors would like to thank the reviewers for their valuable comments and suggestions and Jangjeon Institute for Mathematical Science for the support of this research. {\bf Funding} Not applicable. {\bf Availability of data and materials} Not applicable. {\bf Competing interests} The authors declare that they have no conflicts of interest. {\bf Authors’ contributions} TK and DSK conceived of the framework and structured the whole paper; DSK and TK wrote the paper; HL typed; LCJ, and HYK checked the results of the paper; DSK and TK completed the revision of the paper. All authors have read and approved the final version of the manuscript. \end{document}
\begin{document} \title{Large random simplicial complexes, I} \author{A. Costa and M. Farber} \date{March 18, 2015} \maketitle \section{Introduction} Networks, such as the Internet as well as social and biological networks of various nature, have been the subject of intense study in recent years. Usually one thinks of a network as being a large graph with nodes representing objects (or sites) and edges representing connections or links between the objects. Random graph theory provides a language and mathematical techniques for studying large random networks in different contexts. If we are interested not only in pairwise relations between the objects but also in relations between triples, quadruples etc, we will have to use instead of graphs the high dimensional simplicial complexes as geometrical models of networks. The mathematical study of large random simplicial complexes started relatively recently and several different probabilistic models of random topological objects have appeared within the last 10 years, see \cite{CFK} and \cite{Ksurvey} for surveys. One may mention random surfaces \cite{PS}, random 3-dimensional manifolds \cite{DT}, random configuration spaces of linkages \cite{F}. Linial, Meshulam and Wallach \cite{LM}, \cite{MW} studied an important analogue of the classical Erd\H os--R\'enyi \cite{ER} model of random graphs in the situation of high-dimensional simplicial complexes. The random simplicial complexes of \cite{LM}, \cite{MW} are $d$-dimensional, have the complete $(d-1)$-skeleton and their randomness shows only in the top dimension. Some interesting results about the topology of random 2-complexes in the Linial--Meshulam model were obtained in \cite{BHK}, \cite{CCFK}, \cite{CF1}. A different model of random simplicial complexes was studied by M. Kahle \cite{Kahle1} and by some other authors, see for example \cite{CFH}. These are the clique complexes of random Erd\H os--R\'enyi graphs, i.e. here one starts with a random graph in the Erd\H os--R\'enyi model and declares as a simplex every subset of vertices which form a {\it clique} (a subset such that every two vertices are connected by an edge). Compared with the Linial - Meshulam model, the clique complex has randomness in dimension one but it influences its structure in all the higher dimensions. In \cite{CF14} we initiated the study of a more general and more flexible model of random simplicial complexes with randomness in all dimensions. Here one starts with a set of $n$ vertices and retain each of them with probability $p_0$; on the next step one connects every pair of retained vertices by an edge with probability $p_1$, and then fills in every triangle in the obtained random graph with probability $p_2$, and so on. As the result we obtain a random simplicial complex depending on the set of probability parameters $$(p_0, p_1, \dots, p_r), \quad 0\le p_i\le 1.$$ Our multi-parameter random simplicial complex includes both Linial-Meshulam and random clique complexes as special cases. The topological and geometric properties of multi-parameter random simplicial complexes depend on the whole set of parameters and their thresholds can be understood as convex subsets and not as single numbers as in all the previously studied models. In this paper we develop further the multi-parameter model. Firstly, we give an intrinsic characterisation of the multi-parameter probability measure. 
Secondly, we show that in multi-parameter random simplicial complexes the links of simplexes and their intersections are also multi-parameter random simplicial complexes. Thirdly, we find conditions under which a multi-parameter random simplicial complex is connected and simply connected. Note that already in the case of random clique complex, the links of simplexes are two-parameter random simplicial complexes, see Example \ref{ex33}. In \cite{CF15a} we state {\it a homological domination principle} for random simplicial complexes, claiming that the Betti number in one specific dimension $k=k({\mathfrak p})$ (which is explicitly determined by the probability multi-parameter ${\mathfrak p}$) significantly dominates the Betti numbers in all other dimensions. We also state and discuss evidence for two interesting conjectures which would imply a stronger version of the domination principle, namely that generically the homology of a random simplicial complex coincides with that of a wedge of spheres of dimension $k=k({\mathfrak p})$; moreover, for $k=k({\mathfrak p})\ge 3$ a random complex collapses to a wedge of spheres of dimension $k=k({\mathfrak p})$. In the following papers we shall describe the properties of fundamental groups of the multi-parameter random simplicial complexes and also their Betti numbers. In this paper we use the following notations. Given a simplicial complex $Y$ and a simplex $\sigma\subset Y$ we denote by $\rm {St}_Y(\sigma)$ {\it the star} of $\sigma$ in $Y$. A simplex $\tau\subset Y$ belongs to the star $\rm {St}_Y(\sigma)$ iff the union of the sets of vertices $V(\sigma)\cup V(\tau)$ spans a simplex of $Y$. Clearly, $\rm {St}_Y(\sigma)$ is a simplicial subcomplex of $Y$. {\it The link of a simplex} $\sigma$ in $Y$ is defined as the simplicial subcomplex of $\rm {St}_Y(\sigma)$ consisting of the simplexes $\tau\subset \rm {St}_Y(\sigma)$ such that $V(\tau)\cap V(\sigma)=\emptyset$. This research was supported by the EPSRC. \section{Multi-parameter random simplicial complexes} \subsection{Faces and external faces} \label{1.1} Let $\Delta_n$ denote the simplex with the vertex set $\{1, 2, \dots, n\}$. We view $\Delta_n$ as an abstract simplicial complex of dimension $n-1$. Given a simplicial subcomplex $Y\subset \Delta_n$, we denote by $f_i(Y)$ the number of {\it $i$-faces} of $Y$ (i.e. $i$-dimensional simplexes of $\Delta_n$ contained in $Y$). We shall use the symbol $F(Y)$ to denote the set of all faces of $Y$. \begin{definition} An external face of a subcomplex $Y\subset \Delta_n$ is a simplex $\sigma \subset \Delta_n$ such that $\sigma \not\subset Y$ but the boundary of $\sigma$ is contained in $Y$, $\partial \sigma \subset Y$. \end{definition} We shall denote by $E(Y)$ the set of all external faces of $Y$; the symbol $e_i(Y)$ will indicate the number of $i$-dimensional external faces of $Y$. A vertex $i\in \{1, \dots, n\}$ is an external vertex of $Y\subset \Delta_n$ iff $i\notin Y$. An edge $(ij)$ is an external edge of $Y$ iff $i,j\in Y$ but $(ij)\not\subset Y$. For $i=0$, we have $e_0(Y)+f_0(Y)=n$ and for $i>0$, $$f_i(Y)+e_i(Y)\le \binom n {i+1}.$$ Note that any simplex $\sigma\subset \Delta_n$ which is not a simplex of $Y$ has a face $\sigma'\subset \sigma$ which is an external face of $Y$.
In other words, the complement $\Delta_n-Y$ is the union of the open stars of the external faces of $Y$, $$\Delta_n-Y \, =\, \bigcup_{\sigma\in E(Y)} {\rm St}(\sigma).$$ For two subcomplexes $Y, Y'\subset \Delta_n$, one has $Y\subset Y'$ if and only if for any external face $\sigma'$ of $Y'$ there is a face $\sigma\subset \sigma'$ which is an external face of $Y$. \subsection{The model} Fix an integer $r\ge 0$ and a sequence $${\mathfrak p}=(p_0, p_1, \dots, p_r)$$ of real numbers satisfying $$0\le p_i\le 1.$$ Denote $$q_i=1-p_i.$$ For a simplex $\sigma\subset \Delta_n$ we shall use the notations $p_\sigma=p_i$ and $q_\sigma=q_i$ where $i=\dim \sigma$. We consider the probability space ${\Omega_n^r}$ consisting of all subcomplexes $$Y\subset \Delta_n, \quad \mbox{with}\quad \dim Y\le r.$$ Recall that the symbol $\Delta_n^{(r)}$ stands for the $r$-dimensional skeleton of $\Delta_n$, which is defined as the union of all simplexes of dimension $\leq r$. Thus, our probability space ${\Omega_n^r}$ consists of all subcomplexes $Y\subset \Delta_n^{(r)}$. The probability function \begin{eqnarray} {\rm P}_{r,{\mathfrak p}}: {\Omega_n^r}\to {\mathbf R} \end{eqnarray} is given by the formula \begin{eqnarray}\label{def1} {\rm P}_{r, {\mathfrak p}}(Y) \, &=&\, \prod_{\sigma\in F(Y)} p_\sigma \cdot \prod_{\sigma\in E(Y)} q_\sigma \nonumber\\ \\ &=& \, \prod_{i=0}^r p_i^{f_i(Y)}\cdot \prod_{i=0}^r q_i^{e_i(Y)}.\nonumber \end{eqnarray} In (\ref{def1}) we use the convention $0^0=1$; in other words, if $p_i=0$ and $f_i(Y)=0$ then the corresponding factor in (\ref{def1}) equals 1; similarly if some $q_i=0$ and $e_i(Y)=0$. We shall show below that ${\rm P}_{r, {\mathfrak p}}$ is indeed a probability function, i.e. \begin{eqnarray}\label{sum1} \sum_{Y\subset \Delta_n^{(r)}}{\rm P}_{r, {\mathfrak p}}(Y) = 1, \end{eqnarray} see Corollary \ref{prob}. If $p_i=0$ for some $i$ then according to (\ref{def1}) we shall have ${\rm P}_{r, {\mathfrak p}}(Y)=0$ unless $f_i(Y)=0$, i.e. if $Y$ contains no simplexes of dimension $i$ (in this case $Y$ contains no simplexes of dimension $\ge i$). Thus, if $p_i=0$ the probability measure ${\rm P}_{r, {\mathfrak p}}$ is supported on the set of subcomplexes of $\Delta_n$ of dimension $<i$. In the special case when one of the probability parameters satisfies $p_i=1$ one has $q_i=0$ and from formula (\ref{def1}) we see ${\rm P}_{r, {\mathfrak p}}(Y)=0$ unless $e_i(Y)=0$, i.e. if the subcomplex $Y\subset \Delta_n^{(r)}$ has no external faces of dimension $i$. In other words, we may say that if $p_i=1$ the measure ${\rm P}_{r,{\mathfrak p}}$ is concentrated on the set of complexes satisfying $e_i(Y)=0$, i.e. such that any boundary of an $i$-simplex in $Y$ is filled by an $i$-simplex of $Y$. \begin{lemma}\label{cont} Let $$A\subset B\subset \Delta_n^{(r)}$$ be two subcomplexes satisfying the following condition: the boundary of any external face of $B$ of dimension $\le r$ is contained in $A$. Then \begin{eqnarray}\label{twosided} {\rm P}_{r, {\mathfrak p}}(A\subset Y\subset B) &=& \prod_{\sigma\in F(A)} p_\sigma \cdot \prod_{\sigma\in E(B)}q_\sigma \nonumber\\ \\ &=& \prod_{i=0}^r p_i^{f_i(A)} \cdot \prod_{i=0}^r q_i^{e_i(B)}.\nonumber \end{eqnarray} \end{lemma} \begin{proof} We act by induction on $r$. For $r=0$, the complexes $A\subset B$ are discrete sets of vertices and the condition of the Lemma is automatically satisfied (since the boundary of any 0-face is the empty set).
A subcomplex $Y\subset \Delta_n^{(0)}$ satisfying $A\subset Y\subset B$ is determined by a choice of $f_0(Y)-f_0(A)$ vertices out of $f_0(B)-f_0(A)$ vertices. Hence using formula (\ref{def1}), \begin{eqnarray*} {\rm P}_{0, {\mathfrak p}}(A\subset Y\subset B) &=& \sum_{k=0}^{f_0(B)-f_0(A)} \binom {f_0(B)-f_0(A)} k \cdot p_0^{f_0(A)+k}q_0^{n-f_0(A)-k}\\ &=& p_0^{f_0(A)} \cdot q_0^{n-f_0(A)}\cdot \left(1+ \frac{p_0}{q_0}\right)^{f_0(B)-f_0(A)} \\ &=& p_0^{f_0(A)} \cdot q_0^{n-f_0(B)}\\ &=& p_0^{f_0(A)} \cdot q_0^{e_0(B)}, \end{eqnarray*} as claimed. Now suppose that formula (\ref{twosided}) holds for $r-1$ and consider the case of $r$. Note the formula \begin{eqnarray}\label{ind} {\rm P}_{r, {\mathfrak p}}(Y) = {\rm P}_{r-1, {\mathfrak p}'}(Y^{r-1}) \cdot q_r^{g_r(Y)}\cdot \left(\frac{p_r}{q_r}\right)^{f_r(Y)} \end{eqnarray} where $g_r(Y)=e_r(Y)+f_r(Y)$ is the number of boundaries of $r$-simplexes contained in $Y$ and ${\mathfrak p}'=(p_0, \dots, p_{r-1})$. Note that the first two factors in (\ref{ind}) depend only on the skeleton $Y^{r-1}$. We denote by $g_r^B(Y)$ the number of $r$-simplexes of $B$ such that their boundary $\partial \Delta^r$ lies in $Y$. Clearly the number $g_r^B(Y)$ depends only on the skeleton $Y^{r-1}$. Our assumption that the boundary of any external $i$-face of $B$ is contained in $A$ for $i\le r$ implies that for any subcomplex $A\subset Y\subset B$ \begin{eqnarray}\label{indep} g_r(Y)-g_r^B(Y) = e_r(B). \end{eqnarray} A complex $Y$ is uniquely determined by its skeleton $Y^{r-1}$ and by the set of its $r$-faces. Given the skeleton $Y^{r-1}$, the number $f_r(Y)$ is arbitrary satisfying $$f_r(A)\le f_r(Y)\le g_r^B(Y).$$ Thus using (\ref{ind}) we find that the probability \begin{eqnarray*} {\rm P}_{r,{\mathfrak p}}(A\subset Y\subset B) = \sum_{A\subset Y\subset B} {\rm P}_{r, {\mathfrak p}}(Y) \end{eqnarray*} equals \begin{eqnarray*} &\sum_{Y^{r-1}}& {\rm P}_{r-1, {\mathfrak p}'}(Y^{r-1}) \cdot q_r^{g_r(Y)}\cdot \sum_{k=0}^{g_r^B(Y)-f_r(A)} \binom {g_r^B(Y)-f_r(A)} k \cdot \left(\frac{p_r}{q_r}\right)^{f_r(A)+k} \\ &=& \sum_{Y^{r-1}} {\rm P}_{r-1, {\mathfrak p}'}(Y^{r-1}) \cdot q_r^{g_r(Y)}\cdot \left(\frac{p_r}{q_r}\right)^{f_r(A)}\cdot \left(1+\frac{p_r}{q_r}\right)^{g_r^B(Y)-f_r(A)}\\ &=& \sum_{Y^{r-1}} {\rm P}_{r-1, {\mathfrak p}'}(Y^{r-1}) \cdot p_r^{f_r(A)} \cdot q_r^{g_r(Y)-g_r^B(Y)}\\ &=& p_r^{f_r(A)}\cdot q_r^{e_r(B)}\cdot \sum_{Y^{r-1}} {\rm P}_{r-1, {\mathfrak p}'}(Y^{r-1}). \end{eqnarray*} Here we used the equation (\ref{indep}). Next we may combine the obtained equality with the inductive hypothesis $${\rm P}_{r-1, {\mathfrak p}'}({A^{r-1}\subset Y^{r-1}\subset B^{r-1}}) = \prod_{i=0}^{r-1} p_i^{f_i(A)}\cdot \prod_{i=0}^{r-1}q_i^{e_i(B)}$$ to obtain (\ref{twosided}). \end{proof} The assumption that any external face of $B$ is an external face of $A$ is essential in Lemma \ref{cont}; the lemma is false without this assumption. Taking $B=\Delta_n^{(r)}$ in Lemma \ref{cont} we obtain: \begin{corollary}\label{cont2c} Let $A\subset\Delta_n^{(r)}$ be a subcomplex. Then \begin{eqnarray}\label{twosided1} {\rm P}_{r, {\mathfrak p}}(Y\supset A) \, =\, \sum_{Y\supset A} {\rm P}_{r,{\mathfrak p}}(Y) \, =\, \prod_{\sigma\in F(A)} p_\sigma = \prod_{i=0}^r p_i^{f_i(A)}. \end{eqnarray} \end{corollary} Taking the special case $A=\emptyset$ in (\ref{twosided1}) we obtain the following Corollary confirming the fact that ${\rm P}_{r,{\mathfrak p}}$ is a probability function. \begin{corollary}\label{prob} $$\sum_{Y\subset \Delta_n^{(r)} }{\rm P}_{r,{\mathfrak p}}(Y) = 1. $$ \end{corollary}
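To make the measure ${\rm P}_{r,{\mathfrak p}}$ concrete, here is a small Python sketch (added for illustration; it is not part of the paper) which samples a complex by the dimension-by-dimension procedure described in the introduction: every vertex is retained with probability $p_0$, every edge whose two vertices were retained is added with probability $p_1$, every triangle whose boundary is present is filled with probability $p_2$, and so on; the resulting complex is distributed according to formula (\ref{def1}). The values of $n$, the probabilities, the seed and the function name below are arbitrary sample choices.
\begin{verbatim}
import random
from itertools import combinations

def sample_complex(n, probs, seed=None):
    """Sample Y: keep a candidate i-simplex with probability probs[i]
    whenever its whole boundary already belongs to Y."""
    rng = random.Random(seed)
    faces = {frozenset()}                 # start from the empty face
    for dim, p in enumerate(probs):       # dim = 0, 1, ..., r
        for cand in combinations(range(n), dim + 1):
            cand = frozenset(cand)
            if all(cand - {v} in faces for v in cand) and rng.random() < p:
                faces.add(cand)
    faces.discard(frozenset())
    return faces

Y = sample_complex(n=20, probs=[0.9, 0.5, 0.3], seed=1)   # r = 2 (sample values)
f = [sum(1 for s in Y if len(s) == i + 1) for i in range(3)]
print("face numbers (f_0, f_1, f_2):", f)
\end{verbatim}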
\subsection{The number of vertices of $Y$} We start with the following example. \begin{example}\label{emptyset} {\rm According to formula (\ref{def1}) the probability of the empty subcomplex $Y=\emptyset$ equals $${\rm P}_{r,{\mathfrak p}}(Y=\emptyset) = (1-p_0)^n.$$ If $p_0\to 0$ then ${\rm P}_{r,{\mathfrak p}}(Y=\emptyset) = (1-p_0)^n \sim e^{-p_0n}.$ Hence, we see that if $np_0\to 0$ then ${\rm P}_{r,{\mathfrak p}}(Y=\emptyset)\to 1$; we may say that in this case $Y=\emptyset$, a.a.s. If $p_0=c/n$ then $${\rm P}_{r,{\mathfrak p}}(Y=\emptyset) = (1-c/n)^n \to e^{-c}$$ as $n\to \infty$. Thus, for $p_0=c/n$ the empty subcomplex appears with positive probability $\sim e^{-c}$, a.a.s. Since we intend to study non-empty large random simplicial complexes, we shall always assume that $p_0=\frac{\omega}{n}$ where $\omega$ tends to $\infty$. } \end{example} For $t\in \{0, 1, \dots, n\}$ denote by ${\Omega_{n,t}^r}$ the set of all subcomplexes $Y\subset \Delta_n^{(r)}$ with $f_0(Y)=t$. \begin{lemma} \label{tt} One has \begin{eqnarray} \sum_{Y\in \Omega_{n,t}^r} {\rm P}_{r, {\mathfrak p}}(Y) = \binom n t \cdot p_0^t\cdot q_0^{n-t}. \end{eqnarray} \end{lemma} \begin{proof} For a subset $A\subset \{1, 2, \dots, n\}$ with $|A|=t$ denote by $B_A\subset \Delta_n$ the $r$-dimensional skeleton of the simplex spanned by $A$. The pair $A\subset B_A$ satisfies the condition of Lemma \ref{cont} and applying this lemma we obtain $${\rm P}_{r, {\mathfrak p}}(A\subset Y\subset B_A) = p_0^tq_0^{n-t}.$$ Since we have $\binom n t$ choices for $A$ the result follows. \end{proof} \begin{lemma} \label{vertices} Consider a random simplicial complex $Y\in {\Omega_n^r}$ with respect to the multi-parameter probability measure ${\rm P}_{r, {\mathfrak p}}$ where ${\mathfrak p}=(p_0, p_1, \dots, p_r)$. Assume that $p_0=\omega/n$ where $\omega \to \infty$. Then for any $0<\epsilon<1/2$ there exists an integer $N_\epsilon$ such that for all $n>N_\epsilon$ with probability $$\ge 1- 2e^{-\frac{1}{3}\omega^{2\epsilon}},$$ the number of vertices $f_0(Y)$ of $Y$ satisfies the inequality \begin{eqnarray} (1-\delta)\omega \le f_0(Y) \le (1+\delta)\omega, \end{eqnarray} where $$\delta=\omega^{-1/2 +\epsilon}.$$ \end{lemma} \begin{proof} By Lemma \ref{tt}, $f_0$ is a binomial random variable with $\mathbb E(f_0)=np_0=\omega$ and we may apply the Chernoff bound (see \cite{JLR}, Corollary 2.3 on page 27). Let $N_\epsilon$ be such that for all $n>N_\epsilon$ we have $\delta\leq 3/2$. Then $${\rm P}_{r, {\mathfrak p}}(|f_0-\omega|\geq\delta \cdot\omega)<2\exp\left(-\frac{\delta^2}{3}\mathbb{E}f_0\right) = 2\exp\left(-\frac{1}{3} \omega^{2\epsilon}\right).$$ This completes the proof. \end{proof} \begin{example}{\rm It is easy to see that a random complex $Y \in \Omega_n^r$ is zero-dimensional a.a.s. assuming that \begin{eqnarray}\label{dim0} n^2 p_0^2p_1 \to 0. \end{eqnarray} Indeed using Lemma \ref{cont} one finds that the expected number of edges in $Y$ is $$\binom n 2\cdot {p_0}^2p_1$$ and the statement follows from the first moment method. } \end{example} \subsection{Important special cases.} The multi-parameter model we consider in this paper turns into some important well known models in special cases: When $r=1$ and ${\mathfrak p}=(1, p)$ we obtain the classical model of random graphs of Erd\H os and R\'enyi \cite{ER}.
When $r=2$ and ${\mathfrak p}= (1,1,p)$ we obtain the Linial--Meshulam model of random 2-complexes \cite{LM}. When $r$ is arbitrary and fixed and ${\mathfrak p} = (1, 1, \dots, 1, p)$ we obtain the random simplicial complexes of Meshulam and Wallach \cite{MW}. For $r=n-1$ and ${\mathfrak p}=(1,p, 1,1, \dots, 1)$ one obtains the clique complexes of random graphs studied in \cite{Kahle1}. \subsection{Characterisation of the multi-parameter measure.} In this subsection we show that the property of Corollary \ref{cont2c} is characteristic for the multi-parameter measure. \begin{lemma}\label{characterisation} Let ${\mathbf P}$ be a probability measure on the set ${\Omega_n^r}$ of all $r$-dimensional subcomplexes of $\Delta_n$. Suppose that there exist real numbers $p_0, p_1, \dots, p_r\in [0,1]$ such that for any subcomplex $A\subset \Delta_n^{(r)}$ one has \begin{eqnarray} {\mathbf P}(Y\supset A) = \sum_{Y\supset A} {\mathbf P}(Y) = \prod_{i=0}^r p_i^{f_i(A)}. \end{eqnarray} Then ${\mathbf P}$ coincides with the measure $P_{r, {\mathfrak p}}: {\Omega_n^r} \to {\mathbf R}$ given by formula (\ref{def1}) with the multi-parameter ${\mathfrak p}=(p_0, p_1, \dots, p_r)$. \end{lemma} \begin{proof} Let $A\subset \Delta_n$ be a subcomplex. We want to show that $${\mathbf P}(A) = \prod_{i=0}^r p_i^{f_i(A)} \cdot \prod_{i=0}^r q_i^{e_i(A)} =P_{r,{\mathfrak p}}(A), \quad \mbox{where}\quad q_i=1-p_i.$$ Let $E=E(A)$ denote the set of external faces of $A$. For each subset $S\subset E$ we denote by $A_S$ the simplicial complex $$A_S \, =\, A\cup\bigcup_{\sigma\in S}\sigma.$$ Here $S=\emptyset$ is also allowed and $A_\emptyset =A$. Then by our assumption concerning ${\mathbf P}$ we have $${\mathbf P}(Y\supset A_S) = {\mathbf P}(Y\supset A)\cdot \prod_{\sigma\in S}p_\sigma,$$ where $p_\sigma$ denotes $p_i$ with $i=\dim \sigma$. Clearly, $$\{A\} = \{Y; Y\supset A\}- \bigcup_{\sigma\in E(A)} \{Y; Y\supset (A\cup \sigma)\}$$ and using the inclusion-exclusion formula we have (note that below $S$ runs over all subsets of $E$ including the empty set) \begin{eqnarray*}{\mathbf P}(A) &= & \sum_{S\subset E} (-1)^{|S|} {\mathbf P}(Y\supset A_S)\\ &=& {\mathbf P}(Y\supset A) \cdot \sum_{S\subset E}(-1)^{|S|} \prod_{\sigma\in S} p_\sigma \\ &= & {\mathbf P}(Y\supset A) \cdot \prod_{\sigma\in E}(1-p_\sigma) \\ &=& \prod_{i=0}^r p_i^{f_i(A)}\cdot \prod_{i=0}^r q_i^{e_i(A)} \\ &=& P_{r,{\mathfrak p}}(A). \end{eqnarray*} \end{proof} \section{Links in multi-parameter random simplicial complexes.} In this section we show that links of simplexes in a multi-parameter random complex are also multi-parameter random simplicial complexes and we find the probability multi-parameters of the links. We also study the intersections of the links and find their probability multi-parameters. First we consider the link of a single vertex. \begin{lemma}\label{link} Let $Y\in {\Omega_n^r}$ be a multi-parameter random complex with respect to the measure ${\rm P}_{r,{\mathfrak p}}$, where ${\mathfrak p}=(p_0, p_1, \dots, p_r)$. Then the link of any vertex of $Y$ is a multi-parameter random complex $L\in \Omega_{n-1}^{r-1}$ with the multi-parameter ${\mathfrak p}' = (p'_0, p'_1, \dots, p'_{r-1})$ where $$p'_i=p_ip_{i+1}, \quad \mbox{for} \quad i=0, 1, \dots, r-1.$$ \end{lemma} \begin{proof} Let us assume that $Y$ contains the vertex $1\in \{1, \dots, n\}$.
Then the link of $1$ in $Y$ is the union of all simplexes $(i_0, i_1, \dots, i_p)\subset \Delta_n$ such that $1<i_0<i_1< \dots<i_p\le n$ and the simplex $(1, i_0, i_1, \dots, i_p)$ is contained in $Y$. Let $\Delta'\subset \Delta_n$ denote the simplex spanned by $\{2, 3, \dots, n\}$. If $Y\subset \Delta_n$ is a subcomplex of dimension $\le r$ containing $1$, then the link of 1 in $Y$, denoted $L(Y)\subset \Delta'$, is a subcomplex of dimension $\le r-1$. We may define the following probability function on the set of all subcomplexes $L\subset \Delta'^{(r-1)}$: \begin{eqnarray} {\mathbf P}(L) = p_0^{-1}\cdot \sum_{1\in Y \& L(Y)=L} {\rm P}_{r, {\mathfrak p}}(Y). \end{eqnarray} The fact that $\mathbf P$ is a probability measure follows from Corollary \ref{cont2c} applied to the subcomplex $A=\{1\}$. We want to apply to the measure $\mathbf P$ the criterion of Lemma \ref{characterisation}. Hence we need to compute $${\mathbf P}(Z\supset L) = \sum_{Z\supset L} {\mathbf P}(Z),$$ where $Z$ runs over all subcomplexes of $\Delta'^{(r-1)}$ containing $L$. Clearly we have \begin{eqnarray*}{\mathbf P}(Z\supset L) &=& p_0^{-1} \sum_{1\in Y \& L(Y)\supset L} {\rm P}_{r, {\mathfrak p}}(Y) \\ &=& p_0^{-1}\cdot {\rm P}_{r,{\mathfrak p}}(Y\supset CL) .\end{eqnarray*} Here $CL\subset \Delta_n$ denotes the cone over $L$ with apex $1$. Applying Corollary \ref{cont2c} and observing that \begin{eqnarray*}f_i(CL) &=& f_i(L)+f_{i-1}(L), \quad i= 1, \dots, r,\\ f_0(CL)&=& f_0(L)+1,\end{eqnarray*} we get (since $f_r(L)=0$) $${\rm P}_{r,{\mathfrak p}}(Y\supset CL) = p_0\cdot (p_0p_1)^{f_0(L)}\cdot (p_1p_2)^{f_1(L)}\cdots (p_{r-1}p_r)^{f_{r-1}(L)}.$$ Thus, $${\mathbf P}(Z\supset L) = \prod_{i\ge 0} \left(p_ip_{i+1}\right)^{f_i(L)}$$ and hence Lemma \ref{characterisation} implies that $L$ is a multi-parameter random complex with the multi-parameter ${\mathfrak p}' = (p_0p_1, p_1p_2, \dots, p_{r-1}p_r).$ \end{proof} Next we consider the general case. \begin{lemma}\label{linkk} Let $Y\in {\Omega_n^r}$ be a multi-parameter random complex with respect to the measure ${\rm P}_{r,{\mathfrak p}}$, where ${\mathfrak p}=(p_0, p_1, \dots, p_r)$. Then the link of any $k$-dimensional simplex of $Y$ (where $k<r$) is a multi-parameter random simplicial complex $L\in \Omega_{n-k-1}^{r-k-1}$ with the multi-parameter $${\mathfrak p}' = (p'_0, p'_1, \dots, p'_{r-k-1})$$ where \begin{eqnarray}\label{haha} p'_i=\prod_{j=i}^{i+k+1} p_j^{\binom {k+1}{j-i}}. \end{eqnarray} For example, for $k=1$ we have $$p'_i=p_ip_{i+1}^2p_{i+2},$$ and for $k=2$, $$p'_i= p_ip_{i+1}^3p_{i+2}^3p_{i+3}.$$ \end{lemma} \begin{proof} Let $\sigma_0\subset \Delta_n$ be a fixed $k$-dimensional simplex; without loss of generality we may assume that $\sigma_0=(1, 2, \dots, k+1)$. Consider the complexes $Y\in {\Omega_n^r}$ containing $\sigma_0$; for each of these complexes let $L(Y)$ denote the link of $\sigma_0$ in $Y$. Clearly $L(Y)$ is a subcomplex of the simplex $\Delta'$ spanned by the vertices $k+2, \dots, n$. Since $\dim L(Y)\le r-k-1$ we may view $L(Y)$ as an element of $\Omega_{n-k-1}^{r-k-1}$. Recall that the link of $\sigma_0$ in $Y$ is the union of all simplexes $(i_0, i_1, \dots, i_p)\subset \Delta'$ such that $k+1<i_0<i_1< \dots<i_p\le n$ and the simplex $(1, 2, \dots, k+1, i_0, i_1, \dots, i_p)$ is contained in $Y$.
As in the previous Lemma, define the following probability function on the set of all subcomplexes $L\subset \Delta'^{(r-k-1)}$: \begin{eqnarray}\label{probinduced} {\mathbf P}(L) = \left[\prod_{i=0}^k p_i^{\binom {k+1}{i+1}}\right]^{-1} \cdot \sum_{\sigma_0\subset Y \& L(Y)=L} {\rm P}_{r, {\mathfrak p}}(Y). \end{eqnarray} Here $Y$ runs over all subcomplexes $Y\in {\Omega_n^r}$ containing the simplex $\sigma_0$. The first factor normalises (\ref{probinduced}) and makes it a probability measure, as follows from Corollary \ref{cont2c} applied to the subcomplex $A=\sigma_0$. We want to apply to the measure $\mathbf P$ the criterion of Lemma \ref{characterisation} and we need to compute $${\mathbf P}(Z\supset L) = \sum_{Z\supset L} {\mathbf P}(Z)$$ where $Z\subset \Delta'$ runs over all subcomplexes of dimension $\le r-k-1$ containing $L$. Clearly we have \begin{eqnarray*}{\mathbf P}(Z\supset L) &=& \left[\prod_{i=0}^k p_i^{\binom {k+1}{i+1}}\right]^{-1} \cdot \sum_{\sigma_0\subset Y \& L(Y)\supset L} {\rm P}_{r, {\mathfrak p}}(Y) \\ &=& \left[\prod_{i=0}^k p_i^{\binom {k+1}{i+1}}\right]^{-1} \cdot {\rm P}_{r,{\mathfrak p}}(Y\supset \sigma_0\ast L) .\end{eqnarray*} Here $\sigma_0\ast L\subset \Delta_n$ denotes the join of $\sigma_0$ and $L$. To compute the last factor we may apply Corollary \ref{cont2c}. Note that \begin{eqnarray*}f_i(\sigma_0\ast L) &=& \sum_{j=0}^{k+1} \binom {k+1} j \cdot f_{i-j}(L), \quad \mbox{for} \quad i>k\\ f_i(\sigma_0\ast L) &=& \sum_{j=0}^{i} \binom {k+1} j \cdot f_{i-j}(L)+ \binom {k+1}{i+1},\quad \mbox{for} \quad i\le k.\end{eqnarray*} Hence, we get (since $f_r(L)=0$) $${\rm P}_{r,{\mathfrak p}}(Y\supset \sigma_0\ast L) = \prod_{i=0}^k p_i^{\binom {k+1}{i+1}} \cdot \prod_{i=0}^r \prod_{j=0}^{k+1} \left[p_i^{\binom {k+1} j}\right]^{f_{i-j}(L)}.$$ Thus, substituting in the formula above we obtain $${\mathbf P}(Z\supset L) = \prod_{i=0}^{r-k-1} \left(p'_i\right)^{f_i(L)}$$ where the numbers $p'_i$ are given by (\ref{haha}). This completes the proof. \end{proof} \begin{example}\label{ex33} {\rm Let $r=n-1$ and ${\mathfrak p}=(1, p, 1, \dots, 1)$. Hence we consider clique complexes $Y$ of Erd\H{o}s--R\'enyi random graphs with edge probability $p$. The link of a vertex of $Y$ has multi-parameter ${\mathfrak p}'=(p, p, 1, \dots, 1)$, i.e. it has two probability parameters. The link of an edge of $Y$ has probability multi-parameter $(p^2, p, 1, \dots, 1)$ and the link of a 2-simplex has probability multi-parameter $(p^3, p, 1, \dots, 1)$. Thus, links of simplexes in clique complexes of Erd\H{o}s--R\'enyi random graphs are also clique complexes but the underlying random graphs are of slightly more general nature as they have a vertex probability parameter $\not=1$. } \end{example} Recall that the degree of a $k$-dimensional simplex $\sigma$ in a simplicial complex $Y$ is defined as the number of $(k+1)$-dimensional simplexes containing $\sigma$. Clearly the degree of $\sigma$ in $Y$ coincides with the number of vertices in the link of $\sigma$ in $Y$. Hence, applying Lemma \ref{tt} in combination with Lemma \ref{linkk} we obtain: \begin{corollary} The degree of a vertex of a random complex $Y\in \Omega_n^r$ with respect to ${\rm P}_{r, {\mathfrak p}}$, where ${\mathfrak p}=(p_0, p_1, \dots, p_r)$, has binomial distribution ${\it Bi}(n-1, p_0p_1)$.
In other words, the probability that a vertex of $Y$ has degree $k$ equals $$\binom {n-1} k \cdot (p_0p_1)^k \cdot (1-p_0p_1)^{n-1-k}, $$ where $k=0,1, \dots, n-1.$ More generally, the degree of a $k$-dimensional simplex $\sigma$ of a random complex $Y\in \Omega_n^r$ with respect to ${\rm P}_{r, {\mathfrak p}}$ has binomial distribution ${\it Bi}(n-k-1, p)$ where $$p=\prod_{i=0}^{k+1} p_i^{\binom {k+1} i} = p_0p_1^{k+1}p_2^{\binom {k+1} 2} \cdots p_k^{k+1} p_{k+1}.$$ In other words, the probability that a $k$-dimensional simplex of $Y$ has degree $d$ equals $$\binom {n-k-1} d \cdot p^d \cdot (1-p)^{n-k-1-d}, $$ where $d=0,1, \dots, n-k-1.$ \end{corollary} The following Corollary will be used later in this paper. \begin{corollary} \label{degreezero} Assume that $p_0=\omega/n$ with $\omega\to \infty$ and \begin{eqnarray} p_1^2\cdot p_2 \, \ge\, \frac{2\log \omega + c}{\omega}, \end{eqnarray} where $c$ is a constant. Then there exists $N$ (depending on the sequence $\omega$ and on $c$) such that for all $n>N$ the probability that a random complex $Y\in \Omega_n^r$ with respect to the measure ${\rm P}_{r, {\mathfrak p}}$ has an edge of degree zero is less than $$p_1\cdot e^{2-c}.$$ \end{corollary} \begin{proof} An edge of $Y$ has degree zero if and only if its link in $Y$ is the empty set $\emptyset$. Since the link of an edge has the multi-parameter $(p'_0, p'_1, \dots, p'_{r-2})$ where $$p'_i = p_ip_{i+1}^2p_{i+2}$$ (see Lemma \ref{linkk}) we may apply the result of Example \ref{emptyset} to obtain that the probability that a given edge of $Y$ has degree zero equals $$(1-p_0p_1^2p_2)^{n-2}.$$ Hence the expectation of the number of degree zero edges in $Y$ equals \begin{eqnarray*} \binom n 2 \cdot p_0^2\cdot p_1\cdot \left( 1-p_0p_1^2p_2 \right)^{n-2} &\le& \frac{1}{2}\cdot \omega^2\cdot p_1\cdot \exp\left(-p_0p_1^2p_2(n-2)\right)\\ &\le & \frac{1}{2}\cdot p_1 \cdot \exp\left(2\log \omega - (2\log \omega +c)\cdot \frac{n-2}{n}\right)\\ &=& \frac{1}{2}\cdot p_1\cdot \left\{\exp\left(\frac{4\log \omega}{n}\right)\cdot e^{\frac{2c}{n}}\right\}\cdot e^{-c}. \end{eqnarray*} The first factor in the curly brackets tends to $1$ and hence it is less than $2$ for large $n$. The second factor in the curly brackets is less than $e^2$ (since our assumptions imply $c<\omega\le n$). This completes the proof. \end{proof} Next we consider the intersections of links of several vertices. \begin{lemma} \label{linksintersection} Let $k<n$ be fixed integers and let $Y\in {\Omega_n^r}$ be a random $r$-dimensional simplicial complex with probability multi-parameter ${\mathfrak p}=(p_0, \dots, p_r)$. Consider the intersection $L(Y)$ of links of $k$ distinct vertices of $Y$. Then $L(Y)\in \Omega_{n-k}^{r-1}$ is a random simplicial complex with respect to the multi-parameter ${\mathfrak p}'=(p'_0, \dots, p'_{r-1})$ where \begin{eqnarray}\label{klinks} p'_i =p_ip_{i+1}^k, \quad\mbox{for}\quad i=0, \dots, r-1.\end{eqnarray} \end{lemma} \begin{proof} Consider the set $\Omega'$ of simplicial complexes $Y\in {\Omega_n^r}$ containing the vertices $\{1, \dots, k\}$; the function $Y\mapsto p_0^{-k}{\rm P}_{r, {\mathfrak p}}(Y)$ is a probability measure on $\Omega'$ (by Corollary \ref{cont2c}). For $Y\in \Omega'$ let $L_i(Y)$ denote the link of the vertex $i$ in $Y$ where $i=1, \dots, k$. Let $\Delta'$ denote the simplex spanned by the remaining vertices $k+1, k+2, \dots, n$. The intersection $L(Y)=L_1(Y)\cap \dots\cap L_k(Y)$ is a subcomplex of $\Delta'$ of dimension $\le r-1$.
We obtain a map $$\Omega' \to \Omega_{n-k}^{r-1}, \quad Y\mapsto L(Y)$$ and we wish to describe the pushforward measure $\mathbf P$ on $\Omega_{n-k}^{r-1}$ which (by the definition) is given by ${\mathbf P}(Z)=p_0^{-k}\cdot \sum_{L(Y)=Z} {\rm P}_{r, {\mathfrak p}}(Y).$ Given a subcomplex $L\subset \Delta'$, consider the quantity $${\mathbf P}(Z\supset L) = \sum_{L\subset Z\subset \Delta'} {\mathbf P}(Z);$$ in the sum $Z$ runs over all subcomplexes of $\Delta'$ satisfying $L\subset Z\subset \Delta'$, $\dim Z\le r-1$. By the construction of ${\mathbf P}$, we have $${\mathbf P}(Z\supset L) = p_0^{-k} \cdot \sum_{L(Y)\supset L} {\rm P}_{r, {\mathfrak p}}(Y);$$ here $Y$ runs over all subcomplexes $Y\subset \Delta_n^{(r)}$ containing the vertices $1, \dots, k$ and such that $L(Y)\supset L$. These last two conditions can be expressed by saying that $Y$ contains the join $$J=\{1, 2, \dots, k\}\ast L$$ as a subcomplex. Note that $J$ is the union of $k$ cones over $L$ with vertices at the points $1, \dots, k$. One has \begin{eqnarray*} f_i(J) &=& f_i(L)+kf_{i-1}(L), \quad \mbox{for}\quad i>0, \\ f_0(J) &=& f_0(L) +k. \end{eqnarray*} Thus, using Corollary \ref{cont2c} we obtain \begin{eqnarray*} {\mathbf P}(Z\supset L) &=& p_0^{-k} \cdot \sum_{J\subset Y} {\rm P}_{r, {\mathfrak p}}(Y) \\ &=& p_0^{-k}\cdot \prod_{i=0}^r p_i^{f_i(J)}\\ &=& (p_0p_1^k)^{f_0(L)}\cdots (p_{r-1}p_r^k)^{f_{r-1}(L)}. \end{eqnarray*} Finally we apply Lemma \ref{characterisation} which implies that $\mathbf P$ is a multi-parameter probability measure on $\Omega_{n-k}^{r-1}$ with respect to the multi-parameter (\ref{klinks}). \end{proof} \section{Intersections of random complexes} In this section we show that the intersection of multi-parameter random simplicial complexes is also a multi-parameter random simplicial complex with respect to the product of multi-parameters. Let $Y, Y'\in {\Omega_n^r}$ be two simplicial subcomplexes of $\Delta_n^{(r)}$. Suppose that both $Y, Y'$ are random and that their probability measures are ${\rm P}_{r, {\mathfrak p}}$ and ${\rm P}_{r, {\mathfrak p}'}$, correspondingly, see (\ref{def1}). Here ${\mathfrak p} = (p_0, \dots, p_r)$ and ${\mathfrak p}'=(p'_0, \dots, p'_r)$ are the corresponding probability multi-parameters. The intersection $$Z=Y\cap Y'\, \in {\Omega_n^r}$$ appears with probability \begin{eqnarray}\label{intersection} {\mathbf P}(Z) = \sum_{Y\cap Y'=Z} {{\rm P}}_{r, {\mathfrak p}}(Y)\cdot {{\rm P}}_{r, {\mathfrak p}'}(Y'). \end{eqnarray} This measure $\mathbf P$ is the pushforward of the product measure ${\rm P}_{r,{\mathfrak p}}\times {\rm P}_{r, {\mathfrak p}'}$ under the map $${\Omega_n^r}\times {\Omega_n^r} \to {\Omega_n^r}, \quad (Y, Y')\mapsto Y\cap Y'.$$ \begin{lemma} For $Z\in {\Omega_n^r}$ one has $${\mathbf P}(Z) = {\rm P}_{r, {\mathfrak p}{\mathfrak p}'}(Z),$$ where ${\mathfrak p}{\mathfrak p}'=(p_0p'_0, p_1p'_1, \dots, p_rp'_r)$. \end{lemma} \begin{proof} Let ${\mathbf P}$ be the probability measure on ${\Omega_n^r}$ given by (\ref{intersection}). To apply Lemma \ref{characterisation} we compute \begin{eqnarray*} {\mathbf P}(Z\supset A) &=& \sum_{Z\supset A} {\mathbf P}(Z) \\ &=& \sum_{Y\cap Y'\supset A} {{\rm P}}_{r, {\mathfrak p}}(Y)\cdot {{\rm P}}_{r, {\mathfrak p}'}(Y') \\ &=& \left[ \sum_{Y\supset A} {\rm P}_{r, {\mathfrak p}}(Y) \right]\cdot \left[ \sum_{Y'\supset A} {\rm P}_{r, {\mathfrak p}'}(Y') \right]\\ &=& \prod_{i\ge 0} p_i^{f_i(A)} \cdot \prod_{i\ge 0} {p'_i}^{f_i(A)} \\ &=& \prod_{i\ge 0} (p_ip'_i)^{f_i(A)} .
\end{eqnarray*} Now, Lemma \ref{characterisation} gives ${\mathbf P}(Z) = {\rm P}_{r, {\mathfrak p}{\mathfrak p}'}(Z)$ for any $Z\in {\Omega_n^r}$. \end{proof} The previous lemma suggests a way in which a random simplicial complex $Y\in {\Omega_n^r}$ with general probability multi-parameter ${\mathfrak p}=(p_0, p_1, \dots, p_r)$ can be generated. Consider the probability multi-parameter $${\mathfrak p}_i=(1, \dots, 1, p_i, 1, \dots, 1)$$ where $p_i$ occupies the position with index $i$, for $i=0, 1, \dots, r$. Generate a random complex $Y_i\in {\Omega_n^r}$ with respect to the measure ${\rm P}_{r, {\mathfrak p}_i}$. Then the intersection $$Y=Y_0\cap Y_1\cap \dots \cap Y_r$$ is a random complex with respect to the original measure ${\rm P}_{r, {\mathfrak p}}$. Hence, ${\rm P}_{r,{\mathfrak p}}$ is the pushforward of the product measure ${\rm P}_{r, {\mathfrak p}_0}\times {\rm P}_{r, {\mathfrak p}_1}\times\cdots\times {\rm P}_{r, {\mathfrak p}_r}$ with respect to the map $${\Omega_n^r}\times \cdots \times {\Omega_n^r} \to {\Omega_n^r}, \quad (Y_0, \dots, Y_r)\mapsto \bigcap_{i=0}^r Y_i.$$ Note that a random complex with respect to ${\rm P}_{r, {\mathfrak p}_i}$ has the following structure: (1) we start with the full $(i-1)$-dimensional skeleton $\Delta_n^{(i-1)}$ and (2) add $i$-dimensional simplexes at random, independently of each other, with probability $p_i$ (as in the Linial - Meshulam model), and then (3) we subsequently add all the external $j$-dimensional faces to the complex obtained on the previous step for $j=i+1, i+2, \dots, r$. \begin{example}{\rm Consider the following construction. Start with a multi-para\-me\-ter random simplicial complex $Y$ with the multi-parameter ${\mathfrak p}=(p_0, \dots, p_r)$. Let $\Delta'\subset \Delta_n$ denote the simplex spanned by the vertices $2, 3, \dots, n$. We claim that the intersection $$Y\cap \Delta'\in \Omega_{n-1}^{r}$$ is a multi-parameter random simplicial complex with the same multi-parameter ${\mathfrak p}$. Indeed, the pushforward of the measure ${\rm P}_{r, {\mathfrak p}}$ under the map $$\Omega_{n}^{r} \to \Omega_{n-1}^{r}, \quad Y\mapsto Y\cap \Delta'$$ is $${\mathbf P}(Z) = \sum_{Y\cap \Delta'=Z} {\rm P}_{r, {\mathfrak p}}(Y).$$ For a subcomplex $A\subset \Delta'$ we have (using Corollary \ref{cont2c}) $${\mathbf P}(Z\supset A) = \sum_{Y\supset A} {\rm P}_{r, {\mathfrak p}}(Y) = \prod_{i\ge 0} p_i^{f_i(A)}.$$ Now the result follows from Lemma \ref{characterisation}. }\end{example} \section{Isolated subcomplexes}\label{secisolated} We shall say that a simplicial subcomplex $S\subset Y$ is {\it isolated} if no edge of $Y$ connects a vertex of $S$ with a vertex of $Y$ which is not in $S$. In other words, $S\subset Y$ is isolated if it is a union of several connected components of $Y$. \begin{lemma}\label{lisolated} Let $S\subset \Delta_n^{(r)}$ be a subcomplex and let $Y\in {\Omega_n^r}$ be a random simplicial complex with respect to a multi-parameter ${\mathfrak p}=(p_0, \dots, p_r)$. The probability that $Y$ contains $S$ as an isolated subcomplex equals \begin{eqnarray}\label{isolated} \left[q_0 +p_0\cdot q_1^{f_0(S)}\right]^{n-f_0(S)}\cdot \prod_{i=0}^r p_i^{f_i(S)}. \end{eqnarray} \end{lemma} \begin{proof} Let $K$ be a subset of $\{1, \dots, n\}-V(S)$ where $V(S)$ denotes the set of vertices of $S$. Denote $A_K=S\cup K$ and $B_K=\Delta_S \cup \Delta_K$, where $\Delta_S$ and $\Delta_K$ are the simplexes spanned by $V(S)$ and $K$ respectively. The pair $A_K\subset B_K$ satisfies the condition of Lemma \ref{cont}.
Indeed, the external faces of $B_K$ are the vertices $i\notin V(S)\cup K$ and the edges connecting the elements of $K$ and the vertices of $S$; these external faces of $B_K$ are also external faces of $A_K$. Denoting $|K|=k$ we have \begin{eqnarray*}f_0(A_K) &=& f_0(S) +k,\\ f_i(A_K)&=&f_i(S), \quad i\ge 1,\\ e_0(B_K)&=&n-f_0(S) -k,\\ e_1(B_K)&=& k\cdot f_0(S),\\ e_i(B_K) &=& 0, \quad \mbox{for}\quad i\ge 2.\end{eqnarray*} Therefore, applying Lemma \ref{cont} we find \begin{eqnarray*} {\rm P}_{r, {\mathfrak p}}(A_K\subset Y\subset B_K) &=& \prod_{i=0}^r p_i^{f_i(A_K)}\cdot \prod_{i=0}^r q_i^{e_i(B_K)}\\ &=& p_0^k \cdot q_0^{n-f_0(S)-k}\cdot q_1^{kf_0(S)}\cdot \prod_{i=0}^r p_i^{f_i(S)}\\ &=& \left[\frac{p_0q_1^{f_0(S)}}{q_0}\right]^k \cdot q_0^{n-f_0(S)}\cdot \prod_{i=0}^r p_i^{f_i(S)}. \end{eqnarray*} Clearly, the probability that $Y$ contains $S$ as an isolated subcomplex equals the sum $$\sum_K {\rm P}_{r, {\mathfrak p}}(A_K\subset Y\subset B_K) $$ where $K$ runs over all subsets of $\{1, \dots, n\}-V(S)$. Hence we obtain that the desired probability equals \begin{eqnarray*} &&\sum_K {\rm P}_{r, {\mathfrak p}}(A_K\subset Y\subset B_K) \\ &=& q_0^{n-f_0(S)}\cdot \prod_{i=0}^r p_i^{f_i(S)}\cdot \sum_{k=0}^{n-f_0(S)} \binom {n-f_0(S)} k \cdot \left[\frac{p_0q_1^{f_0(S)}}{q_0}\right]^k\\ &=& q_0^{n-f_0(S)}\cdot \prod_{i=0}^r p_i^{f_i(S)}\cdot \left[ 1+ \frac{p_0q_1^{f_0(S)}}{q_0}\right]^{n-f_0(S)} \end{eqnarray*} which is equivalent to formula (\ref{isolated}). \end{proof} \begin{example}\label{vertex}{\rm Consider the special case when the complex $S\subset \Delta_n^{(r)}$ is a single point, $S=\{i\}$. We obtain that the probability that $Y$ contains the vertex $\{i\}$ as an isolated point equals $$p_0(1-p_0p_1)^{n-1}.$$ }\end{example} \begin{example}\label{tree}{\rm Let $S_v\subset \Delta_n^{(r)}$ be a tree with $v$ vertices. Then the probability that $Y$ contains $S_v$ as an isolated subcomplex equals \begin{eqnarray} \left[q_0+p_0\cdot q_1^v\right]^{n-v}\cdot p_0^v\cdot p_1^{v-1}. \end{eqnarray} }\end{example} We shall use the results of Examples \ref{vertex} and \ref{tree} to describe the range of the probability multi-parameter for which the random complex contains an isolated vertex, a.a.s. \begin{lemma}\label{lm54} Let $Y\in \Omega_n^r$ be a random complex with respect to the probability measure ${\rm P}_{r, {\mathfrak p}}$, where ${\mathfrak p}=(p_0, p_1, \dots, p_r)$. As above, we shall assume that $p_0=\omega/n$, where $\omega\to \infty$. Then: \newline (A) If \begin{eqnarray}\label{minus} p_1=\frac{\log \omega - \omega_1}{\omega},\end{eqnarray} for a sequence $\omega_1\to \infty$ then a random complex $Y\in \Omega_n^r$ contains an isolated vertex, a.a.s. In particular, under this condition a random complex $Y$ is disconnected, a.a.s. \newline (B) If \begin{eqnarray}\label{plus} p_1=\frac{\log \omega + \omega_1}{\omega},\quad \omega_1\to \infty,\end{eqnarray} then a random complex $Y\in\Omega_n^r$ contains no isolated vertices, a.a.s. \end{lemma} We shall see below in \S \ref{seccon} that under condition (\ref{plus}) a random complex $Y$ is connected, a.a.s. \begin{proof} For $i\in \{1, \dots, n\}$ let $X_i: \Omega_{n}^r \to {\mathbf R}$ denote the random variable $X_i(Y)=1$ if $Y$ contains $i$ as an isolated vertex, otherwise $X_i(Y)=0$. The sum $X=\sum_{i=1}^n X_i$ counts the number of isolated vertices in a random simplicial complex.
By Example \ref{vertex} we have $$\mathbb E(X)=np_0(1-p_0p_1)^{n-1}.$$ First, we shall assume (\ref{minus}). Then $$\mathbb E(X)= \omega\left(1- \frac{\log\omega -\omega_1}{n}\right)^{n-1}$$ and denoting $x= \frac{1}{n}(\log \omega -\omega_1)$ and using the power series expansion for $\log(1-x)$ we obtain \begin{eqnarray*} \log \mathbb E(X) &=& \log \omega -(n-1)\left[x +\frac{1}{2}x^2 +\frac{1}{3}x^3 +\dots\right]\\ \\ &=& \frac{1}{n}\log \omega +\frac{n-1}{n}\cdot \omega_1 -(n-1)\cdot x^2\cdot [\frac{1}{2}+\frac{1}{3}x + \frac{1}{4} x^2 + \dots]\\ \\ &\ge & \frac{n-1}{n}\cdot \omega_1 \, -\, 1. \end{eqnarray*} Here we used that $x=p_0p_1\to 0$ and $$nx^2 \le n\cdot \frac{(\log \omega)^2}{n^2} = \frac{(\log \omega)^2}{n}\le \frac{(\log n)^2}{n} \to 0.$$ Therefore, the expectation $\mathbb E(X)$ tends to infinity. To show that $X>0$ under the assumption (\ref{minus}) we shall apply the Chebyshev inequality in the form \begin{eqnarray}\label{cheb1} {\rm P}_{r, {\mathfrak p}}(X=0) \le \frac{\mathbb E(X^2)}{\mathbb E(X)^2} -1. \end{eqnarray} Hence statement (A) of the lemma follows once we know that the ratio $\frac{\mathbb E(X^2)}{\mathbb E(X)^2}$ tends to $1$. Clearly $\mathbb E(X^2) = \sum_{i,j}\mathbb E(X_iX_j)$ and for $i\not=j$ the number $\mathbb E(X_iX_j)$ is the probability that $i$ and $j$ are isolated vertices of $Y$. Obviously, this probability equals the difference $a-b$ where $a$ is the probability that $Y$ contains the complex $S=\{i,j\}$ as an isolated subcomplex and $b$ is the probability that $Y$ contains the edge $(ij)$ as an isolated subcomplex. Applying Lemma \ref{lisolated} one obtains that $a= \left[q_0+p_0q_1^2\right]^{n-2}p_0^2$ while $b=\left[q_0+p_0q_1^2\right]^{n-2}p_0^2p_1$ and hence for $i\not=j$, $$\mathbb E(X_iX_j) = \left[q_0+p_0q_1^2\right]^{n-2}\cdot p_0^2\cdot q_1.$$ We obtain $$\mathbb{E}(X^2)=\mathbb{E}(X)+(n^2-n)\cdot (q_0+p_0q_1^2)^{n-2}\cdot p_0^2\cdot q_1. $$ Hence \begin{eqnarray*} \frac{\mathbb E(X^2)}{\mathbb E(X)^2}= \mathbb E(X)^{-1} + \left(1-\frac{1}{n}\right) \cdot \left[1+ \frac{p_0q_0p_1^2}{(1-p_0p_1)^2}\right]^{n-2}\cdot \frac{q_1}{(1-p_0p_1)^2}. \end{eqnarray*} The first summand $\mathbb E(X)^{-1}$ tends to $0$ (as shown above). Denoting $$y= \frac{p_0q_0p_1^2}{(1-p_0p_1)^2}$$ we observe that $$ny=\frac{(\log \omega - \omega_1)^2}{\omega}\cdot \frac{q_0}{(1-p_0p_1)^2} $$ tends to $0$ as $n\to \infty$. Hence \begin{eqnarray}\label{power} 1\le \left[1+\frac{p_0q_0p_1^2}{(1-p_0p_1)^2}\right]^{n-2} \le \sum_{k=0}^{n-2} [(n-2)y]^k \le \frac{1}{1-(n-2)y}. \end{eqnarray} and both sides of this inequality tend to $1$. Hence we conclude that the ratio $$\frac{\mathbb E(X^2)}{\mathbb E(X)^2}$$ tends to $1$ as $n\to \infty$ and (\ref{cheb1}) implies that a random complex $Y\in \Omega_n^r$ contains an isolated point with probability $\to 1$ as $n\to \infty$. Next we prove statement (B) under the assumption (\ref{plus}). We use the first moment method and show that the expectation $\mathbb E(X)$ tends to zero if (\ref{plus}) holds. As above, we have \begin{eqnarray*} \mathbb E(X) &=& np_0\left(1-p_0p_1\right)^{n-1} \\ &=& \omega \cdot \left(1-\frac{\log \omega + \omega_1}{n}\right)^{n-1} \\ &<& \omega \cdot e^{-\frac{\log \omega +\omega_1}{n}\cdot (n-1)}\\ &=& \omega^{\frac{1}{n}}\cdot e^{-\frac{n-1}{n} \cdot \omega_1}. \end{eqnarray*} The logarithm of the first factor $\frac{1}{n}\log \omega \le \frac{1}{n}\log n$ tends to zero and hence the first factor $\omega^{\frac{1}{n}}$ is bounded. Clearly, the second factor tends to $0$ as $n\to \infty$. 
Thus, by the first moment method, a random complex $Y\in \Omega_n^r$ has an isolated vertex with probability tending to $0$ with $n$. \end{proof} \section{Connectivity of random complexes}\label{seccon} In this section we find the range (threshold) of connectivity of a multi-parameter random simplicial complex $Y\in \Omega_n^r$ with respect to the probability measure ${\rm P}_{r, {\mathfrak p}}$ where $${\mathfrak p}=(p_0, p_1, \dots, p_r)$$ is the multi-parameter. Everywhere in this section we shall assume that \begin{eqnarray}\label{everywhere} p_0=\frac{\omega}{n}, \quad \mbox{where}\quad \omega \to \infty. \end{eqnarray} This is to ensure that the number of vertices of $Y$ tends to $\infty$. The connectivity depends only on the 1-skeleton and hence only the parameters $p_0$ and $p_1$ are relevant. Our treatment in this section is similar to the classical analysis of the connectivity of random graphs in the Erd\H{o}s--R\'{e}nyi model with an extra difficulty which arises due to the number of vertices being random. In the following section we apply Theorem \ref{propcon} to establish the region of simple connectivity of multi-parameter random simplicial complexes; this region depends on combination of the parameters $p_0, p_1, p_2$. The following is the main result of this section. \begin{theorem}\label{propcon} Consider a random simplicial complex $Y\in\Omega_{n}^{r}$ (where $r\ge 1$) with respect to a multi-parameter probability measure ${\rm P}_{r, {\mathfrak p}}$ satisfying (\ref{everywhere}). Assume that \begin{eqnarray}\label{cc} p_1\, \ge\, \frac{k\log \omega+c}{\omega}\end{eqnarray} for an integer $k\ge 1$ and a constant $c>0$. Then there exists a constant $N>0$ (depending on the sequence $\omega$ and on $c$) such that for all $n>N$ the complex $Y$ is connected with probability greater than \begin{eqnarray}\label{probab} 1-Ce^{-c}\omega^{1-k}, \end{eqnarray} where $C$ is a universal constant. \end{theorem} \begin{corollary}\label{corconn} If additionally to (\ref{everywhere}) one has $$p_1=\frac{\log\omega+\omega_1}{\omega},$$ for a sequence $\omega_1\to \infty$ then a random complex $Y\in \Omega_{n}^{r}$ with respect to ${\rm P}_{r, {\mathfrak p}}$ is connected, a.a.s.. \end{corollary} Corollary \ref{corconn} complements the statement of part (A) of Lemma \ref{lm54} saying that a random complex $Y\in \Omega_n^r$ is disconnected if $p_1=\frac{\log\omega-\omega_1}{\omega}.$ Corollary \ref{corconn} follows from Theorem \ref{propcon} in an obvious way. \begin{example}\label{example63}{\rm Assume that $p_0=n^{-\alpha_0}$ and $p_1=n^{-\alpha_1}$ where $\alpha_0, \alpha_1\ge 0$ are constants. In this special case Corollary \ref{corconn} implies that a random simplicial complex $Y\in \Omega_n^r$ is {\it connected} for \begin{eqnarray}\label{condomain} \alpha_0+\alpha_1<1. \end{eqnarray} Note that part (A) of Lemma \ref{lm54} implies that $Y$ is {\it disconnected} if \begin{eqnarray}\label{discondomain} \alpha_0+\alpha_1>1. \end{eqnarray} } \end{example} \begin{proof}[Proof of Theorem \ref{propcon}.] For $v\ge 1$ let $E_v\subset \Omega_n^r$ denote the set of disconnected simplicial complexes $Y\subset \Delta_n$ such that $$v=\min_{j\in J} f_0(Y_j),$$ where $$Y=\sqcup_{j\in J} Y_j$$ is the decomposition of $Y$ into the connected components. In other words, $v$ is the smallest number of vertices contained in a single connected component of $Y\in E_v$. 
For $t=0, 1, \dots, n$ we denote by $E_{v,t}$ the intersection $$E_{v,t} = E_v \cap \Omega_{n,t}^r$$ where $\Omega_{n,t}^r$ is the set of all complexes $Y\in \Omega_{n}^{r}$ with $f_0(Y)=t$. Clearly, a complex $Y\in \Omega_{n,t}^r$ is disconnected if and only if $Y\in E_{v,t}$ for some $1\le v\le t/2$. By Lemma \ref{vertices}, for any fixed $\epsilon \in (0, 1/2)$, \begin{eqnarray}\label{25} \sum_{|t-\omega|> \delta\omega} {\rm P}_{r, {\mathfrak p}}(\Omega_{n,t}^r) \le 2\exp(-\frac{1}{3}\omega^{2\epsilon}), \end{eqnarray} where \begin{eqnarray}\label{delta}\delta= \omega^{-\frac{1}{2}+\epsilon}.\end{eqnarray} (One may assume everywhere below that $\epsilon = 1/4$). Our goal is to estimate from above the sum \begin{eqnarray}\label{sum11} \sum_{|t-\omega|\le\delta \omega}\, \sum_{v= 1}^{t/2} {\rm P}_{r, {\mathfrak p}}(E_{v,t}) \end{eqnarray} since (using (\ref{25})), \begin{eqnarray}\label{sum2} {\rm P}_{r,{\mathfrak p}}(Y; b_0(Y)>1) \le \sum_{|t-\omega|\le\delta \omega}\, \sum_{v= 1}^{t/2} {\rm P}_{r, {\mathfrak p}}(E_{v,t}) +2\exp(-\frac{1}{3} \omega^{2\epsilon}). \end{eqnarray} The left hand side of (\ref{sum2}) is the probability that $Y$ is disconnected (i.e. its zero-dimensional Betti number $b_0(Y)$ is greater than $1$.) Hence an upper bound for the sum (\ref{sum11}) will give an upper bound on the probability that $Y$ is disconnected. For a tree $T\subset \Delta_n$ on $v$ vertices and for a subset $K\subset \{1, \dots, n\} -V(T)$ of cardinality $t-v$, denote $$A_{T,K} = T\cup K, \quad B_{T,K}= \Delta_S \cup \Delta_K,$$ where $S=V(T)$ is the set of vertices of $T$ and $\Delta_S$ and $\Delta_K$ denote the simplexes spanned by $S$ and $K$ respectively. The pair of subcomplexes $A_{T,K}\subset B_{T,K}$ satisfies the condition of Lemma \ref{cont}. Let $P_{T,K}$ denote the probability $$P_{T,K} \, = \, {\rm P}_{r, {\mathfrak p}}(A_{T,K}\subset Y\subset B_{T,K})= p_0^{t} p_1^{v-1}q_0^{n-t}q_1^{v(t-v)},$$ where we have used Lemma \ref{cont}. Any complex $Y\in E_{v,t}$ satisfies $A_{T,K}\subset Y\subset B_{T, K}$ for a tree $T$ on $1\le v\le t/2$ vertices and for a unique subset $K$ of cardinality $t-v$. Hence, taking into account the Cayley formula for the number of trees on $v$ vertices we obtain \begin{eqnarray*} {\rm P}_{r, {\mathfrak p}}(E_{v,t}) &\le& \binom n v \cdot \binom {n-v} {t-v} \cdot v^{v-2} \cdot P_{T,K} \\ &=& \binom n v \cdot \binom {n-v} {t-v} \cdot v^{v-2} \cdot p_0^{t}\cdot p_1^{v-1}\cdot q_0^{n-t}\cdot q_1^{v(t-v)}\\ &=& \binom n t \cdot \binom t v \cdot v^{v-2} \cdot p_0^{t}\cdot p_1^{v-1}\cdot q_0^{n-t}\cdot q_1^{v(t-v)}. \end{eqnarray*} Therefore we have \begin{eqnarray*} \sum_{t=(1-\delta)\omega}^{(1+\delta)\omega} \sum_{v=1}^{t/2} {\rm P}_{r, {\mathfrak p}}(E_{v,t}) &\le& \sum_{t=(1-\delta)\omega}^{(1+\delta)\omega} \sum_{v=1}^{t/2} \binom n t \cdot \binom t v \cdot v^{v-2} \cdot p_0^{t}\cdot p_1^{v-1}\cdot q_0^{n-t}\cdot q_1^{v(t-v)}\\ &=& \sum_{t=(1-\delta)\omega}^{(1+\delta)\omega} \binom n t \cdot p_0^t\cdot q_0^{n-t} \cdot \sum_{v=1}^{t/2} \binom t v \cdot v^{v-2}\cdot p_1^{v-1}\cdot q_1^{v(t-v)}. \end{eqnarray*} Our plan is to show that there exists $N>0$ such that for the values of $t$ lying in the interval $[(1-\delta)\omega, (1+\delta)\omega]$ and for all $n>N$ the internal sum \begin{eqnarray}\label{internal}\sum_{v=1}^{t/2} \binom t v \cdot v^{v-2}\cdot p_1^{v-1}\cdot q_1^{v(t-v)}\end{eqnarray} can be estimated above by $C\omega^{1-k}e^{-c}$ where $C$ is a universal constant.
Then we will have \begin{eqnarray*} \sum_{t=(1-\delta)\omega}^{(1+\delta)\omega} \sum_{v=1}^{t/2} {\rm P}_{r, {\mathfrak p}}(E_{v,t})&\le& C\omega^{1-k}e^{-c} \sum_{t=(1-\delta)\omega}^{(1+\delta)\omega} \binom n t \cdot p_0^t\cdot q_0^{n-t}\\ &\le& C\omega^{1-k}e^{-c} \end{eqnarray*} which together with (\ref{sum2}) will complete the proof of Theorem \ref{propcon}. Note that the summand $2\exp(-\frac{1}{3} \omega^{2\epsilon})$ which appears in (\ref{sum2}) is less than $\omega^{1-k}e^{-c}$ for $n$ large enough. For the term with $v=1$ we have \begin{eqnarray*} tq_1^{t-1} &=& t(1-p_1)^{t-1} \\ &\le& (1+\delta)\omega \cdot \exp(-p_1(t-1))\\ &=& (1+\delta)\omega \cdot \exp(-p_1t)\cdot \exp(p_1)\\ &\le & (1+\delta)e\cdot \omega \cdot \exp(-\frac{k\log \omega +c}{\omega}\cdot (1-\delta)\omega)\\ &=& (1+\delta) e\cdot \omega^{1-k +k\delta}\cdot e^{-c(1-\delta)}\\ &=& \left\{(1+\delta)e\omega^{k\delta}e^{c\delta}\right\}\cdot \omega^{1-k}e^{-c}\\ &\le& 2e \cdot \omega^{1-k}e^{-c} \end{eqnarray*} for $n$ large enough. Here we used the fact that the expression in the curly brackets tends to $e$ for $n\to \infty$. Note that the factor $\omega^{k\delta}$ tends to $1$ as follows from the definition of $\delta$, see (\ref{delta}). Next consider the term with $v=2$: \begin{eqnarray*} \binom t 2 \cdot p_1\cdot (1-p_1)^{2(t-2)} &\le& t^2\exp(-p_1(t-2)\cdot 2) \\ &=& t^2\cdot \exp(-2p_1t)\cdot \exp(4p_1)\\ &\le& e^4t^2\exp\left(-\frac{k\log\omega+c}{\omega}\cdot 2(1-\delta)\omega\right)\\ &\le& \left\{e^4(1+\delta)^2\omega^{2k\delta}e^{2\delta c} \right\}\cdot\omega^{2-2k} \cdot e^{-2c}\\ &\le& 2e^4 \omega^{1-k}e^{-c} \end{eqnarray*} for $n$ large enough. We used the fact that the expression in the curly brackets tends to $e^4$ for $n\to \infty$. Consider now a term with $v\ge 3$. Using Stirling's formula we have \begin{eqnarray}\label{three}\binom t v v^{v-2} \le \frac{t^vv^{v-2}}{v!} \le \frac{(et)^v}{\sqrt{2\pi}v^{5/2}} \le (3t)^v \le (3(1+\delta)\omega)^v \le (6\omega)^v. \end{eqnarray} The function $x\mapsto x^{v-1}(1-x)^{v(t-v)}$ is decreasing for $\frac{v-1}{v-1+v(t-v)} <x<1$. Hence, observing that for $n$ large enough $$\frac{v-1}{v-1+v(t-v)}\le \frac{1}{t-v+1} \le 2t^{-1} \le \frac{2}{(1-\delta)\omega}\le \frac{k\log \omega+ c}{\omega} \le p_1 \le 1$$ we obtain \begin{eqnarray*} p_1^{v-1}q_1^{v(t-v)} &\le & \left[\frac{k\log \omega +c}{\omega}\right]^{v-1}\cdot \left(1-\frac{k\log\omega+c}{\omega}\right)^{v(t-v)} \\ &\le& \left[\frac{k\log \omega +c}{\omega}\right]^{v-1}\cdot \exp\left(-\frac{k\log \omega +c}{\omega}\cdot (t-v)\right)^v\\ &\le & \left[\frac{k\log \omega +c}{\omega}\right]^{v-1}\cdot \exp\left(-\frac{k\log \omega +c}{\omega}\cdot t/2\right)^v\\ &\le&\left[\frac{k\log \omega +c}{\omega}\right]^{v-1}\cdot \exp\left(-\frac{k\log \omega +c}{\omega}\cdot (1-\delta)\omega /2\right)^v\\ &=& \left[\frac{k\log \omega +c}{\omega}\right]^{v-1}\cdot \left[\omega^{-k\frac{1-\delta}{2}}\cdot e^{-c\frac{1-\delta}{2}}\right]^v.
\end{eqnarray*} Combining with (\ref{three}) we get \begin{eqnarray*} \binom t v v^{v-2} p_1^{v-1}q_1^{v(t-v)} &\le& (6\omega)^v \cdot \left[\frac{k\log \omega +c}{\omega}\right]^{v-1}\cdot \omega^{-k\frac{v(1-\delta)}{2}} \cdot e^{-c\frac{v(1-\delta)}{2}}\\ &=& 6\cdot \left[6(k\log\omega +c)\right]^{v-1} \cdot \omega^{1-k\frac{v(1-\delta)}{2}}\cdot e^{-c\frac{v(1-\delta)}{2}}\\ &\le & \left\{ 6\cdot \left[6(k\log\omega +c)\right]^{v-1} \omega^{-k\left[ \frac{v(1-\delta)}{2}-1 \right]} \right\} \omega^{1-k} \cdot e^{-c}\\ &\le & \left\{ 6\cdot \left[6(k\log\omega +c)\right]^{v-1} \omega^{-k\left[ \frac{v-1}{7} \right]} \right\} \cdot \omega^{1-k} \cdot e^{-c}\\ &=& 6\cdot \left\{\left(6(k\log\omega +c)\right)\cdot \omega^{-\frac{k}{7}} \right\}^{v-1}\cdot \omega^{1-k} \cdot e^{-c}. \end{eqnarray*} On one of the steps we used the inequality $\frac{v(1-\delta)}{2} -1\ge \frac{v-1}{7}$. Observe that the expression $$q= \left(6(k\log\omega +c)\right)\cdot \omega^{-\frac{k}{7}}$$ tends to $0$ as $n\to \infty$ and hence there exists $N$ such that for all $n>N$ one has \begin{eqnarray*} \sum_{v=3}^{t/2} \binom t v v^{v-2} p_1^{v-1}q_1^{v(t-v)} &\le& 6 \omega^{1-k}e^{-c}\left\{\sum_{v=3}^{t/2} q^{v-1}\right\} \\ &\le& 12 q^2\cdot \omega^{1-k} e^{-c}\\ &\le & \omega^{1-k}e^{-c}. \end{eqnarray*} Combining this inequality with the estimates for $v=1$ and $v=2$ completes the proof of Theorem \ref{propcon}, as explained above. \end{proof} \section{When is a random simplicial complex simply connected?} In this section we establish a region of simple connectivity of the random complex $Y\in \Omega_n^r$ with respect to the probability measure ${\rm P}_{r, {\mathfrak p}}$ where $${\mathfrak p}=(p_0, p_1, \dots, p_r).$$ Recall that a simplicial complex $Y$ is said to be simply connected if it is connected and its fundamental group $\pi_1(Y, y_0)$ is trivial. The last condition is equivalent to the requirement that any continuous map of the circle $S^1\to Y$ can be extended to a continuous map of the 2-disc $D^2 \to Y$. As in the previous section we shall assume that \begin{eqnarray}\label{everywhere1} p_0=\frac{\omega}{n}, \quad \mbox{where}\quad \omega \to \infty. \end{eqnarray} \begin{theorem}\label{simplec} Let $Y\in \Omega_n^r$ be a random complex with respect to the measure ${\rm P}_{r, {\mathfrak p}}$ where ${\mathfrak p}=(p_0, \dots, p_r)$. In addition to (\ref{everywhere1}) we shall assume that there exist sequences $\omega_1, \omega_2, \omega_3\to \infty$ such that \begin{eqnarray} \omega p_1^3 &=& 3 \log \omega +\omega_1, \label{three1}\\ \omega p_1^2p_2 &=& 2\log \omega +\omega_2, \label{three2}\\ \omega p_1^3 p_2^2 &= &3\log \omega +6\log p_1 + \omega_3.\label{three3} \end{eqnarray} Then $Y$ is simply connected, a.a.s. \end{theorem} \begin{remark} {\rm In general, the conditions (\ref{everywhere1}), (\ref{three1}), (\ref{three2}), (\ref{three3}) are independent. For example, if $p_2=1$ then (\ref{three1}) implies (\ref{three2}) and (\ref{three3}), while if $p_1=1$ then (\ref{three1}) is satisfied automatically and (\ref{three3}) implies (\ref{three2}). However if we assume that $p_i=n^{-\alpha_i}$ where $\alpha_i\ge 0$ are constants then (\ref{three1}), (\ref{three2}), (\ref{three3}) become \begin{eqnarray} \alpha_0 +3\alpha_1 <1,\label{three4}\\ \alpha_0+ 2\alpha_1 + \alpha_2 <1,\label{three5} \\ \alpha_0+3\alpha_1+ 2\alpha_2<1, \label{three6} \end{eqnarray} and we see that the last inequality (\ref{three6}) implies the inequalities (\ref{three4}) and (\ref{three5}).
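To illustrate the last point with a concrete (arbitrarily chosen) set of exponents: for $\alpha_0=0$, $\alpha_1=1/5$, $\alpha_2=1/10$ one has $\alpha_0+3\alpha_1+2\alpha_2=4/5<1$, so that (\ref{three4}) and (\ref{three5}) hold as well; on the other hand, for $\alpha_0=0$, $\alpha_1=3/10$, $\alpha_2=1/20$ one has $\alpha_0+3\alpha_1=9/10<1$ and $\alpha_0+2\alpha_1+\alpha_2=13/20<1$, while $\alpha_0+3\alpha_1+2\alpha_2=1$, so the converse implication fails.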
} \end{remark} \begin{corollary} \label{cor73} Let ${\mathfrak p}=(p_0, p_1, \dots, p_r)$ be a multi-parameter of the form $p_i=n^{-\alpha_i}$, where $\alpha_i$ are constants, $i=0, 1, \dots, r$. A random complex $Y\in \Omega_n^r$ is simply connected assuming that \begin{eqnarray} \alpha_0+3\alpha_1+ 2\alpha_2<1. \end{eqnarray} \end{corollary} We shall show in a forthcoming paper that a random complex is not simply connected if $\alpha_0+3\alpha_1+ 2\alpha_2>1.$ \begin{remark} {\rm In the special case $p_0=p_1=1$ (the Linial - Meshulam model) Theorem \ref{simplec} reduces to Theorem 1.4 from \cite{BHK}. In the special case $p_0=p_2=1$ (clique complexes of random graphs) the result of Theorem \ref{simplec} follows from Theorem 3.4 from \cite{Kahle1}. The general plan of the proof of Theorem \ref{simplec} repeats the strategy of \cite{Kahle1}, proof of Theorem 3.4; namely, we apply the Nerve Lemma to the cover by stars of vertices. } \end{remark} First recall a version of the Nerve Lemma, see Lemma 1.2 in \cite{BLVZ}. \begin{lemma}[The Nerve Lemma]\label{nerve} Let $X$ be a simplicial complex and let $\{S_i\}_{i\in I}$ be a family of subcomplexes covering $X$. Suppose that for any $t\geq 1$ every non-empty intersection $$S_{i_1}\cap\ldots\cap S_{i_t}$$ is $(k-t+1)$-connected . Then $X$ is $k$-connected if and only if the nerve complex $\mathcal{N}(\{S_i\}_{i\in I})$ is $k$-connected. \end{lemma} Recall that the nerve $\mathcal{N}(\{S_i\}_{i\in I})$ is defined as the simplicial complex on the vertex set $I$ with a subset $\sigma\subset I$ forming a simplex if and only if the intersection $\cap_{i\in \sigma} S_i\neq\emptyset$ is not empty. Given a random simplicial complex $Y\subset \Delta_n^{(r)}$, one may apply the Nerve Lemma \ref{nerve} to the cover $\{S_i\}_{i\in I}$, where $I=V(Y)$ (the set of vertices of $Y$) and $S_i$ is the star of the vertex $i$ in $Y$. Note that each star $S_i$ is contractible so that the condition of the Lemma \ref{nerve} is automatically satisfied for $t=1$. To establish the simple connectivity of $Y$ we need to show that (a) the intersection of any two stars $S_i\cap S_j$ is connected and (b) that the nerve complex $\mathcal{N}(\{S_i\}_{i\in I})$ is simply connected. Let us first tackle the task (b). The nerve $\mathcal{N}(\{S_i\}_{i\in I})$ is simply connected provided it has complete 2-dimensional skeleton, i.e. the intersection of any three stars $S_i\cap S_j\cap S_r\not=\emptyset$ is non-empty. This condition can be expressed by saying that any 3 vertices of $Y$ have a common neighbour, compare \cite{Kahle1}, \cite{Mesh}. The following Lemma describes the conditions under which any $k$ vertices of a random simplicial complex $Y\in \Omega_n^r$ have a common neighbour. \begin{lemma} \label{neighbour} Assume that a random simplicial complex $Y\in \Omega_{n}^r$ with respect to the measure ${\rm P}_{r, {\mathfrak p}}$ where ${\mathfrak p}=(p_0, \dots, p_r)$, satisfies \begin{eqnarray} \label{32} p_0=\frac{\omega}{n}, \quad\quad \omega\to \infty\end{eqnarray} and \begin{eqnarray}\label{33} p_1\, =\, \left(\frac{k\log \omega +\omega_1}{\omega}\right)^{1/k} \end{eqnarray} where $k\ge 2$ is an integer and $\omega_1 \to \infty$. Then every $k$ vertices of $Y$ have a common neighbour, a.a.s. \end{lemma} \begin{proof} Given a subset $S\subset \{1, \dots,n\}$ with $|S|=k$ elements, we want to estimate the probability that a random complex $Y\in \Omega_n^r$ contains $S$ and the vertices of $S$ have no common neighbours in $Y$. 
Let $T\subset \{1, \dots, n\}$ be a set of $|T|=t$ vertices containing $S$ and let $$E_J=\{e_\alpha\}_{\alpha\in J}$$ be a set of edges of $\Delta_n$ such that each edge $e_\alpha$ connects a point $\alpha(0)\in S$ with a point $\alpha(1)\in T- S$ and for any $i\in T - S$ there exists $\alpha\in J$ such that $\alpha(1)=i$. Clearly, $t-k \le |J| \le k(t-k)$. Denote by $A_J\subset \Delta_n$ the graph obtained by adding to $T$ all edges connecting points of $S$ with points of $T-S$ which do not belong to $E_J$. Denote by $B_J$ the subcomplex of $\Delta_n$ obtained from the simplex $\Delta_T$ spanned by $T$ by removing the union of open stars of the edges $e_\alpha$, where $\alpha\in J$. In other words, to obtain $B_J$ we remove from $\Delta_T$ all simplexes which contain one of the edges $e_\alpha$ for $\alpha\in J$. The pair $A_J\subset B_J$ satisfies the condition of Lemma \ref{cont}; indeed, external faces of $B_J$ are the vertices $\{1, \dots, n\}-T$ and the edges $e_\alpha$; all these are also external faces of $A_J$. Applying Lemma \ref{cont} we obtain \begin{eqnarray}\label{ta} {\rm P}_{r, {\mathfrak p}}(A_J\subset Y\subset B_J) = p_0^t\cdot p_1^{k(t-k)-|J|}\cdot q_0^{n-t}\cdot q_1^{|J|}. \end{eqnarray} Note that any complex $Y\in \Omega_n^r$ containing the set of vertices $S$ and such that there is no common neighbour for $S$ in $Y$ satisfies $$A_J\subset Y\subset B_J$$ for $T=V(Y)$ (the set of vertices of $Y$) and for a unique choice of the set of edges $E_J$ (it is the set of edges connecting points of $S$ with points of $T-S$ which do not belong to $Y$). For a set of edges $J$ as above and for a vertex $i\in T-S$ we denote by $\beta_i^J$ the number of edges $e_\alpha\in E_J$ such that $\alpha(1)=i$. Then $$1\le \beta^J_i\le k\quad\mbox{and}\quad |J|=\sum_{i\in T-S} \beta_i^J.$$ There are $\binom n t \cdot \binom t k$ choices for the pair $S\subset T$ and there are $$\prod_{i=1}^{t-k}\binom k {\beta^J_i}$$ choices for the set $E_J$ with given vector $(\beta_1^J, \dots, \beta_{t-k}^J)$, and each $\beta^J_i$ can vary in $\{1, \dots, k\}$. Hence, summing over all choices of $S$, $T$ and $E_J$, we obtain that the probability that a random complex $Y\in \Omega_n^r$ has $k$ vertices without a common neighbour is at most \begin{eqnarray*}&{}& \sum_{t=k}^n \binom n t \cdot \binom t k \cdot \sum_{1\le \beta_i\le k} \prod_{i=1}^{t-k} \binom k {\beta_i}\cdot p_0^t p_1^{k({t-k})-\sum \beta_i} \cdot q_0^{n-t}\cdot q_1^{\sum \beta_i}\\ &=& \binom n k \sum_{t=k}^n \binom {n-k}{t-k} \cdot p_0^t \cdot p_1^{k(t-k)} \cdot \left\{\left(1+\frac{q_1}{p_1}\right)^{k} -1\right\}^{t-k}\cdot q_0^{n-t}\\ &=& \binom n k \sum_{t=k}^n \binom {n-k}{t-k} \cdot p_0^{t}\left(1-p_1^{k}\right)^{t-k}\cdot q_0^{n-t} \\&=& p_0^k \binom n k \left(q_0+p_0(1- p_1^{k})\right)^{n-k}\\ &=& p_0^k \binom n k \left(1-p_0p_1^k\right)^{n-k}. \end{eqnarray*} Hence taking into account our assumptions (\ref{32}) and (\ref{33}) we obtain that the probability that a random complex $Y\in \Omega_n^r$ has $k$ vertices without a common neighbour is at most \begin{eqnarray*} &{}&p_0^k \binom n k \left(1-p_0p_1^k\right)^{n-k}\\ &\le& p_0^kn^k e^{-np_0p_1^k\frac{n-k}{n}}\\ &\le& \omega^k \cdot e^{(-k\log \omega - \omega_1)\frac{n-k}{n}}\\ &=& \omega^{\frac{k^2}{n}}\cdot e^{-\omega_1\frac{n-k}{n}}. \end{eqnarray*} The logarithm of the first factor $\omega^{\frac{k^2}{n}}$ tends to $0$ (as $\log \omega\le \log n$) and therefore the first factor tends to $1$, i.e. it is bounded, while the second factor $e^{-\omega_1\frac{n-k}{n}}$ clearly tends to zero. This completes the proof.
\end{proof} \begin{proof}[Proof of Theorem \ref{simplec}] Let $A_n^r\subset \Omega_n^r$ denote the set of simplicial complexes $Y$ such that for any two vertices $i, j\in Y$ the intersection of their links $\rm {lk}_Y(i)\cap \rm {lk}_Y(j)$ is connected. Let $B_n^r\subset \Omega_n^r$ denote the set of simplicial complexes $Y$ such that the degree of any edge $e\subset Y$ satisfies $\deg_Y e\ge 1$. Let $C_n^r\subset \Omega_n^r$ denote the set of simplicial complexes $Y$ such that any three vertices of $Y$ have a common neighbour. Let us show that ${\rm P}_{r, {\mathfrak p}}(A_n^r)\to 1$ as $n \to \infty$ under the assumption (\ref{three3}). Indeed, by Lemma \ref{linksintersection}, the intersection of two links is a multiparameter random simplicial complex with the multi-parameter $(p'_0, p'_1, \dots, p'_{r-1})$ where $$p'_i = p_ip_{i+1}^2.$$ Next we apply Theorem \ref{propcon} with $k=3$. Our assumption (\ref{three3}) is equivalent to $$p_1'= \frac{3\log \omega' + \omega_3}{\omega'} , \quad \mbox{where}\quad \omega'=np_0',$$ $p_0'=p_0p_1^2$, $p_1'=p_1p_2^2$ and $\omega= np_0$. By Theorem \ref{propcon}, the probability that the intersection of links of a given pair of vertices of $Y$ is disconnected is less or equal than $Ce^{-\omega_3}\omega^{-2}$, for a universal constant $C$. It follows that the expected number of pairs of vertices $i, j$ of $Y\in \Omega_n^r$ such that the intersection $\rm {lk}_Y(i)\cap \rm {lk}_Y(j)$ is disconnected is less or equal than $$\binom n 2 \cdot p_0^2 \cdot Ce^{-\omega_3} \omega^{-2}\le Ce^{-\omega_3},$$ which tends to $0$ with $n$. \begin{figure} \caption{Regions of connectivity and simple connectivity.} \label{simpleconnectivity} \end{figure} By Corollary \ref{degreezero}, the probability that $Y$ contains an edge of degree zero is less than $p_1e^{2-\omega_2}$ under the assumption (\ref{three2}). Hence we see that ${\rm P}_{r, {\mathfrak p}}(B_n^r)\to 1$ as $n \to \infty$, under the assumption (\ref{three2}). By Lemma \ref{neighbour}, due to the assumption (\ref{three1}), one has ${\rm P}_{r, {\mathfrak p}}(C_n^r)\to 1$ as $n \to \infty$. It follows that $${\rm P}_{r, {\mathfrak p}}(A_n^r\cap B_n^r \cap C_n^r)\to 1$$ as $n \to \infty$. Let us show that every complex $Y\in A_n^r\cap B_n^r \cap C_n^r$ is simply connected. As explained in the paragraph preceding Lemma \ref{neighbour}, we may apply the Nerve Lemma \ref{nerve} to the cover by stars of vertices, and we only need to establish the task (a), i.e. to show that in a random complex $Y\in \Omega_n^r$ (under the assumptions of Theorem \ref{simplec}) the intersection of the stars of any two vertices is connected, a.a.s. Note that the task (b) is automatically satisfied because $Y\in C_n^r$. Let $i, j$ be two distinct vertices of $Y$. If the edge $(ij)$ is not contained in $Y$ then \begin{eqnarray}\rm {St}_Y(i)\cap \rm {St}_Y(j) = \rm {lk}_Y(i)\cap \rm {lk}_Y(j);\end{eqnarray} This intersection is connected since $Y\in A_n^r$. However if $(ij)\subset Y$ then we have \begin{eqnarray}\label{inters} \rm {St}_Y(i)\cap \rm {St}_Y(j) =\left(\rm {lk}_Y(i)\cap \rm {lk}_Y(j)\right)\cup \rm {St}_Y(ij). \end{eqnarray} The intersection $\rm {lk}_Y(i)\cap \rm {lk}_Y(j)$ is connected (since $Y\in A_n^r$) and $\rm {lk}_Y(i)\cap \rm {lk}_Y(j)$ is non-empty (since $Y\in B_n^r$). Since the star $\rm {St}_Y(ij)$ is non-empty and contractible, the union (\ref{inters}) is connected since $$\rm {lk}_Y(i)\cap \rm {lk}_Y(j)\cap \rm {St}_Y(ij)\not=\emptyset$$ (since $Y\in B_n^r$). 
As explained above, the Nerve Lemma \ref{nerve} is applicable and implies that any $Y\in A_n^r\cap B_n^r \cap C_n^r$ is simply connected. \end{proof} \begin{figure} \caption{Dimension of the random simplicial complex for various values of parameters $\alpha_1, \alpha_2$.} \label{dimension1} \end{figure} As an illustration consider the special case when $p_0=1$, $p_1=n^{-\alpha_1}$ and $p_2=n^{-\alpha_2}$ with $\alpha_1, \alpha_2$ being constant (i.e. independent of $n$). Then for $\alpha_1>1$ the random complex $Y$ is disconnected and for $\alpha_1<1$ the complex $Y$ is connected (see Example \ref{example63}) and for $3\alpha_1+2\alpha_2<1$ the complex $Y$ is simply connected (by Corollary \ref{cor73}). Figure \ref{simpleconnectivity} depicts the regions of connectivity. Figure \ref{dimension1} shows the dimension of a multi-parameter random simplicial complex, again assuming that $p_1=n^{-\alpha_1}$ and $p_2=n^{-\alpha_2}$ with $\alpha_1, \alpha_2$ being constant and $p_i=1$ for $i=0, 3, 4, \dots$. Details and proofs can be found in \cite{CF14}. In a forthcoming paper we shall show that in the domain $$1<3\alpha_1+2\alpha_2, \quad 0<\alpha_1<1, \quad 0<\alpha_2$$ the fundamental group of a random simplicial complex is nontrivial and hyperbolic in the sense of Gromov. It is a non-trivial random group depending on three parameters $p_0, p_1, p_2$ and we shall describe regions of various cohomological dimension and torsion in this random group. \end{document}
\begin{document} \title[Generic uniqueness of the bias of finite stochastic games]{Generic uniqueness of the bias vector of finite zero-sum stochastic games with perfect information} \author[M.~Akian]{Marianne Akian} \author[S.~Gaubert]{St\'ephane Gaubert} \author[A.~Hochart]{Antoine Hochart} \address{INRIA Saclay-Ile-de-France and CMAP, Ecole polytechnique, Route de Saclay, 91128 Palaiseau Cedex, France} \email{[email protected]} \email{[email protected]} \email{[email protected]} \thanks{A.~Hochart has been supported by a PhD fellowship of Fondation Math\'ematique Jacques Hadamard (FMJH). The authors are also partially supported by the PGMO program of EDF and FMJH and by the ANR (MALTHY Project, number ANR-13-INSE-0003).} \subjclass[2010]{47J10; 91A20,93E20} \keywords{Zero-sum games, ergodic control, nonexpansive mappings, fixed point sets, policy iteration} \date{May 24, 2017} \begin{abstract} Mean-payoff zero-sum stochastic games can be studied by means of a nonlinear spectral problem. When the state space is finite, the latter consists in finding an eigenpair $(u,\lambda)$ solution of $T(u)=\lambda e + u$, where $T:\mathbb{R}^n \to \mathbb{R}^n$ is the Shapley (or dynamic programming) operator, $\lambda$ is a scalar, $e$ is the unit vector, and $u \in \mathbb{R}^n$. The scalar $\lambda$ yields the mean payoff per time unit, and the vector $u$, called {\em bias}, allows one to determine optimal stationary strategies in the mean-payoff game. The existence of the eigenpair $(u,\lambda)$ is generally related to ergodicity conditions. A basic issue is to understand for which classes of games the bias vector is unique (up to an additive constant). In this paper, we consider perfect-information zero-sum stochastic games with finite state and action spaces, thinking of the transition payments as variable parameters, transition probabilities being fixed. We show that the bias vector, thought of as a function of the transition payments, is generically unique (up to an additive constant). The proof uses techniques of nonlinear Perron-Frobenius theory. As an application of our results, we obtain an explicit perturbation scheme allowing one to solve degenerate instances of stochastic games by policy iteration. \end{abstract} \maketitle \section{Introduction} \subsection{The ergodic equation for stochastic games} Repeated zero-sum games describe long-term interactions between two agents, called players, with opposite interests. In this paper, we consider {\em perfect-information zero-sum stochastic games}, in which the players choose repeatedly and alternatively an action, being informed of all the events that have previously occurred (state of nature and chosen actions). These choices determine at each stage of the game a payment, as well as the next state by a stochastic process. Given a finite horizon $k$ and an initial state $i$, one player intends to minimize the sum of the payments of the $k$ first stages, while the other player intends to maximize it. This gives rise to the value of the $k$-stage game, denoted by $v_i^k$. A major topic in the theory of zero-sum stochastic games is the asymptotic behavior of the mean values per time unit $(v_i^k/k)$ as the horizon $k$ tends to infinity. The limit, when it exists, is referred to as the {\em mean payoff}. This question was first addressed in the case of a finite state space by Everett~\cite{Eve57}, Kohlberg~\cite{Koh74}, and Bewley and Kohlberg~\cite{BK76}. 
See also Rosenberg and Sorin~\cite{RS01}, Sorin~\cite{Sor04}, Renault~\cite{Ren11}, Bolte, Gaubert and Vigeral~\cite{BGV15} and Ziliotto~\cite{ziliotto} for more recent developments. We refer the reader to~\cite{NS03} for more background on stochastic games. A way to study the asymptotic behavior of the values $(v_i^k)_k$ consists in exploiting the recursive structure of the game. This structure is encompassed in the {\em dynamic programming} or {\em Shapley operator} of the game. In this paper, since the state space is assumed to be finite, say $\{1,\dots,n\}$, the latter is a map $T: \mathbb{R}^n \to \mathbb{R}^n$. Then, a basic tool to study the asymptotic properties of the sequence $(v_i^k)_k$ is the following nonlinear spectral problem, called the {\em ergodic equation}: \begin{equation} \label{eq:ergodic-equation} T(u) = \lambda e + u \enspace , \end{equation} where $e$ is the unit vector of $\mathbb{R}^n$. Indeed, if there exist a vector $u \in \mathbb{R}^n$ and a scalar $\lambda \in \mathbb{R}$ solution of~\eqref{eq:ergodic-equation}, then, not only the sequence $(v_i^k/k)_k$ converges as the horizon $k$ tends to infinity, but also the limit is independent of the initial state $i$, and equal to $\lambda$. This scalar, which is unique, is called the {\em ergodic constant} or the {\em (additive) eigenvalue} of $T$, and the vector $u$, called {\em bias vector} or {\em (additive) eigenvector}, gives optimal stationary strategies in the mean-payoff game. A first question is to understand when the ergodic equation is solvable. In \cite{AGH15}, we considered the Shapley operator $T: \mathbb{R}^n \to \mathbb{R}^n$ of a game with finite state space and bounded transition payment function, and gave necessary and sufficient conditions under which the ergodic equation is solvable for all the operators $g+T$ with $g \in \mathbb{R}^n$, or equivalently for all perturbations $g$ of the payments that only depend on the state. Moreover, assuming the compactness of the action sets of players and some continuity of the transition functions, these conditions can be characterized in terms of reachability in directed hypergraphs. A second question concerns the structure of the set of bias vectors. For one-player problems, i.e., for discrete optimal control, the ergodic equation~\eqref{eq:ergodic-equation}, also known as the {\em average case optimality equation}, has been much studied, either in the deterministic or in the stochastic case (Markov decision problems). Then, the representation of bias vectors and their relation with optimal strategies is well understood. Indeed, in the deterministic case, the analysis of the ergodic equation relies on max-plus spectral theory, which goes back to the work of Romanovsky~\cite{Rom67}, Gondran and Minoux~\cite{GM77} and Cuninghame-Green~\cite{CG79}. Kontorer and Ya\-ko\-venko~\cite{KY92} deal specially with infinite horizon optimization and mean-payoff problems. We refer the reader to the monographies~\cite{BCOQ92, KM97,butkovic} or surveys~\cite{Bap98,ABG13} for more background on max-plus spectral theory. 
One of the main result of this theory shows that the set of bias vectors has the structure of a max-plus (tropical) cone, i.e., that it is invariant by max-plus linear combinations, and it has a unique minimal generating family consisting of certain ``extreme'' generators, which can be identified by looking at the support of the maximizing measures in the linear programming formulation of the optimal control problem, or at the ``recurrence points'' of infinite optimal trajectories. A geometric approach to some of these results, in terms of polyhedral fans, has been recently given by Sturmfels and Tran~\cite{ST13}. The eigenproblem~\eqref{eq:ergodic-equation} has been studied in the more general infinite-dimensional state space case, see Kolokoltsov and Maslov~\cite{KM97}, Mallet-Paret and Nussbaum~\cite{Nuss-Mallet}, and Akian, Gaubert and Walsh~\cite{AGW09} for an approach in terms of horoboundaries. It has also been studied in the setting of weak KAM theory, for which we refer the reader to Fathi~\cite{Fat}. In the weak KAM setting, the bias vector becomes the solution of an ergodic Hamilton-Jacobi PDE; Figalli and Rifford showed in~\cite{rifford} that this solution is unique for generic perturbations of a Hamiltonian by a potential function. In the stochastic case, the structure of the set of bias vectors is still known when the state space is finite, see Akian and Gaubert~\cite{AG03}. In the two-player case, the structure of the set of bias vectors is less well known, although the description of this set remains a fundamental issue. In particular, the uniqueness of the bias vector up to an additive constant is an important matter for algorithmic purposes. Indeed, the nonuniqueness of the bias typically leads to numerical instabilities or degeneracies. In particular, the standard Hoffman and Karp policy iteration algorithm~\cite{HK66} may fail to converge in situations in which the bias vector is not unique. The standard approach to handle such degeneracies is to approximate the ergodic problem by the discounted problem. This was proposed by Puri~\cite{puri} in the case of deterministic games and this was further analyzed by Paterson and Zwick~\cite{zwick}, see also the discussion by Friedmann in~\cite{friedmann}. A different approach was proposed by Cochet-Terrasson and Gaubert~\cite{CTG06}, Akian, Cochet-Terrasson, Detournay and Gaubert~\cite{ACTDG}, and by Bourque and Raghavan~\cite{BR14}, allowing one to circumvent such degeneracies at the price of an increased complexity of the algorithm (handling the nonuniqueness of the bias). Hence, it is of interest to understand when such technicalities can be avoided. \subsection{Main results} We address the question of the uniqueness of the bias vector for stochastic games with perfect information and finite state and action spaces, restricting our attention to games for which the ergodic equation~\eqref{eq:ergodic-equation} is solvable {\em for all state-dependent perturbations of the transition payments}. Our main result, Theorem~\ref{thm:generic-uniqueness}, shows that the bias vector is generically unique up to an additive constant. More precisely, we show that the set of perturbation vectors for which the bias vector is not unique belongs to a polyhedral complex the cells of which have codimension one at least. A first ingredient in the proof relies on nonlinear Perron-Frobenius theory~\cite{AG03}. 
A second ingredient is a general result, showing that the set of fixed points of a nonexpansive self-map of $\mathbb{R}^n$ is a retract of $\mathbb{R}^n$, see Theorem~\ref{thm:nonexpansive-retract}. This allows us to infer the uniqueness of the bias vector of a Shapley operator from the uniqueness of the bias vector of the reduced Shapley operators obtained by fixing the strategy of one player. We then present an algorithmic application of our results. Hoffman and Karp introduced a policy iteration algorithm to solve mean-payoff zero-sum stochastic games with perfect information and finite state and action spaces~\cite{HK66}. They showed that policy iteration does terminate if every pair of strategies of the two players yields an irreducible Markov chain. If this irreducibility assumption is not satisfied, policy iteration may cycle. However, the irreducibility assumption is not satisfied by many classes of games -- in particular, it is essentially never satisfied for deterministic games. The cycling of the Hoffman-Karp algorithm is due to the nonuniqueness of the bias vector. Hence, we deduce from our results that the Hoffman-Karp policy iteration algorithm does converge if the payment is generic. Moreover, we provide a family of effective perturbations of the payment for which the bias vector is unique (up to an additive constant). This leads to an explicit perturbation scheme, allowing one to solve nongeneric instances by policy iteration, avoiding the classical irreducibility condition or the use of vanishing discount based perturbation schemes. The paper is organized as follows. After some preliminaries on stochastic games in Section~\ref{sec:preliminaries}, we establish the generic uniqueness of the bias vector in Section~\ref{sec:generic-uniqueness}. The application to policy iteration is presented in Section~\ref{sec:applications}. We finally point out that some of the present results have been announced in the conference article~\cite{cdc2014}. \section{Preliminaries on zero-sum stochastic games} \label{sec:preliminaries} \subsection{Games with perfect information} \label{sec:prel1} In this paper we consider {\em finite (zero-sum) stochastic games with perfect information}, where the second player, called player \mathcal{M}AX, always makes a move after being informed of the action chosen by the first player, called player \mathcal{M}IN. Such a game is characterized by the following: \begin{itemize} \item a finite state space, that we denote by $S = \{1,\dots,n\}$; \item finite action spaces, denoted by $A_i$ for player \mathcal{M}IN when the current state is $i \in S$, and by $B_{i,a}$ for player \mathcal{M}AX in state $i$ and after action $a \in A_i$ has been chosen by player \mathcal{M}IN; \item a transition payment $r_i^{a b} \in \mathbb{R}$, paid by player \mathcal{M}IN to player \mathcal{M}AX, when the current state is $i \in S$ and the last actions selected by the players are $a \in A_i$ and $b \in B_{i,a}$; \item a transition probability $P_i^{a b} \in \Delta(S)$ (where $\Delta(S)$ denotes the set of probability measures over $S$) which gives the law according to which the next state is determined, when the current state is $i \in S$ and the last actions selected by the players are $a \in A_i$ and $b \in B_{i,a}$. 
\end{itemize} Denoting by $K_A := \bigcup_{i \in S} \{i\} \times A_i$ the action set of player \mathcal{M}IN and by $K_B := \bigcup_{(i,a) \in K_A} \{(i,a)\} \times B_{i,a}$ the action set of player \mathcal{M}AX, a finite stochastic game with perfect information is thus defined by a 5-tuple $\mathcal{G}amma := (S, K_A, K_B, r, P)$ where the three first sets are finite. Such a game is played in stages, starting from a given initial state $i_0$, as follows: at step $\ell$, if the current state is $i_\ell$, player \mathcal{M}IN chooses an action $a_\ell \in A_{i_\ell}$, and player \mathcal{M}AX subsequently chooses an action $b_\ell \in B_{i_\ell, a_\ell}$. Then, player \mathcal{M}IN pays $r_{i_\ell}^{a_\ell b_\ell}$ to player \mathcal{M}AX and the next state is chosen according to the probability law $P_{i_\ell}^{a_\ell b_\ell}$. The information is perfect, meaning that at each step, the players have a perfect knowledge of all the previously chosen actions, as well as the state previously visited. Denote by $H_k^\text{\mathcal{M}IN} := (K_B)^k \times S$ the set of histories of length $k$ of player \mathcal{M}IN and by $H^\text{\mathcal{M}IN} := \bigcup_{k \in \mathbb{N}} H_k^\text{\mathcal{M}IN}$ the set of all finite histories of player \mathcal{M}IN. Likewise, define $H_k^\text{\mathcal{M}AX} := (K_B)^k \times K_A$ and $H^\text{\mathcal{M}AX} := \bigcup_{k \in \mathbb{N}} H_k^\text{\mathcal{M}AX}$ to be, respectively, the set of histories of length $k$ and the set of all finite histories of player \mathcal{M}AX. Let $H_\infty := (K_B)^\mathbb{N}$ be the set of infinite histories. A strategy of player \mathcal{M}IN is a map \[ \sigma: H^\text{\mathcal{M}IN} \to \bigcup_{i \in S} \Delta(A_i) \] (where $\Delta(X)$ denote the set of probability measures over any set $X$) such that, for every finite history $h_k = (i_0,a_0,b_0,\dots,i_k) \in H_k^\text{\mathcal{M}IN}$, we have $\sigma(\cdot \mid h_k) \in \Delta(A_{i_k})$. Likewise, a strategy of player \mathcal{M}AX is a map \[ \tau: H^\text{\mathcal{M}AX} \to \bigcup_{(i,a) \in K_A} \Delta(B_{i,a}) \] such that, for every finite history $h'_k = (i_0,a_0,b_0,\dots,i_k,a_k) \in H_k^\text{\mathcal{M}AX}$, we have $\tau(\cdot \mid h'_k) \in \Delta(B_{i_k,a_k})$. A strategy $\sigma$ (resp.\ $\tau$) of player \mathcal{M}IN (resp.\ \mathcal{M}AX) is pure and Markovian if it depends only on the current stage and state (resp.\ the current stage, state and action of player \mathcal{M}IN) and is deterministic, that is its values are Dirac probabilities. Such a strategy is stationary if it does not depend on stage. We denote by $\mathcal{S}_{\mathrm p}$ the finite set of (deterministic) {\em policies} of player \mathcal{M}IN, i.e., the set of maps $\sigma: S \to \bigcup_{i \in S} A_i$ such that $\sigma(i) \in A_i$ for every state $i \in S$. Likewise, we denote by $\mathcal{T}_{\mathrm p}$ the set of (deterministic) policies of player \mathcal{M}AX, i.e., the set of maps $\tau: K_A \to \bigcup_{i \in S, a \in A_i} B_{i,a}$ such that $\tau(i,a) \in B_{i,a}$ for every state $i \in S$ and action $a \in A_i$. Then, a pure Markovian strategy of player \mathcal{M}IN (resp.\ \mathcal{M}AX) can be identified to a sequence of policies: for every history of length $k$ of player \mathcal{M}IN, $h_k = (i_0,a_0,b_0,\dots,i_k) \in H_k^\text{\mathcal{M}IN}$, $\sigma(\cdot\mid h_k)$ is the Dirac measure at $\sigma_k(i_k)$. 
Likewise, for every history of length $k$ of player \mathcal{M}AX, $h'_k = (i_0,a_0,b_0,\dots,i_k,a_k) \in H_k^\text{\mathcal{M}AX}$, $\tau(\cdot \mid h'_k)$ is the Dirac measure at $\tau_k(i_k,a_k)$. Moreover, a pure Markovian stationary strategy can be identified with a single policy. Given an initial state $i$ and strategies $\sigma$ and $\tau$ of the players, the triple $(i,\sigma,\tau)$ defines a probability law on $H_\infty$, the expectation of which is denoted by $\mathbb{E}_{i,\sigma,\tau}$. The {\em payoff} of the $k$-stage game is the following additive function of the transition payments: \begin{equation*} \label{eq:finite-horizon-payoff} J_i^k(\sigma,\tau) := \mathbb{E}_{i,\sigma,\tau} \Bigg[ \sum_{\ell=0}^{k-1} r_{i_\ell}^{a_\ell b_\ell} \Bigg] \enspace . \end{equation*} Player \mathcal{M}IN intends to choose a strategy minimizing the payoff $J_i^k$, whereas player \mathcal{M}AX intends to maximize the same payoff. The value of the $k$-stage game starting at state $i$ is then defined by \[ v_i^k := \inf_\sigma \sup_\tau J_i^k(\sigma,\tau) = \sup_\tau \inf_\sigma J_i^k(\sigma,\tau) \enspace , \] when the equality holds, where the infimum and the supremum are taken over the set of all strategies of players \mathcal{M}IN and \mathcal{M}AX, respectively. Since the game has finite state and action spaces, the value exists, with the infima and suprema realized by pure Markovian strategies of both players~\cite{Sha53}. In this paper, we are interested in the asymptotic behavior of the sequence of mean values per time unit $(v^k/k)_{k \geqslant 1}$. When the latter ratio converges, the limit will be called the {\em mean-payoff vector}. A stronger condition is the existence of a {\em uniform value} $v^{\text{U}}$, meaning that \begin{align} \inf_{\sigma}\, \limsup_{k \to \infty} \, \sup_{\tau}\; \frac{1}{k} J_i^k(\sigma,\tau) \leqslant v_i^\text{U} \leqslant \sup_{\tau}\,\liminf_{k \to \infty} \,\inf_{\sigma}\;\frac{1}{k} J_i^k(\sigma,\tau) \enspace ,\label{e-def-unifvalue} \end{align} where infima and suprema are taken over all strategies (see for instance~\cite{sorinbook,renault12}). This implies that $\lim_k v^k/k = v^{\text{U}}$. Note that the first (resp.\ second) inequality means that player \mathcal{M}IN (resp.\ \mathcal{M}AX) uniformly guarantees $v_i^\text{U}$. Moreover, following~\cite{laraki-renault}, {\em optimal uniform strategies} are defined as strategies $\sigma^*$ and $\tau^*$ for players \mathcal{M}IN and \mathcal{M}AX respectively such that \begin{align} \limsup_{k \to \infty} \, \sup_{\tau}\; \frac{1}{k} J_i^k(\sigma^*,\tau) = v_i^\text{U} =\liminf_{k \to \infty} \,\inf_{\sigma}\;\frac{1}{k} J_i^k(\sigma,\tau^*) \enspace . \label{e-def-optuniform} \end{align} Note that Condition~\eqref{e-def-optuniform} is stronger than~\eqref{e-def-unifvalue}. When $v^\text{U}$ exists, it also coincides with the value of the zero-sum game in which one considers one of the following {\em limiting average payoffs}: \begin{equation} \label{eq:average-payoffs} \begin{aligned} J_i^\text{+}(\sigma,\tau) & := \limsup_{k \to \infty} \; \frac{1}{k} J_i^k(\sigma,\tau) \enspace , \\ J_i^\text{-}(\sigma,\tau) & := \liminf_{k \to \infty} \; \frac{1}{k} J_i^k(\sigma,\tau) \enspace . \end{aligned} \end{equation} Moreover, optimal uniform strategies are also optimal for these games. Mertens and Neyman~\cite{MN81} proved that, for stochastic games with finite state and action spaces and imperfect information, the uniform value exists. 
It follows that the uniform value also exists for perfect-information games with finite state and action spaces -- the latter can be reduced to degenerate instances of imperfect-information games in which, in each state, only one of the players has a choice of action. We shall also recall in Theorem~\ref{th-inv} how, for this class of games, the existence of the uniform value and of uniform optimal strategies follows from a result of Kohlberg. In the computer science literature~\cite{zwick,AM09}, mean-payoff games are defined in a slightly different manner, following Ehrenfeucht and Mycielski~\cite{ehrenfeucht}, as non-zero-sum games in which player \mathcal{M}IN\ wishes to minimize $J_i^\text{+}(\sigma,\tau)$ whereas player \mathcal{M}AX\ wishes to maximize $J_i^\text{-}(\sigma,\tau)$. Liggett and Lippman~\cite{LL69} showed that such games admit optimal policies $\sigma^*,\tau^*$ and a value $v^*$, meaning that \[ J_i^\text{+}(\sigma^*,\tau) \leqslant v^*=J_i^\text{+}(\sigma^*,\tau^*)= J_i^\text{-}(\sigma^*,\tau^*) \leqslant J_i^\text{-}(\sigma,\tau^*) \] for all pairs of strategies $\sigma,\tau$ of players \mathcal{M}IN\ and \mathcal{M}AX. The latter property is implied by the existence of the uniform value and the existence of pure Markovian stationary uniform optimal strategies (also called uniform optimal policies), so that $v^*=v^{\text{U}}$. Therefore, in the sequel, we will use the term {\em mean-payoff games} with a general meaning, understanding that the different approaches that we just discussed lead to the same notion of value. In particular, the notion of value and optimal policies can always be understood in the strongest sense (uniform value and uniform optimal policies). \subsection{The operator approach}\label{subsec-operator} The study of the value vector $v^k = (v_i^k)_{i \in S}$ involves the {\em dynamic programming operator}, or {\em Shapley operator} of the game. The latter is a map $T: \mathbb{R}^n \to \mathbb{R}^n$ whose $i$th coordinate is given by \begin{equation} \label{eq:Shapley-operator} T_i(x) = \min_{a \in A_i} \max_{b \in B_{i,a}} \left( r_i^{a b} + P_i^{a b} x \right) \enspace , \quad x \in \mathbb{R}^n \enspace. \end{equation} Note that an element $P \in \Delta(S)$ is seen as a row vector $P=(P_j)_{j \in S}$ of $\mathbb{R}^n$, so that $P x$ means $\sum_{j \in S} P_j x_j$. Also note that, given a vector $g \in \mathbb{R}^n$, the operator $g+T$ appears as the Shapley operator of the game $(S, K_A, K_B, \tilde{r}, P)$ where the transition payment function satisfies $\tilde r_i^{a b} = g_i + r_i^{a b}$. The latter game is almost identical to the initial game $(S, K_A, K_B, r, P)$, except that the transition payments are perturbed with quantities that only depend on the state, hence the designation of $g$ as an {\em additive (state-dependent) perturbation vector}. The Shapley operator allows one to determine recursively the value vector of the $k$-stage game: \begin{equation} \label{eq:dynamic-prog-principle} v^k = T(v^{k-1}) \enspace , \quad v^0 = 0 \enspace . 
\end{equation} Also, $T$ is {\em monotone} and {\em additively homogeneous}, meaning that it satisfies the following two properties, respectively: \begin{align*} \tag{monotonicity} & x \leqslant y \implies T(x) \leqslant T(y) \enspace , \quad x, y \in \mathbb{R}^n \enspace ,\\ \tag{additive homogeneity} & T(x + \lambda e) = T(x) + \lambda e \enspace , \quad x \in \mathbb{R}^n, \enspace \lambda \in \mathbb{R} \enspace , \end{align*} where $\mathbb{R}^n$ is endowed with its usual partial order and $e$ is the unit vector of $\mathbb{R}^n$. A first consequence of the additive homogeneity of $T$ is that, for any bias vector $u$ and any scalar $\alpha \in \mathbb{R}$, the vector $u + \alpha e$ is also a bias: we say that $u$ is defined up to an {\em additive constant}. More generally, the importance of the above axioms in stochastic control and game theory has been known for a long time~\cite{CT80}. In particular, they imply that $T$ is {\em sup-norm nonexpansive}: \[ \|T(x) - T(y)\|_\infty \leqslant \|x-y\|_\infty \enspace , \quad x, y \in \mathbb{R}^n \enspace . \] According to the dynamic programming principle~\eqref{eq:dynamic-prog-principle}, the mean-payoff vector defined above is given by \[ \chi(T) := \lim_{k \to \infty} \frac{T^k(0)}{k} \enspace , \] where $T^k := T \circ \dots \circ T$ denotes the $k$th iterate of $T$. Observe that, since $T$ is sup-norm nonexpansive, $0$ could be replaced by any vector $x \in \mathbb{R}^n$ in the above limit. Here, since the action spaces are finite, $T$ is a {\em piecewise affine} map over $\mathbb{R}^n$. Kohlberg showed in~\cite{Koh80} that a piecewise affine self-map $T$ of $\mathbb{R}^n$ that is nonexpansive in an arbitrary norm has an {\em invariant half-line}, meaning that there exist two vectors $u, \nu \in \mathbb{R}^n$ such that \begin{equation} \label{eq:invariant-half-line} T(u + \alpha \nu) = u + (\alpha + 1) \nu \end{equation} for every scalar $\alpha$ large enough. Kohlberg's theorem applies to the present setting, since the Shapley operator~\eqref{eq:Shapley-operator} is nonexpansive in the sup-norm. This implies that the limit $\chi(T)$ does exist and coincides with $\nu$. It follows that the vector $\nu$ arising in the definition of invariant half-lines is unique. The existence of uniform optimal policies follows from the existence of an invariant half-line. Moreover, such policies are readily computed from the invariant half-line. Although these properties are surely known to some experts, we could not find a reference for them, so we next state them as Theorem~\ref{th-inv}. To this end, we define the {\em reduced Shapley operator} $T^\sigma: \mathbb{R}^n \to \mathbb{R}^n$ associated with the policy $\sigma \in \mathcal{S}_{\mathrm p}$ of player \mathcal{M}IN. Its $i$th coordinate map is given by \begin{align}\label{e-def-Tsigma} T_i^\sigma(x) = \max_{b \in B_{i,\sigma(i)}} \big( r_i^{\sigma(i) b} + P_i^{\sigma(i) b} x \big), \quad x \in \mathbb{R}^n \enspace. \end{align} Since the action spaces are finite, we readily have, for all $x \in \mathbb{R}^n$, \begin{equation} \label{eq:T-min} T(x) = \min_{\sigma \in \mathcal{S}_{\mathrm p}} T^\sigma(x) \enspace , \end{equation} where by $\min$, we mean that for every $x\in\mathbb{R}^n$, the minimum is attained. Indeed, it suffices to take for $\sigma(i)$ any action $a\in A_i$ achieving the minimum in~\eqref{eq:Shapley-operator}. 
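To fix ideas, the following Python sketch (an illustration of ours, not part of the formal development) evaluates the Shapley operator~\eqref{eq:Shapley-operator}, a reduced operator $T^\sigma$, and the approximation of the mean-payoff vector by $T^k(0)/k$, on tabulated game data; the data layout, with \texttt{r[i][a][b]} and \texttt{P[i][a][b]} standing for $r_i^{ab}$ and $P_i^{ab}$, is an assumption made only for the sake of this example.
\begin{verbatim}
import numpy as np

# Assumed layout: r is a list over states, r[i][a][b] is the payment r_i^{ab},
# and P[i][a][b] is the probability row vector P_i^{ab} stored as a NumPy array.

def shapley(x, r, P):
    """One application of the Shapley operator T of the game."""
    return np.array([
        min(max(r[i][a][b] + P[i][a][b] @ x for b in r[i][a]) for a in r[i])
        for i in range(len(r))
    ])

def reduced_shapley(x, r, P, sigma):
    """The reduced operator T^sigma obtained by fixing a policy of player MIN."""
    return np.array([
        max(r[i][sigma[i]][b] + P[i][sigma[i]][b] @ x for b in r[i][sigma[i]])
        for i in range(len(r))
    ])

def mean_payoff(r, P, k=10_000):
    """Approximate chi(T) = lim_k T^k(0)/k by the dynamic programming recursion."""
    x = np.zeros(len(r))
    for _ in range(k):
        x = shapley(x, r, P)
    return x / k
\end{verbatim}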
Similarly, to any policy $\tau \in \mathcal{T}_{\mathrm p}$ of player \mathcal{M}AX, we associate a {\em dual reduced Shapley operator} $^{\tau} T$, whose $i$th coordinate map is given by \[ \leftidx{^\tau}{T}{_i}(x) = \min_{a \in A_{i}} \big( r_i^{a \tau(i,a)} + P_i^{a \tau(i,a)} x \big), \quad x \in \mathbb{R}^n \enspace. \] For all $x \in \mathbb{R}^n$, we have \begin{equation} \label{eq:T-max} T(x) = \max_{\tau \in \mathcal{T}_{\mathrm p}} \leftidx{^\tau}{T}{}(x) \enspace . \end{equation} \begin{theorem}[Coro.\ of~\cite{Koh80}]\label{th-inv} Perfect information stochastic games with finite state and action spaces have a uniform value and uniform optimal policies $\sigma^*,\tau^*$ that are obtained as follows: given an invariant half-line $\alpha \mapsto u+ \alpha \nu$ of the Shapley operator $T$, take for $\sigma^*$ any policy $\sigma$ attaining the minimum in~\eqref{eq:T-min} when $x$ is substituted by $u+\alpha \nu$ for $\alpha$ large enough. Similarly, take for $\tau^*$ any policy $\tau$ attaining the maximum in~\eqref{eq:T-max} when $x$ is substituted by the same quantity. \end{theorem} \begin{proof} Recall that the {\em germ} of a real function $\mathbb{R}\to \mathbb{R}, \; \alpha \mapsto f(\alpha)$, at the point $+\infty$ is the equivalence class of $f$ modulo the relation $f\sim g$ if there exists $\alpha_0$ such that $f(\alpha) =g(\alpha)$ for $\alpha\geqslant \alpha_0$. We shall use the same notation $f$ for the function and its equivalence class. We shall consider in particular the set $\mathbb{A}$ of germs of affine functions. Observe that $\mathbb{A}$ is invariant by linear combinations, by translation by a constant, and also by the operations $\min$ and $\max$, because $\mathbb{A}$ is totally ordered (of two affine functions of $\alpha$, one ultimately dominates the other as $\alpha\to \infty$). Since the action spaces are finite, if $f\in \mathbb{A}^n$, we may identify $\alpha \to T(f(\alpha))$ to an element of $\mathbb{A}^n$. In other words, $T$ acts on vectors of germs of affine functions. We now substitute $x=u+ \alpha \nu$ in~\eqref{eq:Shapley-operator}. Since $\mathbb{A}$ is totally ordered, the minima and maxima in~\eqref{eq:Shapley-operator} are attained by actions independent of $\alpha$ provided that $\alpha$ is large enough. I.e., there exists policies $\sigma^*$ and $\tau^*$ of players \mathcal{M}IN and \mathcal{M}AX respectively and a constant $\alpha_0$ such that for all $\alpha\geqslant \alpha_0$, \begin{align}\label{e-invhalf} u+(\alpha +1) \nu = T(u+ \alpha \nu) = T^{\sigma^*}(u+ \alpha \nu) = \leftidx{^{\tau^*}}{\!T}{}(u+\alpha \nu) \enspace . \end{align} It will now be convenient to use the operator $\leftidx{^\tau}{T}{^\sigma}$ from $\mathbb{R}^n$ to $\mathbb{R}^n$, such that: \[ \leftidx{^\tau}{T}{^\sigma_i}(x) = r_i^{\sigma(i) \tau(i,\sigma(i))} + P_i^{\sigma(i) \tau(i,\sigma(i))} x \enspace. \] We have \[ T(u+\alpha \nu) = \leftidx{^{\tau^*}}{\!T}{^{\sigma^*}}(u+\alpha \nu) \enspace . \] The dynamic programming principle, for one-player games, implies that \[ \inf_{\sigma} J_i^k(\sigma,\tau^*) = (\leftidx{^{\tau^*}}{\!T}{})^k_i(0) \enspace , \] where the infimum is taken over all strategies of player \mathcal{M}IN. 
By~\eqref{e-invhalf}, $\alpha \mapsto u+ \alpha \nu$ is an invariant half-line of $\leftidx{^{\tau^*}}{\!T}{}$, and so \[ \lim_k \inf_{\sigma}\frac{1}{k} J_i^k(\sigma,\tau^*) = \lim_k \frac{1}{k}(\leftidx{^{\tau^*}}{\!T}{})^k_i(0) = \chi_i (\leftidx{^{\tau^*}}{\!T}{})= \nu_i \enspace, \] showing that the second equality in~\eqref{e-def-optuniform} holds, with $v^{\text{U}}=\nu$. Considering $T^{\sigma^*}$ instead of $\leftidx{^{\tau^*}}{\!T}{}$ and arguing by duality, we deduce that the first equality in~\eqref{e-def-optuniform} also holds, so that $\sigma^*,\tau^*$ are uniform optimal policies, which implies that the uniform value exists. \end{proof} Let us mention that the mean-payoff vector and the uniform value exist, more generally, when $T$ is semialgebraic~\cite{BK76} or even definable in an o-minimal structure~\cite{BGV15}. However, the existence of $\chi(T)$ is not guaranteed in general: a recent result of Vigeral~\cite{Vig13} shows that the limit may not exist even with a compact action space and transition payments and probabilities that are continuous with respect to the actions. Finally, observe that the existence of an invariant half-line $\alpha \mapsto u+ \alpha \nu$ where $\nu= \lambda e$ is a constant vector is equivalent to the solvability of the ergodic equation $T(u)= u+ \lambda e$. In the present setting, this implies that the solvability of the ergodic equation is equivalent to the mean-payoff vector being independent of the initial state, that is $\chi_i(T) = \chi_j(T)$ for every $i, j \in S$. Moreover, in this special case, the optimal policies $\sigma^*,\tau^*$ constructed in Theorem~\ref{th-inv} are obtained by selecting minimizing and maximizing actions in the expression of $T(x)$ in~\eqref{eq:Shapley-operator}, when $x$ is replaced by $u$. \section{Generic uniqueness of the bias vector of stochastic games} \label{sec:generic-uniqueness} \subsection{Statement of the main result} \label{sec:main-result} Let us first recall some definitions. A {\em polyhedron} in $\mathbb{R}^n$ is an intersection of finitely many closed half-spaces, a {\em face} of a polyhedron is an intersection of this polyhedron with a supporting hyperplane, and a {\em polyhedral complex} is a finite set $\mathcal{K}$ of polyhedra satisfying the following two properties: \begin{enumerate} \item if $P \in \mathcal{K}$ and $F$ is a face of $P$, then $F \in \mathcal{K}$; \item for all $P, Q \in \mathcal{K}$, $P \cap Q$ is a face of $P$ and $Q$. \end{enumerate} A polyhedron in $\mathcal{K}$ is called a {\em cell} of the polyhedral complex. We refer to the textbook~\cite{DLRS10} for background on polyhedral complexes. Also, a map over $\mathbb{R}^n$ is said to be {\em piecewise affine} if $\mathbb{R}^n$ can be covered by a finite union of polyhedra (with nonempty interior) on which its restriction is affine. In that case, the set of such polyhedra can be refined into a polyhedral complex. In~\cite{Ovc02,AT07}, it is shown that the piecewise affine functions are exactly the ones that are defined as in~\eqref{eq:Shapley-operator}, i.e., the functions that can be written as a minimax over finite sets of affine functions. Finally, we introduce the following definition, which extends the notion of (finite) ergodic Markov chain. \begin{definition}\label{def-ergodicity} A stochastic game with finite state space and Shapley operator $T: \mathbb{R}^n \to \mathbb{R}^n$ is {\em ergodic} if the ergodic equation~\eqref{eq:ergodic-equation} is solvable for all operators $g+T$ with $g \in \mathbb{R}^n$. 
\end{definition} The authors have given in~\cite{AGH15} necessary and sufficient conditions for a perfect-information stochastic game with finite state space and bounded payment function to be ergodic, conditions that we partly recall in the next subsection. Note however that in the case of finite games, the existence of an invariant half-line for piecewise affine maps readily implies that the latter definition is equivalent to the fact that the mean-payoff vector is constant for all additive perturbation vectors of the transition payments. We now state the main result of this paper, the proof of which is postponed to Subsection~\ref{sec:proof-main-thm}. \begin{theorem} \label{thm:generic-uniqueness} Let $T: \mathbb{R}^n \to \mathbb{R}^n$ be the Shapley operator of a finite stochastic game with perfect information. Assume that the game is ergodic. Then, the space $\mathbb{R}^n$ can be covered by a polyhedral complex such that, for any additive perturbation vector $g \in \mathbb{R}^n$ in the interior of a full-dimensional cell, $g+T$ has a unique bias vector, up to an additive constant. In particular, the set of perturbation vectors $g$ for which $g+T$ has more than one bias vector, up to an additive constant, is included in a finite union of subspaces of codimension at least $1$. \end{theorem} \begin{remark} This perturbation theorem bears some conceptual similarity with results of weak KAM theory; we refer to the monograph by Fathi~\cite{Fat} for more information. The latter theory deals with a class of one-player deterministic games with continuous time and space. In this setting, the bias vector $u$ and the eigenvalue $\lambda$ are solution of an ergodic Hamilton-Jacobi PDE $H(x,D_x u) = \lambda$ where the Hamiltonian $(x,p)\mapsto H(x,p)$ is convex in the adjoint variable $p$. One may consider the perturbation of a Hamiltonian by a potential, which amounts to replacing $H(x,p)$ by $H(x,p)+V(x)$, for some function $V$. This is similar to the replacement of the Shapley operator $T$ by the perturbed Shapley operator $g+T$ in Theorem~\ref{thm:generic-uniqueness}. As observed by Figalli and Rifford in~\cite[Th.~4.2]{rifford}, it follows from weak KAM theory results that under some assumptions, the solution $u$ of $V(x)+H(u,D_x u)=\lambda$ is unique up to an additive constant for a generic function $V$. Theorem~\ref{thm:generic-uniqueness} shows that an analogous property is valid for finite two-player zero-sum stochastic games. We note however that Theorem~\ref{thm:generic-uniqueness} does not extend easily to the case of PDE, since zero-sum games correspond to Hamilton-Jacobi PDE with a {\em nonconvex} Hamiltonian (to which current weak KAM methods do not apply). \end{remark} \subsection{Nonlinear spectral theory} \label{sec:nonlinear-spectral-theory} The purpose of this subsection is to present or extend some known results in nonlinear spectral theory that will be useful to prove Theorem~\ref{thm:generic-uniqueness}, as well as further results in Section~\ref{sec:applications}. \subsubsection{Recession operator and ergodicity conditions} To characterize the ergodicity of a perfect-information stochastic game with finite state space, we shall use, along the lines of~\cite{GG04,AGH15}, the {\em recession operator} associated with the Shapley operator $T: \mathbb{R}^n \to \mathbb{R}^n$. 
This operator is a self-map of $\mathbb{R}^n$ defined by \begin{equation} \label{eq:recession-operator} \widehat{T}(x) := \lim_{\alpha \to +\infty} \frac{T(\alpha x)}{\alpha} \enspace , \quad x \in \mathbb{R}^n \enspace. \end{equation} Its existence is not guaranteed in general, but it does exist when the game is finite, i.e., when $T$ is piecewise affine (and more generally when the payment function is bounded). In this case, $\widehat{T}$ is the Shapley operator of a modified version of the stochastic game represented by $T$ where the transition payments are set to $0$. Indeed, if $T$ is given as in~\eqref{eq:Shapley-operator}, then it is readily seen that the $i$th coordinate map of $\widehat{T}$ is \[ \widehat{T}_i (x) = \min_{a \in A_i} \max_{b \in B_{i,a}} P_i^{a b} x \enspace , \quad x \in \mathbb{R}^n \enspace. \] In this case, we also easily get that \begin{equation*} \label{eq:recession-operator-uniform-convergence} \| T - \widehat{T} \|_\infty \leqslant \|r\|_\infty \enspace , \end{equation*} which implies in particular that the limit~\eqref{eq:recession-operator} defining $\widehat{T}$ is uniform in $x$. Observe that if $\widehat{T}$ exists, then it inherits from $T$ the additive homogeneity and the monotonicity properties. Furthermore, it is positively homogeneous, meaning that $\widehat{T}(\alpha x) = \alpha \widehat{T}(x)$ for every $\alpha \geqslant 0$. As a consequence, any vector proportional to the unit vector of $\mathbb{R}^n$ is a fixed point of $\widehat{T}$. We shall call such fixed points {\em trivial} fixed points. The following result relates the ergodicity of a game to the fixed points of the recession operator associated with its Shapley operator. Since it applies to games with bounded payment function, it deals a fortiori with the case of finite games. \begin{theorem}[{\cite[Th.~3.1]{AGH15}}] Let $\mathcal{G}amma$ be a perfect-information stochastic game with finite state space and bounded payment function. Let $T: \mathbb{R}^n \to \mathbb{R}^n$ be the Shapley operator. The following are equivalent: \begin{enumerate} \item the recession operator has only trivial fixed points; \item the mean-payoff vector exists and is constant for all additive perturbation vectors of the transition payments; \item the ergodic equation~\eqref{eq:ergodic-equation} has a solution for all Shapley operators $g+T$, $g \in \mathbb{R}^n$. \end{enumerate} \label{thm:ergodicity-condition} \end{theorem} \subsubsection{Characterization of the ergodic constant} \label{sec:ergodic-constant-characterization} Let $\mathcal{G}amma$ be a finite stochastic game with perfect information, and let $T$ be its Shapley operator. We shall make use of the reduced Shapley operator $T^\sigma: \mathbb{R}^n \to \mathbb{R}^n$ associated with the policy $\sigma \in \mathcal{S}_{\mathrm p}$ of player \mathcal{M}IN, defined in~\eqref{e-def-Tsigma}. We also have, for all $x \in \mathbb{R}^n$, \begin{equation} \label{eq:stochastic-control-operator} T^\sigma(x) = \max_{\tau \in \mathcal{T}_{\mathrm p}} \big( r^{\sigma \tau} + P^{\sigma \tau} x \big) \enspace , \end{equation} where $r^{\sigma \tau}$ is the vector in $\mathbb{R}^n$ whose $i$th entry is defined by $r^{\sigma \tau}_i = r^{\sigma(i) \tau(i,\sigma(i))}_i$ and $P^{\sigma \tau}$ is the $n \times n$ stochastic matrix whose $i$th row is given by $P^{\sigma \tau}_i = P^{\sigma(i) \tau(i,\sigma(i))}_i$. Observe that $T^\sigma$ is convex (componentwise), monotone and additively homogeneous. 
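For completeness, the recession operator can be evaluated with the same tabulated data layout as in the sketch of Subsection~\ref{subsec-operator}: it is simply the Shapley operator of the game whose payments are all set to zero. The following lines are again only an illustration of ours; they evaluate $\widehat{T}$ at a point but do not, by themselves, decide whether the game is ergodic in the sense of Definition~\ref{def-ergodicity}.
\begin{verbatim}
import numpy as np

def recession(x, P):
    """Recession operator: the Shapley operator of the same game with every
    transition payment set to zero (only the transition data P is needed)."""
    return np.array([
        min(max(P[i][a][b] @ x for b in P[i][a]) for a in P[i])
        for i in range(len(P))
    ])
\end{verbatim}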
If $P$ is a $n \times n$ stochastic matrix, the {\em directed graph} associated with $P$ is composed of the nodes $1,\dots,n$ and of the arcs $(i,j)$, $1 \leqslant i,j \leqslant n$, such that $P_{i j} > 0$. A {\em class} of the matrix $P$ is a maximal set of nodes such that every two nodes in the set are connected by a directed path. A class is said to be {\em final} if every path starting from a node of this class remains in it. Let us denote by $\mathcal{M}(P)$ the set of invariant probability measures of $P$, i.e., the set of stochastic (column) vectors $m \in \mathbb{R}^n$ such that $\transpose{m} \, P = \transpose{m}$. Given a final class $C$ of $P$, there is a unique invariant probability measure $m \in \mathcal{M}(P)$ the support of which is $C$, i.e., $\{1 \leqslant i \leqslant n \mid m_i > 0 \} = C$. Moreover, the set $\mathcal{M}(P)$ is the convex hull of such measures. Since the number of final classes of $P$ is finite, $\mathcal{M}(P)$ is a convex polytope. Let us denote by $\overline{\chi}(T)$ the {\em upper mean payoff} of $T$, i.e., the greatest entry of the mean-payoff vector $\chi(T)$. We next give a characterization of $\overline{\chi}(T)$. Obviously, if $T$ satisfies the ergodic equation~\eqref{eq:ergodic-equation}, then $\chi(T)$ is a constant vector and the eigenvalue is $\lambda(T) = \overline{\chi}(T)$. In the sequel, we denote by $\<x,y>$ the standard scalar product in $\mathbb{R}^n$ of two vectors $x$ and $y$. \begin{lemma} \label{lem:chibar-characterization} Let $T: \mathbb{R}^n \to \mathbb{R}^n$ be the Shapley operator of a finite stochastic game with perfect information $\mathcal{G}amma$. Then the upper mean payoff of $T$ is given by \begin{equation} \label{eq:chibar} \overline{\chi}(T) = \min_{\sigma \in \mathcal{S}_{\mathrm p}} \max \{ \< m , r^{\sigma \tau} > \mid \tau \in \mathcal{T}_{\mathrm p}, \; m \in \mathcal{M}(P^{\sigma \tau}) \} \enspace . \end{equation} \end{lemma} \begin{proof} First, observe that for all policies $\sigma \in \mathcal{S}_{\mathrm p}$, we have $T \leqslant T^\sigma$, which yields, by monotonicity of the operators, $\chi(T) \leqslant \chi(T^\sigma)$, and in particular $\overline{\chi}(T) \leqslant \overline{\chi}(T^\sigma)$. Considering an invariant half-line~\eqref{eq:invariant-half-line} of $T$, we know that there exist a vector $u \in \mathbb{R}^n$ such that $T(u) = u + \chi(T)$. Let $\sigma \in \mathcal{S}_{\mathrm p}$ be a policy of player \mathcal{M}IN such that $T(u) = T^\sigma(u)$. Then, we have $T^\sigma(u) \leqslant u + \overline{\chi}(T) e$. Furthermore, we know by a Collatz-Wielandt formula (see~\cite[Prop.~1]{GG04}) that, for any monotone and additively homogeneous map $F: \mathbb{R}^n \to \mathbb{R}^n$, we have \[ \overline{\chi}(F) = \inf \{ \mu \in \mathbb{R} \mid \exists x \in \mathbb{R}^n, \; F(x) \leqslant \mu e + x \} \enspace . \] So, we deduce that $\overline{\chi}(T^\sigma) \leqslant \overline{\chi}(T)$, and finally that \[ \overline{\chi}(T) = \min_{\sigma \in \mathcal{S}_{\mathrm p}} \overline{\chi}(T^\sigma) \enspace . \] Now we fix a policy $\sigma$ of player \mathcal{M}IN, and we let $\chi := \chi(T^\sigma)$ and \[ \mu^{\sigma} := \max \{ \< m , r^{\sigma \tau} > \mid \tau \in \mathcal{T}_{\mathrm p}, \; m \in \mathcal{M}(P^{\sigma \tau}) \} \enspace . \] Since $T^\sigma$ has an invariant half-line with direction $\chi$, there is a vector $v \in \mathbb{R}^n$ such that $T^\sigma(v + \alpha \chi) = v + (\alpha+1) \chi$ for all $\alpha \geqslant 0$. 
In particular, for every policy $\tau \in \mathcal{T}_{\mathrm p}$, we have \[ r^{\sigma \tau} + P^{\sigma \tau} v \leqslant T^\sigma(v) = v + \chi \leqslant v + \overline{\chi}(T^\sigma) e \enspace . \] Multiplying this inequality by any $m \in \mathcal{M}(P^{\sigma \tau})$, we deduce that $\mu^\sigma \leqslant \overline{\chi}(T^\sigma)$. Furthermore, since the germs of affine functions from $\mathbb{R}$ to $\mathbb{R}$ at infinity are totally ordered, there exists a policy $\tau \in \mathcal{T}_{\mathrm p}$ such that \[ T^\sigma(v + \alpha \chi) = r^{\sigma \tau} + P^{\sigma \tau} (v + \alpha \chi) \] for all $\alpha$ large enough. In particular, since the equality \begin{equation} \label{eq:halfline-linear-map} v + (\alpha+1) \chi = r^{\sigma \tau} + P^{\sigma \tau} (v + \alpha \chi) \end{equation} holds for all $\alpha$ large enough, we get that $P^{\sigma \tau} \chi = \chi$. Thus, $\chi$ is a {\em harmonic vector} for the stochastic matrix $P^{\sigma \tau}$, and as such it is constant on any final class of $P^{\sigma \tau}$, and its maximum is attained on one of these final classes (see~\cite[Lem.~2.9]{AG03}). Let $m \in \mathcal{M}(P^{\sigma \tau})$ be the invariant probability measure associated with a final class $C$ of $P^{\sigma \tau}$ such that $\chi_i = \overline{\chi}(T^\sigma)$ for all $i \in C$. Then, we deduce from~\eqref{eq:halfline-linear-map} that $\<m,r^{\sigma \tau}> = \overline{\chi}(T^\sigma)$, which yields $\mu^\sigma \geqslant \overline{\chi}(T^\sigma)$ and finally $\mu^\sigma = \overline{\chi}(T^\sigma)$. \end{proof} Let us mention that in~\eqref{eq:chibar}, the set of invariant probability measures of $P^{\sigma \tau}$, $\mathcal{M}(P^{\sigma \tau})$, may be replaced by the set of its extreme points, denoted by $\mathcal{M}^*(P^{\sigma \tau})$, since it is a convex polytope. Note that $\mathcal{M}^*(P^{\sigma \tau})$ is the set of invariant probability measures, the supports of which are the final classes of $P^{\sigma \tau}$. \subsubsection{Structure of the eigenspace} An ingredient of our approach is a result of \cite{AG03} which describes the eigenspace of {\em one-player} Shapley operators $T$. We next recall this result. We assume that $T$ arises from a game in which only player \mathcal{M}AX has nontrivial actions, so that player \mathcal{M}IN has only one possible policy. We also assume that $T$ satisfies the ergodic equation~\eqref{eq:ergodic-equation} so that its eigenvalue $\lambda(T)$ is equal to the entries of the mean-payoff vector $\chi(T)$ which is constant. In this case, the representation of the upper mean payoff $\overline{\chi}(T)$, hence of the eigenvalue $\lambda(T)$, in Lemma~\ref{lem:chibar-characterization} simplifies, as the dependency on $\sigma$ can be dropped, and we arrive, with a trivial simplification of the notation, at \begin{align} \lambda(T) = \max \{ \< m , r^{\tau} > \mid \tau \in \mathcal{T}_{\mathrm p}, \; m \in \mathcal{M}(P^{ \tau}) \} \enspace . \label{eq:eigenvalue-convex} \end{align} It is shown in~\cite{AG03} that the dimension of the eigenspace of $T$ is controlled by the number of {\em critical classes}. The latter can be defined through the notion of maximizing measures, which rely on {\em randomized} policies. For every state $i$, such a policy $\tau$ assigns to every action $b \in B_i$ a probability $\tau(i,b)$ that this action is selected. 
This leads to the stochastic matrix $P^\tau$ with entries $P^\tau_{i j} = \sum_{b \in B_i} P^{b}_{i j} \, \tau(i,b)$, and to the payment vector $r^{\tau}$ with entries $r^\tau_i = \sum_{b \in B_i} r^{b}_{i} \, \tau(i,b)$. Then, the maximum in~\eqref{eq:eigenvalue-convex} is unchanged if it is taken over the set of randomized policies $\tau$ and of invariant probability measures $m$ of the corresponding stochastic matrix $P^\tau$ (see~\cite[Prop.~7.2]{AG03}). A measure $m$ is {\em maximizing} if $\lambda(T) = \<m,r^\tau>$ for some randomized policy $\tau$, and if $m \in \mathcal{M}^*(P^\tau)$, that is, the support of $m$ is a final class of $P^\tau$. A subset $I\subset \{1,\dots,n\}$ is a {\em critical class} if there exists a maximizing measure $m$ whose support is $I$, i.e., $I = \{ 1 \leqslant i \leqslant n \mid m_i>0 \}$, and if $I$ is a maximal element with respect to inclusion among all the subsets of $\{1,\dots,n\}$ which arise in this way. Note that critical classes are disjoint. The following lemma gives a sufficient condition for the critical class to be unique. Note that the result does not require randomized stationary strategies. \begin{lemma} Let $T: \mathbb{R}^n \to \mathbb{R}^n$ be a convex, monotone and additively homogeneous map. Suppose that the ergodic equation~\eqref{eq:ergodic-equation} is solvable. If there is a unique probability measure which attains the maximum in~\eqref{eq:eigenvalue-convex}, then $T$ has a unique critical class. \label{lem:uniqueness-critical-class} \end{lemma} \begin{proof} Let $C$ be a critical class of $T$. There exists a randomized policy $\tau$ such that $C$ is a final class of $P^\tau$ and the unique invariant probability measure $m$ with support $C$ satisfies $\lambda = \lambda(T) = \<m,r^\tau>$. Let $u$ be an eigenvector. For every $i \in \{1,\dots,n\}$, we have $\lambda + u_i - r^\tau_i - P^\tau_i u \geqslant 0$. Furthermore, since $\transpose{m} \, P^\tau = \transpose{m}$, we also have \[ \sum_{i \in C} m_i \, ( \lambda + u_i - r^\tau_i - P^\tau_i u) = \lambda + \<m,u> - \<m,r^\tau> - \<m,(P^\tau u)> = 0 \enspace . \] This yields that $\lambda + u_i - r^\tau_i - P^\tau_i u = 0$ for every $i \in C$. Let $\tau'$ be a deterministic policy such that $\tau'(i) = b_i$ with $\tau(i,b_i) > 0$ for all indices $i$. Since $P^{\tau}$ can be written as a convex combination with positive coefficients of the $P^{\tau'}$ with such $\tau'$, we deduce that $C$ contains a final class $C'$ of $P^{\tau'}$, and so there exists an invariant probability measure $m'$ of $P^{\tau'}$ whose support $C'$ is included in $C$. Similarly, $r^{\tau}$ is a convex combination of the $r^{\tau'}$ with the same coefficients as for $P^{\tau}$. Hence, we deduce by the same argument as above that $\lambda + u_i - r^{\tau'}_i - P^{\tau'}_i u = 0$ for every $i \in C$. Multiplying by $m'$, we obtain that $\lambda = \<m',r^{\tau'}>$, which implies that $m'$ attains the maximum in~\eqref{eq:eigenvalue-convex}. Since such a probability measure is unique, $m'$ and $C'$ are the same for all $\tau'$ as above, hence $C=C'$ is the support of the unique probability measure attaining the maximum in~\eqref{eq:eigenvalue-convex}. \end{proof} We now describe the eigenspace of $T$. \begin{theorem}[{\cite[Th.~1.1]{AG03}}] \label{thm:convex-spectral-theorem} Let $T: \mathbb{R}^n \to \mathbb{R}^n$ be a convex, monotone and additively homogeneous map. Suppose that the ergodic equation~\eqref{eq:ergodic-equation} is solvable, and let $C\subset \{1,\dots,n\}$ be the union of the critical classes of $T$. 
We denote by $\pi_C$ the restriction map $\mathbb{R}^n\to \mathbb{R}^C$, $x\mapsto (x_i)_{i\in C}$. Then, the set of eigenvectors of $T$, denoted by $\mathcal{E}(T)$, satisfies the following properties: \begin{enumerate} \item Every element $x$ of $\mathcal{E}(T)$ is uniquely determined by its restriction $\pi_C(x)$. \item The set $\pi_C(\mathcal{E}(T))$ is convex and its dimension is at most equal to the number of critical classes of $T$; moreover, the latter bound is attained when $T$ is piecewise affine. \end{enumerate} \end{theorem} In particular, combined with Lemma~\ref{lem:uniqueness-critical-class}, the above result yields the following. \begin{corollary} Let $T: \mathbb{R}^n \to \mathbb{R}^n$ be a convex, monotone and additively homogeneous map. Suppose that the ergodic equation~\eqref{eq:ergodic-equation} is solvable. If there is a unique probability measure which attains the maximum in~\eqref{eq:eigenvalue-convex}, then $T$ has a unique eigenvector up to an additive constant. \label{coro:uniqueness-eigenvector} \end{corollary} We refer the reader to~\cite{AG03} for more background on critical classes, which admit several characterizations and can be computed in polynomial time when the game is finite. We only provide here a simple illustration in order to understand the latter theorem. \begin{example} Let $T: \mathbb{R}^2\to\mathbb{R}^2$ be such that \[ T_1(x) = \max \Big\{ x_1, \frac{x_1+x_2}{2} \Big\} \enspace , \quad T_2(x) = \max \Big\{ -3+x_2, \frac{x_1+x_2}{2} \Big\} \enspace . \] We have $T(0)=0$, which shows in particular that the upper mean payoff is $0$. If player \mathcal{M}AX chooses, when in state $1$, the action corresponding to the first term in the expression of $T_1$, and when in state $2$, the action corresponding to the second term in the expression of $T_2$, we arrive at the transition matrix \[ P= \begin{pmatrix} 1 & 0 \\ 1/2 & 1/2\end{pmatrix} \] which has the invariant probability measure $m=\transpose{(1,0)}$. This measure attains the maximum in~\eqref{eq:eigenvalue-convex}. However, its support $I=\{1\}$ is not a critical class for it is not maximal with respect to inclusion. Indeed, if player \mathcal{M}AX chooses instead, when in state $1$, the action corresponding to the second term in the expression of $T_1$, we arrive at the transition matrix \[ P = \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 1/2\end{pmatrix} \] which has the invariant probability measure $m=\transpose{(1/2,1/2)}$. This measure attains the maximum in~\eqref{eq:eigenvalue-convex} and its support $I=\{1,2\}$ is maximal with respect to inclusion. Hence, $I=\{1,2\}$ is the unique critical class. It follows from Theorem~\ref{thm:convex-spectral-theorem} that $0$ is the unique eigenvector of $T$, up to an additive constant. \end{example} Another ingredient is a variant of a result of Bruck~\cite{Bru73}, concerning the topology of fixed-point sets of nonexpansive maps. We now consider a {\em two-player} Shapley operator $T$ such that the ergodic equation~\eqref{eq:ergodic-equation} is solvable, and denote by \[ \mathcal{E}(T):= \{u \in \mathbb{R}^n \mid T(u)= \lambda e + u\} \] the set of eigenvectors of $T$ (recall that the eigenvalue $\lambda$ is unique). \begin{theorem}[{Compare with~\cite[Th.~2]{Bru73}}] \label{thm:nonexpansive-retract} Let $T: \mathbb{R}^n \to \mathbb{R}^n$ be a monotone and additively homogeneous map. Assume that the ergodic equation~\eqref{eq:ergodic-equation} is solvable. 
Then, the set of eigenvectors $\mathcal{E}(T)$ is a retract of $\mathbb{R}^n$ by a sup-norm nonexpansive map, meaning that $\mathcal{E}(T)=p(\mathbb{R}^n)$ where $p$ is a sup-norm nonexpansive self-map of $\mathbb{R}^n$ such that $p=p^2$. In particular, $\mathcal{E}(T)$ is arcwise connected. \end{theorem} \begin{proof} The result of Bruck~\cite[Th.~2]{Bru73} shows that, under some compactness conditions, the fixed-point set of a nonexpansive self-map of a Banach space is a retract of the whole space by a nonexpansive map. Assume now that $T$ is monotone, additively homogeneous, and admits an eigenvector for the eigenvalue $\lambda$. Then, the eigenspace $\mathcal{E}(T)$ coincides with the fixed-point set of the map $x\mapsto -\lambda e +T(x)$. The latter map is sup-norm nonexpansive and satisfies the condition of~\cite[Th.~2]{Bru73}, and so, $\mathcal{E}(T)$ is a nonexpansive retract of $\mathbb{R}^n$. \end{proof} \begin{remark} The retraction $p$ in Theorem~\ref{thm:nonexpansive-retract} can be chosen to be monotone and additively homogeneous. This can actually be shown by elementary means, following a construction in the proof of~\cite[Lem.~3]{GG04}. Indeed, we may assume without loss of generality that $\lambda=0$, and consider $q(x):= \lim_{k \to \infty} \inf_{\ell \geqslant k} T^\ell(x)$, which is finite because every orbit of a nonexpansive map that admits a fixed point must be bounded. Since $T$ is monotone and continuous, we get $T(q(x)) \leqslant q(x)$, and so, $p(x):= \lim_{k\to \infty} T^k(q(x))$, which is the limit of a nonincreasing and bounded sequence, exists and is finite. The map $p$ is easily shown to be monotone and additively homogeneous and to satisfy $p=p^2$. \end{remark} \subsection{Proof of Theorem~\ref{thm:generic-uniqueness}} \label{sec:proof-main-thm} Let $T$ be the Shapley operator of a finite stochastic game with perfect information $\mathcal{G}amma$ which is assumed to be ergodic. Let $\sigma \in \mathcal{S}_{\mathrm p}$ be a policy of player \mathcal{M}IN. We define the real map $\lambda^\sigma(\cdot)$ on $\mathbb{R}^n$ by \begin{equation} \label{eq:eigenvalue-one-player} \lambda^\sigma(g) := \max \{ \< m, (g+r^{\sigma \tau}) > \mid \tau \in \mathcal{T}_{\mathrm p}, \; m \in \mathcal{M}^*(P^{\sigma \tau}) \} \enspace , \end{equation} where $\mathcal{M}^*(P)$ denotes the set of extreme points of the convex polytope $\mathcal{M}(P)$, that is, the set of invariant probability measures associated with the final classes of the stochastic matrix $P$. The fact that $\mathcal{M}^*(P^{\sigma \tau})$ is a set of probability measures yields that $\lambda^{\sigma}$ is monotone and additively homogeneous, hence sup-norm nonexpansive (and continuous). Furthermore, since the set of policies $\mathcal{T}_{\mathrm p}$ of player \mathcal{M}AX is finite, as well as all the sets $\mathcal{M}^*(P^{\sigma \tau})$, then the map $\lambda^\sigma$ is piecewise affine. We now define the polyhedral complex $\mathcal{C}^\sigma$ covering $\mathbb{R}^n$, the full-dimensional cells of which are precisely the maximal polyhedra on which the piecewise affine map $\lambda^\sigma$ coincides with an affine map. Let $Q$ be a cell of $\mathcal{C}^\sigma$ with full dimension. We claim that if a vector $g$ is in the interior of $Q$, then the set of eigenvectors of the reduced one-player Shapley operator $F := g+T^\sigma$ is either empty or reduced to a line. 
To see this, it suffices to observe that the measure $m$ attaining the maximum in~\eqref{eq:eigenvalue-one-player} is unique for all $g$ in the interior of $Q$ and independent of the choice of $g$ in this interior. Indeed, if $m$ is such a measure, then $m\in\mathcal{M}^*(P^{\sigma \tau})$ for some $\tau \in \mathcal{T}_{\mathrm p}$, and, for $d=\< m, r^{\sigma \tau} > \in\mathbb{R}$, we have $\lambda^\sigma(g')\geqslant\<m,g'> + d$, for all $g'\in \mathbb{R}^n$, with equality at $g$. Since $g'\mapsto \lambda^\sigma(g')$ is an affine map on $Q$, and $g$ is in the interior of $Q$, the equality $\lambda^\sigma(g')=\<m,g'> + d$ holds for all $g'\in Q$ and so $m$ must coincide with the linear part of the affine map, which is independent of $g$ (and unique). We deduce from Corollary~\ref{coro:uniqueness-eigenvector} that $\mathcal{E}(F)$, if it is nonempty, is reduced to a line of direction $e$. Consider now the polyhedral complex $\mathcal{C}$ obtained as the refinement of all the complexes $\mathcal{C}^\sigma$. This complex still covers $\mathbb{R}^n$ and has cells with nonempty interior. Let $g$ be a perturbation vector in the interior of a full-dimensional cell of $\mathcal{C}$. Since the game $\mathcal{G}amma$ is ergodic, $\mathcal{E}(g+T)$ is not empty. Let $u$ be an eigenvector of $g+T$. According to~\eqref{eq:T-min}, there is a policy $\sigma \in \mathcal{S}_{\mathrm p}$ of player \mathcal{M}IN such that $g+T(u) = g+T^\sigma(u)$. Hence $u$ is also an eigenvector of $g+T^\sigma$. So, there is a subset $\Sigma^*$ of $\mathcal{S}_{\mathrm p}$ such that $\mathcal{E}(g+T) = \bigcup_{\sigma \in \Sigma^*} \mathcal{E}(g+T^\sigma)$. Moreover, we have proved that for any policy $\sigma \in \Sigma^*$, the eigenspace $\mathcal{E}(g+T^\sigma)$ is reduced to a line. Thus, $\mathcal{E}(g+T)$ is composed of a finite union of lines which all have the same direction, $e$. Consider now the hyperplane orthogonal to the unit vector, $H:=\{x\in \mathbb{R}^n\mid \<x,e> =0\}$, and let $\pi$ denote the orthogonal projection on $H$. Then, $\pi(\mathcal{E}(g+T))=\mathcal{E}(g+T)\cap H$ is finite. However, by Theorem~\ref{thm:nonexpansive-retract}, $\mathcal{E}(g+T)$ is connected. Then, the set $\pi(\mathcal{E}(g+T))$ is also connected, and since it is finite, it must be reduced to a point. It follows that $g+T$ has a unique eigenvector, up to an additive constant. \qed \subsection{Example} We conclude this section by an example illustrating Theorem~\ref{thm:generic-uniqueness}. Consider the following Shapley operator defined on $\mathbb{R}^3$ (here we use $\wedge$ and $\vee$ instead of $\min$ and $\max$, respectively, with the convention that addition has precedence over them): \begin{equation*} T(x) = \begin{pmatrix} \frac{1}{2} (x_1 + x_3) \, \wedge \, 1 + \frac{1}{2} (x_1 + x_2)\\ 2 + \frac{1}{2} (x_1 + x_3) \, \wedge \, \left(1 + \frac{1}{2} (x_1 + x_2) \, \vee \, -2 + x_3 \right)\\ -3 + \frac{1}{2} (x_1 + x_3) \, \vee \, 1 + x_3 \end{pmatrix} \enspace . \end{equation*} It can be proved, using Theorem~\ref{thm:ergodicity-condition}, that the ergodic equation~\eqref{eq:ergodic-equation} is solvable for every perturbation vector $g \in \mathbb{R}^3$. Figure~\ref{fig} shows the intersection of the hyperplane $\{g \in \mathbb{R}^3 \mid g_3 = 0\}$ with the polyhedral complex introduced in Theorem~\ref{thm:generic-uniqueness}. Here, for each vector $g$ in the interior of a full-dimensional polyhedron, $g+T$ has a unique eigenvector up to an additive constant. 
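For the reader who wishes to experiment with this example, the following sketch of ours encodes the operator $T$ above and approximates an eigenpair of $g+T$ numerically, first estimating the eigenvalue from the linear growth of $(g+T)^k(0)$ and then running an averaged (Krasnoselskii--Mann) fixed-point iteration. This is only a heuristic illustration, not the policy iteration algorithm studied in the next section, and the sample perturbation $g$ is arbitrary.
\begin{verbatim}
import numpy as np

def T(x):
    """The three-dimensional Shapley operator of the example."""
    h = 0.5 * (x[0] + x[2])
    k = 1.0 + 0.5 * (x[0] + x[1])
    return np.array([min(h, k),
                     min(2.0 + h, max(k, -2.0 + x[2])),
                     max(-3.0 + h, 1.0 + x[2])])

def eigenpair(F, n, iters=50_000):
    """Approximate (lambda, u) with F(u) = lambda*e + u.  Phase 1 estimates
    lambda from F^k(0)/k; phase 2 averages the iteration x -> F(x) - lambda*e.
    The result is normalized so that its last coordinate is zero."""
    x = np.zeros(n)
    for _ in range(iters):
        x = F(x)
    lam = x[0] / iters
    u = np.zeros(n)
    for _ in range(iters):
        u = 0.5 * (u + F(u) - lam)
    return lam, u - u[-1]

g = np.array([0.05, 0.02, 0.0])   # a sample perturbation with g_3 = 0
lam, u = eigenpair(lambda x: g + T(x), 3)
\end{verbatim}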
\begin{figure} \caption{Intersection of the hyperplane $\{g \in \mathbb{R}^3 \mid g_3 = 0\}$ with the polyhedral complex of Theorem~\ref{thm:generic-uniqueness}; for each $g$ in the interior of a full-dimensional cell, $g+T$ has a unique eigenvector up to an additive constant.} \label{fig} \end{figure} Let us detail what happens in the neighborhood of $g=0$ within this hyperplane, $g=0$ being a point at which $g+T$ fails to have a unique eigenvector. Note that for such $g$ in the neighborhood of $g=0$, the eigenvalue of $g+T$ remains $1$. \begin{itemize} \item If $g_1+g_2 = 0$, the eigenvectors of $g+T$ are defined by \[ x_1=x_2+2 g_1 \enspace , \quad -3+g_2 \leqslant x_2-x_3 \leqslant -2-g_1 \enspace . \] \item If $g_1+g_2 > 0$, the unique eigenvector, up to an additive constant, is \[ (-2+2 g_1,-2+2 g_1+2 g_2,0) \enspace . \] \item If $g_1+g_2 < 0$, the unique eigenvector, up to an additive constant, is \[ (-3+2 g_1+g_2,-3+g_2,0) \enspace . \] \end{itemize} \section{Application to policy iteration} \label{sec:applications} We finally apply our results to show that policy iteration combined with a perturbation scheme can solve degenerate stochastic games. \subsection{Hoffman-Karp policy iteration} Let us recall the notation of Sections~\ref{sec:preliminaries} and~\ref{sec:generic-uniqueness}. We denote by $\mathcal{G}amma$ a finite stochastic game with perfect information. The (finite) set of policies of player \mathcal{M}IN is denoted by $\mathcal{S}_{\mathrm p}$, i.e., an element of $\mathcal{S}_{\mathrm p}$ is a map $\sigma: S \to \bigcup_{i \in S} A_i$ such that $\sigma(i) \in A_i$ for every state $i \in S$. The (finite) set of policies of player \mathcal{M}AX is denoted by $\mathcal{T}_{\mathrm p}$, i.e., an element of $\mathcal{T}_{\mathrm p}$ is a map $\tau: K_A \to \bigcup_{i \in S, a \in A_i} B_{i,a}$ such that $\tau(i,a) \in B_{i,a}$ for every state $i \in S$ and action $a \in A_i$. Finally, recall that for $\sigma \in \mathcal{S}_{\mathrm p}$ and $\tau \in \mathcal{T}_{\mathrm p}$, $P^{\sigma \tau}$ denotes the $n \times n$ stochastic matrix whose $i$th row is given by $P^{\sigma \tau}_i = P^{\sigma(i) \tau(i,\sigma(i))}_i$. When $T:\mathbb{R}^n \to \mathbb{R}^n$ is the Shapley operator~\eqref{eq:Shapley-operator} of a finite stochastic game with perfect information, Hoffman and Karp~\cite{HK66} have introduced a policy iteration algorithm, which takes the description of the game as input and returns the eigenvalue $\lambda$ and an eigenvector $u$ of $T$, i.e., a solution $(\lambda,u) \in \mathbb{R} \times \mathbb{R}^n$ of the ergodic equation $T(u) = \lambda e + u$. Also, optimal stationary strategies for both players in the mean-payoff game can be derived from the output of the algorithm. This algorithm, and more generally policy iteration procedures, are a standard way of solving mean-payoff stochastic games. In the worst case, they require an exponential number of iterations, see~\cite{friedmann}. However, no polynomial-time algorithm is known to solve mean-payoff games. In fact, it is an important open question whether one exists, since this problem, like the polynomial-time equivalent problems of solving other classes of perfect-information two-player zero-sum stochastic games (discounted stochastic games, simple stochastic games, parity games, see~\cite{AM09}), is one of the few natural problems that belong to the complexity class NP~$\cap$~coNP but are not known to be solvable in polynomial time, see~\cite{Con92}. It is convenient here to state an abstract, slightly more general, version of the Hoffman-Karp algorithm, described in terms of the operators $T$ and $T^\sigma$ (Algorithm~\ref{algo:main}). \begin{algorithm} \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \DontPrintSemicolon \Input{Shapley operator $T$ of a perfect-information finite stochastic game. 
} \Output{eigenvalue $\lambda$ and eigenvector $u$ of $T$.} \BlankLine \KwSty{initialization}: select an arbitrary policy $\sigma_0 \in \mathcal{S}_{\mathrm p}$\; \Repeat{$\sigma_{k+1} = \sigma_{k}$}{ compute an eigenpair $(\lambda^k,v^k)$ of $T^{\sigma_k}$\label{it:value-search}\; improve the policy $\sigma_k$ in a conservative way: select a policy $\sigma_{k+1} \in \mathcal{S}_{\mathrm p}$ such that $T^{\sigma_{k+1}}(v^k) = T(v^k)$ with, for every state $i \in S$ such that $T_i^{\sigma_k}(v^k) = T_i(v^k)$, $\sigma_{k+1}(i) = \sigma_k(i)$\; } \KwSty{return} $\lambda^k$ and $v^k$\; \caption{Policy iteration, compare with~\cite{HK66}} \label{algo:main} \end{algorithm} We assume Algorithm~\ref{algo:main} is interpreted in exact arithmetic (the vectors $v^k$ have rational coordinates and the $\lambda^k$ are rational numbers). To implement Step~\ref{it:value-search}, we may call any oracle able to compute the eigenvalue and an eigenvector of a one-player stochastic game. In the original approach of Hoffman and Karp, the oracle consists in applying the same policy iteration algorithm for the one-player game with fixed policy $\sigma_k$. The proof of Hoffman and Karp shows that Algorithm~\ref{algo:main} is valid under a restrictive assumption. \begin{theorem}[{Corollary of~\cite{HK66}}]\label{th-HK} Algorithm~\ref{algo:main} terminates and is correct if for all choices of policies $\sigma$ and $\tau$ of the two players, the corresponding transition matrix $P^{\sigma\tau}$ is irreducible. \end{theorem} Indeed, it is easy to see that the sequence $(\lambda^k)_k$ of Algorithm~\ref{algo:main} is nonincreasing, that is, $\lambda^{k+1} \leqslant \lambda^k$ for all iterations $k$. The irreducibility assumption was shown to imply that the latter inequalities are always strict, which entails finite-time convergence (each policy yields a unique, well-defined eigenvalue, these eigenvalues constitute a decreasing sequence, and there are finitely many policies). However, the assumption that {\em all} the stochastic matrices $P^{\sigma \tau}$ be irreducible is far stronger than what is needed for Algorithm~\ref{algo:main} to be well posed. Indeed, to execute the algorithm, it suffices that at every iteration $k$ the operator $T^{\sigma_k}$ admits an eigenvalue and an eigenvector, which is the case in particular if for all policies $\sigma$, the graph obtained by taking the {\em union} of the edge sets of all the graphs associated with $P^{\sigma \tau}$ for the different choices of $\tau$ is strongly connected (the existence of the eigenvalue and eigenvector, in this generality, goes back to Bather~\cite{bather}, see also \cite{GG04,AGH15} for a more general discussion). In particular, the irreducibility assumption of Hoffman and Karp is essentially never satisfied for deterministic games, whereas the condition involving the union of the edge sets is satisfied by relevant classes of deterministic games. It should be noted that Algorithm~\ref{algo:main} may, in general, lead to degenerate iterations, in which $\lambda^{k+1}=\lambda^k$. As shown by an example in~\cite[Sec.~6]{ACTDG}, this may lead the algorithm to cycle when the bias vector is not unique. This difficulty was first solved in the deterministic framework in~\cite{CTGG99}, where it was shown that cycling can be avoided by enforcing a special choice of the bias vector, obtained by a nonlinear projection operation. This approach was then extended to the stochastic framework in~\cite{CTG06,ACTDG}. 
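To make the abstract description of Algorithm~\ref{algo:main} concrete, here is a sketch of the iteration in Python, reusing the tabulated data layout of the earlier sketches; the one-player oracle of Step~\ref{it:value-search} is passed as a function, and exact arithmetic is assumed as in the text (with floating-point numbers, the comparisons below would require a tolerance). This is an illustration of ours, not a reference implementation.
\begin{verbatim}
import numpy as np

def policy_iteration(r, P, one_player_eigenpair):
    """Sketch of Algorithm 1.  `one_player_eigenpair(r_sig, P_sig)` must return
    an eigenpair (lambda, v) of the one-player operator T^sigma."""
    n = len(r)
    sigma = [next(iter(r[i])) for i in range(n)]      # arbitrary initial policy
    while True:
        # Step 1 (oracle): eigenpair of T^{sigma_k}
        lam, v = one_player_eigenpair([r[i][sigma[i]] for i in range(n)],
                                      [P[i][sigma[i]] for i in range(n)])
        # Step 2: conservative policy improvement
        Tv = np.array([min(max(r[i][a][b] + P[i][a][b] @ v for b in r[i][a])
                           for a in r[i]) for i in range(n)])
        new_sigma = list(sigma)
        for i in range(n):
            Ti_sig = max(r[i][sigma[i]][b] + P[i][sigma[i]][b] @ v
                         for b in r[i][sigma[i]])
            if Ti_sig > Tv[i]:                        # otherwise keep sigma_k(i)
                new_sigma[i] = min(r[i], key=lambda a: max(
                    r[i][a][b] + P[i][a][b] @ v for b in r[i][a]))
        if new_sigma == sigma:                        # sigma_{k+1} = sigma_k
            return lam, v
        sigma = new_sigma
\end{verbatim}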
As a special case of these results, we get that policy iteration is correct and does terminate under much milder conditions than in Theorem~\ref{th-HK}. \begin{theorem}[Corollary of~{\cite[Th.~7]{CTG06}}] \label{thm:PI-termination} Algorithm~\ref{algo:main} terminates and is correct if for each choice of policy $\sigma$ of player \mathcal{M}IN, the operator $T^{\sigma}$ has an eigenvalue and a {\em unique} eigenvector, up to an additive constant. \end{theorem} We next show that the conditions of Theorem~\ref{thm:PI-termination} are satisfied for generic payments, and conclude that nongeneric instances can still be solved by the Hoffman-Karp algorithm, after an effective perturbation of the input. \subsection{Generic termination of policy iteration} Let $T:\mathbb{R}^n \to \mathbb{R}^n$ be the Shapley operator of a finite stochastic game with perfect information. The following assumption guarantees that Algorithm~\ref{algo:main} is well posed for any additive perturbation of $T$. \begin{assumption} \label{asm:ergodicity} For any policy $\sigma \in \mathcal{S}_{\mathrm p}$ of player \mathcal{M}IN, the one-player game with Shapley operator $T^{\sigma}$ is ergodic in the sense of Definition~\ref{def-ergodicity}, meaning that for all perturbation vectors $g \in \mathbb{R}^n$, the operator $g+T^{\sigma}$ has an eigenvalue (and an eigenvector). \end{assumption} This assumption is much milder than the original assumption of Hoffman and Karp, requiring all the transition matrices $P^{\sigma \tau}$ to be irreducible (see Theorem~\ref{th-HK}). Also, since $\widehat{T} = \min_{\sigma \in \mathcal{S}_{\mathrm p}} \widehat{T^\sigma}$, it readily follows from Theorem~\ref{thm:ergodicity-condition} that Assumption~\ref{asm:ergodicity} implies that the original two-player game is ergodic. Moreover, we shall see in the next subsection that one can always transform a game (in polynomial time) by a ``big $M$'' trick in such a way that Assumption~\ref{asm:ergodicity} becomes satisfied. By using the arguments of Section~\ref{sec:generic-uniqueness}, we now show that under Assumption~\ref{asm:ergodicity}, Algorithm~\ref{algo:main} terminates for a generic perturbation of the payments. \begin{theorem} Let $T: \mathbb{R}^n \to \mathbb{R}^n$ be the Shapley operator of a finite stochastic game with perfect information satisfying Assumption~\ref{asm:ergodicity}. Then, the space $\mathbb{R}^n$ can be covered by a polyhedral complex such that for each additive perturbation vector $g \in \mathbb{R}^n$ in the interior of a full-dimensional cell, Algorithm~\ref{algo:main} terminates after a finite number of steps and gives an eigenpair of $g+T$. \label{thm:generic-termination} \end{theorem} \begin{proof} Consider the same complex $\mathcal{C}$ as in Section~\ref{sec:generic-uniqueness} and let $g$ be a perturbation vector in the interior of a full-dimensional cell of $\mathcal{C}$. It follows from the proof of Theorem~\ref{thm:generic-uniqueness} (see Subsection~\ref{sec:proof-main-thm}) that for any policy $\sigma \in \mathcal{S}_{\mathrm p}$, the eigenvector of $g+T^{\sigma}$, which exists according to Assumption~\ref{asm:ergodicity}, is unique up to an additive constant. Hence, at each step $k$ of Algorithm~\ref{algo:main}, the bias vector $v^k$ of $g+T^{\sigma_{k}}$ is unique up to an additive constant. The conclusion follows from Theorem~\ref{thm:PI-termination}. 
\end{proof} We next provide an explicit perturbation $g$, depending on a parameter $\varepsilon$, for which the policy iteration algorithm applied to $g+T$ is valid. We shall see in the next subsection that $\varepsilon$ can be instantiated with a polynomial number of bits, in such a way that the original unperturbed problem is solved. Before giving this explicit perturbation scheme, let us mention that the polyhedral complex $\mathcal{C}$ introduced in the proof of Theorem~\ref{thm:generic-uniqueness} (Subsection~\ref{sec:proof-main-thm}) can be constructed for any perfect-information finite stochastic game, whether it is ergodic or not. Recall indeed that $\mathcal{C}$ is obtained as a refinement of all the regions where the maps $g \mapsto \lambda^\sigma(g)$ with $\sigma \in \mathcal{S}_{\mathrm p}$, defined in~\eqref{eq:eigenvalue-one-player}, are affine. In particular, we do not assume ergodicity in the next two propositions. \begin{proposition} \label{prop:perturbation-scheme} Let $T: \mathbb{R}^n \to \mathbb{R}^n$ be the Shapley operator of a finite stochastic game with perfect information. Then, there exists $\varepsilon_0 > 0$ such that all perturbation vectors $g_\varepsilon := (\varepsilon, \varepsilon^2, \dots, \varepsilon^n)$ with $0 < \varepsilon < \varepsilon_0$ are in the interior of the same full-dimensional cell of the polyhedral complex $\mathcal{C}$ introduced in Subsection~\ref{sec:proof-main-thm}. \end{proposition} \begin{proof} The cells of the polyhedral complex $\mathcal{C}$ introduced in Subsection~\ref{sec:proof-main-thm} that are not full-dimensional are included in an arrangement of a finite number of hyperplanes. The real curve $\varepsilon \mapsto g_\varepsilon= (\varepsilon,\dots,\varepsilon^n)$ cannot cross a given hyperplane in this arrangement more than $n$ times (otherwise, a polynomial of degree $n$ would have strictly more than $n$ roots). We deduce that there is a value $\varepsilon_0 > 0$ such that the restriction of the curve $\varepsilon \mapsto g_\varepsilon$ to the open interval $(0,\varepsilon_0)$ crosses no hyperplane of the arrangement. Therefore, it must stay in the interior of a full-dimensional cell of the complex $\mathcal{C}$. \end{proof} The following result is a refinement of the previous one. \begin{proposition} \label{prop:optimal-policies} Let $T: \mathbb{R}^n \to \mathbb{R}^n$ be the Shapley operator of a finite stochastic game with perfect information. Then, there exist $\varepsilon_1 > 0$, policies $\sigma \in \mathcal{S}_{\mathrm p}$ and $\tau \in \mathcal{T}_{\mathrm p}$, and an invariant probability measure $m^{\sigma \tau}$ of the stochastic matrix $P^{\sigma \tau}$ such that for all $\varepsilon \in [0,\varepsilon_1]$, the upper mean payoff of $g_\varepsilon + T$ is given by $\overline{\chi}(g_\varepsilon + T) = \<m^{\sigma \tau}, (g_\varepsilon + r^{\sigma \tau})>$. \end{proposition} \begin{proof} Recall that the upper mean payoff of $g+T$ is given by \begin{gather*} \label{e-min} \overline{\chi}(g+T) = \min_{\sigma \in \mathcal{S}_{\mathrm p}} \lambda^\sigma(g) \\ \label{e-max} \text{with} \qquad \lambda^\sigma(g) = \max \{ \< m, (g+r^{\sigma \tau}) > \mid \tau \in \mathcal{T}_{\mathrm p}, \; m \in \mathcal{M}^*(P^{\sigma \tau}) \} \enspace . 
\end{gather*} By construction of the polyhedral complex $\mathcal{C}$ in Subsection~\ref{sec:proof-main-thm}, in the interior of a full-dimensional cell, each piecewise affine map $g \mapsto \lambda^\sigma(g)$ coincides with a unique affine map, but $g \mapsto \overline{\chi}(g+T)$ need not be affine. Hence, we can refine the complex $\mathcal{C}$ into a complex $\mathcal{C}'$ such that the latter piecewise affine map also coincides with a unique affine map on each cell of full dimension. The same proof as Proposition~\ref{prop:perturbation-scheme} leads to the existence of a parameter $\varepsilon_1$ such that all the perturbations $g_\varepsilon$ with $\varepsilon \in (0,\varepsilon_1)$ lie in the interior of the same full-dimensional cell of $\mathcal{C}'$. Let $Q$ be this cell. By construction of the latter complex, there exists a policy $\sigma \in \mathcal{S}_{\mathrm p}$ such that $\overline{\chi}(g+T) = \lambda^\sigma(g)$ for all $g \in Q$, and for that policy $\sigma$ there exists a policy $\tau \in \mathcal{T}_{\mathrm p}$ and $m^{\sigma \tau} \in \mathcal{M}^*(P^{\sigma \tau})$ such that $\lambda^\sigma(g) = \<m^{\sigma \tau},(g+r^{\sigma \tau})>$ for all $g \in Q$. \end{proof} It follows that solving the game with Shapley operator $g_\varepsilon +T$ for $\varepsilon$ small enough entails a solution of the original game. \begin{proposition} If $T$ satisfies Assumption~\ref{asm:ergodicity}, then there exists $\varepsilon_1 > 0$ (same as in Proposition~\ref{prop:optimal-policies}) such that Algorithm~\ref{algo:main} terminates for any input $g_\varepsilon + T$ with $\varepsilon \in (0,\varepsilon_1)$. Furthermore, for any policy $\sigma$ satisfying $\lambda(g+T) = \lambda(g+T^\sigma)$, we also have $\lambda(T) = \lambda(T^\sigma)$. \end{proposition} \begin{proof} First note that here, since the game is ergodic, we have $\overline{\chi}(g+T) = \lambda(g+T)$ for all perturbations $g \in \mathbb{R}^n$. Following Proposition~\ref{prop:optimal-policies}, the parameter $\varepsilon_1$ is such that all perturbations $g_\varepsilon$ with $0 < \varepsilon < \varepsilon_1$ lie in the interior of the same full-dimensional cell of the complex $\mathcal{C}'$ (which is a refinement of $\mathcal{C}$, see the proof of Proposition~\ref{prop:optimal-policies}). Denote by $Q$ this cell. The termination of Algorithm~\ref{algo:main} is then a straightforward consequence of Theorem~\ref{thm:generic-termination}. By definition of the polyhedral complex $\mathcal{C}'$, the piecewise affine map $g \mapsto \lambda(g+T)$ is affine when restricted to $Q$. Furthermore, for any policy $\sigma \in \mathcal{S}_{\mathrm p}$, we have either $\lambda(g+T) = \lambda^\sigma(g) = \lambda(g+T^\sigma)$ for all $g$ in $Q$, or $\lambda(g+T) < \lambda^\sigma(g) = \lambda(g+T^\sigma)$ for all $g$ in the interior of $Q$. Hence the result. \end{proof} \subsection{Complexity issues} In this subsection, we show that computing the upper mean payoff of a Shapley operator (a fortiori the eigenvalue if it exists) is polynomial-time reducible to the computation of the eigenvalue of a Shapley operator for which Algorithm~\ref{algo:main} terminates. This fact is a direct consequence of Theorem~\ref{thm:polynomial-reduction} below. To do so, we shall need explicit bounds on the perturbation parameter $\varepsilon$. We first explain how the general case can be reduced to the situation in which Assumption~\ref{asm:ergodicity} holds. 
To that purpose, let us introduce, for any real number $M \geqslant 0$, the map $R_M: \mathbb{R}^n \to \mathbb{R}^n$ whose $i$th coordinate is given by \[ [R_M (x)]_i = \max \Big\{ x_i, \max_{1 \leqslant j \leqslant n} (-M + x_j) \Big\} \enspace , \quad x \in \mathbb{R}^n \enspace . \] It is convenient to introduce {\em Hilbert's seminorm} on $\mathbb{R}^n$, defined by \[ \Hnorm{x} = \max_{1 \leqslant i \leqslant n} x_i - \min_{1 \leqslant i \leqslant n} x_i \enspace . \] Observe that $R_M$ is a projection on the set $\{ x \in \mathbb{R}^n \mid \Hnorm{x} \leqslant M \}$, meaning that $R^2_M = R_M$ and \[ R_M(x) = x \iff \Hnorm{x} \leqslant M \enspace . \] \begin{lemma} \label{lem:game-reduction} Let $T:\mathbb{R}^n \to \mathbb{R}^n$ be the Shapley operator of a perfect-information finite stochastic game, and let $M \geqslant 0$. Then, $T \circ R_M$ has an eigenvalue. \end{lemma} \begin{proof} First, note that the recession operator $\widehat{R}_M$ of $R_M$ is given by \[ \widehat{R}_M (x) = (\max x) \, e \enspace , \quad x \in \mathbb{R}^n \enspace , \] where $\max x := \max_{1 \leqslant i \leqslant n} x_i$. Second, it has been noted in Subsection~\ref{sec:nonlinear-spectral-theory} that the limit~\eqref{eq:recession-operator} defining $\widehat{T}$ is uniform in $x$. Hence, we get that $\widehat{T \circ R}_M = \widehat{T} \circ \widehat{R}_M$. Thus, using the properties of recession operators, we have, for any vector $x \in \mathbb{R}^n$, \[ \widehat{T \circ R}_M (x) = \widehat{T} \circ \widehat{R}_M (x) = \widehat{T} \big( (\max x) \, e \big) = (\max x) \, e \enspace . \] This proves that the only fixed points of $\widehat{T \circ R}_M$ are trivial fixed points. The conclusion follows from Theorem~\ref{thm:ergodicity-condition}. \end{proof} Given a perfect-information finite stochastic game $\Gamma$ with Shapley operator $T$, the operator $T \circ R_M$ can be interpreted as the Shapley operator of another perfect-information finite stochastic game with state space $S$. In this game, at each step, if the current state is $i \in S$, player MIN starts by choosing an action $a \in A_i$. Then, player MAX chooses an action $b \in B_{i,a}$, which gives rise to a transition payment $r_i^{a b}$; a state $j$ is then chosen by nature with probability $[P_i^{a b}]_j$ and announced to the players. Finally, player MAX chooses the next state to be either $j$ with no additional payment, or any other state $k$ with an additional payment of $-M$. In other words, player MAX has the option of teleporting himself to any other state, by accepting a penalty $M$. Note that, since player MIN has the same action space in the latter game as in the game $\Gamma$, the sets of her stationary strategies in both games are identical. Then, for a fixed policy $\sigma$ of player MIN, the one-player Shapley operator $(T \circ R_M)^\sigma$ is equal to $T^\sigma \circ R_M$, and we get from Lemma~\ref{lem:game-reduction} the following result. \begin{corollary} \label{coro:ergodicity-assumption} Let $T:\mathbb{R}^n \to \mathbb{R}^n$ be the Shapley operator of a perfect-information finite stochastic game. Then, $T \circ R_M$ satisfies Assumption~\ref{asm:ergodicity}. \end{corollary} In the modified game with Shapley operator $T \circ R_M$, player MAX makes, at each step, the final decision about the next state, at an additional cost of $M$.
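As a quick sanity check, the definitions of $R_M$ and of Hilbert's seminorm translate directly into a few lines of Python; the snippet below is only an illustration of the projection property stated above, on a small hand-picked vector.
\begin{verbatim}
def hilbert_seminorm(x):
    # ||x||_H = max_i x_i - min_i x_i
    return max(x) - min(x)

def R_M(x, M):
    # i-th coordinate: max(x_i, max_j x_j - M)
    top = max(x) - M
    return [max(xi, top) for xi in x]

x = [0, 3, 10]
assert R_M(R_M(x, 4), 4) == R_M(x, 4)          # R_M is idempotent
assert hilbert_seminorm(R_M(x, 4)) <= 4        # its image has seminorm at most M
assert R_M([1, 2, 3], 4) == [1, 2, 3]          # fixed point iff ||x||_H <= M
\end{verbatim}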
The following result shows that if this cost is large enough, then player MAX cannot do better, in the long run, than in the game $\Gamma$. \begin{lemma} \label{lem:homogeneization} Let $T:\mathbb{R}^n \to \mathbb{R}^n$ be the Shapley operator of a perfect-information finite stochastic game. Then, there exists a positive constant $M_0$ such that for any $M > M_0$, the eigenvalue of $T \circ R_M$ is equal to the upper mean payoff $\overline{\chi}(T)$. \end{lemma} \begin{proof} First, note that $R_M(x) \geqslant x$ for all $x \in \mathbb{R}^n$. Hence, by monotonicity of $T$, we deduce that $T \circ R_M \geqslant T$, which yields $\chi(T \circ R_M) \geqslant \chi(T)$. Since $T \circ R_M$ has an eigenvalue, denoted by $\lambda(T \circ R_M)$, we deduce that $\lambda(T \circ R_M) \geqslant \overline{\chi}(T)$. Second, we know that $T$ has an invariant half-line with direction $\chi(T)$. So there exists a vector $u \in \mathbb{R}^n$ such that $T(u) = u + \chi(T)$. Now let $M_0 := \Hnorm{u}$. For every $M > M_0$, we have \[ T \circ R_M (u) = T(u) = u + \chi(T) \leqslant u + \overline{\chi}(T) e \enspace . \] By application of a Collatz-Wielandt formula (see~\cite{GG04}), we know that the eigenvalue of $T \circ R_M$ is given by \[ \lambda(T \circ R_M) = \inf \{ \mu \in \mathbb{R} \mid \exists u \in \mathbb{R}^n, \; T \circ R_M(u) \leqslant u + \mu e \} \enspace . \] Hence $\lambda(T \circ R_M) \leqslant \overline{\chi}(T)$. \end{proof} We shall need a technical bound on invariant probability measures of stochastic matrices arising from strategies. We state it here for an arbitrary irreducible stochastic matrix. \begin{lemma} \label{lem-matrix} Let $P$ be an $n\times n$ irreducible stochastic matrix whose entries are rational numbers with numerators and denominators bounded by an integer $D$. Then, the entries of the invariant probability measure of $P$ are rational numbers whose least common denominator is bounded by \begin{equation*} \label{eq:bound-denominators} n^{n/2} D^{n^2} \enspace . \end{equation*} \end{lemma} \begin{proof} The invariant probability measure $m$ of $P$ is the unique solution of the linear system \begin{equation} \label{eq:invariant-measure} \begin{cases} (I - \transpose{P}) \, m = 0\\ \transpose{e} \, m = 1 \end{cases} \enspace , \end{equation} where $I$ is the identity matrix. Note that one row of the subsystem $(I - \transpose{P}) \, m = 0$ is redundant since we are dealing with stochastic vectors and $\transpose{e} \, m = 1$. Then, by deleting this row, and by multiplying every row of the latter subsystem by all the denominators of the coefficients appearing in this row, we arrive at a Cramer linear system with integer coefficients of absolute value less than $D^n$, and with unit coefficients on the last row. Solving this system by Cramer's rule, we obtain that the entries of $m$ are rational numbers whose denominators divide the determinant of the system. Using Hadamard's inequality for determinants, we deduce that these denominators are bounded by \begin{flalign*} && \big( (n-1) (D^n)^2 + 1 \big)^{n/2} \leqslant n^{n/2} D^{n^2} \enspace . && \qed \end{flalign*} \renewcommand{\qedsymbol}{} \end{proof} Let $T: \mathbb{R}^n \to \mathbb{R}^n$ be the Shapley operator of a perfect-information finite stochastic game $\Gamma$ with state space $\{1,\dots,n\}$. We just showed that the upper mean payoff of $T$ can be recovered from the eigenvalue of the operator $T \circ R_M$ for a suitably large $M$.
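In view of the proof of Lemma~\ref{lem-matrix}, the system~\eqref{eq:invariant-measure} can also be solved explicitly in exact arithmetic. The following Python sketch, which uses the standard \texttt{fractions} module and a naive Gauss-Jordan elimination (chosen only for illustration, not for efficiency), computes the invariant probability measure of an irreducible stochastic matrix with rational entries.
\begin{verbatim}
from fractions import Fraction

def invariant_measure(P):
    # Solve (I - P^T) m = 0, e^T m = 1 exactly; P is irreducible and stochastic.
    n = len(P)
    P = [[Fraction(x) for x in row] for row in P]
    # Square system: the first n-1 rows of (I - P^T), then the normalization row.
    A = [[Fraction(int(i == j)) - P[j][i] for j in range(n)] for i in range(n - 1)]
    A.append([Fraction(1)] * n)
    b = [Fraction(0)] * (n - 1) + [Fraction(1)]
    for col in range(n):                      # Gauss-Jordan elimination
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                A[r] = [arj - f * acj for arj, acj in zip(A[r], A[col])]
                b[r] -= f * b[col]
    return [b[i] / A[i][i] for i in range(n)]

# Example: a two-state chain; the result is [Fraction(2, 5), Fraction(3, 5)].
print(invariant_measure([[Fraction(1, 2), Fraction(1, 2)],
                         [Fraction(1, 3), Fraction(2, 3)]]))
\end{verbatim}
The denominators appearing in the output are the quantities bounded in Lemma~\ref{lem-matrix}. We now return to the operator $T \circ R_M$.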
The latter operator satisfies Assumption~\ref{asm:ergodicity}, and so, we can in principle apply Algorithm~\ref{algo:main} to it. However, to do so in a way which leads to a polynomial-time transformation of the input, it is convenient to introduce the following modified Shapley operator $T_M: \mathbb{R}^{2n} \to \mathbb{R}^ {2n}$ given, for all $(x,y) \in \mathbb{R}^n \times \mathbb{R}^n$, by \[ T_M(x,y) := \left( T(y),R_M(x) \right) \enspace . \] Note that we have \begin{equation*} (T_M)^2(x,y) = \begin{pmatrix} T \circ R_M(x) \\ R_M \circ T (y) \end{pmatrix} \enspace . \label{eq:T^2_M} \end{equation*} The following immediate lemma shows that one can recover the bias vectors and the eigenvalue of $T\circ R_M$ from those of $T_M$. \begin{lemma}\label{lemma-transform} If $v$ is a bias vector of the operator $T\circ R_M$ with eigenvalue $\lambda$, then $(v,R_M(v)-(\lambda/2) e )$ is a bias vector of the operator $T_M$ with eigenvalue $\lambda/2$, and all bias vectors of $T_M$ arise in this way. \qed \end{lemma} The operator $T_M$ is the dynamic programming operator of a game, denoted by $\mathcal{G}amma_M$, with state space $\{1,\dots,2n\}$. In each state $i \in \{1,\dots,n\}$, the actions, the payments and the transition function are the same as in $\mathcal{G}amma$, except that the next state is labeled by an element of $\{n+1,\dots,2n\}$ instead of $\{1,\ldots,n\}$. Moreover, in each state $i \in \{n+1,\dots,2n\}$, player \mathcal{M}IN has only one possible action, while player \mathcal{M}AX chooses the next state $j$ among $\{1,\dots,n\}$ with a cost $M$ if $i-j \neq n$. In particular, the policies of player \mathcal{M}IN in the two games $\mathcal{G}amma$ and $\mathcal{G}amma_M$ are in one-to-one correspondence, and to simplify the presentation, we shall use the same notation for these policies. Hence, we shall write $(T_M)^\sigma(x,y) = (T^\sigma(y),R_M(x))$ for such a policy $\sigma$. We saw in Lemma~\ref{lem:game-reduction} that the operator $T\circ R_M$ has an eigenvalue. The same is true for the operator $T_M$ by Lemma~\ref{lemma-transform}. Moreover, the same conclusion applies to the operator $(T_M)^\sigma$ for any policy $\sigma$, so that $T_M$ satisfies Assumption~\ref{asm:ergodicity}. We know that for $\varepsilon>0$ small enough, the perturbed operator $g_\varepsilon +T\circ R_M$ has a unique bias vector, up to an additive constant, where $g_\varepsilon = (\varepsilon,\dots,\varepsilon^n)$. This leads to considering, for $\varepsilon > 0$, \[ T_{M,\varepsilon} := (g_\varepsilon,0) + T_M \enspace . \] \begin{theorem} \label{thm:polynomial-reduction} Let $\mathcal{G}amma$ be a perfect-information finite stochastic game whose transition payments and probabilities are rational numbers with numerators and denominators bounded by an integer $D\geqslant 2$. Let $T:\mathbb{R}^n \to \mathbb{R}^n$ be the Shapley operator of $\mathcal{G}amma$. If \[ M > 4 n^{n/2} D^{n^2+1} \quad \text{and} \quad 0 < \varepsilon < \frac{1}{ n^n D^{2 n (n+1)} } \enspace , \] then the upper mean payoff of $T$ can be recovered from $T_{M,\varepsilon} = (g_\varepsilon,0) + T_M$, in the sense that for any policy $\sigma$ of player \mathcal{M}IN such that $\lambda(T_{M,\varepsilon}) = \lambda((T_{M,\varepsilon})^\sigma)$, we have $\overline{\chi}(T) = \overline{\chi}(T^\sigma)$. Furthermore, such a policy can be obtained by applying Algorithm~\ref{algo:main} with the input $T_{M,\varepsilon}$. \end{theorem} \begin{proof} Let $g \in [0,1]^n$, and fix a policy $\sigma$ of player \mathcal{M}IN. 
In the game $\mathcal{G}amma_M$, consider a policy of player \mathcal{M}AX such that, when in state $i \in \{1,\dots,n\}$, he chooses action $b_i \in B_{i,\sigma(i)}$, and when in state $i \in \{n+1,\dots,2n\}$, he chooses the next state to be $j(i) \in \{1,\dots,n\}$. Then, the transition matrix associated with that choice of policy is the following $2n \times 2n$ block matrix: \begin{equation} \label{eq:transition-marix} \begin{pmatrix} 0 & P^{\sigma \tau} \\ Q & 0 \end{pmatrix} \enspace , \end{equation} where $\tau \in \mathcal{T}_{\mathrm p}$ is a policy of player \mathcal{M}AX in the game $\mathcal{G}amma$ such that $\tau(i,\sigma(i)) = b_i$ for each state $i$, and where $Q$ is the $n \times n$ stochastic matrix whose coefficients are $Q_{i j} = 1$ for $j=j(i+n)$ and $0$ otherwise. Let $(m,m') \in \mathbb{R}^n \times \mathbb{R}^n$ be an invariant probability measure of the stochastic matrix~\eqref{eq:transition-marix}. The vectors $m$ and $m'$ satisfy in particular \begin{equation} \label{eq:invariant-measures} \transpose{m}\, P^{\sigma \tau} = \transpose{m'}\enspace , \quad \transpose{m'} Q = \transpose{m} \enspace , \quad \<m,e> + \<m',e> = 1 \enspace . \end{equation} We are interested in the eigenvalue of the perturbed one-player Shapley operator $(g,0) + (T_M)^\sigma$. Hence, following formula~\eqref{eq:eigenvalue-convex}, we consider the quantity \[ \gamma := \<m,(g+r^{\sigma \tau})> - M \sum_{\substack{1 \leqslant i \leqslant n\\ Q_{i i} = 0}} m'_i \enspace . \] If for every index $i$ in the support of $m'$, we have $Q_{i i} = 1$, then we deduce from the second equality in~\eqref{eq:invariant-measures} that $m' = m$. This yields that $2m$ is an invariant probability measure of $P^{\sigma \tau}$, and that \[ \gamma = \<m,(g+r^{\sigma \tau})> \leqslant \frac{1}{2} \, \overline{\chi}(g+T^\sigma) \enspace . \] Note that the equality is attained in the above inequality for some policy $\tau$ and some invariant probability measure $m$. If there is an index $i$ in the support of $m'$ such that $Q_{i i} = 0$, then we have \[ \gamma \leqslant \<m,(g+r^{\sigma \tau})> - M \, m'_i \leqslant 1 + \max_{i,a,b} r_i^{a b} - K_1 \, M \enspace , \] where $K_1$ is a positive constant such that $K_1 \leqslant m'_i$. Note that $K_1$ can be chosen independently of $M$ and of the particular choices of the policies. Then, taking $M > M_0 := (K_1)^{-1} (1 + (3/2) \|r\|_\infty)$, we obtain that \[ \gamma < \frac{1}{2} \, \min_{i,a,b} r_i^{a b} \leqslant \frac{1}{2} \, \overline{\chi}(g+T^\sigma) \enspace . \] Thus, we have proved that, for all policies $\sigma$ of player \mathcal{M}IN and for all $g \in [0,1]^n$, we have \[ \lambda \big( (g,0)+(T_M)^\sigma \big) = \frac{1}{2} \, \overline{\chi}(g+T^\sigma) \enspace , \] as soon as $M > M_0$. In particular, the choice of the parameter $\varepsilon$ such that $T_{M,\varepsilon}$ is a generic instance only relies on $T$ (Proposition~\ref{prop:optimal-policies}). We now fix some $M > M_0$. Consider, for policies $\sigma, \sigma'$ of player \mathcal{M}IN and $\tau, \tau'$ of player \mathcal{M}AX, two distinct pairs $(m,d) \neq (m',d')$, where $m \in \mathcal{M}^*(P^{\sigma \tau})$, $m' \in \mathcal{M}^*(P^{\sigma' \tau'})$, $d := \<m, r^{\sigma \tau}>$ and $d' := \<m', r^{\sigma' \tau'}>$. We need to compare the affine maps $g \mapsto \<m,g>+d$ and $g \mapsto \<m',g>+d'$ along the curve $\varepsilon \mapsto g_\varepsilon$ with $\varepsilon \in (0,1)$. Assume first that $d = d'$. 
Then $m \neq m'$ and we can select the smallest index $i$ such that $m_i \neq m'_i$. Note that since $m$ and $m'$ are stochastic vectors, we necessarily have $i < n$ and we also have the existence of another index $j$ such that $i < j \leqslant n$ and $m_j \neq m'_j$. Without loss of generality, we may assume that $m_i - m'_i > 0$. Let $K_2 \in \mathbb{R}$ be such that $0 < K_2 < m_i - m'_i$. Then, for any positive parameter $\varepsilon < K_2 \, n^{-1}$, we have \begin{multline*} (\<m,g_\varepsilon> + d) - ( \<m',g_\varepsilon> + d' ) = (m_i - m'_i) \varepsilon^i + \sum_{i < j \leqslant n} (m_j-m'_j) \varepsilon^j \\ > K_2 \, \varepsilon^i - n \varepsilon^{i+1} = \varepsilon^i \, (K_2 - n \varepsilon) > 0 \enspace . \end{multline*} Assume now that $d \neq d'$, say $d > d'$, and let $K_3 \in \mathbb{R}$ be such that $0 < K_3 < d - d'$. Then, for any positive parameter $\varepsilon < K_3$, we have \[ (\<m,g_\varepsilon> + d) - ( \<m',g_\varepsilon> + d' ) > \varepsilon^n - \varepsilon + K_3 > 0 \enspace . \] Note that we can choose the positive constants $K_2$ and $K_3$ independently of $\sigma$, $\sigma'$, $\tau$, $\tau'$, $m$ and $m'$. Hence, the above arguments show that the set of polynomial functions $\varepsilon \mapsto \<m,(g_\varepsilon+r^{\sigma \tau})>$ with $\sigma \in \mathcal{S}_{\mathrm p}$, $\tau \in \mathcal{T}_{\mathrm p}$, and $m \in \mathcal{M}^*(P^{\sigma \tau})$, is totally ordered if $\varepsilon$ is restricted to the interval $(0 , \min \{K_2 n^{-1},K_3\} )$. Thus, the parameter $\varepsilon_1$ of Proposition~\ref{prop:optimal-policies} may be taken equal to $\min \{ K_2 n^{-1},K_3 \}$. To complete the proof, we next explain how to instantiate the constants $K_1$ to $K_3$. Let us start with $K_2$. It is a lower bound on the absolute values of the differences between two distinct entries (with same index) of invariant probability measures associated with the transition matrices of $\mathcal{G}amma$. These differences are of the form $|p_1/q_1-p_2/q_2| \geqslant 1/(q_1q_2)$, where $p_1,p_2,q_1,q_2$ are integers. By Lemma~\ref{lem-matrix}, we know that $q_1,q_2 \leqslant n^{n/2}D^{n^2}$, and so one can choose \[ K_2 = \frac{1}{ n^n D^{2 n^2}} \enspace . \] Likewise, $K_3$ is a lower bound on the absolute values of the differences between two distinct scalar products $\<m,r^{\sigma \tau}>$. Let $m \in \mathcal{M}(P^{\sigma \tau})$. It follows from Lemma~\ref{lem-matrix} that the $i$th entry of $m$ can be written as $m_i = p_i / q$ where $p_i$ is an integer and $q \leqslant n^{n/2} D^{n^2}$ is an integer independent of $i$. Since every entry of $r^{\sigma \tau}$ has a denominator at most $D$, it follows that $\<m,r^{\sigma \tau}>$ is a rational number with denominator at most $n^{n/2} D^{n^2} D^n$. Therefore, the difference between two distinct values of $\<m,r^{\sigma \tau}>$ is at least $K_3=(n^{n} D^{2n^2} D^{2n})^{-1}$, and so \[ \min \{ K_2 n^{-1},K_3 \} = \frac{1}{n^n D^{2 n^2}} \min \left\{ \frac{1}{n}, \frac{1}{D^{2n}} \right\} = \frac{1}{n^nD^{2n(n+1)}}\enspace . \] Finally the constant $K_1$ is a lower bound for the positive entries of the restrictions $m'$ of the invariant probability measures $(m,m')$ of the transition matrices~\eqref{eq:transition-marix} arising in the game $\mathcal{G}amma_M$. A direct application of Lemma~\ref{lem-matrix} provides the following coarse bound: \[ K_1 = \frac{1}{(2n)^n D^{4 n^2}} \enspace . \] This bound can be improved by noting that the matrices~\eqref{eq:transition-marix} have a particular structure. 
Indeed, if $(m,m')$ is an invariant probability measure of~\eqref{eq:transition-marix}, then it satisfies~\eqref{eq:invariant-measures}, and so $2m'$ is an invariant probability measure of the matrix $QP^{\sigma\tau}$, the entries of which are entries of $P^{\sigma\tau}$ (since $Q$ has only one nonzero entry in each row and this entry is equal to $1$). Applying Lemma~\ref{lem-matrix} to the matrix $QP^{\sigma\tau}$, we obtain the following lower bound for the positive entries of $m'$: \[ K_1 = \frac{1}{2 n^{n/2} D^{n^2}} \enspace . \] Hence, \begin{flalign*} && M_0 \leqslant (1 + (3/2)D) \; 2 n^{n/2} D^{n^2}\leqslant 4 n^{n/2} D^{n^2+1} \enspace . && \qed \end{flalign*} \renewcommand{\qedsymbol}{} \end{proof} An important special case to which the method of Theorem~\ref{thm:polynomial-reduction} can be applied concerns {\em deterministic} mean-payoff games~\cite{gurvich,zwick}. The input of such games can be described, as in~\cite{AGGut10}, by means of two matrices $A, B \in (\mathbb{Z} \cup \{-\infty\})^{m \times n}$. The corresponding Shapley operator can be written as \begin{align} \label{e-det} T_i(x) = \min_{1 \leqslant j \leqslant m} \big( -A_{ji} + \max_{1 \leqslant k \leqslant n} (B_{jk} + x_k) \big) \enspace , \quad x \in \mathbb{R}^n \enspace , \quad 1 \leqslant i \leqslant n \enspace . \end{align} The corresponding game is played by moving a token on a graph in which $n$ nodes, denoted by $1,\dots,n$, belong to player \mathcal{M}IN, whereas $m$ other nodes, denoted by $1',\dots,m'$, belong to player \mathcal{M}AX. In state $i \in \{1,\dots,n\}$, player \mathcal{M}IN can move the token to a state $j \in \{1',\dots,m'\}$ such that $A_{ji}\neq -\infty$, receiving $A_{ji}$. In state $j$, player \mathcal{M}AX can move the token to a state $k \in \{1,\dots,n\}$ such that $B_{jk} \neq -\infty$, receiving $B_{jk}$. We assume that the matrix $B$ has no identically infinite row, and that the matrix $A$ has no identically infinite column, meaning that each player has at least one available action in each state. Then, the modified operator $T \circ R_M$ corresponds to the matrix $B_M$ in which infinite entries of $B$ are replaced by $-M$, and the operator $g + T \circ R_M$ arises by subtracting the constant $g_i$ to every entry in the $i$th column of $A$. \begin{theorem} \label{theo-det} Let $T$ denote the Shapley operator~\eqref{e-det} of a deterministic mean-payoff game, with integer payoffs bounded in absolute value by $D \geqslant 2$. Then, for \[ M > 4 n D \quad \text{and} \quad 0 < \varepsilon < 1 / n^3 \enspace , \] the policy iteration Algorithm~\ref{algo:main} applied to the operator $g_\varepsilon + T \circ R_M$ terminates, and we can compute the upper mean payoff of $T$ from any policy $\sigma$ of player \mathcal{M}IN such that $\lambda(g_\varepsilon + T \circ R_M) = \lambda(g_\varepsilon + T^\sigma \circ R_M)$, in the sense that $\overline{\chi}(T) = \overline{\chi}(T^\sigma)$. \end{theorem} \begin{proof} We adapt the proof of Theorem~\ref{thm:polynomial-reduction} to the case of deterministic transition matrices. In that special case, every invariant probability measure is uniform on its support, hence its positive entries are bounded below by $1/n$ if the state space has cardinality $n$. Since the constant $K_1$ is a lower bound for the positive entries of the invariant probability measures of the transition matrices in $\mathcal{G}amma_M$, one can choose $K_1 = 1/(2n)$, and then \[ M_0 \leqslant (2n) (1 + (3/2) D) \leqslant 4 n D \enspace . 
\] The constant $K_2$, which is a lower bound on the absolute values of the differences between two distinct entries of invariant probability measures of transition matrices in $\mathcal{G}amma$, can be chosen as $K_2 = 1 / n^2$. As for $K_3$, it is a lower bound on the absolute values of the differences between two distinct values of $\<m,r^{\sigma \tau}>$. Since the payments are integers, every scalar product $\<m,r^{\sigma \tau}>$ is a rational number whose denominator divides the denominator of the positive entries of $m$ which are themselves bounded by $n$. Hence, one can choose $K_3 =1 / n^2$, and the parameter $\varepsilon$ must be lower than \begin{flalign*} && \min \{ K_2 n^{-1}, K_3 \} = 1 / n^3 \enspace . && \qed \end{flalign*} \renewcommand{\qedsymbol}{} \end{proof} \begin{remark} One step in Algorithm~\ref{algo:main} consists in computing an eigenpair $(\lambda^k,v^k)$ of the reduced Shapley operator $T^{\sigma_k}$ obtained by fixing the strategy $\sigma_k$ of player \mathcal{M}IN. This is a simpler problem which can be solved by several known methods. We may apply, for instance, a similar policy iteration algorithm to $T^{\sigma_k}$, iterating this time in the space of policies $\tau$ of player \mathcal{M}AX. In this way, for each choice of $\tau$, we arrive at an operator of the form $T^{\sigma_k,\tau} (x) = g + P x$, where $P$ is a stochastic matrix which cannot in general be assumed to be irreducible. However, for one-player problems, a classical version of policy iteration, the multichain policy iteration introduced by Howard~\cite{How60} and Denardo and Fox~\cite{DF68}, does allow one to determine $(\lambda^k,v^k)$ (without genericity conditions). Moreover, in the special case of deterministic games, the vector $v^k$ is known to be a tropical eigenvector and $\lambda^k$ is a tropical eigenvalue. The tropical eigenpair can be computed by direct combinatorial algorithms, see e.g.\ the discussion in~\cite{CTGG99}. \end{remark} \begin{remark} Theorem~\ref{theo-det} should be compared with the other known perturbation scheme, relying on vanishing discount. This method requires the computation of a fixed point of the operator $x \mapsto T(\alpha x)$ for $0 < \alpha < 1$ sufficiently close to one. It is known that, for deterministic mean-payoff games, if the discount factor $\alpha$ is chosen so that \[ \alpha > 1 - \frac{1}{4(n+m)^3 D} \enspace , \] where $D$ denotes the maximal absolute value of a finite entry $A_{ij}$ or $B_{ij}$, then, the solution of the mean-payoff problem can be derived from the solution of the discounted problem, see~\cite[Sec.~5]{zwick}. The latter can be obtained by policy iteration (which terminates without any nondegeneracy conditions in the discounted case). Applying Algorithm~\ref{algo:main} to the map $g_\varepsilon + T$ requires solving linear systems in which the matrix is independent of $\varepsilon$. If this is done by calling the Denardo-Fox algorithm to solve one-player problem (see previous remark), these systems are well conditioned. By comparison, vanishing discount requires the inversion of a matrix which becomes singular as $\alpha\to 1$. In particular, if policy iteration is interpreted in floating-point arithmetics, vanishing discount based perturbations may lead to numerical instabilities or to overflows, whereas the present additive perturbation scheme is insensitive to this pathology, because it only perturbs the right-hand sides of the linear systems to be solved. 
\end{remark} \begin{remark} The present approach allows one to compute the {\em upper mean payoff}, i.e., the maximum of the mean payoff over all initial states. This leads to no loss of expressivity since it follows from known reductions that this problem is polynomial time equivalent to solving a mean-payoff game in which the initial state is fixed: combine the results of Appendix C in the extended version of~\cite{AGS16} (especially Lemma C.2 and Corollary C.3) with the reductions in~\cite{AM09}. An alternative route to compute the mean payoff of a given initial state, avoiding the use of such reductions, would be to extend the present perturbation scheme to the ``multichain'' version of policy iteration, discussed in~\cite{CTG06,ACTDG}. \end{remark} \end{document}
\begin{document} \title{The Online Replacement Path Problem} \author{David Adjiashvili\inst{1} \and Marco Senatore\inst{2}} \institute{ Institute for Operations Research (IFOR) \\ Eidgen\"ossische Technische Hochschule (ETH) Z\"urich \\ R\"amistrasse 101, 8092 Z\"urich, Switzerland \\ \email{[email protected]} \and Dipartimento di Informatica, Sistemi e Produzione \\ Universit\`a di Roma “Tor Vergata“ \\ Via del Politecnico 1, 00133 Rome, Italy \\ \email{[email protected]} } \maketitle \begin{abstract} We study a natural online variant of the replacement path problem. The \textit{replacement path problem} asks to find for a given graph $G = (V,E)$, two designated vertices $s,t\in V$ and a shortest $s$-$t$ path $P$ in $G$, a \textit{replacement path} $P_e$ for every edge $e$ on the path $P$. The replacement path $P_e$ is simply a shortest $s$-$t$ path in the graph, which avoids the \textit{failed} edge $e$. We adapt this problem to deal with the natural scenario, that the edge which failed is not known at the time of solution implementation. Instead, our problem assumes that the identity of the failed edge only becomes available when the routing mechanism tries to cross the edge. This situation is motivated by applications in distributed networks, where information about recent changes in the network is only stored locally, and fault-tolerant optimization, where an adversary tries to delay the discovery of the materialized scenario as much as possible. Consequently, we define the \textit{online replacement path problem}, which asks to find a nominal $s$-$t$ path $Q$ and detours $Q_e$ for every edge on the path $Q$, such that the worst-case arrival time at the destination is minimized. Our main contribution is a label setting algorithm, which solves the problem in undirected graphs in time $O(m \log n)$ and linear space for all sources and a single destination. We also present algorithms for extensions of the model to any bounded number of failed edges. \end{abstract} \section{Introduction}\label{sec:inro} Modeling the effects of limited reliability of networks in modern routing schemes is important from the point of view of most applications. It is often unrealistic to assume that the nominal network known at the stage of decision making will be available in its entirety at the stage of solution implementation. Several research directions have emerged as a result. The main paradigm in most works is to obtain a certain 'fault-tolerant' or 'redundant' solution, which takes into account a certain set of likely network realizations at the implementation phase. One important example is the \textit{replacement path problem} (RP) \cite{RepPathFirst,RepPathUndir}. The input in RP is a nominal network given as a graph $G=(V,E)$, a source $s$, a destination $t$ and one shortest $s$-$t$ path $P$. The goal is to find for every edge $e$ on the path $P$, a shortest path in $G$, which does not use the edge $e$. RP attempts to model the situation in which any link in the network may fail before the routing process starts. It is hence desirable to compute in advance the shortest replacement paths, for the case of a failure of any one of the edges in the nominal path $P$. In the event of a failure the routing mechanism simply chooses the corresponding pre-computed path. The applicability of RP is, however, limited to those situations, in which it is possible to know the identity of the failed link before the routing process starts. 
This assumption is not realistic in many important applications, in which faults in the network occur 'online', or the information about them is stored in a distributed fashion. The latter situation is commonplace, for example, in transportation networks (e.g. accidents in road networks). Furthermore, it is a common feature of very large networks, such as the Internet. This paper studies the \textit{online replacement path problem} (ORP), which captures this online failure setting. The most notable difference between RP and ORP is that in ORP we assume that the routing mechanism is informed about the failed link at the moment it tries to use it. Another important difference is related to the nominal path $P$. In RP a certain nominal shortest path is provided in the input. This is no longer the case in ORP, since the detour taken in the event of a failure does not always start from the source vertex $s$. This means that in ORP we simultaneously optimize both the nominal path and the optimal detours, taking into account a certain global objective function. An informal formulation of ORP is as follows. We would like to route a certain package through a network from a given source to a given destination as quickly as possible. We are aware of the existence of a failed link in the network, but we do not know its location. It is possible to observe that a certain link has failed by \textit{probing} it. In order to probe a link, the package should be at one of the endpoints of this link. If a probed link is intact, the package crosses the link to the other endpoint and the cost of traversing the link is incurred. Otherwise the package stays at the same endpoint and the routing mechanism is informed about the failed link. In other words, it is only possible to observe that a link has failed by trying to cross it. The goal is to find a set of paths (a nominal path and detours for every edge on the nominal path) that will minimize the latest possible arrival time at the destination. A solution of ORP should hence specify both the nominal path and the optimal detours at every vertex along the path, which avoid the next edge on the nominal path. The latter informal definition suggests that in ORP we take a conservative fault-tolerant approach. In fact, it is assumed that a failed link does exist in the network, but its identity is unknown. This suggests another application of ORP. In many applications it is only necessary to route a certain object within a certain time, called a \textit{deadline}. As long as the object reaches its destination before the deadline, no penalty is incurred. On the other hand, if the deadline is not met, a large penalty is due. An example of such an application is organ transportation for transplants (see e.g. Moreno, Valls and Ribes~\cite{OrganTrans}), in which it is critical to deliver a certain organ before the scheduled time for the surgery. In this application it does not matter how early the organ arrives at the destination, as long as it arrives in time. In such applications it is often too risky to take an unreliable shortest path, which admits only long detours in some scenarios, whereas a slightly longer path with reasonably short detours meets the deadline in \textit{every} scenario. Hence, with ORP it is often possible to immunize the path against faults in the network.
The main result of this paper is that the solution to ORP in undirected networks can be computed in $O(m \log n)$ time and linear space for all sources and a single destination, where $n$ and $m$ are the number of vertices and edges in the network, respectively. Furthermore, this solution can be stored in $O(n)$ space. The basic algorithm is a label-setting algorithm, similar to Dijkstra's algorithm for ordinary shortest paths. We describe this algorithm in Section~\ref{sec:k_is_one}. The main technical difficulty lies in the need to pre-compute certain shortest path distances in modified graphs. This difficulty is overcome in Section~\ref{sec:impl}, which provides a fast implementation of this step. We generalize the model to incorporate the possibility of an arbitrary bounded number $k$ of failed edges in Section~\ref{sec:korp}. This section also gives an alternative view on the problem using the notion of \textit{routing strategies}, and provides a polynomial algorithm for the problem. We mention other results linking ORP with the shortest path problem in Section~\ref{sec:ORPvsSP}. In particular, we show that it is possible to solve a bi-objective variant of ORP. This result implies that it is possible to efficiently obtain Pareto-optimal paths with respect to ordinary distance and the cost corresponding to the ORP problem, making it an attractive method for various applications. We also analyze the performance of a greedy heuristic which attempts to route along a shortest path in the remaining network. We show that this heuristic, which is implemented in many applications, is a poor approximation for ORP. We summarize in Section~\ref{sec:conclusions}. The following section reviews related work. \section{Related Work}\label{sec:relatedwork} The replacement path problem was first introduced by Nisan and Ronen~\cite{RepPathFirst}. The motivation for their definition stemmed from the following question in auction theory: what is the true price of a link in a network, when we try to connect two distinct vertices $x$ and $y$, and every edge is owned by a self-interested agent. It turns out that compensating agents with respect to the declarations of other agents in the auction leads to truthful declarations, namely declarations which reflect the true costs of the agent. Such pricing schemes are called \textit{Vickrey schemes}. In the setup of networks, replacement paths lengths correspond to Vickrey prices for the individual failed edges. Another important application of RP is the \textit{$k$ shortest simple paths problem} (kSSP), which reduces to $k$ replacement path computations. The complexity of the RP problem for undirected graphs is well understood. Malik, Mittal and Gupta~\cite{FirstUndirRP} give a simple $O(m + n \log n)$ algorithm. A mistake in this paper was later corrected by Bar-Noy, Khuller and Schieber~\cite{FirstUndirRPFix}. This running time is asymptotically the same as a single source shortest path computation. Nardelli, Proietti and Widmayer~\cite{NodeRPUndir} later provided an algorithm with the same complexity for the variant of RP, in which vertices are removed instead of edges. The same authors give efficient algorithms for finding detour-critical edges for a given shortest path in~\cite{DetourCritical1,DetourCritical2}. In directed graphs the situation is significantly different. A trivial upper bound for RP corresponds to $O(n)$ single shortest path computations. This gives $O(n(m + n \log n))$ for general directed graphs with nonnegative weights. 
This was slightly improved to $O(mn + n^2 \log \log n)$ by Gotthilf and Lewenstein~\cite{GenRepPathImpr}. The challenge of improving the $O(mn)$ bound for RP on directed graphs was mainly tackled by restricting the class of graphs or by allowing approximate solutions. Along the lines of the former approach, algorithms were developed for unweighted graphs (Roditty and Zwick~\cite{DirUnweightedRP}) and planar graphs (Emek, Peleg and Roditty~\cite{DirRPPlan1}, Klein, Mozes, and Weimann~\cite{DirRPPlan2} and Wulff-Nilsen~\cite{DirRPPlan3}). The latter approach was successfully applied to obtain $\frac{3}{2}$-approximate solutions by Roditty~\cite{3over2RP} and $(1+\epsilon)$-approximate solutions by Bernstein~\cite{1plusEpsRP}. Weimann and Yuster~\cite{RPMatMul} applied fast matrix multiplication techniques to obtain a randomized algorithm with sub-cubic running time for certain ranges of the edge weights. Another problem which bears resemblance to ORP is the \textit{stochastic shortest path with recourse problem} (SSPR), studied by Andreatta and Romeo~\cite{StochSPRecourse}. This problem can be seen as the stochastic analogue of ORP. Finally, we briefly review some related work on robust counterparts of the shortest path problem. The shortest path problem with cost uncertainty was studied by Yu and Yang~\cite{RSP}, who consider several models for the scenario set. These results were later extended by Aissi, Bazgan and Vanderpooten~\cite{MinMaxRegretSP}. These works also considered a two-stage min-max regret criterion. Dhamdhere, Goyal, Ravi and Singh~\cite{DemandR} developed the demand-robust model and gave an approximation algorithm for the shortest path problem. A two-stage feasibility counterpart of the shortest path problem was addressed in Adjiashvili and Zenklusen~\cite{AdaptRob}. Puhl~\cite{RRSP} provided hardness results for numerous two-stage counterparts of the shortest path problem, and gave some approximation algorithms. \section{An Algorithm for ORP}\label{sec:k_is_one} In this section we develop an algorithm for ORP. Some technical proofs are deferred to Appendix~\ref{app:proofs}. Let us first establish some notation. We are given an undirected edge-weighted graph $G=(V,E,\ell)$, a source $s\in V$ and a destination $t\in V$. We are assuming throughout this paper that the edge weights $\ell$ are nonnegative. For two vertices $u,v \in V$ let $\mathcal{P}_{u,v}$ denote the set of simple $u$-$v$ paths in $G$. Let $N(u)$ denote the set of neighbors of $u$ in $G$. For a set of edges $A \subset E$ let $\ell(A) = \sum_{e\in A} \ell(e)$. For an edge $e \in E$ and a set of edges $F \subseteq E$, let $G-e$ and $G-F$ denote the graph obtained by removing the edge $e$ and the edges in $F$, respectively. For a graph $H$ let $d_H(\cdot,\cdot)$ denote the shortest path distance in $H$. Paths are always represented as sets of edges, while walks are represented as sequences of vertices. For a path $P$ and two vertices $u$ and $v$ on $P$, let $P[u,v]$ denote the subpath of $P$ from $u$ to $v$. For an edge $e\in E$ and $u\in V$ let $$ s^{-e}_u = d_{G-e}(u,t) $$ denote the shortest $u$-$t$ path distance in $G-e$. Our algorithm uses a label-setting approach, analogous to Dijkstra's algorithm for shortest paths. In other words, in every iteration the algorithm updates certain tentative labels for the vertices of the graph, and fixes a final label for a single vertex $u$. This final label represents the cost of connecting $u$ to $t$ by an optimal path.
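Since the algorithm developed below relies on the values $s^{-e}_u$, it is worth noting that they can always be computed naively by running Dijkstra's algorithm from $t$ in $G-e$ for every edge $e$. The following Python sketch does exactly that; it is only a slow baseline (the data layout, a dictionary mapping each vertex to its weighted adjacency list, is an arbitrary choice made for the illustration), and the efficient $O(m \log n)$ computation is the subject of Section~\ref{sec:impl}.
\begin{verbatim}
import heapq

def dist_to_t(graph, t, skip=None):
    # Shortest-path distances from every vertex to t, ignoring the edge `skip`.
    # graph: dict mapping each vertex to a list of (neighbor, weight) pairs.
    dist = {t: 0}
    heap = [(0, t)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in graph[u]:
            if skip in ((u, v), (v, u)):
                continue
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def naive_s_values(graph, edges, t):
    # s[(u, e)] = d_{G-e}(u, t) for both endpoints u of every edge e = (u, v).
    s = {}
    for (u, v) in edges:
        dist = dist_to_t(graph, t, skip=(u, v))
        for x in (u, v):
            s[(x, (u, v))] = s[(x, (v, u))] = dist.get(x, float('inf'))
    return s
\end{verbatim}
This takes one shortest-path computation per edge, i.e. $O(m)$ Dijkstra runs, which is exactly the bottleneck that Section~\ref{sec:impl} removes.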
\begin{definition}\label{def:robustlenght} Given a vertex $v \in V$, the \textit{robust length} of the $v$-$t$ path $P$ is $$ \mathrm{Val}(P) = \max\{\ell(P), \max_{uu'\in P} \{\ell(P[v,u]) + s^{-uu'}_u\}\}. $$ The potential $y(v)$ is defined as the minimum of $\mathrm{Val}(P)$ over all $P\in \mathcal{P}_{v,t}$, and any path $P^*$ attaining $\mathrm{Val}(P^*) = y(v)$ is called an optimal nominal path. Finally, $ORP$ is to compute $y(s)$ and obtain a corresponding optimal nominal path. \end{definition} The robust length of a $v$-$t$ path $P$ is simply the maximal possible cost incurred by following $P$ until a certain vertex, and then taking the best possible detour from that vertex to $t$ which avoids the next edge on the path. To avoid confusion, we stress that in ORP we assume the existence of at most one failed edge in the graph. Consider next a scenario in which an edge $uu'\in P$ fails and let $u\in V$ be the vertex which is closer to $v$. We can assume without loss of generality that the best detour is a shortest $u$-$t$ path in the graph $G-uu'$. Critically, the values $s^{-uu'}_u$ are independent of the chosen path. Observe that from non-negativity of $\ell$ we obtain $\mathrm{Val}(P) \geq \mathrm{Val}(P')$, whenever $P$ and $P'$ are $u$-$t$ and $v$-$t$ paths respectively, and $P'$ is a subpath of $P$. We denote this property by \textit{monotonicity}. Furthermore, we can prove the following. \begin{lemma}\label{lem:Tproperty} Let $P_u \in \mathcal{P}_{u,t}$ and let $v \in N(u)$ be a vertex, not incident to $P_u$. Then the path $P_v = P_u \cup \{vu\}$ satisfies \begin{equation*} \mathrm{Val}(P_v)=\max\{\ell(vu)+\mathrm{Val}(P_u),s^{-vu}_{v} \}. \end{equation*} \end{lemma} Our algorithm for ORP updates the potential on the vertices of the graph, using the property established by the following lemma. \begin{lemma}\label{lem:dp_operator} Let $U\subset V$, with $t \in U$, be the set of vertices for which the potential is known. And let $uv$ be the edge such that: \begin{equation}\label{eq:DPequation} uv = \argmin_{zw\in E: w\in U,z \in V \setminus U}\{\max\{\ell(zw)+y(w),s^{-zw}_{z}\}\}. \end{equation} Then if $P_u \in \mathcal{P}_{u,t}$ with $\mathrm{Val}(P_u) = y(u)$ and $P_v=P_u \cup \{uv\}$ it holds that $\mathrm{Val}(P_v) = y(v)$. \end{lemma} Lemma~\ref{lem:dp_operator} provides the required equation for our label-setting algorithm, whose formal statement is given as Algorithm~\ref{alg:main}. The algorithm iteratively builds up a set $U$, consisting of all vertices, for which the correct potential value of $y(u)$ was already computed. The correctness of the algorithm is a direct consequence of Lemma~\ref{lem:dp_operator}. \begin{algorithm} \caption{ }\label{alg:main} \begin{algorithmic}[1] \STATE{Compute $s^{-uv}_{u}$ for each $uv \in E$.} \STATE $U = \emptyset$; \quad $W = V$; \quad $y'(t) = 0$; \quad $y'(u)=\infty \,\, \forall u\in V-t$. \STATE{$successor(u)= \text{NIL} \,\, \forall u\in V$.} \WHILE{$U \neq V$} \STATE{Find $u = \argmin_{z \in W}y'(z)$. } \STATE $U = U + u$; \quad $W = W - u$; \quad $y(u)=y'(u)$. \FORALL{ $vu \in E$ with $v \in W$} \IF{ $y'(v) > \max\{\ell(vu)+y'(u),s^{-vu}_{v}\}$ } \STATE{$y'(v) = \max\{\ell(vu)+y'(u),s^{-vu}_{v}\}$.} \STATE{$successor(v) = u$.} \ENDIF \ENDFOR \ENDWHILE \end{algorithmic} \end{algorithm} Consider the running time of Algorithm~\ref{alg:main}. We let $n$ and $m$ denote the number of vertices and edges of the input graph, respectively. 
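Before analyzing the running time in detail, the following Python sketch mirrors steps~2 onwards of Algorithm~\ref{alg:main}. It assumes that the values $s^{-vu}_v$ from step~1 are already available (for instance from the naive routine above, keyed as \texttt{s\_val[(v, (v, u))]}), and it uses a plain binary heap rather than the Fibonacci heaps discussed below, so it is an illustration of the label-setting mechanism rather than the implementation attaining the stated bounds.
\begin{verbatim}
import heapq

def orp_labels(graph, t, s_val):
    # Potentials y(u) and successor pointers for all vertices u (destination t).
    y = {u: float('inf') for u in graph}
    successor = {u: None for u in graph}
    y[t] = 0
    done = set()
    heap = [(0, t)]
    while heap:
        yu, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)                                   # y(u) is now final
        for v, w in graph[u]:
            if v in done:
                continue
            cand = max(w + yu, s_val[(v, (v, u))])    # max{ l(vu)+y(u), s^{-vu}_v }
            if cand < y[v]:
                y[v] = cand
                successor[v] = u
                heapq.heappush(heap, (cand, v))
    return y, successor
\end{verbatim}
The optimal nominal path from any source is recovered by following the \texttt{successor} pointers to $t$, exactly as in the formal statement of the algorithm.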
An efficient implementation of step~$1$ is delayed to the next section, and constitutes the heart of our efficient algorithms. For steps~$2$-$10$ we use the implementation of Fredman and Tarjan~\cite{FibHeap} for priority queues (heaps) called \textit{Fibonacci Heaps}. We adopt here the same implementation that is used to obtain $O(m + n \log n)$ running time for Dijkstra's algorithm. We omit the details as they are identical to those in~\cite{FibHeap}. We comment that our implementation of step~$1$ uses $O(m\log n)$ time, hence this computation dominates the running time. In fact, the running time of Algorithm~\ref{alg:main} remains the same if steps~$2$-$10$ are implemented using simpler data structures, such as Binary Heaps. Finally, let us remark that Algorithm~\ref{alg:main} works both for directed and undirected graphs. However, the following section provides an implementation of step~$1$ for undirected graphs only. We remark that in directed graphs, Algorithm~\ref{alg:main} can be trivially implemented in time $O(n\mathrm{SP}(n,m))$, where $\mathrm{SP}(n,m)$ is the complexity of a single shortest path computation on a graph with $n$ vertices and $m$ edges. This is a simple consequence of our first observation in the following section. \section{Efficient Computation of $s$-Values}\label{sec:impl} It remains to provide an efficient implementation for the computation of the values $s^{-uu'}_u$. We will assume a random access machine (RAM) as a computational model. We start by computing the shortest path tree $T$ in $O(m \log n)$ time from every vertex to $t$. Let $d^*(u) = d_G(u,t)$. Observe that $s^{-uu'}_u = d^*(u)$ holds for every edge $uu'$ outside of $T$. We can hence concentrate our efforts on computing $s$-values for edges in the tree. For a vertex $v\in V$ we denote by $T_v \subset T$ the subtree rooted at $v$. We set $E' = E\setminus T$. \begin{lemma}\label{lem:expression_svalues} Consider a vertex $u \in V$. Let $vw\in E'$ be an edge attaining \begin{equation}\label{eq:computing_s1} \min_{vw \in E', v\in T_u w\not\in T_u} \{d_G(u,v) + \ell(vw) + d^*(w)\}, \end{equation} and let $uu'$ be the first edge on the $u$-$t$ path in $T$. Then $$ s^{-uu'}_u = d_G(u,v) + \ell(vw) + d^*(w). $$ \end{lemma} By defining $c_{vw} = d^*(v) + d^*(w) + \ell(vw)$ for each edge $vw\in E'$ and substituting in the expression for $s^{-uu'}_u$ obtained in the previous lemma we can write \begin{equation}\label{eq:computing_s3} s^{-uu'}_u = c_{vw} - d^*(u), \end{equation} which is a convenient expression for computing the $s$-values. Indeed this expression shows that value $s^{-uu'}_u$ only depends on the lowest value $c_e$, over all $e$ with exactly one incident vertex in $T_u$. We leave the remaining details of our implementation to the proof of Theorem~\ref{thm:compexity_s}. Before stating the theorem, let us define an important ingredient, which is used hereafter. To efficiently obtain the information, whether a certain edge $e\in E'$ corresponds to a feasible detour for $u$ (namely if exactly one endpoint of $e$ is in $T_u$), we need an algorithm for computing the \textit{least common ancestor (LCA)} in trees. Given a tree rooted at a vertex $t$ and two vertices $u,v$ in the tree, the least common ancestor $\mathrm{lca}(u,v)$ of $u$ and $v$ is the vertex at which the $t$-$u$ and $t$-$v$ paths diverge in the tree. For an edge $e$ we write $\mathrm{lca}(e)$ to denote the LCA of the endpoints of $e$. 
It is straightforward to see that an edge $e = vw$ corresponds to a feasible detour for an edge $uu'$ if and only if either the $v$-$\mathrm{lca}(v,w)$ path or the $w$-$\mathrm{lca}(v,w)$ path in $T$ contains $uu'$. For this purpose we use the algorithm of Gabow and Tarjan~\cite{LCA}, which uses $O(n + p)$ time to compute the LCA of $p$ pairs of vertices in a rooted tree. In our case we have $p=O(m)$, since we would like to compute the LCA for every pair of vertices connected by an edge $e\in E'$, hence this computation takes $O(m)$ time. We assume henceforth that given an edge $e\in E'$ we have access to $\mathrm{lca}(e)$ in constant time. \begin{theorem}\label{thm:compexity_s} The values $s^{-uu'}_u$ can be computed in $O(m \log n)$ time and linear space for all $uu' \in E$. Furthermore, they can be stored in a data structure of size $O(n)$. \end{theorem} \begin{proof} The algorithm is summarized as Algorithm~\ref{alg:impl} in Appendix~\ref{svaluesalg}. The algorithm starts by sorting the set of edges $E'$ according to increasing order of $c_e$. Let $L$ be the sorted list. This operation (as well as the computation of the values $c_e$) takes $O(m \log n)$ time. The algorithm relies on the following fact. Let $e$ be the first edge in the sorted list $L$ that represents a feasible detour for $uu' \in T$. From the previous discussion, the edge $e$ corresponds to the optimal detour for $uu'$. Furthermore, if we traverse the two paths from the endpoints of $e$ to $\mathrm{lca}(e)$ we cross the edge $uu'$. The algorithm indeed performs the latter traversals in a copy of $T$ and marks all edges along the way (which were not marked before by an edge with a lower $c$-value) as belonging to the edge $e$. To obtain the desired running time we need to avoid traversing the same parts of $T$ several times. To achieve this we create a second copy $\bar T$ of the tree $T$. For every vertex $u\in T$ we store a pointer $p[u]$ from $u\in T$ to its copy $\bar u$ in $\bar T$, and another pointer $p[\bar u]$, pointing in the opposite direction. We start iterating over the edges in the sorted list $L$. In the beginning of the $i$'th iteration, the first edge $e_i =xy$ in $L$ is removed from $L$, and the algorithm jumps to the corresponding vertices $\bar x = p[x]$ and $\bar y = p[y]$ in $\bar T$. The algorithm also memorizes $\bar z = \mathrm{lca}(\bar x,\bar y)$, the vertex in $\bar T$ corresponding to $z = \mathrm{lca}(x,y)$. Next the $\bar x$-$\bar z$ and $\bar y$-$\bar z$ paths are traversed in $\bar T$, marking all edges along the way as belonging to $e_i$, and computing the $s$-values for them using (\ref{eq:computing_s3}). Finally, all these edges are contracted in $\bar T$ and the pointers are updated so that every vertex in $T$ always points to its corresponding super-vertex in $\bar T$ and vice-versa. Finally, the list is post-processed to remove some unnecessary edges in the following way. At every step, the first edge $e = vw$ in the list is inspected. If $p[v] = p[w]$, this edge is a self-loop in $\bar T$, and it cannot correspond to correct $s$-value assignments anymore. Consequently, this edge is removed from the list and the next edge is inspected. The process ends when $p[v] \neq p[w]$, or when $L$ is empty. In the latter case the algorithm terminates. This concludes the $i$'th iteration.
Note that the running time of the $i$'th iteration is proportional to the combined length of the $\bar x$-$\bar z$ path, the $\bar y$-$\bar z$ path and the prefix of $L$ that is deleted in the post-processing. Since we contract, in the tree $\bar T$, every edge we traverse, we never traverse the same edge twice. We conclude that the running time of this last stage of the algorithm is $O(n + m)$. Figure~\ref{fig:treealg} illustrates the algorithm. Finally, note that we can store a representation of all $s$-values and the corresponding detours in $O(n)$ space. This is achieved by storing the tree $T$ and, for every vertex $u$ and the corresponding edge $uu'$ in $T$, a pointer to the edge attaining the minimum in (\ref{eq:computing_s1}). The values $s^{-uu'}_u$ can either be stored in a separate list using $O(m)$ additional space, or computed in constant time from the aforementioned information, by using (\ref{eq:computing_s3}). \qed \end{proof} \begin{figure} \caption{Illustration of Algorithm~\ref{alg:impl}. Top left: the shortest path tree $T$; the next edge in $E'$ to be processed is $e_i = xy$, with cost $c_{xy}$.}\label{fig:treealg} \end{figure} Note that the running time of Algorithm~\ref{alg:impl} is dominated by the sorting of the costs of the edges. This fact is a significant advantage since sorting algorithms are very efficient in practice. In fact, if the sorted list $L$ were provided in the input, the running time of the algorithm could be improved to $O(m + n\log n)$, via the implementation mentioned in Section~\ref{sec:k_is_one}. Another case in which the latter complexity bound can be attained is that of unweighted graphs, where bucketing can be used in order to sort the costs $c_e$ in linear time. Theorem~\ref{thm:1rta} summarizes our main result. \begin{theorem}\label{thm:1rta} Given an instance of ORP, the potential $y$ and the corresponding paths can be computed in time $O(m \log n)$. \end{theorem} \section{$k$-ORP}\label{sec:korp} Let us formally define $k$-ORP, the online replacement path problem with $k$ failed edges. We refer to the parameter $k$ as the \textit{failure parameter}. In $k$-ORP a \textit{scenario} corresponds to a removal of any $k$ of the edges in the graph. In this setup it is no longer convenient to describe the problem in terms of paths and detours. Instead we introduce the notion of a \textit{routing strategy}. A routing strategy $R:2^E \times V \rightarrow V$ is a function which, given a subset $F'\subset E$ of known failed links and a vertex $v\in V$, returns a vertex $u\in V$. We call $E'=E\setminus F'$ the set of \textit{active edges}. We are assuming the existence of a certain governing mechanism, which takes as input a routing strategy and executes it on a given instance. Provided with a routing strategy $R$, this mechanism iteratively moves from the current vertex $u$ to the vertex $v = R(F',u)$, where $F'$ is the set of failed links probed so far. The process starts at a given origin $s$ with $F' = \emptyset$ and ends when $t$ is reached. Since $R$ is deterministic, this process defines a unique, possibly infinite, walk $\theta_R(u,E,F)$ in $G$ for each origin $u \in V$ and every scenario $F \subset E$ with $|F| \leq k$. We remark that if $G$ contains at least $k+1$ edge-disjoint $s$-$t$ paths, there exists a routing strategy that does not cycle.
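To make the notion of a routing strategy and of the walk $\theta_R$ concrete, the following Python sketch simulates the governing mechanism described above for a given strategy $R$ and a fixed scenario. It is a plain illustration of the definitions (the step cap only guards against strategies that cycle), not an algorithm for computing an optimal strategy; the graph layout is the same hypothetical adjacency-list dictionary used in the earlier sketches.
\begin{verbatim}
def simulate_walk(graph, R, s, t, failed, max_steps=10**6):
    # Follow the strategy R from s, probing edges against the scenario `failed`
    # (a set of frozenset edges); returns the resulting walk and its total cost.
    known_failed = set()                      # F': failed links probed so far
    walk, cost, u, steps = [s], 0, s, 0
    while u != t and steps < max_steps:
        steps += 1
        v = R(frozenset(known_failed), u)     # next vertex proposed by the strategy
        edge = frozenset((u, v))
        if edge in failed:                    # probe fails: stay put, learn the edge
            known_failed.add(edge)
        else:                                 # probe succeeds: cross and pay l(uv)
            cost += dict(graph[u])[v]
            u = v
            walk.append(u)
    return walk, cost
\end{verbatim}
The $k$-value defined next is then the worst case of this cost over all scenarios $F \subseteq E$ with $|F| \leq k$.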
\begin{definition}\label{kdistance} Given $E' \subset E$, a vertex $u \in V$ and a routing strategy $R$, the \textit{$k$-value} of $R$ with respect to $E'$ and $u$ is defined as \begin{equation*} \mathrm{Val}_k(u,E',R) = \max_{F \subseteq E', |F| \leq k} \ell(\theta_R(u,E',F)). \end{equation*} The corresponding $k$-potential of $u$ with respect to $E'$ is defined as \begin{equation*} y^k(u,E')=\min_{R} \mathrm{Val}_k(u,E',R). \end{equation*} \end{definition} Finally, the goal in $k$-ORP is to find an \textit{optimal routing strategy} $R^*$, which minimizes $\mathrm{Val}_k(s,E,R)$, namely to solve \begin{equation*} R^* = \argmin_{R} \mathrm{Val}_k(s,E,R). \end{equation*} The following relation, which is a generalization of (\ref{eq:DPequation}), gives rise to a simple recursive algorithm for $k$-ORP. For each $v \in V$ it holds: \begin{equation}\label{eq:kDPequation} y^k(v,E) = \min_{u: vu\in E}\{\max\{\ell(vu)+y^k(u,E),y^{k-1}(v,E \setminus vu)\}\}. \end{equation} The correctness of this relation can be proved by induction on $k$, using the arguments in Section~\ref{sec:k_is_one}. The exact statement of the algorithm is given in Appendix~\ref{app:general_k}, alongside a complexity analysis. We summarize with the following theorem. \begin{theorem}\label{thm:korp} \sloppy $k$-ORP can be solved on undirected graphs in time $O(m^k \log n)$. \end{theorem} \section{ORP vs. Shortest Paths}\label{sec:ORPvsSP} As we have seen, the length of a path and its robust length are different, and often conflicting, objective functions. In this section we mention two results relating these two objectives. Consider first the problem of finding an optimal solution to an instance of ORP with the shortest possible nominal path length. This bi-objective problem asks for a Pareto-optimal path with respect to these two objective functions. Our first result asserts that this problem can be solved in polynomial time with a simple adaptation of Dijkstra's algorithm. In fact, we are able to solve the following problem for every bound $B \geq OPT$. $$ P^* = \argmin_{P\in \mathcal{P}_{s,t}: \,\, \mathrm{Val}(P) \leq B}{\ell(P)}. $$ The algorithm and possible applications of this problem are given in Appendix~\ref{app:pareto}. The second result analyzes the performance of certain greedy heuristics for $k$-ORP. We show that a routing strategy that always tries to route along the shortest path in the remaining graph is a $(2^{k+1}-1)$-approximation algorithm for $k$-ORP. The details are available in Appendix~\ref{app:sp_heuristic}. \section{Conclusions}\label{sec:conclusions} This paper introduces a natural variant of the replacement path problem, the online replacement path problem. ORP captures many real-life situations which occur in faulty or large distributed networks. The most important characteristic of ORP is that the information about the failed edge in the network is only available locally, namely on the edge itself. ORP possesses many of the nice characteristics of ordinary shortest paths. In particular, we show that ORP can be solved by a simple label-setting algorithm. The computational bottleneck of this algorithm is the need to pre-compute certain shortest paths in an adapted network. We give an efficient implementation of this step, which uses $O(m \log n)$ time and linear space. We also generalize the algorithm to deal with an arbitrary constant number $k$ of failed edges. In this vein we introduce the notion of a routing strategy.
Finally, we observe that a Pareto-optimal path with respect to ordinary distance and robust length can be found in polynomial time. We conclude by mentioning a number of promising directions for future research. The complexity of ORP with a variable number $k$ of failed edges remains open. In fact, it is not clear if the decision problem $y^k(s,E) \leq M$ is in NP. The complexity of ORP in undirected graphs may potentially be improved to $O(m + n \log n)$. Finally, the complexity of ORP in directed graphs remains open. Our results give an algorithm with running time $O(n\, \mathrm{SP}(n,m))$, where $\mathrm{SP}(n,m)$ is the complexity of a single shortest path computation on a graph with $n$ vertices and $m$ edges. This problem threatens to be as challenging as RP in directed graphs. \appendix \section{Proofs}\label{app:proofs} \subsection{Proof of Lemma~\ref{lem:Tproperty}} Applying the definition we compute \begin{equation*} \begin{array}{lll} \ell(vu)+\mathrm{Val}(P_u)&=& \ell(vu)+\max\{\ell(P_u),\max_{zw\in P_u}\{\ell(P_u[u,z])+ s^{-zw}_{z} \}\} \\ &=& \max\{\ell(vu)+\ell(P_u),\max_{zw\in P_u}\{\ell(vu)+ \ell(P_u[u,z])+ s^{-zw}_{z} \}\}\\ &=& \max\{\ell(P_v),\max_{zw\in P_v \setminus \{vu\}}\{\ell(P_v[v,z])+ s^{-zw}_{z} \}\}. \end{array} \end{equation*} Substituting in the desired expression we obtain \begin{equation*} \begin{array}{lll} \max\{\ell(vu)+\mathrm{Val}(P_u),s^{-vu}_{v} \} = \\ \max\{\ell(P_v),\max_{zw\in P_v \setminus \{vu\}}\{\ell(P_v[v,z])+ s^{-zw}_{z} \}, s^{-vu}_{v} \} = \mathrm{Val}(P_v), \end{array} \end{equation*} which proves the lemma. \subsection{Proof of Lemma~\ref{lem:dp_operator}} Assume towards contradiction that there exists a path $P^*_v\in \mathcal{P}_{v,t}$ such that $\mathrm{Val}(P^*_v)<\mathrm{Val}(P_v)$. Let $w\in U$ and $z\in V\setminus U$ be two vertices such that $zw \in P^*_v$. Consider the partition of $P^*_v$ given by $P^*_v = P^*_v[v,z] \cup \{zw\} \cup P^*_v[w,t]$ (note that if $w=t$ then $P^*_v[w,t] = \emptyset$, and if $z=v$ then $P^*_v[v,z] = \emptyset$). By the choice of $vu$ we have \begin{equation*}\label{uno} \max\{\ell(vu)+y(u),s^{-vu}_{v}\} \leq \max\{\ell(zw)+y(w),s^{-zw}_{z}\}, \end{equation*} which, by $\mathrm{Val}(P_u) = y(u)$ and $\mathrm{Val}(P^*_v[w,t]) \geq y(w)$, implies \begin{equation*}\label{due} \max\{\ell(vu)+\mathrm{Val}(P_u),s^{-vu}_{v}\} \leq \max\{\ell(zw)+\mathrm{Val}(P^*_v[w,t]),s^{-zw}_{z}\}. \end{equation*} The latter inequality and Lemma~\ref{lem:Tproperty} give $\mathrm{Val}(P_v) \leq \mathrm{Val}(P^*_v[z,t])$. On the other hand, by monotonicity we have $\mathrm{Val}(P^*_v) \geq \mathrm{Val}(P^*_v[z,t])$. We conclude that $\mathrm{Val}(P_v) \leq \mathrm{Val}(P^*_v[z,t]) \leq \mathrm{Val}(P^*_v)$; a contradiction. \subsection{Proof of Lemma~\ref{lem:expression_svalues}} Clearly this choice represents a $u$-$t$ path in $G-uu'$, so $s^{-uu'}_u \leq d_G(u,v) + \ell(vw) + d^*(w)$ holds. Assume towards contradiction that a better $u$-$t$ path $P$ existed in $G-uu'$, namely $\ell(P) < d_G(u,v) + \ell(vw) + d^*(w)$. To reach $t$ from $u$ without using $uu'$ the path $P$ needs to contain an edge $v'w'$, such that $v'\in T_u$ and $w'\not\in T_u$. We obtain \begin{equation*} \begin{array}{lll} \ell(P) = \ell(P[u,v']) + \ell(v'w') + \ell(P[w',t]) &\geq& d_G(u,v') + \ell(v'w') + d^*(w') \\ &\geq& d_G(u,v) + \ell(vw) + d^*(w), \end{array} \end{equation*} which contradicts the assumption on $P$.
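For testing purposes, the $s$-values can also be computed by the obvious quadratic-time baseline: $s^{-uu'}_u$ is the length of a shortest $u$-$t$ path in $G-uu'$, so one Dijkstra run per tree edge suffices. The following Python sketch (for illustration only; the graph is assumed to be given as nested dictionaries \texttt{G[u][v]} of edge lengths) can serve as a reference implementation against which Algorithm~\ref{alg:impl} can be checked.
\begin{verbatim}
# Brute-force reference for the s-values: one Dijkstra run per tree edge.
import heapq

def dijkstra_avoiding(G, source, target, banned):
    dist, pq = {source: 0.0}, [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in G[u].items():
            if {u, v} == banned:        # the failed tree edge uu' is forbidden
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")                 # t is unreachable in G - uu'

def brute_force_s_values(G, tree_edges, t):
    # s^{-uu'}_u = length of a shortest u-t path avoiding uu'
    return {(u, u2): dijkstra_avoiding(G, u, t, banned={u, u2})
            for (u, u2) in tree_edges}
\end{verbatim}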
\section{A Summary of the Algorithm in Section~\ref{sec:impl}}\label{svaluesalg} \begin{algorithm} \caption{ }\label{alg:impl} \begin{algorithmic}[1] \STATE{Compute the shortest path tree $T$.} \STATE{Create a copy $\bar T$ of $T$.} \STATE{Store a pointer $p[u]$ in every vertex $u$ in $T$ to its corresponding vertex $\bar u$ in $\bar T$.} \STATE{Store a pointer $p[\bar u]$ in every vertex $\bar u$ in $\bar T$ to its corresponding vertex $u$ in $T$.} \STATE{Store similar pointers $p[\bar e]$ from edges in $\bar T$ to corresponding edges in $T$.} \STATE{Compute $c_e$ for every $e\in E'$.} \STATE{Sort $E'$ according to increasing order of $c_e$. Let $L$ be the sorted list.} \WHILE{$\bar T$ contains more than one vertex} \STATE{Remove the first edge $e = xy$ from $L$.} \STATE{$\bar x = p[x]$, \quad $\bar y = p[y]$.} \STATE{$z = \mathrm{lca}(e)$, \quad $\bar z = p[\mathrm{lca}(e)]$.} \STATE{Let $Y \subset \bar T$ denote the union of the $\bar x$-$\bar z$ and $\bar y$-$\bar z$ paths in $\bar T$.} \FOR{$\bar a\in Y$} \STATE{$uu' = p[\bar a]$.} \STATE{$s^{-uu'}_u = c_{e} - d^*(u)$.} \ENDFOR \STATE{Contract $Y$ in $\bar T$ and update pointers.} \REPEAT \STATE{Let $e=vw$ be the first edge in $L$.} \IF{$p[v] = p[w]$} \STATE{Remove $e$ from $L$.} \ENDIF \UNTIL{$p[v] \neq p[w]$ \OR $L = \emptyset$.} \ENDWHILE \end{algorithmic} \end{algorithm} \section{An Algorithm for $k$-ORP}\label{app:general_k} Algorithm~\ref{alg:korp} solves (\ref{eq:kDPequation}) for every $v\in V-t$ and computes the corresponding optimal routing strategy. Note that Algorithm~\ref{alg:korp} is identical to Algorithm~\ref{alg:main} when $k=1$. To formally prove the correctness of Algorithm~\ref{alg:korp} one needs to state equivalent monotonicity properties, as well as analogues of Lemma~\ref{lem:Tproperty} and Lemma~\ref{lem:dp_operator}. They are, however, omitted, as they are identical to those of Section~\ref{sec:k_is_one}. \begin{algorithm} \caption{ }\label{alg:korp} \begin{algorithmic}[1] \STATE{Compute $y^{k-1}(u,E \setminus uv)$ for each $uv \in E$.} \STATE{$U = \emptyset$.} \STATE{$W = V$.} \STATE{$y^{k}(t,E) = 0$.} \STATE{$y^{k}(u,E) = \infty$ for each $u \in W - t$.} \WHILE{$U \neq V$} \STATE{Find $u = \argmin_{z \in W}\{y^{k}(z,E)\}$. } \STATE{$U = U + u$.} \STATE{$W = W - u$.} \FORALL{ $vu \in E$ such that $v \in W$} \IF{ $y^{k}(v,E) > \max\{\ell(uv)+y^{k}(u,E),y^{k-1}(v,E\setminus \{vu\})\}$ } \STATE{$y^{k}(v,E) = \max\{\ell(uv)+y^{k}(u,E),y^{k-1}(v,E\setminus \{vu\})\}$.} \ENDIF \ENDFOR \ENDWHILE \end{algorithmic} \end{algorithm} We conclude by bounding the complexity of a naive implementation of this algorithm. Let $T(m,n,k)$ denote the running time of the algorithm on a graph with $n$ vertices and $m$ edges and failure parameter $k$. The algorithm in Section~\ref{sec:impl} gives $T(m,n,1) = O(m \log n)$. For $k>1$ we have $T(m,n,k) = O(m + n \log n) + m\, T(m-1,n,k-1) = O(m^{k-1}T(m,n,1))$. \sloppy This finally gives $T(m,n,k) = O(m^k \log n)$. \section{Pareto-Optimal ORPs}\label{app:pareto} In this section we are concerned with obtaining a path $P^*$ with robust length at most $B$, and with the shortest nominal path length among all such paths. This problem is equivalent to that of finding a Pareto-optimal path with respect to the two objective functions given by the ordinary distance and the robust length. We assume $B\geq OPT$, the optimal solution value of the corresponding ORP instance.
Formally, we aim at finding an $s$-$t$ path $P^*$ satisfying $$ P^* = \argmin_{P\in \mathcal{P}_{s,t} : \,\, \mathrm{Val}(P)\leq B}{\ell(P)}. $$ The algorithm for this problem is a slightly modified version of Dijkstra's algorithm on the directed edge-weighted graph $G'=(V,K,\omega)$, where $K$ has two directed edges $uv$ and $vu$ for each undirected edge $uv \in E$, with $\omega(uv)=\omega(vu)=\ell(uv)$. Algorithm~\ref{alg:pareto} is a formal statement of the algorithm. Note that the only difference from Dijkstra's algorithm is the condition in step $8$. Unlike for ordinary shortest paths, when a vertex $u$ is selected and the distance labels of its neighbors are updated, it is not sufficient to check for each $uv \in K$ the usual condition $$d(u)+ \omega(uv) \leq d(v).$$ We need to additionally verify whether the edge $uv$ can belong to a path with robust length at most $B$ or not. In other words, we need to check whether the length of the path from $s$ to $u$ plus the length of the detour from $u$ to $t$ avoiding $uv$ is at most $B$, namely $$d(u) + s^{-uv}_{u} \leq B.$$ Algorithm~\ref{alg:pareto} can clearly be implemented to run in $O(m + n \log n)$ time using the results of Section~\ref{sec:impl}. \begin{algorithm} \caption{}\label{alg:pareto} \begin{algorithmic}[1] \STATE{ $S=\emptyset$; \quad $\bar S = V$} \STATE{ $d(s) = 0$; \quad $d(v) = \infty \,\, \forall v \in V - s$} \WHILE{$t \notin S$} \STATE{Find $u = \argmin_{z \in \bar S}{d(z)}$} \STATE{$S = S + u$} \STATE{$\bar S = \bar S - u$} \FOR{$v \in N(u)\setminus S$} \IF{$d(u)+ \omega(uv) \leq d(v)$ and $d(u) + s^{-uv}_{u} \leq B$} \STATE{$d(v)=d(u)+ \omega(uv)$} \ENDIF \ENDFOR \ENDWHILE \end{algorithmic} \end{algorithm} Let us briefly discuss the potential applications of the latter problem. While shortest paths often have undesirable behavior in unreliable networks, an optimal solution to ORP might have a prohibitively large cost. In some applications faults occur rarely, hence it is preferable to keep the cost of the nominal path as low as possible. At the same time, it is necessary that the cost does not exceed a certain threshold $B$ in any scenario. Consequently, it makes sense to regard the threshold $B$ as a hard constraint on the robust length, and optimize the length of the nominal path. Algorithm~\ref{alg:pareto} thus gives the decision maker the freedom to choose the desired level of conservatism. \section{The Shortest Path Heuristic}\label{app:sp_heuristic} It is common for routing algorithms to use heuristics which rely on shortest paths. In this section we show that a naive shortest path routing strategy performs very poorly for $k$-ORP. In fact, the approximation guarantee it provides grows exponentially with the adversarial budget $k$. To this end we define more formally the shortest path heuristic, which we denote by $R_{SP}$. The routing strategy $R_{SP}$ works as follows. At each vertex $u\in V$ and given a set of known failed edges $F'$, $R_{SP}$ tries to route the package along a shortest path in the remaining graph $G-F'$. In the following lemma we show that $R_{SP}$ is a factor $(2^{k+1}-1)$ approximation for the optimal routing strategy, in the presence of at most $k$ failed edges. \begin{lemma}\label{lem:sp_heuristic_general} Let $\mathcal{I} = (G,s,t)$ be an instance of $k$-ORP. Then $$ \mathrm{Val}_k(s,E,R_{SP}) \leq (2^{k+1}-1) y^k(s,E).
$$ \end{lemma} \begin{proof} Let $OPT = y^k(s,E)$, $R^* = \argmin_R {\mathrm{Val}_k(s,E,R)}$ and $Q^0 = \theta_{R^*}(s,E, \emptyset)$ be the corresponding nominal path. Consider a set $F$ of failed edges with $|F| \leq k$. Define $\omegaega = \theta_{R_{SP}}(s,E,F)$ to be the walk followed by $R_{SP}$ in $G-F$, and let $\omegaega = (s = u_1, u_2, \cdots, u_m = t)$ be the corresponding sequence of vertices. We divide our analysis according to the number of failed edges encountered by $R_{SP}$. We prove by induction on $i$, that if the routing strategy encountered a total of $i\leq k$ failed edges then \begin{equation}\label{eq:induction} \ell(\omegaega) \leq (2^{i+1} - 1) OPT. \end{equation} The base case $i=0$ corresponds to scenarios in which no failed edge is encountered in the routing. In this case $\omegaega$ is simply a shortest $s$-$t$ path in $G$, hence $\ell(\omegaega) \leq OPT$, as required. Assume next that (\ref{eq:induction}) holds for every $j<i$ and consider the case that the routing encounters exactly $i$ failed edges. We can assume without loss of generality that all edges of $F$ were probed and $|F| =i$. Let $r < m$ be such that $u_r$ is the vertex incident to the $i$'th failed edge in the routing. In other words, before reaching $u_r$, the routing probed exactly $i-1$ failed edges. Let $e = \{u_r,w\}$ be the $i$'th failed edge probed by the routing strategy. Consider an execution of the routing strategy on the same instance with the different failure scenario $F' = F-e$. The resulting walk $\omegaega'$ will have the first $r$ vertices in common with $\omegaega$, namely the sub-walk $\sigma = (s=u_1,\cdots, u_r)$ will appear in both walks. By the inductive hypothesis we have that $\ell(\sigma) \leq \ell(\omegaega') \leq (2^i-1)OPT$. It remains to bound the length of the tail of $\omegaega$ from $u_r$ until $u_m=t$ to complete the proof. To this end recall that $R_{SP}$ routes the package along the shortest remaining path in the graph. Since the last failure encountered by $R_{SP}$ is $e$, the remaining path is simply the shortest $u_r$-$t$ path in $G-F$. To bound the length of this path we construct a $u_r$-$t$ walk $\theta$ as follows. First $\theta$ traces the entire route taken by $R_{SP}$ back to $s$ and then uses the walk that $R^*$ would use to reach $t$ from $s$ in the scenario $F$. Clearly we have $\ell(\theta) \leq (2^i-1)OPT + OPT$. Furthermore, this walk is intact in $G-F$. This gives the required bound $\ell(\omegaega) \leq (2^i-1) OPT + (2^i-1)OPT + OPT = (2^{i+1}-1)OPT$ and finishes the proof. \qed \end{proof} The bound obtained in Lemma~\ref{lem:sp_heuristic_general} seems crude at first glance. In particular, in the inductive step we follow the entire walk performed so far backwards to reach $s$ and start over. In the following example we show that the bound of Lemma~\ref{lem:sp_heuristic_general} is tight. \begin{example}\label{ex:general_sp_bad} Let $M \in \mathbb{Z}_+$ be a large integer. Consider the following instance $\mathcal{I} = (G,s,t)$ of $k$-ORP. The graph contains $k+1$ parallel edges connecting $s$ and $t$ with length $M+1$. In addition the graph contains a path $(s,u_1, \cdots, u_{k})$ of length $k+1$. The edge $su_1$ has length $M$ and every edge $u_iu_{i+1}$ has length $2^iM$. Finally, the vertices $u_1, \cdots, u_k$ are connected to $t$ with edges of length zero. The construction is illustrated in Figure~\ref{fig:sp_heuristic_bad}. Consider the failure scenario, which fails all edges $u_it$ for $i\in [k]$. 
The routing strategy $R_{SP}$ will follow the path $(s, u_1, \cdots, u_k)$, then follow it back to $s$ and then take one of the edges of length $M+1$ to $t$. The total length of this walk is $2(M + 2M + \cdots + 2^{k-1} M) + M + 1 = (2^{k+1} - 1)M + 1$. At the same time the optimal routing strategy routes the package along the edges of length $M+1$, with a worst-case cost of $M+1$. The ratio between the two numbers tends to $2^{k+1} -1$ as $M$ tends to infinity. \end{example} \begin{figure} \caption{A bad example for the shortest path heuristic. The dashed edges correspond to the worst case scenario.} \label{fig:sp_heuristic_bad} \end{figure} \end{document}
\begin{document} \title{Moduli spaces of stable sheaves over quasi-polarized surfaces, and the relative Strange Duality morphism} \begin{prelims} \DisplayAbstractInEnglish \DisplayKeyWords \DisplayMSCclass \languagesection{Fran\c{c}ais} \DisplayTitleInFrench \DisplayAbstractInFrench \end{prelims} \setcounter{tocdepth}{1} \tableofcontents \section{Introduction} The work on the present paper started with an attempt to strengthen the results on Strange Duality on K3 surfaces, and is largely motivated by the approach of Marian and Oprea \cite{MO}. Strange Duality is a conjectural duality between global sections of two natural line bundles on moduli spaces of stable sheaves. It originated as a representation theoretic observation about pairs of affine Lie algebras, and then was reformulated geometrically over the moduli of bundles over curves \cite{DT}, \cite{Beau}. In our paper, we develop the geometric approach to Strange Duality over surfaces in the spirit of Marian and Oprea. They proved the Strange Duality conjecture for Hilbert sche\-mes of points on surfaces, moduli of sheaves on elliptic K3 surfaces with a section \cite{MO}; and cases for abelian surfaces \cite{MO14:abelian}, including a joint work \cite{BMOY:abelian}. The latter used birational isomorphisms of moduli spaces of stable sheaves with Hilbert schemes of points on the same K3 surface, following Bridgeland \cite{Br}, to reduce the question to the known case of Hilbert schemes. Further, Marian and Oprea use this result to conclude the Strange Duality isomorphism for a generic K3 surface in the moduli space of polarized K3 surfaces of degree at least four \cite{MO} (the idea first appeared in their earlier paper \cite{MO10}), for a pair of vectors whose determinants are equal to the polarization. In order to make this argument work for K3 surfaces of degree two, we have to construct moduli spaces of stable sheaves over the stack of \emph{quasi}\-po\-la\-rized K3 surfaces, without assuming that the quasi-polarization (a choice of a big and nef line bundle) is ample. This is needed because elliptic K3 surfaces of degree two are not polarized, so the original approach of Marian and Oprea needs modification. The question of whether the Strange Duality construction can be extended from the polarized locus to the whole moduli stack of quasi-polarized K3 surfaces was left open in \cite{MO}. Stepping away from the ample locus requires that we retrace classical results in moduli theory: we prove openness of the stable locus, show that relative moduli spaces exist, and use the theory of good moduli spaces to derive gluing and descent results. This notion was introduced by Jarod Alper \cite{Alp}, and further developed by Alper, Hall, Halpern-Leistner, Heinloth and Rydh in numerous works; the most important for the present paper will be a recent remarkable result giving a criterion for when a stack has a good moduli space \cite{AHHL}. This part of our work culminates in the following result, which we consider the main contribution of the paper: \begin{Thrm}[Theorems~\ref{Thrm:good_mm} and~\ref{Thrm:construct_Mc}] Let $\mathcal{K}$ be the moduli stack of quasi-polarized projective surfaces, and let $\mathcal{X}$ be the universal surface with the universal quasi-polarization $\mathcal{H}$. Fix a Chern character $v$ over $\mathcal{X}$. Assume that, pointwise over $\mathcal{K}$, slope stability is equivalent to slope semistability for sheaves in class $v$. 
Then the stack of stable sheaves $\mathcal{Q} \to \mathcal{K}$ of K-theory class $v$ is algebraic. Further, there exists a relative good moduli space $\mathcal{Q} \to \mathcal{M}$. The stack $\mathcal{M} \to \mathcal{K}$ is fiberwise $($i.e. over each closed point of $\mathcal{K})$ the moduli scheme of stable sheaves of class $v$ with respect to the restriction of the universal quasi-polarization. \end{Thrm} Then we apply the developed theory to construct the Strange Duality morphism: this requires knowing that we have a good morphism to $\mathcal{M}$ from the moduli stack which possesses a universal family of stable sheaves. Along the way, we use the Descent Lemma (Lemma \ref{Lem:good_descent}), where we show that quasi-coherent sheaves descend along good morphisms. \begin{Thrm}[Equation \eqref{Eq:SD_mm}] The Strange Duality morphism exists for a pair of orthogonal K-theory vectors on the universal K3 surface $\mathcal{X} \to \mathcal{K}$. It is defined up to a twist by a line bundle. \end{Thrm} \begin{Rem} We attempted to use the Marian-Oprea trick (from their paper \cite{MO}) to extend the generic Strange Duality isomorphism to degree two. Employing the relative moduli space construction from the present paper, it works as follows. Working with an elliptic K3 surface (which in degree two lies in the quasi-polarized locus), one uses a Fourier-Mukai functor to establish a birational isomorphism of a pair of Hilbert schemes with a pair of moduli spaces of higher-rank sheaves and, using functoriality, identifies the theta divisors on the two spaces; this proves the Strange Duality over the elliptic locus. By continuity, the Strange Duality morphism would then be an isomorphism on a dense open substack. For this to work, one needs to find a pair of orthogonal vectors of rank at least three and a suitable Fourier-Mukai kernel to get to a pair of vectors of rank one whose sum of determinants is big and nef. The author could not find such vectors for the following choices of kernel: the ideal sheaf of the diagonal on the fibered square of the K3 surface, and a universal sheaf classifying rank $d+1$, degree $d$ stable fiber sheaves. The author is working on a more explicit description of other possible Fourier-Mukai kernels. \end{Rem} \subsubsection*{Outline of the paper} We start with constructing relative moduli spaces in Section \ref{Sec:Rel_msp}. We first show that the stack of stable sheaves with respect to the universal quasi-polarization is algebraic. Then we recall some theory of good moduli spaces, and prove the Descent Lemma (Lemma \ref{Lem:good_descent}) for good morphisms. Finally, we construct the relative space of stable sheaves locally over schematic charts of $\mathcal{K}$, and then glue the resulting spaces using their universal properties. Then, we apply the developed theory to the Strange Duality. In Section \ref{Sec:SD_mm}, we start by generalizing Marian-Oprea's construction of the theta line bundles, and use it to extend the Strange Duality morphism to the quasi-polarized locus. \subsubsection*{Conventions} We work over an algebraically closed field of characteristic zero. We write $(-)^\vee$ for the derived dual of a sheaf and $-\otimes -$ for the derived tensor product. Given a morphism of schemes $f : X \to Y$, we denote by $f_*$ and $f^*$ the derived functors of pushforward and pullback, respectively. When we want to work with the classical functors instead of derived, we write $\mathrm{L}^0 f^*$ for the nonderived pullback and $\mathrm{R}^0 f_*$ for the nonderived pushforward.
Note however that we distinguish between $\mathrm{Hom}$ and $\mathrm{RHom}$ (because $\mathrm{Hom}$ makes sense in the derived category on its own). For the moduli theory of sheaves, whenever we say stability, we mean slope stability with respect to a chosen quasi-polarization. We generally need a way to fix a numerical characteristic of the sheaves in question in order to obtain any finiteness results. So, for a stack $\mathcal{X}$, we use zeroth algebraic K-theory $\mathrm{K}_0\, \mathcal{X}$ and zeroth topological K-theory $\mathrm{K}_0^{\mathrm{top}}\, \mathcal{X}$ (defined by Blanc \cite{Blanc} for $\mathbb{C}$-stacks, and by Blanc, Robalo, To\"en, Vezzosi \cite{BRTV} in greater generality). For a complex variety $X$, we can also define oriented topological K-theory $\mathrm{K}_0^{\mathrm{or}}\, X$ by fixing the determinant of a topological K-theory vector. We will call a vector $v$ in any of these K-theories a fixed \emph{numerical characteristic}, or \emph{K-theory class}. When we need to be specific, we will add adjectives algebraic, topological or oriented topological to refer to the corresponding variants of K-theory. Let $\mathrm{Chow}\, X$ denote the Chow ring of a smooth projective variety $X$. It is well-known that there is a function called Chern character $\mathrm{ch}\, : \mathrm{D^b} X \to \mathrm{Chow}\, X$, from objects of the derived category to the Chow ring, that factors as a ring homomorphism through the Grothendieck group: $\mathrm{ch}\, : \mathrm{K}_0\, X \to \mathrm{Chow}\, X$. Note that the Euler pairing descends to each of the K-groups by taking representative complexes $E$ and $F$ and computing the Euler characteristic of their derived tensor product: $$\chi(E \otimes F) \deq \chi \left( \mathrm{R}\Gamma(E \otimes F) \right) \mbox{.}$$ We don't use Chern classes a lot, and instead we prefer to write a K-theory vector $v$ in terms of components of its Chern character: $\mathrm{ch}_0 v = \mathrm{rk}\, v$, $\mathrm{ch}_1 v = \mathrm{c}_1 v$, $\mathrm{ch}_2 v = \frac{1}{2} (\mathrm{c}_1 v)^2 - \mathrm{c}_2 v$, etc. \section{Relative moduli spaces of stable sheaves with respect to a quasi-polarization} \label{Sec:Rel_msp} Let $\mathcal{K}$ be a stack of quasi-polarized projective surfaces that admits a universal family $u: \mathcal{X} \to \mathcal{K}$ with universal quasi-polarization $\mathcal{H}$. It means that we want $\Hom{}{T,\mathcal{K}}$ to classify families on a scheme $T$ given by pullbacks of $\mathcal{X}$ and $\mathcal{H}$ to $T$. Fix a K-theory class $v$ over $\mathcal{X} \to \mathcal{K}$. For the main application, $\mathcal{K}$ will be the moduli stack of quasi-polarized K3 surfaces and $u: \mathcal{X} \to \mathcal{K}$ will be the universal quasi-polarized K3 surface with quasi-polarization $\mathcal{H}$. Our aim is to define the relative moduli space of slope stable sheaves $\mathcal{M} \deq \mathcal{M}_v \to \mathcal{K}$. Our idea is to start with the moduli functor $\widetilde \mathcal{M}$ of all flat families of sheaves of fixed K-theory class. Usually properness of support is assumed, but in our case it is an automatic condition due to the projectivity assumption. It is known that this functor is representable by an Artin stack, see for example a very general result of Lieblich (\cite{Lieb}, the main theorem). Then we will prove that the subfunctor $\mathcal{Q} \subset \widetilde \mathcal{M}$ of stable sheaves is open, hence also is an Artin stack.
This is well-known when quasi-polarization is ample, but we will need additional technical arguments in order to generalize it to the non-polarized locus of K3 surfaces. In the next step, we observe that $\mathcal{Q}$ admits a ``good moduli space morphism'' onto a relative moduli space, which will be denoted by $\mathcal{M} = \mathcal{M}_v$. For this, we use the result of existence of good moduli spaces by Alper, Heinloth and Halpern-Leistner \cite{AHHL}. Note that the fiber of $\mathcal{M}$ is a scheme over each K3 surface $[X] \in \mathcal{K}$, but globally $\mathcal{M}$ is still a stack. Pointwise it is well-known that $\mathcal{M}$ is a scheme for the polarized case. However, to our knowledge, the case of non-ample quasi-polarization, and a construction of a good moduli space morphism are new results. They are summarized in the main theorem of this section: \begin{Thrm*}[\emph{cf.} Theorem~\ref{Thrm:construct_Mc}] Let $\mathcal{K}$ be a stack of quasi-polarized surfaces that admits the universal surface $\mathcal{X}$ with the universal quasi-polarization $\mathcal{H}$. Fix a K-theory class $v$ over $\mathcal{X}$. Assume that, pointwise over $\mathcal{K}$, stability is equivalent to semistability for sheaves in class $v$. Then there exists a stack $\mathcal{M} \to \mathcal{K}$ which is fiberwise $($\textit{i.e.} over each closed point of $\mathcal{K})$ the moduli scheme of stable sheaves of class $v$ with respect to the restriction of the universal quasi-polarization. \end{Thrm*} \subsection{Constructibility and generization} We start with proving that the subfunctor $\mathcal{Q} \subset \widetilde \mathcal{M}$ is constructible and preserved by generization. Then, by a topological lemma, we will be able to conclude that it is an open subfunctor. Recall that a stack is a contravariant (quasi-)functor $\mathcal{S}{}ch^{op} \to \mathcal{G}{}pd$ from the category of schemes $\mathcal{S}{}ch^{op}$ to the 2-category of groupoids $\mathcal{G}{}pd$ which satisfies a ``level two'' sheaf condition. \begin{Def} Consider a moduli problem $\mathcal{F} : \mathcal{S}{}ch^{op} \to \mathcal{G}{}pd$ and a subfunctor $\mathcal{G} \subset \mathcal{F}$. We say that $\mathcal{G}$ is a constructible subfunctor of $\mathcal{F}$ if, for any family $X \in \mathcal{F}(\tilde B)$ parametrized by the scheme $\tilde B$, the locus $B \deq \left\{ b \in \tilde B \mid X_b \in \mathcal{G}(b) \right\}$ is a constructible subset of $\tilde B$. \end{Def} \begin{Lem} \label{constructibility} Fix a Chern character $v$ over $\mathcal{X}$. Assume that pointwise on $\mathcal{K}$, stability with respect to $\mathcal{H}$ is equivalent to semistability. Then the moduli subfunctor $\mathcal{Q} \subset \widetilde \mathcal{M}$ of stable sheaves is a constructible subfunctor. \end{Lem} \begin{proof} We will be checking constructibility by taking families of $\mathcal{Q}$ and $\widetilde \mathcal{M}$ parametrized by an arbitrary scheme $\tilde B$. Note that this condition can be checked on an open cover, so by possibly taking affine opens in $\tilde B$, we can assume that $\tilde B$ is quasi-compact and quasi-separated. Further, we can reduce the question to a Noetherian base by using Noetherian approximation by Thomason--Trobaugh \cite{TT} as follows. By \cite[Theorem~C.9]{TT}, a quasi-compact and quasi-separated scheme $\tilde B$ over a ground field admits an approximation by Noetherian schemes $C_i$; moreover, the bonding maps of the system are all affine: $$ \tilde B = \lim C_i . 
$$ Both $\mathcal{Q}$ and $\widetilde \mathcal{M}$ evaluated at $\tilde B$ parametrize certain sheaves over $X \deq \mathcal{X} \fprod{\mathcal{K}} \tilde B$, which, being a family of K3 surfaces, is a scheme. By \cite[Lemma~01ZM]{Stacks}, we can choose a Noetherian $X_i$ and $X_i \to C_i$ for sufficiently large $i$ such that $X \cong \tilde B \fprod{C_i} X_i$. So, possibly passing to a subset of the indexing set, we can assume that $X_i$ is chosen for all $i$. Then by a simple category theory fact, we have $X \cong \lim X_i$. By \cite[Lemmas~0B8W and~05LY]{Stacks}, a flat sheaf on $X$ is a pullback of some flat sheaf on a finite step $X_i$. Therefore, we can study a particular flat family parametrized by a finite step $C_i$, which means that without loss of generality, we can assume that $\tilde B$ is Noetherian. Finally, note that both constructibility and stability can be checked on closed points, so it is enough to check the condition for reduced Noetherian bases. Let $F$ be a family of sheaves parametrized by a reduced Noetherian base $\tilde B$, that is, $F \in \mathrm{Coh}\, X$ is a coherent sheaf of Chern character $v$ over a family of quasi-polarized K3 surfaces $X \to \tilde B$, flat over $\tilde B$. We want to show that the locus $B \deq \left\{ b \in \tilde B \mid F_b \mbox{ is stable} \right\}$ is constructible. To that end, denote by $H$ the quasi-polarization of $X \to \tilde B$. We will use Noetherian induction on the base: we will stratify $\tilde B$ with locally closed disjoint subsets $B_i$, and prove that $B \cap B_i$ is open in each $B_i$. The Noetherian property will be used to prove that the set $\left\{ B_i \right\}$ of the strata is finite. Note that the locus $B_0$, where the quasi-polarization $H_{X_b}$ is ample, is open. It is a standard result that semistability is open in flat families \cite[Proposition~2.3.1]{HuL}; with our assumption that semistability implies stability, we then obtain that the stable locus $B_0 \cap B$ is open in $B_0$. Consider the strictly quasi-polarized locus $\tilde B \setminus B_0$ (\emph{i.e.} where the quasi-polarization is not ample) and pick an irreducible component $B_1$ with the generic point $\eta$. The surface $X_\eta$ is projective, so we can pick an ample line bundle $L_\eta$ over $X_\eta$. Note that $B_1$ and the restriction $X_{B_1}$ are integral schemes, hence the sheaf of total quotient rings of $\mathcal{O}_{X_{B_1}}$ is the constant sheaf $K$ with fiber the residue field at the generic point $\kappa \in X_\eta \subset X$, and so every line bundle on $X_\eta$ comes from a Cartier divisor. Let's say that $L_\eta \cong \mathcal{O}_{X_\eta}(D_\eta)$ for $D_\eta \in \mathrm{H}^0 \left( X_\eta, K_{X_\eta}^\times / \mathcal{O}_{X_\eta}^\times \right)$. We can extend this divisor to a Cartier divisor $D$ over some open subset $U'$ of $X_{B_1}$. Let us for a moment denote by $f$ the morphism $X_{B_1} \to B_1$. We now argue that $U'$ can be extended to an open set of the form $f^{-1}(U)$ for some $U \subset B_1$. The morphism $f$ is a flat family of projective surfaces, so by \cite[Lemma~01UA]{Stacks}, it is open. So the set $U = f(U') \subset B_1$ is open. Since every fiber is proper, one can note that every regular function is constant along a fixed fiber. Therefore, if the section corresponding to $D$ is defined at one point of a fiber, it is defined over the whole fiber. So $D_\eta$ can be extended to a divisor on $f^{-1}(U)$, and we get an extension of $L_\eta$ to $L$ over $X_U$.
Note further that being ample is an open condition, so we may assume, after possibly shrinking $U$, that $L$ is relatively ample. Now we will show that we can pick a small enough $\epsilon \in \mathbb{R}^{+}$ such that stability with respect to $H_U + \epsilon L$ is equivalent to stability with respect to $H_U$ for every point in $U$. The argument for local finiteness of the walls \cite[Lemma~4.C.2]{HuL} (the result is summarized in Fact~\ref{hyparr}) extends to a neighborhood of each point of $U$; on each such neighborhood we pick $\epsilon$ as above, and then, by quasi-compactness of the base $U$, we can pass to a finite subcover and take the minimum of the corresponding $\epsilon$'s. Now we have a polarization over $X_U$ and can deduce openness of the locus where $F_U$ is stable on a fiber with respect to the ample $H_U + \epsilon L$. This locus is exactly $B \cap U \subset B_1$. At this point, we want to redefine $B_1$ to be $U$, and pass to consideration of the closed subset $\tilde B \setminus (B_0 \sqcup B_1)$ of $\tilde B$. The choice of the subsequent $B_i$'s is done in the same fashion. By the Noetherian assumption, there are only finitely many $B_i$'s, and for each of those, the subset $B \cap B_i$ is open inside $B_i$. Since $B_i$ is locally closed inside $\tilde B$, we get that $B$ is equal to the finite disjoint union of locally closed subsets $B \cap B_i \subset \tilde B$, hence constructible. \end{proof} \begin{Def} Consider a moduli problem $\mathcal{F} : \mathcal{S}{}ch \to \mathcal{G}{}pd$ and a subfunctor $\mathcal{G} \subset \mathcal{F}$. We say that $\mathcal{G}$ is closed under generization if, for any family $X \in \mathcal{F}(\mathrm{Spec}\, R)$ parametrized by the spectrum of a valuation ring $R$ such that the fiber of $X$ over the closed point belongs to the subfunctor $\mathcal{G}$, the generic fiber is also in $\mathcal{G}$. \end{Def} \begin{Lem} \label{generization} Fix a Chern character $v$ over $\mathcal{X}$. Then the moduli subfunctor $\mathcal{Q} \subset \widetilde \mathcal{M}$ of stable sheaves is preserved under generization. \end{Lem} \begin{proof} Assume that $F$ is a flat family over $X$ parametrized by $\mathrm{Spec}\, R$, where $R$ is a valuation ring with fraction field $K$ and residue field $k$. Assume further that $F$ is stable when restricted to the closed fiber $X_k$. We want to prove that its restriction $F_K$ to the open fiber is also stable. To that end, pick a proper quotient sheaf $F_K \to G_K \to 0$. We consider slope stability with respect to a quasi-polarization $H$ which may not be ample. The function $P(n) \deq P_{G_K}^{H_K}(n) \deq \chi(X_K, G_K \otimes H_K^n)$ is not usually called a Hilbert polynomial when the line bundle is only big and nef, so we will call it a quasi-Hilbert polynomial. Note that this function is still a polynomial, because the standard argument still applies. Since the Quot scheme $\mathrm{Quot}_{X/R}^{F,P}$ is proper, we can extend the quotient $F_K \to G_K \to 0$ to a flat quotient $F \to G \to 0$ over $X$, by the existence part of the valuative criterion. Recall that the slope is a rational function of some coefficients of the quasi-Hilbert polynomial, therefore it is constant in flat families, so $\mu_{H_K}(G_K) = \mu_{H_k}(G_k) > \mu_{H_k}(F_k) = \mu_{H_K}(F_K)$, where the middle inequality follows from the stability of $F_k$. And so we can conclude stability of $F_K$. \end{proof} \subsection{Technicalities to prove openness} In this part, we briefly recall a topological result, following the Stacks Project \cite{Stacks}, that connects the properties of being constructible and being open.
\begin{Def}[{\cite[Definition~004X]{Stacks}}] \label{char_constructible} A topological space $X$ is called \emph{sober} if every irreducible closed subset has a unique generic point. \end{Def} \begin{Lem}[{\cite[Lemma~0542]{Stacks}}] \label{Lem:Stacks_openness} Let $X$ be a Noetherian sober topological space. Let $E\subset X$ be a subset of $X$. If $E$ is constructible and stable under generization, then $E$ is open. \end{Lem} \begin{Cor} \label{Lem:openness} Let $X$ be a Noetherian scheme and $E \subset X$ a constructible subset which is preserved by generization. Then $E$ is open in $X$. \end{Cor} \begin{proof} We observe that the underlying topological space of a Noetherian scheme is Noetherian and sober. Then we apply Lemma \ref{Lem:Stacks_openness}. \end{proof} \subsection{Good morphisms} In this subsection we recall the definition of a \emph{good morphism}, as introduced by Alper. Our plan is to first prove that $\mathcal{Q}$ -- the stack of stable flat sheaves of a fixed K-theory class -- admits a good moduli space when pulled back to a scheme; for this, we will heavily cite the work of Alper, Heinloth and Halpern-Leistner on existence of good moduli spaces \cite{AHHL}. Then we show that these glue to a ``relative good moduli space'' $\mathcal{Q} \to \mathcal{M}$. \begin{Def} We now recall Alper's \cite{Alp} definition of good morphisms. \begin{enumerate}[label = (\roman*)] \item Let $\mathcal{X}$ and $\mathcal{Y}$ be two Artin stacks with a quasi-compact morphism $f : \mathcal{X} \to \mathcal{Y}$. We call $f$ a \emph{good morphism} if $\mathrm{R}^0 f_* : \mathrm{QCoh}\, \mathcal{X} \to \mathrm{QCoh}\, \mathcal{Y}$ is exact and the natural map $\mathcal{O}_\mathcal{Y} \to \mathrm{R}^0 f_* \mathcal{O}_\mathcal{X}$ is an isomorphism. \item If in the previous definition $\mathcal{Y} = Y$ is an algebraic space, then $f : \mathcal{X} \to Y$ is called a \emph{good moduli space}. \end{enumerate} \end{Def} \begin{Rem} We adopt a shorter terminology ``good morphism'', while Alper calls that a ``good moduli space morphism''. Our choice is motivated by the belief that the notion of a good morphism is more fundamental than its application to moduli theory. One argument in support of this point of view is that good morphisms satisfy descent (for purely formal reasons), as we show below in Lemma \ref{Lem:good_descent}. We shall use the lemma later. \end{Rem} Before stating the Descent Lemma, let us introduce the relevant category of descent data. \begin{Def} Given a morphism of stacks $\nu : \mathcal{Q} \to \mathcal{N}$, denote the natural projections by $\nu_i : \mathcal{Q} \fprod{\mathcal{N}} \mathcal{Q} \to \mathcal{Q}$ and $\nu_{ij} : \mathcal{Q} \fprod{\mathcal{N}} \mathcal{Q} \fprod{\mathcal{N}} \mathcal{Q} \to \mathcal{Q} \fprod{\mathcal{N}} \mathcal{Q}$. We define the objects of the \emph{category of descent data} $\mathrm{QCoh}\, (\nu : \mathcal{Q} \to \mathcal{N})$ as tuples $(F,\iota)$, with $F \in \mathrm{QCoh}\,(\mathcal{Q})$ and an isomorphism $\iota : \nu_1^* F \to \nu_2^* F$ subject to the \emph{cocycle condition} $\nu_{13}^* \iota = \nu_{23}^* \iota \circ \nu_{12}^* \iota$. Morphisms are those morphisms of quasi-coherent sheaves which commute with $\iota$. \end{Def} \begin{Def} We say that $\mathrm{QCoh}\,$ \emph{satisfies descent along a morphism} $\nu : \mathcal{Q} \to \mathcal{N}$ if the functor given by $\mathrm{L}^0 \nu^* : \mathrm{QCoh}\, \mathcal{N} \to \mathrm{QCoh}\, (\nu : \mathcal{Q} \to \mathcal{N})$ is an equivalence of categories.
\end{Def} \begin{Lem}[Descent Lemma] \label{Lem:good_descent} Quasi-coherent sheaves satisfy descent along good morphisms. \end{Lem} \begin{proof} Take a good morphism $\nu : \mathcal{Q} \to \mathcal{N}$. We will argue that the functor $\mathrm{L}^0 \nu^* : \mathrm{QCoh}\, \mathcal{N} \to \mathrm{QCoh}\, (\mathcal{Q} \to \mathcal{N})$ establishes an equivalence. Note that we have a right adjoint functor $\mathrm{R}^0 \nu_* : \mathrm{QCoh}\, \mathcal{Q} \to \mathrm{QCoh}\, \mathcal{N}$. In the setup of having two adjoint functors, it is enough to prove that $\mathrm{L}^0 \nu^*$ is fully faithful and that the exact functor $\mathrm{R}^0 \nu_*$ ``detects zero objects'' in $\mathrm{QCoh}\, (\mathcal{Q} \to \mathcal{N})$. At a glance, it is not obvious that $\mathrm{R}^0 \nu_*$ should be a quasi-inverse, but being a good moduli space morphism (a term introduced by Alper in his thesis paper \cite{Alp}) is a strong condition, so it will follow from the proof. First we will prove that $\mathrm{L}^0 \nu^*$ is fully faithful. So consider the morphism $$ \mathrm{L}^0 \nu^* : \Hom{\mathcal{N}}{F,G} \to \Hom{\mathcal{Q}}{\mathrm{L}^0 \nu^* F, \mathrm{L}^0 \nu^* G} \mbox{.} $$ Note that by adjunction, the right hand side is isomorphic to: $$ \Hom{\mathcal{Q}}{\mathrm{L}^0 \nu^* F, \mathrm{L}^0 \nu^* G} = \Hom{\mathcal{N}}{F, \mathrm{R}^0 \nu_* (\mathcal{O}_\mathcal{Q} \otimes \mathrm{L}^0 \nu^* G)} \mbox{.} $$ Further, Alper proved a projection formula that is applicable in this setting (see Proposition 4.5 together with Remark 4.4 in his paper \cite{Alp}), so we can in fact simplify the right hand side and get a morphism: $$ \mathrm{L}^0 \nu^* : \Hom{\mathcal{N}}{F,G} \to \Hom{\mathcal{N}}{F, \mathrm{R}^0 \nu_* (\mathcal{O}_\mathcal{Q}) \otimes G} \mbox{.} $$ But since by assumption we have $\mathrm{R}^0 \nu_* (\mathcal{O}_\mathcal{Q}) \cong \mathcal{O}_\mathcal{N}$, we get that $\mathrm{L}^0 \nu^*$ induces an isomorphism on Hom-spaces, as desired. In particular, it follows that $\mathrm{R}^0 \nu_* \mathrm{L}^0 \nu^*$ is isomorphic to the identity functor. Now we prove that $\mathrm{R}^0 \nu_*$ ``detects zero objects''. Let $(G,\iota) \in \mathrm{QCoh}\, (\mathcal{Q} \to \mathcal{N})$ and assume that $\mathrm{R}^0 \nu_* G = 0$. If $q_1$ and $q_2$ are two projections $\mathcal{Q} \fprod{\mathcal{N}} \mathcal{Q} \to \mathcal{Q}$, then the gluing datum $\iota$ is a fixed isomorphism $\iota : \mathrm{L}^0 q_1^* G \to \mathrm{L}^0 q_2^* G$. Now we would like to apply Alper's base change formula for good moduli space morphisms (see Lemma~4.7(iii) together with Remark~4.4 in \cite{Alp}) to $\mathrm{R}^0 q_{1*} \iota$ to get an isomorphism: $$ G \cong \mathrm{R}^0 q_{1*}\mathrm{L}^0 q_1^* G \cong \mathrm{R}^0 q_{1*}\mathrm{L}^0 q_2^* G \cong \mathrm{L}^0 \nu^* \mathrm{R}^0 \nu_* G = 0 \mbox{.}$$ To conclude that $\mathrm{L}^0 \nu^*$ establishes an equivalence, we can formally observe that it is essentially surjective. Indeed, take some object $F \in \mathrm{QCoh}\, (\mathcal{Q} \to \mathcal{N})$ and consider the natural morphism: $$ \mathrm{L}^0 \nu^* \mathrm{R}^0 \nu_* F \to F \mbox{.} $$ Now we can complete it to an exact sequence $0 \to K \to \mathrm{L}^0 \nu^* \mathrm{R}^0 \nu_* F \to F \to C \to 0$ by taking the kernel and cokernel, and apply the exact functor $\mathrm{R}^0 \nu_*$ to the resulting sequence. From the above results, the middle morphism becomes an isomorphism, and since $\mathrm{R}^0 \nu_*$ detects zero objects, we conclude that both kernel and cokernel vanish. Therefore we conclude that $\mathrm{L}^0 \nu^* \mathrm{R}^0 \nu_* F \cong F$, and thus $F$ lies in the essential image of $\mathrm{L}^0 \nu^*$.
\end{proof} Returning to our situation, assume that we consider a pullback of $\mathcal{Q} \to \mathcal{K}$ to any Noetherian affine scheme $K \to \mathcal{K}$, so we get an Artin stack over a Noetherian base $\mathcal{Q}_K \to K$ that parametrizes flat sheaves with a fixed K-theory class over the family of quasi-polarized surfaces $X \deq \mathcal{X} \fprod{\mathcal{K}} K \to K$. \begin{Thrm}[{\cite[Theorem~6.6]{Alp}}] \label{Thrm:univgms} Suppose $\mathcal{X}$ is a locally Noetherian Artin stack and $f: \mathcal{X} \to Y $ a good moduli space. Then $f$ is universal for maps to algebraic spaces, i.e. for any algebraic space $Z$, the following natural map of sets is a bijection: $$ f^* : \Hom{}{Y,Z} \to \Hom{}{\mathcal{X},Z} .$$ \end{Thrm} \begin{Lem} \label{Lem:good_moduli} Take a Noetherian scheme $K$ with a morphism $K \to \mathcal{K}$. Then the morphism $\mathcal{Q}_K \to K$ admits a good moduli space. \end{Lem} \begin{proof} We want to apply the criterion for existence of good moduli spaces (Theorem A in \cite{AHHL}), and so we check that the conditions in the criterion are verified. By \cite[Example 7.1]{AHHL}, this stack $\mathcal{Q}_K$ coincides with the moduli functor given by \cite[Definition~7.8]{AHHL}. Therefore by \cite[Lemma~7.16]{AHHL}, this stack is $\Theta$-reductive (\emph{cf.} Definition~3.10 in \cite{AHHL}). The stabilizer groups of the stack $\mathcal{Q}$ are all $\mathbb{G}_m$ by stability of sheaves, hence connected and reductive. So $\mathcal{Q}$ is locally linearly reductive (\emph{cf.} Definition~2.1 in \cite{AHHL}), and by \cite[Proposition~3.56]{AHHL} it has unpunctured inertia (\emph{cf.} Definition~3.53 in \cite{AHHL}). By \cite[Lemmas~0DPW and~0DPX]{Stacks}, the stack $\mathcal{Q}_K$ is of finite presentation and with affine diagonal. So we can apply Theorem A of \cite{AHHL} to conclude that $\mathcal{Q}_K$ admits a good moduli space $\nu_K : \mathcal{Q}_K \to M_K$, and the morphism $\nu_K$ is universal for maps to an algebraic space by Theorem \ref{Thrm:univgms} (\cite[Theorem~6.6]{Alp}). \end{proof} We now want to show that the good moduli spaces $M_K \to K$ ``glue'' to a relative good moduli space $\mathcal{M} \to \mathcal{K}$, that is there exists a good moduli space morphism $\nu : \mathcal{Q} \to \mathcal{M}$ such that $\mathcal{M} \to \mathcal{K}$ is a relative algebraic space. \begin{Thrm} \label{Thrm:good_mm} There exists a relative good moduli space $\mathcal{M} \to \mathcal{K}$ such that $\mathcal{M}$ is an algebraic stack; for each scheme $K \to \mathcal{K}$, the pullback $\mathcal{M}_K$ is isomorphic to $M_K$; and there exists a morphism $\nu : \mathcal{Q} \to \mathcal{M}$ which is good. \end{Thrm} \begin{proof} Since the moduli stack of quasi-polarized K3 surfaces $\mathcal{K}$ is an Artin stack, we can choose a smooth surjection $K \to \mathcal{K}$ from a scheme $K$. This morphism is representable by algebraic spaces, so the fibered product $K'\deq K \times_{\mathcal{K}} K$ is an algebraic space; and the projection morphisms $k_1, k_2: K' \rightrightarrows K$ are still smooth, being pullbacks of a smooth morphism. The spaces $K$ and $K'$ naturally assemble into a smooth groupoid of algebraic spaces \cite[Lemma~04T4]{Stacks}, and the quotient groupoid is isomorphic to the original stack $\mathcal{K} \cong [K/K']$ \cite[Lemma~04T5]{Stacks}, so we have obtained a groupoid presentation of $\mathcal{K}$. By Lemma \ref{Lem:good_moduli}, there exists a good moduli space $\mathcal{Q}_K \to M_K$. 
Since good moduli spaces are universal for morphisms to algebraic spaces (Theorem \ref{Thrm:univgms}, \cite[Theorem~6.6]{Alp}), we also obtain the unique canonical morphism $u: M_K \to K$. We now want to produce an algebraic space $P$ so that $P\rightrightarrows M_K$ becomes a smooth groupoid which would then yield a quotient stack. To that end, consider the pullback $$ P \deq K' \fprod{k_i,K} M_K = (K \fprod{\mathcal{K}} K) \fprod{k_i,K} M_K = K \fprod{\mathcal{K}} M_K .$$ The object $P$ does not depend (up to isomorphism) on the projection $k_i$ we choose, but the two projections induce two smooth morphisms $p_1, p_2: P\rightrightarrows M_K$, where $p_i = k_i \times 1_{M_K}$. Further, the rest of the structure maps for $P\rightrightarrows M_K$ --- composition, identity, inverse as in \cite[\S 0230]{Stacks} --- are obtained from the groupoid $K' \rightrightarrows K$ by pullback and yield the structure of a groupoid in algebraic spaces for $P \rightrightarrows M_K$ \cite[044B]{Stacks}. Now, it is known that the quotient stack of a smooth groupoid is algebraic \cite[Theorem~04TK]{Stacks}, so we put $\mathcal{M} \deq [M_K/P]$ to get the relative good moduli space. Since we had a morphism of groupoids $$\Big[P \rightrightarrows M_K\Big] \to \Big[ K' \rightrightarrows K \Big],$$ we also obtain a morphism of the quotient stacks $\mathcal{M} \to \mathcal{K}$ \cite[Lemma~046Q]{Stacks}. To argue that we have a canonical morphism $\nu: \mathcal{Q} \to \mathcal{M}$, we will construct a morphism from a groupoid associated to $\mathcal{Q}$ to the groupoid $P \rightrightarrows M_K$. Pick a smooth cover by a scheme $Q \to \mathcal{Q}_K$ -- it induces a smooth cover $Q \to \mathcal{Q}$. Denote by $v :Q \to M_K$ the composition of the cover with $\mathcal{Q}_K \to M_K$. Put $Q' = Q \times_\mathcal{Q} Q$, then we get a groupoid presentation $q_1,q_2 : Q' \rightrightarrows Q$ of $\mathcal{Q}$. Let us summarize the notation in the diagram: \begin{diagram} Q' & \pile{\rTo^{q_1,q_2} \\ \rTo} & Q & \rTo & \mathcal{Q}_K & \rTo^q & \mathcal{Q} \\ \dDashto && \dTo_v & \ldTo^g &&& \dDashto_\nu \\ P & \pile{\rTo^{p_1,p_2} \\ \rTo} & M_K && \rTo^p && \mathcal{M} \\ \dTo && \dTo_u && && \dTo \\ K' & \pile{\rTo^{k_1,k_2} \\ \rTo} & K && \rTo && \mathcal{K} \\ \end{diagram} Since $K' = K\times_\mathcal{K} K$, the two morphisms $uvq_i : Q' \rightrightarrows K$ define a canonical morphism $w: Q' \to K'$. Then the pair of morphisms $(w,vq_i)$, for either $i=1,2$, defines a canonical morphism to the fibered product $Q' \to K' \times_K M_K = P$, and we then have a morphism of groupoids $$\Big[ Q' \rightrightarrows Q \Big] \to \Big[ P \rightrightarrows M_K \Big]$$ which induces a morphism of the quotient stacks $\nu : \mathcal{Q} \to \mathcal{M}$. We can now check that $\nu$ is good. First, let us study $\mathrm{R}^0 \nu_* \mathcal{O}_\mathcal{Q}$. By descent, it is isomorphic to $\mathcal{O}_\mathcal{M}$ if and only if its pullback $\mathrm{L}^0 p^* \mathrm{R}^0 \nu_* \mathcal{O}_\mathcal{Q}$ is isomorphic to $\mathcal{O}_{M_K}$. But $p$ is smooth, hence flat, so by base change \cite[Corollary~1.4.(2)]{Hall:basechange}, and using that $g$ is a good moduli space, we have: $$ \mathrm{L}^0 p^* \mathrm{R}^0 \nu_* \mathcal{O}_\mathcal{Q} = \mathrm{R}^0 g_* \mathrm{L}^0 q^* \mathcal{O}_\mathcal{Q} \cong \mathcal{O}_{M_K} .$$ Using base change again, we can check that $\mathrm{R}^0 \nu_*$ is exact, so $\nu$ is good.
\end{proof} \begin{Rem} \label{Rem:pointwise} The property of being a good moduli space is preserved under arbitrary base change \cite{Alp}, therefore, for a closed point $[X] \in \mathcal{K}$, the spaces $\mathcal{M}_{[X]}$ and $M_{[X]}$ are isomorphic, so $\mathcal{M}_{[X]}$ is a good moduli space of the stack of stable sheaves over the surface $X$. \end{Rem} \subsection{The good morphism is fiberwise a scheme} We will briefly summarize several results about change of polarization from the book by Huybrechts and Lehn \cite[\S4.3]{HuL}. Then we will apply these results to our situation to show that for a closed point $[X] \in \mathcal{K}$, the fiber $\mathcal{M}_{[X]}$ is a scheme. \begin{Fact}[\emph{cf.} {\cite[Lemma~4.C.2 and Theorem~4.C.3]{HuL}}] \label{hyparr} Let $X$ be a smooth projective surface over an algebraically closed field of characteristic zero. For a fixed Chern character $v$ on $X$, there is a locally finite hyperplane arrangement (the hyperplanes are called \emph{walls}) in the numerical group $\mathrm{Num}_{\mathbb{R}}\, X$ satisfying the following property: if a big and nef divisor $H \in \mathrm{Num}_{\mathbb{R}}\, X$ is not on a wall and $\mathrm{rk}\, v$ is coprime with $\mathrm{c}_1 (v)\cdot H$, then a torsion-free sheaf of Chern character $v$ is $H$-stable iff it is $H$-semistable. \end{Fact} \begin{Lem} \label{schemeness} Fix a Chern character $v$ over $\mathcal{X}$. Assume that semistable sheaves of class $v$ are stable. Then for any quasi-polarized surface $[X,H] \in \mathcal{K}$, the restriction $\mathcal{M}_{[X]}$ is a scheme. \end{Lem} \begin{proof} This is well-known in the case when the quasi-polarization is ample and follows from Remark \ref{Rem:pointwise} and the assumption that semistability is equivalent to stability. So we will reduce the general case $[X,H]$ with $H$ big and nef to the ample case by considering a small ample shift. For a big and nef $H$ (which may lie on a wall -- it wouldn't pose problems), we can find an ample divisor $H_1 \in \mathrm{Num}_{\mathbb{R}}\, X$ such that the semiopen line segment $(H,H_1]$ does not intersect any walls -- this follows from the fact that the hyperplane arrangement is locally finite (Fact \ref{hyparr}). From the assumption that $\mathrm{c}_1(v)$ is indivisible and the same Fact \ref{hyparr} it also follows that stability with respect to any $H_\epsilon \in (H,H_1]$ is equivalent to semistability, and in addition, we assumed equivalence of $H$-stability and $H$-semistability. We now want to prove that in this setup, a sheaf $F$ is $H$-stable iff it is $H_1$-stable. Assume that it is $H$-stable, but not $H_1$-stable. Fix an $H_1$-destabilizing subsheaf $F_1 \subset F$ and let us define $\delta \deq \frac{\mathrm{c}_1(F_1)}{\mathrm{rk}\, F_1} - \frac{\mathrm{c}_1(F)}{\mathrm{rk}\, F}$. Note that pairing with $\delta$ is a linear function on $\mathrm{Num}_{\mathbb{R}}\, X$ and $H\cdot\delta < 0$ from $H$-stability of $F$, while $H_1\cdot\delta > 0$ from $H_1$-instability. Hence there exists some $H_\epsilon \in (H,H_1)$ such that $H_\epsilon\cdot\delta = 0$, proving that $F$ is strictly semistable with respect to $H_\epsilon$ with destabilizing subsheaf $F_1$. But this contradicts our setup where stability is equivalent to semistability. The proof that $H_1$-stability implies $H$-stability is analogous. \end{proof} \begin{Rem} It is interesting to note that under the assumptions of the above lemma, the resulting moduli space with respect to quasi-polarization does not depend on the small ample shift, even if the two ample shifts are separated by a wall.
The latter may happen when $H$ lies on a wall. \end{Rem} \subsection{Proof of the main theorem} Now we can combine the above results and prove the following theorem. \begin{Thrm} \label{Thrm:construct_Mc} Let $\mathcal{K}$ be a stack of quasi-polarized surfaces that admits the universal family $\mathcal{X}$ with the universal quasi-polarization $\mathcal{H}$. Fix a Chern character $v$ over $\mathcal{X}$. Assume that, pointwise over $\mathcal{K}$, stability is equivalent to semistability for sheaves in class $v$. Then there exists a stack $\mathcal{M} \to \mathcal{K}$ which is fiberwise $($i.e. over each closed point of $\mathcal{K})$ the moduli scheme of stable sheaves of class $v$ with respect to the restriction of the universal quasi-polarization. \end{Thrm} \begin{proof} We have proved in Lemmas \ref{constructibility} and \ref{generization} that $\mathcal{Q}$ is constructible in $\widetilde \mathcal{M}$ and preserved by generization. Therefore, by Lemma \ref{Lem:openness}, the subfunctor $\mathcal{Q} \subset \widetilde \mathcal{M}$ is open, and since $\widetilde \mathcal{M}$ is an Artin stack, then $\mathcal{Q}$ is also an Artin stack. By Theorem \ref{Thrm:good_mm}, there exists a good moduli space morphism $\nu : \mathcal{Q} \to \mathcal{M}$ such that fiberwise we get good moduli spaces. By Lemma \ref{schemeness}, the family $\mathcal{M} \to \mathcal{K}$ is fiberwise a scheme. \end{proof} \section{The Strange Duality morphism} \label{Sec:SD_mm} \subsection{Defining theta line bundles} Let $\mathcal{K}$ be the moduli stack of quasi-polarized K3 surfaces. Let $\mathcal{X} \to \mathcal{K}$ be the universal quasi-polarized K3 surface, so it will also be the moduli stack of pointed quasi-polarized K3 surfaces. Then $\mathcal{Y} \deq \mathcal{X} \times_\mathcal{K} \mathcal{X}$ is the universal pointed quasi-polarized K3 surface. For a fixed Chern character $v$, let $\mathcal{M}_v \to \mathcal{K}$ be the stack of stable sheaves with Chern character $v$ which is pointwise a scheme, as constructed in Theorem \ref{Thrm:good_mm} and Theorem \ref{Thrm:construct_Mc}. So $\mathcal{M}_v$ is a stack, but over each point of $\mathcal{K}$, the fiber is a scheme -- the moduli scheme of stable sheaves with respect to the quasi-polarization at this point of the moduli space. Consider $\mathcal{N}_v \deq \mathcal{M}_v \times_\mathcal{K} \mathcal{X}$ -- it will be the relative moduli space over the stack of pointed K3 surfaces. Unfortunately, there is no universal family over $\mathcal{N}_v$, so we need to work with the stack $\mathcal{Q} \to \mathcal{N}_v$, which is the moduli stack of stable sheaves before we ``forget'' the $\mathbb{G}_m$-automorphisms of the sheaves. We can construct it analogously to Theorem \ref{Thrm:good_mm} or pull back the $\mathcal{Q}$ from $\mathcal{Q} \to \mathcal{K}$ along $\mathcal{X} \to \mathcal{K}$. Then we have the universal family $E \in \mathrm{Coh} \left( \mathcal{Y} \fprod{\mathcal{X}} \mathcal{Q} \right)$ of stable sheaves. Consider the following Cartesian square. We will use it to define a line bundle on $\mathcal{Q}$ and, with Lemma~\ref{Lem:good_descent}, argue that it descends to $\mathcal{N}_v$, so that we can later use this universal theta line bundle to construct the Strange Duality morphism in families.
\begin{diagram} \mathcal{Y} && \lTo^q && \mathcal{Y} \fprod{\mathcal{X}} \mathcal{Q} \\ \dTo &&&& \dTo_p \\ \mathcal{X} & \lTo & \mathbb{N}c_v & \lTo & \mathcal{Q} \\ \end{diagram} Taking an algebraic K-theory class $w$ on $\mathcal{Y}$, we can use Fourier-Mukai transform and define uniquely up to an isomorphism a line bundle $$L \deq \det p_* (E \otimes q^* w)$$ on $\mathcal{Q}$. Further, assuming that $w$ is orthogonal to $v$, we can argue that this line bundle $L$ descends along $\mathcal{Q} \to \mathbb{N}c_v$, as described in Lemma \ref{Lem:theta_on_ptd}. We will need the following preliminary result. \begin{Lem} \label{Lem:twisted_descent_datum} Let $B$ be a locally Noetherian scheme and $\pi: E \to B$ be a $\mathbb{G}_{m}$-bundle over $B$, i.e. there is a line bundle $\mathcal{L}$ on $B$ such that $E = \mathcal{S}\mathrm{pec}_B \big( \bigoplus_{n \in \mathbb{Z}} \mathcal{L}^{\otimes n} \big)$. Let $F$ and $G$ be two indecomposable complexes of coherent sheaves on $B$ and assume that $\pi^* F \cong \pi^* G$. Then there exists $k\in \mathbb{Z}$ such that $F \cong G \otimes \mathcal{L}^k$. \end{Lem} \begin{proof} Since coherent sheaves on a relative spectrum of a sheaf of algebras $\mathcal{A} = \bigoplus_{n \in \mathbb{Z}} \mathcal{L}^{\otimes n}$ correspond to quasi-coherent sheaves on the base $B$ that are finitely generated $\mathcal{A}$-modules, we can view the isomorphism $\pi^* F \cong \pi^* G$ as an isomorphism of complexes of quasi-coherent sheaves on $B$: $$ \bigoplus_{n \in \mathbb{Z}} \mathcal{L}^{\otimes n} \otimes F \cong \bigoplus_{n \in \mathbb{Z}} \mathcal{L}^{\otimes n} \otimes G .$$ Consider the direct summand $F = \mathcal{L}^{\otimes 0} \otimes F$ of the left hand side of the isomorphism. $$ F \subset \bigoplus_{n \in \mathbb{Z}} \mathcal{L}^{\otimes n} \otimes F \xrightarrow{\cong} \bigoplus_{n \in \mathbb{Z}} \mathcal{L}^{\otimes n} \otimes G .$$ Viewing $F$ as a subobject of the right hand side, we get a decomposition of $F$ into direct summands $F \cap \mathcal{L}^{\otimes n} \otimes G$; by assumption, a nontrivial decomposition cannot happen, so there is only one index $k$ for which $F \cap \mathcal{L}^{\otimes k} \otimes G \neq 0$, and therefore the morphism from $F$ factors through $\mathcal{L}^{\otimes k} \otimes G$. Using a similar argument for $\mathcal{L}^{\otimes k} \otimes G$, we can deduce that in fact $F$ is identified with $\mathcal{L}^{\otimes k} \otimes G$ by the isomorphism of pullbacks. \end{proof} \begin{Lem} \label{Lem:theta_on_ptd} Take two orthogonal algebraic K-theory classes $v$ and $w$ on the universal pointed K3 surface $\mathcal{Y}$. As before, $E \in \mathbb{C}oh \left( \mathcal{Y} \fprod{\mathcal{X}} \mathcal{Q} \right)$ is the universal family. Define $L \deq \det p_* (E \otimes q^* w)$ on $\mathcal{Q}$. Then the line bundle $L$ descends to $\mathbb{N}c_v$. \end{Lem} \begin{proof} We will proceed as follows: first, we prove that the rank of $p_* (E \otimes q^* w)$ is zero using orthogonality of $v$ and $w$, then we recall that there exists a ``descent datum'' for $E$ which does not satisfy the cocycle condition, and we use it to construct descent datum for $L$, and finally we argue that the descent datum for $L$ satisfies the cocycle condition with the use of the first observation about rank. \textit{Step 1: rank equals zero.} Now we want to use orthogonality of $v$ and $w$ to prove that $\mathrm{rk}\, p_* (E \otimes q^* w) = 0$. 
For that, let us consider the restriction of this sheaf to a point $\iota: \{ * \} \to \mathcal{Q}$, so that \[\mathrm{rk}\, p_* (E \otimes q^* w) = \mathrm{rk}\, \iota^* p_* (E \otimes q^* w) = \chi \left( \iota^* p_* (E \otimes q^* w) \right).\] Let $X$ denote the K3 surface that corresponds to the chosen point $\iota$ in $\mathcal{Q}$, then we have the following pullback diagram: \begin{diagram} \mathcal{Y} & \lTo^q & \mathcal{Y} \fprod{\mathcal{X}} \mathcal{Q} & \lTo^\kappa & X \\ \dTo && \dTo^p && \dTo_{\gamma} \\ \mathcal{X} & \lTo & \mathcal{Q} & \lTo^\iota & \{ * \} \\ \end{diagram} Now we can compute the rank. Note that we use the base change formula in the second equality and the orthogonality of $v$ and $w$ in the last one: \[ \mathrm{rk}\, p_* (E \otimes q^* w) = \chi \left( \iota^* p_* (E \otimes q^* w) \right) = \chi \left( \gamma_* \kappa^* (E \otimes q^* w) \right) = \chi \left( \kappa^* E \otimes (q\kappa)^* w \right) = 0. \] \textit{Step 2: ``descent data'' for $p_* (E \otimes q^* w)$ and $L$.} Let $a: A \to \mathcal{Q}$ be a smooth atlas. Then its composition $\nu a$ with $\nu : \mathcal{Q} \to \mathbb{N}c_v$ is a smooth atlas for $\mathbb{N}c_v$, since formal smoothness can be verified by a lifting property and finite presentation is automatic. Introduce projection morphisms $q_1$, $q_2$, $r_1$, $r_2$, summarized in the diagram below, where $B = A \fprod{\mathcal{Q}} A$ and $C = A \fprod{\mathbb{N}c_v} A$: \begin{diagram} \mathcal{Q} & \lTo^{a} & A & \lTo^{q_i} & B \\ \dTo_\nu && \dTo_{=} && \dTo_{\pi} \\ \mathbb{N}c_v & \lTo^{\nu a} & A & \lTo^{r_i} & C & \lTo^{r_{ij}} & C\fprod{A}C \\ \end{diagram} We let $r_{12}$, $r_{23}$, $r_{13}$ be projection and composition morphisms from $C\fprod{A}C$ to $C$ that determine the structure of a groupoid. Since fibers of $\nu$ are $B\mathbb{G}_{m}$, we get, by the magic square diagram, that $\pi$ is a $\mathbb{G}_{m}$-fibration given by some line bundle $T$. We know that the complex $p_* (E \otimes q^* w)$ on $\mathcal{Q}$ corresponds to a complex $F$ on $A$ that has a gluing isomorphism $q_1^* F \to q_2^* F$ on $B$. Since $q_i = r_i \pi$ and by Lemma \ref{Lem:twisted_descent_datum}, if $F$ were indecomposable, we would get an isomorphism $\psi: r_1^* F \to r_2^* F \otimes T^{\otimes k}$ for some integer $k$. The complex $F$ is not necessarily indecomposable, so we wish to apply Lemma \ref{Lem:twisted_descent_datum} to each summand. However, since $p_* (E \otimes q^* w)$ is a complex of sheaves on a stack with $B \mathbb{G}_{m}$ stabilizers, we can calculate the weight of the $\mathbb{G}_{m}$-action on the fibers which would determine the corresponding twist, and we will conclude that the twist is the same for each summand of $F$. 
Similar to Step 1, let $\iota : B\mathbb{G}_{m} \to \mathcal{Q}$ be an embedding of a point with its stabilizer, then we have the following commutative diagram: \begin{diagram} \mathcal{Y} & \lTo^q & \mathcal{Y} \fprod{\mathcal{X}} \mathcal{Q} & \lTo^\kappa & X \times B\mathbb{G}_{m} \\ \dTo && \dTo^p && \dTo_{\gamma} \\ \mathcal{X} & \lTo & \mathcal{Q} & \lTo^\iota & B\mathbb{G}_{m} \\ \end{diagram} Then consider the restriction along $\iota$, where $E_X$ and $w_X$ denote the restrictions of $E$ and $w$ to $X$: $$ \iota^* p_* (E \otimes q^* w) = \gamma_* \left( E_X \otimes w_X \right) .$$ We can see that $\mathbb{G}_{m}$ acts on $w_X$ trivially and on $E_X$ by tautological scaling, so the resulting action on the cohomology of $E_X \otimes w_X$ is also tautological scaling with weight one. Therefore, we have an isomorphism $$\psi: r_1^* F \to r_2^* F \otimes T .$$ Since $q_1^* F \to q_2^* F$ satisfies the cocycle condition, we get that the following composition, denoted by $1\otimes f$, is an isomorphism: $$ 1\otimes f = \left( r_{13}^* \psi \right)^{-1} \circ r_{23}^* \psi \circ r_{12}^* \psi : r_{13}^*r_1^* F \otimes T \to r_{13}^*r_1^* F \otimes T^{\otimes 2} \mbox{.} $$ \textit{Step 3: cocycle condition for $\varphi \deq \det \psi$.} Let us first write $\varphi$, remembering from Step 1 that rank is zero: $$ \varphi : \det r_1^* F \to \det r_2^* F \otimes T^{\mathrm{rk}\, F} = \det r_2^* F .$$ So we have a ``descent datum'' for $\det F$, and now we verify that the cocycle condition holds: \begin{equation*} \begin{split} \left( r_{13}^* \varphi \right)^{-1} \circ r_{23}^* \varphi \circ r_{12}^* \varphi = \det (1 \otimes f) = 1\otimes f^{\mathrm{rk}\, F} = 1 \mbox{.} \end{split} \end{equation*} So $L$ satisfies the cocycle condition and hence descends to $\mathbb{N}c_v$. \end{proof} Recall that $\mathcal{M}_v \to \mathcal{K}$ is the relative moduli scheme of stable sheaves over the stack of quasi-polarized K3 surfaces, while $\mathbb{N}c_v = \mathcal{M}_v \fprod{\mathcal{K}} \mathcal{X} \to \mathcal{X}$ is the same over pointed quasi-polarized surfaces, so every fiber of $\mathbb{N}c_v \to \mathcal{M}_v$ is naturally the underlying surface. Let $L_w$ now denote the line bundle on $\mathbb{N}c_v$ constructed in Lemma \ref{Lem:theta_on_ptd}. We now want to argue that $L_w$, possibly up to a twist by the quasi-polarization, is isomorphic to the pullback along $\mathbb{N}c_v \to \mathcal{M}_v$ of some line bundle on $\mathcal{M}_v$. \begin{Lem}[Marian-Oprea \cite{MO}] Pick two orthogonal K-theory vectors: $$v = r_v \mathcal{O} + d_v \mathcal{H} + a_v \mathcal{O}_\sigma ,$$ $$w = r_w \mathcal{O} + d_w \mathcal{H} + a_w \mathcal{O}_\sigma $$ in the algebraic K-theory $\mathrm{K}_0\, \mathcal{Y}$, where we recall that $\mathcal{Y}$ is the universal pointed K3 surface with the universal quasi-polarization $\mathcal{H}$, and we use $\sigma$ to denote the class of the natural section $\mathcal{X} \to \mathcal{Y}$. Let $L$ be the line bundle that we descended from $\det$ $q_i^* p_* (E \otimes q^* w)$ on $\mathcal{Q}$ to $\mathbb{N}c_v$. Then the restriction of $L$ to a fiber $X$ of $\mathbb{N}c_v \to \mathcal{M}_v$ is isomorphic to a power of the quasi-polarization $H^n = \mathcal{H}^n_{|X}$, and $n$ is independent of the choice of a fiber. \end{Lem} \begin{proof} See the discussion above Equation (4.1) on Page 2080 of the paper ``On Verlinde sheaves and strange duality'' by Marian and Oprea \cite{MO}. 
\end{proof} This lemma shows that $L_w$ and a tensor power of the polarization $\mathcal{H}^n$ are fiberwise isomorphic, and therefore the twist $L_w \otimes \mathcal{H}^{-n}$ of the determinant line bundle $L_w$ on $\mathbb{N}c_v$ comes as a pullback from $\mathcal{M}_v$. Let us denote such a line bundle on $\mathcal{M}_v$ by $\Theta_w$. \begin{Def} Pick two orthogonal algebraic K-theory classes $v$ and $w$ over $\mathcal{Y} = \mathcal{X} \fprod{\mathcal{K}} \mathcal{X}$. There exists a line bundle $\Theta_w$ on $\mathcal{M}_v$ whose pullback to $\mathbb{N}c_v = \mathcal{M}_v \fprod{\mathcal{K}} \mathcal{X}$ is isomorphic, up to a twist by the universal quasi-polarization, to the determinant line bundle $L_w$. This line bundle $\Theta_w$ is called a \emph{theta line bundle}. \end{Def} \subsection{Constructing the Strange Duality morphism} Recall that our aim is to extend the definition of the Strange Duality morphism to the relative case. Pointwise, the morphism is expected to establish a duality between two vector spaces of global sections. The relative version of cohomology is the derived pushforward functor, therefore we will work with the pushforwards of the theta line bundles. \subsubsection*{Assumptions} Recall that $\mathcal{K}$ stands for the moduli stack of quasi-polarized K3 surfaces and $\mathcal{X} \to \mathcal{K}$ denotes the universal K3 surface with quasi-polarization $\mathcal{H}$. Let $v$ and $w$ in $\mathrm{K}_0\, \mathcal{X}$ be two pointwise orthogonal numerical characteristics, that is, $\chi(v\otimes w)=0$ on each K3 surface in the family. Assume that pointwise on $\mathcal{K}$, semistable sheaves of classes $v$ and $w$ are stable. By the results of the previous section, this ensures that we have relative moduli spaces $\pi_v: \mathcal{M}_v \to \mathcal{K}$ and $\pi_w: \mathcal{M}_w \to \mathcal{K}$ with the theta line bundles $\Theta_w$ on $\mathcal{M}_v$ and $\Theta_v$ on $\mathcal{M}_w$. Let $\pi: \mathcal{M}_v \fprod{\mathcal{K}} \mathcal{M}_w \to \mathcal{K}$ denote the natural projection. \begin{Def} The pushforwards $W \deq \pi_{v*} \Theta_w$ and $V \deq \pi_{w*} \Theta_v$ are known as the \emph{Verlinde complexes}. \end{Def} \begin{Def} Define the \emph{Brill-Noether locus} $\Theta$ of jumping zeroth cohomology on $\mathcal{M}_v \fprod{\mathcal{K}} \mathcal{M}_w$ as follows: $$ \Theta \deq \left\{ (X,E,F) \mid \mathrm{H}^0 (X,E \otimes F) \neq 0 \right\} \mbox{.} $$ \end{Def} One naturally expects $\Theta$ to be a divisor or coincide with the whole locus $\mathcal{M}_v(X) \times \mathcal{M}_w(X)$ over each point $[X] \in \mathcal{K}$. The locus in $\mathcal{K}$ where $\Theta$ is not a divisor is of codimension at least two if the complement is not empty. \begin{Lem}[\emph{cf.} \protect{\cite[Remark 4.2]{MO}}] There exists a line bundle $\mathcal{T}$ on $\mathcal{K}$ so that we have an isomorphism on $\mathcal{M}_v \fprod{\mathcal{K}} \mathcal{M}_w$: $$ \pi^* \mathcal{T} \otimes \mathcal{O}(\Theta) \cong \Theta_w \boxtimes \Theta_v \mbox{.} $$ \end{Lem} We can push forward the isomorphism to $\mathcal{K}$. After applying the projection formula twice as well as the flat base change isomorphism, we get the following: $$ \mathcal{T} \otimes \pi_* \mathcal{O}(\Theta) \cong \pi_* \left( \Theta_w \boxtimes \Theta_v \right) \cong W \otimes V \mbox{.} $$ The section of $\mathcal{O}(\Theta)$ corresponds to a section of $\pi_* \mathcal{O}(\Theta)$, so by local triviality of $\mathcal{T}$, it corresponds locally to a morphism $W^\vee \to V$. 
We will denote this morphism by $\mbox{\textbf{D}}$ and call it the Strange Duality morphism, remembering that it is only defined up to the twist $\mathcal{T}$: \begin{equation} \label{Eq:SD_mm} \mbox{\textbf{D}} : W^\vee \to V \mbox{.} \end{equation} \end{document}
\begin{document} \title{Improved Bounds for $r$-Identifying Codes of the Hex Grid} \author{Brendon Stanton\thanks{Iowa State University, Department of Mathematics, 396 Carver Hall, Ames IA, 50010}} \maketitle \begin{abstract}For any positive integer $r$, an $r$-identifying code on a graph $G$ is a set $C\subset V(G)$ such that for every vertex $v\in V(G)$, the intersection of the radius-$r$ closed neighborhood of $v$ with $C$ is nonempty, and these intersections are pairwise distinct. For a finite graph, the density of a code is $|C|/|V(G)|$, which naturally extends to a definition of density in certain locally finite infinite graphs. We find a code of density less than $5/(6r)$, which is sparser than the prior best construction, whose density is approximately $8/(9r)$. \end{abstract} \section{Introduction} Given a connected, undirected graph $G=(V,E)$, define $B_r(v)$, called the ball of radius $r$ centered at $v$, to be $$B_r(v)=\{u\in V(G): d(u,v)\le r\} $$ where $d(u,v)$ is the distance between $u$ and $v$ in $G$. Let $C\subset V(G)$. We say that $C$ is an $r$-identifying code if $C$ has the properties: \begin{enumerate} \item $B_r(v) \cap C \neq \emptyset, \text{ for all } v\in V(G) \text{ and}$ \item $B_r(u) \cap C \neq B_r(v)\cap C, \text{ for all distinct } u,v\in V(G).$ \end{enumerate} It is simply called a code when $r$ is understood. The elements of $C$ are called codewords. We define $I_r(v)=I_r(v,C)=B_r(v)\cap C$. We call $I_r(v)$ the identifying set of $v$ with respect to $C$. If $I_r(u)\neq I_r(v)$ for vertices $u\neq v$, then we say that $u$ and $v$ are distinguishable. Otherwise, we say they are indistinguishable. Vertex identifying codes were introduced in ~\cite{Karpovsky1998} as a way to help with fault diagnosis in multiprocessor computer systems. Codes have been studied in many graphs. Of particular interest are codes in the infinite triangular, square, and hexagonal lattices as well as the square lattice with diagonals (king grid). We can define each of these graphs so that they have vertex set ${\mathbb Z}\times{\mathbb Z}$. Let $Q_m$ denote the set of vertices $(x,y)\in {\mathbb Z}\times{\mathbb Z}$ with $|x|\le m$ and $|y|\le m$. The density of a code $C$, as defined in ~\cite{Charon2002}, is $$D(C)=\limsup_{m\rightarrow\infty}\frac{|C\cap Q_m|}{|Q_m|}.$$ When examining a particular graph, we are interested in finding the minimum density of an $r$-identifying code. The exact minimum density of an $r$-identifying code for the king grid has been found in ~\cite{Charon2004}. General constructions of $r$-identifying codes for the square and triangular lattices have been given in ~\cite{Honkala2002} and ~\cite{Charon2001}. For this paper, we focus on the hexagonal grid. It was shown in ~\cite{Charon2001} that $$\frac{2}{5r}-o(1/r)\le D(G_H,r) \le \frac{8}{9r}+o(1/r),$$ where $D(G_H,r)$ represents the minimum density of an $r$-identifying code in the hexagonal grid. The main theorem of this paper is Theorem ~\ref{maintheorem}: \begin{theorem}\label{maintheorem} There exists an $r$-identifying code of density $$\frac{5r+3}{6r(r+1)}, \text{ if $r$ is even};\qquad \frac{5r^2+10r-3}{(6r-2)(r+1)^2}, \text{ if $r$ is odd}.$$ \end{theorem} The proof of Theorem ~\ref{maintheorem} can be found in Section ~\ref{mainthmsection}. Section ~\ref{descriptionsection} provides a brief description of a code with the aforementioned density and gives a few basic definitions needed to describe the code. 
Section ~\ref{distanceclaimssection} provides a few technical claims needed for the proof of Theorem ~\ref{maintheorem}, and the proofs of these claims can be found in Section ~\ref{lemmaproofs}. \section{Construction and Definitions}\label{descriptionsection} \begin{figure}\caption{Partial picture of the code for $r=6$.}\label{fig:code6} \end{figure} \begin{figure}\caption{Partial picture of the code for $r=7$.}\label{fig:code7} \end{figure} For this construction we will use the brick wall representation of the hex grid. To describe this representation, we need to briefly consider the square grid $G_S$. The square grid has vertex set $V(G_S)={\mathbb Z}\times{\mathbb Z}$ and $$E(G_S)=\{\{u=(i,j),v\}: u-v\in \{(0, \pm 1),(\pm 1,0)\}\}.$$ Let $G_H$ represent the hex grid. Then $V(G_H)={\mathbb Z}\times{\mathbb Z}$ and $$E(G_H)=\{\{u=(i,j),v\}: u-v\in \{(0,(-1)^{i+j+1}),(\pm 1,0)\}\}.$$ In other words, if $x+y$ is even, then $(x,y)$ is adjacent to $(x,y+1),(x-1,y),$ and $(x+1,y)$. If $x+y$ is odd, then $(x,y)$ is adjacent to $(x,y-1),(x-1,y),$ and $(x+1,y)$. In particular, this representation shows clearly that the hex grid is a subgraph of the square grid. For any integer $k$, we also define a \textit{horizontal line} $L_k=\{(x,k): x\in {\mathbb Z} \}$. Note that if $u,v\in V(G_S)$, then the distance between them (in the square grid) is $\|u-v\|_1$. From this point forward, let $d(u,v)$ represent the distance between two vertices in the hex grid. If $u\in V(G_H)$ and $U,V\subset V(G_H)$, we define $d(u,V)=\min \{d(u,v):v\in V\}$ and $d(U,V)=\min \{d(u,V): u\in U\}$. Define $$L_k'=\left\{\begin{array}{cc} L_k\cap \{(x,k):x\not\equiv 1,3,5,\ldots,r-1\mod 3r\},& \text{if $r$ is even}; \\ L_k\cap \{(x,k):x\not\equiv 1,3,5,\ldots,r-1\mod 3r-1\},& \text{if $r$ is odd} \\ \end{array}\right.$$ and with $\delta=0$ if $k$ is even and $\delta=1$ otherwise define $$L_k''=\left\{\begin{array}{cc} L_k\cap \{(x,k):x\equiv \delta \mod r\},& \text{if $r$ is even}; \\ L_k\cap \{(x,k):x\equiv 0\mod r+1\},& \text{if $r$ is odd} \\ \end{array}\right..$$ Finally, let $$C'=\bigcup_{n=-\infty}^\infty L_{n(r+1)}' \qquad\text{and}\qquad C''=\bigcup_{n=-\infty}^\infty L_{\lfloor (r+1)/2\rfloor+2n(r+1)}''.$$ Let $C=C'\cup C''$. We will show in Section ~\ref{mainthmsection} that $C$ is a valid code of the density described in Theorem ~\ref{maintheorem}. Partial pictures of the code are shown for $r=6$ in Figure ~\ref{fig:code6} and $r=7$ in Figure ~\ref{fig:code7}. \section{Distance Claims}\label{distanceclaimssection} We present a list of claims on the distances of vertices in the hex grid. These claims will be used in the proof that our construction is a valid code. The proofs of these claims can be found in Section \ref{lemmaproofs}. \begin{claim}\label{taxicab1} For $u,v\in V(G_H)$, $d(u,v)\ge \|u-v\|_1$.\end{claim} \begin{proof} Note that Claim ~\ref{taxicab1} simply says that the distance between any two vertices in our graph is at least as large as their distance in the square grid. Since $G_H$ is a subgraph of $G_S$, any path between $u$ and $v$ in $G_H$ is also a path in $G_S$ and so the claim follows. \end{proof} \begin{claim}\label{taxicab2} \textbf{(Taxicab Lemma)} For $u=(x,y),v=(x',y')\in V(G_H)$, if $|x-x'|\ge|y-y'|$, then $d(u,v)=\|u-v\|_1$.\end{claim} This fact is used so frequently that we call it the Taxicab Lemma. It states that if the horizontal distance between two vertices is at least as far as the vertical distance, then the distance between those two vertices is exactly the same as it would be in the square grid. 
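As an illustration only (it is not used anywhere in the proofs), Claim~\ref{taxicab1} and the Taxicab Lemma can be spot-checked numerically from the brick wall adjacency rule above by breadth-first search; the following minimal Python sketch, whose function names and window size are our own arbitrary choices, does this on a small portion of the grid.
\begin{verbatim}
from collections import deque

def hex_neighbors(v):
    # Brick wall rule: (x,y) is joined to (x-1,y), (x+1,y) and to
    # (x,y+1) if x+y is even, or to (x,y-1) if x+y is odd.
    x, y = v
    dy = 1 if (x + y) % 2 == 0 else -1
    return [(x - 1, y), (x + 1, y), (x, y + dy)]

def hex_distance(u, v, margin=30):
    # Breadth-first search, restricted to a window large enough
    # not to affect the small distances tested below.
    lo = (min(u[0], v[0]) - margin, min(u[1], v[1]) - margin)
    hi = (max(u[0], v[0]) + margin, max(u[1], v[1]) + margin)
    dist, queue = {u: 0}, deque([u])
    while queue:
        w = queue.popleft()
        if w == v:
            return dist[w]
        for z in hex_neighbors(w):
            inside = lo[0] <= z[0] <= hi[0] and lo[1] <= z[1] <= hi[1]
            if inside and z not in dist:
                dist[z] = dist[w] + 1
                queue.append(z)

# d(u,v) >= ||u-v||_1 always, with equality when |x-x'| >= |y-y'|.
for x in range(-5, 6):
    for y in range(-5, 6):
        d, taxi = hex_distance((0, 0), (x, y)), abs(x) + abs(y)
        assert d >= taxi
        if abs(x) >= abs(y):
            assert d == taxi
\end{verbatim}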
The proof of the Taxicab Lemma and the remainder of these claims can be found in Section ~\ref{lemmaproofs}. Claims ~\ref{vertexlinedistance} and ~\ref{vertexlinedistance2} say that the distance between a point $(k,a)$ and a line $L_b$ is either $2|a-b|$ or $2|a-b|-1$ depending on various factors. It also follows from these claims that $d(L_a,L_b)=2|a-b|-1$ if $a\neq b$. \begin{claim}\label{vertexlinedistance} Let $a<b$, then $$d((k,a),L_b)=\left\{\begin{array}{cc} 2(b-a)-1, & \text{if $a+k$ is even}; \\ 2(b-a), & \text{if $a+k$ is odd}. \end{array} \right.$$ \end{claim} \begin{claim}\label{vertexlinedistance2} Let $a>b$, then $$d((k,a),L_b)=\left\{\begin{array}{cc} 2(a-b), & \text{if $a+k$ is even}; \\ 2(a-b)-1, & \text{if $a+k$ is odd}. \end{array} \right.$$ \end{claim} The next three claims all express the same idea. If we are looking at a point $(x,y)$ and a horizontal line $L_k$ such that $(x,y)$ is within some given distance $d$, we can find a sequence $S$ of points on this line such that each point is at most distance $d$ from $(x,y)$ and the distance between each point in $S$ and its closest neighbor in $S$ is exactly 2. \begin{claim}\label{evenvertexdistance} Let $k$ be a positive integer. There exist paths of length $2k$ from $(x,y)$ to $v$ for each $v$ in $$\{(x-k+2j,y\pm k): j=0,1,\ldots,k\}$$ \end{claim} \begin{claim}\label{oddevenvertexdistance} Let $k$ be a positive integer and let $x+y$ be even. There exist paths of length $2k+1$ from $(x,y)$ to $v$ for each $v$ in $$\{(x-k+2j,y+k+1): j=0,1,\ldots,k\}\cup\{(x-k-1+2j,y-k): j=0,1,\ldots,k+1\}$$ \end{claim} \begin{claim}\label{oddoddvertexdistance} Let $k$ be a positive integer and let $x+y$ be odd. There exist paths of length $2k+1$ from $(x,y)$ to $v$ for each $v$ in $$\{(x-k+2j,y-k-1): j=0,1,\ldots,k\}\cup\{(x-k-1+2j,y+k): j=0,1,\ldots,k+1\}$$ \end{claim} In the final claim, we are simply stating that if we are given a point $(x,y)$ and a line $L_k$ such that $d((x,y),L_k)<r$, we can find a path of vertices on that line which are all distance at most $r$ from $(x,y)$. \begin{claim}\label{vertexballdistance} Let $(x,y)$ be a vertex and $L_k$ be a line. If $d((x,y),L_k)< r$, then $$\{(x-(r-|y-k|)+j,k): j=0,1,\ldots, 2(r-|y-k|)\}\subset B_r((x,y))$$ \end{claim} \section{Proof of Main Theorem}\label{mainthmsection}Here, we wish to show that the set described in Section~\ref{descriptionsection} is indeed a valid $r$-identifying code. \begin{proofcite}{Theorem~\ref{maintheorem}} Let $C=C'\cup C''$ be the code described in Section ~\ref{descriptionsection}. Let $d(C)$ be the density of $C$ in $G_H$, $d(C')$ be the density of $C'$ in $G_H$ and $d(C'')$ be the density of $C''$ in $G_H$. Since $C'$ and $C''$ are disjoint, we see that \begin{eqnarray*} d(C) &=& \limsup_{m\rightarrow\infty}\frac{|C\cap Q_m|}{|Q_m|} \\ &=& \limsup_{m\rightarrow\infty}\frac{|(C'\cup C'')\cap Q_m|}{|Q_m|} \\ &=& \limsup_{m\rightarrow\infty}\frac{|C'\cap Q_m|}{|Q_m|}+\limsup_{m\rightarrow\infty}\frac{|C''\cap Q_m|}{|Q_m|} \\ &=& d(C')+d(C'') \end{eqnarray*} where the third equality is justified because both limits exist, thanks to the periodicity discussed next. It is easy to see that both $C'$ and $C''$ are periodic tilings of the plane (and hence $C$ is also periodic). The density of periodic tilings was discussed by Charon, Hudry, and Lobstein~\cite{Charon2002}, where it is shown that the density of a periodic tiling on the hex grid is $$D(C)=\frac{\text{\# of codewords in tile}}{\text{size of tile}} .$$ For $r$ even, we may consider the density of $C'$ on the tile $[0,3r-1]\times[0,r]$. The size of this tile is $3r(r+1)$. 
On this tile, the only members of $C'$ fall on the horizontal line $L_0$, of which there are $2r+r/2$. For $r$ odd, we may consider the density of $C'$ on the tile $[0,3r-2]\times[0,r]$. The size of this tile is $(3r-1)(r+1)$. On this tile, the only members of $C'$ fall on $L_0$, of which there are $2r+(r-1)/2$. Thus $$d(C')=\left\{\begin{array}{cc} \frac{5}{6(r+1)} & \text{if $r$ is even} \\ \frac{5r-1}{(6r-2)(r+1)} & \text{if $r$ is odd} \end{array} \right.$$ For $C''$, we need to consider the tiling on $[0,r-1]\times[0,2r+1]$ if $r$ is even and $[0,r]\times[0,2r+1]$ if $r$ is odd. In either case, the tile contains only a single member of $C''$. Hence, $$d(C'')=\left\{\begin{array}{cc} \frac{1}{2r(r+1)} & \text{if $r$ is even} \\ \frac{1}{2(r+1)^2} & \text{if $r$ is odd} \end{array} \right.$$ Adding these two densities together gives us the numbers described in the theorem. It remains to show that $C$ is a valid code. We will say that a vertex $v$ is \emph{nearby} $L_k'$ if $I_r(v)\cap L_k'\neq \emptyset$. An outline of the proof is as follows: \begin{enumerate} \item Each vertex $v\in V(G_H)$ is nearby $L_{n(r+1)}'$ for exactly one value of $n$ (and hence, $I_r(v)\neq\emptyset$). \item Since each vertex is nearby $L_{n(r+1)}'$ for exactly one value of $n$, we only need to distinguish between vertices which are nearby the same horizontal line. \item The vertices in $C''$ distinguish between vertices that fall above the horizontal line and those that fall below the line. \item We then show that $L_{n(r+1)}'$ can distinguish between any two vertices which fall on the same side of the horizontal line (or in the line). \end{enumerate} \textbf{Part (1)}: We want to show that each vertex is nearby $L_{n(r+1)}$ for exactly one value of $n$. From Claim \ref{vertexlinedistance}, it immediately follows that $d(L_{n(r+1)},L_{(n+1)(r+1)})=2r+1$. So by the triangle inequality, no vertex can be within a distance $r$ of both of these lines. Thus, no vertex can be nearby more than one horizontal line of the form $L_{n(r+1)}$. \begin{claim}\label{codewordclaim} Let $u,v\in L_{n(r+1)}$. Then: \begin{enumerate} \item If $u\sim v$ then at least one of $u$ and $v$ is in $C'$. \item If $r\le d(u,v)\le 2r+1$ then at least one of $u$ and $v$ is in $C'$. \end{enumerate} Furthermore, note that for any $x$ at least one of the following vertices is in $C$: $$\{(x+2k,n(r+1)): k=0,1,\ldots,\lceil(r+1)/2\rceil\} .$$ \end{claim} The proof of this claim is in Section ~\ref{lemmaproofs}. From this point forward, let $v=(i,j)$ with $n(r+1)\le j < (n+1)(r+1)$. First, suppose that $r$ is even. Then for $j\le r/2+ n(r+1)$ we have $d((i,j),(i-r/2,j-r/2))=r$ by the Taxicab Lemma. Since $j-r/2\le n(r+1)$, there is $u\in L_{n(r+1)}$ such that $d((i,j),u)\le r$. If $d((i,j),u)<r$, then $u$ and $u+(1,0)$ are both within distance $r$ of $(i,j)$ and by Claim ~\ref{codewordclaim} one of them is in $C'$. If $d((i,j),L_{n(r+1)})=r$, then by Claim \ref{evenvertexdistance} $(i,j)$ is within distance $r$ of one of the vertices in the set $\{(i-r/2+2k,\,n(r+1)): k=0,1,\ldots,r/2\}$ and again by Claim ~\ref{codewordclaim} one of them must be in $C'$. By a symmetric argument, we can show that if $j>r/2+n(r+1)$, then there is a $u\in C'\cap L_{(n+1)(r+1)}$ such that $d((i,j),u)\le r$. If $r$ is odd and $j\neq n(r+1)+(r+1)/2$, then we can use the same argument to show there is a codeword in $C'\cap L_{n(r+1)}$ or $C'\cap L_{(n+1)(r+1)}$ within distance $r$ of $(i,j)$. 
If $j=n(r+1)+(r+1)/2$, then we refer to Claims \ref{oddevenvertexdistance}, \ref{oddoddvertexdistance} and \ref{codewordclaim} to find an appropriate codeword. Since no vertex can be nearby more than one horizontal line of the form $L_{n(r+1)}$, this shows that each vertex is nearby exactly one horizontal line of this form. So we have shown that $v$ is nearby exactly one horizontal line of the form $L_{n(r+1)}'$, but this also shows that $I_r(v)\neq \emptyset$ for any $v\in V(G_H)$. \textbf{Part (2)}: We now turn our attention to showing that $I_r(u)\neq I_r(v)$ for any $u\neq v$. We first note that if $u$ is nearby $L_{n(r+1)}'$ and $v$ is nearby $L_{m(r+1)}'$ and $m\neq n$, then there is some $c$ in $I_r(v)\cap L_{m(r+1)}'$ but $c\not\in I_r(u)$ since $u$ is not nearby $L_{m(r+1)}'$. Thus, $u$ and $v$ are distinguishable. Hence it suffices to consider only the case that $n=m$. \textbf{Part (3)}: We consider the case that $u$ and $v$ fall on opposite sides of $L_{n(r+1)}$. Suppose that $u=(i,j)$ where $j>n(r+1)$ and $v=(i',j')$ where $j'<n(r+1)$. We see that if $n=2k$, then $n(r+1)<\lfloor(r+1)/2\rfloor+2k(r+1)< (n+1)(r+1)$ and if $n=2k+1$ then $(n-1)(r+1)<\lfloor(r+1)/2\rfloor+2k(r+1)< n(r+1)$. In the first case, we see that \begin{eqnarray*} d((i',j'),L_{\lfloor(r+1)/2\rfloor+2k(r+1)}) &\ge& d(L_{2k(r+1)-1},L_{\lfloor(r+1)/2\rfloor+2k(r+1)})\\ &=& 2\lfloor (r+1)/2\rfloor + 1\\ &\ge& r+1 \end{eqnarray*} and likewise in the second case we have that $d((i,j), L_{\lfloor(r+1)/2\rfloor+2k(r+1)})\ge r+1$ and so it follows that at most one of $(i,j)$ and $(i',j')$ is within distance $r$ of a codeword in $C''$. If $r$ is odd, then $\lfloor(r+1)/2\rfloor = (r+1)/2$ and $|j-((r+1)/2+2k(r+1))|\le(r-1)/2$. Hence, we see that $d((i,j),L_{(r+1)/2+2k(r+1)})\le r-1$. From Claim \ref{vertexballdistance}, it follows that $$\{(m,(r+1)/2+n(r+1)): (r-1)/2\le m \le (3r-1)/2\}\subset B_r((i,j)) .$$ We note that the cardinality of this set is at least $r+1$ and so in the case that $n$ is even we see that at least one of these must be in $C''$. A symmetric argument applies if $n$ is odd, showing that $(i',j')$ is within distance $r$ of a codeword in $C''$. If $r$ is even, we apply a similar argument to show that $(i,j)$ is within distance $r-1$ of $r$ vertices in $L_{n(r+1)+\lfloor(r+1)/2\rfloor}$ and $(i',j')$ is within distance $r$ of $r-1$ vertices of $L_{n(r+1)-\lceil(r+1)/2\rceil}$. If $n$ is even, then clearly $(i,j)$ is within distance $r$ of some codeword in $C''$. However, if $n$ is odd, we have only shown that $$\{(m,\lceil(r+1)/2\rceil+n(r+1)): r/2-2\le m \le 3r/2-1\}\subset B_{r-1}((i,j)) .$$ Consider the set $$\{(m,\lfloor(r+1)/2\rfloor+n(r+1)): r/2-2\le m \le 3r/2-1\}.$$ This set has $r$ vertices and so one of them must be a codeword. Furthermore, by the way we constructed $C''$, the sum of the coordinates of any codeword in $C''$ is even and so there is an edge connecting it to a vertex in $$\{(m,\lceil(r+1)/2\rceil+n(r+1)): r/2-2\le m \le 3r/2-1\}$$ and so it is within radius $r$ of $(i',j')$. In either case, exactly one of $u$ and $v$ is within distance $r$ of a codeword in $C''$. \textbf{Part (4)}: Now we have shown that two vertices are distinguishable if they are nearby two different lines in $C'$ or if they fall on opposite sides of a line $L_{n(r+1)}$ for some $n$. To finish our proof, we need to show that we can distinguish between $u=(i,j)$ and $v=(i',j')$ if $u$ and $v$ are nearby the same line $L_{n(r+1)}'$ and $u$ and $v$ fall on the same side of that line. 
Without loss of generality, assume that $j,j'\ge n(r+1)$. \textbf{Case 1}: $u,v\in L_{n(r+1)}$. Without loss of generality, we can write $u=(x,n(r+1))$ and $v=(x+k,n(r+1))$ for $k>0$. If $k>1$ then $(x-r,n(r+1))$ and $(x-r+1,n(r+1))$ are both within distance $r$ of $u$ but not of $v$ and one of them must be a codeword by Claim \ref{codewordclaim}. If $k=1$, then $(x-r,n(r+1))$ is within distance $r$ of $u$ but not $v$ and $(x+r+1,n(r+1))$ is within distance $r$ of $v$ but not $u$. Since $d((x-r,n(r+1)),(x+r+1,n(r+1)))=2r+1$, Claim \ref{codewordclaim} states that at least one of them must be a codeword. \textbf{Case 2}: $u\in L_{n(r+1)}$, $v\not\in L_{n(r+1)}$. Without loss of generality, assume that $i\le i'$. We have $u=(i,n(r+1))$. If $i=i'$ then $v=(i,n(r+1)+k)$ for some $k>0$. Consider the vertices $(i-r,n(r+1))$ and $(i+r,n(r+1))$. These are each distance $r$ from $u$ and they are distance $2r$ from each other, so by Claim \ref{codewordclaim} at least one of them is a codeword. However, by Claim \ref{taxicab1}, these are distance at least $r+k>r$ from $v$ and so $I_r(u)\neq I_r(v)$. If $i<i'$, then we can write $v=(i+j, n(r+1)+k)$ for $j,k>0$. Consider the vertices $x_1=(i-r, n(r+1))$ and $x_2=(i-r+1,n(r+1))$. By the Taxicab Lemma, $d(u,x_1)=r$ and $d(u,x_2)=r-1$. Further, since they are adjacent vertices in $L_{n(r+1)}$, one of them is in $C'$ by Claim \ref{codewordclaim}. However, by Claim \ref{taxicab1} we have $d(v,x_1)\ge r+j+k>r$ and $d(v,x_2)\ge r-1+j+k>r$. Hence, neither of them is in $I_r(v)$ and so $I_r(u)\neq I_r(v)$. \textbf{Case 3}: $u,v\not\in L_{n(r+1)}$. Assume without loss of generality that $n=0$ and so $1\le j,j'\le \lceil r/2\rceil$. \textbf{Subcase 3.1}: $j<j'$, $i=i'$. From Claim \ref{vertexlinedistance2} it immediately follows that $d((i,j),L_0)<d((i',j'),L_0)\le r$. By the Taxicab Lemma, we have $d((i-(r-j),0),(i,j))=d((i+(r-j),0),(i,j))=r$. Further note that $d((i-(r-j),0),(i+(r-j),0))=2(r-j)$ and since $1\le j<\lceil r/2\rceil$ we have $r+1\le 2(r-j)\le 2r-2$ and so at least one of these two vertices is a codeword by Claim \ref{codewordclaim}. By the Taxicab Lemma, neither of these two vertices is in $B_r((i',j'))$. \textbf{Subcase 3.2}: $j<j'$, $i\neq i'$. Without loss of generality, we may assume that $i<i'$. Then, we wish to consider the vertices $(i-(r-j),0)$ and $(i-(r-j)+1,0)$. We see that $d((i,j),(i-(r-j)+1,0))=r-1$ and so by the Taxicab Lemma, neither of these vertices is in $B_r((i',j'))$. But since these vertices are adjacent and both in $L_0$, at least one of them is a codeword by Claim \ref{codewordclaim} and so that one is in $I_r((i,j))$ but not in $I_r((i',j'))$. \textbf{Subcase 3.3}: $j=j'$, $d(u,L_0)<r$ or $d(v,L_0)<r$. Without loss of generality, assume that $d((i,j),L_0)<r$ and that $i<i'$. Then $v=(i+k,j)$ for some $k>0$. First suppose that $k>1$. Then, we wish to consider the vertices $(i-(r-j),0)$ and $(i-(r-j)+1,0)$. We see that $d((i,j),(i-(r-j)+1,0))=r-1$ and so by the Taxicab Lemma, neither of these vertices is in $B_r((i',j'))$. But since these vertices are adjacent and both in $L_0$, by Claim \ref{codewordclaim} at least one of them is a codeword and so that one is in $I_r((i,j))$ but not in $I_r((i',j'))$. If $k=1$, consider the vertices $x_1=(i-(r-j),0)$ and $x_2=(i+1+(r-j),0)$. We see that $$\begin{array}{ll} d(x_1,u) &= r \\ d(x_1,v) &= r+1 \\ d(x_2,u) &= r+1 \\ d(x_2,v) &= r \end{array}$$ all by the Taxicab Lemma. 
Furthermore, $d(x_1,x_2)=2(r-j)+1$ and so $r+1\le d(x_1,x_2)\le 2r+1$ as in the above case and so one of them is a code word by Claim \ref{codewordclaim}. Hence $I_r(u)\neq I_r(v)$. \textbf{Subcase 3.4}: $j=j'$, $d(u,L_0)=d(v,L_0)=r$ Without loss of generality, suppose $i'<i$. We wish to consider the vertices in the two sets $$U=\{(i-(r-j)+2k,0): k=0,1,2,\ldots,r-j\}$$ and $$V=\{(i'-(r-j)+2k,0):k=0,1,2,\ldots,r-j\}.$$ Note that $$|U|=|V|=\left\{\begin{array}{cc} r/2+1 & \text{if $r$ is even} \\ (r+1)/2 & \text{if $r$ is odd} \end{array} \right.$$ From Claims \ref{evenvertexdistance}, \ref{oddevenvertexdistance}, and \ref{oddoddvertexdistance} we have $U\subset B_r(u)$ and $V\subset B_r(v)$. Each of $U$ and $V$ contain a codeword by Claim \ref{codewordclaim}. Hence, if $U\cap V=\emptyset$ then $u$ and $v$ are distinguishable. If $U\cap V\neq \emptyset$ then the leftmost vertex in $U$ is also in $V$. We further note that the leftmost vertex in $U$ is not the leftmost vertex in $V$ or else $U=V$ and $u=v$. Thus, we must have $i'-(r-j)+2\le i\le i'+(r-j)$ and so $2\le i-i'\le 2(r-j)$. Let $x_1=(i+(r-j),0)$ and let $x_2=(i'-(r-j),0)$. By definition $x_1\in U$ and $x_2\in V$. By the Taxicab Lemma we have $d(x_1,v)=|i+r-j-i'|+|j|=r+i-i'>r$ and likewise $d(x_2,u)>r$ so $x_1\not\in I_r(v)$ and $x_2\not\in I_r(u)$. Also we have $d(x_1,x_2)=2(r-j)+i-i'$ which gives $2(r-j)+2\le d(x_1,x_2)\le 4(r-j)$. Since $d(u,L_0)=r$ we must have $r-j=r/2$ if $r$ is even and $r-j=(r-1)/2$ if $r$ is odd by Claims ~\ref{vertexlinedistance} and ~\ref{vertexlinedistance2}. In either case this gives $r+2\le d(x_1,x_2)\le 2r$. By Claim \ref{codewordclaim}, one of $x_1$ and $x_2$ is a codeword and so $u$ and $v$ are distinguishable. This completes the proof of Theorem \ref{maintheorem}. \end{proofcite} \section{Proof of Claims}\label{lemmaproofs} We now complete the proof by going back and finishing off the proofs of the claims and lemmas that were presented in Section \ref{distanceclaimssection}. \begin{proofcite}{Claim~\ref{taxicab2}}\textbf{(Taxicab Lemma)} By Claim ~\ref{taxicab1}, it suffices to show there exists a path of length $\|u-v\|$ between $u$ and $v$. Since $|x-x'|\ge|y-y'|$, either $u=v$ or $x\neq x'$. If $u=v$ then the claim is trivial, so assume without loss of generality that $x< x'$. We will first assume that $y\le y'$. We note that no matter what the parity of $i+j$, there is always a path of length 2 from $(i,j)$ to $(i+1,j+1)$. Then for $1\le i \le |y-y'|$, define $P_i$ to be the path of length 2 from $(x+i-1,y+i-1)$ to $(x+i,y+i)$. Then $\bigcup_{i=1}^{|y-y'|} P_i$ is a path of length $2(y'-y)$ from $(x,y)$ to $(x+y'-y, y')$. Since $(i,j)\sim (i+1,j)$ for all $i,j$, there is a path $P'$ of length $x'-x+y-y'$ (Note: this number is nonnegative since $x'-x\ge y'-y$.) from $(x+y'-y, y')$ to $(x',y')$ by simply moving from $(i,y')$ to $(i+1,y')$ for $x+y'-y\le i< x'$. Then, we calculate the total length of $$P'\cup \bigcup_{i=1}^{|y-y'|}P_i$$ to be $|x'-x|+|y'-y|$. By Claim \ref{taxicab1}, it follows that $d(u,v)= |x-x'|+|y-y'|=\|u-v\|_1$. If $y>y'$, then a symmetric argument follows by simply following a downward diagonal followed by a straight path to the right. \end{proofcite} \begin{proofcite}{Claim~\ref{vertexlinedistance}} We proceed by induction on $b-a$. The base case is trivial since if $a+k$ is even, then $(k,a)\sim(k,a+1)=(k,b)$. If $a+k$ is odd, then there is no edge directly to $L_b$ and so any path from $(k,a)$ to $L_b$ has length at least 2. 
This is attainable by the path from $(k,a)$ to $(k+1,a)$ to $(k+1,b)$. The inductive step follows similarly. Let $a+k$ be even. Then there is an edge between $(k,a)$ and $(k,a+1)$. Since $k+a+1$ is odd, the shortest path from $(k,a+1)$ to $L_b$ has length $2(b-a-1)$ and so the union of that path with the edge $\{(k,a),(k,a+1)\}$ gives a path of length $2(b-a)-1$. Likewise, if $k+a$ is odd, then taking the path from $(k,a)$ to $(k+1,a)$ to $(k+1,a+1)$ together with a path from $(k+1,a+1)$ to $L_b$ gives us a path of length $2(b-a)$. Further, any path from $L_a$ to $L_b$ must contain at least one point in $L_{a+1}$. So if we take a path of length $\ell$ from $(k,a)$ to $(j,a+1)$ (note that $j+a+1$ must be odd), then we get a path of length $\ell+2(b-a-1)$. From the base case, we have chosen our paths from $L_a$ to $L_{a+1}$ optimally and so this path is minimal. \end{proofcite} \begin{proofcite}{Claim~\ref{vertexlinedistance2}} The proof is symmetric to the previous proof. \end{proofcite} \begin{proofcite}{Claim~\ref{evenvertexdistance}} The proof is by induction on $k$. If $k=1$, then as noted before, there is always a path of length 2 from $(x,y)$ to $(x\pm 1, y\pm 1)$. For $k>1$, there is a path of length 2 from $(x,y)$ to $(x-1,y+1)$ and to $(x+1,y+1)$. By the inductive hypothesis, there is a path of length $2(k-1)$ from $(x-1, y+1)$ to each vertex in $$S=\{(x-k+2j,y+k):j=0,1,\ldots,k-1\}$$ and a path of length $2(k-1)$ from $(x+1, y+1)$ to $(x+k,y+k)$. Taking the union of the path of length 2 from $(x,y)$ to $(x-1,y+1)$ and the path of length $2(k-1)$ from $(x-1,y+1)$ to each vertex in $S$ gives us a path of length $2k$ to each vertex in $$\{(x-k+2j,y+k): j=0,1,\ldots,k-1\} .$$ Then, taking the path of length 2 from $(x,y)$ to $(x+1,y+1)$ and the path of length $2(k-1)$ from $(x+1,y+1)$ to $(x+k,y+k)$ gives us a path of length $2k$ from $(x,y)$ to each vertex in $$\{(x-k+2j,y+k): j=0,1,\ldots,k\}.$$ By a symmetric argument, we can find paths of length $2k$ from $(x,y)$ to each vertex in $$\{(x-k+2j,y-k): j=0,1,\ldots,k\} .$$ \end{proofcite} \begin{proofcite}{Claim~\ref{oddevenvertexdistance}} Since $x+y$ is even, $(x,y)\sim(x,y+1)$. By Claim ~\ref{evenvertexdistance}, there are paths of length $2k$ from $(x,y+1)$ to each vertex in $\{(x-k+2j,y+k+1): j=0,1,\ldots,k\}$ and hence there are paths of length $2k+1$ from $(x,y)$ to each vertex in that set. Since $(x,y)\sim(x-1,y)$, by Claim ~\ref{evenvertexdistance}, there are paths of length $2k$ from $(x-1,y)$ to each vertex in $\{(x-k-1+2j,y-k): j=0,1,\ldots,k\}$. Similarly, since $(x,y)\sim(x+1,y)$ there is a path of length $2k$ from $(x+1,y)$ to $(x+k+1,y-k)$ and so there is a path of length $2k+1$ from $(x,y)$ to each vertex in $\{(x-k-1+2j,y-k): j=0,1,\ldots,k+1\}$. \end{proofcite} \begin{proofcite}{Claim~\ref{oddoddvertexdistance}} Since $x+y$ is odd, $(x,y)\sim(x,y-1)$. By Claim ~\ref{evenvertexdistance}, there are paths of length $2k$ from $(x,y-1)$ to each vertex in $\{(x-k+2j,y-k-1): j=0,1,\ldots,k\}$ and hence there are paths of length $2k+1$ from $(x,y)$ to each vertex in that set. Since $(x,y)\sim(x-1,y)$, by Claim ~\ref{evenvertexdistance}, there are paths of length $2k$ from $(x-1,y)$ to each vertex in $\{(x-k-1+2j,y+k): j=0,1,\ldots,k\}$. Similarly, since $(x,y)\sim(x+1,y)$ there is a path of length $2k$ from $(x+1,y)$ to $(x+k+1,y+k)$ and so there is a path of length $2k+1$ from $(x,y)$ to each vertex in $\{(x-k-1+2j,y+k): j=0,1,\ldots,k+1\}$. 
\end{proofcite} \begin{proofcite}{Claim~\ref{vertexballdistance}} Without loss of generality, assume that $y\ge k$. If $y=k$, the claim is trivial, so let $\ell>0$ be the length of the shortest path between $(x,y)$ and $L_k$. By assumption, $\ell\le r-1$. First assume that $\ell$ is even. By Claim ~\ref{evenvertexdistance}, there is a path of length $\ell$ from $(x,y)$ to each vertex in $S=\{(x-\ell/2+2j,k):j=0,1,\ldots,\ell/2\}$. Note that the vertices in the set $S'=\{(x-\ell/2+2j-1,k):j=0,1,\ldots,\ell/2+1\}$ all fall within distance 1 of a vertex in $S$ and so $S\cup S' = \{(x-\ell/2-1+j,k):j=0,1,\ldots,\ell+2\}\subset B_r((x,y))$. Now note that $d((x,y),(x-\ell/2,k))=\ell$, so if $x-(r-|y-k|)\le x_0\le x-\ell/2$ for some $x_0$, then \begin{eqnarray*} d((x_0,k),(x,y))&\le& d((x_0,k),(x-\ell/2,k))+d((x-\ell/2,k),(x,y)) \\ &\le& (r-|y-k|-\ell/2)+\ell . \end{eqnarray*} Since $|y-k|=\ell/2$ this gives $d((x_0,k),(x,y))\le r$. Likewise, if $x+\ell/2\le x_0\le x+(r-|y-k|)$ for some $x_0$, then $d((x_0,k),(x,y))\le r$. Hence, $d((x_0,k),(x,y))\le r$ for all $x-(r-|y-k|)\le x_0\le x+(r-|y-k|)$ and so the claim follows. If $\ell$ is odd, the claim follows by using either Claim ~\ref{oddevenvertexdistance} or Claim ~\ref{oddoddvertexdistance} and applying a similar argument. \end{proofcite} \begin{proofcite}{Claim~\ref{codewordclaim}} The first part of this claim is immediate from the definition of $C'$ since the only vertices in $L_{n(r+1)}$ which are not in $L_{n(r+1)}'$ are separated by distance 2. To see the second part, let $\ell=3r$ if $r$ is even and $\ell=3r-1$ if $r$ is odd. Write $L_{n(r+1)}=S_1\cup S_2\cup S_3$ where \begin{eqnarray*} S_1 &=& L_{n(r+1)}\cap\{(x,y), x\equiv 1,\ldots, r-1 \mod \ell\}\\ S_2 &=& L_{n(r+1)}\cap\{(x,y), x\equiv r,r+1,\ldots, 2r-1 \mod \ell\}\\ S_3 &=& L_{n(r+1)}\cap\{(x,y), x\equiv 2r,2r+1,\ldots, \ell \mod \ell\}\\ \end{eqnarray*} Note that each vertex in $S_2\cup S_3$ is in $C'$ and so if $u,v\in L_{n(r+1)}$ are both not in $C'$, they are both in $S_1$. However, it is clear that for two vertices in $S_1$, their distance is either strictly less than $r$ or at least $2r+1$. Finally, for the last part of the claim, we use the same partition of $L_{n(r+1)}$ so that all non-codewords fall in $S_1$. However, the distance between $(x,n(r+1))$ and $(x+2\lceil(r+1)/2\rceil,n(r+1))$ is at least $r+1$ so one of those two vertices cannot fall in $S_1$ and so it is a codeword. \end{proofcite} \section{Conclusion} In an upcoming paper, we also provide improved lower bounds for $D(G_H,r)$ when $r=2$ or $r=3$~\cite{MartinSubmitted}. Below are a couple of tables noting our improvements. This includes the results not only from our paper, but also from the aforementioned paper. 
$$\begin{array}{c} \begin{array}{|c|c||c|c|} \hline \multicolumn{4}{|c|}{\text{Hex Grid}}\\ \hline r & \text{previous lower bounds} & \text{new lower bounds} & \text{upper bounds} \\ \hline 2 & 2/11\approx 0.1818 ^\text{~\cite{Karpovsky1998}} & 1/5 = 0.2^\text{~\cite{MartinSubmitted}} & 4/19\approx 0.2105 ^\text{~\cite{Charon2002}}\\ \hline 3 & 2/17\approx 0.1176 ^\text{~\cite{Charon2001}} & 3/25 = 0.12^\text{~\cite{StantonPending}} & 1/6\approx 0.1667 ^\text{~\cite{Charon2002}}\\ \hline \multicolumn{4}{|c|}{\text{Square Grid}}\\ \hline 2 & 3/20=0.15 ^\text{~\cite{Charon2001}} & 6/37\approx 0.1622^\text{~\cite{MartinSubmitted}} & 5/29\approx 0.1724 ^\text{~\cite{Honkala2002}} \\ \hline \end{array}\\ \\ \begin{array}{|c|c|c||c|} \hline \multicolumn{4}{|c|}{\text{Hex Grid}}\\ \hline r & \text{lower bounds} & \text{new upper bounds} & \text{previous upper bounds} \\ \hline 15 & 2/77\approx 0.0260 ^\text{~\cite{Charon2001}} & 1227/22528\approx 0.0523 & 1/18\approx 0.0556 ^\text{~\cite{Charon2002}} \\ \hline 16 & 2/83\approx 0.0241 ^\text{~\cite{Charon2001}} & 83/1632\approx 0.0509 & 1/18\approx 0.0556^\text{~\cite{Charon2002}}\\ \hline 17 & 2/87\approx 0.0230 ^\text{~\cite{Charon2001}} & & 1/22\approx 0.0455^\text{~\cite{Charon2002}}\\ \hline 18 & 2/93\approx 0.0215 ^\text{~\cite{Charon2001}} & 31/684 \approx 0.0453 & 1/22\approx 0.0455 ^\text{~\cite{Charon2002}}\\ \hline 19 & 2/97\approx 0.0206 ^\text{~\cite{Charon2001}} & 387/8960\approx 0.0432 & 1/22\approx 0.0455 ^\text{~\cite{Charon2002}}\\ \hline 20 & 2/103\approx 0.0194 ^\text{~\cite{Charon2001}} & 103/2520\approx 0.0409 & 1/22\approx 0.0455 ^\text{~\cite{Charon2002}}\\ \hline 21 & 2/107\approx 0.0187 ^\text{~\cite{Charon2001}} & & 1/26\approx 0.0385 ^\text{~\cite{Charon2002}}\\ \hline \end{array} \end{array}$$ For even $r\ge 22$, we have improved the bound from approximately $8/(9r)$~\cite{Charon2001} to $(5r+3)/(6r(r+1))$ and for odd $r\ge 23$ we have improved the bound from approximately $8/(9r)$~\cite{Charon2001} to $(5r^2+7r-3)/((6r-2)(r+1)^2)$. Constructions for $2\le r\le 30$ given in ~\cite{Charon2002} were previously best known. However, the best general constructions were given in ~\cite{Charon2001}. \end{document}
\begin{document} \title{A non-LEA Sofic Group} \author{Aditi Kar \and Nikolay Nikolov} \noindent \address{ A. Kar, University of Southampton, SO17 1BJ, UK, [email protected] \newline N. Nikolov, University of Oxford, OX2 6GG, UK. [email protected].} \begin{abstract} We describe elementary examples of finitely presented sofic groups which are not residually amenable (and thus not initially subamenable, or LEA for short). We ask if an amalgam of two amenable groups over a finite subgroup is residually amenable and answer this positively for some special cases, including countable locally finite groups, residually nilpotent groups and others. \textbf{MSC}: 20E08, 20E26 \end{abstract} \maketitle Gromov \cite{gromov} defined the class of sofic groups as a generalization of residually finite and of amenable groups, see \cite{cap} for an introduction. The question of the existence of a non-sofic group remains open. Two related classes of groups defined in \cite{gromov} are the LEF groups and LEA groups. A group $G$ is LEF (locally embeddable into finite groups) if for every finite subset $F \subset G$, there exists a partial monomorphism of $F$ into a finite group $H$, meaning there exists an injection $f: F \rightarrow H$ such that if $x,y,xy \in F$ then $f(xy)=f(x)f(y)$. Similarly, a group is LEA (locally embeddable into amenable groups, also called \emph{initially subamenable}) if every finite subset of $G$ supports a partial monomorphism to an amenable group. It is not hard to show that amongst finitely presented groups, the class of LEF groups coincides with the residually finite groups and the class of LEA groups coincides with the residually amenable groups. Gromov's question whether every sofic group is LEA was answered negatively by Cornulier \cite{Yves}. Cornulier's example is an extension of an LEF group by an amenable quotient and hence, the LEF normal subgroup is co-amenable. For an introduction to co-amenability, we refer the reader to \cite{MPopa}. It was recently proved that if $A$ and $B$ are sofic groups and $C$ is an amenable subgroup of both $A$ and $B$, then the free product with amalgamation $G=A*_CB$ is also sofic, see \cite{CollinsDykema}, \cite{ElekSzabo} and \cite{Paunescu}. This theorem allows us to produce further examples of finitely presented sofic groups which are not residually amenable or, equivalently, not LEA. The examples actually show that the class of LEA groups is not closed under taking free products with infinite cyclic amalgamations. Consider $\mathrm{SL}_n(\mathbb{Z}[\frac{1}{p}])$ for $n \geq 3$ and a prime $p$. For $x \in \mathbb Z [\frac{1}{p}]$ denote $y(x) = \left( \begin{array}{ccc} 1 & x & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \mathrm{I}_{n-2} \end{array} \right)$ and let $Z$ be the infinite cyclic subgroup generated by the element $z=y(1)$. \begin{theorem} \label{main} The amalgam $G= \mathrm{SL}_n(\mathbb{Z}[\frac{1}{p}]) *_Z \mathrm{SL}_n(\mathbb{Z}[\frac{1}{p}])$ is sofic but not LEA. In fact $G$ does not have a co-amenable LEA subgroup. \end{theorem} \begin{proof} As we noted earlier, soficity of $G$ follows from \cite{CollinsDykema}. Now, suppose for the sake of contradiction that $H$ is a co-amenable LEA subgroup of $G$. We denote the two copies of $\mathrm{SL}_n(\mathbb{Z}[\frac{1}{p}])$ as $\Sigma_1$ and $\Sigma_2$ with two isomorphisms $f_i: \mathrm{SL}_n(\mathbb{Z}[\frac{1}{p}]) \rightarrow \Sigma_i$ ($i=1,2$). We have $f_1(z)=f_2(z)$ and will identify the cyclic group $Z$ with its image $f_1(Z)=f_2(Z)$ in $G$. 
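As a small aside, recorded here only for convenience (the verification is a routine matrix multiplication), note that
$$ y(x)\, y(x') = y(x+x') \qquad \mbox{for all } x, x' \in \mathbb Z [\tfrac{1}{p}], $$
so that $x \mapsto y(x)$ is an injective homomorphism from $(\mathbb Z [\frac{1}{p}], +)$ into $\mathrm{SL}_n(\mathbb{Z}[\frac{1}{p}])$; this is the reason why the subgroups $Y_i$ introduced below are isomorphic to $(\mathbb Z [\frac{1}{p}], +)$.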
Define $Y= \{ y(x) \ | \ x \in \mathbb Z [\frac{1}{p}] \}$ and let $Y_i= f_i(Y)$, a subgroup of $\Sigma_i$ isomorphic to $(\mathbb Z[\frac{1}{p}], +)$. Thus $Y_1 \cap Y_2=Z$. We claim that some $G$-conjugate of $H$ intersects both $\Sigma_1$ and $\Sigma_2$ in subgroups of finite index. Let $X=G/H$ be the set of left cosets of $H$ in $G$. The co-amenability of $H$ in $G$ is equivalent to the amenability of the action of $G$ on $X$, that is, the existence of a finitely additive $G$-invariant mean $\mu$ on all subsets of $X$ \cite{MPopa}. Since $\Sigma_i$ is a lattice in $\mathrm{SL}_n(\mathbb R) \times \mathrm{SL}_n(\mathbb Q_p)$ it has property $(T)$ (see for example \cite[Theorem 1.4.15]{BHV}). Therefore every amenable action of $\Sigma_i$ must have a finite orbit \cite[Lemma 4.2]{GM}. Let $X_{i, \infty} \subset X$ be the union of all the infinite orbits of $\Sigma_i$ on $X$. We must have that $\mu(X_{i, \infty})=0$, otherwise we can normalize $\mu$ on $X_{i, \infty}$ to a $\Sigma_i$-invariant mean where $\Sigma_i$ acts on $X_{i,\infty}$ without finite orbits. Therefore $ \mu(X_{1, \infty} \cup X_{2,\infty})=0$ and we can take $t \in X \backslash (X_{1, \infty} \cup X_{2,\infty})$. The stabilizer of $t$ in $G$ is the required conjugate of $H$. By replacing $H$ with $\mathrm{Stab}_G(t)$ we may assume from now on that $M_i= \Sigma_i \cap H$ has finite index in $\Sigma_i$ for $i=1,2$. In particular $H \cap Y_i$ is a finite index subgroup of $Y_i$. Note that $Y_i/Z$ is a divisible group and so $Y_i= (Y_i \cap H)Z$ giving that $[Y_1: (Y_1 \cap H)]= [Z: (Z \cap H)]=[Y_2: (Y_2 \cap H)]=a$, say. The integer $a$ is coprime to $p$ and $H \cap Y_i= f_i \circ y (a \mathbb Z [\frac{1}{p}])$. Define $\beta_i=f_i(y(\frac{a}{p}))$, thus $\beta_i \in (H \cap Y_i) \backslash Z$ while $\beta_1^p=\beta_2^p \in Z \cap H$. Note that each $M_i=\Sigma_i \cap H$ is a finitely presented group. Therefore there exists a finite set $F \subset M_1 \cup M_2$ such that every partial monomorphism $h$ from $F \subset H$ into any group $A$ extends to a map $h: M_1 \cup M_2 \rightarrow A$, such that $h|_{M_i}$ is a group homomorphism. Now assume that $A$ is amenable. Since $M_1, M_2$ have property $(T)$ their images $h(M_i)$ in $A$ are finite. In particular the groups $h(H \cap Y_1)$ and $h( H \cap Y_2)$ are finite and $p$-divisible and so must have size coprime to $p$. Every finite image of $H \cap Y_i$ coincides with the image of its subgroup $Z \cap Y_i$ and so $h(H \cap Y_1)= h(H \cap Z)= h(H \cap Y_2) $ is a finite group $L$ with $gcd(p, |L|)=1$. Note that $h(\beta_1)^p=h(\beta_2)^p$. Since every element of $L$ has a unique $p$-th root it follows that $h(\beta_1)=h(\beta_2)$. However $\beta_1 \not = \beta_2$ in $H$. We deduce that there cannot be a partial monomorphism from $F \cup \{\beta_1, \beta_2\}$ into any amenable group, and so $H$ is not LEA. \end{proof} \subsection*{Residually amenable groups} Theorem \ref{main} shows that a free product with amalgamation over $\mathbb Z$ of two residually amenable groups need not be residually amenable. On the other hand it is not difficult to show that a free product of residually amenable groups is residually amenable, for a proof see \cite{berlai}. A natural question is whether this result extends to amalgamations over finite subgroups: \begin{question} \label{q1} Is the class of residually amenable groups closed under free products with amalgamation over finite subgroups? 
\end{question} To show that a free product $G$ of residually amenable groups $A,B$ amalgamating a finite subgroup is residually amenable, it is sufficient to consider the case when $A$ and $B$ are both amenable. Let $g$ be a non-trivial element of $G$ and let $S=\{a_1,\ldots,a_k\}$ and $T:=\{b_1,\ldots, b_n\}$ be the non-trivial syllables of $g$. Then, there exist homomorphisms $\alpha, \beta$ from $A, B$ respectively to amenable groups $A',B'$, which are injective on $C \cup S$ and $C\cup T$. This clearly allows one to define a homomorphism from $G$ to $A'*_C B'$ such that the image of $g$ is non-trivial. Another question related to Question \ref{q1} is the following. For a class of groups $\mathcal L$ we say that $\mathcal L$ \emph{admits amalgamations over finite subgroups} when the following condition holds: For all $A, B \in \mathcal L$ and a finite group $C$ having two injective homomorphisms $\phi_A, \phi_B$ to $A$ and $B$ respectively, there exists a group $H \in \mathcal L$ and two injective homomorphisms $\psi_A, \psi_B$ from $A, B$ respectively to $H$ such that $\psi_A \circ \phi_A=\psi_B \circ \phi_B$. \begin{question} \label{q2} Does the class of amenable groups admit amalgamations over finite subgroups? \end{question} The authors don't know the answer to Question \ref{q2} even in the case when $A$ and $B$ are solvable groups. Note that if Question \ref{q2} has positive answer then there is a homomorphism $f:A*_CB \rightarrow H$ such that the the kernel of $f$ is free. Hence $A *_C B$ is residually amenable, which easily implies that Question \ref{q1} has a positive answer. We can answer questions \ref{q1} and \ref{q2} affirmatively in the special case of locally finite groups. \begin{theorem}The class of countable locally finite groups admits amalgamations over finite subgroups. As a consequence, the amalgam of two countable locally finite groups over a finite subgroup is residually amenable. \end{theorem} \begin{proof} Let $A$ and $B$ be two countable locally finite groups and let $C$ be a finite group with two injective homomorphisms $\phi_A, \phi_B$ to $A$ and $B$ respectively. Let $A$ be the direct limit of the finite groups $\{A_{j}\}_{j=1}^\infty$ with injective homomorphisms $f_{A,i}: A_{i} \rightarrow A_{i+1}$ and denote by $\{B_j\}_{j=1}^\infty$ and $f_{B,i}: B_{i} \rightarrow B_{i+1}$ the corresponding direct limit for $B$. Since $C$ is finite we may assume that $\phi_A (C) \subset A_{1}$ and $\phi_B (C) \subset B_{1}$. Now recall the following lemma \begin{lemma}[Lemma II.2.6.10 \cite{se}] \label{serre} The class of finite groups admits amalgamations over finite subgroups. \end{lemma} We apply Lemma \ref{serre} to $A_{1},B_{1},C,\phi_A,\phi_B$ and obtain a finite group $H_1$ with injections $t_{A,1},t_{B,1}$ from $A_1$, respectively $B_1$ to $H_1$ such that $t_{A,1} \phi_A= t_{B,1} \phi_B$. Next we apply Lemma \ref{serre} to $H_1,A_2,A_1, t_{A,1},f_{A,1}$ and obtain a finite group $H_2$ with injections $d_1: H_1 \rightarrow H_2$ and $t_{A,2}: A_2 \rightarrow H_2$ which agree on $A_1$. The next application of Lemma \ref{serre} to $H_2,B_2,B_1, d_1 t_{B,1}, f_{B,1}$ gives a finite group $H_3$ and injections $d_2: H_2 \rightarrow H_3$ and $t_{B,2}: B_2 \rightarrow H_3$ which agree on $B_1$. Continuing in the same way by induction we get a directed system $(H_j)$ of finite groups with injections $d_i :H_i\rightarrow H_{i+1}$ such that $H_{2i+1}$ contains an embedded copy of $A_i$ and $B_i$ with intersection containing an isomorphic copy of $C$. 
Let $H$ be the direct limit of $(H_j)$. By construction there are induced monomorphisms $\psi_A$ and $\psi_B$ from $A$ and $B$ to $H$ as required. The final part follows by observing that the kernel of the homomorphism $\psi_A * \psi_B : A*_CB \rightarrow H$ is a free group $K$. Now if $K^{(n)}$ is the $n$-th term of the derived series of $K$ we have that $\frac{A*_C B} {K^{(n)}}$ is an amenable group for each $n \in \mathbb N$ and therefore $A*_CB$ is residually amenable. \end{proof} Recall that the $FC$-centre of $G$ is defined to be the union of the finite conjugacy classes in $G$. This is a characteristic subgroup of $G$. \begin{theorem}Let $A, B$ be residually amenable groups and let $C$ be a finite group such that $C$ is contained in the $FC$-centre of both $A$ and $B$. Then, $A*_C B$ is residually amenable. \end{theorem} \begin{proof} Note first that the normal closures $N$ (resp. $M$) of $C$ in $A$ (resp. $B$) must be finite. Indeed $N$ is generated by the finitely many conjugates of $C$ in $A$, implying that the centre of $N$ has finite index in $N$ and therefore $N$ must be finite because it is generated by finitely many elements of finite order. Set $G=A*_C B$. Without loss of generality, we can assume that $A$ and $B$ are amenable. There is a surjective map $f$ from $G$ onto the direct product $A/N \times B/M $. The kernel $K$ of $f$ is the Bass-Serre fundamental group of a graph of groups in which the vertex stabilizers are isomorphic to either $N$ or $M$ and in particular, have bounded size. In this situation, \cite[Theorem 7.7]{ScottWall} applies to give that $K$ embeds in the fundamental group of a finite graph of finite groups and is hence virtually free (possibly not finitely generated). Choose a free normal subgroup $F$ of finite index in $K$ and form $R= \cap_{g\in G} F^g$. The subgroup $R$ is normal in $G$ and $K/R$ is locally finite because it is a subdirect product of isomorphic finite groups. This implies that $G$ is an extension of the free subgroup $R$ by an amenable group (a $K/R$-by-$(A/N \times B/M)$ group) and is therefore residually amenable. \end{proof} Finally we note that Question \ref{q1} has a positive answer when $C$ is Hausdorff in the profinite topologies of $A$ and $B$. \begin{theorem} Let $A, B$ be residually amenable groups and let $C$ be a finite subgroup of $A$ and $B$. Suppose that there are finite index subgroups $A_1 \leq A$ and $B_1 \leq B$ such that $C \cap A_1= 1=C \cap B_1$. Then $A*_C B$ is residually amenable. \end{theorem} \begin{proof} Set $G=A*_C B$. Without loss of generality, assume that $A$ and $B$ are amenable. By replacing $A_1$ and $B_1$ with smaller subgroups if necessary, we may assume that $A_1 \vartriangleleft A$ and $B_1 \vartriangleleft B$. The group $C$ embeds in both $A/A_1$ and $B/B_1$. The amalgam of finite groups $(A/A_1) *_C (B/B_1)$ is residually finite and so has a finite image $P$ where $C$ maps injectively. Clearly $G$ maps onto $P$ and the kernel of this map is a finite index normal subgroup $N$ of $G$ which intersects every conjugate of $C$ trivially. The subgroup $N$ acts on the Bass-Serre tree for $G$ with trivial edge stabilizers and amenable vertex stabilizers. Therefore $N$ is a free product of amenable groups (and possibly a free group) and is residually amenable. Since $N$ has finite index in $G$, it follows that $G$ is residually amenable. \end{proof} The above result is applicable to many classes of groups. For example, using that finitely generated nilpotent groups are residually finite, we easily obtain the following corollary. 
Let $\gamma_n(G)$ denote the $n$-th term of the lower central series of $G$. \begin{corollary} Let $A$ and $B$ be finitely generated residually amenable groups. Let $C$ be a finite subgroup of $A$ and $B$ such that $C \cap \gamma_n (A)=1 = C \cap \gamma_m(B)$ for some $m,n \in \mathbb N$. Then $A*_C B$ is residually amenable. \end{corollary} \textbf{Acknowledgement} We thank the referee for pointing out the validity of the second sentence of Theorem 1 and for many suggestions which improved the presentation. \end{document}
\begin{document} \keywords{random Fibonacci sequence; Rosen continued fraction; upper Lyapunov exponent; Stern-Brocot intervals; Hecke group} \subjclass[2000]{37H15, 60J05, 11J70} \begin{abstract} We study the generalized random Fibonacci sequences defined by their first nonnegative terms and for $n\ge 1$, $F_{n+2} = \lambda F_{n+1} \pm F_{n}$ (linear case) and $\widetilde F_{n+2} = |\lambda \widetilde F_{n+1} \pm \widetilde F_{n}|$ (non-linear case), where each $\pm$ sign is independent and either $+$ with probability $p$ or $-$ with probability $1-p$ ($0<p\le 1$). Our main result is that, when $\lambda$ is of the form $\lambda_k = 2\cos (\pi/k)$ for some integer $k\ge 3$, the exponential growth of $F_n$ for $0<p\le 1$, and of $\widetilde F_{n}$ for $1/k < p\le 1$, is almost surely positive and given by $$ \int_0^\infty \log x\, d\nu_{k, \rho} (x), $$ where $\rho$ is an explicit function of $p$ depending on the case we consider, taking values in $[0, 1]$, and $\nu_{k, \rho}$ is an explicit probability distribution on ${\ifmmode{\mathbbm{R}}\else{$\mathbbm{R}$}\fi}_+$ defined inductively on generalized Stern-Brocot intervals. We also provide an integral formula for $0<p\le 1$ in the easier case $\lambda\ge 2$. Finally, we study the variations of the exponent as a function of $p$. \end{abstract} \title{Almost-sure Growth Rate of Generalized Random Fibonacci sequences} \section{Introduction} Random Fibonacci sequences have been defined by Viswanath by $F_1=F_2=1$ and the random recurrence $F_{n+2}= F_{n+1} \pm F_{n} $, where the $\pm$ sign is given by tossing a balanced coin. In~\cite{viswanath2000}, he proved that $$ \sqrt[n]{|F_n|}\longrightarrow1.13198824\ldots \quad\mbox{a.s.} $$ and the logarithm of the limit is given by an integral expression involving a measure defined on Stern-Brocot intervals. Rittaud~\cite{rittaud2006} studied the exponential growth of ${\ifmmode{\mathbbm{E}}\else{$\mathbbm{E}$}\fi}(|F_n|)$: it is given by an explicit algebraic number of degree 3, which turns out to be strictly larger than the almost-sure exponential growth obtained by Viswanath. In~\cite{janvresse2007}, Viswanath's result has been generalized to the case of an unbalanced coin and to the so-called non-linear case $F_{n+2}= |F_{n+1} \pm F_{n}|$. Observe that this latter case reduces to the linear recurrence when the $\pm$ sign is given by tossing a balanced coin. A further generalization consists in fixing two real numbers, $\lambda$ and $\beta$, and considering the recurrence relation $F_{n+2}=\lambda F_{n+1}\pm \beta F_{n}$ (or $F_{n+2}=\vert \lambda F_{n+1}\pm \beta F_{n}\vert$), where the $\pm$ sign is chosen by tossing a balanced (or unbalanced) coin. By considering the modified sequence $G_{n}:=F_{n}/\beta^{n/2}$, which satisfies $G_{n+2}=\frac{\lambda}{\sqrt{\beta}} G_{n+1}\pm G_{n}$, we can always reduce to the case $\beta=1$. 
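Before turning to the general setting, it may help to see the kind of almost-sure growth rate discussed above estimated by direct simulation. The following sketch is ours and is purely illustrative (the function name and parameters are not taken from any reference); it iterates the recurrence $F_{n+2}=\lambda F_{n+1}\pm F_n$ with renormalization and returns an estimate of $\frac1n\log|F_n|$. Convergence is slow, but for $\lambda=1$ and $p=1/2$ the output should approach $\log(1.13198824\ldots)\approx 0.1239$, the constant recalled above.
\begin{verbatim}
import random, math

def growth_estimate(lam=1.0, p=0.5, n=10**6, seed=0):
    """Crude Monte Carlo estimate of (1/n) log|F_n| for the linear
    recurrence F_{n+2} = lam*F_{n+1} +/- F_n (sign + with prob. p).
    The pair is renormalized at each step to avoid overflow; the
    accumulated log of the normalizations recovers log|F_n|."""
    rng = random.Random(seed)
    a, b = 1.0, 1.0          # (F_1, F_2)
    logscale = 0.0
    for _ in range(n):
        sign = 1.0 if rng.random() < p else -1.0
        a, b = b, lam * b + sign * a
        m = max(abs(a), abs(b))
        if m > 0.0:
            a, b = a / m, b / m
            logscale += math.log(m)
    return (logscale + math.log(max(abs(b), 1e-300))) / n

# lam=1, p=1/2: should be close to log(1.13198824...) ~ 0.1239
print(growth_estimate())
\end{verbatim}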
The purpose of this article is thus to generalize the results presented in~\cite{janvresse2007} on the almost-sure exponential growth to random Fibonacci sequences with a multiplicative coefficient: $(F_n)_{n\ge 1}$ and $(\widetilde F_n)_{n\ge 1}$, defined inductively by their first two positive terms $F_1=\widetilde F_1 =a$, $F_2=\widetilde F_2=b$ and for all $n\ge 1$, \begin{equation} \label{linear case} F_{n+2} = \lambda F_{n+1} \pm F_{n} \qquad \mbox{(linear case)}, \end{equation} \begin{equation} \label{non-linear case} \widetilde F_{n+2} = |\lambda \widetilde F_{n+1} \pm \widetilde F_{n}| \qquad \mbox{(non-linear case)}, \end{equation} where each $\pm$ sign is independent and either $+$ with probability $p$ or $-$ with probability $1-p$ ($0<p\le 1$). We are not yet able to solve this problem in full generality. If $\lambda\ge2$, the linear and non-linear cases are essentially the same, and the study of the almost-sure growth rate can easily be handled (Theorem~\ref{th:case_2}). The situation $\lambda<2$ is much more difficult. However, the method developed in~\cite{janvresse2007} can be extended in a surprisingly elegant way to a countable family of $\lambda$'s, namely when $\lambda$ is of the form $\lambda_k = 2\cos (\pi/k)$ for some integer $k\ge 3$. The simplest case $\lambda_3=1$ corresponds to classical random Fibonacci sequences studied in~\cite{janvresse2007}. The link made in \cite{janvresse2007} and \cite{rittaud2006} between random Fibonacci sequences and continued fraction expansion remains valid for $\lambda_{k}=2\cos(\pi/k)$ and corresponds to so-called Rosen continued fractions, a notion introduced by Rosen in~\cite{rosen1954}. These values $\lambda_{k}$ are the only ones strictly smaller than $2$ for which the group (called {\em Hecke group}) of transformations of the hyperbolic half plane ${\ifmmode{\mathbbm{H}}\else{$\mathbbm{H}$}\fi}^2$ generated by the transformations $z\longmapsto -1/z$ and $z\longmapsto z+\lambda$ is discrete. In the linear case, the random Fibonacci sequence is given by a product of random i.i.d. matrices, and the classical way to investigate the exponential growth is to apply Furstenberg's formula~\cite{furstenberg1963}. This is the method used by Viswanath, and the difficulty lies in the determination of Furstenberg's invariant measure. In the non-linear case, the involved matrices are no more i.i.d., and the standard theory does not apply. Our argument is completely different and relies on some reduction process which will be developed in details in the linear case. Surprisingly, our method works easier in the non-linear case, for which we only outline the main steps. Our main results are the following. \begin{theo} \label{MainTheorem} Let $\lambda=\lambda_k=2\cos (\pi/k)$, for some integer $k\ge 3$. For any $\rho\in[0, 1]$, there exists an explicit probability distribution $\nu_{k, \rho}$ on ${\ifmmode{\mathbbm{R}}\else{$\mathbbm{R}$}\fi}_+$ defined inductively on generalized Stern-Brocot intervals (see Section~\ref{nu_rho} and Figure~\ref{mesure}), which gives the exponential growth of random Fibonacci sequences: \begin{itemize} \item {\bf Linear case:} Fix ${F}_1>0$ and ${F}_2>0$. For $p=0$, the sequence $(|F_n|)$ is periodic with period $k$. 
For any $p\in ]0,1]$, $$ \dfrac{1}{n} \log |F_n| \tend{n}{\infty}\gamma_{p,\lambda_k} = \int_0^\infty \log x\, d\nu_{k, \rho} (x) >0 $$ almost surely, where $$ \rho := \sqrt[k-1]{1-p_R} $$ and $p_R$ is the unique positive solution of $$\left(1-\dfrac{px}{p+(1-p)x}\right)^{k-1} = 1-x.$$ \item {\bf Non-linear case:} For $p\in ]1/k, 1]$ and any choice of ${\tilde F}_1>0$ and ${\tilde F}_2>0$, $$ \dfrac{1}{n} \log \widetilde F_n \tend{n}{\infty} \widetilde \gamma_{p,\lambda_k} = \int_0^\infty \log x\, d\nu_{k, \rho} (x) >0$$ almost surely, where $$ \rho := \sqrt[k-1]{1-p_R} $$ and $p_R$ is, for $p<1$, the unique positive solution of $$\left(1-\frac{px}{(1-p)+px}\right)^{k-1} = 1-x.$$ (For $p=1$, $p_R=1$.) \end{itemize} \end{theo} \begin{figure} \caption{The measure $\nu_{k, \rho}$.} \label{mesure} \end{figure} The behavior of $(\widetilde F_{n})$ when $p\le 1/k$ strongly depends on the choice of the initial values. This phenomenon was not noticed in~\cite{janvresse2007}, in which the initial values were set to $\widetilde F_{1}=\widetilde F_{2}=1$. However, we have the general result: \begin{theo} \label{Theorem2} Let $\lambda=\lambda_k=2\cos (\pi/k)$, for some integer $k\ge 3$. In the non-linear case, for $0\le p\le 1/k$, there exists almost surely a bounded subsequence $(\widetilde F_{n_j})$ of $(\widetilde F_{n})$ with density $(1-kp)$. \end{theo} The bounded subsequence in Theorem~\ref{Theorem2} satisfies $\widetilde F_{n_{j+1}} = |\lambda \widetilde F_{n_{j}}-\widetilde F_{n_{j-1}}|$ for any $j$, which corresponds to the non-linear case for $p=0$. We therefore concentrate on this case in Section~\ref{Sec:p=0} and provide necessary and sufficient conditions for $(\widetilde F_{n})$ to be ultimately periodic (see Proposition~\ref{p=0}). Moreover, we prove that $\widetilde F_{n}$ may decrease exponentially fast to $0$, but that the exponent depends on the ratio $\widetilde F_0/\widetilde F_1$. The critical value $1/k$ in the non-linear case is to be compared with the results obtained in the study of ${\ifmmode{\mathbbm{E}}\else{$\mathbbm{E}$}\fi}[\widetilde F_n]$ (see~\cite{janvresse2008}): it is proved that ${\ifmmode{\mathbbm{E}}\else{$\mathbbm{E}$}\fi}[\widetilde F_n]$ increases exponentially fast as soon as $p>(2-\lambda_k)/4$. When $\lambda\ge 2$, the linear case and the non-linear case are essentially the same. The study of the exponential growth of the sequence $(F_n)$ is much simpler, and we obtain the following result. \begin{theo}\label{th:case_2} Let $\lambda\ge 2$ and $0<p\le 1$. For any choice of $F_1>0$ and $F_2>0$, $$ \dfrac{1}{n} \log |F_n| \tend{n}{\infty} \gamma_{p,\lambda} = \int_0^\infty \log x\, d\mu_{p, \lambda} (x)>0\quad\mbox{a.s.}, $$ where $\mu_{p, \lambda}$ is an explicit probability measure supported on $\left[B, \lambda+\frac{1}{B}\right]$, with $B:=\dfrac{\lambda+\sqrt{\lambda^2-4}}{2}$ (see Section~\ref{Sec:lambda2} and Figure~\ref{mesure2}). \end{theo} \subsection*{Road map} The detailed proof of Theorem~\ref{MainTheorem} in the linear case is given in Sections~\ref{reductionSection}-\ref{Section:positivity}: Section~\ref{reductionSection} explains the reduction process on which our method relies. In Section~\ref{Sec:SternBrocot}, we introduce the generalized Stern-Brocot intervals in connection with the expansion of real numbers in Rosen continued fractions, which enables us to study the reduced sequence associated to $(F_n)$.
In Section~\ref{Sec:Coupling}, we come back to the original sequence $(F_n)$, and, using a coupling argument, we prove that its exponential growth is given by the integral formula. Then we prove the positivity of the integral in Section~\ref{Section:positivity}. The proof for the non-linear case, $p>1/k$, works with the same arguments (in fact it is even easier), and the minor changes are given at the beginning of Section~\ref{Section:non-linear}. The end of this section is devoted to the proof of Theorem~\ref{Theorem2}. The proof of Theorem~\ref{th:case_2} (for $\lambda\ge2$) is given in Section~\ref{Sec:lambda2}. In Section~\ref{Sec:croissance_p}, we study the variations of $\gamma_{p,\lambda}$ and $\widetilde\gamma_{p,\lambda}$ with~$p$. Conjectures concerning variations with $\lambda$ are given in Section~\ref{Sec:variations_k}. Connections with Embree-Trefethen's paper~\cite{embree1999}, who study a slight modification of our linear random Fibonacci sequences when $p=1/2$, are discussed in Section~\ref{Sec:Embree-Trefethen}. \section{Reduction: The linear case} \label{reductionSection} The sequence $(F_n)_{n\ge1}$ can be coded by a sequence $(X_n)_{n\ge 3}$ of i.i.d. random variables taking values in the alphabet $\{R, L\}$ with probability $(p, 1-p)$. Each $R$ corresponds to choosing the $+$ sign and each $L$ corresponds to choosing the $-$ sign, so that both can be interpreted as the right multiplication of $(F_{n-1}, F_n)$ by one of the following matrices: \begin{equation} \label{matrices} L := \begin{pmatrix} 0 & -1\\ 1 & \lambda \end{pmatrix} \qquad \mbox{and}\qquad R := \begin{pmatrix} 0 & 1\\ 1 & \lambda \end{pmatrix}. \end{equation} According to the context, we will interpret any finite sequence of $R$'s and $L$'s as the corresponding product of matrices. Therefore, for all $n\ge3$, $$(F_{n-1}, F_n) = (F_1, F_2)X_3\ldots X_n.$$ Our method relies on a reduction process of the sequence $(X_n)$ based on some relations satisfied by the matrices $R$ and $L$. Recalling the definition of $\lambda=2\cos(\pi/k)$, we can write the matrix $L$ as the product $P^{-1}DP$, where $$ D:=\begin{pmatrix} e^{i\pi/k} & 0 \\ 0 & e^{-i\pi/k} \end{pmatrix}, \quad P:=\begin{pmatrix} 1 & e^{i\pi/k} \\ 1 & e^{-i\pi/k} \end{pmatrix}, \quad\mbox{and } P^{-1}=\dfrac{1}{2i\sin(\pi/k)}\begin{pmatrix} -e^{-i\pi/k} & e^{i\pi/k} \\ 1 & -1 \end{pmatrix}. $$ As a consequence, we get that for any integer $j$, \begin{equation} \label{PowersOfL} L^j =\dfrac{1}{\sin(\pi/k)}\begin{pmatrix} -\sin\frac{(j-1)\pi}{k} & -\sin\frac{j\pi}{k} \\ \sin\frac{j\pi}{k} & \sin\frac{(j+1)\pi}{k} \end{pmatrix}, \end{equation} and \begin{equation} \label{RtimesPowersOfL} RL^j =\dfrac{1}{\sin(\pi/k)}\begin{pmatrix} \sin\frac{j\pi}{k} & \sin\frac{(j+1)\pi}{k} \\ \sin\frac{(j+1)\pi}{k} & \sin\frac{(j+2)\pi}{k} \end{pmatrix}. \end{equation} In particular, for $j=k-1$ we get the following relations satisfied by $R$ and $L$, on which is based our reduction process: \begin{equation} \label{reduction} RL^{k-1} = \begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix}, \quad RL^{k-1}R = -L \quad\mbox{and}\quad RL^{k-1}L=-R. \end{equation} Moreover, $L^k = -{\ifmmode{\mathscr{I}}\else{$\mathscr{I}$}\fi}d$. We deduce from \eqref{reduction} that, in products of $R$'s and $L$'s, we can suppress all patterns $RL^{k-1}$ provided we flip the next letter. This will only affect the sign of the resulting matrix. 
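As a quick sanity check of the algebraic relations above, the following short script (ours, purely illustrative; it is not part of the argument) verifies \eqref{PowersOfL}, \eqref{RtimesPowersOfL}, \eqref{reduction} and the identity $L^k=-\mathrm{Id}$ numerically for a sample value of $k$.
\begin{verbatim}
import numpy as np

def check_relations(k=5):
    """Numerical check of the matrix relations for lambda = 2 cos(pi/k)."""
    lam = 2 * np.cos(np.pi / k)
    L = np.array([[0.0, -1.0], [1.0, lam]])
    R = np.array([[0.0, 1.0], [1.0, lam]])
    RLk1 = R @ np.linalg.matrix_power(L, k - 1)
    assert np.allclose(RLk1, np.diag([1.0, -1.0]))        # RL^{k-1}
    assert np.allclose(RLk1 @ R, -L)                       # RL^{k-1} R = -L
    assert np.allclose(RLk1 @ L, -R)                       # RL^{k-1} L = -R
    assert np.allclose(np.linalg.matrix_power(L, k), -np.eye(2))  # L^k = -Id
    s = np.sin(np.pi / k)
    for j in range(1, 2 * k):                              # closed form for L^j
        Lj = np.array([[-np.sin((j - 1) * np.pi / k), -np.sin(j * np.pi / k)],
                       [ np.sin(j * np.pi / k),  np.sin((j + 1) * np.pi / k)]]) / s
        assert np.allclose(np.linalg.matrix_power(L, j), Lj)

check_relations(5)
\end{verbatim}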
To formalize the reduction process, we associate to each finite sequence $x=x_3\ldots x_n\in\{R,L\}^{n-2}$ a (generally) shorter word $\mathop{\rm Red}(x)=y_3\cdots y_{j}$ by the following induction. If $n=3$, $y_3 = x_3$. If $n>3$, $\mathop{\rm Red}(x_3\ldots x_n)$ is deduced from $\mathop{\rm Red}(x_3\ldots x_{n-1})$ in two steps. {\bf Step 1}: Add one letter ($R$ or $L$, see below) to the end of $\mathop{\rm Red}(x_3\ldots x_{n-1})$. {\bf Step 2}: If the new word ends with the suffix $RL^{k-1}$, remove this suffix. The letter which is added in step 1 depends on what happened when constructing $\mathop{\rm Red}(x_3\ldots x_{n-1})$: \begin{itemize} \item If $\mathop{\rm Red}(x_3\ldots x_{n-1})$ was simply obtained by appending one letter, we add $x_{n}$ to the end of $\mathop{\rm Red}(x_3\ldots x_{n-1})$. \item Otherwise, we had removed the suffix $RL^{k-1}$ when constructing $\mathop{\rm Red}(x_3\ldots x_{n-1})$; we then add $\overline{x_n}$ to the end of $\mathop{\rm Red}(x_3\ldots x_{n-1})$, where $\overline{R}:= L$ and $\overline{L}:= R$. \end{itemize} Example: Let $x=RLRLLLRLL$ and $k=4$. Then, the reduced sequence is given by $\mathop{\rm Red}(x) = R$. Observe that by construction, $\mathop{\rm Red}(x)$ never contains the pattern $RL^{k-1}$. Let us introduce the \emph{reduced random Fibonacci sequence} $(F_n^r)$ defined by $$(F_{n-1}^r,F_n^r) := (F_1,F_2) \mathop{\rm Red}(X_3\ldots X_n).$$ Note that we have $ F_n=\pm F_n^r$ for all $n$. From now on, we will therefore concentrate our study on the reduced sequence $\mathop{\rm Red}(X_3\ldots X_n)$. We will denote its length by $j(n)$ and its last letter by $Y(n)$. The proof of Lemma~2.1 in~\cite{janvresse2007} can be directly adapted to prove the following lemma. \begin{lemma} \label{Survival} We denote by $|W|_R$ the number of $R$'s in the word $W$. We have \begin{equation} \label{RnR} |\mathop{\rm Red}(X_3\ldots X_n)|_R\tend{n}{\infty}+\infty\qquad\mbox{a.s.} \end{equation} In particular, the length $j(n)$ of $\mathop{\rm Red}(X_3\ldots X_n)$ satisfies $$ j(n) \tend{n}{\infty} +\infty\qquad\mbox{a.s.} $$ \end{lemma} \subsection{Survival probability of an $R$} \label{section:survival} We say that the last letter of $\mathop{\rm Red}(X_3\ldots X_n)$ \emph{survives} if, for all $m\ge n$, $j(m)\ge j(n)$. In other words, this letter survives if it is never removed during the subsequent steps of the reduction. By construction, the survival of the last letter $Y(n)$ of $\mathop{\rm Red}(X_3\ldots X_n)$ only depends on its own value and the future $X_{n+1},X_{n+2}\ldots$. Let $$ p_R:={\ifmmode{\mathbbm{P}}\else{$\mathbbm{P}$}\fi}\Bigl( Y(n)\mbox{ survives }\Big|Y(n)=R \mbox{ has been appended at time }n\Bigr). $$ A consequence of Lemma~\ref{Survival} is that $p_R >0$. We now want to express $p_R$ as a function of $p$. Observe that $Y(n)=R$ survives if and only if, after the subsequent steps of the reduction, it is followed by $L^jR$ where $0\le j\le k-2$, and the latter $R$ survives. Recall that the probability of appending an $R$ after a deletion of the pattern $RL^{k-1}$ is $1-p$, whereas it is equal to $p$ if it does not follow a deletion. Assume that $Y(n)=R$ has been appended at time $n$. We want to compute the probability for this $R$ to survive and to be followed by $L^jR$ ($0\le j\le k-2$) after the reduction.
This happens with probability \begin{eqnarray*} p_j & := & {\ifmmode{\mathbbm{P}}\else{$\mathbbm{P}$}\fi}\left( R \mbox{ be followed by } \underbrace{\Bigl(\!\!\overbrace{[\mbox{\sout{$R\ldots$}}]}^{\substack{\ell\ge 0 \\ \mbox{\scriptsize\ deletions}}}\!\!\! L\Bigr)\ \ldots\ \Bigl([\mbox{\sout{$R\ldots$}}] \,L\Bigr)}_{j\mbox{\scriptsize\ times}} \quad [\mbox{\sout{$R\ldots$}}]\, \overbrace{R}^{\mbox{\scriptsize survives}}\right) \\ & = &\left( (1-p)+ p\sum_{\ell\ge1} (1-p_R)^\ell (1-p)^{\ell-1}p \right)^j p\sum_{\ell\ge0} (1-p_R)^\ell (1-p)^{\ell} p_R\\ & = & \left(1-\dfrac{pp_R}{p+(1-p)p_R}\right)^j \dfrac{pp_R}{p+(1-p)p_R}. \end{eqnarray*} Writing $p_R = \sum_{j=0}^{k-2}p_j$, we get that $p_R$ is a solution of the equation \begin{equation} \label{survival-pr} g(x)=0,\quad\mbox{where }g(x):= 1- \dfrac{px}{p+(1-p)x} - (1-x)^{1/(k-1)}. \end{equation} Observe that $g(0)=0$, and that $g$ is strictly convex. Therefore there exists at most one $x>0$ satisfying $g(x)=0$, and it follows that $p_R$ is the unique positive solution of~\eqref{survival-pr}. \subsection{Distribution law of surviving letters} A consequence of Lemma~\ref{Survival} is that the sequence of surviving letters $$ (S_j)_{j\ge3} = \lim_{n\to\infty} \mathop{\rm Red}(X_3\ldots X_n) $$ is well defined and can be written as the concatenation of a certain number $s\ge0$ of starting $L$'s, followed by infinitely many blocks: $$ S_1S_2 \ldots = L^s B_1B_2\ldots $$ where $s\ge 0$ and, for all $\ell\ge1$, $B_\ell\in\{R, RL, \ldots, RL^{k-2}\}$. This block decomposition will play a central role in our analysis. We deduce from Section~\ref{section:survival} the probability distribution of this sequence of blocks: \begin{lemma} \label{law} The blocks $(B_\ell)_{\ell\ge1}$ are i.i.d. with common distribution law ${\ifmmode{\mathbbm{P}}\else{$\mathbbm{P}$}\fi}_{\rho}$ defined as follows \begin{equation} \label{defPrho} {\ifmmode{\mathbbm{P}}\else{$\mathbbm{P}$}\fi}_{\rho}(B_1 = RL^j):= \frac{\rho^j}{\sum_{m=0}^{k-2}\rho^m}\ ,\quad 0\le j\le k-2, \end{equation} where $\rho := 1-\dfrac{pp_R}{p+(1-p)p_R}$ and $p_R$ is the unique positive solution of~\eqref{survival-pr}. \end{lemma} In \cite{janvresse2007}, where the case $k=3$ was studied, we used the parameter $\alpha = 1/(1+\rho)$ instead of $\rho$. Observe that $\rho=\left( (1-p)+ p\sum_{\ell\ge1} (1-p_R)^\ell (1-p)^{\ell-1}p \right)$ can be interpreted as the probability that the sequence of surviving letters starts with an $L$. Since an $R$ does not survive if it is followed by $k-1$ $L$'s, this explains why the probability $1-p_R$ that an $R$ does not survive is equal to $\rho^{k-1}$. \begin{proof} Observe that the event $E_n:=$``$Y(n)=R$ has been appended at time $n$ and survives'' is the intersection of the two events ``$Y(n)=R$ has been appended at time $n$'', which is measurable with respect to $\sigma(X_i,\ i\le n)$, and ``If $Y(n)=R$ has been appended at time $n$, then this $R$ survives'', which is measurable with respect to $\sigma(X_i,\ i>n)$. It follows that, conditioned on $E_n$, $\sigma(X_i,\ i\le n)$ and $\sigma(X_i,\ i> n)$ remain independent.
Thus the blocks in the sequence of surviving letters appear independently, and their distribution is given by $${\ifmmode{\mathbbm{P}}\else{$\mathbbm{P}$}\fi}_{\rho}(B_1 = RL^j)=\frac{p_j}{p_R} = \frac{\rho^j}{\sum_{m=0}^{k-2}\rho^m}\ ,\quad 0\le j\le k-2.$$ \end{proof} \section{Rosen continued fractions and generalized Stern-Brocot intervals} \label{Sec:SternBrocot} \subsection{The quotient Markov chain} For $\ell\ge1$, let us denote by $n_\ell$ the time when the $\ell$-th surviving $R$ is appended, and set $$Q_\ell:= \dfrac{F_{n_{\ell+1}-1}^r}{F_{n_{\ell+1}-2}^r}, \quad\ell\ge0. $$ $Q_\ell$ is the quotient of the last two terms once the $\ell$-th definitive block of the reduced sequence has been written. Observe that the right-product action of blocks $B\in\{R, RL, \ldots, RL^{k-2}\}$ acts on the quotient $F_n^r/F_{n-1}^r$ in the following way: For $0\le j\le k-2$, for any $(a,b)\in{\ifmmode{\mathbbm{R}}\else{$\mathbbm{R}$}\fi}^*\times{\ifmmode{\mathbbm{R}}\else{$\mathbbm{R}$}\fi}$, if we set $(a',b'):= (a,b)RL^j$, then $$ \dfrac{b'}{a'} = f^j\circ f_0 \left(\dfrac{b}{a}\right), $$ where $f_0(q):=\lambda+1/q$ and $f(q):=\lambda -1/q$. For short, we will denote by $f_j$ the function $f^j\circ f_0$. Observe that $f_j$ is an homographic function associated to the matrix $RL^j$ in the following way: To the matrix $\begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix}$ corresponds the homographic function $q\mapsto\frac{\beta+\delta q}{\alpha+\gamma q}$. It follows from Lemma~\ref{law} that $(Q_\ell)_{\ell\ge1}$ is a real-valued Markov chain with probability transitions $$ {\ifmmode{\mathbbm{P}}\else{$\mathbbm{P}$}\fi}\left( Q_{\ell+1} = f_j(q) | Q_{\ell} = q \right) = \frac{\rho^j}{\sum_{m=0}^{k-2}\rho^m}\ ,\quad 0\le j\le k-2. $$ \subsection{Generalized Stern-Brocot intervals and the measure $\nu_{k, \rho}$} \label{nu_rho} Let us define subintervals of ${\ifmmode{\mathbbm{R}}\else{$\mathbbm{R}$}\fi}$: for $0\le j\le k-2$, set $I_j:= f_j([0, +\infty ])$. These intervals are of the form $$I_j =[b_{j+1}, b_j], \mbox{ where } b_0=+\infty, \ b_1=\lambda = f_0(+\infty)=f_1(0), \ b_{j+1}=f(b_j) = f_{j}(+\infty)=f_{j+1}(0).$$ Observe that $b_{k-1}=f_{k-1}(0)=0$ since $RL^{k-1}=\begin{pmatrix}1& 0\\0&-1\end{pmatrix}$. Therefore, $(I_j)_{0\le j\le k-2}$ is a subdivision of $[0, +\infty ]$. More generally, we set $$ I_{j_1, j_2, \dots, j_\ell}:= f_{j_1}\circ f_{j_2}\circ \cdots \circ f_{j_\ell} ([0, +\infty ]), \quad \forall (j_1, j_2, \dots, j_\ell)\in \{0, \dots, k-2\}^\ell. $$ For any $\ell\ge1$, this gives a subdivision ${\ifmmode{\mathscr{I}}\else{$\mathscr{I}$}\fi}(\ell)$ of $[0, +\infty ]$ since $$I_{j_1, j_2, \dots, j_{\ell-1}} = \bigcup_{j_\ell=0}^{k-2} I_{j_1, j_2, \dots, j_\ell}.$$ When $k=3$ ($\lambda =1$), this procedure provides subdivisions of $[0, +\infty ]$ into Stern-Brocot intervals. \begin{lemma} \label{generation} The $\sigma$-algebra generated by ${\ifmmode{\mathscr{I}}\else{$\mathscr{I}$}\fi}(\ell)$ increases to the Borel $\sigma$-algebra on ${\ifmmode{\mathbbm{R}}\else{$\mathbbm{R}$}\fi}_+$. \end{lemma} We postpone the proof of this lemma to the next section. Observe that for any $q\in{\ifmmode{\mathbbm{R}}\else{$\mathbbm{R}$}\fi}_+$, ${\ifmmode{\mathbbm{P}}\else{$\mathbbm{P}$}\fi}(Q_{\ell}\in I_{j_1, j_2, \dots, j_{\ell}}| Q_0=q)=\frac{\rho^{j_1+\cdots+j_\ell}}{(\sum_0^{k-2}\rho^m)^\ell}$. 
Therefore, the probability measure $\nu_{k, \rho}$ on ${\ifmmode{\mathbbm{R}}\else{$\mathbbm{R}$}\fi}_+$ defined by $$ \nu_{k, \rho}(I_{j_1, j_2, \dots, j_{\ell}}) := \frac{\rho^{j_1+\cdots+j_\ell}}{(\sum_0^{k-2}\rho^m)^\ell}\ $$ is invariant for the Markov chain $(Q_\ell)$. The fact that $\nu_{k, \rho}$ is the unique invariant probability for this Markov chain comes from the following lemma. \begin{lemma} \label{L+} There exists almost surely $L_+\ge0$ such that for all $\ell\ge L_+$, $Q_{\ell}>0$. \end{lemma} \begin{proof} For any $q\in{\ifmmode{\mathbbm{R}}\else{$\mathbbm{R}$}\fi}\setminus\{0\}$, either $f_0(q)>0$, or $f_1(q)=\lambda-1/f_0(q)>0$. Hence, for any $\ell\ge0$, $${\ifmmode{\mathbbm{P}}\else{$\mathbbm{P}$}\fi}(Q_{\ell+1}>0|Q_\ell=q)\ge \frac{\rho}{\sum_0^{k-2}\rho^m}\,. $$ It follows that ${\ifmmode{\mathbbm{P}}\else{$\mathbbm{P}$}\fi}(\forall\ell\ge0,\ Q_\ell<0)=0$, and since $Q_\ell>0 \Longrightarrow Q_{\ell+1}>0$, the lemma is proved. \end{proof} To a given finite sequence of blocks $(RL^{j_\ell}), \ldots, (RL^{j_1})$, we associate the generalized Stern-Brocot interval $I_{j_1, j_2, \dots, j_{\ell}}$. If we extend the sequence of blocks leftwards, we get smaller and smaller intervals. Adding infinitely many blocks, we get in the limit a single point corresponding to the intersection of the intervals, which follows the law $\nu_{k, \rho}$. \subsection{Link with Rosen continued fractions} Recall (see~\cite{rosen1954}) that, since $1\le \lambda <2$, any real number $q$ can be written as $$ q = a_0\lambda + \cfrac{1}{a_1\lambda + \cfrac{1}{\ddots+\cfrac{1}{a_n \lambda +_{\ddots}}}} $$ where $(a_n)_{n\ge0}$ is a finite or infinite sequence, with $a_n\in{\ifmmode{\mathbbm{Z}}\else{$\mathbbm{Z}$}\fi}\setminus\{0\}$ for $n\ge1$. This expression will be denoted by $[a_0,\ldots, a_n,\ldots]_\lambda$. It is called a \emph{$\lambda$-Rosen continued fraction expansion} of $q$, and is not unique in general. When $\lambda=1$ (\textit{i.e.} for $k=3$), we recover the generalized continued fraction expansions in which the partial quotients are positive or negative integers. Observe that the functions $f_j$ are easily expressed in terms of Rosen continued fraction expansions. The Rosen continued fraction expansion of $f_j(q)$ is the concatenation of $(j+1)$ alternated $\pm1$ with the expansion of $\pm q$ according to the parity of $j$: \begin{equation} \label{fj} f_j([a_0,\ldots, a_n,\ldots]_\lambda) = \begin{cases} [\underbrace{1,-1,1,\ldots,1}_{(j+1) \mbox{ terms}},a_0,\ldots, a_n,\ldots]_\lambda & \mbox{ if $j$ is even}\\ [\underbrace{1,-1,1,\ldots,-1}_{(j+1) \mbox{ terms}},-a_0,\ldots, -a_n,\ldots]_\lambda & \mbox{ if $j$ is odd}. \end{cases} \end{equation} For any $\ell\ge1$, let ${\ifmmode{\mathscr{E}}\else{$\mathscr{E}$}\fi}(\ell)$ be the set of endpoints of the subdivision ${\ifmmode{\mathscr{I}}\else{$\mathscr{I}$}\fi}(\ell)$. The finite elements of ${\ifmmode{\mathscr{E}}\else{$\mathscr{E}$}\fi}(1)$ can be written as $$ b_j=f_j(0)=[\underbrace{1,-1,1,\ldots,\pm1}_{j \mbox{ terms}}]_\lambda \quad \forall\ 1\le j\le k-1.$$ In particular for $j=k-1$ we get a finite expansion of $b_{k-1}=0$. Moreover, by~\eqref{fj}, $$b_0=f_0(0)=\infty=[1,\underbrace{1,-1,1,\ldots,\pm1}_{k-1 \mbox{ terms}}]_\lambda.$$ Iterating~\eqref{fj}, we see that for all $\ell\ge1$, the elements of ${\ifmmode{\mathscr{E}}\else{$\mathscr{E}$}\fi}(\ell)$ can be written as a finite $\lambda$-Rosen continued fraction with coefficients in $\{-1,1\}$.
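For the reader who wishes to evaluate the integral of Theorem~\ref{MainTheorem} in practice, the following sketch (ours; it plays no role in the proofs, and the function names are not from any reference) first solves~\eqref{survival-pr} for $p_R$ by bisection, then simulates the Markov chain $(Q_\ell)$ by drawing blocks $RL^j$ from the law of Lemma~\ref{law} and averages $\log Q_\ell$; granting the ergodicity of the chain with respect to its unique invariant measure $\nu_{k,\rho}$, this Monte Carlo average approximates $\gamma_{p,\lambda_k}=\int_0^\infty\log x\,d\nu_{k,\rho}(x)$.
\begin{verbatim}
import math, random

def survival_prob(p, k, tol=1e-12):
    """Unique positive root of g(x) = 1 - px/(p+(1-p)x) - (1-x)^{1/(k-1)},
    cf. equation (survival-pr); simple bisection, valid for 0 < p < 1."""
    if p >= 1.0:
        return 1.0
    def g(x):
        return 1.0 - p*x/(p + (1.0-p)*x) - (1.0-x)**(1.0/(k-1))
    lo, hi = tol, 1.0 - tol          # g(lo) < 0 < g(hi)
    while hi - lo > tol:
        mid = 0.5*(lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

def gamma_estimate(p, k, n_blocks=200000, burn=1000, seed=0):
    """Heuristic estimate of int log x dnu_{k,rho}(x) by simulating the
    quotient chain Q_{l+1} = f_j(Q_l), blocks RL^j drawn with prob. ~ rho^j."""
    lam = 2.0*math.cos(math.pi/k)
    pR = survival_prob(p, k)
    rho = 1.0 - p*pR/(p + (1.0-p)*pR)       # = (1-pR)^{1/(k-1)}
    weights = [rho**j for j in range(k-1)]  # j = 0, ..., k-2
    rng = random.Random(seed)
    q, total, count = 1.0, 0.0, 0
    for i in range(n_blocks):
        j = rng.choices(range(k-1), weights=weights)[0]
        q = lam + 1.0/q                     # f_0
        for _ in range(j):                  # then f: q -> lam - 1/q, j times
            q = lam - 1.0/q
        if i >= burn:
            total += math.log(q)
            count += 1
    return total / count

# k=3 (lambda=1), p=1/2: should be close to log(1.13198824...) ~ 0.1239
print(gamma_estimate(0.5, 3))
\end{verbatim}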
\begin{prop} \label{finiteCF} The set $\bigcup_{\ell\ge1}{\ifmmode{\mathscr{E}}\else{$\mathscr{E}$}\fi}_\ell$ of all endpoints of generalized Stern-Brocot intervals is the set of all nonnegative real numbers admitting a finite $\lambda$-Rosen continued fraction expansion. \end{prop} The proof uses the two following lemmas \begin{lemma} \label{inverse} $$ f_j(x)=\dfrac{1}{f_{k-2-j}(1/x)}, \quad\forall\ 0\le j\le k-2. $$ \end{lemma} \begin{proof} From \eqref{RtimesPowersOfL}, we get $$RL^{k-2}=\begin{pmatrix} \lambda & 1 \\ 1 & 0 \end{pmatrix}, \quad\mbox{hence}\quad f_{k-2}(x)=\frac{1}{\lambda+x}.$$ Therefore, $f_{k-2}(1/x)=1/f_{0}(x)$ and the statement is true for $j=0$. Assume now that the result is true for $j\ge0$. We have $$ f_{j+1}(x) = \lambda - \dfrac{1}{f_j(x)} = \lambda - f_{k-2-j}\left(\dfrac{1}{x}\right) = \lambda - f\circ f_{k-3-j}\left(\dfrac{1}{x}\right) = \dfrac{1}{f_{k-3-j}\left(\frac{1}{x}\right)}\ , $$ so the result is proved by induction. \end{proof} \begin{lemma} \label{endpoints} For any $\ell\ge1$, the set ${\ifmmode{\mathscr{E}}\else{$\mathscr{E}$}\fi}(\ell)$ of endpoints of the subdivision ${\ifmmode{\mathscr{I}}\else{$\mathscr{I}$}\fi}(\ell)$ is invariant by $x\mapsto 1/x$. Moreover, the largest finite element of ${\ifmmode{\mathscr{E}}\else{$\mathscr{E}$}\fi}(\ell)$ is $\ell\lambda$ and the smallest positive one is $1/\ell\lambda$. \end{lemma} \begin{proof} Recall that the elements of ${\ifmmode{\mathscr{E}}\else{$\mathscr{E}$}\fi}(1)$ are of the form $b_j=f_{j-1}(\infty)=f_{j}(0)$, and the largest finite endpoint is $b_1=\lambda$. Hence, the result for $\ell=1$ is a direct consequence of Lemma~\ref{inverse}. Assume now that the result is true for $\ell\ge1$. Consider $b\in{\ifmmode{\mathscr{E}}\else{$\mathscr{E}$}\fi}(\ell+1)\setminus{\ifmmode{\mathscr{E}}\else{$\mathscr{E}$}\fi}(\ell)$. There exists $0\le j\le k-2$ and $b'\in{\ifmmode{\mathscr{E}}\else{$\mathscr{E}$}\fi}(\ell)$ such that $b=f_j(b')$. Since $1/b'$ is also in ${\ifmmode{\mathscr{E}}\else{$\mathscr{E}$}\fi}(\ell)$, we see from Lemma~\ref{inverse} that $1/b=f_{k-2-j}(1/b')\in{\ifmmode{\mathscr{E}}\else{$\mathscr{E}$}\fi}(\ell+1)$. Hence ${\ifmmode{\mathscr{E}}\else{$\mathscr{E}$}\fi}(\ell+1)$ is invariant by $x\mapsto 1/x$. Now, since $f_0$ is decreasing, the largest finite endpoint of ${\ifmmode{\mathscr{E}}\else{$\mathscr{E}$}\fi}(\ell+1)$ is $f_0(1/\ell\lambda)=(\ell+1)\lambda$, and the smallest positive endpoint of ${\ifmmode{\mathscr{I}}\else{$\mathscr{I}$}\fi}(\ell+1)$ is $1/(\ell+1)\lambda$. \end{proof} \begin{proof}[Proof of Proposition~\ref{finiteCF}] The set of nonnegative real numbers admitting a finite $\lambda$-Rosen continued fraction expansion is the smallest subset of ${\ifmmode{\mathbbm{R}}\else{$\mathbbm{R}$}\fi}_+$ containing $0$ which is invariant under $x\mapsto 1/x$ and $x\mapsto x+\lambda$. By Lemma~\ref{endpoints}, the set $\bigcup_{\ell\ge1}{\ifmmode{\mathscr{E}}\else{$\mathscr{E}$}\fi}_\ell$ is invariant under $x\mapsto 1/x$. Moreover, it is also invariant by $x\mapsto f_{k-2}(x)=1/(x+\lambda)$, and contains $b_{k-1}=0$. \end{proof} \begin{remark} The preceding proposition generalizes the well-known fact that the endpoints of Stern-Brocot intervals are the rational numbers, that is real numbers admitting a finite continued fraction expansion. 
\end{remark} \begin{proof}[Proof of Lemma~\ref{generation}] This is a direct consequence of Proposition~\ref{finiteCF} and the fact that the set of numbers admitting a finite $\lambda$-Rosen continued fraction expansion is dense in ${\ifmmode{\mathbbm{R}}\else{$\mathbbm{R}$}\fi}$ for any $\lambda<2$ (see \cite{rosen1954}). \end{proof} \section{Coupling with a two-sided stationary process} \label{Sec:Coupling} If $|F_{n+1}/F_n|$ was a stationary sequence with distribution $\nu_{k, \rho}$, then a direct application of the ergodic theorem would give the convergence stated in Theorem~\ref{MainTheorem}. The purpose of this section is to prove via a coupling argument that everything goes as if it was the case. For this, we embed the sequence $(X_n)_{n\ge 3}$ in a doubly-infinite i.i.d. sequence $(X_n^*)_{n\in{\ifmmode{\mathbbm{Z}}\else{$\mathbbm{Z}$}\fi}}$ with $X_n=X_n^*$ for all $n\ge 3$. We define the reduction of $(X^*)_{-\infty< j\le n}$, which gives a left-infinite sequence of i.i.d. blocks, and denote by $q_n^*$ the corresponding limit point, which follows the law $\nu_{k, \rho}$. We will see that for $n$ large enough, the last $\ell$ blocks of $\mathop{\rm Red}(X_3\dots X_n)$ and $\mathop{\rm Red}((X^*)_{-\infty< j\le n})$ are the same. Therefore, the quotient $q_n:= F_n^r/F_{n-1}^r$ is well-approximated by $q_n^*$, and an application of the ergodic theorem to $q_n^*$ will give the announced result. \subsection{Reduction of a left-infinite sequence} \label{reductioninfinie} We will define the reduction of a left-infinite i.i.d. sequence $(X^*)_{-\infty}^0$ by considering the successive reduced sequence $\mathop{\rm Red}(X_{-n}^*\dots X_0^*)$. \begin{prop} \label{stability} For all $\ell\ge 1$, there exists almost surely $N(\ell)$ such that the last $\ell$ blocks of $\mathop{\rm Red}(X_{-n}^*\dots X_0^*)$ are the same for any $n\ge N(\ell)$. \end{prop} This allows us to define almost surely the reduction of a left-infinite i.i.d. sequence $(X^*)_{-\infty}^0$ as the left-infinite sequence of blocks obtained in the limit of $\mathop{\rm Red}(X_{-n}^*\dots X_0^*)$ as $n\to\infty$. Let us call \emph{excursion} any finite sequence $w_1\dots w_m$ of $R$'s and $L$'s such that $\mathop{\rm Red} (w_1\dots w_m)=\emptyset$. We say that a sequence is {\em proper} if its reduction process does not end with a deletion. This means that the next letter is not flipped during the reduction. The proof of the proposition will be derived from the following lemmas. \begin{lemma} \label{Lem:excursion} If there exists $n>0$ such that $X_{-n}^*\dots X_{-1}^*$ is not proper, then $X_0^*$ is preceded by a unique excursion. \end{lemma} \begin{proof} We first prove that an excursion can never be a suffix of a strictly larger excursion. Let $W=W_1RW'$ be an excursion, with $RW'$ another excursion. Then $WL=W_1RW'L=\pm R$ and $RW'L=\pm R$, which implies that $W_1=\pm{\ifmmode{\mathscr{I}}\else{$\mathscr{I}$}\fi}d$. It follows that $\mathop{\rm Red}(W_1)=\begin{pmatrix} \pm 1 & 0 \\ 0 & \pm 1 \end{pmatrix}$. Observe that $\mathop{\rm Red}(W_1)$ cannot start with $L$'s since $\mathop{\rm Red} (W_1 RW')=\emptyset$. Therefore, it is a concatenation of $s$ blocks, corresponding to some function $f_{j_1}\circ\cdots f_{j_s}$ which cannot be $x\mapsto\pm x$ unless $s=0$. But $s=0$ means that $\mathop{\rm Red}(W_1)=\emptyset$, so $\mathop{\rm Red}(W) = \mathop{\rm Red} (LW')=\emptyset$, which is impossible. 
Observe first that, if $X_0^*$ is not flipped during the reduction of $X_{-(n-1)}^*\dots X_0^*$ but is flipped during the reduction of $X_{-n}^*\dots X_0^*$, then $X_{-n}^*$ is an $R$ which is removed during the reduction process of $X_{-n}^*\dots X_0^*$. In particular, this is true if we choose $n$ to be the smallest integer such that $X_0^*$ is flipped during the reduction of $X_{-n}^*\dots X_0^*$. Therefore there exists $0\le j<n$ such that $X_{-n}^*\dots X_{-(j+1)}^*$ is an excursion. If $j=0$ we are done; otherwise the same observation proves that $X_{-j}^*$ is an $L$ which is flipped during the reduction process of $X_{-n}^*\dots X_{-j}^*$. Therefore, $X_0^*$ is flipped during the reduction of $RX_{-(j-1)}^*\dots X_0^*$, but not during the reduction of $X_{-\ell}^*\dots X_0^*$ for any $\ell \le j-1$. Iterating the same argument finitely many times proves that $\mathop{\rm Red}(X_{-n}^*\dots X_{-1}^*)=\emptyset$. \end{proof} \begin{lemma} $$\sum_{w \mbox{\begin{scriptsize} excursions\end{scriptsize}}} {\ifmmode{\mathbbm{P}}\else{$\mathbbm{P}$}\fi}(w) < 1 .$$ \end{lemma} \begin{proof} $X_0$ is an $R$ which does not survive during the reduction process if and only if it is the beginning of an excursion. By considering the longest such excursion, we get $$ p(1-p_R) = \sum_{w \mbox{\begin{scriptsize} excursions\end{scriptsize}}} {\ifmmode{\mathbbm{P}}\else{$\mathbbm{P}$}\fi}(w) \Bigl[(1-p) p_R+p\Bigr]. $$ Hence, \begin{equation} \label{eq:excursion} \sum_{w \mbox{\begin{scriptsize} excursions\end{scriptsize}}} {\ifmmode{\mathbbm{P}}\else{$\mathbbm{P}$}\fi}(w) = \dfrac{p(1-p_R)}{(1-p) p_R+p} <1. \end{equation} \end{proof} We deduce from the two preceding lemmas: \begin{corollary} \label{coro} There is a positive probability that for all $n>0$ the sequence $X_{-n}^*\dots X_{-1}^*$ is proper. \end{corollary} \begin{proof}[Proof of Proposition~\ref{stability}] We deduce from Corollary~\ref{coro} that with probability 1 there exist infinitely many $j$'s such that \begin{itemize} \item $X^*_{-j}$ is an $R$ which survives in the reduction of $X_{-j}^*\dots X_0^*$; \item $X_{-n}^*\dots X^*_{-j-1}$ is proper for all $n\ge j$. \end{itemize} For such $j$, the contribution of $X_{-j}^*\dots X_0^*$ to $\mathop{\rm Red}(X_{-n}^*\dots X_0^*)$ is the same for any $n\ge j$. \end{proof} The same argument allows us to define almost surely $\mathop{\rm Red}((X^*)_{-\infty}^{n})$ for all $n\in{\ifmmode{\mathbbm{Z}}\else{$\mathbbm{Z}$}\fi}$, which is a left-infinite sequence of blocks. Observe that we can associate to each letter of this sequence of blocks the time $t\le n$ at which it was appended. We number the blocks by defining $B_0^n$ as the rightmost block whose initial $R$ was appended at some time $t<0$. For $n>0$, we have $\mathop{\rm Red}((X^*)_{-\infty}^{n})=\ldots B_{-1}^nB_0^nB_1^n\ldots B_{L(n)}^n$ where $0\le L(n)\le n$. The random number $L(n)$ evolves in the same way as the number of $R$'s in $\mathop{\rm Red}(X_3\ldots X_n)$. By Lemma~\ref{Survival}, $L(n)\to +\infty$ as $n\to \infty$ almost surely. As a consequence, for any $j\in{\ifmmode{\mathbbm{Z}}\else{$\mathbbm{Z}$}\fi}$ the block $B_j^n$ is well-defined and constant for all large enough $n$. We denote by $B_j$ the limit of $B_j^n$. The concatenation of these blocks can be viewed as the reduction of the whole sequence $(X^*)_{-\infty}^{+\infty}$.
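As an aside, the two-step reduction of Section~\ref{reductionSection} is straightforward to implement; the following transcription (ours, for illustration only) reproduces the example $\mathop{\rm Red}(RLRLLLRLL)=R$ given there for $k=4$.
\begin{verbatim}
def reduce_word(word, k):
    """Two-step reduction Red: append the (possibly flipped) letter,
    then delete a terminal pattern R L^{k-1} whenever one appears."""
    flip = {"R": "L", "L": "R"}
    suffix = ["R"] + ["L"] * (k - 1)
    out, just_deleted = [], False
    for c in word:
        out.append(flip[c] if just_deleted else c)
        if len(out) >= k and out[-k:] == suffix:
            del out[-k:]
            just_deleted = True
        else:
            just_deleted = False
    return "".join(out)

assert reduce_word("RLRLLLRLL", 4) == "R"   # example of Section 2
\end{verbatim}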
The same arguments as those given in Section~\ref{reductionSection} prove that the blocks $B_j$ are i.i.d. with common distribution law ${\ifmmode{\mathbbm{P}}\else{$\mathbbm{P}$}\fi}_{\rho}$. It is remarkable that the same result holds if we consider only the blocks in the reduction of $(X^*)_{-\infty}^0$. \begin{prop} \label{distribution} The sequence $\mathop{\rm Red}((X^*)_{-\infty}^0)$ is a left-infinite concatenation of i.i.d. blocks with common distribution law ${\ifmmode{\mathbbm{P}}\else{$\mathbbm{P}$}\fi}_{\rho}$. \end{prop} \begin{proof} Observe that $\mathop{\rm Red}((X^*)_{-\infty}^{0})=\mathop{\rm Red}((X^*)_{-\infty}^{L})$ where $L\le 0$ is the (random) index of the last letter not removed in the reduction process of $(X^*)_{-\infty}^0$. For any $\ell\le0$, we have $L=\ell$ if and only if $(X^*)_{-\infty}^\ell$ is proper and $(X^*)_{\ell +1}^{0}$ is an excursion. For any bounded measurable function $f$, since ${\ifmmode{\mathbbm{E}}\else{$\mathbbm{E}$}\fi}\bigl[f(\mathop{\rm Red}((X^*)_{-\infty}^{\ell}))\ \big|\ (X^*)_{-\infty}^{\ell} \mbox{ is proper}\bigr]$ does not depend on $\ell$, we have \begin{eqnarray*} \lefteqn{{\ifmmode{\mathbbm{E}}\else{$\mathbbm{E}$}\fi}\bigl[f(\mathop{\rm Red}((X^*)_{-\infty}^{0})\bigr]}\\ &=& \sum_{\ell} {\ifmmode{\mathbbm{P}}\else{$\mathbbm{P}$}\fi}(L=\ell)\ {\ifmmode{\mathbbm{E}}\else{$\mathbbm{E}$}\fi}\bigl[f(\mathop{\rm Red}((X^*)_{-\infty}^{\ell}))\ \big|\ L=\ell\bigr]\\ &=& \sum_{\ell} {\ifmmode{\mathbbm{P}}\else{$\mathbbm{P}$}\fi}(L=\ell)\ {\ifmmode{\mathbbm{E}}\else{$\mathbbm{E}$}\fi}\bigl[f(\mathop{\rm Red}((X^*)_{-\infty}^{\ell}))\ \big|\ (X^*)_{-\infty}^{\ell} \mbox{ is proper, } (X^*)_{\ell+1}^0 \mbox{ is an excursion}\bigr]\\ &=& \sum_{\ell} {\ifmmode{\mathbbm{P}}\else{$\mathbbm{P}$}\fi}(L=\ell)\ {\ifmmode{\mathbbm{E}}\else{$\mathbbm{E}$}\fi}\bigl[f(\mathop{\rm Red}((X^*)_{-\infty}^{\ell}))\ \big|\ (X^*)_{-\infty}^{\ell} \mbox{ is proper}\bigr]\\ &=& {\ifmmode{\mathbbm{E}}\else{$\mathbbm{E}$}\fi}\bigl[f(\mathop{\rm Red}((X^*)_{-\infty}^{0}))\ \big|\ (X^*)_{-\infty}^{0} \mbox{ is proper}\bigr]. \end{eqnarray*} This also implies that the law of $\mathop{\rm Red}((X^*)_{-\infty}^{0})$ is neither changed when conditioned on the fact that $(X^*)_{-\infty}^{0}$ is not proper. Assume that $(X^*)_{-\infty}^{0}$ is proper. The fact that the blocks of $\mathop{\rm Red}((X^*)_{-\infty}^{0})$ will not be subsequently modified in the reduction process of $(X^*)_{-\infty}^{\infty}$ only depends on $(X^*)_{1}^{\infty}$. Therefore, ${\ifmmode{\mathbbm{E}}\else{$\mathbbm{E}$}\fi}\bigl[f(\mathop{\rm Red}((X^*)_{-\infty}^{0}))\ \big| \ (X^*)_{-\infty}^{0} \mbox{ is proper}\bigr]$ is equal to $${\ifmmode{\mathbbm{E}}\else{$\mathbbm{E}$}\fi}\bigl[f(\mathop{\rm Red}((X^*)_{-\infty}^{0}))\ \big| \ (X^*)_{-\infty}^{0} \mbox{ is proper and blocks of } \mathop{\rm Red}((X^*)_{-\infty}^{0}) \mbox{ are definitive} \bigr]. $$ The same equality holds if we replace ``proper'' with ``not proper''. Hence, the law of $\mathop{\rm Red}((X^*)_{-\infty}^{0})$ is the same as the law of $\mathop{\rm Red}((X^*)_{-\infty}^{0})$ conditioned on the fact that blocks of $\mathop{\rm Red}((X^*)_{-\infty}^{0})$ are definitive. But we know that definitive blocks are i.i.d. with common distribution law ${\ifmmode{\mathbbm{P}}\else{$\mathbbm{P}$}\fi}_{\rho}$. \end{proof} \subsection{Quotient associated to a left-infinite sequence} Let $n$ be a fixed integer. 
For $m\ge0$, we decompose $\mathop{\rm Red}((X^*)_{n-m< i\le n})$ into blocks $B_\ell, \ldots, B_1 = (RL^{j_\ell}), \ldots, (RL^{j_1})$, to which we associate the generalized Stern-Brocot interval $I_{j_1, j_2, \dots, j_{\ell}}$. If we let $m$ go to infinity, the preceding section shows that this sequence of intervals converges almost surely to a point $q_n^*$. By Proposition~\ref{distribution}, $q_n^*$ follows the law $\nu_{k, \rho}$. Since $(q_n^*)$ is an ergodic stationary process, and $\log(\cdot)$ is in $L^1(\nu_{k, \rho})$, the ergodic theorem implies \begin{equation} \label{ergodic} \dfrac{1}{N} \sum_{n=1}^N \log q_n^* \tend{N}{\infty} \int_{{\ifmmode{\mathbbm{R}}\else{$\mathbbm{R}$}\fi}_+} \log q \, d\nu_{k, \rho} (q)\quad\mbox{almost surely.} \end{equation} The last step in the proof of the main theorem is to compare the quotient $q_n = F_n^r/F_{n-1}^r$ with $q_n^*$. \begin{prop} \label{comparison} $$\dfrac{1}{N} \sum_{n=3}^N \bigl|\log q_n^* - \log |q_n|\bigr| \tend{N}{\infty} 0 \quad\mbox{almost surely.}$$ \end{prop} We call \emph{extremal} the leftmost and rightmost intervals of ${\ifmmode{\mathscr{I}}\else{$\mathscr{I}$}\fi}(\ell)$. \begin{lemma} \label{sup} $$ s_\ell := \sup_{\substack{I\in{\ifmmode{\mathscr{I}}\else{$\mathscr{I}$}\fi}(\ell) \\ I\mbox{\scriptsize not extremal}}} \sup_{q, q^* \in I} |\log q^* - \log q| \tend{\ell}{\infty} 0 $$ \end{lemma} \begin{proof} Fix $\varepsilon >0$, and choose an integer $M>1/\varepsilon$. By Lemma~\ref{generation}, since $\log(\cdot)$ is uniformly continuous on $[1/(M\lambda), M\lambda]$, we have for $\ell$ large enough $$ \sup_{\substack{I\in{\ifmmode{\mathscr{I}}\else{$\mathscr{I}$}\fi}(\ell) \\ I\subset [1/(M\lambda), M\lambda]}} \sup_{q, q^* \in I} |\log q^* - \log q| \le \varepsilon . $$ If $I\in{\ifmmode{\mathscr{I}}\else{$\mathscr{I}$}\fi}(\ell)$ is a non-extremal interval included in $[0, 1/(M\lambda)]$ or in $[M\lambda, +\infty]$, there exists an integer $j\in[M, \ell]$ such that $I\subset [1/((j+1)\lambda), 1/(j\lambda)]$ or $I\subset [j\lambda, (j+1)\lambda]$. Hence, $$ \sup_{q, q^* \in I} |\log q^* - \log q| \le \log\left( \dfrac{j+1}{j}\right) \le \log\left( 1+\dfrac{1}{M}\right)\le \varepsilon . $$ \end{proof} \begin{proof}[Proof of Proposition~\ref{comparison}] For any $j\in{\ifmmode{\mathbbm{Z}}\else{$\mathbbm{Z}$}\fi}$, we define the following event $E_j$: \begin{itemize} \item $X^*_{j}$ is an $R$ which survives in the reduction of $(X_{i}^*)_{i\ge j}$; \item $X_{i}^*\dots X^*_{j-1}$ is proper for all $i< j$. \end{itemize} Observe that if $E_j$ holds for some $j\ge3$, then for all $n\ge j$, \begin{eqnarray*} \mathop{\rm Red}(X_3\dots X_n) &=& \mathop{\rm Red}(X_3\dots X_{j-1})\ \mathop{\rm Red}(X_j\dots X_n)\\ \mbox{and}\quad\mathop{\rm Red}((X^*)_{-\infty}^{n}) &=& \mathop{\rm Red}((X^*)_{-\infty}^{j-1})\ \mathop{\rm Red}(X^*_j\dots X^*_n). \end{eqnarray*} Hence, since $X_j\ldots X_n=X_j^*\ldots X_n^*$, they give rise in both reductions to the same blocks, the first one being definitive. Since each $E_j$ holds with the same positive probability, the ergodic theorem yields \begin{equation} \label{E} \dfrac{1}{n}\sum_{j=3}^n \ind{E_j}\tend{n}{\infty} {\ifmmode{\mathbbm{P}}\else{$\mathbbm{P}$}\fi}(E_3) >0\quad\mbox{almost surely,} \end{equation} hence the number of definitive blocks of $\mathop{\rm Red}(X_3\dots X_n)$ and of $\mathop{\rm Red}((X^*)_{-\infty}^{n})$ which coincide grows almost surely linearly with $n$ as $n$ goes to~$\infty$ (these definitive blocks may be followed by some additional blocks which also coincide).
Recall the definition of $L_+$ given in Lemma~\ref{L+} and observe that for $n\ge n_{L_+}$, $q_n>0$. Observe also that, by definition of $I_{j_1, j_2, \dots, j_\ell}$, if $q$ and $q^*$ are two positive real numbers, $f_{j_1}\circ f_{j_2}\circ \cdots \circ f_{j_\ell}(q)$ and $f_{j_1}\circ f_{j_2}\circ \cdots \circ f_{j_\ell}(q^*)$ belong to the same interval of ${\ifmmode{\mathscr{I}}\else{$\mathscr{I}$}\fi}(\ell)$. From~\eqref{E}, we deduce that, almost surely, for $n$ large enough, at least $L_+ +\sqrt{n}$ definitive blocks of $\mathop{\rm Red}(X_3\dots X_n)$ and of $\mathop{\rm Red}((X^*)_{-\infty}^{n})$ coincide (possibly followed by some additional blocks which also coincide). This ensures that $q_n$ and $q_n^*$ belong to the same interval of the subdivision ${\ifmmode{\mathscr{I}}\else{$\mathscr{I}$}\fi}(\sqrt{n})$. By Lemma~\ref{sup}, it remains to check that, almost surely, there exist only finitely many $n$'s such that $q_n^*$ belongs to an extremal interval of the subdivision ${\ifmmode{\mathscr{I}}\else{$\mathscr{I}$}\fi}(\sqrt{n})$. But this is a direct application of the Borel-Cantelli lemma, observing that the measure $\nu_{k, \rho}$ of an extremal interval of ${\ifmmode{\mathscr{I}}\else{$\mathscr{I}$}\fi}(\ell)$ decreases exponentially fast with $\ell$. \end{proof} We now conclude the section with the proof of the convergence to the integral given in Theorem~\ref{MainTheorem}, linear case: Since $F_n=\pm F_n^r$, we can write $n^{-1}\log |F_n|$ as $$ \dfrac{1}{n} \log |F_2| + \dfrac{1}{n}\sum_{j=3}^n \log q_j^* + \dfrac{1}{n}\sum_{j=3}^n \left(\log |q_j|-\log q_j^*\right), $$ and the convergence follows using Proposition~\ref{comparison} and \eqref{ergodic}. \section{Positivity of the integral} \label{Section:positivity} We now turn to the proof of the positivity of $\gamma_{p,\lambda_k}$. It relies on the following lemma, whose proof is postponed. \begin{lemma} \label{lemme:nu_rho} Fix $0<\rho<1$. For any $t>0$, \begin{equation}\label{eq:positivity} \Delta_t := \nu_{k, \rho}\left([t, \infty)\right) - \nu_{k, \rho}\left([0, 1/t]\right) \ge 0. \end{equation} Moreover, there exists $t>1$ such that the above inequality is strict. \end{lemma} Using Fubini's theorem, we obtain that $\gamma_{p,\lambda_k}$ is equal to \begin{eqnarray*} \int_0^\infty \log x\, d\nu_{k, \rho} (x) &=& \int_1^\infty \log x\, d\nu_{k, \rho} (x)-\int_0^1 \log (1/x)\, d\nu_{k, \rho} (x)\\ &=& \int_0^\infty \nu_{k, \rho} ([e^{u},\infty)) du - \int_0^\infty \nu_{k, \rho} ([0, e^{-u}]) du \end{eqnarray*} which is positive if $0<\rho<1$ by Lemma~\ref{lemme:nu_rho}. Thus, $\gamma_{p,\lambda_k}>0$ for any $p>0$. This ends the proof of Theorem~\ref{MainTheorem}, linear case. \begin{proof}[Proof of Lemma~\ref{lemme:nu_rho}] By Lemma~\ref{generation}, it is enough to prove the lemma when $t$ is the endpoint of an interval of the subdivision ${\ifmmode{\mathscr{I}}\else{$\mathscr{I}$}\fi}(\ell)$. This is done by induction on $\ell$. Obviously, $\Delta_0=\Delta_\infty=0$. When $\ell=1$ and $\ell=2$, if $t\neq 0,\infty$, it can be written as $f_j(b_i)$ for $0\le j\le k-2$ and $0\le i\le k-2$, and we get $1/t=f_{k-2-j}(b_{k-1-i})$ (see Lemma~\ref{inverse}).
Setting $Z:=\sum_{s=0}^{k-2}\rho^s$, we have $$ \nu_{k, \rho}\left([t, \infty)\right) = \sum_{s=0}^{j-1} \nu_{k, \rho}\left([b_{s+1}, b_s)\right) +\nu_{k, \rho}\left([t, b_{j})\right) =\sum_{s=0}^{j-1}\frac{\rho^s}{Z} + \frac{\rho^j}{Z} \nu_{k, \rho}\left([0,b_i]\right)= \sum_{s=0}^{j-1}\frac{\rho^s}{Z} + \frac{\rho^j}{Z}\sum_{s=i}^{k-2}\frac{\rho^s}{Z}. $$ Therefore, \begin{eqnarray*} \lefteqn{\nu_{k, \rho}\left([t, \infty)\right) - \nu_{k, \rho}\left([0, 1/t]\right)} \\ &=& \sum_{s=0}^{j-1}\frac{\rho^s}{Z} + \frac{\rho^j}{Z}\sum_{s=i}^{k-2}\frac{\rho^s}{Z} -\left( \sum_{s=k-1-j}^{k-2}\frac{\rho^s}{Z} + \frac{\rho^{k-2-j}}{Z} \ \sum_{s=0}^{k-2-i}\frac{\rho^s}{Z}\right)\\ &=& \sum_{s=0}^{j-1}\frac{\rho^s}{Z}\left(1-\rho^{k-1-j}\right) + \frac{1}{Z}\left(\rho^{i+j}-\rho^{k-2-j}\right)\sum_{s=0}^{k-2-i}\frac{\rho^s}{Z}. \end{eqnarray*} Since $i\le k-2$, we have $\rho^{i+j}-\rho^{k-2-j}\ge \rho^{k-2-j}(\rho^{2j}-1)$. Moreover, $\sum_{s=0}^{k-2-i}\frac{\rho^s}{Z}\le 1$. Thus, $$ Z\Delta_t \ge \sum_{s=0}^{j-1}\rho^s\left(1-\rho^{k-1-j}\right) - \rho^{k-2-j}(1-\rho^{2j}). $$ Observe that $(1-\rho^{k-1-j}) = (1-\rho)\sum_{s=0}^{k-2-j}\rho^s$ and that $1-\rho^{2j}=(1+\rho^{j})(1-\rho)\sum_{s=0}^{j-1}\rho^s$. Hence, $$ Z\Delta_t \ge (1-\rho) \sum_{s=0}^{j-1}\rho^s \left(\sum_{s=0}^{k-2-j}\rho^s - \rho^{k-2-j}(1+\rho^{j})\right), $$ which is positive as soon as $j<k-2$. The quantity $\Delta_t$ is invariant when $t$ is replaced by $1/t$, so we also get the desired result for $j=k-2$. Assume~\eqref{eq:positivity} is true for any endpoint of intervals of the subdivision ${\ifmmode{\mathscr{I}}\else{$\mathscr{I}$}\fi}(j)$, $j\le\ell-1$. Let $t$ be an endpoint of an interval of ${\ifmmode{\mathscr{I}}\else{$\mathscr{I}$}\fi}(\ell)$; then there exists an interval $[t_1, t_2]$ of ${\ifmmode{\mathscr{I}}\else{$\mathscr{I}$}\fi}(\ell-2)$ such that $t\in[t_1, t_2]$. We can write \begin{eqnarray*} \nu_{k, \rho}\left([t, \infty)\right) &=& \nu_{k, \rho}\left([t_2, \infty)\right) + \nu_{k, \rho}\left([t_1, t_2]\right) \nu_{k, \rho}\left([u, \infty)\right)\\ \mbox{and }\nu_{k, \rho}\left([0, 1/t]\right) &=& \nu_{k, \rho}\left([0, 1/t_2]\right) + \nu_{k, \rho}\left([1/t_2, 1/t_1]\right) \nu_{k, \rho}\left([0, 1/u]\right) \end{eqnarray*} for some endpoint $u$ of an interval of ${\ifmmode{\mathscr{I}}\else{$\mathscr{I}$}\fi}(2)$. If $\nu_{k, \rho}\left([t_1, t_2]\right)\ge \nu_{k, \rho}\left([1/t_2, 1/t_1]\right)$, we get the result since~\eqref{eq:positivity} holds for $u$ and $t_2$. Otherwise, we can write $\Delta_t$ as $$ \Delta_{t_1} - \nu_{k, \rho}\left([t_1, t_2]\right) + \nu_{k, \rho}\left([1/t_2, 1/t_1]\right) + \nu_{k, \rho}\left([t_1, t_2]\right) \nu_{k, \rho}\left([u, \infty)\right)-\nu_{k, \rho}\left([1/t_2, 1/t_1]\right) \nu_{k, \rho}\left([0, 1/u]\right) $$ which is greater than $$ \Delta_{t_1} + \nu_{k, \rho}\left([t_1, t_2]\right) \Delta_u\ge 0. $$ \end{proof} \begin{remark} \label{rho = 1} We can also define the probability measure $\nu_{k,\rho}$ for $\rho=1$. (When $k=3$, this is related to Minkowski's Question Mark Function, see \cite{denjoy1938}.)
It is straightforward to check that $\nu_{k,1}\left([t, \infty)\right) - \nu_{k, 1}\left([0, 1/t]\right) = 0$ for all $t>0$, which yields $$ \int_0^\infty \log x\, d\nu_{k, 1} (x) = 0. $$ \end{remark} \section{Reduction: The non-linear case} \label{Section:non-linear} In the non-linear case, where $\widetilde F_{n+2} = |\lambda \widetilde F_{n+1} \pm \widetilde F_{n}|$, the sequence $(\widetilde F_n)_{n\ge1}$ can also be coded by the sequence $(X_n)_{n\ge 3}$ of i.i.d. random variables taking values in the alphabet $\{R, L\}$ with probability $(p, 1-p)$. Each $R$ corresponds to choosing the $+$ sign and can be interpreted as the right multiplication of $(\widetilde F_{n-1}, \widetilde F_n)$ by the matrix $R$ defined in~\eqref{matrices}. Each $L$ corresponds to choosing the $-$ sign but the interpretation in terms of matrices is slightly different, since we have to take into account the absolute value: $X_{n+1}=L$ corresponds either to the right multiplication of $(\widetilde F_{n-1}, \widetilde F_n)$ by $L$ if $(\widetilde F_{n-1}, \widetilde F_n)L$ has nonnegative entries, or to the multiplication by \begin{equation} \label{matrices2} L' := \begin{pmatrix} 0 & 1\\ 1 & -\lambda \end{pmatrix}. \end{equation} Observe that for all $0\le j\le k-2$, the matrix $RL^j$ has nonnegative entries (see~\eqref{RtimesPowersOfL}), whereas $RL^{k-1} = \begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix}$. Therefore, if $X_i=R$ is followed by some $L$'s, we interpret the first $(k-2)$ $L$'s as the right multiplication by the matrix $L$, whereas the $(k-1)$-th $L$ corresponds to the multiplication by $L'$. Moreover, $RL^{k-2}L' = {\ifmmode{\mathscr{I}}\else{$\mathscr{I}$}\fi}d$, so we can remove all patterns $RL^{k-1}$ in the process $(X_n)$. \newcommand{\red}{\mathop{\rm Red}} We thus associate to $x_3\ldots x_n$ the word $\widetilde{\red}(x_3\ldots x_n)$, which is obtained by the same reduction as $\mathop{\rm Red}(x_3\ldots x_n)$, except that the letter added in Step 1 is always $x_{n}$. We have $$ (\widetilde F_{n-1},\widetilde F_n) = (\widetilde F_1,\widetilde F_2) \widetilde{\red}(x_3\ldots x_n). $$ Since the reduction process is even easier in the non-linear case, we will not give all the details but only emphasize the differences from the linear case. The first difference is that the survival probability of an $R$ is positive only if $p>1/k$. \begin{lemma} \label{SurvivalNL} For $p > 1/k$, the number of $R$'s in $\widetilde{\red}(X_3\ldots X_n)$ satisfies $$ |\widetilde{\red}(X_3\ldots X_n)|_R\tend{n}{\infty}+\infty\qquad\mbox{a.s.} $$ and the survival probability $p_R$ is, for $p<1$, the unique solution in $]0, 1]$ of \begin{equation} \label{pRNL} \tilde g(x)=0,\quad \mbox{ where }\quad \tilde g(x):=(1-x)\left(1+\frac{p}{1-p}\ x\right)^{k-1} - 1\ . \end{equation} If $p\le 1/k$, $p_R=0$. \end{lemma} \begin{proof} Since each deletion of an $R$ goes with the deletion of $(k-1)$ $L$'s, if $p>1/k$, the law of large numbers ensures that the number of remaining $R$'s goes to infinity. If $p<1/k$, only $L$'s remain, so $p_R=0$. Doing the same computations as in Section~\ref{section:survival}, we obtain that, for all $0\le j\le k-2$, the probability $p_j$ for an $R$ to be followed by $L^jR$ after the subsequent steps of the reduction is $$p_j=\frac{(1-p)^j p p_R}{(1-p+pp_R)^{j+1}}.$$ Since $p_R=\sum_{j=0}^{k-2}p_j$, we get that $p_R$ is a solution of $\tilde g(x)=0$.
Observe that $\tilde g(0)=0$, $\tilde g(1)=-1$, $\tilde g'(0)>0$ for $p>1/k$ and $\tilde g'$ vanishes at most once on ${\ifmmode{\mathbbm{R}}\else{$\mathbbm{R}$}\fi}_+$. Hence, for $p>1/k$, $p_R$ is the unique solution of $\tilde g(x)=0$ in $]0, 1]$. For $p=1/k$, $\tilde g'(0)=0$ and the unique nonnegative solution is $p_R=0$. \end{proof} \subsection{Case $p>1/k$} As in the linear case, the sequence of surviving letters $$ (S_j)_{j\ge3} = \lim_{n\to\infty} \widetilde{\red}(X_3\ldots X_n) $$ is well defined for $p>1/k$, and can be written as the concatenation of a certain number $s\ge 0$ of starting $L$'s and of blocks: $$ S_1S_2 \ldots = L^s B_1B_2\ldots $$ where for all $\ell\ge1$, $B_\ell\in\{R, RL, \ldots, RL^{k-2}\}$. These blocks appear with the same distribution ${\ifmmode{\mathbbm{P}}\else{$\mathbbm{P}$}\fi}_{\rho}$ as in the linear case, but with a different parameter $\rho$. \begin{lemma} \label{lawNL} In the non-linear case, for $p>1/k$, the blocks $(B_\ell)_{\ell\ge1}$ are i.i.d. with common distribution law ${\ifmmode{\mathbbm{P}}\else{$\mathbbm{P}$}\fi}_{\rho}$ defined by~\eqref{defPrho}, where $\rho := \sqrt[k-1]{1-p_R}$ and $p_R$ is given by Lemma~\ref{SurvivalNL}. \end{lemma} As in Section~\ref{reductioninfinie}, we can embed the sequence $(X_n)_{n\ge 3}$ in a doubly-infinite i.i.d. sequence $(X_n^*)_{n\in{\ifmmode{\mathbbm{Z}}\else{$\mathbbm{Z}$}\fi}}$ with $X_n=X_n^*$ for all $n\ge 3$. We define the reduction of $(X^*)_{-\infty< j\le n}$ by considering the successive $\widetilde{\red}(X_{n-N}\dots X_n)$. The analog of Proposition~\ref{stability} is easier to prove than in the linear case since the deletion of a pattern $RL^{k-1}$ does not affect the next letter. The end of the proof is similar. \subsection{Case $p\le 1/k$} \label{Sec:p=0} Since in this case the survival probability of an $R$ is $p_R=0$, the reduced sequence $\widetilde{\red}(X_0^{\infty})$ contains only $L$'s. We consider the subsequence $(\widetilde F_{n_j})$ where $n_j$ is the time when the $j$-th $L$ is appended to the reduced sequence. This subsequence satisfies, for any $j$, $\widetilde F_{n_{j+1}} = |\lambda \widetilde F_{n_{j}}-\widetilde F_{n_{j-1}}|$, which corresponds to the non-linear case for $p=0$. Therefore, we first concentrate on the deterministic sequence $\widetilde F_{n+1}= |\lambda \widetilde F_{n}-\widetilde F_{n-1}|$, with given nonnegative initial values $\widetilde F_0$ and $\widetilde F_1$. \begin{prop} \label{bounded} For any choice of $\widetilde F_0\ge 0$ and $\widetilde F_1\ge 0$, the sequence defined inductively by $\widetilde F_{n+1}= |\lambda \widetilde F_{n}-\widetilde F_{n-1}|$ is bounded. \end{prop} Lemma~\ref{reverse} in the next section gives a proof of this proposition for the specific case $\lambda =2\cos\pi/k $. We give here another proof based on a geometrical interpretation, which can be applied for any $0<\lambda <2$. The key argument relies on the following observation: Let $\theta$ be such that $\lambda=2\cos\theta$. Fix two points $P_0,P_1$ on a circle centered at the origin $O$, such that the oriented angle $(OP_0,OP_1)$ equals $\theta$. Let $P_2$ be the image of $P_1$ by the rotation of angle $\theta$ and center $O$. Then the respective abscissae $x_0$, $x_1$ and $x_2$ of $P_0$, $P_1$ and $P_2$ satisfy $x_2=\lambda x_1 - x_0$. We can then geometrically interpret the sequence $(\widetilde F_n)$ as the successive abscissae of points in the plane. \begin{lemma}[Existence of the circle] Let $\theta\in ]0,\pi[$. 
For any choice of $(x, x')\in{\ifmmode{\mathbbm{R}}\else{$\mathbbm{R}$}\fi}_+^2\setminus\{(0,0)\}$, there exist a unique $R>0$ and two points $M$ and $M'$, with respective abscissae $x$ and $x'$, lying on the circle with radius $R$ centered at the origin, such that the oriented angle $(OM,OM')$ equals~$\theta$. \end{lemma} \begin{proof} Assume that $x>0$. We have to show the existence of a unique $R$ and a unique $t\in ]-\pi/2,\pi/2[$ (which represents the argument of $M$) such that $$ R\cos t=x\quad\mbox{and}\quad R\cos(t+\theta)=x'. $$ This is equivalent to $$R\cos t=x \quad\mbox{and}\quad \cos\theta - \tan t\ \sin \theta = \dfrac{x'}{x},$$ which obviously has a unique solution since $\sin \theta\neq 0$. If $x=0$, the unique solution is clearly $R=x'/\cos(\theta-\pi/2)$ and $t=-\pi/2$. \emph{Remark:} If $x'>0$, then $t+\theta<\pi/2$. \end{proof} \begin{proof}[Proof of Proposition~\ref{bounded}] At step $n$, we interpret $\widetilde F_{n+1}$ in the following way: Applying the lemma with $x=\widetilde F_{n-1}$ and $x'=\widetilde F_n$, we find a circle of radius $R_n>0$ centered at the origin and two points $M$ and $M'$ on this circle with abscissae $x$ and $x'$. Consider the image of $M'$ by the rotation of angle $\theta$ and center $O$. If its abscissa is nonnegative, it is equal to $\widetilde F_{n+1}$, and we will have $R_{n+1}=R_n$. Otherwise, we also have to apply the symmetry with respect to the origin to get a point with abscissa $\widetilde F_{n+1}$. The circle at step $n+1$ may then have a different radius, but we now show that the radius never increases (see Figure~\ref{Fig:cercle}). \begin{figure} \caption{$R_{n+1}\le R_{n}$.} \label{Fig:cercle} \end{figure} Indeed, denoting by $\alpha$ the argument of $M'$, we have in the latter case $\pi/2-\theta<\alpha\le \pi/2$, $\widetilde F_{n}=R_n\cos \alpha$ and $\widetilde F_{n+1}=R_n\cos (\alpha+\theta+\pi)> 0$. At step $n+1$, we apply the lemma with $x=R_n\cos \alpha$ and $x'=R_n\cos (\alpha+\theta+\pi)$. From the proof of the lemma, if $\widetilde F_{n}=0$ (\textit{i.e.} if $\alpha=\pi/2$), $R_{n+1}=R_n\cos (\alpha+\theta+\pi)/\cos(\theta-\pi/2)=R_n$. If $\widetilde F_{n}>0$, we have $R_{n+1}=R_n\cos \alpha/\cos t$, where $t$ is given by $$ \cos\theta - \tan t\ \sin \theta = \dfrac{\cos (\alpha+\theta+\pi)}{\cos \alpha} = -(\cos\theta - \tan \alpha\ \sin \theta). $$ We deduce from the preceding formula that $\tan t + \tan\alpha = 2\cos\theta/\sin\theta>0$, which implies $t>-\alpha$. On the other hand, as noticed at the end of the proof of the preceding lemma, $t+\theta<\pi/2$, hence $t<\alpha$. Therefore, $\cos \alpha<\cos t$ and $R_{n+1}<R_n$. Since $\widetilde F_n\le R_n\le R_1$ for all $n$, the proposition is proved. \end{proof} We come back to the specific case $\lambda=2\cos\pi/k$. \begin{prop} \label{p=0} Let $(\widetilde F_{n})$ be inductively defined by $\widetilde F_{n+1}= |\lambda \widetilde F_{n}-\widetilde F_{n-1}|$ and its first two terms $\widetilde F_0, \widetilde F_1>0$. The following properties are equivalent: \begin{enumerate} \item $\widetilde F_0/\widetilde F_1$ admits a finite $\lambda$-continued fraction expansion. \item The sequence $(\widetilde F_{n})$ is ultimately periodic. \item There exists $n$ such that $\widetilde F_{n}=0$. \end{enumerate} \end{prop} \begin{proof} We easily see from the proof of Proposition~\ref{bounded} that (2) and (3) are equivalent. We now prove that (3) implies (1) by induction on the smallest $n$ such that $\widetilde F_{n}=0$.
If $\widetilde F_{2}=0$, then $|\lambda \widetilde F_{1}-\widetilde F_{0}|=0$, and we get $\widetilde F_0/\widetilde F_1=\lambda$. Let $n>2$ be the smallest $n$ such that $\widetilde F_{n}=0$. By the induction hypothesis, $\widetilde F_1/\widetilde F_2$ admits a finite $\lambda$-continued fraction expansion. Therefore, $$\frac{\widetilde F_0}{\widetilde F_1}= \lambda\pm \frac{1}{\widetilde F_1/\widetilde F_2}$$ admits a finite $\lambda$-continued fraction expansion. It remains to prove that (1) implies (3). We know from Proposition~\ref{finiteCF} that all positive real numbers that admit a finite $\lambda$-continued fraction expansion are endpoints of generalized Stern-Brocot intervals, hence by~\eqref{fj}, can be written as $[1, a_1, \dots, a_j]_{\lambda}$ with $a_i=\pm1$ for any $i$ and such that we never see more than $(k-1)$ alternated $\pm1$ in a row. We call such an expansion a \emph{standard expansion}. Conversely, all real numbers that admit a standard expansion are endpoints of generalized Stern-Brocot intervals, hence are nonnegative. Assume (1) is true. If $\widetilde F_0/\widetilde F_1 = [1]_{\lambda}$, then $\widetilde F_2=0$. Otherwise, let $[1, a_1, \dots, a_j]_{\lambda}$ be a standard expansion of $\widetilde F_0/\widetilde F_1$. Then, $$ \frac{\widetilde F_1}{\widetilde F_2} = \frac{1}{|\lambda - \widetilde F_0/\widetilde F_1|} = \Bigl|[a_1, \dots, a_j]_{\lambda}\Bigr|. $$ If $a_1=1$, then $[a_1, \dots, a_j]_{\lambda}\ge0$ and it is equal to $\widetilde F_1/\widetilde F_2$. Otherwise, $\widetilde F_1/\widetilde F_2 = [-a_1, -a_2,\dots, -a_j]_{\lambda}$. In both cases, we obtain a standard expansion of $\widetilde F_1/\widetilde F_2$ of smaller size. The result is proved by induction on $j$. \end{proof} \begin{remark} In general, if $\widetilde F_0/\widetilde F_1$ does not admit a finite $\lambda$-continued fraction expansion, $(\widetilde F_{n})$ decreases exponentially fast to $0$. However, the exponent depends on the ratio $\widetilde F_0/\widetilde F_1$. \end{remark} We exhibit two examples of such behavior. Let $q:=(\lambda + \sqrt{\lambda^2+4})/2$ be the fixed point of $f_0$. Start with $\widetilde F_0/\widetilde F_1 = q$. Then, by a straightforward induction, we get that for all $n\ge0$, $\widetilde F_n = q^{-n}\widetilde F_0$. Start now with $\widetilde F_0/\widetilde F_1 = q'$, where $q'$ is the fixed point of $f_1$. Then, we easily get that for all $n\ge0$, $\widetilde F_{2n} = (q'f_0(q'))^{-n}\widetilde F_0$ and $\widetilde F_{2n+1} = \widetilde F_{2n}/q'$. The exponent is thus $1/\sqrt{q'f_0(q')}$, which is different from $1/q$: For $k=3$, $q=\phi$ (the golden ratio) and $\sqrt{q'f_0(q')}=\sqrt{\phi}$. \begin{proof}[Proof of Theorem~\ref{Theorem2}] We have seen that the subsequence $(\widetilde F_{n_j})$, where $n_j$ is the time when the $j$-th $L$ is appended to the reduced sequence, satisfies, $\widetilde F_{n_{j+1}} = |\lambda \widetilde F_{n_{j}}-\widetilde F_{n_{j-1}}|$ for any $j$. From Proposition~\ref{bounded}, this subsequence is bounded. Moreover, we can write $n_j=j+kd_j$, where $d_j$ is the number of $R$'s up to time $n_j$. By the law of large numbers, $d_j/n_j\to p$, and we get $j/n_j\to 1-kp$. This completes the proof of Theorem~\ref{Theorem2}. \end{proof} \section{Case $\lambda \ge2$} \label{Sec:lambda2} The case $\lambda\ge2$ ($p>0$) is even easier to study since there is no reduction process.
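Before comparing the linear and the non-linear recursions, here is a minimal illustration (an example only, not used in the sequel) of why no analogue of the reduction exists for $\lambda\ge2$. In the linear case, the letters $R$ and $L$ act on the right of the row vector $(F_{n-1},F_n)$ by $(F_{n-1},F_n)R=(F_n,\lambda F_n+F_{n-1})$ and $(F_{n-1},F_n)L=(F_n,\lambda F_n-F_{n-1})$ (cf.~\eqref{matrices}). For $\lambda=2$, a direct induction on $s$ gives
$$
RL^{s} \,=\, \begin{pmatrix} s & s+1\\ s+1 & s+2 \end{pmatrix}\qquad (s\ge0),
$$
so every block $RL^{s}$ has nonnegative entries and no such block can ever reduce to $\pm\,\mathrm{Id}$, in contrast with the relation $RL^{k-1}=\begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix}$ which drives the reduction when $\lambda=\lambda_k<2$.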
Observe that the linear and the non-linear cases are essentially the same. Indeed, in the non-linear case, ${\ifmmode{\mathbbm{P}}\else{$\mathbbm{P}$}\fi}(\widetilde F_{n+1}/\widetilde F_n\ge 1| \widetilde F_{n-1}, \widetilde F_n)\ge p$ and if $\widetilde F_{n+1}/\widetilde F_n\ge 1$, then $\widetilde F_{n+2}/\widetilde F_{n+1}\ge 1$. Therefore, with probability $1$, there exists $N_+$ such that for all $n\ge N_+$, the quotients $\widetilde F_{n+1}/\widetilde F_n$ are at least $1$. Moreover, for $n\ge N_+$, there is no need to take the absolute value and the sequence behaves like in the linear case. We thus concentrate on the linear case. We now fix $\lambda\ge 2$. The sequence of quotients $Q_n:= F_{n}/F_{n-1}$ is a real-valued Markov chain with transition probabilities $$ {\ifmmode{\mathbbm{P}}\else{$\mathbbm{P}$}\fi}\left( Q_{n+1} = f_R(q) \Big| Q_{n} = q \right) = p \quad\mbox{ and }\quad {\ifmmode{\mathbbm{P}}\else{$\mathbbm{P}$}\fi}\left( Q_{n+1} = f_L(q) \Big| Q_{n} = q \right) = 1-p, $$ where $f_R(q):= \lambda +1/q$ and $f_L(q):= \lambda -1/q$. Let $B:=\dfrac{\lambda+\sqrt{\lambda^2-4}}{2}\in [1, \lambda]$ be the largest fixed point of $f_L$. Note that we have ${\ifmmode{\mathbbm{P}}\else{$\mathbbm{P}$}\fi}(Q_{n+1}\ge \lambda | Q_n)\ge\min(p, 1-p)$ for any $n\ge 2$ and, again, if $Q_n\ge B$, then $Q_{n+1}\ge B$. Thus, with probability $1$, there exists $N_+$ such that for all $n\ge N_+$, the quotients $Q_{n}$ are at least $B$. Without loss of generality, we can henceforth assume that the initial values $a$ and $b$ are such that $Q_2\ge B$. We inductively define sub-intervals of ${\ifmmode{\mathbbm{R}}\else{$\mathbbm{R}$}\fi}_+$ indexed by finite sequences of $R$'s and $L$'s: $$ I_R:= f_R([B, \infty])=\left[\lambda, \lambda+\frac{1}{B} \right] \quad \mbox{ and }\quad I_L:= f_L([B, \infty]) = [B, \lambda], $$ and for any finite sequence $X$ in $\{R, L\}^*$, $$ I_{XR}:= f_R(I_X)\quad \mbox{ and }\quad I_{XL}:= f_L(I_X). $$ Obviously, all these intervals are included in $\left[B,\lambda+\frac{1}{B}\right]$. \begin{lemma}\label{lem:I_W} Let $W$ and $W'$ be two finite words in $\{R, L\}^*$. \begin{itemize} \item If $W$ is a suffix of $W'$, then $I_{W'}\subset I_W$; \item If neither $W$ is a suffix of $W'$ nor $W'$ is a suffix of $W$, then $I_W$ and $I_{W'}$ have disjoint interiors. \end{itemize} \end{lemma} \begin{proof} The first assertion is an easy consequence of the definition of $I_W$. To prove the second one, consider the largest common suffix $S$ of $W$ and $W'$. Since $LS$ and $RS$ are suffixes of $W$ and $W'$, by the first assertion, it is enough to prove that $I_{LS}$ and $I_{RS}$ have disjoint interiors. This can be shown by induction on the length of $S$, using the fact that $f_R$ and $f_L$ are monotonic on $[B, \infty]$. \end{proof} \begin{lemma}\label{lem:singleton} Let $(W_i)_{i\ge1}$ be a sequence of $R$'s and $L$'s. Then $\bigcap_{n\ge1} I_{W_n \dots W_1}$ is reduced to a single point. \end{lemma} \begin{proof} By Lemma~\ref{lem:I_W}, $I_{W_{n+1} W_n \dots W_1}\subset I_{W_n \dots W_1}$. Since the intervals are compact and nonempty, their intersection is nonempty. It remains to prove that their length goes to zero. First consider the case $\lambda >2$. The derivatives of $f_L$ and $f_R$ are of modulus less than $1/B^2<1$. Therefore, the length of $I_{W_n \dots W_1}$ is less than a constant times $(1/B^2)^{n}$. Let us turn to the case $\lambda=2$.
Observe that $I_{L^j}=\left[1, \frac{j+1}{j}\right]$, which is of length $1/j$. Hence, if $W_n \dots W_1$ contains $j$ consecutive $L$'s, then $I_{W_n \dots W_1}$ is included, for some $r<n$, in $I_{L^jW_r\dots W_1}=f_{W_1}\circ\dots\circ f_{W_r}(I_{L^j})$ which is of length less than $1/j$ (recall that the derivatives of $f_L$ and $f_R$ are of modulus less than $1$). On the other hand, the derivatives of $f_L\circ f_R$ and $f_R\circ f_R$ are of modulus less than $1/(2B+1)^2=1/9$ on $[B, \infty]$. Therefore, considering the maximum number of consecutive $L$'s in $W_n \dots W_1$, we obtain $\sup_{W_n \dots W_1}|I_{W_n \dots W_1}|\tend{n}{\infty} 0$. \end{proof} \begin{figure} \caption{First stages of the construction of the measure $\mu_{p, 2}$.} \label{mesure2} \end{figure} We deduce from the preceding results the invariant measure of the Markov chain $(Q_n)$. \begin{corollary} The unique invariant probability measure $\mu_{p, \lambda}$ of the Markov chain $(Q_n)=(F_n/F_{n-1})$ is given by \begin{equation}\label{nu} \mu_{p, \lambda}\left( I_{W}\right):=p^{|W|_R}(1-p)^{|W|_{L}} \end{equation} for any finite word $W$ in $\{R, L\}^*$, where $|W|_R$ and $|W|_L$ respectively denote the number of $R$'s and $L$'s in $W$. \end{corollary} We can now conclude the proof of Theorem~\ref{th:case_2} by invoking a classical law of large numbers for Markov chains (see \textit{e.g.}~\cite{meyn1993}, Theorem~17.0.1). Note that the explicit form of the invariant measure when $p=1/2$ and $\lambda\ge 2$ was already given by Sire and Krapivsky~\cite{krapivsky2001}. \section{Variations of the Lyapunov exponents} \subsection{Variations with $p$} \label{Sec:croissance_p} \begin{theo} \label{croissance} For any integer $k\ge3$, the function $p\mapsto\widetilde\gamma_{p,\lambda_k}$ is increasing and analytic on $]1/k,1[$, and the function $p\mapsto\gamma_{p,\lambda_k}$ is increasing and analytic on $]0,1[$. Moreover, \begin{equation} \label{limit} \lim_{p\to 0}\gamma_{p,\lambda_k} = \lim_{p\to 1/k}\widetilde\gamma_{p,\lambda_k} = 0, \end{equation} and \begin{equation} \label{limit1} \lim_{p\to 1}\gamma_{p,\lambda_k} = \gamma_{1,\lambda_k} = \lim_{p\to 1}\widetilde\gamma_{p,\lambda_k} = \widetilde\gamma_{1,\lambda_k} = \log \left( \dfrac{\lambda_k+\sqrt{\lambda_k^2+4}}{2} \right). \end{equation} For any $\lambda\ge2$, the function $p\mapsto\gamma_{p,\lambda}$ is increasing and analytic on $]0,1[$. \end{theo} The proof of the theorem relies on the following proposition, whose proof is postponed to the end of the section. \begin{prop} \label{compare} Let $(X_i)$ be a sequence of letters in the alphabet $\{R, L\}$ and $(X'_i)$ be a sequence of letters in the alphabet $\{R, L\}$ obtained from $(X_i)$ by turning an $L$ into an $R$. If $\lambda=\lambda_k$ for some $k\ge3$, then, in the non-linear case, any label $\widetilde F_n$ coded by the sequence $(X_i)$ is no larger than the corresponding label $\widetilde F'_n$ coded by $(X'_i)$. If $\lambda\ge 2$, and if $F_2/F_1\ge 1$, any label $F_n$ coded by the sequence $(X_i)$ is no larger than the corresponding label $F'_n$ coded by $(X'_i)$. \end{prop} \begin{proof}[Proof of Theorem~\ref{croissance}] Let $\lambda=\lambda_k$ for some integer $k\ge3$. Let $1/k<p\le p'\le1$. Let $(X_i)$ (respectively $(X'_i)$) be a sequence of i.i.d. random variables taking values in the alphabet $\{R, L\}$ with probability $(p, 1-p)$ (respectively $(p', 1-p')$). We can realize a coupling of $(X_i)$ and $(X'_i)$ such that for any $i$, $X_i=R$ implies $X'_i=R$.
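One explicit construction, given only for concreteness: let $(U_i)$ be i.i.d. random variables, uniformly distributed on $[0,1]$, one for each index $i$, and set $X_i:=R$ if $U_i\le p$ and $X_i:=L$ otherwise, and similarly $X'_i:=R$ if $U_i\le p'$ and $X'_i:=L$ otherwise. Then $(X_i)$ and $(X'_i)$ have the prescribed distributions and, since $p\le p'$, $X_i=R$ indeed implies $X'_i=R$.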
From Proposition~\ref{compare}, it follows that the label $\widetilde F_n$ coded by $(X_i)$ is never larger than the label $\widetilde F'_n$ coded by $(X'_i)$. We get that $$ \widetilde\gamma_{p,\lambda_k} = \lim \dfrac{1}{n}\log \widetilde F_n \le \lim\dfrac{1}{n}\log \widetilde F'_n = \widetilde\gamma_{p',\lambda_k}. $$ Therefore, $p\mapsto \widetilde \gamma_{p,\lambda_k}$ is a non-decreasing function on $[1/k,1]$. Observe that $p\mapsto p_R$ is non-decreasing in both (linear and non-linear) cases. Hence, the function $\rho : p\mapsto \sqrt[k-1]{1-p_R}$ is non-increasing in both cases. We conclude that $p\mapsto\gamma_{p,\lambda_k}$ is non-decreasing on $[0,1]$. Since $\gamma_{p,\lambda_k}>0$ for $0<p<1$, the upper Lyapunov exponent associated to the product of random matrices is simple, and we know from \cite{peres1990} that $\gamma_{p,\lambda_k}$ is an analytic function of $p\in]0, 1[$, thus it is increasing. Via the dependence on $\rho$ which is an analytic function of $p$, we get that $\widetilde\gamma_{p,\lambda_k}$ is an analytic increasing function of $p\in]1/k, 1[$. Now, observe that $\rho\longmapsto \int_0^\infty \log x \, d\nu_{k,\rho}(x)$ is continuous on $[0,1]$ (as the uniform limit of continuous functions). When $p$ goes to zero in the linear case (or $p\to 1/k$ in the non-linear case), $p_R$ tends to 0 and $\rho$ tends to 1. By continuity of the integral, we obtain~\eqref{limit} using Remark~\ref{rho = 1}. When $p=1$, the deterministic sequence $F_n=\widetilde F_n$ grows exponentially fast, and the expression of $\gamma_{1,\lambda_k}$ follows from elementary analysis. When $\lambda \ge 2$ (we do not need to distinguish the linear case from the non-linear case), the proof is handled in the same way, using Proposition~\ref{compare}. \end{proof} \begin{proof}[Proof of Proposition~\ref{compare} when $\lambda\ge2$] We let the reader check that in this case, for all $s\ge0$ the matrix $RL^s$ has nonnegative entries. Suppose the difference between $(X_i)$ and $(X'_i)$ occurs at level $j$. For any $n\ge j$, the sequence $X'_j\ldots X'_n$ can be decomposed into blocks of the form $RL^s$, $s\ge0$, hence the product of matrices $X'_j\cdots X'_n$ has nonnegative entries. If $n\ge j$, we can thus write $F'_n$ as a linear combination with nonnegative coefficients: $ F'_n = C_1 F'_{j-2}+ C_2 F'_{j-1}$. Moreover, $F_n = -C_1 F_{j-2}+ C_2 F_{j-1}=-C_1 F'_{j-2}+ C_2 F'_{j-1}$, hence $F_n\le F'_n$ (since $F_2/F_1\ge 1$, all $F_n$'s are positive). \end{proof} The proof of Proposition~\ref{compare} when $\lambda=\lambda_k$ uses three lemmas. The first one can be viewed as a particular case when the sequence of $R$'s and $L$'s is reduced. \begin{lemma} \label{compareRL} Let $\lambda=\lambda_k$. Let $a>0$, $b>0$, $j_1\ge0$ and $j_2\ge0$ such that $j_1+1+j_2\le k-2$. If $(a',b')=(a,b)RL^{j_1}RL^{j_2}$ and $(a'',b'')=(a,b)RL^{j_1+1+j_2}$, then $b'\ge b''$. \end{lemma} \begin{proof} For any $\ell\in\{0,\ldots,j_2\}$, set $(x_\ell,x_{\ell+1}):= (a,b)RL^{j_1}RL^{\ell}$, and $(y_\ell,y_{\ell+1}):= (a,b)RL^{j_1+1+\ell}$. Then the quotient $x_{\ell+1}/x_\ell$ lies in $I_\ell$ (see Section~\ref{Sec:SternBrocot}), whereas the quotient $y_{\ell+1}/y_\ell$ lies in $I_{j_1+1+\ell}$. It follows that $y_{\ell+1}/y_\ell \le x_{\ell+1}/x_\ell$, and since $x_0=y_0$, we inductively get that for all $\ell\in\{0,\ldots,j_2+1\}$, $y_\ell\le x_\ell$. The lemma is proved, observing that $b'=x_{j_2+1}$ and $b''=y_{j_2+1}$. \end{proof} \begin{lemma} \label{comparek} Let $\lambda=\lambda_k$.
Let $(X_i)_{i\ge2}$ be a sequence of matrices in $\{R,L\}$, which does not contain $k-1$ consecutive $L$'s and such that $X_2=R$. Let $x_0>0$, $x_1>0$, and set inductively $(x_i,x_{i+1}):= (x_{i-1},x_i)X_{i+1}$. Then for any $i\ge0$, $x_{i+k}\ge x_i$. \end{lemma} \begin{proof} If $X_{i+1}=R$, this is just a repeated application of the following claim: If $a>0$, $b>0$, $0\le j\le k-3$, and if we set $(a',b'):= (a,b)RL^j$, then $b'\ge b$. Indeed, by \eqref{RtimesPowersOfL}, we have $b'\ge b\, \sin\bigl((j+2)\pi/k\bigr) / \sin\bigl(\pi/k\bigr) \ge b$. If $X_{i+1}=L$, we first prove the lemma when the sequence $X_{i+1}\ldots X_{i+k}$ contains only one $R$: $X_{i+j}=R$ for some $j\in\{2,\ldots,k-1\}$. We proceed by induction on $j$. If $j=2$, then $ (x_{i+k-1},x_{i+k}) = (x_{i-1},x_{i}) LRL^{k-2}$. By \eqref{RtimesPowersOfL}, the second column of $RL^{k-2}$ is $\begin{pmatrix}1\\0\end{pmatrix}$, thus $x_{i+k}=x_i$. Now, assume $j>2$ and that we have proved the inequality up to $j-1$. Since the sequence of matrices starts with an $R$ and does not contain $k-1$ consecutive $L$'s, we have $x_{i+1}/x_i \in I_\ell$ for some $\ell\le k-j$ (see Section~\ref{Sec:SternBrocot}). In particular, $x_{i+1}/x_i \ge b_{k-j}$. Now define $x'_{i+k+1}$ by $(x_{i+k},x'_{i+k+1}):= (x_{i+k-1},x_{i+k}) L$. We have $x'_{i+k+1}/x_{i+k} \in I_{k-j+1}$, and it is thus bounded below by $b_{k-j}$. Using the induction hypothesis $x'_{i+k+1}\ge x_{i+1}$, we conclude that $x_{i+k}\ge x_i$. Finally, assume that the sequence $X_{i+1}\ldots X_{i+k}$ starts with an $L$ and contains several $R$'s. Turning the last $R$ into an $L$, we can apply Lemma~\ref{compareRL} to compare $x_{i+k}$ with the case where there is one less $R$, and prove the result by induction on the number of $R$'s. \end{proof} \begin{lemma} \label{reverse} Let $\lambda=\lambda_k$. Let $\widetilde F_{n}$ be inductively defined by $\widetilde F_0\ge 0$, $\widetilde F_1\ge 0$ and $\widetilde F_{n+1}= |\lambda \widetilde F_{n}-\widetilde F_{n-1}|$ for any $n\ge 1$. Then for any $n\ge 0$, $\widetilde F_{n+k}\le \widetilde F_{n}$. \end{lemma} \begin{proof} For $n\le 0$, let $G_{n} := \widetilde F_{-n}\ge 0$. Then, for any $n\le -1$, we have $$ (G_{n}, G_{n+1})= \begin{cases} (G_{n-1}, G_n)L & \mbox{ if }\lambda \widetilde F_{n}\ge\widetilde F_{n-1},\\ (G_{n-1}, G_n)R & \mbox{ otherwise.} \end{cases} $$ Moreover, we can assume that the sequence of matrices in $\{R,L\}$ corresponding to $(G_n)$ never contains $k-1$ consecutive $L$'s. Indeed, the second column of $L^{k-1}$ is $\begin{pmatrix}-1\\0\end{pmatrix}$. Thus, if we had $k-1$ consecutive $L$'s, we could find $n$ such that $-G_{n-1}=G_{n+k-1}$, which is possible only if $G_{n-1}=G_{n+k-1}=0$. But if such a situation occurs we can always turn the first $L$ into an $R$ without changing the sequence (because $(0, G_n)R=(0, G_n)L$). The result is thus a direct application of Lemma~\ref{comparek}. \end{proof} \begin{proof}[Proof of Proposition~\ref{compare} when $\lambda=\lambda_k$] Suppose the difference between $(X_i)$ and $(X'_i)$ occurs at level $j$. We decompose $(X_{i})_{i\ge j}$ as $LL^rY$ and $(X'_{i})_{i\ge j}$ as $RL^rY$, where $0\le r\le +\infty$ and $Y=(Y_i)_{i\ge j+r+1}$ is a sequence of letters in the alphabet $\{R, L\}$ such that $Y_{j+r+1}=R$. Suppose first that, after the difference, all letters are $L$'s ($Y=\emptyset$). Let $j_1\in\{0, \dots, k-2\}$ be such that $\widetilde F_{j-1}/\widetilde F_j \in I_{j_1}$.
Without loss of generality, we can assume that the sequences $(X_i)$ and $(X'_i)$ are reduced before their first difference. Then, $X_{j-j_1-1}\dots X_{j-1}=X'_{j-j_1-1}\dots X'_{j-1}=RL^{j_1}$. By Lemma~\ref{compareRL}, $\widetilde F_{j+s}\ge \widetilde F'_{j+s}$ for all $0\le s\le j_2$, where $j_2:= k-3-j_1$. Now, by Lemma~\ref{comparek}, for all $1+j_2\le s\le k-2$, $\widetilde F_{j+s}\ge \widetilde F_{j+s-k}$, which is equal to $\widetilde F'_{j+s-k}$ since $s<k$. On the other hand, when $s=j_2+1$, we have $\widetilde F'_{j+j_2+1-k}=\widetilde F'_{j+j_2+1}$ because $X'_{j-j_1-1}\dots X'_{j+j_2+1}=RL^{k-1}$. Moreover, by Lemma~\ref{reverse}, $\widetilde F'_{j+s-k}\ge \widetilde F'_{j+s}$ for all $1+j_2< s\le k-2$. We thus get that $\widetilde F_{j+s}\ge \widetilde F'_{j+s}$ for all $j_2+1\le s\le k-2$. If $s\ge k-1$, reducing the pattern $RL^{k-1}$ in the sequence $(X_{j})_{i\ge j}$, we have $\widetilde F_{j+s}=\widetilde F'_{j+s-k}$ which is larger than $\widetilde F'_{j+s}$ by Lemma~\ref{reverse}. Suppose now that the suffix $Y$ is reduced. The above argument shows that all labels up to $j+r$ are well-ordered: In particular, $\widetilde F_{j+r-1} \le \widetilde F'_{j+r-1}$ and $\widetilde F_{j+r} \le \widetilde F'_{j+r}$. Since $Y$ is reduced, we can write, for any $n\ge j+r$, $(\widetilde F_{n}, \widetilde F_{n+1}) = (\widetilde F_{j+r-1}, \widetilde F_{j+r})Y_{j+r+1}\cdots Y_{n+1}$, where each $Y_i$ is interpreted as the corresponding matrix (the same equality is valid if we replace $\widetilde F$ by $\widetilde F'$). The product $Y_{j+r+1}\cdots Y_{n+1}$ can be decomposed into blocks of the form $RL^\ell$, with $0\le \ell\le k-2$, which are matrices with nonnegative entries. Therefore, for any $n\ge j+r$, the label $\widetilde F_{n}$ is a linear combination of $\widetilde F_{j+r-1}$ and $\widetilde F_{j+r}$, with nonnegative coefficients. Moreover, it is also true with the same coefficients if we replace $\widetilde F$ by $\widetilde F'$. We conclude that $\widetilde F_{n}\le \widetilde F'_{n}$. In the general case, we make all possible reductions on $Y$. We are left either with a reduced sequence or with a sequence of $L$'s, which are the two situations we have already studied. \end{proof} \begin{remark} In~\cite{janvresse2007}, a formula for the derivative of $\gamma_{p,1}$ with respect to $p$ was given, involving the product measure $\nu_{3,\rho}\otimes\nu_{3,\rho}$. We do not know whether this formula can be generalized to other $k$'s. \end{remark} \subsection{Variations with $\lambda$} \label{Sec:variations_k} For $p=1$, the deterministic sequence $F_n=\widetilde F_n$ grows exponentially fast, and we have in that case $$\widetilde \gamma_{1,\lambda} =\gamma_{1,\lambda} = \log \left( \dfrac{\lambda+\sqrt{\lambda^2+4}}{2} \right), $$ which is increasing with $\lambda$. We conjecture that, when $p$ is fixed, $\gamma_{p,\lambda_k}$ and $\widetilde \gamma_{p,\lambda_k}$ are increasing with $k$, and that $\gamma_{p,\lambda}$ is increasing with $\lambda$ for $\lambda\ge2$ (see Figure~\ref{Fig:gammas}). \begin{figure} \caption{The value of $\gamma_{p,\lambda}$.} \label{Fig:gammas} \end{figure} \section{Connections with Embree-Trefethen's paper} \label{Sec:Embree-Trefethen} \subsection{Positivity of the Lyapunov exponent} We have proved that the largest Lyapunov exponent corresponding to the linear $\lambda$-random Fibonacci sequence is positive for all $p$. In~\cite{embree1999}, Embree and Trefethen study a slight modification of our linear random Fibonacci sequence when $p=1/2$.
To be exact, they study the random sequence $x_{n+1}=x_n\pm \beta x_{n-1}$, which by a simple rescaling gives our linear $\lambda$-random Fibonacci sequence where $\lambda=1/\sqrt{\beta}$ (see our introduction). However, the exponential growth is not preserved by this rescaling. More precisely, the exponential growth $\sigma(\beta)=\lim |x_n|^{1/n}$ of Embree and Trefethen's sequence satisfies $$ \log \sigma(\beta) = \gamma_{1/2,\lambda} - \log\lambda. $$ In particular, $\sigma(\beta)<1$ if and only if $\gamma_{1/2,\lambda} < \log\lambda$, which according to the simulations described in their paper happens for $\beta<\beta^*\approx 0.70258\ldots$ (which corresponds to $\lambda>1.19303\ldots$). By Theorem~\ref{croissance}, the function $p\mapsto\gamma_{p,\lambda}$ is continuous and increasing from $0$ to $\gamma_{1,\lambda}>\log\lambda$. Hence there exists a unique $p^*(\lambda)\in[0,1]$ such that, for $p<p^*$, $\gamma_{p,\lambda}<\log\lambda$ and for $p>p^*$, $\gamma_{p,\lambda}>\log\lambda$. According to~\cite{embree1999}, for $\lambda=1$ we have $p^*<1/2$, and for $\lambda=\lambda_k$ ($k\ge 4$) and $\lambda\ge2$, $p^*>1/2$. For $\lambda\ge2$, we can indeed prove that $\gamma_{1/2,\lambda}<\log \lambda$: By Jensen's inequality, we have $$ \gamma_{1/2,\lambda} < \log \left(\int_B^{\lambda+1/B} x \, d\mu_{1/2,\lambda}\right), $$ which is equal to $\log\lambda$ by symmetry of the measure $\mu_{1/2,\lambda}$. For $\lambda=1$, we know that $\gamma_{p,1}>0$ for all $p>0$, thus $p^*=0$. When $\lambda=\lambda_k$, $k\ge 4$, numerical computations of the integral confirm that $p^*>1/2$, but we do not know how to prove it. \subsection{Sign-flip frequency} Embree and Trefethen introduce the \textit{sign-flip frequency} as the proportion of values $n$ such that $F_nF_{n+1}<0$, and give (without proof) the estimate $2^{-\pi\lambda/\sqrt{4-\lambda^2}}$ for this frequency, as $\lambda\to2$, $\lambda<2$. Note that, for $\lambda\ge 2$, there are no sign changes as soon as $n$ is large enough, and the sign-flip frequency is zero. For $\lambda=\lambda_k$, recall that for $n$ large enough, the sign of the reduced sequence $(F_n^r)$ is constant (see Lemma~\ref{L+}). Moreover, by~\eqref{reduction} and the fact that for all $0\le j\le k-2$ the matrix $RL^j$ has nonnegative entries (see~\eqref{RtimesPowersOfL}), the product $F_nF_{n}^r$ changes sign if and only if a pattern $RL^{k-1}$ is removed. Thus, the sign-flip frequency is equal to the frequency of deletions in the reduction process. Note that we have to make sure that this frequency indeed exists. This can be seen by considering the reduction of the left-infinite i.i.d. sequence $(X^*)_{-\infty}^0$ (Section~\ref{reductioninfinie}), since for $n$ large enough, deletions in the reduction process of $(X)_3^n$ occur at the same times as in the reduction process of $(X^*)_{-\infty}^n$. In the latter case, the ergodic theorem ensures that the frequency $\sigma$ of deletions exists and is equal to the probability that $(X^*)_{-\infty}^0$ is not proper. By Lemma~\ref{Lem:excursion}, $(X^*)_{-\infty}^0$ is not proper if and only if there exists a unique $\ell>0$ such that $(X^*)_{-\ell}^0$ is an excursion, and $(X^*)_{-\infty}^{-\ell-1}$ is proper. Thus, $$\sigma = \sum_{w\ \mathrm{excursion}} {\ifmmode{\mathbbm{P}}\else{$\mathbbm{P}$}\fi}(w)\, (1-\sigma).$$ By~\eqref{eq:excursion}, we get that the sign-flip frequency is equal to \begin{equation} \sigma = \sigma(\lambda_k,p) = \frac{p(1-p_R)}{p+(1-p)p_R+p(1-p_R)}.
\end{equation} Now, for a fixed $p\in ]0,1[$, we would like to obtain an estimate for $\sigma$ as $k\to\infty$. First, observe that $p_R=p_R(k)\to 1$ as $k\to\infty$. Indeed, recalling the expression of the function~$g$ given by~\eqref{survival-pr}, for any $x\in ]0,1[$, we have $g(x)<0$ for $k$ large enough, which implies $p_R>x$. Then, since $p_R$ satisfies $$ 1-p_R = \left(1-\dfrac{pp_R}{p+(1-p)p_R}\right)^{k-1}, $$ we get that $p_R\to 1$ exponentially fast in $k$. Using this estimate in the above equation, elementary computations lead to $$1-p_R \mathop{\sim}_{k\to\infty} (1-p)^{k-1}.$$ Thus, $$\sigma(\lambda_k,p)\mathop{\sim}_{k\to\infty} p(1-p)^{k-1}.$$ For $p=1/2$, this proves the estimate provided in~\cite{embree1999} in the special case $\lambda=\lambda_k$. \end{document}
\begin{document} \title{Total and paired domination numbers of toroidal meshes~\thanks {The work was supported by NNSF of China (No. 11071233).}} \author {Fu-Tao Hu,\quad Jun-Ming Xu\footnote{Corresponding author: [email protected]}\ \\ \\ {\small Department of Mathematics} \\ {\small University of Science and Technology of China}\\ {\small Hefei, Anhui, 230026, China} } \date{} \maketitle \begin{quotation} \textbf{Abstract}: Let $G$ be a graph without isolated vertices. The total domination number of $G$ is the minimum cardinality of a vertex set $S$ such that every vertex of $G$ has a neighbor in $S$, and the paired domination number of $G$ is the minimum cardinality of a dominating set whose induced subgraph contains a perfect matching. This paper determines the total domination number and the paired domination number of the toroidal meshes, i.e., the Cartesian product of two cycles $C_n$ and $C_m$ for any $n\ge 3$ and $m\in\{3,4\}$, and gives some upper bounds for $n, m\ge 5$. \vskip6pt\noindent{\bf Keywords}: combinatorics, total domination number, paired domination number, toroidal meshes, Cartesian product. \noindent{\bf AMS Subject Classification: }\ 05C25, 05C40, 05C12 \end{quotation} \section{Introduction} For notation and graph-theoretical terminology not defined here we follow \cite{x03}. Specifically, let $G=(V,E)$ be an undirected graph without loops, multi-edges and isolated vertices, where $V=V(G)$ is the vertex-set and $E=E(G)$ is the edge-set, a set of unordered pairs of distinct vertices of $V$. A graph $G$ is {\it nonempty} if $E(G)\ne \emptyset$. Two vertices $x$ and $y$ are {\it adjacent} if $xy\in E(G)$. For a vertex $x$, let $N(x)=\{y: xy\in E(G)\}$ denote the {\it neighborhood} of $x$. For a subset $D\subseteq V(G)$, we use $G[D]$ to denote the subgraph of $G$ induced by $D$. We use $C_n$ and $P_n$ to denote a cycle and a path of order $n$, respectively, throughout this paper. A subset $D\subseteq V(G)$ is called a {\it dominating set} if $N(x)\cap D\ne \emptyset$ for each vertex $x\in V(G)\setminus D$. The {\it domination number} $\gamma(G)$ is the minimum cardinality of a dominating set. A thorough study of domination appears in \cite{hhs98a, hhs98b}. A subset $D\subseteq V(G)$ of $G$ is called a {\it total dominating set}, a notion introduced by Cockayne {\it et al.}~\cite{cdh80}, if $N(x)\cap D\ne \emptyset$ for each vertex $x\in V(G)$. The {\it total domination number} of $G$, denoted by $\gamma_t(G)$, is the minimum cardinality of a total dominating set of $G$. Total domination in graphs has been extensively studied in the literature. A survey of selected recent results on this topic is given in~\cite{h09} by Henning. A dominating set $D$ of $G$ is called {\it paired}, a notion introduced by Haynes and Slater~\cite{hs95,hs98}, if the induced subgraph $G[D]$ contains a perfect matching. The {\it paired domination number} of $G$, denoted by $\gamma_p(G)$, is the minimum cardinality of a paired dominating set of $G$. Clearly, $\gamma(G)\le\gamma_t(G)\le \gamma_p(G)$ since a paired dominating set is also a total dominating set of $G$, and $\gamma_p(G)$ is even. Pfaff, Laskar and Hedetniemi~\cite{plh83} and Haynes and Slater~\cite{hs98} showed that the problems of determining the total domination number and the paired domination number of a general graph are NP-complete. Exact values of the total domination number and the paired domination number have been determined by several authors for some special classes of graphs.
In particular, $\gamma_t(P_n \times P_m)$ and $\gamma_p(P_n \times P_m)$ for $2\le m\le 4$ were determined by Gravier~\cite{g02} and by Proffitt, Haynes and Slater~\cite{phs01}, respectively. We use $G_{n,m}$ to denote the toroidal mesh, i.e., the Cartesian product $C_n \times C_m$ of two cycles $C_n$ and $C_m$. Klav\v{z}ar and Seifter~\cite{ks95} determined $\gamma(G_{n,m})$ for any $n\ge 3$ and $m\in\{3,4,5\}$. In this paper, we obtain the following results. $$ \begin{array}{rl} &\gamma_{t}(G_{n,3})=\lceil \frac{4n}{5}\rceil;\\ &\gamma_{p}(G_{n,3})=\left\{ \begin{array}{ll} \lceil\frac{4n}{5}\rceil & {\rm if}\ n\equiv 0,2,4\,({\rm mod}\,5),\\ \lceil\frac{4n}{5}\rceil+1 & {\rm if}\ n\equiv 1,3\,({\rm mod}\,5);\\ \end{array}\right.\\ &\gamma_{t}(G_{n,4})=\gamma_{p}(G_{n,4})=\left\{ \begin{array}{ll} n & {\rm if}\ n\equiv 0\,({\rm mod}\,4),\\ n+1 & {\rm if}\ n\equiv 1,3\,({\rm mod}\,4),\\ n+2 & {\rm if}\ n\equiv 2\,({\rm mod}\,4). \end{array}\right. \end{array} $$ \section{Preliminary results} In this section, we recall some definitions, notations and results used in the proofs of our main results. Throughout this paper, we assume that a cycle $C_n$ has the vertex-set $V(C_n)=\{1,\ldots,n\}$. Recall that $G_{n,m}$ denotes the toroidal mesh, i.e., the Cartesian product $C_n \times C_m$, which is the graph with vertex-set $V(G_{n,m})=\{x_{ij}|\ 1\leq i \leq n, 1\leq j \leq m\}$ and two vertices $x_{ij}$ and $x_{i'j\,'}$ being linked by an edge if and only if either $i=i'\in V(C_n)$ and $jj\,'\in E(C_m)$, or $j=j\,'\in V(C_m)$ and $ii'\in E(C_n)$. Let $Y_i=\{x_{ij}|\ 1\leq j \leq m\}$ for $1\leq i\leq n$, called a set of {\it vertical vertices} in $G_{n,m}$. In~\cite{gs02}, Gavlas and Schultz defined an {\it efficient total dominating set} to be a total dominating set $D$ of $G$ such that $|N(v)\cap D|=1$ for every $v\in V(G)$. Related results can be found in~\cite{ds03, gs02, hx08}. \begin{lem}\label{lem2.1}{\rm(Gavlas~and~Schultz~\cite{gs02})}\ If a graph $G$ has an efficient total dominating set $D$, then the edge-set of the subgraph $G[D]$ forms a perfect matching, and so the cardinality of $D$ is even, and $\{N(v): v\in D\}$ partitions $V(G)$. \end{lem} \begin{lem}\label{lem2.2} Let $G$ be a $k$-regular graph of order $n$. Then $\gamma_t(G)\ge \frac{n}{k}$, with equality if and only if $G$ has an efficient total dominating set. \end{lem} \begin{pf} Since $G$ is $k$-regular, each $v\in V(G)$ can dominate at most $k$ vertices. Thus $\gamma_t(G)\ge \frac{n}{k}$. It is easy to observe that the equality holds if and only if there exists a total dominating set $D$ such that $\{N(v): v\in D\}$ partitions $V(G)$, equivalently, $D$ is an efficient total dominating set. \end{pf} \begin{lem}\label{lem2.3} $\gamma_{t}(G_{n,m})=\gamma_{p}(G_{n,m})=\frac{nm}{4}$ for $n,m\equiv \,0~({\rm mod}\,4)$. \end{lem} \begin{pf} Let $D=\{x_{ij},x_{i(j+1)},x_{(i+2)(j+2)},x_{(i+2)(j+3)}: ~i,j\equiv \,1~({\rm mod}\,4)\}$, where $1\le i\le n$ and $1\le j\le m$. Figure~\ref{f1} shows such a set $D$ in $G_{8,4}$. It is easy to see that $D$ is a paired dominating set of $G_{n,m}$ with cardinality $\frac{nm}{4}$. Thus, $\gamma_{p}(G_{n,m})\le \frac{nm}{4}$. \begin{figure} \caption{The set $D$ in $G_{8,4}$.} \label{f1} \end{figure} By Lemma~\ref{lem2.2}, $\gamma_{t}(G_{n,m})\ge \frac{nm}{4}$. Since $\gamma_t(G_{n,m})\le \gamma_{p}(G_{n,m})$, $\gamma_{t}(G_{n,m})=\gamma_{p}(G_{n,m})=\frac{nm}{4}$.
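For illustration (this instance is not needed for the proof), for $(n,m)=(8,4)$ the construction gives $D=\{x_{11},x_{12},x_{33},x_{34},x_{51},x_{52},x_{73},x_{74}\}$: the pairs $\{x_{11},x_{12}\}$, $\{x_{33},x_{34}\}$, $\{x_{51},x_{52}\}$, $\{x_{73},x_{74}\}$ induce a perfect matching of $G[D]$, every vertex of $G_{8,4}$ has a neighbor in $D$, and $|D|=8=\frac{8\cdot 4}{4}$; cf.\ Figure~\ref{f1}.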
\end{pf} \section{Total and paired domination number of $G_{n,3}$} In this section, we determine the exact values of the total and the paired domination numbers of $G_{n,3}$, which can be stated as the following theorem. \begin{thm}\label{thm3.1} For any $n\geq 3$, $$\gamma_{t}(G_{n,3})=\left\lceil \frac{4n}{5}\right\rceil$$ and $$\gamma_{p}(G_{n,3})=\left\{ \begin{array}{ll} \lceil\frac{4n}{5}\rceil, & {\rm if}\ n\equiv 0,2,4\,({\rm mod}\,5);\\ \lceil\frac{4n}{5}\rceil+1,& {\rm if}\ n\equiv 1,3\,({\rm mod}\,5). \end{array}\right.$$ \end{thm} \begin{pf} Let $D$ be a minimum total dominating set of $G_{n,3}$. First, we may assume that $|Y_i\cap D|\le 2$ for any $1\le i\le n$. Indeed, if $|Y_i\cap D|=3$ for some $i\notin\{1,n\}$, then the set $D'=(D\setminus \{x_{i1}, x_{i3}\})\cup \{x_{(i-1)2}, x_{(i+1)2}\}$ is also a total dominating set of $G_{n,3}$ with $|D'|=|D|$. Let $\alpha_k$ be the number of $i$'s for which $|Y_i\cap D|=k$ for $1\le i\le n$ and $0\le k\le 2$. Then we have \begin{equation}\label{e3.1} \alpha_0+\alpha_1+\alpha_2=n. \end{equation} Assume $|Y_i\cap D|=0$ for some $i\notin\{1,n\}$. At least one of $|Y_{i-1}\cap D|$ and $|Y_{i+1}\cap D|$ is 2 since the three vertices in $Y_i$ must be dominated by $D$, which means that \begin{equation}\label{e3.2} 2\alpha_2-\alpha_0\ge 0. \end{equation} If $|Y_i\cap D|=2$ for some $i$ with $1\le i\le n$, then the two vertices in $Y_i\cap D$ can dominate at most 7 vertices. Since any vertex $x\in D$ can dominate at most 4 vertices, we have \begin{equation}\label{e3.3} 4\alpha_1+7\alpha_2\ge 3n. \end{equation} The sum of (\ref{e3.1}), (\ref{e3.2}) and (\ref{e3.3}) implies $$5\alpha_1+10\alpha_2\ge 4n,$$ and, hence, \begin{equation}\label{e3.4} \gamma_{t}(G_{n,3})=|D|=\alpha_1+2\alpha_2\ge \left\lceil \frac{4n}{5}\right\rceil. \end{equation} \begin{figure} \caption{The set $D$ (bold vertices) in $G_{n,3}$.} \label{f2} \end{figure} To obtain the upper bounds of $\gamma_{t}(G_{n,3})$ and $\gamma_{p}(G_{n,3})$, we set $$ D=\{x_{i2}: i\equiv\,1,2 \,({\rm mod}\,5)\} \cup \{x_{j1}, x_{j3}: j\equiv\,4 \,({\rm mod}\,5)\}, $$ where $1\le i,j\le n$. See Figure~\ref{f2}, where $D$ consists of the bold vertices. If $n\not\equiv \,3 \,({\rm mod}\,5)$, then $D$ is a total dominating set and $\gamma_{t}(G_{n,3})\leq |D|=\lceil \frac{4n}{5}\rceil$. If $n\equiv \,3 \,({\rm mod}\,5)$, then $D\cup \{x_{n2}\}$ is a total dominating set and $\gamma_{t}(G_{n,3})\leq |D|+1=\lceil \frac{4n}{5}\rceil$. Combining these facts with (\ref{e3.4}), we have that $\gamma_{t}(G_{n,3})=\lceil \frac{4n}{5}\rceil$. If $n\equiv \,0,2,4 \,({\rm mod}\,5)$, then $D$ is a paired dominating set and $\gamma_{p}(G_{n,3})\leq |D|=\lceil \frac{4n}{5}\rceil$. If $n\equiv \,1\,({\rm mod}\,5)$, then $D\cup \{x_{n1}\}$ is a paired dominating set and $\gamma_{p}(G_{n,3})\leq |D|+1=\lceil \frac{4n}{5}\rceil+1$. If $n\equiv \,3 \,({\rm mod}\,5)$, then $D\cup \{x_{n1},x_{n2}\}$ is a paired dominating set and $\gamma_{p}(G_{n,3})\leq |D|+2=\lceil \frac{4n}{5}\rceil+1$. Since $\gamma_p(G_{n,3})\ge \gamma_t(G_{n,3})$ and $\gamma_p(G_{n,3})$ is even, $\gamma_p(G_{n,3})=\lceil\frac{4n}{5}\rceil$ if $n\equiv \,0,2,4 \,({\rm mod}\,5)$, and $\gamma_p(G_{n,3})=\lceil\frac{4n}{5}\rceil+1$ if $n\equiv\,1,3 \,({\rm mod}\,5)$. The theorem follows. \end{pf} \section{Total and paired domination number of $G_{n,4}$} In this section, we determine the exact values of $\gamma_{t}(G_{n,4})$ and $\gamma_{p}(G_{n,4})$; the latter was announced by Bre\v{s}ar, Henning and Rall~\cite{bhr05}, but without proof.
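As a small illustrative instance (an example only; the general statements follow below), take $n=5$: the set $D=\{x_{11},x_{12},x_{33},x_{34},x_{51},x_{52}\}$, which is the set constructed in Lemma~\ref{lem4.2} for $n=5$, is a paired dominating set of $G_{5,4}$ of cardinality $6=n+1$, while the parity argument in the proof of Lemma~\ref{lem4.2} (via Lemmas~\ref{lem2.1} and~\ref{lem2.2}) rules out a total dominating set of cardinality $5$; hence $\gamma_{t}(G_{5,4})=\gamma_{p}(G_{5,4})=6$.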
\begin{lem}\label{lem4.2} $\gamma_{p}(G_{n,4})=\gamma_{t}(G_{n,4})=n+1$ for $n\equiv 1,3~\,({\rm mod}~4)$. \end{lem} \begin{pf} For $n\equiv 1~\,({\rm mod}~4)$, let $$ D=\{x_{i1},x_{i2},x_{(i+2)3},x_{(i+2)4}:\ i\equiv 1\,({\rm mod}\, 4), i\ne n\} \cup \{x_{n1},x_{n2}\}. $$ Then $D$ is a paired dominating set of $G_{n,4}$ with cardinality $n+1$. For $n\equiv 3~\,({\rm mod}~4)$, $D=\{x_{i1},x_{i2},x_{(i+2)3},x_{(i+2)4}:~i\equiv 1~\,({\rm mod}~4)\}$ is a paired dominating set of $G_{n,4}$ with cardinality $n+1$. Thus, $\gamma_{t}(G_{n,4})\le \gamma_{p}(G_{n,4})\le n+1$ for $n\equiv 1,3~\,({\rm mod}~4)$. By Lemma~\ref{lem2.2}, $\gamma_{t}(G_{n,4})\ge \frac{4n}{4}=n$. Now, we prove $\gamma_{t}(G_{n,4})\ge n+1$. Suppose to the contrary that $\gamma_{t}(G_{n,4})=n$. By Lemma~\ref{lem2.2}, $G_{n,4}$ has an efficient total dominating set $D'$. By Lemma~\ref{lem2.1}, $|D'|=n$ must be even, a contradiction since $n$ is odd. Therefore $\gamma_{t}(G_{n,4})>n$, and hence $\gamma_{p}(G_{n,4})=\gamma_{t}(G_{n,4})=n+1$. \end{pf} \begin{lem}\label{lem4.3} $\gamma_{t}(G_{n,4})\le \gamma_{p}(G_{n,4})\le n+2$ for $n\equiv 2~\,({\rm mod}~4)$. \end{lem} \begin{pf} Let $$ D=\{x_{i1},x_{i2},x_{(i+2)3},x_{(i+2)4}:\ i\equiv 1\,({\rm mod}\,4), i\le n-2\} \cup \{x_{(n-1)1},x_{(n-1)2},x_{n1},x_{n2}\}. $$ Then $D$ is a paired dominating set of $G_{n,4}$ with cardinality $n+2$. Thus, $\gamma_{t}(G_{n,4})\le \gamma_{p}(G_{n,4})\le n+2$. \end{pf} \vskip6pt To prove $\gamma_{t}(G_{n,4})\ge n+2$ for $n\equiv 2~\,({\rm mod}~4)$, we need the following notation and two lemmas. Let $H_i^j=Y_{i}\cup Y_{i+1} \cup\ldots \cup Y_{i+j-1}$, and let $G_i^j$ be the graph obtained from $G_{n,4}-H_i^j$ by adding the edge-set $\{x_{(i-1)k}x_{(i+j)k}:\ 1\le k\le 4\}$, where the subscripts are modulo $n$. Clearly, $G_i^j\cong G_{n-j,4}$. \begin{lem}\label{lem4.4} Let $D$ be a total dominating set of $G_{n,4}$. Then $|D\cap H_i^4|\ge 4$ for any $i$ with $1\le i\le n$. Moreover, if there exists some $i$ with $1\le i\le n$ such that $|N(v)\cap D|=1$ for any vertex $v$ in $H_i^4$, then $D'=D\setminus (D\cap H_i^4)$ is a total dominating set of $G_i^4$. \end{lem} \begin{pf} Without loss of generality, assume $i=2$. It can be easily verified that, in order to dominate the 8 vertices in $Y_3\cup Y_4$, at least $4$ vertices are needed, and hence $|D\cap H_2^4|\ge 4$. We now show the second assertion. Suppose to the contrary that $D'$ is not a total dominating set of $G_2^4$. Then there is a vertex $u$ in $Y_1\cup Y_6$ that is not dominated by $D'$, that is, $N_{G_2^4}(u)\cap D'=\emptyset$. Without loss of generality assume $u=x_{11}$. Then $x_{21}\in D$ and $x_{61}\notin D$. Also $x_{41}\notin D$ since $|N(x_{31})\cap D|=1$. Since $x_{33}$ must be dominated by $D$ and $|N(x_{33})\cap D|=1$, exactly one of $x_{32}$, $x_{34}$, $x_{23}$, and $x_{43}$ belongs to $D$. If $x_{32}\in D$ or $x_{34}\in D$, then $|N(x_{31})\cap D|\ge 2$, a contradiction. If $x_{23}\in D$, then $|N(x_{22})\cap D|\ge 2$, a contradiction. Thus, $x_{43}\in D$. Since $x_{51}$ must be dominated by $D$, $x_{52}\in D$ or $x_{54}\in D$. But then $|N(x_{53})\cap D|\ge 2$, a contradiction. Thus, $D'=D\setminus (D\cap H_2^4)$ is a total dominating set of $G_2^4$. \end{pf} \begin{lem}\label{lem4.5} Let $D$ be a total dominating set of $G_{n,4}$. If $x_{ij}$ is dominated by two vertices $u,v\in D$, then there exists a vertex $w$ in $H_{i-1}^2$ or $H_i^2$ such that $|N(w)\cap D|\ge 2$. \end{lem} \begin{pf} Without loss of generality, let $i=j=2$.
If $u,v\in Y_2$, then assume $u=x_{21}$, $v=x_{23}$ and, hence, $|N(x_{24})\cap D|\ge 2$. If one of $u$ and $v$ is in $Y_2$ and the other is in $Y_1\cup Y_3$, then without loss of generality assume $u=x_{21}\in Y_2$ and $v=x_{32}\in Y_3$. Then $|N(x_{31})\cap D|\ge 2$. If one of $u$ and $v$ is in $Y_1$ and the other is in $Y_3$, then without loss of generality assume $u=x_{12}\in Y_1$ and $v=x_{32}\in Y_3$. Since $x_{24}$ must be dominated by $D$, let $s\in N(x_{24})\cap D$. It is clear that $N(s)\cap N(u)\ne \emptyset$ or $N(s)\cap N(v)\ne \emptyset$, which implies that there exists a vertex $w\notin \{u,v\}$ in $H_{1}^2\cup H_{2}^2$ such that $|N(w)\cap D|\ge 2$. \end{pf} \begin{lem}\label{lem4.6} $\gamma_{t}(G_{n,4})=\gamma_{p}(G_{n,4})= n+2$ for $n\equiv 2~\,({\rm mod}~4)$. \end{lem} \begin{pf} By Lemma~\ref{lem4.3}, we only need to show $\gamma_{t}(G_{n,4})\ge n+2$. To this end, let $n=4k+2$. We proceed by induction on $k\ge 1$. It is easy to verify that $\gamma_{t}(G_{6,4})=8$ and $\gamma_{t}(G_{10,4})=12$. The conclusion is true for $k=1,2$. Assume that the induction hypothesis is true for $k-1$ with $k\ge 3$. Let $D$ be a minimum total dominating set of $G_{n,4}$, where $n=4k+2$ for $k\ge 3$. Assume to the contrary that $|D|\le n+1$. Since any vertex $u$ can dominate at most 4 vertices in $G_{n,4}$ and $|V(G_{n,4})|=4n$, there are at most four vertices such that each of them is dominated by at least two vertices in $D$. We now prove that there exists some $i\in\{1,2,\ldots, n\}$ such that $|N(v)\cap D|=1$ for any vertex $v\in H_i^4$. There is nothing to do if there are at most three vertices such that each of them is dominated by at least two vertices since $n\ge 14$. Now, assume there are exactly four vertices such that each of them is dominated by at least two vertices. By Lemma~\ref{lem4.5}, there exist two integers $s$ and $t$ with $1\le s,t\le n$ such that two of the four vertices are in $H_s^2$ and the other two are in $H_t^2$. Therefore, since $n\ge 14$, there exists an integer $i$ with $1\le i\le n$ such that $|N(v)\cap D|=1$ for any vertex $v\in H_i^4$. By Lemma~\ref{lem4.4}, $|D\cap H_i^4|\ge 4$ and $D'=D\setminus (D\cap H_i^4)$ is a total dominating set of $G_i^4\cong G_{n-4,4}$. By the inductive hypothesis, $|D'|\ge \gamma_t(G_{n-4,4})\ge n-2$. It follows that $$ n+1\ge |D|=|D\cap H_i^4|+|D'|\ge 4+n-2=n+2, $$ a contradiction, which implies that $\gamma_{t}(G_{n,4})=|D|\ge n+2$. By the induction principle, the lemma follows. \end{pf} \vskip6pt We state the above results as the following theorem. \begin{thm}\label{thm4.1} For any integer $n\ge 3$, $$ \gamma_{t}(G_{n,4})=\gamma_{p}(G_{n,4})=\left\{ \begin{array}{ll} n, & {\rm if}\ n\equiv 0\,({\rm mod}\,4);\\ n+1,& {\rm if}\ n\equiv 1,3\,({\rm mod}\,4);\\ n+2,& {\rm if}\ n\equiv 2\,({\rm mod}\,4). \end{array}\right.$$ \end{thm} \section{Upper bounds of $\gamma_{p}(G_{n,m})$ for $n, m\ge 5$} The values of $\gamma_{t}(G_{n,m})$ and $\gamma_{p}(G_{n,m})$ for $m\in\{3,4\}$ have been determined in the above sections, but their values for $m\ge 5$ have not been determined yet. In this section, we present their upper bounds. Since $\gamma_t(G)\le \gamma_p(G)$ for any graph $G$ without isolated vertices, we establish an upper bound only for $\gamma_{p}(G_{n,m})$ whenever we cannot obtain a smaller upper bound for $\gamma_t(G_{n,m})$ than the one for $\gamma_{p}(G_{n,m})$. \begin{lem}\label{lem5.1} $\gamma_t(G_{n,m})\le \gamma_t(G_{n+1,m})$ and $\gamma_p(G_{n,m})\le \gamma_p(G_{n+1,m})$.
\end{lem} \begin{pf} Let $D$ be a minimum paired (total) dominating set of $G_{n+1,m}$. If $D\cap Y_{n+1}=\emptyset$, then $D$ is also a paired (total) dominating set of $G_{n,m}$, and hence $\gamma_{p}(G_{n,m})\leq |D|$ ($\gamma_{t}(G_{n,m})\leq |D|$). Assume $D\cap Y_{n+1}\ne\emptyset$ below. Let $A=\{j|\ x_{(n+1)j}\in D\}$ and $B=\{j|\ x_{nj}\in D\}$. Then $D'=(D\setminus Y_{n+1})\cup \{x_{(n-1)j}|\ j\in A\cap B\} \cup \{x_{nj}|\ j\in A\setminus B\}$ is a total dominating set of $G_{n,m}$ and $|D'|\leq |D|$. Therefore $\gamma_{t}(G_{n,m})\leq \gamma_t(G_{n+1,m})$. The vertex set $D'$ may not be a paired dominating set of $G_{n,m}$; that is, the subgraph $G[D']$ induced by $D'$ in $G_{n,m}$ may contain connected components of odd order. Let $p$ be the number of such odd components. It is clear that $|D'|\le |D|-p$ by the construction of $D'$ from $D$. Therefore, we can obtain $D''$ by adding at most $p$ vertices to $D'$ such that the subgraph induced by $D''$ in $G_{n,m}$ contains no components of odd order. Then $D''$ is a paired dominating set of $G_{n,m}$, and hence $\gamma_{p}(G_{n,m})\leq |D''|\leq |D|$. \end{pf} \begin{thm}\label{thm5.1} $\gamma_p(G_{n,m})\le 4\lceil\frac{n}{4}\rceil\lceil\frac{m}{4}\rceil$. \end{thm} \begin{pf} Let $n=4a-i$ and $m=4b-j$ where $0\le i,j\le 3$. By Lemma~\ref{lem2.3}, $\gamma_p(G_{4a,4b})=4ab =4\lceil\frac{n}{4}\rceil\lceil\frac{m}{4}\rceil$. By Lemma~\ref{lem5.1}, applied repeatedly in both coordinates (using $G_{n,m}\cong G_{m,n}$), $\gamma_p(G_{n,m})\le \gamma_p(G_{4a,4b}) =4\lceil\frac{n}{4}\rceil\lceil\frac{m}{4}\rceil$. \end{pf} \vskip6pt For $n,m\ge 5$, let $m\equiv \,a~({\rm mod}\,4)$ and $n\equiv \,b~({\rm mod}\,4)$ where $0 \le a,b\le 3$. We will establish better bounds for $\gamma_{t}(G_{n,m})$ and $\gamma_{p}(G_{n,m})$ than those in Theorem~\ref{thm5.1} for some special values of $a$ and $b$. Without loss of generality, we can assume $b\ge a$ since $G_{n,m}\cong G_{m,n}$. Let $$ D_e=\{x_{ij},x_{i(j+1)},x_{(i+2)(j+2)},x_{(i+2)(j+3)}: ~i,j\equiv \,1~({\rm mod}\,4)\}, $$ where $1\le i\le n-2$, $1\le j\le m-2$, and $n,m\ge 5$. \begin{thm} $\gamma_{p}(G_{n,m})\le \frac{(n+1)m}{4}$ for $m\equiv \,0~({\rm mod}\,4)$ and $n\equiv \,1~({\rm mod}\,4)$. \end{thm} \begin{pf} Let $D=D_e\cup \{x_{nj},x_{n(j+1)}: ~j\equiv \,1~({\rm mod}\,4)\}$, where $1\le j\le m-2$. Then, it is easy to see that $D$ is a paired dominating set of $G_{n,m}$ with cardinality $\frac{(n+1)m}{4}$. Thus, $\gamma_{p}(G_{n,m})\le \frac{(n+1)m}{4}$. \end{pf} \begin{thm}\label{thm5.3} $\gamma_{t}(G_{n,m})\le \frac{(n+1)(m+1)}{4}$ and $\gamma_{p}(G_{n,m})\le \frac{(n+1)(m+1)}{4}+1$ for $m,n\equiv \,1~({\rm mod}\,4)$. \end{thm} \begin{pf} Let $D=D_e\cup\{x_{nj},x_{n(j+1)},x_{(i+1)(m-1)},x_{(i+2)m}: ~i,j\equiv \,1~({\rm mod}\,4)\}\cup \{x_{nm}\}$, where $1\le i\le n-2$ and $1\le j\le m-2$. Then, it is easy to see that $D$ is a total dominating set of $G_{n,m}$ with cardinality $\frac{(n+1)(m+1)}{4}$, and $D\cup \{x_{n(m-1)}\}$ is a paired dominating set of $G_{n,m}$ with cardinality $\frac{(n+1)(m+1)}{4}+1$. Thus, $\gamma_{t}(G_{n,m})\le \frac{(n+1)(m+1)}{4}$ and $\gamma_{p}(G_{n,m})\le \frac{(n+1)(m+1)}{4}+1$. \end{pf} \begin{thm}\label{thm5.4} $\gamma_{t}(G_{n,m})\le \frac{(n+1)(m+1)}{4}-3$ and $\gamma_{p}(G_{n,m})\le \frac{(n+1)(m+1)}{4}-2$ for $m\equiv \,1~({\rm mod}\,4)$ and $n\equiv \,3~({\rm mod}\,4)$. \end{thm} \begin{pf} Let $D=(D_e\cup\{x_{(i+1)(m-1)},x_{(i+2)m} :~i\equiv \,1~({\rm mod}\,4)\})\setminus \{x_{n(m-2)},x_{nm}\}$, where $1\le i\le n-2$.
Then, $D$ is a paired dominating set of $G_{n,m}$ with cardinality $\frac{(n+1)(m+1)}{4}-2$, and $D\setminus\{x_{2(m-1)}\}$ is a total dominating set of $G_{n,m}$ with cardinality $\frac{(n+1)(m+1)}{4}-3$. Thus, $\gamma_{t}(G_{n,m})\le \frac{(n+1)(m+1)}{4}-3$ and $\gamma_{p}(G_{n,m})\le \frac{(n+1)(m+1)}{4}-2$. \end{pf} \begin{cor} $\gamma_{t}(G_{n,m})\le \frac{(n+2)(m+1)}{4}-3$ and $\gamma_{p}(G_{n,m})\le \frac{(n+2)(m+1)}{4}-2$ for $m\equiv \,1~({\rm mod}\,4)$ and $n\equiv \,2~({\rm mod}\,4)$. \end{cor} \begin{pf} By Lemma~\ref{lem5.1}, $\gamma_t(G_{n,m})\le \gamma_t(G_{n+1,m})$ and $\gamma_p(G_{n,m})\le \gamma_p(G_{n+1,m})$. The corollary follows from Theorem~\ref{thm5.4}. \end{pf} \begin{thm}\label{thm5.5} $\gamma_{p}(G_{n,m})\le \frac{(n+2)(m+2)}{4}-6$ for $m,n\equiv \,2~({\rm mod}\,4)$. \end{thm} \begin{pf} Let $D=(D_e\cup\{x_{i(m-2)},x_{i(m-1)},x_{(i+2)(m-1)},x_{(i+2)m}:\ i\equiv \,1~({\rm mod}\,4)\}\cup \{x_{(n-1)j},$ $x_{(n-1)(j+1)},x_{n(j+2)},x_{n(j+3)}:\ j\equiv \,1~({\rm mod}\,4)\}\cup \{x_{n(m-1)}\}) \setminus \{x_{1(m-2)},$ $x_{1(m-1)},x_{n(m-3)}\}$, where $1\le i\le n-2$ and $1\le j\le m-2$. Then $D$ is a paired dominating set of $G_{n,m}$ with cardinality $\frac{(n+2)(m+2)}{4}-6$. Thus, $\gamma_{p}(G_{n,m})\le \frac{(n+2)(m+2)}{4}-6$. \end{pf} \end{document}
\begin{document} \title{On eigenmeasures under Fourier transform} \author{Michael Baake} \address{Fakult\"at f\"ur Mathematik, Universit\"at Bielefeld, \newline \hspace*{\parindent}Postfach 100131, 33501 Bielefeld, Germany} \email{$\{$mbaake,tspindel$\}[email protected] } \author{Timo Spindeler} \author{Nicolae Strungaru} \address{Department of Mathematical Sciences, MacEwan University, \newline \hspace*{\parindent}10700 \hspace{0.5pt} 104 Avenue, Edmonton, AB, Canada T5J 4S2} \email{[email protected]} \begin{abstract} Several classes of tempered measures are characterised that are eigenmeasures of the Fourier transform, the latter viewed as a linear operator on (generally unbounded) Radon measures on $\mathbb{R}\ts^d$. In particular, we classify all periodic eigenmeasures on $\mathbb{R}\ts$, which gives an interesting connection with the discrete Fourier transform, as well as all eigenmeasures on $\mathbb{R}\ts$ with uniformly discrete support. An interesting subclass of the latter emerges from the classic cut and project method for aperiodic Meyer sets. \end{abstract} \maketitle \section{Introduction} It is a well-known fact that the Fourier transform on $\mathbb{R}\ts$, more precisely its extension to the Fourier--Plancherel transform on the Hilbert space $L^2 (\mathbb{R}\ts)$, is a unitary operator of order~$4$, with eigenvalues $\{ 1, \mathrm{i}\ts, -1, -\mathrm{i}\ts \}$. A complete basis of eigenfunctions can be given in terms of the classic Hermite functions \cite[Thm.~57]{T}; see below for our notation and conventions. The corresponding statement for $L^2 (\mathbb{R}\ts^d)$ can easily be derived from here, and is helpful in many applications of the Fourier transform \cite{Wiener, DK}. Further, eigenfunctions of Fourier cosine and sine transforms as well as Hankel, Mellin and various other integral transforms have been studied \cite[Ch.~IX]{T}, in the setting of different function spaces. Clearly, these results can be reformulated in the realm of \emph{finite}, absolutely continuous measures. This point of view obviously harvests the self-duality of the locally compact Abelian group $\mathbb{R}\ts^d$. On the other hand, if $\delta_x$ denotes the normalised Dirac (or point) measure at $x$, the lattice Dirac comb $\delta^{}_{{\ts \mathbb{Z}}} \mathrel{\mathop:}= \sum_{m\in{\ts \mathbb{Z}}} \delta^{}_{m}$ satisfies \begin{equation}\label{eq:PSF-1} \widehat{\delta^{}_{{\ts \mathbb{Z}}}} \, = \, \delta^{}_{{\ts \mathbb{Z}}} \hspace{0.5pt} , \end{equation} which is nothing but the Poisson summation formula (PSF) for ${\ts \mathbb{Z}}$, written in terms of a lattice Dirac comb; see \cite{Cordoba,BM} as well as \cite[Sec.~9.2]{TAO1} and references therein. In other words, $\delta^{}_{{\ts \mathbb{Z}}}$ is an eigenmeasure of the Fourier transform with eigenvalue $1$. More precisely, in contrast to the function case mentioned above, it is an example of an \emph{unbounded}, but still translation-bounded, Radon eigenmeasure with uniformly discrete support. It is thus a natural question to ask what other unbounded eigenmeasures exist, and which eigenvalues occur. An interesting example, based on work by Guinand \cite{Gui}, is discussed in \cite{Meyer}. This, together with the general questions put forward in \cite{Jeff-rev}, inspired us to take a more systematic look as follows. After some preliminaries in Section~\ref{sec:prelim}, we determine the possible eigenvalues and some first solutions, in the setting of Radon measures, in Section~\ref{sec:exists}.
Here, we also derive a simple characterisation of eigenmeasures in terms of $4$-cycles under the Fourier transform (Proposition~\ref{prop:general}). Then, for $d=1$, we consider lattice-periodic solutions (Section~\ref{sec:cryst}) more closely, which are related to eigenvectors of the discrete Fourier transform, that is, the Fourier transform on the finite cyclic group $C_n$, which is also self-dual. In this periodic case, all eigenmeasures are automatically pure point measures that are supported in a lattice (Theorem~\ref{thm:lattice}). Next, in Section~\ref{sec:shadows}, we consider solutions that implicitly emerge from the cut and project method of aperiodic order (Theorem~\ref{thm:aper-meas} and Corollary~\ref{coro:aper}). Finally, we characterise the general class of eigenmeasures with uniformly discrete support in Section~\ref{sec:discrete}, leading to Theorem~\ref{thm:main}. \section{Preliminaries}\label{sec:prelim} The \emph{Fourier transform} of a function $g\in L^{1} (\mathbb{R}\ts)$ is defined by \[ \widehat{g} \hspace{0.5pt} (y) \, = \int_{\mathbb{R}\ts} \mathrm{e}^{-2 \pi \mathrm{i}\ts x y} \hspace{0.5pt} g(x) \, \mathrm{d} x \hspace{0.5pt} , \] which is bounded and continuous, while the inverse transform is given by $\widecheck{g} (y) = \widehat{g} (-y)$. If $\widecheck{g} \in L^{1} (\mathbb{R}\ts)$, one has $\widehat{\hspace{-0.5pt}\widecheck{g}\hspace{0.5pt}} = g$. For the (complex) Hilbert space $\mathcal{H} = L^{2} (\mathbb{R}\ts)$ with inner product \[ \langle \hspace{0.5pt} g \hspace{0.5pt} | f \hspace{0.5pt} \rangle \, \mathrel{\mathop:}= \, \int_{\mathbb{R}\ts} \overline{g (x)} \hspace{0.5pt} f (x) \, \mathrm{d} x \] and norm $\| f \|^{}_{2} \mathrel{\mathop:}= \sqrt{\langle f \hspace{0.5pt} | f \rangle}$, the Fourier transform on $L^{2}(\mathbb{R}\ts) \cap L^{1} (\mathbb{R}\ts)$ has a unique extension to a unitary operator $\mathcal{F}\ts$ on $L^{2} (\mathbb{R}\ts) = \mathcal{H}$, which is often called the \emph{Fourier--Plancherel transform}; see \cite[Thm.~2.5]{BF}. The corresponding statements hold for $\mathbb{R}\ts^d$ as well. The operator $\mathcal{F}\ts$ satisfies the relation $\mathcal{F}\ts^2 = I$, where $I$ is the involution defined by $\bigl( I g \bigr) (x) = g (-x)$. Consequently, one also has $\mathcal{F}\ts^4 = \mathrm{id}$, which implies that any eigenvalue of $\mathcal{F}\ts$ must be a fourth root of unity. It is well known that $\mathcal{H}$ possesses an ON-basis of eigenfunctions, which can be given via the Hermite functions. They naturally also appear as the eigenfunctions of the harmonic oscillator in quantum mechanics, which is a self-adjoint operator that acts on $\mathcal{H}$ and commutes with $\mathcal{F}\ts$; compare \cite[Sec.~34]{Dirac} and \cite{DK}. Concretely, for $n\in \mathbb{N}_0$, we employ the Hermite polynomials $H_n$ defined by the recursion \[ H_{n+1} (x) \, = \, 2 \hspace{0.5pt} x \hspace{0.5pt} H_n (x) - 2 \hspace{0.5pt} n \hspace{0.5pt} H_{n-1} (x) \] for $n\in\mathbb{N}$ together with $H^{}_{0} (x) = 1$ and $H^{}_{1} (x) = 2 x$. Then, the (normalised) Hermite functions \[ h_{n} (x) \, \mathrel{\mathop:}= \, \biggl( \frac{\sqrt{2}}{2^n n!} \biggr)^{\! 1/2} H_n \bigl( \sqrt{2 \pi} \hspace{0.5pt} x \bigr) \, \mathrm{e}^{- \pi x^2} \] satisfy the orthonormality relations $\langle h_m \hspace{0.5pt} | \hspace{0.5pt} h_n \rangle = \delta_{m,n}$ for $m,n \in \mathbb{N}_0$.
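For instance, $h^{}_{0} (x) = 2^{1/4} \, \mathrm{e}^{-\pi x^2}$, and the classic Gaussian integral $\int_{\mathbb{R}\ts} \mathrm{e}^{-\pi x^2} \, \mathrm{e}^{-2 \pi \mathrm{i}\ts x y} \, \mathrm{d} x = \mathrm{e}^{-\pi y^2}$ shows that $h^{}_{0}$ is fixed by the Fourier transform; this is the simplest instance of the eigenfunction property recalled next.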
Moreover, for $n\geqslant 0$, one has \[ \mathcal{F}\ts (h^{}_{n}) \, = \, \widehat{h^{}_{n}} \, = \, (-\mathrm{i}\ts)^n h^{}_{n} \hspace{0.5pt} , \] which means that the Hermite functions are eigenfunctions of $\mathcal{F}\ts$; see \cite[Rem.~8.3]{TAO1} as well as \cite[\S 6]{Wiener} or \cite[Sec.~2.5]{DK}. In Dirac's intuitive bra-c-ket notation \cite[Sec.~14]{Dirac}, one thus obtains the spectral theorem for the unitary operator $\mathcal{F}\ts$ as \[ \mathcal{F}\ts \, = \sum_{n=0}^{\infty} | \hspace{0.5pt} h_n \hspace{0.5pt} \rangle (-\mathrm{i}\ts)^n \langle \hspace{0.5pt} h_n \hspace{0.5pt} | \hspace{0.5pt} , \] which also entails the relation $\mathcal{H} = \mathcal{H}^{}_{0} \oplus \mathcal{H}^{}_{1} \oplus \mathcal{H}^{}_{2} \oplus \mathcal{H}^{}_{3}$, with $\mathcal{H}^{}_{\ell} = \text{span}^{}_{\mathbb{C}\ts} \{ h^{}_{4m+\ell} : m\in \mathbb{N}_{0} \}$. Next, let $xy$ denote the inner product of $x,y\in\mathbb{R}\ts^d$ and let $\mu$ be a finite measure on $\mathbb{R}\ts^d$. Then, its Fourier transform \cite{Rudin} is the continuous function on $\mathbb{R}\ts^d$ given by \[ \widehat{\mu} (y) \, \mathrel{\mathop:}= \int_{\mathbb{R}\ts^d} \mathrm{e}^{-2 \pi \mathrm{i}\ts x y} \, \mathrm{d} \mu (x) \hspace{0.5pt} . \] As the Fourier transform of a finite measure is a continuous function, nothing much of interest happens beyond what we summarised above. Consequently, we proceed with the extension to unbounded measures, where the very notion of Fourier transformability is more delicate. The simplest approach works via distribution theory, which we shall adopt here. From now on, unless specified otherwise, the term \emph{measure} will always mean a Radon measure, the latter viewed as a linear functional on $C_{\mathsf{c}} (\mathbb{R}\ts^d)$. Since $C_{\mathsf{c}}^{\infty} (\mathbb{R}\ts^d) \subset C_{\mathsf{c}} (\mathbb{R}\ts^d)$, every measure defines a distribution. Now, a measure on $\mathbb{R}\ts^d$ is \emph{tempered} if this distribution is tempered, that is, a continuous linear functional on Schwartz space, $\mathcal{S} (\mathbb{R}\ts^d)$. A tempered measure $\mu$ is called \emph{Fourier transformable}, or \emph{transformable} for short, if it is transformable as a distribution and if the transform is again a measure. The distributional transform of $\mu$ will interchangeably be denoted by $\mathcal{F}\ts (\mu)$ and by $\widehat{\mu}$. When the measure $\mu$ is transformable in this sense, its transform $\nu = \widehat{\mu}$ is a tempered measure, and hence transformable as a distribution, with $\widehat{\nu} = \mathcal{F}\ts^2 (\mu) = I \hspace{-0.5pt} . \hspace{0.5pt} \mu$, where the latter means the push-forward of the space inversion defined by $I (x) = -x$, to be discussed in more detail later. As a consequence, $\widehat{\nu}$ is again a measure, and we have the following useful property. \begin{fact}\label{fact:forever} If a tempered measure is transformable, it is also multiply transformable. \qed \end{fact} Since $\mathbb{R}\ts^d$ is a special case of a locally compact Abelian group, let us also mention the classic approach to the transformability of a measure, in the sense of measures, from \cite{ARMA1,BF}. It completely avoids the use of distribution theory, and is not restricted to tempered measures; see \cite{MoSt,NS11} for the systematic exposition of recent developments and results. If we want to refer to this notion, we will call it \emph{transformability in the strict sense}. Note that a transformable, tempered measure need not be strictly transformable. 
By slight abuse of notation, we shall use $\mathcal{F}\ts$ for both versions, as the context will always be clear. As the domain of two successive applications of $\mathcal{F}\ts$ in the strict sense is now generally smaller than that of $\mathcal{F}\ts$, one also needs the notion of double transformability here. A measure $\mu$ is called \emph{doubly} (or \emph{twice}) transformable in the strict sense if $\mu$ is strictly transformable as a measure and its transform, $\mathcal{F}\ts(\mu)$, is again transformable in the strict sense. Recall that a measure $\mu$ on $\mathbb{R}\ts^d$ is called \emph{translation bounded} if $\sup_{t\in\mathbb{R}\ts^d} \lvert \mu \rvert (t+K) < \infty$ holds for some (and then any) compact set $K\subset\mathbb{R}\ts^d$ with non-empty interior, where $\lvert \mu\rvert$ is the total variation of $\mu$. This notion is crucial in many ways. In particular, every translation-bounded measure is tempered \cite[Thm.~7.1]{ARMA1}. Further, strict transformability of tempered measures can most easily be decided via an application of \cite[Thm.~5.2]{Nicu} as follows; see also \cite{ST}. \begin{fact}\label{fact:transformable} A tempered measure\/ $\mu$ on\/ $\mathbb{R}\ts^d$ is strictly transformable as a measure if and only if it is transformable as a tempered distribution such that\/ $\widehat{\mu}$ is a translation-bounded measure. Moreover, the two transforms agree in this case. \qed \end{fact} By \cite[Thm.~3.12]{RS}, a strictly transformable measure $\mu$ on $\mathbb{R}\ts^d$ is doubly transformable in the strict sense if and only if $\mu$ is translation bounded. In this case, by \cite[Thm.~4.9.28]{MoSt}, one actually gets $\mathcal{F}\ts^2 (\mu) = I \hspace{-0.5pt} . \hspace{0.5pt} \mu$ as before. \begin{definition}\label{def:feigen} A non-trivial tempered measure\/ $\mu$ on\/ $\mathbb{R}\ts^d$ is called an \emph{eigenmeasure} of\/ $\mathcal{F}\ts$, or an eigenmeasure under Fourier transform, if it is transformable together with\/ $\mathcal{F}\ts (\mu) = \lambda \mu$ for some\/ $\lambda \in \mathbb{C}\ts$, which is the corresponding \emph{eigenvalue}. Further, we call an eigenmeasure\/ $\mu$ \emph{strict} when it is transformable in the strict sense and satisfies\/ $\mathcal{F}\ts (\mu) = \lambda \mu$ in the sense of measures. \end{definition} When the support of a measure $\mu$ has special discreteness properties, one can profitably employ the following equivalence. \begin{lemma}\label{lem:tb} If\/ $\mu\ne 0$ is a tempered, pure point measure on\/ $\mathbb{R}\ts^d$ with uniformly discrete support, the following properties are equivalent. \begin{enumerate}\itemsep=2pt \item The measure\/ $\mu$ is strictly transformable, and\/ $\widehat{\mu} = \lambda \mu$ holds as an equation of measures for some\/ $0\ne\lambda\in \mathbb{C}\ts$. \item The measure\/ $\mu$ satisfies\/ $\widehat{\mu} = \lambda \mu$ in the distributional sense for some\/ $0\ne\lambda \in \mathbb{C}\ts$. \end{enumerate} \end{lemma} \begin{proof} Let us first note that $\lambda = 0$ implies $\mu = 0$, so we must indeed have $\lambda \ne 0$. To verify (1) $\Rightarrow$ (2), we simply observe that strict transformability of $\mu$ implies its (distributional) transformability by Fact~\ref{fact:transformable}, and the two transforms agree. Conversely, \cite[Lemma~6.3]{BST} implies that $\widehat{\mu}$ is translation bounded, and the claim follows again from Fact~\ref{fact:transformable}. \end{proof} These characterisations will cover essentially all situations we encounter below. 
Let us first show that the eigenmeasure notion is a consistent one, and determine the possible eigenvalues. \section{Eigenvalues and structure of solutions}\label{sec:exists} Below, $\mathcal{S} (\mathbb{R}\ts^d)$ denotes Schwartz space \cite{Schw}, and $I$ is the reflection in $0$, so $I(x) = -x$. As above, we let $I$ also act on functions, via $I (g) \mathrel{\mathop:}= g \circ I$. Accordingly, when $\mu$ is a measure, we define the push-forward $I \hspace{-0.5pt} . \hspace{0.5pt} \mu$ by $\bigl( I \hspace{-0.5pt} . \hspace{0.5pt} \mu \bigr) (g) = \mu ( g \circ I)$. Let $\mu\ne 0$ be a tempered measure, so $\mu \in \mathcal{S}' (\mathbb{R}\ts^d)$, where the latter denotes the space of tempered distributions on $\mathbb{R}\ts^d$. If $\mu$ is an eigenmeasure of $\mathcal{F}\ts$, hence $\mathcal{F}\ts (\mu) = \lambda \mu$, we get $\mathcal{F}\ts^4 (\mu) = \lambda^4 \mu$ by repetition and Fact~\ref{fact:forever}. On the other hand, given any $\varphi\in \mathcal{S} (\mathbb{R}\ts^d)$ and observing the relation $\mathcal{F}\ts^2 (\varphi) = \varphi \circ I$, one has \[ \mathcal{F}\ts^2 (\mu) (\varphi) \, = \, \mu \bigl( \mathcal{F}\ts^2 (\varphi) \bigr) \, = \, \mu ( \varphi \circ I ) \, = \, \bigl( I \hspace{-0.5pt} . \hspace{0.5pt} \mu \bigr) (\varphi) \hspace{0.5pt} , \] hence $\mathcal{F}\ts^2 (\mu) = I\hspace{-0.5pt} . \hspace{0.5pt} \mu$ and thus $\mathcal{F}\ts^4 (\mu) = \mu$ due to $I^2 = \mathrm{id}$. This implies $\lambda^4 = 1$ as in the case of functions, so $\lambda$ must be a fourth root of unity. Note that we also get $I\hspace{-0.5pt} . \hspace{0.5pt} \mu = \mathcal{F}\ts^2 (\mu) = \lambda^2 \mu$, for any of the possible eigenvalues. This has an immediate consequence as follows. \begin{fact}\label{fact:symm} Let\/ $\mu$ be a tempered measure on\/ $\mathbb{R}\ts^d$ that is an eigenmeasure of\/ $\mathcal{F}\ts$, so\/ $\widehat{\mu} = \lambda \mu \ne 0$. Then, $\lambda \in \{ 1, \mathrm{i}\ts, -1, -\mathrm{i}\ts \}$, and one has\/ $I \hspace{-0.5pt} . \hspace{0.5pt} \mu = \lambda^2 \mu$. In particular, the support of\/ $\mu$ is symmetric, $\supp (\mu) = - \supp (\mu)$, and\/ $\mu$ satisfies\/ $\mu \bigl( \{ -x \} \bigr) = \lambda^2 \hspace{0.5pt} \mu \bigl( \{ x \} \bigr)$ for all\/ $x\in\mathbb{R}\ts^d$. \qed \end{fact} Let us verify that all $\lambda \in \{ 1, \mathrm{i}\ts, -1, -\mathrm{i}\ts \}$ actually occur. To this end, we use $\mathcal{F}\ts^4=\mathrm{id}$ and consider measures $\nu^{}_{m} = \mu + \mathrm{i}\ts^m \mathcal{F}\ts (\mu) + \mathrm{i}\ts^{2m} \mathcal{F}\ts^2 (\mu) + \mathrm{i}\ts^{3m} \mathcal{F}\ts^3 (\mu)$, for $0\leqslant m \leqslant 3$. Then, one has \begin{equation}\label{eq:trick} \mathcal{F}\ts (\nu^{}_{m} ) \, = \, (-\mathrm{i}\ts)^m \hspace{0.5pt} \nu^{}_{m} \hspace{0.5pt} , \end{equation} which establishes the existence of eigenmeasures for all possible $\lambda \in \{ 1, \mathrm{i}\ts, -1, -\mathrm{i}\ts \}$, provided we start from a transformable, tempered measure $\mu$ such that the $\nu^{}_{m}$ with $0\leqslant m \leqslant 3$ are non-trivial. On $\mathbb{R}\ts$, this can be achieved with $\mu = \delta^{}_{\alpha} \hspace{-0.5pt} * \delta^{}_{{\ts \mathbb{Z}}}$ for any $\alpha \not\in {\ts \mathbb{Q}}$. Let us continue with a slightly more complex example that will give additional insight. To this end, we set $d=1$ and start with an arbitrary real eigenmeasure for $\lambda=1$, so $\widehat{\mu} = \mu = \overline{\mu}$, such as $\mu = \delta^{}_{{\ts \mathbb{Z}}}$ from the PSF in \eqref{eq:PSF-1}. Now, let $\chi^{}_{s} \! 
: \, \mathbb{R}\ts \xrightarrow{\quad} \mathbb{S}^1$ be a character, so $\chi^{}_{s} (x) = \mathrm{e}^{2 \pi \mathrm{i}\ts s x}$ for some fixed $s \in \mathbb{R}\ts$. Then, for $t\in\mathbb{R}\ts$, a simple calculation shows that \begin{equation}\label{eq:char-1} \delta^{}_{t} * \bigl( \chi^{}_{s} \hspace{0.5pt} \delta^{}_{{\ts \mathbb{Z}}} \bigr) \, = \, \overline{\chi^{}_{s} (t)} \, \chi^{}_{s} \cdot \bigl( \delta^{}_{t} \hspace{-0.5pt} * \delta^{}_{{\ts \mathbb{Z}}} \bigr) \, = \, \overline{\chi^{}_{s} (t)} \, \bigl( \chi^{}_{s} \hspace{0.5pt} \delta^{}_{{\ts \mathbb{Z}} + t} \bigr) . \end{equation} This can be done more generally for any real eigenmeasure, with the following consequence. \begin{lemma}\label{lem:modif} Let\/ $\chi^{}_{s}$ be a character on\/ $\mathbb{R}\ts$ and\/ $\mu$ a transformable, tempered measure on\/ $\mathbb{R}\ts$ that satisfies\/ $\widehat{\mu} = \mu = \overline{\mu}$. Then, the measure\/ $\omega^{}_{s} \mathrel{\mathop:}= \chi^{}_{s} \cdot (\delta^{}_{s} \hspace{-0.5pt} * \mu)$ is a translation-bounded measure that is transformable as well, with \[ \widehat{\omega^{}_{s}} \, = \, \mathrm{e}^{2 \pi \mathrm{i}\ts s^2}\hspace{0.5pt} \overline{\omega^{}_{s}} \hspace{0.5pt} . \] For $\mu = \omega^{}_{0}$, this relation reduces to\/ $\widehat{\mu} = \mu$. \end{lemma} \begin{proof} Clearly, $\omega^{}_{s}$ is again a tempered measure, while a simple calculation with the convolution theorem shows that $\widehat{\omega^{}_{s}}$ is indeed a measure, and translation bounded. The transformability of $\omega^{}_{s}$ now follows from Fact~\ref{fact:transformable}. With $\mu$, also the shifted measure $\delta^{}_{s} \hspace{-0.5pt} * \mu$ is real, and we can simply calculate \[ \mathcal{F}\ts (\omega^{}_{s}) \, = \, \widehat{\chi^{}_{s}} * \bigl( \chi^{}_{-s} \, \widehat{\mu} \bigr) \, = \, \delta^{}_{s} \hspace{-0.5pt} * (\overline{\chi^{}_{s}} \, \mu ) \, = \, \chi^{}_{s} (s) \bigl( \overline{\chi^{}_{s}} \cdot (\delta^{}_{s} \hspace{-0.5pt} * \mu ) \bigr) \, = \, \mathrm{e}^{2 \pi \mathrm{i}\ts s^2} \, \overline{\omega^{}_{s}} \hspace{0.5pt} , \] where the third step is the obvious generalisation of \eqref{eq:char-1}. This establishes the main claim, while $s = 0$ clearly gives the eigenmeasure relation stated. \end{proof} With the $\omega^{}_{s}$ from Lemma~\ref{lem:modif}, for generic $s$, we get the cycle \begin{equation}\label{eq:cycle} \omega^{}_{s} \, \xrightarrow{\,\mathcal{F}\ts\,} \, \mathrm{e}^{2 \pi \mathrm{i}\ts s^2} \, \overline{\omega^{}_{s}} \, \xrightarrow{\,\mathcal{F}\ts\,} \, I \hspace{-0.5pt}\hspace{-0.5pt} . \hspace{0.5pt}\hspace{0.5pt} \omega^{}_{s} \, \xrightarrow{\,\mathcal{F}\ts\,} \, \mathrm{e}^{2 \pi \mathrm{i}\ts s^2} \, I \hspace{-0.5pt}\hspace{-0.5pt} . \hspace{0.5pt}\hspace{0.5pt} \overline{\omega^{}_{s}} \, \xrightarrow{\,\mathcal{F}\ts\,} \, \omega^{}_{s} \hspace{0.5pt} , \end{equation} as can be checked by an explicit calculation that we leave to the interested reader. Unless $s$ takes special values, the four measures are distinct, and thus form a $4$-cycle. Excluding a few more values for $s$, the measures $\nu^{}_{m}$ from \eqref{eq:trick} with $\mu = \omega^{}_{s}$ are non-trivial, so we see a plethora of eigenmeasures of $\mathcal{F}\ts$, for all possible eigenvalues. One extension to tempered measures on $\mathbb{R}\ts^d$ is obvious via considering product measures. Let us summarise as follows. 
\begin{prop}\label{prop:general} Within the class of tempered measures on\/ $\mathbb{R}\ts^d$, there are eigenmeasures of\/ $\mathcal{F}\ts$ for all possible eigenvalues, that is, for any\/ $\lambda \in \{ 1, \mathrm{i}\ts, -1, -\mathrm{i}\ts \}$. Moreover, let\/ $\mu\ne 0$ be a tempered measure on\/ $\mathbb{R}\ts^d$ and let\/ $\lambda$ be a fourth root of unity. Then, $\mu$ is an eigenmeasure of\/ $\mathcal{F}\ts$ with eigenvalue\/ $\lambda$ if and only if there exists a transformable, tempered measure\/ $\nu$ such that\/ $\mu = \sum_{j=0}^{3} \lambda^{-j} \hspace{0.5pt} \mathcal{F}\ts^{j} (\nu)$. \end{prop} \begin{proof} The first claim is clear from our considerations above. For the second, assume $\widehat{\mu} = \lambda \hspace{0.5pt}\mu$, which includes the transformability of $\mu$ according to Definition~\ref{def:feigen}, and then also its multiple transformability by Fact~\ref{fact:forever}. Now, set $\nu = \frac{1}{4} \hspace{0.5pt}\mu\hspace{0.5pt}$. Clearly, $\nu$ is tempered and transformable, and it satisfies \[ \nu \hspace{0.5pt} + \lambda^{\hspace{-0.5pt} -1} \hspace{0.5pt} \widehat{\nu} \hspace{0.5pt} + \lambda^{\hspace{-0.5pt} -2} \hspace{0.5pt} \widehat{\widehat{\nu}} \hspace{0.5pt} + \lambda^{\hspace{-0.5pt} -3} \hspace{0.5pt} \widehat{\widehat{\widehat{\nu}}} \, = \, \myfrac{1}{4} \Bigl( \mu \hspace{0.5pt} + \lambda^{\hspace{-0.5pt} -1} \hspace{0.5pt} \widehat{\mu} \hspace{0.5pt} + \lambda^{\hspace{-0.5pt} -2} \hspace{0.5pt} \widehat{\widehat{\mu}} \hspace{0.5pt} + \lambda^{\hspace{-0.5pt} -3} \hspace{0.5pt} \widehat{\widehat{\widehat{\mu}}} \, \Bigr) = \, \mu \hspace{0.5pt} , \] where we used $\widehat{\mu} = \lambda\hspace{0.5pt} \mu$ repeatedly. Recalling $\mathcal{F}\ts^4=\mathrm{id}$ and $\lambda^4=1$, the converse direction follows from the relation \begin{equation}\label{eq:F-cycle} \mathcal{F}\ts \circ \bigl( \mathrm{id} \hspace{0.5pt} + \lambda^{-1} \mathcal{F}\ts + \lambda^{-2} \mathcal{F}\ts^{2} + \lambda^{-3} \mathcal{F}\ts^{3} \bigr) \, = \, \lambda \hspace{0.5pt} \bigl( \mathrm{id} \hspace{0.5pt} + \lambda^{-1} \mathcal{F}\ts + \lambda^{-2} \mathcal{F}\ts^{2} + \lambda^{-3} \mathcal{F}\ts^{3} \bigr) . \end{equation} Indeed, if $\nu$ is transformable, it is multiply transformable by Fact~\ref{fact:forever}, so $\mu$ is well defined by the sum and transformable, with $\widehat{\mu} = \lambda\hspace{0.5pt}\mu$ as claimed. \end{proof} Note that the second claim of Proposition~\ref{prop:general} also holds for strict eigenmeasures when transformability is replaced by strict transformability, provided one then demands $\nu$ to be doubly (and hence multiply) transformable. This shows that there is an abundance of eigenmeasures, which makes a meaningful classification difficult, unless certain subclasses are specified. In this framework, for $d=1$, we shall later derive a more explicit characterisation of eigenmeasures that are fully periodic or are pure point measures with uniformly discrete support. Any tempered measure $\mu$ can be decomposed into its even and odd part relative to $I$ by \begin{equation}\label{eq:inv} \mu \, = \, \frac{ \mu + I \hspace{-0.5pt} . \hspace{0.5pt} \mu }{2} + \frac{ \mu - I \hspace{-0.5pt} . \hspace{0.5pt} \mu }{2} \, =\mathrel{\mathop:} \, \mu^{}_{+} \hspace{-0.5pt} + \mu^{}_{-} \hspace{0.5pt} , \end{equation} where $I\hspace{-0.5pt} . \hspace{0.5pt} \mu^{}_{\pm} = \pm \hspace{0.5pt} \mu^{}_{\pm}$. Due to the relation $\mathcal{F}\ts^2 (\mu) = I \hspace{-0.5pt} . \hspace{0.5pt} \mu$, this has an interesting consequence. 
\begin{lemma}\label{lem:decomp} Any transformable, tempered measure on\/ $\mathbb{R}\ts^d$ possesses a decomposition into\/ $\mathcal{F}\ts$-eigenmeasures. This decomposition can be made unique by demanding that at most one eigenmeasure per fourth root of unity is used. \end{lemma} \begin{proof} Let $\mu^{}_{\pm}$ denote the even and the odd part of $\mu$ as in \eqref{eq:inv}, which are both transformable as well. Now, we can split $\mu$ further as \begin{equation}\label{eq:split} \mu \, = \, \mu^{}_{+}\hspace{-0.5pt} + \mu^{}_{-} \, = \, \frac{\mu^{}_{+} \hspace{-0.5pt} + \widehat{\mu^{}_{+}}}{2} + \frac{\mu^{}_{+} \hspace{-0.5pt} - \widehat{\mu^{}_{+}}}{2} \, + \, \frac{\mu^{}_{-} \hspace{-0.5pt} - \mathrm{i}\ts \widehat{\mu^{}_{-}}}{2} + \frac{\mu^{}_{-} \hspace{-0.5pt} + \mathrm{i}\ts \widehat{\mu^{}_{-}}}{2} \end{equation} which is a sum of four measures, each of which either vanishes or is an eigenmeasure of $\mathcal{F}\ts$ with eigenvalue $1$, $-1$, $\mathrm{i}\ts$ and $-\mathrm{i}\ts$, respectively. Setting $\mathcal{C}^{}_{\lambda} \mathrel{\mathop:}= \frac{1}{4} \sum_{i=0}^{3} \lambda^{\hspace{-0.5pt} -i} \mathcal{F}\ts^{i}$ for $\lambda\in\{1,\mathrm{i}\ts,-1,-\mathrm{i}\ts\}$, one has $\mathcal{C}^{2}_{\lambda} = \mathcal{C}^{}_{\lambda}$ together with $\mathcal{C}^{}_{\lambda} \mathcal{C}^{}_{\lambda'} = 0$ for $\lambda\ne\lambda'$ and $\mathcal{C}^{}_{\hspace{0.5pt} 1} + \mathcal{C}^{}_{\hspace{0.5pt}\mathrm{i}\ts} + \mathcal{C}^{}_{-1} + \mathcal{C}^{}_{-\mathrm{i}\ts} = \mathrm{id}$. Since $\mathcal{F}\ts \hspace{-0.5pt}\circ \mathcal{C}^{}_{\lambda} = \lambda \cdot \mathcal{C}^{}_{\lambda}$, Eq.~\eqref{eq:split} can be written as \[ \mu \, = \, \mathcal{C}^{}_{\hspace{0.5pt} 1} (\mu) + \mathcal{C}^{}_{-1} (\mu) + \mathcal{C}^{}_{\hspace{0.5pt}\mathrm{i}\ts} (\mu) + \mathcal{C}^{}_{-\mathrm{i}\ts} (\mu) \] whenever $\mu$ is transformable. This is the unique cycle decomposition of $\mu$ for the cyclic group $C_4$ generated by $\mathcal{F}\ts$. \end{proof} Let us return to $d=1$. For $r,s \in \mathbb{R}\ts$ and $\alpha > 0$, we now define \begin{equation}\label{eq:def-Z} \mathcal{Z}^{}_{r,s,\alpha} \, \mathrel{\mathop:}= \, \chi^{}_{r} \cdot \bigl( \delta^{}_{s} \hspace{-0.5pt} * \delta^{}_{\alpha {\ts \mathbb{Z}}} \bigr) , \end{equation} which is a translation-bounded (hence tempered) complex measure on $\mathbb{R}\ts$ that is also transformable, as well as strictly transformable. \begin{lemma}\label{lem:Z} For arbitrary\/ $r,s \in \mathbb{R}\ts$ and\/ $\alpha > 0$, the measure\/ $\mathcal{Z}^{}_{r,s,\alpha}$ from \eqref{eq:def-Z} is tempered and $($strictly$\, )$ transformable, and has the following properties. \begin{enumerate}\itemsep=2pt \item $\mathcal{Z}^{}_{r,s + m \alpha,\alpha} = \mathcal{Z}^{}_{r,s,\alpha}$ holds for all\/ $m\in{\ts \mathbb{Z}}$. \item $\mathcal{Z}^{}_{r \hspace{0.5pt} + \frac{m}{\alpha},s,\alpha} = \mathrm{e}^{2 \pi \mathrm{i}\ts \frac{m s}{\alpha}} \mathcal{Z}^{}_{r,s,\alpha}$ holds for all\/ $m\in{\ts \mathbb{Z}}$. \item $\mathcal{F}\ts (\mathcal{Z}^{}_{r,s,\alpha} ) = \delta^{}_{r} \hspace{-0.5pt} * \bigl( \chi^{}_{-s} \cdot \frac{1}{\alpha} \, \delta^{}_{{\ts \mathbb{Z}}/\hspace{-0.5pt}\alpha}\bigr) = \frac{1}{\alpha} \, \mathrm{e}^{2 \pi \mathrm{i}\ts \hspace{0.5pt} r s} \, \mathcal{Z}^{}_{-s, r, \frac{1}{\alpha}}$. \item $\mathcal{F}\ts^2 (\mathcal{Z}^{}_{r,s,\alpha}) = I \hspace{-0.5pt} \hspace{-0.5pt} . \hspace{0.5pt} \mathcal{Z}^{}_{r,s,\alpha} =\mathcal{Z}^{}_{-r,-s,\alpha}$. 
\end{enumerate} Moreover, when\/ $\alpha^2 = \frac{1}{n}$ for some\/ $n\in\mathbb{N}$, the decomposition\/ $\mathcal{Z}^{}_{r,s,\alpha} = \sum_{m=0}^{n-1} \mathcal{Z}^{}_{r,s+m\alpha,1/\hspace{-0.5pt}\alpha}$ holds for all\/ $r,s \in \mathbb{R}\ts$. More generally, for\/ $\alpha^2 = \frac{p}{q}$ with\/ $p,q\in\mathbb{N}$ coprime and\/ $\beta=\sqrt{p q \hspace{0.5pt}}$, one has \[ \mathcal{Z}^{}_{r,s,\alpha} \, = \sum_{m=0}^{q-1} \mathcal{Z}^{}_{r,s+m\alpha,\beta} \hspace{0.5pt} . \] \end{lemma} \begin{proof} The first relation is obvious, while the second follows from the fact that ${\ts \mathbb{Z}}/\hspace{-0.5pt}\alpha$ is dual to $\alpha{\ts \mathbb{Z}}$ as a lattice. It is obvious that $\mathcal{Z}^{}_{r,s,\alpha}$ is translation bounded and (strictly) transformable. Property (3) follows from the convolution theorem \cite[Thm.~8.5]{TAO1}, applied twice, and an extension of Eq.~\eqref{eq:char-1} from ${\ts \mathbb{Z}}$ to $\alpha{\ts \mathbb{Z}}$. Next, observing $I\hspace{-0.5pt} \hspace{-0.5pt} .\hspace{0.5pt} \delta_x = \delta_{-x}$ and $\chi^{}_{r} (-x) = \chi^{}_{-r} (x)$, Property (4) follows from a simple calculation. When $\alpha^2 = \frac{1}{n}$, we have $\alpha {\ts \mathbb{Z}} = {\ts \mathbb{Z}}/\hspace{-0.5pt}\hspace{-0.5pt} \sqrt{n}$ and ${\ts \mathbb{Z}}/\hspace{-0.5pt} \alpha = \sqrt{n} \hspace{0.5pt}\hspace{0.5pt} {\ts \mathbb{Z}}$. As $n {\ts \mathbb{Z}}$ is a sublattice of ${\ts \mathbb{Z}}$ of index $n$, the summation formula follows by a standard coset decomposition of $\alpha\hspace{0.5pt} {\ts \mathbb{Z}}$ modulo ${\ts \mathbb{Z}}/\hspace{-0.5pt} \alpha$. In the general case, one has $\alpha = \beta/\hspace{-0.5pt} q$, which implies $\alpha {\ts \mathbb{Z}} = \dot\bigcup_{m=0}^{q-1} \bigl( \beta {\ts \mathbb{Z}} + \frac{m}{q}\beta \bigr)$. This gives \[ \mathcal{Z}^{}_{r,s,\alpha} \, = \, \chi^{}_{r} \hspace{-0.5pt} \cdot \hspace{-0.5pt} \bigl( \delta^{}_{s} * \delta^{}_{\alpha {\ts \mathbb{Z}}} \bigr) \, = \, \chi^{}_{r} \hspace{-0.5pt} \cdot \hspace{-0.5pt} \Bigl( \delta^{}_{s} * \! \sum_{m=0}^{q-1} \delta^{}_{\beta {\ts \mathbb{Z}} + m \alpha} \Bigr) \, = \sum_{m=0}^{q-1} \chi^{}_{r} \hspace{-0.5pt} \cdot \hspace{-0.5pt} \bigl( \delta^{}_{s + m \alpha} * \delta^{}_{\beta{\ts \mathbb{Z}}} \bigr) \, = \sum_{m=0}^{q-1} \mathcal{Z}^{}_{r, s+m\alpha, \beta} \] for any $r,s \in \mathbb{R}\ts$ as claimed. \end{proof} For $\mu = \delta^{}_{{\ts \mathbb{Z}}}$, the measure $\omega^{}_{s}$ from Lemma~\ref{lem:modif} is $\mathcal{Z}^{}_{s,s,1}$, and gives the $4$-cycle from \eqref{eq:cycle} as \[ \mathcal{Z}^{}_{s,s,1} \, \xrightarrow{\, \mathcal{F}\ts \,} \, \mathrm{e}^{2 \pi \mathrm{i}\ts s^2} \mathcal{Z}^{}_{-s,s,1} \, \xrightarrow{\, \mathcal{F}\ts \,} \, \mathcal{Z}^{}_{-s,-s,1} \, \xrightarrow{\, \mathcal{F}\ts \,} \, \mathrm{e}^{2 \pi \mathrm{i}\ts s^2} \mathcal{Z}^{}_{s,-s,1} \, \xrightarrow{\, \mathcal{F}\ts \,} \, \mathcal{Z}^{}_{s,s,1} \hspace{0.5pt} . \] With this, one can choose an integer $m\in \{ 0, 1, 2, 3 \}$ and consider the measure \[ \nu^{}_{m} \, \mathrel{\mathop:}= \, \mathrm{e}^{-\pi \mathrm{i}\ts s^2} \mathcal{Z}^{}_{s,s,1} + \mathrm{i}\ts^m \mathrm{e}^{\pi \mathrm{i}\ts s^2} \mathcal{Z}^{}_{-s,s,1} + \mathrm{i}\ts^{2m} \mathrm{e}^{-\pi \mathrm{i}\ts s^2} \mathcal{Z}^{}_{-s,-s,1}+ \mathrm{i}\ts^{3m} \mathrm{e}^{\pi \mathrm{i}\ts s^2} \mathcal{Z}^{}_{s,-s,1} \hspace{0.5pt} , \] which satisfies $\widehat{\nu^{}_{m}} = (-\mathrm{i}\ts)^m \hspace{0.5pt} \nu^{}_{m}$.
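The cycle structure just described is also easy to check by machine. The following minimal sketch (in Python, assuming NumPy is available; all names and parameter values are ours and purely illustrative) represents a finite sum of measures $\mathcal{Z}^{}_{r,s,\alpha}$ as a dictionary keyed by the triples $(r,s,\alpha)$, implements the transformation rule of Lemma~\ref{lem:Z}{\hspace{0.5pt}}(3) term by term, and confirms both the $4$-cycle and the eigenmeasure relation for $\nu^{}_{m}$. It is meant as an illustration only and is not used anywhere in the arguments.

\begin{verbatim}
import numpy as np
from collections import defaultdict

# A finite sum  sum_i c_i * Z_{r_i, s_i, alpha_i}  is stored as a dictionary
# that maps the key (r, s, alpha) to the complex coefficient c.

def fourier(mu):
    """Term-wise Fourier transform via Lemma (lem:Z)(3):
       F(Z_{r,s,a}) = a^{-1} * exp(2*pi*i*r*s) * Z_{-s,r,1/a}."""
    out = defaultdict(complex)
    for (r, s, a), c in mu.items():
        out[(-s, r, 1/a)] += c * np.exp(2j * np.pi * r * s) / a
    return dict(out)

def equal(mu, nu):
    """Compare two stored measures coefficient by coefficient."""
    keys = set(mu) | set(nu)
    return all(np.isclose(mu.get(k, 0), nu.get(k, 0)) for k in keys)

s = 0.3
mu = {(s, s, 1.0): 1.0}              # the measure Z_{s,s,1} from the text

# the 4-cycle: applying F four times returns the original measure
m4 = mu
for _ in range(4):
    m4 = fourier(m4)
print(equal(m4, mu))                 # True

# nu_m = sum_j i^{m j} F^j(Z_{s,s,1}) satisfies F(nu_m) = (-i)^m * nu_m
for m in range(4):
    nu, term = defaultdict(complex), mu
    for j in range(4):
        for key, c in term.items():
            nu[key] += (1j) ** (m * j) * c
        term = fourier(term)
    nu = dict(nu)
    print(m, equal(fourier(nu), {k: (-1j) ** m * c for k, c in nu.items()}))
\end{verbatim}

Up to the overall prefactor $\mathrm{e}^{-\pi \mathrm{i}\ts s^2}$, the combination tested in the last loop is exactly the measure $\nu^{}_{m}$ displayed above, so the printed values confirm the stated eigenvalue $(-\mathrm{i}\ts)^m$ numerically.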
When $s\in{\ts \mathbb{Q}}$, so $s=\frac{p}{q}$ with $p\in{\ts \mathbb{Z}}$ and $q\in\mathbb{N}$, the example is (at least) $q$-periodic, while $s$ irrational provides aperiodic examples. \begin{prop}\label{prop:FD} Let\/ $\alpha>1$ be fixed, and let\/ $K$ and\/ $J$ be bounded fundamental domains for the lattices\/ $\alpha {\ts \mathbb{Z}}$ and\/ $\frac{1}{\alpha} {\ts \mathbb{Z}}$, respectively. Then, $B_{\alpha} \mathrel{\mathop:}= \{ \mathcal{Z}^{}_{r,s,\alpha} : r\in J, s \in K \}$ is an algebraic basis for\/ $V_{\alpha} \mathrel{\mathop:}= \mathrm{span}^{}_{\mathbb{C}\ts}\hspace{0.5pt} \{ \mathcal{Z}^{}_{r,s,\alpha} : r,s \in \mathbb{R}\ts \}$. \end{prop} \begin{proof} By Lemma~\ref{lem:Z}{\hspace{0.5pt}}(1) and (2), it is clear that the elements of $B_{\alpha}$ are distinct and that $B_{\alpha}$ is a spanning set for $V_{\alpha}$, which is the complex vector space of all \emph{finite} linear combinations of measures $\mathcal{Z}^{}_{r,s,\alpha}$ with fixed $\alpha$. It thus remains to show linear independence of the elements of $B_{\alpha}$. So, let $n\in\mathbb{N}$ and assume \begin{equation}\label{eq:lin} c^{}_{1} \mathcal{Z}^{}_{r^{}_1, s^{}_1, \alpha} + \dots + c^{}_{n} \mathcal{Z}^{}_{r^{}_n, s^{}_n, \alpha} \, = \, 0 \end{equation} with $s_i \in K$ and $r_j \in J$ such that the $n$ pairs $(s_i , r_i )$ are distinct. Since $K$ is a proper funda\-mental domain for $\alpha {\ts \mathbb{Z}}$, for any $1 \leqslant k,\ell \leqslant n$, we have either $s^{}_{k} = s^{}_{\ell}$ or $(s^{}_{k} + \alpha {\ts \mathbb{Z}}) \cap (s^{}_{\ell} + \alpha {\ts \mathbb{Z}}) = \varnothing$. Select some $1\leqslant k \leqslant n$ and set $F^{}_{k} = \{ 1 \leqslant \ell \leqslant n : s^{}_{\ell} = s^{}_{k} \}$. Restricting \eqref{eq:lin} to $s^{}_{k} + \alpha {\ts \mathbb{Z}}$ gives \[ \sum_{\ell \in F^{}_{k}} c^{}_{\ell} \hspace{0.5pt} \mathcal{Z}^{}_{r^{}_{\hspace{-0.5pt}\ell}, s^{}_{k}, \alpha} \, = \, 0 \hspace{0.5pt} . \] Taking the Fourier transform turns this relation into \begin{equation}\label{eq:lin-3} \sum_{\ell\in F^{}_{k}} \tfrac{c^{}_{\ell}}{\alpha} \,\mathrm{e}^{2 \pi \mathrm{i}\ts r^{}_{\hspace{-0.5pt} \ell} s^{}_{k}} \mathcal{Z}^{}_{- s^{}_{k}, r^{}_{\hspace{-0.5pt} \ell}, \frac{1}{\alpha}} \, = \, 0 \hspace{0.5pt} . \end{equation} Next, we note that, for all $\ell \in F^{}_{k}$ with $\ell \ne k$, we have $r^{}_{\hspace{-0.5pt} \ell} \ne r^{}_{k}$, as otherwise we would get $\mathcal{Z}^{}_{r^{}_{k}, s^{}_{k}, \alpha} = \mathcal{Z}^{}_{r^{}_{\hspace{-0.5pt} \ell}, s^{}_{\ell}, \alpha}$, which is false. Consequently, as $r^{}_{k}, r^{}_{\hspace{-0.5pt} \ell} \in J$, we have \[ (r^{}_{k} + \tfrac{1}{\alpha} {\ts \mathbb{Z}}) \cap (r^{}_{\hspace{-0.5pt}\ell} + \tfrac{1}{\alpha} {\ts \mathbb{Z}}) \, = \, \varnothing \hspace{0.5pt} . \] Now, restricting \eqref{eq:lin-3} to $r^{}_{k} + \frac{1}{\alpha} {\ts \mathbb{Z}}$ gives \[ \tfrac{c^{}_k}{\alpha} \mathrm{e}^{2 \pi \mathrm{i}\ts r^{}_{k} s^{}_{k} } \mathcal{Z}^{}_{-s^{}_{k}, r^{}_{k}, \frac{1}{\alpha}} \, = \, 0 \hspace{0.5pt} , \] which implies $c^{}_{k} = 0$. Since $k$ was arbitrary, we are done. \end{proof} Let us now look into classes of periodic eigenmeasures more systematically. \section{Lattice-periodic eigenmeasures}\label{sec:cryst} Assume that $\mu$ is a measure on $\mathbb{R}\ts$ that is $\alpha$-periodic, for some $\alpha>0$, which is to say that $\delta^{}_{\alpha} \hspace{-0.5pt} * \mu = \mu$.
Such a measure is of the form \begin{equation}\label{eq:cryst-1} \mu \, = \, \varrho * \delta^{}_{\alpha {\ts \mathbb{Z}}} \hspace{0.5pt} , \end{equation} where $\varrho$ is a \emph{finite} measure; compare \cite[Sec.~9.2.3]{TAO1}. Without loss of generality, we may assume that $\supp (\varrho) \subseteq [0,\alpha]$ with $\varrho( \{ \alpha \}) = 0$, for instance by setting $\varrho = \mu |^{}_{[0,\alpha)}$. As the periodic repetition of a finite motive, $\mu$ obviously is translation bounded, hence also tempered. Any such measure is (strictly) transformable \cite[Cor.~6.1]{ARMA1}. \begin{lemma}\label{lem:cryst} Let\/ $\alpha>0$ be fixed, and let\/ $\mu\ne 0$ be a tempered measure on\/ $\mathbb{R}\ts$ that is\/ $\alpha$-periodic. If\/ $\widehat{\mu} = \lambda \mu$, one has\/ $\lambda \in \{ 1, \mathrm{i}\ts, -1, -\mathrm{i}\ts \}$ together with\/ $\alpha^2 = n$ for some\/ $n\in\mathbb{N}$. In particular, any such measure must be of the form \eqref{eq:cryst-1}, where the finite measure\/ $\varrho$ can be chosen to have support in\/ $[0,\alpha) \cap {\ts \mathbb{Z}}/\hspace{-0.5pt}\alpha $, that is, in the natural coset representatives of\/ ${\ts \mathbb{Z}}/\hspace{-0.5pt}\alpha$ modulo\/ $\alpha {\ts \mathbb{Z}}$. \end{lemma} \begin{proof} Under the assumption, we may use the representation from \eqref{eq:cryst-1}, together with the choice of $\varrho$. From here, we see that $\mu$ is strictly transformable, and the convolution theorem \cite[Thm.~8.5]{TAO1} in conjunction with the general PSF (see Eq.~\eqref{eq:PSF-2} below) gives \begin{equation}\label{eq:cryst-2} \widehat{\mu} \, = \, \alpha^{-1} \widehat{\varrho} \, \delta^{}_{{\ts \mathbb{Z}}/\hspace{-0.5pt}\alpha} \hspace{0.5pt} . \end{equation} Since $\widehat{\varrho}\hspace{0.5pt}$ is a continuous function on $\mathbb{R}\ts$, this entails that $\supp (\widehat{\mu}) \subseteq {\ts \mathbb{Z}}/\hspace{-0.5pt}\alpha$, where ${\ts \mathbb{Z}}/\alpha$ is a lattice and thus uniformly discrete as a point set. If $\widehat{\mu} =\lambda \mu$, where $\mu$ is non-trivial, we know that $\lambda^4=1$. Now, comparing the representation of $\mu$ from \eqref{eq:cryst-1} with Eq.~\eqref{eq:cryst-2} implies $\supp(\mu) = \supp(\varrho) + \alpha {\ts \mathbb{Z}} \subseteq {\ts \mathbb{Z}}/\hspace{-0.5pt}\alpha$, hence \[ \alpha \hspace{0.5pt} \supp(\varrho) + \alpha^2 {\ts \mathbb{Z}} \, \subseteq \, {\ts \mathbb{Z}} \hspace{0.5pt} , \] where $A+B$ denotes the Minkowski sum of two sets. This inclusion implies $\alpha \supp(\varrho) \subseteq {\ts \mathbb{Z}}$ because $0\in \alpha^2 {\ts \mathbb{Z}}$. Since $\alpha \supp (\varrho) \ne \varnothing$, it contains some integer, $m$ say, which now results in $\alpha^2 {\ts \mathbb{Z}} \subseteq {\ts \mathbb{Z}}$. But this means that $\alpha^2 = n$ must be an integer, and $\supp(\varrho) \subset {\ts \mathbb{Z}}/\hspace{-0.5pt}\alpha$. Under our assumptions, and with our choice of $\varrho$, we get $\supp (\varrho) \subseteq [0,\alpha) \cap {\ts \mathbb{Z}}/\hspace{-0.5pt}\alpha$, so $\varrho$ must be a pure point measure with support in the finite set $\{ \frac{m}{\alpha} : 0 \leqslant m < n \}$, which establishes the last claim. \end{proof} This provides a class of Dirac combs with pure point diffraction \cite{BM}, with the additional property that they are doubly sparse in the sense of \cite{LO1,LO2,BST}. 
\subsection{Connection with discrete Fourier transform} Let $\alpha = \sqrt{n}$ with $n\in \mathbb{N}$ be fixed, and consider the pure point measure \begin{equation}\label{eq:mu-DFT} \mu \, = \, \varrho * \delta^{}_{\alpha {\ts \mathbb{Z}}} \quad \text{with} \quad \varrho \, = \sum_{m=0}^{n-1} c^{}_{m} \, \delta^{}_{m/\hspace{-0.5pt}\alpha} \hspace{0.5pt} . \end{equation} Clearly, $\mu$ is \emph{crystallographic} in the sense of \cite{TAO1}. Indeed, it is $\alpha$-periodic, but might also have smaller periods, depending on the coefficients $c^{}_{m}$. Moreover, $\mu$ is transformable, with \[ \widehat{\mu} \, = \, \widehat{\varrho} \: \widehat{\delta^{}_{\alpha {\ts \mathbb{Z}}}} \, = \, \alpha^{-1} \sum_{m=0}^{n-1} c^{}_{m} \, \mathrm{e}^{- 2 \pi \mathrm{i}\ts \frac{m}{\alpha} (.)} \, \delta^{}_{{\ts \mathbb{Z}}/\hspace{-0.5pt}\alpha} \hspace{0.5pt} . \] Observing that $\delta^{}_{{\ts \mathbb{Z}}/\hspace{-0.5pt}\alpha} = \delta^{}_{\alpha {\ts \mathbb{Z}}} * \sum_{\ell=0}^{n-1} \delta^{}_{\ell/\hspace{-0.5pt}\alpha}$, one sees that the value of the character in the above sum only depends on the coset of $\alpha {\ts \mathbb{Z}}$, thus giving the simplification \begin{equation}\label{eq:mu-DFT-2} \widehat{\mu} \, = \, \delta^{}_{\alpha {\ts \mathbb{Z}}} \hspace{0.5pt} * \sum_{\ell=0}^{n-1} \biggl(\! \myfrac{1}{\sqrt{n}} \sum_{m=0}^{n-1} \mathrm{e}^{- 2 \pi \mathrm{i}\ts \frac{m \ell }{n}} \, c^{}_{m} \!\biggr) \delta^{}_{\ell/\hspace{-0.5pt}\alpha} \hspace{0.5pt} . \end{equation} The crucial observation to make here is that the term in brackets is the \emph{discrete Fourier transform} (DFT) of the vector $c = (c^{}_{0}, \ldots , c^{}_{n-1} )^{T}$. If $U_n$ denotes the unitary DFT or Fourier matrix, it is well known that its eigenvalues satisfy $\lambda \in \{ 1, \mathrm{i}\ts, -1, -\mathrm{i}\ts \}$. Their multiplicities, in closed terms, were already known to Gauss, while the eigenvectors are difficult,\footnote{A good summary with all the relevant details and references can be found in the \textsc{WikipediA} entry on the DFT, which is recommended for background. Some results are collected in our Appendix.} and still not known in closed form. \begin{theorem}\label{thm:lattice} Let\/ $\alpha > 0$ be fixed, and let\/ $\lambda$ be a fourth root of unity. Further, let\/ $\mu\ne 0$ be a measure on\/ $\mathbb{R}\ts$ that is\/ $\alpha$-periodic. Then, the following properties are equivalent. \begin{enumerate}\itemsep=2pt \item $\mu$ is an eigenmeasure of\/ $\mathcal{F}\ts$ with eigenvalue\/ $\lambda$. \item $\mu$ is a strict eigenmeasure of\/ $\mathcal{F}\ts$ with eigenvalue\/ $\lambda$. \item The measure\/ $\mu$ has the form \eqref{eq:mu-DFT} with\/ $\alpha = \sqrt{n}$, for some\/ $n\in\mathbb{N}$ and some eigenvector\/ $c= (c^{}_{0}, \ldots , c^{}_{n-1} )^{T} \in \mathbb{C}\ts^n$ of the corresponding DFT, meaning\/ $U_n c = \lambda c$. \end{enumerate} \end{theorem} \begin{proof} As pointed out above, any $\alpha$-periodic measure is translation bounded. Then, $(1) \Leftrightarrow (2)$ is a consequence of Fact~\ref{fact:transformable}. The equivalence $(1) \Leftrightarrow (3)$ follows from Lemma~\ref{lem:cryst} by a comparison of coefficients in \eqref{eq:mu-DFT} and \eqref{eq:mu-DFT-2}. \end{proof} \begin{example}\label{ex:DFT-1} When $\alpha^2=1$ in Theorem~\ref{thm:lattice}, the only solution we get (up to a constant) is $\mu=\delta^{}_{{\ts \mathbb{Z}}}$, with $\lambda = 1$, as expected. 
When $\alpha^2=2$, so $\alpha = \sqrt{2}$, the measure $\varrho$ in \eqref{eq:cryst-1} has the form $\varrho = c^{}_{0} \hspace{0.5pt} \delta^{}_{0} + c^{}_{1} \hspace{0.5pt} \delta^{}_{1/\hspace{-0.5pt}\alpha}$. Here, with $c^{}_{0} = 1 \pm \sqrt{2}$ and $c^{}_{1} = 1$, one gets eigenmeasures with $\lambda = \pm 1$, which are the only possibilities (up to an overall constant). Examples with larger values of $\alpha$ can be constructed with the material given in the Appendix. $\Diamond$ \end{example} \begin{remark}\label{rem:symm} Note that Fact~\ref{fact:symm} has consequences on the structure of the eigenvectors $c$ of the DFT. Indeed, it is clear that we get $c^{}_{0} = \lambda^2 c^{}_{0}$ together with \[ c^{}_{n-k} \, = \lambda^2 c^{}_{k} \, , \quad \text{for } 1 \leqslant k \leqslant n-1 \hspace{0.5pt} . \] In particular, for $\lambda = \pm 1$, the (partial) vector $(c^{}_{1}, \ldots , c^{}_{n-1})$ is palindromic, while $\lambda = \pm \hspace{0.5pt} \mathrm{i}\ts$ implies $c^{}_{0} = 0$ together with $(c^{}_{1}, \ldots , c^{}_{n-1})$ being skew-palindromic. This is clearly visible in the above examples and in those of the Appendix. $\Diamond$ \end{remark} Let us next consider another class of eigenmeasures that implicitly emerge from a \emph{cut and project scheme} (CPS) in the theory of aperiodic order; see \cite{TAO1} for background. This will provide non-periodic examples of pure point eigenmeasures. \section{Shadows of product measures}\label{sec:shadows} If $\varGamma$ is a lattice in $\mathbb{R}\ts^d$, and $\delta^{}_{\varGamma}$ the corresponding Dirac comb, the PSF from \eqref{eq:PSF-1} becomes \begin{equation}\label{eq:PSF-2} \widehat{\delta^{}_{\varGamma}} \, = \, \dens (\varGamma) \, \delta^{}_{\varGamma^*} \hspace{0.5pt} , \end{equation} where $\dens (\varGamma)$ is the (uniformly existing) density of $\varGamma$, and $\varGamma^*$ denotes the dual lattice, as given by $\varGamma^* = \{ x \in \mathbb{R}\ts^d : x y \in {\ts \mathbb{Z}} \text{ for all } y \in \varGamma\hspace{0.5pt} \}$; compare \cite[Thm.~9.1]{TAO1}. Here and below, we always assume $\mathbb{R}\ts^d$ to be equipped with Lebesgue measure as its (standard) Haar measure, that is, the unique translation-invariant measure which gives volume $1$ to the unit cube. Clearly, $\varGamma$ can only be self-dual if $\dens (\varGamma) = 1$, which is not a sufficient criterion for $d>1$. When $B$ denotes a basis matrix for $\varGamma$, with the columns being the basis vectors in Cartesian coordinates, $B^* \mathrel{\mathop:}= (B^{-1})^{T}$ is the \emph{dual matrix}, and the fitting basis matrix for $\varGamma^*$. In this formulation, self-duality of $\varGamma$ is equivalent with orthogonality of the matrix $B$. It is now easy to check that the only self-dual lattices in $\mathbb{R}\ts^2$ are the square lattices, that is, ${\ts \mathbb{Z}}^2$ and its rotated siblings. A natural choice for the basis matrix thus is \begin{equation}\label{eq:def-B} B \, = \, \begin{pmatrix} \cos (\theta ) & -\sin (\theta) \\ \sin (\theta) & \cos (\theta) \end{pmatrix} \, = \, \bigl( u^{}_{\theta}, v^{}_{\theta} \bigr) , \end{equation} where $u^{}_{\theta}$ and $v^{}_{\theta}$ are the column vectors. We denote the corresponding lattice by $\varGamma^{}_{\theta}$. Note that the parameter can be restricted to $0 \leqslant \theta < \frac{\pi}{2}$, which covers all cases once, due to the fourfold rotational symmetry of the square lattice. 
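As a quick numerical aside (a sketch in Python with NumPy; the specific angle is an arbitrary choice of ours), one can confirm that the dual matrix $B^* = (B^{-1})^{T}$ of the basis matrix from \eqref{eq:def-B} coincides with $B$ itself, so that $\varGamma^{}_{\theta}$ is indeed self-dual, with $\dens (\varGamma^{}_{\theta}) = 1$.

\begin{verbatim}
import numpy as np

theta = 0.37                                    # any angle 0 <= theta < pi/2
B = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]]) # basis matrix of Gamma_theta

B_dual = np.linalg.inv(B).T                     # basis matrix of the dual lattice
print(np.allclose(B_dual, B))                   # True: Gamma_theta is self-dual
print(np.isclose(abs(np.linalg.det(B)), 1.0))   # True, so dens(Gamma_theta) = 1
\end{verbatim}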
Given this choice of basis, we can always uniquely write $z\in\varGamma^{}_{\theta}$ as $z = m \hspace{0.5pt} u^{}_{\theta} + n \hspace{0.5pt} v^{}_{\theta}$ with $m,n \in {\ts \mathbb{Z}}$. We thus map $z\in\varGamma^{}_{\theta}$ to a pair $(m^{}_{z},n^{}_{z}) \in {\ts \mathbb{Z}}^2$ via this choice of basis. The next result is standard \cite[Thm.~VII.XIV]{Schw}, and follows from a simple Fubini-type calculation with the product Lebesgue measure on $\mathbb{R}\ts^{p+q} = \mathbb{R}\ts^p \! \times \hspace{-0.5pt} \mathbb{R}\ts^q$, where the function $f\hspace{-0.5pt}\otimes g \hspace{-0.5pt} : \, \mathbb{R}\ts^p \! \times \hspace{-0.5pt} \mathbb{R}\ts^q \xrightarrow{\quad} \mathbb{C}\ts$ is defined by $f\hspace{-0.5pt}\otimes g \, (x,y) \mathrel{\mathop:}= f(x) \, g(y)$ as usual. \begin{fact}\label{fact:tensor} Let\/ $p,q\in\mathbb{N}$, $f\in L^{1} (\mathbb{R}\ts^p)$, $g\in L^{1} (\mathbb{R}\ts^q)$, and consider the function\/ $h \mathrel{\mathop:}= f\hspace{-0.5pt}\otimes g$. Then, $h \in L^{1} (\mathbb{R}\ts^{p+q})$, and its Fourier transform reads\/ $\widehat{h} = \widehat{f} \hspace{-0.5pt}\otimes \widehat{g}$. In particular, this applies to\/ $f\in\mathcal{S} (\mathbb{R}\ts^{p})$ and\/ $g\in\mathcal{S} (\mathbb{R}\ts^{q})$, with\/ $h \in \mathcal{S} (\mathbb{R}\ts^{p+q})$. \qed \end{fact} Now, let $\varGamma \subset \mathbb{R}\ts^{d+m}$ be a lattice, let $g \in \mathcal{S}(\mathbb{R}\ts^m)$ be fixed, and consider \begin{equation}\label{eq:def-kamm} \delta^{\hspace{0.5pt}\star}_{\hspace{-0.5pt} g,\varGamma} (f) \, \mathrel{\mathop:}= \, \delta^{}_{\hspace{-0.5pt}\varGamma} (f\hspace{-0.5pt} \otimes g) \hspace{0.5pt} . \end{equation} Since the mapping $\mathcal{S}(\mathbb{R}\ts^d) \ni f \mapsto f \otimes g \in \mathcal{S}(\mathbb{R}\ts^{d+m})$ is continuous with respect to the topology of Schwartz space, and since $\delta^{}_{\hspace{-0.5pt}\varGamma}$ is a tempered distribution, it is clear that $\delta^{\star}_{g,\varGamma}$ is a tempered distribution as well. In fact, we have more as follows. \begin{prop}\label{prop:A} The measure\/ $\delta^{\hspace{0.5pt}\star_{}}_{\hspace{-0.5pt} g,\varGamma}$ from \eqref{eq:def-kamm} is translation bounded and transformable, with the transform\/ $\widehat{\delta^{\hspace{0.5pt}\star_{}}_{\hspace{-0.5pt} g,\varGamma}} = \dens(\varGamma)\, \delta^{\hspace{0.5pt}\star_{}}_{\widecheck{g},\varGamma^{*}_{\vphantom{t}}} $. \end{prop} \begin{proof} If $\varGamma\subset\mathbb{R}\ts^{d+m}$ is a lattice and $h \in \mathcal{S}(\mathbb{R}\ts^{m+d})$, one has $ \sum_{z \in \varGamma} \lvert h(z)\rvert < \infty $ by standard arguments \cite{Schw}, hence \begin{equation}\label{eq:something} \sum_{(x,y) \in \varGamma} \lvert f(x)\, g(y) \rvert \, < \, \infty \end{equation} holds for all $f \in \mathcal{S}(\mathbb{R}\ts^d)$ and $g \in \mathcal{S}(\mathbb{R}\ts^m)$. Consequently, also for all $\varphi \in C_{\mathsf{c}}(\mathbb{R}\ts^d)$, we have \[ \sum_{(x,y) \in \varGamma} \lvert \varphi(x) \, g(y) \rvert \, < \, \infty \hspace{0.5pt} . \] We can thus define $\mu \colon C_{\mathsf{c}}(\mathbb{R}\ts^d) \longrightarrow \mathbb{C}\ts$ by \[ \mu(\varphi) \: = \sum_{(x,y) \in \varGamma} g(y) \, \delta_x (\varphi) \: = \sum_{(x,y) \in \varGamma} \varphi(x) \, g(y) \hspace{0.5pt} , \] with the sum being absolutely convergent. Moreover, by Eq.~\eqref{eq:something}, $\mu(f)$ is well defined for all $f \in \mathcal{S} (\mathbb{R}\ts^d)$ and satisfies $ \mu(f) = \delta^{\hspace{0.5pt}\star}_{\hspace{-0.5pt} g,\varGamma} (f)$. 
Now, we show that $\mu$ is also a measure, which implies that $\delta^{\hspace{0.5pt}\star_{}}_{\hspace{-0.5pt} g,\varGamma} $ is a measure as well. It is obvious that $\mu$ is a linear mapping. Next, let $K \subset \mathbb{R}\ts^d$ be a fixed compact set, select $f \in C_{\mathsf{c}}^\infty(\mathbb{R}\ts^d)$ so that $f \geqslant 1^{}_K$, and consider \[ C^{}_K \, \mathrel{\mathop:}= \sum_{(x,y) \in \varGamma} \lvert f(x) \, g(y) \rvert \, < \, \infty \hspace{0.5pt} . \] Then, if $\varphi \in C_{\mathsf{c}}(\mathbb{R}\ts^d)$ satisfies $\supp(\varphi)\subseteq K$, we get \[ \lvert \mu(\varphi) \vert \, = \, \biggl| \sum_{(x,y) \in \varGamma} \varphi(x) \, g(y) \biggr| \, \leqslant \sum_{(x,y) \in \varGamma} \lvert \varphi(x) \, g(y) \vert \, \leqslant \, \| \varphi\|^{}_{\infty}\! \sum_{(x,y) \in \varGamma} \lvert f(x) \, g(y) \rvert \, \leqslant \, C^{}_K \, \| \varphi \|^{}_{\infty} \] which shows that $\mu$ is a measure. Next, by applying the proof of \cite[Cor.~2.1]{ST} to $h=f \hspace{-0.5pt}\otimes g$, we see that there is a constant $C$ such that $\bigl( \lvert \delta_{\varGamma} \rvert * \lvert I\hspace{-0.5pt} .h \rvert \bigr) (x) \leqslant C$ holds for all $x \in \mathbb{R}\ts^{d+m}$. In particular, for all $t \in \mathbb{R}\ts^d$, we have \[ \begin{split} \lvert \mu \rvert (t+K) \, & = \sum_{\substack{(x,y) \in \varGamma \\ x \in t+K}} \lvert g(y) \rvert \, \leqslant  \sum_{(x,y) \in \varGamma } \vert f(x-t) \, g(y) \rvert \\[2mm] & = \sum_{(x,y) \in \varGamma} \vert h (x-t,y) \rvert \, = \, \bigl(\lvert \delta^{}_{\hspace{-0.5pt} \varGamma} \rvert * \lvert I \hspace{-0.5pt} .h \rvert \bigr) (t,0) \, \leqslant \, C , \end{split} \] which shows that $\mu$ is translation bounded. The verification of the last claim is similar to an argument from \cite{RS}. Indeed, the PSF implies that, for all $f\in \mathcal{S}(\mathbb{R}\ts^d)$, we have \[ \widehat{\delta^{\hspace{0.5pt}\star_{}}_{\hspace{-0.5pt} g,\varGamma}}(f) \, = \, \delta^{\hspace{0.5pt}\star}_{g,\varGamma} \bigl( \widehat{f}\, \bigr) \, = \, \delta^{}_{\hspace{-0.5pt}\varGamma} \bigl( \widehat{f} \hspace{-0.5pt}\otimes g \bigr) \, = \, \dens(\varGamma) \, \delta^{}_{\hspace{-0.5pt}\varGamma^{*}_{}} \bigl( f \hspace{-0.5pt} \otimes \widecheck{g} \hspace{0.5pt} \bigr) \, = \, \dens(\varGamma)\, \delta^{\hspace{0.5pt}\star}_{\widecheck{g},\varGamma^{*}_{}}(f) \] with the last distribution being a measure by the first part of the proof. \end{proof} Now, we can proceed as follows. Let $\lambda$ be any fixed fourth root of unity, and select a Schwartz function $g \in \mathcal{S} (\mathbb{R}\ts)$ such that $\widehat{g} = \lambda \hspace{0.5pt} g$, which we know to exist via the Hermite functions. Clearly, this implies $\widecheck{g} = \overline{\lambda} \hspace{0.5pt} g$, because we have $\widehat{\hspace{-0.5pt}\widehat{g}\hspace{0.5pt}} = g \circ I$ together with $\lambda^4=1$. Next, fix a parameter $0\leqslant \theta < \frac{\pi}{2}$ and consider the lattice $\varGamma^{}_{\theta}$ in $\mathbb{R}\ts^2 = \mathbb{R}\ts \times \mathbb{R}\ts $. Let $f\hspace{-0.5pt}\otimes g$ refer to the product function with respect to this (Cartesian) splitting, so \[ f \hspace{-0.5pt}\otimes g \, (z) \, = \, f \hspace{-0.5pt}\otimes g \, (m^{}_{z} u^{}_{\theta} + n^{}_{z} v^{}_{\theta}) \, = \, f \bigl(m^{}_{z} \cos (\theta) - n^{}_{z} \sin (\theta)\bigr) \cdot g \bigl(m^{}_{z} \sin (\theta) + n^{}_{z} \cos (\theta)\bigr), \] which elucidates the role of the angle $\theta$. 
Now, for a fixed lattice $\varGamma^{}_{\theta}$, we consider the translation-bounded (and hence tempered) measure $\omega^{}_{g}$ on $\mathbb{R}\ts$ defined by \begin{equation}\label{eq:def-om} \omega^{}_{g} ( \varphi) \, \mathrel{\mathop:}= \sum_{z\in \varGamma^{}_{\theta}} \varphi\otimes\hspace{-0.5pt} g \, (z) \hspace{0.5pt} , \end{equation} where $\varphi$ is an arbitrary Schwartz function. In fact, \eqref{eq:def-om} is well-defined for $\varphi\in C_{\mathsf{c}} (\mathbb{R}\ts)$, too. Then, the (distributional) transform reads \[ \begin{split} \widehat{\omega^{}_{g}} (\varphi) \, & = \, \omega^{}_{g} (\hspace{0.5pt}\widehat{\varphi}\hspace{0.5pt} ) \, = \sum_{z \in \varGamma^{}_{\theta}} \! \widehat{\varphi} \otimes\hspace{-0.5pt} g \, (z) \, = \sum_{z \in \varGamma^{}_{\theta}} \widehat{\hspace{-0.5pt}\varphi \otimes \hspace{-0.5pt} \widecheck{g}\hspace{0.5pt}} \, (z)\\[2mm] & = \, \sum_{z \in \varGamma^{}_{\theta}} \varphi \otimes \hspace{-0.5pt} \widecheck{g} \, (z) \, = \, \overline{\lambda} \sum_{z \in \varGamma^{}_{\theta}} \varphi \otimes \hspace{-0.5pt} g \, (z) \, = \, \overline{\lambda} \, \omega^{}_{g} (\varphi) \hspace{0.5pt} . \end{split} \] Here, we have used Fact~\ref{fact:tensor} for $p=q=1$ in the upper line, while the first equality in the second line follows from the PSF for the self-dual lattice $\varGamma^{}_{\theta}$. Since $\varphi$ is arbitrary, this implies the relation $\widehat{\omega^{}_{g}} = \overline{\lambda} \, \omega^{}_{g}$. Invoking \cite[Lemma~5.2 and Thm.~5.2]{Nicu2}, we see that the measure $\omega^{}_{g}$ from \eqref{eq:def-om} is translation bounded and that $\widehat{\omega^{}_{g}} = \overline{\lambda} \, \omega^{}_{g}$ also holds in the strict sense. Consequently, we can construct eigenmeasures of $\mathcal{F}\ts$ for any fourth root of unity this way. Let us summarise this derivation as follows. \begin{theorem}\label{thm:aper-meas} Let\/ $\varGamma^{}_{\theta}$ be the self-dual, planar lattice defined by the basis matrix\/ $B$ from \eqref{eq:def-B}, and let\/ $g \ne 0$ be a Schwartz function with\/ $\widehat{g} = \lambda \hspace{0.5pt} g$ for some\/ $\lambda \in \{1, \mathrm{i}\ts, -1, -\mathrm{i}\ts \}$. Then, the tempered measure\/ $\omega^{}_{g}$ defined by \eqref{eq:def-om} is a\/ $($strict$\, )$ eigenmeasure of\/ $\mathcal{F}\ts\!$, with\/ $\widehat{\omega^{}_{g}} = \overline{\lambda} \, \omega^{}_{g}$. \qed \end{theorem} It is clear from \cite[Cor.~VII.2.6]{SW} that this approach also works when $g$ is continuous and decays such that $\varphi \otimes \hspace{-0.5pt} g \, (z) = \mathcal{O} \bigl( (1 + \lvert z \rvert)^{-2-\epsilon} \bigr)$ as $\lvert z \rvert \to\infty$, for some $\epsilon > 0$. Depending on the parameter $\theta$, the lattice $\varGamma^{}_{\theta}$ may be in rational or irrational position relative to the horizontal line. The rational case means $\tan (\theta ) = \frac{p}{q}$ with integers $0 \leqslant p \leqslant q$ that can be chosen coprime, with $q=1$ when $p=0$. Rationality implies $\sin (\theta )^2 = p^2/ (p^2 + q^2)$ and $\cos (\theta )^2 = q^2 / (p^2 + q^2)$. In any such case, the intersection of $\varGamma^{}_{\theta}$ with the horizontal line is a one-dimensional lattice. Observing that $q \sin (\theta ) - p \cos (\theta ) = 0$, this lattice is $\alpha {\ts \mathbb{Z}}$, with $\alpha = q \cos (\theta ) + p \sin (\theta ) = \sqrt{p^2 + q^2}$. Consequently, $\omega^{}_{g}$ is $\alpha$-periodic in this case, and we are back to the class discussed in Section~\ref{sec:cryst}.
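Before we turn to explicit examples, here is a minimal numerical sketch of the rational case just described (Python with NumPy; the choice $\tan (\theta) = \frac{1}{2}$, the Gaussian $g(y) = \mathrm{e}^{- \pi y^2}$ with $\widehat{g} = g$, and the truncation parameter are assumptions made only for this illustration). It tabulates one period of the weights of $\omega^{}_{g}$, which is $\sqrt{5}$-periodic here, and confirms that the resulting coefficient vector is an eigenvector of the unitary DFT matrix from Eq.~\eqref{eq:mu-DFT-2} for the eigenvalue $1$, in line with Theorem~\ref{thm:lattice}.

\begin{verbatim}
import numpy as np
from collections import defaultdict

p, q = 1, 2                          # tan(theta) = p/q with gcd(p, q) = 1
n = p*p + q*q                        # alpha^2, so alpha = sqrt(5) here
alpha = np.sqrt(n)

def g(y):                            # Gaussian with F(g) = g
    return np.exp(-np.pi * y**2)

# With cos(theta) = q/alpha and sin(theta) = p/alpha, every lattice point
# z = m*u_theta + k*v_theta contributes g(y_z) to the support point
# x_z = (m*q - k*p)/alpha = j/alpha, where y_z = (m*p + k*q)/alpha.
w = defaultdict(float)
T = 60                               # truncation; g decays fast enough
for m in range(-T, T + 1):
    for k in range(-T, T + 1):
        j = m*q - k*p
        w[j] += g((m*p + k*q) / alpha)

c = np.array([w[j] for j in range(n)])     # one period of the weight vector

# unitary DFT matrix as in Eq. (mu-DFT-2)
kk, ll = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
U = np.exp(-2j * np.pi * kk * ll / n) / np.sqrt(n)

print(np.allclose(U @ c, c))         # True: c is a DFT eigenvector for lambda = 1
\end{verbatim}

Other coprime pairs $(p,q)$ and other eigenfunctions $g$ can be explored in the same way.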
\begin{example}\label{ex:0} When $\theta = 0$, we get $\varGamma^{}_{0} = {\ts \mathbb{Z}}^2$, and our Radon measure simplifies to $\omega^{}_{g} = c \, \delta^{}_{{\ts \mathbb{Z}}}$, with $c = \sum_{m\in{\ts \mathbb{Z}}} g(m)$. Since this measure is $1$-periodic, it is either an eigenmeasure for eigenvalue $1$, or it must be trivial. If $g$ was chosen as a Hermite function, $g = h_n$ say, we thus get an extra condition for any $n \not\equiv 0 \bmod{4}$, namely \begin{equation}\label{eq:sum-rule} \sum_{k\in{\ts \mathbb{Z}}} h_n ( k) \, = \, 0 \hspace{0.5pt} . \end{equation} This is clear by the anti-symmetry of $h_n$ for all odd $n$, while it looks a little less obvious for $n \equiv 2 \bmod 4$. However, this sum rule follows directly from the classic PSF for functions, via \[ \sum_{k\in{\ts \mathbb{Z}}} h_n (k) \, = \sum_{k\in{\ts \mathbb{Z}}} \widehat{h_n} (k) \, = \, (-\mathrm{i}\ts)^n \sum_{k\in{\ts \mathbb{Z}}} h_n (k) \hspace{0.5pt} , \] which implies \eqref{eq:sum-rule} for any $n\not\equiv 0 \bmod{4}$. $\Diamond$ \end{example} \begin{example}\label{ex:next} Consider $p=q=1$, so $\theta = \frac{\pi}{4}$. Here, the intersection of the lattice $\varGamma^{}_{\pi/4}$ with the horizontal line is $\sqrt{2} \hspace{0.5pt} {\ts \mathbb{Z}}$ and the measure becomes $\omega^{}_{g} = c^{}_{0} \hspace{0.5pt} \delta^{}_{\hspace{-0.5pt}\sqrt{2} \hspace{0.5pt} {\ts \mathbb{Z}}} + c^{}_{1} \hspace{0.5pt} \delta^{}_{\hspace{-0.5pt}\beta + \sqrt{2} \hspace{0.5pt} {\ts \mathbb{Z}}}$, with $\beta = 1/\sqrt{2}$ and coefficients \[ c^{}_{0} \, = \sum_{m\in{\ts \mathbb{Z}}} g \bigl( m \sqrt{2} \, \bigr) \quad \text{and} \quad c^{}_{1} \, = \sum_{m\in{\ts \mathbb{Z}}} g \bigl( m \sqrt{2} + \beta \bigr) . \] Since we have $\alpha\hspace{0.5pt}$-periodicity with $\alpha^2 = 2$, there are eigenmeasures for $\lambda = \pm\hspace{0.5pt} 1$. When $\widehat{g} = \pm g$, this is only consistent if the coefficients satisfy $c^{}_{0} / c^{}_{1} = 1 \pm \sqrt{2}$, in line with Example~\ref{ex:DFT-1}. This covers the cases of $g = h_n$ with $n$ even. For odd $n$, the measure $\omega^{}_{g}$ must be trivial, hence $c^{}_{0} = c^{}_{1} = 0$, which is clear from the anti-symmetry of the Hermite functions in this case. More generally, extending this observation to other rational values of $\theta$, one obtains further conditions from the structure of the eigenvectors of the DFT. $\Diamond$ \end{example} When $\tan(\theta)$ is \emph{irrational}, we obtain aperiodic extensions, which can be understood by viewing the setting as a CPS of the form $(\mathbb{R}\ts, \mathbb{R}\ts, \varGamma^{}_{\theta})$; see \cite{TAO1} for more on this concept, and \cite{Mey72,Moo97,Moo00} for general background. The CPS here can be summarised in the diagram \begin{equation}\label{eq:CPS} \renewcommand{\arraystretch}{1.2}\begin{array}{r@{}ccccc@{}l} \\ & \mathbb{R}\ts & \xleftarrow{\;\;\; \pi \;\;\; } & \mathbb{R}\ts \hspace{-0.5pt}\hspace{-0.5pt} \times \hspace{-0.5pt}\hspace{-0.5pt} \mathbb{R}\ts & \xrightarrow{\;\: \pi^{}_{\text{int}} \;\: } & \mathbb{R}\ts & \\ & \cup & & \cup & & \cup & \hspace*{-1.5ex} \raisebox{1pt}{\text{\footnotesize dense}} \\ & \pi (\mathcal{L}) & \xleftarrow{\;\hspace{0.5pt} 1-1 \;\hspace{0.5pt} } & \mathcal{L} = \varGamma^{}_{\theta} & \xrightarrow{ \qquad } &\pi^{}_{\text{int}} (\mathcal{L}) & \\ & \| & & & & \| & \\ & L & \multicolumn{3}{c}{\xrightarrow{\qquad\quad\quad \,\,\,\star\!\! \qquad\quad\qquad}} & {L_{}}^{\star\hspace{-0.5pt}} & \\ \\ \end{array}\renewcommand{\arraystretch}{1} \end{equation} where $\star$ is the star map of the CPS. Clearly, this diagram is not restricted to the case $\mathcal{L}=\varGamma^{}_{\theta}$, and $\mathcal{L}$ can be any lattice in $\mathbb{R}\ts^2$ subject to the conditions encoded in the CPS. Combining Theorem~\ref{thm:aper-meas} and Fact~\ref{fact:transformable}, we can reformulate our previous result as follows. \begin{coro}\label{coro:aper} Let\/ $(\mathbb{R}\ts, \mathbb{R}\ts, \mathcal{L})$ be a CPS and let\/ $g \in \mathcal{S}(\mathbb{R}\ts)$ be arbitrary, but fixed. If the lattice\/ $\mathcal{L}$ is self-dual and if\/ $\widehat{g} = \lambda g$, then, necessarily with\/ $\lambda\in\{ 1, \mathrm{i}\ts, -1, -\mathrm{i}\ts \}$, the tempered measure\/ $\omega^{}_g = \delta^{\hspace{0.5pt}\star}_{\hspace{-0.5pt} g, \mathcal{L}}$ is an eigenmeasure of $\mathcal{F}\ts$ with eigenvalue\/ $\overline{\lambda}$, also in the strict sense. \qed \end{coro} \begin{remark} Studying the proofs of Lemmas 5.2 and 5.3 in \cite{Nicu2} with care, one can see that various of our previous arguments remain true for $g \in C^{}_{0} (\mathbb{R}\ts)$, as long as $g$ asymptotically satisfies $g (x) = \mathcal{O} \bigl( (1+\lvert x \rvert )^{-2-\epsilon} \bigr)$ for some $\epsilon > 0$ as $\lvert x \rvert \to \infty$, while Proposition~\ref{prop:A} still holds if $g \in C^{}_{0}(\mathbb{R}\ts)$ has the property that both $g$ and $\widecheck{g}$ satisfy such an asymptotic behaviour. $\Diamond$ \end{remark} \section{Eigenmeasures with uniformly discrete support}\label{sec:discrete} Let us finally characterise eigenmeasures on $\mathbb{R}\ts$ with uniformly discrete support. We recall Eq.~\eqref{eq:def-Z} and begin with another consequence of the cycle structure induced by $\mathcal{F}\ts^4 = \mathrm{id}$. \begin{lemma}\label{lem:cycle} Let\/ $r,s \in \mathbb{R}\ts$ and\/ $n\in\mathbb{N}$ be fixed, and let\/ $\lambda$ be a fourth root of unity. Then, \[ \mathcal{Y}^{}_{r,s,\sqrt{n},\lambda} \, \mathrel{\mathop:}= \, \bigl( \mathrm{id} \hspace{0.5pt} + \lambda^{-1} \mathcal{F}\ts + \lambda^{-2} \mathcal{F}\ts^{2} + \lambda^{-3} \mathcal{F}\ts^{3} \bigr) \mathcal{Z}^{}_{r,s,\sqrt{n}} \] is either trivial or an eigenmeasure of\/ $\mathcal{F}\ts$ for the eigenvalue\/ $\lambda$, also in the strict sense. Further, it is supported inside\/ $\sqrt{n} \hspace{0.5pt} {\ts \mathbb{Z}} +F$ for some finite set\/ $F \subset \mathbb{R}\ts$. In particular, the support of the measure\/ $\mathcal{Y}^{}_{r,s,\sqrt{n},\lambda} $ is uniformly discrete. \end{lemma} \begin{proof} The eigenmeasure property is clear from Proposition~\ref{prop:general}. Next, set $\theta = \sqrt{n}$. By Lemma~\ref{lem:Z}, we have \[ \begin{split} \mathcal{Y}^{}_{r,s,\theta,\lambda} \, & = \, \mathcal{Z}^{}_{r,s,\theta} + \myfrac{\mathrm{e}^{2\pi\mathrm{i}\ts r s}}{\lambda \theta} \mathcal{Z}^{}_{-s,r,\frac{1}{\theta}} + \lambda^{-2} \mathcal{Z}^{}_{-r,-s,\theta} + \myfrac{\mathrm{e}^{2\pi\mathrm{i}\ts r s}}{\lambda^3 \theta} \mathcal{Z}^{}_{s,-r,\frac{1}{\theta}} \\[2mm] & = \, \mathcal{Z}^{}_{r,s,\theta} + \lambda^{-2} \mathcal{Z}^{}_{-r,-s,\theta} + \myfrac{\mathrm{e}^{2\pi\mathrm{i}\ts r s}}{\lambda \theta} \sum_{m=0}^{n-1} \bigl( \mathcal{Z}^{}_{-s,r+\frac{m}{\theta},\theta} + \lambda^{-2} \hspace{0.5pt} \mathcal{Z}^{}_{s,-r + \frac{m}{\theta},\theta} \bigr).
\end{split} \] This implies $\supp \bigl( \mathcal{Y}^{}_{r,s,\sqrt{n},\lambda} \bigr) \subseteq \sqrt{n} \hspace{0.5pt} {\ts \mathbb{Z}} + \big\{ \pm s, \pm r + \frac{m}{\sqrt{n}} : 0 \leqslant m \leqslant n-1 \big\}$, which clearly is a uniformly discrete subset of $\mathbb{R}\ts$. \end{proof} For fixed $n\in\mathbb{N}$ and $\lambda\in \{ 1, \mathrm{i}\ts, -1, -\mathrm{i}\ts \}$, we now define a space of tempered measures via \begin{equation}\label{eq:def-E} \mathcal{E}^{}_{\lambda} (n) \, \mathrel{\mathop:}= \, \text{span}^{}_{\mathbb{C}\ts} \big\{ \mathcal{Y}^{}_{r,s,\sqrt{n},\lambda} : r,s \in \mathbb{R}\ts \big\} . \end{equation} Clearly, any non-zero element of $\mathcal{E}^{}_{\lambda} (n)$ is an eigenmeasure of $\mathcal{F}\ts$ with eigenvalue $\lambda$ and uniformly discrete support. Let us return to the general class of tempered measures with uniformly discrete support. If such a measure $\mu$ is an eigenmeasure of $\mathcal{F}\ts$, the support of $\widehat{\mu}$ must be the same, so $\mu$ is doubly sparse; see \cite{BST,LO1,LO2} and references therein for background on such measures. This has a strong consequence as follows. \begin{prop}\label{prop:sparse} Let\/ $\mu\ne 0$ be an eigenmeasure of\/ $\mathcal{F}\ts$ on\/ $\mathbb{R}\ts$ such that\/ $\supp (\mu) \subset \mathbb{R}\ts$ is uniformly discrete. Then, there exist integers\/ $n,k \in\mathbb{N}$ such that, with\/ $\beta = \sqrt{n}$, \[ \mu \, = \sum_{i=1}^{k} c^{}_{i} \, \mathcal{Z}^{}_{r^{}_{\hspace{-0.5pt} i}, s^{}_{i}, \beta} \] holds for suitable\/ $c^{}_{1}, \ldots , c^{}_{k} \in\mathbb{C}\ts$ and\/ $r^{}_{1}, \ldots, r^{}_{k}, s^{}_{1}, \ldots ,s^{}_{k} \in\mathbb{R}\ts$. \end{prop} \begin{proof} Clearly, since $\supp (\widehat{\mu}) = \supp (\mu)$, the measure $\mu$ is doubly sparse, and an application of \cite[Thms.~1 and 3]{LO1} guarantees the existence of an integer $\kappa\in\mathbb{N}$ and some number $\alpha>0$ such that $\mu = \sum_{j=1}^{\kappa} P_j \, \delta^{}_{\alpha {\ts \mathbb{Z}} + s_j}$, where the $P_j$ are trigonometric polynomials. Expanding the latter and rewriting $\mu$ in terms of our measures $\mathcal{Z}^{}_{r,s,\alpha}$ from \eqref{eq:def-Z} gives \begin{equation}\label{eq:mu-inter} \mu \, = \sum_{i=1}^{\ell} c^{}_{i} \, \mathcal{Z}^{}_{r^{}_{\hspace{-0.5pt} i}, s^{}_{i}, \alpha} \end{equation} for some $\ell\in\mathbb{N}$, which might be larger than $\kappa$, and numbers $c^{}_{1}, \ldots , c^{}_{\ell}\in\mathbb{C}\ts$ together with $r^{}_{1}, \ldots, r^{}_{\ell}, s^{}_{1}, \ldots ,s^{}_{\ell} \in\mathbb{R}\ts$, which need not be distinct. With Lemma~\ref{lem:Z}{\hspace{0.5pt}}(3), we see that \[ \supp (\mu) \, \subseteq \, \alpha {\ts \mathbb{Z}} + F_s \quad \text{and} \quad \supp (\widehat{\mu}) \, \subseteq \, \tfrac{1}{\alpha} \hspace{0.5pt} {\ts \mathbb{Z}} + F_r \] with the finite sets $F_s=\{ s^{}_{1}, \ldots , s^{}_{\ell} \}$ and $F_r = \{ r^{}_{1} , \ldots , r^{}_{\ell} \}$. By \cite[Lemma~5.5.1]{RS}, there exists a finite set $G\subset \mathbb{R}\ts$ such that $\frac{1}{\alpha} \hspace{0.5pt} {\ts \mathbb{Z}} + F_r \subseteq \supp (\widehat{\mu}) + G$, which implies \[ \tfrac{1}{\alpha} \hspace{0.5pt} {\ts \mathbb{Z}} \, \subseteq \, \supp ( \widehat{\mu} ) + G - F_r \, = \, \supp (\mu) + G - F_r \, \subseteq \, \alpha {\ts \mathbb{Z}} + F_s + G - F_r \, = \, \alpha {\ts \mathbb{Z}} + F^{\hspace{0.5pt} \prime} , \] where $F^{\hspace{0.5pt}\prime}$ is still a finite set.
Consequently, there are integers $m,n \in \mathbb{N}$ with $m<n$ and some element $t\in F^{\hspace{0.5pt}\prime}$ such that $\frac{m}{\alpha}$ and $\frac{n}{\alpha}$ both lie in $\alpha {\ts \mathbb{Z}} + t$. This implies $\frac{n-m}{\alpha} \in \alpha {\ts \mathbb{Z}}$, hence $0\ne n-m\in\alpha^2 {\ts \mathbb{Z}}$, and $\alpha^2 \in {\ts \mathbb{Q}}$. Now, let $\alpha^2 = \frac{p}{q}$ with $p,q\in\mathbb{N}$, and set $\beta = \sqrt{p \hspace{0.5pt} q\hspace{0.5pt} }$, so $\beta = \alpha \hspace{0.5pt} q$ and $\alpha = \beta/q$. Invoking the last relation from Lemma~\ref{lem:Z}, since $\beta^2 = p\hspace{0.5pt} q$ is an integer, we can rewrite $\mu$ from \eqref{eq:mu-inter} as \[ \mu \, = \sum_{i=1}^{\ell} \, \sum_{j=0}^{q-1} c^{}_{i} \hspace{0.5pt} \mathcal{Z}^{}_{r^{}_{\hspace{-0.5pt} i}, s^{}_{i} + j \alpha, \beta} \hspace{0.5pt} , \] which, after expanding the double sum and relabeling the parameters, proves the claim. \end{proof} Recall that Lemma~\ref{lem:Z}{\hspace{0.5pt}}(3), applied several times, leads to the cycle \[ \mathcal{Z}^{}_{r,s,\alpha} \, \xrightarrow{\,\mathcal{F}\ts\,} \, \tfrac{1}{\alpha}\hspace{0.5pt} \mathrm{e}^{2 \pi \mathrm{i}\ts r s} \mathcal{Z}^{}_{-s,r,\frac{1}{\alpha}} \, \xrightarrow{\,\mathcal{F}\ts\,} \, \mathcal{Z}^{}_{-r,-s,\alpha} \, \xrightarrow{\,\mathcal{F}\ts\,} \, \tfrac{1}{\alpha} \hspace{0.5pt} \mathrm{e}^{2 \pi \mathrm{i}\ts r s} \mathcal{Z}^{}_{s,-r,\frac{1}{\alpha}} \, \xrightarrow{\,\mathcal{F}\ts\,} \, \mathcal{Z}^{}_{r,s,\alpha} \] of length at most $4$, where $\mathcal{Z}^{}_{-r,-s,\alpha} = I \hspace{-0.5pt} . \hspace{0.5pt} \mathcal{Z}^{}_{r,s,\alpha}$. Ignoring the actual cycle length, which can be $1$, $2$ or $4$, we get the following general result. \begin{theorem}\label{thm:main} Let\/ $\mu\ne 0$ be a transformable measure on\/ $\mathbb{R}\ts$ and\/ $\lambda$ a fourth root of unity. Then, the following properties are equivalent. \begin{enumerate}\itemsep=2pt \item The measure\/ $\mu$ is an eigenmeasure of\/ $\mathcal{F}\ts$ for\/ $\lambda$ with uniformly discrete support, either in the distribution or in the strict sense. \item There are integers\/ $n,k \in \mathbb{N}$ and numbers\/ $c^{}_{1}, \ldots , c^{}_{k} \in \mathbb{C}\ts$ and\/ $r^{}_{1}, \ldots , r^{}_{k}, s^{}_{1}, \ldots , s^{}_{k} \in \mathbb{R}\ts$ such that one has \[ \mu \, = \, \nu \hspace{0.5pt} + \lambda^{\hspace{-0.5pt} -1} \hspace{0.5pt} \widehat{\nu} \hspace{0.5pt} + \lambda^{\hspace{-0.5pt} -2} \hspace{0.5pt} \widehat{\widehat{\nu}} \hspace{0.5pt} + \lambda^{\hspace{-0.5pt} -3} \hspace{0.5pt} \widehat{\widehat{\widehat{\nu}}} \] for\/ $\nu = \sum_{i=1}^{k} c^{}_{i} \, \mathcal{Z}^{}_{r_i, s_i, \beta}$ with\/ $\beta = \sqrt{n}$. \item There is some\/ $n\in\mathbb{N}$ such that\/ $\mu\in \mathcal{E}^{}_{\lambda} (n)$, as defined in Eq.~\eqref{eq:def-E}. \end{enumerate} \end{theorem} \begin{proof} By Lemma~\ref{lem:tb}, the two notions of transformability are equivalent in this case, as claimed under (1). $(1) \Rightarrow (2)$: By Proposition~\ref{prop:sparse}, we know that $\mu$ must be of the form \[ \mu \, = \sum_{i=1}^{k} \hspace{0.5pt}\widetilde{c}^{}_{i} \, \mathcal{Z}^{}_{r_i, s_i, \beta} \] for suitable integers $n,k$ and with suitable numbers $r^{}_{i}$, $s^{}_{j}$ and $\widetilde{c}^{}_{\ell}$. Setting $\nu \mathrel{\mathop:}= \frac{1}{4} \mu$ immediately gives the claim as in Proposition~\ref{prop:general}, with $c^{}_i = \frac{1}{4} \widetilde{c}^{}_{i}$. 
$(2) \Rightarrow (3)$: Since $\nu = \sum_{i=1}^{k} c^{}_{i} \, \mathcal{Z}^{}_{r_i, s_i, \sqrt{n}}$ and $\mu$ is the sum stated under point (2), we get \[ \mu \, = \sum_{i=1}^{k} c^{}_{i} \, \mathcal{Y}^{}_{r_i, s_i, \sqrt{n}, \lambda} \, \in \, \mathcal{E}^{}_{\lambda} (n) \] in the notation of Lemma~\ref{lem:cycle} and Eq.~\eqref{eq:def-E}. The implication $(3) \Rightarrow (1)$ is clear from Lemma~\ref{lem:cycle}. \end{proof} For concrete calculations, one can harvest Lemma~\ref{lem:decomp} and Eq.~\eqref{eq:split}, from which one derives the following conditions for $\widehat{\mu} = \lambda \mu$, \begin{equation}\label{eq:conditions} \begin{split} \lambda = \pm 1 : & \quad \mu^{}_{-} = 0 \quad \text{and} \quad \widehat{\mu^{}_{+}} = \pm \hspace{0.5pt} \mu^{}_{+} \hspace{0.5pt} , \\ \lambda = \pm\hspace{0.5pt}\hspace{0.5pt} \mathrm{i}\ts\hspace{0.5pt} : & \quad \mu^{}_{+} = 0 \quad \text{and} \quad \widehat{\mu^{}_{-}} = \pm \hspace{0.5pt} \mathrm{i}\ts \hspace{0.5pt} \mu^{}_{-} \hspace{0.5pt} , \end{split} \end{equation} which formed the basis for Lemma~\ref{lem:cycle}. It is clear from Proposition~\ref{prop:sparse} in conjunction with Proposition~\ref{prop:FD} that it suffices to consider measures \begin{equation}\label{eq:mu-cand} \mu \, = \sum_{i=1}^{k} c^{}_{i}\hspace{0.5pt} \mathcal{Z}^{}_{r^{}_{\hspace{-0.5pt} i}, s^{}_{i}, \alpha} \end{equation} with fixed $\alpha=\sqrt{n}$ and parameters $r^{}_{\hspace{-0.5pt} i} \in \big( \frac{-1}{2\alpha}, \frac{1}{2\alpha} \big] =\mathrel{\mathop:} J_r$ and $s^{}_{i} \in \big( \frac{-\alpha}{2}, \frac{\alpha}{2} \big] =\mathrel{\mathop:} J_s$. Invoking Lemma~\ref{lem:Z}{\hspace{0.5pt}}(4), we get $\mu = \mu^{}_{+} \hspace{-0.5pt} + \mu^{}_{-}$ with \[ \mu^{}_{\pm} \, = \: \myfrac{1}{2} \sum_{i=1}^{k} c^{}_{i} \bigl( \mathcal{Z}^{}_{r_i, s_i, \alpha} \pm \mathcal{Z}^{}_{-r_i, -s_i, \alpha}\bigr), \] where $-r_i$ and $-s_i$ are taken modulo $\frac{1}{\alpha}$ and $\alpha$, respectively, when $r_i = \frac{1}{2\alpha}$ or $s_i = \frac{\alpha}{2}$. Now, one can use the fundamental domains to rewrite the possible eigenmeasures with restricted parameters, and without ambiguities. Some care has to be exercised with the $2$-division points \[ \big\{ \bigl( 0,0 \bigr), \bigl( 0,\tfrac{\alpha}{2} \bigr), \bigl(\tfrac{1}{2\alpha},0 \bigr), \bigl(\tfrac{1}{2\alpha},\tfrac{\alpha}{2}\bigr) \big\} \,\subset \, J \hspace{0.5pt} , \] under the inversion $I$ in such calculations; the details are left to the interested reader. After the full characterisation of $\mathcal{F}\ts$-eigenmeasures on $\mathbb{R}\ts$ that are periodic or have uniformly discrete support, it remains an interesting open problem to extend it to more general measures with discrete support, where interesting new cases exist \cite{Gui,Meyer}. Clearly, linear combinations of measures $\mu^{}_i \in \mathcal{E}^{}_{\lambda} (n^{}_{i})$, with fixed $\lambda$, are eigenmeasures with discrete support. The latter is uniformly discrete if and only if, for all $i\ne j$, one has $\sqrt{n^{}_i/n^{}_j} \in {\ts \mathbb{Q}}$; when this condition fails, the support is discrete but no longer uniformly discrete, which is to say that there are many more possibilities. In particular, for each fourth root of unity, there are eigenmeasures with non-uniformly discrete support, for instance of the type studied by Meyer in \cite{Meyer}. A better understanding of such measures seems an interesting task ahead. \section*{Appendix: Discrete Fourier transform} Here, we recall some well-known properties of the DFT.
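The multiplicity pattern recorded in Table~\ref{tab:eigen} below, as well as the explicit eigenvectors listed for small $n$, can be checked numerically. The following minimal sketch is an illustration only (it assumes the \texttt{numpy} library and the convention $\omega = \mathrm{e}^{-2 \pi \mathrm{i}\ts /n}$ fixed below); it counts, for each fourth root of unity, how many eigenvalues of the unitary DFT matrix lie nearby.
\begin{verbatim}
import numpy as np

def dft_matrix(n):
    # U_n with entries omega**(k*l) / sqrt(n), where omega = exp(-2*pi*i/n)
    k, l = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(-2j * np.pi * k * l / n) / np.sqrt(n)

roots = [1, 1j, -1, -1j]   # the only possible eigenvalues, since U_n has order dividing 4
for n in range(2, 11):
    eig = np.linalg.eigvals(dft_matrix(n))
    mult = [sum(abs(ev - r) < 0.5 for ev in eig) for r in roots]
    print(n, dict(zip(["+1", "+i", "-1", "-i"], mult)))
\end{verbatim}
The printed counts can be compared directly with the table of multiplicities given below.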
Given $n\in\mathbb{N}$, the unitary Fourier matrix $U_n \in \Mat (n,\mathbb{C}\ts)$ has matrix entries $\omega^{k\ell}/\sqrt{n}$ with $\omega = \mathrm{e}^{-2 \pi \mathrm{i}\ts/n}$ and \mbox{$0 \leqslant k,\ell \leqslant n-1$}, where we label the entries starting with $0$. Since $n=1$ is trivial, we only consider $n\geqslant 2$. While $U^{}_{2}$ is an involution, $U_n$ has order $4$ for all $n \geqslant 3$. The eigenvalues and their multiplicities show a structure modulo $4$, as already known to Gauss. They are summarised in Table~\ref{tab:eigen}. \begin{table} \begin{tabular}{|c|cccc|} \hline $n$ & $\lambda=1$ & $\lambda =\mathrm{i}\ts$ & $\lambda=-1$ & $\lambda=-\mathrm{i}\ts$ \\ \hline $4m$ & $m+1$ & $m-1$ & $m$ & $m$ \\ $4m+1$ & $m+1$ & $m$ & $m$ & $m$ \\ $4m+2$ & $m+1$ & $m$ & $m+1$ & $m$ \\ $4m+3$ & $m+1$ & $m$ & $m+1$ & $m+1$ \\ \hline \end{tabular} \caption{Eigenvalues of $U_n$ and their multiplicities.\label{tab:eigen}} \end{table} Concretely, we have \[ U^{}_2 \, = \, \myfrac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \] with eigenvalues $\pm 1$ and eigenvectors $(1\pm \sqrt{2}, 1)$. Since the DFT matrices are symmetric, left and right eigenvectors map to each other under transposition. We state them in row form. For $n=3$, setting $\omega = \mathrm{e}^{-2 \pi \mathrm{i}\ts/3}$, one finds \[ U^{}_3 \, = \, \myfrac{1}{\sqrt{3}} \begin{pmatrix} 1 & 1 & 1\\ 1 & \omega & \overline{\omega} \\ 1 & \overline{\omega} & \omega \end{pmatrix} , \] with eigenvectors $(1 \pm \sqrt{3}, 1, 1)$ for $\lambda = \pm 1$ and $(0,-1,1)$ for $\lambda=-\mathrm{i}\ts$. For $n=4$, one has \[ U^{}_4 \, = \, \myfrac{1}{2} \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & - \mathrm{i}\ts & -1 & \mathrm{i}\ts \\ 1 & -1 & 1 & -1 \\ 1 & \mathrm{i}\ts & -1 & -\mathrm{i}\ts \end{pmatrix} , \] where the eigenspace for $\lambda=1$ is two-dimensional, spanned by $(1,0,1,0)$ and $(2,1,0,1)$. The other eigenvectors are $(-1,1,1,1)$ for $\lambda = -1$ and $(0,-1,0,1)$ for $\lambda = -\mathrm{i}\ts$. Next, $n=5$ is the first case where all possible eigenvalues occur. With $\omega = \mathrm{e}^{-2 \pi \mathrm{i}\ts/5}$, we get \[ U^{}_5 \, = \, \myfrac{1}{\sqrt{5}} \begin{pmatrix} 1 & 1 & 1 & 1 & 1 \\ 1 & \omega & \omega^2 & \overline{\omega}^2 & \overline{\omega} \\ 1 & \omega^2 & \overline{\omega} & \omega & \overline{\omega}^2 \\ 1 & \overline{\omega}^2 & \omega & \overline{\omega} & \omega^2 \\ 1 & \overline{\omega} & \overline{\omega}^2 & \omega^2 & \omega \end{pmatrix} , \] where $(\tau,1,0,0,1)$ and $(\tau,0,1,1,0)$ span the eigenspace of $\lambda=1$, with $\tau = \frac{1}{2} (1+\sqrt{5}\,)$ being the golden ratio. The other eigenvectors are $(0,-1,\tau\pm\eta,-(\tau\pm\eta),1)$ with $\eta = \sqrt{2+\tau}$ for $\lambda = \pm \mathrm{i}\ts$, and $(-\frac{2}{\tau},1,1,1,1)$ for $\lambda = -1$. As a last case, let us consider $n=6$. Here, with $\omega = \mathrm{e}^{-\pi \mathrm{i}\ts/3}$, we have \[ U^{}_{6} \, = \, \myfrac{1}{\sqrt{6}} \begin{pmatrix} 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & \omega & \omega^2 & -1 & \overline{\omega}^2 & \overline{\omega} \\ 1 & \omega^2 & \overline{\omega}^2 & 1 & \omega^2 & \overline{\omega}^2 \\ 1 & -1 & 1 & -1 & 1 & -1 \\ 1 & \overline{\omega}^2 & \omega^2 & 1& \overline{\omega}^2 & \omega^2 \\ 1 & \overline{\omega} & \overline{\omega}^2 & -1 & \omega^2 & \omega \end{pmatrix} . 
\] The eigenspace for $\lambda=1$ is two-dimensional, spanned by $(\beta,1,0,1-\beta,0,1)$ together with $(1+\beta,0,1,\beta,1,0)$, as is that for $\lambda = -1$, which is spanned by $(-\beta,1,0,1+\beta,0,1)$ and $(1-\beta,0,1,-\beta,1,0)$, with $\beta = \sqrt{3/2}$ in both cases. The remaining eigenvectors are given by $(0,-1,1\pm\sqrt{2},0, -(1\pm\sqrt{2}\,),1)$ for $\lambda = \pm \mathrm{i}\ts$, respectively. \end{document}
\begin{document} \begin{abstract} Let $L(s,f)$ be an $L$-function associated to a primitive (holomorphic or Maass) cusp form $f$ on $\GL(2)$ over $\ensuremath{\mathbf{Q}}$. Combining mean-value estimates of Montgomery and Vaughan with a method of Ramachandra, we prove a formula for the mixed second moments of derivatives of $L(1/2+it,f)$ and, via a method of Hall, use it to show that there are infinitely many gaps between consecutive zeros of $L(s,f)$ along the critical line that are at least $\sqrt 3 = 1.732\ldots$ times the average spacing. Using general pair correlation results due to Murty and Perelli in conjunction with a technique of Montgomery, we also prove the existence of small gaps between zeros of any primitive $L$-function of the Selberg class. In particular, when $f$ is a primitive holomorphic cusp form on $\GL(2)$ over $\ensuremath{\mathbf{Q}}$, we prove that there are infinitely many gaps between consecutive zeros of $L(s,f)$ along the critical line that are at most $0.823$ times the average spacing. \end{abstract} \maketitle \tableofcontents \section{Introduction} Let $f$ be a primitive form on $\GL(2)$ over $\ensuremath{\mathbf{Q}}$ with level $q$, which we consider to be fixed. Then $f$ corresponds either to a primitive holomorphic cusp form or to a primitive Maass cusp form. For $\ensuremath{\mathbf{R}}e(s)\gg1$, let \begin{equation}\label{Ldefintion} L(s,f)\ := \ \sum_{n=1}^{\infty}\frac{a_f(n)}{n^s} \end{equation} denote the $L$-function of degree 2 associated to $f$ as defined by Godement and Jacquet~\cite{GJ}. Here, for any choice of $f$, we are normalizing so that $a_f(1)=1$ and the critical line is $\ensuremath{\mathbf{R}}e(s)=1/2$. We study the vertical distribution of the nontrivial zeros of $L(s,f)$, which we denote by $\rho_{\!f} = \beta_{\!f} + i\alphammaf$, where $\beta_{\!f}, \alphammaf\in\ensuremath{\mathbf{R}}$ and $0<\beta_{\!f}<1$. The analogous problems for normalized gaps between consecutive zeros of the Riemann zeta-function and for the Dedekind zeta-function of a quadratic number field have been studied extensively. For example, see {\cite{Bredberg,Bui,Bui2,Bui3,BHTB,BMN,CGGGHB,CGG,CGG2,FW,GGOS,Hall,Hall1, Montgomery, MO, Mu, ng1, Sound, caroline}. It is known (see Theorem 5.8 of \cite{IK}) that \begin{equation}\label{countzeros} N(T,f)\ := \ \sum_{0<\alphammaf \leqslant T} 1 \ = \ \frac{T}{\pi}\log \frac{\sqrt{|q|}T}{2\pi e} +O\left(\log \mathfrak q(iT,f)\right) \end{equation} for $T\geqslant1$ with an implied absolute constant. Here $\mathfrak q(s,f)$ denotes the analytic conductor of $L(s,f)$. Consider the sequence $0 \leqslant \alphammaf(1)\leqslant \alphammaf(2) \leqslant \cdots\leqslant \alphammaf(n) \leqslant \cdots$ of consecutive ordinates of the nontrivial zeros of $L(s,f)$. By \eqref{countzeros}, it follows that the average size of $\alphammaf(n+1) - \alphammaf(n)$ is \begin{equation} \frac{\pi}{\log\left(\sqrt{|q|}\alphammaf(n)\right)}. \end{equation} Let \begin{equation}\label{eq:mulambdadefs} \Lambda_f\ := \ \limsup_{n\to \infty}\,\frac{\tgammaf(n+1)-\tgammaf(n)}{\pi / \log\left(\sqrt{|q|}\tgammaf(n)\right)} \qquad \text{and} \qquad \lambda_f\ := \ \limsup_{n\to \infty}\, \frac{\alphammaf(n+1)-\alphammaf(n)}{\pi / \log\left(\sqrt{|q|}\alphammaf(n)\right)}, \end{equation} where $\tgammaf(n)$ corresponds to the $n$th nontrivial zero of $L(s,f)$ on the critical line $\ensuremath{\mathbf{R}}e(s) = 1/2$. Note that under the assumption of the Generalized Riemann Hypothesis for $L(s,f)$, we have $\Lambda_f = \lambda_f$. 
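For orientation, we record the short computation behind the average-spacing normalisation used in \eqref{eq:mulambdadefs}; the numerical values at the end are purely illustrative and play no role in the proofs. By \eqref{countzeros},
\be
N(T\!+\!1,f)-N(T,f)\ = \ \frac1\pi\log\frac{\sqrt{|q|}\,T}{2\pi}+O\left(\log \mathfrak q(iT,f)\right),
\ee
so a unit interval at height $T$ contains roughly $\frac1\pi\log\left(\sqrt{|q|}\,T\right)$ ordinates, and the mean gap near $\alphammaf(n)$ is $\pi/\log\left(\sqrt{|q|}\alphammaf(n)\right)$, as stated above. For instance, for $|q|=1$ and an ordinate near $10^{6}$, this mean gap is about $\pi/\log 10^{6}\approx 0.227$, so a normalized gap of size $\sqrt3$ corresponds to an actual gap of about $0.39$.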
Unconditionally, it is certainly true that $\Lambda_f \geqslant\lambda_f\geqslant1$, however we expect that $\Lambda_f = \lambda_f= \infty$. Towards a lower bound on $\lambda_f$, we prove the following unconditional result for $\Lambda_f$. \begin{theorem}\label{thm:gaps}Let $L(s,f)$ be a primitive $L$-function on $\GL(2)$ over $\ensuremath{\mathbf{Q}}$. Then $\Lambda_f \geqslant\sqrt{3}=1.732\ldots$. \end{theorem} \begin{corollary}\label{thm:RHlargegaps}Let $L(s,f)$ be a primitive $L$-function on $\GL(2)$ over $\ensuremath{\mathbf{Q}}$. Then, assuming the Generalized Riemann Hypothesis for $L(s,f)$, we have $\Lambda_f=\lambda_f \geqslant\sqrt{3}=1.732\ldots$. \end{corollary} We also consider the question of small gaps between nontrivial zeros of $L(s,f)$, however our arguments may be applied to any primitive $L$-function, $L(s)$, in the Selberg class,\footnote{We define the Selberg class in Section \ref{properties}.} which we denote by $\pazocal S$. It is conjectured that all `standard' automorphic $L$-functions as described by Langlands are members of the Selberg class, but this is far from established. It is known, however, that the primitive automorphic cuspidal $\GL(2)$ $L$-functions we consider are members of $\pazocal S$, and moreover membership in $\pazocal S$ has been established for primitive holomorphic cusp forms. This has not yet been established, however, in the real-analytic case of Maass cusp forms. Fix any $L\in\pazocal S$. As above, consider the sequence $0 \le\alphammaL(1)\leqslant \alphammaL(2) \leqslant \cdots\leqslant \alphammaL(n) \leqslant \cdots$ of consecutive ordinates of the nontrivial zeros of $L(s)$, and define \begin{equation}\label{eq:numLzeros} N(T,L)\ := \ \sum_{0<\alphammaL \leqslant T} 1 \end{equation} and \be\label{eq:muLdef} \mu_L\ := \ \liminf_{n\to \infty}\,\frac{\alphammaL(n\!+\!1)-\alphammaL(n)}{\pi/\log \alphammaL(n)}. \ee By definition, we have $\mu_L\le1$, but we expect that $\mu_L=0$. It is conjectured that all the nontrivial zeros of $L(1/2+it)\in \pazocal S$ are simple, except for a possible multiple zero at the central point $s=1/2$. In the case that $f$ is a primitive holomorphic cusp form, Milinovich and Ng~\cite{micah-nathan} have shown, under the Generalized Riemann Hypothesis for $L(s,f)$, that the number of simple zeros of $L(s,f)$ satisfying $0<\alphammaf\leqslant T$ is greater than a positive constant times $T(\log T)^{-\varepsilon}$ for any $\varepsilon>0$ and $T$ sufficiently large. Every $L \in\pazocal S$ satisfies \be\log L(s)\ = \ \sum_{n=1}^\infty \frac{b_L(n)}{n^{s}},\ee where $b_L(n)=0$ unless $n=p^\ell$ for some $\ell\geqslant 1$, and $b_L(n)\ll n^\theta$ for some $\theta<1/2$. In addition to the Generalized Riemann Hypothesis, we make the following assumption in the proof of small gaps between nontrivial zeros of $L\in\pazocal S$. \begin{thmx}\label{eq:selbergconj} Let $\upsilon_L(n)\ := \ b_L(n)\log n.$ We have \be \sum_{n\leqslant x}\upsilon_L(n)\overline{\upsilon_L(n)}\ = \ (1+o(1))x\log x\ee as $x\to \infty$. \end{thmx} Hypothesis~\ref{eq:selbergconj}, proposed by Murty and Perelli in \cite{murty-perelli}, is a mild assumption concerning the correlation of the coefficients of $L$-functions at primes and prime powers. Hypothesis~\ref{eq:selbergconj} is motivated in \cite{murty-perelli} by the Selberg Orthogonality Conjectures, which are known to hold for $L$-functions attached to irreducible cuspidal automorphic representations on $\GL(m)$ over $\ensuremath{\mathbf{Q}}$ if $m\leqslant 4$. 
(If $m\le4$, see \cite{AS,LWY,LY1,LY2}.) In the case that $m_L=1$, it has recently been shown in \cite{CCLM} that $\mu_L \leqslant 0.606894$. We generalize these arguments for any $L\in \pazocal S$ to prove the following upper bounds on $\mu_L$. \begin{theorem}\label{thm:smallgaps}Let $L\in \pazocal S$ be primitive of degree $m_L$. Assume the Generalized Riemann Hypothesis and Hypothesis~\ref{eq:selbergconj}. Then there is a computable nontrivial upper bound on $\mu_L$ depending on $m_L$. In particular, we record the following upper bounds for $\mu_L$: \be\begin{tabular}{c c c} $m_L$ & & upper bound for $\mu_L$ \\ \hline 1 && 0.606894 \\ 2 && 0.822897 \\ 3 && 0.905604 \\ 4 && 0.942914 \\ 5 && 0.962190\\ \vdots && \vdots \end{tabular}\ee where the nontrivial upper bounds for $\mu_L$ approach 1 as $m_L$ increases. \end{theorem} \begin{corollary} Let $f$ be a primitive holomorphic cusp form on $\GL(2)$ over $\ensuremath{\mathbf{Q}}$ with level $q$ and $L(s,f)$ the associated $L$-function. Then $L(s,f) \in \pazocal S$, so assuming the Generalized Riemann Hypothesis and Hypothesis~\ref{eq:selbergconj}, we have \begin{equation} \liminf_{n\to \infty} \frac{\alphammaf(n\!+\!1)-\alphammaf(n)}{\pi / \log\left(\sqrt{|q|}\alphammaf(n)\right)} \ <\ 0.822897. \end{equation} \end{corollary} The majority of this article is focused on deriving the lower bound given in Theorem \ref{thm:gaps}, which is accomplished using a method of Hall~\cite{Hall} and some ideas of Bredberg~\cite{Bredberg}. We closely follow the arguments in~\cite{caroline}, where the problem is considered for large gaps between consecutive zeros of a Dedekind zeta-function of a quadratic number field. Note that the $L$-function considered in ~\cite{caroline} is of degree 2, however it is not primitive because it factors as the product of the Riemann zeta-function and a Dirichlet $L$-function. Due to the primitivity of the $L$-functions in the present work, we must consider a Rankin-Selberg type convolution, which we define by \begin{equation}\label{convolution} L(s,f \!\times\! \overline{f}) \ := \ \sum_{n=1}^{\infty}\frac{|a_f(n)|^2}{n^s}, \end{equation} for $\ensuremath{\mathbf{R}}e(s)>1$. It can be shown that this function extends meromorphically to $\ensuremath{\mathbf{C}}$ and has a simple pole at $s=1$. For any choice of $f$, we let $c_f$ denote the residue of the simple pole of $L(s,f \!\times\! \overline{f})$ at $s=1$. Following \cite{caroline}, we require asymptotic estimates of the mixed second moments of $L(1/2+\!it,f)$ and $L^{\prime}(1/2+\!it,f)$ with a uniform error. We obtain these by way of the following theorem. \begin{theorem}\label{thm:mixed} Let $L(s,f)$ be a primitive $L$-function on $\GL(2)$ over $\ensuremath{\mathbf{Q}}$. Let $s=1/2+it$, $T$ large, and let $\mu, \nu$ denote non-negative integers. We have \begin{align} \int_T^{2T}L^{(\mu)}\left(\tfrac{1}{2}\!+\!it,f\right) L^{(\nu)}\left(\tfrac{1}{2}\!-\!it,\overline{f}\right)\d t \ = \ \frac{(-1)^{\mu+\nu}2^{\mu+\nu+1}}{\mu\!+\!\nu\!+\!1}&c_fT\left(\log T\right)^{\mu+\nu+1} \ + \ O\left(\mu!\nu!T(\log T)^{\mu+\nu}\right) \end{align} as $T\to \infty$, where $c_f$ denotes the residue of the simple pole of $L(s,f,\times \overline{f})$ at $s=1$, and the error term is uniform in $\mu$ and $\nu$. \end{theorem} If $f$ is a cusp form of even weight (at least 12) with respect to the full modular group, the cases $\mu= \nu$ are known. 
The case $\mu= \nu =0$ in this setting was proved by Good~\cite{Good}, and for any nonnegative integer $m$, the general case $\mu = \nu =m$ was recently given by Yashiro~\cite{yashiro}. In addition to the cases $\mu=\nu=0$ and $\mu=\nu=1$, we require the mixed case $\mu=1, \nu=0$ in the proof of Theorem \ref{thm:gaps}. In this article, we first prove a shifted moment result and then obtain the more general formula given in Theorem \ref{thm:mixed} via differentiation (with respect to the shifts) and Cauchy's integral formula. We deduce the required shifted moment result, given below and proved in Section~\ref{sec:shiftproof}, using a method of Ramachandra~\cite{ramachandra}. \begin{theorem}\label{thm:shiftedmoment}Let $L(s,f)$ be a primitive $L$-function on $\GL(2)$ over $\ensuremath{\mathbf{Q}}$. Let $s=1/2+it$, $T$ large, and $\alpha, \beta\in\ensuremath{\mathbf{C}}$ such that $|\alpha|,|\beta| \ll 1/\log T$. Then, we have \begin{align} \int_T^{2T}{L\left(s\!+\!\alpha,f\right)L\left(1\!-\!s\!+\!\beta,\overline{f}\right)\d t} \ = \ \int_T^{2T}& \left\{L(1\!+\!\alpha\!+\!\beta,f\!\times\!\overline{f}) +\left(\frac{t}{2\pi}\right)^{-2(\alpha+\beta)} L(1\!-\!\alpha\!-\!\beta,f\!\times\! \overline{f})\right\}\d t +O(T) \end{align} as $T \to \infty$. \end{theorem} The main term of Theorem~\ref{thm:shiftedmoment} verifies a conjecture arising from the recipe of Conrey, Farmer, Keating, Rubinstein, and Snaith (see~\cite{CFKRS}), which also predicts the additional lower order terms. Using other methods, the expected lower order terms in Theorem \ref{thm:shiftedmoment} can be deduced in the case that $f$ is a holomorphic primitive cusp form of even weight for the full modular group. Farmer \cite{farmer} has proved the asymptotic behavior of the mollified integral second shifted moment of such $L(s,f)$ when the mollifier is a Dirichlet polynomial of length less than $T^{1/6-\varepsilon}$, where $\varepsilon>0$ is small. Recently, Bernard \cite{Bernard} has proved the asymptotic behavior of the smooth mollified shifted second moment of such $L(s,f)$ where the mollifier is a Dirichlet polynomial of length less than $T^{5/27}$, with the additional requirement that the shifts are $o\left(1/\log^2(T)\right)$. (See \cite[Remark 2]{Bernard}.) To deduce the upper bound on $\mu_L$ appearing in Theorem \ref{thm:smallgaps}, we study the pair correlation of nontrivial zeros of $L(s) \in \pazocal S$ as given by Murty and Perelli \cite{murty-perelli}. We employ an argument of \cite{GGOS} with a new idea of Carneiro, Chandee, Littmann, and Milinovich \cite{CCLM} to prove the list of upper bounds given in Theorem~\ref{thm:smallgaps}. The article is organized as follows. For primitive $\GL(2)$ $L$-functions, we prove Theorem~\ref{thm:gaps} on large gaps between zeros of $L(1/2+it,f)$ in Section~\ref{sec:gaps}. We prove Theorem~\ref{thm:mixed} on the mixed second moments of derivatives of $L(1/2+it,f)$ in Section~\ref{mixed} and give the proof of Theorem~\ref{thm:shiftedmoment} regarding shifted moments in Section~\ref{sec:shiftproof}. For primitive $L$-functions in the Selberg class, we prove Theorem~\ref{thm:smallgaps} on small gaps between zeros of $L(s)$ in Section~\ref{smallgaps}. \section{Proof of Theorem~\ref{thm:gaps}}\label{sec:gaps} We now show that Theorem \ref{thm:gaps} follows from Theorem \ref{thm:mixed}. We closely follow the proof of~\cite[Theorem 1]{caroline}, which in turn is a variation of a method of Hall \cite{Hall} using some ideas due to Bredberg \cite{Bredberg}.
Fix a primitive form, $f$, on $\GL(2)$, so that $f$ corresponds either to a primitive holomorphic cusp form or to a primitive Maass cusp form, and define the function \begin{equation}\label{eq:testfctn} g(t) \ := \ e^{i\rho t \log T}L\left(\tfrac{1}{2}\!+\!it,f \right), \end{equation} where $\rho$ is a real constant that will be chosen later to optimize our result. Let $\tgammaf$ denote an ordinate of a zero of $L(s,f)$ on the (normalized) critical line $\ensuremath{\mathbf{R}}e(s)=1/2$. Note that $g(t)$ has the same zeros as $L\left(1/2+\!it,f\right)$, that is, $g(t)\!=\!0$ if and only if $t\!=\!\tgammaf$. Let $\left\{ \tgammaf(1),\tgammaf(2),\ldots,\tgammaf(N)\right\}$ denote the set of distinct zeros of $g(t)$ in the interval $[T,2T]$. Let \begin{equation} \kappa_T\ := \ \max\left\{\tgammaf(n\!+\!1)-\tgammaf(n): T+1\leqslant\tgammaf(n) \leqslant 2T-1\right\}, \end{equation} and note that $\Lambda_f\geqslant\limsup_{T\rightarrow\infty}\kappa_T\frac{\log T}{\pi}$, since $\log\left(\sqrt{|q|}\,\tgammaf(n)\right)\geqslant\log T$ for $T\leqslant\tgammaf(n)\leqslant 2T$. Without loss of generality, we may assume that \begin{equation}\label{eq:zeroassumption} \tgammaf(1)-T\ll1\quad\text{and}\quad 2T-\tgammaf(N)\ll1, \end{equation} as otherwise there exist zeros $\tgammaf(0)\leqslant\tgammaf(1)$ and $\tgammaf(N+1)\geqslant\tgammaf(N)$ such that $\tgammaf(1)-\tgammaf(0)$ and $\tgammaf(N+1)-\tgammaf(N)$ are $\gg1$, and the theorem holds for this reason. The following lemma, due to Bredberg \cite[Corollary 1]{Bredberg}, is a variation of Wirtinger's inequality \cite[Theorem 258]{wirtinger}. \begin{lemma}[Wirtinger's inequality]\label{lem:wirtinger} Let $y : [a,b]\rightarrow\ensuremath{\mathbf{C}}$ be a continuously differentiable function, and suppose that $y(a)=y(b)=0$. Then \begin{equation} \int_a^b|y(x)|^2\d x\ \leqslant\ \left(\frac{b-a}\pi\right)^2\int_a^b|y'(x)|^2\d x. \end{equation} \end{lemma} Let $\varepsilon>0$ be small. By the definition of $\kappa_T$ and Lemma \ref{lem:wirtinger}, for each pair of consecutive zeros of $g(t)$ in the interval $[T,2T]$ we have \begin{equation}\label{eq:wirtapp} \int_{\tgammaf(n)}^{\tgammaf(n+1)}|g(t)|^2\d t\ \leqslant\ \frac{\kappa_T^2}{\pi^2} \int_{\tgammaf(n)}^{\tgammaf(n+1)}|g'(t)|^2\d t. \end{equation} Upon summing both sides of the inequality in~(\ref{eq:wirtapp}) for $n=1,2,\ldots,N-1$, we have \begin{equation}\label{eq:wirtinterval} \int_{\tgammaf(1)}^{\tgammaf(N)}|g(t)|^2\d t\ \leqslant\ \frac{\kappa_T^2}{\pi^2} \int_{\tgammaf(1)}^{\tgammaf(N)}|g'(t)|^2\d t. \end{equation} Subconvexity bounds for primitive $\GL(2)$ $L$-functions along the critical line yield $|g(t)| \ll |t|^{1/2 - \delta}$, where $\delta>0$ is a fixed constant. Good~\cite{goodsubconvex} established subconvexity in the $t$-aspect for holomorphic forms of full level, achieving $\delta < 1/6$. Meurman~\cite{meur} achieved the same for Maass forms of full level. Jutila and Motohashi~\cite{jutila-motohashi} obtained a hybrid bound in the $t$- and eigenvalue aspects for full-level holomorphic and Maass forms. Blomer and Harcos~\cite{blomer-harcos} obtained $\delta < 25/292$ for holomorphic and Maass forms of arbitrary level and nebentypus in the $t$-aspect. Michel and Venkatesh~\cite{mvsubconvexity} proved general subconvexity for $\GL(2)$. By~(\ref{eq:zeroassumption}), we have \begin{equation}\label{eq:fullinterval} \int_T^{2T}|g(t)|^2\d t \leqslant \frac{\kappa_T^2}{\pi^2}\int_{T}^{2T}|g'(t)|^2\d t + O\left(T^{1 - 2\delta}\right).
\end{equation} Observing that $|g(t)|^2=\left|L\left(1/2+\!it,f\right)\right|^2$ and \begin{equation} |g'(t)|^2\ = \ \left|L'\left(\tfrac12\!+\!it,f \right)\right|^2 + \rho^2\log^2T \left|L\left(\tfrac12\!+\!it,f \right)\right|^2 + 2\rho\log T\cdot \ensuremath{\mathbf{R}}e\left(L'\left(\tfrac12\!+\!it,f \right) \overline{L\left(\tfrac12\!+\!it,f\right)}\right), \end{equation} Theorem~\ref{thm:mixed} implies that \begin{align} \int_T^{2T}\left|L\left(\tfrac12\!+\!it,f \right)\right|^2\d t &\ = \ 2c_f T \log T + O(T), \notag\\ \int_T^{2T}L'\left(\tfrac12\!+\!it,f \right)\overline{L\left(\tfrac12\!+\!it,f \right)}\d t &\ = \ -2c_f T\left(\log T\right)^2+O\left(T\log T\right), \intertext{and} \int_T^{2T}\left|L'\left(\tfrac12\!+\!it,f \right)\right|^2\d t &\ = \ \frac83 c_f T \left(\log T\right)^3+O\left(T\left(\log T\right)^2\right), \end{align} where $c_f$ denotes the residue of the simple pole of $L(s,f\!\times\! \overline{f})$ at $s\!=\!1$. Combining these estimates and noting that \begin{equation} \frac{1}{1+O\left(\frac1{c_f}\left(\log T\right)^{-1}\right)} \ = \ 1+ O\left(\frac{1}{c_f}\left(\log T\right)^{-1}\right), \end{equation} we find that \begin{equation}\label{eq:almostdone} \frac{\kappa_T^2}{\pi^2} \geqslant \frac3{3\rho^2-6\rho+4}\left(\log T\right)^{-2} \left(1 + O\left(\frac1{c_f}(\log T)^{-1}\right)\right). \end{equation} The polynomial $3\rho^2-6\rho+4$ is minimized by $\rho\!=\!1$. Therefore, inserting this choice of $\rho$ in \eqref{eq:almostdone}, we obtain \begin{equation} \kappa_T\ \geqslant\ \frac{\sqrt3\pi}{\log T} \left(1 + O\left(\frac1{c_f}\left(\log T\right)^{-1}\right)\right). \end{equation} Since $\Lambda_f\geqslant\limsup_{T\rightarrow\infty}\kappa_T\frac{\log T}{\pi}$, letting $T\rightarrow\infty$ yields $\Lambda_f\geqslant\sqrt3$, which completes the proof of Theorem~\ref{thm:gaps}. \section{Proof of Theorem~\ref{thm:mixed}}\label{mixed} In this section we prove Theorem~\ref{thm:mixed}. As in the previous section, we closely follow an argument of~\cite[Theorem 3]{caroline}, and we include the details here for completeness. First, note that for $t\in\left[T,2T\right]$, we have \begin{equation} \left(\frac{t}{2\pi}\right)^{-2(\alpha + \beta)} \ = \ T^{-2(\alpha + \beta)} \left(1 + O\left(\frac{1}{\log{T}}\right)\right) \end{equation} as $T\to\infty$. Also, we have \begin{equation} L(1 \pm (\alpha \!+\! \beta),f\!\times\!\overline{f}) \ = \ \frac{\pm c_f}{\alpha\!+\!\beta} + O(1), \end{equation} and \begin{equation} T^{-2(\alpha+\beta)} \ = \ \sum_{n=0}^{\infty}\frac{(-1)^n2^n(\alpha\!+\!\beta)^n(\log T)^{n}}{n!}. \end{equation} Thus, by Theorem \ref{thm:shiftedmoment}, we have \begin{equation} \int_{T}^{2T}L\left(\tfrac{1}{2} \!+\! \alpha \!+\! it, f \right) L\left(\tfrac{1}{2} \!+\! \beta \!-\! it, \overline{f}\right)\d t \ = \ F(\alpha \!+\! \beta; T) + O(T), \end{equation} where \begin{equation} F(\alpha \!+\! \beta; T) \ := \ c_{f}T \sum_{n \geqslant 0}\frac{(-1)^{n}2^{n+1}(\alpha \!+\! \beta)^{n}(\log{T})^{n+1}}{(n\!+\!1)!}. \end{equation} Let \begin{equation} R(\alpha,\beta; T) \ := \ \int_{0}^{T}L\left(\tfrac{1}{2} \!+\! it \!+\! \alpha, f\right) L\left(\tfrac{1}{2} \!-\! it \!+\! \beta, \overline{f}\right)\d t - F(\alpha \!+\! \beta; T). \end{equation} Then $R(\alpha,\beta; T)$ is an analytic function of two complex variables $\alpha, \beta$ for $\ensuremath{\mathbf{R}}e(\alpha), \ensuremath{\mathbf{R}}e(\beta)<1/2$; moreover, if $|\alpha|, |\beta| \ll 1/\log T$, then Theorem~\ref{thm:shiftedmoment} implies that \begin{equation}\label{eq:R-error} R(\alpha,\beta;T) \ = \ O(T) \end{equation} as $T\to \infty$.
Differentiating, we find \begin{equation}\label{eq:R-diff} \int_{T}^{2T}L^{(\mu)}\left(\tfrac{1}{2}\!+\!it\!+\!\alpha, f\right) L^{(\nu)}\left(\tfrac{1}{2}-it\!+\!\beta, \overline{f} \right)\d t \ = \ \frac{\partial^{\mu+\nu}F(\alpha\!+\!\beta;T)} {\partial\alpha^{\mu}\partial\beta^{\nu}} + R_{\mu,\nu}(\alpha,\beta;T), \end{equation} where $\mu$ and $\nu$ are fixed nonnegative integers and \begin{equation} R_{\mu,\nu}(\alpha,\beta;T) \ := \ \frac{\partial^{\mu+\nu}R(\alpha,\beta;T)} {\partial \alpha^{\mu} \partial \beta^{\nu}}. \end{equation} Let $\Omega = \left\{\omega\in\ensuremath{\mathbf{C}} : |\omega-\alpha| = 1/\log T\right\}$. By contour integration and Cauchy's integral formula, \eqref{eq:R-error} implies that \begin{align}\label{eq:R-Cauchy} \frac{\partial^{\mu}}{\partial \alpha^{\mu}} R(\alpha,\beta;T) \ = \ \frac{\mu !}{2 \pi i} \int_{\Omega}\frac{R(\omega,\beta;T)}{(\omega - \alpha)^{\mu+1}}\d \omega &\ = \ O\left(\mu ! T(\log{T})^{\mu}\right). \end{align} A second application of Cauchy's integral formula yields \begin{equation}\label{eq:R-Cauchy2} R_{\mu,\nu}(\alpha,\beta;T) \ := \ \frac{\partial^{\mu+\nu}}{\partial \alpha^{\mu} \partial \beta^{\nu}} R(\alpha,\beta;T) \ = \ O\left(\mu ! \nu ! T(\log{T})^{\mu + \nu}\right). \end{equation} Setting $\alpha\!=\!\beta \!=\! 0$, we obtain \begin{equation}\label{eq:R-diff2} \int_{T}^{2T}L^{(\mu)}\left(\tfrac{1}{2} \!+\! it, f \right) L^{(\nu)}\left(\tfrac{1}{2} \!-\! it, \overline{f} \right)\d t \ = \ \left[\frac{\partial^{\mu+\nu}F(\alpha\!+\!\beta;T)} {\partial \alpha^{\mu} \partial \beta^{\nu}} \right]_{\alpha=\beta=0} + O\left(\mu ! \nu ! T(\log{T})^{\mu + \nu}\right). \end{equation} Differentiating $F(\alpha \!+\! \beta; T)$ with respect to $\alpha$ and $\beta$, we find \begin{equation}\label{eq:F-diff} \left[\frac{\partial^{\mu + \nu} F(\alpha \!+\! \beta; T)} {\partial \alpha^{\mu} \partial \beta^{\nu}} \right]_{\alpha = \beta = 0} \ = \ c_f T\frac{(-1)^{\mu+\nu}2^{\mu+\nu+1}(\log{T})^{\mu+\nu+1}}{\mu \!+\! \nu \!+\! 1}. \end{equation} Inserting \eqref{eq:F-diff} in \eqref{eq:R-diff2}, the theorem now follows by summing over the dyadic intervals $[T/2,T]$, $[T/4,T/2]$, $[T/8,T/4], \ldots$. \section{Properties of $L$-functions}\label{properties} In this section we collect some basic facts about the $L$-functions under consideration. The $L$-functions treated by Theorem~\ref{thm:gaps} have an automorphic characterization; they are associated to primitive cusp forms on $\GL(2)$ over $\ensuremath{\mathbf{Q}}$. We summarize some of their properties below. On the other hand, we prove Theorem~\ref{thm:smallgaps} for the large set of $L$-functions in the analytic axiomatic classification of $L$-functions due to Selberg~\cite{selbergclass}, which we denote $\pazocal S$; see Conrey and Ghosh~\cite{conreyghosh}, Murty~\cite{murty}, and Kaczorowski and Perelli~\cite{kaczorowskiperelli} for the basic properties of $\pazocal S$. Though it is certain that the primitive automorphic cuspidal $\GL(2)$ $L$-functions we consider are members of the Selberg class, and the membership of primitive holomorphic cusp forms has been established, it has not yet been established in the real-analytic case of Maass cusp forms. Most notably, the Ramanujan hypothesis for Maass forms is not yet a theorem. Fortunately, we do not need to assume the Ramanujan hypothesis (or the Generalized Riemann Hypothesis) for Theorem~\ref{thm:gaps}. 
Likewise, there are some specific assumptions we do need for Theorem~\ref{thm:gaps} that we do not need for Theorem~\ref{thm:smallgaps}. Therefore, the two sets of assumptions we need for our two main theorems have nontrivial intersection but one set is not a proper subset of the other. The approach we take to presenting these two sets of hypotheses is as follows. First, we will present the axioms of the Selberg class $\pazocal S$ needed for Theorem~\ref{thm:smallgaps}. Then, we give information specific to the automorphic $\GL(2)$ $L$-functions to which Theorem~\ref{thm:gaps} applies. \subsection{The Selberg class $\pazocal S$}\label{sec:selberg} For our purposes, the following axiomatic definition is sufficient, and we follow~\cite{murty-perelli} in our presentation. \begin{enumerate}[topsep=0in,label=(\textit{\roman*})] \item\label{selberg:dirichlet} (\textit{Dirichlet series})\quad Every $L \in\pazocal S$ is a Dirichlet series \be L(s)\ = \ \sum_{n=1}^\infty \frac{a_L(n)}{n^{s}}\ee absolutely convergent for $\ensuremath{\mathbf{R}}e(s)>1$. \item\label{selberg:continuation} (\textit{Analytic continuation})\quad There exists an integer $a\geqslant0$ such that $(s-1)^aL(s)$ is an entire function of finite order. \item\label{selberg:funeq} (\textit{Functional equation})\quad Every $L \in\pazocal S$ satisfies a functional equation of type \be L_\infty(s)L(s)\ =: \ \Lambda(s)\ = \ \epsilonsilon\overline\Lambda(1-s),\ee where $\overline\Lambda(s)$ denotes $\overline{\Lambda(\overline s)}$, and \be L_\infty\ = \ Q^s\prod_{j=1}^r\Gamma(w_j s+\mu_j)\ee with $Q>0,w_j>0,\ensuremath{\mathbf{R}}e(\mu_j)\geqslant0$, and $|\epsilonsilon|=1$. The \textit{degree} of $L$ is given by \be\label{degree} m_L\ :=\ 2\sum_{j=1}^{r}w_j. \ee \item\label{selberg:rama} (\textit{Ramanujan hypothesis})\quad For all positive integers $n$, we have $a_L(n)\ll n^{o(1)}$. \item\label{selberg:euler} (\textit{Euler product})\quad Every $L \in\pazocal S$ satisfies \be\log L(s)\ = \ \sum_{n=1}^\infty \frac{b_L(n)}{n^{s}},\ee where $b_L(n)=0$ unless $n=p^\ell$ for some $\ell\geqslant 1$, and $b_L(n)\ll n^\theta$ for some $\theta<1/2$. \end{enumerate} \subsection{Properties of $\GL(2)$ $L$-functions} In this section we collect some basic facts and hypotheses concerning $L$-functions attached to primitive (holomorphic or Maass) cusp forms on $\GL(2)$. We begin by combining some general remarks from~\cite[Section 1.1]{CFKRS}, \cite[Section 2]{rudnicksarnak}, and \cite[Section 3.6]{rubinstein}. For a more in-depth study of these $L$-functions, we refer the reader to~\cite[Chapter 5]{IK}. Let $f$ be a primitive (holomorphic or Maass) cusp form on $\GL(2)$ over $\ensuremath{\mathbf{Q}}$ with level $q$. For $\ensuremath{\mathbf{R}}e(s)>1$, let \be L(s,f) \ := \ \sum_{n=1}^\infty \frac{a_f(n)}{n^s} \ = \ \prod_p\left(1-\frac{\alpha_f(p)}{p^s}\right)^{-1} \left(1-\frac{\beta_f(p)}{p^s}\right)^{-1}\ee be the global $L$-function attached to $f$ (as defined by Godement and Jacquet in~\cite{GJ} and Jacquet and Shalika in~\cite{JS}), where the Dirichlet coefficients $a_f(n)$ have been normalized so that $\ensuremath{\mathbf{R}}e(s)=1/2$ is the critical line of $L(s,f)$. The numbers $\alpha_f,\beta_f$ are called the non-archimedean Satake or Langlands parameters. We assume $L(s,f)$ is primitive; that is, we assume $L(s,f)$ cannot be written as the product of two degree 1 $L$-functions. Then $L(s,f)$ admits an analytic continuation to an entire function of order 1. 
Additionally, there is a root number $\epsilon_f\in\ensuremath{\mathbf{C}}$ with $|\epsilon_f|=1$ and a function $L_\infty(s,f)$ of the form \be\label{eq:linfty} L_\infty(s,f)\ = \ P(s)Q^s\Gamma(w_js+\mu_1)\Gamma(w_js+\mu_2),\ee where $Q>0$, $w_j>0$, $\ensuremath{\mathbf{R}}e(\mu_j)\geqslant0$, and $P$ is a polynomial whose only zeros in $\sigma>0$ are the poles of $L(s)$, such that the completed $L$-function \be \Lambda(s,f) \ := \ L_\infty(s,f)L(s,f) \ee is entire, and \be \Lambda(s,f)\ = \ \epsilon_f \Lambda(1-s,\overline f), \ee where $\overline f(z)=\overline{f(\overline z)}$, $\Lambda(s,\overline f)=\overline{\Lambda}(s,f)=\overline{\Lambda(\overline s,f)}$, etc. It will be convenient to write this functional equation in asymmetric form \be\label{eq:funeqasymmetric} L(s,f)\ = \ \epsilon_f \Phi_f(s) L(1-s,\overline f), \ee where $\Phi_f(s)=L_\infty(1-s,\overline f)/L_\infty(s,f)$. In light of this, axioms \ref{selberg:dirichlet}, \ref{selberg:continuation}, and \ref{selberg:funeq} of $\pazocal S$ for $L(s,f)$ are satisfied.\footnote{The condition that $\ensuremath{\mathbf{R}}e(\mu_j)\geqslant0$ in axiom \ref{selberg:funeq} is not proven in the case of Maass cusp forms, though it is conjectured to hold, and is immaterial to the proof of Theorem~\ref{thm:gaps}. See~\cite[p.\,13]{bumptrace} and~\cite[\S1.5]{sarnakautomorphic}.} We now isolate the four properties on $L(s,f)$ for $f$ a primitive cusp form on $GL(2)$ over $\ensuremath{\mathbf{Q}}$ that we will make use of in our proof of Theorem~\ref{thm:gaps}. All of these properties are known for such $f$; that is, none are conjectural. \begin{Hlist} \item \label{hyp:entire} $L(s,f)$ is entire. \item \label{hyp:funeq} $L(s,f)$ satisfies a functional equation of the special form \be \Lambda(s,f)\ := \ L_\infty(s,f)L(s,f)\ = \ \epsilon_f\Lambda(1-s,\overline f), \ee where \be L_\infty(s,f) \ = \ Q^s\Gamma\left(\tfrac12s+\mu_1\right)\Gamma\left(\tfrac12s+\mu_2\right), \ee with $\{\mu_j\}$ stable under complex conjugation and the other notation the same as in~\eqref{eq:linfty}. Note that in the notation of~(\ref{eq:linfty}), $w_j=1/2$, which is conjectured to hold for arithmetic $L$-functions. The numbers $\mu_1,\mu_2$ are called the archimedean Langlands parameters. \item \label{hyp:convolution} The convolution Dirichlet series given by \be L(s,f\times\overline f)\ :=\ \sum_{n=1}^\infty\frac{\left|a_f(n)\right|^2}{n^s},\qquad\ensuremath{\mathbf{R}}e(s)>1 \ee is an $L$-function whose analytic continuation has a simple pole at $s=1$. (This is conjectured to be equivalent to $L(s,f)$ being a primitive $L$-function.) We denote the residue of this simple pole by $c_f$. \item \label{hyp:coeffsquaredsum} For sufficiently large $X$, $\sum_{n\leqslant X}\left|a_f(n)\right|^2 \ll X$. \end{Hlist} Note that \ref{hyp:entire} is true by general arguments for $L$-functions associated to primitive cupsidal automorphic representations on $\GL(2)$. In the case that $f$ is holomorphic of weight $k$ and level $q$, the Dirichlet series associated to $L(s,f)$ is formed from the (normalized) coefficients $\lambda_f(n)$ in the Fourier expansion \be f(z)\ = \ \sum_{n=1}^\infty\lambda_f(n)n^{(k-1)/2}e(nz), \ee which satisfy the Deligne divisor bound $\left|\lambda_f(n)\right|\leqslant d(n)$. Hence, axiom~\ref{selberg:rama} is satisfied for $f$, though this is immaterial to our arguments towards Theorem~\ref{thm:gaps}. For $L(s,f)$ attached to such holomorphic $f$, \ref{hyp:funeq} holds for $Q=\pi^{-1}$, $\mu_1=(k-1)/2$ and $\mu_2=(k+1)/2$. 
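As a concrete illustration of the coefficient normalization above and of Deligne's bound (an aside that is not needed for any of the arguments), one may take the discriminant form $\Delta$ of weight $k=12$ and level $1$, compute its coefficients $\tau(n)$ from the product expansion $\Delta = q\prod_{m\geqslant1}(1-q^m)^{24}$ (here $q$ denotes the nome $e(z)$, not the level), and check that the normalized coefficients $\lambda_f(n)=\tau(n)/n^{11/2}$ indeed satisfy $|\lambda_f(n)|\leqslant d(n)$. A minimal sketch in plain Python, offered only as an illustration:
\begin{verbatim}
N = 30
poly = [0] * (N + 1); poly[0] = 1    # power series of prod (1-q^m)^24, truncated at q^N
for m in range(1, N + 1):
    for _ in range(24):              # multiply by (1 - q^m), twenty-four times
        new = poly[:]
        for i in range(N + 1 - m):
            new[i + m] -= poly[i]
        poly = new
tau = {n: poly[n - 1] for n in range(1, N + 1)}   # the extra factor q shifts indices by one

def d(n):                            # the divisor function
    return sum(1 for k in range(1, n + 1) if n % k == 0)

for n in range(1, N + 1):
    lam = tau[n] / n**5.5            # normalized coefficient lambda_f(n)
    assert abs(lam) <= d(n)          # Deligne's bound |lambda_f(n)| <= d(n)
print(tau[2], tau[3], tau[5])        # -24, 252, 4830
\end{verbatim}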
In the case that $f$ is instead a Maass cusp form, \ref{hyp:funeq} is known to hold for $Q=\pi^{-1}$ and complex $\mu_j$. We only require the known condition that the set $\{\mu_j\}$ is stable under complex conjugation. We note that \ref{hyp:convolution} was established by the work of Rankin and Selberg for Hecke (holomorphic) cusp forms; it is established in generality far exceeding our needs by Mœglin and Waldspurger~\cite{MW}; see~\cite[\S5.11--5.12]{IK} for details. For holomorphic $f$, \ref{hyp:coeffsquaredsum} holds. Actually, it is known that \be\sum_{n\leqslant x}\left|a_f(n)\right|^2 = c_f x + o(x);\ee see~\cite[Proposition 5.1]{micah-nathan}. For real-analytic $f$, \ref{hyp:coeffsquaredsum} holds; see~\cite[\S3.2]{harcos} and~\cite[Theorem 3.2, (8.7), and (9.34)]{iwaniecspectral}. Last, we introduce the notion of the analytic conductor $\mathfrak q(s,f)$ for an automorphic $L$-function attached to a cusp form of level $q$ on $\GL(2)$. In this case, specializing Harcos~\cite{harcos}, who follows~\cite{iwaniecsarnak}, \be \mathfrak q(s,f) \ := \ \frac{q}{(2\pi)^2} \left|s+2\mu_1\right|\left|s+2\mu_2\right|.\ee \section{Proof of Theorem \ref{thm:smallgaps}}\label{smallgaps} Let $L(s)$ be a primitive $L$-function in the Selberg class $\pazocal S$. Recall from \eqref{eq:numLzeros} and \eqref{eq:muLdef} the definitions \[ N(T,L)\ := \ \sum_{0<\alphammaL \leqslant T} 1 \] and \[ \mu_L\ := \ \liminf_{n\to \infty}\,\frac{\alphammaL(n\!+\!1)-\alphammaL(n)}{\pi/\log \alphammaL(n)}, \] where $\alphammaL(n)$ denotes the ordinate of the $n$th nontrivial zero of $L(s)$. In this section, assuming the Generalized Riemann Hypothesis for $L(s)$, we prove an upper bound for $\mu_L$ using ideas first introduced by Montgomery~\cite{montgomerypc} to study the pair correlation of zeros of $\zeta(s)$. Assuming the Riemann Hypothesis and writing the nontrivial zeros of $\zeta(s)$ as $1/2+i\alphamma$, if $0\notin [\alpha,\beta]$ and $T\rightarrow\infty$, the pair correlation conjecture is the statement that \be\#\left\{0<\alphamma,\alphamma'<T: \alpha\leqslant\frac{(\alphamma-\alphamma')\log T}{2\pi}\leqslant\beta\right\} \sim\left(\frac T {2\pi}\log T\right)\int_\alpha^\beta \left(1-\left(\frac{\sin \pi u}{\pi u}\right)^2\right)\d u, \ee which is consistent with the pair correlation function of eigenvalues of random Hermitian matrices. Montgomery originally formulated his pair correlation conjecture for $\zeta(s)$, but strong evidence has accumulated since he made his conjecture to suggest that the nontrivial zeros of any general primitive $L$-function share the same statistics as eigenvalues of matrices chosen randomly from a matrix ensemble appropriate to the $L$-function; this philosophy is articulated by Katz and Sarnak in~\cite{katzsarnakbook} and~\cite{katzsarnakarticle}. In view of this, Montgomery's pair correlation conjecture has been generalized to all $L$-functions in the Selberg class $\pazocal S$. Montgomery proved that $\zeta(s)$ satisfies his pair correlation hypothesis for restricted support, and Murty and Perelli~\cite{murty-perelli} proved a general version for all primitive $L$-functions in the Selberg class for restricted support inversely proportional to the degree of the function. We use the pair correlation of zeros of primitive $L\in\pazocal S$ to establish an upper bound on small gaps between zeros of primitive $L$-functions in the Selberg class.
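Before carrying this out, we indicate how the numerical values recorded in Theorem~\ref{thm:smallgaps} are ultimately extracted: once Theorem~\ref{thm:smallgaps1} below is established, one locates the first $\lambda>0$ at which the right-hand side of \eqref{sum} becomes positive, and its sign is governed by the bracketed factor there. The following minimal numerical sketch of that final step is an illustration only; it assumes the \texttt{numpy} and \texttt{scipy} libraries, drops the $o(1)$ terms, and uses the quantity $\kappa_{m_L}$ defined in \eqref{eq:kappadef}.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def h_hat(x):
    # Fourier transform of the Selberg-type minorant used in the proof below
    return max(1.0 - abs(x) + np.sin(2 * np.pi * abs(x)) / (2 * np.pi), 0.0)

def kappa(m):
    # the quantity kappa_{m_L} defined in the course of the proof
    return 0.5 * (1 + 1 / m**2
                  + np.sqrt(3 - 8 * m + 6 * m**2 + 3 * m**4) / (m**2 * np.sqrt(3)))

def rhs(lam, m):
    # bracketed factor on the right-hand side of (sum), with the o(1) terms dropped
    I1, _ = quad(lambda a: h_hat(lam * a) * a, 0.0, 1.0 / m)
    I2 = 0.0
    if 1.0 / lam > kappa(m):
        I2, _ = quad(lambda a: np.sin(2 * np.pi * lam * a)
                     * (a**2 / 2 - a / 2 * (1 + 1 / m**2) + 1 / (3 * m**3)),
                     kappa(m), 1.0 / lam)
    return lam - 1 + 2 * lam * I1 - 4 * np.pi * lam**3 * I2

for m in range(1, 6):
    lam = next(x for x in np.arange(0.4, 1.0, 1e-4) if rhs(x, m) > 0)
    print(m, round(lam, 4))   # compare with the tabulated upper bounds for mu_L
\end{verbatim}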
Following~\cite{montgomerypc} and \cite{murty-perelli}, we let $\alphammaL, \alphammaL'$ denote ordinates of nontrivial zeros of $L(s)$ and put \be\label{eq:falphadef} F_L(\alpha)\ = \ F_L(\alpha,T)\ := \ \left(\frac{m_LT}{2\pi} \log T\right)^{-1} \sum_{0<\alphammaL,\alphammaL'\leqslant T} T^{im_L\alpha(\alphammaL-\alphammaL')}w(\alphammaL-\alphammaL'),\ee where $w(u) = 4 /(4+u^2)$ and $m_L$ is the degree of $L$ as defined in \eqref{degree}. The pair correlation conjecture then states that as $T\rightarrow\infty$, we have \be F(\alpha)\ = \ \begin{cases} |\alpha|+m_LT^{-2|\alpha|m_L}\log T(1+o(1))+o(1),&\text{if }|\alpha|\leqslant 1, \\ 1+o(1),&\text{if }|\alpha|\geqslant 1, \end{cases}\ee uniformly for $\alpha$ in any bounded interval. Define the functions \be \upsilon_L(n)\ := \ b_L(n)\log n \ee and \be \label{eq:upsilondef} \upsilon_L(n,x)\ := \ \begin{cases} \displaystyle \upsilon_L(n)\left(\frac n x\right)^{1/2},&n\leqslant x, \\[.15in] \displaystyle \upsilon_L(n)\left(\frac x n\right)^{3/2},&n>x. \end{cases}\ee Murty and Perelli~\cite{murty-perelli} have proved the following result for $F_L(\alpha)$. \begin{prop}\label{prop:murtyperelli} With $L(s)\in\pazocal S$ as above and assuming the Generalized Riemann Hypothesis, let $\varepsilon>0$ and $x=T^{\alpha m}$. Then, uniformly for $0\leqslant\alpha\leqslant(1-\varepsilon)/m_L$ as $T\rightarrow\infty$, we have \be F_L(\alpha)\ = \ \frac1{m_Lx\log T}\sum_{n=1}^\infty\upsilon(n,x) \overline{\upsilon(n,x)}+m_LT^{-2\alpha m_L}\log T(1+o(1))+o(1). \ee \end{prop} Under the assumptions of Proposition~\ref{prop:murtyperelli} and Hypothesis~\ref{eq:selbergconj}, we may rewrite $F_L(\alpha)$ via partial summation (c.f.~\cite[\S 4]{murty-perelli} for details) as \be\label{eq:falphapc} F_L(\alpha)\ = \ \alpha+m_LT^{-2\alpha m_L}\log T(1+o(1))+o(1).\ee For the application to small gaps, we generalize an argument of Goldston, Gonek, \"Ozl\"uk, and Snyder in~\cite{GGOS}, with a new modification of Carneiro, Chandee, Littmann, and Milinovich in~\cite{CCLM}. We use the function $F_L(\alpha)$ to evaluate sums over differences of zeros. We first record a convolution formula involving $F_L(\alpha)$ that will prove to be useful. Let $r(u)\in L^1$, and define the Fourier transform by \be\hat r(\alpha)\ = \ \int_{-\infty}^\infty r(u)e(\alpha u)\d u.\ee If $\hat r(u)\in L^1$, we have almost everywhere that \be r(u)\ = \ \int_{\infty}^\infty \hat r(\alpha)e(-u\alpha)\d\alpha.\ee Multiplying~\eqref{eq:falphadef} by $\hat r(\alpha)$ and integrating, we obtain \be\label{eq:falphaconvolution} \sum_{0<\alphammaL,\alphammaL'\leqslant T} r\left(\left(\alphammaL-\alphammaL'\right)\frac{m_L\log T}{2\pi}\right) w(\alphammaL-\alphammaL') \ = \ \left(\frac{m_LT}{2\pi} \log T\right) \int_{-\infty}^\infty \hat r(\alpha)F_L(\alpha)\d\alpha. \ee We make use of the following bound for $F_L(\alpha)$. \begin{lemma}\label{lem:falphaest} Assume the Generalized Riemann Hypothesis, and let $A>1$ be fixed. Then, as $T\rightarrow\infty$, we have \be\int_{1/m_L}^\xi(\xi-\alpha)F_L(\alpha)\d\alpha\ \geqslant\ \frac{\xi^2}2-\frac\xi2\left(1+\frac1{m_L^2}\right)+\frac1{3m_L^3}+o(1)\ee uniformly for $1\leqslant\xi\leqslant A$. 
\end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:falphaest}] Following~\cite{CCLM}, we start with the Fourier pair \be r_\xi(u)\ = \ \left(\frac{\sin\pi\xi u}{\pi\xi u}\right)^2\quad\text{and}\quad \hat r_\xi(\alpha)\ = \ \frac1{\xi^2}\max\left(\xi-|\alpha|,0\right).\ee Let $\mathfrak{m}_\rho$ denote the multiplicity of the zero with generic ordinate $1/2+\alphamma_L$. By trivially replacing the count of zeros up to height $T$ ( counted with multiplicity) with the diagonal of a weighted sum of $r_\xi$ evaluated at differences in zeros and using the convolution formula~\eqref{eq:falphaconvolution}, we obtain \be\label{eq:xitrick}\begin{aligned} 1+o(1)\ &\leqslant\ \left(\frac{m_LT}{2\pi}\log T\right)^{-1}\sum_{0<\alphammaL\leqslant T}\mathfrak{m}_\rho\\ &\leqslant\ \left(\frac{m_LT}{2\pi}\log T\right)^{-1}\sum_{0<\alphammaL,\alphammaL'\leqslant T} r_\xi\left(\left(\alphammaL'-\alphammaL\right)\frac{m_L\log T}{2\pi}\right) w\left(\alphammaL-\alphammaL'\right) \\ &\ = \ \int_{-\xi}^\xi \hat r(\alpha)F(\alpha)\d\alpha. \end{aligned}\ee Leveraging~\eqref{eq:falphapc}, the evenness of the integrand allows us to write \be\label{eq:xiestimate}\begin{aligned} \int_{-\xi}^\xi \hat r(\alpha)F_L(\alpha)\d\alpha &\ = \ \frac2{\xi^2}\int_0^{1/m_L}(\xi-\alpha)F_L(\alpha)\d\alpha +\frac2{\xi^2}\int_{1/m_L}^\xi (\xi-\alpha)F_L(\alpha)\d\alpha \\ &\ = \ \frac1{\xi}\left(1+\frac1{m_L^2}\right)-\frac23\frac1{\xi^2m_L^3} +\frac2{\xi^2}\int_{1/m_L}^\xi (\xi-\alpha)F_L(\alpha)\d\alpha+o(1) \end{aligned}\ee uniformly for $1\leqslant\xi\leqslant A$. Inserting~\eqref{eq:xiestimate} into~\eqref{eq:xitrick} establishes the lemma. \end{proof} We now prove the following result. \begin{theorem}\label{thm:smallgaps1} Let $L(s)\in\pazocal S$ be primitive of degree $m_L$. Assume the Generalized Riemann Hypothesis and Hypothesis~\ref{eq:selbergconj}. Then, with $\kappa_{m_L}$ as in~\eqref{eq:kappadef} as $T\rightarrow\infty$, we have \be\label{sum}\sum_{0<\alphammaL-\alphammaL'\leqslant\frac{2\pi\lambda}{m_L\log T}}1 \geqslant\left(\frac12-\varepsilon\right)\frac{m_LT}{2\pi}\log T \begin{aligned}[t] \Bigg( &\lambda-1+2\lambda\int_{0}^{1/m_L} \left(1-|\lambda\alpha|+\frac{\sin 2\pi|\lambda\alpha|}{2\pi}\right)\alpha\d\alpha \\ &-4\pi\lambda^3\int_{\kappa_{m_L}}^{1/\lambda}\sin(2\pi\lambda\alpha) \left(\frac{\alpha^2}2-\frac\alpha2\left(1+\frac1{m_L^2}\right) +\frac1{3m_L^3}+o(1)\right)\d\alpha\Bigg).\end{aligned}\ee \end{theorem} The upper bounds on $\mu_L$ given in Theorem~\ref{thm:smallgaps} now follow upon straightforward numerical computation of the first positive value of $\lambda$ for which the right side of the inequality in \eqref{sum} in the theorem becomes positive. \begin{proof}[Proof of Theorem~\ref{thm:smallgaps1}] Consider the Fourier pair \begin{equation} h(u)\ = \ \left(\frac{\sin \pi u}{\pi u}\right)^2\left(\frac1{1-x^2}\right) \qquad \text{and} \qquad \hat h(\alpha)\ = \ \max\left(1-|\alpha|+\frac{\sin 2\pi|\alpha|}{2\pi},0\right). \end{equation} Here $h(u)$ is the Selberg minorant of the charactaristic function of the interval $[-1,1]$ in the class of functions with Fourier transforms with support in $[-1,1]$. Take $r(u)=h(u/\lambda)$. Then $r(u)$ is a minorant of the characteristic function on $[-\lambda,\lambda]$, and $\hat r(\alpha)=\lambda\hat h(\lambda \alpha)$. 
By~\eqref{eq:falphaconvolution}, this allows us to write \be\begin{aligned} \sum_{0<\alphammaL\leqslant T}\mathfrak{m}_\rho +2\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\sum_{0<\alphammaL-\alphammaL'\leqslant\frac{2\pi\lambda}{m_L\log T}}1\ &\geqslant\ \sum_{0<\alphammaL,\alphammaL'\leqslant T} r\left(\left(\alphammaL-\alphammaL'\right)\frac{m_L\log T}{2\pi}\right) w(\alphammaL-\alphammaL') \\ &\ = \ \left(\frac{m_LT}{2\pi} \log T\right) \int_{-1/\lambda}^{1/\lambda} \hat r(\alpha)F_L(\alpha)\d\alpha. \end{aligned}\ee We may take \be\sum_{0<\alphammaL\leqslant T}\mathfrak{m}_\rho\ \sim\ N(T,L)\ \sim\ \frac{m_LT}{2\pi}\log T,\ee for otherwise the theorem is trivially true. Upon inserting~\eqref{eq:falphapc} in its domain of validity and estimating the integral arising from the $m_LT^{-2\alpha m_L}\log T$ portion trivially, we find \be\label{eq:intermediateestimate} \sum_{0<\alphammaL-\alphammaL'\leqslant\frac{2\pi\lambda}{m_L\log T}}1 \ \geqslant \ \left(\frac12-\varepsilon\right)\frac{m_LT}{2\pi}\log T \left(\lambda-1+2\lambda\int_{0}^{1/m_L} \hat h(\lambda\alpha)\alpha\d\alpha +2\lambda\int_{1/m_L}^{1/\lambda} \hat h(\lambda\alpha)F_L(\alpha)\d\alpha \right).\ee Next, still following~\cite{CCLM} and~\cite{goldstontrick}, we define the function \be I(\xi)\ = \ \int_{1/m_L}^\xi(\xi-\alpha)F_L(\alpha)\d\alpha.\ee Note that $I(\xi)$ enjoys the properties \be I'(\xi)\ = \ \int_{1/m_L}^\xi F_L(\alpha)\d\alpha\quad\text{and}\quad I''(\xi)\ = \ F_L(\xi).\ee Integrating by parts twice and observing that $\hat h(1)=\hat h'(1)=0$ allow us to write \be\label{eq:intbypartstrick} \int_{1/m_L}^{1/\lambda}\hat h(\lambda\alpha)F_L(\alpha)\d\alpha \ = \ \int_{1/m_L}^{1/\lambda}\hat h(\lambda\alpha)I''(\alpha)\d\alpha \ = \ \lambda^2\int_{1/m_L}^{1/\lambda}\hat h''(\lambda\alpha)I(\alpha)\d\alpha.\ee Provided $\alpha\geqslant0$, $\hat h''(\lambda\alpha)=-2\pi\sin(2\pi\lambda\alpha)$, which is non-negative for $1\leqslant\alpha\leqslant1/\lambda$ if $1/2\leqslant\lambda\leqslant1$. Moreover, as $F_L(\alpha)$ is positive, Lemma~\ref{lem:falphaest} provides a nontrivial bound for \be\label{eq:kappadef}\xi\ \geqslant\ \frac12\left(1+\frac1{m_L^2}+\frac{\sqrt{3-8m_L+6m_L^2+3m_L^4}}{m_L^2\sqrt 3}\right) \ =: \ \kappa_{m_L}.\ee Since $m_L$ is a positive integer, we always have $\kappa_{m_L}>1$. Hence, inserting the estimate from Lemma~\ref{lem:falphaest} into~\eqref{eq:intbypartstrick}, we find that \be\label{eq:tailintest} \int_{1/m_L}^{1/\lambda}\hat h(\lambda\alpha)F_L(\alpha)\d\alpha\ \geqslant\ -2\pi\lambda^2\int_{\kappa_{m_L}}^{1/\lambda}\sin(2\pi\lambda\alpha) \left(\frac{\alpha^2}2-\frac\alpha2\left(1+\frac1{m_L^2}\right) +\frac1{3m_L^3}+o(1)\right)\d\alpha\ee for $1/2\leqslant\lambda\leqslant1$. Inserting~\eqref{eq:tailintest} into~\eqref{eq:intermediateestimate} establishes the theorem. \end{proof} \section{Lemmata} In this section we collect lemmata that will be used in the proof of Theorem~\ref{thm:shiftedmoment}. \begin{lemma}[Convexity bound]\label{lem:convexitybound} For any $0 <\sigma<1$ and $\varepsilon >0$, there is a uniform bound \begin{equation}\label{harcoslemma} L(\sigma\!+\!it,f)\ \ll_{\sigma,\varepsilon}\ \mathfrak{q}(\tfrac{1}{2}\!+\!it,f)^{(1-\sigma)/2+\varepsilon}, \end{equation} where $\mathfrak{q}(s,f)$ denotes the analytic conductor of $L(s,f)$, and the implied constant depends only on $\sigma$ and $\varepsilon$. \end{lemma} \begin{proof} See~\cite[Section 1.2]{harcos}.
The uniform bound is deduced by considering upper bounds on $L(\sigma\!+\!it,f)$ in the half-plane $\sigma >1$ and $\sigma<0$ and then interpolating between the two via the Phragm\'{e}n-Lindel{\"o}f convexity principle. \end{proof} We note that while the implied constant in \eqref{harcoslemma} does not depend on the (fixed) form $f$, this independence is not a necessity for the present work. \begin{lemma}\label{lem:weightsum} Let $\alpha,\beta\in\ensuremath{\mathbf{C}}$ with $|\alpha|,|\beta| \ll 1/\log{T}.$ Then, as $T\to \infty$, we have \begin{equation} \sum_{n\geqslant1}\frac{\left|a_f(n)\right|^2}{n^{1+\alpha+\beta}}e^{-2n/T} \ = \ L(1\!+\!\alpha\!+\!\beta,f\!\times\! \overline{f}) +c_{f}\Gamma(-\alpha\!-\!\beta)\left(\frac{T}{2}\right)^{-\alpha-\beta} +O\left(T^{-1/2}\right), \end{equation} and, for $|a| \ll 1/\log T$, we have \begin{equation}\label{expsumbound} \sum_{n\geqslant1}{\frac{|a_f(n)|^2}{n^a}e^{-2n/T}} \ll T \end{equation} as $T\to \infty$. \end{lemma} \begin{proof} The bound in \eqref{expsumbound} follows by partial summation. For the first sum, we begin by writing the main term of the asymptotic expansion using the definition of the convolution sum in property \ref{hyp:convolution}. We have \begin{align} \sum_{n=1}^{\infty} \frac{\left|a_{f}(n)\right|^2}{n^{1+\alpha+\beta}}e^{-2n/T} &\ = \ \frac{1}{2 \pi i} \sum_{n= 1}^{\infty}\frac{ \left|a_{f}(n)\right|^2}{n^{1+\alpha+\beta}} \int_{(2)} \Gamma(w) \left(\frac{2n}{T}\right)^{-w}\d w \\ &\ = \ \frac{1}{2 \pi i} \int_{(2)} \Gamma(w) \left(\frac{T}{2}\right)^{w} \sum_{n= 1}^{\infty} \frac{\left|a_{f}(n)\right|^2}{n^{1+w+\alpha+\beta}}\d w \\ &\ = \ \frac{1}{2 \pi i} \int_{(2)} \Gamma(w) \left(\frac{T}{2}\right)^{w} L(1\!+\!w\!+\!\alpha\!+\!\beta, f \!\times\! \overline{f})\d w. \end{align} As described in property~\ref{hyp:convolution}, we let $c_{f}$ denote the residue of $L(s, f \!\times\! \overline{f})$ at $s\!=\!1$. Note that \begin{equation} \res_{w=-\alpha-\beta}\left[\Gamma(w) \left(\frac{T}{2}\right)^{w}L(1\!+\!w\!+\!\alpha\!+\!\beta, f\!\times\!\overline{f})\right] \ = \ c_{f}\Gamma(-\alpha\!-\!\beta)\left(\frac{T}{2}\right)^{-\alpha-\beta}, \end{equation} and \begin{equation} \res_{w=0}\left[\Gamma(w)\left(\frac{T}{2}\right)^{w} L(1\!+\!w\!+\!\alpha\!+\!\beta,f\!\times\!\overline{f})\right] \ = \ L(1\!+\!\alpha\!+\!\beta,f\!\times\!\overline{f}). \end{equation} Hence, moving the contour of integration to $\ensuremath{\mathbf{R}}e(w)=-1/2$ and noting that the contribution from the horizontal sides of the contour is zero by the exponential decay of the gamma factor and the finite order of $L(s, f \times \overline{f})$ in vertical strips not containing $s=1$, we have that \begin{align} \sum_{n=1}^{\infty} \frac{\left|a_{f}(n)\right|^2}{n^{1+\alpha+\beta}}e^{-2n/T} &\ = \ \begin{aligned}[t] &c_{f}\Gamma(-\alpha\!-\!\beta)\left(\frac{T}{2}\right)^{-\alpha-\beta} +L(1\!+\!\alpha\!+\!\beta, f \!\times\! \overline{f}) \\ &+ O\left(\int_{\left(-\frac{1}{2}\right)}\Gamma(w)\left(\frac{T}{2}\right)^{w} L(1\!+\!w\!+\!\alpha\!+\!\beta, f \!\times\! \overline{f})\d w \right). \end{aligned} \end{align} We now estimate the error term. Letting $w=u+iv$ as above, we have \begin{align} \int_{\left(-\frac{1}{2}\right)}\Gamma(w)\left(\frac{T}{2}\right)^{w} L(w+1+\alpha+\beta, f \times \overline{f})\d w &\ll O\left(T^{-1/2}\right). \end{align} This completes the proof. \end{proof} \begin{lemma}\label{lem:truncatedsum} Let $\alpha,\beta\in\ensuremath{\mathbf{C}}$ with $|\alpha|,|\beta|\ll 1/{\log T}$. 
Then, as $T\to \infty$, we have \begin{align*} \sum_{n \leqslant T} \left|a_{f}(n) \right|^{2}n^{-1+\alpha+\beta} \ = \ L(1-\alpha-\beta, f \times \overline{f}) +T^{\alpha+\beta}\frac{c_{f}}{\alpha+\beta} +O\left(T^{-1/4+o(1)}\log T \right). \end{align*} We also have, for $|a|\ll 1/{\log T}$, that \begin{equation*} \sum_{n \leqslant T} |a_{f}(n)|^{2}n^{\sigma+a}\ \ll\ \begin{cases} T, & \sigma = 0, \\ T^{3/2}, & \sigma = 1/2, \end{cases} \end{equation*} as $T\to \infty$. \end{lemma} \begin{proof} The second bound is immediate from routine summation by parts. We have \be\begin{aligned}[b] \sum_{n \leqslant T} |a_{f}(n)|^{2}n^{\sigma+a} &\ = \ T^{\sigma+a}\sum_{n \leqslant T}|a_{f}(n)|^{2} -(\sigma+a)\int^T_1u^{\sigma+a-1}\left(\sum_{n \leqslant u} |a_{f}(n)|^{2}\right)\d u \\ &\ll T^{1+\sigma+a}-(\sigma+a)\int^{T}_{1}u^{\sigma+a}\d u \ll T^{1+\sigma+a}, \end{aligned}\ee where we have made use of~\ref{hyp:coeffsquaredsum}. As for the first sum, in view of~\ref{hyp:convolution}, an application of Perron's formula (see Lemma~\ref{lem:perron}) yields \begin{align}\label{eq:perronapp} \sum_{n \leqslant T} \left|a_{f}(n) \right|^{2}n^{-1+\alpha+\beta} &\ = \ \begin{aligned}[t] &\frac{1}{2 \pi i} \int^{\kappa+i\gamma}_{\kappa-i\gamma} L(1-\alpha-\beta+w, f \times \overline{f})T^{w}\frac{\d w}{w} \\ &+O\left(T^{\ensuremath{\mathbf{R}}e(\alpha+\beta)}\frac{\log(T)}{\gamma} +\frac{\tau(2T)^{2}}{T^{1-\ensuremath{\mathbf{R}}e(\alpha+\beta)}} \left(1+T\frac{\log \gamma}{\gamma}\right)\right), \end{aligned} \end{align} where $\kappa=\ensuremath{\mathbf{R}}e(\alpha+\beta)+1/\log(T)$. Moving the line of integration to $-1/2$, we pick up residues from the simple poles at $w=0$ and $w=\alpha+\beta$. We have \be \res_{w=0}L(1-\alpha-\beta+w, f \times \overline{f}) \frac{T^{w}}{w} \ = \ L(1-\alpha-\beta, f \times \overline{f}), \ee and \be \res_{w=\alpha+\beta}L(1-\alpha-\beta+w, f \times \overline{f})\frac{T^w}{w} \ = \ T^{\alpha+\beta}\frac{c_{f}}{\alpha+\beta}. \ee We now estimate the error from closing the contour of integration. This is done by applying Lemma~\ref{lem:convexitybound} for $L(s, f\times\overline{f})$ in the critical strip $0<\sigma<1$. From this we have, fixing $\varepsilon>0$, \be L(\sigma +it, f \times \overline{f}) \ll_{\varepsilon, \sigma} t^{2(1-\sigma)+\varepsilon} \ee for $0<\sigma<1$. Finally, we know that $L(s+it, f\times\overline{f}) \ll 1$ for $\ensuremath{\mathbf{R}}e(s)>1$ since the $L$-function is given by an absolutely convergent Dirichlet series. We proceed to estimate the error from the contour itself.
We have \begin{align}\label{eq:s2vertcont} &\begin{aligned}[b] \frac{1}{2 \pi i} \int^{-1/2+i\gamma}_{-1/2-i\gamma} L(1-\alpha-\beta+w, f \times \overline{f})T^{w}\frac{\d w}{w} &\ll \int^{\gamma}_{-\gamma} v^{1+2\ensuremath{\mathbf{R}}e(\alpha+\beta)+2\varepsilon}T^{-1/2}\frac{\d v}{-\frac{1}{2}+|v|} \\ &\ll\gamma^{1+2\ensuremath{\mathbf{R}}e(\alpha+\beta)+2\varepsilon}T^{-1/2}\log\gamma, \end{aligned} \\\intertext{and} &\frac{1}{2 \pi i} \int^{\kappa\pm i\gamma}_{-1/2 \pm i\gamma} L(1-\alpha-\beta+w, f \times \overline{f})T^{w}\frac{\d w}{w} \notag\\ &\hspace{.25in}\ll\ \int^{\pm i\gamma}_{-1/2 \pm i\gamma} L(1-\alpha-\beta+w, f \times \overline{f})T^{w}\frac{\d w}{w} +\int^{\kappa\pm i\gamma}_{\pm i\gamma}L(1-\alpha-\beta+w,f\times\overline{f}) T^{w}\frac{\d w}{w} \notag\\ &\hspace{.25in}\ll\ \gamma^{-1+2\ensuremath{\mathbf{R}}e(\alpha+\beta)+2\varepsilon} +\kappa T^{\kappa}\gamma^{-1+2\ensuremath{\mathbf{R}}e(\alpha+\beta)+2\varepsilon} \notag\\ &\hspace{.25in}\ll\ \gamma^{-1+2\ensuremath{\mathbf{R}}e(\alpha+\beta)+2\varepsilon}+\left(\ensuremath{\mathbf{R}}e(\alpha+\beta) +\frac{1}{\log T} \right)\gamma^{-1+2\ensuremath{\mathbf{R}}e(\alpha+\beta)+2\varepsilon} T^{\ensuremath{\mathbf{R}}e(\alpha+\beta)+1/\log T}. \label{eq:s2horcont} \end{align} Put $\gamma=T^{1/4}$. Then we find that the right-hand side of \eqref{eq:s2vertcont} is \begin{equation} \ll T^{-1/4+\ensuremath{\mathbf{R}}e(\alpha+\beta)/2+\varepsilon/2}\log T \ll T^{-1/4+\varepsilon/2}\log T \end{equation} and that the right-hand side of \eqref{eq:s2horcont} is \begin{equation} \ll T^{-1/4+\ensuremath{\mathbf{R}}e(\alpha+\beta)/2+\varepsilon/2} +\left(\ensuremath{\mathbf{R}}e(\alpha+\beta)+\frac{1}{\log T} \right) T^{-1/4+3\ensuremath{\mathbf{R}}e(\alpha+\beta)/2+1/\log T+\varepsilon/2} \ll T^{-1/4 +\varepsilon/2}. \end{equation} Finally, we estimate the error term in~\eqref{eq:perronapp}. We have \be\begin{aligned}[b] T^{\ensuremath{\mathbf{R}}e(\alpha+\beta)}\frac{\log(T)}{\gamma} +\frac{\tau(2T)^{2}}{T^{1-\ensuremath{\mathbf{R}}e(\alpha+\beta)}} \left(1+T\frac{\log \gamma}{\gamma}\right) &\ll T^{-1/4 +\ensuremath{\mathbf{R}}e(\alpha+\beta)}\log T+\frac{(2T)^{o(1)}}{T} \left(1+T^{1-1/4}\log T \right) \\ &\ll\log T\left(T^{-1/4 +\ensuremath{\mathbf{R}}e(\alpha+\beta)}+T^{-1/4 +o(1)}\right) \\ &\ll T^{-1/4+o(1)} \log T. \end{aligned}\ee This completes the proof. \end{proof} \begin{lemma}\label{lem:errorsum} Let $a\in\ensuremath{\mathbf{C}}$ with $|a|\ll1/\log T$ and $X\sim T$. Then \be \sum_{n\leqslant X}\left|a_f(n)\right|^2n^{-3/4+a}e^{-n/X} \ll X^{1/4}. \ee \end{lemma} \begin{proof} We use property~\ref{hyp:coeffsquaredsum} to write \be\begin{aligned}[b] &\sum_{n\leqslant X}\left|a_f(n)\right|^2n^{-3/4+a}e^{-n/X} \\ &\ll\left(\sum_{n\leqslant X}\left|a_f(n)\right|^2\right)X^{-3/4}e^{-1} +\int_1^X\left(\sum_{n\leqslant u}\left|a_f(n)\right|^2\right) u^{-7/4}e^{-u/X}\d u +\frac1 X\int_1^X\left(\sum_{n\leqslant u}\left|a_f(n)\right|^2\right)u^{-3/4} e^{-u/X}\d u \\ &\ll X^{1/4}+\int_1^Xu^{-3/4}e^{-u/X}\d u +\frac1 X\int_1^Xu^{1/4}e^{-u/X}\d u. \end{aligned}\ee Changing variables twice yields \be\int_1^Xu^{-3/4}e^{-u/X}\d u \ = \ X^{1/4}\int_{1/X}^1 u^{-3/4}e^{-u}\d u = X^{1/4}\int_1^X u^{-5/4}e^{-1/u}\d u\ll X^{1/4},\ee and, similarly, $\int_1^X u^{1/4}e^{-u/X}\d u\ll X^{5/4}$.
\end{proof} \begin{lemma}[Stirling estimate]\label{lem:stirling} Let $L(s,f)$ be of degree $m$, $L_1,L_2\in\{L,\overline L\}$, choose $\alpha,\beta\in\ensuremath{\mathbf{C}}$, and take notation as in~(\ref{eq:funeqasymmetric}). As $t\to\infty$, \be \Phi_{L_1}(\tfrac{1}{2}\!+\!\alpha\!+\!it) \Phi_{L_2}(\tfrac{1}{2}\!+\!\beta\!-\!it) \ = \ Q^{-2(\alpha+\beta)}\left(\frac{t}2\right)^{-m(\alpha+\beta)} \left(1+O\left(\frac1{t}\right)\right) \ee and, for some fixed complex number $a$, there exists a complex number $A$ independent of $t$ (though dependent on $a$) such that, as $t\rightarrow\infty$, \be \Phi_L(a-it) \ = \ AQ^{1-2(a-it)}e^{-i m \,t}\left(\frac t2\right)^{m(1/2-a+it)} \left(1+O\left(t^{-1}\right)\right) \ll t^{m(1/2-a)}. \ee \end{lemma} \begin{proof}We have \be\begin{aligned}[b] \Phi_L(s)\ = \ \frac{\overline{L_\infty}(1-s,f)}{L_\infty(s,f)} &\ = \ Q^{1-2s}\prod_{j=1}^d\frac{\Gamma\left(\tfrac12(1-s)+\overline{\mu_j}\right)} {\Gamma\left(\tfrac12s+\mu_j\right)} \\ &\ = \ Q^{1-2s}\pi^{-d}\prod_{j=1}^d \Gamma\left(\tfrac12(1-s)+\overline{\mu_j}\right) \Gamma\left(1-\frac s 2-\mu_j\right) \sin\left(\pi\left(\frac s 2 +\mu_j\right)\right). \end{aligned}\ee Let $a,b\in \ensuremath{\mathbf{C}}$ and $L_1,L_2=L$ (the other case is established in the same way). Then \be\begin{aligned}[b] &\Phi_L(a+it)\Phi_L(b-it)\ = \ \frac{\Phi_L(b-it)}{\Phi_{\overline L}(1-a-it)} \\ &\ = \ Q^{2(1-a-b)} \prod_{j=1}^m \frac{ \Gamma\left(\frac12-\frac b2+\frac{it}2+\overline{\mu_j}\right) \Gamma\left(1-\frac b2+\frac{it}2-\mu_j\right) }{ \Gamma\left(\frac a 2+\frac{it}2 + \mu_j\right) \Gamma\left(\frac12+\frac a 2+\frac{it}2 - \overline{\mu_j}\right)} \frac{ e^{i\pi\left(\frac {b-it} 2 +\mu_j\right)}- e^{-i\pi\left(\frac {b-it} 2 +\mu_j\right)} }{ e^{i\pi\left(\frac {1-a-it} 2 + \overline{\mu_j}\right)}- e^{-i\pi\left(\frac {1-a-it} 2 + \overline{\mu_j}\right)} } \\ &\ = \ Q^{2(1-a-b)} \prod_{j=1}^m \left(\frac t2\right)^{1-a-b+2(-\mu_j+\overline{\mu_j})} \left(1+O\left(t^{-1}\right)\right). \end{aligned}\ee Recalling that $\sum_j\Im(\mu_j)=0$ (by assumption) completes the first part of the proof. The bound $\Phi_L(a-it) \ll t^{m(1/2-a)}$ follows in exactly the same way. We have \be\begin{aligned}[b] \Phi_L(a-it) &\ = \ Q^{1-2(a-it)}\pi^{-m}\prod_{j=1}^m \Gamma\left(\frac12(1-a+it)+\overline{\mu_j}\right) \Gamma\left(1-\frac {a-it} 2-\mu_j\right) \sin\left(\pi\left(\frac {a-it} 2 +\mu_j\right)\right) \\ &\ = \ Q^{1-2(a-it)}\pi^{-m}\prod_{j=1}^m \left(\frac{it}2\right)^{1/2-a+it-2\Im(\mu_j)} \exp\left(-\frac32+a-it+2\Im(\mu_j)\right) A_1e^{\pi t/2}\left(1+O\left(t^{-1}\right)\right) \\ &\ = \ AQ^{1-2(a-it)}e^{-im\,t}\left(\frac t2\right)^{m(1/2-a+it)} \left(1+O\left(t^{-1}\right)\right), \end{aligned}\ee as desired. \end{proof} \begin{lemma}\label{lem:GeneralMV} Let $\{a_n\}$ and $\{b_n\}$ be two sequences of complex numbers. For any real numbers $T$ and $H$, we have \be \int_T^{T+H}\left|\sum_{n\geqslant1} a_n n^{it}\right|^2\d t \ = \ H\sum_{n\geqslant1}\left|a_n\right|^2 +O\left(\sum_{n\geqslant1}n\left|a_n\right|^2\right) \ee and \be \int_T^{T+H}\left(\sum_{n\geqslant1} a_n n^{-it}\right) \overline{\left(\sum_{n\geqslant1}b_nn^{-it}\right)}\d t \ = \ H\sum_{n\geqslant1}a_n\overline b_n +O\left(\left(\sum_{n\geqslant1} n\left|a_n\right|^2\right)^{\frac12} \left(\sum_{n\geqslant1}n\left|b_n\right|^2\right)^{\frac12}\right). \ee \end{lemma} \begin{proof} This is a generalized form of Montgomery and Vaughan's large sieve proved in \cite[Lemma 1]{tsang}. 
\end{proof} \begin{lemma}\label{lem:parts} Let $\{a_n\}$ and $\{b_n\}$ be sequences of complex numbers. Let $T_1$ and $T_2$ be positive real numbers and $g(t)$ be a real-valued function that is continuously differentiable on the interval $\left[T_1,T_2\right]$. Then \begin{multline} \int_{T_1}^{T_2}g(t)\left(\sum_{n\geqslant1} a_n n^{-it}\right) \left(\sum_{n\geqslant1} b_n n^{it}\right)\d t \ = \ \\ \ = \ \left(\int_{T_1}^{T_2} g(t)\d t\right) \sum_{n\geqslant1} a_n b_n +O\left(\left[\left|g\left(T_2\right)\right| +\int_{T_1}^{T_2}\left|g'(t)\right|\d t\right] \left[\sum_{n\geqslant1} n\left|a_n\right|^2\right]^{\frac12} \left[\sum_{n\geqslant1} n\left|b_n\right|^2\right]^{\frac12}\right). \end{multline} \end{lemma} \begin{proof} This is proved in \cite[Lemma 4.1]{mililemma}. \end{proof} \section{Proof of Theorem \ref{thm:shiftedmoment}}\label{sec:shiftproof} In this section we prove Theorem~\ref{thm:shiftedmoment}. We first construct an approximate functional equation for $L(s\!+\!\alpha, f)L(1\!-\!s\!+\!\beta, \overline{f})$ using a method of Ramachandra~\cite[Theorem 2]{ramachandra}. \begin{lemma}\label{lem:ram} Let $\alpha \in \ensuremath{\mathbf{C}}, s\!=\!1/2+it,$ and $T\geqslant2$. Then, for $X=T/(2\pi)$ and $T\leqslant t \leqslant 2T$, we have \begin{multline}\label{eq:ramaformula} L(s\!+\!\alpha,f) \ = \ \sum_{n=1}^{\infty}\frac{a_f(n)}{n^{s+\alpha}}e^{-n/X} + \epsilon_f\Phi_f(s\!+\!\alpha)\sum_{n\leqslant X}\frac{a_{\bar f}(n)}{n^{1-s-\alpha}} \\ - \frac1{2\pi i} \int_{-3/4-i\infty}^{-3/4+i\infty}\epsilon_f\Phi_f(s\!+\!\alpha\!+\!w)\left(\sum_{n>X} \frac{a_{\bar f}(n)}{n^{1-w-s-\alpha}}\right)\Gamma(w)X^w\d w \\ - \frac1{2\pi i} \int_{1/2-i\infty}^{1/2+i\infty}\epsilon_f\Phi_f(s\!+\!\alpha\!+\!w)\left(\sum_{n\leqslant X} \frac{a_{\bar f}(n)}{n^{1-w-s-\alpha}}\right)\Gamma(w)X^w\d w. \end{multline} \end{lemma} \begin{proof} We first use the Mellin identity \begin{equation}\label{eq:mellin} e^{-t}\ = \ -\frac{1}{2\pi}\int_{2-i\infty}^{2+i\infty}\Gamma(w)t^{-w}\d w, \end{equation} which holds for $t\geqslant0$, to write \begin{equation}\label{eq:first} \sum_{n=1}^\infty \frac{a_f(n)}{n^{s+\alpha}}e^{-n/X} \ = \ \frac{1}{2\pi i}\int_{2-i\infty}^{2+i\infty} L(s\!+\!\alpha\!+\!w,f)\Gamma(w)X^w\d w. \end{equation} Here the interchanging of the summation and integral is justified by the absolute convergence of the Dirichlet series. On the other hand, by shifting the line of integration from $\ensuremath{\mathbf{R}}e(w)=2$ to $\ensuremath{\mathbf{R}}e(w)=-3/4$, we find \begin{equation}\label{eq:second} \sum_{n=1}^\infty \frac{a_f(n)}{n^{s+\alpha}}e^{-n/X}\ = \ L\left(\!s\!+\!\alpha,f\right)+\frac{1}{2 \pi i}\int_{-3/4-i\infty}^{-3/4+i\infty}{L(s\!+\!\alpha\!+\!w,f)\Gamma(w)T^w\d w}, \end{equation} where $L\left(s\!+\!\alpha,f\right)$ is the residue of the simple pole of the integrand of the right-hand side of \eqref{eq:first} at $w\!=\!0$. By the functional equation for $L(s,f)$ and the absolute convergence of the Dirichlet series of $L(1\!-\!s,\bar f)$, we have \begin{align}\label{eq:third} L(s\!+\!\alpha\!+\!w,f) &\ = \ \epsilon_f\Phi_f(s\!+\!\alpha\!+\!w)L(1\!-\!s\!-\!\alpha\!-\!w,\bar{f})\notag\\ & \ = \ \epsilon_f\Phi_f(s\!+\!\alpha\!+\!w)\sum_{n\leqslant X} \frac{a_{\bar f}(n)}{n^{1-w-s-\alpha}} + \epsilon_f\Phi_f(s\!+\!\alpha\!+\!w)\sum_{n>X} \frac{a_{\bar f}(n)}{n^{1-w-s-\alpha}}. 
\end{align} Replacing $L(s\!+\!\alpha\!+\!w,f)$ in \eqref{eq:second} with the right-hand side of \eqref{eq:third}, we have \begin{multline}\label{eq:fourth} \sum_{n=1}^\infty \frac{a_f(n)}{n^{s+\alpha}}e^{-n/X}\ = \ L\left(\!s\!+\!\alpha,f\right)+\frac{1}{2 \pi i}\int_{-3/4-i\infty}^{-3/4+i\infty}{\epsilon_f\Phi_f(s\!+\!\alpha\!+\!w)\sum_{n\leqslant X} \frac{a_{\bar f}(n)}{n^{1-w-s-\alpha}}\Gamma(w)T^w\d w}\\ +\frac{1}{2 \pi i}\int_{-3/4-i\infty}^{-3/4+i\infty}{\epsilon_f\Phi_f(s\!+\!\alpha\!+\!w)\sum_{n>X} \frac{a_{\bar f}(n)}{n^{1-w-s-\alpha}}\Gamma(w)T^w\d w}. \end{multline} Finally, we shift the line of integration of the integral involving the Dirichlet series over $n\leqslant X$ from $\ensuremath{\mathbf{R}}e(w)=-3/4$ to $\ensuremath{\mathbf{R}}e(w)=1/2$. We once again pass over the pole at $w=0$ and recover the residue \begin{equation} \epsilon_f\Phi_f(s\!+\!\alpha)\sum_{n\leqslant X}\frac{a_{\bar f}(n)}{n^{1-s-\alpha}}. \end{equation} Upon rearranging terms, we deduce the claimed approximate functional equation for $L(s\!+\!\alpha,f)$. \end{proof} Applying Lemma~\ref{lem:ram} to $L(1\!-\!s\!+\!\beta, \overline{f})$, where $\beta\in \ensuremath{\mathbf{C}}$, we find \be\label{eqn:mixedfunctional} \begin{aligned}[b] L(s\!+\!\alpha,f)L(1\!-\!s\!+\!\beta, \overline{f})\ := \ & \sum_{n=1}^{\infty}\frac{a_f(n)}{n^{s+\alpha}}e^{-n/X} \sum_{n= 1}^{\infty}\frac{a_{\bar{f}}(n)}{n^{1-s+\beta}}e^{-n/X} \\ &+\Phi_f(s\!+\!\alpha)\Phi_{\bar{f}}(1\!-\!s\!+\!\beta)\sum_{n \leqslant X} \frac{a_{f}(n)}{n^{s-\beta}}\sum_{n\leqslant X}\frac{a_{\bar f}(n)}{n^{1-s-\alpha}} + \sum^{14}_{i=1} J_{i} \\ \ := \ & S_1 + S_2 + \sum^{14}_{i=1} J_{i}, \end{aligned}\ee say, where the $J_{i}$ for $1\leqslant i \leqslant 14$ denote the terms that arise as products of the integral components of our mixed functional equation. In the next section, we show that $S_1$ and $S_2$ contribute to the main term in Theorem~\ref{thm:shiftedmoment}. The remaining $J_i$ terms are absorbed into the error term. In Section ~\ref{section:error}, we give full details for the estimation of one of the $J_{i}$; the other estimations follow similarly or by an application of Cauchy-Schwartz. \subsection{Main Term Calculations} \ Recalling $s=1/2+it$, let \be S_{1}\ := \ \left(\sum_{n\geqslant1}a_f(n)n^{-s-\alpha}e^{-n/X}\right) \left(\sum_{n \geqslant 1}a_{\overline{f}}(n)n^{-1+s-\beta}e^{-n/X}\right). \ee Directly applying Lemma~\ref{lem:GeneralMV} yields \be \int^{2T}_{T} S_{1}\d t\ = \ T \sum_{n} \left|a_{f}(n)\right|^2 n^{-1-\alpha-\beta}e^{-2n/X}+ O\left( \left(\sum_{n} \left|a_{f}(n)\right|^2e^{-2n/X}n^{-2\ensuremath{\mathbf{R}}e(\alpha)}\right)^{\frac12} \left(\sum_{n} \left|a_f(n)\right|^2e^{-2n/X}n^{-2\ensuremath{\mathbf{R}}e(\beta)}\right)^{\frac12} \right). \ee With $X \sim T$, Lemma~\ref{lem:weightsum} allows us to conclude that \be \int^{2T}_{T} S_{1}\d t \ = \ T\left(c_{f} \Gamma(-\alpha-\beta)\left(\frac{X}{2}\right)^{-\alpha-\beta} +L(1+\alpha+\beta, f \times \overline{f})\right) +O(T). \ee Supposing $|\alpha|, |\beta| \ll 1/\log T$, \be \int_T^{2T}S_1\d t\ \ll\ T \log T. \ee We now turn to the second product contributing to the main term in our shifted moment result. Let \be S_2 \ := \ \left( \epsilon_f\Phi_f(s+\alpha) \sum_{n\leqslant X}a_{\overline f}(n)n^{s+\alpha-1}\right) \left(\epsilon_{\overline{f}}\Phi_{\overline{f}}(1-s+\beta) \sum_{n \leqslant X} a_{f}(n)n^{-s+\beta} \right). 
\ee Recalling that $\left|\epsilon_f\right|=1$, we use Lemma~\ref{lem:stirling} to break up the integral and write \be\begin{aligned}[b] \int^{2T}_{T} S_{2}\d t &\ = \ \int^{2T}_{T} \Phi_{f} \left(\frac{1}{2}+\alpha + it \right)\Phi_{\overline{f}} \left(\frac{1}{2}+\beta-it\right) \left(\sum_{n \leqslant X} a_{\overline f}(n)n^{s+\alpha-1}\right) \left(\sum_{n \leqslant X} a_{f}(n)n^{-s+\beta}\right)\d t \\ &\ = \ \begin{multlined}[t][0.8\textwidth] \int_T^{2T}\left(\frac{t}{2\pi}\right)^{-2(\alpha+\beta)} \left(\sum_{n \leqslant X} a_{\overline f}(n)n^{s+\alpha-1}\right) \left(\sum_{n \leqslant X} a_{f}(n)n^{-s+\beta}\right)\d t \\ + O\left(\int^{2T}_{T} \left(\frac{t}{2\pi}\right)^{-2(\alpha+\beta)-1} \left(\sum_{n \leqslant X} a_{\overline f}(n)n^{s+\alpha-1}\right) \left(\sum_{n \leqslant X} a_{f}(n)n^{-s+\beta}\right)\right)\d t. \end{multlined} \label{eq:s2funerror} \end{aligned}\ee We evaluate the main term using \cite[Lemma 4.1]{mililemma}. We have \be\begin{aligned}[b]\label{eq:s2mainterm} &\int_T^{2T}\left(\frac{t}{2\pi}\right)^{-2(\alpha+\beta)} \left(\sum_{n \leqslant X} a_{\overline f}(n)n^{s+\alpha-1}\right) \left(\sum_{n \leqslant X} a_{f}(n)n^{-s+\beta}\right)\d t \\ &\ = \ \left(\int^{2T}_{T} \left(\frac{t}{2\pi}\right)^{-2(\alpha+\beta)}\d t \right) \sum_{n\leqslant X} \left|a_{f}(n) \right|^{2}n^{-1+\alpha+\beta} + \\ &+O \left( \left(\left(\frac{T}{\pi}\right)^{-2(\alpha+\beta)} +\int^{2T}_{T}2|\alpha+\beta| \left(\frac{t}{2\pi}\right)^{-2\ensuremath{\mathbf{R}}e(\alpha+\beta)-1}\d t\right) \left(\sum_{n\leqslant X}\left|a_{f}(n)\right|^2n^{2\ensuremath{\mathbf{R}}e(\alpha)}\right)^{\frac12} \left(\sum_{n \leqslant X}\left|a_{f}(n)\right|^2n^{2\ensuremath{\mathbf{R}}e(\beta)}\right)^{\frac12} \right). \end{aligned}\ee Since $|\alpha|, |\beta| \ll 1/\log T$, we have \be\left(\frac{T}{\pi}\right)^{-2(\alpha+\beta)}\ \ll\ 1\ee and \be \int^{2T}_{T}2|\alpha+\beta|\left(\frac{t}{2\pi}\right)^{-2(\alpha+\beta)-1}\d t\ \ll\ 1. \ee Lemma~\ref{lem:truncatedsum} contains the estimates that allow us to write~\eqref{eq:s2mainterm} as \begin{multline} \int^{2T}_{T} \left(\frac{t}{2\pi}\right)^{-2(\alpha+\beta)} \left(\sum_{n \leqslant X} a_{\overline f}(n)n^{s+\alpha-1}\right) \left(\sum_{n \leqslant X} a_{f}(n)n^{-s+\beta}\right)\d t \\ \ = \ \left(\int^{2T}_{T} \left(\frac{t}{2\pi}\right)^{-2(\alpha+\beta)}\d t \right) \left(L(1-\alpha-\beta, f \times \overline{f}) + X^{\alpha+\beta}\frac{c_{f}}{\alpha+\beta} \right) +O(T) \ll T \log T. \end{multline} \subsection{Error Term Estimates}\label{section:error} In this section we estimate a representative error term. The remaining $J_{i}$ follow from a direct application of Cauchy-Schwartz or from a similar argument. Observe that none of the $J_i$ error terms contribute to the main term, and none dominate the term \be \mathcal{J} \ := \ \left(\sum_{n\geqslant1}a_{\overline f}(n)n^{-s-\alpha}e^{-n/X}\right) \left(\frac{1}{2\pi i} \int_{\left(\frac14\right)} \epsilon_{\overline{f}} \Phi_{\overline{f}}(1-s+\beta+w) \left(\sum_{n \leqslant X} a_{f}(n)n^{w-s+\beta} \right)\Gamma(w)X^{w}\d w \right). \ee This term does not occur in~\eqref{eq:ramaformula}, but it is clear that any $J_{i}$ is of the same order or dominated by $\mathcal{J}$. The real difficulty in estimating $\int_T^{2T} \mathcal{J} \d t$ follows from \be\label{eq:compositeintest} \int_T^{2T} \int_{\left(\frac14\right)}\Phiidual(1-s+\beta+w)\Gamma(w)X^w\d w\d t \ll X^{1/4}T^{1/2}, \ee which is immediate from the estimate in Section~\ref{sec:control}. 
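To see why, note that on the line of integration $\ensuremath{\mathbf{R}}e(w)=\tfrac14$ we have $|X^w|=X^{1/4}$, the bound established in Section~\ref{sec:control} gives $\int_{\left(\frac14\right)}\left|\Phi_{\overline{f}}(1-s+\beta+w)\Gamma(w)\right|\d w\ll t^{-1/2}$, and integrating $t^{-1/2}$ over $T\leqslant t\leqslant 2T$ contributes a further factor $\ll T^{1/2}$.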
Refer to Section~\ref{sec:control} for full details. Returning to $\int_T^{2T} \mathcal{J} \d t$, we interchange the sum and the integral. Lemma~\ref{lem:parts} applies only to real-valued functions $g(t)$, so we apply an absolute value, apply Lemma~\ref{lem:parts}, and then use Lemma~\ref{lem:errorsum} and the estimate~\eqref{eq:compositeintest} to write \be\begin{aligned}[b] &\int_T^{2T} \mathcal{J}\d t \\ &\ = \ \int_T^{2T} \left(\sum_{n\geqslant1}\left|a_{\overline f}(n)\right|n^{-1/2-\alpha}e^{-n/X}\right) \left(\sum_{n \leqslant X} \left|a_{f}(n)\right|n^{-1/4+\beta} \right) \left(\frac{1}{2\pi i} \int_{\left(\frac14\right)} \epsilon_{\overline{f}} \Phi_{\overline{f}}(1-s+\beta+w) \Gamma(w)X^{w}\d w \right)\d t \\ &\ll X^{1/4} \int_T^{2T}\frac{1}{2\pi i}\int_{\left(\frac14\right)} \left|\epsilon_{\overline{f}}\Phi_{\overline{f}}(1-s+\beta+w)\Gamma(w)X^{w}\right|\d w\d t +O(\mathfrak{h}) = O\left(X^{1/2}T^{1/2}+\mathfrak{h}\right), \end{aligned}\ee where, if we put \be g(t) \ := \ \int_{\left(\frac14\right)} \left|\epsilon_{\overline{f}}\Phi_{\overline{f}}(1-s+\beta+w)\Gamma(w)X^{w}\right|\d w \ee then \be\mathfrak{h} \ = \ \left(|g(2T)|+\int_T^{2T}\left|g'(t)\right|\d t\right) \left(\sum_{n\geqslant1} \left|a_{\overline f}(n)\right|^2n^{-2\ensuremath{\mathbf{R}}e(\alpha)}e^{-2n/X}\right)^{\frac12} \left(\sum_{n\leqslant X}\left|a_{f}(n)\right|^2n^{\frac12+2\ensuremath{\mathbf{R}}e(\beta)}\right)^{\frac12}.\ee The estimates for the sums are contained in Lemmas~\ref{lem:weightsum} and~\ref{lem:truncatedsum}. Utilizing the estimate in Lemma~\ref{lem:stirling} to differentiate the computations in Section~\ref{sec:control} under the integral, we obtain \be \int_T^{2T}\left|g'(t)\right|\d t\ \ll\ X^{\frac14}T^{-\frac12}.\ee Thus, letting $X\sim T$, \be \mathfrak{h}\ \ll\ X^{3/2}T^{-1/2} \sim T \ee and we have proved that \be \int_T^{2T}\mathcal{J}\d t\ \ll \ T. \ee \appendix \section{Additional lemmata and calculations} In this appendix we state an effective version of Perron's Formula used in the proof of Lemma~\ref{lem:truncatedsum}. We also provide the details for bounding $\int_{\left(\frac14\right)}\Phi_{\overline{f}}(1-s+\beta+w)\Gamma(w)\d w$, which is used in Section~\ref{section:error}. \subsection{An Effective Perron Formula} \begin{lemma}[Effective Perron]\label{lem:perron} Let $F(s)\ := \ \sum_{n=1}^\infty a_nn^{-s}$ be a Dirichlet series with finite abscissa of absolute convergence $\sigma_a$. Suppose that there exists some real number $\alpha \geqslant 0$ such that \begin{align*} (i)&\quad \sum_{n=1}^\infty |a_n| n^{-\sigma} \ll (\sigma - \sigma_a)^{-\alpha}&(\sigma>\sigma_a), \\ \noalign{\text{and that $B$ is a non-decreasing function satisfying}}\\ (ii)&\quad |a_n|\leqslant B(n)&(n\geqslant1). \end{align*} Then for $X\geqslant2, \gamma\geqslant 2,\sigma\leqslant\sigma_a,\kappa\ := \ \sigma_a-\sigma+1/\log X$, we have \be\label{eq:perron} \sum_{n\leqslant X}\frac{a_n}{n^s}\ = \ \frac{1}{2\pi i}\int_{\kappa-i\gamma}^{\kappa+i\gamma} F(s+w)X^w\frac{\d{w}}{w} +O\left(X^{\sigma_a-\sigma}\frac{(\log X)^\alpha}{\gamma} +\frac{B(2X)}{X^\sigma}\left(1+X\frac{\log \gamma}{\gamma}\right)\right). \ee \end{lemma} \begin{proof} \cite[\S II.2.1, Corollary 2.1]{tenenbaum}. \end{proof} \subsection{Controlling $\int_{\left(\frac14\right)}\Phi_{\overline{f}}(1-s+\beta+w)\Gamma(w)\d w$}\label{sec:control} Recall that $|\beta|\ll 1/ \log T$ and $s=1/2+it$. In this section we provide the details for the bound \be \int_{\left(\frac14\right)}\Phi_{\overline{f}}(1-s+\beta+w)\Gamma(w)\d w\ \ll\ t^{-\frac12}.
\ee Since \be\Phi_f(s)\ = \ \frac{L_\infty(1-s,\overline f)}{L_\infty(s,f)},\ee we have the relation \be\Phi_f(s)\ = \ \frac{1}{\Phi_{\overline f}(1-s)}.\ee To ease our application of Stirling's formula by ensuring that we are always applying it as $t\rightarrow+\infty$, we begin by breaking up the integrals as \begin{align} &\int_{\left(\frac14\right)}\Phiidual(1-s+\beta+w)\Gamma(w)\d w \notag\\ &\ = \ \int_{-\infty}^t\Phiidual\left(\frac34+\beta+i(v-t)\right)\Gamma\left(\frac14+iv\right)\d v +\int_t^\infty\Phiidual\left(\frac34+\beta+i(v-t)\right) \Gamma\left(\frac14+iv\right)\d v. \end{align} We consider the first integral, which, changing variables, equals \begin{align} &\int_{-t}^\infty\Phiidual\left(\frac34+\beta-i(v+t)\right) \Gamma\left(\frac14-iv\right)\d v \notag\\ &\ = \ \int_{0}^\infty\Phiidual\left(\frac34+\beta-iv)\right) \Gamma\left(\frac14-i(v-t)\right)\d v \notag\\ &\ = \ \int_{0}^t\Phiidual\left(\frac34+\beta-iv)\right) \Gamma\left(\frac14-i(v-t)\right)\d v +\int_{t}^\infty\Phiidual\left(\frac34+\beta-iv)\right) \Gamma\left(\frac14-i(v-t)\right)\d v. \end{align} We first consider \begin{align} &\int_{0}^t\Phiidual\left(\frac34+\beta-iv)\right) \Gamma\left(\frac14-i(v-t)\right)\d v \notag\\ &\ll\ \int_{0}^tv^{-1/2-2\beta}\Gamma\left(1/4+i(t-v)\right)\d v \notag\\ &\ll\ \int_{0}^tv^{-1/2-2\beta} \exp\left(\left(-\frac14+i(t-v)\right)\log\left(\frac14+i(t-v)\right) -\left(\frac14+i(t-v)\right)\right)\d v\\ &:= \mathcal{A}_1, \notag \end{align} say. For the principal branch, we have \begin{align} &\exp\left(\left(-\frac14+i(t-v)\right)\log\left(\frac14+i(t-v)\right) -\left(\frac14+i(t-v)\right)\right) \notag\\ &\ll\ (it)^{-1/4+i(t-v)}\exp\left(\left(-\frac14+i(t-v)\right) \log\left(1-\frac i{4t}-\frac v t\right)-\frac14\right) \notag\\ &\ll\ (it)^{-1/4+i(t-v)}\exp\left[\left(-\frac14+i(t-v)\right) \left(-\frac1 t\left(\frac i4+v\right)\right)-\frac14\right] \notag\\ &\ll\ (it)^{-1/4+i(t-v)}\notag\\ & \ll t^{-1/4}e^{-\pi(t-v)/2}. \end{align} Since $|\beta|\ll1/\log T$, we find \begin{align} \mathcal{A}_1\ &\ll\ t^{-1/4}\int_{0}^tv^{-1/2-2\beta}e^{-\pi(t-v)/2}\d v \ \ll\ t^{-1/4}\int_{0}^tv^{-1/2}e^{-\pi(t-v)/2}\d v \notag\\ &=\ 2t^{-1/4}\sqrt{\frac2\pi}F\left(\sqrt{\frac{\pi t}2}\right)\notag\\ &\ll t^{-3/4}, \end{align} where $F$ is Dawson's integral, which satisfies $F\left(\sqrt{\pi t/2}\right)\ll t^{-1/2}$. Next, we have \begin{align} &\int_{t}^\infty\Phiidual\left(\frac34+\beta-iv)\right) \Gamma\left(\frac14-i(v-t)\right)\d v \notag\\ &\ll\ \int_{t}^\infty\frac{v^{-1/2-2\beta}\d v} {\Gamma\left(\frac34+i(v-t)\right)e^{\pi(v-t)}} \notag\\ &\ll\ \int_{t}^\infty v^{-1/2-2\beta}(v-t)^{-1/4}e^{-\pi(v-t)/2}\d v \notag\\ &= \ \int_0^\infty(v+t)^{-1/2-2\beta}v^{-1/4}e^{-\pi v/2}\d v \notag\\ &\ll\ t^{-1/2}\int_0^\infty v^{-1/4}e^{-\pi v/2}\d v\notag\\ & \ll t^{-1/2}. \end{align}Finally, we have \begin{align} &\int_t^\infty\Phiidual\left(\frac34+\beta+i(v-t)\right) \Gamma\left(\frac14+iv\right)\d v \notag\\ &\ = \ \int_1^\infty\frac{\Gamma\left(1/4+i(v+t-1)\right)\d v} {\Phii\left(\frac14-\beta-i(v-1)\right)} \notag\\ &\ll\ \int_1^\infty\frac{(i(v+t-1))^{-1/4+i(v+t-1)}\d v} {v^{1-2(1/4-\beta+i)}} \notag\\ &\ll\ \int_1^\infty(v+t-1)^{-1/4}v^{-1/2-2\beta} e^{-\pi(v+t-1)/2}\d v \notag\\ &\ll\ t^{-1/4}\int_1^\infty v^{-1/2} e^{-\pi(v+t)/2}\d v \notag\\ &\ = \ t^{-1/4}e^{-\pi(1+t)/2}\left[\sqrt 2 e^{\pi/2} \erfc\left(\sqrt{\frac\pi2}\right)\right]\notag \\ &\ \ll\ t^{-1/4}e^{-\pi t/2}, \end{align} where $\erfc(z)$ is the complementary error function. 
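For completeness, the Dawson-integral identity used in the estimate of $\mathcal{A}_1$ above follows from the substitution $v=\tfrac{2}{\pi}u^2$: with $F(x)=e^{-x^2}\int_0^x e^{u^2}\,\d u$, \be \int_0^t v^{-1/2}e^{-\pi(t-v)/2}\,\d v \ = \ 2\sqrt{\tfrac{2}{\pi}}\,e^{-\pi t/2}\int_0^{\sqrt{\pi t/2}} e^{u^2}\,\d u \ = \ 2\sqrt{\tfrac{2}{\pi}}\,F\!\left(\sqrt{\tfrac{\pi t}{2}}\right), \ee and the classical asymptotic $F(x)\sim 1/(2x)$ as $x\to\infty$ gives the bound $F\left(\sqrt{\pi t/2}\right)\ll t^{-1/2}$ used there.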
\noindent {\it Acknowledgements.} This work was supported in part by NSF Grants DMS1265673 and DMS1347804 and Williams College. We thank Hung Bui, Steve Gonek, Gergely Harcos, Winston Heap, Micah Milinovich, Jeremy Rouse, Frank Thorne, and Ben Weiss for a number of valuable conversations, and the referee for helpful comments on an earlier version. \nocite{*} \end{document}
\begin{document} \title{Creating large noon states with imperfect phase control} \author{Pieter Kok} \email{[email protected]} \affiliation{Department of Materials, Oxford University, Parks Road, Oxford OX1 3PH, United Kingdom.} \pacs{03.65.Ud, 42.50.Dv, 03.67.-a, 42.25.Hz, 85.40.Hp} \begin{abstract} \noindent Optical noon states $(|N,0\rangle + |0,N\rangle)/ \sqrt{2}$ are an important resource for Heisenberg-limited metrology and quantum lithography. The only known methods for creating noon states with arbitrary $N$ via linear optics and projective measurements seem to have a limited range of application due to imperfect phase control. Here, we show that bootstrapping techniques can be used to create high-fidelity noon states of arbitrary size. \end{abstract} \date{\today} \maketitle \noindent \paragraph*{Introduction} An important part of quantum information processing is quantum metrology and quantum lithography. We speak of quantum---or Heisenberg-limited---metrology when systems in quintessentially quantum mechanical states are used to reduce the uncertainty in a phase measurement below the shot-noise limit. If $\phi$ is the phase to be estimated, and $N$ is the number of independent trials in the estimation, the shot-noise limit is given by $\Delta\phi = 1/\sqrt{N}$. In quantum mechanics, the $N$ trials can be correlated such that the limit is reduced to $\Delta\phi = 1/N$ \cite{caves81,yurke86}. It is believed that this is the best phase sensitivity achievable in quantum mechanics. In optics, $\phi$ may represent the length change in the arm of an interferometer searching for gravitational waves. When coherent (laser) light is used, the phase sensitivity is $1/\sqrt{\bar{n}}$, where $\bar{n}$ is the average number of photons in the beam. If, on the other hand, special quantum states of light are used, the phase sensitivity can be improved. One such state is the so-called {\em noon} state: \begin{equation}\label{noon} |N::0\rangle_{ab} \equiv \frac{1}{\sqrt{2}} \left( |N,0\rangle_{ab} + |0,N\rangle_{ab} \right) . \end{equation} If one of the modes experiences a phase shift $\phi$, the state becomes $(|N,0\rangle + e^{iN\phi}|0,N\rangle)/\sqrt{2}$. The enhanced phase leads to an increased phase sensitivity of $\Delta\phi = 1/N$ \cite{bollinger96}, which can easily be verified by noting that a phase shift of $\pi/N$ transforms Eq.~(\ref{noon}) into an orthogonal state. This means there exists a single-shot experiment that determines the presence or absence of the phase shift. Another application that requires the ability to create noon states is quantum lithography \cite{boto00}. Classical light can write and resolve features only with size larger than about a quarter of the wavelength: $\Delta x = \lambda/4$. This is why classical optical lithography is struggling to reach the atomic level. With the use of noon states, however, the minimum resolvable feature size becomes $\Delta x = \lambda/4N$. The same phase enhancement $N\phi$ that gives rise to the Heisenberg limit also enables an unbounded increase in optical resolution \cite{kok04}. Consequently, noon states have attracted quite some attention in recent years \cite{hofmann04,combes05,pezze05,sun05}. Currently, there are two main procedures for creating noon states: Kerr nonlinearities \cite{gerry00}, and linear optics with projective measurements \cite{fiurasek02,lee02,kok02}.
Kerr, or optical $\chi^{(3)}$ nonlinearities may in principle yield perfect noon states, but the small natural coupling of $\chi^{(3)}$ and the unavoidable additional transformation channels pose a formidable challenge to any practical implementation. Electromagnetically induced transparencies may be used to solve this problem \cite{schmidt96}, but even here the creation of noon states needs nonlinearities with appreciably greater strength than what has been demonstrated so far. All methods for creating large noon states with linear optics and projective measurements use the Fundamental Theorem of Algebra, which states that every polynomial has a factorization (see e.g., Ref.~\cite{friedberg89}). In particular, the polynomial function of the creation operators that generates a noon state is factorized by the $N$-th roots of unity: \begin{equation}\label{eq:factors} \hat{a}^{\dagger N} - \hat{b}^{\dagger N} = \prod_{k=1}^N (\hat{a}^{\dagger} + e^{2\pi i (k-1)/N}\, \hat{b}^{\dagger})\; . \end{equation} Every factor can be implemented probabilistically using beam splitters, phase shifters, and photo-detection \cite{fiurasek02,kok02}. Three- and four-photon noon states have been demonstrated experimentally by Mitchell et al.\ \cite{mitchell04} and Walther et al.\ \cite{walther04}, respectively. In this note, I identify a fundamental problem with the noon-state preparation procedure using linear optics and projective measurements. In addition, I propose a method that can be used to circumvent this problem. \paragraph*{Noisy state preparation} In practice the phase factor $2\pi(k-1)/N$ in Eq.~(\ref{eq:factors}) cannot be created with infinite precision. The accuracy of adjusting the phase is bounded by the limits of metrology. In order to create noon states, we must be able to tune the phase shift such that $2\pi (k-1)/N$ and $2\pi k/N$ are well separated. We thus require the phase error to be smaller than $2\pi/N$. This is the Heisenberg limit. If our objective is to create noon states in order to attain the Heisenberg limit, then we encounter a circular argument. This naive line of reasoning therefore suggests that the Heisenberg limit cannot be attained this way. In this note, I quantify the maximum sensitivity using noisy noon states, and explore a possible way to create high-fidelity noon states of arbitrary size. To estimate the effect of imperfect control over the phase shifts in the preparation process, consider the following noise model. Every phase in every factor of Eq.~(\ref{eq:factors}) has a Gaussian distribution with standard deviation $\sqrt{\delta}\,$: \begin{eqnarray} \rho &=& \frac{1}{2N!} \prod_{k=1}^{N} \int\frac{d\varphi_k}{\sqrt{2\pi\delta}}\, e^{-(\varphi_k - 2\pi k/N)^2/2\delta} \cr && \qquad \times \left( \hat{a}^{\dagger} + e^{i\varphi_k}\, \hat{b}^{\dagger} \right) |0\rangle_{ab}\langle 0| \left( \hat{a} + e^{-i\varphi_k}\, \hat{b} \right) . \end{eqnarray} The standard deviation $\sqrt{\delta}$ is considered sufficiently small such that the integration can be taken over the interval $(-\infty,+\infty)$. To derive the uncertainty in the phase, we adopt the following measurement model: By virtue of quantum lithography \cite{boto00}, the noon state can be focussed onto a small region with width $\pi/N$. In this region, a detector measures the observable \begin{equation} \hat{\Sigma} = |N,0\rangle_{ab}\langle 0,N| + |0,N\rangle_{ab}\langle N,0|\; . \end{equation} For a physical model of such a measurement, see Boto {\em et al}.\ \cite{boto00}.
In terms of projection operators, this measurement can be written as \begin{equation} \hat{E}_{\pm} = \frac{1}{2} \left( |N,0\rangle \pm |0,N\rangle \right) \left( \langle N,0| \pm \langle 0,N| \right)\; , \end{equation} and the evolution due to the phase shift yields \begin{equation} \rho (\phi) = \left( \mbox{\small 1} \!\! \mbox{1}\otimes e^{i\hat{n}_b\phi}\right)\, \rho\, \left( \mbox{\small 1} \!\! \mbox{1}\otimes e^{-i\hat{n}_b\phi} \right), \end{equation} where $\hat{n}_b = \hat{b}^{\dagger}\hat{b}$. The conditional probability of finding outcome $j$ in a measurement given a phase shift $\phi$ is then calculated as follows: \begin{equation} p(j|\phi) = {\rm tr}\left[ \hat{E}_j \rho(\phi) \right]. \end{equation} The uncertainty in the phase is determined by the Cram\'er-Rao bound \cite{helstrom76}: \begin{equation} (\Delta\phi)^2 \geq \frac{1}{F(\phi)}\; , \end{equation} where $F(\phi)$ is the Fisher information defined by \begin{equation} F(\phi) = \sum_j \frac{1}{p(j|\phi)} \left[ \frac{\partial p(j|\phi)}{\partial \phi} \right]^2. \end{equation} When the input state is a perfect noon state, the Fisher information is $F(\phi) = N^2$, and the Cram\'er-Rao bound yields $\Delta\phi \geq 1/N$. Up to a constant of proportionality, this bound is attained by the measurement procedure outlined above. When we take into account the Gaussian noise in the state preparation process, the two conditional probabilities become \begin{equation} p(\pm|\phi) = \frac{1}{2} \pm \frac{1}{2} \cos(N\phi)\, e^{-N\delta/2}. \end{equation} Consequently, using $p(+|\phi)\,p(-|\phi) = \tfrac{1}{4}\left[1-\cos^2(N\phi)\,e^{-N\delta}\right]$, the Fisher information is \begin{equation} F(\phi) = \frac{N^2 \sin^2(N\phi)}{e^{N\delta} - \cos^2(N\phi)}\, , \end{equation} which is maximal when $\phi = \pi/2N$. The uncertainty in the phase at this point is then: \begin{equation} \Delta\phi \geq \frac{e^{N\delta/2}}{N}\, . \end{equation} This function exhibits a minimum at $N = 2/\delta$, as shown in Fig.~\ref{fig:phase}. This means that the phase uncertainty $\delta$ of the optical control limits the size of the useful noisy noon states that can be generated. As expected, when $\delta \rightarrow 0$ we retrieve the Heisenberg limit. It should be mentioned that no optimization of the phase estimation procedure has been performed. \begin{figure} \caption{Phase sensitivity $\Delta\phi$ of imperfect noon states with noise parameter $\delta=0.02$ and $\phi=\pi/2N$ as a function of $N$. The shaded area is bounded by the standard quantum limit $1/\sqrt{N}$.} \label{fig:phase} \end{figure} \paragraph*{Bootstrapping} If the number of photons in useful noon states is limited by the phase uncertainty as described above, then these states would be of little use in metrology. However, we can use so-called {\em bootstrapping} to increase the effective noon-state size to an arbitrary photon number. The idea behind this technique is to use (noisy) noon states to reduce the phase uncertainty in the optical control. For example, suppose that the phase shifters producing the phases in Eq.~(\ref{eq:factors}) are implemented with delay lines, and the error $\Delta l$ in the delay is related to $\sqrt{\delta}$ according to $\sqrt{\delta} = k \Delta l$, with $k$ the wave number. The resulting noisy noon state can be used to re-evaluate the length of the delay lines used in the state preparation process. If the error in the length estimation using the noisy noon state is {\em smaller} than the initial error $\sqrt{\delta}$, then the delay lines can be set with a higher accuracy.
Bootstrapping occurs when this higher accuracy is used to tune smaller increments in the phase shifts and consequently create a larger noisy noon state. This procedure can then be repeated indefinitely. Clearly, for bootstrapping to work the minimum phase uncertainty $\Delta\varphi_{\rm min}$ obtained by noon states must be smaller than the phase uncertainty $\sqrt{\delta}$ in the apparatus: \begin{equation}\label{eq:boot} \Delta\varphi_{\rm min} = \frac{e^{N\delta/2}}{N} < \sqrt{\delta} . \end{equation} Since the minimum value of the phase uncertainty is reached when $N = 2/\delta$, we substitute this into Eq.~(\ref{eq:boot}) and solve the inequality. We find that bootstrapping is possible when $\sqrt{\delta} < 2/e$. Furthermore, if $\sqrt{\delta_0}$ is the initial phase uncertainty and $\sqrt{\delta_n}$ is the uncertainty in the $n^{th}$ iteration, the phase uncertainty converges to zero super-exponentially: \begin{equation} \delta_n = \left( \frac{e}{2}\, \delta_0 \right)^{2^n} \quad\text{and}\qquad N_n = 2 \left( \frac{1}{e}\, N_0 \right)^{2^n}. \end{equation} For an initial phase uncertainty of 0.05~rad, the optimal noon state contains ten photons. After two and three bootstrapping iterations, the optimal noon state contains $\sim 180$ and $10^5$ photons, respectively. \paragraph*{Conclusion} I have shown that the limits to optical phase control put a bound on the size of the noon states that can be created with linear optics and projective measurements, while still being able to perform sub-shot-noise phase estimation. If the error in the phase control is given by $\sqrt{\delta}$, then the maximum phase sensitivity in standard Heisenberg-limited metrology is reached when $N = 2/\delta$. However, an adaptive bootstrapping technique can be used to create high-fidelity noon states of arbitrary size (high-noon states). Furthermore, this technique reduces the phase uncertainty super-exponentially. This research is part of the QIP IRC www.qipirc.org (GR/S82176/01), and was inspired by the LOQuIP workshop in Baton Rouge. \begin{thebibliography}{18} \expandafter\ifx\csname natexlab\endcsname\relax\def\natexlab#1{#1}\fi \expandafter\ifx\csname bibnamefont\endcsname\relax \def\bibnamefont#1{#1}\fi \expandafter\ifx\csname bibfnamefont\endcsname\relax \def\bibfnamefont#1{#1}\fi \expandafter\ifx\csname citenamefont\endcsname\relax \def\citenamefont#1{#1}\fi \expandafter\ifx\csname url\endcsname\relax \def\url#1{\texttt{#1}}\fi \expandafter\ifx\csname urlprefix\endcsname\relax\def\urlprefix{URL }\fi \providecommand{\bibinfo}[2]{#2} \providecommand{\eprint}[2][]{\url{#2}} \bibitem[{\citenamefont{Caves}(1981)}]{caves81} \bibinfo{author}{\bibfnamefont{C.~M.} \bibnamefont{Caves}}, \bibinfo{journal}{Phys. Rev. D} \textbf{\bibinfo{volume}{23}}, \bibinfo{pages}{1693} (\bibinfo{year}{1981}). \bibitem[{\citenamefont{Yurke et~al.}(1986)\citenamefont{Yurke, McCall, and Klauder}}]{yurke86} \bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Yurke}}, \bibinfo{author}{\bibfnamefont{S.~L.} \bibnamefont{McCall}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Klauder}}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{33}}, \bibinfo{pages}{4033} (\bibinfo{year}{1986}).
\bibitem[{\citenamefont{Bollinger et~al.}(1996)\citenamefont{Bollinger, Itano, Wineland, and Heinzen}}]{bollinger96} \bibinfo{author}{\bibfnamefont{J.~J.} \bibnamefont{Bollinger}}, \bibinfo{author}{\bibfnamefont{W.~M.} \bibnamefont{Itano}}, \bibinfo{author}{\bibfnamefont{D.~J.} \bibnamefont{Wineland}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{D.~J.} \bibnamefont{Heinzen}}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{54}}, \bibinfo{pages}{4649} (\bibinfo{year}{1996}). \bibitem[{\citenamefont{Boto et~al.}(2000)\citenamefont{Boto, Kok, Abrams, Braunstein, Williams, and Dowling}}]{boto00} \bibinfo{author}{\bibfnamefont{A.~N.} \bibnamefont{Boto}}, \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Kok}}, \bibinfo{author}{\bibfnamefont{D.~S.} \bibnamefont{Abrams}}, \bibinfo{author}{\bibfnamefont{S.~L.} \bibnamefont{Braunstein}}, \bibinfo{author}{\bibfnamefont{C.~P.} \bibnamefont{Williams}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{J.~P.} \bibnamefont{Dowling}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{85}}, \bibinfo{pages}{2733} (\bibinfo{year}{2000}). \bibitem[{\citenamefont{Kok et~al.}(2004)\citenamefont{Kok, Braunstein, and Dowling}}]{kok04} \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Kok}}, \bibinfo{author}{\bibfnamefont{S.~L.} \bibnamefont{Braunstein}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{J.~P.} \bibnamefont{Dowling}}, \bibinfo{journal}{J. Opt. B} \textbf{\bibinfo{volume}{6}}, \bibinfo{pages}{S796} (\bibinfo{year}{2004}). \bibitem[{\citenamefont{Hofmann}(2004)}]{hofmann04} \bibinfo{author}{\bibfnamefont{H.~F.} \bibnamefont{Hofmann}}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{70}}, \bibinfo{pages}{023812} (\bibinfo{year}{2004}). \bibitem[{\citenamefont{Combes and Wiseman}(2005)}]{combes05} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Combes}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{H.~M.} \bibnamefont{Wiseman}}, \bibinfo{journal}{J. Opt. B} \textbf{\bibinfo{volume}{7}}, \bibinfo{pages}{14} (\bibinfo{year}{2005}). \bibitem[{\citenamefont{Pezz{\'e} and Smerzi}(2005)}]{pezze05} \bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Pezz{\'e}}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Smerzi}}, \bibinfo{journal}{arXiv:quant-ph/0508158} (\bibinfo{year}{2005}). \bibitem[{\citenamefont{Sun et~al.}(2005)\citenamefont{Sun, Ou, and Guo}}]{sun05} \bibinfo{author}{\bibfnamefont{F.~W.} \bibnamefont{Sun}}, \bibinfo{author}{\bibfnamefont{Z.~Y.} \bibnamefont{Ou}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{G.~C.} \bibnamefont{Guo}}, \bibinfo{journal}{arXiv:quant-ph/0511189} (\bibinfo{year}{2005}). \bibitem[{\citenamefont{Gerry}(2000)}]{gerry00} \bibinfo{author}{\bibfnamefont{C.~C.} \bibnamefont{Gerry}}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{61}}, \bibinfo{pages}{043811} (\bibinfo{year}{2000}). \bibitem[{\citenamefont{Fiur{\'a}{\v s}ek}(2002)}]{fiurasek02} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Fiur{\'a}{\v s}ek}}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{65}}, \bibinfo{pages}{053818} (\bibinfo{year}{2002}). \bibitem[{\citenamefont{Kok et~al.}(2002)\citenamefont{Kok, Lee, and Dowling}}]{kok02} \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Kok}}, \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Lee}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{J.~P.} \bibnamefont{Dowling}}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{65}}, \bibinfo{pages}{052104} (\bibinfo{year}{2002}). 
\bibitem[{\citenamefont{Lee et~al.}(2002)\citenamefont{Lee, Kok, Cerf, and Dowling}}]{lee02} \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Lee}}, \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Kok}}, \bibinfo{author}{\bibfnamefont{N.~J.} \bibnamefont{Cerf}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{J.~P.} \bibnamefont{Dowling}}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{65}}, \bibinfo{pages}{030101} (\bibinfo{year}{2002}). \bibitem[{\citenamefont{Schmidt and Imamo$\bar{\rm g}$lu}(1996)}]{schmidt96} \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Schmidt}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Imamo$\bar{\rm g}$lu}}, \bibinfo{journal}{Opt. Lett.} \textbf{\bibinfo{volume}{21}}, \bibinfo{pages}{1936} (\bibinfo{year}{1996}). \bibitem[{\citenamefont{Friedberg et~al.}(1989)\citenamefont{Friedberg, Insel, and Spence}}]{friedberg89} \bibinfo{author}{\bibfnamefont{S.~H.} \bibnamefont{Friedberg}}, \bibinfo{author}{\bibfnamefont{A.~J.} \bibnamefont{Insel}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{L.~E.} \bibnamefont{Spence}}, \emph{\bibinfo{title}{Linear Algebra}} (\bibinfo{publisher}{Prentice-Hall}, \bibinfo{year}{1989}). \bibitem[{\citenamefont{Mitchell et~al.}(2004)\citenamefont{Mitchell, Lundeen, and Steinberg}}]{mitchell04} \bibinfo{author}{\bibfnamefont{M.~W.} \bibnamefont{Mitchell}}, \bibinfo{author}{\bibfnamefont{J.~S.} \bibnamefont{Lundeen}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.~M.} \bibnamefont{Steinberg}}, \bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{429}}, \bibinfo{pages}{161} (\bibinfo{year}{2004}). \bibitem[{\citenamefont{Walther et~al.}(2004)\citenamefont{Walther, Pan, Aspelmeyer, Ursinand, Gasparoni, and Zeilinger}}]{walther04} \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Walther}}, \bibinfo{author}{\bibfnamefont{J.-W.} \bibnamefont{Pan}}, \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Aspelmeyer}}, \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Ursinand}}, \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Gasparoni}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Zeilinger}}, \bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{429}}, \bibinfo{pages}{158} (\bibinfo{year}{2004}). \bibitem[{\citenamefont{Helstrom}(1976)}]{helstrom76} \bibinfo{author}{\bibfnamefont{C.~W.} \bibnamefont{Helstrom}}, \emph{\bibinfo{title}{Quantum Detection and Estimation Theory}} (\bibinfo{publisher}{Academic Press}, \bibinfo{year}{1976}). \end{thebibliography} \end{document}
\begin{document} \begin{frontmatter} \title{A sufficient condition for penalized polynomial regression to be invariant to translations of the predictor variables} \author{Johannes W. R. Martini} \ead{[email protected]} \begin{abstract} Whereas translating the coding of predictor variables does not change the fit of a polynomial least squares regression, penalized polynomial regressions are potentially affected. A result on which terms can be penalized to maintain the invariance to translations of the coding has earlier been published. A generalization of a corresponding proposition, which requires a more precise mathematical framework, is presented in this short note. \end{abstract} \begin{keyword} Penalized regression; Polynomial regression; Translation invariance \end{keyword} \end{frontmatter} \section{Introduction} Linear regressions are everyday tools in statistical applications. A particular type of regression, which is linear in the coefficients, and which allows for modeling interaction between the predictor variables, is polynomial regression. Here, the response $y$ is a polynomial in the predictors ${\bf{x}}=(x_1,...,x_p)$ \begin{equation} y_j= \sum\limits_{(i_1,i_2,...,i_p)\in I}a_{i_1,i_2,...,i_p} \; x_{j,1}^{i_1}x_{j,2}^{i_2}\cdot \cdot \cdot x_{j,p}^{i_p} + \epsilon_j, \qquad i_1,...,i_p \in \mathbb{N} \end{equation} In the standard setup, the data ${\bf{y}}=(y_1,...,y_n)$ and ${\bf{X}}=(x_{j,i})_{j=1,...n;i=1,...,p}$ is given, $\epsilon_j$ is assumed to be normally distributed $\epsilon_j \stackrel{i.i.d.}{\sim} \mathcal{N}(0,\sigma^2_\epsilon)$, and the coefficients $a_{i_1,i_2,...,i_p}$ have to be determined by a regression. The index set $I$ is chosen and defines which monomials are included in the model. \added{Here, ${\bf{X}}$ represents the matrix of data, but not the regressors of the regression problem, that is it is not the design matrix. The latter is obtained from ${\bf{X}}$ by multiplying its corresponding columns, according to the set of monomials $I$ of the polynomial model. Each monomial gives a regressor. } \\ Provided that the polynomial model satisfies \added{a ``completeness condition"} \cite{martini2019lost}, an ordinary least squares fit remains invariant when the coding of ${\bf{X}}$ is translated. Contrarily, it has been illustrated that the results of penalized regressions are likely to be affected by translations of the coding \citep{he2016does,martini2017genomic}. \added{Exceptions of penalized polynomial regressions being invariant to translations of the variable coding are given by those only penalizing the size of coefficients of monomials of highest total degree, provided the model allows to adapt all coefficients of lower degree \citep{martini2019lost}.} A generalization of this result is presented in this short note. \section{Recapitulation of the required mathematical background} We will recapitulate some technical background. The first definition provides a partial order on monomials which will be used to define the greatest monomials of a polynomial afterwards. To simplify the treatise, we assume whenever we talk about a regression that a (unique) solution exists. Moreover, think in the following of ${\bf{x}}$ as any initially chosen coding of the predictor variable. \begin{definition}[A partial order on monomials] For two monomials $$m_1:= x_{1}^{i_1}x_{2}^{i_2} ... x_{p}^{i_p} \mbox{ and } m_2:= x_{1}^{k_1}x_{2}^{k_2} ... 
x_{p}^{k_p},$$ we call $m_2$ \emph{greater than or of equal size as} $m_1$, in symbols $m_2 \geq m_1$, if $$ k_l \geq i_l \qquad \forall l \in \{1,...,p\}.$$ \end{definition} Note that if $m_1 \geq m_2$ and $m_2 \geq m_1$, it follows that $m_2=m_1$. \\ We use this partial order to specify what a ``greatest'' monomial is. \begin{definition}[Greatest monomial] We call a monomial $m_1$ of a polynomial $f$ a \emph{greatest monomial} of $f$, if $f$ does not possess a monomial $m_2 \neq m_1$ with a non-zero coefficient which is greater than $m_1$. For a model $\{ I | I \subset \mathbb{R}^{|I|} \}$ a monomial is called greatest if there is no greater monomial $m_2 \neq m_1$ whose tuple of exponents is an element of $I$. \end{definition} Having defined what a greatest monomial is, we define ``translation'', ``translation invariance'' and the sum of squared residuals. \begin{definition}[Translation of a vector and of a polynomial] Let ${\bf{X}}$ be an $n \times p$ matrix of $p$ predictor variables measured $n$ times for the respective data ${\bf{y}}=(y_1,...,y_n)$. For a given \added{$1 \times p$-vector} ${\bf{P}}$, we define the translation of the predictor variables $$T_{{\bf{P}}}({\bf{X}}):= {\bf{X}} + \mathbf{1}_n^t {\bf{P}}.$$ \added{$ \mathbf{1}_n$ denotes here the $1 \times n$-vector with each entry equal to 1.} Analogously, we define the translation of a polynomial $f({\bf{x}})$ $$T_{{\bf{P}}}(f({\bf{x}})):= f({\bf{x}}+ {\bf{P}})= \left( f \circ T_{{\bf{P}}}\right)({\bf{x}}).$$ \end{definition} The definition of the translation of a polynomial above has the obvious property \begin{equation}\label{obvious}\left [ T_{-{\bf{P}}} \circ f \right ] \circ T_{{\bf{P}}}= f \end{equation} We quickly define what we mean by ``translation invariance". \begin{definition}[Translation invariance] A regression method $R$ that maps the data $({\bf{X}},{\bf{y}})$ to a function $R_{{\bf{X}},{\bf{y}}} ({\bf{x}})$ is called \emph{translation invariant} if and only if \begin{equation}\label{invariance} R_{{\bf{X}},{\bf{y}}} ({\bf{x}})=R_{T_{\bf{P}}({\bf{X}}),{\bf{y}}} (T_{\bf{P}}({\bf{x}})) \end{equation} for any data $({\bf{X}},{\bf{y}})$ and any translation vector ${\bf{P}}$. \end{definition} In words, the definition of translation invariance means that for a regression, the resulting fit mapping ${\bf{x}}$ to $y$ (see Eq.(1)) is \added{identical} when applying the regression on $\left({\bf{X}},{\bf{y}} \right)$ to obtain $R_{{\bf{X}},{\bf{y}}} ({\bf{x}})$ or when using $\left(T_{\bf{P}}({\bf{X}}),{\bf{y}}\right)$ to obtain a function $R_{T_{\bf{P}}({\bf{X}}),{\bf{y}}} (T_{\bf{P}}({\bf{x}}))$ defined on the translated predictor variables $T_{\bf{P}}({\bf{x}})$. For an example to see that this is not always the case, see Table~1 of \cite{martini2019lost}. \begin{definition} Let the data $({\bf{X}},{\bf{y}})$ be given. The function \emph{sum of squared residuals} (SSR) maps a polynomial $f$ to a real, non-negative number by \begin{equation}\label{SSR}SSR_{{\bf{X}},{\bf{y}}}(f) := \sum_{j = 1,...,n} (y_j - f({\bf{X}}_{j,\bullet}))^2.\end{equation} Here, ${\bf{X}}_{j,\bullet}$ denotes the $j$-th row of ${\bf{X}}$. \end{definition} With these definitions we can come to the results. \section{Results} \begin{proposition}\label{prop:01} Let $f({\bf{x}})$ be a polynomial. 
Then for any data $({\bf{X}},{\bf{y}})$ and translation vector ${\bf{P}}$, \begin{equation}\label{SSR2} SSR_{{\bf{X}},{\bf{y}}}(f)=SSR_{T_{\bf{P}}({\bf{X}}),{\bf{y}}}(T_{-{\bf{P}}}(f)).\end{equation} Moreover, for any greatest monomial $m$ of $f$, the corresponding coefficient $a_m$ of $f$ and $\tilde{a}_m$ of $T_{-{\bf{P}}}(f)$ will be identical: \begin{equation}\label{eq:06}a_m = \tilde{a}_m. \end{equation} \end{proposition} \begin{proof} Concerning $SSR$, use the definition in Eq.(\ref{SSR}). Moreover, expand $f({\bf{x}}-{\bf{P}})$ to receive the coefficients of $T_{-{\bf{P}}}(f)$. A greatest monomial of $f$ will be a greatest monomial of $T_{-{\bf{P}}}(f)$. Inserting ${\bf{x}}-{\bf{P}}$ in a greatest monomial and expanding it, gives the same coefficient for this monomial. \end{proof} \begin{corollary} Let $G$ be the set of greatest monomials of polynomial $f$. Moreover, let $L_{{\bf{X}},{\bf{y}}}$ be a loss function of \added{the form} \begin{equation}\label{GOF}L_{{\bf{X}},{\bf{y}}} = g(SSR_{{\bf{X}},{\bf{y}}}) + PEN(a_{\in G})\end{equation} with $PEN$ any penalty function only dependent on the greatest monomials of $f$, and $g:\mathbb{R}^+_0 \rightarrow \mathbb{R}^+_0$ any function. Then \begin{equation}\label{central}L_{{\bf{X}},{\bf{y}}}(f)=L_{T_{\bf{P}} ({\bf{X}}),{\bf{y}}}(T_{-{\bf{P}}} (f)).\end{equation} \end{corollary} \begin{proof} Eqs.(\ref{SSR2}-\ref{GOF}). \end{proof} \begin{corollary}\label{cor:02} Let us consider a polynomial regression method $R$ defined by minimizing a loss function $L_{{\bf{X}},{\bf{y}}}$ \added{of form} (\ref{GOF}). Moreover, let $\mathcal{F}$ denote the set of polynomials across which we look for the one minimizing $L_{{\bf{X}},{\bf{y}}}$ and let $\forall \, f \in \mathcal{F}$ and $\forall \, {\bf{P}} \in \mathbb{R}^p$ be $T_{\bf{P}}(f) \in \mathcal{F}$. Then \begin{equation}\label{Finalresult} R_{T_{\bf{P}}({\bf{X}}),{\bf{y}}} (T_{\bf{P}}({\bf{x}})) = \left[ T_{-{\bf{P}}} \circ R_{{\bf{X}},{\bf{y}}} \right] \circ T_{\bf{P}} ({\bf{x}}) = R_{{\bf{X}},{\bf{y}}}({\bf{x}}) \end{equation} and thus Eq.(\ref{invariance}) is fulfilled which means the fit is invariant to translations of the coding of the predictor variables. \end{corollary} \begin{proof} The second equality of Eq.(\ref{Finalresult}) is true for any function $f$ as stated in Eq.(\ref{obvious}). What requires a little bit more explanation is the first equality. Remember that $R_{{\bf{X}},{\bf{y}}} ({\bf{x}})$ is a function in ${\bf{x}}$ minimizing $L_{{\bf{X}},{\bf{y}}}(f({\bf{x}}))$. Eq.(\ref{central}) states that $T_{-{\bf{P}}} \circ R_{{\bf{X}},{\bf{y}}}$ is the polynomial in $T_{\bf{P}}({\bf{x}})$ which minimizes $L_{T_{\bf{P}}({\bf{X}}),{\bf{y}}}(f)$, which means that $$R_{T_{\bf{P}}({\bf{X}}),{\bf{y}}} (T_{\bf{P}}({\bf{x}})) = \left[ T_{-{\bf{P}}} \circ R_{{\bf{X}},{\bf{y}}} \right] \circ T_{\bf{P}} ({\bf{x}}). $$ \end{proof} The statement of Corollary~\ref{cor:02} is more general than the result provided by \cite{martini2019lost}, since it allows the penalty function to be defined on the coefficients of the greatest monomials of the polynomial model, and not only on those of highest total degree. Monomials of highest total degree are greatest, but not every greatest monomial is of highest degree. 
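As an illustration, take $p=2$ and the model given by the monomials $\{1,\,x_1,\,x_1^2,\,x_1^3,\,x_2,\,x_1x_2\}$. This set is closed under divisibility, so the span of the model is closed under translations, and its greatest monomials are $x_1^3$ and $x_1x_2$. By Corollary~\ref{cor:02}, a loss function of form (\ref{GOF}) whose penalty depends only on $a_{3,0}$ and $a_{1,1}$ (for instance, adding $\lambda\left(|a_{3,0}|+|a_{1,1}|\right)$ to the SSR) yields a fit that is invariant to translations of the coding, even though $x_1x_2$ is not of highest total degree; a criterion restricted to monomials of highest total degree would only cover penalties on $a_{3,0}$.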
\added{In particular, this more general sufficient condition may also be necessary.} \section{Conclusion and Outlook} We have shown that any regression method defined by minimizing a loss function which is the sum of a function of the sum of squared residuals (SSR), and a penalty function only depending on the coefficients of the greatest monomials of the polynomial model, is invariant to translations of the coding of the predictor variables. Moreover, Eq.(\ref{GOF}) can easily be generalized by replacing the SSR with another function defined on a different norm, since Eq.(\ref{SSR2}) will still hold. \added{Finally, note that it may be the case that --for a regression defined by a loss function of form Eq.~(\ref{GOF})-- it is also a necessary condition that the penalty function only depends on the coefficients of greatest monomials. A general proof of this conjecture, possibly with some additional minor restrictions on the structure of the regression problem, remains to be found.} \end{document}
\begin{document} \title{Expressive curves} \author{Sergey Fomin} \address{Department of Mathematics, University of Michigan, Ann Arbor, MI 48109, USA} \email{[email protected]} \author{Eugenii Shustin} \address{School of Mathematical Sciences, Tel Aviv University, Tel Aviv 69978, Israel} \email{[email protected]} \thanks {\emph{2020 Mathematics Subject Classification}: Primary 14H50. Secondary 05E14, 14H20, 14H45, 14P05, 14Q05, 51N10. } \thanks{The first author was supported by the NSF grants DMS-1664722 and DMS-2054231 and by a Simons Fellowship. The second author was supported by the ISF grant 501/18 and the Bauer-Neuman Chair in Real and Complex Geometry.} \date{June 1, 2022} \begin{abstract} We initiate the study of a class of real plane algebraic curves which we call \emph{expressive}. These are the curves whose defining polynomial has the smallest number of critical points allowed by the topology of the set of real points~of~a~curve. This concept can be viewed as a global version of the notion of a real morsification of an isolated plane curve singularity. We prove that a plane curve~$C$ is expressive if (a)~each irreducible component of~$C$~can be parametrized by real polynomials (either ordinary or trigonometric), (b)~all singular points of $C$ in the affine plane are ordinary hyperbolic~nodes, and (c)~the set of real points of $C$ in the affine plane is connected. Conversely, an expressive curve with real irreducible components must satisfy conditions (a)--(c), unless it exhibits some exotic behaviour at infinity. We describe several constructions that produce expressive curves, and discuss a large number of examples, including: arrangements of lines, parabolas, and~circles; Chebyshev and Lissajous curves; hypotrochoids and epitrochoids; and much more. \end{abstract} \keywords{Real plane algebraic curve, critical points of real bivariate polynomials, polynomial curve, trigonometric curve, expressive curve} \ \maketitle \tableofcontents \section*{Introduction} Let $g(x)\in{\mathbb R}[x]$ be a polynomial of degree~$n$ whose $n$ roots are real and distinct. Then $g$ has exactly $n-1$ critical points, all of them real, interlacing the roots of~$g$. In this paper, we study the two-dimensional version of this phenomenon. We~call~a bivariate real polynomial $G(x,y)\in{\mathbb R}[x,y]$ (or the corresponding affine plane curve~$C$) \emph{expressive} if \redsf{the locations of the critical points of~$G$ are determined by the set of real points $C_{\mathbb R}=\{(x,y)\in{\mathbb R}^2\mid G(x,y)=0\}$, as follows: \begin{itemize}[leftmargin=.2in] \item there is precisely one extremum inside each bounded region of~${\mathbb R}^2\setminusC_{{\mathbb R}}$; \item all other critical points of~$G$ are the saddles located at hyperbolic nodes of~$C$. \end{itemize} (Recall that a \emph{hyperbolic node} is an intersection of two smooth real local branches.) In particular, all critical points of an expressive polynomial~$G$ are real. } An example is shown in Figure~\hyperref{fig:3-circles}. For a non-example, see Figure~\hyperref{fig:isolines}. \begin{figure} \caption{ The expressive curve $C=\{G(x,y)=0\} \label{fig:3-circles} \end{figure} \begin{figure} \caption{A non-expressive curve whose components are a circle and an ellipse. 
The bounded region at the bottom contains 3 critical points.} \label{fig:isolines} \end{figure} \pagebreak Our main result (Theorem~\hyperref{th:reg-expressive}) gives an explicit characterization of expressive curves, subject to a mild requirement of ``$L^\infty$-regularity.'' (This requirement forbids some exotic behaviour of~$C$ at infinity.) We prove that a plane algebraic curve~$C$ with real irreducible components is expressive and $L^\infty$-regular if and only if \begin{itemize}[leftmargin=.2in] \item each component of $C$ has a trigonometric or polynomial parametrization, \item all singular points of $C$ in the affine plane are real hyperbolic nodes, and \item the set of real points of $C$ in the affine plane is connected. \end{itemize} To illustrate, a union of circles is an expressive curve provided any two of them intersect at two real points, as in Figure~\hyperref{fig:3-circles}. On the other hand, the circle and the ellipse in Figure~\hyperref{fig:isolines} intersect at four points, two of which are complex conjugate. (In~the case of a pair of circles, those two points escape to infinity.) The above characterization allows us to construct numerous examples of expressive plane curves, including arrangements of lines, parabolas, circles, and singular cubics; Chebyshev and Lissajous curves; hypotrochoids and epitrochoids; and much more. See Figures~\hyperref{fig:5-irr-expressive}--\hyperref{fig:3-arrangements} for an assortment of examples; many more are scattered throughout the paper. \begin{figure} \caption{Irreducible expressive curves: (a) a singular cubic; (b) double lima\c con; (c) $(2,3)$-Lissajous curve; (d) $3$-petal hypotrochoid; (e) $(3,5)$-Chebyshev curve. } \label{fig:5-irr-expressive} \end{figure} \begin{figure} \caption{Reducible expressive curves: arrangements of six lines, four parabolas, and two singular cubics. } \label{fig:3-arrangements} \end{figure} On the face of it, expressivity is an analytic property of a function~\redsf{$G:{\mathbb R}^2\to{\mathbb R}$}. This is however an illusion: just like in the univariate~case, in order to rule out \redsf{accidental} critical points, we need~$G$ to be a polynomial of a certain kind. Thus, expressivity is essentially an algebraic phenomenon. Accordingly, its study requires tools of algebraic geometry and singularity theory. For a real plane algebraic curve~$C$ to be expressive, one needs \begin{align} \notag &\#\{\text{critical points of~$C$ in the complex affine plane}\} \\[-.05in] \tag{$*$} =\,&\#\{\text{double points in~$C_{{\mathbb R}}$}\} + \#\{\text{bounded components of ${\mathbb R}^2\setminusC_{{\mathbb R}}$}\} . \end{align} Since a generic plane curve of degree~$d$ has $(d-1)^2$ critical points, whereas expression~($*$) is typically smaller than~$(d-1)^2$, we need all the remaining critical points to escape to infinity. Our~analysis shows that this can only happen if each (real) irreducible component of~$C$ either has a unique point at infinity, or a pair of complex conjugate points; moreover the components must intersect each other in the affine plane at real points, specifically at hyperbolic nodes. The requirement of having one or two points at infinity translates into the condition of having a polynomial or trigonometric parametrization, yielding the expressivity criterion formulated above. As mentioned earlier, these results are established under the assumption of \emph{$L^\infty$-regularity}, which concerns the behaviour of the projective closure of~$C$ at the line at infinity~$L^\infty$. 
This assumption ensures that the number of critical points accumulated at each point~$p\in C\cap L^\infty$ is determined in the expected way by the topology of~$C$ in the vicinity of~$p$ together with the intersection multiplicity $(C\cdot L^\infty)_p$. All~polynomial and trigonometric curves are $L^\infty$-regular, as are all expressive curves of degrees~$\le4$. \noindent \textbf{Section-by-section overview.} Sections~\hyperref{sec:plane-curves}--\hyperref{sec:poly-trig} are devoted to algebraic geometry groundwork. Section~\hyperref{sec:plane-curves} reviews basic background on plane algebraic curves, intersection numbers, and topological invariants of isolated singularities. The number of critical points escaping to infinity is determined by the intersection multiplicities of polar curves at infinity, which are studied in Section~\hyperref{sec:polar-curves-at-infinity}. Its main result is Proposition~\hyperref{prop:milnor-infinity}, which gives a lower bound for such a multiplicity in terms of the Milnor number and the order of tangency between the curve and the line at infinity. When this bound becomes an equality, a plane curve~$C$ is called $L^\infty$-regular. In Section~\hyperref{sec:L-regular-curves}, we provide several criteria for $L^\infty$-regularity. We also show (see Proposition~\hyperref{pr:hironaka-milnor}) that for an $L^\infty$-regular curve~$C\!=\!\{G(x,y)\!=\!0\}$ all of whose singular points in the affine plane are ordinary nodes, the number of critical points of~$G$ is completely determined by the number of those nodes, the geometric genus of~$C$, and the number of local branches of~$C$ at infinity. This statement relies on classical formulas due to H.~Hironaka~\cite{Hir} and J.~Milnor~\cite{Milnor}. \redsf{The technical material of Sections~~\hyperref{sec:polar-curves-at-infinity}--\hyperref{sec:L-regular-curves} can be safely skipped by the readers who are willing to treat the notion of $L^\infty$-regularity as a ``black-box'' genericity condition that automatically holds for most, if not all, expressive curves that arise in applications.} Section~\hyperref{sec:poly-trig} introduces polynomial and trigonometric curves, the plane curves possessing a parametrization $t\mapsto (X(t),Y(t))$ in which both $X$ and $Y$ are polynomials, resp.\ trigonometric polynomials. We review a number of examples of such curves, recall the classical result~\cite{Abhyankar-1988} characterizing polynomial curves as those with a single place at infinity, and provide an analogous characterization for trigonometric curves. Expressive curves are introduced in Section~\hyperref{sec:expressive}. We formulate their basic properties and discuss a large number of examples, which include an inventory of expressive curves of degrees~$\le 4$. In Section~\hyperref{sec:divides}, we introduce \emph{divides} and relate them to the notion of expressivity. Section~\hyperref{sec:regular+expressive} contains our main results. Using the aforementioned bounds and criteria, \linebreak[3] we show (see Theorem~\hyperref{th:reg-expressive-irreducible}) that an irreducible real plane algebraic curve is expressive and $L^\infty$-regular if and only if it is either trigonometric or polynomial, and moreover all its singular points in the complex affine plane are (real) hyperbolic nodes. \pagebreak[3] This criterion is then extended (see Theorem~\hyperref{th:reg-expressive}) to general plane curves with real irreducible components. 
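To make the criterion concrete, here is a small worked illustration of our own (it is not one of the examples discussed in this paper), verified with a short SymPy computation. The nodal cubic $C=\{y^2-x^2(x+1)=0\}$ is polynomial (parametrized by $t\mapsto(t^2-1,\,t^3-t)$), its only affine singularity is a hyperbolic node at the origin, and its real point set is connected; accordingly, $G=y^2-x^2(x+1)$ has exactly two critical points, a saddle at the node and one extremum inside the bounded oval, while the remaining $(d-1)^2-2=2$ critical points of a generic cubic have escaped to infinity.
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')
G = y**2 - x**2*(x + 1)      # nodal cubic, parametrized by t -> (t**2 - 1, t**3 - t)

# critical points of G in the complex affine plane
crit = sp.solve([sp.diff(G, x), sp.diff(G, y)], [x, y], dict=True)
print(crit)   # two points: (0, 0) (the node) and (-2/3, 0) (inside the oval)

# Hessian determinants: negative at the node (saddle),
# positive at (-2/3, 0) (extremum)
H = sp.hessian(G, (x, y))
print([sp.det(H.subs(s)) for s in crit])
\end{verbatim}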
Additional expressivity criteria are given in Section~\hyperref{sec:more-expressivity-criteria}. \pagebreak[3] As a byproduct, we obtain the following elementary statement (see Corollary~\hyperref{cor:all-crit-pts-are-real}): if $C=\{G(x,y)=0\}$ is a real polynomial or trigonometric affine plane curve that intersects itself solely at hyperbolic nodes, then all critical points of~$G$ are real. Multiple explicit constructions of expressive curves are presented in \hbox{Sections~\hyperref{sec:constructions-irr}--\hyperref{sec:arrangements-trig}}, demonstrating the richness and wide applicability of the theory. In Section~\hyperref{sec:constructions-irr}, we describe the procedures of bending, doubling, and unfolding. Each of them can be used to create new (more ``complicated'') expressive curves from existing ones. Arrangements of lines, parabolas, and circles, discussed in Section~\hyperref{sec:overlays}, provide another set of examples. \linebreak[3] These examples are generalized in Section~\hyperref{sec:shifts+dilations} to arrangements consisting of shifts, dilations and/or rotations of a given expressive curve. Explicit versions of these constructions for polynomial (resp., trigonometric) curves are presented in Section~\hyperref{sec:arrangements-poly} (resp., Section~\hyperref{sec:arrangements-trig}). In Section~\hyperref{sec:alternative-expressivity}, we briefly discuss alternative notions of expressivity: a ``topological'' notion that treats real algebraic curves set-theoretically, and an ``analytic'' notion that does not require the defining equation of a curve to be algebraic. The class of divides which can arise from $L^\infty$-regular expressive curves is studied in Section~\hyperref{sec:regular-expressive-divides}. In particular, we show that a simple pseudoline arrangement belongs to this class if and only if it is \emph{stretchable}. In Section~\hyperref{sec:expressive-vs-algebraic}, we compare this class to the class of \emph{algebraic} divides studied in \cite{FPST}. \noindent \textbf{Motivations and outlook.} This work grew out of the desire to develop a global version of the A'Campo--Guse\u\i n-Zade theory \cite{acampo-1975, acampo-1999, GZ1974, gusein-zade-2} of morsifications of isolated singularities of plane curves. The defining feature of such morsifications is a \emph{local} expressivity property, which prescribes the locations (up to real isotopy) of the critical points of a morsified curve in the vicinity of the original singularity. In~this paper, expressivity is a \emph{global} property of a real plane algebraic curve, prescribing the locations of its critical points (again, up to real isotopy) on the entire affine plane. In a forthcoming follow-up to this paper, we intend to develop a global analogue---in the setting of expressive curves---of A'Campo's theory of divides and their links. As~shown in~\cite{FPST}, this theory has intimate connections to the combinatorics of quivers, cluster mutations, and plabic graphs. It would be interesting to explore the phenomenon of expressivity in higher dimensions, and in particular find out which results of this paper generalize. The concept of an expressive curve/hypersurface can be viewed as a generalization of the notion of a line/hyperplane arrangement. (Expressivity of such arrangements in arbitrary dimension can be established by a log-concavity argument.) 
This opens the possibility of extending the classical theory of hyperplane arrangements~\cite{Aguiar-Mahajan, Dimca, Stanley-arrangements} to arrangements of expressive curves/surfaces. \noindent \textbf{Acknowledgments.} We thank Michael Shapiro for stimulating \hbox{discussions}, and Pavlo Pylyavskyy and Dylan Thurston for the collaboration~\cite{FPST} which prompt\-ed our work on this project. \redsf{We are grateful to the referee for multiple suggestions whose implementation improved the quality of the paper.} We used \texttt{Sage} to compute resultants, and \texttt{Desmos} to draw curves. While cataloguing expressive curves of degrees $\le 4$, we made use of the classifications produced by A.~Korchagin and D.~Weinberg~\cite{Korchagin-Weinberg-2005}. \pagebreak[3] \section{Plane curves and their singularities} \label{sec:plane-curves} \begin{definition} Let ${\mathbb P}^2$ denote the complex projective plane. We fix homogeneous coordinates $x,y,z$ in~${\mathbb P}^2$. Any homogeneous polynomial $F\in{\mathbb C}[x,y,z]$ defines a \emph{plane algebraic curve}~$C=Z(F)$ in ${\mathbb P}^2$ given by \[ C=Z(F)=\{F(x,y,z)=0\}. \] We understand the notion of a curve (and the notation~$Z(F)$) scheme-theoretically: if the polynomial $F$ splits into factors, we count each component of the curve $C=Z(F)$ with the multiplicity of the corresponding factor. \end{definition} For two distinct points $p,q\in{\mathbb P}^2$, we denote by $L_{pq}$ the line passing through $p$~and~$q$. The \emph{line at infinity} $L^\infty\subset{\mathbb P}^2$ is defined by $L^\infty=Z(z)$. For $F$ a smooth function in $x,y,z$, we use the shorthand \[ F_x=\tfrac{\partial F}{\partial x}\,\quad F_y=\tfrac{\partial F}{\partial y}\,,\quad F_z=\tfrac{\partial F}{\partial z} \] for the partial derivatives of~$F$. The following elementary statement is well known, and easy to check. \begin{lemma}[Euler's formula] Let $F=F(x,y,z)$ be a homogeneous polynomial of degree~$d$. Then \begin{equation} \label{eq:Euler} d\cdot F=xF_x+yF_y+zF_z\,. \end{equation} \end{lemma} \begin{definition} Let $F\in{\mathbb C}[x,y,z]$ be a homogeneous polynomial in $x,y,z$. For a point $q=(q_x,q_y,q_z)\in{\mathbb P}^2$, we denote \[ F_{(q)}=q_xF_x+q_yF_y+q_zF_z\,. \] The \emph{polar curve} $C_{(q)}$ associated with a curve $C=Z(F)$ and a point $q\in{\mathbb P}^2$ is defined by $C_{(q)}=Z(F_{(q)})$. In particular, for $q=(1,0,0)\in L^\infty$ (resp., $q=(0,1,0)\in L^\infty$), we get the polar curve $Z(F_x)$ (resp., $Z(F_y)$). \end{definition} For a point $p$ lying on two plane curves $C$ and $\tilde C$, we denote by $(C\cdot \tilde C)_p$ the \emph{intersection number} of these curves at~$p$. We will also use this notation for analytic curves, i.e., curves defined by analytic equations in a neighborhood of~$p$. \begin{definition} \label{def:top-invariants} Let $C=Z(F)$ be a plane algebraic curve, and $p$ an isolated singular point of~$C$. 
Let us recall the following topological invariants of the singularity $(C,p)$: \begin{itemize}[leftmargin=.2in] \item the \emph{multiplicity} $\mt(C,p)=(C\cdot L)_p$, where $L$ is any line passing through~$p$ which is not tangent to the germ~$(C,p)$; \item the \emph{$\varkappa$-invariant} $\varkappa(C,p)=(C\cdot C_{(q)})_p$, where $q\in {\mathbb P}^2\setminus\{p\}$ is such that the line~$L_{pq}$ is not tangent to $(C,p)$; \item the \emph{number $\br(C,p)$ of local branches} (irreducible components) of the germ~$(C,p)$; \item the \emph{$\delta$-invariant} $\delta(C,p)$, which can be determined from \[ \varkappa(C,p)=2\delta(C,p)+\mt(C,p)-\br(C,p); \] \item the \emph{Milnor number} $\mu(C,p)=(C_{(q')}\cdot C_{(q'')})_p$, where the points $q', q''\in{\mathbb P}^2$ are chosen so that $p,q',q''$ are not collinear. \end{itemize} More generally, for any point $p\in C_{(q')}\cap C_{(q'')}$, not necessarily lying on the curve~$C$, we can define the Milnor number \[ \mu(C,p)=(C_{(q')}\cdot C_{(q'')})_p \] (provided $p,q',q''$ are not collinear). Note that for $p\notin L^\infty$, we can simply define \begin{equation} \label{eq:mu-x-y} \mu(C,p)=(Z(F_x)\cdot Z(F_y))_p\,. \end{equation} See \cite[\S5 and~\S10]{Milnor} and \cite[Sections I.3.2 and~I.3.4]{GLS} for additional details as well as basic properties of these invariants. See also Remark~\hyperref{rem:invariants-informal} and Proposition~\hyperref{pr:invariants-identities}~below. \end{definition} \begin{remark} \label{rem:invariants-informal} All invariants listed in Definition~\hyperref{def:top-invariants} depend only on the topological type of the singularity at hand. The Milnor number $\mu(C,p)$ measures the complexity of the singular point~$p$ viewed as a critical point of~$F$. It is equal to the maximal number of critical points that a small deformation of~$F$ may have in the vicinity~of~$p$. The $\delta$-invariant is the maximal number of critical points lying on the deformed curve in a small deformation of the germ~$(C,p)$. The $\varkappa$-invariant is the number of ramification points of a generic projection onto a line of a generic deformation of the germ $(C,p)$. \end{remark} \begin{proposition}[{\cite[Propositions~3.35, 3.37, 3.38]{GLS}}] \label{pr:invariants-identities} Let $(C,p)$ be an isolated plane curve singularity as above. Then we have: \begin{align} \label{eq:milnor-formula} \mu(C,p)&=2\delta(C,p)-\br(C,p)+1 \quad \text{(Milnor's formula)}; \\ \label{eq:C*Cq} (C\cdot C_{(q)})_p&=\varkappa(C,p)+(C\cdot L_{pq})_p-\mt(C,p) \quad \text{for any $q\in{\mathbb P}^2\setminus\{p\}$;}\\ \label{eq:kappa=mu+mult-1} \varkappa(C,p)&=\mu(C,p)+\mt(C,p)-1. \end{align} \end{proposition} \begin{example} \label{ex:(x^2+z^2)(yz^2-x^3+x^2y)} Consider the quintic curve $C=Z(F)$ defined by the polynomial \[ F(x,y,z)=(x^2+z^2)(yx^2+yz^2-x^3)=(x+iz)(x-iz)(yx^2+yz^2-x^3). \] It has two points on the line at infinity~$L^\infty$, namely $p_1=(0,1,0)$ and $p_2=(1,1,0)$. At~$p_1$, the cubical component has an elliptic node, and the two line components are the two tangents to the cubic at~$p_1$. At~$p_2$, we have a smooth real local branch of the cubical component. 
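As a quick symbolic sanity check (ours, carried out with SymPy; it is not part of the computations below), one can confirm in the affine chart $y=1$ that the multiplicity of $C$ at $p_1$ equals $4$ and that the tangent cone of the cubical component at $p_1$ is $x^2+z^2=(x+iz)(x-iz)$, i.e.\ the union of the two line components.
\begin{verbatim}
import sympy as sp

x, z = sp.symbols('x z')
# affine chart y = 1 at p_1 = (0,1,0)
f = (x**2 + z**2) * (x**2 + z**2 - x**3)   # dehomogenized quintic
cubic = x**2 + z**2 - x**3                  # dehomogenized cubical component

# multiplicity of C at p_1 = lowest total degree of a term of f
print(min(sum(m) for m in sp.Poly(f, x, z).monoms()))          # 4

# tangent cone of the cubical component at p_1: factors as (x - I*z)*(x + I*z)
tcone = sum(c * x**a * z**b
            for (a, b), c in sp.Poly(cubic, x, z).terms() if a + b == 2)
print(sp.factor(tcone, gaussian=True))
\end{verbatim}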
Direct computations show that \begin{alignat*}{3} \mt(C,p_1)&=4 &\qquad \mt(C,p_2)&=1\\ \varkappa(C,p_1)&=16&\qquad \varkappa(C,p_2)&=0 \\ \br(C,p_1)&=4 &\qquad \br(C,p_2)&=1 \\ \delta(C,p_1)&=8&\qquad \delta(C,p_2)&=0 \\ \mu(C,p_1)&=13&\qquad \mu(C,p_2)&=0 \\ (C\cdot L^\infty)_{p_1}&=4 &\qquad (C\cdot L^\infty)_{p_2}&=1\\ (Z(F_x)\cdot Z(F_y))_{p_1}&=16 &\qquad (Z(F_x)\cdot Z(F_y))_{p_2}&=0 \\ (C\cdot F_x)_{p_1}&=16 &\qquad (C\cdot F_x)_{p_2}&=0 \end{alignat*} Note that \eqref{eq:mu-x-y} does not hold for $p=p_1$; this is not a contradiction since $p_1\in L^\infty$. \end{example} \section{Intersections of polar curves at infinity} \label{sec:polar-curves-at-infinity} In this section, we study the properties of intersection numbers of polar curves at their common points located at the line at infinity. \begin{lemma} \label{d1} Let $F(x,y,z)\in{\mathbb C}[x,y,z]$ be a non-constant homogeneous polynomial. \begin{itemize}[leftmargin=.3in] \item[{\rm (i)}] The set $Z(F_{(q')})\cap Z(F_{(q'')})\cap L^\infty$ does not depend on the choice of a pair of distinct points $q',q''\in L^\infty$. Moreover this set is contained in~$C$. \item[{\rm (ii)}] For a point $p\in C\cap L^\infty$, the intersection multiplicity $(Z(F_{(q')})\cdot Z(F_{(q'')}))_p$ does not depend on the choice of a pair of distinct points $q',q''\in L^\infty$. \end{itemize} \end{lemma} \begin{proof} Any other pair $\hat q',\hat q''$ of distinct points in $L^\infty$ satisfies \begin{equation} \label{eq:hat-q} \begin{array}{r} \hat q'=a_{11}q'+a_{12}q'',\\ \hat q''=a_{21}q'+a_{22}q'', \end{array} \text{\quad with\quad} \left| \begin{matrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{matrix}\right| \ne 0. \end{equation} Consequently \begin{align*} F_{(\hat q')}&=a_{11}F_{(q')}+a_{12}F_{(q'')},\\ F_{(\hat q'')}&=a_{21}F_{(q')}+a_{22}F_{(q'')} \end{align*} and the first claim in~(i) follows. To establish the second claim, set $q'=(1,0,0)$ and $q''=(0,1,0)$ (i.e., take the polar curves $Z(F_x)$ and $Z(F_y)$), and note that by Euler's formula~\eqref{eq:Euler}, $F$ vanishes as long as $F_z$, $F_y$ and $z$ vanish. To prove (ii), factor the nonsingular $2\times2$ matrix in \eqref{eq:hat-q} into the product of an upper triangular and a lower triangular matrix; then use that, for $bc\ne0$, \begin{equation} \label{e4e} (Z(aG_1+bG_2)\cdot Z(cG_1))_p=(Z(G_2)\cdot Z(G_1))_p\,. \qedhere \end{equation} \end{proof} \begin{definition} \redsf{Let $C$ be a plane projective curve $C$ that does not contain the line at infinity~$L^\infty$ as a component.} For a point $p\in L^\infty$, we denote \[ {\mu(C,p,L^\infty)} {\stackrel{\rm def}{=}} (Z(F_x)\cdot Z(F_y))_p\,. \] \redsf{Note that by Lemma~\hyperref {d1}(i)}, if $p$ lies on both $Z(F_x)$ and~$Z(F_y)$, then it necessarily lies on~$C$, so ${\mu(C,p,L^\infty)}$ can only be nonzero at points $p\in C\cap L^\infty$. \end{definition} \begin{remark} \label{r5} For $p\in C\cap L^\infty$, the number ${\mu(C,p,L^\infty)}$ may differ from the Milnor number~$\mu(C,p)$ (cf.~\eqref{eq:mu-x-y}), since the points $p,q',q''$ lie on the same line~$L^\infty$. Moreover, ${\mu(C,p,L^\infty)}$ is not determined by the topological type of the singularity~$(C,p)$, as it also depends on its ``relative position'' with respect to the line~$L^\infty$. The following example illustrates this phenomenon. Consider the curves $Z(x^2y-z^3-xz^2)$ and $Z(x^2y^2-yz^3)$. Each of them has an ordinary cusp (type~$A_2$) at the point $p=(0,1,0)$. 
On the other hand, we have ${\mu(C,p,L^\infty)}=4$ in the former case versus ${\mu(C,p,L^\infty)}=3$ in the latter. Additional examples can be produced using Proposition~\hyperref{prop:milnor-infinity} below. \end{remark} \begin{remark} \label{rem:Milnor-number-at-infinity} The intersection number ${\mu(C,p,L^\infty)}$ is also different from ``the Milnor number at infinity'' (as defined, for instance, in~\cite{ABLMH, Siersma-Tibar}) since ${\mu(C,p,L^\infty)}$ depends on the choice of a point $p\in L^\infty$. Moreover, ${\mu(C,p,L^\infty)}$ is not determined by the local topology of the configuration consisting of the germ $(C,p)$ and the line~$L^\infty$. \redsf{To see this, consider Example~\hyperref{e3.8} and Example~\hyperref{example:expressive-non-reg} with $p=p_1$. In both cases, $(C,p)$ is an ordinary cusp transversal to~$L^\infty$. In Example~\hyperref{e3.8}, we have $\mu(C,p,L^\infty)=4$, whereas in Example~\hyperref{example:expressive-non-reg}, we get $\mu(C,p,L^\infty)=\mu(C,p)+(C\cdot L^\infty)_p-1=3$ by Proposition~\hyperref{pr:nonmax-tangency})}. \end{remark} \pagebreak[3] \begin{proposition} \label{prop:milnor-infinity} Let $C=Z(F)$ be an algebraic curve in~${\mathbb P}^2$. Let~$p \in C\cap L^\infty$. Then \begin{equation} \label{eq:e2} {\mu(C,p,L^\infty)} \ge \mu(C,p)+(C\cdot L^\infty)_p-1. \end{equation} \end{proposition} The proof of Proposition~\hyperref{prop:milnor-infinity} will rely on two lemmas, one of them very~simple. \begin{lemma} \label{lem:F*Fq} For any $q\in L^\infty\setminus\{p\}$, we have \begin{equation*} (C\cdot Z(F_{(q)}))_p=\mu(C,p)+(C\cdot L^\infty)_p-1. \end{equation*} \end{lemma} \begin{proof} Using \eqref{eq:C*Cq} and \eqref{eq:kappa=mu+mult-1}, we obtain: \begin{align*} (C\cdot Z(F_{(q)}))_p&=\varkappa(C,p)+(C\cdot L^\infty)_p-\mt(C,p) =\mu(C,p)+(C\cdot L^\infty)_p-1. \qedhere \end{align*} \end{proof} \begin{lemma} \label{lem:e1} Let $Q$ be a local branch (i.e., a reduced, irreducible component) of the germ of the curve $Z(F_y)$ at $p$. Then \begin{equation} \label{eq:e1} (Z(zF_z)\cdot Q)_p\ge(Z(F)\cdot Q)_p\ . \end{equation} \end{lemma} \begin{proof} We will prove the inequality \eqref{eq:e1} inductively, by blowing up the point $p$. As a preparation step, we will apply a coordinate change intended to reduce the general case to a particular one, in which the blowing up procedure is easier to describe. Without loss of generality, we assume that $p=(1,0,0)$. In a neighborhood of $p$, we can set $x=1$ and then work in the affine coordinates $y,z$. Abusing notation, for a homogeneous polynomial $G(x,y,z)$, we will write $G(y,z)$ instead of $G(1,y,z)$. For any curve $Z(G)$, the intersection multiplicity $(G\cdot Q)_p$ can be computed as follows. Write $Q=\{f(y,z)=0\}$, where $f(y,z)$ is an irreducible element of the ring ${\mathbb C}\{y,z\}$ of germs at $p$ of holomorphic functions in the variables $y$ and~$z$. By \cite[Proposition~I.3.4]{GLS}, we have \[ f(y,z)=u(y,z)\prod_{1\le i\le k}(y-\xi_i(z^{1/k})), \quad u(y,z)\in{\mathbb C}\{y,z\},\quad u(0,0)\ne0, \] where each $\xi_i$ is a germ at zero of a holomorphic function vanishing at the origin. Then \cite[Propostion I.3.10 (Halphen's formula)]{GLS} yields \begin{equation} (G\cdot Q)_p=\sum_{1\le i\le k}\ord_0G(\xi_i(z),z). \label{e17}\end{equation} It follows that the variable change $(y,z)=\tau(y_1,z_1)\stackrel{\rm def}{=}(y_1,z_1^k)$ multiplies both sides of \eqref{eq:e1} by~$k$. For $g\in{\mathbb C}\{y,z\}$, let us denote $\tau^*g(y_1,z_1)=g\circ\tau(y_1,z_1))$. 
Then $\tau^*Q$ splits into $k$ smooth branches \[ Q_i=\{y_1-\xi_i(z_1)=0\}, \quad i=1,...,k, \] and it suffices to prove the inequality \eqref{eq:e1} with $Q$ replaced by each of the~$Q_i$'s. Moreover, with respect to $Q_i$, the desired inequality is of the same type. Namely, $\tau^*(F_y)=(\tau^*F)_{y_1}$, and hence $Q_i$ is a local branch of the polar curve $Z((\tau^*F)_{y_1})$ of the curve $\tau^*C=Z(\tau^*F)$. Furthermore, \begin{align*} z_1(\tau^*F)_{z_1}(y_1,z_1)&=z_1\tfrac{\partial}{\partial z_1}[F(y_1,z_1^k)]\\ &=z_1F_y(y_1,z_1^k)+kz_1^kF_z(y_1,z_1^k)\\ &=z_1(\tau^*F_y)(y_1,z_1)+k\cdot(\tau^*(zF_z))(y_1,z_1),\end{align*} which implies \[ (Z(z_1(\tau^*F)_{z_1})\cdot Q_i)_p=(Z(\tau_*(zF_z))\cdot Q_i)_p,\quad i=1,...,k. \] We have thus reduced the proof of \eqref{eq:e1} to the case where $Q$ is a smooth curve germ transversal to the line~$L^\infty$. To simplify notation, we henceforth write $C,F,y,z$ instead of $\varphi^*C,\varphi^*F,y_1,z_1$, respectively. We proceed by induction on $\mu(C,p)$. If $\mu(C,p)=0$, then $(C,p)$ is a smooth germ. If $C$ intersects $L^\infty$ transversally, then $(C,p)$ is given by \[ F(y,z)=ay+bz+\text{h.o.t.}, \quad a\ne0 \] (hereinafter h.o.t.\ is a shorthand for ``higher order terms''), implying \hbox{$F_y(0,0)=a\ne0$}. Thus the polar curve $Z(F_y)$ does not pass through~$p$; consequently both sides of \eqref{eq:e1} vanish. If $C$ is tangent to $L^\infty$ at $p$, then it is transversal to $Q$ at $p$, so we have \[ (C\cdot Q)_p=1=(Z(z)\cdot Q)_p\le(Z(zF_z)\cdot Q)_p\,. \] Suppose that $\mu(C,p)>0$. Then $m\stackrel{\rm def}{=}\mt(C,p)\ge2$. If $C$ and $Q$ intersect transversally at $p$ (i.e., have no tangent in common), then \[ (C\cdot Q)_p=\mt(C,p)\cdot\mt(Q,p)=m\cdot 1=m. \] On the other hand, $\mt(Z(F_z),p)\ge m-1$, and therefore \[ (Z(zF_z)\cdot Q)\ge\mt(Z(zF_z),p)\cdot \mt(Q,p)\ge m\cdot 1=m=(C\cdot Q). \] If $C$ and $Q$ have a common tangent at $p$, we apply the blowing-up $\pi:\widetilde{\mathbb P}^2\to{\mathbb P}^2$ of the plane at the point~$p$. For a curve $D$ passing through~$p$, let $D^*$ denote its \emph{strict transform}, i.e., the closure of the preimage $\pi^{-1}(D\setminus\{p\})$ in~$\widetilde{\mathbb P}^2$. (For more details, see \cite[Section~I.3.3, p.~185]{GLS}.). Since $Q$ is smooth, the strict transform $Q^*$ is smooth too, and intersects transversally the exceptional divisor $E$ at some point~$p^*$. More precisely, if $Q=Z(y-\eta z-\text{h.o.t.})$, then in the coordinates $(y_*,z_*)$ given by $y=y_*z_*$, $z=z_*$, we have $E=Z(z_*)$ and $p^*=(\eta,0)$. Since $C$ and its polar curve $Z(F_y)$ are tangent to the line $Z(y-\eta z)$, the lowest homogeneous form of $F(y,z)$ is divisible by $(y-\eta z)^2$, while the lowest homogeneous form of $F_y$ is divisible by $y-\eta z$ and, moreover, $\mt(Z(F_y),p)=m-1$. We now recall some properties of the blowing-up. For a curve $D\!=\!Z(G(y,z))$ pass\-ing through~$p$, we have \cite[Prop.~I.3.21 and I.3.34, and computations on p.~186]{GLS}: \begin{align*} (D^*\cdot Q^*)_{p^*}&=(D\cdot Q)_p-\mt(D,p)\cdot\mt(Q,p)=(D\cdot Q)_p-\mt(D,p); \\ \textstyle\sum_{q\in D^*\cap E}\delta(D^*,q)&=\delta(D,p)-\tfrac12 \mt(D,p)(\mt(D,p)-1); \\ D^*&=Z(z_*^{-\mt(D,p)}G(y_*z_*,z_*)). \end{align*} We see that \begin{align*} F^*(y_*,z_*)&=z_*^{-m}F(y_*z_*,z_*),\\ (F^*)_{y_*}(y_*,z_*)&= z_*^{1-m}F_y(y_*z_*,z_*)=(F_y)^*(y_*,z_*). \end{align*} Thus $Q^*$ is a local branch of the polar curve $(F^*)_{y_*}\!=\!0$ of the strict transform~$C^*$. Hence after the blowing-up we come to the original setting. 
Furthermore, \begin{align*} \mu(C^*,p^*)&=2\delta(C^*,p^*)-\br(C^*,p^*)+1\\ &\le2\delta(C^*,p^*)\\ &\le2\textstyle\sum_{q\in C^*\cap E}\delta(C^*,q)\\ &=2\delta(C,p)-m(m-1)\\ &=\mu(C,p)+\br(C,p)-1-m(m-1)\\ &\le\mu(C,p)+m-1-m(m-1)\\ &<\mu(C,p). \end{align*} So we can apply the induction assumption. Observe that $\mt(Z(F_z),p)=m-1+r$ for some $r\ge0$. It follows that \begin{equation} \begin{aligned} (Z(F)\cdot Q)_p&=(Z(F^*)\cdot Q^*)_{p^*}+m,\\ (Z(zF_z)\cdot Q)_p&=1+(Z(F_z)\cdot Q)_p=(Z((F_z)^*)\cdot Q^*)_{p^*}+m+r. \end{aligned} \label{eq:e3} \end{equation} Now \begin{align*}z_*(F^*)_{z_*}&=-mz_*^{-m}F(y_*z_*,z_*)+y_*z_*^{1-m}F_y(y_*z_*,z_*)+z_*^{1-m}F_z(y_*z_*,z_*)\\ &=-mF^*(y_*,z_*)+y_*(F^*)_{y_*}(y_*,z_*)+z_*^r(F_z)^*(y_*,z_*). \end{align*} Finally, the latter formula, the induction assumption, and (\hyperref{eq:e3}) imply \begin{align*} (Z(zF_z)\cdot Q)_p&=(Z((F_z)^*)\cdot Q^*)_{p^*}+m+r\\ &=(Z(z_*^r(F_z)^*)\cdot Q^*)_{p^*}+m\\ &\ge\min\{(Z(z_*(F^*)_{z_*})\cdot Q^*)_{p^*},\, (Z(F^*)\cdot Q^*)_{p^*}\}+m\\ &\!\!\!\stackrel{\eqref{eq:e1}}{=} (Z(F^*)\cdot Q^*)_{p^*}+m\\ &=(Z(F)\cdot Q)_p\,. \qedhere \end{align*} \end{proof} \begin{proof}[Proof of Proposition~\hyperref{prop:milnor-infinity}] We again assume $p=(1,0,0)$. Set $d=\deg(F)$. In the local affine coordinates $y, z$ (with $x=1$), Euler's formula~\eqref{eq:Euler} becomes \[ d F(1,y,z)=F_x(1,y,z)+yF_y(1,y,z)+zF_z(1,y,z). \] Consequently ${\mu(C,p,L^\infty)}=(Z(F_x)\cdot Z(F_y))_p=(Z(d F-zF_z)\cdot Z(F_y))_p$. Let ${\mathcal B}$ denote the set of local branches of the polar curve $F_y=0$ at~$p$. Then \begin{align*} {\mu(C,p,L^\infty)}&=(Z(d F-zF_z)\cdot Z(F_y))_p\\ &=\textstyle\sum_{Q\in{\mathcal B}} (Z(dF-zF_z)\cdot Q)_p\\ &\ge\textstyle\sum_{Q\in{\mathcal B}}\min\{(Z(F)\cdot Q)_p\,,(Z(zF_z)\cdot Q)_p\}\\ &=\textstyle\sum_{Q\in{\mathcal B}}(Z(F)\cdot Q)_p \qquad \text{(by \eqref{eq:e1})}\\ &=(Z(F)\cdot Z(F_y))_p \\ &=\mu(C,p)+(C\cdot L^\infty)_p-1 \quad \text{(by Lemma~\hyperref{lem:F*Fq}).} \qedhere \end{align*} \end{proof} \section{\texorpdfstring{$L^\infty$}{L}-regular curves} \label{sec:L-regular-curves} \begin{definition} \label{def:c-regular} Let $C=Z(F(x,y,z))\subset{\mathbb P}^2$ be a reduced plane algebraic curve which does not contain the line at infinity~$L^\infty$ as a component. The curve $C$ (or the polynomial~$F$) is called \emph{$L^\infty$-regular} if at each point $p\in C\cap L^\infty$, the formula~\eqref{eq:e2} becomes an equality: \begin{equation} \label{eq:c-regularity} {\mu(C,p,L^\infty)} = \mu(C,p)+(C\cdot L^\infty)_p-1. \end{equation} \end{definition} In the rest of this section, we provide $L^\infty$-regularity criteria for large classes of plane curves. \redsf{As mentioned in the Introduction, the technical material in this section can be skipped if the reader is willing to take it on faith and to view the requirement of $L^\infty$-regularity as a genericity condition that holds in all ``non-pathological'' examples arising in common applications.} \begin{proposition} \label{pr:regularity-via-milnor} Let $C\!=\!Z(F(x,y,z))\!\subset\!{\mathbb P}^2$ be a reduced algebraic curve of degree~$d$ which does not contain~$L^\infty$ as a component. Assume that the polynomial $F(x,y,1)$ has $\xi<\infty$ critical points, counted with multiplicities. Then we have \begin{equation} \label{eq:xi-d-mu} \xi \le d^2-3d+1-\sum_{p\in C\cap L^\infty} (\mu(C,p)-1), \end{equation} with equality if and only if $C$ is $L^\infty$-regular.
\end{proposition} \begin{proof} In view of Proposition~\hyperref{prop:milnor-infinity}, we have \begin{equation} \label{eq:L-inequality} \sum_{p\in C\cap L^\infty} {\mu(C,p,L^\infty)} \ge d+\sum_{p\in C\cap L^\infty} (\mu(C,p)-1), \end{equation} with equality if and only if $C$ is $L^\infty$-regular. Since $F$ has finitely many critical points, B\'ezout's theorem for the polar curves $Z(F_x)$ and~$Z(F_y)$ applies, yielding \begin{equation} \sum_{p\in C\cap L^\infty}{\mu(C,p,L^\infty)} = (d-1)^2 -\xi. \end{equation} The claim follows. \end{proof} \begin{example} \label{ex:(x^2+z^2)(yz^2-x^3+yx^2)-regular} As in Example~\hyperref{ex:(x^2+z^2)(yz^2-x^3+x^2y)}, consider the quintic curve $C=Z(F)$ defined by the polynomial \[ F(x,y,z)=(x^2+z^2)(yx^2+yz^2-x^3)=(x+iz)(x-iz)(yx^2+yz^2-x^3). \] Set $G(x,y)=F(x,y,1)=(x^2+1)(yx^2+y-x^3)$. Then \begin{align*} G_x&= 2x(yx^2+y-x^3)+(x^2+1)(2xy-3x^2) =(x^2+1)(4xy-3x^2)-2x^4, \\ G_y&=(x^2+1)^2, \end{align*} and we see that $G$ has no critical points in the complex $(x,y)$-plane; thus $\xi=0$. Using the values of Milnor numbers computed in Example~\hyperref{ex:(x^2+z^2)(yz^2-x^3+x^2y)}, we obtain: \[ d^2-3d+1-\sum_{p\in C\cap L^\infty} (\mu(C,p)-1) =25-15+1-(13-1)-(0-1) =0. \] It follows by Proposition~\hyperref{pr:regularity-via-milnor} that $C$ is $L^\infty$-regular. Alternatively, one can check directly that the equality~\eqref{eq:c-regularity} holds at $p_1$ and~$p_2$. \end{example} Recall that the \emph{geometric genus} of a plane curve~$C$ is defined by \begin{equation} \label{eq:g(C)} g(C)=\sum_{C'\in\Comp(C)} (g(C')-1)+1, \end{equation} where $\Comp(C)$ is the set of irreducible components of~$C$, and $g(C')$ denotes the genus of the normalization of a component~$C'$. \begin{proposition} \label{pr:hironaka-milnor} Let $C\!=\!Z(F(x,y,z))\!\subset\!{\mathbb P}^2$ be a reduced algebraic curve of degree~$d$. Suppose that \begin{itemize}[leftmargin=.2in] \item $C$ does not contain the line at infinity~$L^\infty$ as a component; \item all singular points of $C$ in the affine $(x,y)$-plane ${\mathbb P}\setminus L^\infty$ are ordinary nodes; \item the polynomial $F(x,y,1)\in{\mathbb C}[x,y]$ has finitely many critical points. \end{itemize} Let $\nu$ denote the number of nodes of~$C$ in the $(x,y)$-plane, and let $\xi$ denote the number of critical points of the polynomial~$F(x,y,1)$, counted with multiplicities. \linebreak[3] Then we have \begin{equation} \label{eq:xi-le} \xi \le 2g(C)-1+2\nu+\sum_{p\in C\cap L^\infty}\br(C,p), \end{equation} with equality if and only if $C$ is $L^\infty$-regular. \end{proposition} \begin{proof} By Hironaka's genus formula~\cite{Hir} (cf.\ also \cite[Chapter II, (2.1.4.6)]{GLS1}), we~have \begin{equation} \label{eq:Hironaka} g(C)=\tfrac{(d-1)(d-2)}{2}-\sum_{p\in\Sing(C)}\delta(C,p), \end{equation} where $\Sing(C)$ denotes the set of singular points of~$C$. Combining this with Milnor's formula~\eqref{eq:milnor-formula}, we obtain: \begin{align*} \sum_{p\in C\cap L^\infty} (\mu(C,p)-1) &=\sum_{p\in \Sing(C)} (\mu(C,p)-1) \\ &= \sum_{p\in \Sing(C)} (2\delta(C,p)-\br(C,p)) \\ &=(d-1)(d-2)-2g(C)-2\nu-\sum_{p\in C\cap L^\infty}\br(C,p). \end{align*} Therefore \[ d^2-3d+1-\sum_{p\in C\cap L^\infty} (\mu(C,p)-1) =2g(C)-1+2\nu+\sum_{p\in C\cap L^\infty}\br(C,p), \] and the claim follows from Proposition~\hyperref{pr:regularity-via-milnor}. 
\end{proof} Our next result (Proposition~\hyperref{pr:nonmax-tangency} below) shows that equation \eqref{eq:c-regularity} holds under certain rather mild local conditions, To state these conditions, we will need to recall some terminology and notation. \begin{definition}[{\cite[Definitions I.2.14--I.2.15]{GLS}}] \label{def:Newton-diagram} We denote by $\Gamma(G)$ the \emph{Newton diagram} of a bivariate polynomial~$G$, i.e., the union of the edges of the Newton polygon of~$G$ which are visible from the origin. The \emph{truncation} of~$G$ along an edge~$e$ of~$\Gamma(G)$ is the sum of all monomials in~$G$ corresponding to the integer points in~$e$. An isolated singularity of an affine plane curve $\{G(y,z)=0\}$ at the origin is called \emph{Newton nondegenerate} (with respect to the local affine coordinates~$(y,z)$) if the Newton diagram $\Gamma(G)$ intersects each of the coordinate axes, and the truncation of~$G$ along any edge of the Newton diagram is a quasihomogeneous polynomial without critical points in~$({\mathbb C}^*)^2$. \end{definition} \begin{proposition} \label{pr:nonmax-tangency} Let $C=Z(F(x,y,z))\subset{\mathbb P}^2$ be a reduced curve not containing the line at infinity $L^\infty=Z(z)$ as a component. Let $p\in C\cap L^\infty$. \redsf{Without loss of generality, assume that $p=(1,0,0)$.} Suppose that $p$ is either a smooth point of~$C$, or a singular point of $C$ such that \begin{align} \label{eq:newton-nondeg} &\!\!\text{the singularity $(C,p)$ is Newton nondegenerate, in the local coordinates~$y,z$;}\\ \label{eqeq} &\!\! (Z(y)\cdot C)_p<\deg C=\deg F. \end{align} Then \begin{equation} \label{eq:local-regularity} {\mu(C,p,L^\infty)} = \mu(C,p)+(C\cdot L^\infty)_p-1. \end{equation} Thus, if conditions~\eqref{eq:newton-nondeg}--\eqref{eqeq} hold at every point $p\in C\cap L^\infty$, then $C$ is $L^\infty$-regular. \end{proposition} \begin{remark} Although we did not find Proposition~\hyperref{pr:nonmax-tangency} in the literature, similar results---proved using similar tools---appeared before, see for example~\cite{ABLMH}. \end{remark} \begin{proof} If $p$ is a smooth point of $C$ with the tangent $L\ne L^\infty$, then \[ F(1,y,z)=ay+bz+\text{h.o.t.} \quad (a\ne0), \] implying that $p\not\in Z(F_y)$. The $L^\infty$-regularity follows: \[ (Z(F_x)\cdot Z(F_y))_p=0=\mu(C,p)+(C\cdot L^\infty)_p-1. \] If $C$ is smooth at $p$ with the tangent line $L^\infty$, then \[ F(1,y,z)=ay^n+bz+\text{h.o.t.} \quad (ab\ne0,\ n>1), \] which implies the Newton nondegeneracy as well as condition~\eqref{eqeq}: \[(Z(y)\cdot C)_p=1<n\le\deg C\ .\] Thus, this situation can be viewed as a particular case of the general setting where we have a singular point~$p$ satisfying conditions~\eqref{eq:newton-nondeg}--\eqref{eqeq}. We next turn to the treatment of this setting. We proceed in two steps. We first consider semi-quasihomogeneous singular points, and then move to the general case. We set $x=1$ in a neighborhood of $p$ and write $F(y,z)$ as a shorthand for~$F(1,y,z)$. (1) Assume that $\Gamma(F)$ is a segment with endpoints $(m,0)$ and~$(0,n)$. By the assumptions of the lemma, $m\le d=\deg F$ and $n<d$. The Newton nondegeneracy condition means that the truncation $F^{\Gamma(F)}$ of $F$ on $\Gamma(F)$ is a square-free quasihomogeneous polynomial. Assuming that $s=\gcd\{m,n\}$, $m=m_1s$, $n=n_1s$, we can write \[ F^{\Gamma(F)}(y,z)=\sum_{k=0}^sa_ky^{m_1k}z^{n_1(s-k)},\quad\text{where}\ a_0a_s\ne0 . 
\] Consider the family of polynomials \[ F_t(y,z)=t^{-mn}F(yt^n,zt^m)=F^{\Gamma(F)}(y,z)+\sum_{in+jm>mn}c_{ij}t^{in+jm-mn}\redsf{y^iz^j}, \quad t\in[0,1]. \] Note that $F_0=F^{\Gamma(F)}$ and the polynomials $F$ and $F_t$, $0<t<1$, differ by a linear change of the variables. This together with the lower semicontinuity of the intersection multiplicity implies \begin{equation}{\mu(C,p,L^\infty)}=(Z(F_y)\cdot Z(dF-zF_z))_p\le(Z(F^{\Gamma(F)}_y)\cdot Z(dF^{\Gamma(F)}-zF^{\Gamma(F)}_z))_p\ .\label{e10}\end{equation} Here \begin{align} F^{\Gamma(F)}_y&=\sum_{k=1}^sm_1ka_ky^{m_1k-1}z^{n_1(s-k)},\nonumber\\ dF^{\Gamma(F)}-zF^{\Gamma(F)}_z&= \sum_{k=0}^s(d-n_1(s-k))a_ky^{m_1k}z^{n_1(s-k)}. \label{e12} \end{align} Since $a_s\ne0$ and $n_1s=n<d$, these are nonzero polynomials, and moreover $F^{\Gamma(F)}_y$ splits into $l_1\ge0$ factors of type $z^{n_1}+\alpha y^{m_1}$, $\alpha\ne0$, and the factor $y^{m-1-l_1m_1}$, while $zF^{\Gamma(F)}_z-dF^{\Gamma(F)}$ splits into $l_2\ge0$ factors of type $z^{n_1}+\beta y^{m_1}$, $\beta\ne0$, and the factor~$z^{n-l_2n_1}$. Observe that the polynomials $F^{\Gamma(F)}_y$ and $zF^{\Gamma(F)}_z-dF^{\Gamma(F)}$ are coprime. (Otherwise, they would have a common factor $z^{n_1}+\gamma y^{m_1}$ with $\gamma\ne0$, which would also be a divisor of the polynomial \[ n_1yF^{\Gamma(F)}_y+m_1(zF^{\Gamma(F)}_z-dF^{\Gamma(F)})=m_1(n-d)F^{\Gamma(F)}. \] \redsf{Then $F^{\Gamma(F)}$ and its derivative $F_y^{\Gamma(F)}$ would have a common factor,} contradicting the square-freeness of~$F^{\Gamma(F)}$.) Since \begin{align*} (Z(y)\cdot Z(z))_p&=1, \\ (Z(y)\cdot Z(z^{n_1}+\beta y^{m_1}))_p&=n_1, \\ (Z(z)\cdot Z(z^{n_1}+\alpha y^{m_1}))_p&=m_1\quad \text{as}\quad \alpha\ne0, \\ (Z(z^{n_1}+\alpha y^{m_1})\cdot Z(z^{n_1}+\beta y^{m_1}))_p&=m_1n_1\quad\text{as}\quad\alpha\ne\beta, \end{align*} the right-hand side of (\hyperref{e10}) equals \begin{align*} &l_1l_2\cdot(Z(z^{n_1}+\alpha y^{m_1})\cdot Z(z^{n_1}+\beta y^{m_1}))_p+ l_1(n-l_2n_1)\cdot(Z(z)\cdot Z(z^{n_1}+\alpha y^{m_1}))_p \\ &\quad +(m-1-l_1m_1)l_2\cdot(Z(y)\cdot Z(z^{n_1}+\beta y^{m_1}))_p +(m-1-l_1m_1)(n-l_2n_1)\cdot(Z(y)\cdot Z(z))_p \\ =&(m-1)n\\ =&(m-1)(n-1)+m-1\\ =&\mu(C,p)+(C\cdot L^\infty)_p-1\ , \end{align*} which together with (\hyperref{eq:e2}) yields the desired equality. (2) Suppose that $\Gamma(F)$ consists of $r\ge2$ edges $\sigma^{(1)},...,\sigma^{(r)}$ successively ordered so that $\sigma^{(1)}$ touches the axis of exponents of $y$ at the point $(m,0)$, where $m=(C\cdot L^\infty)_p$, and $\sigma^{(r)}$ touches the axis of exponents of $z$ at the point $(0,n)$, where $n=(C\cdot Z(y))_p<d=\deg F$. By the hypotheses of the lemma, for any edge $\sigma=\sigma^{(i)}$, the truncation $F^\sigma(y,z)$ is the product of $y^az^b$, $a,b\ge0$, and of a quasihomogeneous, square-free polynomial $F_0^\sigma$, whose Newton polygon $\Delta(F_0^\sigma)$ is the segment $\sigma_0$ with endpoints on the coordinate axes, obtained from $\sigma$ by translation along the vector $(-a,-b)$. Note that the minimal exponent of $z$ in the polynomial $dF(0,z)-zF_z(0,z)$ is $n$, and hence \begin{equation}(Z(F_y)\cdot Z(dF-zF_z))_p=(Z(yF_y)\cdot Z(dF-zF_z))_p-n\ .\label{e11}\end{equation} Next, we note that the Newton diagram $\Gamma(yF_y)$ contains entire edges $\sigma^{(1)},...,\sigma^{(r-1)}$ and some part of the edge $\sigma^{(r)}$, while $\Gamma(dF-zF_z)=\Gamma(F)$, since the monomials of $dF-zF_z$ and of $F$ on the Newton diagram $\Gamma(F)$ are in bijective correspondence, and the corresponding monomials differ by a nonzero constant factor, cf.
(\hyperref{e12}). By \cite[Proposition~I.3.4]{GLS}, we can split the polynomial $F$ inside the ring ${\mathbb C}\{y,z\}$ into the product $$F=\varphi_1...\varphi_r,\quad\Gamma(\varphi_i)=\sigma^{(i)}_0,\ \varphi_i^{\sigma^{(i)}_0}= F_0^{\sigma^{(i)}},\ i=1,...,r,$$ and similarly $$dF-zF_z=\psi_1...\psi_r,\quad\Gamma(\psi_i)=\sigma^{(i)}_0,\ \psi_i^{\sigma^{(i)}_0}= (dF-zF_z)_0^{\sigma^{(i)}},\ i=1,...,r,$$ $$yF_y=\theta_1...\theta_r,\quad \Gamma(\theta_i)=\sigma^{(i)}_0,\ \theta_i^{\sigma^{(i)}_0}= (yF_y)_0^{\sigma^{(i)}},\ i=1,...,r-1,$$ while $$\theta_r^{\sigma_0^{(r)}}=(yF_y)^{\sigma^{(r)}}\cdot z^{-c}$$ for $c$ the minimal exponent of $z$ in $(yF_y)^{\sigma^{(r)}}$. Thus, \begin{equation}(Z(yF_y)\cdot Z(dF-zF_z))_p=\sum_{1\le i,j\le r}(Z(\psi_i)\cdot Z(\theta_j))_p\ .\label{e14}\end{equation} We claim that \begin{equation}(Z(\psi_i)\cdot Z(\theta_j))_p=(Z(\varphi_i)\cdot Z(\theta_j))_p\quad \text{for all}\ 1\le i,j\le r\ .\label{e15}\end{equation} Having this claim proven, we derive from (\hyperref{e14}) that \begin{align*}(Z(yF_y)\cdot Z(dF-zF_z))_p&=\sum_{1\le i,j\le r}(Z(\varphi_i)\cdot Z(\theta_j))_p\\ &=(Z(yF_y)\cdot Z(F))_p=(Z(y)\cdot Z(F))_p+(Z(F_y)\cdot Z(F))_p\\ &=n+\varkappa(C,p)+(C\cdot L^\infty)_p-\mt(C,p)\\ &=n+(\mu(C,p)+\mt(C,p)-1)+(C\cdot L^\infty)_p-\mt(C,p)\\ &=n+\mu(C,p)+(C\cdot L^\infty)_p-1\ ,\end{align*} which completes the proof in view of (\hyperref{e11}). The equality (\hyperref{e15}) follows from the fact that both sides of the relation depend only on the geometry of the segments $\sigma_0^{(i)}$ and $\sigma_0^{(j)}$. Suppose that $1\le i<j\le r$. We have $\sigma^{(i)}=[(m',0),(0,n')]$, $\sigma^{(j)}=[(m'',0),(0,n'')]$ with $\frac{m'}{n'}>\frac{m''}{n''}$. By the Newton-Puiseux algorithm \cite[pp.~165--170]{GLS}, the function $\varphi_i(y,z)$ (or $\psi_i(y,z)$) splits into the product of $u(y,z)\in{\mathbb C}\{y,z\}$, $u(0,0)\ne0$, and $n'$ factors of the form $$z-\alpha y^{m'/n'}+\text{h.o.t.},\quad \alpha\ne0,$$ and then by (\hyperref{e17}) we get $$(Z(\varphi_i)\cdot Z(\theta_j))_p=(Z(\psi_i)\cdot Z(\theta_j))_p=m''n'.$$ This holds even for $j=r$ since the only monomial of $\theta_r(y,z)$ that comes into play is $\beta y^{m''}$, $\beta\ne0$. The case $1\le j<i\le r$ is settled analogously. Now suppose \hbox{$1\le i=j\le r$}. As we observed in the first part of the proof, the pairs of quasihomogeneous polynomials $\varphi_i^{\sigma^{(i)}_0}$ and $\theta_i^{\sigma^{(i)}_0}$, $\psi_i^{\sigma^{(i)}_0}$ and $\theta_i^{\sigma^{(i)}_0}$ are coprime (even for $i=r$!). Then the above computation yields $$(Z(\varphi_i)\cdot Z(\theta_i))_p=(Z(\psi_i)\cdot Z(\theta_i))_p=m'n'\ ,$$ where $\sigma^{(i)}_0=[(m',0),(0,n')]$. \end{proof} The following example shows that condition \eqref{eqeq} in Proposition~\hyperref{pr:nonmax-tangency} cannot be removed. \begin{example}\label{e3.8} Consider the cubic $C=Z(xy^2-z^3-yz^2)$ (cf.\ Remark~\hyperref{r5}). It has a Newton nondegenerate singular point $p=(1,0,0)\in L^\infty$, an ordinary cusp with the tangent line $L=Z(y)\ne L^\infty$. Thus $(C\cdot L)_p=3=\deg(C)$, so~\eqref{eqeq} fails. Also, \begin{align*} {\mu(C,p,L^\infty)}=(Z(y^2) \cdot Z(2xy-z^2))_p&=4, \\ \mu(C,p)+(C\cdot L^\infty)_p-1=2+2-1&=3, \end{align*} so \eqref{eq:local-regularity} fails as well. \end{example} For another (more complicated) instance of this phenomenon, see Example~\hyperref{example:expressive-non-reg}. \begin{corollary} Let $C=Z(F(x,y,z))\subset{\mathbb P}^2$ be a reduced curve not containing the line at infinity $L^\infty=Z(z)$ as a component.
Let $G(x,y)=F(x,y,1)$. Assume that the Newton polygon $\Delta(G)$ intersects each of the coordinate axes in points different from the origin, and the truncation of~$G$ along any edge of the boundary~$\partial\Delta(G)$ not visible from the origin is a square-free polynomial, except possibly for factors of the form $x^i$ or~$y^j$. Then $C$ is $L^\infty$-regular. \end{corollary} \begin{proof} The intersection $C\cap L^\infty$ is determined by the top degree form of $G(x,y)$, which has the form $x^iy^jf(x,y)$, with $f(x,y)$ a square-free homogeneous polynomial. Hence $C\cap L^\infty$ consists of the points $(0,1,0)$ (if $i>0$), $(1,0,0)$ (if $j>0$), and $\deg(f)$ other points, at which $C$ is smooth and $L^\infty$-regular (see Proposition~\hyperref{pr:nonmax-tangency}). The Newton diagram of each of the points $(0,1,0)$ and $(1,0,0)$ consists of some edges of $\partial\Delta(G)$ mentioned in the lemma, and therefore these points (if they lie on~$C$) satisfy the requirements of Proposition \hyperref{pr:nonmax-tangency}. \end{proof} \begin{remark} \label{rem:Linf-reducible} A reducible plane curve~$C=Z(F)$ with $L^\infty$-regular components~does not have to be $L^\infty$-regular. For example, take $F=(xy-1)(xy-2)$: each factor is $L^\infty$-regular, but the product is not, since $F_x$ and $F_y$ have a common divisor $2xy-3$. Conversely, a curve may be $L^\infty$-regular even when one of its components is not. \redsf{For example, let $F=x^2y+x^3+x^2z+xz^2+z^3$. Then $Z((x-y)F)$ is $L^\infty$-regular by Proposition~\hyperref{pr:nonmax-tangency}. On the other hand, $L^\infty$-regularity of $Z(F)$ fails at $p=(0,1,0)$: direct computation yields $(Z(F_x)\cdot Z(F_y))_p=4$, whereas $\mu(Z(F),p)+(Z(F)\cdot L^\infty)_p-1=3$.} \end{remark} \section{Polynomial and trigonometric curves} \label{sec:poly-trig} In Sections~\hyperref{sec:poly-trig} and~\hyperref{sec:expressive}, we introduce several classes of affine (rather than projective) plane curves. Before we begin, let us clarify what we mean by a real affine plane~curve. There are two different notions here: an algebraic and a topological one: \begin{definition} \label{def:curves} As usual, a reduced real \emph{algebraic curve}~$C$ in the complex affine plane ${\mathbb A}^2\cong{\mathbb C}^2$ is the vanishing set \[ C=V(G)=\{(x,y)\in{\mathbb C}^2\mid G(x,y)=0\} \] of a squarefree bivariate polynomial $G(x,y)\in{\mathbb R}[x,y]\subset{\mathbb C}[x,y]$. We view $C$ as a subset of~${\mathbb A}^2$, and implicitly identify it with the polynomial~$G(x,y)$ (viewed up to a constant nonzero factor), or with the principal ideal generated by it, the ideal of polynomials vanishing on~$C$. Alternatively, one can consider a ``topological curve''~$C_{{\mathbb R}}$ in the real affine $(x,y)$-plane~${\mathbb R}^2$, defined as the set of real points of an algebraic curve~$C$ as above: \[ C_{{\mathbb R}}=V_{{\mathbb R}}(G)=\{(x,y)\in{\mathbb R}^2\mid G(x,y)=0\}. \] In contrast to the real algebraic curve $V(G)\subset{\mathbb C}^2$, the real algebraic set~$V_{{\mathbb R}}(G)$---even when it is one-dimensional---does not determine the polynomial~$G(x,y)$ up to a scalar factor. In other words, an algebraic curve~$C$ is not determined by the set of its real points~$C_{{\mathbb R}}$, even when $C_{{\mathbb R}}$ ``looks like'' an algebraic curve. (Roughly speaking, this is because $C$ can have ``invisible'' components which either have no real points at all, or all such points are isolated in~${\mathbb R}^2$.) 
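For instance, the polynomials $G(x,y)=(x^2+y^2+1)(y-x^2)$ and $\tilde G(x,y)=y-x^2$ define different algebraic curves with the same real point set, the parabola $y=x^2$: the factor $x^2+y^2+1$ contributes a component with no real points at all.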
There is however a canonical choice, provided $C_{{\mathbb R}}$ is nonempty and without isolated points: we can let $C$ be the Zariski closure of~$C_{{\mathbb R}}$, or equivalently let $G(x,y)$ be the \emph{minimal polynomial} of~$C_{{\mathbb R}}$, a real polynomial of the smallest possible degree satisfying \hbox{$V_{{\mathbb R}}(G)=C_{{\mathbb R}}$}. (The~minimal polynomial is defined up to a nonzero real factor.) \end{definition} Throughout this paper, we switch back and forth between a projective curve $Z(F(x,y,z))$ and its affine counterpart $V(G(x,y))$, where $G(x,y)=F(x,y,1)$. \linebreak[3] (Remember that the line at infinity $L^\infty=Z(z)={\mathbb P}^2\setminus{\mathbb A}^2$ is fixed throughout.) For example, we say that $V(G)$ is $L^\infty$-regular if and only if $Z(F)$ is $L^\infty$-regular, cf.\ Definition~\hyperref{def:c-regular}. \begin{definition} \label{def:poly-curve} Let $C$ be a complex curve in the affine $(x,y)$-plane. We say that $C$ is a \emph{polynomial curve} if it has a polynomial para\-metrization, i.e., if there exist polynomials $X(t), Y(t)\in{\mathbb C}[t]$ such that the map $t\mapsto (X(t),Y(t))$ is a (birational, i.e., generically one-to-one) parametrization of~$C$. A projective algebraic curve~$C=\{F(x,y,z)=0\}\subset{\mathbb P}^2$ is called \emph{polynomial} if $C$ does not contain the line at infinity $L^\infty=\{z=0\}$, and the portion of $C$ contained in the affine $(x,y)$-plane (i.e., the curve $\{F(x,y,1)=0\}$) is an affine polynomial curve. \end{definition} \begin{remark} Not every polynomial map defines a polynomial parametrization. For example, $t\mapsto (t^2-t,t^4-2t^3+t)$ is not a polynomial (or~birational) parametrization, since it is not generically one-to-one: $(X(t),Y(t))=(X(1-t),Y(1-t))$. \end{remark} \begin{example} \label{ex:poly-with-elliptic-node} The cubic $y^2=x^2(x-1)$ is a real polynomial curve, with a polynomial parametrization $t\mapsto (t^2+1,t(t^2+1))$. Note that this curve has an elliptic node $(0,0)$, attained for imaginary parameter values $t=\pm\sqrt{-1}$. \end{example} \begin{example} The ``witch of Agnesi'' cubic $x^2y+y-1$ is rational but not polynomial. Indeed, if $X(t)$ is a positive-degree polynomial in~$t$, then $Y(t)=\frac{1}{(X(t))^2+1}$ is not. \end{example} \begin{example}[Irreducible Chebyshev curves] Recall that the \emph{Chebyshev polynomials} of the first kind are the univariate polynomials~$T_a(x)$ (here $a\in{\mathbb Z}_{>0}$) defined~by \begin{equation} \label{eq:chebyshev-poly} T_a(\cos\varphi)=\cos(a\varphi). \end{equation} The polynomial $T_a(x)$ has integer coefficients, and is an even (resp., odd) function of~$x$ when $a$ is even (resp., odd). We note that $T_a(T_b(t))=T_b(T_a(t))=T_{ab}(t)$. Let $a$ and $b$ be coprime positive integers. The \emph{Chebyshev curve} with parameters $(a,b)$ is given by the equation \begin{equation} \label{eq:Chebyshev-curve} T_a(x)+T_b(y)=0. \end{equation} It is not hard to see that this curve is polynomial; let us briefly sketch why. (For a detailed exposition, see \cite[Section~3.9]{Fischer}.) Without loss of generality, let us assume that $a$ is odd. Then the $(a,b)$-Chebyshev curve has a polynomial parametrization $t\mapsto (-T_b(t), T_a(t))$. Indeed, $T_a(-T_b(t))+T_b(T_a(t))=-T_{ab}(t)+T_{ab}(t)=0$. To illustrate, consider the Chebyshev curves with parameters $(3,2)$ and $(3,4)$ shown in Figure~\hyperref{fig:chebyshev-34-32}. 
The $(3,2)$-Chebyshev curve is a nodal Weierstrass cubic \begin{equation} \label{eq:32-Chebyshev} 4x^{3}-3x+2y^{2}-1=0, \end{equation} or parametrically $t\mapsto (-2t^2+1,4t^3-3t)$. The $(3,4)$-Chebyshev curve (see Figure~\hyperref{fig:chebyshev-34-32}) is a sextic given by the equation \begin{equation} \label{eq:34-Chebyshev} 4x^{3}-3x+8y^{4}-8y^{2}+1=0, \end{equation} or by the polynomial parametrization $t\mapsto (-8t^4+8t^2-1, 4t^3-3t)$. \end{example} \begin{figure} \caption{The Chebyshev curves with parameters $(3,2)$ and $(3,4)$.} \label{fig:chebyshev-34-32} \end{figure} \begin{lemma} \label{lem:poly-curve} For a real plane algebraic curve~$C$, the following are equivalent: \begin{itemize}[leftmargin=.2in] \item[{\rm (1)}] $C$ is polynomial; \item[{\rm (2)}] $C$ has a parametrization $t\mapsto (X(t),Y(t))$ with $X(t),Y(t)\in{\mathbb R}[t]$; \item[{\rm (3)}] $C$ is rational, with a unique local branch at infinity. \end{itemize} \end{lemma} \begin{proof} The equivalence (1)$\Leftrightarrow$(3) (for complex curves) is well known; see, e.g.,~\cite{Abhyankar-1988}. The implication (2)$\Rightarrow$(1) is obvious. It remains to show that (3)$\Rightarrow$(2). The local branch of $C$ at infinity must by real, since otherwise complex conjugation would yield another such branch. Consequently, the set of real points of~$C$ has a one-dimensional connected component which contains the unique point $p\in C\cap L^\infty$. It follows that the normalization map $\boldsymbol{n}:{\mathbb P}^1\to C\hookrightarrow{\mathbb P}^2$ (which is nothing but a rational parametrization of $C$) pulls back the complex conjugation on $C$ to the antiholomorphic involution $c:{\mathbb P}^1\to{\mathbb P}^1$ which is a reflection with a fixed point set $\Fix(c)\simeq S^1$, and $p$ lifts to a point in~$\Fix(c)$. (There is another possible antiholomorphic involution on ${\mathbb P}^1$, the antipodal one, corresponding to real plane curves with a finite real point~set.) Thus, we can choose coordinates $(t_0,t_1)$ on~${\mathbb P}^1$ so that $c(t_0,t_1)=(\overline t_0,\overline t_1)$ and the preimage of $p$ is $(0,1)$. Hence the map $\boldsymbol{n}$ can be expressed~as \[ x=X(t_0,t_1),\quad y=Y(t_0,t_1),\quad z=t_0^d\,, \] where $(t_0,t_1)\in{\mathbb P}^1$, $X$~and $Y$ are bivariate homogeneous polynomials of degree $d=\deg C$, and by construction \[ \overline x=X(\overline t_0,\overline t_1),\quad \overline y=Y(\overline t_0,\overline t_1), \] which means that $X$ and $Y$ have real coefficients. \end{proof} Recall that a \emph{trigonometric polynomial} is a finite linear combination of functions of the form $t\mapsto \sin(kt)$ and/or $t\mapsto \cos(kt)$, with $k\in{\mathbb Z}_{\ge 0}$. \begin{definition} \label{def:trig-curve} We say that a real \redsf{algebraic} curve $C$ in the affine $(x,y)$-plane is a \emph{trigonometric curve} if there exist real trigonometric polynomials $X(t)$ and~$Y(t)$ such that $t\mapsto (X(t),Y(t))$ is a parametrization of~$C_{{\mathbb R}}$, the set of real points of~$C$, generically one-to-one for $t\in [0,2\pi)$. A projective real algebraic curve~$C=\{F(x,y,z)=0\}\subset{\mathbb P}^2$ is called \emph{trigonometric} if $C$ does not contain the line at infinity $L^\infty=\{z=0\}$, and the portion of $C$ contained in the affine $(x,y)$-plane is an affine trigonometric curve. \end{definition} \begin{remark} Not every trigonometric map gives a trigonometric parametrization. 
For example, $t\mapsto (\cos(t),\cos(2t))$ is not a trigonometric parametrization of its image (a segment of the parabola $y\!=\!2x^2\!-\!1$), since it is not generically one-to-one on~$[0,2\pi)$. \end{remark} \begin{example} The most basic example of a trigonometric curve is the circle $t\mapsto (\cos(t),\sin(t))$, or more generally an ellipse \[ t\mapsto (A\cos(t),B\sin(t)) \quad (A, B\in{\mathbb R}_{>0}). \] \end{example} \begin{example}[Lissajous curves] \label{ex:Lissajous} Let $k$ and $\ell$ be coprime positive integers, with $\ell$~odd. The \emph{Lissajous curve} with parameters $(k,\ell)$ is a trigonometric curve defined by the parametrization \[ t\mapsto(\cos(\ell t),\sin(kt)). \] The algebraic equation for this curve is \begin{equation} \label{eq:Lissajous-algebraic} T_{2k}(x)+T_{2\ell }(y)=0, \end{equation} cf.~\eqref{eq:chebyshev-poly}. (Indeed, $T_{2k}(\cos(\ell t))+T_{2\ell}(\cos(\frac{\pi}{2}-kt))=\cos(2k\ell t)+\cos(\ell\pi-2k\ell t)=0$.) Note that \eqref{eq:Lissajous-algebraic} looks exactly like~\eqref{eq:Chebyshev-curve}, except that now the indices $2k$ and $2\ell$ are not coprime (although $k$ and~$\ell$ are). To illustrate, the $(2,3)$-Lissajous curve is given by the equation \begin{equation} \label{eq:32-Lissajous} 8x^4-8x^2+1+32y^6-48y^4+18y^2-1=0, \end{equation} or by the trigonometric parametrization \begin{equation} \label{eq:32-Lissajous-parametric} t\mapsto (\cos(3t),\sin(2t)). \end{equation} Several Lissajous curves, including this one, are shown in Figure~\hyperref{fig:lissajous-12-13-32-34}. \end{example} \begin{figure} \caption{The Lissajous curves with parameters $(2,1)$, $(3,1)$, $(2,3)$, and $(4,3)$.} \label{fig:lissajous-12-13-32-34} \end{figure} \begin{example}[Rose curves] \label{ex:rose-curves} A \emph{rose curve} with parameter~$q=\frac{a}{b}\in{\mathbb Q}_{>0}$ is defined in polar coordinates by $r=\cos(q\theta)$, $\,\theta\in [0,2\pi b)$. While a general rose curve has a complicated singularity at the origin, it becomes nodal when $q\!=\!\frac{1}{2k+1}$, with $k\!\in\!{\mathbb Z}_{>0}$. In that case, we get a ``multi-lima\c con,'' defined in polar coordinates by $r=\cos\bigl(\tfrac{\theta}{2k+1}\bigr)$, or equivalently by \begin{equation} \label{eq:multi-limacon} r T_{2k+1}(r)-x=0. \end{equation} Note that the left-hand side of~\eqref{eq:multi-limacon} is a polynomial in $r^2=x^2+y^2$, so it is an algebraic equation in $x$ and~$y$. This is a trigonometric curve, with a trigonometric parametrization given by \[ t\mapsto \Bigl(\frac{\cos(kt)+\cos((k+1)t)}{2}, \frac{\sin(kt)+\sin((k+1)t)}{2}\Bigr). \] The cases $k=1,2,3$ are shown in Figure~\hyperref{fig:multi-limacons}. In the special case $k=1$, we get $rT_3(r)=4r^4-3r^2$, and the equation~\eqref{eq:multi-limacon}~becomes \begin{equation} \label{eq:limacon} 4(x^2+y^2)^2-3(x^2+y^2)-x=0. \end{equation} This quartic curve is one of the incarnations of the \emph{lima\c con} of \'Etienne Pascal. 
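As a small symbolic sanity check (an illustrative sympy sketch of ours, not part of the argument), one can verify that the trigonometric parametrization above does satisfy equation~\eqref{eq:multi-limacon} for the cases $k=1,2,3$:
\begin{verbatim}
from sympy import symbols, chebyshevt, cos, sin, sqrt, expand, expand_trig

t = symbols('t', real=True)
r, s = symbols('r s', positive=True)

for k in [1, 2, 3]:
    X = (cos(k*t) + cos((k + 1)*t)) / 2      # trigonometric parametrization
    Y = (sin(k*t) + sin((k + 1)*t)) / 2
    # r*T_{2k+1}(r) is even in r, hence a polynomial in s = r^2 = x^2 + y^2:
    lhs = expand(r * chebyshevt(2*k + 1, r)).subs(r, sqrt(s))
    eq = lhs.subs(s, X**2 + Y**2) - X        # left-hand side of the equation
    eq = expand(expand_trig(eq)).subs(sin(t)**2, 1 - cos(t)**2)
    assert expand(eq) == 0                   # identically zero in t
\end{verbatim}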
\end{example} \begin{figure} \caption{The rose curves $r=\cos\bigl(\tfrac{\theta}{2k+1}\bigr)$ for $k=1,2,3$.} \label{fig:multi-limacons} \end{figure} \begin{lemma} \label{lem:trig-curve} For a real plane algebraic curve~$C$, the following are equivalent: \begin{itemize}[leftmargin=.2in] \item[{\rm (1)}] $C$ is trigonometric; \item[{\rm (2)}] there exist polynomials $P(\varphi),Q(\varphi)\in{\mathbb C}[\varphi]$ such that the map ${\mathbb C}^*\to{\mathbb A}^2$ given by \begin{equation} \label{eq:param-PQ} \varphi\longmapsto (P(\varphi)+\overline P(\varphi^{-1}),Q(\varphi)+\overline Q(\varphi^{-1})) \end{equation} is a birational parametrization of~$C$. \item[{\rm (3)}] $C$ is rational, with two complex conjugate local branches at infinity and with an infinite real point set.\end{itemize} \end{lemma} \pagebreak[3] \begin{proof} $\boxed{(1)\Rightarrow(2)}$ The correspondence $t\leftrightarrow \varphi=\exp(t\sqrt{-1})$ establishes a bianalytic isomorphism between $(0,2\pi)$ and $S^1\setminus\{1\}$. (Here $S^1=\{|\varphi|=1\}\subset{\mathbb C}$.) Under this correspondence, we have \begin{equation} \label{e4.5} a\cos(kt)+b\sin(kt)=\tfrac{a-b\sqrt{-1}}{2}\varphi^k+\tfrac{a+b\sqrt{-1}}{2}\varphi^{-k} \quad (a,b\in{\mathbb R}), \end{equation} so any trigonometric polynomial in~$t$ transforms into a Laurent polynomial in~$\varphi$ of the form $P(\varphi)+\overline P(\varphi^{-1})$. Thus a trigonometric parametrization of a curve~$C$ yields its parametrization of the form~\eqref{eq:param-PQ}; this parametrization is generically one-to-one along $S^1$ and therefore extends to a birational map ${\mathbb P}^1\to C$. $\boxed{(2)\Rightarrow(1)}$ A parametrization \eqref{eq:param-PQ} of a curve~$C$ sends the circle~$S^1$ generically one-to-one to~$C_{{\mathbb R}}$, the real point set of~$C$, see the formulas~\eqref{e4.5}. The same formulas \eqref{e4.5} convert the parametrization \eqref{eq:param-PQ} restricted to the circle~$S^1$ into a trigonometric parametrization $t\in[0,2\pi)\mapsto(X(t),Y(t))$ of~$C_{{\mathbb R}}$. $\boxed{(2)\Rightarrow(3)}$ A parametrization (\hyperref{eq:param-PQ}) intertwines the standard real structure in~${\mathbb P}^2$ and the real structure defined by the involution $c(\varphi)=\overline\varphi^{-1}$ on~${\mathbb C}\setminus\{0\}$. Thus, it takes the set $S^1=\Fix(c)$ to the set~$C_{{\mathbb R}}$ of real points of~$C$, while the conjugate points $\varphi=0$ and $\varphi=\infty$ of~${\mathbb P}^1$ go to the points of~$C$ at infinity determining two complex conjugate local branches at infinity. $\boxed{(3)\Rightarrow(2)}$ Assuming~(3), the normalization map $\boldsymbol{n}:{\mathbb P}^1\to C$ pulls back the~standard complex conjugation in ${\mathbb P}^2$ to the standard complex conjugation on~${\mathbb P}^1$, while the circle ${\mathbb R}{\mathbb P}^1$ maps to the one-dimensional connected component of~$C_{{\mathbb R}}$, and some complex conjugate points $\alpha,\overline\alpha\in{\mathbb P}^1$ go to infinity. The automorphism of ${\mathbb P}^1$ defined~by \[ \varphi=\frac{s-\alpha}{s-\overline\alpha} \] takes the points $s=\alpha$ and $s=\overline\alpha$ to $0$ and $\infty$, respectively, the circle ${\mathbb R}{\mathbb P}^1$ to the circle~$S^1$, and the standard complex conjugation to the involution~$c$, see above. Hence the parametrization $\boldsymbol{n}:{\mathbb P}^1\to C$ goes to a parametrization (\hyperref{eq:param-PQ}).
\end{proof} \begin{lemma} \label{lem:real-points} Let $C$ be a real polynomial (resp., trigonometric) nodal plane curve, with a parametrization $\{(X(t),Y(t))\}$ as in Definition~\hyperref{def:poly-curve} (resp., Definition~\hyperref{def:trig-curve}). Assume that $C$ has no elliptic nodes. Then $C_{{\mathbb R}}=\{(X(t),Y(t))\mid t\in{\mathbb R}\}$. \end{lemma} \begin{proof} This lemma follows from the well known fact (see, e.g., \cite[Proposition~1.9]{IMR}) that the real point set of a real nodal rational curve~$C$ in ${\mathbb R}{\mathbb P}^2$ is the disjoint union of a circle ${\mathbb R}{\mathbb P}^1$ generically immersed in ${\mathbb R}{\mathbb P}^2$ and a finite set of elliptic nodes. \end{proof} If we allow elliptic nodes, the conclusion of Lemma~\hyperref{lem:real-points} can fail, cf.\ Example~\hyperref{ex:poly-with-elliptic-node}. Recall that an irreducible nodal plane curve of degree~$d$ has at most $\frac{(d-1)(d-2)}{2}$ nodes, with the upper bound only attained for rational curves. \redsf{In the case of trigonometric or polynomial curves, maximizing the number of nodes has direct geometric consequences:} \begin{proposition}[{\rm G.~Ishikawa \cite[Proposition~1.4]{Ishikawa-trigonometric}}] \label{pr:trig-no-flex} Let $C$ be a trigonometric curve of degree~$d$ with $\frac{(d-1)(d-2)}{2}$ real hyperbolic nodes. Then $C$ has no inflection points. \end{proposition} \begin{proposition} \label{pr:poly-no-flex} A \redsf{(complex)} plane polynomial curve of degree $d$ with $\frac{(d-1)(d-2)}{2}$ nodes has no inflection points. \end{proposition} \begin{proof} By Hironaka's genus formula (\hyperref{eq:Hironaka}), the projective closure $\hat C$ of $C$ has a single smooth point~$p$ on~$L^\infty$, with $(\hat C\cdot L^\infty)_p=d$. We then determine the number of inflection points of~$C$ (in the affine plane) using Pl\"ucker's formula (see, e.g., \cite[Chapter IV, Sections 6.2--6.3]{Walker}): \[ 2d(d-2)-(d-2)-6\cdot\tfrac{(d-1)(d-2)}{2}=0. \qedhere \] \end{proof} \begin{example} \label{ex:hypotrochoids} The curve \begin{equation} \label{eq:hypotrochoid} t\mapsto (\cos((k-1)t)+a\cos(kt), \sin((k-1)t)-a\sin(kt)) \end{equation} is a trigonometric curve of degree~$d=2k$. (It is a special kind of \emph{hypotrochoid}, cf.\ Definition~\hyperref{def:hypotrochoids} below.) For \redsf{suitably} chosen real values of~$a$, this curve has $\frac{(d-1)(d-2)}{2}=(k-1)(2k-1)$ real hyperbolic nodes, as in Proposition~\hyperref{pr:trig-no-flex}. See Figures~\hyperref{fig:3-petal} and~\hyperref{fig:5-petal}. In the special case $k=2$ illustrated in Figure~\hyperref{fig:3-petal}, we get a three-petal hypotrochoid, a quartic trigonometric curve with 3~nodes given by the parametrization \begin{equation} \label{eq:3-petal-param} t\mapsto (\cos(t)+a\cos(2t),\sin(t)-a\sin(2t)), \end{equation} or by the algebraic equation \begin{equation} \label{eq:3-petal-alg} a^2(x^2+y^2)^2+(-2a^4+a^2+1)(x^2+y^2)+(a^2-1)^3-2ax^3+6axy^2=0. \end{equation} \end{example} \begin{figure} \caption{Three-petal hypotrochoids \eqref{eq:3-petal-param}.} \label{fig:3-petal} \end{figure} \begin{figure} \caption{Five-petal hypotrochoids $\{(\cos(2t)+a\cos(3t),\sin(2t)-a\sin(3t))\}$.} \label{fig:5-petal} \end{figure} \section{Expressive curves and polynomials} \label{sec:expressive} \begin{definition} \label{def:expressive-poly} Let $G(x,y)\!\in\!{\mathbb R}[x,y]\!\subset\!{\mathbb C}[x,y]$ be a polynomial with real coefficients.
\linebreak[3] Let $C=V(G)$ be the corresponding affine algebraic curve, and let $C_{{\mathbb R}}=V_{{\mathbb R}}(G)$ be the set of its real points, see Definition~\hyperref{def:curves}. We say that $G(x,y)$ is an \emph{expressive polynomial} (resp., $C$~is an \emph{expressive curve})~if \begin{itemize}[leftmargin=.2in] \item all critical points of $G$ (viewed as a polynomial in ${\mathbb C}[x,y]$) are real; \item all critical points of $G$ are Morse (i.e., have nondegenerate Hessians); \item each bounded component of ${\mathbb R}^2\setminus C_{{\mathbb R}}$ contains exactly one critical point of~$G$; \item each unbounded component of ${\mathbb R}^2\setminus C_{{\mathbb R}}$ contains no critical points; \item $C_{{\mathbb R}}$ is connected, and contains at least two (hence infinitely many) points. \end{itemize} \end{definition} \begin{remark} Let $G(x,y)$ be a real polynomial with real Morse critical points. Then each double point of $V_{{\mathbb R}}(G)$ must be a critical point of~$G$ (a~saddle). Also, each bounded connected component of ${\mathbb R}^2\setminus V_{{\mathbb R}}(G)$ must contain at least one critical point (an extremum). Thus, for~$G$ to be expressive, it must have the smallest possible number of complex critical points that is allowed by the topology of~$V_{{\mathbb R}}(G)$: a saddle at each double point, one extremum within each bounded component of ${\mathbb R}^2\setminus V_{{\mathbb R}}(G)$, and nothing else. \end{remark} \begin{example} \label{ex:quadratic} The following quadratic polynomials are expressive: \begin{itemize}[leftmargin=.2in] \item $G(x,y)=x^2-y$ has no critical points; \item $G(x,y)=x^2+y^2-1$ has one critical point $(0,0)$ (a minimum) lying inside the unique bounded component of ${\mathbb R}^2\setminus V_{{\mathbb R}}(G)$; \item $G(x,y)=x^2-y^2$ has one critical point $(0,0)$ (a saddle), a hyperbolic node. \end{itemize} The following quadratic polynomials are not expressive: \begin{itemize}[leftmargin=.2in] \item $G(x,y)=x^2-y^2-1$ has a critical point $(0,0)$ in an unbounded component of ${\mathbb R}^2\setminus V_{{\mathbb R}}(G)$; besides, $V_{{\mathbb R}}(G)$ is not connected; \item $G(x,y)=x^2+y^2$ has $V_{{\mathbb R}}(G)$ consisting of a single point; \item $G(x,y)=x^2+y^2+1$ has $V_{{\mathbb R}}(G)=\varnothing$; \item $G(x,y)=x^2-1$ and $G(x,y)=x^2$ have non-Morse critical points. \end{itemize} \end{example} \begin{lemma} \label{lem:expressive-poly} Let $G(x,y)$ be an expressive polynomial. Then: \begin{itemize}[leftmargin=.2in] \item $G$ is squarefree (i.e., not divisible by a square of a non-scalar polynomial); \item $G$ has finitely many critical points, \redsf{all of them real}; \item each critical point of $G$ is either a saddle or an extremal point; \item all saddle points of $G$ lie on $V_{{\mathbb R}}(G)$; they are precisely the singular points of~$V(G)$; \item each bounded connected component of ${\mathbb R}^2\setminus V_{{\mathbb R}}(G)$ is simply connected. \end{itemize} \end{lemma} \begin{proof} As the critical points of $G$ are real and Morse, each of them is either a saddle or a local (strict) extremum of~$G$, viewed as a function ${\mathbb R}^2\to{\mathbb R}$. The extrema must be located outside~$C_{{\mathbb R}}$, one per bounded connected component of ${\mathbb R}^{2}\setminus C_{{\mathbb R}}$. The saddles must lie on~$C_{{\mathbb R}}$, so they are precisely the double points of~$C_{{\mathbb R}}$. We conclude that $G$ has finitely many critical points. Consequently $G$ is squarefree.
Finally, since $C_{{\mathbb R}}$ is connected, each bounded component of ${\mathbb R}^{2}\setminus C_{{\mathbb R}}$ must be simply connected. \end{proof} \pagebreak[3] Definition~\hyperref{def:expressive-poly} naturally extends to homogeneous polynomials in three variables, and to algebraic curves in the projective plane: \begin{definition} \label{def:expressive-xyz} Let $F(x,y,z)\!\in\!{\mathbb R}[x,y,z]\!\subset\!{\mathbb C}[x,y,z]$ be a homogeneous polynomial \linebreak with real coefficients, and $C\!=\!Z(F)$ the corresponding projective algebraic~curve. Assume that $F(x,y,z)$ is not divisible by~$z$. (In other words, $C$ does not contain the line at infinity~$L^\infty$.) We~call $F$ and~$C$ \emph{expressive} if the bivariate polynomial $F(x,y,1)$ is expressive in the sense of Definition~\hyperref{def:expressive-poly}, or equivalently the affine curve $C\setminus L^\infty\subset{\mathbb A}^2={\mathbb P}^2\setminus L^\infty$ is expressive. \end{definition} In the rest of this section, we examine examples of expressive and non-expressive curves and polynomials. \begin{example}[Conics] Among real conics (cf.\ Example~\hyperref{ex:quadratic}), a parabola, an ellipse, and a pair of crossing real lines are expressive, whereas a hyperbola, a pair of parallel (or identical) lines, and a pair of complex conjugate lines (an elliptic node) are not. \end{example} \begin{example} \label{ex:two-graphs} Let $f_1(x),f_2(x)\in{\mathbb R}[x]$ be two distinct real univariate polynomials of degrees~$\le d$ such that $f_1-f_2$ has $d$ distinct real roots. (Thus, at least one of $f_1,f_2$ has degree~$d$.) We claim that the polynomial \[ G(x,y)=(f_1(x)-y)(f_2(x)-y) \] is expressive. To prove this, we first introduce some notation. Let $x_1,\dots,x_d$ be the roots of $f_1-f_2$, and let $z_1,\dots,z_{d-1}$ be the roots of its derivative $f_1'-f_2'$; they are also real and distinct, by Rolle's theorem. The critical points of~$G$ satisfy \begin{align*} G_x(x,y)&=f'_1(x)(f_2(x)-y) + (f_1(x)-y)f'_2(x)=0, \\ G_y(x,y)&=-f_1(x)-f_2(x)+2y=0. \end{align*} It is straightforward to see that these equations have $2d-1$ solutions: $d$~hyperbolic nodes $(x_k,f_1(x_k))=(x_k,f_2(x_k))$, for $k=1,\dots,d$, as well as $d-1$ extrema at the points $(z_k,\frac12(f_1(z_k)+f_2(z_k)))$, for $k=1,\dots,d-1$. The conditions of Definition~\hyperref{def:expressive-poly} are now easily verified. (Alternatively, use the coordinate change $(x,\tilde y)=(x,y-f_1(x))$ to reduce the problem to the easy case when one of the two polynomials is~$0$.) \end{example} \begin{example}[Lemniscates] \label{ex:lemniscates} The \emph{lemniscate of Huygens} (or Gerono) is given by the equation $y^2+4x^4-4x^2=0$, or by the parametrization $t\mapsto (\cos(t),\sin(2t))$. This curve, shown in Figure~\hyperref{fig:lissajous-12-13-32-34} on the far left, is a Lissajous curve with parameters~$(2,1)$, cf.\ Example~\hyperref{ex:Lissajous}. The polynomial $G(x,y)=y^2+4x^4-4x^2$ has three real critical points: a saddle at the hyperbolic node~$(0,0)$, plus two extrema $(\pm \frac{1}{\sqrt{2}},0)$ inside the two bounded connected components of ${\mathbb R}^2\setminus V_{{\mathbb R}}(G)$. Thus, this lemniscate is expressive. By contrast, another quartic curve with a similar name (and a similar-looking set of real points), the \emph{lemniscate of Bernoulli} \begin{equation} \label{eq:lemniscate-Bernoulli} (x^2+y^2)^2-2x^2+2y^2=0, \end{equation} is not expressive, as the polynomial $G(x,y)=(x^2+y^2)^2-2x^2+2y^2$ has critical points $(0,\pm i)$ outside~${\mathbb R}^2$.
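These critical-point claims are easy to recompute with computer algebra; the following short sympy sketch (illustrative only, with names of our own choosing) does so for both lemniscates.
\begin{verbatim}
from sympy import symbols, solve

x, y = symbols('x y')

G_huygens   = y**2 + 4*x**4 - 4*x**2
G_bernoulli = (x**2 + y**2)**2 - 2*x**2 + 2*y**2

print(solve([G_huygens.diff(x), G_huygens.diff(y)], [x, y]))
# three real critical points: (0, 0) and (+-sqrt(2)/2, 0)
print(solve([G_bernoulli.diff(x), G_bernoulli.diff(y)], [x, y]))
# includes the non-real critical points (0, -I) and (0, I)
\end{verbatim}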
\end{example} Many more examples of expressive and non-expressive polynomials (or curves) are given in Tables~\hyperref{table:conics+cubics} and~\hyperref{table:conics+cubics+quartics}, and later in the paper. \begin{table}[ht] \begin{tabular}{|p{1.2in}|p{1.9in}|p{2.0in}|} \hline \multicolumn{3}{|c|}{\textbf{expressive conics}} \\ \hline $G(x,y)$ & real curve $V_{{\mathbb R}}(G)$ & critical points \\ \hline $x^2-y$ & parabola & none \\ $x^2-y^2$ & two lines & saddle\\ $x^2+y^2-1$ & ellipse & extremum \\ \hline \multicolumn{3}{|c|}{\textbf{non-expressive conics}} \\ \hline $G(x,y)$ & real point set $V_{{\mathbb R}}(G)$ & why not expressive? \\ \hline $x^2-y^2-1$ & hyperbola & saddle in an unbounded region \\ $x^2+y^2+1$ & imaginary ellipse & $V_{{\mathbb R}}(G)$ is empty \\ $x^2+y^2$ & elliptic node & $V_{{\mathbb R}}(G)$ is a single point \\ $x^2+a$ \ ($a\in{\mathbb R}$) & two parallel lines & critical points are not Morse \\ \hline \multicolumn{3}{|c|}{\textbf{expressive cubics}} \\ \hline $G(x,y)$ & real point set $V_{{\mathbb R}}(G)$ & critical points \\ \hline $x^3-y$ & cubic parabola & none \\ $(x^2-y)x$ & parabola and its axis & single saddle \\ $x^3-3x+2-y^2$ & nodal Weierstrass cubic & one saddle, one extremum \\ $(x-1)xy$ & two parallel lines + line & two saddles \\ $(x^2-y)(y-1)$ & parabola + line & two saddles, one extremum \\ $(x+y-1)xy$ & three lines & three saddles, one extremum \\ $(x^2+y^2-1)x$ & ellipse + line & two saddles, two extrema \\ \hline \multicolumn{3}{|c|}{\textbf{non-expressive cubics}} \\ \hline $G(x,y)$ & real point set $V_{{\mathbb R}}(G)$ & why not expressive? \\ \hline $x^3-y^2$ & semicubic parabola & critical point is not Morse \\ $x^3-3x-y^2$ & two-component elliptic curve & saddle in an unbounded region \\ $x^3-3x+3-y^2$ & one-component elliptic curve & saddle in an unbounded region \\ $x^3+3x-y^2$ & one-component elliptic curve & two non-real critical points \\ $x^3+xy^2+4xy+y^2$ & oblique strophoid & two non-real critical points \\ $x^2y-x^2+2y^2$ & Newton's species \#54 & two non-real critical points \\ $yx^2-y^2-xy+x^2$ & Newton's species \#51 & two non-real critical points \\ $x^3+y^3+1$ & Fermat cubic & critical point is not Morse \\ $x^3+y^3-3xy$ & folium of Descartes & two non-real critical points \\ $x^2y+y-x$ & serpentine curve & two non-real critical points \\ $x^2y+y-1$ & witch of Agnesi & two non-real critical points \\ $x^2y$ & double line + line & critical points are not Morse \\ $x(x^2-y^2)$ & three concurrent lines & critical point is not Morse \\ $(x^2-y^2-1)y$ & hyperbola + line & two non-real critical points \\ $(xy-1)x$ & hyperbola + asymptote & $V_{{\mathbb R}}(G)$ is not connected \\ \hline \end{tabular} \caption{Expressive and non-expressive conics and cubics. Unless specified otherwise, lines are placed so as to maximize the number of crossings. 
} \label{table:conics+cubics} \end{table} \begin{table}[ht] \begin{tabular}{|c|c|p{1.9in}|p{2.95in}|} \hline $d$ & $\xi$ & $G(x,y)$ & real point set $V_{{\mathbb R}}(G)$ \\ \hline \multicolumn{4}{|c|}{} \\[-.18in] \hline $1$ & 0 & $x$ & line \\ \hline $2$ & 0 & $x^2-y$ & parabola \\ $2$ & 1 & $x^2-y^2$ & two crossing lines \\ $2$ & 1 & $x^2+y^2-1$ & ellipse \\ \hline $3$ & 0 & $x^3-y$ & cubic parabola \\ $3$ & 1 & $(x^2-y)x$ & parabola and its axis \\ $3$ & 2 & $x^3-3x+2-y^2$ & nodal Weierstrass cubic \\ $3$ & 2 & $(x-1)xy$ & three lines, two of them parallel \\ $3$ & 3 & $(x^2-y)(y-1)$ & parabola crossed by a line \\ $3$ & 4 & $(x+y-1)xy$ & three generic lines \\ $3$ & 4 & $(x^2+y^2-1)x$ & ellipse crossed by a line \\ \hline $4$ & 0 & $y-x^4$ & quartic parabola \\ $4$ & 0 & $(y-x^2)^2-x$ & \\ $4$ & 1 & $(y-x^2)^2-x^2$ & two aligned parabolas \\ $4$ & 1 & $(y-x^2)^2+x^2-1$ & \\ $4$ & 1 & $(y-x^3)x$ & cubic parabola and its axis \\ $4$ & 2 & $(y-x^2)^2-xy$& \\ $4$ & 2 & $(y-x^2)x(x-1)$& parabola + two lines parallel to its axis \\ $4$ & 3 & $y^2-(x^2-1)^2$ & co-oriented parabolas crossing at two points\\ $4$ & 3 & $y^2+4x^4-4x^2$ & lemniscate of Huygens \\ $4$ & 3 & $x(x-1)(x+1)y$ & three parallel lines crossed by a fourth\\ $4$ & 3 & $4(x^2+y^2)^2-3(x^2+y^2)-x$ & lima{\c c}on \\ $4$ & 5 & $x(x-1)y(y-1)$ & two pairs of parallel lines\\ $4$ & 5 & $(x^2+y^2-1)(x^2-2x+y^2)$ & two circles crossing at two points\\ $4$ & 5 & $(y-x^3+x)y$ & cubic parabola + line \\ $4$ & 5 & $(x^3-3x+2-y^2)(x-a)$ & nodal Weierstrass cubic $\!+\!$ line crossing it at~$\infty$\\ $4$ & 6 & $4x^3-3x+8y^4-8y^2+1$ & $(3,4)$-Chebyshev curve\\ $4$ & 6 & $(y-x^2)(x-a)(y-1) $ & parabola + line + line parallel to the axis \\ $4$ & 6 & $(y-x^2)(y-1)(y-2) $ & parabola + two parallel lines \\ $4$ & 7 & $(y-x^2+1)(x-y^2+1) $ & two parabolas crossing at four points\\ $4$ & 7 & see \eqref{eq:3-petal-alg} & three-petal hypotrochoid \\ $4$ & 7 & $(x^{3}-3x+2-y^{2})(x+y-a)$& nodal Weierstrass cubic + line \\ $4$ & 7 & $(x+y)(x-y)(x-1)(x-a)$ & line + line + two parallel lines \\ $4$ & 7 & $(x^2+y^2-1)x(x-a)$ & ellipse + two parallel lines\\ $4$ & 8 & $(4y-x^2)(x+y)(ax+y+1) $& parabola + two lines \\ $4$ & 8 & $(x^2+y^2-1)(y-4x^2+2)$ & ellipse and parabola crossing at four points \\ $4$ & 9 & $x(y+1)(x-y)(x+y-1)$ & four lines \\ $4$ & 9 & $(x^2+y^2-1)x(x+y-a)$& ellipse + two lines\\ $4$ & 9 & $(x^2 + 4y^2-1)(4x^2 + y^2-1)$ & two ellipses crossing at four points \\ \hline \end{tabular} \caption{Expressive curves $C\!=\!\{G\!=\!0\}$ of degrees $d\le 4$. Unless noted otherwise, lines are drawn to maximize the number of crossings. The value~$a\!\in\!{\mathbb R}$ is to be chosen appropriately. We denote by~$\xi$ the number of critical points~of~$G$. } \label{table:conics+cubics+quartics} \end{table} \begin{remark} Definition~\hyperref{def:expressive-poly} can be generalized to allow arbitrary ``hyperbolic'' singular points, i.e., isolated real singular points all of whose local branches are real. \end{remark} \begin{remark} It can be verified by an exhaustive case-by-case analysis that every expressive curve of degree $d\le 4$ is $L^\infty$-regular. Starting with $d\ge 5$, this is no longer the case, cf.\ Example~\hyperref{example:expressive-non-reg} below: an expressive curve~$C$ need not be $L^\infty$-regular, even when $C$ is rational. Still, examples like this one are rare. \end{remark} \begin{example} \label{example:expressive-non-reg} Consider the real rational quintic curve~$C$ parametrized by \[ x=t^2, \ y=t^{-1}+t^{-2}-t^{-3}.
\] The set of its real points in the $(x,y)$-plane consists of two interval components corresponding to the negative and positive values of~$t$, respectively. These components intersect at the point $(1,1)$, attained for $t=1$ and $t=-1$. The algebraic equation of~$C$ is obtained as follows: \begin{align*} &y-x^{-1}=t^{-1}-t^{-3} ,\\ &(y-x^{-1})^2=t^{-2}-2t^{-4}+t^{-6}=x^{-1}-2x^{-2}+x^{-3}, \\ &x^3y^2-2x^2y-x^2+3x-1=0. \end{align*} The Newton triangle of $C$ is $\conv\{(0,0),(3,2),(2,0)\}$, which means that \begin{itemize}[leftmargin=.2in] \item at the point $p_1=(1,0,0)$, the curve $C$ has a type~$A_2$ (ordinary cusp) singularity, tangent to the axis $y=0$; \item at the point $p_2=(0,1,0)$, the curve $C$ has a type~$E_8$ singularity, tangent to the axis $x=0$. \end{itemize} In projective coordinates, we have $C=Z(F)$ where \begin{align*} F&=x^3y^2-2x^2yz^2-x^2z^3+3xz^4-z^5,\\ F_x&=3x^2y^2-4xyz^2-2xz^3+3z^4,\\ F_y&=2x^3y-2x^2z^2. \end{align*} It is now easy to check that the only critical point of $F(x,y,1)$ is the hyperbolic node $(1,1)$ discussed above. It follows that the curve $C$ is expressive. We next examine the behavior of the polar curves at infinity. In a neighborhood of the point $p_2=(0,1,0)$, we set $y=1$ and obtain \begin{align*} F_x&=3x^2-4xz^2-2xz^3+3z^4,\\ F_y&=2x^3-2x^2z^2. \end{align*} The curve $Z(F_x)$ has two smooth local branches at~$p_2$, quadratically tangent to the axis $Z(x)$; the curve $Z(F_y)$ has the double component $Z(x)$ and a smooth local branch quadratically tangent to~$Z(x)$. Thus the intersection multiplicity~is \[ (Z(F_x)\cdot Z(F_y))_{p_2}=12>\mu(C,p_2)+(C\cdot L^\infty)_{p_2}-1 =8+3-1=10, \] so $L^\infty$-regularity fails at~$p_2$. (The intersection at $p_1$ is regular by Proposition~\hyperref{pr:nonmax-tangency}.) \end{example} \begin{example}[Lissajous-Chebyshev curves] \label{ex:lissajous-chebyshev} Let $a$ and $b$ be positive integers. The \emph{Lissajous-Chebyshev curve} with parameters~$(a,b)$ is given by the equation \begin{equation} \label{eq:Ta+Tb} T_a(x)+T_b(y)=0. \end{equation} When $a$ and~$b$ are coprime, with $a$ odd, we recover the polynomial Chebyshev curve with parameters $(a,b)$, see~\eqref{eq:Chebyshev-curve}. When both $a$ and $b$ are even, with $\frac{a}{2}$ and $\frac{b}{2}$ coprime, we recover the trigonometric Lissajous curve with parameters $(\frac{a}{2},\frac{b}{2})$, see~\eqref{eq:Lissajous-algebraic}. This explains our use of the term ``Lissajous-Chebyshev curve.'' The general construction appears in the work of S.~Guse{\u\i}n-Zade~\cite{GZ1974}, who observed (without using this terminology) that the Lissajous-Chebyshev curve with parameters~$(a,b)$ provides a morsification of an isolated quasihomogeneous singularity of type~$(a,b)$. There is also a variant of the Lissajous-Chebyshev curve defined by \begin{equation} \label{eq:Ta-Tb} T_a(x)-T_b(y)=0. \end{equation} When $a$ (resp.,~$b$) is odd, this curve is a mirror image of the Lissajous-Chebyshev curve~\eqref{eq:Ta+Tb}, under the substitution $x:=-x$ (resp., $y:=-y$). However, when both $a$ and $b$ are even, the two curves differ. For example, for $(a,b)=(4,2)$, the curve defined by~\eqref{eq:Ta+Tb} is the lemniscate of Huygens (see Example~\hyperref{ex:lemniscates}), whereas the curve defined by~\eqref{eq:Ta-Tb} is a union of two parabolas. It is not hard to verify that every Lissajous-Chebyshev curve (and every curve $V(T_a(x)-T_b(y))$) is expressive. 
The critical points of $T_a(x)\pm T_b(y)$ are found from the equations \[ T_a'(x)=T_b'(y)=0, \] so they are of the form $(x_i,y_j)$ where $x_1,\dots,x_{a-1}$ (resp., $y_1,\dots,y_{b-1}$) are the (distinct) roots of~$T_a'$ (resp.,~$T_b'$). Since the total number of nodes and bounded components of ${\mathbb R}^2\setminus C_{{\mathbb R}}$ is easily seen to be exactly $(a-1)(b-1)$, the claim follows. \end{example} \begin{figure} \caption{Lissajous-Chebyshev curves with parameters $(2,3)$, $(2,4)$, $(2,5)$, $(2,6)$ (top row) and $(3,3)$, $(3,4)$, $(3,5)$, $(3,6)$ (bottom row).} \label{fig:lissajous-chebyshev} \end{figure} \begin{example}[Multi-lima\c cons] \label{ex:multi-limacons} Recall that the multi-lima\c con with parameter~$k$ is a trigonometric curve~$C$ of degree~$2k+2$ given by the equation~\eqref{eq:multi-limacon}. It is straightforward to verify that the corresponding polynomial has $2k+1$ critical points, all of them located on the $x$ axis. Comparing this to the shape of the curve~$C_{\mathbb R}$, we conclude that $C$ is expressive. \end{example} \section{Divides} \label{sec:divides} The notion of a divide was first introduced and studied by N.~A'Campo \cite{acampo-ihes, ishikawa}. The version of this notion that we use in this paper differs slightly from A'Campo's, and from the version used in~\cite{FPST}. \begin{definition} \label{def:divide} Let ${\mathbf{D}}$ be a disk in the real~plane~${\mathbb R}^2$. A \emph{divide}~$D$ in~${\mathbf{D}}$ is the image of a generic relative immersion of a finite set of intervals and circles into~${\mathbf{D}}$. The images of these immersed intervals and circles are called the \emph{branches} of~$D$. They must satisfy the following conditions: \begin{itemize}[leftmargin=.2in] \item the immersed circles do not intersect the boundary~$\partial{\mathbf{D}}$; \item the endpoints of the immersed intervals lie on~$\partial{\mathbf{D}}$, and are pairwise distinct; \item these immersed intervals intersect $\partial{\mathbf{D}}$ transversally; \item all intersections and self-intersections of the branches are transversal. \end{itemize} We view divides as topological objects, i.e., we do not distinguish between divides related by a diffeomorphism between their respective ambient disks. A divide is called \emph{connected} if the union of its branches is connected. The connected components of the complement ${\mathbf{D}}\setminus D$ which are disjoint from $\partial{\mathbf{D}}$ are the \emph{regions} of~$D$. If $D$ is connected, then each region of~$D$ is simply connected. We refer to the singular points of~$D$ as its \emph{nodes}. See Figure~\hyperref{fig:divide}. \end{definition} \begin{figure} \caption{A divide, its branches, regions, and nodes.} \label{fig:divide} \end{figure} \begin{remark} Although all divides of interest to us are connected, we forgo any connectivity requirements in the above definition of a divide, which therefore is slightly more~general than \cite[Definition~2.1]{FPST}. \end{remark} The main focus of~\cite{FPST} was on the class of \emph{algebraic} divides coming from real~\emph{morsi\-fi\-cations} of isolated plane curve singularities, see \cite[Definition~2.3]{FPST}.
Here~we~study a different (albeit related) class of divides which arise from real algebraic curves: \begin{definition} \label{def:divide-polynomial} Let $G(x,y)\in{\mathbb R}[x,y]$ be a real polynomial such that each real singular point of the curve $V(G)\subset{\mathbb C}^2$ is a \emph{hyperbolic node} (an intersection of two smooth real local branches). Then the portion of $V_{{\mathbb R}}(G)$ contained in a sufficiently large disk \begin{equation} \label{eq:Disk_R} {\mathbf{D}}_R=\{(x,y)\in{\mathbb R}^2\mid x^2+y^2\le R^2\} \end{equation} gives a divide in~${\mathbf{D}}_R$. Moreover this divide does not depend (up to homeomorphism) on the choice of~$R\gg0$. We denote this divide~by~$D_G$. \end{definition} \pagebreak[3] \begin{proposition} \label{pr:expressivity-criterion} Let $G(x,y)\in{\mathbb R}[x,y]$ be a real polynomial with $\xi<\infty$ critical points. Assume that the real algebraic set $V_{{\mathbb R}}(G)=\{G=0\}\subset{\mathbb R}^2$ is nonempty, and each singular point of $V_{{\mathbb R}}(G)$ is a hyperbolic node. Let~$\nu$ be the number of such nodes, and let $\iota$ be the number of interval branches of the divide~$D_G$. Then \begin{equation} \label{eq:expressivity-xi} \xi \ge 2\nu-\iota+1, \end{equation} with equality if and only if the polynomial $G$ is expressive. \end{proposition} \begin{proof} Let $K$ denote the union of the divide~$D_G$ and all its regions, viewed as a closed subset of~${\mathbb R}^2$. Let $\sigma$ be the number of connected components of~$K$. Since all these components are simply connected, the Euler characteristic of~$K$ is equal~to~$\sigma$. On the other hand, $K$ can be split into 0-dimensional cells (the nodes, plus the ends of interval branches), 1-dimensional cells (curve segments of $D$ connecting nodes), ovals (smooth closed components of~$D_G$), and regions. The number~$\beta$ of 1-dimensional cells satisfies $2\beta\!=\!2\iota+4\nu$ (by counting endpoints), implying $\beta\!=\!\iota+2\nu$. We thus~have \[ \sigma = \chi(K)=(\nu+2\iota)-(\iota+2\nu)+\rho-h = \iota-\nu+\rho-h, \] where $\rho$ is the number of regions in~$D_G$, and $h$ denotes the total number of holes (the sum of first Betti numbers) over all regions. Denote $u=\xi-\nu-\rho$. The set of critical points of~$G$ contains all of the nodes, plus at least one extremum per region. Thus $u=0$ if $G$ has no other critical points, and $u>0$ otherwise. Putting everything together, we obtain: \begin{align*} \xi=\nu+\rho+u =\nu+\sigma-\iota+\nu+h+u =2\nu-\iota+\sigma+h+u. \end{align*} Since $\sigma\ge 1$ and $h,u\ge 0$, we get~\eqref{eq:expressivity-xi}. Moreover $\xi = 2\nu-\iota+1$ if and only if $K$ is connected, all regions are simply connected, and $G$ has exactly $\nu+\rho$ critical~points. All these conditions are satisfied if $G$ is expressive; conversely, they ensure expressivity. ($V_{{\mathbb R}}(G)$~is connected if $K$ is connected and each region is simply connected.) \end{proof} \begin{remark} Table~\hyperref{table:conics+cubics+quartics} does not list any expressive quartic with $\xi=4$. We~can now explain why. \redsf{First, by Proposition \hyperref{pr:expressivity-criterion}, $\xi=4$ and $d=4$ would imply that $2\nu=\iota+3$. \textbf{Case~1}: $\nu=3$, $\iota=3$. Then our quartic $C$ has three real irreducible components, \emph{viz.}, a conic (necessarily a parabola) and two lines crossing it, with three hyperbolic nodes total, and no other nodes. 
Such a configuration is impossible: if both lines are parallel to the axis of the parabola, then we get two nodes; otherwise, at least four. \textbf{Case~2}: $\nu=2$, $\iota=1$. Then $C$ is irreducible. (Otherwise, $C$~would split into~two conics---a parabola and an ellipse---intersecting at four real points, contrary to $\nu=2$.) An irreducible quartic~$C$ with $\iota=1$ has one real local branch $Q$ centered at some real point $p\in L^\infty$, plus perhaps a pair of complex conjugate local branches centered on~$L^\infty$. Let us list the possibilities. \textbf{Case~2A}: $Q$ is smooth, $(Q\cdot L^\infty)_p=4$, with no other local branches centered~on~$L^\infty$. \textbf{Case~2B}: $Q$ is smooth, $(Q\cdot L^\infty)_p=2$, with two smooth complex conjugate branches transversal to~$L^\infty$. These two local branches either cross $L^\infty$ at different points, or at the same point~$p'$. By the genus formula~\eqref{eq:Hironaka} and due to $\nu=2$, we have $p'\ne p$, with a nodal singularity at~$p'$. \textbf{Case~2C}: $Q$ is singular. In this case, again using $\nu=2$ and the genus formula~\eqref{eq:Hironaka}, we conclude that $Q$ is of type~$A_2$. The intersection multiplicity of a cusp and a line is either $2$ or~$3$. In our setting, $(Q\cdot L^\infty)_p=2$, with two additional smooth complex conjugate local branches of $C$ centered at distinct complex conjugate points on~$L^\infty$. All cases 2A--2C are subject to Proposition~\hyperref{pr:nonmax-tangency}, so $C$ is $L^\infty$-regular. A direct computation then yields that $\sum_{q\in C\cap L^\infty}\mu(C,q,L^\infty)\le3$ in every case. This, however, is in contradiction with $\sum_{q\in C\cap L^\infty}\mu(C,q,L^\infty)=(4-1)^2-\xi=5$.} \end{remark} \section{\texorpdfstring{$L^\infty$}{L}-regular expressive curves} \label{sec:regular+expressive} \begin{proposition} \label{pr:expressive+regular} Let $C=Z(F)\subset{\mathbb P}^2$ be a reduced algebraic curve defined by a real homogeneous polynomial~$F(x,y,z)\in{\mathbb R}[x,y,z]$. Assume that \begin{itemize}[leftmargin=.35in] \item[{\rm (a)}] all irreducible components of $C$ are real; \item[{\rm (b)}] $C$ does not contain the line at infinity~$L^\infty$ as a component; \item[{\rm (c)}] all singular points of $C$ in the affine $(x,y)$-plane are real hyperbolic nodes; \item[{\rm (d)}] the polynomial $F(x,y,1)\in{\mathbb C}[x,y]$ has finitely many critical points; \item[{\rm (e)}] the set of real points $\{F(x,y,1)=0\}\subset{\mathbb R}^2$ is nonempty. \end{itemize} Then the following are equivalent: \begin{itemize}[leftmargin=.35in] \item[{\rm (i)}] the curve $C$ is expressive and $L^\infty$-regular; \item[{\rm (ii)}] each irreducible component of $C$ is rational, with a set of local branches at infinity consisting of either a unique (necessarily real) local branch, or a pair of complex conjugate local branches, possibly based at the same real point. \end{itemize} \end{proposition} \begin{proof} The proof is based on Propositions~\hyperref{pr:hironaka-milnor} and~\hyperref{pr:expressivity-criterion}. Let us recall the relevant notation, and introduce additional one: \begin{align*} d&=\deg(C),\\ \iota&=\text{number of interval branches of~$D_G$}\\ s&=|\Comp(C)|=\text{number of irreducible components of~$C$},\\ s_1&= \text{number of components of $C$ with a real local branch at infinity}, \\ s_2&= \text{number of components of $C$ with a pair of complex conjugate}\\ &\hspace{.225in} \text{local branches at infinity}. 
\end{align*} Combining Propositions~\hyperref{pr:hironaka-milnor} and~\hyperref{pr:expressivity-criterion}, we conclude that \begin{equation*} 2g(C)-2+\iota+\sum_{p\in C\cap L^\infty}\br(C,p) \ge 0, \end{equation*} or equivalently (see \eqref{eq:g(C)}) \begin{equation} \label{eq:restated-reg-expressive} 2\sum_{C'\in\Comp(C)} g(C')+\iota+\sum_{p\in C\cap L^\infty}\br(C,p) \ge 2s, \end{equation} with equality if and only if $C$ is both expressive and $L^\infty$-regular. On the other hand, we have the inequalities \begin{align} \label{eq:sum-g(C')} \sum_{C'\in\Comp(C)} g(C')&\ge 0, \\[-.05in] \iota &\ge s_1\,, \\ \sum_{p\in C\cap L^\infty}\br(C,p) &\ge s_1+2s_2\,, \\ \label{eq:s+s} 2s_1 + 2s_2 &\ge 2s, \end{align} whose sum yields~\eqref{eq:restated-reg-expressive}. Therefore we have equality in~\eqref{eq:restated-reg-expressive} if and only if each of \eqref{eq:sum-g(C')}--\eqref{eq:s+s} is an equality. This is precisely statement~(ii). \end{proof} \begin{example} The curve $C$ from Example~\hyperref{example:expressive-non-reg} satisfies requirements (a)--(e) of Proposition~\hyperref{pr:expressive+regular}. Since $C$ is not $L^\infty$-regular, condition~(ii) must~fail. Indeed, while $C$ is rational, it has two real local branches at infinity, centered at the points $p_1$ and~$p_2$. \end{example} \begin{proposition} \label{prop:nonempty} Let $C$ be an expressive $L^\infty$-regular plane curve whose irreducible components are all real. Then each component of $C$ is either trigonometric or polynomial. \end{proposition} \begin{proof} Since $C$ is $L^\infty$-regular and expressive, with real components, the requirements (a)--(e) of Proposition~\hyperref{pr:expressive+regular} are automatically satisfied, cf.\ Lemma~\hyperref{lem:expressive-poly}. Consequently the statement (ii) of Proposition~\hyperref{pr:expressive+regular} holds. By Lemmas~\hyperref{lem:poly-curve} and~\hyperref{lem:trig-curve}, this would imply that each component of $C$ is either trigonometric or polynomial, provided the real point set of each component is infinite. It thus suffices to show that each component $B$ of $C$ has an infinite real point set in~${\mathbb A}^2$. In fact, it is enough to show that this set is nonempty, for if it were finite and nonempty, then~$B$---hence~$C$---would have an elliptic node, contradicting the expressivity of~$C$ (cf.\ Lemma~\hyperref{lem:expressive-poly}). It remains to prove that each component of $C$ has a nonempty real point set in~${\mathbb A}^2$. We argue by contradiction. Let $C\!=\!Z(F)$. Suppose that $B\!=\!Z(G)$ is a component of~$C$ without real points in~${\mathbb A}^2$. We claim that the rest of~$C$ is given by a polynomial of the form~$H(G,z)$, where $H\in{\mathbb R}[u,v]$ is a bivariate polynomial. Once we establish this claim, it will follow that the polynomials $F_x$ and $F_y$ have a non-trivial common factor $\frac{\partial}{\partial u}(uH(u,v))\big|_{u=G,v=z}$, contradicting the finiteness of the intersection $Z(F_x)\cap Z(F_y)$. Let us make a few preliminary observations. First, the degree $d=\deg G=\deg B$ must be even. Second, by Proposition~\hyperref{pr:expressive+regular}, $B$~has two complex conjugate branches centered on~$L^\infty$. Third, in view of the expressivity of $C$, the affine curve $B\cap{\mathbb A}^2$ is disjoint from any other component $B'$ of $C$, implying that \begin{equation} B\cap B'\subset L^\infty. \label{el1} \end{equation} Consider two possibilities.
\textbf{Case~1}: $B\cap L^\infty$ consists of two complex conjugate points $p$ and $\overline p$. Let $Q$ and $\overline Q$ be the local branches of $B$ centered at $p$ and $\overline p$, respectively. Let $B'=Z(G')$ be some other component of $C$, of degree $d'=\deg B'=\deg G'$. Since $B'$ is real and satisfies statement~(ii) of Proposition~\hyperref{pr:expressive+regular} as well as \eqref{el1}, we~get \[ B'\cap B=B'\cap L^\infty=B\cap L^\infty=\{p,\overline p\}; \] moreover $B'$ has a unique local branch $R$ (resp.,~$\overline R$) at the point $p$ (resp.~$\overline p$). We have \begin{align*} (R\cdot Q)_p=(\overline R\cdot\overline Q)_{\overline p}=\tfrac{d'd}{2},\qquad ((L^\infty)^{d'}\cdot Q)_p=((L^\infty)^{d'}\cdot\overline Q)_{\overline p}&=\tfrac{d'd}{2}. \end{align*} It follows that any curve $\hat B$ in the pencil $\Span\{B',(L^\infty)^{d'}\}$ satisfies \begin{equation} (\hat B\cdot Q)_p\ge\tfrac{d'd}{2},\quad (\hat B\cdot \overline Q)_{\overline p}\ge\tfrac{d'd}{2}. \label{el2} \end{equation} Pick a point $q\in B\cap{\mathbb A}^2$. Since $q\not\in B'\cup L^\infty$, there exists a curve $\tilde B\in\Span\{B',(L^\infty)^{d'}\}$ containing~$q$. It then follows by B\'ezout and by \eqref{el2} that $\tilde B$ must contain $B$ as a component. In particular, $d'\ge d$. By symmetry, the same argument yields $d\ge d'$. Hence $d'=d$, $B\in\Span\{B',(L^\infty)^d\}$, and the polynomial $G'$ defining the curve $B'$ satisfies $G'=\alpha G+\beta z^d$ for some $\alpha,\beta\in{\mathbb C}\setminus\{0\}$. The desired claim follows. \pagebreak[3] \textbf{Case~2}: $B\cap L^\infty$ consists of one (real) point~$p$. Then $B$ has two complex conjugate branches $Q$ and $\overline Q$ centered at~$p$. Let $B'=Z(G')$ be a component of $C$ different from $B$, and let $B'$ have a unique (real) local branch~$R$, necessarily centered at~$p$. Denote $d'=\deg B'=\deg G'$. Then \[ (R\cdot Q)_p=(R\cdot \overline Q)_p=\tfrac{d'd}{2}, \qquad ((L^\infty)^{d'}\cdot Q)_p=((L^\infty)^{d'}\cdot\overline Q)_p=\tfrac{d'd}{2}, \] which implies (cf.\ Case~1) that any curve $\hat B'\in\Span\{B',(L^\infty)^{d'}\}$ satisfies \[ (\hat B'\cdot Q)_p\ge\tfrac{d'd}{2},\quad (\hat B'\cdot \overline Q)_p\ge\tfrac{d'd}{2}, \] and then we conclude---as above---that there exists a curve $\tilde B'\in\Span\{B',(L^\infty)^{d'}\}$ containing $B$ as a component. On the other hand, \[ (B\cdot R)_p=d'd,\quad ((L^\infty)^d\cdot R)_p=d'd, \] which in a similar manner implies that there exists a curve $\tilde B\in\Span\{B,(L^\infty)^d\}$ containing $B'$ as a component. We conclude that $d'=d$, $B\in\Span\{B',(L^\infty)^d\}$, and finally, $G'=\alpha G+\beta z^d$ for $\alpha,\beta\in{\mathbb C}\setminus\{0\}$, as desired. Now let $B'=Z(G')$ be a component of $C$ different from $B$, and let it have a couple of complex conjugate local branches $R$ and~$\overline R$ centered at $p$. Since $B'$ is real, we have \[ (B'\cdot Q)_p=(B'\cdot \overline Q)_p=\tfrac{d'd}{2},\quad ((L^\infty)^{d'}\cdot Q)_p=((L^\infty)^{d'}\cdot\overline Q)_p=\tfrac{d'd}{2}, \] and since $B$ is real, we have \[ (B\cdot R)_p=(B\cdot\overline R)_p=\tfrac{d'd}{2},\quad ((L^\infty)^d\cdot R)_p=((L^\infty)^d\cdot\overline R)_p=\tfrac{d'd}{2}. \] Thus the above reasoning applies again, yielding $d'=d$ and $B'\in\Span\{B,(L^\infty)^d\}$. Hence $G'=\alpha G+\beta z^d$ for $\alpha,\beta\in{\mathbb C}\setminus\{0\}$, and we are done. \end{proof} The following example shows that in Proposition~\hyperref{prop:nonempty}, the requirement that all components are real cannot be dropped.
\begin{example} \label{ex:reducible-quintic} The quintic curve $C=Z(F)$ defined by the polynomial \[ F(x,y,z)=(x^2+z^2)(yx^2+yz^2-x^3) \] has two non-real components $Z(x\pm z\sqrt{-1})$. In Example~\hyperref{ex:(x^2+z^2)(yz^2-x^3+yx^2)-regular}, we verified that this curve is $L^\infty$-regular. It is also expressive, because the polynomial $G(x,y)=F(x,y,1)$ has no critical points in the complex affine plane (see Example~\hyperref{ex:(x^2+z^2)(yz^2-x^3+yx^2)-regular}) and the set \begin{equation} \label{eq:y=x^3/...} V_{{\mathbb R}}(G)=\{(x,y)\in{\mathbb R}^2\mid y=\tfrac{x^3}{x^2+1}\} \end{equation} is connected. On the other hand, the real irreducible component $\widetilde C=Z(yx^2+yz^2-x^3)$ is neither trigonometric nor polynomial because it has two real points at infinity, see Example~\hyperref{ex:(x^2+z^2)(yz^2-x^3+x^2y)}. Furthermore, $\widetilde C$ is not expressive, since the polynomial $yx^2+y-x^3$ has two critical points $(\pm \sqrt{-1}, \pm \frac32 \sqrt{-1})$ outside~${\mathbb R}^2$. \end{example} Proposition~\hyperref{prop:nonempty} immediately implies the following statement. \begin{corollary} \label{cor:reg-expressive-irreducible} An irreducible $L^\infty$-regular expressive curve is either trigonometric or polynomial. \end{corollary} We note that such a curve also needs to be immersed, meeting itself transversally at real hyperbolic nodes (thus, no cusps, tacnodes, or triple points). We next provide a partial converse to Corollary~\hyperref{cor:reg-expressive-irreducible}. \begin{proposition} \label{pr:irr-expressive} Let $C$ be a real polynomial or trigonometric curve whose singular set in the affine plane~${\mathbb A}^2={\mathbb P}^2\setminus L^\infty$ consists solely of hyperbolic nodes. Then $C$ is expressive and $L^\infty$-regular. \end{proposition} \begin{proof} By Lemmas~\hyperref{lem:poly-curve} and~\hyperref{lem:trig-curve}, a real polynomial or trigonometric curve~$C=Z(F)$ is a real rational curve with one real or two complex conjugate local branches at infinity, and with a nonempty set of real points in~${\mathbb A}^2$. Thus, conditions (ii), (a), (b), (c), and~(e) of Proposition \hyperref{pr:expressive+regular} are satisfied, so in order to obtain~(i), we only need to establish~(d). That is, we need to show that the polynomial $F$ has finitely many critical points in the affine plane ${\mathbb A}^2$. We will prove this by contradiction. Denote $d=\deg C=\deg F$. Suppose that $Z(F_x)\cap Z(F_y)$ contains a (real, possibly reducible) curve $B$ of a positive degree $d'<d$. We first observe that $B\cap C\cap{\mathbb A}^2=\emptyset$. Assume not. If $q\in B\cap C\cap {\mathbb A}^2$, then $q\in\Sing(C)\cap{\mathbb A}^2$, so $q$ must be a hyperbolic node of~$C$, implying $(Z(F_x)\cdot Z(F_y))_q=\mu(C,q)=1$; but since $q$ lies on a common component of $Z(F_x)$ and $Z(F_y)$, we must have $(Z(F_x)\cdot Z(F_y))_q=\infty$. Since $B$ is real, and $C$ has either one real or two complex conjugate local branches at infinity, it follows that $B\cap C=L^\infty\cap C$. \textbf{Case~1}: $C$ has a unique (real) branch $Q$ at a point $p\in L^\infty$. Then $(B\cdot Q)_p=d'd$. On the other hand, $((L^\infty)^{d'}\cdot Q)_p=d'd$. Hence any curve $\hat B\in\Span\{B,(L^\infty)^{d'}\}$ satisfies $(\hat B\cdot Q)_p\ge d'd$. For any point $q\in C\setminus\{p\}$ there is a curve $\tilde B\in\Span\{B,(L^\infty)^{d'}\}$ passing through~$q$. Hence $C$ is a component of~$\tilde B$, in contradiction with $d'<d$. 
\textbf{Case~2}: $C$ has two complex conjugate branches $Q$ and~$\overline Q$ centered at (possibly coinciding) points $p$ and~$\overline p$ on $L^\infty$, respectively. Since $B$ is real, we have \[ (B\cdot Q)_p=(B\cdot\overline Q)_{\overline p}=((L^\infty)^{d'}\cdot Q)_p=((L^\infty)^{d'}\cdot\overline Q)_{\overline p}=d'd/2. \] This implies that any curve $\hat B\in\Span\{B,(L^\infty)^{d'}\}$ satisfies both $(\hat B\cdot Q)_p\ge d'd/2$ and $(\hat B\cdot\overline Q)_{\overline p}\ge d'd/2$, resulting in a contradiction as in Case~1. \end{proof} \begin{remark} In Proposition~\hyperref{pr:irr-expressive}, the requirement concerning the singular set cannot be dropped: a real polynomial curve~$C$ may have elliptic nodes (see Example~\hyperref{ex:poly-with-elliptic-node}) or cusps (consider the cubic $(t^2,t^3)$), preventing~$C$ from being expressive. \end{remark} \begin{remark} \label{rem:poly-trig-expr=>reg} Proposition~\hyperref{pr:irr-expressive} implies that if a real curve $C$ is polynomial or trigonometric, and also expressive, then it is necessarily $L^\infty$-regular. Indeed, expressivity in particular means that all singular points of~$C$ are hyperbolic nodes. \end{remark} \pagebreak[3] \begin{example} \label{ex:hypotrochoid-expressive} Recall from Example~\hyperref{ex:hypotrochoids} that for appropriately chosen values of the real parameter~$a$, the hypotrochoid~$C$ given by the equation~\eqref{eq:hypotrochoid} is a nodal trigonometric curve of degree~$d=2k$ with the maximal possible number of real hyperbolic nodes, namely $\frac{(d-1)(d-2)}{2}=(k-1)(2k-1)$. Thus $C$ has no other singular points in the affine plane, and consequently is expressive by Proposition~\hyperref{pr:irr-expressive}. \end{example} Combining Corollary~\hyperref{cor:reg-expressive-irreducible} and Proposition~\hyperref{pr:irr-expressive}, we obtain: \begin{theorem} \label{th:reg-expressive-irreducible} For a real plane algebraic curve~$C$, the following are equivalent: \begin{itemize}[leftmargin=.2in] \item $C$ is irreducible, expressive and $L^\infty$-regular; \item $C$ is either trigonometric or polynomial, and all its singular points in the affine plane~${\mathbb A}^2$ are hyperbolic nodes. \end{itemize} \end{theorem} \begin{example} Theorem~\hyperref{th:reg-expressive-irreducible} is illustrated in Table~\hyperref{table:irreducible-expressive}. Each real curve in this table is either polynomial or trigonometric. We briefly explain why all these curves are expressive (hence $L^\infty$-regular, see Remark~\hyperref{rem:poly-trig-expr=>reg}). Expressivity of Chebyshev and Lissajous curves was established in Example~\hyperref{ex:lissajous-chebyshev}. The lima\c con of Pascal was discussed in Example~\hyperref{ex:multi-limacons}. In Example~\hyperref{ex:hypotrochoid-expressive}, we saw~that a hypotrochoid~\eqref{eq:hypotrochoid} is expressive for suitably chosen values of~$a$. For a more general statement, see Proposition~\hyperref{pr:epi-hypo} below. As to the remaining curves in Table~\hyperref{table:irreducible-expressive}, all we need to check is that each of their singular points in the affine plane is a hyperbolic node. The curves $V(x^d-y)$ are smooth, so there is nothing to prove. Ditto for the ellipse, as well as the curves $V((y\!-\!x^2)^2\!-x)$ and $V((y\!-\!x^2)^2\!+x^2\!-\!1)$. Finally, $V((y\!-\!x^2)^2\!-\!xy)$ has a single singular point in~${\mathbb A}^2$, a hyperbolic node at the origin.
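Claims of this last kind are easily confirmed by computer algebra; the following illustrative sympy sketch (ours, with names chosen for the example only) recomputes the critical and singular points of the quartic $(y-x^2)^2-xy$ and checks the node at the origin.
\begin{verbatim}
from sympy import symbols, solve, hessian

x, y = symbols('x y')
G = (y - x**2)**2 - x*y

crit = solve([G.diff(x), G.diff(y)], [x, y])                 # critical points of G
sing = [p for p in crit if G.subs({x: p[0], y: p[1]}) == 0]  # those lying on V(G)

print(crit)   # two critical points, both real (xi = 2, as in the table)
print(sing)   # only the origin lies on the curve
print(hessian(G, (x, y)).subs({x: 0, y: 0}).det())  # -1 < 0: a hyperbolic node
\end{verbatim}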
\end{example} \begin{table}[ht] \begin{center} \begin{tabular}{|c|c|p{1.7in}|p{1.8in}|p{1.25in}|} \hline $d$ & $\xi$ & $G(x,y)$ & $(X(t), Y(t))$ & $V(G)$ \\ \hline \multicolumn{5}{|c|}{} \\[-.18in] \hline $1$ & 0 & $x-y$ & $(t, t)$ & line \\ \hline $2$ & 0 & $x^2-y$ & $(t, t^2)$ & parabola \\ $2$ & 1 & $x^2+y^2-1$ & $(\cos(t),\sin(t))$ & ellipse \\ \hline $3$ & 0 & $x^3-y$ & $(t,t^3)$ & cubic parabola \\ $3$ & 2 & $4x^{3}-3x+2y^{2}-1$ & $(-2t^2+1,4t^3-3t)$ & $(3,2)$-Chebyshev \\ \hline $4$ & 0 & $x^4-y$ & $(t,t^4)$ & quartic parabola \\ $4$ & 0 & $(y-x^2)^2-x$ & $(t^2,t^4+t)$ & \\ $4$ & 1 & $(y-x^2)^2+x^2-1$ & $(\cos(t), \sin(t)\!+\!\cos^2(t))$ & \\ $4$ & 2 & $(y-x^2)^2-xy$ & $(t^2-t,t^4-t^3)$ & \\ $4$ & 3 & $y^2+4x^4-4x^2$ & $(\cos(t), \sin(2t))$ & (2,1)-Lissajous \\ $4$ & 3 & $4(x^2+y^2)^2\!-\!3(x^2+y^2)\!-\!x$ & $(\cos(t)\!\cos(3t),\cos(t)\!\sin(3t))$ & lima{\c c}on \\ $4$ & 6 & $4x^3-3x+8y^4-8y^2+1$ & $(-8t^4+8t^2-1, 4t^3-3t)$ & $(3,4)$-Chebyshev \\ $4$ & 7 & see \eqref{eq:3-petal-alg} & see \eqref{eq:3-petal-param} & (2,1)-hypotrochoid \\ \hline \end{tabular} \end{center} \caption{Irreducible expressive curves $C=V(G)$ of degrees $d\le 4$. All curves are $L^\infty$-regular. For~each curve, a trigonometric or polynomial parametrization $(X(t),Y(t))$ is shown. We denote by~$\xi$ the number of critical points~of~$G$. } \label{table:irreducible-expressive} \end{table} \pagebreak[3] The following construction provides a rich source of examples of expressive curves. \begin{definition}[Epitrochoids and hypotrochoids] \label{def:hypotrochoids} Let $b$ and $c$ be coprime nonzero integers, with $b>|c|$. Let $u$ and $v$ be nonzero reals. The trigonometric curve \begin{align} \label{def:epi-hypo1} x&=u\cos bt+v\cos ct, \\ \label{def:epi-hypo2} y&=u\sin bt-v\sin ct \end{align} is called a \emph{hypotrochoid} if $c>0$, and an \emph{epitrochoid} if $c<0$. It is a rational curve of degree~$2b$. \end{definition} \begin{example} \label{ex:hypotrochoids-again} A hypotrochoid with (coprime) parameters $(b,c)$ and \redsf{suitably chosen ratio~$\frac{v}{u}$} has $b+c$ ``petals,'' see Figure~\hyperref{fig:6-hypotrochoids}. \redsf{The number of petals can change as $\frac{v}{u}$ changes, cf.\ Figure~\hyperref{fig:5-general-hypotrochoids}.} When $b=c+1$, we recover Example~\hyperref{ex:hypotrochoids}, cf.\ Figure~\hyperref{fig:3-petal} ($(b,c)=(2,1)$) and Figure~\hyperref{fig:5-petal} ($(b,c)\!=\!(3,2)$). \end{example} \begin{figure} \caption{\redsf{Expressive} hypotrochoids.} \label{fig:6-hypotrochoids} \end{figure} \begin{figure} \caption{\redsf{Hypotrochoids defined by the parametric equations \eqref{def:epi-hypo1}--\eqref{def:epi-hypo2}, for various values of the ratio~$\frac{v}{u}$.}} \label{fig:5-general-hypotrochoids} \end{figure} \begin{example} An epitrochoid with (coprime) parameters $b$ and $c$ (here $b>-c>0$) has $b+c$ inward-pointing ``petals,'' \linebreak[3] see Figure~\hyperref{fig:6-epitrochoids}. When $b=-c+1$, we recover the ``multi-lima\c cons'' of Example~\hyperref{ex:rose-curves} and Figure~\hyperref{fig:multi-limacons} (with $b\in\{2,3,4\}$, $c=1-b$). \end{example} \begin{figure} \caption{Epitrochoids defined by the parametric equations \eqref{def:epi-hypo1}--\eqref{def:epi-hypo2}.} \label{fig:6-epitrochoids} \end{figure} \begin{proposition} \label{pr:epi-hypo} Let $C$ be an epitrochoid or hypotrochoid given by \eqref{def:epi-hypo1}--\eqref{def:epi-hypo2} (As~in Definition~\hyperref{def:hypotrochoids}, $b$ and $c$ are coprime integers, with $b>|c|$; and \hbox{$u,v\in{\mathbb R}^*$}.)
\linebreak[3] Then $C$ is expressive and $L^\infty$-regular if it has $(b+c)(b-1)$ hyperbolic nodes~in~${\mathbb A}^2$. In~particular, this holds if $\frac{|v|}{|u|}$ is sufficiently small. \end{proposition} \begin{proof} Setting $\tau=e^{it}$, we convert the trigonometric parametrization \eqref{def:epi-hypo1}--\eqref{def:epi-hypo2} of~$C$ into a rational one: \begin{align*} x&=\ \ \,\tfrac{1}{2}(u\tau^{2b}+v\tau^{b+c}+v\tau^{b-c}+u), \\ y&=-\tfrac{i}{2}(u\tau^{2b}-v\tau^{b+c}+v\tau^{b-c}-u), \\ z&=\tau^b. \end{align*} This curve has two points at infinity, namely $(1,i,0)$ and $(1,-i,0)$, corresponding to $\tau=0$ and $\tau=\infty$, respectively. Noting that the line $x+iy=0$ passes through $(1,i,0)$, we change the coordinates by replacing $y$ by \begin{align*} x+iy&=u\tau^{2b}+v\tau^{b-c}. \end{align*} In a neighborhood of $(1,i,0)$, this yields the following parametrization: $$x=1,\quad x+iy=\tfrac{2v}{u}\tau^{b-c}+\text{h.o.t.},\quad z=\tfrac{2}{u}\tau^b+\text{h.o.t.}$$ Thus, at the point $(1,i,0)$ we have a semi-quasihomogeneous singularity of weight $(b-c,b)$, and similarly at~$(1,-i,0)$. The $\delta$-invariant at each of the two points is equal to $\frac{1}{2}(b-c-1)(b-1)$. Hence in the affine plane~${\mathbb A}^2$, there remain \[ \tfrac{1}{2}(2b-1)(2b-2)-(b-c-1)(b-1)=(b+c)(b-1) \] nodes. So if all these nodes are real hyperbolic (and distinct), then $C$ is expressive and $L^\infty$-regular by Proposition~\hyperref{pr:irr-expressive} (or Theorem~\hyperref{th:reg-expressive-irreducible}). It remains to show that $C$ has $(b+c)(b-1)$ hyperbolic nodes as long as $u,v\in{\mathbb R}^*$ are chosen appropriately (in particular, if $\frac{|v|}{|u|}$ is sufficiently small). Consider the rational parametrization of $C$ given by \begin{align*} x+iy&=u\tau^b+v\tau^{-c},\\ x-iy&=u\tau^{-b}+v\tau^c, \\ z&=1. \end{align*} We need to show that all solutions $(\tau,\sigma)\in({\mathbb C}^*)^2$ of \begin{align} \label{eq:u(tau-theta1)} &u\tau^b+v\tau^{-c}=u\sigma^b+v\sigma^{-c},\\ \label{eq:u(tau-theta2)} &u\tau^{-b}+v\tau^c=u\sigma^{-b}+v\sigma^c,\\ \label{eq:tau-neq-sigma} &\tau\neq\sigma \end{align} satisfy $|\tau|=|\sigma|=1$. Let us rewrite \eqref{eq:u(tau-theta1)}--\eqref{eq:u(tau-theta2)} as \begin{align*} u(\tau^b-\sigma^b)-v(\tau\sigma)^{-c}(\tau^c-\sigma^c)&=0,\\ -u(\tau\sigma)^{-b}(\tau^b-\sigma^b)+v(\tau^c-\sigma^c)&=0. \end{align*} The condition $\gcd(b,c)=1$ implies that at least one of $\tau^b-\sigma^b$ and $\tau^c-\sigma^c$ is nonzero (or else $\tau=\sigma$). Consequently \[ \det\left(\begin{matrix} u & -v(\tau\sigma)^{-c} \\ -u(\tau\sigma)^{-b} & v\end{matrix}\right) =uv(1-(\tau\sigma)^{-b-c}) =0, \] meaning that $\omega=\tau\sigma$ must be a root of unity: $\omega^{b+c}=1$. With respect to~$\tau$ and~$\omega$, the conditions \eqref{eq:u(tau-theta1)}--\eqref{eq:tau-neq-sigma} become: \begin{align} \label{eq:omega} &\omega^{b+c}=1, \\ \label{eq:tau-eps} &u(\tau^b-\omega^b\tau^{-b})+v(\tau^{-c}-\omega^{-c}\tau^c)=0, \\ & \label{eq:tau^2} \tau^2\neq\omega. \end{align} Let $\lambda$ be a square root of~$\omega$, i.e., $\lambda^2=\omega$. Then \eqref{eq:tau^2} states that $\tau\notin\{\lambda,-\lambda\}$. For each of the $b+c$ possible roots of unity~$\omega$, \eqref{eq:tau-eps}~is an algebraic equation of degree~$2b$ in~$\tau$. We claim that if $u,v\in{\mathbb R}^*$ are suitably chosen, then all $2b$ solutions of this equation lie on $S^1=\{|\tau|=1\}$. 
It is easy to see that this set of solutions contains the two values $\tau=\pm\lambda$ which we need to exclude, leaving us with $2(b-1)$ solutions for each~$\omega$ satisfying~\eqref{eq:omega}. We also claim that all these $2(b+c)(b-1)$ solutions are distinct, thereby yielding $(b+c)(b-1)$ hyperbolic nodes of the curve, as desired. Let us establish these claims. Denote ${\varepsilon}=\lambda^{b+c}\in\{-1,1\}$. Replacing $\tau$ by $\rho=\tau\lambda^{-1}$ (thus $\tau=\lambda\rho$), we transform~\eqref{eq:tau-eps} into \[ u{\varepsilon}(\rho^b-\rho^{-b})=v(\rho^c-\rho^{-c}). \] Making the substitution $\rho=e^{i\alpha}$, we translate this into \begin{equation} \label{eq:two-sinusoids} u{\varepsilon} \sin(b\alpha)=v\sin(c\alpha). \end{equation} If $|\frac{v}{u}|$ is sufficiently small, then the equation~\eqref{eq:two-sinusoids} clearly has $2b$ distinct real solutions in the interval $[0,2\pi)$, as claimed. Finally, it is not hard to see that all resulting $2(b+c)(b-1)$ values of~$\tau$ are distinct. \end{proof} We return to the general treatment of (potentially reducible) expressive $L^\infty$-regular curves. First, a generalization of Proposition~\hyperref{pr:irr-expressive}: \begin{proposition} \label{pr:sufficient-expressive} Let $C$ be a reduced real plane curve such that \begin{itemize}[leftmargin=.2in] \item each component of $C$ is real, and either polynomial or trigonometric; \item the singular set of $C$ in the affine plane ${\mathbb A}^2$ consists solely of hyperbolic nodes; \item the set of real points of $C$ in the affine plane is connected. \end{itemize} Then $C$ is expressive and $L^\infty$-regular. \end{proposition} \begin{proof} The proof utilizes the approach used in the proof of Proposition~\hyperref{pr:irr-expressive}. Arguing exactly as at the beginning of the latter proof, we conclude that all we need to show is that the polynomial $F$ defining $C$ has finitely many critical points in the affine plane~${\mathbb A}^2$. Once again, we argue by contradiction. Assuming that $Z(F_x)\cap Z(F_y)$ contains a real curve $B$ of a positive degree $d'<d=\deg(C)$, and reasoning as in the earlier proof, we conclude that for any component $C'$ of~$C$, there exists a curve $\hat B_{C'}\in\Span\{B,(L^\infty)^{d'}\}$ containing $C'$ as a component. In view of the connectedness of~$C_{{\mathbb R}}$ and the fact that different members of the pencil $\Span\{B,(L^\infty)^{d'}\}$ are disjoint from each other in the affine plane ${\mathbb A}^2$, we establish that there is just one curve $\hat B\in\Span\{B,(L^\infty)^{d'}\}$ that contains all the components of $C$---but this contradicts the inequality $d'<d$. \end{proof} The following theorem is the main result of this paper. \begin{theorem} \label{th:reg-expressive} Let $C$ be a reduced real plane algebraic curve, with all irreducible components real. The following are equivalent: \begin{itemize}[leftmargin=.2in] \item $C$ is expressive and $L^\infty$-regular; \item each irreducible component of~$C$ is either trigonometric or polynomial, \\ all singular points of $C$ in the affine plane~${\mathbb A}^2$ are hyperbolic nodes, and \\ the set of real points of $C$ in the affine plane is connected. \end{itemize} \end{theorem} \begin{proof} Follows from Propositions~\hyperref{prop:nonempty} and~\hyperref{pr:sufficient-expressive}. \end{proof} \begin{corollary} Let $C$ be an $L^\infty$-regular expressive plane curve whose irreducible components are all real. Let $C'$ be a subcurve of~$C$, i.e., a union of a subset of irreducible components. 
If the set of real points of~$C'$ in the affine plane is connected, then $C'$ is also $L^\infty$-regular and expressive. \end{corollary} \begin{proof} Follows from Theorem~\hyperref{th:reg-expressive}. \end{proof} We conclude this section by a corollary whose statement is entirely elementary, and in particular does not involve the notion of expressivity. \begin{corollary} \label{cor:all-crit-pts-are-real} Let $C=V(G(x,y))$ be a real polynomial or trigonometric affine plane curve which intersects itself solely at hyperbolic nodes. Then all critical points of the polynomial $G(x,y)$ are real. \end{corollary} \begin{proof} Immediate from Proposition~\hyperref{pr:irr-expressive} (or Theorem~\hyperref{th:reg-expressive-irreducible}). \end{proof} \section{More expressivity criteria} \label{sec:more-expressivity-criteria} We first discuss the irreducible case. By Theorem~\hyperref{th:reg-expressive-irreducible}, an irreducible plane curve is expressive and $L^\infty$-regular if and only if it is either trigonometric or polynomial, and all its singular points outside~$L^\infty$ are real hyperbolic nodes. The last condition is usually the trickiest to verify. One simple case is when the number of hyperbolic nodes attains its maximum: \begin{corollary} \label{cor:max-number-of-nodes} Let $C$ be a real polynomial or trigonometric curve of degree~$d$ with $\frac{(d-1)(d-2)}{2}$ hyperbolic nodes. Then $C$ is expressive and $L^\infty$-regular. \end{corollary} \begin{proof} In view of Hironaka's formula \eqref{eq:Hironaka}, the curve~$C$ has no other singular points besides the given hyperbolic nodes. The claim follows by Proposition~\hyperref{pr:irr-expressive}. \end{proof} Examples illustrating Corollary~\hyperref{cor:max-number-of-nodes} include Lissajous-Chebyshev curves~\eqref{eq:Ta+Tb} with parameters $(d,d-1)$ as well as hypotrochoids with parameters \hbox{$(k,k-1)$} (cf.~\eqref{eq:hypotrochoid}, \redsf{with suitably chosen value of~$a$}). \begin{remark} In Corollary~\hyperref{cor:max-number-of-nodes}, the requirement that the curve $C$ is polynomial or trigonometric cannot be dropped. For example, there exists an irreducible real quartic with three hyperbolic nodes which is not expressive. \end{remark} Corollary~\hyperref{cor:max-number-of-nodes} can be generalized as follows. \begin{corollary} \label{cor:nodes-Newton} Let $C=V(G(x,y))$ be a real polynomial or trigonometric curve with $\nu$ hyperbolic nodes. Suppose that the Newton polygon of $G(x,y)$ has $\nu$ interior integer points. Then $C$ is expressive and $L^\infty$-regular. \end{corollary} \begin{proof} It is well known (see \cite{Baker1893} or \cite[Section~4.4]{Fulton}) that the maximal possible number of nodes of an irreducible plane curve with a given Newton polygon (equivalently, the arithmetic genus of a curve in the linear system spanned by the monomials in the Newton polygon, on the associated toric surface) equals the number of interior integer points in the Newton polygon. The claim then follows by Proposition~\hyperref{pr:irr-expressive}. \end{proof} Applications of Corollary~\hyperref{cor:nodes-Newton} include arbitrary irreducible Lissajous-Chebyshev curves~\eqref{eq:Ta+Tb}. It is natural to seek an algorithm for verifying whether a given immersed real polynomial or trigonometric curve~$C$, say one given by an explicit parametrization, is expressive.
By Theorem~\hyperref{th:reg-expressive-irreducible}, this amounts to checking that each point of self-intersection of~$C$ in the affine plane~${\mathbb A}^2$ corresponds to two real values of the parameter. In the case of a polynomial curve \[ t\mapsto (P(t),Q(t)), \] this translates into requiring that \begin{itemize}[leftmargin=.2in] \item the resultant (with respect to either variable $s$ or $t$) of the polynomials \[ \widehat P(t,s)=\frac{P(t)-P(s)}{t-s} \quad \text{and}\quad \widehat Q(t,s)=\frac{Q(t)-Q(s)}{t-s} \] has \redsf{simple} real roots and \item \redsf{the corresponding points $(P(t),Q(t))$ are simple (hyperbolic) nodes of~$C$}. \end{itemize} \begin{example} Consider the sextic curve \[ t\mapsto (-8t^{6}+24t^{4}+4t^{3}-18t^{2}-6t+1,-2t^{4}+4t^{2}-1). \] In this case, \begin{align*} \widehat P(t,s)&=-2 (2s^2 + 2st + 2t^2 - 3) (2s^3 + 2t^3 - 3s - 3t - 1),\\ \widehat Q(t,s)&=-2(s+t)(s^2+t^2-2). \end{align*} The resultant of $\widehat P$ and $\widehat Q$ (which we computed using \texttt{Sage}) is equal to \[ -256 (2s^2 - 3)(2s^2 - 2s - 1)(2s^2 + 2s - 1)(8s^6 - 24s^4 - 4s^3 + 18s^2 + 6s - 1). \] All its 12 roots are real, so the curve is expressive, with 6 nodes. See Figure~\hyperref{fig:newton-degenerate}. \end{example} \begin{figure} \caption{The curve $C=V(2x^3+3x^2-1+(4y^3-3y+\tfrac{x^2} \label{fig:newton-degenerate} \end{figure} The case of a trigonometric curve can be treated in a similar way. Let $C$ be a trigonometric curve with a Laurent parametrization \[ t\mapsto (x(t),y(t))=(P(t)+\overline P(t^{-1}),Q(t)+\overline Q(t^{-1})) \] (cf.\ \eqref{eq:param-PQ}), with $P(t)=\sum_k \alpha_k t^k$, $Q(t)=\sum_k \beta_k t^k$. We write down the differences \begin{align*} \frac{x(t)-x(s)}{t-s} =\sum_k(t^{k-1}+t^{k-2}s+...+s^{k-1})\Bigl(\alpha_k-\frac{\overline\alpha_k}{t^k s^k}\Bigr),\\[-.05in] \frac{y(t)-y(s)}{t-s}=\sum_k(t^{k-1}+t^{k-2}s+...+s^{k-1})\Bigl(\beta_k-\frac{\overline\beta_k}{t^k s^k}\Bigr), \end{align*} clear the denominators by multiplying by an appropriate power of~$ts$, and require that all values of $t$ and $s$ for which the resulting polynomials vanish (equivalently, all roots of their resultants) lie on the unit circle. \begin{proposition} Let $C\!=\!V(G)$ be an $L^\infty$-regular curve, with $G(x,y)\in{\mathbb R}[x,y]$. Assume that $C_{{\mathbb R}}$ is connected, and each singular point of it is a hyperbolic node, as in Definition~\hyperref{def:divide-polynomial}. Let $\nu$ denote the number of such nodes, and let $\rho$ be the number of regions of the divide~$D_G$. Then $C$ is expressive if and~only~if \begin{equation} \label{eq:nu+rho} \nu+\rho=(d-1)^2-\sum_{p\in C\cap L^\infty} \mu(C,p,L^\infty). \end{equation} \end{proposition} \begin{proof} By B\'ezout's theorem, the right-hand side of~\eqref{eq:nu+rho} is the number of critical points of~$G$ in the affine plane, counted with multiplicities. Since each region of $D_G$ contains at least one (real) critical point, and each node of~$C_{{\mathbb R}}$ is a critical point, the only way for~\eqref{eq:nu+rho} to hold is for $C$ to be expressive. \end{proof} \section{Bending, doubling, and unfolding} \label{sec:constructions-irr} In this section, we describe several transformations which can be used to construct new examples of expressive curves from existing ones. 
The simplest transformation of this kind is the ``bending'' procedure based on the following observation: \begin{proposition} \label{pr:bending} Let $f(x,y),g(x,y)\in{\mathbb R}[x,y]$ be such that the map \begin{equation} \label{eq:real-biregular} (x,y)\mapsto (f(x,y),g(x,y)) \end{equation} is a biregular automorphism~of~${\mathbb A}^2$. If the curve $C=V(G(x,y))$ is expressive, then so is the curve \[ \widetilde C=V(G(f(x,y),g(x,y))). \] If, in addition, $C$ is $L^\infty$-regular, with real components, then so is~$\widetilde C$. \end{proposition} Proposition~\hyperref{pr:bending} is illustrated in Figure~\hyperref{fig:bending}. \begin{figure} \caption{The curves $C=V(x^2+y^2-1)$ and $\widetilde C=V(x^2+(2y-2x^2)^2-1)$.} \label{fig:bending} \end{figure} \begin{proof} The automorphism~\eqref{eq:real-biregular} is an invertible change of variables that restricts to a diffeomorphism of the real plane~${\mathbb R}^2$. As such, it sends (real) critical points to (real) critical points, does not change the divide of the curve, and preserves expressivity. Let us now assume that $C$ is expressive and $L^\infty$-regular, with real components. In view of Theorem 7.16, all we need to show is that all components of~$\widetilde C$ are polynomial or trigonometric. Geometrically, a polynomial (resp., trigonometric) component is a Riemann sphere punctured at one real point (resp., two complex conjugate points) and equivariantly immersed into the plane. This property is preserved under real biregular automorphisms of~${\mathbb A}^2$. \end{proof} \begin{remark} It is well known~\cite{vanderKulk} that the group of automorphisms of the affine plane is generated by affine transformations together with the transformations of the form $(x,y)\mapsto (x,y+P(x))$, for $P$ a polynomial. This holds over any field of characteristic zero, in particular over the reals. \end{remark} \begin{example} Several examples of bending can be extracted from Table~\hyperref{table:irreducible-expressive}. Applying the automorphism $(x,y)\mapsto (x,y+x-x^m)$ to the line $V(x-y)$, we get $V(x^m-y)$. The automorphism $(x,y)\mapsto (x,y-x^2)$ transforms the parabola $V(y^2-x)$ into $V((y-x^2)^2-x)$, the ellipse $V(x^2+y^2-1)$ into $V(x^2+(y-x^2)^2-1)$, and the nodal cubic $V(y^2-xy-x^3)$ into $V((y-x^2)^2-xy)$. (In turn, $V(y^2-xy-x^3)$ and $V(4x^{3}-3x+2y^{2}-1)$ are related to each other by an affine transformation.) \end{example} \begin{example} The polynomial expressive curves shown in Figure~\hyperref{fig:doubling-to-limacon-and-hypotrochoid} (in red) are obtained by bending a parabola and a nodal cubic. \end{example} \begin{figure} \caption{A curve $C=V(G(x,y))$ and the ``doubled'' curve $\widetilde C=V(G(x,y^2))$. On the left: $G(x,y^2)$ is the left-hand side of the equation~\eqref{eq:limacon} \label{fig:doubling-to-limacon-and-hypotrochoid} \end{figure} We next discuss the ``doubling'' construction which transforms a plane curve \linebreak \hbox{$C=V(G(x,y))$} into a new curve $\widetilde C=V(G(x,y^2))$. Proposition~\hyperref{pr:doubling} below shows~that, under certain conditions, this procedure preserves expressivity. See Figure~\hyperref{fig:doubling-to-limacon-and-hypotrochoid}. \begin{proposition} \label{pr:doubling} Let $C$ be an expressive $L^\infty$-regular curve whose components are all real (hence polynomial or trigonometric, see Proposition~\hyperref{prop:nonempty}). 
Suppose that \begin{itemize}[leftmargin=.2in] \item each component $B=V(G(x,y))$ of $C$, say with $\deg_x(G)=d$, intersects $Z(y)$ in $d$ real points (counting multiplicities), all of which are smooth points of~$C$; \redsf{moreover, \begin{itemize}[leftmargin=.15in] \item[{\rm $\circ$}] if $B$ is trigonometric, then it intersects $Z(y)$ at $\frac{d}{2}$ points of quadratic tangency; \item[{\rm $\circ$}] if $B$ is polynomial, then it intersects $Z(y)$ at $\frac{d}{2}$, $\frac{d-1}{2}$ or $\frac{d-2}{2}$ points of quadratic tangency, with 0, 1, or 2 points of transverse intersection, respectively; \end{itemize} } \item all nodes of $C$ lie in the real half-plane $\{y>0\}$; \item at each point of quadratic tangency between $C$ and $Z(y)$, the local real branch of $C$ lies in the upper half-plane $\{y>0\}$; \item each connected component of the set $\{(x,y)\in C\cap{\mathbb R}^2\mid y<0\}$ is unbounded. \end{itemize} Then the curve $\widetilde C=V(G(x,y^2))$ is expressive and $L^\infty$-regular. \end{proposition} \begin{proof} The curve $\widetilde C$ is nodal by construction. It is not hard to see that the set $\widetilde C_{\mathbb R}$ is connected, and all the nodes of $\widetilde C$ are hyperbolic. (See the proof of Proposition~\hyperref{pr:accordion} for a more involved version of the argument.) In view of Theorem \hyperref{th:reg-expressive}, it remains to show that all components of $\widetilde C$ are real polynomial or trigonometric. We only treat the trigonometric case, since the argument in the polynomial case is similar. (Cf.\ also the proof of Lemma~\hyperref{lem:accordion2-poly}, utilizing such an argument in a more complicated context.) The natural map $\widetilde C\!\to \!C$ lifts to a two-sheeted ramified covering $\rho:\widetilde C^{\vee}\!\to\! C^\vee$ between respective normalizations. The restriction of $\rho$ to $\widetilde C^{\vee}\!\setminus\!\rho^{-1}(Z(y))$ is an unramified two-sheeted covering, and each component of $\widetilde C$ contains a one-dimensional fragment of the real point set, hence is real. In fact, $\rho$ is not ramified at all, since each point in $C\cap Z(y)$ lifts to a node, and hence to two preimages in $\widetilde C^{\vee}$. Since $C^\vee={\mathbb C}^*$, it follows that $\widetilde C^{\vee}$ is a union of at most two disjoint copies of~${\mathbb C}^*$. We conclude that $\widetilde C$ consists of one or two trigonometric components. \end{proof} If a curve $C=V(G(x,y))$ satisfies the conditions in Proposition~\hyperref{pr:doubling} with respect to each of the coordinate axes $Z(x)$ and~$Z(y)$, with all points in $C\cap Z(x)$ (resp.,~$C\cap Z(y)$) located on the positive ray $\{x=0, y>0\}$ (resp., $\{y=0, x>0\}$), then one can apply the doubling transformation twice, obtaining an expressive curve $\widetilde C=V(G(x^2,y^2))$. A~couple of examples are shown in Figure~\hyperref{fig:double-reflection}. \begin{figure} \caption{A curve $C=V(G(x,y))$ and its ``double-double'' $C'=V(G(x^2,y^2))$. Left: $C$ is a nodal cubic tangent to both axes, $\widetilde C$ is a 4-petal hypotrochoid, cf.\ Definition~\hyperref{def:hypotrochoids} \label{fig:double-reflection} \end{figure} The remainder of this section is devoted to the discussion of ``unfolding.'' This is a transformation of algebraic curves that utilizes the coordinate change \begin{equation} (x,y)=(x,T_m(u)). \label{eq:accordion} \end{equation} (As before, $T_m$ denotes the $m$th Chebyshev polynomial of the first kind, see~\eqref{eq:chebyshev-poly}.)
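As a quick sanity check of the substitution \eqref{eq:accordion} (a toy illustration added here for concreteness; it is not the curve of Figure~\hyperref{fig:ellipse-infolding}), take the circle $G(x,y)=x^2+y^2-1$, so that $2\,G(x,y)=T_2(x)+T_2(y)$. Using the composition identity $T_2\circ T_m=T_{2m}$, we get
\[
2\,G(x,T_m(y))=T_2(x)+T_2(T_m(y))=T_2(x)+T_{2m}(y),
\]
so unfolding the circle produces, up to a constant factor, the Lissajous-Chebyshev curve~\eqref{eq:Ta+Tb} with parameters $(2,2m)$; for $m=2$ this is $x^2+4y^4-4y^2=0$, the quartic of Table~\hyperref{table:irreducible-expressive} with the roles of $x$ and $y$ exchanged.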
A~precise description of unfolding is given in Proposition~\hyperref{pr:accordion} below. To get a general idea of how this construction works, take a look at the examples in Figures~\hyperref{fig:ellipse-infolding}--\hyperref{fig:accordion-exotic}. As these examples illustrate, the bulk of the unfolded curve (viewed up to an isotopy of the real plane) is obtained by stitching together $m$ copies of the input curve~$C$, or more precisely the portion of~$C$ contained in the strip $\{-1\le y\le 1\}$. \begin{figure} \caption{An ellipse $C=V(G(x,y))$ and its unfolding $C_{(5)}=V(G(x,T_5(y)))$. Here $G(x,y)=y^2-\sqrt{2} \label{fig:ellipse-infolding} \end{figure} \begin{figure} \caption{A curve $C=V(G(x,y))$ and its triple unfolding $C_{(3)}=V(G(x,T_3(y)))$. Here $G(x,y)=8x^{3} \label{fig:accordion-exotic} \end{figure} \begin{lemma} \label{lem:accordion1} Let $C=V(F(x,y))$ be a trigonometric curve. Let $d=\deg_x F(x,y)$. Suppose that \begin{itemize}[leftmargin=.2in] \item the strip $\{-1<y<1\}$ contains a one-dimensional fragment of $C_{{\mathbb R}}$; \item $C$ intersects each of the lines $y=\pm 1$ in $d/2$ points; \item all of these points are smooth points of quadratic tangency between $C$ and $Z(y^2-1)$. \end{itemize} Then for any~$m\in{\mathbb Z}_{>0}\,$, the curve $C_{(m)}$ defined by \begin{equation} \label{eq:Cm} C_{(m)}=V(F(x,T_m(u))) \end{equation} is a union of trigonometric curves. \end{lemma} \begin{proof} The natural map $C_{(m)}\to C$ given by~\eqref{eq:accordion} lifts to the $m$-sheeted ramified covering map $\rho:C_{(m)}^{\vee}\to C^\vee$ between the normalizations. The restriction \[ \rho:C_{(m)}^{\vee}\setminus\rho^{-1}(Z(y^2-1))\to C^\vee\setminus Z(y^2-1) \] is an unramified $m$-sheeted covering, and each of the components of $C_{(m)}$ contains a one-dimensional fragment of the real point set, hence is real. Let us show that $\rho$ is not ramified at all. If $p\in C\cap Z(y-1)$ and $T_m^{-1}(1)$ consists of $a$ simple roots and $b$ double roots, then $p$ lifts to $a$ smooth points (where $C_{(m)}$ is quadratically tangent to the lines $u=\lambda$ with $\lambda$ running over all these simple roots) and $b$ nodes, totaling $a+2b=m$ preimages in $C_{(m)}^{\vee}$. As $C$ is trigonometric, $C^\vee\!=\!{\mathbb C}^*$. Since a cylinder can only be covered by a cylinder, $C_{(m)}^{\vee}$~is a union of disjoint copies of ${\mathbb C}^*$ (not necessarily $m$ of them), so $C_{(m)}$ is a union of trigonometric components. \end{proof} \pagebreak[3] \begin{lemma} \label{lem:accordion2-poly} Let $C=V(F(x,y))$ be a real polynomial curve. Let $d=\deg_x F(x,y)$. Suppose that \begin{itemize}[leftmargin=.2in] \item the strip $\{-1<y<1\}$ contains a one-dimensional fragment of $C_{{\mathbb R}}$; \item $C$ intersects $Z(y^2-1)$ in $2d$ points (counting multiplicities), all of which are smooth points of~$C$; \item these points include $d-1$ quadratic tangencies and two transverse intersections. \end{itemize} Then for any $m\in{\mathbb Z}_{>0}\,$, the curve $C_{(m)}$ defined by~\eqref{eq:Cm} is a union of polynomial or trigonometric components. \end{lemma} \begin{proof} It is not hard to see that $C$ has a polynomial parametrization $t\mapsto (P(t),Q(t))$ with \hbox{$Q(t)=\pm T_d(t)$}. This follows from ``Chebyshev's equioscillation theorem'' of approximation theory (due to E.~Borel and A.~Markov, see, e.g., \cite[Section~1.1]{Sodin-Yuditskii} or~\cite[Theorem~3.4]{Mason-Handscomb}). In~the rest of the proof, we assume that $Q(t)=T_d(t)$, as the case $Q(t)=-T_d(t)$ is completely similar.
We note that one can slightly vary the coefficients of $P$ while keeping the intersection properties of $C$ with $Z(y^2-1)$ (and maintaining expressivity if $C$ has this property). Thus, we can assume that $P(t)$ is a generic polynomial (in particular, with respect to $Q(t)=T_d(t)$). \textbf{Case~1}: $\gcd(d,m)=1$. Observe that \begin{equation} \label{eq:Cm-param-coprime} \tau\mapsto (x,u)=(P(T_m(\tau)),T_d(\tau)) \end{equation} is a parametrization of a polynomial curve lying inside~$C_{(m)}$. Indeed, \[ F(P(T_m(\tau)),T_m(T_d(\tau)))=F(P(T_m(\tau)),T_d(T_m(\tau)))=0 \] because $F(P(t),Q(t))=0$. Since $\frac{dx}{d\tau}$ and $\frac{du}{d\tau}$ never vanish simultaneously (thanks to the genericity of~$P$ and the coprimeness of $d$ and~$m$), the map~\eqref{eq:Cm-param-coprime} is an immersion of ${\mathbb C}$ into the affine plane. Since $u=0$ at $d$ points, while $\deg_x F(x,T_m(u))=d$, the image of this immersion is the entire curve~$C_{(m)}$, which is therefore polynomial. \textbf{Case~2}: $c=\gcd(m,d)>1$. Let $m=cr$, $d=cs$. The curve $C_{(m)}$ is given in the affine $(x,u)$-plane by the equations \begin{equation*} x=P(t),\quad T_m(u)=T_d(t) \label{eq:Cm-parametric} \end{equation*} involving an implicit parameter~$t$. Setting $t=T_r(\tau)$ we rewrite this as $$x=P(T_r(\tau)),\quad T_m(u) =T_m(T_s(\tau)) $$ (since $T_d(T_r(\tau))=T_m(T_s(\tau))$). The equation $T_m(u)=T_m(u')$ has solutions $u=u'$, $u=-u'$ (for $m$ even) as well as $$\arccos u=\pm \arccos u'+2\pi\tfrac{k}{m},\quad k=0,\dots,m-1, \quad u,u'\in[-1,1]. $$ From this, we obtain the following components of~$C_{(m)}$, all of which turn out to be either polynomial or trigonometric. The polynomial components are: \begin{align*} &x=P(T_r(\tau)),\quad u=T_s(\tau), \\ &x=P(T_r(\tau)),\quad u=-T_s(\tau) \quad (m\in 2{\mathbb Z}). \end{align*} The trigonometric components are (here we set $\tau=\cos\theta$): $$x=P(\cos(r\theta)),\quad u=\cos(s\theta\pm2\pi\tfrac{k}{m}), \quad 0<k<\tfrac{m}{2}. $$ One can sort out which of these components are distinct by taking into account that $x$ is invariant with respect to the substitutions $\theta\mapsto-\theta$ and $\theta\mapsto\theta+2\pi\frac{j}{r}$. \redsf{Finally, we note that the curve $C_{(m)}$ has no multiple components. To see that, take a line $x=x_0$ such that the polynomial $F(x_0,y)$ has only simple roots $y_1,...,y_r$ which differ from $\pm1$. (Such a line $x=x_0$ does exist, since otherwise $C$ would have multiple components or contain one of the lines $y=\pm1$.) Then the polynomial $F(x_0,T_m(u))$ has only simple roots which all come from the equations $T_m(u)=y_i$, $i=1,...,r$.} \end{proof} \begin{proposition} \label{pr:accordion} Let $C=V(G(x,y))$ be an expressive $L^\infty$-regular curve all of whose components are real (hence polynomial or trigonometric, cf.\ Proposition~\hyperref{prop:nonempty}). 
Suppose~that \begin{itemize}[leftmargin=.2in] \item each component $V(F(x,y))$ of $C$, say with $\deg_x(F)=d$, intersects $Z(y^2-1)$ in $2d$ real points (counting multiplicities), all of which are smooth points of~$C$; \item moreover, these intersections occur in one of the following two ways: \begin{itemize}[leftmargin=.2in] \item[{\rm $\circ$}] $d$ points of quadratic tangency if the component is trigonometric, or \item[{\rm $\circ$}] $d-1$ points of quadratic tangency and two points of transverse intersection\\ if the component is polynomial; \end{itemize} \item all nodes of $C$ lie in the strip $\{-1<y<1\}$; \item at each point of quadratic tangency between $C$ and $Z(y^2-1)$, the local real branch of $C$ lies in the strip $\{-1\le y\le 1\}$. \end{itemize} Then the curve $C_{(m)}=V(G(x,T_m(y)))$ is expressive and $L^\infty$-regular. \end{proposition} \begin{proof} By construction, the curve $C_{(m)}$ is nodal. By Lemmas \hyperref{lem:accordion1} and~\hyperref{lem:accordion2-poly}, the components of $C_{(m)}$ are real polynomial or trigonometric. In view of Theorem \hyperref{th:reg-expressive}, it remains to show that the set $C_{(m),{\mathbb R}}$ is connected, and that all the nodes of $C_{(m)}$ are hyperbolic. The set of real points $C_{\mathbb R}$ is connected by Definition~\hyperref{def:expressive-poly}. Since all the nodes of~$C$ lie inside the strip $\{-1<y<1\}$, it follows that the set $C_{\mathbb R}\cap\{-1\le y\le1\}$ is connected. Let $\{a_1<\cdots<a_{m+1}\}$ be the set of roots of the equation $T_m^2(a)=1$. Then each~set \begin{equation} \label{eq:substips} C_{(m),{\mathbb R}}\cap\{a_j\le y\le a_{j+1}\}, \quad j=1,...,m, \end{equation} is an image of $C_{\mathbb R}\cap\{-1\le y\le1\}$ under a homeomorphism of the strip $\{-1\le y\le1\}$ onto the strip $\{a_j\le y\le a_{j+1}\}$. Furthermore, for each $j=2,...,m$, the pair of sets \[ C_{(m),{\mathbb R}}\cap\{a_{j-1}\le y\le a_j\} \quad \text{and}\quad C_{(m),{\mathbb R}}\cap\{a_j\le y\le a_{j+1}\} \] are attached to each other at their common points along the line $Z(y-a_j)$. (This set of common points is nonempty since it includes the images of the intersection of $C_{\mathbb R}$ with one of the two lines $Z(y\pm 1)$.) To obtain the entire set $C_{(m),{\mathbb R}}\,$, we attach to the (connected) union of the $m$ sets~\eqref{eq:substips} the diffeomorphic images of the intervals forming the set $C_{\mathbb R}\setminus\{-1\le y\le 1\}$ (if any). We conclude that $C_{(m),{\mathbb R}}$ is connected. Regarding the nodes of $C_{(m)}$, we observe that they come in two flavours. First, as one of the $m$ preimages of a node of $C$ contained in the strip $\{-1<y<1\}$; all these preimages are real, hence hyperbolic. Second, as a preimage of a tangency point between $C$ and $Z(y^2-1)$; this again yields a hyperbolic node. \end{proof} \section{Arrangements of lines, parabolas, and circles} \label{sec:overlays} We next discuss ways of putting together several expressive curves to create a new (reducible) expressive curve. Our key tool is the following corollary. \begin{corollary} \label{cor:overlays} Let $C_1,\dots,C_k$ be expressive and $L^\infty$-regular plane curves such that \begin{itemize}[leftmargin=.2in] \item each pair $C_i$ and $C_j$ intersect each other in~${\mathbb A}^2$ at (distinct) hyperbolic nodes, and \item the set $C_{\mathbb R}$ of real points of the curve $C=C_1\cup \cdots\cup C_k$ is connected. \end{itemize} Then $C$ is expressive and $L^\infty$-regular.
\end{corollary} \begin{proof} Follows from Theorem~\hyperref{th:reg-expressive}. \end{proof} One easy consequence of Corollary~\hyperref{cor:overlays} is the following construction. \begin{corollary}[cf.\ Example~\hyperref{ex:two-graphs}] \label{cor:overlays-graphs} Let $f_1(x),\dots,f_k(x)\in{\mathbb R}[x]$. Assume that each poly\-nomial $f_i(x)-f_j(x)$ has only real roots, and all such roots (over all pairs $\{i,j\}$) are pairwise distinct. Then the curve $V(\prod_i (y-f_i(x)))$ is expressive and $L^\infty$-regular. \end{corollary} \begin{example}[Line arrangements] \label{ex:line-arrangements} A \redsf{connected} arrangement of distinct real lines in the plane forms an expressive and $L^\infty$-regular curve, as long as no three lines intersect at a point. Parallel lines are allowed. See Figure~\hyperref{fig:six-lines}. \end{example} \begin{figure} \caption{A line arrangement. } \label{fig:six-lines} \end{figure} \begin{example}[Arrangements of parabolas] \label{ex:arrangements-of-parabolas} Let $C$ be a union of distinct real parabolas in the affine plane. Then $C$ is an expressive and $L^\infty$-regular curve provided \begin{itemize}[leftmargin=.2in] \item the set of real points of $C$ is connected; \item no three parabolas intersect at a point; \item all intersections between parabolas are transverse; \item for each pair of parabolas $P_1$ and $P_2$, one of the following options holds: \begin{itemize}[leftmargin=.2in] \item $P_1$ and $P_2$ differ by a shift of the plane, or \item $P_1$ and $P_2$ have parallel (or identical) axes, and intersect at 2~points, or \item $P_1$ and $P_2$ intersect at 4 points. \end{itemize} \end{itemize} See Figures~\hyperref{fig:four-parabolas} and~\hyperref{fig:two-4-parabolas}. \end{example} \begin{figure} \caption{An arrangement of four co-oriented parabolas. Each pair of parabolas intersect trans\-versally at two points. } \label{fig:four-parabolas} \end{figure} \begin{figure} \caption{Two arrangements of four parabolas. Each pair of parabolas intersect trans\-versally at one, two, or four points, all of them real. } \label{fig:two-4-parabolas} \end{figure} Examples~\hyperref{ex:line-arrangements} and~\hyperref{ex:arrangements-of-parabolas} have a common generalization: \begin{example}[Arrangements of lines and parabolas] \label{ex:lines+parabolas} Let $C$ be a union of distinct real lines and parabolas in the affine plane. Then $C$ is an expressive and $L^\infty$-regular curve provided \begin{itemize}[leftmargin=.2in] \item the set of real points of $C$ is connected; \item no three of these curves intersect at a point; \item all intersections between these lines and parabolas are transverse; \item each line intersects every parabola at one or two points; \item each pair of parabolas intersect in one of the ways listed in Example~\hyperref{ex:arrangements-of-parabolas}. \end{itemize} \end{example} Another elegant application of Corollary~\hyperref{cor:overlays} involves arrangements of circles: \begin{example}[Circle arrangements] \label{ex:circle-arrangements} Let $\{C_i\}$ be a collection of circles on the real affine plane such that each pair of circles intersect transversally at two real points, with no triple intersections. Then the curve $\bigcup C_i$ is expressive and $L^\infty$-regular. See Figure~\hyperref{fig:five-circles}.
\end{example} \begin{figure} \caption{A circle arrangement.} \label{fig:five-circles} \end{figure} Here is a common generalization of Examples~\hyperref{ex:line-arrangements} and~\hyperref{ex:circle-arrangements}: \begin{example}[Arrangements of lines and circles] Let $\{C_i\}$ be a collection of lines and circles on the real affine plane such that each circle intersects every line (resp., every other circle) transversally at two points, with no triple intersections. Then the curve $\bigcup C_i$ is expressive and $L^\infty$-regular. See Figure~\hyperref{fig:five-circles+three-lines}. \end{example} \begin{figure} \caption{An arrangement of circles and lines.} \label{fig:five-circles+three-lines} \end{figure} \section{Shifts and dilations} \label{sec:shifts+dilations} In this section, we obtain lower bounds for an intersection multiplicity (at a point $p\in L^\infty$) between a plane curve~$C$ and another curve obtained from~$C$ by a shift or dilation. In Sections~\hyperref{sec:arrangements-poly}--\hyperref{sec:arrangements-trig}, we will use these estimates to derive expressivity criteria for unions of polynomial or trigonometric curves. Without loss of generality, we assume that $p=(1,0,0)$ throughout this section. To state our bounds, we will need some notation involving Newton diagrams: \begin{definition} \label{def:Newton-Gamma} Let $C=Z(F(x,y,z))$ be a plane projective curve that contains neither of the lines $Z(z)=L^\infty$ and $Z(y)$ as a component. Furthermore assume that $p=(1,0,0)\in C\cap L^\infty$. We denote by $\Gamma(C,p)$ the Newton diagram of the polynomial \begin{equation} \label{eq:F1} F_1(y,z)=F(1,y,z)\in{\mathbb C}[y,z] \end{equation} at the point $(0,0)$, see Definition~\hyperref{def:Newton-diagram}. Since $F_1$ is not divisible by $y$ or~$z$, the Newton diagram $\Gamma(C,p)$ touches both coordinate axes. We denote by $S^-(\Gamma(C,p))$ the area of the domain bounded by $\Gamma(C,p)$ and these axes. \end{definition} \begin{proposition} \label{pr:gen-dilation} Let $C=Z(F(x,y,z))$ be a projective curve that contains neither $Z(z)=L^\infty$ nor~$Z(y)$ as a component. Let $c\in{\mathbb C}^*$, and let $C_c$ denote the dilated~curve \begin{equation} \label{eq:dilX} C_c=Z(F(cx,cy,z)). \end{equation} Assume that $p=(1,0,0)\in C\cap L^\infty$. Then \begin{equation} (C\cdot C_c)_p\ge2S^-(\Gamma(C,p)). \label{est-dilation} \end{equation} \end{proposition} \begin{proof} In the coordinates $(y,z)$, the dilation $C\leadsto C_c$ can be regarded as a transition from the polynomial $F_1$ (see~\eqref{eq:F1}) to the polynomial $F_c(y,z)=F(c,cy,z)$. The polynomials $F_1$ and $F_c$ have the same Newton diagram at~$p$ (resp., Newton polygon). Furthermore, let $G(y,z)$ be a polynomial with the same Newton polygon and with generic coefficients. By the lower semicontinuity of the intersection number, we have $$(C\cdot C_c)_p\ge(C\cdot Z(G(y,z)))_p.$$ By Kouchnirenko's theorem \cite[1.18, Th\'eor\`eme~III${}'$]{Kouchnirenko}, the total intersection multiplicity of the curves $C$ and $Z(G)$ in the torus $({\mathbb C}^*)^2$ equals $2S(P)$, twice the area of the Newton polygon $P$ of~$F_1$. Let us now deform $F_1$ and $G$ by adding all monomials underneath the Newton diagram $\Gamma(C,p)$, with sufficiently small generic coefficients. Again by Kouchnirenko's theorem, the total intersection multiplicity of the deformed curves in $({\mathbb C}^*)^2$ equals $2S(P)+2S^-(\Gamma(C,p))$. 
To establish the bound (\hyperref{est-dilation}), it remains to notice that the extra term $2S^-(\Gamma(C,p))$ occurring in the latter intersection multiplicity geometrically comes from simple intersection points in a neighborhood of~$p$, obtained by breaking up the (complicated) intersection of $C$ and $Z(G)$ at~$p$. \end{proof} To state the analogue of Proposition~\hyperref{pr:gen-dilation} for shifted curves, we need to recall some basic facts about the \emph{Newton-Puiseux algorithm} \cite[Algorithm~I.3.6]{GLS}. This algorithm assigns each local branch~$Q$ of a curve $C$ at the point $p=(1,0,0)\in C\cap L^\infty$ to an edge $E\!=\!E(Q)$ of the Newton diagram~$\Gamma(C,p)$, cf.\ Definition~\hyperref{def:Newton-Gamma}. We denote~by \[ \mathbf{n}(E)=(n_y,n_z)\in{\mathbb Z}_{>0}^2 \] the primitive integral normal vector to~$E$, with positive coordinates. \pagebreak[3] \begin{lemma}[{cf.\ \cite[Section~I.3.1]{GLS}}] \label{l:wzwy} Let $Q$ be a local branch of $C$ at \hbox{$p\!=\!(1,0,0)\!\in\! C$}. Assume that $Q$ is tangent to~$L^\infty$. Let $E=E(Q)$. Then \begin{equation} \label{eq:nE=} \mathbf{n}(E)=(n_y,n_z)=\tfrac{1}{r} (m,d) \end{equation} where \begin{align} \label{eq:d(Q)} d=d(Q)&=(Q\cdot L^\infty)_p\,,\\ \label{eq:m(Q)} m=m(Q)&=\mt(Q)<d,\\ \label{eq:r(Q)} r=r(Q)&=\gcd(m,d). \end{align} \end{lemma} We denote by $F^E(y,z)=F^{E(Q)}(y,z)$ the truncation of $F_1$ (see~\hyperref{eq:F1}) along the edge $E=E(Q)$. The polynomial $F^E$ is quasihomogeneous with respect to the weighting of $y$ and $z$ by $n_y$ and $n_z$, respectively. We denote \begin{align} \label{eq:rho(Q)} \rho=\rho(Q)&=\lim_{\substack{q=(1,y,z)\in Q\\ q\to p}} \frac{z^{n_y}}{y^{n_z}}\in{\mathbb C}^*,\\ \label{eq:eta(Q)} \eta(Q)&=\text{multiplicity of $(z^{n_y}-\rho y^{n_z})$ as a factor of $F^E(y,z)$.} \end{align} It is not hard to see that $\rho(Q)$ is well defined, and that $\eta(Q)\ge 1$. \begin{proposition} \label{pr:gen-shift} Let $C=Z(F(x,y,z))$ be a projective curve not containing the line $Z(z)=L^\infty$ as a component. Assume that $p=(1,0,0)\in C\cap L^\infty$, and that $C$ is not tangent to the line~$Z(y)$ at~$p$. Let $a,b\in{\mathbb C}$, and let $C_{a,b}$ denote the shifted curve \begin{equation} \label{eq:shiftX} C_{a,b}=Z(F(x+az,y+bz,z)). \end{equation} Then \begin{align} \label{est-shift} (C\cdot C_{a,b})_p\ge 2S^-(\Gamma(C,p))&-\mt(C,p)+(C\cdot L^\infty)_p +\sum_Q \min(r(Q), \eta(Q)-1), \end{align} where the sum is over all local branches $Q$ of~$C$ at~$p$ which are tangent to~$L^\infty$. \end{proposition} (For the definitions of $S^-(\Gamma(C,p))$, $r(Q)$ and $\eta(Q)$, see Definition~\hyperref{def:Newton-Gamma}, \eqref{eq:d(Q)}--\eqref{eq:r(Q)} and \eqref{eq:rho(Q)}--\eqref{eq:eta(Q)}, respectively.) \begin{proof} Since the intersection multiplicity is lower semicontinuous, we may assume that $a$ and $b$ are generic complex numbers. Let us denote \begin{align*} \mathcal{Q}(C,p) &= \text{the set of local branches of $C$ at $p$;}\\ \mathcal{Q}_0(C,p)&= \text{the set of local branches tangent to~$L^\infty$;}\\ \mathcal{Q}_1(C,p)&= \text{the set of local branches transversal to~$L^\infty$.} \end{align*} The local branches in $\mathcal{Q}_0(C,p)$ correspond to the edges of $\Gamma(C,p)$ such that \hbox{$n_y<n_z$}, cf.\ Lemma~\hyperref{l:wzwy}. Let $\Gamma_0(C,p)$ denote the union of these edges. The local branches in $\mathcal{Q}_1(C,p)$, if~any, correspond to the unique edge $E_{(1,1)}\!\subset\!\Gamma(C,p)$ with \hbox{$\mathbf{n}(E_{(1,1)})\!=\!(1,1)$}. 
Figure~\hyperref{fig:Gamma(C,p)} illustrates the case where the edge $E_{(1,1)}$ is present; equivalently, some local branches are transversal to~$L^\infty$. \begin{figure} \caption{The Newton diagram $\Gamma(C,p)$. The edge $E_{(1,1)} \label{fig:Gamma(C,p)} \end{figure} Let us consider the family of curves $\{C_{\lambda a,\lambda b}\}_{0\le\lambda\le1}$ interpolating between \hbox{$C=C_{0,0}$} and~$C_{a,b}\,$. Being equisingular at the point~$p$, this deformation preserves the number of local branches at~$p$ as well as their topological characteristics. It therefore descends to families of individual local branches at~$p$, yielding an equisingular bijection \begin{align*} \mathcal{Q}(C,p)&\longrightarrow\mathcal{Q}(C_{a,b},p)\\ Q&\longmapsto Q_{a,b}\,. \end{align*} Since the fixed point set of the shift $(x,y,z)\mapsto(x+az,y+bz,z)$ is the line $L^\infty$, this bijection restricts to bijections $\mathcal{Q}_0(C,p)\to\mathcal{Q}_0(C_{a,b},p)$ and $\mathcal{Q}_1(C,p)\to\mathcal{Q}_1(C_{a,b},p)$. To obtain the desired lower bound on $(C\cdot C_{a,b})_p\,$, we will exploit the decomposition \begin{equation} (C\cdot C_{a,b})_p=\sum_{Q,Q'\in\mathcal{Q}(C,p)}(Q\cdot Q'_{a,b})_p = \Sigma_{00}+\Sigma_{01}+\Sigma_{10}+\Sigma_{11}\,, \label{eq-branch} \end{equation} where we use the notation \[ \Sigma_{\varepsilon\delta}=\sum_{\substack{ Q\in\mathcal{Q}_\varepsilon(C,p)\\ Q'\in\mathcal{Q}_\delta(C,p) }} (Q\cdot Q'_{a,b})_p\,, \quad \text{for $\varepsilon,\delta\in\{0,1\}$.} \] \begin{lemma} \label{lem:sum-QQ} Suppose $\mathcal{Q}_1(C,p)\neq\varnothing$. Then \begin{equation} \label{eq:three-sigmas} \Sigma_{01}+\Sigma_{10}+\Sigma_{11}=2S^-(E_{(1,1)}), \end{equation} where $S^-(E_{(1,1)})$ denotes the area of the trapezoid bounded by the edge~$E_{(1,1)}$, the coordinate axes, and the vertical line through the rightmost endpoint of~$E_{(1,1)}$. \end{lemma} \begin{proof} For $Q\in\mathcal{Q}_1(C,p)$, the tangent lines to $Q$ and $Q_{a,b}$ differ from each other. The foregoing discussion implies that, for any $Q\in\mathcal{Q}_1(C,p)$ and $Q'\in\mathcal{Q}(C,p)$, we have \[ (Q\cdot Q'_{a,b})_p=(Q_{a,b}\cdot Q')_p=\mt Q\cdot \mt Q'. \] We then observe that \begin{align*} \sum_{Q\in{\mathcal Q}_0(C,p)} \mt Q=\ell_0 &\,{\stackrel{\rm def}{=}}\, \text{length of the projection of $\Gamma_0(C,p)$ to the vertical axis}, \\ \sum_{Q\in{\mathcal Q}_1(C,p)} \mt Q=\ell_1 &\,{\stackrel{\rm def}{=}}\, \text{length of the projection of $E_{(1,1)}$ to either of the axes}, \end{align*} implying \[ \Sigma_{01}+\Sigma_{10}+\Sigma_{11}=\ell_0\ell_1+\ell_1\ell_0+\ell_1^2=2S^-(E_{(1,1)}). \qedhere \] \end{proof} In light of \eqref{eq-branch} and \eqref{eq:three-sigmas}, it remains to obtain the desired lower bound for the summand~$\Sigma_{00}\,$. To simplify notation, we will pretend, for the time being, that all local branches are tangent to~$L^\infty$, so that $\mathcal{Q}(C,p)=\mathcal{Q}_0(C,p)$ and $\Gamma(C,p)=\Gamma_0(C,p)$. Let $\mathcal{Q}(E)$ denote the set of local branches of $C$ at $p$ associated with an edge~$E$ of~$\Gamma(C,p)$. Equivalently, $\mathcal{Q}(E)=\{Q\in\mathcal{Q}(C,p)\mid E(Q)=E\}$. \begin{lemma} \label{lem:E1E2} Let $E_1$ and $E_2$ be two distinct edges of the Newton diagram~$\Gamma(C,p)$. Assume that $E_2$ is located above and to the left of~$E_1$, so that \[ E_1\!=\![(i_1,j_1),(i'_1,j'_1)], \quad E_2\!=\![(i_2,j_2),(i'_2,j'_2)], \quad i_2'\!<\!i_2\!\le\! i_1'\!<\!i_1\,,\quad j_1\!<\!j_1'\!\le\! j_2\!<\!j_2'\,. 
\] Then \begin{equation} \label{el11.6} \sum_{Q\in\mathcal{Q}(E_1)}\sum_{Q'\in\mathcal{Q}(E_2)}(Q\cdot Q'_{a,b})_p= \sum_{Q\in\mathcal{Q}(E_1)}\sum_{Q'\in\mathcal{Q}(E_2)}(Q_{a,b}\cdot Q')_p =(j'_1-j_1)(i_2-i'_2). \end{equation} \end{lemma} Note that $(j'_1-j_1)(i_2-i'_2)$ is the area of the rectangle formed by the intersections of the horizontal lines passing through~$E_1$ with the vertical lines passing through~$E_2$. See Figure~\hyperref{fig:E1E2}. \begin{figure} \caption{Two edges in the Newton diagram $\Gamma(C,p)$. Here $\Gamma(C,p)\!=\!\Gamma_0(C,p)$.} \label{fig:E1E2} \end{figure} \redsf{We provide two proofs of Lemma~\hyperref{lem:E1E2}. The first proof, based on the Bernstein-Kouchnirenko mixed volume formula, relies on a genericity assumption whose justification we do not provide. (Note however that this assumption is not needed for the proof of the ``$\ge$'' inequality in~\eqref{el11.6}. This inequality will be sufficient for the upcoming proof of Proposition \hyperref{pr:gen-shift}.) The second proof is rigorous but more technical. \begin{proof}[Proof~1 of Lemma~\hyperref{lem:E1E2} (sketch)] Assume that the truncations of $F(1,y,z)$ along the edges $E_1$ and $E_2$ are square-free. Construct the right triangles $\tau_1$ and~$\tau_2$ as shown in Figure~\hyperref{fig-BK}(a). By Bernstein's theorem~\cite{DBernstein}, the left and middle terms in \eqref{el11.6} can be bounded from below by the difference between the mixed area of $E_1, E_2$ and the mixed area of $\tau_1, \tau_2$ (see Figure~\hyperref{fig-BK}(b,c)). The claim follows. \end{proof} } \begin{figure} \caption{ \redsf{ Proof 1 of Lemma \hyperref{lem:E1E2} \label{fig-BK} \end{figure} \begin{proof}[Proof~2 of Lemma~\hyperref{lem:E1E2}] In the coordinates $(1,y,z)$, the shift transformation \[ (1,y,z)\mapsto(1+az,y+bz,z) \] converts the polynomial $F_1(y,z)=F(1,y,z)$ defining the affine curve $C\setminus Z(x)$ into the polynomial $F_{a,b}(y,z)=F(1+az,y+bz,z)$ defining~$C_{a,b}\setminus Z(x)$. Under this transformation, each monomial $y^iz^j$ of $F_1$ becomes \begin{equation}y^iz^j+\sum_{\substack{i',j'\ge0\\ i'+j'>0}}c_{i'j'}y^{i-i'}z^{j+i'+j'}. \label{eq3} \end{equation} In view of the assumption $\mathcal{Q}(C,p)=\mathcal{Q}_0(C,p)$, the monomials appearing in the sum above correspond to integer points lying strictly above the Newton diagram~$\Gamma(C,p)$ of~$F_1$. In particular, $F_{a,b}$ has the same Newton diagram~$\Gamma(C,p)$, and the same truncations to its edges. Furthermore, each local branch $Q\in\mathcal{Q}(C,p)$ and its counterpart $Q_{a,b}\in\mathcal{Q}(C_{a,b},p)$ are associated with the same edge of~$\Gamma(C,p)$. The union of the local branches of $C_{a,b}$ associated with the edge~$E_1$ can be defined by an analytic equation $f(y,z)=0$ whose Newton diagram at the origin is the line segment $[(i_1-i'_1,0),(0,j'_1-j_1)]$ (cf.\ the Newton-Puiseux algorithm \cite[Algorithm~I.3.6]{GLS}). For the same reason, a local branch $Q$ of $C$ at $p$ associated with the edge $E_2$ has a parametrization \[ \begin{array}{l} y=\varphi(t)=t^m,\\[.03in] z=\psi(t)=\alpha t^d+O(t^{d+1}), \end{array} \quad \alpha\!\ne\!0,\quad |t| \! \ll \!1,\quad d\!=\!(Q\cdot L^\infty)_p\,,\quad m\!=\!\mt(Q) \] (cf.\ \eqref{eq:d(Q)}--\eqref{eq:m(Q)}) where $\frac{d}{m}=\frac{j'_2-j_2}{i_2-i'_2}$. Since $\frac{j'_2-j_2}{i_2-i'_2}>\frac{j'_1-j_1}{i_1-i'_1}$, we obtain \[ f(\varphi(t),\psi(t))=t^{m(i_1-i_1')}(\beta+O(t)),\quad\beta\ne0.
\] The statement of the lemma now follows from the fact that the total multiplicity of the local branches of $C$ at $p$ associated with the edge $E_2$ equals $j_2'-j_2\,$. \end{proof} We are now left with the task of computing \begin{equation} \sum_E \sum_{Q,Q'\in\mathcal{Q}(E)}(Q\cdot Q'_{a,b})_p\,, \label{eq:last-sum} \end{equation} where the first sum runs over the edges $E$ of the Newton diagram. In this part of the proof, we continue to assume, for the sake of simplifying the exposition, that all local branches are tangent to~$L^\infty$. Since the shift $(1,y,z)\mapsto(1+az,y+bz,z)$ acts independently on the analytic factors of~$F_1(y,z)$, we furthermore assume, in our computation of $\sum_{Q,Q'\in\mathcal{Q}(E)}(Q\cdot Q'_{a,b})_p$ (see~\eqref{eq:last-sum}), that the Newton diagram $\Gamma(C,p)$ consists of a single edge $E=[(0,M),(D,0)]$, with $M<D$ and $\mathbf{n}(E)=(n_y,n_z)$. Pick a local branch $Q\in\mathcal{Q}(E)$. It admits an analytic parametrization of the form \begin{equation} \begin{array}{l} x=1, \quad \\ y=\varphi(t)=t^m,\quad \\ z=\psi(t)=\alpha t^d+O(t^{d+1}), \end{array} \label{eq2} \end{equation} where $t$ ranges over a small disk in ${\mathbb C}$ centered at zero, $m=r(Q)\cdot n_y$, $d=r(Q)\cdot n_z$, and $r(Q)=\gcd(d,m)$, cf.\ \eqref{eq:nE=} and~\eqref{eq:r(Q)}. \begin{lemma} \label{lem:Step3} We have \begin{equation} \label{eq:sum-Q'} \sum_{Q'\in\mathcal{Q}(E)}(Q\cdot Q'_{a,b})_p=dM-m+d+\min(r(Q),\eta(Q)-1). \end{equation} \end{lemma} \redsf{Before providing a proof of Lemma~\hyperref{lem:Step3}, we note that the weaker inequality \begin{equation} \label{eq:sum-Q'-ineq} \sum_{Q'\in\mathcal{Q}(E)}(Q\cdot Q'_{a,b})_p \ge dM-m+d \end{equation} can be deduced directly from the Bernstein-Kouchnirenko formula, similarly to Proof~1 of Lemma~\hyperref{lem:E1E2} provided above. As in the case of Lemma~\hyperref{lem:E1E2}, this lower bound would be sufficient to complete the upcoming proof of Proposition~\hyperref{pr:gen-shift}, restricted to the case of a Newton non-degenerate singularity~$(C,p)$. } \begin{proof} The left-hand side of~\eqref{eq:sum-Q'} is the minimal exponent of $t$ appearing in the expansion of $F_{a,b}(\varphi(t),\psi(t))$ into a power series in~$t$. Since $F_1(\varphi(t),\psi(t))=0$, we may instead substitute \eqref{eq2} into the difference $F_{a,b}(y,z)-F_1(y,z)$, or equivalently into the monomials of the second summand in~\eqref{eq3} (corresponding to individual monomials $y^iz^j$ of~$F_1(y,z)$). Evaluating $y^{i-i'}z^{j+i'+j'}$ at $y=\varphi(t)$, $z=\psi(t)$, we obtain \[ (t^m)^{i-i'}(t^d(\alpha+O(t)))^{j+i'+j'}=t^{mi+dj+(d-m)i'+dj'}(\alpha^{j+i'+j'}+O(t)). \] To get the minimal value of the exponent $mi+dj+(d-m)i'+dj'$, we need to minimize $mi+dj$ (which is achieved for $(i,j)\in E$) and take $i'=1$ and $j'=0$. Developing $F_{a,b}(y,z)=F(1+az,y+bz,z)$ into a power series in $a$ and~$b$, we see that the corresponding monomials $y^{i-i'}z^{j+i'+j'}=y^{i-1}z^{j+1}$ in $F_{a,b}(y,z)-F(1,y,z)$ add up to $bzF_y(1,y,z)$. We conclude that the desired minimal exponent of~$t$ occurs when we substitute $(y,z)=(\varphi(t),\psi(t))$ either into $bzF^E_y(y,z)$ or into a monomial $y^{i-1}z^{j+1}$ such that $(i,j)$ is one of the integral points closest to the edge~$E$ and lying above~$E$. The latter condition reads $n_yi+n_zj=n_zM+1$.
The truncation $F^E(y,z)$ of $F_1(y,z)$ has the form \begin{equation} F^E(y,z)= \prod_{k=1}^n(z^{n_y}-\beta_ky^{n_z})^{r_k}, \label{eq:trun} \end{equation} where $\beta_1,\dots,\beta_n\in{\mathbb C}$ are distinct, and $r_1,\dots,r_n\in{\mathbb Z}_{>0}$. Developing $F^E(y,z)$ into~a power series in~$t$, we see that the monomials of $F^E$ yield the minimal exponent of~$t$. Since $F_1(\varphi(t),\psi(t))=0$, these minimal powers of $t$ must cancel out, implying that, for some $k_0\in\{1,\dots,n\}$, we have (cf.\ \eqref{eq:rho(Q)}, \eqref{eq:eta(Q)}): \begin{align*} \rho=\rho(Q)&=\lim_{t\to 0}\frac{(\alpha t^d)^{n_y}}{(t^m)^{n_z}}=\alpha^{n_y}=\beta_{k_0}\,,\\ \eta=\eta(Q)&=r_{k_0}\,. \end{align*} The factorization formula \eqref{eq:trun} implies that \[ bzF^E_y(y,z)=bn_zy^{n_z-1}z\sum_{k=1}^n\Bigl((-\beta_k)^{r_k}(z^{n_y}-\beta_ky^{n_z})^{r_k-1}\prod_{l\ne k}(z^{n_y}-\beta_ly^{n_z})^{r_l}\Bigr). \] We then compute the minimal exponent for $bzF^E_y(y,z)$: \begin{align*}bn_zy^{n_z-1}z\big|_{y=\varphi(t),z=\psi(t)}&=O(t^{mn_z-m+d}),\\ (z^{n_y}-\beta_ky^{n_z})^{r_k-1}\big|_{y=\varphi(t),z=\psi(t)}&=O(t^{dn_y(r_k-1)}),\quad k\ne k_0,\\ (z^{n_y}-\rho y^{n_z})^{\eta-1}\big|_{y=\varphi(t),z=\psi(t)}&=O(t^{(dn_y+1)(\eta-1)}),\\ (z^{n_y}-\beta_ly^{n_z})^{r_l}\big|_{y=\varphi(t),z=\psi(t)}&=O(t^{dn_yr_l}),\quad l\ne k_0,\\ (z^{n_y}-\rho y^{n_z})^{\eta}\big|_{y=\varphi(t),z=\psi(t)}&=O(t^{(dn_y+1)\eta}), \\ bzF^E_y(y,z)\big|_{y=\varphi(t),z=\psi(t)}&=O(t^{dM-m+d+\eta-1}). \end{align*} Also, for $n_yi+n_zj=n_zM+1$, we have \[ y^{i-1}z^{j+1}\big|_{y=\varphi(t),z=\psi(t)}=O(t^{dM-m+d+r(Q)}). \] Consequently \begin{align*} \sum_{Q'\in\mathcal{Q}(E)}(Q\cdot Q'_{a,b})_p &=\min(dM-m+d+\eta-1,dM-m+d+r(Q)), \end{align*} as desired. \end{proof} We are now ready to complete the proof of Proposition~\hyperref{pr:gen-shift}. We first note that in Lemma~\hyperref{lem:Step3}, \[ m=\mt Q,\quad d=(Q\cdot L^\infty)_p\,, \] and moreover \begin{equation} \label{eq:M,D} \sum_{Q\in\mathcal{Q}(E)}\mt Q=M,\quad \sum_{Q\in\mathcal{Q}(E)}(Q\cdot L^\infty)_p=D. \end{equation} Therefore \begin{align} \notag \sum_{Q,Q'\in\mathcal{Q}(E)}(Q\cdot Q'_{a,b})_p &=\sum_Q (dM-m+d+\min(r(Q),\eta(Q)-1)) \\ \label{eq:DM-M+D...} &=DM-M+D+\sum_Q \min(r(Q),\eta(Q)-1). \end{align} Note that $DM$ is twice the area of the right triangle with hypotenuse~$E$. As illustrated in Figure~\hyperref{fig:2*area}, adding up the contributions $DM$ from all edges~$E$ of~$\Gamma_0(C,p)$, together with the contributions coming from Lemmas~\hyperref{lem:sum-QQ} and~\hyperref{lem:E1E2}, we obtain $2S^-(\Gamma(C,p))$, cf.\ Definition~\hyperref{def:Newton-Gamma}. Finally, in view of~\eqref{eq:M,D}, we have \begin{align*} \sum_{E\subset \Gamma_0(C,p)} (-M+D) &=\sum_{Q\in \mathcal{Q}_0(C,p)} (-\mt(Q)+(Q\cdot L^\infty)_p)\\ &=\sum_{Q\in \mathcal{Q}(C,p)} (-\mt(Q)+(Q\cdot L^\infty)_p)\\ &=-\mt(C,p)+(C\cdot L^\infty)_p\,. \end{align*} Putting everything together, we obtain~\eqref{est-shift}. \end{proof} \begin{figure} \caption{The area $S^-(\Gamma(C,p))$ under the Newton diagram $\Gamma(C,p)$ is obtained by adding three kinds of contributions: \red{(a)} \label{fig:2*area} \end{figure} \section{Arrangements of polynomial curves} \label{sec:arrangements-poly} In this section, we generalize Example~\hyperref{ex:lines+parabolas} to arrangements of polynomial curves obtained from a given curve by shifts, dilations, and/or rotations. We start by obtaining upper bounds on the number of intersection points of two polynomial curves related by one of these transformations. 
These bounds lead to expressivity criteria for arrangements consisting of such curves. Recall that for $a,b\in{\mathbb C}$ and $c\in{\mathbb C}^*$, we denote by \begin{align} \label{eq:Cab} C_{a,b}&=Z(F(x+az,y+bz,z)), \\ C_c&=Z(F(cx,cy,z)) \end{align} the curves obtained from a plane curve $C=Z(F(x,y,z))$ by a shift and a dilation, respectively. We will also use the notation \begin{equation} \label{eq:shift+dilation} C_{a,b}c=Z(F(cx+az,cy+bz,z)) \end{equation} for a curve obtained from~$C$ by a combination of a shift and a dilation. As before, we identify a projective curve $C=Z(F(x,y,z))$ with its restriction to the affine plane ${\mathbb A}^2={\mathbb P}^2\setminus L^\infty$ given by $C=V(G(x,y))$, where $G(x,y)=F(x,y,1)$. Under this identification, we have \begin{align} C_{a,b}&=V(G(x+a,y+b)), \\ C_c&=V(G(cx,cy)),\\ \label{eq:Cabs=V(G)} C_{a,b}c&=V(G(cx+a,cy+b)). \end{align} \begin{remark} \label{rem:shift+dilation} Unless $c=1$ (the case of a pure shift), the transformation $C\leadsto C_{a,b}c$ can be viewed as a pure dilation centered at some point $o\in{\mathbb C}$ (where $o$ may be different from~$0$). \end{remark} \begin{corollary} \label{cor:shift-poly} Let $C$ be a real polynomial curve of degree~$d$. Let $C\cap L^\infty=\{p\}$ and $m=\mt(C,p)$. Then we have, for $a,b\in{\mathbb C}$ and $c\in{\mathbb C}^*$: \begin{align} \label{eq:dilation-poly} (C\cdot C_{a,b}c)_p &\ge dm ;\\ \label{eq:shift-poly} (C\cdot C_{a,b})_p &\ge (d-1)(m+1)+\gcd(d,m). \end{align} \end{corollary} \begin{proof} In view of Remark~\hyperref{rem:shift+dilation} and the inequality \[ (d-1)(m+1)+\gcd(d,m)\ge dm, \] it suffices to establish \eqref{eq:dilation-poly} in the case $a=b=0$, i.e., with $C_{a,b}c$ replaced by~$C_c$. The bounds \eqref{eq:dilation-poly}--\eqref{eq:shift-poly} are obtained by applying Propositions~\hyperref{pr:gen-dilation} and~\hyperref{pr:gen-shift} to the case of a polynomial curve~$C$, while noting that in this case, \begin{align*} S^-(\Gamma(C,p))&=\tfrac12 md,\\ \mt(C,p)&=m,\\ (C\cdot L^\infty)_p&=d,\\ r(Q)=\eta(Q)&=\gcd(d,m). \qedhere \end{align*} \end{proof} \begin{proposition} \label{pr:dilations-poly} Let $C$ be a real polynomial curve with a parametrization \[ t\mapsto (P(t),Q(t)), \quad \deg(P)=d, \quad \deg(Q)=d'\neq d. \] Let $a,b,c\in{\mathbb C}$, $c\neq 0$, and let $C_{a,b}c$ be the shifted and dilated curve given by~\eqref{eq:shift+dilation}. Assume that $C$ and $C_{a,b}c$ have $N$ intersection points in ${\mathbb A}^2={\mathbb P}^2\setminus L^\infty$, \redsf{counting with multiplicity}. Then $N\le dd'$. \end{proposition} \begin{proof} Without loss of generality, assume that $d>d'$. By B\'ezout's theorem, the inter\-section multiplicity $(C\cdot C_{a,b}c)_p$ at the point $p=(1,0,0)\in L^\infty$ is at most~\hbox{$d^2-N$}. Applying \eqref{eq:dilation-poly}, with $m=d-d'$, we obtain $d^2-N\ge d(d-d')$. The claim follows. \end{proof} Proposition~\hyperref{pr:dilations-poly} and Theorem~\hyperref{th:reg-expressive} imply that if $C$ and $C_{a,b}c$ intersect at $dd'$ hyperbolic nodes in the real affine plane, then the union $C\cup C_{a,b}c$ is expressive. \begin{example} \label{ex:23-dilation} Let $C$ be the $(2,3)$-Chebyshev curve, the singular cubic given by \begin{equation} \label{eq:23-Chebyshev} 2x^2-1+4y^3-3y=0 \end{equation} or parametrically by \[ t\mapsto (4t^3-3t, -2t^2+1), \] cf.\ \eqref{eq:32-Chebyshev}.
Applying Proposition~\hyperref{pr:dilations-poly} (with $d=3$ and $d'=2$), we see that the curve $C$ and its dilation~$C_{a,b}c$ ($c\neq 1$) can intersect in the real affine plane in at most 6~points. When this bound is attained, the union $C\cup C_{a,b}c$ is expressive. Figure~\hyperref{fig:chebyshev23-dilation} shows one such example, with $a=b=0$ and $c=-1$ (so $C_{a,b}c$ is a reflection of~$C$). Cf.\ also Figure~\hyperref{fig:three-dilations}. \end{example} \begin{figure} \caption{An expressive cubic and its reflection, intersecting at 6 real points. The~resulting two-component curve is expressive.} \label{fig:chebyshev23-dilation} \end{figure} \begin{proposition} \label{pr:shifts-poly} Let $C$ be a real polynomial curve with a parametrization \[ t\mapsto (P(t),Q(t)), \quad \deg(P)=d, \quad \deg(Q)=d'<d. \] Let $a,b\in{\mathbb C}$, and let $C_{a,b}$ be the shifted curve given by~\eqref{eq:Cab}. Assume that $C$ and $C_{a,b}$ have $N$ intersection points in ${\mathbb A}^2={\mathbb P}^2\setminus L^\infty$, \redsf{counting with multiplicity}. Then \begin{equation} \label{eq:N-shift} N\le dd'-d'-\gcd(d,d')+1. \end{equation} \end{proposition} \begin{proof} We use the same arguments as in the proof of Proposition~\hyperref{pr:dilations-poly} above, with the lower bound~\eqref{eq:dilation-poly} replaced by~\eqref{eq:shift-poly}: \begin{align*} N\le d^2-(C\cdot C_{a,b})_p &\le d^2-(d-1)(m+1)-\gcd(d,m) \\ &=d^2-(d-1)(d-d'+1)-\gcd(d,d') \\ &=dd'-d'+1-\gcd(d,d'). \qedhere \end{align*} \end{proof} For special choices of shifts, the bound \eqref{eq:N-shift} can be strengthened. Here is one example, involving the Lissajous-Chebyshev curves, see Example~\hyperref{ex:lissajous-chebyshev}. (Note that such a curve does not have to be polynomial: it could be trigonometric or reducible.) \begin{proposition} \label{pr:shifts-lissajous} Let $C$ be a Lissajous-Chebyshev curve given by \begin{equation} \label{eq:lissajous-kl} T_k(x)+T_\ell(y)=0. \end{equation} Let $a\!\in\!{\mathbb C}$, and let $C_{a,0}$ (resp.,~$C_{0,a}$) be the shift of~$C$ in the $x$ (resp.,~$y$) direction. Assume that $C$ and $C_{a,0}$ (resp.,~$C_{0,a}$) intersect at $N_x$ (resp.,~$N_y$) points in~${\mathbb A}^2$. Then \begin{align} \label{eq:x-shift-lissajous} N_x &\le (k-1)\ell, \\ \label{eq:y-shift-lissajous} N_y &\le k(\ell-1). \end{align} \end{proposition} We note that in the case when $\gcd(k,\ell)=1$ and $k<\ell$, the bound \eqref{eq:y-shift-lissajous} matches~\eqref{eq:N-shift}, with $d=\ell$ and $d'=k$, whereas \eqref{eq:x-shift-lissajous} gives a stronger bound. \begin{proof} Due to symmetry, it suffices to prove~\eqref{eq:y-shift-lissajous}. The intersection of the curves $C$ and~$C_{0,a}$ is given by \begin{align*} \begin{cases} T_k(x)+T_\ell(y)=0 \\ T_k(x)+T_\ell(y+a)=0 \end{cases} \Longleftrightarrow{\ } \begin{cases} T_k(x)+T_\ell(y)=0 \\ T_\ell(y+a)-T_\ell(y)=0. \end{cases} \end{align*} The equation $T_\ell(y+a)-T_\ell(y)=0$ has at most $\ell-1$ roots; each of these values of~$y$ then gives at most $k$ possible values of~$x$. \end{proof} \begin{example} \label{ex:23-shifts} As in Example~\hyperref{ex:23-dilation}, let $C$ be the $(2,3)$-Chebyshev curve~\eqref{eq:23-Chebyshev}. Applying Proposition~\hyperref{pr:shifts-lissajous} (with $k=2$ and $\ell=3$), we see that the curve $C$ and its vertical shift~$C_{0,b}$ ($b\neq 0$) can intersect in the affine plane in at most 4~points.
More generally, Proposition~\hyperref{pr:shifts-poly} (with $d=3$ and $d'=2$) gives the upper bound of~4 for the number of intersection points between $C$ and its nontrivial shift~$C_{a,b}$. On the other hand, in the case of a horizontal shift, we get at most $3$~points of intersection. When these bounds are attained, with all intersection points real, the union $C\cup C_{a,b}$ is expressive. See Figure~\hyperref{fig:chebyshev23-shifts}. \end{example} \begin{figure} \caption{An expressive cubic and its shift, intersecting at 3 or 4 real points, depending on the direction of the shift. The~resulting two-component curve is expressive.} \label{fig:chebyshev23-shifts} \end{figure} In addition to shifts and dilations, we can consider other linear changes of variables that can be used to construct new expressive curves. Let us illustrate one such construction using the example of Lissajous-Chebyshev curves: \begin{proposition} \label{pr:rescale-lissajous} Let $C$ be the $(k,\ell)$-Lissajous-Chebyshev curve given by~\eqref{eq:lissajous-kl}, with $\ell>k\ge 2$. For $q\in{\mathbb C}^*$, $q\neq 1$, let $C_{[q]}$ denote the curve defined by \begin{equation} \label{eq:lissajous-kl-q} T_k(\tfrac{x}{q^\ell})+T_\ell(\tfrac{y}{q^k})=0. \end{equation} Assume that $C$ and $C_{[q]}$ intersect at $N$ points in~${\mathbb A}^2$. Then \begin{equation} \label{eq:x-rescale-lissajous} N \le k(\ell-2). \end{equation} \end{proposition} \begin{proof} Since $T_k(x)=2^{k-1}x^k+O(x^{k-2})$ and $T_\ell(y)=2^{\ell-1}y^\ell+O(y^{\ell-2})$, the equations defining $C$ and $C_{[q]}$ can be written as \begin{align} \label{eq:leading-lissajous} 2^{k-1}x^k+O(x^{k-2})+2^{\ell-1}y^\ell+O(y^{\ell-2})&=0, \\ \label{eq:leading-lissajous-q} 2^{k-1}q^{-k\ell}x^k+O(x^{k-2})+2^{\ell-1}q^{-k\ell}y^\ell+O(y^{\ell-2})&=0. \end{align} Multiplying \eqref{eq:leading-lissajous-q} by $q^{k\ell}$ and subtracting~\eqref{eq:leading-lissajous}, we get an equation of the form \begin{equation} \label{eq:leading-lissajous-q-1} O(x^{k-2})+O(y^{\ell-2})=0. \end{equation} We thus obtain a system of two algebraic equations of the form \eqref{eq:leading-lissajous} and \eqref{eq:leading-lissajous-q-1}. Their Newton polygons are contained in the triangles with vertices $(0,0), (k,0), (0,\ell)$ and $(0,0), (k-2,0), (0,\ell-2)$, respectively. The mixed area of these two triangles is equal to $k(\ell-2)$ (here we use that $\ell>k\ge 2$ and therefore $\frac{\ell-2}{k-2}>\frac{\ell}{k}$). By Bernstein's theorem~\cite{DBernstein}, this system of equations has at most $k(\ell-2)$ solutions. \end{proof} \begin{example} \label{ex:23-q} Let $C$ be the $(2,3)$-Chebyshev curve~\eqref{eq:23-Chebyshev}, as in Examples~\hyperref{ex:23-dilation} and~\hyperref{ex:23-shifts}. Applying Proposition~\hyperref{pr:rescale-lissajous} (with $k=2$ and $\ell=3$), we see that the curve $C$ and the rescaled curve~$C_{[q]}$ defined by~\eqref{eq:lissajous-kl-q} can intersect in the affine plane in at most 2~points. When they do intersect at 2 real points, the union $C\cup C_{[q]}$ is expressive. See Figure~\hyperref{fig:chebyshev23-q}. \end{example} \begin{figure} \caption{An expressive cubic and its rescaling~\eqref{eq:lissajous-kl-q} \label{fig:chebyshev23-q} \end{figure} \begin{remark} \label{rem:generic-crossings} Let $C'$ be a curve obtained from a plane curve~$C=V(G(x,y))$ of degree~$d$ by an arbitrary affine change of variables: \begin{equation*} C'=V(G(c_{11}x + c_{12}y+a,c_{21}x+c_{22}y+b)). \end{equation*} Then $C$ and $C'$ intersect in~${\mathbb A}^2$ in at most~$d^2$ points. Indeed, $C'$ is again a curve of degree~$d$, so the claim is immediate from B\'ezout's theorem.
Thus, if they intersect at $d^2$ hyperbolic nodes, then $C\cup C'$ is expressive. See Figure~\hyperref{fig:chebyshev23-x}. \end{remark} \begin{figure} \caption{Two expressive cubics related by a $90^\circ$ rotation, and intersecting at 9 real points. The~resulting two-component curve is expressive.} \label{fig:chebyshev23-x} \end{figure} \begin{example} \label{ex:5-chebyshevs} Figures~\hyperref{fig:chebyshev23-5}--\hyperref{fig:chebyshev34-5} show five different ways to arrange two Chebyshev curves (the $(2,3)$-Chebyshev cubic and the $(3,4)$-Chebyshev quartic, respectively) related to each other by an affine transformation of the plane~${\mathbb A}^2$ so that the resulting two-component curve is expressive. These pictures illustrate: \begin{itemize} \item[(a)] Proposition~\hyperref{pr:rescale-lissajous}, cf.\ Example~\hyperref{ex:23-q}; \item[(b,c)] Proposition~\hyperref{pr:shifts-lissajous}, cf.\ Example~\hyperref{ex:23-shifts}; \item[(d)] Proposition~\hyperref{pr:dilations-poly}, cf.\ Example~\hyperref{ex:23-dilation}; and \item[(e)] Remark~\hyperref{rem:generic-crossings}. \end{itemize} \end{example} \begin{figure} \caption{Two expressive cubics forming a two-component expressive curve. The two components intersect at 2, 3, 4, 6, and 9 real points, respectively. } \label{fig:chebyshev23-5} \end{figure} \begin{figure} \caption{Two expressive quartics forming a two-component expressive curve. The two components intersect at 6, 8, 10, 12, and 16 real points, respectively. } \label{fig:chebyshev34-5} \end{figure} \begin{remark} \label{rem:arrangements-poly} More generally, consider a collection of expressive polynomial curves related to each other by affine changes of variables (equivalently, affine transformations of the plane~${\mathbb A}^2$). Suppose that for every pair of curves in this collection, the number of hyperbolic nodes in their intersection attains the upper bound for an appropriate version of Proposition~\hyperref{pr:dilations-poly}, \hyperref{pr:shifts-poly}, \hyperref{pr:shifts-lissajous}, \hyperref{pr:rescale-lissajous}, or Remark~\hyperref{rem:generic-crossings}. Then the union of the curves in the given collection is an expressive curve. See Figures~\hyperref{fig:three-nodal-cubics}--\hyperref{fig:four-cubics}. \end{remark} \begin{figure} \caption{An expressive curve whose three components are translations of the same nodal cubic. Each pair of components intersect at 4 hyperbolic nodes.} \label{fig:three-nodal-cubics} \end{figure} \begin{figure} \caption{An expressive curve whose four components are singular cubics related to each other either by a horizontal translation or a dilation (with $c=-1$). Each pair of components intersect at 3 or 6 hyperbolic nodes, respectively.} \label{fig:four-cubics} \end{figure} \begin{figure} \caption{An expressive curve whose components are singular cubics related to each other by dilations. Each pair of them intersect at 6 hyperbolic nodes.} \label{fig:three-dilations} \end{figure} \section{Arrangements of trigonometric curves} \label{sec:arrangements-trig} In this section, we generalize Example~\hyperref{ex:circle-arrangements} to arrangements of curves obtained from a given trigonometric curve by shifts, dilations, and/or rotations. We continue to use the notation \eqref{eq:Cab}--\eqref{eq:Cabs=V(G)} for the shifted and dilated curves. 
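In the simplest instance, $C$ is a circle, viewed as a trigonometric curve of degree~$2$: it meets $L^\infty$ transversally at the two circular points $(1,\pm i,0)$, and the bounds below (applied with $d=1$) recover the familiar fact that two distinct circles meet in at most $2$ points of~${\mathbb A}^2$, cf.\ Example~\hyperref{ex:circle-arrangements}.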
\begin{corollary} \label{cor:shift-dilate-trig-2pts-transversal} Let $C$ be a trigonometric curve of degree~$2d$, with two local branches at infinity centered at distinct points $p,\overline{p}\in C\cap L^\infty$. Suppose that $\mt(C,p)=d$, i.e., these branches are transversal to~$L^\infty$. Then we have, for $a,b\in{\mathbb C}$ and $c\in{\mathbb C}^*$: \begin{equation} \label{eq:dilate-trig-transversal} (C\cdot C_{a,b}c)_p \ge d^2. \end{equation} If $C$ and $C_{a,b}c$ intersect in $N$ points in the affine plane~${\mathbb A}^2$, \redsf{counting with multiplicity,} then $N\le 2d^2$. \end{corollary} \begin{proof} We apply Proposition~\hyperref{pr:gen-dilation}, with $S^-(\Gamma(C,p))=\frac12 d^2$, to obtain~\eqref{eq:dilate-trig-transversal}. It follows that $N\le (2d)^2-2\cdot d^2=2d^2$. \end{proof} \begin{example} \label{ex:limacon-shifts} Let $C$ be an epitrochoid with parameters $(2,-1)$, i.e., a lima\c con. It has two conjugate points at infinity, each an ordinary cusp transversal to~$L^\infty$. This is a quartic trigonometric curve, so by~Corollary~\hyperref{cor:shift-dilate-trig-2pts-transversal} (with $d=2$), any two shifts/dilations of~$C$ intersect in at most 8~points in the affine plane. Thus, if they intersect at 8 hyperbolic nodes, then $C\cup C_{a,b}c$ is expressive by Theorem~\hyperref{th:reg-expressive}. More generally, an arrangement of lima\c cons related to each other by shifts and dilations gives an expressive curve if any two of these lima\c cons intersect at 8 hyperbolic nodes. See Figure~\hyperref{fig:two-limacons-3}. \end{example} \begin{figure} \caption{A union of two lima\c cons related to each other by a shift or dilation is expressive if they intersect at 8 hyperbolic nodes.} \label{fig:two-limacons-3} \end{figure} \begin{corollary} \label{cor:shift-dilate-trig-2pts-tangent} Let $C$ be a trigonometric curve of degree~$2d$, with two local branches at infinity centered at distinct points $p,\overline{p}\in C\cap L^\infty$. Suppose that $m=\mt(C,p)<d$, i.e., $C$ is tangent to~$L^\infty$. Then we have, for $a,b\in{\mathbb C}$ and $c\in{\mathbb C}^*$: \begin{align} \label{eq:dilate-trig} (C\cdot C_{a,b}c)_p &\ge dm, \\ \label{eq:shift-trig} (C\cdot C_{a,b})_p &\ge (d-1)(m+1)+\gcd(d,m). \end{align} If $C$ and $C_{a,b}c$ (resp., $C_{a,b}$) intersect in~$N$ (resp.,~$M$) points in the affine plane~${\mathbb A}^2$, then \begin{align} \label{eq:} N &\le 4d^2-2dm; \\ M &\le 4d^2-2(d-1)(m+1)-2\gcd(d,m). \end{align} \end{corollary} \begin{proof} The proof is analogous to the proof of Corollary~\hyperref{cor:shift-poly}. We apply Propositions \hyperref{pr:gen-dilation} and~\hyperref{pr:gen-shift} to the case under consideration, taking into account that \begin{align*} S^-(\Gamma(C,p))&=\tfrac12 dm, \\ (C\cdot L^\infty)_p&=d,\\ r(Q)=\eta(Q)&=\gcd(d,m). \qedhere \end{align*} \end{proof} \begin{example} \label{ex:hypotrochoid-shifts} Let $C$ be a hypotrochoid with parameters $(2,1)$, cf.\ Examples~\hyperref{ex:hypotrochoids} and~\hyperref{ex:hypotrochoids-again}. It has two conjugate points at infinity; at each of them, $C$~is smooth and has a simple (order~2) tangency to~$L^\infty$. By \eqref{eq:dilate-trig} (with $d=2$ and $m=1$), any dilation of~$C$ intersects~$C$ in the affine plane~${\mathbb A}^2$ in at most 12~points. Similarly, by~\eqref{eq:shift-trig}, any shift of~$C$ intersects~$C$ in~${\mathbb A}^2$ in at most 10~points. When these bounds are attained, and all intersections are hyperbolic nodes, the union of the two curves is expressive.
See Figure~\hyperref{fig:two-hypotrochoids}. \end{example} \begin{figure} \caption{A union of two 3-petal hypotrochoids related to each other by a shift (resp., dilation) is expressive if they intersect at~10 (resp.,~12) hyperbolic nodes.} \label{fig:two-hypotrochoids} \end{figure} \begin{corollary} \label{cor:shift-dilate-trig-1pt} Let $C=Z(F)$ be a trigonometric projective curve of degree~$2d$, with two local complex conjugate branches $Q,\overline Q$ centered at the same point $p\in C\cap L^\infty$. Denote $\mt(C,p)=2\mt Q=2m$. Then we have, for $a,b\in{\mathbb C}$ and $c\in{\mathbb C}^*$: \begin{align} \label{eq:4dm} (C\cdot C_{a,b}c)_p &\ge 4dm, \\ \label{eq:shift-trig-1pt} (C\cdot C_{a,b})_p &\ge \begin{cases} 4dm-2m+2d+2\gcd(d,m)-2 &\text{if}\ (Q\cdot\overline Q)_p=dm,\\ 4dm-2m+2d+2\gcd(d,m) &\text{if}\ (Q\cdot\overline Q)_p>dm. \end{cases} \end{align} \end{corollary} \begin{proof} Once again, we apply Propositions~\hyperref{pr:gen-dilation} and~\hyperref{pr:gen-shift}, with \begin{align*} S^-(\Gamma(C,p))&=2dm, \\ (C\cdot L^\infty)_p&=2d,\\ r(Q)&=\gcd(d,m), \\ \eta(Q)&=\begin{cases}\gcd(d,m),\ &\text{if}\ (Q\cdot\overline Q)_p=dm,\\ 2\gcd(d,m),\ &\text{if}\ (Q\cdot\overline Q)_p>dm.\end{cases} \end{align*} and similarly for $\overline Q$. \end{proof} \pagebreak[3] \begin{example} \label{ex:shifts-of-lemniscates} Let $C$ be a lemniscate of Huygens \begin{equation} \label{eq:huygens-again} y^2+4x^4-4x^2=0, \end{equation} see Example~\hyperref{ex:lemniscates}. It has a single point~$p=(0,1,0)$ at infinity, with two conjugate local branches $Q$ and~$\overline Q$. These branches are tangent to each other and to~$L^\infty$; all these tangencies are of order~2. Thus Corollary~\hyperref{cor:shift-dilate-trig-1pt} applies, with $d=2$, $m=1$, and $(Q\cdot \overline Q)=2=dm$. The bound~\eqref{eq:shift-trig-1pt} yields $(C\cdot C_{a,b})_p\ge 10$, implying that $C$ and a shifted curve~$C_{a,b}$ intersect in~${\mathbb A}^2$ in at most 6~points. Thus, any arrangement of shifts of~$C$ which intersect pairwise transversally in 6~real points produces an expressive curve (assuming all these double points are distinct). See Figure~\hyperref{fig:three-lemniscates}. \end{example} \begin{figure} \caption{Left: an expressive curve whose three components are translations of the same lemniscate. Each pair of components intersect at six hyperbolic~nodes. Right: two lemniscates differing by a vertical shift, see Example~\hyperref{ex:lemniscate-vertical} \label{fig:three-lemniscates} \end{figure} For special choices of shifts and dilations, the bounds in the corollaries above~can be strengthened, leading to examples of expressive curves whose components intersect~in fewer real points than one would ordinarily expect. Here are two examples: \begin{example} \label{ex:lemniscate-vertical} Let $C$ be the lemniscate~\eqref{eq:huygens-again}. Since $C$ is a Lissajous-Chebyshev curve with parameters~$(4,2)$, by Proposition~\hyperref{pr:shifts-lissajous}, its vertical shift $C_{0,b}$ intersects $C$ in~${\mathbb A}^2$ in at most 4~points. Hence $C \cup C_{0,b}$ is expressive for $b\in(-2,2)$. See Figure~\hyperref{fig:three-lemniscates}. \end{example} \begin{example} \label{ex:two-exotic-lemniscates} Let $C=V(G(x,y))$ be the trigonometric curve defined by the polynomial \begin{equation} \label{eq:exotic-lemniscate} G(x,y)=x^2+y^4-11y^2+18y-8=x^2+(y+4)(y-1)^2(y-2). 
\end{equation} It is easy to see that $C$ is expressive, and that $C$ intersects its dilation/reflection $C_{-1}=V(G(-x,-y))$ in two points in~${\mathbb A}^2$, both of which are real hyperbolic nodes. Hence the union $C\cup C_{-1}$ is expressive. See Figure~\hyperref{fig:two-exotic-lemniscates}. \end{example} \begin{figure} \caption{A two-component expressive curve $C\cup C_{-1} \label{fig:two-exotic-lemniscates} \end{figure} \section{Alternative notions of expressivity} \label{sec:alternative-expressivity} In this section, we discuss two alternative notions of expressivity. For the first notion, algebraic curves are treated as subsets of~${\mathbb R}^2$, instead of the scheme-theoretic point of view that we adopted above in this paper. For the second notion, bivariate polynomials are replaced by arbitrary smooth functions of two real variables. Viewing real algebraic curves set-theoretically, as ``topological curves'' in the real affine~plane, we arrive at the following definition: \begin{definition} \label{def:expressive-curve} Let $\mathcal{C}\subset{\mathbb R}^2$ be the set of real points of a real affine algebraic curve, see Definition~\hyperref{def:curves}. Assume that $\mathcal{C }$ is nonempty, with no isolated points. We say that $\mathcal{C }$ is \emph{expressive} if its (complex) Zariski closure~$C=\overline{\mathcal{C}}$ is an expressive plane algebraic curve. Thus, a subset $\mathcal{C }\subset{\mathbb R}^2$ is expressive if \begin{itemize}[leftmargin=.2in] \item $\mathcal{C }$ is the set of real points of a real affine plane algebraic curve, \item $\mathcal{C }$ is nonempty, with no isolated points, and \item the minimal polynomial of $\mathcal{C }$ is expressive, see Definition~\hyperref{def:expressive-poly}. \end{itemize} \end{definition} As always, one should be careful when passing from a real algebraic set to an algebraic curve, or the associated polynomial. A polynomial $G(x,y)\!\in\!{\mathbb R}[x,y]$ can be expressive while the real algebraic set~$V_{{\mathbb R}}(G)$ is not; see Example~\hyperref{example1} below. Conversely, $V_{{\mathbb R}}(G)$ can be expressive while $G(x,y)$ is~not, see Example~\hyperref{ex:xy(x^2+y^2+1)}. That's because $G$ may not be the minimal polynomial for~$V_{{\mathbb R}}(G)$. \begin{example}[cf.\ Examples~\hyperref{ex:(x^2+z^2)(yz^2-x^3+x^2y)}, \hyperref{ex:(x^2+z^2)(yz^2-x^3+yx^2)-regular}, and~\hyperref{ex:reducible-quintic}] \label{example1} The real polynomials \begin{align*} G(x,y)&=(x^2+1)(x^2y-x^3+y), \\ \widetilde G(x,y)&=x^2y-x^3+y, \end{align*} define the same (connected) real algebraic set \[ \mathcal{C }= V_{{\mathbb R}}(G)=V_{{\mathbb R}}(\widetilde G) =\{(x,y)\in{\mathbb R}^2\mid y=\tfrac{x^3}{x^2+1}\}, \] cf.\ \eqref{eq:y=x^3/...}. As we saw in Example~\hyperref{ex:reducible-quintic}, $G$ is expressive while $\widetilde G$ is not. Consequently, the affine algebraic curve~$V(G)$ is expressive while its real point set, the real topological curve~$\mathcal{C }$ is not---because $\widetilde G$, rather than~$G$, is the minimal polynomial of~$\mathcal{C }$. \end{example} \begin{example} \label{ex:xy(x^2+y^2+1)} The real polynomials $G(x,y)$ and $\tilde G(x,y)$ given by \begin{align*} G(x,y)&=xy(x^2+y^2+1), \\ \widetilde G(x,y)&=xy, \end{align*} define the same real algebraic set $\mathcal{C }\!=\! V_{{\mathbb R}}(G)\!=\!V_{{\mathbb R}}(\widetilde G)$. Clearly, $\widetilde G$ is expressive, and so is the affine curve $V(\widetilde G)$, or the projective curve~$Z(xy)$. 
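Indeed, \[ \widetilde G_x=y, \qquad \widetilde G_y=x, \] so the only critical point of~$\widetilde G$ is the origin, a saddle located at the unique (hyperbolic) node of~$V(\widetilde G)$.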
Since $\widetilde G$ is the minimal polynomial of~$\mathcal{C }$, this real algebraic set is expressive as well. On~the other~hand, \begin{align*} G_x&= 3x^2y+y^3+y=y(3x^2+y^2+1), \\ G_y&=x^3+3xy^2+x=x(x^2+3y^2+1), \end{align*} and we see that $G$ has 9 critical points: \[ \text{$(0,0)$, $(0,\pm i)$, $(\pm i,0)$, $(\tfrac12 i, \pm\tfrac12 i)$, $(-\tfrac12 i, \pm\tfrac12 i)$.} \] Since 8~of these points are not real, the polynomial~$G$ is not expressive; nor are the curves $V(G)$ and $Z(xy(x^2+y^2+z^2))$. \end{example} \pagebreak[3] The following criterion is a direct consequence of Theorem~\hyperref{th:reg-expressive}. \begin{corollary} Let $\mathcal{C }\subset{\mathbb R}^2$ be the set of real points of a real affine algebraic~curve. Assume that $\mathcal{C }$ is connected, and contains at least two (hence infinitely many) points. Then the following are equivalent: \begin{itemize}[leftmargin=.2in] \item the minimal polynomial of $\mathcal{C }$ (or the Zariski closure $C=\overline{\mathcal{C }}$, cf.\ Definition~\hyperref{def:expressive-curve}) is expressive and $L^\infty$-regular; \item each component of $C=\overline{\mathcal{C}}$ is trigonometric or polynomial, and all singular points of $C$ in the complex affine plane~${\mathbb A}^2$ are real hyperbolic nodes. \end{itemize} \end{corollary} We conclude this section by discussing the challenges involved in extending the notion of expressivity to arbitrary (i.e., not necessarily polynomial) smooth real functions of two real arguments. \begin{remark} \label{rem:expressive-analytic} Let $G:{\mathbb R}^2\to{\mathbb R}$ be a smooth function. Suppose that $G$ satisfies the following conditions: \begin{itemize}[leftmargin=.2in] \item the set $V_{{\mathbb R}}(G)=\{G(x,y)=0\}\subset{\mathbb R}^2$ is connected; \item $V_{{\mathbb R}}(G)$ is a union of finitely many immersed circles and open intervals which intersect each other and themselves transversally, \linebreak[3] in a finite number of points; \item the complement ${\mathbb R}^2\setminus V_{{\mathbb R}}(G)$ is a union of a finite number of disjoint open~sets; \item all critical points of $G$ in ${\mathbb R}^2$ have a nondegenerate Hessian; \item these critical points are located as follows: \begin{itemize}[leftmargin=.2in] \item[$\circ$] one critical point inside each bounded connected component of ${\mathbb R}^2\setminus V_{{\mathbb R}}(G)$; \item[$\circ$] no critical points inside each unbounded connected component of ${\mathbb R}^2\setminus V_{{\mathbb R}}(G)$; \item[$\circ$] a saddle at each double point of $V_{{\mathbb R}}(G)$. \end{itemize} \end{itemize} One may be tempted to call such a (topological, not necessarily algebraic) curve $V_{{\mathbb R}}(G)$ expressive. Unfortunately, this definition turns out to be problematic, as one and the same curve $\mathcal{C}\subset{\mathbb R}^2$ can be defined by two different smooth functions, one of which satisfies the above-listed conditions whereas the other does not. An example is shown in Figure~\hyperref{fig:isolines}, with the polynomial $G$ given by \[ G(x,y)=(\tfrac{x^{2}}{16}+y^{2}-1)((x-1)^{2}+(y-1)^{2}-1). \] As the picture demonstrates, the function $G$ is not expressive in any reasonable sense. At the same time, $\mathcal{C}$~can be transformed into an expressive curve (a~union of two circles) by a diffeomorphism of~${\mathbb R}^2$, and consequently can be represented as the vanishing set of a smooth function satisfying the conditions listed above.
\end{remark} \pagebreak \section{Regular-expressive divides} \label{sec:regular-expressive-divides} It is natural to wonder which divides arise from expressive curves (perhaps satisfying additional technical conditions). If $G(x,y)$ is an expressive polynomial, then all singular points of~$V(G)$ are (real) hyperbolic nodes, so the divide $D_G$ is well defined, see Definition~\hyperref{def:divide-polynomial}. The class of divides arising in this way (with or without the additional requirement of $L^\infty$-regularity) is however too broad to be a natural object of study: as Example~\hyperref{example1} demonstrates, a non-expressive polynomial may become expressive upon multiplication by a polynomial with an empty set of real zeroes. With this in mind, we propose the following definition. \begin{definition} \label{def:regular-expressive-divide} A divide $D$ is called \emph{regular-expressive} if there exists an $L^\infty$-regular expressive curve $C=V(G)$ with real irreducible components such that $D=D_G$. \end{definition} \begin{remark} Some readers might prefer to just call such divides ``expressive'' rather than regular-expressive. We decided against the shorter term, as it would misleadingly omit a reference to the $L^\infty$-regularity requirement. \end{remark} By Theorem~\hyperref{th:reg-expressive}, a connected divide is regular-expressive if and only if it arises from an algebraic curve whose components are real and either polynomial or trigonometric, and all of whose singular points in the affine plane are hyperbolic nodes. We note that a regular-expressive divide is isotopic to an expressive algebraic subset of~${\mathbb R}^2$, in the sense of Definition~\hyperref{def:expressive-curve}, or more precisely to an intersection of such a subset with a sufficiently large disk ${\mathbf{D}}_R$, see~\eqref{eq:Disk_R}. Numerous examples of regular-expressive divides are scattered throughout this paper. \begin{problem} \label{problem:reg-expr-divide} Find a criterion for deciding whether a given divide is regular-expressive. \end{problem} \newsavebox{\crossing} \setlength{\unitlength}{0.8pt} \savebox{\crossing}(10,10)[bl]{ \thicklines \qbezier(5,5)(7,10)(10,10) \qbezier(5,5)(3,0)(0,0) \qbezier(5,5)(3,10)(0,10) \qbezier(5,5)(7,0)(10,0) } \newsavebox{\opening} \setlength{\unitlength}{0.8pt} \savebox{\opening}(10,10)[bl]{ \thicklines \qbezier(0,0)(-5,0)(-5,5) \qbezier(0,10)(-5,10)(-5,5) } \newsavebox{\closing} \setlength{\unitlength}{0.8pt} \savebox{\closing}(10,10)[bl]{ \thicklines \qbezier(0,0)(5,0)(5,5) \qbezier(0,10)(5,10)(5,5) } \begin{remark} Problem~\hyperref{problem:reg-expr-divide} appears to be very difficult. It seems to be even harder to determine whether a given divide can be realized by an expressive curve of a specified degree. For example, the divide \setlength{\unitlength}{0.8pt} \begin{picture}(26,10)(-5,-5) \put(0,0){\makebox(0,0){\usebox{\opening}}} \put(0,0){\makebox(0,0){\usebox{\crossing}}} \put(10,0){\makebox(0,0){\usebox{\crossing}}} \put(20,0){\makebox(0,0){\usebox{\closing}}} \end{picture} can be realized by an expressive sextic (the $(2,6)$-Lissajous curve, see Figure~\hyperref{fig:lissajous-chebyshev}) but not by an expressive quadric---even though there exists a (non-expressive) quadric realizing this divide. \end{remark} Here is a non-obvious non-example: \begin{proposition} \label{pr:not-reg-expr-divide} The divide shown in Figure~\hyperref{fig:nonexpressive-divide} is not regular-expressive. 
\end{proposition} \begin{figure} \caption{A connected divide which is not regular-expressive.} \label{fig:nonexpressive-divide} \end{figure} \begin{proof} Suppose on the contrary that the divide~$D$ in Figure~\hyperref{fig:nonexpressive-divide} is regular-expressive. By Theorem \hyperref{th:reg-expressive}, $D$~must come from a plane curve $C$ consisting of two polynomial components: $C=K\cup L$. One of them, say~$L$, is smooth. By the Abhyankar-Moh theorem \cite[Theorem~1.6]{AM-1975}, there exists a real automorphism of the affine plane that transforms $L$ into a real straight line. So without loss of generality, we can simply assume that $L$ is a line. The other component $K$ has degree $d\ge 4$. Since $C$ is expressive, the projective line $\hat L$ must intersect the projective closure~$\hat K$ of~$K$ at the unique point $p\in \hat K\cap L^\infty$ with multiplicity $d-2$. The curve $\hat K$ has a unique tangent line~$L^\infty$ at~$p$. Therefore any projective line $\hat L'\ne L^\infty$ passing through $p$ intersects $\hat K$ at $p$ with multiplicity $d-2$. Equivalently, every affine line $L'$ parallel to $L$ intersects $K$ in at most two points (counting multiplicities). However, shifting $L$ in a parallel way until it intersects $K$ at a node for the first time, we obtain a real line $L'$ parallel to $L$ and crossing $K$ in at least four points in the affine plane, a contradiction. \end{proof} We recall the following terminology, adapting it to our current needs. \begin{definition} \label{def:pseudoline} A (simple) \emph{pseudoline arrangement} is a connected divide whose branches are embedded intervals any two of which intersect at most once. A pseudoline arrangement is called \emph{stretchable} if it is isotopic to a configuration of straight lines, viewed within a sufficiently large disk. \end{definition} \begin{proposition} \label{pr:non-stretchable-is-not-reg-expr} Let $D$ be a pseudoline arrangement in which any two pseudolines intersect. Then the divide $D$ is regular-expressive if and only if it is stretchable. \end{proposition} \begin{proof} The ``if'' direction has already been established, see Example~\hyperref{ex:line-arrangements}. Let us prove the converse. Suppose that a pseudoline arrangement $D$ with $n$ pseudolines $D_1,...,D_n$ is the divide of an $L^\infty$-regular expressive curve~$C$ with real irreducible components. We need to show that $D$ is stretchable. By Theorem \hyperref{th:reg-expressive}, $C$~must consist of $n$ polynomial components $C_1,...,C_n$. Since $C_1$ is smooth, the Abhyankar-Moh theorem \cite[Theorem 1.6]{AM-1975} implies that a suitable real automorphism of the affine plane takes $C_1$ to a straight line (and leaves the other components polynomial). So without loss of generality, we can assume that $C_1=\{x=0\}$. Let $i\in\{2,...,n\}$ be such that $\deg C_i=d\ge 2$. (If there is no such~$i$, we are done.) The projective line $\hat C_1$ intersects the projective curve~$\hat C_i$ at the point $p=(0,1,0)=\hat C_i\cap L^\infty$ either with multiplicity $d-1$ (if $C_1$ and $C_i$ intersect in the affine plane) or with multiplicity~$d$ (if $C_1$ and $C_i$ are disjoint there). Note that $\hat C_1$ cannot be tangent to $\hat C_i$ at $p$, since $\hat C_i$ is unibranch at $p$, and the infinite line $L^\infty$ is the tangent to $\hat C_i$ at~$p$. It follows that $(\hat C_1\cdot\hat C_i)_p<(\hat C_i\cdot L^\infty)_p=d$ and therefore $(\hat C_1\cdot\hat C_i)_p=d-1$. Since $\hat C_1$ is transversal to $\hat C_i$ at~$p$, we have $\mt(\hat C_i,p)=d-1$.
It~follows that the (affine) equation of $C_i$ does not contain monomials with exponents of $y$ higher than~$1$. That is, $C_i=\{yP_i(x)=Q_i(x)\}$ with $P_i,Q_i$ coprime real polynomials. Moreover, the polynomiality of $C_i$ implies that $P_i(x)$ is a nonzero constant, say~$1$. Finally, recall that any two components $C_i$ and~$C_j$, $2\le i<j\le n$, intersect in at most one point in the affine plane, and if they do, the intersection is transversal. So if $C_i=\{y=Q_i(x)\}$ and $C_j=\{y=Q_j(x)\}$, then $\deg(Q_i-Q_j)\le 1$. It follows that a suitable automorphism of the affine plane $(x,y)\mapsto(x,y-R(x))$, with $R(x)\in{\mathbb R}[x]$, leaves $C_1$ invariant while simultaneously taking all other components $C_2,...,C_n$ to straight lines. \end{proof} \pagebreak \section{Expressive curves vs.\ morsifications} \label{sec:expressive-vs-algebraic} In this section, we compare two classes of divides: \begin{itemize}[leftmargin=.2in] \item the \emph{algebraic} divides, which arise from real morsifications of isolated singularities of real plane curves, \redsf{see, e.g., \cite[Definition~2.2]{FPST}}; \item the \emph{regular-expressive} divides, which arise from $L^\infty$-regular expressive curves with real components, see Section~\hyperref{sec:regular-expressive-divides}. \end{itemize} There are plenty of divides (such as, e.g., generic real line arrangements) which are both algebraic and regular-expressive. \begin{proposition} \label{pr:reg-expr-not-alg} None of the following regular-expressive divides is algebraic: \begin{itemize}[leftmargin=.3in] \item[\rm{(a)}] the divides shown in Figures \hyperref{fig:multi-limacons}-\hyperref{fig:5-petal}; \item[\rm{(b)}] the divide shown in Figure \hyperref{fig:newton-degenerate}; \item [\rm{(c)}] any divide containing two branches whose intersection is empty. \end{itemize} \end{proposition} \begin{proof} (a) Let $D$ be the divide shown in Figure~\hyperref{fig:5-petal} on the right. Suppose that $D$ is algebraic. Pick a point $p$ inside the ``most interior'' region of~$D$. Any real line through $p$ intersects~$D$ (or any curve isotopic to it) in at least $6$ points, counting multiplicities. Hence the underlying singularity has multiplicity~$\ge 6$. Such a singularity must have Milnor number $\ge(6-1)^2=25$. On the other hand, $D$ exhibits only $21$ critical points ($10$ saddles at the hyperbolic nodes and $11$ extrema, one per region), a contradiction. The same argument works for the other divides except for the left divides in Figures \hyperref{fig:3-petal} and \hyperref{fig:5-petal}, which require a slightly more complicated treatment. We leave it to the reader as an exercise. (b) Suppose on the contrary that our divide $D$ is algebraic. The corresponding singularity must be unibranch, with Milnor number~$12$. Let $T$ be the sole region of~$D$ whose closure has zero-dimensional intersection with the union of closures of the unbounded connected components. Every real straight line crossing~$T$ intersects $D$ in at least $4$ points (counting multiplicities). It follows that the underlying singularity~has multiplicity~$\ge4$. \redsf{The simplest unibranch singularity of multiplicity~$4$ is $y^4-x^5=0$ (up to topological equivalence); its Milnor number is~$12$. Hence we cannot encounter any other (more complicated) singularity.} The link of the singularity $y^4-x^5=0$ is a $(4,5)$ torus knot. 
Its Alexander polynomial is (see, e.g., \cite[p.~131, formula~(1)]{SW}) \begin{equation} \label{eq:Alexander-poly-1} \frac{(1-t^{4\cdot5})(1-t)}{(1-t^4)(1-t^5)}=1-t+t^4-t^6+t^8-t^{11}+t^{12}. \end{equation} On the other hand, \redsf{as shown by N.~A'Campo \cite[Theorem 2]{acampo-1999} (reproduced in \cite[Theorem~7.6]{FPST}), the link of a singularity is isotopic to the link of the divide of its morsification, as defined by A'Campo; see, e.g., \cite[Definition~7.1]{FPST}.} In the terminology of \cite[Section 11]{FPST}, the divide $D$ is scannable of multiplicity~$4$. Its link can be computed as the closure of a 4-strand braid~$\beta$ constructed by the Couture-Perron algorithm \cite[Proposition 2.3]{CP} (reproduced in \cite[Definition~11.3]{FPST}): \[ \beta=\sigma_1\sigma_3\sigma_2\sigma_3\sigma_2\sigma_1\sigma_3\sigma_2\sigma_3\sigma_2\sigma_1\sigma_3\sigma_2\sigma_3\sigma_2. \] Direct computation shows that the Alexander polynomial of this knot is equal to \[ (1-t+t^2)(1-t^2+t^4)(1-t^3+t^6)=1-t+t^4-t^5+t^6-t^7+t^8-t^{11}+t^{12}, \] which is different from~\eqref{eq:Alexander-poly-1}. (c) Follows from \cite[Proposition 1.8(ii)]{BK}. \end{proof} By Proposition~\hyperref{pr:reg-expr-not-alg}, a connected line arrangement containing parallel lines is regular-expressive but not algebraic. We next give an example of the opposite~kind. \begin{proposition} Let $D$ be the ``non-Pappus'' pseudoline arrangement shown in Figure~\hyperref{fig:non-pappus-arr} on the right. The divide $D$ is algebraic but not regular-expressive. \end{proposition} \begin{figure} \caption{Left: a Pappus configuration of straight lines. Right~\cite[Figure~14]{AKPV} \label{fig:non-pappus-arr} \end{figure} \begin{proof} It is well known that the non-Pappus pseudoline arrangement~$D$ is non-stretchable, see \cite[Section~3]{AKPV}, \cite[Section~5.3]{PL}. We next show that the divide $D$ is algebraic. For the benefit of the readers who may not be experts in singularity theory, we begin by recalling some background. Any isolated curve singularity possesses a versal deformation with finite-dimen\-sion\-al base; see \cite[Section~II.1.3]{GLS} for a brief account of the theory of versal deformations. In particular, a versal deformation induces any other deformation. Furthermore, if the singular point is real and a versal deformation is conjugation-invariant with a smooth base, then it induces any other conjugation-invariant deformation with a smooth base. (Indeed, an analytic map $({\mathbb C}^n,0)\to({\mathbb C}^N,0)$ that takes real points to real points is given by germs of analytic functions with real coefficients.) If a singularity (at the origin) is given by $f(x,y)=0$, then the deformation \[ f(x,y)+\sum_{i+j\le d}t_{ij}x^iy^j=0,\quad (t_{ij})\in\mathbf{B}^N_{\varepsilon},\ N=\textstyle\binom{d+2}{2}, \] is versal if $d\ge\mu(f,0)-1$; here $\mathbf{B}^N_{\varepsilon}\subset{\mathbb C}^N$ denotes a sufficently small open disc centered at the origin. If $C=V(F(x,y))$ is an affine curve and $p_1,...,p_r$ are some of its isolated singularities, then the deformation \[ F(x,y)+\sum_{i+j\le d}t_{ij}x^iy^j=0,\quad (t_{ij})\in\mathbf{B}^N_{\varepsilon},\ N=\textstyle\binom{d+2}{2}, \] where $d\ge\sum_i\mu(C,p_i)-1$, is a joint versal deformation of the singular points $p_1,...,p_r$. That is, it simultaneously induces arbitrary individual deformations at all these singular points; see the versality criterion \cite[Theorem~II.1.16]{GLS} and \cite[Proposition~3.4.6]{GLS1}. 
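As a toy illustration of this versality criterion, consider an ordinary node $f(x,y)=xy$, for which $\mu(f,0)=1$: taking $d=\mu(f,0)-1=0$, the one-parameter deformation $xy+t_{00}=0$, $t_{00}\in\mathbf{B}^1_{\varepsilon}$, is already versal.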
Let \[ \prod_{i=1}^9(a_ix+b_iy+c_i)=0 \] be a Pappus configuration without parallel lines. The family \[ F_t(x,y)=\prod_{i=1}^9(a_ix+b_iy+c_it)=0, \quad t\in[0,1], \] is a deformation of the ordinary $9$-fold singular point $F_0(x,y)=0$, whose members $F_t=0$, $t\ne0$, are Pappus configurations differing from each other by a homothety. It is induced by a versal deformation \begin{align} \label{eq:versal-pappus} &F_0(x,y)+\sum_{i+j\le d}t_{ij}x^iy^j=0,\quad (t_{ij})\in\mathbf{B}^N_{\varepsilon}, \\ \intertext{where} \notag &\quad N=\textstyle\binom{d+2}{2}, \quad d=\mu(F_0,0)-1=63. \end{align} Since the total Milnor number of the Pappus configuration (i.e., the sum of Milnor numbers at all singular points) is less than $\mu(F_0,0)=64$, the deformation~\eqref{eq:versal-pappus} is joint versal for all the singularities of any given curve $F_t(x,y)=0$, $0<t\ll1$. We now construct a morsification of the singularity $F_0=0$ which is isotopic to our divide~$D$. We begin by deforming it into a Pappus configuration $F_t=0$ with $0<t\ll1$. Then, by variation of the monomials up to degree $63$, we deform each triple point of the latter curve into appropriate three double intersections (while keeping the nodes of the Pappus configuration). Since $t$ can be chosen arbitrarily small, the curve isotopic to the constructed one appears arbitrarily close to the original germ $\{F_0=0\}$. In view of the fact that all the strata in the real part of the discriminant in the versal deformation base are semialgebraic sets, by the arc selection lemma \cite[Lemma 3.1]{Milnor}, there exists a real analytic deformation \[ \widetilde F_t(x,y)=0, \quad 0\le t\ll1 \] of $\widetilde F_0=F_0=0$, whose members $\widetilde F_t=0$, $t\ne0$, realize the desired morsification. \end{proof} \end{document}
\begin{document} \title[Existence of resonances]{Existence of resonances for Schr\"odinger operators on hyperbolic space} \author[Borthwick]{David Borthwick} \address{Department of Mathematics, Emory University, Atlanta, GA 30322} \email{[email protected]} \author[Wang]{Yiran Wang} \address{Department of Mathematics, Emory University, Atlanta, GA 30322} \email{[email protected]} \date{\today} \begin{abstract} We prove existence results and lower bounds for the resonances of Schr\"odinger operators associated to smooth, compactly supported potentials on hyperbolic space. The results are derived from a combination of heat and wave trace expansions and asymptotics of the scattering phase. \end{abstract} \maketitle \section{Introduction} This article is devoted to establishing lower bounds on the resonance count for Schr\"odinger operators on the hyperbolic space $\mathbb{H}^{n+1}$. Although such results are well known in Euclidean scattering theory, the literature for Schr\"odinger operators on hyperbolic space is comparatively sparse. Upper bounds on resonances for such operators were considered in \cite{Borthwick:2010, BC:2014}. Most other recent papers dealing with Schr\"odinger operators on hyperbolic space have focused on applications to nonlinear Schr\"odinger equations \cite{AV:2009,Banica:2007,BCD:2009,BCS:2008,BM:2015,Ionescu:2012,IS:2009}. As far as we are aware, the literature contains no general existence results for resonances in this context. Let $\lap$ denote the positive Laplacian operator on $\bbH^{n+1}$. For $V \in C^\infty_0(\bbH^{n+1}, \mathbb{R})$, the Schr\"odinger operator $\lap+V$ has continuous spectrum $[n^2/4,\infty)$. The resolvent of $\lap +V$ is defined for $\operatorname{Re} s > n/2$ by \begin{equation}\label{RV.def} R_V(s) := (\lap + V - s(n-s))^{-1}. \end{equation} As an operator on weighted $L^2$ spaces, $R_V(s)$ admits a meromorphic extension to $s \in \mathbb{C}$, with poles of finite rank, as described in \S\ref{setup.sec}. The resonance set $\mathcal{R}_V$ associated to $V$ consists of the poles of $R_V(s)$, repeated according to the multiplicity given by \begin{equation}\label{mv.def} m_V(\zeta) := \operatorname{rank} \operatorname{Res}_\zeta R_V(s). \end{equation} There are no eigenvalues embedded in the continuous spectrum, and no resonances on the line $\operatorname{Re} s = n/2$ except possibly at $n/2$. For a proof, see for example Borthwick-Marzuola \cite[Thm.~1]{BM:2015}. The discrete spectrum of $\lap + V$ is therefore finite and lies below $n^2/4$. An eigenvalue $\lambda$ corresponds to a resonance $\zeta \in (n/2,n)$ such that $\lambda = \zeta(n-\zeta)$. The resonance counting function, \begin{equation}\label{NV.def} N_V(r) := \#\set*{ \zeta \in \mathcal{R}_V: \abs{\zeta - \tfrac{n}2} \le r}, \end{equation} satisfies a polynomial bound as $r \to \infty$, \begin{equation}\label{NV.bound} N_V(r) = O(r^{n+1}). \end{equation} This estimate is covered by the more general result of Borthwick \cite[Thm.~1.1]{Borthwick:2008}, but with $\bbH^{n+1}$ as the background metric one could also give a simpler proof following the approach that Guillop\'e-Zworski \cite{GZ:1995b} used for $n=1$. A sharp constant for the bound \eqref{NV.bound}, depending on the support of the potential, was obtained in Borthwick \cite{Borthwick:2010}. The corresponding lower bound was shown to hold in a generic sense in Borthwick-Crompton \cite{BC:2014}, but the existence question was not resolved.
The existence problem for resonances looks quite different in even and odd dimensions. In even dimensions, $\mathcal{R}_0$ contains resonances at negative integers, with multiplicities such that the polynomial bound \eqref{NV.bound} is already saturated for $V=0$. Therefore our goal in even dimensions is to distinguish $\mathcal{R}_V$ from $\mathcal{R}_0$. On the other hand, $\mathcal{R}_0$ is empty for odd-dimensional hyperbolic space. In that case we seek lower bounds on $\mathcal{R}_V$ itself. In the present paper, we will prove the following: \begin{theorem}\label{main.thm} Let $\mathcal{R}_V$ denote the set of resonances of $\lap + V$ for $V \in C^\infty_0(\bbH^{n+1},\mathbb{R})$, with $N_V(r)$ the corresponding counting function. \begin{enumerate}[label=(\roman*), parsep=4pt, leftmargin=2\parindent] \item If the dimension is even or equal to 3, then $\mathcal{R}_V = \mathcal{R}_0$ only if $V=0$. \item For even dimension $\ge 6$, if $V \ne 0$ then $\mathcal{R}_V$ and $\mathcal{R}_0$ differ by infinitely many points. This conclusion holds also if $\int V\>dg \ge 0$ for $\dim=2$, and if $\int V\>dg \ne 0$ for $\dim =4$. \item In all odd dimensions, if $\mathcal{R}_V$ is not empty then the resonance set is infinite and $N_V(r) \ne O(r)$. \end{enumerate} \end{theorem} In even dimensions, we also show that the resonance set determines the scattering phase and wave trace completely, and in particular fixes all of the wave invariants. See \S\ref{exist.sec} for the full set of inverse scattering results. Theorem~\ref{main.thm} is derived from the asymptotic expansions of the scattering phase and heat and wave traces. In the context of potential scattering in hyperbolic space, these expansions do not seem to have been studied in the literature, so we give a full account of their adaptation to this setting. The explicit formulas for the wave invariants are stated in Proposition~\ref{d12.prop}. The organization of the paper is as follows. After reviewing some facts on the resolvent and its kernel in \S\ref{setup.sec}, we use the spectral resolution to define distributional traces in \S\ref{trace.sec}. In \S\ref{bk.sec} we establish the Birman-Krein formula relating these traces to the scattering phase. The Poisson formula expressing the wave trace as a sum over resonances is proven in \S\ref{poisson.sec}. In \S\ref{wave.exp.sec} the asymptotic expansion of the wave trace at $t=0$ is established. The corresponding heat trace expansion is worked out in \S\ref{heat.sec}, which is then used to study asymptotics of the scattering phase in \S\ref{scphase.sec}. Finally, in \S\ref{exist.sec} these tools are applied to derive the existence results. \noindent {\bf Acknowledgement} The authors are grateful for some very helpful comments and corrections by the journal referees. \section{The resolvent}\label{setup.sec} The resolvent of the free Laplacian on $\bbH^{n+1}$ is traditionally written with spectral parameter $s(n-s)$ as in \eqref{RV.def}. The resolvent kernel is given by a well-known hypergeometric formula, derived by Patterson \cite[Prop~2.2]{Patterson:1989}: \[ \begin{split} R_0(s;z,w) &= \frac{\pi^{-\frac{n}2} 2^{-2s-1}\Gamma(s)}{\Gamma(s-\frac{n}2+1)} \cosh^{-2s}(\tfrac12 d(z,w)) \\ &\qquad \times F(s,s-\tfrac{n-1}2, 2s-n+1; \cosh^{-2}(\tfrac12 d(z,w))), \end{split} \] where $d(z,w)$ is the hyperbolic distance.
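(For orientation: in the lowest odd-dimensional case $n+1=3$, i.e.\ $n=2$, this kernel reduces to the well-known elementary expression \[ R_0(s;z,w) = \frac{1}{4\pi}\,\frac{e^{-(s-1)d(z,w)}}{\sinh d(z,w)}, \] which is an entire function of~$s$, consistent with the fact noted above that $\mathcal{R}_0$ is empty in odd dimensions.)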
Using hypergeometric identities \cite[\S14.3(iii)]{NIST}, we can rewrite this formula as \begin{equation}\label{R0.kernel} R_0(s;z,w) = (2\pi)^{-\frac{n+1}2} \frac{\Gamma(s)}{{\sinh^\mu d(z,w)}} \mathbf{Q}_\nu^\mu(\cosh d(z,w)), \end{equation} where $\nu := s - \tfrac{n+1}2$, $\mu := \frac{n-1}2$, and $\mathbf{Q}_\nu^\mu$ denotes the normalized Legendre function, \[ \mathbf{Q}_\nu^\mu(x) := \frac{e^{-i \pi \mu}}{\Gamma(\mu+\nu+1)} Q_\nu^\mu(x). \] (Under this convention $\mathbf{Q}_\nu^\mu(x)$ is entire as a function of both indices.) The factor $\Gamma(s)$ in \eqref{R0.kernel} has poles at negative integers, but these yield resonances only for $n+1$ even. In odd dimensions the poles are canceled by zeroes of $\mathbf{Q}_\nu^\mu$. Let $(r,\omega) \in [0,\infty) \times \mathbb{S}^n$ denote geodesic polar coordinates on $\bbH^{n+1}$. We will take \[ \rho := \frac{1}{\cosh r} \] as a boundary defining function for the radial compactification of $\bbH^{n+1}$ into a ball. The hypergeometric formula for $\mathbf{Q}_\nu^\mu(x)$ \cite[\S3.2(5)]{Erdelyi} yields an expansion of the resolvent kernel, \begin{equation}\label{R0.expand} R_0(s;z,w) = \pi^{-\frac{n}2} 2^{-s-1} \frac{\Gamma(s)}{\Gamma(s-\frac{n}2+1)} \sum_{k=0}^\infty \frac{a_k(s)}{(\cosh d(z,w))^{s+2k}}, \end{equation} with $a_0(s) = 1$. In particular, \begin{equation}\label{R0.infty} R_0(s;z,w) = O(e^{-sd(z,w)})\text{ as }d(z,w) \to \infty, \end{equation} which shows that $R_0(s)$ extends meromorphically to $s \in \mathbb{C}$, as a bounded operator $\rho^N L^2(\bbH^{n+1}) \to \rho^{-N} L^2(\bbH^{n+1})$ for $\operatorname{Re} s > -N+\nh$. For $V \in C^\infty_0(\bbH^{n+1}, \mathbb{R})$, the resolvent $R_V(s)$ defined in \eqref{RV.def} is related to $R_0(s)$ by the identity, \begin{equation}\label{R0RV} R_0(s) = R_V(s) (1 + VR_0(s)). \end{equation} The operator $1 + VR_0(s)$ is invertible by Neumann series for $\operatorname{Re} s$ sufficiently large, and it follows from \eqref{R0.infty} that the operator $VR_0(s)$ is compact on $\rho^N L^2(\bbH^{n+1})$ for $\operatorname{Re} s > -N+\nh$. Hence the analytic Fredholm theorem yields a meromorphic inverse $(1 + VR_0(s))^{-1}$, with poles of finite rank, which is bounded on $\rho^N L^2(\bbH^{n+1})$ for $\operatorname{Re} s > -N+\nh$. We thus obtain a meromorphic extension of $R_V(s)$ by setting \[ R_V(s) = R_0(s) (1 + VR_0(s))^{-1}, \] which is bounded as an operator $\rho^N L^2(\bbH^{n+1}) \to \rho^{-N} L^2(\bbH^{n+1})$ for $\operatorname{Re} s > -N+\nh$. We can see from \eqref{R0.expand} that the free resolvent kernel $R_0(s;z,z')$ is polyhomogeneous as a function of $\rho(z')$ as $\rho(z') \to 0$, with leading term of order $\rho(z')^s$. It follows from \eqref{R0RV} that the kernel of $R_V(s)$ has the same property. The Poisson kernel is defined as the leading coefficient in the expansion as $\rho(z') \to 0$, \begin{equation}\label{EV.def} E_V(s;z,\omega') := \lim_{r' \to \infty} \rho(z')^{-s} R_V(s; z,z'), \end{equation} Interpreting this function is as an integral kernel, with respect to the standard sphere metric, defines the Poisson operator, \[ E_V(s): C^\infty(\mathbb{S}^n) \to L^2(\bbH^{n+1}). \] The Poisson operator maps boundary data to solutions of the generalized eigenfunction equation $(\lap + V - s(n-s))u = 0$. By Stone's formula, the continuous part of the spectral resolution of $\lap + V$ is given by the restriction of the operator $R_V(s) - R_V(n-s)$ to the line $\operatorname{Re} s = n/2$. 
This is related to the Poisson operator by the following identity: \begin{equation}\label{RR.EE} R_V(s) - R_V(n-s) = (n-2s) E_V(s) E_V(n-s)^t, \end{equation} as operators $C^\infty_0(\bbH^{n+1}) \to L^2(\bbH^{n+1})$, meromorphically for $s \in \mathbb{C}$. The proof of \eqref{RR.EE} is essentially the same as in the case $n=1$ presented in \cite[Prop.~4.6]{B:STHS2}. \section{Traces}\label{trace.sec} Given $V \in C^\infty_0(\bbH^{n+1}, \mathbb{R})$ and $f \in \mathcal{S}(\mathbb{R})$, the operator $f(\lap + V) - f(\lap)$ is of trace class. In fact, the map \begin{equation}\label{rel.trace} f \mapsto \operatorname{tr} \sqbrak*{f(\lap + V) - f(\lap)} \end{equation} defines a tempered distribution. For the proof, see Dyatlov-Zworski \cite[Thm.~3.50]{DZbook}, which applies to the hyperbolic setting with only minor modifications. The spectral theorem gives the representation \[ f(\lap + V) = \lim_{\varepsilon\to 0} \frac{1}{2\pi i} \int_{-\infty}^\infty \sqbrak[\Big]{(\lap + V - \lambda- i\varepsilon)^{-1} - (\lap + V - \lambda+i\varepsilon)^{-1}} f(\lambda) d\lambda, \] with the limit taken in the operator-norm topology. We can separate the contributions from the discrete and continuous spectrum, and write the continuous part in terms of $R_V(s)$ by setting $s(n-s) = \lambda \pm i \varepsilon$. The result is \begin{equation}\label{fDV.tlim} \begin{split} f(\lap + V) &= \lim_{\varepsilon \to 0} \frac{1}{2\pi i} \int_{-\infty}^\infty \sqbrak[\Big]{R_V(\tfrac{n}2 - i \xi + \varepsilon) - R_V(\tfrac{n}2 + i \xi + \varepsilon)} f(\tfrac{n^2}4 + \xi^2) \xi\>d\xi \\ &\qquad + \sum_{j=1}^d f(\lambda_j) \phi_j \otimes \overline{\phi}_j, \end{split} \end{equation} where the $\lambda_1,\dots, \lambda_d$ are the eigenvalues of $\lap+V$, with corresponding normalized eigenvectors $\phi_j$. The self-adjointness of $\lap + V$ implies an estimate \[ \norm{(\lap + V - s(n-s)) u} \ge \abs{\operatorname{Im} s(n-s)} \norm{u}. \] This shows that a pole of $R_V(s)$ at $s = \tfrac{n}2$ can have order at most two. (This argument is analogous to the Euclidean case; see \cite[Lemma~3.16]{DZbook}.) A pole of order two can occur only if $n^2/4$ is an eigenvalue, which is ruled out by Bouclet \cite[Cor.~1.2]{Bouclet:2013}. Therefore $R_V(s)$ has at most a first order pole at $n/2$. The integrand in \eqref{fDV.tlim} is thus continuous at $\varepsilon=0$ in the operator topology, because a pole would be canceled by the extra factor of $\xi$. Taking the limit $\varepsilon \to 0$ gives \[ \begin{split} f(\lap + V) &= \frac{1}{2\pi i} \int_{-\infty}^\infty \sqbrak[\Big]{R_V(\tfrac{n}2 - i \xi) - R_V(\tfrac{n}2 + i \xi)} f(\tfrac{n^2}4 + \xi^2) \xi\>d\xi \\ &\qquad + \sum_{j=1}^d f(\lambda_j) \phi_j \otimes \overline{\phi}_j. \end{split} \] Let us define the integral kernel of the spectral resolution as \begin{equation}\label{KV.def} K_V(\xi; z,w) := \frac{\xi}{2\pi i} \sqbrak[\Big]{R_V(\tfrac{n}2 - i \xi; z,w) - R_V(\tfrac{n}2 + i \xi; z,w)}. \end{equation} For $V=0$ this kernel can be written explicitly, using \eqref{R0.kernel} and the Legendre connection formula \cite[\S14.9(iii)]{NIST}, \[ \frac{\mathbf{Q}_{-\nu-1}^\mu(x)}{\Gamma(\mu+\nu+1)} - \frac{\mathbf{Q}_{\nu}^\mu(x)}{\Gamma(\mu-\nu) } = \cos(\pi \nu) P_{\nu}^{-\mu}(x). \] The result is \begin{equation}\label{imR0} K_0(\xi;z,w) := c_n(\xi) (\sinh r)^{-\mu} P_{-\frac12 + i\xi}^{-\mu}(\cosh r), \end{equation} where $\mu = \tfrac{n-1}2$, $r = d(z,w)$, and \[ c_n(\xi) := (2\pi)^{-\mu} \xi \sinh (\pi \xi) \Gamma(\tfrac{n}2 + i\xi) \Gamma(\tfrac{n}2 - i\xi).
\] The hypergeometric expansion \cite[eq.~(14.3.9)]{NIST} of $P_\nu^{-\mu}(x)$ near $x=1$ shows that $K_0(\xi;z,w)$ is smooth for all $z,w \in \bbH^{n+1}\times \bbH^{n+1}$. For the Schr\"odinger operator case, we note that \eqref{R0RV} yields the identity \[ R_V(s) - R_V(n-s) = (1 - R_V(s)V)(R_0(s) - R_0(n-s))(1 - VR_V(n-s)). \] Since $R_0(s) - R_0(n-s)$ has a smooth kernel for $\operatorname{Re} s = n/2$, $V\in C^\infty_0(\bbH^{n+1}, \mathbb{R})$, and $R_V(s)$ is a pseudodifferential operator of order $-2$, the identity implies that $R_V(s) - R_V(n-s)$ also has a smooth kernel for $\operatorname{Re} s = n/2$, $s \ne n/2$. The kernel $K_V(\xi;\cdot,\cdot)$ is thus continuous for $\xi \in \mathbb{R}$ and smooth as a function on $\bbH^{n+1}\times \bbH^{n+1}$. In Borthwick-Marzuola \cite[Prop.~6.1]{BM:2015}, it was shown that $\operatorname{Im} R_V(\nh + i\xi;z,w)$ satisfies a polynomial bound as a function of $\xi \in \mathbb{R}$, uniformly in $\bbH^{n+1}\times \bbH^{n+1}$, provided there is no resonance at $s = \nh$. This restriction can be removed for the $K_V$ estimate because of the extra factor of $\xi$ in \eqref{KV.def}, since, as noted above, $R_V(s)$ has at most a first order pole at $s= \nh$. We can thus use the spectral resolution formula to write the kernel of $f(\lap + V)$ as \begin{equation}\label{fv.stone} f(\lap + V)(z,w) = \int_{-\infty}^\infty K_V(\xi;z,w) f(\tfrac{n^2}4 + \xi^2)\> d\xi + \sum_{j=1}^d f(\lambda_j) \phi_j(z) \bar\phi_j(w). \end{equation} Since $f(\lap + V) - f(\lap)$ is trace-class and has a continuous kernel, the trace can be computed as an integral over the kernel by Duflo's theorem \cite[Thm.~V.3.1.1]{Duflo}. This proves the following: \begin{proposition}\label{trace.Kf.prop} For $V \in C^\infty_0(\bbH^{n+1}, \mathbb{R})$, \[ \begin{split} \operatorname{tr} \sqbrak*{f(\lap + V) - f(\lap)} &= \int_{\bbH^{n+1}} \int_{-\infty}^\infty \sqbrak[\big]{K_V(\xi;z,z) - K_0(\xi,z,z)} f(\tfrac{n^2}4 + \xi^2)\> d\xi \> dg(z) \\ &\qquad + \sum_{j=1}^d f(\lambda_j). \end{split} \] \end{proposition} \section{Birman-Krein formula}\label{bk.sec} The Birman-Krein formula relates the spectral resolution of $\lap + V$ to the scattering matrix. This formula provides the crucial link between the traces discussed in \S\operatorname{Re}f{trace.sec} and the resonance set. The formula for the hyperbolic case is analogous to the Euclidean version \cite[Thm.~3.51]{DZbook}. The scattering matrix associated to $V$ is defined as follows. The Poisson operator maps a function $f \in C^\infty(\mathbb{S}^n)$ to a generalized eigenfunction $E_V(s) f$, which admits an asymptotic expansion with leading terms \begin{equation}\label{eigf.exp} (2s-n) E_V(s) f \sim \rho^{n-s} f + \rho^s f', \end{equation} where $f' \in C^\infty(\mathbb{S}^n)$ for $\operatorname{Re} s = n/2$, $s \ne n/2$. The structure of this expansion is well known and can be deduced from the resolvent identity \eqref{R0RV}. The scattering matrix $S_V(s)$ is a family of pseudodifferential operators $S_V(s)$ on $\mathbb{S}^n$ that intertwines the leading coefficients of \eqref{eigf.exp}, \[ S_V(s): f \mapsto f'. \] For appropriate choices of $s$, we can interpret $f$ as incoming boundary data, and $f'$ as the corresponding outgoing data. By the meromorphic continuation of the resolvent, $S_V(s)$ extends meromorphically to $s \in \mathbb{C}$. 
The identities \begin{equation}\label{SV.inv} S_V(s)^{-1} = S_V(n-s), \end{equation} and \begin{equation}\label{ESE} E_V(n-s)S_V(s) = -E_V(s), \end{equation} which follow from \eqref{eigf.exp}, hold meromorphically in $s$. The integral kernel of the scattering matrix (with respect to the standard sphere metric) can be derived from the resolvent by a boundary limit analogous to \eqref{EV.def}, \[ S_V(s;\omega, \omega') := (2s-n) \lim_{\rho,\rho' \to 0} (\rho\rho')^{-s} R_V(s; z,z'), \] for $\omega \ne \omega'$. We can thus see from \eqref{R0RV} that \[ S_V(s) = S_0(s) - (2s-n) E_V(s)VE_0(s). \] This gives a formula for the relative scattering matrix, \begin{equation}\label{srel.ee} S_V(s) S_0(s)^{-1} = I + (2s-n) E_V(s)VE_0(n-s). \end{equation} Since $(2s-n) E_V(s)VE_0(n-s)$ is a smoothing operator, $S_V(s)S_0(s)^{-1}$ is of determinant class. We can thus define the relative scattering determinant, \[ \tau(s) := \det \sqbrak*{S_V(s)S_0(n-s)}. \] By \eqref{SV.inv}, the scattering determinant satisfies \begin{equation}\label{tau.refl} \tau(s)\tau(n-s) = 1. \end{equation} Also, since $S_V(s)$ is unitary on the critical line, $\abs{\tau(s)} = 1$ for $\operatorname{Re} s = n/2$. We can evaluate $\tau(\tfrac{n}2)$ by noting that \begin{equation}\label{SV.n2} S_V(\nh) = -I + 2P, \end{equation} where $P$ is an orthogonal projection of rank $m_V(\tfrac{n}2)$. (See \cite[Lemma~8.9]{B:STHS2} for the argument.) This implies that \[ \tau(\tfrac{n}2) = (-1)^{m_V(\tfrac{n}2)}. \] The scattering phase $\sigma(\xi)$ for $\xi \in \mathbb{R}$ is defined as \[ \sigma(\xi) := \frac{i}{2\pi} \log \frac{\tau(\frac{n}2 + i\xi)}{\tau(\frac{n}2)}, \] with the branch of log chosen continuously from $\sigma(0) := 0$. The reflection formula \eqref{tau.refl} implies that \[ \sigma(-\xi) = -\sigma(\xi). \] We will be particularly interested in the derivative of the scattering phase. By Gohberg-Krein \cite[\S IV.1]{GK:1969}, \[ \begin{split} \frac{\tau'}{\tau}(s) &= \operatorname{tr} \sqbrak*{\paren*{S_V(s)S_0(n-s)}^{-1} \frac{d}{ds}\paren*{S_V(s)S_0(n-s)}} \\ &= \operatorname{tr} \sqbrak[\Big]{ S_V(n-s) S_V'(s) - S_0(s) S_0'(n-s)}, \end{split} \] where $S_V'(s) := \partial_s S_V(s)$. For the scattering phase this gives \begin{equation}\label{logp.tau} \sigma'(\xi) = -\frac{1}{2\pi} \operatorname{tr} \sqbrak[\Big]{ S_V(\tfrac{n}2 - i\xi) S_V'(\tfrac{n}2 + i\xi) - S_0(\tfrac{n}2 + i\xi) S_0'(\tfrac{n}2 - i\xi)}. \end{equation} \begin{theorem}[Birman-Krein formula]\label{BK.thm} For $V \in L^\infty_{\rm cpt}(\bbH^{n+1}, \mathbb{R})$ and $f \in \mathcal{S}(\mathbb{R})$, \[ \begin{split} \operatorname{tr} \sqbrak[\big]{f(\lap + V) - f(\lap)} &= \int_{0}^\infty \sigma'(\xi) f(\tfrac{n^2}4 + \xi^2) \, d\xi \\ &\qquad + \sum_{j=1}^{d} f(\lambda_j) + \frac12 m_V(\tfrac{n}2) f(\tfrac{n^2}4). \end{split} \] where $\lambda_1,\dots,\lambda_d$ are the eigenvalues of $\lap+V$, and $m_V(\tfrac{n}2)$ is the multiplicity of $n/2$ as a resonance of $\lap+V$. \end{theorem} \begin{proof} For convenience, let us assume that the discrete spectrum of $\lap + V$ is empty, since the contribution to the trace from $\lambda_1,\dots,\lambda_d$ is easily dealt with. Under this assumption, Proposition~\operatorname{Re}f{trace.Kf.prop} gives \[ \operatorname{tr} \sqbrak*{f(\lap + V) - f(\lap)} = \int_{\bbH^{n+1}} \int_{-\infty}^\infty \sqbrak[\big]{K_V(\xi;z,z) - K_0(\xi,z,z)} f(\tfrac{n^2}4 + \xi^2)\> d\xi \> dg(z). 
\] If the integral over $z$ is restricted to the set $\set{\rho(z) \ge \varepsilon}$, then switching the order of integration is justified by the uniform polynomial bounds on $K_V$. We can thus write \begin{equation}\label{Ie.lim} \operatorname{tr} \sqbrak*{f(\lap + V) - f(\lap)} = \lim_{\varepsilon \to 0} \int_{-\infty}^\infty I_\varepsilon(\xi) f(\tfrac{n^2}4 + \xi^2)\, d\xi, \end{equation} where \[ I_\varepsilon(\xi) := \int_{\set{\rho\ge \varepsilon}} \sqbrak[\big]{K_V(\xi;z,z) - K_0(\xi,z,z)} d\omega'\, dg(z). \] To compute $I_\varepsilon(\xi)$, we first use \eqref{RR.EE} to write $K_V$ in terms of $E_V$, \begin{equation}\label{Ivep1} \begin{split} I_\varepsilon(\xi) &:= - \frac{(2s-n)^2}{4\pi} \int_{\set{\rho\ge \varepsilon}} \int_{\mathbb{S}^n} \Bigl[ E_V(s;z,\omega')E_V(n-s;z,\omega') \\ &\qquad - E_0(s;z,\omega')E_0(n-s;z,\omega') \Bigr] d\omega'\>dg(z), \end{split} \end{equation} where $d\omega'$ is the standard sphere measure. We are using the identification $s = \nh+i\xi$ freely here, to simplify notation where possible. The next step is to apply a Maass-Selberg identity as described in the proof of \cite[Prop.~10.4]{B:STHS2}. Because $E_V(s)$ satisfies the eigenvalue equation, we can write \[ E_V(s') E(n-s)^t = \frac{1}{s(n-s) - s'(n-s')} \sqbrak[\Big]{E_V(s') \lap E(n-s)^t - \lap E_V(s') E(n-s)^t} \] Applying this to \eqref{Ivep1} yields \[ \begin{split} I_\varepsilon(\xi) &:= - \frac{1}{4\pi} \lim_{s'\to s} \frac{2s-n}{s'-s} \int_{\set{\rho\ge \varepsilon}} \int_{\mathbb{S}^n} \Bigl[ E_V(s';z,\omega') \lap E_V(n-s;z,\omega') \\ & \qquad - \lap E_V(s'; z,\omega') E_V(n-s; z,\omega') - E_0(s'; z,\omega') \lap E_0(n-s; z,\omega') \\ &\qquad - E_0(s';z,\omega') \lap E_0(n-s;z,\omega') \Bigr] d\omega'\>dg(z), \end{split} \] with $\lap$ acting on the $z$ variable. By Green's formula, applied to the region $\set{\rho = \varepsilon}$, \[ \begin{split} I_\varepsilon(\xi) &:= \frac{1}{4\pi} \lim_{s'\to s} \frac{2s-n}{s'-s} \int_{\set{\rho = \varepsilon}} \int_{\mathbb{S}^n} \Bigl[ E_V(s';z,\omega') \partial_r E_V(n-s;z,\omega') \\ & \qquad - \partial_r E_V(s'; z,\omega') E_V(n-s; z,\omega') - E_0(s'; z,\omega') \partial_r E_0(n-s; z,\omega') \\ &\qquad - E_0(s';z,\omega') \partial_r E_0(n-s;z,\omega') \Bigr] \sinh^n r \>d\omega'\>d\omega, \end{split} \] where $z = (r,\omega)$ in geodesic polar coordinates. The same calculation with $s=s'$ yields zero, so we can evaluate the limit $s' \to s$ as a derivative, \[ \begin{split} I_\varepsilon(\xi) &= \frac{2s-n}{4\pi} \int_{\mathbb{S}^n} \int_{\mathbb{S}^n} \Bigl[ E_V'(s; z,\omega') \partial _r E_V(n-s; z,\omega') \\ & \qquad - \partial _r E_V'(s; z,\omega') E_V(n-s; z,\omega') - E_0'(s; z,\omega') \partial _r E_0(n-s; z,\omega') \\ & \qquad + \partial _r E_0'(s; z,\omega') E_0(n-s; z,\omega') \Bigr] \sinh^n r \>d\omega'\>d\omega \Big|_{r = \cosh^{-1}(1/\varepsilon)}, \end{split} \] where $E_V' = \partial_s E_V$. The integrand can be simplified using the identity \eqref{SV.inv} and the distributional asymptotic \[ (2s-n)E_V(s;z,\omega') \sim \rho^{n-s} \partialta_\omega(\omega') + \rho^s S_V(s;\omega,\omega'), \] which follows from \eqref{eigf.exp}. 
After cancelling terms between $E_V(s)$ and $E_0(s)$, we find that \begin{equation}\label{Iep.calc} \begin{split} I_\varepsilon(\xi) &= - \frac{1}{4\pi} \operatorname{tr} \Bigl[ S_V(\tfrac{n}2 - i\xi) S_V'(\tfrac{n}2 + i\xi) - S_0(\tfrac{n}2 - i\xi) S_0'(\tfrac{n}2 + i\xi) \Bigr] \\ & \quad + \frac{\varepsilon^{-2i\xi}}{8\pi i\xi} \operatorname{tr} \Bigl[ S_V(\tfrac{n}2 - i\xi)- S_0(\tfrac{n}2 - i\xi) \Bigr] \\ & \quad - \frac{\varepsilon^{2i\xi}}{8\pi i\xi} \operatorname{tr} \Bigl[ S_V(\tfrac{n}2 + i\xi)- S_0(\tfrac{n}2 + i\xi) \Bigr] \\ & \quad + o(\varepsilon). \end{split} \end{equation} The first trace in \eqref{Iep.calc} reduces to $\tfrac12\sigma'(\xi)$ by \eqref{logp.tau}. Thus, applying \eqref{Iep.calc} in \eqref{Ie.lim} gives \begin{equation}\label{trf.alim} \begin{split} \operatorname{tr} \sqbrak[\big]{f(\lap + V) - f(\lap)} &= \frac12 \int_{-\infty}^\infty \sigma'(\xi) f(\tfrac{n^2}4 + \xi^2) d\xi \\ &\qquad + \lim_{a \to \infty} \int_{-\infty}^\infty \sqbrak*{\frac{e^{i\xi a}}{8\pi i\xi} \varphi(\xi) - \frac{e^{-i\xi a}}{8\pi i\xi} \varphi(-\xi)} d\xi, \end{split} \end{equation} where we have substituted $\varepsilon = e^{-a/2}$, and \begin{equation}\label{varphi.def} \varphi(\xi) := \operatorname{tr} \Bigl[ S_V(\tfrac{n}2 - i\xi)- S_0(\tfrac{n}2 - i\xi) \Bigr] f(\tfrac{n^2}4 + \xi^2). \end{equation} To evaluate the limit $a \to \infty$ in \eqref{trf.alim}, we need to control the growth of $\varphi(\xi)$. We can argue as in \cite[Lemma~3.3]{BC:2014} that for $\chi \in C^\infty_0(\mathbb{H})$, equal to 1 on the support of $V$, \[ S_V(s) - S_0(s) = -(2s-n) E_0(s)^t \chi (1 + VR_0(s) \chi)^{-1} V E_0(s). \] The Hilbert-Schmidt norms of the cutoff factors $\chi E_0(\nh \pm i\xi)$ are $O(\abs{\xi}^{-1})$ by \cite[Lemma~3.3]{BC:2014}. The operator norm of $(1 + VR_0(\nh \pm i\xi) \chi)^{-1}$ is $O(1)$ by the cutoff resolvent bound from Guillarmou \cite[Prop.~3.2]{Gui:2005c}. Therefore the trace in \eqref{varphi.def} has at most polynomial growth, and $\varphi$ is integrable over $\xi \in \mathbb{R}$. The Riemann-Lebesgue lemma gives \[ \lim_{a \to \infty} \int_{\abs{\xi} >1} \sqbrak*{\frac{e^{i\xi a}}{8\pi i\xi} \varphi(\xi) - \frac{e^{-i\xi a}}{8\pi i\xi} \varphi(-\xi)} d\xi = 0, \] as well as \[ \lim_{a \to \infty} \int_{-1}^1 \frac{e^{\pm i\xi a}}{8\pi i\xi} \sqbrak[\Big]{ \varphi(\pm \xi) - \varphi(0)} d\xi = 0. \] We can thus drop the portion of the integral with $\abs{\xi} >1$ and replace $\varphi(\pm\xi)$ by $\varphi(0)$ for $\abs{\xi} \le 1$ before taking the limit. This reduces the final term in \eqref{trf.alim} to \[ \begin{split} \lim_{a \to \infty} \int_{-\infty}^\infty \sqbrak*{\frac{e^{i\xi a}}{8\pi i\xi} \varphi(\xi) - \frac{e^{-i\xi a}}{8\pi i\xi} \varphi(-\xi)} d\xi &= \lim_{a \to \infty} \int_{-1}^1 \frac{\sin(\xi a)}{4\pi \xi} \varphi(0) d\xi \\ &= \varphi(0) \int_{-\infty}^\infty \frac{\sin(\xi)}{4\pi \xi} d\xi \\ &= \frac14 \varphi(0). \end{split} \] To complete the argument, we note that \eqref{SV.n2} implies that \[ \operatorname{tr} \Bigl[ S_V(\tfrac{n}2)- S_0(\tfrac{n}2) \Bigr] = 2m_V(\tfrac{n}2). \] \end{proof} \section{Poisson formula}\label{poisson.sec} The Poisson formula expresses the trace of the wave group as a sum over the resonance set. The relative wave trace, \begin{equation}\label{ThetaV.def} \Theta_V(t) := \operatorname{tr}\sqbrak*{\cos \paren*{t\sqrt{\lap + V - n^2/4}} - \cos \paren*{t\sqrt{\lap - n^2/4}}}, \end{equation} is defined distributionally as in \S\operatorname{Re}f{trace.sec}. 
That is, for $\psi \in C^\infty_0(\mathbb{R})$, \[ \paren*{\Theta_V, \psi} := \operatorname{tr} \sqbrak[\big]{f(\lap + V) - f(\lap)}, \] where \begin{equation}\label{f.psi} f(x) := \chi(x) \int_{-\infty}^\infty \cos\paren*{t \sqrt{x - n^2/4}} \psi(t)\>dt, \end{equation} with $\chi$ a smooth cutoff which equals $1$ on the spectrum of $\lap + V - n^2/4$ and vanishes on $(-\infty, c]$ for some $c<0$. The cutoff is a technicality, included so that $f \in \mathcal{S}(\mathbb{R})$. \begin{theorem}[Poisson formula]\label{poisson.thm} For a potential $V \in C^\infty_0(\bbH^{n+1}, \mathbb{R})$, \begin{equation}\label{poisson.formula} t^{n+1} \Theta_V = t^{n+1} \sqbrak[\Bigg]{ \frac12 \sum_{\zeta\in \mathcal{R}_V} e^{(\zeta-\frac{n}2)\abs{t}} - u_0(t)} \end{equation} as a distribution on $\mathbb{R}$, where \[ u_0(t) := \begin{dcases} \frac{\cosh t/2}{(2 \sinh t/2)^{n+1}},&\text{for }n+1\text{ even},\\ 0, & \text{for }n+1 \text{ odd}. \end{dcases} \] \end{theorem} A more general version of the Poisson formula for resonances for compactly supported black box perturbations of $\bbH^{n+1}$ was stated in \cite[Thm.~3.4]{Borthwick:2010}, with the proof omitted because of its similarity to the argument of Guillop\'e-Zworski \cite{GZ:1997}. Zworski has recently noted that the proof in \cite{GZ:1997} glossed over certain technical details concerning the computation of the distributional Fourier transform of the spectral resolution. Furthermore, the optimal factor of $t^{n+1}$ was not obtained in these previous versions. The technicalities of this proof are now worked out in the book of Dyatlov-Zworski \cite[Ch.~3]{DZbook}, including the $t$ prefactor. The proof of \cite[Thm.~3.53]{DZbook} relies only on a global upper bound on the counting function, as in \eqref{NV.bound}, and a factorization formula for the scattering determinant, which we state as Proposition~\operatorname{Re}f{tau.factor.prop} below. It therefore essentially applies to Theorem~\operatorname{Re}f{poisson.thm}. However, there are some structural differences in the hyperbolic case, due to the shifted spectral parameter $z = s(1-s)$ and the non-trivial background contribution of $\mathbb{H}^{n+1}$ in even dimensions. For the sake of completeness, we will include a hyperbolic version of the proof. The starting point is to apply the Birman-Krein formula (Theorem \operatorname{Re}f{BK.thm}) to the relative wave trace. The relation \eqref{f.psi} implies that \[ f(\tfrac{n^2}4 + \xi^2) = \frac12\sqbrak*{\hat\psi(\xi) + \hat\psi(-\xi)}. \] Using this, and the fact that $\sigma'(\xi)$ is even, reduces the Birman-Krein formula to \begin{equation}\label{theta.bk} \paren*{\Theta_V, \psi} = \frac12 \int_{-\infty}^\infty \sigma'(\xi) \hat\psi(\xi) d\xi + \sum_{j=1}^d f(\lambda_j) + \frac12 m_V(\tfrac{n}2) \hat\psi(0). \end{equation} To evaluate the integral in \eqref{theta.bk}, we need some additional facts about the scattering determinant. Given the polynomial bound on the resonance counting function \eqref{NV.bound}, we can define the Hadamard product \[ H_V(s) := s^{m_V(0)} \prod_{\zeta \in \mathcal{R}_V\backslash\set{0}} E\paren*{s/\zeta, n+1}, \] where \[ E(z,p) := \paren*{1 - z} e^{z + \frac{z^2}{2} + \dots + \frac{z^p}{p}}. \] This yields an entire function with zeros located at the resonances. The following factorization formula provides the connection between the Birman-Krein formula and the resonance set. 
\begin{proposition}\label{tau.factor.prop} The relative scattering determinant admits a factorization \[ \tau(s) = (-1)^{m_V(\tfrac{n}2)} e^{q(s)} \frac{H_V(n-s)}{H_V(s)} \frac{H_0(s)}{H_0(n-s)}, \] where $q$ is a polynomial of degree at most $n+1$, satisfying $q(n-s) = -q(s)$. \end{proposition} Proposition~\operatorname{Re}f{tau.factor.prop} is a special case of \cite[Prop.~3.1]{Borthwick:2010}, which applies to black-box perturbations of $\bbH^{n+1}$. That statement did not include the symmetry condition on $q(s)$, which follows from \eqref{tau.refl} once the parity of $m_V(\tfrac{n}2)$ has been factored out. An analogous result for metric perturbations was given in \cite[Prop.~7.2]{Borthwick:2008}, without the estimate on the degree of $q(s)$. These previous versions contained a typo in the Hadamard product, in that the $\zeta=0$ term should always be treated as a separate factor $s^{m(0)}$. In view of \eqref{rel.trace}, Theorem~\operatorname{Re}f{BK.thm} implies that the derivative $\sigma'$ defines a tempered distribution. We will need the following estimate of its rate of growth. \begin{proposition}\label{temper.prop} For $V \in C^\infty_0(\bbH^{n+1}, \mathbb{R})$, the derivative of the scattering phase satisfies \[ \abs*{\sigma'(\xi)} \le C_V ( 1 + \abs{\xi})^{n-1} \] for $\xi \in \mathbb{R}$. \end{proposition} The fact that $\sigma'$ has at most polynomial growth follows from Proposition~\operatorname{Re}f{tau.factor.prop}, by a general argument given in Guillop\'e-Zworski \cite[Lemma~4.7]{GZ:1997}, which in turn is based on a method introduced by Melrose \cite{Melrose:1988}. The explicit growth rate of Proposition~\operatorname{Re}f{temper.prop} was proven in Borthwick-Crompton \cite[Prop.~3.1]{BC:2014}. With these ingredients in place, the strategy for the proof of the Poisson formula is essentially to compute the Fourier transform of $\sigma'$. \begin{proof}[Proof of Theorem~\operatorname{Re}f{poisson.thm}] Let us first show that the right-hand side of \eqref{poisson.formula} defines a distribution. Indeed, if we exclude the finite number of terms with $\operatorname{Re} \zeta> \tfrac{n}2$, which have exponential growth, the remaining sum gives a tempered distribution. To see this, consider a test function $\psi \in \mathcal{S}(\mathbb{R})$. Repeated integration by parts can be used to estimate, for $\operatorname{Re} \zeta> \tfrac{n}2$, \[ \abs*{\int_{-\infty}^\infty t^{n+1} e^{(\zeta-\frac{n}2) t} \psi(t) \>dt} \le \frac{C}{(1+\abs{\zeta-\tfrac{n}2})^{n+2}} \sum_{k=0}^{n+1} \sup_{t\in \mathbb{R}} \,\abs[\Big]{\brak{t}^{n+3} \psi^{(k)}(t)}. \] It then follows from the polynomial bound \eqref{NV.bound} that the sum \[ t^{n+1} \sum_{\operatorname{Re} \zeta < \frac{n}2} e^{(\zeta-\frac{n}2)\abs{t}} \] defines a tempered distribution on $\mathbb{R}$. The right-hand side of \eqref{poisson.formula} is thus well-defined as a distribution, since there are only finitely many terms with $\operatorname{Re}\zeta \ge \tfrac{n}2$. Let $\Theta_{\rm sc}$ denote the tempered distribution defined by \begin{equation}\label{vtheta.def} (\Theta_{\rm sc}, \psi) := \frac12 \int_{-\infty}^\infty \sigma'(\xi) \hat\psi(\xi) d\xi. \end{equation} for $\psi \in \mathcal{S}(\mathbb{R})$. This distribution accounts for the contributions to the Birman-Krein formula \eqref{theta.bk} from the continuous spectrum. 
The sum over the discrete spectrum can be rewritten as a sum over the resonances with $\operatorname{Re} s > n/2$, using the fact that \[ \cos\paren*{t \sqrt{\lambda - n^2/4}} = \cosh( t(\zeta - \tfrac{n}2)) \] for $\zeta \in (\tfrac{n}2, \infty)$ and $\lambda = \zeta(n-\zeta)$. The Birman-Krein formula then becomes \begin{equation}\label{Theta.vart} \Theta_V(t) = \Theta_{\rm sc}(t) + \sum_{\operatorname{Re} \zeta > \frac{n}2} \cosh( t(\zeta - \tfrac{n}2)) + \frac12 m_V(\tfrac{n}2). \end{equation} Since $\Theta_{\rm sc}$ is tempered, it suffices to evaluate \eqref{vtheta.def} under the assumption that $\hat\psi \in C^\infty_0(\mathbb{R})$. From Proposition~\operatorname{Re}f{tau.factor.prop} we calculate \[ \frac{\tau'}{\tau}(s) = q'(s) - \frac{H_V'}{H_V}(n-s) - \frac{H_V'}{H_V}(s) + \frac{H_0'}{H_0}(s) + \frac{H_0'}{H_0}(n-s). \] The Hadamard product derivatives are given by \[ \frac{H_V'}{H_V}(s) = \frac{m_V(0)}{s} + \sum_{\zeta \in \mathcal{R}_V\backslash\set{0}} \sqbrak*{\frac{1}{s-\zeta} + \frac{1}{\zeta} + \dots + \frac{s^n}{\zeta^{n+1}}} \] Hence we can write \[ \frac{H_V'}{H_V}(n-s) + \frac{H_V'}{H_V}(s) = \sum_{\zeta \in \mathcal{R}_V} \sqbrak*{\frac{n-2\zeta}{(n-s-\zeta)(s-\zeta)} + p_\zeta(s)}, \] where $p_\zeta(s)$ is a polynomial of degree $n$ for $\zeta \ne 0$, and $p_0(s) := 0$. Switching to a $\xi$ derivative for $\sigma$ gives \[ \sigma'(\xi) = -\frac{1}{2\pi} \frac{\tau'}{\tau}(\nh+i\xi), \] which evaluates to \begin{equation}\label{taup.sum} \begin{split} \sigma'(\xi) &= -\frac{1}{2\pi}q'(\tfrac{n}2 + i\xi) + \frac{1}{2\pi} \sum_{\zeta \in \mathcal{R}_V} \sqbrak*{\frac{n-2\zeta}{\xi^2+ (\zeta-\frac{n}2)^2} + p_\zeta(\nh+i\xi)} \\ &\qquad -\frac{1}{2\pi} \sum_{\zeta \in \mathcal{R}_0} \sqbrak*{\frac{n-2\zeta}{\xi^2+ (\zeta-\frac{n}2)^2} + p_\zeta(\nh+i\xi)}, \end{split} \end{equation} where $p_\zeta$ is a polynomial of degree at most $n$. The convergence is uniform on compact intervals. Note that there is no pole corresponding to the possible resonance at $\zeta = \tfrac{n}2$, because a zero at this point would cancel out of $H_V(s)/H_V(n-s)$. Assuming that $\hat\psi$ is compactly supported, the contributions of \eqref{taup.sum} to $(t^{n+1} \Theta_{\rm sc}, \psi)$ can be evaluated term by term. Under the Fourier transform, the factor $t^{n+1}$ becomes $(-i\partial_\xi)^{n+1}$, which knocks out all of the polynomial terms. Hence, after integrating by parts, \[ \begin{split} (t^{n+1} \Theta_{\rm sc}, \psi) &= \frac{1}{4\pi} \sum_{\zeta \in \mathcal{R}_V} \int_{-\infty}^\infty \frac{n-2\zeta}{\xi^2+ (\zeta-\frac{n}2)^2} (i\partial_\xi)^{n+1} \hat\psi(\xi)\>d\xi \\ &\qquad - \frac{1}{4\pi} \sum_{\zeta \in \mathcal{R}_0} \int_{-\infty}^\infty \frac{n-2\zeta}{\xi^2+ (\zeta-\frac{n}2)^2} (i\partial_\xi)^{n+1} \hat\psi(\xi)\>d\xi. \end{split} \] By a straightforward contour integration, \[ \int_{-\infty}^\infty e^{-i \xi t} \frac{n-2\zeta}{\xi^2+ (\zeta-\frac{n}2)^2}\>d\xi = \begin{dcases} -2\pi e^{-(\zeta-\frac{n}2)\abs{t}}, & \operatorname{Re} \zeta > \tfrac{n}2, \\ 2\pi e^{(\zeta-\frac{n}2)\abs{t}}, & \operatorname{Re} \zeta < \tfrac{n}2. 
\end{dcases} \] Using this calculation in the formula for $(t^{n+1} \Theta_{\rm sc}, \psi)$ gives \begin{equation}\label{vtheta.reson} \begin{split} (t^{n+1} \Theta_{\rm sc}, \psi) &= \frac12 \int_{-\infty}^\infty t^{n+1} \Biggl(\sum_{\zeta\in \mathcal{R}_V: \operatorname{Re} \zeta < \frac{n}2} e^{(\zeta-\frac{n}2)\abs{t}} \\ &\qquad-\sum_{\zeta\in \mathcal{R}_V: \operatorname{Re} \zeta > \frac{n}2} e^{-(\zeta-\frac{n}2)\abs{t}} - \sum_{\zeta\in \mathcal{R}_0} e^{(\zeta-\frac{n}2)\abs{t}} \Biggr) \psi(t)\>dt. \end{split} \end{equation} This calculation contains no contribution from a resonance at $\zeta = \nh$, because a zero at this point cancels out of the formula for $\tau(s)$. To remove the restriction of compact support for $\hat\psi$, we note that the right-hand side of \eqref{vtheta.reson} defines a tempered distribution by the remarks at the beginning of the proof. Since $\Theta_{\rm sc}$ is also tempered, and $C^\infty_0(\mathbb{R})$ is dense in $\mathcal{S}(\mathbb{R})$, it follows that \eqref{vtheta.reson} holds for all $\psi \in \mathcal{S}(\mathbb{R})$. Combining this computation of $t^{n+1}\Theta_{\rm sc}$ with the formula \eqref{Theta.vart} now yields the formula \[ t^{n+1} \Theta_V = \frac12 t^{n+1} \sqbrak[\Bigg]{ \sum_{\zeta\in \mathcal{R}_V} e^{(\zeta-\frac{n}2)\abs{t}} - \sum_{\zeta\in \mathcal{R}_0} e^{(\zeta-\frac{n}2)\abs{t}}}. \] Note that the constant term $\tfrac12 m_V(\nh)$ from \eqref{Theta.vart} is now incorporated into the sum over $\mathcal{R}_V$. This completes the proof for $n+1$ odd, because $\mathcal{R}_0$ is empty. If $n+1$ is even, then $\mathcal{R}_0$ is equal to $-\mathbb{N}_0$ as a set, with multiplicities given by the dimensions of spaces of spherical harmonics of degree $k$, \[ m_0(-k) = (2k+n)\frac{(k+1)\dots (n+k-1)}{n!}. \] The resulting sum over $\mathcal{R}_0$ was computed in Guillarmou-Naud \cite[Lemma 2.4]{GN:2006}, \[ \frac12 \sum_{k=0}^\infty m_0(-k) e^{-(k+\frac{n}2)\abs{t}} = \frac{\cosh t/2}{(2 \sinh t/2)^{n+1}}. \] \end{proof} \section{Wave trace expansion}\label{wave.exp.sec} In this section, we compute the expansion at $t = 0$ of the relative wave trace distribution $\Theta_V$, as defined in \eqref{ThetaV.def}, and determine the first two wave invariants explicitly. Although the existence of the wave-trace expansion is considered well known, we are not aware of any direct proof for Schr\"odinger operators in the literature. For the odd-dimensional Euclidean case, Melrose \cite[\S4.1]{Melrose:1995} is generally cited, but this source does not include a proof. Because the hyperbolic setting leads to differences from the familiar Euclidean formulas, we will include the argument here. To set up the expansion formula, we recall that $\abs{t}^\beta$ is well-defined as a meromorphic family of distributions on $\mathbb{R}$, with poles at negative odd integers. The residues at these poles are given by delta distributions. Dividing by $\Gamma(\tfrac{\beta+1}2)$ cancels the poles and defines a holomorphic family, \begin{equation}\label{vartheta.def} \vartheta^\beta(t) := \frac{\abs{t}^\beta}{\Gamma(\tfrac{\beta+1}2)}, \end{equation} where \[ \vartheta^{-1-2j}(t) = (-1)^{j} \frac{j!}{(2j)!} \partialta^{(2j)}(t), \] for $j\in \mathbb{N}_0$ (see, e.g., Kanwal \cite[\S4.4, eq.~(52)]{Kanwal}). \begin{theorem}\label{wave.exp.thm} Let $V\in C_0^\infty(\bbH^{n+1})$ with $n \geq 1$. 
For each integer $N > [(n+1)/2]$, there exist constants $a_k(V)$ (the wave invariants) such that \beq \Theta_V(t) = \sum_{k = 1}^N a_k(V) \vartheta^{-n+2k-1}(t) + F_N(t), \eeq with $F_N \in C^{2N-n - 1}(\mbr)$ and $F_N(t) = O(\abs{t}^{2N-n})$ as $t \to 0$. \end{theorem} The proof is adapted from B\'erard \cite{Be:1977} and relies on the Hadamard-Riesz \cite{Hadamard:1953, Riesz:1949} construction of a parametrix for the wave kernel. For $V\in C_0^\infty(\bbH^{n+1})$, let \[ P_V := \lap + V - \tfrac{n^2}{4}. \] We denote by $e_V$ the fundamental solution of the Cauchy problem for the wave equation, \beqq\label{eq-cauchy} \begin{gathered} (\p_t^2 + P_V ) e_V(t; z, w) = 0, \\ e_V(0; z,w) = \partialta(z - w), \\ \p_t e_V(0; z, w) = 0, \end{gathered} \eeqq for $t \in \mbr$ and $z,w\in \bbH^{n+1}$. In other words, $e_V(t;\cdot,\cdot)$ is the integral kernel of the wave operator $\cos (t\sqrt{P_V})$. For $\alpha \in \mathbb{C}$, we define the holomorphic family of distributions \[ \chi^\alpha_+ := \frac{x_+^\alpha}{\Gamma(\alpha+1)}, \] using the notation of H\"ormander \cite[\S3.2]{Hormander:I}. This family satisfies the derivative identity, \[ \frac{d}{dx} \chi^\alpha_+ = \chi^{\alpha-1}_+. \] Since $\chi^0_+ = x_+$, it follows that $\chi^\alpha_+$ is a point distribution at negative integers, \[ \chi_+^{-m} = \partialta^{(m-1)}(x). \] For $z,w \in \bbH^{n+1}$ we set $r := d(z,w)$ and denote by $\chi_+^{\alpha}(t^2 - r^2)$ the pullback of $\chi^\alpha_+$ by the smooth map $\bbH^{n+1}\times\bbH^{n+1} \times \mathbb{R} \to \mathbb{R}$ given by $(z,w,t) \mapsto t^2-d(z,w)^2$. Since $\chi_+^{\alpha}$ is classically differentiable for $\operatorname{Re} \alpha > -1$, derivatives of $\chi_+^{\alpha}(t^2 - r^2)$ can be computed directly in this region, and then extended by analytic continuation. Hence the formulas, \begin{equation}\label{chia.deriv} \begin{split} \partial_t \sqbrak*{ \chi_+^{\alpha}(t^2 - r^2)} &= 2t \chi_+^{\alpha-1}(t^2 - r^2), \\ \partial_r \sqbrak*{\chi_+^{\alpha}(t^2 - r^2)} &= -2r \chi_+^{\alpha-1}(t^2 - r^2), \end{split} \end{equation} are valid for all $\alpha$. Following B\'erard \cite[\S D]{Be:1977}, we seek to construct the parametrix as a sum of the distributions $\abs{t} \chi_+^{\alpha}(t^2 - r^2)$ with increasing values of $\alpha$. The starting point for the expansion is dictated by the initial conditions in \eqref{eq-cauchy}, so we need to understand the distributional limit of $\abs{t} \chi_+^{\alpha}(t^2 - r^2)$ as $t \to 0$. \begin{lemma}\label{tzero.lemma} For $\psi \in C^\infty_0(\bbH^{n+1})$, \begin{equation}\label{tchi.lim} \lim_{t\to 0} \paren[\Big]{\abs{t} \chi_+^{\alpha}\paren*{t^2 - d(z,\cdot)^2}, \psi} = \begin{cases} \pi^{\frac{n}2} \psi(z), & \alpha = -\nh -1, \\ 0, & \alpha > -\nh - 1, \end{cases} \end{equation} and \begin{equation}\label{dtchi.lim} \lim_{t\to 0} \paren[\Big]{\partial_t \sqbrak*{\abs{t} \chi_+^{\alpha}\paren*{t^2 - d(z,\cdot)^2}}, \psi} = 0 \end{equation} for $\alpha = -\nh - 1$ and $\alpha \ge -(n+1)/2$. \end{lemma} \begin{proof} The distribution is even in the variable $t$, so it suffices to consider $t>0$. The first formula of \eqref{chia.deriv} gives, for $k \in \mathbb{N}$, \[ \chi_+^{\alpha}(t^2 - r^2) = \paren*{\frac{1}{2t} \partial_t}^{\!k} \chi_+^{\alpha+k}(t^2 - r^2), \] which can be used to shift the computation to the integrable range. 
For $t>0$ and $\operatorname{Re} \alpha + k > -1$, we have \[ \paren[\Big]{\chi_+^{\alpha}\paren*{t^2 - d(z,\cdot)^2}, \psi} = \frac{1}{\Gamma(\alpha+k+1)} \paren*{\frac{1}{2t} \partial_t}^{\!k} \int_0^{t} (t^2 - r^2)^{\alpha+k} \tilde{\psi}(r) r^n\>dr, \] where in geodesic polar coordinates $(r,\omega)$ centered at $z$, \[ \tilde{\psi}(r) := \frac{\sinh^n r}{r^n} \int_{\mathbb{S}^n} \psi(r,\omega)\>d\omega. \] Rescaling $r \to tr$ in the integral gives \begin{equation}\label{chia.psi1} \begin{split} &\paren[\Big]{\chi_+^{\alpha}\paren*{t^2 - d(z,\cdot)^2}, \psi} \\ &\qquad = \frac{1}{\Gamma(\alpha+k+1)} \paren*{\frac{1}{2t} \partial_t}^{\!k} \sqbrak*{ t^{2(\alpha+k)+n+1} \int_0^1 (1 - r^2)^{\alpha+k} \tilde{\psi}(tr) r^n\>dr}. \end{split} \end{equation} Since $\tilde{\psi}$ is the spherical average of $\psi$ centered at $z$, and the linear term in a Taylor approximation of $\psi$ at $z$ cancels out in this average, \[ \begin{split} \tilde{\psi}(r) &= \operatorname{Vol}(\mathbb{S}^n) \psi(z) + O(r^2)\\ &= \frac{2\pi^{\frac{n+1}2}}{\Gamma(\tfrac{n+1}2)} \psi(z) + O(r^2). \end{split} \] For the same reason, $\partial_r \tilde{\psi}(r) = O(r)$. Higher radial derivatives are bounded on $\set{r>0}$. Hence, in the leading term from \eqref{chia.psi1}, all of the $t$ derivatives are applied to the factor preceding the integral, which gives \[ \paren*{\frac{1}{2t} \partial_t}^{\!k} \sqbrak*{t^{2(\alpha+k)+n+1}} = \frac{\Gamma(\alpha + k + \frac{n+3}2)}{\Gamma(\alpha + \frac{n+3}2)} t^{2\alpha+n+1}. \] The leading contribution from the $r$ integration can be calculated from Euler's beta function formula \cite[eq. (5.12.1)]{NIST}, \[ \int_0^1 (1 - r^2)^{\alpha+k} r^n\>dr = \frac{\Gamma(\alpha + k+ 1)\Gamma(\frac{n+1}2)}{2\Gamma(\alpha + k + \frac{n+3}2)}. \] Combining these results in \eqref{chia.psi1} gives, for $t>0$, \begin{equation}\label{chia.psi2} \paren[\Big]{\chi_+^{\alpha}\paren*{t^2 - d(z,\cdot)^2}, \psi} = \frac{\pi^{\frac{n+1}2}}{\Gamma(\alpha + \frac{n+3}2)} t^{2\alpha+n+1} \psi(z) + O(t^{2\alpha+n+3}). \end{equation} This proves \eqref{tchi.lim}, once the extra factor of $t$ has been inserted. To establish \eqref{dtchi.lim}, we note that \[ \partial_t \sqbrak*{t \chi_+^{\alpha}\paren*{t^2 - r^2}} = \chi_+^{\alpha}\paren*{t^2 - r^2} + 2t^2 \chi_+^{\alpha-1}\paren*{t^2 - r^2}, \] by \eqref{chia.deriv}. We thus obtain from \eqref{chia.psi2}, \[ \paren[\Big]{\partial_t \sqbrak*{t\chi_+^{\alpha}\paren*{t^2 - d(z,\cdot)^2}}, \psi} = \pi^{\frac{n+1}2} \frac{2\alpha + n +2}{\Gamma(\alpha + \frac{n+3}2)} t^{2\alpha+n+1} \psi(z) + O(t^{2\alpha+n+3}), \] and \eqref{dtchi.lim} follows. \end{proof} We take the following ansatz for the parametrix: \begin{equation}\label{evn.def} e_{V,N}(t,z,w) := \pi^{-\frac{n}2} \sum_{k=0}^N u_{V,k}(z,w) \abs{t} \chi_+^{-\frac{n}2 + k-1}(t^2 - r^2), \end{equation} where $u_{V, 0}(z,z) = 1$ and higher coefficients $u_{V,k}$ are to be chosen so that in the expression for $(\partial_t^2 + P_V)e_{V,N}(t,z,\cdot)$, the coefficients of $\abs{t} \chi_+^{-\frac{n}2 + k-1}(t^2 - r^2)$ cancel for $k \le N$. Lemma~\operatorname{Re}f{tzero.lemma} implies that the initial conditions are satisfied, \begin{equation}\label{initialN} \begin{gathered} e_{V,N}(0; z,w) = \partialta(z - w), \\ \p_t e_{V,N}(0; z, w) = 0. \end{gathered} \end{equation} To work out the equations for the coefficients, will compute the action of $(\partial_t^2 + P_V)$ on each term. 
As above, it suffices to compute for $t>0$ by evenness, and we will use the temporary abbreviations \[ u_{V,k}(z,\cdot) \leadsto u_k, \qquad \chi_+^{\alpha}(t^2 - r^2) \to \chi_+^{\alpha}, \] to simplify the formulas. The time derivatives are calculated from \eqref{chia.deriv}, \[ \partial_t^2 \sqbrak*{t\chi_+^\alpha} = 6t \chi_+^{\alpha-1} + 4t^3 \chi_+^{\alpha-2}. \] Using the geodesic polar coordinate form of the Laplacian, \begin{equation}\label{lap.polar} \lap = -\partial_r^2 - n \coth r\, \partial_r + \sinh^{-2} r\, \lap_{\mathbb{S}^n}, \end{equation} we also compute that \[ P_V \chi_+^\alpha = (2 + 2nr \coth r)\chi_+^{\alpha-1} - 4r^2 \chi_+^{\alpha-2}. \] Putting these together gives \[ \begin{split} (\partial_t^2 + P_V) \sqbrak*{u_k t\chi_+^\alpha} &= (P_Vu_k) t \chi_+^\alpha + 4r (\partial_r u_k) t \chi_+^{\alpha-1} + \paren[\big]{8 + 2nr \coth r} u_kt\chi_+^{\alpha-1} \\ &\qquad + 4u_kt (t^2-r^2) \chi_+^{\alpha-2}. \end{split} \] The final term simplifies, \[ (t^2-r^2) \chi_+^{\alpha-2} = (\alpha-1) \chi_+^{\alpha-1}, \] which reduces the formula to \[ (\partial_t^2 + P_V) \sqbrak*{u_k t\chi_+^\alpha} = (P_Vu_k) t \chi_+^\alpha + \sqbrak[\Big]{4r \partial_r u_k + \paren*{4(\alpha+1) + 2nr \coth r} u_k} t\chi_+^{\alpha-1}. \] After setting $\alpha = -\nh+k-1$ as in \eqref{evn.def}, we obtain, for $t>0$, \begin{equation}\label{evn.terms} \begin{split} (\partial_t^2 + P_V) \sqbrak*{u_k t\chi_+^{-\nh+k-1}} &= \sqbrak[\Big]{4r \partial_r u_k + \paren*{2n(r \coth r-1) + 4k} u_k} t\chi_+^{-\nh+k} \\ &\qquad + (P_Vu_k) t \chi_+^{-\nh+k-1}. \end{split} \end{equation} The calculation \eqref{evn.terms} shows the cancelling of terms in $(\partial_t^2 + P_V)e_{V,N}(t,z,\cdot)$ is ensured by the transport equations: \begin{equation}\label{transport.eq} \begin{split} \sqbrak[\big]{4r \partial_r + 2n (r \coth r - 1)} u_{V,0}(z,\cdot) &= 0. \\ \sqbrak[\big]{4r \partial_r + 2n(r \coth r - 1) + 4k} u_{V,k}(z,\cdot) &= - P_V u_{V,k-1}(z,\cdot). \\ \end{split} \end{equation} To solve \eqref{transport.eq} we define \begin{equation}\label{phi.def} \phi(r) := \paren*{\frac{\sinh r}{r}}^{\!-\frac{n}2}, \end{equation} and then set \begin{equation}\label{transport.soln} \begin{split} u_{V,0}(z,w) &= \phi(r), \\ u_{V,k+1}(z,w) &= -\frac14 \phi(r) \int_0^1 \frac{s^k}{\phi(sr)} P_V u_{V,k}(z, \gamma(s)) ds, \end{split} \end{equation} where $\gamma$ is the geodesic from $z$ to $w$, parametrized by $s \in [0,1]$, and $P_V$ acts on the second variable of $u_{V,k}$. The coefficients $u_{V,k}$ are smooth for all $k$. \begin{proposition}\label{wave.parametrix.prop} With $e_{V,N}$ defined as above, set \[ q_{V,N}(t,z,w) := e_V(t,z,w) - e_{V,N}(t,z,w). \] For $m \in \mathbb{N}$, we have $q_{V,N} \in C^m$ for $N$ sufficiently large and \[ \abs{q_{V,N}(t,z,w)} = O(\abs{t}^{-n+N-1}) \] as $t \to 0$, uniformly in $z,w$. \end{proposition} \begin{proof} From \eqref{evn.def}, \eqref{evn.terms}, and the transport equations \eqref{transport.eq}, we observe that \[ (\partial_t^2 + P_V) e_{V,N}(t,z,\cdot) = f_N(t,z,\cdot), \] where \begin{equation}\label{errorN} f_N(t,z,\cdot) = \pi^{-\frac{n}2} P_V u_{V,N}(z,\cdot) \abs{t}\, \chi_+^{-\frac{n}2 + N-1}(t^2 - r^2), \end{equation} with $P_V$ acting on the second variable. Since $e_{V,N}$ satisfies the same initial conditions \eqref{initialN} as $e_V$, this gives \[ \begin{gathered} (\p_t^2 + P_V ) q_{V,N}(t; z, w) = f_N(t,z,w), \\ q_{V,N}(0; z,w) = 0, \\ \p_t q_{V,N}(0; z, w) = 0. 
\end{gathered} \] The coefficients $u_{V,k}$ are smooth, by \eqref{transport.soln}, and $\abs{t} \chi_+^\alpha(t^2 - r^2)$ is $C^l$ for $\alpha > l+1$. Hence, by \eqref{errorN}, $f_N \in C^l$ for $l < N - \nh -2$ and has support in $\set{r\le t}$. It follows that $q_V \in C^m$ for $N$ sufficiently large. For any $b>0$, the Sobolev norms of $f_{N}(t, z,\cdot)$ can be estimated by $O(\abs{t}^b)$ for $N$ sufficiently large. These estimates are uniform in $z$, since $\phi$ depends only on $r$ and $V$ has compact support. Standard regularity estimates for hyperbolic PDE (see for example \cite[Ch. 47]{Treves}) then show that for $N_1$ sufficiently large, \[ \abs{q_{V,N_1}(t,z,w)} = O(\abs{t}^{2N - n - 1}), \] uniformly in $z,w$. The estimate of $q_{V,N}$ as $t \to 0$ is then derived from \[ q_{V,N}(t,z,w) = \pi^{-\frac{n}2} \sum_{k=N+1}^{N_1} u_{V,k}(z,w) \abs{t} \chi_+^{-\frac{n}2 + k-1}(t^2 - r^2) + q_{V,N_1}(t,z,w). \] \end{proof} With this estimate on the parametrix, we are now prepared to establish the wave trace expansion. \begin{proof}[Proof of Theorem \operatorname{Re}f{wave.exp.thm}] For $\psi \in C^\infty_0(\mathbb{R})$, we write the integral kernel of the trace-class operator \[ \int_{-\infty}^\infty \sqbrak[\Big]{\cos(t\sqrt{P_V}) - \cos(t\sqrt{P_0})} \psi(t) dt \] as \[ \int_{-\infty}^\infty \sqbrak[\big]{e_V(t,z,w) - e_0(t,z,w)} \psi(t) dt, \] which is smooth and compactly supported. Taking the trace gives \begin{equation}\label{Theta.ev} (\Theta_V, \psi) = \int_{\bbH^{n+1}}\paren*{ \int_{-\infty}^\infty \sqbrak[\big]{e_V(t,z,w) - e_0(t,z,w)} \psi(t) dt }\bigg|_{z=w} dg(z) \end{equation} The wave kernel parametrices can be substituted into \eqref{Theta.ev} and the contributions from the terms $k=0, \dots, N$ evaluated separately. For $\operatorname{Re} \alpha$ sufficiently large we can verify directly that \[ \abs{t}\, \chi_+^\alpha(t^2 - r^2)\Big|_{r=0} = \vartheta^{2\alpha+1}(t), \] and this formula extends to all $\alpha \in \mathbb{C}$ by analytic continuation. It follows from \eqref{Theta.ev} and Proposition~\operatorname{Re}f{wave.parametrix.prop} that \[ \Theta_V(t) = \pi^{-\frac{n}2} \sum_{k=1}^N \paren*{\int_{\bbH^{n+1}} \sqbrak[\big]{u_{V,k}(z,z) - u_{0,k}(z,z)} dg(z)} \vartheta^{-n + 2k-1}(t) + F_N(t), \] where \[ F_N(t) := \int_{\bbH^{n+1}} \sqbrak[\big]{q_{V,N}(t,z,z) - q_{0,N}(t,z,z)} dg(z). \] \end{proof} The proof of Theorem~\operatorname{Re}f{wave.exp.thm} yields a formula for the wave invariants, \begin{equation}\label{akv.explicit} a_k(V) = \pi^{-\frac{n}2} \int_{\bbH^{n+1}} \sqbrak[\big]{u_{V,k}(z,z) - u_{0,k}(z,z)} dg(z). \end{equation} This formula can be simplified somewhat using the transport equations. By \eqref{transport.eq}, we have \begin{equation}\label{transport.iter} (L+4)\cdots (L + 4k) u_{V,k}(z,\cdot) = (-1)^k P_V u_{V,0}(z,\cdot), \end{equation} where, in geodesic polar coordinates centered at $z$, $L$ is the differential operator, \[ L := 4r \partial_r + 2n(r \coth r - 1). \] Note that for any smooth function $f$, $Lf$ vanishes at $r=0$. Therefore, evaluating \eqref{transport.iter} at the point $z$, yields \begin{equation}\label{uk.zero} u_{V,k}(z,z) = \frac{(-1)^k}{4^kk!} P_V^k u_{V,0}(z,z). \end{equation} where $u_{V,0}(z,w) = \phi(d(z,w))$ and $P_V^k$ acts on the second variable. In principle, \eqref{uk.zero} can be used to derive explicit formulas for all of the wave invariants. The first two are relatively simple. 
\begin{proposition}\label{d12.prop} For $V \in C^\infty_0(\bbH^{n+1}, \mathbb{R})$, \[ a_1(V) = -\frac14 \pi^{-\frac{n}2} \int_{\bbH^{n+1}} V(z)\>dg(z) \] and \[ a_2(V) = \frac1{32} \pi^{-\frac{n}2} \int_{\bbH^{n+1}} \sqbrak*{ \frac{2n-n^2}{6} V(z) + V(z)^2}\>dg(z). \] \end{proposition} \begin{proof} Since $u_{V,0}(z,z) = 1$ and $P_V - P_0 = V$, we see immediately from \eqref{uk.zero} that \[ u_{V,1}(z,z) - u_{0,1}(z,z) = -\frac14 V(z). \] This gives the formula for $a_1(V)$. For the second invariant, we use \eqref{uk.zero} to write \[ \begin{split} u_{V,2}(z,z) - u_{0,2}(z,z) &= \frac{1}{32} \paren*{P_V^2\phi - P_0^2\phi}\Big|_{r=0} \\ &= \frac{1}{32} \sqbrak*{2V(z)P_0\phi(0) + \lap V(z) + V(z)^2}, \end{split} \] where $r$ is the radius for geodesic polar coordinates centered at $z$, and we have used the facts that $\phi(0)=1$ and $\partial_r\phi(0)=0$. From \eqref{lap.polar} and \eqref{phi.def} we compute that \[ \begin{split} P_0 \phi(0) &= \frac{n(n+1)}6 - \frac{n^2}4 \\ &= \frac{2n-n^2}{12}. \end{split} \] This gives \[ u_{V,2}(z,z) - u_{0,2}(z,z) = \frac{1}{32} \sqbrak*{ \frac{2n-n^2}{6} V(z) + V(z)^2 + \lap V(z)}. \] When substituted into the formula for $a_2(V)$, the term $\lap V(z)$ integrates to zero, because $V$ has compact support. \end{proof} \section{Heat trace}\label{heat.sec} The relative heat trace associated to a potential $V \in C^\infty_0(\bbH^{n+1}, \mathbb{R})$ is defined by applying the distribution \eqref{rel.trace} to the function $f(x) = \chi(x)e^{-t(x-n^2/4)}$ for $t > 0$, where $\chi$ is a smooth cutoff which equals $1$ on the spectrum of $P_V := \lap - \tfrac{n^2}4 + V$ and vanishes on $(-\infty, c]$ for some $c<0$. The Birman-Krein formula (Theorem~\operatorname{Re}f{BK.thm}) gives \begin{equation}\label{heat.bk} \begin{split} \operatorname{tr} \sqbrak*{e^{-t P_V} - e^{- tP_0}} &= \int_{0}^\infty \sigma'(\xi) e^{-\xi^2 t} d\xi \\ & \qquad + \sum_{j=1}^d e^{t(\frac{n^2}4 - \lambda_j)} + \tfrac12 m_V(\tfrac{n}2). \end{split} \end{equation} We have no analog of the Poisson formula of Theorem~\operatorname{Re}f{poisson.thm} for the heat trace. This is because the values of $(\zeta-\tfrac{n}2)^2$ are spread over the full complex plane, so there is no apparent regularization of the heat trace as a sum over the resonance set. The asymptotic expansion of the heat trace at $t=0$ can be derived by a variety of methods. The simplest route for us via the wave trace expansion. \begin{theorem}\label{heat.trace.thm} As $t \to 0$, the relative heat trace admits an asymptotic expansion \begin{equation}\label{heat.trace.PV} \operatorname{tr} \sqbrak*{e^{-t P_V} - e^{- tP_0}} \sim \pi^{-\frac12} \sum_{k=1}^\infty a_k(V) (4t)^{-\frac{n+1}2+k}, \end{equation} where $a_k(V)$ are the wave invariants from Theorem~\operatorname{Re}f{wave.exp.thm}. \end{theorem} \begin{proof} Since $\Theta_{\rm sc}(s)$ is tempered, the definition \eqref{vtheta.def} gives \[ \frac{1}{\sqrt{4\pi t}} \int_{-\infty}^\infty \Theta_{\rm sc}(s) e^{-s^2/4t} ds = \int_{0}^\infty \sigma'(\xi) e^{-\xi^2 t} d\xi, \] for $t>0$. It then follows from \eqref{Theta.vart} and \eqref{heat.bk} that \begin{equation}\label{htr.theta} \operatorname{tr} \sqbrak*{e^{-tP_V} - e^{-tP_0}} = \frac{1}{\sqrt{4\pi t}} \int_{-\infty}^\infty \Theta_V(s) e^{-s^2/4t} ds, \end{equation} even when $\Theta_V(s)$ is not tempered. 
For $\operatorname{Re} \beta$ sufficiently large, we compute \[ \frac{1}{\sqrt{4\pi t}} \int_{-\infty}^\infty \vartheta^\beta(s) e^{-s^2/4t} ds = \pi^{-\frac12} (4t)^{\frac{\beta}2}, \] and this formula extends to all $\beta\in \mathbb{C}$ by analytic continuation. Theorem~\operatorname{Re}f{wave.exp.thm} thus yields the expansion \[ \operatorname{tr} \sqbrak*{e^{-tP_V} - e^{-tP_0}} = \pi^{-\frac12} \sum_{k=1}^N a_k(V) (4t)^{-\frac{n+1}2+k} + \frac{1}{\sqrt{4\pi t}} \int_{-\infty}^\infty F_N(s)e^{-s^2/4t}\>ds, \] for $N > [(n+2)/2]$, where $F_N(s) = O(\abs{s}^{2N-n})$ as $s \to 0$. From \eqref{Theta.vart} we can also see that $F_N(s) = O(e^{n\abs{s}/2})$ as $\abs{s} \to \infty$. It follows that \[ \frac{1}{\sqrt{4\pi t}} \int_{-\infty}^\infty F_N(s)e^{-s^2/4t}\>ds = O(t^{-\frac{n}2 + N}). \] \end{proof} Note that coefficients in \eqref{heat.trace.PV} are not quite the usual heat invariants, because of the shift $-n^2/4$ in the definition of $P_V$. This shift gives an extra factor of $e^{-n^2t/4}$ in \eqref{heat.trace.PV}, so the traditional heat invariants could be computed as finite linear combinations of the $a_k(V)$. The behavior of the heat kernel as $t\to \infty$ is also of interest to us. According to \eqref{heat.bk}, this behavior is dominated by exponential terms corresponding to eigenvalues and a constant term from the possible resonance $s = n/2$. If these contributions are absent, then the heat kernel decays at a rate independent of the dimension. \begin{proposition}\label{hk.decay.prop} Suppose that for the potential $V \in C^\infty(\bbH^{n+1}, \mathbb{R})$, $P_V$ has no eigenvalues and no resonance at $n/2$. Then the following bound holds uniformly for $t\in (0,\infty)$ and $z,w$ in $\mathbb{H}$, \begin{equation}\label{heatV.bound} e^{-tP_V}(t;z,w) \asymp t^{-\frac{n+1}2} e^{-r^2/4t-nr/2}(1+r+t)^{\frac{n}2-1} (1+r), \end{equation} where $r = d(z,w)$ and $\asymp$ means that the ratio of the two sides is bounded above and below by positive constants. \end{proposition} In the case $V=0$, \eqref{heatV.bound} was proven in Davies-Mandouvalos \cite[Thm.~3.1]{DaviesMand:1988}. There is no factor $e^{-n^2t/4}$ in \eqref{heatV.bound} because of the shift $-n^2/4$ in the definition of $P_V$. These estimates were generalized in Chen-Hassell \cite[Thm.~5]{ChenHassell:2020} to asymptotically hyperbolic Cartan-Hadamard manifolds with no eigenvalues and no resonance at $s = n/2$, by methods that allow for the inclusion of a $C^\infty_0$ potential. The power $t^{-3/2}$, independent of dimension, corresponds to the vanishing of the spectral resolution $K_V(\xi;\cdot,\cdot)$ to order $\xi^2$ at $\xi =0$, under the assumption of no resonance at $n/2$. Proposition~\operatorname{Re}f{hk.decay.prop} implies a bound on the heat trace, by the argument from S\'a Barreto-Zworski \cite[Prop.~3.1]{SaZw}. \begin{corollary}\label{ht.decay.cor} For a potential $V$ satisfying the hypotheses of Proposition~\operatorname{Re}f{hk.decay.prop}, \[ \operatorname{tr} \sqbrak*{e^{-t P_V} - e^{- tP_0}} = O(t^{-1/2}) \] as $t \to \infty$. \end{corollary} \begin{proof} Duhamel's principle gives the trace estimate \begin{equation}\label{duhamel} \begin{split} &\abs*{\operatorname{tr} \sqbrak*{e^{-t P_V} - e^{- tP_0}}} \\ &\quad\le \int_0^t \int_{\bbH^{n+1}} \int_{\bbH^{n+1}} e^{-(t-s) P_0}(w,z) e^{-sP_V}(z,w) \abs{V(z)}\>dg(z) dg(w) ds. 
\end{split} \end{equation} The uniform bound \eqref{heatV.bound} implies in particular that \[ e^{-sP_V}(z,w) \le C_Ve^{-sP_0}(z,w), \] and so we can use the semigroup property to estimate \[ \begin{split} \int_{\bbH^{n+1}}e^{-(t-s) P_0}(w,z) e^{-sP_V}(z,w) \> dg(w) &\le C_Ve^{-tP_0}(z,z) \\ &\le C_Vt^{-3/2} \end{split} \] as $t \to \infty$, uniformly in $z$. Applying this to \eqref{duhamel} gives \[ \abs*{\operatorname{tr} \sqbrak*{e^{-t P_V} - e^{- tP_0}}} \le C_V t^{-1/2} \norm{V}_{L^1} . \] \end{proof} \section{Scattering phase asymptotics}\label{scphase.sec} The Birman-Krein formula allows us to connect the wave-trace invariants to corresponding asymptotic expansions for the scattering phase and its derivative. For Schr\"odinger operators in the odd-dimensional Euclidean setting, the asymptotic expansion of the scattering phase was established by Colin de Verdi\`ere \cite{Colin:1981}, Guillop\'e \cite{Guillope:1981}, and Popov \cite{Popov:1982}, via formulas relating the scattering determinant to regularized determinants of the cutoff resolvent. An argument based on expansion of the scattering matrix is given in Yafaev \cite[Thm.~9.2.12]{Yafaev:2010}, and a semiclassical version in Dyatlov-Zworski \cite[Thm.~3.62]{DZbook}. For hyperbolic space we have the following version of these results: \begin{theorem}\label{sigmap.thm} For $V \in C^\infty_0(\bbH^{n+1}, \mathbb{R})$ the function $\sigma'(\xi)$ admits a full asymptotic expansion as $\xi \to +\infty$. If the dimension $n+1$ is odd, then \[ \sigma'(\xi) \sim \sum_{k=1}^{\infty} c_k(V) \xi^{n-2k}. \] For $n+1$ even, the expansion is truncated, \[ \sigma'(\xi) = \sum_{k=1}^{[n/2]}c_k(V) \xi^{n-2k} + O(\xi^{-\infty}). \] The coefficients are related to the wave invariants by \[ c_k(V) = \frac{2^{-n+2k}}{\pi^{\frac12} \Gamma(\frac{n+1}{2}-k)} a_k(V). \] \end{theorem} Before proving the theorem, we start by establishing the existence of the scattering phase expansion. The coefficients are relatively easy to calculate once this is known. \begin{proposition}\label{sigma.exp.prop} For $V \in C^\infty_0(\bbH^{n+1}, \mathbb{R})$ the function $\sigma'(\xi)$ admits an asymptotic expansion of the form \begin{equation}\label{sigmap.exp} \sigma'(\xi) \sim \sum_{j=0}^\infty b_j \xi^{n-j-1}, \end{equation} as $\xi \to \infty$. \end{proposition} \begin{proof} Of the approaches mentioned above, the ray expansion method from Yafaev \cite[\S8.4]{Yafaev:2010} is the most easily adapted to the hyperbolic setting. In our context, the idea is to expand $E_V(s;z,\omega')$ in powers of $s$, and then apply this expansion to the scattering phase. To develop the approximation formula, we first consider $z \in \mathbb{H}^{n+1}$ with $\omega' = \infty$. In standard hyperbolic coordinates $z = (x,y) \in \mathbb{R}^n \times \mathbb{R}_+$, \begin{equation}\label{lap.Hn} \lap = -y^2\partial_y^2 +(n-1)y\partial_y + y^2\lap_x, \end{equation} and the unperturbed generalized eigenfunction has the form (see, e.g., \cite[\S4]{Borthwick:2010}) \begin{equation}\label{E0.infty} E_0(s;z,\infty) = 2^{-2s-1} \pi^{-\frac12} \frac{\Gamma(s)}{\Gamma(s-\frac{n}2+1)} y^s. \end{equation} In geodesic coordinates, $y = e^{-r}$, so this is the analog of a Euclidean plane wave with frequency $\xi = \operatorname{Im} s$. Following the construction in \cite[\S8.1]{Yafaev:2010}, we define an approximate plane wave using the ansatz \begin{equation}\label{psiN.def} \psi_N(s;z) = \sum_{j=0}^N s^{-j} y^s w_j(z), \end{equation} with $w_0(z)=1$. 
From \eqref{lap.Hn}, we have \[ \sqbrak[\big]{\lap + V - s(n-s)} (y^s w_j) = y^s (\lap + V) w_j - 2sy^{s-1}\partial_y w_j. \] We can thus cancel coefficients up to order $s^N$ by imposing the transport equation \[ 2y \partial_y w_{j+1} = (\lap + V) w_j. \] The solutions are given recursively by \begin{equation}\label{bj.formula} w_{j+1}(z) := \frac12 \int_{-\infty}^0 (\lap + V)w_j(x,e^ty) dt \end{equation} for $j \ge 1$. With these coefficients, the function \eqref{psiN.def} satisfies \begin{equation}\label{bN.remainder} \sqbrak[\big]{\lap + V - s(n-s)}\psi_N(s;z) = s^{-N}y^s w_N(z). \end{equation} In \eqref{bj.formula}, the point $(x,e^t)$ can be interpreted geometrically as the translation of $z=(x,y)$ by distance $t$ along the vertical geodesic through $z$. Returning to the geodesic polar coordinates $z = (r,\omega) \in \mathbb{R}_+ \times \mathbb{S}^n$ used to define $E_V(s)$, we let $\phi_{z,\omega'}(t)$ denote the unique geodesic through $z$ with limit point $\omega' \in \mathbb{S}^n$ as $t \to 0$. Let $w_0(z,\omega') = 1$ and define $w_j(z,\omega')$ for $j\ge 1$ by \[ w_{j+1}(z,\omega') := \frac12 \int_{-\infty}^0 (\lap + V)w_j(\phi_{z,\omega'}(t)) dt. \] For the approximate Poisson kernel, \begin{equation}\label{EVN} E_{V,N}(s;z,\omega') := \sum_{j=0}^N s^{-j} w_j(z,\omega') E_0(s,z,\omega'), \end{equation} the calculation of \eqref{bN.remainder} shows that \[ \sqbrak[\big]{\lap + V - s(n-s)}E_{V,N}(s;z,\omega') := s^{-N} E_0(s,z,\omega') (\lap + V) w_N(z,\omega'). \] The coefficients of \eqref{EVN} have support properties analogous to the approximate plane waves in the Euclidean case. That is, for $j \ge 1$, $w_j(z,\omega')$ vanishes unless $z$ lies on a geodesic connecting a point in $\operatorname{supp} V$ to the limit point $\omega'$. One can thus repeat the argument from \cite[Thm.~8.4.3]{Yafaev:2010}, using the cutoff resolvent bound from Guillarmou \cite[Prop.~3.2]{Gui:2005c} in place of its Euclidean counterpart. The result is that \begin{equation}\label{EV.approx} E_V(s;z,\omega') = E_{V,N}(s;z,\omega') + q_n(s;z,\omega'), \end{equation} where, for $\operatorname{Re} s = \nh$, \[ \norm{q_n(s;\cdot,\omega')}_{L^2(B)} = O(s^{\frac{n}2-N}), \] with $B$ a ball in $\bbH^{n+1}$ containing $\operatorname{supp} V$. The shift in the power in the error estimate comes from the Gamma factors in the normalization of \eqref{E0.infty}. The same error estimate applies when \eqref{EV.approx} is differentiated with respect to $s$. The approximation \eqref{EV.approx} can be applied to the scattering phase through the formula \eqref{srel.ee}, which gives \[ \tau(s) = \det (1 + T(s)), \] where \[ T(s) := (2s-n) E_V(s)VE_0(n-s). \] By the definition of the scattering phase, and the fact that $1 + T(\nh+i\xi)$ is unitary for $\xi \in \mathbb{R}$, \begin{equation}\label{sigp.trace} \sigma'(\xi) = -\frac{1}{2\pi} \operatorname{tr} \sqbrak*{(1 + T(\nh+i\xi)^*)T'(\nh+i\xi)}. \end{equation} The kernels of $T(s)$ and $T'(s)$ are smooth, and \eqref{EV.approx} gives uniform asymptotic expansions of their kernels for $\operatorname{Re} s = \nh$, with leading term of order at most $\xi^{n-1}$. We can thus deduce the expansion of $\sigma'(\xi)$ from \eqref{sigp.trace}. \end{proof} Although the leading term in \eqref{sigmap.exp} matches the growth estimate of Proposition~\operatorname{Re}f{temper.prop}, this coefficient vanishes and the leading order is actually $\xi^{n-2}$. Computing coefficients through the construction of Proposition~\operatorname{Re}f{sigma.exp.prop} is rather cumbersome, however. 
There is a much easier method, by comparison to the heat trace expansion via the Birman-Krein formula. \begin{proof}[Proof of Theorem \operatorname{Re}f{sigmap.thm}] By a straightforward calculus argument (see \cite[Lemma~3.65]{DZbook}), the expansion \eqref{sigmap.exp} yields the corresponding expansion, \[ \begin{split} \int_{0}^\infty \sigma'(\xi) e^{-\xi^2 t} d\xi &\sim \frac12 \sum_{j=0}^{n-1} \Gamma(\tfrac{n-j}{2}) b_j \, t^{-\frac{n-j}2} - \frac{1}{2} \sum_{l=0}^{\infty} \frac{(-1)^l}{l!} b_{n+2l}\, t^l \log t \\ &\qquad + \frac12 \sum_{l=0}^{\infty} \Gamma(-l-\tfrac{1}{2}) b_{n+2l+1} t^{l + \frac12} + g(t), \end{split} \] as $t \to 0^+$, where $g \in C^\infty[0,\infty)$. The function $g$ is not determined by the coefficients $b_j$. On the other hand, by \eqref{heat.bk} and Theorem~\operatorname{Re}f{heat.trace.thm} we have \begin{equation}\label{heat.sigmap} \int_{0}^\infty \sigma'(\xi) e^{-\xi^2 t} d\xi \sim \pi^{-\frac12} \sum_{k=1}^{\infty} a_k(V) (4t)^{-\frac{n+1}2+k} + h(t). \end{equation} where $h \in C^\infty[0,\infty)$ is given by \[ h(t) := \sum_{j=1}^d e^{t(\frac{n^2}4 - \lambda_j)} + \tfrac12 m_V(\tfrac{n}2) \] If $n+1$ is odd, then comparing these expansions shows that $b_j = 0$ if $j$ is even, and \begin{equation}\label{bj.odd} b_{2k-1} = \frac{2^{-n+2k}}{\pi^{\frac12} \Gamma(\frac{n+1}{2}-k)} a_k(V) \end{equation} for $k \in \mathbb{N}$. For $n+1$ even the heat trace expansion contains only integral powers of $t$. This implies that $b_j = 0$ for all $j \ge n$, and also for even values of $j<n$. For odd values of $j < n$, the coefficients are given by \eqref{bj.odd}. \end{proof} Integrating the asymptotic expansion from Theorem~\operatorname{Re}f{sigmap.thm} yields the following: \begin{corollary}\label{sigma.cor} The scattering phase admits a full asymptotic expansion as $\xi \to 0$. If the dimension $n+1$ is odd, then \[ \sigma(\xi) \sim \sum_{k=1}^{[n/2]} \frac{c_k(V)}{n-2k+1} \xi^{n-2k+1} + d + \tfrac12 m_V(\tfrac{n}2) + \sum_{k>[n/2]} \frac{c_k(V)}{n-2k+1} \xi^{n-2k+1}, \] where $d$ is the number of eigenvalues. For $n+1$ even, \[ \sigma(\xi) = \sum_{k=1}^{[n/2]} \frac{c_k(V)}{n-2k+1} \xi^{n-2k+1} + d + \tfrac12 m_V(\tfrac{n}2) + O(\xi^{-\infty}). \] \end{corollary} \begin{proof} By Theorem~\operatorname{Re}f{sigmap.thm}, the function \[ t \mapsto \int_0^\infty \sqbrak[\Bigg]{\sigma'(x) - \sum_{k=1}^{[n/2]}c_k(V) x^{n-2k}} e^{-x^2t} dx \] is continuous for $t \in [0,\infty)$. By \eqref{heat.sigmap} taking the limit as $t \to 0^+$ yields \[ \int_0^\infty \sqbrak[\Bigg]{\sigma'(x) - \sum_{k=1}^{[n/2]}c_k(V) x^{n-2k}} dx = d + \tfrac12 m_V(\tfrac{n}2). \] Splitting the integral at $x=\xi$ then gives, since $\sigma(0) =0$, \[ \begin{split} \sigma(\xi) &= \sum_{k=1}^{[n/2]} \frac{c_k(V)}{n-2k+1} \xi^{n-2k+1} + d + \tfrac12 m_V(\tfrac{n}2)\\ &\qquad - \int_\xi^\infty \sqbrak[\Bigg]{\sigma'(x) - \sum_{k=1}^{[n/2]}c_k(V) x^{n-2k}} dx. \end{split} \] By Theorem~\operatorname{Re}f{sigmap.thm}, for $n+1$ odd the final integral on the right can be integrated to produce an asymptotic expansion in $\xi$. For $n+1$ even, this integral gives an error term $O(\xi^{-\infty})$. \end{proof} \section{Existence of resonances}\label{exist.sec} The asymptotic expansions of the wave trace and scattering have significantly different behavior in odd and even dimensions, so we will consider the two cases separately. \subsection{Even dimensions} For $n+1$ even, all of the singularities in the wave trace expansion of Theorem~\operatorname{Re}f{wave.exp.thm} are detectable for $t \ne 0$. 
It thus follows immediately from Theorem~\operatorname{Re}f{poisson.thm} that for $V \in C^\infty_0(\bbH^{n+1}, \mathbb{R})$ the resonance set $\mathcal{R}_V$ determines all of the wave invariants $a_k(V)$. In particular, since the vanishing of the first two wave invariants implies $V=0$ by the formulas of Proposition~\operatorname{Re}f{d12.prop}, we obtain the following: \begin{theorem}\label{even.R0.thm} For $V \in C^\infty_0(\bbH^{n+1}, \mathbb{R})$ with $n+1$ even, if $\mathcal{R}_V = \mathcal{R}_0$ then $V=0$. \end{theorem} We can also deduce a lower bound on the resonance counting function from the wave trace in even dimensions. Note that $\Theta_V(t) = O(t^{-n+1})$ by Theorem~\operatorname{Re}f{wave.exp.thm}, whereas the $\mathcal{R}_0$ contribution in \eqref{poisson.formula} satisfies \[ u_0(t) \sim \frac{1}{t^{n+1}} \] as $t \to 0$. It thus follows from \eqref{poisson.formula} that \begin{equation}\label{even.trace.asym} \sum_{\zeta \in \mathcal{R}_V} e^{(\zeta-\frac{n}2)t} \sim \frac{2}{t^{n+1}}. \end{equation} The lower-bound argument from Guillop\'e-Zworski \cite[Thm.~1.3]{GZ:1997} (see also \cite[\S12.2]{B:STHS2}) can be applied to \eqref{even.trace.asym}, yielding the following: \begin{theorem} For $n+1$ even, the counting function for $\mathcal{R}_V$ satisfies \[ N_V(r) \ge c r^{n+1}, \] for some constant $c>0$ that depends only on $n$ and the radius of $\operatorname{supp} V$. \end{theorem} \begin{proof} Choose $\phi \in C^\infty_0(\mathbb{R}_+)$ with $\phi \ge 0$ and $\phi(1) >0$, and set \[ \phi_\lambda(t) := \lambda \phi(\lambda t). \] By \eqref{even.trace.asym} we have \[ \int_0^\infty \paren[\bigg]{\sum_{\zeta \in \mathcal{R}_V} e^{(\zeta-\frac{n}2)t}} \phi_\lambda(t)\>dt \ge c_n \lambda^{n+1}, \] where $c_n$ does not depend on $V$. Using the Fourier transform to evaluate the right hand-side gives \begin{equation}\label{hatphi.sum} \sum_{\zeta \in \mathcal{R}_V} \hat{\phi}\paren*{i(\zeta-\nh)/\lambda} \ge c_n \lambda^{n+1}. \end{equation} Since $\hat\phi(\xi)$ is rapidly decreasing, we can estimate $\hat\phi(\xi) = O(\abs{\xi}^{-n-2})$ in particular. In terms of the counting function, \eqref{hatphi.sum} then implies that \[ \begin{split} c_n\lambda^{n+1} & \le \int_0^\infty (1+r/\lambda)^{-n-2}\>dN_V(r)\\ & = (n+2) \int_0^\infty (1+r)^{-n-3}N_V(\lambda r)\>dr. \end{split} \] Splitting the integral at $r=a$ and adjusting the constant gives \begin{equation}\label{cnl.split} c_n\lambda^{n+1} \le N_V(\lambda a) + \int_a^\infty (1+r)^{-n-3}N_V(\lambda r)\>dr. \end{equation} If $V$ has support in a ball of radius $R$, then Borthwick \cite[Thm.~1.1]{Borthwick:2010} gives an upper bound \[ N_V(r) \le C_R r^{n+1}. \] Applying this estimate to \eqref{cnl.split} gives \[ N_V(\lambda a) \ge c_n\lambda^{n+1} - C_R \lambda^{n+1} a^{-1}. \] We can then set $a = 2C_R/c_n$ and rescale $\lambda$ to obtain \[ N_V(\lambda) \ge \frac12 c_n \paren*{\frac{c_n}{2C_R}}^{\!n+1} \lambda^{n+1}. \] \end{proof} The existence of a lower bound in even dimensions is not surprising, since the optimal order of growth is already attained for $V=0$. It is more interesting to examine the difference between $\mathcal{R}_V$ and the background resonance set. Note that when $n+1$ is even, the expansion of Theorem~\operatorname{Re}f{sigmap.thm} contains only odd powers of $\xi$. Since $\sigma'(\xi)$ is an even function, this creates a discrepancy that we can exploit. \begin{theorem}\label{even.scphase.thm} For $n+1$ even, suppose that $V_1, V_2 \in C^\infty_0(\bbH^{n+1}, \mathbb{R})$. 
If the resonance sets $\mathcal{R}_{V_1}$ and $\mathcal{R}_{V_2}$ differ by only finitely many points (counting multiplicities), then: \begin{enumerate} \item The corresponding scattering phases $\sigma_{V_1}$ and $\sigma_{V_2}$ differ by a constant. \item The sets $\mathcal{R}_{V_1} \backslash \mathcal{R}_{V_2}$ and $\mathcal{R}_{V_2} \backslash \mathcal{R}_{V_1}$ are contained in $(0,n)$ and invariant under the reflection $s \mapsto n-s$. \item The wave invariants satisfy $a_k(V_1) = a_k(V_2)$ for $k = 1, \dots, (n-1)/2$. \end{enumerate} Furthermore, if $\mathcal{R}_{V_1} = \mathcal{R}_{V_2}$ (with multiplicities), then $\Theta_{V_1} = \Theta_{V_2}$ and hence all of the wave invariants match. \end{theorem} \begin{proof} Under the assumption that $\mathcal{R}_{V_1}$ and $\mathcal{R}_{V_2}$ differ by only finitely many points, the factorization of Proposition~\operatorname{Re}f{tau.factor.prop} implies that \begin{equation}\label{tau.factor12} \frac{\tau_{V_1}(s)}{\tau_{V_2}(s)} = (-1)^{m_{V_1}(\frac{n}2) - m_{V_2}(\frac{n}2)} e^{p(s)} \prod_{\zeta \in \mathcal{R}_{V_1} \backslash \mathcal{R}_{V_2}} \frac{n-s-\zeta}{s-\zeta} \prod_{\zeta \in \mathcal{R}_{V_2} \backslash \mathcal{R}_{V_1}} \frac{s-\zeta}{n-s-\zeta}, \end{equation} where $p$ is a polynomial with degree at most $n+1$, satisfying $p(s) = p(n-s)$. It follows that $\sigma_{V_1}'(\xi) - \sigma_{V_2}'(\xi)$ is an even, rational function of $\xi$. Since the expansion formula from Theorem~\operatorname{Re}f{sigmap.thm} contains only odd powers of $\xi$, plus a $O(\xi^{-\infty})$ remainder, this implies that $\sigma_{V_1}'(\xi) = \sigma_{V_2}'(\xi)$. The equality of the wave invariants for $k = 1,\dots,[n/2]$ then follows from the matching of expansion coefficients. Since $\sigma_{V_1}' = \sigma_{V_2}'$ also implies that $\tau_{V_1}(s)/\tau_{V_2}(s)$ is constant, the characterization of $\mathcal{R}_{V_1} \backslash \mathcal{R}_{V_2}$ and $\mathcal{R}_{V_2} \backslash \mathcal{R}_{V_1}$ follows from \eqref{tau.factor12}. If $\mathcal{R}_{V_1} = \mathcal{R}_{V_2}$, then the same argument shows that the scattering phases are equal. It then follows from \eqref{Theta.vart} that $\Theta_{V_1} = \Theta_{V_1}$. \end{proof} Let us apply Theorem~\operatorname{Re}f{even.scphase.thm} to compare $\mathcal{R}_V$ to $\mathcal{R}_0$. The hypothesis that $\mathcal{R}_V$ and $\mathcal{R}_0$ differ by finitely many points implies that $\sigma'(\xi) = 0$ and $a_k(V) = 0$ for $k \le (n-1)/2$. Since $\mathcal{R}_0 \cap (0,n) = \emptyset$, it also implies that $\mathcal{R}_V$ is the union of $\mathcal{R}_0$ with a possible resonance at $\zeta = \tfrac{n}2$, plus a finite set of pairs of the form \[ \zeta = \tfrac{n}2 \pm \sqrt{\tfrac{n^2}4 - \lambda_j}, \] where $\lambda_j$ is an eigenvalue. As noted at the start of this section, the vanishing of the first two wave invariants implies that $V=0$. From Theorem~\operatorname{Re}f{even.scphase.thm} we thus immediately obtain the following: \begin{corollary} Let $V \in C^\infty_0(\bbH^{n+1}, \mathbb{R})$ with $n+1$ even and $n \ge 5$. If $V \ne 0$ then $\mathcal{R}_V$ differs from $\mathcal{R}_0$ by infinitely many points (counting multiplicities). \end{corollary} For $n\le 3$ we cannot fully control the first two wave invariants. However, we can derive some extra information from the heat trace. 
If $\sigma'(\xi)=0$, we see from \eqref{heat.bk} and \eqref{heat.trace.PV} that \begin{equation}\label{rfinite.match} \sum_{j=1}^d e^{t(\frac{n^2}4 - \lambda_j)} + \tfrac12 m_V(\tfrac{n}2) \sim \sum_{k=(n+1)/2}^\infty 2^{-n+2k-1} \pi^{-\frac12} a_k(V) t^{-\frac{n+1}2+k} \end{equation} as $t \to 0$. Matching the coefficients in the expansion leads to a set of relationships between the discrete eigenvalues $\lambda_j$, the multiplicity $m_V(\tfrac{n}2)$, and the wave invariants. \begin{corollary} For $V \in C^\infty_0(\mathbb{H}^2, \mathbb{R})$, if $V \ne 0$ and $\int V\>dg \ge 0$, then $\mathcal{R}_V$ differs from $\mathcal{R}_0$ by infinitely many points. The same conclusion holds for $V \in C^\infty_0(\mathbb{H}^4, \mathbb{R})$, provided $\int V\>dg \ne 0$. \end{corollary} \begin{proof} Assume that $\mathcal{R}_V$ differs from $\mathcal{R}_0$ by finitely many points. For $n=1$ the $t^0$ term in \eqref{rfinite.match} gives \[ d + \tfrac12 m_V(\tfrac{n}2) = - \frac{1}{4\pi} \int_{\mathbb{H}^2} V(z)\>dg(z). \] Hence $\int V\>dg \ge 0$ implies $\mathcal{R}_V = \emptyset$, which gives $V=0$ by Theorem~\operatorname{Re}f{even.R0.thm}. For $n=3$, the assumption that $\mathcal{R}_V$ differs from $\mathcal{R}_0$ by finitely many points gives $a_1(V) = 0$ by Theorem~\operatorname{Re}f{even.scphase.thm}. This means $\int V\>dg = 0$. \end{proof} Assuming a finite discrepancy between $\mathcal{R}_V$ and $\mathcal{R}_0$, the expansion \eqref{rfinite.match} also implies a set of relations between eigenvalues and wave invariants. For $n=1$, we have \[ \frac{1}{(k-1)!} \sum_{j=1}^d \paren*{\tfrac{1}4 - \lambda_j}^{k-1} = 4^{k-1} \pi^{-\frac12} a_k(V) \] for $k \ge 2$, and if $n=3$, \[ d + \tfrac12 m_V(\tfrac{3}2) = \pi^{-\frac12} a_2(V) \] and \[ \frac{1}{(k-2)!} \sum_{j=1}^d \paren*{\tfrac{9}4 - \lambda_j}^{k-2} = 4^{k-2} \pi^{-\frac12} a_k(V) \] for $k \ge 3$. Although these relations seem rather delicate, they do not lead to any obvious contradiction. \subsection{Odd dimensions} In odd dimensions, the primary limitation to drawing implications from the wave trace is the fact that the terms in the expansion of Theorem~\operatorname{Re}f{wave.exp.thm} with $k \le n/2$ are distributions supported only at $t=0$. Hence the trace formula of Theorem~\operatorname{Re}f{poisson.thm} yields no information about the first $n/2$ wave invariants. In the Euclidean case, S\'a Barreto-Zworski \cite{SaZw} exploited the decay of the heat trace as $t \to \infty$ to prove an existence result. In the hyperbolic case, the corresponding decay rate from Corollary~\operatorname{Re}f{ht.decay.cor}, is merely $O(t^{-1/2})$, independent of the dimension. Hence this approach fails and we obtain an existence result only for dimension three. \begin{theorem}\label{odd.exist.thm} For $V \in C^\infty_0(\mathbb{H}^3, \mathbb{R})$, if $V \ne 0$ then $\mathcal{R}_V$ is not empty. \end{theorem} \begin{proof} For $n=2$, if $\mathcal{R}_V = \emptyset$ then Theorems~\operatorname{Re}f{poisson.thm} and \operatorname{Re}f{wave.exp.thm} show that $a_k(V) = 0$ for $k \ge 2$. By the formula from Proposition~\operatorname{Re}f{d12.prop}, \[ a_2(V) = \frac{1}{32\pi} \int_{\mathbb{H}^3} V(z)^2\>dg(z), \] and so $a_2(V) = 0$ implies $V=0$ when $n=2$. \end{proof} As long as at least one resonance exists, we can use the Poisson formula to show that there are infinitely many. The arguments from Christiansen \cite[Thm.~1]{Christ:1999b} and S\'a Barreto \cite[Thm.~1.3]{SB:2001} can then be applied to produce a lower bound on the count. 
\begin{theorem} For $V \in C^\infty_0(\bbH^{n+1}, \mathbb{R})$ with $n+1$ odd, either $\mathcal{R}_V = \emptyset$ or $\mathcal{R}_V$ is infinite and the counting function satisfies \[ \limsup_{r \to \infty} \frac{N_V(r)}{r} > 0. \] \end{theorem} \begin{proof} Suppose that $\mathcal{R}_V$ is finite. By Theorem~\operatorname{Re}f{poisson.thm} the wave trace is given by a finite sum, \[ \Theta_V(t) = \frac12 \sum_{\zeta\in\mathcal{R}_V} e^{(\zeta-\frac{n}2)\abs{t}}, \] for $t \ne 0$. Hence \[ \lim_{t \to 0} \Theta_V(t) = \frac12 \#\mathcal{R}_V. \] Since the wave trace expansion of Theorem~\operatorname{Re}f{wave.exp.thm} has no term of order $t^0$ for $n+1$ odd, this shows that $\mathcal{R}_V = \emptyset$. Now assume that $\mathcal{R}_V$ is infinite. Since $\mathcal{R}_0 = \emptyset$ in odd dimensions, the factorization formula of Proposition~\operatorname{Re}f{tau.factor.prop} reduces to \[ \tau(s) = (-1)^{m_V(\tfrac{n}2)} e^{q(s)} \frac{H_V(n-s)}{H_V(s)}. \] This is completely analogous to the factorization in the Euclidean case, once we shift the spectral parameter by setting $s = \nh + i\xi$. Suppose that $N_V(r) = O(r)$. Then, the scattering phase expansion of Corollary~\operatorname{Re}f{sigma.cor} allows us to apply \cite[Thm.~1.2]{SB:2001} to deduce that \[ \abs*{\sum_{\abs{\xi_j} < r } \frac{1}{\xi_j} }\le C, \] for all $r>0$. The argument from the proof of \cite[Thm.~1.3]{SB:2001} then yields a contradiction to the fact the asymptotic expansion from Corollary~\operatorname{Re}f{sigma.cor} has only integral powers of $\xi$. \end{proof} \end{document}
\begin{document} \title{\begin{center} Quantum Computation explained to my Mother \end{center} } \author{Pablo Arrighi} \email{[email protected]} \affiliation{Computer Laboratory, University of Cambridge, 15 JJ Thomson Avenue, Cambridge CB3 0FD, U.K. } \keywords{introduction} \pacs{03.65} \begin{abstract} There are many falsely intuitive introductions to quantum theory and quantum computation in a handwave. There are also numerous documents which teach those subjects in a mathematically sound manner. To my knowledge this paper is the shortest of the latter category. The aim is to deliver a short yet rigorous and self-contained introduction to Quantum Computation, whilst assuming the reader has no prior knowledge of anything but the fundamental operations on real numbers. Successively I introduce complex matrices; the postulates of quantum theory and the simplest quantum algorithm. The document originates from a fifty minutes talk addressed to a non-specialist audience, in which I sought to take the shortest mathematical path that proves a quantum algorithm right. \end{abstract} \maketitle \section{Some mathematics} I will begin this introduction with less than three pages of mathematics, mainly definitions. These notions constitute the vocabulary, the very language of quantum theory, and every single one of them will find its use in the second part, when I introduce the postulates of quantum theory. \subsection{Complex Numbers} A \emph{real} number is a number just like you are used to. E.g. $1,\; 0,\;-4.3\;$ are all real numbers. A \emph{complex} number, on the other hand, is just a pair of real numbers. I.e. suppose $z$ is a complex number ($z$ is just a name we give to the number, we could call it \emph{zorro}), then $z$ must be of the form $(a,b)$ where $a$ and $b$ are real numbers.\\ Now I must teach you how to add or multiply complex numbers. Suppose we have two complex numbers $z_1=(a_1,b_1)$ and $z_2=(a_2,b_2)$. Addition first: $z_1+z_2$ is defined to be the pair of real numbers $(a_1+a_2,b_1+b_2)$. And now multiplication (when I put two number next to one another, with no sign in between that means they are multiplied): $z_1 z_2$ is defined to be the pair of real numbers given by $(a_1 a_2-b_1 b_2,a_1 b_2+a_2 b_1)$. \\ Sometimes we want to change the sign of the second (real) component of the complex number $z$. This operation is called \emph{conjugation}, and is denoted by a upper index `$^*$', i.e. $z^*$ is defined to be the pair of real numbers $(a,-b)$. \\ Another useful operation we do on a complex number is to take its \emph{norm}. The norm of $z=(a,b)$ is defined to be the real number $\sqrt{a^2+b^2}$. This operation is denoted by two vertical bars surrounding the complex number, in other words $|z|$ is simply a notation for $\sqrt{a^2+b^2}$. \subsection{Matrices} A \emph{matrix} of \emph{things} is a table containing those things, for instance: $ \left(\begin{array}{cc} \heartsuit &\spadesuit \\ \diamondsuit &\clubsuit \end{array}\right) $ is a matrix of card suits.\\ We shall call this matrix $M$ for use in later examples.\\ A matrix does not have to be square. We say that a matrix is $m\times n$ if it has $m$ horizontal lines and $n$ vertical lines.\\ For instance a \emph{column} is a $1\times n$ matrix e.g: $\left(\begin{array}{c} \heartsuit \\ \diamondsuit \end{array}\right) $. \\ Similarly a \emph{row} is a $m\times 1$ matrix, e.g. $ \left(\begin{array}{cc} \heartsuit &\spadesuit \\ \end{array}\right) $ is a row. 
\\ The $ij$-\emph{component} of a matrix designates the `thing' which is sitting at vertical position $i$ and horizontal position $j$ in the table, starting from the upper left corner. For instance the $2\;1$-component of $M$ is $\diamondsuit$. If $A$ is a matrix then the $ij$-component of $A$ is denoted $A_{ij}$, e.g. here you have that $M_{11}=\heartsuit,\;M_{21}=\diamondsuit$ etc. \\ Given a matrix we often need to make vertical lines into horizontal lines and vice-versa. This operation is called \emph{transposition} and is written `$^t$'. We thus have $A^t_{ij}=A_{ji}$, in other words if $A$ the $m\times n$ matrix with $ij$-component $A_{ij}$, then $A^t$ is defined to be the $n\times m$ matrix which has $ij$-component $A_{ji}$. Here are two examples: \begin{align*} M^t=\left(\begin{array}{cc} \heartsuit &\diamondsuit \\ \spadesuit &\clubsuit \end{array}\right) \quad;\quad \left(\begin{array}{c} \heartsuit \\ \diamondsuit \end{array}\right)^t= \left(\begin{array}{cc} \heartsuit &\diamondsuit \\ \end{array}\right) \end{align*} \subsection{Matrices of Numbers} Let us now consider matrices of numbers. The good thing about numbers (real or complex, it does not matter at this point) is that you know how to add and multiply them. This particularity will now enable us to define addition and multiplication \emph{of matrices of these numbers}.\\ In order to add two matrices $A$ and $B$ they must both be $m\times n$ matrices (they have the same size). Suppose $A$ has $ij$-components. Then $A+B$ is defined to be the $m\times n$ matrix with $ij$-components $A_{ij}+B_{ij}$.\\ If we now want to multiply the matrix $A$ by the matrix $B$ it has to be the case that the number of vertical lines of $A$ equals that of the number of horizontal lines of $B$. Now suppose $A$ is an $m\times n$ matrix with $ij$-components $A_{ij}$, whilst $B$ is $n\times r$ and has $pq$-components $B_{pq}$. Then $AB$ is defined to be the $m\times r$ matrix with $iq$-components $A_{i1} B_{1q}+A_{i2} B_{2q}+..+A_{in} B_{nq}$.\\ To make things clear let us work this out explicitly for general $2\times 2$ matrices of numbers: \begin{align*} \textrm{Let}\quad A= \left(\begin{array}{cc} A_{11} &A_{12} \\ A_{21} &A_{22} \end{array}\right)\quad\textrm{and}\quad B=\left(\begin{array}{cc} B_{11} &B_{12} \\ B_{21} &B_{22} \end{array}\right) \end{align*} \begin{align*} \textrm{Then}\quad A+B&= \left(\begin{array}{cc} A_{11}+B_{11} &A_{12}+B_{12} \\ A_{21}+B_{21} &A_{22}+B_{22} \end{array}\right)\\ \quad \textrm{and}\quad\quad\; AB&= \left(\begin{array}{cc} A_{11}B_{11}+A_{12}B_{21} &A_{11}B_{12}+A_{12}B_{22} \\ A_{21}B_{11}+A_{22}B_{21} &A_{21}B_{12}+A_{22}B_{22} \end{array}\right) \end{align*} \subsection{Matrices of Complex Numbers} Matrix addition and multiplication work on numbers, whether they are real or complex. But from now we look at matrices of complex numbers only, upon which we define one last operation called \emph{dagger}.\\ To do a dagger operation upon a matrix is to transpose the matrix and then to conjugate all the complex numbers it contains. This operation is denoted `$^{\dagger}$'. We thus have $A^{\dagger}_{ij}=A^*_{ji}$, in other words if $A$ is the $m\times n$ matrix with $ij$-component $A_{ij}$, then $A^{\dagger}$ is defined to be the $n\times m$ matrix which has $ij$-component $A^*_{ji}$.\\ Quite a remarkable $n\times n$ matrix of complex numbers is the one we call `the \emph{identity} matrix'. 
It is defined such that its $ij$-component is the complex number $(0,0)$ when $i\neq j$, and the complex number $(1,0)$ when $i=j$. The $n\times n$ identity matrix is denoted $I_n$, as in: \begin{align*} I_1= \left(\begin{array}{c} (1,0) \end{array}\right) \quad \textrm{and}\quad I_2= \left(\begin{array}{cc} (1,0) &(0,0) \\ (0,0) &(1,0) \end{array}\right) \end{align*} Having defined the identity matrices we are now able to explain what it means to be a \emph{unit} matrix of complex numbers. Consider $M$ an $m\times n$ matrix of complex numbers. $M$ is said to be a unit matrix if (and only if) it is true that $M^{\dagger}M=I_n$. \subsection{Some properties} You may skip the following three properties if you wish, but they will be needed in order to fully understand the comments which follow postulates \ref{evolution} and \ref{measurement}. Moreover by going through the proofs you will exercise your understanding of the many definitions you have just swallowed. \begin{Pro}\label{identity} Let $A$ be an $n\times m$ matrix of complex numbers and $I_m$ the $m\times m$ identity matrix. We then have that $AI_m=A$. In other words multiplying a matrix by the identity matrix leaves the matrix unchanged. \end{Pro} \emph{Proof.} First note that a complex number $(a,b)$ multiplied by the complex number $(1,0)$ is, by definition of complex number multiplication, given by $(1a-0b,0a+1b)$, which is just $(a,b)$ again. Likewise note that a complex number $(a,b)$ multiplied by the complex number $(0,0)$ is given by $(0a-0b,0a+0b)$, which is just $(0,0)$. Now by definition of matrix multiplication the $iq$-component of $AI_m$ is given by: (where we denote $I_m$ by just $I$) \begin{align*} (AI)_{iq}&=A_{i1}I_{1q}+A_{i2}I_{2q}+..+A_{in} I_{nq}\\ &=A_{i1}(0,0)+A_{i2}(0,0)+..+A_{iq}(1,0)+..+A_{in}(0,0) \end{align*} The second line was obtained by replacing the $I_{pq}$ with their value, which we know from the definition of the identity matrix. Now using the two remarks at the beginning of the proof we can further simplify this equation: \begin{align*} (AI)_{iq}&=(0,0)+(0,0)+..+A_{iq}+..+(0,0) \\ &=A_{iq} \quad\textrm{by complex number addition.} \end{align*} Thus the components of $AI$ are precisely those of $A$. $\quad\square$ \begin{Pro}\label{dagger} Let $A$ be an $m\times n$ matrix of complex numbers and $B$ be an $n \times r$ matrix of complex numbers. Then the following equality is true: \begin{align*} (AB)^{\dagger}=B^{\dagger}A^{\dagger} \end{align*} \end{Pro} \emph{Proof.} First note that \begin{align} \label{conj plus} ((a_1,b_1)+(a_2,b_2))^{*}=(a_1,b_1)^{*}+(a_2,b_2)^{*} \end{align} This is obvious since \begin{align*} ((a_1,b_1)+(a_2,b_2))^{*}&=(a_1+a_2,b_1+b_2)^{*} \\ &=(a_1+a_2,-b_1-b_2)\quad\textrm{and}\\ (a_1,b_1)^{*}+(a_2,b_2)^{*}&=(a_1,-b_1)+(a_2,-b_2)\\ &=(a_1+a_2,-b_1-b_2)\quad\textrm{as well.} \end{align*} Likewise note that \begin{align} \label{conj mult} ((a_1,b_1)(a_2,b_2))^{*}=(a_1,b_1)^{*}(a_2,b_2)^{*} \end{align} and also \begin{align} \label{commut} (a_1,b_1)(a_2,b_2)=(a_2,b_2)(a_1,b_1) \end{align} again this is easily verified by computing the left-hand-side and the right-hand-side of those equalities. 
\emph{You may want to check this as an exercise.}\\ Now by definition of matrix multiplication we have that \begin{align*} (AB)_{iq}=A_{i1}B_{1q}+A_{i2}B_{2q}+..+A_{in} B_{nq} \end{align*} Thus the components of $(AB)^{\dagger}$ are given by \begin{align*} (AB)^{\dagger}_{iq}&=(AB)^{*}_{qi}\\ &=A^{*}_{q1}B^{*}_{1i}+A^{*}_{q2}B^{*}_{2i}+..+A^{*}_{qn}B^{*}_{ni} \\ &=B^{*}_{1i}A^{*}_{q1}+B^{*}_{2i}A^{*}_{q2}+..+B^{*}_{ni}A^{*}_{qn} \end{align*} where we used equations (\ref{conj plus}) and (\ref{conj mult}) to obtain the second line, and equation (\ref{commut}) to obtain the third line. Now consider the components of $B^{\dagger}A^{\dagger}$. By definition of matrix multiplication we have that \begin{align*} (B^{\dagger}A^{\dagger})_{iq}&=B^{\dagger}_{i1}A^{\dagger}_{1q}+B^{\dagger}_{i2}A^{\dagger}_{2q}+..+B^{\dagger}_{in} A^{\dagger}_{nq}\\ &=B^{*}_{1i}A^{*}_{q1}+B^{*}_{2i}A^{*}_{q2}+..+B^{*}_{ni}A^{*}_{qn} \end{align*} where the last line was obtained using the fact that $A^{\dagger}_{ij}=A^*_{ji}$. Thus the components of $(AB)^{\dagger}$ are precisely those of $B^{\dagger}A^{\dagger}$. $\quad\square$ \begin{Pro}\label{probas} Let $V$ be a $n\times 1$ unit matrix of complex numbers (a column). Then it is the case that: \begin{align*} |V_{11}|^2+|V_{21}|^2+..+|V_{n1}|^2=1 \end{align*} \end{Pro} \emph{Proof.} First let $z=(a,b)$ be a complex number, and note that \begin{align*} z^{*}z&=(a^2+b^2,0) \\ &=(|z|^2,0) \end{align*} Now since $V$ is unit we have that \begin{align*} (V^{\dagger}V)_{11}&=V^{\dagger}_{11} V_{11}+V^{\dagger}_{12} V_{21}+..+V^{\dagger}_{1n} V_{n1}\\ &=V^{*}_{11} V_{11}+V^{*}_{21} V_{21}+..+V^{*}_{n1} V_{n1} \end{align*} where we used successively: the definition of matrix multiplication, and $A^{\dagger}_{ij}=A^*_{ji}$. The last line can be further simplified using our first remark, namely: \begin{align*} V^{*}_{i1} V_{i1}=(|V_{i1}|^2,0) \end{align*} Thus \begin{align*} (V^{\dagger}V)_{11}&=(|V_{11}|^2,0)+(|V_{21}|^2,0)+..+(|V_{n1}|^2,0)\\ &=(|V_{11}|^2+|V_{21}|^2+..+|V_{n1}|^2,0) \end{align*} Because $V$ is unit the last line must be equal to $(1,0)$, and so we have proved the property.$\quad\square$ \section{Quantum Theory} Quantum theory is one of the pillars of modern physics. The theory is $100$ years old and thoroughly checked by experiments; it enables physicists to understand and predict the behaviors of any closed (perfectly isolated from the rest of the world) physical system. Usually these are small systems such as atoms, electrons, photons etc. (only because they are generally less subject to outside interactions). \subsection{States} \begin{Post} \label{states}The \emph{state} of a \emph{closed physical system} is wholly described by a unit $n\times 1$ matrix of complex numbers. \end{Post} \emph{Comments.} In other words a state is given by a column of $n$ complex numbers \begin{align*} V=\left(\begin{array}{c} V_{11} \\ \vdots\\ V_{n1} \end{array}\right) \quad \textrm{such that}\quad V^{\dagger}V=I_1. \end{align*} What we mean by closed physical system is just about anything which is totally isolated from the rest of the world. The number of components $n$ varies depending on how complicated the system is; it is called the \emph{degrees of freedom} or the \emph{dimension} of the system. The postulate itself is extremely short and simple. 
It is nonetheless puzzling as soon as you attempt to apprehend it with your classical intuition.\\ \emph{Example.} Consider a coin, which insofar as we have always observed, can either by `head $\circledcirc$' or `tail $\circledast$'. Thus we will suppose it has $n=2$ degrees of freedom, and we will further assume that the state: \begin{align*} \textrm{`head}\;\circledcirc\textrm{' corresponds to quantum state }\left(\begin{array}{c} (1,0) \\ (0,0) \end{array}\right)\\ \textrm{whilst `tail}\;\circledast\textrm{' corresponds to quantum state }\left(\begin{array}{c} (0,0) \\ (1,0) \end{array}\right) \end{align*} Now if the coin was to be shut in a totally closed box, it would start behaving like a quantum coin. Thus the state: \begin{align*} \textrm{`}\circledcirc +\circledast\textrm{'}=\left(\begin{array}{c} (\frac{1}{\sqrt{2}},0) \\ (\frac{1}{\sqrt{2}},0) \end{array}\right) \end{align*} would become perfectly allowable. A quantum coin can be in a \emph{superposition} of head and tail, i.e. it can be both head and tail at the same time, in some proportion. Quantum theory is more general than our classical intuition: it allows for more possible states. It as if `head' and `tail' were two axes, and the quantum coin was allowed to live in the plane described by those axes. \subsection{Evolution} \begin{Post} \label{evolution} A closed physical system in state $V$ will evolve into a new state $W$, after a certain period of time, according to \begin{align*} W=UV \end{align*} where $U$ is a $n\times n$ unit matrix of complex numbers. \end{Post} \emph{Comments.} In other words, in order to see how the quantum state of a closed physical system evolves, you have to multiply it by the matrix which describes its evolution (which we call $U$). $U$ could be any matrix of complex numbers so long as it is $n\times n$ (remember $V$ is an $n\times 1$ matrix) and verifies the condition $U^{\dagger}U=I_n$.\\ Note that this postulate is coherent with the first one, because evolution under $U$ takes an allowed quantum state into an allowed quantum state. Indeed suppose $V$ is a valid state, i.e. an $n\times 1$ matrix verifying $V^{\dagger}V=I_1$. By definition of the matrix multiplication an $n\times 1$ matrix multiplied by an $n\times n$ matrix is also an $n\times 1$ matrix, and thus $W$ has the right sizes. Is it a unit matrix? Yes: \begin{align*} W^{\dagger}W&=(UV)^{\dagger}(UV)\quad\textrm{by definition of W}\\ &=V^{\dagger}U^{\dagger}UV\quad\textrm{by Property \ref{dagger}}\\ &=V^{\dagger}I_n V\quad\textrm{since $U$ is unit}\\ &=V^{\dagger}V\quad\textrm{by Property \ref{identity}}\\ &=I_1\quad\textrm{since $V$ is unit}\\ \end{align*} Thus $W$ is a valid quantum state. \subsection{Measurement} \begin{Post} \label{measurement}When a physical system in state \begin{align*} V=\left(\begin{array}{c} V_{11} \\ \vdots\\ V_{n1} \end{array}\right) \end{align*} is \emph{measured}, it yields outcome $i$ with probability $p_i=|V_{i1}|^2$. Whenever outcome $i$ occurs, the system is left in the state: \begin{align*} W=\left(\begin{array}{c} (0,0)\\ \vdots\\ (1,0)\\ \vdots\\ (0,0) \end{array}\right)\;\leftarrow\;i^{th}\textrm{ position} \end{align*} \end{Post} \emph{Example.} Suppose you have a quantum coin in state: \begin{align*} \textrm{`}\circledcirc +\circledast\textrm{'}=\left(\begin{array}{c} (\frac{1}{\sqrt{2}},0) \\ (\frac{1}{\sqrt{2}},0) \end{array}\right) \end{align*} which you decide to measure. 
With a probability $p_1=|\frac{1}{\sqrt{2}}|^2=\frac{1}{2}$ you will know that outcome `1' has occurred, in which case your quantum system will be left in state \begin{align*} \textrm{`}\circledcirc\textrm{'}=\left(\begin{array}{c} (1,0) \\ (0,0) \end{array}\right) \end{align*} But with probability $p_2=\frac{1}{2}$ outcome `2' may occur instead,in which case your quantum system will be left in state `$\circledast$'. \emph{Comments.} Thus a measurement in quantum theory is fundamentally a probabilistic process. For this postulate to work well we need to be sure that the probabilities all sum up to $1$ (so that something happens $100\%$ of the time). But you can check that this is the case: \begin{align*} p_1+...+p_n&=|V_{11}|^2+..+|V_{n1}|^2 \quad\textrm{by postulate \ref{measurement}}\\ &=1\quad\textrm{by Property \ref{probas}} \end{align*} The other striking feature of this postulate is that the state of the system gets \emph{changed} under the measurement. In our example everything happens as though the quantum coin in state `$\circledcirc +\circledast$' is asked to make up its mind between `$\circledcirc$' and `$\circledast$'. The quantum coin decides at random, but once it does it remains coherent with its decision: its new state is either `$\circledcirc$' or `$\circledast$'.\\ This feature provides the basis for one of the latest high-tech applications of quantum theory: quantum cryptography. Suppose Alice and Bob want to communicate secretly over the phone, but Eve, the Eavesdropper, might be spying upon their conversation. What Alice and Bob can do is to send quantum coins to each other across the (upgraded) phone network. As Eve attempts to measure what the honest parties are saying, she is bound to \emph{change} the state of the coin. This will enable\cite{BB84} Alice and Bob to detect her malevolent presence. \section{Deutsch-Jozsa algorithm} The measurement postulate will (probably) make you think that quantum theory is just a convoluted machinery whose only purpose is to describe objects which might be in `state $1$' with probability $p_1$, in `state $2$' with probability $p_2$ etc. until $n$. After all why bother thinking of the state `$\circledcirc +\circledast$' as a coin which is both head `$\circledcirc$' and tail `$\circledast$' at the same time - when after it gets observed it collapses to either head `$\circledcirc$' or tail `$\circledast$' anyway? \\ No. You \emph{have} to consider that the coin is both `$\circledcirc$' and `$\circledast$' \emph{until you measure it}, because this \emph{is} how it behaves \emph{experimentally} (until you measure it). In other words the only way to account for what happens between the moment you prepare your initial system and the moment you measure it is to think of the complex components of the state $V$ as \emph{amplitudes, proportions} and \emph{not} as probabilities. This has much to do with what Postulate \ref{evolution} enables us to do.\\ In this last part we shall illustrate this point by considering the simplest of all known quantum algorithms\cite{DJ}. An \emph{algorithm} is just a recipe that is used to systematically solve a mathematical problem. But the mathematical problem we will now introduce cannot be solved by classical means: it can only be solved using quantum theory, that is with a quantum algorithm. The fact that this algorithm \emph{does work in practice} ought to demonstrate the fact that the amplitudes of quantum theory permit us to do things which mere probabilities would not allow, and would not explain. 
\subsection{The problem} A \emph{boolean value} is something which can either be $\mathbf{True}$ or $\mathbf{False}$. For instance the statement `the sky is blue' has the boolean value $\mathbf{True}$ almost anywhere in the world with the exception of England, where it takes the value $\mathbf{False}$. \\ A \emph{boolean operator} is just a `box' which takes one or several boolean values and returns one or several boolean values. In order to define our problem we need to become familiar with two boolean operators, which we now describe. \\ The boolean operator $\mathbf{Not}$ takes the boolean value $\mathbf{True}$ into $\mathbf{False}$ and the boolean value $\mathbf{False}$ into $\mathbf{True}$. We denote this as follows: \begin{align*} \mathbf{Not(True)}&=\mathbf{False} \\ \mathbf{Not(False)}&=\mathbf{True} \end{align*} The boolean operator $\mathbf{Xor}$ (exclusive or) takes two boolean values and returns one boolean value. It returns $\mathbf{True}$ either if the first boolean value it takes is $\mathbf{True}$ and the second one is $\mathbf{False}$ or if the second boolean value it takes is $\mathbf{True}$ and the first one is $\mathbf{False}$. Otherwise it returns $\mathbf{False}$. We denote this as follows: \begin{align*} \mathbf{Xor(True,False)}&=\mathbf{True} \\ \mathbf{Xor(False,True)}&=\mathbf{True} \\ \mathbf{Xor(False,False)}&=\mathbf{False} \\ \mathbf{Xor(True,True)}&=\mathbf{False} \end{align*} In other words $\mathbf{Xor}$ compares its two input boolean values: it returns $\mathbf{True}$ if they are different and $\mathbf{False}$ if they are the same.\\ We are now ready to state the problem. \begin{Pb} \label{xor pb} Suppose we are given a mysterious boolean operator $\mathbf{F}$ (a black box) which takes one boolean value and returns another boolean value. We want to calculate $\mathbf{Xor(F(False),F(True))}$, i.e. the boolean value returned by $\mathbf{Xor}$ when applied to the two possible results of $\mathbf{F}$. But we are allowed to use the mysterious boolean operator $\mathbf{F}$ only once. \end{Pb} It is clear that this problem cannot be solved classically. This is because in order to learn anything about $\mathbf{F}$ you will have to use $\mathbf{F}$. But we are allowed to do this only once. Suppose we use $\mathbf{F}$ on input boolean value $\mathbf{False}$. This gives us $\mathbf{F(False)}$, but tells us nothing about $\mathbf{F(True)}$ which may still be either $\mathbf{True}$ or $\mathbf{False}$. Thus we cannot compute $\mathbf{Xor(F(False),F(True))}$ and we fail to solve the problem. The same reasoning applies if we begin by using $\mathbf{F}$ to obtain $\mathbf{F(True)}$.\\ But what would happen if we had the possibility to use $\mathbf{F}$ upon an input boolean value which is both $\mathbf{True}$ and $\mathbf{False}$, in some proportions (a superposition)? \subsection{The quantum setup} Now suppose that the mysterious boolean operator $\mathbf{F}$ is given in the form of a `quantum black box' instead. 
To make this more precise we need to call \begin{align*} \textrm{`}\mathbf{False},\mathbf{False}\textrm{' the quantum state}\left(\begin{array}{c} (1,0)\\ (0,0)\\ (0,0)\\ (0,0) \end{array}\right)\\ \textrm{`}\mathbf{False},\mathbf{True}\textrm{' the quantum state}\left(\begin{array}{c} (0,0)\\ (1,0)\\ (0,0)\\ (0,0) \end{array}\right)\\ \textrm{`}\mathbf{True},\mathbf{False}\textrm{' the quantum state}\left(\begin{array}{c} (0,0)\\ (0,0)\\ (1,0)\\ (0,0) \end{array}\right)\\ \textrm{`}\mathbf{True},\mathbf{True}\textrm{' the quantum state}\left(\begin{array}{c} (0,0)\\ (0,0)\\ (0,0)\\ (1,0) \end{array}\right) \end{align*} We assume we have access, for one use only, to a physical device which implements $\mathbf{F}$ as a quantum evolution. This quantum evolution $U$ must take \begin{align*} \textrm{`}\mathbf{True,False}\textrm{' into }\textrm{`}\mathbf{True,F(True)}\textrm{'} \\ \textrm{`}\mathbf{False,False}\textrm{' into }\textrm{`}\mathbf{False,F(False)}\textrm{'} \end{align*} Notice that if for instance `$\mathbf{F(True)}=\mathbf{True}$'then `$\mathbf{True,F(True)}$' simply denotes the quantum state `$\mathbf{True,True}$'. Furthermore we assume $U$ takes \begin{align*} \textrm{`}\mathbf{True,True}\textrm{' into }\textrm{`}\mathbf{True,Not(F(True))}\textrm{'} \\ \textrm{`}\mathbf{False,True}\textrm{' into }\textrm{`}\mathbf{False,Not(F(False))}\textrm{'} \\ \end{align*} The quantum evolution $U$ is fully specified in this manner. In matrix form it is given as follows: \begin{align*} \left(\begin{array}{cccc} (1-F_\textrm{False},0) &(F_\textrm{False},0) &(0,0) &(0,0)\\ (F_\textrm{False},0) &(1-F_\textrm{False},0) &(0,0) &(0,0)\\ (0,0) &(0,0) &(1-F_\textrm{True},0) &(F_\textrm{True},0)\\ (0,0) &(0,0) &(F_\textrm{True},0) &(1-F_\textrm{True},0) \end{array}\right) \end{align*} with:\\ $F_\textrm{False}$ equal to $1$ if $\mathbf{F(False)}$ is $\mathbf{True}$, and $0$ otherwise.\\ $F_\textrm{True}$ equal to $1$ if $\mathbf{F(True)}$ is $\mathbf{True}$ , and $0$ otherwise.\\ \\ Whatever the values of $F_\textrm{False}$ and $F_\textrm{True}$, the matrix of complex number defined above is unit, i.e. $U^{\dagger}U=I_4$. Thus according to postulate \ref{evolution} this mysterious quantum black box is perfectly allowable physically.\\ \emph{As an exercise you may want to check that the matrix $U$ does take `$\mathbf{True,False}$' into `$\mathbf{True,F(True)}$' etc., and that it is indeed unit.}\\ \\ For our quantum algorithm we will need another quantum evolution: \begin{align*} H= \left(\begin{array}{cccc} (1/2,0) &(1/2,0) &(1/2,0) &(1/2,0)\\ (1/2,0) &(-1/2,0) &(1/2,0) &(-1/2,0)\\ (1/2,0) &(1/2,0) &(-1/2,0) &(-1/2,0)\\ (1/2,0) &(-1/2,0) &(-1/2,0) &(1/2,0) \end{array}\right) \end{align*} This $H$ is also a unit matrix of complex numbers. \subsection{The solution} \begin{Algo} In order to solve problem \ref{xor pb} one may use the following algorithm:\\ \\ $1$. Start with a closed physical system in quantum state `$\mathbf{False,True}$'.\\ $2$. Evolve the system under the quantum evolution $H$.\\ $3$. Evolve the system under the quantum evolution $U$.\\ $4$. Evolve the system under the quantum evolution $H$.\\ $5$. Measure the system.\\ \\ If $\mathbf{Xor(F(False),F(True))}$ is $\mathbf{False}$ the quantum measurement always yields outcome `$2$'. 
\\ On the other hand if $\mathbf{Xor(F(False),F(True))}$ is $\mathbf{True}$ the quantum measurement always yields outcome `$4$'.\\ Thus the algorithm always manages to determine $\mathbf{Xor(F(False),F(True))}$, and does so with only one use of the quantum evolution $U$. \end{Algo} \emph{Proof.} In Step $1$ we start with a closed physical system whose quantum state is $V=\left(\begin{array}{c} (0,0)\\ (1,0)\\ (0,0)\\ (0,0) \end{array}\right)$.\\ After Step $2$ the quantum state of the system has become $HV$. By working out this matrix multiplication we have $HV=\left(\begin{array}{c} (1/2,0)\\ (-1/2,0)\\ (1/2,0)\\ (-1/2,0) \end{array}\right)$.\\ \emph{You may want to check this matrix multiplication and the ones to follow, as an exercise.}\\ After Step $3$ the quantum state of the system has become $UHV$. We can still work out the matrix multiplication but obviously the result now depends upon our mysterious boolean operator $\mathbf{F}$. Indeed we have $UHV=\left(\begin{array}{c} (1/2-F_\textrm{False},0)\\ (-1/2+F_\textrm{False},0)\\ (1/2-F_\textrm{True},0)\\ (-1/2+F_\textrm{True},0) \end{array}\right)$.\\ Notice that $UHV$ depends both upon $\mathbf{F(False)}$ and $\mathbf{F(True)}$, in some proportions. \\ After Step $4$ the quantum state of the system has become $HUHV$ and we have, by working out the multiplication: $HUHV=\left(\begin{array}{c} (0,0)\\ (1-F_\textrm{False}-F_\textrm{True},0)\\ (0,0)\\ (F_\textrm{True}-F_\textrm{False},0) \end{array}\right)$.\\ Finally in Step $5$ we measure the state $HUHV$. According to Postulate \ref{measurement} this yields:\\ - outcome `$1$' with probability $0$ (never).\\ - outcome `$2$' with probability $p_2=(1-(F_\textrm{False}+F_\textrm{True}))^2$.\\ - outcome `$3$' with probability $0$ (never).\\ - outcome `$4$' with probability $p_4=(F_\textrm{True}-F_\textrm{False})^2$.\\ \\ Now if $\mathbf{Xor(F(False),F(True))}$ is $\mathbf{False}$ then $F_\textrm{False}$ and $F_\textrm{True}$ have to be the same. Thus $F_\textrm{False}+F_\textrm{True}$ equals either $0$ or $2$, whereas $F_\textrm{True}-F_\textrm{False}$ is necessarily worth $0$. As a consequence $p_2$ must equal $1$ whereas $p_4$ is worth $0$.\\ Similarly, if $\mathbf{Xor(F(False),F(True))}$ is $\mathbf{True}$ then $F_\textrm{False}$ and $F_\textrm{True}$ have to be the different values. Thus $F_\textrm{False}+F_\textrm{True}$ is necessarily worth $1$, whereas $F_\textrm{True}-F_\textrm{False}$ equals either $-1$ or $1$. As a consequence $p_2$ is worth $0$ whereas $p_4$ must equal $1$. $\quad\square$ \subsection{Comments} It is quite a remarkable fact that with only one use of the `quantum black box' we succeed to determine a quantity which intrinsically depends `on both possible values which the box may return'. Although this algorithm does not seem extremely useful in every day life, it teaches us an important lesson: the components of a quantum state must be viewed as proportions (amplitudes), not as probabilities. The quantum coin can be both head or tail in some proportions, simultaneously, until you measure it.\\ Until recently this feature of quantum theory was essentially regarded as an unfortunate oddity which made the theory difficult to grasp. But we are now learning to turn this feature to our own advantage, as a means of `exploring several possibilities simultaneously' (so to speak).\\ This is recent research however, and to this day not so many quantum algorithms are known. 
Yet we do know that Quantum Computers can factorize large integer numbers efficiently, or even find a name within an unordered list of $100$ people in only $5$ tries. These are quite useful things to be able to do. The best place to learn about them is \cite{Nielsen}, if you have followed me this far you can go further. \section{Acknowlegments} The author would like to thank his mother for suggesting this article, Anuj Dawar for his patient listening, EPSRC, Marconi, the Cambridge European and Isaac Newton Trusts for financial support. \end{document}
\begin{document} \title{Analytic-geometric methods for finite Markov chains with applications to quasi-stationarity} \begin{abstract} For a relatively large class of well-behaved absorbing (or killed) finite Markov chains, we give detailed quantitative estimates regarding the behavior of the chain before it is absorbed (or killed). Typical examples are random walks on box-like finite subsets of the square lattice $\mathbb Z^d$ absorbed (or killed) at the boundary. The analysis is based on Poincar\'e, Nash, and Harnack inequalities, moderate growth, and on the notions of John and inner-uniform domains. \end{abstract} \tableofcontents \section{Introduction} \setcounter{equation}{0} \subsection{Basic ideas and scope} Markov chains that are either absorbed or killed at boundary points are important in many applications. We refer to \cite{CMSM,DM} for entries to the vast literature regarding such chains and their applications. Absorption and killing are distinguished by what happens to the chain when it exits its domain $U$. In the killing case, it simply ceases to exist. In the absorbing case, the chain exits $U$ and gets absorbed at a specific boundary point which, from a classical viewpoint, is still part of the state space of the chain. In this paper we study the behavior of chains until they are either absorbed or killed, which means that there is no significant difference between the two cases. For simplicity, we will phrase the present work in the language of Markov chains killed at the boundary. The goal of this article is to explain how to apply to finite Markov chains a well-established circle of ideas developed for and used in the study of the heat equation with Dirichlet boundary condition in Euclidean domains and manifolds with boundary, or, equivalently, for Brownian motion killed at the boundary. By applying these techniques to some finite Markov chains, we can provide good estimates for the behavior of these chains until they are killed. These estimates are also very useful for computing probabilities concerning the exit position of the process, that is, the position when the chain is killed. Such probabilities are related to harmonic measure and time-constrained variants. This is discussed by the authors in a follow-up article~\cite{DHSZ2}. In \cite{DM}, a very basic example of this sort is discussed, lazy simple random walk on $\{0,1\dots,N\}$ with absorption at $0$ and reflection at $N$. This served as a starting point for the present work. Even for such a simple example, the techniques developed below provide improved estimates. The present approach utilizes powerful tools: Harnack, Poincar\'e and Nash inequalities. It leads to good results even for domains whose boundaries are quite rugged, namely, inner-uniform domains and John domains. The notions of ``Harnack inequality'' and ``John domain'' are quite unfamiliar in the context of finite Markov chains and their installment in this context is non-trivial and interesting mostly when a quantitative viewpoint is implemented carefully. The main contribution of this work is to provide such an implementation. \begin{figure} \caption{The forty-five degree finite cone in $\mathbb Z^2$} \label{D0} \end{figure} The type of finite Markov chains---more precisely, the type of families of finite Markov chains---to which these methods apply is, depending of one's perspective, both quite general and rather restrictive. First, we will mostly deal with reversible Markov chains. 
Second, the most technical part of this work applies only to families of finite Markov chains whose state spaces have a common ``finite-dimensional'' nature. Our basic geometric assumptions require that all Markov chains in the general family under consideration have, roughly speaking, the same dimension. The model examples are families of finite Markov chains whose state spaces are subsets of $\mathbb{Z}^d$ for some fixed $d$, such as the family of forty-five degree cones parametrized by $N$ shown in Figure~\ref{D0}. Many interesting families of finite Markov chains evolve on state spaces that have an ``infinite-dimensional nature,'' e.g., the hypercube $\{0,1\}^d$ or the symmetric group $\mathbb S_d$ where $d$ grows to infinity. Our main results do not apply well to these ``infinite-dimensional'' families of Markov chains, although some intermediary considerations explained in this paper do apply to such examples. See Section~\ref{sec-Doob}. The simple example depicted in Figure \ref{D0} illustrates the aim of this work. Start with simple random walk on the square grid in the plane. For each integer $N>2$, consider the subgraph of the square grid consisting of those vertices $(p,q)$ such that $$ q<p\le N \mbox{ and } 0<q<N,$$ which are depicted by black dots on Figure~\ref{D0}. Call this set of vertices~${U=U_N}$. The boundary where the chain is killed (depicted in blue) consists of the bottom and diagonal sides of the cone, i.e., the vertices with either $q=0$ or $p=q$ for $0\le p,q\le N$. Call this set $\partial U =\partial U_N$ and set $\mathfrak X=\mathfrak X_N=U_N\cup \partial U_N$. The vertices along the right side of the cone, $\{(N,q), 1\le q<N\}$, have one less neighboring vertex, so we add a loop at each of these vertices. (In Figure~\ref{D0}, these vertices are depicted with larger black dots and the loops are omitted for simplicity.) We are interested in understanding the behavior of the simple random walk on $\mathfrak X$ killed at the boundary $\partial U$, before its random killing time $\tau_U$. In particular, we would like to have good approximations of quantities such as \begin{equation} \label{eq:est1} \mathbf P_x(\tau_U >\ell), \;\; \mathbf P_x( X_t=y| \tau_U >\ell ), \;\; \mathbf P_x( X_t=y \mbox{ and } \tau_U>n), \end{equation} for $x,y\in U, \ 0\le t\le \ell,$ and \begin{equation} \label{eq:est2} \lim_{\ell\rightarrow \infty} \mathbf P_x( X_t=y| \tau_U >\ell ), \end{equation} for $ x,y\in U, \ 0\le t <+\infty$ where the time parameter $t$ is integer valued. This limit, if it exists, can be interpreted as the iterated transition probability at time $t$ for the chain conditioned to never be absorbed. We chose the example in Figure \ref{D0} because it is a rather simple domain, but already demonstrates some of the complexities in approximating the above quantities. \subsection{The Doob-transform technique} Before looking at this example in detail, consider a general irreducible aperiodic Markov kernel $K$ on a finite or countable state space $\mathfrak X$. Let $U$ be a finite subset of $\mathfrak X$ such that the kernel $K_U(x,y)=K(x,y)\mathbf 1_U(x)\mathbf 1_U(y)$ is still irreducible and aperiodic. Let $(X_t)$ be the (discrete time) random walk on $\mathfrak X$ driven by $K$, and let $\tau_U$ be the stopping time equal to the time of the first exit from $U$ as above. 
A rather general result explained in Section~\ref{sec-Dir} implies that the limit $$\lim_{\ell\rightarrow \infty} \mathbf P_x( X_t=y| \tau_U >\ell ), \;\;x,y\in U,\;\;t \in \mathbb{N}_{\geq 0}$$ exists and so we can define $K_{\mbox{\tiny Doob}}^t(x,y)$ for any $x, y \in U$ and $t \in \mathbb{N}_{\geq 0}$ as $$K_{\mbox{\tiny Doob}}^t(x,y) = \lim_{\ell\rightarrow \infty} \mathbf P_x( X_t=y| \tau_U >\ell).$$ It is not immediately clear that this collection of $t$-dependent kernels, $$K^1_{\mbox{\tiny Doob}},K^2_{\mbox{\tiny Doob}},K^3_{\mbox{\tiny Doob}},\dots,$$ has special properties but, it turns out that it is nothing other than the collection of the iterated kernels of the kernel $K_{\mbox{\tiny Doob}}=K^1_{\mbox{\tiny Doob}}$ itself, i.e., $$K_{\mbox{\tiny Doob}}^t(x,y)=\sum_z K^{t-1}_{\mbox{\tiny Doob}}(x,z)K_{\mbox{\tiny Doob}}(z,y).$$ Moreover, $K_{\mbox{\tiny Doob}}$ is an irreducible aperiodic Markov kernel. To see why this is true, let us explicitly find the kernel $K_{\mbox{\tiny Doob}}$. Recall that, by the Perron-Frobenius theorem, the irreducible, aperiodic, non-negative kernel $K_U$ has a real eigenvalue $\beta_0 \in [0,1]$ which is simple and such that $|\beta| < \beta_0$ for every other eigenvalue $\beta$. This top eigenvalue $\beta_0$ has a right eigenfunction $\phi_0$ and a left eigenfunction $\phi_0^*$ which are both positive functions on $U$. Set $$ K_{\phi_0}(x,y)= \beta_0^{-1} \phi_0(x)^{-1} K_U(x,y)\phi_0(y)$$ and observe that this is an irreducible aperiodic Markov kernel with invariant probability measure proportional to $\phi^*_0\phi_0$. These facts all follow from the definition and elementary algebra. In Section~\ref{sec-IU}, we show that $$\lim_{\ell\rightarrow \infty} \mathbf P_x( X_t=y| \tau_U >\ell ) = K^t_{\phi_0}(x,y),$$ and hence $$K_{\mbox{\tiny Doob}}^t(x,y) = K^t_{\phi_0}(x,y).$$ This immediately implies that $$\mathbf P_x(X_t=y \mbox{ and } \tau_U >t)=K^t_U(x,y)= \beta_0^t K^t_{\mbox{\tiny Doob}}(x,y) \phi_0(x)\phi_0(y)^{-1}.$$ If we assume---this is a big and often unrealistic assumption---that we know the eigenfunction $\phi_0$, either via an explicit formula or via ``good two-sided estimates,'' then any question about $$\mathbf P_x(X_t=y \mbox{ and } \tau_U >t) \ \text{or, equivalently,} \ K^t_U(x,y)$$ can be answered by studying $$K^t_{\mbox{\tiny Doob}}(x,y)$$ and vice-versa. The key point of this technique is that $K_{\mbox{\tiny Doob}}$ is an irreducible aperiodic Markov kernel with invariant measure proportional to $\phi_0^*\phi_0$ and its ergodic properties can be investigated using a wide variety of classical tools. The notation $K_{\mbox{\tiny Doob}} $ refers to the fact that this well-established circle of ideas is known as the Doob-transform technique. From now on, we will use the name $K_{\phi_0}$ instead, to remind the reader about the key role of the eigenfunction $\phi_0$. \subsection{The 45 degree finite discrete cone} In our specific example depicted in Figure~\ref{D0}, $K_U$ is symmetric in $x,y$ so that~${\phi_0^*=\phi_0}$. We let $\pi_U\equiv 2/N(N-1) $ denote the uniform measure on $U$ and normalize $\phi_0$ by the natural condition $\pi_U(\phi_0^2)=1$. Then, $\pi_{\phi_0}=\phi_0^2 \pi_U$ is the invariant probability measure of $K_{\phi_0}$ and this pair $(K_{\phi_0} , \pi_{\phi_0})$ is irreducible, aperiodic, and reversible. By applying known quantitative methods to this particular aperiodic, irreducible, ergodic Markov chain, we can approximate the quantities~\eqref{eq:est1} and~\eqref{eq:est2} as follows. 
For any $x=(p,q)\in U$ and any $t$, set $x_{\sqrt{t}}=(p_{\sqrt{t}},q_{\sqrt{t}})$ where $$p_{\sqrt{t}}= (p+ 2\lfloor \sqrt{t/4}\rfloor)\wedge N \mbox{ and }q_{\sqrt{t}}=(q+ \lfloor \sqrt{t/4}\rfloor)\wedge (N/2). $$ The transformation $x=(p,q)\mapsto x_{\sqrt{t}}=(p_{\sqrt{t}},q_{\sqrt{t}})$ takes any vertex $x=(p,q)$ and pushes it inside $U$ and away from the boundary at scale $\sqrt{t}$ (at least as long as $t\le N$). The two key properties of $x_{\sqrt{t}}$ are that it is at distance at most $\sqrt{t}$ from $x$ and at a distance from the boundary $\partial U$ of order at least $\sqrt{t}\wedge N$. The following six statements can be proven using the techniques in this paper. The first five of these statements generalize to a large class of examples that will be described in detail. The last statement takes advantage of the particular structure of the example in Figure~\ref{D0}. Note that the constants $c,C$ may change from line to line but are independent of $N,t$ and $x,y\in U=U_N.$ \begin{enumerate} \item For all $N$, $ cN^{-2} \le 1-\beta_0 \le CN^{-2}.$ This eigenvalue estimate gives a basic rate at which mass disappears from $U$. For a more precise statement, see item 5 below. \item All eigenvalues of $K_U$ are real, the smallest one, $\beta_{\mbox{\tiny min}}$, satisfies $$ \beta_0+\beta_{\mbox{\tiny min}}\ge c\beta_0 N^{-2}$$ and, for any eigenvalue $\beta$ other than $\beta_0$, $$\beta_0-\beta\ge c\beta_0 N^{-2}.$$ This inequality shows that $\frac{\beta_{\mbox{\tiny min}}}{\beta_0}$, the smallest eigenvalue of $K_{\phi_0}$, is strictly larger than~$-1$, which implies the aperiodicity of $K_{\phi_0}$. \item For all $x,y,t,N$ with $t\ge N^2$ $$ \max_{x,y} \left\{ \left| \frac{N(N-1)\mathbf P_x(X_t=y \mbox{ and } \tau_U >t)}{2\beta_0^t \phi_0(x)\phi_0(y) } -1\right| \right\} \le Ce^{- c t/N^2}.$$ A simple interpretation of this (and the following) statement is that $$\mathbf{P}_x(X_t = y \ \text{and} \ \tau_U > t)\;\; (\mbox{resp. } \mathbf{P}_x(\tau_U > t))$$ is asymptotic to a known function expressed in terms of $\beta_0$ and $\phi_0$. \item For all $x,t,N$ with $t\ge N^2$, $$ \max_{x} \left\{ \left| \frac{N(N-1)\mathbf P_x(\tau_U >t)}{2\beta_0^t \phi_0(x) \pi_U(\phi_0) } -1\right| \right\} \le Ce^{- c t/N^2}.$$ \item For all $x, t,N$, $$ c\beta_0^t\frac{\phi_0(x)}{\phi_0(x_{\sqrt{t}})}\le \mathbf P_x(\tau_U>t) \le C \beta_0^t \frac{\phi_0(x)}{\phi_0(x_{\sqrt{t}})} .$$ Unlike the third and fourth statements on this list, which give asymptotic expressions for $$ \mathbf P_x(X_t=y\mbox{ and } \tau_U>t) \mbox{ and }\mathbf{P}_x(\tau_U > t)$$ for times greater than $N^2$, the fifth statement provides a two-sided bound of the survival probability $\mathbf P_x(\tau_U>t)$ that holds true uniformly for every starting point $x$ and time $t>0$. \item For all $N$ and $x=(p,q)\in U$, where $U$ is described in Figure~\ref{D0}, $$ cpq(p+q)(p-q) N^{-4} \le \phi_0(x) \le Cpq(p+q)(p-q) N^{-4} .$$ Observe that this detailed description of the somewhat subtle behavior of $\phi_0$ in all of $U$, together with the previous estimate of $\mathbf P_x(\tau_U>t)$, provides precise information for the survival probability of the process $(X_t)_{t>0}$ started at any given point in $U$. \end{enumerate} In general, it is hard to get detailed estimates on $\phi_0$, although some non-trivial and useful properties of $\phi_0$ can be derived for large classes of examples. Even in the example given in Figure~\ref{D0}, the behavior of $\phi_0$ is not easily explained. 
In this case, it is actually possible to explicitly compute $\phi_0$: \begin{eqnarray*}\phi_0(x) =4\kappa_N \sin \frac{ \pi p}{2N+1}\sin\frac{\pi q}{2N+1} \left(\sin^2 \frac{\pi p}{2N+1} - \sin^2 \frac{\pi q}{2N+1} \right). \end{eqnarray*} The constant $\kappa_N $ which makes this eigenfunction have $L^2(\pi_U)$-norm equal to $1$ can be computed to be $\kappa_N= \frac{\sqrt{8N(N-1)}}{2N+1}$. The eigenvalue $\beta_0$ is $$\beta_0= \frac{1}{2}\left(\cos\frac{\pi}{2N+1}+\cos\frac{3\pi}{2N+1}\right).$$ \subsection{A short guide} Because some of the key techniques in this paper have a geometric flavor, we have chosen to emphasize the fact that all our examples are subordinate to some preexisting geometric structure. This underlying geometric structure introduces some of the key parameters that must remain fixed (or appropriately bounded) in order to obtain families of examples to which the results we seek to obtain apply uniformly. Generally, we use the language of graphs, and the most basic example of such a structure is a $d$-dimensional square grid. Throughout, the underlying space is denoted by $\mathfrak X$. It is finite or countable and its elements are called vertices. It is equipped with an edge set $\mathfrak E$ which is a set of pairs undirected $\{x,y\}$ of distinct vertices (note that this excludes loops). Vertices in such pairs are called neighbors. For each $x\in \mathfrak X$, the number of pairs in $\mathfrak E$ that contain $x$ is supposed to be finite, i.e., the graph is locally finite. The structure $(\mathfrak X,\mathfrak E)$ yields a natural notion of a discrete path joining two vertices and we assume that any two points in $\mathfrak X$ can indeed be joined by such a path. Two rather subtle types of finite subsets of $\mathfrak X$ play a key role in this work: \mbox{$\alpha$-John domains} and $\alpha$-inner-uniform domains. Inner-uniform domains are always John domains, but John domains are not always inner-uniform. The number $\alpha\in (0,1]$ is a geometric parameter, and we will mostly consider families of subsets which are all either $\alpha$-John or $\alpha$-inner-uniform for one fixed $\alpha>0$. John domains, named after Fritz John, are discussed in Section \ref{sec-JD} whereas the discussion and use of inner-uniform domains is postponed until Section \ref{sec-IU}. Our most complete results are for inner-uniform domains. These notions are well known in the context of (continuous) Euclidean domains, in particular in the field of conformal and quasi-conformal geometry. We provide a discrete version. See Figures \ref{D3}, \ref{IU-J}, and~\ref{IU-U} for simple examples. Whitney coverings are a key tool used in proofs about John and inner-uniform domains. These are collections of inner balls within some domain that are nearly disjoint and have a radius that is proportional to the distance of the center to the boundary. These collections of balls are not themselves a covering of the domain, but their triples are, i.e., they generate a covering. See Section \ref{sec-WC} for the formal definition and Figure \ref{fig:whitney-covering} for an example. Whitney coverings are absolutely essential to the analysis presented in this paper. For instance, a Whitney covering of a given finite John domain $U$ is used to obtain good estimates for the second largest eigenvalue of a Markov chain (e.g., simple random walk on our graph) forced to remained in the finite domain $U$. See, e.g., Theorem \ref{th-KNU}. 
\begin{figure} \caption{A graph with weights ${\color{red} \label{WM} \end{figure} With the geometric graph structure of Section~\ref{sec-JDW} fixed, we add vertex weights, $\pi(x)$ for each $x \in \mathfrak{X}$, and (positive) edge weights, $\mu_{xy}$ for each $\{x,y\} \in \mathfrak{E}$, with the requirement that $\mu$ is subordinated to $\pi$, i.e., $\sum_{y \in \mathfrak{X}} \mu_{xy} \leq \pi(x)$ (often, $\mu_{xy}$ is extended to all pairs by setting $\mu_{xy}=0$ when $\{x,y\}\not\in \mathfrak E$). Section \ref{sec-pimuK} explains how each choice of such weights defines a Markov chain and Dirichlet form adapted to the geometric structure $(\mathfrak X,\mathfrak E)$. This is illustrated in Figure~\ref{WM} where the Markov kernel $K=K_\mu$ is obtained by seting $K(x,y) = \mu_{xy}/\pi(x)$ for $x \neq y$ and $K(x,x) = 1 - \sum_{y} \mu_{xy}/\pi(x)$. We will generally refer to the geometric structure of $(\mathfrak{X}, \mathfrak{E})$ with weights $(\pi,\mu)$ instead of the Markov chain. Section \ref{sec-DMPN} introduces the important known concepts of volume doubling, moderate growth, various Poincar\'e inequalities, and Nash inequalities. These notions depend on the underlying structure $(\mathfrak X,\mathfrak E)$ and the weights $(\pi,\mu)$. There is a very large literature on volume doubling, Poincar\'e inequalities and Nash inequalities in the context of harmonic analysis, global analysis and partial differential equations (see, e.g., \cite{GrigBook,LSCAspects} and the references therein for pointers to the literature) and analysis on countable graphs (see, \cite{BalrlowLMS,Coulhon,GrigoryanAMS,LSCStF}). The notion of moderate growth is from \cite{DSCmod,DSCNash} which also cover volume doubling and Poincar\'e and Nash inequalities in the context of finite Markov chains. Section \ref{sec-PQPJD} is one of the key technical sections of the article. Given an underlying structure $(\mathfrak{X},\mathfrak{E},\pi,\mu)$ which satisfies two basic assumptions---volume doubling and the ball Poincar\'e inequality---we prove a uniform Poincar\'e inequality for finite $\alpha$-John domains with a fixed $\alpha$. This relies heavily on the definition of a John domain and the use of Whitney coverings. Theorems \ref{th-P1} and~\ref{th-PP1} are the main statements in this section. Section \ref{sec-AW} provides an extension of the results of Section \ref{sec-PQPJD}, namely, Theorems \ref{th-PP1doub} and \ref{th-PP1controlled}. The line of reasoning for these results is adapted from \cite{Jerison,MLSC,LSCAspects} where earlier relevant references can be found (all these references treat PDE type situations). Section \ref{sec-Metro} illustrates the results of Section~\ref{sec-AW} in the classical context of the Metropolis-Hastings algorithm. Specifically, given a finite John domain $U$ in a graph $(\mathfrak X,\mathfrak E)$, we can modify a simple random walk via edges weights in order to target a given probability distribution. Under certain hypotheses on the target distribution, Section \ref{sec-AW} provides useful tools to study the convergence of such chains. We describe several examples in detail. Section \ref{sec-Dir} deals with applications to absorbing Markov chains (or, equivalently for our purpose, chains killed at the boundary). We call such a chain Dirichlet-type by reference to the classical concept of Dirichlet boundary condition. The section has two subsections. The first provides a very general discussion of the Doob transform technique for finite Markov chains. 
The second applies the results of Section \ref{sec-AW} to Dirichlet-type chains in John domains. The main results are Theorems \ref{th-Doob1}, \ref{th-Doob2}, and \ref{th-JZd}. Section \ref{sec-IU} introduces the notion of inner-uniform domain in the context of our underlying discrete space $(\mathfrak X,\mathfrak E)$. Theorem \ref{theo-Carleson} captures a key property of the Perron-Frobenius eigenfunction $\phi_0$ in a finite inner-uniform domain. This key property is known as a {\em Carleson estimate} after Lennart Carleson. There is a vast literature regarding this estimate and its relation to the boundary Harnack principle in the context of potential theory in Euclidean domains (see, e.g., \cite{Ancona,BBB,Aik1,Aik2,Aik3} and the references and pointers given therein). Corollary \ref{cor-Carleson2} is based on the Carleson estimate of Theorem \ref{theo-Carleson} and on Theorem \ref{th-Doob1}. It provides a sharp ergodicity result for Doob-transform chains in finite inner-uniform domains. Section \ref{sec-cable} provides a proof of the Carleson estimate via transfer to the associated cable-process and Dirichlet form. Because the Carleson estimate is a deep and difficult result, it is nice to be able to obtain it from already known results. We use here a similar (and much more general) version of the Carleson estimate in the context of local Dirichlet spaces developed in \cite{LierlLSCOsaka,LierlLSCJFA} following \cite{Aik1,Aik4} and \cite{Gyrya}. We apply to the eigenfunction $\phi_0$ the technique of passage from the discrete graph to the continuous cable space. This requires an interesting argument. (See Proposition \ref{pro-*}.) Section~\ref{sec-HPWb} provides more refined results regarding the iterated kernels $K^t_U$ (chain killed at the boundary) and $K^t_{\phi_0}$ (associated Doob-transform chain) in the form of two-sided bounds valid at all times and all space location in $U$. A key result is Corollary \ref{cor:exit-time-estimate} which gives, for inner-uniform domains, a sharp two-sided bound on $\mathbf{P}_x(\tau_U>t)$, the probability that the process $(X_t)_{t>0}$ started at $x$ has not yet exited $U$ at time $t$. The final section, Section \ref{sec:examples}, describes several explicit examples in detail. \section{John domains and Whitney coverings} \setcounter{equation}{0} \label{sec-JDW} This section is concerned with notions of a purely geometric nature. Our basic underlying structure can be described as a finite or countable set $\mathfrak X$ (vertex set) equipped with an edge set $\mathfrak E$ which, by definition, is a set of pairs of distinct vertices $\{x,y\}\subset \mathfrak X$. We write $x\sim y$ whenever $\{x,y\}\in \mathfrak E$ and say that these two points are neighbors. By definition, a path is a finite sequence of points $\gamma=(x_0,\dots,x_m)$ such that $\{x_i,x_{i+1}\}\in \mathfrak E$, $0\le i <m$. We will always assume that $\mathfrak X$ is connected in the sense that, for any two points in $\mathfrak X$, there exists a finite path between them. The graph-distance function $d$ assigns to any two points $x,y$ in $\mathfrak X$ the minimal length of a path connecting $x$ to $y$, namely, $$ d(x,y)=\inf \{m \ : \ \exists \ \gamma=(x_i)_0^m, \ x_0=x, \ x_m=y, \ \{x_i,x_{i+1}\}\in \mathfrak E\}.$$ We set $$B(x,r)=\{y:d(x,y)\le r).$$ This is the (closed) metric ball associated with the distance $d$. Note that the radius is a nonnegative real number and $B(x,r)=\{x\}$ for $r\in [0,1)$. 
\begin{nota} Given a ball $E=B(x,r)$ with specified center and radius and $\kappa>0$, let $\kappa E$ denote the ball $\kappa E=B(x,\kappa r)$. \end{nota} \begin{rem} We think of $\mathfrak E$ as producing a ``geometric structure'' on $\mathfrak X$. Note that loops are not allowed since the elements of $\mathfrak E$ are pairs, i.e., subsets of $\mathfrak X$ containing two distinct elements. This does not mean that the Markov chains we will consider are forbidden to have positive holding probability at some vertices. The example in the introduction, Figure~\ref{D0}, does have positive holding at some vertices (specifically, at $(N,q)$ for $1\le q\le N$) so the associated Markov chain is allowed to have loops even though the geometric structure does not. \end{rem} Let $U$ be a subset of $\mathfrak X$. By definition, the boundary of $U$ is $$\partial U=\{y\in \mathfrak X\setminus U: \exists\, x\in U \mbox{ such that } \{x,y\}\in \mathfrak E\}.$$ Note that this is the {\em exterior boundary} of $U$ in the sense that it sits outside of $U$. We say that $U$ is connected if, for any two points $x,y$ in $U$, the exists a finite path $\gamma_{xy}=(x_0,x_1,\dots ,x_m)$ with $x_0 = x$ and $x_m = y$ such that $x_i\in U$ for~$0\le i\le m$. A domain $U$ is a connected subset of $\mathfrak X$. We will be interested here in finite domains. \begin{defin} Given a domain $U \subseteq \mathfrak X$, define the intrinsic distance $d_U$ by setting, for any $x,y\in U$, $$d_U(x,y)=\inf\{ m \ : \ \exists (x_i)_0^m, \ x_0=x, \ x_m=y, \ \{x_i,x_{i+1}\}\in \mathfrak E, \ x_i\in U, \ 0\le i<m\}.$$ In words, $d_U(x,y)$ is the graph distance between $x$ and $y$ in the subgraph $(U, \mathfrak E_U)$ where $\mathfrak E_U=\mathfrak E\cap (U\times U)$. It is also sometimes called the inner distance (in $U$). Let $$B_U(x,r)=\{y\in U: d_U(x,y)\le r\}$$ be the (closed) ball of radius $r$ around $x$ for the intrinsic distance $d_U$. \end{defin} In the example of Figure~\ref{D0}, we set $$\mathfrak X=\mathfrak X_N=\{(p,q): 0\le q\le p\le N\}.$$ The edge set $\mathfrak E=\mathfrak E_N$ is inherited from the square grid and $$U=U_N=\{(p,q): 0<q<p\le N\}.$$ It follows that the boundary $\partial U$ of $U$ (in $(\mathfrak X,\mathfrak E)$) is $$\partial U=\partial U_N= \{(p,p), (p,0): 0\le p\le N\}.$$ \subsection{John domains} \label{sec-JD} The following definition introduces a key geometric notion which is well known in the areas of harmonic analysis, geometry, and partial differential equations. \begin{defin}[John domain] \label{def-John} Given $\alpha, R>0$, we say that a finite domain $U \subseteq \mathfrak X$, equipped with a point $o\in U$, is in $J(\mathfrak X, \mathfrak E, o,\alpha, R)$ if the domain $U$ has the property that for any point $x\in U$ there exists a path $\gamma_x=(x_0,\dots, x_m)$ of length $m_x$ contained in $U$ such that $x_0=x$ and $x_m=o$, with $$ \max_{x\in U}\{m_x\}\le R \;\mbox{ and } \;d(x_i, \mathfrak X\setminus U) \ge \alpha (1+i),$$ for $0\le i\le m_x.$ When the context makes it clear what underlying structure $(\mathfrak X,\mathfrak E)$ is considered, we write $J(o,\alpha, R)$ for $J(\mathfrak X, \mathfrak E, o,\alpha, R)$.\end{defin} We can think of a John domain $U$ as being one where any point $x$ is connected to the central point $o$ by a carrot-shaped region, which is entirely contained within $U$. The $x$ is the pointy end of the carrot and the point $o$ is the center of the round, fat end of the carrot. See Figure~\ref{Banana} for an illustration. 
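For small examples, the John condition can be tested numerically. The sketch below (purely illustrative; the function names are ours) uses shortest paths in $U$ from each $x$ to $o$ as the candidate family of John-paths and returns a pair $(\alpha,R)$ for which $U\in J(o,\alpha,R)$ is certified by these paths; better chosen paths may of course yield a larger $\alpha$. It assumes the ambient graph is finite and that $U$ is a proper, connected subset.
\begin{verbatim}
# Illustrative sketch only: certify membership of a finite domain U in
# J(o, alpha, R), using shortest paths in U from each x to o as John-paths.
from collections import deque

def bfs(adj, sources, inside=None):
    """Distances from a set of source vertices, optionally restricted to
    paths staying in 'inside'."""
    dist, queue = {s: 0 for s in sources}, deque(sources)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if (inside is None or v in inside) and v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def john_parameters(adj, U, o):
    boundary = {y for x in U for y in adj[x] if y not in U}  # exterior boundary
    delta = bfs(adj, boundary)            # delta(x) = d(x, X \ U)
    dist_o = bfs(adj, {o}, inside=U)      # intrinsic distance d_U(., o)
    parent = {o: None}                    # a breadth-first tree rooted at o in U
    for x in sorted(U, key=lambda z: dist_o[z]):
        for y in adj[x]:
            if y in U and dist_o[y] == dist_o[x] - 1:
                parent[x] = y
                break
    alpha, R = 1.0, 0
    for x in U:
        path, z = [], x                   # candidate John-path (x_0,...,x_m)
        while z is not None:
            path.append(z)
            z = parent[z]
        R = max(R, len(path) - 1)
        alpha = min(alpha, min(delta[p] / (1 + i) for i, p in enumerate(path)))
    return alpha, R          # U is in J(o, alpha, R) witnessed by these paths
\end{verbatim}
For a family of domains, plotting the returned value of $\alpha$ against the size of the domain gives a quick (though only one-sided) indication of whether the family can be uniformly John.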
Within the lattice $\mathbb Z^d$, there are many examples of John domains: the lattice balls, the lattice cubes, and the intersection of Euclidean balls and Euclidean equilateral triangles with the lattice. See also Examples \ref{exa-T}, \ref{exa-mb}, and \ref{ex:convex} and Figure \ref{D3} below. Domains having large parts connected through narrow parts are not John. These examples, however, are much too simple to convey the subtlety and flexibility afforded by this definition. \begin{figure} \caption{The forty-five degree finite cone in $\mathbb Z^2$ with the ``center'' marked as a red $o$.} \label{D1} \end{figure} \begin{defin}[$\alpha$-John domains] Given $(\mathfrak X,\mathfrak E)$, let $J(\alpha)=J(\mathfrak X,\mathfrak E,\alpha)$ be the set of all domains $U\subset \mathfrak X$ which belong to $J(\mathfrak X,\mathfrak E, o,\alpha,R)$ for some fixed $o\in U$ and $R>0$. A finite domain in $J(\alpha)$ is called an $\alpha$-John domain. \end{defin} \begin{defin}[John center and John radius]For any domain $U\in J(\alpha)$, there is at least one pair $(o,R)$, with $o\in U$ and $R>0$, such that $U\in J(o,\alpha,R)$. Given such a John center $o$, let $R(U,o,\alpha)$ be the smallest $R$ such that $U\in J(o,\alpha, R)$. Assuming $\alpha$ is fixed, we call $R(U,o,\alpha)$ the John-radius of $U$ with respect to $o$. \end{defin} \begin{rem} If we apply the second condition of Definition \ref{def-John} to any point in $U$ at distance $1$ from the boundary, we see that $\alpha\in (0,1]$. \end{rem} \begin{rem} \label{rem:johnradius} Given $U\in J(\mathfrak X,\mathfrak E, \alpha)$, define the internal radius of $U$, viewed from~$o$, as $$\rho_o(U)=\max\{d_U(o,x) : x\in U\}.$$ Then, the John-radius $R(U,o,\alpha)$ is always greater than or equal to $\rho_o(U$), i.e., $R(U,o,\alpha) \geq \rho_o(U)$. Furthermore, we always have $$ \min\{ d_U(o,z):z\in \mathfrak X\setminus U\}= d(o, \mathfrak X \setminus U)\ge \alpha (1+ R ( U,o, \alpha)),$$ which implies that $$\alpha (1+ R(U,o,\alpha))\le 1+\rho_o(U)\le 1+ R(U,o,\alpha).$$ In words, when $U\in J(\alpha)$ is not a singleton, the John-radius of $U$ and $\rho_o(U)$ are comparable, namely, $$ \frac{\alpha}{2} R(U,o,\alpha)\le \rho_o(U)\le R(U,o,\alpha).$$ We can also compare $\rho_o(U)$ to the diameter of the finite metric space $(U,d_U)$. Namely, we have $$ \rho_o(U)\le \mbox{diam}(U,d_U)\le 2\rho_o(U).$$ \end{rem} \begin{rem} \label{rem-MS} Let us compare this definition of a discrete John domain to the continuous version introduced in the classical reference \cite{MS}. In \cite{MS}, a Euclidean domain $D$ is an $(\alpha,\beta)$-John domain (denoted $D\in \mathbf J(\alpha,\beta)$) if there exists a point $o\in D$ such that every $x\in D$ can be joined to $o$ by a rectifiable path $\gamma_x: [0,T_x]$ (paramatrized by arc-length) with $\gamma_x(0)=x$, $\gamma_x(T_x)=o$, $T_x\le \beta$ and $d_2(\gamma_x(t),\partial D)\ge \alpha (t/T_x)$ for $t\in [0,T_x]$. (Here $d_2$ is the Euclidean distance.) If one ignores the small modifications made in our definition to account for the discrete graph structure, the class $J(o, \alpha,R)$ is the analogue of the class $\mathbf J(\alpha R, R)$ with an explicit center $o$. The smallest $R$ such that $D$ belong to $\mathbf J(\alpha R,R)$ with a given center $o$ would be the analogue of our John-radius with respect to $o$. \end{rem} \begin{exa} \label{exa-T}Consider the example depicted in Figure~\ref{D1}. From the definition of John domain, one can see that it is best to choose $o$ far from the boundary. 
We pick $o=(N, \lfloor N/2\rfloor)$, depicted in red in Figure~\ref{D1}. For each point $x=(p,q)\in U$ we will define a (graph) geodesic path $\gamma_x$ joining $x$ to $o$ in $U$ that satisfies the conditions of a John domain. First, draw two straight lines $L$ and $L'$. The first line $L$, shown in red in Figure~\ref{D1}, joins $(0,0) $ to $(N,N/2)$. This is the line with equation $p-2q=0$ and the integer points on this line are at equal graph-distance from the ``boundary lines'' $\{(p,q): q=0, p = 0,1,\dots, N\}$ and $\{(p,p): p = 0,1,\dots, N\}$ as shown in blue in Figure~\ref{D1}. The line $L'$, shown in green, has the equation $p-2q = 1$. For any integer point $x=(p,q)$ on the line $L$, there is graph-geodesic path $\gamma_x$ joining $x$ to $o$ obtained by alternatively moving two steps right and one step up. Similarly, for any integer point $x=(p,q)$ on the line $L'$, there is a graph-geodesic path $\gamma_x$ joining $x$ to $o$ by moving right, then up, to reach a point $x'$ on $L$. From there, following $\gamma_{x'}$ to $o$. For any integer point $x$ in $U$ above $L$, define $\gamma_x$ by moving straight right until reaching an integer point $x'$ on $L$, then follow $\gamma_{x'}$ to $o$. For those $x\in U$ below $L$, move straight up until reaching an integer point $x'$ on $L'$. From there, follow the path $\gamma_{x'}$ to $o$. Along any of the paths $\gamma_x=(x_0,\dots, x_m)$, with $x_0=x\in U$ and $x_m=o$, $d(x_{i},\mathfrak X\setminus U)$ is non-increasing and $d(x_{3i},\mathfrak X\setminus U) \ge 1+ i$. It follows that $d(x_{j},\mathfrak X\setminus U) \ge \frac{1}{3}(1+ i)$. This proves that $U$ is a John domain with respect to $o$ with parameter $\alpha=1/3$ and John-radius $R(U,o, \frac{1}{3})= \rho_o(U)=N+[N/2] -3$. \end{exa} \begin{exa}[Metric balls] \label{exa-mb} Any metric ball $U=B(o,R)$ is a 1-John domain, i.e., $$B(o,R)\in J(\mathfrak X,\mathfrak E, o,1, R).$$ This is a straightforward but important example. For each $x\in B(o,r)$, fix a path of minimal length $\gamma_x=(x_0=x,x_1,\dots, x_{m_x}=o)$, $m_x\le R$, joining $x$ to $o$ in $(\mathfrak X,\mathfrak E)$. Then, $d(x_i, \mathfrak X\setminus B(o,R)) \ge 1+i$ because, otherwise, there would be a point $z \notin B(o,R)$ and at distance at most $R$ from $o$, contradicting the definition of a ball. \end{exa} \begin{exa}[Convex sets]\label{ex:convex} In the classical theory of John domains in Euclidean space, convex sets provide basic examples. Round, convex sets have a good John constant ($\alpha$ close to $1$) whereas long, narrow ones have a bad John constant ($\alpha$ close to $0$). We will describe how this theory applies in the case of discrete convex sets, but first, let us review the continuous case. Here is how the definition of Euclidean John domain given in \cite{MS} applies to Euclidean convex sets. A Euclidean convex set $C$ belongs to $\mathbf J(\alpha,\beta)$ (see \cite[Definition 2.1]{MS} and Remark \ref{rem-MS} above) if and only if there exists $o\in C$ such that $$B_2(o,\alpha) \subset C\subset B_2(o,\beta).$$ Here the balls are Euclidean balls and this is indicated by the subscript $2$, referencing the $d_2$ metric. This condition is obviously necessary for $C\in \mathbf J(\alpha,\beta)$. 
To see that it is sufficient, observe that along the line-segment $\gamma_{xy}$ between any two points $x,y\in C$, parametrized by arc-length and of length $T$, the function $f(t)=d_2(\gamma_{xy}(t),C^c)$, defined on $[0,T]$, is concave (it is the minimum of the distances to the supporting hyperplanes defining $C$). Hence, if we assume that $B(y,\alpha)\subset C$, either $d_2(y,C^c) < d_2(x,C^c)$ and then $d_2(\gamma_{xy}(t),C^c)\ge \alpha \ge \alpha \frac{t}{T}$, or $d_2(y,C^c) \ge d_2(x,C^c)$ and $$d_2(\gamma_{xy}(t),C^c) - d_2(x,C^c) \ge \frac{t}{T} (d_2(y,C^c) -d_2(x,C^c)) $$ which gives $$d_2(\gamma_{xy}(t),C^c) \ge \alpha \frac{t}{T} + \left(1-\frac{t}{T}\right) d_2(x,C^c) \ge \alpha \frac{t}{T}.$$ To transition to discrete John domains, we first consider the case of finite domains in $\mathbb Z^2$ because it is quite a bit simpler than the general case (compare \cite[Section 6]{DSCNash} and\cite{Virag}). In $\mathbb Z^2$, we can show that any finite sub-domain $U$ of $\mathbb Z^2$ (this means we assume that $U$ is graph connected) obtained as the trace of a convex set $C$ such that $B_2(o,\alpha R)\subseteq C\subseteq B(o,R)$ for some $\alpha\in (0,1)$ and $R>0$ is a $\alpha'$-John domain with $\alpha'$ depending only on $\alpha$. \begin{figure} \caption{A finite discrete ``convex subset'' of $\mathbb Z^2$} \label{fig-ConvZ2} \end{figure} To deal with higher dimensional grids ($d>2$), let us adopt here the definition put forward by B\'alint Vir\'ag in \cite{Virag}: a subset $U$ of the square lattice $\mathbb Z^d$ is {\em convex} if and only if there exists a convex set $C\subset \mathbb R^d$ such that $U=\{x\in \mathbb Z^d: d_\infty(x,C)\le 1/2\}$ where $d_\infty(x,y)=\max\{|x_i-y_i|: 1\le i\le d\}$. The set $C$ is called a base for $U$. We will use three distances on $\mathbb R^d$ and $\mathbb Z^d$: the max-distance $d_\infty$, the Euclidean $L^2$-distance $d_2(x,y)=\sqrt{\sum_1^d |x_i-y_i|^2}$ and the $L^1$-distance $d_1(x,y)=\sum_1^d |x_i -y_i|$ which coincides with the graph distance on $\mathbb Z^d$. In \cite{Virag}, B. Vir\'ag shows that, given a subset $U$ of $\mathbb Z^d$ that is convex in the sense explained above, for any two points $x,y\in U$, there is a discrete path $\gamma_{xy} =(z_0,\dots,z_m)$ in $U$ such that: (a) $z_0=x,z_m=y$; (b) $\gamma_{xy}$ is a discrete geodesic path in $\mathbb Z^d$; and (c) if $L_{xy}$ is the straight-line passing through $x$ and $y$ then each vertex $z_i$ on $\gamma_{xy}$ satisfies $d_\infty(z_i,L_{xy})<1$. We will use this fact to prove the following proposition. \begin{pro} \label{pro-Conv}Let $U\subset \mathbb Z^d$ be convex in the sense explained above, with base $C$. Suppose there is a point $o$ in $U$ and positive reals $\alpha, R$ such that \begin{equation} \label{eq-alphaConv} B_2(o,\alpha R)\subset C \mbox{ and }C+B_\infty(0,1) \subset B_2(o,R),\end{equation} where $C+B_\infty(0,1) = \{y\in \mathbb R^d: d_\infty(y,C)\le 1\}$. Then the set $U$ is in $J(o, \alpha', R')$ with $\alpha' = \alpha/(6d\sqrt{d}) $ and $\alpha R\le R' \le \sqrt{d}R $, where $d$ is the dimension of the underlying graph $\mathbb{Z}^d$. \end{pro} The dimensional constants in this statement are related to the use of three metrics, namely, $d_1,d_2$ and $d_\infty$. \begin{rem} In practice, this definition is more flexible than it first appears because one can choose the base $C$. 
Moreover, once a certain finite domain $U$ is proved to be an $\alpha_0$-John-domain in $\mathbb Z^d$, it is easy to see that we are permitted to add and remove, in an arbitrary fashion, lattice points that are within a fixed distance $r_0$ of the boundary $\partial U$ of $U$ in $\mathbb Z^d$, as long as we preserve connectivity. The cost is to change the John-parameter $\alpha_0$ to $\bar{\alpha}_0$ where $\bar{\alpha}_0$ depends only on $r_0$ and $\alpha_0$. \end{rem} \begin{proof}[Proof of Proposition {\ref{pro-Conv}}] The convexity of $C$ (together with that of the unit cube $B_\infty(0,1)$) implies the convexity of $C'=C+B_\infty(0,1)$. Thus, by hypothesis, the straight-line segments $l_x$ joining any point $x\in C'$ to $o$ witness that $C'\in \mathbf J(\alpha R,R)$. For $x\in U$, the construction in \cite{Virag} provides a discrete geodesic path $\gamma_x=(x_0,\dots, x_{m_x})$ (of length $m_x$) in $\mathbb Z^d$ joining $x$ to $o$ within the set $U$ and which stays within $d_\infty$-distance $1$ of $l_x$. As usual, we parametrize $l_x$ by arc-length so that $l_x(0)=x$, $l_x(T)=o$, $T=T_x$. For each point $x_i\in \gamma_x$, we pick a point $z_i$ on $l_x$ such that $d_\infty(x_i,z_i)=d_\infty(x_i,l_x)<1$ and define $t_i\in [0,T]$ by $z_i=l_x(t_i)$. For each $x \in U$, $$d_1(x,o) \le \sqrt{d} d_2(x,o) \le \sqrt{d} R .$$ To obtain a lower bound on $d_1(x_i,\mathbb Z^d \setminus U)$, observe that $$ d_1(x_i, \mathbb Z^d \setminus U) \ge d_1(x_i, \mathbb{R}^d\setminus C) $$ because $C$ is contained in $U+ B_{\infty}(0,\frac{1}{2})$. By definition of $C'$, $d_1(x_i,\mathbb R^d\setminus C)\ge d_1(x_i, \mathbb R^d\setminus C')- d.$ Hence, we have $$d_1(x_i, \mathbb Z^d\setminus U)\ge d_2(x_i,\mathbb R^d\setminus C')-d.$$ Recall that $z_i=l_x(t_i)$ is on the line-segment from $x$ to $o$ and at $d_\infty$-distance less than $1$ from $x_i$. Further, we know that $$d_2(z_i,\mathbb R^d\setminus C')\ge \alpha t_i $$ because $C'$ is convex and $B_2(o,\alpha R)\subset C'\subset B_2(o,R)$. Also, we have $$t_i=d_2(x,z_i) \ge d_2(x,x_i) -d_2(x_i,z_i) \ge \frac{d_1(x,x_i)}{\sqrt{d}} - \sqrt{d} = \frac{i-d}{\sqrt{d}}.$$ Putting these estimates together gives $$d_1(x_i, \mathbb Z^d\setminus U)\ge \frac{\alpha}{\sqrt{d}}\left(i - d-\frac{d\sqrt{d}}{\alpha}\right).$$ Since, by construction, $d_1(x_i, \mathbb Z^d\setminus U)\ge 1$ for all $i$, it follows from the previous estimate that $$d_1(x_i, \mathbb Z^d\setminus U)\ge \frac{\alpha}{6d\sqrt{d}}\left(1+ i \right)$$ for all $0\le i\le m_x$. \end{proof} \end{exa} Convexity is certainly not necessary for a family of connected subsets of $\mathbb Z^d$ to be $\alpha$-John domains with a uniform $\alpha\in (0,1)$. Figure~\ref{D3} gives an example of such a family that is far from convex in any sense. If we denote by $U_N$ the set depicted for a given $N$ and let $o_N=(\lfloor 2N/3 \rfloor,\lfloor N/2\rfloor)$ be the chosen central point, then there are positive reals $\alpha,c,C$, independent of $N$, such that $U_N$ is in $J(o_N,\alpha, R)$ with $cN\le R\le CN$. Figure~\ref{D4} gives an example of a family of sets that is NOT uniformly in $J(\alpha)$, for any $\alpha>0$.
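Before moving on, we note that Vir\'ag's notion of discrete convexity is easy to experiment with. The following sketch (purely illustrative; the function names and numerical values are ours) builds the discrete trace of a Euclidean ball and records the John parameters guaranteed by Proposition \ref{pro-Conv} for this special choice of base.
\begin{verbatim}
# Illustrative sketch only: the trace U = {x in Z^d : d_inf(x, C) <= 1/2}
# of the Euclidean ball C = B_2(o, R0), together with the John parameters
# guaranteed by the proposition above for this special choice of base.
import itertools, math

def trace_of_euclidean_ball(o, R0):
    d = len(o)
    lo = int(math.floor(min(o) - R0 - 1))
    hi = int(math.ceil(max(o) + R0 + 1))
    U = set()
    for x in itertools.product(range(lo, hi + 1), repeat=d):
        # the cube x + [-1/2,1/2]^d meets B_2(o,R0) iff the Euclidean
        # distance from o to that cube is at most R0
        gap = math.sqrt(sum(max(abs(xi - oi) - 0.5, 0.0) ** 2
                            for xi, oi in zip(x, o)))
        if gap <= R0:
            U.add(x)
    return U

o, R0 = (0, 0), 7.0
d = len(o)
U = trace_of_euclidean_ball(o, R0)
R = R0 + math.sqrt(d)        # C + B_inf(0,1) is contained in B_2(o, R)
alpha = R0 / R               # B_2(o, alpha*R) = B_2(o, R0) is contained in C
alpha_prime = alpha / (6 * d * math.sqrt(d))   # guaranteed John parameter
print(len(U), round(alpha, 3), round(alpha_prime, 4))
\end{verbatim}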
\begin{figure} \caption{A non-convex example of John domain, with the boundary points indicated in blue, and center $o$ indicated in red.} \label{D3} \end{figure} \begin{figure} \caption{A family of subsets that are \emph{not} uniformly John domains: there is no $\alpha>0$ such that every member of the family is an $\alpha$-John domain.} \label{D4} \end{figure} The following lemma shows that any inner-ball $B_U(x,r)$ in a John domain contains a ball from the original graph with roughly the same radius. When the graph is equipped with a doubling measure (see Section \ref{sec-DMPN}), this shows that the inner balls for the domain $U$ have volume comparable to that of the original balls. \begin{lem} Given $U \in J(\mathfrak{X},\mathfrak{E},\alpha)$, recall that $\rho_o(U)=\max\{d_U(o,x): x\in U\}$. For any $x\in U$ and $r\in [0,2\rho_o(U)]$, there exists $x_r \in U$ such that $B(x_r, \alpha r/8) \subset B_U(x,r)$. For $r\ge 2\rho_o(U)$, we have $B_U(x,r)=U$. \end{lem} \begin{proof} The statement concerning the case $r\ge 2\rho_o(U)$ is obvious. We consider three cases. First, consider the case when $o\in B_U(x,r/4)$ and $\rho_o(U)\le r <2\rho_o(U)$. Then $B(o,\alpha\rho_o(U)/4) \subseteq B_U(x, r)$ and we can set $x_r=o$. Second, assume that $o\in B_U(x,r/4)$ and $ r< \rho_o(U)$. Recall from Remark~\ref{rem:johnradius} that $d(o,\mathfrak X\setminus U)\ge \alpha(1+\rho_o(U))$. It follows that $B(o,\alpha r/8) \subset U$ and $B(o,\alpha r/8)\subset B_U(x,r)$. We can again set $x_r=o$. Finally, assume that $o\not\in B_U(x,r/4)$. If $r<8$, we can take $x_r=x$. When $r\ge 8$, let $\gamma_x=(z_0=x,z_1,\dots,z_m=o)$ be the John-path from $x$ to $o$ and let $x_r=z_i$, where $z_i$ is the first point on $\gamma_x$ such that $z_{i+1}\not\in B_U(x, \lfloor r/4\rfloor )$. By construction, we have $d_U(x,x_r)\le \lfloor r/4\rfloor +1 \le r/2$, the index $i$ satisfies $i\ge \lfloor r/4\rfloor $, and $$\delta(x_r)\ge \alpha(1+\lfloor r/4\rfloor)\ge \alpha r/4.$$ Therefore $B(x_r, \alpha r/8) \subset U$ and $B_U(x_r, \alpha r/8)=B(x_r,\alpha r/8)\subset B_U(x,r).$~\end{proof} \subsection{Whitney coverings} \label{sec-WC} Let $U$ be a finite domain in the underlying graph $(\mathfrak X,\mathfrak E)$ (this graph may be finite or countable). Fix a small parameter $\eta\in (0,1)$. For each point $x\in U$, let $$B^\eta_x =\{y \in U: d(x,y)\le \eta \delta(x)/4\}$$ be the ball centered at $x$ of radius $ r(x)=\eta \delta (x)/4 $ where $$\delta(x)= d(x,\mathfrak X\setminus U)$$ is the distance from $x$ to $\mathfrak X \setminus U$, the boundary of $U$ in $(\mathfrak X,\mathfrak E)$. The finite family $\mathcal F=\{B^\eta_x: x\in U\}$ forms a covering of $U$. Consider the set of all sub-families $\mathcal V$ of $\mathcal F$ with the property that the balls $B^\eta_x$ in $\mathcal V$ are pairwise disjoint. This is a partially ordered finite set and we pick a maximal element $$\mathcal W =\{B^\eta_{x_i}: 1\le i\le M\},$$ which, by definition, is a \emph{Whitney covering} of $U$. Note that the Whitney covering of $U$ is not a covering itself, but it generates a covering, because the tripled balls of $\mathcal W$ cover $U$. Because the balls in $\mathcal W$ are pairwise disjoint, this is a relatively efficient covering. The size $M$ of this covering will never appear in our computation and is introduced strictly for convenience. This integer $M$ depends on $U$, $\eta$, and on the particular choice made among all maximal such sub-families. Whitney coverings are useful because they allow us to do manipulations on balls that form a covering---such as doubling their size---without leaving the domain $U$.
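The greedy selection below is one concrete way of producing such a maximal element; it is given only as an illustration (the helper names are ours) and assumes the ambient graph is finite and $U$ is a proper subset. Processing the balls in order of decreasing radius is convenient but not required: any processing order produces a maximal, hence admissible, family.
\begin{verbatim}
# Illustrative sketch only: a greedy construction of a Whitney covering,
# i.e. a maximal pairwise-disjoint subfamily of the balls B^eta_x, x in U.
from collections import deque

def closed_ball(adj, x, r):
    """Closed ball {y : d(x,y) <= r} in the ambient graph."""
    dist, queue = {x: 0}, deque([x])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist and dist[u] + 1 <= r:
                dist[v] = dist[u] + 1
                queue.append(v)
    return set(dist)

def whitney_covering(adj, U, eta=1/12):
    boundary = {y for x in U for y in adj[x] if y not in U}
    delta, queue = {b: 0 for b in boundary}, deque(boundary)
    while queue:           # delta(x) = d(x, X \ U), by BFS from the boundary
        u = queue.popleft()
        for v in adj[u]:
            if v not in delta:
                delta[v] = delta[u] + 1
                queue.append(v)
    radius = {x: eta * delta[x] / 4 for x in U}
    selected, covered = [], set()
    for x in sorted(U, key=lambda z: -radius[z]):   # large balls first (optional)
        Bx = closed_ball(adj, x, radius[x])         # lies in U since radius < delta(x)
        if Bx.isdisjoint(covered):   # keep B^eta_x only if it meets no selected ball
            selected.append((x, radius[x]))
            covered |= Bx
    return selected        # centers x_i and radii r(x_i) of the covering
\end{verbatim}
The tripled balls built from the output then cover $U$, as noted above.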
Note also that, for any $k<4/\eta$, the closed ball $\{y: d(x,y)\le k r(x)\}$ is entirely contained in $U$. \begin{figure} \caption{A Whitney covering of the forty-five degree cone (for a fixed value of the parameter $\eta$).} \label{fig:whitney-covering} \end{figure} In the above (standard, discrete) version of the notion of Whitney covering, the largest balls are of size comparable to $\eta \max\{d(x,\mathfrak X\setminus U): x\in U\}$. In the following $s$-version, where $s\ge 1$ is a (scale) parameter, the largest balls have radius at most $s$. Fix $s\ge 1$ and a small parameter $\eta\in (0,1)$ as before. For each point $x\in U$, let $$B^{s,\eta}_x =\{y: d(x,y)\le \min\{s,\eta \delta(x)/4\}\}$$ be the ball centered at $x$ of radius $ r(x)=\min\{s,\eta \delta (x)/4\} $. Note as before that, for any $k<4/\eta$, the closed ball $\{y: d(x,y)\le k r(x)\}$ is entirely contained in $U$. The finite family $\mathcal F_s=\{B^{s,\eta}_x: x\in U\}$ forms a covering of $U$. Consider the set of all sub-families $\mathcal V_s$ of $\mathcal F_s$ with the property that the balls $B^{s,\eta}_x$ in $\mathcal V_s$ are pairwise disjoint. These subfamilies form a partially ordered finite set and, just as we did with $\mathcal{F}$, we pick a maximal element $$\mathcal W_s =\{B^{s,\eta}_{x_i}: 1\le i\le M\},$$ which is an \emph{$s$-Whitney covering}. See Figure~\ref{fig:whitney-corner} for an example. \begin{figure} \caption{The left figure shows a standard Whitney covering of the upper right corner of a square (boundary indicated in blue); the right figure shows a corresponding $s$-Whitney covering, in which no ball has radius larger than $s$.} \label{fig:whitney-corner} \end{figure} As before, the size $M$ of this covering will never appear in our computations. It will be useful to split the family $\mathcal W_s$ into its two natural components, $\mathcal W_s=\mathcal W_{=s}\cup \mathcal W_{<s}$ where $\mathcal W_{=s}$ is the subset of $\mathcal W_s$ of those balls $B(x_i,r(x_i))$ such that $r(x_i)=s$. \begin{rem}\label{rem-Whit} When the domain $U$ is finite (in a more general context, bounded) any Whitney covering ${\mathcal W}_s$ with parameter $s$ large enough, namely $$s\ge \eta (\rho_o(U)+1)/4,$$ is simply a Whitney covering $\mathcal W$ because $\min\{s,\eta\delta(x)/4\}=\eta \delta(x)/4$ for all $x\in U$. It follows that properties that hold for all $\mathcal W_s$, $s>0$, also hold for any standard Whitney covering $\mathcal W$. \end{rem} \begin{lem}[Properties of $\mathcal W_s$, $s\ge 1$] \label{lem-W} For any $s>0$, the family $\mathcal W_s$ has the following properties. \begin{enumerate} \item The balls $B^{s,\eta}_{x_i}=B(x_i,r(x_i))$, $1\le i\le M$, are pairwise disjoint and $$U=\bigcup_1^M B(x_i, 3r(x_i)).$$ In other words, the tripled balls cover $U$. \item For any $\rho \le 4/\eta$ and any $z\in B(x_i, \rho r(x_i))$, $$\delta(x_i) (1- \rho\eta /4) \le \delta (z) \le (1+\rho\eta/4) \delta (x_i) $$ and $$(1- \rho\eta /4) r(x_i) \le r (z) \le (1+\rho\eta/4) r(x_i).$$ \item For any $\rho \le 2/\eta$, if the balls $B(x_i, \rho r(x_i))$ and $B(x_j, \rho r(x_j))$ intersect then $$ \frac{1}{3}\le \frac{1-\rho\eta/4}{1+\rho\eta/4}\le \frac{\delta(x_i)}{\delta(x_j)}\le \frac{ 1+\rho \eta/4}{1-\rho\eta/4}\le 3.$$ \end{enumerate} \end{lem} \begin{proof} We prove the first assertion. Consider a point $z\in U$ and suppose, for the sake of contradiction, that $z\not\in \bigcup_1^M B(x_i, 3r(x_i))$. Since $\mathcal W_s$ is maximal, the ball $B^{s,\eta}_z$ intersects $\cup_1^M B(x_i, r(x_i))$. So there is an $i \in \{1,2,\dots,M\}$ and a $y\in B^{s,\eta}_{x_i}$ such that $y\in B^{s,\eta}_z$.
By the triangle inequality, $$\delta (x_i)\ge \delta(z) - r (x_i)- r(z) \ge \delta(z)- \eta\delta(x_i)/4-\eta\delta(z)/4,$$ which yields, $$ (1+\eta/4) \delta (x_i)\ge (1-\eta/4)\delta(z),$$ and hence, $$(1+\eta/4) r (x_i)\ge (1-\eta/4) r (z).$$ It follows that $$d(x_i, z) \le r(x_i)+r(z)\le r (x_i) \left(1+\frac{ 1+\eta/4}{1-\eta/4}\right) \le \left(1+\frac{5}{3}\right)r(x_i) .$$ This contradicts the assumption that $z\not\in \cup_1^M B(x_i, 3r(x_i))$. The proofs of (2)-(3) follow the same line of reasoning. \end{proof} \section{Doubling and moderate growth; Poincar\'e and Nash inequalities} \setcounter{equation}{0} \label{sec-DMPN} In this section, we fix a background graph structure $(\mathfrak X,\mathfrak E)$ and use $x \sim y$ to indicate that $\{x,y\} \in \mathfrak{E}$. As before, let $d(x,y)$ denote the graph distance between $x$ and $y$, and let $$B(x,r)=\{y: d(x,y)\le r\}$$ be the ball of radius $r$ around $x$. (Note that balls are not uniquely defined by their radius and center, i.e., it's possible that $B(x,r) = B(\tilde{x},\tilde{r})$ for $x \neq \tilde{x}$ and $r \neq \tilde{r}$.) In addition we will assume that $\mathfrak X$ is equipped with a measure $\pi$ and, later, that $\mathfrak E$ is equipped with an edge weight $\mu=(\mu_{xy})$ defining a Dirichlet form. \subsection{Doubling and moderate growth} Assume that $\mathfrak X$ is equipped with a positive measure $\pi$, where $\pi(A)=\sum _{x\in A}\pi(x)$ for any finite subset $A$ of $\mathfrak X$. (The total mass $\pi(\mathfrak X)$ may be finite or infinite.) Denote the volume of a ball with respect to $\pi$ as $$V(x,r)=\pi(B(x,r)).$$ For any function $f$ and any ball $B$ we set $$f_B=\frac{1}{\pi(B)}\sum_Bf\pi.$$ If $U$ is a finite subset of $\mathfrak X$, then let $\pi|_U$ be the restriction of $\pi$ to $U$, i.e., $\pi|_U(x) = \pi(x) \mathbf{1}_U(x)$. We often still call this measure $\pi$. Let $\pi_U$ be the probability measure on $U$ that is proportional to $\pi|_U$, i.e., $\pi_U(x) = \frac{\pi|_U(x)}{Z}$ where $Z = \sum_{y \in U} \pi|_U(y)$ is the normalizing constant. \begin{defin}[Doubling]\label{defn-doubling} We say that $\pi$ is doubling (with respect to $(\mathfrak X,\mathfrak E)$) if there exists a constant $D$ (the doubling constant) such that, for all $x\in \mathfrak X$ and $r>0$, $$V(x,2r)\le D V(x,r).$$ \end{defin} This property has many implications. The proofs are left to the reader. \begin{enumerate} \item For any $x\sim y$, $\pi(x)\le D\pi(y)$. \item For any $x\in \mathfrak X$, $\#\{y: \{x,y\}\in \mathfrak E\}\le D^2$. \item For any $x\in \mathfrak X, r\ge s>0$ and $y\in B(x,r)$, $$\frac{V(x,r)}{V(y,s)}\le D^2 \left(\frac{\max\{1,r\}}{\max\{1,s\}}\right)^{\log_2D}.$$ \end{enumerate} We will need the following classic result for the case $p=2$. (For example, for the proofs of Theorems~\ref{th-P1} and~\ref{th-PP1}.) The complete proof is given here for the convenience of the reader. \begin{pro} \label{prop-sumballs} Let $(\mathfrak X,\mathfrak E, \pi)$ be doubling. For any $p\in [1,\infty)$, any real number $t\ge 1$, any sequence of balls $B_i$, and any sequence of non-negative reals $a_i$, we have $$\left\| \sum_i a_i \mathbf 1_{tB_i}\right\|_p \le C \left\|\sum_i a_i \mathbf 1_{B_i}\right\|_p,$$ where $C= 2(D^2p)^{1-1/p}D^{1+\log_2t}$ and $\|f\|_p=\left(\sum_{\mathfrak X} |f|^p \pi\right)^{1/p}$. \end{pro} \begin{rem} For $p=1$, the result is trivial since $\pi(tB)\le D^{\log_2(t)}\pi(B)$ for any ball $B$. 
\end{rem} \begin{proof} For any function $f$, consider the maximal function $$Mf(x)=\sup_{B\ni x}\left\{ \frac{1}{\pi(B)}\sum_{ y\in B}|f(y)| \pi(y) \right\}.$$ By Lemma \ref{lem-Max} below, $\|Mf\|_q\le C_q \|f\|_q$ for all $1<q\le +\infty$. Also, for any ball $B$, $x\in B$ and function $h\ge 0$, we have $$ \frac{1}{\pi(tB)}\sum_{y \in tB} h(y)\pi(y) \le(Mh)(x)$$ and thus $$ \frac{1}{\pi(tB)}\sum_{y \in tB} h(y)\pi(y) \le \frac{1}{\pi(B)}\sum_B (Mh)(y)\pi(y).$$ Set $$f(y)= \sum_i a_i \mathbf 1_{tB_i}(y) \;\; \text{ and } \;\; g(y)= \sum_i a_i \mathbf 1_{B_i}(y).$$ It suffices to prove that, for all functions $h\ge 0$, $|\sum f h\pi |\le C\|g\|_p\|h\|_q$, where $1/p+1/q=1$. Note that \begin{eqnarray*} \sum_{y\in \mathfrak X} f(y) h(y)\pi(y) &=&\sum_i a_i \sum_{y \in tB_i} h(y)\pi(y) \\ &\le & \sum_i a_i \frac{\pi(tB_i) }{\pi(B_i)} \sum_{y \in B_i} (Mh)(y) \pi(y) \\ &\le & D^{1+\log_2 t} \sum_i a_i \sum_{y \in B_i} (Mh)(y) \pi(y) \\ &=&D^{1+\log_2 t} \sum _{y \in \mathfrak X}\sum_i a_i \mathbf 1_{B_i} (Mh)(y) \pi(y) \\ &\le & D^{1+\log_2 t} \|g\|_p\|Mh\|_q \\ &\le &C_qD^{1+\log_2(t)} \|g\|_p\|h\|_q. \end{eqnarray*} Applying this fact with $h=f^{p/q}$ proves the desired result. \end{proof} \begin{lem}\label{lem-Max} For any $q\in (1,+\infty]$ and any $f$, the maximal function $M$ satisfies $\|Mf\|_q\le C_q\|f\|_q$ with $C_q=2 (D^{2}p)^{1-1/p}$ where $1/p+1/q=1$. \end{lem} \begin{proof} Consider the set $V^f_\lambda=\{x: Mf(x)>\lambda\}$. By definition, for each $x\in V^f_{\lambda}$ there is a ball $B_x$ such that $\frac{1}{\pi(B_x)}\sum_{B_x} |f|\pi> \lambda$. Form $$\mathcal B=\{B_x: x\in V^f_\lambda\}$$ and extract from it a set of disjoint balls $B_1,\dots, B_q$ so that $B_1$ has the largest possible radius among all balls in $\mathcal B$, $B_2$ has the largest possible radius among all balls in $\mathcal B$ which are disjoint from $B_1$. At stage $i$, the ball $B_i$ is chosen to have the largest possible radius among the balls $B_x$ which are disjoint from $B_1,\dots, B_{i-1}$. We stop when no such balls exist. We claim that the balls $3B_i$ cover $V^f_\lambda$, where $1\le i\le q$ and $q$ is the number of balls extracted. For any $x\in V^f_\lambda$, we have $B_x=B(z,r)$, for some $z$ and $r$, and $ B(z,r) \cap \left(\cup_1^q B_i\right) \neq \emptyset$. By construction, if $j$ is the first subscript such that there exists $y\in B(z,r)\cap B_j$, $r$ must be no larger than the radius of $B_j$. This implies $z\in 2B_j$ and $x\in 3B_j$. It follows from the fact that $3B_i$ cover $V^f_{\lambda}$ that $$\pi(V^f_\lambda)\le D^{2} \sum_1^q\pi(B_i) \le D^2\lambda^{-1} \sum_1^q \sum_{B_i}|f|\pi \le D^2 \lambda ^{-1}\sum_{\mathfrak{X}} |f|\pi.$$ Next observe that $ Mf \le M(f\mathbf 1_{\{|f|>\lambda/2\}})+ \lambda/2$ and thus $$\{x: Mf(x)> \lambda\}\subset \{x: M(f\mathbf 1_{\{|f|>\lambda/2\}})(x)>\lambda/2\}.$$ Therefore $\pi (Mf >\lambda)\le 2D^2\lambda^{-1} \sum_{\{|f|>\lambda/2\}} |f|\pi $. Finally, recall that $$\|h\|^q_q= q\int_0^\infty \pi(h>\lambda) \lambda^{q-1}d\lambda.$$ This gives $$\|Mf\|_q^q \le 2qD^2 \sum \int_0^{2|f|}\lambda^{q-2}d\lambda |f| \pi = \frac{qD^2 2^{q}}{q-1} \sum |f|^q\pi.$$ This gives $C_q= 2D^{2/q} \left(\frac{1}{1-1/q}\right)^{1/q}$. If $1/p+1/q=1$ then $C_q= 2(D^2p)^{1-1/p}$. \end{proof} The following notion of moderate growth is key to our approach. It was introduced in~\cite{DSCmod} for groups and in~\cite{DSCNash} for more general finite Markov chains. The reader will find many examples there.
It is used below repeatedly, in particular, in Lemma \ref{lem-Q} and Theorems \ref{th-KNU}-\ref{th-Ktilde1}-\ref{th-Ktilde2}, and in Theorems \ref{th-Doob1}-\ref{th-Doob2}-\ref{th-JZd}. \begin{defin} Assume that $\mathfrak X$ is finite. We say that $(\mathfrak X,\mathfrak E, \pi)$ has $(a,\nu)$-moderate volume growth if the volume of balls satisfies $$\forall\, r\in (0,\mbox{diam}],\;\;\frac{V(x,r)}{\pi(\mathfrak X)}\ge a \left(\frac{1+r}{\mbox{diam}}\right)^\nu,$$ where $\mbox{diam} = \max\{d(x,y) : x,y \in \mathfrak{X}\}$ is the maximal length of a shortest path between two points of $\mathfrak X$, i.e., the diameter of $(\mathfrak X,d)$. \end{defin} \begin{rem} When $\mathfrak X$ is finite and $\pi$ is $D$-doubling then $(\mathfrak X,\mathfrak E, \pi)$ has $(D^{-2},\log_2D)$-moderate growth because $$\frac{V(x,s)}{\pi(\mathfrak X)}=\frac{V(x,s)}{V(x,\mbox{diam})}\ge D^{-1}\left( \frac{\max\{1,s\}}{\mbox{diam}}\right) ^{\log_2D} \ge D^{-2}\left( \frac{1+s}{\mbox{diam}}\right) ^{\log_2D}.$$ \end{rem} Because of this remark, moderate growth can be seen as a generalization of the doubling condition. It implies that the size of $\mathfrak X$ (as measured by $\pi(\mathfrak X)$) is bounded by a power of the diameter (this can be viewed as a ``finite dimension'' condition and a rough upper bound on volume growth). It also implies that the measure of small balls grows fast enough: $V(x,s)\ge a\,\pi(\mathfrak X)\,(\mbox{diam})^{-\nu}(1+s)^\nu.$ \subsection{Edge-weight, associated Markov chains and Dirichlet forms} \label{sec-pimuK} This section introduces symmetric edge-weights $\mu_{xy}=\mu_{yx}\ge 0$ and the associated quadratic form $$\mathcal E_\mu(f,g)= \frac{1}{2}\sum_{x,y} (f(x)-f(y))(g(x)-g(y))\mu_{xy}.$$ \begin{defin}\label{defn-adapted-elliptic-subor} \begin{enumerate} \item We say that the edge-weight $\mu =(\mu_{xy})_{ x\neq y\in \mathfrak X}$ is adapted to~$\mathfrak E$ if $$\mu_{xy}>0 \mbox{ if and only if } \{x,y\}\in \mathfrak E.$$ \item We say that the edge-weight $\mu=(\mu_{xy})_{ x\neq y\in \mathfrak X}$ is elliptic with constant $P_e\in (0,\infty)$ with respect to $(\mathfrak X,\mathfrak E,\pi)$ if $$\forall \ \{x,y\}\in \mathfrak E,\;\; P_e\mu_{xy}\ge \pi(x).$$ \item We say that the edge-weight $\mu =(\mu_{xy})_{ x,y\in \mathfrak X}$ is subordinated to $\pi$ on $\mathfrak X$ if $$\forall\,x\in \mathfrak X,\;\; \sum_{y\in \mathfrak X}\mu_{xy}\le \pi(x).$$ \end{enumerate} \end{defin} \begin{rem} An adapted edge-weight $\mu$ is always such that $\mu_{xy}=0$ if $\{x,y\}\not\in \mathfrak E$, so the definition of adapted edge-weight means that $\mu$ is carried by the edge set $\mathfrak{E}$ in a qualitative sense. Ellipticity makes this quantitative in the sense that $\mu_{xy}\ge P_e^{-1}\pi(x)$. Note that, with this definition, the smaller the ellipticity constant, the better. \end{rem} \begin{rem}\label{rem:ellip-equiv} Since $\mu_{xy}=\mu_{yx}$, the ellipticity condition is equivalent to $$P_e\mu_{xy}\ge \pi(y)$$ and also to $P_e \mu_{xy}\ge \max\{\pi(x),\pi(y)\}$. \end{rem} The condition $\sum_y\mu_{xy} <+\infty$ implies immediately that the quadratic form $\mathcal E_\mu$ defined on finitely supported functions is closable with dense domain in $L^2(\pi)$. In that case, the data $(\mathfrak X,\pi,\mu)$ defines a continuous time Markov process on the state space $\mathfrak X$, reversible with respect to the measure $\pi$. This Markov process is the process associated to the Dirichlet form obtained by closing $\mathcal E_\mu$ in $L^2(\pi)$ and to the associated self-adjoint semigroup $H_t$.
See, e.g., \cite[Example 1.2.4]{FOT}. \begin{defin} Assume the the edge-weight $\mu$ is subordinated to $\pi$, i.e., $$\forall\,x\in \mathfrak X,\;\; \sum_y\mu_{xy}\le \pi(x).$$ Set \begin{equation} \label{def-Ksub} K_\mu(x,y)=\left\{\begin{array}{ccl} \mu_{xy}/\pi(x) & \mbox{ for } x\neq y,\\ 1-(\sum_{y}\mu_{xy}/\pi(x)) & \mbox{ for } x=y. \end{array}\right. \end{equation} \end{defin} Note that the condition that $\mu$ is subordinated to $\pi$ is necessary and sufficient for the semigroup $H_t$ to be of the form $H_t=e^{-t(I-K)} $ where $K$ is a Markov kernel on $\mathfrak X$. Indeed, we then have $K=K_\mu$. This Markov kernel is always reversible with respect to $\pi$. Of course, if we replace the condition $\sum_y\mu_{xy}\le \pi(x)$ by the weaker condition $\sum_y\mu_{xy}\le A\pi(x)$ for some finite $A$, then $H_t=e^{-At(I-K_{A^{-1}\mu})}$ where $A^{-1}\mu$ is the weight $(A^{-1}\mu_{xy})_{x,y\in \mathfrak X}$. \subsection{Poincar\'e inequalities} \begin{defin}[Ball Poincar\'e Inequality]\label{defn-ball-poincare} We say that $(\mathfrak X,\mathfrak E,\pi,\mu)$ satisfies the ball Poincar\'e inequality with parameter $\theta$ if there exists a constant $P$ (the Poincar\'e constant) such that, for all $x\in \mathfrak X$ and $r>0$, $$\sum_{z\in B(x,r)} |f(z)-f_B|^2\pi(z) \le P r^\theta \sum_{z,y\in B(x,r), z\sim y}|f(z)-f(y)|^2 \mu_{zy}.$$\end{defin} \begin{rem} Under the doubling property, ellipticity is somewhat related to the Poincar\'e inequality on balls of small radius. Whenever the ball of radius $1$ around a point $x$ is a star (i.e., there are no neighboring relations between the neighbors of $x$ as, for instance, in a square grid) the ball Poincar\'e inequality with constant $P$ implies easily that, at such point $x$ and for any $y\sim x$, $$\pi(y) \le P D^2 \mu_{xy}.$$ To see this, fix $y\in B(x,1)$ and apply the Poincar\'e inequality on $B(x,1)$ to the test function defined on $B(x,1)$ by $$f(x) = \begin{cases} -c & \text{ if } x \neq y \\ 1 & \text{ if } x=y, \end{cases}$$ where $c=\pi(y)/(\pi(B(x,1))-\pi(y))$ so that the mean of $f$ over $B(x,1)$ is $0$. Recall that $B(x,1)$ is assumed to be a star and note that $0\le c\le \pi(y)/\pi(x)\le D$ where $D\ge 1$ is the doubling constant. This yields $$ \pi(y)\le P(1-c)^2\mu_{xy} \le P D^2 \mu_{xy}. $$ Hence, when all balls of radius $1$ are stars then the ball Poincar\'e inequality with constant $P$ implies ellipticity with constant $P_e=D^2P$. (See Remark~\ref{rem:ellip-equiv}.) However, when it is not the case that all balls of radius $1$ are stars then the ball Poincar\'e inequality does not necessarily imply ellipticity. \end{rem} \begin{figure} \caption{A finite piece of the Vicsek graph, an infinite graph which is both a tree and a fractal graph, has volume growth of type $r^d$ with $d=\log 5/\log 3$ and satisfies the Poincar\'e inequality on balls with parameter $\theta=1+d=1+\log 5/\log 3$.} \label{fig-V2} \end{figure} \begin{defin}[Classical Poincar\'e inequality] A finite subset $U$ of $\mathfrak X$, equipped with the restrictions of $\pi$ and $\mu$ to $U$ and $\mathfrak E\cap (U\times U)$ satisfies the (Neumann-type) Poincar\'e inequality with constant $P(U)$ if and only if, for any function $f$ defined on $U$, $$\sum_{U}|f(x)-f_U|^2\pi(x) \le P(U) \mathcal E_{\mu,U}(f,f)$$ where $$\mathcal E_{\mu,U}(f,g)= \frac{1}{2} \sum_{x,y\in U} (f(x)-f(y))(g(x)-g(y)) \mu_{xy}$$ and $f_U=\pi(U)^{-1}\sum_U f\pi$. 
\end{defin} \begin{exa} Assume that $\mathfrak X$ is finite and that $(\mathfrak X,\mathfrak E,\pi,\mu)$ satisfies the ball Poincar\'e inequality with parameter $\theta$. Then, taking $r=\mbox{diam}(\mathfrak X)$ implies that $\mathfrak X$ satisfies the Poincar\'e inequality with constant $P(\mathfrak X)= 2P \mbox{diam}(\mathfrak X)^\theta$. \end{exa} \begin{defin}[$\mathcal Q$-Poincar\'e Inequality] Let $\mathcal Q=\{Q(x,r): x \in \mathfrak X, r>0\}$ be a given collection of finite subsets of $\mathfrak X$. We say that $(\mathfrak X,\mathfrak E,\pi,\mu)$ satisfies the $\mathcal Q$-Poincar\'e inequality with parameter $\theta$ if there exists a constant $P$ such that for any function $f$ with finite support and $r>0$, $$\sum_x |f(x)-Q_rf(x)|^2\pi(x)\le P r^\theta \mathcal E_\mu(f,f)$$ where $$\mathcal{E}_{\mu}(f,g) = \frac{1}{2}\sum_{x,y \in \mathfrak{X}} (f(x) - f(y))(g(x) - g(y))\mu_{xy}$$ and $Q_rf(x) =\pi(Q(x,r))^{-1}\sum_{y\in Q(x,r)}f(y)\pi(y)$. \end{defin} The notion of $\mathcal Q$-Poincar\'e inequality is tailored to make it a useful tool to prove the Nash inequalities discussed in the next subsection. We can think of $Q_rf$ as a regularized version of $f$ at scale $r$. The $\mathcal Q$-Poincar\'e inequality provides control (in $L^2$-norm) of the difference $f-Q_rf$. If $\mathfrak X$ is finite and there is an $R>0$ such that $Q(x,R)=\mathfrak X$ for all $x$ then $Q_Rf(x)$ is the $\pi$ average of $f$ over $\mathfrak X$ and the $\mathcal Q$-Poincar\'e inequality at level $R$ becomes a classical Poincar\'e inequality as defined above. \begin{exa} The typical example of a collection $\mathcal Q$ is the collection of all balls $B(x,r)$. In that case, $Q_rf(x)=f_r(x)$ is simply the average of $f$ over $B(x,r)$. In this case, the $\mathcal Q$-Poincar\'e inequality is often called a pseudo-Poincar\'e inequality. Furthermore, if $(\mathfrak X,\mathfrak E,\pi,\mu)$ satisfies the doubling property and the ball Poincar\'e inequality then it automatically satisfies the pseudo-Poincar\'e inequality. \end{exa} \subsection{Nash inequality} Nash inequalities (in $\mathbb R^n$) were introduced in a famous 1958 paper of John Nash as a tool to capture the basic decay of the heat kernel over time. Later, they where used by many authors for a similar purpose in the contexts of Markov semigroups and Markov chains on countable graphs. Nash inequalities where first used in the context of finite Markov chains in \cite{DSCNash}, a paper to which we refer for a more detailed introduction. Assume that $(\mathfrak X,\mathfrak E)$ is equipped with a measure $\pi$ and an edge-weight $\mu$. The following is a variant of \cite[Theorem 5.2]{DSCNash}. The proof is the same. \begin{pro} \label{pro-Nash1} Assume that there is a family of operators defined on finitely supported functions on $\mathfrak X$, $Q_s$ (with $0\le s \le T$) such that $$\|Q_sf\|_\infty \le M (1+s)^{- \nu} \|f\|_1$$ for some $\nu \geq 0$ and that the edge weight $\mu=(\mu_{x,y})$ is such that $$ \|f-Q_sf\|_2^2\le P s^\theta \mathcal E_{\mu}(f,f) .$$ then the Nash inequality $$\|f\|_2^{2(1+\theta/\nu)}\le C\left[\mathcal E_{\mu}(f,f)+\frac{1}{PT^\theta }\|f\|_2^2\right]\|f\|_1^{2\theta/\nu}$$ holds with $C= (1+\frac{\theta}{2\nu})^2(1+\frac{2\nu}{\theta})^{\theta/\nu}M^{\theta/\nu} P$. 
\end{pro} \begin{rem} When $$Q_rf(x)=\pi(Q(x,r))^{-1}\sum_{y\in Q(x,r)}f(y)\pi(y)$$ as in the definition of the $\mathcal Q$-Poincar\'e inequality, the first assumption, $$\|Q_sf\|_\infty \le M (1+s)^{- \nu} \|f\|_1,$$ amounts to a lower bound on the volume of the sets $Q(x,s)$. In that case, the second assumption is just the requirement that the $\mathcal Q$-Poincar\'e inequality is satisfied. \end{rem} For the next statement, we assume that $\mu$ is subordinated to $\pi$, i.e., for all $x$, $\sum_y\mu_{xy}\le \pi(x)$. We consider the Markov kernel $K$ defined at (\ref{def-Ksub}) for which $\pi$ is a reversible measure and whose associated Dirichlet form on $L^2(\pi)$ is $\mathcal E_\mu(f,f)=\langle (I-K)f,f\rangle_\pi$. \begin{pro}[{\cite[Corollary 3.1]{DSCNash}}] \label{pro-Nash2} Assume that $\mu$ is subordinated to $\pi$ and that $$\forall \,f\in L^2(\pi),\;\;\|f\|_2^{2(1+\theta/\nu)}\le C\left[\mathcal E_{\mu}(f,f)+\frac{1}{N}\|f\|_2^2\right]\|f\|_1^{2\theta/\nu}.$$ Then, for all $0\le n\le 2N$, $$\sup_{x,y}\left\{K^{2n}(x,y)/\pi(y)\right\} =\sup_{x}\left\{K^{2n}(x,x)/\pi(x)\right\}\le 2\left(\frac{8C(1+\nu/\theta)}{n+1}\right)^{\nu/\theta}.$$ \end{pro} This proposition demonstrates how the Nash inequality provides some control on the decay of the iterated kernel of the Markov chain driven by $K$ over time. \section{Poincar\'e and $\mathcal Q$-Poincar\'e inequalities for John domains} \setcounter{equation}{0} \label{sec-PQPJD} This is a key section of this article as well as one of the most technical. Assuming that $(\mathfrak X,\mathfrak E,\pi,\mu)$ is adapted, elliptic, and satisfies the doubling property and the ball Poincar\'e inequality with parameter $\theta$, we derive both a Poincar\'e inequality (Theorem \ref{th-P1}) and a $\mathcal Q$-Poincar\'e inequality (Theorem \ref{th-PP1}) on finite John domains. The statement of the Poincar\'e inequality can be described informally as follows: for a finite domain $U$ in $J(\alpha)$ we have, for all functions $f$ defined on $U$, $$\sum_{U}|f(x)-f_U|^2\pi(x) \le CR^\theta \mathcal E_{\mu,U}(f,f)$$ where $R$ is the John radius for $U$ and $C$ depends only on $\alpha$ and the constants, coming from doubling, the Poincar\'e inequality on balls, and ellipticity, which describe the basic properties of $(\mathfrak X,\mathfrak E,\pi,\mu)$. (Instead of $R$, one can use the intrinsic diameter of $U$ because they are comparable up to a multiplicative constant depending only on $\alpha$, see Remark \ref{rem:johnradius}.) We give an explicit description of the constant $C$ without trying to optimize what can be obtained through the general argument. For many explicit examples, running a similar argument while taking advantage of the features of the example will lead to (much) improved estimates for $C$ in terms of the basic parameters. These results will be amplified in Section \ref{sec-AW} by showing that the same technique works as well for a large class of weights which can be viewed as modifications of the pair $(\pi,\mu)$. Throughout this section, we fix a finite domain $U$ in $\mathfrak X$ with (exterior) boundary $\partial U$ such that $U\in J(o,\alpha,R)$ for some $o\in U$. We also fix a witness family of John-paths $\gamma_x$ for each $x\in U$, joining $x$ to $o$ and fulfilling the $\alpha$-John domain condition. \subsection{Poincar\'e inequality for John domains} Fix a Whitney covering of $U\in J(o,\alpha,R)$, $$\mathcal W=\{B_i= B^\eta_{x_i}=B(x_i, r_i): 1\le i\le Q\},$$ with $r_i=\eta \delta(x_i)/4$ and parameter $\eta<1/4$.
By construction, the collection of balls $B'_i=3B_i=B(x_i,3r_i)$ covers $U$, and it is useful to set $$\mathcal W'=\{3B_i:\; 1\le i\le Q\}.$$ Please note that we always think of the elements of $\mathcal W,\mathcal W'$ as balls, each with a specified center and radius, not just subsets. \begin{lem} \label{lem-Rad} Any ball $E$ in $\mathcal W$ (i.e., $E = B_i$ for some $i$) has radius $r$ bounded above by $\eta (2R+1)/4$. \end{lem} \begin{proof} By hypothesis, $U\in J(o,\alpha,R)$. Let $R_o=\delta(o)$ and note that $R_o\le \rho_o(U)+1\le R+1$. Any other point $x\in U$ is at distance at most $R$ from $o$. It follows that $\delta (x)\le R+ R_o\le 2R+1$. \end{proof} Fix a ball $E_o$ in $\mathcal W$ such that $3E_o$ contains the point $o$. For any $E=B(z,r) \in \mathcal W$, let $\gamma^E=\gamma_z$ be the John-path from $z$ to $o$ and select a finite sequence \begin{equation} \label{eq:whitney-chain} \mathcal W'(E)=(F^E_0,\dots,F^E_{q(E)})=(F_0,\dots,F_{q(E)}) \end{equation} of distinct balls $F^E_i=F_i\in \mathcal W'$, for $0\le i\le q(E)$ such that $F^E_0=3E$, $F^E_{q(E)}=3E_o$, $F^E_i$ intersects $\gamma^E$ and $d(F^E_{i+1}, F^E_i)\le 1$, $0\le i\le q(E)-1$. This is possible since the balls in $\mathcal W'$ cover $U$. When the ball $E$ is fixed, we drop the superscript $E$ from the notation $F_i^E$. For each $E\in \mathcal W$, the sequence of balls $3F_i$ (for $1\le i\le q(E)$) provides a chain of adjacent balls joining $z$ to $o$ along the John-path $\gamma^E$. The union of the balls $6F^E_i$ forms a carrot-shaped region joining $z$ to $o$ (thin at $z$ and wide at $o$). These families of balls are a key ingredient in the following arguments. See Figure~\ref{fig:chain} for an example. \begin{figure} \caption{A chain of $9$ balls from $\mathcal W'(E)$ joining $3E$ to $3E_o$ along the John-path $\gamma^E$.} \label{fig:chain} \end{figure} \begin{lem} \label{lem-WN} Fix $\eta <1/4$ and $\rho\le 2/\eta$. The doubling property implies that any point $z\in U$ is contained in at most $D^{1+\log_2(4\rho+3)}$ distinct balls of the form $\rho E$ with $E\in \mathcal W$, where $D$ is the volume doubling constant. \end{lem} \begin{rem}Note that this property does not necessarily hold if $\rho$ is much larger than $2/\eta$. This lemma implies that $$\sum_{E \in \mathcal{W}} \chi_{\rho E} \leq D^{1+\log_2(4\rho + 3)}.$$ \end{rem} \begin{proof} Suppose $z\in U$ is contained in $N$ balls $\rho E$ with $E\in \mathcal W$, and call them $E_i=B(x_i,r_i)$, $1\le i\le N$. By Lemma \ref{lem-W}(3), the radii $r_i$ satisfy $r_i/r_j\le 3$ (this uses the inequality $\rho\le 2/\eta$) and it follows that $$\bigcup_1^N B(x_i,r_i) \subset B(x_j, (4\rho+ 3)r_j) .$$ Because the balls $E_i$ are disjoint, applying this inclusion with $j$ chosen so that $\pi(E_j)=\min\{\pi(E_i):1\le i\le N\}$ yields $$N \pi(E_j)\le \pi((4\rho+3)E_j)\le D^{1+\log_2(4\rho+3)} \pi(E_j),$$ which, after dividing by $\pi(E_j)$, proves the lemma. \end{proof} \begin{lem} \label{lem-WN2} Fix $\eta <1/4$ and $\rho\le 2/\eta$. For any ball $E=B(x,r(x))\in \mathcal W$ and any ball $F=B(y,3r(y))\in \mathcal W'(E)$, where $\mathcal{W}'(E)$ is defined in~\eqref{eq:whitney-chain}, we have $E\subset \kappa F$ with $\kappa = 7\alpha^{-1}\eta^{-1}$. \end{lem} \begin{proof} By construction, there is a point $z$ in $F$ on the John-path $\gamma^E$ from $x$ to $o$ and $\delta(z)\ge \alpha (1+d(z,x))$. This implies $$4r(y)/\eta=\delta(y)\ge \delta(z)-3r(y)\ge \alpha (1+d(z,x)) -3r(y),$$ that is, $((4/\eta)+3)r(y)\ge \alpha (1+d(x,z))$.
It follows that $$ x\in B(y ,(3+\alpha^{-1} \eta^{-1}(4+3\eta)) r(y)).$$ Observe that $$\delta(x)\le \delta(y) +d(x,y)\le 4\eta^{-1}r(y) + (3+ \alpha^{-1}\eta^{-1}(4+3\eta))r(y)$$ which gives $$r(x)=\eta \delta(x)/4\le r(y)(1+ \alpha^{-1}(3\alpha \eta+ 4+3\eta)/4).$$ Then, $$B(x,r(x))\subset B(y,d(x,y)+r(x)),$$ which gives $$B(x,r(x))\subset B(y, (4+\alpha^{-1}\eta^{-1}(4+3\eta+(3\alpha \eta^2 +4\eta+3\eta^2)/4))r(y)).$$ Because $\alpha \le 1$ and we assumed $\eta<1/4$, we have $$ 4+\alpha^{-1}\eta^{-1}(4+3\eta+(3\alpha \eta^2 +4\eta+3\eta^2)/4) \le 4+6\alpha^{-1}\eta^{-1}\le 7 \alpha^{-1}\eta^{-1},$$ and hence $B(x,r(x)) \subseteq B(y,\kappa r(y))\subseteq \kappa F$ with $\kappa = 7\alpha^{-1}\eta^{-1}$. \end{proof} \begin{lem} \label{lem-WN3} Fix $\eta\le 1/4$. For each $E\in \mathcal W$, the sequence $$\mathcal W'(E)=(F^E_0,\dots, F^E_{q(E)})$$ has the following properties. Recall that for each $i\in \{0,\dots, q(E)\}$, $F^E_i=B(z^E_i,\rho^E_i)$ with $\rho^E_i=3r^E_i= (3\eta/4) \delta(z^E_i)$ and that $F_0^E=3E$, $F^E_{q(E)}=3E_o$. (We drop the reference to $E$ when $E$ is clearly fixed.) \begin{enumerate} \item For each $E$, when $\rho_i <1$ we have $B(z_i,\rho_i)=\{z_i\}$ and $$1+d(z_0,z_i) \le 4/(3\alpha \eta).$$ \item For each $E$ and $i\in \{1,\dots, q(E)-1\}$ such that $\max\{\rho_i,\rho_{i+1}\} < 1$, we have $$ |f_{F_i}-f_{F_{i+1}}|^2 =|f(z_{i})-f(z_{i+1})|^2 \le \frac{P_e}{\pi(z_i)} \sum_{z\sim z_i, z\in U} |f(z)-f(z_i)|^2\mu_{zz_i}.$$ \item For each $E$ and $i\in \{1,\dots, q(E)-1\}$ such that $\max\{\rho_i,\rho_{i+1}\}\ge 1$, we have $$|f_{F_i}-f_{F_{i+1}}|^2 \le 2D^6 P (8\rho_i)^\theta \frac{1}{\pi(F_i)} \sum_{x,y\in 8F_{i}, x\sim y}|f(x)-f(y)|^2\mu_{xy},$$ for any function $f$ on $U$. \end{enumerate} \end{lem} \begin{proof} In the first statement we have $\rho_i=\rho(z_i)= 3\eta \delta(z_i)/4 <1$. Because $U\in J(\alpha)$, $E_0=B(z_0,r_0)=E$ and $z_i$ must be on $\gamma^E=\gamma_{z_0}$, $$\delta(z_i)\ge \alpha (1+ d(z_i,z_0)).$$ It follows that $ 1+d(z_0,z_i)\le 4/(3\alpha \eta)$. The second statement is clear. For the third statement, we need some preparation. First we obtain the lower bound $$\min\{\delta(z_i),\delta(z_{i+1})\}\ge \frac{5}{6\eta},$$ based on the assumption that $\max \{\rho_i,\rho_{i+1}\}\ge 1$. If both $\rho_i,\rho_{i+1}$ are at least $1$, there is nothing to prove. If one of them is less than $1$, say $\rho_i<1$, then $F_i=B(z_i,\rho_i)=\{z_i\}$ and $d(z_i,F_{i+1})\le 1$. It follows that $$\frac{4}{3\eta}\le \frac{4}{3\eta}\rho_{i+1}= \delta(z_{i+1}) \le 1+ \rho_{i+1}+\delta(z_i).$$ But $\rho_{i+1}= (3\eta/4) \delta(z_{i+1})$, so $$\left(1-\frac{3\eta}{4}\right) \delta(z_{i+1})\le 1+\delta(z_i)$$ and (using the fact that $\eta\le 1/4$) $$ \frac{5}{6\eta}\le \frac{4}{3\eta}- 2\le \delta(z_i).$$ This shows that $\min\{\rho_i,\rho_{i+1}\} \geq \frac{5}{8}$ because $$\min\{\rho_i,\rho_{i+1}\} = \frac{3\eta}{4}\min\{\delta(z_i),\delta(z_{i+1})\} \ge \frac{3\eta}{4}\frac{5}{6\eta}=5/8.$$ Next, we show that $$F_i\cup F_{i+1}\subset 8F_i \cap 8F_{i+1}\subset U .$$ By construction, the balls $B(z_{i+1}, 6r_{i+1})$ and $B(z_i,6r_i)$ intersect.
Applying Lemma \ref{lem-W}(3) with $\rho=6$ and $\eta \leq 1/4$ gives that $5/11\le r_{i+1}/r_i\le 11/5$ and it follows that $$ \max \left\{ \frac{\rho_{i+1}}{\rho_i}, \frac{\rho_{i}}{\rho_{i+1}}\right\}\le 11/5.$$ Moreover, because $d(F_i,F_{i+1})\le 1$, we have $$ \max\{d(z_i,z):z\in F_{i+1}\}\le \rho_i+ 2\rho_{i+1} +1\le 8\rho_i$$ and similarly, $$ \max\{d(z_{i+1},z):z\in F_{i} \} \le \rho_{i+1}+2\rho_i+1\le 8\rho_{i+1}.$$ It follows that $F_i\cup F_{i+1}\subset 8F_i\cap 8F_{i+1}\subset U$. Now, we are ready to prove the inequality stated in the lemma. Write \begin{eqnarray*} |f_{F_i}-f_{F_{i+1}}|^2 &=& \left|\frac{1}{\pi(F_i)\pi(F_{i+1})} \sum_{\xi\in F_i,\zeta\in F_{i+1}} [f(\xi)-f(\zeta)] \pi(\xi)\pi(\zeta) \right|^2\\ &\le & \frac{1}{\pi(F_i)\pi(F_{i+1})} \sum_{\xi, \zeta\in 8F_i} |f(\xi)-f(\zeta)| ^2\pi(\xi)\pi(\zeta) \\ &= & \frac{2\pi(8F_i)}{\pi(F_i)\pi(F_{i+1})} \sum_{\xi \in 8F_i} |f(\xi)-f_{8F_i}|^2 \pi(\xi)\\ &\le & \frac{2 P \pi(8F_i)(8\rho_i)^\theta}{\pi(F_i)\pi(F_{i+1})} \sum_{x,y\in 8F_i} |f(x)-f(y)|^2\mu_{xy} \\ &\le & \frac{2 D^6 P (8\rho_i)^\theta}{\pi(F_i)} \sum_{x,y\in 8F_i} |f(x)-f(y)|^2\mu_{xy} \end{eqnarray*} \end{proof} \begin{theo} \label{th-P1} Fix $\alpha, \theta, D, P_e, P >0$. Assume that $(\mathfrak X,\mathfrak E,\pi,\mu)$ is adapted, elliptic, and satisfies the doubling property with constant $D$ and the ball Poincar\'e inequality with parameter $\theta$ and constant $P$. Assume that the finite domain $U$ and the point $o\in U$ are such that $U\in J(o,\alpha,R)$, $R>0$. Then there exists a constant $C$ depending only on $\alpha, \theta, D, P_e, P$ and such that $$\sum_{U}|f(x)-f_U|^2\pi(x) \le P(U) \mathcal E_{\mu,U}(f,f)$$ with $$P(U) \le C R^\theta \mbox{ where } C= 4^{-\theta}2PD^5+16 D^{14+ 2\log_2(2\kappa)} \max\{ R^{-\theta} P_e, 2^{\theta}2PD^6\} $$ where $\kappa = 84/\alpha$. In particular, $$C\le 17 D^{30+2\log_2(1/\alpha)} \max\{R^{-\theta}P_e,2^\theta P\}.$$ \end{theo} \begin{proof} We pick a Whitney covering with $\eta=1/12$. Recall from Lemma \ref{lem-Rad} that all balls in $\mathcal W$ have radius at most $R/16$. It suffices to bound $\sum_U|f-f_{3E_o}|^2\pi$ because $$\sum_U|f-f_{U}|^2\pi=\min_{c}\left\{ \sum_U|f-c|^2\pi\right\}.$$ The balls in $\mathcal W'$ cover $U$, hence $$\sum_U|f-f_{3E_o}|^2\pi \le \sum _{E\in W} \sum_{ 3E} |f-f_{3E_o}|^2\pi.$$ Next, using the fact that $(a+b)^2\le 2(a^2+b^2)$, write $$\sum _{E\in W} \sum_{3E}|f-f_{3E_o}|^2\pi \le 2\left(\sum _{E\in W} \sum_{3E}|f-f_{3E}|^2\pi\right)+ 2\sum _{E\in W} \pi(3E)|f_{3E}-f_{3E_o}|^2.$$ We can bound and collect the first part of the right-hand side very easily because, using the Poincar\'e inequality in balls of radius at most $3R/16\le R/4$ and then Lemma~\ref{lem-WN}, we have \begin{eqnarray} \sum _{E\in W} \sum_{ 3E} |f-f_{3E}|^2\pi &\le & P (R/4)^\theta \sum_{E\in W} \sum_{\substack{x,y\in 3E \\ x\sim y}}|f(x)-f(y)|^2\mu_{xy} \nonumber \\ & \le & PD^5 (R/4)^\theta \mathcal E_{\mu,U}(f,f).
\label{crux1} \end{eqnarray} This reduces the proof to bounding $$\sum_{E\in W} \pi(3E)|f_{3E}-f_{3E_o}|^2.$$ For this, we will use the chain of balls $\mathcal W'(E) =(F^E_0,\dots, F^E_{q(E)})$ to write $$|f_{3E}-f_{3E_o}|\le \sum_{0}^{q(E)-1} |f_{F^E_i}-f_{F^E_{i+1}}|.$$ \begin{nota} For any function $f$ on $U$ and any ball $ F=B(x,\rho)\in \mathcal W'$ set $$G(F,f)= \left(\frac{1}{\pi(F)} \sum_{x\in 8F\cap U}\sum_ {y\sim x, y\in U}|f(x)-f(y)|^2\mu_{xy}\right)^{1/2}.$$ \end{nota} With this notation, Lemma \ref{lem-WN3}(2)-(3) yields $$ |f_{F^E_i}-f_{F^E_{i+1}}| \le Q R^{\theta/2} G(F^E_i,f),$$ where $Q^2=\max\{ R^{-\theta} P_e, 2^{\theta}2PD^6\}$. With $\kappa $ as in Lemma \ref{lem-WN2}, this becomes $$ |f_{3E}-f_{3E_o}| \mathbf 1_E \le QR^{\theta/2} \sum_{0}^{q(E)-1} G(F^E_i,f) \mathbf 1_E \mathbf 1_{\kappa F^E_i}.$$ Write \begin{eqnarray*} \lefteqn{\sum_{E\in W} \pi(3E)|f_{3E}-f_{3E_o}|^2\le D^2\sum_{E\in W}\sum_U |f_{3E}-f_{3E_o}|^2 \mathbf 1_{E}(x) \pi(x)}&& \\ &\le &Q^2D^2 R^\theta\sum_{E\in W}\sum_U \left| \sum_{0}^{q(E)-1} G(F^E_i,f) \mathbf 1_{\kappa F^E_i} (x) \right|^2 \mathbf 1_{E}(x) \pi(x) \\ &\le & Q^2D^2 R^\theta\sum_{E\in W}\sum_U \left| \sum_{F\in \mathcal W'} G(F,f) \mathbf 1_{\kappa F} (x) \right|^2 \mathbf 1_{E}(x) \pi(x) \\ &\le & Q^2D^2 R^\theta \sum_{\mathfrak X} \left| \sum_{F\in \mathcal W'} G(F,f) \mathbf 1_{\kappa F} (x) \right|^2 \pi(x) \end{eqnarray*} where the last step follows from the observation that $\sum_{E\in \mathcal W}\mathbf 1_E\le 1$ because the balls in $\mathcal W$ are pairwise disjoint. By Proposition \ref{prop-sumballs} and the fact that the balls in $\mathcal W$ are disjoint, we have \begin{eqnarray*} \lefteqn{ \sum_{\mathfrak X} \left|\sum_{E\in \mathcal W} G(3E,f) \mathbf 1_{3\kappa E} (x) \right|^2 \pi(x) } &&\\ &\le & 8 D^{4+2\log_2(2\kappa)} \sum_{\mathfrak X} \left|\sum_{E\in \mathcal W} G(3E,f) \mathbf 1_{ E} (x) \right|^2 \pi(x) \\ &=& 8 D^{4+2\log_2(2\kappa)} \sum_{E\in \mathcal W} G(3E,f)^2 \pi(E) \\ &=& 8 D^{4+2\log_2(2\kappa)} \sum_{E\in \mathcal W} \sum_{x,y\in 24 E\cap U}|f(x)-f(y)|^2\mu_{xy} \end{eqnarray*} By Lemma \ref{lem-WN} (note that $2/\eta=24$), for each $x\in \mathfrak X$, there are at most $D^8$ balls $E$ in $\mathcal W$ such that $24E$ contains $x$. This yields $$\sum_{\mathfrak X} \left|\sum_{F\in \mathcal W} G(2F,f) \mathbf 1_{2\kappa F} (x) \right|^2 \pi(x) \le 8 D^{12+ 2\log(2\kappa)} \mathcal E_{U,\mu} (f,f).$$ Collecting all terms gives Theorem \ref{th-P1} as desired. \end{proof} \subsection{$\mathcal Q$-Poincar\'e inequality for John domains} For any $s\ge 1$, fix a scale-$s$ Whitney covering $\mathcal W_s$ with Whitney parameter $\eta<1/4$. For our purpose, we can restrict ourselves to integer parameters $s$ no greater than $2R+1$ which results in making only finitely many choice of coverings. Recall that $\mathcal W_s$ is the disjoint union of $\mathcal W_{=s}$ (balls of radius exactly $s$) and $\mathcal W_{<s}$ (balls of radius strictly less than $s$). As before, we denote by $\mathcal W_{s}', \mathcal W_{=s}'$ and $\mathcal W_{<s}'$, the sets of balls obtained by tripling the radius of the balls in $\mathcal W_{s}, \mathcal W_{=s}$ and $\mathcal W_{<s}$. Fix a ball $E^s_o$ in $\mathcal W_s$ such that $3E^s_o$ contains the point $o$. 
For any ${E=B(z,r) \in \mathcal W_s}$, select a finite sequence $$\mathcal W_s'(E)=(F^{s,E}_0,\dots,F^{s,E}_{q_s(E)})=(F_0,\dots,F_{q(E)})$$ of distinct balls $F^{s,E}_i=F_i\in \mathcal W'_s$ (for $0\le i\le q_s(E)$) such that $F^{s,E}_0=3E$, $F^E_{q(E)}=3E^s_o$, $F^{s,E}_i$ intersects $\gamma^E$ and $d(F^{s,E}_{i+1}, F^{s,E}_i)\le 1$ ($0\le i\le q_s(E)-1$). This is obviously possible since the balls in $\mathcal W'_s$ cover $U$. When the parameter $s$ and the ball $E$ are fixed, we drop the supscripts $s,E$ from the notation $F_i^{s,E}$. We only need a portion of this sequence, namely, \begin{equation}\label{chain} W'_{<s}(E)=(F^{s,E}_0,\dots, F^{s,E}_{q^*_s(E)})\end{equation} where $q^*_s(E)$ is the smallest index $j$ such that $r_j=s$. If no such $j$ exists, set $q^*_s(E)=q(E)$. For future reference, we call these sequences of balls {\em local s-chains}. Namely, the sequence $W'_{<s}(E)$ is the local s-chain for $E$ at scale $s$. We set $$F(s,E)=F^{s,E}_{q^*_s(E)},$$ to be the last ball in the local s-chain of $E$. For each $x$, choose a ball $E(s,x)\in \mathcal W_s$ with maximal radius among those $E\in \mathcal W_s$ such that $3E$ contains $x$ and set $$F(s,x)=\left\{\begin{array}{cl} 3E(s,x) & \mbox{ when } x\in \bigcup_{E\in \mathcal W_{=s}}3E,\\ F(s,E(s,x)) & \text{otherwise.} \end{array}\right.$$ The ball $F(s,x)$ is, roughly speaking, chosen among those balls of radius $3s$ in the Whitney covering that are not too far from $x$ and away from the boundary of $U$ --- for points $x$ near the boundary, where the Whitney balls have radius less than $s$, $F(s,x)$ is the last ball in the local s-chain of $E \in \mathcal{W}_s$, where $3E$ covers $x$. \begin{defin} \label{def-Q} For $s\in [0,1]$, set $Q_s=I$ (i.e., $Q_sf=f$). For any $s>1$, define the averaging operator $$Q_sf(x)=\sum_y Q_s(x,y)f(y)\pi(y)$$ by setting $$Q_s(x,y)= \frac{1}{\pi(F(s,x))}\mathbf 1_{F(s,x)}(y).$$ \end{defin} Next we collect the $s$-version of the statements analogous to Lemmas \ref{lem-WN} and~\ref{lem-WN2}. The proofs are the same. \begin{lem} \label{lem-WPP}Fix $\eta <1/4$ and $\rho\le 2/\eta$. For any $s>0$, the following properties hold. \begin{enumerate} \item Any point $z\in U$ is contained in at most $D^{1+\log_2(4\rho+3)}$ distinct balls $\rho E$ with $E\in \mathcal W_s$. \item For any ball $E=B(x,r(x))\in \mathcal W_{<s}$ and any ball ${F=B(y,3r(y))\in \mathcal W'_{<s}(E)}$ we have $E\subset \kappa F$ with $\kappa = 7\alpha^{-1}\eta^{-1}$. \end{enumerate} \end{lem} The $s$-version of Lemma \ref{lem-WN3} is as follows. The proof is the same. \begin{lem} \label{lem-WPP3} Fix $\eta\le 1/4$. For each $s\ge 1$, and $E\in \mathcal W_{<s}$, the sequence $$\mathcal W'_{<s}(E)=(F^{s,E}_0,\dots, F^{s,E}_{q^*_s(E)})$$ has the following properties. Set $i\in \{0,\dots, q^*_s(E)\}$, $F^{s,E}_i=B(z^{s,E}_i,\rho^{s,E}_i)$ with $\rho^{s,E}_i=3r^{s,E}_i= 3\min\{s,\eta \delta(z^E_i)/4\}$ and that $F_0^{s,E}=3E$, $F^{s,E}_{q^*_s(E)}=F(s,E)$. We drop the reference to $s$ and $E$ when they are clearly fixed. 
\begin{enumerate} \item For each $E\in W_{<s}$, when $\rho_i <1$ we have $B(z_i,\rho_i)=\{z_i\}$ and $$1+d(z_0,z_i) \le 4/(3\alpha \eta).$$ \item For each $E\in W_{<s}$ and $i\in \{1,\dots, q^*_s(E)-1\}$ such that $\max\{\rho_i,\rho_{i+1}\} < 1$, we have $$ |f_{F_i}-f_{F_{i+1}}|^2 =|f(z_{i})-f(z_{i+1})|^2 \le \frac{P_e}{\pi(z_i)} \sum_{z\sim z_i, z\in U} |f(z)-f(z_i)|^2\mu_{zz_i}.$$ \item For each $E\in W_{<s}$ and $i\in \{1,\dots, q^*_s(E)-1\}$ such that $\max\{\rho_i,\rho_{i+1}\}\ge 1$ we have $\min\{\delta(z_i),\delta(z_{i+1})\}\ge 4/(9\eta) $, $\min\{\rho_i,\rho_{i+1}\}\ge 1/3$ and $$F_i\cup F_{i+1}\subset 8F_i\subset U.$$ Furthermore, for any function $f$ on $U$, $$|f_{F_i}-f_{F_{i+1}}|^2 \le 2D^6 P (8\rho_i)^\theta \frac{1}{\pi(F_i)} \sum_{x,y\in 8F_{i}, x\sim y}|f(x)-f(y)|^2\mu_{xy}.$$ \end{enumerate} \end{lem} \begin{theo} \label{th-PP1} Fix $\alpha, \theta, D, P_e,P >0$. Assume that $(\mathfrak X,\mathfrak E,\pi,\mu)$ is adapted, elliptic and, satisfies the doubling property with constant $D$ and the ball Poincar\'e inequality with parameter $\theta$ and constant $P$. Assume that the finite domain $U$ and the point $o\in U$ are such that $U\in J(o,\alpha,R)$, $R>0$. Then there exists a constant $C$ depending only on $\alpha, \theta, D,P$ and such that $$\forall\, s>0,\;\;\sum_{U}|f(x)-Q_sf(x)|^2\pi(x) \le C s^\theta \mathcal E_{\mu,U}(f,f)$$ with $$ C= 3^\theta 7 PD^5 +16 D^{14+ 2\log(2\kappa)} \max\{P_e, 8^{\theta}2PD^6\}$$ where $\kappa =84/\alpha $. \end{theo} \begin{proof} The conclusion trivially holds when $s\in [0,1]$ because $Q_sf=f$ in this case. For $s>1$, as in the proof of Theorem \ref{th-P1}, we pick a Whitney covering with $\eta<1/12$. We need to bound \begin{eqnarray*}\sum_x|f(x)-Q_sf(x)|^2\pi(x)&=&\sum_{E\in \mathcal W_s}\sum_{\substack{x\in 3E \\ E=E(s,x)}}|f(x) -f_{F(s,E)}|^2\pi(x)\\ &=& \sum_{E\in \mathcal W_{=s}}\sum_{\substack{x\in 3E \\ E=E(s,x)}}|f(x) -f_{3E}|^2\pi(x) \\ && +\sum_{E\in \mathcal W_{<s}}\sum_{\substack{x\in 3E \\ E=E(s,x)}}|f(x) -f_{F(s,E)}|^2\pi(x) \\ &\le& \sum_{E\in \mathcal W_{=s}}\sum_{x\in 3E}|f(x) -f_{3E}|^2\pi(x) \\ && +\sum_{E\in \mathcal W_{<s}}\sum_{x\in 3E}|f(x) -f_{F(s,E)}|^2\pi(x).\end{eqnarray*} Note that, in the first two lines, we are only summing over the $x$ such that $E = E(s,x)$ i.e., $E \in \mathcal{W}_s$ is the selected ball of radius $s$ which covers $x$. That way, $x \in U$ appears once in the sum. In the third line, we expand the sum and each $x$ may appear multiple times. We can bound and collect the first part of the right-hand side of the last inequality using the Poincar\'e inequality on balls of radius $3s$ and Lemma~\ref{lem-WPP}(1), \begin{eqnarray} \sum _{E\in W_{=s}} \sum_{ 3E} |f-f_{3E}|^2\pi &\le & P (3s)^\theta \sum_{E\in W_{=s}} \sum_{\substack{x,y\in 3E \\ x\sim y}}|f(x)-f(y)|^2\mu_{xy} \nonumber \\ & \le &3^\theta P D^5 s^\theta \mathcal E_{\mu,U}(f,f). 
\label{cruxPP1} \end{eqnarray} This reduces the proof to bounding \begin{eqnarray*}\lefteqn{\sum_{E\in W_{<s}} \sum_{x\in 3E}|f_{f(x)-f_{F(s,x)}}|^2\pi(x)} &&\\ & \le& 2\sum_{E\in \mathcal W_{<s}}\left(\sum_{x\in 3E}|f(x)-f_{3E}|^2\pi(x)+ \pi(3E)|f_{3E}-f_{F(s,x)}|^2\right).\end{eqnarray*} The first part of the right-hand side is, again, easily bounded by \begin{eqnarray*}2\sum_{E\in \mathcal W_{<s}}\sum_{x\in 3E}|f(x)-f_{3E}|^2\pi(x) &\le & 3^{1+\theta} P s^\theta \sum_{E\in \mathcal W_{<s}} \sum_ {x,y\in 3E, x\sim y}|f(x)-f(y)|^2\mu_{xy} \\ &\le & 3^{1+\theta} P D^5 s^\theta \mathcal E_{\mu,U}(f,f).\end{eqnarray*} The second part is $$2\sum_{E\in \mathcal W_{<s}}\pi(3E)|f_{3E}-f_{F(s,x)}|^2$$ for which we use the chain of balls $\mathcal W'_{s}(E) =(F^{s,E}_0,\dots, F^{s,E}_{q^*_s(E)})$ to write $$|f_{3E}-f_{F(s,E)}|\le \sum_{0}^{q^*_s(E)-1} |f_{F^{s,E}_i}-f_{F^{s,E}_{i+1}}|.$$ Lemma \ref{lem-WPP3}(2)-(3) and the notation $G(F,f)$ introduced for the proof of Theorem \ref{th-P1} yields $$ |f_{F^{s,E}_i}-f_{F^{s,E}_{i+1}}| \le Q s^{\theta/2} G(F^{s,E}_i,f),\;\;Q^2=\max\{ s^{-\theta} P_e, 8^{\theta}2PD^6\} $$ and, with $\kappa $ as in Lemma \ref{lem-WPP}(2), $$ |f_{3E}-f_{F(s,E)}| \mathbf 1_E \le Qs^{\theta/2} \sum_{0}^{q^*_s(E)-1} G(F^{s,E}_i,f) \mathbf 1_E \mathbf 1_{\kappa F^{s,E}_i}.$$ Using this estimate, the same argument used at the end of the proof of Theorem \ref{th-P1} (and based on Proposition \ref{prop-sumballs}) gives $$2\sum_{E\in \mathcal W_{<s}}\pi(3E)|f_{3E}-f_{F(s,x)}|^2 \le 16 Q^2 D^{13+2\log_2 2\kappa} s^\theta \mathcal E_{\mu,U}(f,f).$$ \end{proof} \section{Adding weights and comparison argument} \setcounter{equation}{0} \label{sec-AW} Comparison arguments are very useful in the study of ergodic finite Markov chains (see ~\cite{DSCcomprev} and~\cite{DSCcompg}). This section uses these ideas in the present context. The results here are used in Section~\ref{sec-Metro} to study the rates of convergence for Metropolis type chains and in Sections~\ref{sec-Dir} and~\ref{sec-IU} for studying Markov chains which are killed on the boundary. By their very nature, the (almost identical) proofs of Theorems \ref{th-P1} and~\ref{th-PP1} allow for a number of important variants. In this subsection, we discuss transforming the pair $(\pi,\mu)$ into a pair $(\widetilde{\pi},\widetilde{\mu})$ so that the proofs of the preceding section yield Poincar\'e type inequalities (including $Q$-type) for this new pair. \begin{defin} Let $U$ be a finite domain in $(\mathfrak X,\mathfrak E,\pi,\mu)$. Let $(\widetilde{\pi},\widetilde{\mu})$ be given on $(U,\mathfrak E_U)$. We say that the pair $(\widetilde{\pi},\widetilde{\mu})$ $(\eta,A)$-dominates the pair $(\pi,\mu)$ in $U$ if, for any ball $E=B(z,r)\subset U$ with $r\le 6 \eta\delta(z)$, we have $$\sup\left\{\frac{\widetilde{\pi}(x)}{\pi(x)} \ : \ x \in B\right\}\le A \inf\left\{\frac{\widetilde{\mu}_{xy}}{\mu_{xy}}:x\in B,\{x,y\}\in E\right\} ,$$ \end{defin} \begin{rem} If $\eta\ge 1/6$, this property is very strong and not very useful. We will use it with $\eta\le 1/12$ so that each of the balls considered is far from the boundary relative to the size of its radius. The size of balls for which this property is required, namely, balls such that $r\le 6\eta\delta(z)$ is dictated by the fact the we will have to use this property for the balls $24 E$ where $E=B(z,r(z))$ is a ball that belong to an $\eta$-Whitney covering of $U$. See Lemma \ref{lem-WPP3}(3). 
By construction, such a ball $E$ will satisfy $r(z)=\eta \delta(z)/4$ and $r=24r(z)$ satisfies $r = 6\eta \delta(x)$. \end{rem} The following obvious lemma justifies the above definition. \begin{lem} \label{lem-domP} Assume that $(\widetilde{\pi},\widetilde{\mu})$ $(\eta,A)$-dominates the pair $(\pi,\mu)$ in $U$. \begin{enumerate} \item If $(\pi,\mu)$ is $P_e$-elliptic then $(\widetilde{\pi},\widetilde{\mu})$ is $AP_e$-elliptic on $U$. \item If $B=B(z,r)$ is a ball such that $r\le 6\eta\delta(z)$ and the Poincar\'e inequality $$\sum_{x \in B}|f(x)-f_B|^2\pi\le P(B)\sum_{x,y\in B}|f(x)-f(y)|^2\mu_{xy}$$ holds on $B$ then $$\sum_B|f-\widetilde{f}_{B}|^2\widetilde{\pi}\le \sum_{x\in B}|f(x)-f_B|^2\widetilde{\pi}\le AP(B)\sum_{x,y\in B}|f(x)-f(y)|^2\widetilde{\mu}_{xy}$$ where $\widetilde{f}_{B}$ is the mean of $f$ over $B$ with respect to $\widetilde{\pi}$ and $f_B$ is the mean of $f$ over $B$ with respect to $\pi$. \end{enumerate} \end{lem} \begin{defin} \label{rem-tilde} Assume that $U$ is a connected subset of $(\mathfrak X,\mathfrak E)$ with internal boundary $\delta U=\{x\in U:\exists \,y\in \mathfrak X\setminus U,\;\{x,y\}\in \mathfrak E\}.$ For each $x\in \delta U$, introduce an auxiliary symbol, $x^c$ and set $$\mathfrak U= U\cup \{x^c:x\in \delta U\},\;\;\mathfrak E_\mathfrak U=\mathfrak E_U\cup \{\{x,x^c\}: x\in \delta U\},$$ so that $\mathfrak U$ has an additional copy of $\delta U$ attached to $\delta U$. By inspection, a domain $U$ is in $ J(\mathfrak X,\mathfrak E,\alpha, o,R)$ if and only if $U\in J(\mathfrak U,\mathfrak E_\mathfrak U,\alpha,o,R)$. If $\widetilde{\pi}$ is a measure on $U$ then we can extend this measure to a measure on $\mathfrak U$, which we still call $\widetilde{\pi}$, by setting $\widetilde{\pi}(x^c)=\widetilde{\pi}(x)$, $x\in \delta U$. If $\widetilde{\pi}$ is $\widetilde{D}$-doubling on $(U,\mathfrak E_U)$ then its extension is $2\widetilde{D}$-doubling on $(\mathfrak U,\mathfrak E_U)$. \end{defin} \subsection{Adding weight under the doubling assumption for the weighted measure} \begin{theo}\label{th-PP1doub} Referring to the setting of Theorems {\em \ref{th-P1}-\ref{th-PP1}}, assume further that we are given $\eta\in (0,1/12)$ and a pair $(\widetilde{\pi},\widetilde{\mu})$ on $U$ which dominates $(\pi,\mu)$ with constants $(\eta,A)$ and such that $\widetilde{\pi}$ is $\widetilde{D}$-doubling on $(U,\mathfrak E_U)$. Then there exists a constant $C$ depending only on $\eta, A,\alpha, \theta, \widetilde{D},P,P_e$ such that $$\forall\,s>0,\;\;\sum_{x \in U}|f(x)-\widetilde{Q}_sf(x)|^2\widetilde{\pi}(x)\le C s^\theta \mathcal E_{\widetilde{\mu},U}(f,f).$$ We can take $$ C= 7(3^\theta PA(2\widetilde{D})^5)+16 A(2\widetilde{D})^{14+ 2\log(2\kappa)} \max\{P_e, 8^{\theta}2P (2\widetilde{D})^6\}$$ where $\kappa =7/(\alpha\eta) $. Here $\widetilde{Q}_s$ is as in {\em Definition \ref{def-Q}} with $\widetilde{\pi}$ instead of $\pi$. In particular, $$\sum_U|f(x)-\widetilde{f}_U|^2\widetilde{\pi}(x)\le C R^\theta \mathcal E_{\widetilde{\mu},U}(f,f).$$\end{theo} \begin{proof} Follow the proofs of Theorems \ref{th-P1}-\ref{th-PP1}, using a $\eta$-Whitney covering with $\eta$ small enough that the Poincar\'e inequalities on Whitney balls (in fact, on double Whitney balls) holds for the pair $(\widetilde{\pi},\widetilde{\mu})$ by Lemma \ref{lem-domP}. To make the argument go as smoothly as possible, use the construction of $(\mathfrak U,\mathfrak E_U)$ in Definition \ref{rem-tilde}. 
The proof proceeds as before with $(\widetilde{\pi},\widetilde{\mu})$ instead of $(\pi,\mu)$. The full strength of the assumption that $\widetilde{\pi}$ is doubling is key in applying Proposition \ref{prop-sumballs} in this context. \end{proof} \subsection{Adding weight without the doubling assumption for the weighted measure} \begin{defin}Let $\psi:U\rightarrow (0,\infty)$ be a positive function on $U$ (we call it a weight). We say that $\psi$ is $A$-doubling on $U$ if the measure $\psi\pi$ is doubling on $(U,\mathfrak E_U)$ with constant $A$. \end{defin} \begin{defin}\label{def-regular} Let $\psi:U\rightarrow (0,\infty)$ be a positive function on $U$. We say that $\psi$ is $(\eta,A)$-regular on $U$ if $$\psi(x)\le A \psi(y) \mbox{ for all }\{x,y\}\in \mathfrak E_U,$$ and, for any ball $E=B(z,r)\subset U$ with $r\le 6\eta\delta(z)$, we have $$\max_{E}\{\psi\}\le A \min_{E}\{\psi\}.$$ \end{defin} \begin{rem} \label{rem-psimu} Assume that $\psi$ is $(\eta,A)$-regular and consider any pair $(\widetilde{\pi},\widetilde{\mu}) $ on $(U,\mathfrak E_U)$ such that $$ \widetilde{\pi}\le \psi \pi,\;\;\mu_{xy} \psi(x)\le A' \widetilde{\mu}_{xy}.$$ Then the pair $(\widetilde{\pi},\widetilde{\mu})$ $(\eta,AA')$-dominates $(\pi,\mu)$. For instance we can set $\widetilde{\pi}=\psi\pi$ and take $\widetilde{\mu}$ to be given by one of the following choices: $$\mu_{xy}\sqrt{\psi(x)\psi(y)},\;\;\widetilde{\mu}_{xy}=\mu_{xy}\min\{\psi(x),\psi(y)\} \mbox{ or } \widetilde{\mu}_{xy}=\mu_{xy}\max\{\psi(x),\psi(y)\}.$$ In these three cases $A'=\sqrt{A}$, $A'= A$ and $A'=1$, respectively. \end{rem} \begin{defin} \label{def-controlled} Fix $\eta \in (0,1/8)$. Let $\psi$ be a weight on a finite domain $U$ such that $\psi$ is $(\eta,A)$-regular on $U$. Assume $U$ is a John domain, $U\in J(\alpha,o,R)$, equipped with John paths $\gamma_x$ joining $x$ to $o$, $x\in U$, and a family of $\eta$-Whitney coverings $\mathcal W_s$, $s\ge 1$. We say that $\psi$ is $(\omega,A_1)$-controlled if, for any local s-chain $\mathcal W'_{<s}(E) =(F^{s,E}_0,\dots,F^{s,E}_{q^*_s(E)})$ with $F^{s,E}_i=B(x_i,3r(x_i))$, $0\le i\le q^*_s(E)$, we have $$\forall\,s\ge 1,\;\;\forall \, i\in \{0,\dots, q^*_s(E)\},\;\;\psi(x_0)\le A_1 s^\omega \psi(x_{i}).$$ When we say that an $(\eta,A)$-regular weight $\psi$ on $U\in J(\alpha, o,R)$ is $(\omega,A_1)$-controlled, we assume implicitly that a family of $\eta$-Whitney coverings $\mathcal W_s$, $s\ge 1$ has been chosen. \end{defin} \begin{rem} When $\omega=0$, the weight $\psi$ is essentially increasing along the John path joining Whitney balls to $o$. \end{rem} \begin{theo} \label{th-PP1controlled} Given the setting of Theorems \ref{th-P1} and~\ref{th-PP1}, assume further that we are given $\eta\in (0,1/12)$ and a weight $\psi$ on $U$ such that $\psi$ is $(\eta,A)$-regular and $(\omega,A_1)$-controlled. Set $\widetilde{\pi}=\psi\pi$ and let $\widetilde{\mu}$ be a weight defined on $\mathfrak E_U$ such that \begin{equation} \label{psimu} \forall x,y\in U,\;\; \psi(x)\mu_{xy}\le A_2 \widetilde{\mu}_{xy}. \end{equation} Then there exist a constant $C$ depending only on $\eta, \alpha, \theta, A, A_1,A_2 D,P$ and such that $$\forall\,s>0,\;\;\sum_U|f(x)-\widetilde{Q}_sf(x)|^2\widetilde{\pi}(x)\le C s^{\theta+\omega} \mathcal E_{\widetilde{\mu},U}(f,f).$$ Here $\widetilde{Q}_s$ is as in {\em Definition \ref{def-Q}} with $\widetilde{\pi}$ instead of $\pi$. 
The constant $C$ can be taken to be $$C= C= 7AA_2(3^\theta PD^5)+16 D^{14+ 2\log(2\kappa)} A^3A_1A_2 \max\{P_e, 8^{\theta}2PA^2D^6\}$$ where $\kappa =7/(\alpha\eta) $. In particular, $$\sum_U|f(x)-\widetilde{f}_U|^2\widetilde{\pi}(x)\le C R^{\theta +\omega} \mathcal E_{\widetilde{\mu},U}(f,f).$$\end{theo} \begin{proof} (The case $s\in [0,1]$ is trivial and we can assume $s>1$). This result is a bit more subtle than the previous result because the measure $\widetilde{\pi}$ may not be doubling. However, because $\psi$ is $(\eta,A)$-regular and $\widetilde{\mu}$ satisfies (\ref{psimu}), it follows from Remark \ref{rem-psimu} that $(\widetilde{\pi},\widetilde{\mu})$ $(\eta,AA_2)$-dominates $(\pi,\mu)$. By Lemma \ref{lem-domP} this implies that $(\widetilde{\pi},\widetilde{\mu})$ is $AA_2P_e$-elliptic and the $\theta$-Poincar\'e inequality on balls $B(z,r)$ such that ${r\le \eta \delta(z)}$, $z\in U$, with constant $PAA_2$. Using the notation $\widetilde{f}_B$ for the mean of $f$ over $B$ with respect to $\widetilde{\pi}$, we also have, for any ball $E$ in $\mathcal W_{<s}$ and its local s-chain $\mathcal W'_{<s}(E)=(F^{s,E}_i)_0^{q^*_s(E)}$ with $F^{s,E}_i=B(x_i,3r(x_i))$, $F^{s,E}_0=3E$, $$|\widetilde{f}_{F^{s,E}_i}-\widetilde{f}_{F^{s,E}_{i+1}}| \le Q s^{\theta/2} \widetilde{G}(F^{s,E}_i,f)$$ where $\widetilde{G}$ is defined just as $G$ but with respect to the pair $(\widetilde{\pi},\widetilde{\mu})$. Here we can take $$Q^2=AA_2\max\{P_e,8^\theta 2 P A^2D^6\}.$$ In this computation (see the proof of Lemma \ref{lem-WN3}), we have had to estimate $\widetilde{\pi}(8F_j)/\widetilde{\pi}(F_j)$ by $ AD^3$ using the doubling property of $\pi$ and the fact that $\psi$ is $(\eta,A)$-regular (in words, what is used here is the fact that, because $\psi$ is $(\eta,A)$-regular, $\widetilde{\pi}$ is doubling on balls that are far away from the boundary even so it is not necessarily globally doubling on $(U,\mathfrak E_U)$). Next, set $$G^*(F,f) = \left(\frac{1}{\pi(F)}\sum _{x\in 8F \cap U}\sum_{y\sim x, y\in U}|f(x)-f(y)|\widetilde{\mu}_{xy}\right)^{1/2}.$$ This differs from $\widetilde{G}(F,f)$ only by the use of $\pi$ instead of $\widetilde{\pi}$ in the fraction appearing in front of the summations (but note that this quantity involves the edge weight $\widetilde{\mu}$). Now, we have $$|\widetilde{f}_{F^{s,E}_i}-\widetilde{f}_{F^{s,E}_{i+1}}| \left(\frac{\widetilde{\pi}(3E)}{\pi(3E)} \right)^{1/2} \le A\sqrt{A_1}Qs^{(\theta+\omega)/2} G^*(F^{s,E}_i,f)$$ because $$\frac{\widetilde{\pi}(3E)}{\pi(3E)} \le A\psi(x_0)\le A A_1s^{\omega} \psi(x_i)\le A ^2A_1 s^{\omega} \frac{\widetilde{\pi}(F^{s,E}_i)}{\pi(F^{s,E}_i)}.$$ This gives \begin{eqnarray*} \lefteqn{|\widetilde{f}_{3E}-\widetilde{f}_{F(s,E)}| \left(\frac{\widetilde{\pi}(3E)}{\pi(3E)} \right)^{1/2} \mathbf 1_E}&&\\ &\le& A^2A_1Q s^{(\theta+\omega)/2} \sum_0^{q^*_{s}(E)-1} G^*(F^{s,E}_i,f) \mathbf 1_E \mathbf 1_{ \kappa F^{s,E}_i}.\end{eqnarray*} To finish the proof, we square both sides, multiply by $\pi(3E)$, and proceed as at the end of the proof of Theorem \ref{th-PP1}, using the doubling property of $\pi$. \end{proof} \subsection{Regular weights are always controlled} The following lemma is a version of a well-known fact concerning chains of Whitney balls in John domains. \begin{lem} \label{lem-RC} Assume that $(\mathfrak X,\mathfrak E,\pi)$ is doubling with constant $D$. Fix $\eta \in (0,1/8)$. 
Let $\psi$ be a weight on a finite domain $U$ such that $\psi$ is $(\eta,A)$-regular on $U$ and $U$ is a John domain, $U\in J(\alpha,o,R)$, equipped with John paths $\gamma_x$ joining $x$ to $o$, $x\in U$, and a family of $\eta$-Whitney coverings $\mathcal W_s$, $s>0$. Then there exist $\omega\ge 0$ and $A_1\ge 1$ such that $\psi$ is $(\omega,A_1)$-controlled on $U$. Here $A_1=A^{2+4\kappa}$ and $\omega=2\kappa \log _2A$ with $\kappa= D^{4+\log_2(1+1/(\alpha\eta))} .$\end{lem} \begin{proof} Using the notation of Definition \ref{def-controlled}, we need to compare the values taken by the weight $\psi$ at any pair of points $x_0,x_i,$ such that $x_0$ is the center of a Whitney ball $E$ and $x_i$ is the center of a ball belonging to the local s-chain $\mathcal W'_{<s}(E)$. This local s-chain is made of balls in $\mathcal W'_s$, each of which has radius at most $3s$ and intersects the John path $\gamma_E=\gamma_{x_0}$ joining $x_0$ to $o$. Assume that we can prove that \begin{equation}\label{logcount} \#\{ K\in \mathcal W_{<s}: 2K \cap \gamma_E\neq \emptyset\}\le \kappa \log_2 (4s).\end{equation} Of course, under this assumption, $$1+q_s^*(E)=\#\mathcal W'_{<s}(E)\le 1+\kappa \log_2 (4s) .$$ Further, by definition of $\mathcal W'_{<s}(E)=(F^{s,E}_0,\dots,F^{s,E}_{q^*_s(E)})$, the balls $2F^{s,E}_i,2F^{s,E}_{i+1}$ have a non-empty intersection or are singletons $\{x_i\}$, $\{x_{i+1}\}$ with $\{x_i,x_{i+1}\}\in \mathfrak E$. Since $\psi$ is $(\eta,A)$-regular and the ball $2F^{s,E}_i$ has radius $6r(x_i)\le 3\eta \delta(x_i)/2$, we have $$\psi(x_i)\le A^2\psi(x_{i+1}), \;\;i=0,\dots,q^*_s(E).$$ This implies \begin{equation} \label{RtoC1} \psi(x_0)\le A^{2(1+\kappa \log_2 (4s))} \psi(x_i)= A^{2+4\kappa} s^{2\kappa \log_2 A} \psi(x_i) ,\;\; \;\;i=0,\dots,q^*_s(E).\end{equation} To prove (\ref{logcount}), for each $\rho\ge 1$, let the John path $\gamma_{x_0}$ be $$\gamma_{x_0}=(\xi_0=x_0, \dots,\xi_m=o).$$ Consider $$\#\{ K=B(x,r)\in \mathcal W_{<s}: 3K \cap \gamma_E\neq \emptyset, r\in [\rho,2\rho) \},\;\;\rho\ge 1.$$ Let $K=B(x,r), K'=B(x',r')$ be any two balls from that set and let $\xi_i\in 3K$ and $\xi'_{i}\in 3K'$ be two points on the John path $\gamma_{x_0}$ that are witness to the fact that these balls intersect $\gamma_{x_0}$. Now, by construction, $$ d(x,\xi_i)\le 3r,\;r=\eta \delta(x)/4\mbox{ and } \delta (\xi_i) \ge \alpha (1+i)$$ It follows that $\delta(x)\ge \delta(\xi_i)- (3\eta/4)\delta(x)$ and thus, using a similar argument for~{$x',\xi'_{i},r'$,} $$\delta(x)\ge (\alpha/2) (1+i) \mbox{ and } \delta(x')\ge (\alpha/2) (1+i') .$$ This implies that $1+\max\{i,j\}\le (16/\alpha \eta) \rho$ and $$d(x,x')\le 8\left(1+ \frac{2}{\alpha\eta} \right)\rho,\;\; B(x',\rho)\subset B\left(x, \left(9+ \frac{16}{\alpha\eta} \right) \rho\right).$$ By construction, the balls $K'\in \mathcal W_{<s}$ are disjoint and the doubling property of $\pi$ thus implies that $$\#\{ K=B(x,r)\in \mathcal W_{<s}: 3K \cap \gamma_E\neq \emptyset, r\in [\rho,2\rho) \} \le D^{ 4+\log_2 (1+1/(\alpha\eta))}. $$ The same argument shows that $$\#\{ K=B(x,r)\in \mathcal W_{<s}: 3K \cap \gamma_E\neq \emptyset, r\in (0,1) \} \le D^{ 4+\log_2 (1+1/(\alpha\eta))}. 
$$ For $s\in (2^k,2^{k+1}]$, this implies \begin{eqnarray*} \#\{ K\in \mathcal W_{<s}: 3K \cap \gamma_E\neq \emptyset\} &\le & D^{ 4+\log_2 (1+1/(\alpha\eta))} (k+2)\\ & \le & D^{ 4+\log_2 (1+1/(\alpha\eta))} \log_2(4s).\end{eqnarray*} This, together with~\eqref{RtoC1}, yields $$ \psi(x_0)\le A^{2+4\kappa} s^{2\kappa \log_2 A} \psi(x_i)$$ for $i=0,\dots,q^*_s(E), \kappa= D^{4+\log_2(1+1/(\alpha\eta))}$. \end{proof} \section{Application to Metropolis-type chains} \setcounter{equation}{0} \label{sec-Metro} \label{sec-Met} \subsection{Metropolis-type chains} We are ready to apply the technical results developed so far (primarily within Section~\ref{sec-AW}) to Metropolis-type chains on John domains. The reader may find motivation in the explicit examples of Section~\ref{sec-metropolis-ex}. First we explain what we mean by Metropolis-type chains. Classically, The Metropolis and Metropolis Hastings algorithms give a way of changing the output of one Markov chain to have a desired stationary distribution. See~\cite{liu} or~\cite{DSCMetro} for background and examples. Assume we are given the background structure $(\mathfrak X,\mathfrak E,\mu, \pi)$ with $\mathfrak X$ finite or countable. Assume that $\mu$ is adapted and subordinated to $\pi$. Let $U$ be a finite domain in $\mathfrak X$. This data determines an irreducible Markov kernel $K_{N,U}$ on $U$ with reversible probability measure $\pi_U$, proportional to $\pi |_U$, given by (this is similar to~\eqref{def-Ksub}) \begin{equation} \label{def-KNU} K_{N,U}(x,y)=\left\{\begin{array}{ccl} \mu_{xy}/\pi(x) & \mbox{ for } x\neq y,\,x,y\in U\\ 1-(\sum_{z\in U:z\sim x}\mu_{xz}/\pi(x)) & \mbox{ for } x=y\in U. \end{array}\right. \end{equation} The notation $K_{N,U}$ captures the idea that this kernel corresponds to imposing the Neumann boundary condition in $U$ (i.e., some sort of reflexion of the process at the boundary). Suppose now that we are given a vertex weight $\psi$ and a symmetric edge weight $h_{xy}$ on the domain $U$. Set $$\widetilde{\pi}=\psi \pi,\;\; \widetilde{\mu}_{xy}=\mu_{xy}h_{xy},$$ and assume that $$\sum_{y\in U} \widetilde{\mu}_{xy}\le \widetilde{\pi}$$ so that $\widetilde{\mu}$ is subordinated to $\widetilde{\pi}$ in $U$. This yields a new Markov kernel $\widetilde{K}$ defined on $U$ by \begin{equation} \label{def-Ktilde} \widetilde{K}(x,y)=\left\{\begin{array}{ccl} \widetilde{\mu}_{xy}/\widetilde{\pi}(x) & \mbox{ for } x\neq y,\,x,y\in U\\ 1-(\sum_{z\in U: z\sim x}\widetilde{\mu}_{xz}/\widetilde{\pi}(x)) & \mbox{ for } x=y\in U. \end{array}\right. \end{equation} This kernel is irreducible and reversible with reversible probability measure proportional to $\widetilde{\pi}$. \begin{exa}\label{ex-metropolis} The choice $h_{xy}=\min\{\psi(x),\psi(y)\}$ satisfies this property and yields the well-known Metropolis chain with proposal chain $(K_{N,U},\pi_U)$ and target probability measure $\widetilde{\pi}_U$, proportional to $\widetilde{\pi}=\psi\pi |_U$. Other choice of $h$ would lead to similar chains including the variants of the Metropolis algorithm considered by Hastings and Baker. See the discussion in \cite[Remark 3.1]{BD}. \end{exa} \subsection{Results for Metropolis type chains} In order to simplify notation, we fix the background structure $(\mathfrak X,\mathfrak E, \pi, \mu)$. We assume that $\pi$ is $D$-doubling, $\mu$ is adapted and that the pair $(\pi,\mu)$ is elliptic and satisfies the $\theta$-Poincar\'e inequality on balls with constant $P$. We also assume that $\mu$ is subordinated to $\pi$. 
We also fix $\alpha\in (0,1)$. In the statements below, we will use $c,C$ to denote quantities whose exact values change from place to place and depend only on $\theta, D,P_e,P$ and $\alpha$. Explicit descriptions of these quantitates in terms of the data can be obtained from the proofs. They are of the form $$\max\{c^{-1}, C\}\le A_1^\theta D^{A_2(1+\log 1/\alpha)}\max\{P_e,P\} $$ where $ A_1,A_2 $ are universal constants. Within this fixed background, we consider the collection of all finite domains $U\subset \mathfrak X$ which are John domains of type $J(\alpha,o,R)$ for some point $o\in U$ and $R\le 2 R(U,o,\alpha)$. The parameter $R$ is allowed to vary freely and all estimates are expressed in terms of $R$. Recall that $\rho_o(U)=\max\{d_U(o,x): x\in U\}$ satisfies $$2R(U,o,\alpha)\ge 2\rho_o(U)\ge \delta(o) \ge \alpha R(U,o,\alpha).$$ We always assume implicitly that $U$ is not reduced to a singleton so that $R(U,o,\alpha)\ge 1$. Since $\alpha$ is fixed, it follows that $R\asymp \rho_o(U)$, namely, $$\frac{\alpha}{2}R \le \rho_o(U) \le 4R. $$ We need the following simple technical lemma. \begin{lem} \label{lem-Q} Assume that $U\in J(o,\alpha,R)$, with $R\le 2R(U,o,\alpha)$, is not a singleton and $0<\eta<1/4$. Referring to the construction of the ball $F(s,x)$, $s>0$, $x\in U$, used in Definition \ref{def-Q}, any $\eta$-Whitney covering $\mathcal W_s$ of $U$ satisfies \begin{itemize} \item $W_{=s}=\emptyset$ whenever $s\ge 3 R(U,o,\alpha)$. In that case, $F(s,x)=3E^s_o$ for all $x\in U$ and the ball $E^s_o$ has radius $$r(o)=\eta\delta(o)/4 \mbox{ with } \alpha R(U,o,\alpha)\le \delta(o) \le 2R(U,o,\alpha). $$ \item When $ s\le \alpha \eta R(U,o,\alpha)/4$, all balls $F(s,x)$ have radius $3s$. \item When $s\in (\alpha \eta R(U,o,\alpha)/4, 3R(U,o,\alpha))$, each ball $F(s,x)$, $x\in U$, has radius contained in the interval $$ [\alpha \eta R(U,o,\alpha)/2, 9R(u,o,\alpha)].$$ \end{itemize} In particular, for all $s\in (0, \alpha \eta R/8)$ \begin{equation} \label{MGF} \frac{\pi(F(s,x))}{\pi(U)}\ge \frac{\pi(B(z(x),s))}{\pi(B(z(x),8R)}\ge \frac{1}{D^2} \left(\frac{1+s}{8R}\right)^{\log_2 D},\end{equation} and, for all $s\in (0,\alpha\eta R/8)$, the averaging operator $Q_s$ (Definition \ref{def-Q}) satisfies $$\|Q_s f\|_\infty\le M(1+s)^{-\log_2 D} \|f\|_1 $$ where $\|f\|_1=\sum|f|\pi$ and $M= D^2 (8R)^{\log_2 D} /\pi(U) $.\end{lem} \begin{rem} \label{rem-MGF} The bound~\eqref{MGF} is a version of moderate growth for the metric measure space $(U,d_U,\pi)$ with the additional twist that, for each $s,x$, we consider the ball $F(s,x)$ instead of the ball $B_U(x,s)$. The reason for this is that it is the balls $F(s,x)$ that appear in the definition of the operator $Q_s$ because of the crucial use we make of the Whitney coverings $\mathcal W_s$, $s>0$. \end{rem} Our first result concerns the Markov chain driven by $K_{N,U}$ defined in Example~\ref{ex-metropolis}. This is a reversible chain with reversible probability measure $\pi_U$. We let $\beta=\beta_{N,U}$ be the second largest eigenvalue of $K_{N,U}$ and $\beta_{-}=\beta_{N,U,-}$ be the smallest eigenvalue of $K_{N,U}$. From the definition, it is possible that $U = \mathfrak X$ and $\beta_-=-1$. \begin{theo}\label{th-KNU} There exist constants $c,C$ such that for any $R>0$ and any finite domain $U\in J(\alpha,R)$, we have $$1-\beta_{N,U}\ge cR^{-\theta}.$$ Assume further that $1+\beta_{N,U,-}\ge cR^{-\theta}$. 
Under this assumption, for all $t\ge R^\theta$, $$\max_{x,y\in U}\left\{ \left|\frac{K^t_{N,U}(x,y)}{\pi_U(y)} -1\right|\right\}\le C e^{-2ct/R^\theta}.$$ \end{theo} \begin{proof} This result is a consequence of Theorems \ref{th-P1} and~\ref{th-PP1}. We use a Whitney covering family $W_s$, $s>0$, with $\eta=1/4$. For later purpose when we will need to use a given $\eta$, we keep $\eta$ as a parameter in the proof. Theorem \ref{th-P1} gives the estimates for $1-\beta_{N,U}$. (Theorem \ref{th-PP1} also gives that eigenvalue estimate if we pick $s\asymp R$ large enough that the Whitney covering $\mathcal W_s$ is such that $W_{=s}$ is empty.) By Lemma \ref{lem-Q}, for $s\in (0,\alpha \eta R/8]$ $$\|Q_sf\|_\infty\le M (1+s)^{-\log_2 D}\|f\|_1$$ where $M=D^2 (8R)^{\log_2 D}/ \pi(U) .$ Now, we appeal to Theorem \ref{th-PP1} and Propositions \ref{pro-Nash1} and~\ref{pro-Nash2} to obtain $$\sup_{x,y\in U} \{K^t(x,y)/\pi(y)\}\le C \pi(U)^{-1} (R^\theta/(n+1))^{\log_2 D/\theta} $$ for all $t\le (\alpha \eta R/4)^\theta $. This is the same as \begin{equation} \label{NashK} \sup_{x,y\in U} \{K^t(x,y)/\pi_U(y)\}\le C (R^\theta/(n+1))^{\log_2 D/\theta} \end{equation} for all $t\le (\alpha \eta R/4)^\theta$, because $\pi_U=\pi(U)^{-1}\pi|_U$. The constant $C$ is of the type described above and incorporates various factors depending only on $D,\theta, \alpha, P, P_e$ which are made explicit in Theorem \ref{th-PP1}, Lemma \ref{lem-Q} and Propositions \ref{pro-Nash1} and~\ref{pro-Nash2}. The next step is (essentially) \cite{DSCNash}[Lemma 1.1]. Using operator notation for ease, write $$\sup_{x,y\in U}\left\{\left|\frac{K^t(x,y)}{\pi_U(y)}-1\right|\right\}= \|(K-\pi_U)^t\|_{L^1(\pi_U)\rightarrow L^\infty}$$ and observe that, for any $t_1,t_2$ such that $t=t_1+2t_2$, $\|(K-\pi_U)^t\|_{L^1(\pi_U)\rightarrow L^\infty}$ is bounded above by the product of $$ \|(K-\pi_U)^{t_2}\|_{L^1(\pi_U)\rightarrow L^2(\pi_U)}, \;\;\|(K-\pi_U)^{t_1}\|_{L^2(\pi_U)\rightarrow L^2(\pi_U)}$$ and $$\|(K-\pi_U)^{t_2}\|_{L^2(\pi_U)\rightarrow L^\infty}.$$ The first and last factors are equal (reversibility and duality) and also equal to $$ \sqrt{\sup_{x,y\in U} \{K^{2t_2}(x,y)/\pi_U(y)\} }.$$ The second factor is $$\|(K-\pi_U)^{t_2}\|_{L^2(\pi_U)\rightarrow L^2(\pi_U)} = \max \{\beta_{N,U},|\beta_{N,U,-}|\}^{t_1}.$$ We pick $t_2$ to be the largest integer less than or equal to $(\alpha\eta R)^\theta /8$ and apply (\ref{NashK}) to obtain $$\sup_{x,y\in U}\left\{\left|\frac{K^t(x,y)}{\pi_U(y)}-1\right|\right\} \le 2^{\log_2 D/\theta} C (\alpha \eta )^{\log_2 D/\theta} \max \{\beta_{N,U},|\beta_{N,U,-}|\}^{t_1}.$$ This gives the desired result. \end{proof} The following very general example illustrates the previous theorem. \begin{exa}[Graph metric balls] Fix constants $P_e,P,\theta$ and $D$. Assume that $(\mathfrak X,\mathfrak E,\pi,\mu)$ is such that the volume doubling property holds with constant $D$ together with $P_e$-ellipticity and the $\theta$-Poincar\'e inequality with constant $P$. We also assume (for simplicity) that $$ \sum_{y\sim x} \mu_{xy}\le \pi(x)/2.$$ Under this assumption, for any finite domain $U$, the kernel $K_{N,U}$ has the property that $K_{N,U}(x,x)\ge 1/2$ (this is often called ``laziness'') and it implies that $\beta_{N,U,-}\ge 0$. Let $U=B(o,R)$ be any graph metric ball in $(\mathfrak X,\mathfrak E)$. From Example~\ref{exa-mb}, such a ball is a John domain with $\alpha=1$, namely, $U\in J(\mathfrak X,\mathfrak E, 1, o,R)$ and $R=R(U,\alpha,o)$. 
Since $\beta_{N,U,-}\ge 0$, Theorem \ref{th-KNU} applies and show that $K^t_{N,U}$ converges to $\pi_U$ in times of order $R^\theta$. This applies for instance to the metric balls of the Vicsek graph of Figure \ref{fig-V2}. \end{exa} Next we consider a weight $\psi$ which is $(\eta,A)$-regular to $U$ and $A$-doubling. This means that the measure $\widetilde{\pi}=\psi\pi$ is $A$ doubling on $(U,\mathfrak E_U)$ (and also it extension to $(\mathfrak U,\mathfrak E_\mathfrak U)$ is $2A$ doubling). For simplicity we pick $\widetilde{\mu}$ to be given by the Metropolis choice $$\widetilde{\mu}_{xy}=\mu_{x,y}\min\{\pi(x),\pi(y)\}.$$ This implies that $\widetilde{\mu}$ is subordinated to $\widetilde{\pi}$ and we let $$K_{U,\psi}=\widetilde{K}$$ be defined by (\ref{def-Ktilde}). This reversible Markov kernel has reversible probability measure $\widetilde{\pi}_U$ proportional to $\psi\pi$ on $U$. Also, the hypothesis that $\psi$ is $(\eta,A)$-regular to $U$ implies that the pair $(\widetilde{\psi},\widetilde{\mu})$ $(\eta, A^2)$-dominates $(\pi,\mu)$ on $U$. See Remark \ref{rem-psimu}. This shows that we can use Theorem \ref{th-PP1doub} to prove the following result using the same line of reasoning as for Theorem \ref{th-KNU}. We will denote by $\beta_{U,\psi}$ the second largest eigenvalue of $\widetilde{K}=K_{U,\psi}$ and by $\beta_{U,\psi,-}$ its lowest eigenvalue. \begin{theo}\label{th-Ktilde1} For fixed $\eta\in (0,1/8),A\ge 1$, there exist constants $c,C$ such that for any $R>0$, any finite domain $U\in J(\alpha,R)$ and any weight $\psi$ which is $(\eta,A)$-regular (see Definition~\ref{def-regular})and $A$-doubling on $U$, we have $$1-\beta_{U,\psi}\ge cR^{-\theta}.$$ Assume further that $1+\beta_{U,\psi,-}\ge cR^{-\theta}$. Under this assumption, for all $t\ge R^\theta$, $$\max_{x,y\in U}\left\{ \left|\frac{\widetilde{K}^t(x,y)}{\widetilde{\pi}_U(y)} -1\right|\right\}\le C e^{-2ct/R^\theta}.$$ There are universal constants $A_1,A_2$ such that $$\max\{c^{-1}, C\}\le A_1^\theta (AD)^{A_2(1+\log 1/\alpha\eta)}\max\{P_e,P\} .$$ \end{theo} Replacing the hypothesis that $\psi$ is $(\eta,A)$-regular and $A$-doubling by the hypothesis that $\psi$ is $(\eta,A)$-regular and $(\omega,A)$-controlled leads to the following similar statement. \begin{theo}\label{th-Ktilde2} For fixed $\eta\in (0,1/8),A\ge 1$ and $\omega\ge 0$, there exist constants $c,C$ such that for any $R>0$, any finite domain $U\in J(\alpha,R)$ and any weight $\psi$ which is $(\eta,A)$-regular and $(\omega,A)$-controlled (see Definition~\ref{def-controlled}) on $U$, we have $$1-\beta_{U,\psi}\ge cR^{-(\theta+\omega)}.$$ Assume further that $1+\beta_{U,\psi,-}\ge cR^{-(\theta+\omega)}$. Under this assumption, for all $t\ge R^\theta$, $$\max_{x,y\in U}\left\{ \left|\frac{\widetilde{K}^t(x,y)}{\widetilde{\pi}_U(y)} -1\right|\right\}\le C e^{-2ct/R^{(\theta+\omega)}}.$$ There are universal constants $A_1,A_2$ such that $$\max\{c^{-1}, C\}\le A_1^{\theta +\omega}(AD)^{A_2(1+\log 1/\alpha\eta)}\max\{P_e,P\} .$$ \end{theo} \subsection{Explicit examples of Metropolis type chains}\label{sec-metropolis-ex} We give four simple and instructive explicit examples regarding Metropolis chains. There are based on a cube $U=[-N,N]^d$ in some fixed dimension $d$. The key parameter which is allowed to vary is $N$. This cube is equipped with it natural edge structure induced by the square grid. The underlying edge weight is $\mu_{x,y}=(2d)^{-1}$ and $\pi$ is the counting measure. 
To obtain each of our examples, we will define a ``boundary" for $U$ and a weight $\psi$ that is $(1/8,A)$-regular and $A$ doubling. \begin{exa} Our first example uses the natural boundary of $U=[-N,N]^d$ in the square grid $\mathbb Z^d$. The weight $\psi=\psi_\nu$, $\nu\ge 0$, is given by $$\psi(x)=\delta(x)^\nu.$$ Recall that $\delta(x)$ is the distance to the boundary. Thus, this power weight is largest at the center of the cube. It is $(1/8,A)$-regular and $A$-doubling with $A$ depending of $d$ and $\nu$ which we assume are fixed. Theorem \ref{th-Ktilde1} applies (with $\theta=2$). The necessary estimates on the lowest eigenvalue $\beta_{U,\psi,-}$ holds true because there is sufficient holding probability provided by the Metropolis rule at each vertex (this holding is of order at least $1/N$ and, in addition, there is also enough holding at the boundary). Here $R\asymp N$ and convergence occurs in order $N^2$ steps. \end{exa} \begin{exa} Our second example is obtained by adding two points to the box from the first example, which will serve as the boundary. Let $\mathfrak{X} = [-N,N]^d \cup \{u_{-},u_{+}\}$, where $u_-$ is attached by one edge to $(-N,\dots, -N)$ and $u_+$ attached by one edge to $(N,\dots,N)$. Within $\mathfrak{X}$, let $U=[-N,N]^d$, so the boundary is $\{u_-,u_+\}$. Again, we consider the power weight $$\psi(x)=\psi_\nu(x)=\delta(x)^\nu,\;\;\nu>0$$ but this time $\delta$ is the distance to the boundary $\{u_-,u_+\}$. This power weight is constant along the hyperplanes $\sum_1^d x_i=k$ and maximum on $\sum_1^d x_i=0$. \begin{figure} \caption{The box $U=[-N,N]^3$ with two boundary points $u_1,u_+$ attached at corners $(-N,-N,-N)$ and $(N,N,N)$ (these to corners are marked with black dots). The blue plane is the set of points in $U$ at maximal distance from the boundary points $\{u_-,u_+\} \end{figure} This weight is $(1/8,A)$-regular and $A$-doubling with $A$ depending of $d$ and $\nu$ which are fixed. Theorem \ref{th-Ktilde1} applies (with $\theta=2$). The necessary estimates on the lowest eigenvalue $\beta_{U,\psi,-}$ hold true because there is sufficient holding probability provided by the Metropolis rule (again, at least order $1/N$ at each vertex). We have $R\asymp N$ and convergence occurs in order $N^2$ steps. \end{exa} \begin{exa} Our third example is obtained by adding only one boundary point to the box from the first example. Let $\mathfrak{X} = [-N,N]^d \cup \{u_0\}$ where $u_0$ is attached by one edge to the center $(0,\dots, 0)$. Within $\mathfrak{X}$, let $U = [-N,N]^d$, so the boundary is $\{u_0\}$. Still, we consider the power weight $$\psi(x)=\psi_\nu(x)=\delta(x)^\nu,\;\nu>0,$$ where $\delta$ is the distance to the boundary $\{u_0\}$. This power weight is constant along the boundary of the graph balls centered at $(0,\dots,0)$. It is largest at the four corners. In this case, we obtain a John domain with a fixed $\alpha$ only when $d>1$ (in the case $d=1$, there is no way to avoid passing near the boundary point $u_0$). When $d>1$, we can chose $o$ to be one of the four corners. Again, the weight is $(1/8,A)$-regular and $A$ doubling with $A$ depending on $d$ and $\nu$ which are fixed. Theorem \ref{th-Ktilde1} applies (with $\theta=2$). The necessary estimates on the lowest eigenvalue $\beta_{U,\psi,-}$ hold true as n the previous examples. Again, $R\asymp N$ and convergence occurs in order $N^2$ steps. 
We note that there is no problems replacing the single ``pole'' $0$ in this example by an arbitrary finite set $\mathfrak O$ of ``poles'', as long as we fix the number of elements in $\mathfrak O$. \end{exa} \begin{exa} This last example involves weights which lead to non-doubling measure but are $\omega$-controlled. Take $d=1$ and $U=[-N,\dots,N]$, a symmetric interval around $0$ in $\mathbb Z$. Fix $\nu>1$ and consider the weight $\psi_{\nu}=\delta(x)^{-\nu}$, where $\delta$ is the distance to the boundary $\{-N-1,N+1\}$. It is easy to check that this weight is not doubling (compare the $\widetilde{\pi}$-volume of $B(0,N/2)$ to that of $B(0,N)$). Obviously, $\psi_\nu$ is $\omega$-controlled with $\omega=\nu$. The reference \cite[Theorem 9.6]{LSCSimpleNash} applies to this family and provides the eigenvalue estimate $$1-\beta_{U,\psi_\nu}\approx N^{-1-\nu}$$ and the fact that this chain converges to its equilibrium measure in order $N^{1+\nu}$ steps. This should be compared with the eigenvalue estimate of Theorem \ref{th-Ktilde2} which reads $1-\beta_{U,\psi_\nu}\ge c N^{-2-\nu} $ because $R\approx N$ and $\omega=\nu$. This estimate is off by a factor of $N$, but it is clear that the parameter $\omega=\nu$ plays a key role in estimating $\beta_{U,\psi_\nu}$ in this case. The following modification of this example shows that the eigenvalue estimate of Theorem \ref{th-Ktilde2} is actually almost optimal. Consider $\mathbb [-(N+1),(N+1)]$ equipped with the measure $\pi(x)= (N+2-|x|)^{-\alpha}$, $\alpha\in (0,1)$ and the usual graph structure induced by $\mathbb Z$. This space is doubling and satisfies the Poincar\'e inequality on balls (this is not obvious, but it can be proved). On this space, let $U=[-N,\dots,N]$ and repeat the construction above with $\psi_\nu(x)=\delta(x)^{-\nu}$, $\nu>1-\alpha$. Now, on this new space, this weight is not doubling but it is $\omega$-controlled with $\omega=\nu$. The previous argument shows that the eigenvalue $\beta_{U,\alpha,\psi_\nu}$ satisfies $1-\beta_{U,\alpha,\psi_\nu}\approx N^{-1-\alpha-\nu}$ whereas Theorem \ref{th-Ktilde2} yields $\beta_{U,\alpha,\psi_\nu}\ge c N^{-2-\nu}.$ Since $\alpha$ can be chosen as close to $1$ as desired, Theorem \ref{th-Ktilde2} is indeed almost sharp. \end{exa} \section{The Dirichlet-type chain in $U$} \label{sec-Dir} \setcounter{equation}{0} We continue with our general setup described by the data $(\mathfrak X,\mathfrak E,\pi,\mu)$. We assume that $\mu$ is adapted and that $\mu$ is subordinated to $\pi$. For any finite domain $U$, we consider $K_{D,U}$, the Dirichlet-type kernel in $U$, defined by \begin{equation} \label{KDU} K_{D,U}(x,y)= \left\{\begin{array}{ccl} \mu_{xy}/\pi(x) & \mbox{ for } x\neq y \mbox{ with } x,y\in U\\ 1-(\sum_{z\in \mathfrak X: z\sim x}\mu_{xz}/\pi(x)) & \mbox{ for } x=y\in U. \end{array}\right. \end{equation} This is the kernel describing the chain that is killed when it exits $U$. Let us point out the subtle but essential difference between this definition and that of $K_{N,U}$, the Neumann-type kernel on $U$. The values of these two kernels are the same when $x\neq y$ or when $x=y$ has no neighbors outside $U$. 
But when $x=y$ has a neighbor outside $U$, we have $$K_{N,U}(x,x)= 1- \left(\sum_{z\in U: z\sim x}\mu_{xz}\right)/\pi(x) $$ whereas $$K_{D,U}(x,x)= 1- \left(\sum_{z\in \mathfrak X: z\sim x}\mu_{xz}\right)/\pi(x) .$$ Because $\mu$ is adapted, at such a point $x$, $$\sum_{y\in U}K_{N,U}(x,y)=1 \mbox{ whereas }\sum_{y\in U}K_{D,U}(x,y) <1.$$ In words, the kernel $K_{D,U}$ is not strictly Markovian and the Markov chain corresponding to this kernel includes killing at the boundary. In terms of the global Markov kernel $K=K_\mu$ defined on $\mathfrak X$ by (\ref{def-Ksub}), we have $$K_{D,U}= \mathbf 1_U(x)K(x,y)\mathbf 1_U(y).$$ To simplify notation, we set $$K_U=K_{D,U}.$$ The goal of this section is to apply the previous results to the study of the iterated kernel $K^t_U(x,y)$. This will be done using the method of Doob's transform explained in more general terms in the next subsection. \subsection{The general theory of Doob's transform} \label{sec-Doob} For the purpose of this subsection, we simply assume we are given a finite or countable state space $\mathfrak X$ equipped with a Markov kernel $K$. We do not assume any reversibility. Fix a finite subset $U$ and consider the restricted kernel $$K_U(x,y)=\mathbf 1_U(x)K(x,y)\mathbf 1_U(y). $$ Throughout this section, we assume that this kernel $K_U$ is irreducible on $U$ in the sense that for any $x,y\in U$ there is an integer $t=t(x,y)$ such that $K_U^t(x,y)>0$. The period $d$ of $K_U$ is the greatest common divisor of $\{t: K_U^t(x,x)>0\}$. Note that $d$ is independent of the choice of $x\in U$. When $d=1$ (which is referred to as the aperiodic case), there exists an $N$ such that $K^N_U(x,y)>0$ simultaneously for all $x,y\in U$. We are interested in understanding the behavior of the chain driven by $K$ on $\mathfrak X$, started in $U$ and killed at the first exit from $U$. If $(X_t)_0^\infty$ denotes the chain driven by $K$ on $\mathfrak X$ and $$\tau=\tau_U=\inf\{t\ge 0: X_t\not \in U\}$$ is the exit time from $U$, we would like to have good approximations for quantities such as $$\mathbf P_x(\tau_U >\ell),\;\;\mathbf P_x( X_t=y \ | \ \tau_U >\ell ),\;\;\mathbf P_x( X_t=y \mbox{ and } \tau>t),$$ for $x,y\in U_N,\;\; 0\le t\le \ell.$ The last of these quantities is, of course, $$\mathbf P_x( X_t=y \mbox{ and } \tau>t)= K^t_U(x,y).$$ See \cite{CMSM} for a book length discussion of such problems. The key lemma is the following. \begin{lem} \label{lem-Cond} Assume that $K_U$ is irreducible aperiodic. Let $\beta_0,\phi_0$ denote the Perron-Frobenius eigenvalue and right eigenfunction of $K_U$. The limit $$\mathbf{P}_x(X_t = y \ | \ \tau_U = \infty) = \lim_{L \rightarrow \infty} \mathbf{P}_x(X_t = y \ | \ \tau_U > L)$$ exists and it is equal to $$ \mathbf{P}_x (X_t = y \ | \ \tau_U = \infty)= K_{\phi_0}^t(x,y)$$ where $K_{\phi_0}$ is the irreducible aperiodic Markov kernel given by \begin{equation} \label{def-Kphi0} K_{\phi_0}(x,y)=\beta_0^{-1}\frac{1}{\phi_0(x)}K_U(x,y)\phi_0(y),\;\;x,y\in U. \end{equation} \end{lem} \begin{rem} When $K_U$ is irreducible but periodic, it still has a unique Perron-Frobenius eigenvalue and right eigenfunction, $\beta_0,\phi_0$, and one can still define the Markov kernel $K_{\phi_0}$ (and use it to study $K_U$), but the limit in the lemma does not typically exist. See Example~\ref{ex-five-vertices} below. \end{rem} \begin{rem} \label{rem-Doob} In general terms, {\em Doob's transform method} studies the Markov kernel $K_{\phi_0}$ in order to study the iterated kernel $K^t_U$. 
By definition, $$K^t_U(x,y)= \beta_0^t \phi_0(x) K^t_{\phi_0}(x,y) \frac{1}{\phi_0(y)}.$$ Let $\phi_0^*$ denote the (positive) left eigenfunction of $K_U$ associated with $\beta_0$. By inspection, the positive function $\phi_0^*\phi_0$, understood as a measure on $U$, is invariant under the action of $K_{\phi_0}$, that is, $$\sum_x\phi_0^*(x)\phi_0(x)K_{\phi_0}(x,y)= \phi_0^*(y)\phi_0(y).$$ This measure can be normalized to provide the invariant probability measure for the irreducible Markov kernel $K_{\phi_0}$. We call this invariant probability measure $\pi_{\phi_0}$. It is given by $$\pi_{\phi_0}=\frac{\phi_0^*\phi_0}{\sum_U\phi_0^*\phi_0}.$$ The measure $\pi_{\phi_0}$ is one version of the quasi-stationary distribution (a second version is in Definition~\ref{eq-second-qs} below). The measure $\pi_{\phi_0}$ gives the limiting behavior of the chain, conditioned never to be absorbed. As shown below, it is the key to understanding the absorbing chain as well. The Doob transform is a classical tool in Markov chain theory~\cite[Chapter 8]{kemeny}. For many applications and a literature review see~\cite{Pang}. \end{rem} \begin{proof}[Proof of Lemma \ref{lem-Cond}] Fix $T \in \mathbb{N}$ and any $t \leq T$. Temporarily fix $L$, but we will let it tend to infinity. \begin{eqnarray}\lefteqn{ \mathbf{P}_x(X_t = y, \tau_U > t \ | \ \tau_U > L) } && \nonumber \\ &=& \frac{\mathbf{P}_x(\tau_U > L \ | \ X_t = y, \tau_U > t) \ \mathbf{P}_x(X_t = y, \tau_U > t)}{\mathbf{P}_x(\tau_U > L, \tau_U > t)} \label{eq:bayes-1} \end{eqnarray} We can assume $L>T$, because we will later take the limit as $L$ tends to infinity. So~\eqref{eq:bayes-1}, the identity above, becomes, \begin{eqnarray*}\lefteqn{ \mathbf{P}_x(X_t = y \ | \ \tau_U > L)} && \\ & = & \frac{\mathbf{P}_x(\tau_U > L \ | \ X_t = y, \tau_U > t) \ \mathbf{P}_x(X_t = y, \tau_U > t)}{\mathbf{P}_x(\tau_U > L)} \end{eqnarray*} or equivalently, \begin{equation} \label{eq:bayes-2} \mathbf{P}_x(X_t = y \ | \ \tau_U > L) = \frac{\mathbf{P}_x(\tau_U > L \ | \ X_t = y, \tau_U > t)}{\mathbf{P}_x(\tau_U > L)}K_U^t(x,y) \end{equation} Because $(X_t)$ is a Markov chain, \begin{align*} \frac{\mathbf{P}_x(\tau_U > L \ | \ X_t = y, \ \tau_U > t)}{\mathbf{P}_x(\tau_U > L)} &= \frac{\mathbf{P}_y(\tau_U > L-t)}{\mathbf{P}_x(\tau_U > L)} \\ &= \frac{\sum_{z \in U}K_U^{L-t}(y,z)}{\sum_{z \in U}K_U^{L}(x,z)} \\ &= \frac{\sum_{z \in U}\beta_0^{L-t}\phi_0(y)K_{\phi_0}^{L-t}(y,z)\phi_0(z)^{-1}}{\sum_{z \in U}\beta_0^{L}\phi_0(x)K_{\phi_0}^L(x,z)\phi_0(z)^{-1}}. \end{align*} Plugging this into~\eqref{eq:bayes-2}, we have \begin{eqnarray}\lefteqn{ \mathbf{P}_x(X_t = y \ | \ \tau_U > L) = } && \nonumber \\ &=& \left[\frac{\sum_{z \in U}K_{\phi_0}^{L-t}(y,z)\phi_0(z)^{-1}}{\sum_{z \in U}K_{\phi_0}^L(x,z)\phi_0(z)^{-1}}\right]\beta_0^{-t}\phi_0(x)^{-1}K_U^t(x,y)\phi_0(y) \end{eqnarray} Now we take the limit as $L$ tends to infinity. To finish the proof of Lemma~\ref{lem-Cond} we need to show that \begin{equation} \label{eq:frac-limit} \lim_{L \rightarrow \infty} \frac{\sum_{z \in U}K_{\phi_0}^{L-t}(y,z)\phi_0(z)^{-1}}{\sum_{z \in U}K_{\phi_0}^L(x,z)\phi_0(z)^{-1}}=1, \end{equation} which is the content of the following, Lemma~\ref{lem:Kphi0-equals-cond}. \end{proof} \begin{lem} \label{lem:Kphi0-equals-cond} Assume that $K_U(x,y)$ is irreducible and aperiodic on $U$. Then, \begin{equation}\label{eq-equals-cond} \lim_{L \rightarrow \infty} \frac{\sum_{z \in U}K_{\phi_0}^{L-t}(y,z)\phi_0(z)^{-1}}{\sum_{z \in U}K_{\phi_0}^L(x,z)\phi_0(z)^{-1}} = 1. 
\end{equation} \end{lem} \begin{proof} By Remark \ref{rem-Doob}, $K_{\phi_0}$ is an irreducible aperiodic Markov kernel with invariant measure $\pi_{\phi_0}$ proportional to $\phi_0^*\phi_0$. By the basic convergence theorem for finite Markov chains (e.g.,~\cite[Thm. 1.8.5]{Norris}), $$\lim_{L \rightarrow \infty} K_{\phi_0}^L(x,y) = \pi_{\phi_0}(y).$$ Applying this to $$\frac{\sum_{z \in U}K_{\phi_0}^{L-t}(y,z)\phi_0(z)^{-1}}{\sum_{z \in U}K_{\phi_0}^L(x,z)\phi_0(z)^{-1}},$$ we can see that both the numerator and denominator approach $$\sum_{z \in U} \pi_{\phi_0}(z)\phi_0(z)^{-1}.$$ The stated result follows.\end{proof} \begin{rem} If $K_U$ is irreducible and periodic of period $d>1$ then so is $K_{\phi_0}$. The chain driven by $K_{\phi_0}$ has $d$ periodic classes, $C_i$ (with $0\le i\le d-1$) each of which has the same measure, $\pi_{\phi_0}(C_i)=\pi_{\phi_0}(C_0)$, and the limit theorem reads $$\lim_{ L\rightarrow \infty} K_{\phi_0}^{t+Ld}(x,y) =\left\{ \begin{array}{cl} \pi_{\phi_0}(y)/d &\mbox{ if } x\in C_i , y\in C_{i+t}\\ 0 & \mbox{ otherwise}. \end{array}\right.$$ Here, $0\le i\le d-1$, and the index $i+t$ in $C_{i+t}$ is taken modulo $d$. It follows that, typically, the ratio in Lemma \ref{lem:Kphi0-equals-cond} has no limit. See below for a concrete example. \end{rem} \begin{figure} \caption{Simple random walk on five vertices} \label{pic:five-vertices} \end{figure} \begin{exa}\label{ex-five-vertices} As a concrete example, consider the simple random walk on five vertices where the boundary vertices have holding probability $\frac{1}{2}$. $$K(x_i,x_j) = \begin{cases} \frac{1}{2} & \text{if }|i-j|=1, \ i=j=0, \text{ or } i=j=4 \\ 0 & \text{else} \end{cases}.$$ Let $U = \{x_1,x_2,x_3\}$ be the middle three vertices and define $K_U$ to be sub-Markovian kernel described above. The transition matrix for $K_U$ is given by $$\begin{bmatrix} 0 & \frac{1}{2} & 0 \\ \frac{1}{2} & 0 & \frac{1}{2} \\ 0 & \frac{1}{2} & 0 \end{bmatrix},$$ with largest eigenvalue $\beta_0 = \frac{\sqrt{2}}{2}$ and normalized eigenfunction $$\phi_0 = \begin{bmatrix} \frac{1}{2} \\ \frac{\sqrt{2}}{2} \\ \frac{1}{2} \end{bmatrix}.$$ This is a reversible situation (hence, $\phi_0^*=\phi_0$) and the period is $2$ with periodic classes: $C_0 = \{x_2\}$ and $C_1 = \{x_1,x_3\}$. We have $$ \lim_{L\rightarrow \infty}\sum_y K_{\phi_0}^{2L} (x_2,y) \phi_0^{-1}(y)=\sqrt{2}$$ and $$\lim_{L\rightarrow \infty}\sum_y K_{\phi_0}^{2L+1} (x_2,y) \phi_0^{-1}(y)=2,$$ and hence the ratio in Lemma~\ref{lem:Kphi0-equals-cond} has no limit. \end{exa} Previously, we were considering $\mathbf{P}_x(X_t = y \ | \ \tau_U > L)$, the probability that the process $(X_t)$ equals $y$ at time $t$ and is still inside $U$ at some other time $L$. Now, we consider the case where $t=L$. \begin{defin}\label{eq-second-qs} Set $$\nu_x^t(y) = \mathbf{P}_x(X_t = y \ | \ \tau_U> t), \;\;x,y\in U.$$ This is the second form of quasi-stationary distribution; $\nu_x^t (y)$ describes the chance that the chain is at $y$ at time $t$ (starting from $x$) given that it is still alive. \end{defin} \begin{theo} \label{theo:mu-limit} Assume that $K_U$ is irreducible and aperiodic. 
Then $$\lim_{t \rightarrow \infty} \nu_x^t(y) = \frac{\phi^*_0(y)}{\sum_{U} \phi^*_0}.$$ \end{theo} \begin{proof} Write \begin{align} \nu^t_x(y) &= \mathbf{P}_x(X_t = y \ | \ \tau_U > t) \nonumber\\ &= \frac{\mathbf{P}_x(X_t = y, \tau_U > t)}{\mathbf{P}_x(\tau_U > t)} \nonumber\\ &= \frac{K^t_U(x,y)}{\sum_{z \in U} K^t_U(x,z)} \nonumber\\ &= \frac{\beta_0^t\phi_0(x)K_{\phi_0}(x,y)\phi_0(y)^{-1}}{\sum_{z \in U} \beta_0^t \phi_0(x) K_{\phi_0}^t(x,z)\phi_0(z)^{-1}} \nonumber\\ &= \frac{K^t_{\phi_0}(x,y)\phi_0(y)^{-1}}{\sum_{z \in U} K_{\phi_0}^t(x,z)\phi_0(z)^{-1}} \label{defn-nu-t}.\end{align} Taking the limit when $t$ tends to infinity yields $$\lim_{t\rightarrow \infty}\nu_x^t(y)= \frac{\phi_0(y)^{-1}\pi_{\phi_0}(y)}{\sum_{z \in U} \phi_0(z)^{-1}\pi_{\phi_0}(z)}= \frac{\phi_0^*(y)}{\sum_z \phi_0^*(z)}.$$ This equality follows from the basic Markov chain convergence theorem~\cite[Theorem 4.9]{LevP}. The stated result follows since $\pi_{\phi_0}$ is proportional to $\phi_0^*\phi_0$. \end{proof} \begin{theo}\label{thm-nu-control} Assume that $K_U$ is irreducible and aperiodic. Then the rate of convergence in $$\lim_{t \rightarrow \infty} \nu_x^t(\cdot) = \frac{\phi^*_0}{\sum_{U} \phi^*_0}$$ is controlled by that of $$\lim_{t\rightarrow \infty} K^t_{\phi_0}(x,\cdot)= \pi_{\phi_0}.$$ More precisely, fix $\epsilon >0$. Assume that $N_{\epsilon}$ is such that, for any $t \geq N_{\epsilon}$ and~${y \in U}$, $$\left|\frac{K^t_{\phi_0}(x,y)}{\pi_{\phi_0}(y)} - 1\right| < \epsilon.$$ Then, for any $t \geq N_{\epsilon}$, $$\left|\frac{(\sum_U\phi_0^*)\nu_x^t(y) }{\phi_0^*(y)}- 1\right| < \frac{2\epsilon}{1 - \epsilon}.$$ \end{theo} \begin{proof} For a fixed $\epsilon>0$, let $N_{\epsilon}$ be such that, for $t \geq N_{\epsilon}$ and $z \in U$, \begin{equation} \label{eq:conv-of-one} \left|\frac{K^t_{\phi_0}(x,z)}{\pi_{\phi_0}(z)} - 1\right| < \epsilon, \end{equation} or equivalently, $$\left|K^t_{\phi_0}(x,z)\phi_0^{-1}(z) - c\phi_0^*(z)\right| < \epsilon c \phi_0^*(z),$$ where $c = (\sum_U \phi_0\phi_0^*)^{-1}$ is the normalization constant $\pi_{\phi_0}=c\phi_0\phi_0^*$. Summing over all $z \in U$ and applying the triangle inequality, \begin{equation} \label{eq:conv-of-sum} \left|\frac{\sum_{z \in U} K^t_{\phi_0}(x,z)\phi_0^{-1}(z)}{c\sum_{z \in U} \phi^*_0(z)} - 1 \right| < \epsilon. \end{equation} For ease of notation, we abbreviate $$a_t = K^t_{\phi_0}(x,y)\phi_0(y)^{-1}, \hspace{.8cm} a = c\phi_0^*(y),$$ $$b_t =\sum_{z \in U} K^t_{\phi_0}(x,z)\phi_0(z)^{-1}, \hspace{.8cm} b = c\sum_{z \in U} \phi_0^*(z),$$ so that~\eqref{eq:conv-of-one} and~\eqref{eq:conv-of-sum} become, $$\left|\frac{a_t}{a} - 1\right|<\epsilon \text{ and } \left|\frac{b_t}{b} - 1\right|<\epsilon.$$ The formula \eqref{defn-nu-t} for $\nu_x^t(y)$ gives $$\frac{(\sum_U\phi_0^*)\nu_x^t(y) }{\phi_0^*(y)}= \frac{(\sum_U\phi_0^*)K_{\phi_0}^t(x,y)\phi_0(y)^{-1}}{\phi_0^*(y)\sum_{z \in U} K_{\phi_0}^t(x,z)\phi_0(z)^{-1}} = \frac{a_t}{b_t}\cdot\frac{b}{a}$$ and thus \begin{align*} \left|\frac{(\sum_U\phi_0^*)\nu_x^t(y) }{\phi_0^*(y)}- 1\right| & = \left|\frac{a_t}{b_t}\cdot\frac{b}{a} - 1\right| = \left|\frac{a_tb - b_ta}{b_ta}\right| \\ &\leq \frac{b}{b_t}\left(\left|\frac{a_t}{a} - 1\right| + \left|\frac{b_t}{b} - 1\right|\right) \\ &< \frac{2\epsilon}{1-\epsilon}. \end{align*} \end{proof} \subsection{Dirichlet-type chains in John domains} We return to our main setting of an underlying space $(\mathfrak X,\mathfrak E, \pi, \mu)$ with $\mu$ subordinated to $\pi$ and $K$ defined by this data as in (\ref{def-Ksub}). 
For any finite domain $U\subset \mathfrak X$, we consider the kernel $K_U=K_{D,U}$ defined at (\ref{KDU}) and equal to $K_U(x,y)=\mathbf 1_U(x)K(x,y)\mathbf 1_U(y)$. We also let $\pi_U$ be the probability measure proportional to $\pi|_U$, i.e., $\pi_U(x) = \frac{\pi|_U(x)}{Z}$ where $Z = \sum_{y \in U} \pi|_U(y)$ is the normalizing constant. Let $\phi_0,\phi_0^*$ be the right and left Perron eigenfunctions of the kernel $K_U$ considered in subsection \ref{sec-Doob} above. By construction, $K_U(x,y)/\pi_U(y)$ is symmetric in $x,y$, that is, $$\pi_U(y)K_U(y,x) =\pi_U(x)K_U(x,y).$$ Multiplying by $\phi_0(y)$ and summing over $y$, we have $$\sum_y \phi_0(y) \pi_U(y)K_U(y,x) =\pi_U(x)\sum_y K_U(x,y)\phi_0(y)= \beta_0\pi_U(x)\phi_0(x).$$ This shows that $\phi_0(y)\pi_U(y)$ is proportional to $\phi_0^*(y)$. If we choose to normalize $\phi_0$ by the natural condition $\sum \phi_0^2\pi_U=1$, then the invariant probability measure of the Doob transform kernel $K_{\phi_0}$ at (\ref{def-Kphi0})---which is proportional to $\phi_0^*\phi_0$---is $$\pi_{\phi_0}=\phi_0^2\pi_U.$$ Next, observe that, for any $x,y \in \mathfrak X$, $$\pi(x)K(x,y)=\mu_{xy}$$ and, for any $x,y \in U$, $$\phi^2(x)\pi|_U(x)K_{\phi_0}(x,y)= \beta_0^{-1}\phi_0(x)\phi_0(y) \pi|_U(x)K(x,y) = \beta_0^{-1}\phi_0(x)\phi_0(y)\mu_{xy}.$$ This means that the kernel $K_{\phi_0}$ is obtained as a Markov kernel on the graph $(U,\mathfrak E_U)$ using the pair of weights $(\bar{\mu}, \bar{\pi})$ where $$\begin{cases} \bar{\mu}_{xy}= \beta_0^{-1}\phi_0(x)\phi_0(y) \mu_{xy} \\ \bar{\pi}= \phi_0^2\pi |_U, \end{cases}$$ i.e., $K_{\phi_0} = \bar{\mu}_{xy}/\bar{\pi}$. Indeed, for any $x,y \in U$, we have $$\bar{\mu}_{xy}= \left(\sum_U\pi\right) \pi_{\phi_0}(x)K_{\phi_0}(x,y) \mbox{ and } \bar{\pi}(x)= \left(\sum_U\pi\right) \pi_{\phi_0}(x).$$ Furthermore, $\bar{\mu}$ is subordinated to $\bar{\pi}$ in $U$ because, for any $x\in U$, $$\sum_{y\in U} \bar{\mu}_{xy}= \sum_{y\in U} \beta_0^{-1}\phi_0(x)\phi_0(y) \pi|_U(x)K(x,y) =\phi_0(x)^2\pi |_U(x)=\bar{\pi}(x).$$ All of this means that we are in precisely the situation of Section~\ref{sec-AW}. We now list four assumptions that will be used to obtain good results concerning the behavior of the chain $(K_{\phi_0},\pi_{\phi_0})$ by applying the techniques described in Section \ref{sec-AW} and Section \ref{sec-Met}. In what follows, we always fix the parameter $\alpha\in (0,1]$ as well as $\theta\ge 2$. For the reader's convenience we give brief pointers to notation that will be used crucially in what follows: John domains (Section~\ref{sec-JD}), Whitney coverings (Section~\ref{sec-WC}), $D$-doubling (Definition~\ref{defn-doubling}, the ball Poincar\'e inequality (Definition~\ref{defn-ball-poincare}), elliptic (Defintion~\ref{defn-adapted-elliptic-subor}), subordinated weight (Defintion~\ref{defn-adapted-elliptic-subor}), $(\eta,A)$-regular (Defintion~\ref{def-regular}), and $(\eta,A)$-controlled (Defintion~\ref{def-controlled}). \begin{description} \item[Assumption A1 (on $(\mathfrak X,\mathfrak E,\pi,\mu)$)] The measure $\pi$ is $D$-doubling, $\mu$ is adapted and the pair $(\pi,\mu)$ is elliptic and satisfies the $\theta$-Poincar\'e inequality on balls with constant $P$. In addition, $\mu$ is subordinated to $\pi$. \item[Assumption A2 (on the finite domain $U$)] The finite domain $U\subset \mathfrak X$ belongs to $J(o,\alpha, R)$ for some $o\in U$ with $R(o,\alpha,U)\le R\le 2R(o,\alpha,U)$. 
\item[Assumption A3 (on $U$ in terms of $\phi_0$)] There are $\eta\in (0,1/12]$ and $A\ge 1$ such that $\phi_0$ is $(\eta,A)$-regular and $A$-doubling on $U$. \item[Assumption A4 (on $U$ in terms of $\phi_0$)] There are $\eta\in (0,1/12]$, $\omega\ge 0$, and $A\ge 1$ such that $\phi_0$ is $(\eta,A)$-regular and $(\omega,A)$-controlled on $U$. \end{description} Assumption A1 will be our basic assumption about the underlying weighted graph structure $(\mathfrak X,\mathfrak E,\pi,\mu)$. Assumption A2 is a strong and relatively sophisticated assumption regarding the geometric properties of the finite domain $U$. Assumptions 3 and 4 are technical requirements necessary to apply the methods in Sections\ref{sec-PQPJD} and~\ref{sec-AW}. In the classical case when the parameter $\theta$ in the assumed Poincar\'e inequality satisfies $\theta=2$, Assumptions A1-A2 imply that Assumption A4 is satisfied. This follows from Lemma \ref{lem-H} below and Lemma \ref{lem-RC}. \begin{lem} \label{lem-H} Assume that {\em A1-A2} are satisfied and $\theta=2$. Then $\phi_0$ is $(1/8,A)$-regular with $A$ depending only on the quantities $D,P_e,P$ appearing in Assumption {\em A1}. \end{lem} \begin{proof} The short outline of the proof is that doubling and Poincar\'e (with $\theta=2$) imply the Harnack inequality $$\sup_{B}\{ \phi_0\}\le C_H \inf_{B}\{\phi_0\} $$ for any ball $B$ such that $2B\subset U$. The constant $C_H$ is independent of $B$ and $U$ and depends only of $D,P_e,P$. This would follow straightforwardly from Delmotte's elliptic Harnack inequality (see \cite{Delm-EH}) if $\phi_0$ were a positive solution of $$(I-K)u=0$$ in the ball $2B$. However, $\phi_0$ is a positive solution of $$(I-K) u= (1-\beta_0)u.$$ Heuristically, at scale less than $R$, this is almost the same because Assumption A1 implies that $1-\beta_0 \le C R^{-2}$. This easy estimate follows by using a tent test function in the ball $B(o,R/4)\subset U$. To prove the stated Harnack inequality for $\phi_0$, one can either extend Delmotte's argument (adapted from Moser's proof of the elliptic Harnack inequality for uniformly elliptic operators in $\mathbb R^n$), see~\cite{Delm-EH}, or use the more difficult parabolic Harnack inequality of \cite{Delm-PH}. Indeed, to follow this second approach, $$v(t,x)=e^{-\frac{1}{2}(1-\beta_0)t}\phi_0(x)\;\; (\mbox{resp. }\;w(t,x)= (1-\frac{1}{2}(1-\beta_0))^t\phi_0(x))$$ is a positive solution of the continuous-time (resp. discrete-time) parabolic equation $$ \left[\partial_t +\frac{1}{2}(I-K)\right] v=0 \;\; (\mbox{resp. } \; w(t+1,x)-w(t,x)= - \left[\frac{1}{2}(I-K)w_t\right] (x) )$$ in $U$ (in the discrete time case, $w_t=w(t,\cdot)$). These parabolic equations are associated with the (so-called) lazy version of the Markov kernel $K$, that is, $\frac{1}{2}(I+K)$ to insure that the results of \cite{Delm-PH} are applicable. The parabolic Harnack inequality in \cite{Delm-PH} necessitates that the time scale be adapted to the size of the ball on which it is applied, namely, the time scale should be $r^2$ if the ball has radius $r$. Our positive solution $v(t,x)=e^{-\frac{1}{2}(1-\beta_0)t}\phi_0(x)$ of the heat equation is defined on $\mathbb R\times B$ where $B=B(z,r)\subset U$ is a ball of radius $r$. The parabolic Harnack inequality gives that there is a constant $C_H$ such that, for all $x,y\in B$, $$v(r^2,x)\le C_H v(2r^2,y).$$ Because $1-\beta_0\le C R^{-2}$ and $r\le R$, the exponential factors $$e^{-\frac{1}{2}(1-\beta_0)r^2}, \;\;e^{-(1-\beta_0)r^2}$$ behave like the constant $1$. 
This implies that $\phi_0(x)\approx \phi_0(y)$ for all $x,y\in B$. \end{proof} The following statement is an easy corollary of the last part of the proof of Lemma \ref{lem-H}. See the remarks following the statement. \begin{lem} \label{lem-HBB} Fix $\theta\ge 2$. Assume that $(\mathfrak X,\mathfrak E,\pi,\mu)$ is such that $\mu$ is adapted, the pair $(\pi,\mu)$ is elliptic and $\mu$ is subordinated to $\pi$. In addition, assume that the operator $\frac{1}{2}(I+K_\mu) $ satisfies the $\theta$-parabolic inequality {\em PHI($\theta$)} of {\em \cite[(1.9)]{BB}}. If $U$ is a finite domain in $\mathfrak X$ satisfying {\em A2}, the function $\phi_0$ is $(1/8,A)$-regular with $A$ depending only on the $\theta$, $P_e$, and the constant $C_H$ from the $\theta$-parabolic Harnack inequality. \end{lem} \begin{rem} The $\theta$-parabolic inequality PHI($\theta$) of {\em \cite{BB}} implies the doubling property and the $\theta$-Poincar\'e inequality (\cite[Theorem 1.5]{BB}). In addition it implies the so-called cut-off Sobolev inequality $\mbox{CS}(\theta)$ (\cite[Definition 1.4; Theorem 1.5]{BB}). Conversely, doubling, the $\theta$-Poincar\'e inequality and $\mbox{CS}(\theta)$ imply PHI($\theta$). In the case $\theta=2$, the cut-off Sobolev inequality is always trivially satisfied. When $\theta>2$, the cut-off Sobolev inequality is non-trivial and become essential to the characterization of the parabolic Harnack inequality PHI($\theta$). See \cite[Theorem 5]{BB} (in \cite{BB}, the parameter $\theta$ is called $\beta$). \end{rem} \begin{rem} To prove Lemma \ref{lem-HBB}, it is essential to have an upper bound $1-\beta_0\le C R^{-\theta}$ on the spectral gap $1-\beta_0$. This upper bound easily follows from the cut-off Sobolev inequality $\mbox{CS}(\theta)$. \end{rem} We can now state two very general results concerning the reversible Markov chain $(K_{\phi_0}, \pi_{\phi_0})$ in the finite domain $U$. The first theorem has weaker hypotheses and is, in principle, easier to apply. When the parameter $\omega=0$, the two theorems gives essentially identical conclusions. The proofs are immediate application of the results in Section \ref{sec-AW} and follow the exact same line of reasoning used in Section \ref{sec-Met} to obtain Theorems \ref{th-Ktilde1}-\ref{th-Ktilde2}. In the following statement, $\beta_-$ is the least eigenvalue of the pair $(K_{\phi_0},\pi_{\phi_0})$ and $\beta$ is second largest eigenvalue of $(K_{\phi_0},\pi_{\phi_0})$. If $\beta_{U,-}$ denotes the smallest eigenvalue of $K_U$ on $L^2(U,\pi_U)$, then $\beta_-=\beta_{U,-}/\beta_0$. If $\beta_{U,1}$ denotes the second largest eigenvalue of $K_U$ on $L^2(U,\pi_U)$, then $\beta= \beta_{U,1}/\beta_0$. The eigenfunction $\phi_0$ is normalized by $\pi_U(\phi_0^2)=1$. \begin{theo} \label{th-Doob1} Fix $\alpha, \theta, \eta, \omega, P_e,P,D,A$ and assume {\em A1-A2-A4}. Under these assumptions there are constants $c,C\in (0,\infty)$ (where $c,C$ depend only on the parameters $\alpha, \theta, \eta, \omega, P_e,P, D$ and $ A$) such that $$ 1- \beta_0\le CR^{-\theta}$$ and $$ 1-\beta\ge c R^{-(\theta+\omega)}.$$ Assume further that $1+\beta_-\ge cR^{-(\theta+\omega)}$. 
Then, for all $t\ge R^{\theta+\omega}$, we have the following $L^{\infty}$ rate of convergence, $$\max_{x,y\in u}\left|\frac{K^t_{\phi_0}(x,y)}{\pi_{\phi_0}(y)} -1\right| \le C \exp\left(-c\frac{t}{R^{\theta+\omega}}\right).$$ Equivalently, in terms of the kernel $K_U$, this reads $$\left|K^t_U(x,y) - \beta_0^t\phi_0(x)\phi_0(y)\pi_U(y)\right|\le C\beta_0^t\phi_0(x)\phi_0(y)\pi_U(y)e^{-ct/R^{\theta+\omega}},$$ for all $x,y\in U$ and $t\ge R^{\theta+\omega}$. \end{theo} \begin{rem} Part of the proof of this result is to show that there are constants $C,\nu$ such that, for all $t\le R^{\theta+\omega}$ and $x,y\in U$, $$\frac{K^t_{\phi_0}(x,y)}{\pi_{\phi_0}(y)}\le C (R^{\theta+\omega}/t)^\nu,$$ where $C,\nu$ depends only on the parameters $\alpha, \theta, \eta, \omega, P_e,P, D$ and $ A$. In terms of $K_U^t$, this becomes for all $t\le R^{\theta+\omega}$ and $x,y\in U$, $$\frac{K^t_{U}(x,y)}{\pi_U(y)}\le C (R^{\theta+\omega}/t)^\nu \phi_0(x)\phi_0(y).$$ This type of estimate for $K^t_U$ is called intrinsic ultracontractivity. It first appeared in the context of Euclidean domains in \cite{DavSim,DavUl} (see also \cite{DavisB}) and has been discussed since by many authors. In its classical form, ultracontractivity of the Dirichlet heat semigroup in a bounded Euclidean domain $U$ is the statement that, for each $t>0$, there is a constant $C_t$ such that for all $x,y\in U$, $$h^D_U(t,x,y)\le C_t\phi_0(x)\phi_0(y)$$ Here $h^D_U(t,x,y)$ is the fundamental solution (e.g., heat kernel) of the heat equation with Dirichlet boundary condition in $U$. Ultracontractivity may or may not hold in a particular bounded domain. It is known that it holds in bounded Euclidean John domains, see \cite{CipUJ}. We note here that running the line of reasoning used here in the case of bounded Euclidean John domains would produce more effective ultracontractivity bounds than the ones reported in \cite{CipUJ}. \end{rem} \begin{rem} As mentioned above, Theorem \ref{th-Doob1} is relatively easy to apply. Hypothesis A1 is our basic working hypothesis regarding $(\mathfrak X,\mathfrak E,\pi,\mu)$. Hypothesis A2 requires the finite domain $U$ to be a John domain. When $\theta=2$, Hypothesis A4 is automatically satisfied for some $\omega\ge 0$ depending only on the other fixed parameters (Lemma \ref{lem-H}). When $\theta>2$, we would typically appeal to Lemma \ref{lem-HBB} in order to verify A4. This requires an additional assumption on $(\mathfrak X,\mathfrak E,\pi,\mu)$, namely, that $\frac{1}{2}(I+K_\mu)$ satisfies the parabolic Harnack inequality PHI($\theta$) of \cite{BB}. For instance, Theorem \ref{th-Doob1} applies uniformly to the graph metric balls in $(\mathfrak X,\mathfrak E,\pi,\mu)$ under Hypothesis A1 when $\theta=2$, and under A1 and PHI($\theta$) when $\theta>2$. Consider the infinite Vicsek fractal graph $(\mathfrak X^V,\mathfrak E^V)$ (a piece of which is pictured in Figure \ref{fig-V2}) equipped the vertex weight $\pi^V(x)=4$, $x\in \mathfrak X^V$ and the edge weight $\mu^V_{xy}=1$, $\{x,y\}\in \mathfrak E^V$. This structure is a good example for the case $\theta>2$. It has $\theta=d+1$ where $d=\log 5/\log 3$ and also volume growth $\pi(B(x,r))\asymp r^d$. It satisfies the parabolic Harnack inequality PHI$(\theta)$. See, e.g., \cite[Example 2 and Example 3, Section 5]{BCK} which provides larger classes of examples of this type. \end{rem} \begin{theo} \label{th-Doob2} Fix $\alpha, \theta, \eta, P_e,P,D,A$ and assume {\em A1-A2-A3}. 
Under these assumptions there are constants $c,C\in (0,\infty)$ (where $c,C$ depend only on the parameters $\alpha, \theta, \eta, P_e,P, D$ and $ A$) such that $$ 1- \beta_0\le CR^{-\theta}$$ and $$ 1-\beta\ge c R^{-\theta}.$$ Assume further that $1+\beta_-\ge cR^{-\theta}$. Then, for all $t\ge R^{\theta}$, we have $$\max_{x,y\in u}\left|\frac{K^t_{\phi_0}(x,y)}{\pi_{\phi_0}(y)} -1\right| \le C \exp\left(-c\frac{t}{R^{\theta}}\right).$$ Equivalently, in terms of the kernel $K_U$, this reads $$\left|K^t_U(x,y) - \beta_0^t\phi_0(x)\phi_0(y)\pi_U(y)\right|\le C\beta_0^t\phi_0(x)\phi_0(y)\pi_U(y)e^{-ct/R^{\theta}},$$ for all $x,y\in U$ and $t\ge R^{\theta}$. \end{theo} \begin{rem} As for Theorem \ref{th-Doob1}, part of the proof of Theorem \ref{th-Doob2} is to show that there are constants $C,\nu$ such that, for all $t\le R^{\theta}$ and $x,y\in U$, $$\frac{K^t_{\phi_0}(x,y)}{\pi_{\phi_0}(y)}\le C (R^{\theta}/t)^\nu,$$ where $C,\nu$ depends only on the parameters $\alpha, \theta, \eta, P_e,P, D$ and $ A$. In terms of $K_U^t$, this gives the intrinsic ultracontractivity estimate for all $t\le R^{\theta}$ and $x,y\in U$, $$\frac{K^t_{U}(x,y)}{\pi_U(y)}\le C (R^{\theta}/t)^\nu \phi_0(x)\phi_0(y).$$ \end{rem} \begin{rem}Theorem~\ref{th-Doob2} gives a more satisfying result than Theorem \ref{th-Doob1} in that it does not involves the extra parameter $\omega$ (the two theorems have the same conclusion when $\omega=0$). However, Theorem \ref{th-Doob2} requires to verify Hypothesis A3, that is, to show that $\pi_{\phi_0}$ is doubling. This is an hypothesis that is hard to verify, even for simple finite domains in $\mathbb Z^d$. At this point in this article, the only finite domains in $\mathbb Z^2$ for which we could verify this hypothesis are those where we can compute $\phi_0$ explicitly such as cubes with sides parallel to the axes or the $45$ degree finite cone of Figure \ref{D0}. This shortcoming will be remedied in the next section when we show that finite inner-uniform domains satisfy Hypothesis A3 (see Theorem~\ref{theo-Carleson}). \end{rem} \begin{exa} We can apply either of these two theorems to the one dimensional example of simple lazy random walk on $\{0,1,\dots,N\}$ with absorption at $0$ and reflection at $N$. This is the leading example of \cite{DM} where quantitative estimates for absorbing chains are discussed. In this simple example, we know exactly the function $\phi_0$ and we can easily verify A1-A2-A3 and A4 with $\omega=0$. In terms of the Doob-transform chain $K_{\phi_0}$ and its invariant measure $\pi_{\phi_0}$, the result above proves convergence after order $N^2$ steps. This improves upon the results of \cite{DM} by a factor of $\log N$. \end{exa} \begin{exa} In the same manner, we can apply the two theorems above to the example discussed in the introduction (Figure \ref{D0}). The key is again the fact that we can find an explicit expression for the eigenfunction $\phi_0$ and that it follows that Assumptions A1-A1-A3 and A4 with $\omega=0$ are satisfied. The conclusion is the same. In terms of the Doob-transform chain $K_{\phi_0}$ and its invariant measure $\pi_{\phi_0}$, the result above proves convergence after order $N^2$ steps. \end{exa} \begin{exa} Let us focus on the square grid $\mathbb Z^m$ in a fixed dimension $m$ and on the family of its finite $\alpha$-John domains for some fixed $\alpha\in (0,1]$. 
In addition, for simplicity, we assume that the weight $\mu$ is constant equal to $1/4m$ one the grid edges and $\pi\equiv 1$ (this insure aperiodicity of $K$ and $K_U$). Obviously, A1 is satisfied with $\theta=2$ and A2 is assumed since $U$ is an $\alpha$-John domain. Theorem \ref{th-Doob2} does not apply here because we are not able to prove doubling of the measure $\pi_{\phi_0}$ (and in fact, doubling should probably not be expected in this generality). However, there is an $\omega$ (which depends only on the two fixed parameters $m$ and $\alpha$) such that A4 is satisfied (this follows from Lemma \ref{lem-RC} and Lemma \ref{lem-H}), and hence, we can apply Theorem~\ref{th-Doob1}. \begin{theo} \label{th-JZd} Fix $m$ and $\alpha\in (0,1]$. Let the square grid $\mathbb Z^m$ be equipped with the weights $\mu,\pi$ described above. There are constants $c=c(m,\alpha), C=C(m,\alpha)$ and $\omega=\omega(m,\alpha)$ such that, for any finite $\alpha$-John domain $U$ in $\mathbb Z^m$ with John radius $R_U=R(o,\alpha,U)$, the Doob-transform chain $K_{\phi_0}$ satisfies $$ cR_U^{-2}\le1-\beta_0\le CR_U^{-2},$$ $$1-\beta\ge c R_U^{-2-\omega},$$ and, for $t\ge R^{2+\omega}$ $$\max_{x,y\in u}\left|\frac{K^t_{\phi_0}(x,y)}{\pi_{\phi_0}(y)} -1\right| \le C \exp\left(-c\frac{t}{R_U^{2+\omega}}\right).$$ Equivalently, in terms of the kernel $K_U$, this reads $$\left|K^t_U(x,y) - \beta_0^t\phi_0(x)\phi_0(y)\pi_U(y)\right|\le C\beta_0^t\phi_0(x)\phi_0(y)\pi_U(y)e^{-ct/R^{2+\omega}},$$ for all $x,y\in U$ and $t\ge R^{2+\omega}$.Moreover, for $1\le t\le R^{\theta+\omega}$, we have $$\frac{K_U^t(x,y)}{\pi_U(y)}\le C\left(R^{2+\omega}/t\right)\phi_0(x)\phi_0(y)\pi_U(y). $$\end{theo} It is an open question whether or not it is possible to prove the above theorem with $\omega(m,\alpha)=0$ for all finite $\alpha$-John domains in $\mathbb Z^m$ or, even more generally, for a general underlying structure $(\mathfrak X,\mathfrak E,\pi,\mu)$ under assumption $A1$ with $\theta=2$. \end{exa} \begin{rem} Recall from Definition~\ref{eq-second-qs} that $\nu_x^t(y) = \mathbf{P}_x(X_t=y \ | \ \tau_u > t)$. Theorem~\ref{thm-nu-control} gives control on the rate of convergence of $\nu_x^t(y)$ in terms of the rate of convergence of $K_{\phi_0}^t(x,y)$. We can now apply Theorem~\ref{thm-nu-control} in each of the settings described above in Theorems~\ref{th-Doob1},~\ref{th-Doob2}, and~\ref{th-JZd}. For example, in the case of the square grid $\mathbb Z^m$ and for a fixed $\alpha\in (0,1)$, there exists $\omega=\omega(m,\alpha)\ge 0$ and $C=(m,\alpha), c=c(m,\alpha)>0$ such that, for any finite $\alpha$-John domain $U$ with John radius $R$, $$\forall\, t>CR^{2+\omega},\;\; \left|\frac{\phi_0(y)\nu_x^t(y)}{\sum_U\phi_0}-1\right|\le e^{-ct/R^{2+\omega}}$$ \end{rem} \section{Inner-uniform domains} \setcounter{equation}{0} \label{sec-IU} We now turn to the definition of inner $(\alpha,A)$-uniform domains. These domains form a subclass of the class of $\alpha$-John domains. They allow for a much more precise analysis of Metropolis-type chains and their Doob-transforms. \begin{figure} \caption{A graph in which, at large scales, some balls are not inner-uniform. To the left, the graph ends after finitely many step with an origin $o$ which serves as the center of the balls to be considered. To the right, the indicated pattern is repeated infinitely many times at larger and larger scales. This graph is roughly linear. 
It satisfies doubling and Poincar\'e.} \label{fig-ballnotIU1} \end{figure} \begin{figure} \caption{The basic model for the balls in the graph of Figure \ref{fig-ballnotIU1} \label{fig-ballNotIU2} \end{figure} Although the definition of inner-uniform domains given below appears to be quite similar to that of John domains, it is in fact much harder, in general circumstances, to find inner-uniform domains than it is to find John domains. In the square lattice $\mathbb Z^d$, both classes of domains are very large and contain many interesting natural examples. Things are very different if one consider an abstract graph structure $(\mathfrak X,\mathfrak E)$ of the type used in this paper. We noted earlier than any graph distance ball $B(o,r)$ in such a structure $(\mathfrak X,\mathfrak E)$ is a $1$-John domain. In particular, $\mathfrak X$ admits an exhaustion $\mathfrak X=\lim_{r\rightarrow \infty}B(o,r)$ by finite $1$-John domains. We know of no constructions of an increasing family of $\alpha$-inner-uniform domains in $(\mathfrak X,\mathfrak E)$, in general. Even if we assume additional properties such as doubling and Poincar\'e inequality on balls, we are not aware of a general method to construct inner-uniform subsets. Of course, it may happen that, as in the case of $\mathbb Z^d$, graph balls turn out to be inner-uniform (all for some fixed $\alpha>0$). But that is not the case in general. Figures \ref{fig-ballnotIU1} and~\ref{fig-ballNotIU2} describe a simple planar graph in which, there are balls $B(o,r_i)$ with $r_i$ tending to infinity which each contains points $x_i,y_i$ such that $d_{B(o,r_i)}(x_i,y_i)=\rho_i=o(r_i)$ but the only path from $x_i$ to $y_i$ of length $O(\rho_i)$ has a middle point $z_i$ which is at distance $1$ from the boundary. all other paths from $x_i$ to $y_i$ have length at least $r_i/8$. This implies that the inner-uniformity constant $\alpha_i$ of the ball $B(o,r_i)$ is $O(\rho_i/r_i)\le o(1)$. The graph in question has a very simple structure and it satisfies doubling and the Poincar\'e inequality on balls at all scales. \subsection{Definition and main convergence results} \begin{defin} A domain $U$ in $\mathfrak X$ is an inner $(\alpha,A)$-uniform domain (with respect to the graph structure $(\mathfrak X,\mathfrak E)$) if for any two points $x,y\in U$ there exists a path $\gamma_{xy}=(x_0=x,x_1,\dots,x_k=y)$ joining $x$ to $y$ in $(U, \mathfrak E_U)$ with the properties that: \begin{enumerate} \item $k \le A d_U(x,y)$; \item For any $j \in \{0,\ldots, k\}$, $d (x_j,\mathfrak X \setminus U) \ge \alpha (1+\min \{ j,k-j\})$. \end{enumerate} \end{defin} \begin{rem} Because the second condition must hold for all $x$, including those that are distance $1$ from the boundary, we see that $\alpha\in (0,1]$. \end{rem} We can think of an inner-uniform domain $U$ as being one where any two points are connected by a banana-shaped region. The entire banana must be contained within $U$. See Figure~\ref{Banana} for an illustration. There is an alternative and equivalent (modulo a change in $\alpha$) definition of inner-uniformity which uses distance instead of path-length in the second condition. More precisely, in this alternative definition, the condition ``for any $j=0,\dots, k$, $d (x_j,\mathfrak X \setminus U) \ge \alpha (1+\min \{ j,k-j\})$'' is replaced by ``for any $j=0,\dots, k$, $d(x_j,\mathfrak X \setminus U)\ge \alpha' \min\{d_U(x_j,x),d_U(x_j,y)\}$''. 
It is obvious that the definition we choose here easily implies the condition of the alternative definition (with $\alpha=\alpha'$). The reverse implication is much less obvious. It amounts to showing that it is possible to choose the path $\gamma_{xy}$ so that any of its segments $(x_i,x_{i+1},\dots,x_j)$ provide approximate geodesics between its end-points. This requires a modification (i.e., straightening) of the path $\gamma_{xy}$ provided by the definition because there is no reasons these paths have this property. See \cite{MS}. The following lemma shows that all inner-uniform domains are John domains. However, the converse is not true. See Figure~\ref{IU-J}. \begin{figure} \caption{On the left: The banana regions for arbitrary pairs of points which are the witnesses for the inner-uniform property. On the right: The carrot regions joining arbitrary vertices to the central point $o$ marked in red. They are witnesses for the John domain property.} \label{Banana} \end{figure} \begin{lem} \label{lem-IUJ} Suppose that $U$ is a finite inner $(\alpha,A)$-uniform domain. Let $o$ be a point such that $d(o,\mathfrak X \setminus U)=\max\{d(x,\mathfrak X \setminus U):x\in U\}$, and let $R = d(o,\mathfrak X \setminus U)$. Then $U\in J(\mathfrak X,\mathfrak E,o, \alpha^2/8, (2/\alpha)R)$, that is, $U$ is an ($\alpha^2/8$)-John domain. \end{lem} \begin{proof} Look at the mid-point $z=x_{\lfloor k/2\rfloor}$ along $\gamma_{xo}$. We have ${R\ge d(z,\mathfrak X \setminus U)\ge \alpha k/2}$ so the $k\le (2/\alpha)R$. We consider three cases to find a lower bound on $d(x_j,\mathfrak X \setminus U)$ along $\gamma_{xo}$. \begin{enumerate} \item When $j\le k/2$, then we have $d(x_j,\mathfrak X \setminus U)\ge \alpha (1+j)$. \item When $x_j\in B(o,R/2)$, then we have that $$d(x_j,\mathfrak X \setminus U)\ge R/2 \ge (\alpha/4) k\ge (\alpha/8)(1+j).$$ \item When $x_j\not \in B(o,R/2)$ and $j\ge k/2$, then $k-j\ge R/2$ and $$d(x_j,\mathfrak X \setminus U)\ge \alpha(1+ k-j)\ge \alpha (1+R/2)\ge \alpha(1+ \alpha k/4)\ge (\alpha^2/4)(1+j).$$ \end{enumerate} \end{proof} \begin{figure} \caption{A domain that is John but not inner-uniform. The blue dots are the boundary. Note that, on the middle vertical line, the blue dots are placed on every other vertex, up to the indicated height.} \label{IU-J} \end{figure} \begin{rem} The word ``inner'' in inner-uniform refers to the fact that the first condition compares the length $k$ of the curve $\gamma_{xy}$ to the inner-distance $d_U(x,y)$ between $x$ and $y$. If, instead, the original distance $d(x,y)$ is used (i.e., the first condition in the definition becomes $k \leq Ad(x,y)$), then we obtain a much more restrictive class of domains called ``uniform domains.'' See Figure~\ref{IU-U}. \end{rem} \begin{figure} \caption{Finite discrete ``convex subsets'' of $\mathbb Z^2$ are (inner-)uniform} \label{fig-ConvZ2IU} \end{figure} \begin{figure} \caption{A domain that is inner-uniform but not uniform.} \label{IU-U} \end{figure} \begin{exa} The set $\mathfrak X_N$ in Figure~\ref{D0} (a forty-five degree finite cone in $\mathbb{Z}^2$) is a uniform domain, and hence, also an inner-uniform domain. Finite convex sets in $\mathbb Z^d$ in the sense of Example~\ref{ex:convex} are uniform domains, all with the same fixed $(\alpha,A)$ depending only on the dimension $d$. The domain pictured in Figure \ref{D3} is a uniform domain, with the same fixed $(\alpha,A)$ for all $N$. 
Note that in this example, viewed as a subset of $\mathbb Z^2$, some of the boundary points are not killing points, but points where the process is reflected. This illustrates how variations of this type (i.e., with reflecting points) can be treated with our methods. \end{exa} \begin{exa} In Example \ref{exa-mb}, we observed that metric balls are always $1$-John domains. They are not always inner-uniform domains. See Figures~\ref{fig-ballnotIU1} and~\ref{fig-ballNotIU2}. \end{exa} \begin{exa} The discrete ``finite convex subsets" $U$ of $\mathbb Z^d$ satisfying \eqref{eq-alphaConv} and considered in Proposition \ref{pro-Conv} are inner-uniform with parameter $\bar{\alpha}>0$ depending only on the dimension $d$ and the parameter $\alpha$ in \eqref{eq-alphaConv}. Note that the inner distance in such a finite connected set is comparable to the graph distance of $\mathbb Z^d$ with comparison constant depending only on the dimension $d$ and the parameter $\alpha$ in \eqref{eq-alphaConv} (i.e., these finite domains are {\em uniform}). Here is a rough description of the paths $\gamma_{xy}$ that demonstrate that such domains $U$ are inner-uniform. (See Figure \ref{fig-ConvZ2IU}). Let $r$ be the distance between $x$ and $y$ in $\mathbb Z^d$. Recall that $U$ has ``center'' $o$ and that we can go from $x$ (and $y$) to $o$ while getting away linearly from the boundary, roughly along a straight-line (see Proposition \ref{pro-Conv}). Let $\tilde{x}$ and $\tilde{y}$ be respective points along the paths joining $x$ and $y$ to $o$, respectively at distance $r$ from $x$ and from $y$. Convexity insures that there is a discrete path in $U$ joining $\tilde{x}$ to $\tilde{y}$ while staying close to the straight-line segment between these two points. This discrete path from $\tilde{x}$ to $\tilde{y}$ has length at most $Ar$ and stays at distance at least $ar$ from the boundary. This completely the discussion of the example. \end{exa} Now we return to the general setting. We define a special point $x_r$ for each point $x \in U$ and radius $r >0$. The meaning of this definition and the key geometric property of $x_r$ is that $x_r$ is a point which is essentially as far away from the boundary as possible while still being within a ball of radius $r$ of $x$, i.e., $d(x,x_r) \leq r$. Namely, ${d(x_r,\mathfrak X \setminus U)\ge \alpha( 1+r)}$ if $r\le R$ and $x_r=o$ otherwise. \begin{defin} \label{def-x(r)} Let $U$ be a finite inner $(\alpha,A)$-uniform domain. Let $o$ be a point such that $d(o,\mathfrak X \setminus U)=\max\{d(x,\mathfrak X \setminus U):x\in U\}=R$. Let $\gamma_{xy}$ be a collection of inner $(\alpha,A)$-uniform paths indexed by $x,y\in U$. For any $x\in U$ and $r>0$, let $x_r$ be defined by $$x_r= \left\{\begin{array}{cl}x_{\lfloor r\rfloor} & \mbox{ if } \gamma_{xo}=(x=x_0,x_1,\dots, x_k=o) \mbox{ with } k\ge r ,\\ o& \mbox{ if } \gamma_{xo}=(x=x_0,x_1,\dots, x_k=o) \mbox{ with } k< r.\end{array}\right. $$ \end{defin} The following Carleson-type theorem, regarding the eigenfunction $\phi_0$, is the key to obtaining refined results for the convergence of the intrinsic Doob-transform chain on a finite inner-uniform domain. The context is as follows. 
In addition to the geometric structure $(\mathfrak X,\mathfrak E)$, we assume we are given a measure $\pi$ and an edge weight $\mu$ such that $(\mathfrak X,\mathfrak E, \pi,\mu)$ satisfies Assumption A1 with $\theta=2$, i.e., we assume that the measure $\pi$ is $D$-doubling, $\mu$ is adapted, $\pi$ domaintes $\mu$, and the pair $(\pi,\mu)$ is elliptic and satisfies the $2$-Poincar\'e inequality on balls with constant $P$. \begin{theo}\label{theo-Carleson} Assume $(\mathfrak X, \mathfrak E, \pi, \mu)$ satisfies Assumption {\em A1} with $\theta=2$ and fix $\alpha,A$. There exists a constant $C_0$ depending only on $\alpha,A,D,P_e,P$ such that, for any finite inner $(\alpha,A)$-uniform domain $U$, the positive eigenfunction $\phi_0$ for the kernel $K_U$ in $U$ is $(1/8,C_0)$-regular and satisfies $$ \forall\,r>0,\;x\in U,\; z\in B_U(x, r/2),\;\;\phi_0(z)\le C_0\phi_0(x_r).$$ \end{theo} \begin{cor} \label{cor-Carleson} Under the assumptions of {\em Theorem \ref{theo-Carleson}}, there are constants $D_0,D_1$ depending only on $\alpha,A,D,P_e,P$ such that $$\forall\,x\in U,\;r>0,\;\;\pi_{\phi_0}(B_U(x,2r))\le D_0 \pi_{\phi_0}(B_U(x,r)).$$ Moreover, for all $r\in [0,R]$, $$D_1^{-1}\phi_0(x_r)^2\pi_U(B(x_r,\alpha r) )\le \pi_{\phi_0}(B_U(x,r)) \le D_1\phi_0(x_r)^2\pi_U(B(x_r,\alpha r)).$$ \end{cor} The following corollary gives a rate of convergence of the Doob transform chain to its stationary distribution in $L^{\infty}$. \begin{cor} \label{cor:KU-rate-of-conv} Under the assumptions of {\em Theorem \ref{theo-Carleson}}, there are constants $C,c$ depending only on $\alpha,A,D,P_e,P$ such that, assuming that the lowest eigenvalue $\beta_-$ of the reversible Markov chain $(K_{\phi_0},\pi_{\phi_0}) $ satisfies $1+\beta_-\ge cR^{-2}$, we have $$\max_{x,y\in u}\left|\frac{K^t_{\phi_0}(x,y)}{\pi_{\phi_0}(y)} -1\right| \le C \exp\left(-c\frac{t}{R^{2}}\right),$$ for all $t\ge R^{2}$. In terms of the kernel $K_U$, this reads $$\left|K^t_U(x,y) - \beta_0^t\phi_0(x)\phi_0(y)\pi_U(y)\right|\le C\beta_0^t\phi_0(x)\phi_0(y)\pi_U(y)e^{-ct/R^{2}},$$ for all $x,y\in U$ and $t\ge R^{2}$. \end{cor} \begin{proof} This follows from Theorem \ref{th-Doob2} because the measure $\phi_0^2\pi_U$ is doubling by Theorem \ref{theo-Carleson} (and $U$ is a John domain by Lemma~\ref{lem-IUJ}). \end{proof} \begin{cor} \label{cor-Carleson2} Under the assumptions of {\em Theorem \ref{theo-Carleson}}, there are constants $c,C$ depending only on $\alpha,A,D,P_e,P$ such that the second largest eigenvalue $\beta $ of the reversible Markov chain $(K_{\phi_0},\phi_{\phi_0})$ satisfies $$ c R^{-2}\le 1-\beta \le CR^{-2}.$$ If $\beta_{U,1}< \beta_{U,0}=\beta_0$ denotes the second largest eigenvalue of the kernel $K_U$ acting on on $L^2(U,\pi_U)$ then $\beta=\beta_{U,1}/\beta_0$ and $$ cR^{-2}\beta_0\le \beta_0 -\beta_{U,1}\le C \beta_0 R^{-2}$$ or equivalently $$ \beta_0(1-CR^{-2})\le \beta_{U,1}\le \beta_0(1-cR^{-2}).$$ In particular, for all $t$, $$\max_{x\in U}\sum_{y\in U}| K_{\phi_0}^t(x,y)-\pi_{\pi_0}(y)|\ge ce^{-Ct/R^2}.$$ \end{cor} The following theorem is closely related to~\ref{theo-Carleson} and is to used to obtain explicit control on the function $\phi_0$. In Section~\ref{sec:examples} we demonstrate the power of this theorem in several examples. \begin{theo} \label{theo-comph} Assume {\em A1} with $\theta=2$ and fix $\alpha,A$. 
There exists a constant $C_1$ depending only on $\alpha,A,D,P_e,P$ such that, for any finite inner $(\alpha,A)$-uniform domain $U$, any point $x\in U$ and $r>0$ such that $$B_U(x,r)=\{y\in U: d_U(x,y)\le r\} \neq U$$ and any function $h$ defined in $U$ and satisfying $K_Uh =h$ in $B_U(x,r)$, we have $$\forall\,y,z\in B_U(x,r/2),\;\;\; \frac{\phi_0(y)}{\phi_0(z)}\le C_1 \frac{h(y)}{h(z)}.$$ \end{theo} \subsection{Proofs of Theorems \ref{theo-Carleson} and \ref{theo-comph}: the cable space with loops} \label{sec-cable} The statement in Theorem \ref{theo-Carleson} is a version of a fundamental inequality known as a Carleson estimate \cite{Carleson} and was first derived in the study of analysis in Lipschitz domains \cite{Kemp} and \cite{Ancona,Dahl,Wu}. For a modern perspective, sharp results, and references to the vast literature on the subject in the context of analysis on bounded domains, see \cite{BBB,Aik1,Aik2,Aik3,Aik4}. The generality and flexibility of the arguments developed by H. Aikawa in these papers and other works, based on the notion of ``capacity width,'' is used in a fundamental way in \cite{Gyrya} and in \cite{LierlLSCOsaka,LierlLSCJFA,Lierl1,Lierl2} to extend the result in the setting of (nice) Dirichlet spaces. Given $(\mathfrak X,\mathfrak E, \mu, \pi) $ one can build an associated continuous space $\mathbf X$, known as the cable space for $(\mathfrak X,\mathfrak E,\pi,\mu)$. In many cases, it is more difficult to prove theorems in discrete domains than in continuous domains --- the cable space provides an important bridge, but allowing us to transfer known theorems from the continuous space $\mathbf X$ to its associated discrete space $\mathfrak X$. Topologically, the space $\mathbf X$ is a connected one-dimensional complex, that is, a union of copies of the interval $[0,1]$ with some identifications of end points. The process of building the cable space from the discrete space $(\mathfrak X, \mathfrak E, \mu, \pi)$ is straightforward: the zero-dimensional points in the complex are given by the vertices $\mathfrak X$; two points $x$ and $y$ are then connected by a unit length edge $[0,1]$ if $\{x,y\} \in \mathfrak E$, with $0$ identified with $x$ and $1$ identified with $y$. For early references to the cable space, see the introduction to~\cite{Cattaneo}. This resource is particularly relevant because it discusses the spectrum of the discrete Laplacian. But we need to allow for the addition of self-loops, copies of $[0,1]$ with $0$ and $1$ identified to each other and to some vertex $x \in \mathfrak X$. (Recall that $\mathfrak E$ has no self-loops.) We will use the notation $(0,1)_{xx}$ for the self-loop at $x$ minus the point $x$ itself. Let $\mathfrak L$ be the subset of those $x\in \mathfrak X$ where $\sum_y \mu_{xy} <\pi (x)$. Form a loop at each $x \in \mathfrak L$ and set the weight $\mu_{xx}$ on the loop to be equal to its ``deficiency,'' \begin{equation} \label{eq:loop-deficiency-weight} \mu_{xx}=\pi(x)-\sum_y\mu_{xy}, \;\;x\in \mathfrak L. \end{equation} In what follows we will use the notation $xy$ as an index running over $\{x,y\}\in \mathfrak E$ when $x\neq y$ and $x\in \mathfrak L$ when $x=y$. We need to use a simple (but rather interesting) variation on this construction. We introduce a {\em loop-parameter}, call it $\ell$. For any fixed $\ell\in [0,1]$, we construct the cable space $\mathbf X_\ell$ as described above but the self-loops have length $\ell$ instead of $1$ above. The other edges (non-self-loops) still have length 1. 
More precisely, the space $\mathbf X_\ell$ is obtained by joining any two points $x,y$ in $\mathfrak X$ with $\{x,y\}\in \mathfrak E$ by a continuous edge $e_{xy}=(0,1)_{xy}$ isometric to the interval $(0,1)$ and adding a self-loop $e_{xx}=(0,\ell)_{xx}$ at each $x\in \mathfrak L$. Strictly speaking, $$\mathbf X=\mathfrak X\cup \left(\bigcup_{\{x,y\}\in \mathfrak E}(0,1)_{xy}\right)\cup \left(\bigcup_{x\in \mathfrak L} (0,1)_{xx} \right) $$ with $e_{xy}$ being a copy of $(0,1)$ when $x\neq y$ and a copy of $(0,\ell)$ when $x=y$. See Figure \ref{XX}. The topology of this space is generated by the open subintervals of these many copies of $(0,1)$ and $(0,\ell)$, together with the star-shaped open neighborhoods of the vertices in $\mathfrak X$. \begin{figure} \caption{A simple example of $(\mathfrak X,\mathfrak E,\pi,\mu)$ and the associated cable spaces $\mathbf X_\ell$, where the edge weights $\mu$ are indicated in black and the vertex weights $\pi$ are indicated in red. The black weights on the loops indicate the ``deficiencies'' in the edge weights, as described in~\eqref{eq:loop-deficiency-weight} \label{XX} \end{figure} The cable Dirichlet space associated with the data $(\mathfrak X,\mathfrak E, \mu, \pi,\ell) $ is obtained by equipping $\mathbf X_\ell$ with its natural distance function $\mathbf d_\ell :\mathbf X_{\ell}\times\mathbf X_{\ell}\rightarrow [0,\infty)$, the length of the shortest path between two points. The space $\mathbf X_\ell $ is also equipped with a measure $\boldsymbol{\pi}$ equal to $\mu_{xy} dt$ on each interval $e_{xy}$ (including the intervals $e_{xx}$), and with the Dirichlet form obtained by closing the form $$\mathcal E_{\mathbf X_\ell}(f,f)= \sum_{{xy}} \mu_{xy}\int_{e_{xy}} |f_{e_{xy}}'(t)|^2dt, \;\; f\in \mathcal D_0(\mathbf X_\ell),$$ where $\mathcal D_0(\mathbf X_\ell)$ is the space of all compactly supported continuous functions on $\mathbf X_\ell$ which have a bounded continuous derivative $f'_{e_{xy}}$ on each open edge $e_{xy}$ and $e_{xx}$. (Note that the values of these various edge-derivatives at a vertex do not have to match in any sense.) The domain of $\mathcal E_{\mathbf X_\ell}$, $\mathcal D(\mathcal E_{\mathbf X_{\ell}})$, is the closure of $\mathcal D_0(\mathbf X_\ell)$ under the norm $$\|f\|_{\mathcal E_{\mathbf X_{\ell}}} = \left(\int_{\mathbf X_\ell}|f|^2d\boldsymbol{\pi}+\mathcal E_{\mathbf X_\ell}(f,f)\right)^{1/2}.$$ The cable Dirichlet space $(\mathbf X_\ell, \boldsymbol{\pi}, {\mathcal E}_{\mathbf X_\ell})$ is a regular strictly local Dirichlet space (see, e.g., \cite{FOT,Gyrya}) and its intrinsic distance is the shortest-path distance $\mathbf d_\ell$ described briefly above. This Dirichlet space is actually quite elementary in the sense that it is possible to describe concretely the domain of the associated Laplacian, the generator of the associated Markov semigroup of operators acting on $L^2(\mathbf X_\ell,\boldsymbol{\pi})$. 
First, we recall that this Laplacian is the self-adjoint operator $\Delta_\ell$ with domain $\mathcal D(\Delta_\ell)$ in $L^2(\mathbf X_\ell,\boldsymbol{\pi})$ defined by $$\mathcal D(\Delta_\ell)=\{u\in \mathcal D(\mathcal E_{\mathbf X_\ell}): \exists C \mbox{ such that}, \forall f\in \mathcal D_0({\mathbf X_\ell}), \mathcal E_{\mathbf X_\ell}(u,f)\le C\|u\|_2\}.$$ For any function $u\in \mathcal D(\Delta_\ell)$ there exists a unique function $v\in L^2({\mathbf X_\ell},\boldsymbol{\pi})$ such that $\mathcal E_{\mathbf X_\ell}(u,f)=\int_{\mathbf X_\ell} vf d\boldsymbol{\pi}$ (from the Riesz representation theorem) and we set $$\Delta u = -v.$$ This implies that $$\mathcal E_{\mathbf X_\ell}(u,f)= -\int f \Delta _\ell u d\boldsymbol{\pi}$$ for all $u\in \mathcal D(\Delta_\ell)$ and all $f\in \mathcal D_0(\mathbf X_\ell)$ (equivalently, all $f\in \mathcal D(\mathcal E_{\mathbf X_\ell})$). From the above abstract definition, we can now derive a concrete description of $\mathcal D(\Delta_\ell)$. We start with a concrete description of $\mathcal D(\mathcal E_{\mathbf X_\ell})$. A function $f$ is in $\mathcal D(\mathcal E_{\mathbf X_\ell})$ if it is continuous on $\mathbf X_\ell$, belongs to $L^2(\mathbf X_\ell,\boldsymbol{\pi})$ and the restriction $f_{e_{xy}}$ of $f$ to any open edge $(0,1)_{xy}$, has a distributional derivative which can be represented by a square-integrable function $f'_{e_{xy}}$ satisfying $$\sum_{xy} \mu_{xy}\int_{e_{xy}} |f'_{e_{xy}}|^2 dt<\infty.$$ The key observation is that, because of the one-dimensional nature of $\mathbf X$, on any edge $e_{xy}$ (or subinterval of $e_{xy}$) on which $f'$ is defined in the sense of distributions and represented by a square integrable function, we have $$ |f(s_2)-f(s_1)|\le \sqrt{|s_2-s_1|}\left(\int_{s_1}^{s_2}|f'(s)|^2 ds\right)^{1/2}.$$ We now give a (well-known) concrete description of $\mathcal D(\Delta_\ell)$. A function $u\in L^2(\mathbf X_\ell,\boldsymbol{\pi})$ is in $\mathcal D(\Delta_\ell)$ if and only if \begin{enumerate} \item The function $u\in L^2(\mathbf X_\ell,\boldsymbol{\pi})$ admits a continuous version, which, abusing notation, we still call $u$. \item On each open edge $e_{xy}$, the restriction $u_{e_{xy}}$ of $u$ to $e_{xy}$ has a continuous first derivative $u'_{e_{xy}}$ with limits at the two end-points and such that $$\sum_{xy}\mu_{xy}\int_{e_{xy}}|u'_{e_{xy}}|^2dt<\infty.$$ Furthermore $u_{e_{xy}}$ has a second derivative in the sense of distributions which can be represented by a square-integrable function $u''_{e_{xy}}$ and $$\sum_{xy}\mu_{xy}\int_{e_{xy}}|u''_{e_{xy}}|^2dt<\infty.$$ \item At any vertex $x\in \mathfrak X$, Kirchhoff's law $$\sum_{y:\{x,y\}\in \mathcal E}\mu_{xy}\vec{u}_{e_{xy}}(x) +\sum_{x\in \mathfrak L}\mu_{xx}(\vec{u}_{e_{xx}}(0)-\vec{u}_{e_{xx}}(\ell))=0$$ holds. Here, for $\{x,y\}\in \mathfrak E$, $\vec{u}_{e_{xy}}(x)$ is the (one-sided) derivative of $u$ at $x$ computed along $e_{xy}$ oriented from $x$ to $y$ and, for $x\in \mathfrak L$, $\vec{u}_{e_{xx}}(0)$ and $\vec{u}_{e_{xx}}(\ell)$ are the (one-sided) derivatives of $u_{e_{xx}}$ on $(0,\ell)_{xx}$ at $0$ and at $\ell$. \end{enumerate} We say that a function $u$ defined on a subset $\Omega$ is locally in $\mathcal D(\Delta_\ell)$ if it satisfies the above properties over $\Omega$ except for the global square integrable conditions on $u,u'$ and $u''$. For such a function, $\Delta_\ell u$ is defined as the locally square integrable function $\Delta_\ell u =u''$ where $u''=u''_{e_{xy}}$ on $e_{xy}\cap \Omega$. 
\begin{rem} The stochastic process associated with the Dirichlet form $\mathcal{E}_{\mathbf{X}_{\ell}}$ can be explicitly constructed using Brownian motion. More specifically, starting at a vertex in the cable space, one performs Brownian excursions along adjacent edges until reaching another vertex. For a detailed description see~\cite{RevuzYor, Folz}. See~\cite{quantum-graphcs} for a description of the related quantum graphs. \end{rem} \begin{defin}\label{def-UL} To any finite domain $U$ in $(\mathfrak X,\mathfrak E)$ we associate the domain $\mathbf U=\mathbf U_\ell$ in ${\mathbf X_\ell}$ formed by all the vertices $x$ in $U$ and all the open edge $e_{xy}$ with at least one end point in $U$, including the loops $e_{xx}$ with $x\in U$. \end{defin} See Figure \ref{XXU} for an example of Defintion~\ref{def-UL}. As another example, consider the trivial finite domain $U=\{x\}$. To it, we associate the domain $\mathbf U$ formed by the vertex $x$ and all the open edges containing $x$, i.e., an open star around $x$, perhaps with a self-loop of length $\ell$, whose branches are in one to one correspondence with the $y\in \mathfrak X$ such that $\{x,y\}\in \mathfrak E$. \begin{figure} \caption{ $U$ and $\mathbf U$ in black with their boundaries in blue.} \label{XXU} \end{figure} With this definition, the discrete finite domain $U$ is inner-uniform if and only if the domain $\mathbf U$ is inner-uniform in the metric space $(\mathbf X_\ell,\mathbf{d}_\ell)$. Following \cite[Definition 3.2]{Gyrya} we say that a continuous domain $\mathbf U$ is inner-uniform in the metric space $(\mathbf X_{\ell},\mathbf d_{\ell})$ if there exists constants $A^c$ and $\alpha^c$ such that, for each $\xi,\zeta \in \mathbf U$, there exists a continuous curve $\gamma_{\xi\zeta}:[0,\tau] \rightarrow \mathbf U$ (called an \emph{inner-uniform path}) contained in $\mathbf U$ with $|\gamma_{\xi\zeta}| = \tau$ such that (1) $\gamma_{\xi\zeta}(0) = \xi$ and $\gamma_{\xi\zeta}(\tau) = \zeta$, (2) $|\gamma_{\xi\zeta}| \leq A^c\mathbf{d}_{\mathbf U}(\xi,\zeta)$ and (3) for any $t\in[0,\tau]$, $$\mathbf{d}_{\ell}(\gamma_{\xi\zeta}(t),\mathbf X_{\ell} \setminus \mathbf{U}) \geq \alpha^c \min\{t,\tau_U - t\}$$ where $\mathbf{d}_{\mathbf{U}}$ is the distance in $\mathbf{U}$. The important constants $A^d,\alpha^d$ and $A^c,\alpha^c$ ($d$ for discrete, $c$ for continuous) capturing the key properties of an inner-uniform domain in both cases are within factors of $8$ from each others. (Very large self-loops would be problematic, but we restrict to $\ell\in [0,1]$.) In fact, for any pair of points $\xi,\zeta$ in $\mathbf U$ we can define an inner-uniform path $\gamma_{\xi\zeta}$ from $\xi$ to $\zeta$ as follows. If the two points satisfy $\mathbf d_{\mathbf U}(\xi,\zeta)=\tau\le 1$, i.e., they are either on the same edge or on two adjacent edges, then we set $\gamma_{\xi\zeta}$ to be the obvious path from $\xi$ to $\zeta$, parametrized by arc-length (one can easily check that this path satisfies $\mathbf{d}_{\ell}(\gamma_{\xi\zeta}(t),\mathbf{X}_{\ell} \setminus \mathbf{U})\ge \min\{t,\tau-t\}$). When $\mathbf d_{\mathbf U}(\xi,\zeta)> 1$, one can join them in $\mathbf U$ by first finding the closest points $x(\xi)$ and $x(\zeta)$ in $U$ (if there are multiple choices, pick one) and then use the obvious continuous extension of the discrete inner-uniform path from $x(\xi)$ to $x(\zeta)$, which is, again, parametrized by arc-length. Finally we extend Definition \ref{def-x(r)} from $U$ to $\mathbf U$ as follows. 
\begin{defin} Let $U$ be a finite inner-uniform domain equipped with a central point $o\in U$ such that $d(o,\mathfrak X \setminus U)=\max\{d(x,\mathfrak X \setminus U):x\in U\}$. For any point $\xi\in \mathbf U$, let $\gamma_{\xi o}$ be the inner-uniform continuous path defined above joining $\xi$ to $o$ in $\mathbf U$. For any $\xi\in \mathbf U$ and $r>0$, let $\xi_r$ be defined by $$\xi_r= x(\xi)_r \mbox{ if } r\ge 1$$ where $x(\xi)$ is the (chosen) closest point to $\xi$ in $U$ and $x(\xi)_r$ is given by Definition \ref{def-x(r)}, and $$\xi_r =\gamma_{\xi o}(\min\{r,\tau\}) \mbox{ if } r\in (0,1) \mbox{ and } \gamma_{\xi o}(\tau)=o,.$$ \end{defin} \begin{rem} \label{rem-xr} The two key properties of the point $\xi_r\in \mathbf U$ are as follows. There are two constants $C,\epsilon$ which depends only on the inner-uniform constants $A,\alpha$ of $U$ such that \begin{enumerate} \item The inner-distance $\mathbf d_{\mathbf U}(\xi,\xi_r)$ is no larger than $C r$; \item The distance $\mathbf d_\ell (\xi_r,\mathbf \mathfrak X \setminus U)$ is at least $\epsilon r$. \end{enumerate} In the present case, we chose the points $\xi_r$ so that, for $r\ge 1$, they actually belong to $U$ and coincide with $x(\xi)_r$ from Definition \ref{def-x(r)}. \end{rem} The heat diffusion with Dirichlet boundary condition on the bounded inner-uniform domain $\mathbf U=\mathbf U_\ell$ is studied in \cite{LierlLSCJFA,LierlLSCOsaka}. The heat diffusion semigroup with Dirichlet boundary condition on the domain $\mathbf U$ is the semigroup associated with the Dirichlet form obtained by closing the (closable) form $$ \mathcal E_{\mathbf U,D}(f,f)= \int _{\mathbf U} |f'|^2 d\boldsymbol{\pi}$$ defined on continuous functions $f$ in $\mathbf U$ that are locally in $\mathcal D(\mathcal E_{\mathbf X_\ell})$ and have compact support in $\mathbf U$ (for such function, $f'=f'_{e_{xy}}$ on $e_{xy}\cap \Omega$). The subscript $D$ in this notation stands for {\em Dirichlet condition}. Let $H_t^{\mathbf U,D}= e^{t \Delta_{\mathbf U,D}}$ be the associated self-adjoint semigroup on $L^2(\mathbf U,\boldsymbol{\pi}_{\mathbf U})$ with infinitesimal generator $\Delta_{\mathbf U,D}$. Here, $\boldsymbol{\pi}_{\mathbf U}$ is the normalized restriction of $\boldsymbol{\pi}$ to $\mathbf U$ $$ \boldsymbol{\pi}_{\mathbf U}= \boldsymbol{\pi}(\mathbf U)^{-1} \boldsymbol{\pi}|_{\mathbf U}.$$ The domain of $\Delta_{\mathbf U, D}$ is exactly the set of functions $f$ that are locally in $\mathcal D(\Delta_\ell)$ in $\mathbf U$, have limit $0$ at the boundary points of $\mathbf U$ and satisfy $\int _{\mathbf U}|u''|^2 d\boldsymbol{\pi}<\infty$. Also the parameter $\ell$ does not appear explicitly in the notation we just described, but all these objects depend on the choice of $\ell$. Just as in the discrete setting, the key to the study of $H^{\mathbf U,D}_t$ is the Doob-transform technique which involves the positive eigenfunction $\boldsymbol{\phi}_{\ell,0}$ associated to the smallest eigenvalue $\boldsymbol{\lambda}_{\ell,0}$ of $-\Delta_{\mathbf U,D}$ in $\mathbf U$. 
This function is defined by the following equations: \begin{enumerate} \item $\boldsymbol{\lambda}_{\ell,0}= \inf \left\{\int_{\mathbf U} |f'|^2 d\boldsymbol{\pi}_{\mathbf{U}}: f\in \mathcal D(\mathcal E_{\mathbf U,D}),\int_{\mathbf U}|f|^2d\boldsymbol{\pi}_{\mathbf{U}}=1\right\}$; \item $\boldsymbol{\phi}_{\ell,0}\in \mathcal D(\Delta_{\mathbf U,D})$ and $\Delta_{\mathbf U,D} \boldsymbol{\phi}_{\ell,0}=-\boldsymbol{\lambda}_0 \boldsymbol{\phi}_{\ell,0}$; \item $\int_{\mathbf U}|\boldsymbol{\phi}_{\ell,0}|^2d\boldsymbol{\pi}=1.$ \end{enumerate} \begin{pro} \label{pro-*} Assume that $(\mathfrak X,\mathfrak E,\pi,\mu)$ is such that $\mu$ is adapted and $\mu$ is subordinated to $\pi$. Let $U$ be a finite domain in $(\mathfrak X,\mathfrak E)$. There exists a value $$\ell_0=\ell_0(\mathfrak X,\mathfrak E,\pi,\mu,U)\in [0,1]$$ of the loop-parameter $\ell$ such that the following properties hold true. Let $\mathbf U$ be the bounded domain in $\mathbf X_{\ell_0}$ associated to $U$. Let $\phi_0$, $\beta_0$ be the Perron-Frobenius eigenfunction and eigenvalue of $K_U$. Let $\boldsymbol{\phi}_0$, $\boldsymbol{\lambda}_0$ be the eigenfunction and bottom eigenvalue of $\Delta_{\mathbf U,D}$ for the parameter $\ell_0$ as defined above. There exists a constant $\kappa>0$ such that \begin{enumerate} \item $\beta_0= \cos(\sqrt{\boldsymbol{\lambda}_0})$, \item $\phi_0(x)= \kappa \boldsymbol{\phi}_0(x)$ for all vertex $x\in U$. \end{enumerate} \end{pro} \begin{proof} First, we study the function $\boldsymbol{\phi}_{\ell,0}$ for an arbitrary $\ell \in [0,1]$. On each edge $e_{xy}$ in $\mathbf U$, the function $\boldsymbol{\phi}_{\ell,0}$ satisfies $$\left(\frac{\partial }{\partial s}\right)^2[\boldsymbol{\phi}_{\ell,0}]_{e_{xy}}=-\boldsymbol{\lambda}_{\ell,0}[\boldsymbol{\phi}_{\ell,0}]_{e_{xy}},$$ and this implies $$[\boldsymbol{\phi}_{\ell,0}]_{e_{xy}}(s)= \frac{\boldsymbol{\phi}_{\ell,0}(y)-\cos(\sqrt{\boldsymbol{\lambda}_{\ell,0}} \ell_{xy})\boldsymbol{\phi}_{\ell,0}(x)}{\sin (\sqrt{\boldsymbol{\lambda}_{\ell,0}}\ell_{xy})} \sin (\sqrt{\boldsymbol{\lambda}_{\ell,0}} s) +\boldsymbol{\phi}_{\ell,0}(x)\cos (\sqrt{\boldsymbol{\lambda}_{\ell,0}} s)$$ where $s\in (0,\ell_{xy})$ parametrizes $e_{xy}$ from $x$ to $y$ with $$\ell_{xy} = \begin{cases} 1 & \text{ when } x\neq y \\ \ell & \text{ when } x = y. 
\end{cases}$$ When $x=y$, $$[\boldsymbol{\phi}_{\ell,0}]_{e_{xx}}(0)=[\boldsymbol{\phi}_{\ell,0}]_{e_{xx}}(\ell)=\boldsymbol{\phi}_{\ell,0}(x),$$ and the function $[\boldsymbol{\phi}_{\ell,0}]_{e_{xx}} $ on the edge $(0,\ell)_{xx}$ satisfies $$[\boldsymbol{\phi}_{\ell,0}]_{e_{xx}}(s)=[\boldsymbol{\phi}_{\ell,0}]_{e_{xx}}(\ell-s).$$ To express Kirchhoff's law at $x\in U$, we compute, for $x \neq y$, $$[\vec{\boldsymbol{\phi}}_{\ell,0}]_{e_{xy}}(0)=\frac{\sqrt{\boldsymbol{\lambda}_{\ell,0}}}{\sin (\sqrt{\boldsymbol{\lambda}_{\ell,0}})}(\boldsymbol{\phi}_{\ell,0}(y )-\cos(\sqrt{\boldsymbol{\lambda}_{\ell,0}}) \boldsymbol{\phi}_{\ell,0}(x)),$$ and, for $x=y$, \begin{eqnarray*} [\vec{\boldsymbol{\phi}}_{\ell,0}]_{e_{xx}}(0)-[\vec{\boldsymbol{\phi}}_{\ell,0}]_{e_{xx}}(1)&=& 2[\vec{\boldsymbol{\phi}}_{\ell,0}]_{e_{xx}}(0)\\ &=&2\frac{\sqrt{\boldsymbol{\lambda}_{\ell,0}}}{\sin (\sqrt{\boldsymbol{\lambda}_{\ell,0}}\ell)}(1-\cos(\sqrt{\boldsymbol{\lambda}_{\ell,0}} \ell)) \boldsymbol{\phi}_{\ell,0}(x).\end{eqnarray*} It follows that Kirchhoff's law gives \begin{eqnarray*} \lefteqn{ \sum_{y:\{x,y\}\in \mathfrak E}\mu_{xy}(\boldsymbol{\phi}_{\ell,0}(y )-\cos(\sqrt{\boldsymbol{\lambda}_{\ell,0}}) \boldsymbol{\phi}_{\ell,0}(x)) } \hspace{1in}&&\\ &&+2\mu_{xx}\frac{\sin( \sqrt{\boldsymbol{\lambda}_{\ell,0}})}{\sin( \sqrt{\boldsymbol{\lambda}_{\ell,0}}\ell)}(1-\cos(\sqrt{\boldsymbol{\lambda}_0}\ell))\boldsymbol{\phi}_{l,0}(x)=0.\end{eqnarray*} Recall that $K_U(x,y)= \mu_{xy}/\pi(x)$ for $x,y\in U$ with $\{x,y\}\in \mathfrak E$ and $K_U(x,x)=\mu_{xx}/\pi(x)$. It follows that, for $x\in U$, $$K_U\boldsymbol{\phi}_{\ell,0} (x)= \frac{1}{\pi(x)}\sum_{y} \mu_{xy} \boldsymbol{\phi}_{\ell,0}(y)= \frac{1}{\pi(x)}\left( \sum_{y:\{x,y\}\in \mathfrak E} \mu_{xy} \boldsymbol{\phi}_{\ell,0}(y) +\mu_{xx} \boldsymbol{\phi}_{\ell,0}(x) \right),$$ and Kirchhoff laws for $\boldsymbol{\phi}_{\ell,0}$ yields \begin{eqnarray*}\lefteqn{ \frac{K_U\boldsymbol{\phi}_{\ell,0}(x) }{ \boldsymbol{\phi}_{\ell,0}(x)}= K_U(x,x) +\left(1-K_U(x,x)\right)\cos (\sqrt{\boldsymbol{\lambda}_{\ell,0}})} &&\\&&\hspace{1.3in} -2K_U(x,x) \frac{\sin( \sqrt{\boldsymbol{\lambda}_{\ell,0}})}{\sin( \sqrt{\boldsymbol{\lambda}_{\ell,0}}\ell)}(1-\cos(\sqrt{\boldsymbol{\lambda}_0}\ell))\\ &=& \cos (\sqrt{\boldsymbol{\lambda}_{\ell,0}})\\ &&+K_U(x,x)(1-\cos(\sqrt{\boldsymbol{\lambda}_{\ell,0}}))\left( 1-2 \frac{\sin( \sqrt{\boldsymbol{\lambda}_{\ell,0}})}{\sin( \sqrt{\boldsymbol{\lambda}_{\ell,0}}\ell)}\frac{(1-\cos(\sqrt{\boldsymbol{\lambda}_{\ell,0}}\ell))}{ (1-\cos(\sqrt{\boldsymbol{\lambda}_{\ell,0}}))} \right) \end{eqnarray*} Given the uniqueness of the Perron-Frobenius eigenvalue and the fact that the associated positive eigenfunction is unique up to a multiplicative constant, the proposition follows from the previous computation if there exists $\ell_0\in [0,1] $ at which the function $$ F(\ell)= 1-2 \frac{\sin( \sqrt{\boldsymbol{\lambda}_{\ell,0}})}{\sin( \sqrt{\boldsymbol{\lambda}_{\ell,0}}\ell)}\frac{(1-\cos(\sqrt{\boldsymbol{\lambda}_{\ell,0}}\ell))}{ (1-\cos(\sqrt{\boldsymbol{\lambda}_{\ell,0}}))} $$ vanishes. But, by an easy inspection, $F(0)= 1$ and $F(1)=-1$. If we can prove that the function $$\ell\mapsto \boldsymbol{\lambda}_{\ell,0}$$ is continuous, then $F$ must vanish somewhere between $l=0$ and $l=1$, so we are done. Fix $\ell_1,\ell_2$. 
Any function $f$ on $\mathbf X_{\ell_1}$ is turned into a function $\tilde{f}$ on $\mathbf X_{\ell_2}$ by setting $$\tilde{f}_{e_{xy}}(s)=\left\{\begin{array}{cl} f_{e_{xy}}(s) &\mbox{ if } x\neq y\\ f_{e_{xx}}(\ell_2s/\ell_1) &\mbox{ if } y=x.\end{array}\right.$$ Further, $$\int_{\mathbf X_{\ell_2}}|\tilde{f}|^2d\boldsymbol{\pi}=\int_{\mathbf X_{\ell_1}}|f|^2d\boldsymbol{\pi} +((\ell_1/\ell_2)-1)\sum_{x\in \mathfrak L}\mu_{xx}\int_{e_{xx}}|f_{e_{xx}}|^2dt$$ and $$\mathcal E_{\mathbf X_{\ell_2}}(\tilde{f},\tilde{f})=\mathcal E_{\mathbf X_{\ell_1}}(f,f)+((\ell_2/\ell_1)-1)\sum_{x\in \mathfrak L}\mu_{xx}\int_{e_{xx}}|f'_{e_{xx}}|^2dt.$$ Applying this to the function $\boldsymbol{\phi}_{\ell_1,0}$, normalized so that $\int_{\mathbf X_{\ell_1}}|\boldsymbol{\phi}_{\ell_1,0}|^2d\boldsymbol{\pi} =1$, we find that $$\boldsymbol{\lambda}_{\ell_2,0}\le \frac{\max\{1,\ell_1/\ell_2\}}{\min\{1,\ell_2/\ell_1\}} \boldsymbol{\lambda}_{\ell_1,0}.$$ Exchanging the role of $\ell_1,\ell_2$ yields thet complementary inequality $$\boldsymbol{\lambda}_{\ell_2,0}\ge \frac{\min\{1,\ell_1/\ell_2\}}{\max\{1,\ell_2/\ell_1\}} \boldsymbol{\lambda}_{\ell_1,0}.$$ This proves the continuity of $\ell\mapsto \boldsymbol{\lambda}_{\ell,0}$ as desired. \end{proof} \begin{rem} When the quantity $K_U(x,x)$ is constant, say, $K_U(x,x)=\theta$ for all $x\in U$, then every function $\boldsymbol{\phi}_{\ell,0}$ for $\ell\in [0,1]$ satisfies $\phi_0(x)=\kappa_\ell \boldsymbol{\phi}_{\ell,0}(x)$ at vertices $x \in U$, and we have $$\beta_0=1- (1-\cos(\sqrt{\boldsymbol{\lambda}_{\ell,0}}))\left(1-\theta\left(1-2 \frac{\sin( \sqrt{\boldsymbol{\lambda}_{\ell,0}})}{\sin( \sqrt{\boldsymbol{\lambda}_{\ell,0}}\ell)}\frac{(1-\cos(\sqrt{\boldsymbol{\lambda}_{\ell,0}}\ell))}{ (1-\cos(\sqrt{\boldsymbol{\lambda}_{\ell,0}}))} \right)\right).$$ The function of $l$ on the right-hand side is equal to the constant $\beta$. \end{rem} \begin{theo}[{Special case of \cite[Proposition 5.10]{LierlLSCJFA}}] \label{th-JL} Assume {\em A1} with $\theta=2$ and fix $\alpha,A$. There exists a constant $C_0$ depending only on $\alpha,A,D,P_e,P$ such that, for any finite inner $(\alpha,A)$-uniform domain $U$ and loop parameter $\ell\in [0,1]$, the positive eigenfunction $\boldsymbol{\phi}_{\ell,0}$ for the $\Delta_{\mathbf U_\ell,D}$ in $\mathbf U_\ell$ is $(1/8,C_0)$-regular and satisfies $$ \forall\,r>0,\;\xi\in \mathbf U_\ell,\; z\in B_{\mathbf U_\ell}(\xi, r/2),\;\;\boldsymbol{\phi}_{\ell,0}(z)\le C_0\boldsymbol{\phi}_{\ell,0}(\xi_r).$$ \end{theo} \begin{proof} The domain $\mathbf U=\mathbf U_\ell$ in $(\mathbf X_\ell,\mathbf d_\ell)$ is inner-uniform and the Dirichlet space $(\mathbf X_\ell,\boldsymbol{\pi},\mathcal E_{\mathbf X_\ell})$ is a Harnack space in the sense of \cite{Gyrya} and \cite{LierlLSCJFA}. The most basic case of \cite[Proposition 5.10]{LierlLSCJFA} provides the desired result. Technically speaking, the definition of the map $(x,r)\mapsto \xi_r$ here and in \cite{LierlLSCJFA} are slightly different but these differences are inconsequential. \end{proof} \begin{proof}[Proof of Theorem \ref{theo-Carleson}] Together, Theorem \ref{th-JL} and Proposition \ref{pro-*} obviously yield Theorem \ref{theo-Carleson}. \end{proof} \begin{proof}[Proof of Theorem~\ref{theo-comph}] We use the same method as in the proof of Theorem \ref{theo-Carleson} and extract this result from the similar result for the cable process with the proper choice $\ell_0$ of loop length. 
Local harmonic functions for the cable process (with Dirichlet boundary condition at the boundary of $U$) are always in a straightforward one-to-one correspondence with local harmonic functions for $K_U$, independently of the choice of the loop parameter $\ell$. Therefore, the stated result follows from \cite[Theorem 5.5]{LierlLSCJFA}. \end{proof} \subsection{Point-wise kernel bounds} \label{sec-HPWb} In this section, we describe how to obtain the following detailed point-wise estimates on the iterated kernels $K^t_U$ and $K^t_{\phi_0}$ when $U$ is inner-uniform. Recall that $V(x,r)=\pi(B(x,r))$ and $x_{\sqrt{t}}$ is a point such that ${d(x_{\sqrt{t}},\mathfrak X \setminus U)\ge \alpha( 1+\sqrt{t})}$ if $\sqrt{t} \le R$ and $x_{\sqrt{t}}=o$ otherwise. \begin{theo}\label{theo-DH} Assume {\em A1} with $\theta=2$ and fix $\alpha,A$. In addition, assume that the pair $(\pi,\mu)$ is such that $\sum_y\mu_{xy}\le (1-\epsilon)\pi(x)$ with $\epsilon>0$ (this means that $\min_{x\in \mathfrak X}\{K_{\mu}(x,x)\}\ge \epsilon $). There exist constants $c_1,c_2,C_1,C_2\in (0,\infty)$ which depend only on $\alpha,A,D,P_e,P$ and are such that, for any finite inner $(\alpha,A)$-uniform domain $U$, integer $t$ and $x,y\in U$ such that $d_U(x,y)\le t$, \begin{eqnarray*} \lefteqn{ \frac{C_1\exp(-c_1 d_U(x,y)^2/t) } {\sqrt{V(x,\sqrt{t})V(y,\sqrt{t})} \phi_0(x_{\sqrt{t}})\phi_0(y_{\sqrt{t}})} } && \\ &\leq& \frac{K_{\phi_0}^t(x,y)}{\phi_0(y)^2\pi(y)}\\ &\leq& \frac{C_2 \exp(-c_2 d_U(x,y)^2/t)} {\sqrt{V(x,\sqrt{t})V(y,\sqrt{t})} \phi_0(x_{\sqrt{t}})\phi_0(y_{\sqrt{t}})} .\end{eqnarray*} \end{theo} \begin{rem} When $t$ is larger than $R^2$ then $x_{\sqrt{t}}=o$ and the two-sided estimate above states that $K^t_{\phi_0}(x,y)$ is roughly of order $\phi_0(y)^2\pi_U(y)$ because $\phi_0(o)^2\simeq \sum_U \phi_0^2\pi_U=1$. The convergence results stated earlier give better estimates in this case. When $t\le R^2$, the statement provides a useful estimate of the iterated kernel before the equilibrium is reached. \end{rem} The following corollary simply translates Theorem \ref{theo-DH} in terms of the iterated kernel $K_U^t$. \begin{cor} \label{cor:DH} Assume {\em A1} with $\theta=2$ and fix $\alpha,A$. In addition, assume that the pair $(\pi,\mu)$ is such that $\sum_y\mu_{xy}\le (1-\epsilon)\pi(x)$ with $\epsilon>0$ (which implies that $\min_{x\in \mathfrak X}\{K_{\mu}(x,x)\}\ge \epsilon $). There exist constants $c_1,c_2,C_1,C_2\in (0,\infty)$ which depend only on $\alpha,A,D,P_e,P$ and are such that, for any finite inner $(\alpha,A)$-uniform domain $U$, for any integer $t$ and any $x,y\in U$ such that $d_U(x,y)\le t$, \begin{eqnarray*} \lefteqn{ \frac{C_1\beta_0^t\phi_0(x)\phi_0(y)\exp(-c_1 d_U(x,y)^2/t) } {\sqrt{V(x,\sqrt{t})V(y,\sqrt{t})} \phi_0(x_{\sqrt{t}})\phi_0(y_{\sqrt{t}})}} \\ &\leq& \frac{K_{U}^t(x,y)}{\pi(y)} \\ &\le & \frac{C_2\beta_0^t \phi_0(x)\phi_0(y)\exp(-c_2 d_U(x,y)^2/t)} {\sqrt{V(x,\sqrt{t})V(y,\sqrt{t})} \phi_0(x_{\sqrt{t}})\phi_0(y_{\sqrt{t}})} .\end{eqnarray*} \end{cor} \begin{proof}[Outline of the proof of Theorem \ref{theo-DH}] To simplify notation, set $$\widetilde{K}= K_{\phi_0}, \;\;\widetilde{\pi}= \phi_0^2\pi |_U.$$ The estimates stated above, which we are going to obtain for $\widetilde{K}^t=K^t_{\phi_0}$, do not depend on the exact scaling of $\phi_0$ and $\pi|_U$ as long as the choice made is used consistently.
The first key point of the proof is the fact that $\widetilde{K}=K_{\phi_0}$ is Markov (i.e., satisfies $\sum_{y\in U} \widetilde{K}(x,y)=1$ for each $x\in U$) and reversible with respect to $\widetilde{\pi}=\phi_0^2 \pi|_U$. (Normalizing is optional.) Also, the reversible Markov chain $(\widetilde{K},\widetilde{\pi})$ satisfies $\widetilde{K}(x,x)\ge \epsilon $ and the ellipticity condition $\widetilde{K}(x,y)\ge 1/\widetilde{P}_e$ where $\widetilde{P}_e= \beta^{-1}_0 P_e\max\{\phi_0(x)/\phi_0(y): \{x,y\}\in \mathfrak E_U\}$. The constant $\widetilde{P}_e$ is bounded above in terms of the constants $\alpha,A,D,P, P_e,\epsilon$ only. It is well-known (see~\cite[Theorem 6.34]{BalrlowLMS} or~\cite{Delm-PH}) that the two-sided Gaussian-type estimate stated in Theorem \ref{theo-DH} for the reversible Markov chain $(\widetilde{K},\widetilde{\pi})$ is equivalent to the conjunction of two further geometric properties, namely (a) the doubling property $$\forall\,x\in U,\;r>0,\;\;\; \widetilde{V}(x,2r)\le \widetilde{D}\widetilde{V}(x,r)$$ of the volume function $$\widetilde{V}(x,r)= \widetilde{\pi}(B_U(x,r))= \sum_{y\in B_U(x,r)} \phi_0^2(y)\pi|_U(y),$$ and (b) the Poincar\'e inequality $$\min_{\xi}\sum_{y\in B_U(x,r)} |f(y)-\xi|^2 \widetilde{\pi}(y)\le \widetilde{P} r^2 \sum_{y,z\in B_U(x,r)} |f(y)-f(z)|^2 \widetilde{K}(z,y)\widetilde{\pi}(z),$$ for all $x\in U$, $r>0$ and all $f$ defined over $B_U(x,r)$. See \cite{Delm-PH}. Theorem \ref{theo-Carleson} shows that \begin{equation} \label{eq:phi0-volume} \widetilde{V}(x,r)\simeq \phi_0(x_r)^2V(x,r) \end{equation} and the doubling property of $\widetilde{V}$ follows from Corollary \ref{cor-Carleson}. The proof of the Poincar\'e inequality on the balls $B_U(x,r)$ follows from a variation on the argument developed in Section \ref{sec-PQPJD} which uses the additional property of inner-uniform domains. See \cite{Gyrya} for the proof in the context of strictly local Dirichlet spaces and \cite{Kelsey} for the case of discrete graphs. \end{proof} The following useful corollary to Theorem~\ref{theo-DH} is illustrated in several different examples in Section~\ref{sec:examples}. \begin{cor} \label{cor:exit-time-estimate} Given the setup of Theorem~\ref{theo-DH}, $$ c\beta_0^t\frac{\phi_0(x)}{\phi_0(x_{\sqrt{t}})}\le \mathbf P_x(\tau_U>t) \le C \beta_0^t \frac{\phi_0(x)}{\phi_0(x_{\sqrt{t}})} ,$$ where $\tau_U$ is the time at which the process $(X_t)$ exits $U$, and $c,C>0$ are constants which depend only on $\alpha,A,D,P_e,P$. \end{cor} \begin{proof} Remark \ref{rem-xr} gives us a constant $c$ such that $d(x_r,\mathfrak X\setminus U)\ge cr.$ Note that for any $y\in B(x_{\sqrt{t}},c\sqrt{t}/2)$, we have $\phi_0(y)\le C\phi_0(x_{\sqrt{t}})$ and $\phi_0(y_{\sqrt{t}})\ge C^{-1}\phi_0(x_{\sqrt{t}})$. Furthermore, Theorem \ref{theo-Carleson} gives that $$\widetilde{V}(x,\sqrt{t})\approx \widetilde{V}(y,\sqrt{t})\approx \phi_0(x_{\sqrt{t}})^2 V(x,\sqrt{t})\approx \phi_0(x_{\sqrt{t}})^2V(x_{\sqrt{t}},c\sqrt{t}/2).$$ Now, we use the lower bound concerning $K^t_{U}$ from Corollary~\ref{cor:DH} and the previous observations to obtain \begin{align} \mathbf{P}_x(\tau_U > t) &= \sum_{y \in U} K_U^t(x,y) \geq \sum_{y \in B(x_{\sqrt{t}},c\sqrt{t}/2)} K^t_U(x,y)\nonumber \\ &\geq c'_1\beta_0^t\frac{\phi_0(x)}{\phi_0(x_{\sqrt{t}})}.
\label{eq:exit-time-1} \end{align} For the upper bound, also using Corollary~\ref{cor:DH}, \begin{align} \mathbf{P}_x(\tau_U > t) &= \sum_{y \in U} K_U^t(x,y) \nonumber \\ &\le C_2 \beta_0^t\frac{\phi_0(x)}{\phi_0(x_{\sqrt{t}})} \sum_{y \in U} \frac{\phi_0(y)}{\phi_0(y_{\sqrt{t}})} \frac{e^{-c_2d^2_U(x,y)/t}}{\sqrt{V(x,\sqrt{t})V(y,\sqrt{t})}} \pi(y) \nonumber \\ & \le C'_2 \beta_0^t\frac{\phi_0(x)}{\phi_0(x_{\sqrt{t}})} . \end{align} The last inequality holds because $\phi_0(y)\le C\phi_0(y_{\sqrt{t}})$ by Theorem \ref{theo-Carleson}, and $$\sum_{y \in U} \frac{e^{-c_2d^2_U(x,y)/t}}{\sqrt{V(x,\sqrt{t})V(y,\sqrt{t})}} \pi(y) \le C $$ on any doubling space. \end{proof} \section{Some explicit examples} \label{sec:examples} In this section, we consider explicit families of finite domains indexed by a size parameter $N$ which is comparable to the diameter of the relevant domain. Each finite domain $U$ is an $\alpha$-inner-uniform domain with a chosen ``center'' $o$ which is just a point in $U$ at maximal distance $R=R_U$ from the boundary (see Lemma \ref{lem-IUJ}). Within each family, the inner-uniformity parameter, $\alpha\in (0,1)$, is fixed. The underlying weighted graph $(\mathfrak X,\mathfrak E,\pi,\mu)$ for these examples satisfies A1 with $\theta=2$. In fact, in this section, the underlying space is the square grid $\mathbb Z^d$ of some fixed dimension $d$ (or some simple modification of it). We normalize the Perron-Frobenius eigenfunction $\phi_0$ by $\pi_U(\phi_0^2)=1$. Because of Theorem \ref{theo-Carleson}, we have $$\max\{\phi_0\}\le C_0\phi_0(o)$$ and (see the $(1/8,C_0)$-regularity of $\phi_0$), $$C_0\min_{B(o,R/2)}\{\phi_0\}\ge \phi_0(o).$$ Furthermore, $\pi_U(B(o,R/2))\ge c_0\pi(U)$. It follows that $$\forall y\in B(o,R/2),\;\;\phi_0(y)\approx \phi_0(o)\simeq 1$$ uniformly within each family of examples considered. In fact, in many examples, the choice of the point $o$ is somewhat arbitrary because one could as well pick any point $\tilde{o}$ with the property that $$d(\tilde{o},\mathfrak X\setminus U)\ge \frac{1}{2}\max_{x\in U}\{d(x,\mathfrak X\setminus U)\}=\frac{R}{2}.$$ Any such point $\tilde{o}$ has the property that $$\forall y\in B(\tilde{o},R/4),\;\;\phi_0(y)\approx \phi_0(\tilde{o})\approx \phi_0(o)\simeq 1$$ uniformly over $\tilde{o}$ and within each family of examples considered. See Figure~\ref{B0}. \begin{figure} \caption{In light orange, regions where $\phi_0$ is approximately equal to $1$. On the left, an example in which there is essentially one central point $o$. On the right, an example in which the ``center'' $o$ can be placed in a variety of different locations.} \label{B0} \end{figure} \subsection{Graph distance balls in $\mathbb Z^2$} \begin{figure} \caption{$B(N)$ in $\mathbb Z^2$} \label{B1} \end{figure} In $\mathbb Z^2$, let $U=B(N)=\{x=(p,q)\in \mathbb Z^2: |p|+|q|\leq N\}$. This is the graph ball around $0$ in $\mathbb Z^2$. Equip $\mathbb Z^2$ with the counting measure $\pi$ and with edge weights $$\mu_{xy}=\begin{cases} 1/8 & \text{ if } |p_x-p_y|+|q_x-q_y|=1 \\ 0 & \text{ otherwise.} \end{cases}$$ The Markov kernel $K_\mu$ drives a lazy random walk on the square lattice, with holding probability $1/2$ at each vertex. We are interested in the kernel $$K_U(x,y)= K_\mu(x,y)\mathbf 1_U(x)\mathbf 1_U(y)$$ which we view as defining an operator on $L^2(U,\pi_U)$ where $\pi_U$ is the uniform probability measure on $U$.
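The kernel just defined is easy to assemble numerically. The following sketch is an informal illustration, not part of the original analysis; it assumes Python with NumPy and an arbitrary illustrative choice of $N$. It builds $K_U$ for the lazy walk on the diamond $B(N)$ and computes the survival probabilities $\mathbf P_x(\tau_U>t)=\sum_{y\in U}K_U^t(x,y)$ by iterating the kernel; feeding the same matrix to an eigensolver gives numerical approximations of $\beta_0$ and $\phi_0$ that can be compared with the closed-form expressions given in the next paragraph.
\begin{verbatim}
import numpy as np

N = 20                                     # illustrative size of the diamond B(N)
pts = [(p, q) for p in range(-N, N + 1) for q in range(-N, N + 1)
       if abs(p) + abs(q) <= N]
idx = {x: i for i, x in enumerate(pts)}
n = len(pts)

# K_U(x,y) = K_mu(x,y) 1_U(x) 1_U(y): holding probability 1/2, probability
# 1/8 for each of the four lattice neighbours; steps leaving B(N) are killed.
K = np.zeros((n, n))
for (p, q), i in idx.items():
    K[i, i] = 0.5
    for dp, dq in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        j = idx.get((p + dp, q + dq))
        if j is not None:
            K[i, j] = 1.0 / 8.0

# Survival probabilities P_x(tau_U > t) = (K_U^t 1)(x), here up to t = N^2.
surv = np.ones(n)
for t in range(N * N):
    surv = K @ surv
print("P_(0,0)(tau_U > N^2) =", surv[idx[(0, 0)]])
\end{verbatim}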
This set is clearly inner-uniform (in fact, it is uniform because the inner distance between any two points in $U$ is the same as the distance between these points in $\mathbb Z^2$). Let us introduce the Perron-Frobenius eigenfunction $\phi_0$ and its eigenvalue~$\beta_0$. Obviously, they depend on $N$. This is one of the rare cases when $\phi_0$ and $\beta_0$ can be determined explicitly: $$\phi_0(x)= \kappa_N\cos \left(\frac{\pi}{2(N+1)} (p+q) \right) \cos \left(\frac{\pi}{2(N+1)}(p-q) \right)$$ with $$\beta_0 = \frac{1}{2}\left(1 + \cos^2\left(\frac{\pi}{2(N+1)}\right)\right).$$ The normalizing constant $\kappa_N$ is of order $1$. Here we need to recall that $\phi_0$ vanishes on points at graph distance $N+1$ from the origin in $\mathbb Z^2$. To illustrate our result for estimating $\mathbf{P}_x(\tau_U > t)$ without writing long formulas, let us consider the probabilities $\mathbf P_{(p,0)}(\tau_U>t)$ and $\mathbf P_{(p,p)}(\tau_U>t)$ that a random walk started at $x=(p,0)$ (for $0\le p\le N$) and $x=(p,p)$ (for $0\le p\le N/2$), respectively, has not yet been killed by time $t$. For all $t\le N^2$, we have \begin{equation} \label{eq:exit-time-ball-1} \mathbf P_{(p,0)}(\tau_U>t) \approx \left(\frac{N-p}{N-p+\sqrt{t}}\right)^2, \;0\le p\le N. \end{equation} This comes from applying Corollary~\ref{cor:exit-time-estimate} to the eigenfunction above, \begin{align*} \mathbf{P}_{(p,0)}(\tau_U > t) &\approx \frac{\phi_0((p,0))}{\phi_0((p,0)_{\sqrt{t}})} \\ &\approx \frac{\phi_0((p,0))}{\phi_0((p-\sqrt{t},0))} \\ &\approx \frac{(\cos (\frac{\pi}{2N}p))^2}{\cos(\frac{\pi}{2N}(p-\sqrt{t}))^2}. \end{align*} Now, use that $\cos\left(\frac{\pi}{2N}x\right)= \sin \left(\frac{\pi}{2N}(N-x)\right)\sim \frac{\pi}{2N}(N-x)$. In particular, for any fixed $0<t\le N^2$, $\mathbf P_{(p,0)}(\tau_U>t)$ vanishes asymptotically like $\frac{(N-p)^2}{t}$ as $p$ tends to $N$. Similarly, for $0<t\le N^2$, $$\mathbf P_{(p,p)}(\tau_U>t) \approx \left(\frac{N-2p}{N-2p+\sqrt{t}}\right),\;0\le 2p\le N.$$ In this case, for any fixed $0<t\le N^2$, $\mathbf P_{(p,p)}(\tau_U>t)$ vanishes like $\frac{N-2p}{\sqrt{t}}$ when $p$ tends to $N/2$. \begin{rem} While our results apply equally well to the graph distance balls of $\mathbb Z^d$ for $d>2$, they are much more complicated in that case and there is no explicit formula for $\phi_0$ or the eigenvalue $\beta_0$. The ball is a polytope with faces of dimension $0,1,\dots,d$. The vanishing of $\phi_0$ near each of these faces is described by a power function of the distance to the particular face that is considered and the exponent depends on the dimension of the face and on the angles made by the higher dimensional faces meeting at the given face (the exponent is always $1$ when approaching the highest dimensional faces). \end{rem} \subsection{$B(N)\setminus\{(0,0)\}$ in $\mathbb Z^2$} \begin{figure} \caption{$B(N)\setminus \{(0,0)\}$ in $\mathbb Z^2$} \label{B2} \end{figure} The case when $U=B(N)\setminus\{(0,0)\}$ is interesting because we are able to describe precisely the behavior of $\phi_0$ even though there is no explicit formula available. First, we note again that this is an inner-uniform domain (there is no preferred point $o$ in this case, since any point at distance of order $N/2$ from $(0,0)$ will do). Theorem \ref{theo-comph} will play a key part in allowing us to describe the behavior of $\phi_0$.
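Before carrying out this analysis, the following informal numerical sketch (again not part of the argument; it assumes NumPy and a small illustrative value of $N$) computes the Perron-Frobenius pair of $K_U$ for this domain directly, so that the eigenvalue order and the product formula for $\phi_0$ obtained below can be checked for small $N$, up to constants.
\begin{verbatim}
import numpy as np

N = 15                                     # illustrative size parameter
pts = [(p, q) for p in range(-N, N + 1) for q in range(-N, N + 1)
       if abs(p) + abs(q) <= N and (p, q) != (0, 0)]
idx = {x: i for i, x in enumerate(pts)}
n = len(pts)

K = np.zeros((n, n))                       # killed lazy walk, as before
for (p, q), i in idx.items():
    K[i, i] = 0.5
    for dp, dq in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        j = idx.get((p + dp, q + dq))
        if j is not None:
            K[i, j] = 1.0 / 8.0

w, v = np.linalg.eigh(K)                   # K is symmetric (pi = counting measure)
beta0, phi0 = w[-1], np.abs(v[:, -1])
print("1 - beta_0 =", 1 - beta0, "  1/N^2 =", 1.0 / N ** 2)

# Compare phi_0 with the product formula of this subsection; N+1-|p+-q| is
# used so that the comparison does not vanish on the outer layer of U, and
# no attempt is made to track constants (only the spread of the ratio).
approx = np.array([(N + 1 - abs(p + q)) * (N + 1 - abs(p - q))
                   * np.log(1 + abs(p) + abs(q)) for (p, q) in pts])
ratio = (phi0 / np.linalg.norm(phi0)) / (approx / np.linalg.norm(approx))
print("ratio spread:", ratio.min(), ratio.max())
\end{verbatim}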
First, we can use path arguments and an appropriate test function to show that $$1-\beta_0 \approx N^{-2}.$$ For the upper bound, use the test function $$f((p,q))=\min\{d((0,0),(p,q)),\,N+1-d((0,0),(p,q))\}$$ which vanishes at all boundary points of $U$. Second, we show that $$\phi_0((p,q))\approx \frac{(N-|p+q|)(N-|p-q|)\log (1+|p|+|q|)}{N^2\log N }.$$ To obtain this result, cover $U$ by a finite number (independent of $N$) of $\mathbb Z^2$ balls $\{B_j\}$ of radius of order $N$ so that the trace of $U$ in each of the balls $2B_j$ is of one of the following four types: (1) no intersection with the boundary of $U$; (2) the intersection with the boundary of $U$ is $\{(0,0)\}$; (3) the intersection with the boundary of $U$ is a subset of $\{(p,q): p+q=N\}$, $\{(p,q): p-q=N\}$, $\{(p,q): p+q=-N\}$, or $\{(p,q): p-q=-N\}$; and (4) the intersection with the boundary is a corner formed by two of the previously mentioned lines. See Figure \ref{B2} for an illustration of these four types. In case (1), we know that $\phi_0$ is approximately constant in $B_j$. Moreover, this approximately constant value must be (approximately) the maximum value of $\phi_0$ because of Theorem \ref{theo-Carleson}, and this constant must be approximately equal to $1$ because $\phi_0$ is normalized by $\pi_U(\phi_0^2)=1$. This is compatible with the proposed formula describing $\phi_0$. In case (2), Theorem \ref{theo-comph} allows us to compare $\phi_0((p,q))$ to the harmonic function $h((p,q))$ equal to the discrete modified Green's function $$A((0,0),(p,q))=\sum_{t=0}^\infty\big(M^t((0,0),(p,q))-M^t((0,0),(0,0))\big)$$ on $\mathbb Z^2\setminus \{(0,0)\}$. Here $M$ is the Markov kernel of aperiodic simple random walk on $\mathbb Z^2$. It is well-known that this function is comparable to $\log(1+|p|+|q|)$ (see \cite[Chapter 3]{Spitzer}, from which we borrowed the notation $A(x,y)$; more precise estimates are available using sharp versions of the local limit theorem, but this is enough for our purpose). Because the ball $B_j$ in question must contain a point at distance of order $N$ from the boundary of $U$ at which $\phi_0$ is of order $1$, we find that, in such a ball, $$\phi_0((p,q))\approx \frac{\log (1+|p|+|q|)}{\log N}.$$ Again, this estimate is compatible with the proposed formula. In case (3), we easily have a linear function $h$ vanishing on the (flat) portion of the boundary contained in that ball and positive and discrete harmonic in $U$. Thanks to Theorem~\ref{theo-comph}, this provides the estimate $$\phi_0((p,q))\approx \frac{ d_U((p,q),\mathfrak X \setminus U)}{N}$$ in balls of this type, which has the form suggested by the proposed formula. Finally, in case (4), and, for definiteness, in the case the ball $B_j$ is centered at the corner of intersection of the lines ${\{(p,q): p+q=N\}}$ and ${\{(p,q): p-q=N\}}$, the function $h((p,q))= (N-p-q)(N-p+q) $ vanishes on these two lines and is discrete harmonic. This gives (again, using Theorem \ref{theo-comph}) $$\phi_0((p,q))\approx \frac{ (N-p-q)(N-p+q)}{N^2}$$ as desired. \subsection{$B(N)\setminus \{\mathbf 0\}$ in $B(N)$, in dimension $d>1$} \label{pointedball} \begin{figure} \caption{$B(N)\setminus \{\mathbf 0\}$ in $B(N)$} \label{B3} \end{figure} First we explain the title of this subsection. Consider the simple random walk in the ball $B(N)\subset \mathbb Z^d$, with any reasonable reflection type hypothesis on the boundary of $B(N)$. Our aim is to study absorption at $\mathbf 0$ for this random walk on the finite set $B(N)$.
To put this example in our general framework, we set $\mathfrak X_N= B(N)$ equipped with the edge set $\mathfrak E_N$ induced by the underlying square lattice, that is the collection of all lattice edges with both end points in $B(N)$. The measure $\pi$ on $\mathfrak X_N=B(N) $ is the counting measure and each lattice edge $e$ in $\mathfrak E$ is given the weight $\mu(e)=1/(2d)$. This means that the Markov kernel $K_\mu$ for our underlying walk has no holding at point $x\in B(N-1)\subset B(N)$ and holding probability $\nu(x)/(2d)$ where $\nu(x)= 2d- \#\{y\in B(N): \{x,y\}\in \mathfrak E_N\}$ when $x\in B(N)\setminus B(N-1)$ (this holding probability at the boundary is always at least $1/2$). The domain $U_N$ of interest to us here is $U_N=B(N)\setminus \{\mathbf 0\}$ (inside $B(N)$) whose sole outside boundary point is the center $\mathbf 0$. When the dimension $d$ is at least $2$, this is an inner-uniform domain in $(\mathfrak X_N,\mathfrak E_N)$ (there is no canonical center but any point at distance at least $N/2$ from $\mathbf 0$ can be chosen to be the center $o$). Because the domain $U_N$ is inner-uniform (uniformly in $N$), Theorem~\ref{theo-comph} yields $$\mathbf P_x(\tau_U >t)\approx \frac{\beta_0^{t} \phi_0(x)}{\phi_0(x_{\sqrt{t}})} $$ and, for $t \geq N^2$, Corollary~\ref{cor:KU-rate-of-conv} gives, $$|K^t_U(x,y) - \phi_0(x)\phi_0(y)\beta_0^{t} |U|^{-1}| \le C \beta_0^t\phi_0(x)\phi_0(y) e^{-t/N^2}.$$ As in the previous examples, the key is to obtain further information on $\beta_0$ and $\phi_0$. For that we need to treat the cases $d=2$ and $d>2$ separately. In both cases, we use Theorem \ref{theo-comph} to estimate $\phi_0$. \subsubsection{Case $d=2$} The first task is to estimate $1-\beta_0$ from above and below. This is done by using the same argument explained in \cite[Example 3.2.5: The dog]{LSCStF}. See Subsection \ref{par-eig} below where we spell out the main part of the argument in question. The upshot is that $1-\beta_0\approx 1/N^2\log N$. We know that $\phi_0(x)\approx 1$ when $x$ is at graph distance at least $N/2$ from $\mathbf 0$ (see the outline describe in Example \ref{pointedball} for type 1 balls). To estimate $\phi_0$ at other points, we compare it with the global positive harmonic function from $\mathbb Z^2\setminus \{\mathbf 0\}$ given the so-called modified Green's function $h(x)= A(\mathbf 0,x)= \sum_{t=0}^\infty [M^t(\mathbf 0,x)-M^t(\mathbf 0,\mathbf 0)]$ where $M$ stands here for the Markov kernel of aperiodic simple random walk in $\mathbb Z^2$ as in Example \ref{pointedball}. Note that $h$ vanishes at $0$. Classical estimates (e.g., \cite{Spitzer}) yield $h(x) \approx \log |x|$. This, together with Theorem \ref{theo-comph} and the estimate when $x$ is at distance at least $N/2$ from $\mathbf 0$, gives $$\phi_0(x) \approx \frac{\log |x|}{\log N}.$$ \subsubsection{Case $d>2$} The case $d>2$ is perhaps easier although the arguments are essentially the same. The eigenvalue $\beta_0$ is estimated by $1-\beta_0\approx 1/N^d$ and the harmonic function $h(x)=\sum_0^\infty M^t(\mathbf 0,x)-\sum_0^\infty M^t(\mathbf 0,\mathbf 0)$ (these sums converge separately because $d>2$) is estimated by $h(x)\approx \left(1- 1/(1+|x|)^{d-2}\right)$. This gives $$\phi_0(x) \approx \frac{\left(1- 1/(1+|x|)^{d-2}\right) }{ \left(1- 1/(1+N)^{d-2}\right)} \approx 1.$$ \subsubsection{Discussion} The first thing to observe in these examples is the fact that $1-\beta_0=o(1/N^2)$. 
For $t\ge N^2$ we have $$|K^t_U(x,y) - \phi_0(x)\phi_0(y)\beta_0^{t} |U|^{-1}| \le C \beta_0^t\phi_0(x)\phi_0(y) e^{-t/N^2}.$$ In the case $d=2$, if $\epsilon>0$ is fixed and $x,y$ are at distance greater than $N^\epsilon$ from the origin, we can without loss of information, simplify the above statement and write $$|K^t_U(x,y) - \phi_0(x)\phi_0(y)\beta_0^{t} |U|^{-1}| \le C e^{-t/N^2}.$$ Because $\beta_0^t$ decays significantly slower than $e^{-t/N^2}$, this provides a good example of a quasi-stationary distribution during the time interval $t\in (N^2, N^2\log N)$. In the case $d>2$, the same phenomenon occurs, only in an even more tangible way. For any $x,y\in U_N$, $\phi_0(x),\phi_0(y)$ are uniformly bounded away from $0$ (even for the neighbors of the origin, $\mathbf 0$). Moreover, $1-\beta_0\approx 1/N^{d}=o(1/N^2)$. For $t\ge N^2$ and $x,y\in U_N$, $$|K^t_U(x,y) - \phi_0(x)\phi_0(y)\beta_0^{t} |U|^{-1}| \le C e^{-t/N^2}.$$ On intervals of the type $t \in (TN^2,N^d/T)$ with $T$ large enough, $ K^t_U(x,y) $ is well approximated by $ \phi_0(x)\phi_0(y) |U|^{-1} $ because, on such intervals, $\beta_0^t$ remains close to $1$. \subsection{ $B(N)\setminus B_2(L)$ in $B(N)$, in dimension $d>1$} We work again in $\mathfrak X_N=B(N)$ with the weighted graph structure explained above. We use $B_2(r)$ to denote the trace on the lattice $\mathbb Z^d$ of the Euclidean (round) ball centered at the origin, $\mathbf 0$. The domain we wish to investigate is $U_{N,L}=B(N)\setminus B_2(L)$ with $L=o(N)$ so that the number of points in $U_{N,L}$ is of order $N^d$ and $U_{N,L}$ is inner-uniform (uniformly in all choices of $N,L$). Again, the chosen center $o$ in $U_{N,L}$ can be any point at graph distance $N$ from $\mathbf 0$. All the estimates described below are uniform in $N,L$ as long as $L=o(N)$. \subsubsection{Estimating $\beta_0$} \label{par-eig} First we explain how to estimate $\beta_0$ for $U=B(N)\setminus B_2(L)$ in $B(N)$ using and argument very similar to those used in \cite[Example 3.2.5: The dog]{LSCStF}. For each point $x\in U$ fix a graph geodesic discrete path $\gamma_x$ that joins $x$ to the origin in $\mathbb Z^d$ while staying as close as possible to the straight line from $x$ to the origin. We stop $\gamma_x$ whenever it reaches a point in $B_2(L)$. \begin{figure} \caption{Paths to the origin in $B(N)\setminus B_2(L)$} \label{B4path} \end{figure} Given a function $f$ on $B(N)$ which is equal to zero on $B_2(L)$ and a directed edge $e=(x,y)$, set $df(e)=f(y)-f(x)$. The edges along a path $\gamma_x$ are all directed toward the origin. Using this notation, we have $$|f(x)|^2\le |\sum_{e\in \gamma_x} df(e)|^2 \le |\gamma_x|_w \sum_{e\in \gamma_x} |df(e)|^2 w(e)$$ where $w$ is a weight function on the edge $e$ which will be chosen later and $|\gamma|_w= \sum_{e\in \gamma} w(e)^{-1}$. Summing over all $x\in U$, we obtain $$\sum_U|f|^2 \le 2d\sum_{e\in \mathfrak E}\left(\sum_{x: \gamma_x\ni e}|\gamma_x|_w w(e)\right) \frac{|df(e)|^2 }{2d} \le C_w(d,N,L) \mathcal E_\mu(f,f) $$ where $$C_w(d,N,L)= 2d \max_{e\in \mathfrak E}\left\{w(e)\sum_{x:\gamma_x\ni e}|\gamma_x|_w\right\}. $$ Using the Raleigh quotient formula for $1-\beta_0$, we obtain the eigenvalue estimate $$\beta_0\le 1 - 1/C_w(d,N,L)$$ for any choice of the weight $w$. Here we choose $w(e)$ to be the Euclidean distance of the edge $e$ to the origin raised to the power $d-1$. 
This implies that $$|\gamma_x|_w\le C_d\times \begin{cases} \log (N/L) &\mbox{ when } d=2,\\ L^{-d+2} & \mbox{ when } d>2 \end{cases}$$ for some constant $C_d$ which depends on the dimension $d$. It remains to count how many $x$ use a given edge $e$. Because we use paths that remain close to the straight line from $x$ to the origin, the vertices $x$ that use a given edge $e$ at Euclidean distance $T$ from the origin must be in a cone of aperture bounded by $C_d/T$. The number of these vertices is at most $C_d N\times (N/T)^{d-1} $ where the constant $C_d$ changes from line to line. See Figure \ref{B4path}. Recall that $w(e)\approx T^{d-1}$. Putting things together yields $$C_w(d,N,L)\le C_d \times \begin{cases} N^2 \log (N/L) &\mbox{ when } d=2,\\ N^d L^{-d+2} & \mbox{ when } d>2. \end{cases}$$ In terms of $\beta_0$, this gives $$1-\beta_0\ge C^{-1}_d \times \begin{cases} 1/(N^2\log (N/L)) &\mbox{ when } d=2,\\ L^{d-2}/N^d & \mbox{ when } d>2. \end{cases} $$ The corresponding upper bound on $1-\beta_0$ follows from a simple computation using a test function which takes the value $0$ on $B_2(L)$, increases linearly at rate $1$ until it reaches the value $L$, and remains constant equal to $L$ after that. Note that this bound interpolates between the case $L=1$ (more or less, the previous case) when $1-\beta_0\approx 1/N^d$ and the case when $L$ is a fixed small fraction of $N$, in which case $1-\beta_0\approx 1/N^2$. \subsubsection{Estimating $\phi_0$ in the case $d=2$} \begin{figure} \caption{$B(N)\setminus B_2(L)$: in the yellow region of width $L$ around $B_2(L)$, $\phi_0(x)\approx \frac{\log L}{L\log N}\,d(x,B_2(L))$.} \label{B4} \end{figure} The technique is the same as the one described below for the case $d>2$. Here we omit the details and only describe the findings. The behavior of the function $\phi_0$ is best described by considering two zones. See Figure \ref{B4}. The first zone is $B_2(2L)\setminus B_2(L)$ in which the function $\phi_0$ is roughly linearly increasing as the distance from $B_2(L)$ increases and satisfies $$\phi_0(x)\approx \frac{ \log L}{L\log N}\,d(x,B_2(L)).$$ The second zone is $B(N)\setminus B_2(2L)$ in which $\phi_0$ satisfies $$\phi_0(x)\approx \frac{\log |x|}{\log N}.$$ \subsubsection{Estimating $\phi_0$ in the case $d>2$} Because of the basic known property of $\phi_0$ discussed earlier, it satisfies $\phi_0\approx 1$ on the portion of $U_{N,L}$ which is at distance of order $N$ from $B_2(L)$ (the outer-part of $U_{N,L}$). The function $\phi_0$ is also bounded on $U_{N,L}$, uniformly in $N,L$. One key step is to find out the region in $U_{N,L}$ over which $\phi_0$ is bounded below by a fixed small $\epsilon$. For this purpose we use a simple comparison with the Green's function $G(\mathbf 0,y)=\sum_{t\ge 0} K^t(\mathbf 0,y)$ of the simple random walk on $\mathbb Z^d$. First, find the largest positive $T=T(L)$ such that $$B_2(L)\subset \{x\in \mathbb Z^d: G(\mathbf 0,x)\ge T\}.$$ Recall that \begin{equation} G(\mathbf 0,x)\approx 1/(1+|x|)^{d-2}. \label{Green=} \end{equation} This shows that $T\approx 1/L^{d-2}$ (the implied constants in this estimate depend on $d$ because we are using both the Euclidean norm and the graph distance). We are going to compare $\phi_0$ to a multiple of the harmonic function $$v(x)=1-G(\mathbf 0,x)/T.$$ It is clear that $v\approx 1$ when $|x|=N$ (uniformly over $N,L$).
It follows that there is a constant $a>0$, independent of $N,L$, such that $ \phi_0-av$ is greater than or equal to $0$ on the boundary of $V_{N,L}=B(N)\setminus \{z: G(\mathbf 0,z)\ge T\}$ (the constant $a$ is chosen so that this is true on the outer-boundary whereas, on the inner-boundary, $v=0$, $\phi_0>0$). Suppose that $\phi_0-av$ attains a minimum at an interior point $x_0$ in $V_{N,L}$. This would imply that $\phi_0(x_0)-av(x_0)\le \beta_0 \phi_0(x_0)-av(x_0)$, that is, $1\le \beta_0$, a contradiction. It follows that $\phi_0\ge a v$ on $V_{N,L}$. Because of the known estimate for $G$ recalled above and of the general properties of $\phi_0$, this shows that $$\phi_0\approx 1 \mbox{ over }B(N)\setminus B_2(2L).$$ All the statements and arguments given so far would work just as well if we were considering $B(N)\setminus B(L)$ instead of $B(N)\setminus B_2(L)$. These two cases differ only in the behavior of their respective $\phi_0$ near the interior boundary. For $U_{N,L}=B(N)\setminus B_2(L)$, it is possible to show that $$\phi_0(x) \approx \frac{d(x, B_2(L))}{L}.$$ The fundamental reason for this is the (uniform) smoothness of the boundary of the Euclidean ball $B_2(L)$ (viewed at scale $L$). The result is a consequence of one of the main results in \cite{VarMilan1} (see also \cite{varMilan2,VarMilan3}). \subsection{$B(N)\setminus B(L)$, $d=2$} \begin{figure} \caption{$B(N)\setminus B(L)$} \label{BNL} \end{figure} Next we consider $B(N)\setminus B(L)$, $L<N/2$, in dimension $d=2$. We have again $$1-\beta_0\approx 1/(N^2\log (N/L)).$$ In the zone $B(N)\setminus B_2(2L)$ (outside the yellow area in Figure \ref{BNL}), the function $\phi_0$ is estimated by $$\phi_0(x)\approx \frac{\log |x|}{\log N }.$$ We note here that the exact outer shape of the yellow region is unimportant (we could have drawn a diamond instead of a round ball). In order to describe the function $\phi_0$ in the yellow zone ($B_2(2L)\setminus B(L)$), it is convenient to split the region into eight areas, each of which is of one of two types. See Figure \ref{BNL2} where the two red circles describe the two types of regions that we will consider. The estimates described below are compatible when two regions intersect. In the type $1$ regions, because the relevant piece of the boundary at scale $L$ is flat, $$\phi_0(x)\approx \frac{\log L}{L\log N} d(x,B(L)).$$ \begin{figure} \caption{The yellow zone in $B(N)\setminus B(L)$} \label{BNL2} \end{figure} In the type $2$ regions, centered around one of the corners of $B(L)$, $$\phi_0(x)\approx \frac{\log L}{\log N} (\rho/L)^{2/3} \cos \left( 2\theta/3\right),\;\; x=(x_1,x_2), x-\xi=\rho e^{i \theta}.$$ Here $\xi$ is the tip of the diamond $B(L)$ around which the region of type 2 is centered, $\theta$ is the angle in $[-3\pi/4,3\pi/4]$ measured from the median semi-axis through the tip. This last estimate is obtained by using the results of \cite{VarMilan1} to derive the behavior of discrete harmonic functions in a type $2$ region from the behavior of the analogous classical harmonic function in the analogous domain in $\mathbb R^2$ (a cone with aperture $3\pi/2$). \section{Summary and concluding remarks} This article gives detailed quantitative estimates describing the behavior of Markov chains on certain finite sub-domains of a large class of underlying graphs before the chain exits the given sub-domain. There are two types of key assumptions. The first set of assumptions concerns the underlying graph (before we consider a particular sub-domain).
This underlying graph belongs to a large class of graphs whose properties mimic those of the square grid $\mathbb Z^m$. This class of graphs can be defined in a variety of known equivalent different ways: it satisfies, uniformly at all scales and locations, the doubling volume condition and Poincar\'e inequality on balls; equivalently, the iterated kernel of simple random walk satisfies detailed two-sided ``Gaussian or sub-Gaussian bounds''; or, equivalently, it satisfies a certain type of parabolic Harnack inequality for (local) positive solutions of the discrete heat equation. See the books \cite{BalrlowLMS,GrigBook} for details and pointers to the literature. It is perfectly fine for the reader to concentrate attention on the case of the square grid $\mathbb Z^m$. However, even if the reader concentrates on this special case, the techniques that are then used to study the behavior of the chain in sub-domains are the same techniques as the ones needed to understand the more general class of graphs we just alluded to. The second set of assumptions concerns the finite sub-domains of the underlying graph that can be treated. These sub-domains are called John domains and inner-uniform domains, and both are defined using metric properties. For John domains (the larger class), there is a central point $o$ and any other point of the domain can be joined to the central point $o$ by a carrot-shaped region that remains entirely contained in the domain. The inner-uniform condition (a strictly more restrictive condition) requires that any pair of point in the domain can be joined by a banana-shaped region that is entirely contained in the domain. It is not easy to get a good precise understanding of the type of regions afforded by these conditions because they allow for very rugged domains (e.g., in the Euclidean plane version, the classical snowflake). They do cover many interesting examples. It is worth emphasizing here that the strength of the results obtained in this article comes from the conjunction of the two types of assumptions described above. Under these assumptions, one can describe the results of this paper by saying that any question about the behavior of the chain until it exits the given sub-domain boils down (in a technically precise and informative way) to estimating the so-called Perron-Frobenius eigenvalue and eigenfunction of the domain. Let us stress here that it is quite clear that it is necessary to understand the Perron-Frobenius pair in order to get a handle on the behavior of the chain until it exits the domain. What is remarkable is the fact that it is essentially sufficient to understand this pair in order to answer a host of seemingly more sophisticated and intricate questions. This idea is not new as it is the underlying principle of the method known as the Doob-transform technique which has been used by many authors before. Under two basic types of assumptions described above, this idea works remarkably well. In different contexts (diffusion, continuous metric measure spaces, Dirichlet forms and unbounded domains) this same idea is the basis for many of the developments in \cite{Pinsky,Gyrya}. For inner-uniform domains, the more restrictive class of domains, the results obtained are rather detailed and complete. For John domains, the results obtained, which depend on the notion of moderate growth (see Lemma~\ref{lem-Q}), are less detailed and leave interesting questions open. We conclude with pointing out to further potential developments. 
This article focuses on the behavior before the exit time of the given finite domain. In the follow-up paper \cite{DHSZ2}, we discuss, in the case of inner-uniform domains, the implications of these results on the problem of understanding the exit position. This can be framed as an extension of the classical Gambler's ruin problem. In a spirit similar to what was said above, \cite{DHSZ2} shows how Gambler's ruin estimates on inner-uniform domains reduce to an understanding of the Perron-Frobenius eigen pair of the domain. Much less is known for John domains in this direction. Having reduced a certain number of interesting questions to the problem of estimating the Perron-Frobenius eigenfunction $\phi_0$ of a given finite domain, we owe the reader to observe that this task, estimating $\phi_0$, remains extremely difficult. There are plenty of interesting results in this direction and many more natural open problems. An illustrative example is the following: consider the cube of side length $2N$ in $\mathbb Z^3$ with the three main coordinate axes going through the center removed; this is an inner-uniform domain and we would like to estimate the eigenfunction $\phi_0$. Another example, less mysterious, is to find precise estimates for $\phi_0$ for the graph balls in $\mathbb Z^m$ with $m\ge 3$. For finite domains in $\mathbb Z^m$ with diameter $R$, we have proved that the key convergence parameter for the quasi-stationarity problems considered here is order $R^2$ for $\alpha$-inner-uniform domains and no more than $R^{2+\omega}$ for $\alpha$-John domains where $\omega\ge 0$ depends only on the dimension $m$ and John parameter $\alpha$. It is an interesting open question to decide whether or not $\omega$ can be taken to be always equal to $0$. Even if there are John domains where $\omega$ must be positive, it is clear that there is a class of John domains that is strictly larger than the class of all inner-uniform domains and for which one can take $\omega=0$. Elucidating this question is an interesting open problem in the present context and in the context of analysis in Euclidean domains. \end{document}
\begin{document} \title{\bf Topological algebras of bounded operators with locally solid Riesz spaces} \maketitle \author{\centering ABDULLAH AYDIN \\ \small Department of Mathematics, Mu\c{s} Alparslan University, Mu\c{s}, Turkey. \\} \abstract{Let $X$ be a vector lattice and $(E,\tau)$ be a locally solid vector lattice. An operator $T:X\to E$ is said to be $ob$-bounded if, for each order bounded set $B$ in $X$, $T(B)$ is topologically bounded in $E$. In this paper, we study algebraic properties of $ob$-bounded operators with respect to the topologies of uniform convergence and equicontinuous convergence. \let\thefootnote\relax\footnotetext {Keywords: $ob$-bounded operator, order bounded operator, vector lattice, locally solid Riesz space \text{2010 AMS Mathematics Subject Classification:} 46A40, 47B65, 46H35 e-mail: [email protected]} \section{Introduction and preliminaries}\label{A1} Bounded operators play an important role in functional analysis. Our aim in this paper is to introduce and study $ob$-bounded operators from a vector lattice to a locally solid vector lattice or between locally solid vector lattices, a topic which has attracted the attention of several authors in a series of recent papers; see, for example, \cite{AA,AEEM2,Tr,Z}. By an {\em operator}, we always mean a linear operator between vector spaces. Let us recall some notation and terminology used in this paper. In every topological vector space (over $\mathbb{R}$ or $\mathbb{C}$), there exists a base $N_0$ of zero neighborhoods with the following properties: for every $V\in N_0$, $\lambda V\subseteq V$ whenever $\lvert \lambda\rvert\leq 1$; for every $V_1,V_2\in N_0$ there exists $V\in N_0$ such that $V\subseteq V_1\cap V_2$; for every $V\in N_0$ there exists $U\in N_0$ such that $U+U\subseteq V$; for every $V\in N_0$ and every scalar $\lambda$ the set $\lambda V$ is in $N_0$. Whenever we mention a neighborhood of zero, we always assume that it belongs to a base which satisfies these properties. A subset $A$ of a vector lattice $E$ is said to be {\em solid} if, for $y\in A$ and $x\in E$ with $\lvert x\rvert\leq\lvert y\rvert$, we have $x\in A$. Let $E$ be a vector lattice and $\tau$ be a linear topology on $E$ that has a base at zero consisting of solid sets. Then the pair $(E,\tau)$ is said to be a {\em locally solid vector lattice} (or {\em locally solid Riesz space}), that is, a topological vector lattice with a locally solid topology; for more details on these notions, see \cite{AB,ABPO}. A vector lattice $E$ is {\em order complete} if every nonempty subset of $E$ which is bounded above has a supremum. For $a\in E_+$, the {\em order interval} $[-a,a]$ consists of all $x\in E$ such that $-a\leq x\leq a$. In a vector lattice $E$, a subset $B\subseteq E$ is called {\em order bounded} if it is contained in an order interval. Similarly, in a topological vector lattice $(E,\tau)$, a subset $B\subseteq E$ is called {\em bounded} (or {\em topologically bounded}) if, for each zero neighborhood $U$ of $E$, there exists a positive scalar $\lambda$ with $B\subseteq \lambda U$. We refer the reader to \cite{AB,ABPO,L,Tr} for more information about relations and results concerning these types of bounded sets. An {\em order bounded operator} between vector lattices sends order bounded subsets to order bounded subsets. For a given vector lattice $E$, $B_b(E)$ denotes the space of all order bounded operators on $E$. Since $B_b(E)$ carries only an order structure, it does not come with a natural topology. However, for a given Banach lattice $E$, $B_b(E)$ forms a Banach lattice.
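To keep the distinction between the two notions of boundedness in mind, the following standard illustration may help (it is included here only as an aside and is not taken from the references above): in the Banach lattice $L^1[0,1]$, the closed unit ball is topologically bounded but not order bounded. Indeed, if some $0\leq g\in L^1[0,1]$ dominated every element of the unit ball, then, testing against $f_n=n\,\mathbf{1}_{[0,1/n]}$ (which satisfies $\|f_n\|_1=1$), we would get $g\geq n$ on $[0,1/n]$ for every $n$, and hence $$\int_0^1 g\;\geq\;\sum_{n\geq 1}\int_{1/(n+1)}^{1/n} g\;\geq\;\sum_{n\geq 1} n\Big(\frac{1}{n}-\frac{1}{n+1}\Big)\;=\;\sum_{n\geq 1}\frac{1}{n+1}\;=\;\infty,$$ contradicting $g\in L^1[0,1]$.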
Through this paper, we consider a subspace of the space of all linear operators between vector lattices. All vector spaces in this paper are assumed to be real, and all topological vector lattices are considered to be locally solid. Now, let us give some known types of bounded operators. Let $E$ and $F$ be topological vector spaces. An operator $T:E\to F$ is said to be {\em $bb$-bounded} if it maps every bounded set into a bounded set, and said to be {\em $nb$-bounded} if $T$ maps some neighborhood of zero into a bounded set; see \cite{Tr}. When $E$ and $F$ are locally solid vector lattices, $T$ is said to be {\em $bo$-bounded} if, for every bounded set $B\subset E$, $T(B)$ is order bounded in $F$, and said to be {\em $no$-bounded} if there exists some zero neighborhood $U\subset E$ such that $T(U)$ is order bounded in $F$; see \cite{Z}. Motivated by these definitions, we give the following notion. \begin{defn} Let $X$ be a vector lattice and $(E,\tau)$ be a locally solid vector lattice. Then an operator $T:X\to E$ is said to be {\em $ob$-bounded} if, for every order bounded set $B\subseteq X$, $T(B)$ is $\tau$-bounded in $E$. \end{defn} Every order bounded subset is topologically bounded in locally solid vector lattices; see \cite[Thm.2.19(i)]{AB}, and so we have the following result. \begin{rem}\ \begin{enumerate} \item[(i)] Every $bb$-bounded operator between locally solid vector lattices is $ob$-bounded. \item[(ii)] Every $bo$-bounded operator is $ob$-bounded. \item[(iii)] Every order bounded operator from a vector lattice to a locally solid vector lattice is $ob$-bounded. \end{enumerate} \end{rem} It is natural to ask whether an $ob$-bounded operator is order bounded or $bo$-bounded. By considering the Example 2.4 of \cite{L}, we have the negative answer for order boundedness since bounded set may be not order bounded in locally solid vector lattice. Also, by considering the Example 2 of \cite{Z} and the Theorem \cite[Thm.2.19(i)]{AB}, the identity operator $I$ in the example is $ob$-bounded but it fails to be $bo$-bounded. But, on the other hand, we have a partial positive answer. Every bounded set is order bounded in locally solid vector lattice when space contains an order bounded zero neighborhood; see \cite[Thm.2.2]{L}. So, every $ob$-bounded operator from a vector lattice to a locally solid vector lattice with an order bounded zero neighborhood is order bounded, and also every $ob$-bounded operator between locally solid vector lattices with order bounded zero neighborhoods is $bo$-bounded, and lastly every $ob$-bounded operator from a locally solid vector lattice with an order bounded zero neighborhood to a locally solid vector lattice is $bb$-bounded. \begin{rem}\label{some example of bounded operators} The sum of two $ob$-bounded operators is $ob$-bounded since the sum of two bounded sets in a topological vector space is bounded. The product of two $ob$-bounded operators may not be $ob$-bounded. However, the product of an $ob$-bounded operator $T$ with a $bo$-bounded operator $S$ from the left hand (i.e $S\circ T$) is order bounded and so $ob$-bounded. \end{rem} The class of all $ob$-bounded operators from a vector lattice $X$ to a locally solid vector lattice $(E,\tau)$ is denoted by $B_{ob}(X,E)$, and it will be equipped with the topology of uniform convergence on order bounded sets. 
Recall that a net $(S_\alpha)$ of $ob$-bounded operators is said to {\em converge to zero uniformly} on an order bounded set $B$ if, for each zero neighborhood $V$ in $E$, there exists an index $\alpha_0$ such that $S_\alpha(B)\subseteq V$ for all $\alpha\geq\alpha_0$. We say that $(S_\alpha)$ converges to $S$ uniformly on order bounded sets if $(S_\alpha-S)$ converges to zero uniformly on order bounded sets; see \cite[2.16]{Tr}. Also, $ob$-bounded operators will be equipped with the topology of equicontinuous convergence with order bounded sets. Recall that a net $(S_\alpha)$ of $ob$-bounded operators is said to {\em converge to zero equicontinuously} with order bounded sets if, for each zero neighborhood $V$ in $E$, there is an order bounded set $B$ in $X$ such that, for every $\varepsilon>0$, there exists an index $\alpha_0$ such that $S_\alpha(B) \subseteq\varepsilon V$ for all $\alpha\geq\alpha_0$. We say that $(S_\alpha)$ converges to $S$ equicontinuously with order bounded sets if $(S_\alpha-S)$ converges to zero equicontinuously with order bounded sets; see \cite[2.18]{Tr}. \section{Main results}\label{A2} In the following two results, we show the continuity of addition and scalar multiplication with respect to the uniform convergence topology and the equicontinuous convergence topology, respectively. \begin{thm} The operations of addition and scalar multiplication are continuous in $B_{ob}(X,E)$ with respect to the uniform convergence topology on order bounded sets. \end{thm} \begin{proof} Suppose $(T_\alpha)$ and $(S_\alpha)$ are two nets of $ob$-bounded operators which are uniformly convergent to zero on order bounded sets. Fix an arbitrary order bounded set $B$ in $X$. Consider a zero neighborhood $V$ in $E$; then there exists another zero neighborhood $U$ such that $U+U\subseteq V$. Thus, there are some indexes $\alpha_1$ and $\alpha_2$ such that $T_\alpha(B)\subseteq U$ for every $\alpha\geq\alpha_1$ and $S_\alpha(B)\subseteq U$ for each $\alpha\geq\alpha_2$. Since the index set is directed, there is another index $\alpha_0$ such that $\alpha_0\geq\alpha_1$ and $\alpha_0\geq\alpha_2$. Hence, $T_\alpha(B)\subseteq U$ and $S_\alpha(B)\subseteq U$ for all $\alpha\geq\alpha_0$. Then we have $$ (T_\alpha+S_\alpha)(B)\subseteq T_\alpha(B)+S_\alpha(B)\subseteq U+U\subseteq V $$ for each $\alpha\geq\alpha_0$. Therefore, since $B$ is arbitrary, we get that addition is continuous in $B_{ob}(X,E)$ with respect to the uniform convergence topology on order bounded sets. Next, we show the continuity of the scalar multiplication. Consider an order bounded set $B$ in $X$ and a sequence of reals $(\lambda_n)$ which converges to zero. Since $(T_\alpha)$ is uniformly convergent to zero on order bounded sets, for each zero neighborhood $V$ in $E$, there exists an index $\alpha_0$ such that $T_\alpha(B)\subseteq V$ for all $\alpha\geq\alpha_0$. For sufficiently large $n$, we have $\lvert\lambda_n\rvert\leq 1$, and so $\lambda_n V\subseteq V$. Then, for all $\alpha\geq\alpha_0$ and sufficiently large $n$, we have $$ \lambda_nT_\alpha(B)=T_\alpha(\lambda_n B)\subseteq\lambda_n V \subseteq V. $$ Therefore, we get the desired result. \end{proof} \begin{thm} The operations of addition and scalar multiplication are continuous in $B_{ob}(X,E)$ with respect to the equicontinuous convergence topology with order bounded sets. \end{thm} \begin{proof} Let $(T_\alpha)$ and $(S_\alpha)$ be two nets of $ob$-bounded operators which are equicontinuously convergent to zero with order bounded sets.
Fix an arbitrary zero neighborhood $V$. Then there is another zero neighborhood $U$ with $U+U\subseteq V$. So, there exist order bounded sets $B_1$, $B_2$ in $X$ such that, for every $\varepsilon>0$, there are indexes $\alpha_1$ and $\alpha_2$ such that $T_\alpha(B_1) \subseteq\varepsilon U$ for all $\alpha\geq\alpha_1$ and $S_\alpha(B_2) \subseteq\varepsilon U$ for each $\alpha\geq\alpha_2$. Take an index $\alpha_0$ such that $\alpha_0\geq\alpha_1$ and $\alpha_0\geq\alpha_2$. Hence, $T_\alpha(B_1)\subseteq\varepsilon U$ and $S_\alpha(B_2) \subseteq\varepsilon U$ for all $\alpha\geq\alpha_0$. Choose the order bounded set $B=B_1\cap B_2$; then we have $$ (T_\alpha+S_\alpha)(B)\subseteq T_\alpha(B_1)+S_\alpha(B_2)\subseteq\varepsilon U+\varepsilon U\subseteq \varepsilon V $$ for each $\alpha\geq\alpha_0$. Therefore, since $V$ is arbitrary, we get the desired result. Now, we show the continuity of scalar multiplication. Consider a zero neighborhood $V$ and a sequence of reals $(\lambda_n)$ which converges to zero. Since $(T_\alpha)$ is equicontinuously convergent to zero with order bounded sets, there exists an order bounded set $B$ in $X$ such that, for every $\varepsilon>0$, there is an index $\alpha_0$ such that $T_\alpha(B)\subseteq\varepsilon V$ for each $\alpha\geq\alpha_0$. Also, for sufficiently large $n$, we have $\lvert\lambda_n\rvert\leq 1$, so that $\lambda_n V\subseteq V$. Then, for all $\alpha\geq\alpha_0$ and for fixed sufficiently large $n$, we have $$ \lambda_nT_\alpha(B)=T_\alpha(\lambda_n B)\subseteq \lambda_n\varepsilon V \subseteq\varepsilon V. $$ Therefore, we get the desired result. \end{proof} The next two results give the continuity of the product of $ob$-bounded operators with respect to the uniform convergence topology and the equicontinuous convergence topology, respectively. \begin{thm} Let $(E,\tau)$ be a locally solid vector lattice with an order bounded zero neighborhood. Then the product of $ob$-bounded operators is continuous in $B_{ob}(E)$ with respect to the uniform convergence topology on order bounded sets. \end{thm} \begin{proof} Suppose $(T_\alpha)$ and $(S_\alpha)$ are two nets of $ob$-bounded operators which are uniformly convergent to zero on order bounded sets. Fix a zero neighborhood $V$ in $E$, so there is $a\in E_+$ such that $V\subseteq [-a,a]$; see \cite[Thm.2.2]{L}. Since $(T_\alpha)$ is uniformly convergent to zero, there exists an index $\alpha_1$ such that $T_\alpha([-a,a])\subseteq V$ for all $\alpha\geq\alpha_1$. Also, there is an index $\alpha_2$ such that $S_\alpha([-a,a])\subseteq V$ for all $\alpha\geq\alpha_2$. Since there is an index $\alpha_0$ with $\alpha_0\geq\alpha_1$ and $\alpha_0\geq\alpha_2$, we have $$ S_\alpha\big(T_\alpha([-a,a])\big)\subseteq S_\alpha(V)\subseteq S_\alpha([-a,a])\subseteq V $$ for each $\alpha\geq\alpha_0$. Therefore, we get the desired result. \end{proof} \begin{thm} Let $(E,\tau)$ be a locally solid vector lattice with an order bounded zero neighborhood. Then the product of $ob$-bounded operators is continuous in $B_{ob}(E)$ with respect to the equicontinuous convergence topology with order bounded sets. \end{thm} \begin{proof} Let $(T_\alpha)$ and $(S_\alpha)$ be nets of $ob$-bounded operators which are equicontinuously convergent to zero with order bounded sets. Fix a zero neighborhood $V$ in $E$ and, by applying \cite[Thm.2.2]{L}, there is $a\in E_+$ such that $V\subseteq [-a,a]$.
On the other hand, there exist order bounded sets $B_1$ and $B_2$ in $E$ such that, for every $\varepsilon>0$, there are indexes $\alpha_1$ and $\alpha_2$ such that $T_\alpha(B_1)\subseteq\varepsilon V$ for each $\alpha\geq \alpha_1$ and $S_\alpha(B_2)\subseteq\varepsilon V$ for all $\alpha\geq\alpha_2$. Take an index $\alpha_0$ with $\alpha_0\geq\alpha_1$ and $\alpha_0\geq\alpha_2$, and the order bounded set $B=B_1\cap B_2$. Then, for each $\alpha\geq\alpha_0$, we have $$ S_\alpha\big(T_\alpha(B)\big)\subseteq S_\alpha(\varepsilon V)=\varepsilon S_\alpha(V) \subseteq \varepsilon S_\alpha([-a,a]). $$ By \cite[Thm.2.19(i)]{AB}, there is a positive scalar $\lambda$ such that $[-a,a]\subseteq \lambda V$. Then, for the positive scalar $\frac{\varepsilon}{\lambda}$, there exists an index $\alpha_3$ such that $T_\alpha(B)\subseteq\frac{\varepsilon}{\lambda} V$ and $S_\alpha(B)\subseteq\frac{\varepsilon}{\lambda} V$ for each $\alpha\geq \alpha_3$. Thus, we have $$ S_\alpha\big(T_\alpha(B)\big)\subseteq S_\alpha(\frac{\varepsilon}{\lambda}V) \subseteq \frac{\varepsilon}{\lambda} S_\alpha([-a,a])\subseteq \frac{\varepsilon}{\lambda} \lambda V=\varepsilon V $$ for all $\alpha\geq \alpha_3$. \end{proof} In the following two results, we show that the lattice operations are continuous with respect to the uniform convergence topology and the equicontinuous convergence topology, respectively. \begin{thm}\label{lattice operations is cont.} Let $X$ be a vector lattice and $(E,\tau)$ an order complete locally solid vector lattice with an order bounded zero neighborhood. Then the lattice operations in $B_{ob}(X,E)$ are continuous with respect to the uniform convergence topology on order bounded sets. \end{thm} \begin{proof} Assume $(T_\alpha)$ and $(S_\alpha)$ are two nets of $ob$-bounded operators which are uniformly convergent to the linear operators $T$ and $S$ on order bounded sets, respectively. For each $x\in X_+$, by applying the Riesz–Kantorovich formula, we have \begin{eqnarray} \big(T\vee S\big)(x)=\sup\{Tu+Sv:u,v\geq0,u+v=x\}. \end{eqnarray} Fix an order bounded set $B$, which we may assume to be an order interval, and fix $x\in B_+$. Suppose $u$, $v$ are positive elements such that $x=u+v$, and so that $u,v\in B$. Also, for two subsets $A_1$, $A_2$ in a vector lattice, we have $\sup(A_1)-\sup(A_2)\leq \sup(A_1-A_2)$. Then we get \begin{eqnarray*} \big(T_\alpha\vee S_\alpha\big)(x)-\big(T\vee S\big)(x)&=&\sup\{T_\alpha u+S_\alpha v:u,v\geq0,u+v=x\}\\&& -\sup\{Tu+Sv:u,v\geq0,u+v=x\}\\&\leq&\sup\{(T_\alpha-T) u+(S_\alpha-S) v:u,v\geq0,u+v=x\}. \end{eqnarray*} For a given zero neighborhood $V$ in $E$, pick a zero neighborhood $U$ with $U+U\subseteq V$. Thus, there are some indexes $\alpha_1$ and $\alpha_2$ such that $(T_\alpha-T)(B)\subseteq U$ for every $\alpha\geq\alpha_1$ and $(S_\alpha-S)(B)\subseteq U$ for each $\alpha\geq\alpha_2$. There exists another index $\alpha_0$ such that $\alpha_0\geq\alpha_1$ and $\alpha_0\geq\alpha_2$, so that $(T_\alpha-T)(B)\subseteq U$ and $(S_\alpha-S)(B)\subseteq U$ for all $\alpha\geq\alpha_0$. Then, for each $\alpha\geq\alpha_0$, we have $$ \big(T_\alpha\vee S_\alpha\big)(x)-\big(T\vee S\big)(x)\leq \big(T_\alpha-T\big)(x)+\big(S_\alpha-S\big)(x)\subseteq U+U\subseteq V. $$ Therefore, we have $\big(T_\alpha\vee S_\alpha\big)(B)-\big(T\vee S\big)(B)\subseteq V$ for each $\alpha\geq\alpha_0$, and so we get the desired result. \end{proof} \begin{ques} Does Theorem \ref{lattice operations is cont.} hold without an order bounded zero neighborhood? \end{ques} \begin{thm}\label{lattice operations is equi.
cont.} Let $X$ be a vector lattice and $(E,\tau)$ locally solid vector lattice with order complete property and an order bounded zero neighborhood. Then the lattice operations in $B_{ob}(X,E)$ are continuous with respect to the equicontinuous convergence topology with order bounded sets. \end{thm} \begin{proof} Let $(T_\alpha)$ and $(S_\alpha)$ be nets of $ob$-bounded operators which are equicontinuous convergent to the linear operators $T$ and $S$ with order bounded sets, respectively. By the Riesz–Kantorovich formula, we have $$ \big(T\vee S\big)(x)=\sup\{Tu+Sv:u,v\geq0,u+v=x\}. $$ for ever $x\in X_+$. Fix a zero neighborhood $V$ in $E$. Take a zero neighborhood $U$ with $U+U\subseteq V$. There are order bounded sets $B_1$ and $B_2$ in $X$ such that, for every $\varepsilon>0$, there exist indexes $\alpha_1$ and $\alpha_2$ such that $T_\alpha(B_1)\subseteq\varepsilon U$ for each $\alpha\geq \alpha_1$ and $S_\alpha(B_2)\subseteq\varepsilon U$ for all $\alpha\geq\alpha_2$. Pick an index $\alpha_0$ with $\alpha_0\geq\alpha_1$ and $\alpha_0\geq\alpha_2$, and order bounded set $B=B_1\cap B_2$. Hence, $T_\alpha(B)\subseteq\varepsilon U$ and $S_\alpha(B)\subseteq\varepsilon U$ for all $\alpha\geq \alpha_0$. Fix $x\in B_+$, suppose $u$, $v$ are positive elements such that $x=u+v$, and so that $u,v\in B$. Then we get \begin{eqnarray*} \big(T_\alpha\vee S_\alpha\big)(x)-\big(T\vee S\big)(x)&=&\sup\{T_\alpha u+S_\alpha v:u,v\geq0,u+v=x\}\\&& -\sup\{Tu+Sv:u,v\geq0,u+v=x\}\\&\leq&\sup\{(T_\alpha-T) u+(S_\alpha-S) v:u,v\geq0,u+v=x\}. \end{eqnarray*} Then, for each $\alpha\geq\alpha_0$, we have $$ \big(T_\alpha\vee S_\alpha\big)(x)-\big(T\vee S\big)(x)\leq \big(T_\alpha-T\big)(x)+\big(S_\alpha-S\big)(x)\subseteq \varepsilon U+\varepsilon U\subseteq \varepsilon V. $$ Therefore, we get the result. \end{proof} \begin{ques} Does the Theorem \ref{lattice operations is equi. cont.} hold without order bounded zero neighborhood? \end{ques} We finish this paper with the following results which show that $B_{ob}(X,E)$ is topologically complete algebras with respect to the assigned topologies. \begin{prop}\label{t is also ob-bounded} Let $(T_\alpha)$ be a net of $ob$-bounded operators in $B_{ob}(X,E)$ which uniform convergent to the linear operators $T$ on order bounded sets. Then $T$ is $ob$-bounded. \end{prop} \begin{proof} Let $B$ be an order bounded set. For given a zero neighborhood $V$ in $E$, there exists an index $\alpha_0$ such that $\big(T-T_\alpha\big)(B)\subseteq V$ for all $\alpha\geq\alpha_0$. So, we have $$ T(B)\subseteq T_{\alpha_0}(B)+V. $$ Since $T_{\alpha_0}$ is $ob$-bounded, $T_{\alpha_0}(B)$ is bounded in $E$. Also, since the sum of two bounded sets in a topological vector space is bounded, $T(B)$ is also bounded. \end{proof} \begin{ques} Let $(T_\alpha)$ be a net of $ob$-bounded operators in $B_{ob}(X,E)$. Is $T$ $ob$-bounded operator whenever $(T_\alpha)$ equicontinuous convergent to the linear operators $T$ with order bounded sets? \end{ques} A net $(S_\alpha)$ on a locally solid vector lattice $(E,\tau)$ is said to {\em order converges to zero uniformly} on a bounded set $B$ if, for each $a\in E_+$, there is an $\alpha_0$ with $S_\alpha(B)\subseteq [-a,a]$ for each $\alpha\geq \alpha_0$; see \cite{Z}. \begin{prop}\label{t is o} Let $(T_\alpha)$ be a net of $ob$-bounded operators between locally solid vector lattices $(X,\acute{\tau})$ and $(E,\tau)$. If $(T_\alpha)$ order convergent uniformly on zero neighborhood to the linear operator $T$ then $T$ is also $ob$-bounded. 
\end{prop} \begin{proof} Suppose $B$ is an order bounded set. So, $B$ is bounded in $X$; see \cite[Thm.2.19(i)]{AB}. Then, by assumption, for any $e\in E_+$ there exists an index $\alpha_0$ with $(T-T_\alpha)(B)\subseteq [-e,e]$ for each $\alpha\geq \alpha_0$. Thus, we have $$ T(B)\subseteq T_{\alpha_0}(B)+[-e,e]. $$ Fix a $\tau$-neighborhood $V$ of zero, and consider another $\tau$-neighborhood $W$ of zero with $W+W\subseteq V$. Since $T_{\alpha_0}$ is $ob$-bounded, there exists a positive scalar $\gamma_1$ such that $T_{\alpha_0}(B)\subseteq \gamma_1 W$. Also, by using \cite[Thm.2.19(i)]{AB}, there exists another positive scalar $\gamma_2$ such that $[-e,e]\subseteq \gamma_2 W$. Take $\gamma=\max\{\gamma_1,\gamma_2\}$; then, by solidness of $W$, we have $\gamma_1 W\subseteq\gamma W$ and $\gamma_2 W\subseteq\gamma W$. Then we get $$ T(B)\subseteq T_{\alpha_0}(B)+[-e,e]\subseteq\gamma_1 W+\gamma_2 W\subseteq\gamma W+\gamma W\subseteq \gamma V. $$ Therefore we get the desired result. \end{proof} \begin{thm} Let $(T_\alpha)$ be a net in $B_{ob}(X,E)$ which is uniformly convergent on order bounded sets to a linear operator $T$. If $(E,\tau)$ is a topologically complete vector lattice and every order bounded set in $X$ is absorbing, then $B_{ob}(X,E)$ is complete with respect to the topology of uniform convergence on order bounded sets. \end{thm} \begin{proof} Suppose $(E,\tau)$ is topologically complete and $(T_\alpha)$ is a Cauchy net in $B_{ob}(X,E)$. Consider an order bounded set $B$ in $X$. Then, for each zero neighborhood $V$, there exists an index $\alpha_0$ such that $$ (T_\alpha-T_\beta)(B)\subseteq V $$ for each $\alpha\geq\alpha_0$ and for each $\beta\geq\alpha_0$. For any $x\in X$, there is a positive real $\lambda$ such that $x\in \lambda B$ since $B$ is absorbing. Thus, $(T_\alpha-T_\beta)(x)\in \lambda V$ for each $\alpha,\beta \geq\alpha_0$, so we conclude that $(T_\alpha(x))$ is a Cauchy net in $E$. Put $T(x)=\lim T_\alpha(x)$; then $(T_\alpha)$ converges to $T$ uniformly on order bounded sets. Therefore, by Proposition \ref{t is also ob-bounded}, we get the desired result. \end{proof} \begin{thm} Let $(T_\alpha)$ be a net of $ob$-bounded operators between locally solid vector lattices $(X,\acute{\tau})$ and $(E,\tau)$ which is order convergent to the linear operator $T$ uniformly on zero neighborhoods. If $(E,\tau)$ is a topologically complete vector lattice, then $B_{ob}(X,E)$ is complete with respect to the topology of uniform order convergence on zero neighborhoods. \end{thm} \begin{proof} Suppose $(E,\tau)$ is topologically complete and $(T_\alpha)$ is a Cauchy net in $B_{ob}(X,E)$. Consider a zero neighborhood $B$ in $X$. Thus, for each $a\in E_+$, there is an $\alpha_0$ with $$ (T_\alpha-T_\beta)(B)\subseteq [-a,a] $$ for each $\alpha, \beta\geq \alpha_0$. For any $x\in X$, there is a positive real $\lambda$ such that $x\in \lambda B$ since $B$ is absorbing. Thus, $(T_\alpha-T_\beta)(x)\in \lambda[-a,a]$ for each $\alpha,\beta \geq\alpha_0$, so we conclude that $(T_\alpha(x))$ is a Cauchy net in $E$. Put $T(x)=\lim T_\alpha(x)$. Therefore, by Proposition \ref{t is o}, we get the desired result. \end{proof} \end{document}
\begin{document} \date{} \title{On the Longest Paths and the Diameter \ in Random Apollonian Networks} \begin{abstract} We consider the following iterative construction of a random planar triangulation. Start with a triangle embedded in the plane. In each step, choose a bounded face uniformly at random, add a vertex inside that face and join it to the vertices of the face. After $n-3$ steps, we obtain a random triangulated plane graph with $n$ vertices, which is called a {Random Apollonian Network (RAN)}. We show that asymptotically almost surely (a.a.s.) every path in a RAN has length $o(n)$, refuting a conjecture of Frieze and Tsourakakis. We also show that a RAN always has a path of length $(2n-5)^{\log 2 / \log 3}$, and that the expected length of its longest path is $\Omega\left(n^{0.88}\right)$. Finally, we prove that a.a.s.\ the diameter of a RAN is asymptotic to $c \log n$, where $c\approx 1.668$ is the solution of an explicit equation. \end{abstract} \section{Introduction} Due to the increase of interest in social networks, the Web graph, biological networks etc., in recent years a large amount of research has focused on modelling real world networks (see, e.g., Bonato~\cite {web_survey} or Chung and Lu~\cite{complexgraphs}). Despite the outstanding amount of work on models generating graphs with power law degree sequences, a considerably smaller amount of work has focused on generative models for planar graphs. In this paper we study a popular random graph model for generating planar graphs with power law properties, which is defined as follows. Start with a triangle embedded in the plane. In each step, choose a bounded face uniformly at random, add a vertex inside that face and join it to the vertices on the face. We call this operation \emph{subdividing} the face. In this paper, we use the term ``face'' to refer to a ``bounded face,'' unless specified otherwise. After $n-3$ steps, we have a (random) triangulated plane graph with $n$ vertices and $2n-5$ faces. This is called a \emph{Random Apollonian Network (RAN)} and we study its asymptotic properties, as its number of vertices goes to infinity. The number of edges equals $3n-6$, and hence a RAN is a maximal plane graph. The term ``{apollonian network}'' refers to a deterministic version of this process, formed by subdividing all triangles the same number of times, which was first studied in~\cite{ANs_1,ANs_2}. Andrade~et~al.~\cite{ANs_1} studied power laws in the degree sequences of these networks. Random apollonian networks were defined in Zhou~et~al.~\cite{define_RANs} (see Zhang et al.~\cite{high_RANs} for a generalization to higher dimensions), where it was proved that the diameter of a RAN is asymptotically bounded above by a constant times the logarithm of the number of vertices. It was shown in~\cite{define_RANs,RANs_powerlaw} that RANs exhibit a power law degree distribution. The average distance between two vertices in a typical RAN was shown to be logarithmic by Albenque and Marckert~\cite{RANs_average_distance}. The degree distribution, $k$ largest degrees and $k$ largest eigenvalues (for fixed $k$) and the diameter were studied in Frieze and Tsourakakis~\cite{first}. We continue this line of research by studying the asymptotic properties of the longest (simple) paths in RANs and giving sharp estimates for the diameter of a typical RAN. Before stating our main results, we need a few definitions. In this paper $n$ (respectively, $m$) always denotes the number of vertices (respectively, faces) of the RAN. 
All logarithms are in the natural base. We say an event $A$ happens \emph{asymptotically almost surely (a.a.s.)} if $\p{A}$ approaches 1 as $n$ goes to infinity. For two functions $f(n)$ and $g(n)$ we write $f \sim g$ if $\lim_{n\rightarrow \infty} { \frac{f(n)}{g(n)} } = 1 \: .$ For a random variable $X = X(n)$ and a function $f(n)$, we say $X$ is \emph{a.a.s.\ asymptotic to} $f(n)$ (and write \emph{a.a.s.\ $X\sim f(n)$}) if for every fixed $\varepsilon>0$, $$\lim_{n\rightarrow \infty} \p{ f(n) (1- \varepsilon) \leq {X} \leq f(n) (1+ \varepsilon)} = 1 \:,$$ and we say \emph{a.a.s.\ $X=o\big(f(n)\big)$} if for every fixed $\varepsilon>0$, $\lim_{n\rightarrow \infty} \p{ {X} \leq \varepsilon f(n) } = 1 \: .$ The authors of~\cite{first} conjecture in their concluding remarks that a.a.s.\ a RAN has a path of length $\Omega(n)$. We refute this conjecture by showing the following theorem. Let $\mathcal{L}_m$ be a random variable denoting the number of vertices in a longest path in a RAN with $m$ faces. \begin{theorem} \label{thm:longest_upper} A.a.s.\ we have $\mathcal{L}_m = o(m)$. \end{theorem} Recall that a RAN on $n$ vertices has $2n-5$ faces, so Theorem~\ref{thm:longest_upper} implies that a.a.s.\ a RAN does not have a path of length $\Omega(n)$. We also prove the following lower bounds for the length of a longest path deterministically, and its expected value in a RAN. \begin{theorem} \label{thm:longest_lower} For every positive integer $m$, the following statements are true. \begin{itemize} \item[(a)] $ \mathcal{L}_m \geq m^{\log 2 / \log 3} + 2 \:.$ \item[(b)] $\e{\mathcal{L}_m} = \Omega\left ( m^{0.88} \right) \:.$ \end{itemize} \end{theorem} The proofs of Theorems~\ref{thm:longest_upper}~and~\ref{thm:longest_lower} are built on two novel graph theoretic observations, valid for all subgraphs of apollonian networks. We also study the diameter of RANs. In~\cite{first} it was shown that the diameter of a RAN is a.a.s.\ at most $\eta_2 \log n$, where $\eta_2 \approx 7.081$ is the unique solution greater than 1 of $\exp\left ( {1}/{x} \right) = {3e}/{x}$. (Our statement here corrects a minor error in~\cite{first}, propagated from Broutin and Devroye~\cite{treeheight}, which stated that $\eta_2$ is the unique solution less than 1.) In~\cite{RANs_average_distance} it was shown that a.a.s.\ the distance between two randomly chosen vertices of a RAN (which naturally gives a lower bound on the diameter) is asymptotic to $\eta_1 \log n$, where $\eta_1 = 6/11 \approx 0.545$. In this paper, we provide the asymptotic value for the diameter of a typical RAN. \begin{theorem} \label{thm:diameter} A.a.s.\ the diameter of a RAN on $n$ vertices is asymptotic to $c \log n$, with $c=(1-\hat x^{-1})/\log h(\hat x)\approx 1.668$, where $$ h(x)=\frac{12x^3}{1-2x}- \frac{6x^3}{1-x} \:, $$ and $\hat x\approx 0.163$ is the unique solution in the interval $(0.1,0.2)$ to $$ x(x-1)h'(x)=h(x)\log h(x)\:. $$ \end{theorem} The proof of Theorem~\ref{thm:diameter} consists of a nontrivial reduction of the problem of estimating the diameter to the problem of estimating the height of a certain skewed random tree, which can be done by applying a result of~\cite{treeheight}. We start with some preliminaries in Section~\ref{sec:preliminaries}, and prove Theorems~\ref{thm:longest_upper},~\ref{thm:longest_lower}, and~\ref{thm:diameter} in Sections~\ref{sec:longest_upper},~\ref{sec:longest_lower}, and~\ref{sec:diameter}, respectively.
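The constants in Theorem~\ref{thm:diameter} are easy to evaluate numerically. The following short Python snippet, which is ours and is offered only as a sanity check (it is not used anywhere in the proofs), solves the defining equation of $\hat x$ by bisection and then evaluates $c$; the function and variable names are our own.
\begin{verbatim}
import math

def h(x):
    return 12 * x**3 / (1 - 2 * x) - 6 * x**3 / (1 - x)

def h_prime(x):
    # h(x) = 6x^3 / ((1-2x)(1-x)), hence h'(x)/h(x) = 3/x + 2/(1-2x) + 1/(1-x)
    return h(x) * (3 / x + 2 / (1 - 2 * x) + 1 / (1 - x))

def W(x):
    # the defining equation of x_hat:  x(x-1)h'(x) - h(x) log h(x) = 0
    return x * (x - 1) * h_prime(x) - h(x) * math.log(h(x))

lo, hi = 0.1, 0.2        # W(0.1) > 0 > W(0.2), so the root lies in between
for _ in range(200):     # plain bisection; no external libraries needed
    mid = (lo + hi) / 2
    if W(mid) > 0:
        lo = mid
    else:
        hi = mid
x_hat = (lo + hi) / 2
c = (1 - 1 / x_hat) / math.log(h(x_hat))
print(x_hat, c)          # approximately 0.1629562 and 1.668
\end{verbatim}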
\section{Preliminaries} \label{sec:preliminaries} The following result is due to Eggenberger and P\'{o}lya~\cite{polya} (see, e.g., Mahmoud~\cite[Theorem~5.1.2]{urns}). \begin{theorem} \label{thm:beta} Start with $w$ white balls and $b$ black balls in an urn. In each step, pick a ball uniformly at random from the urn, look at its colour, and return it to the urn; also add $s$ balls of the same colour to the urn. Let $w_n$ and $t_n$ be the number of white balls and the total number of balls in the urn after $n$ draws. Then, for any $\alpha\in[0,1]$ we have $$\lim_{n\rightarrow \infty} \p{\frac{w_n}{t_n} < \alpha} = \frac{\Gamma((w+b)/s)}{\Gamma(w/s)\Gamma(b/s)} \int_{0}^{\alpha} x ^{\frac{w}{s} - 1} (1-x)^{\frac{b}{s}-1} \: \mathrm{d}x \:.$$ \end{theorem} Note that the right hand side equals $\p{\operatorname{Beta}(w/s,b/s) < \alpha}$, where $\operatorname{Beta}(p,q)$ denotes a beta random variable with parameters $p$ and $q$. The urn described in Theorem~\ref{thm:beta} is called the \emph{Eggenberger-P\'{o}lya} urn. Let $\triangle$ be a triangle. The \emph{standard 1-subdivision} of $\triangle$ is the set of three triangles obtained from subdividing $\triangle$ once. For $k>1$, the \emph{standard $k$-subdivision} of $\triangle$ is the set of triangles obtained from subdividing each triangle in the standard $(k-1)$-subdivision of $\triangle$ exactly once. In Figure~\ref{fig:path}, the standard 2-subdivision of a triangle is illustrated. Consider a triangle $\triangle$ containing more than one face in a RAN, and let $\triangle_1,\triangle_2,\triangle_3$ be the three triangles in its standard 1-subdivision. We can analyze the number of faces inside $\triangle_1$ by modelling the process of building the RAN as an Eggenberger-P\'{o}lya urn: after the first subdivision of $\triangle$, each of $\triangle_1$, $\triangle_2$, and $\triangle_3$ contains exactly one face. We start with one white ball corresponding to the only face in $\triangle_1$, and two black balls corresponding to the two faces in $\triangle_2$ and $\triangle_3$. In each subsequent step, we choose a face uniformly at random, and subdivide it. If the face is in $\triangle_1$, then the number of faces in $\triangle_1$ increases by 2, and otherwise the number of faces not in $\triangle_1$ increases by 2. Thus after $k$ subdivisions of $\triangle$, the number of faces in $\triangle_1$ has the same distribution as the number of white balls in an Eggenberger-P\'{o}lya urn with $w=1$, $b=2$, and $s=2$, after $k-1$ draws. This observation leads to the following corollary. \begin{corollary} \label{cor:fairness} Let $\triangle$ be a triangle containing $m$ faces in a RAN, and let $Z_1,Z_2,\dots,Z_9$ be the number of faces inside the 9 triangles in the standard $2$-subdivision of $\triangle$. Given $\varepsilon>0$, there exists $m_0=m_0(\varepsilon)$ such that for $m > m_0$, $$ \p{\min\{Z_1,\ldots,Z_9\} / m <\varepsilon } < 13 \sqrt[4]{\varepsilon} \:.$$ \end{corollary} \begin{proof} Let $\overline{\triangle}$ be a triangle containing $\overline{m}$ faces in a RAN, and let $W_1,W_2,W_3$ be the number of faces inside the three triangles in the standard 1-subdivision of $\overline{\triangle}$. 
Say that $\overline{\triangle}$ is \emph{balanced} if $$\min \{ W_1, W_2, W_3 \} / \overline{m} \geq \sqrt{\varepsilon} \:.$$ By Theorem~\ref{thm:beta}, for a given $1\leq i \leq 3$ we have $$\lim_{\overline{m}\rightarrow \infty} \p{\frac{W_i}{\overline{m}} < \sqrt{\varepsilon}} = \int_{0}^{\sqrt{\varepsilon}} \frac{\Gamma(3/2)}{\Gamma(1)\Gamma(1/2)}\:x^{{-1}/{2}}\: \mathrm{d}x = \sqrt{\sqrt{\varepsilon}}\:.$$ In particular, there exists $\overline{m}_0$ such that $$\p{\frac{W_i}{\overline{m}} < \sqrt{\varepsilon}} < \sqrt[4]{1.1 \varepsilon}$$ for $\overline{m} > \overline{m}_0$. Now, take $m_0 = \overline{m}_0 / \sqrt{\varepsilon}$, and let $\triangle$ be a triangle containing $m>m_0$ faces in a RAN. The probability that $\triangle$ is balanced is at least $ 1 - 3 \sqrt[4]{1.1 \varepsilon}$ by the union bound. If $\triangle$ is balanced, then each of the three triangles in the standard 1-subdivision of $\triangle$ contains more than $m_0 \sqrt{\varepsilon} = \overline{m}_0$ faces, so the probability that a certain one of them is not balanced is at most $3\sqrt[4]{1.1\varepsilon}$. Note that if $\triangle$ and these three triangles are balanced, then $\min\{Z_1,\cdots,Z_9\}/m \geq \varepsilon$. Hence by the union bound, $$\p{\min\{Z_1,\cdots,Z_9\} / m <\varepsilon } < 12 \sqrt[4]{1.1 \varepsilon} < 13 \sqrt[4]{\varepsilon}\:.\qedhere$$ \end{proof} We include some definitions here. Let $G$ be a RAN. We denote the vertices incident with the unbounded face by $\nu_1,\nu_2,\nu_3$. All trees we consider are rooted. We define a tree $T$, called the \emph{$\triangle$-tree} of $G$, as follows. There is a one to one correspondence between the triangles in $G$ and the nodes of $T$. For every triangle $\triangle$ in $G$, we denote its corresponding node in $T$ by $\node{\triangle}$. To build $T$, start with a single root node, which corresponds to the triangle $\nu_1 \nu_2 \nu_3$ of $G$. Wherever a triangle $\triangle$ is subdivided into triangles $\triangle_1$, $\triangle_2$, and $\triangle_3$, generate three children $\node{\triangle_1}$, $\node{\triangle_2}$, and $\node{\triangle_3}$ for $\node{\triangle}$, and extend the correspondence in the natural manner. Note that this is a random ternary tree, with each node having either zero or three children, and has $3n-8$ nodes and $2n-5$ leaves. We use the term ``nodes'' for the vertices of $T$, so that ``vertices'' refer to the vertices of $G$. Note that the leaves of $T$ correspond to the faces of $G$. The \emph{depth} of a node $\node{\triangle}$ is its distance to the root. \section{Upper bound for a longest path} \label{sec:longest_upper} In this section we prove Theorem~\ref{thm:longest_upper}, stating that a.a.s.\ all paths in a RAN have length $o(n)$. The set of \emph{grandchildren} of a node is the set of children of its children, so every node in a ternary tree has between zero and nine grandchildren. For a triangle $\triangle$ in $G$, $I(\triangle)$ denotes the set of vertices of $G$ that are \emph{strictly inside} $\triangle$. \begin{lemma} Let $G$ be a RAN and let $T$ be its $\triangle$-tree. Let $\node{\triangle}$ be a node of $T$ with nine grandchildren $\node{\triangle_1},\node{\triangle_2},\dots,\node{\triangle_9}$. Then the vertex set of a path in $G$ does not intersect all of the $I(\triangle_i)$'s. \label{lem:9children} \end{lemma} \begin{proof} There are exactly $7$ vertices in the boundaries of the {triangles} corresponding to the grandchildren of $\node{\triangle}$. 
Let $v_1,\dotsc, v_7$ denote such vertices (see Figure~\ref{fig:path}). Let $P=u_1 u_2\dots u_p$ be a path in $G$. Clearly, when $P$ enters or leaves one of $\triangle_1,{\triangle_2},\dots,{\triangle_9}$, it must go through a $v_i$. So $P$ does not contain vertices from more than one triangle between two consecutive occurrences of a $v_i$. Since $P$ goes through each $v_i$ at most once, the vertices $v_i$ split $P$ up into at most eight sub-paths. Hence $P$ contains vertices from at most eight of the triangles $\triangle_i$. \begin{figure} \caption{A triangle in $G$ corresponding to a node of $T$ with $9$ grandchildren. Vertices $v_1,\dotsc, v_7$ are the vertices in the boundaries of the triangles corresponding to these grandchildren.} \label{fig:path} \end{figure} \end{proof} We first sketch a proof of Theorem~\ref{thm:longest_upper}. Let $G$ be a RAN on $n$ vertices, and let $T$ be its $\triangle$-tree. The 2-subdivision of the triangle $\nu_1 \nu_2 \nu_3$ consists of nine triangles, and every path misses the vertices in at least one of them by Lemma~\ref{lem:9children}. We can now apply the same argument inductively for the other eight triangles, and repeat. Note that if the distribution of vertices in the nine triangles of every 2-subdivision were always moderately balanced, this argument would immediately prove the theorem (by extending it to $O(\log n)$ depth). Unfortunately, the distribution is biased towards becoming unbalanced: the greater the number of vertices falling in a certain triangle, the higher the probability that the next vertex falls in the same triangle. However, Corollary~\ref{cor:fairness} gives an upper bound for the probability that this distribution is very unbalanced. The idea is to use this Corollary iteratively and to use independence of events cleverly to bound the probability of certain ``bad'' events. It is easy to see that $T$ is a random ternary tree on $3n-8$ nodes in the sense of Drmota~\cite{random_trees}. The following theorem is due to Chauvin and Drmota~\cite[Theorem~2.3]{saturation} (we use the wording of~\cite[Theorem~6.47]{random_trees}). \begin{theorem} \label{thm:saturation} Let $\overline{H}_n$ denote the largest number $L$ such that a random $n$-node ternary tree has precisely $3^L$ nodes at depth $L$. Let $\psi \approx 0.152$ be the unique solution in $(0,3)$ to $$ 2 \psi \log \left (\frac{3e}{2\psi}\right) = 1 \:.$$ Then we have $$\e{\overline{H}_n} \sim \psi \log n \:,$$ and there exists a constant $\kappa>0$ such that for every $\varepsilon>0$, $$\p{|\overline{H}_n - \e{\overline{H}_n}| > \varepsilon} = O(\exp(-\kappa\varepsilon)) \:.$$ \end{theorem} Let $D = 0.07 \log n$. Then, the following is obtained immediately. \begin{corollary} \label{cor:standard} A.a.s.\ there are $3^{2D}$ nodes at depth $2D$ of $T$. \end{corollary} Let $\varepsilon>0$ be a fixed number such that $3 (13\sqrt[4]{4\varepsilon})^{1/5} < 1$, and let $p_F = 1 - 13\sqrt[4]{4\varepsilon}$. Notice that $3 (1-p_F)^{1/5} < 1$. We say node $\node{\triangle}$ is \emph{fair} if at least one of the following holds: \begin{enumerate} \item[(i)] the number of faces inside $\triangle$ is less than $3^D$, or \item[(ii)] $\node{\triangle}$ has nine grandchildren $\node{\triangle_1},\node{\triangle_2},\dots,\node{\triangle_9}$, and $|I(\triangle_i)| \geq \varepsilon |I(\triangle)|$ for all $1\leq i \leq 9$. \end{enumerate} A triangle $\triangle$ in $G$ is fair if its corresponding node $\node{\triangle}$ is fair. 
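As a brief numerical aside (ours, not needed for the argument), Corollary~\ref{cor:standard} uses the fact that $2D=0.14\log n$ lies below the saturation level, which is asymptotic to $\psi\log n$ with $\psi\approx 0.152$; moreover, $\varepsilon=10^{-15}$ is one (far from optimal) fixed value satisfying $3(13\sqrt[4]{4\varepsilon})^{1/5}<1$. Both claims can be checked with a few lines of Python.
\begin{verbatim}
import math

# The saturation constant: root of 2*psi*log(3e/(2*psi)) = 1 near 0.152.
def f(psi):
    return 2 * psi * math.log(3 * math.e / (2 * psi)) - 1

lo, hi = 0.1, 0.2              # f(0.1) < 0 < f(0.2), and f is increasing here
for _ in range(100):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
psi = (lo + hi) / 2
print(psi, 2 * 0.07 < psi)     # about 0.152, and 0.14 < psi indeed holds

# One admissible (far from optimal) choice of the fixed epsilon.
eps = 1e-15
print(3 * (13 * (4 * eps) ** 0.25) ** 0.2)   # about 0.95, which is below 1
\end{verbatim}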
\begin{lemma} \label{lem:fairness} Let $\node{\triangle}$ be a node in $T$ with nine grandchildren, and let $U$ be a subset of the set of ancestors of $\node{\triangle}$, not including the parent of $\node{\triangle}$. The probability that $\node{\triangle}$ is fair, conditional on all nodes in $U$ being unfair, is at least $p_F$. \end{lemma} \begin{proof} Let $n$ be sufficiently large that $3^D > m_0(2\varepsilon)$, where $m_0(2\varepsilon)$ is defined as in Corollary~\ref{cor:fairness}. Let $\overline{M}$ denote the number of faces inside $\triangle$, and let $\overline{m} \geq 3^D$ be arbitrary. If $\overline{M} < 3^D$, then $\node{\triangle}$ is fair by definition, so it is enough to prove that $$\p{\node{\triangle}\mathrm{\ is\ fair} \cond \mathrm{nodes\ in\ }U\mathrm{\ are\ unfair}, \overline{M} = \overline{m}} \geq p_F \:.$$ Since $U$ does not contain the parent of $\node{\triangle}$, conditional on nodes in $U$ being unfair and $\overline{M}=\overline{m}$, the subgraph of $G$ induced by vertices on and inside $\triangle$ is distributed as a RAN with $\overline{m}$ faces. Let $\triangle_1,\dots,\triangle_9$ be the nine triangles in the standard 2-subdivision of $\triangle$, and let $Z_1,Z_2,\dots,Z_9$ be the number of faces inside them. By Corollary~\ref{cor:fairness} and since $\overline{m} > m_0(2\varepsilon)$, with probability at least $p_F$, for all $1\leq i\leq 9$ we have $$Z_i \geq 2 \varepsilon \overline{m} \:,$$ and so $$|I(\triangle_i)| = \frac{Z_i - 1}{2} \geq \frac{2\varepsilon \overline{m} - 1}{2} \geq \varepsilon\:\frac{\overline{m}-1}{2} = \varepsilon |I(\triangle)| \:, $$ which implies that $\node{\triangle}$ is fair. \end{proof} Let $k = (\log \log n) / 2$. Let $d_0 = 0$ and $d_i = 2^{i-1} k$ for $1\leq i \leq k$. Notice that $d_k < D$. \begin{lemma} \label{lem:lots_of_fair} A.a.s.\ the following is true. Let $v$ be an arbitrary node of $T$ at depth $d_i$ for some $1\leq i \leq k$, and let $u$ be the ancestor of $v$ at depth $d_{i-1}$. Then there is at least one fair node $f$ on the $(u,v)$-path in $T$, such that the depth of $f$ is between $d_{i-1}$ and $d_i - 2$, inclusive. \end{lemma} \begin{proof} Let us say that a node is \emph{bad} if the conclusion of the lemma is false for it. We prove that the probability that a bad node exists is $o(1)$. Let $v$ be a node at depth $d_i$ and $u$ be its ancestor at depth $d_{i-1}$. Let $x_0 = v, x_1, x_2, \dots, x_r = u$ be the $(v,u)$-path in $T$, where $r = d_i - d_{i-1}$. By Lemma~\ref{lem:fairness}, the probability that none of $x_{2\lfloor r/2 \rfloor},x_{2\lfloor r/2 \rfloor - 2}, \dots, x_4, x_2$ is fair is at most $$(1-p_F)^{\lfloor r/2 \rfloor} \leq (1-p_F)^{(d_i - d_{i-1} - 1)/2} \leq (1-p_F)^{d_{i}/5}\:.$$ There are at most $3^{d_i}$ nodes at depth $d_i$, so by the union bound, the probability that there is at least one bad node $v$ at depth $d_i$ is at most $$3^{d_i} (1-p_F)^{d_i/5} = \left [ 3 (1-p_F)^{1/5} \right ]^{d_i} \leq \left [ 3 (1-p_F)^{1/5} \right ]^{k} = o(1/k) \:,$$ by the definition of $d_i$ and as $3 (1-p_F)^{1/5} < 1$ and $k\to\infty$. Consequently, the probability that there exists a bad node whose depth lies in $\{d_1,d_2,\dots,d_k\}$ is $o(1)$. \end{proof} We are now ready to prove Theorem~\ref{thm:longest_upper}. \begin{proof}[Proof of Theorem~\ref{thm:longest_upper}.] Let $G$ be a RAN with $n$ vertices and $m$ faces, and let $T$ be the $\triangle$-tree of $G$.
The \emph{depth} of a vertex $v$ of $G$ is defined as $\max \{\operatorname{depth}(\triangle) : v \in I(\triangle) \}$, and we define the depth of $\nu_1, \nu_2, \nu_3$ to be $-1$. Say a vertex is \emph{deep} if its depth is greater than $D$, and is \emph{shallow} otherwise. Let $n_D$ denote the number of deep vertices. Note that the number of shallow vertices is at most $(3^{D+1}+5)/2$, which is $o(n)$ by the choice of $D$, so $n_D = n - o(n)$. For a node $\node{\triangle}$ of $T$, let $I_D(\triangle)$ be the set of deep vertices in $I(\triangle)$, and for a subset $A$ of nodes of $T$, let $$I_D(A) = \bigcup_{\node{\triangle}\in A} I_D(\triangle) \:.$$ \begin{claim} If $T$ is full down to depth $2D$ where $D\ge 2$, then any fair node $\node{\triangle}$ with depth at most $D$ has nine grandchildren $\node{\triangle_1},\node{\triangle_2},\dots,\node{\triangle_9}$ such that \begin{equation} \label{eq:farisplit} |I_D(\triangle_i)| \geq \varepsilon |I_D(\triangle)| / 2 \qquad i=1,2,\dots,9. \end{equation} \end{claim} \begin{claimproof} Assume that $T$ is full down to depth $2D$. We first show that for any triangle $\triangle$, \begin{equation} \label{eq:half} |I_D(\triangle)| \geq |I(\triangle)| / 2 \:. \end{equation} To prove (\ref{eq:half}), let $\triangle$ be a triangle at depth $r$. If $r>D$, then $I(\triangle)$ contains no shallow vertices, and (\ref{eq:half}) is obviously true. Otherwise, the number of shallow vertices in $I(\triangle)$ equals $1 + 3 + \dots + 3^{D-r} = (3^{D-r+1}-1)/2$, whereas the number of vertices in $I(\triangle)$ at depth $D+1$ equals $3^{D-r+1}$, where we have used the fact that $T$ is full down to depth at least $D+2$. Thus $I(\triangle)$ contains more deep vertices than shallow vertices, and (\ref{eq:half}) follows. Now, let $\node{\triangle}$ be a fair triangle having depth at most $D$. Since $T$ is full down to depth $2D$, {the number of faces inside $\triangle$ is at least $3^D$}. So, as $\node{\triangle}$ is fair, it has nine grandchildren $\node{\triangle_1},\node{\triangle_2},\dots,\node{\triangle_9}$ such that $|I(\triangle_i)| \geq \varepsilon |I(\triangle)|$ for all $1\leq i \leq 9$. Applying (\ref{eq:half}) gives \begin{equation*} |I_D(\triangle_i)| \geq |I(\triangle_i)|/ 2 \geq \varepsilon |I(\triangle)| / 2 \geq \varepsilon |I_D(\triangle)| / 2 \qquad i=1,2,\dots,9 \:, \end{equation*} as required. \end{claimproof} We {may} condition on two events that happen a.a.s.:\ the first one is the conclusion of Lemma~\ref{lem:lots_of_fair}, and the second one is that of Corollary~\ref{cor:standard}, namely that $T$ is full down to depth $2D$. To complete the proof of the theorem, for a given path $P$ in $G$, we will define a sequence $B_0, B_1, \dots, B_k$ of sets of nodes of $T$, such that for all $0 \leq i \leq k$ we have \begin{enumerate} \item [(i)] $ |I_D(B_i)| \geq n_D \left(1 - \left(1-\frac{\varepsilon}{2}\right)^i\right)$, and \item [(ii)] $V(P) \cap I_D(B_i) = \emptyset$. \end{enumerate} Before defining the $B_i$'s, let us show that this completes the proof. Notice that (i) gives $$|I_D(B_k)| \geq n_D - n_D (1-\varepsilon/2)^k \geq n_D - n_D \exp(-\varepsilon k /2)\:,$$ which is $n - o(n)$ since $n_D = n-o(n)$ and $\varepsilon k = \omega(1)$. Therefore, by (ii), $$|V(P)| \leq |V(G) \setminus I_D(B_k)| = o(n)\:.$$ {So, now} we define the sets $B_i$. Let $S_i$ denote the set of nodes of $T$ at depth $d_i$. Let $B_0 = \emptyset$ and we define the $B_i$'s inductively, in such a way that $B_i \subseteq S_i$. 
Fix $1 \leq i \leq k$, and assume that $B_{i-1}$ has already been defined. Let $C_i$ be the set of nodes at depth $d_i$ whose ancestor at depth $d_{i-1}$ is in $B_{i-1}$ (so, in particular, $C_1=\emptyset$). By the induction hypothesis, $V(P)$ does not intersect $I_D(B_{i-1}) = I_D(C_i)$, and $ |I_D(C_i)| = |I_D(B_{{i-1}})| \geq n_D \left(1 - (1-\varepsilon/2)^{i-1}\right)$. Since the conclusion of Lemma~\ref{lem:lots_of_fair} is true, there exists a set $F$ of fair nodes, with depths between $d_{i-1}$ and $d_i - 2$, such that every $v \in S_i \setminus C_i$ is a descendent of some node in $F$. Now, for every $x,y \in F$ such that $y$ is a descendent of $x$, remove $y$ from $F$. This results in a set $\{u_1,u_2,\dots,u_s\}$ of fair nodes, with depths between $d_{i-1}$ and $d_i - 2$, such that every $v \in S_i \setminus C_i$ is a descendent of a unique $u_j$. Recall that $d_k < D$ and so all the $u_j$'s have depths less than $D$. Let $w_1,\dots,w_9$ be the grandchildren of $u_1$. By Lemma~\ref{lem:9children}, $V(P)$ does not intersect all of the $I(w_i)$'s{;} say it does not intersect $I({w_1})$. Then mark all of the descendants of $w_1$, and perform a similar procedure for $u_2,\dots,u_s$. Let $M_i$ be the set of marked nodes in $S_i$. See Figure~\ref{fig:induction}. \begin{figure} \caption{Illustration for the inductive step in the proof of Theorem~\ref{thm:longest_upper}.} \label{fig:induction} \end{figure} Thus $V(P) \cap I_D(M_i) = \emptyset$. Moreover, since the $u_j$'s are fair and the $I_D(u_j)$'s are disjoint, it follows from the claim that $$|I_D(M_i)| \geq \sum_{j=1}^{s} \varepsilon |I_D(u_j)| / 2 = {\varepsilon} | I_D(S_i \setminus C_i) | / 2 = \varepsilon (n_D - |I_D(B_{i-1})|) / 2\:.$$ Now, let $B_i = C_i \cup M_i$. Then we have \begin{align*} |I_D(B_i)| &= |I_D(C_i)| + |I_D(M_i)|\\ & \geq \: |I_D(B_{i-1})| + \frac{\varepsilon}2 \Big(n_D - |I_D(B_{i-1})|\Big) = |I_D(B_{i-1})| (1-\frac{\varepsilon}{2}) + \frac{\varepsilon n_D}{2} \\ & \geq n_D \left(1 - \left(1-\frac{\varepsilon}{2}\right)^{i-1}\right) \left(1-\frac{\varepsilon}{2}\right) + \frac{\varepsilon n_D}{2} = n_D \left(1 - \left(1-\frac{\varepsilon}{2}\right)^i\right)\:, \end{align*} and $V(P)$ does not intersect $I_D(B_i)$. \end{proof} \begin{remark} Noting that $n - n_D < n^{1-\delta}$ for some fixed $\delta>0$ and being more careful in the calculations above shows that indeed we have a.a.s.\ $\mathcal{L}_m \leq n \left(\log n\right)^{-\Omega(1)}$. \end{remark} \section{Lower bounds for a longest path} \label{sec:longest_lower} In this section we prove Theorem~\ref{thm:longest_lower}. We first prove part (a), i.e., we give a deterministic lower bound for the length of a longest path in a RAN. Recall that $\mathcal{L}_m$ denotes the number of vertices of a longest path in a RAN with $m$ faces. Let $G$ be a RAN with $m$ faces, and let $v$ be the unique vertex that is adjacent to $\nu_1$, $\nu_2$, and $\nu_3$. For $1\leq i \leq 3$, let $\triangle_i$ be the triangle with vertex set $\{v,\nu_1,\nu_2,\nu_3\} \setminus \{\nu_i\}$. Define the random variable $\mathcal{L}'_m$ as the largest number $L$ such that for every permutation $\pi$ on $\{1,2,3\}$, there is a path in $G$ of $L$ edges from $\nu_{\pi(1)}$ to $\nu_{\pi(2)}$ not containing $\nu_{\pi(3)}$. Clearly we have $\mathcal{L}_m\ge \mathcal{L}'_m +2$. \begin{proof}[Proof of Theorem~\ref{thm:longest_lower}(a).] Let $\xi = \log 2 / \log 3$. We prove by induction on $m$ that $\mathcal{L}'_m \geq m^{\xi}$. This is obvious for $m=1$, so assume that $m>1$.
Let $m_i$ denote the number of faces in $\triangle_i$. Then $m_1 + m_2 + m_3 = m$. By symmetry, we may assume that $m_1 \geq m_2 \geq m_3$. For any given $1\leq i\leq 3$, it is easy to find a path avoiding $\nu_i$ that connects the other two $\nu_j$'s by attaching two appropriate paths in $\triangle_1$ and $\triangle_2$ at vertex $v$. (See Figures~\ref{fig:pathmerge}(a)--(c).) By the induction hypothesis, these paths can be chosen to have lengths at least ${m_1}^{\xi}$ and ${m_2}^{\xi}$, respectively. \begin{figure} \caption{Paths avoiding $\triangle_3$ and one of the $\nu_i$'s.} \label{fig:pathmerge} \end{figure} Hence for every permutation $\pi$ of $\{1,2,3\}$, there is a path from $\nu_{\pi(1)}$ to $\nu_{\pi(2)}$ avoiding $\nu_{\pi(3)}$ with length at least \begin{equation} \label{eq:m1+m2} {m_1}^{\xi} + {m_2}^{\xi}\:. \end{equation} It is easily verified that since $m_1 \geq m_2 \geq m_3$ and $m_1 + m_2 + m_3 = m$, the minimum of (\ref{eq:m1+m2}) happens when $m_1 = m_2 = m / 3$, thus $$\mathcal{L}'_m \geq {m_1}^{\xi} + {m_2}^{\xi} \geq 2 \left(\frac{m}{3}\right)^{\xi} = m^{\xi}\:,$$ and the proof is complete. \end{proof} Next, we use the same idea to give a larger lower bound for $\e{\mathcal{L}_m}$. Let the random variable $X_i$ denote the number of faces in $\triangle_i$. Then the $X_i$'s have the same distribution and are not independent. It follows from Theorem~\ref{thm:beta} that as $m$ grows, the distribution of $\frac{X_i}{m}$ converges pointwise to that of $\operatorname{Beta}(1/2,1)$. Moreover, for any fixed $\varepsilon \in [0,1)$, if we condition on $X_1 = \varepsilon m$, then the subdividing process inside $\triangle_2$ and $\triangle_3$ can be modelled as an Eggenberger-P\'{o}lya urn again, and it follows from Theorem~\ref{thm:beta} that the distribution of $\frac{X_2}{(1-\varepsilon)m}$ conditional on $X_1 = \varepsilon m$ converges pointwise to that of {$\operatorname{Beta}(1/2,1/2)$}. Namely, for any fixed $\varepsilon \in [0,1)$ and $\delta \in [0,1]$, \begin{equation} \label{eq:secondtriangledistribution} \lim_{m\rightarrow \infty} \p{\frac{X_2}{(1-\varepsilon)m} \leq \delta {\left\vert\vphantom{\frac{1}{1}}\right.} X_1 = \varepsilon m} = \int_{0}^{\delta} \frac{\Gamma(1)}{\Gamma({1}/{2})^2} \: x^{-1/2} (1-x)^{-1/2}\: \mathrm{d}x \:. \end{equation} We are now ready to prove part (b) of Theorem~\ref{thm:longest_lower}. \begin{proof}[Proof of Theorem~\ref{thm:longest_lower}(b).] Let $\zeta = 0.88$. We prove that there exists a constant $\kappa>0$ such that $\e{\mathcal{L}'_m} \ge \kappa m^{\zeta}$ holds for all $m\ge 1$. We proceed by induction on $m$, with the induction base being $m=m_0$, where $m_0$ is a sufficiently large constant, to be determined later. By choosing $\kappa$ sufficiently small, we may assume $\e{\mathcal{L}'_m} \ge \kappa m^{\zeta}$ for all $m\leq m_0$. For $1\leq i \leq 3$, let $X_i$ denote the number of faces in $\triangle_i$. Define a permutation $\sigma$ on $\{1,2,3\}$ such that $X_{\sigma(1)}\ge X_{\sigma(2)}\ge X_{\sigma(3)}$, breaking ties randomly. Then $\sigma$ is a random permutation determined by the $X_i$ and the random choice in the tie-breaking. By symmetry, for every fixed $\sigma'\in S_3$, $\p{\sigma=\sigma'}= 1/6$. From the proof of part (a), we know $$ \mathcal{L}'_m\ge \mathcal{L}'_{X_{\sigma(1)}}+\mathcal{L}'_{X_{\sigma(2)}}. 
$$ Taking the expectation on both sides, we have \begin{equation} \e{\mathcal{L}'_m}\ge \e{\mathcal{L}'_{X_{\sigma(1)}}+\mathcal{L}'_{X_{\sigma(2)}}}\ge 6\e{(\mathcal{L}'_{X_{1}}+\mathcal{L}'_{X_{2}})\mathds{1}_{X_1> X_2> X_3}}\:,\label{eq:uniformProb} \end{equation} where the second inequality holds by symmetry and as $\p{\sigma=(1,2,3)}= 1/6$. By the induction hypothesis, for every $x_1,x_2<m$, $$ \e{\mathcal{L}'_{X_{1}}\mid X_1=x_1}\ge \kappa x_1^{\zeta}, \mathrm{\ and\ }\e{\mathcal{L}'_{X_{2}}\mid X_2=x_2}\ge \kappa x_2^{\zeta}. $$ Hence, \begin{equation} \e{(\mathcal{L}'_{X_{1}}+\mathcal{L}'_{X_{2}})\mathds{1}_{X_1> X_2> X_3}}\ge \kappa\e{(X_1^{\zeta}+X_2^{\zeta})\mathds{1}_{X_1> X_2> X_3}}.\label{eq:induct} \end{equation} Let $f_1(x)$ and $f_2(x)$ denote the probability density functions of $\operatorname{Beta}(1/2,1)$ and $\operatorname{Beta}(1/2,1/2)$, respectively. Namely, $$ f_1(x)=\frac{\Gamma(3/2)}{\Gamma(1)\Gamma(1/2)}\:x^{{-1}/{2}}\mathrm{\ and\ } f_2(x)=\frac{\Gamma(1)}{\Gamma({1}/{2})^2} \: x^{-1/2} (1-x)^{-1/2}\:. $$ Then it follows from Theorem~\ref{thm:beta} that for any fixed $0\leq t < 1$, $$ \lim_{m\rightarrow \infty} \p{\frac{X_1}{m} \leq t} = \int_{0}^{t} f_1(x)\: \mathrm{d}x \:, $$ and for any fixed $0\leq s \leq 1$, by (\ref{eq:secondtriangledistribution}), $$ \lim_{m\rightarrow \infty} \p{\frac{X_2}{m} \leq (1-t)s \cond X_1 = t m}=\lim_{m\rightarrow \infty} \p{\frac{X_2}{(1-t)m} \leq s \cond \frac{X_1}{m} = t } = \int_{0}^{s} f_2(x)\: \mathrm{d}x \:. $$ Hence (see Billingsley~\cite[Theorem 29.1 (i)]{ref.Billingsley}) \begin{align*} \e{\left ( \left(\frac{X_1}{m}\right)^{\zeta} + \left(\frac{X_2}{m}\right) ^{\zeta} \right) \mathds{1}_{X_1 >X_2 > X_3}}& \rightarrow \int \limits_{t=1/3}^{1} \! \! \int \limits_{s=1/2}^{\min\left\{1, \frac{t}{1-t}\right\}}\! \left[ t^{\zeta} + (s(1-t))^{\zeta}\right] f_1(t)f_2(s)\,\mathrm{d}s\,\mathrm{d}t\,, \end{align*} as $m\to\infty$. By the choice of $\zeta$, we have $$\int_{t=1/3}^{1} \int_{s=1/2}^{\min\{1, \frac{t}{1-t}\}} \left[t^{\zeta} + (s(1-t))^{\zeta}\right] f_1(t)f_2(s) \:\mathrm{d}s\:\mathrm{d}t > 1/6 \:.$$ Then, by~(\ref{eq:uniformProb}) and~(\ref{eq:induct}), $$ \e{\mathcal{L}'_m}\ge 6\kappa\e{(X_1^{\zeta}+X_2^{\zeta})\mathds{1}_{X_1> X_2> X_3}}> \kappa m^{\zeta}, $$ if we choose $m_0$ sufficiently large. \end{proof} \section{Diameter} \label{sec:diameter} As mentioned in the introduction, prior to this work it had been known that a typical RAN has logarithmic diameter, and asymptotic lower and upper bounds for the diameter had been proved, but the asymptotic value had not been determined. In this section we prove Theorem~\ref{thm:diameter}, which states that a.a.s.\ the diameter of a RAN is asymptotic to $c \log n$, where $c \approx 1.668$ is the solution of an explicit equation. Let $G$ be a RAN with $n$ vertices, and recall that $\nu_1$, $\nu_2$, and $\nu_3$ denote the vertices incident with the unbounded face. For a vertex $v$ of $G$, let $\tau(v)$ be the minimum graph distance of $v$ to the boundary, i.e., $$\tau(v) = \min \{\operatorname{dist}(v,\nu_1),\operatorname{dist}(v,\nu_2),\operatorname{dist}(v,\nu_3)\} \:.$$ The \emph{radius} of $G$ is defined as the maximum of $\tau(v)$ over all vertices $v$. 
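Before turning to the proof, we note that these quantities are easy to experiment with. The following Python sketch, which is ours and purely illustrative, builds a RAN by the face-subdivision rule, computes the radius $\max_v \tau(v)$ by a multi-source breadth-first search from $\{\nu_1,\nu_2,\nu_3\}$, and computes the diameter by a breadth-first search from every vertex. At moderate $n$ the observed values are of the same order as $c\log n/2$ and $c\log n$, respectively, although the asymptotics of Theorem~\ref{thm:diameter} set in slowly.
\begin{verbatim}
import math
import random
from collections import deque

def random_apollonian_network(n, seed=1):
    """Build a RAN on n >= 3 vertices by repeatedly choosing a uniformly
    random bounded face and subdividing it with a new vertex."""
    rng = random.Random(seed)
    adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}   # 0, 1, 2 play the role of nu_1, nu_2, nu_3
    faces = [(0, 1, 2)]
    for v in range(3, n):
        a, b, c = faces.pop(rng.randrange(len(faces)))
        adj[v] = {a, b, c}
        for u in (a, b, c):
            adj[u].add(v)
        faces.extend([(a, b, v), (a, c, v), (b, c, v)])
    return adj

def bfs_distances(adj, sources):
    """Distances to the nearest vertex of `sources` (multi-source BFS)."""
    dist = {s: 0 for s in sources}
    queue = deque(sources)
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

n = 1000
adj = random_apollonian_network(n)
radius = max(bfs_distances(adj, [0, 1, 2]).values())              # max_v tau(v)
diameter = max(max(bfs_distances(adj, [v]).values()) for v in adj)
c = 1.668
print(radius, c * math.log(n) / 2)    # comparable orders of magnitude
print(diameter, c * math.log(n))      # likewise
\end{verbatim}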
\begin{lemma} \label{lem:radius} Let $$ h(x)=\frac{12x^3}{1-2x} - \frac{6x^3}{1-x} \:,$$ and let $\hat x$ be the unique solution in $(0.1,0.2)$ to $$x(x-1)h'(x)=h(x)\log h(x) \:.$$ Finally, let $$c = \frac { 1-\hat x^{-1}} {\log h(\hat x)} \approx 1.668 \:.$$ Then the radius of $G$ is a.a.s.\ asymptotic to $c \log n / 2$. \end{lemma} We first show that this lemma implies Theorem~\ref{thm:diameter}. \begin{proof} [Proof of Theorem~\ref{thm:diameter}] Let $\triangle_1$, $\triangle_2$, and $\triangle_3$ be the three triangles in the standard 1-subdivision of the triangle $\nu_1\nu_2\nu_3$, and let $n_i$ be the number of vertices on and inside $\triangle_i$. Let $\operatorname{diam}(G)$ denote the diameter of $G$. Fix arbitrarily small $\varepsilon, \delta > 0$. We show that with probability at least $1-2\delta$ we have $$(1-\varepsilon) c \log n \leq \operatorname{diam}(G) \leq (1+\varepsilon) c\log n \:.$$ {Here and in the following, we assume $n$ is sufficiently large.} Let $M$ be a positive integer sufficiently large that, for a given $1\leq i\leq 3$, $$ \p{\frac{n_i}{n} < \frac{1}{M}} < \delta / 6 \:.$$ Such an $M$ exists by Theorem~\ref{thm:beta} and the discussion after it. Let $A$ denote the event $$ \min\left\{ \frac{n_i}{n} : 1\leq i \leq 3 \right\} \geq \frac{1}{M} \:.$$ By the union bound, $\p{A} \geq 1- \delta / 2$. We condition on values $(n_1,n_2,n_3)$ such that $A$ happens. Note that we have $\log n_i = \log n - O(1)$ for each $i$. For a triangle $\triangle$, $V(\triangle)$ denotes the three vertices of $\triangle$. Note that for $1\leq i \leq 3$, the subgraph induced by vertices on and inside $\triangle_i$ is distributed as a RAN $G_i$ with $n_i$ vertices. Hence by Lemma~\ref{lem:radius} and the union bound, with probability at least $1-\delta /2$, the radius of each of $G_1$, $G_2$ and $G_3$ is at least $(1-\varepsilon) c \log n / 2$. Hence, with probability at least $1-\delta /2$ there exists $u_1 \in V(G_1)$ with distance at least $(1-\varepsilon) c \log n / 2$ to $V(\triangle_1)$, and also there exists $u_2 \in V(G_2)$ with distance at least $(1-\varepsilon) c \log n / 2$ to $V(\triangle_2)$. Since any $(u_1,u_2)$-path must contain a vertex from $V(\triangle_1)$ and $V(\triangle_2)$, with probability at least $1 - \delta/2$, there exists $u_1, u_2 \in V(G)$ with distance at least $2 (1-\varepsilon) c \log n / 2$, which implies $$\p{\operatorname{diam}(G) \geq c(1-\varepsilon) \log n} \geq \p{\operatorname{diam}(G) \geq c (1-\varepsilon) \log n \vert A} \p{A} > 1 - \delta \:.$$ For the upper bound, let $R$ be the radius of $G$. Notice that the distance between any vertex and $\nu_1$ is at most $R+1$, so $\operatorname{diam}(G)\le 2R+2$. By Lemma~\ref{lem:radius}, with probability at least $1 - \delta$ we have $R \le (1+\varepsilon/2)c \log n / 2$. If this event happens, then $\operatorname{diam}(G) \le (1+\varepsilon)c \log n$. \end{proof} The rest of this section is devoted to the proof of Lemma~\ref{lem:radius}. Let $T$ be the $\triangle$-tree of $G$, as defined in Section~\ref{sec:preliminaries}. We categorize the triangles in $G$ into three types. Let $\triangle$ be a triangle in $G$ with vertex set $\{x,y,z\}$, and assume that $\tau(x) \leq \tau(y) \leq \tau(z)$. Since $z$ and $x$ are adjacent, we have $\tau(z) \leq \tau(x) + 1$. So, ${\triangle}$ can be categorized to be of one of the following types: \begin{enumerate} \item if $\tau(x) = \tau(y) = \tau(z)$, then say ${\triangle}$ is of type 1. 
\item If $\tau(x) = \tau(y) < \tau(y) + 1 = \tau(z)$, then say ${\triangle}$ is of type 2. \item If $\tau(x) < \tau(x) + 1 = \tau(y) = \tau(z)$, then say ${\triangle}$ is of type 3. \end{enumerate} The type of a node of $T$ is the same as the type of its corresponding triangle. The root of $T$ corresponds to the triangle $\nu_1\nu_2\nu_3$ and the following are easy to observe. \begin{enumerate} \item[(a)] The root is of type 1. \item[(b)] A node of type 1 has three children of type 2. \item[(c)] A node of type 2 has one child of type 2 and two children of type 3. \item[(d)] A node of type 3 has two children of type 3 and one child of type 1. \end{enumerate} For a triangle $\triangle$, define $\tau(\triangle)$ to be the minimum of $\tau(u)$ over all $u\in V(\triangle)$. Then it is easy to observe that, for two triangles $\overline{\triangle}$ and $\triangle$ of type 1 such that $\node{\overline{\triangle}}$ is an ancestor of $\node{\triangle}$ and there is no node of type 1 in the unique path connecting them, we have $\tau({\triangle}) = \tau(\overline\triangle) + 1$. This determines $\tau$ inductively: for every $\node{\triangle} \in V(T)$, $\tau(\triangle)$ is one less than the number of nodes of type 1 in the path from $\node{\triangle}$ to the root. We call $\tau(\triangle)$ the \emph{auxiliary depth} of node $\triangle$, and define the \emph{auxiliary height} of a tree $T$, written $\operatorname{ah}(T)$, to be the maximum auxiliary depth of its nodes. Note that the auxiliary height is always less than or equal to the height. Also, for a vertex $v \in V(G)$, if $\triangle$ is the triangle that $v$ subdivides, then $\tau(v) = \tau(\triangle)+1$. We augment the tree $T$ by adding specification of the type of each node, and we abuse notation and call the augmented tree the $\triangle$-tree of the RAN. Hence, the radius of the RAN is either $\operatorname{ah}(T)$ or $\operatorname{ah}(T)+1$. Notice that instead of building $T$ from the RAN $G$, one can think of the random $T$ as being generated in the following manner: let $n\ge 3$ be a positive integer. Start with a single node as the root of $T$. So long as the number of nodes is less than $3n-8$, choose a leaf $v$ independently of previous choices and uniformly at random, and add three leaves as children of $v$. Once the number of nodes becomes $3n-8$, add the information about the types using rules (a)--(d), as follows. Let the root have type 1, and determine the types of other nodes in a top-down manner. For a node of type 1, let its children have type 2. For a node of type 2, select one of the children independently and uniformly at random, let that child have type 2, and let the other two children have type 3. Similarly, for a node of type 3, select one of the children independently of previous choices and uniformly at random, let that child have type 1, and let the other two children have type 3. Henceforth, we will forget about $G$ and focus on finding the auxiliary height of a random tree $T$ generated in this manner. A major difficulty in analyzing the auxiliary height of the tree generated in the aforementioned manner is that the branches of a node are heavily dependent, as the total number of nodes equals $3n-8$. To remedy this we consider another process which has the desired independence and approximates the original process well enough for our purposes. The process, $\widehat{P}$, starts with a single node, the root, which is born at time 0, and is of type 1. 
From this moment onwards, whenever a node is born (say at time $\kappa$), it waits for a random time $X$, which is distributed exponentially with mean 1, and after time $X$ has passed (namely, at absolute time $\kappa + X$) gives birth to three children, whose types are determined as before (according to the rules (b)--(d), and using randomness whenever there is a choice) and dies. Moreover, the lifetime of the nodes are independent. By the memorylessness of the exponential distribution, if one starts looking at the process at any (deterministic) moment, the next leaf to die is chosen uniformly at random. For a nonnegative (possibly random) $t$, we denote by ${\widehat{T}}^t$ the random almost surely finite tree obtained by taking a snapshot of this process at time $t$. Hence, for any deterministic $t \geq 0$, the distribution of ${\widehat{T}}^t$ conditional on ${\widehat{T}}^t$ having exactly $3n-8$ nodes, is the same as the distribution of $T$. \begin{lemma} \label{lem:equal_logs} Assume that there exists a constant $c$ such that a.a.s.\ the auxiliary height of $\widehat{T}^t$ is asymptotic to $c t$ as $t\to\infty$. Then the radius of a RAN with $n$ vertices is a.a.s.\ asymptotic to $c \log n / 2$ as $n\to\infty$. \end{lemma} \begin{proof} Let $\ell_n = 3n - 8$, and let $\varepsilon>0$ be fixed. For the process $\widehat{P}$, we define three stopping times as follows: \begin{description} \item $a_1$ is the deterministic time $(1-\varepsilon) \log (\ell_n) / 2$. \item $A_2$ is the random time when the evolving tree has exactly $\ell_n$ nodes. \item $a_3$ is the deterministic time $(1+\varepsilon) \log (\ell_n) / 2$. \end{description} Broutin and Devroye~\cite[Proposition~2]{treeheight} proved that almost surely $$\log |V(\widehat{T}^t)| \sim 2t\:,$$ which implies the same statement a.a.s.\ as $t\to\infty$. This means that, as $n\to\infty$, a.a.s. $$\log |V(\widehat{T}^{a_1})| \sim 2a_1 = (1 - \varepsilon) \log (\ell_n) \:,$$ and hence $|V(\widehat{T}^{a_1})| < \ell_n$, which implies $a_1 < A_2$. Symmetrically, it can be proved that a.a.s.\ as $n\to\infty$ we have $A_2 < a_3$. It follows that a.a.s.\ as $n\to\infty$ $$\operatorname{ah}\left(\widehat{T}^{a_1}\right) \leq \operatorname{ah} \left(\widehat{T}^{A_2}\right) \leq \operatorname{ah} \left(\widehat{T}^{a_3}\right) \:.$$ By the assumption, a.a.s.\ as $n\to\infty$ we have $\operatorname{ah}\left(\widehat{T}^{a_1}\right) \sim (1-\varepsilon) c \log (\ell_n) / 2$ and $\operatorname{ah}\left(\widehat{T}^{a_3}\right) \sim (1+\varepsilon) c \log (\ell_n) / 2$. On the other hand, as noted above, $T$ has the same distribution as $\widehat{T}^{A_2}$. It follows that a.a.s.\ as $n\to\infty$ $$ 1 - 2\varepsilon \leq \frac{2\operatorname{ah}(T)}{c \log (\ell_n)} \leq 1 + 2\varepsilon \:.$$ Since $\varepsilon$ was arbitrary, the result follows. \end{proof} It will be more convenient to view the process $\widehat{P}$ in the following equivalent way. Denote by $\operatorname{Exp}(1)$ an exponential random variable with mean 1. Let $\widehat{T}$ denote an infinite ternary tree whose nodes have types assigned using rules (a)--(d) and are associated with independent $\operatorname{Exp}(1)$ random variables. For convenience, each edge of the tree from a parent to a child is labelled with the random variable associated with the parent, which denotes the age of the parent when the child is born. 
For every node $u \in V(\widehat{T})$, its \emph{birth time} is defined as the sum of the labels on the edges connecting $u$ to the root, and the birth time of the root is defined to be zero. Given $t\geq 0$, the tree $\widehat{T}^t$ is the subtree induced by nodes with birth time less than or equal to $t$, and is finite with probability one. Let $k\ge 3$ be a fixed positive integer. We define two {random} infinite trees $\underline{T_k}$ and $\overline{T_k}$ as follows. First, we regard $\widehat{T}$ as a tree generated by each node giving birth to exactly three children with types assigned using (b)--(d), and with an $\operatorname{Exp}(1)$ random variable used to label the edges to its children. The tree $\underline{T_k}$ is obtained using the same generation rules as $\widehat{T}$ except that every node of type 2 or 3, whose distance to its closest ancestor of type 1 is equal to $k$, dies without giving birth to any children. Given $t\geq 0$, the random (almost surely finite) tree $\underline{T^t_k}$ is, as before, the subtree of $\underline{T_k}$ induced by nodes with birth time less than or equal to $t$. The tree $\overline{T_k}$ is also generated similarly to $\widehat{T}$, except that for each node $u$ of type 2 (respectively, 3) in $\overline{T_k}$ whose distance to its closest ancestor of type 1 equals $k$, $u$ has exactly three (respectively, four) children of type 1, and the edges joining $u$ to its children get label 0 instead of random ${\operatorname{Exp}}(1)$ labels. (In the ``evolving tree'' interpretation, $u$ immediately gives birth to three or four children of type 1 and dies.) Such a node $u$ is called an \emph{annoying} node. The random (almost surely finite) tree $\overline{T^t_k}$ is defined as before. \begin{lemma} \label{lem:sandwich} For every fixed $k\ge 3$, every $t\geq 0$, and every $g=g(t)$, we have $$ \p{\operatorname{ah} \left ( \underline{T^t_k}\right) \geq g} \leq \p{ \operatorname{ah} \left ( \widehat{T}^t \right ) \geq g} \leq \p{ \operatorname{ah} \left ( \overline{T_k^t} \right) \geq g} \:.$$ \end{lemma} \begin{proof} The left inequality follows from the fact that the random edge labels of $\widehat{T}$ and $\underline{T_{k}}$ can easily be coupled using a common sequence of independent $\operatorname{Exp}(1)$ random variables in such a way that for every $t \geq 0$, the generated $\underline{T_{k}^t}$ is always a subtree of the generated $\widehat{T}^t$. For the right inequality, we use a sneaky coupling between the edge labels of $\widehat{T}$ and $\overline{T_{k}}$. It is enough to choose them using a common sequence of independent $\operatorname{Exp}(1)$ random variables $X_1,X_2,\ldots$ and define a one-to-one mapping $f : V(\widehat{T}) \rightarrow V\left(\overline{T_{k}}\right)$ such that for every $u \in V(\widehat{T})$, \begin{enumerate} \item[(1)] the auxiliary depth of $f(u)$ is greater than or equal to the auxiliary depth of $u$, and \item[(2)] for some $I$ and $J \subseteq I$, the birth time of $u$ equals $\sum_{i\in I} X_i$ and the birth time of $f(u)$ equals $\sum_{j\in J} X_j$. \end{enumerate} For annoying nodes, the coupling and the mapping $f$ is shown down to their grandchildren in Figures~\ref{fig:type-2} and~\ref{fig:type-3}. This is easily extended in a natural way to all other nodes of the tree. 
\end{proof} \begin{figure} \caption{The coupling and the mapping $f$ for an annoying node of type 2, shown down to its grandchildren.} \label{fig:type-2} \end{figure} \begin{figure} \caption{The coupling and the mapping $f$ for an annoying node of type 3, shown down to its grandchildren.} \label{fig:type-3} \end{figure} With a view to proving Lemma~\ref{lem:radius} by appealing to Lemmas~\ref{lem:equal_logs}~and~\ref{lem:sandwich}, we will define two sequences $\left(\underline{\rho_k}\right)$ and $\left(\overline{\rho_k}\right)$ such that for each $k$, a.a.s.\ the auxiliary heights of $\overline{T_k^t}$ and $\underline{T_k^t}$ are asymptotic to $\overline{\rho_k} t $ and $\underline{\rho_k}t$, respectively, and {also} $$ \lim_{k\to\infty} \underline{\rho_k} = \lim_{k\to\infty} \overline{\rho_k} = c \:, $$ where $c\approx 1.668$ is defined in the statement of Lemma~\ref{lem:radius}. For the rest of this section, asymptotics are with respect to $t$ instead of $n$, unless otherwise specified. We analyze the auxiliary heights of $\underline{T_k}$ and $\overline{T_k}$ with the help of a theorem of Broutin and Devroye~\cite[Theorem~1]{treeheight}. We state here a special case suitable for our purposes, including a trivial correction to the conditions on $E$. \begin{theorem} \label{thm:broutin-devroye} Let $E$ be a prototype nonnegative random variable that satisfies $\p{E=0} = 0$ and $\sup \{z : \p{E > z} =1 \} = 0$, and such that $\p{E=z}<1$ for every $z\in\mathbb{R}$; and for which there exists $\lambda>0$ such that $\e{\exp(\lambda E)}$ is finite. Let $b$ be a fixed positive integer greater than {1} and let $T_{\infty}$ be an infinite $b$-ary tree. Let $B$ be a prototype random $b$-vector with each component distributed as $E$ (but not necessarily independent components). For every node $u$ of $T_{\infty}$, label the edges to the children of $u$ using an independently generated copy of $B$. Given $t\geq 0$, let $H_t$ be the height of the subtree of $T_{\infty}$ induced by the nodes for which the sum of the labels on their path to the root is at most $t$. Then, a.a.s.\ we have ${H_t} \sim \rho t $, where $\rho$ is the unique solution to $$ \sup \{\lambda / \rho - \log (\e{\exp(\lambda E)}) : \lambda \leq 0 \} = \log b \:. $$ \end{theorem} For each $i=2,3,\dots$, let $\alpha_i, \beta_i, \gamma_i$ denote the numbers of nodes of types 1, 2, and 3, respectively, at depth $i$ of $\widehat{T}$ for which the root is the only {node of type 1} in their path to the root. Then rules (a)--(d) for determining node types imply $$\forall i>2 \qquad \alpha_i = \gamma_{i-1}, \quad \beta_i = \beta_{i-1}, \quad \gamma_i = 2\beta_{i-1}+2\gamma_{i-1} \:.$$ These, together with $\alpha_2 = 0$, $\beta_2 = 3$, and $\gamma_2 = 6$, imply \begin{equation} \forall i \geq 2 \qquad \alpha_i = 3 \times 2^{i-1} - 6, \quad \beta_i = 3, \quad \gamma_i = 3 \times 2^i - 6 \:.\label{alpha's} \end{equation} Let $\underline{b_k}=\sum_{i=1}^k \alpha_i$ and $\overline{b_k}=\sum_{i=1}^k \alpha_i + 3\beta_k+4\gamma_k$. For a positive integer $s$, let $\operatorname{Gamma}(s)$ denote the Gamma distribution with mean $s$, i.e., the distribution of the sum of $s$ independent $\operatorname{Exp}(1)$ random variables. We define a random infinite tree $\underline{T_k}'$ as follows. The nodes of $\underline{T_k}'$ are the type-1 nodes of $\underline{T_k}$. Let $V'$ denote the set of these nodes. For $u,v\in V'$ such that $u$ is the closest type-1 ancestor of $v$ in $\underline{T_k}$, there is an edge joining $u$ and $v$ in $\underline{T_k}'$, whose label equals the sum of the labels of the edges in the unique $(u,v)$-path in $\underline{T_k}$.
By the construction, for all $t\geq 0$, the height of the subtree of $\underline{T_k}'$ induced by nodes with birth time less than or equal to $t$ equals the auxiliary height of $\underline{T^t_k}$. Let $u$ be a node in $\underline{T_k}'$. Then observe that for each $i = 3, 4, \dots, k$, $u$ has $\alpha_i$ children whose birth times equal the birth time of $u$ plus a $\operatorname{Gamma}(i)$ random variable. In particular, $\underline{T_k}'$ is an infinite $\underline{b_k}$-ary tree. To apply Theorem~\ref{thm:broutin-devroye} we need the label of each edge to have the same distribution. For this, we create a random rearrangement of $\underline{T_k}'$. First let $\underline{E_k}$ be the random variable such that for each $3\le i\le k$, with probability $\alpha_i/\underline{b_k}$, $\underline{E_k}$ is distributed as a $\operatorname{Gamma}(i)$ random variable. Now, for each node $u$ of $\underline{T_k}'$, starting from the root and in a top-down manner, randomly permute the branches below $u$. This results in an infinite $\underline{b_k}$-ary tree, every edge of which has a random label distributed as $\underline{E_k}$. Although the labels of edges from a node to its children are dependent, the $\underline{b_k}$-vector of labels of edges from a node to its children is independent of all other edge labels, as required for Theorem~\ref{thm:broutin-devroye}. Let $\rho$ be the solution to \begin{equation} \label{eq:rhounder} \sup \{\lambda / \rho - \log (\e{\exp(\lambda \underline{E_k})}) : \lambda \leq 0 \} = \log \underline{b_k} \:. \end{equation} Then by Theorem~\ref{thm:broutin-devroye}, a.a.s.\ the auxiliary height of $\underline{T^t_k}$, which equals the height of the subtree of $\underline{T_k}'$ induced by nodes with birth time less than or equal to $t$, is asymptotic to $\rho t$. Notice that we have $$ \e{\exp(\lambda \: \operatorname{Exp}(1))} = \frac{1}{1-\lambda} \:.$$ So, by the definition of $\operatorname{Gamma}(s)$, and since the product of expectation of independent variables equals the expectation of their product, $$ \e{\exp(\lambda \: \operatorname{Gamma}(s))} = \frac{1}{(1-\lambda)^s} \:.$$ Hence by linearity of expectation, \begin{equation} \label{eq:genunder} \e{\exp(\lambda \underline{E_k})}= \sum_{i=3}^k \frac{ \alpha_i}{\underline{b_k} (1-\lambda)^{i} }\:. \end{equation} One can define a random infinite $\overline{b_k}$-ary tree $\overline{T_k}'$ in a similar way. Let $\overline{E_k}$ be the random variable such that for each $3\le i\le k-1$, with probability $\alpha_i/\overline{b_k}$, it is distributed as a $\operatorname{Gamma}(i)$ random variable, and with probability $(\alpha_k + 3\beta_k + 4\gamma_k) / \overline{b_k}$, it is distributed as a $\operatorname{Gamma}(k)$ random variable. Then by a similar argument, a.a.s.\ the auxiliary height of $\overline{T^t_k}$ is asymptotic to $\rho t$, where $\rho$ is the solution to \begin{equation} \label{eq:rhoover} \sup \{\lambda / \rho - \log (\e{\exp(\lambda \overline{E_k})}) : \lambda \leq 0 \} = \log \overline{b_k} \:. \end{equation} Moreover, one calculates \begin{equation} \label{eq:genover} \e{\exp(\lambda \overline{E_k})}= \frac{ \alpha_k + 3\beta_k + 4\gamma_k}{\overline{b_k}(1-\lambda)^{k}} +\sum_{i=3}^{k-1} \frac{ \alpha_i}{\overline{b_k} (1-\lambda)^{ i} }\:. \end{equation} As part of our plan to prove Lemma~\ref{lem:radius}, we would like to define $\underline{\rho_k}$ and $\overline{\rho_k}$ in such a way that they are the unique solutions to~(\ref{eq:rhounder})~and~(\ref{eq:rhoover}), respectively. 
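Before turning to these lemmas, we record a quick numerical sanity check (ours, not part of the proof): the closed forms in (\ref{alpha's}) satisfy the recursion for $(\alpha_i,\beta_i,\gamma_i)$, and the partial sums $\sum_{i=3}^{k}\alpha_i x^i$, which are the functions $\underline{g_k}$ introduced below, already agree closely with $h(x)$ on $[0.1,0.2]$ for moderate $k$.
\begin{verbatim}
def h(x):
    return 12 * x**3 / (1 - 2 * x) - 6 * x**3 / (1 - x)

alpha, beta, gamma = 0, 3, 6                 # the values at depth i = 2
alphas = {}
for i in range(3, 40):
    # advance the recursion and compare with the closed forms of (alpha's)
    alpha, beta, gamma = gamma, beta, 2 * beta + 2 * gamma
    assert (alpha, beta, gamma) == (3 * 2 ** (i - 1) - 6, 3, 3 * 2 ** i - 6)
    alphas[i] = alpha

for x in (0.1, 0.15, 0.2):
    partial_sum = sum(a * x ** i for i, a in alphas.items())   # k = 39
    print(x, partial_sum, h(x))              # the last two columns agree closely
\end{verbatim}
Unsurprisingly, the agreement improves geometrically in $k$: the neglected tail is $\sum_{i>k}\alpha_i x^i=O\big((2x)^{k}\big)$ uniformly on $[0.1,0.2]$.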
We first need to establish two analytical lemmas. For later convenience, we define $\mathcal F$ to be the set of positive functions $f : [0.1, 0.2] \rightarrow \mathbb{R}$ that are differentiable on $(0.1,0.2)$, and let $W : \mathcal{F} \rightarrow \mathbb{R}^{[0.1,0.2]}$ be the operator defined as $$Wf (x) = x (x-1) f'(x) / f(x) - \log f(x) \:.$$ Note that $Wf$ is continuous. Define $h \in \mathcal{F}$ as $$h(x) = \frac{12x^3}{1-2x} - \frac{6x^3}{1-x} \:.$$ \begin{lemma} The function $Wh$ has a unique root $\hat x$ in $(0.1,0.2)$. \end{lemma} \begin{proof} By the definition of $(\alpha_i)_{i\ge 3}$ in (\ref{alpha's}) we have $$ h(x)=\sum_{i\ge 3}\alpha_i x^i \qquad \forall x \in [0.1,0.2] \:. $$ Since $\alpha_i>0$ for all $i\ge 3$, we have $h(x)>0$ and $h'(x)>0$ for $x \in [0.1,0.2]$, and hence the derivative of $\log h(x)$ is positive. Moreover, combining the two fractions in the definition of $h$ gives the closed form $h(x)=\frac{6x^3}{(1-2x)(1-x)}$, and a direct computation using this expression shows that the derivative of $x(x-1)h'(x)/h(x)$ equals $4x(x-1)/(1-2x)^2$, which is negative. Therefore, $Wh(x)$ is a strictly decreasing function on $[0.1,0.2]$. Numerical calculations give $Wh (0.1) \approx 1.762 > 0$ and $Wh(0.2) \approx -0.831 < 0$. Hence, there is a unique solution to $Wh(x)=0$ in $(0.1,0.2)$. \end{proof} \begin{remark} Numerical calculations give $\hat x \approx 0.1629562 \:.$ \end{remark} Define functions $\underline{g_k}, \overline{g_k} \in \mathcal{F}$ as $$ \underline{g_k}(x)=\sum_{i=3}^k \alpha_i x^i, \mathrm{\ and\ }\overline{g_k}(x)=(\alpha_k+3\beta_k+4\gamma_k)x^k + \sum_{i=3}^{k-1} \alpha_i x^i\:.$$ Note that by (\ref{eq:genunder}) and (\ref{eq:genover}), \begin{equation} \label{eq:gs} \underline{b_k} \: \e{\exp\left(\lambda \underline{E_k}\right)} = \underline{g_k}\left(\frac{1}{1-\lambda}\right) , \mathrm{\ and\ } \overline{b_k} \: \e{\exp\left(\lambda \overline{E_k}\right)} = \overline{g_k}\left(\frac{1}{1-\lambda}\right) \end{equation} hold at least when $(1-\lambda)^{-1} \in [0.1,0.2]$, namely for all $\lambda \in [-9, -4]$. \begin{lemma} \label{lem:limits} Both sequences $\left(W \underline{g_k}\right)_{k=3}^{\infty}$ and $\left(W \overline{g_k}\right)_{k=3}^{\infty}$ converge pointwise to $Wh$ on $[0.1,0.2]$ as $k\rightarrow \infty$. Also, there exists a positive integer $k_0$ and sequences $\left(\underline{x_k}\right)_{k=k_0}^{\infty}$ and $\left(\overline{x_k}\right)_{k=k_0}^{\infty}$ such that $W\underline{g_k}\left(\underline{x_k}\right) = W\overline{g_k}\left(\overline{x_k}\right) = 0$ for all $k \geq k_0$, and $$\lim_{k\rightarrow \infty} \underline{x_k} = \lim_{k\rightarrow \infty} \overline{x_k} = \hat x \:.$$ \end{lemma} \begin{proof} For any $x \in [0.1,0.2]$, we have $$\lim_{k\rightarrow \infty} \underline{g_k}(x) = h(x), \quad \lim_{k\rightarrow \infty} \underline{g_k}'(x) = h'(x), \quad\lim_{k\rightarrow \infty} \overline{g_k}(x) = h(x), \quad \lim_{k\rightarrow \infty} \overline{g_k}'(x) = h'(x) \:,$$ so the sequences $\left(W \underline{g_k}\right)_{k=3}^{\infty}$ and $\left(W \overline{g_k}\right)_{k=3}^{\infty}$ converge pointwise to $Wh$. Next, we show the existence of a positive integer $\underline{k_0}$ and a sequence $\left(\underline{x_k}\right)_{k=\underline{k_0}}^{\infty}$ such that $W\underline{g_k}\left(\underline{x_k}\right) = 0$ for all $k \geq \underline{k_0}$, and $$\lim_{k\rightarrow \infty} \underline{x_k} = \hat x \:.$$ The proof of the existence of a corresponding positive integer $\overline{k_0}$ and sequence $\left(\overline{x_k}\right)_{k=\overline{k_0}}^{\infty}$ is similar, and we may let $k_0 = \max \{\underline{k_0}, \overline{k_0}\}$.
Since $Wh (0.1) > 0$ and $Wh (0.2) < 0$, there exists $\underline{k_0}$ so that for $k \geq \underline{k_0}$, $W\underline{g_k} (0.1) > 0$ and $W\underline{g_k} (0.2) < 0$. Since $W\underline{g_k}$ is continuous for all $k\geq 3$, it has at least one root in $(0.1,0.2)$. Moreover, since $W\underline{g_k}$ is continuous, the set $\{x : W\underline{g_k} (x) = 0\}$ is a closed set, thus we can choose a root $\underline{x_k}$ closest to $\hat x$. We just need to show that $\lim_{k\rightarrow \infty} \underline{x_k} = \hat x$. Fix an $\varepsilon>0$. Since $Wh\left(\hat x - \varepsilon\right) >0 $ and $Wh\left(\hat x + \varepsilon\right)<0$, there exists a large enough $M$ such that for all $k\geq M$, $W\underline{g_k}(\hat x - \varepsilon) >0 $ and $W\underline{g_k}(\hat x + \varepsilon)<0$. Thus $\underline{x_k} \in (\hat x - \varepsilon, \hat x + \varepsilon)$. Since $\varepsilon$ was arbitrary, we conclude that $\lim_{k\rightarrow \infty} \underline{x_k} = \hat x$. \end{proof} Let $k_0$ be as in Lemma~\ref{lem:limits} and let $\left(\underline{x_k}\right)_{k=k_0}^{\infty}$ and $\left(\overline{x_k}\right)_{k=k_0}^{\infty}$ be the sequences given by Lemma~\ref{lem:limits}. Define the sequences $\left(\underline{\rho_k}\right)_{k=k_0}^{\infty}$ and $\left(\overline{\rho_k}\right)_{k=k_0}^{\infty}$ by \begin{equation}\label{eq:rho} \underline{\rho_k}=\left(1-\underline{x_k}^{-1}\right)/\log \underline{g_k}(\underline{x_k}),\quad \overline{\rho_k}=\left(1-{\overline{x_k}\:}^{-1}\right)/\log \overline{g_k}(\overline{x_k}). \end{equation} \begin{lemma} \label{lem:long} For every fixed $k\geq k_0$, a.a.s.\ the heights of $\overline{T_k^t}$ and $\underline{T_k^t}$ are asymptotic to $\overline{\rho_k} t $ and $\underline{\rho_k}t$, respectively. \end{lemma} \begin{proof} We give the argument for $\overline{T_k^t}$; the argument for $\underline{T_k^t}$ is similar. First of all, we claim that $\log (\e{\exp(\lambda \overline{E_k})})$ is a strictly convex function of $\lambda$ over $(-\infty,0]$. To see this, let $\lambda_1 < \lambda_2 \leq 0$ and let $\theta \in (0,1)$. Then we have \begin{align*} \e{\exp\left(\theta \lambda_1 \overline{E_k} + (1-\theta) \lambda_2 \overline{E_k} \right)} & = \e{\left[\exp\left(\lambda_1 \overline{E_k}\right)\right]^{\theta} \left[\exp\left( \lambda_2 \overline{E_k} \right)\right]^{1-\theta}} \\ & < \e{\exp\left(\lambda_1 \overline{E_k}\right)}^{\theta} \e{\exp\left(\lambda_2 \overline{E_k}\right)}^{1-\theta} \:, \end{align*} where the inequality follows from H\"{o}lder's inequality, and is strict as the random variable $\overline{E_k}$ does not have all of its mass concentrated in a single point. Taking logarithms completes the proof of the claim. It follows that given any value of $\rho$, $\lambda / \rho - \log (\e{\exp(\lambda \overline{E_k})})$ is a strictly concave function of $\lambda \in (-\infty,0]$ and hence attains its supremum at a unique $\lambda \le 0$. Now, define $$\overline{\lambda_k} = 1 - {\overline{x_k}\:}^{-1} \:,$$ which is in $(-9,-4)$ as $\overline{x_k} \in (0.1, 0.2)$.
Next we will show that \begin{align} \overline{\lambda_k} / \overline{\rho_k} - \log (\e{\exp(\overline{\lambda_k} \,\overline{E_k})}) & = \log \overline{b_k} \label{eq:rho1} \:, \\ \frac{\mathrm{d}}{\mathrm{d}{\lambda}} \left[\lambda / \overline{\rho_k} - \log (\e{\exp(\lambda \overline{E_k})})\right] \Big|_{\lambda = \overline{\lambda_k}} & = 0 \label{eq:rho2} \:, \end{align} which implies that $\overline{\rho_k}$ is the unique solution to (\ref{eq:rhoover}), and thus by Theorem~\ref{thm:broutin-devroye} and the discussion after it, the height of $\overline{T_k^t}$ is asymptotic to $\overline{\rho_k} t$. Notice that $\overline{\lambda_k} \in (-9,-4)$, so by (\ref{eq:gs}), $$\overline{b_k} \e{\exp({\lambda} \overline{E_k})} = \overline{g_k}((1-\lambda)^{-1})$$ for $\lambda$ in a sufficiently small open neighbourhood of $\overline{\lambda_k}$. Taking logarithm of both sides and using~(\ref{eq:rho}) gives~(\ref{eq:rho1}). To prove (\ref{eq:rho2}), note that \begin{align*} \frac{\mathrm{d}}{\mathrm{d}{\lambda}} \left[\log (\e{\exp(\lambda \overline{E_k})})\right] \Big|_{\lambda = \overline{\lambda_k}} & = \frac{\mathrm{d}}{\mathrm{d}{\lambda}} \left[\log \overline{g_k} \left(\left(1-\lambda\right)^{-1}\right) - \log \overline{b_k} \right]\Big|_{\lambda = \overline{\lambda_k}} \\ & = \frac { {\overline{g_k}\,}' \left( (1-\overline{\lambda_k})^{-1} \right)} {\left(1-\overline{\lambda_k}\right)^{2} \: \overline{g_k} \left( (1-\overline{\lambda_k})^{-1} \right)} = {\overline{x_k}\:}^2 \: \frac{{\overline{g_k}\,}'(\overline{x_k})}{\overline{g_k}(\overline{x_k})} \:. \end{align*} By Lemma~\ref{lem:limits}, $W\overline{g_k} (\overline{x_k}) = 0$, i.e., $${\overline{x_k}\:}^2 \: \frac{{\overline{g_k}\,}'(\overline{x_k})}{{\overline{g_k}}(\overline{x_k})} = {\overline{x_k}\:}^2 \: \frac{\log \overline{g_k}( \overline{x_k} )}{\overline{x_k} (\overline{x_k} - 1)} = \frac{\log \overline{g_k}(\overline{x_k})}{1 - {\overline{x_k}\:}^{-1}} = \frac{1}{\overline{\rho_k}}\:,$$ and (\ref{eq:rho2}) is proved. \end{proof} We now have all the ingredients to prove Lemma~\ref{lem:radius}. \begin{proof} [Proof of Lemma~\ref{lem:radius}.] By Lemma~\ref{lem:equal_logs}, we just need to show that a.a.s.\ the auxiliary height of $\widehat{T}^t$ is asymptotic to $c t$, where $$c = \frac{1-\hat {x}^{-1}}{\log h(\hat x)} \:.$$ By Lemma~\ref{lem:long}, a.a.s.\ the heights of $\overline{T_k^t}$ and $\underline{T_k^t}$ are asymptotic to $\overline{\rho_k} t$ and $\underline{\rho_k} t$, respectively. By Lemma~\ref{lem:limits}, $\overline{x_k}\to \hat x$ and $\underline{x_k}\to \hat x$. Observe that $\left(\underline{g_k}\right)_{k=3}^{\infty}$ and $\left(\overline{g_k}\right)_{k=3}^{\infty}$ converge pointwise to $h$, and that for every $k\geq 3$ and every $x\in[0.1,0.2]$, $\underline{g_k}(x) \leq \underline{g_{k+1}}(x)$ and $\overline{g_k}(x) \geq \overline{g_{k+1}}(x)$. Thus by Dini's theorem (see, e.g., Rudin~\cite[Theorem~7.13]{rudin}), $\left(\underline{g_k}\right)_{k=3}^{\infty}$ and $\left(\overline{g_k}\right)_{k=3}^{\infty}$ converge uniformly to $h$ on $[0.1,0.2]$. Hence, $$ \lim_{k\to\infty}\underline{\rho_k}= \lim_{k\to\infty} \frac{1-\underline{x_k}^{-1}}{\log \underline{g_k}(\underline{x_k})} = \frac{1-\hat {x}^{-1}}{\log h(\hat x)} = c \:, $$ and $$ \lim_{k\to\infty} \overline{\rho_k}= \lim_{k\to\infty} \frac{1-{\overline{x_k}}^{-1}}{\log \overline{g_k}(\overline{x_k})} = \frac{1-\hat {x} ^{-1}}{\log h(\hat x)} = c \:.
$$ It follows from Lemma~\ref{lem:sandwich} that a.a.s.\ the auxiliary height of $\widehat{T}^t$ is asymptotic to $c t$, as required. \end{proof} \end{document}
\begin{document} \maketitle \begin{abstract} We first study the degeneration of a sequence of Hermitian-Yang-Mills metrics taken with respect to a sequence of balanced metrics on a Calabi-Yau threefold $\hat{X}$, where the balanced metrics degenerate to the metric constructed by Fu, Li and Yau \cite{FLY} on the complement of finitely many (-1,-1)-curves in $\hat{X}$. Then under some assumptions we show the existence of Hermitian-Yang-Mills metrics on bundles over a family of threefolds $X_t$ with trivial canonical bundles obtained by performing conifold transitions on $\hat{X}$. \end{abstract} \section{Introduction} This paper is about the existence problem for Hermitian-Yang-Mills metrics on holomorphic vector bundles with respect to balanced metrics, when conifold transitions are performed on the base Calabi-Yau threefolds. The construction of canonical geometric structures on manifolds and vector bundles has always been a very important problem in differential geometry, especially in K$\ddot{\text{a}}$hler geometry. The class of manifolds which has been the main focus in this direction is that of the K$\ddot{\text{a}}$hler Calabi-Yau manifolds\footnote{In this paper, by a Calabi-Yau manifold we mean a complex manifold with trivial canonical bundle which may or may not be K$\ddot{\text{a}}$hler, and what is usually called a Calabi-Yau manifold will now be a K$\ddot{\text{a}}$hler Calabi-Yau manifold.}, i.e., K$\ddot{\text{a}}$hler manifolds with trivial canonical bundles. The Calabi conjecture, which was solved by Yau \cite{Y} in 1976, states that in every K$\ddot{\text{a}}$hler class of a K$\ddot{\text{a}}$hler Calabi-Yau manifold there is a unique representative which is Ricci-flat. After the solution of the Calabi conjecture, K$\ddot{\text{a}}$hler Calabi-Yau manifolds have undergone rapid development, and the moduli spaces of K$\ddot{\text{a}}$hler Calabi-Yau threefolds gradually became one of the most important areas of study. In the work of Todorov \cite{To} and Tian \cite{Tian1} the smoothness of the moduli spaces of K$\ddot{\text{a}}$hler Calabi-Yau manifolds in general dimensions was proved. In the complex two dimensional case, the moduli space of K3 surfaces is known to be a 20-dimensional complex smooth irreducible analytic space, with the algebraic K3 surfaces occupying a 19-dimensional reducible analytic subvariety with countably many irreducible components \cite{Ko} \cite{Ty} \cite{M}. The global properties of the moduli spaces of K$\ddot{\text{a}}$hler Calabi-Yau threefolds remain much less understood. However, there is the proposal by Miles Reid \cite{Reid} which states that the moduli spaces of all Calabi-Yau threefolds can be connected by means of taking birational transformations and smoothings on the Calabi-Yau threefolds. This idea, later dubbed ``Reid's Fantasy'', was checked for a huge number of examples in \cite{C1}\cite{CGGK}. The processes just mentioned are called geometric transitions in general, and the main focus in this paper is the most studied example, namely the conifold transition, which was first considered by Clemens \cite{Cl} in 1982 and later caught the attention of physicists starting in the late 1980s. It is described as follows. Let $\hat{X}$ be a smooth Calabi-Yau threefold containing a collection of mutually disjoint (-1,-1)-curves $C_1,...,C_l$, i.e., rational curves $C_i \cong \mathbb{P}^1$ with normal bundles in $\hat{X}$ isomorphic to $\mathcal{O}_{\mathbb{P}^1}(-1)\oplus \mathcal{O}_{\mathbb{P}^1}(-1)$.
One can contract the $C_i$'s to obtain a space $X_0$ with ordinary double points, and then under certain conditions given by Friedman, $X_0$ can be smoothed and one obtains a family of threefolds $X_t$ with trivial canonical bundles. Even when $\hat{X}$ is K$\ddot{\text{a}}$hler, the manifolds $X_t$ may be non-K$\ddot{\text{a}}$hler, and it was proved in \cite{FLY} that they nevertheless admit balanced metrics, which we denote by $\tilde{\omega}_t$. In general, a Hermitian metric $\omega$ on a complex $n$-dimensional manifold is balanced if $d(\omega^{n-1})=0$. K$\ddot{\text{a}}$hler metrics are obviously balanced metrics, but, unlike the K$\ddot{\text{a}}$hler case, the existence of balanced metrics is preserved under birational transformations \cite{AB}. Moreover, if the manifold satisfies the $\partial\bar{\partial}$-lemma, then the aforementioned existence is also preserved under small deformations \cite{Wu}. What \cite{FLY} shows is that it is also preserved under conifold transitions provided $\hat{X}$ is K$\ddot{\text{a}}$hler Calabi-Yau. In this paper we would like to push further the above result on the preservation of geometric structures after conifold transitions. Consider a pair $(\hat{X},\mathcal{E})$ where $\hat{X}$ is a K$\ddot{\text{a}}$hler Calabi-Yau threefold with a K$\ddot{\text{a}}$hler metric $\omega$, and $\mathcal{E}$ is a holomorphic vector bundle endowed with a Hermitian-Yang-Mills metric with respect to $\omega$. Denote the contraction of exceptional rational curves mentioned above by $\pi:\hat{X} \rightarrow X_0$. From the point of view of metric geometry, such a contraction can be seen as a degeneration of Hermitian metrics on $\hat{X}$ to a metric which is singular along the exceptional curves. In fact, following the methods in \cite{FLY}, one can construct a family of smooth balanced metrics $\{\hat{\omega}_a\}_{a>0}$ on $\hat{X}$ such that $\hat{\omega}_a^2$ and $\omega^2$ differ by $\partial\bar{\partial}$-exact forms and, as $a\rightarrow 0$, $\hat{\omega}_a$ converges to a metric $\hat{\omega}_0$ which is singular along the exceptional curves. The metric $\hat{\omega}_0$ can also be viewed as a smooth metric on $X_{0,sm}$, the smooth part of $X_0$. We have the following result which is the first main theorem. \begin{theorem} \label{th:main1} Let $\mathcal{E}$ be an irreducible holomorphic vector bundle over a K$\ddot{\text{a}}$hler Calabi-Yau threefold $(X,\omega)$ such that $c_1(\mathcal{E})=0$ and $\mathcal{E}$ is trivial on a neighborhood of the exceptional rational curves $C_i$. Suppose $\mathcal{E}$ is endowed with a HYM metric w.r.t. $\omega$. Then there exists a HYM metric $H_0$ on $\mathcal{E}|_{X_{0,sm}}$ with respect to $\hat{\omega}_0$, and there is a decreasing sequence $\{a_i\}_{i=1}^\infty$ converging to 0, such that there is a sequence $\{H_{a_i}\}_{i=1}^\infty$ of Hermitian metrics on $\mathcal{E}$ converging weakly in the $L^p_2$-sense, for all $p$, to $H_0$ on each compactly embedded open subset of $X_{0,sm}$, where each $H_{a_i}$ is HYM with respect to $\hat{\omega}_{a_i}$. \end{theorem} Suppose that one can smooth the singular space $X_0$ to $X_t$, and that the bundle $\pi_*\mathcal{E}$ fits in a family of holomorphic bundles $\mathcal{E}_t$ over $X_t$, i.e., the pair $(X_0, \pi_*\mathcal{E})$ can be smoothed to $(X_t,\mathcal{E}_t)$. We ask the question of whether a Hermitian-Yang-Mills metric with respect to the balanced metric $\tilde{\omega}_t$ exists on the bundle $\mathcal{E}_t$. 
Note that the condition that $\mathcal{E}$ is trivial in a neighborhood of the exceptional rational curves $C_i$ implies that the bundles $\mathcal{E}_t$ are trivial in a neighborhood of the vanishing cycles. Also note that $c_1(\mathcal{E}_t)=0$ for any $t\neq 0$. We now state the second main theorem of this paper. \begin{theorem} \label{th:main2} Let $(\hat{X},\omega)$ be a smooth K$\ddot{\text{a}}$hler Calabi-Yau threefold and $\pi:\hat{X} \rightarrow X_0$ be a contraction of mutually disjoint (-1,-1)-curves. Let $\mathcal{E}$ be an irreducible holomorphic vector bundle over $\hat{X}$ with $c_1(\mathcal{E})=0$ that is trivial in a neighborhood of the exceptional curves of $\pi$, and admits a Hermitian-Yang-Mills metric with respect to $\omega$. Suppose that the pair $(X_0,\pi_*\mathcal{E})$ can be smoothed to a family of pairs $(X_t,\mathcal{E}_t)$ where $X_t$ is a smooth Calabi-Yau threefold and $\mathcal{E}_t$ is a holomorphic vector bundle on $X_t$. Then for $t\neq 0$ sufficiently small, $\mathcal{E}_t$ admits a smooth Hermitian-Yang-Mills metric with respect to the balanced metric $\tilde{\omega}_t$ constructed in \cite{FLY}. \end{theorem} For irreducible holomorphic vector bundles over a K$\ddot{\text{a}}$hler manifold, the existence of Hermitian-Yang-Mills metrics corresponds to the slope stability of the bundles. For proofs of this correspondence, see \cite{Don1}\cite{Don2}\cite{UY}. On a complex manifold endowed with a balanced metric, or more generally a Gauduchon metric, i.e., a Hermitian metric $\omega$ satisfying $\partial\bar{\partial}(\omega^{n-1})=0$, one can still define the slopes of bundles and hence the notion of slope stability. In this setting, Li and Yau \cite{LY1} proved the same correspondence. Another motivation for considering stable vector bundles over non-K$\ddot{\text{a}}$hler manifolds comes from physics. K$\ddot{\text{a}}$hler Calabi-Yau manifolds have always played a central role in the study of Supersymmetric String Theory, a theory that holds the highest promise so far concerning the unification of the fundamental forces of the physical world. Among the many models in Supersymmetric String Theory, the Heterotic String models \cite{G}\cite{W} require not only a manifold with trivial canonical bundle but also a stable holomorphic vector bundle over it. Besides using the K$\ddot{\text{a}}$hler Calabi-Yau threefolds as the internal spaces, Strominger also suggested using a model allowing nontrivial torsion in the metric. In \cite{St}, he proposed the following system of equations for a pair $(\omega,H)$ consisting of a Hermitian metric $\omega$ on a Calabi-Yau threefold $X$ and a Hermitian metric $H$ on a vector bundle $\mathcal{E} \rightarrow X$ with $c_1(\mathcal{E})=0$: \begin{equation} \label{s1} F_H\wedge \omega^2=0;\,\,\,F_H^{0,2}=F_H^{2,0}=0; \end{equation} \begin{equation} \label{s2} \sqrt{-1}\partial\bar{\partial}\omega=\frac{\alpha}{4}(\text{tr}(R_{\omega}\wedge R_{\omega})-\text{tr}(F_H\wedge F_H)); \end{equation} \begin{equation} \label{s3} d^*\omega =\sqrt{-1}(\bar{\partial}-\partial)\ln \Arrowvert \Omega \Arrowvert_{\omega}; \end{equation} where $R_{\omega}$ is the full curvature of $\omega$ and $F_H$ is the Hermitian curvature of $H$. The equations in (\ref{s1}) are simply the Hermitian-Yang-Mills equations for $H$. Equation (\ref{s2}) is the Anomaly Cancellation equation, which comes from physics.
In \cite{LY2} it was shown that equation (\ref{s3}) is equivalent to the following equation, which states that $\omega$ is conformally balanced: \begin{equation*} d(\Arrowvert \Omega \Arrowvert_{\omega}\omega^2)=0. \end{equation*} It is mentioned in \cite{FLY} that this system should be viewed as a generalization of the Calabi conjecture to the case of non-K$\ddot{\text{a}}$hler Calabi-Yau manifolds. The system, though written down in 1986, was first shown to have non-K$\ddot{\text{a}}$hler solutions only in 2004 by Li and Yau \cite{LY2} using perturbation from a K$\ddot{\text{a}}$hler solution. The first solutions existing on manifolds which are never K$\ddot{\text{a}}$hler were constructed by Fu and Yau \cite{FY}. The threefolds they consider are the $T^2$-bundles over K3 surfaces constructed by Goldstein and Prokushkin \cite{GP}. Some non-compact examples have also been constructed by Fu, Tseng and Yau \cite{FTY} on $T^2$-bundles over the Eguchi-Hanson space. More solutions have been found in a recent preprint \cite{AG} using the perturbation method developed in \cite{LY2}. The present paper can also be viewed as a step following \cite{FLY} in the investigation of the relation between the solutions to Strominger's system on $\hat{X}$ and those on $X_0$ and $X_t$. This paper is organized as follows: Section 2 sets up the conventions and contains more background information on conifold transitions and Hermitian-Yang-Mills metrics on vector bundles. Moreover, the construction of balanced metrics in \cite{FLY} is described in more detail, as necessary for later discussions. In Section 3 the uniform coordinate systems on $X_0$ and on $X_t$ are introduced, which are needed to obtain uniform control of the constants in the Sobolev inequalities and elliptic regularity theorems. In Section 4 Theorem \ref{th:main1} is proved, and several boundedness results for the HYM metric $H_0$ in that theorem are discussed. In Section 5 a family of approximate Hermitian metrics $H_t$ on $\mathcal{E}_t$ is constructed, and some estimates on their mean curvatures are established. Section 6 describes the contraction mapping setup for the HYM equation on the bundle $\mathcal{E}_t$. Theorem \ref{th:main2} is proved here. Section 7 deals with a proposition whose proof is deferred from Section 6.\\[0.3cm] \noindent \textbf{Acknowledgements} The author would like to thank his thesis advisor Professor S.-T. Yau for constant support and valuable comments. The author is also grateful to Professor C. Taubes and Professor J. Li for helpful discussions, and to Professor J.-X. Fu for useful comments during the preparation of this work. \section{Background} \subsection{Conifold transitions} Let $\hat{X}$ be a K$\ddot{\text{a}}$hler Calabi-Yau threefold with a K$\ddot{\text{a}}$hler metric denoted by $\omega$. Let $\bigcup C_i$ be a collection of $(-1,-1)$-curves in $\hat{X}$, and let $X_0$ be the threefold obtained by contracting $\bigcup C_i$, so $\hat{X}$ is a small resolution of $X_0$. $X_0$ has ordinary double points, which are the images of the curves $C_i$ under the contraction.
There is a condition given by Friedman which relates the smoothability of the singular space $X_0$ to the classes $[C_i]$ of the exceptional curves in $\hat{X}$: \begin{theorem} \cite{Fri1}\cite{Fri2} If there are nonzero numbers $\lambda_i$ such that the class \begin{equation} \label{eq:i-1} \sum_i \lambda_i[C_i]=0 \end{equation} in $H^2(\hat{X},\Omega_{\hat{X}}^2)$ then a smoothing of $X_0$ exists, i.e., there is a 4-dimensional complex manifold $\mathcal{X}$ and a holomorphic projection $\mathcal{X} \rightarrow \Delta$ to the disk $\Delta$ in $\mathbb{C}$ such that the general fibers are smooth and the central fiber is $X_0$. \end{theorem} The above theorem is also considered in \cite{Tian2} from a more differential geometric point of view, and in \cite{Chan} the condition (\ref{eq:i-1}) is discussed in the obstructed case of the desingularization of K$\ddot{\text{a}}$hler Calabi-Yau 3-folds with conical singularities. \\[0.2cm] The local geometry of the total space $\mathcal{X}$ near an ODP of $X_0$ is described in the following. For some $\epsilon>0$ and for \begin{equation*} \tilde{U}=\{(z,t) \in \mathbb{C}^4 \times \Delta_\epsilon | \Arrowvert z \Arrowvert<2,\,\,z_1^2+z_2^2+z_3^2+z_4^2=t \} \end{equation*} there is a holomorphic map $\Xi:\tilde{U} \rightarrow \mathcal{X}$ respecting the projections to $\Delta$ and $\Delta_\epsilon$ so that $\tilde{U}$ is biholomorphic to its image. We will denote \begin{equation*} Q_t:=\{z_1^2+z_2^2+z_3^2+z_4^2=t\} \subset \mathbb{C}^4. \end{equation*} From the above description, a neighborhood of 0 in $Q_0$ models a neighborhood of an ODP in $X_0$. For $t \neq 0$, $Q_t$ is called a deformed conifold. Throughout this paper we will denote by $\mathbf{r}_t$ the restriction of $\Arrowvert z \Arrowvert$ to $Q_t \subset \mathbb{C}^4$, and use the same notation for their pullbacks under $\Xi^{-1}$. For each ODP $p_i$ of $X_0$, we have the biholomorphism $\Xi_i:\tilde{U}_i \rightarrow \mathcal{X}$ as above. Without loss of generality we may assume that the images of the $\Xi_i$'s are disjoint. For a given $t \in \Delta$, define $V_{t,i}(c)$ to be the image under $\Xi_i$ of $\{(z,t) \in \mathbb{C}^4 \times \Delta_\epsilon | \mathbf{r}_t(z)< c,\,\,z_1^2+z_2^2+z_3^2+z_4^2=t \}$, and define $V_t(c)=\bigcup_i V_{t,i}(c)$. Define $V_{t,i}(R_1,R_2)=V_{t,i}(R_2)\backslash V_{t,i}(R_1)$ for any $0<R_1<R_2$ and $V_t(R_1,R_2)=\bigcup_i V_{t,i}(R_1,R_2)$. Define $U_i(c):=\pi^{-1}(V_{0,i}(c))\subset \hat{X}$ where $\pi$ is the small resolution $\pi:\hat{X} \rightarrow X_0$, and $U(c)=\bigcup_i U_{i}(c)$. Finally, define $X_t[c]=X_t \backslash V_t(c)$. For each $t \neq0$, it can be easily checked that $\mathbf{r}_t\geq |t|^\frac{1}{2}$ on $Q_t$ and the subset $\{\mathbf{r}_t=|t|^\frac{1}{2}\}\subset Q_t$ is isomorphic to a copy of $S^3$, which is usually called the vanishing sphere. Each subset $V_t(c)$ is thus an open neighborhood of the vanishing spheres.\\[0.2cm] \noindent \textbf{Remark} In the rest of the paper we will always regard $V_{t,i}(c)$ not only as a subset of $X_t$, but also as a subset of $Q_t$ via the map $\Xi_i$ and the projection map from the set $\{(z,t) \in \mathbb{C}^4 \times \Delta_\epsilon | \mathbf{r}_t(z)< c,\,\,z_1^2+z_2^2+z_3^2+z_4^2=t \}$ to $\mathbb{C}^4$. We also use the same notation $\mathbf{r}_t$ to denote a fixed smooth extension of $\mathbf{r}_t$ from $V_t(1)$ to $X_t$ so that $\mathbf{r}_t<3$.\\[0.2cm] The following description of $Q_0$ and $Q_1$ will be useful in our discussion. Denote $\Sigma=SO(4)/SO(2)$. 
Then there are diffeomorphisms \begin{equation} \label{p1} \phi_0: \Sigma \times (0,\infty) \rightarrow Q_{0,sm}\,\,\,\,\text{such that}\,\,\, \phi_0(A\cdot SO(2),\mathbf{r}_0)=A \begin{pmatrix} \frac{1}{\sqrt{2}}\mathbf{r}_0 \\ \frac{i}{\sqrt{2}}\mathbf{r}_0 \\ 0 \\ 0 \end{pmatrix}, \end{equation} and \begin{equation} \label{p2} \phi_1: \Sigma \times (1,\infty) \rightarrow Q_1\backslash \{\mathbf{r}_1=1\}\,\,\,\,\text{such that}\,\,\, \phi_1(A\cdot SO(2),\mathbf{r}_1)=A \begin{pmatrix} \cosh (\frac{1}{2}\cosh^{-1}(\mathbf{r}_1^2)) \\ i\sinh (\frac{1}{2}\cosh^{-1}(\mathbf{r}_1^2)) \\ 0 \\ 0 \end{pmatrix}. \end{equation} Here $Q_{0,sm}$ is the smooth part of $Q_0$, and the variables $\mathbf{r}_0$ and $\mathbf{r}_1$ are indeed the distances of the image points from the origin. We can see in particular from (\ref{p1}) that $\phi_0$ describes $Q_0$ as a cone over $\Sigma$. It is not hard to see that $\Sigma \cong S^2 \times S^3$. However, the radial variable for the Ricci-flat K$\ddot{\text{a}}$hler cone metric $g_{co,0}$ on $Q_0$ is not $\mathbf{r}_0$, but $\rho_0=\mathbf{r}_0^\frac{2}{3}$. In fact, $g_{co,0}$ can be expressed as \begin{equation} \label{cone} g_{co,0}=(d\mathbf{r}_0^\frac{2}{3})^2+\mathbf{r}_0^\frac{4}{3} g_\Sigma \end{equation} where $g_\Sigma$ is an $SO(4)$-invariant Sasaki-Einstein metric on $\Sigma$. The K$\ddot{\text{a}}$hler form of $g_{co,0}$ is given by $\omega_{co,0}=\sqrt{-1}\partial \bar{\partial} f_0(\mathbf{r}_0^2)$ where $f_0(s)=\frac{3}{2}s^\frac{2}{3}$. In this paper we will not use the variable $\rho_0$. Throughout, given a Hermitian metric $g$, the notation $\nabla_g$ will always refer to the Chern connection of $g$.\\[0.2cm] \subsection{The Candelas-de la Ossa metrics} Candelas and de la Ossa \cite{CO} constructed a 1-parameter family of Ricci-flat K$\ddot{\text{a}}$hler metrics $\{ g_{co,a} | a>0\}$ on the small resolution $\hat{Q}$ of $Q_0$. The space $\hat{Q}$ is called the resolved conifold, and the parameter $a$ measures the size of the exceptional curve $C$ in $\hat{Q}$. Identifying $Q_{0,sm}$ with $\hat{Q}\backslash C$ biholomorphically via the resolution map, the family $\{ g_{co,a} | a>0\}$ converges smoothly, as $a$ goes to 0, to the cone metric $g_{co,0}$ on each compactly embedded open subset of $Q_{0,sm}$, i.e., each open subset of $Q_{0,sm}$ whose closure in $Q_0$ is contained in $Q_{0,sm}$. The K$\ddot{\text{a}}$hler forms of the metrics $g_{co,a}$ will be denoted by $\omega_{co,a}$. They also constructed a Ricci-flat K$\ddot{\text{a}}$hler metric $g_{co,t}$ on $Q_t$ for each $0\neq t \in \Delta$. Explicitly, the K$\ddot{\text{a}}$hler form of $g_{co,t}$ is given by $\omega_{co,t}=\sqrt{-1}\partial \bar{\partial} f_t(\mathbf{r}_t^2)$ where \begin{equation} \label{potentials} f_t(s)=2^{-\frac{1}{3}}|t|^\frac{2}{3} \int_0^{\cosh^{-1}(\frac{s}{|t|})} (\sinh(2\tau)-2\tau)^\frac{1}{3} d\tau, \end{equation} and it satisfies \begin{equation} \label{n} \omega_{co,t}^3=\sqrt{-1}\frac{1}{2}\Omega_t\wedge \bar{\Omega}_t \end{equation} where $\Omega_t$ is the holomorphic (3,0)-form on $Q_t$ such that, on $\{ z_1 \neq 0 \}$, \begin{equation*} \Omega_t=\frac{1}{z_1}dz_2 \wedge dz_3 \wedge dz_4|_{Q_t}.
\end{equation*} In this paper, the metrics $g_{co,a}$ with subscripts $a$ will always denote the Candelas-de la Ossa metrics on the resolved conifold $\hat{Q}$, and the metrics $g_{co,t}$ with subscripts $t$ will always denote the Candelas-de la Ossa metrics on the deformed conifolds $Q_t$.\\[0.2cm] In the following we discuss the asymptotic behavior of the CO-metrics $g_{co,t}$. Consider the smooth map \begin{equation*} \Phi:\Sigma \times (1,\infty) \rightarrow \Sigma \times (0,\infty) \end{equation*} defined by \begin{equation*} \Phi (A\cdot SO(2),\mathbf{r}_1)= ( A\cdot SO(2), \mathbf{r}_0(\mathbf{r}_1)) \end{equation*} where \begin{equation*} \mathbf{r}_0(\mathbf{r}_1) =\left(\frac{1}{2}(\sinh(2\cosh^{-1}(\mathbf{r}_1^2))-2\cosh^{-1}(\mathbf{r}_1^2)) \right)^\frac{1}{4}. \end{equation*} Note that \begin{equation} \label{eq:6-2} \mathbf{r}_1=\left(\cosh \left ( f^{-1}\left( 2\mathbf{r}_0^4 \right) \right)\right)^\frac{1}{2} \end{equation} where $f(s)=\sinh(2s)-2s$. Define $x_1= \phi_0\circ\Phi\circ\phi_1^{-1}$, which is a diffeomorphism from $Q_1\backslash \{\mathbf{r}_1=1\}$ to $Q_{0,sm}$. Then $\mathbf{r}_0(x_1(x))=\mathbf{r}_0(\mathbf{r}_1(x))$ for $x \in Q_1\backslash \{\mathbf{r}_1=1\}$. Define $\Upsilon_1=x_1^{-1}$. It is shown in \cite{Chan} that the following hold for some constants $D_{1,k}$, $D_{2,k}$, and $D_{3,k}$ as $\mathbf{r}_0 \rightarrow \infty$: \begin{equation} \label{eq:3-4} \Upsilon_1^*\omega_{co,1} = \omega_{co,0}, \end{equation} \begin{equation} \label{eq:3-4-1} |\nabla_{g_{co,0}}^k(\Upsilon_1^*\Omega_1-\Omega_0)|_{g_{co,0}} \leq D_{1,k}\mathbf{r}_0^{\frac{2}{3}(-3-k)}, \end{equation} \begin{equation} \label{eq:6-1} |\nabla_{g_{co,0}}^k(\Upsilon_1^*g_{co,1}-g_{co,0})|_{g_{co,0}} \leq D_{2,k}\mathbf{r}_0^{\frac{2}{3}(-3-k)}, \end{equation} and \begin{equation} \label{eq:3-5} |\nabla_{g_{co,0}}^k(\Upsilon_1^*J_1-J_0)|_{g_{co,0}} \leq D_{3,k}\mathbf{r}_0^{\frac{2}{3}(-3-k)} \end{equation} where $J_t$ is the complex structure on $Q_t$. Let $\psi_t:Q_1 \rightarrow Q_t$ be a map such that $\psi_t^*(z_i)=t^{\frac{1}{2}}z_i$. Here $t^\frac{1}{2}$ can be either of the two square roots of $t$. We then have \begin{equation} \label{eq:s} \begin{split} \psi_t^*\mathbf{r}_t=|t|^\frac{1}{2}\mathbf{r}_1, \, &\,\psi_t^*\mathbf{r}_0=|t|^\frac{1}{2}\mathbf{r}_0\\ \psi_t^*\Omega_t=t\Omega_1, \,&\,\psi_t^*\Omega_0=t\Omega_0, \\ \psi_t^*\omega_{co,t}=|t|^\frac{2}{3}\omega_{co,1}, \,&\,\psi_t^*\omega_{co,0}=|t|^\frac{2}{3}\omega_{co,0},\,\, \\ \psi_t^* g_{co,t}=|t|^\frac{2}{3}g_{co,1}, \,\,& \text{and}\, \,\psi_t^* g_{co,0}=|t|^\frac{2}{3} g_{co,0}.\,\, \\ \end{split} \end{equation} The equality $\psi_t^* \omega_{co,t}=|t|^\frac{2}{3}\omega_{co,1}$ follows from the explicit formulas of the K$\ddot{\text{a}}$hler potentials (\ref{potentials}) and the fact that the map $\psi_t$ is biholomorphic. With this understood, $\psi_t^* g_{co,t}=|t|^\frac{2}{3}g_{co,1}$ then follows easily. The rest are trivial. Note that $\nabla_{\psi_t^* g_{co,0}}=\nabla_{|t|^\frac{2}{3}g_{co,0}}=\nabla_{g_{co,0}}$ and $\nabla_{\psi_t^* g_{co,t}}=\nabla_{|t|^\frac{2}{3}g_{co,1}}=\nabla_{g_{co,1}}$ for $t\neq0$. Let $x_t=\psi_t\circ x_1 \circ \psi_t^{-1}$, which is understood as a diffeomorphism from $Q_t\backslash \{\mathbf{r}_t=|t|^\frac{1}{2}\}$ to $Q_{0,sm}$. Note that $x_t$ is independent of the choice of $t^\frac{1}{2}$, and so $\{x_t\}_t$ form a smooth family. Define $\Upsilon_t=x_t^{-1}$. 
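As a quick illustration of the rescaling properties~(\ref{eq:s}), the relation $\psi_t^*\Omega_t=t\,\Omega_1$ can be checked directly on the chart $\{z_1\neq 0\}$ using $\psi_t^*(z_i)=t^{\frac{1}{2}}z_i$:
\begin{equation*}
\psi_t^*\Omega_t=\psi_t^*\left(\frac{1}{z_1}\,dz_2\wedge dz_3\wedge dz_4\Big|_{Q_t}\right)=\frac{1}{t^{\frac{1}{2}}z_1}\,d(t^{\frac{1}{2}}z_2)\wedge d(t^{\frac{1}{2}}z_3)\wedge d(t^{\frac{1}{2}}z_4)\Big|_{Q_1}=t\,\Omega_1.
\end{equation*}
The remaining relations in~(\ref{eq:s}) follow from equally short computations.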
\begin{lemma} \label{lm:4} We have \begin{equation*} x_t^*\omega_{co,0}=\omega_{co,t}, \end{equation*} and for the same constants $D_{1,k}$, $D_{2,k}$ and $D_{3,k}$ as in (\ref{eq:3-4-1}), (\ref{eq:6-1}) and (\ref{eq:3-5}), we have, as $\mathbf{r}_0 \rightarrow \infty$, \begin{equation*} \begin{split} & |\nabla_{g_{co,0}}^k(\Upsilon_t^*\Omega_t-\Omega_0)|_{g_{co,0}} \leq D_{1,k} |t|\mathbf{r}_0^{\frac{2}{3}(-3-k)}, \\ & |\nabla_{g_{co,0}}^k(\Upsilon_t^*g_{co,t}-g_{co,0})|_{g_{co,0}} \leq D_{2,k}|t|\mathbf{r}_0^{\frac{2}{3}(-3-k)}, \,\,\text{and}\\ & |\nabla_{g_{co,0}}^k(\Upsilon_t^*J_t-J_0)|_{g_{co,0}} \leq D_{3,k}|t|\mathbf{r}_0^{\frac{2}{3}(-3-k)}. \end{split} \end{equation*} \end{lemma} \begin{proof} The first equation follows easily. From the rescaling properties (\ref{eq:s}) we have, for $w\in X_0$, \begin{equation*} \begin{split} & |\nabla_{g_{co,0}}^k(\Upsilon_t^*\Omega_t-\Omega_0)|_{g_{co,0}}(w)= |\nabla_{g_{co,0}}^k((\psi_t^{-1})^*\Upsilon_1^*\psi_t^*\Omega_t-\Omega_0)|_{g_{co,0}}(w) \\ = & |\nabla_{g_{co,0}}^k(t (\psi_t^{-1})^*\Upsilon_1^*\Omega_1-\Omega_0)|_{g_{co,0}}(w) =|\nabla_{\psi_t^*g_{co,0}}^k(t \Upsilon_1^*\Omega_1-\psi_t^*\Omega_0)|_{\psi_t^*g_{co,0}}(\psi_t^{-1}(w)) \\ = & |\nabla_{g_{co,0}}^k( t \Upsilon_1^*\Omega_1- t \Omega_0)|_{|t|^\frac{2}{3}g_{co,0}}(\psi_t^{-1}(w)) =|t| |\nabla_{g_{co,0}}^k( \Upsilon_1^*\Omega_1- \Omega_0 )|_{|t|^\frac{2}{3}g_{co,0}}(\psi_t^{-1}(w)) \\ = & |t| |t|^{-\frac{1}{3}(3+k)} |\nabla_{g_{co,0}}^k( \Upsilon_1^*\Omega_1- \Omega_0)|_{g_{co,0}}(\psi_t^{-1}(w)) \\ \leq & |t| |t|^{-\frac{1}{3}(3+k)} D_{1,k}\mathbf{r}_0(\psi_t^{-1}(w))^{\frac{2}{3}(-3-k)} =|t|^{-\frac{1}{3}k} D_{1,k}|t|^{\frac{1}{3}(3+k)}\mathbf{r}_0(w)^{\frac{2}{3}(-3-k)}\\ =&D_{1,k}|t|\mathbf{r}_0(w)^{\frac{2}{3}(-3-k)}. \end{split} \end{equation*} The other two estimates can be carried out in a similar manner. \qed \end{proof} Using the explicit formula (\ref{eq:6-2}), the following lemma is elementary, and the proof is omitted: \begin{lemma} \label{lm:2} As $x\in Q_1\backslash \{\mathbf{r}_1=1\}$ goes to infinity, $\mathbf{r}_1(x)\mathbf{r}_0(x_1(x))^{-1}$ goes to 1. In particular, there is a constant $A>0$ such that \begin{equation*} \frac{1}{A}<\mathbf{r}_1(x)\mathbf{r}_0(x_1(x))^{-1}<A \end{equation*} for any $x\in Q_1$ such that $1 \ll \mathbf{r}_1(x)$. As a result, by the rescaling relation (\ref{eq:s}), for the same constant $A$ we have \begin{equation*} \frac{1}{A}<\mathbf{r}_t(z)\mathbf{r}_0(x_t(z))^{-1}<A \end{equation*} for any $z\in Q_t$ such that $|t|^\frac{1}{2}\ll \mathbf{r}_t(z)$. \end{lemma} Lemma \ref{lm:4} and Lemma \ref{lm:2} imply \begin{corollary} \label{asym} There exists a constant $D_0>0$ such that for any $z\in Q_t$ with $|t|^\frac{1}{2}\ll \mathbf{r}_t(z)$, \begin{equation*} \begin{split} |\nabla_{g_{co,0}}^k(\Upsilon_t^*J_t-J_0)|_{\Upsilon_t^*g_{co,t}}(x_t(z)) \leq D_0|t|\mathbf{r}_t(z)^{\frac{2}{3}(-3-k)} \end{split} \end{equation*} for $k=0,1$. \end{corollary} \subsection{The balanced metrics constructed by Fu-Li-Yau} Using Mayer-Vietoris sequence, the change in the second Betti numbers before and after a conifold transition is given in the following proposition: \begin{proposition} \cite{Reid} Let $k$ be the maximal number of homologically independent exceptional rational curves in $\hat{X}$. Then the second Betti numbers of $\hat{X}$ and $X_t$ satisfy the equations \begin{equation*} \begin{split} & b_2(X_t)=b_2(\hat{X})-k. 
\\ \end{split} \end{equation*} \end{proposition} From this proposition one sees that the second Betti number drops after each transition, and when it becomes 0, the resulting threefold is never K$\ddot{\text{a}}$hler. Because of this, when considering Reid's conjecture, a class of threefolds strictly containing the K$\ddot{\text{a}}$hler Calabi-Yau ones has to be taken into account. A particular question of interest would be finding suitable geometric structures that are possessed by every member in this class of threefolds. One achievement in this direction is the work of \cite{FLY} in which the following theorem is proved: \begin{theorem} Let $\hat{X}$ be a K$\ddot{\text{a}}$hler Calabi-Yau threefold. Then after a conifold transition, for sufficiently small $t$, $X_t$ admits a balanced metric. \end{theorem} In the following we review the results in \cite{FLY} in more detail. First, a balanced metric $\hat{\omega}_0$ on $X_{0,sm}$ is constructed by replacing the original metric $\omega$ near the ODPs with the CO-cone metric $\omega_{co,0}$. One of the main features of this construction is that $\omega^2$ and $\hat{\omega}_0^2$ differ by a $\partial \bar{\partial}$-exact form. It is not hard to see that their construction can be used to construct a family of balanced metrics $\{ \hat{\omega}_{co,a} | a>0\}$ on $\hat{X}$ converging smoothly, as $a$ goes to 0, to the metric $\hat{\omega}_0$ on compactly embedded open subsets of $\hat{X}\backslash \bigcup C_i \cong X_{0,sm}$, such that $\omega^2$ and all $\hat{\omega}_{co,a}^2$ differ by $\partial \bar{\partial}$-exact forms. The main achievement in \cite{FLY} is the construction of balanced metrics $\tilde{\omega}_t$ on $X_t$. Fix a smooth family of diffeomorphisms $x_t: X_t[\frac{1}{2}] \rightarrow X_0[\frac{1}{2}]$ such that $x_0=id$. Let $\varrho(s)$ be a decreasing cut-off function such that $\varrho(s)=1$ when $s \leq \frac{5}{8}$ and $\varrho(s)=0$ when $s \geq \frac{7}{8}$. Define a cut-off function $\varrho_0$ on $X_0$ such that $\varrho_0|_{X_0[1]}=0$, $\varrho_0|_{V_0(\frac{1}{2})}=1$, and $\varrho_0|_{V_0(\frac{1}{2},1)}=\varrho(\mathbf{r}_0)$. Also define a cut-off function $\varrho_t$ on $X_t$ such that $\varrho_t|_{X_t[\frac{1}{2}]}=x_t^*\varrho_0$ and $\varrho_t|_{V_t(\frac{1}{2})}=1$. Denote $\hat{\Omega}_0=\hat{\omega}_0^2=i\partial \bar{\partial}(f_0 \partial \bar{\partial}f_0)$, and let \begin{equation*} \Phi_t=x_t^*(\hat{\Omega}_0-i\partial \bar{\partial}(\varrho_0\cdot f_0(\mathbf{r}_0^2) \partial \bar{\partial} f_0(\mathbf{r}_0^2)))+i\partial \bar{\partial}(\varrho_t\cdot f_t(\mathbf{r}_t^2) \partial \bar{\partial} f_t(\mathbf{r}_t^2)). \end{equation*} We can decompose the 4-form $\Phi_t=\Phi_t^{3,1}+\Phi_t^{2,2}+\Phi_t^{1,3}$. It is proved in \cite{FLY} that for $t\neq 0$ sufficiently small the (2,2) part $\Phi_t^{2,2}$ is positive and over $V_t(\frac{1}{2})$ it coincides with $\omega_{co,t}^2$. Let $\omega_t$ be the positive (1,1)-form on $X_t$ such that $\omega_t^2=\Phi_t^{2,2}$. Neither $\omega_t$ nor $\omega_t^2$ is closed in general. The balanced metric $\tilde{\omega}_t$ constructed in \cite{FLY} satisfies the condition $\tilde{\omega}_t^2=\Phi_t^{2,2}+\theta_t+\bar{\theta}_t$ where $\theta_t$ is a (2,2)-form satisfying the condition that, for any $\kappa>-\frac{4}{3}$, \begin{equation} \label{eq:0-1} \lim_{t \rightarrow 0} (|t|^\kappa \sup_{X_t}|\theta_t|_{g_t}^2)=0 \end{equation} where $g_t$ is the Hermitian metric associated to $\omega_t$.
The proof of this limit makes use of the expression \begin{equation} \label{eq:0-2} \theta_t=\partial\bar{\partial}^*\partial^*\gamma_t \end{equation} for a unique (2,3)-form $\gamma_t$ satisfying the equation $E_t(\gamma_t)=-\partial \Phi^{1,3}_t$ and $\gamma_t\perp \ker{E_t}$ where \begin{equation*} E_t=\partial\bar{\partial}\bar{\partial}^*\partial^*+\partial^*\bar{\partial}\bar{\partial}^*\partial+\partial^*\partial \end{equation*} and the $\ast$-operators are with respect to the metric $g_t$. It was proved in \cite{FLY} that $\partial\gamma_t=0$. Moreover, the $(2,3)$-form $\partial \Phi^{1,3}_t$ is supported on $X_t[1]$, so there is a constant $C>0$ independent of $t$ such that \begin{equation} \label{eq:1} |\partial \Phi^{1,3}_t|_{C^k}<C|t|. \end{equation} We will denote $|\cdot|_t$ the norm w.r.t. $\tilde{g}_t$, $|\cdot|_{co,t}$ the norm w.r.t. $g_{co,t}$, and $|\cdot|$ the norm w.r.t. $g_t$. We will denote $dV_t$ the volume w.r.t. $\tilde{g}_t$, $dV_{co,t}$ the volume w.r.t. $g_{co,t}$, and $dV$ the volume w.r.t. $g_t$. Because of (\ref{eq:0-1}) we have the following lemma concerning a uniformity property between the metrics $g_t$ and $g_{co,t}$. \begin{lemma} \label{lm:0-1} There exists a constant $\tilde{C}>1$ such that for any small $t\neq 0$, over the region $V_t(1)$ we have \begin{equation*} \tilde{C}^{-1}\tilde{g}_t \leq g_{co,t} \leq \tilde{C}\tilde{g}_t. \end{equation*} Consequently, we have constants $\tilde{C}_1>1$ and $\tilde{C}_2>1$ such that for any $t\neq 0$ small enough, \begin{equation*} \tilde{C}_1^{-1}dV_t \leq dV_{co,t} \leq \tilde{C}_1 dV_t \end{equation*} and \begin{equation*} \tilde{C}_2^{-1}|\cdot|_t \leq |\cdot|_{co,t} \leq \tilde{C}_2 |\cdot|_t. \end{equation*} \end{lemma} Now we introduce our conventions on (negative) Laplacians. Let $\omega$ be a Hermitian metric on $X$ and $n=\dim_{\mathbb{C}}X$. For any (1,1)-form $\varphi$ on $X$, define $\Lambda_{\omega}\varphi:=\frac{\varphi\wedge\omega^{n-1}}{\omega^n}$. For a smooth function $f$ on $X$, define $\Delta_{\omega}f=\sqrt{-1}\Lambda_{\omega} \partial\bar{\partial}f$. In local coordinates, if $\omega=\frac{\sqrt{-1}}{2}g_{i\bar{j}}dz_i\wedge d\bar{z}_j$ and $\varphi=\varphi_{i\bar{j}}dz_i\wedge d\bar{z}_j$, then $\sqrt{-1}\Lambda_{\omega}\varphi=2g^{i\bar{j}}\varphi_{i\bar{j}}$. We denote $\tilde{\Delta}_t:=\Delta_{\tilde{\omega}_t}$ and $\hat{\Delta}_a:=\Delta_{\hat{\omega}_a}$. \\[0.2cm] \subsection{Hermitian-Yang-Mills equation} \iffalse Given a smooth vector bundle $\mathcal{E}$ over a smooth manifold $X$, let $\Omega^k(\mathcal{E})$ be the bundle of $\mathcal{E}$-valued $k$-forms, and let $\Gamma(\Omega^{k}(\mathcal{E}))$ be the space of smooth sections of $\Omega^k(\mathcal{E})$. The space $\Gamma(\Omega^0(\mathcal{E}))$ is often denoted by $\Gamma(\mathcal{E})$. A connection on $\mathcal{E}$ is an operator $\nabla_A:\Gamma(\mathcal{E}) \rightarrow \Gamma(\Omega^1(\mathcal{E}))$ satisfying the condition \begin{equation*} \nabla_A(fs)=df\otimes s+f\nabla_As \end{equation*} for any smooth function $f$ on $X$. With respect to a given a local frame of the bundle, $\nabla_A$ can be written as $d+A$ where $A=({A_\alpha}^\beta)$ is a local matrix-valued 1-form. We also call $A$ the connection. In the remaining of the paper $\mathcal{E}$ will always be a holomorphic vector bundle over a complex manifold $X$. 
In this setting we have the splitting $\nabla_A=\nabla_A^{1,0}+\nabla_A^{0,1}$ where $\nabla_A^{1,0}$ and $\nabla_A^{0,1}$ are the composites of $\nabla_A$ with the projections onto the spaces of 1-forms of the indicated types. The notation $\partial_A$ will be used to denote the composition of the induced operator of $\nabla_A^{1,0}$ on $\Gamma(\Omega^{p,q}(\mathcal{E}))$ and the natural anti-symmetrization projection from $\Gamma(\Omega^{p,q}(\mathcal{E})\otimes \Omega^{1,0})$ to $\Gamma(\Omega^{p+1,q}(\mathcal{E}))$. The notation $\bar{\partial}_A$ can be defined similarly. A connection is said to be compatible with the holomorphic structure $\bar{\partial}$ of $\mathcal{E}$ if $\nabla_A^{0,1}=\bar{\partial}$. A smooth Hermitian metric $H$ over $\mathcal{E}$ is a pointwise positive Hermitian pairing on the fibers of $\mathcal{E}$ which depends smoothly on the base. The symbol $\langle \cdot,\cdot \rangle_H$ will be used to denote this pairing. A connection $\nabla_A$ is said to be $H$-unitary if for any sections two $s_1$ and $s_2$ of $\mathcal{E}$, we have \begin{equation*} d\langle s_1,s_2\rangle_H=\langle \nabla_As_1,s_2\rangle_H+\langle s_1,\nabla_As_2\rangle_H. \end{equation*} Given a Hermitian vector bundle $(\mathcal{E},H)$ over a Hermitian manifold $(X,g)$ and an $H$-unitary connection $\nabla_A$ on $\mathcal{E}$, we can consider the induced pairing $\langle \cdot,\cdot \rangle_{H,g}$ on $\Gamma(\Omega^{k}(\mathcal{E}))$ and the connection $\nabla_{A,g}$ on the bundle $\Omega^k(\mathcal{E})$ induced from $\nabla_A$ and the Chern connection of $g$. Then given any sections $s_1$ and $s_2$ of $\Omega^k(\mathcal{E})$, we have \begin{equation*} d\langle s_1,s_2\rangle_{H,g}=\langle \nabla_{A,g}s_1,s_2\rangle_{H,g}+\langle s_1,\nabla_{A,g}s_2\rangle_{H,g}. \end{equation*} (In our convention, we have $\partial\langle s_1,s_2\rangle_{H,g}=\langle \nabla_{A,g}^{1,0}s_1,s_2\rangle_{H,g}+\langle s_1,\nabla_{A,g}^{0,1}s_2\rangle_{H,g}$.) Given a Hermitian metric on $\mathcal{E}$, there is a unique $H$-unitary connection $\nabla_H$ compatible with the holomorphic structure. In fact, with respect to a local holomorphic frame, $H$ can be viewed as a matrix-valued function $(H_{\alpha\bar{\beta}})$ and $\nabla_H$ can be written as $d+\partial HH^{-1}$. We call $\nabla_H$ the Chern connection for $H$. Given an $H$-unitary connection $\nabla_A$ on $\mathcal{E}$, without causing confusion, we use the same notation $\nabla_{A,g}$ to denote the induced connection on $\Omega^{k}(\text{End}(\mathcal{E}))$. Both $\partial_{A,g}$ and $\bar{\partial}_{A,g}$ are similarly understood. The notation $\langle \cdot,\cdot \rangle_H$ can also mean the pairing between sections of the endomorphism bundle End$(\mathcal{E})$. To be more precise, for a section $h \in \Gamma(\text{End}(\mathcal{E}))$ of End$(\mathcal{E})$, respect to a local holomorphic frame of $\mathcal{E}$ we define $h^{*_H}=Hh^*H^{-1}$ to be the dual of $h$ with respect to the metric $H$. Here $h^*$ is the complex conjugate of the matrix $h=({h_\alpha}^\beta)$. It is easy to check that $h^{*_H}$ is again a section of $\text{End}(\mathcal{E})$, and we denote $\langle h_1,h_2 \rangle_H:=\text{tr}(h_1h_2^{*_H})$ for $h_1,h_2 \in \Gamma(\text{End}(\mathcal{E}))$. We say that $h$ is $H$-symmetric if $h^{*_H}=h$ and $H$-antisymmetric if $h^{*_H}=-h$. Define the norm of $h$ w.r.t. $H$ to be $|h|_H=\langle h,h \rangle_H^\frac{1}{2}$. 
Similarly, we use $\langle \cdot,\cdot \rangle_{H,g}$ to denote the induced pairing on $\Gamma(\Omega^{k}(\text{End}(\mathcal{E})))$. Suppose now that the metric $g$ is balanced, then the following properties hold: \fi Let $H$ be a Hermitian metric on a holomorphic vector bundle $\mathcal{E}$ over a complex manifold $X$ endowed with a balanced metric $g$. Let $\nabla_A=\partial_A+\bar{\partial}_A$ be an $H$-unitary connection on $\mathcal{E}$. We denote by $\langle \cdot,\cdot \rangle_{H,g}$ the pointwise pairing induced by $H$ and $g$ between the $\mathcal{E}$-valued forms or the $\text{End}(\mathcal{E})$-valued forms. The following proposition will be used in later calculations. \begin{proposition} \cite{LT} For $h_1,h_2 \in \Gamma(\text{End}(\mathcal{E}))$, we have \begin{equation*} \int_X \langle \partial_Ah_1,\partial_Ah_2 \rangle_{H,g}\,dV_g=\sqrt{-1}\int_X \langle \Lambda_g\bar{\partial}_A\partial_Ah_1,h_2 \rangle_{H,g}\,dV_g \end{equation*} and \begin{equation*} \int_X \langle \bar{\partial}_Ah_1,\bar{\partial}_Ah_2 \rangle_{H,g}\,dV_g=-\sqrt{-1}\int_X \langle \Lambda_g\partial_A\bar{\partial}_Ah_1,h_2 \rangle_{H,g}\,dV_g.\\[0.3cm] \end{equation*} \end{proposition} In a local holomorphic frame of $\mathcal{E}$, the curvature of a connection $\nabla_A$ is given by \begin{equation*} F_A:=dA-A \wedge A, \end{equation*} which is an End$(\mathcal{E})$-valued 2-form. Given a Hermitian metric $H$ over a bundle $\mathcal{E}$, the curvature for the Chern connection can then be locally computed to be \begin{equation*} F_H=\bar{\partial}(\partial HH^{-1}). \end{equation*} Taking the trace of the curvature 2-form with respect to a Hermitian metric $\omega$, we obtain the mean curvature $\sqrt{-1}\Lambda_{\omega}F_H$ of $H$. It is not hard to see that $\sqrt{-1}\Lambda_{\omega}F_H$ is $H$-symmetric. \begin{definition} A Hermitian metric $H$ on $\mathcal{E}$ satisfies the Hermitian-Yang-Mills equation with respect to $\omega$ if \begin{equation*} \sqrt{-1}\Lambda_{\omega}F_H=\lambda I \end{equation*} for some constant $\lambda$. Here $I$ denotes the identity endomorphism of $\mathcal{E}$. \end{definition} Next we introduce slope stability. For a given Hermitian metric $H$ on $\mathcal{E}$, the first Chern form of $\mathcal{E}$ with respect to $H$ is defined to be \begin{equation*} c_1(\mathcal{E},H)=\frac{\sqrt{-1}}{2\pi}\text{tr}F_H. \end{equation*} It is independent of $H$ up to a $\partial\bar{\partial}$-exact form, and is a representative of the topological first Chern class $c_1(\mathcal{E})\in H^2(X,\mathbb{C})$. The $\omega$-degree of $\mathcal{E}$ with respect to a Hermitian metric $\omega$ is defined to be \begin{equation*} \text{deg}_{\omega}(\mathcal{E}):=\int_{X} c_1(\mathcal{E},H)\wedge \omega^{n-1} \end{equation*} where $n=\dim_{\mathbb{C}}X$. This is not well-defined for a general $\omega$. It is, however, well-defined for a Gauduchon metric $\omega$ since $\partial\bar{\partial}(\omega^{n-1})=0$ and $c_1(\mathcal{E},H)$ is independent of $H$ up to $\partial\bar{\partial}$-exact forms. In particular, the degree with respect to a balanced metric is well-defined. Note that the $\omega$-degree is a topological invariant, i.e., it depends only on $c_1(\mathcal{E})$, if $\omega$ is balanced. We restrict ourselves from now on to the case when $\omega$ is Gauduchon.
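To spell out the well-definedness claim above (this is only an unwinding of the preceding remark, in the same notation, and assuming, as in the applications below, that $X$ is compact): if $H$ and $H'$ are two Hermitian metrics on $\mathcal{E}$, then locally $\text{tr}F_H=\bar{\partial}\partial\log\det H$, so
\begin{equation*}
c_1(\mathcal{E},H)-c_1(\mathcal{E},H')=\frac{\sqrt{-1}}{2\pi}\,\bar{\partial}\partial\log\frac{\det H}{\det H'}\:,
\end{equation*}
where $\log(\det H/\det H')$ is a globally defined smooth function on $X$. Integrating by parts twice moves $\bar{\partial}\partial$ onto $\omega^{n-1}$, so the two candidate values of $\text{deg}_{\omega}(\mathcal{E})$ computed with $H$ and with $H'$ differ by a constant multiple of $\int_X \log(\det H/\det H')\,\partial\bar{\partial}(\omega^{n-1})$, which vanishes when $\omega$ is Gauduchon.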
For an arbitrary coherent sheaves $\mathcal{F}$ of $\mathcal{O}_X$-modules of rank $s>0$, we define deg$_{\omega}(\mathcal{F}):=\text{deg}_{\omega}(\det\mathcal{F})$ where $\det \mathcal{F}:=(\Lambda^s \mathcal{F})^{**}$ is the determinant line bundle of $\mathcal{F}$. We define the $\omega$-slope of $\mathcal{F}$ to be $\mu_{\omega}(\mathcal{F}):=\frac{ \text{deg}_{\omega}(\mathcal{F})}{s}$. \begin{definition} A holomorphic vector bundle $\mathcal{E}$ is said to be $\omega$-(semi)stable if $\mu_{\omega}(\mathcal{F})<(\leq)\mu_{\omega}(\mathcal{E})$ for every coherent subsheaf $\mathcal{F}\hookrightarrow\mathcal{E}$ with $0<\text{rank}\,\mathcal{F}<\text{rank}\,\mathcal{E}$. A holomorphic vector bundle $\mathcal{E}$ is said to be $\omega$-polystable if $\mathcal{E}$ is a direct sum of $\omega$-stable bundles all of which have the same $\omega$-slope. \end{definition} The following theorem generalizing \cite{UY} was proved by Li and Yau \cite{LY1}: \begin{theorem} \label{LY} On a complex manifold $X$ endowed with a Gauduchon metric $\omega$, a holomorphic vector bundle $\mathcal{E}$ is $\omega$-polystable if and only if it admits a Hermitian-Yang-Mills metric with respect to $\omega$.\\[0.2cm] \end{theorem} \subsection{Controls of constants} \iffalse \begin{lemma} \label{lm:001} Assume that there are constants $C_k>0$, $k\geq 0$, on $B$ such that $\frac{1}{C_0}g_e\leq g \leq C_0g_e$ and $|\nabla_e^k g|_{g_e} \leq C_k$ for $k \geq 1$. Then there are constants $E_k>0$, $k\geq 0$, depending only on $C_k$, $k\geq 0$, such that when viewing $\Gamma_g=(\partial g_{i\bar{j}}g^{k\bar{j}})$ as a matrix-valued 1-form on $B$, \begin{equation*} |\nabla_g^k \,\,\Gamma_g|_{g} \leq E_k. \end{equation*} \end{lemma} \begin{proof} This follows easily from the above explicit expression for $\Gamma_g$ and the fact that each $g^{k\bar{j}}$ is a polynomial in $\det(g)^{-1}$ and the $g_{i\bar{j}}$'s. \qed \end{proof} A similar discussion implies \begin{corollary} \label{co:001} Let $\mathcal{E}$ be a trivial holomorphic vector bundle over $B$ and fix a trivialization $\mathcal{E}\cong \mathcal{O}^r$. Let $H_1$ be a Hermitian metric on $\mathcal{E}$ viewed as a matrix-valued function on $B$ under this fixed trivialization. Let $g$ be a Hermitian metric on $B$ satisfying the assumptions in Lemma \ref{lm:001}. Assume further that there are constants $C'_k$, $k\geq 0$, such that $\frac{1}{C'_0}I <H_1 <C'_0I$ and $|\nabla_{H,g_e}^k H_1|_{H,g_e} \leq C'_k$ where $I$ is the constant matrix-valued function on $B$ with value the identity $r\times r$-matrix. Then the matrix-valued 1-form $\partial H_1H_1^{-1}$, which is the Chern connection of $H_1$, satisfies the following: there are constans $E'_k>0$, $k\geq 0$, depending only on $C_k$ and $C'_k$ for $k\geq 0$ such that \begin{equation*} |\nabla_{H_1,g}^k \left(\partial H_1H_1^{-1}\right)|_{H_1,g} \leq E'_k. \end{equation*} \end{corollary} \begin{proposition} \label{pr:0-1} Let $B$, $g_e$, $g$, $\mathcal{E}$ be defined as above. Let $H$ and $H_1$ be two Hermitian metrics on $\mathcal{E}$ such that under some trivialization $\mathcal{E}\cong\mathcal{O}^r$ $H$ can be regarded as a matrix-valued function on $B$ taking the constant value $I$, the $r\times r$ identity matrix, and $H_1$ as another Hermitian matrix-valued function. Assume the following: (i) There are constants $C_k>0$, $k\geq 0$, on $B$ such that $\frac{1}{C_0}g_e\leq g \leq C_0g_e$ and $|\nabla_e^k g|_{g_e} \leq C_k$ for $k \geq 1$. 
(ii) There are constants $C'_k$, $k\geq 0$, such that $\frac{1}{C'_0}H <H_1 <C'_0H$ and $|\nabla_{H,g_e}^k h_1|_{H,g_e} \leq C'_k$ where $h_1=H_1H^{-1}$. Then for each $l \geq 0$ there is a constant $D_l>0$ depending only on $C_k$ and $C'_k$ such that for any End($\mathcal{E}$)-valued function $h$ on $B$, \begin{equation*} D_l^{-1} \sum_{j=0}^l|\nabla_{H,g_e}^j h|_{H,g_e}(q) \leq \sum_{j=0}^l|\nabla_{H_1,g}^j h|_{H_1,g}(q) \leq D_l \sum_{j=0}^l|\nabla_{H,g_e}^j h|_{H,g_e}(q) \end{equation*} for any point $q \in B$. \end{proposition} \begin{proof} See \cite{Ch}.\qed \end{proof} \fi Let $\mathcal{E}$ be a holomorphic vector bundle over a compact Hermitian manifold $(X,g)$, $H$ a Hermitian metric on $\mathcal{E}$, and $\nabla_{H,g}$ the connection on $\mathcal{E}\otimes (\Omega^1)^{\otimes k}$ induced from the Chern connections of $H$ and $g$. Let $\mathbf{r}$ be a smooth positive function on $X$. We can define the following weighted norms on the usual Sobolev spaces $L^p_k(\mathcal{E})$ over $X$: for each $\sigma \in L^p_k(\mathcal{E})$, \begin{equation*} \Arrowvert \sigma \Arrowvert_{L^p_{k,\beta}} :=\left( \sum_{j=0}^{k} \int_{X} |\mathbf{r}^{-\frac{2}{3}\beta+\frac{2}{3}j}\nabla_{H,g}^j\sigma|_{H,g}^p\mathbf{r}^{-4}\, dV_g \right)^\frac{1}{p}. \end{equation*} We denote by $L^p_{k,\beta}(\mathcal{E})$ the same space as $L^p_k(\mathcal{E})$ but endowed with the above norm. Here $dV_g$ is the volume form of $g$. There are also the weighted $C^k$-norms: \begin{equation*} \Arrowvert \sigma \Arrowvert_{C^k_{\beta}} :=\sum_{j=0}^{k} \sup_{X} |\mathbf{r}^{-\frac{2}{3}\beta+\frac{2}{3}j}\nabla_{H,g}^j \sigma |_{H,g}. \end{equation*} We denote by $C^k_{\beta}(\mathcal{E})$ the same space as $C^k(\mathcal{E})$ but endowed with the above norm.\\[0.2cm] Now let $\{\phi_ {z}:B_z\rightarrow U_z\subset X\}_{z\in X}$ be a system of complex coordinate charts where each $\phi_ {z}$ maps the Euclidean ball of radius $\rho$ in $\mathbb{C}^3$ centered at 0 homeomorphically to $U_z$, an open neighborhood of $z$, such that $\phi_ {z}(0)=z$. Over each $U_z$ define $\bar{g}$ to be $\mathbf{r}(z)^{-\frac{4}{3}}g$. Let $g_e$ denote the standard Euclidean metric on $B_z\subset \mathbb{C}^3$, and $\nabla_e$ the Euclidean derivatives. For $m\geq 0$, let $R_m>0$ be constants such that for any $z \in X$ and $y\in U_z$, \begin{equation} \label{R-1} \frac{1}{R_0}\mathbf{r}(z) \leq \mathbf{r}(y) \leq R_0\mathbf{r}(z) \end{equation} and \begin{equation} \label{R-2} |\nabla_e^m \mathbf{r}|_{g_e}(y) \leq R_m\mathbf{r}(y). \end{equation} For $k\geq 0$, let $C_k>0$ be constants such that for any $z\in X$, \begin{equation} \label{R-3} \frac{1}{C_0}g_e \leq \phi_{z}^*\bar{g} \leq C_0g_e \end{equation} over $B_z$ where $g_e$ is the Euclidean metric, and \begin{equation} \label{R-4} \Arrowvert \phi_{z}^*\bar{g} \Arrowvert_{C^k(B_z,g_e)} \leq C_k. \end{equation} We may deduce the following version of Sobolev Embedding Theorem. \begin{theorem} \label{th:so} For each $l,p,q,r$ there exists a constant $C>0$ depending only on the constants $R_m$ and $C_k$ above such that \begin{equation*} \Arrowvert \sigma \Arrowvert_{L^r_{l,\beta}}\leq C \Arrowvert \sigma \Arrowvert_{L^p_{q,\beta}} \end{equation*} whenever $\frac{1}{r}\leq \frac{1}{p} \leq \frac{1}{r}+\frac{q-l}{6}$ and \begin{equation*} \Arrowvert \sigma \Arrowvert_{C^l_{\beta}}\leq C \Arrowvert \sigma \Arrowvert_{L^p_{q,\beta}} \end{equation*} whenever $\frac{1}{p} < \frac{q-l}{6}$. \end{theorem} The proof of the above result is standard. 
Simply put, we integrate over $z \in X$ the Sobolev inequalities on each chart $U_z$, and use the bounds (\ref{R-1})-(\ref{R-4}) to help control the constants of the global inequalities. In fact, the method of this proof is useful in controlling not only the Sobolev constants, but the constants in elliptic estimates as well. Consider a linear differential operator $P:C^\infty(\mathcal{E}) \rightarrow C^\infty(\mathcal{E})$ of order $m$ on the space of smooth sections of $\mathcal{E}$. Assume also that $P$ is strongly elliptic, i.e., its principal symbol $\sigma(P)$ satisfies the condition that there is a constant $\lambda>0$ such that $\langle \sigma_{\xi}(P)(v),v \rangle \geq \lambda ||v||^2 $ for any $v \in \mathbb{R}^r$ ($r=\text{rank}\,\mathcal{E}$) and $\xi \in \mathbb{R}^6$ with norm $||\xi||=1$. \begin{proposition} \label{pr:uniform} Assume there are constants $\Lambda_k>0$, $k\geq 0$, such that for any $z \in X$ there is a trivialization of $\mathcal{E}|_{U_z}$ under which the operator $P$ above takes the form \begin{equation*} P=\sum_{|\alpha|\leq m} A_{\alpha}\frac{\partial^{|\alpha|}}{\partial w_1^{\alpha_1}...\partial \bar{w}_3^{\alpha_6}} \end{equation*} in the coordinates $(w_1,w_2,w_3)\in B_z \subset \mathbb{C}^3$, and the matrix-valued coefficient functions $A_\alpha$ satisfy \begin{equation*} |\nabla_{e}^k A_\alpha|_{g_e} \leq \Lambda_k \end{equation*} for all $\alpha$ and $k$. Here $\alpha=(\alpha_1,...,\alpha_6)$, $\alpha_i \geq 0$, are the multi-indices and $|\alpha|=\alpha_1+...+\alpha_6$. Assume also that there is a Hermitian metric $H$ on $\mathcal{E}$ and constants $C'_k>0$ for $k\geq 0$, such that when $H$ is viewed as a matrix-valued function on $U_z$ under the above frames, we have ${C'_0}^{-1}I \leq H \leq C'_0I$ and $|\nabla_{e}^k H|_{g_e} \leq C'_k$ on $U_z$ for any $k$ and $z \in X$. Then there exists a constant $C>0$ depending only on $p$, $l$, $m$, $\beta$, $\lambda$, $\Lambda_k$, $R_m$, $C_k$, and $C'_k$ such that for any $\sigma \in C^\infty(\mathcal{E})$, we have \begin{equation*} \Arrowvert \sigma \Arrowvert_{L^p_{l+m,\beta}}\leq C \left( \Arrowvert P(\sigma) \Arrowvert_{L^p_{l,\beta}}+\Arrowvert \sigma \Arrowvert_{L^2_{0,\beta}} \right).\\[0.2cm] \end{equation*} \end{proposition} \section{Uniform coordinate systems} In this section we will construct coordinate systems with special properties over $X_{0,sm}$ and over each $X_t$ for small $t\neq 0$. Later we will mainly be using the weighted Sobolev spaces, and the discussions in Section 2 show that these coordinate systems help provide uniform control of the constants appearing in the weighted versions of the Sobolev inequalities and elliptic estimates. The use of weighted Sobolev spaces is now standard in gluing constructions or desingularizations of spaces with conical singularities. See \cite{LM} and \cite{Pa} for more details. The main goal of this section is to prove the following theorem.
\begin{theorem} \label{th:0} There is a constant $\rho>0$ such that, for any $t$ ($t$ can be zero), at each point $z\in X_t$ (or $z \in X_{0,sm}$ when $t=0$), there is an open neighborhood $U_z\subset X_t$ (or $U_z\subset X_{0,sm}$ when $t=0$) of $z$ and a diffeomorphic map $\phi_{t,z}:B_z\rightarrow U_z$ from the Euclidean ball of radius $\rho$ in $\mathbb{C}^3$ centered at 0 to $U_z$ mapping 0 to $z$ so that one has the following properties: \begin{enumerate} \item[(i)] There are constants $R_m>0$, $m \geq 0$, such that for any $t$, $z \in X_t$ (or $z \in X_{0,sm}$ when $t=0$) and $y\in U_z$, \begin{equation} \label{eq:18} \frac{1}{R_0}\mathbf{r}_t(z) \leq \mathbf{r}_t(y) \leq R_0\mathbf{r}_t(z) \end{equation} and \begin{equation} \label{eq:ra-1} |\nabla_e^m \mathbf{r}_t|_{g_e}(y) \leq R_m\mathbf{r}_t(y). \end{equation} \item[(ii)] Over each $U_z$ define $\bar{\tilde{g}}_t$ to be $\mathbf{r}_t(z)^{-\frac{4}{3}}\tilde{g}_t$. Then for each $k\geq 0$, there is a constant $C_k$ independent of $t$ and $z\in X_t$ (or $z \in X_{0,sm}$ when $t=0$) such that \begin{equation} \label{eq:19} \frac{1}{C_0}g_e \leq \phi_{t,z}^*\bar{\tilde{g}}_t \leq C_0g_e \end{equation} over $B_z$, and \begin{equation} \label{eq:20} \Arrowvert \phi_{t,z}^*\bar{\tilde{g}}_t \Arrowvert_{C^k(B_z,g_e)} \leq C_k. \end{equation} \end{enumerate} \end{theorem} We first consider the following version of this theorem: \begin{theorem} \label{th:0-1} Theorem \ref{th:0} holds with $B_z$ understood as a Euclidean ball of radius $\rho$ in $\mathbb{R}^6$ centered at 0 and $g_e$ as the standard Euclidean metric on $B_z\subset \mathbb{R}^6$. \end{theorem} The proof of Theorem \ref{th:0-1} begins with a version where $X_t$ are replaced by $Q_t$ and $\tilde{g}_t$ by $g_{co,t}$. \begin{proposition} \label{pr:n-1} There is a constant $\rho>0$ such that, for any $t$ ($t$ can be zero), at each point $z\in Q_t$ ($z \in Q_{0,sm}$ when $t=0$), there is an open neighborhood $U_z\subset Q_t$ (or $U_z\subset Q_{0,sm}$ when $t=0$) of $z$ and a diffeomorphic map $\phi_{t,z}:B_z\rightarrow U_z$ from the Euclidean ball of radius $\rho$ in $\mathbb{R}^6$ centered at 0 to $U_z$ mapping 0 to $z$ so that one has the following properties: \begin{enumerate} \item[(i)] There are constants $R_m>0$, $m \geq 0$, such that for any $t$, $z \in Q_t$ (or $z \in Q_{0,sm}$ when $t=0$) and $y\in U_z$, \begin{equation} \label{eq:Q-1} \frac{1}{R_0}\mathbf{r}_t(z) \leq \mathbf{r}_t(y) \leq R_0\mathbf{r}_t(z) \end{equation} and \begin{equation} \label{eq:Q-2} |\nabla_e^m \mathbf{r}_t|_{g_e}(y) \leq R_m\mathbf{r}_t(y). \end{equation} \item[(ii)] Over each $U_z$ define $\bar{g}_{co,t}$ to be $\mathbf{r}_t(z)^{-\frac{4}{3}}g_{co,t}$. Then for each $k\geq 0$, there is a constant $C_k$ independent of $t$ and $z\in Q_t$ (or $z \in Q_{0,sm}$ when $t=0$) such that \begin{equation} \label{eq:Q-3} \frac{1}{C_0}g_e \leq \phi_{t,z}^*\bar{g}_{co,t} \leq C_0g_e \end{equation} over $B_z$, and \begin{equation} \label{eq:Q-4} \Arrowvert \phi_{t,z}^*\bar{g}_{co,t} \Arrowvert_{C^k(B_z,g_e)} \leq C_k. \end{equation} \end{enumerate} \end{proposition} \begin{proof} While constructing the coordinate charts, we prove (\ref{eq:Q-1}), (\ref{eq:Q-3}), and (\ref{eq:Q-4}) first, leaving (\ref{eq:Q-2}) to be discussed at the end. We begin with the $t=0$ case. Choose $\rho<1$ to be significantly smaller than the injectivity radius of the metric $g_\Sigma$ from (\ref{cone}).
Then at each point $p\in \Sigma$ one has the coordinates $\Phi_p:\tilde{B}_p\rightarrow \Sigma$ from the Euclidean ball of radius $\rho$ in $\mathbb{R}^5$ centered at 0 to $\Sigma$ mapping 0 to $p$ and satisfying the properties that there are constants $\tilde{C}_k>0$, $k\geq 0$, independent of $p$ such that \begin{equation} \label{eq:c0-1} \frac{1}{\tilde{C}_0}\tilde{g}_e \leq \Phi_p^*g_\Sigma \leq \tilde{C}_0\tilde{g}_e \end{equation} over $\tilde{B}_p$, and \begin{equation} \label{eq:c0-2} \Arrowvert \Phi_p^*g_\Sigma \Arrowvert_{C^k(\tilde{B}_p,\tilde{g}_e)} \leq \tilde{C}_k. \end{equation} Here $\tilde{g}_e$ is the standard Euclidean metric on $\tilde{B}_p$. More explicitly, we can simply choose a coordinate chart around one point in $\Sigma$ and then define the coordinates around the other points of $\Sigma$ by using the transitive action of $SO(4)$ on $\Sigma$. Since the metric $g_\Sigma$ is $SO(4)$-invariant, the above constants are easily seen to exist. For $x \in Q_{0,sm}$ with $\phi_0^{-1}(x)=(p,\mathbf{r}_0(x)) \in \Sigma \times (0,\infty)$, define \begin{equation*} j_x: \tilde{B}_p\times (-\rho, \rho) \hookrightarrow \Sigma \times (0,\infty) \end{equation*} which maps $(y,s) \in \tilde{B}_p\times (-\rho, \rho)$ to $j_x(y,s)=(\Phi_p(y), \mathbf{r}_0(x)e^{\frac{3}{2}s})$. Denote the restriction of $j_x$ to $B_x \subset \tilde{B}_p\times (-\rho, \rho)$ by the same notation. Then define $\phi_{0,x}:=\phi_0\circ j_x:B_x \hookrightarrow Q_{0,sm}$. Condition (\ref{eq:Q-1}) is manifest. We have \begin{equation} \label{eq:3} \phi_{0,x}^*g_{co,0} =(d(\mathbf{r}_0(x)^\frac{2}{3}e^s))^2+\mathbf{r}_0(x)^\frac{4}{3}e^{2s}\Phi_p^*g_\Sigma=\mathbf{r}_0(x)^\frac{4}{3}e^{2s}((ds)^2+\Phi_p^*g_\Sigma). \end{equation} We choose $\rho$ small enough that $\frac{1}{2}<e^{2s}<2$ for $s \in (-\rho,\rho)$. Using the identity $g_e=(ds)^2+\tilde{g}_e$, one sees that the bound (\ref{eq:Q-3}) for the $t=0$ case follows from (\ref{eq:c0-1}). Moreover, using the fact that the derivatives of $e^{2s}$ and $(ds)^2+\Phi_p^*g_\Sigma$ are bounded in the Euclidean norm on $B_x$, the bound (\ref{eq:Q-4}) for this case follows.\\[0.2cm] Next we deal with the $t=1$ case. We will use the asymptotically conical behavior of the deformed conifold metrics discussed in Section 2. Recall the explicit diffeomorphism $x_1:Q_1\backslash \{\mathbf{r}_1=1\} \rightarrow Q_{0,sm}$ with inverse $\Upsilon_1$, and also the estimate \begin{equation} \label{eq:2} | \nabla_{g_{co,0}}^k(\Upsilon_1^*g_{co,1}-g_{co,0}) |_{g_{co,0}} \leq D_{2,k}\mathbf{r}_0^{-\frac{2}{3}(3+k)} \end{equation} for $\mathbf{r}_0 \in (R,\infty)$ where $R>0$ is a large number. Let $\overline{V_1(R)}$ be the compact subset of $Q_1$ where $\mathbf{r}_1\leq R$. We will specify the choice of $R$ later. It is easy to see that the desired neighborhood $U_w$ exists for $w$ inside $\overline{V_1(R)}$. In fact, for $w\in \overline{V_1(R)}$ we can even choose $B_w$ to be a Euclidean ball of fixed small radius in $\mathbb{C}^3$ with the real coordinates taken from the real and imaginary parts of the complex coordinates. Therefore we focus on $Q_1\backslash \overline{V_1(R)}$. For $w \in Q_1\backslash \overline{V_1(R)}$, define \begin{equation*} \phi_{1,w}:=\Upsilon_1\circ \phi_{0,x_1(w)}: B_w \rightarrow Q_1\backslash \overline{V_1(R)}. \end{equation*} Here we identify $B_w$ with $B_{x_1(w)}$.
In other words, we define the chart around $w \in Q_1\backslash \overline{V_1(R)}$ by pushing forward the chart around $x_1(w)$ via $\Upsilon_1$. Property (\ref{eq:Q-1}) is clear in view of Lemma \ref{lm:2}. \\[0.2cm] From the $k=0$ case of (\ref{eq:2}) and (\ref{eq:Q-3}) for the $t=0$ case, (\ref{eq:Q-3}) holds for $t=1$ for a constant independent of $w\in Q_1\backslash \overline{V_1(R)}$ when $R$ is large enough. We have \begin{equation} \label{eq:15} \begin{split} \phi_{1,w}^*\left(\mathbf{r}_1(w)^{-\frac{4}{3}}g_{co,1}\right) =\mathbf{r}_1(w)^{-\frac{4}{3}} \phi_{0,x_1(w)}^*\left( \Upsilon_1^*g_{co,1}-g_{co,0}\right)+\left( \mathbf{r}_1(w)^{-\frac{4}{3}} \phi_{0,x_1(w)}^*g_{co,0} \right) \\ \end{split} \end{equation} The second term in the RHS of (\ref{eq:15}) is dealt with in a way similar to the $t=0$ case as follows. By (\ref{eq:3}) we can write \begin{equation*} \begin{split} \mathbf{r}_1(w)^{-\frac{4}{3}} \phi_{0,x_1(w)}^*g_{co,0} =&\mathbf{r}_1(w)^{-\frac{4}{3}}\mathbf{r}_0(x_1(w))^\frac{4}{3}e^{2s}\left((ds)^2+\Phi_p^*g_\Sigma\right) \end{split} \end{equation*} Lemma \ref{lm:2} implies that for $R$ large enough we have \begin{equation*} |\mathbf{r}_1(w)^{-\frac{4}{3}}\mathbf{r}_0(x_1(w))^\frac{4}{3}| <A, \end{equation*} where $A$ is independent of $w\in Q_1\backslash \overline{V_1(R)}$, and from this we obtain, as in the $t=0$ case, \begin{equation} \label{eq:9} \Arrowvert \mathbf{r}_1(w)^{-\frac{4}{3}} \phi_{0,x_1(w)}^*g_{co,0} \Arrowvert_{C^k(B_w,g_e)}<C_{0,k}. \end{equation} Next we deal with the first term in the RHS of (\ref{eq:15}). Note that by (\ref{eq:2}) and the bound (\ref{eq:Q-3}) for $t=0$, we have, for any $w \in Q_1\backslash \overline{V_1(R)}$ when $R$ is large enough, \begin{equation} \label{eq:7} \left\Arrowvert \mathbf{r}_1(w)^{-\frac{4}{3}}\phi_{0,x_1(w)}^*\left( \nabla_{g_{co,0}}^k (\Upsilon_1^*g_{co,1}-g_{co,0}) \right)\right\Arrowvert_{C^0(B_w,g_e)} \leq D'_{2,k}\sup_{y\in x_1(U_w)}( \mathbf{r}_1(w)^{-\frac{4}{3}}\mathbf{r}_0(y)^{-\frac{2}{3}}). \end{equation} Here $U_w$ is the image of $B_w$ in $Q_1$. Note that from (\ref{eq:Q-1}) (for the $t=1$ case) and Lemma \ref{lm:2} one can deduce that \begin{equation*} \mathbf{r}_1(w)^{-\frac{4}{3}}\mathbf{r}_0(y)^{-\frac{2}{3}}<1 \end{equation*} for $w\in Q_1\backslash \overline{V_1(R)}$ and for any $y \in x_1(U_w)$ if $R$ is large enough. \begin{lemma} For each $k \geq 0$ there is a constant $C_{1,k}>0$ independent of $w \in Q_1 \backslash \overline{V_1(R)}$ such that \begin{equation*} \left\Arrowvert \phi_{0,x_1(w)}^* (\Upsilon_1^*g_{co,1}-g_{co,0}) \right\Arrowvert_{C^k(B_w,g_e)} \leq C_{1,k} \sum_{j=0}^k\Arrowvert \phi_{0,x_1(w)}^*\left( \nabla_{g_{co,0}}^j (\Upsilon_1^*g_{co,1}-g_{co,0}) \right)\Arrowvert_{C^0(B_w,g_e)}. \end{equation*} \end{lemma} \begin{proof} Recall the expression (\ref{eq:3}) for the pullback of $g_{co,0}$ to $B_w$. Using (\ref{eq:c0-1}) and (\ref{eq:c0-2}), an explicit calculation shows that the Christoffel symbols of the cone metric $g_{co,0}$ and their derivatives are bounded in $B_w$ w.r.t. the Euclidean norm by constants independent of $w \in Q_1 \backslash \overline{V_1(R)}$. The lemma now follows easily. \qed \end{proof} From this lemma we have for $k \geq 1$ \begin{equation} \label{eq:14} \Arrowvert \mathbf{r}_1(w)^{-\frac{4}{3}} \phi_{0,x_1(w)}^* (\Upsilon_1^*g_{co,1}-g_{co,0}) \Arrowvert_{C^k(B_w,g_e)} <C_{2,k}. \end{equation} The required bound (\ref{eq:Q-4}) for the $t=1$ case then follows from (\ref{eq:15}), (\ref{eq:9}) and (\ref{eq:14}).
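For the reader's convenience, the derivation of (\ref{eq:14}) can be spelled out as follows: multiplying the inequality of the lemma by the constant $\mathbf{r}_1(w)^{-\frac{4}{3}}$ and then using (\ref{eq:7}) together with the bound $\mathbf{r}_1(w)^{-\frac{4}{3}}\mathbf{r}_0(y)^{-\frac{2}{3}}<1$ on $x_1(U_w)$, we get, for $k\geq 1$, \begin{equation*} \begin{split} \Arrowvert \mathbf{r}_1(w)^{-\frac{4}{3}} \phi_{0,x_1(w)}^* (\Upsilon_1^*g_{co,1}-g_{co,0}) \Arrowvert_{C^k(B_w,g_e)} \leq & C_{1,k} \sum_{j=0}^k\left\Arrowvert \mathbf{r}_1(w)^{-\frac{4}{3}}\phi_{0,x_1(w)}^*\left( \nabla_{g_{co,0}}^j (\Upsilon_1^*g_{co,1}-g_{co,0}) \right)\right\Arrowvert_{C^0(B_w,g_e)}\\ \leq & C_{1,k}\sum_{j=0}^k D'_{2,j}, \end{split} \end{equation*} which gives (\ref{eq:14}).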
\\[0.3cm] We proceed to consider the case for general $t \neq 0$. For each point $z=\psi_t(w)$ in $Q_t$, denote $U_z=\psi_t(U_w)$, $B_z=B_w$ and define $\phi_{t,z}=\psi_t\circ \phi_{1,w}$. Then $\{(U_z,\phi_{t,z})|z\in Q_t\}$ is a coordinate system on $Q_t$ and one can check that \begin{equation} \label{eq:16} \frac{1}{C_0}g_e \leq \phi_{t,z}^*\bar{g}_{co,t} \leq C_0g_e \end{equation} over $B_z$, and \begin{equation} \label{eq:17} \Arrowvert \phi_{t,z}^*\bar{g}_{co,t} \Arrowvert_{C^k(B_z,g_e)} \leq C_k \end{equation} for the same constants $C_k$ appearing in the $t=1$ case.\\[0.2cm] Finally, we prove (\ref{eq:Q-2}). In the $t=0$ case, for $y\in U_x$ with coordinates $(p,s)\in B_x$ we have $\mathbf{r}_0(y)=\mathbf{r}_0(x)e^{\frac{3}{2}s}$, $s \in (-\rho,\rho)$, and (\ref{eq:Q-2}) follows immediately. For the $t=1$ case, recall the expression (\ref{eq:6-2}) of $\mathbf{r}_1$ as a function of $\mathbf{r}_0$. If a point $y\in U_w\subset Q_1$ has coordinates $(p,s)\in B_w$, then $\mathbf{r}_1(y)=\mathbf{r}_1(s)=\mathbf{r}_1\left(\mathbf{r}_0(x_1(w))e^{\frac{3}{2}s}\right)$. A straightforward computation shows that there exist constants $R'_m$, $m\geq 1$, independent of $w \in Q_1$ such that $\left| \frac{\partial^m}{\partial s^m}\mathbf{r}_1(s) \right| \leq R'_m\mathbf{r}_1(s)$. This implies (\ref{eq:Q-2}) for the $t=1$ case. The general case follows easily from a rescaling argument. The proof of Proposition \ref{pr:n-1} is now complete.\qed \end{proof} It is not hard to deduce the following: \begin{corollary} \label{co:3} For any fixed $\beta \in\mathbb{R}\backslash \{0\}$, there are constants $R''_m>0$, $m\geq 1$, such that $|\nabla_{g_{co,t}}^m \mathbf{r}_t^\beta|_{g_{co,t}} \leq R''_m\mathbf{r}_t^{\beta-\frac{2}{3}m}$ on $Q_t$ for any $t$. \end{corollary} The above proposition and the uniform geometry of $\bigcup_t X_t[1]$ together imply \begin{proposition} \label{pr:2} Theorem \ref{th:0-1} is true if $\tilde{g}_t$ is replaced by $g_t$.\\[0.2cm] \end{proposition} What we have now are charts $B_z$ endowed with some Euclidean coordinates $(y_1,...,y_6)$. In the following we introduce holomorphic coordinates $(w_1,w_2,w_3)$ on $B_z$ (with possibly a smaller common radius) so that each $B_z$ can be regarded as a copy of the ball $B$ in Section 2. From the construction above, for $z\in X_t\backslash V_t(R|t|^\frac{1}{2},\frac{3}{4})$ we can simply take $w_i=y_i+\sqrt{-1}y_{i+3}$ for $i=1,2,3$. For $z\in V_t(R|t|^\frac{1}{2},\frac{3}{4})$, by our construction it is actually enough to consider $z \in Q_1$ where $\mathbf{r}_1(z) \geq R$. Moreover, by the homogeneity property of $Q_1$ it is enough to consider $z=(\sqrt{-1}\sqrt{\frac{\mathbf{r}_1^2-1}{2}},0,0,\sqrt{\frac{\mathbf{r}_1^2+1}{2}})\in Q_1$.
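(If, as in the standard description of the deformed conifold, one identifies $Q_1$ with the quadric $\{\sum_{i=1}^4 z_i^2=1\}\subset\mathbb{C}^4$ and $\mathbf{r}_1^2$ with $\sum_{i=1}^4|z_i|^2$, this point indeed satisfies $\sum_i z_i^2=-\frac{\mathbf{r}_1^2-1}{2}+\frac{\mathbf{r}_1^2+1}{2}=1$ and $\sum_i|z_i|^2=\frac{\mathbf{r}_1^2-1}{2}+\frac{\mathbf{r}_1^2+1}{2}=\mathbf{r}_1^2$.)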
The coordinates of each point $(z_1,...,z_4)\in Q_1$ near $z$ satisfy \begin{equation*} Z=M_1Z_0M_2 \end{equation*} where \begin{equation*} Z= \begin{pmatrix} z_1+\sqrt{-1}z_2 & -z_3+\sqrt{-1}z_4 \\ z_3+\sqrt{-1}z_4 & z_1-\sqrt{-1}z_2 \end{pmatrix}, \end{equation*} \begin{equation*} Z_0=\sqrt{-1} \begin{pmatrix} \sqrt{\frac{\mathbf{r}_1(s)^2-1}{2}}+\sqrt{\frac{\mathbf{r}_1(s)^2+1}{2}} & 0 \\ 0 & \sqrt{\frac{\mathbf{r}_1(s)^2-1}{2}}-\sqrt{\frac{\mathbf{r}_1(s)^2+1}{2}} \end{pmatrix}, \end{equation*} \begin{equation*} M_1= \begin{pmatrix} \cos (\theta_1+\frac{\pi}{4})e^{\sqrt{-1}(\psi+\phi_1)} & -\sin (\theta_1+\frac{\pi}{4})e^{-\sqrt{-1}(\psi-\phi_1)} \\ \sin (\theta_1+\frac{\pi}{4})e^{\sqrt{-1}(\psi-\phi_1)} & \cos (\theta_1+\frac{\pi}{4})e^{-\sqrt{-1}(\psi+\phi_1)} \end{pmatrix} \end{equation*} and \begin{equation*} M_2= \begin{pmatrix} \cos (\theta_2+\frac{\pi}{4})e^{-\sqrt{-1}\phi_2} & \sin (\theta_2+\frac{\pi}{4})e^{\sqrt{-1}\phi_2} \\ -\sin (\theta_2+\frac{\pi}{4})e^{-\sqrt{-1}\phi_2} & \cos (\theta_2+\frac{\pi}{4})e^{\sqrt{-1}\phi_2} \end{pmatrix} \end{equation*} for $(\theta_1,\theta_2,\phi_1,\phi_2,\psi,s)\in B_z$, viewed as the ball of radius $0<\rho \ll 1$ in $\mathbb{R}^6$ centered at 0. Here $\mathbf{r}_1(s)=\mathbf{r}_1\left(\mathbf{r}_0(x_1(z))e^{\frac{3}{2}s}\right)$ as before, and $(\theta_1,\theta_2,\phi_1,\phi_2,\psi)$ form a local coordinate system on $\Sigma$. Explicitly, we have $(y_1,...,y_6)=(\theta_1,\theta_2,\phi_1,\phi_2,\psi,s)$. Near the point $z=(z_1,...,z_4)=(\sqrt{-1}\sqrt{\frac{\mathbf{r}_1^2-1}{2}},0,0,\sqrt{\frac{\mathbf{r}_1^2+1}{2}})\in Q_1$ we can let $(z_1,z_2,z_3)$ be local holomorphic coordinates. Using the above explicit expressions, we can show that, for some $\rho>0$ small enough independent of $z$, on the ball $B_z$ the rescaled holomorphic coordinates $(w_1,w_2,w_3):=\mathbf{r}_1(z)^{-1}(z_1,z_2,z_3)$ satisfy the following property that there exist constants $\Lambda_k>0$ and $\Lambda_{k,l}>0$ for $k \geq 1$ and $l \geq 0$ independent of $z$ such that as functions in coordinates $(x_1,...,x_6)$ on $B_z$ where $w_i=x_i+\sqrt{-1}x_{i+3}$, $i=1,2,3$, the partial derivatives $\frac{\partial^k y_j}{\partial x_{i_1}...\partial x_{i_k}}$ and $\frac{\partial^l }{\partial x_{i_1}...\partial x_{i_l}}\left(\frac{\partial^k x_i}{\partial y_{j_1}...\partial y_{j_k}}\right)$ satisfy \begin{equation*} \left| \frac{\partial^k y_j}{\partial x_{i_1}...\partial x_{i_k}} \right|\leq \Lambda_k\,\,\text{and}\,\, \left| \frac{\partial^k }{\partial x_{i_1}...\partial x_{i_k}}\left(\frac{\partial^l x_i}{\partial y_{j_1}...\partial y_{j_l}}\right) \right|\leq \Lambda_{k,l} \end{equation*} for $k \geq 1$ and $l \geq 0$. Moreover, there is a constant $\Lambda_0>0$ independent of $z$ such that \begin{equation*} \frac{1}{\Lambda_0}\leq \frac{\partial (y_1,...,y_6)}{\partial (x_1,...,x_6)} \leq \Lambda_0 \end{equation*} on $B_z$. These properties are not affected if we make a shift in the coordinates $(x_1,...,x_6)$, and we do so to have $B_z$ centered at the origin of $\mathbb{R}^6\cong \mathbb{C}^3$. We can easily see from the above properties that for some possibly smaller choice of $\rho>0$, the version of Theorem \ref{th:0} with $\tilde{g}_t$ replaced by $g_t$ holds on each $B_z$ endowed with the coordinates $(w_1,w_2,w_3)$ and with $\nabla_e$ now understood as the Euclidean derivative w.r.t. $(w_1,w_2,w_3)$. 
This is what we'll always have in mind from now on when we work in the charts $B_z$, and in all our later calculations on $B_z$ the coordinates $(w_1,w_2,w_3)$ will always be understood as the choice of holomorphic coordinates introduced here unless stated otherwise.\\[0.3cm] \noindent \textbf{Remark} For simplicity, in the following we will identify $B_z$ with its image $U_z$ under $\phi_{t,z}$. In particular, $B_z$ can also be regarded as a subset of $X_t$ if $z\in X_t$, and the pullback sign $\phi_{t,z}^*$ will be omitted without causing confusion.\\[0.3cm] We proceed to prove the original version of Theorem \ref{th:0}. Recall that the Hermitian form $\tilde{\omega}_t$ of the balanced metric $\tilde{g}_t$ on $X_t$ satisfies $\tilde{\omega}_t^2=\omega_t^2+\theta_t+\bar{\theta}_t$ where $\theta_t=\partial\bar{\partial}^*\partial^*\gamma_t$ for some (2,3)-form $\gamma_t$ satisfying the equations $E_t(\gamma_t)=-\partial \Phi^{1,3}_t$ and $\partial \gamma_t=0$, where \begin{equation*} E_t=\partial\bar{\partial}\bar{\partial}^*\partial^*+\partial^*\bar{\partial}\bar{\partial}^*\partial+\partial^*\partial \end{equation*} and the $\ast$-operators are with respect to the metric $g_t$. Moreover, $\partial \Phi^{1,3}_t$ is supported on $X_t[1]$ and there is a constant $C>0$ such that \begin{equation} \label{eq:1} |\partial \Phi^{1,3}_t|_{C^k}<C|t|. \end{equation} For an arbitrary Hermitian metric $g$ with Hermitian form $\omega$, in a complex coordinate system $(w_1,w_2,w_3)$ we have \begin{equation*} \omega=\frac{\sqrt{-1}}{2}\sum_{1\leq i,j\leq 3}g_{i\bar{j}}dw_i\wedge d\bar{w}_j. \end{equation*} Write \begin{equation*} \omega^2=-\frac{1}{2}\sum_{1\leq i,j\leq 3}G_{i\bar{j}} dw_1\wedge d\bar{w}_1\wedge ...\wedge \widehat{dw_i} \wedge...\wedge\widehat{d\bar{w}_j}\wedge...\wedge dw_3\wedge d\bar{w}_3, \end{equation*} then each $g_{i\bar{j}}$ is a polynomial in the $G_{i\bar{j}}$'s and $\det(G_{i\bar{j}})^{-\frac{1}{2}}$. With this elementary fact in mind Theorem \ref{th:0} follows from its version for $g_t$ and \begin{proposition} \label{pr:1} For given $k\geq0$, there is a constant $C>0$ which may depend on $k$ such that \begin{equation*} \Arrowvert \mathbf{r}_t(z)^{-\frac{8}{3}}\theta_t \Arrowvert_{C^k(B'_z,g_e)}<C|t|^\frac{1}{3} \end{equation*} for any $z \in X_t$ when $t\neq 0$ sufficiently small. Here $B'_z\subset B_z$ is the ball centered at 0 with radius $\frac{\rho}{2}$. \end{proposition} \begin{proof} It is enough to prove for $z\in V_t(\frac{1}{8})$. Let $\Delta_{\bar{\partial}}=\bar{\partial}\bar{\partial}^*+\bar{\partial}^*\bar{\partial}$ be the $\bar{\partial}$-Laplacian w.r.t. $g_t$. Over the region $V_t(1)$ where $g_t$ is just the CO-metric $g_{co,t}$, we have $\Delta_{\bar{\partial}}\theta_t=0$ since \begin{equation*} \bar{\partial}^*\theta_t=\bar{\partial}^*\partial\bar{\partial}^*\partial^*\gamma_t=\partial\bar{\partial}^*\bar{\partial}^*\partial^*\gamma_t=0 \end{equation*} and \begin{equation} \label{th-1} \bar{\partial}\theta_t=\bar{\partial}\partial\bar{\partial}^*\partial^*\gamma_t=-E_t(\gamma_t)=\partial \Phi^{1,3}_t=0. \end{equation} The second equality of the second line follows because $\partial \gamma_t=0$. The operator \begin{equation*} \mathbf{r}_t^\frac{4}{3}\Delta_{\bar{\partial}}:\Gamma(X_t, \Omega^{2,2}) \rightarrow \Gamma(X_t, \Omega^{2,2}) \end{equation*} is elliptic. 
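Note that this ellipticity is uniform in the sense of Proposition \ref{pr:uniform}. Indeed, as the Bochner formula recalled below shows, the principal part of $\Delta_{\bar{\partial}}$ is $-\sum_{\alpha,\beta}g^{\bar{\beta}\alpha}\nabla_{\alpha}\nabla_{\bar{\beta}}$, so on a chart $B_z$ the principal symbol of $\mathbf{r}_t^\frac{4}{3}\Delta_{\bar{\partial}}$ at a covector $\xi$ is, up to a universal positive factor, $\mathbf{r}_t^\frac{4}{3}|\xi|_{g_t}^2$ times the identity, and \begin{equation*} \mathbf{r}_t^\frac{4}{3}|\xi|_{g_t}^2=\Big(\frac{\mathbf{r}_t}{\mathbf{r}_t(z)}\Big)^\frac{4}{3}|\xi|^2_{\mathbf{r}_t(z)^{-\frac{4}{3}}g_t} \geq \lambda |\xi|^2_{g_e} \end{equation*} on $B_z$ for some $\lambda>0$ independent of $z$ and $t$, by (\ref{eq:18}) and the version of (\ref{eq:19}) for $g_t$ provided by Proposition \ref{pr:2}. Hence the ellipticity constant $\lambda$ in Proposition \ref{pr:uniform} can be chosen independently of $z$ and $t\neq 0$.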
In general, given a $(p,q)$-form $\psi=\sum \psi_{\alpha_1...\bar{\beta}_q}dw_{\alpha_1}\wedge...\wedge d\bar{w}_{\beta_q}$, Kodaira's Bochner formula says \begin{equation*} \begin{split} (\Delta_{\bar{\partial}} \psi)_{\alpha_1...\bar{\beta}_q} =&-\sum_{\alpha,\beta}g^{\bar{\beta}\alpha}\nabla_{\alpha}\nabla_{\bar{\beta}}\psi_{\alpha_1...\bar{\beta}_q}\\ &+\sum_{i=1}^p\sum_{k=1}^q \sum_{\alpha,\beta} {{R^{\alpha}}_{\alpha_i\bar{\beta}_k}}^{\bar{\beta}}\psi_{\alpha_1...\alpha_{i-1}\alpha\alpha_{i+1}...\bar{\beta}_{k-1}\bar{\beta}\bar{\beta}_{k+1}...\bar{\beta}_q}\\ &-\sum_{k=1}^q \sum_{\beta} {{R_{\bar{\beta}_k}}}^{\bar{\beta}}\psi_{\alpha_1...\bar{\beta}_{k-1}\bar{\beta}\bar{\beta}_{k+1}...\bar{\beta}_q}. \end{split} \end{equation*} Applying this to $\psi=\theta_t=\sum \theta_{\alpha_1\alpha_2\bar{\beta}_1\bar{\beta}_2}dw_{\alpha_1}\wedge dw_{\alpha_2}\wedge d\bar{w}_{\beta_1}\wedge d\bar{w}_{\beta_2}$ and using (\ref{th-1}), we have \begin{equation} \label{th-2} \begin{split} &\mathbf{r}_t^\frac{4}{3}\sum_{\alpha,\beta}g^{\bar{\beta}\alpha}\nabla_{\alpha}\nabla_{\bar{\beta}}\theta_{\alpha_1\alpha_2\bar{\beta}_1\bar{\beta}_2}\\ &- \sum_{\alpha,\beta} \mathbf{r}_t^\frac{4}{3}\left( {{R^{\alpha}}_{\alpha_1\bar{\beta}_2}}^{\bar{\beta}}\theta_{\alpha\alpha_2\bar{\beta}_{1}\bar{\beta}} + {{R^{\alpha}}_{\alpha_2\bar{\beta}_2}}^{\bar{\beta}}\theta_{\alpha_1\alpha\bar{\beta}_{1}\bar{\beta}} + {{R^{\alpha}}_{\alpha_1\bar{\beta}_1}}^{\bar{\beta}}\theta_{\alpha\alpha_2\bar{\beta}\bar{\beta}_{2}} + {{R^{\alpha}}_{\alpha_2\bar{\beta}_1}}^{\bar{\beta}}\theta_{\alpha_1\alpha\bar{\beta}\bar{\beta}_{2}} \right)\\ &+ \sum_{\beta} \mathbf{r}_t^\frac{4}{3}\left( {{R_{\bar{\beta}_1}}}^{\bar{\beta}}\theta_{\alpha_1\alpha_2\bar{\beta}\bar{\beta}_2}+ {{R_{\bar{\beta}_2}}}^{\bar{\beta}}\theta_{\alpha_1\alpha_2\bar{\beta}_1\bar{\beta}} \right)=0. \end{split} \end{equation} The first term above can be written as \begin{equation} \label{th-3} \begin{split} &\sum_{\alpha,\beta}\mathbf{r}_t^\frac{4}{3}g^{\bar{\beta}\alpha}\nabla_{\alpha}\nabla_{\bar{\beta}}\theta_{\alpha_1\alpha_2\bar{\beta}_1\bar{\beta}_2}\\ =&\mathbf{r}_t^\frac{4}{3}g^{\bar{\beta}\alpha}\frac{\partial}{\partial w_\alpha}\frac{\partial}{\partial \bar{w}_\beta}\theta_{\alpha_1\alpha_2\bar{\beta}_1\bar{\beta}_2} +\,\text{remaining terms}, \end{split} \end{equation} where the remaining terms involve derivatives of $\theta_{\alpha_1\alpha_2\bar{\beta}_1\bar{\beta}_2}$ of order 1 or less, with coefficients bounded as in Proposition \ref{pr:uniform} for constants $\Lambda_k$ independent of $z$ and $t\neq 0$. 
\iffalse \begin{equation} \label{th-3} \begin{split} &\sum_{\alpha,\beta}g^{\bar{\beta}\alpha}\nabla_{\alpha}\nabla_{\bar{\beta}}\theta_{\alpha_1\alpha_2\bar{\beta}_1\bar{\beta}_2}\\ =&g^{\bar{\beta}\alpha}\frac{\partial}{\partial w_\alpha}\frac{\partial}{\partial \bar{w}_\beta}\theta_{\alpha_1\alpha_2\bar{\beta}_1\bar{\beta}_2} -g^{\bar{\beta}\alpha}\Gamma_{\alpha\alpha_1}^{\xi}\frac{\partial}{\partial \bar{w}_\beta}\theta_{\xi\alpha_2\bar{\beta}_1\bar{\beta}_2} -g^{\bar{\beta}\alpha}\Gamma_{\alpha\alpha_2}^{\xi}\frac{\partial}{\partial \bar{w}_\beta}\theta_{\alpha_1\xi\bar{\beta}_1\bar{\beta}_2} \\ &-g^{\bar{\beta}\alpha}\Gamma_{\bar{\beta}\bar{\beta}_1}^{\bar{\zeta}} \frac{\partial}{\partial w_\alpha}\theta_{\alpha_1\alpha_2\bar{\zeta}\bar{\beta}_2} -g^{\bar{\beta}\alpha}\Gamma_{\bar{\beta}\bar{\beta}_2}^{\bar{\zeta}} \frac{\partial}{\partial w_\alpha}\theta_{\alpha_1\alpha_2\bar{\beta}_1\bar{\zeta}}\\ &-g^{\bar{\beta}\alpha}\left(\frac{\partial}{\partial w_\alpha}\Gamma_{\bar{\beta}\bar{\beta}_1}^{\bar{\zeta}}\right) \theta_{\alpha_1\alpha_2\bar{\zeta}\bar{\beta}_2} -g^{\bar{\beta}\alpha}\left(\frac{\partial}{\partial w_\alpha}\Gamma_{\bar{\beta}\bar{\beta}_2}^{\bar{\zeta}}\right) \theta_{\alpha_1\alpha_2\bar{\zeta}_1\bar{\delta}}\\ & +g^{\bar{\beta}\alpha} \Gamma_{\alpha\alpha_1}^{\xi} \Gamma_{\bar{\beta}\bar{\beta}_1}^{\bar{\zeta}} \theta_{\xi\alpha_2\bar{\zeta}\bar{\beta}_2} +g^{\bar{\beta}\alpha} \Gamma_{\alpha\alpha_2}^{\xi} \Gamma_{\bar{\beta}\bar{\beta}_1}^{\bar{\zeta}} \theta_{\alpha_1\xi\bar{\zeta}\bar{\beta}_2}\\ &+g^{\bar{\beta}\alpha} \Gamma_{\alpha\alpha_1}^{\xi} \Gamma_{\bar{\beta}\bar{\beta}_2}^{\bar{\zeta}} \theta_{\xi\alpha_2 \bar{\beta}\bar{\zeta}} +g^{\bar{\beta}\alpha} \Gamma_{\alpha\alpha_2}^{\xi} \Gamma_{\bar{\beta}\bar{\beta}_2}^{\bar{\zeta}} \theta_{\alpha_1\xi \bar{\beta}\bar{\zeta}}\\ \end{split} \end{equation} \fi Note that the products of $\mathbf{r}_t^\frac{4}{3}$ and the curvature terms in (\ref{th-2}) are bounded similarly. Therefore, $\theta_t$ is the zero of the elliptic operator $\mathbf{r}_t^\frac{4}{3}\Delta_{\bar{\partial}}$ whose coefficients are bounded as in Proposition \ref{pr:uniform} for constants $\Lambda_k$ independent of $z$ and $t\neq 0$. We use the Hermitian metric on $\Omega^{2,2}$ induced by $g_e$. Then there are constants $C_{p,k}>0$ such that \begin{equation*} \Arrowvert \theta_t \Arrowvert_{L^p_{k+2}(B_z,g_e)} \leq C_{p,k}\Arrowvert \theta_t \Arrowvert_{L^2(B_z,g_e)} \end{equation*} for $z\in V_t(\frac{1}{8})$ (so $B_z\subset V_t(\frac{1}{4})$ for $\rho$ small enough, which we assume is the case). Each $C_{p,k}$ is independent of $z$ and $t$ since we use the Euclidean metric in each chart. By the usual Sobolev Theorem over the Euclidean ball $(B_z, g_e)$, for $p$ large enough one can get \begin{equation*} \begin{split} \Arrowvert \theta_t \Arrowvert_{C^{k}(B'_z,g_e)} \leq &C'_{p,k}\Arrowvert \theta_t \Arrowvert_{L^2(B_z,g_e)}\\ =&C'_{p,k}\left (\int_{B_z} |\theta_t|_{g_e}^2dV_e \right)^\frac{1}{2} \leq C'_{p,k}\text{Vol}_e(B_z)^\frac{1}{2} \sup_{B_z}|\theta_t|_{g_e} \end{split} \end{equation*} for some constants $C'_{p,k}>0$ independent of $z$ and $t$. Therefore, \begin{equation} \label{eq:theta-1} \Arrowvert \mathbf{r}_t(z)^{-\frac{8}{3}}\theta_t \Arrowvert_{C^{k}(B_z,g_e)} \leq C'_{p,k}\text{Vol}_e(B_z)^\frac{1}{2} \sup_{B_z}|\mathbf{r}_t(z)^{-\frac{8}{3}}\theta_t|_{g_e}. 
\end{equation} From (\ref{eq:0-1}) one sees easily that \begin{equation*} |\theta_t|_{g_{co,t}}^2 \leq |t|^\frac{2}{3} \end{equation*} for $t\neq 0$ sufficiently small, and by Proposition \ref{pr:n-1} this implies \begin{equation} \label{eq:theta-2} |\mathbf{r}_t(z)^{-\frac{8}{3}}\theta_t|_{g_e}^2 \leq C|t|^\frac{2}{3} \end{equation} for $t \neq 0$ sufficiently small. Now (\ref{eq:theta-1}) and (\ref{eq:theta-2}) complete the proof.\qed \end{proof} In a later section we will need the following result on the sup norm of $\theta_t$: \begin{proposition} \label{pr:6-1} There is a constant $C>0$ independent of $t$ such that \begin{equation*} |\theta_t|_{g_t} \leq C\mathbf{r}_t^{-\frac{2}{3}}\cdot |t|. \end{equation*} Consequently, there is a constant $C>0$ such that \begin{equation*} |\tilde{\omega}_t^{-1}-\omega_t^{-1}|_{g_t} \leq C\mathbf{r}_t^{-\frac{2}{3}}\cdot |t|. \end{equation*} \end{proposition} \begin{proof} Again, it is enough to work over $B_z$ for $z\in V_t(\frac{1}{8})$. A discussion similar to that in the proof of Proposition \ref{pr:1} shows that for each $z\in V_t(\frac{1}{8})$ we have \begin{equation} \label{eq:theta-3} \begin{split} \sup_{B'_z} |\mathbf{r}_t^\frac{2}{3}\theta_t|_{g_t} \leq &C\Arrowvert \theta_t \Arrowvert_{C^0_{3}(B'_z,g_e)} \leq C'\Arrowvert \theta_t \Arrowvert_{L^2_{3}(B_z,g_e)} =C'\left (\int_{B_z} |\mathbf{r}_t^{-2}\theta_t|_{g_e}^2dV_e \right)^\frac{1}{2}\\ =&C''\left (\int_{B_z} |\mathbf{r}_t^\frac{2}{3}\theta_t|_{g_t}^2\mathbf{r}_t^{-4}dV_t \right)^\frac{1}{2} \leq C''\left (\int_{V_t(\frac{1}{4})} |\theta_t|_{g_t}^2\mathbf{r}_t^{-\frac{8}{3}}dV_t \right)^\frac{1}{2}. \end{split} \end{equation} It is proved in Lemma 17 of \cite{FLY} that \begin{equation*} \int_{V_t(\frac{1}{4})} |\theta_t|_{g_t}^2\mathbf{r}_t^{-\frac{8}{3}}dV_t \leq C\int_{X_t[\frac{1}{8}]}(|\gamma_t|_{g_t}^2+|\partial \Phi^{1,3}_t |_{g_t}^2)dV_t \end{equation*} for some constant $C>0$ independent of $t$. In view of (\ref{eq:1}), to prove the proposition it is enough to show \begin{equation*} \int_{X_t} |\gamma_t|_{g_t}^2dV_t \leq C|t|^2 \end{equation*} for some constant $C>0$ independent of $t$. Suppose that there is a sequence $\{t_i\}$ converging to 0 such that \begin{equation*} |t_i|^{-2}\int_{X_{t_i}} |\gamma_{t_i}|^2dV_{t_i}=\alpha_i^2\rightarrow \infty\,\,\,\text{when}\,\,i\rightarrow \infty, \end{equation*} where each $\alpha_i>0$. Define $\tilde{\gamma}_{t_i}=|t_i|^{-1}\alpha_i^{-1}\gamma_{t_i}$; then \begin{equation*} \int_{X_{t_i}} |\tilde{\gamma}_{t_i}|^2dV_{t_i}=1\,\,\,\text{and}\,\,E_{t_i}(\tilde{\gamma}_{t_i})=-|t_i|^{-1}\alpha_i^{-1}\partial \Phi^{1,3}_{t_i}. \end{equation*} Thus there exists a smooth (2,3)-form $\tilde{\gamma}_0$ on $X_{0,sm}$ such that $E_0(\tilde{\gamma}_0)=0$ and $\tilde{\gamma}_{t_i} \rightarrow \tilde{\gamma}_0$ pointwise. Then one can prove that \begin{equation*} \int_{X_{0,sm}} |\tilde{\gamma}_0|^2dV_0=1\,\,\,\text{but}\,\,\tilde{\gamma}_0=0 \end{equation*} as in \cite{FLY} in exactly the same way, only noticing that in several places we use the fact that $|t_i|^{-2}\alpha_i^{-2}|\partial \Phi^{1,3}_{t_i}|^2 \rightarrow 0$ as $i \rightarrow \infty$. This completes the proof. \qed \end{proof} \section{HYM metrics on the vector bundle over $X_0$} Let $\mathcal{E}$ be an irreducible holomorphic vector bundle over the K$\ddot{\text{a}}$hler Calabi-Yau threefold $(\hat{X},\omega)$ as before. Our assumption on $\mathcal{E}$ is that it is trivial over a neighborhood of the exceptional curves $C_i$'s.
By a rescaling of the metric $\hat{\omega}_0$, we may assume that $\mathcal{E}$ is trivial over $U(1)\subset \hat{X}$. As mentioned in Section 2, over $\hat{X}$ there is a 1-parameter family of balanced metrics $\hat{\omega}_a$, $0< a \ll 1$, constructed as in \cite{FLY}. Since for each $a\neq 0$ the (2,2)-forms $\hat{\omega}_a^2$ and $\omega^2$ differ by smooth $\partial \bar{\partial}$-exact forms, the bundle $\mathcal{E}$ is stable with respect to all $\hat{\omega}_a$ if it is so with respect to $\omega$. Assume that this is the case. Then by the result of \cite{LY1}, there exists a HYM metric $H_a$ on $\mathcal{E}$ with respect to $\hat{\omega}_a$. In this section, $\hat{H}$ will be a metric such that $\hat{H}=I$ with respect to a constant frame over $U(1)$ where $\mathcal{E}$ is trivial. By a constant frame we mean the following: under an isomorphism $\mathcal{E}|_{U(1)}\cong \mathcal{O}_{U(1)}^r$, a holomorphic section of $\mathcal{E}$ over $U(1)$ can be viewed as a holomorphic vector-valued function on $U(1)$. Then a constant frame $\{s_1,...,s_r\}$ is a set of such functions which are (pointwise) linearly independent and each member $s_i$ is a constant (vector-valued) function. A constant frame is in particular a holomorphic frame. The metric $\hat{H}$ will serve as the reference metric. The constants appearing in this section may depend on $\hat{H}$. We will also often use implicitly the identification $\hat{X}\backslash \bigcup C_{i} \cong X_{0,sm}$. \subsection{Proof of the first main theorem} The goal of this subsection is to prove the following theorem on the existence of a HYM metric with respect to $\hat{\omega}_0$ over $\mathcal{E}|_{X_{0,sm}}$. The techniques we use are largely based on \cite{Don1} \cite{Don2} \cite{Don3} \cite{Siu} \cite{UY}. \begin{theorem} \label{th:2-1} There is a smooth Hermitian metric $H_0$ on $\mathcal{E}|_{X_{0,sm}}$ which is HYM with respect to $\hat{\omega}_0$ such that there is a decreasing sequence $\{a_i\}_{i=1}^\infty$ converging to 0 for which a sequence $\{H_{a_i}\}$ of HYM metrics (w.r.t. $\hat{\omega}_{a_i}$, respectively) converges weakly to $H_0$ in the $L^p_2$-sense for all $p$ on each compactly embedded open subset of $X_{0,sm}$. \end{theorem} \begin{proof} We begin with a boundedness result on the determinants of $h_a:=H_a\hat{H}^{-1}$. \begin{lemma} \label{lm:d-1} After rescaling $H_a$ by positive constants we can assume that the determinants $\det h_a$ are bounded from above and below by positive constants independent of $0<a\ll 1$. \end{lemma} \begin{proof} Let $\varphi_a$ be the unique smooth function on $\hat{X}$ satisfying \begin{equation*} \hat{\Delta}_a\varphi_a=-\frac{\sqrt{-1}}{r}\text{tr}\,\Lambda_{\hat{\omega}_a}F_{\hat{H}} \end{equation*} and $\int_{\hat{X}} \varphi_a dV_a=0$ where $dV_a$ is the volume form of $\hat{g}_a$. \\ \noindent \textbf{Claim} The sup norm of $\varphi_a$ is bounded by a constant independent of $0<a\ll 1$. \begin{proof} First note that since $\Lambda_{\hat{\omega}_a} F_{\hat{H}}=0$ on $U(1)$, $\varphi_a$ is harmonic over $U(1)$ and so we have by the maximum principle $\sup_{U(1)}|\varphi_a| \leq \sup_{X_0[\frac{3}{4}]}|\varphi_a|$. Since $\hat{\omega}_a$ is a balanced metric, the Laplacian $\hat{\Delta}_a$ coincides (up to a constant multiple) with the negative of the Laplace-Beltrami operator of its associated Riemannian metric (see for example \cite{Ga}).
We thus have the Green's formula \cite{Au}: for each $x \in X_0[\frac{3}{4}]$, $0<a\ll 1$, $\frac{1}{4}\leq\delta\leq\frac{1}{2}$, and smooth function $f$ on $\hat{X}$, \begin{equation} \label{eq:green} f(x)= \int_{\partial X_0[\delta]} \Gamma_{a,\delta}(x,y) f(y) dS_a(y)+\int_{y\in X_0[\delta]}\,G_{a,\delta}(x,y)\hat{\Delta}_a f(y)\, dV_a(y) \end{equation} where $G_{a,\delta}(x,y)\leq 0$ is the Green's function for $\hat{\Delta}_a$ over the region $X_0[\delta]$, and $\Gamma_{a,\delta}$ is the boundary normal derivative of $G_{a,\delta}(x,y)$ with respect to $y$. Moreover, $dS_a$ is the volume form on $\partial X_0[\delta]$ with respect to the metric induced from $\hat{g}_a$. We apply the above formula to $f=\varphi_a$. Since the family of metrics $\{\hat{\omega}_a| 0<a\ll 1\}$ is uniform over $X_0[\frac{1}{4}]$, there is a constant $K_0$ such that for any $0<a\ll 1$, $\frac{1}{4}\leq\delta\leq\frac{1}{2}$, $y \in \partial X_0[\delta]$ and $x \in X_0[\frac{3}{4}]$, \begin{equation*} |\Gamma_{a,\delta}(x,y)| \leq K_0. \end{equation*} For the same reason there is a constant $K_1>0$ such that \begin{equation*} -\int_{y\in X_0[\delta]}\,G_{a,\delta}(x,y)\, dV_a(y) \leq K_1 \end{equation*} for any $x \in X_0[\frac{3}{4}]$, $\frac{1}{4}\leq\delta\leq\frac{1}{2}$, and $0<a\ll 1$. Because $\Lambda_{\hat{\omega}_a} F_{\hat{H}}=0$ over $U(1)$, $|\frac{1}{r}\text{tr}\,\Lambda_{\hat{\omega}_a} F_{\hat{H}}|$ is bounded by a constant $K_2>0$ independent of $a$. Therefore we have \begin{equation*} | G_{a,\delta}(x,y)\hat{\Delta}_a \varphi_a| \leq -G_{a,\delta}(x,y) |\frac{1}{r}\text{tr}\,\Lambda_{\hat{\omega}_a} F_{\hat{H}}| \leq -K_2\cdot G_{a,\delta}(x,y). \end{equation*} We can conclude from the above bounds that \begin{equation} \label{eq:d-1} \begin{split} | \varphi_a(x) | \leq & K_0\int_{\partial X_0[\delta]} |\varphi_a| dS_a - \int_{y\in X_0[\delta]}\,K_2\cdot G_{a,\delta}(x,y) dV_a(y) \\ \leq & K_0\int_{\partial X_0[\delta]} |\varphi_a|\, dS_a+K_1K_2. \\ \end{split} \end{equation} Integrating (\ref{eq:d-1}) with respect to $\delta$ from $\frac{1}{4}$ to $\frac{1}{2}$ and using once again the uniformity of the metrics over $X_0[\frac{1}{4}]$, we obtain \begin{equation} \begin{split} | \varphi_a(x) | \leq & K_4\left( \int_{X_0[\frac{1}{2}] \backslash X_0[\frac{1}{4}] } | \varphi_a | dV_a \right)+4K_1K_2 \\ \leq & K_4K_5^\frac{1}{2}\left( \int_{\hat{X}} | \varphi_a |^2 dV_a \right)^\frac{1}{2}+4K_1K_2 \end{split} \end{equation} for each $x \in X_0[\frac{3}{4}]$. Here $K_5$ is a common upper bound for the volumes of $\hat{X}$ w.r.t. $\hat{g}_a$. Now, to prove the claim, we have to show that $\int_{\hat{X}} | \varphi_a |^2 dV_a$ is bounded by a constant independent of $0<a\ll 1$. For this we use the estimates on the first eigenvalue of Laplacians due to Yau \cite{Y1}, which imply that for a compact Riemannian manifold $(X,g)$ of dimension $n$, if (i) $\text{diam}_g(X) \leq D_1$, (ii) $\text{Vol}_g(X) \geq D_2$ and (iii) $Ric(g)\geq (n-1)K$ hold, then the number \begin{equation*} \lambda_1:=\inf_{0\neq f\in C^\infty(X),\int_{X} f dV_g=0} \frac{\int_{X} f\Delta_g^{LB}f \, dV_g}{\int_{X} f^2 \, dV_g} \end{equation*} is bounded below by a constant depending only on $D_1$, $D_2$ and $K$. Here $\Delta_g^{LB}$ denotes the Laplace-Beltrami operator of $g$. For the family of metrics $\{\hat{g}_a\}$ on $\hat{X}$, it is easy to see that the diameters and volumes are bounded as in (i) and (ii) by the same constants $D_1$ and $D_2$.
Note that in a neighborhood of the exceptional curves each member $\hat{g}_a$ is Ricci-flat, and so by the uniformity outside that neighborhood, condition (iii) holds for a common value of $K$. Therefore, there is a constant $K_6>0$ such that \begin{equation*} \begin{split} \int_{\hat{X}} | \varphi_a |^2 dV_a &\leq K_6 \int_{\hat{X}} | \varphi_a||\hat{\Delta}_a\varphi_a| dV_a = K_6\int_{\hat{X}} |\varphi_a| |\frac{1}{r}\text{tr}\,\Lambda_{\hat{\omega}_a} F_{\hat{H}}| dV_a \\ &\leq K_6K_2\int_{\hat{X}} | \varphi_a | dV_a \leq K_6K_2K_5^\frac{1}{2}\left( \int_{\hat{X}} | \varphi_a |^2 dV_a \right)^\frac{1}{2} \end{split} \end{equation*} and hence \begin{equation*} \left( \int_{\hat{X}} | \varphi_a |^2 dV_a \right)^\frac{1}{2} \leq K_6K_2K_5^\frac{1}{2}. \end{equation*} This completes the proof of the claim. \qed \end{proof} Define $\hat{H}_a:=e^{\varphi_a}\hat{H}$. Then it follows from the claim that to prove the lemma, it is enough to show that the determinants of $\hat{h}_a:=H_a\hat{H}_a^{-1}$ have common positive upper and lower bounds. To do so, first note that we have $\text{tr}\,\Lambda_{\hat{\omega}_a}F_{\hat{H}_a}=0$. Then the proof of Proposition 2.1 in \cite{UY} shows that this and the fact that $\Lambda_{\hat{\omega}_a}F_{H_a}=0$ imply $\det \hat{h}_a$ is constant for each $a$. After a rescaling of $H_a$ by a positive constant, we can assume $\det \hat{h}_a=1$, and the proof of Lemma \ref{lm:d-1} is complete.\qed \end{proof} From now on we assume that the rescaling in the above lemma is done. We next show a result on the $C^0$-bound for $\text{tr}\,h_a$. \begin{proposition} \label{pr:2-1} Assume that the integrals $\int_{\hat{X}} |\log\text{tr} \,h_a|^2 \,dV_a$ have a common upper bound for $0<a \ll1$. Then there is a constant $C_0>0$ such that for any $0<a \ll1$, \begin{equation*} -C_0< \log\text{tr}\,h_a<C_0. \end{equation*} \end{proposition} \begin{proof} First of all, we have the following inequality whose proof can be found in \cite{Siu}: \begin{lemma} \label{lm:main} Let $H_0$ and $H_1$ be two Hermitian metrics on a holomorphic vector bundle $\mathcal{E}$ over a Hermitian manifold $(X,\omega)$, and define $h=H_1H_0^{-1}$. Then \begin{equation} \Delta_{\omega} \log\text{tr}\,h \geq -(|\Lambda_{\omega} F_{H_0}|_{H_0}+|\Lambda_{\omega} F_{H_1}|_{H_0}). \end{equation} \end{lemma} By Lemma \ref{lm:main}, we have the inequality \begin{equation} \label{eq:2-3} \hat{\Delta}_a \log\text{tr} (h_a) \geq -(|\Lambda_{\hat{\omega}_a} F_{H_a}|_{\hat{H}}+|\Lambda_{\hat{\omega}_a} F_{\hat{H}}|_{\hat{H}})=-|\Lambda_{\hat{\omega}_a} F_{\hat{H}}|_{\hat{H}} \end{equation} where the equality follows since $H_a$ is HYM with respect to $\hat{\omega}_a$. Over $U(\frac{7}{8})$ we have $\hat{\Delta}_a \log\text{tr}\,h_a \geq -|\Lambda_{\hat{\omega}_a} F_{\hat{H}}|_{\hat{H}}=0$ and so by the Maximum Principle we have \begin{equation*} \begin{split} \sup_{U(\frac{7}{8})} \log\text{tr}\,h_a \leq \sup_{\partial U(\frac{7}{8})} \log\text{tr}\,h_a \leq \sup_{ X_0[\frac{3}{4}]} \log\text{tr}\,h_a. \end{split} \end{equation*} Using the Green's formula (\ref{eq:green}), we can show as in Lemma \ref{lm:d-1} that \begin{equation} \sup_{ X_0[\frac{3}{4}]} \log\text{tr}\,h_a \leq K' \end{equation} for some $K'>0$ independent of $0<a \ll1$, assuming that the integrals $\int_{\hat{X}} |\log\text{tr} \,h_a|^2 \,dV_a$ have a common upper bound. We thus have a common upper bound for $\sup_{\hat{X}} \log\text{tr}\,h_a$.
Together with the fact that the determinants of $h_a$ are bounded from above and below by positive constants independent of $0<a \ll1$, this upper bound also implies a common lower bound for $\log\text{tr}\,h_a$ over $\hat{X}$. The proof is complete. \qed \end{proof} Therefore, to get the $C^0$-estimate we prove \begin{proposition} \label{pr:L2} There is a constant $C'_0>0$ such that \begin{equation*} \int_{\hat{X}} |\log\text{tr}\,h_a|^2 dV_a<C'_0 \end{equation*} for any $0<a\ll 1$. \end{proposition} \begin{proof} The idea is basically the same as in the proof of Proposition 4.1 in \cite{UY}. Assume the contrary. Then there is a sequence $\{ a_k \}_{k=1}^\infty$ converging to 0 such that $\lim_{k\rightarrow \infty} \int_{\hat{X}} |\log\text{tr}\,h_{a_k}|^2 \,dV_{a_k}=\infty$. Denote $h(k)=h_{a_k}$, and define $\rho_k=e^{-M_k}$ where $M_k$ is the largest eigenvalue of $\log h(k)$. Then $\rho_k h(k) \leq I$. The following inequality is proved in Lemma 4.1 of \cite{UY}: \begin{lemma} Suppose \begin{equation*} \Lambda_{\omega} F_H+\Lambda_{\omega} \bar{\partial}((\partial_H h)h^{-1})=0 \end{equation*} holds for a Hermitian metric $H$ on a vector bundle $\mathcal{E}$ over a Hermitian manifold $(X,\omega)$ and $h\in \Gamma(\text{End}(\mathcal{E}))$. Then for $0<\sigma \leq 1$, we have the inequality \begin{equation*} |h^{-\frac{\sigma}{2}}\partial_H h^\sigma|_{H,g}^2-\frac{1}{\sigma}\Delta_{\omega}|h^\sigma|_H \leq -\langle \Lambda_{\omega} F_H, h^\sigma \rangle_H. \end{equation*} \end{lemma} In our case, because $\Lambda_{\hat{\omega}_a} F_{\hat{H}}+\Lambda_{\hat{\omega}_a} \bar{\partial}((\partial_{\hat{H}} h_a)h_a^{-1})=\Lambda_{\hat{\omega}_a} F_{H_a}=0$, applying the above lemma with $\sigma=1$ we see immediately that \begin{equation} \label{eq:s-1} -\hat{\Delta}_a |h_a|_{\hat{H}} \leq |\Lambda_{\hat{\omega}_a} F_{\hat{H}}|_{\hat{H}} |h_a|_{\hat{H}} \leq K_7|h_a|_{\hat{H}} \end{equation} where $K_7$ is a common upper bound for $|\Lambda_{\hat{\omega}_{a}} F_{\hat{H}}|_{\hat{H}}$. Note that $|h_a|_{\hat{H}}$ is subharmonic in $U(\frac{7}{8})$ because of the first inequality in (\ref{eq:s-1}) and the fact that $\Lambda_{\hat{\omega}_a} F_{\hat{H}}=0$ there. The Maximum Principle then implies that \begin{equation*} \sup_{U(\frac{7}{8})} |h_a|_{\hat{H}} \leq \sup_{X_0[\frac{3}{4}]} |h_a|_{\hat{H}}. \end{equation*} From this observation and an iteration argument over $X_0[\frac{3}{4}]$ on (\ref{eq:s-1}), we can deduce that \begin{equation*} \sup_{\hat{X}} |h_a|_{\hat{H}} \leq K_8\left( \int_{X_0[\frac{1}{4}]} |h_a|_{\hat{H}}^2 dV_a \right)^\frac{1}{2}. \end{equation*} This implies \begin{equation} \label{eq:2-2} 1 \leq K_8\left( \int_{X_0[\frac{1}{4}]} | \rho_k h(k)|_{\hat{H}}^2 dV_{a_k} \right)^\frac{1}{2} \end{equation} for any $k>0$. As in page S275 of \cite{UY}, one can show that \begin{equation*} \int_{\hat{X}} | \nabla_{\hat{H}}(\rho_k h(k))|_{\hat{H},\hat{g}_{a_k}}^2 dV_{a_k} \leq 4 \max_{\hat{X}} |\Lambda_{\hat{\omega}_{a_k}} F_{\hat{H}}|_{\hat{H}} \cdot \text{Vol}_{a_k}(\hat{X}) \leq 4K_7K_5 \end{equation*} where $K_5$ is as in the proof of Lemma \ref{lm:d-1}. Thus we see that the $L^2_1$-norms of $\rho_k h(k)$ with respect to $\hat{\omega}_{a_k}$ are bounded by a constant independent of $k$. Because the sequence of metrics $\{ \hat{\omega}_{a_k} \}$ is uniformly bounded only on each compactly embedded open subset in $X_{0,sm}$, a subsequence of the sequence $\{\rho_k h(k)\}$ converges strongly on each subset of this kind.
After taking an increasing sequence $\{U_l\subset\subset X_{0,sm} \}_l$ of open subsets exhausting $X_{0,sm}$ and using the diagonal argument, we obtain a subsequence $\{ \rho_{k_i} h(k_i) \}_{i\geq 1}$ of $\{\rho_k h(k)\}_{k \geq 1}$ and an $\hat{H}$-symmetric endomorphism $h_\infty$ of $\mathcal{E}$ which is the limit of $\{ \rho_{k_i} h(k_i)|_{U_l} \}_{i\geq 1}$ in $L^2(U_l,\text{End}(\mathcal{E}))$ for all $l$. From (\ref{eq:2-2}) one immediately sees that $h_\infty$ is nontrivial. Define $h_i=\rho_{k_i} h(k_i)$. The same argument shows that $h_i^\sigma$ converges weakly in the $L^2_1$ sense on each $U_l$ to some $h_\infty^\sigma$. The uniform bound on the $L^2_1$-norm of $h_i^\sigma$ gives the same bound on $h_\infty^\sigma$ for all $\sigma$. It follows that $I-h_\infty^\sigma$ has a weak limit in the $L^2_1$ sense on each $U_l$ for some subsequence $\sigma \rightarrow 0$. We call the limit $\pi$. Similarly to \cite{UY}, except that we consider integrals over each $U_l$, we can show that $\pi$ gives a weakly holomorphic subbundle of $\mathcal{E}$. More precisely \cite{LT}, there is a coherent subsheaf $\mathcal{F}$ of $\mathcal{E}$ and an analytic subset $S\subset \hat{X}$ (containing the exceptional curves) such that $S$ has codimension greater than 1 in $\hat{X}$, the restriction of $\pi$ to $\hat{X}\backslash S$ is smooth and satisfies $\pi^{*_{\hat{H}}}=\pi=\pi^2$ and $(I-\pi)\bar{\partial}\pi=0$, and finally, the restriction $\mathcal{F}':=\mathcal{F}|_{\hat{X}\backslash S}=\pi|_{\hat{X}\backslash S}(\mathcal{E}|_{\hat{X}\backslash S})\hookrightarrow \mathcal{E}$ is a holomorphic subbundle. The rank of $\mathcal{F}$ satisfies $0<\text{rank}\, \mathcal{F}<\text{rank}\, \mathcal{E}$.\\[0.2cm] Following the argument in \cite{K} p. 181-182 (see also Proposition 3.4.9 of \cite{LT}), we have \begin{equation*} \mu_0 :=\lim_{\delta\rightarrow 0} \frac{1}{\text{rank}\mathcal{F}}\int_{X_0[\delta]} c_1(\det\mathcal{F},u)\wedge \hat{\omega}_0^2 =\lim_{\delta\rightarrow 0} \frac{1}{\text{rank}\mathcal{F}}\int_{X_0[\delta]} c_1(\mathcal{F}',\hat{H}_1)\wedge \hat{\omega}_0^2. \end{equation*} Here $u$ is some smooth Hermitian metric on the holomorphic line bundle $\det\mathcal{F}$ over $\hat{X}$, and $\hat{H}_1$ is the Hermitian metric on the bundle $\mathcal{F}'$ induced by the metric $\hat{H}$ on $\mathcal{E}$. Using the above construction of $\pi$ by convergence on the $U_l$'s, one can show by a slight modification of the arguments in \cite{UY} that \begin{equation*} \lim_{\delta\rightarrow 0} \frac{1}{\text{rank}\mathcal{F}}\int_{X_0[\delta]} c_1(\mathcal{F}',\hat{H}_1)\wedge \hat{\omega}_0^2 \geq 0.\\[0.2cm] \end{equation*} \noindent \textbf{Claim} For $0<a\ll 1$, $\mu_{\hat{\omega}_a}(\mathcal{F})\geq 0$. \begin{proof} It is enough to show $\mu_{\hat{\omega}_a}(\mathcal{F})=\mu_0$. From the construction of $\hat{\omega}_0$ in \cite{FLY}, we have \begin{equation*} \hat{\omega}_0^2=\Psi+\Phi_0 \end{equation*} where $\Psi$ is a (2,2)-form supported outside $U(1)$ and $\Phi_0$ is a $\partial\bar{\partial}$-exact (2,2)-form which is defined only on $\hat{X}\backslash \cup C_i$, is supported in $U(\frac{3}{2})\backslash \cup C_i$, and equals $\omega_{co,0}^2=\frac{9}{4}\sqrt{-1}\partial\bar{\partial}\mathbf{r}^\frac{4}{3}\wedge \sqrt{-1}\partial\bar{\partial}\mathbf{r}^\frac{4}{3}$ on $U(1)\backslash \cup C_i$.
The same construction gives $\hat{\omega}_a$ such that \begin{equation*} \hat{\omega}_a^2=\Psi+\Phi_a \end{equation*} where $\Phi_a$ is a smooth $\partial\bar{\partial}$-exact (2,2)-form supported in $U(\frac{3}{2})$ which equals $\omega_{co,a}^2$ on $U(1)$. Denote the smooth (1,1)-form $c_1(\det\mathcal{F},u)$ on $\hat{X}$ by $c_1$ and $\text{rank}\mathcal{F}$ by $s$. From the above descriptions we have \begin{equation*} \begin{split} &\mu_{\hat{\omega}_a}(\mathcal{F})-\mu_0 =\lim_{\delta\rightarrow 0} \frac{1}{s}\int_{X_0[\delta]} c_1\wedge \left(\hat{\omega}_a^2-\hat{\omega}_0^2\right) =\lim_{\delta\rightarrow 0} \frac{1}{s}\int_{X_0[\delta]} c_1\wedge \left(\Phi_a-\Phi_0\right) \\ =&\frac{1}{s}\int_{\hat{X}} c_1\wedge \Phi_a-\lim_{\delta\rightarrow 0} \frac{1}{s}\int_{X_0[\delta]} c_1\wedge \Phi_0 =-\lim_{\delta\rightarrow 0} \frac{1}{s}\int_{X_0[\delta]} c_1\wedge \Phi_0 \end{split} \end{equation*} where the last equality follows from the fact that, as smooth forms on $\hat{X}$, $c_1$ is closed and $\Phi_a$ is exact. One can write $c_1\wedge \Phi_0=d(c_1\wedge \varsigma)$ where $\varsigma$ is a 3-form supported on $U(\frac{3}{2})\backslash \cup C_i$ which equals $\frac{9}{8}(\partial \mathbf{r}^\frac{4}{3}-\bar{\partial}\mathbf{r}^\frac{4}{3})\wedge \partial \bar{\partial}\mathbf{r}^\frac{4}{3}$ on $U(1)\backslash \cup C_i$. By Stokes' Theorem, we have \begin{equation} \label{lim} -\lim_{\delta\rightarrow 0} \frac{1}{s}\int_{X_0[\delta]} c_1\wedge \Phi_0 =-\lim_{\delta\rightarrow 0} \frac{9}{8}\frac{1}{s}\int_{\partial X_0[\delta]} c_1\wedge (\partial \mathbf{r}^\frac{4}{3}-\bar{\partial}\mathbf{r}^\frac{4}{3})\wedge \partial\bar{\partial}\mathbf{r}^\frac{4}{3}. \end{equation} An explicit calculation on coordinate charts can then show that the last limit is zero. \qed \end{proof} Since $\mu_{\hat{\omega}_a}(\mathcal{E})=0$, we get from this claim a contradiction to the assumption that $\mathcal{E}$ is stable with respect to $\hat{\omega}_a$ and complete the proof of Proposition \ref{pr:L2}.\qed \end{proof} We continue with the proof of Theorem \ref{th:2-1}. Using $\hat{H}$ and $\hat{g}_a$ one can define $L^2_1$-norms for $h_a$. The next step is to give an $L^2_1$-boundedness. \begin{proposition} \label{pr:L21} The $L^2_1$-norm of $h_a$ over $\hat{X}$ is bounded by some constant $C_2$ independent of $0<a \ll 1$. \end{proposition} \begin{proof} The $C^0$-boundedness obtained above and the common upper bound in Vol$_a(\hat{X})$ imply that the $L^2$ norm of $h_a$ is bounded above by a constant independent of $a$. Choose a finite number of Hermitian metrics $H^{(\nu)}$ for $1 \leq \nu \leq k$ on $\mathcal{E}$ which are constant in some holomorphic frame $\mathcal{E}|_{U(1)} \cong \mathcal{O}^r$ over $U(1)$, such that for any smooth Hermitian metric $K$ on $\mathcal{E}$ the entries of the Hermitian matrix representing $K$ are linear functions of $\text{tr}(K(H^{(\nu)} )^{-1})$, $1 \leq \nu \leq k$, whose coefficients are constants depending only on $H^{(\nu)}$. Denote $h_a^{(\nu)}=H_a(H^{(\nu)})^{-1}$. It is therefore enough to bound the integrals \begin{equation*} \int_{\hat{X}} |d\,\text{tr}\,h_a^{(\nu)}|_{\hat{g}_a}^2 dV_a \end{equation*} for $1 \leq \nu \leq k$. From Lemma \ref{lm:main} and the fact that $H_a$ is HYM w.r.t. 
$\hat{\omega}_a$, we have \begin{equation} \begin{split} \hat{\Delta}_a \log\text{tr}\,h_a^{(\nu)} \geq &-(|\Lambda_{\hat{\omega}_a} F_{H^{(\nu)}}|_{H^{(\nu)}}+|\Lambda_{\hat{\omega}_a} F_{H_a}|_{H^{(\nu)}}) \geq -|\Lambda_{\hat{\omega}_a} F_{H^{(\nu)}}|_{H^{(\nu)}}, \end{split} \end{equation} from which we have the inequality \begin{equation} \label{eq:19-5} -\hat{\Delta}_a \text{tr}\,h_a^{(\nu)} \leq |\Lambda_{\hat{\omega}_a} F_{H^{(\nu)}}|_{H^{(\nu)}}\text{tr}\,h_a^{(\nu)} \leq K_9\text{tr}\,h_a^{(\nu)} \end{equation} for some constant $K_9>0$. Here the last inequality follows from the fact that $F_{H^{(\nu)}}$ is supported on $\hat{X}\backslash U(1)$, where the $\hat{\omega}_a$ are uniform. Multiplying both sides of the inequality (\ref{eq:19-5}) by $\text{tr} \,h_a^{(\nu)}$ and using integration by parts, we get \begin{equation*} \begin{split} \int_{\hat{X}} |d \,\text{tr} h_a^{(\nu)}|_{\hat{g}_a}^2 dV_a \leq K_9\int_{\hat{X}} |\text{tr} h_a^{(\nu)}|^2 dV_a. \end{split} \end{equation*} Finally, write $h_a^{(\nu)}=h_a\hat{H}(H^{(\nu)})^{-1}$, and we see that the result follows from the uniform $C^0$ bound of $h_a$. \qed \end{proof} Using the diagonal argument, the uniform boundedness of the $L^2_1$-norm of $h_a$ over $\hat{X}$ implies that there is a sequence $\{ a_i \}_{i\geq 1}$ converging to 0 and an $\hat{H}$-symmetric endomorphism $h_0$ of $\mathcal{E}$ which is the limit of $\{ h_{a_i}|_{U_l} \}_{i\geq 1}$ in $L^2(U_l,\text{End}(\mathcal{E}))$ for all $l$. As in \cite{Don1} and \cite{Siu}, we can then prove that the sequence $\{ h_{a_i} \}_{i\geq 1}$ converges in the $C^0$-sense to $h_0$ on each $U_l$. Next we argue that there is a uniform $C^1$-bound for $\{ h_{a_i} \}_{i\geq 1}$ over $\hat{X}$. We need the following lemma, whose proof will be given later. \begin{lemma} \label{lm:X} Let $V$ be a K$\ddot{\text{a}}$hler manifold endowed with a Ricci-flat K$\ddot{\text{a}}$hler metric $g$, and let $H$ be a HYM metric w.r.t. $g$ on a trivial holomorphic vector bundle $\mathcal{F}$ over $V$. Fix a trivialization of $\mathcal{F}$ and view $H$ as a matrix-valued function on $V$. Then \begin{equation*} -\Delta_g | \partial HH^{-1}|_{H,g}^2 \leq 0. \end{equation*} \end{lemma} We apply this lemma to the restriction of $\mathcal{E}$ to $U(1)$ under a trivialization in which $\hat{H}=I$. Also let $H=H_{a_i}$ and $g$ the restriction of $\hat{g}_{a_i}$ to $U(1)$, where it coincides with the CO-metric on the resolved conifold. Then we have \begin{equation*} -\hat{\Delta}_{a_i} | \partial H_{a_i} H_{a_i}^{-1}|_{H_{a_i},\hat{g}_{a_i}}^2 \leq 0 \end{equation*} and hence, by the Maximum Principle, \begin{equation*} \sup_{U(1)} | \partial H_{a_i}H_{a_i}^{-1}|_{H_{a_i},\hat{g}_{a_i}}^2 \leq \sup_{\partial U(1)} | \partial H_{a_i} H_{a_i}^{-1}|_{H_{a_i},\hat{g}_{a_i}}^2. \end{equation*} Using the uniform $C^0$-boundedness of $H_a$ and the fact that $\hat{H}=I$, the above inequality implies \begin{equation*} \sup_{U(1)}| \partial_{\hat{H}} h_{a_i}|_{\hat{H},\hat{g}_{a_i}}\leq K_{10}\sup_{\partial U(1)} | \partial_{\hat{H}} h_{a_i}|_{\hat{H},\hat{g}_{a_i}}. \end{equation*} Therefore, it is enough to bound the maximum of $| \partial_{\hat{H}} h_{a_i}|_{\hat{H},\hat{g}_{a_i}}$ over $X_0[\frac{1}{2}]$. Let $x_i\in X_0[\frac{1}{2}]$ be a sequence of points such that \begin{equation*} m_i:=\sup_{X_0[\frac{1}{2}]} | \partial_{\hat{H}} h_{a_i}|_{\hat{H},\hat{g}_{a_i}}=| \partial_{\hat{H}} h_{a_i}|_{\hat{H},\hat{g}_{a_i}}(x_i). \end{equation*} Assume $m_i$ is unbounded.
If $\{x_i\}$ has a converging subsequence with limit in the interior of $X_0[\frac{1}{2}]$, then one can argue as in \cite{Don1} and \cite{Siu} and get a contradiction. Thus it is enough to get a uniform bound near $\partial X_0[\frac{1}{2}]$. For this we use Lemma \ref{lm:X} and an iteration argument to conclude that $\sup_{\partial X_0[\frac{1}{2}]}| \partial_{\hat{H}} h_{a_i}|_{\hat{H},\hat{g}_{a_i}}$ is bounded by the $L^2$-integral of $| \partial_{\hat{H}} h_{a_i}|_{\hat{H},\hat{g}_{a_i}}$ in a neighborhood of $\partial X_0[\frac{1}{2}]$, say $V_0(\frac{1}{4},\frac{3}{4})$. This last integral is uniformly bounded by Proposition \ref{pr:L21}. Thus if $\{x_i\}$ has a limit on $\partial X_0[\frac{1}{2}]$, $m_i$ is bounded, which contradicts the assumption. We have therefore proved uniform $C^1$-boundedness for $\{ h_{a_i} \}_{i\geq 1}$. One can then obtain from this uniform $C^1$-bound a uniform $L^p_2$-bound for $\{ h_{a_i} \}_{i\geq 1}$ over each $U_l$ as in \cite{Don1} and \cite{Siu}. Then after taking a subsequence, we may assume that $h_{a_i}$ converges to $h_0$ weakly in the $L^p_2$ sense for all $p$ over each $U_l$. This implies $\Lambda_{\hat{\omega}_0} F_{H_0}=0$ where $H_0=h_0\hat{H}$. By standard elliptic regularity $H_0$ is smooth. The proof of Theorem \ref{th:2-1} is now complete.\qed \end{proof} \noindent \textbf{Remark} From Lemma \ref{lm:d-1} and Proposition \ref{pr:2-1} it is easy to see that the largest eigenvalue of $h_0$ is bounded from above over $X_{0,sm}$, and the smallest eigenvalue of $h_0$ is bounded from below over $X_{0,sm}$. In particular, the $C^0$-norm of $h_0$ is bounded over $X_{0,sm}$.\\[0.2cm] \noindent{\scshape Proof of Lemma \ref{lm:X}} The HYM equation takes the form \begin{equation*} \sqrt{-1}\Lambda_g\bar{\partial}(\partial HH^{-1})=0. \end{equation*} In local coordinates this is just \begin{equation*} g^{i\bar{j}} \frac{\partial}{\partial \bar{z}_j} \left(\frac{\partial H}{\partial z_i}H^{-1} \right)=0. \end{equation*} In the following we denote $\partial_i=\frac{\partial}{\partial z_i}$ and $\partial_{\bar{j}}=\frac{\partial}{\partial \bar{z}_j}$. Taking the partial derivative $\partial_k$ of both sides of the above equation, we get \begin{equation} \label{eq:n-1} \begin{split} -g^{i\bar{p}} \partial_k g_{\bar{p}q}g^{q\bar{j}} \partial_{\bar{j}} \left(\partial_i H H^{-1}\right) +g^{i\bar{j}} \partial_{\bar{j}} \left((\partial_k\partial_i H)H^{-1} - \partial_i HH^{-1}\partial_k HH^{-1} \right)=0. \end{split} \end{equation} One can compute that \begin{equation*} \begin{split} &(\partial_k\partial_i H)H^{-1} - \partial_i HH^{-1}\partial_k HH^{-1}\\ =& \partial_i\left( \partial_k HH^{-1} \right) +\partial_k HH^{-1}\partial_i HH^{-1} -\partial_i HH^{-1}\partial_k HH^{-1} =(\partial_H)_i\left( \partial_k HH^{-1}\right). \end{split} \end{equation*} Note also that $g^{i\bar{p}} \frac{\partial g_{\bar{p}q}}{\partial z_k}$ is the Christoffel symbol $\Gamma^i_{kq}$ of $g$. Therefore (\ref{eq:n-1}) becomes \begin{equation} \label{eq:n-2} -\Gamma^i_{kq}g^{q\bar{j}} \partial_{\bar{j}} \left(\partial_i H H^{-1}\right) +g^{i\bar{j}} \partial_{ \bar{j}} (\partial_H)_i\left( \partial_k HH^{-1}\right)=0.
\end{equation} Now, in local charts, \begin{equation} \label{eq:n-5} \begin{split} &-\Delta_g | \partial HH^{-1}|_{H,g}^2=-\sqrt{-1}\Lambda_g \partial \bar{\partial} | \partial HH^{-1}|_{H,g}^2 \\ \leq &-\langle \sqrt{-1}\Lambda_g \nabla_{H,g}^{1,0}\wedge \nabla_{H,g}^{0,1} (\partial HH^{-1}) , \partial HH^{-1} \rangle_{H,g} -\langle \partial HH^{-1} , \sqrt{-1}\Lambda_g \nabla_{H,g}^{0,1} \wedge \nabla_{H,g}^{1,0} (\partial HH^{-1}) \rangle_{H,g}. \end{split} \end{equation} Here $\Lambda_g: \Gamma(V, \text{End}(\mathcal{F})\otimes \Omega^1 \otimes \Omega^2 ) \rightarrow \Gamma(V, \text{End}(\mathcal{F})\otimes \Omega^1)$ is the contraction of the 2-form part with the K\"ahler form $\omega_g$ of $g$. The operator $\nabla_{H,g}^{1,0}\wedge \nabla_{H,g}^{0,1}$ is the composition \begin{equation*} \begin{split} &\Gamma(V, \text{End}(\mathcal{F})\otimes \Omega^1) \xrightarrow{\nabla_{H,g}^{0,1}} \Gamma(V,\text{End}(\mathcal{F})\otimes \Omega^1\otimes \Omega^{0,1})\\ &\xrightarrow{\nabla_{H,g}^{1,0}} \Gamma(V,\text{End}(\mathcal{F})\otimes \Omega^1\otimes \Omega^{0,1}\otimes \Omega^{1,0}) \xrightarrow{a}\Gamma(V,\text{End}(\mathcal{F})\otimes \Omega^1 \otimes \Omega^{1,1} ) \end{split} \end{equation*} where the last map is the natural anti-symmetrization. The operator $\nabla_{H,g}^{0,1}\wedge \nabla_{H,g}^{1,0}$ is analogously defined. Write $A=\partial HH^{-1}=A_k dz_k$ so $\nabla_{H,g}=\nabla_{A,g}$. Explicitly, we have \begin{equation} \label{eq:n-3} \begin{split} &\sqrt{-1}\Lambda_g \nabla_{H,g}^{1,0}\wedge \nabla_{H,g}^{0,1} (\partial HH^{-1})\\ =&-2(g^{i\bar{j}}(\partial_A)_i\partial_{\bar{j}} A_k) dz_k+2g^{q\bar{j}}(\partial_{\bar{j}} A_i)\Gamma^i_{kq}dz_k =2(-g^{i\bar{j}}\partial_{\bar{j}}(\partial_A)_i A_k+g^{q\bar{j}}(\partial_{\bar{j}} A_i)\Gamma^i_{kq})dz_k \end{split} \end{equation} where we use the fact that \begin{equation*} g^{i\bar{j}}(\partial_A)_i\partial_{\bar{j}} A_k=g^{i\bar{j}}\partial_{\bar{j}}(\partial_A)_i A_k + [g^{i\bar{j}}(F_A)_{i\bar{j}},A_k]=g^{i\bar{j}}\partial_{\bar{j}}(\partial_A)_i A_k \end{equation*} because $d+A$ is a HYM connection. Now (\ref{eq:n-2}) and (\ref{eq:n-3}) together imply that \begin{equation} \label{eq:n-6} \sqrt{-1}\Lambda_g \nabla_{H,g}^{1,0}\wedge \nabla_{H,g}^{0,1} (\partial HH^{-1})=0. \end{equation} Next we compute $\sqrt{-1}\Lambda_g\nabla_{H,g}^{0,1}\wedge \nabla_{H,g}^{1,0} (\partial HH^{-1})$. We have \begin{equation} \begin{split} &\sqrt{-1}\Lambda_g \nabla_{H,g}^{0,1}\wedge \nabla_{H,g}^{1,0} (\partial HH^{-1}) =2\left(g^{i\bar{j}}(\partial_{\bar{j}}(\partial_A)_i A_k) dz_k -(\partial_{\bar{j}} A_i)g^{q\bar{j}}\Gamma^i_{kq}dz_k -A_ig^{q\bar{j}}\partial_{\bar{j}} \Gamma^i_{kq}dz_k\right) . \end{split} \end{equation} Note that \begin{equation*} -\partial_{\bar{j}} \Gamma^i_{kq}=-\partial_{\bar{j}} (g^{i\bar{p}} \partial_k g_{\bar{p}q}) =-g^{i\bar{p}}\partial_{\bar{j}}\partial_k g_{\bar{p}q}+g^{i\bar{s}}g^{t\bar{p}}\partial_{\bar{j}} g_{\bar{s}t}\partial_k g_{\bar{p}q} \end{equation*} is the full curvature tensor $R^i_{qk\bar{j}}$ of $g$. From the Bianchi identity and the fact that $g$ is Ricci flat, we have \begin{equation} \label{eq:n-4} -g^{q\bar{j}}\partial_{\bar{j}} \Gamma^i_{kq}=g^{q\bar{j}}R^i_{qk\bar{j}} =g^{q\bar{j}}R_{q\bar{p}k\bar{j}}g^{i\bar{p}}= g^{q\bar{j}}R_{q\bar{j}k\bar{p}}g^{i\bar{p}}=R_{k\bar{p}}g^{i\bar{p}}=0.
\end{equation} From (\ref{eq:n-2}) and (\ref{eq:n-4}) we then have \begin{equation} \label{eq:n-7} \sqrt{-1}\Lambda_g \nabla_{H,g}^{0,1}\wedge \nabla_{H,g}^{1,0} (\partial HH^{-1}) =0. \end{equation} The result now follows from (\ref{eq:n-5}), (\ref{eq:n-6}) and (\ref{eq:n-7}).\qed \subsection{Boundedness results for $H_0$} We will now establish some boundedness results for $H_0$. The following $C^1$-boundedness for $h_0$ follows easily from the uniform $C^1$-bound of the sequence $\{ h_{a_i} \}_{i\geq 1}$ which converges to $h_0$. \begin{proposition} \label{pr:2-4} There is a constant $C'_1>0$ such that $| \nabla_{\hat{H}}h_0 |_{\hat{H},\hat{g}_0} \leq C'_1$ on $X_{0,sm}$. \end{proposition} Higher order bounds for $H_0$ will be described in the uniform coordinate system $\{(B_z,\phi_ {0,z})| z\in X_{0,sm} \} $ from Section 3. \begin{proposition} \label{pr:2-3} There are constants $C'_k>0$ for $k \geq 0$ such that in the above coordinate system, \begin{equation*} \Arrowvert h_0 \Arrowvert_{C^k(B_z,\hat{H},g_e)}<C'_k \end{equation*} for each $z \in X_{0,sm}$. \end{proposition} \begin{proof} It is enough to focus on $V_{0,sm}(1)$, where $\mathcal{E}$ is the trivial bundle. Moreover, by gauge invariance of the norm, it is enough to work under a holomorphic frame in which $\hat{H}=I$. With this understood, $h_0$ is just $H_0$. The result for the $k=0$ cases is Proposition \ref{pr:2-1}. For the $k=1$ case, note that by Proposition \ref{pr:2-4} we have locally \begin{equation*} \text{tr}\left( {\hat{g}_0}^{i\bar{j}}\frac{\partial H_0}{\partial w_i }\frac{\partial H^*_0}{\partial \bar{w}_j} \right)<(C'_1)^2. \end{equation*} Here $\ast$ is w.r.t. $I$, and in this case $H_0^*=H_0$. Therefore, because the norm $\mathbf{r}_0(z)^{-\frac{4}{3}}\hat{g}_0\leq C_0g_e$ where $g_e$ is the Euclidean metric in $(w_1,w_2,w_3)$, we have \begin{equation} \label{eq:22} \text{tr}\left({g}_e^{i\bar{j}}\frac{\partial H_0}{\partial w_i }\frac{\partial H_0}{\partial \bar{w}_j} \right) \leq C_0 \text{tr}\left(\mathbf{r}_0(z)^\frac{4}{3}(\hat{g}_0)^{i\bar{j}}\frac{\partial H_0}{\partial w_i }\frac{\partial H_0}{\partial \bar{w}_j} \right)<(C'_1)^2C_0\mathbf{r}_0(z)^\frac{4}{3}<K_{11}, \end{equation} for some constant $K_{11}$ independent of $z$. This is the desired result for $k=1$. For the $k \geq 2$ case, note that the metric $H_0$ is HYM, so in each coordinate chart $B_z$ it satisfies the equation \begin{equation} \label{eq:21} \mathbf{r}_0(z)^\frac{4}{3}{\hat{g}_0}^{i\bar{j}}\frac{\partial^2H_0}{\partial w_i \partial \bar{w}_j} =\mathbf{r}_0(z)^\frac{4}{3}{\hat{g}_0}^{i\bar{j}}\frac{\partial H_0}{\partial w_i }{H_0}^{-1} \frac{\partial H_0}{\partial \bar{w}_j }. \end{equation} By the $k=0,1$ cases and Proposition \ref{pr:2-4} the right hand side of (\ref{eq:21}) is bounded by some constant independent of $z \in V_{0,sm}(1)$. Moreover, there is a constant $\lambda>0$ independent of $z\in V_{0,sm}(1)$ such that \begin{equation} \label{eq:23} \mathbf{r}_0(z)^\frac{4}{3}(\hat{g}_0)^{i\bar{j}}\xi_i \bar{\xi}_j \geq \lambda |\xi|^2 \end{equation} over any $B_z$. Therefore, by p.15 of \cite{J}, the bounds in (\ref{eq:22}) and (\ref{eq:23}) together with the estimates on the higher derivatives of $\mathbf{r}_0(z)^{-\frac{4}{3}}\hat{g}_0$ from (\ref{eq:20}) imply that \begin{equation*} \Arrowvert H_0 \Arrowvert_{C^{1,\frac{1}{2}}(B'_z,g_e)}<K_{12} \end{equation*} where $B'_z\subset B_z$ is the ball of radius $\frac{\rho}{2}$ and $K_{12}$ is a constant independent of $z$. 
It is not hard to improve this to \begin{equation*} \Arrowvert H_0 \Arrowvert_{C^{1,\frac{1}{2}}(B_z,g_e)}<K_{13} \end{equation*} by considering the estimates over $B_y$ for $y \in B_z \backslash B'_z$. What is important is that this $C^{1,\frac{1}{2}}(B_z,g_e)$ bound of $H_0$ implies that the right hand side of (\ref{eq:21}) is bounded in the $C^{0,\frac{1}{2}}$ sense, and so by elliptic regularity we get \begin{equation*} \Arrowvert H_0 \Arrowvert_{C^{2,\frac{1}{2}}(B'_z,g_e)}<K_{14}, \end{equation*} which can be improved to $B_z$ as before. Using bootstrap arguments, we can obtain, for any $k\geq 1$, a constant $C'_k$ independent of $z \in V_{0,sm}(1)$ such that \begin{equation*} \Arrowvert H_0 \Arrowvert_{C^{k}(B_z,g_e)}<C'_k. \end{equation*} Here the derivatives are taken w.r.t. the Euclidean coordinates. However, these are also the derivatives w.r.t. $g_e$ and $\hat{H}$ since $\hat{H}=I$ here. \qed \end{proof} Let $\alpha$ be a number such that $0<\alpha <\frac{1}{2}$. We will specify the choice of $\alpha$ later. If we restrict ourselves to the region $V_0(\frac{1}{2}R|t|^\alpha,3R|t|^\alpha)$, where the bundle $\mathcal{E}$ is trivial, we have the following result which we will need in the next section. \begin{proposition} \label{pr:1-4} For every small $t$, every $w_t \in V_0(\frac{1}{2}R|t|^\alpha,3R|t|^\alpha)$ and every $z\in V_0(\frac{1}{2}R|t|^\alpha,3R|t|^\alpha)$, we have \begin{equation*} \Arrowvert H_0H_0(w_t)^{-1}-I \Arrowvert_{C^{2,\alpha}(B_z,\hat{H},g_e)}<D|t|^{\frac{2}{3}\alpha} \end{equation*} where $D>0$ is a constant independent of $t$, $w_t$ and $z$. Here $H_0(w_t)$ is viewed as a constant metric on $\mathcal{E}|_{V_0(\frac{1}{2}R|t|^\alpha,3R|t|^\alpha)} \cong \mathcal{O}^r$. \end{proposition} \begin{proof} We work in a holomorphic frame over $V_0(\frac{1}{2}R|t|^\alpha,3R|t|^\alpha)$ under which $\hat{H}=I$, so $H_0(w_t)$ is a constant matrix. Because of the bound in the remark before the proof of Lemma \ref{lm:X}, it is enough to show that \begin{equation*} \Arrowvert H_0-H_0(w_t) \Arrowvert_{C^{2}(B_z,g_e)}<D|t|^{\frac{2}{3}\alpha} \end{equation*} for some constant $D$. Since $|\nabla_{\hat{H}} H_0|_{\hat{H},\hat{g}_0}<C'_1$ for some constant $C'_1$ and there is a constant $K_{15}>0$ such that $\text{dist}_{\hat{g}_0}(z,w_t)<K_{15}|t|^{\frac{2}{3}\alpha}$ for any small $t$ and $z \in V_0(\frac{1}{2}R|t|^\alpha,3R|t|^\alpha)$, by the mean value theorem we have $|H_0-H_0(w_t)|<K_{16}|t|^{\frac{2}{3}\alpha}$ on $V_0(\frac{1}{2}R|t|^\alpha,3R|t|^\alpha)$. For each $z \in V_0(\frac{1}{2}R|t|^\alpha,3R|t|^\alpha)$, in the coordinate chart $B_z$ we have \begin{equation} \label{eq:24} \mathbf{r}_0(z)^\frac{4}{3}(\hat{g}_0)^{i\bar{j}}\frac{\partial^2(H_0-H_0(w_t))}{\partial w_i \partial \bar{w}_j} =\mathbf{r}_0(z)^\frac{4}{3}(\hat{g}_0)^{i\bar{j}}\frac{\partial H_0}{\partial w_i }{H_0}^{-1} \frac{\partial H_0}{\partial \bar{w}_j }. \end{equation} Notice that equation (\ref{eq:22}) actually implies that the right hand side of equation (\ref{eq:24}) is bounded by $(C'_1)^2C_0\mathbf{r}_0(z)^\frac{4}{3}$, which is less than $K_{17}|t|^{\frac{4}{3}\alpha}$ for some constant $K_{17}>0$.
Therefore, in view of (\ref{eq:20}), by elliptic regularity there is a constant $K_{18}$ independent of $t$ and $z \in V_0(\frac{1}{2}R|t|^\alpha,3R|t|^\alpha)$ such that \begin{equation*} \begin{split} &\Arrowvert H_0-H_0(w_t) \Arrowvert_{C^{1,\frac{1}{2}}(B'_z,g_e)} \\ \leq &K_{18}\left( \Arrowvert \text{RHS of (\ref{eq:24})} \Arrowvert_{C^0(B_z)} + \Arrowvert H_0-H_0(w_t) \Arrowvert_{C^0(B_z)} \right) \leq K_{18}(K_{17}+K_{16})|t|^{\frac{2}{3}\alpha}. \end{split} \end{equation*} As in the proof of Proposition \ref{pr:2-3}, this final estimate can be improved to $\Arrowvert H_0-H_0(w_t) \Arrowvert_{C^{1,\frac{1}{2}}(B_z,g_e)} \leq K_{19}|t|^{\frac{2}{3}\alpha}$, and we use elliptic regularity once again to get the desired bound.\qed \end{proof} \section{The approximate Hermitian metrics on $\mathcal{E}_t$ over $X_t$} \subsection{Construction of approximate metrics} In this subsection we construct approximate Hermitian metrics on $\mathcal{E}_t$. We will compare the estimates on the bundles $\mathcal{E}_t$, each over a different manifold $X_t$. For this we first recall the smooth family of diffeomorphisms $x_t:Q_t\backslash \{\mathbf{r}_t=|t|^\frac{1}{2}\}\rightarrow Q_{0,sm}$ from Section 2. Recall also the fixed large number $R \gg 1$ from Section 3 (after (\ref{eq:2})). For $t$ small, restricting to $V_t(\frac{1}{2}R|t|^\frac{1}{2},1)$ we get a smooth family of injective maps \begin{equation*} x_t:V_t(\frac{1}{2}R|t|^\frac{1}{2},1)\rightarrow V_0(\frac{1}{4}R|t|^\frac{1}{2},\frac{3}{2}). \end{equation*} We can extend these to a smooth family of injective maps, still denoted by $x_t$: \begin{equation*} x_t:X_t[\frac{1}{2}R|t|^\frac{1}{2}]\rightarrow X_0[\frac{1}{4}R|t|^\frac{1}{2}]. \end{equation*} Next, choose a smooth family \begin{equation*} f_t:\mathcal{E}_t|_{X_t[\frac{1}{2}R|t|^\frac{1}{2}]} \rightarrow \mathcal{E}|_{X_0[\frac{1}{4}R|t|^\frac{1}{2}]} \end{equation*} of maps between smooth complex vector bundles which commute with $x_t$ and are diffeomorphisms onto their images. In addition, we require the following condition on $f_t$. Denote by $(\mathcal{X}, \tilde{\mathcal{E}})$ the smoothing of the pair $(X_0,\pi_*\mathcal{E})$ mentioned in the introduction. By our assumption on $\mathcal{E}$ the restriction of $\tilde{\mathcal{E}}$ to $\mathcal{V}:=\bigcup_{t \in \Delta_\epsilon}V_t(1)$ is a trivial holomorphic bundle. Fix a holomorphic trivialization $\tilde{\mathcal{E}}|_{\mathcal{V}}\cong \mathcal{O}_{\mathcal{V}}^r$ inducing the trivialization $\mathcal{E}|_{V_{0,sm}(1)}\cong \mathcal{O}_{V_{0,sm}(1)}^r$ under which $\hat{H}=I$, the $r\times r$ identity matrix. With the induced holomorphic trivialization of $\mathcal{E}_t|_{V_t(1)}$ for all small $t$, we require the family $f_t$ to be such that when restricted to $V_t(\frac{1}{2}R|t|^\frac{1}{2},\frac{3}{4})$, we get a map from the trivial rank $r$ bundle to another trivial rank $r$ bundle which is the product of the map on the base and the identity map on the $\mathbb{C}^r$ fibers. Over $X_t[\frac{1}{2}R|t|^\frac{1}{2}]$ we let $H''_t=f_t^*H_0$, the pullback of the HYM metric $H_0$ from $X_0[\frac{1}{4}R|t|^\frac{1}{2}]$. Note that our choice of $f_t$ over $V_t(\frac{1}{2}R|t|^\frac{1}{2},\frac{3}{4})$ is one such that $f_t^*$ becomes the pullback of vector-valued functions by $x_t$. In particular, the pullback of a constant frame of $\mathcal{E}|_{x_t(V_t(2R|t|^\alpha,\frac{3}{4}))}$ by $f_t$ is again a constant frame of $\mathcal{E}_t|_{V_t(2R|t|^\alpha,\frac{3}{4})}$.
Therefore, under some constant frame of $\mathcal{E}_t|_{V_t(\frac{1}{2}R|t|^\frac{1}{2},\frac{3}{4})}$, the pullback $\hat{H}_t$ of $\hat{H}$ can be seen as the identity matrix. We can extend this constant frame of $\mathcal{E}_t|_{V_t(2R|t|^\alpha,\frac{3}{4})}$ naturally to one over $V_t(\frac{3}{4})$, and we can then extend $\hat{H}_t$ over $V_t(\frac{3}{4})$ by taking the identity matrix under this constant frame. We then further extend $\hat{H}_t$ over the whole $X_t$ to form a smooth family. We still denote these extensions by $\hat{H}_t$, and they will serve as reference metrics on $\mathcal{E}_t$. \\[0.2cm] From Proposition \ref{pr:2-3} one can deduce \begin{lemma} \label{lm:5} There exists a constant $C_k$ such that for any $t \neq 0$ and $z \in X_t[R|t|^\frac{1}{2}]$ we have $\Arrowvert f_t^*h_0 \Arrowvert_{C^k(B_z,\hat{H}_t,\tilde{g}_t)} \leq C_k $. \end{lemma} In view of Theorem \ref{th:0} and Proposition \ref{pr:n-1}, we can deduce \begin{corollary} \label{co:4} There exists a constant $C''_k$ such that for any $t$, over $V_t(R|t|^\frac{1}{2},\frac{3}{4})$ we have \begin{equation*} \sum_{j=0}^k| \mathbf{r}_t^{\frac{2}{3}j} \nabla_{g_{co,t}}^j (f_t^*h_0) |_{\hat{H}_t,g_{co,t}} \leq C''_k.\\[0.2cm] \end{equation*} \end{corollary} For $\alpha$ such that $0<\alpha <\frac{1}{2}$ and $t$ small, the image of the restriction of $x_t$ to $V_t(R|t|^\alpha,2R|t|^\alpha)$ lies in $V_0(\frac{1}{2}R|t|^\alpha,3R|t|^\alpha)$. For $w_t$ as in Proposition \ref{pr:1-4}, define $H'_t:=f_t^*(H_0(w_t))$ to be the constant metric on $\mathcal{E}_t|_{V_t(2R|t|^\alpha)}$ (w.r.t. a constant frame). Then by Proposition \ref{pr:1-4} we immediately get \begin{lemma} \label{lm:8} There is a constant $D>0$ such that for any $t$ and $z \in V_t(R|t|^\alpha,2R|t|^\alpha)$, we have \begin{equation*} \Arrowvert H''_t(H'_t)^{-1} -I \Arrowvert_{C^2(B_z,\hat{H}_t,g_e)}<D|t|^{\frac{2}{3}\alpha}. \end{equation*} \end{lemma} Now let $\tilde{\tau}_{t}(s)$ be a smooth increasing cutoff function on $\mathbb{R}^1$ such that \begin{equation*} \tilde{\tau}_{t}(s)=\left\{ \begin{array}{ll} 1, & s \geq 2R|t|^{\alpha-\frac{1}{2}} \\ 0, & s \leq R|t|^{\alpha-\frac{1}{2}}, \end{array} \right. \end{equation*} and such that its $l$-th derivative $\tilde{\tau}_{t}^{(l)}$ satisfies $|\tilde{\tau}_{t}^{(l)}| \leq \tilde{K}_l|t|^{(\frac{1}{2}-\alpha)l}$ for $l \geq 1$ for a constant $\tilde{K}_l>0$ independent of $t$. Define $\tau_t=\tilde{\tau}_{t}(|t|^{-\frac{1}{2}}\mathbf{r}_t)$, which is a cutoff function on $X_t$, and define the approximate Hermitian metric to be \begin{equation*} H_t=(1-\tau_t)H'_t+\tau_t H''_t=(I+\tau_t(H''_t(H'_t)^{-1}-I))H'_t. \end{equation*} \noindent \textbf{Remark} The metric $H_t$ is just an interpolation between $f_t^*H_0$ and $H'_t=f_t^*(H_0(w_t))$. Because the determinant of $f_t^*h_0$ is bounded uniformly both from above and below, the common $C^0$-bound of $f_t^*h_0$ w.r.t. $\hat{H}_t$ in Lemma \ref{lm:5} implies that the norms $|\cdot|_{H_t}$ and $|\cdot|_{\hat{H}_t}$ are in fact equivalent (uniformly in $t$). \\[0.2cm] The following estimates for $H_t$ are analogous to those for $H_0$ in Proposition \ref{pr:2-3}. They follow from that proposition with the help of Corollary \ref{co:3}. \begin{proposition} \label{pr:3-3} There are constants $C_k>0$ for $k \geq 0$ such that \begin{equation*} \Arrowvert H_t (\hat{H}_t)^{-1}\Arrowvert_{C^k(B_z,\hat{H}_t,g_e)}<C_k \end{equation*} for each $z \in X_t$.
\\[0.2cm] \end{proposition} \subsection{Bounds for the mean curvatures} The following proposition gives the bounds for the mean curvatures $\sqrt{-1}\Lambda_{\tilde{\omega}_t}F_{H_t}$ of the approximate metrics $H_t$. \begin{proposition} \label{pr:3-4} There are constants $\Lambda_k>0$ and $\tilde{Z}_k>0$ such that for $t$ small enough, we have the following: \begin{enumerate} \item For any $z \in X_t$ and $k\geq 1$, \begin{equation} \label{eq:F-3} \Arrowvert \mathbf{r}_t^\frac{4}{3} \Lambda_{\tilde{\omega}_t}F_{H_t} \Arrowvert_{C^k(B_z,\hat{H}_t,g_e)} \leq \Lambda_k, \end{equation} \item \begin{equation} \label{eq:20-5} |\mathbf{r}_t^\frac{4}{3} \Lambda_{\tilde{\omega}_t}F_{H_t} |_{H_t}\leq \tilde{Z}_0 \max \{|t|^{\frac{2}{3}\alpha},\,|t|^{1-2\alpha}\}, \end{equation} and \item \begin{equation} \label{eq:20-6} \Arrowvert \Lambda_{\tilde{\omega}_t}F_{H_t} \Arrowvert_{L^k_{0,-4}(X_t,H_t,\tilde{g}_t)}\leq \tilde{Z}_k \max \{|t|^{2\alpha},\,|t|^{1-\frac{2}{3}\alpha}\}. \end{equation} \end{enumerate} \end{proposition} \begin{proof} The first estimates (\ref{eq:F-3}) follow from Theorem \ref{th:0} and Proposition \ref{pr:3-3}. For (\ref{eq:20-5}), first of all we have \begin{equation} \label{eq:3-18} \Lambda_{\tilde{\omega}_t} F_{H_t}=0\,\,\,\text{on}\,\,V_t(R|t|^\alpha) \end{equation} because $H_t=H'_t$ there and $H'_t$ is a flat metric. Next consider the annulus $V_t(R|t|^\alpha,2R|t|^\alpha)$. Let \begin{equation*} h'_t=I+\tau_t(H''_t(H'_t)^{-1} -I), \end{equation*} then we have \begin{equation*} \Lambda_{\tilde{\omega}_t} F_{H_t} =\Lambda_{\tilde{\omega}_t} F_{H'_t}+\Lambda_{\tilde{\omega}_t}\bar{\partial}(\partial_{H'_t}h'_t(h'_t)^{-1}) =\Lambda_{\tilde{\omega}_t}\bar{\partial}(\partial_{H'_t}h'_t(h'_t)^{-1}). \end{equation*} Now, on each local coordinate chart $B_z\cap V_t(R|t|^\alpha,2R|t|^\alpha)$, computing in a frame under which $H_t'$ is constant, we have \begin{equation} \label{eq:3-3} \begin{split} \mathbf{r}_t^\frac{4}{3}\Lambda_{\tilde{\omega}_t}\bar{\partial}(\partial_{H'_t}h'_t(h'_t)^{-1}) =&\mathbf{r}_t^\frac{4}{3}\Lambda_{\tilde{\omega}_t}\bar{\partial}\left(\left(\partial h'_t-\partial H'_t(H'_t)^{-1} h'_t+h'_t \partial H'_t(H'_t)^{-1}\right)(h'_t)^{-1}\right) \\ =&\mathbf{r}_t^\frac{4}{3}\Lambda_{\tilde{\omega}_t}\bar{\partial}(\partial h'_t(h'_t)^{-1}) \\ =&\mathbf{r}_t^\frac{4}{3}\Lambda_{\tilde{\omega}_t}\partial h'_t(h'_t)^{-1}\bar{\partial}h'_t (h'_t)^{-1} +\mathbf{r}_t^\frac{4}{3}\Lambda_{\tilde{\omega}_t} \bar{\partial}\partial h'_t(h'_t)^{-1}. \end{split} \end{equation} To bound the derivatives of $h'_t$ we need to bound the derivatives of $\tau_t$. The first order derivative of $\tau_t$ can be bounded as \begin{equation} \label{eq:3-1} |\nabla_e \tau_t|_{g_e} \leq |\tilde{\tau}_t'(|t|^{-\frac{1}{2}}\mathbf{r}_t)|\,|t|^{-\frac{1}{2}} |\nabla_e \mathbf{r}_t|_{g_e} \leq \tilde{K}_1|t|^{\frac{1}{2}-\alpha} |t|^{-\frac{1}{2}}\cdot R_1\mathbf{r}_t \leq \tilde{K}_1R_1 |t|^{-\alpha}\mathbf{r}_t \leq 2\tilde{K}_1R_1R \end{equation} where (\ref{eq:ra-1}) is used. The last inequality follows from the fact that the support of $\nabla\tau_t$ is contained in $V_t(R|t|^\alpha,2R|t|^\alpha)$.
Similarly, the second order derivative of $\tau_t$ can be bounded as \begin{equation} \label{eq:3-2} \begin{split} |\nabla_e^2 \tau_t|_{g_e} \leq & |\tilde{\tau}_t''(|t|^{-\frac{1}{2}}\mathbf{r}_t)|\,|t|^{-1}|\nabla_e \mathbf{r}_t|_{g_e}^2+ |\tilde{\tau}_t'(|t|^{-\frac{1}{2}}\mathbf{r}_t)|\,|t|^{-\frac{1}{2}} |\nabla_e^2 \mathbf{r}_t|_{g_e} \\ \leq & \tilde{K}_2 |t|^{1-2\alpha} |t|^{-1}\mathbf{r}_t^2+\tilde{K}_1|t|^{\frac{1}{2}-\alpha} |t|^{-\frac{1}{2}}\cdot R_2\mathbf{r}_t \\ \leq & \tilde{K}_2 |t|^{-2\alpha} \mathbf{r}_t^2+\tilde{K}_1R_2|t|^{-\alpha} \mathbf{r}_t \leq 4\tilde{K}_2R^2+2\tilde{K}_1R_2R \end{split} \end{equation} where (\ref{eq:ra-1}) is used again and the last inequality follows as in (\ref{eq:3-1}). From (\ref{eq:3-1}), (\ref{eq:3-2}) and Lemma \ref{lm:8} we can obtain the estimates \begin{equation*} \begin{split} |\partial h'_t|_{\hat{H}_t,g_e}=&|\partial(\tau_t(H''_t(H'_t)^{-1}-I))|_{\hat{H}_t,g_e} \\ \leq & |\nabla_e\tau_t|_{g_e} |H''_t(H'_t)^{-1}-I |_{\hat{H}_t}+\tau_t|\nabla_e (H''_t(H'_t)^{-1})|_{\hat{H}_t,g_e} \\ \leq & 2\tilde{K}_1R_1R\cdot D|t|^{\frac{2}{3}\alpha}+D|t|^{\frac{2}{3}\alpha} \leq (2\tilde{K}_1R_1R+1) D\cdot|t|^{\frac{2}{3}\alpha} \end{split} \end{equation*} and \begin{equation*} \begin{split} |\bar{\partial}\partial h'_t|_{\hat{H}_t,g_e} \leq & |\nabla_e^2\tau_t|_{g_e} |H''_t(H'_t)^{-1} -I |_{\hat{H}_t}+\tau_t|\nabla_e^2 (H''_t(H'_t)^{-1})|_{\hat{H}_t,g_e}\\ &+2|\nabla_e\tau_t|_{g_e}|\nabla_e (H''_t(H'_t)^{-1})|_{\hat{H}_t,g_e} \\ \leq &(4\tilde{K}_2R^2+2\tilde{K}_1R_2R)\cdot D|t|^{\frac{2}{3}\alpha}+D|t|^{\frac{2}{3}\alpha}+2\cdot 2\tilde{K}_1R_1R\cdot D|t|^{\frac{2}{3}\alpha}\\ \leq & (4\tilde{K}_2R^2+2\tilde{K}_1R_2R+4\tilde{K}_1R_1R+1)\cdot D|t|^{\frac{2}{3}\alpha}. \end{split} \end{equation*} In local charts, the term $\mathbf{r}_t^\frac{4}{3}\Lambda_{\tilde{\omega}_t}$ contributes a factor $\mathbf{r}_t^\frac{4}{3}\tilde{g}_t^{-1}$, which is bounded by Theorem \ref{th:0}. Therefore, from expression (\ref{eq:3-3}) we can now conclude that \begin{equation*} |\mathbf{r}_t^\frac{4}{3}\Lambda_{\tilde{\omega}_t}F_{H_t}|_{\hat{H}_t} = |\mathbf{r}_t^\frac{4}{3}\Lambda_{\tilde{\omega}_t}\bar{\partial}(\partial_{H'_t}h'_t(h'_t)^{-1})|_{\hat{H}_t} \leq Z_1|t|^{\frac{2}{3}\alpha} \end{equation*} on $V_t(R|t|^\alpha,2R|t|^\alpha)$ for some constant $Z_1>0$ independent of $t$. From the remark before Proposition \ref{pr:3-3} we get \begin{equation} \label{eq:8-2} |\mathbf{r}_t^\frac{4}{3}\Lambda_{\tilde{\omega}_t}F_{H_t}|_{H_t} \leq Z_2|t|^{\frac{2}{3}\alpha} \end{equation} on $V_t(R|t|^\alpha,2R|t|^\alpha)$ for some constant $Z_2>0$ independent of $t$. We now estimate the $L^k_{0,-4}$-norm of $\sqrt{-1}\Lambda_{\tilde{\omega}_t}F_{H_t}$ on $V_t(R|t|^\alpha,2R|t|^\alpha)$ with respect to $\tilde{g}_t$ and $H_t$: \begin{equation*} \begin{split} &\int_{V_t(R|t|^\alpha,2R|t|^\alpha)} |\mathbf{r}_t^{\frac{8}{3}} \Lambda_{\tilde{\omega}_t}F_{H_t}|_{H_t}^k \mathbf{r}_t^{-4}dV_t = \int_{V_t(R|t|^\alpha,2R|t|^\alpha)} \mathbf{r}_t^{\frac{4}{3} k} |\mathbf{r}_t^\frac{4}{3} \Lambda_{\tilde{\omega}_t} F_{H_t}|_{H_t}^k \mathbf{r}_t^{-4}dV_t \\ \leq & (2R)^{\frac{4}{3}k} |t|^{\frac{4}{3}\alpha k}\cdot Z_2^k |t|^{\frac{2}{3}\alpha k} \int_{V_t(R|t|^\alpha,2R|t|^\alpha)} \mathbf{r}_t^{-4}dV_t \leq (2R)^{\frac{4}{3}k}Z_2^k |t|^{2\alpha k} Z_3, \end{split} \end{equation*} where $Z_3>1$ is an upper bound for $\int_{V_t(R|t|^\alpha,2R|t|^\alpha)} \mathbf{r}_t^{-4}dV_t$ for any $t \neq 0$ small.
Thus \begin{equation} \label{eq:3-17} \Arrowvert \Lambda_{\tilde{\omega}_t}F_{H_t} \Arrowvert_{L^k_{0,-4}(V_t(R|t|^\alpha,2R|t|^\alpha),\tilde{g}_t,H_t)} \leq (2R)^{\frac{4}{3}}Z_2 Z_3 |t|^{2\alpha}. \end{equation} We proceed to consider the region $V_t(2R|t|^\alpha,\frac{3}{4})$. We will first give a pointwise estimate on the mean curvature of the Hermitian metric $H_t=f_t^*H_0$. We will use $\partial_t$ and $\bar{\partial}_t$ to emphasize that they are the $\partial$- and $\bar{\partial}$-operators on $X_t$, respectively. The calculation will be done under the specific choices of frames as mentioned before Lemma \ref{lm:5}. With these choices, we have $\hat{H}_t=I$ and $f_t^*H_0$ can be regarded as the pullback by $x_t$ of a matrix-valued function representing $H_0$. Since constant frames are holomorphic, the curvature of $f_t^*H_0$ can be computed using this pullback matrix-valued function which we still denote by $f_t^*H_0$. \begin{lemma} \label{lm:7} There is a constant $Z_4>0$ independent of $t$ such that \begin{equation*} |\mathbf{r}_t^\frac{4}{3} \Lambda_{\tilde{\omega}_t}\left(\bar{\partial}_t(\partial_t (f_t^*H_0)(f_t^*H_0)^{-1})\right)|_{\hat{H}_t} \leq Z_4\cdot |t|\mathbf{r}_t^{-2} \end{equation*} on $V_t(2R|t|^\alpha,\frac{3}{4})$. \end{lemma} \begin{proof} We expand and get \begin{equation} \label{eq:3-19} \bar{\partial}_t(\partial_t (f_t^*H_0)(f_t^*H_0)^{-1}) = (\bar{\partial}_t\partial_t (f_t^*H_0))(f_t^*H_0)^{-1}+\partial_t (f_t^*H_0)\wedge(f_t^*H_0)^{-1}\bar{\partial}_t (f_t^*H_0) (f_t^*H_0)^{-1}. \end{equation} We compute \begin{equation} \label{eq:3-7} \begin{split} \bar{\partial}_t\partial_t (f_t^*H_0) =&-\frac{\sqrt{-1}}{2} dJ_t d(f_t^*H_0)=-\frac{\sqrt{-1}}{2} dJ_tf_t^*(dH_0)\\ = &-\frac{\sqrt{-1}}{2} d(x_t^*J_0d(f_t^*H_0)) -\frac{\sqrt{-1}}{2} d\left [(J_t-x_t^*J_0)d(f_t^*H_0) \right ] \\ =& -\frac{\sqrt{-1}}{2} f_t^*(dJ_0d(f_t^*H_0))) -\frac{\sqrt{-1}}{2} d\left [(J_t-x_t^*J_0)d(f_t^*H_0) \right ] \\ =& f_t^*(\bar{\partial}_0\partial_0 H_0) -\frac{\sqrt{-1}}{2} d\left [(J_t-x_t^*J_0)f_t^*(dH_0) \right ]. \end{split} \end{equation} Moreover, \begin{equation} \label{eq:3-8} \begin{split} \partial_t (f_t^*H_0) =& \frac{d-\sqrt{-1}J_td}{2}(f_t^*H_0)=\frac{1}{2}(1-\sqrt{-1}J_t)f_t^*(dH_0)\\ =&\frac{1}{2}(1-\sqrt{-1}x_t^*J_0)f_t^*(dH_0)-\frac{\sqrt{-1}}{2}(J_t-x_t^*J_0)f_t^*(dH_0) \\ =& f_t^*(\partial_0 H_0)-\frac{\sqrt{-1}}{2}(J_t-x_t^*J_0)f_t^*(dH_0), \end{split} \end{equation} and similarly \begin{equation} \label{eq:3-9} \bar{\partial}_t (f_t^*H_0) = f_t^*(\bar{\partial}_0 H_0)+\frac{\sqrt{-1}}{2}(J_t-x_t^*J_0)f_t^*(dH_0). 
\end{equation} Plugging (\ref{eq:3-7}), (\ref{eq:3-8}) and (\ref{eq:3-9}) into (\ref{eq:3-19}), we get \begin{equation*} \begin{split} & \bar{\partial}_t(\partial_t (f_t^*H_0)(f_t^*H_0)^{-1}) \\ =& f_t^*(\bar{\partial}_0\partial_0 H_0)(f_t^*H_0)^{-1} -\frac{\sqrt{-1}}{2}d\left [(J_t-x_t^*J_0)f_t^*(dH_0) \right ]\cdot (f_t^*H_0)^{-1}\\ & +(f_t^*\partial_0 H_0)(f_t^*H_0)^{-1}\wedge (f_t^*\bar{\partial}_0 H_0)(f_t^*H_0)^{-1} \\ & +\frac{\sqrt{-1}}{2}(f_t^*\partial_0 H_0)(f_t^*H_0)^{-1} \wedge [(J_t-x_t^*J_0)f_t^*(dH_0)](f_t^*H_0)^{-1} \\ & -\frac{\sqrt{-1}}{2}[(J_t-x_t^*J_0)f_t^*(dH_0)](f_t^*H_0)^{-1} \wedge (f_t^*\bar{\partial}_0 H_0)(f_t^*H_0)^{-1} \\ & +\frac{1}{4}[(J_t-x_t^*J_0)f_t^*(dH_0)](f_t^*H_0)^{-1} \wedge [(J_t-x_t^*J_0)f_t^*(dH_0)](f_t^*H_0)^{-1} \\ =& f_t^*(\bar{\partial}_0(\partial_0 H_0(H_0)^{-1})) -\frac{\sqrt{-1}}{2}d\left [(J_t-x_t^*J_0)f_t^*(dH_0) \right ]\cdot (f_t^*H_0)^{-1} \\ & +\frac{\sqrt{-1}}{2}(f_t^*\partial_0 H_0)(f_t^*H_0)^{-1} \wedge [(J_t-x_t^*J_0)f_t^*(dH_0)](f_t^*H_0)^{-1} \\ & -\frac{\sqrt{-1}}{2}[(J_t-x_t^*J_0)f_t^*(dH_0)](f_t^*H_0)^{-1} \wedge (f_t^*\bar{\partial}_0 H_0)(f_t^*H_0)^{-1} \\ & +\frac{1}{4}[(J_t-x_t^*J_0)f_t^*(dH_0)](f_t^*H_0)^{-1} \wedge [(J_t-x_t^*J_0)f_t^*(dH_0)](f_t^*H_0)^{-1}. \end{split} \end{equation*} Therefore we have \begin{equation} \label{eq:3-10} \begin{split} &|\mathbf{r}_t^\frac{4}{3} \Lambda_{\tilde{\omega}_t} \bar{\partial}_t(\partial_t (f_t^*H_0)(f_t^*H_0)^{-1})|_{\hat{H}_t} \\ \leq &|\mathbf{r}_t^\frac{4}{3} \Lambda_{\tilde{\omega}_t} f_t^*(\bar{\partial}_0(\partial_0 H_0(H_0)^{-1}))|_{\hat{H}_t} +\frac{1}{2}|\mathbf{r}_t^\frac{4}{3} \Lambda_{\tilde{\omega}_t}d\left [(J_t-x_t^*J_0)f_t^*(dH_0) \right ]\cdot (f_t^*H_0)^{-1}|_{\hat{H}_t} \\ & +\frac{1}{2} | \mathbf{r}_t^\frac{4}{3} \Lambda_{\tilde{\omega}_t}\left [(f_t^*\partial_0 H_0)(f_t^*H_0)^{-1} \wedge [(J_t-x_t^*J_0)f_t^*(dH_0)](f_t^*H_0)^{-1}\right] |_{\hat{H}_t}\\ & +\frac{1}{2} | \mathbf{r}_t^\frac{4}{3} \Lambda_{\tilde{\omega}_t}\left [[(J_t-x_t^*J_0)f_t^*(dH_0)](f_t^*H_0)^{-1} \wedge (f_t^*\bar{\partial}_0 H_0)(f_t^*H_0)^{-1} \right ] |_{\hat{H}_t} \\ & +\frac{1}{4} | \mathbf{r}_t^\frac{4}{3} \Lambda_{\tilde{\omega}_t}\left [[(J_t-x_t^*J_0)f_t^*(dH_0)](f_t^*H_0)^{-1} \wedge [(J_t-x_t^*J_0)f_t^*(dH_0)](f_t^*H_0)^{-1} \right] |_{\hat{H}_t}. \end{split} \end{equation} Note that because $\hat{H}_t=I$ under the chosen frame, we have $f_t^*H_0=f_t^*h_0$.
Using the bounds in Proposition \ref{pr:2-3}, we can estimate the first term on the RHS of (\ref{eq:3-10}) in each coordinate chart $B_z$ as \begin{equation} \label{eq:3-11} \begin{split} & |\mathbf{r}_t^\frac{4}{3} \Lambda_{\tilde{\omega}_t} f_t^*(\bar{\partial}_0(\partial_0 H_0(H_0)^{-1}))|_{\hat{H}_t} \\ \leq & |\mathbf{r}_t^\frac{4}{3} \Lambda_{x_t^*\hat{\omega}_0} f_t^*(\bar{\partial}_0(\partial_0 H_0(H_0)^{-1}))|_{\hat{H}_t} +|\mathbf{r}_t^\frac{4}{3} (\Lambda_{\tilde{\omega}_t} - \Lambda_{x_t^*\hat{\omega}_0})f_t^*(\bar{\partial}_0(\partial_0 H_0(H_0)^{-1}))|_{\hat{H}_t} \\ \leq & Z_5|\mathbf{r}_t^\frac{4}{3} f_t^*(\bar{\partial}_0(\partial_0 H_0(H_0)^{-1}))|_{\hat{H}_t,g_{co,t}} \cdot |\tilde{\omega}_t^{-1} - x_t^*\omega_{co,0}^{-1}|_{g_{co,t}} \\ \leq & Z_6|f_t^*(\bar{\partial}_0(\partial_0 H_0(H_0)^{-1})) |_{C^0(B_z,\hat{H}_t,\bar{g}_{co,t})} \cdot |\tilde{\omega}_t^{-1}-\omega_{co,t}^{-1}|_{g_{co,t}} \\ \leq & Z_7|f_t^*(\bar{\partial}_0(\partial_0 H_0(H_0)^{-1})) |_{C^0(B_z,\hat{H}_t,g_e)} \cdot C''|t|\mathbf{r}_t^{-\frac{2}{3}} \\ \leq & Z_8|\bar{\partial}_0(\partial_0 H_0(H_0)^{-1})) |_{C^0(B_{x_t(z)},\hat{H},g_e)} \cdot |t|\mathbf{r}_t^{-\frac{2}{3}}\\ \leq & Z_9\left(\Arrowvert H_0\Arrowvert_{C^2(B_{x_t(z)},\hat{H},g_e)}+\Arrowvert H_0\Arrowvert_{C^1(B_{x_t(z)},\hat{H},g_e)}^2\right)\cdot |t|\mathbf{r}_t^{-\frac{2}{3}} \\ \leq &Z_9\cdot (C'_2+(C'_1)^2) \cdot |t|\mathbf{r}_t^{-\frac{2}{3}} \leq Z_{10}|t|\mathbf{r}_t^{-2} \end{split} \end{equation} where Proposition \ref{pr:6-1} and the equation in Lemma \ref{lm:4} are applied. We have also used the fact that \begin{equation*} \Lambda_{x_t^*\hat{\omega}_{0}}f_t^*(\bar{\partial}_0(\partial_0 H_0(H_0)^{-1}))=f_t^*(\Lambda_{\hat{\omega}_{0}}(\bar{\partial}_0(\partial_0 H_0(H_0)^{-1})))=0 \end{equation*} since $H_0$ is HYM with respect to the balanced metric $\hat{\omega}_{0}$ on $X_{0,sm}$. The second term on the RHS of (\ref{eq:3-10}) is bounded as \begin{equation} \label{eq:3-12} \begin{split} & \frac{1}{2}|\mathbf{r}_t^\frac{4}{3} \Lambda_{\tilde{\omega}_t}d\left [(J_t-x_t^*J_0)f_t^*(dH_0) \right ]\cdot (f_t^*H_0)^{-1} |_{\hat{H}_t} \\ \leq & \frac{1}{2}|\mathbf{r}_t^\frac{4}{3} \Lambda_{\omega_{co,t}}d\left [(J_t-x_t^*J_0)f_t^*(dH_0) \right ]\cdot (f_t^*H_0)^{-1} |_{\hat{H}_t} \\ &+\frac{1}{2}|\mathbf{r}_t^\frac{4}{3} (\Lambda_{\omega_{co,t}}-\Lambda_{\tilde{\omega}_t}) d\left [(J_t-x_t^*J_0)f_t^*(dH_0) \right ]\cdot (f_t^*H_0)^{-1}|_{\hat{H}_t} \\ \leq & Z_{11}|\mathbf{r}_t^\frac{4}{3} \Lambda_{\omega_{co,t}}d\left [(J_t-x_t^*J_0)f_t^*(dH_0) \right ]|_{\hat{H}_t} \\ &+Z_{11}|\tilde{\omega}_t^{-1}-\omega_{co,t}^{-1}|_{g_{co,t}} | \mathbf{r}_t^\frac{4}{3}d\left [(J_t-x_t^*J_0)f_t^*(dH_0) \right ] |_{\hat{H}_t,g_{co,t}}\\ \leq & Z_{12}(1+|\tilde{\omega}_t^{-1}-\omega_{co,t}^{-1}|_{g_{co,t}})\cdot \\ & \left( \mathbf{r}_t^\frac{2}{3}|\nabla_{g_{co,t}}(J_t-x_t^*J_0)|_{g_{co,t}} |\mathbf{r}_t^\frac{2}{3} d(f_t^*H_0)|_{\hat{H}_t,g_{co,t}} + |J_t-x_t^*J_0|_{g_{co,t}} \sum_{j=0}^2|\mathbf{r}_t^{\frac{2}{3}j}\nabla_{g_{co,t}}^j (f_t^*H_0)|_{\hat{H}_t,g_{co,t}} \right). \\ \end{split} \end{equation} To proceed, let $\nabla_{\Upsilon_t^*g_{co,t}}-\nabla_{g_{co,0}}$ be the difference between the two connections. It is in fact the difference between the Christoffel symbols of $\Upsilon_t^*g_{co,t}$ and $g_{co,0}$. 
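As a minimal sketch of the formula used in the next step, written for a general pair of Riemannian metrics $g_1,g_2$ in arbitrary local coordinates (the indices $a,b,c,d$ here are only notational), recall that \begin{equation*} \Gamma(g)^a_{bc}=\tfrac{1}{2}g^{ad}\left(\partial_b g_{dc}+\partial_c g_{bd}-\partial_d g_{bc}\right), \qquad \left(\nabla_{g_1}-\nabla_{g_2}\right)^a_{bc}=\Gamma(g_1)^a_{bc}-\Gamma(g_2)^a_{bc}, \end{equation*} so the difference of the two Levi-Civita connections is a tensor built from $g_1^{-1}dg_1$ and $g_2^{-1}dg_2$. Applied to $g_1=\Upsilon_t^*g_{co,t}$ and $g_2=g_{co,0}$, this is where the first inequality in (\ref{dif}) below comes from, with the constant $D_1$ absorbing the comparison between the two metrics provided by Proposition \ref{pr:n-1}.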
From the explicit formulas of Christoffel symbols in terms of the metrics and Proposition \ref{pr:n-1}, for some universal constants $D_1>0$ and $D_2>0$ we have \begin{equation} \label{dif} |\nabla_{\Upsilon_t^*g_{co,t}}-\nabla_{g_{co,0}}|_{\Upsilon_t^*g_{co,t}}\leq D_1(|g_{co,0}^{-1}dg_{co,0}|_{\Upsilon_t^*g_{co,t}}+|(\Upsilon_t^*g_{co,t})^{-1}d(\Upsilon_t^*g_{co,t})|_{\Upsilon_t^*g_{co,t}})\leq D_2\mathbf{r}_t^{-\frac{2}{3}}. \end{equation} Now, by Corollary \ref{asym} we have \begin{equation} \label{est-1} \begin{split} |J_t-x_t^*J_0|_{g_{co,t}} \leq |\Upsilon_t^*J_t-J_0|_{\Upsilon_t^*g_{co,t}} \leq D_0|t|\mathbf{r}_t^{-2} \end{split} \end{equation} and by (\ref{dif}) we have \begin{equation} \label{est-2} \begin{split} |\nabla_{g_{co,t}}(J_t-x_t^*J_0)|_{g_{co,t}} \leq & |\nabla_{\Upsilon_t^*g_{co,t}}(\Upsilon_t^*J_t-J_0)|_{\Upsilon_t^*g_{co,t}} \\ \leq & |\nabla_{g_{co,0}}(\Upsilon_t^*J_t-J_0)|_{\Upsilon_t^*g_{co,t}} +|(\nabla_{\Upsilon_t^*g_{co,t}}-\nabla_{g_{co,0}})(\Upsilon_t^*J_t-J_0)|_{\Upsilon_t^*g_{co,t}}\\ \leq & D_0|t|\mathbf{r}_t^{-\frac{8}{3}}+|\nabla_{\Upsilon_t^*g_{co,t}}-\nabla_{g_{co,0}}|_{\Upsilon_t^*g_{co,t}}\cdot |\Upsilon_t^*J_t-J_0|_{\Upsilon_t^*g_{co,t}} \\ \leq & D_0|t|\mathbf{r}_t^{-\frac{8}{3}}+D_2D_0|t|\mathbf{r}_t^{-\frac{8}{3}}. \end{split} \end{equation} We also have $|\mathbf{r}_t^\frac{2}{3} d(f_t^*H_0)|_{\hat{H}_t,g_{co,t}} \leq C''_1$ and $\sum_{j=0}^2|\mathbf{r}_t^{\frac{2}{3}j}\nabla_{g_{co,t}}^j (f_t^*H_0)|_{\hat{H}_t,g_{co,t}} \leq C''_2$ from Corollary \ref{co:4}, and $|\tilde{\omega}_t^{-1}-\omega_{co,t}^{-1}|_{g_{co,t}} \leq C''|t|^\frac{2}{3}$ from Proposition \ref{pr:6-1}. Plugging these, (\ref{est-1}) and (\ref{est-2}) into (\ref{eq:3-12}), we get \begin{equation} \label{eq:3-12-1} \begin{split} & \frac{1}{2}|(f_t^*H_0)^{-1}\mathbf{r}_t^\frac{4}{3} \Lambda_{\tilde{\omega}_t}d\left [(J_t-x_t^*J_0)f_t^*(dH_0) \right ]|_{\hat{H}_t} \\ \leq & Z_{12}(1+C''|t|^{\frac{2}{3}})\left( \mathbf{r}_t^\frac{2}{3}(D_0+D_2D_0)|t|\mathbf{r}_t^{-\frac{8}{3}}\cdot C''_1 +D_0|t|\mathbf{r}_t^{-2} \cdot C''_2\right) \leq Z_{13}|t|\mathbf{r}_t^{-2}. \end{split} \end{equation} The third term on the RHS of (\ref{eq:3-10}) is bounded as \begin{equation} \label{eq:3-13} \begin{split} & \frac{1}{2} | \mathbf{r}_t^\frac{4}{3} \Lambda_{\tilde{\omega}_t}\left [(f_t^*\bar{\partial}_0 H_0) (f_t^*H_0)^{-1}\wedge [(J_t-x_t^*J_0)f_t^*(dH_0)](f_t^*H_0)^{-1} \right] |_{\hat{H}_t} \\ \leq &\frac{1}{2} | \mathbf{r}_t^\frac{4}{3} \Lambda_{\omega_{co,t}}\left [(f_t^*\bar{\partial}_0 H_0) (f_t^*H_0)^{-1}\wedge [(J_t-x_t^*J_0)f_t^*(dH_0)](f_t^*H_0)^{-1} \right] |_{\hat{H}_t} \\ &+\frac{1}{2} | \mathbf{r}_t^\frac{4}{3} (\Lambda_{\omega_{co,t}}-\Lambda_{\tilde{\omega}_t}) \left [(f_t^*\bar{\partial}_0 H_0) (f_t^*H_0)^{-1}\wedge [(J_t-x_t^*J_0)f_t^*(dH_0)](f_t^*H_0)^{-1} \right] |_{\hat{H}_t} \\ \leq &Z_{14}(1+|\tilde{\omega}_t^{-1}-\omega_{co,t}^{-1}|_{g_{co,t}})\cdot |J_t-x_t^*J_0|_{g_{co,t}} \cdot |\mathbf{r}_t^\frac{2}{3}d (f_t^*H_0)|_{\hat{H}_t,g_{co,t}} ^2 \\ \leq &Z_{14} (1+C''|t|^\frac{2}{3}) \cdot D_0|t|\mathbf{r}_t^{-2} \cdot (C''_1)^2 \leq Z_{15} |t|\mathbf{r}_t^{-2}, \end{split} \end{equation} where (\ref{est-1}) and Corollary \ref{co:4} have been used again. The last two terms on the RHS of (\ref{eq:3-10}) are also bounded by $Z_{16} |t|\mathbf{r}_t^{-2}$ by a similar argument. This together with (\ref{eq:3-10}), (\ref{eq:3-11}), (\ref{eq:3-12-1}) and (\ref{eq:3-13}) completes the proof of Lemma \ref{lm:7}. \qed \end{proof} We continue with the proof of Proposition \ref{pr:3-4}.
From the remark before Proposition \ref{pr:3-3} we get, for $V_t(2R|t|^\alpha,\frac{3}{4})$, that \begin{equation} \label{eq:8-1} |\mathbf{r}_t^\frac{4}{3} \Lambda_{\tilde{\omega}_t}F_{H_t} |_{H_t} =|\mathbf{r}_t^\frac{4}{3} \Lambda_{\tilde{\omega}_t}\left(\bar{\partial}_t(\partial_t (f_t^*H_0)\cdot(f_t^*H_0)^{-1})\right)|_{H_t} \leq Z_{17}\cdot |t|\mathbf{r}_t^{-2} \end{equation} for some constant $Z_{17}>0$. Consequently, in this region we have \begin{equation} \label{eq:20-4} |\mathbf{r}_t^\frac{4}{3} \Lambda_{\tilde{\omega}_t}F_{H_t} |_{H_t} \leq Z_{17}\cdot \frac{1}{4R^2} |t|^{1-2\alpha}, \end{equation} and one can estimate \begin{equation} \label{eq:3-14} \begin{split} &\int_{V_t(2R|t|^\alpha,\frac{3}{4})} |\mathbf{r}_t^\frac{8}{3} \Lambda_{\tilde{\omega}_t} F_{H_t} |_{H_t}^k \mathbf{r}_t^{-4}dV_t =\int_{V_t(2R|t|^\alpha,\frac{3}{4})} \mathbf{r}_t^{\frac{4}{3} k} |\mathbf{r}_t^\frac{4}{3} \Lambda_{\tilde{\omega}_t} F_{H_t} |_{H_t}^k \mathbf{r}_t^{-4}dV_t \\ \leq & Z_{17}^k \int_{V_t(2R|t|^\alpha,\frac{3}{4})} \mathbf{r}_t^{\frac{4}{3} k} (|t|\mathbf{r}_t^{-2})^k \mathbf{r}_t^{-4}dV_t \leq Z_{17}^kZ_{18} |t|^k \int_{\mathbf{r}_t=2R|t|^\alpha}^{\frac{3}{4}} \mathbf{r}_t^{-\frac{2}{3}k-1} d\mathbf{r}_t\\ \leq & Z_{17}^kZ_{18} |t|^k \cdot\frac{3}{2k}(2R)^{-\frac{2}{3}k} |t|^{-\frac{2}{3}\alpha k}. \end{split} \end{equation} We thus obtain \begin{equation} \label{eq:3-15} \Arrowvert \Lambda_{\tilde{\omega}_t}F_{H_t} \Arrowvert_{L^k_{0,-4}(V_t(2R|t|^\alpha,\frac{3}{4}),\tilde{g}_t,H_t)}\leq Z_{19}|t|^{1-\frac{2}{3}\alpha}. \end{equation} This ends the discussion on the region $V_t(2R|t|^\alpha,\frac{3}{4})$. As for the region $X_t[\frac{3}{4}]$, because the geometry is uniform there it is easy to see that \begin{equation} \label{eq:20-7} |\mathbf{r}_t^\frac{4}{3} \Lambda_{\tilde{\omega}_t} F_{H_t} |_{H_t} \leq Z_{20}\cdot |t| \end{equation} and \begin{equation} \label{eq:3-16} \Arrowvert \Lambda_{\tilde{\omega}_t}F_{H_t} \Arrowvert_{L^k_{0,-4}(X_t[\frac{3}{4}],\tilde{g}_t,H_t)} \leq Z_{20,k} |t| \end{equation} when $t$ is small. Finally, from (\ref{eq:3-18}), (\ref{eq:8-2}), (\ref{eq:20-4}) and (\ref{eq:20-7}) we get (\ref{eq:20-5}), and from (\ref{eq:3-18}), (\ref{eq:3-17}), (\ref{eq:3-15}), and (\ref{eq:3-16}) we get (\ref{eq:20-6}). The proof of Proposition \ref{pr:3-4} is now complete. \qed \end{proof} \noindent \textbf{Remark} From now on we fix $\alpha=\frac{3}{8}$. Then we have \begin{equation} \label{eq:F-1} |\mathbf{r}_t^\frac{4}{3} \Lambda_{\tilde{\omega}_t}F_{H_t} |_{H_t} \leq \tilde{Z}_0 |t|^\frac{1}{4} \end{equation} and \begin{equation} \label{eq:F-2} \Arrowvert \Lambda_{\tilde{\omega}_t}F_{H_t} \Arrowvert_{L^k_{0,-4}(X_t,\tilde{g}_t,H_t)}\leq \tilde{Z}_k |t|^\frac{3}{4}. \end{equation} \section{Contraction mapping argument} Our background Hermitian metric on $\mathcal{E}_t$ as constructed in Section 5 is denoted by $H_t$. Let $\tilde{H}_t$ be another Hermitian metric on $\mathcal{E}_t$ and write $\tilde{h}=\tilde{H}_tH_t^{-1}=I+h$ where $h$ is $H_t$-symmetric. It is known that the mean curvature \begin{equation*} \sqrt{-1}\Lambda_{\tilde{\omega}_t}F_{\tilde{H}_t}=\sqrt{-1}\Lambda_{\tilde{\omega}_t} \bar{\partial}((\partial_{H_t}(I+h))(I+h)^{-1})+\sqrt{-1}\Lambda_{\tilde{\omega}_t}F_{H_t} \end{equation*} of $\tilde{H}_t$ is $\tilde{H}_t$-symmetric. To make it $H_t$-symmetric, consider a positive square root of $\tilde{H}_tH_t^{-1}$, denoted by $(\tilde{H}_tH_t^{-1})^{\frac{1}{2}}$.
More explicitly, write $h=P^{-1}DP$ where $D$ is diagonal and $I+D$ has positive eigenvalues; then $(\tilde{H}_tH_t^{-1})^{\frac{1}{2}}=P^{-1}(I+D)^\frac{1}{2}P$.\\[0.2cm] \noindent \textbf{Remark} Write $(\tilde{H}_tH_t^{-1})^{\frac{1}{2}}=I+u(h)$. Then it is easy to see that the linear part of $u(h)$ in $h$ is $\frac{1}{2}h$ (indeed, formally $(I+h)^{\frac{1}{2}}=I+\frac{1}{2}h-\frac{1}{8}h^2+\cdots$).\\[0.2cm] After twisting the mean curvature above by $I+u(h)$, we obtain \begin{equation} \label{eq:4-1} \sqrt{-1}(I+u(h))^{-1}[\Lambda_{\tilde{\omega}_t} \bar{\partial}((\partial_{H_t}(I+h))(I+h)^{-1})+\Lambda_{\tilde{\omega}_t}F_{H_t}](I+u(h)), \end{equation} which is $H_t$-symmetric. The equation \begin{equation*} \sqrt{-1}\Lambda_{\tilde{\omega}_t}F_{\tilde{H}_t}=0 \end{equation*} is equivalent to the equation \begin{equation*} \sqrt{-1}(I+u(h))^{-1}[\Lambda_{\tilde{\omega}_t} \bar{\partial}((\partial_{H_t}(I+h))(I+h)^{-1})+\Lambda_{\tilde{\omega}_t}F_{H_t}](I+u(h))=0, \end{equation*} which can be written in the form \begin{equation*} L_t(h)=Q_t(h) \end{equation*} where \begin{equation*} L_t(h)=\sqrt{-1}\left(\Lambda_{\tilde{\omega}_t} \bar{\partial}\partial_{H_t}h+\frac{1}{2}[\Lambda_{\tilde{\omega}_t}F_{H_t},h] \right) \end{equation*} is a linear map from \begin{equation*} \text{Herm}_{H_t}(\text{End} (\mathcal{E}_t)):=\{H_t\text{-symmetric endomorphisms of}\, \mathcal{E}_t\} \end{equation*} to itself, and \begin{equation} \label{eq:4-2} \begin{split} Q_t(h) &=-\sqrt{-1}(I+u(h))^{-1}(\Lambda_{\tilde{\omega}_t} \bar{\partial}\partial_{H_t}h)(I+h)^{-1} (I+u(h))+\sqrt{-1}\Lambda_{\tilde{\omega}_t} \bar{\partial}\partial_{H_t}h \\ & -\sqrt{-1}(I+u(h))^{-1}\Lambda_{\tilde{\omega}_t} (\partial_{H_t} h\cdot(I+h)^{-1}\wedge \bar{\partial}h\cdot(I+h)^{-1})(I+u(h)) \\ &-\sqrt{-1}\left((I+u(h))^{-1}\Lambda_{\tilde{\omega}_t}F_{H_t}(I+u(h))-\frac{1}{2}[\Lambda_{\tilde{\omega}_t}F_{H_t},h] \right). \end{split} \end{equation} In the above formulas, we use the fact that $\frac{1}{2}h$ is the linear part of $u(h)$. Notice that since \begin{equation*} \int_{X_t} \langle \sqrt{-1}\left(\Lambda_{\tilde{\omega}_t} \bar{\partial}\partial_{H_t}h+\frac{1}{2}[\Lambda_{\tilde{\omega}_t}F_{H_t},h] \right),I \rangle_{H_t} dV_t=0 \end{equation*} we have an induced map from \begin{equation*} \begin{split} &\text{Herm}^0_{H_t}(\text{End} (\mathcal{E}_t)):= \\ &\{H_t\text{-symmetric endomorphisms of}\, \mathcal{E}_t\,\text{which are orthogonal to}\, I\} \end{split} \end{equation*} to itself. Because (\ref{eq:4-1}) is an $H_t$-symmetric endomorphism of $\mathcal{E}_t$ which is orthogonal to $I$, we see that the same is true for $Q_t(h)$. In this section $h$ will always be a section of the bundle $\text{Herm}^0_{H_t}(\text{End} (\mathcal{E}_t))$. We consider the contraction mapping problem via the weighted norms introduced in Section 2. The metrics that define these norms and all the pointwise norms will be w.r.t. the balanced metrics $\tilde{g}_t$ on $X_t$ and the Hermitian metrics $H_t$ on $\mathcal{E}_t$, and the connections we use are always the Chern connections of $H_t$. Therefore we remove $\tilde{g}_t$ and $H_t$ from the subscripts of the norms for simplicity unless needed. As in Section 2, we now consider the following norms defined on the usual Sobolev space $L^k_l(\text{Herm}^0_{H_t}(\text{End} (\mathcal{E}_t)))$: \begin{equation*} \Arrowvert h \Arrowvert _{L^k_{l,\beta}} =\left( \sum_{j=0}^l \int_{X_t} |\mathbf{r}_t^{-\frac{2}{3}\beta+\frac{2}{3}j}\nabla^jh|_t^k\mathbf{r}_t^{-4}\, dV_t \right)^\frac{1}{k}.
\end{equation*} As before, we use $L^k_{l,\beta}$ to denote $L^k_{l,\beta}(\text{Herm}^0_{H_t}(\text{End} (\mathcal{E}_t)))$ for simplicity. The following Sobolev inequalities will be used in our discussion: \begin{proposition} \label{pr:1-3} For each $l,p,q,r$ there exists a constant $C>0$ independent of $t$ such that for any section $h$ of $\text{Herm}^0_{H_t}(\text{End} (\mathcal{E}_t))$, \begin{equation*} \Arrowvert h \Arrowvert_{L^r_{l,\beta}}\leq C \Arrowvert h \Arrowvert_{L^p_{q,\beta}} \end{equation*} whenever $\frac{1}{r}\leq \frac{1}{p} \leq \frac{1}{r}+\frac{q-l}{6}$ and \begin{equation*} \Arrowvert h \Arrowvert_{C^l_{\beta}}\leq C \Arrowvert h \Arrowvert_{L^p_{q,\beta}} \end{equation*} whenever $\frac{1}{p} < \frac{q-l}{6}$. Here the norms are with respect to $H_t$ and $\tilde{g}_t$.\\[0.2cm] \end{proposition} We now begin the discussion on the properties of the operator $L_t$. \begin{lemma} \label{lm:1} For any given $0<\nu \ll 1$ and $t \neq0$ small enough, we have \begin{equation*} \Arrowvert h \Arrowvert_{L^2_{1,-2}} \leq 8|t|^{-2\nu}\Arrowvert L_t(h) \Arrowvert_{L^2_{0,-4}}. \end{equation*} In particular, the operator $L_t$ is injective on $L^2_{2,-2}(\text{Herm}^0_{H_t}(\text{End} (\mathcal{E}_t)))$. \end{lemma} \begin{proof} Later in Proposition \ref{pr:5-1} we will show that for an arbitrarily given $\nu >0$, we have \begin{equation*} \Arrowvert h \Arrowvert_{L^2_{0,-2}} \leq |t|^{-\nu}\Arrowvert \mathbf{r}_t^\frac{2}{3} \partial_{H_t}h \Arrowvert_{L^2_{0,-2}} \end{equation*} for $t\neq0$ small enough. Using this one easily deduces that \begin{equation*} \Arrowvert h \Arrowvert_{L^2_{1,-2}} \leq 2|t|^{-\nu} \Arrowvert \mathbf{r}_t^\frac{2}{3} \partial_{H_t}h \Arrowvert_{L^2_{0,-2}} \end{equation*} for $t\neq0$ small enough. Now \begin{equation*} \begin{split} \Arrowvert \mathbf{r}_t^\frac{2}{3} \partial_{H_t}h \Arrowvert_{L^2_{0,-2}}^2 =& \int_{X_t} \langle\partial_{H_t}h,\partial_{H_t}h\rangle \, dV_t = \int_{X_t} \langle \sqrt{-1}\Lambda_{\tilde{\omega}_t} \bar{\partial}\partial_{H_t}h,h\rangle \, dV_t \\ \leq &\int_{X_t} | L_t(h)-\frac{\sqrt{-1}}{2}[\Lambda_{\tilde{\omega}_t}F_{H_t},h] ||h| \, dV_t \\ \leq &\int_{X_t} | L_t(h)||h| \, dV_t + \int_{X_t} |h|^2|\Lambda_{\tilde{\omega}_t}F_{H_t}| \, dV_t. \end{split} \end{equation*} From (\ref{eq:F-1}) we have $|\mathbf{r}_t^\frac{4}{3}\Lambda_{\tilde{\omega}_t}F_{H_t}| \leq \tilde{Z}_0|t|^\frac{1}{4}$.
Therefore we can bound \begin{equation*} \begin{split} \int_{X_t} |h|^2|\Lambda_{\tilde{\omega}_t}F_{H_t}| \, dV_t =&\int_{X_t} |\mathbf{r}_t^{\frac{4}{3}}h|^2|\mathbf{r}_t^\frac{4}{3}\Lambda_{\tilde{\omega}_t}F_{H_t}| \mathbf{r}_t^{-4}\, dV_t \\ \leq & \tilde{Z}_0|t|^\frac{1}{4} \int_{X_t} |\mathbf{r}_t^{\frac{4}{3}}h|^2 \mathbf{r}_t^{-4}\, dV_t \leq \tilde{Z}_0|t|^\frac{1}{4} \Arrowvert h \Arrowvert^2_{L^2_{1,-2}}. \end{split} \end{equation*} Using this bound, we now have \begin{equation*} \begin{split} \Arrowvert h \Arrowvert_{L^2_{1,-2}}^2 \leq & 4|t|^{-2\nu}\Arrowvert \mathbf{r}_t^\frac{2}{3} \partial_{H_t}h \Arrowvert_{L^2_{0,-2}}^2 \\ \leq & 4|t|^{-2\nu} \left( \int_{X_t} | L_t(h)||h| \, dV_t+\tilde{Z}_0|t|^\frac{1}{4}\Arrowvert h \Arrowvert^2_{L^2_{1,-2}}\right) \\ \leq & 4|t|^{-2\nu} \left(\int_{X_t} | \mathbf{r}_t^{\frac{8}{3}}L_t(h)|^2 \mathbf{r}_t^{-4}\, dV_t \right)^\frac{1}{2} \left(\int_{X_t} | \mathbf{r}_t^{\frac{4}{3}}h|^2 \mathbf{r}_t^{-4}\, dV_t \right)^\frac{1}{2} +4\tilde{Z}_0|t|^{\frac{1}{4}-2\nu}\Arrowvert h \Arrowvert^2_{L^2_{1,-2}} \\ \leq & 4|t|^{-2\nu} \Arrowvert L_t(h) \Arrowvert_{L^2_{0,-4}}\Arrowvert h \Arrowvert_{L^2_{1,-2}} +4\tilde{Z}_0|t|^{\frac{1}{4}-2\nu}\Arrowvert h \Arrowvert^2_{L^2_{1,-2}}. \\ \end{split} \end{equation*} Therefore, for $\nu \ll 1$ and $t\neq 0$ small enough such that $4\tilde{Z}_0|t|^{\frac{1}{4}-2\nu} \leq \frac{1}{2}$, we have the desired result. \qed \end{proof} We conclude from this that for $k \geq 6$, the operator $L_t: L^k_{2,-2} \rightarrow L^k_{0,-4}$ is injective. The operator $L_t$ is also surjective. First of all, $\Lambda_{\tilde{\omega}_t} \bar{\partial}\partial_{H_t}$ is a self-adjoint Fredholm operator, so it has index zero. Secondly, since for each $t \neq0$, $\Lambda_{\tilde{\omega}_t}F_{H_t}$ is a smooth endomorphism of $\mathcal{E}_t$, the operator \begin{equation*} h \rightarrow \frac{1}{2}[\Lambda_{\tilde{\omega}_t}F_{H_t},h] \end{equation*} from $L^k_{2,-2}$ to $L^k_{0,-4}$ is a compact operator. Therefore $L_t$ has index zero, and the injectivity of $L_t$ implies its surjectivity. Let the inverse be denoted by $P_t$. \begin{proposition} \label{pr:4-1} There exist constants $\hat{Z}_k>0$ such that for any $0<\nu \ll1$ and $t\neq0$ small enough, \begin{equation} \Arrowvert h \Arrowvert_{L^k_{2,-2}} \leq \hat{Z}_k(-\log |t|)^\frac{1}{2}|t|^{-2\nu} \Arrowvert L_t(h) \Arrowvert_{L^k_{0,-4}}. \end{equation} Consequently, the norm of the operator $P_t:L^k_{0,-4} \rightarrow L^k_{2,-2}$ is bounded as \begin{equation*} \Arrowvert P_t \Arrowvert \leq \hat{Z}_k(-\log |t|)^\frac{1}{2}|t|^{-2\nu}. \end{equation*} \end{proposition} \begin{proof} From the estimates of $\tilde{g}_t$ in Theorem \ref{th:0}, the estimates of $H_t$ in Proposition \ref{pr:3-3}, and the estimates (\ref{eq:F-3}) of $\mathbf{r}_t^\frac{4}{3}\Lambda_{\tilde{\omega}_t}F_{H_t}$ in Proposition \ref{pr:3-4}, we can apply Proposition \ref{pr:uniform} to the operator $\mathbf{r}_t^\frac{4}{3}L_t$, and obtain \begin{equation*} \begin{split} \Arrowvert h \Arrowvert_{L^k_{2,-2}} \leq &\hat{C}_k \left( \Arrowvert \mathbf{r}_t^\frac{4}{3}L_t(h) \Arrowvert_{L^k_{0,-2}}+\Arrowvert h \Arrowvert_{L^2_{0,-2}} \right)\\ \leq &\hat{C}'_k \left( \Arrowvert L_t(h) \Arrowvert_{L^k_{0,-4}}+\Arrowvert h \Arrowvert_{L^2_{0,-2}} \right)\\ \end{split} \end{equation*} for constants $\hat{C}'_k>0$ independent of $t$.
By Lemma \ref{lm:1} and the H\"older inequality, \begin{equation} \begin{split} \Arrowvert h \Arrowvert_{L^2_{0,-2}}^2 \leq &64|t|^{-4\nu}\Arrowvert L_t(h) \Arrowvert_{L^2_{0,-4}}^2 = 64|t|^{-4\nu}\int_{X_t} |\mathbf{r}_t^{\frac{8}{3}}L_t(h)|^2 \mathbf{r}_t^{-4} dV_t \\ \leq &64|t|^{-4\nu}\left(\int_{X_t} 1\cdot \mathbf{r}_t^{-4} dV_t\right)^{1-\frac{2}{k}} \left(\int_{X_t} |\mathbf{r}_t^{\frac{8}{3}}L_t(h)|^k \mathbf{r}_t^{-4} dV_t\right)^\frac{2}{k} \\ \leq &Z'_0|t|^{-4\nu} (-\log |t|)^{1-\frac{2}{k}}\Arrowvert L_t(h) \Arrowvert_{L^k_{0,-4}}^2 \leq Z'_0|t|^{-4\nu}(-\log |t|)\Arrowvert L_t(h) \Arrowvert_{L^k_{0,-4}}^2 \end{split} \end{equation} for $t\neq 0$ small. The claim follows now.\qed \end{proof} Now we consider the contraction mapping problem for the map \begin{equation*} U_t: L^k_{2,-2}\rightarrow L^k_{2,-2},\, \, \, \, U_t(h)=P_t(Q_t(h)). \end{equation*} Here $Q_t(h)$ is given in (\ref{eq:4-2}). Take $\beta'$ to be a number such that $0<\beta'-2 \ll 1$. We restrict ourselves to a ball $B(\beta')$ of radius $|t|^{\frac{\beta'}{3}}$ centered at 0 inside $L^k_{2,-2}$, and show that $U_t$ is a contraction mapping from the ball into itself when $t \neq 0$ is small enough. \begin{proposition} \label{pr:4-2} For each $k$ large enough, there is a constant $\hat{Z}'_k>0$ such that when $t \neq 0$ is small enough the operator $h \mapsto Q_t(h)$ maps the ball of radius $|t|^{\frac{\beta'}{3}}$ in $L^k_{2,-2}$ into the ball of radius $\hat{Z}'_k|t|^{\frac{\beta'-2}{3}}\cdot |t|^{\frac{\beta'}{3}}$ in $L^k_{0,-4}$. \end{proposition} \begin{proof} Note that when $k$ is large enough one has the Sobolev embedding $L^k_{2,-2}\hookrightarrow C^1_{-2}$. Proposition \ref{pr:1-3} implies the existence of a constant $C^{sb}_k$ independent of $t$ such that \begin{equation} \label{eq:4-0} \Arrowvert h \Arrowvert_{C^1_{-2}} \leq C^{sb}_k \Arrowvert h \Arrowvert_{L^k_{2,-2}}. \end{equation} In this case, $\Arrowvert h \Arrowvert_{L^k_{2,-2}}<|t|^{\frac{\beta'}{3}}$ implies in particular that $|h|\mathbf{r}_t^{\frac{4}{3}}<C^{sb}_k|t|^{\frac{\beta'}{3}}$, and hence \begin{equation} \label{eq:4-01} |h|<C^{sb}_k|t|^{\frac{\beta'-2}{3}}. \end{equation} Therefore, because $\beta'-2>0$, when $t \neq 0$ is small it makes sense to take the inverse of $I+h$ and $I+u(h)$, and there are constants $Z'_1$ and $Z'_2$ such that, for $t\neq 0$ small, \begin{equation} \label{eq:4-3} \begin{split} |(I+u(h))^{-1}&(\Lambda_{\tilde{\omega}_t} \bar{\partial}\partial_{H_t}h)(I+h)^{-1} (I+u(h))-\Lambda_{\tilde{\omega}_t} \bar{\partial}\partial_{H_t}h| \\ &\leq Z'_1|\bar{\partial}\partial_{H_t}h||h| \leq Z'_1C^{sb}_k|t|^{\frac{\beta'-2}{3}}|\bar{\partial}\partial_{H_t}h| \leq Z'_1C^{sb}_k|t|^{\frac{\beta'-2}{3}} |\nabla_{H_t}^2 h| \end{split} \end{equation} and \begin{equation} \label{eq:4-4} \max \{ |h|,\,\,|I+u(h)|,\,\, |(I+u(h))^{-1}|,\,\, |I+h|,\,\, |(I+h)^{-1}|\}<Z'_2.
\end{equation} From the expression (\ref{eq:4-2}) for $Q_t$ we can bound it as \begin{equation} \label{eq:4-5} \begin{split} |Q_t(h)| \leq & |(I+u(h))^{-1}(\Lambda_{\tilde{\omega}_t} \bar{\partial}\partial_{H_t}h)(I+h)^{-1} (I+u(h))-\Lambda_{\tilde{\omega}_t} \bar{\partial}\partial_{H_t}h| \\ + & |(I+u(h))^{-1}\Lambda_{\tilde{\omega}_t} (\partial_{H_t} h\cdot(I+h)^{-1}\wedge \bar{\partial}h\cdot (I+h)^{-1})(I+u(h)) |\\ + & |(I+u(h))^{-1}| \cdot |\Lambda_{\tilde{\omega}_t}F_{H_t}| \cdot |(I+u(h))| + |h| \cdot |\Lambda_{\tilde{\omega}_t}F_{H_t}| \\ \leq & Z'_1C^{sb}_k |t|^{\frac{\beta'-2}{3}} |\nabla_{H_t}^2 h| + (Z'_2)^4|\nabla_{H_t} h|^2 + ((Z'_2)^2+Z'_2)|\Lambda_{\tilde{\omega}_t}F_{H_t}| \end{split} \end{equation} where (\ref{eq:4-3}) and (\ref{eq:4-4}) are used. Now we estimate the $L^k_{0,-4}$-norm of $|\nabla_{H_t}^2 h|$ and $|\nabla_{H_t} h|^2$. First of all we have \begin{equation*} \begin{split} \Arrowvert |\nabla_{H_t}^2 h| \Arrowvert_{L^k_{0,-4}}^k & =\int_{X_t} \mathbf{r}_t^{\frac{8}{3}k}|\nabla_{H_t}^2 h|^k \mathbf{r}_t^{-4}dV_t = \int_{X_t} |\mathbf{r}_t^{\frac{2}{3}(2+2)}\nabla_{H_t}^2 h|^k \mathbf{r}_t^{-4}dV_t \leq \Arrowvert h \Arrowvert_{L^k_{2,-2}}^k \end{split} \end{equation*} and hence \begin{equation} \label{eq:4-6} \Arrowvert |\nabla_{H_t}^2 h| \Arrowvert_{L^k_{0,-4}} \leq \Arrowvert h \Arrowvert_{L^k_{2,-2}} \leq |t|^{\frac{\beta'}{3}}. \end{equation} Next we estimate \begin{equation*} \begin{split} \Arrowvert |\nabla_{H_t} h|^2 \Arrowvert_{L^k_{0,-4}}^k &=\int_{X_t} \mathbf{r}_t^{\frac{8}{3}k}|\nabla_{H_t} h|^{2k}\mathbf{r}_t^{-4}dV_t \\ &=\int_{X_t} \mathbf{r}_t^{-\frac{4}{3} k}\mathbf{r}_t^{\frac{2}{3}(2+1)2k}|\nabla_{H_t} h|^{2k}\mathbf{r}_t^{-4}dV_t \leq |t|^{-\frac{2}{3}k}\Arrowvert h \Arrowvert_{L^{2k}_{1,-2}}^{2k}. \end{split} \end{equation*} By Proposition \ref{pr:1-3} we have, for large $k$, $\Arrowvert h \Arrowvert_{L^{2k}_{1,-2}} \leq \hat{C}^{sb}_k \Arrowvert h \Arrowvert_{L^{k}_{2,-2}} \leq \hat{C}^{sb}_k|t|^{\frac{\beta'}{3}}$ for some constant $\hat{C}^{sb}_k$ independent of $t$. Thus we get \begin{equation} \label{eq:4-7} \Arrowvert |\nabla_{H_t} h|^2 \Arrowvert_{L^k_{0,-4}} \leq |t|^{-\frac{2}{3}} \Arrowvert h \Arrowvert_{L^{2k}_{1,-2}}^2 \leq (\hat{C}^{sb}_k)^2|t|^{\frac{1}{3}(2\beta'-2) }\leq(\hat{C}^{sb}_k)^2|t|^{\frac{\beta'-2}{3}} |t|^{\frac{\beta'}{3}}. \end{equation} From the remark after Proposition \ref{pr:3-4}, we have for some constants $\tilde{Z}_k>0$ \begin{equation} \label{eq:4-8} \Arrowvert \Lambda_{\tilde{\omega}_t}F_{H_t} \Arrowvert_{L^k_{0,-4}} \leq \tilde{Z}_k|t|^{\frac{3}{4}} \leq \tilde{Z}_k|t|^{\frac{3}{4}-\frac{\beta'}{3}}|t|^{\frac{\beta'}{3}}. \end{equation} Note that for $0<\beta'-2\ll 1$, $\frac{3}{4}-\frac{\beta'}{3}>\frac{\beta'-2}{3}>0$. We fix such a $\beta'$. Now, from (\ref{eq:4-5}), (\ref{eq:4-6}), (\ref{eq:4-7}), and (\ref{eq:4-8}) we have \begin{equation} \label{eq:4-9} \begin{split} \Arrowvert Q_t(h)\Arrowvert_{L^k_{0,-4}} \leq &\left(Z'_1C^{sb}_k + (Z'_2)^4\cdot (\hat{C}^{sb}_k)^2 + ((Z'_2)^2+Z'_2)\tilde{Z}_k \right)|t|^{\frac{\beta'-2}{3}}\cdot |t|^{\frac{\beta'}{3}} \end{split} \end{equation} for $t\neq 0$ small enough. \qed \end{proof} Fix $\beta'$ as in Proposition \ref{pr:4-2} and choose $\nu<\frac{1}{6}\beta'-\frac{1}{3}$ in Proposition \ref{pr:4-1}; then for $t\neq 0$ sufficiently small, $U_t$ maps $B(\beta')$ into itself. Next we show \begin{proposition} $U_t$ is a contraction mapping on $B(\beta')$ for $t\neq 0$ small enough.
\end{proposition} \begin{proof} We first show that when $t \neq 0$ is small enough and $k$ large enough, there are constants $\hat{Z}''_k>0$ such that for any $h_1$ and $h_2$ contained in $B(\beta')$, we have \begin{equation} \label{eq:18-4} \Arrowvert Q_t(h_1)-Q_t(h_2) \Arrowvert_{L^k_{0,-4}} \leq \hat{Z}''_k|t|^{\frac{\beta'-2}{3}}\Arrowvert h_1-h_2\Arrowvert_{L^k_{2,-2}}. \end{equation} As discussed in Proposition \ref{pr:4-2}, for $i=1,2$ when $|h_i| \in B(\beta')$ we have $|h_i|<C^{sb}_k|t|^{\frac{\beta'-2}{3}}$ for some constants $C^{sb}_k$. In this case there is a constant $Z'_3$ independent of $t$ such that \begin{equation} \label{eq:4-10} \begin{split} | (I+h_1)^{-1}(I+u(h_1)) - (I+h_2)^{-1}(I+u(h_2)) | & \leq Z'_3 |h_1-h_2|, \\ | (I+u(h_1))^{-1} - (I+u(h_2))^{-1} | & \leq Z'_3 |h_1-h_2|, \\ | (I+u(h_1)) - (I+u(h_2)) | & \leq Z'_3 |h_1-h_2|,\\ | (I+h_1)^{-1} - (I+h_2)^{-1} | & \leq Z'_3 |h_1-h_2|, \\ | u(h_1) - u(h_2) | & \leq Z'_3 |h_1-h_2|. \end{split} \end{equation} Using these bounds, the bounds in (\ref{eq:4-4}), and the expression in (\ref{eq:4-2}) for $Q_t(h)$, it is not hard to see that for some constant $Z'_4$ we have \begin{equation} \begin{split} & |Q_t(h_1)-Q_t(h_2)| \\ \leq &Z'_4 ((|\Lambda_{\tilde{\omega}_t} \bar{\partial}\partial_{H_t}h_1 | + |\Lambda_{\tilde{\omega}_t} \bar{\partial}\partial_{H_t}h_2 |) |h_1-h_2| + (|h_1|+|h_2|)|\Lambda_{\tilde{\omega}_t} \bar{\partial}\partial_{H_t}(h_1-h_2)| \\ &+(|\nabla_{H_t} h_1|^2+|\nabla_{H_t} h_2|^2) |h_1-h_2| \\ &+ (|\nabla_{H_t} h_1|+|\nabla_{H_t} h_2|) |\nabla_{H_t} (h_1-h_2)| +|\Lambda_{\tilde{\omega}_t}F_{H_t} | |h_1-h_2| ) \\ \leq &Z'_4 (|t|^{-\frac{2}{3} } ( |\Lambda_{\tilde{\omega}_t} \bar{\partial}\partial_{H_t}h_1 |+ |\Lambda_{\tilde{\omega}_t} \bar{\partial}\partial_{H_t}h_2 | +|\Lambda_{\tilde{\omega}_t}F_{H_t} |) |\mathbf{r}_t^{\frac{4}{3}}(h_1-h_2)|\\ &+|t|^{-\frac{2}{3} } (|\nabla_{H_t} h_1|^2+|\nabla_{H_t} h_2|^2) |\mathbf{r}_t^{\frac{4}{3}}(h_1-h_2)| \\ & +|t|^{-\frac{2}{3} } (|\mathbf{r}_t^{-\frac{2}{3}}\nabla_{H_t} h_1|+|\mathbf{r}_t^{-\frac{2}{3}}\nabla_{H_t} h_2|) |\mathbf{r}_t^{2}\nabla_{H_t} (h_1-h_2)| \\ & + (|h_1|+|h_2|)|\Lambda_{\tilde{\omega}_t} \bar{\partial}\partial_{H_t}(h_1-h_2)| ) \end{split} \end{equation} where in the last line we use the fact that $|t|^{-\frac{2}{3}}\mathbf{r}_t^{\frac{4}{3}} \geq 1$ on $X_t$. Therefore \begin{equation} \label{eq:4-11} \begin{split} &\Arrowvert Q_t(h_1)-Q_t(h_2) \Arrowvert_{L^k_{0,-4}} \\ \leq & Z'_4 (|t|^{-\frac{2}{3}} (\Arrowvert \Lambda_{\tilde{\omega}_t} \bar{\partial}\partial_{H_t}h_1 \Arrowvert_{L^k_{0,-4}} +\Arrowvert \Lambda_{\tilde{\omega}_t} \bar{\partial}\partial_{H_t}h_2 \Arrowvert_{L^k_{0,-4}} +\Arrowvert \Lambda_{\tilde{\omega}_t}F_{H_t} \Arrowvert_{L^k_{0,-4}}) \sup_{X_t}|\mathbf{r}_t^{\frac{4}{3}}(h_1-h_2)| \\ &+|t|^{-\frac{2}{3}} (\Arrowvert |\nabla_{H_t} h_1|^2 \Arrowvert_{L^{k}_{0,-4}} +\Arrowvert |\nabla_{H_t} h_2|^2 \Arrowvert_{L^{k}_{0,-4}} )\sup_{X_t}|\mathbf{r}_t^{\frac{4}{3}}(h_1-h_2)| \\ &+|t|^{-\frac{2}{3}} ( \Arrowvert \mathbf{r}_t^{-\frac{2}{3}}\nabla_{H_t} h_1 \Arrowvert_{L^k_{0,-4}} +\Arrowvert \mathbf{r}_t^{-\frac{2}{3}}\nabla_{H_t} h_2 \Arrowvert_{L^k_{0,-4}} ) \sup_{X_t}|\mathbf{r}_t^{2}\nabla_{H_t}(h_1-h_2)| \\ &+2C^{sb}_k|t|^{\frac{\beta'-2}{3}} \Arrowvert \Lambda_{\tilde{\omega}_t} \bar{\partial}\partial_{H_t}(h_1-h_2) \Arrowvert_{L^k_{0,-4}} ) \end{split} \end{equation} where (\ref{eq:4-01}) is used to bound $|h_1|+|h_2|$. 
The first term in the RHS of (\ref{eq:4-11}) is bounded as \begin{equation} \label{eq:4-12} \begin{split} &|t|^{-\frac{2}{3}} (\Arrowvert \Lambda_{\tilde{\omega}_t} \bar{\partial}\partial_{H_t}h_1 \Arrowvert_{L^k_{0,-4}} +\Arrowvert \Lambda_{\tilde{\omega}_t} \bar{\partial}\partial_{H_t}h_2 \Arrowvert_{L^k_{0,-4}} +\Arrowvert \Lambda_{\tilde{\omega}_t}F_{H_t} \Arrowvert_{L^k_{0,-4}}) \sup_{X_t}|\mathbf{r}_t^{\frac{4}{3}}(h_1-h_2)| \\ \leq & (2+\tilde{Z}_k)|t|^{-\frac{2}{3}} |t|^{\frac{\beta'}{3}} \Arrowvert h_1-h_2\Arrowvert_{C^0_{-2}} =(2+\tilde{Z}_k)C^{sb}_k|t|^{\frac{\beta'-2}{3}} \Arrowvert h_1-h_2\Arrowvert_{L^k_{2,-2}} \end{split} \end{equation} for $t$ small enough. Here we have used (\ref{eq:4-6}), (\ref{eq:4-8}) and (\ref{eq:4-0}). The second term in the RHS of (\ref{eq:4-11}) is bounded as \begin{equation} \label{eq:4-13} \begin{split} |t|^{-\frac{2}{3}}& (\Arrowvert |\nabla_{H_t} h_1|^2 \Arrowvert_{L^{k}_{0,-4}} +\Arrowvert |\nabla_{H_t} h_2|^2 \Arrowvert_{L^{k}_{0,-4}} )\sup_{X_t}|\mathbf{r}_t^{\frac{4}{3}}(h_1-h_2)| \\ \leq &|t|^{-\frac{2}{3}}\cdot 2(\hat{C}^{sb}_k)^2|t|^{\frac{\beta'-2}{3}} |t|^{\frac{\beta'}{3}} \cdot \Arrowvert h_1-h_2\Arrowvert_{C^0_{-2}} \leq 2(\hat{C}^{sb}_k)^2C^{sb}_k|t|^{\frac{2\beta'-4}{3}} \Arrowvert h_1-h_2\Arrowvert_{L^k_{2,-2}} \end{split} \end{equation} for $t$ small enough. Here (\ref{eq:4-7}) and (\ref{eq:4-0}) are used. To bound the third term in the RHS of (\ref{eq:4-11}), we first estimate that, for $\Arrowvert h \Arrowvert_{L^k_{2,-2}} \leq |t|^{\frac{\beta'}{3}}$, \begin{equation*} \begin{split} \Arrowvert \mathbf{r}_t^{-\frac{2}{3}}\nabla_{H_t} h \Arrowvert_{L^k_{0,-4}}^k & =\int_{X_t} \mathbf{r}_t^{\frac{8}{3}k}|\mathbf{r}_t^{-\frac{2}{3}}\nabla_{H_t} h|^k \mathbf{r}_t^{-4}dV_t \leq \int_{X_t} | \mathbf{r}_t^{2} \nabla_{H_t} h|^k \mathbf{r}_t^{-4}dV_t \leq \Arrowvert h \Arrowvert_{L^k_{2,-2}}^k \end{split} \end{equation*} and hence \begin{equation*} \Arrowvert \mathbf{r}_t^{-\frac{2}{3}}\nabla_{H_t} h \Arrowvert_{L^k_{0,-4}} \leq \Arrowvert h \Arrowvert_{L^k_{2,-2}} \leq |t|^{\frac{\beta'}{3}}. \end{equation*} Therefore we have \begin{equation} \label{eq:4-14} \begin{split} & |t|^{-\frac{2}{3}} \left( \Arrowvert \mathbf{r}_t^{-\frac{2}{3}}\nabla_{H_t} h_1 \Arrowvert_{L^k_{0,-4}} +\Arrowvert \mathbf{r}_t^{-\frac{2}{3}}\nabla_{H_t} h_2 \Arrowvert_{L^k_{0,-4}} \right ) \sup_{X_t}|\mathbf{r}_t^{2}\nabla_{H_t}(h_1-h_2)|\\ \leq & 2|t|^{-\frac{2}{3}} |t|^{\frac{\beta'}{3}} \Arrowvert h_1-h_2\Arrowvert_{C^1_{-2}} \leq 2C^{sb}_k|t|^{\frac{\beta'-2}{3}} \Arrowvert h_1-h_2\Arrowvert_{L^k_{2,-2}} \end{split} \end{equation} where the above estimate and (\ref{eq:4-0}) are used. Finally, it is easy to see that the last term in (\ref{eq:4-11}) is also bounded as \begin{equation} \label{last} 2C^{sb}_k|t|^{\frac{\beta'-2}{3}} \Arrowvert \Lambda_{\tilde{\omega}_t} \bar{\partial}\partial_{H_t}(h_1-h_2) \Arrowvert_{L^k_{0,-4}} \leq 2C^{sb}_k |t|^{\frac{\beta'-2}{3}} \Arrowvert h_1-h_2\Arrowvert_{L^k_{2,-2}}. \end{equation} Plugging (\ref{eq:4-12}), (\ref{eq:4-13}), (\ref{eq:4-14}) and (\ref{last}) into (\ref{eq:4-11}) proves (\ref{eq:18-4}). Recall that we have chosen $\nu<\frac{1}{6}\beta'-\frac{1}{3}$. Therefore (\ref{eq:18-4}) and the bound for the norm of $P_t$ given in Proposition \ref{pr:4-1} show that for $t \neq 0$ small enough $U_t$ is a contraction mapping, as desired. 
\qed \end{proof} Using the contraction mapping theorem on $U_t:B(\beta') \rightarrow B(\beta')$, we have now proved \begin{theorem} For $t\neq 0$ sufficiently small, the bundle $\mathcal{E}_t$ admits a smooth Hermitian-Yang-Mills metric with respect to the balanced metric $\tilde{\omega}_t$.\\[0.2cm] \end{theorem} \section{Proposition 7.1} What remains to be proved is the following proposition. \begin{proposition} \label{pr:5-1} For each $\nu >0$, we have \begin{equation*} \int_{X_t} |\mathbf{r}_t^{\frac{4}{3}}h|^2\mathbf{r}_t^{-4}dV_t\leq |t|^{-\nu}\int_{X_t} |\partial_{H_t}h|^2dV_t \end{equation*} for $t\neq 0$ small. \end{proposition} We can regard this proposition as a problem of smallest eigenvalue of a self-adjoint operator. Consider the pairing \begin{equation*} \langle h_1,h_2\rangle_{L^2_{0,-2}}:=\int_{X_t} \mathbf{r}_t^{\frac{8}{3}}\langle h_1,h_2 \rangle_{H_t} \mathbf{r}_t^{-4}dV_t. \end{equation*} One can compute \begin{equation*} \begin{split} &\int_{X_t} \langle \partial_{H_t}h_1,\partial_{H_t}h_2 \rangle_{H_t,\tilde{g}_t} dV_t = \int_{X_t} \langle \sqrt{-1}\Lambda_{\tilde{\omega}_t}\bar{\partial} \partial_{H_t} h_1, h_2 \rangle_{H_t} dV_t \\ =&\int_{X_t} \mathbf{r}_t^{\frac{8}{3}} \langle \sqrt{-1}\mathbf{r}_t^\frac{4}{3}\Lambda_{\tilde{\omega}_t}\bar{\partial} \partial_{H_t} h_1, h_2 \rangle_{H_t} \mathbf{r}_t^{-4}dV_t =\langle \sqrt{-1}\mathbf{r}_t^\frac{4}{3}\Lambda_{\tilde{\omega}_t}\bar{\partial} \partial_{H_t} h_1,h_2\rangle_{L^2_{0,-2}}. \end{split} \end{equation*} From this we see that the operator $\sqrt{-1}\mathbf{r}_t^\frac{4}{3}\Lambda_{\tilde{\omega}_t}\bar{\partial} \partial_{H_t}$ is self-adjoint on $L^2_{0,-2}(\text{Herm}^0_{H_t}(\text{End} (\mathcal{E}_t)))$. Define the number \begin{equation*} \lambda_t:=\inf_{0\neq h\in L^2_{0,-2}(\text{Herm}^0_{H_t}(\text{End} (\mathcal{E}_t)))}\frac{\int_{X_t} |\partial_{H_t}h|^2 dV_t}{\int_{X_t} |h|^2\mathbf{r}_t^{-\frac{4}{3}}dV_t}. \end{equation*} It is not hard to show that the above infimum is achieved at those $h$ satisfying \begin{equation} \label{eq:11-2} \sqrt{-1}\mathbf{r}_t^\frac{4}{3}\Lambda_{\tilde{\omega}_t}\bar{\partial} \partial_{H_t}h=\lambda_t h, \end{equation} i.e., $h$ is an eigenvector of the operator $\sqrt{-1}\mathbf{r}_t^\frac{4}{3}\Lambda_{\tilde{\omega}_t}\bar{\partial} \partial_{H_t}$ corresponding to the smallest nonzero eigenvalue $\lambda_t$ on $L^2_{0,-2}(\text{Herm}^0_{H_t}(\text{End} (\mathcal{E}_t)))$. For each $t \neq 0$ let $h_t$ be such an element which satisfies $\Arrowvert h_t \Arrowvert_{L^2_{0,-2}}=1$. \begin{proof} Our goal is to show that for each $\nu >0$ one has $\lambda_t>|t|^\nu$ when $t \neq 0$ is small. Suppose such a bound does not exist. Then for some $\nu>0$ there is a sequence $\{t_n\}$ converging to $0$ such that $\lambda_{t_n} \leq |t_n|^\nu$. The endomorphisms $h_{t_n}$ introduced above satisfy \begin{equation} \label{eq:11-1} \sqrt{-1}\mathbf{r}^\frac{4}{3}\Lambda_{\tilde{\omega}_n}\bar{\partial} \partial_{H_n}h_n=\lambda_n h_n, \end{equation} \begin{equation} \label{eq:11-3} \int_{X_n} |h_n|^2\mathbf{r}^{-\frac{4}{3}}dV_n=1 \end{equation} and \begin{equation} \label{eq:11-4} \int_{X_n} |\partial_{H_n}h_n|^2 dV_n \leq |t_n|^\nu. \end{equation} Here we use the notations $\mathbf{r}$, $\tilde{\omega}_n$, $H_n$ and $\lambda_n$ to denote $\mathbf{r}_{t_n}$, $\tilde{\omega}_{t_n}$, $H_{t_n}$ and $\lambda_{t_n}$, respectively. In the following we will replace the subscripts $t_n$ with $n$. 
For each fixed $\delta>0$ and $n$ sufficiently large, because the Riemannian manifold $(X_n[\delta], \tilde{\omega}_n)$ has uniform geometry, and because the coefficients in the equations (\ref{eq:11-1}) are uniformly bounded, there is a constant $C$ independent of large $n$ such that \begin{equation*} \label{Lp} \Arrowvert h_n \Arrowvert_{L^p_3(X_n[2\delta])} \leq C\Arrowvert h_n \Arrowvert_{L^2(X_n[\delta])} \leq C'\left(\int_{X_n} |h_n|^2\mathbf{r}^{-\frac{4}{3}}dV_n\right)^\frac{1}{2} \leq C' \end{equation*} where $C'$ depends only on $\delta$ and $p$. For $p$ large enough we see that $\Arrowvert h_n \Arrowvert_{C^2(X_n[2\delta])} $ is bounded independent of $n$. Therefore by using the diagonal argument, there is a subsequence of $\{h_n\}$ converging to an $H_{0}$-symmetric endomorphism $h$ in the $C^1$ sense over each compactly embedded open subset of $X_{0,sm}$. From (\ref{eq:11-4}) one sees that $\bar{\partial}h=0$ over $X_{0,sm}$. But then $h$ is a holomorphic endomorphism of $\mathcal{E}|_{\hat{X}\backslash \bigcup C_i}$, and by Hartog's Theorem it extends to a holomorphic endomorphism of $\mathcal{E}$ over $\hat{X}$. Since $\mathcal{E}$ is irreducible, the existence of a HYM metric on $\mathcal{E}$ implies that it is stable and hence simple. Therefore $h=\mu I$ for some constant $\mu$. \begin{lemma} \label{lm:3} There exists an $0<\iota<\frac{1}{6}$ and a constant $C_{10}>0$ such that for any $0<\delta<\frac{1}{4}$ and large $n$, \begin{equation*} \int_{V_n(\delta)} |h_n|^2 \mathbf{r}^{-\frac{4}{3}}dV_n \leq C_{10}\delta^{2\iota}. \end{equation*} \end{lemma} Let's assume the lemma first. Then we have \begin{equation*} \begin{split} \int_{X_{0,sm}} |h|^2 \mathbf{r}^{-\frac{4}{3}}dV_0 =\lim_{\delta\rightarrow 0}\int_{X_{0}[\delta]} |h|^2 \mathbf{r}^{-\frac{4}{3}} dV_0 &=\lim_{\delta\rightarrow 0}\lim_{n\rightarrow \infty}\int_{X_{n}[\delta]} |h_n|^2\mathbf{r}^{-\frac{4}{3}}dV_n \\ &\geq \lim_{\delta\rightarrow 0}\lim_{n\rightarrow \infty} (1-C_{10}\delta^{2\iota})=1. \end{split} \end{equation*} On the other hand \begin{equation*} \begin{split} \int_{X_{0,sm}} |h|^2 \mathbf{r}^{-\frac{4}{3}}dV_0 &=\lim_{\delta\rightarrow 0}\lim_{n\rightarrow \infty}\int_{X_{n}[\delta]} |h_n|^2\mathbf{r}^{-\frac{4}{3}}dV_n \leq \lim_{\delta\rightarrow 0}\lim_{n\rightarrow \infty} 1=1, \end{split} \end{equation*} so we have \begin{equation*} \int_{X_{0,sm}} |h|^2\mathbf{r}^{-\frac{4}{3}}dV_0=1. \end{equation*} Since $h=\mu I$, this implies that \begin{equation} \label{eq:f-1} |\mu|^2=\left(\text{rank}(\mathcal{E})\int_{X_{0,sm}} \mathbf{r}^{-\frac{4}{3}}dV_0 \right)^{-1}. \end{equation} On the other hand, note that for each $\delta>0$, \begin{equation*} |\mu|\text{rank}(\mathcal{E})\text{Vol}_0(X_0[\delta])=\left| \int_{X_0[\delta]}\text{tr}\,h\, dV_0 \right|=\lim_{n \rightarrow \infty}\left| \int_{X_n[\delta]}\text{tr}\,h_n\, dV_n \right|. \end{equation*} Because \begin{equation*} \int_{X_n}\text{tr}\,h_n\, dV_n=0, \end{equation*} we have \begin{equation} \label{eq:f-2} \begin{split} &|\mu|\text{rank}(\mathcal{E})\text{Vol}_0(X_0[\delta]) =\lim_{n\rightarrow \infty}\left| \int_{X_n[{\delta}]}\text{tr}\,h_n\, dV_n \right| =\lim_{n\rightarrow \infty}\left| \int_{V_n(\delta)}\text{tr}\,h_n\, dV_n \right| \\ &\leq C_1\lim_{n\rightarrow \infty}\left( \int_{V_n(\delta)} |h_n|^2dV_n\right)^\frac{1}{2} \leq C_2\lim_{n\rightarrow \infty} \left( \int_{V_n(\delta)} |h_n|^2\mathbf{r}^{-\frac{4}{3}}dV_n \right)^\frac{1}{2} \leq C_3\delta^\iota. 
\end{split} \end{equation} Now choose $\delta$ small enough such that \begin{equation} \label{eq:f-3} \text{Vol}_0(X_0[\delta])\geq \frac{1}{2}\text{Vol}_0(X_0) \end{equation} and \begin{equation} \label{eq:f-4} \left(\text{rank}(\mathcal{E})\int_{X_{0,sm}} \mathbf{r}^{-\frac{4}{3}}dV_0 \right)^{-\frac{1}{2}}>\frac{2C_3\delta^\iota}{\text{rank}(\mathcal{E})\text{Vol}_0(X_0)}. \end{equation} We see that a contradiction arises from (\ref{eq:f-1})-(\ref{eq:f-4}). We have thus shown Proposition \ref{pr:5-1}.\qed \noindent{\scshape Proof of Lemma \ref{lm:3}} First of all, by the H\"{o}lder inequality, \begin{equation*} \int_{V_n(\delta)} |h_n|^2 \mathbf{r}^{-\frac{4}{3}} dV_n \leq \left( \int_{V_n(\frac{1}{4})} |h_n|^3 \mathbf{r}^{-3\iota} dV_n \right)^\frac{2}{3} \left(\int_{V_n(\delta)} \mathbf{r}^{-4+6\iota} dV_n \right)^\frac{1}{3}. \end{equation*} Because \begin{equation*} \left(\int_{V_n(\delta)} \mathbf{r}^{-4+6\iota} dV_n \right)^\frac{1}{3} \leq C_3\delta^{2\iota}, \end{equation*} it is enough to prove that \begin{equation*} \left( \int_{V_n(\frac{1}{4})} |h_n|^3 \mathbf{r}^{-3\iota} dV_n \right)^\frac{2}{3} \leq C_4 \end{equation*} for some constant $C_4>0$. The proof makes use of the Michael--Simon Sobolev inequality \cite{MS} which we now describe. Let $M$ be an $m$-dimensional submanifold in $\mathbb{R}^N$. Denote the mean curvature vector of $M$ by $H$. Then for any nonnegative function $f$ on $M$ with compact support, one has \begin{equation} \label{eq:9-1} \left( \int_M f^\frac{m}{m-1} dV_{g_E} \right)^\frac{m-1}{m} \leq C(m)\int_M (|\nabla f|_{g_E}+|H|\cdot f)\,dV_{g_E} \end{equation} where $C(m)$ is a constant depending only on $m$. Here all metrics and norms are the induced ones from the Euclidean metric on $\mathbb{C}^4$. We denote this induced metric by $g_E$. Do not confuse this metric with the metric $g_e$ appearing in earlier sections. In our case $M$ is the space $V_t(\frac{1}{2})$ identified as part of the submanifold $Q_{t} \subset \mathbb{C}^4$. As pointed out in \cite{FLY}, the relations between the volumes and norms for the CO-metric $g_{co,t}$ and those for the induced metric $g_E$ are \begin{equation} \label{eq:r-1} dV_{g_{co,t}}=\frac{2}{3}\mathbf{r}_t^{-2}dV_{g_E} \end{equation} and \begin{equation} \label{eq:r-2} |\nabla f|_{g_E}^2 \leq C\mathbf{r}_t^{-\frac{2}{3}}|\nabla f|_{g_{co,t}}^2 \end{equation} for any smooth function $f$ on $V_t(\frac{\delta}{2})$. Let $\tau(\mathbf{r})$ be a cutoff function defined on $V_n(1)$ such that $\tau(\mathbf{r})=1$ when $\mathbf{r}\leq \frac{1}{4}$ and $\tau(\mathbf{r})=0$ when $\mathbf{r} \geq \frac{1}{2}$. Extend it to $X_n$ by zero. From (\ref{eq:r-1}) we have \begin{equation} \label{eq:9-2} \int_{V_n(\frac{1}{4})} | h_n |^3\mathbf{r}^{-3\iota}dV_{co,n} \leq \frac{2}{3}\int_{V_n(\frac{1}{2})} |h_n |^{3} \mathbf{r}^{-3\iota-2} \tau^3 dV_{g_E} \end{equation} where $dV_{co,n}$ is the volume form with respect to the CO-metric $\omega_{co,t_n}$. Moreover, using the H\"{o}lder inequality, one can deduce from (\ref{eq:9-1}) that \begin{equation*} \left( \int_{V_n(\frac{1}{2})} f^3dV_{g_E} \right)^\frac{2}{3} \leq C\int_{V_n(\frac{1}{2})} |\nabla f|_{g_E}^2dV_{g_E} , \end{equation*} and using (\ref{eq:r-1}) and (\ref{eq:r-2}) we get \begin{equation} \label{eq:r-3} \left( \int_{V_n(\frac{1}{2})} f^3dV_{g_E} \right)^\frac{2}{3} \leq C_5\int_{V_n(\frac{1}{2})} |\nabla f|_{co,n}^2\mathbf{r}^\frac{4}{3}dV_{g_{co,n}} \end{equation} where $|\cdot|_{co,n}$ is used to denote $|\cdot|_{g_{co,t_n}}$.
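To make the H\"{o}lder step above explicit, here is a brief sketch (ours, included only for the reader's convenience); it assumes that the real dimension of $V_n(\frac{1}{2})\subset Q_{t_n}$ is $m=6$ and that the mean-curvature contribution can be absorbed into the constant, as in the displayed inequality preceding (\ref{eq:r-3}). Applying (\ref{eq:9-1}) to $f^{\frac{5}{2}}$ and then the Cauchy--Schwarz inequality, \begin{equation*} \begin{split} \left(\int_{V_n(\frac{1}{2})} f^{3}dV_{g_E}\right)^{\frac{5}{6}} \leq & \; C\int_{V_n(\frac{1}{2})} \Big(\tfrac{5}{2} f^{\frac{3}{2}}|\nabla f|_{g_E}+|H| f^{\frac{5}{2}}\Big)dV_{g_E} \\ \leq & \; C' \left(\int_{V_n(\frac{1}{2})} f^{3}dV_{g_E}\right)^{\frac{1}{2}}\left[\left(\int_{V_n(\frac{1}{2})} |\nabla f|_{g_E}^2dV_{g_E}\right)^{\frac{1}{2}}+\left(\int_{V_n(\frac{1}{2})} |H|^2 f^{2}dV_{g_E}\right)^{\frac{1}{2}}\right], \end{split} \end{equation*} so dividing by $(\int f^3dV_{g_E})^{\frac{1}{2}}$ and squaring gives $(\int f^3dV_{g_E})^{\frac{2}{3}} \leq C''\big(\int |\nabla f|_{g_E}^2dV_{g_E}+\int |H|^2f^2dV_{g_E}\big)$, from which the stated inequality follows once the mean-curvature term is controlled on $V_n(\frac{1}{2})$; we do not reproduce that control here.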
Apply (\ref{eq:r-3}) to $f=|h_n| \mathbf{r}^{-\iota-\frac{2}{3}}\tau$, and then together with (\ref{eq:9-2}) (and Lemma \ref{lm:0-1}) we have \begin{equation} \label{eq:19-1} \begin{split} &\left( \int_{V_n(\frac{1}{4})} |h_n|^3 \mathbf{r}^{-3\iota} dV_n \right)^\frac{2}{3} \\ \leq & \tilde{C}_1^\frac{2}{3}\left( \int_{V_n(\frac{1}{4})} |h_n|^3 \mathbf{r}^{-3\iota} dV_{co,n} \right)^\frac{2}{3} \leq \tilde{C}_1^\frac{2}{3}\left( \frac{2}{3}\int_{V_n(\frac{1}{2})} (|h_n| \mathbf{r}^{-\iota-\frac{2}{3}}\tau)^3 dV_{g_E} \right)^\frac{2}{3} \\ \leq & C_6\int_{V_n(\frac{1}{2})} |\nabla (|h_n| \mathbf{r}^{-\iota-\frac{2}{3}}\tau)|_{co,n}^2 \mathbf{r}^\frac{4}{3} dV_{co,n} \\ \leq & 3C_6\int_{V_n(\frac{1}{2})} |\nabla |h_n| |_{co,n}^2 \mathbf{r}^{-2\iota}\tau^2 dV_{co,n} +3C_6\int_{V_n(\frac{1}{2})} |h_n|^2 |\nabla \mathbf{r}^{-\iota-\frac{2}{3}}|_{co,n}^2 \tau^2 \mathbf{r}^\frac{4}{3} dV_{co,n} \\ &+ 3C_6\int_{V_n(\frac{1}{2})} |h_n|^2 \mathbf{r}^{-2\iota} |\nabla\tau|_{co,n}^2 dV_{co,n}. \\ \end{split} \end{equation} The third term on the RHS of (\ref{eq:19-1}) is an integral over $V_n(\frac{1}{4},\frac{1}{2})$ in which the support of $\nabla \tau$ lies. From (\ref{eq:11-3}) one sees that it is bounded by some constant $C_7>0$ independent of $n$. Later whenever we encounter an integral with a derivative of $\tau$ in the integrand, we will bound it by a constant for the same reason. Because $h_n$ is $H_n$-hermitian symmetric, $\bar{\partial} h_n=(\partial_{H_n} h_n)^{*_{H_n}}$, and so the first term on the RHS of (\ref{eq:19-1}) can be bounded as \begin{equation} \label{eq:r-4} \begin{split} \int_{V_n(\frac{1}{2})} |\nabla |h_n| |_{co,n}^2 \mathbf{r}^{-2\iota}\tau^2 dV_{co,n} \leq &\int_{V_n(\frac{1}{2})} (\langle \partial_{H_n} h_n, \partial_{H_n} h_n \rangle_{co,n}+\langle \bar{\partial} h_n, \bar{\partial} h_n \rangle_{co,n}) \mathbf{r}^{-2\iota}\tau^2 dV_{co,n} \\ = &2\int_{V_n(\frac{1}{2})} \langle \partial_{H_n} h_n, \partial_{H_n} h_n \rangle_{co,n} \mathbf{r}^{-2\iota}\tau^2 dV_{co,n} \leq \tilde{C}_3|t_n|^{\nu-\iota} \end{split} \end{equation} for some constant $\tilde{C}_3>0$ independent of $n$. The last inequality follows from (\ref{eq:11-4}) and Lemma \ref{lm:0-1}. We now fix an $\iota$ such that $0<\iota<\min\{\frac{1}{6},\nu\}$. Then we see that as $n$ goes to infinity, this term goes to zero. Finally we deal with the second term on the RHS of (\ref{eq:19-1}). It can be bounded as \begin{equation} \label{eq:19-2} \begin{split} \int_{V_n(\frac{1}{2})} |h_n|^2 |\nabla \mathbf{r}^{-\iota-\frac{2}{3}}|_{co,n}^2 \tau^2 \mathbf{r}^\frac{4}{3} dV_{co,n} \leq &C_8\int_{V_n(\frac{1}{2})} |h_n|^2 \mathbf{r}^{-2\iota-\frac{4}{3}} \tau^2 dV_{co,n} \end{split} \end{equation} for some constant $C_8>0$. Hence it is enough to bound the term on the right. To do so, we introduce the notation $\phi_2=\mathbf{r}^{-2\iota}$, and denote $\partial_{H_n}^{\phi_2}=\partial_{H_n}+\partial \log \phi_2 \wedge$.
We can estimate \begin{equation*} \begin{split} 0 \leq& \int_{V_n(\frac{1}{2})} \langle \bar{\partial} h_n, \bar{\partial} h_n \rangle_{\tilde{g}_n} \phi_2\tau^2 dV_{n} \leq -\int_{V_n(\frac{1}{2})} \langle \sqrt{-1}\Lambda_{\tilde{\omega}_n} \partial_{H_n}^{\phi_2} \bar{\partial} h_n, h_n \rangle_{\tilde{g}_n} \phi_2\tau^2 dV_{n}+C_7 \\ =& \int_{V_n(\frac{1}{2})} -\langle \sqrt{-1}\Lambda_{\tilde{\omega}_n} \partial_{H_n}^{\phi_2} \bar{\partial} h_n+\sqrt{-1}\Lambda_{\tilde{\omega}_n} \bar{\partial} \partial_{H_n}^{\phi_2} h_n, h_n \rangle_{\tilde{g}_n}\phi_2\tau^2 dV_{n} \\ &+\int_{V_n(\frac{1}{2})} \langle \sqrt{-1}\Lambda_{\tilde{\omega}_n} \bar{\partial} \partial_{H_n}^{\phi_2} h_n, h_n \rangle_{\tilde{g}_n} \phi_2\tau^2 dV_{n} +C_7. \end{split} \end{equation*} One can compute that \begin{equation*} \sqrt{-1}\Lambda_{\tilde{\omega}_n} \partial_{H_n}^{\phi_2} \bar{\partial} h_n+\sqrt{-1}\Lambda_{\tilde{\omega}_n} \bar{\partial} \partial_{H_n}^{\phi_2} h_n = -[\sqrt{-1}\Lambda_{\tilde{\omega}_n}F_{H_n}, h_n]+(\sqrt{-1}\Lambda_{\tilde{\omega}_n} \bar{\partial} \partial \log\phi_2) h_n, \end{equation*} and so we have \begin{equation*} \begin{split} 0\leq & \int_{V_n(\frac{1}{2})} \langle [\sqrt{-1}\Lambda_{\tilde{\omega}_n}F_{H_n}, h_n],h_n\rangle_{\tilde{g}_n}-\langle (\sqrt{-1}\Lambda_{\tilde{\omega}_n} \bar{\partial} \partial \log\phi_2) h_n, h_n \rangle_{\tilde{g}_n} \phi_2\tau^2 dV_{n} \\ &+\int_{V_n(\frac{1}{2})} \langle \partial_{H_n}^{\phi_2} h_n, \partial_{H_n}^{\phi_2} h_n \rangle_{\tilde{g}_n} \phi_2\tau^2 dV_{n} +C_7 \\ \leq &2 \int_{V_n(\frac{1}{2})} |\Lambda_{\tilde{\omega}_n}F_{H_n}| | h_n|^2 \phi_2\tau^2 dV_{n} + 2\int_{V_n(\frac{1}{2})} |\partial_{H_n} h_n|_{\tilde{g}_n}^2 \phi_2\tau^2 dV_{n} \\ &+ \int_{V_n(\frac{1}{2})} (2|\partial\log \phi_2|_{\tilde{g}_n}^2-\sqrt{-1}\Lambda_{\tilde{\omega}_n} \bar{\partial} \partial \log\phi_2) | h_n|^2 \phi_2\tau^2 dV_{n} +C_7 \end{split} \end{equation*} To proceed, we use the bound $|\Lambda_{\tilde{\omega}_n}F_{H_n}| \leq \tilde{Z}_0\mathbf{r}^{-\frac{4}{3}}|t_n|^{\frac{1}{4}}$ from the remark at the end of Section 5 to deal with the first term. We use (\ref{eq:11-4}) to take care of the second term. Finally, we have \begin{equation*} |\partial\log \phi_2|_{\tilde{g}_n}^2<3\iota^2 \mathbf{r}^{-\frac{4}{3}}\,\,\,\text{and}\,\,\sqrt{-1}\Lambda_{\tilde{\omega}_n} \bar{\partial} \partial \log\phi_2>\iota \mathbf{r}^{-\frac{4}{3}}, \end{equation*} which follow from (bottom of) p.31 of \cite{FLY} together with the observation $\sqrt{-1}\bar{\partial} \partial \log\phi_2\geq 0$ and the crude estimate $\frac{1}{2}g_{co,t} \leq \tilde{g}_t \leq 2g_{co,t}$ on $V_t(\frac{1}{2})$ for $t$ sufficiently small. Thus \begin{equation*} \begin{split} 0\leq & 2\tilde{Z}_0 |t_n|^{\frac{1}{4}} \int_{V_n(\frac{1}{2})} | h_n|^2 \mathbf{r}^{-\frac{4}{3}}\phi_2\tau^2 dV_{n} + 2|t_n|^{\nu-\iota} + \int_{V_n(\frac{1}{2})} (6\iota^2-\iota) | h_n|^2 \mathbf{r}^{-\frac{4}{3}}\phi_2\tau^2 dV_{n} +C_7 \\ \leq &(2\tilde{Z}_0 |t_n|^\frac{1}{4}+6\iota^2-\iota) \int_{V_n(\frac{1}{2})} | h_n|^2 \mathbf{r}^{-2\iota-\frac{4}{3}}\tau^2 dV_{n}+ 2|t_n|^{\nu-\iota} +C_4. \end{split} \end{equation*} Recall that $0<\iota<\min\{\frac{1}{6},\nu\}$ is fixed. Let $n$ be large so that $2\tilde{Z}_0 |t_n|^{\frac{1}{4}}+6\iota^2-\iota<0$, we see from above that \begin{equation} \label{eq:19-4} \int_{V_n(\frac{1}{2})} | h_n|^2 \mathbf{r}^{-2\iota-\frac{4}{3}}\tau^2 dV_{n} \leq C(\iota) \end{equation} for some constant $C(\iota)>0$ depending on $\iota$. 
From (\ref{eq:19-1}), (\ref{eq:r-4}), (\ref{eq:19-2}), and (\ref{eq:19-4}) the proof is complete. \qed \end{proof} \end{document}
\begin{document} \title{Entanglement dynamics of three-qubit states in many-sided noisy channels} \author{Michael Siomau} \email{[email protected]} \affiliation{Physikalisches Institut, Heidelberg Universit\"{a}t, D-69120 Heidelberg, Germany} \affiliation{Department of Theoretical Physics, Belarussian State University, 220030 Minsk, Belarus} \begin{abstract} We study the entanglement dynamics of pure three-qubit Greenberger-Horne-Zeilinger-type (GHZ-type) entangled states when one, two, or three qubits are subjected to general local noise. Employing a lower bound for three-qubit concurrence as an entanglement measure, we show that for some many-sided noisy channels the entanglement dynamics can be completely described by the evolution of the entangled states in single-sided channels. \end{abstract} \pacs{03.67.Mn, 03.65.Yz.} \maketitle \section{\label{sec:1} Introduction} It is widely accepted nowadays that entangled states of multiparticle systems are the most promising resource for quantum information processing. At the same time, entanglement of complex systems is known to be very fragile with regard to decoherence, which may appear, for instance, due to the transmission of the whole quantum system or some of its subsystems through communication channels. In practice, moreover, it is often required to distribute parts of an entangled multipartite system between several remote recipients. In this case, each subsystem is coupled locally with its environment. Such a coupling of a quantum subsystem to some environmental channel leads to decoherence of the multipartite system and usually to some loss of entanglement. For successful practical utilization of entangled states, it is of great importance to quantify entanglement of the complex quantum systems and describe quantitatively their entanglement dynamics under the action of local (noisy) channels. There are two main approaches in the literature to investigate the entanglement dynamics of multiparticle entangled states. The first is to study the state evolution of particular (usually maximally entangled) states under the action of some chosen noisy channels and deduce the entanglement dynamics from the state evolution \cite{Carvalho:04,Borras:09,Liu:09,Siomau1:10}. This approach, however, is restricted by our choice of the entangled states and the noise models. Recently, a completely different approach for the description of entanglement dynamics, which is based on an evolution equation for entanglement, has been developed \cite{Konrad:08,Li(d):09,Gour:10,Siomau2:10}. This latter approach allows one to obtain a direct relationship between the initial and the final entanglement of the system in which just one of its subsystems is subjected to an arbitrary noise. Unfortunately, the suggested evolution equations cannot be straightforwardly generalized to the case when more than one subsystem undergoes the action of local noisy channels. In this work, we investigate the entanglement dynamics of initially pure three-qubit states when one, two or three qubits are affected by general local single-qubit noisy channels. At first, we shall consider the entanglement dynamics of the maximally entangled Greenberger-Horne-Zeilinger (GHZ) state, which can be written, in the computational basis, as \begin{equation} \label{GHZ} \ket{\rm GHZ} = \frac{1}{\sqrt{2}} \left( \ket{000} + \ket{111} \right) \, . \end{equation} Later, we shall generalize our discussion to all (GHZ-type) pure three-qubit states which can be obtained from the state (\ref{GHZ}) by local unitary transformations.
To describe the influence of the local noisy channels on the entangled states we shall use the quantum operation formalism \cite{Nielsen:00}. Since there is no analytically computable measure of entanglement for multiqubit states \cite{Horodecki:09}, a lower bound for multiqubit concurrence \cite{Li(b):09} will be utilized to access the entanglement dynamics. We shall show that for some noisy channels the complex two- and three-sided entanglement dynamics, i.e. when two or three qubits are affected by local noise, can be completely described by the evolution of the entangled system in single-sided channels (when just one qubit is subjected to local noise). This work is organized as follows. In the next section, we present the entanglement measure of use, the lower bound for multiqubit concurrence, and recall its properties. In Sec. \ref{sec:3}, we step by step analyze cases when one, two or three qubits of the three-qubit system are subjected to local noisy channels. A summary is drawn in Sec. \ref{sec:4}. \section{\label{sec:2} The entanglement measure} Concurrence, originally suggested by Wootters \cite{Wootters:98} to describe entanglement of an arbitrary state of two qubits, has been recognized as a very powerful measure of entanglement. Although various extensions of the concurrence to the case of bipartite states, in which the dimensions of the associated Hilbert (sub-)spaces are larger than two, have been suggested \cite{Coffman:00,Rungta:01,Gour:05}, a full generalization of the concurrence towards multiparticle states still remains challenging \cite{Horodecki:09}. However, a formal extension of the concurrence to the multipartite case can be successfully approximated by an analytically computable function, a so-called lower bound, which never exceeds the multipartite concurrence, but nonetheless is close enough to its values. To date, several lower bounds for the multipartite concurrence have been proposed \cite{Carvalho:04,Horodecki:09,Li(b):09}. Let us recall and exploit the lower bound for the multiqubit concurrence as suggested by Li \textit{et al.} \, \cite{Li(b):09}. The multiqubit concurrence for a pure three-qubit state $\ket{\psi}$ is given by \begin{equation} \label{c-pure} C_3(\ket{\psi}) = \sqrt{1 - \frac{1}{3} \sum_{i=1}^3 {\rm Tr}\, \rho_i^2 } \, , \end{equation} where $\rho_i = {\rm Tr}_{j,k}\, \ket{\psi}\bra{\psi}$ (with $j,k \neq i$) denotes the reduced density matrix of the $i$-th qubit, obtained by tracing out the remaining two qubits. The concurrence for an arbitrary mixed three-qubit state can be formally defined by means of the so-called convex roof \begin{equation} \label{c-mixed} C_3(\rho) = {\rm min} \sum_i p_i \, C_3(\ket{\psi_i}) \, , \end{equation} relying on the fact that any mixed state can be expressed as a convex sum of some pure states $\{ \ket{\psi_i} \}$: $\rho = \sum_i \, p_i \ket{\psi_i}\bra{\psi_i}$. However, the minimum in this expression is to be found among all possible decompositions of $\rho$ into pure states $\ket{\psi_i}$. No solution has been found so far to optimize the concurrence (\ref{c-mixed}) analytically \cite{Horodecki:09}. A simple analytically computable lower bound $\tau_3 (\rho)$ for three-qubit concurrence can be given in terms of the three bipartite concurrences $C^{ab|c}$ ($a,b,c=1,2,3$ and $a\neq b\neq c\neq a$) \cite{Li(b):09} as \begin{equation} \label{low-bound-three} \tau_3 (\rho) = \sqrt{\frac{1}{3}\, \sum_{k=1}^6 \, \left[ (C_k^{12|3})^2 + (C_k^{13|2})^2 + (C_k^{23|1})^2 \right] } \, .
\end{equation} Here, each bipartite concurrence $C^{ab|c}$ corresponds to a possible (bipartite) cut of the three-qubit system in which just one of the qubits is discriminated from the other two qubits. For a separation $ab|c$, the bipartite concurrence $C^{ab|c}$ is given by a sum of six terms $C_k$ which are expressed as \begin{equation} \label{concurence} C_k^{ab|c} = {\rm max} \left( 0, \lambda_k^1 - \lambda_k^2 - \lambda_k^3 - \lambda_k^4 \right) \, , \end{equation} where the $\lambda_k^m$, $m=1,\ldots,4$, are the square roots of the four nonvanishing eigenvalues of the matrix $\rho\, \tilde{\rho}_k^{\: ab|c}$, taken in decreasing order. These matrices $\rho\: \tilde{\rho}_k^{\: ab|c}$ are formed by means of the density matrix $\rho$ and its complex conjugate $\rho^*$, and are further transformed by the operators $\{ S_k^{ab|c} = L^{ab|c}_k \otimes L_0,\; k = 1,...,6 \}$ as $\tilde{\rho}_k^{\: ab|c} = S_k^{ab|c} \rho^\ast S_k^{ab|c}$. In this notation, moreover, $L_0$ is the (single) generator of the group SO(2) which is given by the second Pauli matrix $\sigma_y = - i \, ( \ket{0}\bra{1} - \ket{1}\bra{0})$; while the $\{ L^{ab|c}_k \}$ are the six generators of the group SO$(4)$ that can be expressed explicitly by means of the totally antisymmetric Levi-Civita symbol in four dimensions as $(L_{kl})_{mn} = - i \, \varepsilon_{klmn}; \; k,l,m,n =1..4$ \cite{Jones:98}. Since the lower bound (\ref{low-bound-three}) is just an approximation for the convex roof (\ref{c-mixed}), it is of great importance to know how accurate this bound is with respect to the convex roof for multiqubit concurrence. It has been checked \cite{Siomau:11} by sampling $100$ randomly generated density matrices $\rho$ that the lower bound $\tau_3 (\rho)$ coincides with the numerically simulated convex roof $C_3(\rho)$ for all density matrices with rank $r \leq 4$. This result has found its explanation in the fact that the lower bound $\tau_3 (\rho)$, by its construction \cite{Li(b):09}, relies only on four nonvanishing eigenvalues of the matrix $\rho\, \tilde{\rho}_k^{\: ab|c}$, while this matrix may have at most eight eigenvalues. Taking into account the mentioned property of the lower bound $\tau_3 (\rho)$, it is convenient to define a function of at most four variables \begin{eqnarray} \label{func} f(w,x,y,z) & = & \nonumber \\[0.1cm] & & \hspace*{-1.4cm} {\rm max} \left( 0, \; 2\,{\rm max}\left(w,x,y,z\right) -w -x -y -z \right) \, . \end{eqnarray} It is easy to see that $C_k^{ab|c} = f(\lambda_k^1, \lambda_k^2, \lambda_k^3, \lambda_k^4)$, where $\lambda_k^m$, $m=1,\ldots,4$, are the square roots of the eigenvalues of the matrix $\rho\, \tilde{\rho}_k^{\: ab|c}$. Whenever the matrix $\rho\, \tilde{\rho}_k^{\: ab|c}$ has fewer than four, e.g. two, eigenvalues, we shall use the function of just two inputs $f(w, x) \equiv f(w, x, 0, 0)$. \section{\label{sec:3} Entanglement dynamics in local noisy channels} The quantum operation formalism is a very general and prominent tool to describe how a quantum system has been influenced by its environment. According to this formalism, the final state of a quantum system that is coupled to some environmental channel can be obtained from its initial state with the help of (Kraus) operators \begin{equation} \label{sum-represent} \rho_{\rm fin} = \sum_i K_i \, \rho_{\rm ini} \, K_i^\dag \, , \end{equation} and the condition $\sum_i K_i^\dag \, K_i = I$ is fulfilled.
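As an illustrative aside (ours, not part of the original analysis), Eq.~(\ref{sum-represent}) is straightforward to evaluate numerically. The following minimal Python sketch applies an arbitrary set of Kraus operators to a density matrix and checks the completeness condition stated above; the function and variable names are ours and serve only as an example.
\begin{verbatim}
import numpy as np

def apply_channel(rho, kraus_ops):
    # Return sum_i K_i rho K_i^dagger for a list of Kraus operators.
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

def is_trace_preserving(kraus_ops, tol=1e-12):
    # Check the completeness condition sum_i K_i^dagger K_i = I.
    dim = kraus_ops[0].shape[1]
    total = sum(K.conj().T @ K for K in kraus_ops)
    return np.allclose(total, np.eye(dim), atol=tol)
\end{verbatim}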
Note that we consider only such system-environment interactions that can be associated with completely positive {\it trace-preserving} maps \cite{Nielsen:00}. If the quantum system of interest consists of just a single qubit, which is subjected to some environmental channel $A$, then an arbitrary quantum operation can be expressed with the help of at most four operators \cite{Nielsen:00}. Let us define the four operators through the Pauli matrices as \begin{eqnarray} \label{single} K_1(a_1) &=& \frac{a_1}{\sqrt{2}} \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right) , \; K_2(a_2) = \frac{a_2}{\sqrt{2}} \left( \begin{array}{cc} 0 & 1 \\1 & 0 \end{array} \right) \, , \nonumber \\[0.1cm] K_3(a_3) &=& \frac{a_3}{\sqrt{2}} \left( \begin{array}{cc} 0 & -i \\ i & 0 \end{array} \right) , \: K_4(a_4) = \frac{a_4}{\sqrt{2}} \left( \begin{array}{cc} 1 & 0 \\0 & -1 \end{array} \right) \, \end{eqnarray} where $a_i$ are real parameters and the condition $\sum_{i=1}^4 a_i^2 = 1$ holds. For a three-qubit system, local interactions of the qubits with channels $A,B$ and $C$ can be described by operators constructed as tensor products of the single-qubit operators $K_i(a_i), K_i(b_i)$ and $K_i(c_i)$. Therefore, if the three qubits are affected by local noise simultaneously, the final state of the system (\ref{sum-represent}) can be obtained from its initial state with the help of $64$ operators $K_i(a_i)\otimes K_j(b_j)\otimes K_l(c_l)$ $(i,j,l=1...4)$. If just two qubits undergo the action of local channels, the total number of Kraus operators is $16$. \subsection{\label{sec:3.1} Single-sided channels} Suppose just one qubit of the three-qubit system, which is initially prepared in the pure $GHZ$ state (\ref{GHZ}), is subjected to a noisy channel $A$. The final state of the system is in general mixed and can be expressed by a rank-4 density matrix $\rho$. This matrix is obtained from Eq.~(\ref{sum-represent}) using the four operators $1\otimes 1 \otimes K_i(a_i)$. Here we assumed, without loss of generality, that the third qubit undergoes the action of the channel. From the matrix $\rho$ we can compute the three bipartite concurrences $C^{12|3}, C^{13|2}, C^{23|1}$ and the lower bound (\ref{low-bound-three}). The concurrences are given by \begin{eqnarray} \label{1-side-1} C^{12|3} &=& f(a_1^2, a_2^2, a_3^2, a_4^2) \, , \\[0.1cm] \label{1-side-2} C^{13|2} &=& \sqrt{f^2(a_1^2, a_4^2) + f^2(a_2^2, a_3^2)} \, , \\[0.1cm] \label{1-side-3} C^{23|1} &=& \sqrt{f^2(a_1^2, a_4^2) + f^2(a_2^2, a_3^2)} \, , \end{eqnarray} where we dropped the arguments of the concurrences, e.g. $C^{12|3} \equiv C^{12|3} (\left[1\otimes 1\otimes A\right] \ket{GHZ}\bra{GHZ})$. Interestingly, if the channel $A$ is the bit-flip $(a_1\neq 0, a_2\neq 0, a_3=a_4=0)$ or the bit-phase-flip $(a_1\neq 0, a_3\neq 0, a_2=a_4=0)$ channel \cite{Nielsen:00}, the entanglement of the three-qubit system, described by means of the lower bound $\tau_3 (\rho)$, never vanishes. The description of the entanglement dynamics of the $GHZ$ state (\ref{GHZ}) in single-sided channels can be generalized to the case of an arbitrary pure three-qubit state $\ket{\psi}$ which can be obtained from the $GHZ$ state by local unitary transformations.
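For concreteness, the single-sided case just described can be reproduced numerically; the short Python sketch below (ours, not from the original) prepares the GHZ state (\ref{GHZ}), applies a bit-flip channel to the third qubit, and evaluates the function $f$ of Eq.~(\ref{func}) on $(a_1^2,a_2^2,a_3^2,a_4^2)$ as in Eq.~(\ref{1-side-1}). We assume the qubit ordering $\ket{q_1 q_2 q_3}$ and use the normalization $K_i=a_i\sigma_i$ with $\sum_i a_i^2=1$, so that $\sum_i K_i^\dag K_i=I$; all names are ours.
\begin{verbatim}
import numpy as np

paulis = [np.eye(2),
          np.array([[0, 1], [1, 0]]),     # sigma_x
          np.array([[0, -1j], [1j, 0]]),  # sigma_y
          np.array([[1, 0], [0, -1]])]    # sigma_z

def f(w, x, y, z):
    # The auxiliary function f from the text: max(0, 2*max - sum).
    vals = [w, x, y, z]
    return max(0.0, 2 * max(vals) - sum(vals))

# GHZ state and its density matrix
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)
rho = np.outer(ghz, ghz.conj())

# Bit-flip channel on the third qubit: a1, a2 nonzero, a3 = a4 = 0.
a = np.array([np.sqrt(0.7), np.sqrt(0.3), 0.0, 0.0])
kraus = [np.kron(np.eye(4), ai * sigma) for ai, sigma in zip(a, paulis)]
rho_out = sum(K @ rho @ K.conj().T for K in kraus)

print(np.isclose(np.trace(rho_out).real, 1.0))  # trace is preserved
print(f(*(a**2)))  # C^{12|3} evaluated as in the text, here 0.4
\end{verbatim}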
This extension to arbitrary GHZ-type states is possible due to the recently suggested evolution equation for the lower bound $\tau_3$ \cite{Siomau2:10}, which states that the entanglement dynamics of an arbitrary pure state $\ket{\psi}$ of a three-qubit system, when one of its qubits undergoes the action of an arbitrary noisy channel $A$, is subordinated to the dynamics of the maximally entangled state, i.e. \begin{eqnarray} \label{tau} \tau_3 [(1 \otimes 1 \otimes A) \ket{\psi}\bra{\psi}] &=& \nonumber \\[0.1cm] & & \hspace*{-2.8cm} \tau_3 [(1 \otimes 1 \otimes A) \ket{GHZ}\bra{GHZ}] \: \tau_3 [\ket{\psi}] \, . \end{eqnarray} In conclusion, the entanglement dynamics of an arbitrary pure three-qubit state, which is locally equivalent to the GHZ state and is subjected to an arbitrary single-sided channel $A$, is given by the lower bound (\ref{low-bound-three}), which is defined through Eqs.~(\ref{1-side-1})-(\ref{1-side-3}), and the evolution equation (\ref{tau}). Moreover, based on the numerical simulations in \cite{Siomau:11}, we argue that the description of this particular case of the entanglement dynamics with the lower bound $\tau_3$ is as general as with the convex roof (\ref{c-mixed}) for multiqubit concurrence. \subsection{\label{sec:3.2} Two-sided channels} If the three-qubit system is prepared in the $GHZ$ state (\ref{GHZ}) and just two (say, the second and the third) qubits are affected by local channels $A$ and $B$ simultaneously, the final state of the system is given by a rank-8 density matrix $\rho$. In this case, the three bipartite concurrences can be expressed as \begin{widetext} \begin{small} \begin{eqnarray} \label{2-side-1} C^{12|3} = \sqrt{f^2(a_2^2b_3^2+a_3^2b_2^2, a_2^2b_2^2+a_3^2b_3^2, a_3^2b_1^2+a_2^2b_4^2, a_2^2b_1^2+a_3^2b_4^2) +f^2(a_4^2b_2^2+a_1^2b_3^2, a_1^2b_2^2+a_4^2b_3^2, a_4^2b_1^2+a_1^2b_4^2, a_1^2b_1^2+a_4^2b_4^2 )} \, , \\[0.1cm] \label{2-side-2} C^{13|2} = \sqrt{f^2(a_4^2b_2^2 +a_1^2b_3^2, a_3^2b_2^2 + a_2^2b_3^2, a_2^2b_2^2 +a_3^2b_3^2, a_1^2b_2^2 +a_4^2b_3^2 ) +f^2(a_4^2b_1^2 +a_1^2b_4^2, a_3^2b_1^2 +a_2^2b_4^2, a_2^2b_1^2 +a_3^2b_4^2, a_1^2b_1^2 +a_4^2b_4^2)} \, , \\[0.1cm] \label{2-side-3} C^{23|1} = \sqrt{f^2(a_4^2b_2^2 +a_1^2b_3^2, a_1^2b_2^2 +a_4^2b_3^2, a_3^2b_1^2 +a_2^2b_4^2, a_2^2b_1^2 +a_3^2b_4^2) +f^2(a_3^2b_2^2 +a_2^2b_3^2, a_2^2b_2^2+a_3^2b_3^2, a_4^2b_1^2 +a_1^2b_4^2, a_1^2b_1^2 +a_4^2b_4^2)} \, . \end{eqnarray} \end{small} \end{widetext} Although no general conclusion about the entanglement dynamics of the three-qubit system can be made from the structure of these bipartite concurrences, there are some interesting particular cases. If each of the channels $A$ and $B$ is just the bit-flip or the bit-phase-flip channel, the squared lower bound $\tau_3^2$ is given by \begin{equation} \label{low-bound-2-sided} \frac{1}{3} \left[ f^2(a_1^2, a_i^2) + f^2(b_1^2, b_j^2) + f^2(a_1^2, a_i^2)f^2(b_1^2, b_j^2) \right] \, , \end{equation} where $i,j = 2,3$. Symbolically Eq.~(\ref{low-bound-2-sided}) can be written as \begin{equation} \label{gen} \tau_3^2 (\left[ 1\otimes A\otimes B\right]\rho) = \frac{1}{3} \, \left( \mathbb{A}^2 + \mathbb{B}^2 + \mathbb{A}^2 \mathbb{B}^2\right) \, , \end{equation} where $\mathbb{A} \equiv C^{12|3}(\left[1\otimes 1\otimes A\right]\rho), \, \mathbb{B} \equiv C^{12|3}(\left[1\otimes 1\otimes B\right]\rho)$ and $\rho=\ket{GHZ}\bra{GHZ}$.
In other words, the entanglement dynamics of the $GHZ$ state in two-sided noisy channels, when the channels $A$ and $B$ are the bit-flip or the bit-phase-flip channels, is completely defined by the dynamics of the state in the single-sided channels. Although this result holds only for the mentioned channels, it can be extended to the case of an arbitrary pure three-qubit state $\rho = \ket{\psi}\bra{\psi}$ which is locally equivalent to the $GHZ$ state. This extension is based on the evolution equation for bipartite concurrence \cite{Li(d):09}, which is very similar in spirit and structure to the evolution equation (\ref{tau}) for the lower bound. For the bipartite concurrence $C^{12|3}$, for example, the evolution equation is given by \begin{eqnarray} \label{tauC} C^{12|3} [(1 \otimes 1 \otimes A) \ket{\psi}\bra{\psi}] &=& \nonumber \\[0.1cm] & & \hspace*{-4cm} C^{12|3} [(1 \otimes 1 \otimes A) \ket{GHZ}\bra{GHZ}] \: C^{12|3} [\ket{\psi}] \, . \end{eqnarray} It is important to note that Eqs.~(\ref{2-side-1})-(\ref{2-side-3}) are computed for a rank-8 density matrix and, therefore, the lower bound $\tau_3$ obtained with the help of these equations may differ from the convex roof (\ref{c-mixed}) for the concurrence significantly. However, due to the assumption that the channels $A$ and $B$ are the bit-flip or the bit-phase-flip channels, the state $\left[1\otimes A\otimes B\right]\ket{GHZ}\bra{GHZ}$ has rank four. Therefore, it can be argued that Eq.~(\ref{gen}), formulated for the lower bound $\tau_3$, remains valid if the convex roof (\ref{c-mixed}) is substituted in its left-hand side (lhs). \subsection{\label{sec:3.3} Three-sided channels} In the previous subsection we have analyzed the case when just two qubits of the three-qubit system are subjected to local noisy channels. A similar analysis can be made when all three qubits are affected by local channels $A, B$ and $C$ simultaneously. If the system is initially prepared in the $GHZ$ state (\ref{GHZ}), the final state density matrix, derived from Eq.~(\ref{sum-represent}), has rank eight. The analytically computed concurrences $C^{12|3}, C^{13|2}$ and $C^{23|1}$ in this case have a very complex structure and, therefore, we do not display them here. There are significant simplifications in the structure of the concurrences and the lower bound if all $A, B$ and $C$ are bit-flip channels or if two of these channels are bit-phase-flip and the remaining one is a bit-flip. In these two cases, the squared lower bound $\tau_3^2$ can be written as \begin{equation} \label{gen2} \tau_3^2 (\left[ A\otimes B\otimes C\right]\rho) = \frac{1}{3} \, \left( \mathbb{A}^2 \mathbb{B}^2 + \mathbb{B}^2 \mathbb{C}^2 + \mathbb{A}^2 \mathbb{C}^2\right) \, , \end{equation} where $\mathbb{A} \equiv C^{12|3}(\left[1\otimes 1\otimes A\right]\rho)$, $\mathbb{B} \equiv C^{12|3}(\left[1\otimes 1\otimes B\right]\rho)$, $\mathbb{C} \equiv C^{12|3}(\left[1\otimes 1\otimes C\right]\rho)$ and $\rho=\ket{GHZ}\bra{GHZ}$. The entanglement dynamics of the $GHZ$ state in the three-sided noisy channels is given just by the dynamics of the state in the single-sided channels. As in the previous section, this result can be generalized to the case of an arbitrary three-qubit state (which is locally equivalent to the $GHZ$ state) due to the evolution equation for bipartite concurrence \cite{Li(d):09}.
Moreover, the fact that the state $\left[A\otimes B\otimes C\right] \ket{GHZ}\bra{GHZ}$ has rank four for the mentioned channels suggests substituting the convex roof (\ref{c-mixed}) in the lhs of Eq.~(\ref{gen2}) without loss of any information about the entanglement dynamics. \section{\label{sec:4} Summary} Our analysis in the previous section suggests that the entanglement dynamics of a pure three-qubit GHZ-type entangled state $\ket{\psi}$ in some local many-sided noisy channels can be completely described by the evolution of the entangled state in single-sided channels. More specifically, if just two qubits are subjected to local channels $A$ and $B$, so that the channel $\left[ 1\otimes A\otimes B\right]$ is a combination of just bit-flip and/or bit-phase-flip channels, the lower bound for three-qubit concurrence $\tau_3 (\left[ 1\otimes A\otimes B\right]\ket{\psi}\bra{\psi})$ can be expressed without loss of generality through the bipartite concurrences $ C^{12|3}(\left[1\otimes 1\otimes A\right]\ket{\psi}\bra{\psi})$ and $C^{12|3}(\left[1\otimes 1\otimes B\right]\ket{\psi}\bra{\psi})$ by Eq.~(\ref{gen}). If all three qubits undergo the action of local noisy channels $A, B$ and $C$, which are either all bit-flip channels, or such that two of them are bit-phase-flip channels and the remaining one is a bit-flip channel, the lower bound $\tau_3 (\left[ A\otimes B\otimes C\right]\ket{\psi}\bra{\psi})$ is again given just by the bipartite concurrences, as displayed in Eq.~(\ref{gen2}). It is important to note that there are only two nonvanishing eigenvalues in the expression (\ref{1-side-1}) for the bipartite concurrence $C^{12|3}(\left[1\otimes 1\otimes A\right]\ket{\psi}\bra{\psi})$, if the channel $A$ is the bit-flip or the bit-phase-flip channel. Such a bipartite concurrence can be directly measured \cite{Guehne:09}. Eqs.~(\ref{gen}) and (\ref{gen2}) allow one to access the complex many-sided entanglement dynamics just by measuring the bipartite concurrence in the simplest case, when just one qubit is affected by a local noisy channel. \end{document}
\begin{document} \author[Abdollahi and Khosravi]{A. Abdollahi \;\;\&\;\; H. Khosravi} \title[Right and left Engel element]{On the right and left $4$-Engel elements} \address{$^{\mathbf{1}}$Department of Mathematics, University of Isfahan, Isfahan 81746-73441, Iran} \address{$^{\mathbf{2}}$School of Mathematics, Institute for Research in Fundamental Sciences (IPM), P.O.Box: 19395-5746, Tehran, Iran.} \email{($^{\mathbf{1,2}}$A. Abdollahi) \;\; [email protected] \;\ [email protected]} \email{hassan\[email protected]} \subjclass[2000]{20F45; 20F12} \keywords{Right Engel elements; Left Engel elements; Hirsch-Plotkin radical of a group; Baer radical of a group; Fitting subgroup} \thanks{The first author's research was in part supported by a grant from IPM (No. 87200118).} \begin{abstract} In this paper we study left and right 4-Engel elements of a group. In particular, we prove that $\langle a, a^b\rangle$ is nilpotent of class at most 4, whenever $a$ is any element and $b^{\pm 1}$ are right 4-Engel elements or $a^{\pm 1}$ are left 4-Engel elements and $b$ is an arbitrary element of $G$. Furthermore we prove that for any prime $p$ and any element $a$ of finite $p$-power order in a group $G$ such that $a^{\pm 1}\in L_4(G)$, $a^4$, if $p=2$, and $a^p$, if $p$ is an odd prime number, is in the Baer radical of $G$. \end{abstract} \maketitle \section{\bf Introduction and Results} Let $G$ be any group and $n$ be a non-negative integer. For any two elements $a$ and $b$ of $G$, we define inductively $[a,_n b]$, the $n$-Engel commutator of the pair $(a,b)$, as follows: $$[a,_0 b]:=a,~ [a,b]=[a,_1 b]:=a^{-1}b^{-1}ab \mbox{ and }[a,_n b]=[[a,_{n-1} b],b]\mbox{ for all }n>0.$$ An element $x$ of $G$ is called a right $n$-Engel element if $[x,_ng]=1$ for all $g\in G$. We denote by $R_n(G)$ the set of all right $n$-Engel elements of $G$. The subset corresponding to $R_n(G)$, which can be defined similarly, is $L_n(G)$, the set of all left $n$-Engel elements of $G$, where an element $x$ of $G$ is called a left $n$-Engel element if $[g,_n x]=1$ for all $g\in G$. A group $G$ is called $n$-Engel if $G=L_n(G)$ or equivalently $G=R_n(G)$. It is clear that $R_0(G)=1$, $R_1(G)=Z(G)$, the center of $G$, and by a result of Kappe \cite{kappe}, $R_2(G)$ is a characteristic subgroup of $G$. Also we have $L_0(G)=1$, $L_1(G)=Z(G)$ and it can be easily seen that $$L_2(G)=\{x\in G\;|\; \langle x\rangle^G ~ \text{is abelian} \},$$ where $\langle x\rangle^G$ denotes the normal closure of $x$ in $G$. In \cite{ab1} it is shown that $$L_3(G)=\{x\in G \;|\; \langle x,x^y\rangle \in \mathcal{N}_2 ~\text{ for all }~y\in G\},$$ where $\mathcal{N}_2$ is the class of nilpotent groups of class at most 2. Also it is proved that $\langle x,y\rangle$ is nilpotent of class at most 4 whenever $x,y\in L_3(G)$. Newell \cite{newell} has shown that the normal closure of every element of $R_3(G)$ in $G$ is nilpotent of class at most 3. This shows that $R_3(G)\subseteq Fit(G)$, where $Fit(G)$ is the Fitting subgroup of $G$, and in particular it is contained in $B(G)\subseteq HP(G)$ where $B(G)$ and $HP(G)$ are the Baer radical and Hirsch-Plotkin radical of $G$, respectively. It is clear that $$R_0(G)\subseteq R_1(G)\subseteq R_2(G)\subseteq\cdots \subseteq R_n(G)\subseteq\cdots. $$ Gupta and Levin \cite{gupta} have shown that the normal closure of an element in a 5-Engel group need not be nilpotent (see also \cite{vaghan} p. 342). Therefore $R_n(G)\nsubseteq Fit(G)$ for $n\geq 5$.
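As a quick check of the characterization of $L_2(G)$ displayed above (this verification is ours and is included only for the reader's convenience), note that $[g,x]=(x^g)^{-1}x$, so that $$[g,x,x]=[(x^g)^{-1}x,\,x]=[(x^g)^{-1},x]^x,$$ and hence $[g,_2 x]=1$ for all $g\in G$ if and only if $[x^g,x]=1$ for all $g\in G$. Replacing $g$ by $gh^{-1}$ and conjugating by $h$ gives $[x^g,x^h]=1$ for all $g,h\in G$, that is, $\langle x\rangle^G$ is abelian; the converse is immediate.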
In view of the relation between $R_n(G)$ and the Fitting subgroup just described, the following question naturally arises: \begin{qu}\label{q1} Let $G$ be an arbitrary group. What are the least positive integers $n$, $m$ and $p$ such that $R_n(G)\nsubseteq Fit(G)$, $R_m(G)\nsubseteq B(G)$ and $R_p(G)\nsubseteq HP(G)$? \end{qu} To find the integer $n$ in Question \ref{q1} we have to answer the following. \begin{qu} Let $G$ be an arbitrary group. Is it true that $R_4(G)\subseteq Fit(G)$? \end{qu} Although in \cite{ab1} it is shown that there exists $n \in \mathbb{N} $ such that $L_n(G)\nsubseteq HP(G)$, the following question is still open. \begin{qu} Let $G$ be an arbitrary group. What is the least positive integer $k$ such that $L_k(G)\nsubseteq HP(G)$? \end{qu} In this paper we study right and left 4-Engel elements. Our main results are the following. \begin{thm}\label{th3} Let $G$ be any group. If $a\in G$ and $b^{\pm 1}\in R_4(G)$, then $\langle a,a^b\rangle$ is nilpotent of class at most $4$. \end{thm} \begin{thm}\label{th4} Let $G$ be an arbitrary group and $a^{\pm 1}\in L_4(G)$. Then $\langle a,a^b\rangle$ is nilpotent of class at most $4$ for all $b\in G$. \end{thm} \begin{thm}\label{th5} Let $G$ be a group and $a^{\pm 1}\in L_4(G)$ be a $p$-element for some prime $p$. Then \begin{enumerate} \item If $p=2$ then $a^4\in B(G)$. \item If $p$ is an odd prime, then $a^p\in B(G)$. \end{enumerate} \end{thm} In \cite{traus,traus2} Traustason has proved the above results for a 4-Engel group $G$ (in which all elements are simultaneously right and left 4-Engel). The proofs of Theorems \ref{th3}, \ref{th4} and \ref{th5} are to some extent inspired by the arguments of \cite{traus,traus2}. In Section 4, we will show that in Theorem \ref{th3}, we cannot remove the condition $b^{-1}\in R_4(G)$ and that in Theorems \ref{th4} and \ref{th5}, the condition $a^{-1}\in L_4(G)$ is a necessary condition. Macdonald \cite{macd} has shown that the inverse or square of a right 3-Engel element need not be right 3-Engel. In Section 4 the GAP \cite{gap} implementation of Werner Nickel's nilpotent quotient algorithm \cite{nick} is used to prove that the inverse of a right (left, respectively) 4-Engel element is not necessarily a right (left, respectively) 4-Engel element. \section{\bf Right $4$-Engel elements} In this section we prove Theorem \ref{th3}. \begin{lem}\label{car} Let $G$ be a group with elements $a, b$. Let $x=a^b$. We have \begin{enumerate} \item[(a)] $[b^{-1}, a, a, a, a]=1$ if and only if $[x^{-1}, x^a]$ commutes with $x^a$. \item[(b)] $[b^{-1}, a^{\tau}, a^{\tau}, a^{\tau},a^{\tau}]=1$ for $\tau=1$ and $\tau=-1$ if and only if $\langle x^a,x\rangle$ is nilpotent of class at most $2$. \item[(c)] $[b^{\epsilon}, a^{\tau}, a^{\tau}, a^{\tau}, a^{\tau} ] = 1$ for all $\epsilon,\tau\in\{-1,+1\}$ if and only if $\langle a,[a^b,a]\rangle$ and $\langle a^b, [a^b,a]\rangle$ are nilpotent of class at most $2$. \end{enumerate} \end{lem} \begin{proof} We use a general trick for Engel commutators, namely $$[b^{-1},_n a]=1 \Leftrightarrow [a^{-1},_{n-1} x] = 1 \;\; \text{where} \; x = a^b.$$ Applying this trick twice gives $$[b^{-1},_4 a]=1 \Leftrightarrow [a^{-1},_3 x] = 1 \Leftrightarrow [x^{-1},_2 x^a] = 1.$$ This proves (a). To prove (b) note that we also have $$[b^{-1},_4 a^{-1}]=1 \Leftrightarrow [a,_3 x^{-1}] = 1 \Leftrightarrow [x,_2 x^{-a^{-1}}]=1 \Leftrightarrow [x^{a},_2 x^{-1}]=1.$$ To show (c), it follows from (b) that $\langle x,x^a\rangle=\langle a^b,(a^b)^a\rangle=\langle a^b,[a^b,a]\rangle$ is nilpotent of class at most 2.
Also we have that $\langle a^{b^{-1}},[a^{b^{-1}},a]\rangle$ is nilpotent of class at most $2$. Now, conjugating the latter subgroup by $b$, it follows that $\langle a,[a^b,a]\rangle$ is nilpotent of class at most $2$. This completes the proof. \end{proof} As immediate corollaries we then have the following. \begin{cor}\label{co1} \begin{enumerate} \item[(a)] $b^{\epsilon}\in R_4(G)$ for both $\epsilon=1$ and $\epsilon=-1$ if and only if $\langle a, [a^b, a]\rangle$ and $\langle a^b, [a^b, a]\rangle$ are nilpotent of class at most $2$ for all $a\in G$. \item[(b)] $a^{\epsilon}\in L_4(G)$ for both $\epsilon=1$ and $\epsilon=-1$ if and only if $\langle a, [a^b, a]\rangle$ and $\langle a^b, [a^b, a]\rangle$ are nilpotent of class at most $2$ for all $b\in G$. \end{enumerate} \end{cor} \begin{cor}\label{co2} Let $G$ be a group and $a,b\in G$ such that $[b^{\epsilon}, a^{\tau}, a^{\tau}, a^{\tau}, a^{\tau} ] = 1$ for all $\epsilon,\tau\in\{-1,+1\}$. Then \begin{enumerate} \item $\langle [a^b,a]\rangle^{\langle a\rangle}$ and $\langle [a^b,a]\rangle^{\langle a^b\rangle}$ are both abelian. \item $[a^b,a]^{a^2}=[a^b,a]^{2a-1}$ \item $[a^b,a]^{a^{2b}}=[a^b,a]^{2a^b-1}$ \item $[a^b,a]^{a^{-1}}=[a^b,a]^{-a+2}$ \item $[a^b,a]^{a^{-b}}=[a^b,a]^{-a^b+2}$ \item $[a^b,a]^{a^ba}=[a^b,a]^{-1+aa^b+1}$ \item $[a^b,a]^{aa^ba}=[a^b,a]^{-1+2aa^b-a^b+1}$ \item $[a^b,a]^{aa^{2b}}=[a^b,a]^{a^b+2aa^b-a-a^b}$ \end{enumerate} \end{cor} We use the following result due to Sims \cite{sim}. \begin{thm}\label{sim} Let $F$ be the free group of rank $2$. Then the $5$-th term $\gamma_5(F)$ of the lower central series of $F$ is equal to $N_5$, the normal closure of the basic commutators of weight $5$. \end{thm} \begin{rem}\label{rem}{\rm Suppose that $a,b$ are arbitrary elements of a group and let $H=\langle a,a^b\rangle$. Then a set of basic commutators of weight 5 on $\{a,a^b\}$ is $\{ x_1=[a^b,a,a,a,a]$, $x_2=[a^b,a,a,a,a^b]$, $x_3=[a^b,a,a,a^b,a^b]$, $x_4=[a^b,a,a^b,a^b,a^b]$, $x_5=[[a^b,a,a],[a^b,a]]$, $x_6=[[a^b,a,a^b],[a^b,a]]\}$. Hence, by Theorem $\ref{sim}$, we have $\gamma_5(H)=\langle x_1,\dots,x_6 \rangle^H$. From now on, we fix and use the notation $x_1,\dots,x_6$ for the mentioned commutators.} \end{rem} In the following calculation, one must be careful with notation. As usual $u^{g_1+g_2}$ is shorthand notation for $u^{g_1}u^{g_2}$. This means that $u^{(g_1+g_2)(h_1+h_2)}=u^{(g_1+g_2)h_1+{(g_1+g_2)h_2}}$ which does not have to be equal to $u^{g_1(h_1+h_2)+g_2(h_1+h_2)}$. We also have that $$u^{(g_1+g_2)(-h)}=((u^{g_1}u^{g_2})^{-1})^h=u^{-g_2h-g_1h}.$$ This does not have to be the same as $u^{-g_1h-g_2h}$. \begin{lem}\label{lm2} Let $G$ be a group and $a,b\in G$ such that $[b^{\epsilon}, a^{\tau}, a^{\tau}, a^{\tau}, a^{\tau} ] = 1$ for all $\epsilon,\tau\in\{-1,+1\}$. Then $[a^b,a,a,a^b,a^b]=[x,x^{aa^b}]^{x^{aa^b}}$, where $x=[a^b,a]$, and $\gamma_5(\langle a,a^b\rangle)=\langle [a^b,a,a,a^b,a^b]\rangle^{\langle a,a^b \rangle}$. \end{lem} \begin{proof} By Lemma \ref{car}, we have $x_1=x_2=x_4=x_5=x_6=1$. Now Remark \ref{rem} completes the proof of the second part.
To prove the first part, by Corollary \ref{co2} we may write \begin{eqnarray} [a^b,a,a,a^b,a^b]&=&[x^{(-1+a)},a^b,a^b]\nonumber\\ &=&x^{(-1+a)(-1+a^b)(-1+a^b)}\nonumber\\ &=&x^{-aa^b+a^b-1+a-aa^b+a^b-a^{2b}+aa^{2b}}\nonumber\\ &=&x^{-aa^b+a^b-1+a-aa^b+1+2aa^b-a-a^b}\nonumber\\ &=&x^{-aa^b-1-aa^b+1+2aa^b}\nonumber\\ &=&x^{-aa^b}[x,x^{aa^b}]x^{aa^b}.\nonumber \end{eqnarray} \end{proof} \noindent{\bf Proof of Theorem \ref{th3}.} By Corollary \ref{co2}, $\langle [a^b,a]\rangle^{\langle a\rangle}$ is an abelian group generated by $[a^b,a]$ and $[a^b,a]^a$. As $[a^b,a^{-1}]=[a^b,a]^{-a^{-1}}$, the subgroup $\langle a, [a^b,a^{-1}]\rangle$ is nilpotent of class at most $2$ for all $a\in G$. Thus by replacing $a$ by $a^{-1}$ in the latter group, we have that $$\langle a,[a^{-b},a]\rangle=\langle a,[a^b,a]^{-a^{-b}}\rangle=\langle a,[a^b,a]^{-a^b+2}\rangle$$ is nilpotent of class at most 2 for every element $a\in G$. Now if $x=[a^b,a]$, it is enough to show that $[x^{a^b},x^a]=1$. Since $\langle a,x^{-a^b+2}\rangle$ is nilpotent of class at most 2, we have \begin{eqnarray} 1&=&[x^{-a^b+2},a,a]\nonumber\\ &=&x^{(-a^b+2)(-1+a)(-1+a)}\nonumber\\ &=&x^{(-2+a^b-a^ba+2a)(-1+a)}\nonumber\\ &=&x^{(-3+a^b-aa^b+1+2a)(-1+a)}\nonumber\\ &=&x^{-2a-1+aa^b-a^b+3-3a+a^ba-aa^ba+a+2a^2}\nonumber\\ &=&x^{aa^b-a^b+2-3a+a^b-aa^b+3a-2}\nonumber\\ &=&x^{-aa^b-2+aa^b+2}x^{-a^b-3a+a^b+3a}\nonumber\\ &=&[x^{aa^b},x^2][x^{a^b},x^{3a}].\nonumber \end{eqnarray} Therefore $[x^2,x^{aa^b}]=[x^{a^b},x^{3a}]\hspace*{1.05cm}~~(1)$. Also \begin{eqnarray} 1&=&[x^{-a^b+2},a,x^{-a^b+2}]\nonumber\\ &=&[x^{(-a^b+2)(-1+a)},x^{-a^b+2}]\nonumber\\ &=&[x^{(-2+a^b-a^ba+2a)},x^{-a^b+2}]\nonumber\\ &=&[x^{(-3+a^b-aa^b+1+2a)},x^{-a^b+2}]\nonumber\\ &=&[x^{-aa^b+2a},x^{-a^b+2}]\nonumber\\ &=&[x^{-aa^b},x^{-a^b+2}]^{x^{2a}}[x^{2a},x^{-a^b+2}]\nonumber\\ &=&[x^{-aa^b},x^2][x^{2a},x^{-a^b}]\nonumber\\ &=&[x^2,x^{aa^b}][x^{a^b},x^{2a}].\nonumber \end{eqnarray} Thus $[x^2,x^{aa^b}]=[x^{a^b},x^{2a}]^{-1}\hspace*{1.05cm}~~~(2)$. Furthermore \begin{eqnarray} 1&=&[x^{a^b-2},a,a]\nonumber\\ &=&x^{(a^b-2)(-1+a)(-1+a)}\nonumber\\ &=&x^{(1-a^b+aa^b+1-2a)(-1+a)}\nonumber\\ &=&x^{2a-1-aa^b+a^b-1+a-a^ba+aa^ba+a-2a^2}\nonumber\\ &=&x^{2a-1-aa^b+a^b-1+a-1-aa^b+1-1+2aa^b-a^b+1+a-4a+2}\nonumber\\ &=&x^{-aa^b+a^b-2+a+aa^b-a^b+2-a}\nonumber\\ &=&x^{-aa^b-2+aa^b+2}x^{-a^b-a+a^b+a}\nonumber\\ &=&[x^{aa^b},x^2][x^{a^b},x^a].\nonumber \end{eqnarray} Therefore $[x^2,x^{aa^b}]=[x^{a^b},x^a]\hspace*{1.05cm}~~(3)$. Now by (3), $[x^{a^b},x^a]^{x^a}=[x^{a^b},x^a]$ and by (1) and (3) we have $[x^{a^b},x^a]^3=[x^{a^b},x^{3a}]=[x^{a^b},x^a]$. Hence $[x^{a^b},x^a]^2=1~~(*)$. Also by (2) and (3) we have $[x^{a^b},x^a]^{-2}=[x^{a^b},x^{2a}]^{-1}=[x^{a^b},x^a]$. Thus $[x^{a^b},x^a]^3=1~~(**)$. Now it follows from $(*)$ and $(**)$ that $[x^{a^b},x^a]=1$. This completes the proof. $ \Box$ \section{\bf Left 4-Engel elements} In this section we prove Theorems \ref{th4} and \ref{th5}. The argument of Lemma \ref{lm5} is very much modeled on an argument given in \cite{traus}. \begin{lem}\label{lm5} Let $G$ be any group. If $a^{\pm 1}\in L_4(G)$, then $[x^{a^b},x^a]=[x,x^{aa^b}]$, where $x=[a^b,a]$, for all $b\in G$. \end{lem} \begin{proof} From Corollary \ref{co1}-(b) we have $\langle a,[a^b,a]\rangle \in \mathcal{N}_2$ for all $b\in G$. Thus $\langle a,[a^{bx},a]\rangle \in \mathcal{N}_2$. Therefore $[a^{bx},a^{-1},a,a]=1$.
We have \begin{eqnarray} [a^{bx},a]&=&[x^{-1}a^bx,a]\nonumber\\ &=&[x^{-1}a^b,a]^x[x,a]\nonumber\\ &=&[x^{-1},a]^{a^bx}[a^b,a]x^{-1}x^a\nonumber\\ &=&(xx^{-a})^{a^bx}x^a\nonumber\\ &=&x^{-1-aa^b+a^b+1+a}.\nonumber \end{eqnarray} Let $y=[a^{bx},a^{-1}]=[a^{bx},a]^{-a^{-1}}$. Then \begin{eqnarray} y&=&x^{-1-a^{-1}-a^ba^{-1}+aa^ba^{-1}+a^{-1}}\nonumber\\ &=&x^{-1-a^{-1}+(a^{-1}-a^{-1}a^b-a^{-1})+(a^{-1}+a^b-a^{-1})+a^{-1}}\nonumber\\ &=&x^{-1+aa^b-a^b}.\nonumber \end{eqnarray} We then have $y^{-a}=[a^{bx},a]=x^{-1-aa^b+a^b+1+a}$ and \begin{eqnarray} y^{a^2}&=&x^{-a^2-a-a^ba+aa^ba+a}\nonumber\\ &=&x^{-3a+aa^b-a^b+1+a}.\nonumber \end{eqnarray} Therefore \begin{eqnarray} 1&=&y^{-a}yy^{-a}y^{a^2}\nonumber\\ &=&x^{-1-aa^b+a^b+1+a}x^{-1+aa^b-a^b}x^{-1-aa^b+a^b+1+a}x^{-3a+aa^b-a^b+1+a}\nonumber\\ &=&x^{-1+a^b+a-a^b}x^{-1-aa^b+a^b+1-2a+aa^b-a^b+1+a}\nonumber\\ &=&x^{-1+a^b-1-aa^b+1-a+aa^b-a^b+1+a}\nonumber\\ &=&x^{-1+a^b}x^{-1-aa^b+1+aa^b}x^{-a-a^b+a+a^b}x^{-a^b+1}\nonumber\\ &=&x^{-1+a^b}[x,x^{aa^b}][x^a,x^{a^b}]x^{-a^b+1}.\nonumber \end{eqnarray} Conjugation with $x^{-1+a^b}$ gives $$1=[x,x^{aa^b}][x^a,x^{a^b}].$$ \end{proof} \noindent{\bf Proof of Theorem \ref{th4}}. By Lemma \ref{lm2} we have to prove that $x_3=1$. Let $u=[x^{a^b},x^a]=[x,x^{aa^b}]$. By Lemma \ref{lm2} and Lemma \ref{lm5} we have $x_3=u^{x^{aa^b}}=u$. Thus $$\gamma_5(\langle a,a^b\rangle)=\langle u\rangle^{\langle a,a^b\rangle}.$$ Since \begin{eqnarray} u^a&=&[x^a,x^{aa^ba}]\nonumber\\ &=&[x^a,x^{-1+2aa^b-a^b+1}]\nonumber\\ &=&[x^a,x^{-a^b}]\nonumber\\ &=&u\nonumber \end{eqnarray} and \begin{eqnarray} u^{a^b}&=&[x^{a^{2b}},x^{aa^b}]\nonumber\\ &=&[x^{2a^b-1},x^{aa^b}]\nonumber\\ &=&[x^{-1},x^{aa^b}]\nonumber\\ &=&u^{-1}\nonumber \end{eqnarray} we have $\gamma_5(\langle a,a^b\rangle)=\langle u\rangle$ and \begin{eqnarray} \gamma_6(\langle a,a^b\rangle)&=&[\langle a,a^b\rangle, \gamma_5(\langle a,a^b\rangle)]\nonumber\\ &=&[\langle a,a^b\rangle,\langle u\rangle]\nonumber\\ &=&\langle u^2\rangle. \nonumber \end{eqnarray} Also $1=[u,a^b,a^b]=[u^{-2},a^b]$ and we have $\gamma_7(\langle a,a^b\rangle)=1$. On the other hand \begin{eqnarray} [[a^b,a,a,a^b],[a^b,a]]&=&[x^{-a+1-a^b+aa^b},x]\nonumber\\ &=&[x^{aa^b},x]\nonumber\\ &=&u^{-1}\in \gamma_6(\langle a,a^b\rangle).\nonumber \end{eqnarray} Thus $u=1$ and this completes the proof. $ \Box$ \begin{cor}\label{co3} Let $G$ be a group and $a^{\pm 1}\in L_4(G)$. Then $\langle a,a^b\rangle'$ is abelian, for all $b \in G$. \end{cor} \begin{cor}\label{co4} Let $G$ be an arbitrary group and $a^{\pm 1}\in L_4(G)$. Then every power of $a$ is also a left $4$-Engel element. \end{cor} \begin{proof} By Corollary \ref{co2}, Theorem \ref{th4} and Corollary \ref{co3}, $$\langle a^b,[a^b,a],[a^b,a]^a\rangle=\langle a^b,[a^b,a],[a^b,a,a]\rangle\in \mathcal{N}_2$$ for all $b\in G$. It follows that $\langle a^{ib},[a^{ib},a^i]\rangle \leq \langle a^b,[a^b,a],[a^b,a]^a\rangle$ and $\langle a^{ib},[a^{ib},a^i]\rangle\in\mathcal{N}_2$ for all $b\in G$ and $i\in \mathbb{Z} $. Now Corollary \ref{co1} implies that $a^i\in L_4(G)$ for all $i\in \mathbb{Z} $. \end{proof} \begin{lem} Let $G$ be a group and $a^{\pm 1}\in L_4(G)$.
Then for all $b$ in $G$ and $r,m,n\in \mathbb{N}$ we have \begin{enumerate} \item $[a^b,a^r]^{a^{2m}+1}=[a^b,a^r]^{2a^m}$ \item $[a^{nb},a^r]^{a^{2mb}+1}=[a^{nb},a^r]^{2a^{mb}}$ \item If $a^s=1$ then $[a^b,a,a^s]=[a^b,a,a]^s$ \item $[a^b,a]^{a^n}=[a^b,a]^{na-(n-1)}$ \item $[a^b,a]^{a^{nb}}=[a^b,a]^{na^b-(n-1)}$ \end{enumerate} \end{lem} \begin{proof} By Corollary \ref{co2}, $[a^b,a]^{\langle a\rangle}$ and $[a^b,a]^{\langle a^b\rangle}$ are both abelian and $$\langle a^{mb},[a^{nb},a^r]\rangle\leq \langle a^b,[a^b,a],[a^b,a]^a\rangle$$ for all $b$ in $G$. Therefore both $\langle a^m,[a^b,a^r]\rangle$ and $ \langle a^{mb},[a^{nb},a^r]\rangle$ are nilpotent of class at most 2, for all $b\in G$ and $r,m,n\in \mathbb{N}$. Thus \begin{eqnarray} 1=[a^b,a^r,a^m,a^m]&=&[a^b,a^r]^{(-1+a^m)^2}\nonumber\\ &=&[a^b,a^r]^{1-2a^m+a^{2m}}.\nonumber \end{eqnarray} This proves part (1). Part (2) is similar to part (1) and the other parts are straightforward by Corollary \ref{co2} and induction. \end{proof} \begin{lem}\label{lm7} Let $G$ be a group and $a^{\pm 1}\in L_4(G)$. If $o(a)=p^i$, where either $p=2$ and $i\geq 3$, or $p$ is an odd prime and $i\geq 2$, then $a^{p^{i-1}}\in L_2(G)$. \end{lem} \begin{proof} First let $p=2$ and $m=p^{i-3}$. Then $a^{8m}=1$ and so we have \begin{eqnarray} 1=[a^b,a^{8m}]&=&[a^b,a^{4m}]^{1+a^{4m}}\nonumber\\ &=&[a^b,a^{4m}]^2.~~\hspace*{1.05cm}~~~ \text{by Lemma}~ \ref{lm6}\nonumber \end{eqnarray} Now we have $$[b,a^{p^{i-1}},a^{p^{i-1}}]=[b,a^{4m},a^{4m}]=[a^{4mb},a^{4m}]^{-a^{4m(-b+1)}}.$$ But \begin{eqnarray} [a^{4mb},a^{4m}]&=&[a^{2mb},a^{4m}]^{a^{2mb}+1}\nonumber\\ &=&[a^{2mb},a^{4m}]^{2a^{mb}} \hspace*{1.05cm}~~~\text{by Lemma}\; \ref{lm6}\nonumber\\ &=&[a^b,a^{4m}]^{2(a^{(2m-1)b}+\cdots+a^b+1)a^{mb}}\nonumber\\ &=&1.\nonumber \end{eqnarray} This completes the proof of the lemma in this case. Now let $p$ be an odd prime number and $i\geq 2$. Then we have \begin{eqnarray} 1&=&[a^b,a^{p^i}]\nonumber\\ &=&[a^b,a]^{1+a+\cdots+a^{p^i-1}}\nonumber\\ &=&[a^b,a]^{1+a+2a-1+\cdots+(p^i-1)a-(p^i-2)} \hspace*{1.05cm} \text{by Lemma}~ \ref{lm6}\nonumber\\ &=&[a^b,a]^{\frac{p^i(p^i-1)}{2}a-\frac{p^i(p^i-1)}{2}+p^i}\nonumber\\ &=&[a^b,a]^{(p^ia-p^i)\frac{p^i-1}{2}}[a^b,a]^{p^i}\nonumber \end{eqnarray} Now since $$1=[a^b,a,a^{p^i}]=[a^b,a,a]^{p^i}=[a^b,a]^{p^i a-p^i} \hspace*{.5cm} \text{by Lemma \ref{lm6} part (3)}$$ we have $[a^b,a^{p^i}]=[a^b,a]^{p^i}=1$. Let $m=p^{i-1}$. Then $$[b,a^m,a^m]=[a^{mb},a^m]^{-a^{-mb+m}}$$ and by Corollary \ref{co3} and Lemma \ref{lm6} so we have \begin{eqnarray} [a^{mb},a^m]&=&[a^b,a^m]^{(1+a^b+\cdots+a^{(m-1)b})}\nonumber\\ &=&[a^b,a^{m}]^{(1+a^b+2a^b-1+\cdots+(m-1)a^b-(m-2))}\nonumber\\ &=&[a^b,a]^{(1+a+2a-1+\cdots+(m-1)a-(m-2))(1+a^b+2a^b-1+\cdots+(m-1)a^b-(m-2))} ~\nonumber\\ &=&[a^b,a]^{(\frac{m(m-1)}{2}a-\frac{m(m-1)}{2}+m)(\frac{m(m-1)}{2}a^b-\frac{m(m-1)}{2}+m)}\nonumber\\ &=&[a^b,a]^{m^2(\frac{m-1}{2}a-\frac{m-3}{2})(\frac{m-1}{2}a^b-\frac{m-3}{2})}.\nonumber \end{eqnarray} Now since $p^i\mid m^2$ and $[a^b,a]^{p^i}=1$ we have $$[b,a^{p^{i-1}},a^{p^{i-1}}]=[b,a^m,a^m]=1.$$ This completes the proof of the lemma. \end{proof} \noindent{\bf Proof of Theorem \ref{th5}.} Let $p$ be a prime number and $o(a)=p^i$. If $p=2$ and $i\leq 2$ then the assertion is obvious. Therefore let $i\geq 3$ if $p=2$; and $i\geq 2$ if $p$ is an odd prime number.
By Lemma \ref{lm7} and Corollary \ref{co1} $$1\trianglelefteq \langle a^{p^{i-1}}\rangle^G \trianglelefteq \langle a^{p^{i-2}}\rangle^G\trianglelefteq \cdots\trianglelefteq \langle a^p\rangle^G$$ is a series of normal subgroups of $G$ with abelian factors. This implies that $K=\langle a^p\rangle^G$ is soluble of derived length at most $i-1$. By Corollary \ref{co4}, $a^p$ and so all its conjugates in $G$ belong to $L_4(G)$ and in particular they are in $L_4(K)$. Now a result of Gruenberg \cite[Theorem 7.35]{robin} implies that $B(K)=K$. Therefore $a^p\in K\leq B(G)$, as required. $ \Box$ \section{\bf Examples and questions} In this section we give some examples, obtained by using the {\sf nq} package of Werner Nickel for {\sf GAP}, to illustrate what we mentioned in the last two paragraphs of Section 1.\\ Let $H$ be the largest nilpotent group generated by $a,b$ such that $a\in R_4(H)$; then $H$ is nilpotent of class 8. On the other hand if $K$ is the largest nilpotent group generated by $a$ and $b$ such that $a^{\pm 1}\in R_4(K)$, then $K$ is nilpotent of class 7. Thus in an arbitrary group $G$, $a\in R_4(G)$ does not imply $a^{-1}\in R_4(G)$. One can check the above argument with the following GAP program: \begin{verbatim}
LoadPackage("nq"); #nq package of Werner Nickel#
F:=FreeGroup(3);; a:=F.1;; b:=F.2;; x:=F.3;;
G:=F/[LeftNormedComm([a,x,x,x,x])];;
H:=NilpotentQuotient(G,[x]);;
NilpotencyClassOfGroup(H);
G:=F/[LeftNormedComm([a,x,x,x,x]),LeftNormedComm([a^-1,x,x,x,x])];;
K:=NilpotentQuotient(G,[x]);;
NilpotencyClassOfGroup(K);
\end{verbatim} Similarly, let $N$ be the largest nilpotent group generated by $a,b$ such that $a,b\in L_4(N)$ and $b^2=1$. Also let $S=\langle a,a^b\rangle$. Then $N$ is nilpotent of class 10 and $S$ is nilpotent of class 6, but the largest nilpotent group $M$ generated by $a,b$ such that $a^{\pm 1},b\in L_4(M)$ and $b^2=1$ is nilpotent of class 7. Therefore in an arbitrary group $G$, $a\in L_4(G)$ does not imply $a^{-1}\in L_4(G)$, and the hypothesis $a^{-1}\in L_4(G)$ is a necessary condition in Theorems \ref{th4} and \ref{th5}. The following GAP program confirms the above argument: \begin{verbatim}
F:=FreeGroup(3);; a:=F.1;; b:=F.2;; x:=F.3;;
G:=F/[LeftNormedComm([x,a,a,a,a]),LeftNormedComm([x,b,b,b,b]),b^2];;
N:=NilpotentQuotient(G,[x]);;
NilpotencyClassOfGroup(N);
S:=Subgroup(N,[N.1,N.2^-1*N.1*N.2]);;
NilpotencyClassOfGroup(S);
G:=F/[LeftNormedComm([x,a,a,a,a]),LeftNormedComm([x,b,b,b,b]),
LeftNormedComm([x,a^-1,a^-1,a^-1,a^-1]), b^2];;
M:=NilpotentQuotient(G,[x]);;
NilpotencyClassOfGroup(M);
\end{verbatim} We end this section by proposing some questions on bounded right Engel elements in certain classes of groups. \begin{qu}\label{ql1} Let $n$ be a positive integer. Is there a set of prime numbers $\pi_n$ depending only on $n$ and a function $f:\mathbb{N}\rightarrow \mathbb{N}$ such that the nilpotency class of $\langle x \rangle ^G$ is at most $f(n)$ for any $\pi_n'$-element $x\in R_n(G)$ and any nilpotent or finite group $G$? \end{qu} \begin{qu}\label{ql2} Let $n$ be a positive integer. Is there a set of prime numbers $\pi_n$ depending only on $n$ such that the set of right $n$-Engel elements in any nilpotent or finite $\pi'_n$-group forms a subgroup? \end{qu} Note that if the answers of Questions \ref{ql1} and \ref{ql2} are positive, the answers of the corresponding questions for residually (finite or nilpotent $\pi'_n$-)groups are also positive. \begin{qu} Let $n$ and $d$ be positive integers.
Is there a function $g:\mathbb{N}\times \mathbb{N}\rightarrow \mathbb{N}$ such that any nilpotent group generated by $d$ right $n$-Engel elements is nilpotent of class at most $g(n,d)$? \end{qu} \noindent{\bf Acknowledgements.} The authors are grateful to the referee for his very helpful comments. They also thank Yoav Segev for pointing out a flaw in our proofs. The research of the first author was supported in part by the Center of Excellence for Mathematics, University of Isfahan. \end{document}
\begin{document} \begin{large} \title{SRB measures for $C^\infty$ surface diffeomorphisms} \author{David Burguet} \address{CNRS, Sorbonne Universite, LPSM, 75005 Paris, France} \email{[email protected]} \subjclass[2010]{Primary 37C40, 37D25} \date{October 2021} \begin{abstract}A $C^\infty$ surface diffeomorphism admits a SRB measure if and only if the set $\{ x, \ \limsup_n\frac{1}{n}\log \|d_xf^n\|>0\}$ has positive Lebesgue measure. Moreover the basins of the ergodic SRB measures cover this set Lebesgue almost everywhere. We also obtain similar results for $C^r$ surface diffeomorphisms with $+\infty>r>1$. \end{abstract} \keywords{} \maketitle \tableofcontents \section{Introduction}One fundamental problem in dynamics consists in understanding the statistical behaviour of the system. Given a topological system $(X,f)$ we are more precisely interested in the asymptotic distribution of the empirical measures $\left(\frac{1}{n}\sum_{k=0}^{n-1}\delta_{f^kx}\right)_n$ for typical points $x$ with respect to a reference measure. In the setting of differentiable dynamical systems the natural reference measure to consider is the Lebesgue measure on the manifold. The basin of an $f$-invariant measure $\mu$ is the set $\mathcal B(\mu)$ of points whose empirical measures are converging to $\mu$ in the weak-$*$ topology. By Birkhoff's ergodic theorem the basin of an ergodic measure $\mu$ has full $\mu$-measure. An invariant measure is said to be physical when its basin has positive Lebesgue measure. We may wonder when such measures exist and then study their basins. In the works of Y. Sinai, D. Ruelle and R. Bowen \cite{sin,bow,rue} these questions have been successfully solved for uniformly hyperbolic systems. A SRB measure of a $C^{1+}$ system is an invariant probability measure with at least one positive Lyapunov exponent almost everywhere, which has absolutely continuous conditional measures on unstable manifolds \cite{young}. Physical measures may be neither SRB measures nor sinks (as in the famous figure-eight attractor); however, hyperbolic ergodic SRB measures are physical measures. For uniformly hyperbolic systems, there is a finite number of such measures and their basins cover a full Lebesgue subset of the manifold. Beyond the uniformly hyperbolic case such a picture is also known for large classes of partially hyperbolic systems \cite{BV,ABV,ADLP}. Corresponding results have been established for unimodal maps with negative Schwarzian derivative \cite{kel}. SRB measures have also been deeply investigated for parameter families such as the quadratic family and H\'enon maps \cite{jak, BC,BY,VB}. In his celebrated ICM talk, M. Viana conjectured that a surface diffeomorphism admits a SRB measure whenever the set of points with positive Lyapunov exponent has positive Lebesgue measure. In recent works some weaker versions of the conjecture (with some additional assumptions of recurrence and Lyapunov regularity) have been proved \cite{cli,BO}. Finally we mention that in the present context of $C^\infty$ surface diffeomorphisms J. Buzzi, S. Crovisier and O. Sarig have also recently shown the existence of a SRB measure when the set of points with a positive Lyapunov exponent has positive Lebesgue measure \cite{BCS3} (Corollary \ref{coco}). \\ In this paper we define a general entropic approach to build SRB measures, which we apply to prove Viana's conjecture for $C^\infty$ surface diffeomorphisms.
We strongly believe the same approach may be used to recover the existence of SRB measures for weakly mostly expanding partially hyperbolic systems \cite{ADLP} and to give another proof of Ben Ovadia's criterion for $C^{1+}$ diffeomorphisms in any dimension \cite{BO}. \\ We now state the main results of our paper. Let $(M,\|\cdot \|)$ be a compact Riemannian surface and let $\mathop{\mathrm{Leb}}$ be a volume form on $M$, called Lebesgue measure. We consider a $C^\infty$ surface diffeomorphism $f:M\circlearrowleft$. The maximal Lyapunov exponent at $x\in M$ is given by $\chi(x)=\limsup_n\frac{1}{n}\log \|d_xf^n\|.$ When $\mu$ is an $f$-invariant probability measure, we let $\chi(\mu)=\int \chi(x)\, d\mu(x).$ For two Borel subsets $A$ and $B$ of $M$ we write $A\stackrel{o}{\subset} B$ (resp. $A\stackrel{o}{=} B$) when we have $\mathop{\mathrm{Leb}}(A\setminus B)=0$ (resp. $\mathop{\mathrm{Leb}}(A\Delta B)=0$). \begin{theorem} Let $f:M\circlearrowleft$ be a $C^\infty$ surface diffeomorphism. There are countably many ergodic SRB measures $(\mu_i)_{i\in I}$, such that we have with $\Lambda=\{\chi(\mu_i), \ i\in I\}\subset \mathbb R_{>0}$: \begin{itemize} \item $\{\chi>0\}\stackrel{o}{=} \{\chi\in \Lambda\}$, \item $\{\chi=\lambda\}\stackrel{o}{\subset}\bigcup_{i, \chi(\mu_i)=\lambda}\mathcal {B}(\mu_i) $ for all $\lambda\in \Lambda$. \end{itemize} \end{theorem} \begin{coro} Let $f:M\circlearrowleft$ be a $C^\infty$ surface diffeomorphism. Then $$\{\chi>0\}\stackrel{o}{\subset}\bigcup_{\mu \text{ SRB ergodic}}\mathcal {B}(\mu).$$ \end{coro} \begin{coro}[Buzzi-Crovisier-Sarig \ \cite{BCS3}]\label{coco} Let $f:M\circlearrowleft$ be a $C^\infty$ surface diffeomorphism. If $\mathop{\mathrm{Leb}}(\chi>0)>0$, then there exists a SRB measure. \end{coro} In fact we establish a stronger $C^r$ version, $1<r< +\infty$, which straightforwardly implies Theorem 1: \begin{theorem*} \label{Cr} Let $f:M\circlearrowleft$ be a $C^r$, $r>1$, surface diffeomorphism. Let $R(f):=\lim_n\frac{1}{n}\log^+ \sup_{x\in M}\|d_xf^n\|$. There are countably many ergodic SRB measures $(\mu_i)_{i\in I}$ with $\Lambda:=\{\chi(\mu_i), \ i\in I\}\subset ]\frac{R(f)}{r},+\infty[$, such that we have: \begin{itemize} \item $\left\{\chi>\frac{R(f)}{r}\right\}\stackrel{o}{=} \{\chi\in \Lambda\}$, \item $\{\chi=\lambda\}\stackrel{o}{\subset}\bigcup_{i, \chi(\mu_i)=\lambda}\mathcal {B}(\mu_i) $ for all $\lambda\in \Lambda$. \end{itemize} \end{theorem*} When $f$ is a $C^{1+}$ topologically transitive surface diffeomorphism, there is at most one SRB measure, i.e. $\sharp I\leq 1$ \cite{RRHH}. If moreover the system is topologically mixing, then the SRB measure, when it exists, is Bernoulli \cite{BCS}. By the spectral decomposition of $C^r$ surface diffeomorphisms for $1<r\leq +\infty$ \cite{BCS} there are at most finitely many ergodic SRB measures with entropy and thus maximal exponent larger than a given constant $b>\frac{R(f)}{r}$. Therefore, in the Main Theorem, the set $\Lambda=\{\chi(\mu_i), \ i\in I\}$ is either finite or a sequence decreasing to $\frac{R(f)}{r}$. When $r$ is finite, there may also exist ergodic SRB measures $\mu$ with $\chi(\mu)\leq \frac{R(f)}{r}$.
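\\ For the reader's convenience, let us sketch why the Main Theorem implies Theorem 1; the sketch only uses the fact that a $C^\infty$ diffeomorphism is $C^r$ for every finite $r$ and that $R(f)\leq \log^+\sup_{x\in M}\|d_xf\|<+\infty$ (the notations $I_r$ and $\Lambda_r$ below are introduced only for this sketch). Writing $(\mu_i)_{i\in I_r}$ and $\Lambda_r\subset ]\frac{R(f)}{r},+\infty[$ for the family of ergodic SRB measures and the set of exponents given by the Main Theorem applied to $f$ seen as a $C^r$ diffeomorphism, we have $$\{\chi>0\}=\bigcup_{r\geq 2}\left\{\chi>\frac{R(f)}{r}\right\}\stackrel{o}{=}\bigcup_{r\geq 2}\{\chi\in \Lambda_r\}=\{\chi\in \Lambda\} \quad \text{with } \Lambda:=\bigcup_{r\geq 2}\Lambda_r,$$ so that the countable family $(\mu_i)_{i\in \bigcup_{r\geq 2} I_r}$ satisfies the two items of Theorem 1, the second one being obtained for any $\lambda\in\Lambda$ by applying the Main Theorem with any $r$ such that $\lambda>\frac{R(f)}{r}$.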
\\ We prove in a forthcoming paper \cite{burex} that the above statement is sharp by building for any finite $r>1$ a $C^r$ surface diffeomorphism $(f,M)$ with a hyperbolic periodic saddle point $p$ such that $\chi(x)=\frac{R(f)}{r}>0$ for all $x\in U$ for some set $U\subset \mathcal B(\mu_p)$ with $\mathop{\mathrm{Leb}}(U)>0$, where $\mu_p$ denotes the periodic measure associated to $p$ (see \cite{bur} for such an example for interval maps). \\ In higher dimensions we let $\Sigma^k\chi(x):=\limsup_n\frac{1}{n}\log\|\Lambda^k d_xf^n\|$ where $\Lambda^k df$ denotes the action induced by $f$ on the $k^{th}$ exterior power of $TM$ for $k=1,\cdots, d$ with $d$ being the dimension of $M$. By convention we also let $\Sigma^0\chi=0$. For any $C^1$ diffeomorphism $(M,f)$ we have $\mathop{\mathrm{Leb}}(\Sigma^d\chi>0)=0$ (see \cite{ara}). The product of a figure-eight attractor with a surface Anosov diffeomorphism does not admit any SRB measure whereas $\chi$ is positive on a set of positive Lebesgue measure. However we conjecture: \begin{conj} Let $f:M\circlearrowleft$ be a $C^\infty$ diffeomorphism on a compact manifold (of any dimension). If $\mathop{\mathrm{Leb}}\left(\Sigma^k\chi>\Sigma^{k-1}\chi\geq 0\right)>0$, then there exists an ergodic measure with at least $k$ positive Lyapunov exponents, such that its entropy is larger than or equal to the sum of its $k$ smallest positive Lyapunov exponents. \end{conj} In the present two-dimensional case the semi-algebraic tools used to bound the distortion and the local volume growth of $C^\infty$ curves are elementary. It is a challenging problem to adapt this technology to higher dimensions. \\ When the empirical measures from $x\in M$ are not converging, the point $x$ is said to have historic behaviour \cite{ruee}. A set $U$ is contracting when the diameter of $f^nU$ goes to zero as $n\in \mathbb{N}$ goes to infinity. In a contracting set the empirical measures of all points have the same limit set; however they may not converge. P. Berger and S. Biebler have shown that, $C^\infty$-densely inside the Newhouse domains \cite{BB}, there are contracting domains with historic behaviour. In intermediate smoothness, such domains have been previously built in \cite{KS}. As a consequence of the Main Theorem, Lebesgue almost every point $x$ with historic behaviour satisfies $\chi(x)\leq 0$ for $C^\infty$ surface diffeomorphisms. We also show the following statement. \begin{theorem}\label{yoyo} Let $f$ be a $C^\infty$ diffeomorphism on a compact manifold (of any dimension). Then Lebesgue a.e. point $x$ in a contracting set satisfies $\chi(x)\leq 0$. \end{theorem} \begin{ques} Let $f$ be a $C^\infty$ surface diffeomorphism. Assume the set $H$ of points with historic behaviour has positive Lebesgue measure. Does every Lebesgue density point of $H$ belong to an almost contracting\footnote{See Section \ref{gui} for the definition of almost contracting set.} set with positive Lebesgue measure? \\ \end{ques} We now explain in a few lines the main ideas to build a SRB measure under the assumptions of the Main Theorem. The geometric approach for uniformly hyperbolic systems consists in considering a weak limit of $\left(\frac{1}{n}\sum_{k=0}^{n-1}f_*^k\mathop{\mathrm{Leb}}_{D_u}\right)_n$, where $D_u$ is a local unstable disc and $\mathop{\mathrm{Leb}}_{D_u}$ denotes the normalized Lebesgue measure on $D_u$ induced by its inherited Riemannian structure as a submanifold of $M$.
Here we take a smooth $C^r$ embedded curve $D$ such that $$\chi(x,v_x):=\limsup_n\frac{1}{n}\log \|d_xf^n(v_x)\|>b>\frac{R(f)}{r}$$ for $(x,v_x)$ in the unit tangent space $T^1D$ of $D$ with $x$ in a subset $B$ of $D$ with positive $\mathop{\mathrm{Leb}}_D$-measure. For $x$ in $B$ we define a subset $E(x)$ of positive integers, called the \textit{geometric set}, such that the following properties hold for any $n\in E(x)$: \begin{itemize} \item the geometry of $f^nD$ around $f^nx$ is \textit{bounded}, meaning that for some uniform $\epsilon>0$, the connected component $D_n^\epsilon(x)$ of the intersection of $f^nD$ with the ball of radius $\epsilon$ centered at $f^nx$ is a curve with bounded $s$-derivatives for $s\leq r$, \item the distortion of $df^{-n}$ on the tangent space of $ D_n^\epsilon(x)$ is controlled, \item for some $\tau>0$ we have $\frac{\|d_xf^{l}(v_x)\|}{\|d_xf^{k}(v_x)\|}\geq e^{(l-k)\tau}$ for any $l>k\in E(x)$. \end{itemize} We show that $E(x)$ has positive upper asymptotic density for $x$ in a subset $A$ of $B$ with positive $\mathop{\mathrm{Leb}}_D$-measure. Let $F:\mathbb PTM\circlearrowleft$ be the map induced by $f$ on the projective tangent bundle $\mathbb PTM$. We build a SRB measure by considering a weak limit $\mu$ of a sequence of the form $\left(\frac{1}{\sharp F_n}\sum_{k\in F_n}F_*^k\mu_n\right)_n$ such that: \begin{itemize} \item $(F_n)_n$ is a F\"olner sequence, so that the weak limit $\mu$ will be invariant by $F$, \item for all $n$, the measure $\mu_n$ is the probability measure induced by $\mathop{\mathrm{Leb}}_D$ on $A_n\subset A$, the $\mathop{\mathrm{Leb}}_D$-measure of $A_n$ not being exponentially small, \item the sets $(F_n)_n$ are in some sense \textit{filled with} the geometric set $E(x)$ for $x\in A_n$. Then the measure $\mu$ on $\mathbb PTM$ will be supported on the unstable Oseledec's bundle. \end{itemize} Finally we check with some F\"olner Gibbs property that the limit empirical measure $\mu$ projects to a SRB measure on $M$ by using the Ledrappier-Young entropic characterization.\\ The paper is organized as follows. In Section 2 we recall for general sequences of integers the notion of asymptotic density and we build for any sequence $E$ with positive upper density a F\"olner set $F$ filled with $E$. Then we use a Borel-Cantelli argument to define our sets $(A_n)_n$ and the F\"olner sequence $(F_n)_n$. In Section 3, we study the maximal Lyapunov exponent and the entropy of the generalized empirical measure $\mu$ assuming some Gibbs property. We introduce the geometric set in Section 4 by using the Reparametrization Lemma of \cite{bure}. We then build SRB measures in Section 5 by using the abstract formalism of Sections 2 and 3. Then we prove the covering property of the basins in Section 6 by the standard argument of absolute continuity of the Pesin stable foliation. The last section is devoted to the proof of Theorem \ref{yoyo}. \\ \textit{Comment :} In a first version of this work, by following \cite{bure} (incorrectly), the author claimed that, at $b$-hyperbolic times $n$ of the sequence $\left(\|d_xf^k(v_x)\|\right)_k$ for some $b>0$, the geometry of $f^nD$ at $f^nx$ was bounded. J. Buzzi, S. Crovisier and O. Sarig then gave in \cite{BCS3} another proof of Corollary \ref{coco} by using their analysis of the entropic continuity of Lyapunov exponents from \cite{BCS2}.
But as we realized recently, our claim on the geometry at hyperbolic times is wrong in general and we manage to show it only when $\chi(x)>\frac{R(f)}{2}$. In this last version, we correct our proof by showing directly that the set of times with bounded geometry has positive upper asymptotic density on a set of positive $\mathop{\mathrm{Leb}}_D$-measure. Our proof is still based on the Reparametrization Lemma proved in \cite{bure}. \section{Some asymptotic properties of integers}\label{drei} \subsection{Asymptotic density} We first introduce some notations. In the following we let $\mathcal P_\mathbb{N}$ and $\mathcal P_n$ be respectively the power sets of $\mathbb N$ and $\{1,2,\cdots,n\}$, $n\in \mathbb{N}$. The \textbf{boundary} $\partial E$ of $E\in \mathcal P_\mathbb{N}$ is the subset of $\mathbb{N}$ consisting of the integers $n\in E$ with $n-1\notin E$ or $n+1\notin E$. We also let $E^{-}:=\{n\in E, \ n+1\in E\}$. For $a,b\in \mathbb{N}$ we write $\llbracket a,b \rrbracket$ (resp. $\llbracket a,b \llbracket$, $\rrbracket a,b\rrbracket$) for the interval of integers $k$ with $a\leq k\leq b$ (resp. $a\leq k< b$, $a<k\leq b$). The \textbf{connected components} of $E$ are the maximal intervals of integers contained in $E$. An interval of integers $\llbracket a, b\llbracket$ is said to be $E$-\textbf{irreducible} when we have $a, b\in E $ and $\llbracket a,b \llbracket\cap E=\{a\}$. For $E\in \mathcal P_\mathbb{N}$ we let $E_{(n)}:=E\cap \llbracket 1,n \rrbracket\in \mathcal P_n$ for all $n\in \mathbb{N}$. For $M\in \mathbb{N}$, we denote by $E_M$ the union of the intervals $\llbracket a,b \rrbracket$ with $a,b\in E$ and $|a-b|\leq M$. We let $\mathfrak N$ be the set of increasing sequences of natural integers, which may be identified with the subset of $\mathcal P_\mathbb{N}$ given by the infinite subsets of $\mathbb{N}$. For $\mathfrak n\in \mathfrak N$ we define the \textbf{generalized power set of $\mathfrak n$} as $\mathcal Q_\mathfrak n:=\prod_{n\in \mathfrak n} \mathcal P_n.$\\ We now recall the classical notion of upper and lower asymptotic densities. For $n\in \mathbb{N}^*$ and $F_n\in \mathcal P_n$ we let $d_n(F_n)$ be the frequency of $F_n$ in $\llbracket 1,n \rrbracket$: $$d_n(F_n)=\frac{\sharp F_n}{n}.$$ The \textbf{upper and lower asymptotic densities} $\overline{d}(E)$ and $\underline{d}(E)$ of $E\in \mathcal P_\mathbb{N}$ are respectively defined by $$\overline{d}(E):=\limsup_{n\in \mathbb N}d_n(E_{(n)} ) \text{ and} $$ $$\underline{d}(E):=\liminf_{n\in \mathbb N}d_n(E_{(n)} ). $$ We just write $d(E)$ for the limit, when the frequencies $d_n(E_{(n)})$ are converging. For any $\mathfrak n\in \mathfrak N$ we similarly let $\overline{d}^\mathfrak n(E):=\limsup_{n\in \mathfrak n}d_n(E_{(n)} )$ and $\underline{d}^\mathfrak n(E):=\liminf_{n\in \mathfrak n}d_n(E_{(n)} )$. The concept of upper and lower asymptotic densities of $E\in \mathcal P_\mathbb{N}$ may be extended to generalized power sets as follows. For $\mathfrak n\in \mathfrak N$ and $\mathcal F=(F_n)_{n\in \mathfrak n}\in\mathcal Q_\mathfrak n$ we let $$\overline{d}^{\mathfrak n}(\mathcal F):=\limsup_{n\in \mathfrak n}d_n(F_n) \text{ and }$$ $$\underline{d}^{\mathfrak n}(\mathcal F):=\liminf_{n\in \mathfrak n}d_n(F_n). $$ Again we just write $d^{\mathfrak n}(E) $ and $d^{\mathfrak n}(\mathcal F)$ when the corresponding frequencies are converging.
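\\ To illustrate these definitions, let us record an elementary computation (included only for convenience) for the set $E:=\bigcup_n \llbracket 2^{2n},2^{2n+1}\rrbracket=\bigcup_n \llbracket 4^n,2\cdot 4^n\rrbracket$, which is used again as an example in the next subsection. Since $\sharp E_{(2\cdot 4^n)}=\sum_{k=0}^{n}(4^k+1)=\frac{4^{n+1}-1}{3}+n+1$, we get $$d_{2\cdot 4^n}\left(E_{(2\cdot 4^n)}\right)\xrightarrow{n\rightarrow +\infty}\frac{2}{3} \quad \text{ and } \quad d_{4^{n+1}}\left(E_{(4^{n+1})}\right)\xrightarrow{n\rightarrow +\infty}\frac{1}{3}.$$ As the frequencies $d_n(E_{(n)})$ are nondecreasing in $n$ on the intervals $\llbracket 4^n,2\cdot 4^n\rrbracket$ and nonincreasing on the gaps between them, it follows that $\overline{d}(E)=2/3$ and $\underline{d}(E)=1/3$. Moreover, for any fixed $M$, the set $E_M\setminus E$ is finite, because the gaps $\rrbracket 2\cdot 4^n, 4^{n+1}\llbracket$ have length $2\cdot 4^n-1>M$ for $n$ large enough, so that $\overline{d}(E_M)=\overline{d}(E)$.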
\mathop{\mathrm{ess sup}}ubsection{F\"olner sequence and density along subsequences} We say that $E\in \mathcal P_\mathbb{N}$ is \textbf{F\"olner along} $\mathfrak n\in \mathfrak N$ when its boundary $\partial E$ has zero upper asymptotic density with respect to $\mathfrak n$, i.e. $\overline{d}^\mathfrak n(\partial E)=0$. More generally $\mathcal F=(F_n)_{n\in \mathfrak n}\in \mathcal Q_\mathfrak n $ with $\mathfrak n\in \mathfrak N$ is F\"olner when we have $\overline{d}^\mathfrak n(\partial \mathcal F)=0$ with $\partial \mathcal F=(\partial F_n )_{n\in \mathfrak n}$. In general this property seems to be weaker than the usual F\"olner property $\limsup_{n\in \mathfrak n}\frac{\mathop{\mathrm{ess sup}}harp \partial F_n}{\mathop{\mathrm{ess sup}}harp F_n}=0$. But in the following we will work with sequences $\mathcal F$ with $\underline{d}^{\mathfrak n}(\mathcal F)>0$. In this case our definition coincides with the standard one. \\ Let $E,F\in \mathcal P_\mathbb{N}$ and $\mathfrak n\in \mathfrak N$. We say that $F$ is \textbf{ $\mathfrak n$-filled with $E$} or $E$ is \textbf{dense in $F$ along $\mathfrak n$} when we have $$\overline{d}^{\mathfrak n}(F\mathop{\mathrm{ess sup}}etminus E_M)\xrightarrow{M\rightarrow +\infty}0.$$ Observe that $\left(\overline{d}(E_M)\right)_M$ is converging nondecreasingly to some $a\geq \overline{d}(E)$ when $M$ goes to infinity. The limit $a$ is in general strictly less than $1$. For example if $E:=\bigcup_n \llbracket 2^{2n},2^{2n+1}\rrbracket$ one easily computes $\overline{d}(E_M)=\overline{d}(E)=2/3$ for all $M$. In this case, the set $E$ is moreover a F\"olner set.\\ Also $\mathcal F=(F_n)_{n\in \mathfrak n}\in \mathcal Q_\mathfrak n $ is said filled with $E$ when we have with $\mathcal F\mathop{\mathrm{ess sup}}etminus E_M:=(F_n\mathop{\mathrm{ess sup}}etminus E_M)_{n\in \mathfrak n}$: $$\overline{d}^{\mathfrak n}(\mathcal F\mathop{\mathrm{ess sup}}etminus E_M)\xrightarrow{M\rightarrow +\infty}0.$$ \mathop{\mathrm{ess sup}}ubsection{F\"olner set $F$ filled with a given $E$ with $\overline{d}(E)>0$} Given a set $E$ with positive upper asymptotic density we build a F\"olner set $F$ filled with $E$ by using a diagonal argument. More precisely we will build $F$ by filling the holes in $E$ of larger and larger size when going to infinity. \begin{lem} For any $E$ with $\overline{d}(E)>0$ there is a subsequence $\mathfrak n\in \mathfrak N$ and $F\in \mathcal P_\mathbb{N}$ with $\partial F\mathop{\mathrm{ess sup}}ubset E$ such that \begin{itemize} \item $d^\mathfrak{n}(F)\geq d^{\mathfrak n}(E\cap F )= \overline{d}(E)$; \item $F $ is F\"olner along $\mathfrak n$; \item $E$ is dense in $F$ along $\mathfrak n$. \end{itemize} \end{lem} \begin{proof} We first consider a subsequence $\mathfrak n^0=(\mathfrak n^0_k)_k$ satisfying $d^{\mathfrak n^0}(E)=\overline{d} (E)$. We can ensure that $\mathfrak n^0_k$ belong to $E$ for all $k$. Observe that $\overline{d}(E\mathop{\mathrm{ess sup}}etminus E_M)\leq 1/M$ for all $M\in \mathbb{N}^*$. Therefore for $M>2/\overline{d}(E)$, we have $\overline{d}^{\mathfrak n^0}(E_{M})\geq \overline{d}^{\mathfrak n^0}(E_{M}\cap E)>\overline{d}(E)/2>0$. We fix such an integer $M$ and we extract again a subsequence $\mathfrak{n}^{M}=(\mathfrak{n}^{M}_k)_k$ of $\mathfrak n^0$ such that $d^{\mathfrak{n}^{M}}(E_{M})$ is a limit equal to $\Delta_{M}:=\overline{d}^{\mathfrak n^0}(E_{M})$. 
Then we put $\Delta_{M+1}=\overline{d}^{\mathfrak{n}^{M}}(E_{M+1})$ and we consider a subsequence $ \mathfrak n^{M+1}$ of $\mathfrak n^{M}$ such that $d^{\mathfrak{n}^{M+1}}(E_{M+1}) $ is a limit equal to $\Delta_{M+1}\geq \Delta_{M}$ and $d_l(E_{M+1})\leq \Delta_{M+1}+1/2^{M+1}$ for all $l\in \mathfrak{n}^{M+1}$. We define by induction in this way nested sequences $ \mathfrak n^k$ for $k> M$ such that $d^{\mathfrak{n}^{k}}(E_{k})$ is a limit equal to $\Delta_{k}=\overline{d}^{\mathfrak{n}^{k-1}}(E_{k})$ and $d_l(E_{k})\leq \Delta_{k}+1/2^{k}$ for all $l\in \mathfrak n^{k}$. We let $\Delta_\infty>\overline{d}(E)/2>0$ be the limit of the nondecreasing sequence $(\Delta_k)_k$. We consider the diagonal sequence $\mathfrak n=(\mathfrak n_k)_{k\geq M}= (\mathfrak n^k_k)_{k\geq M}$ and we let $$F=\bigcup_{k>M}\llbracket \mathfrak n_{k-1},\mathfrak n_k\rrbracket \cap E_k.$$ Clearly we have $\partial F\subset \mathfrak n^0\cup E\subset E$. On the one hand, $F\cap \llbracket 1,\mathfrak n_k\rrbracket$ is contained in $E_k\cap \llbracket 1,\mathfrak n_k\rrbracket$ so that \begin{align*}d_{\mathfrak n_k}(F)&\leq d_{\mathfrak n^k_k}(E_k),\\& \leq \Delta_k+1/2^k,\\ \overline{d}^{\mathfrak n}(F) & \leq \lim_k\Delta_k=\Delta_{\infty}. \end{align*} On the other hand, $F\cap \llbracket 1,\mathfrak n_k\rrbracket $ contains $E_l\cap \llbracket \mathfrak n_{l-1},\mathfrak n_k\rrbracket$ for all $M<l<k$. Therefore \begin{align*}d_{\mathfrak n^k_k}(E_l)-\frac{\mathfrak n_{l-1}}{\mathfrak n^k_k}&\leq d_{\mathfrak n_k}(F),\\ \Delta_l&\leq \underline{d}^{\mathfrak n}(F),\\ \Delta_{\infty}&\leq \underline{d}^{\mathfrak n}(F). \end{align*} We conclude $d^{\mathfrak n}(F)=\Delta_\infty$. Similarly we have for all $l>M$ : \begin{align*} \underline{d}^{\mathfrak n}(E\cap F)&\geq \underline{d}^{\mathfrak n}(E\cap E_l),\\ &\geq d^{\mathfrak n}(E)-1/l,\\ &\geq \overline{d}(E)-1/l, \end{align*} therefore $ \underline{d}^{\mathfrak n}(E\cap F)\geq \overline{d}(E)$. Also $\overline{d}^{\mathfrak n}(E\cap F)\leq d^{\mathfrak n}(E)= \overline{d}(E)$. Consequently we get $d^{\mathfrak n}(E\cap F)=\overline{d}(E)$. We now check that $E$ is dense in $F$. For $l$ fixed and for all $k\geq l$ we have \begin{align*} d_{\mathfrak n_k}(F\setminus E_l)&\leq d_{\mathfrak n^k_k}(E_k\setminus E_l),\\ & \leq d_{\mathfrak n_k^k}(E_k)-d_{\mathfrak n_k^k}(E_l),\\ &\leq \Delta_k+1/2^k -d_{\mathfrak n_k^k}(E_l). \end{align*} By taking the limit in $k$, we get $\overline{d}^{\mathfrak n}(F\setminus E_l)\leq \Delta_\infty-\Delta_l\xrightarrow{l}0$. Let us finally prove the F\"olner property of the set $F$. For $\mathfrak n_k<K\in \partial F$ either $[K-k,K[$ or $]K,K+k]$ lies in the complement of $E$. Therefore $\overline{d}^{\mathfrak n}(\partial F)\leq 2/k$. As this holds for all $k$, the set $F$ is F\"olner along $\mathfrak n$. \end{proof} \subsection{Borel-Cantelli argument} Let $(X, \mathcal A, \lambda)$ be a measure space with $\lambda$ being a finite measure. A map $E:X\rightarrow \mathcal P_\mathbb{N}$ is said to be measurable when for all $n\in \mathbb{N}$ the set $\{x, \ n\in E(x)\}$ belongs to $\mathcal A$ (equivalently, writing $E$ as an increasing sequence $(n_i)_{i\in \mathbb{N}}$, the integer-valued functions $n_i$ are measurable).
For such measurable maps $E$ and $\mathfrak n$, the upper asymptotic density $\overline{d}^\mathfrak n(E)$ defines a measurable function. \begin{lem}\label{measurable}Assume $E$ is a measurable sequence of integers such that $\overline{d}(E(x))>\beta>0$ for $x$ in a measurable set $A$ of positive $\lambda$-measure. Then there exist $\mathfrak n\in \mathfrak N$, measurable subsets $(A_n)_{n\in \mathfrak n}$ of $X$ and $\mathcal F=(F_n)_{n\in \mathfrak n}\in \mathcal Q_{\mathfrak n}$ with $\partial F_n\subset E(x)$ for all $x\in A_n$, $n\in \mathfrak n$ such that : \begin{itemize} \item $\underline{d}^\mathfrak n(\mathcal F)\geq \beta$; \item $\lambda(A_n)\geq \frac{e^{-n\delta_n}}{n^2}$ for all $n\in \mathfrak n$ with $\delta_n\xrightarrow{\mathfrak n \ni n\rightarrow +\infty}0$; \item $\mathcal F$ is a F\"olner sequence; \item $E$ is dense in $\mathcal F$ uniformly on $A_n$, i.e. $$\limsup_{n\in \mathfrak n}\sup_{x\in A_n}d_n\left( F_n\setminus E_M(x)\right) \xrightarrow{M}0.$$ \item \begin{align*} \liminf_{n\in \mathfrak n} \inf_{x\in A_n}d_n\left(E(x)\cap F_n\right)& \geq \beta. \end{align*} \end{itemize} \end{lem} \begin{proof} The sequences $\mathfrak n$ and $F$ built in the previous lemma define measurable sequences on $A$. By taking a smaller subset $A$ we may assume \begin{itemize} \item $\mathfrak n_k(x)$ is bounded on $A$ for all $k$, \item $d_{\mathfrak n_k(x)}(\partial F(x))\xrightarrow{k}0$ uniformly in $x\in A$, \item $\limsup_k \sup_{x\in A}d_{\mathfrak n_k(x)}(F(x)\setminus E_M(x))\xrightarrow{M}0$, \item $d_{\mathfrak n_k(x)}(E(x)\cap F(x))\xrightarrow{k}d^{\mathfrak n(x)}(E(x)\cap F(x))\geq \beta$ uniformly in $x\in A$. \end{itemize} By the Borel-Cantelli Lemma, the subset $A_n:=\{x\in A, \ n\in \mathfrak n(x)\}$ has $\lambda$-measure larger than $1/n^2$ for infinitely many $n\in \mathbb N$. We let $\mathfrak n$ be this infinite subset of integers. By the (uniform in $x$) F\"olner property of $F(x)$, the cardinality of the boundary of $\left(F(x)\right)_{(n)}=F(x)\cap \llbracket 1, n\rrbracket$ for $x\in A_n$ and $n\in \mathfrak n$ is less than $n\alpha_n$ for some sequence $(\alpha_n)_{n\in \mathfrak n}$ (independent of $x$) going to $0$. Therefore there are at most $2\sum_{k=1}^{[n\alpha_n]}{n \choose k}$ choices for $(F(x))_{(n)}$ and thus it may be fixed by dividing the measure of $A_n$ by $2\sum_{k=1}^{[n\alpha_n]}{n \choose k}=e^{n\delta_n}$ for some $\delta_n\xrightarrow{n}0$. \end{proof} \section{Empirical measures associated to F\"olner sequences}\label{zwei} Let $(X,T)$ be a topological system, i.e. $X$ is a compact metrizable space and $T:X\circlearrowleft$ is continuous. We denote by $\mathcal M(X)$ the set of Borel probability measures on $X$ endowed with the weak-$*$ topology and by $\mathcal M(X,T)$ the compact subset of invariant measures. We will write $\delta_x$ for the Dirac measure at $x\in X$. We let $T_*$ be the induced (continuous) action on $\mathcal M(X)$. For $\mu\in \mathcal M (X)$ and a finite subset $F$ of $\mathbb{N}$, we let $\mu^F$ be the empirical measure $\mu^F:=\frac{1}{\sharp F }\sum_{k\in F}T_*^k\mu$. \subsection{Invariant measures} The following lemma is standard, but we give a proof for the sake of completeness.
We fix $\mathfrak n\in \mathfrak N$ and $\mathcal F=(F_n)_{n\in \mathfrak n}\in \mathcal Q_{\mathfrak n}$. \begin{lem}\label{fol} Assume $\mathcal F$ is a F\"olner sequence and $\underline{d}^{\mathfrak n}(\mathcal F)>0$. Let $(\mu_n)_{n\in \mathfrak n}$ be a family in $\mathcal M(X)$ indexed by $\mathfrak n$. Then any limit of $\left(\mu_n^{F_n}\right)_{n\in \mathfrak n}$ is a $T$-invariant Borel probability measure. \end{lem} \begin{proof} Let $\mathfrak n'$ be a subsequence of $\mathfrak n$ such that $\left(\mu_n^{F_n}\right)_{n\in \mathfrak n'}$ is converging to some $\mu'$. It is enough to check that $\left|\int \phi\, d\mu_n^{F_n}-\int \phi\circ T\, d\mu_n^{ F_n}\right|$ goes to zero when $\mathfrak n'\ni n\rightarrow +\infty$ for any continuous $\phi:X\rightarrow \mathbb R$. This follows from \begin{align*} \int \phi\, d\mu_n^{F_n}-\int \phi\circ T\, d\mu_n^{ F_n}&=\frac{1}{\sharp F_n} \int \left(\sum_{\stackrel{k+1\in F_n}{k\notin F_n}}\phi\circ T^{k}- \sum_{\stackrel{k+1\notin F_n}{k\in F_n}} \phi\circ T^k\right) \, d\mu_n,\\ \left|\int \phi\, d\mu_n^{F_n}-\int \phi\circ T\, d\mu_n^{F_n}\right| & \leq \sup_{x\in X}|\phi(x)| \frac{\sharp\partial F_n}{\sharp F_n},\\ \limsup_{n\in \mathfrak n} \left|\int \phi\, d\mu_n^{F_n}-\int \phi\circ T\, d\mu_n^{F_n}\right| &\leq \sup_{x\in X}|\phi(x)| \limsup_{n\in \mathfrak n}\frac{\sharp\partial F_n}{\sharp F_n}, \\ &\leq \sup_{x\in X}|\phi(x)| \frac{\overline{d}^{\mathfrak n}(\partial \mathcal F) }{ \underline{d}^{\mathfrak n}(\mathcal F)}= 0. \end{align*} \end{proof} \subsection{Subadditive cocycles} We fix a general continuous subadditive process $\Phi=(\phi_n)_{n\in \mathbb N}$ with respect to $(X,T)$, i.e. $\phi_0=0$, $\phi_n:X\rightarrow \mathbb R$ is a continuous function for all $n$ and $\phi_{n+m}\leq \phi_n+\phi_m\circ T^n$ for all $m,n$. In the proof of the main theorem we will only consider additive cocycles, but we think it could be interesting to consider general subadditive cocycles in other contexts. Observe that $\Phi^+=(\phi_n^+)_n$, with $\phi_n^+=\max(\phi_n,0)$ for all $n$, is also subadditive. For any $\mu \in \mathcal M(X,T)$, we let $\phi^+(\mu)=\lim_n\frac{1}{n}\int \phi_n^+\, d\mu=\inf_n\frac{1}{n}\int \phi_n^+\, d\mu$ (the existence of the limit follows from the subadditivity property). Recall also that by the subadditive ergodic theorem \cite{King}, the limit $\phi_*(x)=\lim_n\frac{\phi_n(x)}{n}$ exists for $x$ in a set of full measure with respect to any invariant measure $\mu$. When $\phi_*(x)\geq 0$ for $\mu$-almost every point $x$ with $\mu\in \mathcal M(X,T)$, we have $\phi(\mu):=\int \phi_*(x)\, d\mu(x)=\phi^+(\mu)$. If $\mu$ is moreover ergodic, then $\phi_*(x)=\phi^+(\mu)$ for $\mu$-almost every $x$. Let $E:Y\rightarrow\mathcal P_\mathbb{N}$ be a measurable sequence of integers defined on a Borel subset $Y$ of $X$. For a set $F_n\in \mathcal P_n$ with $\partial F_n\subset E(x)$ for some $x\in X$, we may write $F_n^-$ uniquely as the finite union of $E(x)$-irreducible intervals $F_n^-=\bigcup_{\mathsf k\in \mathsf K} \llbracket a_\mathsf k, b_\mathsf k\llbracket$. Let $n_\mathsf k=b_\mathsf k-a_\mathsf k$ for any $\mathsf k\in \mathsf K$.
Then we define $$\forall x\in X, \ \phi_E^{F_n}(x):=\sum_{\mathsf k\in \mathsf K}\phi_{n_\mathsf k}(T^{a_\mathsf k}x).$$ When $\Phi$ is additive, i.e. $\phi_n=\sum_{0\leq k<n}\phi\circ T^k$ for some continuous function $\phi:X\rightarrow \mathbb R$, we always have $\phi_E^{F_n}(x)= \phi^{F_n}(x):=\sum_{k\in F_n^-}\phi(T^kx)$. The set-valued map $E$ is said to be \textbf{$a$-large} with respect to $\Phi$ for some $a\geq 0$ when we have $\phi_{l-k}(T^kx)\geq (l-k)a$ for all consecutive integers $l> k$ in $E(x)$. \begin{lem}\label{cocycle} Let $\Phi$, $F_n$ and $E$ be as above. Assume $E$ is $0$-large. Then for all $x\in X $ and for all integers $n\geq N\geq M$ we have: $$\frac{\phi_E^{F_n}(x)}{\sharp F_n}\geq \int \frac{\phi_N^+}{N} \, d\delta_x^{F_n}-\frac{d_n\left (F_n\setminus E_M(x)\right)+Nd_n(\partial F_n)+4M/N}{d_n(F_n)}\sup_y|\phi_1(y)|.$$ \end{lem} \begin{proof} Let $k\in \{0,\cdots, N-1\}$ and $l\in \mathbb{N}$. The interval of integers $J_{k,l}=\llbracket k+lN,k+(l+1)N\llbracket$ may be written as $$J_{k,l}=I_1\sqcup I_2\sqcup I_3\sqcup I_4$$ where $I_1$ is the union of disjoint $E$-irreducible intervals of length less than $M$ contained in $J_{k,l}$, $I_2 \subset \mathbb{N}\setminus F_n^-$, $I_3\subset F_n^-\setminus E_M(x)$ and $I_4$ is the union of at most two subintervals of $E$-irreducible intervals of length less than $M$ containing an extremal point of $J_{k,l}$. Therefore for a fixed $k$, by summing over all $l$ with $k+lN\in F_n$ we get, as $E$ is $0$-large and $\Phi$ is subadditive: \begin{align*} &\sum_{l, \ k+lN\in F_n }\phi_N^+(T^{k+lN}x) \\ & \leq \sum_{\mathsf k\in \mathsf K, \ \llbracket a_\mathsf k, b_\mathsf k\llbracket \cap (k+N\mathbb{N})=\emptyset }\phi_{n_\mathsf k}(T^{a_\mathsf k}x)+\sup_y |\phi_1(y)| \left(N\sharp \partial F_n+\sharp \left(F_n\setminus E_M(x)\right)+2M([n/N]+1)\right) \\ & \leq \phi_E^{F_n}(x) +\sup_y |\phi_1(y)| \left(N\sharp \partial F_n+\sharp \left( F_n\setminus E_M(x)\right)+2M([n/N]+1)\right). \end{align*} Then by summing over all $k\in \{0,\cdots, N-1\}$ and dividing by $N$, we conclude that $$\sharp F_n\int \frac{\phi_N^+}{N} \, d\delta^{F_n}_x\leq \phi_E^{F_n}(x) +\sup_y |\phi_1(y)| \left(N\sharp \partial F_n+\sharp \left(F_n\setminus E_M(x)\right)+2M([n/N]+1)\right).$$ \end{proof} \subsection{Positive exponent of empirical measures for additive cocycles} We consider here an additive cocycle $\Phi$ associated to a continuous function $\phi:X\rightarrow \mathbb R$. With the notations of Lemma \ref{measurable} and Lemma \ref{fol}, we have : \begin{lem}\label{large} Let $(\mu_n)_{n\in \mathfrak n}$ be such that $\mu_n(A_n)=1$ for all $n\in \mathfrak n$. Assume $E$ is $a$-large with $a>0$. Then for any weak-$*$ limit $\mu$ of $\mu_n^{F_n}$ we have $$\phi_*(x)\geq a\text{ for } \mu \text{ a.e. }x.
$$ \end{lem} \begin{proof} We claim that for any $0<\alpha<1$ and any $\epsilon>0$, there is arbitrarily large $N_0$ such that \begin{eqnarray}\label{zut}\limsup_n\mu_n^{F_n}(\phi_{N_0}/N_0\geq \alpha a)\geq 1-\epsilon. \end{eqnarray} By weak-$*$ convergence of $\mu_n$ to $\mu$, it will imply, the set $\{\phi_{N_0}/N_0\geq \alpha a\}$ being closed : $$\mu(\phi_{N_0}/N_0\geq \alpha a)\geq 1-\epsilon.$$ Then we may consider a sequence $(N_k)_k$ going to infinity such that $$\mu(\phi_{N_k}/N_k\geq \alpha a)\geq 1-\epsilon/2^k.$$ Therefore $\mu\left(\bigcap_k\{\phi_{N_k}/N_k\geq \alpha a\}\right)\geq 1-2\epsilon$. We conclude $\limsup_n\frac{\phi_n(x)}{n}\geq \alpha a$ for $\mu$ a.e. $x$ by letting $\epsilon$ go to zero. Let us show now our first claim (\ref{zut}). It is enough to show the inequality for $\mu_n=\delta_x$ uniformly in $x\in A_n$. We use the same notations as in the proof of Lemma \ref{cocycle}. Fix $x\in A_n$. For $k,l$ with $k+lN\in F_n$, the interval $J_{k,l} $ is said admissible, when $\phi_{N}(f^{k+lN}x)/N\geq \alpha a$. If $J_{k,l}$ is not admissible we have \begin{align*} \phi_{N}(f^{k+lN}x)&\geq \mathop{\mathrm{ess sup}}um_{i\in I_1}\phi(f^ix)- \mathop{\mathrm{ess sup}}up_{y}|\phi(y)| \mathop{\mathrm{ess sup}}harp (I_2\cup I_3\cup I_4), \\ \alpha a N &\geq\ a\mathop{\mathrm{ess sup}}harp I_1- \mathop{\mathrm{ess sup}}up_{y}|\phi(y)| \mathop{\mathrm{ess sup}}harp (I_2\cup I_3\cup I_4),\\ &\geq aN- (a+\mathop{\mathrm{ess sup}}up_{y}|\phi(y)|) \mathop{\mathrm{ess sup}}harp (I_2\cup I_3\cup I_4), \\ \mathop{\mathrm{ess sup}}harp (I_2\cup I_3\cup I_4) & \geq \frac{(1-\alpha)aN}{a+\mathop{\mathrm{ess sup}}up_{y}|\phi(y)|}. \end{align*} If we sum over all $l$ with $k+lN\in F_n$ and then over $k\in \{0,\cdots, N-1\}$, we get by arguing as in the proof of Lemma \ref{cocycle} : \begin{align*} N \Big( N\mathop{\mathrm{ess sup}}harp \partial F_n+\mathop{\mathrm{ess sup}}harp\left(F_n\mathop{\mathrm{ess sup}}etminus E_M(x)\right)+2M([n/N]+1)\Big)\geq \\\mathop{\mathrm{ess sup}}harp\{J_{k,l}\text{ not admissible}, \ k+lN\in F_n\}\times \frac{(1-\alpha)aN}{a+\mathop{\mathrm{ess sup}}up_{y}|\phi(y)|}. \end{align*} Therefore by Lemma \ref{measurable} (third and fourth items) we have for $\mathfrak n \ni n\gg N \gg M$ uniformly in $x\in A_n$, $$\mathop{\mathrm{ess sup}}harp\{J_{k,l} \text{ not admissible}, \ k+lN\in F_n\}\leq \epsilon \mathop{\mathrm{ess sup}}harp F_n.$$ By definition of admissible intervals we conclude that $$\limsup_n\delta_x^{F_n}(\phi_{N}/N\geq \alpha a)\geq 1-\epsilon.$$ \end{proof} \mathop{\mathrm{ess sup}}ubsection{Entropy of empirical measures} Following Misiurewicz's proof of the variational principle, we estimate the entropy of empirical measures from below. For a finite partition $P$ of $X$ and a finite subset $F$ of $\mathbb{N}$, we let $P^F$ be the iterated partition $P^F=\bigvee_{k\in F}f^{-k}P$. When $F=\llbracket 0,n-1\rrbracket$, $n\in \mathbb{N}$, we just let $P^{F}=P^n$. We denote by $P(x)$ the element of $P$ containing $x\in X$. For a Borel probability measure $\mu$ on $X$, the static entropy $H_\mu(P)$ of $\mu$ with respect to a (finite measurable) partition $P$ is defined as follows: \begin{align*} H_\mu(P)&=-\mathop{\mathrm{ess sup}}um_{A\in P}\mu(A)\log \mu(A),\\ &=-\int \log \mu\left(P(x)\right)\, d\mu(x). 
\end{align*} When $\mu$ is $T$-invariant, we recall that the measure theoretical entropy of $\mu$ with respect to $P$ is then $$h_\mu(P)=\lim_n\frac{1}{n}H_{\mu}(P^n)$$ and the entropy $h(\mu)$ of $\mu$ is $$h(\mu)=\sup_Ph_\mu(P).$$ We will use the two following standard properties of the static entropy \cite{dow}: \begin{itemize} \item for a fixed partition $P$, the map $\mu\mapsto H_\mu(P)$ is concave on $\mathcal M(X)$, \item for two partitions $P$ and $Q$, the joint partition $P\vee Q$ satisfies \begin{align}\label{ziz} H_{\mu}(P\vee Q)&\leq H_\mu(P)+H_\mu(Q). \end{align} \end{itemize} \begin{lem}\label{comput}Let $\mathcal F=(F_n)_{n\in \mathfrak n}$ be a F\"olner sequence with $\underline{d}^{\mathfrak n}(\mathcal F)>0$. For any measurable finite partition $P$ and $m\in \mathbb{N}^*$, there exists a sequence $(\epsilon_n)_{n\in \mathfrak n}$ converging to $0$ such that $$\forall n\in \mathfrak n, \ \frac{1}{m}H_{\mu_n^{F_n}}(P^m)\geq \frac{1}{\sharp F_n}H_{\mu_n}(P^{F_n})-\epsilon_n.$$ \end{lem} \begin{proof} When $F_n$ is an interval of integers, we have \cite{Mis} : \begin{equation}\label{mi}\frac{1}{m}H_{\mu_n^{F_n}}(P^m)\geq \frac{1}{\sharp F_n}H_{\mu_n}(P^{F_n})-\frac{3m\log \sharp P}{\sharp F_n}. \end{equation} Consider a general set $F_n\in \mathcal P_n$. We decompose $F_n$ into connected components $F_n=\bigsqcup_{k=1,\cdots, K } F_n^k$. Observe $K\leq \sharp \partial F_n$. Then we get : \begin{align*} \frac{1}{m}H_{\mu_n^{F_n}}(P^m)&\geq \sum_{k=1}^K\frac{\sharp F_n^k}{m\sharp F_n}H_{\mu_n^{F^k_n}}(P^m), \text{ by concavity of $\mu \mapsto H_{\mu}(P^m)$},\\ &\geq \frac{1}{\sharp F_n}\sum_{k=1}^KH_{\mu_n}(P^{F_n^k})-\frac{3mK\log \sharp P}{\sharp F_n},\text{ by applying (\ref{mi}) to each $F_n^k$},\\ &\geq \frac{1}{\sharp F_n}H_{\mu_n}(P^{F_n})-3m\log \sharp P\frac{\sharp \partial F_n}{\sharp F_n}, \text{ according to (\ref{ziz})}. \end{align*} This concludes the proof with $\epsilon_n=3m\frac{\sharp \partial F_n}{\sharp F_n} \log \sharp P$, because $\mathcal F$ is a F\"olner sequence with $\underline{d}^\mathfrak n(\mathcal F)>0$. \end{proof} With the notations of Lemma \ref{measurable} we let $\mu_n$ be the probability measure induced by $\lambda$ on $A_n$, i.e. $\mu_n=\frac{\lambda(A_n\cap \cdot)}{\lambda(A_n)}$. Let $\Psi=(\psi_n)_n$ be a continuous subadditive process such that $E$ is $0$-large with respect to $\Psi$. We assume that $\lambda$ satisfies the following \textit{F\"olner Gibbs property} with respect to the subadditive cocycle $\Psi$ : \begin{align}\label{hyp} &\text{There exists $\epsilon>0$ such that } \nonumber \\ & \text{we have for any partition $P$ with diameter less than $\epsilon$ :}\tag{$\text{G}$}\\ & \exists N \ \forall x\in A_n \textrm{ with } N<n\in \mathfrak{n}, \ \ \ \frac{1}{\lambda\left(P^{F_n}(x)\cap A_n\right)}\geq e^{\psi_E^{F_n}(x)}.
\nonumber \end{align} \begin{prop}\label{pour} Under the above hypothesis (\ref{hyp}), any weak-$*$ limit $\mu$ of $(\mu_n^{F_n})_{n\in \mathfrak n}$ satisfies $$h(\mu)\geq \psi^+(\mu).$$ \end{prop} \begin{proof} Without loss of generality we may assume $(\mu_n^{F_n})_{n\in \mathfrak n}$ is converging to $\mu$. Take a partition $P$ with $\mu(\partial P)=0$ and with diameter less than $\epsilon$. In particular we have for all fixed $m\in \mathbb{N} $: \begin{equation*}\frac{1}{m}H_{\mu}(P^m)=\lim_n\frac{1}{m}H_{\mu_n^{F_n}}(P^m). \end{equation*} Then we get for $n\gg N\gg M\gg m$ \begin{align*} \frac{1}{m}H_{\mu}(P^m)\geq & \limsup_{n\in\mathfrak n} \ \frac{1}{\sharp F_n}H_{\mu_n}(P^{F_n}), \textrm{ by Lemma \ref{comput}},\\ \geq & \limsup_{n\in\mathfrak n} \frac{1}{\sharp F_n}\int \left(- \log \lambda \left( P^{F_n}(x)\cap A_n \right)+\log \lambda(A_n)\right) \, d\mu_n(x),\\ \geq & \limsup_{n\in\mathfrak n} \int \frac{\psi_E^{F_n}}{\sharp F_n} \, d\mu_n(x), \textrm{ by Hypothesis (G)},\\ \geq & \limsup_{n\in\mathfrak n} \bigg(\int \frac{\psi_N^+}{N}\, d \mu_n^{F_n}\\ & -\frac{\sup_{y}|\psi_1(y)|\left(\sup_{x\in A_n}d_n(F_n\setminus E_M(x))+Nd_n(\partial F_n) +4M/N\right) }{d_n(F_n)}\bigg), \textrm{ by Lemma \ref{cocycle}},\\ \geq & \int \frac{\psi_N^+}{N}\,d \mu-\frac{1}{\underline{d}^{\mathfrak n}(\mathcal F)}\left(\sup_{y}|\psi_1(y)|\left(\limsup_{n\in \mathfrak n}\sup_{x\in A_n}d_n(F_n\setminus E_M(x))+4M/N\right) \right),\\ \geq & \psi^{+}(\mu)-\frac{1}{\underline{d}^{\mathfrak n}(\mathcal F)}\left(\sup_{y}|\psi_1(y)|\left(\limsup_{n\in \mathfrak n}\sup_{x\in A_n}d_n(F_n\setminus E_M(x))+4M/N\right)\right). \end{align*} Letting $N$, then $M$, then $m$ go to infinity, we conclude that \begin{align*} h(\mu)\geq h_{\mu}(P) &\geq \psi^{+}(\mu). \end{align*} \end{proof} In the following we will also consider a general\footnote{The set $E$ is not assumed here to be $0$-large with respect to $\Psi$.} additive cocycle $\Psi=(\psi_n)_n$ associated to a continuous function $\psi:X\rightarrow \mathbb R$. Then for any F\"olner sequence $(F_n)_n$, the \textit{F\"olner Gibbs property} with respect to the additive cocycle $\Psi$ may be simply written as follows: \begin{align}\label{hyp} &\text{There exists $\epsilon>0$ such that } \nonumber \\ & \text{we have for any partition $P$ with diameter less than $\epsilon$ :}\tag{$\text{H}$}\\ & \exists N \ \forall x\in A_n \textrm{ with } N<n\in \mathfrak{n}, \ \ \ \frac{1}{\lambda\left(P^{F_n}(x)\cap A_n\right)}\geq e^{\psi^{F_n}(x)}. \nonumber \end{align} In this additive setting we get : \begin{prop}\label{por} Under the above hypothesis (\ref{hyp}), any weak-$*$ limit $\mu$ of $(\mu_n^{F_n})_{n\in \mathfrak n}$ satisfies $$h(\mu)\geq \psi(\mu).$$ \end{prop} \begin{proof} Let $P$ be as in the proof of Proposition \ref{pour}. Then for $n\gg N\gg m$ we obtain by following this proof : \begin{align*} \frac{1}{m}H_{\mu}(P^m)&\geq \limsup_{n\in\mathfrak n} \int \frac{\psi^{F_n}}{\sharp F_n} \, d\mu_n(x), \textrm{ by Hypothesis (H)},\\ & \geq \limsup_{n\in\mathfrak n} \int \psi \, d \mu_n^{F_n},\\ & \geq \psi(\mu). \end{align*} Letting $m$ go to infinity, we conclude that $h(\mu)\geq \psi(\mu)$.
\end{proof} \section{Geometric times} Let $r\geq 2$ be an integer and let $(M,\|\cdot\|)$ be a $C^{r}$ smooth compact Riemannian manifold, not necessarily a surface for the moment. We denote by $\mathrm d$ the distance induced by the Riemannian structure on $M$. We also consider a distance $\hat{\mathrm{d}}$ on the projective tangent bundle $ \mathbb P TM$, such that $\hat{\mathrm{d}}(\hat x, \hat y)\geq \mathrm{d}(\pi\hat x, \pi \hat y)$ for all $\hat x, \hat y \in \mathbb P TM$ with $\pi:\mathbb P TM\rightarrow M $ being the natural projection. For a $C^r$ map $f:M\rightarrow M$ or a $C^r$ curve $\sigma:[0,1]\rightarrow M$ we may define the norms $\|d^sf\|_\infty$ and $\|d^s\sigma\|_\infty$ for $1\leq s\leq r$ as the supremum norm of the $s$-derivative of the induced maps through the charts of a given atlas or through the exponential map $\exp$. In the following, to simplify the presentation, we carry out the computations as if $M$ were a Euclidean space. For a $C^1$ curve $\sigma:I\rightarrow M$, $I$ being a compact interval of $\mathbb R$, we let $\sigma_*=\sigma(I)$. The length of $\sigma_*$ for the induced Riemannian metric is denoted by $|\sigma_*|$. For a fixed curve $\sigma$ we also let $v_x\in \mathbb PTM$ be the line tangent to $\sigma_*$ at $x$ and we write $\hat x=(x,v_x)$. We denote by $F$ the projective action $F:\mathbb PTM\circlearrowleft$ induced by $f$ and we consider the additive derivative cocycle $\Phi=(\phi_k)_k$ for $F$ on $\mathbb PTM$ given by $\phi(x,v)=\phi_1(x,v)=\log \|d_xf(v)\|$, where we have identified the line $v$ with one of its unit generating vectors. \subsection{Bounded curve} \label{curvee} Following \cite{bure} a $C^r$ smooth curve $\gamma:[-1,1]\rightarrow M$ is said to be \textbf{bounded} when \begin{equation*} \max_{s=2,\cdots, r}\|d^s\gamma\|_\infty\leq \frac{1}{6}\|d\gamma\|_\infty. \end{equation*} We first recall some basic properties of bounded curves (see Lemma 7 in \cite{bure}). A bounded curve has bounded distortion, meaning that \begin{equation}\label{dist}\forall t,s\in [-1,1], \ \frac{\|d\gamma(t)\|}{\|d\gamma(s)\|}\leq 3/2. \end{equation} Indeed, if $t_*\in [-1,1]$ satisfies $\|d\gamma(t_*)\|=\|d\gamma\|_\infty$ then we have for all $s\in [-1,1]$, \begin{align*} \|d\gamma(t_*)-d\gamma(s)\|&\leq 2 \|d^2\gamma\|_{\infty},\\ &\leq \frac{1}{3}\|d\gamma(t_*)\|, \\ \text{ therefore \ }\frac{2}{3}\|d\gamma(t_*)\|&\leq \|d\gamma(s)\| \leq \|d\gamma(t_*)\|. \end{align*} The projective component of $\gamma$ also oscillates slowly. If we identify $M$ with $\mathbb R^2$ \footnote{This will always be possible as we will only consider curves with diameter less than the radius of injectivity.}, we have \begin{align}\label{oscill}\|d\gamma(t_*)\|\cdot \sin \angle \left(d\gamma(t_*), d\gamma(s)\right)&\leq \|d\gamma(t_*)-d\gamma(s)\|\leq \frac{1}{3}\|d\gamma(t_*)\|,\nonumber \\ \angle \left(d\gamma(t_*), d\gamma(s)\right)&\leq \pi/6. \end{align} When moreover $\|d\gamma\|_\infty\leq \epsilon$ we say that $\gamma$ is \textbf{strongly $\epsilon$-bounded}. In particular such a map satisfies $\|\gamma\|_r:=\max_{1\leq s\leq r}\|d^s\gamma\|_\infty\leq \epsilon$, which is the standard $C^r$ upper bound required for the reparametrizations in the usual Yomdin theory.
But this last condition does not in general allow one to control the distortion along the curve.\\ If $\gamma$ is bounded then so is $\gamma_a=\gamma(a\cdot ):[-1,1]\rightarrow M$ for any $a\leq\frac{2}{3}$: \begin{align*} \forall s\geq 2, \ \|d^s\gamma_a\|_\infty&\leq \frac{1}{6} a^s\|d\gamma\|_\infty, \\ &\leq \frac{1}{6} a^s\frac{3}{2}\|d\gamma(0)\|,\\ &\leq \frac{1}{6}a^{s-1}\|d\gamma(0)\|,\\ &\leq \frac{1}{6}\|d\gamma_a\|_\infty. \end{align*} As $\|d\gamma_a\|_\infty\leq a\|d\gamma\|_\infty$, if $\gamma$ is moreover strongly $\epsilon$-bounded, then $\gamma_a$ is strongly $a\epsilon$-bounded. \begin{lem}\label{tech}Let $\gamma:[-1,1]\rightarrow M$ be a $C^r$ bounded curve with $\|d\gamma\|_\infty\geq \epsilon$. Then there is a family of affine maps $\iota_j:[-1,1]\circlearrowleft$, $j\in L:=\underline{L}\cup \overline{L}$ such that: \begin{itemize} \item each $\gamma \circ \iota_j$ is $\epsilon$-bounded and $\|d(\gamma\circ \iota_j)(0)\|\geq \frac{\epsilon}{6}$, \item $[-1,1]$ is the union of $\bigcup_{j\in \underline{L}}\iota_j([-1,1])$ and $\bigcup_{j\in \overline{L}}\iota_j([-\frac{1}{3},\frac{1}{3}])$, \item $\sharp \underline{L}\leq 2$ and $\sharp \overline{L}\leq 6\left( \frac{\|d\gamma\|_\infty}{\epsilon}+1\right)$, \item for any $x\in \gamma_*$, we have $\sharp \{j\in L, \ (\gamma \circ \iota_j)_*\cap B(x,\epsilon)\neq \emptyset\}\leq 100.$ \end{itemize} \end{lem} \begin{proof}[Sketch of proof] For the first three items it is enough to consider affine reparametrizations of $[-1,1]$ with rate $\frac{2\epsilon}{3\|d\gamma\|_\infty}$. As the bounded map $\gamma$ stays in a cone of opening angle $\pi/6$, its intersection with $B(x,\epsilon)$ has length less than $2\epsilon$. The last item then follows easily. \end{proof} Fix a $C^r$ smooth diffeomorphism $f:M\circlearrowleft$. A curve $\gamma:[-1,1]\rightarrow M$ is said to be \textbf{$n$-bounded} (resp. \textbf{strongly $(n,\epsilon)$-bounded}) when $f^k\circ \gamma$ is bounded (resp. strongly $\epsilon$-bounded) for $k=0,\cdots, n$. A strongly $(n,\epsilon)$-bounded curve $\gamma$ is contained in the dynamical ball $B_n(x,\epsilon):=\{y\in M, \ \forall k=0,\cdots, n-1, \ \mathrm{d}(f^kx,f^ky )<\epsilon \}$ with $x=\gamma(0)$. Fix a $C^r$ curve $\sigma:I\rightarrow M$. For $x\in \sigma_*$, a positive integer $n$ is called an \textbf{$(\alpha, \epsilon)$-geometric time} of $x$ when there exists an affine map $\theta_n:[-1,1]\rightarrow I$ such that $\gamma_n:=\sigma\circ \theta_n$ is strongly $(n,\epsilon)$-bounded, $\gamma_n(0)=x$ and $\|d(f^n\circ \gamma_n)(0)\|\geq \frac{3}{2}\alpha\epsilon$. The concept of \textbf{$(\alpha, \epsilon)$-geometric time} is almost independent of $\epsilon$. Indeed it follows from the above observations that, if $n$ is an $(\alpha, \epsilon)$-geometric time of $x$, then it is also an $(\alpha, \epsilon')$-geometric time for $\epsilon'<\frac{2\epsilon}{3}$.
Moreover, if $n$ is an $(\alpha, \epsilon)$-geometric time of $x$ with $\gamma_n$ the associated curve, then $n$ is a $(\frac{2}{3}\alpha,\frac{2}{3}\epsilon)$-geometric time of any $y\in \gamma_n([-1/3,1/3])$: if $y=\gamma_n(t)$ for $t\in[-1/3,1/3]$ then $\tilde \gamma_n:=\gamma_n(t+\frac{2}{3}\cdot)$ is strongly $(n,\frac{2}{3}\epsilon)$-bounded and satisfies $\tilde \gamma_n(0)=y$ and $\|d(f^n\circ \tilde\gamma_n)(0)\|=\frac{2}{3}\|d(f^n\circ\gamma_n)(t)\|\geq \frac{4}{9}\|d(f^n\circ \gamma_n)(0)\|\geq \frac{2}{3}\alpha \epsilon$.\\ We let $D_n(x)$ and $H_n(x)$ be the images of $f^n\circ \gamma_n$ and $\gamma_n$ respectively, with $\gamma_n$ as above chosen of maximal length. We define the semi-length of $D_n(x)$ as the minimum of the lengths of $f^n\circ \gamma_n([0,1])$ and $f^n\circ \gamma_n([-1,0])$. The semi-length of $D_n(x)$ is larger than $\alpha\epsilon$ at an $(\alpha, \epsilon)$-geometric time $n$. One can also easily check that the curvature of $f^n\circ \sigma$ at $f^nx$ is bounded from above by $\frac{1}{\alpha\epsilon}$. From the bounded distortion property (\ref{dist}) of bounded curves we get \begin{equation}\label{distor}\forall y,z\in H_n(x) \ \forall 0\leq l<n,\ \ \ \frac{e^{\phi_{n-l}(F^l\hat y)}}{e^{\phi_{n-l}(F^l\hat z)}}\leq \frac{9}{4}.\end{equation} \subsection{Reparametrization Lemma} We consider a $C^r$ smooth diffeomorphism $g:M\circlearrowleft$ and a $C^r$ smooth curve $\sigma:I\rightarrow M$ with $\mathbb N \ni r\geq 2$. We state a global reparametrization lemma describing the dynamics on $\sigma_*$. We will apply this lemma to $g=f^p$ for large $p$, with $f$ being the $C^r$ smooth system under study. We denote by $G$ the map induced by $g$ on $\mathbb P TM$.\\ We will encode the dynamics of $g$ on $\sigma_*$ with a tree, in a similar way as the symbolic dynamics associated to monotone branches encodes the dynamics of a continuous piecewise monotone interval map. A weighted directed rooted tree $\mathcal T$ is a directed rooted tree whose edges are labelled. Here the weights on the edges are pairs of integers. Moreover the nodes of our tree will be coloured, either in blue or in red. \\ We let $\mathcal T_n$ (resp. $\underline{\mathcal T_n}$, $\overline{\mathcal T_n}$) be the set of nodes (resp. blue nodes, red nodes) of level $n$. For all $k\leq n-1$ and for all $\mathbf i^n\in \mathcal T_n$, we also let $\mathbf i^n_{k}$ be the node of level $k$ leading to $\mathbf i^{n}$.
For $\mathbf i^n\in \mathcal T_n$, we let $k(\mathbf i^n)=(k_1(\mathbf i^n),k'_1(\mathbf i^n), k_2(\mathbf i^n), \cdots, k_n(\mathbf i^n),k'_n(\mathbf i^n) )$ be the $2n$-uple of integers given by the sequence of labels along the path from the root $\mathbf i^0$ to $\mathbf i^n$, where $\left(k_l(\mathbf i^n),k'_l(\mathbf i^n)\right)$ denotes the label of the edge joining $\mathbf i^n_{l-1}$ and $\mathbf i^n_{l}$.\\ For $x\in \sigma_*$, we let $k(x)\geq k'(\hat x)$ be the following integers: $$k(x):=\left[\log\|d_{x}g \|\right], \qquad k'( \hat x):=\left[\log \|d_xg(v_x)\|\right].$$ Then for all $n\in \mathbb N^*$ we define $$k^n( x)=(k( x),k'(\hat x), k(gx), k'(G\hat x),\cdots, k(g^{n-1}x), k'(G^{n-1}\hat x) ).$$ For a $2n$-uple of integers $\mathbf k^n=(k_1,k'_1, \cdots, k_n,k'_n)$ we then consider $$\mathcal H(\mathbf k^n):=\left\{x\in \sigma_*, \, k^n( x)=\mathbf k^n\right\}.$$ We restate the Reparametrization Lemma (RL for short) proved in \cite{bure} in a \textit{global} version. Let $\exp_x$ be the exponential map at $x$ and let $R_{inj}$ be the radius of injectivity of $(M, \|\cdot\|)$. \begin{Rep} Let $\frac{R_{inj}}{2}>\epsilon>0$ satisfy $\|d^sg_{2\epsilon}^x\|_\infty\leq 3\epsilon \|d_xg\|$ for all $s=1,\cdots,r$ and all $x\in M$, where $g^x_{2\epsilon}= g\circ \exp_x(2\epsilon\cdot): \{w_x\in T_xM, \ \|w_x\|\leq 1\}\rightarrow M$, and let $\sigma:[-1,1]\rightarrow M$ be a strongly $\epsilon$-bounded curve. Then there are $\mathcal T$, a bicoloured weighted directed rooted tree, and $\left(\theta_{\mathbf{i}^n}\right)_{\mathbf{i}^n \in \mathcal T_n}$, $n\in \mathbb N$, families of affine reparametrizations of $[-1,1]$, such that for some universal constant $C_r$ depending only on $r$: \begin{enumerate} \item $\forall \mathbf i^n\in \mathcal T_n$, the curve $\sigma\circ \theta_{\mathbf i^n}$ is $(n,\epsilon)$-bounded, \item $\forall \mathbf i^n\in \mathcal T_n$, the affine map $\theta_{\mathbf{i}^n}$ may be written as $\theta_{\mathbf{i}^n_{n-1}}\circ \phi_{\mathbf i^n}$ with $\phi_{\mathbf i^n}$ being an affine contraction with rate smaller than $1/100$, and $\theta_{\mathbf{i}^n}([-1,1])\subset \theta_{\mathbf{i}^n_{n-1}}([-1/3,1/3])$ when $\mathbf{i}^n_{n-1}$ belongs to $ \overline{\mathcal T_{n-1}}$, \item $\forall \mathbf i^n\in \overline{\mathcal T_n}$, we have $\left\|d\left(g^n\circ \sigma \circ \theta_{\mathbf{i}^n}\right)(0)\right\|\geq \epsilon/6$, \item $\forall \mathbf k^n\in (\mathbb Z\times \mathbb Z)^n$, the set $\sigma^{-1}\mathcal H(\mathbf k^n) $ is contained in the union of \\ $$\displaystyle{\bigcup_{\stackrel{\mathbf i^n\in \overline{\mathcal T_n}}{ k(\mathbf i^n)=\mathbf k^n}} \theta_{\mathbf{i}^n}([-1/3,1/3])} \text{ and } \displaystyle{\bigcup_{\stackrel{\mathbf i^n\in \underline{\mathcal T_n}}{ k(\mathbf i^n)=\mathbf k^n}} \theta_{\mathbf{i}^n}([-1,1])}.$$\\ Moreover any term of these unions has a non-empty intersection with $\sigma^{-1}\mathcal H(\mathbf k^n) $, \item $\forall \mathbf i^{n-1}\in \mathcal T_{n-1}$ and $(k_n,k'_n)\in \mathbb Z\times \mathbb Z$ we have $$\sharp \left\{ \mathbf i^n\in \overline{\mathcal T_n}, \ \mathbf i^n_{n-1}= \mathbf i^{n-1} \text{ and } (k_n(\mathbf i^n),k'_n(\mathbf i^n)) =(k_n,k'_n)\right\}\leq C_re^{\max\left(k'_n,\frac{k_n-k'_n}{r-1}\right)}, $$ $$\sharp
\left\{ \mathbf i^n\in \underline{\mathcal T_n}, \ \mathbf i^n_{n-1}= \mathbf i^{n-1} \text{ and } (k_n(\mathbf i^n),k'_n(\mathbf i^n)) =(k_n,k'_n)\right\}\leq C_r e^{\frac{k_n-k'_n}{r-1}}.$$ \end{enumerate} \end{Rep} \begin{proof} We argue by induction on $n$. For $n=0$ we let $\mathcal T_0=\underline{\mathcal T_0}=\{\mathbf i^0\}$ and we just take $\theta_{\mathbf i^0}$ equal to the identity map on $[-1,1]$. Assume the tree and the associated reparametrizations have been built up to level $n$. Fix $\mathbf i^n\in\mathcal T_n$ and let \begin{align*}\hat \theta_{\mathbf i^n}:=&\left\{ \begin{array}{ll} \theta_{\mathbf i^n}(\frac{1}{3}\cdot ) & \mbox{if } \mathbf i^n\in \overline{\mathcal T_n},\\ \theta_{\mathbf i^n} & \mbox{if } \mathbf i^n\in \underline{\mathcal T_n}. \end{array}\right. \end{align*} We will define the children $ \mathbf i^{n+1}$ of $\mathbf i^n$, i.e. the nodes $\mathbf i^{n +1}\in \mathcal T_{n+1}$ with $\mathbf i_n^{n+1}=\mathbf i^{n} $. The label on the edge joining $\mathbf i^n$ to $\mathbf i^{n+1} $ is a pair $(k_{n+1}, k'_{n+1})$ such that the $2(n+1)$-uple $ \mathbf{k}^{n+1}=(k_1(\mathbf i^n), \cdots, k'_n(\mathbf i^n),k_{n+1},k'_{n+1})$ satisfies $\mathcal H(\mathbf k^{n+1})\cap \left(\sigma \circ \hat\theta_{\mathbf i^n}\right)_*\neq \emptyset$. We fix such a pair $(k_{n+1}, k'_{n+1})$ and the associated sequence $\mathbf{k}^{n+1}$. We let $\eta,\psi:[-1,1]\rightarrow M$ be the curves defined as: \begin{align*} \eta:=&\sigma\circ \hat \theta_{\mathbf i^n},\\ \psi:=&g^n\circ \eta. \end{align*} \underline{\textit{First step :}} \textbf{Taylor polynomial approximation.} One computes, for an affine map $\theta:[-1,1]\circlearrowleft$ with contraction rate $b$ specified later and with $y= \psi(t)\in g^{n}\mathcal H(\mathbf k^{n+1}) $, $t\in \theta([-1,1])$: \begin{align*}\|d^r(g\circ \psi\circ \theta)\|_\infty &\leq b^r \left\|d^{r}\left(g_{2\epsilon}^y \circ \psi_{2\epsilon}^y\right)\right\|_\infty , \textrm{ with $\psi_{2\epsilon}^y:=(2\epsilon)^{-1}\exp_y^{-1}\circ \psi$,}\\ &\leq b^r\left\|d^{r-1}\left( d_{\psi_{2\epsilon}^y}g_{2\epsilon}^y\circ d\psi_{2\epsilon}^y \right) \right\|_\infty,\\ &\leq b^r 2^r \max_{s=0,\cdots,r-1}\left\|d^s\left(d_{\psi_{2\epsilon}^y}g_{2\epsilon}^y\right)\right\|_{\infty}\|\psi_{2\epsilon}^y \|_r. \end{align*} By assumption on $\epsilon$, we have $\|d^s g_{2\epsilon}^y\|_{\infty}\leq 3\epsilon\|d_y g\|$ for any $1\leq s\leq r$. Moreover $\|\psi_{2\epsilon}^y \|_r\leq (2\epsilon)^{-1}\|d\psi\|_\infty\leq 1$ as $\psi$ is strongly $\epsilon$-bounded.
Therefore by Fa\'a di Bruno's formula, we get for some\footnote{Although these constants may differ at each step, they are all denoted by $C_r$.} constants $C_r>0$ depending only on $r$: \begin{align*}\max_{s=0,\cdots,r-1}\|d^s\left(d_{\psi_{2\epsilon}^y}g_{2\epsilon}^y\right)\|_{\infty} &\leq \epsilon C_r\|d_y g\|,\\ \text{and then \ } \|d^r(g\circ \psi\circ \theta)\|_\infty &\leq \epsilon C_rb^r \|d_y g\|\|\psi_{2\epsilon}^y \|_r,\\ &\leq C_rb^r \|d_y g\|\|d\psi \|_\infty, \\ &\leq ( C_r b^{r-1}\|d_yg\|) \|d(\psi \circ \theta)\|_{\infty}, \\ &\leq (C_r b^{r-1}e^{k_{n+1}}) \|d(\psi \circ \theta)\|_{\infty}, \textrm{ because $y$ belongs to $g^n\mathcal H(\mathbf k^{n+1})$}, \\ & \leq e^{k'_{n+1}-4}\|d(\psi\circ \theta)\|_\infty, \textrm{ by taking $b=\left(C_re^{k_{n+1}-k'_{n+1}+4 }\right)^{-\frac{1}{r-1}}$.} \end{align*} The Taylor polynomial $P$ at $0$ of degree $r-1$ of $d(g\circ \psi\circ \theta)$ satisfies on $[-1,1]$: \begin{align*} \|P-d(g\circ \psi\circ \theta)\|_{\infty}&\leq e^{k'_{n+1}-4}\|d(\psi\circ \theta)\|_\infty. \end{align*} We may cover $[-1,1]$ by at most $b^{-1}+1$ such affine maps $\theta$. \\ \underline{\textit{Second step :}} \textbf{B\'ezout's theorem.} Let $a_n:=e^{k'_{n+1}}\|d(\psi\circ \theta)\|_\infty$. Note that for $s\in [-1,1]$ with $\eta\circ \theta(s)\in \mathcal H(\mathbf k^{n+1})$ we have $\|d(g\circ \psi\circ \theta)(s)\|\in [a_ne^{-2},a_ne^{2}]$, therefore $\|P(s)\|\in [a_ne^{-3},a_ne^3]$. Conversely, if $\|P(s)\|\in [a_ne^{-3},a_ne^3]$ for some $s\in [-1,1]$, we also get $\|d(g\circ \psi\circ \theta)(s)\|\in [a_ne^{-4},a_ne^{4}]$. By B\'ezout's theorem the semi-algebraic set $\{ s\in [-1,1],\ \|P(s)\|\in [e^{-3}a_n, e^{3}a_n]\}$ is the disjoint union of closed intervals $(J_i)_{i\in I}$ with $\sharp I$ depending only on $r$. Let $\theta_i$ be the composition of $\theta$ with an affine reparametrization from $[-1,1]$ onto $J_i$. \\ \underline{\textit{Third step :}} \textbf{ Landau-Kolmogorov inequality.} By the Landau-Kolmogorov inequality on the interval (see Lemma 6 in \cite{bur}), we have for some constant $C_r\in \mathbb N^*$ and for all $1\leq s\leq r$: \begin{align*} \|d^s(g\circ \psi\circ \theta_i)\|_\infty & \leq C_r\left(\|d^r(g\circ \psi\circ \theta_i)\|_\infty +\|d(g\circ \psi\circ \theta_i)\|_\infty\right),\\ &\leq C_r\frac{|J_i|}{2}\left( \|d^r(g\circ \psi\circ \theta)\|_\infty+ \sup_{t\in J_i}\|d(g\circ \psi\circ \theta)(t)\| \right),\\ &\leq C_r a_n\frac{|J_i|}{2}. \end{align*} We cut again each $J_i$ into $1000C_r$ intervals of the same length and keep only those intervals $\tilde{J_i}$ with $(\eta \circ \theta)(\tilde{J}_i)\cap \mathcal H(\mathbf k^{n+1})\neq \emptyset$. Let $\tilde{\theta_i}$ be the affine reparametrization from $[-1,1]$ onto $\theta(\tilde{J_i})$. We check that $g\circ \psi\circ \tilde{\theta_i}$ is bounded: \begin{align*} \forall s=2,\cdots, r, \ \|d^s(g\circ \psi\circ \tilde{\theta_i})\|_\infty & \leq (1000C_r)^{-2} \|d^s(g\circ \psi\circ \theta_i)\|_\infty,\\ &\leq \frac{1}{6}(1000C_r)^{-1}\frac{|J_i|}{2}a_ne^{-4},\\ &\leq \frac{1}{6}(1000C_r)^{-1}\frac{|J_i|}{2}\min_{s\in J_i}\|d(g\circ \psi\circ \theta)(s)\|,\\ &\leq \frac{1}{6}(1000C_r)^{-1}\frac{|J_i|}{2}\min_{s\in \tilde{J}_i}\|d(g\circ \psi\circ \theta)(s)\|,\\ &\leq \frac{1}{6} \|d(g\circ \psi\circ \tilde{\theta_i})\|_\infty.
\end{align*} \underline{\textit{ Last step :}} \textbf{$\epsilon$-bounded curves.} Either $g\circ \psi\circ \tilde{\theta_i}$ is $\epsilon$-bounded, and then $\hat \theta_{\mathbf i^n}\circ \tilde{\theta_i}=\theta_{\mathbf i^{n+1}}$ for some $\mathbf i^{n+1}\in \underline{\mathcal T}_{n+1}$; or we apply Lemma \ref{tech} to $g\circ \psi\circ \tilde{\theta_i}$: the new affine parametrizations $\hat \theta_{\mathbf i^n}\circ \tilde{\theta_i}\circ \iota_j$, $j\in \underline L$ (resp. $j\in \overline{L}$), then define $\theta_{\mathbf i^{n+1}}$ for a node $\mathbf i^{n+1}$ in $\underline{\mathcal T}_{n+1}$ (resp. $\overline{\mathcal T}_{n+1}$). Note finally that: \begin{align*} \sharp \overline{L} &\leq 6\left( \frac{\|d(g\circ \psi\circ \tilde{\theta_i})\|_\infty}{\epsilon}+1\right),\\ & \leq 100 \max(e^{k'_{n+1}}b,1), \text{ as $\psi$ is $\epsilon$-bounded and $\|d\tilde{\theta_i}\|_\infty\leq b$},\\ &\leq C_r \max\left(\frac{e^{k'_{n+1}}}{e^{\frac{k_{n+1}-k'_{n+1}}{r-1}}},1\right), \end{align*} therefore \begin{align*} \sharp \left\{ \mathbf i^{n+1}\in \overline{\mathcal T_{n+1}}, \ \mathbf i^{n+1}_{n}= \mathbf i^{n} \text{ and } (k_{n+1}(\mathbf i^{n+1}),k'_{n+1}(\mathbf i^{n+1})) =(k_{n+1},k'_{n+1})\right\}&\leq \sum_{\tilde\theta_i}C_r \max\left(\frac{e^{k'_{n+1}}}{e^{\frac{k_{n+1}-k'_{n+1}}{r-1}}},1\right), \\ &\leq C_re^{\max\left(k'_{n+1},\frac{k_{n+1}-k'_{n+1}}{r-1}\right)}. \end{align*} \end{proof} As a corollary of the proof of the RL we state a \textit{local} reparametrization lemma, i.e. we only reparametrize the intersection of $\sigma_*$ with some given dynamical ball. For $x\in \sigma_*$, $n\in \mathbb N$ and $\epsilon>0$ we let $$B^G_\sigma(x,\epsilon,n):=\left\{y\in \sigma_*, \ \forall k=0,\cdots, n-1, \ \hat{\mathrm{d}}(G^k\hat x,G^k\hat y)<\epsilon \right\}.$$ For all $(x,v)\in \mathbb PTM$, we also let $w(x,v)=w_g(x,v) :=\log\|d_{x}g\|-\log \|d_xg(v)\|$ and for all $n\in \mathbb N$ we let $w^n(x,v)=w^n_g(x,v):=\sum_{k=0}^{n-1}w(G^k(x,v))$. We consider $\epsilon>0$ as in the Reparametrization Lemma.
We assume moreover that $$ [\hat{\mathrm{d}}((x,v), (y,w) )<\epsilon]\Rightarrow [ \left|\log \|d_xg(v)\|-\log \|d_yg(w)\|\right|<1 \text{ and }\left|\log \|d_xg\|-\log \|d_yg\|\right|<1].$$ \begin{coro}\label{local} For any strongly $\epsilon$-bounded curve $\sigma:[-1,1]\rightarrow M$ and for any $x\in \sigma_*$, we have for some constant $C_r$ depending only on $r$: \begin{equation}\label{grgr} \forall n\in \mathbb N, \ \ \sharp \left\{\mathbf i^n\in \mathcal T_n, \ (\sigma\circ \theta_{\mathbf i^n})_*\cap B^G_{\sigma}(x,\epsilon, n)\neq \emptyset\right\}\leq C_r^n e^{\frac{w^n(\hat x)}{r-1}}.\end{equation} \end{coro} \begin{proof}[Sketch of proof]The Corollary follows from the Reparametrization Lemma together with the two following facts: \begin{itemize} \item for $y \in B^G_\sigma(x,\epsilon,n)$ we have $k^n(x)\simeq k^n(y)$ up to $1$ on each coordinate, \item for any $\mathbf i^{n-1}$ there are at most $C_re^{\frac{k(g^nx)-k'(G^n\hat x)}{r-1}}$ nodes $\mathbf i^n\in \overline{\mathcal T}_n$ with $\mathbf i_{n-1}^n=\mathbf i^{n-1}$ and $ \theta_{\mathbf i^n}([-1,1])\cap \sigma^{-1} B^G_{\sigma}(x,\epsilon, n+1)\neq \emptyset$. \end{itemize} This last point is a consequence of the last item of Lemma \ref{tech} applied to the bounded curve $g\circ \psi\circ \tilde{\theta_i}$ introduced in the third step of the proof of the Reparametrization Lemma. \end{proof} \subsection{The geometric set $E$} We apply the Reparametrization Lemma to $g=f^p$ for some positive integer $p$. For $x\in \sigma_*$ we define the set $E_p(x)\subset p\mathbb N$ of integers $mp$ such that there is $\mathbf i^m\in \overline{\mathcal T_m}$ with $k(\mathbf i^m)=k^m(x)$ and $x\in \sigma \circ \theta_{\mathbf i^m}([-1/3,1/3])$. In particular, any integer $n=mp\in E_p(x)$ is an $(\alpha_p, \epsilon_p)$-geometric time of $x$ for $f$ with $\alpha_p$ and $\epsilon_p$ depending only on $p$, by item (3) of the RL.
Observe that if $k<m$ we have, with $x=\sigma\circ \theta_{\mathbf i^m}(t)$ and $\theta_{\mathbf i^m}(t)=\theta_{\mathbf i^m_k}(s)$: \begin{align*} e^{\phi_{mp-kp}(F^{kp}\hat x)}&=\frac{\left\|d\left(f^{mp}\circ \sigma \circ \theta_{\mathbf{i}^m}\right)(t)\right\|}{\left\|d\left(f^{kp}\circ \sigma \circ \theta_{\mathbf{i}^m}\right)(t)\right\|},\\ &\geq \frac{2}{3}\frac{\left\|d\left(f^{mp}\circ \sigma \circ \theta_{\mathbf{i}^m}\right)(0)\right\|}{\left\|d\left(f^{kp}\circ \sigma \circ \theta_{\mathbf{i}^m}\right)(t)\right\|}, \text{ since $\sigma\circ \theta_{\mathbf i^m}$ is $m$-bounded (for $g=f^p$)},\\ &\geq \frac{2}{3}\frac{\left\|d\left(f^{mp}\circ \sigma \circ \theta_{\mathbf{i}^m}\right)(0)\right\|}{\left\|d\left(f^{kp}\circ \sigma \circ \theta_{\mathbf{i}^m_k}\right)(s)\right\|}100^{m-k}, \text{ by item (2) of the RL,}\\ &\geq \frac{2}{3\epsilon}\left\|d\left(f^{mp}\circ \sigma \circ \theta_{\mathbf{i}^m}\right)(0)\right\|100^{m-k}, \text{ as $\sigma \circ \theta_{\mathbf i^m_k}$ is strongly $(k,\epsilon)$-bounded,}\\ &\geq \frac{1}{9}100^{m-k}\geq 10^{m-k}, \text{ by item (3) of the RL and since $k<m$.} \end{align*} Hence $\phi_{mp-kp}(F^{kp}\hat x)\geq (m-k)\log 10=\tau_p(mp-kp)$, and therefore $E_p$ is $\tau_p$-large with $\tau_p=\frac{\log 10}{p}$. \begin{prop}\label{lebgeo} Let $f:M\circlearrowleft $ be a $C^r$ diffeomorphism and let $b>\frac{R(f)}{r}$. For $p$ large enough there exists $\beta_p>0$ such that $$\limsup_n\frac{1}{n}\log \textrm{Leb}_{\sigma_*}\left(\left\{x\in A, \ d_{n}(E_{p}(x))< \beta_p \textrm{ and } \|d_xf^{n}(v_x)\|\geq e^{nb}\right\}\right)<0.$$ \end{prop} \begin{proof}It is enough to consider $n=mp\in p\mathbb N$. We apply the Reparametrization Lemma to $g=f^p$ with $\epsilon>0$ being the scale. Let $\mathcal T$ be the corresponding tree and $(\theta_{\mathbf i^m})_{\mathbf i^m\in \mathcal T_m}$ its associated affine reparametrizations. By a standard argument the number of sequences of positive integers $(k_1,\cdots, k_m)$ with $k_i< M\in \mathbb N$ for all $i$ is less than $e^{mMH(M)}$, where $H(t):=-\frac{1}{t}\log \frac{1}{t} -(1-\frac{1}{t})\log \left(1-\frac{1}{t}\right)$ for $t>0$. Therefore we can fix the sequence $\mathbf k^m=k^{m}(x)$ up to a combinatorial factor equal to $e^{2mpA_f H(pA_f)}$ with $A_f:=\log\|df\|_\infty+\log \|df^{-1}\|_\infty+1$. Assume there is $x=\sigma \circ \theta_{\mathbf i^m}(t)$ with $\|d_xf^{n}(v_x)\|\geq e^{nb}$.
Then by the distortion property of the bounded curves $f^n\circ\sigma \circ \theta_{\mathbf i^m}$ and $\sigma \circ \theta_{\mathbf i^m}$ we have \begin{align*} |(\sigma\circ \theta_{\mathbf i^m})_*|&\leq 2\|d(\sigma\circ \theta_{\mathbf i^m})\|_\infty,\\ &\leq 3\|d(\sigma\circ \theta_{\mathbf i^m})(t)\|, \text{ as $\sigma\circ \theta_{\mathbf i^m}$ is bounded,}\\ &\leq 3 \frac{\|d(f^n\circ \sigma \circ \theta_{\mathbf i^m})(t)\|}{\|d_xf^{n}(v_x)\|} ,\\ &\leq 3\epsilon e^{-nb}, \text{ as $f^n\circ \sigma \circ \theta_{\mathbf i^m}$ is $\epsilon$-bounded.} \end{align*} Moreover when $x$ belongs to $ (\sigma\circ \theta_{\mathbf{i}^m})_*$ for some $\mathbf{i}^m\in \mathcal T_m$ and satisfies $d_{n}(E_{p}(x))<\beta_p$, then we have $\sharp \left\{ 0<k<m, \ \mathbf{i}^m_k \in \overline{\mathcal T_k} \right\}\leq n\beta_p$. But, by the estimates on the valence of $\mathcal T$ given in the last item of the RL, the number of $m$-paths from the root labelled with $\mathbf k^m$ and with at most $n\beta_p $ red nodes is less than $2^mC_r^{m}e^{\sum_i\frac{k_i-k'_i}{r-1}} \|df\|_\infty^{\beta_p p^2m}$ for some constant $C_r$ depending only on $r$. Then if $x\in \mathcal H(\mathbf k^m)$ satisfies $\|d_xf^n(v_x)\|\geq e^{nb}$, we have $e^{\sum_i\frac{k_i-k'_i}{r-1}}\leq e^me^{m\frac{\log\|df^p\|_\infty-bp}{r-1}}$. But, as $b$ is larger than $\frac{\log \|df^p\|_\infty}{pr}$ for large $p$, we get for such values of $p$: $\frac{\log\|df^p\|_\infty-bp}{r-1}\leq \frac{1-\frac{1}{r}}{r-1}\cdot \log \|df^p\|_\infty=\frac{\log \|df^p\|_\infty}{r}$. Therefore: \begin{align*} \limsup_n \frac{1}{n}\log\textrm{Leb}_{\sigma_*}\left(\left\{x\in A, \ d_{n}(E_{p}(x))< \beta_p \textrm{ and } \|d_xf^{n}(v_x)\|\geq e^{nb}\right\}\right)\\ \leq \frac{\log C_r}{p}+\left(H(pA_f)+p\beta_p\right)A_f- b+\frac{\log \|df^p\|_\infty}{pr}.\end{align*} As $b$ is larger than $\frac{R(f)}{r}$, one can choose first $p\in \mathbb N^*$ large and then $\beta_p>0$ small in such a way that the right-hand side is negative. \end{proof} From now on we fix $p$ and the associated quantities satisfying the conclusion of Proposition \ref{lebgeo}, and we will simply write $E,\tau, \alpha, \epsilon, \beta$ for $E_p,\tau_p, \alpha_p, \epsilon_p, \beta_p$. The set $E(x)$ is called the \textbf{geometric set} of $x$. \subsection{Cover of $F$-dynamical balls by bounded curves} As a consequence of Corollary \ref{local}, we now give an estimate of the number of strongly $(n,\epsilon')$-bounded curves reparametrizing the intersection of a given strongly $\epsilon'$-bounded curve with an $F$-dynamical ball of length $n$ and radius $\epsilon'$. This estimate will be used in the proof of the F\"olner Gibbs property (Proposition \ref{coeur}).\\ For any $q\in \mathbb N^*$ we let $\omega_q:\mathbb PTM\rightarrow\mathbb R$ be the map defined for all $(x,v)\in \mathbb P TM $ by $$\omega_q(x,v):=\frac{1}{q}\sum_{k=0}^{q-1}\log \|d_{f^kx} f^q\|-\log\|d_xf(v)\|.$$ Note that $\omega_1=w$. We also write $(\omega_q^n)_n$ for the associated additive $F$-cocycle, i.e.
$$\omega_q^n(x,v)=\sum_{0\leq k<n}\omega_q(F^k(x,v)).$$ \begin{lem}\label{annens} For any $q\in \mathbb N^*$, there exist $\epsilon'_q>0$ and $B_q>0$ such that for any strongly $\epsilon'_q$-bounded curve $\sigma:[-1,1]\rightarrow M$, for any $x\in \sigma_*$ and for any $n\in \mathbb N^*$ there exists a family $(\theta_i)_{i\in I_n}$ of affine maps of $[-1,1]$ such that : \begin{itemize} \item $B_{\sigma}^F(x,\epsilon'_q, n)\subset \bigcup_{i\in I_n}(\sigma\circ \theta_i)_*$, \item $\sigma\circ \theta_i$ is $(n,\epsilon'_q)$-bounded (with respect to $f$) for any $i\in I_n$, \item $\sharp I_n\leq B_q C_r^{\frac{n}{q}}e^{\frac{\omega_q^n(\hat x)}{r-1}},$ with $C_r$ a universal constant depending only on $r$. \end{itemize} \end{lem} \begin{proof} Fix $q$. Let $\epsilon'_q=\epsilon/2$ with $\epsilon$ as in Corollary \ref{local} for $g=f^q$. There is a family $\Theta$ of affine maps of $[-1,1]$ such that for any strongly $\epsilon'_q$-bounded curve $\gamma:[-1,1]\rightarrow M$ and any $\theta\in\Theta$, the curve $\gamma \circ \theta $ is $(q,\epsilon'_q)$-bounded and $\bigcup_{\theta\in \Theta}\theta_*=[-1,1]$. Fix now a strongly $\epsilon'_q$-bounded curve $\sigma:[-1,1]\rightarrow M$ and let $x\in \sigma_*$. We consider only the maps $\theta\in \Theta$ such that $B_{\sigma}^F(x,\epsilon'_q,n)\cap (\sigma\circ\theta)_*\neq \emptyset$. For such a map $\theta$ we let $x_\theta \in B_{\sigma}^F(x,\epsilon'_q,n)\cap (\sigma\circ\theta)_*$. Take any $0\leq k<q$. By applying Corollary \ref{local} to ``$g=f^q$'', ``$\sigma= f^k\circ \sigma \circ \theta$'', ``$x=f^{k}(x_\theta)$'' and ``$n=[\frac{n-k}{q}]$'', we get a family $\Psi_{\theta, k}$ of affine maps of $[-1,1]$ with $$\bigcup_{\psi\in \Psi_{\theta,k}}(\sigma\circ \theta\circ \psi)_* \supset f^{-k}B_{ f^k\circ \sigma \circ \theta}^{F^q}\left(f^k(x_\theta),\epsilon,\left[\frac{n-k}{q}\right]\right)\supset B_{ \sigma \circ \theta}^{F}(x,\epsilon'_q,n)$$ such that $f^{mq+k}\circ \sigma \circ \theta\circ \psi$ is $\epsilon$-bounded for $\psi\in \Psi_{\theta,k}$ and integers $m$ with $0\leq mq+k\leq n$. Then $\Theta_k=\{\theta\circ \psi\circ \theta', \ \psi\in \Psi_{\theta, k} \text{ and } (\theta, \theta')\in \Theta^2\}$ satisfies the first two items of the conclusion. Moreover, by letting $m_k=\left[\frac{n-k}{q}\right]$ we have: \begin{align*} \sharp \Theta_k& \leq C_r^{m_k}\, \sharp \Theta^2\, e^{\frac{w^{m_k}_{f^q}(F^k\hat x)}{r-1}}. \end{align*} But for some constant $A_q$ depending only on $q$, we have $$\min_{0\leq k<q}e^{w^{m_k}_{f^q}(F^k\hat x)}\leq \left(\prod_{0\leq k<q} e^{w^{m_k}_{f^q}(F^k\hat x)}\right)^{1/q}\leq A_q e^{ \omega_q^n(\hat x)}.$$ This concludes the proof of the lemma, as $\sharp \Theta$ depends only on $q$.
\end{proof} \section{Existence of SRB measures}\label{srbb} \subsection{Entropy formula} By Ruelle's inequality \cite{Ruel}, for any $C^1$ system, the entropy of an invariant measure is less than or equal to the integral of the sum of its positive Lyapunov exponents. For $C^{1+}$ systems, the following entropy characterization of SRB measures was obtained by Ledrappier and Young: \begin{theorem}\cite{led}\label{leddd} An invariant measure of a $C^{1+}$ diffeomorphism on a compact manifold is a SRB measure if and only if it has a positive Lyapunov exponent almost everywhere and its entropy is equal to the integral of the sum of its positive Lyapunov exponents. \end{theorem} As the entropy is harmonic (i.e. preserves the ergodic decomposition), the ergodic components of a SRB measure are also SRB measures. \subsection{Lyapunov exponents} We consider in this subsection a $C^1$ diffeomorphism $f:M\circlearrowleft$. Let $\|\cdot\|$ be a Riemannian structure on $M$. The (forward upper) Lyapunov exponent of $(x,v)$ for $x\in M$ and $v\in T_xM$ is defined as follows (see \cite{pes} for an introduction to Lyapunov exponents): \[\chi(x,v):=\limsup_{n\rightarrow +\infty}\frac{1}{n}\log \|d_xf^n(v)\|.\] The function $\chi(x,\cdot)$ admits only finitely many values $\chi_1(x)>...>\chi_{p(x)}(x)$ on $T_xM\setminus \{0\}$ and generates a flag $ 0\subsetneq V_{p(x)}(x) \subsetneq \cdots \subsetneq V_{1}(x)=T_xM$ with $V_i(x)=\{ v\in T_xM, \ \chi(x,v)\leq \chi_i(x)\}$. In particular, $\chi(x,v)=\chi_i(x)$ for $v\in V_i(x)\setminus V_{i+1}(x)$. The function $p$, as well as the functions $\chi_i$ and the vector spaces $V_i(x)$, $i=1,...,p(x)$, are invariant and depend Borel measurably on $x$. One can show that the maximal Lyapunov exponent $\chi$ introduced in the introduction coincides with $\chi_1$ (see Appendix A). A point $x$ is said to be \textbf{regular} when there exists a decomposition $$T_xM=\bigoplus_{i=1}^{p(x)} H_i(x)$$ such that $$\forall v\in H_i(x)\setminus \{0\}, \ \lim_{n\rightarrow \pm\infty }\frac{1}{n}\log \|d_xf^n(v)\|=\chi_i(x)$$ with uniform convergence on $\{v\in H_i(x), \ \|v\|=1\}$, and $$\lim_{n\rightarrow \pm\infty }\frac{1}{n}\log \mathop{\mathrm{Jac}} \left(d_xf^n\right)=\sum_i \dim(H_i(x))\chi_i(x).$$ In particular we have $V_i(x)=\bigoplus_{j=i}^{p(x)} H_j(x)$ for all $i$. The set $\mathcal R$ of regular points is an invariant measurable set of full measure for any invariant measure \cite{Os}. The invariant subbundles $H_i$ are called the Oseledec bundles. We also let $\mathcal R^*:=\{x\in \mathcal R, \ \forall i \ \chi_i(x)\neq 0 \}$. For $x\in \mathcal R^*$ we put $E_u(x)= \bigoplus_{i, \ \chi_i(x)>0} H_i(x)$ and $E_s(x)= \bigoplus_{i, \ \chi_i(x)<0} H_i(x)$.\\ In the following we only consider surface diffeomorphisms. Therefore we always have $p(x)\leq 2$ and, when $p(x)$ is equal to $1$, we let $\chi_2(x)=\chi_1(x)$. When $\nu$ is $f$-invariant we let $\chi_i(\nu)=\int \chi_i\, d\nu$. \subsection{Building SRB measures}\label{building} Assume $f$ is a $C^r$, $r>1$, surface diffeomorphism and $\limsup_n\frac{1}{n}\log \|d_xf^n\|>b>\frac{R(f)}{r}$ on a set of positive Lebesgue measure, as in the Main Theorem. Take $\epsilon>0$ as in Proposition \ref{lebgeo} (it depends only on $b-\frac{R(f)}{r}>0$).
By using Fubini's theorem as in \cite{bur}, there is a $C^r$ smooth embedded curve $\sigma:I\rightarrow M$, which can be assumed to be $\epsilon$-bounded, and a subset $A$ of $\sigma_*$ with $\mathop{\mathrm{Leb}}_{\sigma_*}(A)>0$, such that we have $\limsup_n\frac{1}{n}\log \|d_xf^n(v_x)\|>b$ for all $x\in A$. Here $\mathop{\mathrm{Leb}}_{\sigma_*}$ denotes the Lebesgue measure on $\sigma_*$ induced by its inherited Riemannian structure as a submanifold of $M$. This is a finite measure with $\mathop{\mathrm{Leb}}_{\sigma_*}(M)=|\sigma_*|$. We can also assume that the countable set of periodic sources has an empty intersection with $\sigma_*$. Let $\mathfrak m$ be the measurable sequence given by the set $\mathfrak m(x):=\{n, \ \|d_xf^n(v_x)\|\geq e^{nb}\}\in \mathfrak N $ for $x\in A$. It follows from Proposition \ref{lebgeo} that for $x$ in a subset of $ A$ of positive $\mathop{\mathrm{Leb}}_{\sigma_*}$ measure we have $d_n\left(E(x)\right)\geq \beta>0$ for $n\in \mathfrak m(x)$ large enough, i.e., denoting again this subset by $A$, we have: $$\forall x\in A, \ \overline{d}(E(x))\geq \overline{d}^{\mathfrak m}(E(x))\geq \beta.$$ For any $q\in \mathbb N^*$ we let $$\psi^q=\phi-\frac{\omega_q}{r-1}.$$ \begin{prop}\label{coeur} There exists a sequence of positive real numbers $(\delta_q)_{q}$ with $\delta_q\xrightarrow{q\rightarrow \infty} 0$ such that the property $(H)$ holds with respect to the additive cocycle on $\mathbb PTM$ associated to the observable $\psi^q+\delta_q$. \end{prop} We now prove the existence of a SRB measure assuming Proposition \ref{coeur}, whose proof is given in the next section. This is a first step in the proof of the Main Theorem. We will apply the results of the first sections to the projective action $F:\mathbb PTM\circlearrowleft$ induced by $f$, where we consider: \begin{itemize} \item the additive derivative cocycle $\Phi=(\phi_k)_k$ given by $\phi_k(x,v)=\log \|d_xf^k(v)\|$, \item the measure $\lambda=\lambda_\sigma$ on $\mathbb PTM$ given by $ \mathfrak s^*\mathop{\mathrm{Leb}}_{\sigma_*}$ with $\mathfrak s:x\in \sigma_*\mapsto (x,v_x)$, \item the geometric set $E$, which is $\tau$-large with respect to $\Phi$, \item the additive cocycles $\Psi^q$ associated to $\psi^q+\delta_q$. \end{itemize} The topological extension $\pi:(\mathbb P TM,F)\rightarrow (M,f)$ is principal\footnote{i.e. $h_f(\pi\mu)=h_F(\mu)$ for all $F$-invariant measures $\mu$.} by a straightforward application of the Ledrappier-Walters variational principle \cite{led} and Lemma 3.3 in \cite{shub}. In fact this holds in any dimension and, more generally, for any finite dimensional vector bundle morphism instead of $df:TM\circlearrowleft$. Let $\mathcal F=(F_n)_{n\in \mathfrak n}$ and $(A_n)_{n\in \mathfrak n}$ be the sequences associated to $E$ given by Lemma \ref{measurable}. Rigorously, $E$ should be defined on the projective tangent bundle, but as $\pi$ is one-to-one on $\mathbb PT\sigma_*$ there is no confusion. In the same way we see the sets $A_n$, $n\in \mathbb N$, as subsets of $A\subset\sigma_*$. Any weak-$*$ limit $\mu$ of $\mu_n^{F_n}$ is invariant under $F$ and thus supported by the Oseledec bundles. Let $\nu=\pi\mu$.
By Lemma \ref{large}, $\mu$ is supported by the unstable bundle $E_u$ and $\phi_*(\hat x)\geq \tau>0$ for $\mu$-a.e. $\hat x\in \mathbb PTM$. Note also that $\phi_*(\hat x)\in \{\chi_1(\pi \hat x), \chi_2(\pi \hat x)\}$ for $\mu$-almost every $\hat x$. We claim that $\phi_*(\hat x)=\chi_1(\pi \hat x)$. If not, $\nu$ would have an ergodic component with two positive exponents. It is well known that such a measure is necessarily a periodic measure associated to a periodic source $S$. But there is an open neighborhood $U$ of the orbit of $S$ with $f^{-1}U\subset U$ and $\sigma_*\cap U=\emptyset$. In particular we have $\pi\mu_n^{F_n}(U)=0$ for all $n$, because $\pi\mu_n^{F_n}\left(\bigcup_{N\in \mathbb N}f^N\sigma_*\right)=1$ and $f^N\sigma_*\cap U=f^N(\sigma_*\cap f^{-N}U)\subset f^N(\sigma_*\cap U)=\emptyset$. By taking the limit in $n$ we would obtain $\nu(S)=0$, a contradiction. Therefore $\phi_*(\hat x)=\chi_1(\pi \hat x)>\tau$ for $\mu$-almost every $\hat x$, and $\chi_1(x)>\tau >0\geq \chi_2(x)$ for $\nu$-almost every $x$. Then by Proposition \ref{por} and Proposition \ref{coeur} we obtain: \begin{align*}h(\nu)=h(\mu)&\geq \int \psi^q\, d\mu +\delta_q,\nonumber\\ &\geq \int \phi\, d\mu- \frac{1}{r-1}\int \omega_q\, d\mu+ \delta_q,\\ & \geq \chi_1(\nu) -\frac{1}{r-1}\left( \frac{1}{q}\sum_{k=0}^{q-1}\int \log\|d_{f^kx}f^q\|\, d\nu(x)- \chi_1(\nu)\right)+\delta_q, \\ &\geq \chi_1(\nu)-\frac{1}{r-1}\left(\frac{1}{q}\int \log\|d_{x}f^q\|\, d\nu(x)-\chi_1(\nu)\right)+\delta_q. \end{align*} By a standard application of the subadditive ergodic theorem, we have $$\frac{1}{q}\int \log\|d_{x}f^q\|\, d\nu(x)\xrightarrow{q\rightarrow +\infty}\int \chi_1(x) \, d\nu(x)=\chi_1(\nu).$$ Therefore $h(\nu)\geq \chi_1(\nu)$, since $\delta_q\xrightarrow{q\rightarrow \infty}0$. Then Ruelle's inequality implies $h(\nu)= \chi_1(\nu)$. According to the Ledrappier-Young characterization (Theorem \ref{leddd}), the measure $\nu$ is a SRB measure of $f$. Note also that any ergodic component $\xi$ of $\nu$ is also a SRB measure, therefore $h(\xi)=\chi_1(\xi)>\tau$. But by Ruelle's inequality applied to $f^{-1}$, we also get $h(\xi)\leq -\chi_2(\xi)$. In particular we have $\chi_1(x)>\tau>0>-\tau>\chi_2(x)$ for $\nu$-almost every $x$. \begin{rem} We have assumed that $r\geq 2$ is an integer. The proof can be adapted without difficulty to the non-integer case $r>1$. \end{rem} \subsection{Proof of the F\"olner Gibbs property (H)} In this subsection we prove Proposition \ref{coeur}. We will show that for any $\delta>0$ there are $q$ arbitrarily large and $\epsilon'_q>0$ such that we have for any partition $P$ of $\mathbb PTM$ with diameter less than $\epsilon'_q$: \begin{equation}\label{fred}\exists n_* \ \forall x\in A_n\subset \sigma_* \textrm{ with } n_*<n\in \mathfrak{n}, \ \ \ \frac{1}{\lambda_\sigma\left(P^{F_n}(\hat x)\cap \pi^{-1}A_n\right)}\geq e^{\delta \sharp F_n}e^{\psi_q^{F_n}(\hat x)}, \end{equation} where we write $\psi_q^{F_n}(\hat x):=\sum_{k\in F_n}\psi^q(F^k\hat x)$ to simplify the notation. For $G\subset \mathbb{N}$ we let $A^G$ be the set of points $x\in A$ with $G\subset E(x)$.
When $G=\{ k\}$ or $\{k,l\}$ with $k,l\in \mathbb N$, we simply write $A^G= A^k$ or $A^{k,l}$. We recall that $\partial F_n\subset E(x)$ for all $x\in A_n$, in other terms $A_n\subset A^{\partial F_n}$. We will show (\ref{fred}) for $A^{\partial F_n}$ in place of $A_n$. Fix the error term $\delta>0$. Let $q$ be so large that $C_r^{1/q}<e^{\delta/3}$ and let $\epsilon'_q$ be as in Lemma \ref{annens}. Without loss of generality we may assume $\epsilon'_q<\frac{\alpha\epsilon}{4}$. Recall that $\epsilon, \alpha>0$ correspond to the fixed scales in the definition of the geometric set $E$. We can also ensure that\begin{equation}\label{ire}\forall \hat x, \hat y \in \mathbb PTM \text{ with } \hat{\mathrm{d}}(\hat x, \hat y)<\epsilon'_q, \ \ |\phi(\hat x)-\phi(\hat y)|<\delta/3. \end{equation} In the next three lemmas we consider a strongly $\epsilon$-bounded curve $\sigma$. \begin{lem}\label{first} For any subset $N$ of $M$, any $k\in \mathbb{N}$ and any ball $B_k$ of radius less than $\epsilon'_q$, there exists a finite family $(y_j)_{j\in J}$ of points of $\sigma_*\cap A^k\cap f^{-k}B_k\cap N$ such that : \begin{itemize} \item $B_k\cap f^{k}(\sigma_*\cap A^k\cap N)\subset \bigcup_{j\in J}D_k(y_j)$, \item $B_k\cap D_k(y_j)$, $j\in J$, are pairwise disjoint. \end{itemize} \end{lem} \begin{minipage}{.85\textwidth} \begin{proof}Let $y\in \sigma_*\cap A^k\cap N$ with $f^ky\in B=B_k$. Let $2B$ be the ball with the same center as $B$ and twice the radius. The curve $D_k(y)$ lies in a cone with opening angle $\pi/6$ by (\ref{oscill}). Moreover its length is larger than $\alpha\epsilon>4\epsilon'_q$. By elementary Euclidean geometric arguments, the set $D_k(y)\cap 2B$ is a curve crossing $2B$, i.e. its two endpoints lie in the boundary of $2B$. Two such subcurves of $f^k\circ \sigma$, if not disjoint, are necessarily equal. \end{proof} \end{minipage} \begin{minipage}{.35\textwidth} \includegraphics[scale=0.2]{dessin.pdf} \end{minipage} As the distortion is bounded on $D_k(y_j)$ by (\ref{distor}) and the semi-length of $D_k(y_j)$ is larger than $\alpha \epsilon$ (because $y_j$ belongs to $A^k$), we have \begin{align}\label{mar} \sum_{j\in J}\frac{4}{9}e^{-\phi_k (\hat y_j)}|D_k(y_j)|&\leq \sum_{j\in J}\left|f^{-k}D_k(y_j)\right|,\nonumber \\ &\leq |\sigma_*|\leq 2\epsilon,\nonumber \\ \sum_{j\in J}2\alpha\epsilon e^{-\phi_k (\hat y_j)} &\leq \frac{9}{2}\epsilon,\nonumber \\ \sum_{j\in J}e^{-\phi_k (\hat y_j)} &\leq \frac{9}{4\alpha}. \end{align} \begin{lem}\label{second} For any subset $N$ of $M$ and any dynamical ball $B_{\llbracket 0,k\rrbracket}:=B_\sigma^F(x,\epsilon'_q,k+1)$, there exists a finite family $(z_i)_{i\in I}$ of points of $\sigma_*\cap A^{k}\cap B_{\llbracket 0,k\rrbracket}\cap N$ such that \begin{itemize} \item $f^{k}\left(\sigma_*\cap A^{k}\cap B_{\llbracket 0,k\rrbracket}\cap N\right)\subset \bigcup_{i\in I}D_k(z_i)$, \item $B(f^kx, \epsilon'_q)\cap D_k(z_i)$, $i\in I$, are pairwise disjoint, \item $\sharp I \leq B_qe^{\delta k/3}e^{\frac{\omega_q^k(\hat x)}{r-1}}$ for some constant $B_q$ depending only on $q$.
\end{itemize} \end{lem} \begin{proof} As in the previous lemma we consider the subcurves $D_k(z)$ for $z \in \sigma_*\cap A^{k}\cap B_{\llbracket 0,k\rrbracket}\cap N$. By Lemma \ref{annens} we can reparametrize $B_{\llbracket 0,k\rrbracket}$ by a family of strongly $(k, \epsilon'_q)$-bounded curves with cardinality less than $B_qC_r^{\frac{k}{q}}e^{\frac{\omega_q^k(\hat x)}{r-1}}\leq B_qe^{\delta k/3}e^{\frac{\omega_q^k(\hat x)}{r-1}}$ (recall that $C_r^{1/q}<e^{\delta/3}$). Each of these curves is contained in some $D_k(z)$. But, as already mentioned, the sets $B(f^kx,\epsilon'_q)\cap D_k(z)$, $z\in \sigma_*\cap A^{k}\cap B_{\llbracket 0,k\rrbracket}$, are either disjoint or equal. \end{proof} \begin{lem}\label{third} For any dynamical ball $B_{\llbracket k,l\rrbracket}:=f^{-k}B_{\sigma}^F(f^kx,\epsilon'_q,l-k+1)$, there exist a finite family $(y_i)_{i\in I}$ of points of $\sigma_*\cap A^{k,l}\cap B_{\llbracket k,l\rrbracket}$ and a partition $I=\coprod_{j\in J}I_j$ of $I$ with $j\in I_j$ for all $j\in J\subset I$ such that \begin{itemize} \item $ f^{l}(\sigma_*\cap A^{k,l}\cap B_{\llbracket k,l\rrbracket})\subset \bigcup_{i\in I}D_l(y_i)$, \item $B(f^lx,\epsilon'_q)\cap D_l(y_i)$, $i\in I$, are pairwise disjoint, \item $\forall j\in J \ \forall i,i'\in I_j, \ D_k(y_i)\cap B(f^kx,\epsilon'_q)=D_k(y_{i'})\cap B(f^kx,\epsilon'_q)$, \item $\forall j\in J, \ \sharp I_j \leq B_qe^{\delta (l-k)/3}e^{\frac{\omega_q^{l-k}(F^k\hat x)}{r-1}}$ for some constant $B_q$ depending only on $q$. \end{itemize} \end{lem} \begin{proof}We first apply Lemma \ref{first} to $\sigma$ and $N=A^{k,l}\cap B_{\llbracket k,l\rrbracket}$ to get the collection of strongly $\epsilon$-bounded curves $\left(D_k(y_j)\right)_{j\in J}$. Then we apply Lemma \ref{second} to each $ D_k(y_j)$, $j\in J$, and $N=f^{k}(B_{\llbracket k,l\rrbracket}\cap A_k\cap \sigma_*)$ to get a family $(z_i)_{i\in I_j}$ of points of $ D_k(y_j)\cap A^{l-k}\cap f^{k}( B_{\llbracket k,l\rrbracket}\cap A_k\cap \sigma_*)$ satisfying: \begin{itemize} \item $f^{l-k}\left(D_k(y_j)\cap A^{l-k}\cap f^{k}(B_{\llbracket k,l\rrbracket}\cap A_k\cap \sigma_*)\right)\subset \bigcup_{i\in I_j}D_{l-k}(z_i)$, \item $B_l\cap D_{l-k}(z_i)$, $i\in I_j$, are pairwise disjoint, \item $\sharp I_j \leq B_q e^{\delta (l-k)/3}e^{\frac{\omega_q^{l-k}(F^k\hat x)}{r-1}}$. \end{itemize} For all $j\in J$ we can take $j\in I_j$ and $z_j=f^k(y_j)$. We conclude the proof by letting $y_i=f^{-k}z_i\in \sigma_*\cap A^{k,l}\cap B_{\llbracket k,l\rrbracket}$ for all $i\in I:=\coprod_{j\in J}I_j$. \end{proof} We now prove $(H)$. Recall that $\lambda=\lambda_\sigma$ is the push-forward on $\mathbb PTM$ of the Lebesgue measure on $\sigma_*$. As $\sharp \partial F_n=o(n)$ it is enough to show that there is a constant $C$ such that for any strongly $\epsilon$-bounded curve $\sigma$ we have \begin{equation}\label{fdf}\lambda_\sigma\left(P^{F_n}(\hat x)\cap \pi^{-1}A^{\partial F_n}\right)\leq C^{\sharp \partial F_n}e^{2\delta \sharp F_n/3}e^{-\psi_q^{F_n}(\hat x)}.
\end{equation} To prove (\ref{fdf}) we argue by induction on the number of connected components of $F_n$. Let $\llbracket k,l\rrbracket$, $0\leq k\leq l$, be the first connected component of $F_n$ and write $G_{n-l}= \mathbb{N}^*\cap (F_n-l)$. Then with the notations of Lemma \ref{third} we get \begin{align*} \lambda_{\sigma}\left(P^{F_n}(\hat x)\cap \pi^{-1}A^{\partial F_n}\right)&\leq \lambda_\sigma\left( \coprod_{i\in I}F^{-l}\left(\pi^{-1}A^{\partial G_{n-l}}\cap P^{G_{n-l}}(F^l\hat x)\cap D_l(y_i)\right)\right), \\ &\leq \lambda_{\sigma}\left( \coprod_{j\in J} F^{-k} \left(\coprod_{i\in I_j}F^{-(l-k)}\left(\pi^{-1}A^{\partial G_{n-l}}\cap P^{G_{n-l}}(F^l\hat x)\cap D_l(y_i)\right)\right)\right). \end{align*} For $j\in J$ we let $\sigma_j^k$ be the strongly $\epsilon$-bounded curve given by $D_k(y_j)$. By the bounded distortion property (\ref{distor}) we get \begin{align*} \lambda_{\sigma}\left(P^{F_n}(\hat x)\cap \pi^{-1} A^{\partial F_n}\right)&\leq 3\sum_{j\in J}e^{-\phi_k(\hat y_j)}\lambda_{\sigma_j^k}\left(\coprod_{i\in I_j}F^{-(l-k)}\left(\pi^{-1} A^{\partial G_{n-l}}\cap P^{G_{n-l}}(F^l\hat x)\cap D_l(y_i)\right)\right). \end{align*} By using again the bounded distortion property (now between the times $k$ and $l$) we get, with $\sigma_i^l$ being the curve associated to $D_l(y_i)$ : \begin{align*} \lambda_{\sigma}\left(P^{F_n}(\hat x)\cap \pi^{-1}A^{\partial F_n}\right)\leq & 9\sum_{j\in J}e^{-\phi_k(\hat y_j)}\sum_{i\in I_j} e^{-\phi_{l-k}(F^k\hat y_i)} \lambda_{\sigma_i^l}\left(\pi^{-1}A^{\partial G_{n-l}}\cap P^{G_{n-l}}(F^l\hat x)\right). \end{align*} We may assume that any $\hat y_i$, $i\in I$, lies in $P^{F_n}(\hat x)$. In particular we have $|\phi_{l-k}(F^k \hat y_i)-\phi_{l-k}(F^k\hat x)|<(l-k)\delta/3$ by (\ref{ire}). Then \begin{align*} \lambda_{\sigma}\left(P^{F_n}(\hat x)\cap \pi^{-1}A^{\partial F_n}\right)\leq &9\left(\sum_{j\in J}e^{-\phi_k(\hat y_j)}\right) e^{\delta(l-k)/3} e^{-\phi_{l-k}(F^k\hat x)}\sup_j \sharp I_j \\ &\times \sup_{i\in I}\lambda_{\sigma_i^l}\left(\pi^{-1}A^{\partial G_{n-l}}\cap P^{G_{n-l}}(F^l\hat x)\right). \end{align*} By (\ref{mar}) and the last item of Lemma \ref{third} we obtain \begin{align*} \lambda_{\sigma}\left(P^{F_n}(\hat x)\cap \pi^{-1} A^{\partial F_n}\right)&\leq \frac{50B_qe^{2\delta (l-k)/3}}{\alpha}e^{-\phi_{l-k}(F^k\hat x)+\frac{\omega_q^{l-k}(F^k\hat x)}{r-1}}\sup_{i\in I}\lambda_{\sigma_i^l}\left(\pi^{-1}A^{\partial G_{n-l}}\cap P^{G_{n-l}}(F^l\hat x)\right), \\ & \leq \frac{50B_qe^{2\delta (l-k)/3}}{\alpha}e^{-\psi_q^{\llbracket k,l\rrbracket}(\hat x)}\sup_{i\in I}\lambda_{\sigma_i^l}\left(\pi^{-1}A^{\partial G_{n-l}}\cap P^{G_{n-l}}(F^l\hat x)\right).
\end{align*} By the induction hypothesis (\ref{fdf}) applied to $G_{n-l}$ for each $\sigma_i^l$, we have for all $i\in I$: $$\lambda_{\sigma_i^l}\left(\pi^{-1}A^{\partial G_{n-l}}\cap P^{G_{n-l}}(F^l\hat x)\right)\leq C^{\sharp \partial G_{n-l}} e^{2\delta\sharp G_{n-l}/3}e^{-\psi_q^{G_{n-l}}(F^l\hat x)}.$$ Note that $\sharp \partial F_n=\sharp \partial G_{n-l}+2$. We conclude, by taking $C=\sqrt{ \frac{50B_q}{\alpha}}$, that \begin{align*} \lambda_{\sigma}\left(P^{F_n}(\hat x)\cap \pi^{-1}A^{\partial F_n}\right)&\leq \frac{50B_qe^{2\delta \sharp F_n /3}}{\alpha}C^{\sharp \partial G_{n-l}}e^{-\psi_q^{F_n}(\hat x)},\\ &\leq C^{\sharp \partial F_n} e^{2\delta \sharp F_n/3}e^{-\psi_q^{F_n}(\hat x)}. \end{align*} This completes the proof of (\ref{fred}). \section{End of the proof of the Main Theorem} \subsection{The covering property of the basins} For $x\in M$ the stable/unstable manifolds $W^{s/u}(x)$ at $x$ are defined as follows : $$W^s(x):=\{y\in M, \ \limsup_{n\rightarrow +\infty}\frac{1}{n}\log d(f^n x,f^n y)<0\},$$ $$W^u(x):=\{y\in M, \ \limsup_{n\rightarrow +\infty}\frac{1}{n}\log d(f^{-n} x,f^{-n} y)<0\}.$$ For a subset $\Gamma$ of $M$ we let $W^s(\Gamma)=\bigcup_{x\in \Gamma}W^s(x)$. According to Pesin's theory, there are a nondecreasing sequence of compact, a priori non-invariant, sets $(K_n)_n$ (called the Pesin blocks) with $\mathcal R^*=\bigcup_n K_n$ and two families of embedded $C^\infty$ discs $(W^s_{loc}(x))_{x\in K}$ and $(W^u_{loc}(x))_{x\in K}$ (called the local stable and unstable manifolds) such that : \begin{itemize} \item $W^{s/u}_{loc}(x)$ are tangent to $E_{s/u}(x)$ at $x$, \item the splitting $E_u(x)\oplus E_s(x)$ and the discs $W^{s/u}_{loc}(x)$ depend continuously on $x\in K_n$ for each $n$. \end{itemize} For $\gamma>0$ and $x\in K$ we let $W^{s/u}_\gamma(x)$ be the connected component of $B(x,\gamma)\cap W^{s/u}_{loc}(x)$ containing $x$. \begin{prop}\label{bass} The set $\left\{\chi>\frac{R(f)}{r}\right\}$ is covered by the basins of ergodic SRB measures $\mu_i$, $i\in I$, up to a set of zero Lebesgue measure. \end{prop} In fact we prove a stronger statement by showing that $\left\{\chi>\frac{R(f)}{r}\right\}$ is contained, Lebesgue a.e., in $W^s(\Gamma)$ where $\Gamma$ is any $f$-invariant subset of $\bigcup_{i\in I}\mathcal B(\mu_i)$ with $\mu_i(\Gamma)=1$ for all $i$. \\ So far we have only used the characterization of SRB measures in terms of entropy (Theorem \ref{leddd}). In the proof of Proposition \ref{bass} we will use the absolute continuity property of SRB measures. Let $\mu$ be a Borel measure on $M$. We recall that a measurable partition $\xi$ in the sense of Rokhlin \cite{rok} is said to be $\mu$-subordinate to $W^u$ when $\xi(x)\subset W^u(x)$ and $\xi(x)$ contains an open neighborhood of $x$ in the topology of $W^u(x)$ for $\mu$-almost every $x$. The measure $\mu$ is said to have \textbf{absolutely continuous conditional measures on unstable manifolds} if for every measurable partition $\xi$ $\mu$-subordinate to $W^u$, the conditional measures $\mu_x^\xi$ of $\mu$ with respect to $\xi$ satisfy $\mu_x^\xi \ll \mathop{\mathrm{Leb}}_{W^u(x)}$ for $\mu$-almost every $x$.
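As a classical illustration of these notions (added here for the reader's convenience; it is not used in the sequel), consider the hyperbolic automorphism of $\mathbb T^2$ induced by
$$A=\begin{pmatrix} 2 & 1\\ 1 & 1\end{pmatrix}, \qquad \chi_1=\log\frac{3+\sqrt 5}{2}>0>\chi_2=\log\frac{3-\sqrt 5}{2}.$$
The unstable manifolds are the lines directed by the expanding eigenvector, and the conditional measures of the Haar measure $\mathop{\mathrm{Leb}}_{\mathbb T^2}$ on unstable plaques, with respect to any subordinate partition, are proportional to arc length. Hence $\mathop{\mathrm{Leb}}_{\mathbb T^2}$ has absolutely continuous conditional measures on unstable manifolds and, consistently with Theorem \ref{leddd}, its entropy equals $\chi_1$.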
\begin{proof} We argue by contradiction. Take $\Gamma$ as above. Assume there is a Borel set $B$ with positive Lebesgue measure contained in the complement of $W^s(\Gamma)$ such that we have $\chi(x)>b>\frac{R(f)}{r}$ for all $x\in B$. Then we follow the approach of Section \ref{building}. We consider a $C^r$ smooth curve $\sigma$ with $\chi(x,v_x)>b$ for $x\in B'\subset B$, $\mathop{\mathrm{Leb}}_{\sigma_*}(B')>0$. One can then define the geometric set $E$ on a subset $B''$ of $B'$ with $\mathop{\mathrm{Leb}}_{\sigma_*}(B'')>0$. We also let $\tau$, $\beta$, $\alpha$ and $\epsilon$ be the parameters associated to $E$. Recall that : \begin{itemize} \item $E$ is $\tau$-large with respect to the derivative cocycle $\Phi$, \item $\overline{d}(E(x))\geq \beta >0$ for $x\in B''$, \item $D_k(y)=f^k(H_k(y))$ has semi-length larger than $\alpha\epsilon$ when $k\in E(y)$, $y\in B''$. \end{itemize} Let $B'''$ be the subset of $B''$ given by the density points of $B''$ with respect to $\mathop{\mathrm{Leb}}_{\sigma_*}$. In particular, we have $$\forall x\in B''', \ \ \frac{\textrm{Leb}_{\sigma_*}\left( H_k(x)\cap B''\right) }{\textrm{Leb}_{\sigma_*}(H_k(x))}\xrightarrow{k\rightarrow +\infty}1.$$ We choose a subset $A$ of $B'''$ with $\mathop{\mathrm{Leb}}_{\sigma_*}(A)>0$ such that the above convergence is uniform in $x\in A$. Then from this set $A$ and the geometric set $E$ on $A$ we may build $\mathfrak n$, $(F_n)_{n\in \mathfrak n}$ and $(\mu_n^{F_n})_{n\in \mathfrak n}$ as in Sections \ref{drei} and \ref{zwei}. As proved in Section \ref{building}, any limit measure $\mu$ of $\mu_n^{F_n}$ is supported on the unstable bundle and projects to a SRB measure $\nu$ with $\chi_1(x)\geq\tau>0>-\tau\geq \chi_2(x)$ for $\nu$ a.e. $x$. The measure $\nu$ is a barycenter of ergodic SRB measures with such exponents. Take $P=K_N$ a Pesin block with $\nu(P)\sim 1>1-\beta$. We let $\theta$ and $l$ be respectively the minimal angle between $E_u$ and $E_s$ and the minimal length of the local stable and unstable manifolds on $P$. Let $\xi$ be a measurable partition subordinate to $W^u$ with diameter less than $\gamma>0$. We have $\nu(\Gamma\cap P)=\int \nu_x^\xi(\Gamma\cap P)\, d\nu(x)\sim 1$ and $\nu_x^\xi \ll \mathop{\mathrm{Leb}}_{W_\gamma^u(x)}$ for $\nu$ a.e. $x$. Therefore we get for some $c>0$ $$\nu\left(x, \ \textrm{Leb}_{W_\gamma^u(x)}(\Gamma\cap P)>c\right)\sim 1 .$$ We let $F=\{x\in \Gamma\cap P, \ \mathop{\mathrm{Leb}}_{W_\gamma^u(x)}(\Gamma\cap P)>c\}$. Observe that we have again $\nu(F)\sim 1$. For $x\in \sigma_*$ and $y\in P$ we use the following notations : $$\hat x_\sigma=(x,v_x)\in \mathbb PT\sigma_*, \qquad \hat y_u=(y,v_y^u)\in \mathbb PTM,$$ where $v_y^u$ is the element of $\mathbb PTM$ representing the line $E_u(y)$. Let $\hat F_u^\gamma$ be the open $\gamma/8$-neighborhood of $\hat F_u:=\{\hat y_u, \ y \in F \}$ in $\mathbb PTM$. Recall that $E(x)$ denotes the set of geometric times of $x$.
We let for $n\in \mathfrak n$: $$\zeta_n:=\int \frac{1}{\sharp F_n }\sum_{k\in E(x)\cap F_n}\delta_{F^k \hat x_\sigma } \, d\mu_n(\hat x_\sigma ).$$ Observe that $ \zeta_n(\mathbb PTM)\geq \inf_{x\in A_n} d_n(E(x)\cap F_n) $. By the last item in Lemma \ref{measurable}, we have $\liminf_{n\in \mathfrak n}\inf_{ x\in A_n} d_n(E(x)\cap F_n)\geq\beta$. Therefore there is a weak limit $\zeta=\lim_k\zeta_{p_k}$ with $\zeta \leq \mu$ and $\zeta(\mathbb PTM) \geq \beta$. From $\mu(\hat F_u^\gamma)\sim 1>1-\beta$ we get $0<\zeta(\hat F_u^\gamma)\leq \lim_k \zeta_{p_k}(\hat F_u^\gamma)$. Note also that $\hat A_\sigma:=\{\hat y_\sigma, \ y\in A \} $ has full $\mu_n$-measure for all $n$. In particular, for infinitely many $n\in \mathbb N$ there is $(x^n,v_{x^n})=\hat x^n_\sigma \in \hat A_\sigma $ with $F^n\hat x^n_\sigma\in \hat F_u^{\gamma}$ and $n\in E(x^n)$. Let $\hat y^n_u=(y^n,v^u_{y^n})\in \hat F_u$ be a point which is $\gamma/ 8$-close to $F^n\hat x^n_\sigma$. Then for $\gamma\ll \delta\ll \min(\theta, l)$ independent of $n$, the curve $D_n^\delta(x^n):=D_n(x^n)\cap B(f^nx^n,\delta)$ is transverse to $W^s(P\cap \Gamma\cap W^u_{\gamma} (y^n))$ and may be written as the graph of a $C^r$ smooth function $\psi:E\subset E_u(y^n)\rightarrow E_s(y^n)$ with $\|d\psi\|<L$ for a universal constant $L$. By Theorem 8.6.1 in \cite{pes} the associated holonomy map $h: W^{u}_\gamma(y^n)\rightarrow D_n^\delta(x^n)$ is absolutely continuous and its Jacobian is bounded from below by a positive constant depending only on the Pesin block $P=K_N$ (not on $x^n$ and $y^n$). In particular $\mathop{\mathrm{Leb}}_{D_n(x^n)}\left(W^s(\Gamma \cap P) \right)\geq c'$ for some constant $c'$ independent of $n$. The distortion of $df^n$ on $H_n(x^n)$ being bounded by $3$, we get (recall $f^nH_n(x^n)=D_n(x^n)$): $$ (2\epsilon)^{-1}\mathop{\mathrm{Leb}}\left(D_n(x^n)\setminus f^n B\right)\leq \frac{\mathop{\mathrm{Leb}}\left(D_n(x^n)\setminus f^n B\right)}{\mathop{\mathrm{Leb}}\left(D_n(x^n)\right)}\leq 9 \frac{\mathop{\mathrm{Leb}}\left(H_n(x^n)\setminus B \right) }{\mathop{\mathrm{Leb}}\left(H_n(x^n)\right)} \xrightarrow{n\rightarrow \infty}0.$$ Therefore for $n$ large enough, there exists $x\in f^{n}B\cap W^s(\Gamma \cap P)$, in particular $B\cap f^{-n} W^s(\Gamma)=B\cap W^s(\Gamma)\neq \emptyset$. This contradicts the definition of $B$. \end{proof} \subsection{The maximal exponent} Let $\mathcal R^{+*}$ denote the invariant subset of Lyapunov regular points $x$ of $(M,f)$ with $\chi_1(x)>0>\chi_2(x)$.
Such a point admits so-called regular neighborhoods (or $\epsilon$-Pesin charts): \begin{lem}\cite{ppes}\label{chart} For a fixed $\epsilon>0$ there exist a measurable function $q=q_\epsilon:\mathcal R^{+*}\rightarrow (0,1]$ with $e^{-\epsilon}<q(fx)/q(x)<e^{\epsilon}$ and a collection of embeddings $\Psi_x:B(0,q(x))\subset T_xM=E_u(x)\oplus E_s(x)\sim \mathbb R^2\rightarrow M$ with $\Psi_x(0)=x$ such that $f_x=\Psi_{fx}^{-1}\circ f\circ \Psi_x$ satisfies the following properties : \begin{itemize} \item \begin{equation*}d_0f_x = \begin{pmatrix} a_\epsilon^1(x) &0\\ 0& a_\epsilon^2(x) \end{pmatrix}, \end{equation*} with $e^{-\epsilon}e^{\chi_i(x)}<a_\epsilon^i(x) <e^{\epsilon}e^{\chi_i(x)}$ for $i=1,2$, \item the $C^1$ distance between $f_x$ and $d_0f_x$ is less than $\epsilon$, \item there exist a constant $K$ and a measurable function $A=A_\epsilon:\mathcal R^{+*}\rightarrow \mathbb R$ such that for all $y,z\in B(0,q(x))$ $$Kd(\Psi_x(y),\Psi_x(z))\leq \|y-z\|\leq A(x)d(\Psi_x(y), \Psi_x(z)),$$ with $e^{-\epsilon}<A(fx)/A(x)<e^{\epsilon}$. \end{itemize} \end{lem} For any $i\in I$ we let $$E_i:=\{x, \ \chi(x)=\chi(\mu_i)\}.$$ The set $E_i$ has full $\mu_i$-measure by the subadditive ergodic theorem. Put $\Gamma_i=\mathcal B(\mu_i)\cap E_i\cap \mathcal R^{+*}$ and $\Gamma=\bigcup_{i}\Gamma_i$. Clearly $\Gamma$ is $f$-invariant. We finally check that $\chi(x)=\chi(\mu_i)$ for $x\in W^s(\Gamma_i)$. For uniformly hyperbolic systems, we have $$\Sigma \chi(x)=\lim_n\frac{1}{n}\log \mathop{\mathrm{Jac}}(d_xf^n_{E_u})=\lim_n \int \log \mathop{\mathrm{Jac}}(d_yf_{E_u})\, d\delta_x^n.$$ As the geometric potential $y\mapsto \log \mathop{\mathrm{Jac}} d_yf_{E_u}$ is continuous in this case, any point $x$ in the basin of a SRB measure $\mu$ satisfies $\Sigma \chi(x)=\int \Sigma \chi(y)\, d\mu(y)$. \begin{lem}\label{fini} If $y\in W^s(x)$ with $x\in \mathcal R^{+*}$, then $\chi(y)=\chi(x)$. \end{lem} \begin{proof} Fix $x\in \mathcal R^{+*}$ and $\delta>0$. We apply Lemma \ref{chart} with $\epsilon\ll \chi_1(x) $. For $\alpha>0$ we let $\mathcal C_\alpha$ be the cone $\mathcal C_\alpha=\{(u,v)\in \mathbb R^2, \ \alpha\|u\|\geq \|v\|\}$. We may choose $\alpha>0$ and $\epsilon>0$ so small that for all $k\in \mathbb N$ we have $d_zf_{f^kx}(\mathcal C_\alpha)\subset \mathcal C_\alpha$ and $\|d_zf_{f^kx}(v)\|\geq e^{\chi_1(x)-\delta}\|v\|$ for all $v\in\mathcal C_\alpha$ and all $z\in B(0, q_\epsilon(f^kx))$. Let $y\in W^s(x)$. There are $C>0$ and $\lambda\in(0,1)$ such that $d(f^nx,f^ny)<C\lambda^n$ holds for all $n\in \mathbb N$. We can choose $\epsilon\ll |\log\lambda|$. In particular there is $N>0$ such that $f^ny$ belongs to $\Psi_{f^nx} B(0, q(f^nx))$ for $n\geq N$, since we have $A(f^nx)<e^{\epsilon n}A(x)$ and $q(f^nx)>e^{-\epsilon n}q(x)$. Let $z \in B(0,q(f^Nx))$ with $\Psi_{f^Nx}(z)=f^Ny$. Then for all $v\in \mathcal C_\alpha$ and for all $n\geq N$ we have $\|d_z\left(\Psi_{f^{n}x}^{-1}\circ f^{n-N}\circ \Psi_{f^Nx} \right)(v)\|\geq e^{(n-N)(\chi_1(x)-\delta)}\|v\|.$ As the conorm of $d\Psi_{f^nx}$ is bounded from below by $A(f^nx)^{-1}$ for all $n$, we get \begin{align*} \chi(y)&= \limsup_n\frac{1}{n}\log \|d_{f^Ny} f^{n-N}\|,\\ &= \limsup_n\frac{1}{n}\log \|d_z \left(f^{n-N}\circ \Psi_{f^Nx}\right)\|,\\ & \geq \limsup_{n\rightarrow +\infty}\frac{1}{n}\log \left( A(f^nx)^{-1}\left\|d_z\left(\Psi_{f^nx}^{-1}\circ f^{n-N}\circ \Psi_{f^Nx} \right) \right\|\right),\\ &\geq \chi_1(x)-\delta-\epsilon.
\end{align*} On the other hand we have \begin{align*} \left\|d_z\left(\Psi_{f^nx}^{-1}\circ f^{n-N}\circ \Psi_{f^Nx} \right) \right\|&\leq \prod_{k=N}^{n-1}\sup_{t\in B(0, q(f^kx))}\|d_{t}f_{f^kx}\|,\\ &\leq \left(e^{\chi_1(x)+\epsilon}+\epsilon\right)^{n-N},\\ &\leq e^{(n-N)(\chi_1(x)+2\epsilon)}. \end{align*} Then, since $\|d\Psi_{f^nx}\|$ is bounded by a constant independent of $n$, it follows that: \begin{align*} \chi(y)&\leq \limsup_{n\rightarrow +\infty}\frac{1}{n}\log \left(\left\|d_z\left(\Psi_{f^nx}^{-1}\circ f^{n-N}\circ \Psi_{f^Nx} \right) \right\|\right), \\ &\leq \chi_1(x)+2\epsilon. \end{align*} As this holds for arbitrarily small $\epsilon$ and $\delta$, we get $\chi(y)=\chi_1(x)=\chi(x)$. \end{proof} We conclude, with $\Lambda=\{\chi(\mu_i), \ i\in I\}$, that for Lebesgue a.e. point $x$ we have $\chi(x)\in \left]-\infty, \frac{R(f)}{r}\right]\cup \Lambda$ and that $\{\chi=\lambda\}\stackrel{o}{\subset}\bigcup_{i\in I, \, \chi(\mu_i)=\lambda}\mathcal B(\mu_i)$ for all $\lambda\in \Lambda$. The proof of the Main Theorem is now complete. It also follows from Lemma \ref{fini} that the converse statement of Corollary \ref{coco} holds: if $(f,M)$ admits an SRB measure, then $\mathop{\mathrm{Leb}}(\chi>0)>0$.\\ In the proof of Lemma \ref{fini} we can choose the cone $\mathcal C_\alpha$ to be contracting, so that any vector in a small cone at $y$ will converge to the unstable direction $E_u(x)$. In other words, if we endow the smooth manifold $\mathbb PTM$ with a smooth Riemannian structure, then the lift $\mu$ of an ergodic SRB measure $\nu$ to the unstable bundle is a physical measure of $(\mathbb PTM, F)$. Conversely, if $\mu$ is a physical measure of $(\mathbb PTM, F)$ supported on the unstable bundle above $\mathcal R^{+*}$, we can reproduce the scheme of the proof of the Main Theorem to show that $\mu$ projects to an SRB measure $\nu$. Indeed we may consider a $C^\infty$ smooth curve $\sigma$ such that $\hat x=(x,v_x)$ lies in the basin of $\mu$ for $\hat x$ in a positive Lebesgue measure set $A$ of $\hat \sigma_*$. Then, by following the above construction of SRB measures, we obtain that $\mu=\lim_{n}\frac{1}{n}\sum_{k=0}^{n-1}F_*^k\mathop{\mathrm{Leb}}_A$ projects to an SRB measure (or one can directly use the approach of \cite{bure}). This converse statement is very similar to a result of Tsujii (Theorem A in \cite{tsu}), which states that, in dimension two, for a $C^{1+}$ surface diffeomorphism an ergodic hyperbolic measure $\nu$ such that the set of \textbf{regular} points $x\in \mathcal B(\nu)$ with $\chi(x)=\int \chi\, d\nu$ has positive Lebesgue measure is an SRB measure. Indeed, if $\mu$ is a physical measure of $(\mathbb PTM, F)$ supported on the unstable bundle above $\mathcal R^{+*}$, its projection $\nu$ satisfies $\chi(\nu)=\chi(x)$ for any $x\in \pi(\mathcal B(\nu))$. In the present paper we work with the stronger $C^\infty$ assumption, but, in return, points in the basin are not assumed to be regular, in contrast with Tsujii's theorem. From the above discussion, we may restate the Main Theorem as follows: \begin{theorem} Let $f:M\circlearrowleft$ be a $C^\infty$ surface diffeomorphism and let $F:\mathbb PTM\circlearrowleft$ be the induced map on the projective tangent bundle. Then the basins of the physical measures of $(\mathbb PTM,F)$ cover Lebesgue almost all of the set $\{(x,v)\in \mathbb PTM, \ \chi(x,v)>0\}$.
\end{theorem} \section{Nonpositive exponent in contracting sets}\label{gui} In this last section we show Theorem \ref{yoyo}. For a dynamical system $(M,f)$, a subset $U$ of $M$ is said to be \textbf{almost contracting} when for all $\epsilon>0$ the set $E_\epsilon=\{k\in \mathbb N, \ \mathop{\mathrm{diam}}(f^k U)>\epsilon\}$ satisfies $\overline{d}(E_\epsilon)=0$. In \cite{gui} the authors build subsets with historic behaviour and positive Lebesgue measure which are almost contracting but not contracting. We will show Theorem \ref{yoyo} for almost contracting sets. \\ We borrow the next lemma from \cite{bure} (Lemma 4 therein), which may be stated with the notations of Section \ref{srbb} as follows: \begin{lem}\label{dav} Let $f:M\circlearrowleft$ be a $C^\infty$ diffeomorphism and let $U$ be a subset of $M$ with $\mathop{\mathrm{Leb}}\left(\left\{\chi>a\right\}\cap U\right)>0$ for some $a>0$. Then for all $\gamma>0$ there are a $C^\infty$ smooth embedded curve $\sigma_*$ and $I\subset \mathbb N$ with $\sharp I=\infty$ such that \[\forall n\in I, \ \left| \{x\in U\cap \sigma_*, \, \| d_xf^n(v_x)\|>e^{na} \} \right|>e^{-n\gamma}.\] \end{lem} We are now in a position to prove Theorem \ref{yoyo} for almost contracting sets. \begin{proof}[Proof of Theorem \ref{yoyo}] We argue by contradiction, assuming $\mathop{\mathrm{Leb}}\left(\left\{\chi>a\right\}\cap U\right)>0$ for some $a>0$, where $U$ is an almost contracting set. By Yomdin's theorem on one-dimensional local volume growth for $C^\infty$ dynamical systems \cite{yom} there is $\epsilon>0$ so small that \begin{equation}\label{volu} v^*(f,\epsilon)=\sup_{\sigma}\limsup_{n\to\infty}\frac{1}{n}\sup_{x\in M}\log \left|f^n(B(x,\epsilon,n)\cap \sigma_*)\right|<a/2, \end{equation} where the supremum is taken over all $C^{\infty}$ smooth embedded curves $\sigma:[0,1]\rightarrow M$. As $U$ is almost contracting, there are subsets $(C_n)_{n\in \mathbb N}$ of $M$ with $\lim_n\frac{\log \sharp C_n}{n}=0$ satisfying for all $n$ \begin{equation}\label{rat} U\subset \bigcup_{x\in C_n}B(x,\epsilon,n). \end{equation} Fix an error term $\gamma\in ]0,\frac{a}{2}[$. Then by Lemma \ref{dav} there are a $C^\infty$-smooth curve $\sigma_*\subset U$ and an infinite subset $I$ of $\mathbb N$ such that for all $n\in I$ \begin{eqnarray*} \sum_{x\in C_n}\left| f^n(B(x,\epsilon,n)\cap \sigma_*)\right|&\geq &\left| f^n(U\cap \sigma_*)\right| \text{ by (\ref{rat}),}\\ &\geq & e^{na}\left| \{x\in U\cap\sigma_*, \, \| d_xf^n(v_x)\|>e^{na} \}\right|,\\ &\geq & e^{n(a-\gamma)} \text{ by Lemma \ref{dav},}\\ \sharp C_n \, \sup_{x\in M} \left|f^n(B(x,\epsilon,n)\cap \sigma_*)\right| & \geq & e^{n(a-\gamma)}.\\ \end{eqnarray*} Since $\lim_n\frac{\log \sharp C_n}{n}=0$, we get the contradiction $v^*(f,\epsilon)\geq a-\gamma>a/2$. \end{proof} \appendix \section{} Let $\mathcal A=(A_n)_{n\in \mathbb N}$ be a sequence in $M_d(\mathbb R)$. For any $n\in \mathbb{N}$ we let $A^n=A_{n-1}\cdots A_1A_0$.
We define the Lyapunov exponent $\chi(\mathcal A,v)$ of $\mathcal{A}$ with respect to $v\in \mathbb{R}^d\setminus \{0\}$ as $$\chi(\mathcal A,v):=\limsup_n\frac{1}{n}\log \|A^n(v)\|.$$ \begin{lem} $$\sup_{v\in \mathbb{R}^d\setminus \{0\}}\chi(\mathcal A,v)=\limsup_n\frac{1}{n}\log \vertiii{A^n}.$$ \end{lem} \begin{proof}The inequality $\leq $ is obvious. Let us show the other inequality. Let $v_n\in \mathbb{R}^d$ with $\|v_n\|=1$ and $\|A^n(v_n)\|= \vertiii{A^n}$. Then take $v=\lim_{k}v_{n_k}$ along a subsequence $(n_k)_k$ with $\lim_{k}\frac{1}{n_k}\log\vertiii{A^{n_k}}=\limsup_n\frac{1}{n}\log \vertiii{A^n}$. We get \begin{align*} \|A^{n_k}(v)\|&\geq \|A^{n_k}(v_{n_k})\|- \|A^{n_k}(v-v_{n_k})\|,\\ &\geq \vertiii{A^{n_k}}(1-\|v-v_{n_k}\|),\\ \limsup_k\frac{1}{n_k}\log\|A^{n_k}(v)\|&\geq \limsup_n\frac{1}{n}\log \vertiii{A^n}. \end{align*} \end{proof} \end{large} \end{document}
\begin{document} \title{ANOTHER VIEW OF BIPARTITE RAMSEY NUMBERS} \begin{abstract} For bipartite graphs $G$ and $H$ and a positive integer $m$, the $m$-bipartite Ramsey number $BR_m(G, H)$ of $G$ and $H$ is the smallest integer $n$ such that every red-blue coloring of $K_{m,n}$ results in a red $G$ or a blue $H$. Zhenming Bi, Gary Chartrand and Ping Zhang \cite{bi2018another} evaluated these numbers for all positive integers $m$ when $G= K_{2,2}$ and $H \in \{K_{2,3}, K_{3,3}\}$; in particular, via a long and involved argument, they showed that $BR_5(K_{2,2}, K_{3,3}) = BR_6(K_{2,2}, K_{3,3}) = 12$ and $BR_7(K_{2,2}, K_{3,3}) = BR_8(K_{2,2}, K_{3,3}) = 9$. In this article, we determine the exact value of $BR_m(K_{2,2}, K_{3,3})$ for each $m\geq 1$ by a short and simple argument. \end{abstract} \section{Introduction} For given bipartite graphs $G_1,G_2,\ldots,G_t$, the bipartite Ramsey number $BR(G_1,G_2,\ldots,G_t)$ is defined as the smallest positive integer $b$ such that any $t$-edge-coloring of the complete bipartite graph $K_{b,b}$ contains a monochromatic subgraph isomorphic to $G_i$, colored with the $i$th color, for some $i$. One can refer to \cite{rowshan2021proof, gholami2021bipartite, hatala2021new, rowshan2021size, chartrand2021new} and the references therein for further studies. We now consider red-blue colorings of complete bipartite graphs whose two partite sets need not have (nearly) equal sizes. For bipartite graphs $G$ and $H$ and a positive integer $m$, the $m$-bipartite Ramsey number $BR_m(G, H)$ of $G$ and $H$ is the smallest integer $n$ such that every red-blue coloring of $K_{m,n}$ results in a red $G$ or a blue $H$. Zhenming Bi, Gary Chartrand and Ping Zhang \cite{bi2018another} evaluated these numbers for all positive integers $m$ when $G= K_{2,2}$ and $H = K_{3,3}$; in particular, via a long and involved argument, they showed the following: \begin{theorem}[Main results]\label{M.th} Let $m\geq 2 $ be a positive integer. Then: \[ BR_m(K_{2,2},K_{3,3})= \left\lbrace \begin{array}{ll} \text{does not exist}, & ~~~if~~~m=2,3, \\ 15 & ~~~if~~~m=4, \\ 12 & ~~~if~~~ m=5,6, \\ 9 & ~~~if~~~ m=7,8. \\ \end{array} \right. \] \end{theorem} In this article, we give a short and simple proof of Theorem \ref{M.th}. \section{Preparations} In this article, we are only concerned with undirected, simple, and finite graphs. We follow \cite{bondy1976graph} for terminology and notation not defined here. Let $G$ be a graph with vertex set $V(G)$ and edge set $E(G)$. The degree of a vertex $v\in V(G)$ is denoted by $\deg_G(v)$, or simply by $\deg(v)$. The neighborhood $N_G(v)$ of a vertex $v$ is the set of all vertices of $G$ adjacent to $v$; it satisfies $|N_G(v)|=\deg_G(v)$. The minimum and maximum degrees of vertices of $G$ are denoted by $\delta(G)$ and $\Delta(G)$, respectively. As usual, the complete bipartite graph with bipartition $(X,Y)$, where $|X|=m$ and $|Y|=n$, is denoted by $K_{m,n}$. We use $[X,Y]$ to denote the set of edges between a bipartition $(X,Y)$ of $G$. The complement of a graph $G$ is denoted by $\overline{G}$. We say that $H$ is $n$-colorable to $(G_1, G_2,\ldots, G_n)$ if there exists an $n$-edge decomposition of $H$, say $(H_1, H_2,\ldots, H_n)$, where $G_i\nsubseteq H_i$ for each $i=1,2, \ldots,n.$ We write $H\rightarrow (G_1, G_2,\ldots, G_n)$ to indicate that $H$ is $n$-colorable to $(G_1, G_2,\ldots, G_n)$.
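To illustrate this notation on a small instance: $K_{3,3}\rightarrow (K_{2,2},K_{2,2})$, since the nine edges of $K_{3,3}$ decompose into a spanning $6$-cycle and a perfect matching, and neither of these two subgraphs contains a $4$-cycle, that is, a copy of $K_{2,2}$. Throughout the paper, claims that a particular red-blue coloring of some $K_{m,n}$ contains no red $K_{2,2}$ and no blue $K_{3,3}$ (as for the colorings in Figures \ref{fi0} and \ref{fi2}) can also be verified mechanically. The following brute-force check is only a sketch of such a verification; the encoding of the red graph $G$ as a $0/1$ matrix and the function names are our own and are not taken from \cite{bi2018another}.
\begin{verbatim}
from itertools import combinations

def has_red_K22(adj, m, n):
    # adj[i][j] == 1 iff x_i y_j is a red edge, i.e. an edge of G
    for i1, i2 in combinations(range(m), 2):
        common = [j for j in range(n) if adj[i1][j] and adj[i2][j]]
        if len(common) >= 2:
            return True
    return False

def has_blue_K33(adj, m, n):
    # the blue graph is the bipartite complement of G inside K_{m,n}
    for rows in combinations(range(m), 3):
        common = [j for j in range(n) if all(adj[i][j] == 0 for i in rows)]
        if len(common) >= 3:
            return True
    return False

def is_good_coloring(adj, m, n):
    # True iff the coloring avoids both a red K_{2,2} and a blue K_{3,3},
    # i.e. it witnesses K_{m,n} -> (K_{2,2}, K_{3,3})
    return not has_red_K22(adj, m, n) and not has_blue_K33(adj, m, n)
\end{verbatim}
For instance, applying \texttt{is\_good\_coloring} to the adjacency matrix of the graph $G$ of Figure \ref{fi0} (with $m=4$ and $n=14$) should return \texttt{True}, which is exactly the lower bound $BR_4(K_{2,2},K_{3,3})\geq 15$ used in the proof of Theorem \ref{t1}.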
\begin{definition} The Zarankiewicz number $z((m,n), K_{t,t})$ is defined as the maximum number of edges in any subgraph $G$ of the complete bipartite graph $K_{m,n}$ such that $G$ does not contain $K_{t,t}$ as a subgraph. \end{definition} By using the bounds in Table $4$ of \cite{collins2016zarankiewicz}, the following proposition holds. \begin{proposition}\label{pro1}{(\cite{collins2016zarankiewicz})} The following bound on a Zarankiewicz number holds: \begin{itemize} \item[$\bullet$] $z((7,9), K_{3,3})\leq 40$. \end{itemize} \end{proposition} Hattingh and Henning \cite{hattingh1998bipartite} determined the exact value of the bipartite Ramsey number $BR(K_{2,2}, K_{3,3})$ as follows: \begin{theorem}\label{th1}\cite{hattingh1998bipartite} $BR(K_{2,2}, K_{3,3})=9$. \end{theorem} \begin{lemma}\label{l1} Let $G$ be a subgraph of $K_{m,n}$, where $n\geq6$ and $m\geq 4$. If there exists a vertex $w\in V(G)$ with $\deg_G(w)\geq 6$, then either $K_{2,2} \subseteq G$ or $K_{3,3} \subseteq \overline{G}$. \end{lemma} \begin{proof} W.l.o.g. suppose that $w\in X$, and let $Y'\subseteq N_G(w)$ with $|Y'|=6$. Assume that $K_{2,2} \nsubseteq G$; then $|N_G(w')\cap Y'|\leq 1$ for each $w'\in X\setminus\{w\}$. Now, as $|X|\geq 4$ and $|Y'|=6$, it follows that $K_{3,3} \subseteq \overline{G}[X\setminus\{w\}, Y']$, which completes the proof. \end{proof} \section{\bf Proof of the main results} To prove our main results, namely Theorem \ref{M.th}, we begin with the following theorem. \begin{theorem}\label{t1} $BR_4(K_{2,2}, K_{3,3})=15$. \end{theorem} \begin{proof} Consider the graph $G$ shown in Figure \ref{fi0}; one can check that $K_{2,2} \nsubseteq G$. Also, by Figure \ref{fi0}, for each $X'=\{x,x',x''\}\subseteq \{x_1,\ldots,x_4\}$ and each $Y'=\{y,y',y''\}\subseteq \{y_1,\ldots,y_{14}\}$, there is at least one edge of $[X',Y']$, say $e$, with $e\in E(G)$, which means that $\overline{G}$ is $K_{3,3}$-free. So $K_{4,14} \rightarrow (K_{2,2},K_{3,3})$, and hence $BR_4(K_{2,2},K_{3,3})\geq 15$. \begin{figure} \caption{Edge-disjoint subgraphs $G$ and $\overline{G}$ of $K_{4,14}$.} \label{fi0} \end{figure} Now let $(X=\{x_1,x_2,x_{3},x_4\},Y=\{y_1,y_2,\ldots ,y_{15}\})$ be the partition sets of $K_{4,15}$ and suppose that $G\subseteq K_{4,15}$ satisfies $K_{2,2} \nsubseteq G$. Consider $\Delta=\Delta (G_X)$, the maximum degree in $G$ of the vertices in the part $X$. Since $K_{2,2} \nsubseteq G$, if $\Delta\geq 6$ then the proof is complete by Lemma \ref{l1}. Also, if $\Delta\leq 4$, then any three vertices of $X$ have at most $12$ neighbors in $Y$ in total, and since $|Y|=15$ we get $K_{3,3}\subseteq \overline{G}$. Hence assume that $\Delta=5$, attained at a vertex $x\in X$, and let $Y'= N_G(x)$, so that $|Y'|=\Delta=5$. Since $K_{2,2} \nsubseteq G$, we have $|N_G(x')\cap Y'|\leq 1$ for each $x'\neq x$. If either there exists a vertex of $X\setminus \{x\}$, say $x'$, with $|N_G(x')\cap Y'|=0$, or there exist two vertices of $X\setminus \{x\}$, say $x', x''$, such that $N_G(x')\cap Y'=N_G(x'')\cap Y'$, then $K_{3,3}\subseteq \overline{G}[X\setminus \{x\},Y']$. Therefore, we may assume that $|N_G(x')\cap Y'|=1$ and $N_G(x')\cap Y'\neq N_G(x'')\cap Y'$ for all distinct $x',x''\in X\setminus \{x\}$. W.l.o.g. let $x=x_1$ and $Y'=\{y_1,\ldots,y_5\}$. Now, for $i=2,3$, consider $|N_G(x_i)\cap( Y\setminus Y')|$. As $\Delta=5$, either $|N_G(x_i)\cap (Y\setminus Y')|\leq 3$ for some $i\in\{2,3\}$, or $|N_G(x_i)\cap(Y\setminus Y')|=4$ for $i=2,3$ and $|N_G(x_2)\cap N_G(x_3)\cap (Y\setminus Y')|=1$.
Therefore, as $|Y|=15$, it is easy to say that $|\cup_{i=1}^{i=3}N(x_i)\cap Y|\leq 12$, that is $K_{3,3}\subseteq\overline{G}[\{x_1,x_{2},x_{3}\}, Y\setminus \cup_{i=1}^{i=3}N(x_i)]$, which means that the proof is complete. \end{proof} \begin{theorem}\label{t2} Suppose that $m\in \{5,6\}$, then $BR_m(K_{2,2}, K_{3,3})=12$. \end{theorem} \begin{proof} If we prove the theorem for $m=5$, then for $m=6$ the proof is trivial. Hence, let $m=5$ and assume that $(X=\{x_1,x_2,x_{3},x_4,x_5\},Y=\{y_1,y_2,\ldots ,y_{12}\})$ be the partition sets of $K_{5,12}$ and $G\subseteq K_{5,12}$, where $K_{2,2} \nsubseteq G$. Consider $\Delta (G_X)=\Delta$. Since $K_{2,2} \nsubseteq G$, if $\Delta\geq 6$ then the proof is complete by Lemma \ref{l1}. For $\Delta\leq 3$, it is clear that $K_{3,3} \subseteq \overline{G}$. Hence $\Delta\in \{4,5\}$. First assume that $\Delta=4$. W.l.g assume that $Y_1=\{y_1,\ldots,y_4\}=N_G(x_1)$. Since $K_{2,2} \nsubseteq G$, we have $|N_G(x_i)\cap Y_1 |\leq 1$, for each $x_i\in X\setminus\{x_1\}$. Now we have the following claims: \begin{claim}\label{c1} For each $x\in X$, we have $|N_G(x)|=4=\Delta$. \end{claim} \begin{proof}[Proof of Claim \ref{c1}] By contrary assume that $|N_G(x)|\leq 3$ for at least one member of $ X\setminus\{x_1\}$ say $x'$. Let $N_G(x')=Y_2$. As $|X|=5$, there are at least two vertices of $X\setminus\{x'\}$ say $x_i,x_j$, such that $|N_G(x_i)\cap Y_2|= 1$, otherwise $K_{3,3}\subseteq\overline{G}[X\setminus \{x_2\}, Y_2]$. Therefore as $|Y|=12$ and $\Delta =4$ it can be said that $K_{3,3}\subseteq\overline{G}[\{x',x_i,x_j\}, Y\setminus Y_2]$. \end{proof} \begin{claim}\label{c2} For each $x\in X\setminus\{x_1\}$, we have $|N_G(x_1)\cap N_G(x) |=1$. \end{claim} \begin{proof}[Proof of Claim \ref{c2}] By contradiction, let $|N_G(x_1)\cap N_G(x) |=0$ for at least one member of $X\setminus\{x_1\}$ say $x$. W.l.g let $x=x_2$ and by Claim \ref{c1} let $N_G(x_2)\cap Y= \{y_5,y_6,y_7,y_8\}$. For $i=1,2$, as $|Y_i|=4$, if either $|N_G(x_j)\cap Y_i|= 0$ for at least one $i\in \{1,2\}$ and one $j\in \{3,4,5\}$ or there exist $j,j'\in \{3,5,5\}$ such that $|N_G(x_j)\cap N_G(x_{j'})\cap Y_i |=1$ for one $i\in\{1,2\}$, then $K_{3,3}\subseteq\overline{G}[X\setminus \{x_i\}, Y_i]$. So, let $|N_G(x_j)\cap Y_i |= 1$ and $N_G(x_j)\cap Y_i\neq N_G(x_{j'})\cap Y_i$ for each $i\in\{1,2\}$ and each $j,j'\in \{3,4,5\}$. W.l.g let $x_3y_1, x_3y_5, x_4y_2, x_4y_6, x_5y_3, x_5y_7\in E(G)$. Now, since $|Y|=12$, by Claim \ref{c1}, we have $|N_G(x_j)\cap Y_3 |=2$ for each $x\in \{x_3,x_4,x_5\}$, in which $Y_3=\{y_9,y_{10}, y_{11}, y_{12}\}$, Therefore one can say that there exists at least one vertex of $Y_3$ say $y$, so that $|N_G(y)\cap \{x_3,x_4,x_5\}|\leq 1$. W.l.g let $y=y_9$ and assume that $x_3y_9, x_4y_9\in E(\overline{G})$. Hence one can say that $K_{3,3}\subseteq\overline{G}[\{x_2,x_3,x_4\}, \{y_3,y_4,y_9\}]$. \end{proof} So by Claim \ref{c1}, w.l.g let $Y_1=\{y_1,\ldots,y_4\}=N_G(x_1)$, and $Y_2=\{y_1,y_5,y_6,y_7\}=N_G(x_2)$ and by Claim \ref{c2}, $|N_G(x_i)\cap N_G(x) |=1$ for each $i=1,2$ and each $x\neq x_i$. Now, as $\Delta=4$ and $|Y|=12$, if there exists a vertex of $\{x_3,x_4,x_5\}$ say $x$, so that $xy_1\in E(\overline{G})$, then it can be checked that $K_{3,3}\subseteq\overline{G}[\{x_1,x_2,x\}, Y\setminus (Y_1\cup Y_2)]$. Hence assume that $x_iy_1\in E(G)$ for each $i=2,3,4,5$, which means that $K_{3,3}\subseteq\overline{G}[\{x_2,x_3,x_4\}, Y_1\setminus \{y_1\}]$. Hence for the case that $\Delta=4$ the theorem holds. Now assume that $\Delta=5$. 
Let $X'=\{x\in X, ~\deg_G(x)=5\}$. Now we have the following fact: \begin{fact}\label{f1} If $|X'|= 1$, then the proof is complete. \end{fact} {\bf Proof of the fact:} We may suppose that $ X'=\{x_1\}$. W.l.g let $Y_1=\{y_1,\ldots,y_5\}=N_G(x_1)$. Since $K_{2,2} \nsubseteq G$, we have $|N_G(x_i)\cap Y_1 |\leq 1$, for each $x_i\in X\setminus\{x_1\}$. As $|Y_1|=5$, one can assume that $|N_G(x_i)\cap Y_1 |= 1$ and $N_G(x_i)\cap Y_1\neq N_G(x_j)\cap Y_1$ for each $i,j\in \{2,3,4,5\}$, otherwise $K_{3,3}\subseteq\overline{G}[ X\setminus\{x_1\}, Y_1]$. Therefore, w.l.g suppose that $x_2y_1, x_3y_2, x_4y_3, x_5y_4\in E(G)$. Now, one can say that $|N_G(x_i)\cap (Y\setminus Y_1 )|=3$ for at least three vertices of $X\setminus \{x_1\}$. Otherwise, if there exist at least two vertices of $X\setminus \{x_1\}$ say $x_2,x_3$ so that $|N_G(x_i)\cap (Y\setminus Y_1 )|\leq 2$, since $|Y|=12$ it can be said that $K_{3,3}\subseteq\overline{G}[\{x_1,x_2,x_3\}, Y\setminus Y_1]$. So, assume that $|N_G(x_i)\cap (Y\setminus Y_1 )|=3$ for each $i\in \{2,3,4\}$ and let $Y_i= N_G(x_i)$. Now, w.l.g we may suppose that $Y_2=\{y_1, y_6,y_7,y_8\}=N_G(x_2)$. As $K_{2,2} \nsubseteq G$, we have $|N_G(x_i)\cap (\{y_6,y_7,y_8\})|\leq 1$ for each $i\in \{3,4,5\}$. With symmetry, for each $i\in \{6,7,8\}$, we have $|N_G(y_i)\cap \{x_3,x_4,x_5\} |\geq 1$. Otherwise, if there exists at least one vertex of $Y_2\setminus \{y_1\}$ say $y$ so that $|N_G(y)\cap (\{x_3,x_4,x_5\} )|=0$, then $K_{3,3}\subseteq\overline{G}[\{x_3,x_4,x_5\}, \{y_1,y_5,y\}]$. Hence w.l.g let $x_3y_6, x_4y_7, x_5y_8\in E(G)$ and suppose that $Y_3=\{y_1, y_6,y_9,y_{10}\}=N_G(x_3)$. As $K_{2,2} \nsubseteq G$, we have $N_G(y_9)\cap (\{x_4,x_5\}) \neq N_G(y_{10})\cap (\{x_4,x_5\}) $, and $|N_G(y_9)\cap \{x_4,x_5\} |=|N_G(y_{10})\cap \{x_4,x_5\} |= 1$. W.l.g let $x_4y_9, x_5y_{10}\in E(G)$. Since $|N_G(x_4)\cap (Y\setminus Y_1 )|=3$, we have $|N_G(x_4)\cap \{y_{11}, y_{12}\}|=1$, w.l.g assume that $x_4y_{11} \in E(\overline{G})$. Therefore, $K_{3,3}\subseteq\overline{G}[\{x_2, x_3, x_4\}, \{y_4, y_5, y_{12}\}]$, which means that the proof of the fact is complete. So, by Fact \ref{f1}, assume that $x_1,x_2\in X'$, and $Y_1=\{y_1,\ldots,y_5\}=N_G(x_1)$. Since $K_{2,2} \nsubseteq G$, we have $|N_G(x_i)\cap Y_1 |\leq 1$, for each $x_i\in X\setminus\{x_1\}$. Also since $|Y_1|=5$, one can assume that $|N_G(x_i)\cap Y_1 |= 1$ and $N_G(x_i)\cap Y_1\neq N_G(x_j)\cap Y_1$ for each $i,j\in \{2,3,4,5\}$, otherwise $K_{3,3}\subset \overline{G}$. Therefore, w.l.g suppose that $Y_2=\{y_1, y_6,y_7,y_8, y_9\}$ and $x_3y_2, x_4y_3, x_5y_4\in E(G)$. With symmetry we have $|N_G(x_i)\cap Y_2\setminus\{y_1\} |= 1$ and $N_G(x_i)\cap (Y_2\setminus\{y_1\}) \neq N_G(x_j)\cap (Y_2\setminus\{y_1\}) $ for each $i,j\in \{3,4,5\}$. Now, w.l.g we may suppose that $x_3y_6, x_4y_7, x_5y_8\in E(G)$. Therefore one can say that $K_{3,3}\subseteq\overline{G}[\{x_3, x_4, x_5\}, \{y_1, y_5, y_9\}]$. Hence, $BR_m(K_{2,2}, K_{3,3})\leq 12$ for $m=5,6$. To show that $BR_m(K_{2,2}, K_{3,3})\geq 12$, decompose the edges of $K_{6,11}$ into graphs $G$ and $\overline{G}$, where $G$ is shown in Figure \ref{fi2}. By Figure \ref{fi2} it can be checked that, $K_{6,11} \rightarrow (K_{2,2},K_{3,3})$, which means that $BR_m(K_{2,2}, K_{3,3})= 12$. \begin{figure} \caption{Edge disjoint subgraphs $G$ and $\overline{G} \label{fi2} \end{figure} \end{proof} \begin{theorem}\label{t3} Suppose that $m\in \{7,8\}$, then $BR_m(K_{2,2}, K_{3,3})=9$. 
\end{theorem} \begin{proof} Suppose that $(X=\{x_1,\ldots,x_7\},Y=\{y_1,y_2,\ldots ,y_{9}\})$ be the partition sets of $K_{7,9}$. Consider $G\subseteq K_{7,9}$, where $K_{2,2} \nsubseteq G$. If there exists a vertex of $X$ say $x$, such that $\deg_G(x)\geq 5$, then as $K_{2,2} \nsubseteq G$, we have $|N_G(x_i)\cap N_G(x) |\leq 1$, hence $|X|=7$, by the pigeon-hole principle it can be said that $K_{3,3}\subseteq \overline{G}[X\setminus\{x\}, N_G(x)]$. Also by Proposition \ref{pro1}, since $z((7,9),K_{3,3})\leq 40$, one can say that $|N_G(x)|=4$ for at least two vertices of $X$. Otherwise $|E(\overline{G})|\geq 41$, so $K_{3,3}\subseteq\overline{G}$. W.l.g let $\deg_G(x_i)=4$ for each $x\in X'$, and let $x_1,x_2\in X'$. Now, we have the following claim: \begin{claim}\label{c3} For each $x\in X$ and each $x'\in X'$, $|N_G(x)\cap N_G(x') |=1$. \end{claim} \begin{proof}[Proof of Claim \ref{c3}] By contradiction, let $|N_G(x)\cap N_G(x') |=0$ for some $x\in X$ and some $x'\in X'$. If $|N_G(x)\cap N_G(x') |=0$ for at least two vertices of $X$, then it is clear that $K_{3,3}\subseteq \overline{G}[X, Y_1]$. So, w.l.g let $x'=x_1$ and $N_G(x_1)= Y_1$. Therefore as $|Y_1|=4$ and $|X|=7$, then by the pigeon-hole principle there exist at least two vertices of $X\setminus\{x_1,x\}$ say $x',x''$ so that $N_G(x')\cap Y_1 =N_G(x'')\cap Y_1$, which means that $K_{3,3}\subseteq \overline{G}[\{x,x', x''\}, Y_1]$. \end{proof} Now, by Claim \ref{c3} w.l.g let $N_G(x_1)\cap Y_1= \{y_1,y_2,y_3,y_4\}$ and $N_G(x_2)\cap Y_2= \{y_1,y_5,y_6,y_7\}$. By considering $|X'|$ we have two case as follow: {\bf Case 1: $|X'|\geq 3$.} W.l.g assume that $x_3\in X'$, therefore Claim \ref{c3} limits us to $N_G(x_3)\cap Y_3= \{y,y',y_8,y_9\}$, where $y\in \{y_2,y_3, y_4\}$ and $y'\in \{y_5,y_6,y_7\}$. W.l.g assume that $y=y_2, y'=y_5$. So, as $K_{2,2}\nsubseteq G$ we have $|X'|=3$, and $|N_G(x)\cap Y_i|\leq 1$ for each $i=1,2,3$ and each $x\in X\setminus X'$. If there exist at least two vertices of $X\setminus X'$ say $x',x''$ such that $|N_G(w)\cap \{y_1, y_2, y_5\}|=1$, for each $w\in \{x',x''\}$, then $|N_G(w)|\leq 2$, otherwise $K_{2,2}\subseteq G$, a contradiction, so as $|X'|=3$ and $|N_G(x')|\leq 2$, we have $|E(G)|\leq 22$, therefore $|E(\overline{G})|\geq 41$, and by Proposition \ref{pro1}, $K_{3,3}\subseteq\overline{G}$. Hence, suppose that $|N_G(x')\cap \{y_1, y_2, y_5\}|=0$ for at least three vertices of $ X\setminus X'$. Which means that $K_{3,3}\subseteq \overline{G}[X\setminus X', \{y_1, y_2, y_5\}]$. {\bf Case 2: $|X'|= 2$.} By Proposition \ref{pro1}, we have $|N_G(x)|=3$ for each $x\in X\setminus X'$. Now we have the following claim: \begin{claim}\label{c4} If there exist a vertex of $X\setminus X'$ say $x$, so that $xy_1\in E(G)$, then $K_{3,3}\subseteq \overline{G}$. \end{claim} \begin{proof}[Proof of Claim \ref{c4}] If $xy_1\in E(G)$ for at least two vertices of $X\setminus X'$, then $K_{3,3}\subseteq \overline{G}[X\setminus \{x_1\}, Y_1\setminus \{y_1\}]$. So, w.l.g let $x_3y_1\in E(G)$. Since $|N_G(x_3)|=3$, we have $N_G(x_3)=Y_3=\{y_1,y_8,y_9\}$. Now consider $X''=\{x_4,x_5,x_6,x_7\}$. As $|N_G(x)|=3$, for each $x\in X''$, we have $|N_G(x)\cap Y_i\setminus \{y_1\}|=1$. Also as $|X''|=4$, and $|Y_i\setminus \{y_1\}|=3$ for $i=1,2$, then by the pigeon-hole principle there are at least two vertices of $X''$, say $x_4, x_5$, such that $N_G(x_4)\cap Y_1\setminus \{y_1\}=N_G(x_5)\cap Y_1\setminus \{y_1\}=\{y\}$. W.l.g assume that $y=y_2$. 
Also, by the pigeon-hole principle, there is at least one vertex of $Y_2\setminus \{y_1\}$, say $y'$, such that $y'x_4, y'x_5\in E(\overline{G})$. W.l.o.g. let $y'=y_5$. Hence, it can be checked that $K_{3,3}\subseteq \overline{G}[\{x_3,x_4, x_5\}, \{y_3,y_4,y_5\}]$. \end{proof} Now, by Claim \ref{c4} we may assume that $xy_1\in E(\overline{G})$ for each $x\in X\setminus X'$. By Claim \ref{c3}, for each $x\in X\setminus \{x_1,x_2\}=X''$ and each $x'\in \{x_1,x_2\}$, we have $|N_G(x)\cap N_G(x') |=1$. Now, since $|X\setminus X'|=5$, by the pigeon-hole principle there exist two vertices of $\{y_2,y_3,y_4\}$, say $y_2, y_3$, such that $|N_G(y_i)\cap X''|=2$ for $i=2,3$. W.l.o.g. let $N_G(y_2)\cap X''=\{x_3,x_4\}$ and $N_G(y_3)\cap X''=\{x_5,x_6\}$. Since $K_{2,2}\nsubseteq G$, $xy_1\in E(\overline{G})$ for each $x\in X''$, and $|N_G(x)\cap N_G(x_2) |=1$, we may assume that $x_3y_5, x_4y_6\in E(G)$. Also, for at least one $i\in \{5,6\}$ we have $x_iy_7\in E(\overline{G})$, since otherwise $K_{2,2}\subseteq G$, a contradiction. Hence assume that $x_5y_7\in E(\overline{G})$. Therefore one can check that $K_{3,3}\subseteq \overline{G}[\{x_3,x_4, x_5\}, \{y_1,y_4,y_7\}]$, which means that in any case $K_{3,3}\subseteq \overline{G}$. Hence, by Cases 1 and 2, we have $BR_7(K_{2,2}, K_{3,3})\leq 9$. Since $K_{7,9}\subseteq K_{8,9}$, this also implies that every red-blue coloring of $K_{8,9}$ results in a red $K_{2,2}$ or a blue $K_{3,3}$, so $BR_8(K_{2,2}, K_{3,3})\leq 9$. On the other hand, by Theorem \ref{th1} we have $K_{8,8}\rightarrow (K_{2,2}, K_{3,3})$, which gives $BR_m(K_{2,2}, K_{3,3})\geq 9$ for $m=7,8$. Therefore $BR_m(K_{2,2}, K_{3,3})= 9$ for $m=7,8$, and the proof is complete. \end{proof} \begin{proof}[\bf Proof of Theorem \ref{M.th}] For $m=2,3$, it is easy to see that $BR_m(K_{2,2}, K_{3,3})$ does not exist: for every $n$, coloring exactly one edge at each vertex of the part of size $n$ red (and all remaining edges blue) produces neither a red $K_{2,2}$ nor a blue $K_{3,3}$. Now, combining Theorems \ref{t1}, \ref{t2} and \ref{t3} completes the proof of Theorem \ref{M.th}. \end{proof} \end{document}
\begin{document} \title{Dihedral angles and orthogonal polyhedra} \author{Therese Biedl \thanks{David R. Cheriton School of Computer Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada. Research of TB and AL supported by NSERC.} \and Martin Derka \addtocounter{footnote}{-1}\footnotemark \and Stephen Kiazyk \addtocounter{footnote}{-1}\footnotemark \and Anna Lubiw \addtocounter{footnote}{-1}\footnotemark \and Hamide Vosoughpour \addtocounter{footnote}{-1}\footnotemark } \date{\today} \maketitle \section{Introduction} Consider an orthogonal polyhedron, i.e., a polyhedron where (at least after a suitable rotation) all faces are perpendicular to a coordinate axis, and hence all edges are parallel to a coordinate axis. Clearly, any {\em facial angle} (i.e., the angle of a face at an incident vertex) is a multiple of $\pi/2$. Also, any {\em dihedral angle} (i.e., the angle between two planes that support to faces with a common edge) is a multiple of $\pi/2$. In this note we explore the converse: if the facial and/or dihedral angles are all multiples of $\pi /2$, is the polyhedron necessarily orthogonal? The case of facial angles was answered previously in two papers at CCCG 2002 \cite{DO-CCCG02,BCD+-CCCG02}: If a polyhedron with connected graph has genus at most 2, then facial angles that are multiples of $\pi/2$ imply that the polyhedron is orthogonal, while for genus 6 or higher there exist examples of non-orthogonal polyhedra with connected graph where all facial angles are $\pi/2$. In this note we show that if both the facial and dihedral angles are multiples of $\pi /2$ then the polyhedron is orthogonal (presuming connectivity), and we give examples to show that the condition for dihedral angles alone does not suffice. \section{Dihedral angles and facial angles} We begin with the following question: Given a polyhedron for which we know that every facial angle and every dihedral angle is a multiple of $\pi/2$, is it an orthogonal polyhedron? This is not true if the graph of the polyhedron is disconnected (see Figure~\ref{fig:counterex}), but it is true if the graph is connected. This can be seen as follows: Because the graph is connected, each face is simply connected, i.e., a polygon without holes. Any simple connected polygon whose angles are all multiples of $\pi /2$ is necessarily orthogonal. Rotate the polyhedron so that one face $f$ lies in a plane perpendicular to a coordinate axis with its edges parallel to coordinate axes. Because the dihedral angles are multiples of $\pi /2$ therefore any face adjacent to $f$ lies in a plane perpendicular to a coordinate axis, and because the face is an orthogonal polygon, all its edges are parallel to coordinate axes. Continuing to propagate to adjacent faces shows that the polyhedron is orthogonal. \iffalse \footnote{TB: If anyone can think of a shorter way of describing this, please change what's here.} Initially pick one facial angle that is not $\pi$ and rotate the polyhedron its two incident edges $e_1,e_2$ are parallel to coordinate axes. Let $v$ be the common endpoint of $e_1,e_2$, let $e_3$ be the edge after $e_2$ at $v$, and let (for $i=1,2$) $f_i$ be the face between $e_i$ and $e_{i+1}$. Face $f_1$ is in the plane spanned by $e_1$ and $e_2$, hence perpendicular to a coordinate axis. Since the dihedral angle at $e_2$ is a multiple of $\pi/2$ and $e_2$ is parallel to a coordinate axis, hence $f_2$ is perpendicular to a coordinate axis. 
Since the facial angle $e_2-v-e_3$ is a multiple of $\pi/2$, $e_3$ is parallel to a coordinate axis. Continuing the argument, all faces and edges at $v$ are perpendicular/parallel to a coordinate axis. Now consider the other end $w$ of $e_2$. The two incident faces $f_1,f_2$ of $e_2$ are perpendicular to a coordinate axis, and the facial angles at $e_2$ and $w$ are multiples of $\pi/2$, so the edge before and after $e_2$ at $w$ are also parallel to coordinate axes. Now we repeat the argument at $w$, and from there along incident edges to the rest of the graph, which reaches all faces since the graph is connected. So all faces/edges are perpendicular/parallel to coordinate axes as desired. \fi \section{Dihedral angles only} We now consider the following question: If all dihedral angles are multiples of $\pi/2$, is the polyhedron necessarily orthogonal, at least if the graph is connected and the genus is small? The answer is ``no'', even for genus 0. We can even impose additional restrictions, such as all vertices having degree 3 or 4 and all faces being quadrangles. \begin{figure} \caption{Polyhedra where all dihedral angles are $\pi/2$ or $3\pi/2$. (Left) The graph is disconnected. Note that all facial angles are also $\pi/2$ or $3\pi/2$. (Middle) Increasing the ``top'' makes the graph connected. Some vertices have degree 5. (Right) All vertices have degree 3 or 4, all faces are quadrangles.} \label{fig:counterex} \end{figure} Notice that the graph of the polyhedron in Figure~\ref{fig:counterex}(middle) could not possibly be the graph of an orthogonal polyhedron (because it has a vertex of degree 5), whereas the graph of the polyhedron in Figure~\ref{fig:counterex}(right) can also be realized by an orthogonal polyhedron (if we ``rotate'' the top part.) One may wonder how difficult it is to test whether a connected graph, together with dihedral angles, forms an orthogonal polyhedron. We can answer this in the special case where we additionally know the lengths of the edges, the graph is planar (i.e., the polyhedron has genus 0), and no dihedral angle is $\pi$. In this case, an orthogonal polyhedron, if one exists at all, is unique and can be found in linear time \cite{BG11b}. Hence we can run the algorithm to find it, and if it fails, conclude that no orthogonal polyhedron can realize this graph, edge lengths and dihedral angles. In all other cases (if edge lengths are unknown, dihedral angles may be $\pi$, or the graph has higher genus) the complexity of testing realizability by an orthogonal polyhedron remains open. \end{document}
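The propagation in the argument of Section 2 is effectively a breadth-first search over the face-adjacency graph of the polyhedron. The following sketch makes this explicit; the input format (for each face, a list of neighbours together with the coordinate axis the shared edge is parallel to and the dihedral angle along it) and all names are our own choices for illustration and are not the data structures of the algorithm in \cite{BG11b}.
\begin{verbatim}
from collections import deque
from math import isclose, pi

def propagate_normals(adjacent, start, start_axis):
    # adjacent[f] is a list of triples (g, edge_axis, dihedral): face g
    # shares an edge with face f, that edge is parallel to the coordinate
    # axis edge_axis in {'x','y','z'}, and dihedral is the dihedral angle
    # along it (a multiple of pi/2).  Starting from one face whose normal
    # axis is known, determine the normal axis of every reachable face.
    axis = {start: start_axis}
    queue = deque([start])
    while queue:
        f = queue.popleft()
        for g, edge_axis, dihedral in adjacent[f]:
            if isclose(dihedral % pi, 0.0, abs_tol=1e-9):
                new_axis = axis[f]           # dihedral angle pi: coplanar
            else:                            # dihedral angle pi/2 or 3pi/2
                new_axis = ({'x', 'y', 'z'} - {axis[f], edge_axis}).pop()
            if g in axis and axis[g] != new_axis:
                raise ValueError("inconsistent dihedral data")
            if g not in axis:
                axis[g] = new_axis
                queue.append(g)
    return axis
\end{verbatim}
If the face-adjacency graph is connected, every face receives an axis, which together with the orthogonality of each face as a polygon yields the conclusion of Section 2; the disconnected example of Figure~\ref{fig:counterex}(left) is precisely the situation in which the propagation cannot reach every face.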
\betagin{document} \title[Vanishing discount] {Vanishing discount problem and the additive eigenvalues on changing domains} \thanks{The author is supported in part by NSF grant DMS-1664424 and NSF CAREER grant DMS-1843320 to Hung Vinh Tran.} \betagin{abstract} We study the asymptotic behavior, as $\lambdabda\rightarrow 0^+$, of the state-constraint Hamilton--Jacobi equation \betagin{equation} \betagin{cases} \phi(\lambdabda) u_\lambdabda(x) + H(x,Du_\lambdabda(x)) \leq 0 \qquad\text{in}\,\;(1+r(\lambdabda))\Omegaega,\\ \phi(\lambdabda) u_\lambdabda(x) + H(x,Du_\lambdabda(x)) \geq 0 \qquad\text{on}\;(1+r(\lambdabda))\overline{\Omegaega}. \end{cases} \tag{$S_\lambdabda$} \end{equation} and the corresponding additive eigenvalues, or ergodic constant \betagin{equation} \betagin{cases} H(x,Dv(x)) \leq c(\lambdabda) \qquad\text{in}\,\;(1+r(\lambdabda))\Omegaega,\\ H(x,Dv(x)) \geq c(\lambdabda) \qquad\text{on}\;(1+r(\lambdabda))\overline{\Omegaega}. \end{cases} \tag{$E_\lambdabda$} \end{equation} Here, $\Omegaega$ is a bounded domain of $ \mathbb{R}^n$, $\phi(\lambdabda), r(\lambdabda):(0,\infty)\rightarrow \mathbb{R}$ are continuous functions such that $\phi$ is nonnegative and $\lim_{\lambdabda\rightarrow 0^+} \phi(\lambdabda) = \lim_{\lambdabda\rightarrow 0^+} r(\lambdabda) = 0$. We obtain both convergence and non-convergence results in the convex setting. Moreover, we provide a very first result on the asymptotic expansion of the additive eigenvalue $c(\lambdabda)$ as $\lambdabda\rightarrow 0^+$. The main tool we use is a duality representation of solution with viscosity Mather measures. \end{abstract} \author{Son N. T. Tu} \address[S. N.T. Tu] { Department of Mathematics, University of Wisconsin Madison, 480 Lincoln Drive, Madison, WI 53706, USA} \email{[email protected]} \date{\today} \keywords{first-order Hamilton--Jacobi equations; state-constraint problems; vanishing discount problem; additive eigenvalues; viscosity solutions.} \subjclass[2010]{ 35B40, 35D40, 49J20, 49L25, 70H20 } \maketitle \tableofcontents \section{Introduction} Let $\phi(\lambdabda):(0,\infty)\rightarrow (0,\infty)$ be continuous nondecreasing and $r(\lambdabda):(0,\infty)\rightarrow \mathbb{R}$ be continuous such that $\lim_{\lambdabda\rightarrow 0^+} \phi(\lambdabda) = \lim_{\lambdabda\rightarrow 0^+} r(\lambdabda) = 0$. We study the asymptotic behavior, as the discount factor $\phi(\lambdabda)$ goes to $0$, of the viscosity solutions to the following state-constraint Hamilton--Jacobi equation \betagin{equation}\label{eq:S_lambda} \betagin{cases} \phi(\lambdabda) u_\lambdabda(x) + H(x,Du_\lambdabda(x)) \leq 0 \qquad\text{in}\;\;(1+r(\lambdabda))\Omegaega,\\ \phi(\lambdabda) u_\lambdabda(x) + H(x,Du_\lambdabda(x)) \geq 0 \qquad\text{on}\;(1+r(\lambdabda))\overline{\Omegaega}. \end{cases} \tag{$S_\lambdabda$} \end{equation} Here, $\Omegaega$ is a bounded domain of $\mathbb{R}^n$. For simplicity, we will write $\Omegaega_\lambdabda = (1+r(\lambdabda))\Omegaega$ for $\lambdabda > 0$. Roughly speaking, along some subsequence $\lambdabda_j\rightarrow 0^+$, we obtain the limiting equation as a state-constraint ergodic problem: \betagin{equation}\label{eq:S_0} \betagin{cases} H(x,Du(x)) \leq c(0) \qquad\text{in}\;\Omegaega,\\ H(x,Du(x)) \geq c(0) \qquad\text{on}\;\overline{\Omegaega}. 
\end{cases} \tag{$S_0$} \end{equation} Here $c(0)$ is the so-called critical value (additive eigenvalue) defined as \betagin{equation}\label{eq:c(0)} c(0) = \inf {\mathbb B}ig\lbrace c\in \mathbb{R}: H(x,Du(x)) \leq c\;\;\text{in}\;\Omegaega\;\text{has a solution} {\mathbb B}ig\rbrace. \end{equation} This quantity is finite and indeed the infimum in \eqref{eq:c(0)} can be replaced by minimum under our assumptions. We want to study the convergence of $u_\lambdabda$, solution to \eqref{eq:S_lambda}, under some normalization, to solution of \eqref{eq:S_0}. It turns out this problem is interesting and challenging as it concerns both the vanishing discount and the rate of changing domains at the same time. The selection problem for the vanishing discount problems on fixed domains was studied extensively in the literature recently. The first-order equations on the torus was obtained in \cite{davini_convergence_2016}, and the second-order equations on the torus were studied in \cite{ishii2017,mitake_selection_2017}. The problems in bounded domains with boundary conditions were proved in \cite{al-aidarous_convergence_2016, Ishii2017a}. The problem in $\mathbb{R}^n$ under additional assumptions that lead to the compactness of the Aubry set was studied in \cite{ishii_vanishing_2020}. For the selection problems with state-constraint boundary conditions, so far, there is only \cite{Ishii2017a} that deals with a fixed domain, and there is not yet any result studying the situation of the changing domains. It turns out that the problem is much more subtle as we have to take into account the changing domain factor appropriately. Surprisingly, we can obtain both convergence results and non-convergence results in this setting. This result is an extension of the selection principle in the setting of changing domains. Generally speaking, known results assert that in the convex setting the whole family of solutions of the discounted problems, which are uniquely solved if the ambient space is compact, converges to a distinguished solution of the ergodic limit equation \betagin{equation}\label{eq:erg} H(x,Du(x)) = c(0). \end{equation} We emphasize that \eqref{eq:erg} has multiple solutions, therefore it is a non-trivial problem to characterize the limiting solution. We show the convergence for some natural normalization of solutions to \eqref{eq:S_lambda} together with characterizing their limits, related characterizations are done in \cite{ishii_vanishing_2020} for the case the domain is $\mathbb{R}^n$ and in \cite{davini_convergence_2016, ishii2017,Hung2019} for the case the domain is torus $\mathbb{T}^n = \mathbb{R}^n/ \mathbb{Z}^n$. We also discuss other related results concerning the asymptotic behavior of the additive eigenvalue of $H$ in $\Omegaega_\lambdabda$ as $\lambdabda\rightarrow 0^+$. \subsection{Assumptions} In this paper, by a domain, we mean an open, bounded, connected subset of $\mathbb{R}^n$. Without loss of generality, we will always assume $0\in \Omegaega$. To have well-posedness for \eqref{eq:S_lambda}, one needs to have a comparison principle. For simplicity, we will use the following structural assumptions on $\Omegaega$, which was introduced in \cite{Capuzzo-Dolcetta1990}. 
\betagin{itemize} \item[(A1)] $\Omegaega$ a bounded star-shaped (with respect to the origin) open subset of $\mathbb{R}^n$ and there exists some $\kappa > 0$ such that \betagin{equation}\label{condA2} \mathrm{dist}(x,\overline{\Omegaega}) \geq \kappa r \qquad\text{for all}\; x\in (1+r) \partial\Omegaega, \;\text{for all}\;r>0. \end{equation} \end{itemize} It is worth noting that the first condition under which the comparison principle holds is the following, first introduced in \cite{Soner1986}. \betagin{itemize} \item[(A2)] There exists a universal pair of positive numbers $(r,h)$ and $\eta\in \mathrm{BUC}(\overline{\Omegaega};\mathbb{R}^n)$ such that $B(x+t\eta(x), rt)\subset\Omegaega$ for all $x\in \overline{\Omegaega}$ and $t\in (0,h]$. \end{itemize} \betagin{rem}\label{rem:Ishii} The assumption $\Omegaega$ is star-shaped can be removed in $\mathrm{(A1)}$, that is any bounded, open subset of $\mathbb{R}^n$ containing the origin that satisfies \eqref{condA2} for some $\kappa > 0$ is star-shaped, and $\mathrm{(A2)}$ is a consequence of $\mathrm{(A1)}$ (see Lemma {\rm Re}\,f{thm:Ishii} in Appendix). Also $\mathrm{(A2)}$ can be generalized to a weaker \emph{interior cone} condition instead, that is there exists $\sigmama\in (0,1)$ such that $B(x+t\eta(x), rt^\sigmama)\subset\Omegaega$ for all $x\in \overline{\Omegaega}$ and $t\in (0,h]$. \end{rem} We consider the following case in our paper about the vanishing and changing domain rates: \betagin{equation}\label{eq:asm} \lim_{\lambdabda\rightarrow 0^+}\left( \frac{r(\lambdabda)}{\phi(\lambdabda)}\right) = \gammama \in [-\infty,+\infty]. \end{equation} \betagin{rem}\label{rem:first} Under the assumption \eqref{eq:asm}, there are only three possible cases: \betagin{enumerate} \item[1.] (Inner approximation) $r(\lambdabda)$ is negative for $\lambdabda\ll 1$, consequently $\gammama\leq 0$. \item[2.] (Outer approximation) $r(\lambdabda)$ is positive for $\lambdabda\ll 1$, consequently $\gammama\geq 0$. \item[3.] $r(\lambdabda)$ is oscillating around 0 when $\lambdabda\rightarrow 0^+$, consequently $\gammama = 0$. An example for this case is $r(\lambdabda) = \lambdabda\sin\left(\lambdabda^{-1}\right)$. \end{enumerate} We note that assumption \eqref{eq:asm} does not cover the case where $r(\lambdabda)/\phi(\lambdabda)$ is bounded but the limit at $\lambdabda\rightarrow 0^+$ does not exist, for example $r(\lambdabda) = \lambdabda\sin\left(\lambdabda^{-1}\right)$ and $\phi(\lambdabda) = \lambdabda$. Nevertheless, when the limit \eqref{eq:asm} exists and $r(\lambdabda)$ is oscillating near $0$, the limit must be $\gammama = 0$ and it turns out that the case $\gammama = 0$ is substantially simpler to analyze, as solutions of \eqref{eq:def_ulambda} converge to the maximal solution of \eqref{eq:S_0} (Theorem {\rm Re}\,f{thm:subcritical}). \end{rem} Throughout the paper, we will assume that $H:\overline{U}\times \mathbb{R}^n\rightarrow \mathbb{R}$ is a continuous Hamiltonian where $U = B(0,R_0)$ such that $2\Omegaega \subseteq U$. We list the main assumptions that will be used throughout the paper. \betagin{itemize} \item[(H1)] For each $R>0$ there exists a constant $C_R$ such that \betagin{equation}\label{H3c} \betagin{cases} |H(x,p) - H(y,p)| \leq C_R|x-y|,\\ |H(x,p) - H(x,q)| \leq C_R|p-q|, \end{cases} \tag{H1} \end{equation} for $x,y \in \overline{U}$ and $p,q \in \mathbb{R}^n$ with $|p|,|q|\leq R$. 
\item[(H2)] $H$ satisfies the coercivity assumption \betagin{equation}\label{H4} \lim_{|p|\rightarrow \infty} \left(\min_{x\in \overline{U}} H(x,p)\right) = +\infty. \tag{H2} \end{equation} \item[(H3)]\label{H5} $p\mapsto H(x,p)$ is convex for each $x\in\overline{U}$. \item[(H4)]\label{H6} For $v\in \mathbb{R}^n$, $x\mapsto L(x,v)$ is continuously differentiable on $\overline{U}$, where the Lagrangian $L$ of $H$ is defined as \betagin{equation*} L(x,v) = \sup_{p\in \mathbb{R}^n}{\mathbb B}ig(p\cdot v - H(x,p){\mathbb B}ig), \qquad (x,v)\in \overline{U}\times \mathbb{R}^n. \end{equation*} \end{itemize} The regularity assumption $\mathrm{(H4)}$ is needed for technical reason when we deal with changing domains, it satisfies for a vast class of Hamiltonians, for example $H(x,p) = H(p)+V(x)$ or $H(x,p) = V(x)H(p)$ with $V\in \mathrm{C}^1$. \betagin{rem} In fact we only need that $\lambdabda\mapsto L((1+\lambdabda)x,v)$ is continuously differentiable at $\lambdabda = 0$ but we assume $\mathrm{(H4)}$ for simplicity. \end{rem} \subsection{Literature on state-constraint and vanishing discount problems} There is a vast amount of works in the literature on the well-posedness of state-constraint Hamilton-Jacobi equations and fully nonlinear elliptic equations. The state-constraint problem for first-order convex Hamilton-Jacobi equations using optimal control frameworks was first studied in \cite{Soner1986, Soner1986a}. The general nonconvex, coercive first-order equations were then discussed in \cite{Capuzzo-Dolcetta1990}. For the nested domain setting, a rate of convergence for the discount problem is studied in \cite{kim2019stateconstraint}. We also refer to the classical books \cite{Bardi1997, Barles1994}, and the references therein. There are also many works in the spirit of looking at a general framework of the vanishing discount problem. The convergence of solutions to the vanishing discount problems is proved in \cite{Ishii2017a}. A problem with a similar spirit to ours is considered in \cite{Qinbo2019}, in which the authors study the asymptotic behavior of solution on compact domain with respect to the Hamiltonian. In this work we take advantage of the clear from and structure of \eqref{eq:S_lambda} to obtain more explicit properties on solutions and furthermore the asymptotic expansion of the additive eigenvalues. We also remark that the continuity of the additive eigenvalue for general increasing domains for second-order equation is concerned in \cite{barles_large_2010}. See \cite{ishii_vanishing_2021} for a recent work on vanishing discount for weakly coupled system and \cite{murat_ergodic_2021} for second-order equation with Neumann boundary condition. \subsection{Main results} There are two natural normalizations for solutions of \eqref{eq:S_lambda}. The first one is similar to what has been considered in \cite{Ishii2017a,ishii_vanishing_2020, Hung2019} as \betagin{equation}\label{eq:familyc0} \left\lbrace u_\lambdabda +\frac{c(0)}{\phi(\lambdabda)}\right\rbrace_{\lambdabda>0}, \end{equation} and the second one is given by \betagin{equation}\label{eq:familyclambda} \left\lbrace u_\lambdabda+\frac{c(\lambdabda)}{\phi(\lambdabda)}\right\rbrace_{\lambdabda>0}, \end{equation} where $c(\lambdabda)$ is the additive eigenvalues of $H$ in $\Omegaega_\lambdabda$, as defined in equation \eqref{eq:cell3}. 
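For orientation, one concrete family of scalings (our choice, made only for illustration) is $\phi(\lambda)=\lambda$ together with $r(\lambda)=c\lambda$ for a constant $c\in\mathbb{R}$, $r(\lambda)=\lambda^2$, or $r(\lambda)=\pm\sqrt{\lambda}$; the limit in \eqref{eq:asm} is then $\gamma=c$, $\gamma=0$, and $\gamma=\pm\infty$, respectively. The results below distinguish precisely these regimes of how fast the domain changes relative to the discount.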
Let $u^0$ be the limiting solution of the vanishing discount problem on fixed bounded domain (see \cite{davini_convergence_2016, ishii2017,Ishii2017a,Hung2019} and Theorem {\rm Re}\,f{thm:conv_bdd}), our first result is as follows. \betagin{thm}\label{thm:subcritical} Assume \eqref{H3c}, \eqref{H4}, $\mathrm{(H3)}$ and $\mathrm{(A1)}$. If $\gammama = 0$ then both families \eqref{eq:familyc0} and \eqref{eq:familyclambda} converge to $u^0$ locally uniformly as $\lambdabda \rightarrow 0^+$. \end{thm} We note that Theorem {\rm Re}\,f{thm:subcritical} includes the case where $r(\lambdabda)$ is oscillating, as long as the limit \eqref{eq:asm} exists. For example $r(\lambdabda) = \lambdabda \sin\left(\lambdabda^{-1}\right)$ and $\phi(\lambdabda) = \lambdabda^p$ with $p\in (0,1)$. If $\gammama$ is finite then \eqref{eq:familyc0} is bounded and convergent. Its limit can be characterized in terms of probability minimizing measures $\mathcal{M}_0$ (or viscosity Mather measures, see Section 2). For a ball $\overline{B}_h\subset \mathbb{R}^n$ and a measure $\mu$ defined on $\overline{\Omegaega}\times \overline{B}_h$, we define \betagin{equation}\label{def:integral} \langle \mu, f\rangle := \int_{\overline{\Omegaega}\times \overline{B}_h} f(x,v)\;d\mu(x,v), \qquad\text{for}\;f\in \mathrm{C}(\overline{\Omegaega}\times \overline{B}_h). \end{equation} \betagin{thm}\label{thm:general} Assume \eqref{H3c}, \eqref{H4}, $\mathrm{(H3)}, \mathrm{(H4)}$ and $\mathrm{(A1)}$. If $\gammama \in \mathbb{R}$ then the family \eqref{eq:familyc0} converge to $u^\gammama$ locally uniformly in $\Omegaega$ as $\lambdabda\rightarrow 0^+$. Furthermore \betagin{equation}\label{eq:thm} u^\gammama = \sup_{w\in \mathcal{E}^\gammama} w, \end{equation} where $\mathcal{E}^\gammama$ denotes the family of subsolutions $w$ to the ergodic problem \eqref{eq:S_0} such that \betagin{equation*} \gammama\big\langle \mu, (-x)\cdot D_xL(x,v)\big\rangle +\langle \mu, w\rangle \leq 0 \qquad\text{for all}\; \mu\in \mathcal{M}_0. \end{equation*} \end{thm} \betagin{rem} The factor $\gammama\left\langle \mu, (-x)\cdot D_xL(x,v)\right\rangle$ here captures the scaling property of the problem, which is where $u^\gammama$ and $u^0$ are different from each other. Also, if $\gammama = \infty$ then the family \eqref{eq:familyc0} could be unbounded (Example {\rm Re}\,f{ex:a}). We note that Theorem {\rm Re}\,f{thm:general} includes the conclusion of Theorem {\rm Re}\,f{thm:subcritical} for the family \eqref{eq:familyc0} but we do not need the technical assumption $\mathrm{(H4)}$ for Theorem {\rm Re}\,f{thm:subcritical}. \end{rem} \betagin{cor}\label{cor:mycor} The mapping $\gammama\mapsto u^\gammama(\cdot)$ from $\mathbb{R}$ to $\mathrm{C}(\overline{\Omegaega})$ is concave and decreasing. Precisely, if $\alphapha,\betata\in \mathbb{R}$ with $\alphapha\leq \betata$ then $u^\betata\leq u^\alphapha$ and \betagin{equation*} (1-\lambdabda)u^\alphapha + \lambdabda u^\betata \leq u^{(1-\lambdabda)\alphapha+\lambdabda\betata} \qquad\text{for every}\;\lambdabda\in [0,1]. \end{equation*} \end{cor} For the second family \eqref{eq:familyclambda}, we observe that it is bounded even if $\gammama = \infty$, and the difference between the two normalization \eqref{eq:familyclambda} and \eqref{eq:familyc0} is given by \betagin{equation}\label{eq:introlimit} \left\lbrace \frac{c(\lambdabda)-c(0)}{\phi(\lambdabda)}\right\rbrace_{\lambdabda>0}. 
\end{equation} If $\gammama<\infty$ then the two families \eqref{eq:familyc0} and \eqref{eq:familyclambda} are convergent if and only if the limit of \eqref{eq:introlimit} as $\lambdabda\rightarrow 0^+$ exists. In that case we have \betagin{equation}\label{eq:introlimit2} \lim_{\lambdabda\rightarrow 0^+}\left(\frac{c(\lambdabda)-c(0)}{\phi(\lambdabda)}\right)= \gammama\lim_{\lambdabda\rightarrow 0^+}\left(\frac{c(\lambdabda)-c(0)}{r(\lambdabda)}\right). \end{equation} The limit on the right-hand side should be understood as taking along sequences where $r(\lambdabda)\neq 0$. In other words we only concern those functions $r(\cdot)$ that are not identically zero near $0$, since otherwise $c(\lambdabda) = c(0)$ for $\lambdabda\ll 1$ and the problem is not interesting. It leads naturally to the question of the asymptotic expansion of the critical value \betagin{equation}\label{asymptotic} c(\lambdabda) = c(0) + c^{(1)} r(\lambdabda) + o(r(\lambdabda)) \qquad\text{as}\; \lambdabda \rightarrow 0^+. \end{equation} To our knowledge, this kind of question is new in the literature. We prove that the limit in \eqref{eq:introlimit2} always exists if $r(\lambdabda)$ does not oscillate its sign near $0$, and as a consequence it provides a necessary and sufficient condition under which the limit \eqref{eq:introlimit2} exists for a general oscillating $r(\lambdabda)$. Of course this oscillating behavior is excluded when we only concern about the convergence of \eqref{eq:familyc0} and \eqref{eq:familyclambda} (since $\gammama = 0$). We also give a characterization for the limit in \eqref{eq:introlimit2} in terms of $\mathcal{M}_0$. \betagin{thm}\label{thm:limit} Assume \eqref{H3c}, \eqref{H4}, $\mathrm{(H3)}, \mathrm{(H4)}$, $\mathrm{(A1)}$, we have \betagin{align*} &\lim_{\substack{\lambdabda\rightarrow 0^+\\ r(\lambdabda) > 0}} \left(\frac{c(\lambdabda) - c(0)}{r(\lambdabda)}\right) = \max_{\mu\in \mathcal{M}_0} \left\langle \mu, (-x)\cdot D_xL(x,v)\right\rangle,\\ &\lim_{\substack{\lambdabda\rightarrow 0^+\\ r(\lambdabda) < 0}} \left(\frac{c(\lambdabda) - c(0)}{r(\lambdabda)}\right) = \min_{\mu\in \mathcal{M}_0} \left\langle \mu, (-x)\cdot D_xL(x,v)\right\rangle. \end{align*} Thus $(c(\lambdabda)-c(0))/r(\lambdabda)$ converges as $\lambdabda\rightarrow 0^+$ if and only if the following invariant holds \betagin{equation*} \left\langle \mu,(-x)\cdot D_xL(x,v)\right\rangle = c^{(1)} \qquad\text{for all}\; \mu\in \mathcal{M}_0 \end{equation*} where $c^{(1)}$ is a positive constant. \end{thm} \betagin{cor}\label{cor:Aug30-1} If $\left\langle \mu,(-x)\cdot D_xL(x,v)\right\rangle = c^{(1)}$ for all $\mu\in \mathcal{M}_0$ then $u^\gammama(\cdot) +\gammama c_{(1)} = u^0(\cdot)$. \end{cor} \betagin{cor}\label{cor:equala} If $u^0(z) = u^\gammama(z)$ for some $z\in \Omegaega$ and $\gammama > 0$ then $ c^{(1)}_-= 0$. \end{cor} Theorem {\rm Re}\,f{thm:limit} gives us the convergence of the second normalization \eqref{eq:familyclambda} for finite $\gammama$. We recall that the case $\gammama = 0$ is already considered in Theorems {\rm Re}\,f{thm:subcritical} and {\rm Re}\,f{thm:general}. 
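As a simple illustration of Theorem \ref{thm:limit} (this example is ours and is meant only as a sanity check), take $H(x,p)=|p|-V(x)$ with $V\in \mathrm{C}^1(\overline{U})$ nonnegative and attaining its minimum over $\overline{U}$ at a single interior point $x_0\in\Omega$, so that $DV(x_0)=0$. Then $L(x,v)=V(x)$ for $|v|\leq 1$ (and $L(x,v)=+\infty$ otherwise), $c(0)=-V(x_0)$, and every viscosity Mather measure is supported in $\{x_0\}\times \overline{B}_h$, whence $\left\langle \mu,(-x)\cdot D_xL(x,v)\right\rangle=(-x_0)\cdot DV(x_0)=0$ for all $\mu\in \mathcal{M}_0$. Theorem \ref{thm:limit} then gives $c(\lambda)=c(0)+o(r(\lambda))$, consistent with the direct computation $c(\lambda)=-\min_{(1+r(\lambda))\overline{\Omega}}V=-V(x_0)=c(0)$ for all sufficiently small $|r(\lambda)|$.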
\betagin{cor}\label{cor:second_norm} Assume \eqref{H3c}, \eqref{H4}, $\mathrm{(H3)}, \mathrm{(H4)}$, $\mathrm{(A1)}$ and $\gammama \in \mathbb{R}\backslash \{0\}$, then \betagin{equation*} \lim_{\lambdabda\rightarrow 0^+} \left( u_\lambdabda(x) + \frac{c(\lambdabda)}{\phi(\lambdabda)} \right) = u^\gammama(x) + \gammama\lim_{\lambdabda\rightarrow 0^+} \left(\frac{c(\lambdabda)-c(0)}{r(\lambdabda)}\right) \end{equation*} locally uniformly in $\Omegaega$. \end{cor} Even though the second normalization \eqref{eq:familyclambda} remains uniformly bounded when $\gammama = \pm\infty$, it is rather surprising that we have a divergent result in this convex setting. Using tools from weak KAM theory, we can construct an example where divergence happens when approximating from the inside. To our knowledge, this kind of example is new in the literature. \betagin{thm}\label{thm:counter-example} There exists a Hamiltonian where given any $r(\lambdabda)\leq 0$ we can construct $\phi(\lambdabda)$ such that along a subsequence $\lambdabda_j \rightarrow 0^+$ we have $r(\lambdabda_j)/\phi(\lambdabda_j) \rightarrow -\infty$ and \eqref{eq:familyclambda} diverges. \end{thm} \subsection{Organization of the paper} The paper is organized in the following way. In Section {\rm Re}\,f{sec2}, we provide the background results on state-constraint Hamilton--Jacobi equations, the duality representation, and some weak KAM theory backgrounds that will be needed throughout the paper. We also review the selection principle for vanishing discount on a fixed bounded domain and its characterization of the limit, as well as set up the problems of vanishing discount on changing domains. Section {\rm Re}\,f{sec3} is devoted to proving the main results (Theorems {\rm Re}\,f{thm:general}) on the convergence of the first normalization, together with some examples. In Section {\rm Re}\,f{sec4} we provide the proof for Theorem {\rm Re}\,f{thm:limit} which addresses the question of asymptotic expansion of the eigenvalue, which relates the limits of different normalization in vanishing discount and proof for Corollaries {\rm Re}\,f{cor:Aug30-1} and {\rm Re}\,f{cor:equala}. In Section {\rm Re}\,f{sec5} we restate the convergence of the second normalization as a consequence from Sections {\rm Re}\,f{sec3} and {\rm Re}\,f{sec4}, and provide a new counterexample where divergence happens (Theorem {\rm Re}\,f{thm:counter-example}). The proofs of some Theorems and Lemmas are provided in the Appendix. \section{Preliminaries}\label{sec2} \subsection{State-constraint solutions} For an open subset $\Omegaega \subset U \subset\mathbb{R}^n$, we denote the space of bounded uniformly continuous functions defined in $\Omegaega$ by $\mathrm{BUC}(\Omegaega;\mathbb{R})$. Assume that $H:\overline{U}\times \mathbb{R}^n \rightarrow \mathbb{R}$ is a Hamiltonian such that $H\in \mathrm{BUC}(\overline{U}\times \mathbb{R}^n)$ satisfying $\mathrm{(H1)}, \mathrm{(H2)}$. We consider the following equation with $\deltata \geq 0$: \betagin{equation}\label{HJ-static} \deltata u(x) + H(x,Du(x)) = 0 \qquad\text{in}\;\Omegaega \tag{HJ}. \end{equation} \betagin{defn}\label{defn:1} We say that \betagin{itemize} \item[(i)] $v\in \mathrm{BUC}(\Omegaega;\mathbb{R})$ is a viscosity subsolution of \eqref{HJ-static} in $\Omegaega$ if, for every $x\in \Omegaega$ and $\varphi\in \mathrm{C}^1(\Omegaega)$ such that $v-\varphi$ has a local maximum over $\Omegaega$ at $x$, $\deltata v(x) + H\big(x,D\varphi(x)\big) \leq 0$ holds. 
\item[(ii)] $v\in \mathrm{BUC}(\overline\Omegaega;\mathbb{R})$ is a viscosity supersolution of \eqref{HJ-static} on $\overline\Omegaega$ if, for every $x\in \overline{\Omegaega}$ and $\varphi\in \mathrm{C}^1(\overline{\Omegaega})$ such that $v-\varphi$ has a local minimum over $\overline{\Omegaega}$ at $x$, $\deltata v(x) + H\big(x,D\varphi(x)\big) \geq 0$ holds. \end{itemize} If $v$ is a viscosity subsolution to \eqref{HJ-static} in $\Omegaega$, and is a viscosity supersolution to \eqref{HJ-static} on $\overline{\Omegaega}$, that is, $v$ is a viscosity solution to \betagin{equation}\label{state-def} \betagin{cases} \deltata v(x) + H(x,Dv(x)) \leq 0 &\quad\text{in}\; \Omegaega,\\ \deltata v(x) + H(x,Dv(x)) \geq 0 &\quad\text{on}\; \overline{\Omegaega}, \end{cases} \tag{HJ$_\deltata$} \end{equation} then we say that $v$ is a state-constraint viscosity solution of \eqref{HJ-static}. \end{defn} \betagin{defn} For a real valued function $w(x)$ define for $x\in \Omegaega$, we define the super-differential and sub-differential of $w$ at $x$ as \betagin{align*} D^{+}w(x)&=\left \lbrace p \in {\mathbb R}^n : \limsup_{y \rightarrow x} \frac{w(y)-w(x)-p\cdot (y-x)}{|y-x|} \leq 0\right\rbrace,\\ D^{-}w(x)&=\left \lbrace p \in {\mathbb R}^n : \liminf_{y \rightarrow x} \frac{w(y)-w(x)- p \cdot (y-x)} {|y-x|} \geq 0 \right\rbrace. \end{align*} \end{defn} We refer the readers to \cite{ Capuzzo-Dolcetta1990,kim2019stateconstraint,Soner1986} for the existence and wellposedness, as well as the Lipschitz bound on solutions $u_\lambdabda$ of \eqref{eq:S_lambda} and a description on state-constraint boundary condition. See also \cite{Bardi1997,Barles1994, Hung2019} for the equivalent definition of viscosity solution using super-differential and sub-differential. \betagin{thm}\label{Perron} Assume \eqref{H4} and $\deltata > 0$. Then, there exists a state-constrained viscosity solution $u\in \mathrm{C}(\overline{\Omegaega})\cap \mathrm{W}^{1,\infty}(\Omegaega)$ to \eqref{state-def} with $\deltata|u(x)|+ |Du(x)| \leq C_H$ for $x\in \Omegaega$ where $C_H$ only depends on $H$. \end{thm} \betagin{cor}\label{cor:Inv_max} Let $u\in \mathrm{C}(\overline{\Omegaega})$ be a viscosity subsolution to \eqref{HJ-static} in $\Omegaega$ with $\deltata>0$. If $v\leq u$ on $\overline{\Omegaega}$ for all viscosity subsolutions $v\in \mathrm{C}(\overline{\Omegaega})$ of \eqref{HJ-static} in $\Omegaega$ then $u$ is a viscosity supersolution to \eqref{HJ-static} on $\overline{\Omegaega}$. \end{cor} \betagin{thm}\label{CP continuous} Assume \eqref{H3c}, $\mathrm{(A2)}$ and $\deltata > 0$. If $v_1\in \mathrm{BUC}(\overline{\Omegaega};\mathbb{R})$ is a Lipschitz viscosity subsolution of \eqref{HJ-static} in $\Omegaega$, and $v_2\in \mathrm{BUC}(\overline{\Omegaega};\mathbb{R})$ is a viscosity supersolution of \eqref{HJ-static} on $\overline{\Omegaega}$ then $v_1(x)\leq v_2(x)$ for all $x\in \overline{\Omegaega}$. \end{thm} When the uniqueness of \eqref{state-def} is guaranteed, the unique viscosity solution to \eqref{state-def} is the maximal viscosity subsolution of \eqref{HJ-static}. \subsection{Duality representation of solutions} The duality representation is well-known in the literature (see \cite{Ishii2017a} or \cite[Theorem 5.3]{Hung2019}). We present here a variation of that result. For $\deltata > 0$, let $u_\deltata$ be the unique solution to \eqref{state-def}, we have the following bound: \betagin{equation}\label{eq:priori} \deltata|u_\deltata(x)| + |Du_\deltata(x)| \leq C_H\qquad \text{for all}\; x\in \Omegaega. 
\end{equation} That means the value of $H(x,p)$ for large $|p|$ is irrelevant, therefore without loss of generality we can assume that there exists $h>0$ such that, the Legendre's transform $L$ of $H$ will satisfy: \betagin{equation} \betagin{cases} \displaystyle H(x,p) = \sup_{|v|\leq h} {\mathbb B}ig(p\cdot v - L(x,v){\mathbb B}ig), &\qquad (x,p)\in \overline{\Omegaega}\times\mathbb{R}^n\\ \displaystyle L(x,v) = \sup_{p\in \mathbb{R}^n} {\mathbb B}ig(p\cdot v - H(x,p){\mathbb B}ig), &\qquad (x,v)\in \overline{\Omegaega}\times \overline{B}_h. \end{cases} \label{eq:H&L} \end{equation} This simplification allows us to work with the compact subset $\overline{\Omegaega}\times \overline{B}_h$ rather than $\overline{\Omegaega}\times \mathbb{R}^n$, as will be utilized to obtain the duality representation. Let us define for each $f\in \mathrm{C}(\overline{\Omegaega}\times \overline{B}_h)$ the function \betagin{equation*} H_f(x,p) = \max_{|v|\leq h} {\mathbb B}ig(p\cdot v - f(x,v){\mathbb B}ig), \qquad (x,p)\in \overline{\Omegaega}\times \overline{B}_h. \end{equation*} Recall the definition of the action $\langle \cdot, \cdot\rangle$. The underlying domain of the integral will be implicitly understood. Let $\mathcal{R}(\overline{\Omegaega}\times \overline{B}_h)$ be the space of Radon measures on $\overline{\Omegaega}\times \overline{B}_h$. For $\deltata>0$, $z\in \overline{\Omegaega}$ we define \betagin{align*} \mathcal{F}_{\deltata,\Omegaega} &= {\mathbb B}ig\lbrace (f,u) \in \mathrm{C}(\overline{\Omegaega}\times \overline{B}_h)\times \mathrm{C}(\overline{\Omegaega}): \deltata u + H_f(x,Du)\leq 0\;\text{in}\;\Omegaega {\mathbb B}ig\rbrace\\ \mathcal{G}_{z,\deltata,\Omegaega} &= {\mathbb B}ig\lbrace f - \deltata u(z): (f, u)\in \mathcal{F}_{\deltata,\Omegaega} {\mathbb B}ig\rbrace\\ \mathcal{G}'_{z,\deltata,\Omegaega} &= {\mathbb B}ig\lbrace \mu\in \mathcal{R}(\overline{\Omegaega}\times \overline{B}_h): \langle \mu, f\rangle \geq 0\;\text{for all}\;f\in \mathcal{G}_{z,\deltata,\Omegaega} {\mathbb B}ig\rbrace. \end{align*} Here $\mathcal{G}_{z,\deltata,\Omegaega} \subset \mathrm{C}(\overline{\Omegaega}\times \overline{B}_h)$ is the evaluation cone of $\mathcal{F}_{\deltata,\Omegaega}$, and its dual cone consists of Radon measures with non-negative actions against elements in $\mathcal{G}_{z,\deltata, \Omegaega}$. Note that $\mathcal{R}(\overline{\Omegaega}\times \overline{B}_h)$ is the dual space of $\mathrm{C}(\overline{\Omegaega}\times \overline{B}_h)$. We also denote $\mathcal{P}$ the set of probability measures. \betagin{lem}\label{lem:cv} $\mathcal{F}_{\deltata,\Omegaega}$ is a convex set, $\mathcal{G}_{z,\deltata,\Omegaega}$ is a convex cone with vertex at the origin, and $\mathcal{G}_{z,\deltata,\Omegaega}'$ consists of only non-negative measures. \end{lem} \betagin{thm}\label{thm:lambdau} For $(z,\deltata) \in \overline{\Omegaega}\times (0,\infty)$ and $u$ is the viscosity solution to \eqref{state-def}, we have \betagin{equation*} \deltata u(z) = \min_{\mu\in \mathcal{P} \cap \mathcal{G}'_{z,\deltata,\Omegaega}} \langle \mu, L\rangle = \min_{\mu\in \mathcal{P} \cap \mathcal{G}'_{z,\deltata,\Omegaega}} \int_{\overline{\Omegaega}\times\overline{B}_h} L(x,v)\;d\mu(x,v). \end{equation*} \end{thm} As $\deltata \rightarrow 0^+$, we also have a representation for the erogdic problem \eqref{eq:S_0} in the same manner. 
Let us define
\begin{align*}
\mathcal{F}_{0,\Omega} &= \Big\lbrace (f, u)\in \mathrm{C}(\overline{\Omega}\times \overline{B}_h)\times \mathrm{C}(\overline{\Omega}): H_f(x,Du(x)) \leq 0\;\text{in}\;\Omega \Big\rbrace\\
\mathcal{G}_{0,\Omega} &= \Big\lbrace f: (f, u)\in \mathcal{F}_{0,\Omega}\;\text{for some}\;u\in \mathrm{C}(\overline{\Omega}) \Big\rbrace\\
\mathcal{G}'_{0,\Omega} &= \Big\lbrace \mu\in \mathcal{R}(\overline{\Omega}\times \overline{B}_h): \langle \mu, f\rangle \geq 0\;\text{for all}\;f\in \mathcal{G}_{0,\Omega} \Big\rbrace.
\end{align*}
Here the notion of viscosity subsolution is equivalent to that of a.e. subsolution in $\Omega$, thanks to $\mathrm{(H4)}$. As before, $\mathcal{G}_{0,\Omega} \subset\mathrm{C}(\overline{\Omega}\times \overline{B}_h)$ is the evaluation cone of $\mathcal{F}_{0,\Omega}$ and $\mathcal{G}'_{0,\Omega}$ is the dual cone of $\mathcal{G}_{0,\Omega}$ in $\mathcal{R}(\overline{\Omega}\times \overline{B}_h)$.
\begin{defn}
A measure $\mu$ defined on $\overline{\Omega}\times \overline{B}_h$ is called a \emph{holonomic measure} if
\begin{equation*}
\langle \mu, v\cdot D\psi(x) \rangle = 0 \qquad\text{for all}\; \psi\in \mathrm{C}^1(\overline{\Omega}).
\end{equation*}
\end{defn}
\begin{lem}
Measures in $\mathcal{G}'_{0,\Omega}$ are holonomic.
\end{lem}
\begin{proof}
If $\psi\in \mathrm{C}^1(\overline{\Omega})$ then $\pm(v\cdot D\psi(x), \psi)\in \mathcal{F}_{0,\Omega}$, therefore $\pm v\cdot D\psi(x) \in \mathcal{G}_{0,\Omega}$ and thus $\langle \mu, v\cdot D\psi(x) \rangle = 0$.
\end{proof}
\begin{lem}\label{lem:cv3}
Fix $z\in \overline{\Omega}$ and $\delta_j\rightarrow 0$. Assume $\mu_j\in \mathcal{G}_{z,\delta_j,\Omega}'$ and $\mu_j \rightharpoonup \mu$ weakly in the sense of measures; then $\mu\in \mathcal{G}_{0,\Omega}'$.
\end{lem}
\begin{lem}\label{lem:cv2}
$\mathcal{F}_{0,\Omega}$ is a convex set, $\mathcal{G}_{0,\Omega}$ is a convex cone with vertex at the origin, and $\mathcal{G}_{0,\Omega}'$ consists of only nonnegative measures.
\end{lem}
\begin{thm}\label{thm:c0}
We have
\begin{equation}\label{eq:Mather}
-c(0) = \min_{\mu\in \mathcal{P}\cap \mathcal{G}_{0,\Omega}'} \langle \mu, L \rangle =\min_{\mu\in \mathcal{P}\cap \mathcal{G}_{0,\Omega}'} \int_{\overline{\Omega}\times \overline{B}_h} L(x,v)\;d\mu(x,v).
\end{equation}
\end{thm}
The set of all measures in $\mathcal{P}\cap \mathcal{G}'_{0,\Omega}$ that minimize \eqref{eq:Mather} is denoted by $\mathcal{M}_0$. We call them viscosity Mather measures (\cite{Ishii2017a}). We omit the proofs of Lemmas \ref{lem:cv}, \ref{lem:cv3}, \ref{lem:cv2} and Theorems \ref{thm:lambdau}, \ref{thm:c0} as they are slight modifications of those in the periodic setting, for which we refer the interested reader to \cite{Ishii2017a,Hung2019}.
\subsection{Vanishing discount for fixed bounded domains}
In this section we use the representation formulas in Theorem \ref{thm:lambdau} and Theorem \ref{thm:c0} to show the convergence of solutions of \eqref{eq:S_lambda} to solutions of \eqref{eq:S_0}. See also \cite{Ishii2017a,Hung2019} where a similar technique is used.
\begin{thm}\label{thm:pre}
Assume \eqref{H3c}, \eqref{H4} and $\mathrm{(A2)}$. Let $u_\delta\in \mathrm{C}(\overline{\Omega})\cap\mathrm{Lip}(\Omega)$ be the unique solution to \eqref{state-def}.
Then $\deltata u_\deltata(\cdot) \rightarrow -c(0)$ uniformly on $\overline{\Omegaega}$ as $\deltata\rightarrow 0$. Indeed, there exists $C>0$ depends on $H$ and $\mathrm{diam}(\Omegaega)$ such that for all $x\in \overline{\Omegaega}$ there holds \betagin{equation}\label{eq:rate0} \left|\deltata u_\deltata(x)+c(0)\right| \leq C\deltata. \end{equation} Also, for each $x_0\in \overline{\Omegaega}$ there exist a subsequence $\lambdabda_j$ and $u\in \mathrm{C}(\overline{\Omegaega})$ solving \eqref{eq:S_0} such that: \betagin{equation*} \betagin{cases} u_{\deltata_j}(x) - u_{\deltata_j}(x_0) \rightarrow u(x) \\ u_{\deltata _j}(x) + c(0)/\deltata_j \rightarrow w(x) \end{cases} \end{equation*} uniformly on $\overline{\Omegaega}$ as $\deltata_j\rightarrow 0$ and the difference between the two limits are $w(x) - u(x) = w(x_0)$. \end{thm} \betagin{thm}\label{thm:conv_bdd} Assume \eqref{H3c}, \eqref{H4}, $\mathrm{(H3)}$ and $\mathrm{(A2)}$. Let $u_\deltata\in \mathrm{C}(\overline{\Omegaega})\cap\mathrm{Lip}(\Omegaega)$ be the unique solution to \eqref{state-def}. Then, $u_\deltata+\deltata^{-1}c(0) \rightarrow u^0$ uniformly on $\mathrm{C}(\overline{\Omegaega})$ as $\deltata\rightarrow 0^+$ and $u^0$ solves \eqref{eq:S_0}. Furthermore, the limiting solution can be characterized as \betagin{equation}\label{eq:char} u^0 = \sup_{v\in \mathcal{E}} v \end{equation} where $\mathcal{E}$ is the set of all subsolutions $v\in \mathrm{C}(\overline{\Omegaega})$ to $H(x,Dv(x))\leq c(0)$ in $\Omegaega$ such that $\langle \mu, v\rangle \leq 0$ for all $\mu\in \mathcal{M}_0$, the set of all minimizing measures $\mu\in \mathcal{P}\cap \mathcal{G}'_0$ such that $-c(0) = \langle \mu, L\rangle$. \end{thm} We provide the proof of Theorem {\rm Re}\,f{thm:pre} in the Appendix. The proof of Theorem {\rm Re}\,f{thm:conv_bdd} is omitted as it is a slight modification of the one in \cite{Hung2019}. The characterization \eqref{eq:char} also appears in \cite{davini_convergence_2016, ishii2017, ishii_vanishing_2020} under different settings. \subsection{Maximal subsolutions and the Aubry set} For any domain (with nice boundary) $\Omegaega\subset U$, we recall that the additive eigenvalue of $H$ in $\Omegaega$ is defined as \betagin{equation*} c_\Omegaega = \inf {\mathbb B}ig\lbrace c \in \mathbb{R}: H(x,Dv(x)) \leq c\;\text{has a viscosity subsolution in}\;\Omegaega {\mathbb B}ig\rbrace. \end{equation*} We consider the following equation \betagin{equation}\label{eqn:S_mu} H(x,Dv(x)) \leq c_\Omegaega \qquad \text{in}\;\Omegaega\tag{$S_{\Omegaega}$}. \end{equation} We note that viscosity subsolutions of \eqref{eqn:S_mu} in $U$ are Lipschitz, and therefore they are equivalent to a.e. subsolutions (see \cite{barron_semicontinuous_1990,Hung2019}). Also it is clear that $c_\Omegaega\leq c_U$, where $c_U$ is the additive eigenvalue of $H$ in $U$. \betagin{defn}\label{defn:m_mu} For a fixed $z\in \Omegaega$ as a vertex, we define \betagin{align*} S_{\Omegaega}(x,z) = \sup \big\lbrace v(x)-v(z): v\;\text{solves}\;\eqref{eqn:S_mu} \big\rbrace, \qquad x\in \Omegaega. \end{align*} There is a unique (continuous) extension $S_{\Omegaega}: \overline{\Omegaega}\times \overline{\Omegaega}\rightarrow \mathbb{R}$, we call $x\mapsto S_{\Omegaega}(x,z)$ the \emph{maximal subsolution} to \eqref{eqn:S_mu} with vertex $z$. 
\end{defn}
\begin{thm}\label{thm:basic} \quad
\begin{itemize}
\item[(i)] For each fixed $z\in \overline{\Omega}$, the function $x\mapsto S_\Omega(x,z)$ solves
\begin{equation}\label{eq:sta}
\begin{cases}
H(x,Du(x)) \leq c_\Omega &\quad\;\text{in}\;\Omega,\\
H(x,Du(x)) \geq c_\Omega &\quad\;\text{on}\;\overline{\Omega}\backslash\{z\}.
\end{cases}
\end{equation}
\item[(ii)] We have the triangle inequality $ S_{\Omega}(x,z) \leq S_{\Omega}(x,y) + S_{\Omega}(y,z)$ for all $x,y,z \in \overline{\Omega}$.
\end{itemize}
We call $S_{\Omega}:\overline{\Omega}\times \overline{\Omega}\rightarrow \mathbb{R}$ an intrinsic semi-distance on $\Omega$ (see also \cite{Bardi1997,Barles1994,Ishii2008}).
\end{thm}
In the sense of Definition \ref{defn:1}, inequality \eqref{eq:sta} means that $x\mapsto u(x)$ is a subsolution to $H(x,Du(x))= c_\Omega$ in $\Omega$, and $x\mapsto u(x)$ is a supersolution to $H(x,Du(x))= c_\Omega$ on $\overline{\Omega}\backslash \{z\}$. We omit the proof of Theorem \ref{thm:basic} as it is a simple variation of Perron's method.
\begin{defn}
Let us define the ergodic problem in $\Omega$ as
\begin{equation}\label{eq:E}
\begin{cases}
H(x,Du(x)) \leq c_\Omega &\quad\text{in}\;\Omega,\\
H(x,Du(x)) \geq c_\Omega &\quad\text{on}\;\overline{\Omega}.
\end{cases}\tag{E}
\end{equation}
The Aubry set $\mathcal{A}_\Omega$ in $\Omega$ is defined as
\begin{equation*}
\mathcal{A}_{\Omega} = \Big\lbrace z\in \overline{\Omega}: x\mapsto S_{\Omega}(x,z)\;\text{is a solution to}\;\eqref{eq:E}\Big\rbrace.
\end{equation*}
\end{defn}
\begin{thm}\label{thm:V}
Assume $H(x,p) = |p| - V(x)$, where $V\in \mathrm{C}(\overline{\Omega})$ is nonnegative.
\begin{itemize}
\item[(i)] The additive eigenvalue of $H$ in $\Omega$ is $c_\Omega = -\min_{\overline{\Omega}}V$.
\item[(ii)] The Aubry set of $H$ in $\Omega$ is $\mathcal{A}_\Omega = \left\lbrace z\in \overline{\Omega}: V(z) = -c_\Omega = \min_{\overline{\Omega} }V \right\rbrace $.
\end{itemize}
\end{thm}
\begin{thm}\label{thm:char_Aubry}
Given $z\in \Omega$, then $z\notin \mathcal{A}_\Omega$ if and only if there is a subsolution of $H(x,Du(x))\leq c_\Omega$ in $\Omega$ which is strict in some neighborhood of $z$. Here we say $u\in \mathrm{C}(\overline{\Omega})$ is a \emph{strict subsolution} to $H(x,Du(x)) = c_\Omega$ in $B(x_0,r)\subset \Omega$ if there exists some $\varepsilon>0$ such that $H(x,p) \leq c_\Omega - \varepsilon$ for all $p\in D^+u(x)$ and $x\in B(x_0,r)$.
\end{thm}
\begin{thm}\label{thm:eigenvalue}
If $\mathcal{A}_U \subset\subset \Omega \subset U$ then the additive eigenvalue of $H$ in $\Omega$ is $c_\Omega = c_U$.
\end{thm}
We give the proofs of Theorems \ref{thm:V} and \ref{thm:eigenvalue} in the Appendix. A proof of Theorem \ref{thm:V} for the case $\Omega = \mathbb{R}^n$ can be found in \cite{Hung2019}. Theorem \ref{thm:eigenvalue} is taken from \cite[Proposition 5.1]{ishii_vanishing_2020}. A proof of Theorem \ref{thm:char_Aubry} can be found in \cite{ishii_vanishing_2020}. The maximal subsolution $S_\Omega(x,y)$ also has an optimal control formulation (minimal exit time) as follows (see \cite{fathi_pde_2005, ishii_vanishing_2020}).
\betagin{thm}[Optimal control formula]\label{lem:optimal} Let us define for $\betata \geq \alphapha \geq 0$ the following set: \betagin{equation*} \mathcal{F}_{\Omegaega}(x,y; \alphapha,\betata) = {\mathbb B}ig\lbrace\xi \in \mathrm{AC}\left([0,T], \overline{\Omegaega}\right); T > 0, \xi(\alphapha) = y, \xi(\betata) = x {\mathbb B}ig\rbrace. \end{equation*} Then \betagin{equation*} S_{\Omegaega}(x,y) = \inf \left\lbrace \int_0^T {\mathbb B}ig( c(0)+ L(\xi(s),\dot{\xi}(s)){\mathbb B}ig)\;ds: \xi\in \mathcal{F}_{\Omegaega}(x,y;0, T) \right\rbrace. \end{equation*} \end{thm} \subsection{The vanishing discount problem on changing domains} Let $\Omegaega_\lambdabda = (1+r(\lambdabda))\Omegaega$. For each $\lambdabda \in (0,1)$ let $u_\lambdabda \in \mathrm{BUC}(\overline{\Omegaega}_\lambdabda)\cap \mathrm{Lip}(\Omegaega_\lambdabda)$ be the unique viscosity state-constraint solutions to \betagin{equation}\label{eq:def_ulambda} \betagin{cases} \phi(\lambdabda) u_\lambdabda(x) + H(x,Du_\lambdabda(x)) \leq 0 &\quad\text{in}\;\Omegaega_\lambdabda,\\ \phi(\lambdabda) u_\lambdabda(x) + H(x,Du_\lambdabda(x)) \geq 0 &\quad\text{on}\;\overline{\Omegaega}_\lambdabda. \end{cases} \end{equation} The additive eigenvalue $c(\lambdabda)$ of $H$ in $\Omegaega_\lambdabda$ is the unique constant such that the following ergodic problem can be solved \betagin{equation}\label{eq:cell3} \betagin{cases} H(x,Du(x)) \leq c(\lambdabda) &\quad \text{in}\;\Omegaega_\lambdabda\\ H(x,Du(x)) \geq c(\lambdabda) &\quad \text{on}\;\overline{\Omegaega}_\lambdabda. \end{cases} \end{equation} By comparison principle, it is clear that if $r(\lambdabda)\geq 0$ then $c(0)\leq c(\lambdabda)$ and if $\lambdabda\mapsto r(\lambdabda)$ is increasing (decreasing) then $\lambdabda\mapsto c(\lambdabda)$ increasing (decreasing) as well. \betagin{thm}\label{thm:pre_conv_sta} Considering the problem \eqref{eq:def_ulambda} with \eqref{H3c}, \eqref{H4} and $\mathrm{(A1)}$. \betagin{itemize} \item[(i)] We have the priori estimate $\phi(\lambdabda) |u_\lambdabda(x)| + |Du_\lambdabda(x)| \leq C_H$ for $x\in \Omegaega_\lambdabda$. \item[(ii)] We have $\phi(\lambdabda) u_\lambdabda(\cdot)\rightarrow -c(0)$ locally uniformly as $\lambdabda \rightarrow 0^+$. Furthermore for all $x\in \overline{\Omegaega}_\lambdabda$ and $\lambdabda>0$ we have \betagin{equation}\label{eq:rate1} \betagin{cases} \left|\phi(\lambdabda)u_\lambdabda(x) + c(0)\right| \leq C\left(\phi(\lambdabda) + |r(\lambdabda)|\right)\\ \left|\phi(\lambdabda)u_\lambdabda(x) + c(\lambdabda)\right| \leq C\phi(\lambdabda). \end{cases} \end{equation} As a consequence, whenever $r(\lambdabda)\neq 0$ there holds \betagin{equation}\label{eq:bound} \left|\frac{c(\lambdabda) - c(0)}{r(\lambdabda)}\right| \leq C. \end{equation} \item[(iii)] For $x_0\in \overline{\Omegaega}$ there exists a subsequence $\lambdabda_j$ and $u,w\in \mathrm{BUC}(\overline{\Omegaega})\cap \mathrm{Lip}(\Omegaega)$ such that $u_{\lambdabda_j}(\cdot) - u_{\lambdabda_j}(x_0) \rightarrow u(\cdot) $ and $u_{\lambdabda_j}(\cdot) + \phi(\lambdabda_j)^{-1}c(0) \rightarrow w(\cdot)$ locally uniformly as $\lambdabda_j\rightarrow 0$ and $u,w$ solve \eqref{eq:S_0} with $w(x) - u(x) = w(x_0)$. \end{itemize} \end{thm} \betagin{proof}[Proof of Theorem {\rm Re}\,f{thm:pre_conv_sta}] The priori estimate is clear from the coercivity assumption \eqref{H4}. 
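For completeness, here is a brief sketch of the standard argument behind this estimate; it uses only the boundedness of $H(\cdot,0)$ on $\overline{U}$, the coercivity \eqref{H4}, and the comparison principle for the state-constraint problem on $\Omega_\lambda$. Set $M = \sup_{\overline{U}}|H(\cdot,0)|$. The constants $-M/\phi(\lambda)$ and $M/\phi(\lambda)$ are, respectively, a subsolution in $\Omega_\lambda$ and a supersolution on $\overline{\Omega}_\lambda$ of \eqref{eq:def_ulambda}, so comparison gives $\phi(\lambda)|u_\lambda|\leq M$ on $\overline{\Omega}_\lambda$. Since $u_\lambda\in \mathrm{Lip}(\Omega_\lambda)$ is an a.e. subsolution,
\begin{equation*}
H(x,Du_\lambda(x)) \leq -\phi(\lambda) u_\lambda(x) \leq M \qquad\text{a.e. in}\;\Omega_\lambda,
\end{equation*}
and \eqref{H4} then bounds $|Du_\lambda|$ by a constant $C_H$ depending only on $H$.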
Fix $x_0\in \Omega$; by the Arzel\`a--Ascoli theorem there exist a subsequence $\lambda_j\rightarrow 0^+$, $c\in \mathbb{R}$ and $u$ defined in $\Omega$ such that $\phi(\lambda_j)u_{\lambda_j}(x_0) \rightarrow -c$ and $u_{\lambda_j}(\cdot) - u_{\lambda_j}(x_0) \rightarrow u(\cdot)$ locally uniformly as $\lambda_j\rightarrow 0^+$. The statements for $w(\cdot)$, as well as the relation between $u$ and $w$, can be obtained in the same manner. It follows that $u\in \mathrm{BUC}(\overline{\Omega})$ and by stability of viscosity solutions we have $H(x,Du(x)) = c$ in $\Omega$. Since $u_\lambda(\cdot)$ is Lipschitz, we deduce also that $\phi(\lambda_j)u_{\lambda_j}(x) \rightarrow -c$ for any $x\in \overline{\Omega}$.
We show that $H(x,Du(x)) \geq c$ on $\overline{\Omega}$. Let $\varphi\in \mathrm{C}^1(\overline{\Omega})$ be such that $u-\varphi$ has a strict minimum over $\overline{\Omega}$ at $\tilde{x}\in \partial \Omega$; we aim to show that $ H(\tilde{x},D\varphi(\tilde{x})) \geq c$. Let us define
\begin{equation}\label{eq:u_tilde}
\tilde{u}_\lambda(x) = (1+r(\lambda))^{-1} u_\lambda\left((1+r(\lambda))x\right), \qquad x\in \overline{\Omega}
\end{equation}
then
\begin{equation}\label{eq:auxiliary0}
\begin{cases}
\phi(\lambda)(1+r(\lambda)) \tilde{u}_\lambda(x) + H((1+r(\lambda))x, D\tilde{u}_\lambda(x)) \leq 0 &\quad \text{in}\; \Omega,\\
\phi(\lambda)(1+r(\lambda)) \tilde{u}_\lambda(x) + H((1+r(\lambda))x, D\tilde{u}_\lambda(x)) \geq 0 &\quad \text{on}\; \overline{\Omega}.
\end{cases}
\end{equation}
Let us define
\begin{equation*}
\varphi_\lambda(x) = (1+|r(\lambda)|)\varphi\left(\frac{x}{1+|r(\lambda)|}\right), \qquad x\in (1+|r(\lambda)|)\overline{\Omega}.
\end{equation*}
Note that $D\varphi_\lambda(x) = D\varphi\left((1+|r(\lambda)|)^{-1}x\right)$ for $x\in (1+|r(\lambda)|)\Omega$. We also define
\begin{equation*}
\Phi^\lambda(x,y) = \varphi_\lambda(x) - \tilde{u}_\lambda(y) - \frac{|x-y|^2}{2r(\lambda)^2},\qquad (x,y)\in \left(1+|r(\lambda)|\right)\overline{\Omega}\times \overline{\Omega}.
\end{equation*}
Let $\Phi^\lambda(x,y)$ attain its maximum over $\left(1+|r(\lambda)|\right)\overline{\Omega}\times \overline{\Omega}$ at $(x_\lambda,y_\lambda)$. By definition we have $\Phi^\lambda(x_\lambda,y_\lambda) \geq \Phi^\lambda(y_\lambda,y_\lambda)$, therefore
\begin{equation*}
\varphi_\lambda(x_\lambda) - \frac{|x_\lambda - y_\lambda|^2}{2r(\lambda)^2} \geq \varphi_\lambda(y_\lambda)
\end{equation*}
and thus
\begin{equation*}
|x_\lambda - y_\lambda| \leq 2|r(\lambda)| \left((1+|r(\lambda)|)\Vert \varphi\Vert_{L^\infty(\overline{\Omega})}\right)^{1/2}.
\end{equation*}
From that we can assume, up to a further subsequence, that $(x_\lambda,y_\lambda)\rightarrow (\overline{x},\overline{x})$ for some $\overline{x}\in \overline{\Omega}$ as $\lambda\rightarrow 0^+$, and then
\begin{equation}\label{eq:auxiliary2}
\limsup_{\lambda\rightarrow 0^+}\left(\frac{|x_\lambda - y_\lambda|^2}{2r(\lambda)^2}\right) \leq \limsup_{\lambda\rightarrow 0^+} \left(\varphi_\lambda(x_\lambda) - \varphi_\lambda(y_\lambda)\right) = 0.
\end{equation}
In other words, $|x_\lambda-y_\lambda| = o\left(|r(\lambda)|\right)$.
Now $\Phi^\lambda(x_\lambda,y_\lambda) \geq \Phi^\lambda(\tilde{x},\tilde{x})$ gives us
\begin{equation*}
\varphi_\lambda(x_\lambda) - \tilde{u}_\lambda(y_\lambda) - \frac{|x_\lambda - y_\lambda|^2 }{2r(\lambda)^2} \geq \varphi_\lambda(\tilde{x}) - \tilde{u}_\lambda(\tilde{x}).
\end{equation*}
Taking $\lambda \rightarrow 0^+$ and using \eqref{eq:auxiliary2}, we obtain that $u(\tilde{x}) - \varphi(\tilde{x}) \geq u(\overline{x}) - \varphi(\overline{x})$, which implies that $\tilde{x} = \overline{x}$, since $u-\varphi$ has a strict minimum over $\overline{\Omega}$ at $\tilde{x}$. From $\mathrm{(A1)}$ and $|x_\lambda - y_\lambda| = o(|r(\lambda)|)$, we deduce that $x_\lambda \in (1+|r(\lambda)|)\Omega$. As $y\mapsto \Phi^\lambda(x_\lambda, y)$ has a maximum at $y_\lambda$, we deduce that
\begin{equation*}
\tilde{u}_\lambda(y) - \left(- \frac{|x_\lambda-y|^2}{2 r(\lambda)^2} \right)
\end{equation*}
has a minimum at $y_\lambda$; therefore, as $\tilde{u}_\lambda$ is Lipschitz with constant $C_H$, we deduce that
\begin{equation}\label{eq:Aug29-1}
\left|\frac{x_\lambda-y_\lambda}{r(\lambda)^2}\right|\leq C_H,
\end{equation}
and we can apply the supersolution test for \eqref{eq:auxiliary0} to obtain
\begin{equation}\label{eq:auxiliary4}
\phi(\lambda) (1+r(\lambda))\tilde{u}_\lambda(y_\lambda) + H\left((1+r(\lambda))y_\lambda, \frac{x_\lambda - y_\lambda}{r(\lambda)^2} \right) \geq 0.
\end{equation}
On the other hand, since $x_\lambda$ is an interior point of $(1+|r(\lambda)|)\Omega$ and $x\mapsto \Phi^\lambda(x, y_\lambda)$ has a maximum at $x_\lambda$, we deduce that
\begin{equation}\label{eq:auxiliary5}
D\varphi_\lambda(x_\lambda) = \frac{x_\lambda - y_\lambda}{r(\lambda)^2} \qquad\Longrightarrow\qquad D\varphi\left(\frac{x_\lambda}{1+|r(\lambda)|}\right) = \frac{x_\lambda - y_\lambda}{r(\lambda)^2}.
\end{equation}
From \eqref{eq:u_tilde}, \eqref{eq:auxiliary4} and \eqref{eq:auxiliary5} we obtain
\begin{equation}\label{eq:aug1}
\phi(\lambda) u_\lambda\left((1+r(\lambda))y_\lambda\right) + H\left((1+r(\lambda))y_\lambda, D\varphi_\lambda(x_\lambda)\right) \geq 0.
\end{equation}
Recalling that $\phi(\lambda_j)u_{\lambda_j}(x)\rightarrow -c$ as $\lambda_j \rightarrow 0^+$ for any $x\in \overline{\Omega}$, we observe that
\begin{align*}
\left|\phi(\lambda) u_\lambda\left((1+r(\lambda))y_\lambda\right) +c \right| &\leq \big|\phi(\lambda)u_\lambda(\tilde{x})+c\big| + \phi(\lambda)\big|u_\lambda\left((1+r(\lambda))y_\lambda\right) - u_\lambda\left(\tilde{x}\right)\big|\\
&\leq \big|\phi(\lambda)u_\lambda(\tilde{x})+c\big| + \phi(\lambda)C_H\big|(y_\lambda - \tilde{x})+r(\lambda)y_\lambda\big|\\
&\leq \big|\phi(\lambda)u_\lambda(\tilde{x})+c\big| + \phi(\lambda)C_H\big|y_\lambda - \tilde{x}\big| +C_H \phi(\lambda)|r(\lambda)|\,\mathrm{diam}(\Omega).
\end{align*}
Letting $\lambda \rightarrow 0^+$ along $\lambda_j$, we obtain
\begin{equation}\label{eq:Aug2}
\lim_{\lambda_j\rightarrow 0^+} \phi(\lambda_j) u_{\lambda_j}\left((1+r(\lambda_j))y_{\lambda_j}\right) = -c.
\end{equation}
From \eqref{eq:Aug29-1} and \eqref{H3c} we have (up to subsequences)
\begin{equation}\label{eq:Aug3}
\lim_{\lambda_j\rightarrow 0^+}H\left((1+r(\lambda_j))y_{\lambda_j}, D\varphi_{\lambda_j}(x_{\lambda_j})\right) = H(\tilde{x}, D\varphi(\tilde{x})).
\end{equation}
From \eqref{eq:aug1}, \eqref{eq:Aug2} and \eqref{eq:Aug3} we deduce that $H(\tilde{x},D\varphi(\tilde{x})) \geq c$. The comparison principle for the state-constraint problem gives us the uniqueness of $c$ and, furthermore, that $c = c(0)$.
The estimate \eqref{eq:rate1} can be established using the comparison principle. We see that $u(x)-\phi(\lambda)^{-1}c(0) - C$ and $u(x)-\phi(\lambda)^{-1}c(0) + C$, for $C>0$ large enough, are a subsolution and a supersolution, respectively, to
\begin{equation}\label{eqn:i}
\begin{cases}
\phi(\lambda) w(x)+H(x,Dw(x)) \leq 0\qquad&\text{in}\;\Omega,\\
\phi(\lambda) w(x)+H(x,Dw(x)) \geq 0\qquad&\text{on}\;\overline{\Omega}.
\end{cases}
\end{equation}
On the other hand, from \eqref{eq:auxiliary0}, the a priori estimate $|\phi(\lambda)u_\lambda|\leq C$ and \eqref{H3c}, we have that $\tilde{u}_\lambda(x) - C\phi(\lambda)^{-1}|r(\lambda)|$ and $\tilde{u}_\lambda(x) + C\phi(\lambda)^{-1}|r(\lambda)|$ are a subsolution and a supersolution, respectively, to \eqref{eqn:i}. Therefore, by the comparison principle for \eqref{eqn:i} we have
\begin{equation*}
\begin{cases}
u(x) - \phi(\lambda)^{-1}c(0)-C \leq \tilde{u}_\lambda(x) + C|r(\lambda)|\phi(\lambda)^{-1},\\
u(x) - \phi(\lambda)^{-1}c(0)+C \geq \tilde{u}_\lambda(x) - C|r(\lambda)|\phi(\lambda)^{-1}.
\end{cases}
\end{equation*}
Therefore $|\phi(\lambda)\tilde{u}_\lambda(x) + c(0)| \leq C\left(\phi(\lambda) + |r(\lambda)|\right)$. The other estimate in \eqref{eq:rate1} is a direct consequence of \eqref{eq:rate0}.
\end{proof}
\begin{rem}\label{rem:Aug29-2}
We note that $\tilde{u}_\lambda$ defined as in \eqref{eq:u_tilde} is not necessarily close to $u_\lambda$. In fact, for $x\in \overline{\Omega}$ we have
\begin{equation*}
\tilde{u}_\lambda(x) - u_\lambda(x)= \frac{u_\lambda((1+r(\lambda))x) - u_\lambda(x)}{1+r(\lambda)} - \frac{r(\lambda)}{1+r(\lambda)} \left(u_\lambda(x)+\frac{c(0)}{\phi(\lambda)}\right)+\frac{r(\lambda)c(0)}{\phi(\lambda)(1+r(\lambda))}.
\end{equation*}
Using \eqref{eq:rate1} and the fact that $u_\lambda$ is Lipschitz, we obtain that
\begin{equation}\label{eq:Aug4}
\left|\tilde{u}_\lambda(x) - u_\lambda(x)\right| \leq 2C|r(\lambda)|\,|x| + 2C|r(\lambda)|\left(1+\left|\frac{r(\lambda)}{\phi(\lambda)}\right|\right)+2\left|\frac{r(\lambda)}{\phi(\lambda)}c(0)\right|.
\end{equation}
Therefore $\tilde{u}_\lambda$ and $u_\lambda$ are close if $\gamma = 0$, and $\left\lbrace\tilde{u}_\lambda+\phi(\lambda)^{-1}c(0)\right\rbrace_{\lambda>0}$ is uniformly bounded in $\lambda>0$ if $\gamma$ is finite (or, more generally, if $\left|r(\lambda)/\phi(\lambda)\right|$ is bounded); when $\gamma$ is finite we moreover have
\begin{equation}\label{rem:Aug29-3}
\lim_{\lambda\rightarrow 0^+} \Big(\tilde{u}_\lambda(x) - u_\lambda(x)\Big) = \gamma c(0).
\end{equation}
\end{rem}
As we are working with domains that are smaller or bigger than $\Omega$, we introduce the scaling of measures for convenience.
\betagin{defn}\label{defn:scaledown} For a measure $\sigmama$ defined on $(1+r)\overline{\Omegaega}\times \overline{B}_h$, we define its scaling $\tilde{\sigmama }$ as a measure on $\overline{\Omegaega}\times \overline{B}_h$ by \betagin{equation}\label{def_measures} \int_{\overline{\Omegaega}\times\overline{B}_h} f(x,v)\;d\tilde{\sigmama}(x,v) = \int_{(1+r)\overline{\Omegaega}\times\overline{B}_h} f\left(\frac{x}{1+r},v\right)\;d\sigmama(x,v). \end{equation} \end{defn} We introduce the following definition for simplicity, as we will deal with mainly approximation from the inside and outside of $\Omegaega$. \betagin{defn}\label{def:2domains} For $r(\lambdabda)\geq 0$, we define $\Omegaega_\lambdabda^{\pm} = (1\pm r(\lambdabda))\Omegaega$. We denote by $c(\lambdabda)^{\pm}$ and $u_\lambdabda^{\pm}$, respectively, the additive eigenvalues of $H$ in $(1\pm r(\lambdabda))\Omegaega$ and the solutions to the discounted problem \eqref{state-def} on $(1\pm r(\lambdabda))\Omegaega$ with discount factor $\deltata = \phi(\lambdabda)$. We let $u_\lambdabda^-$ and $u_{\lambdabda}^+$ be solutions to \betagin{equation*} \betagin{cases} \phi(\lambdabda) v(x) + H(x,Dv(x)) \leq 0 \quad\text{in}\;\Omegaega_\lambdabda,\\ \phi(\lambdabda) v(x) + H(x,Dv(x)) \geq 0 \quad\text{on}\;\overline{\Omegaega}_\lambdabda; \end{cases} \end{equation*} with $\Omegaega_\lambdabda$ being replaced by $(1-r(\lambdabda))\Omegaega$ and $(1+r(\lambdabda))\Omegaega$, respectively. \end{defn} \section{The first normalization: convergence and a counter example}\label{sec3} In view of Theorems {\rm Re}\,f{thm:pre_conv_sta}, it is natural to ask the question if the convergence of $u_\lambdabda(x) - u_\lambdabda(x_0)$ holds for the whole sequence as $\lambdabda \rightarrow 0^+$. The two natural normalization one can study are \betagin{equation}\label{eq:want} \left\lbrace u_\lambdabda(x) + \frac{c(0)}{\phi(\lambdabda)} \right\rbrace_{\lambdabda>0} \qquad\text{and}\qquad \left\lbrace u_\lambdabda(x) + \frac{c(\lambdabda)}{\phi(\lambdabda)} \right\rbrace_{\lambdabda>0}. \end{equation} We observe that from Theorem {\rm Re}\,f{thm:pre_conv_sta} we have \betagin{equation}\label{eq:bdd} \left|u_\lambdabda(x) + \frac{c(0)}{\phi(\lambdabda)}\right| \leq C\left(1+\frac{r(\lambdabda)}{\phi(\lambdabda)}\right) \qquad\text{and}\qquad \left|u_\lambdabda(x) + \frac{c(\lambdabda)}{\phi(\lambdabda)}\right| \leq C. \end{equation} We observe that $u_\lambdabda(x)+\phi(\lambdabda)^{-1}c(0)$ is bounded if $\gammama$ defined in \eqref{eq:asm} is finite, or more generally if $|r(\lambdabda)| = \mathcal{O}(\phi(\lambdabda))$ as $\lambdabda\rightarrow 0^+$, while $u_\lambdabda(x)+\phi(\lambdabda)^{-1}c(\lambdabda)$ is bounded even if $\gammama$ is infinite. The following example show a divergence for $u_\lambdabda(x)+\phi(\lambdabda)^{-1}c(0)$ when $\gammama = \infty$. \betagin{exa}\label{ex:a} Let us consider $H(x,p) = |p|+x$, $\Omegaega =(-1,1)$, $\phi(\lambdabda) = \lambdabda$ and $r(\lambdabda) = \lambdabda^m$ for $\lambdabda > 0$. Using the optimal control formula we obtain \betagin{equation*} u_\lambdabda(x) = \inf_{\alphapha(\cdot)} \left(-\int_0^\infty e^{-\lambdabda s}y(s)\;ds\right) \qquad\text{where}\qquad \betagin{cases} \dot{y}(s) &= \alphapha(s)\in [-1,1]\\ y(0) &= x. 
\end{cases} \end{equation*} Regarding Definition {\rm Re}\,f{def:2domains}, we have $c(0) = 1$, $c(\lambdabda)^{\pm} = 1\pm\lambdabda^m$ and \betagin{align*} u_\lambdabda(x)^{\pm} + \frac{c(0)}{\lambdabda} = \frac{1-x}{\lambdabda}+\frac{e^{-\lambdabda(1\pm \lambdabda^m-x)}-1}{\lambdabda^2} = \mp \lambdabda^{m-1} + \frac{(1-x\pm \lambdabda^m)^2}{2}+ \mathcal{O}(\lambdabda) \end{align*} as $\lambdabda\rightarrow 0^+$, which are convergent only if $m\geq 1$. On the other hand, we have \betagin{equation*} u_\lambdabda(x)^{\pm} + \frac{c(\lambdabda)^{\pm}}{\lambdabda} = \frac{(1-x \pm \lambdabda^m)^2}{2} + \mathcal{O}(\lambdabda) \end{equation*} as $\lambdabda\rightarrow 0^+$, which converge to the same limit for all $m\geq 0$. In this example the family $\left\lbrace u_\lambdabda+\phi(\lambdabda)^{-1}c(\lambdabda)\right\rbrace_{\lambdabda>0}$ still converges even if $\gammama = \infty$. However it is not true in general, as we will prove an example in Section 5. \end{exa} \noindent We give a simple proof for the convergence of both families in \eqref{eq:want} when $\gammama = 0$. \betagin{proof}[Proof of Theorem {\rm Re}\,f{thm:subcritical}] Let $v_\lambdabda\in \mathrm{C}(\overline{\Omegaega})\cap \mathrm{Lip}(\Omegaega)$ solving \betagin{equation}\label{eq:def_vlambda} \betagin{cases} \phi(\lambdabda) v_\lambdabda(x) + H(x,Dv_\lambdabda(x)) \leq 0 &\quad\text{in}\;\Omegaega,\\ \phi(\lambdabda) v_\lambdabda(x) + H(x,Dv_\lambdabda(x)) \geq 0 &\quad\text{on}\;\overline{\Omegaega}. \end{cases} \end{equation} By Theorem {\rm Re}\,f{thm:conv_bdd}, there exists $u^0$ solves \eqref{eq:S_0} such that $v_\lambdabda(x) + \phi(\lambdabda)^{-1}c(0) \rightarrow u^0(x)$ uniformly on $\overline{\Omegaega}$ as $\lambdabda \rightarrow 0^+$. Define $\tilde{u}_\lambdabda(x)$ as in \eqref{eq:u_tilde} then $\tilde{u}_\lambdabda$ solves \eqref{eq:auxiliary0}. Similarly to Theorem {\rm Re}\,f{thm:pre_conv_sta} we obtain that $\tilde{u}_\lambdabda(x) - C\phi(\lambdabda)^{-1}|r(\lambdabda)|, \tilde{u}_\lambdabda(x) + C\phi(\lambdabda)^{-1}|r(\lambdabda)|$ are subsolution and supersolution, respectively, to \eqref{eq:def_vlambda}, therefore \betagin{equation*} \left|\left(\tilde{u}_\lambdabda(x) + \frac{c(0)}{\phi(\lambdabda)}\right) - \left( v_\lambdabda(x) + \frac{c(0)}{\phi(\lambdabda)}\right)\right| \leq C\frac{|r(\lambdabda)|}{\phi(\lambdabda)}. \end{equation*} Recall \eqref{eq:Aug4}, as $\gammama = 0$ we have $|\tilde{u}_\lambdabda - u_\lambdabda|\rightarrow 0$ as $\lambdabda\rightarrow 0^+$, therefore we deduce that $u_\lambdabda(x) + \phi(\lambdabda)^{-1}c(0)\rightarrow u^0(x)$ locally uniformly as $\lambdabda\rightarrow 0^+$. From \eqref{eq:bound} in Theorem {\rm Re}\,f{thm:pre_conv_sta} and $\gammama = 0$ we obtain \betagin{equation*} \lim_{\lambdabda \rightarrow 0^+}\left( \frac{c(\lambdabda) - c(0)}{\phi(\lambdabda)} \right)= 0. \end{equation*} and thus $u_\lambdabda(x) + \phi(\lambdabda)^{-1}c(\lambdabda) \rightarrow u^0(x)$ locally uniformly as $\lambdabda \rightarrow 0^+$. \end{proof} \betagin{rem} If $\gammama\neq 0$ then in general the solution $v_\lambdabda$ to \eqref{eq:def_vlambda} and the solution $u_\lambdabda$ to \eqref{eq:def_ulambda} are not close to each other, as we will see in example {\rm Re}\,f{ex:differ}. \end{rem} \betagin{exa}\label{ex:differ} Let $H(x,p) = |p| - e^{-|x|}$ on $\Omegaega = (-1,1)$ and $\phi(\lambdabda)=r(\lambdabda) =\lambdabda$. 
Using the optimal control formula, the solutions to \eqref{eq:def_ulambda} (in the notation of Definition \ref{def:2domains}) are
\begin{align*}
u^-_\lambda(x) = \frac{e^{-|x|}}{1+\lambda} + \frac{e^{-(1-\lambda^2)+\lambda|x|}}{\lambda(1+\lambda)}, \qquad x\in [-(1-\lambda), (1-\lambda)],\\
u^+_\lambda(x) = \frac{e^{-|x|}}{1+\lambda} + \frac{e^{-(1+\lambda)^2+\lambda|x|}}{\lambda(1+\lambda)}, \qquad x\in [-(1+\lambda), (1+\lambda)].
\end{align*}
On the fixed bounded domain, the solution $v_\lambda$ to \eqref{eq:def_vlambda} is given by
\begin{equation*}
v_\lambda(x) = \frac{e^{-|x|}}{1+\lambda} + \frac{e^{-1-\lambda+\lambda|x|}}{\lambda(1+\lambda)}, \qquad x\in [-1,1].
\end{equation*}
We have $c(0) = -e^{-1}$ and $c(\lambda)^{\pm} = -e^{-1\mp\lambda}$, thus $c^{(1)}_-=c^{(1)}_+ = e^{-1}$ and
\begin{equation*}
\lim_{\lambda\rightarrow 0^+}\left( \frac{c(\lambda)_- - c(0)}{-\lambda}\right) = \lim_{\lambda\rightarrow 0^+}\left( \frac{c(\lambda)_+ - c(0)}{\lambda}\right) = e^{-1}.
\end{equation*}
The maximal solution (in the sense of Theorem \ref{thm:conv_bdd}) on $\Omega$ is given by
\begin{align*}
u^0(x)= \lim_{\lambda\rightarrow 0^+} \left(v_\lambda(x) + \frac{c(0)}{\lambda}\right)= e^{-|x|} + e^{-1}|x| - 2e^{-1}, \qquad x\in [-1,1].
\end{align*}
In particular, $u^0(\pm 1) = 0$. On the other hand, using the notation of Theorem \ref{thm:general} with $\gamma = \mp 1$ we have
\begin{align*}
u^{-1}= \lim_{\lambda\rightarrow 0^+} \left(u^-_\lambda(x) + \frac{c(0)}{\lambda}\right) &= e^{-|x|} + e^{-1}|x| - e^{-1}, \qquad\qquad x\in [-1,1],\\
u^{+1} =\lim_{\lambda\rightarrow 0^+} \left(u^+_\lambda(x) + \frac{c(0)}{\lambda}\right) &= e^{-|x|} + e^{-1}|x|-3e^{-1}, \;\;\qquad x\in [-1,1]
\end{align*}
and
\begin{equation*}
\lim_{\lambda\rightarrow 0^+} \left(u^{\pm}_\lambda(x) + \frac{c(\lambda)_{\pm}}{\lambda}\right)= u^0(x), \qquad x\in [-1,1].
\end{equation*}
In this example $u^{+1}(\cdot)+u^{-1}(\cdot) = 2u^0(\cdot)$, and $u_\lambda$ and $v_\lambda$ are not close to each other.
\end{exa}
Using the representation formula as in Theorem \ref{thm:conv_bdd}, we show the convergence of $\left\lbrace u_\lambda+\phi(\lambda)^{-1}c(0)\right\rbrace _{\lambda>0}$ when $\gamma$ is finite. This method also recovers the result of Theorem \ref{thm:subcritical}. The following technical lemma is a consequence of $\mathrm{(H4)}$; we give a proof of it in the Appendix.
\begin{lem}\label{lem:regu}
Assume $L$ satisfies $\mathrm{(H4)}$; then
\begin{align*}
\frac{L(x,v) - L((1\pm \delta)x,v)}{\delta} \rightarrow (\mp x)\cdot D_xL(x,v) \quad\text{uniformly on}\;\overline{\Omega}\times \overline{B}_h\;\text{as}\;\delta\rightarrow 0^+.
\end{align*}
\end{lem}
\begin{proof}[Proof of Theorem \ref{thm:general}]
By the reduction step earlier, we may assume that $H$ satisfies \eqref{eq:H&L} for some $h > 0$ and $L \in \mathrm{C}(\overline{\Omega} \times \overline{B}_h)$. By \eqref{eq:bdd} and $\gamma < \infty$ we have the boundedness of $\left\lbrace u_\lambda(x)+\phi(\lambda)^{-1}c(0)\right\rbrace_{\lambda>0}$. Recalling Remark \ref{rem:Aug29-2}, let $\tilde{u}_\lambda$ be defined as in \eqref{eq:u_tilde} and let $\mathcal{\tilde{U}}$ be the set of accumulation points of $\left\lbrace \tilde{u}_\lambda + \phi(\lambda)^{-1}c(0) \right\rbrace_{\lambda>0}$ in $\mathrm{C}(\overline{\Omega})$ as $\lambda\rightarrow 0^+$.
By Theorem {\rm Re}\,f{thm:pre_conv_sta} we have $\mathcal{\tilde{U}}$ is nonempty. To show that $\mathcal{\tilde{U}}$ is singleton, we show that if $u,w\in \mathcal{\tilde{U}}$ then $u\equiv w$. Assume that there exist $\lambdabda_j\rightarrow 0$ and $\deltata_j \rightarrow 0$ such that $\tilde{u}_{\lambdabda_j}+\phi(\lambdabda_j)^{-1}c(0)\rightarrow u$ and $\tilde{u}_{\deltata_j}+\phi(\deltata_j)^{-1}c(0)\rightarrow w$ locally uniformly as $j \rightarrow \infty$. Let us fix $z\in \Omegaega$, by Theorem {\rm Re}\,f{thm:lambdau} there exists $\mu_{\lambdabda} \in \mathcal{P}\cap \mathcal{G}'_{z,\phi(\lambdabda),\Omegaega_{\lambdabda}}$ such that \betagin{equation}\label{eq:xyz} \phi(\lambdabda) u_{\lambdabda}(z) = \int_{\overline{\Omegaega}_{\lambdabda}\times \overline{B}_h} L(x,v)\;d\mu_{\lambdabda}(x,v) = \min_{\mu\in \mathcal{P}\cap \mathcal{G}'_{z,\phi(\lambdabda),\Omegaega_{\lambdabda}}} \int_{\overline{\Omegaega}_\lambdabda\times \overline{B}_h} L(x,v)\;d\mu(x,v). \end{equation} Let $\tilde{\mu}_\lambdabda$ be the measure obtained from $\mu_\lambdabda$ defined as in Definition {\rm Re}\,f{defn:scaledown}, it is clear that $\tilde{\mu}_\lambdabda$ is a probability measures on $\overline{\Omegaega}$, therefore the set \betagin{equation}\label{eq:U_*} \mathcal{U}_*(z) = \left\lbrace \mu\in \mathcal{P}(\overline{\Omegaega}\times \overline{B}_h): \tilde{\mu}_\lambdabda \rightharpoonup \mu\;\text{in measure along some subsequences} \right\rbrace \end{equation} is nonempty. By Lemma {\rm Re}\,f{lem:stability} we can assume that (up to subsequence) there exists $\mu_0 \in \mathcal{M}_0$ such that $\tilde{\mu}_\lambdabda\rightharpoonup \mu_0$ in measure. We have $H(x,Dw(x))\leq c(0)$ in $\Omegaega$, let $w_\lambdabda(x) = (1+r(\lambdabda))w\left((1+r(\lambdabda))^{-1}x\right)$ in $x\in (1+r(\lambdabda))\overline{\Omegaega}$ then $w_\lambdabda(x)\rightarrow w(x)$ pointwise s $\lambdabda\rightarrow 0^+$ and \betagin{equation*} \phi(\lambdabda)w_\lambdabda(x)+H_{L\left(\frac{x}{1+r(\lambdabda)},v\right)+\phi(\lambdabda) w_\lambdabda(x)+c(0)}(x,Dw_\lambdabda(x)) \leq 0 \qquad\text{in}\;(1+r(\lambdabda))\Omegaega. \end{equation*} By definition we obtain \betagin{equation*} \left( L\left(\frac{x}{1+r(\lambdabda)},v\right)+\phi(\lambdabda) w_{\lambdabda}(x)+c(0), w_{\lambdabda}(x) \right)\in \mathcal{F}_{\phi(\lambdabda),\Omegaega_{\lambdabda}} \end{equation*} and therefore \betagin{equation*} \left\langle \mu_{\lambdabda}, L\left(\frac{x}{1+r(\lambdabda)},v\right)+\phi(\lambdabda) w_{\lambdabda}(x)- \phi(\lambdabda) w_{\lambdabda}(z)+c(0)\right\rangle \geq 0. \end{equation*} In other words, we have \betagin{equation*} \left\langle \mu_\lambdabda, L\left(\frac{x}{1+r(\lambdabda)},v\right)\right\rangle+\phi(\lambdabda)(1+r(\lambdabda))\left\langle \mu_\lambdabda, w\left(\frac{x}{1+r(\lambdabda)}\right)\right\rangle + c(0) \geq \phi(\lambdabda) w_{\lambdabda}(z). \end{equation*} Combine with $-\langle \mu_\lambdabda, L(x,v)\rangle + \phi(\lambdabda) u_\lambdabda(z) = 0 $ from \eqref{eq:xyz} we obtain \betagin{align*} \left\langle \mu_\lambdabda, L\left(\frac{x}{1+r(\lambdabda)},v\right) -L(x,v)\right\rangle &+ \phi(\lambdabda)(1+r(\lambdabda)) \big\langle\tilde{\mu}_\lambdabda,w(x)\big\rangle \\ &+ \phi(\lambdabda) u_\lambdabda(z) + c(0) \geq \phi(\lambdabda) w_\lambdabda(z). 
\end{align*} Dividing both sides by $\phi(\lambdabda)$ we deduce that \betagin{equation*} \frac{r(\lambdabda)}{\phi(\lambdabda)}\left\langle \tilde{\mu}_{\lambdabda},\frac{L(x,v) - L((1+r(\lambdabda))x,v)}{r(\lambdabda)} \right\rangle + (1+r(\lambdabda))\left\langle \tilde{\mu}_{\lambdabda}, w\right\rangle + \left( u_{\lambdabda}(z)+\frac{c(0)}{\phi(\lambdabda)}\right)\geq w_{\lambdabda}(z). \end{equation*} Since $\tilde{\mu}_{\lambdabda_j}\rightharpoonup \mu_0$ in measure, using Lemma {\rm Re}\,f{lem:regu} we deduce that \betagin{equation}\label{eqn:ge1} \gammama\left\langle \mu_0, (-x)\cdot D_xL(x,v) \right\rangle +\langle \mu_0, w\rangle + \big(u(z)-\gammama c(0)\big) \geq w(z), \end{equation} where $u_\lambdabda(z)+\phi(\lambdabda_j)^{-1}c(0) \rightarrow \left(u(z)-\gammama c(0)\right)$ comes from $\tilde{u}_\lambdabda(z)+\phi(\lambdabda_j)^{-1}c(0) \rightarrow u(z)$ and \eqref{rem:Aug29-3} in Remark {\rm Re}\,f{rem:Aug29-2}. On the other hand, from \eqref{eq:auxiliary0} we have \betagin{equation*} \phi(\lambdabda)\tilde{u}_\lambdabda(x) + H\big((1+r(\lambdabda))x,D\tilde{u}_\lambdabda(x)\big) \leq 0 \qquad\text{in}\;\Omegaega \end{equation*} In other words, we have \betagin{equation*} L\big((1+r(\lambdabda))x,v\big) - \phi(\lambdabda)(1+r(\lambdabda))\tilde{u}_\lambdabda(x) \in \mathcal{F}_{0,\Omegaega} \end{equation*} and thus \betagin{equation*} \left\langle \mu, L\big((1+r(\lambdabda))x,v\big) - \phi(\lambdabda)(1+r(\lambdabda))\tilde{u}_\lambdabda(x)\right\rangle \geq 0 \qquad\text{for all}\;\mu\in \mathcal{M}_0. \end{equation*} Recall that $- \langle \mu, L(x,v)\rangle = c(0)$ for all $\mu\in \mathcal{M}_0$, we have \betagin{equation*} \frac{r(\lambdabda)}{\phi(\lambdabda)}\left\langle \mu, \frac{L\big((1+r(\lambdabda))x,v\big) - L(x,v)}{r(\lambdabda)} \right\rangle \geq (1+r(\lambdabda)) \left\langle \mu, \tilde{u}_\lambdabda(x)+\frac{c(0)}{\phi(\lambdabda)}\right\rangle - \frac{r(\lambdabda)}{\phi(\lambdabda)}c(0) \end{equation*} for all $\mu\in \mathcal{M}_0$. Let $\lambdabda=\deltata_j$ then as $j\rightarrow \infty$ we have $\gammama\langle \mu, x\cdot D_xL(x,v)\rangle \geq \langle \mu, w\rangle - \gammama c(0)$, i.e., \betagin{equation}\label{eq:s1} \gammama\langle \mu, (-x)\cdot D_xL(x,v)\rangle +\langle \mu, w\rangle -\gammama c(0) \leq 0, \qquad\text{for all}\;\mu\in \mathcal{M}_0. \end{equation} From \eqref{eqn:ge1} and \eqref{eq:s1} we deduce that $u(z) \geq w(z)$. Since $z\in \Omegaega$ arbitrarily we have $u\geq w$ and similarly $u\leq w$, thus $u\equiv w$ and we have the uniform convergence for the full sequence \betagin{equation*} \lim_{\lambdabda\to 0}\left(\tilde{u}_\lambdabda(x) + \frac{c(0)}{\phi(\lambdabda)}\right) \end{equation*} Denote this limit as $\tilde{u}^\gammama$, then from Remark {\rm Re}\,f{rem:Aug29-2} we have $u_\lambdabda+\phi(\lambdabda)^{-1}c(0)\rightarrow u^\gammama = \tilde{u}^\gammama- \gammama c(0)$ locally uniformly in $\Omegaega$ as $\lambdabda\rightarrow0^+$. Clearly $u^\gammama \in \mathcal{E}^\gammama$ thanks to \eqref{eq:s1}. If $v\in \mathcal{E}^\gammama$ then since $\mu_0\in \mathcal{M}_0$, we can establish \eqref{eqn:ge1} with $w$ being replaced by $v$ to obtain $u^\gammama\geq v$, hence $u^\gammama = \sup \mathcal{E}^\gammama$. \end{proof} \betagin{cor}\label{cor:my} For any $\mu \in \mathcal{U}_*(z)$ there holds $\gammama\big\langle \mu, (-x)\cdot D_xL(x,v)\big\rangle + \big\langle \mu, u^\gammama \big\rangle = 0$. 
\end{cor} \betagin{lem}\label{rem: on L} For any $\mu\in \mathcal{M}_0$ there holds $\langle \mu, (-x)\cdot D_xL(x,v)\rangle \geq 0$. \end{lem} \betagin{proof}[Proof of Lemma {\rm Re}\,f{rem: on L}] For $\mu\in \mathcal{M}_0$ and $0<\lambdabda \ll 1$ we define $\mu_\lambdabda$ by \betagin{equation*} \langle \mu_\lambdabda, f(x,v) \rangle := \langle \mu, f((1-\lambdabda)x,v)\rangle, \qquad\text{for}\;f\in \mathrm{C}(\overline{\Omegaega}\times \overline{B}_h). \end{equation*} It is easy to see that $\mu_\lambdabda$ is a probability measure on $\overline{\Omegaega}\times \overline{B}_h$. Furthermore $\mu_\lambdabda\in \mathcal{G}_{0,\Omegaega}'$ as well. In fact, if $f\in \mathcal{G}_{0,\Omegaega}$ then there exists $u\in \mathrm{C}(\overline{\Omegaega})$ such that $H_f(x,Du(x))\leq 0$ in $\Omegaega$. It is clear that $H_{f((1-\lambdabda)x,v)}(x,D\tilde{u}(x))\leq 0$ in $\Omegaega$ as well where $\tilde{u}(x) = (1-\lambdabda)^{-1}u((1-\lambdabda)x)$, therefore $f((1-\lambdabda)x,v)\in \mathcal{G}_{0,\Omegaega}$, hence \betagin{equation*} \left\langle \mu_\lambdabda, f(x,v) \right\rangle = \left\langle \mu, f((1-\lambdabda)x,v) \right\rangle \geq 0. \end{equation*} As $\mu_\lambdabda\in \mathcal{P}\cap \mathcal{G}'_{0,\Omegaega}$, we deduce that \betagin{equation*} \left\langle \mu, L((1-\lambdabda)x,v) \right\rangle = \left\langle \mu_\lambdabda, L(x,v) \right\rangle \geq \left\langle \mu, L(x,v) \right\rangle. \end{equation*} Let $\lambdabda\rightarrow 0^+$ we deduce that $\langle \mu, (-x)\cdot D_xL(x,v)\rangle \geq 0$. \end{proof} \betagin{rem} If the domain is periodic then by translation invariant, we can get an invariant for Mather measures as \betagin{equation*} \langle \mu, (-x)\cdot D_xL(x,v)\rangle = 0 \qquad\text{for all}\; \mu\in \mathcal{M}_0. \end{equation*} We refer the reader's to \cite{biryuk_introduction_2010} for more properties like this in the case of periodic domain. In our setting, it is natural to expect a similar invariant holds. Indeed, it is interesting that \betagin{equation*} \betagin{cases} \langle \mu,(-x)\cdot D_xL(x,v)\rangle = \text{constant}\\ \text{for all}\; \mu\in \mathcal{M}_0 \end{cases} \qquad\Longleftrightarrow\qquad \lambdabda\mapsto c(\lambdabda)\;\text{is differentiable at}\;\lambdabda = 0 \end{equation*} and if that is the case, the constant in the above is $c'(0)$, the derivative of the map $\lambdabda\to c(\lambdabda)$ at $\lambdabda = 0$. For instance, in Example {\rm Re}\,f{ex:differ} we have $c'(0) = e^{-1}$. \end{rem} \betagin{lem}\label{lem:stability} We have $\mathcal{U}_*(z)\subseteq \mathcal{M}_0$ where $\mathcal{U}_*(z)$ is defined as in \eqref{eq:U_*}. \end{lem} \betagin{proof} Assume $\tilde{\mu}_{\lambdabda_j}\rightharpoonup \mu_0$. From \eqref{eq:xyz} it is clear that $-c(0) = \langle \mu_0, L\rangle$. For $f\in \mathcal{G}_{0,\Omegaega}$ there exists $u\in \mathrm{C}(\overline{\Omegaega})$ such that $H_f(x,Du(x))\leq 0$ in $\Omegaega$. Let us define $\tilde{u}$ as in \eqref{eq:u_tilde}, we have \betagin{equation*} \phi(\lambdabda)\tilde{u}(x) + H_{f\left(\frac{x}{1+r(\lambdabda)},v\right)+\phi(\lambdabda)\tilde{u}(x)}(x,D\tilde{u}(x)(x))\leq 0\quad\text{in}\;(1+r(\lambdabda))\Omegaega. 
\end{equation*} By definition we deduce that \betagin{equation*} \big\langle \tilde{\mu}_\lambdabda, f(x,v) +\phi(\lambdabda)(1+r(\lambdabda))\left(u - u(z)\right) \big\rangle= \left\langle \mu_\lambdabda, f\left(\frac{x}{1+r(\lambdabda)},v\right) + \phi(\lambdabda)\left(\tilde{u} - \tilde{u}(z)\right) \right\rangle \geq 0 \end{equation*} Let $\lambdabda\rightarrow 0^+$ along $\lambdabda_j$ we deduce that $\langle \mu_0, f \rangle \geq 0$, hence $\mu_0\in \mathcal{M}_0$. \end{proof} \betagin{proof}[Proof of Corollary {\rm Re}\,f{cor:mycor}] From \eqref{eq:s1} we have \betagin{equation*} \betagin{cases} \alphapha\langle \mu, (-x)\cdot D_xL(x,v)\rangle +\langle \mu, u^\alphapha\rangle \leq 0 \\ \betata\langle \mu, (-x)\cdot D_xL(x,v)\rangle +\langle \mu, u^\betata\rangle \leq 0 \end{cases} \end{equation*} for all $\mu\in \mathcal{M}_0$. If $\theta = \betata- \alphapha \geq 0$ then \betagin{equation*} \theta \left\langle\mu, (-x)\cdot D_xL(x,v)\right\rangle + \alphapha \left\langle\mu, (-x)\cdot D_xL(x,v)\right\rangle + \big\langle \mu, u^\betata\big\rangle \leq 0 \end{equation*} for all $\mu\in \mathcal{M}_0$. Since $\theta \left\langle\mu, (-x)\cdot D_xL(x,v)\right\rangle\geq 0$, we have $u^\betata \in \mathcal{E}^\alphapha$, therefore $u^\betata\leq u^\alphapha$. Denote $\gammama = (1-\lambdabda)\alphapha+\lambdabda \betata$ for $\lambdabda\in (0,1)$, we have \betagin{equation*} \gammama\left\langle \mu, (-x)\cdot D_xL(x,v)\right\rangle + \left\langle\mu, (1-\lambdabda) u^\alphapha +\lambdabda u^\betata \right\rangle \leq 0, \qquad\text{for all}\;\mu\in \mathcal{M}_0. \end{equation*} By the convexity of $H$ we see that $u = (1-\lambdabda)u^\alphapha+\lambdabda u^\betata$ belongs to $\mathcal{E}^{(1-\lambdabda)\alphapha+\lambdabda \betata}$, therefore $(1-\lambdabda)u^\alphapha+\lambdabda u^\betata \leq u^{(1-\lambdabda)\alphapha+\lambdabda \betata}$. \end{proof} \section{The asymptotic expansion of the eigenvalue}\label{sec4} \subsection{The expansion at zero} In this section, we want to study the asymptotic expansion of $c(\lambdabda)$ as $\lambdabda\rightarrow 0^+$. If the following limit exist \betagin{equation}\label{lim2} c^{(1)} = \lim_{\lambdabda\rightarrow 0^+}\left(\frac{c(\lambdabda) - c(0)}{r(\lambdabda)}\right) \end{equation} then heuristically we have $c(\lambdabda) = c(0) + c^{(1)}r(\lambdabda) + o(r(\lambdabda))$ as $\lambdabda \rightarrow 0^+$. \betagin{rem} The dependence of $\lambdabda$ and the eigenvalue should be $c(r(\lambdabda))$ but in fact $c^{(1)}$ is independent of $r(\lambdabda)$ if it exists. Indeed, assume $\lambdabda\mapsto c(\lambdabda) = c(r(\lambdabda))$ is differentiable at $\lambdabda = 0$, for any $\mu\in \mathcal{M}_0$ by scaling into a measure $\mu_\lambdabda$ on $(1+r(\lambdabda))\Omegaega$, we can show that $-c(r(\lambdabda)) \leq \langle \mu_\lambdabda,L\rangle$, hence \betagin{equation*} \big\langle \mu,L((1+r(\lambdabda))x,v)\big\rangle + c(r(\lambdabda)) \geq 0 \end{equation*} for all $\lambdabda$ with an equality at $\lambdabda = r(\lambdabda) = 0$, therefore \betagin{equation*} c^{(1)} = c'(0) = \langle \mu, (-x)\cdot D_xL\rangle. \end{equation*} Thus we can simply write $c(\lambdabda)$ for simplicity and we can simply choose $r(\lambdabda) = \pm\lambdabda$. \end{rem} We will show that \eqref{lim2} holds when $\lambdabda\mapsto r(\lambdabda)$ does not change its sign around $0$, provided that \eqref{H3c} is satisfied. 
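As a quick illustration, the eigenvalues computed in Example \ref{ex:differ} expand as
\begin{equation*}
c(\lambda)^{\pm} = -e^{-1\mp\lambda} = -e^{-1} \pm e^{-1}\lambda + \mathcal{O}(\lambda^2) = c(0) + e^{-1}(\pm\lambda) + o(\lambda),
\end{equation*}
so \eqref{lim2} holds there with $c^{(1)} = e^{-1}$ for the perturbations $r(\lambda) = \pm\lambda$, in agreement with the value $c'(0) = e^{-1}$ noted earlier.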
Example \ref{ex:10} shows that the limit can be divergent if \eqref{H3c} is violated, and Example \ref{ex:differ} shows that in general it is not zero.
\begin{exa}\label{ex:10}
Let $H(x,p) = |p|-\sqrt{1-|x|}$ for $(x,p)\in [-1,1]\times \mathbb{R}$. Let $r(\lambda) = -\lambda$ for $\lambda \in (0,1)$; then $c(\lambda) = -\sqrt{\lambda}$ and the limit \eqref{lim2} does not exist.
\end{exa}
\noindent Let $\nu_\lambda$ be measures in $\mathcal{P}\cap \mathcal{G}'_{0,\Omega_\lambda}$ such that
\begin{equation}\label{eq:limit-p0}
-c(\lambda) = \int_{\overline{\Omega}_\lambda\times \overline{B}_h} L\left(x,v\right)\;d\nu_\lambda(x,v) = \min_{\nu \in \mathcal{P}\cap \mathcal{G}'_{0,\Omega_\lambda}}\int_{\overline{\Omega}_\lambda\times \overline{B}_h} L\left(x,v\right)\;d\nu(x,v).
\end{equation}
Let $\tilde{\nu}_\lambda$ be the corresponding measures on $\overline{\Omega}$ after scaling from $\nu_\lambda$ as in Definition \ref{defn:scaledown}; it is easy to see that $\tilde{\nu}_\lambda$ is still a probability measure on $\overline{\Omega}$, thus by compactness the set of weak limit points $\mathcal{V}_*$ of $ \left\lbrace\tilde{\nu}_\lambda\right\rbrace_{\lambda>0}$ is nonempty.
\begin{lem}\label{lem:inM0}
We have $\mathcal{V}_*\subseteq \mathcal{M}_0$.
\end{lem}
\begin{proof}[Proof of Lemma \ref{lem:inM0}]
From \eqref{eq:limit-p0} we have
\begin{equation*}
-c(\lambda) = \int_{\overline{\Omega}\times \overline{B}_h} L\big(\left(1+r(\lambda)\right)x,v\big)\;d\tilde{\nu}_\lambda(x,v).
\end{equation*}
Assume $\tilde{\nu}_{\lambda_j}\rightharpoonup \nu_0$; then, since $L\big(\left(1+r(\lambda)\right)x,v\big)\rightarrow L(x,v)$ uniformly as $\lambda\rightarrow 0^+$, we deduce that $-c(0) = \langle \nu_0,L\rangle$. Let $f\in \mathcal{G}_{0,\Omega}$; there exists $u\in \mathrm{C}(\overline{\Omega})$ such that $H_f(x,Du(x))\leq 0$ in $\Omega$. Let us define $\tilde{u}$ as in \eqref{eq:u_tilde}, then
\begin{equation*}
H_{f_\lambda}(x,D\tilde{u}(x))\leq 0\quad\text{in}\;\Omega_\lambda
\end{equation*}
where $f_\lambda(x,v) = f\left(\frac{x}{1+r(\lambda)},v\right)$. By definition of $\nu_\lambda$ we have
\begin{equation*}
\left\langle \tilde{\nu}_\lambda, f(x,v) \right\rangle= \left\langle \nu_\lambda, f_\lambda(x,v) \right\rangle \geq 0.
\end{equation*}
Letting $\lambda_j\rightarrow 0^+$ we deduce that $\langle \nu_0, f \rangle \geq 0$, thus $\nu_0\in \mathcal{M}_0$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:limit}]
Let us consider the case $r(\lambda)\geq 0$. Let $w_\lambda$ be a solution to \eqref{eq:cell3} and $\tilde{w}_\lambda$ be its scaling as in \eqref{eq:u_tilde}, then
\begin{equation*}
H_{L((1+r(\lambda))x,v) +c(\lambda)}\big(x,D\tilde{w}_\lambda(x)\big) \leq 0 \qquad\text{in}\;\Omega.
\end{equation*}
Therefore $L((1+r(\lambda))x,v) +c(\lambda)\in \mathcal{F}_{0,\Omega}$, thus
\begin{equation}\label{e:nice}
\Big\langle \mu, L((1+r(\lambda))x,v) + c(\lambda)\Big\rangle \geq 0
\end{equation}
for any $\mu\in \mathcal{M}_0$. Using the fact that $-\langle \mu, L(x,v)\rangle = c(0)$ we deduce that
\begin{equation*}
\Big\langle \mu, L((1+r(\lambda))x,v) - L(x,v)\Big\rangle +(c(\lambda) - c(0)) \geq 0.
\end{equation*} Thus if $r(\lambdabda)>0$ then for all $\mu\in \mathcal{M}_0$ we have \betagin{equation*} \left\langle\mu, \frac{L((1+r(\lambdabda))x,v) - L(x,v)}{r(\lambdabda)} \right\rangle + \left(\frac{c( \lambdabda)-c(0)}{r(\lambdabda)}\right) \geq 0. \end{equation*} Using \eqref{eq:bound} and the fact that $r(\lambdabda)$ is not identically zero near $0$, as $\lambdabda\rightarrow 0^+$ we have \betagin{equation}\label{eq:limit-p3} \langle\mu, x\cdot D_xL(x,v)\rangle + \liminf_{\lambdabda\rightarrow 0^+}\left(\frac{c(\lambdabda)-c(0)}{r(\lambdabda)}\right) \geq 0 \qquad\text{for all}\; \mu \in \mathcal{M}_0. \end{equation} Let $\lambdabda_j\rightarrow 0^+$ be the subsequence such that \betagin{equation*} \limsup_{\lambdabda\rightarrow 0^+}\left(\frac{c(\lambdabda)-c(0)}{r(\lambdabda)}\right) =\lim_{j\rightarrow \infty}\left(\frac{c(\lambdabda_j)-c(0)}{r(\lambdabda_j)}\right). \end{equation*} For simplicity we can assume that (up to subsequence) $\tilde{\nu}_{\lambdabda_j}\rightharpoonup \nu_0$ and $\nu_0\in \mathcal{M}_0$. Let $w$ be a solution to \eqref{eq:S_0}, then $\tilde{w}(x) = (1+r(\lambdabda))w\left((1+r(\lambdabda))^{-1}x\right)$ solves \betagin{equation*} H_{L\left(\frac{x}{1+r(\lambdabda)},v \right)+c(0)}\left(x,D\tilde{w}(x)\right) \leq 0 \qquad\text{in}\;(1+r(\lambdabda))\Omegaega. \end{equation*} As $v_\lambdabda\in \mathcal{P}\cap \mathcal{G}'_{0,\Omegaega_\lambdabda}$ and $\langle \nu_\lambdabda, L\rangle = -c(\lambdabda)$, we obtain that \betagin{equation*} \left\langle \nu_\lambdabda, L\left(\frac{x}{1+r(\lambdabda)},v\right) - L\left(x,v\right)\right\rangle -c(\lambdabda) + c(0) \geq 0. \end{equation*} By definition of $\tilde{\nu}_\lambdabda$, it is equivalent to \betagin{equation}\label{e:nice2} {\mathbb B}ig\langle \tilde{\nu}_\lambdabda, L\left(x,v\right) - L\left((1+r(\lambdabda))x,v\right){\mathbb B}ig\rangle\geq c(\lambdabda)-c(0). \end{equation} As $r(\lambdabda_j)\geq 0$, let $\lambdabda_j\rightarrow 0^+$ we obtain \betagin{equation}\label{eq:limit-p9} \left\langle \nu_0, (-x)\cdot D_xL(x,v)\right\rangle \geq \limsup_{\lambdabda\rightarrow 0^+} \left(\frac{c(\lambdabda) - c(0)}{r(\lambdabda)}\right). \end{equation} In \eqref{eq:limit-p3}, take $\mu = \nu_0\in \mathcal{M}_0$ and together with \eqref{eq:limit-p9} we conclude that \betagin{equation*} \lim_{\lambdabda\rightarrow 0^+}\left(\frac{c(\lambdabda)-c(0)}{r(\lambdabda)}\right) = \left\langle \nu_0,(-x)\cdot D_xL(x,v)\right\rangle = \sup_{\mu\in \mathcal{M}_0} \left\langle \mu,(-x)\cdot D_xL(x,v)\right\rangle. \end{equation*} Similarly, if $r(\lambdabda)\leq 0$ as $\lambdabda\rightarrow 0^+$ then \betagin{equation*} \lim_{\lambdabda\rightarrow 0^+} \left(\frac{c(\lambdabda)-c(0)}{r(\lambdabda)}\right) = \min_{\mu\in \mathcal{M}_0} \left\langle \mu,(-x)\cdot D_xL(x,v)\right\rangle. 
\end{equation*} For an oscillating $r(\lambdabda)$ such that neither $r^-(\lambdabda) = \min\{0,r(\lambdabda)\}$ nor $r^+(\lambdabda) = \max\{0,r(\lambdabda)\}$ is identical to zero as $\lambdabda\rightarrow 0^+$, by applying the previous results we have (we consider the limit $(c(\lambdabda) - c(0))/r(\lambdabda)$ along subsequences where $r(\lambdabda)\neq 0$) \betagin{align*} &\lim_{\substack{\lambdabda\rightarrow 0^+\\ r(\lambdabda) > 0}} \left(\frac{c(\lambdabda) - c(0)}{r(\lambdabda)}\right) = \max_{\mu\in \mathcal{M}_0} \left\langle \mu, (-x)\cdot D_xL(x,v)\right\rangle = c^{(1)}_+,\\ &\lim_{\substack{\lambdabda\rightarrow 0^+\\ r(\lambdabda) < 0}} \left(\frac{c(\lambdabda) - c(0)}{r(\lambdabda)}\right) = \min_{\mu\in \mathcal{M}_0} \left\langle \mu, (-x)\cdot D_xL(x,v)\right\rangle= c^{(1)}_- . \end{align*} For any given subsequence $\lambdabda_j\rightarrow 0^+$ along which $(c(\lambdabda_j)-c(0))/r(\lambdabda_j)$ converges, by decomposing $\lambdabda_j$ into subsequences where $r(\lambdabda_j)>0$ and $r(\lambdabda_j)<0$ respectively, we see that $(c(\lambdabda_j)-c(0))/r(\lambdabda_j)$ can only converge either to $c^{(1)}_+$ or $c^{(1)}_-$, and therefore we obtain the conclusion of the theorem. \end{proof} \betagin{proof}[Proof of Corollary {\rm Re}\,f{cor:Aug30-1}] If $\left\langle \mu,(-x)\cdot D_xL(x,v)\right\rangle = c^{(1)}$ for all $\mu\in \mathcal{M}_0$ then from Theorem {\rm Re}\,f{thm:general} with $\gammama\in \mathbb{R}$ we have $\left\langle \mu, u^\gammama + \gammama c^{(1)} \right\rangle \leq 0$ for all $\mu\in \mathcal{M}_0$, thus $u^\gammama + \gammama c^{(1)} \in \mathcal{E}$ and hence $u^\gammama + \gammama c_{(1)} \leq u^0$. On the other hand, $\left\langle\mu, u^0\right\rangle = \gammama c^{(1)} + \left\langle \mu, u^0-\gammama c^{(1)} \right\rangle \leq 0$ for all $\mu\in \mathcal{M}_0$, therefore \betagin{equation*} \gammama\langle\mu, (-x)\cdot D_xL(x,v)\rangle + \left\langle\mu, u^0-\gammama c^{(1)} \right\rangle \leq 0 \qquad\text{for all}\;\mu\in \mathcal{M}_0. \end{equation*} Thus $u^0 - \gammama c^{(1)} \in \mathcal{E}^\gammama$, hence $u^0 - \gammama c^{(1)} \leq u^\gammama$. \end{proof} \betagin{rem}\label{remark:Aubry} Here are some examples where $c^{(1)}_- = c^{(1)}_+ = c^{(1)}$. \betagin{itemize} \item[(i)] If $H(x,p) = H(p)+V(x)$ with $x\cdot \nabla V(x) \leq 0$ for all $x\in \Omegaega$. Indeed, Lemma {\rm Re}\,f{rem: on L} says that $\langle\mu, x\cdot \nabla V(x)\rangle \geq 0$ for all $\mu\in \mathcal{M}_0$, thus in this case we have $\langle \mu, (-x)\cdot D_xL(x,v)\rangle = 0$ for all $\mu\in \mathcal{M}_0$, hence $c^{(1)}_- = c^{(1)}_+ = 0$. \item[(ii)] If the Aubry set $\mathcal{A}$ of $H$ is compactly supported in $\Omegaega$, then by Theorem {\rm Re}\,f{thm:eigenvalue} we have $c(\lambdabda) = c(0)$ for all $\lambdabda>0$ small enough, therefore $c^{(1)}=0$. \item[(iii)] Recall from Example {\rm Re}\,f{ex:differ} that if $H(x,p) = |p| - e^{-|x|}$ and $\Omegaega = (-1,1)$ then $c^{(1)}_+ = c^{(1)}_- =c^{(1)} = e^{-1}$. \end{itemize} \end{rem} We state the following lemma concerning properties of limits of minimizing measures on $\Omegaega$, which will be used to prove Corollary {\rm Re}\,f{cor:equala}. \betagin{lem}\label{lem:sigma0} Let $v_\lambdabda\in \mathrm{C}(\overline{\Omegaega})$ be the solution to \eqref{eq:def_vlambda}. 
For $z\in \Omega$, let $\sigma_\lambda\in \mathcal{P}\cap \mathcal{G}'_{z,\phi(\lambda),\Omega}$ be the minimizing measure such that $\phi(\lambda)v_\lambda(z) = \langle \sigma_\lambda,L\rangle$. Let us define \begin{equation*} \mathcal{U}_0(z) = \left\lbrace \mu\in \mathcal{P}(\overline{\Omega}\times \overline{B}_h): \sigma_\lambda\rightharpoonup \mu\;\text{in measures along some subsequence}\right\rbrace; \end{equation*} then \begin{itemize} \item[(i)] $\langle \sigma_0, u^0\rangle =0$ for all $\sigma_0\in \mathcal{U}_0(z)$. \item[(ii)] $ u^0(z) \geq u^\gamma(z) + \gamma\big\langle \sigma_0,(-x)\cdot D_xL(x,v) \big\rangle$ for all $\sigma_0\in \mathcal{U}_0(z)$ and $\gamma\in {\mathbb R}$. \end{itemize} \end{lem} \begin{proof}[Proof of Lemma \ref{lem:sigma0}] It is clear that $\mathcal{U}_0(z)\subset \mathcal{M}_0$. \begin{itemize} \item[(i)] For any $w$ solving \eqref{eq:S_0} we have \begin{equation}\label{june19-p1} \Big\langle \sigma_\lambda, L(x,v) + c(0) + \phi(\lambda)w(x) - \phi(\lambda)w(z)\Big\rangle \geq 0. \end{equation} Since $v_\lambda+\phi(\lambda)^{-1}c(0)\rightarrow u^0$ by Theorem \ref{thm:conv_bdd}, from \eqref{june19-p1} we deduce that \begin{equation}\label{Sep2-1} u^0(z) + \langle \sigma_0, w\rangle \geq w(z) \end{equation} for some $\sigma_0\in \mathcal{U}_0(z)$. Taking $w = u^0$ we obtain $\langle \sigma_0, u^0\rangle \geq 0$, thus $\langle \sigma_0, u^0\rangle = 0$ since $\langle \mu, u^0\rangle\leq 0$ for all $\mu\in \mathcal{M}_0$. \item[(ii)] To connect $u^0$ with $u^\gamma$, we use the approximation $u_\lambda$ on $(1+r(\lambda))\Omega$. Recall that after scaling $\tilde{u}_\lambda(x) = (1+r(\lambda))^{-1}u_\lambda\big((1+r(\lambda))x\big)$ for $x\in \overline{\Omega}$ we have \begin{equation*} L((1+r(\lambda))x,v) - \phi(\lambda)r(\lambda)\tilde{u}_\lambda(x) \in \mathcal{F}_{z,\phi(\lambda),\Omega}. \end{equation*} Recalling the definition of $\sigma_\lambda\in \mathcal{P}\cap \mathcal{G}'_{z,\phi(\lambda),\Omega}$ above, we have \begin{equation*} \big\langle \sigma_\lambda, L((1+r(\lambda))x,v) - \phi(\lambda)r(\lambda)\tilde{u}_\lambda - \phi(\lambda)\tilde{u}_\lambda(z) \big\rangle \geq 0. \end{equation*} Using $-\langle \sigma_\lambda, L(x,v)\rangle + \phi(\lambda)v_\lambda(z)= 0$, where $v_\lambda$ solves \eqref{eq:def_vlambda}, we obtain \begin{equation*} \frac{r(\lambda)}{\phi(\lambda)}\left\langle \sigma_\lambda, \frac{L((1+r(\lambda))x,v) - L(x,v)}{r(\lambda)}\right\rangle + v_\lambda(z) - r(\lambda)\langle\sigma_\lambda,\tilde{u}_\lambda\rangle \geq \tilde{u}_\lambda(z). \end{equation*} Taking into account the normalization, we deduce that \begin{align*} &\frac{r(\lambda)}{\phi(\lambda)}\left\langle \sigma_\lambda, \frac{L((1+r(\lambda))x,v) - L(x,v)}{r(\lambda)}\right\rangle +\left( v_\lambda(z) + \frac{c(0)}{\phi(\lambda)}\right) \\ &\qquad\qquad\qquad - r(\lambda)\left\langle\sigma_\lambda,\tilde{u}_\lambda+\frac{c(0)}{\phi(\lambda)}\right\rangle \geq \left(\tilde{u}_\lambda(z)+ \frac{c(0)}{\phi(\lambda)}\right)-\frac{r(\lambda)}{\phi(\lambda)}c(0).
\end{align*} Assume $\sigma_\lambda\rightharpoonup \sigma_0$ for some $\sigma_0\in \mathcal{U}_0(z)$; then as $\lambda\to 0$ we have \begin{equation*} \gamma\langle \sigma_0,x\cdot D_xL(x,v) \rangle +u^0(z) \geq u^\gamma(z), \end{equation*} and thus the conclusion $u^0(z)\geq u^\gamma(z)+ \gamma \left\langle \sigma_0, (-x)\cdot D_xL(x,v) \right\rangle$ follows. \end{itemize} \end{proof} \begin{proof}[Proof of Corollary \ref{cor:equala}] If $\gamma > 0$ then from Lemma \ref{lem:sigma0} there exists $\sigma_0\in \mathcal{U}_0(z)$ such that \begin{equation*} 0 = u^0(z) - u^\gamma(z) \geq \gamma \langle \sigma_0, (-x)\cdot D_xL(x,v)\rangle \geq\gamma c^{(1)}_- \geq 0, \end{equation*} and thus $c^{(1)}_- = 0$. \end{proof} \subsection{The additive eigenvalues as a function} A natural question arising from Theorem \ref{thm:limit} is: when do we have the invariant \begin{equation*} \langle \mu, (-x)\cdot D_xL(x,v)\rangle = c^{(1)} \end{equation*} for all $\mu\in \mathcal{M}_0$? In other words, when is the map $\lambda\mapsto c(\lambda)$ differentiable at $\lambda = 0$? We can indeed study the map $\lambda\mapsto c(\lambda)$ on an open interval $I$ containing zero, and ask at which points $\lambda$ the derivative $c'(\lambda)$ exists. It is clear that $\lambda\mapsto c(\lambda)$ is Lipschitz, thus it is differentiable almost everywhere. We will show the stronger claim that the set of points where $c'(\lambda)$ does not exist is at most countable. Without loss of generality (from Theorem \ref{thm:limit}) we can assume $r(\lambda) = \lambda$ for $\lambda \in (-\varepsilon,\varepsilon)$ for some $\varepsilon>0$ in this section. \begin{thm} Assume $\mathrm{(H1)},\mathrm{(H2)},\mathrm{(H3)}, \mathrm{(H4)},\mathrm{(A1)}$ and $\lambda\in (-\varepsilon,\varepsilon)$. \begin{itemize} \item[(a)] The map $\lambda\mapsto c(\lambda)$ is left-differentiable and right-differentiable everywhere on its domain. \item[(b)] The left derivative $\lambda\mapsto c'_-(\lambda)$ is left continuous and the right derivative $\lambda\mapsto c'_+(\lambda)$ is right continuous on their domains. \item[(c)] The map $\lambda\mapsto c(\lambda)$ is differentiable except at countably many points of its domain. \end{itemize} \end{thm} \begin{proof} For (a), by the same argument as in the proof of Theorem \ref{thm:limit} we see that $\lambda\mapsto c(\lambda)$ is left and right differentiable with \begin{equation*} \begin{split} c_+'(\lambda) &= \max_{\mu\in \mathcal{M}_\lambda} \int_{\overline{\Omega}_\lambda\times \overline{B}_h} (-x)\cdot D_xL(x,v)\,d\mu(x,v),\\ c_-'(\lambda) &= \min_{\mu\in \mathcal{M}_\lambda} \int_{\overline{\Omega}_\lambda\times \overline{B}_h} (-x)\cdot D_xL(x,v)\,d\mu(x,v), \end{split} \end{equation*} where $\mathcal{M}_\lambda$ is the set of minimizing Mather measures on $\Omega_\lambda$.\\ \noindent For $\lambda\in (-\varepsilon,\varepsilon)$, let $\nu_\lambda$ be any measure in $\mathcal{M}_\lambda$; then by the usual scaling as in Lemma \ref{lem:inM0} we have $-c(\lambda) = \left\langle \tilde{\nu}_\lambda,L((1+\lambda)x,v) \right\rangle$ and any subsequential weak limit $\tilde{\nu}_\lambda \rightharpoonup \nu_0$ in $\mathcal{P}(\overline{\Omega}\times \overline{B}_h)$ satisfies $\nu_0\in \mathcal{M}_0$.
We claim further that \begin{equation*} \int_{\overline{\Omega}\times \overline{B}_h} (-x)\cdot D_xL(x,v)\,d\nu_0(x,v) = \begin{cases} c'_-(0) &\quad \text{if}\;\lambda\to 0^-,\\ c'_+(0) &\quad \text{if}\;\lambda\to 0^+. \end{cases} \end{equation*} This is rather clear from \eqref{e:nice} and \eqref{e:nice2}, since, for instance, if $\lambda\to 0^-$ then \begin{align*} &\left\langle \mu, \frac{L((1+\lambda)x,v) - L(x,v)}{\lambda}\right\rangle + \frac{c(\lambda) - c(0)}{\lambda} \leq 0 \qquad\text{for all}\;\mu\in \mathcal{M}_0,\\ &\left\langle \tilde{\nu}_\lambda, \frac{L(x,v) - L((1+\lambda)x,v)}{\lambda}\right\rangle \leq \frac{c(\lambda) - c(0)}{\lambda}. \end{align*} Therefore, together with Theorem \ref{thm:limit} we deduce that \begin{equation*} \begin{split} &c'_-(0) \leq \big\langle \mu, (-x)\cdot D_xL(x,v) \big\rangle \qquad\text{for all}\;\mu\in \mathcal{M}_0,\\ & \big\langle \nu_0, (-x)\cdot D_xL(x,v)\big\rangle \leq c'_-(0). \end{split} \end{equation*} We conclude that \begin{equation*} \big\langle \nu_0, (-x)\cdot D_xL(x,v)\big\rangle = c'_-(0). \end{equation*} Now (b) follows easily. To see that $\lambda\mapsto c'_-(\lambda)$ is left continuous, it suffices to show that it is left continuous at $0$. If $\lambda\to 0^-$, let $\nu_{\lambda}\in \mathcal{M}_\lambda$ realize $c'_-(\lambda)$, i.e., \begin{equation*} \begin{split} c'_-(\lambda) &= \int_{\overline{\Omega}_\lambda\times \overline{B}_h} (-x)\cdot D_xL(x,v)\,d\nu_\lambda(x,v) = (1+\lambda) \int_{\overline{\Omega}\times \overline{B}_h} (-x)\cdot D_xL\big((1+\lambda)x,v\big)\,d\tilde{\nu}_\lambda(x,v). \end{split} \end{equation*} From $\mathrm{(H4)}$ we have that $(-x)\cdot D_xL((1+\lambda)x,v)\to (-x)\cdot D_xL(x,v)$ uniformly on $\overline{\Omega}\times\overline{B}_h$, and since the limit of the right-hand side is $c'_-(0)$, independent of the subsequence, we deduce that \begin{equation*} \lim_{\lambda \to 0^-} c_-'(\lambda) = c'_-(0). \end{equation*} The case $\lambda\to 0^+$ can be handled in the same manner. Finally, the fact that $\lambda\mapsto c(\lambda)$ is differentiable except at countably many points is standard, since $\lambda \mapsto c'(\lambda)$ is defined almost everywhere and is non-decreasing; alternatively, one can argue as in \cite[Theorem 17.9]{hewitt_real_1965} or \cite[Theorem 4.2]{bruckner_differentiation_1978}. \end{proof} With some additional information about the Hamiltonian, we can say more about the map $\lambda\mapsto c(\lambda)$. \begin{lem}\label{thm:convexnew} Assume $\mathrm{(H1)}, \mathrm{(H2)}, \mathrm{(H3)}, \mathrm{(H4)}$, $\mathrm{(A1)}$ and further that $(x,p)\mapsto H(x,p)$ is \textit{jointly convex}; then $\lambda\mapsto c(\lambda)$ is convex. \end{lem} We omit the proof of this lemma as it is a simple modification of Corollary \ref{cor:mycor}. \section{The second normalization: convergence and a counter example}\label{sec5} From Theorems \ref{thm:general} and \ref{thm:limit} we obtain the convergence of the second normalization \eqref{eq:familyclambda} when $\gamma$ is finite, as in Corollary \ref{cor:second_norm}. In this section we provide an example where, given any $r(\lambda)$, we can construct $\phi(\lambda)$ such that $\gamma = \infty$ and $\left\lbrace u_\lambda+\phi(\lambda)^{-1}c(\lambda)\right\rbrace_{\lambda>0}$ is divergent along some subsequence.
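As a quick orientation for the quantities involved, consider the explicit Hamiltonian of Remark \ref{remark:Aubry}(iii), $H(x,p)=|p|-e^{-|x|}$ on $\Omega=(-1,1)$; assuming the convention $\Omega_\lambda=(1+r(\lambda))\Omega$ of the previous section and the formula $c=-\min V$ from Theorem \ref{thm:V}, one computes
\begin{equation*}
c(\lambda) = -\min_{|x|\leq 1+r(\lambda)} e^{-|x|} = -e^{-(1+r(\lambda))},
\qquad
\frac{c(\lambda)-c(0)}{r(\lambda)} = e^{-1}\,\frac{1-e^{-r(\lambda)}}{r(\lambda)} \;\longrightarrow\; e^{-1},
\end{equation*}
which is consistent with $c^{(1)}_+=c^{(1)}_-=e^{-1}$ stated there. In the construction below, by contrast, the set of minimum points of $V$ over $\overline{\Omega}_\lambda$ moves between the two endpoints along different subsequences, and this instability of the Aubry set is what produces the divergence.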
To simplify notation, we will consider $r(\lambda) \geq 0$ and denote by $c(\lambda)$ the eigenvalue of $H$ in $\Omega_\lambda = (1-r(\lambda))\Omega$. Let us consider the following Hamiltonian \begin{equation}\label{eq:H_counter} H(x,p) = |p| - V(x), \qquad (x,p)\in \overline{\Omega}\times \mathbb{R}^n, \end{equation} where $V:\overline{\Omega}\rightarrow \mathbb{R}$ is uniformly bounded, continuous, and nonnegative. For a given $r(\lambda)$, we will construct $\phi(\lambda)$ so that $\left\lbrace u_\lambda+\phi(\lambda)^{-1}c(\lambda)\right\rbrace_{\lambda>0}$ is divergent as $\lambda \rightarrow 0^+$. The example is constructed based on an instability of the Aubry set $\mathcal{A}_{\Omega_\lambda}$ of $H$ on $\Omega_\lambda$ as $\lambda\rightarrow 0^+$. We recall from Theorem \ref{thm:V} that \begin{equation*} -c(0) = \min_{\overline{\Omega}} V \qquad\text{and}\qquad\mathcal{A}_{\Omega} = \left\lbrace x\in \overline{\Omega}: V(x) = \min_{\overline{\Omega} }V \right\rbrace. \end{equation*} Also, the Lagrangian is nonnegative in this case, since \begin{equation}\label{L_counter} L(x,v) = \begin{cases} V(x) &\qquad \text{if}\;|v|\leq 1,\\ +\infty &\qquad \text{if}\;|v|> 1. \end{cases} \end{equation} \begin{lem}\label{lem:equal} Assume \eqref{H3c}, \eqref{H4}, $\mathrm{(H3)}$ and $\mathrm{(A2)}$. Let \begin{equation*} S_{\Omega}(x,y) = \sup \Big\lbrace u(x) - u(y): u\;\text{is a subsolution of}\; H(x,Du(x))\leq c(0)\;\text{in}\;\Omega\Big\rbrace. \end{equation*} We can extend $S_{\Omega}$ uniquely to $\overline{\Omega}\times \overline{\Omega}$. If $\mathcal{A}_{\Omega} = \{z_0\}$ is a singleton then $u^0(x) \equiv S_\Omega(x,z_0)$, where $u^0$ is the maximal solution on $\Omega$ defined in Theorem \ref{thm:conv_bdd}. \end{lem} \begin{proof}[Proof of Lemma \ref{lem:equal}] One can show that $\mathcal{A}_{\Omega}$ is a uniqueness set for \eqref{eq:S_0} (see \cite{fathi_pde_2005,ishii_vanishing_2020, jing_generalized_2020, mitake_uniqueness_2018}). From Lemma \ref{lem:sigma0} there exists $\sigma_0\in \mathcal{M}_0$ such that $\langle \sigma_0,u^0\rangle = 0$. If $\mathcal{A}_{\Omega} = \{z_0\}$ then we can show that $\mathrm{supp}\;(\sigma_0)\subset \{z_0\}$ and thus $\sigma_0 \equiv \delta_{z_0}$, hence \begin{equation*} u^0(z_0) = \langle \sigma_0,u^0\rangle = 0 = S_\Omega(z_0,z_0). \end{equation*} Therefore $u^0(x)\equiv S_\Omega(x,z_0)$. \end{proof} \begin{defn}[Definition of the potential $V(x)$]\label{defV} We construct a potential $V$ on $\Omega = (-1,1)$ to be used in the proof of Theorem \ref{thm:counter-example}. We start with the first step; the building block is as follows. \usetikzlibrary{arrows} \begin{figure}[H] \centering \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.4cm,y=0.4cm] \clip(-10.,-0.)
rectangle (10.,7.); \draw [line width=2.pt] (-6.,6.)-- (6.,6.); \draw [line width=2.pt] (6.,6.)-- (6.,0.); \draw [line width=2.pt] (-6.,6.)-- (-6.,0.); \draw [line width=2.pt] (-4.,6.)-- (-4.,0.); \draw [line width=2.pt] (-2.,6.)-- (-2.,0.); \draw [line width=2.pt] (2.,6.)-- (2.,0.); \draw [line width=2.pt] (4.,6.)-- (4.,0.); \draw [line width=2.pt] (-6.,4.)-- (6.,4.); \draw [line width=2.pt] (6.,2.)-- (-6.,2.); \draw [line width=2.pt] (0.,6.)-- (-4.,4.); \draw [line width=2.pt] (-4.,4.)-- (-6.,0.); \draw [line width=2.pt] (0.,6.)-- (2.,2.); \draw [line width=2.pt] (2.,2.)-- (6.,0.); \draw [line width=2.pt] (0.,6.)-- (0.,0.); \draw [line width=2.pt] (-6.,0.)-- (6.,0.); \end{tikzpicture} \caption{The first step.} \label{fig1} \end{figure} Next, we apply the same construction at a smaller scale, which gives us Figure \ref{fig2}. \usetikzlibrary{arrows} \begin{figure}[H] \centering \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.4cm,y=0.4cm] \clip(-10.,-3.) rectangle (10.,7.); \draw [line width=2.pt] (-6.,6.)-- (6.,6.); \draw [line width=2.pt] (6.,6.)-- (6.,0.); \draw [line width=2.pt] (-6.,6.)-- (-6.,0.); \draw [line width=2.pt] (-4.,6.)-- (-4.,0.); \draw [line width=2.pt] (-2.,6.)-- (-2.,0.); \draw [line width=2.pt] (2.,6.)-- (2.,0.); \draw [line width=2.pt] (4.,6.)-- (4.,0.); \draw [line width=2.pt] (-6.,4.)-- (6.,4.); \draw [line width=2.pt] (6.,2.)-- (-6.,2.); \draw [line width=2.pt] (6.,0.)-- (6.,-3.); \draw [line width=2.pt] (6.,0.)-- (9.,0.); \draw [line width=2.pt] (9.,0.)-- (9.,-3.); \draw [line width=2.pt] (9.,-3.)-- (6.,-3.); \draw [line width=2.pt] (7.,0.)-- (7.,-3.); \draw [line width=2.pt] (8.,0.)-- (8.,-3.); \draw [line width=2.pt] (6.,-1.)-- (9.,-1.); \draw [line width=2.pt] (9.,-2.)-- (6.,-2.); \draw [line width=2.pt] (-9.,0.)-- (-9.,-3.); \draw [line width=2.pt] (-9.,-3.)-- (-6.,-3.); \draw [line width=2.pt] (-6.,-3.)-- (-6.,0.); \draw [line width=2.pt] (-6.,0.)-- (-9.,0.); \draw [line width=2.pt] (-8.,0.)-- (-8.,-3.); \draw [line width=2.pt] (-7.,0.)-- (-7.,-3.); \draw [line width=2.pt] (-9.,-1.)-- (-6.,-1.); \draw [line width=2.pt] (-6.,-2.)-- (-9.,-2.); \draw [line width=2.pt] (0.,6.)-- (-4.,4.); \draw [line width=2.pt] (-4.,4.)-- (-6.,0.); \draw [line width=2.pt] (0.,6.)-- (2.,2.); \draw [line width=2.pt] (2.,2.)-- (6.,0.); \draw [line width=2.pt] (-6.,0.)-- (-7.,-2.); \draw [line width=2.pt] (-7.,-2.)-- (-9.,-3.); \draw [line width=2.pt] (6.,0.)-- (8.,-1.); \draw [line width=2.pt] (8.,-1.)-- (9.,-3.); \draw [line width=2.pt] (0.,6.)-- (0.,0.); \draw [line width=2.pt] (-6.,0.)-- (6.,0.); \end{tikzpicture} \caption{The second step.} \label{fig2} \end{figure} Iterating this construction inside the small box, with an appropriate initial length to start with, the graph of $V$ is as in Figure \ref{fig3}. \begin{figure}[H] \centering \includegraphics[scale=0.3]{funct-eps-converted-to.pdf} \caption{Graph of the function $V$.} \label{fig3} \end{figure} \end{defn} \begin{lem}\label{lem:divergence} Let $V(x)$ be defined as in Definition \ref{defV} and $\Omega_\lambda = (-1+r(\lambda), 1-r(\lambda))$. Then the maximal solution on $\Omega_\lambda$ (as in Theorem \ref{thm:conv_bdd}), denoted by $u^0_\lambda(x)$, does not converge as $\lambda\rightarrow 0^+$.
\end{lem} \begin{proof}[Proof of Lemma \ref{lem:divergence}] By Theorem \ref{thm:V} the additive eigenvalue of $H$ on $\Omega_\lambda$, denoted by $c(\lambda)$, is given by $-c(\lambda) = \min_{x\in \overline{\Omega}_\lambda} V(x)$. By the construction of $V$, there are exactly two points, denoted by $z_\lambda^+$ and $z_\lambda^-$, such that \begin{equation}\label{eq:crucial0} \left\lbrace z\in \overline{\Omega}_\lambda: V(z) = \min_{x\in \overline{\Omega}_{\lambda}} V(x) = -c(\lambda) \right\rbrace = \big\lbrace z_\lambda^+,z_\lambda^-\big\rbrace. \end{equation} We can find two subsequences $\lambda_j\rightarrow 0^+$ and $\delta_j\rightarrow 0^+$ such that $\lim_{\lambda_j\rightarrow 0^+} z_{\lambda_j} = -1$ and $\lim_{\delta_j\rightarrow 0^+} z_{\delta_j} = 1$. We claim that \begin{equation}\label{counter:claim} \lim_{\lambda_j\rightarrow 0^+} u_{\lambda_j}^0(x) = S_{\Omega}(x,-1) \qquad\text{and}\qquad \lim_{\delta_j\rightarrow 0^+} u_{\delta_j}^0(x) = S_{\Omega}(x,1). \end{equation} For those $z_\lambda$ satisfying \eqref{eq:crucial0} we have $ u_{\lambda}^0(x) \equiv S_{\Omega_{\lambda}}\left(x, z_{\lambda}\right)$ for $x\in \overline{\Omega}_{\lambda}$. We show that \begin{equation}\label{eq:claim_counter} \lim_{\lambda_j\rightarrow 0} S_{\Omega_{\lambda_j}}\left(x,z_{\lambda_j}\right) = S_{\Omega}\left(x,z_0\right) \qquad\text{for}\;x\in \Omega, \end{equation} where $z_0 = -1$. The other case is similar. If $x\in \Omega$ then for all $\lambda$ small enough we have $x\in \Omega_\lambda$; by Theorem \ref{lem:optimal} we have \begin{align*} S_{\Omega_\lambda}(x,z_\lambda) &= \inf\left\lbrace \int_0^T \Big( c(\lambda)+ L(\xi(s),\dot{\xi}(s))\Big)ds: \xi\in \mathrm{AC}\left([0,T];\overline{\Omega}_\lambda\right), \xi(0) = z_\lambda, \xi(T) = x \right\rbrace,\\ S_{\Omega}(x,z_\lambda) &= \inf\left\lbrace \int_0^T \Big(c(0)+L(\xi(s),\dot{\xi}(s))\Big)ds: \xi\in \mathrm{AC}\left([0,T];\overline{\Omega}\right), \xi(0) = z_\lambda, \xi(T) = x \right\rbrace. \end{align*} We show that $S_{\Omega_\lambda}(x,z_\lambda) \leq S_{\Omega}(x,z_0)$. Take any $\xi\in \mathcal{F}_{\Omega}(x,z_0; 0,T)$ (defined in Theorem \ref{lem:optimal}) and define $t_\lambda = \inf \; \big\lbrace s>0: \xi(s) = z_\lambda\big\rbrace \in (0,T)$; then \begin{equation*} \eta(s) = \begin{cases} z_\lambda &\qquad s\in [0,t_\lambda],\\ \xi(s) &\qquad s\in [t_\lambda, T], \end{cases} \end{equation*} belongs to $\mathcal{F}_{\Omega_\lambda}(x,z_\lambda; 0,T)$. Therefore, together with \eqref{L_counter}, we have \begin{align*} \int_0^T \Big(c(0)+L(\xi(s),\dot{\xi}(s))\Big)ds &= \int_0^{t_\lambda} \Big(c(0)+L(\xi(s),\dot{\xi}(s))\Big)ds + \int_0^T \Big(c(0)+L(\eta(s),\dot{\eta}(s))\Big)ds \\ &\geq \int_0^{t_\lambda} \Big(c(0)+L(\xi(s),\dot{\xi}(s))\Big)ds + \int_0^T \Big(c(\lambda)+L(\eta(s),\dot{\eta}(s))\Big)ds \\ &\geq \max \Big\lbrace S_{\Omega}(z_\lambda,z_0),0\Big\rbrace + S_{\Omega_\lambda}(x,z_\lambda).
\end{align*} Therefore, taking the infimum over all possible $\xi$, we deduce that \begin{equation*} S_{\Omega}(x,z_0) \geq \max \Big\lbrace S_{\Omega}(z_\lambda,z_0),0\Big\rbrace + S_{\Omega_\lambda}(x,z_\lambda), \end{equation*} and thus \begin{equation}\label{eq:couter_1} \limsup_{\lambda\rightarrow 0^+} S_{\Omega_\lambda}(x,z_\lambda) \leq S_{\Omega}(x,z_0). \end{equation} Now let us start with $\xi_n\in \mathcal{F}_{\Omega_\lambda}(x,z_\lambda; 0, T_n)$ such that \begin{equation}\label{eq:couter_2} \int_0^{T_n} \Big(L(\xi_n(s),\dot{\xi}_n(s))+c(\lambda)\Big)ds < S_{\Omega_\lambda}(x,z_\lambda) + \frac{1}{n}. \end{equation} Let us connect $z_0$ and $z_\lambda$ by the straight line $\zeta(s) = (1-s)z_0 + sz_\lambda$ for $s\in [0,1]$. Since $|\dot{\zeta}(s)| = |z_0-z_\lambda| \ll 1$, from \eqref{L_counter} we have \begin{equation*} \int_0^1 L\left(\zeta(s),\dot{\zeta}(s)\right)\;ds = \int_0^1 V(\zeta(s))\;ds \leq \max_{x\in [z_0,z_\lambda]} V(x) = -c(\lambda). \end{equation*} Therefore \begin{equation}\label{eq:couter_3} \int_0^1 \Big(L(\zeta(s),\dot{\zeta}(s)) + c(\lambda)\Big)ds \leq 0. \end{equation} Let us define \begin{equation*} \eta_n(s) = \begin{cases} \zeta(s) &\qquad\text{for}\; s\in [0,1],\\ \xi_n(s-1) &\qquad\text{for}\; s\in [1,T_n+1]; \end{cases} \end{equation*} then $\eta_n\in \mathcal{F}_{\Omega}(x,z_0;0,T_n+1)$. From \eqref{eq:couter_2} and \eqref{eq:couter_3} we have \begin{align*} S_{\Omega_\lambda}(x,z_\lambda) + \frac{1}{n} &> \int_0^1 \Big(L(\zeta(s),\dot{\zeta}(s)) + c(\lambda)\Big)ds + \int_0^{T_n} \Big(L(\xi_n(s),\dot{\xi}_n(s))+c(\lambda)\Big)ds\\ &= \int_0^{T_n+1} \Big(L(\eta_n(s),\dot{\eta}_n(s)) + c(\lambda)\Big)ds\geq S_{\Omega}(x,z_0), \end{align*} since $\eta_n\in \mathcal{F}_{\Omega}(x,z_0;0,T_n+1)$. Letting $\lambda\rightarrow 0^+$ and then $n\rightarrow \infty$, we obtain \begin{equation}\label{eq:couter_4} \liminf_{\lambda\rightarrow 0^+} S_{\Omega_{\lambda}}(x,z_\lambda) \geq S_{\Omega}(x,z_0). \end{equation} From \eqref{eq:couter_1} and \eqref{eq:couter_4} we obtain \eqref{eq:claim_counter}, and \eqref{counter:claim} follows. We finally observe that $S_\Omega(x,-1) \neq S_\Omega(x,1)$, since otherwise $S_\Omega(-1,1) = S_\Omega(1,-1) = 0$, which is impossible. Indeed, if $\xi\in \mathcal{F}_\Omega(1,-1;0,T)$ then, as \eqref{L_counter} implies that $|\dot{\xi}(s)|\leq 1$ a.e., we deduce from \eqref{L_counter} that \begin{equation*} \int_0^T L(\xi(s),\dot{\xi}(s))ds = \int_0^T V(\xi(s))ds \geq \int_0^T V(\xi(s))\dot{\xi}(s)ds = \int_{-1}^1 V(x)\;dx = \Vert V\Vert_{L^1(\Omega)} > 0. \end{equation*} Therefore $S_\Omega(-1,1) > 0$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:counter-example}] Let $H$ be defined as in \eqref{eq:H_counter}; we consider the following discounted problems: \begin{equation}\label{eq:SP1} \begin{cases} \delta u_\delta(x) + H(x,Du_\delta(x)) \leq 0 \qquad\text{in}\;\Omega_\lambda,\\ \delta u_\delta(x) + H(x,Du_\delta(x)) \geq 0 \qquad\text{on}\;\overline{\Omega}_\lambda. \end{cases} \end{equation} Let $c(\lambda)$ be the eigenvalue of $H$ over $\Omega_\lambda$.
By Theorem \ref{thm:conv_bdd} we know that \begin{equation*} \lim_{\delta\rightarrow 0^+}\left( u_\delta(x) + \frac{c(\lambda)}{\delta}\right) = u^0_\lambda(x) \end{equation*} uniformly on $\overline{\Omega}_\lambda$, where $u^0_\lambda(x)$ is the maximal solution on $\Omega_\lambda$. For each $\lambda>0$, we can find $\tau(\lambda)>0$ such that \begin{equation}\label{eq:finalcrucial} \sup_{x\in \overline{\Omega}_{\lambda}} \left|\left(u_\delta(x) + \frac{c(\lambda)}{\delta}\right) - u^0_\lambda(x) \right|\leq r(\lambda)\qquad\text{for all}\; \delta \leq \tau(\lambda). \end{equation} Set $\phi(\lambda) = \tau(\lambda)r(\lambda)^2$; then $\phi(\lambda)\rightarrow 0$ as $\lambda\rightarrow 0^+$ and $\gamma = \infty$. The function $\phi(\lambda)$ can be modified to be decreasing. Now by \eqref{eq:finalcrucial} and Lemma \ref{lem:divergence}, along the two subsequences $\lambda_j$ and $\delta_j$ we have \begin{equation*} \lim_{\lambda_j\rightarrow 0^+} \left(u_{\lambda_j}(x) + \frac{c(\lambda_j)}{\phi(\lambda_j)}\right) = S_{\Omega}(x,-1) \neq S_\Omega(x,1) = \lim_{\delta_j\rightarrow 0^+} \left(u_{\delta_j}(x) + \frac{c(\delta_j)}{\phi(\delta_j)}\right). \end{equation*} Thus we obtain the divergence of $\left\lbrace u_\lambda+\phi(\lambda)^{-1}c(\lambda) \right\rbrace _{\lambda > 0}$ in this case. \end{proof} \begin{rem} Note that the parametrization $u_\lambda$ here means $u_{\phi(\lambda)}$, which is the same as in the original definition \eqref{eq:def_ulambda}. In \eqref{eq:def_ulambda} we should have used $u_{\phi(\lambda)}$ instead of $u_\lambda$, but we simplify the notation for clarity. \end{rem} \section*{Appendix} \begin{lem}\label{thm:Ishii} Assume that $\Omega$ is bounded, open and $0\in \Omega$. Assume further that \eqref{condA2} holds for some $\kappa >0$; then $\Omega$ is star-shaped and $\mathrm{(A2)}$ holds. \end{lem} \begin{proof}[Proof of Lemma \ref{thm:Ishii}] Suppose that $\Omega$ is not star-shaped; then there exist $x\in \overline{\Omega}$ and $0< \theta < 1$ such that $\theta x\notin \Omega$. Since $0$ is an interior point of $\Omega$, there exists $0<\delta<\theta$ such that $\tau x \in \Omega$ for all $0<\tau \leq \delta$. Let us define $\eta = \sup \big\lbrace \tau>0: \tau x \in \Omega \big\rbrace$; then $0< \delta \leq \eta \leq \theta$ and $\eta x\in \partial \Omega$. Set $y = \eta x\in \partial \Omega$; we see that \begin{equation*} x = \eta^{-1}y = (1+r)y \in (1+r)\partial\Omega, \end{equation*} where $\eta^{-1} = 1 + r$. Now \eqref{condA2} gives us that $0=\mathrm{dist}(x,\Omega) \geq \kappa r$, which is a contradiction, and thus $\Omega$ is star-shaped. For $0<r<1$, as $\Omega$ is star-shaped, $(1-r)\overline{\Omega}\subset(1+r)^{-1}\Omega$ and $B\left(0,\frac{\kappa r}{2}\right)\subset B\left(0,\frac{\kappa r}{1+r}\right)$. From \eqref{condA2} we have $\big(\Omega+B(0,\kappa r)\big) \cap (1+r)\partial \Omega =\emptyset$ for all $r\in (0,1)$, therefore \begin{equation*} \left((1+r)^{-1}\Omega + B\left(0,\frac{\kappa r}{1+r}\right)\right) \cap \partial\Omega = \emptyset \quad\Longrightarrow\quad \left((1-r)\overline{\Omega}+ B\left(0,\frac{\kappa r}{2}\right)\right) \cap \partial \Omega = \emptyset. \end{equation*} From \eqref{condA2} we deduce that $(1-r)\overline{\Omega}+ B\left(0,\frac{\kappa r}{2}\right) \subset \Omega$.
We observe that \begin{equation*} B\left(x - rx, \frac{\kappa r}{2}\right) = (1-r)x+B\left(0,\frac{\kappa r}{2}\right) \subset (1-r)\overline{\Omega}+ B\left(0,\frac{\kappa r}{2}\right) \subset \Omega. \end{equation*} This implies $\mathrm{(A2)}$ with $\eta(x) = -x$, $r = \frac{\kappa}{2}$ and $h = \frac{1}{2}$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:pre}] By the a priori estimate, $\delta |u_\delta(x)| + |Du_\delta(x)|\leq C_H$ for $x\in \overline{\Omega}$. Fix $x_0\in \overline{\Omega}$; then by the Arzel\`a--Ascoli theorem there exist a subsequence $\delta_j$ and $u\in \mathrm{C}(\overline{\Omega})$ such that $u_{\delta_j}(\cdot) - u_{\delta_j}(x_0) \rightarrow u(\cdot)$ uniformly on $\overline{\Omega}$. By the Bolzano--Weierstrass theorem there exists $c\in \mathbb{R}$ such that (up to a subsequence) $\delta_ju_{\delta_j}(x_0)\rightarrow -c$. By the stability of viscosity solutions we have $H(x,Du(x)) = c$ in $\Omega$. We will show $H(x,Du(x))\geq c$ on $\overline{\Omega}$. Let $\tilde{x}\in \partial\Omega$ and $\varphi\in \mathrm{C}^1(\overline{\Omega})$ be such that $u-\varphi$ has a strict minimum over $\overline{\Omega}$ at $\tilde{x}$; we show that $H(\tilde{x},D\varphi(\tilde{x})) \geq c$. Without loss of generality we can assume that $(u-\varphi)(x) \geq (u-\varphi)(\tilde{x}) = 0$ for $x\in \overline{\Omega}$. Define $\varphi_\delta(x) = (1+\delta)\varphi\left(\frac{x}{1+\delta}\right)$ for $x\in (1+\delta)\overline{\Omega}$. Let us define \begin{equation*} \Phi(x,y) = \varphi_\delta(x) - u_\delta(y) - \frac{|x-y|^2}{2\delta^2}, \qquad (x,y) \in (1+\delta)\overline{\Omega}\times \overline{\Omega}. \end{equation*} Assume $\Phi$ attains its maximum over $(1+\delta)\overline{\Omega}\times \overline{\Omega}$ at $(x_\delta,y_\delta)$. As $\Phi(x_\delta,y_\delta)\geq \Phi(y_\delta,y_\delta)$, we obtain $|x_\delta - y_\delta|\leq C\delta$. By compactness we deduce that $(x_\delta,y_\delta) \rightarrow (\overline{x},\overline{x})$ for some $\overline{x}\in \overline{\Omega}$ as $\delta \rightarrow 0^+$. We deduce further that \begin{equation*} \limsup_{\delta\rightarrow 0}\frac{|x_\delta - y_\delta|^2}{2\delta^2} \leq \limsup_{\delta \rightarrow 0} \Big(\varphi(x_\delta) - \varphi(y_\delta)\Big) = 0 \quad\Longrightarrow\quad |x_\delta - y_\delta| = o(\delta). \end{equation*} Also $\Phi(x_\delta, y_\delta) \geq \Phi(\tilde{x},\tilde{x})$; letting $\delta \rightarrow 0$ we have $\Phi(\overline{x},\overline{x}) \geq \Phi(\tilde{x},\tilde{x})$, which implies that $\overline{x} = \tilde{x}$. By $\mathrm{(A2)}$ we deduce that $x_\delta \in (1+\delta)\Omega$. Now by the supersolution test, as $y\mapsto \Phi(x_\delta,y)$ has a maximum at $y_\delta$, we obtain \begin{equation*} \delta u_\delta(y_\delta) + H\Big(y_\delta, \delta^{-2}(x_\delta - y_\delta)\Big) \geq 0. \end{equation*} As $x\mapsto \Phi(x,y_\delta)$ has a maximum at $x_\delta \in (1+\delta)\Omega$, an interior point of $(1+\delta)\Omega$, we deduce that $D\varphi_\delta(x_\delta) = \delta^{-2}(x_\delta - y_\delta)$. Therefore \begin{equation*} \delta u_\delta(y_\delta) + H\Big(y_\delta, D\varphi_\delta(x_\delta)\Big) \geq 0.
\end{equation*} As $u_\delta(\cdot)$ is Lipschitz with constant $C_H$, we have $u_\delta(y_\delta)\rightarrow u(\tilde{x})$ along the subsequence $\delta_j$. Therefore, as $\delta_j\rightarrow 0$ we have $H(\tilde{x}, D\varphi(\tilde{x})) \geq c$. Now, with the help of the comparison principle, we obtain the uniqueness of $c = c(0)$ and thus the convergence of the full sequence $\delta u_\delta(x_0)\rightarrow -c(0)$ follows. If we use the following normalization \begin{equation*} \lim_{j\rightarrow \infty} \left(u_{\delta_j}(x) + \frac{c(0)}{\delta_j}\right) = w(x), \end{equation*} then by a similar argument we can show that $w$ solves \eqref{eq:S_0} as well, and \begin{align*} u(x) &= \lim_{j\rightarrow \infty}\left( u_{\delta_j}(x) - u_{\delta_j}(x_0)\right) \\ &= \lim_{j\rightarrow \infty}\left( u_{\delta_j}(x) + \frac{c(0)}{\delta_j}\right) - \lim_{j\rightarrow \infty}\left( u_{\delta_j}(x_0) + \frac{c(0)}{\delta_j}\right) = w(x) - w(x_0). \end{align*} It remains to show \eqref{eq:rate0}. Let $u$ be defined as the limit of $u_{\delta_j}(\cdot) - u_{\delta_j}(x_0)$; we have $|u(x)|+|Du(x)|\leq C$ for $x\in \Omega$, where $C$ depends on $C_H$ and $\mathrm{diam}(\Omega)$. It is clear that $u(x)-\delta^{-1}c(0)\pm C$ are, respectively, a subsolution and a supersolution to \eqref{state-def}; therefore by the comparison principle we obtain \eqref{eq:rate0}. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:V}] If $v\in \mathrm{C}(\overline{\Omega})$ is a solution to \eqref{eq:E} then for a.e. $x\in \overline{\Omega}$ we have $-V(x)\leq|Dv(x)|-V(x) = c_\Omega$, therefore $c_\Omega \geq \max_{\overline{\Omega}}(-V) = -\min_{\overline{\Omega}} V$. Assume $V$ attains its minimum over $\overline{\Omega}$ at $x_0$; then by the supersolution test at that point we have $0\geq -V(x_0)\geq c_\Omega$, therefore $c_\Omega = -\min_{\overline{\Omega}} V$. Let $z\in \overline{\Omega}$ be such that $V(z) = -c_\Omega$; we check that $x\mapsto S_\Omega(x,z)$ is a supersolution at $x=z$. Let $\omega(\cdot)$ be the modulus of continuity of $V$ on $\overline{\Omega}$; we have $|V(x)+c_\Omega|\leq \omega(r)$ for all $x\in B(z,r)\cap \overline{\Omega}$. From \eqref{eq:E}, as $x\mapsto u(x) = S_\Omega(x,z)$ is a subsolution in $\Omega$, we have \begin{equation*} |Du(x)|-V(x) \leq c_\Omega\qquad\Longrightarrow\qquad |Du(x)|\leq V(x)+c_\Omega\leq \omega(r) \end{equation*} for a.e. $x\in B(z,r)\cap \overline{\Omega}$ and for all $r>0$, thus \begin{equation*} |u(x)| = |u(x)-u(z)|\leq \int_0^1 |Du(sx + (1-s)z)\cdot (x-z)|\;ds \leq \omega(r)r \end{equation*} for $x\in B(z,r)\cap \overline{\Omega}$. This means that $x\mapsto u(x)$ is differentiable at $x=z$ with $Du(z) = 0$, and thus $x\mapsto u(x) = S_\Omega(x,z)$ is a solution to \eqref{eq:E}. Conversely, if $V(z)=-c_\Omega + \varepsilon$ for some $\varepsilon>0$, then at $x=z$ we have $0\in D^-u(z)$ where $u(x) = S_\Omega(x,z)$; therefore, if the supersolution test held, we would have $-V(z) \geq c_\Omega$, hence $\varepsilon \leq 0$, which is a contradiction. Thus $x\mapsto S_\Omega(x,z)$ fails to be a supersolution at $x=z$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:eigenvalue}] Without loss of generality we assume $c_U = 0$. Let $z\in \mathcal{A}_U \subset \Omega$ and let $w(x) = S_U(x,z)$ solve \eqref{eq:E}; we have $H(x,Dw(x)) = 0$ in $\Omega$.
We have \begin{equation*} c_\Omega = \inf \big\lbrace c\in \mathbb{R}: H(x,Du(x)) = c\;\text{admits a viscosity subsolution in}\;\Omega\big\rbrace \leq 0. \end{equation*} Assume on the contrary that $c_\Omega < 0$; then there exists $u\in \mathrm{C}(\overline{\Omega})\cap \mathrm{W}^{1,\infty}(\Omega)$ that solves $H(x,Du(x)) \leq c(0) < 0$ in $\Omega$. Let us consider $g(x) = w(x)$ defined for $x\in \partial \Omega$ and the boundary value problem \begin{equation}\label{eq:bdrS0} \begin{cases} H(x, Dv(x)) = 0 &\quad\;\text{in}\;\Omega,\\ \;\,\qquad\qquad v = g &\quad\;\text{on}\;\partial \Omega. \end{cases} \end{equation} As there exists a solution $u$ such that $H(x,Du(x)) < 0$ in $\Omega$, by Theorem \ref{thm:CP_Dirichlet} the problem \eqref{eq:bdrS0} cannot have more than one solution. On the other hand, the following function is a solution to \eqref{eq:bdrS0}: \begin{equation*} \mathcal{V}(x) = \min_{y\in \partial \Omega} \Big\lbrace g(y) + S_U(x,y) \Big\rbrace. \end{equation*} Indeed, for each $y\in \partial \Omega$, $x\mapsto g(y)+S_U(x,y)$ is a Lipschitz viscosity solution to $H(x,Dv(x)) = 0$ in $\Omega$; therefore, by the convexity of $H$, we obtain that $\mathcal{V}$ is a viscosity solution to $H(x,D\mathcal{V}(x)) = 0$ in $\Omega$ as well. On the boundary we see that $\mathcal{V}(x) \leq g(x)$, and also, for any $y\in \partial\Omega$, \begin{equation*} g(y)+S_U(x,y) = S_U(y,z) + S_U(x,y) \geq S_U(x,z) = g(x), \end{equation*} which implies that $\mathcal{V}(x) = g(x)$ on $\partial\Omega$. Therefore we must have $\mathcal{V}(x) = S_U(x,z)$ for all $x\in \overline{\Omega}$, hence $\mathcal{V}(z) = S_U(z,z) = 0$, and as a consequence there exists $y\in \partial \Omega$ such that \begin{equation*} S_U(y,z) + S_U(z,y) = 0. \end{equation*} This implies that $y\in \mathcal{A}_U$ (see \cite{fathi_pde_2005, ishii_vanishing_2020}), which is a contradiction since $\mathcal{A}_U$ is supported inside $\Omega$. Therefore we must have $c_\Omega = 0$. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:regu}] From $\mathrm{(H4)}$, for each $R>0$ there is a nondecreasing function $\omega_R:[0,\infty)\rightarrow [0,\infty)$ with $\omega_R(0) = 0$ such that $ |D_xL(x,v) - D_xL(y,v)|\leq \omega_R(|x-y|)$ if $|x|,|v|\leq R$. Fix $(x,v)\in \overline{\Omega}\times \overline{B}_h$; we can assume $h$ is large enough so that $\overline{\Omega}\subset \overline{B}_h$. Let $f(\delta) = L\left((1-\delta)x,v\right)$; then $\delta\mapsto f(\delta)$ is continuously differentiable and \begin{align*} \left|\frac{f(\delta) - f(0)}{\delta} - f'(0)\right| &\leq \sup_{s\in [0,\delta]}|f'(s) - f'(0)|\\ &= \sup_{s\in [0,\delta]} |x|\cdot\left|D_xL\left((1-s)x,v\right) - D_xL(x,v)\right| \leq \big(\mathrm{diam}\;\Omega\big)\, \omega_h\left(\delta|x|\right). \end{align*} Therefore \begin{equation*} \lim_{\delta\rightarrow 0^+} \left( \sup_{(x,v)\in \overline{\Omega}\times \overline{B}_h} \left|\frac{L\left((1-\delta)x,v\right) - L(x,v)}{\delta} - (-x)\cdot D_xL(x,v)\right|\right) = 0. \end{equation*} The conclusions follow from here. \end{proof} \begin{thm}[Comparison principle for the Dirichlet problem, \cite{Bardi1997}]\label{thm:CP_Dirichlet} Let $\Omega$ be a bounded open subset of $\mathbb{R}^n$. Assume $u_1,u_2\in \mathrm{C}(\overline{\Omega})$ are, respectively, a viscosity subsolution and supersolution of $H(x,Du(x)) = 0$ in $\Omega$ with $u_1\leq u_2$ on $\partial \Omega$.
Assume further that \begin{itemize} \item $|H(x,p) - H(y,p)|\leq \omega((1+|p|)|x-y|)$ for all $x,y\in \Omega$ and $p\in \mathbb{R}^n$. \item $p\mapsto H(x,p)$ is convex for each $x\in \Omega$. \item There exists $\varphi\in \mathrm{C}(\overline{\Omega})$ such that $\varphi\leq u_2$ in $\overline{\Omega}$ and $H(x,D\varphi(x)) < 0$ in $\Omega$. \end{itemize} Then $u_1\leq u_2$ in $\Omega$. \end{thm} \section*{Acknowledgments} The author would like to express his appreciation to his advisor, Hung V. Tran, for giving him this interesting problem and for his invaluable guidance and patience. The author would also like to thank Hiroyoshi Mitake for useful discussions on the subject when he visited Madison in September 2019, and for many great comments, including the suggestions that led to the study of the oscillating behavior of $r(\lambda)$ and Corollary \ref{cor:equala}. The author would also like to thank Hitoshi Ishii for many useful suggestions, including Lemma \ref{thm:Ishii} and a different perspective on Theorem \ref{thm:general}. Finally, the author would like to thank Michel Alexis and Dohyun Kwon for useful discussions and Jingrui Cheng for pointing out mistakes in an earlier version.\\ {} \end{document}
\begin{document} \title{Novel Numerical Algorithm with Fourth-Order Accuracy for the Direct Zakharov-Shabat Problem} \begin{abstract} We propose a new high-precision algorithm for solving the initial-value problem for the Zakharov-Shabat system. The method has fourth-order accuracy and is a generalization of the second-order Boffetta-Osborne scheme. It allows the Zakharov-Shabat spectral problem to be solved more efficiently for both the continuous and discrete spectra. \end{abstract} \keywords{Zakharov-Shabat problem, inverse scattering transform, nonlinear Schr\"{o}dinger equation, numerical methods} The solution of the direct problem for the Zakharov-Shabat problem (ZSP) is the first step in the inverse scattering transform (IST) for solving the nonlinear Schr\"{o}dinger equation (NLSE)~\cite{ZakharovShabat1972}. The numerical implementation of the IST has gained great importance and attracted special attention since Hasegawa and Tappert~\cite{Hasegawa1973a} proposed to use soliton solutions as carriers of bits of information for fiber optic data transmission. Solving the direct scattering problem amounts to computing the spectral data. To calculate them, it is necessary to solve the initial-value problem for the Zakharov-Shabat system. Therefore, considerable effort has been devoted to finding effective numerical methods for this problem. An overview of the methods used can be found in~\cite{Yousefi2014II,Turitsyn2017Optica,Vasylchenkova2017a}. Currently, one of the most effective methods for solving the ZSP is the Boffetta-Osborne (BO) method~\cite{Boffetta1992a}, which has second-order accuracy. Comparisons of this method with other methods were carried out in~\cite{Vasylchenkova2017a,Burtsev1998}. Besides the approximation accuracy, it is desirable to have an algorithm that requires minimal computational time to obtain the discrete set of spectral parameters with sufficient accuracy. This direction is implemented in the fast algorithm (FNFT) for solving the direct ZSP using the modified Ablowitz-Ladik method~\cite{Wahls2013, Turitsyn2017Optica}. The BO method does not allow a direct application of the fast algorithm, but the fast method can be applied to the exponential approximation of the transition matrix of the BO method~\cite{Prins2018a}. In this Letter, we focus on constructing a fourth-order method on a uniform grid. For a non-uniform grid, a fourth-order scheme~\cite{Blanes2017} was applied in~\cite{Prins2018}. In the future, the exponential approximation can be applied to our scheme as well, so that the fast algorithm becomes available. We write the Zakharov-Shabat system in matrix form \begin{equation}\label{psit} \frac{d}{dt}{ \Psi}(t)=Q(t){\Psi}(t), \end{equation} where $\Psi(t)$ is a complex vector function of the real argument~$t$ and $Q(t)$ is a complex matrix, $$ {\Psi}(t) = \left( \begin{array}{c} \psi_1(t)\\\psi_2(t) \end{array} \right),\quad Q(t) = \left( \begin{array}{cc} -i\zeta&q(t,z_0)\\-\sigma q^*(t,z_0)&i\zeta \end{array} \right), $$ where $\sigma=\pm 1$ for anomalous and normal dispersion, respectively, and $z_0$ plays the role of a parameter and will not be used further. The asterisk denotes complex conjugation. Consider the Jost initial conditions \begin{equation}\label{psi0} \left( \begin{array}{c} \psi_{1}\\\psi_{2} \end{array} \right) = \left( \begin{array}{c} e^{-i\zeta t}\\0 \end{array} \right)[1+o(1)],\quad t\to-\infty, \end{equation} which define the Jost solutions for real~$\zeta=\xi$.
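For concreteness, a minimal sketch of this setup in Python/NumPy might look as follows (the function and variable names are ours and purely illustrative): \begin{verbatim}
import numpy as np

def Q_matrix(q, zeta, sigma=1):
    # Zakharov-Shabat coefficient matrix Q(t) for a potential value q = q(t)
    # and spectral parameter zeta; sigma = +1 (anomalous) or -1 (normal).
    return np.array([[-1j * zeta,            q],
                     [-sigma * np.conj(q), 1j * zeta]])

def jost_initial(zeta, t_left):
    # Jost initial condition Psi = (exp(-i*zeta*t), 0), imposed at t = t_left
    # as an approximation of the limit t -> -infinity.
    return np.array([np.exp(-1j * zeta * t_left), 0.0j])
\end{verbatim}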
The coefficients of the scattering matrix,~$a(\xi)$ and~$b(\xi)$, are obtained as the limits \begin{equation}\label{ab} a(\xi)=\lim_{t\to\infty}\,\psi_1(t,\xi)\,e^{i\xi t},\quad b(\xi)=\lim_{t\to\infty}\,\psi_2(t,\xi)\,e^{-i\xi t}. \end{equation} The function~$a(\xi)$ can be extended to the upper half-plane $\xi\to \zeta$, where~$\zeta$ is a complex number with positive imaginary part $\eta=\mbox{Im}\,\zeta>0$. The spectral data are determined by $a(\zeta)$ and $b(\zeta)$ in the following way:\\ (1) the zeros of $a(\zeta)$ define the discrete spectrum $\{\zeta_k\}$, $k=1,...,K$, of ZSP~(\ref{psit}) and the phase coefficients $$r_k=\left.\frac{b(\zeta)}{a'(\zeta)}\right|_{\zeta=\zeta_k},\quad\mbox{where}\quad a'(\zeta)=\frac{da(\zeta)}{d\zeta};$$ (2) the continuous spectrum is determined by the reflection coefficient $$r(\xi)=\frac{b(\xi)}{a(\xi)},\quad \xi\in\mathbb{R}. $$ The matrix~$Q(t)$ in the system~(\ref{psit}) becomes skew-Her\-mi\-tian ($Q^*=-Q^T$) when the spectral parameter $\zeta=\xi$ is real and $\sigma=1$. Therefore, the system~(\ref{psit}) preserves the quantity \begin{equation} \frac{d}{dt}\left(|\psi_1(t)|^2+|\psi_2(t)|^2\right)=0. \end{equation} Taking into account the boundary conditions~(\ref{psi0}), we have \begin{equation}\label{conser} |\psi_1(t)|^2+|\psi_2(t)|^2=1. \end{equation} In addition, the trace formula is valid~\cite{Ablowitz1981}, \begin{eqnarray} C_n=-\frac{1}{\pi}\int\limits_{-\infty}^\infty\,(2i\xi)^n\,\ln|a(\xi)|^2\,d\xi+ \sum\limits_{k=1}^K\,\frac{1}{(n+1)} \left[(2i\zeta_k^*)^{n+1}-(2i\zeta_k)^{n+1}\right],\nonumber \end{eqnarray} which connects the NLSE integrals $C_n$ with the coefficient $a(\xi)$ and the discrete spectrum $\zeta_k$. The first integrals have the form $$C_0=\int\limits_{-\infty}^\infty|q|^2dt,\enskip C_1=\int\limits_{-\infty}^\infty qq^*_tdt, \enskip C_2=\int\limits_{-\infty}^\infty(qq^*_{tt}+|q|^4)dt,$$ $$C_3=\int\limits_{-\infty}^\infty(qq^*_{ttt}+4|q|^2qq^*_t+|q|^2q^*q_t)dt. $$ The formula with $n=0$ is called the nonlinear Parseval equality and is used to verify the numerical calculations and the consistency of the computed continuous and discrete spectra. We solve the system~(\ref{psit}). The matrix~$Q(t)$ depends linearly on the complex function~$q(t)$, which is given at the whole nodes of the uniform grid $t_n=-L+\tau n$ with step~$\tau$ on the interval $[-L,L]$. If the total number of points is~$2M+1$, then the grid step is $\tau=L/M$. Since the matrix~$Q(t)$ is specified only on the grid, Boffetta and Osborne suggested replacing the original system on the interval $[t_n-\frac{\tau}{2},t_n+\frac{\tau}{2}]$ with an approximate system with constant coefficients \cite{Boffetta1992a}, \begin{equation} \frac{d}{dt}\Psi(t)=Q(t_n)\Psi(t),\quad Q_n=Q(t_n), \end{equation} which is easily solved on the selected interval and gives the transition matrix from the layer $n-\frac{1}{2}$ to the layer $n+\frac{1}{2}$: \begin{equation}\label{T0} \Psi_{n+\frac{1}{2}}=e^{\tau Q_n}\Psi_{n-\frac{1}{2}}. \end{equation} This method has proven itself well, but nonetheless one would like to obtain a more accurate solution. So we formulate our task: on the interval $[t_n-\frac{\tau}{2},t_n+\frac{\tau}{2}]$, construct the transition matrix from $\Psi_{n-\frac{1}{2}}$ to $\Psi_{n+\frac{1}{2}}$ with maximum accuracy and minimum computational cost. The first step is the change of variables $$\Psi(t)=e^{tQ_n}Y(t),$$ so the initial system takes the form \begin{equation}\label{yt} \frac{d}{dt}{Y}(t)=L(t)Y(t),\quad L(t)=e^{-tQ_n}(Q(t)-Q_n)e^{tQ_n}.
\end{equation} In this form, the matrix~$L(t)$ vanishes at~$t=t_n$, and the derivative of~$Y(t)$ is zero at this point. This means that the solution is almost constant in the neighborhood of~$t_n$. If the original system is replaced by the approximate one \begin{equation}\label{app0} \frac{d}{dt}{Y}(t)=0, \end{equation} then the transition from $Y_{n-\frac{1}{2}}$ to $Y_{n+\frac{1}{2}}$ becomes trivial: \begin{equation}\label{YY} Y_{n+\frac{1}{2}}=Y_{n-\frac{1}{2}}. \end{equation} Returning to the original variable~$\Psi$ at the points $t_n-\frac{\tau}{2}$ and $t_n+\frac{\tau}{2}$, we get \begin{equation}\label{psiy} \Psi_{n-\frac{1}{2}}=e^{\left(t_n-\frac{\tau}{2}\right)Q_n}Y_{n-\frac{1}{2}},\quad \Psi_{n+\frac{1}{2}}=e^{\left(t_n+\frac{\tau}{2}\right)Q_n}Y_{n+\frac{1}{2}}, \end{equation} which, taking into account (\ref{YY}), exactly reproduces the transition of the BO scheme (\ref{T0}). Since we are interested in the values of~$Y$ only at the grid nodes, the solution~(\ref{YY}) can be interpreted as a solution of the difference equation \begin{equation}\label{R0} \frac{Y_{n+\frac{1}{2}}-Y_{n-\frac{1}{2}}}{\tau}=L_n\frac{Y_{n+\frac{1}{2}}+Y_{n-\frac{1}{2}}}{2}, \end{equation} which is an approximation of the continuous equation~(\ref{yt}), given that $L_n=L(t_n)=0$. Expanding~(\ref{R0}) in a Taylor series at the point~$t=t_n$, we find second-order accuracy: $$\frac{d}{dt}Y\left(t_n\right)-L(t_n)Y(t_n)\approx \frac{\tau^2}{24}\frac{d^3}{dt^3}Y(t_n).$$ Thus, we have shown that the BO scheme corresponds to the simplest finite-difference approximation~(\ref{R0}). There are two possibilities for constructing more elaborate approximations of equation~(\ref{yt}) on the interval $[t_n-\frac{\tau}{2}, t_n+\frac{\tau}{2}]$. The first is to build a finite-difference analog of this equation. The second is to construct an approximation of the operator~$L(t)$ on the entire interval $[t_n-\frac{\tau}{2}, t_n+\frac{\tau}{2}]$ from the available values of~$Q_n$ on the regular grid and then solve the resulting system by an analytical method. Consider the first approach. Since we want to refine the BO scheme, we take the function~$Y$ at only two grid nodes, $Y_{n-\frac{1}{2}}$ and $Y_{n+\frac{1}{2}}$. For the matrix~$L$, we take the three nearest values~$L_{n-1}$, $L_n$ and $L_{n+1}$. Using these values, we look for a scheme by the method of undetermined coefficients, \begin{eqnarray}\label{abcd} &&\frac{Y_{n+\frac{1}{2}}-Y_{n-\frac{1}{2}}}{\tau}= \left(\alpha L_{n+1}+\beta L_{n-1}\right)Y_{n+\frac{1}{2}}+\left(\gamma L_{n+1} +\delta L_{n-1}\right)Y_{n-\frac{1}{2}}.\nonumber \end{eqnarray} Here we used the condition~$L_n=0$ to drop the terms with~$L_n$ on the right-hand side. Expanding~(\ref{abcd}) in a Taylor series at the point~$t=t_n$ and using the equation~(\ref{yt}) at this point and its time derivatives, we find that the expression~(\ref{abcd}) has at least fourth-order accuracy in $\tau$: $$\frac{d}{dt}{Y}(t_n)-L(t_n)Y(t_n)\approx\frac{\tau^4}{24}(\beta -\alpha)\frac{d{L}_n}{dt}\frac{d^2L_n}{dt^2}Y_n-\frac{\tau^4}{5760}\left(17\frac{d^4L_n}{dt^4}+12\frac{d^2L_n}{dt^2}\frac{dL_n}{dt}\right)Y_n$$ for $$\gamma =\frac{1}{24}-\alpha,\quad \delta=\frac{1}{24}-\beta$$ and arbitrary $\alpha$ and $\beta$.
The resulting scheme can be rewritten as \begin{eqnarray}\label{sch} &\left[I-\tau \alpha L_{n+1}-\tau \beta L_{n-1}\right]Y_{n+\frac{1}{2}}=\left[I+\tau\left(\frac{1}{24}-\alpha \right)L_{n+1}+ \tau\left(\frac{1}{24}-\beta \right)L_{n-1}\right]Y_{n-\frac{1}{2}},\nonumber \end{eqnarray} where $$L_{n+1}=e^{-(t_n+\tau) Q_n}\left(Q_{n+1}-Q_n\right)e^{(t_n+\tau) Q_n},$$ $$ L_{n-1}=e^{-(t_n-\tau) Q_n}\left(Q_{n-1}-Q_n\right)e^{(t_n-\tau) Q_n}.$$ In the original variables, this scheme takes the form \begin{eqnarray}\label{schpsiM} &\left[I-\tau \alpha M_{n+1}-\tau \beta M_{n-1}\right]e^{-\frac{\tau}{2}Q_n}\Psi_{n+\frac{1}{2}}= \left[I+\tau\left(\frac{1}{24}-\alpha \right)M_{n+1}+ \tau\left(\frac{1}{24}-\beta \right)M_{n-1}\right]e^{\frac{\tau}{2}Q_n}\Psi_{n-\frac{1}{2}},\nonumber \end{eqnarray} where $$M_{n+1}=e^{-\tau Q_n}\left(Q_{n+1}-Q_n\right)e^{\tau Q_n},$$ $$ M_{n-1}=e^{\tau Q_n}\left(Q_{n-1}-Q_n\right)e^{-\tau Q_n}.$$ For real values $\zeta=\xi$, energy conservation~(\ref{conser}) is important, so if $\alpha = \beta = 1/48$, then the transition operator \begin{eqnarray}\label{schpsiMS} T=e^{\frac{\tau}{2}Q_n}\left[I-\frac{\tau}{48}\left(M_{n+1}+M_{n-1}\right)\right]^{-1} \left[I+\frac{\tau}{48}\left(M_{n+1}+M_{n-1}\right)\right]e^{\frac{\tau}{2}Q_n}\nonumber \end{eqnarray} becomes a unitary matrix that conserves the quadratic energy~(\ref{conser}). Since the spectrum of the matrices~$Q_n$ is purely imaginary, the expression in square brackets is the Cayley formula. The transition matrix (\ref{schpsiMS}) was obtained using a transformation of variables, and it conserves the energy for real spectral parameters; therefore the corresponding scheme will be called the fourth-order conservative transformed (CT4) scheme. Remark 1. The spectral parameter~$\xi$ enters only through the matrix exponentials, as in the BO method; therefore, the use of the fast algorithm (FNFT) is difficult~\cite{Wahls2013}, but it becomes possible after an exponential approximation~\cite{Prins2018a}. Remark 2. An open question is how to use the free parameters $\alpha$ and $\beta$ for computations with complex spectral parameters. Although the preservation of high-frequency oscillations is important, for eigenvalues near the imaginary axis another criterion for the scheme may be needed. \begin{figure} \caption{The approximation order of the Boffetta-Osborne (BO) and conservative transformed (CT4) schemes.} \label{fig:1} \end{figure} The following formula was used to calculate the approximation order $m$: \begin{equation}\label{order} m=\log_{\frac{\tau_1}{\tau_2}}\frac{\left\Vert\tilde{\Psi}_1(L)\right\Vert_2} {\left\Vert\tilde{\Psi}_2(L)\right\Vert_2}= \frac{\log_2\frac{\left\Vert\tilde{\Psi}_1(L)\right\Vert_2} {\left\Vert\tilde{\Psi}_2(L)\right\Vert_2}}{\log_2\frac{\tau_1}{\tau_2}}, \end{equation} where $\tau_i$, $i=1$, $2$, are the steps of the computational grids for two calculations with the same spectral parameter $\zeta$ and $\tau_1 > \tau_2$, and $\tilde{\Psi}_i(L)$ is the deviation of the computed value $\Psi_i(L)$ from the exact analytical value $\bar{\Psi}_i(L)$ at the boundary point $t=L$. The calculations were carried out for different $p$-norms and showed close values of the approximation orders. However, for the Euclidean $2$-norm the curves were the smoothest. The scheme~(\ref{schpsiMS}) was tested for different model signals for which the analytical expressions for the spectral data are known. In particular, calculations were performed for the higher-order soliton from~\cite{Satsuma1974} with a small number of discrete eigenvalues.
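A direct transcription of the transition operator (\ref{schpsiMS}) with $\alpha=\beta=1/48$, together with the BO step (\ref{T0}) for comparison, might read as follows (a minimal sketch using NumPy/SciPy; it takes the matrices $Q_{n-1}$, $Q_n$, $Q_{n+1}$ at three neighbouring whole nodes): \begin{verbatim}
import numpy as np
from scipy.linalg import expm

def bo_step(Qn, tau):
    # Second-order Boffetta-Osborne transition matrix exp(tau * Q_n).
    return expm(tau * Qn)

def ct4_step(Qm, Qn, Qp, tau):
    # CT4 transition matrix with alpha = beta = 1/48, built from
    # Q_{n-1} = Qm, Q_n = Qn, Q_{n+1} = Qp on a uniform grid of step tau.
    E  = expm(0.5 * tau * Qn)            # exp(tau/2 * Q_n)
    Ep = expm(tau * Qn)                  # exp(tau * Q_n)
    Em = np.linalg.inv(Ep)               # exp(-tau * Q_n)
    Mp = Em @ (Qp - Qn) @ Ep             # M_{n+1}
    Mm = Ep @ (Qm - Qn) @ Em             # M_{n-1}
    S  = (tau / 48.0) * (Mp + Mm)
    I  = np.eye(2, dtype=complex)
    # Cayley-type factor (I - S)^{-1}(I + S); unitary when S is skew-Hermitian.
    return E @ np.linalg.solve(I - S, I + S) @ E
\end{verbatim} For real $\xi$ and $\sigma=1$ the bracketed factor is unitary, so the quadratic integral (\ref{conser}) is preserved by this step.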
However, to present our scheme (\ref{schpsiMS}) we chose calculations for one soliton, because this solution is smooth and both its spectral data and its eigenfunctions are known analytically~\cite{Hasegawa1995}. Here we present numerical results for the best known potential $q(t) = \mbox{sech}(t)$. It has a single eigenvalue $\zeta_1 = 0.5i$ with $b(\zeta_1) = -1$. Since this potential is purely solitonic, $b(\xi) = 0$ and the continuous spectrum energy $E_c =-\frac{1}{\pi}\int\limits_{-\infty}^\infty\,\ln|a(\xi)|^2\,d\xi = 0$, while $a(\xi) = (\xi - 0.5i)/(\xi + 0.5i)$. \begin{figure} \caption{The continuous spectrum errors.} \label{fig:2} \end{figure} \begin{figure} \caption{The continuous spectrum energy errors.} \label{fig:3} \end{figure} Figure~\ref{fig:1} demonstrates the approximation order~$m$ of both schemes as a function of the spectral parameter $\xi\in [-20, 20]$. Each line was calculated by formula (\ref{order}) using two nested grids with a doubled grid step $\tau = L/M$, $L = 40$. For the blue line, the coarse and fine grids were defined by $M = 2^{10}$ and $2^{11}$. For the red one, the grid was refined once more, namely $M = 2^{11}$ and $2^{12}$. Recall that the total number of points in the whole domain $[-L, L]$ is $2M +1$. Figure~\ref{fig:2} shows the continuous spectrum errors for the fixed value of the spectral parameter $\xi = 20$. The black dashed line in Fig.~\ref{fig:2} marks the minimum number of grid nodes~$M_{\min}$ that guarantees a good approximation. Indeed, when calculating the continuous spectrum, it is necessary to choose a time step $\tau=L/M$ that correctly resolves the fastest oscillations. For a fixed value of $\xi$, the local frequency $\omega(t;\xi)=\sqrt{\xi^2+|q(t)|^2}$ of the system (\ref{psit}) varies from $\omega_{\min}=|\xi|$ to $\omega_{\max}=\sqrt{\xi^2+q_{\max}^2}$, where $q_{\max}=\max\limits_t|q(t)|$ is the maximum absolute value of the potential $q(t)$. Therefore, the step $\tau$ cannot be arbitrary. In order to resolve the most rapid oscillations, it is necessary to have at least four time steps per oscillation period, so the following inequality must be satisfied: $$4\tau=4\frac{L}{M}\leq \frac{2\pi}{\omega_{\max}}.$$ Therefore, any difference scheme will approximate the solutions of the original continuous system (\ref{psit}) provided the number of points satisfies $M\geq M_{\min}=2\,L\,\omega_{\max}/\pi$. The calculation errors for the continuous spectrum energy are compared in Fig.~\ref{fig:3}. It is important to define the size of the spectral domain $L_{\xi}$ and the corresponding grid step $d\xi$ for the calculation of the continuous spectrum energy. Following the conventional discrete Fourier transform, we take the same number of points $N_{\xi} = N$ in the spectral domain and define the spectral step as $d\xi = \pi/(2L)$, so that the size of the spectral interval is $L_{\xi} = \pi/(2\tau)$. The energy integral was computed by the trapezoidal rule. The discrete spectrum errors are presented in Fig.~\ref{fig:4}. The parameters $a(\zeta)$ and $b(\zeta)$ were computed for the analytically known eigenvalue $\zeta_1 = 0.5i$. In this test, we did not use any numerical algorithm to find the eigenvalue but computed $a(\zeta)$ and $b(\zeta)$ at the exact point $\zeta = \zeta_1$ directly. This was done intentionally to estimate the error of the scheme itself and to avoid the influence of errors from other numerical algorithms. All the errors in Figs.~\ref{fig:2}--\ref{fig:4} are calculated using the Euclidean 2-norm.
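To illustrate how the test of Fig.~\ref{fig:2} can be reproduced, the following sketch (building on the helper functions above; the half-grid bookkeeping at the boundaries is simplified) propagates the Jost solution for $q(t)=\mathrm{sech}(t)$ and compares the computed $a(\xi)$ with the analytical value: \begin{verbatim}
import numpy as np

def scattering_a(xi, L=40.0, M=2**11):
    # Approximate a(xi) for q(t) = sech(t) by propagating the Jost solution
    # with the CT4 step over the interior whole nodes t_n = -L + tau*n.
    tau = L / M
    t = -L + tau * np.arange(2 * M + 1)
    q = 1.0 / np.cosh(t)
    Psi = jost_initial(xi, t[0] + 0.5 * tau)         # Psi_{1/2}
    for n in range(1, 2 * M):                        # Psi_{n-1/2} -> Psi_{n+1/2}
        Psi = ct4_step(Q_matrix(q[n - 1], xi),
                       Q_matrix(q[n], xi),
                       Q_matrix(q[n + 1], xi), tau) @ Psi
    t_right = t[-1] - 0.5 * tau
    return Psi[0] * np.exp(1j * xi * t_right)        # a(xi) ~ psi_1 e^{i xi t}

xi = 20.0
print(abs(scattering_a(xi) - (xi - 0.5j) / (xi + 0.5j)))
\end{verbatim} Replacing the CT4 step by the BO step and halving the grid step gives a direct numerical estimate of the order $m$ via (\ref{order}).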
Figures~\ref{fig:2} and \ref{fig:4} also provide a comparison of the computational time. One can see that the CT4 scheme achieves better accuracy faster than the BO scheme. \begin{figure} \caption{The discrete spectrum errors.} \label{fig:4} \end{figure} In this Letter, we proposed a family of fourth-order finite-difference one-step schemes for solving the direct Zakharov-Shabat problem on a uniform grid. Within this family, a scheme preserving the quadratic integral for the continuous spectrum was singled out. Numerical experiments for the soliton potential confirmed the theoretical order of approximation and demonstrated a significant advantage of our conservative scheme over the Boffetta-Osborne scheme. The proposed scheme works on uniform grids, which can be useful when processing optical signals recorded at the receiver at regular time intervals.\\ \noindent{\bf Funding.} This work was supported by the Russian Science Foundation (grant No.~17-72-30006). \end{document}
\begin{document} \title{Hybrid hypercomputing: towards a unification of quantum and classical computation.} \author{Dominic Horsman and William J. Munro} \address{Hewlett-Packard Laboratories, Filton Road, Bristol BS34 8HZ} \begin{abstract} We investigate the computational power and unified resource use of hybrid quantum-classical computations, such as teleportation and measurement-based computing. We introduce a physically causal and local graphical calculus for quantum information theory, which enables high-level intuitive reasoning about quantum information processes. The graphical calculus defines a local information flow in a computation which satisfies conditions for physical causality. We show how quantum and classical processing units can now be formally integrated, and give an analysis of the joint resources used in a typical measurement-based computation. Finally, we discuss how this picture may be used to give a high-level unified model for hybrid quantum computing. \end{abstract} \maketitle \section{Introduction} The standard approach to quantum computing treats quantum and classical information processing and communication as essentially two separate systems. Quantum and classical computations are described using different formalisms, and consume separate resources. While they are frequently combined, for example in protocols such as teleportation \cite{telepo} and dense coding \cite{denco} the interface between them is at best not formalised and at worst entirely obscured by standard presentations. Some `translation rules' exist between quantum and classical \cite[ch1]{nandc}, and we can formally define the complexity of specific quantum algorithms compared to their classical counterparts \cite{quantcomplex}, but we still lack an underlying model of computation and resources for both that would enable direct comparisons to be made. It is even the case (although often overlooked) that the lack of a proof for the extended Church-Turing thesis means that it is still an open question whether quantum computers are in fact significantly more powerful than classical computers \cite{Bennett94strengthsand}. Given the potentially important applications of quantum computing methods in areas such as cryptography and quantum simulation \cite{crypto,sim}, it is actually a significant question whether we should deploy our resources to implement them by physically building quantum computers, or by investigating new classical algorithms to run on current hardware. Without a unified framework for quantum and classical computations, it is hard to see how we could begin to answer questions such as this with any degree of certainty. The problem becomes particularly acute when we consider hybrid computing schemes such as measurement-based quantum computing, where the feed-forward of classical measurement results is an integral part of the quantum design \cite{danmbqc,owqc}. Consideration of the resources used during such a computation tends to concentrate solely on the quantum resources required (generally in the form of a cluster state \cite{cluster}), with classical measurement and communication being taken as essentially resource-free. This leaves open problems both of the resources required to physically implement these computations, and also fundamental questions on the computability power of such algorithms. Further problems arise if we consider hybrid cryptographic schemes where formally provable security is desired. 
The lack of formal links between the quantum and classical elements gives a potential area of uncertainty in the security proofs for such systems. In this paper we present a candidate for a framework in which to analyse hybrid computation by fully integrating the quantum and classical resources and processes used. This takes the form of a graphical calculus based on the logical Heisenberg picture \cite{gottesman,dh,mevdh}, closely related to the stabilizer formalism that is typically used to describe measurement-based quantum computation \cite{dan}. The framework integrates quantum and classical elements directly, and gives an intuitively simple set of formally exact reasoning tools for hybrid information theory. There is a local flow of information, enabling data to be tracked causally through a computation, including through what are typically considered to be simply `classical' communication channels. Furthermore, this flow satisfies all reasonable assumptions of causality in physical systems, such as one-way action in time. We will find that this framework enables us to describe the resource use of a measurement-based computation in terms of a single set of resources, and gives us a much clearer idea both of the physical processes taking place during the computation and of the logical dependency structure of various algorithms. We will finish by discussing how the framework may be abstracted to give us a formal computability model for hybrid computation. \section{Quantum information processing} The basic component of a quantum information processing system is a two-level quantum state called a quantum bit or qubit. As with classical two-level systems (bits), the levels are usually denoted as the computational `0' and `1', and a measurement of a qubit in the computational basis will return one of these values with a probability depending on the quantum state. Unlike classical bits, however, outside a measurement context a qubit can be in a `mixture' of the 0 and 1 states (which are written $\ket{0}$ and $\ket{1}$), called a \emph{superposition}. Qubits can also maintain correlations that are stronger than those accessible to classical systems, known as \emph{entanglement}. Entanglement can be thought of as `spatially extended' superposition: the joint state of, for example, a system of two qubits is in a superposition, which then gives rise to entanglement between the qubits individually. Quantum computations are significantly more sensitive to the resources required to complete them than their classical counterparts. Classical information can be freely read, duplicated and deleted; for example, if two copies of a bit are required for an algorithm, a single one can be created and then cloned. By contrast, quantum information is subject to strict theorems on \emph{no cloning} and \emph{no deleting} \cite{clone1,nodel}. No cloning states that we cannot start with a qubit in a given quantum state, interact it with another, and then come out with two copies of the original state (this does not, of course, preclude two qubits being \emph{prepared} in the same, known, state). No deleting is the reverse process: we cannot take a given qubit and change its state using only unitary evolution (that is, quantum evolution without classical measurement) to a `blank' state without transferring the state information elsewhere. These results are for general quantum states; there are specific states which can be cloned and deleted, which however turn out to be indistinguishable from classical `bit' states.
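As a brief aside, the standard textbook argument behind no cloning (not specific to the formalism developed below) makes this restriction explicit. Suppose a single unitary $U$ copied two states onto a blank qubit,
$$ U\big(\ket{\psi}\otimes\ket{0}\big)=\ket{\psi}\otimes\ket{\psi}, \qquad U\big(\ket{\phi}\otimes\ket{0}\big)=\ket{\phi}\otimes\ket{\phi}.$$
\noindent Taking the inner product of the two equations and using $U^\dagger U=\openone$ gives $\langle\psi|\phi\rangle=\langle\psi|\phi\rangle^2$, so $\langle\psi|\phi\rangle\in\{0,1\}$: only identical or mutually orthogonal states -- precisely the `bit'-like states mentioned above -- can be cloned by one and the same device.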
Both no cloning and no deleting are closely related to the readability problem for quantum information: in general, measuring the state of a qubit changes it (it will, in general, change from a superposition to a definite value). This again contrasts with classical bits which can be easily read without destruction of their state. Using these building blocks, information processing in quantum systems can be categorised broadly into reversible and irreversible computations, dependent crucially on the ordering of quantum and classical elements in the process. Reversible computations use unitary gates acting on coherent quantum systems throughout the computation, with only the final readout of the process result yielding classical data. This has traditionally been the framework used for quantum computing, and includes the circuit and quantum walks models \cite{deutschct,skw}. By contrast, the more recently-devised method of measurement-based quantum computing (MBQC) includes irreversible measurements and classical data during the computation itself as well as for the readout. A coherent and highly-entangled quantum state called a \emph{cluster state} is prepared at the beginning of the computation, and then the algorithm is implemented through a time-ordered pattern of measurements on individual qubits or qubit blocks. The exact type of future measurements in the algorithm will depend on the outcome of previous measurements on the cluster. At the end of the process the entanglement in the cluster state has been used up (it is therefore considered to be the resource of the computation), and the result is read out from previously-designated `output' elements of the original cluster. One fundamental measurement-based protocol that we will use frequently (although historically pre-dating the development of MBQC) is teleportation. In teleportation, a qubit in an unknown state is transmitted using two bits of classical information and previously shared entanglement. Essentially the quantum state is converted to classical data, transmitted, and then converted back to quantum, within the framework of two shared entangled qubits. Various teleportation-type schemes are often found in measurement-based computations, as a way of transferring a quantum state (including its entanglement properties) from one part of the cluster to another. \section{Quantum mechanics in pictures} There has recently been a significant amount of research into graphical calculi for information flow in quantum processes. This has largely centred on the category-theoretic work of Coecke and Abramsky \cite{cat,kqm}. The calculi abstract out the structure of quantum information processes, and present a high-level framework for reasoning about information flow in systems, generally containing both bits and qubits. There is now a considerable menagerie of diagrammatic representations for the mathematical structure of quantum mechanics (eg. \cite{gc1,gc2}), and it is with some trepidation that we intend to advance yet another. The difference lies in the perspective from which the present calculus has been developed. Previous diagrammatic representations have concentrated on the mathematical, logical and information-theoretic structures of information flow. The present work is concerned rather with the \emph{physical} and \emph{causal} flows of information during quantum computation. 
These two broad areas are by no means mutually exclusive, but are sufficiently dissimilar to cause difficulties if one attempts to answer questions of interest to one using the representations of the other. A particular example of this comes in the Coecke and Abramsky calculus where there are logical information flows that travel backwards in time. If we want to ask about how information \emph{physically} gets between points in a computation, then such a flow picture will present serious problems. This present calculus has been developed specifically from such a ``physics'' rather than ``computer science'' perspective, with the hope of adding to our integrated understanding of the logical and physical nature of quantum information processing. \subsection{The logical Heisenberg formalism} We will first describe the formalism of quantum theory on which the graphical calculus will be based. This is a Heisenberg-picture formulation of quantum mechanics, where operators rather than states give the evolution of a system. The formalism is mathematically entirely equivalent to more standard pictures of quantum mechanics, the only difference being that there is no non-unitary evolution within the system. It was developed by Deutsch and Hayden following Gottesman \cite{gottesman,dh,mevdh}, and can be viewed as a generalised form of Gottesman's stabilizer theory. The basics of this formalism are the Hilbert-Schmidt space \emph{descriptors} for each qubit, from which we can generate the set of all operators on that qubit. These are the Heisenberg operators which track the evolution of the qubit systems through the computation. The initial state for descriptors is defined as the $\ket{0}$ (unentangled) state, which for qubit $a$ in an $n$-qubit system is written $$ \mathbf{q}_a = \ot{q_{ax}}{q_{az}} = \ensuremath{\mathsf{1}}en^{\otimes n-a} \otimes \ot{\sigma_x}{\sigma_z} \otimes \ensuremath{\mathsf{1}}en^{\otimes a-1}$$ \noindent (In its fullest form we should also have a $y$-component to $\mathbf{q}_a$, but it is not independent of the other components: $q_{ay} = q_{ax}q_{az}$. We therefore do not need to track it separately). The descriptor formalism is unique in quantum mechanics in that the individual qubit descriptors contain locally all the information available to any joint system $$ q_{abij} = q_{ai} \ q_{bj} $$ The descriptors evolve only when the specific qubit is subjected to a (single- or multiple-qubit) gate, so remote operations do not change the qubit. Some useful single-qubit gates are the NOT gate, $$ \mathbf{q}(t_1) = \ot{q_x(t)}{- q_z(t)}$$ \noindent and the Hadamard, which in the Schr\"{o}dinger picture acts as \begin{eqnarray*} \ket{0} & \rightarrow & \frac{1}{\sqrt{2}}(\ket{0}+\ket{1}) \\ \ket{1} & \rightarrow & \frac{1}{\sqrt{2}}(\ket{0}-\ket{1}) \end{eqnarray*} \noindent and in descriptor terms is written $$ \mathbf{q}(t_1) = \ot{q_z(t)}{q_x(t)}$$ Another example of a frequently used gate is the controlled-NOT (CNOT) operation between two qubits. A CNOT operation between a control and target qubit performs a NOT on the target if the control is 1, and does nothing if the control is 0. In the descriptor formalism this is written \begin{eqnarray} \mathbf{q}_1(t_1) & = & \ot{q_{1x}(t) \ q_{2x}(t)}{q_{1z}(t)} \nonumber \\ \mathbf{q}_2(t_1) & = & \ot{q_{2x}(t)}{q_{1z}(t) \ q_{2z}(t)} \label{CNOT}\end{eqnarray} Entanglement in this formalism is given by a non-trivial dependency of a descriptor outside the space that it occupied at time $t_0$ \cite{mevent}. 
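A quick numerical check of the CNOT rule (\ref{CNOT}) can be made directly in the Heisenberg picture. The following NumPy sketch (our own illustration, not part of the formalism; qubit 1 is taken as the control) conjugates the single-qubit Pauli operators by the CNOT unitary and confirms the dependency copying stated above.
\begin{verbatim}
import numpy as np

# Heisenberg-picture check of the CNOT descriptor update (qubit 1 = control,
# qubit 2 = target): X1 -> X1 X2, Z1 -> Z1, X2 -> X2, Z2 -> Z1 Z2.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def heis(op):
    """Evolve an operator in the Heisenberg picture: O -> U^dagger O U."""
    return CNOT.conj().T @ op @ CNOT

kron = np.kron
assert np.allclose(heis(kron(X, I)), kron(X, X))   # q_{1x} picks up q_{2x}
assert np.allclose(heis(kron(Z, I)), kron(Z, I))   # q_{1z} unchanged
assert np.allclose(heis(kron(I, X)), kron(I, X))   # q_{2x} unchanged
assert np.allclose(heis(kron(I, Z)), kron(Z, Z))   # q_{2z} picks up q_{1z}
print("CNOT descriptor rules verified")
\end{verbatim}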
The relationship with the standard, Schr\"{o}dinger, formalism of quantum mechanics is given most easily through the density matrix, which can be written for two qubits as \begin{equation}\rho_{12} = \sum_{i,j=0}^3 \av{q_{1i}q_{2j}} \ \sigma_i \otimes \sigma_j\label{expected} \end{equation} \noindent where $q_{ay} = q_{ax}q_{az}$. We can use this relation with the density matrix to simplify the formalism from this rather unwieldy state. We can see from (\ref{expected}) that in terms of measurement outcomes, it is the expected values of the descriptors which are important. These expectation values in (\ref{expected}) are defined with respect to the fixed Heisenberg state $\ket{0}$, which for the Pauli operators gives $$ \av{\sigma_x} = \bra{0} \sigma_x \ket{0} = \av{\sigma_y} = 0 \ \ , \ \ \ \av{\sigma_z} = \av{\openone} = 1$$ \noindent Recalling the Pauli algebra $\sigma_i \sigma_j = \delta_{ij}\openone + i\,\epsilon_{ijk} \sigma_k$ for $i,j,k \in \{1,2,3\}$ (where $\epsilon_{ijk}$ is the Levi-Civita symbol and $\delta_{ij}$ the Kronecker delta), we therefore have the relations $$ \av{\sum_{ijkl} (a_i \sigma_x)(b_j \sigma_y)(c_k \sigma_z)(d_l \openone)} = \av{\sum_{ijkl} (a_i \sigma_x)(b_j \sigma_x)(c_k \openone)(d_l \openone)}$$ \noindent This means that we do not need separate $\sigma_y$ or $\sigma_z$ operators in the description -- wherever they occur, they can be replaced by $\sigma_x$ and $\openone$ respectively, without changing any of the current or future predictions of the formalism. These are now all the elements that are needed to transfer to a graphical calculus. There is an isomorphism between the formalism described and the calculus we will give in the next section, so that anything proven using the calculus can also be proven in the Deutsch-Hayden picture. As this is also isomorphic to the standard quantum formalism, the calculus will form a complete proof net for quantum propositions. \subsection{Yet another graphical calculus} From the previous section we can pick out the elements that we need to translate graphically. The representation has as its basic elements qubits, which have two `slots' representing the $x$- and $z$-components of the descriptor. In these slots we need to put representations of our two basic operators, $\sigma_x$ and $\ensuremath{\mathsf{1}}en$. The representations must satisfy the properties $\sigma_x . \ensuremath{\mathsf{1}}en = \sigma_x$ and $\sigma_x^2 = \ensuremath{\mathsf{1}}en$. As this is a Heisenberg formalism, we start from our initial `zero state' (we can take this by convention to be $\ket{0}$) and evolve from there using gates. A `0' qubit is given by \begin{center} \includegraphics[height=0.8cm]{qubit} \end{center} \noindent The left side represents $q_x$ and the right $q_z$. $\sigma_x$ is given by a triangle, and $\ensuremath{\mathsf{1}}en$ by a space. We can now perform a NOT gate to gain the `1' state of the computational qubit basis.
In the descriptor formalism this acts as $q_x \rightarrow q_x$, $q_z \rightarrow -q_z$, so we represent it (time flows from left to right): \begin{center} \includegraphics[height=0.8cm]{NOT} \end{center} \noindent The final single-qubit operation we are interested in is a Hadamard transform, which swaps the $q_x$ and $q_z$ components, hence left and right on a qubit: \begin{center} \includegraphics[height=0.8cm]{Hadamard} \end{center} \noindent A qubit in an unknown state is represented as \begin{center} \includegraphics[height=0.8cm]{arbitraryqubit} \end{center} \noindent Finally we have composition rules for combining known elements within a qubit. We will represent operators on different subspaces of the full Hilbert space by different colours, so that only triangles of the same colour compose, and different colours commute: \begin{center} \includegraphics[height=6cm]{composition} \end{center} \noindent Using two qubits in unknown states we can define the general actions of the two-qubit CNOT gate: \begin{center} \includegraphics[height=2.1cm]{CNOT} \end{center} \noindent We can look in detail at what's going on here by comparing it with the descriptor formalism (\ref{CNOT}). A CNOT operation consists of \emph{copying} descriptor dependencies between qubits, with the $q_z$ component of qubit 1 being copied into the $q_z$ component of qubit 2, and the $q_x$ component of qubit 2 copying back into qubit 1. Another copying operation is the controlled-Z (CZ) gate: \begin{center} \includegraphics[height=2.1cm]{CZ} \end{center} We now introduce our representation of classical bits using measurement gates. A measurement in the computational basis is represented \begin{center} \includegraphics[height=0.8cm]{MZ} \end{center} \noindent Measurements in other bases are implemented by rotations to the computational basis and then a measurement. One specific useful measurement is that in the X-basis, given by a Hadamard and then a measurement. This is represented as a shorthand by \begin{center} \includegraphics[height=0.8cm]{MX} \end{center} \noindent In operational terms, the bits given by \begin{center} \includegraphics[height=0.8cm]{exactbits} \end{center} \noindent are in a definite state in the measurement basis (0 and 1 respectively). Any bit containing a triangle is in a completely random state. \noindent Bits can now participate in the same single-input NOT gate as is used for qubits: \begin{center} \includegraphics[height=0.8cm]{bitgates} \end{center} \noindent and two-qubit gates can take a mixture of bits and qubits as their inputs: \begin{center} \includegraphics[height=10cm]{bitqubitgates} \end{center} This completes the definition of the calculus. What we have given is not fully universal for quantum computing, as we do not have a definition for arbitrary single-qubit rotations \cite[p191]{nandc}. We do, however, have a full set of Clifford gates which are widely used in measurement-based computing, and give us an interesting subset of quantum computational behaviour. We can now identify the key components of the symmetric monoidal category structure with strong compact closure used by Abramsky and Coecke \cite{cat}. The \emph{objects} of the system are the contents of the qubits and bits, that is the positions and quantities of the triangles and the rotation boxes (remember that the filled box given by the NOT gate is simply a special type of rotation box).
The gates comprise the \emph{morphisms} between different configurations of the qubit/bit contents, and we can easily see how an \emph{identity} could be defined that corresponds to a $\openone$ gate. We have a \emph{symmetry operation} that swaps the state of the qubits (given by the circuit-model SWAP gate): \begin{center} \includegraphics[height=1.6cm]{SWAP} \end{center} It is interesting to consider the structure of compound systems in this representation. In contrast to our usual intuitions about the creation of composite systems in quantum mechanics, in this representation a composite system is given by the simple combination of the constituent systems; there's nothing in the joint system that is not present in the individual ones. It is therefore not immediately obvious that we have a tensor rather than Cartesian product structure for compound systems. However, when combining systems we not only need to take into account the individual qubit representations, but also the ``triangle'' composition rules, which will be important when gates (including measurement) are performed on the joint system. The asymmetry in the composition rules means that unique subsystem states cannot be recovered from a joint system; the correct \emph{associative connective} for the symmetric monoidal category is indeed the tensor product. Finally, the requirement of strong compact closure is satisfied in systems exhibiting teleportation, as we will demonstrate below. We therefore now have all the elements needed for a graphical representation of quantum mechanics. \section{Dependencies and entanglement} The first important element of quantum computing to consider in this representation is entanglement. As this is the physical outcome of the tensor product structure in quantum mechanics, we can see that it will come from a ``triangle'' composition. To see this in detail, let us look at an entangling Bell operation: \begin{center} \includegraphics[height=2.1cm]{singlet} \end{center} The first obvious thing to say about this system is that the two qubits finish as identical: each has now incorporated part of the state of the other qubit from the interaction operation. In other words, the state of one qubit now depends on information that it picked up from the other during their mutual interaction. Note that this dependency does not change unless there are local operations performed on that particular qubit; unlike in the standard representation of quantum mechanics, remote operations do not change the state of a qubit. So how precisely are these dependencies encoding the entanglement between qubits? As mentioned above, the key to this is the composition rule for triangles. Two qubits are entangled if there is a particular joint measurement which gives a different result from the composition of the individual measurements (e.g. \cite[ch2]{nandc}). As the presence of any triangle in a bit means a completely random outcome, entanglement is shown by the ability to use the composition rule to annihilate all triangles from at least one joint measurement outcome bit. Taking the above example of the Bell state, we see that all individual measurements on the two qubits will give random bits. However, we can recover a definite bit in the following way by performing the reverse operation: \begin{center} \includegraphics[height=2.1cm]{recover} \end{center} The true power of the dependencies comes in, however, when they are used to carry information through a computation, and only annihilated at key points.
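Before turning to teleportation, the claim that individual Bell-state measurements are random while a suitable joint combination is definite can also be checked in the standard Schr\"{o}dinger picture. The following NumPy sketch (ours, for illustration; it assumes the Bell pair $(\ket{00}+\ket{11})/\sqrt{2}$ prepared by a Hadamard followed by a CNOT) computes the relevant expectation values.
\begin{verbatim}
import numpy as np

# Bell pair from |00> via Hadamard on qubit 1 then CNOT(1 -> 2).
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I = np.eye(2)
Z = np.diag([1.0, -1.0]).astype(complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

psi0 = np.zeros(4, dtype=complex); psi0[0] = 1.0     # |00>
bell = CNOT @ np.kron(H, I) @ psi0                   # (|00> + |11>)/sqrt(2)

def expval(op, psi):
    return np.real(psi.conj() @ op @ psi)

# Individual Z expectations vanish: each single measurement is uniformly random.
print(expval(np.kron(Z, I), bell), expval(np.kron(I, Z), bell))
# The joint parity Z (x) Z is definite (+1): the outcomes are perfectly correlated.
print(expval(np.kron(Z, Z), bell))
\end{verbatim}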
We can see this by turning to arguably the most important measurement-based protocol, teleportation. In a teleportation protocol, Alice and Bob begin by sharing a pair of qubits in the state $\ket{01} + \ket{10}$: \begin{center} \includegraphics[height=2.1cm]{tel1} \end{center} \noindent (proof that this is indeed the correct representation of the state is left as an exercise to the interested reader). Alice and Bob are spatially separate, so we can (in this representation) look only at the qubits and bits local to each. Alice now has a qubit in an unknown state that she wishes to teleport. She entangles this with her half of the shared qubit pair, and then performs two measurements: \begin{center} \includegraphics[height=2.1cm]{tel2} \end{center} \noindent These two bits are then communicated over the intervening distance to Bob, using a classical communication channel. Bob performs two operations between the bits and his half of the original qubit pair, and recovers Alice's teleportation qubit: \begin{center} \includegraphics[height=3.6cm]{tel3} \end{center} One immediate consequence of describing teleportation in this representation is that it ceases to become a non-local operation. The state that Alice is sending to Bob does not disappear at one end and reappear at the other; rather, the data contained in the qubit is split into two bits which are then sent from Alice to Bob. An interesting question then becomes what role the pre-shared entanglement is playing in this picture. In the usual model, the role of the entangled qubits is to create a quantum communication channel. Alice's operation that entangles her half of the pair with the teleportation qubit is considered to change the joint state of the shared qubit pair. Bob can then recover the teleported state if he performs certain operations on his half. The classical communication is only needed to tell Bob which operation to do; it does not, in itself, convey any of the quantum information. By contrast, in this picture Alice's operations do nothing to the state of Bob's qubit: it is only when Bob performs the operations between his qubit and the communicated bits that its state changes. Neither quantum nor classical information or dependencies can travel between two points without something physically carrying them. Rather than acting as a quantum channel, the role of the shared entanglement instead in this picture is analogous to that of an encryption key. The data to be teleported is composed with the ``key'' (the red and green triangles), transmitted, and then decoded at the other end. The sharing of an entangled pair of qubits beforehand then becomes a form of \emph{key distribution}: Bob has an exact copy of the key that Alice is using, generated through the entangling operations that produce the original qubit pair. This is shared ahead of time, and enables him to extract the data contained in the bit pair that is communicated between them. This representation of teleportation also significantly changes the emphasis placed on the final operations performed on his qubit by Bob. On a standard understanding, by this point in the process Alice's operations have changed the state of Bob's qubit into a different definite state. This is the original teleportation state, with an additional `Pauli correction' \cite{owqc} (one of four known states). 
All the classical communication does is tell Bob which correction operation he needs to do in order to recover the teleportation state -- \emph{i.e.} it tells him which of four states he actually has. The correction operation becomes almost an afterthought: Bob has the state, but does not know it yet. The correction operation can therefore be dealt with almost completely informally; the information transmission protocol is complete by that point. In the graphical representation, however, Bob's ``correction operations'' are the only way in which the teleported information gets to his qubit. These ``decoding operations'' are of equivalent importance to Alice's ``encoding operations'', and are entirely integrated into the formalism. This is, in fact, how all such Pauli corrections are represented in the calculus: a Z-correction translates to a CNOT between bit and qubit, and an X-correction is a CZ operation. The feed-forward of classical information therefore becomes an integral part of the \emph{processing} of information in a hybrid computation, rather than what is necessary to read out the results of an already-completed computation. \section{Classical information} We have seen that classical feed-forward looks very different in the graphical representation compared with the usual view. Even at a formal level we have defined CNOT and CZ gates which can take both bits and qubits as inputs. This is simply not possible on the standard view, but emerges completely naturally within the graphical calculus. Because this is so different from the usual view of classical information, we are left with the problem of defining what exactly it is that is being transmitted through the classical channel in this representation. The usual view of the classical data is a single measurement outcome, either a 0 or a 1; a detector either clicks or it does not. How then does this relate to the relatively complex objects that are bits in processes such as the teleportation example above? The first thing to note when trying to understand the role of the classical channel here is that, in a measurement-based computation, the measurement outcomes are never single outcomes in isolation. Future computational actions are changed dependent on these outcomes, so within the logical structure of the computation they carry with them contextual information. A classical measurement outcome does not just mean ``detector X clicks'', for example: it means, in context, ``detector X clicks, this was caused by state Y, and so operation Z should be performed''. This is precisely the interface between the quantum and classical information processing units that is treated informally in standard presentations. As the aim is to fully formalise the entire process, this ``informal'' data will need to be found within the representation itself, rather than added by hand. There are, then, three elements to the representation: quantum systems and processes, classical systems and processes, and the formal treatment of their interaction within a given protocol. What we find with the graphical representation is that the contextual algorithm information is formalised by being incorporated into the description of the classical channel. This makes intuitive sense as this information concerns the function of a particular bit within the process. In logical terms, then, what is represented by the triangles etc. 
in a classical bit is not only the 0 or 1 of a straightforward measurement on that bit, but also the information about how that bit will interact with the quantum systems to change the future state of the computation. This in turn will depend on previous states of the computation (for example, in teleportation the states of the bits will depend on the state of the qubit that Alice is trying to teleport). So the description of a bit includes not only the results of measurements on that bit itself, but how those measurements depend on the states of the qubits with which it had previously interacted. In general, these dependencies will not have a significant effect on the outcome of measurements on a bit. However, for certain situations such as teleportation, it is possible to arrange the systems so that the bits interact with qubits which contain identical dependencies. In this case, different data can be extracted from the bit than is possible from individual bit measurements. This is done by making sure there are copies of the dependencies used in the systems with which it will interact -- in other words, by pre-sharing entangled qubits. We can see, then, that the information in the classical channel in this representation fully defines how the bits interact with the qubits. In logical terms, the calculus can be viewed as giving the dependency structure of an algorithm, showing which previous states the future of the computation is dependent on. The representation of classical information therefore also needs to include these dependencies, and to formalise the bit-qubit interaction. In physical terms, the dependencies carried by the bits have been generated by entanglement. Shared entanglement gives shared dependencies, and we can also see from the available gates that it is only the entangling operations (CNOT and CZ) that allow us to copy dependencies from one qubit to another. The unusual feature of the classical bits here is that they too can carry the dependencies created by entanglement, and feed this back into the computation by interacting with qubits containing other dependencies. By carrying these dependencies, the classical channel in this picture has now become part of the computation on a par with the quantum system, rather than merely giving data about the state of the qubits which perform the main computational work. It is the dependencies which enable the computation to happen: the computational resources in this picture are \emph{dependencies} rather than bits or qubits. Bits and qubits become different ways of utilising and transmitting these resources, rather than separate resource entities in their own right. Quantum and classical elements to the computation are unified both by evolving under the same set of actions and interactions (the gates), and also by carrying the same set of fundamental resources. Similarities between the processing abilities of quantum and classical elements now come down to how they carry the dependencies of the computation. At the most fundamental level in the calculus, qubits have two separate components whereas bits have one. When these are acted on by identical gates, there are therefore differences in how bits and qubits appear to act, even though the dependencies are interacting in exactly the same way in all cases. One consequence of this fundamental similarity is that measurement-based computation is possible at all: classical elements can be used in the computation because they carry the same resources as quantum elements. 
If at some point during a quantum computation these only need to be transmitted individually rather than in pairs, then classical bits rather than qubits can be used. Moving between quantum and classical parts of a computation now becomes a question of how we wish to implement the underlying dependency structure of an algorithm. If we choose to use only qubits then we have a coherent computation with classical read-out at the end, but if we choose to send some dependencies through the system individually then we have a measurement-based implementation. Fundamentally, these are the same type of information processing, and the graphical calculus gives us a visual and intuitive method of translating between the two types of computation. This basic similarity between quantum and classical information can also show the apparently fundamental differences between how information is stored and processed in bits and qubits. Arguably the most important difference between them is the persistence of information: classical bits can be cloned and deleted, whereas qubits are subject to the tight constraints of the no-cloning and no-deleting theorems. From the structure of the graphical calculus we can note the equivalent rules for the dependency information, which turn out to differ from both: \emph{dependencies can be cloned and annihilated but not deleted}. The cloning of dependencies is straightforward; this is what an entangling operation does between two (qu)bits. Entanglement simply is the sharing of identical copies of dependency information in this picture. The interesting question then becomes why we can't clone two dependencies from the left and right hand sides of a qubit at the same time, but we can just for one -- in other words, why a qubit cannot be cloned but a bit can. The key to this lies in the entangling operations in this representation, in particular the CNOT operation. The CNOT operation here is symmetric between the control and target, up to a left/right symmetry on the inputs. The dependencies from the right-hand side of the control are copied into the target, and the target reacts back by copying its left-hand dependencies back to the control. Only one side of a qubit can be copied to another during an entangling operation -- in order for both sides to be copied, two operations would be needed. The CNOT operation can be thought of as a ``perfect measurement'', and so this can be described as a measurement not being possible without a back-reaction from the measuring system onto that being measured. However, after one interaction the state of the system is no longer the same as before the measurement, so trying to extract the second set of dependencies will give the original set but composed with the dependencies picked up during the measurement. So if the information containing system has only one set of dependencies then it can be cloned, but if it contains two then the necessary back-reaction from any entangling operation means that it cannot be duplicated. Bits can be cloned, qubits cannot, but both are a consequence of how dependencies are copied between systems. \section{A local information flow} We now turn to the second motivation for developing the graphical calculus, that of giving a logically and physically local notion of information flow through a hybrid computation, with the flow respecting standard notions of causality (such as action only into the future). 
There are a number of conditions that such a flow would have to satisfy in order to be considered local, and we will look at these in turn. Firstly, there is the question of what exactly is representing the `information' that is `flowing' in this picture. What is it that gets tracked through a computation? As we would expect, what are tracked are the resources for the computation -- in this representation, information flow is the flow of dependency information through both bits and qubits. The simplest locality condition on this flow is that the representation of an individual system must give all the information available for the evolution of that system. This is indeed what the individual calculus elements do, by construction. The next condition is much stronger, and is not satisfied by the usual state descriptions in quantum mechanics. This is the condition that there is no global state information that is not present in the local state. In standard quantum mechanics, the product of individual state descriptions of some set of systems (the reduced density matrices) is in general different from the description of the global state (the full density matrix). By contrast, in the graphical representation given here, there is no information contained in a joint state that is not already present in the individual states. There is no separate representation of joint system states in the calculus -- the nearest to such an idea would be how two or more systems evolve under the action of a joint gate. Individual systems retain their individual representations even under entangling operations, and giving the set of individual states in a system gives the complete evolution of that system as a whole. As well as this ``mathematical locality'', we also want our information flow to satisfy ``physical locality'' conditions. The most important of these is that local operations change only local states. This again is something that is usually violated in standard representations of quantum mechanics, but arises naturally in the calculus. This is one of the differences that we saw during teleportation. In the standard picture, Alice's local operations on her qubits change the state of Bob's qubit non-locally. By contrast, in the graphical representation, Alice's operations only act locally, as do Bob's. Information flows between them down the classical channel, rather than jumping from one to the other. The final condition is that not only do local operations only change local states, but that the \emph{only} way to change the state of a system is by locally operating on it. This is true in the graphical calculus by construction: the only morphism on an object is a gate, and the only gates that have been defined are those that act locally. We therefore have a graphical representation of a flow of information through a computation that is represented using objects local to the systems being described, and which change only when acted on locally. This flow is not only local but also \emph{causal}. Information contained within a system is only changed by actions on that system, and these actions only occur in one temporal direction. Information flows forward in time through a computation and never backwards, either logically or physically. \section{Conclusion} We have developed here a graphical calculus for information flow in hybrid information processing systems.
This is a framework for information processing that integrates both quantum and classical elements into the same formalism, and explicitly defines the interface between bits and qubits using the same interactions as between qubits alone. This allows a fully formalised description of measurement-based computations, including a full description of the role played by the classical information channel. This leads to new descriptions of hybrid computational situations such as teleportation, with information previously considered to be entirely quantum being transmitted through the classical channel. Quantum and classical resources in a computation are unified in this representation as instances of the underlying dependency resources. These dependencies are the basis for computation in this model, and their transmission and processing during a protocol demonstrate the flow of information. This flow model is local and causal, and defines both the logical structure of an algorithm and the physical flow of information within it. One interesting question is the connection between the information flow given by this representation and the flow notions defined in \cite{cat,elhamflow}, and whether this physical flow underpins these logical flow ideas. Finally, we note that we now have the basis for a formal model of computation for both quantum and classical elements that can enable us to make direct comparisons between the two. Such comparisons could include resource use, for example how quantum and classical resources can be interchanged for different implementations of a particular algorithm. Such a formal computability model would take as its basic elements the dependencies contained in both quantum and classical systems, and would allow the calculation of, for example, the number of classical bits and gates needed to implement a known quantum algorithm. This in turn is an important step towards giving a formal definition of the relative processing powers of quantum, classical and hybrid computing. \end{document}
\begin{document} \title{Reaction Automata} \author{{\bf Fumiya Okubo$^1$}, {\bf Satoshi Kobayashi$^2$} and {\bf Takashi Yokomori$^3$\footnote{Corresponding author}}\\[2mm] $^1$Graduate School of Education\\ Waseda University, 1-6-1 Nishiwaseda, Shinjuku-ku\\ Tokyo 169-8050, Japan\\ {\tt [email protected]}\\[2mm] $^2$Graduate School of Informatics and Engineering,\\ University of Electro-Communications, \\ 1-5-1 Chofugaoka, Chofu-shi, Tokyo 182-8585, Japan\\ {\tt [email protected]}\\[2mm] $^3$Department of Mathematics\\ Faculty of Education and Integrated Arts and Sciences\\ Waseda University, 1-6-1 Nishiwaseda, Shinjuku-ku\\ Tokyo 169-8050, Japan\\ {\tt [email protected]}} \date{} \maketitle \begin{abstract} Reaction systems are a formal model that has been introduced to investigate the interactive behaviors of biochemical reactions. Based on the formal framework of reaction systems, we propose new computing models called {\it reaction automata} that feature (string) language acceptors with multiset manipulation as a computing mechanism, and show that reaction automata are computationally Turing universal. Further, some subclasses of reaction automata with space complexity are investigated and their language classes are compared to the ones in the Chomsky hierarchy. \end{abstract} \section{Introduction} In recent years, a series of seminal papers \cite{ER:07a,ER:07b,ER:09} has been published in which Ehrenfeucht and Rozenberg have introduced a formal model, called {\it reaction systems}, for investigating interactions between biochemical reactions, where two basic components (reactants and inhibitors) are employed as regulation mechanisms for controlling biochemical functionalities. It has been shown that reaction systems provide a formal framework best suited for investigating, at an abstract level, the emergence and evolution of biochemical functioning such as events and modules. In the same framework, they also introduced the notion of time into reaction systems and investigated notions such as reaction times, creation times of compounds and so forth. Two more recent papers \cite{EMR:10,EMR:11} continue the investigation of reaction systems, focusing on combinatorial properties of functions defined by random reaction systems and on the dependency relation between the power of defining functions and the amount of available resources. In the theory of reaction systems, a (biochemical) reaction is formulated as a triple $a=(R_a, I_a, P_a)$, where $R_a$ is the set of molecules called {\it reactants}, $I_a$ is the set of molecules called {\it inhibitors}, and $P_a$ is the set of molecules called {\it products}. Let $T$ be a set of molecules. The result of applying a reaction $a$ to $T$, denoted by $res_a(T)$, is given by $P_a$ if $a$ is enabled by $T$ (i.e., if $T$ completely includes $R_a$ and excludes $I_a$); otherwise, the result is empty. Thus, $res_a(T)=P_a$ if $a$ is enabled by $T$, and $res_a(T)=\emptyset$ otherwise. The result of applying a reaction $a$ is extended to a set of reactions $A$, denoted by $res_A(T)$, and an interactive process consisting of a sequence of $res_A(T)$'s is properly introduced and investigated. In the last few decades, the notion of a multiset has frequently appeared and been investigated in many different areas such as mathematics, computer science, linguistics, and so forth. (See, e.g., \cite{CPRS:01} for the reference papers written from the viewpoint of mathematics and computer science.)
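Before moving on to multisets, the set-based definition of $res_a$ above can be made concrete in a few lines of Python (our own sketch; the function names are illustrative, and the extension $res_A$ is taken as the union of the individual results, following the reaction-systems literature):
\begin{verbatim}
# Minimal sketch of set-based reaction systems: a reaction is a triple
# (R_a, I_a, P_a) of reactant, inhibitor and product sets.

def enabled(reaction, T):
    """a = (R_a, I_a, P_a) is enabled by T iff R_a is included in T
    and I_a does not intersect T."""
    R, I, P = reaction
    return R <= T and not (I & T)

def res(reaction, T):
    """res_a(T): the product set if a is enabled by T, empty otherwise."""
    R, I, P = reaction
    return set(P) if enabled(reaction, T) else set()

def res_all(reactions, T):
    """res_A(T), taken here as the union of the individual results."""
    out = set()
    for a in reactions:
        out |= res(a, T)
    return out

# Example: a reaction consuming {x, y}, inhibited by z, producing {w}.
a = ({"x", "y"}, {"z"}, {"w"})
print(res(a, {"x", "y"}))        # {'w'}
print(res(a, {"x", "y", "z"}))   # set(): the inhibitor blocks the reaction
\end{verbatim}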
The notion of a multiset has received more and more attention, particularly in the areas of biochemical computing and molecular computing (e.g., \cite{Paun:00,Set:01}). Motivated by these two notions of a reaction system and a multiset, in this paper we will introduce computing devices called {\it reaction automata} and show that they are computationally universal by proving that any recursively enumerable language is accepted by a reaction automaton. Two points should be remarked. On the one hand, the notion of reaction automata may be taken as a kind of extension of reaction systems in the sense that our reaction automata deal with {\it multisets} rather than (usual) sets, as reaction systems do, throughout the computational process. On the other hand, however, reaction automata are introduced as computing devices that accept sets of {\it string objects} (i.e., languages over an alphabet). This unique feature, i.e., a string accepting device based on multiset computing in the biochemical reaction model, can be realized by introducing the simple idea of feeding an input to the device from the environment and by employing a special encoding technique. \begin{figure} \caption{A graphic illustration of interactive biochemical reaction processes for accepting strings in the language $L=\{a^nb^n \mid n\geq 0\}$.} \label{graphic} \end{figure} In order to illustrate an intuitive idea of the notion of reaction automata and their behavior, we give in Figure \ref{graphic} a simple example of the behavior of a reaction automaton $\mathcal{A}$ that consists of the set of objects $\{p_0, p_1, a, b, a', f \}$ (with the input alphabet $\{a, b\}$), the set of reactions $\{{\bf a}_0=(p_0, \{a,b,a'\}, f), {\bf a}_1=(p_0a,\{b\}, p_0a'), {\bf a}_2=(p_0a'b,\emptyset, p_1), {\bf a}_3=(p_1a'b,\{a\}, p_1), {\bf a}_4=(p_1,\{a,b,a'\}, f)\}$, where $\{p_0\}$ is the initial multiset and $\{f\}$ is the final multiset. Note that in a reaction ${\bf a}=(R_a, I_a, P_a)$, multisets $R_a$ and $P_a$ are represented by string forms, while $I_a$ is given as a set. In the graphic drawing of Figure \ref{graphic}, each reaction ${\bf a}_i$ is applied to a multiset (of a test tube) after receiving an input symbol (if any is provided) from the environment. In particular, applying ${\bf a}_0$ to $\{p_0\}$ leads to the acceptance of the empty string by $\mathcal{A}$. It is seen, for example, that reactions ${\bf a}_1$ and ${\bf a}_2$ are enabled by the multiset $T=\{p_0,a',a'\}$ only when inputs $a$ and $b$, respectively, are received, which result in producing $R_1=\{p_0,a',a',a'\}$ and $R_2=\{p_1,a'\}$, respectively. Thus, we have that $res_{{\bf a}_1}(T\cup \{a\})=R_1$ and $res_{{\bf a}_2}(T\cup \{b\})=R_2$. Once applying ${\bf a}_2$ has brought about a change of $p_0$ into $p_1$, $\mathcal{A}$ has no possibility of accepting further input symbols $a$, because of the inhibitors in ${\bf a}_3$ or ${\bf a}_4$. One may easily see that $\mathcal{A}$ accepts the language $L=\{a^nb^n\mid n\geq 0\}$. We remark that reaction automata allow a multiset of reactions $\alpha$ to be applied to a multiset of objects $T$ in an exhaustive manner (what we call the {\it maximally parallel manner}), and therefore the interactive process of computation is nondeterministic in that the result of reactions on $T$ may consist of more than one multiset. The details are formally described in the sequel. This paper is organized as follows.
After preparing the basic notions and notations from formal language theory in Section 2, we formally introduce the main notion of reaction automata together with one language example in Section 3. Then, Section 4 describes a multistack machine (in fact, a two-stack machine) whose specific property will be demonstrated to be very useful in the proof of the main result in the next section. Thus, in Section 5 we present our main results: reaction automata are computationally universal. We also consider some subclasses of reaction automata from the viewpoint of complexity theory in Section 6, and investigate the language classes accepted by those subclasses in comparison to the Chomsky hierarchy. Finally, concluding remarks as well as future research topics are briefly discussed in Section 7. \section{Preliminaries} We assume that the reader is familiar with the basic notions of formal language theory. For unexplained details, refer to~\cite{HMU:03}. Let $V$ be a finite alphabet. For a set $U \subseteq V$, the cardinality of $U$ is denoted by $|U|$. The set of all finite-length strings over $V$ is denoted by $V^*$. The empty string is denoted by $\lambda$. For a string $x$ in $V^*$, $|x|$ denotes the length of $x$, while for a symbol $a$ in $V$ we denote by $|x|_a$ the number of occurrences of $a$ in $x$. For $k \ge 0$, let $pref_k (x)$ be the prefix of a string $x$ of length $k$. For a string $w=a_1a_2\cdots a_n \in V^*$, $w^R$ is the reversal of $w$, that is, $(a_1 a_2 \cdots a_n)^R = a_n \cdots a_2 a_1$. Further, for a string $x=a_1 a_2 \cdots a_n \in V^*$, $\hat{x}$ denotes the hat version of $x$, i.e., $\hat{x} = \hat{a_1} \hat{a_2} \cdots \hat{a_n}$, where each $\hat{a}_i$ is in an alphabet $\hat{V} = \{ \hat{a} \, | \, a \in V \}$ such that $V \cap \hat{V} = \emptyset$. We use the basic notations and definitions regarding multisets following~\cite{CMM:01,KMP:01}. A {\it multiset} over an alphabet $V$ is a mapping $\mu: V \rightarrow \mathbf{N}$, where $\mathbf{N}$ is the set of non-negative integers and for each $a \in V$, $\mu(a)$ represents the number of occurrences of $a$ in the multiset $\mu$. The set of all multisets over $V$ is denoted by $V^\#$, including the empty multiset denoted by $\mu_{\lambda}$, where $\mu_{\lambda}(a)=0$ for all $a\in V$. A multiset $\mu$ may be represented as a vector, $\mu(V) = (\mu(a_1), \dots, \mu(a_n))$, for an ordered set $V = \{ a_1, \dots, a_n \}$. We can also represent the multiset $\mu$ by any permutation of the string $w_\mu = a^{\mu(a_1)}_1 \cdots a^{\mu(a_n)}_n$. Conversely, with any string $x \in V^*$ one can associate the multiset $\mu_x : V \rightarrow \mathbf{N}$ defined by $\mu_x(a) = |x|_a$ for each $a \in V$. In this sense, we often identify a multiset $\mu$ with its string representation $w_{\mu}$ or any permutation of $w_{\mu}$. Note that the string representation of $\mu_{\lambda}$ is $\lambda$, i.e., $w_{\mu_{\lambda}}=\lambda$. A usual set $U \subseteq V$ is regarded as a multiset $\mu_U$ such that $\mu_U(a) = 1$ if $a$ is in $U$ and $\mu_U(a)=0$ otherwise. In particular, for each symbol $a \in V$, a multiset $\mu_{\{a\}}$ is often denoted by $a$ itself.
For two multisets $\mu_1$, $\mu_2$ over $V$, we define one relation and three operations as follows: \[ \begin{array}{ll} {\it Inclusion}: &\mu_1 \subseteq \mu_2 \text{ \ iff \ } \mu_1(a) \le \mu_2(a), \text{ for each } a \in V, \\ {\it Sum}:&(\mu_1 + \mu_2) (a) = \mu_1(a) + \mu_2(a), \text{ for each } a \in V,\\ {\it Intersection}:&(\mu_1 \cap \mu_2) (a) = {\rm min}\{\mu_1(a), \mu_2(a)\}, \text{ for each } a \in V,\\ {\it Difference}:&(\mu_1 - \mu_2) (a) = \mu_1(a) - \mu_2(a), \text{ for each } a \in V \text{ (for the case } \mu_2 \subseteq \mu_1 \text{ only)}. \end{array} \] A multiset $\mu_1$ is called {\it multisubset} of $\mu_2$ if $\mu_1 \subseteq \mu_2$. The sum for a family of multisets $\mathcal{M} = \{\mu_i \}_{i \in I}$ is also denoted by $\sum_{i \in I}\mu_i$. For a multiset $\mu$ and $n \in \mathbf{N}$, $\mu^n$ is defined by $\mu^n (a) = n \cdot \mu(a)$ for each $a \in V$. The {\it weight} of a multiset $\mu$ is $|\mu| = \sum_{a \in V} \mu(a)$. We introduce an injective function $stm : V^* \to V^\#$ that maps a string to a multiset in the following manner: \begin{eqnarray*} \left\{ \begin{array}{ll} stm(a_1 a_2 \cdots a_n) = a_1 a_2^2 \cdots a_n^{2^{n-1}} & \mbox{(for $n\geq 1$)} \\ stm(\lambda) = {\lambda}. & \\ \end{array} \right. \end{eqnarray*} \section{Reaction Automata} As is previously mentioned, a novel formal model called reaction systems has been introduced in order to investigate the property of interactions between biochemical reactions, where two basic components (reactants and inhibitors) are employed as regulation mechanisms for controlling biochemical functionalities (\cite{ER:07a,ER:07b,ER:09}). Reaction systems provide a formal framework best suited for investigating the way of emergence and evolution of biochemical functioning on an abstract level. By recalling from \cite{ER:07a} basic notions related to reactions systems, we first extend them (defined on the sets) to the notions on the multisets. Then, we shall introduce our notion of {\it reaction automata} which plays a central role in this paper. \begin{de} {\rm For a set $S$, a {\it reaction} in $S$ is a 3-tuple ${\bf a} = (R_{\bf a}, I_{\bf a}, P_{\bf a})$ of finite multisets, such that $R_{\bf a}, P_{\bf a} \in S^\#$, $I_{\bf a} \subseteq S$ and $R_{\bf a} \cap I_{\bf a} = \emptyset$. } \end{de} The multisets $R_{\bf a}$ and $P_{\bf a}$ are called the {\it reactant} of ${\bf a}$ and the {\it product} of ${\bf a}$, respectively, while the set $I_{\bf a}$ is called the {\it inhibitor} of ${\bf a}$. These notations are extended to a multiset of reactions as follows: For a set of reactions $A$ and a multiset $\alpha$ over $A$, \[ R_{\alpha} = \sum_{{\bf a}\in A} R_{\bf a}^{\alpha({\bf a})}, \, \, I_{\alpha} = \bigcup_{ {\bf a} \subseteq \alpha} I_{\bf a}, \, P_{\alpha} = \sum_{{\bf a}\in A} P_{\bf a}^{\alpha({\bf a})}. \] In what follows, we usually identify the set of reactions $A$ with the set of labels $Lab(A)$ of reactions in $A$, and often use the symbol $A$ as a finite alphabet. \begin{de} {\rm Let $A$ be a set of reactions in $S$ and $\alpha \in A^\#$ be a multiset of reactions over $A$. Then, for a finite multiset $T \in S^\#$, we say that \\ (1) $\alpha$ is {\it enabled by} $T$ if $R_\alpha \subseteq T$ and $I_\alpha \cap T = \emptyset$, \\ (2) $\alpha$ is {\it enabled by $T$ in maximally parallel manner} if there is no $\beta \in A^\#$ such that $\alpha \subset \beta$, and $\alpha$ and $\beta$ are enabled by $T$. 
\\ (3) By $En^p_A(T)$ we denote the set of all multisets of reactions $\alpha \in A^\#$ which are enabled by $T$ in maximally parallel manner.\\ (4) The {\it results of $A$ on $T$}, denoted by $Res_{A}(T)$, is defined as follows: \[ Res_{A}(T) = \{ T - R_{\alpha} + P_{\alpha} \, | \, \alpha \in En^p_A(T) \}. \] Note that we have $Res_{A}(T)= \{ T \}$ if $En^p_A(T)=\emptyset$. Thus, if no multiset of reactions $\alpha \in A^{\#}$ is enabled by $T$ in maximally parallel manner, then $T$ remains unchanged. } \end{de} \begin{rem} {\rm ($i$)\ It should be also noted that the definition of the results of $A$ on $T$ (given in (4) above) is in contrast to the original one in \cite{ER:07a}, because we adopt the assumption of {\it permanency of elements}: any element that is not a reactant for any active reaction {\it does} remain in the result after the reaction.\\ ($ii$)\ In general, $En^p_A(T)$ may contain more than one element, and therefore, so may $Res_A(T)$.\\ ($iii$)\ For simplicity, $I_a$ is often represented as a string rather than a set. } \end{rem} \begin{exam}{\rm Let $S=\{a, b, c, d, e\}$ and consider the following set $A=\{{\bf a}, {\bf b}, {\bf c}\}$ of reactions in $S$: \[ {\bf a} = ( b^2, a, c ),\, {\bf b}= ( c^2, \emptyset, b ), \, {\bf c} = ( bc, d, e). \] $(i)$\ Consider a finite multiset $T=b^4cd$. Then, $\alpha_1={\bf a}$ is enabled by $T$, while neither {\bf b} nor {\bf c} is enabled by $T$, because $R_{{\bf b}} \not\subseteq T$ and $I_{{\bf c}} \cap T\not= \emptyset$. Further, $\alpha_2={\bf a}^2$ is not only enabled by $T$ but also enabled by $T$ in maximally parallel manner, because no $\beta$ with $\alpha_2\subset \beta$ is enabled by $T$. Since $R_{{\bf a}^2}=b^4$, $P_{{\bf a}^2}=c^2$, and $En^p_A(T)=\{{\bf a}^2\}$, we have \[ Res_A(T)=\{ T-R_{{\bf a}^2}+P_{{\bf a}^2} \} =\{c^3d\}. \] $(ii)$\ Consider $T'=b^3c^2e$. Then, $\beta_1={\bf ab}$ and $\beta_2={\bf ac}$ are enabled by $T'$, while {\bf bc} is not. Further, it is seen that both $\beta_1$ and $\beta_2$ are enabled by $T'$ in maximally parallel manner, and $En^p_A(T')=\{{\bf ab}, {\bf ac}\}$. Thus, we have \[ Res_A(T')= \{b^2ce, c^2e^2\}. \] If we take $T''=bcd$, then none of the reactions from $A$ is enabled by $T''$. Therefore, we have $Res_A(T'')=T''$. } \end{exam} We are now in a position to introduce the notion of reaction automata. \begin{de}{\rm {(Reaction Automata)}\ A {\it reaction automaton} (RA) $\mathcal{A}$ is a 5-tuple $\mathcal{A} = (S, \Sigma, A, D_0, f)$, where \begin{itemize} \item $S$ is a finite set, called the {\it background set of} $\mathcal{A}$, \item $\Sigma (\subseteq S)$ is called the {\it input alphabet of} $\mathcal{A}$, \item $A$ is a finite set of reactions in $S$, \item $D_0 \in S^\#$ is an {\it initial multiset}, \item $f \in S$ is a special symbol which indicates the final state. \end{itemize} } \end{de} \begin{de}{\rm Let $\mathcal{A} = (S, \Sigma, A, D_0, f)$ be an RA and $w = a_1 \cdots a_n \in \Sigma^*$. An {\it interactive process in $\mathcal{A}$ with input $w$} is an infinite sequence $\pi = D_0, \dots, D_i, \dots$, where \begin{eqnarray*} \left\{ \begin{array}{ll} D_{i+1} \in Res_A(a_{i+1}+D_i) & \mbox{(for $0\leq i \leq n-1$), and} \\ D_{i+1} \in Res_A(D_i) & \mbox{(for all $i\geq n$)}. \end{array} \right. 
\end{eqnarray*}
By $IP(\mathcal{A}, w)$ we denote the set of all interactive processes in $\mathcal{A}$ with input $w$.} \end{de}
In order to represent an interactive process $\pi$, we also use the ``arrow notation'' for $\pi : (a_1, D_0) \rightarrow \cdots \rightarrow (a_{n}, D_{n-1}) \rightarrow (D_n) \rightarrow (D_{n+1}) \rightarrow \cdots$, or alternatively, $D_0 \rightarrow^{a_1} D_1 \rightarrow^{a_2} D_2 \rightarrow^{a_3} \cdots \rightarrow^{a_{n-1}} D_{n-1} \rightarrow^{a_{n}} D_{n} \rightarrow D_{n+1} \rightarrow \cdots$. For an interactive process $\pi$ in $\mathcal{A}$ with input $w$, if $En^p_A(D_m) = \emptyset$ for some $m \ge |w|$, then we have that $Res_A(D_m)= \{ D_m \}$ and $D_m =D_{m+1}= \cdots$. In this case, taking the smallest such $m$, we say that $\pi$ {\it converges on} $D_m$ (at the $m$-th step). When an interactive process $\pi$ converges on $D_m$, each $D_i$ of $\pi$ is omitted for $i \ge m+1$.
\begin{de}{\rm Let $\mathcal{A} = (S, \Sigma, A, D_0, f)$ be an RA. The {\it language accepted by} $\mathcal{A}$, denoted by $L(\mathcal{A})$, is defined as follows:
\begin{align*} L(\mathcal{A}) = \{ w \in \Sigma^* \, | \, & \, \text{there exists }\pi \in IP(\mathcal{A}, w) \text{ that converges on} \\ & \text{$D_m$ at the $m$-th step for some $m\geq |w|$, } \text{and } f \subseteq D_m \}. \end{align*}}
\end{de}
\begin{figure}
\caption{Reaction diagram: interactive processes for accepting $a^2$, $a^4$ and $a^8$ in $\mathcal{A}$.}
\label{diag}
\end{figure}
\begin{exam}{\rm Let us consider a reaction automaton $\mathcal{A} = (S, \Sigma, A, D_0, f)$ defined as follows:
\begin{align*} &S = \{ a, b, c, d, e, f \} \mbox{ with } \Sigma=\{a\}, \\ &A = \{ {\bf a}_1, {\bf a}_2, {\bf a}_3, {\bf a}_4, {\bf a}_5, {\bf a}_6 \}, \ \mbox{where} \\ &\text{\ \ \ \ \ \ } {\bf a}_1 = ( a^2, \emptyset, b ),\ \,{\bf a}_2 = ( b^2, ac, c ),\, {\bf a}_3 = ( c^2, b, b ), \\ &\qquad {\bf a}_4 = ( bd, ac, e ), \, {\bf a}_5 = ( cd, b, e ),\ \ {\bf a}_6 = ( e, abc, f ), \\ &D_0 = d. \end{align*}
Let $w = aaaaaaaa \in \Sigma^*$ be the input string and consider an interactive process $\pi$ such that
\begin{align*} \pi : d \rightarrow^a ad \rightarrow^a bd \rightarrow^a abd \rightarrow^a b^2d \rightarrow^a ab^2d \rightarrow^a b^3d \rightarrow ^a ab^3d \rightarrow^a b^4d \rightarrow c^2d \rightarrow bd \rightarrow e \rightarrow f. \end{align*}
It is easily seen that $\pi \in IP(\mathcal{A}, w)$ and $w \in L(\mathcal{A})$. Figure \ref{diag} illustrates all possible interactive processes in $\mathcal{A}$ with inputs $a^2, a^4$ and $a^8$. For instance, since ${\bf a}^2_2 \in En^p_A(b^4 d)$, it holds that $c^2d \in Res_{A}(b^4 d)$. Hence, the step $b^4 d \rightarrow c^2d$ is valid. We can also see that $L(\mathcal{A}) = \{ a^{2^n} \, | \, n \ge 1 \}$, which is a context-sensitive language.
}
\end{exam}
\section{Multistack Machines}
A multistack machine is a deterministic pushdown automaton with several stacks (\cite{HMU:03}). It is known that a two-stack machine is equivalent to a Turing machine as a language accepting device.
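Before turning to multistack machines, we note that the maximally parallel semantics introduced above is straightforward to prototype. The following Python sketch (all identifiers are ours, chosen purely for illustration) recomputes by brute force the sets $En^p_A(T)$ and $Res_A(T)$ for the reactions ${\bf a}$, ${\bf b}$, ${\bf c}$ and the multisets $T=b^4cd$ and $T'=b^3c^2e$ considered in the first example of the previous section:
\begin{verbatim}
from itertools import product
from collections import Counter

# Reactions of the first example: label -> (reactant, inhibitor, product).
REACTIONS = {
    "a": (Counter("bb"), set("a"), Counter("c")),
    "b": (Counter("cc"), set(),    Counter("b")),
    "c": (Counter("bc"), set("d"), Counter("e")),
}

def leq(m1, m2):                      # multiset inclusion m1 <= m2
    return all(m1[s] <= m2[s] for s in m1)

def enabled(alpha, T):                # alpha: multiset of reaction labels
    if sum(alpha.values()) == 0:      # the empty multiset is not considered
        return False
    R, I = Counter(), set()
    for lab, n in alpha.items():
        r, i, _ = REACTIONS[lab]
        for s, k in r.items():
            R[s] += n * k             # R_alpha: sum of the reactants
        I |= i                        # I_alpha: union of the inhibitors
    return leq(R, T) and all(T[s] == 0 for s in I)

def en_p(T):                          # maximally parallel enabled multisets
    labels = list(REACTIONS)
    bound = sum(T.values())           # enough here: all reactants are non-empty
    cands = [Counter(dict(zip(labels, v)))
             for v in product(range(bound + 1), repeat=len(labels))]
    en = [a for a in cands if enabled(a, T)]
    return [a for a in en if not any(a != b and leq(a, b) for b in en)]

def res(T):                           # Res_A(T); returns [T] if nothing fires
    maximal = en_p(T)
    if not maximal:
        return [Counter(T)]
    results = []
    for alpha in maximal:
        D = Counter(T)
        for lab, n in alpha.items():
            r, _, p = REACTIONS[lab]
            for s, k in r.items():
                D[s] -= n * k
            for s, k in p.items():
                D[s] += n * k
        results.append(+D)            # unary + drops zero counts
    return results

print(res(Counter("bbbbcd")))         # single result: c^3 d
print(res(Counter("bbbcce")))         # two results: b^2 c e  and  c^2 e^2
\end{verbatim}
For $T=b^4cd$ the sketch returns the single result $c^3d$, and for $T'=b^3c^2e$ it returns the two results $b^2ce$ and $c^2e^2$, in accordance with that example.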
A $k$-stack machine $M = (Q, \Sigma, \Gamma, \delta, p_0, Z_0, F)$ is defined as follows: $Q$ is a set of states, $\Sigma$ is an input alphabet, $\Gamma$ is a stack alphabet, $Z_0=(Z_{01},Z_{02},\ldots,Z_{0k})$ is the $k$-tuple of the initial stack symbols, $p_0 \in Q$ is the initial state, $F$ is a set of final states, and $\delta$ is a transition function defined in the form: $\delta(p, a, X_1, X_2, \dots, X_k) = (q, \gamma_1, \gamma_2, \ldots, \gamma_k)$, where $p,q \in Q$, $a \in \Sigma \cup \{ \lambda \}$, $X_i \in \Gamma$, $\gamma_i \in \Gamma^*$ for each $1 \le i \le k$. This rule means that in state $p$, with $X_i$ on the top of the $i$-th stack, if the machine reads $a$ from the input, then it enters state $q$ and replaces the top of the $i$-th stack with $\gamma_i$, for $1 \le i \le k$. We assume that each rule has a unique label, and the set of all labels of rules in $\delta$ is denoted by $Lab(\delta)$. Note that a $k$-stack machine can make $\lambda$-moves, but it can never choose between a $\lambda$-move and a non-$\lambda$-move, due to the determinism of the machine. The $k$-stack machine accepts a string by entering a final state. In this paper, we consider a modification of a multistack (in fact, two-stack) machine. Recall that in the simulation of a given Turing machine $TM$ with an input $w=a_1a_2\cdots a_{\ell}$ in terms of a multistack machine $M$, one can assume the following (see \cite{HMU:03}):
\begin{itemize}
\item[($i$)] At first, the two-stack machine $M$ is devoted to making a copy of $w$ on stack-2. This is illustrated in (a) and (b)-1 of Figure \ref{sim} for the case $k=2$. {\it This phase requires only non-$\lambda$-moves}.
\item[($ii$)] Once the whole input $w$ has been read in by $M$, {\it no further access to the input tape of $M$ is necessary}. Having $w^R$ on stack-2, $M$ moves $w^R$ over from stack-2 to produce $w$ on stack-1, as shown in (b)-2. These moves require only $\lambda$-moves, and after this, each computation step of $M$ on $w$ is performed by a $\lambda$-move, without any access to $w$ on the input tape.
\item[($iii$)] Each stack has its own stack alphabet, disjoint from the others, and the set of final states is a singleton. Once $M$ enters the final state, it immediately halts. Further, no stack is ever emptied during a computation.
\end{itemize}
Hence, without changing the computational power, we may restrict ourselves to multistack machines whose computations satisfy the conditions ($i$), ($ii$), ($iii$). We call such a modified multistack machine a {\it restricted multistack machine}. In summary, a restricted $k$-stack machine $M_r$ is given by $2k+5$ components as follows:
\[ M_r = (Q, \Sigma, \Gamma_1, \Gamma_2, \dots, \Gamma_k, \delta, p_0, Z_{01}, Z_{02}, \dots, Z_{0k}, f), \]
where for each $1\leq i \leq k$, $Z_{0i} \in \Gamma_i$ is the initial symbol of the $i$-th stack, used only at the bottom, $f \in Q$ is the final state, and every computation proceeds only in the manner ($i$), ($ii$), ($iii$) described above. In particular, in a computation of $M_r$ all $\lambda$-moves come after all non-$\lambda$-moves.
\begin{prop} {\rm (Theorem 8.13 in \cite{HMU:03})} Every recursively enumerable language is accepted by a restricted two-stack machine.
\end{prop}
\begin{figure}
\caption{(a) Turing machine (TM); (b) Two-stack machine $M$ simulating TM, where $\$$ is the end marker for the input.}
\label{sim}
\end{figure}
\section{Main Results}
In this section we shall show the equivalence of the accepting powers of reaction automata and Turing machines. Taking Proposition 1 into consideration, it suffices for the purpose of this paper to prove the following theorem.
\begin{thm}
If a language $L$ is accepted by a restricted two-stack machine, then $L$ is accepted by a reaction automaton.
\label{teiri1}
\end{thm}
\noindent \textbf{[Construction of an RA]}\\[2mm]
Let $M = (Q, \Sigma, \Gamma_1, \Gamma_2, \delta, p_0, {X}_0, {Y}_0, f)$ be a restricted two-stack machine with $\Gamma_1 = \{ X_0, X_1, \dots, X_n \}$, $\Gamma_2 = \{ Y_0, Y_1, \dots, Y_m \}$, $n, m \ge 1$, where $\Gamma = \Gamma_1 \cup \Gamma_2$, $X_0$ and $Y_0$ are the initial stack symbols for stack-1 and stack-2, respectively, and we may assume that $\Gamma_1 \cap \Gamma_2 =\emptyset$. We construct an RA $\mathcal{A}_M = (S, \Sigma, A, D_0, f')$ as follows:
\begin{align*} &S = Q \cup \hat{Q} \cup \Sigma \cup \Gamma \cup \hat{\Gamma} \cup Lab(\delta) \cup \{ f' \}, \\ &A = A_0 \cup A_a \cup \hat{A_a} \cup A_{\lambda} \cup \hat{A}_{\lambda} \cup A_{X} \cup \hat{A}_{X} \cup A_{Y} \cup \hat{A}_{Y} \cup A_f \cup \hat{A}_f, \\ &D_0 = p_0 X_0 Y_0, \end{align*}
where the set of reactions $A$ consists of the following five categories:
\begin{align*} (1) &A_0 =\{ ( p_0 a X_0 Y_0, Lab(\delta), \hat{q} \cdot stm(\hat{x}) \cdot stm(\hat{y}) \cdot r' ) \, | \, r: \delta(p_0, a, X_0, Y_0) = (q, x, y), \, r' \in Lab(\delta) \}, \\ (2) &A_a = \{ ( p a X_i Y_j r, \hat{\Gamma}, \hat{q} \cdot stm(\hat{x}) \cdot stm(\hat{y}) \cdot r' ) \, | \, a \in \Sigma, \, r: \delta(p, a, X_i, Y_j) = (q, x, y), \, r' \in Lab(\delta) \}, \\ &\hat{A}_a = \{ ( \hat{p} a \hat{X_i} \hat{Y_j} r, \Gamma, q \cdot stm(x) \cdot stm(y) \cdot r' ) \, | \, a \in \Sigma, \, r: \delta(p, a, X_i, Y_j) = (q, x, y), \, r' \in Lab(\delta) \}, \\ (3) &A_{\lambda} = \{ ( p X_i Y_j r, \Sigma \cup \hat{\Gamma}, \hat{q} \cdot stm(\hat{x}) \cdot stm(\hat{y}) \cdot r' ) \, | \, r: \delta(p, \lambda, X_i, Y_j) = (q, x, y), \, r' \in Lab(\delta) \}, \\ &\hat{A}_{\lambda} = \{ ( \hat{p} \hat{X_i} \hat{Y_j} r, \Sigma \cup \Gamma, q \cdot stm(x) \cdot stm(y) \cdot r' ) \, | \, r: \delta(p, \lambda, X_i, Y_j) = (q, x, y), \, r' \in Lab(\delta) \}, \\ (4) &A_{X} = \{ ( X^2_k, \hat{Q} \cup \hat{\Gamma} \cup (Lab(\delta) - \{ r \}) \cup \{ f' \}, \hat{X}^{2^{|x|}}_k ) \, | \, 0 \le k \le n, \, r: \delta(p, a, X_i, Y_j) = (q, x, y) \}, \\ &\hat{A}_{X} = \{ ( \hat{X}^2_k, Q \cup \Gamma \cup (Lab(\delta) - \{ r \}) \cup \{ f' \}, X^{2^{|x|}}_k ) \, | \, 0 \le k \le n, \, r: \delta(p, a, X_i, Y_j) = (q, x, y) \}, \\ &A_{Y} = \{ ( Y^2_k, \hat{Q} \cup \hat{\Gamma} \cup (Lab(\delta) - \{ r \}) \cup \{ f' \}, \hat{Y}^{2^{|y|}}_k ) \, | \, 0 \le k \le m, \, r: \delta(p, a, X_i, Y_j) = (q, x, y) \}, \\ &\hat{A}_{Y} = \{ ( \hat{Y}^2_k, Q \cup \Gamma \cup (Lab(\delta) - \{ r \}) \cup \{ f' \}, Y^{2^{|y|}}_k ) \, | \, 0 \le k \le m, \, r: \delta(p, a, X_i, Y_j) = (q, x, y) \}, \\ (5) &A_f = \{ ( f, \hat{\Gamma}, f' ) \}, \\ &\hat{A}_f = \{ ( \hat{f}, \Gamma, f' ) \}. \end{align*}
\begin{proof}
We shall give an informal description of how to simulate $M$ with an input $w=a_1a_2\cdots a_{\ell}$ in terms of $\mathcal{A}_M$ constructed above.
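To fix ideas, here is a small worked instance of the arithmetic that the reactions in categories (2)--(4) implement on the encoding $stm$ (the symbols below are chosen arbitrarily, for illustration only). Suppose that stack-1 holds $\alpha=X_2X_1X_0$ (top on the left), encoded as $stm(\alpha)=X_2X_1^2X_0^4$, and that the move to be simulated replaces the top symbol $X_2$ by the string $x=X_3X_2$, so that the new content is $\alpha'=X_3X_2X_1X_0$ with $stm(\alpha')=X_3X_2^2X_1^4X_0^8$. In $\mathcal{A}_M$, the single copy of the top symbol $X_2$ is consumed by the reaction from (2) or (3) (which also handles the state symbol, the rule label and the top of stack-2 in the same manner), and this reaction contributes $stm(\hat{x})=\hat{X}_3\hat{X}_2^2$ to the product; simultaneously, the reactions for $X_1$ and $X_0$ in $A_X$ from category (4), applied with multiplicities $1$ and $2$ respectively, convert $X_1^2$ into $\hat{X}_1^{2^{|x|}}=\hat{X}_1^4$ and $X_0^4$ into $\hat{X}_0^8$. Altogether,
\[
X_2\,X_1^2\,X_0^4 \;\longmapsto\; \hat{X}_3\,\hat{X}_2^2\,\hat{X}_1^4\,\hat{X}_0^8 = stm(\hat{X}_3\hat{X}_2\hat{X}_1\hat{X}_0),
\]
that is, every remaining multiplicity is multiplied by $2^{|x|-1}=2$ and the (hatted) encoding of $x$ is added on top; the hats are removed again in the same manner at the next step.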
$M$ starts its computation from the state $p_0$ with $X_0$ and $Y_0$ on the top of stack-1 and stack-2, respectively. This initial step is performed in $\mathcal{A}_M$ by applying a reaction in $A_0$ to $D_0=p_0X_0Y_0$ together with $a_1$. In order to read the whole input $w$ into $\mathcal{A}_M$, applying reactions in (2) and (4) leads to an interactive process in $\mathcal{A}_M$ : $D_0 \rightarrow^{a_1} D_1 \rightarrow^{a_2} D_2 \rightarrow^{a_3} \cdots \rightarrow^{a_\ell} D_{\ell}$, where $D_{\ell}$ just corresponds to the configuration of $M$ depicted in (b)-1 of Figure \ref{sim}. After this point, only reactions from (3), (4) and (5) are available in $\mathcal{A}_M$, because $M$ makes only $\lambda$-moves. Suppose that for $k\geq 1$, after making $k$-steps $M$ is in the state $p$ and has $\alpha_k \in \Gamma_1^*$ and $\beta_k \in \Gamma_2^*$ on the stack-1 and the stack-2, respectively. Then, from the manner of constructing $A$, it is seen that in the corresponding interactive process in $\mathcal{A}_M$, we have : \begin{eqnarray*} \left\{ \begin{array}{ll} D_k = p \cdot stm(\alpha_k) \cdot stm(\beta_k) \cdot r & \mbox{(if $k$ is even)} \\ D_k = \hat{p} \cdot stm(\hat{\alpha}_k) \cdot stm(\hat{\beta}_k) \cdot r & \mbox{(if $k$ is odd)} \\ \end{array} \right. \end{eqnarray*} for some $r \in Lab(\delta)$, where the rule labeled by $r$ may be used at the $(k+1)$-th step. (Recall that $stm(x)$ is a multiset, in a special 2-power form, representing a string $x$.) Thus, the multisubset ``$stm(\alpha_k)stm(\beta_k)$'' in $D_k$ is denoted by the strings in either $\Gamma^*$ or $\hat{\Gamma}^*$ in an alternate fashion, depending upon the value $k$. Since there is no essential difference between strings denoted by $\Gamma^*$ and its hat version, we only argue about the case when $k$ is even. Suppose that $M$ is in the state $p$ and has $\alpha=X_{i1}\cdots X_{it}X_0$ on the stack-1 and $\beta=Y_{j1}\cdots Y_{js}Y_0$ on the stack-2, where the leftmost element is the top symbol of the stack. Further, let $r$ be the label of a transition $\delta(p, a_{k+1}, X_{i1},Y_{j1})=(q,x,y)$ (if $1 \le k \le l-1$) or $\delta(p,\lambda, X_{i1},Y_{j1})=(q,x,y)$ (if $l \le k$) in $M$ to be applied. Then, the two stacks are updated as $\alpha'=x X_{i2}\cdots X_{it}X_0$ and $\beta'=y Y_{j2}\cdots Y_{js}Y_0$. In order to simulate this move of $M$, we need to prove that it is possible in $\mathcal{A}_M$, $D_k \rightarrow^{a_{k+1}} D_{k+1}$ (if $1 \le k \le l-1$) or $D_k \rightarrow D_{k+1}$ (if $l \le k$), where \begin{align*} &D_k = p\cdot stm(X_{i1} {X}_{i2}\cdots {X}_{it}{X_0}) \cdot stm({Y_{j1}} {Y}_{j2}\cdots {Y}_{js}{Y_0}) r \\ &D_{k+1} = \hat{q}\cdot stm(\hat{x} \hat{X}_{i2}\cdots \hat{X}_{it}\hat{X_0}) \cdot stm(\hat{y} \hat{Y}_{j2}\cdots \hat{Y}_{js}\hat{Y_0}) r' \end{align*} for some $r'\in Lab(\delta)$. 
Taking a close look at $D_k$, we have that \[ D_k=p X_{i1}Y_{j1} r \cdot X^2_{i2} {X}^{2^2}_{i3} \cdots {X}^{2^{t-1}}_{it}{X}^{2^t}_0 \cdot Y^2_{j2} {Y}^{2^2}_{j3} \cdots {Y}^{2^{s-1}}_{js} {Y}^{2^s}_0, \] from which it is easily seen that a multiset of reactions ${\bf z}= {\bf r x_{i2}\cdots x}_{\bf it}^{2^{t-2}} {\bf x}_{\bf 0}^{2^{t-1}} {\bf y_{j2} \cdots y}_{\bf js}^{2^{s-2}} {\bf y}_{\bf 0}^{2^{s-1}}$ is in $En^p_{\mathcal{A}_M}(a_{k+1} + D_k)$ (if $1 \le k \le l-1$) or in $En^p_{\mathcal{A}_M}(D_k)$ (if $l \le k$), i.e., it is enabled by $a_{k+1} + D_k$ (if $1 \le k \le l-1$) or $D_k$ (if $l \le k$) in maximally parallel manner, where \begin{eqnarray*} \left\{ \begin{array}{ll} {\bf r}&=(p a_{k+1} X_{i1}Y_{j1} r, \hat{\Gamma}, \hat{q} \cdot stm(\hat{x})stm(\hat{y}) r') \in A_{a} \mbox{ (if $1 \le k \le l-1$)} \\ {\bf r}&=(p X_{i1}Y_{j1} r, \Sigma\cup \hat{\Gamma}, \hat{q} \cdot stm(\hat{x})stm(\hat{y}) r') \in A_{\lambda} \mbox{ (if $l \le k$)}, \\ \end{array} \right. \end{eqnarray*} for some $r' \in Lab(\delta)$, \\[-6mm] \begin{align*} {\bf x}_i&=(X^2_{i},\hat{Q}\cup \hat{\Gamma} \cup Lab(\delta)-\{r\} \cup \{ f' \}, \hat{X}^{2^{|x|}}_i) \in A_X \mbox{ (for $i=0, i2,\ldots, it$)}, \\ {\bf y}_j&=(Y^2_{j},\hat{Q} \cup \hat{\Gamma} \cup Lab(\delta)-\{r\} \cup \{ f' \}, \hat{Y}^{2^{|y|}}_j) \in A_Y \mbox{ (for $j=0, j2,\ldots, js$)}. \end{align*} The result of the multiset of the reactions ${\bf z}$ is \begin{align*} &\hat{q} \cdot stm(\hat{x})stm(\hat{y})r'\cdot \hat{X}^{2^{|x|}}_{i2} \cdots \hat{X}^{2^{t-2+|x|}}_{it}\hat{X}^{2^{t-1+|x|}}_0 \cdot \hat{Y}^{2^{|x|}}_{j2} \cdots \hat{Y}^{2^{s-2+|x|}}_{js} \hat{Y}^{2^{s-1+|x|}}_0 \\ = \, &\hat{q} \cdot stm(\hat{x} \hat{X}_{i2}\cdots \hat{X}_{it}\hat{X_0}) \cdot stm(\hat{y} \hat{Y}_{j2}\cdots \hat{Y}_{js}\hat{Y_0}) r' \\ = \, &D_{k+1} \end{align*} Thus, in fact it holds that $D_k \rightarrow^{a_{k+1}} D_{k+1}$ (if $1 \le k \le l-1$) or $D_k \rightarrow D_{k+1}$ (if $l \le k$) in $\mathcal{A}_{M}$. We note that there is a possibility that undesired reaction ${\bf r'}$ can be enabled at the $(k+1)$th step, where ${\bf r'}$ is of the form \begin{eqnarray*} \left\{ \begin{array}{ll} {\bf r'}&=(p a_{k+1} X_{iu}Y_{jv} r, \hat{\Gamma}, \hat{q'} \cdot stm(\hat{x'})stm(\hat{y'}) r') \in A_{a} \mbox{ (if $1 \le k \le l-1$)}\\ {\bf r'}&=(p X_{iu}Y_{jv} r, \Sigma\cup \hat{\Gamma}, \hat{q'} \cdot stm(\hat{x'})stm(\hat{y'}) r') \in A_{\lambda} \mbox{ (if $l \le k$)}, \\ \end{array} \right. \end{eqnarray*} with $u \ne 1$ or $v \ne 1$, that is, the reactant of ${\bf r'}$ contains a stack symbol which is not the top of stack. If a multiset of reactions ${\bf z'}={\bf r' x'_1\cdots x'_{t'} y'_1 \cdots y'_{s'}}$ with ${\bf x'_1, \ldots, x'_{t'}} \in A_X$, ${\bf y'_1, \ldots, y'_{s'}} \in A_Y$ is used at the $(k+1)$th step, then $D_{k+1}$ contains {\it both} the symbols without hat (in $\Gamma$) and the symbols with hat (in $\hat{Q}$ and $\hat{\Gamma}$). This is because in this case, $X_{i1}$ or $Y_{j1}$ in $D_k$ which is not consumed at the $(k+1)$-th step remains in $D_{k+1}$ (since the total numbers of $X_{i1}$ and $Y_{j1}$ are {\it odd}, these objects cannot be consumed out by the reactions from (4)). Hence, no reaction is enabled at the $(k+2)$-th step and $f'$ is never derived after this wrong step. 
From the arguments above, it holds that for an input $w \in \Sigma^*$, $M$ enters the final state $f$ (and halts) if and only if there exists $\pi : D_0, \ldots ,D_i, \ldots \in IP(\mathcal{A}_M,w)$ such that $D_{k-1}$ contains $f$ or $\hat{f}$, $D_k$ contains $f'$, and $\pi$ converges on $D_k$, for some $k \ge 1$. Therefore, we have that $L(M) = L(\mathcal{A}_M)$ holds. \end{proof} \begin{cor} Every recursively enumerable language is accepted by a reaction automaton. \end{cor} Recall the way of constructing reactions $A$ of $\mathcal{A}_M$ in the proof of Theorem 1. The reactions in categories (1), (2), (3) would not satisfy the condition of determinacy which is given immediately below. However, we can easily modify $\mathcal{A}_M$ to meet the condition. \begin{de}{\rm Let $\mathcal{A}_M = (S, \Sigma, A, D_0, f')$ be an RA. Then, $\mathcal{A}_M$ is {\it deterministic} if for $a=(R, I, P), a' =(R', I', P') \in A$, $(R = R') \wedge (I = I')$ implies that $a = a'$. } \end{de} \begin{thm} If a language $L$ is accepted by a restricted two-stack machine, then $L$ is accepted by a deterministic reaction automaton. \end{thm} \begin{proof} Let $M = (Q, \Sigma, \Gamma_1, \Gamma_2, \delta, p_0, {X}_0, {Y}_0, f)$ be a restricted two-stack machine. For the RA $\mathcal{A}_M = (S, \Sigma, A, D_0, f')$ constructed for the proof of Theorem 1, we consider $\mathcal{A}'_M = (S \cup \hat{Lab(\delta)}, \Sigma, A', D_0, f')$, where $A'$ consists of the following 5 categories : \begin{align*} (1) &A_0 =\{ ( p_0 a X_0 Y_0, Lab(\delta) \cup \{ \hat{r'} \}, \hat{q} \cdot stm(\hat{x}) \cdot stm(\hat{y}) \cdot \hat{r'} ) \, | \, r: \delta(p_0, a, X_0, Y_0) = (q, x, y), \, r' \in Lab(\delta) \}, \\ (2) &A_a = \{ ( p a X_i Y_j r, \hat{\Gamma} \cup \{ \hat{r'} \}, \hat{q} \cdot stm(\hat{x}) \cdot stm(\hat{y}) \cdot \hat{r'} ) \, | \, a \in \Sigma, \, r: \delta(p, a, X_i, Y_j) = (q, x, y), \, r' \in Lab(\delta) \}, \\ &\hat{A}_a = \{ ( \hat{p} a \hat{X_i} \hat{Y_j} r, \Gamma \cup \{ r' \}, q \cdot stm(x) \cdot stm(y) \cdot r' ) \, | \, a \in \Sigma, \, r: \delta(p, a, X_i, Y_j) = (q, x, y), \, r' \in Lab(\delta) \}, \\ (3) &A_{\lambda} = \{ ( p X_i Y_j r, \Sigma \cup \hat{\Gamma} \cup \{ \hat{r'} \}, \hat{q} \cdot stm(\hat{x}) \cdot stm(\hat{y}) \cdot \hat{r'} ) \, | \, r: \delta(p, \lambda, X_i, Y_j) = (q, x, y), \, r' \in Lab(\delta) \}, \\ &\hat{A}_{\lambda} = \{ ( \hat{p} \hat{X_i} \hat{Y_j} r, \Sigma \cup \Gamma \cup \{ r' \}, q \cdot stm(x) \cdot stm(y) \cdot r' ) \, | \, r: \delta(p, \lambda, X_i, Y_j) = (q, x, y), \, r' \in Lab(\delta) \}, \\ (4) &A_{X} = \{ ( X^2_k, \hat{Q} \cup \hat{\Gamma} \cup (\hat{Lab(\delta)} - \{ \hat{r} \}) \cup \{ f' \}, \hat{X}^{2^{|x|}}_k ) \, | \, 0 \le k \le n, \, r: \delta(p, a, X_i, Y_j) = (q, x, y) \}, \\ &\hat{A}_{X} = \{ ( \hat{X}^2_k, Q \cup \Gamma \cup (Lab(\delta) - \{ r \}) \cup \{ f' \}, X^{2^{|x|}}_k ) \, | \, 0 \le k \le n, \, r: \delta(p, a, X_i, Y_j) = (q, x, y) \}, \\ &A_{Y} = \{ ( Y^2_k, \hat{Q} \cup \hat{\Gamma} \cup (\hat{Lab(\delta)} - \{ \hat{r} \}) \cup \{ f' \}, \hat{Y}^{2^{|y|}}_k ) \, | \, 0 \le k \le m, \, r: \delta(p, a, X_i, Y_j) = (q, x, y) \}, \\ &\hat{A}_{Y} = \{ ( \hat{Y}^2_k, Q \cup \Gamma \cup (Lab(\delta) - \{ r \}) \cup \{ f' \}, Y^{2^{|y|}}_k ) \, | \, 0 \le k \le m, \, r: \delta(p, a, X_i, Y_j) = (q, x, y) \}, \\ (5) &A_f = \{ ( f, \hat{\Gamma}, f' ) \}, \\ &\hat{A}_f = \{ ( \hat{f}, \Gamma, f' ) \}. 
\end{align*} The reactions in categories (1), (2), (3) in $A'$ meet the condition where $\mathcal{A'}_M$ is deterministic, since the inhibitor of each reaction includes $r'$ or $\hat{r'}$. We can easily observe that the equation $L(M) = L(\mathcal{A'}_M)$ is proved in the manner similar to the proof of Theorem 1. \end{proof} \begin{cor} Every recursively enumerable language is accepted by a deterministic reaction automaton. \end{cor} \section{Space Complexity Classes of RAs} We now consider space complexity issues of reaction automata. That is, we introduce some subclasses of reaction automata and investigate the relationships between classes of languages accepted by those subclasses of automata and language classes in the Chomsky hierarchy. Let $\mathcal{A}$ be an RA and $f$ be a function defined on $\mathbf{N}$. Motivated by the notion of a workspace for a phrase-structure grammar (\cite{AS:73}), we define: for $w\in L(\mathcal{A})$ with $n=|w|$, and for $\pi$ in $IP(\mathcal{A},w)$ that converges on $D_m$ for some $m \geq n$ and $D_m$ includes the final state, \[ WS(w,\pi) = \underset{i}{{\rm max}} \{|D_i| \mid D_i \mbox{ appears in } \pi \ \}. \] Further, the {\it workspace of} $\mathcal{A}$ {\it for} $w$ is defined as: \[ WS(w,\mathcal{A}) =\underset{\pi}{{\rm min}} \{WS(w,\pi) \mid \pi \in IP(\mathcal{A},w) \}. \] \begin{de}{\rm ($i$). An RA $\mathcal{A}$ is {\it $f(n)$-bounded} if for any $w\in L(\mathcal{A})$ with $n=|w|$, $WS(w,\mathcal{A})$ is bounded by $f(n)$. \\ ($ii$). If a function $f(n)$ is a constant $k$ (resp. linear, polynomial, exponential), then $\mathcal{A}$ is termed $k$-bounded (resp. linearly-bounded, polynomially-bounded, exponentially-bounded), and denoted by $k$-RA (resp. $lin$-RA, $poly$-RA, $exp$-RA). Further, the class of languages accepted by $k$-RA (resp. $lin$-RA, $poly$-RA, $exp$-RA, arbitrary RA) is denoted by $k$-$\mathcal{RA}$ (resp. $\mathcal{LRA, PRA, ERA, RA}$). } \end{de} Let us denote by $\mathcal{REG}$ (resp. $\mathcal{LIN, CF, CS, RE}$) the class of regular (resp. linear context-free, context-free, context-sensitive, recursively enumerable) languages. \begin{exam}{\rm Let $L_1 = \{ a^n b^n c^n \, | \, n \ge 0 \}$ and consider an RA $\mathcal{A}_1 = (S, \Sigma, A, D_0, f)$ defined as follows: \begin{align*} &S = \{ a, b, c, d, a', b', c', f \} \mbox{ with } \Sigma=\{a, b, c\}, \\ &A = \{ {\bf a}_1, {\bf a}_2, {\bf a}_3, {\bf a}_4 \}, \ \mbox{where} \\ &\text{\ \ \ \ \ \ } {\bf a}_1 = ( a, bb', a' ), \,{\bf a}_2 = ( a'b, cc', b' ),\, {\bf a}_3 = ( b'c, \emptyset, c' ), \, {\bf a}_4 = ( d, abca'b', f ), \\ &D_0 = d. \end{align*} Then, it holds that $L_1 = L(\mathcal{A}_1)$ (see Figure~\ref{ra-anbncn}). 
}
\end{exam}
\begin{exam}{\rm Let $L_2 = \{ a^m b^m c^n d^n \, | \, m, n \ge 0 \}$ and consider an RA $\mathcal{A}_2 = (S, \Sigma, A, D_0, f)$ defined as follows:
\begin{align*} &S = \{ a, b, c, d, a', c', p_0, p_1, p_2, p_3, f \} \mbox{ with } \Sigma=\{a, b, c, d \}, \\ &A = \{ {\bf a}_1, {\bf a}_2, {\bf a}_3, {\bf a}_4, {\bf a}_5, {\bf a}_6, {\bf a}_7, {\bf a}_8, {\bf a}_9, {\bf a}_{10}, {\bf a}_{11} \}, \ \mbox{where} \\ &\text{\ \ \ \ \ \ } {\bf a}_1 = ( ap_0, bc, a'p_0 ), \,{\bf a}_2 = ( a'bp_0, c, p_1 ),\, {\bf a}_3 = ( a'bp_1, c, p_1 ), \, {\bf a}_4 = ( cp_0, d, c'p_2 ), \\ &\text{\ \ \ \ \ \ } {\bf a}_5 = ( cp_1, d, c'p_2 ), \, {\bf a}_6 = ( cp_2, d, c'p_2 ), \, {\bf a}_7 = ( c'dp_2, \emptyset, p_3 ), \,{\bf a}_8 = ( c'dp_3, \emptyset, p_3 ), \\ &\text{\ \ \ \ \ \ } {\bf a}_9 = ( p_0, abcd, f ), \, {\bf a}_{10} = ( p_1, abcda', f ), \, {\bf a}_{11} = ( p_3, abcda'c', f ), \\ &D_0 = p_0. \end{align*}
Then, it holds that $L_2 = L(\mathcal{A}_2)$ (see Figure~\ref{ra-anbncmdm}).
}
\end{exam}
It should be noted that $\mathcal{A}_1$ and $\mathcal{A}_2$ are both $lin$-RAs; therefore, the class of languages $\mathcal{LRA}$ contains the context-sensitive language $L_1$ and the non-linear context-free language $L_2$.
\begin{figure}
\caption{Reaction diagram of $\mathcal{A}_1$.}
\label{ra-anbncn}
\end{figure}
\begin{figure}
\caption{Reaction diagram of $\mathcal{A}_2$.}
\label{ra-anbncmdm}
\end{figure}
\begin{lem} For an alphabet $\Sigma$ with $|\Sigma| \ge 2$, let $h:\Sigma^* \rightarrow \Sigma^*$ be an injection such that for any $w \in \Sigma^*$, $|h(w)|$ is bounded by a polynomial in $|w|$. Then, there is no polynomially-bounded reaction automaton which accepts the language $L = \{ wh(w) \, | \, w \in \Sigma^* \}$.
\label{lem-wfw}
\end{lem}
\begin{proof}
Assume that there is a {\it poly}-RA $\mathcal{A} = (S, \Sigma, A, D_0, f)$ such that $L(\mathcal{A}) = \{ wh(w) \, | \, w \in \Sigma^* \}$. Let $|S| = m_1$, $|\Sigma|= m_2 \ge 2$ and let the input string be $wh(w)$ with $|w|=n$. Since $|h(w)|$ is bounded by a polynomial in $|w|$, $|wh(w)|$ is also bounded by a polynomial in $n$. Hence, for each $D_i$ appearing in an interactive process $\pi \in IP( \mathcal{A}, wh(w) )$, it holds that $|D_i| \le p(n)$ for some polynomial $p(n)$, by the definition of a {\it poly}-RA. Let $\mathcal{D}_{p(n)} = \{ D \in S^\# \, | \, |D| \le p(n) \}$. Then, it holds that
\begin{align*} &|\mathcal{D}_{p(n)}| = \sum^{p(n)}_{k = 0} {}_{m_1} \mathrm{H}_k = \sum^{p(n)}_{k = 0} \frac{(k + m_1 -1)!}{k! \cdot (m_1 -1) !} = \frac{(p(n) + m_1)!}{p(n)! \cdot m_1 !} = \frac{(p(n) + m_1)(p(n) + m_1 - 1) \cdots (p(n) + 1)}{ m_1 !}.\\ &({}_{m_1} \mathrm{H}_k \text{ denotes the number of repeated combinations of $m_1$ things taken $k$ at a time.}) \end{align*}
Therefore, there is a polynomial $p'(n)$ such that $|\mathcal{D}_{p(n)}| = p'(n)$. Since it holds that $|\Sigma^n| = (m_2)^n$, if $n$ is sufficiently large, we obtain the inequality $|\mathcal{D}_{p(n)}| < |\Sigma^n|$. For $i \ge 0$ and $w \in \Sigma^*$, let $I_i(w) = \{ D_i \in \mathcal{D}_{p(n)} \, | \, \pi = D_0, \ldots, D_i, \ldots \in IP(\mathcal{A}, w) \} \subseteq \mathcal{D}_{p(n)}$, i.e., $I_i(w)$ is the set of multisets in $\mathcal{D}_{p(n)}$ which appear as the $i$-th elements of interactive processes in $IP(\mathcal{A}, w)$. From the fact that $L(\mathcal{A}) = \{ wh(w) \, | \, w \in \Sigma^* \}$ and $h$ is an injection, we can show that for any two distinct strings $w_1, w_2 \in \Sigma^n$, $I_n(w_1)$ and $I_n(w_2)$ are incomparable.
This is because if $I_n(w_1) \subseteq I_n(w_2)$, then the string $w_2 h(w_1)$ is accepted by $\mathcal{A}$, which means that $h(w_1) = h(w_2)$, contradicting the injectivity of $h$. Since for any two distinct strings $w_1, w_2 \in \Sigma^n$, $I_n(w_1)$ and $I_n(w_2)$ are incomparable and $I_n(w_1), I_n(w_2) \subseteq \mathcal{D}_{p(n)}$, it holds that
\[ | \{ I_n(w) \, | \, w \in \Sigma^n \} | \le |\mathcal{D}_{p(n)}| < |\Sigma^n|. \]
However, by the pigeonhole principle, the inequality $| \{ I_n(w) \, | \, w \in \Sigma^n \} | < |\Sigma^n|$ contradicts the fact that $I_n(w_1) \ne I_n(w_2)$ for any two distinct strings $w_1, w_2 \in \Sigma^n$.
\end{proof}
\begin{thm}
The following inclusions hold: \\ {\rm (1)}. $\mathcal{REG}=k$-$\mathcal{RA} \subset \mathcal{LRA} \subseteq \mathcal{PRA} \subset \mathcal{ERA} \subseteq \mathcal{RA} = \mathcal{RE}$ {\rm (for each $k\geq 1$)}. \\ {\rm (2)}. $\mathcal{LRA} \subset \mathcal{CS} \subseteq \mathcal{ERA}$. \\ {\rm (3)}. $\mathcal{LIN}$ {\rm (}$\mathcal{CF}${\rm )} and $\mathcal{LRA}$ are incomparable.
\end{thm}
\begin{proof}
(1). From the definitions, the inclusion $\mathcal{REG}\subseteq 1$-$\mathcal{RA}$ is straightforward. Conversely, for a given $k$-RA $\mathcal{A}=(S, \Sigma, A, D_0, f)$ and for $w \in L(\mathcal{A})$, there exists a $\pi$ in $IP(\mathcal{A},w)$ such that for each $D_i$ appearing in $\pi$, we have $|D_i| \le k$. Let $Q=\{ D \in S^{\#} \mid |D|\leq k\}$ and $F=\{ D \mid D\in Q, f\subseteq D, Res_A (D) = \{ D \} \}$, and construct an NFA $M=(Q, \Sigma, \delta, D_0, F)$, where $\delta$ is defined by $\delta(D,a)\ni D'$ if $D\rightarrow^a D'$ for $a \in \Sigma\cup \{\lambda\}$. Then it is seen that $L(\mathcal{A})=L(M)$ and hence $k$-$\mathcal{RA}\subseteq \mathcal{REG}$; thus we obtain $\mathcal{REG}= k$-$\mathcal{RA}$. The other inclusions are all obvious from the definitions. The language $L=\{a^nb^n\mid n\geq 0\}$ witnesses the proper inclusion $\mathcal{REG}\subset \mathcal{LRA}$. The proper inclusion $\mathcal{PRA} \subset \mathcal{ERA}$ is due to the fact that $L_3 =\{ ww^R \mid w\in \{a,b\}^* \} \in \mathcal{ERA} - \mathcal{PRA}$, which follows from Lemma \ref{lem-wfw}. \\
(2). Given a $lin$-RA $\mathcal{A}$, one can construct a linear bounded automaton (LBA) $M$ that simulates an interactive process $\pi$ in $IP(\mathcal{A},w)$ for each $w$, because of the linear boundedness of $\mathcal{A}$. This implies that $\mathcal{LRA} \subseteq \mathcal{CS}$. The properness of the inclusion is due to the fact that $L_3 =\{ ww^R \mid w\in \{a,b\}^* \} \in \mathcal{LIN}-\mathcal{LRA}$, which follows from Lemma \ref{lem-wfw}. Further, for a given LBA $M$, one can find an equivalent two-stack machine $M_s$ whose stack lengths are linearly bounded by the input length. This implies, from the proof of Theorem \ref{teiri1}, that $M_s$ is simulated by an RA $\mathcal{A}$ that is exponentially bounded. Thus, it holds that $\mathcal{CS} \subseteq \mathcal{ERA}$. \\
(3). The language $L_1=\{a^nb^nc^n\mid n\geq 0\}$ (resp. $L_2=\{a^m b^m c^n d^n \mid m,n \geq 0\}$) is in $\mathcal{LRA} - \mathcal{CF}$ (resp.\ $\mathcal{LRA} - \mathcal{LIN}$), while, again from Lemma \ref{lem-wfw}, the language $L_3$ is in $\mathcal{LIN}-\mathcal{LRA}$. This completes the proof.
\end{proof}
\begin{figure}
\caption{Language class relations in the Chomsky hierarchy, where $L_1=\{a^nb^nc^n \mid n \geq 0\}$, $L_2=\{a^m b^m c^n d^n \mid m, n \geq 0\}$ and $L_3=\{ww^R \mid w\in\{a,b\}^*\}$.}
\label{hie}
\end{figure}
\section{Concluding Remarks}
Based on the formal framework presented in a series of papers \cite{ER:07a,ER:07b,ER:09,EMR:10,EMR:11}, we have introduced the notion of reaction automata and investigated their language accepting power. Roughly, a reaction automaton may be characterized in terms of three key phrases: it is a {\it language accepting device} based on {\it multiset rewriting} in a {\it maximally parallel manner}. Specifically, we have shown that in a computing scheme with a one-pot solution and a finite number of molecular species, reaction automata can perform Turing universal computation. The idea behind their computing principle is to simulate the behavior of two pushdown stacks in terms of multiset rewriting with the help of an encoding technique, where both the maximally parallel manner of rewriting and the role of the inhibitors in each reaction are effectively utilized. There already exists a considerable body of literature on the notion of a multiset and related topics (\cite{CPRS:01}), in which multiset automata and grammars are formulated and explored largely from the formal language theoretic point of view. More recent papers (\cite{KTZ:09a,KTZ:09b}) focus on the accepting power of multiset pushdown automata and characterize classes of multiset languages by investigating their closure properties. To the authors' knowledge, however, relatively few works have been devoted to computing languages with a multiset rewriting/communicating mechanism. Among them, one can find some papers published in the area of membrane computing (or spiking neural P-systems) where a string is encoded in some manner as a natural number and a language is specified as a set of natural numbers (e.g., \cite{CIPP:06}). Further, recent developments concerning P-automata and their variant called dP-automata are noteworthy in the sense that they may give rise to a new type of computing device that could serve as a bridge between P-system theory and the theory of reaction systems and automata (\cite{COV:10,IPPY:11,PP:11}). In fact, a certain number of computing devices similar to reaction automata have already been investigated in the literature. Among others, parallel labelled rewrite transition systems have been proposed and investigated (\cite{HM:01}); the multiset automata considered there may be regarded as a special type of reaction automata, although neither regulation by inhibitors nor the maximally parallel manner of applying rules is employed in their rewriting process. A quite recent article \cite{AV:11} investigates the power of maximally parallel multiset rewriting systems (MPMRSs) and proves the existence of a universal MPMRS with a small number of rules, which directly implies the existence of a universal antiport P-system with one membrane and a small number of rules. In contrast to reaction automata, a universal MPMRS computes any partial recursive function, provided that the input is the encoding of a register machine computing the target function. Turning to formal grammars, one can find random context grammars (\cite{DP:01}) and their variants (such as the semi-conditional grammars of \cite{Paun:85}) that employ regulated rewriting mechanisms based on permitting symbols and forbidding symbols.
The roles of these two correspond to those of reactants and inhibitors in reactions, although such grammars deal with sets of strings (i.e., languages in the usual sense) rather than multisets. We finally refer to an article on stochastic computing models based on chemical kinetics, which proves that well-mixed finite stochastic chemical reaction networks with a fixed number of species can achieve Turing universal computability with an arbitrarily low error probability (\cite{SCWB:08}). In this paper, we have shown that non-stochastic chemical reaction systems with a finite number of molecular species can also achieve Turing universality with the help of the inhibition mechanism. Many subjects remain to be investigated along the research direction suggested by the reaction automata of this paper. First, it is important to completely characterize the computing power and the closure properties of the complexity subclasses of reaction automata introduced in this paper. Secondly, from the viewpoint of designing chemical reactions, it would be useful to explore a methodology for ``chemical reaction programming'' in terms of reaction automata. It would also be interesting to simulate a variety of real-world chemical reactions within the framework of reaction automata.
\end{document}
\begin{document} \subjclass[2000]{Primary: 46L09; Secondary: 46L35, 46L06} \keywords{C*-algebra, free product} \begin{abstract} Given two unital continuous C$^*$-bundles $A$ and $B$ over the same compact Hausdorff base space $X$, we study the continuity properties of their different amalgamated free products over $C(X)$. \end{abstract} \title{Amalgamated free products of C$^*$-bundles} ${}$ In memory of Gert Pedersen \section{Introduction} Tensor products of C$^*$-bundles have been much studied over the last decade (see for example \cite{Ri}, \cite{ElNaNe}, \cite{bla2}, \cite{KiWa}, \cite{GiMi}, \cite{Ar}, \cite{Ma}, \cite{BdWa}). One of the main results was obtained by Kirchberg and Wassermann who gave in \cite{KiWa} a characterization of the exactness (respectively the nuclearity) of the C$^*$-algebra of sections $A$ of a continuous bundle of C$^*$-algebras over a compact Hausdorff space $X$ through the equivalence between the following conditions $\alpha_e$) and $\beta_e$) (respectively $\alpha_n$) and $\beta_n$)): \noindent $\alpha_e$) \textit{ The C$^*$-bundle $A$ is an exact C$^*$-algebra. } \noindent $\beta_e$) \textit{ For all continuous C$^*$-bundle $B$ over a compact Hausdorff space $Y$, the minimal C$^*$-tensor product $A\mathop{\otimes}\limits^m B$ is a continuous C$^*$-bundle over $X\times Y$ with fibres $A_x\mathop{\otimes}\limits^m B_y$. } \noindent $\alpha_n$) \textit{ The C$^*$-bundle $A$ is a nuclear C$^*$-algebra. } \noindent $\beta_n$) \textit{ For all continuous C$^*$-bundle $B$ over a compact Hausdorff space $Y$, the maximal C$^*$-tensor product $A\mathop{\otimes}\limits^M B$ is a continuous C$^*$-bundle over $X\times Y$ with fibres $A_x\mathop{\otimes}\limits^M B_y$. } \begin{rem} In \cite{KiWa}, the authors add to condition $\beta_e)$ the assumption that all the fibres $A_x$ ($x\in X$) are exact. But this is automatically satisfied (see \cite[Prop~3.3]{BdWa}). \end{rem} The case when we restrict our attention to fibrewise tensor products was then extensively studied in \cite{BdWa}. The two first assertions (respectively the two last ones) are indeed equivalent to the following assertion $\gamma_e$) (respectively $\gamma_n$)) introduced in \cite{bla2}, in case the compact Hausdorff space $X$ is perfect and second countable. \noindent $\gamma_e$) \textit{ For all continuous $C(X)$-algebra $B$, the smallest completion $A\mathop{\otimes}\limits^m_{C(X)} B$ of the algebraic tensor product $A\mathop{\odot}\limits_{C(X)} B$ amalgamated over $C(X)$ is a continuous C$^*$-bundle over $X$ with fibres $A_x \mathop{\otimes}\limits^m B_x$. } \noindent $\gamma_n$) \textit{ For all continuous $C(X)$-algebra $B$, the largest completion $A\mathop{\otimes}\limits^M_{C(X)}B$ of the algebraic tensor product $A\mathop{\odot}\limits_{C(X)} B$ amalgamated over $C(X)$ is a continuous C$^*$-bundle over $X$ with fibres $A_x \mathop{\otimes}\limits^M B_x$. } But there are also other canonical amalgamated products over $C(X)$, such as the completions considered by Pedersen (\cite{Ped}) and Voiculescu (\cite{Voi}) of the algebraic amalgamated free product $A\mathop{\circledast}\limits_{C(X)}B$ of two unital continuous C$^*$-bundles $A$ and $B$ over the same compact Hausdorff space $X$. The point of this paper is to study whether analogous continuity properties hold (or not) for these amalgamated free products. 
More precisely, we start in \S\ref{prelim} by fixing our notations and extending a few results available for $C(X)$-algebras to the framework of the operator systems which naturally appear when dealing with free products of $C(X)$-algebras amalgamated over $C(X)$. We show in \S\ref{full} that the full amalgamated free products are always continuous (Theorem~\ref{fullfreecont}) and we prove in \S\ref{red} that the exactness of the C$^*$-algebra $A$ is sufficient to ensure the continuity of the reduced ones (Theorem~\ref{redfree}). In particular, this implies that any separable continuous C$^*$-bundle over a compact Hausdorff space $X$ admits a $C(X)$-linear embedding into a C$^*$-algebra with Hausdorff primitive ideal space $X$. The author would like to express his gratitude to S. Wassermann and N. Ozawa for helpful comments. He would also like to thank the referee for his very careful reading of several draft versions of this paper. \section{Preliminaries}\label{prelim} We recall in this section a few basic definitions and constructions related to the theory of C$^*$-bundles. Let us first fix a few notations for operators acting on Hilbert C$^*$-modules (\cite[\S 13]{blac}). \begin{defi}\label{notationHilbertMod} Let $B$ be a C$^*$-algebra and $E$ a Hilbert $B$-module. \\ -- For all $\zeta_1, \zeta_2\in E$, we define the rank $1$ operator $\theta_{\zeta_1, \zeta_2}$ acting on the Hilbert $B$-module $E$ by the relation \hspace{20pt} \begin{equation}\label{rank1} \theta_{\zeta_1, \zeta_2}(\xi)=\zeta_1\langle\zeta_2, \xi\rangle\quad (\xi\in E). \end{equation} -- The closed linear span of these operators is the C$^*$-algebra $\mathcal{K}_B(E)$ of \textit{compact} operators acting on the Hilbert $B$-module $E$. \\ -- The multiplier C$^*$-algebra of $\mathcal{K}_B(E)$ is (isomorphic to) the C$^*$-algebra $\mathcal{L}_B(E)$ of continuous adjointable $B$-linear operators acting on $E$ \\ -- In case $B=\mathbb{C}$, then $E$ is a Hilbert space and we simply denote by $\mathcal{L}(E)$ and $\mathcal{K}(E)$ the C$^*$-algebras $\mathcal{L}_\mathbb{C}(E)$ and $\mathcal{K}_\mathbb{C}(E)$. A basic example is the separable Hilbert space $\ell_2(\mathbb{N})$ of complex valued sequences $( a_i )_{i\in\mathbb{N}}$ which satisfy $\| (a_i)\|^2=\sum_i |a_i|^2<\infty$. \end{defi} Let $X$ be a compact Hausdorff space and let $C(X)$ be the C$^*$-algebra of continuous functions on $X$ with values in the complex field $\mathbb{C}$. \begin{defi} A $C(X)$-algebra is a C$^*$-algebra $A$ endowed with a unital $*$-homomorphism from $C(X)$ to the centre of the multiplier C$^*$-algebra ${\mathcal{M}}(A)$ of~$A$. \end{defi} For all $x\in X$, we denote by $C_x(X)$ the ideal of functions $f\in C(X)$ satisfying $f(x)=0$. We denote by $A_x$ the quotient of $A$ by the \textit{closed} ideal $C_x(X) A$ and by $a_x$ the image of an element $a\in A$ in the \textit{fibre} $A_x$. Then the function \begin{equation} x\mapsto\| a_x\| =\inf\{\| [1-f+f(x)]a\| ,f\in C(X)\} \end{equation} is upper semi-continuous by construction. The $C(X)$-algebra is said to be \textit{continuous} (or to be a continuous C$^*$-bundle over $X$ in \cite{didou}, \cite{bla1}, \cite{KiWa}) if the function $x\mapsto\| a_x\|$ is actually continuous for all element $a$ in $A$. \begin{exs}\label{excont} Given a C$^*$-algebra $D$, the spatial tensor product $A=C(X)\otimes D=C(X; D)$ admits a canonical structure of continuous $C(X)$-algebra with constant fibre $A_x\cong D$. 
Thus, if $A'$ is a C$^*$-subalgebra of $A$ stable under multiplication with $C(X)$, then $A'$ also defines a continuous $C(X)$-algebra. This is especially the case for separable exact continuous $C(X)$-algebras: they always admit a $C(X)$-embedding in the constant $C(X)$-algebra $C(X; {\mathcal{O}_2})$, where ${\mathcal{O}_2}$ is the Cuntz C$^*$-algebra (\cite{bla3}). \end{exs} \begin{defi}\label{contfaithrep} (\cite{bla2}) Given a continuous $C(X)$-algebra $B$, a \textit{continuous field of faithful representations} of a $C(X)$-algebra $A$ on $B$ is a $C(X)$-linear map $\pi$ from $A$ to the multiplier C$^*$-algebra ${\mathcal{M}}(B)$ of $B$ such that, for all $x\in X$, the induced representation $\pi_x$ of the fibre $A_x$ in ${\mathcal{M}}(B_x)$ is faithful. \end{defi} Note that the existence of such a continuous field of faithful representations $\pi$ implies that the $C(X)$-algebra $A$ is continuous since the function \begin{equation}\label{lsc} x\mapsto \| a_x\|=\|\pi_x(a_x)\| =\| \pi(a)_x|| =\sup\{\| (\pi (a)b)_x\| , b\in B \;\;\mbox{such that}\;\; \| b\|\leq 1\} \end{equation} is lower semi-continuous for all $a\in A$. \\ \indent Conversely, any \textit{separable} continuous $C(X)$-algebra $A$ admits a continuous field of faithful representations. More precisely, there always exists a unital positive $C(X)$-linear map $\varphi: A\to C(X)$ such that all the induced states $\varphi_x$ on the fibres $A_x$ are faithful (\cite{bla1}). By the Gel'fand-Naimark-Segal (GNS) construction this gives a continuous field of faithful representations of $A$ on the continuous $C(X)$-algebra of compact operators $\mathcal{K}_{C(X)}(E)$ on the Hilbert $C(X)$-module $E=L^2(A,\varphi)$. These constructions admit a natural extension to the framework of operator systems. Indeed, for all Banach space $V$ with a unital contractive homomorphism from $C(X)$ into the bounded linear operators on $V$, one can define the fibres $V_x=V/C_x(X) V$ and the projections $v\in V\mapsto v_x:= v+C_x(X)\, V\in V_x$ (\cite{didou}, \cite[\S 2.3]{BdKi}). Then the following $C(X)$-linear version of Ruan's characterization of operator spaces holds. \begin{prop}\label{repOpsyst} Let $W$ be a separable operator system which is a unital $C(X)$-module such that, for all positive integer $n$ and all $w$ in $M_n(W)$, the map $x\mapsto\| w_x\|$ is continuous. \begin{enumerate} \item[(i)] Every unital completely positive map $\phi$ from a fibre $W_x$ to $M_n(\mathbb{C})$ admits a $C(X)$-linear unital completely positive extension $\varphi:W\to M_n(C(X))$. \item[(ii)] There exist a Hilbert $C(X)$-module $E$ and a $C(X)$-linear map $\Phi: W\to\mathcal{L}_{C(X)}(E)$ such that for all $x\in X$, the induced map from $W_x$ to $\mathcal{L}(E_x)$ is completely isometric. \end{enumerate} \end{prop} \noindent {\mbox{\textit{Proof}. }} (i) Let $\zeta_n\in\mathbb{C}^n\otimes\mathbb{C}^n$ be the unit vector $\zeta_n=\frac{1}{\sqrt{n}} \sum_{i=1}^n e_i\otimes e_i$. Then the state $w\mapsto \langle\zeta_n, (id_n\otimes\phi)(w)\, \zeta_n\rangle$ on $M_n(\mathbb{C})\otimes W_x\cong M_n(W_x)$ admits a $C(X)$-linear unital positive extension to $M_n(W)$ (\cite{bla1}, \cite{BdKi}). Thus, there is a $C(X)$-linear unital completely positive (u.c.p.) map $\varphi:W\to M_n(C(X))$ with $\varphi_x=\phi$ and $\|\varphi\|_{cb}=\|\phi\|_{cb}$ (\cite{Pau}, \cite{WaS}). \noindent (ii) The proof is the same as the one of Theorem 2.3.5 in \cite{EfRu2}. 
Indeed, given a point $x\in X$ and an element $w\in M_k(W)$, there exists, by lemma 2.3.4 and proposition 2.2.2 of \cite{EfRu2}, a u.c.p. map $\varphi_x$ from $W_x$ to $M_k(\mathbb{C})$, such that $$\|(\imath_k\otimes\varphi_x)(w_x)\|=\| w_x\|\,,$$ and one can extend $\varphi_x$ to a $C(X)$-linear u.c.p. map $\varphi:W\to M_k(C(X))$ by part (i). For all $n\geq 1$, let $\mathfrak{s}_n$ be the set of completely contractive $C(X)$-linear maps from $W$ to $M_n(C(X))$ and let $\mathfrak{s}=\oplus_n\,\mathfrak{s}_n$. Then the map $w\mapsto (\varphi(w))_{\varphi\in\mathfrak{s}}$ defines an appropriate $C(X)$-linear completely isometric representation of $W$. \qed
\begin{rem} Let $\{ M_n(W),\,\|\,.\,\|_n\}$ be a separable operator system such that $W$ is a continuous $C(X)$-module. Then the formula
\begin{center} $\|w\|^\sim_n=\sup\{ \|\langle\xi\otimes 1, (1_n\otimes w)\, \eta\otimes 1\rangle\| ; \xi, \eta\in\mathbb{C}^n\otimes\mathbb{C}^n\;\mathrm{unit\; vectors} \}$ \end{center}
for $w\in M_n(W)$ defines an operator system structure on $W$ satisfying the hypotheses of Proposition~\ref{repOpsyst}.
\end{rem}
Proposition \ref{repOpsyst} also induces the following $C(X)$-linear Wittstock extension:
\begin{cor} Let $X$ be a compact Hausdorff space, $A$ a separable unital continuous $C(X)$-algebra and let $V$ be a $C(X)$-submodule of $A$. \\ \indent Then any completely contractive map $\phi$ from a fibre $V_x$ to $M_{k,l}(\mathbb{C})$ admits a $C(X)$-linear completely contractive extension $\varphi: V\to M_{k,l}(C(X))$.
\end{cor}
\noindent {\mbox{\textit{Proof}. }} Let $W$ be the $C(X)$-linear operator subsystem $\left[\begin{array}{cc} C(X) &V\\ V^* & C(X)\end{array}\right]$ of $M_2(A)$ and let $\tilde\phi$ be the unital completely positive map from the fibre $W_x$ to $M_{k+l}(\mathbb{C})$ given by $$\tilde\phi(\left[\begin{array}{cc}\alpha& v_x'\\ v_x^*&\beta\end{array}\right]) =\left[\begin{array}{cc}\alpha&\phi(v_x')\\ \phi(v_x)^*&\beta\end{array}\right]\,.$$ Let $\zeta=(k+l)^{-1/2}\sum_i e_i\otimes e_i\in \mathbb{C}^{k+l}\otimes\mathbb{C}^{k+l}$. Then the associated state $\psi(d)=\langle\zeta, (\tilde\phi\otimes\imath)(d) \zeta\rangle= \frac{1}{k+l}\sum_{i, j}\tilde\phi_{i,j}(d_{i,j})$ on $M_{k+l}(W_x)$ admits a $C(X)$-linear unital positive extension to $M_{k+l}(W)$ (\cite{bla1}). Thus, there is a $C(X)$-linear completely contractive map $\varphi:V\to M_{k,l}(C(X))$ with $\varphi_x=\phi$ and $\|\varphi\|_{cb}=\|\phi\|_{cb}$ (\cite{Pau}, \cite{WaS}). \qed
We end this section with a short proof of the implication $\gamma_e)\Rightarrow\alpha_e)$ given in \cite{KiWa}, \textit{i.e.\/}\ the characterization of the exactness of a $C(X)$-algebra $A$ by assertion~$\gamma_e$), if the topological space $X$ is \textit{perfect}, i.e.\ without any isolated point.
\noindent $\gamma_e)\Rightarrow\alpha_e)$ Given two $C(X)$-algebras $A$, $B$ and a point $x\in X$, we have canonical $*$-epimorphisms $q_x:A\mathop{\otimes}\limits^m B \to (A_x\mathop{\otimes}\limits^m B)_x$ and $q_x': (A_x\mathop{\otimes}\limits^m B)_x \to A_x \mathop{\otimes}\limits^m B_x$. Further, $q_x(f\otimes 1 -1\otimes f)=f(x)-f(x)=0$ for all $f\in C(X)$. Hence $q_x$ factorizes through $A\otimes_{C(X)} B$ if the $C(X)$-algebra $A$ is continuous, by \cite[proposition 3.1]{bla2}.
If $B$ is also continuous and $A$ satisfies $\gamma_e)$, then $(A \mathop{\otimes}\limits^m_{C(X)} B)_x \cong (A_x \mathop{\otimes}\limits^m B)_x \cong A_x\mathop{\otimes}\limits^m B_x$ and so the $C(X)$-algebra $A_x\mathop{\otimes}\limits^m B$ is continuous at $x$. Thus, Corollary~3 of \cite{CaWa} implies that each fibre $A_x$ is exact ($x\in X$) and the equivalence between assertions (i) and (iv) in \cite[Thm.~4.6]{KiWa} entails that the C$^*$-algebra $A$ itself is exact.
\section{The full amalgamated free product}\label{full}
In this section, we study the continuity of the full free product amalgamated over $C(X)$ of two unital continuous $C(X)$-algebras (\cite{Ped}, \cite{Pis}). By default all tensor products and free products will be over $\mathbb{C}$.
\begin{defi}(\cite{VoDyNi}) Let $X$ be a compact Hausdorff space and let $A_1, A_2$ be two unital $C(X)$-algebras containing a unital copy of $C(X)$ in their centres, \textit{i.e.\/}\ $1_{A_i}\in C(X)\subset A_i$ ($i=1, 2$). \\ -- The \textit{algebraic free product of $A_1$ and $A_2$ with amalgamation over $C(X)$} is the unital quotient $A_1\mathop{\circledast}\limits_{C(X)} A_2$ of the algebraic free product of $A_1$ and $A_2$ over $\mathbb{C}$ by the two sided ideal generated by the differences $f1_{A_1}-f1_{A_2}$, $f\in C(X)$. \\ -- The \textit{full amalgamated free product} $A_1\mathop{\ast}\limits^f_{C(X)} A_2$ is the universal unital enveloping C$^*$-algebra of the $\ast$-algebra $A_1\mathop{\circledast}\limits_{C(X)} A_2$. \\ -- Any pair $(\sigma_1, \sigma_2)$ of unital $\ast$-representations of $A_1, A_2$ that coincide on their restrictions to $C(X)$ defines a unital $\ast$-representation $\sigma_1\ast\sigma_2$ of $A_1\mathop{\circledast}\limits_{C(X)} A_2$, the restriction of which to $A_i$ coincides with $\sigma_i$ ($i=1, 2$). \\ \end{defi}
In particular, the two unital central copies of $C(X)$ in $A_1$ and $A_2$ coherently define a structure of $C(X)$-algebra on $A_1\mathop{\ast}\limits^f_{C(X)} A_2$ and by universality, we have:
\begin{equation}\label{fibrefull} \forall\,x\in X\,,\quad (A_1\mathop{\ast}\limits^f_{C(X)} A_2)_x\cong (A_1)_x\mathop{\ast}\limits^f_{\mathbb{C}} (A_2)_x\,. \end{equation}
\begin{rem} If we fix unital positive $C(X)$-linear maps $\varphi_i:A_i\to C(X)$ and we set $A_i^\circ=\ker\varphi_i$ for $i=1, 2$, then the algebraic amalgamated free product $A_1\mathop{\circledast}\limits_{C(X)} A_2$ is (isomorphic to) the $C(X)$-module $C(X)\oplus \mathop{\oplus}\limits_{n\geq 1}\mathop{\bigoplus}\limits_{i_1\not=\ldots\not= i_n} A_{i_1}^\circ\mathop{\otimes}\limits_{C(X)}\ldots\mathop{\otimes}\limits_{C(X)}A_{i_n}^\circ$, which is a $*$-algebra for the product $v.w:=v\otimes w$ and the involution $(v.w)^*=w^*.v^*$. \end{rem}
Assume now that $A_1$ and $A_2$ are continuous $C(X)$-algebras. Then $A_1\mathop{\ast}\limits^f_{C(X)} A_2$ is also a continuous $C(X)$-algebra as soon as both $A_1$ and $A_2$ are separable exact C$^*$-algebras, thanks to an embedding property due to Pedersen (\cite{Ped}):
\begin{prop}\label{fullfreeexact} Let $X$ be a compact Hausdorff space and $A_1, A_2$ two separable unital continuous $C(X)$-algebras which are exact C$^*$-algebras. \\ \indent Then the full amalgamated free product $A_1\mathop{\ast}\limits^f_{C(X)} A_2$ is a continuous $C(X)$-algebra. \end{prop}
\noindent {\mbox{\textit{Proof}. }} For $i=1, 2$, let $\pi_i$ be a $C(X)$-linear embedding of $A_i$ into $C(X; {\mathcal{O}_2})$ (see Examples~\ref{excont}).
Then the induced $C(X)$-linear morphism $\pi_1\ast\pi_2$ from $A_1\mathop{\ast}\limits^f_{C(X)} A_2$ to the continuous $C(X)$-algebra $C(X; {\mathcal{O}_2})\mathop{\ast}\limits^f_{C(X)} C(X; {\mathcal{O}_2})=C(X; {\mathcal{O}_2}\mathop{\ast}\limits^f{\mathcal{O}_2})$, is injective by \cite[Thm.~4.2]{Ped}. \qed
This continuity property actually always holds (Theorem~\ref{fullfreecont}). In order to prove it, let us first state the following Lemma which will enable us to reduce the problem to the separable case. (Its proof is the same as in \cite[2.4.7]{BdKi}.)
\begin{lem}\label{freesepreduction} Let $X$ be a compact Hausdorff space, $A_1, A_2$ two unital $C(X)$-algebras and $a$ an element of the algebraic amalgamated free product $A_1\mathop{\circledast}\limits_{C(X)}A_2$. Then there exist a second countable compact Hausdorff space $Y$ s.t. $1_{C(X)}\in C(Y)\subset C(X)$ and separable $C(Y)$-algebras $D_1\subset A_1$ and $D_2\subset A_2$ s.t. \hbox{$a$ belongs to the $\ast$-subalgebra $D_1 \mathop{\circledast}\limits_{C(Y)}D_2$. } \end{lem}
Let $e_1, e_2, \ldots$ be an orthonormal basis of $\ell^2(\mathbb{N})$ and set $e_{i, j}:=\theta_{e_i, e_j}$ for all $i, j$ in $\mathbb{N}^*:=\mathbb{N}\setminus\{ 0\}$ (see (\ref{rank1})). Note that $e_{i, j}$ is the rank $1$ partial isometry such that $e_{i, j} e_j=e_i$. Then the following critical Lemma holds:
\begin{lem}\label{fullfreetech} Let $Y$ be a second countable compact space, $D_1, D_2$ two separable unital continuous $C(Y)$-algebras and set $\mathcal{D}=D_1\mathop{\ast}\limits^f_{C(Y)} D_2$. \\ \indent If the element $d$ belongs to the algebraic amalgamated free product $D_1\mathop{\circledast}\limits_{C(Y)} D_2$, then the function $y\mapsto \| d_y\|_{\mathcal{D}_y}$ is continuous. \end{lem}
\noindent {\mbox{\textit{Proof}. }} The map $y\mapsto \| d_y\|$ is always upper semicontinuous by (\ref{fibrefull}). So, it only remains to prove that it is also lower semicontinuous if both the $C(Y)$-algebras $D_1$ and $D_2$ are continuous. Now, any element $d$ in the algebraic amalgamated free product of $D_1$ and $D_2$ admits by construction (at least) one finite decomposition (that we fix) in $M_m(\mathbb{C})\otimes\mathcal{D}$
\begin{equation} e_{1,1}\otimes d= d(1)\ldots d(2n)\end{equation}
for suitable integers $m, n\in\mathbb{N}$ and elements $d(k)\in M_m(\mathbb{C})\otimes D_{\iota_k}$ ($1\leq k\leq 2n$), where $\iota_k=1$ if $k$ is odd and $\iota_k=2$ otherwise. \\ \indent Given a point $y\in Y$ and a constant $\varepsilon>0$, there exist unital $*$-representations $\Theta_1, \Theta_2$ of the fibres $(D_1)_y, (D_2)_y$ on $\ell^2(\mathbb{N})$ and unit vectors $\xi, \xi'$ in $e_1\otimes\ell^2(\mathbb{N}) \subset \mathbb{C}^m\otimes\ell^2(\mathbb{N})$ s.t.
\begin{equation} \| d_y\|-\varepsilon < \Bigl|\langle\xi', e_{1,1}\otimes(\Theta_1\ast\Theta_2)(d_y) \xi\rangle\Bigr| \,. \end{equation}
As the sequence of projections $p_k=\sum_{i=0}^k e_{i,i}$ in $\mathcal{K}(\ell^2(\mathbb{N}))$ satisfies $\mathop{\lim}\limits_{k\to\infty}\| (1-p_k)\zeta\|=0$ for all $\zeta\in\ell^2(\mathbb{N})$, a finite induction implies that there is an integer $l\in\mathbb{N}$ such that $ \| (1\otimes p_l)\xi\|\not=0$, $ \| (1\otimes p_l)\xi'\|\not=0$ and the two u.c.p.
maps $\phi_i(\, .\, )=p_l\Theta_i(\, .\, ) p_l$ on the fibres $(D_i)_y$ ($i=1, 2$) satisfy:
\begin{equation}\label{weak} \bigl|\langle\xi_l', \Bigl[e_{1,1}\otimes (\Theta_1\ast\Theta_2)(d_y)- (id\otimes\phi_{\iota_1})(d(1)_y)\ldots(id\otimes\phi_{\iota_{2n}})(d(2n)_y)\Bigr] \xi_l\rangle\bigr| < \varepsilon \end{equation}
where $\xi_l=(1\otimes p_l)\xi/\| (1\otimes p_l)\xi\|$ and $\xi_l'=(1\otimes p_l)\xi'/\| (1\otimes p_l)\xi'\|$ are unit vectors in $\mathbb{C}^m\otimes\mathbb{C}^{l+1}$ which are arbitrarily close to $\xi$ and $\xi'$, respectively, for sufficiently large $l$. Let $\zeta_l\in\mathbb{C}^l\otimes\mathbb{C}^l$ be the unit vector $\zeta_l=\frac{1}{\sqrt{l}} \sum_{1\leq k\leq l} e_k\otimes e_k$. For each $i$, the state $e\mapsto\langle\zeta_l, (id\otimes\phi_i)(e)\,\zeta_l\rangle$ on $M_{l}(\mathbb{C})\otimes (D_i)_y$ associated to $\phi_i$ admits a unital $C(Y)$-linear positive extension $\Psi_i: M_{l}(\mathbb{C})\otimes D_i\to C(Y)$ (\cite{bla1}). If $(\mathcal{H}_i, \eta_i, \sigma_i)$ is the associated GNS-Kasparov construction, then every $d_i\in D_i$ satisfies
\begin{equation} \langle 1_l\otimes\eta_i, (id\otimes\sigma_i)(\theta_{\zeta_l, \zeta_l}\otimes d_i) 1_l\otimes\eta_i\rangle(y) =(id\otimes\Psi_i)(\theta_{\zeta_l, \zeta_l}\otimes d_i)(y) =\phi_i((d_i)_y) \end{equation}
Let $\sigma=\sigma_1\ast \sigma_2$ be the $*$-representation of the full amalgamated free product $\mathcal{D}$ on the amalgamated pointed free product $C(Y)$-module $(\mathcal{H}, \eta)=\ast_{C(Y)}(\mathcal{H}_i, \eta_i)$ (\cite{VoDyNi}). Then
$$ {}\hspace{-3pt} \begin{array}{rl} \bigl| \langle \xi_l'\otimes\eta, e_{1,1}\otimes \sigma(d)\, \xi_l\otimes\eta\rangle\bigr|(y)\hspace{-5pt}& =\bigl| \langle \xi_l'\otimes\eta, (id\otimes\sigma)(d(1))\ldots (id\otimes\sigma)(d(2n))\xi_l\otimes\eta\rangle\bigr|(y)\\ &=\bigl| \langle \xi_l', (id\otimes\phi_{\iota_1})( d(1)_y)\ldots(id\otimes\phi_{\iota_{2n}})( d(2n)_y) \xi_l \rangle\bigr|\\ &> \| d_y\|-2\varepsilon \end{array} $$
And so, $\| d_y\|-2\varepsilon < \Bigl| \langle \xi_l'\otimes\eta, e_{1,1}\otimes \sigma(d)\, (1\otimes p_l)\xi_l\otimes\eta\rangle\Bigr|(z) \leq \| d_z\|$ for all points $z$ in an open neighbourhood of $y$ in $Y$ by continuity. \qed
\begin{rem} The referee pointed out that the inequality (\ref{weak}) cannot be replaced by a norm inequality like $\bigl\| e_{1,1}\otimes (\Theta_1\ast\Theta_2)(d_y)- (id\otimes\phi_{\iota_1})(d(1)_y)\ldots(id\otimes\phi_{\iota_{2n}})(d(2n)_y)\bigr\| < \varepsilon'$. \\ Indeed, if for instance $p\in A=M_2(\mathbb{C})$ is the projection $p=\frac{1}{2} \left[\begin{array}{cc}1&1\\1&1\end{array}\right] $, then $e_{1,1}\,.\,e_{2,2}=0$ \quad but\quad $p\,e_{1,1}\,p\,e_{2,2}\,p= \tfrac{1}{4}\, p\not=0\,$. \end{rem}
\begin{thm}\label{fullfreecont} Let $X$ be a compact Hausdorff space and let $A_1, A_2$ be two unital continuous $C(X)$-algebras. Then the full amalgamated free product $\mathcal{A} =A_1\mathop{\ast}\limits^f_{C(X)} A_2$ is a continuous $C(X)$-algebra with fibres $\mathcal{A}_x= (A_1)_x\mathop{\ast}\limits^f (A_2)_x$ ($x\in X$). \end{thm}
\noindent {\mbox{\textit{Proof}. }} The $C(X)$-algebra $\mathcal{A}$ has fibre $\mathcal{A}_x=(A_1)_x \mathop{\ast}\limits^f (A_2)_x$ at $x\in X$ by (\ref{fibrefull}). Hence it is enough to prove that for all $a$ in the dense algebraic amalgamated free product $A_1\mathop{\circledast}\limits_{C(X)}A_2\subset \mathcal{A}$, the map $x\mapsto\| a_x\|$ is lower semi-continuous.
Let $a$ be such an element and choose a finite decomposition $e_{1,1}\otimes a=a_1\ldots a_{2n}\in M_m(\mathbb{C})\otimes\mathcal{A}$, where $m, n\in\mathbb{N}$ and $a_k$ belongs to $M_m(\mathbb{C})\otimes A_1$ or $M_m(\mathbb{C})\otimes A_2$ according to the parity of $k$. By Lemma \ref{freesepreduction}, there exist a separable unital C$^*$-subalgebra $C(Y)\subset C(X)$ containing the unit $1_{C(X)}$ of $C(X)$ and separable unital C$^*$-subalgebras $D_1\subset A_1$ and $D_2\subset A_2$ such that each $D_i$ is a continuous $C(Y)$-algebra and all the $a_k$ belong to $M_m(\mathbb{C})\otimes D_1$ or $M_m(\mathbb{C})\otimes D_2$ according to the parity of $k$. And so $a$ also belongs to the full free product $\mathcal{D}=D_1\mathop{\ast}\limits^f_{C(Y)} D_2$ which admits a $C(Y)$-linear embedding in $A_1\mathop{\ast}\limits^f _{C(X)} A_2$ (\cite[Thm.~4.2]{Ped}). Hence it is enough to prove that the map $y\in Y\mapsto\| a_y\|_{\mathcal{D}_y}$ is also lower semicontinuous. But this follows from Lemma~\ref{fullfreetech}. \qed \section{The reduced amalgamated free product}\label{red} Let us now study the continuity properties of certain reduced amalgamated free products over $C(X)$ of two unital continuous $C(X)$-algebras (\cite{Voi}, \cite{VoDyNi}). \\ \indent The main result of this section is the following: \begin{thm}\label{redfree} Let $X$ be a compact Hausdorff space and let $A_1, A_2$ be two unital continuous $C(X)$-algebras. For $i=1,2$, let $\phi_i: A_i\to C(X)$ be a unital projection such that for all $x\in X$, the induced state $(\phi_i)_x$ on the fibre $(A_i)_x$ has faithful GNS representation. If the C$^*$-algebra $A_1$ is exact, then the reduced amalgamated free product $$(A,\phi)=(A_1,\phi_1)\mathop{\ast}\limits_{C(X)} (A_2,\phi_2)$$ is a continuous $C(X)$-algebra with fibres $(A_x,\phi_x)=((A_1)_x,(\phi_1)_x) \ast \,((A_2)_x,(\phi_2)_x)$. \end{thm} The proof is similar to the one used by Dykema and Shlyakhtenko in \cite[\S 3]{DySh} to prove that a reduced free product of exact C$^*$-algebras is exact. We shall accordingly omit details except where our proof deviates from theirs. \begin{lem}\label{extension} Let $A$ be a $C(X)$-algebra and $J\triangleleft A$ be a closed two sided ideal in $A$. If the two $C(X)$-algebras $J$ and $A/J$ are continuous, then $A$ is also continuous. \end{lem} \noindent {\mbox{\textit{Proof}. }} The canonical $C(X)$-linear representation $\pi$ of $A$ on $J\oplus A/J$ is a continuous field of faithful representations. Indeed, if $a\in A$ satisfies $\pi_x(a_x)=0$ for some $x\in X$, then $(a a'+J)_x=0$ for all $a'\in A$, hence $(a+J)_x=0$, \textit{i.e.\/}\ $a_x\in J_x$. Now $a_x h_x=(a h)_x=0$ for all $h\in J$ and so $a_x=0$. \qed \begin{rem} The continuity of the $C(X)$-algebra $A$ does not imply the continuity of the quotient $A/J$. In fact, any $C(X)$-algebra $B$ is the quotient of the constant $C(X)$-algebra $A=C(X; B)=C(X)\otimes B$ by the (closed) two sided ideal $C_\Delta .A$, where $C_\Delta\subset C(X\times X)$ is the ideal of functions $f$ which satisfy $f(x, x)=0$ for all $x\in X$. \end{rem} \begin{lem}\label{stabeq} Let $B$ be a unital $C(X)$-algebra and $E$ a \textit{full} countably generated Hilbert $B$-module. Then $B$ is a continuous $C(X)$-algebra if and only if the $C(X)$-algebra $\mathcal{K}_B(E)$ of compact operators acting on $E$ (Definition~\ref{notationHilbertMod}) is continuous. \end{lem} \noindent {\mbox{\textit{Proof}. 
}} The C$^*$-algebras $B$ and $\mathcal{K}_B(E)$ are stably isomorphic by the Kasparov stabilisation theorem (\cite[Thm. 13.6.2]{blac}), \textit{i.e.\/}\ there is a $B$-linear isomorphism \\ \centerline{$\mathcal{K}(\ell^2(\mathbb{N}) )\otimes B\cong \mathcal{K}_B(\ell^2(\mathbb{N})\otimes E)\cong \mathcal{K}(\ell^2(\mathbb{N}) )\otimes \mathcal{K}_B(E)$ \qquad (\cite[Ex.~13.7.1]{blac}). } Note that these isomorphisms are also $C(X)$-linear since $1_B\in C(X)\subset B$. As the C$^*$-algebra $\mathcal{K}(\ell^2(\mathbb{N}) )$ is nuclear, Theorem 3.2 of \cite{KiWa} implies the equivalence between the continuity of the $C(X)$-algebras $B$, $\mathcal{K}(\ell^2(\mathbb{N}) )\otimes B$ and $\mathcal{K}_B(E)$. \qed Given a C$^*$-algebra $B$ and a Hilbert $B$-bimodule $E$, recall that the full Fock Hilbert $B$-bimodule associated to $E$ is the sum $\mathcal{F}_B(E)=B\oplus E\oplus (E\otimes_B E)\oplus\ldots= \bigoplus_{n\in\mathbb{N}}\;E^{(\otimes_B)n}$ and that for all $\xi\in E$, the creation operator $\ell(\xi)\in\mathcal{L}_B(\mathcal{F}_B( E))$ is defined by \begin{equation}\label{Fockspace} \begin{array}{lcll} \bullet\quad \ell(\xi) b=\xi b&\textrm{for}&b\in B=:E^0&\mathrm{ and}\\ \bullet\quad \ell(\xi)\,(\zeta_1\otimes\ldots\otimes\zeta_k)=\xi\otimes\zeta_1\otimes\ldots\otimes\zeta_k & \textrm{for}&\zeta_1,\ldots, \zeta_k\in E\,.& \end{array} \end{equation} Then the Toeplitz C$^*$-algebra $\mathcal{T}_B( E)$ of the Hilbert $B$-module $E$ (called \textit{extended Cuntz-Pimsner algebra} in \cite{DySh}) is the C$^*$-subalgebra of $\mathcal{L}_B(\mathcal{F}_B( E))$ generated by the operators $\ell(\xi),\,\xi\in E$ (\cite{Pim}). \begin{lem}\label{lemToeplitz} Let $B$ be a unital $C(X)$-algebra and let $E$ be a countably generated Hilbert $B$-bimodule such that the left module map $B\to\mathcal{L}_B (E)$ is injective and which satisfies \begin{equation} \label{diamond} f.\zeta=\zeta . f\quad\mathrm{for}\;\mathrm{all}\; \zeta\in E\;\mathrm{and}\; f\in C(X)\,. \end{equation} Then the Toeplitz $C(X)$-algebra $\mathcal{T}_B (E)$ of $E$ is a continuous $C(X)$-algebra with fibres $\mathcal{T}_{B_x} (E_x)$ if and only if the $C(X)$-algebra $B$ is continuous. \end{lem} Under the assumption of this Lemma, the canonical $*$-monomorphism $B \to \Pi_{x\in X} B_x$ induces for any Hilbert $B$-module $F$ a $*$-homomorphism $a\mapsto a\otimes 1$ from the C$^*$-algebra $\mathcal{K}_B\left(F \right)$ of compact operators acting on the Hilbert module $F$ to the tensor product $\mathcal{K}_B(F)\otimes_B \left(\Pi_{x\in X} B_x\right)\cong\Pi_{x\in X} \, \mathcal{K}_{B_x}\left( F\otimes_B B_x\right)$. And this map is injective as soon as there is a $B$-linear decomposition $F\cong B\oplus F'$ for some Hilbert $B$-module $F'$. After passing to the multiplier C$^*$-algebras, this gives for $F=\mathcal{F}_B(E)$ a $*$-monomorphism $\Theta=(\Theta_x)$: $$\mathcal{L}_B(\mathcal{F}_B(E)) = {\mathcal{M}}\bigl(\mathcal{K}_B(\mathcal{F}_B(E)) \bigr)\hookrightarrow \mathop\Pi\limits_{x\in X} {\mathcal{M}}\bigl(\mathcal{K}_{B_x}\left(\mathcal{F}_{B_x}(E_x)\right)\bigr)= \mathop\Pi\limits_{x\in X} \mathcal{L}_{B_x}\left(\mathcal{F}_{B_x}(E_x)\right),$$ where $E_x$ is the Hilbert $B_x$-module $E_x=E\otimes_B B_x\cong E/C_x(X)E$ for all $x\in X$.
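Before turning to the proof, here is a minimal illustration of the statement (it is not used in what follows). Take $B=C(X)$ and $E=C(X)$, viewed as a $C(X)$-bimodule through pointwise multiplication, so that the left module map is injective and condition (\ref{diamond}) holds trivially. Then $E^{(\otimes_B)n}\cong C(X)$ for every $n$, whence $\mathcal{F}_B(E)\cong \ell^2(\mathbb{N})\otimes C(X)$, the creation operator $\ell(1_{C(X)})$ is the unilateral shift $S\otimes 1$, and $\mathcal{T}_B(E)\cong C(X;\mathcal{T})$ is the constant field whose fibres are the classical Toeplitz algebra $\mathcal{T}=\mathcal{T}_{\mathbb{C}}(\mathbb{C})$, in accordance with the Lemma.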
Note that for all $\xi\in E$ and $x\in X$, we have $\ell(\xi)\, C_x(X)\mathcal{F}_B(E)\subset C_x(X)\mathcal{F}_B(E)$ by (\ref{diamond}) and so the element $\Theta_x(\ell(\xi) )$ satisfies the same creation rules (\ref{Fockspace}) as the creation operator $\ell(\xi_x)$, where $\xi_x=\xi\otimes 1_{B_x} \in E_x$. So, the restriction of $\Theta$ to $\mathcal{T}_B(E)$ takes values in the product $\mathop\Pi\limits_{x\in X} \mathcal{T}_{B_x}(E_x)$. \noindent \textit{Proof of Lemma~\ref{lemToeplitz}.} The continuity of the $C(X)$-algebra $\mathcal{T}_B ( E)$ clearly implies the continuity of the $C(X)$-algebra~$B$ since $B$ embeds $C(X)$-linearly in $\mathcal{T}_B ( E)$. Suppose conversely that the $C(X)$-algebra $B$ is continuous. Let $\widetilde{E}$ be the full countably generated Hilbert $B$-bimodule $\widetilde{ E}= E\oplus B$. Then the $C(X)$-algebra $\mathcal{K}_B(\mathcal{F}_B(\widetilde{E}) )$ of compact operators acting on the Hilbert $B$-module $\mathcal{F}_B(\widetilde{E})$ is a continuous $C(X)$-algebra by Lemma~\ref{stabeq}. Hence it is enough to prove that the Toeplitz C$^*$-algebra $\mathcal{T}_B (\widetilde{ E})$ admits a continuous field of faithful representations $\mathcal{T}_B (\widetilde{ E})\to\mathcal{L}_B(\mathcal{F}_B(\widetilde{E}) )={\mathcal{M}}( \mathcal{K}_B(\mathcal{F}_B(\widetilde{E})) )$ since $\mathcal{T}_B ( E)$ embeds in $\mathcal{T}_B (\widetilde{ E})=\mathcal{T}_B( E\oplus B)$ (\cite{Pim} or \cite[\S 3]{DySh}). \noindent\textit{Step 1.} \textit{ Let $\beta$ be the action of the group $\mathbb{T}=\mathbb{R}/\mathbb{Z}$ on $\mathcal{T}_B (\widetilde{ E})$ determined by $\beta_t(\ell(\zeta))=e^{2i\pi t}\ell(\zeta)$. Then the fixed point C$^*$-subalgebra $A$ under this action is a continuous $C(X)$-algebra and the map $A_x\to \mathcal{T}_{B_x} (\widetilde{E}_x)$ is injective for all $x\in X$. } Define the increasing sequence of $B$-subalgebras $A_n\subset A$ generated by the words of the form $w=\ell(\zeta_1)\ldots \ell(\zeta_k)\,\ell(\zeta_{k+1})^*\ldots\ell(\zeta_{2k})^*$ with $k\leq n$. Let also $A_0=B$. As $A=\overline{\cup A_n}$, it is enough to prove that each $A_n$ is continuous with appropriate fibres. Let $\widetilde{ E}_0=B$ and $\widetilde{ E}_n=\widetilde{ E}\otimes_B\ldots\otimes_B\widetilde{ E}= \widetilde{ E}^{(\otimes_B) n}$ for $n\geq 1$. Define also the projection $P_n\in \mathcal{L}_B(\mathcal{F}_B(\widetilde{E}) )$ on the Hilbert $B$-bimodule $F_n=\oplus_{0\leq k\leq n} \widetilde{E}_k$. In the decomposition $\mathcal{F}_B(\widetilde{E} )\cong F_n\otimes_B\mathcal{F}_B(\widetilde{E}^{(\otimes_B)(n+1)})$, $A_n$ acts on $\mathcal{F}_B(\widetilde{E})$ as $\Theta_n(A_n)\otimes 1$, where $\Theta_n(a)=P_n aP_n$ for $a\in A_n$. Thus the map $\Theta_n$ is faithful on $A_n$. Further, the kernel of the map $A_n\to A_{n-1}$ induced by the restriction $F_n=F_{n-1}\oplus \widetilde{E}_n\to F_{n-1}$ is the $B$-module generated by the words $w=\ell(\zeta_1)\ldots \ell(\zeta_n)\,\ell(\zeta_{n+1})^*\ldots\ell(\zeta_{2n})^*$ of length $2n$, which is isomorphic to $\mathcal{K}_B(\widetilde{E}_n)$. Hence, we have by induction $C(X)$-linear split exact sequences \begin{equation} 0\to \mathcal{K}_B(\widetilde{ E}_n)\to A_n\to A_{n-1}\to 0\,, \end{equation} and so, each $C(X)$-algebra $A_n$ is continuous by Lemma \ref{extension}.
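For orientation only (this plays no role in the argument), in the simplest case where $X$ is reduced to a point and $E=\mathbb{C}$, so that $B=\mathbb{C}$ and $\widetilde{E}\cong\mathbb{C}^2$, an element of $A_n$ is determined by its restrictions to the subspaces $(\mathbb{C}^2)^{\otimes k}$, $0\leq k\leq n$, and one checks directly that $A_n\cong \mathbb{C}\oplus M_2(\mathbb{C})\oplus M_4(\mathbb{C})\oplus\cdots\oplus M_{2^n}(\mathbb{C})$; the exact sequences above then reduce to $0\to M_{2^n}(\mathbb{C})\to A_n\to A_{n-1}\to 0$, and $A=\overline{\cup A_n}$ is the AF fixed point algebra of the gauge action on the Cuntz--Toeplitz algebra generated by two isometries with orthogonal ranges.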
\noindent\textit{Step 2.} \textit{ The Toeplitz C$^*$-algebra $\mathcal{T}_B (\widetilde{ E})$ is isomorphic to the crossed product $A\rtimes_\alpha \mathbb{N}$, where $\alpha: A\to A$ is the injective $C(X)$-linear endomorphism $\alpha(a)=LaL^*$, with $L=\ell(0\oplus 1_B)$ (\cite[Claim 2.1.3]{DySh}). Hence $\mathcal{T}_B (\widetilde{ E})$ is a continuous field with fibres $\bigl(\mathcal{T}_B (\widetilde{ E})\bigr)_x\cong A_x\rtimes\mathbb{N}\cong \mathcal{T}_{B_x} (\widetilde{ E}_x)$ for $x\in X$. } The C$^*$-algebra $\mathcal{T}_B(\widetilde{E})$ is generated by $A$ and $L$. Hence it is isomorphic to the $C(X)$-algebra $A\rtimes_\alpha\mathbb{N}$ (\cite[Claim 3.4]{DySh}). Let us now study the continuity question. Let $\overline{A}$ be the inductive limit of the system $A\mathop\to\limits^\alpha A\mathop\to\limits^\alpha \ldots$ with corresponding $C(X)$-linear monomorphisms $\mu_n: A\to \overline{A}$ ($n\in\mathbb{N}$). It is a continuous $C(X)$-algebra since $\bigcup\mu_n(A)$ is dense in $\overline{A}$ and the map $x\in X\mapsto \| \mu_n(a)_x\|=\| a_x\|$ is continuous for all $(a, n)\in A\times\mathbb{N}$. Let $\overline{\alpha}:\overline{A}\to \overline{A}$ be the $C(X)$-linear automorphism given by $\overline{\alpha}\bigl(\mu_n(a) \bigr) = \mu_n\bigl( \alpha(a) \bigr)$, with inverse $\mu_n(a)\mapsto \mu_{n+1}(a)$. Then the crossed product $\overline{A}\rtimes_{\overline{\alpha} }\mathbb{Z}$ is continuous over $X$ since $\mathbb{Z}$ is amenable (\cite{Ri}). Hence, if $p\in \overline{A}$ is the projection $p=\mu_0(1_A)$, the hereditary $C(X)$-subalgebra $p\left( \overline{A}\rtimes_{\overline{\alpha} }\mathbb{Z} \right) p =: A\rtimes_\alpha\mathbb{N}$ (\cite{DySh}) is also continuous with fibres $A_x\rtimes_{\alpha_x}\mathbb{N}$ ($x\in X$). \qed \noindent \textit{Proof of Theorem~\ref{redfree}.} By density, it is enough to study the case of elements $a$ in the algebraic amalgamated free product $A_1\mathop{\circledast}\limits_{C(X)} A_2$. But, for any such $a$, there are separable unital C$^*$-subalgebras $C(Y)\subset C(X)$, $D_1\subset A_1$ and $D_2\subset A_2$ with the same units such that $a$ also belongs to $D_1\mathop{\circledast}\limits_{C(Y)} D_2$ by Lemma \ref{freesepreduction}. And the reduced free product $(D_1,\phi_1)\mathop{\ast}\limits_{C(Y)} (D_2,\phi_2)$ embeds $C(X)$-linearly in $(A,\phi)=(A_1,\phi_1)\mathop{\ast}\limits_{C(X)} (A_2,\phi_2)$ by \cite[Thm. 1.3]{BlDy}. Further, any C$^*$-subalgebra of an exact C$^*$-algebra is exact. Thus, one can assume in the sequel that the compact Hausdorff space $X$ is second countable and that the $C(X)$-algebras $A_1, A_2$ are separable C$^*$-algebras. If the C$^*$-algebra $A_1$ is exact, then the $C(X)$-algebra $B=A_1\mathop{\otimes}\limits_{C(X)} A_2$ is continuous with fibres $B_x=(A_1)_x\mathop{\otimes}\limits^m (A_2)_x$ by ($\gamma_e$), and the conditional expectation $\rho=\phi_1\otimes\phi_2:B\to C(X)$ is a continuous field of states on $B$ such that each $\rho_x$ has faithful GNS representation ($x\in X$). Let $E$ be the full countably generated Hilbert $B$-bimodule $L^2(B,\rho)\otimes_{C(X)} B$ and let $\mathcal{F}_B( E)=B\oplus \left( L^2(B,\rho)\mathop{\otimes}\limits_{C(X)}B \right) \oplus \left( L^2(B,\rho)\mathop{\otimes}\limits_{C(X)} L^2(B,\rho)\mathop{\otimes}\limits_{C(X)} B\right) \oplus \ldots$ be its full Fock bimodule. Let also $\xi=\Lambda_{\phi}(1)\otimes 1\in E$.
As observed in \cite[Claim 3.3]{DySh}, the Toeplitz C$^*$-algebra $\mathcal{T}_B(E)\subset \mathcal{L}_B(\mathcal{F}_B(E) )$ is generated by the left action of $B$ on $\mathcal{F}_B(E)$ and the operator $\ell(\xi)$, because $\ell(b_1\xi b_2)=b_1\ell(\xi) b_2$ for all $b_1, b_2$ in $B$. Consider the conditional expectation $\mathfrak{E}: \mathcal{T}_B ( E)\to~B$ defined by compression with the orthogonal projection from $\mathcal{F}_B( E)$ onto the first summand $B\subset\mathcal{F}_B(E)$. Then Theorem~2.3 of \cite{Sh} implies that $B$ and the $C(X)$-algebra generated by the nontrivial isometry $\ell(\xi)$ are free with amalgamation over $C(X)$ in $(\mathcal{T}_B ( E),\rho\circ\mathfrak{E})$ because $\ell(\xi)^*\,b\,\ell(\xi)= \rho(b)$ for all $b\in B$. By \cite{Sh}, the restriction of $\mathfrak{E}$ to the C$^*$-subalgebra $C^*(\ell(\xi) )\subset \mathcal{T}_B ( E)$ takes values in $C(X)$ and there exists a unitary $u\in C^*(\ell(\xi) )$ s.t. $\mathfrak{E}(u^k)=0$ for every non-zero integer $k$. The two embeddings $\pi_i: A_i\to \mathcal{T}_B(E)$ ($i=1, 2$) given by $\pi_1(a_1)=u\, (a_1\otimes 1)\, u^{-1}$ and $\pi_2(a_2)= u^2\, (1\otimes a_2)\, u^{-2}$ have free images in $(\mathcal{T}_B ( E),\rho\circ\mathfrak{E})$. Thus they generate a $C(X)$-linear monomorphism $\pi: A\hookrightarrow \mathcal{T}_B ( E)$ extending each $\pi_i$ and satisfying $\rho\circ\mathfrak{E}\circ\pi=\phi$ (Lemma~3.1 and Proposition~3.2 of \cite{DySh}, or \cite{BlDy}). Lemma~\ref{lemToeplitz} above entails that $A$ is a continuous $C(X)$-algebra with fibre at $x\in X$ its image in $\mathcal{T}_{B_x}(E_x)$, i.e. the reduced free product $((A_1)_x,(\phi_1)_x)\ast\,((A_2)_x,(\phi_2)_x)$. \qed \begin{rem} The existence of an embedding of $A_1$ in $C(X; {\mathcal{O}_2})$ cannot give a direct proof of Theorem \ref{redfree} since there is no $C(X)$-linear Hahn-Banach theorem (\cite[4.2]{bla3}). \end{rem} \begin{cor} Any separable unital continuous $C(X)$-algebra $A$ admits a $C(X)$-linear unital embedding into a unital continuous field $\widetilde{A}$ with simple fibres. \end{cor} \noindent {\mbox{\textit{Proof}. }} Let $\phi: A\to C(X)$ be a $C(X)$-linear unital map such that each induced state $\phi_x: A_x\to\mathbb{C}$ is faithful. Then, the reduced free product $$(\widetilde{A},\Phi)=(A\otimes\mathbb{C}^2;\phi\otimes tr_2)\ast_{C(X)} (C(X)\otimes\mathbb{C}^3;\mathrm{id}\otimes tr_3)$$ is continuous by Theorem~\ref{redfree}, and it has simple fibres (\cite{Av}, \cite{Ba}). \qed \begin{cor}\label{cor 4.8} Let $X$ be a second countable \textsl{perfect} compact space and $A_1$ a unital separable continuous $C(X)$-algebra. Then the following assertions are equivalent. \begin{itemize} \item[$\alpha )$] The C$^*$-algebra $A_1$ is exact. \item[$\beta )$] For every unital separable continuous $C(X)$-algebra $A_2$ and all continuous fields of faithful states $\phi_1$ and $\phi_2$ on $A_1$ and $A_2$, the reduced amalgamated free product $(A,\phi)=(A_1,\phi_1)\mathop{\ast}\limits_{C(X)} (A_2,\phi_2)$ is a continuous $C(X)$-algebra with fibres $(A_x,\phi_x)=((A_1)_x,(\phi_1)_x) \ast \,((A_2)_x,(\phi_2)_x)$. \end{itemize} \end{cor} \noindent {\mbox{\textit{Proof}. }} We only need to prove the implication $\beta)\Rightarrow\alpha)$ since the reverse implication has already been proved in Theorem~\ref{redfree}.
Now, if a pair $(A_2, \phi_2)$ satisfies the hypotheses of $\beta )$ and we define the $C(X)$-algebra $B:=A_1\otimes_{C(X)} A_2$, the $C(X)$-linear projection $\rho=\phi_1\otimes\phi_2: B\to C(X)$ and the Hilbert $B$-module $E=L^2(B, \rho)\otimes_{C(X)} B$, then we have a $C(X)$-linear isomorphism $A\rtimes_\alpha\mathbb{N}\cong \mathcal{T}_B(E\oplus B)$ (Step 2 of Lemma \ref{lemToeplitz}). Hence, the Toeplitz $C(X)$-algebra $\mathcal{T}_B(E\oplus B)$ is continuous since the group $\mathbb{Z}$ is amenable (see e.g.\ \cite{Ri}). And so, the amalgamated tensor product $A_1\otimes_{C(X)} A_2$ is a continuous $C(X)$-algebra for any unital separable continuous $C(X)$-algebra $A_2$ (Lemma \ref{lemToeplitz}). But this implies the exactness of the C$^*$-algebra $A_1$ if the metrizable space $X$ is perfect (\cite[Theorem 1.1]{BdWa}). \qed \begin{rem} Corollary \ref{cor 4.8} does not always hold if the space $X$ is not perfect. For instance, if $X$ is reduced to a point, then the reduced amalgamated free product of $A_1$ and $A_2$ is always continuous. \end{rem} \noindent \email{[email protected]}\\ \address{IMJ, 175, rue du Chevaleret, F--75013 Paris} \end{document}
\begin{document} \title[On Holomorphic Curves in algebraic torus] {On Holomorphic Curves in algebraic torus} \author[Masaki Tsukamoto]{Masaki Tsukamoto} \subjclass[2000]{ 32H30} \keywords{entire holomorphic curve in $(\mathbb{C}^*)^n$, polynomial growth} \maketitle \begin{abstract} We study entire holomorphic curves in the algebraic torus, and show that they can be characterized by the ``growth rate'' of their derivatives. \end{abstract} \section{Introduction} Let $z = x + y \sqrt{-1}$ be the natural coordinate in the complex plane $\mathbb{C}$, and let $f(z)$ be an entire holomorphic function in the complex plane. Suppose that there are a non-negative integer $m$ and a positive constant $C$ such that \[ |f(z)| \leq C |z|^m, \quad (|z| \geq 1). \] Then $f(z)$ becomes a polynomial with $\deg f(z) \leq m$. This is a well-known fact in the complex analysis in one variable. In this paper, we prove an analogous result for entire holomorphic curves in the algebraic torus $(\mathbb{C}^*)^n := (\mathbb{C} \setminus \{0\})^n$. Let $[z_0:z_1:\cdots :z_n]$ be the homogeneous coordinate in the complex projective space $\mathbb{C} P^n$. We define the complex manifold $X \subset \mathbb{C} P^n$ by \[ X := \{ [1:z_1:\cdots :z_n] \in \mathbb{C} P^n |\, z_i \neq 0, \, (1 \leq i \leq n) \} \cong (\mathbb{C}^*)^n .\] $X$ is a natural projective embedding of $(\mathbb{C}^*)^n$. We use the restriction of the Fubini-Study metric as the metric on $X$. For a holomorphic map $f: \mathbb{C} \to X$, we define its norm $|df|(z)$ by setting \begin{equation}\label{def: norm} |df|(z) := \sqrt{2}\, |df(\partial /\partial z)| \quad \text{for all $z \in \mathbb{C}$} . \end{equation} Here $\tangentvector{z} = \frac{1}{2} \,(\tangentvector{x} - \sqrt{-1} \tangentvector{y})$, and the normalization factor $\sqrt{2}$ comes from $|\tangentvector{z}| = 1/\sqrt{2}$. The main result of this paper is the following. \begin{theorem}\label{main theorem} Let $f: \mathbb{C} \to X$ be a holomorphic map. Suppose there are a non-negative integer $m$ and a positive constant $C$ such that \begin{equation} |df|(z) \leq C |z|^m, \quad (|z|\geq 1). \label{polynomial growth} \end{equation} Then there are polynomials $g_1(z)$, $g_2(z)$, $\cdots$, $g_n(z)$ with $\deg g_i(z) \leq m+1$, $(1\leq i \leq n)$, such that \begin{equation} f(z) = [1: e^{g_1(z)} : e^{g_2(z)}: \cdots : e^{g_n(z)}]. \label{exp(m+1)} \end{equation} Conversely, if a holomorphic map $f(z)$ is expressed by $(\ref{exp(m+1)})$ with polynomials $g_i(z)$ of degree at most $m+1$, $f(z)$ satisfies the ``polynomial growth condition'' $(\ref{polynomial growth})$. \end{theorem} The direction (\ref{exp(m+1)}) $\Rightarrow$ (\ref{polynomial growth}) is easier, and the substantial part of the argument is the direction (\ref{polynomial growth}) $\Rightarrow$ (\ref{exp(m+1)}). If we set $m=0$ in the above, we get the following corollary. \begin{corollary}\label{BD} Let $f:\mathbb{C} \to X$ be a holomorphic map with bounded derivative, i.e., $|df|(z) \leq C$ for some positive constant $C$. Then there are complex numbers $a_i$ and $b_i$, $(1\leq i \leq n)$, such that \[ f(z) = [\, 1: e^{a_1 z + b_1}: e^{a_2 z + b_2}: \cdots : e^{a_n z + b_n}\, ] . \] \end{corollary} This is the theorem of [BD, Appendice]. The author also proves this in [T, Section 6]. \begin{remark} The essential point of Theorem \ref{main theorem} is the statement that the degrees of the polynomials $g_i(z)$ are at most $m+1$. 
Actually, it is easy to prove that if $f(z)$ satisfies the condition (\ref{polynomial growth}) then $f(z)$ can be expressed by (\ref{exp(m+1)}) with polynomials $g_i(z)$ of degree at most $2m+2$. (See Section 4.) \end{remark} Theorem \ref{main theorem} states that holomorphic curves in $X$ can be characterized by the growth rate of their derivatives. We can formulate this fact more clearly as follows; Let $g_1(z),\, g_2(z),\, \cdots,\, g_n(z)$ be polynomials, and define $f:\mathbb{C} \to X$ by (\ref{exp(m+1)}). We define the integer $m \geq -1$ by setting \begin{equation}\label{def: max deg} m+1 := \max (\deg g_1(z), \deg g_2(z), \cdots , \deg g_n(z) ) . \end{equation} We have $m = -1$ if and only if $f$ is a constant map. $m$ can be obtained as the growth rate of $|df|$: \begin{theorem}\label{thm: order of |df|} If $m\geq 0$, we have \[ \limsup_{r\to \infty} \frac{\max_{|z| =r} \log |df|(z)}{\log r} = m. \] \end{theorem} \begin{corollary} Let $\lambda$ be a non-negative real number, and let $[\lambda ]$ be the maximum integer not greater than $\lambda$. Let $f:\mathbb{C} \to X$ be a holomorphic map, and suppose that there is a positive constant $C$ such that \begin{equation}\label{real growth rate} |df| (z) \leq C |z|^{\lambda} , \quad (|z| \geq 1). \end{equation} Then we have a positive constant $C'$ such that \[ |df|(z) \leq C' |z|^{[\lambda ]}, \quad (|z| \geq 1). \] \end{corollary} \begin{proof} If $f$ is a constant map, the statement is trivial. Hence we can suppose $f$ is not constant. From Theorem \ref{main theorem}, $f$ can be expressed by (\ref{exp(m+1)}) with polynomials $g_i(z)$ of degree at most $[\lambda ] + 2$. Since $f$ satisfies (\ref{real growth rate}), we have \[ \limsup_{r\to \infty} \frac{\max_{|z| =r} \log |df|(z)}{\log r} \leq \lambda .\] From Theorem \ref{thm: order of |df|}, this shows $\deg g_i(z) \leq [\lambda ] +1$ for all $g_i(z)$. Then, Theorem \ref{main theorem} gives the conclusion. \end{proof} \section{Proof of (\ref{exp(m+1)}) $\Rightarrow$ (\ref{polynomial growth})} Let $f:\mathbb{C} \to X$ be a holomorphic map. From the definition of $X$, we have holomorphic maps $f_i : \mathbb{C} \to \mathbb{C} ^*$, $(1\leq i \leq n)$, such that $f(z) = [1: f_1(z) : \cdots : f_n(z) ]$. The norm $|df|(z)$ in (\ref{def: norm}) is given by \begin{equation}\label{def: Fubini-Study} |df|^2 (z) = \frac{1}{4\pi} \Delta \log \left( 1+ \sum_{i=1}^n |f_i(z)|^2\right) , \quad (\Delta := \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} = 4\, \frac{\partial ^2}{\partial z \partial \bar{z}} ). \end{equation} Suppose that $f$ is expressed by (\ref{exp(m+1)}), i.e., $f_i(z) = \exp (g_i(z))$ with a polynomial $g_i(z)$ of degree $\leq m+1$. We will repeatedly use the following calculation in this paper. \begin{equation} \label{estimate of |df|^2 by f_i and f_i/f_j} \begin{split} |df|^2 &= \frac{1}{\pi}\left[ \frac{\sum_i |f'_i|^2}{(1 + \sum_i |f_i|^2)^2} + \frac{\sum_{i<j} |g'_i - g'_j|^2 |f_i|^2 |f_j|^2}{(1 + \sum_i |f_i|^2)^2} \right] , \\ &\leq \frac{1}{\pi} \left[ \sum_i \frac{|f'_i|^2}{(1 + |f_i|^2)^2} + \sum_{i<j} \frac{|g'_i - g'_j|^2 |f_i|^2 |f_j|^2}{(|f_i|^2 + |f_j|^2)^2}\right] , \\ &= \frac{1}{\pi} \left[ \sum_i \frac{|f'_i|^2}{(1 + |f_i|^2)^2} + \sum_{i<j} \frac{|(f_i/f_j)'|^2}{(1 + |f_i/f_j|^2)^2} \right] ,\\ &= \sum_i |df_i|^2 + \sum_{i<j} |d(f_i/f_j)|^2 . \end{split} \end{equation} Here we set \[ |df_i| := \frac{1}{\sqrt{\pi}} \frac{|f'_i|}{1 + |f_i|^2} \quad \text{and}\quad |d(f_i/f_j)| := \frac{1}{\sqrt{\pi}}\frac{|(f_i/f_j)'|}{1 + |f_i/f_j|^2}. 
\] These are the norms of the differentials of the maps $f_i,\, f_i/f_j: \mathbb{C} \to \mathbb{C} P^1$. We have $f_i(z) = \exp (g_i(z))$ and $f_i(z)/ f_j(z) = \exp (g_i(z) -g_j(z))$, and the degrees of the polynomials $g_i(z)$ and $g_i(z) - g_j(z)$ are at most $m+1$. Then, the next Lemma gives the desired conclusion: \[ |df| (z) \leq C |z|^m, \quad (|z| \geq 1), \] for some positive constant $C$. \begin{lemma}\label{lemma: |dexp(m+1)|} Let $g(z)$ be a polynomial of degree $\leq m+1$, and set $h(z) := e^{g(z)}$. Then we have a positive constant $C$ such that \[ |dh|(z) = \frac{1}{\sqrt{\pi}}\frac{|h'(z)|}{1 + |h(z)|^2} \leq C |z|^m, \quad (|z| \geq 1). \] \end{lemma} \begin{proof} We have \[ \sqrt{\pi}\, |dh| = \frac{|g'|}{|h| + |h|^{-1}} \leq |g'| \min (|h|, |h|^{-1}) \leq |g'| .\] Since the degree of $g'(z)$ is at most $m$, we easily get the conclusion. \end{proof} \section{Preliminary estimates}\label{section: preliminary estimates} In this section, $k$ is a fixed positive integer. The following is a standard fact in the Nevanlinna theory. \begin{lemma}\label{lemma: easy estimate} Let $g(z)$ be a polynomial of degree $k$, and set $h(z) = e^{g(z)}$. Then we have a positive constant $C$ such that \[ \int_1^r \frac{dt}{t} \int_{|z| \leq t} |dh|^2(z)\, dxdy \leq C r^k , \quad (r \geq 1). \] \end{lemma} \begin{proof} Since $|dh|^2 = \frac{1}{4\pi} \Delta \log (1 + |h|^2)$, Jensen's formula gives \begin{equation*} \int_1^r \frac{dt}{t} \int_{|z| \leq t} |dh|^2\, dxdy = \frac{1}{4\pi} \int_{|z| = r} \log ( 1 + |h|^2)\, d\theta - \frac{1}{4\pi} \int_{|z| = 1} \log ( 1 + |h|^2)\, d\theta . \end{equation*} Here $(r, \theta)$ is the polar coordinate in the complex plane. We have \[ \log (1+|h|^2) \leq 2\, |\mathrm{Re}\, g(z)| + \log 2 \leq C r^k, \quad (r := |z| \geq 1). \] Thus we get the conclusion. \end{proof} Let $I$ be a closed interval in $\mathbb{R}$ and let $u(x)$ be a real valued function defined on $I$. We define its $\mathcal{C}^1$-norm $\Dnorm{u}{1}{(I)}$ by setting \[ \Dnorm{u}{1}{(I)} := \sup_{x\in I} |u(x)| + \sup_{x\in I} |u'(x)|. \] For a Lebesgue measurable set $E$ in $\mathbb{R}$, we denote its Lebesgue measure by $|E|$. \begin{lemma}\label{lemma: cosx} There is a positive number $\varepsilon$ satisfying the following$:$ If a real valued function $u(x) \in \mathcal{C}^1 [0, \pi]$ satisfies \[ \Dnorm{u(x) - \cos x}{1}{[0,\, \pi ]} \leq \varepsilon ,\] then we have \[ |u^{-1} ([-t, t]) | \leq 4 t \quad \text{for any $t \in [0, \varepsilon]$}. \] \end{lemma} \begin{proof} The proof is just an elementary calculus. For any small number $\delta >0$, if we choose $\varepsilon$ sufficiently small, we have \[ u^{-1} ([-t, t]) \subset [\pi /2 -\delta , \pi /2 + \delta ]. \] Let $x_1$ and $x_2$ be any two elements in $u^{-1} ([-t, t])$. From the mean value theorem, we have $y \in [\pi /2 -\delta , \pi /2 + \delta ]$ such that \[ u(x_1) - u(x_2) = u'(y)\, (x_1 - x_2) .\] From $\sin (\pi /2) = 1$, we can suppose that \[ |u'(y)| \geq 1/2 .\] Hence \[ |x_1 - x_2| \leq 2 \, |u(x_1) - u(x_2)| \leq 4 t .\] Thus we get \[ |u^{-1} ([-t, t])| \leq 4t .\] \end{proof} Using a scale change of the coordinate, we get the following. \begin{lemma}\label{lemma: coskx} There is a positive number $\varepsilon$ satisfying the following$:$ If a real valued function $u(x) \in \mathcal{C}^1 [0, 2\pi]$ satisfies \[ \Dnorm{u(x) - \cos kx}{1}{[0,\, 2\pi ]} \leq \varepsilon ,\] then we have \[ |u^{-1} ([-t, t]) | \leq 8 t \quad \text{for any $t \in [0, \varepsilon]$}. 
\] \end{lemma} \begin{proof} \[u^{-1} ([-t, t]) = \bigcup_{j=0}^{2k-1} u^{-1} ([-t, t])\cap [j\pi /k,\, (j+1)\pi /k]. \] Applying Lemma \ref{lemma: cosx} to $u(x/k)$, we have \[ |u^{-1} ([-t, t])\cap [0, \pi /k]\, | \leq 4t/k. \] In a similar way, \[ |u^{-1} ([-t, t])\cap [j\pi /k,\, (j+1)\pi /k]\, | \leq 4t/k , \quad (j = 0, 1, \cdots , 2k-1). \] Thus we get the conclusion. \end{proof} Let $E$ be a subset of $\mathbb{C}$. For a positive number $r$, we set \begin{equation*} E(r) := \{ \theta \in \mathbb{R}/2\pi \mathbb{Z} |\, r e^{i\theta} \in E\} . \end{equation*} In the rest of the section, we always assume $k\geq 2$. \begin{lemma}\label{lemma: monic} Let $C$ be a positive constant, and let $g(z) = z^k + a_1 z^{k-1} + \cdots + a_k$ be a monic polynomial of degree $k$. Set \[ E := \{ z \in \mathbb{C} |\, |\mathrm{Re}\, g(z) | \leq C |z| \} . \] Then we have a positive number $r_0$ such that \[ |E(r)| \leq 8C/ r^{k-1}, \quad (r \geq r_0). \] \end{lemma} \begin{proof} Set $v(z) := \mathrm{Re}\, (a_1 z^{k-1} + a_2 z^{k-2} + \cdots + a_k)$. Then we have \[ |\mathrm{Re}\, g(re^{i\theta}) | \leq Cr \quad \Longleftrightarrow \quad |\cos k\theta + v(re^{i\theta})/r^k| \leq C/r^{k-1} .\] Set $u(\theta) := \cos k\theta + v(re^{i\theta})/r^k$. It is easy to see that \[ \Dnorm{u(\theta ) - \cos k\theta }{1}{[0, 2\pi]} \leq \mathrm{const}/r , \quad (r \geq 1). \] Then we can apply Lemma \ref{lemma: coskx} to this $u(\theta)$, and we get \[ |E(r)| = |u^{-1}([-C/r^{k-1},\, C/ r^{k-1}]) | \leq 8C /r^{k-1} , \quad (r \gg 1).\] \end{proof} The following is the key lemma. \begin{lemma}\label{lemma: key} Let $g(z) = a_0 z^k + a_1 z^{k-1} + \cdots + a_k$ be a polynomial of degree $k$, $(a_0 \neq 0)$. Set \[ E := \{ z\in \mathbb{C} |\, |\mathrm{Re}\, g(z)| \leq |z| \} .\] Then we have a positive number $r_0$ such that \[ |E(r)| \leq \frac{8}{|a_0|r^{k-1}} ,\quad (r \geq r_0) . \] \end{lemma} \begin{proof} Let $\arg a_0$ be the argument of $a_0$, and set $\alpha := \arg a_0 /k$. We define the monic polynomial $g_1(z)$ by \[ g_1(z) := \frac{1}{|a_0|} g(e^{-i\alpha}z) = z^k + \cdots .\] Then we have \[ |\mathrm{Re}\, g(re^{i\theta})| \leq r \Longleftrightarrow |\mathrm{Re}\, g_1(re^{i(\theta + \alpha )})| \leq r/|a_0| .\] Hence the conclusion follows from Lemma \ref{lemma: monic}. \end{proof} \begin{lemma}\label{lemma: integration over E^c} Let $g(z)$ be a polynomial of degree $k$, and we define $E$ as in Lemma \ref{lemma: key}. Set $h(z) := e^{g(z)}$. Then we have \[ \int_{\mathbb{C} \setminus E} |dh|^2(z) \,dxdy < \infty .\] \end{lemma} \begin{proof} Since $|h| = e^{\mathrm{Re}\, g}$, the argument in the proof of Lemma \ref{lemma: |dexp(m+1)|} gives \[ \sqrt{\pi}\, |dh| \leq |g'|\min (|h|, |h|^{-1}) \leq |g'|\, e^{-|\mathrm{Re}\, g|}. \] $g'(z)$ is a polynomial of degree $k-1$, and we have $|\mathrm{Re}\, g| > |z|$ for $z \in \mathbb{C} \setminus E$. Hence we have a positive constant $C$ such that \[ |dh|(z) \leq C |z|^{k-1} e^{-|z|}, \quad \text{if $z \in \mathbb{C} \setminus E$ and $|z|\geq 1$}. \] The conclusion follows from this estimate. \end{proof} \section{Proof of (\ref{polynomial growth}) $\Rightarrow$ (\ref{exp(m+1)})}\label{section: proof of main theorem} Let $f = [1:f_1:f_2:\cdots :f_n] : \mathbb{C} \to X$ be a holomorphic map with $|df|(z)\leq C|z|^m$, $(|z|\geq 1)$. Since $\exp : \mathbb{C} \to \mathbb{C} ^*$ is the universal covering, we have entire holomorphic functions $g_i(z)$ such that $f_i(z) = e^{g_i(z)}$. We will prove that all $g_i(z)$ are polynomials of degree $\leq m+1$. 
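Before starting, it may help to keep a concrete example in mind; the following computation is only an illustration and is not used in the argument. For $n=1$ and $f(z)=[1:e^{z^2}]$ one finds, writing $z=re^{i\theta}$, \[ |df|(z)=\frac{1}{\sqrt{\pi}}\,\frac{|g_1'(z)|\,|e^{g_1(z)}|}{1+|e^{g_1(z)}|^2} =\frac{2r}{\sqrt{\pi}}\,\frac{1}{2\cosh (r^2\cos 2\theta )}\leq \frac{r}{\sqrt{\pi}}, \] with equality exactly on the four rays $\cos 2\theta =0$. Hence this $f$ satisfies the condition (\ref{polynomial growth}) with $m=1$ and $\deg g_1(z)=2=m+1$, so the bound $m+1$ in Theorem \ref{main theorem} is attained, and $\max_{|z|=r}|df|(z)=r/\sqrt{\pi}$ exhibits the growth exponent of Theorem \ref{thm: order of |df|}. Moreover, the set $E=\{ z\in\mathbb{C} \,|\, |\mathrm{Re}\, z^2|\leq |z| \}=\{ |\cos 2\theta |\leq 1/r \}$ appearing in Lemma \ref{lemma: key} has angular measure $|E(r)|\sim 4/r$ for large $r$, consistent with the bound $8/r$ given there.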
The proof falls into two steps. In the first step, we prove all $g_i(z)$ are polynomials. In the second step, we show $\deg g_i(z) \leq m+1$. The second step is the harder part of the proof. Schwarz's formula gives\footnote{The idea of using Schwarz's formula is due to [BD, Appendice]. The author gives a different approach in [T, Section 6].} \[ \pi r^k g_i^{(k)}(0) = k! \int_{|z|=r} \mathrm{Re}\, (g_i(z))\, e^{-k\sqrt{-1}\theta} d\theta = k! \int_{|z| = r} \log |f_i(z)|\, e^{-k\sqrt{-1}\theta} d\theta , \quad (k\geq 1).\] We have \[ \left| \log |f_i|\right| \leq \log (|f_i| + |f_i|^{-1}) = \log (1 + |f_i|^2) - \log |f_i| \leq \log (1+ \sum |f_j|^2) - \log |f_i|. \] Hence \[ \pi r^k |g_i^{(k)}(0)| \leq k! \int_{|z|=r} \log (1+ \sum |f_j|^2)\, d\theta - k! \int_{|z|=r}\log |f_i|\, d\theta . \] Since $\log |f_i| = \mathrm{Re}\, g_i(z)$ is a harmonic function, the second term in the above is equal to the constant $-2\pi k!\, \mathrm{Re}\, g_i(0)$. Since $|df|^2 = \frac{1}{4\pi}\Delta \log (1 + \sum |f_j|^2)$, Jensen's formula gives \[\frac{1}{4\pi} \int_{|z|=r} \log (1+ \sum |f_j|^2)\, d\theta - \frac{1}{4\pi} \int_{|z|=1} \log (1+ \sum |f_j|^2)\, d\theta = \int_1^r \frac{dt}{t} \int_{|z|\leq t} |df|^2(z) \, dxdy .\] Thus we get \begin{equation}\label{estimate of g_i^{(k)}(0)} \frac{r^k}{4k!}|g_i^{(k)}(0)| \leq \int_1^r \frac{dt}{t} \int_{|z|\leq t} |df|^2(z) \, dxdy + \mathrm{const}. \end{equation} Since $|df|(z) \leq C |z|^m$, $(|z|\geq 1)$, this shows $g_i^{(k)}(0) = 0$ for $k\geq 2m + 3$. Hence $g_i(z)$ are polynomials. Next we will prove $\deg g_i(z) \leq m+1$. We define $E_i, \, E_{ij} \subset \mathbb{C}$, $(1\leq i \leq n, 1\leq i<j\leq n)$, by setting \begin{align*} &\deg g_i(z) \leq m+1 \Longrightarrow E_i := \emptyset , \\ &\deg g_i(z) \geq m+2 \Longrightarrow E_i := \{ z\in \mathbb{C} | \, |\mathrm{Re}\, g_i(z)| \leq |z| \} ,\\ &\deg (g_i(z) -g_j(z)) \leq m+1 \Longrightarrow E_{ij} := \emptyset ,\\ &\deg (g_i(z) -g_j(z)) \geq m+2 \Longrightarrow E_{ij} := \{ z\in \mathbb{C} | \, |\mathrm{Re}\, (g_i(z) - g_j(z)) | \leq |z| \}. \end{align*} We set $E := \bigcup_{i} E_i \cup \bigcup_{i<j} E_{ij}$. Then we have $E(r) =\bigcup_{i} E_i(r) \cup \bigcup_{i<j} E_{ij}(r)$ for $r>0$. From Lemma \ref{lemma: key}, we have positive constants $r_0$ and $C'$ such that \begin{equation} |E(r)| \leq C'/r^{m+1}, \quad (r\geq r_0). \label{estimate of E(r)} \end{equation} We have \begin{equation}\label{E and E^c} \begin{split} \int_1^r \frac{dt}{t} \int_{|z|\leq t}& |df|^2(z) \, dxdy \\ &= \int_1^r \frac{dt}{t} \int_{E\cap \{ |z|\leq t\}} |df|^2(z) \, dxdy + \int_1^r \frac{dt}{t} \int_{E^c \cap \{ |z|\leq t\}} |df|^2(z) \, dxdy . \end{split} \end{equation} Using (\ref{estimate of E(r)}) and $|df|(z)\leq C |z|^m$, $(|z|\geq 1)$, we can estimate the first term in (\ref{E and E^c}) as follows: \[ \int_{E\cap \{ 1\leq |z|\leq t\}} |df|^2(z) \, dxdy \leq C^2 \int_{E\cap \{ 1\leq |z|\leq t\}} r^{2m+1} \, dr d\theta = C^2 \int_1^t r^{2m+1}|E(r)| dr .\] If $t \geq r_0$, we have \[ \int_{r_0}^t r^{2m+1}|E(r)| dr \leq C' \int_{r_0}^t r^m dr = \frac{C'}{m+1} t^{m+1} - \frac{C'}{m+1} {r_0}^{m+1}.\] Thus \begin{equation}\label{estimate of the first term} \int_1^r \frac{dt}{t} \int_{E\cap \{ |z|\leq t\}} |df|^2(z) \, dxdy \leq \mathrm{const} \cdot r^{m+1}, \quad (r \geq 1). 
\end{equation} Next we will estimate the second term in (\ref{E and E^c}) by using the inequality (\ref{estimate of |df|^2 by f_i and f_i/f_j}) given in Section 2: \[ |df|^2 \leq \sum_i |df_i|^2 + \sum_{i<j} |d(f_i/f_j)|^2 .\] If $\deg g_i(z) \leq m+1$, Lemma \ref{lemma: easy estimate} gives \[ \int_1^r \frac{dt}{t} \int_{E^c \cap \{ |z|\leq t\}} |df_i|^2(z) \, dxdy \leq \int_1^r \frac{dt}{t} \int_{|z|\leq t} |df_i|^2(z) \, dxdy \leq \mathrm{const} \cdot r^{m+1} .\] If $\deg g_i(z) \geq m+2$, Lemma \ref{lemma: integration over E^c} gives \[ \int_{E^c \cap \{ |z|\leq t\}} |df_i|^2(z) \, dxdy \leq \int_{E_i^c \cap \{ |z|\leq t\}} |df_i|^2(z) \, dxdy \leq \mathrm{const} .\] The terms for $|d(f_i/f_j)|$ can be also estimated in the same way, and we get \begin{equation}\label{estimate of the second term} \int_1^r \frac{dt}{t} \int_{E^c \cap \{ |z|\leq t\}} |df|^2(z) \, dxdy \leq \mathrm{const}\cdot r^{m+1} , \quad (r\geq 1). \end{equation} From (\ref{E and E^c}), (\ref{estimate of the first term}), (\ref{estimate of the second term}), we get \[ \int_1^r \frac{dt}{t} \int_{|z|\leq t} |df|^2(z) \, dxdy \leq \mathrm{const}\cdot r^{m+1}, \quad (r \geq 1) .\] From (\ref{estimate of g_i^{(k)}(0)}), this shows $g_i^{(k)}(0) = 0$ for $k\geq m+2$. Thus $g_i(z)$ are polynomials with $\deg g_i(z) \leq m+1$. This concludes the proof of Theorem \ref{main theorem}. \section{Proof of Theorem \ref{thm: order of |df|} and a corollary} \subsection{Proof of Theorem \ref{thm: order of |df|}} The proof of Theorem \ref{thm: order of |df|} needs the following lemma. \begin{lemma}\label{lem: preparation} Let $k\geq 1$ be an integer, and let $\delta$ be a real number satisfying $0< \delta <1$. Let $g(z) = a_0 z^k + a_1 z^{k-1} + \cdots + a_k$ be a polynomial of degree $k$, $(a_0\neq 0)$. We set $h(z) := e^{g(z)}$ and define $E \subset \mathbb{C}$ by \[ E := \{ z\in \mathbb{C} |\, |\mathrm{Re}\, g(z)| \leq |z|^{\delta}\}.\] Then we have \[ \int_{\mathbb{C} \setminus E} |dh|^2 < \infty ,\] and there is a positive number $r_0$ such that \[ |E(r)| \leq \frac{8}{|a_0|r^{k-\delta}}, \quad (r\geq r_0) .\] \end{lemma} \begin{proof} This can be proven by the methods in Section \ref{section: preliminary estimates}. We omit the detail. \end{proof} Let $g_1(z), \, g_2(z), \, \cdots ,\, g_n(z)$ be polynomials, and define the holomorphic map $f: \mathbb{C} \to X$ and the integer $m\geq -1$ by (\ref{exp(m+1)}) and (\ref{def: max deg}). Here we suppose $m\geq 0$, i.e., $f$ is not a constant map. We will prove Theorem \ref{thm: order of |df|}. From Theorem \ref{main theorem}, we have \[ |df|(z) \leq \mathrm{const}\cdot |z|^m , \quad (|z| \geq 1).\] It follows \[ \limsup_{r\to \infty} \frac{\max_{|z| =r} \log |df|(z)}{\log r} \leq m .\] We want to prove that this is actually an equality. Suppose \[ \limsup_{r\to \infty} \frac{\max_{|z| =r} \log |df|(z)}{\log r} \lvertneqq m .\] Then, if we take $\varepsilon >0$ sufficiently small, we have a positive number $r_0$ such that \begin{equation}\label{bound of contradiction} |df|(z) \leq |z|^{m-\varepsilon} ,\quad (|z| \geq r_0). \end{equation} Schwarz's formula gives the inequality (\ref{estimate of g_i^{(k)}(0)}): \begin{equation}\label{estimate by Schwarz} \frac{r^k}{4k!}|g_i^{(k)}(0)| \leq \int_1^r \frac{dt}{t} \int_{|z|\leq t} |df|^2(z) \, dxdy + \mathrm{const}, \quad (k\geq 0). \end{equation} Let $\delta$ be a positive number such that $0 < \delta < 2\varepsilon$. 
We define $E_i$ and $E_{ij}$, $(1\leq i \leq n, \, 1\leq i<j\leq n)$, by setting \begin{align*} &\deg g_i(z) \leq m \Longrightarrow E_i := \emptyset , \\ &\deg g_i(z) = m+1 \Longrightarrow E_i := \{ z\in \mathbb{C} | \, |\mathrm{Re}\, g_i(z)| \leq |z|^{\delta} \} ,\\ &\deg (g_i(z) -g_j(z)) \leq m \Longrightarrow E_{ij} := \emptyset ,\\ &\deg (g_i(z) -g_j(z)) = m+1 \Longrightarrow E_{ij} := \{ z\in \mathbb{C} | \, |\mathrm{Re}\, (g_i(z) - g_j(z)) | \leq |z|^{\delta} \}. \end{align*} We set $E := \bigcup_{i} E_i \cup \bigcup_{i<j} E_{ij}$. Then, if we take $r_0$ sufficiently large, we have \begin{equation}\label{estimate of E(r) ver.2} |E(r)| \leq \mathrm{const}/r^{m+1-\delta} , \quad (r\geq r_0) . \end{equation} We have \begin{equation*} \begin{split} \int_1^r \frac{dt}{t} \int_{|z|\leq t}& |df|^2(z) \, dxdy \\ &= \int_1^r \frac{dt}{t} \int_{E\cap \{ |z|\leq t\}} |df|^2(z) \, dxdy + \int_1^r \frac{dt}{t} \int_{E^c \cap \{ |z|\leq t\}} |df|^2(z) \, dxdy . \end{split} \end{equation*} From (\ref{bound of contradiction}) and (\ref{estimate of E(r) ver.2}), the first term can be estimated as in Section \ref{section: proof of main theorem}: \[ \int_1^r \frac{dt}{t} \int_{E\cap \{ |z|\leq t\}} |df|^2(z) \, dxdy \leq \mathrm{const}\cdot r^{m+1- (2\varepsilon -\delta )} , \quad (r\geq 1).\] Using Lemma \ref{lem: preparation} and the inequality $|df|^2 \leq \sum_i |df_i|^2 + \sum_{i<j} |d(f_i/f_j)|^2 $, we can estimate the second term: \[ \int_1^r \frac{dt}{t} \int_{E^c \cap \{ |z|\leq t\}} |df|^2(z) \, dxdy \leq \mathrm{const}\cdot \log r + \mathrm{const} \cdot r^m , \quad (r\geq 1). \] Combining these two estimates, we get \[ \int_1^r \frac{dt}{t} \int_{\{ |z|\leq t\}} |df|^2(z) \, dxdy \leq \mathrm{const}\cdot r^{m+1- (2\varepsilon -\delta )} , \quad (r\geq 1).\] Note that $2\varepsilon -\delta$ is a positive number. Using this estimate in (\ref{estimate by Schwarz}), we get \[ g_i^{(k)}(0) = 0, \quad (k\geq m+1). \] This shows $\deg g_i(z) \leq m$. This contradicts the definition of $m$. \begin{remark} The following is also true: \[ \limsup_{r\to \infty} \frac{\max_{|z| \leq r} \log |df|(z)}{\log r} = m .\] \end{remark} \begin{proof} We have \[ m= \limsup_{r\to \infty} \frac{\max_{|z| = r} \log |df|(z)}{\log r} \leq \limsup_{r\to \infty} \frac{\max_{|z| \leq r} \log |df|(z)}{\log r}. \] And we have $|df|(z) \leq \mathrm{const}\cdot |z|^m$, $(|z| \geq 1)$. Thus \[\limsup_{r\to \infty} \frac{\max_{|z| \leq r} \log |df|(z)}{\log r} \leq m .\] \end{proof} \subsection{Order of the Shimizu-Ahlfors characteristic function} For a holomorphic map $f: \mathbb{C} \to X$, we define the Shimizu-Ahlfors characteristic function $T(r, f)$ by \[ T(r, f) := \int_1^r \frac{dt}{t} \int_{|z| \leq t} |df|^2(z) \, dxdy, \quad (r \geq 1). \] The order $\rho_f$ of $T(r, f)$ is defined by \[ \rho_f := \limsup_{r\to \infty} \frac{\log T(r,f)}{\log r}. \] $\rho_f$ can be obtained as the growth rate of $|df|$: \begin{corollary}\label{cor: order and growth rate of |df|} For a holomorphic map $f:\mathbb{C} \to X$, we have \[ \rho_f < \infty \Longleftrightarrow \limsup_{r\to \infty} \frac{\max_{|z| =r} \log |df|(z)}{\log r} < \infty .\] If these values are finite and $f$ is not a constant map, then we have \[ \rho_f = \limsup_{r\to \infty} \frac{\max_{|z| =r} \log |df|(z)}{\log r} +1 . \] \end{corollary} \begin{proof} If $\rho_f < \infty$, the estimate (\ref{estimate by Schwarz}) shows that $f$ can be expressed by (\ref{exp(m+1)}) with polynomials $g_1(z), \, \cdots,\, g_n(z)$.
Then we have \[ \limsup_{r\to \infty} \frac{\max_{|z| =r} \log |df|(z)}{\log r} <\infty .\] The proof of the converse is trivial. Suppose $\rho_f < \infty$. Then we can express $f$ by $f(z) = [1:e^{g_1(z)}:\cdots :e^{g_n(z)}]$ with polynomials $g_1(z), \, \cdots,\, g_n(z)$. We set $f_i(z) := e^{g_i(z)}$, and define the integer $m$ by (\ref{def: max deg}). Theorem \ref{thm: order of |df|} gives \[ \limsup_{r\to \infty} \frac{\max_{|z| =r} \log |df|(z)}{\log r} +1 = m+1. \] The estimate (\ref{estimate by Schwarz}) gives \[ m+1 \leq \rho_f . \] Since $|df|^2 = \frac{1}{4\pi} \Delta \log (1 + \sum |f_i|^2)$, Jensen's formula gives \begin{equation*} T(r, f) = \frac{1}{4\pi} \int_{|z| = r} \log ( 1 + \sum_i |f_i|^2)\, d\theta - \frac{1}{4\pi} \int_{|z| = 1} \log ( 1 + \sum_i |f_i|^2)\, d\theta . \end{equation*} Since $\deg g_i(z) \leq m+1$, we have \[ \log (1+ \sum_i |f_i|^2) \leq \mathrm{const} \cdot r^{m+1}, \quad (r\geq 1). \] Hence \[ \rho_f \leq m+1 .\] Thus we get \[ \rho_f = m+1 = \limsup_{r\to \infty} \frac{\max_{|z| =r} \log |df|(z)}{\log r} +1 .\] \end{proof} \begin{remark} Of course, the statement of Corollary \ref{cor: order and growth rate of |df|} is not true for general entire holomorphic curves in the complex projective space $\mathbb{C} P^n$. For example, let $f:\mathbb{C} \to \mathbb{C} P^1$ be a non-constant elliptic function. Since $|df|$ is bounded all over the complex plane, we have \[ \limsup_{r\to \infty} \frac{\max_{|z| =r} \log |df|(z)}{\log r} = 0.\] And it is easy to see \[ \rho_f = 2 \neq \limsup_{r\to \infty} \frac{\max_{|z| =r} \log |df|(z)}{\log r} + 1. \] \end{remark} \address{ Masaki Tsukamoto \endgraf Department of Mathematics, Faculty of Science \endgraf Kyoto University \endgraf Kyoto 606-8502 \endgraf Japan } \textit{E-mail address}: \texttt{[email protected]} \end{document}
\begin{document} \title{Is a system's wave function in one-to-one correspondence with its elements of reality?} \date{$12^{\text{th}}$ April, 2012} \author{Roger Colbeck} \affiliation{Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, ON N2L 2Y5, Canada} \author{Renato Renner} \affiliation{Institute for Theoretical Physics, ETH Zurich, 8093 Zurich, Switzerland} \begin{abstract} Although quantum mechanics is one of our most successful physical theories, there has been a long-standing debate about the interpretation of the wave function---the central object of the theory. Two prominent views are that (i)~it corresponds to an element of reality, i.e.\ an objective attribute that exists before measurement, and (ii)~it is a subjective state of knowledge about some underlying reality. A recent result [Pusey \emph{et al.}\ arXiv:1111.3328] has placed the subjective interpretation into doubt, showing that it would contradict certain physically plausible assumptions, in particular that multiple systems can be prepared such that their elements of reality are uncorrelated. Here we show, based only on the assumption that measurement settings can be chosen freely, that a system's wave function is in one-to-one correspondence with its elements of reality. This also eliminates the possibility that it can be interpreted subjectively. \end{abstract} \pacs{03.65.-w, 03.65.Ud, 03.67.-a} \maketitle \emph{Introduction.---}Given the wave function associated with a physical system, quantum theory allows us to compute predictions for the outcomes of any measurement. Since a wave function corresponds to an extremal state and is therefore maximally informative, one possible view is that it can be considered an \emph{element of reality} of the system, i.e., an objective attribute that exists before measurement. However, an alternative view, often motivated by the probabilistic nature of quantum predictions, is that the wave function represents incomplete (subjective) knowledge about some underlying reality. Which view one adopts affects how one thinks about the theory at a fundamental level. To illuminate the difference between the above views, we give an illustrative example. Consider a meteorologist who gives a prediction about tomorrow's weather (for example that it will be sunny with probability $33 \%$, and cloudy with probability $67 \%$; see left hand side of Fig.~\ref{fig:weather}). We may assume that classical mechanics accurately describes the relevant processes, so that the weather depends deterministically on the initial conditions. The fact that the prediction is probabilistic then solely reflects a lack of knowledge on the part of the meteorologist about these conditions. In particular, the forecast is not an element of reality associated with the atmosphere, but rather reflects the subjective knowledge of the forecaster; a second meteorologist with different knowledge (see right hand side of Fig.~\ref{fig:weather}) may issue an alternative forecast. \begin{figure*} \caption{\label{fig:weather}} \end{figure*} Moving to quantum mechanics, one may ask whether the wave function $\Psi$ that we assign to a quantum system should be seen as a subjective object (analogous to the weather forecast) representing the knowledge an experimenter has about the system, or whether $\Psi$ is an element of reality of the system (analogous to the weather being sunny).
This question has been the subject of a long debate, which goes back to the early days of quantum theory~\cite{BV}. The debate originated from the fact that quantum theory is inherently probabilistic: even with a full description of a system's wave function, the theory does not allow us to predict the outcomes of future measurements with certainty. This fact is often used to motivate subjective interpretations of quantum theory, such as the Copenhagen interpretation~\cite{Born26,Bohr28,Heisenberg30}, according to which wave functions are mere mathematical objects that allow us to calculate probabilities of future events. Einstein, Podolsky and Rosen (EPR) advocated the view that the wave function does not provide a complete physical description of reality~\cite{EPR}, and that a higher, complete theory is possible. In such a complete theory, any element of reality must have a counterpart in the theory. Were quantum theory not complete, it could be that the higher theory has additional parameters that complement the wave function. The wave function could then be objective, i.e., uniquely determined by the elements of reality of the higher theory. Alternatively, the wave function could take the role of a state of knowledge about the underlying parameters of the higher theory. In this case, the wave function would not be uniquely determined by these parameters and would therefore admit a subjective interpretation. To connect to some terminology in the literature (see for example Ref.~\citenum{HarSpek}), in the first case the underlying model would be called \emph{$\psi$-ontic}, and in the second case \emph{$\psi$-epistemic}. For some recent work in support of a $\psi$-epistemic view, see for example Refs.~\citenum{CFS,Spekkens_toy,LeiSpek}. In some famous works from the 1960s, several constraints were placed on higher descriptions given in terms of hidden variables~\cite{KS,Bell_KS,Bell}, and further constraints have since been highlighted~\cite{Hardy_ontbag,Montina,ChenMontina}. In addition, we have recently shown~\cite{CR_ext} that, under the assumption of free choice, if quantum theory is correct then it is \emph{non-extendible}, in the sense of being maximally informative about measurement outcomes. Very recently, Pusey, Barrett and Rudolph~\cite{PBR} have presented an argument showing that a subjective interpretation of the wave function would violate certain plausible assumptions. Specifically, their argument refers to a model where each physical system possesses an individual set of (possibly hidden) elements of reality, which are the only quantities relevant for predicting the outcomes of later measurements. One of their assumptions then demands that it is possible to prepare multiple systems such that these sets are statistically independent. Here, we present a totally different argument to show that the wave function of a quantum system is fully determined by its elements of reality. In fact, this implies that the wave function is in one-to-one correspondence with these elements of reality (see the Conclusions) and may therefore itself be considered an element of reality of the system. These claims are derived under minimal assumptions, namely that the statistical predictions of existing quantum theory are correct, and that measurement settings can (in principle) be chosen freely. In terms of the language of Ref.~\citenum{HarSpek}, this means that any model of reality consistent with quantum theory and with free choice is $\psi$-complete. 
\emph{General Model.---}In order to state our result, we consider a general experiment where a system $\mathcal{S}$ is prepared in a state specified by a \emph{wave function} $\Psi$ (see Fig.~\ref{fig:setting}). Then an experimenter chooses a \emph{measurement setting} $A$ (specified by an observable or a family of projectors) and records the \emph{measurement outcome}, denoted $X$. Mathematically, we model $\Psi$ as a random variable over the set of wave functions, $A$ as a random variable over the set of observables, and $X$ as a random variable over the set of possible measurement outcomes. Finally, we introduce a collection of random variables, denoted $\Gamma$, which are intended to model all information that is (in principle) available before the measurement setting, $A$, is chosen and the measurement is carried out. Technically, we only require that $\Gamma$ includes the wave function $\Psi$. In the following, when we refer to a \emph{list of elements of reality}, we simply mean a subset $\Lambda$ of $\Gamma$. Furthermore, we say that $\Lambda$ is \emph{complete for the description of the system $\mathcal{S}$} if any possible prediction about the outcome $X$ of a measurement $A$ on $\mathcal{S}$ can be obtained from $\Lambda$, i.e., we demand that the Markov condition \begin{align} \label{eq_completelist} \Gamma \leftrightarrow (\Lambda,A) \leftrightarrow X \end{align} holds~\footnote{$U \leftrightarrow V \leftrightarrow W$ is called a \emph{Markov chain} if $P_{U|V=v} = P_{U|V=v, W=w}$ or, equivalently, if $P_{W|V=v} = P_{W|V=v,U=u}$, for all $u,v,w$ with strictly positive joint probability. Here and in the following, we use upper case letters for random variables, and lower case letters for specific values they can take.}. Note that, using this definition, the aforementioned result on the non-extendibility of quantum theory~\cite{CR_ext} can be phrased as: \emph{The wave function $\Psi$ associated with a system $\mathcal{S}$ is complete for the description of $\mathcal{S}$}. We are now ready to formulate our main technical claim. \begin{figure} \caption{\label{fig:setting}} \end{figure} \emph{Theorem.---}Any list of elements of reality, $\Lambda$, that is complete for the description of a system $\mathcal{S}$ includes the quantum-mechanical wave function $\Psi$ associated with $\mathcal{S}$ (in the sense that $\Psi$ is uniquely determined by $\Lambda$). \emph{Assumptions.---} The above claim is derived under the following two assumptions, which are usually implicit in the literature. (We note that very similar assumptions are also made in Ref.~\citenum{PBR}, where, as already mentioned, an additional statistical independence assumption is also used.) \begin{itemize} \item \emph{Correctness of quantum theory:} Quantum theory gives the correct statistical predictions. For example, the distribution of $X$ satisfies $P_{X|\Psi=\psi, A=a}(x) = \bra{\psi} \Pi^a_x \ket{\psi}$, where $\Pi^a_x$ denotes the projector corresponding to outcome $X=x$ of the measurement specified by $A = a$. \item \emph{Freedom of choice:} Measurement settings can be chosen to be independent of any pre-existing value (in any frame)~\footnote{This assumption, while often implicit, is for instance discussed (and used) in Bell's work. In Ref.~\citenum{Bell_free}, he writes that ``the settings of instruments are in some sense free variables $\ldots$ [which] means that the values of such variables have implications only in their future light cones.'' This leads directly to the freedom of choice assumption as formulated here.
We refer to the Supplemental Material for a more detailed discussion.}. In particular, this implies that the setting $A$ can be chosen independently of $\Gamma$, i.e., $P_{A|\Gamma=\gamma} = P_{A}$~\footnote{In Ref.~\citenum{PBR}, this assumption corresponds to the requirement that a quantum system can be freely prepared according to one of a number of predefined states.}. \end{itemize} We note that the proof of our result relies on an argument presented in Ref.~\citenum{CR_ext}, where these assumptions are also used (see the Supplemental Material for more details). \emph{Proof of the main claim.---} As shown in Ref.~\citenum{CR_ext}, under the above assumptions, $\Psi$ is complete for the description of $\mathcal{S}$. Since $\Lambda$ is included in $\Gamma$, we have in particular \begin{align} \label{eq_QMcompleteness} \Lambda \leftrightarrow (\Psi,A) \leftrightarrow X \ . \end{align} Our argument then proceeds as follows. The above condition is equivalent to the requirement that \begin{align*} P_{X|\Lambda=\lambda,\Psi = \psi,A=a} = P_{X|\Psi=\psi,A=a} \end{align*} holds for all $\lambda, \psi, a$ that have a positive joint probability, i.e., $P_{\Lambda \Psi A}(\lambda, \psi, a) > 0$. Furthermore, because of the assumption that $\Lambda$ is a complete list of elements of reality, Eq.~\ref{eq_completelist}, and because $\Psi$ is by definition included in $\Gamma$, we have \begin{align*} P_{X|\Lambda=\lambda, \Psi=\psi, A=a} = P_{X|\Lambda=\lambda, A=a} \ . \end{align*} Combining these expressions gives \begin{align} \label{eq_consistency} P_{X|\Psi = \psi,A=a} = P_{X|\Lambda=\lambda,A=a} \ , \end{align} for all values $\lambda,\psi,a$ with $P_{\Lambda \Psi A}(\lambda,\psi, a)>0$. Note that, using the free choice assumption, we have $P_{\Lambda \Psi A} = P_{\Lambda \Psi} \times P_A$, hence this condition is equivalent to demanding $P_{\Lambda \Psi}(\lambda, \psi)>0$ and $P_A(a)>0$. Now consider some fixed $\Lambda=\lambda$ and suppose that there exist two states, $\psi_0$ and $\psi_1$, such that $P_{\Lambda \Psi}(\lambda,\psi_0)>0$ and $P_{\Lambda \Psi}(\lambda,\psi_1)>0$. From Eq.~\ref{eq_consistency}, this implies $P_{X|\Psi = \psi_0,A=a} = P_{X|\Psi = \psi_1,A=a}$ for all $a$ such that $P_A(a)>0$. However, within quantum theory, it is easy to choose the set of measurements for which $P_A(a)>0$ such that this can only be satisfied if $\psi_0=\psi_1$. This holds, for example, if the set of measurements is tomographically complete. Thus, for each $\Lambda=\lambda$, there exists only one possible value of $\Psi=\psi$ such that $P_{\Lambda \Psi}(\lambda, \psi) > 0$, i.e., $\Psi$ is uniquely determined by $\Lambda$, which is what we set out to prove. \emph{Discussion and Conclusions.---}We have shown that the quantum wave function can be taken to be an element of reality of a system based on two assumptions, the correctness of quantum theory and the freedom of choice for measurement settings. Both of these assumptions are in principle experimentally falsifiable (see the Supplemental Material for a discussion of possible experiments). The correctness of quantum theory is a natural assumption given that we are asking whether the quantum wave function is an element of reality of a system. Furthermore, a free choice assumption is necessary to show that the answer is yes. Without free choice, $A$ would be pre-determined and the complete list of elements of reality, $\Lambda$, could be chosen to consist of the single element $X$.
In this case, Eq.~\ref{eq_completelist} would be trivially satisfied. Nevertheless, since the list $\LLambda = \{X\}$ does not uniquely determine the wave function, $\Psi$, we could not consider $\Psi$ to be an element of reality of the system. This shows that the wave function would admit a subjective interpretation if the free choice assumption were dropped. We conclude by noting that, given any complete list of elements of reality, $\LLambda$, the non-extendibility of quantum theory, Eq.~\ref{eq_QMcompleteness}, asserts that any information contained in $\LLambda$ that may be relevant for predicting measurement outcomes $X$ is already contained in the wave function $\Psi$. Conversely, the result shown here is that $\Psi$ is included in $\LLambda$. Since these are two seemingly opposite statements, it is somewhat intriguing that the second can be inferred from the first, as shown in this Letter. Furthermore, taken together, the two statements imply that $\Psi$ is in one-to-one correspondence with $\LLambda$. This sheds new light on a question dating back to the early days of quantum theory~\cite{EinsteinSchroedinger}, asking whether the wave function is in one-to-one correspondence with physical reality. Interpreting $\LLambda$ as the state of physical reality (or the ontic state), our result asserts that, under the free choice assumption, the answer to this question is yes. \noindent\emph{Acknowledgements.---}We thank Christian Speicher and Oscar Dahlsten for posing questions regarding Ref.~\citenum{PBR}, which initiated this work, and Jonathan Barrett for explaining their result. We also thank Matthias Christandl, Alberto Montina, L\'idia del Rio and Robert Spekkens for discussions and L\'idia del Rio for illustrations. This work was supported by the Swiss National Science Foundation (through the National Centre of Competence in Research ``Quantum Science and Technology'' and grant No.\ 200020-135048) as well as the European Research Council (grant No.\ 258932). Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research and Innovation. \begin{thebibliography}{10} \expandafter\ifx\csname url\endcsname\relax \def\url#1{\texttt{#1}}\fi \expandafter\ifx\csname urlprefix\endcsname\relax\def\urlprefix{URL }\fi \providecommand{\bibinfo}[2]{#2} \providecommand{\eprint}[2][]{\url{#2}} \bibitem{BV} \bibinfo{author}{Bacciagaluppi, G.} \& \bibinfo{author}{Valentini, A.} \newblock \emph{\bibinfo{title}{Quantum theory at the crossroads: reconsidering the 1927 Solvay Conference}} (\bibinfo{publisher}{Cambridge University Press}, \bibinfo{year}{2009}). \bibitem{Born26} \bibinfo{author}{Born, M.} \newblock \bibinfo{title}{Quantenmechanik der {S}to{\ss}vorg\"ange}. \newblock \emph{\bibinfo{journal}{Zeitschrift f\"ur Physik}} \textbf{\bibinfo{volume}{38}}, \bibinfo{pages}{803--827} (\bibinfo{year}{1926}). \bibitem{Bohr28} \bibinfo{author}{Bohr, N.} \newblock \bibinfo{title}{Das {Q}uantenpostulat und die neuere {E}ntwicklung der {A}tomistik}. \newblock \emph{\bibinfo{journal}{Die Naturwissenschaften}} \textbf{\bibinfo{volume}{16}}, \bibinfo{pages}{245--257} (\bibinfo{year}{1928}). \bibitem{Heisenberg30} \bibinfo{author}{Heisenberg, W.} \newblock \emph{\bibinfo{title}{The Physical Principles of the Quantum Theory}} (\bibinfo{publisher}{University of Chicago Press}, \bibinfo{address}{Chicago}, \bibinfo{year}{1930}).
\bibitem{EPR} \bibinfo{author}{Einstein, A.}, \bibinfo{author}{Podolsky, B.} \& \bibinfo{author}{Rosen, N.} \newblock \bibinfo{title}{Can quantum-mechanical description of physical reality be considered complete?} \newblock \emph{\bibinfo{journal}{Phys. Rev.}} \textbf{\bibinfo{volume}{47}}, \bibinfo{pages}{777--780} (\bibinfo{year}{1935}). \bibitem{HarSpek} \bibinfo{author}{Harrigan, N.} \& \bibinfo{author}{Spekkens, R.~W.} \newblock \bibinfo{title}{Einstein, incompleteness, and the epistemic view of quantum states}. \newblock \emph{\bibinfo{journal}{Foundations of Physics}} \textbf{\bibinfo{volume}{40}}, \bibinfo{pages}{125--157} (\bibinfo{year}{2010}). \bibitem{CFS} \bibinfo{author}{Caves, C.~M.}, \bibinfo{author}{Fuchs, C.~A.} \& \bibinfo{author}{Schack, R.} \newblock \bibinfo{title}{Quantum probabilities as {B}ayesian probabilities}. \newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{65}}, \bibinfo{pages}{022305} (\bibinfo{year}{2002}). \bibitem{Spekkens_toy} \bibinfo{author}{Spekkens, R.~W.} \newblock \bibinfo{title}{Evidence for the epistemic view of quantum states: A toy theory}. \newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{75}}, \bibinfo{pages}{032110} (\bibinfo{year}{2007}). \bibitem{LeiSpek} \bibinfo{author}{Leifer, M.~S.} \& \bibinfo{author}{Spekkens, R.~W.} \newblock \bibinfo{title}{Formulating quantum theory as a causally neutral theory of {B}ayesian inference}. \newblock \bibinfo{howpublished}{e-print \url{arXiv:1107.5849}} (\bibinfo{year}{2011}). \bibitem{KS} \bibinfo{author}{Kochen, S.} \& \bibinfo{author}{Specker, E.~P.} \newblock \bibinfo{title}{The problem of hidden variables in quantum mechanics}. \newblock \emph{\bibinfo{journal}{Journal of Mathematics and Mechanics}} \textbf{\bibinfo{volume}{17}}, \bibinfo{pages}{59--87} (\bibinfo{year}{1967}). \bibitem{Bell_KS} \bibinfo{author}{Bell, J.~S.} \newblock \bibinfo{title}{On the problem of hidden variables in quantum mechanics}. \newblock In \emph{\bibinfo{booktitle}{Speakable and unspeakable in quantum mechanics}}, chap.~\bibinfo{chapter}{1} (\bibinfo{publisher}{Cambridge University Press}, \bibinfo{year}{1987}). \bibitem{Bell} \bibinfo{author}{Bell, J.~S.} \newblock \bibinfo{title}{On the {E}instein-{P}odolsky-{R}osen paradox}. \newblock In \emph{\bibinfo{booktitle}{Speakable and unspeakable in quantum mechanics}}, chap.~\bibinfo{chapter}{2} (\bibinfo{publisher}{Cambridge University Press}, \bibinfo{year}{1987}). \bibitem{Hardy_ontbag} \bibinfo{author}{Hardy, L.} \newblock \bibinfo{title}{Quantum ontological excess baggage}. \newblock \emph{\bibinfo{journal}{Studies In History and Philosophy of Modern Physics}} \textbf{\bibinfo{volume}{35}}, \bibinfo{pages}{267--276} (\bibinfo{year}{2004}). \bibitem{Montina} \bibinfo{author}{Montina, A.} \newblock \bibinfo{title}{State-space dimensionality in short-memory hidden-variable theories}. \newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{83}}, \bibinfo{pages}{032107} (\bibinfo{year}{2011}). \bibitem{ChenMontina} \bibinfo{author}{Chen, Z.} \& \bibinfo{author}{Montina, A.} \newblock \bibinfo{title}{Measurement contextuality is implied by macroscopic realism}. \newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{83}}, \bibinfo{pages}{042110} (\bibinfo{year}{2011}). \bibitem{CR_ext} \bibinfo{author}{Colbeck, R.} \& \bibinfo{author}{Renner, R.} \newblock \bibinfo{title}{No extension of quantum theory can have improved predictive power}. 
\newblock \emph{\bibinfo{journal}{Nature Communications}} \textbf{\bibinfo{volume}{2}}, \bibinfo{pages}{411} (\bibinfo{year}{2011}). \bibitem{PBR} \bibinfo{author}{Pusey, M.~F.}, \bibinfo{author}{Barrett, J.} \& \bibinfo{author}{Rudolph, T.} \newblock \bibinfo{title}{The quantum state cannot be interpreted statistically}. \newblock \bibinfo{howpublished}{e-print \url{arXiv:1111.3328}} (\bibinfo{year}{2011}). \bibitem{EinsteinSchroedinger} \bibinfo{note}{Einstein to Schr\"odinger, 19 June 1935, extracts translated in Howard, D. Einstein on locality and separability. \emph{Studies in History and Philosophy of Science} {\bf 16}, 171--201 (1985)}. \bibitem{Bell_free} \bibinfo{author}{Bell, J.~S.} \newblock \bibinfo{title}{Free variables and local causality}. \newblock In \emph{\bibinfo{booktitle}{Speakable and unspeakable in quantum mechanics}}, chap.~\bibinfo{chapter}{12} (\bibinfo{publisher}{Cambridge University Press}, \bibinfo{year}{1987}). \bibitem{Stinespring} \bibinfo{author}{Stinespring, W.~F.} \newblock \bibinfo{title}{Positive functions on {$C^*$}-algebras}. \newblock \emph{\bibinfo{journal}{Proceedings of the American Mathematical Society}} \textbf{\bibinfo{volume}{6}}, \bibinfo{pages}{211--216} (\bibinfo{year}{1955}). \bibitem{deBroglie} \bibinfo{author}{de~Broglie, L.} \newblock \bibinfo{title}{La m\'ecanique ondulatoire et la structure atomique de la mati\`ere et du rayonnement}. \newblock \emph{\bibinfo{journal}{Journal de Physique, Serie VI}} \textbf{\bibinfo{volume}{VIII}}, \bibinfo{pages}{225--241} (\bibinfo{year}{1927}). \bibitem{Bohm} \bibinfo{author}{Bohm, D.} \newblock \bibinfo{title}{A suggested interpretation of the quantum theory in terms of ``hidden'' variables. {I}}. \newblock \emph{\bibinfo{journal}{Phys. Rev.}} \textbf{\bibinfo{volume}{85}}, \bibinfo{pages}{166--179} (\bibinfo{year}{1952}). \bibitem{PBSRG} \bibinfo{author}{Pomarico, E.}, \bibinfo{author}{Bancal, J.-D.}, \bibinfo{author}{Sanguinetti, B.}, \bibinfo{author}{Rochdi, A.} \& \bibinfo{author}{Gisin, N.} \newblock \bibinfo{title}{Various quantum nonlocality tests with a commercial two-photon entanglement source}. \newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{83}}, \bibinfo{pages}{052104} (\bibinfo{year}{2011}). \bibitem{SSCRT} \bibinfo{author}{Stuart, T.~E.}, \bibinfo{author}{Slater, J.~A.}, \bibinfo{author}{Colbeck, R.}, \bibinfo{author}{Renner, R.} \& \bibinfo{author}{Tittel, W.} \newblock \bibinfo{title}{An experimental test of all theories with predictive power beyond quantum theory}. \newblock \bibinfo{howpublished}{e-print \url{arXiv:1105.0133}} (\bibinfo{year}{2011}). \end{thebibliography} \section*{SUPPLEMENTAL MATERIAL} \section{Additional discussion of the assumptions} Our work relies on two assumptions, which we discuss in separate subsections. We note that these assumptions are essentially those already used in~\cite{CR_ext}, upon which this work builds. For the following exposition, it is convenient to introduce the concept of \emph{spacetime variables (SVs)}~\cite{CR_ext}. Mathematically, these are simply random variables (which take values from an arbitrary set) together with associated coordinates $(t,r_1,r_2,r_3) \in \mathbb{R}^4$. A SV may be interpreted physically as a value that is accessible at the spacetime point specified by these coordinates (with respect to a given reference frame). The variables described in the main text, $A$, $X$, and $\Gamma$, can readily be modelled as SVs. 
In particular, the coordinates of $A$ should specify the spacetime point where the measurement setting (for measuring the system $\cS$) is chosen. Accordingly, the coordinates of $X$ correspond to an (arbitrary) point in the spacetime region where the measurement outcome is available. We therefore assign coordinates such that $X$ is in the future lightcone of $A$, whereas no SV in the set $\Gamma$ (which models any information available before the measurement) should lie in the future lightcone of $A$. \subsection{Correctness of quantum theory} This assumption refers to the statistical predictions about measurement outcomes that can be made within standard quantum theory (i.e., it does not make reference to any additional parameters of a potential higher theory\footnote{In particular, in a higher theory that has additional hidden parameters, the predictions of quantum theory should be recovered if these parameters are ignored, which corresponds mathematically to averaging over them.}). Following the treatment in~\cite{CR_ext}, we subdivide the assumption into two parts. \begin{itemize} \item \emph{QMa:} Consider a system whose state is described by a wave function, $\Psi$, corresponding to an element of a Hilbert space $\cH_S$. According to quantum theory, any measurement setting $A=a$ is specified by a family of projectors, $\{\Pi^a_x\}_{x}$, parameterized by the possible outcomes $x$, such that $\sum_x \Pi^a_x = \openone_{\cH_S}$.\footnote{Naimark's dilation theorem asserts that any measurement specified by a Positive Operator Valued Measure can be seen as a projective measurement on a larger Hilbert space.} The assumption demands that the probability of obtaining output $X=x$ when the system is measured with setting $A=a$ is given by \begin{align*} P_{X|A=a}(x) = \tr(\proj{\Psi} \Pi^a_x) \ . \end{align*} \item \emph{QMb:} Consider again a measurement as in Assumption~\emph{QMa} described by projectors $\{\Pi^a_x\}_x$ on $\cH_S$. According to quantum theory, for any fixed choice of the setting $A = a_0$, the measurement can be modelled as an isometry $\cE_{S \to SE}$ from states on $\cH_S$ to states on a larger system $\cH_S \otimes \cH_E$ (involving parts of the measurement apparatus and the environment) such that the restriction of $\cE_{S \to SE}$ to the original system corresponds to the initial measurement, i.e., formally \begin{align*} \tr_E \cE_{S \to SE}(\proj{\Psi}) = \sum_x \Pi^{a_0}_x \proj{\Psi} \Pi^{a_0}_x \ , \end{align*} where $\tr_E$ denotes the partial trace over $\cH_E$~\cite{Stinespring}. The assumption then demands that for all $A=a$ and for any measurement (defined by a family of projectors, $\{\Pi^b_y\}_{y}$, parameterized by the possible outcomes $y$) carried out on system $E$, the joint statistics of the outcomes are given by \begin{align*} P_{X Y | A=a, B=b}(x, y) = \tr\bigl(\cE_{S \to SE}(\proj{\Psi}) \Pi^a_x \otimes \Pi^b_y\bigr) \ . \end{align*} \end{itemize} Note that both assumptions refer to the Born rule~\cite{Born26} for the probability distribution of measurement outcomes. In Assumption~\emph{QMa}, the rule is applied to a measurement on a single system, whereas Assumption~\emph{QMb} demands that the rule also applies to the joint probability distribution involving the outcome of (arbitrary) additional measurements. \subsection{Freedom of choice} The assumption that measurement settings can be chosen freely is often left implicit in the literature.
This is also true, for example, for large parts of Bell's work, although he later mentioned the assumption explicitly~\cite{Bell_free}. The notion of freedom of choice can be expressed mathematically using the language of SVs. We say that a SV $A$ is \emph{free with respect to a set of SVs $\Omega$} if \begin{align*} P_{A\Omega'} = P_{A} \times P_{\Omega'} \end{align*} holds, where $\Omega'$ is the set of all SVs from $\Omega$ whose coordinates lie outside the future lightcone of $A$. This captures the idea that $A$ should be independent of any ``pre-existing'' values (with respect to any reference frame). This definition is motivated by the following notion of causality. For two SVs $A$ and $B$, we say that $B$ \emph{could have been caused by} $A$ if and only if $B$ lies in the future lightcone of $A$. Within a relativistic spacetime structure, this is equivalent to requiring the time coordinate of $B$ to be larger than that of $A$ in all reference frames. Using this notion of causality, our definition that $A$ is free with respect to $\Omega$ is equivalent to demanding that all SVs in $\Omega$ that are correlated to $A$ could have been caused by $A$. Connecting to the main text, we note that (by definition) all SVs in the set $\Gamma$ defined there lie outside the future lightcone of the spacetime point where the measurement setting $A$ is chosen. The requirement for $A$ to be free with respect to $\Gamma$ thus simply reads $P_{A\Gamma} = P_A \times P_{\Gamma}$. In Ref.~\citenum{CR_ext} (upon which the present result is based), the free choice assumption is used in a more general bipartite scenario. There, two measurements are carried out at spacelike separation, one of which has setting $A$ and outcome $X$ and the other has setting $B$ and outcome $Y$. In addition, as in our main argument, we consider arbitrary additional (pre-existing) information $\Gamma$. The assumption that $A$ and $B$ are chosen freely (i.e., such that they are uncorrelated with any variables in their past in any frame) then corresponds mathematically to the requirements $P_{A|BY\Gamma} = P_A$ and $P_{B|AX\Gamma} = P_B$. We remark that these conditions are not obeyed in the de Broglie-Bohm model~\cite{deBroglie,Bohm} if one includes the wave function as well as the hidden particle trajectories in $\Gamma$. It is also worth making a few additional remarks about the connection to other work. As mentioned in the main text, in Ref.~\citenum{Bell_free}, Bell writes that ``the settings of instruments are in some sense free variables $\ldots$ [which] means that the values of such variables have implications only in their future light cones.'' When formalized, this gives the above definition. However, in spite of the motivation given in the above quote, the mathematical expression Bell writes down corresponds to a weaker notion that only requires free choices to be independent of pre-existing hidden parameters (but does not include pre-existing measurement outcomes). This weaker requirement is (as he acknowledges) a particular implication of the full freedom of choice assumption. We imagine that the reason for Bell's reference to this weaker implication is that it is sufficient for his purpose when combined with another assumption, known as local causality. Indeed, the weaker implication of free choice together with Bell's local causality are also sufficient to prove our result. Furthermore, in the literature the weaker notion is sometimes taken to be the definition of free choice, rather than an implication of it. 
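For concreteness, the factorization $P_{A\Omega'} = P_{A} \times P_{\Omega'}$ that formalizes free choice can be checked directly on any finite joint distribution over the setting and the spacetime variables outside its future lightcone. The following minimal numerical sketch is not part of the original argument; the toy distribution below, with a two-valued setting and a three-valued pre-existing variable, is entirely illustrative.
\begin{verbatim}
import numpy as np

# Toy check of the free choice condition P_{A Omega'} = P_A x P_{Omega'}.
# Rows index the setting a in {0, 1}; columns index a pre-existing
# spacetime variable w in {0, 1, 2}; entries form a joint distribution.
p = np.array([[0.10, 0.25, 0.15],
              [0.10, 0.25, 0.15]])

p_a = p.sum(axis=1)   # marginal distribution of the setting A
p_w = p.sum(axis=0)   # marginal distribution of Omega'
free = np.allclose(p, np.outer(p_a, p_w))
print("free choice holds for this toy distribution:", free)  # True
\end{verbatim}
Replacing the joint table by one whose rows are not proportional makes the check fail, corresponding to a setting that is correlated with pre-existing variables and hence to a violation of the free choice assumption.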
\section{Connection to experiment} Our main argument is based on the assumption that measurement outcomes obey the statistical predictions of quantum theory, and it is interesting to consider how closely experimental observations come to obeying these predictions. For the argument in Ref.~\citenum{CR_ext}, which leads to Eq.~2 in the main text, this assumption is divided into two parts, as mentioned above. The first part of the assumption has already been subject to experimental investigation (see Refs.~\citenum{PBSRG,SSCRT}), giving results compatible with quantum theory to within experimental tolerance. Note that, although no experimental result can establish the Markov chain condition of Eq.~2 precisely, the observed data can be used to bound how close (in trace distance) the Markov chain condition is to holding (see Ref.~\citenum{SSCRT} for more details). The second part of the assumption has not seen much experimental attention to date. However, were we to ever discover a measurement procedure that is demonstrably inconsistent with unitary dynamics on the microscopic scale, this would falsify the assumption and point to new physics. The freedom of choice assumption is more difficult to probe experimentally, since it is stated in terms of $\Gamma$, which is information in a hypothetical higher theory. Nevertheless, it would be possible to falsify the assumption in specific cases, for example using a device capable of predicting the measurement settings before they were chosen. \end{document}
\begin{document} \title{Empathy in One-Shot Prisoner Dilemma} \author{Giulia Rossi, Alain Tcheukam and Hamidou Tembine \thanks{The authors are with Learning \& Game Theory Laboratory, New York University Abu Dhabi, Email:\ [email protected]}} \maketitle \begin{abstract} Strategic decision making involves affective and cognitive functions, such as reasoning and cognitive and emotional empathy, which may be subject to age and gender differences. However, empathy-related changes in strategic decision-making and their relation to age, gender and neuropsychological functions have not been studied widely. In this article, we study a one-shot prisoner dilemma from a psychological game theory viewpoint. Forty-seven participants (28 women and 19 men), aged 18 to 42 years, were tested with an empathy questionnaire and a one-shot prisoner dilemma questionnaire comprising a closeness option with the other participant. The percentage of cooperation and defection decisions was analyzed. A new empathetic payoff model was fitted to the observations from the test to examine whether multi-dimensional empathy levels matter in the outcome. A significant level of cooperation is observed in the experimental one-shot game. The collected data suggest that perspective taking, empathic concern and fantasy scale are strongly correlated and have an important effect on cooperative decisions. However, their effect on the payoff is not additive. Mixed scales as well as other non-classified subscales (25+8 out of 47) were observed in the data. \end{abstract} {\bf Keywords:} Empathy, other-regarding payoff, cooperation \tableofcontents \section{Introduction} {\color{black} In recent years a growing field of behavioral game studies has started to emerge from several academic perspectives. Some of these approaches and disciplines, such as neuroscience, social psychology and artificial intelligence, have already produced major collections of studies and experiments on empathy. In the context of strategic interaction, empathy may play a key role in the decision-making and the outcome of the game. \subsection*{Widely known results in repeated games} For long-run interactions under suitable monitoring assumptions, it has been shown that cooperative outcomes may emerge over time. This is known as the ``Folk Theorem'' or general feasibility theorem (see \cite{folk1,folk2}). For example, the Tit-For-Tat strategy, which consists of starting the game by cooperating and then doing whatever the other participant did on the previous iteration, leads to partial cooperation between the players. While cooperation may emerge by means of repeated long-run interactions under observable plays, there are very few studies on how cooperation can be possible in one-shot games. \subsection*{How about cooperative behavior in one-shot games?} Unfortunately, the Folk theorem result does not apply to one-shot games. This is because a one-shot game has no previous and no next iteration, and hence no opportunity to detect, learn or punish from experience. For the same reasons, the existing reputation-based schemes do not apply directly. \subsection*{Is cooperation a possible outcome in experimental one-shot games?} Experimental results have revealed a strong mismatch between the outcome of the experiments and the predicted outcome from the classical game model. Is the observed mismatch because of some important factors that are not considered in the classical game formulation? Is it because the empathy of the players is neglected in the classical formulation?
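Before describing the experiment, the following minimal sketch makes the classical prediction concrete; the payoff numbers are purely illustrative and are not those used later in this article. Without empathy, defection is a best response to both actions of the other player, i.e., a dominant strategy, which is precisely what the experimental data contradict.
\begin{verbatim}
# Classical one-shot prisoner dilemma with illustrative payoffs only:
# temptation > reward > punishment > sucker.
PAYOFF = {               # (my action, other's action) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_response(other_action):
    return max(["C", "D"], key=lambda a: PAYOFF[(a, other_action)])

# Defection is the best response to both actions, i.e., a dominant strategy.
print({other: best_response(other) for other in ["C", "D"]})  # {'C': 'D', 'D': 'D'}
\end{verbatim}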
This work conducts a basic experiment on the one-shot prisoner dilemma and establishes correlations between players' choices and their empathy levels. The prisoner's dilemma is a canonical example of a game analyzed in classical game theory that shows why two individuals (without empathy consideration) might not cooperate, even if it appears to be in their best collective interest to do so. It was originally framed by Flood and Dresher working at RAND. In 1950, Tucker gave the name and interpretation of prisoner's dilemma to Flood and Dresher's model of cooperation and conflict, resulting in the most well-known game theoretic academic example.} \subsection*{Contribution} The contribution of this work can be summarized as follows. We investigate how players behave and react in an experimental one-shot Prisoner Dilemma in relation to their levels of empathy. The experiment is conducted with several voluntary participants from different countries, cultures and educational backgrounds. For each participant in the project, the Interpersonal Reactivity Index (IRI), which is a multi-dimensional empathy measure, is used. In contrast to the classical empathy scales studied in the game theory literature, which are limited to perspective taking, this work goes one step further by investigating the effect of three other empathy subscales: empathic concern, fantasy scale and personal distress. The experiment reveals a strong mixture of the empathy scales across the population. In addition, each participant responds to a questionnaire that mimics a one-shot Prisoner Dilemma situation and records specific reaction time and closeness to the other participant. We observe that the empathic concern as well as the fantasy scale dimensions may positively affect the other-regarding payoff. In contrast to the classical prisoner dilemma, in which defection is known to be a dominant strategy, the experimental game exhibits a significant level of cooperation in the population (see Section \ref{newmodel1}). In particular, defection is no longer a dominant strategy when the players' psychology is involved. Based on these observations, an empathetic payoff model that better captures the preferences of the decision-makers is proposed in Section \ref{newmodel}. With this empathetic payoff, the outcome of the game better captures the observed cooperation level in the one-shot prisoner dilemma. The experiment reveals not only a positive effect of empathy but also a dispositional negative (spiteful or malicious) effect of empathy in the decision-making of some of the participants. Spitefulness is observed on the personal distress scale in the population. The personal distress scale is negatively correlated with the perspective taking scale. Taken together, these findings suggest that the empathy types of the participants play a key role in their payoffs and in their decisions in the one-shot prisoner dilemma. It also reduces the gap between game theory and game practice by closing the loop between model and observations. It provides experimental evidence that strengthens the model of empathetic payoff and its possible engineering applications \cite{t1}. \subsection*{Structure} The rest of the article is organized as follows. The next section presents some background and a literature overview on empathy. Section 3 presents the experimental study about the impact of individual psychology on human decision making in the one-shot prisoner dilemma. The analysis of the results of the experiment is presented in Section 4.
An explanation of the results is given in Section 5. Section 6 concludes the paper. \section{Background on Empathy} From the field of relational care to the field of economics, the concept of empathy seems to occupy a ubiquitous position. Empathy is not only an important and longstanding issue, but also a commonly used term in everyday life and situations. Even though it is easily approached and used, this concept still has different definitions and meanings today. Born in the aesthetic and philosophical field, it came to be an important operative concept in behavioral game theory, where once more it is used as an instrument to create relations between decision-makers. Public opinion and the scientific community, however, do not always give proper attention to what an empathetic reaction to others implies: being empathetic is neither simple nor a trait given once and for all to a person's personality. It depends on the context of relations and on the social interaction dimension where people are involved as players, consumers, or agents. \subsection*{Definition of Empathy} {\color{black} We present historical definitions and concepts of empathy. Rather than having to choose which of the `definitions' of empathy is correct, this work suggests that appreciating it as a multidimensional phenomenon at least provides a perspective and the ability to specify which aspect of empathy the experimentalist and the theorist are referring to when making a particular investigation in behavioral games.} \begin{itemize}\item {\color{black}Philosophy:} From a philosophical perspective, empathy derives from the Greek word $\acute{ \epsilon } \mu \pi \acute{\alpha} \theta \epsilon i \alpha$ (empatheia), which literally means physical affection. In particular, it is composed of $\acute{ \epsilon} v$ (en), ``in, at'', and $ \pi \acute{\alpha} \theta o \varsigma $ (pathos), ``passion'' or ``suffering''. The author of \cite{c1} introduced the term Einf\"uhlung to the aesthetic and philosophical field in his main book ``On the Optical Sense of Form: A Contribution to Aesthetics'', and many other authors (see \cite{c2} and the references therein) have related the concept to feeling and quasi-perceptual acts. \item {\color{black}Psychology:} From a psychological perspective, it corresponds to a cognitive awareness of the emotions, feelings and thoughts of other persons. In this sense, the term's primary significance is that of ``an intellectual grasping of the affects of another, with the result of a mimic expression of the same feelings'' \cite{c3}. \item {\color{black}Sociology:} From a sociological perspective, empathy corresponds to an ability to be aware of the internal lives of others. It is related to the existence of language as a sort of personal awareness of us as selves \cite{c4}. From a neuroscientific perspective, empathy has been studied in terms of ``two empathic sub-processes, explicitly considering those states and sharing other's internal states'' \cite{c5}. These two cognitive processes are represented in Figure \ref{fig1}. \begin{figure} \caption{Cognitive and emotional empathy are the bases of empathy. They are related to cognitive and affective Theory of Mind, as suggested in \cite{c10}.\label{fig1}} \end{figure} Empathy relies on brain structures that include the anterior insula cortex and the dorsal anterior cingulate cortex. In particular, empathy for others' pain is considered to be located in the anterior insula cortex \cite{c7,c6}.
These areas are studied in relation to empathy and empathic concern in collective actions (CA) \cite{c8}. Oxytocin (OT), an ``evolutionarily ancient molecule that is a key part of the mammalian attachment system'', is considered in these studies as a sort of variable to be manipulated to increase or decrease CA in human beings. \end{itemize} Furthermore, empathy must be analyzed in relation to the concepts of compassion and sympathy. Compassion, from the ecclesiastical Latin compati (to suffer with), literally means to have feelings together. Nowadays it is associated with the capacity of feeling the other's worries, even though it does not imply an automatic action. Sympathy, from the Greek $ \sigma u \mu \pi \acute{a} \theta \epsilon i a $, literally means fellow feeling. Its meaning lies in the capacity of understanding the internal feelings of the other with the intentional desire of changing his/her worries. As indicated in \cite{c9}, ``the object of sympathy is the other person's well being. The object of empathy is understanding''. It has been difficult to distinguish empathy from sympathy because they both relate the emotional state of one person to the state of another. This problem is compounded by the fact that the mapping of the terms has recently been reversed: what is now commonly called empathy was referred to before the middle of the twentieth century as sympathy \cite{c10ptreston}. Finally, the concept of empathy must also be understood in relation to Theory of Mind. Theory of Mind, also referred to as ToM, is the capability to understand others as mental beings with personal mental states, for example feelings, motives and thoughts. It is one of the most important developments in early childhood social cognition and it influences children's life at home as well as at school. Its development from birth to 5 years of age is now described in the research literature, making it possible to understand how infants and children behave in experimental and natural situations \cite{d4}. \subsection*{Different types of empathy} We are not restricting ourselves to the positive part of empathy. Empathy may have a dark or at least costly side, especially when the environment is strategic and interactive, as is the case in games. Can empathy be bad for the self? Empathy can be used, for example, by an attacking player to identify the weak nodes in a network. Can empathy be bad for others? Empathetic users may use their ability to destroy their opponents. In strategic interaction between people, empathy may be used to derive an antipathetic response (distress at seeing others' pleasure, or pleasure at seeing others' distress). In both cases, it will influence the dynamics of the self- and other-regarding preferences. The ability of empathy to generate moral behavior and determine cooperation is limited by three common occurrences: over-arousal, habituation and bias. \begin{itemize}\item Empathic over-arousal is an involuntary process that occurs when an observer's empathic distress becomes so painful and intolerable that it is transformed into an intense feeling of personal distress, which may move the person out of the empathic mode entirely \cite{f13,f14}. \item Generally speaking, in a classical victim-observer relation, the greater the victim's distress, the greater the observer's empathic distress. If a person is exposed repeatedly to distress over time, the person's empathic distress may diminish to the point where the person becomes indifferent to the distress of others.
This is called habituation. This diminished empathic distress and the corresponding indifference is very common in those who, for example, abuse and kill animals. \item Humans evolved in small groups. These groups sometimes competed for scarce resources: in this way, it is not surprising that evolutionary psychologists have identified kin selection as a moral motivator with evolutionary roots. The forms of familiarity bias include in-group bias and similarity biases. In-group bias is simply the tendency to favour one's own group. This is not one group in particular, but whatever group we are able to associate with at a particular time. In-group bias works on the self-esteem of the members. On the opposite side of these biases, we have out-group ones, where people outside the group are considered in a negative way and receive a different and, most of the time, worse treatment (e.g., racial inequality). The similarity bias derives from a psychological heuristic pertaining to how people make judgments based on similarity. More specifically, similarity biases are used to account for how people make judgments based on the similarity between current situations and other situations or prototypes of those situations. The goal of these biases is to maximize productivity through favorable experience while not repeating unfavourable experiences (adaptive behaviour). \end{itemize} \subsection*{Main integrative theories of empathy} During the 1970s, empathy came to be conceived in the psychological literature as a process with affective and cognitive implications. The work in \cite{d12} introduced the first multidimensional model of empathy in the psychological literature, in which affective and cognitive processes work together. According to her, although empathy is defined as a shared emotion between two persons, it depends on cognitive factors. In her integrative-affective model, the affective empathy reaction derives from three component factors, described hereinafter. The first is represented by the cognitive ability to discriminate affective cues in others, the second by the cognitive skills that are involved in assuming the perspective and role of the others, and the third, finally, by emotional responsiveness, the affective ability to experience emotions. On the other hand, one of the most comprehensive perspectives on empathy and its relation to moral development is provided in \cite{d13}. The author considered empathy as a biologically based disposition for altruistic behavior \cite{d13}. He conceives of empathy as being due to various modes of arousal, which allow us to respond empathically in light of a variety of distress cues from another person. The author mentions mimicry, classical conditioning, and direct association as fast acting and automatic mechanisms producing an empathic response. The author lists mediated association and role taking as more cognitively demanding modes, mediated by language, and proposes some of the limitations of our natural capacity to empathize or sympathize with others, particularly what he refers to as `here and now' biases. In other words, the main tendency, according to \cite{d13}, is to empathize more with persons that are in some sense perceived to be closer to us. The authors in \cite{prest02} defined empathy as a shared emotional experience occurring when one person (the subject) comes to feel a similar emotion to another (the object) as a result of perceiving the other's state.
This process results from the representations of the emotions activated when the subject pays attention to the emotional state of the object. The neural mechanism assumes that brain areas have processing domains based on their cellular composition and connectivity. The main theories that we discuss further are related to the use we will make of the empathy concept in game-theoretic analysis. Broadly speaking, we approach empathy as made up of two components, an affective and a cognitive one. The affective (or emotional) component develops from infancy and its structure may be summarized in this progressive intertwinement: a) emotion recognition, b) empathic concern, c) personal distress and d) emotional contagion. The cognitive component of empathy develops during the progressive development of the person from his or her childhood onwards. It is based on the theory of mind, imagination (of emotional future outcomes) and on perspective taking. According to \cite{d5}, two main approaches have been used to study empathy: the first one focuses on cognitive empathy, or the ability to take the perspective of another person and to infer his mental state (Theory of Mind). The second one emphasizes emotional or affective empathy \cite{d6}, defined as an observer's emotional response to another person's emotional state. Table \ref{fig3} below highlights some of the principal features of affective and cognitive empathy \cite{c10}. \begin{table*} \begin{tabular}{ |c|c| } \hline Emotional Empathy & Cognitive Empathy \\ \hline Simulation System & Mentalizing system \\ & Theory of Mind \\ \hline Emotional contagion, personal distress & Perspective taking\\ Empathic concern, emotion recognition & Imagination of emotional future outcomes\\ Core structure & Core structure\\ Development & Development\\ \hline \end{tabular} \caption{Principal features of affective and cognitive empathy.} \label{fig3} \end{table*} A model of empathy-altruism was developed in \cite{b8}. In this model, the author simply assumes that empathic feelings for another person create an altruistic motivation to increase that person's welfare. In particular, the work in \cite{c11ba} found that participants in a social dilemma experiment allocated some of their resources to a person for whom they felt empathy. The author also developed the model of empathy-joy \cite{b8}. This hypothesis underlines that a prosocial act is not completely explained by empathy alone but also by the positive emotion of joy a helper expects as a result of helping another person in need. In connection with this theory, empathy relies on an automatic process that immediately generates other types of behavior useful for predicting other-regarding actions. In relation to the one-trial prisoner's dilemma, the work in \cite{c12} underlined how empathy-altruism should increase cooperation (which then emerges in the situation). The empathic brain model proposed in \cite{c13} is a modulated model of empathy, in which different factors occur in its development. There are four such factors, related to four different conditions: i) one is in an affective state, ii) this state is isomorphic to another person's affective state, iii) this state is elicited by the observation or imagination of another person's affective state, iv) one knows that the other person is the source of one's own affective state. Condition (i) is particularly important as it helps to differentiate empathy from mentalizing.
Mentalizing is the ability to represent others' mental states without emotional involvement. In particular, the two authors underline the epistemological value of empathy: on one side, it provides information about future actions between people and, on the other side, it functions as the ``origin of the motivation for cooperative and prosocial behavior''. The model we provide is based on both these theories and, in particular, it will take into account the distinction between empathy itself and the cortical representations of the emotions. From a developmental point of view, empathy is also studied in relation to prosocial behavior. Two theoretical studies \cite{d1,d2} are fundamental in this sense. In particular, the author considers that the development of a vicarious affective reaction to another's distress begins in infancy. Individual patterns of behavior that respond to the distress are also tailored to the needs of the other. According to \cite{d3}, although an empathic basis for altruistic behavior entails a net cost to the actor, cooperation and altruism require behavior tailored to the feelings and needs of others. \subsection*{How to measure empathy?} We overview empathy measurement in psychology and present existing models of empathy's effect in game theory. \subsubsection*{Empathy measures in Psychology} Psychologists have traditionally studied both situational and dispositional empathy concepts. Situational empathy, i.e., empathic reactions in specific situations, is measured by asking subjects about their experiences immediately after they were exposed to a particular situation, by studying the ``facial, gestural, and vocal indices of empathy-related responding'' \cite{d7}, or by various physiological measures such as the measurement of heart rate or skin conductance. Dispositional empathy, understood as a person's stable character trait, has been measured either by relying on the reports of others (particularly in the case of children) or, most often (in researching empathy in adults), by relying on the administration of various questionnaires associated with specific empathy scales. \begin{itemize} \item {\color{black} Measuring empathic ability:} The work in \cite{d8} proposes to test empathic ability by measuring the degree of correspondence between a person A's and a person B's ratings of each other on six personality traits (such as self-confidence, superior-inferior, selfish-unselfish, friendly-unfriendly, leader-follower, and sense of humor) after a short time of interacting with each other. More specifically, empathic ability is measured through a questionnaire that asks both persons i) to rate themselves on those personality traits, ii) to rate the other as they see them, iii) to estimate, from their perspective, how the other would rate himself, and iv) to rate themselves according to how they think the other would rate them. Person A's empathic ability is then determined by the degree to which A's answers to (iii) and (iv) correspond to B's answers to (i) and (ii). The less A's answers diverge from B's, the higher one judges A's empathic ability and accuracy. The test aims to measure the level of empathy through the dimension of role-taking. \item {\color{black} Empathy Test:} The authors in \cite{d9} created the so-called Empathy Test, which was used in industry in the 1950s. The main purpose of the test is to measure a person's ability to ``anticipate'' certain typical reactions, feelings and behaviour of other people.
This test consists of three sections, which require persons to rank the popularity of 15 types of music, the national circulation of 15 magazines and the prevalence of 10 types of annoyance for a particular group of people. \item {\color{black} Measure of cognitive empathy:} The authors in \cite{d10} is a cognitive empathy scale that consists of 64 questions selected from a variety of psychological personality tests such as the Minnesota Multiphasic Personality Inventory (MMPI) and the California Personality Inventory (CPI). Hogan chose those questions in response to which he found two groups of people-who were independently identified as either low-empathy or high-empathy individuals-as showing significant differences in their answers. \item {\color{black} Measure of emotional empathy:} EETS, Emotional Empathy Tendency Scale, has been developed in \cite{d11}. The questionnaire consists of 33 items divided into seven subcategories testing for ``susceptibility to emotional contagion", ''appreciation of the feelings of unfamiliar and distant others,``extreme emotional responsiveness", ``tendency to be moved by others' positive emotional experiences", ``tendency to be moved by others' negative emotional experience", ``sympathetic tendency", and ``willingness to be in contact with others who have problems". This questionnaire emphasizes the original definition of the empathy construct in its seven subscales that together show high split-half reliability, indicating the presence of a single underlying factor thought to reflect affective or emotional empathy. The authors in \cite{f1} suggested more recently, however, that rather than measuring empathy per se, the scale more accurately reflects general emotional arousability. In response, a revised version of the measure, the Balanced Emotional Empathy Scale \cite{f2} intercepts respondent's reactions to others' mental states \cite{f3}. \item {\color{black} Multidimensional measure of empathy:} The Interpersonal Reactivity Index has been developed in \cite{d18} as an instrument whose aim was to measure individual differences in empathy. The test is made of 28 items belonging to cognitive and emotional domain, and, in particular, they are belonging to four different domains. These are represented by four different subscales, Perspective Taking, Empathic Concern, Fantasy and Personal Distress, each of which includes seven item answered on a 5-point scale ranging from 0 (Does not describes me very well) to 4 (Describes me very well). The Perspective Taking subscale measures the capability to adopt the views of the others spontaneously. The Empathic Concern subscale measures a tendency to experience the feelings of others and to feel sympathy and compassion for the unfortunate people. \item {\color{black} Self-report empathy measure:} Regarding the dimension of self-report empathy measures, we consider important to be mentioned the Scale of Ethnocultural Empathy \cite{f4}, the Jefferson Scale of Physician Empathy \cite{f5}, the Nursing Empathy Scale \cite{f6}, the Autism Quotient \cite{f7} and the Japanese Adolescent Empathy Scale \cite{f8}. Although these instruments were designed for use with specific groups, aspects of these scales may be suitable for assessing a general capacity for empathic responding. \item {\color{black} Measuring deficit in theory of mind:} The Autism Quotient \cite{f7} was developed to measure Autism spectrum disorder symptoms. 
The authors viewed a deficit in theory of mind as the characteristic symptom of this disease \cite{f9} and a number of items from this measure relate to broad deficits in social processing (e.g., ``I find it difficult to work out people's intentions.''). Thus, any measure of empathy should exhibit a negative correlation with this measure. The magnitude of this relation, however, will necessarily be attenuated by the other aspects of the Autism Quotient, which measure unrelated constructs (e.g., attentional focus and local processing biases). Additional self-report measures of social interchange appearing in the neuropsychological literature contain items tapping empathic responding including the Disexecutive Questionnaire \cite{f10} and a measure of emotion comprehension developed in \cite{f11}. These scales focus on the respondent's ability to identify the emotional states expressed by another (e.g., ``I recognize when others are feeling sad".). Current theoretical notions of empathy emphasize the requirement for understanding of another's emotions to form an empathic response \cite{f12}. Only a small number of items on current measures of empathy, however, assess this ability. Table \ref{table:Empathymeasurement} summarizes the Empathy scales measurement in Psychology overviewed above. \end{itemize} \begin{table*} \begin{center} \begin{tabular}{ | p{3cm} | p{1cm} | p{2cm} | p{3cm}| p{3cm} | } \hline & ITEMS & SCALE & CORE CONCEPT & {\color{black} AUTHORS} \\ \hline Empathy Ability & 24 & 0-4 Likert Scale & Imaginative transposing of oneself into the thinking of another & Dymond (1949) \\ \hline Empathy Test & 40 & Ranking Multiple Choice & Cognitive role-taking & Kerr \& Speroff (1954) \\ \hline Empathy Scale & 64 & 0-4 Likert Scale & Apprehension of another's condition or state of mind by an intellectual or imaginati ve point of view & Hogan (1969) \\ \hline EETS - Emotional Empathy Tendency Scale & 33 & -4 to 4 Likert Scale & Emotional Empathy & Mehrabian \& Epstein (1972) \\ \hline IRI- {\color{black} Interpersonal Reactivity Index} & 28 & 0-4 Likert Scale & Reactions of one individual to the observed experiences of another & Davis (1980) \\ \hline Ethnocultural Empathy Scale & 31 & & Culturally Specific Empathy Patterns & Wang, David son,Yakushko, Savoy, Tan, Bleier (2003) \\ \hline Jefferson Scale of Physician Empathy & 5 & 0 - 5 Likert Scale & Patient oriented vs technology oriented empathy in physicians & Kane, Gotto, Mangione, West , Hojat, (2001) \\ \hline Nursing Empathy Scale & 12 & 0-7 Likert Scale & Nurses empathic behaviour in the context of interaction with the client & Reynolds (2000) \\ \hline Autism Quotient & 50 & 0-4 Likert Scale & Autism Spectrum Disorder symptoms & Baron-Cohen, Wheelwright, Skinner, Martin, Cubley (2001) \\ \hline Japanese Adolescent Empathy Scale & 30 & Likert Scale & Empathy to feel or not to feel positive and negative feelings towards the others & Hashimoto, Shiomi (2002) \\ \hline \end{tabular} \end{center} \caption{Empathy measurement in psychology} \label{table:Empathymeasurement} \end{table*} \subsubsection*{Empathy Models in Game Theory} In this subsection, we review some fundamental aspect of behavioral game theory. By a game theoretic approach, empathy and emotive intelligence are considered essential for the development of the games themselves between players. In particular, empathy is essential for the strategic evolution of the games and foundational of Nash Equilibrium \cite{b2,b3}. 
In other words, empathy itself is the instrument that lets the dynamic process between n players happen, as well as the understanding and evaluation of their preferences and beliefs. Cooperative behavior patterns must be considered an important sample of close relations between individuals, based on confiding and disclosure. Relations like helping and assistance behavior, as well as mutual confiding, mutual communication and self-disclosure, are forms of cooperative behavior. To understand the possible role of friendship in cooperation or defection between n players in the one-shot prisoner dilemma, it is essential to reflect on how the evolutionary line of our species tends to create close relationships only between persons who consider themselves kin in terms of genes. The same kind of attitude also influences communal relationships. Cooperative behaviour in a one-shot Prisoner Dilemma between friends, in particular, leads to the activation of the so-called cooperative-parithetic system, which is activated only when there is the perception that the final goal may be reached through a sort of collaboration between the players of the group itself. Empathy has been approached in different ways in the game theory field. The author in \cite{b4} underlines how homo economicus must be empathetic to some degree, although with a different meaning from the concept used in \cite{d19,d20}. In particular, in relation to game theory, he introduces the concept of empathy in connection with the study of interpersonal comparisons of utility in games. More specifically, the model of empathy-altruism developed in \cite{b8} assumes that empathic feelings for another person create an altruistic motivation to increase that person's welfare. Furthermore, the work of \cite{d21} is related to how the participants in a social dilemma experiment allocate some of their resources to a person for whom they felt empathy. In the context of the one-trial prisoner's dilemma, the authors underlined how empathy-altruism should increase cooperation (which then emerges in the situation). Secondly, there is the model of empathy-joy developed in \cite{d22}. This hypothesis underlines that a prosocial act is not completely explained by empathy alone but also by the positive emotion of joy a helper expects as a result of helping (or, better, of having a beneficial impact on) another person in need. In connection with this theory, empathy relies on an automatic process that immediately generates other types of behaviour useful for predicting other-regarding actions \cite{book,rmg}. The works in \cite{d15,d16} proposed the exploration of more psychological and process-oriented models as a more productive framework in comparison with the classical ones in game theory (fairness, impure altruism, reciprocity). In light of this perspective, many concepts related to human behaviour are introduced to explain choice in the game-theoretic approach. Empathy as an operative concept must be understood as different from biases affecting belief formation and biases affecting utility. Empathy operates through both belief and utility formation. Hence, in the presence of empathy, ``beliefs and utility become intricately linked'' \cite{d17}. Regarding empathy as a process of belief formation, the author proposed to analyze two mechanisms, imagine-self and imagine-other. Imagine-self players are able to imagine themselves in other people's shoes; in other words, they try to imagine themselves in similar circumstances.
''Imagine other'' is when a person tries to imagine how another person is feeling. The authors underlined how empathy refers to people's capability to infer what others think or feel, the so-called mind reading. They underlined, at the same time, how empathy itself may have also any consequences on each player evaluation. The main contribution of the authors lies in a critique analysis of altruistic behaviour in game theory. With three toy games, they demonstrated how empathy-altruism is not always linked with imagine-other dimension (and the so-called beliefs formation), since players may use only imagine-self dimension. The authors in \cite{d14} criticize a common concept of given empathy present in public good experiment and, at the end, they demonstrate how empathy may be linked more to the context and social interaction itself in game theory experimental researches. The work \cite{b6} states that a disposition for empathy does not influence the behaviour related to different games (towards them a central role is played by Theory of Mind). Regarding this position, the same authors are underlying that also individual differences related to empathy do not shape social preferences. On the contrary, many other studies conducted show how empathy may influence the structure of the games themselves. The work in \cite{b7} study the Ultimatum Game in an evolutionary concept and underline, in their study, how empathy can lead to the evolution of fairness. The work in \cite{b9} studied the correlation between empathy, anticipated guilt and pro social behaviour; in this study he found out that empathy affects pro-social behaviour in a more complex way than the one represented by the model of social choices. Recently, the concept of empathy has been introduced in mean-field-type games in \cite{t1,tt1v,tt2v,tt3v} in relation to cognitively plausible explanation models of choices in wireless medium access channel and mobile devices strategic interaction. The main results of this applied research that lie in an operative and real world use of empathy concept, are represented by the enforcement of mean-field equilibrium payoff equity and fairness itself between players. \section{Study} \subsection*{Participants} The population of participants includes 47 persons between 18 and 42 years old. The population is composed of 19 men and 28 women chosen from different educational backgrounds, cultures and nationalities (see Table \ref{table:gender}). The names of the participants are not revealed. Different numbers are generated and assigned to the participants. \begin{table}[htb] \centering \begin{tabular}{|l|lr|} \hline \cline{2-3} Gender & Number & Frequency \% \\ \hline Men & 19 & 40.42 \\ \hline Women & 28 & 59.58 \\ \hline Total & 47 & \\ \hline \end{tabular} \caption[]{ Composition: gender and frequency of the participants } \label{table:gender} \end{table} All the subjects were asked to perform two different tests: an IRI test (Interpersonal Reactivity Index \cite{b11}) and a questionnaire that is mimicking, with an empathic and moral emphasis, a prisoner dilemma situation. \subsection*{Empathy questionnaire} The IRI is a 28-item, 5-point Likert-type scale that evaluates four dimensions of empathy: Perspective-Taking, Fantasy, Empathic Concern, and Personal Distress. Each of these four subscales counts 7 items. The Perspective-Taking subscale measures empathy in the form of individual's tendency to adopt, in a spontaneous way, the other's points of view. 
The Fantasy subscale of the IRI evaluates the subject's tendency to transpose themselves into the feelings and behaviors of fictional characters in books, movies, or plays. The Empathic Concern subscale assesses an individual's feelings of concern, warmth, and sympathy toward others. The Personal Distress subscale measures self-oriented feelings of anxiety and distress in response to the distress experienced by others. As pointed out by Baron-Cohen and colleagues \cite{d24}, however, the Fantasy and Personal Distress subscales of this measure contain items that may more properly assess imagination (e.g., ``I daydream and fantasize with some regularity about things that might happen to me'') and emotional self-control (e.g., ``in emergency situations I feel apprehensive and ill at ease''), respectively, than theoretically derived notions of empathy. Indeed, the Personal Distress subscale appears to assess feelings of anxiety, discomfort, and a loss of control in negative environments. Factor analytic and validity studies suggest that the Personal Distress subscale may not assess a central component of empathy \cite{d25}. Instead, Personal Distress may be more related to the personality trait of neuroticism, while the most robust components of empathy appear to be represented by the Empathic Concern and Perspective-Taking subscales \cite{d26}. The Davis IRI scale was chosen, first, for its ability to measure individual differences in the empathy construct and, second, for its relation with measures of social functioning and higher-order psychological functions \cite{d23}. Table \ref{tab:table1} summarizes the first questionnaire on the multidimensional measure of empathy. \begin{table*} \centering \caption{IRI subscales. Extension of the empathy measure of Davis 1980, Yarnold et al. 1996 and Vitaglione et al. 2003. 
The star sign (*) denotes reverse-scored items.} \label{tab:table1} \begin{tabular}{l|cccc|cccc|} \hline Abridged item & \multicolumn{4}{c}{Women (59.58\%)} & \multicolumn{4}{c}{Men (40.42\%)} \\ & PT & EC & FS & PD & PT & EC & FS & PD \\ \hline (1) Daydream and fantasize (FS) & & & & & & & & \\ (2) Concerned with unfortunates (EC) & & 0.6 & & & & & & \\ (3) Can't see others' views$^{*}$ (PT) & & & & & & & &\\ (4) Not sorry for others $^{*}$ (EC) & & & & & & & &\\ (5) Get involved in novels (FS)& & & 0.8& & & & &\\ (6) Not-at-ease in emergencies (PD) & & & & & & & & 0.7\\ (7) Not caught-up in movies$^{*}$ (FS) & & & & & & & &\\ (8) Look at all sides in a fight (PT) & & 0.9124 & 0.2444 & & & & &\\ (9) Feel protective of others (EC) & & & & & &0.3 & &\\ (10) Feel helpless when emotional (PD) & & & & & & & &\\ (11) Imagine friend's perspective (PT) & & & & & 0.8393& 0.824 & &\\ (12) Don't get involved in books$^{*}$ (FS) & & & & & & & &\\ (13) Remain calm if other's hurt $^{*}$ (PD)& & & & & & & &\\ (14) Others' problems none mine$^{*}$ (EC) & & & & & & & &\\ (15) If I'm right I won't argue$^{*}$ (PT)& & & & & & & &\\ (16) Feel like movie character (FS) & & & & & & & &\\ (17) Tense emotions scare me (PD) & & & & & & & &\\ (18) Don't feel pity for others $^{*}$ (EC)& & & & & & & &\\ (19) Effective in emergencies$^{*}$ (PD) & & & & & & & &\\ (20) Touched by things I see (EC)& & & & -0.3452& & & &\\ (21) Two sides to every question (PT)& & & & & & & &\\ (22) Soft-hearted person (EC)& & & & & & & &\\ (23) Feel like leading character (FS) & & & & & & & &\\ (24) Lose control in emergencies (PD)& & & & & & & &\\ (25) Put myself in others' shoes (PT) & & & & & & & &\\ (26) Imagine novels were about me (FS)& & & & & & & &\\ (27) Other's problems destroy me (PD)& & & & & & & &\\ (28) Put myself in other's place (PT)& & 0.42& & & & & &\\ \hline \end{tabular} \end{table*} \subsection*{Game questionnaire} The second questionnaire is about a prisoner's dilemma game. Each of the 47 participants is asked to answer yes or no to 4 questions (see Table \ref{table:questionaire0}), each related to one of the outcomes cooperation--cooperation (CC), cooperation--defection (CD), defection--cooperation (DC), and defection--defection (DD). A virtual other participant is represented in each interaction, leading to 94 decision-makers in the whole process. The set of choices of each participant is $\{C,D\}$, where $D$ is also referred to as $N$ for non-cooperation. \begin{table}[htb] \centering \begin{tabular}{cc|c|c|c|c|l} \cline{3-4} & & \multicolumn{2}{ c| }{Player I} \\ \cline{3-4} & & Cooperate & Defect \\ \cline{1-4} \multicolumn{1}{ |c }{\multirow{2}{*}{Player II} } & \multicolumn{1}{ |c| }{Cooperate} & $(A,A)$ & $ (B,C) $ \\ \cline{2-4} \multicolumn{1}{ |c }{} & \multicolumn{1}{ |c| }{Defect } & $(C,B) $ & $(D,D)$ \\ \cline{1-4} \end{tabular} \caption[]{ Payoff matrix for the standard prisoner's dilemma (without empathy consideration). Here $A$ is the mutual-cooperation payoff, $D$ the mutual-defection payoff, $B$ the cooperator's payoff against a defector, and $C$ the defector's payoff against a cooperator; the inequalities $C>A>D>B$ must hold \cite{b12}.} \label{table:questionaire0} \end{table} \subsection*{Data collection} Regarding the approach to the test, the whole population showed complete comprehension of, and adherence to, the tasks. Only 2 questions were left unanswered by one participant in the IRI test; overall, the response rate across all questions is 99.63\%. In the next section, we analyze the results of the second questionnaire and study the impact of the four IRI scales on the decision making of the population. 
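For concreteness, the short sketch below illustrates how the four IRI subscale scores can be obtained from the raw item answers. It is a minimal illustration rather than the exact scoring script used in the study: the item-to-subscale assignment follows Table \ref{tab:table1}, while the $0$--$4$ Likert coding and the rule that reverse-scored (starred) items count as $4$ minus the answer are assumptions based on the standard Davis scoring convention.
\begin{verbatim}
# Minimal IRI scoring sketch (assumptions: answers coded 0-4,
# item-to-subscale mapping as in the IRI table, starred items
# reverse scored as 4 - answer).
SUBSCALES = {
    "PT": {"items": [3, 8, 11, 15, 21, 25, 28], "reversed": [3, 15]},
    "EC": {"items": [2, 4, 9, 14, 18, 20, 22],  "reversed": [4, 14, 18]},
    "FS": {"items": [1, 5, 7, 12, 16, 23, 26],  "reversed": [7, 12]},
    "PD": {"items": [6, 10, 13, 17, 19, 24, 27], "reversed": [13, 19]},
}

def score_iri(answers):
    """answers: dict mapping item number (1-28) to a 0-4 Likert value."""
    scores = {}
    for name, spec in SUBSCALES.items():
        total = 0
        for item in spec["items"]:
            value = answers.get(item)
            if value is None:        # skipped item (two answers were missing)
                continue
            if item in spec["reversed"]:
                value = 4 - value    # reverse scoring for starred items
            total += value
        scores[name] = total
    return scores

# A participant answering 2 ("neutral") everywhere scores 14 on each subscale.
print(score_iri({i: 2 for i in range(1, 29)}))
\end{verbatim}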
\section{Method and Analysis} The analysis is divided into three steps. In the first step, the population is classified based on the results of the IRI scale. In the second step, we analyze the results of the cooperation game. Lastly, the level of cooperation is studied on the basis of the classification of the population on the IRI scale. \subsection*{IRI scale and population classification} The first step of the analysis concerns the results of the women and men populations on the IRI scale. We report the characteristics of each individual who participated in the test in Table \ref{table:IRIdistra}. Table \ref{table:IRIdistr2b} reports the number of people belonging to each subscale and the number who do not belong to any. \begin{table}[htb] \centering \begin{tabular}{lll} {\bf Scale Type} & {\bf Women ID} & {\bf Men ID} \\ \hline PT & 1, 3, 4, 10, 16, 19, 26, 27 & 12 \\ \hline EC & 15 & - \\ \hline FS & 11, 25 & 8 \\ \hline PD & - & 18 \\ \hline && \\ \hline PT + EC & 7, 8, 20 & - \\ \hline PT + PD & - & 4, 15 \\ \hline PT + FS & 6, 12, 24 & - \\ \hline EC + FS & 17 & 14 \\ \hline EC + PD & 2 & - \\ \hline && \\ \hline PT + EC + FS & 9, 18, 21 & 5, 19 \\ \hline PT + EC + PD & 23 & 6, 11 \\ \hline PT + FS + PD & - & 17 \\ \hline EC + FS + PD & 5 & - \\ \hline && \\ \hline PT + EC + FS + PD & 13, 14, 22 & 9 \\ \hline && \\ \hline None of the scale & 28 & 1, 2, 3, 7, 10, 13, 16 \\ \hline \end{tabular} \caption[]{ IRI scale and participant identification} \label{table:IRIdistra} \end{table} \begin{table}[htb] \centering \begin{tabular}{lllll} {\bf Scale Type} & {\bf Women} & {\bf Men} & {\bf Total } & {\bf Freq} \\ \hline PT & 8 & 1 & 9 & 19.14\% \\ \hline EC & 1 & - & 1 & 2.12\% \\ \hline FS & 2 & 1 & 3 & 6.38\% \\ \hline PD & - & 1 & 1 & 2.12\% \\ \hline && \\ \hline PT + EC & 3 & - & 3 & 6.38\% \\ \hline PT + PD & - & 2 & 2 & 4.25\% \\ \hline PT + FS & 3 & - & 3 & 6.38\% \\ \hline EC + FS & 1 & 1 & 2 & 4.25\% \\ \hline EC + PD & 1 & - & 1 & 2.12\% \\ \hline && \\ \hline PT + EC + FS & 3 & 2 & 5 & 10.63\% \\ \hline PT + EC + PD & 1 & 2 & 3 & 6.38\% \\ \hline PT + FS + PD & - & 1 & 1 & 2.12\% \\ \hline EC + FS + PD & 1 & - & 1 & 2.12\% \\ \hline && \\ \hline PT + EC + FS + PD & 3 & 1 & 4 & 8.51\% \\ \hline && \\ \hline None of the scale & 1 & 7 & 8 & 17.02\% \\ \hline & & & \\ \hline Participants & 28 & 19 & 47 \\ \hline \end{tabular} \caption[]{ IRI scale and population distribution} \label{table:IRIdistr2b} \end{table} The classification of the population based on the different empathy subscales is presented in Table \ref{table:IRIdistr2b}. The results show that 14 people belong to a pure IRI scale, 25 people have mixed IRI characteristics, and 8 people do not belong to any IRI scale. In the next section we study the level of cooperation based on this classification of the population on the IRI scale. \subsection*{Cooperation study: Prisoner's Dilemma} The analysis is based on the IRI test results and on the prisoner's dilemma test results. The results of the prisoner's dilemma indicate that 35.71\% of the women population and 36.84\% of the men population have fully confessed. The results are reported in Tables \ref{table:fem1}, \ref{table:mal1} and \ref{table:pop1}. Notice that 53.57\% of the women population and 31.57\% of the men population have partially confessed; when considering the whole population, 44.68\% have partially confessed. Hence, it is necessary to classify them by looking at the cooperation level within the population of those who partially confessed. 
\begin{table}[htb] \centering \begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{ { \bf Cooperation results (Women) } } \\ \hline Decision & positively & partially & deny \\ \hline Result & 10 out of 28 & 15 out of 28 & 3 out of 28 \\ \hline Frequency & 35.71\% & 53.57\% & 10.71\% \\ \hline \end{tabular} \caption[]{ Cooperation results: Women} \label{table:fem1} \end{table} \begin{table}[htb] \centering \begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{ { \bf Cooperation results (Men) } } \\ \hline Decision & positively & partially & deny \\ \hline Result & 7 out of 19 & 6 out of 19 & 6 out of 19 \\ \hline Frequency & 36.84\% & 31.57\% & 31.57\% \\ \hline \end{tabular} \caption[]{ Cooperation results: Men} \label{table:mal1} \end{table} \begin{table}[htb] \centering \begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{ { \bf Cooperation results (Entire population) } } \\ \hline Decision & positively & partially & deny \\ \hline Result & 17 out of 47 & 21 out of 47 & 9 out of 47 \\ \hline Frequency & 36.17\% & 44.68\% & 19.14\% \\ \hline \end{tabular} \caption[]{ Cooperation results: Entire Population} \label{table:pop1} \end{table} A more refined view of cooperation among the women who partially confessed (15 out of 28) is given in Table \ref{table:par 1}. We want to compute the level of cooperation within that population, and hence we consider all the answers given by these participants to the questionnaire. We then derive the level of cooperation in that population by computing the marginal probability of cooperation. More precisely, we consider two random variables $X \in \{ c_1, d_1\}$ and $Y \in \{ c_2, d_2\}$ for player 1 and player 2 respectively, where $c_i$ stands for cooperation of player $i$ and $d_i$ for defection. We then compute the marginal probability of cooperation of player 1, summing over the possible choices of player 2: $$p(c_1) = \sum_{y \in \{ c_2, d_2\}} p(X = c_1, Y = y).$$ We use sample statistics, estimating the probability that player 1 cooperates from the relative frequency of $c_1$. The resulting marginal probability of cooperation is equal to $0.51$. Hence, on average, 7 people out of the 15 can be classified as having positively confessed. \begin{table}[htb] \begin{tabular}{cc|c|c|c|c|l} \cline{3-4} & & \multicolumn{2}{ c| }{Player I} \\ \cline{3-4} & & Cooperate & Defect \\ \cline{1-4} \multicolumn{1}{ |c }{\multirow{2}{*}{Player II} } & \multicolumn{1}{ |c| }{Cooperate} & 10 & $ 1 $ \\ \cline{2-4} \multicolumn{1}{ |c }{} & \multicolumn{1}{ |c| }{Defect } & $5 $ & 13 \\ \cline{1-4} \end{tabular} \caption[]{ Women population: partially confess (decision making). $p(c_1) = 0.51$. } \label{table:par 1} \end{table} In the case of the men, a more refined view of cooperation among those who partially confessed (6 out of 19) is given in Table \ref{table:par 2}. The marginal probability of cooperation is equal to $p(c_1) = 0.46$. Hence, on average, 2 people out of the 6 can be classified as having positively confessed. \begin{table}[htb] \begin{tabular}{cc|c|c|c|c|l} \cline{3-4} & & \multicolumn{2}{ c| }{Player I} \\ \cline{3-4} & & Cooperate & Defect \\ \cline{1-4} \multicolumn{1}{ |c }{\multirow{2}{*}{Player II} } & \multicolumn{1}{ |c| }{Cooperate} & 5 & $ 2 $ \\ \cline{2-4} \multicolumn{1}{ |c }{} & \multicolumn{1}{ |c| }{Defect } & $1 $ & 5 \\ \cline{1-4} \end{tabular} \caption[]{ Men: partially confess (decision making). 
$p(c_1) = 0.46$. } \label{table:par 2} \end{table} When considering the whole population, the fraction of people who partially confess is 21/47 and the marginal probability of cooperation is equal to $p(c_1) = 0.5$. Hence, on average, 10 people out of the 21 can be classified as having positively confessed. \begin{table}[htb] \begin{tabular}{cc|c|c|c|c|l} \cline{3-4} & & \multicolumn{2}{ c| }{Player I} \\ \cline{3-4} & & Cooperate & Defect \\ \cline{1-4} \multicolumn{1}{ |c }{\multirow{2}{*}{Player II} } & \multicolumn{1}{ |c| }{Cooperate} & 15 & $ 3 $ \\ \cline{2-4} \multicolumn{1}{ |c }{} & \multicolumn{1}{ |c| }{Defect } & $6 $ & 18 \\ \cline{1-4} \end{tabular} \caption[]{ Population: partially confess (decision making). $p(c_1) = 0.5$. } \label{table:par 3} \end{table} \subsection*{Cooperation vs IRI scale} In this subsection, we are interested in computing the level of cooperation within each `pure' IRI subscale. For this aim, we consider the subpopulation belonging to each scale, and we use the cooperation answers in the prisoner's dilemma game to compute the probability of cooperation. \\ \\ { \bf \color{black} PT vs Cooperation} \begin{minipage}{0.4\textwidth} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{6}{|c|}{ \bf \color{black} Women } \\ \hline {\color{black} Coop\textbackslash PT } & A & B & C & D & E \\ \hline $p(c)$ & 0 & 0.5 & 0.75 & 0.66 & 1 \\ \hline \end{tabular} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{6}{|c|}{ \bf \color{black} Men } \\ \hline {\color{black} Coop\textbackslash PT } & A & B & C & D & E \\ \hline $p(c)$ & 0 & 0 & 0 & 0.66 & 0 \\ \hline \end{tabular} \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{6}{|c|}{ \bf \color{black} Women + Men } \\ \hline {\color{black} Coop\textbackslash PT } & A & B & C & D & E \\ \hline $p(c)$ & 0 & 0.5 & 0.75 & 0.66 & 1 \\ \hline \end{tabular} \end{minipage} The Pearson correlation coefficient between the level of cooperation and the PT scale is $r_{\mathrm{women}} = 0.7797$ $(p < .01)$ for women, $r_{\mathrm{men}} = 1$ $(p < .01)$ for men, and $r_{\mathrm{population}} = 0.6347$ $(p < .01)$ for the whole population belonging to the PT scale. The interpretation is that there is a positive correlation for women. The fact that only one man is PT and cooperated positively leads to a trivially perfect correlation for men. Overall, the PT population tends to cooperate positively. { \bf \color{black} PD vs Cooperation} \begin{minipage}{0.4\textwidth} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{6}{|c|}{ \bf \color{black} Men (18) } \\ \hline {\color{black} Coop\textbackslash PD } & A & B & C & D & E \\ \hline $p(c)$ & 0 & 0 & { \color{black} 0 } & 0 & 0 \\ \hline \end{tabular} \end{minipage} There is only one man who is PD on the IRI scale, and his probability of cooperation is zero since he only denied. { \bf \color{black} EC vs Cooperation: Women} \begin{minipage}{0.4\textwidth} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{6}{|c|}{ \bf \color{black} Women (15) } \\ \hline {\color{black} Coop\textbackslash EC } & A & B & C & D & E \\ \hline $p(c)$ & 0 & 0 & { \color{black} 1 } & 0 & 0 \\ \hline \end{tabular} \end{minipage} There is only one woman who is EC on the IRI scale, and her probability of cooperation is 1 since she positively confessed. Therefore the Pearson correlation coefficient is trivially one. 
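The two quantities used in this and the previous subsection, namely the marginal probability of cooperation extracted from a $2\times 2$ decision table and the Pearson correlation between the IRI answer level and the cooperation outcome, can be reproduced with the minimal sketch below. The $2\times 2$ counts are those of Table \ref{table:par 1} (women who partially confessed); the paired level/cooperation vectors are hypothetical placeholders standing in for the per-participant data, not values taken from the study.
\begin{verbatim}
# Minimal sketch: marginal cooperation probability from a 2x2 decision
# table, and Pearson correlation between scale level and cooperation.
from statistics import correlation   # Python 3.10+

# counts[(player1_action, player2_action)]: women who partially confessed
counts = {("C", "C"): 10, ("D", "C"): 1, ("C", "D"): 5, ("D", "D"): 13}

total = sum(counts.values())
p_c1 = sum(n for (a1, _), n in counts.items() if a1 == "C") / total
print(round(p_c1, 2))   # 15/29, about 0.52 (reported as 0.51 in the text)

# Pearson correlation between IRI answer level (1-5) and cooperation
# score (0, 0.5 or 1); hypothetical placeholder data.
levels      = [2, 3, 3, 4, 4, 5]
cooperation = [0.0, 0.5, 0.5, 1.0, 0.5, 1.0]
print(round(correlation(levels, cooperation), 4))
\end{verbatim}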
{ \bf \color{black} FS vs Cooperation} \begin{minipage}{0.4\textwidth} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{6}{|c|}{ \bf \color{black} Women (11, 25) } \\ \hline {\color{black} Coop\textbackslash FS } & A & B & C & D & E \\ \hline $p(c)$ & 0 & 0 & 0 & { \color{black} 1 } & 0 \\ \hline \end{tabular} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{6}{|c|}{ \bf \color{black} Men (8) } \\ \hline {\color{black} Coop\textbackslash FS } & A & B & C & D & E \\ \hline $p(c)$ & 0 & 0 & 0 & 0 & { \color{black} 0.66 } \\ \hline \end{tabular} \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{6}{|c|}{ \bf \color{black} Women + Men } \\ \hline {\color{black} Coop\textbackslash FS } & A & B & C & D & E \\ \hline $p(c)$ & 0 & 0 & 0 & { \color{black} 1 } & { \color{black} 0.66 } \\ \hline \end{tabular} \end{minipage} The Pearson correlation coefficient between the level of cooperation and the FS scale is $r_{\mathrm{women}} = 1$ $(p < .01)$ for women, $r_{\mathrm{men}} = 1$ $(p < .01)$ for men, and $r_{\mathrm{population}} = 0.9891$ $(p < .01)$ for the population belonging to the FS scale. The strong positive correlation between FS and cooperation is due to the fact that two people positively confessed and one partially confessed. \pgfplotsset{ compat=newest, xlabel near ticks, ylabel near ticks } \begin{tikzpicture}[font=\small] \begin{axis}[ ybar interval=0.3, bar width=2pt, grid=major, xlabel={Empathy Scale Quality: Perspective Taking (PT)}, ylabel={Total Score of Answers}, ymin=0, ytick=data, xtick=data, axis x line=bottom, axis y line=left, enlarge x limits=0.1, symbolic x coords={awful,bad,average,good,excellent,ideal}, xticklabel style={anchor=base,yshift=-0.5\baselineskip}, ] \addplot[fill=yellow] coordinates { (awful,3) (bad,13) (average,14) (good,16) (excellent,9) (ideal, 20) }; \addplot[fill=white] coordinates { (awful,1) (bad,1) (average,2) (good,3) (excellent,0) (ideal, 20) }; \legend{ Women, Men} \end{axis} \end{tikzpicture} \pgfplotsset{ compat=newest, xlabel near ticks, ylabel near ticks } \begin{tikzpicture}[font=\small] \begin{axis}[ ybar interval=0.3, bar width=2pt, grid=major, xlabel={Empathy Scale Quality: Personal Distress (PD)}, ylabel={Total Score of Answers}, ymin=0, ytick=data, xtick=data, axis x line=bottom, axis y line=left, enlarge x limits=0.1, symbolic x coords={awful,bad,average,good,excellent,ideal}, xticklabel style={anchor=base,yshift=-0.5\baselineskip}, ] \addplot[fill=yellow] coordinates { (awful,0) (bad,0) (average,0) (good,0) (excellent,0) (ideal, 10) }; \addplot[fill=white] coordinates { (awful,0) (bad,2) (average,4) (good,1) (excellent,0) (ideal, 10) }; \legend{ Women, Men} \end{axis} \end{tikzpicture} \pgfplotsset{ compat=newest, xlabel near ticks, ylabel near ticks } \begin{tikzpicture}[font=\small] \begin{axis}[ ybar interval=0.3, bar width=2pt, grid=major, xlabel={Empathy Scale Quality: Empathic Concern (EC)}, ylabel={Total Score of Answers}, ymin=0, ytick=data, xtick=data, axis x line=bottom, axis y line=left, enlarge x limits=0.1, symbolic x coords={awful,bad,average,good,excellent,ideal}, xticklabel style={anchor=base,yshift=-0.5\baselineskip}, ] \addplot[fill=yellow] coordinates { (awful,2) (bad,0) (average,2) (good,3) (excellent,0) (ideal, 10) }; \addplot[fill=white] coordinates { (awful,0) (bad,0) (average,0) (good,0) (excellent,0) (ideal, 0) }; \legend{ Women, Men} \end{axis} \end{tikzpicture} \pgfplotsset{ compat=newest, xlabel near ticks, ylabel near ticks } \begin{tikzpicture}[font=\small] 
\begin{axis}[ ybar interval=0.3, bar width=2pt, grid=major, xlabel={Empathy Scale Quality: Fantasy Scale (FS)}, ylabel={Total Score of Answers}, ymin=0, ytick=data, xtick=data, axis x line=bottom, axis y line=left, enlarge x limits=0.1, symbolic x coords={awful,bad,average,good,excellent,ideal}, xticklabel style={anchor=base,yshift=-0.5\baselineskip}, ] \addplot[fill=yellow] coordinates { (awful,1) (bad,3) (average,3) (good,6) (excellent,2) (ideal, 10) }; \addplot[fill=white] coordinates { (awful,1) (bad,1) (average,1) (good,1) (excellent,3) (ideal, 10) }; \legend{ Women, Men} \end{axis} \end{tikzpicture} \subsection*{Cooperation vs IRI mixed scale} \label{newmodel1} In this section we analyze the correlation between the two components of each mixed scale of length two, and we compute the level of cooperation in each sub-population. \begin{itemize} \item Case {\bf PT + EC}: the sub-population is composed only of women. The Pearson correlation coefficient between PT and EC is $r_{\mathrm{women}} = r_{\mathrm{population}} = 0.8108$ $(p < .01)$. The probability of cooperation is $p(c) = 0.5$. \item Case {\bf PT + FS}: the sub-population is composed only of women. The Pearson correlation coefficient between PT and FS is $r_{\mathrm{women}} = r_{\mathrm{population}} = 0.9382$ $(p < .01)$. The probability of cooperation is $p(c) = 0.62$. \item Case {\bf EC + FS}: the sub-population is composed of women and men. The Pearson correlation coefficient between EC and FS is $r_{\mathrm{women}} = 0.7845$ $(p < .01)$, $r_{\mathrm{men}} = 0.8709$ $(p < .01)$ and $r_{\mathrm{population}} = 0.7148$ $(p < .01)$ for women, men and the global population respectively. The probability of cooperation is $p(c) = 0.75$. \item Case {\bf PT + PD}: the sub-population is composed only of men. The Pearson correlation coefficient between PT and PD is $r_{\mathrm{men}} = r_{\mathrm{population}} = 0.2796$ $(p < .01)$. The probability of cooperation is $p(c) = 0.6$. \item Case {\bf EC + PD}: the sub-population is composed only of women. The Pearson correlation coefficient between EC and PD is $r_{\mathrm{women}} = r_{\mathrm{population}} = -0.3462$ $(p < .01)$. The probability of cooperation is $p(c) = 0.5$. \end{itemize} \begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline Pearson correlation & PT & EC & FS & PD \\ \hline PT & - & {\bf 0.81} & {\bf 0.9382} & 0.2796 \\ \hline EC & - & - & {\bf 0.8709} & -0.3462 \\ \hline FS & - & - & - & - \\ \hline PD & - & - & - & - \\ \hline \end{tabular} \end{center} \caption{Subscale correlation} \label{ms} \end{table} The levels of cooperation corresponding to the mixed IRI scales of Table \ref{ms} are given in Table \ref{mscoop}. We observe that a high level of cooperation associated with a high correlation coefficient corresponds to empathy-altruism behavior (namely PT + FS and EC + FS), while a high level of cooperation associated with a low correlation coefficient corresponds to empathy-spitefulness (namely PT + PD and EC + PD). \begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|} \hline & Cooperation level \\ \hline PT + EC & 50\% \\ \hline PT + FS & 62.5\% \\ \hline PT + PD & 66.66\% \\ \hline EC + FS & 75\% \\ \hline EC + PD & 50\% \\ \hline \end{tabular} \end{center} \caption{Level of cooperation at mixed scales} \label{mscoop} \end{table} {\color{black} \subsection*{The effect of empathy on decisions} \label{empathyDecisions} In this section we study the effect of the individual scales on the degree of cooperation. For this aim we compare the result of each pure scale with the group of individuals who do not belong to any IRI subscale. 
The motivation of this placebo test is that people who do not belong to any scale provide a valuable control sample for assessing the impact of an IRI scale such as PT, EC or FS on a participant's decision making. Since our dataset is not large, we rely on nonparametric linear regression, using Theil's method \cite{Theil1950a,Theil1950b,Theil1950c} to compute the median slope given a dependent variable set $\{y\}$ and an independent variable set $\{x\}$. The dataset of the independent variable $\{x\}$ represents an IRI scale and is obtained as follows: (i) we first select the scale we want to study (say, the PT scale); the cardinality of the dataset is then given by the number of people belonging to that scale (see Table \ref{table:IRIdistra}); (ii) the value $x_i$ associated with individual $i$ is the answer level (say, A) chosen most often within the questionnaire $\{ A, B, C, D, E\}$, mapped to an integer according to $\{ A = 1, B = 2, C = 3, D = 4, E = 5 \}$. The dataset of the dependent variable $\{y\}$ represents the cooperation result: individual $i$ is assigned $y_i = 1$ if they fully cooperated, $y_i = 0.5$ if they partially cooperated, and $y_i = 0$ if they denied. Based on the above definitions we can now study the effect of the PT scale on the level of cooperation. There are 9 individuals belonging to the pure PT scale (see Table \ref{table:PTlr}). \begin{table} [h] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{10}{|c|}{ { \bf PT level: pure PT individuals }} \\ \hline {\color{black} x} & 2 & 3 & 3 & 4 & 4 & 4 & 4 & 4 & 5 \\ \hline {\color{black} y } & 0.5 & 0.5 & 1 & 0 & 0.5 & 1 & 1 & 1 & 1 \\ \hline \multicolumn{10}{|c|}{ { \bf Median slope: $\beta_{PT} = \bf{0.2515}$ }} \\ \hline \end{tabular} \caption[]{ { \bf PT: nonparametric linear regression dataset}} \label{table:PTlr} \end{table} The resulting Theil median slope is $\beta_{PT} = \bf{0.2515}$, where $\beta_{PT}$ is the median of the pairwise-slope set $\{ -1.0010, -0.5010, -0.4980, -0.2500, 0.001, 0.001, 0.001, 0.001, 0.001, 0.002, 0.002, 0.002, 0.003, 0.003, 0.1680, 0.2502, 0.2506, 0.2510, { \color{black} \bf{0.2515} }, 0.4990, 0.4995, 0.4995, 0.5000, 0.5025, 1, 1, 1, 1.004, 167, 250, 250.75, 334, 499, 499, 500.5, 502 \}$. For the placebo test, the dataset for the individuals belonging to ``None of the scale'' is given in Table \ref{table:nonPTlr}. \begin{table} [h] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{9}{|c|}{ { \bf PT level: ``None of the scale'' individuals }} \\ \hline {\color{black} x} & 1 &2 &2 &2 &2 &2 &4& 4 \\ \hline {\color{black} y } & 0& 0& 0& 0.5& 0.5& 1& 0.5 & 1\\ \hline \multicolumn{9}{|c|}{ { \bf Median slope: $\beta_{\mbox{none of the scale}} = \bf{0.4995}$ }} \\ \hline \end{tabular} \caption[]{ { \bf PT level: nonparametric linear regression dataset (none of the scale)}} \label{table:nonPTlr} \end{table} The resulting Theil median slope is $\beta_{\mbox{none of the scale}} = \bf{0.4995}$, where $\beta_{\mbox{none of the scale}}$ is the median of the pairwise-slope set $\{ -0.2495, 0.0005, 0.0005, 0.001, 0.001, 0.002, 0.1673, 0.2501, 0.2503, 0.2505, 0.2506, 0.3336, 0.499, { \color{black} \bf{ 0.4995 } }, 0.4995, 0.4998, 0.996, 1, 1, 166.6667, 249.5, 249.5, 249.75, 250, 332.6667, 498, 499, 499 \}$. 
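As an illustration of the estimator used here, the sketch below computes the Theil median slope from a set of $(x_i,y_i)$ pairs. It is a minimal sketch under the standard convention of skipping pairs with identical $x$ values; the very large slopes appearing in the sets listed above suggest that tied $x$ values were instead slightly perturbed, so reproducing the exact medians reported in this section would require that same convention. The toy data below are hypothetical and are not taken from the study.
\begin{verbatim}
# Minimal Theil median-slope (Theil-Sen) sketch. Pairs with identical
# x are skipped here; the listed slope sets suggest tied x values were
# slightly perturbed instead, so exact medians may differ.
from itertools import combinations
from statistics import median

def theil_slope(x, y):
    slopes = [(y2 - y1) / (x2 - x1)
              for (x1, y1), (x2, y2) in combinations(zip(x, y), 2)
              if x2 != x1]
    return median(slopes)

# Hypothetical toy data: IRI answer level (1-5) vs cooperation (0, 0.5, 1).
x = [1, 2, 3, 4, 5]
y = [0.0, 0.5, 0.5, 1.0, 1.0]
print(theil_slope(x, y))   # 0.25 for this toy example
\end{verbatim}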
Result interpretation: the slope coefficient $\beta$ measures the effect of the independent variable $x$ on the dependent variable $y$; the larger its value, the stronger the effect of $x$ on $y$. We obtain $\beta_{PT} = 0.2515$ and $\beta_{\mbox{none of the scale}} = 0.4995$, so that $\beta_{PT} < \beta_{\mbox{none of the scale}}$. Since we are studying the effect of the PT scale on the level of cooperation, this result means that we cannot conclude that the PT component is the only factor that increases the level of cooperation. Similarly, we are interested in the influence of the Fantasy (FS) component on the level of cooperation, and we apply the approach described above to the FS component. The resulting Theil median slope is $\beta_{FS} = \bf{0.5000}$, where $\beta_{FS}$ is the median (see Table \ref{table:FSlr}) of the pairwise-slope set $\{ -0.0010, { \color{black} \bf{ 0.5000 } }, 501.0000 \}$. \begin{table} [h] \centering \begin{tabular}{|c|c|c|c| } \hline \multicolumn{4}{|c|}{ { \bf FS level: pure FS individuals }} \\ \hline {\color{black} x} & 4& 4 &5 \\ \hline {\color{black} y } & 0.5& 1& 1 \\ \hline \multicolumn{4}{|c|}{ { \bf Median slope: $\beta_{FS} = \bf{0.5} $ } } \\ \hline \end{tabular} \caption[]{ { \bf FS: nonparametric linear regression dataset}} \label{table:FSlr} \end{table} In the case of the individuals belonging to ``None of the scale'', the resulting Theil median slope is $\beta_{\mbox{none of the scale}} = 0.5005$, where $\beta_{\mbox{none of the scale}}$ is the median (see Table \ref{table:nonFSlr}) of the pairwise-slope set $\{ -0.5005, -0.499, 0.001, 0.0010, 0.001, 0.002, 0.002, 0.2504, 0.2508, 0.4995, 0.4995, 0.5, 0.5003, { \color{black} \bf{ 0.5005 }}, 0.5010, 0.5015, 0.9980, 0.9980, 0.999, 1, 1, 167, 250, 250, 498, 499, 499, 500 \}$. \begin{table} [h] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{9}{|c|}{ { \bf FS level: ``None of the scale'' individuals }} \\ \hline {\color{black} x} & 1 &1 &1 &1& 2 &2& 2 &3\\ \hline {\color{black} y } & 0 &0 &0.5& 0.5& 0 &0.5& 1& 1 \\ \hline \multicolumn{9}{|c|}{ { \bf Median slope: $\beta_{\mbox{none of the scale}} = \bf{0.5005}$ }} \\ \hline \end{tabular} \caption[]{ { \bf FS level: nonparametric linear regression dataset (none of the scale)}} \label{table:nonFSlr} \end{table} The datasets for the EC and PD components are too small to perform nonparametric linear regression on them. } \subsection*{Explanation} \label{newmodel} \subsubsection*{Game without empathy} We consider the one-shot game given by Table \ref{table:noEmpathy}. {\color{black} An action profile is Pareto efficient, or Pareto optimal, if it is not possible to make any one player better off without making at least one player worse off. A Nash equilibrium is a situation in which no player can improve her payoff by a unilateral deviation. The action profile $(D,D)$ is the unique Nash equilibrium, and $D$ is a dominant strategy for each player. 
But $(C,C)$ Pareto-dominates $(D,D)$. The three action profiles $(C,C)$, $(C,D)$ and $(D,C)$ are all Pareto optimal, but $(C,C)$ is the most socially efficient one.} \begin{table}[htb] \centering \begin{tabular}{cc|c|c|c|c|l} \cline{3-4} & & \multicolumn{2}{ c| }{Player I} \\ \cline{3-4} & & Cooperate & Defect \\ \cline{1-4} \multicolumn{1}{ |c }{\multirow{2}{*}{Player II} } & \multicolumn{1}{ |c| }{Cooperate} & (-6,-6) & $ (-120,0) $ \\ \cline{2-4} \multicolumn{1}{ |c }{} & \multicolumn{1}{ |c| }{Defect } & $(0,-120) $ & (-72,-72) \\ \cline{1-4} \end{tabular} \caption[]{ Payoff matrix of the prisoner's dilemma questionnaire } \label{table:noEmpathy} \end{table} The classical game model fails to explain the experimental observations: in the classical prisoner's dilemma (without empathy consideration), cooperation is a dominated strategy. Hence it is expected that, in the fully rational case, all the participants decide to defect in the questionnaire of the first experiment (see Table \ref{table:noEmpathy}). But this was not the case, since only 19.14\% of the population defected (see Table \ref{table:pop1}). This result can be attributed to psychological aspects of human behavior when taking part in the game. The idea is therefore to modify the classical payoff and integrate empathy into the preferences of the players. This leads to an empathetic payoff, as explained below. \subsubsection*{Game with empathy consideration} As observed from the data, a significant level of cooperation appears in the experimental game. This calls for a new model of the classical game in order to better understand the behavior of the participants. We propose a new payoff matrix that takes into consideration the effect of empathy on the outcome. Denote by $\lambda_{12}$ the degree of empathy that prisoner 1 has toward prisoner 2, and by $\lambda_{21}$ the converse. The payoff of the classical prisoner's dilemma game (see Table \ref{table:noEmpathy}) changes and now depends on the empathy levels $\lambda_{12}$ and $\lambda_{21}$ of the prisoners (see Table \ref{table:Empathyconsideration}). We are now interested in finding all the possible equilibria of the new game as a function of $\lambda_{12}$ and $\lambda_{21}$. \begin{table}[htb] \footnotesize \centering \begin{tabular}{cc|c|c|c|c|l} \cline{3-4} & & \multicolumn{2}{ c| }{P I} \\ \cline{3-4} & & C & D \\ \cline{1-4} \multicolumn{1}{ |c }{\multirow{2}{*}{P II} } & \multicolumn{1}{ |c| }{C} & $(-6-6 \lambda_{12}, -6 -6\lambda_{21})$ & $ (-120, -120\lambda_{21}) $ \\ \cline{2-4} \multicolumn{1}{ |c }{} & \multicolumn{1}{ |c| }{D} & $(-120\lambda_{12},-120)$ & $(-72-72\lambda_{12}, -72-72\lambda_{21})$ \\ \cline{1-4} \end{tabular} \caption[]{ Payoff matrix of the prisoner's dilemma with empathy consideration.} \label{table:Empathyconsideration} \end{table} { Equilibrium analysis of Table \ref{table:Empathyconsideration}: } \begin{itemize} \item CC is an equilibrium if $\lambda_{12} \geq \frac{6}{114}$ and $\lambda_{21} \geq \frac{6}{114}$; \item CN is an equilibrium if $\lambda_{12} \geq \frac{2}{3}$ and $\lambda_{21} \leq \frac{6}{114}$; \item NC is an equilibrium if $\lambda_{12} \leq \frac{6}{114}$ and $\lambda_{21} \geq \frac{2}{3}$; \item NN is an equilibrium if $\lambda_{12} \leq \frac{2}{3}$ and $\lambda_{21} \leq \frac{2}{3}$. \end{itemize} 
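These thresholds can be checked directly from the empathetic payoffs of Table \ref{table:Empathyconsideration}. A player with empathy parameter $\lambda$ toward the opponent evaluates an outcome as her own material payoff plus $\lambda$ times the opponent's material payoff. Cooperating against a cooperator is then worth $-6-6\lambda$, whereas deviating to defection is worth $0-120\lambda$, so mutual cooperation is sustained when
$$-6-6\lambda \;\geq\; -120\lambda \iff 114\lambda \geq 6 \iff \lambda \geq \frac{6}{114},$$
which must hold for both players. Similarly, defecting against a defector is worth $-72-72\lambda$, whereas deviating to cooperation is worth $-120+0\cdot\lambda=-120$, so mutual defection is an equilibrium when
$$-72-72\lambda \;\geq\; -120 \iff \lambda \leq \frac{2}{3},$$
again for both players. The asymmetric profiles CN and NC combine one inequality of each type, which yields exactly the conditions listed above.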
Since empathy can be positive, negative or null, we analyze the outcome of the game for the different possible signs of the parameters $\lambda_{12}$ and $\lambda_{21}$. \\ \\ { \bf Analysis 1: $\lambda_{12}, \lambda_{21} \geq 0$.} We now consider $\lambda_{12}$ and $\lambda_{21}$ as two random variables with a distribution across the population, and we characterize the outcome of the game based on their values. \begin{itemize} \item {\bf Case 1 (medium-medium):} if $\lambda_{12},\lambda_{21} \in \left[ \frac{6}{114}, \frac{2}{3} \right]$ then we have three equilibria: CC, NN, and a mixed equilibrium in which player 1 plays $p_2C + (1-p_2)N$ and player 2 plays $p_1C + (1-p_1)N$, with $p_1 = \frac{8 - 12\lambda_{12} }{ 7 (1 + \lambda_{12})}$ and $p_2 = \frac{8 - 12\lambda_{21} }{ 7 (1 + \lambda_{21})}$. \item {\bf Case 2 (high-high):} if $\lambda_{12}, \lambda_{21} \in \left[ \frac{2}{3},1 \right]$ then CC is the unique equilibrium. \item {\bf Case 3$_{a}$ (high-low):} if $\lambda_{12} \in \left[ \frac{2}{3},1 \right]$ and $\lambda_{21} \in \left[ 0, \frac{6}{114} \right]$ then CN is the unique equilibrium. \item {\bf Case 3$_{b}$ (low-high):} if $\lambda_{12} \in \left[ 0, \frac{6}{114}\right]$ and $\lambda_{21} \in \left[ \frac{2}{3},1 \right]$ then NC is the unique equilibrium. \item {\bf Case 4$_{a}$ (medium-low):} if $\lambda_{12} \in \left[ \frac{6}{114}, \frac{2}{3} \right]$ and $\lambda_{21} \in \left[ 0, \frac{6}{114} \right]$ then NN is the unique equilibrium. \item {\bf Case 4$_{b}$ (low-medium):} if $\lambda_{12} \in \left[ 0, \frac{6}{114} \right]$ and $\lambda_{21} \in \left[ \frac{6}{114}, \frac{2}{3} \right]$ then NN is the unique equilibrium. \item {\bf Case 5$_{a}$ ($\lambda_{12}$ high):} if $\lambda_{12} > \frac{2}{3}$ then C is a dominant strategy (unconditional cooperation) for player 1. \item {\bf Case 5$_{b}$ ($\lambda_{21}$ high):} if $\lambda_{21} > \frac{2}{3}$ then C is a dominant strategy (unconditional cooperation) for player 2. \item {\bf Case 6$_{a}$ ($\lambda_{12}$ low):} if $\lambda_{12} \in \left[ 0, \frac{6}{114}\right]$ then N is a dominant strategy (unconditional non-cooperation) for player 1. \item {\bf Case 6$_{b}$ ($\lambda_{21}$ low):} if $\lambda_{21} \in \left[ 0, \frac{6}{114}\right]$ then N is a dominant strategy (unconditional non-cooperation) for player 2. \end{itemize} { \bf Analysis 2: $\lambda_{12}, \lambda_{21} < 0$.} \begin{itemize} \item If $\lambda_{12}, \lambda_{21} < 0$ then NN is the unique equilibrium. \end{itemize} { \bf Analysis 3: $\lambda_{12}>0, \lambda_{21} < 0$.} \begin{itemize} \item If $\lambda_{21} < 0$ then N is a dominant strategy (unconditional non-cooperation) for player 2. \item If $\lambda_{12} > \frac{2}{3}$ then CN is an equilibrium. \item If $\lambda_{12} < \frac{2}{3}$ then NN is an equilibrium. \end{itemize} { \bf Analysis 4: $\lambda_{12}<0, \lambda_{21} > 0$.} \begin{itemize} \item If $\lambda_{12} < 0$ then N is a dominant strategy (unconditional non-cooperation) for player 1. \item If $\lambda_{21} > \frac{2}{3}$ and $\lambda_{12} \in \left[ - \frac{6}{114}, 0 \right]$ then NC is an equilibrium. \item If $\lambda_{21} < \frac{2}{3}$ then NN is an equilibrium. \end{itemize} Figure \ref{fig:twoEmpathy} summarizes the outcome of the two-player game. \begin{figure} \caption{Equilibrium of the game with empathy consideration} \label{fig:twoEmpathy} \end{figure} The proposed empathetic payoff better captures the preferences of the players, as a non-negligible proportion of cooperators is obtained analytically in the new game. 
Thus, if one accurately quantifies the effect of empathy in the payoff, the resulting game is better adapted to the results of the experiment. By iterating this procedure over several experiments and model adjustments, we obtain better game-theoretic models of real-life interaction. We believe that the generic approach developed here can be extended to other classes of games, as indicated in \cite{t1,mimo}. \begin{figure} \caption{ Empathy scale distribution across the population of participants } \label{fig2:repartition} \end{figure} \section{Conclusion} We have proposed a basic experiment on the role of empathy in the one-shot Prisoner's Dilemma, analyzing the multidimensional components of empathy using the IRI scale. The field experiment, conducted at the NYUAD Learning and Game Theory Lab, involved a population of 47 persons (28 women and 19 men). The experimental game provided interesting data. A non-negligible proportion of the participants (35.71\% of the women and 36.84\% of the men) fully confessed; considering the whole population, 36.17\% fully confessed and 19.14\% fully denied. In terms of partial confession behavior, 53.57\% of the women and 31.57\% of the men partially confessed; for the whole population, 44.68\% partially confessed in this reproduction of the Prisoner's Dilemma game, with a marginal probability of cooperation of about 0.5 within this group. Regarding the distribution of the women and men populations on the Interpersonal Reactivity Index, the experimental results reveal that strategies that are dominated in the classical game are no longer dominated once the participants' psychology is taken into account, and a significant level of cooperation is observed among the participants who are positively, partially empathetic. The next lines of our work will be based on the creation and implementation of a new model of empathy measurement that takes into account the presence of several different variables (general attitude toward risk, an estimation of the different heuristics present in the person, how the Imagine-Self/Imagine-Other model leading to reciprocity is internalized individually, and individual attitude toward fairness). Our aim is to make this a valid measurement model for everyday-life situations in which empathy plays a key role, not only in the engineering field but also in the social, economic and institutional areas. Everyday life is quite different from laboratory situations, where all the concepts are built around a crystallized idea of what empathy is or should be. Evolutionary lines are therefore taken into account in our research as an important way to obtain data from a longitudinal perspective. Our dissatisfaction with a single, simple instrument for testing empathy leads us to rethink, first of all, our next step in empathy measurement. It would essentially take into account the following lines: \begin{enumerate} \item a combined series of measurement instruments that consider a multidimensional level of empathy; \item the possibility to test and retest the variable at different moments and in different ways (construct validity; test and retest the person on the same cluster, e.g., affective empathy); \item feedback from verbal and non-verbal communication analysis software. 
\end{enumerate} It would be interesting to investigate (i) if there is any relationship between age and strategic decision making, (ii) How the altruism and spitefulness evolve with increasing age. Furthermore, gender difference (if any) should be investigated in a bigger population and in games with distribution-dependent payoffs \cite{temftg1,temftg2,temftg3,temftg4}. \section*{Compliance with Ethical Standards} Conflict of interest: The authors declare that they have no conflict of interest. Ethical approval: All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Informed consent: Informed consent was obtained from all individual participants included in the study. \section{Statistical Data } {\color{black} Below we provide statistical data from the experimental game conducted in the Laboratory. Tables \ref{table:PTfemNNt} \ref{table:ECfemNNsss} \ref{table:FSfemNN} \ref{table:PDfemNN} report the data for women in the four IRI scales (PT,EC,FS,PD). Tables \ref{table:PTmenNN}, \ref{table:ECmenNN}, \ref{table:FSmenNN},\ref{table:PDmenNN} focus on men statistical data for the four IRI scales (PT,EC,FS,PD). } \begin{table}[htb] \begin{minipage}{0.4\textwidth} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline { \bf { \color{black} Women}} & \multicolumn{5}{|c|}{ \bf { \color{black} positively } confess } \\ \hline {\color{black} PT} & A & B & C & D & E \\ \hline Woman 3 &0 & 3&0 & 4 &0 \\ \hline Woman 5 & 0& 3& 3& 1& 0 \\ \hline Woman 6 &0&1&2& 3&1 \\ \hline Woman 11 &0 &2& 4& 1&0 \\ \hline Woman 15 & 0 & 5& 2 &0 & 0 \\ \hline Woman 16 &1&2 &1 &3&0 \\ \hline Woman 17 & 0 & 3 & 3 & 0 & 1 \\ \hline Woman 19 & 1& 1 & 0& 1 & 4 \\ \hline Woman 24 & 2 & 1 & 2 & 2 &0 \\ \hline Woman 27 & 0& 2 & 3 & 1 & 0 \\ \hline \multicolumn{6}{|c|}{ } \\ \hline Who is PT & \multicolumn{5}{|c|}{ 3,6, 16, 19,24, 27 } \\ \hline Not PT & \multicolumn{5}{|c|}{ 5,11,15,17 } \\ \hline Unclear & \multicolumn{5}{|c|}{ } \\ \hline \end{tabular} \end{minipage} \begin{minipage}{0.4\textwidth} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline { \bf { \color{black} Women}} & \multicolumn{5}{|c|}{ \bf { \color{black} partially } confess } \\ \hline {\color{black} PT} & A & B & C & D & E \\ \hline Woman 2 &1 &3 & 3 & 0 &0 \\ \hline Woman 4 &1 & 1& 2& 2&1 \\ \hline Woman 7 & 2& 0 & 0&0& 5 \\ \hline Woman 8 &2& 0& 2 & 3&0 \\ \hline Woman 9 & 0&2 & 3 & 2 & 0 \\ \hline Woman 10 & 0& 3& 2 & 2 &0 \\ \hline Woman 12 & 0& 1& 5& 1& 0 \\ \hline Woman 14 & 0& 1& 4& 2& 0 \\ \hline Woman 18 & 0& 2& 0& 1& 4 \\ \hline Woman 20 & 1& 1& 1& 1& 3 \\ \hline Woman 21 & 1& 1& 1& 4& 0 \\ \hline Woman 22 & 0& 0& 7& 0& 0 \\ \hline Woman 23 & 0& 2& 3& 1& 1 \\ \hline Woman 25 & 1& 3& 1& 0& 2 \\ \hline Woman 26 & 0& 1& 3& 1& 1 \\ \hline \multicolumn{6}{|c|}{ } \\ \hline Who is PT & \multicolumn{5}{|c|}{ 4,7,8,9,10,12,14,18,20,21,22,23,26} \\ \hline Not PT & \multicolumn{5}{|c|}{ 2, 25 } \\ \hline Unclear & \multicolumn{5}{|c|}{ } \\ \hline \end{tabular} \end{minipage} \begin{minipage}{0.4\textwidth} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline { \bf { \color{black} Woman}} & \multicolumn{5}{|c|}{ \bf Deny } \\ \hline {\color{black} PT} & A & B & C & D & E \\ \hline Woman 1 &0 &1 &2 &4 & 0 \\ \hline Woman 13 &0 &1 &2 & 2 &2 \\ \hline Woman 28 & 3 & 1 & 1 & 2 & 0\\ \hline \multicolumn{6}{|c|}{ } \\ \hline Who is PT & \multicolumn{5}{|c|}{ 1,13} \\ 
\hline Not PT & \multicolumn{5}{|c|}{ 28 } \\ \hline Unclear & \multicolumn{5}{|c|}{ } \\ \hline \end{tabular} \end{minipage} \caption[]{ { \bf PT: Woman Result (positively, partially and deny tables)}} \label{table:PTfemNNt} \end{table} \begin{table}[htb] \begin{minipage}{0.4\textwidth} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline { \bf { \color{black} Woman}} & \multicolumn{5}{|c|}{ \bf { \color{black} positively } confess } \\ \hline {\color{black} EC} & A & B & C & D & E \\ \hline Woman 3 &2 & 3& 1& 1 &0 \\ \hline Woman 5 & 2& 1& 0& 4& 0 \\ \hline Woman 6 &0&4&1& 1&1 \\ \hline Woman 11 &0 &4& 1& 1&1 \\ \hline Woman 15 & 3 & 0& 2 &2 & 0 \\ \hline Woman 16 & 3& 1 &2 &1&0 \\ \hline Woman 17 & 2 & 1 & 1 & 3 & 0 \\ \hline Woman 19 & 3& 1 & 1& 1 & 1 \\ \hline Woman 24 & 0 & 4 & 1& 1 &1 \\ \hline Woman 27 & 2& 2 & 2 & 1 & 0 \\ \hline \multicolumn{6}{|c|}{ } \\ \hline Who is EC & \multicolumn{5}{|c|}{ 5,15,17 } \\ \hline Not EC & \multicolumn{5}{|c|}{ 3,6,11,16,19,24,27 } \\ \hline Unclear & \multicolumn{5}{|c|}{ } \\ \hline \end{tabular} \end{minipage} \begin{minipage}{0.4\textwidth} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline { \bf { \color{black} Woman}} & \multicolumn{5}{|c|}{ \bf { \color{black} partially } confess } \\ \hline {\color{black} EC} & A & B & C & D & E \\ \hline Woman 2 &1 &2 & 0 & 3 &1 \\ \hline Woman 4 &1 & 2& 4& 0&0 \\ \hline Woman 7 & 1& 2 & 0&1& 3 \\ \hline Woman 8 &3& 0& 1 & 3&0 \\ \hline Woman 9 & 1&2 & 0 & 4 & 0 \\ \hline Woman 10 & 3& 1& 1 & 2 &0 \\ \hline Woman 12 & 0& 4& 2& 1& 0 \\ \hline Woman 14 & 0& 0& 4& 3& 0 \\ \hline Woman 18 & 1& 0& 2& 1& 3 \\ \hline Woman 20 & 3& 0& 0& 1& 3 \\ \hline Woman 21 & 2& 1& 0& 3& 1 \\ \hline Woman 22 & 1& 2& 0& 4& 0 \\ \hline Woman 23 & 0& 2& 2& 3& 0 \\ \hline Woman 25 & 4& 0& 1& 1& 1 \\ \hline Woman 26 & 2& 1& 2& 0& 2 \\ \hline \multicolumn{6}{|c|}{ } \\ \hline Who is EC & \multicolumn{5}{|c|}{ 2,9,18,14,7,20,21,8,22,23 } \\ \hline Not EC & \multicolumn{5}{|c|}{ 4,10,12,25,26 } \\ \hline Unclear & \multicolumn{5}{|c|}{ } \\ \hline \end{tabular} \end{minipage} \begin{minipage}{0.4\textwidth} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline { \bf { \color{black} Woman}} & \multicolumn{5}{|c|}{ { \bf Deny }} \\ \hline {\color{black} EC} & A & B & C & D & E \\ \hline Woman 1 &1 &1 &4 &1 & 0 \\ \hline Woman 13 &0 &2 &2 & 3 &0 \\ \hline Woman 28 & 2 & 1 & 2 & 0 & 2\\ \hline \multicolumn{6}{|c|}{ } \\ \hline Who is EC & \multicolumn{5}{|c|}{ 13} \\ \hline Not EC & \multicolumn{5}{|c|}{ 1,28 } \\ \hline Unclear & \multicolumn{5}{|c|}{ } \\ \hline \end{tabular} \end{minipage} \caption[]{ { \bf EC: Woman Result (positively, partially and deny tables)}} \label{table:ECfemNNsss} \end{table} \begin{table}[htb] \begin{minipage}{0.4\textwidth} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline { \bf { \color{black} Woman}} & \multicolumn{5}{|c|}{\bf { \color{black} positively } confess } \\ \hline {\color{black} FS} & A & B & C & D & E \\ \hline Woman 3 &4 & 1& 2& 0 &0 \\ \hline Woman 5 & 3& 0& 3& 1& 0 \\ \hline Woman 6 &0&2&4& 1&0 \\ \hline Woman 11 &0 &2& 1& 4&0 \\ \hline Woman 15 & 3 & 2& 2 &0 & 0 \\ \hline Woman 16 & 1& 5 &0 &1&0 \\ \hline Woman 17 & 2 & 1 & 2 & 2 & 0 \\ \hline Woman 19 & 2& 3 & 0& 1 & 1 \\ \hline Woman 24 & 0 & 1 & 5& 1 &0 \\ \hline Woman 27 & 2& 0 & 3 & 2 & 0 \\ \hline \multicolumn{6}{|c|}{ } \\ \hline Who is FS & \multicolumn{5}{|c|}{ 11,5,6,17,24,3 } \\ \hline Not FS & \multicolumn{5}{|c|}{ 3,15,16,19 } \\ \hline Unclear & \multicolumn{5}{|c|}{ } \\ \hline \end{tabular} \end{minipage} 
\begin{minipage}{0.4\textwidth} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline { \bf { \color{black} Woman}} & \multicolumn{5}{|c|}{ \bf { \color{black} partially } confess } \\ \hline {\color{black} FS} & A & B & C & D & E \\ \hline Woman 2 &2 &1 & 1 & 1 &2 \\ \hline Woman 4 &1 & 4& 1& 0&1 \\ \hline Woman 7 & 0& 3 & 1&3& 0 \\ \hline Woman 8 &1& 1& 3 & 1&1 \\ \hline Woman 9 & 2&0 & 0 & 5 & 0 \\ \hline Woman 10 & 1& 6& 0 & 0 &0 \\ \hline Woman 12 & 0& 1& 4& 2& 0 \\ \hline Woman 14 & 0& 0& 2& 4& 1 \\ \hline Woman 18 & 1& 1& 2& 2& 1 \\ \hline Woman 20 & 2& 1& 1& 1& 2 \\ \hline Woman 21 & 1& 1& 3& 2& 0 \\ \hline Woman 22 & 1& 1& 2& 3& 0 \\ \hline Woman 23 & 0& 0& 1& 1& 5 \\ \hline Woman 25 & 1& 1& 2& 2& 1 \\ \hline Woman 26 & 0& 4& 3& 0& 0 \\ \hline \multicolumn{6}{|c|}{ } \\ \hline Who is FS & \multicolumn{5}{|c|}{ 9,12,14,18,21,22,23,25 } \\ \hline Not FS & \multicolumn{5}{|c|}{ 4,10,26 } \\ \hline Unclear & \multicolumn{5}{|c|}{ } \\ \hline \end{tabular} \end{minipage} \begin{minipage}{0.4\textwidth} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline { \bf { \color{black} Woman}} & \multicolumn{5}{|c|}{ { \bf Deny }} \\ \hline {\color{black} FS} & A & B & C & D & E \\ \hline Woman 1 &2 &3 &1 &1 & 0 \\ \hline Woman 13 &0 &0 &0 & 3 &4 \\ \hline Woman 28 & 4 & 0 & 0 & 2 & 1\\ \hline \multicolumn{6}{|c|}{ } \\ \hline Who is FS & \multicolumn{5}{|c|}{ 1,28 } \\ \hline Not FS & \multicolumn{5}{|c|}{ 13 } \\ \hline Unclear & \multicolumn{5}{|c|}{ } \\ \hline \end{tabular} \end{minipage} \caption[]{ { \bf FS: Women's Result (positively, partially and deny tables)}} \label{table:FSfemNN} \end{table} \begin{table}[htb] \begin{minipage}{0.4\textwidth} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline { \bf { \color{black} Woman}} & \multicolumn{5}{|c|}{ \bf { \color{black} positively } confess } \\ \hline {\color{black} PD} & A & B & C & D & E \\ \hline Woman 3 &1 & 2& 2& 2 &0 \\ \hline Woman 5 & 3& 0& 0& 4& 0 \\ \hline Woman 6 &2&3&0& 4&0 \\ \hline Woman 11 &0 & 4& 3& 0&0 \\ \hline Woman 15 & 0 & 4& 2 &1 & 0 \\ \hline Woman 16 & 1& 2 &3 &1&0 \\ \hline Woman 17 & 3 & 3 & 1 & 0 & 0 \\ \hline Woman 19 & 4& 1 & 0& 1 & 1 \\ \hline Woman 24 & 1 & 2 & 3& 1 &0 \\ \hline Woman 27 & 2& 3 & 2 & 0 & 0 \\ \hline \multicolumn{6}{|c|}{ } \\ \hline Who is PD & \multicolumn{5}{|c|}{ 3,6,11,15,16,17,4,27,24} \\ \hline Not PD & \multicolumn{5}{|c|}{ 5 } \\ \hline Unclear & \multicolumn{5}{|c|}{ } \\ \hline \end{tabular} \end{minipage} \begin{minipage}{0.4\textwidth} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline { \bf { \color{black} Woman}} & \multicolumn{5}{|c|}{ \bf { \color{black} partially } confess } \\ \hline {\color{black} PD} & A & B & C & D & E \\ \hline Woman 2 & 3 &0 & 1 & 1 &2 \\ \hline Woman 4 &1 & 1& 4& 0&1 \\ \hline Woman 7 & 1& 2 & 1&2& 1 \\ \hline Woman 8 &3& 1& 2 & 1&0 \\ \hline Woman 9 & 0&4 & 3 & 0 & 0 \\ \hline Woman 10 & 4& 2& 1 & 0 &0 \\ \hline Woman 12 & 0& 4& 2& 1& 0 \\ \hline Woman 14 & 0& 1& 5& 1& 0 \\ \hline Woman 18 & 2& 2& 1& 2& 0 \\ \hline Woman 20 & 1& 2& 3& 0& 1 \\ \hline Woman 21 & 1& 5& 0& 1& 0 \\ \hline Woman 22 & 0& 3& 2& 1& 1 \\ \hline Woman 23 & 0& 0& 5& 2& 0 \\ \hline Woman 25 & 2& 4& 0& 0& 1 \\ \hline Woman 26 & 2& 3& 1& 0& 1 \\ \hline \multicolumn{6}{|c|}{ } \\ \hline Who is PD & \multicolumn{5}{|c|}{ 23, 2, 14, 22 } \\ \hline Not PD & \multicolumn{5}{|c|}{ 4, 8, 9, 10, 12, 18, 20, 21, 25, 26 } \\ \hline Unclear & \multicolumn{5}{|c|}{ } \\ \hline \end{tabular} \end{minipage} \begin{minipage}{0.4\textwidth} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline { \bf { 
\color{black} Woman}} & \multicolumn{5}{|c|}{ { \bf Deny }} \\ \hline {\color{black} PD} & A & B & C & D & E \\ \hline Woman 1 &2 &1 &1 &1 & 1 \\ \hline Woman 13 &0 &0 &3 & 4 &0 \\ \hline Woman 28 & 3 & 1 & 0 & 1 & 2\\ \hline \multicolumn{6}{|c|}{ } \\ \hline Who is PD & \multicolumn{5}{|c|}{ 1,28 } \\ \hline Not PD & \multicolumn{5}{|c|}{ 13 } \\ \hline Unclear & \multicolumn{5}{|c|}{ } \\ \hline \end{tabular} \end{minipage} \caption[]{ { \bf PD: Women Result (positively, partially and deny tables)}} \label{table:PDfemNN} \end{table} \begin{table}[htb] \begin{minipage}{0.4\textwidth} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline { \bf { \color{black} Men}} & \multicolumn{5}{|c|}{ \bf { \color{black} positively } confess } \\ \hline {\color{black} PT} & A & B & C & D & E \\ \hline Man 1 &0 & 3& 2& 1 &1 \\ \hline Man 4 & 0& 2& 1& 1& 3 \\ \hline Man 8 &0&2&4& 1&0 \\ \hline Man 10 & 0&1 &2& 4& 1 \\ \hline Man 11 & 3 & 0& 1 &1 & 2 \\ \hline Man 12 &1&1 &2 &3&0 \\ \hline Man 19 & 0 & 1 & 1 & 4 & 1 \\ \hline \multicolumn{6}{|c|}{ } \\ \hline Who is PT & \multicolumn{5}{|c|}{4,10,12,11,19 } \\ \hline Not PT & \multicolumn{5}{|c|}{1,8 } \\ \hline Unclear & \multicolumn{5}{|c|}{ } \\ \hline \end{tabular} \end{minipage} \begin{minipage}{0.4\textwidth} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline { \bf { \color{black} Men}} & \multicolumn{5}{|c|}{ \bf { \color{black} partially } confess } \\ \hline {\color{black} PT} & A & B & C & D & E \\ \hline Man 3 & 0 &3 & 1 & 3 &0 \\ \hline Man 5 &0 & 3& 3& 0&0 \\ \hline Man 6 & 1& 0 & 1&4& 1 \\ \hline Man 7 & 3& 3& 0 & 0&1 \\ \hline Man 13 & 2&5 & 0 & 0 & 0 \\ \hline Man 14 & 4 & 0& 1 & 0 &2 \\ \hline \multicolumn{6}{|c|}{ } \\ \hline Who is PT & \multicolumn{5}{|c|}{ 7,13,14} \\ \hline Not PT & \multicolumn{5}{|c|}{ 5,6 } \\ \hline Unclear & \multicolumn{5}{|c|}{ } \\ \hline \end{tabular} \end{minipage} \begin{minipage}{0.4\textwidth} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline { \bf { \color{black} Men}} & \multicolumn{5}{|c|}{ { \bf Deny }} \\ \hline {\color{black} PT} & A & B & C & D & E \\ \hline Man 2 &0 &3 &2 &0 & 2 \\ \hline Man 9 &0 &1 &3 &3 & 0 \\ \hline Man 15 &0 &2 &1 &4 & 0 \\ \hline Man 16 &0 &4 &3 &0 & 0 \\ \hline Man 17 &0 &3 &3 &1 & 0 \\ \hline Man 18 &0 &4 &3 &0 & 0 \\ \hline \multicolumn{6}{|c|}{ } \\ \hline Who is PT & \multicolumn{5}{|c|}{ 9,15,17 } \\ \hline Not PT & \multicolumn{5}{|c|}{2,16,18} \\ \hline Unclear & \multicolumn{5}{|c|}{ } \\ \hline \end{tabular} \end{minipage} \caption[]{ { \bf PT: Men's Result (positively, partially and deny tables)}} \label{table:PTmenNN} \end{table} \begin{table}[htb] \begin{minipage}{0.4\textwidth} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline { \bf { \color{black} Men}} & \multicolumn{5}{|c|}{ \bf { \color{black} positively } confess } \\ \hline {\color{black} EC} & A & B & C & D & E \\ \hline Man 1 &3 & 3& 0& 1 &0 \\ \hline Man 4 & 3& 0& 2& 1& 1 \\ \hline Man 8 &0&4&3& 0&0 \\ \hline Man 10 & 0&2 &4& 0& 1 \\ \hline Man 11 & 1 & 0& 1 &4 & 1 \\ \hline Man 12 &3&1 &1 &2&0 \\ \hline Man 19 & 1& 0 & 1 & 4 & 1 \\ \hline \multicolumn{6}{|c|}{ } \\ \hline Who is EC & \multicolumn{5}{|c|}{ 10,11,19} \\ \hline Not EC & \multicolumn{5}{|c|}{1,8,12,4 } \\ \hline Unclear & \multicolumn{5}{|c|}{ } \\ \hline \end{tabular} \end{minipage} \begin{minipage}{0.4\textwidth} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline { \bf { \color{black} Men}} & \multicolumn{5}{|c|}{ \bf { \color{black} partially } confess } \\ \hline {\color{black} EC} & A & B & C & D & E \\ \hline Man 3 & 0 &5 & 2 & 0 &0 \\ \hline 
Man 5 &1 & 1& 3& 0&2 \\ \hline Man 6 & 1& 1 & 0&4& 1 \\ \hline Man 7 & 1& 2& 4 & 0&0 \\ \hline Man 13 & 0& 5 &2 & 0 & 0 \\ \hline Man 14 & 0 & 1& 4 & 2 &0 \\ \hline \multicolumn{6}{|c|}{ } \\ \hline Who is EC & \multicolumn{5}{|c|}{ 5,6,14} \\ \hline Not EC & \multicolumn{5}{|c|}{ 3,7,13 } \\ \hline Unclear & \multicolumn{5}{|c|}{ } \\ \hline \end{tabular} \end{minipage} \begin{minipage}{0.4\textwidth} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline { \bf { \color{black} Men}} & \multicolumn{5}{|c|}{ { \bf Deny }} \\ \hline Man {\color{black} EC} & A & B & C & D & E \\ \hline Man 2 &2 &2 &0 &2 & 1 \\ \hline Man 9 &0 &3 &1 &3 & 0 \\ \hline Man 15 &1 &3 &3 &0 & 0 \\ \hline Man 16 &3 &1 &3 &0 & 0 \\ \hline Man 17 &0 &3 &4 &0 & 0 \\ \hline Man 18 &3 &2 &1 &1 & 0 \\ \hline \multicolumn{6}{|c|}{ } \\ \hline Who is EC & \multicolumn{5}{|c|}{ 9 } \\ \hline Not EC & \multicolumn{5}{|c|}{2,15,16,17,18} \\ \hline Unclear & \multicolumn{5}{|c|}{ } \\ \hline \end{tabular} \end{minipage} \caption[]{ { \bf EC: Men's Result group 3}} \label{table:ECmenNN} \end{table} \begin{table}[htb] \begin{minipage}{0.4\textwidth} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline { \bf { \color{black} Men}} & \multicolumn{5}{|c|}{ \bf { \color{black} positively } confess } \\ \hline {\color{black} FS} & A & B & C & D & E \\ \hline Man 1 &1 & 3& 1& 0 & 2 \\ \hline Man 4 & 1& 2& 1& 0& 1 \\ \hline Man 8 &1&1&1& 1&3 \\ \hline Man 10 & 0& 2 &3 & 2& 0 \\ \hline Man 11 & 3 & 1& 0 &0 & 3 \\ \hline Man 12 &0&2 &4 &1&0 \\ \hline Man 19 & 0& 2 & 0 & 4 & 1 \\ \hline \multicolumn{6}{|c|}{ } \\ \hline Who is FS & \multicolumn{5}{|c|}{ 8,10,19} \\ \hline Not FS & \multicolumn{5}{|c|}{1, 4,11,12 } \\ \hline Unclear & \multicolumn{5}{|c|}{ } \\ \hline \end{tabular} \end{minipage} \begin{minipage}{0.4\textwidth} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline { \bf { \color{black} Men}} & \multicolumn{5}{|c|}{ \bf { \color{black} partially } confess } \\ \hline {\color{black} FS} & A &Man B & C & D & E \\ \hline Man 3 & 2 &5 & 0 & 0 &0 \\ \hline Man 5 &0 & 0& 4& 1&2 \\ \hline Man 6 & 4& 3 & 0&0& 0 \\ \hline Man 7 & 4& 3& 0 & 0&0 \\ \hline Man 13 & 5& 1 &1 & 0 & 0 \\ \hline Man 14 & 0 & 1& 6 & 0 &0 \\ \hline \multicolumn{6}{|c|}{ } \\ \hline Who is FS & \multicolumn{5}{|c|}{ 3,6,7,13} \\ \hline Not FS & \multicolumn{5}{|c|}{ 5,14 } \\ \hline Unclear & \multicolumn{5}{|c|}{ } \\ \hline \end{tabular} \end{minipage} \begin{minipage}{0.4\textwidth} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline { \bf { \color{black} Men} } & \multicolumn{5}{|c|}{ { \bf Deny }} \\ \hline {\color{black} FS} & A & B & C & D & E \\ \hline Man 2 &3 &1 &0 &3 & 0 \\ \hline Man 9 &0 &2 &4 &1 & 0 \\ \hline Man 15 &0 &3 &2 &1 & 1 \\ \hline Man 16 &1 &3 &2 &1 & 0 \\ \hline Man 17 &1 &1 &4 &1 & 0 \\ \hline Man 18 &0 &4 &3 &0 & 0 \\ \hline \multicolumn{6}{|c|}{ } \\ \hline Who is FS & \multicolumn{5}{|c|}{ 9, 17 } \\ \hline Not FS & \multicolumn{5}{|c|}{2,15,16,18} \\ \hline Unclear & \multicolumn{5}{|c|}{ } \\ \hline \end{tabular} \end{minipage} \caption[]{ { \bf FS: Men's Result (positively, partially and deny tables)}} \label{table:FSmenNN} \end{table} \begin{table}[htb] \begin{minipage}{0.4\textwidth} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline { \bf { \color{black} Men}} & \multicolumn{5}{|c|}{ \bf { \color{black} positively } confess } \\ \hline {\color{black} PD} & A & B & C & D & E \\ \hline Man 1 &2 & 2& 0& 3 & 0 \\ \hline Man 4 & 2& 0& 2& 0& 3 \\ \hline Man 8 &0&3&2& 2&0 \\ \hline Man 10 & 0& 3 &3 & 1& 0 \\ \hline Man 11 & 1 & 1& 1 &0 & 4 \\ \hline Man 
12 &3&2 &2 &0&0 \\ \hline Man 19 & 2& 1 & 2 & 1 & 1 \\ \hline \multicolumn{6}{|c|}{ } \\ \hline Who is PD & \multicolumn{5}{|c|}{ 4,10,11} \\ \hline Not PD & \multicolumn{5}{|c|}{ 1,8,10,12,19} \\ \hline Unclear & \multicolumn{5}{|c|}{ } \\ \hline \end{tabular} \end{minipage} \begin{minipage}{0.4\textwidth} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline { \bf { \color{black} Men}} & \multicolumn{5}{|c|}{ \bf { \color{black} partially } confess } \\ \hline {\color{black} PD} & A & B & C & D & E \\ \hline Man 3 & 0 &5 & 2 & 0 &0 \\ \hline Man 5 &0 & 4& 1& 2&0 \\ \hline Man 6 & 1& 0 & 4&2& 0 \\ \hline Man 7 & 4& 1& 0 & 0&2 \\ \hline Woman 13 & 1& 3 &3 & 0 & 0 \\ \hline Woman 14 & 0 & 4 & 1& 1 &1 \\ \hline \multicolumn{6}{|c|}{ } \\ \hline Who is PD & \multicolumn{5}{|c|}{ 6} \\ \hline Not PD & \multicolumn{5}{|c|}{3,5,7,13,14} \\ \hline Unclear & \multicolumn{5}{|c|}{ } \\ \hline \end{tabular} \end{minipage} \begin{minipage}{0.4\textwidth} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline { \bf { \color{black} Men}} & \multicolumn{5}{|c|}{ { \bf Deny }} \\ \hline {\color{black} PD} & A & B & C & D & E \\ \hline Man 2 &4 &1 &0 &1 & 1 \\ \hline Man 9 &1 &2 &3 &1 & 0 \\ \hline Man 15 &1 &2 &0 &4 & 0 \\ \hline Man 16 &0 &7 &0 &0 & 0 \\ \hline Man 17 &2 &0 &1 &4 & 0 \\ \hline Man 18 &0 &2 &4 &1 & 0 \\ \hline \multicolumn{6}{|c|}{ } \\ \hline Who is PD & \multicolumn{5}{|c|}{ 9,15,17,18 } \\ \hline Not PD & \multicolumn{5}{|c|}{2,16} \\ \hline Unclear & \multicolumn{5}{|c|}{ } \\ \hline \end{tabular} \end{minipage} \caption[]{ { \bf PD: Men's Result (positively, partially and deny tables)}} \label{table:PDmenNN} \end{table} \section*{Biography} {\bf Giulia Rossi} received her Master degree with summa cum laude in Clinical Psychology in 2009 from the University of Padova. She worked as independent researcher in the analysis and prevention of psychopathological diseases and in the intercultural expression of mental diseases. Her research interests include behavioral game theory, social norms and the epistemic foundations of mean-field-type game theory. She is currently a research associate in the Learning \& Game Theory Laboratory at New York University Abu Dhabi. {\bf Alain Tcheukam} received his PhD in 2013 in Computer Science and Engineering at the IMT Institute for Advanced Studies Lucca. His research interests include crowd flows, smart cities and mean-field-type optimization. He received the Federbim Valsecchi award 2015 for his contribution in design, modelling and analysis of smarter cities, and a best paper award 2016 from the International Conference on Electrical Energy and Networks. He is currently a postdoctoral researcher with Learning \& Game Theory Laboratory at New York University Abu Dhabi. {\bf Hamidou Tembine} (S'06-M'10-SM'13) received his M.S. degree in Applied Mathematics from Ecole Polytechnique and his Ph.D. degree in Computer Science from University of Avignon. His current research interests include evolutionary games, mean field stochastic games and applications. In 2014, Tembine received the IEEE ComSoc Outstanding Young Researcher Award for his promising research activities for the benefit of the society. He was the recipient of 7 best paper awards in the applications of game theory. Tembine is a prolific researcher and holds several scientific publications including magazines, letters, journals and conferences. 
He is the author of the book ``Distributed Strategic Learning for Engineers'' (CRC Press, Taylor \& Francis, 2012) and a co-author of the book ``Game Theory and Learning in Wireless Networks'' (Elsevier Academic Press). Tembine has been a co-organizer of several scientific meetings on game theory in networking, wireless communications, and smart energy systems. He is a Senior Member of the IEEE. \end{document}
\begin{document} \title[Correlations of von Mangoldt and divisor functions]{Correlations of the von Mangoldt and higher divisor functions I. Long shift ranges} \author{Kaisa Matom\"aki} \address{Department of Mathematics and Statistics \\ University of Turku, 20014 Turku\\ Finland} \email{[email protected]} \author{Maksym Radziwi{\l}{\l}} \address{ Department of Mathematics \\ McGill University \\ Burnside Hall \\ Room 1005 \\ 805 Sherbrooke Street West \\ Montreal \\ Quebec \\ Canada\\ H3A 0B9 } \email{[email protected]} \author{Terence Tao} \address{Department of Mathematics, UCLA\\ 405 Hilgard Ave\\ Los Angeles CA 90095\\ USA} \email{[email protected]} \begin{abstract} We study asymptotics of sums of the form $\sum_{X < n \leq 2X} \Lambda(n) \Lambda(n+h)$, $\sum_{X < n \leq 2X} d_k(n) d_l(n+h)$, $\sum_{X < n \leq 2X} \Lambda(n) d_k(n+h)$, and $\sum_n \Lambda(n) \Lambda(N-n)$, where $\Lambda$ is the von Mangoldt function, $d_k$ is the $k^{\operatorname{th}}$ divisor function, and $N,X$ are large. Our main result is that the expected asymptotic for the first three sums holds for almost all $h \in [-H,H]$, provided that $X^{\record+\eps} \leq H \leq X^{1-\eps}$ for some $\eps>0$, where $\record \coloneqq \recordexplicit$, with an error term saving on average an arbitrary power of the logarithm over the trivial bound. This improves upon results of Mikawa, Perelli-Pintz, and Baier-Browning-Marasingha-Zhao, who obtained statements of this form with $\record$ replaced by $\frac{1}{3}$. We obtain an analogous result for the fourth sum for most $N$ in an interval of the form $[X, X + H]$ with $X^{\record+\eps} \leq H \leq X^{1-\eps}$. Our method starts with a variant of an argument from a paper of Zhan, using the circle method and some oscillatory integral estimates to reduce matters to establishing some mean-value estimates for certain Dirichlet polynomials associated to ``Type $d_3$'' and ``Type $d_4$'' sums (as well as some other sums that are easier to treat). After applying H\"older's inequality to the Type $d_3$ sum, one is left with two expressions, one of which we can control using a short interval mean value theorem of Jutila, and the other we can control using exponential sum estimates of Robert and Sargos. The Type $d_4$ sum is treated similarly using the classical $L^2$ mean value theorem and the classical van der Corput exponential sum estimates. In a sequel to this paper we will obtain related results for the correlations involving $d_k(n)$ for much smaller values of $H$ but with weaker bounds. \end{abstract} \maketitle \section{Introduction} \label{se:intro} This paper (as well as the sequel \cite{mrt-corr2}) will be concerned with the asymptotic estimation of correlations of the form \begin{equation}\label{co} \sum_{X < n \leq 2X} f(n) \overline{g(n+h)} \end{equation} for various functions $f,g \colon \Z \to \C$ and large $X$, and for ``most'' integers $h$ in the range $|h| \leq H$ for some $H = H(X)$ growing in $X$ at a moderate rate; in this paper we will mostly be concerned with the regime where $H = X^\theta$ for some fixed $0 < \theta < 1$. 
We will focus our attention on the particularly well studied correlations \begin{align} \sum_{X < n \leq 2X} \Lambda(n) &\Lambda(n+h) \label{lambda} \\ \sum_{X < n \leq 2X} d_k(n) &d_l(n+h) \label{d3} \\ \sum_{X < n \leq 2X} \Lambda(n) &d_k(n+h) \label{titchmarsh} \\ \sum_n \Lambda(n) &\Lambda(X-n) \label{goldbach} \end{align} for fixed $k,l \geq 2$, where $\Lambda$ is the von Mangoldt function and $$d_k(n) \coloneqq \sum_{n_1 \dotsm n_k = n} 1$$ is the $k^{\operatorname{th}}$ divisor function, adopting the convention that $\Lambda(n) = d_k(n)= 0$ for $n \leq 0$. Of course, to interpret \eqref{goldbach} properly one needs to take $X$ to be an integer, and then one can split this expression by symmetry into what is essentially twice a sum of the form \eqref{co} with $X$ replaced by $X/2$, $f(n) \coloneqq \Lambda(n)$, $g(n) \coloneqq \Lambda(-n)$, and $h \coloneqq -X$. One can also work with the range $1 \leq n \leq X$ rather than $X < n \leq 2X$ for \eqref{lambda}, \eqref{d3}, \eqref{titchmarsh} with only minor changes to the arguments below. As is well known, the von Mangoldt function $\Lambda$ behaves similarly in many ways to the divisor functions $d_k$ for $k$ moderately large, with identities such as the Linnik identity \cite{linnik} and the Heath-Brown identity \cite{hb-ident} providing an explicit connection between the two functions. Because of this, we will be able to treat both $\Lambda$ and $d_k$ in a largely unified fashion. In the regime when $h$ is fixed and non-zero, and $X$ goes to infinity, we have well established conjectures for the asymptotic values of each of the above expressions: \begin{conjecture}\label{bigconj} Let $h$ be a fixed non-zero integer, and let $k,l \geq 2$ be fixed natural numbers. \begin{itemize} \item[(i)] (Hardy-Littlewood prime tuples conjecture \cite{hl}) We have\footnote{See Section \ref{notation-sec} for the asymptotic notation used in this paper.} \begin{equation}\label{hl-conj} \sum_{X < n \leq 2X} \Lambda(n) \Lambda(n+h) = {\mathfrak S}(h) X + O(X^{1/2+o(1)}) \end{equation} as $X \to \infty$, where the \emph{singular series} ${\mathfrak S}(h)$ vanishes if $h$ is odd, and is equal to \begin{equation}\label{sing-def} {\mathfrak S}(h) \coloneqq 2 \Pi_2 \prod_{p|h: p>2} \frac{p-1}{p-2} \end{equation} when $h$ is even, where $\Pi_2 \coloneqq \prod_{p>2} (1 - \frac{1}{(p-1)^2})$ is the twin prime constant. \item[(ii)] (Divisor correlation conjecture \cite{vinogradov}, \cite{ivic}, \cite[Conjecture 3]{conrey}) We have\footnote{In \cite{vinogradov} it is conjectured (in the $k=l$ case) that the error term is only bounded by $O( x^{1-1/k+o(1)} )$, and in \cite{ivic} it is in fact conjectured that the error term is not better than this; see also \cite{iw} for further discussion. Interestingly, in the function field case (replacing $\Z$ by $F_q[t]$) the error term was bounded by $O(q^{-1/2})$ times the main term in the large $q$ limit in \cite{abr}, but this only gives square root cancellation in the degree $1$ case $n=1$ and so does not seem to give strong guidance as to the size of the error term in the large $n$ limit.} \begin{equation}\label{conrey-gonek} \sum_{X < n \leq 2X} d_k(n) d_l(n+h) = P_{k,l,h}(\log X) X + O(X^{1/2+o(1)}) \end{equation} as $X \to \infty$, for some polynomial $P_{k,l,h}$ of degree $k+l-2$. 
\item[(iii)] (Higher order Titchmarsh divisor problem) We have \begin{equation}\label{titch} \sum_{X < n \leq 2X} \Lambda(n) d_k(n+h) = Q_{k,h}(\log X) X + O(X^{1/2+o(1)}) \end{equation} as $X \to \infty$, for some polynomial $Q_{k,h}$ of degree $k-1$. \item[(iv)] (Quantitative Goldbach conjecture, see e.g. \cite[Ch. 19]{ik}) We have \begin{equation}\label{goldbach-conj} \sum_{n} \Lambda(n) \Lambda(X-n) = {\mathfrak S}(X) X + O(X^{1/2+o(1)}) \end{equation} as $X \to \infty$, where ${\mathfrak S}(X)$ was defined in \eqref{sing-def} and $X$ is restricted to be integer. \end{itemize} \end{conjecture} \begin{remark} The polynomials $P_{k,l,h}$ are in principle computable (see \cite{conrey} for an explicit formula), but they become quite messy in their lower order terms. For instance, a classical result of Ingham \cite{ingham} shows that the leading term in the quadratic polynomial $P_{2,2,h}(t)$ is $(\frac{6}{\pi^2} \sum_{d|h} \frac{1}{d}) t^2$, but the lower order terms of this polynomial, computed in \cite{estermann} (with the sum $\sum_{X < n \leq 2X}$ replaced with the closely related sum $\sum_{n \leq X}$), are significantly more complicated. A similar situation occurs for $Q_{k,h}$; see for instance \cite{fiorilli} for an explicit formula for $Q_{2,h}$. The top degree terms of $P_{k,l,h}, Q_{k,h}$ are however easy to predict from standard probablistic heuristics: one should have \begin{equation}\label{pkl} P_{k,l,h}(t) = \frac{t^{k-1}}{(k-1)!} \frac{t^{l-1}}{(l-1)!} \left(\prod_p {\mathfrak S}_{k,l,p}(h)\right) + O_{k,l,h}(t^{k+l-3}) \end{equation} and $$ Q_{k,h}(t) = \frac{t^{k-1}}{(k-1)!} \left(\prod_p {\mathfrak S}_{k,p}(h)\right) + O_{k,h}(t^{k-2})$$ where the local factors ${\mathfrak S}_{k,l,p}(h), {\mathfrak S}_{k,p}(h)$ are defined by the formulae\footnote{One can simplify these formulae slightly by observing that $\E d_{k,p}({\mathbf n}) = (1-\frac{1}{p})^{1-k}$ and $\E \Lambda_p({\bf n}) = 1$.} $$ {\mathfrak S}_{k,l,p}(h) \coloneqq \frac{ \E d_{k,p}({\mathbf n}) d_{l,p}({\mathbf n}+h) }{ \E d_{k,p}({\mathbf n}) \E d_{l,p}({\mathbf n}) }$$ and $$ {\mathfrak S}_{k,p}(h) \coloneqq \frac{ \E d_{k,p}({\mathbf n}) \Lambda_{p}({\mathbf n}+h) }{ \E d_{k,p}({\mathbf n}) \E \Lambda_{p}({\mathbf n}) }$$ where ${\mathbf n}$ is a random variable drawn from the profinite integers $\hat \Z$ with uniform Haar probability measure, $d_{k,p}({\mathbf n}) \coloneqq \binom{v_p({\mathbf n})+k-1}{k-1}$ is the local component of $d_k$ at $p$ (with the $p$-valuation $v_p({\mathbf n})$ being the supremum of all $j$ such that $p^j$ divides ${\mathbf n}$), and $\Lambda_p({\mathbf n}) \coloneqq \frac{p}{p-1} 1_{p \nmid {\mathbf n}}$ is the local component of $\Lambda$. See \cite{nt} for an explanation of these heuristics and a verification of the asymptotic \eqref{pkl}, as well as an explicit formula for the local factor ${\mathfrak S}_{k,l,p}(h)$. For comparison, it is easy to see that $$ {\mathfrak S}(h) = \prod_p \frac{\E \Lambda_p({\mathbf n}) \Lambda_p({\mathbf n}+h)}{\E \Lambda_p({\mathbf n}) \E \Lambda_p({\mathbf n})}$$ for all non-zero integers $h$, and similarly $$ {\mathfrak S}(X) = \prod_p \frac{\E \Lambda_p({\mathbf n}) \Lambda_p(X-{\mathbf n})}{\E \Lambda_p({\mathbf n}) \E \Lambda_p({\mathbf n})}$$ for all non-zero integers $X$. \end{remark} Conjecture \ref{bigconj} is considered to be quite difficult, particularly when $k$ and $l$ are large, even if one allows the error term to be larger than $X^{1/2+o(1)}$ (but still smaller than the main term). 
For instance it is a notorious open problem to obtain an asymptotic for the divisor correlations in the case $k = l = 3$. The objective of this paper is to obtain a weaker version of Conjecture \ref{bigconj} in which one has less control on the error terms, and one is content with obtaining the asymptotics for \emph{most} $h$ in a given range $[h_0-H,h_0+H]$, rather than for \emph{all} $h$. This is in analogy with our recent work on Chowla and Elliott type conjectures for bounded multiplicative functions \cite{mrt}, although our methods here are different\footnote{In particular, the arguments in \cite{mrt} rely heavily on multiplicativity in small primes, which is absent in the case of the von Mangoldt function, and in the case of the divisor functions $d_k$ would not be strong enough to give error terms of size $O_A(\log^{-A} x)$ times the main term. In any event, the arguments in this paper certainly cannot work for $H$ slower than $\log X$ even if one assumes conjectures such as the Generalized Lindel\"of Hypothesis, the Generalized Riemann Hypothesis or the Elliott-Halberstam conjecture, as the $h=0$ term would dominate all of the averages considered here.}. Our ranges of $h$ will be shorter than those in previous literature on Conjecture \ref{bigconj}, although they cannot be made arbitrarily slowly growing with $X$ as was the case for bounded multiplicative functions in \cite{mrt}. In particular, the methods in this paper will certainly be unable to unconditionally handle intervals of length $X^{1/6-\eps}$ or shorter for any $\eps>0$, since it is not even known\footnote{See \cite{zacc} for the best known result in this direction.} currently if the prime number theorem is valid in most intervals of the form $[X,X+X^{1/6-\eps}]$, and such a result would easily follow from an averaging argument (using a well-known calculation of Gallagher \cite{gal-hl}) if we knew the prime tuples conjecture \eqref{hl-conj} for most $h = O(X^{1/6-\eps})$. However, one can do much better than this if one assumes powerful conjectures such as the Generalized Lindel\"of Hypothesis (GLH), the Generalized Riemann Hypothesis (GRH), or the Elliott-Halberstam conjecture (EH). We plan to discuss some of these conditional results in more detail on another occasion. In the case of the divisor correlation conjecture \eqref{conrey-gonek} and the higher order Titchmarsh divisor problem~(\ref{titch}), we can obtain much smaller values of $H$ (but with a much weaker error term) by a different method related to \cite{mr} and \cite{mrt}. We will address this question in the sequel \cite{mrt-corr2} to this paper. \subsection{Prior results} We now discuss some partial progress on each of the four parts to Conjecture \ref{bigconj}, starting with the prime tuples conjecture \eqref{hl-conj}. The conjecture \eqref{hl-conj} is trivial for odd $h$, so we now restrict attention to even $h$. In this case, even the weaker estimate \begin{equation}\label{hl-weak} \sum_{X < n \leq 2X} \Lambda(n) \Lambda(n+h) = {\mathfrak S}(h) X + o(X) \end{equation} is not known to hold for any single choice of $h$; for instance, the case $h=2$ would imply the twin prime conjecture, which remains open. One can of course still use sieve theoretic methods (see e.g. \cite[Corollary 3.14]{mv}) to obtain the upper bound \begin{equation*} \sum_{X < n \leq 2X} \Lambda(n) \Lambda(n+h) \ll {\mathfrak S}(h) X \end{equation*} uniformly for $|h| \leq X$ (say). 
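Though it plays no part in any of the arguments of this paper, the reader may find it helpful to see the prediction \eqref{hl-conj} numerically. The short Python script below is an illustrative sketch only: it sieves $\Lambda$ up to $2X$, forms the correlation $\sum_{X < n \leq 2X} \Lambda(n)\Lambda(n+h)$ directly, and compares it with ${\mathfrak S}(h) X$, with ${\mathfrak S}(h)$ computed from \eqref{sing-def} using a truncated Euler product for $\Pi_2$; the cutoff $X = 10^5$ and all other choices in the script are ad hoc and are not taken from this paper.
\begin{verbatim}
import math

# Illustrative check of the Hardy-Littlewood prediction: compare
#   sum_{X < n <= 2X} Lambda(n) Lambda(n+h)   with   S(h) * X
# for a few even shifts h.  All cutoffs here are ad hoc.
X = 10**5
N = 2 * X + 64                      # room for Lambda(n + h) with small h

# smallest-prime-factor sieve
spf = list(range(N))
for p in range(2, int(N**0.5) + 1):
    if spf[p] == p:                 # p is prime
        for m in range(p * p, N, p):
            if spf[m] == m:
                spf[m] = p

def von_mangoldt(n):
    """Lambda(n) = log p if n = p^j with p prime and j >= 1, else 0."""
    if n < 2:
        return 0.0
    p = spf[n]
    while n % p == 0:
        n //= p
    return math.log(p) if n == 1 else 0.0

Lam = [von_mangoldt(n) for n in range(N)]

# truncated Euler product for the twin prime constant Pi_2
Pi2 = 1.0
for p in range(3, N):
    if spf[p] == p:
        Pi2 *= 1.0 - 1.0 / (p - 1) ** 2

def singular_series(h):
    """S(h): zero for odd h, else 2 Pi_2 prod_{p | h, p > 2} (p-1)/(p-2)."""
    if h % 2:
        return 0.0
    s, m = 2.0 * Pi2, abs(h)
    while m % 2 == 0:
        m //= 2
    p = 3
    while p * p <= m:
        if m % p == 0:
            s *= (p - 1) / (p - 2)
            while m % p == 0:
                m //= p
        p += 2
    if m > 1:                       # leftover odd prime factor of h
        s *= (m - 1) / (m - 2)
    return s

for h in (2, 4, 6, 12, 30):
    corr = sum(Lam[n] * Lam[n + h] for n in range(X + 1, 2 * X + 1))
    print(f"h = {h:2d}:  sum = {corr:11.1f},  S(h)*X = {singular_series(h) * X:11.1f}")
\end{verbatim}
Of course, at such a modest height one should expect only rough agreement with the main term.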
There are a number of results \cite{vdc}, \cite{lavrik}, \cite{balog}, \cite{wolke}, \cite{mikawa}, \cite{pp}, \cite{kawada} that show that \eqref{hl-conj} holds for ``most'' $h$ with $|h| \leq H$, as long as $H$ grows moderately quickly with $X$. The best known result in the literature (with respect to the range of $H$) is by Mikawa \cite{mikawa} and Perelli-Pintz \cite{pp}, who showed (in our notation) that if $X^{1/3+\eps} \leq H \leq X^{1-\eps}$ for some\footnote{One can also handle the range $X^{1-\eps} \leq H \leq X$ by the same methods; see \cite{mikawa} or \cite{pp}. However, we restrict $H$ to be slightly smaller than $X$ here in order to avoid some minor technicalities arising from the fact that $n+h$ might have a slightly different magnitude than $n$. This becomes relevant when dealing with the $d_k$ functions, whose average value depends on the magnitude of the argument.} fixed $\eps>0$, then the estimate \eqref{hl-weak} holds for all but $O_{A,\eps}( H \log^{-A} X )$ values of $h$ with $|h| \leq H$, for any fixed $A$; in fact the $o(X)$ error term in \eqref{hl-weak} can also be taken to be of the form $O_{A,\eps}( X \log^{-A} X )$. Now we turn to the divisor correlation conjecture \eqref{conrey-gonek}. These correlations have been studied by many authors \cite{ingham-0}, \cite{ingham}, \cite{estermann}, \cite{linnik}, \cite{hb}, \cite{moto0}, \cite{moto}, \cite{moto2}, \cite{moto3}, \cite{kuz}, \cite{desh}, \cite{di}, \cite{top}, \cite{ft}, \cite{ivic}, \cite{ivic2}, \cite{conrey}, \cite{iw}, \cite{meurman}, \cite{bv}, \cite{drappeau}, \cite{nt}. When $k=2$, the conjecture is known to be true with a somewhat worse error term. For instance in the case $k = l = 2$ the current record is \begin{align*} \sum_{X < n \leq 2X} d_2(n) d_2(n+h) &= P_{2,2,h}(\log X) X + O(X^{2/3+o(1)}) \\ \end{align*} as $X \to \infty$. This result is due to Deshouillers-Iwaniec \cite{di}. In the cases $l \geq 3$, a power savings $$ \sum_{X < n \leq 2X} d_2(n) d_l(n+h) = P_{2,l,h}(\log X) X + O(X^{1-\delta_{l}+o(1)})$$ for exponents $\delta_l > 0$, is known \cite{bv}, \cite{drappeau}, \cite{top2}. See \cite{drappeau}, \cite{nt} for further references and surveys of the problem. Finally, we remark that a function field analogue of \eqref{conrey-gonek} has been established in \cite{abr}, but with an error term that is only bounded by $O_{k,l}(q^{-1/2})$ times the main term (so the result pertains to the ``large $q$ limit'' rather than the ``large $n$ limit''). When $k,l \geq 3$, no unconditional proof of even the weaker asymptotic $$ \sum_{X < n \leq 2X} d_k(n) d_l(n+h) = P_{k,l,h}(\log X) X + o(X \log^{k+l-2} X) $$ is known. However, upper and lower bounds of the correct order of magnitude are available; see for example, \cite{henriot, henriot2, matt, matt2, Nair, nt}. In the case $k=l=3$, the analogue of Mikawa's and Perelli-Pintz's results (now with a power savings in error terms) were recently established by Baier, Browning, Marasingha, and Zhao \cite{bbmz}, who were able to obtain the asymptotic $$ \sum_{X < n \leq 2X} d_3(n) d_3(n+h) = P_{3,3,h}(\log X) X + O( X^{1-\delta} )$$ for all but $O_\eps(H X^{-\delta})$ choices of $h$ with $|h| \leq H$, provided that $X^{1/3+\eps} \leq H \leq X^{1-\eps}$ for some fixed $\eps>0$, and $\delta>0$ is a small exponent depending only on $\eps$. Next, we turn to the (higher order) Titchmarsh divisor problem \eqref{titch}. 
This problem is often expressed in terms of computing an asymptotic for $\sum_{p \leq X} d_k(p+h)$ rather than $\sum_{X < n \leq 2X} \Lambda(n) d_k(n+h)$, but the two sums can be related to each other via summation by parts up to negligible error terms, so it is fairly easy to translate results about one sum to the other. The $k=2$ case of \eqref{titch} with qualitative error term was established by Linnik \cite{linnik}. This result was improved by Fouvry \cite{fouvry} and Bombieri-Friedlander-Iwaniec \cite{bfi}, who in our notation showed that $$\sum_{X < n \leq 2X} \Lambda(n) d_2(n+h) = Q_{2,h}(\log X) X + O_A( X \log^{-A} X )$$ for any $A>0$. Recently, Drappeau \cite{drappeau} showed that the error term could be improved to $O(X \exp( - c \sqrt{\log X} ) )$ for some $c>0$ provided that one added a correction term in the case of a Siegel zero; under the assumption of GRH, the error term could be improved further to $O(X^{1-\delta})$ for some absolute constant $\delta>0$. Fiorilli \cite{fiorilli} also established some uniformity of the error term in the parameter $h$. A function field analog of \eqref{titch} was proven (for arbitrary $k$) in \cite{abr}, but with an error term that is $O_k(q^{-1/2})$ times the main term. When $k \geq 3$ even the weaker estimate $$\sum_{X < n \leq 2X} \Lambda(n) d_k(n+h) = Q_{k,h}(\log X) X + o( X \log^{k-1} X )$$ remains open; sieve theoretic methods would only give this asymptotic assuming a level of distribution of $\Lambda$ that is greater than $1-1/k$, which would follow from EH but is not known unconditionally for any $k \geq 3$, even after the recent breakthrough of Zhang \cite{zhang} (see also \cite{polymath8a}). In analogy with the results of Baier, Browning, Marasingha, and Zhao \cite{bbmz}, it is likely that the method of Mikawa \cite{mikawa} or Perelli-Pintz \cite{pp} can be extended to give an asymptotic of the form $$\sum_{X < n \leq 2X} \Lambda(n) d_3(n+h) = Q_{3,h}(\log X) X + O_{A,\eps}( X \log^{-A} X )$$ for all but $O_{A,\eps}( H \log^{-A} X )$ values of $h$ with $|h| \leq H$, for any fixed $A$, if $X^{1/3+\eps} \leq H \leq X^{1-\eps}$ for some fixed $\eps>0$; however to our knowledge this result has not been explicitly proven in the literature. Finally, we discuss some known results on the Goldbach conjecture \eqref{goldbach-conj}. As with the prime tuples conjecture, standard sieve methods (e.g. \cite[Theorem 3.13]{mv}) will give the upper bound $$ \sum_{n} \Lambda(n) \Lambda(X-n) \ll {\mathfrak S}(X) X$$ uniformly in $X$. There are a number of results \cite{tch}, \cite{vdc}, \cite{est}, \cite{mv-gold}, \cite{chen-pan}, \cite{li}, \cite{lu} establishing that the left-hand side of \eqref{goldbach-conj} is positive for ``most'' large even integers $X$; for instance, in \cite{lu} it was shown that this was the case for all but $O(X_0^{0.879})$ of even integers $X \leq X_0$, for any large $X_0$. There are analogous results in shorter intervals \cite{peneva}, \cite{ly}, \cite{yao}, \cite{jia}, \cite{harman}, \cite{matomaki}; for instance in \cite{matomaki} it was shown that for any $1/5 < \theta \leq 1$ the left-hand side of \eqref{goldbach-conj} is positive for all but $O( X_0^{\theta - \delta} )$ even integers $X \in [X_0, X_0 + X_0^\theta]$, for some $\delta>0$ depending on $\theta$, while in \cite[Chapter 10]{harman} it is shown that for $\frac{11}{180} \leq \theta \leq 1$ and $A>0$, the left-hand side of \eqref{goldbach-conj} is positive for all but $O_A( X_0 \log^{-A} X_0)$ even integers $X \in [X_0, X_0 + X_0^\delta]$. 
On the other hand, if one wants the left-hand side of \eqref{goldbach-conj} to not just be positive, but be close to the main term ${\mathfrak S}(X) X$ on the right-hand side, the state of the art requires larger intervals. For instance, in \cite[Proposition 19.5]{ik} it is shown that \eqref{goldbach-conj} holds (with $O_A( X_0 \log^{-A} X_0)$ error term) for all but $O_A(X_0 \log^{-A} X_0)$ even integers $X$ in $[1,X_0]$. In \cite{pp}, Perelli and Pintz obtained a similar result for the intervals $[X_0,X_0 + X_0^{\frac{1}{3}+\eps}]$ for any $\eps>0$. In \cite{halupczok}, Halupczok obtains variants of the result of Perelli-Pintz with the additional requirement that one of the prime in $n = p_1 + p_2$ is constrained to a short interval or an arithmetic progression with large moduli. \subsection{New results} Our main result is as follows: for all four correlations (i)-(iv) in Conjecture \ref{bigconj}, we can improve upon the results of Mikawa, Perelli-Pintz, and Baier-Browning-Marasingha-Zhao by improving the exponent $\frac{1}{3}$ to the quantity \begin{equation}\label{sigmadef} \record \coloneqq \recordexplicit; \end{equation} for future reference we observe that $\record$ lies in the range \begin{equation}\label{sigma-range} \frac{1}{5} < \frac{11}{48} < \frac{7}{30} < \record < \frac{1}{4}. \end{equation} (The significance of the other fractions in \eqref{sigma-range} will become more apparent later in the paper.) More precisely, we have \begin{theorem}[Averaged correlations]\label{unav-corr} Let $A>0$, $0 < \eps < 1/2$ and $k,l \geq 2$ be fixed, and suppose that $X^{\record+\eps} \leq H \leq X^{1-\eps}$ for some $X \geq 2$, where $\record$ is defined by \eqref{sigmadef}. Let $0 \leq h_0 \leq X^{1-\eps}$. \begin{itemize} \item[(i)] (Averaged Hardy-Littlewood conjecture) One has $$ \sum_{X < n \leq 2X} \Lambda(n) \Lambda(n+h) = {\mathfrak S}(h) X + O_{A,\eps}( X \log^{-A} X )$$ for all but $O_{A,\eps}( H \log^{-A} X )$ values of $h$ with $|h-h_0| \leq H$. \item[(ii)] (Averaged divisor correlation conjecture) One has $$ \sum_{X < n \leq 2X} d_k(n) d_l(n+h) = P_{k,l,h}(\log X) X + O_{A, \eps,k,l}( X \log^{-A} X )$$ for all but $O_{A,\eps,k,l}( H \log^{-A} X )$ values of $h$ with $|h-h_0| \leq H$. \item[(iii)] (Averaged higher order Titchmarsh divisor problem) One has $$ \sum_{X < n \leq 2X} \Lambda(n) d_k(n+h) = Q_{k,h}(\log X) X + O_{A,\eps,k}( X \log^{-A} X )$$ for all but $O_{A,\eps,k}( H \log^{-A} X )$ values of $h$ with $|h-h_0| \leq H$. \item[(iv)] (Averaged Goldbach conjecture) One has $$ \sum_n \Lambda(n) \Lambda(N-n) = {\mathfrak S}(N) N + O_{A,\eps}(X \log^{-A} X )$$ for all but $O_{A,\eps}(H \log^{-A} X)$ integers $N$ in the interval $[X,X+H]$. \end{itemize} \end{theorem} In the case of correlations of the divisor functions, our method can be modified to obtain power-savings in the error terms. However, since we cannot obtain power-savings in the case of correlations of the von Mangoldt function, in order to keep our choice of parameters uniform accross the four cases stated in Theorem \ref{unav-corr} we have decided to state the result for the divisor function with weaker error terms (see Remark \ref{power} below for more details). As mentioned previously, the cases $H \geq X^{\frac{1}{3}+\eps}$ of the above theorem are essentially in the literature, either being contained in the papers of Mikawa \cite{mikawa} Perelli-Pintz \cite{pp} and Baier et al. \cite{bbmz}, or following from a modification of their methods. We give a slightly different proof on these cases in this paper. 
Still another, but related, proof of the $H \geq X^{\frac{1}{3}+\eps}$ cases could be obtained by adapting arguments used previously for studying the Goldbach problem in short intervals, see e.g. \cite[Chapter 10]{harman} for such arguments. In the range $H > X^{\frac{1}{3} + \varepsilon}$ our argument relies only on standard mean-value theorems and on a simple bound for the fourth moment of Dirichlet $L$-functions (which follows from the mean-value theorem and Poisson summation formula). In contrast, the approaches in \cite{mikawa, bbmz, pp} depend in the range $H > X^{1/3 + \varepsilon}$ on some non-trivial input, such as either bounds for the sixth moment of the Riemann zeta-function off the half-line (in \cite{bbmz}), zero-density estimates (in \cite{pp}) or estimates for Kloosterman sums (in \cite{mikawa}). In fact our approach is entirely independent of results on Kloosterman sums, even for smaller $H$ (see Remark \ref{kloosterman} below for more details). Before we embark on a discussion of the proof, we note that our results do not appear to have new consequences for moments of the Riemann-zeta function. For instance for the problem of estimating the sixth moment of the Riemann zeta-function one needs an estimate for \begin{equation} \label{correlationD} \sum_{h \leq H} \sum_{n \leq X} d_3(n) d_3(n + h) \end{equation} in the range $H = X^{1/3}$. To obtain an improvement over the best-known estimate $$ \int_{T}^{2T} |\zeta(\tfrac 12 + it)|^6 dt \ll T^{5/4+\eps} $$ one would need to show that in the range $H = X^{1/3}$ the error term in \eqref{correlationD} is $\ll H X^{5/6 - \varepsilon}$. A naive application of our result, gives a bound of $\ll_{A} H X (\log X)^{-A}$ for the error term. As we pointed out earlier in the case of the divisor function, it is possible to improve the $(\log X)^{-A}$ to $X^{-\delta}$ for some $\delta > 0$, however since our method is optimized for dealing with smaller $H$, rather than with $H = X^{1/3}$ we doubt that there will be new results in this range. We now briefly summarize the arguments used to prove Theorem \ref{unav-corr}. To follow the many changes of variable of summation (or integration) in the argument, it is convenient to refer to the following diagram: $$ \left. \begin{array}{ccc} \text{Additive frequency } \alpha & & \text{Multiplicative frequency } t \\ \Updownarrow & & \Updownarrow \\ \text{Position } n & \Leftrightarrow & \text{Logarithmic position } u \end{array} \right. $$ Initially, the correlations studied in Theorem \ref{unav-corr} are expressed in terms of the position variable $n$ (an integer comparable to $X$), which we have placed in the bottom left of the above diagram. The first step in analyzing these correlations, which is standard, is to apply the Hardy-Littlewood circle method (i.e., the Fourier transform), which expresses correlations such as \eqref{co} as an integral $$ \int_{\T} S_f(\alpha) \overline{S_g(\alpha)} e(\alpha h)\ d\alpha$$ over the unit circle $\T \coloneqq \R/\Z$, where $S_f, S_g$ are the exponential sums \begin{align*} S_f(\alpha) &\coloneqq \sum_{X < n \leq 2X} f(n) e(n \alpha) \\ S_g(\alpha) &\coloneqq \sum_{X < n \leq 2X} g(n) e(n\alpha). \end{align*} The additive frequency $\alpha$, which is the Fourier-analytic dual to the position variable $n$, is depicted on the top left of the above diagram. In our applications, $f$ will be of the form $\Lambda 1_{(X,2X]}$ or $d_k 1_{(X,2X]}$, and similarly for $g$. 
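For orientation, we record the standard orthogonality computation behind this step; it is purely expository and uses nothing beyond the definitions just given. Since $\int_\T e(k\alpha)\ d\alpha = 1_{k=0}$ for any integer $k$, one has
$$ \int_{\T} S_f(\alpha) \overline{S_g(\alpha)} e(\alpha h)\ d\alpha = \sum_{X < n, m \leq 2X} f(n) \overline{g(m)} \int_\T e\big( (n-m+h)\alpha \big)\ d\alpha = \sum_{\substack{X < n \leq 2X \\ X < n+h \leq 2X}} f(n) \overline{g(n+h)},$$
which differs from \eqref{co} only in $O(|h|)$ boundary terms; these are negligible in the regime $|h| \leq H \leq X^{1-\eps}$ considered here, as each such term is $O(X^{o(1)})$ for the functions $f,g$ of interest.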
We then divide $\T$ into the \emph{major arcs}, in which $|\alpha - \frac{a}{q}| \leq \frac{\log^{B'} X}{X}$ for some $q \leq \log^B X$, and the \emph{minor arcs}, which consist of all other $\alpha$. Here $B' > B > 0$ are suitable large constants (depending on the parameters $A,k,l$). The major arcs contribute the main terms ${\mathfrak S}(h) X$, $P_{k,l,h}(\log X) X$, $Q_{k,h}(\log X) X$, ${\mathfrak S}(N) N$ to Theorem \ref{unav-corr}, and the estimation of their contribution is standard; we do this in Section \ref{major-sec}. The main novelty in our arguments lies in the treatment of the minor arc contribution, which we wish to show is negligible on the average. After an application of the Cauchy-Schwarz inequality, the main task becomes that of estimating the integral \begin{equation}\label{dol} \int_{\beta - 1/H}^{\beta + 1/H} |S_f(\alpha)|^2\ d\alpha \end{equation} for various ``minor arc'' $\beta$. To do this, we follow a strategy from a paper of Zhan \cite{zhan} and estimate this type of integral in terms of the Dirichlet series $$ {\mathcal D}[f](\frac{1}{2}+it) \coloneqq \sum_n \frac{f(n)}{n^{\frac{1}{2}+it}}$$ for various ``multiplicative frequencies'' $t$. Actually for technical reasons we will have to twist these Dirichlet series by a Dirichlet character $\chi$ of small conductor, but we ignore this complication for this informal discussion. The variable $t$ is depicted on the top right of the above diagram, and so we will have to return to the position variable $n$ and then go through the logarithmic position variable $u$, which we will introduce shortly. Applying the Fourier transform (as was done by Gallagher in \cite{gallagher}), we can control the expression \eqref{dol} in terms of an expression of the form $$ \int_\R |\sum_{x \leq n \leq x+H} f(n) e(\beta n)|^2\ dx.$$ Actually, it is convenient to smooth the summation appearing here, but we ignore this technicality for this informal discussion. This returns one to the bottom left of the above diagram. Next, one makes the logarithmic change of variables $u = \log n - \log X$, or equivalently $n = X e^u$. This transforms the main variable of interest to a bounded real number $u=O(1)$, and the phase $e(\beta n)$ that appears in the above expression now takes the form $e(\beta X e^u)$. We are now at the bottom right of the diagram. Finally, one takes the Fourier transform to convert the expression involving $u$ to an expression involving $t$, which (up to a harmless factor of $2\pi$, as well as a phase modulation) is the Fourier dual of $u$. Because the $u$ derivative of the phase $\beta X e^u$ is comparable in magnitude to $|\beta| X$, one would expect the main contributions in the integration over $t$ to come from the region where $t$ is comparable to $|\beta| X$. This intuition can be made rigorous using Fourier-analytic tools such as Littlewood-Paley projections and the method of stationary phase. At this point, after all the harmonic analytic transformations, we come to the arithmetic heart of the problem. 
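To make the stationary phase heuristic above concrete, here is the corresponding back-of-the-envelope computation (nothing in it is used formally). After the change of variables, the integrals in question take the rough shape $\int a(u)\, e(\beta X e^u)\, e^{-iut}\ du$ with a smooth amplitude $a$ supported on $u = O(1)$ (up to the harmless factor of $2\pi$ and the phase modulation mentioned above). The total phase $2\pi \beta X e^u - ut$ has $u$-derivative $2\pi \beta X e^u - t$, which vanishes only when $t = 2\pi \beta X e^u$; since $u = O(1)$, this is consistent with the localization $|t| \asymp |\beta| X$ used below.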
A precise statement of the estimates needed can be found in Proposition \ref{mve}; a model problem is to obtain an upper bound on the quantity $$ \int_{|t| \asymp \lambda X} \left(\int_{t-\lambda H}^{t+\lambda H} \left|{\mathcal D}[f](\frac{1}{2}+it')\right|\ dt'\right)^2\ dt$$ for $\frac{1}{H} \ll \lambda \ll \log^{-B} X$ that improves (by a large power of $\log X$) upon the trivial bound of $O_k(\lambda^2 H^2 X \log^{O_k(1)} X)$ that one can obtain from the Cauchy-Schwarz inequality $$ \left(\int_{t-\lambda H}^{t+\lambda H} |{\mathcal D}[f](\frac{1}{2}+it')|\ dt'\right)^2 \ll \lambda H \int_{t-\lambda H}^{t+\lambda H} |{\mathcal D}[f](\frac{1}{2}+it')|^2\ dt' $$ Fubini's theorem, and the standard $L^2$ mean value theorem for Dirichlet polynomials. The most difficult case occurs when $\lambda$ is large (e.g. $\lambda = \log^{-B} X$); indeed, the case $\lambda \leq X^{-\frac{1}{6}-\eps}$ of small $\lambda$ is analogous to the prime number theorem in most short intervals of the form $[X, X+X^{\frac{1}{6}+\eps}]$, and (following \cite{harman}) can be treated by such methods as the Huxley large values estimate and mean value theorems for Dirichlet polynomials. This is done in Appendix \ref{harman-sec}. (In the case $f = d_3 1_{(X,2X]}$, these bounds are essentially contained (in somewhat disguised form) in \cite[Theorem 1.1]{bbmz}.) For sake of argument let us focus now on the case $f = \Lambda 1_{(X,2X]}$. We proceed via the usual technique of decomposing $\Lambda$ using the Heath-Brown identity \cite{hb-ident} and further dyadic decompositions. Because $\sigma$ lies in the range \eqref{sigma-range}, this leaves us with ``Type II'' sums where $f$ is replaced by a Dirichlet convolution $\alpha \ast \beta$ with $\alpha$ supported on $[X^{\eps^2}, X^{-\eps^2} H]$, as well as ``Type $d_1$'', ``Type $d_2$'', ``Type $d_3$'', and ``Type $d_4$'' sums where (roughly speaking) $f$ is replaced by a Dirichlet convolution that resembles one of the first four divisor functions $d_1,d_2,d_3,d_4$ respectively. (See Proposition \ref{types} for a precise statement of the estimates needed.) The contribution of the Type II sums can be easily handled by an application of the Cauchy-Schwarz inequality and $L^2$ mean value theorems for Dirichlet polynomials. The Type $d_1$ and Type $d_2$ sums can be treated by $L^4$ moment theorems \cite{ramachandra}, \cite{bhp} for the Riemann zeta function and Dirichlet $L$-functions. These arguments are already enough to recover the results in \cite{mikawa}, \cite{pp}, \cite{bbmz}, which treated the case $H \geq X^{1/3+\eps}$; our methods are slightly different from those in \cite{mikawa}, \cite{pp}, \cite{bbmz} due to our heavier reliance on Dirichlet polynomials. To break the $X^{1/3}$ barrier we need to control Type $d_3$ sums, and to go below $X^{1/4}$ one must also consider Type $d_4$ sums. The standard unconditional moment estimates on the Riemann zeta function and Dirichlet $L$-functions are inadequate for treating the $d_3$ sums. Instead, after applying the Cauchy-Schwarz inequality and subdividing the range $\{ t: t \asymp \lambda X\}$ into intervals of length $\sqrt{\lambda X}$, the problem reduces to obtaining two bounds on Dirichlet polynomials in ``typical'' short or medium intervals. 
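Before turning to these two bounds, we record for comparison how the trivial bound mentioned earlier arises; this is a routine computation included only for the reader's convenience. Integrating the Cauchy--Schwarz inequality displayed above over $|t| \asymp \lambda X$ and applying Fubini's theorem, each frequency $t'$ is counted for a set of $t$ of measure $O(\lambda H)$, so that
$$ \int_{|t| \asymp \lambda X} \left(\int_{t-\lambda H}^{t+\lambda H} \left|{\mathcal D}[f](\tfrac{1}{2}+it')\right|\ dt'\right)^2\ dt \ll \lambda^2 H^2 \int_{|t'| \ll \lambda X} \left|{\mathcal D}[f](\tfrac{1}{2}+it')\right|^2\ dt' \ll_k \lambda^2 H^2 X \log^{O_k(1)} X,$$
where the last step uses the standard $L^2$ mean value theorem (Lemma \ref{mvt-lem} below) for the divisor-bounded truncated functions $f$ under consideration, together with the fact that the $t'$-integration is over a set of measure $O(\lambda X) = O(X)$. The arguments below improve on this by a large power of $\log X$.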
A model for these problems would be to establish the bounds \begin{equation}\label{t1} \int_{t_j - \sqrt{\lambda X}}^{t_j+\sqrt{\lambda X}} \left|{\mathcal D}[1_{(X^{1/3},2X^{1/3}]}]\left(\frac{1}{2}+it\right)\right|^4\ dt \ll_\eps X^{\eps^2} \sqrt{\lambda X} \end{equation} and \begin{equation}\label{t2} \int_{t_j - H}^{t_j+H} \left|{\mathcal D}[1_{(X^{1/3},2X^{1/3}]}]\left(\frac{1}{2}+it\right)\right|^2\ dt \ll_\eps X^{\eps^2} H \end{equation} for ``typical'' $j=1,\dotsc,r$, where $t_1,\dotsc,t_r$ is a maximal $\sqrt{\lambda X}$-separated subset of $[\lambda X, 2\lambda X]$. (These are oversimplifications; see Proposition \ref{qwe} and Proposition \ref{pq} for more precise statements of the bounds needed.) The first estimate \eqref{t1} turns out to follow readily from a fourth moment estimate of Jutila \cite{jutila} for Dirichlet $L$-functions in medium-sized intervals on average. As for \eqref{t2}, one can use the Fourier transform to bound the left-hand side by something that is roughly of the form \begin{equation}\label{hx3} \frac{H}{X^{1/3}} \sum_{\ell = O( X^{1/3} / H )} \left|\sum_{m \asymp X^{1/3}} e\left( \frac{t_j}{2\pi} \log \frac{m+\ell}{m-\ell} \right)\right|. \end{equation} The diagonal term $\ell=0$ is easy to treat, so we focus on the non-zero values of $\ell$. By Taylor expansion, the phase $\frac{t_j}{2\pi} \log \frac{m+\ell}{m-\ell}$ is approximately equal to the monomial $\frac{t_j}{\pi} \frac{\ell}{m}$. If one were to actually replace $e( \frac{t_j}{2\pi} \log \frac{m+\ell}{m-\ell} )$ by $e( \frac{t_j}{\pi} \frac{\ell}{m})$, then it turns out that one can obtain a very favorable estimate by using the fourth moment bounds of Robert and Sargos \cite{rs} for exponential sums with monomial phases. Unfortunately, the Taylor expansion does contain an additional lower order term of $\frac{t_j}{3\pi} \frac{\ell^3}{m^3}$ which complicates the analysis, but it turns out that (at the cost of some inefficiency) one can still apply the bounds of Robert and Sargos to obtain a satisfactory estimate for the indicated value \eqref{sigmadef} of $\sigma$. In the range \eqref{sigma-range} one must also treat the Type $d_4$ sums. Here we use a cruder version of the Type $d_3$ analysis. The analogue of Jutila's estimate (which would now require control of sixth moments) is not known unconditionally, so we use the classical $L^2$ mean value theorem in its place. The estimates of Robert and Sargos are now unfavorable, so we instead estimate the analogue of \eqref{hx3} using the classical van der Corput exponent pair $(1/14,2/7)$, which turns out to work even for $\sigma$ as small as $7/30$ (see \eqref{sigma-range}). Hence $d_4$ sums turn out to be easier than $d_3$ in our range of $H$. However we are not able to estimate $d_4$ sums in the full range $X^{1/5 + \varepsilon} < H < X^{1/4 - \varepsilon}$. Therefore there is no advantage in considering $d_5$ sums which would appear if we wanted to take $H$ below $X^{1/5 - \varepsilon}$ (we note that we can cover a tiny region of the $d_5$ sums by proceeding in the same manner as we do with $d_4$ sums). \begin{remark} \label{kloosterman} It is interesting to note that our work does not depend at all on estimates for Kloosterman sums. While the work of Mikawa for $H > X^{1/3 + \varepsilon}$ depends on the Weil bound for Kloosterman sums, our result in the same range only uses a bound for the fourth moment of Dirichlet $L$-functions The latter follows from the approximate functional equation and a mean-value theorem. 
In the smaller ranges of $H$ we use in addition estimates for short moments of Dirichlet $L$-functions (due to Jutila, see Proposition \ref{jutila-prop} below and also Corollary \ref{jutila-cor}) that are of the same strength as those that one obtains from using Kloosterman sums (due to Iwaniec, see \cite{iwaniec}) and yet whose proof is independent of input from algebraic geometry or spectral theory. On the other hand we note that the arguments of Perelli-Pintz \cite{pp} for $H > X^{1/3 + \varepsilon}$ do not depend on Kloosterman sums but instead of zero-density estimates. \end{remark} \begin{remark}\label{power} As usual, the results involving $\Lambda$ will have the implied constant depend in an ineffective fashion on the parameter $A$, due to our reliance on Siegel's theorem. It may be possible to eliminate this ineffectivity (possibly after excluding some ``bad'' scales $X \asymp X_0$) by introducing a separate argument (in the spirit of \cite{hb-twins}) to handle the case of a Siegel zero, but we do not pursue this matter here. In the proof of Theorem \ref{unav-corr}(ii), we do not need to invoke Siegel's theorem, and it is likely that (as in \cite{bbmz}) we can improve the logarithmic savings $\log^{-A} X$ to a power savings $X^{-\frac{c\eps}{k+l}}$ for some absolute constant $c>0$ (and with effective constants) by a refinement of the argument. However, we do not do this here in order to be able to treat all four estimates in a unified fashion. \end{remark} \subsection{Acknowledgments} KM was supported by Academy of Finland grant no. 285894. MR was supported by a NSERC Discovery Grant, the CRC program and a Sloan fellowship. TT was supported by a Simons Investigator grant, the James and Carol Collins Chair, the Mathematical Analysis \& Application Research Fund Endowment, and by NSF grant DMS-1266164. We are indebted to Yuta Suzuki for a reference and for pointing out a gap in the proof of Proposition \ref{spe} in an earlier version of the paper. We also thank Sary Drappeau and Karin Halupczok for comments on the introduction and the referee for a careful reading of the paper. Part of this paper was written while the authors were in residence at MSRI in Spring 2017, which is supported by NSF grant DMS-1440140. \section{Notation and preliminaries}\label{notation-sec} All sums and products will be over integers unless otherwise specified, with the exception of sums and products over the variable $p$ (or $p_1$, $p_2$, $p'$, etc.) which will be over primes. To accommodate this convention, we adopt the further convention that all functions on the natural numbers are automatically extended by zero to the rest of the integers, e.g. $\Lambda(n) = 0$ for $n \leq 0$. We use $A = O(B)$, $A \ll B$, or $B \gg A$ to denote the bound $|A| \leq C B$ for some constant $C$. If we permit $C$ to depend on additional parameters then we will indicate this by subscripts, thus for instance $A = O_{k,\eps}(B)$ or $A \ll_{k,\eps} B$ denotes the bound $|A| \leq C_{k,\eps} B$ for some $C_{k,\eps}$ depending on $k,\eps$. If $A,B$ both depend on some large parameter $X$, we say that $A = o(B)$ as $X \to \infty$ if one has $|A| \leq c(X) B$ for some function $c(X)$ of $X$ (as well as further ``fixed'' parameters not depending on $X$), which goes to zero as $X \to \infty$ (holding all ``fixed'' parameters constant). We also write $A \asymp B$ for $A \ll B \ll A$, with the same subscripting conventions as before. 
We use $ \T \coloneqq \R/\Z$ to denote the unit circle, and $e: \T \to \C$ to denote the fundamental character $$ e(x) \coloneqq e^{2\pi i x}.$$ We use $1_E$ to denote the indicator of a set $E$, thus $1_E(n) = 1$ when $n \in E$ and $1_E(n) = 0$ otherwise. Similarly, if $S$ is a statement, we let $1_S$ denote the number $1$ when $S$ is true and $0$ when $S$ is false, thus for instance $1_E(n) = 1_{n \in E}$. If $E$ is a finite set, we use $\# E$ to denote its cardinality. We use $(a,b)$ and $[a,b]$ for the greatest common divisor and least common multiple of natural numbers $a,b$ respectively, and write $a|b$ if $a$ divides $b$. We also write $a = b\ (q)$ if $a$ and $b$ have the same residue modulo $q$. Given a sequence $f: X \to \C$ on a set $X$, we define the $\ell^p$ norm $\|f\|_{\ell^p}$ of $f$ for any $1 \leq p < \infty$ as $$ \|f\|_{\ell^p} \coloneqq \left(\sum_{n \in X} |f(n)|^p\right)^{1/p}$$ and similarly define the $\ell^\infty$ norm $$ \|f\|_{\ell^\infty} \coloneqq \sup_{n \in X} |f(n)|.$$ Given two arithmetic functions $f, g: \N \to \C$, the Dirichlet convolution $f \ast g$ is defined by $$ f \ast g(n) \coloneqq \sum_{d|n} f(d) g\left(\frac{n}{d}\right).$$ \subsection{Summation by parts and exponential sums} If one has an asymptotic of the form $\sum_{X \leq n \leq X''} g(n) \approx \int_X^{X''} h(x)\ dx$ for all $X \leq X'' \leq X'$, then one can use summation by parts to obtain approximations of the form $\sum_{X \leq n \leq X'} f(n) g(n) \approx \int_X^{X'} f(x) h(x)\ dx$ for sufficiently ``slowly varying'' amplitude functions $f: [X,X'] \to \C$. The following lemma formalizes this intuition: \begin{lemma}[Summation by parts]\label{sbp} Let $X \leq X'$, and let $f: [X,X'] \to \C$ be a smooth function. Then for any function $g: \N \to \C$ and absolutely integrable $h: [X,X'] \to \C$, we have $$ \left|\sum_{X \leq n \leq X'} f(n) g(n) - \int_X^{X'} f(x) h(x)\ dx\right| \leq |f(X')| E(X') + \int_{X}^{X'} |f'(X'')| E(X'')\ dX''$$ where $f'$ is the derivative of $f$ and $E(X'')$ is the quantity $$ E(X'') \coloneqq \left|\sum_{X \leq n \leq X''} g(n) - \int_X^{X''} h(x)\ dx\right|.$$ \end{lemma} \begin{proof} From the fundamental theorem of calculus we have \begin{equation}\label{sb-ident} \sum_{X \leq n \leq X'} f(n) g(n) = f(X') \sum_{X \leq n \leq X'} g(n) - \int_X^{X'} \left(\sum_{X \leq n \leq X''} g(n)\right) f'(X'')\ dX'' \end{equation} and similarly $$ \int_X^{X'} f(x) h(x)\ dx = f(X') \int_X^{X'} h(x)\ dx - \int_X^{X'} \left(\int_X^{X''} h(x)\ dx\right) f'(X'')\ dX''.$$ Subtracting the two identities and applying the triangle inequality and Minkowski's integral inequality, we obtain the claim. \end{proof} The following variant of Lemma \ref{sbp} will also be useful. Following Robert and Sargos \cite{rs}, define the maximal sum $|\sum_{X \leq n \leq X'} g(n)|^*$ to be the expression \begin{equation}\label{maxexp} \left|\sum_{X \leq n \leq X'} g(n)\right|^* \coloneqq \sup_{X \leq X_1 \leq X_2 \leq X'} \left|\sum_{X_1 \leq n \leq X_2} g(n)\right|. \end{equation} \begin{lemma}[Summation by parts, II]\label{sbp-2} Let $X \leq X'$, let $f: [X,X'] \to \C$ be smooth, and let $g: \N \to \C$ be a sequence.
Then $$ \left|\sum_{X \leq n \leq X'} f(n) g(n)\right|^* \leq \left|\sum_{X \leq n \leq X'} g(n)\right|^* \left( \sup_{X \leq x \leq X'} |f(x)| + (X'-X) \sup_{X \leq x \leq X'} |f'(x)|\right).$$ \end{lemma} \begin{proof} Our task is to show that $$ \left|\sum_{X_1 \leq n \leq X_2} f(n) g(n)\right| \leq \left|\sum_{X \leq n \leq X'} g(n)\right|^* \left( \sup_{X \leq x \leq X'} |f(x)| + (X'-X) \sup_{X \leq x \leq X'} |f'(x)|\right).$$ for all $X \leq X_1 \leq X_2 \leq X'$. The claim then follows from \eqref{sb-ident} (replacing $X,X'$ by $X_1,X_2$) and the triangle inequality and Minkowski's integral inequality. \end{proof} To estimate maximal exponential sums, we will use the following estimates, contained in the work of Robert and Sargos \cite{rs}: \begin{lemma}\label{rs1} Let $M \geq 2$ be a natural number, and let $X \geq 2$ be a real number. \begin{itemize} \item[(i)] Let $\varphi(1),\dotsc,\varphi(M)$ be real numbers, let $a_1,\dotsc,a_M$ be complex numbers of modulus at most one, and let $2 \leq Y \leq X$. Then $$ \int_0^X \left(\left|\sum_{m=1}^M a_m e( t \varphi(m) )\right|^*\right)^4\ dt \ll \frac{X \log^4 X}{Y} \int_0^Y \left(\left|\sum_{m=1}^M e( t \varphi(m) )\right|\right)^4\ dt.$$ \item[(ii)] Let $\theta \neq 0,1$ be a real number, let $\eps>0$, and let $a_M,\dotsc,a_{2M}$ be complex numbers of modulus at most one. Then $$ \int_0^X \left(\left|\sum_{m=M}^{2M} a_m e\left( t \left(\frac{m}{M}\right)^\theta \right)\right|^*\right)^4\ dt \ll_{\theta,\eps} (X+M)^\eps (M^4 + M^2 X).$$ \item[(iii)] Suppose that $M \ll X \ll M^2$. Let $\varphi: \R \to \R$ be a smooth function obeying the derivative estimates $|\varphi^{(j)}(x)| \asymp X / M^j$ for $j=1,2,3,4$ and $x \asymp M$. Then $$ \left| \sum_{m=M}^{2M} e( \varphi(m) )\right|^* \ll \frac{M}{X^{1/2}} \left| \sum_{\epsilon \ell \asymp L} e(\varphi^*(\ell))\right|^* + M^{1/2}$$ for some $L \asymp \frac{X}{M}$, where $\varphi^*(t) \coloneqq \varphi(u(t)) - t u(t)$ is the (negative) Legendre transform of $\varphi$, $u$ is the inverse of the function $\varphi'$, and $\epsilon = \pm 1$ denotes the sign of $\varphi'(x)$ in the range $x \asymp M$. \end{itemize} \end{lemma} \begin{proof} Part (i) follows from the $p=2$ case of \cite[Lemma 3]{rs}. Part (ii) follows from \cite[Lemma 7]{rs} when $X \leq M^2$, and the remaining case $X > M^2$ then follows from part (i). Finally, part (iii) follows from applying the van der Corput $B$-process (and Lemma \ref{sbp-2}), see e.g. \cite[Lemma 3.6]{graham} or \cite[Theorem 8.16]{ik}, replacing $\varphi$ with $-\varphi$ if necessary to normalize the second derivative $\varphi''$ to be positive. \end{proof} \subsection{Divisor-bounded arithmetic functions}\label{div-bounded-functions-sec} Let us call an arithmetic function $\alpha: \N \to \C$ \emph{$k$-divisor-bounded} for some $k \geq 0$ if one has the pointwise bound $$ \alpha(n) \ll_k d_2^k(n) \log^k(2+n)$$ for all $n$. From the elementary mean value estimate \begin{equation}\label{divisor-crude} \sum_{1 \leq n \leq x} d_l(n)^k \ll_{k,l} x \log^{l^k-1}(2+x), \end{equation} valid for any $k \geq 0$, $l \geq 2$, and $x \geq 1$ (see e.g., \cite[formula (1.80)]{ik}), we see that a $k$-divisor-bounded function obeys the $\ell^2$ bounds \begin{equation}\label{alpha-2} \sum_{n \leq x} \alpha(n)^2 \ll_k x \log^{O_k(1)}(2+x) \end{equation} for any $x \geq 1$. 
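To spell out this routine deduction (recorded only for the reader's convenience): if $\alpha(n) \ll_k d_2^k(n) \log^k(2+n)$, then by \eqref{divisor-crude} (applied with $l=2$ and with $k$ replaced by $2k$)
$$ \sum_{n \leq x} |\alpha(n)|^2 \ll_k \log^{2k}(2+x) \sum_{n \leq x} d_2(n)^{2k} \ll_k x \log^{2k + 2^{2k} - 1}(2+x), $$
which is of the form \eqref{alpha-2}.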
Applying \eqref{alpha-2} with $\alpha$ replaced by a large power of $\alpha$, we conclude in particular the $\ell^\infty$ bound \begin{equation}\label{alpha-infty} \sup_{n \leq x} |\alpha(n)| \ll_{k,\eps} x^\eps \end{equation} for any $\eps > 0$. \subsection{Dirichlet polynomials}\label{dp-sec} Given any function $f: \N \to \C$ supported on a finite set, we may form the Dirichlet polynomial \begin{equation}\label{dir} {\mathcal D}[f](s) \coloneqq \sum_n \frac{f(n)}{n^s} \end{equation} for any complex $s$; if $f$ has infinite support but is bounded, we can still define ${\mathcal D}[f]$ in the region $\mathrm{Re} s > 1$. We will use a normalization in which we mostly evaluate Dirichlet polynomials on the critical line $\{\frac{1}{2}+it: t \in \R\}$, but one could easily run the argument using other normalizations, for instance by evaluating all Dirichlet polynomials on the line $\{1+it: t \in \R\}$ instead. We have the following standard estimate: \begin{lemma}[Truncated Perron formula]\label{tpf-lem} Let $f: \N \to \C$, let $T, X \geq 2$, and let $1 \leq x \leq X$. \begin{itemize} \item[(i)] If $f$ is $k$-divisor-bounded for some $k\geq 0$, and $T \leq X^{1-\eps}$, then for any $0 \leq \sigma < 1 - 2\eps$, one has $$ \sum_{n \leq x} \frac{f(n)}{n^\sigma} = \frac{1}{2\pi} \int_{-T}^T {\mathcal D}[f](1 + \frac{1}{\log X} +it) \frac{x^{1-\sigma+\frac{1}{\log X}+it}}{1-\sigma+\frac{1}{\log X}+it}\ dt + O_{k,\sigma,\eps}\left( \frac{X^{1-\sigma} \log^{O_k(1)}(TX)}{T} \right).$$ \item[(ii)] If $f: \N \to \C$ is supported on $[X/C, CX]$ for some $C>1$, then \begin{equation}\label{tpf} \sum_{n \leq x} f(n) = \frac{1}{2\pi} \int_{-T}^T {\mathcal D}[f](\frac{1}{2}+it) \frac{x^{\frac{1}{2}+it}}{\frac{1}{2}+it}\ dt + O_C\left( \sum_n |f(n)| \min\left( 1, \frac{X}{T|x-n|}\right) \right). \end{equation} In particular, if we estimate $f(n)$ pointwise by $\|f\|_{\ell^\infty}$, we have \begin{equation}\label{tpf-2} \sum_{n \leq x} f(n) = \frac{1}{2\pi} \int_{-T}^T {\mathcal D}[f](\frac{1}{2}+it) \frac{x^{\frac{1}{2}+it}}{\frac{1}{2}+it}\ dt + O_C\left( \|f\|_{\ell^\infty} \frac{X \log(2 + T)}{T} \right). \end{equation} \end{itemize} \end{lemma} \begin{proof} For (i), apply \cite[Corollary 5.3]{mv} with $a_n \coloneqq \frac{f(n)}{n^\sigma}$ and $\sigma_0 \coloneqq 1 - \sigma + \frac{1}{\log X}$, as well as \eqref{divisor-crude}. For (ii), apply \cite[Corollary 5.3]{mv} instead with $a_n \coloneqq f(n)$ and $\sigma_0 \coloneqq \frac{1}{2}$. \end{proof} As one technical consequence of this lemma, we can estimate the effect of truncating an arithmetic function $f$ on its Dirichlet series: \begin{corollary}[Truncating a Dirichlet series]\label{trunc-dir} Suppose that $f: \N \to \C$ is supported on $[X/C, CX]$ for some $X \geq 1$ and $C>1$. Let $T \geq 1$. Then for any interval $[X_1,X_2]$ and any $t \in \R$, we have the pointwise bound $$ {\mathcal D}[f 1_{[X_1,X_2]}](\frac{1}{2}+it) \ll_C \int_{-T}^T |{\mathcal D}[f](\frac{1}{2}+it+iu)| \frac{du}{1+|u|} + \|f\|_{\ell^\infty} \frac{X^{1/2} \log(2+T)}{T}.$$ \end{corollary} Because the weight $\frac{1}{1+|u|}$ integrates to $O(\log(2+T))$ on $[-T,T]$, this corollary is morally asserting that the Dirichlet polynomial of $f 1_{[X_1,X_2]}$ is controlled by that of $f$ up to logarithmic factors. As such factors will be harmless in our applications, this corollary effectively allows one to dispose of truncations such as $1_{[X_1,X_2]}$ appearing in a Dirichlet polynomial whenever desired.
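For completeness, the assertion about the weight is just the elementary computation $\int_{-T}^{T} \frac{du}{1+|u|} = 2\log(1+T) \ll \log(2+T)$ for $T \geq 1$.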
\begin{proof} Applying Lemma \ref{tpf-lem}(ii) with $f$ replaced by $n \mapsto f(n) / n^{it}$, we have for any $x$ that $$ \sum_{n \leq x} \frac{f(n)}{n^{it}} = \frac{1}{2\pi} \int_{-T}^T {\mathcal D}[f](\frac{1}{2}+it+iu) \frac{x^{\frac{1}{2}+iu}}{\frac{1}{2}+iu}\ du + O_C\left( \|f\|_{\ell^\infty} \frac{X\log(2 + T) }{T} \right)$$ and hence by the triangle inequality $$ \sum_{n \leq x} \frac{f(n)}{n^{it}} \ll_C x^{1/2} \int_{-T}^T |{\mathcal D}[f](\frac{1}{2}+it+iu)| \frac{du}{1+|u|}+ \|f\|_{\ell^\infty} \frac{X \log(2 + T)}{T} .$$ The claim now follows from Lemma \ref{sbp} (with $h=0$, $g(n)$ replaced by $f(n)/n^{it}$, and $f(x)$ replaced by $x^{-1/2}$). \end{proof} \subsection{Arithmetic functions with good cancellation}\label{sec:good-cancel} Let $\alpha \colon \mathbb{N} \to \mathbb{C}$ be a $k$-divisor-bounded function. From \eqref{alpha-2} and Cauchy-Schwarz, we see that \begin{equation}\label{dirt} \sum_{n \leq x: n = a\ (q)} \frac{\alpha(n)}{n^{\frac{1}{2}+it}} \ll_{k} x^{1/2} \log^{O_k(1)} x \end{equation} for any $t \in \R$, $q \geq 1$, and $a \in \Z$. We will say that a $k$-divisor-bounded function $\alpha$ has \emph{good cancellation} if one has the improved bound \begin{equation}\label{alb} \sum_{n \leq x: n = a\ (q)} \frac{\alpha(n)}{n^{\frac{1}{2}+it}} \ll_{k,A,B,B'} x^{1/2} \log^{-A} x \end{equation} for any $A,B,B'>0$, $x \geq 2$, $q \leq \log^B x$, $a \in \Z$, and $t \in \R$ with $\log^{B'} x \leq |t| \leq x^{B'}$, provided that $B'$ is sufficiently large depending on $A,B,k$. It is clear that if $\alpha$ is a $k$-divisor-bounded function with good cancellation, then so is its restriction $\alpha 1_{[X_1,X_2]}$ to any interval $[X_1,X_2]$. The property of being $k$-divisor-bounded with good cancellation is also basically preserved under Dirichlet convolution: \begin{lemma}\label{good-cancel} Let $\alpha, \beta$ be $k$-divisor-bounded functions. Then $\alpha \ast \beta$ is a $(2k+1)$-divisor-bounded function. Furthemore, if $\alpha$ and $\beta$ both have good cancellation, then so does $\alpha \ast \beta$. If, in addition, there is an $N$ for which $\alpha$ is supported on $[N^2,+\infty]$ and $\beta$ is supported on $[1,N]$, then one can omit the hypothesis that $\beta$ has good cancellation in the previous claim. \end{lemma} \begin{proof} Using the elementary inequality $d_2^{k_1} \ast d_2^{k_2} \leq d_2^{k_1+k_2+1}$, we see $\alpha \ast \beta$ is $2k+1$-divisor-bounded. Next, suppose that $\alpha$ and $\beta$ have good cancellation, and let $A,B,B' > 0$, $x \geq 2$, $q \leq \log^B X$, $a \in \Z$, and $t \in \R$ with $\log^{B'} x \leq |t| \leq x^{B'}$, with $B'$ is sufficiently large depending on $A,B,k$. To show that $\alpha \ast \beta$ has good cancellation, it suffices by dyadic decomposition to show that \begin{equation}\label{dance} \sum_{x < n \leq 2x: n = a\ (q)} \frac{\alpha \ast \beta(n)}{n^{\frac{1}{2}+it}} \ll_{k,A,B,B'} x^{1/2} \log^{-A} x. \end{equation} By decomposing $\alpha$ into $\alpha 1_{[1, \sqrt{x}]}$ and $\alpha 1_{(\sqrt{x},+\infty)}$, and similarly for $\beta$, we may assume from the triangle inequality that at least one of $\alpha,\beta$ is supported on $(\sqrt{x},+\infty)$; by symmetry we may assume that $\alpha$ is so supported. The left-hand side of \eqref{dance} may thus be written as $$ \sum_{a = b c\ (q)} \sum_{m \ll \sqrt{x}: m = c\ (q)} \frac{\beta(m)}{m^{\frac{1}{2}+it}} \sum_{x/m < n \leq 2x/m: n = b\ (q)} \frac{\alpha(n)}{n^{\frac{1}{2}+it}}.$$ Let $A'>0$ be a quantity depending on $A,B,k$ to be chosen later. 
As $\alpha$ has good cancellation, we may bound this (for $B'$ sufficiently large depending on $k,A',B$) using the triangle inequality by $$ \ll_{k,A',B,B'} \sum_{a = bc\ (q)} \sum_{m \ll \sqrt{x}: m = c\ (q)} \frac{|\beta(m)|}{m^{\frac{1}{2}}} (x/m)^{1/2} \log^{-A'} (x/m);$$ as $\beta$ is $k$-divisor-bounded and $q \leq \log^B x$, we may apply \eqref{divisor-crude} and bound this by $$ \ll_{k,A',B,B'} x^{\frac 12} \log^{-A' + B + O_k(1)} x.$$ Choosing $A'$ sufficiently large depending on $A,B,k$, we obtain \eqref{dance}. The final claim of the lemma is proven similarly, noting that the left-hand side of \eqref{dance} vanishes unless $x \gg N^2$, and hence from the support of $\beta$ we may already restrict $\alpha$ to the region $(c\sqrt{x},+\infty)$ for some absolute constant $c>0$ without invoking symmetry. \end{proof} We have three basic examples of functions with good cancellation: \begin{lemma}\label{point} The constant function $1$, the logarithm function $L: n \mapsto \log n$ and the M\"obius function $\mu$ are $1$-divisor-bounded with good cancellation. \end{lemma} From this lemma and Lemma \ref{good-cancel}, we also see that $\Lambda$ and $d_k$ have good cancellation for any fixed $k$. \begin{proof} For the functions $1,L$ this follows from standard van der Corput exponential sum estimates for $|\sum_{n \leq x} e(-\frac{t}{2\pi} \log (qn+a))|^*$ (e.g. \cite[Lemma 8.10]{ik}) and Lemma \ref{sbp-2}, normalizing $a$ to be in the range $0 \leq a < q$. Now we consider the function $\mu$. By using multiplicativity (and increasing $A$ as necessary) we may assume that $a$ is coprime to $q$. By decomposition into Dirichlet characters (and again increasing $A$ as necessary) it suffices to show that \begin{equation}\label{much1} \sum_{n \leq x} \frac{\mu(n) \chi(n)}{n^{\frac{1}{2}+it}} \ll_{k,A,B,B'} x^{1/2} \log^{-A} x \end{equation} for any Dirichlet character $\chi$ of period $q$. This estimate is certainly known to the experts, but as we did not find it in this form in the literature, we prove it here. The Vinogradov-Korobov zero-free region \cite[\S 9.5]{mont} implies that $L(s,\chi)$ has no zeroes in the region $$ \left\{ \sigma+it': 0 < |t'| \ll |t| + x^2; \sigma \geq 1 - \frac{c_{B,B'}}{\log^{2/3} |x| (\log\log |x|)^{1/3}} \right\}$$ for some $c_{B,B'}>0$ depending only on $B,B'$. Applying the estimates in \cite[\S 16]{davenport}, and shrinking $c_{B,B'}$ if necessary, we obtain the crude upper bounds $$ \frac{L'(s,\chi)}{L(s,\chi)} \ll_{B,B'} \log^2 |x|$$ in this region. Applying Perron's formula as in \cite[Lemma 2]{mr-p} (see also \cite[Lemma 1.5]{harman}), one then has the bound $$ \sum_{n \leq y} \Lambda(n) \chi(n) n^{-it} \ll_{A',B,B'} y \log^{-A'} x $$ for any $A'>0$ and any $y$ with $\exp(\log^{3/4} x) \leq y \leq x^2$ (in fact one may replace $3/4$ here by any constant larger than $2/3$). To pass from $\Lambda$ to $\mu$ we use a variant of the arguments used to prove Lemma \ref{good-cancel} (one could also work more directly, using upper bound for $1/L(s, \chi)$ but we could not find the exact upper bound we need from the literature). We begin with the trivial bound \begin{equation}\label{mu-triv} \sum_{n \leq y} \mu(n) \chi(n) n^{-it} \ll y \end{equation} for any $y > 0$. 
Writing $\mu(n) \log(n) \chi(n) n^{-it}$ as the Dirichlet convolution of $\Lambda(n) \chi(n) n^{-it}$ and $-\mu(n) \chi(n) n^{-it}$ and using the Dirichlet hyperbola method, we conclude that $$ \sum_{n \leq y} \mu(n) \log n \chi(n) n^{-it} \ll_{B,B'} y \log^{3/4} x $$ for any $y = x^{1+o(1)}$, which by Lemma \ref{sbp-2} implies that $$ \sum_{n \leq y} \mu(n) \chi(n) n^{-it} \ll_{B,B'} y \log^{-1/4} x $$ for $y = x^{1+o(1)}$. Applying the Dirichlet hyperbola method again (using the above bound to replace the trivial bound \eqref{mu-triv} for $y = x^{1+o(1)}$) we conclude that $$ \sum_{n \leq y} \mu(n) \chi(n) n^{-it} \ll_{B,B'} y \log^{-2/4} x $$ for $y = x^{1+o(1)}$. Iterating this argument $O(A)$ times, we eventually conclude that $$ \sum_{n \leq y} \mu(n) \chi(n) n^{-it} \ll_{A,B,B'} y \log^{-A} x $$ for $y=x^{1+o(1)}$, and the claim \eqref{much1} then follows from Lemma \ref{sbp-2}. \end{proof} \subsection{Mean value theorems} In view of Lemma \ref{tpf-lem}, it becomes natural to seek upper bounds on the quantity $|{\mathcal D}[f](\frac{1}{2}+it)|$ for various functions $f$ supported on $[X/C,CX]$. We will primarily be interested in functions $f$ which are $k$-divisor-bounded for some bounded $k$. In such a case, we see from \eqref{alpha-2} that $$ \|f\|_{\ell^2}^2 \ll_k X \log^{O_k(1)} X.$$ The heuristic of \emph{square root cancellation} then suggests that the quantity $|{\mathcal D}[f](\frac{1}{2}+it)|$ should be of size $O( X^{o(1)})$ for all values of $t$ that are of interest (except possibly for the case $t=O(1)$ in which there might not be sufficient oscillation). Such square root cancellation is not obtainable unconditionally with current techniques; for instance, square root cancellation for $f = 1_{(X,2X]}$ is equivalent to the Lindel\"of hypothesis, while square root cancellation for $f = \Lambda 1_{(X,2X]}$ is equivalent to the Riemann hypothesis. However, we will be able to use a number of results that obtain something resembling square root cancellation on the average. The most basic instance of these results is the classical $L^2$ mean value theorem: \begin{lemma}[Mean value theorem]\label{mvt-lem} Suppose that $f \colon \N \to \C$ is supported on $[X/2,4X]$ for some $X \geq 2$. Then one has $$ \int_{T_0}^{T_0+T} \left|{\mathcal D}[f](\frac{1}{2}+it)\right|^2\ dt \ll \frac{T + X}{X} \|f\|_{\ell^2}^2$$ for all $T > 0$ and $T_0 \in \R$. In particular (from \eqref{alpha-2}), if $f$ is $k$-divisor-bounded, then $$ \int_{T_0}^{T_0+T} \left|{\mathcal D}[f](\frac{1}{2}+it)\right|^2\ dt \ll_k (T + X) \log^{O_k(1)} X.$$ \end{lemma} \begin{proof} See \cite[Theorem 9.1]{ik}. \end{proof} We will need to twist Dirichlet series by Dirichlet characters. With $f$ as above, and any Dirichlet character $\chi \colon \Z \to \C$, we can define $$ {\mathcal D}[f](s,\chi) \coloneqq \sum_n \frac{f(n)}{n^s} \chi(n)$$ and more generally $$ {\mathcal D}[f](s,\chi,q_0) \coloneqq \sum_n \frac{f(q_0 n)}{n^s} \chi(n)$$ for any complex $s$ and any natural number $q_0$. These Dirichlet series naturally appear when estimating Dirichlet polynomials with a Fourier weight $e\left(\frac{an}{q}\right)$, as the following simple lemma shows: \begin{lemma}[Expansion into Dirichlet characters]\label{edc} Let $f \colon \N \to \C$ be a function supported on a finite set, let $q$ be a natural number, and let $a$ be coprime to $q$. Let $e\left(\frac{a \cdot}{q}\right)$ denote the function $n \mapsto e\left(\frac{an}{q}\right)$.
Then we have the pointwise bound \begin{equation}\label{c-split} \left|{\mathcal D}\left[f e\left(\frac{a \cdot}{q}\right)\right](s)\right| \leq \frac{d_2(q)}{\sqrt{q}} \sum_{q = q_0 q_1} \sum_{\chi\ (q_1)} |{\mathcal D}[f](s,\chi,q_0)| \end{equation} for all complex numbers $s$ with $\mathrm{Re}(s) = \frac{1}{2}$, where in this paper the sum $\sum_{\chi\ (q_1)}$ denotes a summation over all characters $\chi$ (including the principal character) of period $q_1$, and $q_0,q_1$ are understood to be natural numbers. \end{lemma} \begin{proof} Let $s$ be such that $\mathrm{Re}(s) = \frac{1}{2}$. By definition we have $$ {\mathcal D}\left[f e\left(\frac{a \cdot}{q}\right)\right](s) = \sum_n \frac{f(n)}{n^s} e\left(\frac{an}{q}\right).$$ We now decompose the $n$ summation in terms of the greatest common divisor $q_0 \coloneqq (n,q)$ of $n$ and $q$, obtaining (after writing $q = q_0 q_1$ and $n = q_0 n_1$) $$ {\mathcal D}\left[f e\left(\frac{a \cdot}{q}\right)\right](s) = \sum_{q = q_0 q_1} \frac{1}{q_0^s} \sum_{n_1: (n_1,q_1)=1} \frac{f(q_0 n_1)}{n_1^s} e\left(\frac{an_1}{q_1}\right)$$ and thus by the triangle inequality $$ \left|{\mathcal D}\left[f e\left(\frac{a \cdot}{q}\right)\right](s)\right| \leq \sum_{q = q_0 q_1} \frac{1}{\sqrt{q_0}} \left|\sum_{n_1: (n_1,q_1)=1} \frac{f(q_0 n_1)}{n_1^s} e\left(\frac{an_1}{q_1}\right)\right|.$$ Next, we perform the usual Dirichlet expansion \begin{equation}\label{usual} e\left(\frac{an_1}{q_1}\right) 1_{(n_1,q_1)=1} = \frac{1}{\varphi(q_1)} \sum_{\chi\ (q_1)} \chi(a) \chi(n_1) \tau(\overline{\chi}) \end{equation} where $\tau(\overline{\chi})$ is the Gauss sum \begin{equation}\label{taug} \tau(\overline{\chi}) \coloneqq \sum_{l=1}^{q_1} e\left(\frac{l}{q_1}\right) \overline{\chi(l)}. \end{equation} As is well known, we have $$ |\tau(\overline{\chi})| \leq \sqrt{q_1}$$ (as can be seen for instance from making the substitution $l \mapsto al$ in \eqref{taug} for $(a,q)=1$ and then applying the Parseval identity in $a$). Using the crude bound $$ \frac{1}{\sqrt{q_0}} \frac{1}{\varphi(q_1)} \sqrt{q_1} \ll \frac{1}{\sqrt{q_0}} \frac{d_2(q_1)}{q_1} \sqrt{q_1} \leq \frac{d_2(q)}{\sqrt{q}}$$ and the triangle inequality, we obtain \eqref{c-split}. \end{proof} It thus becomes of interest to have upper bounds, on average at least, on the quantity $|{\mathcal D}[f](\frac{1}{2}+it,\chi,q_0)|$. We first recall a variant of Lemma \ref{mvt-lem}, which can save a factor of $q_1$ or so compared to that lemma when summing over characters $\chi$: \begin{lemma}[Mean value theorem with characters]\label{mvt-lem-ch} Suppose that $f \colon \N \to \C$ is supported on $[X/2,4X]$ for some $X \geq 2$. Then one has $$ \sum_{\chi\ (q_1)} \int_{T_0}^{T_0+T} \left|{\mathcal D}[f]\left(\frac{1}{2}+it, \chi \right)\right|^2\ dt \ll \frac{q_1 T + X}{X} \|f\|_{\ell^2}^2 \log^3(q_1 T X)$$ for all $T \geq 2$, $T_0 \in \R$, and natural numbers $q_1$, where $\chi$ is summed over all Dirichlet characters of period $q_1$. In particular, if $f$ is $k$-divisor-bounded, then (from \eqref{alpha-2}) we have $$ \sum_{\chi\ (q_1)} \int_{T_0}^{T_0+T} \left|{\mathcal D}[f]\left(\frac{1}{2}+it, \chi \right)\right|^2\ dt \ll_k (q_1 T + X) \log^{O_k(1)}(q_1 T X).$$ \end{lemma} \begin{proof} This is a special case of \cite[Theorem 9.12]{ik}. \end{proof} In the case when $f$ is an indicator function $f = 1_{[1,X]}$ we have a fourth moment estimate: \begin{lemma}[Fourth moment estimate]\label{fourth} Let $X \geq 2$, $q_1 \geq 1$, and $T \geq 1$.
Let ${\mathcal S}$ be a finite set of pairs $(\chi,t)$ with $\chi$ a character of period $q_1$, and $t \in [-T,T]$. Suppose that ${\mathcal S}$ is $1$-separated in the sense that for any two distinct pairs $(\chi,t), (\chi',t') \in {\mathcal S}$, one either has $\chi \neq \chi'$ or $|t-t'| \geq 1$. Then one has \begin{align*} \sum_{(\chi,t) \in {\mathcal S}} \left|{\mathcal D}[1_{[1,X]}]\left(\frac{1}{2}+it,\chi\right)\right|^4 &\ll q_1 T \log^{O(1)} X + |{\mathcal S}| \log^4 X \left( \frac{q_1^2}{T^2} + \frac{X^2}{T^4} \right)\\ &\quad + X^2 \sum_{(\chi,t) \in {\mathcal S}} \delta_\chi (1+|t|)^{-4} \end{align*} where $\delta_\chi$ is equal to $1$ when $\chi$ is principal, and equal to zero otherwise. \end{lemma} \begin{proof} See \cite[Lemma 9]{bhp}. We remark that this estimate is proven using fourth moment estimates \cite{ramachandra} for Dirichlet $L$-functions. \end{proof} It will be more convenient to use a (slightly weaker) integral form of this estimate: \begin{corollary}[Fourth moment estimate, integral form]\label{fourth-integral} Let $X \geq 2$, $q_1 \geq 1$, and $T \geq 1$. Then \begin{equation}\label{iant} \sum_{\chi\ (q_1)} \int_{T/2 \leq |t| \leq T} \left|{\mathcal D}[1_{[1,X]}](\frac{1}{2}+it,\chi)\right|^4\ dt \ll q_1 T \left(1 + \frac{q_1^2}{T^2} + \frac{X^2}{T^4} \right) \log^{O(1)} X. \end{equation} A similar bound holds with $1_{[1,X]}$ replaced by $L 1_{[1,X]}$. \end{corollary} \begin{proof} For each $\chi$, we cover the region $T/2 \leq |t| \leq T$ by unit intervals $I$, and for each such $I$ we find a point $t \in I$ that maximizes $|{\mathcal D}[1_{[1,X]}](\frac{1}{2}+it,\chi)|$, then add $(\chi,t)$ to ${\mathcal S}$. Then $|{\mathcal S}| \ll q_1 T$; it is not necessarily $1$-separated, but one can easily separate it into $O(1)$ $1$-separated sets. Applying Lemma \ref{fourth} (bounding $\delta_\chi (1+|t|)^{-4}$ by $O(1/T^4)$), we obtain \eqref{iant}. Finally, to handle $L 1_{[1,X]}$, one can use the integration by parts identity \begin{equation}\label{lix} L 1_{[1,X]} (y) = (\log X) 1_{[1,X]}(y) - \int_1^X 1_{[1,X']}(y) \frac{dX'}{X'} \end{equation} and the triangle inequality (cf. Lemmas \ref{sbp}, \ref{sbp-2}). \end{proof} We will also need the following variant of the fourth moment estimate due to Jutila \cite{jutila}. \begin{proposition}[Jutila]\label{jutila-prop} Let $q, T \geq 1$ and $\eps > 0$. Let $T^{1/2 + \varepsilon} \ll T_0 \ll T^{2/3}$ and $T < t_1 < \ldots < t_r < 2T$ with $t_{i + 1} - t_{i} > T_0$. Then we have $$ \sum_{\chi \ (q)} \sum_{i = 1}^{r} \int_{t_i}^{t_i + T_0} |L(\tfrac 12 + it, \chi)|^4 dt \ll_\eps q (r T_0 + (r T)^{2/3}) (q T)^{\varepsilon}. $$ \end{proposition} \begin{proof} See \cite[Theorem 3]{jutila}. This estimate is a variant of Iwaniec's result \cite{iwaniec} on the fourth moment of $\zeta$ in short intervals; it is however proven using a completely different and more elementary method. \end{proof} Using a variant of Corollary \ref{trunc-dir} we may truncate the Dirichlet $L$-function to conclude \begin{corollary}\label{jutila-cor} Let the hypotheses be as in Proposition \ref{jutila-prop}. Then for any $1 \leq X \ll T^2$ and any Dirichlet character $\chi$ of period $q$, one has $$ \sum_{i = 1}^{r} \int_{t_i}^{t_i + T_0} |{\mathcal D}[1_{[1,X]}](\tfrac 12 + it, \chi)|^4 dt \ll_\eps q^{O(1)} (r T_0 + (r T)^{2/3}) T^{\varepsilon}. $$ Similarly with $1_{[1,X]}$ replaced by $L 1_{[1,X]}$.
\end{corollary} One can be more efficient here with respect to the dependence of the right-hand side on $q$, but we will not need to do so in our application, as we will only use Corollary \ref{jutila-cor} for quite small values of $q$. \begin{proof} In view of \eqref{lix} and the triangle inequality, followed by dyadic decomposition, it suffices to show that $$ \sum_{i = 1}^{r} \int_{t_i}^{t_i + T_0} |{\mathcal D}[1_{[X,2X]}](\tfrac 12 + it, \chi)|^4 dt \ll_\eps q^{O(1)} (r T_0 + (r T)^{2/3}) T^{\varepsilon}. $$ From the fundamental theorem of calculus we have $$ {\mathcal D}[1_{[X,2X]}](\tfrac 12 + it, \chi) = \frac{1}{(2X)^{1/2}} {\mathcal D}[1_{[X,2X]}](it, \chi) + \int_X^{2X} {\mathcal D}[1_{[X,X']}](it, \chi) \frac{dX'}{2(X')^{3/2}} $$ so by the triangle inequality again, it suffices to show that \begin{equation}\label{spak} \sum_{i = 1}^{r} \int_{t_i}^{t_i + T_0} |{\mathcal D}[1_{[1,X]}](it, \chi)|^4 dt \ll_\eps q^{O(1)} X^2 (r T_0 + (r T)^{2/3}) T^{\varepsilon}. \end{equation} Let $t$ lie in the range $[T, 3T]$. From Lemma \ref{tpf-lem}(i) with $f(n), \sigma, T$ replaced by $\frac{\chi(n)}{n^{it}}$, $0$, $T^2$ respectively, we see that $$ {\mathcal D}[1_{[1,X]}](it, \chi) \ll \left|\int_{-T^2}^{T^2} L\left( 1 + \frac{1}{\log X} + i(t+t'), \chi\right) \frac{X^{1+\frac{1}{\log X} + it'}}{1+\frac{1}{\log X}+it'}\ dt'\right| + \frac{X \log^{O(1)} T}{T^2}$$ where $L(s,\chi)$ is the Dirichlet $L$-function. Note that $X / T^2 \ll 1$ by assumption. Shifting the contour and using the crude convexity bound $L(\sigma+it,\chi) \ll q^{O(1)} (1+t)^{1/2}$ for $1/2 \leq \sigma \leq 2$, all $\varepsilon > 0$ and $|\sigma+it-1| \gg 1$, and also noting that the residue of $L(s,\chi)$ at $1$ (if it exists) is $O(1)$, we obtain the estimate $$ {\mathcal D}[1_{[1,X]}](it, \chi) \ll q^{O(1)} \left( \left|\int_{-T^{2}}^{T^{2}} L\left( \frac{1}{2} + i(t+t'), \chi\right) \frac{X^{\frac{1}{2}+it'}}{\frac{1}{2}+it'}\ dt'\right| + \frac{X}{T} + \log^{O(1)} T \right) $$ (say). Since $X \ll T^2$, we can bound the error term inside brackets by $X^{1/2} \cdot \log^{O(1)} T$. We have the crude $L^2$ mean value estimate $$ \int_{-T'}^{T'} \left|L\left(\frac{1}{2}+i( t+ t'), \chi\right)\right|^2\ dt' \ll q^{O(1)} T' (\log (T' + 2))^{O(1)}$$ for any $T' > T/10$ (which can be established for instance from Lemma \ref{mvt-lem} and the approximate functional equation). From this, Cauchy-Schwarz, and dyadic decomposition, we see that $$ \left|\int_{-T^{2}}^{T^{2}} L\left( \frac{1}{2} + i(t+t'), \chi\right) \frac{X^{\frac{1}{2}+it'}}{\frac{1}{2}+it'}\ dt'\right| \ll X^{1/2} \Big ( \int_{-T/2}^{T/2} \frac{ | L ( \frac{1}{2} + i(t + t'), \chi ) |}{1 + |t'|} d t' + (\log (T + 2))^{O(1)} \Big ) $$ We conclude that, $$ {\mathcal D}[1_{[1,X]}](it, \chi) \ll q^{O(1)} X^{1/2} (\log(T+2))^{O(1)} \left( 1 + \int_{-T/2}^{T/2} \frac{\left|L( \frac{1}{2} + i(t+t'), \chi)\right|}{1+|t'|}\ dt' \right). $$ By H\"older's inequality, we then have $$ |{\mathcal D}[1_{[1,X]}](it, \chi)|^4 \ll q^{O(1)} X^2 (\log(T+2))^{O(1)} \left( 1 + \int_{-T/2}^{T/2} \frac{\left|L\left( \frac{1}{2} + i(t+t'), \chi\right)\right|^4}{1+|t'|}\ dt' \right). $$ From shifting the $t_j$ by $t'$, we see from Proposition \ref{jutila-prop} that $$ \sum_{i = 1}^{r} \int_{t_i}^{t_i + T_0} |L(\tfrac 12 + i(t+t'), \chi)|^4 dt \ll_\eps q (r T_0 + (r T)^{2/3}) (q T)^{\varepsilon}. $$ whenever $-T/2 \leq t' \leq T/2$. The claim \eqref{spak} now follows from Fubini's theorem. 
\end{proof} \subsection{Combinatorial decompositions} We will treat the functions $\Lambda 1_{(X,2X]}$ and $d_k 1_{(X,2X]}$ in a unified fashion, decomposing both of these functions as certain (truncated) Dirichlet convolutions of various types, which we will call ``Type $d_j$ sums'' for some small $j=1,2,\dots$ and ``Type II sums'' respectively. More precisely, we have \begin{lemma}[Combinatorial decomposition]\label{comb-decomp} Let $k, m \geq 1$ and $0 < \eps < \frac{1}{m}$ be fixed. Let $X \geq 2$, and let $H_0$ be such that $X^{\frac{1}{m} + \eps} \leq H_0 \leq X$. Let $f \colon \N \to \C$ be either the function $f \coloneqq \Lambda 1_{(X,2X]}$ or $f \coloneqq d_k 1_{(X,2X]}$. Then one can decompose $f$ as the sum of $O_{k,m,\eps}( \log^{O_{k,m,\eps}(1)} X )$ components $\tilde f$, each of which is of one of the following types: \begin{itemize} \item[(Type $d_j$)] A function of the form \begin{equation}\label{fst} \tilde f = (\alpha \ast \beta_1 \ast \dotsb \ast \beta_j) 1_{(X,2X]} \end{equation} for some arithmetic functions $\alpha,\beta_1,\dotsc,\beta_j \colon \N \to \C$, where $1 \leq j < m$, $\alpha$ is $O_{k,m,\eps}(1)$-divisor-bounded and supported on $[N,2N]$, and each $\beta_i$, $i=1,\dotsc,j$ is either equal to $1_{(M_i,2M_i]}$ or $L 1_{(M_i,2M_i]}$ for some $N, M_1,\dotsc,M_j$ obeying the bounds \begin{equation}\label{n-small} 1 \ll N \ll_{k,m,\eps} X^\eps, \end{equation} \begin{equation}\label{nmx} N M_1 \dots M_j \asymp_{k,m,\eps} X \end{equation} and $$ H_0 \ll M_1 \ll \dotsb \ll M_j \ll X.$$ \item[(Type II sum)] A function of the form $$ \tilde f = (\alpha \ast \beta) 1_{(X,2X]}$$ for some $O_{k,m,\eps}(1)$-divisor-bounded arithmetic functions $\alpha,\beta \colon \N \to \C$ with good cancellation supported on $[N,2N]$ and $[M,2M]$ respectively, for some $N,M$ obeying the bounds \begin{equation}\label{n-medium} X^\eps \ll_{k,m,\eps} N \ll_{k,m,\eps} H_0 \end{equation} and \begin{equation}\label{nmx-2} NM \asymp_{k,m,\eps} X. \end{equation} \end{itemize} \end{lemma} As the name suggests, Type $d_j$ sums behave similarly to the $j^{\operatorname{th}}$ divisor function $d_j$ (but with all factors in the Dirichlet convolution constrained to be supported on moderately large natural numbers). In our applications we will take $m$ to be at most $5$, so that the only sums that appear are Type $d_1$, Type $d_2$, Type $d_3$, Type $d_4$, and Type II sums, and the dependence in the above asymptotic notation on $m$ can be ignored. The contributions of Type $d_1$, Type $d_2$, and Type II sums were essentially treated in the previous literature; our main innovations lie in our estimation of the contributions of the Type $d_3$ and Type $d_4$ sums. \begin{proof} We first claim a preliminary decomposition: $f$ can be expressed as a linear combination (with coefficients of size $O_{k,m,\eps}(1)$) of $O_{k,m,\eps}(\log^{O_{k,m,\eps}(1)} X)$ terms $\tilde f$ that are each of the form \begin{equation}\label{pre} \tilde f = (\gamma_1 \ast \dotsb \ast \gamma_r) 1_{(X,2X]} \end{equation} for some $r = O_{k,m,\eps}(1)$, where each $\gamma_i \colon \N \to \C$ is supported on $[N_i,2N_i]$ for some $N_1,\dotsc,N_r \gg 1$ and is $1$-divisor-bounded with good cancellation. Furthermore, for each $i$, one either has $\gamma_i = 1_{(N_i,2N_i]}$, $\gamma_i = L 1_{(N_i,2N_i]}$, or $N_i \ll X^\eps$. We first perform this decomposition in the case $f = d_k 1_{(X,2X]}$. On the interval $(X,2X]$, we clearly have $$ d_k = 1_{[1,2X]} \ast \dotsb \ast 1_{[1,2X]}$$ where the term $1_{[1,2X]}$ appears $k$ times.
We can dyadically decompose $1_{[1,2X]}$ as the sum of $O( \log X )$ terms, each of which is of the form $1_{(N,2N]}$ for some $1 \ll N \ll X$. This decomposes $d_k 1_{(X,2X]}$ as the sum of $O_k( \log^k X )$ terms of the form $$ (1_{(N_1,2N_1]} \ast \dotsb \ast 1_{(N_k,2N_k]}) 1_{(X,2X]}$$ and this is clearly of the required form \eqref{pre} thanks to Lemma \ref{point}. Now suppose that $f = \Lambda 1_{(X,2X]}$. Here we use the well-known Heath-Brown identity \cite[Lemma 1]{hb-ident}. Let $K$ be the first natural number such that $K \geq \frac{1}{\eps} \geq m$, thus $K = O_{m,\eps}(1)$. The Heath-Brown identity then gives \begin{equation}\label{lao} \Lambda = \sum_{j=1}^K (-1)^{j+1} \binom{K}{j} L \ast 1^{\ast (j-1)} * (\mu 1_{[1,(2X)^{1/K}]})^{\ast j} \end{equation} on the interval $(X,2X]$, where $f^{\ast j}$ denotes the Dirichlet convolution of $j$ copies of $f$. Clearly we may replace $L$ and $1$ by $L 1_{[1,2X]}$ and $1_{[1,2X]}$ respectively without affecting this identity on $(X,2X]$. As before, we can decompose $1_{[1,2X]}$ into $O(\log X)$ terms of the form $1_{(N,2N]}$ for some $1 \ll N \ll X$; one similarly decomposes $L 1_{[1,2X]}$ into $O(\log X)$ terms of the form $L 1_{(N,2N]}$ for $1 \ll N \ll X$, and $\mu 1_{[1,(2X)^{1/K}]}$ into $O(\log X)$ terms of the form $\mu 1_{(N,2N]}$ for $1 \ll N \leq (2X)^{1/K} \ll X^\eps$. Inserting all these decompositions into \eqref{lao} and using Lemma \ref{point}, we obtain the desired expansion of $f$ into $O_{k,m,\eps}(\log^{O_{k,m,\eps}(1)} X)$ terms of the form \eqref{pre}. In view of the above decomposition, it suffices to show that each individual term of the form \eqref{pre} can be expressed as the sum of $O_{k,m,\eps}(1)$ terms, each of which is either a Type $d_j$ sum for some $1 \leq j < m$ or a Type II sum (note that the coefficients of the linear combination can be absorbed into the $\alpha$ factor for both the Type $d_j$ and the Type II sums). First note that we may assume that \begin{equation}\label{nrx} N_1 \dotsm N_r \asymp_{k,m,\eps} X \end{equation} otherwise the expression in \eqref{pre} vanishes. By symmetry we may also assume that $N_1 \leq \dotsb \leq N_r$. We may also assume that $X$ is sufficiently large depending on $k,m,\eps$ as the claim is trivial otherwise (every arithmetic function of interest would be a Type II sum, for instance, setting $\alpha$ to be the Kronecker delta function at one). Let $0 \leq s \leq r$ denote the largest integer for which \begin{equation}\label{nas} N_1 \dotsm N_s \leq X^\eps. \end{equation} From \eqref{nrx} we have $s < r$ (if $X$ is large enough). We divide into two cases, depending on whether $N_1 \dotsm N_{s+1} \leq 2H_0$ or not. Suppose first that $N_1 \dotsm N_{s+1} \leq 2H_0$. Then by construction we have $$ X^\eps \leq N_1 \dotsm N_{s+1} \leq 2H_0.$$ One can then almost express \eqref{pre} as a Type II sum by setting $$\alpha \coloneqq \gamma_1 \ast \dotsb \ast \gamma_{s+1} $$ and $$\beta \coloneqq \gamma_{s+2} \ast \dotsb \ast \gamma_r$$ and using Lemma \ref{good-cancel}. The only difficulty is that $\alpha$ is not quite supported on an interval of the form $[N,2N]$, instead being supported on $[N_1 \dotsm N_{s+1}, 2^{s+1} N_1 \dotsm N_{s+1}]$, and similarly for $\beta$; but this is easily rectified by decomposing both $\alpha$ and $\beta$ dyadically into $O_{k,m,\eps}(1)$ pieces, each of which is supported in an interval of the form $[N,2N]$. Finally we consider the case when $N_1 \dotsm N_{s+1} > 2H_0$.
Since $H_0 \geq X^{\frac{1}{m}+\eps}$, we conclude from \eqref{nas} that $$ N_r \geq \dotsb \geq N_{s+2} \geq N_{s+1} > 2 X^{\frac{1}{m}} \geq (2X)^{\frac{1}{m}}.$$ In particular, if $r-s \geq m$, then $N_{s+1} \dotsm N_r > 2X$ and \eqref{pre} vanishes. Thus we may assume that $s = r-j$ for some $1 \leq j < m$. Also, as $N_r,\dotsc,N_{s+1}$ are significantly larger than $X^\eps$, the $\gamma_j$ for $j=s+1,\dotsc,r$ must be of the form $1_{(N_j,2N_j]}$ or $L 1_{(N_j,2N_j]}$. One can then almost express \eqref{pre} as a Type $d_j$ sum by setting $$ \alpha \coloneqq \gamma_1 \ast \dotsb \ast \gamma_s$$ and $$ \beta_i \coloneqq \gamma_{s+i}$$ for $i=1,\dotsc,j$ and using Lemma \ref{good-cancel}. The support of $\alpha$ is again slightly too large, but this can be rectified as before by a dyadic decomposition. \end{proof} For technical reasons (arising from the terms in Lemma \ref{edc} when $q_0>1$), we will need a more complicated variant of this proposition, in which one decomposes the function $n \mapsto f(q_0 n)$ rather than $f$ itself. This introduces some additional ``small'' sums which are not Dirichlet convolutions, but which are quite small in $\ell^2$ norm and so can be easily managed using crude estimates such as Lemma \ref{mvt-lem}. \begin{lemma}[Combinatorial decomposition, II]\label{comb-decomp-2} Let $k, m, B \geq 1$ and $0 < \eps < \frac{1}{m}$ be fixed. Let $X \geq 2$, and let $H_0$ be such that $X^{\frac{1}{m} + \eps} \leq H_0 \leq X$. Let $q_0$ be a natural number with $q_0 \leq \log^B X$. Let $f \colon \N \to \C$ be either the function $f \coloneqq \Lambda 1_{(X,2X]}$ or $f \coloneqq d_k 1_{(X,2X]}$. Then one can decompose the function $f(q_0 \cdot): n \mapsto f(q_0 n)$ as a linear combination (with coefficients of size $O_k (d_2(q_0)^{O_k(1)})$) of $O_{k,m,\eps}( \log^{O_{k,m,\eps}(1)} X )$ components $\tilde f$, each of which is of one of the following types: \begin{itemize} \item[(Type $d_j$ sum)] A function of the form \begin{equation}\label{fst2} \tilde f = (\alpha \ast \beta_1 \ast \dotsb \ast \beta_j) 1_{(X/q_0,2X/q_0]} \end{equation} for some arithmetic functions $\alpha,\beta_1,\dotsc,\beta_j \colon \N \to \C$, where $1 \leq j < m$, $\alpha$ is $O_{k,m,\eps}(1)$-divisor-bounded and supported on $[N,2N]$, and each $\beta_i$, $i=1,\dotsc,j$ is either of the form $\beta_i = 1_{(M_i,2M_i]}$ or $\beta_i = L 1_{(M_i,2M_i]}$ for some $N, M_1,\dotsc,M_j$ obeying the bounds \eqref{n-small}, \begin{equation}\label{nmx-q} N M_1 \dotsm M_j \asymp_{k,m,\eps} X/q_0 \end{equation} and $$ H_0 \ll M_1 \ll \dotsb \ll M_j \ll X/q_0.$$ \item[(Type II sum)] A function of the form $$ \tilde f = (\alpha \ast \beta) 1_{(X/q_0,2X/q_0]}$$ for some $O_{k,m,\eps}(1)$-divisor-bounded arithmetic functions $\alpha,\beta \colon \N \to \C$ with good cancellation supported on $[N,2N]$ and $[M,2M]$ respectively, for some $N,M$ obeying the bounds \eqref{n-medium} and \begin{equation}\label{nmx-2q} NM \asymp_{k,m,\eps} X / q_0. \end{equation} The good cancellation bounds \eqref{alb} are permitted to depend on the parameter $B$ in this lemma (in particular, $B'$ can be assumed to be large depending on this parameter). \item[(Small sum)] A function $\tilde f$ supported on $(X/q_0,2X/q_0]$ obeying the bound $$ \|\tilde f \|_{\ell^2}^2 \ll_{k,m,\eps,B} X^{1-\eps/8}.$$ \end{itemize} \end{lemma} \begin{proof} If $q_0=1$ then the claim follows from Lemma \ref{comb-decomp}, so suppose $q_0 > 1$. We first dispose of the case when $f = \Lambda 1_{(X,2X]}$. 
From the support of the von Mangoldt function we see that the function $f(q_0 \cdot): n \mapsto f(q_0 n)$ vanishes unless $q_0$ is a prime power $p_0^{k_0}$ and in this case is supported on powers of $p_0$. Thus \begin{align*} \|f(q_0 \cdot)\|_{\ell^2}^2 &\leq \sum_{j \geq 0} 1_{(X,2X]}(p_0^{k_0 + j}) \Lambda(p_0^{k_0 + j})^2 \ll (\log X)^3. \end{align*} Thus $f(q_0 \cdot)$ is already a small sum, and we are done in this case. It remains to consider the case when $f = d_k 1_{(X,2X]}$. Let $d_k(q_0 \cdot)$ denote the function $n \mapsto d_k(q_0 n)$. The function $d_k(q_0 \cdot)/d_k(q_0)$ is multiplicative, hence by M\"obius inversion we may factor $$ d_k(q_0 \cdot) = d_k(q_0) d_k \ast g$$ where $g$ is the multiplicative function $$ g \coloneqq \frac{1}{d_k(q_0)} d_k(q_0 \cdot) \ast \mu^{\ast k}$$ and $\mu^{\ast k}$ is the Dirichlet convolution of $k$ copies of $\mu$. From multiplicativity we see that $g$ is $O_k(1)$-divisor-bounded, non-negative and supported on the multiplicative semigroup ${\mathcal G}$ generated by the primes dividing $q_0$. We split $g = g_1 + g_2$, where $g_1(n) \coloneqq g(n) 1_{n \leq X^{\eps/2}}$ and $g_2(n) \coloneqq g(n) 1_{n > X^{\eps/2}}$, thus $$ f(q_0 \cdot) = d_k(q_0) (d_k \ast g_1) 1_{(X/q_0,2X/q_0]} + d_k(q_0) (d_k \ast g_2) 1_{(X/q_0,2X/q_0]}.$$ The term $(d_k \ast g_1) 1_{(X/q_0,2X/q_0]}$ can be decomposed into terms of the form \eqref{pre} (but with $X$ replaced by $X/q_0$), exactly as in the proof of Lemma \ref{comb-decomp} (with the additional factor $g_1$ simply being an additional term $\gamma_i$), so by repeating the previous arguments (with $X$ replaced by $X/q_0$ as appropriate), we obtain the required decomposition of this term as a linear combination (with coefficients of size $O( d_k(q_0) ) = O(d_2(q_0)^{O_k(1)})$) of Type $d_j$ and Type II sums, using the second part of Lemma \ref{good-cancel} to ensure that convolution by $g_1$ does not destroy the good cancellation property. It remains to handle the $(d_k \ast g_2) 1_{(X/q_0,2X/q_0]}$ term, which we will show to be small. Indeed, since $g_2$ and $d_k$ are non-negative and $d_k(q_0 m) \geq d_k(m)$ for every $m$, we have the pointwise bound $(d_k \ast g_2)(n) \leq (d_k \ast g_2)(q_0 n)$, and hence $$ \|(d_k \ast g_2) 1_{(X/q_0,2X/q_0]}\|_{\ell^2}^2 \leq \sum_{n: X < q_0 n \leq 2X} |(d_k \ast g_2)(q_0 n)|^2.$$ We can expand out the square and bound this by \begin{equation}\label{curd} \ll \sum_{m_1,m_2} g_2(m_1) g_2(m_2) \sum_{X < n \leq 2X: m_1,m_2,q_0|n} d_k\left(\frac{n}{m_1}\right) d_k\left(\frac{n}{m_2}\right), \end{equation} where $m_1,m_2$ range over the natural numbers. We crudely drop the constraint $q_0|n$. From \eqref{divisor-crude} (factoring $n = [m_1,m_2] n'$ and noting that $d_k\left(\frac{n}{m_1}\right) \leq d_2(m_2)^{O_k(1)} d_2(n')^{O_k(1)}$, and similarly for $d_k\left(\frac{n}{m_2}\right)$) we have $$ \sum_{X < n \leq 2X: m_1,m_2|n} d_k\left(\frac{n}{m_1}\right) d_k\left(\frac{n}{m_2}\right) \ll_k \frac{X d_2(m_1)^{O_k(1)} d_2(m_2)^{O_k(1)} \log^{O_k(1)} X}{[m_1,m_2]} $$ and also crudely bounding $\frac{1}{[m_1,m_2]} \leq \frac{1}{m_1^{1/2} m_2^{1/2}}$, we can bound \eqref{curd} by $$ \ll_k \left( \sum_{m} \frac{g_2(m) d_2(m)^{O_k(1)}}{m^{1/2}}\right)^2 X \log^{O_k(1)} X $$ which on bounding $g_2(m) \leq X^{-\eps/8} g(m) m^{1/4}$ becomes $$ \ll_{k,\eps} X^{-\eps/8} X \left(\sum_{m} \frac{g(m) d_2(m)^{O_k(1)}}{m^{1/4}}\right)^2 \log^{O_k(1)} X. $$ From Euler products and the support and bounds on $g$ we have $$ \sum_{m} \frac{g(m) d_2(m)^{O_k(1)}}{m^{1/4}} \ll_k d_2(q_0)^{O_k(1)}$$ and so by using \eqref{alpha-infty}, we conclude that $(d_k \ast g_2) 1_{(X/q_0,2X/q_0]}$ is small as required.
\end{proof} \section{Applying the circle method} Let $f,g \colon \Z \to \C$ be functions supported on a finite set, and let $h$ be an integer. Following the Hardy-Littlewood circle method, we can express the correlation \eqref{co} as an integral $$ \sum_n f(n) \overline{g}(n+h) = \int_\T S_f(\alpha) \overline{S_g(\alpha)} e(\alpha h)\ d\alpha $$ where $S_f, S_g \colon \T \to \C$ are the exponential sums \begin{align*} S_f(\alpha) &\coloneqq \sum_n f(n) e(\alpha n) \\ S_g(\alpha) &\coloneqq \sum_n g(n) e(\alpha n). \end{align*} If we then designate some (measurable) portion ${\mathfrak M}$ of the unit circle $\T$ to be the ``major arcs'', we thus have \begin{equation}\label{sam} \sum_n f(n) \overline{g}(n+h) - \operatorname{MT}_{{\mathfrak M},h} = \int_{\mathfrak m} S_f(\alpha) \overline{S_g(\alpha)} e(\alpha h)\ d\alpha \end{equation} where $ \operatorname{MT}_{{\mathfrak M},h} $ is the \emph{main term} \begin{equation}\label{mt-def} \operatorname{MT}_{{\mathfrak M},h} \coloneqq \int_{\mathfrak M} S_f(\alpha) \overline{S_g(\alpha)} e(\alpha h)\ d\alpha \end{equation} and ${\mathfrak m} \coloneqq \T \backslash {\mathfrak M}$ denotes the complementary \emph{minor arcs}. We will choose the major arcs so that the main term can be computed for any given $h$ by classical techniques (basically, the Siegel-Walfisz theorem, together with the analogous asymptotics for the divisor functions $d_k$). To control the minor arcs, we take advantage of the ability to average in $h$ to control this contribution by certain short $L^2$ integrals of the exponential sum $S_f(\alpha)$ (the factor $S_g(\alpha)$ will be treated by a trivial bound). \begin{proposition}[Circle method]\label{circle-method} Let $H \geq 1$, and let $f, g, {\mathfrak M}, {\mathfrak m}, S_f, S_g, \operatorname{MT}_{{\mathfrak M},h}$ be as above. Then for any integer $h_0$, we have \begin{equation}\label{bound} \begin{split} &\sum_{|h-h_0| \leq H} \left|\sum_n f(n) \overline{g(n+h)} - \operatorname{MT}_{{\mathfrak M},h}\right|^2 \\ &\quad\ll H \int_{\mathfrak m} |S_f(\alpha)| |S_g(\alpha)| \int_{\mathfrak m \cap [\alpha -1/2H, \alpha+1/2H]} |S_f(\beta)| |S_g(\beta)|\ d\beta d\alpha \end{split}\end{equation} \end{proposition} \begin{proof} From \eqref{sam}, the left-hand side of \eqref{bound} may be written as $$ \sum_{|h-h_0| \leq H} \left|\int_{\mathfrak m} S_f(\alpha) \overline{S_g(\alpha)} e(\alpha h)\ d\alpha\right|^2.$$ Next, we introduce an even non-negative Schwartz function $\Phi \colon \R \to \R^+$ with $\Phi(x) \geq 1$ for all $x \in [-1,1]$, such that the Fourier transform $\hat\Phi(\xi) \coloneqq \int_\R \Phi(x) e(-x \xi)\ dx$ is supported in $[-1/2,1/2]$. (Such a function may be constructed by starting with the inverse Fourier transform of an even test function supported on a small neighbourhood of the origin, and then squaring.) Then we may bound the preceding expression by $$ \sum_h \left|\int_{\mathfrak m} S_f(\alpha) \overline{S_g(\alpha)} e(\alpha h)\ d\alpha\right|^2 \Phi\left(\frac{h-h_0}{H}\right).$$ Expanding out the square, rearranging, and using the triangle inequality, we may bound this expression by $$ \int_{\mathfrak m} |S_f(\alpha)| |S_g(\alpha)| \int_{\mathfrak m} |S_f(\beta)| |S_g(\beta)| \left|\sum_h e((\alpha-\beta) h) \Phi\left(\frac{h-h_0}{H}\right)\right|\ d\beta d\alpha. 
$$ From the Poisson summation formula we have $$ \sum_h e((\alpha-\beta) h) \Phi\left(\frac{h}{H}\right) = H \sum_k \hat \Phi( H (\tilde \alpha - \tilde \beta + k) )$$ where $\tilde \alpha, \tilde \beta$ are any lifts of $\alpha,\beta$ from $\T$ to $\R$. In particular, this expression is of size $O(H)$, and vanishes unless $\beta$ lies in the interval $[\alpha-1/2H, \alpha+1/2H]$. Shifting $h$ by $h_0$, the claim follows. \end{proof} From the Plancherel identities \begin{align} \int_\T |S_f(\alpha)|^2 d\alpha &= \|f\|_{\ell^2}^2 \label{planch-1}\\ \int_\T |S_g(\alpha)|^2 d\alpha &= \|g\|_{\ell^2}^2 \label{planch-2} \end{align} and Cauchy-Schwarz, we have $$ \int_{\mathfrak m} |S_f(\alpha)| |S_g(\alpha)|\ d\alpha \leq \|f\|_{\ell^2} \|g\|_{\ell^2}$$ so we can bound the right-hand side of \eqref{bound} by $$ H \|f\|_{\ell^2} \|g\|_{\ell^2} \sup_{\alpha \in {\mathfrak m}} \int_{\mathfrak m \cap [\alpha -1/2H, \alpha+1/2H]} |S_f(\beta)| |S_g(\beta)|\ d\beta.$$ By \eqref{planch-2} and Cauchy-Schwarz, we may bound this expression in turn by \begin{equation}\label{bound-2} H \|f\|_{\ell^2} \|g\|_{\ell^2}^2 \sup_{\alpha \in {\mathfrak m}} \left(\int_{\mathfrak m \cap [\alpha -1/2H, \alpha+1/2H]} |S_f(\beta)|^2\ d\beta\right)^{1/2}. \end{equation} Note that from \eqref{planch-1} we have the trivial upper bound \begin{equation}\label{triv-planch} \int_{{\mathfrak m} \cap [\alpha-1/2H, \alpha+1/2H]} |S_f(\beta)|^2\ d\beta \leq \|f\|_{\ell^2}^2 \end{equation} and so the right-hand side of \eqref{bound} may be crudely upper bounded by $H \|f\|_{\ell^2}^2 \|g\|_{\ell^2}^2$, which is essentially the trivial bound on \eqref{bound} that one obtains from the Cauchy-Schwarz inequality. Thus, any significant improvement (e.g. by a large power of $\log X$) over \eqref{triv-planch} for minor arc $\alpha$ will lead to an approximation of the form $$ \sum_n f(n) \overline{g(n+h)} \approx \operatorname{MT}_{{\mathfrak M},h}$$ for most $h \in [h_0-H,h_0+H]$. We formalize this argument as follows: \begin{corollary}\label{chebyshev} Let $H \geq 1$ and $\eta, F, G, X > 0$. Let $f,g \colon \Z \to \C$ be functions supported on a finite set, let ${\mathfrak M}$ be a measurable subset of $\T$, and let ${\mathfrak m} \coloneqq \T \backslash {\mathfrak M}$. For each $h$, let $\operatorname{MT}_h$ be a complex number. Let $h_0$ be an integer. Assume the following axioms: \begin{itemize} \item[(i)] (Size bounds) One has $\|f\|_{\ell^2}^2 \ll F^2 X$ and $\|g\|_{\ell^2}^2 \ll G^2 X$. \item[(ii)] (Major arc estimate) For all but $O(\eta H)$ integers $h$ with $|h-h_0| \leq H$, one has $$ \int_{\mathfrak M} S_f(\alpha) \overline{S_g(\alpha)} e(\alpha h)\ d\alpha = \operatorname{MT}_{h} + O( \eta F G X ).$$ \item[(iii)] (Minor arc estimate) For each $\alpha \in {\mathfrak m}$, one has \begin{equation}\label{fF} \int_{{\mathfrak m} \cap [\alpha-1/2H, \alpha+1/2H]} |S_f(\beta)|^2\ d\beta \ll \eta^6 F^2 X. \end{equation} \end{itemize} Then for all but $O(\eta H)$ integers $h$ with $|h-h_0| \leq H$, one has \begin{equation}\label{fg} \sum_n f(n) \overline{g(n+h)} = \operatorname{MT}_{h} + O( \eta FG X ). \end{equation} \end{corollary} In our applications, $F$ and $G$ will behave like a fixed power of $\log X$, and $\eta$ will be set to $\log^{-A} X$ for some large $A$. By symmetry one can replace $f,F$ in \eqref{fF} with $g,G$ if desired, but note that we only need a minor arc estimate for one of the two functions $f,g$.
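To orient the reader as to the strength of the minor arc estimate that will be required, consider for instance the choices $F = G = \log X$ and $\eta = \log^{-A} X$ that are typical of our applications. Then the requirement \eqref{fF} reads
$$ \int_{{\mathfrak m} \cap [\alpha-1/2H, \alpha+1/2H]} |S_f(\beta)|^2\ d\beta \ll X \log^{2-6A} X,$$
which is a saving of a large (but fixed) power of $\log X$ over the trivial bound \eqref{triv-planch} (which in this setting is $\ll X \log^2 X$ by axiom (i)), and the conclusion \eqref{fg} then asserts the approximation $\sum_n f(n) \overline{g(n+h)} = \operatorname{MT}_h + O( X \log^{2-A} X )$ for all but $O( H \log^{-A} X )$ values of $h$ with $|h-h_0| \leq H$.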
\begin{proof} From Proposition \ref{circle-method}, the upper bound \eqref{bound-2}, and axioms (i), (iii) we have $$ \sum_{|h-h_0| \leq H} \left|\sum_n f(n) \overline{g(n+h)} - \operatorname{MT}_{{\mathfrak M},h}\right|^2 \ll \eta^3 F^2 G^2 H X^2$$ and hence by Chebyshev's inequality we have $$ \sum_n f(n) \overline{g(n+h)} - \operatorname{MT}_{{\mathfrak M},h} = O( \eta F G X ) $$ for all but $O(\eta H)$ integers $h$ with $|h-h_0| \leq H$. Applying axiom (ii), \eqref{mt-def} and the triangle inequality, we obtain the claim. \end{proof} In view of the above corollary, Theorem \ref{unav-corr} will be an easy consequence of major and minor arc estimates which we will soon present. Given parameters $Q \geq 1$ and $\delta > 0$, define the major arcs $$ {\mathfrak M}_{Q,\delta} \coloneqq \bigcup_{1 \leq q \leq Q} \bigcup_{a: (a,q) = 1} \left[\frac{a}{q} - \delta, \frac{a}{q} + \delta\right],$$ where we identify intervals such as $[\frac{a}{q}-\delta, \frac{a}{q}+\delta]$ with subsets of the unit circle $\T$ in the usual fashion. We will take $Q \coloneqq \log^B X$ and $\delta \coloneqq X^{-1} \log^{B'} X$ for some large $B' > B > 1$. To handle the major arcs, we use the following estimate: \begin{proposition}[Major arc estimate]\label{major} Let $A >0$, $0 < \eps < 1/2$ and $k,l \geq 2$ be fixed, and suppose that $X \geq 2$, $B \geq 2A$ and $B' \geq 2B+A$. Let $h$ be an integer with $0 < |h| \leq X^{1-\eps}$. Let $P_{k, l, h}, Q_{k, h}$ and $\mathfrak{S}(h)$ be as in Section \ref{se:intro}. \begin{itemize} \item[(i)] (Major arcs for Hardy-Littlewood conjecture) We have \begin{equation}\label{dust} \begin{split} \int_{{\mathfrak M}_{\log^B X,X^{-1} \log^{B'} X}} |S_{\Lambda 1_{(X,2X]}}(\alpha)|^2 e(\alpha h)\ d\alpha &= {\mathfrak G}(h) X \\ &\quad + O_{\eps,A,B, B'}( d_2(h)^{O(1)} X \log^{-A} X ). \end{split} \end{equation} \item[(ii)] (Major arcs for divisor correlation conjecture) We have \begin{align*} \int_{{\mathfrak M}_{\log^B X,X^{-1} \log^{B'} X}} S_{d_k 1_{(X,2X]}}(\alpha) \overline{S_{d_l 1_{(X,2X]}}(\alpha)} e(\alpha h)\ d\alpha &= P_{k,l,h}(\log X) X \\ + O_{\eps,k,l,A,B,B'}&( d_2(h)^{O_{k,l}(1)} X \log^{k+l-2-A} X ). \end{align*} \item[(iii)] (Major arcs for higher order Titchmarsh problem) We have \begin{align*} \int_{{\mathfrak M}_{\log^B X,X^{-1} \log^{B'} X}} S_{\Lambda 1_{(X,2X]}}(\alpha) \overline{S_{d_k 1_{(X,2X]}}(\alpha)} e(\alpha h)\ d\alpha &= Q_{k,h}(\log X) X \\ + O_{\eps,k,A,B,B'}&( d_2(h)^{O_k(1)} X \log^{k-1-A} X ). \end{align*} \item[(iv)] (Major arcs for Goldbach conjecture) If $X$ is an integer, then \begin{align*} \int_{{\mathfrak M}_{\log^B X,X^{-1} \log^{B'} X}} S_{\Lambda 1_{(X,2X]}}(\alpha) S_{\Lambda 1_{[1,X)}}(\alpha) e(-\alpha X)\ d\alpha &= {\mathfrak G}(X) X \\ + O_{\eps,A,B,B'}&( d_2(X)^{O(1)} X \log^{-A} X ). \end{align*} \end{itemize} \end{proposition} These bounds are quite standard and will be established in Section \ref{major-sec}. It is likely that one can remove the factors of $d_2(h), d_2(X)$ from the error terms with a little more effort, but we will not need to do so here, as these factors will usually be dominated by the $\log^{-A} X$ savings. In case (ii), it is also likely that we can improve the error term to a power saving in $X$ if one enlarges the major arcs accordingly, but we will again not do so here. To handle the minor arcs, we use the following exponential sum estimate: \begin{proposition}[Minor arc estimate]\label{minor} Let $\eps>0$ be a sufficiently small absolute constant, and let $A,B,B'>0$. 
Let $k \geq 2$ be fixed, let $X \geq 2$, and set $Q \coloneqq \log^B X$. Assume that $B$ is sufficiently large depending on $A,k$, and that $B'$ is sufficiently large depending on $A, B, k$. Let $1 \leq q \leq Q$, let $a$ be coprime to $q$. Let $f \colon \N \to \C$ be either the function $f(n) \coloneqq \Lambda(n) 1_{(X,2X]}$ or $f(n) \coloneqq d_k(n) 1_{(X,2X]}$. \begin{itemize} \item[(i)] One has \begin{equation}\label{ma0} \int_{X^{-1} \log^{B'} X \ll |\theta| \ll X^{-1/6-\eps}} \left|S_f\left(\frac{a}{q}+\theta\right)\right|^2\ d\theta \ll_{k,\eps,A,B,B'} X \log^{-A} X. \end{equation} \item[(ii)] One has, for $\sigma \geq 8/33$, the bound \begin{equation}\label{mae} \int_{|\theta - \beta| \ll X^{-\record-\eps}} \left|S_f\left(\frac{a}{q}+\theta\right)\right|^2\ d\theta \ll_{k,\eps,A,B} X \log^{-A} X \end{equation} for any real number $\beta$ with $X^{-1/6-\eps} \ll |\beta| \leq \frac{1}{qQ}$. \end{itemize} \end{proposition} Note from \eqref{alpha-2} and \eqref{planch-1} that one already has the bound $$ \int_\T \left|S_f\left(\frac{a}{q}+\theta\right)\right|^2\ d\theta \ll_{k,\eps} X \log^{O_k(1)} X,$$ so the bounds \eqref{ma0}, \eqref{mae} only gain a logarithmic savings over the trivial bound. We also remark that the $\sigma \geq 1/3$ case of Proposition \ref{minor}(ii) can be established from the estimates in \cite{mikawa} (in the case $f = \Lambda 1_{(X,2X]}$) or \cite{bbmz} (in the case $f = d_3 1_{(X,2X]}$). Proposition \ref{minor} will be proven in Sections \ref{dirichlet-sec}-\ref{vdc-sec}. Assuming it for now, let us see why it (and Proposition \ref{major}) imply Theorem \ref{unav-corr}. The four cases are very similar and we will only describe the argument in detail for Theorem \ref{unav-corr}(i). By subdividing the interval $[h_0-H,h_0+H]$ if necessary, we may assume that $H = X^{\sigma+\eps}$ with $\eps$ small. Let $A>0$, let $B>0$ be sufficiently large depending on $A$, and let $B'>0$ be sufficiently large depending on $A,B$. We apply Corollary \ref{chebyshev} with $f = g = \Lambda 1_{(X,2X]}$, $\eta = \log^{-A-2} X$, and ${\mathfrak M} := {\mathfrak M}_{\log^B X, X^{-1} \log^{B'} X}$. From crude bounds we can verify the hypothesis in Corollary \ref{chebyshev}(i) with $F = G = \log X$. From the estimates in \cite{landreau}, we know that $$ \sum_{h: |h-h_0| \leq H} d_2(h) \ll H \log^{O(1)} X$$ and hence by the Markov inequality we have $d_2(h) \ll \log^{O_A(1)} X$ for all but $O(\eta H)$ values of $h \in [h_0-H,h_0+H]$. This fact and Proposition \ref{major}(i) then give the hypothesis in Corollary \ref{chebyshev}(ii). It remains to verify the hypothesis in Corollary \ref{chebyshev}(iii) for any $\alpha \not \in {\mathfrak M}_{\log^B X, X^{-1} \log^{B'} X}$. By the Dirichlet approximation theorem, we can write $\alpha = a/q + \beta$ for some $1 \leq q \leq \log^B X$, $(a,q)=1$, and $|\beta| \leq \frac{1}{qQ}$. Since $\alpha \not \in {\mathfrak M}_{\log^B X, X^{-1} \log^{B'} X}$, we also have $|\beta| \geq X^{-1} \log^{B'} X$. If $|\beta| \leq X^{-\frac{1}{6}-\eps}$, the claim then follows from Proposition \ref{minor}(i), while for $|\beta| > X^{-\frac{1}{6}-\eps}$ the claim follows from Proposition \ref{minor}(ii). Theorem~\ref{unav-corr}(ii)-(iv) follow similarly (with slightly larger choices of $F,G$). \begin{remark} The bound \eqref{mae} is being used here to establish Theorem \ref{unav-corr}. In the converse direction, it is possible to use Theorem \ref{unav-corr} to establish \eqref{mae}; we sketch the argument as follows.
The left-hand side of \eqref{mae} may be bounded by $$ \int_\R \left|S_f\left(\theta\right)\right|^2 \eta\left( X^{\record +\eps} \left(\theta - \frac{a}{q} - \beta\right) \right)\ d\theta$$ for some rapidly decreasing $\eta$ with compactly supported Fourier transform,and this can be rewritten as $$ X^{-\record-\eps} \sum_h e\left(-h\left(\frac{a}{q}+\beta\right)\right) \hat \eta\left( \frac{h}{X^{\record+\eps}} \right) \sum_n f(n) \overline{f(n+h)}.$$ The inner sum can be controlled for most $h$ using Theorem \ref{unav-corr}, and the contribution of the exceptional values of $h$ can be controlled by upper bound sieves. We leave the details to the interested reader. \end{remark} \section{Major arc estimates}\label{major-sec} In this section we prove Proposition \ref{major}. To do this we need some estimates on $S_{\Lambda 1_{(X,2X]}}(\alpha)$ and $S_{d_k 1_{(X,2X]}}(\alpha)$ for major arc $\alpha$. The former is standard: \begin{proposition}\label{kog} Let $A, B,B' > 0$, $X \geq 2$, and let $\alpha = \frac{a}{q} + \beta$ for some $1 \leq q \leq \log^B X$, $(a,q)=1$, and $|\beta| \leq \frac{\log^{B'} X}{X}$. Then we have $$ S_{\Lambda 1_{[1,X]}}(\alpha) = \frac{\mu(q)}{\varphi(q)} \int_1^{X} e(\beta x)\ dx + O_{A,B,B'}( X \log^{-A} X )$$ and hence also $$ S_{\Lambda 1_{(X,2X]}}(\alpha) = \frac{\mu(q)}{\varphi(q)} \int_X^{2X} e(\beta x)\ dx + O_{A,B,B'}( X \log^{-A} X )$$ \end{proposition} \begin{proof} See \cite[Lemma 8.3]{nathanson}. We remark that this estimate requires Siegel's theorem and so the bounds are ineffective. \end{proof} For the $d_k$ exponential sum, we have \begin{proposition}\label{klog} Let $A, B,B' > 0$, $k \geq 2$, $X \geq 2$, and let $\alpha = \frac{a}{q} + \beta$ for some $1 \leq q \leq \log^B X$, $(a,q)=1$, and $|\beta| \leq \frac{\log^{B'} X}{X}$. Then we have $$ S_{d_k 1_{(X,2X]}}(\alpha) = \int_X^{2X} p_{k,q}(x) e(\beta x)\ dx + O_{k,A,B,B'}( X \log^{-A} X )$$ where $$ p_{k,q}(x) \coloneqq \sum_{q=q_0q_1} \frac{\mu(q_1)}{\varphi(q_1) q_0} p_{k,q_0,q_1}\left(\frac{x}{q_0}\right)$$ $$ p_{k,q_0,q_1}(x) \coloneqq \frac{d}{dx} \mathrm{Res}_{s=1} \frac{x^s F_{k,q_0,q_1}(s)}{s} $$ $$ F_{k,q_0,q_1}(s) \coloneqq \sum_{n \geq 1: (n,q_1) = 1} \frac{d_k(q_0 n)}{n^s}.$$ \end{proposition} Using Euler products we see that $F_{k,q_0,q_1}$ has a pole of order $k$ at $s=1$, and so $p_{k,q_0,q_1}$ (and hence $p_{k,q}$) will be a polynomial of degree at most $k-1$ in $\log x$. One could improve the error term $X \log^{-A} X$ here to a power savings $X^{1-c/k}$ for some absolute constant $c>0$, and also allow $q$ and $X |\beta|$ to similarly be as large as $X^{c/k}$, but we will not exploit such improved estimates here. \begin{proof} This is a variant of the computations in \cite[\S 6]{bbmz}. Using Lemma \ref{sbp} (and increasing $A$ as necessary), it suffices to show that $$ \sum_{n \leq X'} d_k(n) e(an/q) = \int_0^{X'} p_{k,q}(x)\ dx + O_{k,A,B}( X \log^{-A} X )$$ for all $X' \asymp X$. Writing $n = q_0 n_1$ where $q_0 = (q, n)$, we can expand the left-hand side as $$ \sum_{q=q_0q_1} \sum_{n_1 \leq X'/q_0: (n_1,q_1)=1} d_k(q_0 n_1) e(a n_1 / q_1)$$ so (again by enlarging $A$ as necessary) it will suffice to show that \begin{equation}\label{dope} \sum_{n_1 \leq X'/q_0: (n_1,q_1)=1} d_k(q_0 n_1) e(a n_1 / q_1) = \frac{\mu(q_1)}{\varphi(q_1)} \int_0^{X'/q_0} p_{k,q_0,q_1}(x)\ dx + O_{k,A,B}( X \log^{-A} X ) \end{equation} for each factorization $q = q_0 q_1$. 
By \eqref{usual}, the left-hand side of \eqref{dope} expands as $$ \frac{1}{\varphi(q_1)} \sum_{\chi\ (q_1)} \chi(a) \tau(\overline{\chi}) \sum_{n_1 \leq X'/q_0} \chi(n_1) d_k(q_0 n_1)$$ where the Gauss sum $\tau(\overline{\chi})$ is defined by \eqref{taug}. For non-principal $\chi$, a routine application of the Dirichlet hyperbola method shows that $$ \sum_{n_1 \leq X'/q_0: (n_1,q_1)=1} \chi(n_1) d_k(q_0 n_1) \ll_{k,A,B} X \log^{-A} X$$ for any $A>0$ (in fact one can easily extract a power savings of order $\ll_{\varepsilon} X^{-1/k + \varepsilon}$ from this argument). Thus it suffices to handle the contribution of the principal character. Here, the Gauss sum \eqref{taug} is just $\mu(q_1)$, so we reduce to showing that \begin{equation} \label{eq:divsumdk} \sum_{n_1 \leq X'/q_0: (n_1,q_1)=1} d_k(q_0 n_1) = \int_0^{X'/q_0} p_{k,q_0,q_1}(x)\ dx + O_{k,A,B}( X \log^{-A} X ). \end{equation} By the fundamental theorem of calculus one has $$ \int_0^{X'/q_0} p_{k,q_0,q_1}(x)\ dx = \mathrm{Res}_{s=1} \frac{(X'/q_0)^s F_{k,q_0,q_1}(s)}{s}.$$ Meanwhile, by Lemma \ref{tpf-lem}(i), we can write the left-hand side of \eqref{eq:divsumdk} as $$ \frac{1}{2\pi i} \int_{\sigma-iX^{\eps}}^{\sigma+iX^\eps} \frac{(X'/q_0)^s F_{k,q_0,q_1}(s)}{s}\ ds + O_{k,A,B,\eps}( X \log^{-A} X)$$ where $\sigma \coloneqq 1 + \frac{1}{\log X}$ and $\eps>0$ is arbitrary. On the other hand, by modifying the arguments in \cite[Lemma 4.3]{bbmz} (and using the standard convexity bound for the $\zeta$ function) we have for all sufficiently small $\varepsilon > 0$, the bounds $$ F_{k,q_0,q_1}(s) \ll_{k,B,\eps} X^{O_k(\eps^2)}$$ when $\mathrm{Re}(s) \geq 1-\eps$, $|\mathrm{Im}(s)| \leq X^\eps$, and $|s-1| \geq \eps$. Shifting the contour to the rectangular path connecting $\sigma-iX^{\eps}$, $(1-\eps)-iX^{\eps}$, $(1-\eps)+iX^\eps$, and $\sigma+iX^\eps$ and using the residue theorem, we obtain the claim. \end{proof} Now we establish Proposition \ref{major}(i). From Proposition \ref{kog} and the trivial bound $| \int_{X}^{2X} e(\beta x) dx | \leq X$, we have $$ |S_{\Lambda 1_{(X,2X]}}(\alpha)|^2 = \frac{\mu^2(q)}{\varphi^2(q)} \left|\int_X^{2X} e(\beta x)\ dx\right|^2 + O_{A',B,B'}( X^2 \log^{-A'} X )$$ for any $A'>0$ and major arc $\alpha$ as in that proposition. On the other hand, the set ${\mathfrak M}_{\log^B X,X^{-1} \log^{B'} X}$ has measure $O( X^{-1} \log^{2B+B'} X )$. Thus (on increasing $A'$ as necessary) to prove \eqref{dust}, it suffices to show that \begin{align*} & \sum_{q \leq \log^B X} \sum_{(a,q)=1} \int_{|\beta| \leq X^{-1} \log^{B'} X} \frac{\mu^2(q)}{\varphi^2(q)} \left|\int_X^{2X} e(\beta x)\ dx\right|^2 e\left(\left(\frac{a}{q}+\beta\right) h\right)\ d\beta \\ &\quad = {\mathfrak G}(h) X + O_{\eps,A,B, B'}( d_2(h)^{O(1)} X \log^{-A} X ). \end{align*} By the Fourier inversion formula we have \begin{align*} \int_\R \left|\int_X^{2X} e(\beta x)\ dx\right|^2 e(\beta h)\ d\beta &= \int_\R 1_{[X,2X]}(x) 1_{[X,2X]}(x+h)\ dx \\ &= (1 + O(X^{-\eps})) X \end{align*} so from the elementary bound $\int_X^{2X} e(\beta x)\ dx \ll 1/|\beta|$ one has \begin{equation}\label{tbf} \int_{|\beta| \leq X^{-1} \log^{B'} X} \left|\int_X^{2X} e(\beta x)\ dx\right|^2 e(\beta h)\ d\beta = (1 + O_{\eps,B'}( \log^{-B'} X)) X. 
\end{equation} Since $B' \geq 2B+A$, it thus suffices to show that $$ \sum_{q \leq \log^B X} \sum_{(a,q)=1} \frac{\mu^2(q)}{\varphi^2(q)} e\left(\frac{ah}{q}\right)= {\mathfrak G}(h) + O_{\eps,A,B}( d_2(h)^{O(1)} \log^{-A} X ).$$ Introducing the Ramanujan sum \begin{equation}\label{cqd} c_q(a) \coloneqq \sum_{1 \leq b \leq q: (b,q) = 1} e\left(\frac{ab}{q}\right), \end{equation} the left-hand side simplifies to $$ \sum_{q \leq \log^B X} \frac{\mu^2(q) c_q(h)}{\varphi^2(q)}.$$ Recall that, for fixed $h$, $c_q(h)$ is multiplicative in $q$ and that $c_p(h) = -1$ if $p \nmid h$ and $c_p(h) = \varphi(p)$ if $p \mid h$. Hence by Euler products one has \begin{align*} \sum_q \frac{\mu^2(q) c_q(h)}{\varphi^2(q)} q^{1/2} &\ll \prod_{p \nmid h} \left(1+O\left(\frac{1}{p^{3/2}}\right)\right) \times \prod_{p |h} O(1) \\ &\ll d_2(h)^{O(1)} \end{align*} and hence $$ \sum_{q > \log^B X} \frac{\mu^2(q) c_q(h)}{\varphi^2(q)} \ll d_2(h)^{O(1)} \log^{-B/2} X.$$ Since $B \geq 2A$, it thus suffices to establish the identity $$ \sum_q \frac{\mu^2(q) c_q(h)}{\varphi^2(q)} = {\mathfrak G}(h)$$ but this follows from a standard Euler product calculation. The proof of Proposition \ref{major}(iv) is similar to that of Proposition \ref{major}(i) and is left to the reader. We now turn to Proposition \ref{major}(ii). From \eqref{divisor-crude} one has $$ S_{d_k 1_{(X,2X]}}(\alpha) \ll_k X \log^{k-1} X$$ and similarly $$ S_{d_l 1_{(X,2X]}}(\alpha) \ll_l X \log^{l-1} X.$$ Write $$ S_{k,q}(\beta) \coloneqq \int_X^{2X} p_{k,q}(x) e(\beta x)\ dx$$ and similarly for $S_{l,q}(\beta)$. Then from Proposition \ref{klog} we have (on increasing $A$ as necessary) that $$ S_{d_k 1_{(X,2X]}}(\alpha) \overline{S_{d_l 1_{(X,2X]}}(\alpha)} = S_{k,q}(\beta) \overline{S_{l,q}(\beta)} + O_{k,l,A',B,B'}( X^2 \log^{-A'} X ) $$ for any $A'>0$. It thus suffices to show that \begin{align*} & \sum_{q \leq \log^B X} \sum_{(a,q)=1} \int_{|\beta| \leq X^{-1} \log^{B'} X} S_{k,q}(\beta) \overline{S_{l,q}(\beta)} e\left(\left(\frac{a}{q}+\beta\right)h\right)\ d\beta\\ &= P_{k,l,h}(\log X) X + O_{\eps,k,l,A,B,B'}( d_2(h)^{O_{k,l}(1)} X \log^{-A} X ). \end{align*} Using Euler products one can obtain the crude bounds \begin{equation}\label{caqg} p_{k,q}(x) \ll_k \frac{d_2(q)^{O_k(1)}}{q} \log^{k-1} X \end{equation} for $x \asymp X$; indeed, the coefficients of $p_{k,q}$ (viewed as a polynomial in $\log x$) are of size $O_k( \frac{d_2(q)^{O_k(1)}}{q})$. By repeating the proof of \eqref{tbf}, we can then conclude that \begin{align*} &\int_{|\beta| \leq X^{-1} \log^{B'} X} S_{k,q}(\beta) \overline{S_{l,q}(\beta)} e(\beta h)\ d\beta\\ &\quad = \int_X^{2X} p_{k,q}(x) \overline{p_{l,q}}(x+h)\ dx + O_{k,l,B'}\left( \frac{d_2(q)^{O_{k,l}(1)}}{q^2} X \log^{k+l-2-B'} X\right).
\end{align*} Since $\log(x+h) = \log x + O_\eps(X^{-\eps})$ for $|h| \leq X^{1-\eps}$ and $x \asymp X$, we have $$ \int_X^{2X} p_{k,q}(x) \overline{p_{l,q}}(x+h)\ dx = \int_X^{2X} p_{k,q}(x) \overline{p_{l,q}}(x)\ dx + O_{k,l,B'}\left( d_2(q)^{O_{k,l}(1)} X \log^{k+l-2-B'} X\right).$$ Using \eqref{divisor-crude} to control the error terms, using \eqref{cqd}, and recalling that $B' \geq 2B+A$, it therefore suffices to establish the bound $$ \sum_{q \leq \log^B X} c_q(h) \int_X^{2X} p_{k,q}(x) \overline{p_{l,q}}(x)\ dx = P_{k,l,h}(\log X) X + O_{\eps,k,l,A,B}\left( d_2(h)^{O_{k,l}(1)} X \log^{k+l-2-A} X \right).$$ Using the bounds \eqref{caqg}, we can argue as before to show that $$ \sum_{q} q^{1/2} c_q(h) \int_X^{2X} p_{k,q}(x) \overline{p_{l,q}}(x)\ dx \ll_{k,l} d_2(h)^{O_{k,l}(1)} X \log^{k+l-2} X $$ and so for $B \geq 2A$ it suffices to show that $$ \sum_q c_q(h) \int_X^{2X} p_{k,q}(x) \overline{p_{l,q}}(x)\ dx = X P_{k,l,h}(\log X).$$ But as $p_{k,q}, p_{l,q}$ are polynomials in $\log x$ of degree at most $k-1,l-1$ respectively, this follows from direct calculation (using \eqref{caqg} to justify the convergence of the summation). An explicit formula for the polynomial $P_{k,l,h}$ may be computed by using the calculations in \cite{conrey}, but we will not do so here. To prove Proposition \ref{major}(iii), we repeat the arguments used to establish Proposition \ref{major}(ii) (replacing one of the invocations of Proposition \ref{klog} with Proposition \ref{kog}) and eventually reduce to showing that $$ \sum_q c_q(h) \int_X^{2X} p_{k,q}(x) \frac{\mu(q)}{\varphi(q)}\ dx = X Q_{k,h}(\log X),$$ but this is again clear since $p_{k,q}(x)$ is a polynomial in $\log x$ of degree at most $k-1$. Again, the polynomial $Q_{k,h}$ is explicitly computable, but we will not write down such an explicit formula here. \section{Reduction to a Dirichlet series mean value estimate}\label{dirichlet-sec} We begin the proof of Proposition \ref{minor}. As discussed in the introduction, we will estimate the expressions \eqref{mae}, \eqref{ma0}, which currently involve the additive frequency variable $\alpha$, by expressions involving the multiplicative frequency $t$, by performing a sequence of Fourier-analytic transformations and changes of variable. The starting point will be the following treatment of the $q=1$ case: \begin{proposition}[Bounding exponential sums by Dirichlet series mean values]\label{spe} Let $1 \leq H \leq X/2$, and let $f \colon \N \to \C$ be a function supported on $(X,2X]$. Let $\beta, \eta$ be real numbers with $|\beta| \ll \eta \ll 1$, and let $I$ denote the region \begin{equation}\label{I-def} I \coloneqq \left\{ t \in \R: \eta |\beta| X \leq |t| \leq \frac{|\beta| X}{\eta} \right\} \end{equation} \begin{itemize} \item[(i)] We have \begin{equation}\label{spe-2} \begin{split} \int_{\beta-1/H}^{\beta+1/H} |S_f(\theta)|^2\ d\theta &\ll \frac{1}{|\beta|^2 H^2} \int_I \left( \int_{t-|\beta| H}^{t+|\beta| H} |{\mathcal D}[f](\frac{1}{2}+it')|\ dt'\right)^2\ dt \\ &\quad + \frac{\left(\eta + \frac{1}{|\beta| H}\right)^2 }{H^2} \int_\R \left(\sum_{x \leq n \leq x+H} |f(n)|\right)^2\ dx.
\end{split} \end{equation} \item[(ii)] If $\beta = 1/H$, then we have the variant \begin{equation}\label{spe-4} \begin{split} \int_{\beta \leq |\theta| \leq 2\beta} |S_f(\theta)|^2\ d\theta &\ll \int_I |{\mathcal D}[f](\frac{1}{2}+it)|^2\ dt \\ &\quad + \frac{\left(\eta + \frac{1}{|\beta| X}\right)^2}{H^2} \int_\R \left(\sum_{x \leq n \leq x+H} |f(n)|\right)^2\ dx \end{split} \end{equation} \end{itemize} \end{proposition} Observe from the Cauchy-Schwarz inequality that $$ \frac{1}{|\beta|^2 H^2} \int_I \left(\int_{t-|\beta| H}^{t+|\beta| H} \left|{\mathcal D}[f]\left(\frac{1}{2}+it'\right)\right|\ dt'\right)^2\ dt \ll \int_{-2|\beta| X/\eta}^{2|\beta| X/\eta} \left|{\mathcal D}[f]\left(\frac{1}{2}+it\right)\right|^2\ dt $$ and so from the $L^2$ mean value estimate (Lemma \ref{mvt-lem}) we see that for $f$ a $k$-divisor-bounded function, \eqref{spe-2} trivially implies the bound $$ \int_{\beta-1/H}^{\beta+1/H} |S_f(\theta)|^2\ d\theta \ll_k X \log^{O_k(1)} X.$$ Note that this bound also follows from the ``trivial'' bounds \eqref{planch-1} and \eqref{divisor-crude}. Thus, ignoring powers of $\log X$, \eqref{spe-2} is efficient in the sense that trivial estimation of the right-hand side recovers the trivial bound on the left-hand side. In particular, any significant improvement (such as a power savings) over the trivial bound on the right-hand side will lead to a corresponding non-trivial estimate on the left-hand side, of the type needed for Proposition \ref{minor}. Similarly for \eqref{spe-4} (which roughly corresponds to the endpoint $|\beta| \asymp \frac{1}{H}$ of \eqref{spe-2}). \begin{proof} For brevity we adopt the notation $$ F(t) \coloneqq {\mathcal D}[f](\frac{1}{2}+it).$$ We first prove \eqref{spe-2}. It will be convenient for Fourier-analytic computations to work with smoothed sums. Let $\varphi \colon \R \to \R$ be a smooth even function supported on $[-1,1]$, equal to one on $[-1/10,1/10]$, and whose Fourier transform $\hat \varphi(\theta) := \int_\R \varphi(y) e(-\theta y)\ dy$ obeys the bound $|\hat \varphi(\theta)| \gg 1$ for $\theta \in [-1,1]$. Notice that $\varphi$ is a Schwartz function since it is smooth and compactly supported; thus $\widehat{\varphi}$ is also a Schwartz function. Then we have \begin{align*} \int_{\beta-1/H}^{\beta+1/H} |S_f(\theta)|^2\ d\theta &\ll \int_\R |S_f(\theta)|^2 |\hat \varphi( H(\theta - \beta) )|^2 \ d\theta \\ &= \int_\R \left| \int_\R \sum_n f(n) \varphi(y) e( \beta H y ) e( \theta (n - H y) )\ dy \right|^2\ d\theta \\ &= H^{-2} \int_\R \left| \int_\R \sum_n f(n) \varphi\left(\frac{n-x}{H}\right) e( \beta (n-x) ) e( \theta x )\ dx \right|^2\ d\theta \\ &= H^{-2} \int_\R \left| \sum_n f(n) \varphi\left(\frac{n-x}{H}\right) e( \beta (n-x) ) \right|^2\ dx \\ &= H^{-2} \int_\R \left| \sum_n f(n) \varphi\left(\frac{n-x}{H}\right) e( \beta n ) \right|^2\ dx \\ &= H^{-2} \int_{X/2}^{4X} \left| \sum_n f(n) \varphi\left(\frac{n-x}{H}\right) e( \beta n ) \right|^2\ dx, \end{align*} where we have made the change of variables $x = n-Hy$, followed by the Plancherel identity, and then used the support of $f$ and $\varphi$. (This can be viewed as a smoothed version of a lemma of Gallagher \cite[Lemma 1]{gallagher}.) By the triangle inequality, we can bound the previous expression by $$ \ll H^{-2} \int_\R \left(\sum_{x-H \leq n \leq x+H} |f(n)|\right)^2\ dx,$$ which is acceptable if $|\beta| H \ll 1$ or $\eta \geq 1/100$. Thus we may assume henceforth that $|\beta| \gg 1/H$ and $\eta < 1/100$.
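For the reader's convenience, we briefly spell out why this last expression is acceptable in the excluded regimes. Splitting the window $[x-H,x+H]$ into two windows of length $H$ and translating in $x$, one has $$ H^{-2} \int_\R \left(\sum_{x-H \leq n \leq x+H} |f(n)|\right)^2\ dx \ll H^{-2} \int_\R \left(\sum_{x \leq n \leq x+H} |f(n)|\right)^2\ dx,$$ and when $|\beta| H \ll 1$ or $\eta \geq 1/100$ one has $\eta + \frac{1}{|\beta| H} \gg 1$, so that the right-hand side is dominated by the second term on the right-hand side of \eqref{spe-2}.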
By duality, it thus suffices to establish the bound \begin{equation}\label{fad} \begin{split} \int_\R \sum_n f(n) \varphi\left(\frac{n-x}{H}\right) e( \beta n ) g(x)\ dx &\ll \frac{1}{|\beta|} \left( \int_I \left( \int_{t-|\beta| H}^{t+|\beta| H} |F(t')|\ dt'\right)^2\ dt \right)^{1/2} \\ &\quad + \left(\eta + \frac{1}{|\beta| H}\right) \left(\int_\R \left(\sum_{x \leq n \leq x+H} |f(n)|\right)^2\ dx\right)^{1/2} \end{split} \end{equation} whenever $g \colon \R \to \C$ is a measurable function supported on $[X/2,4X]$ with the normalization \begin{equation}\label{g-norm} \int_\R |g(x)|^2\ dx = 1. \end{equation} Using the change of variables $u = \log n - \log X$ (or equivalently $n = X e^u$), as discussed in the introduction, we can write the left-hand side of \eqref{fad} as \begin{equation}\label{ga} \sum_n \frac{f(n)}{n^{1/2}} G(\log n - \log X) \end{equation} where $G \colon \R \to \R$ is the function $$ G(u) \coloneqq X^{1/2} e^{u/2} e(\beta X e^u) \int_\R \varphi\left(\frac{Xe^u-x}{H}\right) g(x)\ dx.$$ From the support of $g$ and $\varphi$, we see that $G$ is supported on the interval $[-10,10]$ (say). At this stage we could use the Fourier inversion formula \begin{equation}\label{G-invert} G(u) = \frac{1}{2\pi} \int_\R \hat G\left(-\frac{t}{2\pi}\right) e^{-itu}\ dt \end{equation} to rewrite \eqref{ga} in terms of the Dirichlet series ${\mathcal D}[f](\frac{1}{2}+it) = \sum_n \frac{f(n)}{n^{\frac{1}{2}+it}}$. However, the main term in the right-hand side of \eqref{spe-2} only involves ``medium'' values of the frequency variable $t$, in the sense that $|t|$ is constrained to lie between $\eta \beta X$ and $\beta X/\eta$. Fortunately, the phase $e(\beta X e^u)$ of $G(u)$ oscillates at frequencies comparable to $\beta X$ in the support $[-10,10]$ of $G$, so the contribution of ``high frequencies'' $|t| \gg \beta X/\eta$ and ``low frequencies'' $|t| \ll \eta \beta X$ will both be acceptable, in the sense that they will be controllable using the error term in \eqref{spe-2}. To make this precise we will use the harmonic analysis technique of Littlewood-Paley decomposition. Namely, we split the sum \eqref{ga} into three subsums \begin{equation}\label{gaj} \sum_n \frac{f(n)}{n^{1/2}} G_i(\log n - \log X) \end{equation} for $i=1,2,3$, where $G_1,G_2,G_3$ are Littlewood-Paley projections of $G$, \begin{align*} G_1(u) &\coloneqq \int_\R G\left( u - \frac{2\pi v}{10 \eta |\beta| X} \right) \hat \varphi(v)\ dv \\ G_2(u) &\coloneqq \int_\R G\left( u - \frac{2\pi \eta v}{|\beta| X} \right) \hat \varphi(v)\ dv - \int_\R G\left( u - \frac{2\pi v}{10 \eta |\beta| X} \right) \hat \varphi(v)\ dv \\ G_3(u) &\coloneqq G(u) - \int_\R G\left( u - \frac{2\pi \eta v}{|\beta| X} \right) \hat \varphi(v)\ dv \end{align*} and estimate each subsum separately. \begin{remark}\label{fourier} Expanding out $\hat \varphi$ as a Fourier integral and performing some change of variables, one can compute that \begin{align*} G_1(u) &= \frac{1}{2\pi} \int_\R \hat G\left(-\frac{t}{2\pi}\right) \varphi \left( \frac{t}{10 \eta |\beta| X} \right) e^{-itu}\ dt \\ G_2(u) &= \frac{1}{2\pi} \int_\R \hat G\left(-\frac{t}{2\pi}\right) \left( \varphi\left( \frac{\eta t}{|\beta| X} \right) - \varphi\left( \frac{t}{10 \eta |\beta| X} \right)\right) e^{-itu}\ dt \\ G_3(u) &= \frac{1}{2\pi} \int_\R \hat G\left(-\frac{t}{2\pi}\right) \left( 1 - \varphi\left( \frac{\eta t}{|\beta| X} \right)\right) e^{-itu}\ dt. 
\end{align*} Comparing this with \eqref{G-invert}, we see that $G_1,G_2,G_3$ arise from $G$ by smoothly truncating the frequency variable $t$ to ``low frequencies'' $|t| \ll \eta |\beta| X$, ``medium frequencies'' $\eta |\beta| X \ll |t| \ll |\beta| X / \eta$, and ``high frequencies'' $|t| \gg |\beta| X / \eta$ respectively. It will be the medium frequency term $G_2$ that will be the main term; the low frequency term $G_1$ and high frequency term $G_3$ can be shown to be small by using the oscillation properties of the phase $e(\beta X e^u)$. \end{remark} We first consider the contribution of \eqref{gaj} in the ``high frequency'' case $i=3$. Since $\int_\R \hat \varphi(v)\ dv = \varphi(0) = 1$, we can use the fundamental theorem of calculus to write \begin{equation}\label{g3-d} G_3(u) = \frac{2\pi \eta}{|\beta| X} \int_0^1 \int_\R v G'\left( u - a \frac{2\pi \eta v}{|\beta| X} \right) \hat \varphi(v)\ dv da. \end{equation} For $x \in [X/2,4X]$, the function $u \mapsto X^{1/2} e^{u/2} e(\beta X e^u) \varphi\left(\frac{Xe^u-x}{H}\right)$ is only non-zero when $x = X e^u + O(H)$, and has a derivative of $O( |\beta| X^{3/2} )$. As a consequence, we have the derivative bound $$ G'(u) \ll \beta X^{3/2} \int_{x = X e^u + O(H)} |g(x)|\ dx$$ for any $u$, and hence by the triangle inequality $$ G_3(u) \ll \eta X^{1/2} \int_0^1 \int_\R \int_{x = X e^{-a \frac{2\pi \eta v}{|\beta| X}} e^u + O(H)} |g(x)| |v| |\hat \varphi(v)|\ dx dv da.$$ The expression \eqref{gaj} when $i=3$ may thus be bounded by $$ \ll \eta X^{1/2} \int_0^1 \int_\R \int_\R \sum_{n: x = \lambda n + O(H)} \frac{|f(n)|}{n^{1/2}} |g(x)| |v| |\hat \varphi(v)|\ dx dv da$$ where we abbreviate $\lambda \coloneqq e^{-a \frac{2\pi \eta v}{|\beta| X}}$. From the support of $f$ and $g$ we see that the inner integral vanishes unless $\lambda \asymp 1$. By the rapid decrease of $\hat \varphi$, we may then bound the previous expression by $$ \ll \eta X^{1/2} \sup_{\lambda \asymp 1} \int_\R \sum_{n: x = \lambda n + O(H)} \frac{|f(n)|}{n^{1/2}} |g(x)|\ dx.$$ Since $f$ is supported on $[X,2X]$, we see from \eqref{g-norm} and Cauchy-Schwarz that this quantity is bounded by $$ \ll \eta \sup_{\lambda \asymp 1} \left(\int_\R \left(\sum_{n: x = \lambda n + O(H)} |f(n)|\right)^2\ dx\right)^{1/2}.$$ Rescaling $x$ by $\lambda$ and using the triangle inequality, we can bound this by $$ \ll \eta \left(\int_\R (\sum_{x \leq n \leq x+H} |f(n)|)^2\ dx\right)^{1/2}$$ which is acceptable. Now we consider the contribution of \eqref{gaj} in the ``low frequency'' case $i=1$. We first make the change of variables $w \coloneqq u - \frac{2\pi v}{10 \eta |\beta| X}$ to write \begin{align*} G_1(u) &= \frac{-10 \eta |\beta| X}{2\pi} \int_\R G(w) \hat \varphi\left(\frac{10 \eta |\beta| X}{2\pi} (u-w)\right)\ dw \\ &= \frac{-10 \eta |\beta| X^{3/2}}{2\pi} \int_\R \left(\int_\R e( \beta X e^w ) \psi_{x,u}(w)\ dw\right) g(x)\ dx \end{align*} where $\psi_{x,u} \colon \R \to \C$ is the amplitude function \begin{equation}\label{psixu-def} \psi_{x,u}(w) \coloneqq e^{w/2} \hat \varphi\left(\frac{10 \eta |\beta| X}{2\pi} (u-w)\right) \varphi\left( \frac{Xe^w - x}{H} \right). \end{equation} The function $\psi_{x,u}$ is supported on the region $w = \log \frac{x}{X} + O(\frac{H}{X})$ (in particular, $w = O(1)$); from the rapid decrease of $\hat \varphi$ and the hypothesis $|\beta| \gg 1/H$ we also have the bound $$ \psi_{x,u}(w) \ll (1 + \eta |\beta| X |w-u|)^{-2}$$ (say). 
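(For the reader's convenience, here is a brief justification of the displayed bound: on the support of $\psi_{x,u}$ the factors $e^{w/2}$ and $\varphi\left( \frac{Xe^w - x}{H} \right)$ are both $O(1)$, while the rapid decrease of $\hat \varphi$ gives $\hat \varphi(s) \ll (1+|s|)^{-2}$, which we apply with $s = \frac{10 \eta |\beta| X}{2\pi} (u-w)$.)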
Differentiating \eqref{psixu-def} in $w$, we conclude the bounds $$ \psi'_{x,u}(w) \ll \left(\eta |\beta| X + \frac{X}{H}\right) (1 + \eta |\beta| X |w-u|)^{-2}.$$ Meanwhile, the phase $\beta X e^w$ has all $w$-derivatives comparable to $|\beta| X$ in magnitude. Integrating by parts, we conclude the bound \begin{align*} \int_\R e( \beta X e^w ) \psi_{x,u}(w)\ dw &\ll \left( \eta + \frac{1}{|\beta| H} \right) \int_{w = \log \frac{x}{X} + O(\frac{H}{X})} (1 + \eta |\beta| X |w-u|)^{-2}\ dw \end{align*} and hence \eqref{gaj} for $i=1$ may be bounded by $$ \ll \left( \eta + \frac{1}{|\beta| H} \right) \eta |\beta| X^{3/2} \int_\R \int_{w = \log \frac{x}{X} + O(\frac{H}{X})} \sum_n \frac{|f(n)|}{n^{1/2}} \frac{|g(x)|}{(1 + \eta |\beta| X \cdot |w-\log n+\log X|)^{2}}\ dw dx.$$ Making the change of variables $z \coloneqq w - \log n + \log X$, this becomes $$ \ll \left( \eta + \frac{1}{|\beta| H} \right) \eta |\beta| X^{3/2} \int_\R \int_\R \sum_{n: z = \log \frac{x}{n} + O(\frac{H}{X})} \frac{|f(n)|}{n^{1/2}} \frac{|g(x)|}{(1 + \eta |\beta| X |z|)^{2}}\ dx dz.$$ The sum vanishes unless $z = O(1)$, in which case the condition $z = \log \frac{x}{n} + O(\frac{H}{X})$ can be rewritten as $n = e^{-z} x + O(H)$. Since $\int_\R \eta |\beta| X (1 + \eta |\beta| X |z|)^{-2}\ dz \ll 1$, we can thus bound the previous expression by $$ \ll \left( \eta + \frac{1}{|\beta| H} \right) X^{1/2} \sup_{z = O(1)} \int_\R \sum_{n = e^{-z} x + O(H)} \frac{|f(n)|}{n^{1/2}} |g(x)|\ dx.$$ Arguing as in the high frequency case $i=3$ (with $e^{-z}$ now playing the role of $\lambda$), we can bound this by $$ \left( \eta + \frac{1}{|\beta| H} \right) \left(\int_\R \left(\sum_{x \leq n \leq x+H} |f(n)|\right)^2\ dx\right)^{1/2}$$ which is acceptable. Finally we consider the main term, which is \eqref{gaj} in the ``medium frequency'' case $i=2$. For any $T>0$, the quantity $$ \sum_n \frac{f(n)}{n^{1/2}} \int_\R G\left( \log n - \log X - \frac{2\pi v}{T} \right) \hat \varphi(v)\ dv $$ can be expanded by first opening $\widehat{\varphi}(v) = \int_{\mathbb{R}} \varphi(y) e(- v y) dy$ and then using the change of variables $t \coloneqq Ty$, $w \coloneqq \log n - \log X - \frac{2\pi v}{T}$ as \begin{align*} &\int_\R \sum_n \frac{f(n)}{n^{1/2}} \int_\R G\left( \log n - \log X - \frac{2\pi v}{T} \right) e(-vy) \varphi(y)\ dv dy \\ &\quad = \frac{1}{2\pi} \int_\R \sum_n \frac{f(n)}{n^{1/2}} \int_\R G( w ) n^{-it} e^{itw} X^{it} \varphi\left(\frac{t}{T}\right)\ dw dt \\ &\quad = \frac{1}{2\pi} \int_\R F(t) \int_\R G( w ) e^{itw} X^{it} \varphi\left(\frac{t}{T}\right)\ dw dt \end{align*} (compare with Remark \ref{fourier}). Applying this identity with $T \coloneqq \frac{|\beta| X}{\eta}$ and with $T \coloneqq 10 \eta |\beta| X$ and subtracting, we may thus write \eqref{gaj} for $i=2$ as \begin{equation}\label{g2-expand} \int_\R \int_\R \tilde F(t) G(w) e^{itw}\ dw dt \end{equation} where $\tilde F$ is the function $$ \tilde F(t) \coloneqq \frac{1}{2\pi} F(t) X^{it} \left(\varphi\left(\frac{\eta t}{|\beta| X}\right) - \varphi\left( \frac{t}{10 \eta |\beta| X}\right) \right). $$ For future reference we observe that $\tilde F$ is supported on $I$ and enjoys the pointwise bound $\tilde F(t) = O( |F(t)| )$.
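For the reader's convenience, we verify the support claim. Since $\varphi$ is supported on $[-1,1]$ and equals one on $[-1/10,1/10]$, and since $\eta < 1/100$, for $|t| \leq \eta |\beta| X$ we have both $\left|\frac{\eta t}{|\beta| X}\right| \leq \eta^2 \leq \frac{1}{10}$ and $\left|\frac{t}{10 \eta |\beta| X}\right| \leq \frac{1}{10}$, so both cutoffs equal one and their difference vanishes; while for $|t| \geq \frac{|\beta| X}{\eta}$ we have $\left|\frac{\eta t}{|\beta| X}\right| \geq 1$ and $\left|\frac{t}{10 \eta |\beta| X}\right| \geq \frac{1}{10\eta^2} \geq 1$, so both cutoffs vanish. The pointwise bound $\tilde F(t) = O(|F(t)|)$ is immediate since $|X^{it}| = 1$ and $\varphi = O(1)$.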
Expanding out $G$, we can write the preceding expression as $$ X^{1/2} \int_\R \int_\R \tilde F(t) g(x) J_x(t)\ dt dx$$ where $J_x(t)$ is the oscillatory integral \begin{equation}\label{jxt-def} J_x(t) \coloneqq \int_\R e(\varphi_t(w)) a_x(w)\ dw \end{equation} with the phase function $$ \varphi_t(w) \coloneqq \beta X e^w + \frac{tw}{2\pi}$$ and the amplitude function $$ a_x(w) \coloneqq e^{w/2} \varphi\left(\frac{Xe^w-x}{H}\right) \varphi(w/100),$$ noting that $\varphi(w/100)$ will equal $1$ whenever $g(x)\varphi\left(\frac{Xe^w-x}{H}\right)$ is non-zero. By \eqref{g-norm} and Cauchy-Schwarz, the above expression may be bounded in magnitude by $$ X^{1/2} \left( \int_\R \int_\R \tilde F(t) \overline{\tilde F(t')} \int_\R J_x(t) \overline{J_x(t')}\ dx dt dt' \right)^{1/2},$$ so by the triangle inequality and the pointwise bounds on $\tilde F$ it will suffice to establish the bound \begin{equation}\label{ffjj} \int_I \int_I |F(t)| |F(t')| \left|\int_\R J_x(t) \overline{J_x(t')}\ dx\right| dt dt' \ll \frac{1}{|\beta|^2 X} \int_I \left(\int_{t-|\beta| H}^{t+|\beta|H} |F(t')|\ dt'\right)^2\ dt. \end{equation} We shall shortly establish the bound \begin{equation}\label{jax} \int_\R J_x(t) \overline{J_x(t')}\ dx \ll \frac{H}{|\beta| X \left(1 + \frac{|t-t'|}{|\beta| H}\right)^2}. \end{equation} Assuming this bound, we can bound the left-hand side of \eqref{ffjj} by $$ \ll \frac{1}{(|\beta| H)^2} \int_0^{2|\beta| X/\eta} \int_I A(t) A(t+h) \frac{H}{|\beta| X \left(1 + \frac{h}{|\beta| H}\right)^2} \ dt dh $$ where $A(t) \coloneqq \int_{t-|\beta| H}^{t+|\beta| H} |F(t')| \ dt'$, and from Schur's test (see \cite[Theorem 5.2]{Halmos}) we conclude that this contribution is acceptable. It remains to obtain \eqref{jax}. We first consider the regime where $|t-t'| = O( |\beta| H )$. By Cauchy-Schwarz, this bound will follow if we can obtain the bound \begin{equation}\label{jxxt} \int_\R |J_x(t)|^2\ dx \ll \frac{H}{|\beta| X} \end{equation} for all $t \in I$. To establish this bound, we divide into the cases $\beta H^2 \geq X$ and $\beta H^2 < X$. First suppose that $\beta H^2 \geq X$. Then one has $\varphi''_t(w) \asymp |\beta| X$ on the support of $a_x$, and the cutoff $a_x$ has total variation $O(1)$. Hence by van der Corput estimates (see e.g. \cite[Lemma 8.10]{ik}) we have the bound $$ J_x(t) \ll (|\beta| X)^{-1/2}.$$ Furthermore, if $|\frac{t}{2\pi} + \beta x| \geq C |\beta| H$ for a large constant $C$, then on the support of $a_x$ one has $\varphi'_t(w) \asymp |\frac{t}{2\pi} + \beta x|$, so that $1/\varphi'_t(w)$ is of size $O( \frac{1}{|\frac{t}{2\pi} + \beta x|} )$. A calculation then shows that the $j^{\mathrm{th}}$ derivative of $1/\varphi'_t(w)$ is of size $O( \frac{(X/H)^j}{|\frac{t}{2\pi} + \beta x|} )$ for $j=0,1,2$. Similarly, the $j^{\mathrm{th}}$ derivative of $a_x$ has an $L^1$ norm of $O( (X/H)^{j-1} )$ for $j=0,1,2$. Applying two integrations by parts, we then obtain the bound \begin{equation}\label{jxb} J_x(t) \ll \frac{X/H}{|\frac{t}{2\pi} + \beta x|^2} \end{equation} in this regime. Combining these bounds we obtain \eqref{jxxt} in the case $\beta H^2 \geq X$ after some calculation. Now suppose that $\beta H^2 < X$.
On the one hand, from the triangle inequality we have the bound $$ J_x(t) \ll \frac{H}{X}.$$ On the other hand, if $|\frac{t}{2\pi} + \beta x| \geq C \frac{X}{H}$ for a large constant $C$, then on the support of $a_x$, one can again calculate that the $j^{\mathrm{th}}$ derivative of $1/\varphi'_t(w)$ is of size $O( \frac{(X/H)^j}{|\frac{t}{2\pi} + \beta x|} )$ for $j=0,1,2$, and that the $j^{\mathrm{th}}$ derivative of $a_x$ has an $L^1$ norm of $O( (X/H)^{j-1} )$. This again gives the bound \eqref{jxb} after two integrations by parts. Combining these bounds we obtain \eqref{jxxt} in the case $\beta H^2 < X$ after some calculation. It remains to treat the case when $|t-t'| > C |\beta| H$ for some large constant $C>0$. Here we write $$ \int_\R J_x(t) \overline{J_x(t')}\ dx = H \int_\R \int_\R e( \varphi_t(w) - \varphi_{t'}(w') ) \tilde a( w, w' )\ dw dw' $$ where $$ \tilde a(w,w') \coloneqq e^{w/2} e^{w'/2} \varphi_2\left(\frac{Xe^w-Xe^{w'}}{H}\right) \varphi(w/100) \varphi(w'/100)$$ and $\varphi_2$ is the convolution of $\varphi$ with itself, thus $$ \varphi_2(x) \coloneqq \int_\R \varphi(y) \varphi(x+y)\ dy.$$ We make the change of variables $w' = w+h$ to then write $$ \int_\R J_x(t) \overline{J_x(t')}\ dx = H \int_\R \int_\R e( \varphi_t(w) - \varphi_{t'}(w+h) ) \tilde a( w, w+h )\ dw dh. $$ Observe that $\tilde a(w,w+h)$ vanishes unless $h = O(H/X)$, so we may restrict to this range. If $|t-t'| > C |\beta| H$ for a sufficiently large $C$, a calculation then reveals that on the support of $\tilde a(w,w+h)$, the $w$-derivative of $\varphi_t(w) - \varphi_{t'}(w+h)$ has magnitude comparable to $|t'-t|$, and that the $j^{\mathrm{th}}$ $w$-derivative is of size $O( |\beta| H ) = O( |t'-t| )$ for $j=2,3$. Furthermore, the $j^{\mathrm{th}}$ $w$-derivative of $\tilde a(w,w+h)$ is of size $O(1)$ for $j=0,1,2$. From two integrations by parts we conclude that $$ \int_\R e( \varphi_t(w) - \varphi_{t'}(w+h) ) \tilde a( w, w+h )\ dw \ll \frac{1}{|t'-t|^2}$$ for all $h = O(H/X)$, and hence $$ \int_\R J_x(t) \overline{J_x(t')}\ dx \ll \frac{H^2}{X |t-t'|^2}.$$ This gives \eqref{jax} (with some room to spare), since $|\beta| H \gg 1$. This concludes the proof of \eqref{spe-2}. Now we prove \eqref{spe-4}. Again, we use duality. It suffices to show that \begin{equation} \label{eq:spe-4dual} \int_\R S_f(\theta) g(\theta)\ d\theta \ll \left(\int_I |F(t)|^2\ dt\right)^{1/2} + \frac{\eta + \frac{1}{|\beta| X}}{H} \left(\int_\R \left (\sum_{x \leq n \leq x+H} |f(n)|\right )^2\ dx\right)^{1/2} \end{equation} whenever $g \colon \R \to \C$ is a measurable function supported on $\{ \theta: \beta \leq |\theta| \leq 2\beta \}$ with the normalization \begin{equation}\label{gnorm} \int_\R |g(\theta)|^2\ d\theta = 1. \end{equation} The expression $\int_\R S_f(\theta) g(\theta)\ d\theta$ can be rearranged as $$ \sum_n \frac{f(n)}{n^{1/2}} G(\log n - \log X)$$ where \begin{equation}\label{gdef} G(u) \coloneqq \varphi( u / 10 ) X^{1/2} e^{u/2} \int_\R g(\theta) e(X e^u \theta)\ d\theta, \end{equation} noting that the cutoff $\varphi(u/10)$ will equal $1$ for $n \in [X,2X]$. We again split this sum as the sum of three subsums \eqref{gaj} with $i=1,2,3$, where $G_1,G_2,G_3$ are defined as before. We first control the sum \eqref{gaj} in the ``high frequency'' case $i=3$.
By \eqref{g3-d}, the triangle inequality, and the rapid decay of $\hat \varphi$, we may bound this sum by $$ \ll \frac{\eta}{|\beta| X} \sup_{a,v} \sum_n \frac{|f(n)|}{n^{1/2}} \left|G'\left( \log n - \log X - a \frac{2\pi \eta v}{|\beta| X} \right)\right|.$$ Because of the cutoff $\varphi(u/10)$ in \eqref{gdef}, the sum vanishes unless $a \frac{2\pi \eta v}{|\beta| X} = O(1)$, so we may bound the preceding expression by $$ \ll \frac{\eta}{|\beta| X} \sum_n \frac{|f(n)|}{n^{1/2}} |G'( \log n - \log X' )|$$ for some $X' \asymp X$. Computing the derivative of $G$, we may bound this in turn by $$ \ll \frac{\eta}{|\beta| X} \sum_n |f(n)| \left( \left|\int_\R g(\theta) e\left( \frac{X}{X'} n \theta \right)\ d\theta\right| + \left|\int_\R X \theta g(\theta) e\left( \frac{X}{X'} n \theta \right)\ d\theta\right| \right).$$ We shall just treat the second term here, as the first term is estimated analogously (with significantly better bounds). We write this contribution as $$ \ll \eta \sum_n |f(n)| \left|\int_\R \frac{\theta}{\beta} g(\theta) e\left( \frac{X}{X'} n \theta \right)\ d\theta\right|.$$ By partitioning the support $[X,2X]$ of $f$ into intervals of length $H$, and selecting on each such interval a number $n$ that maximizes the quantity $|\int_\R \frac{\theta}{\beta} g(\theta) e( \frac{X}{X'} n \theta )\ d\theta|$, we may bound this by $$ \ll \eta \sum_{j=1}^J \left(\sum_{|n-n_j| \leq H} |f(n)|\right) \left|\int_\R \frac{\theta}{\beta} g(\theta) e\left( \frac{X}{X'} n_j \theta \right)\ d\theta\right|$$ for some $H$-separated subset $n_1,\dots,n_J$ of $[X,2X]$. By \eqref{gnorm}, the support of $g$ (which in particular makes the factor $\frac{\theta}{\beta}$ bounded), the choice $\beta = 1/H$ and the large sieve inequality (e.g. the dual of \cite[Corollary 3]{monts}), we have $$ \sum_{j=1}^J \left|\int_\R \frac{\theta}{\beta} g(\theta) e\left( \frac{X}{X'} n_j \theta \right)\ d\theta\right|^2 \ll \beta $$ and so by Cauchy-Schwarz, one can bound the preceding expression by $$ \ll \eta \beta^{1/2} \left( \sum_{j=1}^J \left(\sum_{|n-n_j| \leq H} |f(n)|\right)^2 \right)^{1/2}.$$ But one has $$ \left(\sum_{|n-n_j| \leq H} |f(n)|\right)^2 \ll \frac{1}{H} \int_{|x-n_j| \leq 2H} \left(\sum_{x \leq n \leq x+H} |f(n)|\right)^2\ dx $$ for each $j$, and so as $\beta = 1/H$ and the $n_j$ are $H$-separated, the high frequency case $i=3$ of \eqref{gaj} contributes to~\eqref{eq:spe-4dual} $$ \ll \frac{\eta}{H} \left(\int_\R \left(\sum_{x \leq n \leq x+H} |f(n)|\right)^2\ dx\right)^{1/2}$$ which is acceptable. Now we control the ``low frequency'' case $i=1$ of \eqref{gaj}. Using the change of variables $h \coloneqq \frac{2\pi v}{10 \eta |\beta| X}$, we can write this as $$ \frac{10 \eta |\beta| X}{2\pi} \int_\R \int_\R \sum_n f(n) \varphi\left(\frac{\log n - \log X - h}{10}\right) e^{-h/2} g(\theta) e( n e^{-h} \theta ) \hat \varphi\left(\frac{10 \eta |\beta| X h}{2\pi}\right)\ d\theta dh.$$ The integrand vanishes unless $h = O(1)$. 
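(Indeed, the factor $\varphi\left(\frac{\log n - \log X - h}{10}\right)$ is supported where $|\log n - \log X - h| \leq 10$, while $f$ is supported on $(X,2X]$, so that $\log n - \log X \in (0,\log 2]$; thus the integrand is only non-zero when $|h| \leq 11$, say.)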
Writing $$ e(n e^{-h} \theta) = \frac{-1}{2\pi i n e^{-h} \theta} \frac{d}{dh} e(n e^{-h} \theta) $$ and then integrating by parts in the $h$ variable, we can write this expression as $$ \frac{10 \eta |\beta| X}{4\pi^2 i} \int_\R \int_\R \sum_n \frac{f(n) g(\theta)e( n e^{-h} \theta )}{n \theta} \frac{d}{dh} R_{n,\theta}(h)\ d\theta dh$$ where $R_{n,\theta}(h)$ is the quantity $$ R_{n,\theta}(h) \coloneqq e^{h/2} \varphi\left(\frac{\log n - \log X - h}{10}\right) \hat \varphi\left(\frac{10 \eta |\beta| X h}{2\pi}\right).$$ By the Leibniz rule, and the fact that $\widehat{\varphi}$ is a Schwartz function we see that $\frac{d}{dh} R_{n,\theta}(h)$ is supported on the region $h = O(1)$ with $$ \int_\R \left|\frac{d}{dh} R_{n,\theta}(h)\right|\ dh \ll 1.$$ Thus we may bound the preceding expression using the triangle inequality and pigeonhole principle by $$ \ll \eta \sum_n |f(n)| \left| \int_\R \frac{\beta}{\theta} g(\theta) e(n e^h \theta) d\theta \right |$$ for some $h=O(1)$. But by the same large sieve inequality arguments used to control the high frequency case $i=3$ (with $e^h$ now playing the role of $\frac{X}{X'}$), we see that this contribution is acceptable. This concludes the treatment of the low frequency case $i=1$. Finally we consider the main term, which is the ``medium frequency'' case $i=2$ of \eqref{gaj}. As in the proof of \eqref{spe-2}, we may bound this expression by \eqref{g2-expand}. By Cauchy-Schwarz and the Plancherel identity, one may bound this by $$ \ll \left(\int_I |F(t)|^2\ dt\right)^{1/2} \left(\int_\R |G(w)|^2\ dw\right)^{1/2}.$$ By \eqref{gdef} and the change of variables $y \coloneqq X e^w$, we have $$ \int_\R |G(w)|^2\ dw \ll \int_\R \left|\int_\R g(\theta) e(y\theta)\ d\theta\right|^2\ dy,$$ and by \eqref{gnorm} and the Plancherel identity again, the right-hand side is equal to $1$. Hence the $i=2$ case also gives an acceptable contribution to~\eqref{eq:spe-4dual}, and the claim \eqref{spe-4} follows. \end{proof} We can now use Lemma \ref{edc} to obtain an estimate for general $q$: \begin{corollary}[Stationary phase estimate, minor arc case]\label{spem} Let $1 \leq H \leq X$, and let $f \colon \N \to \C$ be a function supported on $(X,2X]$. Let $q \geq 1$, let $a$ be coprime to $q$, and let $\beta, \eta$ be real numbers with $|\beta| \ll \eta \ll 1$. Let $I$ denote the region in \eqref{I-def}. Then we have \begin{align*} \int_{\beta-1/H}^{\beta+1/H} \left|S_f\left(\frac{a}{q}+\theta\right)\right|^2\ d\theta &\ll\\ \frac{d_2(q)^4}{|\beta|^2 H^2 q} &\sup_{q=q_0q_1} \int_I \left(\sum_{\chi\ (q_1)} \int_{t-|\beta| H}^{t+|\beta| H} \left|{\mathcal D}[f]\left(\frac{1}{2}+it',\chi,q_0\right)\right|\ dt'\right)^2\ dt \\ &\quad + \frac{\left(\eta + \frac{1}{|\beta| H}\right)^2 }{H^2} \int_\R \left(\sum_{x \leq n \leq x+H} |f(n)|\right)^2\ dx. \end{align*} If $\beta = 1/H$, one has the variant \begin{align*} \int_{\beta \leq |\theta| \leq 2\beta} \left|S_f\left(\frac{a}{q} + \theta\right)\right|^2\ d\theta &\ll \frac{d_2(q)^4}{q} \sup_{q=q_0 q_1} \int_I \left(\sum_{\chi\ (q_1)} \left|{\mathcal D}[f]\left(\frac{1}{2}+it, \chi, q_0\right)\right|\right)^2\ dt \\ &\quad + \frac{\left(\eta + \frac{1}{|\beta| X}\right)^2 }{H^2} \int_\R \left(\sum_{x \leq n \leq x+H} |f(n)|\right)^2\ dx. \end{align*} \end{corollary} The factor $d_2(q)^4$ might be improvable, but is already negligible in our analysis, so we do not attempt to optimize it. 
The presence of the $q_0$ variable is technical; the most important case is when $q_0=1$ and $q_1=q$, so the reader may wish to restrict to this case for a first reading. It will be important that there are no $q$ factors in the error terms on the right-hand side; this is possible because we estimate the left-hand side in terms of Dirichlet series at moderate values of $t$ \emph{before} decomposing into Dirichlet characters. \begin{proof} We just prove the first estimate, as the second is similar. By applying Proposition \ref{spe} with $f$ replaced by $f e(a\cdot /q)$, we obtain the bound \begin{align*} \int_{\beta - 1/H}^{\beta + 1/H} |S_f(a/q + \theta)|^2 d \theta &\ll \frac{1}{|\beta|^2 H^2} \int_I \left(\int_{t-|\beta| H}^{t+|\beta| H} \left|{\mathcal D}[f e(a\cdot/q)]\left(\frac{1}{2}+it'\right)\right|\ dt'\right)^2\ dt \\ &\quad + \frac{\left(\eta + \frac{1}{|\beta| H}\right)^2 }{H^2} \int_\R \left(\sum_{x \leq n \leq x+H} |f(n)|\right)^2\ dx. \end{align*} From Lemma \ref{edc} we have $$ \int_{t-|\beta| H}^{t+|\beta| H} \left|{\mathcal D}[fe(a\cdot/q)]\left(\frac{1}{2}+it'\right)\right|\ dt' \leq \frac{d_2(q)}{\sqrt{q}} \sum_{q = q_0 q_1} \sum_{\chi\ (q_1)} \int_{t-|\beta| H}^{t+|\beta| H} \left|{\mathcal D}[f]\left(\frac{1}{2}+it',\chi,q_0\right)\right|\ dt'$$ and hence by Cauchy-Schwarz \begin{align*} & \left(\int_{t-|\beta| H}^{t+|\beta| H} \left|{\mathcal D}[fe(a\cdot/q)]\left(\frac{1}{2}+it'\right)\right|\ dt'\right)^2 \\ &\quad \leq \frac{d_2(q)^3}{q} \sum_{q = q_0 q_1} \left(\sum_{\chi\ (q_1)} \int_{t-|\beta| H}^{t+|\beta| H} \left|{\mathcal D}[f]\left(\frac{1}{2}+it',\chi,q_0\right)\right|\ dt'\right)^2. \end{align*} Inserting this bound and bounding the summands in the $q=q_0q_1$ summation by their supremum yields the claim. \end{proof} If the function $f$ in the above corollary is $k$-divisor-bounded, then by Cauchy-Schwarz and \eqref{divisor-crude} we have \begin{align*} \frac{1}{H^2} \int_\R \left(\sum_{x \leq n \leq x+H} |f(n)|\right)^2\ dx &\ll \frac{1}{H} \int_\R \sum_{x \leq n \leq x+H} |f(n)|^2\ dx \\ &\ll \sum_n |f(n)|^2 \\ &\ll_k X \log^{O_k(1)} X. \end{align*} Applying the above corollary with $\eta \coloneqq Q^{-1/2} = \log^{-B/2} X$, Proposition \ref{minor} is now an immediate consequence of the following mean value estimates for Dirichlet series. \begin{proposition}[Mean value estimate]\label{mve} Let $\eps > 0$ be a sufficiently small constant, and let $A>0$. Let $k \geq 2$ be fixed, let $B > 0$ be sufficiently large depending on $k,A$, and let $X \geq 2$. Set \begin{equation}\label{H-def} H \coloneqq X^{\sigma+\eps} \end{equation} and \begin{equation}\label{Q-def} Q \coloneqq \log^B X. \end{equation} Let $1 \leq q \leq Q$, and suppose that $q = q_0 q_1$. Let $\lambda$ be a positive quantity such that \begin{equation}\label{l6} X^{-1/6-\eps} \leq \lambda \ll \frac{1}{qQ}. \end{equation} Let $f \colon \N \to \C$ be either the function $f \coloneqq \Lambda 1_{(X,2X]}$ or $f \coloneqq d_k 1_{(X,2X]}$. Then we have \begin{equation}\label{mve-est} \int_{Q^{-1/2} \lambda X \ll |t| \ll Q^{1/2} \lambda X} \left(\sum_{\chi\ (q_1)} \int_{t-\lambda H}^{t+\lambda H} \left|{\mathcal D}[f]\left(\frac{1}{2}+it',\chi,q_0\right)\right|\ dt'\right)^2\ dt \ll_{k,\eps, A, B} q \lambda^2 H^2 X \log^{-A} X. \end{equation} \end{proposition} \begin{proposition}\label{harm-prop} Let $\eps > 0$ be a sufficiently small constant, and let $A, B > 0$. Let $k \geq 2$ be fixed, let $X \geq 2$, and suppose that $q_0, q_1$ are natural numbers with $q_0, q_1 \leq \log^B X$.
Let $f \colon \N \to \C$ be either the function $f \coloneqq \Lambda 1_{(X,2X]}$ or $f \coloneqq d_k 1_{(X,2X]}$. Let $B'$ be sufficiently large depending on $k, A, B$. Then one has \begin{equation}\label{drip0} \int_{\log^{B'} X \ll |t'| \ll X^{5/6 - \eps} } \left( \sum_{\chi\ (q_1)} \left|{\mathcal D}[f]\left(\frac{1}{2}+it',\chi,q_0\right)\right|\right)^2\ dt' \ll_{k,\eps,A,B,B'} q X \log^{-A} X. \end{equation} \end{proposition} Proposition \ref{harm-prop} is comparable\footnote{See \cite[Lemma 9.3]{harman} for a precise connection between $L^2$ mean value theorems such as \eqref{drip0} and estimates for sums of $f \chi$ on short intervals.} in strength to the prime number theorem (in arithmetic progressions) in almost all intervals of the form $[x, x+x^{1/6+\eps}]$. A popular approach to proving such theorems is via zero density estimates (see e.g. \cite[\S 10.5]{ik}); this works well in the case $f = \Lambda 1_{(X,2X]}$, but is not as suitable for treating the case $f = d_k 1_{(X,2X]}$. We will instead adapt a slightly different approach from \cite{harman} using combinatorial decompositions and mean value theorems and large value theorems for Dirichlet polynomials; the details of the argument will be given in Appendix \ref{harman-sec}. In the case $f = d_3 1_{(X,2X]}$, an estimate closely related to Proposition \ref{harm-prop} was established in \cite[Theorem 1.1]{bbmz}, relying primarily on sixth moment estimates for the Riemann zeta function. It remains to prove Proposition \ref{mve}. This will be done in the remaining sections of the paper. \section{Combinatorial decompositions}\label{hb-sec} Let $\eps, k, A, B, H, X, Q, q_0, q_1, q, \lambda, f$ be as in Proposition \ref{mve}. We may assume without loss of generality that $\eps$ is small, say $\eps < 1/100$; we may also assume that $X$ is sufficiently large depending on $k,\eps$. We first invoke Lemma \ref{comb-decomp-2} with $m=5$, and with $\eps$ and $H_0$ replaced by $\eps^2$ and $X^{-\eps^2} H$ respectively; this choice of $m$ is available thanks to \eqref{sigma-range}. 
We conclude that the function $(t,\chi) \mapsto {\mathcal D}[f](\frac{1}{2}+it, \chi, q_0)$ can be decomposed as a linear combination (with coefficients of size $O_{k,\eps}( d_2(q_0)^{O_{k,\eps}(1)} )$) of $O_{k,\eps}( \log^{O_{k,\eps}(1)} X)$ functions of the form $(t,\chi) \mapsto {\mathcal D}[\tilde f]\left(\frac{1}{2}+it,\chi\right)$, where $\tilde f \colon \N \to \C$ is one of the following forms: \begin{itemize} \item[(Type $d_1$, $d_2$, $d_3$, $d_4$ sums)] A function of the form \begin{equation}\label{fst-yetagain} \tilde f = (\alpha \ast \beta_1 \ast \dots \ast \beta_j) 1_{(X/q_0,2X/q_0]} \end{equation} for some arithmetic functions $\alpha,\beta_1,\dots,\beta_j \colon \N \to \C$, where $j=1,2,3,4$, $\alpha$ is $O_{k,\eps}(1)$-divisor-bounded and supported on $[N,2N]$, and each $\beta_i$, $i=1,\dots,j$ is either of the form $\beta_i = 1_{(M_i,2M_i]}$ or $\beta_i = L 1_{(M_i,2M_i]}$ for some $N, M_1,\dots,M_j$ obeying the bounds $$ 1 \ll N \ll_{k,\eps} X^{\eps^2},$$ $$ N M_1 \dots M_j \asymp_{k,\eps} X/q_0,$$ and $$ X^{-\eps^2} H \ll M_1 \ll \dots \ll M_j \ll X/q_0.$$ \item[(Type II sum)] A function of the form $$ \tilde f = (\alpha \ast \beta) 1_{(X/q_0,2X/q_0]}$$ for some $O_{k,\eps}(1)$-divisor-bounded arithmetic functions $\alpha,\beta \colon \N \to \C$ supported on $[N,2N]$ and $[M,2M]$ respectively, for some $N,M$ obeying the bounds $$ X^{\eps^2} \ll N \ll X^{-\eps^2} H$$ and $$NM \asymp_{k,\eps} X / q_0.$$ \item[(Small sum)] A function $\tilde f$ supported on $(X/q_0,2X/q_0]$ obeying the bound \begin{equation}\label{smallsum} \|\tilde f \|_{\ell^2}^2 \ll_{k,\eps} X^{1-\eps^2/8}. \end{equation} \end{itemize} We have omitted the conclusion of good cancellation in the Type II case as it is not required in the regime $\lambda \gg X^{-1/6-\eps}$ under consideration. By the triangle inequality, it thus suffices to show that for $\tilde f$ being a sum of one of the above forms, that we have the bound \begin{equation}\label{trivb} \begin{split} &\int_{Q^{-1/2} \lambda X \ll |t| \ll Q^{1/2} \lambda X} \left(\sum_{\chi\ (q_1)} \int_{t-\lambda H}^{t+\lambda H} \left|{\mathcal D}[\tilde f]\left(\frac{1}{2}+it',\chi\right)\right|\ dt'\right)^2\ dt \\ &\quad \ll_{k,\eps,A,B} d_2(q_1)^{O_k(1)} q_1 \lambda^2 X H^2 \log^{-A} X \end{split} \end{equation} (noting from \eqref{alpha-infty} that the factors of $d_2(q_1)^{O_k(1)}$ can be easily absorbed into the $\log^{-A} X$ factor after increasing $A$ slightly). We can easily dispose of the small case. From Cauchy-Schwarz one has $$ \left(\sum_{\chi\ (q_1)} \int_{t-\lambda H}^{t+\lambda H} \left|{\mathcal D}[\tilde f]\left(\frac{1}{2}+it',\chi\right)\right|\ dt'\right)^2 \ll q_1 \lambda H \sum_{\chi\ (q_1)} \int_{t-\lambda H}^{t+\lambda H} \left|{\mathcal D}[\tilde f]\left(\frac{1}{2}+it',\chi\right)\right|^2\ dt'$$ and hence after interchanging the integrals, the left-hand side of \eqref{trivb} can be bounded by $$ \ll q_1 \lambda^2 H^2 \sum_{\chi\ (q_1)} \int_{|t| \ll Q^{1/2} \lambda X} \left|{\mathcal D}[\tilde f]\left(\frac{1}{2}+it,\chi\right)\right|^2\ dt.$$ Using Lemma \ref{mvt-lem-ch} we can bound this by $$ \ll_{k,\eps} q_1 \lambda^2 H^2 \frac{X / q_0 + q_1 Q^{1/2} \lambda X}{X/q_0} \|\tilde f\|_{\ell^2}^2 \log^3 X.$$ Crudely bounding $q_0, q_1, Q, \lambda \leq \log^B X$, the claim \eqref{trivb} then follows in this case from \eqref{smallsum}. It remains to consider $\tilde f$ that are of Type $d_1$, Type $d_2$, Type $d_3$, Type $d_4$, or Type II. 
In all cases we can write $\tilde f = f' 1_{(X/q_0, 2X/q_0]}$, where $f'$ is a Dirichlet convolution of the form $\alpha \ast \beta_1 \ast \dots \ast \beta_j$ (in the Type $d_j$ cases) or of the form $\alpha \ast \beta$ (in the Type II case). It is now convenient to remove the $1_{(X/q_0,2X/q_0]}$ truncation. Applying Corollary \ref{trunc-dir} with $T \coloneqq \lambda X^{1-\varepsilon/10}$ and $f$ replaced by $f' \chi$, and using the divisor bound \eqref{alpha-infty} to control the supremum norm, we see that $$ {\mathcal D}[\tilde f]\left(\frac{1}{2}+it,\chi\right) \ll_{\eps} \int_{|u| \leq \lambda X^{1-\varepsilon/10}} |F(t+u)| \frac{du}{1+|u|} + \frac{X^{-1/2+\eps/5}}{\lambda}$$ where $$ F(t) \coloneqq {\mathcal D}[f']\left(\frac{1}{2}+it,\chi\right).$$ We can thus bound the left-hand side of \eqref{trivb} by \begin{align*} &\ll_{\eps,k} \int_{Q^{-1/2} \lambda X \ll |t| \ll Q^{1/2} \lambda X} \left(\int_{|u| \leq \lambda X^{1-\varepsilon/10}} \sum_{\chi\ (q_1)} \int_{t+u-\lambda H}^{t+u+\lambda H} |F(t')|\ dt' \frac{du}{1+|u|}\right)^2\ dt \\ &\quad + \left(Q^{1/2} \lambda X \right) (q_1 \lambda H)^2 \left(\frac{X^{-1/2+\eps/5}}{\lambda}\right)^2. \end{align*} The second term can be written as $$ Q^{1/2} \frac{X^{2\eps/5} q_1}{\lambda X} q_1 \lambda^2 X H^2;$$ since $\lambda \geq X^{-1/6-\varepsilon}$ and $q_1 \leq Q \leq (\log X)^B$, we see that this contribution to \eqref{trivb} is acceptable. Meanwhile, as $\frac{1}{1+|u|}$ has an integral of $O( \log X )$ on the region $|u| \leq \lambda X^{1-\varepsilon/10}$, we see from the Minkowski integral inequality in $L^2$ and on shifting $t$ by $u$ that \begin{align*} &\int_{Q^{-1/2} \lambda X \ll |t| \ll Q^{1/2} \lambda X} \left(\int_{|u| \leq \lambda X^{1-\varepsilon/10}} \sum_{\chi\ (q_1)} \int_{t+u-\lambda H}^{t+u+\lambda H} |F(t')|\ dt' \frac{du}{1+|u|}\right)^2\ dt \\ &\leq \left( \int_{|u| \leq \lambda X^{1-\varepsilon/10}} \left( \int_{Q^{-1/2} \lambda X \ll |t| \ll Q^{1/2} \lambda X} \left(\sum_{\chi\ (q_1)} \int_{t+u-\lambda H}^{t+u+\lambda H} |F(t')|\ dt'\right)^2\ dt\right)^{1/2} \frac{du}{1+|u|}\right)^2 \\ &\ll \log^2 X \int_{Q^{-1/2} \lambda X \ll |t| \ll Q^{1/2} \lambda X} \left(\sum_{\chi\ (q_1)} \int_{t-\lambda H}^{t+\lambda H} |F(t')|\ dt'\right)^2\ dt \end{align*} where we allow the implied constants in the region $\{ Q^{-1/2} \lambda X \ll |t| \ll Q^{1/2} \lambda X\}$ to vary from line to line. Putting all this together, we now see that Proposition \ref{mve} will be a consequence of the following estimates. \begin{proposition}[Estimates for Type $d_1,d_2,d_3,d_4$,II sums]\label{types} Let $\eps>0$ be sufficiently small. Let $k \geq 2$ and $A>0$ be fixed, and let $B>0$ be sufficiently large depending on $A,k$. Let $X \geq 2$, and set $H \coloneqq X^{\sigma+\eps}$. Set $Q \coloneqq \log^B X$, and let $1 \leq q_1 \leq Q$. Let $\lambda$ be a quantity such that $X^{-1/6-2\eps} \leq \lambda \ll \frac{1}{q_1 Q}$.
Let $f \colon \N \to \C$ be a function of one of the following forms: \begin{itemize} \item[(Type $d_1,d_2,d_3,d_4$ sums)] One has \begin{equation}\label{fbb} f = \alpha \ast \beta_1 \ast \dots \ast \beta_j \end{equation} for some $O_{k,\eps}(1)$-divisor-bounded arithmetic functions $\alpha,\beta_1,\dots,\beta_j \colon \N \to \C$, where $j=1,2,3,4$, $\alpha$ is supported on $[N,2N]$, and each $\beta_i$, $i=1,\dots,j$ is supported on $[M_i, 2M_i]$ for some $N, M_1,\dots,M_j$ obeying the bounds $$ 1 \ll N \ll_{k,\eps} X^{\eps^2},$$ $$ X/Q \ll N M_1 \dots M_j \ll_{k,\eps} X,$$ and $$ X^{-\eps^2} H \ll M_1 \ll \dots \ll M_j \ll X.$$ Furthermore, each $\beta_i$ is either of the form $\beta_i = 1_{(M_i,2M_i]}$ or $\beta_i = L 1_{(M_i,2M_i]}$. \item[(Type II sum)] One has $$ f = \alpha \ast \beta$$ for some $O_{k,\eps}(1)$-divisor-bounded arithmetic functions $\alpha,\beta \colon \N \to \C$ supported on $[N,2N]$ and $[M,2M]$ respectively, for some $N,M$ obeying the bounds \begin{equation}\label{nib} X^{\eps^2} \ll N \ll X^{-\eps^2} H \end{equation} and \begin{equation}\label{xq} X/Q \ll NM \ll_{k,\eps} X. \end{equation} \end{itemize} Then \begin{equation}\label{lk} \int_{Q^{-1/2} \lambda X \ll |t| \ll Q^{1/2} \lambda X} \left(\sum_{\chi\ (q_1)} \int_{t-\lambda H}^{t+\lambda H} \left|{\mathcal D}[f]\left(\frac{1}{2}+it',\chi\right)\right|\ dt'\right)^2\ dt \ll_{k,\eps,A,B} q_1 \lambda^2 H^2 X \log^{-A} X. \end{equation} \end{proposition} It remains to prove Proposition \ref{types}. We will deal with the Type $d_1$, Type $d_2$, Type $d_4$, and Type II cases in the next section; the Type $d_3$ case is trickier, and we will prove it in the next section only assuming an averaged estimate for exponential sums, which will in turn be proven in Section \ref{vdc-sec}. \section{Proof of Proposition~\ref{types}} We begin with the Type II case. Since $f = \alpha \ast \beta$, we may factor $$ {\mathcal D}[f]\left(\frac{1}{2}+it',\chi\right) = {\mathcal D}[\alpha]\left(\frac{1}{2}+it',\chi\right) {\mathcal D}[\beta]\left(\frac{1}{2}+it',\chi\right) $$ and hence by Cauchy-Schwarz we have \begin{align*} &\left(\sum_{\chi\ (q_1)} \int_{t-\lambda H}^{t+\lambda H} \left|{\mathcal D}[f]\left(\frac{1}{2}+it',\chi\right)\right|\ dt'\right)^2 \\ &\ll \left(\sum_{\chi\ (q_1)} \int_{t-\lambda H}^{t+\lambda H}\left |{\mathcal D}[\alpha]\left(\frac{1}{2}+it',\chi\right)\right|^2\ dt'\right) \\ &\quad \times \left(\sum_{\chi\ (q_1)} \int_{t-\lambda H}^{t+\lambda H} \left|{\mathcal D}[\beta]\left(\frac{1}{2}+it',\chi\right)\right|^2\ dt'\right) \end{align*} for any $t$.
From Lemma \ref{mvt-lem-ch} we have $$ \sum_{\chi\ (q_1)} \int_{t-\lambda H}^{t+\lambda H} \left|{\mathcal D}[\alpha]\left(\frac{1}{2}+it',\chi\right)\right|^2\ dt' \ll_{k,\eps} (q_1 \lambda H + N) \log^{O_{k,\eps}(1)} X $$ while from Fubini's theorem and Lemma \ref{mvt-lem-ch} we have $$ \int_{Q^{-1/2} \lambda X \ll |t| \ll Q^{1/2} \lambda X} \sum_{\chi\ (q_1)} \int_{t-\lambda H}^{t+\lambda H} \left|{\mathcal D}[\beta]\left(\frac{1}{2}+it',\chi\right)\right|^2\ dt' \ll_{k,\eps} \lambda H (q_1 Q^{1/2} \lambda X + M) \log^{O_{k,\eps}(1)} X $$ and so we can bound the left-hand side of \eqref{lk} by $$ \ll_{k,\eps} (q_1 \lambda H + N)\left( q_1 Q^{1/2} \lambda X + M \right) \lambda H \log^{O_{k,\eps}(1)} X.$$ We rewrite this expression using \eqref{xq} as $$ \ll_{k,\eps} q_1 \left( Q^{1/2} q_1 \lambda + \frac{Q^{1/2} N}{H} + \frac{1}{N} + \frac{1}{q_1 \lambda H} \right) \lambda^2 H^2 X \log^{O_{k,\eps}(1)} X.$$ Using the hypotheses \eqref{l6}, \eqref{nib}, \eqref{H-def}, \eqref{Q-def}, we obtain \eqref{lk} in the Type II case as required. Now we handle the Type $d_1$ and $d_2$ cases. Actually we may unify the $d_1$ case into the $d_2$ case by adding a dummy factor $\beta_2$ (i.e $\beta_2(n) = \mathbf{1}_{n = 1}$) so that in both cases we have $$ f = \alpha \ast \beta_1 \ast \beta_2$$ where $\alpha$ is supported on $[N,2N]$ and is $O_{k,\eps,B}(1)$-divisor-bounded, and each $\beta_i$ is either $1_{(M_i,2M_i]}$ or $L 1_{(M_i,2M_i]}$, where \begin{equation}\label{nib2} 1 \ll N \ll X^{\eps^2} \end{equation} and $$ X/Q \ll N M_1 M_2 \ll X.$$ We may factor $$ {\mathcal D}[f]\left(\frac{1}{2}+it',\chi\right) = {\mathcal D}[\alpha]\left(\frac{1}{2}+it',\chi\right) {\mathcal D}[\beta_1]\left(\frac{1}{2}+it',\chi\right) {\mathcal D}[\beta_2]\left(\frac{1}{2}+it',\chi\right).$$ By Cauchy-Schwarz we have \begin{align*} &\left(\sum_{\chi\ (q_1)} \int_{t-\lambda H}^{t+\lambda H} \left|{\mathcal D}[f]\left(\frac{1}{2}+it',\chi\right)\right|\ dt'\right)^2 \\ &\ll \left(\sum_{\chi\ (q_1)} \int_{t-\lambda H}^{t+\lambda H} \left|{\mathcal D}[\alpha]\left(\frac{1}{2}+it',\chi\right)\right|^2\ dt'\right) \\ &\quad \times \left(\sum_{\chi\ (q_1)} \int_{t-\lambda H}^{t+\lambda H} \left|{\mathcal D}[\beta_1]\left(\frac{1}{2}+it',\chi\right)\right|^2 \left|{\mathcal D}[\beta_2]\left(\frac{1}{2}+it',\chi\right)\right|^2\ dt'\right). \end{align*} From Lemma \ref{mvt-lem} we have $$ \sum_{\chi\ (q_1)} \int_{t-\lambda H}^{t+\lambda H} \left|{\mathcal D}[\alpha]\left(\frac{1}{2}+it',\chi\right)\right|^2\ dt' \ll_{k,\eps} (q_1 \lambda H + N) \log^{O_{k,\eps}(1)} X $$ so from Fubini's theorem we can bound the left-hand side of \eqref{lk} by \begin{align*} & \ll_{k,\eps} (q_1 \lambda H + N) \lambda H \log^{O_{k,\eps}(1)} X \\ &\quad \times \sum_{\chi\ (q_1)} \int_{Q^{-1/2} \lambda X \ll |t| \ll Q^{1/2} \lambda X} \left|{\mathcal D}[\beta_1]\left(\frac{1}{2}+it,\chi\right)\right|^2 \left|{\mathcal D}[\beta_2]\left(\frac{1}{2}+it,\chi\right)\right|^2\ dt. \end{align*} By the pigeonhole principle, we can thus bound the left-hand side of \eqref{lk} by \begin{align*} & \ll_{k,\eps} (q_1 \lambda H + N) \lambda H \log^{O_{k,\eps}(1)} X \\ &\qquad \times \sum_{\chi\ (q_1)} \int_{T/2 \leq |t| \leq T} \left|{\mathcal D}[\beta_1]\left(\frac{1}{2}+it,\chi\right)\right|^2 \left|{\mathcal D}[\beta_2]\left(\frac{1}{2}+it,\chi\right)\right|^2\ dt \end{align*} for some $T$ with \begin{equation}\label{tb} Q^{-1/2} \lambda X \ll T \ll Q^{1/2} \lambda X. 
\end{equation} By Corollary \ref{fourth-integral} and the triangle inequality, we have $$ \sum_{\chi\ (q_1)} \int_{T/2 \leq |t| \leq T} \left|{\mathcal D}[\beta_1]\left(\frac{1}{2}+it,\chi\right)\right|^4\ dt \ll q_1 T \left(1 + \frac{q_1^2}{T^2} + \frac{M_1^2}{T^4} \right) \log^{O(1)} X; $$ since $T \gg \lambda X Q^{-1/2} \geq \max\{M_1^{1/2}, q_1\}$ by our assumptions, we get $$ \sum_{\chi\ (q_1)} \int_{T/2 \leq |t| \leq T}\left |{\mathcal D}[\beta_1]\left(\frac{1}{2}+it,\chi\right)\right|^4\ dt \ll q_1 T \log^{O(1)} X.$$ Similarly with $\beta_1$ replaced by $\beta_2$. By Cauchy-Schwarz, we thus have $$ \sum_{\chi\ (q_1)} \int_{T/2 \leq |t| \leq T} \left|{\mathcal D}[\beta_1]\left(\frac{1}{2}+it,\chi\right)\right|^2 \left|{\mathcal D}[\beta_2]\left(\frac{1}{2}+it,\chi\right)\right|^2\ dt \ll q_1 T \log^{O(1)} X$$ and so we can bound the left-hand side of \eqref{lk} by $$ \ll_{k,\eps} (q_1 \lambda H + N) \lambda H q_1 T \log^{O_{k,\eps}(1)} X.$$ Using \eqref{tb}, we can bound this by $$ \ll_{k,\eps} q_1 \left(q_1 Q^{1/2}\lambda + \frac{N Q^{1/2}}{H} \right) \lambda^2 H^2 X \log^{O_{k,\eps}(1)} X.$$ Using \eqref{l6}, \eqref{nib2}, \eqref{H-def}, \eqref{Q-def}, we obtain \eqref{lk} as desired. \begin{remark} The above arguments recover the results of Mikawa \cite{mikawa} and Baier, Browning, Marasingha, and Zhao \cite{bbmz}, in which $\sigma$ is now set equal to $\frac{1}{3}$ so that we can take $m = 3$, and the Type $d_3$ and Type $d_4$ sums do not appear. \end{remark} Now we turn to the Type $d_j$ cases for $j=3,4$. Here we have \begin{equation}\label{nam-d4} N M_1 \dots M_j \ll_{k,\eps} X \end{equation} and \begin{equation}\label{nam2-d4} X^{-\eps^2} H \ll M_1 \ll \dots \ll M_j \end{equation} which implies that \begin{equation}\label{mij} M_1 \ll X^{1/j}. \end{equation} We now factor $f = \beta_1 * g$ where $g \coloneqq \alpha \ast \beta_2 \ast \dots \ast \beta_j$, so that $$ {\mathcal D}[f]\left(\frac{1}{2}+it,\chi\right) = {\mathcal D}[\beta_1]\left(\frac{1}{2}+it,\chi\right) {\mathcal D}[g]\left(\frac{1}{2}+it,\chi\right).$$ The function $g$ is supported in the range $\{ n: n \asymp N M_2 \dots M_j \}$ and is $O_{k,\eps}(1)$-divisor bounded. By Cauchy-Schwarz, the left-hand side of \eqref{lk} may be bounded by \begin{align*} & \int_{Q^{-1/2} \lambda X \ll |t| \ll Q^{1/2} \lambda X} \left(\sum_{\chi\ (q_1)} \int_{t-\lambda H}^{t+\lambda H} \left|{\mathcal D}[g]\left(\frac{1}{2}+it',\chi\right)\right|^2\ dt'\right) \\ &\quad \left(\sum_{\chi\ (q_1)} \int_{t-\lambda H}^{t+\lambda H} \left|{\mathcal D}[\beta_1]\left(\frac{1}{2}+it'',\chi\right)\right|^2\ dt''\right)\ dt. \end{align*} Using Fubini's theorem to perform the $t$ integral first, we can estimate this by \begin{align*} & \ll \lambda H \sum_{\chi\ (q_1)} \int_{Q^{-1/2} \lambda X \ll |t'| \ll Q^{1/2} \lambda X} \left|{\mathcal D}[g]\left(\frac{1}{2}+it',\chi\right)\right|^2 \\ &\quad \left(\sum_{\chi'\ (q_1)} \int_{t'-2\lambda H}^{t'+2\lambda H} \left|{\mathcal D}[\beta_1]\left(\frac{1}{2}+it'',\chi'\right)\right|^2\ dt''\right)\ dt'. \end{align*} We fix a smooth non-negative Schwartz function $\eta \colon \R \to \R$, positive on $[-2,2]$ and whose Fourier transform $\hat \eta(u) \coloneqq \int_\R \eta(t) e(-tu)\ dt$ is supported on $[-1,1]$ with $\hat \eta(0)=1$.
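For concreteness, one admissible choice of $\eta$ (a standard construction, recorded here only for the reader's convenience; the specific constants are unimportant) is as follows. Let $\psi_0 \colon \R \to \R$ be any smooth non-negative function, not identically zero, supported on $[-1/100,1/100]$, and set $$ \eta(t) \coloneqq \frac{|\hat \psi_0(t)|^2}{\int_\R \psi_0^2(x)\ dx}, \qquad \hat \psi_0(t) \coloneqq \int_\R \psi_0(x) e(-tx)\ dx.$$ Then $\eta$ is a smooth non-negative Schwartz function, its Fourier transform $$ \hat \eta(u) = \frac{\int_\R \psi_0(x) \psi_0(x+u)\ dx}{\int_\R \psi_0^2(x)\ dx}$$ is supported on $[-1/50,1/50] \subset [-1,1]$ and satisfies $\hat \eta(0)=1$, and $\eta(t) > 0$ for $|t| \leq 2$, since $\mathrm{Re}\, \hat\psi_0(t) = \int_\R \psi_0(x) \cos(2\pi t x)\ dx > 0$ there (the phases $2\pi t x$ have magnitude at most $4\pi/100 < \pi/2$ on the support of $\psi_0$).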
Since $\lambda \leq \frac{1}{q_1 Q}$, we can bound \begin{align*} &\sum_{\chi'\ (q_1)} \int_{t'-2\lambda H}^{t'+2\lambda H} \left|{\mathcal D}[\beta_1]\left(\frac{1}{2}+it'',\chi'\right)\right|^2\ dt'' \\ &\ll \sum_{\chi'\ (q_1)} \int_\R \left|{\mathcal D}[\beta_1]\left(\frac{1}{2}+it'',\chi'\right)\right|^2 \eta\left(\frac{(t''-t') q_1 Q}{H}\right) \ dt'' \\ &= \int_\R \eta\left(\frac{(t''-t') q_1 Q}{H}\right) \sum_{\chi'\ (q_1)} \sum_{m_1,m_2} \frac{\beta_1(m_1) \overline{\beta_1(m_2)} \chi'(m_1) \overline{\chi'}(m_2)}{m_1^{1/2+it''} m_2^{1/2-it''}}\ dt'' \\ &= \varphi(q_1) \int_\R \eta\left(\frac{(t''-t') q_1 Q}{H}\right) \sum_{1 \leq a \leq q_1: (a,q_1)=1} \left|\sum_{m = a\ (q_1)} \frac{\beta_1(m)}{m^{1/2+it''}}\right|^2\ dt'' \\ &\ll q_1 \int_\R \eta\left(\frac{(t''-t') q_1 Q}{H}\right) \sum_{1 \leq a \leq 2q_1} \left|\sum_{m = a\ (2q_1)} \frac{\beta_1(m)}{m^{1/2+it''}}\right|^2\ dt'' \\ &= \frac{H}{Q} \sum_{m_1 = m_2\ (2q_1)} \frac{\beta_1(m_1) \beta_1(m_2)}{m_1^{1/2+it'} m_2^{1/2-it'}} \hat \eta\left( \frac{H}{2\pi q_1 Q} \log \frac{m_1}{m_2} \right) \\ &= \frac{H}{Q} \sum_{m,\ell} \frac{\beta_1(m+q_1 \ell) \beta_1(m-q_1 \ell)}{(m+q_1 \ell)^{1/2+it'} (m-q_1 \ell)^{1/2-it'}} \hat \eta\left( \frac{H}{2\pi q_1 Q} \log \frac{m+q_1 \ell}{m-q_1 \ell} \right) \end{align*} where we have used the balanced change of variables $(m_1,m_2) = (m+q_1\ell, m-q_1\ell)$ to obtain some cancellation in a Taylor expansion that will be performed in the next section. Observe that the $\ell=0$ contribution to the above expression is $O(\frac{H}{Q}\log X (\log M_1)^2)$; also, the quantity $\beta_1(m+q_1 \ell) \beta_1(m-q_1 \ell) \hat{\eta}$ is only non-vanishing when $m - q_1 \ell \asymp M_1$ and $\log \frac{m+q_1 \ell}{m-q_1 \ell} \ll \frac{q_1 Q}{H} \ll \frac{Q^2}{H}$, which implies that $q_1 \ell \ll \frac{Q^2 M_1}{H}$ and $m \asymp M_1$. By symmetry and the triangle inequality (and crudely summing over $q_1 \ell$ instead of over $\ell$) we have $$ \sum_{\chi\ (q_1)} \int_{t'-2\lambda H}^{t'+2\lambda H} \left|{\mathcal D}[\beta_1]\left(\frac{1}{2}+it'',\chi\right)\right|^2\ dt'' \ll Y(t') + \frac{H}{Q} \log X (\log M_1)^2$$ where $Y(t')$ denotes the quantity $$ Y(t') \coloneqq \frac{H}{Q} \sum_{1 \leq \ell \ll \frac{Q^2 M_1}{H}} \left|\sum_m \frac{\beta_1(m+\ell) \beta_1(m-\ell)}{(m+\ell)^{1/2+it'} (m-\ell)^{1/2-it'}} \hat \eta\left( \frac{H}{2\pi q_1 Q} \log \frac{m+ \ell}{m- \ell}\right)\right|. $$ The function $m \mapsto \frac{\beta_1(m+\ell) \beta_1(m-\ell)}{(m+\ell)^{1/2} (m-\ell)^{1/2}} \hat \eta( \frac{H}{2\pi q_1 Q} \log \frac{m+ \ell}{m- \ell})$ is supported on the interval $[M_1+\ell, 2M_1-\ell]$, is of size $O_\eps( X^{O(\eps^2)} / M_1 )$ on this interval, and has derivative of size $O_\eps( X^{O(\eps^2)} / M_1^2 )$. Thus by Lemma \ref{sbp-2}, one has $Y(t') \ll_\eps X^{O(\eps^2)} \tilde Y(t')$, where $\tilde Y(t')$ denotes the quantity \begin{equation}\label{yat} \tilde Y(t') \coloneqq \frac{H}{M_1} \sum_{1 \leq \ell \ll \frac{Q^2 M_1}{H}} \left|\sum_{M_1 \leq m \leq 2M_1} e\left( \frac{t'}{2\pi} \log \frac{m+ \ell}{m- \ell} \right) \right|^*. 
\end{equation} We may thus bound the left-hand side of \eqref{lk} by $O(Z_1+Z_2)$, where $$ Z_1 \coloneqq \lambda H \sum_{\chi\ (q_1)} \int_{\lambda X/Q^{1/2} \ll |t'| \ll Q^{1/2} \lambda X} \left|{\mathcal D}[g]\left(\frac{1}{2}+it',\chi\right)\right|^2 \tilde Y(t')\ dt'$$ and $$ Z_2 \coloneqq \frac{\lambda H^2}{Q} \log X (\log M_1)^2 \sum_{\chi\ (q_1)} \int_{\lambda X/Q^{1/2} \ll |t'| \ll Q^{1/2} \lambda X} \left|{\mathcal D}[g]\left(\frac{1}{2}+it',\chi\right)\right|^2\ dt'.$$ From Lemma \ref{mvt-lem-ch} we have $$ Z_2 \ll_k \frac{\lambda H^2}{Q} (q_1 Q^{1/2} \lambda X + N M_2 \dots M_j) \log^{O_k(1)} X.$$ From \eqref{nam-d4}, \eqref{nam2-d4} we have $$ N M_2 \dots M_j \ll_{k,\eps} \frac{X}{M_1} \ll X^{\eps^2} \frac{X}{H}$$ and hence by \eqref{Q-def}, \eqref{H-def} and \eqref{l6} $$ N M_2 \dots M_j \ll q_1 Q^{1/2} \lambda X.$$ We thus have $$ Z_2 \ll_{k,\eps} Q^{-1/2} q_1 \lambda^2 H^2 X \log^{O_k(1)} X;$$ by \eqref{Q-def}, this contribution is acceptable for $B$ large enough. Now we turn to $Z_1$. At this point we will begin conceding factors of $X^{O(\eps^2)}$, in particular we can essentially ignore the role of the parameters $q_1$ and $Q$ thanks to \eqref{Q-def}. We begin with the easier case $j=4$ of Type $d_4$ sums. To deal with ${\mathcal D}[g]$ in this case, we simply invoke Lemma \ref{mvt-lem-ch} to obtain the bound $$ \sum_{\chi\ (q_1)} \int_{Q^{-1/2} \lambda X \ll |t'| \ll Q^{1/2} \lambda X} \left|{\mathcal D}[g]\left(\frac{1}{2}+it',\chi\right)\right|^2\ dt' \ll_\eps X^{O(\eps^2)} \lambda X.$$ To show the contribution of $Z_1$ is acceptable in the $j=4$ case, it thus suffices to show the following lemma. \begin{lemma}\label{pt} Let the notation be as above with $j = 4$. Then for any $Q^{-1/2} \lambda X \ll |t'| \ll Q^{1/2} \lambda X$, one has $$ \tilde Y(t') \ll_\eps X^{-\eps + O(\eps^2)} H.$$ \end{lemma} \begin{proof} From \eqref{yat} and the triangle inequality it suffices to show that $$ \left|\sum_{M_1 \leq m \leq 2M_1} e\left( \frac{t'}{2\pi} \log \frac{m+ \ell}{m- \ell} \right )\right|^* \ll_\eps X^{-c\eps + O(\eps^2)} H$$ for all $1 \leq \ell \ll Q^2 M_1 / H$. The phase $m \mapsto \frac{t'}{2\pi} \log \frac{m+ \ell}{m- \ell}$ has $j^{\operatorname{th}}$ derivative $\asymp_j \frac{|t'| \ell}{M_1^{j+1}}$ for all $j \geq 1$. Using the classical van der Corput exponent pair $(1/14,2/7)$ (see \cite[\S 8.4]{ik}) we have $$ \left|\sum_{M_1 \leq m \leq 2M_1} e\left( \frac{t'}{2\pi} \log \frac{m+ \ell}{m- \ell} \right)\right|^* \ll_\eps X^{O(\eps^2)} \left(\frac{|t'| \ell}{M_1^2}\right)^{\frac{1}{14}} M_1^{\frac{2}{7} + \frac{1}{2}}$$ (noting from \eqref{l6}, \eqref{mij} that $|t'| \ell \geq |t'| \geq M_1^2$). Using \eqref{mij}, \eqref{Q-def}, the right-hand side is $$ \ll_\eps X^{O(\eps^2)} (X/H)^{\frac{1}{14}} (X^{1/4})^{\frac{2}{7} + \frac{1}{2} - \frac{1}{14}}.$$ From \eqref{sigma-range}, \eqref{H-def} we have $H \geq X^{\frac{7}{30}+\eps}$, and the claim then follows after some arithmetic. \end{proof} \begin{remark} If one uses the recent improvements of Robert \cite{robert} to the classical $(1/14,2/7)$ exponent pair, one can establish Lemma \ref{pt} for $\sigma$ as small as $\frac{3}{13} = 0.2307\dots$, improving slightly upon the exponent $\frac{7}{30} = 0.2333\dots$ provided by the classical pair. Unfortunately, due to the need to also treat the $d_3$ sums, this does not improve the final exponent \eqref{sigmadef} in Theorem \ref{unav-corr}. \end{remark} Now we turn to estimating $Z_1$ in the $j=3$ case of Type $d_3$ sums. 
To deal with ${\mathcal D}[g]$ in this case, we apply Jutila's estimate (Corollary \ref{jutila-cor}) to conclude the following. \begin{proposition}\label{qwe} Let the notation and assumptions be as above. Cover the region $\{ t': Q^{-1/2} \lambda X \ll |t'| \ll Q^{1/2} \lambda X\}$ by a collection ${\mathcal J}$ of disjoint half-open intervals $J$ of length $X^{\eps^2} \sqrt{\lambda X}$. Then $$ \sum_{J \in {\mathcal J}} \left( \sum_{\chi\ (q_1)} \int_{J} \left|{\mathcal D}[g]\left(\frac{1}{2}+it',\chi\right)\right|^2\ dt'\right)^{3} \ll_{k,\eps,B} X^{O(\eps^2)} (\lambda X)^2. $$ \end{proposition} \begin{proof} For each $R > 0$, let ${\mathcal J}_R$ denote the set of those intervals $J \in {\mathcal J}$ such that $$ R \leq \sum_{\chi\ (q_1)} \int_J \left|{\mathcal D}[g]\left(\frac{1}{2}+it',\chi\right)\right|^2\ dt' \leq 2R.$$ Applying Corollary \ref{jutila-cor} with $Q^{-1/2} \lambda X \ll T \ll Q^{1/2} \lambda X$ and $T_0 \coloneqq X^{\eps^2} \sqrt{\lambda X}$ (and $\eps$ replaced by $\eps^2$), together with the triangle inequality and conjugation symmetry, we have $$ \sum_{J \in {\mathcal J}_R} \sum_{\chi \ (q_1)} \int_J |{\mathcal D}[\beta_j](\tfrac 12 + it, \chi)|^4\ dt \ll_{\eps,B} X^{O(\eps^2)} ((\# {\mathcal J}_R) \sqrt{\lambda X} + ((\# {\mathcal J}_R) \lambda X)^{2/3}) $$ for $j=2,3$, where we recall that $\# {\mathcal J}_R$ denotes the cardinality of ${\mathcal J}_R$. Note that the hypothesis $M_j \ll T^2$ required for Corollary \ref{jutila-cor} will follow from \eqref{nam-d4} and \eqref{l6}. From Cauchy-Schwarz and the crude estimate $$ {\mathcal D}[\alpha](\tfrac 12 + it, \chi) \ll_{k,\eps} X^{O(\eps^2)}$$ we thus have $$ \sum_{J \in {\mathcal J}_R} \sum_{\chi \ (q_1)} \int_{J} |{\mathcal D}[g](\tfrac 12 + it, \chi)|^2\ dt \ll_{k,\eps,B} X^{O(\eps^2)} ((\# {\mathcal J}_R) \sqrt{\lambda X} + ((\# {\mathcal J}_R) \lambda X)^{2/3}).$$ By definition of ${\mathcal J}_R$, we conclude that $$ R \# {\mathcal J}_R \ll_{k,\eps,B} X^{O(\eps^2)} ((\# {\mathcal J}_R) \sqrt{\lambda X} + ((\# {\mathcal J}_R) \lambda X)^{2/3})$$ and thus either $R \ll_{k,\eps,B} X^{O(\eps^2)} \sqrt{\lambda X}$ or $\# {\mathcal J}_R \ll_{k,\eps,B} X^{O(\eps^2)} (\lambda X)^2 / R^3$. Using the trivial bound $\# {\mathcal J}_R \ll \sqrt{\lambda X}$ in the former case, we thus have $$ \# {\mathcal J}_R \ll_{k, \eps, B} X^{O(\eps^2)} \min\left( \frac{(\lambda X)^2}{R^3}, \sqrt{\lambda X} \right)$$ for all $R>0$. The claim then follows from dyadic decomposition (noting that ${\mathcal J}_R$ is only non-empty when $R \ll X^{O(1)}$). \end{proof} In the next section, we will establish a discrete fourth moment estimate for the quantities $\tilde Y(t)$: \begin{proposition}\label{pq} Let the notation and assumptions be as above. Let $t_1 < \dots < t_r$ be elements of $\{ t': Q^{-1/2} \lambda X \ll |t'| \ll Q^{1/2} \lambda X\}$ such that $|t_{j+1}-t_j| \geq \sqrt{\lambda X}$ for all $1 \leq j < r$. Then $$ \sum_{j=1}^r \tilde Y(t_j)^4 \ll_{k,\eps,B} X^{-\eps+O(\eps^2)} H^4 \sqrt{\lambda X}.$$ \end{proposition} Assume this proposition for the moment. Cover the set $\{ t': Q^{-1/2} \lambda X \ll |t'| \ll Q^{1/2} \lambda X\}$ by a family ${\mathcal J}$ of disjoint half-open intervals $J$ of length $X^{\eps^2} \sqrt{\lambda X}$. On each such $J$, let $t_J$ be a point in $J$ that maximizes the quantity $\tilde Y(t_J)$. One can partition the $t_J$ into $O(1)$ subsequences that are $\sqrt{\lambda X}$-separated in the sense of Proposition \ref{pq}.
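For the reader's convenience, here is one way to see the last claim: order the intervals of ${\mathcal J}$ from left to right and split the points $t_J$ into those coming from odd-numbered intervals and those coming from even-numbered intervals. Within either subfamily, any two of the selected points have a full interval of ${\mathcal J}$ lying between them, and are therefore separated by at least $X^{\eps^2} \sqrt{\lambda X} \geq \sqrt{\lambda X}$; thus two subsequences already suffice.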
From the triangle inequality, we thus have $$ \sum_{J \in {\mathcal J}} \tilde Y(t_J)^4 \ll_{k,\eps,B} X^{-\eps+O(\eps^2)} H^4 \sqrt{\lambda X}$$ and hence by H\"older's inequality and the cardinality bound $|{\mathcal J}| \ll_{\eps,B} X^{O(\eps^2)} \sqrt{\lambda X}$ $$ \sum_{J \in {\mathcal J}} \tilde Y(t_J)^{3/2} \ll_{k,\eps,B} X^{-3\eps/8} X^{O(\eps^2)} H^{3/2} \sqrt{\lambda X}.$$ On the other hand, we can bound $$ Z_1 \leq \lambda H \sum_{J \in {\mathcal J}} \tilde Y(t_J) \sum_{\chi\ (q_1)} \int_{J} \left|{\mathcal D}[g]\left(\frac{1}{2}+it',\chi\right)\right|^2\ dt'$$ and hence by H\"older's inequality and Proposition~\ref{qwe} we have $$ Z_1 \ll_{k,\eps,B} \lambda H (X^{-3\eps/8+O(\eps^2)} H^{3/2} \sqrt{\lambda X})^{2/3} ((\lambda X)^2)^{1/3}$$ which simplifies to $$ Z_1 \ll_{k,\eps,B} \lambda^2 H^2 X^{-\eps/4 + O(\eps^2)} X$$ which is an acceptable contribution to \eqref{lk} for $\eps$ small enough. This completes the proof of \eqref{lk}. Thus it remains only to establish Proposition \ref{pq}. This will be the objective of the next section. \section{Averaged exponential sum estimates}\label{vdc-sec} We now prove Proposition \ref{pq}. We will now freely lose factors of $X^{O(\eps^2)}$ in our analysis, for instance we see from the hypothesis $Q \leq \log^B X$ that \begin{equation}\label{qsmall} 1\leq q_1 \leq Q \ll_{B,\eps} X^{\eps^2}. \end{equation} By partitioning the $t_j$ based on their sign, and applying a conjugation if necessary, we may assume that the $t_j$ are all positive. By covering the positive portion of $\{ Q^{-1/2} \lambda X \ll |t| \ll \lambda X Q^{1/2}\}$ into dyadic intervals $[T,2T]$ (and giving up an acceptable loss of $O(\log X)$), we may assume that there exists $$ Q^{-1/2} \lambda X \ll T \ll \lambda X Q^{1/2}$$ such that $t_1,\dots,t_r \in [T,2T]$; from \eqref{qsmall} we see in particular that \begin{equation}\label{tbound} T = X^{O(\eps^2)} \lambda X. \end{equation} From \eqref{l6} we also note that \begin{equation}\label{tbound-2} X^{5/6-2\eps} \ll T \ll X. \end{equation} Since the $t_1,\dots,t_r$ are $\sqrt{\lambda X}$-separated, we have \begin{equation}\label{qor} r \ll_\eps X^{O(\eps^2)} \sqrt{\lambda X}. \end{equation} Finally, from \eqref{mij} we have \begin{equation}\label{mi3} M_1 \ll X^{1/3}. \end{equation} Now we need to control the maximal exponential sums $\tilde Y(t_j)$ defined in \eqref{yat}. If one uses exponent pairs such as $(1/6,1/6)$ here as in Lemma \ref{pt} to obtain uniform control on the $\tilde Y(t_j)$, one obtains inferior results (indeed, the use of $(1/6,1/6)$ only gives Theorem \ref{unav-corr} for $\sigma = \frac{2}{7} = 0.2857\dots$). Instead, we will exploit the averaging in $j$. We first use H\"older's inequality to note that $$ \tilde Y(t_j)^4 \ll_\eps X^{O(\eps^2)} \frac{H}{M_1} \sum_{1 \leq \ell \ll \frac{Q^2 M_1}{H}} \left(\left|\sum_{M_1 \leq m \leq 2M_1} e\left( \frac{t_j}{2\pi} \log \frac{m+ \ell}{m- \ell} \right)\right|^*\right)^4.$$ Next, we observe that $\log \frac{m+ \ell}{m- \ell} = \log \frac{1+\ell/m}{1-\ell/m}$ has the Taylor expansion \begin{align*} \log \frac{m+ \ell}{m- \ell} &= \sum_{j=0}^\infty \frac{2}{2j+1} \left(\frac{\ell}{m}\right)^{2j+1} \\ &= 2 \frac{\ell}{m} + \frac{2}{3} \frac{\ell^3}{m^3} + \frac{2}{5} \frac{\ell^5}{m^5} + \dots. \end{align*} Note that there are no terms in the Taylor expansion with even powers of $\frac{\ell}{m}$. 
Thus we can write $$ e\left(\frac{t_j}{2\pi} \log \frac{m+ \ell}{m- \ell}\right) = e\left(\frac{1}{\pi} \frac{t_j \ell}{m} + \frac{1}{3\pi} \frac{t_j \ell^3}{m^3}\right) e(R_{j,\ell}(m))$$ where for $m \in [M_1,2M_1]$, the remainder $R_{j,\ell}(m)$ is of size $$ R_{j,\ell}(m) \ll T \left(\frac{Q^2 M_1/H}{M_1}\right)^5 \ll_\eps X^{-\frac{7}{33}+O(\eps^2)} $$ and has derivative estimates $$ R_{j,\ell}'(m) \ll \frac{1}{M_1} T \left(\frac{Q^2 M_1/H}{M_1}\right)^5 \ll_\eps \frac{1}{M_1} X^{-\frac{7}{33}+O(\eps^2)}.$$ Thus by Lemma \ref{sbp-2} again, we have $$ \tilde Y(t_j)^4 \ll_\eps X^{O(\eps^2)} \frac{H}{M_1} \sum_{1 \leq \ell \ll \frac{Q^2 M_1}{H}} \left(\left|\sum_{M_1 \leq m \leq 2M_1} e\left( \frac{1}{\pi} \frac{t_j \ell}{m} + \frac{1}{3\pi} \frac{t_j \ell^3}{m^3} \right)\right|^*\right)^4. $$ We write this bound as $$ \tilde Y(t_j)^4 \ll_\eps X^{O(\eps^2)} \frac{H}{M_1} \sum_{1 \leq \ell \ll \frac{Q^2 M_1}{H}} f\left( \frac{t_j \ell}{M_1}, \frac{t_j \ell^3}{M_1^3} \right)^4 $$ where $f(\alpha,\beta)$ denotes the maximal exponential sum \begin{equation}\label{fab} f(\alpha,\beta) \coloneqq \left|\sum_{M_1 \leq m \leq 2M_1} e\left( \frac{\alpha}{\pi} \frac{M_1}{m} + \frac{\beta}{3\pi} \frac{M_1^3}{m^3} \right)\right|^*. \end{equation} By a further application of Lemma \ref{sbp-2}, we see that \begin{equation}\label{f-stable} f(\alpha+u,\beta+v) \asymp f(\alpha,\beta) \end{equation} whenever $\alpha,\beta,u,v$ are real numbers with $u,v = O(1)$. Thus $$ \tilde Y(t_j)^4 \ll_\eps X^{O(\eps^2)} \frac{H}{M_1} \sum_{1 \leq \ell \ll \frac{Q^2 M_1}{H}} \int_{t_j \ell/M_1}^{t_j \ell/M_1 + 1} f\left( t, \frac{\ell^2}{M_1^2} t \right)^4\ dt.$$ As the $t_j$ are $\sqrt{\lambda X}$-separated and lie in $[T,2T]$, and by \eqref{l6}, \eqref{mi3} we have $$ (\lambda X)^{1/2} \gg X^{5/12 - \eps/2} \gg X^{1/3} \gg M_1,$$ we see that for fixed $\ell$, the intervals $[t_j \ell/M_1, t_j \ell/M_1 + 1]$ are disjoint and lie in the region $\{ t: t \asymp T \ell / M_1 \}$. Thus we have $$ \sum_{j=1}^r \tilde Y(t_j)^4 \ll_\eps X^{O(\eps^2)} \frac{H}{M_1} \sum_{1 \leq \ell \ll \frac{Q^2 M_1}{H}} \int_{|t| \asymp T \ell / M_1} f\left( t, \frac{\ell^2}{M_1^2} t\right )^4\ dt.$$ By the pigeonhole principle, we thus have $$ \sum_{j=1}^r \tilde{Y}(t_j)^4 \ll_\eps X^{O(\eps^2)} \frac{H}{M_1} \sum_{L \leq \ell < 2L} \int_{|t| \asymp T L / M_1} f\left( t, \frac{\ell^2}{M_1^2} t\right )^4\ dt$$ for some \begin{equation}\label{lime} 1 \leq L \ll \frac{Q^2 M_1}{H} \ll X^{O(\eps^2)} \frac{M_1}{H}. \end{equation} To obtain the best bounds, it becomes convenient to reduce the range of integration of $t$. Let $S$ be a parameter in the range \begin{equation}\label{srange} M_1 \ll S \ll \min \Big (M_1^2, \frac{TL}{M_1} \Big ) \end{equation} to be chosen later. By Lemma \ref{rs1}(i), we then have $$ \sum_{j=1}^r \tilde{Y}(t_j)^4 \ll_\eps X^{O(\eps^2)} \frac{H TL}{S M_1^2} \sum_{L \leq \ell < 2L} \int_{0 < \alpha \ll S} f\left( \alpha, \frac{\ell^2}{M_1^2} \alpha\right )^4\ d\alpha.$$ Applying \eqref{f-stable}, we then have $$ \sum_{j=1}^r \tilde{Y}(t_j)^4 \ll_\eps X^{O(\eps^2)} \frac{H TL}{S M_1^2} \int_{0 < \alpha \ll S}\int_{0 < \beta \ll 1 + \frac{L^2 \alpha}{M_1^2}} f(\alpha,\beta)^4 \mu(\alpha,\beta)\ d\beta d\alpha $$ where the multiplicity $\mu(\alpha,\beta)$ is defined as the number of integers $\ell \in [L, 2L)$ such that $$ \left|\beta - \frac{\ell^2}{M_1^2} \alpha\right| \leq 1.$$ Clearly we have the trivial bound $\mu(\alpha,\beta) \ll L$.
On the other hand, for fixed $\alpha$, the numbers $\frac{\ell^2}{M_1^2} \alpha$ are $\asymp \frac{L \alpha}{M_1^2}$-separated, so for $\beta \gg 1$ we also have the bound $$ \mu(\alpha,\beta) \ll 1 + \frac{M_1^2}{L \alpha}.$$ We thus have $$ \sum_{j=1}^r \tilde{Y}(t_j)^4 \ll_\eps X^{O(\eps^2)} \frac{H TL}{S M_1^2} (W_1 + W_2 + W_3)$$ where \begin{align*} W_1 &\coloneqq L \int_{0 < \alpha \ll S} \int_{0 < \beta \ll 1} f(\alpha,\beta)^4\ d\beta d\alpha \\ W_2 &\coloneqq \int_{0 < \alpha \ll S} \frac{M_1^2}{L \alpha} \int_{1 \ll \beta \ll \frac{L^2 \alpha}{M_1^2}} f(\alpha,\beta)^4\ d\beta d\alpha \\ W_3 &\coloneqq \int_{0 < \alpha \ll S} \int_{1 \ll \beta \ll \frac{L^2 \alpha}{M_1^2}} f(\alpha,\beta)^4\ d\beta d\alpha. \end{align*} From \eqref{f-stable} and Lemma \ref{rs1}(ii) (with $\theta \coloneqq -1$ and $a_m \coloneqq e \Big ( \frac{\beta}{3 \pi} \Big ( \frac{M_1}{m} \Big )^3 \Big )$) we have $$ W_1 \ll_\eps X^{O(\eps^2)} L (M_1^4 + M_1^2 S) \ll X^{O(\eps^2)} L M_1^4$$ thanks to \eqref{srange}. Now we treat $W_2$. We may assume that $\alpha \gg M_1^2/L^2$ since the inner integral vanishes otherwise. By the pigeonhole principle, we thus have $$ W_2 \ll \frac{M_1^2 \log X}{LA} \int_{\alpha \asymp A} \int_{\beta \ll \frac{L^2 A}{M_1^2}} f(\alpha,\beta)^4\ d\beta d\alpha$$ for some $\frac{M_1^2}{L^2} \ll A \ll S$. Applying Lemma \ref{rs1}(ii) again (now treating the $e(\frac{\beta}{3\pi} \frac{M_1^3}{m^3})$ term in \eqref{fab} as a bounded coefficient $a_m$) we have $$ \int_{\alpha \asymp A} f(\alpha,\beta)^4\ d\alpha \ll_\eps X^{O(\eps^2)} (M_1^4 + M_1^2 A) \ll X^{O(\eps^2)} M_1^4$$ and hence $$ W_2 \ll X^{O(\eps^2)} L M_1^4.$$ Finally we turn to $W_3$. The contribution of the region $\alpha \ll \frac{M_1^2}{L}$ is $O(W_2)$. Thus by the pigeonhole principle we have \begin{equation}\label{trad} W_3 \ll W_2 + \log X \int_{\alpha \asymp A} \int_{\beta \ll \frac{L^2 A}{M_1^2}} f(\alpha,\beta)^4\ d\beta d\alpha \end{equation} for some $\frac{M_1^2}{L} \ll A \ll S$. In particular (from \eqref{lime}, \eqref{srange}) one has \begin{equation}\label{mam} M_1 \ll A \ll M_1^2. \end{equation} One could estimate the integral here using Lemma \ref{rs1}(ii) once again, but this turns out to lead to an inferior estimate if used immediately, given that the length $M_1$ of the exponential sum and the dominant frequency scale $A$ lie in the range \eqref{mam}; indeed, this only lets one establish Theorem \ref{unav-corr} for $\sigma = 1/4$. Instead, we will first apply the van der Corput $B$-process (Lemma \ref{rs1}(iii)), which morally speaking will shorten the length from $M_1$ to $A/M_1$, at the cost of applying a Legendre transform to the phase in the exponential sum. We turn to the details. For a fixed $\alpha,\beta$ with \begin{equation}\label{abs} \alpha \asymp A;\quad \beta \ll \frac{L^2 A}{M_1^2} \end{equation} (so in particular $\beta$ is much smaller in magnitude than $\alpha$, thanks to \eqref{lime}), let $$ \varphi(x) \coloneqq \frac{\alpha}{\pi} \frac{M_1}{x} + \frac{\beta}{3\pi} \frac{M_1^3}{x^3}$$ denote the phase appearing in \eqref{fab}. The first derivative is given by $$ \varphi'(x) = - \frac{\alpha}{\pi} \frac{M_1}{x^2} - \frac{\beta}{\pi} \frac{M_1^3}{x^4}.$$ This maps the region $\{ x: x \asymp M_1\}$ diffeomorphically to a region of the form $\{ t: -t \asymp \frac{A}{M_1} \}$. Denoting the inverse map by $u$, we thus have $$ t = - \frac{\alpha}{\pi} \frac{M_1}{u(t)^2} - \frac{\beta}{\pi} \frac{M_1^3}{u(t)^4} $$ for $-t \asymp \frac{A}{M_1}$. 
One can solve explicitly for $u(t)$ using the quadratic formula as \begin{align*} u(t)^2 &= \frac{1}{2} \left( \frac{\alpha M_1}{\pi |t|} + \sqrt{\left(\frac{\alpha M_1}{\pi |t|}\right)^2 + 4 \frac{\beta M_1^3}{\pi |t|}} \right).\\ &= \left(\frac{\alpha M_1}{\pi |t|}\right) \frac{1}{2} \left( 1 + \left(1 + \frac{4\beta}{\alpha} \frac{\pi |t| M_1}{\alpha}\right)^{1/2} \right). \end{align*} A routine Taylor expansion then gives the asymptotic $$ u(t) = M_1 \left(\frac{\alpha}{\pi |t| M_1}\right)^{1/2} + \frac{\beta}{2\alpha} M_1 \left(\frac{\alpha}{\pi |t| M_1}\right)^{-1/2} + R_{\alpha,\beta}(t)$$ where the remainder term $R_{\alpha,\beta}(t)$ obeys the estimates $$ R_{\alpha,\beta}(t) \ll \frac{\beta^2}{\alpha^2} M_1; \quad R'_{\alpha,\beta}(t) \ll \frac{\beta^2}{\alpha^2} \frac{M_1^2}{A}$$ for $-t \asymp \frac{A}{M}$. The (negative) Legendre transform $\varphi^*(t) \coloneqq \varphi(u(t)) - t u(t)$ can then be similarly expanded as $$ \varphi^*(t) = \frac{2\alpha}{\pi} \left(\frac{\alpha}{\pi |t| M_1}\right)^{-1/2} + \frac{\beta}{3\pi} \left(\frac{\alpha}{\pi |t| M_1}\right)^{-3/2} + E_{\alpha,\beta}(t)$$ where the error term $E_{\alpha,\beta}$ obeys the estimates $$ E_{\alpha,\beta}(t) \ll \frac{\beta^2}{\alpha^2} A; \quad E'_{\alpha,\beta}(t) \ll \frac{\beta^2}{\alpha^2} M_1.$$ From \eqref{abs}, \eqref{lime}, \eqref{mam} we have $$ \frac{\beta^2}{\alpha^2} A \ll \frac{L^4}{M_1^4} A \ll \frac{X^{O(\varepsilon^2)}}{H^4} M_1^2 \ll 1 $$ where the last bound follows from \eqref{sigma-range} since $H = X^{\sigma + \eps}$ and $M_1 \ll X^{1/3}$. Applying Lemma \ref{rs1}(iii) followed by Lemma \ref{sbp-2}, we conclude that $$ f(\alpha,\beta) \ll \frac{M_1}{A^{1/2}} g(A^{1/2} \alpha^{1/2}, A^{3/2} \alpha^{-3/2} \beta) + M_1^{1/2}$$ where $g$ is the maximal exponential sum $$ g(\alpha',\beta') \coloneqq \left| \sum_{-\ell \asymp A/M_1} e\left( \frac{2\alpha'}{\pi^{1/2}} \left( \frac{|\ell|}{A/M_1} \right)^{1/2} + \frac{\beta' \pi^{1/2}}{3} \left( \frac{|\ell|}{A/M_1} \right)^{3/2} \right)\right|^*.$$ Inserting this back into \eqref{trad} and performing a change of variables, we conclude that $$ W_3 \ll W_2 + \log X \left( L^2 A^2 + \frac{M_1^4}{A^2} \int_{\alpha \asymp A} \int_{\beta \ll \frac{L^2 A}{M_1^2}} g(\alpha,\beta)^4\ d\beta d\alpha \right).$$ On the other hand, by applying Lemma \ref{rs1}(ii) as before we have $$ \int_{\alpha \asymp A} g(\alpha,\beta)^4\ d\alpha \ll_\eps X^{O(\eps^2)} ( (A/M_1)^4 + (A/M_1)^2 A ) \ll X^{O(\eps^2)} (A/M_1)^2 A $$ for any $\beta$, where the last inequality follows from \eqref{mam}. We thus arrive at the bound $$ W_3 \ll_\eps W_2 + X^{O(\eps^2)} L^2 A^2.$$ Since $A \ll S$, we thus have $$ W_3 \ll_\eps W_2 + X^{O(\eps^2)} L^2 S^2.$$ Combining all the above bounds for $W_1,W_2,W_3$, we have \begin{equation}\label{yt4} \sum_{j=1}^r \tilde{Y}(t_j)^4 \ll_\eps X^{O(\eps^2)} \frac{H TL}{S M_1^2} (L M_1^4 + L^2 S^2). \end{equation} To optimize this bound we select $$S \coloneqq \min\left( \frac{M_1^2}{L^{1/2}}, \frac{TL}{M_1} \right).$$ It is easy to see (using \eqref{l6}, \eqref{lime}, and \eqref{mi3}) that $S$ obeys the bounds \eqref{srange}. From \eqref{yt4} we have \begin{align*} \sum_{j=1}^r \tilde{Y}(t_j)^4 &\ll_\eps X^{O(\eps^2)} \left( \frac{H TL^2 M_1^2}{S} + \frac{HTL^3 S}{M_1^2} \right) \\ &\ll_\eps X^{O(\eps^2)} ( H L M_1^3 + HTL^{5/2} ). 
\end{align*} Applying \eqref{lime}, \eqref{tbound} we thus have $$ \sum_{j=1}^r \tilde{Y}(t_j)^4 \ll_\eps X^{O(\eps^2)} ( M_1^4 + H^{-3/2} \lambda X M_1^{5/2} ) $$ and hence by \eqref{mi3} $$ \sum_{j=1}^r \tilde{Y}(t_j)^4 \ll_\eps X^{O(\eps^2)} ( X^{4/3} + H^{-3/2} \lambda X^{11/6} ) $$ From \eqref{H-def}, \eqref{sigma-range} we have $H \geq X^{\frac{11}{48}+\eps}$, which implies after some arithmetic and \eqref{l6} that the $X^{4/3}$ term here gives an acceptable contribution to Proposition \ref{pq}. The $H^{-3/2} \lambda X^{11/6}$ term is similarly acceptable thanks to \eqref{l6}, \eqref{H-def}, and \eqref{sigmadef}. \appendix \section{Mean value estimate}\label{harman-sec} In this section we prove Proposition \ref{harm-prop}. This estimate is fairly standard, for instance following from the methods in \cite[Chapter 9]{harman}; for the convenience of the reader we sketch a full proof here. Let $\eps, A, B, X, q_0, q_1, f, B'$ be as in Proposition \ref{harm-prop}. We first invoke Lemma \ref{comb-decomp-2} with $m=3$, and with $\eps$ and $H$ replaced by $\eps/10$ and $X^{1/3+\eps/10}$ respectively. We conclude that the function $(\chi,t) \mapsto {\mathcal D}[f](\frac{1}{2}+it, \chi, q_0)$ can be decomposed as a linear combination (with coefficients of size $O_{k,\eps}( d_2(q_0)^{O_{k,\eps}(1)} )$) of $O_{k,\eps}( \log^{O_{k,\eps}(1)} X)$ functions of the form $(\chi,t) \mapsto {\mathcal D}[\tilde f](\frac{1}{2}+it,\chi)$, where $\tilde f \colon \N \to \C$ is one of the following forms: \begin{itemize} \item[(Type $d_1$, $d_2$ sum)] A function of the form \begin{equation}\label{fst-again } \tilde f = (\alpha \ast \beta_1 \ast \dots \ast \beta_j) 1_{(X/q_0,2X/q_0]} \end{equation} for some arithmetic functions $\alpha,\beta_1,\dots,\beta_j \colon \N \to \C$, where $j=1,2$, $\alpha$ is $O_{k,\eps}(1)$-divisor-bounded and supported on $[N,2N]$, and each $\beta_i$, $i=1,\dots,j$ is either of the form $\beta_i = 1_{(M_i,2M_i]}$ or $\beta_i = L 1_{(M_i,2M_i]}$ for some $N, M_1,\dots,M_j$ obeying the bounds $$ 1 \ll N \ll_{k,\eps} X^{\eps/10},$$ $$ N M_1 \dots M_j \asymp_{k,\eps} X/q_0,$$ and $$ X^{1/3+\eps/10} \ll M_1 \ll \dots \ll M_j \ll X/q_0.$$ \item[(Type II sum)] A function of the form $$ \tilde f = (\alpha \ast \beta) 1_{(X/q_0,2X/q_0]}$$ for some $O_{k,\eps}(1)$-divisor-bounded arithmetic functions $\alpha,\beta \colon \N \to \C$ with good cancellation supported on $[N,2N]$ and $[M,2M]$ respectively, for some $N,M$ obeying the bounds $$ X^{\eps/10} \ll N \ll X^{1/3+\eps/10}$$ and $$NM \asymp_{k,\eps} X / q_0.$$ The good cancellation bounds \eqref{alb} are permitted to depend on the parameter $B$ appearing in the bound $q_0 \leq \log^B X$. \item[(Small sum)] A function $\tilde f$ supported on $(X/q_0,2X/q_0]$ obeying the bound \begin{equation}\label{smallsum-0} \|\tilde f \|_{\ell^2}^2 \ll_{k,\eps} X^{1-\eps/80}. \end{equation} \end{itemize} By the $L^2$ triangle inequality (and enlarging $A$ as necessary), it thus suffices to establish the bound \begin{equation}\label{drip} \int_{\log^{B'} X \leq |t| \leq X^{5/6 - \eps} } \left|{\mathcal D}[\tilde f]\left(\frac{1}{2}+it',\chi,q_0\right)\right|^2\ dt' \ll_{k,\eps,A,B,B'} X \log^{-A} X. \end{equation} for each individual character $\chi$ and $\tilde f$ one of the above forms. We first dispose of the small sum case. 
From Lemma \ref{mvt-lem} and \eqref{smallsum-0} we have $$ \int_{|t| \leq X^{5/6 - \eps}} \left|{\mathcal D}[\tilde f] \left (\frac{1}{2}+it,\chi \right )\right|^2\ dt \ll_k X^{1-\eps/80} \log^{O_k(1)} X.$$ which gives \eqref{drip} (with a power savings). In the remaining Type $d_1$, Type $d_2$, and Type II cases, $\tilde f$ is of the form $\tilde f = f' 1_{(X/q_0,2X/q_0]}$, where $f'$ is of the form $\alpha \ast \beta_1$, $\alpha \ast \beta_1 \ast \beta_2$, or $\alpha \ast \beta$ in the Type $d_1$, Type $d_2$, and Type II cases respectively. From Corollary \ref{trunc-dir} one has $$ |{\mathcal D}[\tilde f](\frac{1}{2}+it,\chi)| \ll \int_{|u| \leq X^{5/6-\eps}} \left |{\mathcal D}[f'] \left (\frac{1}{2}+it+iu,\chi \right )\right| \frac{du}{1+|u|} + 1.$$ Meanwhile, from Lemma \ref{mvt-lem} we have $$ \int_{|t'| \leq \frac{1}{2} \log^{B'} X} \left|{\mathcal D}[\tilde f]\left (\frac{1}{2}+it',\chi \right )\right|^2\ dt' \ll_{k,\eps} X \log^{O_{k,\eps}(1)} X $$ and hence by Cauchy-Schwarz $$ \int_{|t'| \leq \frac{1}{2} \log^{B'} X} \left|{\mathcal D}[\tilde f]\left (\frac{1}{2}+it',\chi \right )\right|\ dt' \ll_{k,\eps} X^{1/2} \log^{O_{k,\eps}(1) + B'/2} X.$$ We conclude that $$ {\mathcal D}[\tilde f]\left (\frac{1}{2}+it,\chi \right ) \ll_{k,\eps} \int_{\frac{1}{2} \log^{B'} X \leq |t'| \leq 2X^{5/6-\eps}} \left|{\mathcal D}[f'] \left (\frac{1}{2}+it',\chi \right )\right| \frac{dt'}{1+|t-t'|} + 1 + \frac{X^{1/2} \log^{O_{k,\eps}(1) + B'/2} X}{|t|} $$ for $\log^{B'} X \leq |t| \leq X^{5/6-\eps}$; by Cauchy-Schwarz, one thus has $$ \left|{\mathcal D}[\tilde f]\left (\frac{1}{2}+it,\chi \right )\right|^2 \ll_{k,\eps} \int_{\frac{1}{2} \log^{B'} X \leq |t'| \leq 2X^{5/6-\eps}} \left |{\mathcal D}[f']\left (\frac{1}{2}+it',\chi\right )\right|^2 \frac{dt'}{1+|t-t'|} \log X + 1 + \frac{X \log^{O_{k,\eps}(1) + B'} X}{|t|^2}.$$ Integrating in $t$, we can bound the left-hand side of \eqref{drip} for $\tilde f$ by $$ \ll_{k,\eps} \int_{\frac{1}{2} \log^{B'} X \leq |t| \leq 2 X^{5/6 - \eps}} \left|{\mathcal D}[f']\left (\frac{1}{2}+it,\chi\right )\right|^2\ dt \log^2 X + X \log^{O_{k,\eps}(1) - B'} X,$$ so (by taking $B'$ large enough) it suffices to establish the bounds \begin{equation}\label{das} \int_{\frac{1}{2} \log^{B'} X \leq |t| \leq 2X^{5/6 - \eps}} \left|{\mathcal D}[f'] \left (\frac{1}{2}+it,\chi \right )\right|^2\ dt \ll_{k,\eps, A, B, B'} X \log^{-A} X \end{equation} in the Type $d_1$, Type $d_2$, and Type II cases. We first treat the Type $d_2$ case. By dyadic decomposition it suffices to show that $$ \int_{T/2 \leq |t| \leq T} \left|{\mathcal D}[f']\left (\frac{1}{2}+it,\chi\right)\right|^2\ dt \ll_{k,\eps,A,B,B'} d_2(q_1)^{O_k(1)} X \log^{O_{k,\eps,B}(1) - 2A} X $$ for all $\log^{B'} X \leq T \ll X^{5/6-\eps}$. 
From Corollary \ref{fourth-integral} we have $$ \int_{T/2 \leq |t| \leq 2T} \left|{\mathcal D}[\beta_j]\left (\frac{1}{2}+it,\chi\right)\right|^4\ dt \ll_B T \left( 1 + \frac{M_j^2}{T^4} \right) \log^{O_B(1)} X$$ for $j=1,2$; also from \eqref{dirt} we have the crude bound $$ {\mathcal D}[\alpha]\left (\frac{1}{2}+it,\chi\right ) \ll_{k,\eps} N^{1/2} \log^{O_{k,\eps}(1)} X.$$ Thus by Cauchy-Schwarz we may bound $$ \int_{T/2 \leq |t| \leq T} \left|{\mathcal D}[f']\left (\frac{1}{2}+it\right )\right|^2\ dt \ll_{k,\eps} N T \left(1 + \frac{M_1}{T^2} \right) \left(1 + \frac{M_2}{T^2} \right) \log^{O_{k,\eps,B}(1)} X.$$ We can bound $$ \left(1 + \frac{M_1}{T^2} \right) \left(1 + \frac{M_2}{T^2} \right) \ll 1 + \frac{M_1 M_2}{T^2}$$ and use $N M_1 M_2 \ll X$ to conclude $$ \int_{T/2 \leq |t| \leq T} \left |{\mathcal D}[f']\left (\frac{1}{2}+it,\chi\right) \right |^2\ dt \ll_{k,\eps,B} \left(NT + \frac{X}{T} \right) \log^{O_{k,\eps,B}(1)} X$$ which is acceptable since $\log^{B'} X \leq T \ll X^{5/6-\eps}$ and $N \ll X^{\eps/10}$. The Type $d_1$ case can be treated similarly to the Type $d_2$ case (with the role of $\beta_2(n)$ now played by the Kronecker delta function $\delta_{n=1}$). It thus remains to handle the Type II case. Here we factor $$ {\mathcal D}[f']\left(\frac{1}{2}+it,\chi\right) = {\mathcal D}[\alpha]\left(\frac{1}{2}+it,\chi\right) {\mathcal D}[\beta]\left(\frac{1}{2}+it,\chi\right).$$ and hence $$ {\mathcal D}[f']\left(\frac{1}{2}+it,\chi\right)^2 = {\mathcal D}[\beta]\left(\frac{1}{2}+it,\chi\right) {\mathcal D}[\beta]\left(\frac{1}{2}+it,\chi\right) {\mathcal D}[\alpha \ast \alpha]\left(\frac{1}{2}+it,\chi\right).$$ At this point it is convenient to invoke an estimate of Harman (which in turn is largely a consequence of Huxley's large values estimate and standard mean value theorems for Dirichlet polynomials), translated into the notation of this paper: \begin{lemma} Let $X \geq 2$ and $\eps>0$, and let $M,N,R \geq 1$ be such that $M = X^{2\alpha_1}$, $N = X^{2\alpha_2}$, and $MNR \asymp X$ for some $\alpha_1, \alpha_2 > 0$ obeying the bounds $$ |\alpha_1 - \alpha_2| < \frac{1}{6} + \eps$$ and $$ \alpha_1 + \alpha_2 > \frac{2}{3} - \eps.$$ Let $a,b,c \colon \N \to \C$ be $O_{k,\eps}(1)$-divisor-bounded arithmetic functions supported on $[M/2,2M]$, $[N/2,2N]$, $[R/2,2R]$ respectively obeying the bounds $$ a(n), b(n), c(n) \ll_{k,\eps} d_2(n)^{O_{k,\eps}(1)} \log^{O_{k,\eps}(1)} X$$ for all $n$. Suppose also that $c$ has good cancellation. Then we have $$ \int_{\log^B X \leq |t| \leq X^{\frac{5}{6}-\eps}} \left|{\mathcal D}[a]\left(\frac{1}{2}+it\right) {\mathcal D}[b]\left(\frac{1}{2}+it\right) {\mathcal D}[c]\left(\frac{1}{2}+it\right)\right|\ dt \ll_{k,\eps,A,B} X \log^{-A} X$$ whenever $A>0$ and $B$ is sufficiently large depending on $A$. \end{lemma} \begin{proof} Apply \cite[Lemma 7.3]{harman} with $x \coloneqq X^2$ and $\theta \coloneqq \frac{7}{12} + \frac{\eps}{2}$ (so that the quantity $\gamma(\theta)$ defined in \cite[Lemma 7.3]{harman} is at least as large as $\frac{1}{3} + 2\eps$). Strictly speaking, the hypotheses in \cite[Lemma 7.3]{harman} restricted $|t|$ to be at least $\exp(\log^{1/3} X)$ rather than $\log^B X$, but one can check that the argument is easily modified to adapt to this new lower bound on $|t|$. 
\end{proof} If we apply this lemma with $a \coloneqq \beta \chi$, $b \coloneqq \beta \chi$, $c \coloneqq (\alpha \ast \alpha)\chi$ (with $\alpha_1 = \alpha_2 \geq \frac{1}{3} - \frac{\eps}{20} + o(1)$) using Lemma \ref{good-cancel} to preserve the good cancellation property, we conclude that $$ \int_{\log^{B'} X \leq |t| \leq X^{\frac{5}{6}-\eps}} \left|{\mathcal D}[f']\left(\frac{1}{2}+it\right)\right|^2\ dt \ll_{k,\eps,A,B} X \log^{-A} X$$ giving \eqref{das} in the Type II case. \end{document}
\begin{document} \title{Estimating Precipitation Extremes using the Log-Histospline} \author{Whitney K. Huang\footnote{Statistical and Applied Mathematical Sciences Institute. E-mail: \href{[email protected]}{\nolinkurl{[email protected]}} and Pacific Climate Impacts Consortium, University of Victoria. E-mail: \href{[email protected]}{\nolinkurl{[email protected]}}}, Douglas W. Nychka\footnote{Department of Applied Mathematics and Statistics, Colorado School of Mines. E-mail: \href{[email protected]}{\nolinkurl{[email protected]}}}, Hao Zhang\footnote{Department of Statistics, Purdue University. E-mail: \href{[email protected]}{\nolinkurl{[email protected]}}}} \date{\today} \maketitle \begin{abstract} One of the commonly used approaches to modeling extremes is the peaks-over-threshold (POT) method. The POT method models exceedances over a threshold that is sufficiently high so that the exceedance has approximately a generalized Pareto distribution (GPD). This method requires the selection of a threshold that might affect the estimates. Here we propose an alternative method, the Log-Histospline (LHSpline), to explore modeling the tail behavior and the remainder of the density in one step using the full range of the data. LHSpline applies a smoothing spline model to a finely binned histogram of the log transformed data to estimate its log density. By construction, a LHSpline estimation is constrained to have polynomial tail behavior, a feature commonly observed in daily rainfall observations. We illustrate the LHSpline method by analyzing precipitation data collected in Houston, Texas. \end{abstract} \doublespacing \section{Introduction} \label{sec1} Estimating extreme quantiles is crucial for risk management in a variety of applications. For example, an engineer would seek to estimate the magnitude of the flood event which is exceeded once in 100 years on average, the so-called 100-year return level, based on a few decades of time series \citep{katz2002}. A financial analyst needs to provide estimates of the Value at Risk (VaR) for a given portfolio, essentially the high quantiles of financial loss \citep{embrechts1997,tsay2005}. In climate change studies, as the research focus shifts from estimating the global mean state of climate variables to the understanding of regional and local climate extremes, there is a pressing need for better estimation of the magnitudes of extremes and their potential changes in a changing climate \citep{zwiers1998,easterling2000,alexander2006,tebaldi2006,cooley2007,kharin2007,aghakouchak2012,shaby2012,huang2016}.\\ The estimation of extreme quantiles poses a unique statistical challenge. Essentially, such an estimation pertains to the upper tail of a distribution where the available data of extreme values are usually sparse (e.g.\ see Fig.~\ref{fig:fig1}). As a result, the estimate will have a large variance that can increase rapidly as we move progressively to high quantiles. Furthermore, if the quantile being estimated is beyond the range of data (e.g., estimating the 100-year return level given the 50 years history of observation), the need to explicitly model the tail with some parametric form is unavoidable. \textit{Extreme value theory} (EVT) provides a mathematical framework of performing inference for the upper tail of distributions. 
One widely used approach, based on the extremal types theorem \citep{fisher1928,gnedenko1943}, is the so-called block maxima (BM) method where one fits a \textit{generalized extreme value} (GEV) distribution to block maxima given that the block size is large enough. The reader is referred to \citep{jenkinson1955}, \cite{gumbel1958} and \cite{coles2001} for more details. \begin{figure} \caption{Histogram of daily rainfall amount (mm) at the Hobby Airport in Houston, Texas, from 1949 to 2017. The vertical ticks at the x-axis are the values of the individual data points and the black dashed line is a kernel density estimate. Due to its tail heaviness, the largest values are substantially larger than the bulk of the distribution. Three out of the five largest precipitation measurements (indicated by red arrows) were observed during Hurricane Harvey, August 25 - 31, 2017} \label{fig:fig1} \end{figure} One drawback of the BM method is that data are not used efficiently, that is, the block size $b$ needs to be sufficiently large so that the GEV distribution is approximately valid for a given time series (with sample size $n$), which makes the sample size of the block maxima $(n/b)$ substantially smaller than the original sample size. Moreover, the top few largest order statistics within a given block should, in principle, inform us about the behavior of extreme events \citep{weissman1978,smith1986}. The peak-over-threshold (POT) method is based on the Pickands--Balkema--de Haan theory \citep{pickands1975, balkema1974}. For a random variable $Y$ with cumulative distribution function $F(\cdot)$, it states that the distribution of the exceedance over the threshold (i.e., the distribution of $Y-u$ given $Y>u$) converges to a \textit{generalized Pareto distribution} (GPD) as the threshold $u$ tends to the upper limit point $y_{F} = \sup\{ y: F(y) < 1\}$. With an appropriately chosen $u$, the POT method makes use of the exceedances and the censored data of those that do not exceed the threshold \citep{smith1984, davison1984,davison1990}. One advantage of the POT method over the BM method is that it typically makes use of the available data more efficiently in estimating extreme events.\\ However, to apply the POT method, a threshold $u$ has to be chosen and the estimates may be sensitive to the chosen threshold \citep{scarrott2012, wadsworth2012a}. The threshold selection involves a bias-variance trade-off: if the chosen threshold is too high, the estimates exhibit large variability; if the chosen threshold is too low, the asymptotic justification of the GPD approximation to the tail density is less effective, which leads to bias. There are several graphical tools that aim to help with the threshold selection, for example, the mean residual life plot and the parameter stability plot \citep[e.g.][p.80 and p.85]{coles2001}. However, the use of these graphical tools does not always lead to a clear choice of the threshold. In general, automated threshold selection is a difficult problem \citep{dupuis1999,wadsworth2012a}. In addition, the POT method does not assume anything about the distribution below the chosen threshold, which can be of interest in some applications (e.g., stochastic weather generators \citep{kleiber2012,li2012}), although some efforts have been made. 
We will review some attempts in the next paragraph.\\ Recently, there have been some attempts to model the distribution of a random variable while retaining a GPD tail behavior \citep{frigessi2002,tancredi2006,carreau2009,papastathopoulos2013,naveau2016}. These methods can be broadly divided into two categories: the extreme value mixture models \citetext{see \citealp{scarrott2012,dey2015} for a review} and the \textit{extended generalized Pareto distribution} (EGPD) method \citep{papastathopoulos2013,naveau2016}. The basic idea of the extreme value mixture models is to model a distribution as a mixture of a \textit{bulk} distribution and a GPD above a threshold. The threshold is then treated as an additional parameter to be estimated from the data. However, the specification of the bulk distribution can have non-negligible impacts on the estimates of the GPD parameters, which can lead to substantial biases in the tail estimates. Finally, the estimation uncertainty for the threshold can be quite large \citep{frigessi2002} and hence it is not clear whether this parameter is identifiable. The EGPD approach proposed by \cite{papastathopoulos2013} bypasses this issue of mixture modeling by proposing several classes of parametric models with GPD limiting behavior for the upper tail. \cite{naveau2016} extended the scope of this approach by allowing classes of parametric models with GPD limiting behavior for both the lower and upper tails in an application to rainfall modeling.\\ This work presents a new statistical method, called the Log-Histospline (LHSpline), for estimating probabilities associated with extreme values. Similar to \cite{naveau2016}, this method applies extreme value theory to the upper tail of the distribution. However, it does not impose a parametric form on the bulk of the distribution, where the density can be determined from the data. This work is motivated by some applications in climate studies, one of which involves precipitation data. We would like to (i) provide a flexible model for the \textit{full range} of the non-zero rainfall distribution, and (ii) reliably estimate extreme events (e.g.\ the 100-year daily rainfall amount). Specifically, our approach is to first log transform the nonzero daily rainfall observations and then apply a generalized cubic smoothing spline to a finely binned histogram to estimate the log-density. The purpose of the data transformation step is similar to that of a variable bandwidth in the density estimation literature \citep{wand1991}. Applying spline smoothing log-density estimation \citep{silverman1982,gu1993} to the transformed variable ensures the algebraic (e.g., Pareto) upper tail behavior and the gamma-like lower (near-zero) tail behavior commonly observed in daily rainfall data.\\ The rest of this paper is structured as follows. In Section~\ref{sec2}, the LHSpline method is introduced and the computations needed for inference are described. A simulation study is presented in Section~\ref{sec3}. An application to daily precipitation collected at Houston Hobby Airport, Texas, is illustrated in Section~\ref{sec4}. Some discussion of future research directions is provided in the last section. \section{The Log-Histospline} \label{sec2} \subsection{The Model of Log Density: Natural Cubic Spline} Let $Y$ be a continuous random variable with a probability density function (pdf) $f(y)$.
Throughout this work we assume that $Y \in (0, \infty)$ is heavy-tailed, i.e., \begin{equation} f(y) \sim y^{-(\alpha+1)} \text{ as } y \to \infty, \mbox{ for some } \alpha >0. \end{equation} To represent the heavy tail of the distribution of $Y$, we take the logarithmic transformation $X = \log (Y)$, model its distribution via its log-density, and assume the pdf of $X$ takes the following form: \begin{equation} e^{g(x)}, \quad x \in (-\infty, \infty). \end{equation} Specifically, the density $f(\cdot)$ and log-density $g(\cdot)$ are related in the following way: \begin{equation} \label{pdf-transform} f(y) = y^{-1}e^{g(\log(y))}, \qquad y > 0. \end{equation} Modeling the log-density $g$ has the advantage that it conveniently enforces the positivity constraint on $f$ \citep{leonard1978,silverman1982,kooperberg1991,eilers1996}. Here we would like to model $g$ nonparametrically to avoid a strong and likely misspecified parametric structure \citep[e.g.][]{silverman1986}. However, it is well known that the usual nonparametric density estimation procedures often introduce spurious features in the tail density due to limited data in the tail \citep[e.g.][]{kooperberg1991}. This issue is amplified when dealing with heavy-tailed distributions such as those typically observed in precipitation data. On the other hand, since the focus is on estimating extreme quantiles, it is critical that the model can capture both the qualitative (polynomial decay) and quantitative (the tail index $\alpha$) behaviors of the tail density. Here we assume that $g$ belongs to the family of \textit{natural cubic splines} (see Definition~\ref{ncp}) to accommodate both a flexible bulk distribution and the algebraic tail. The LHSpline combines the ideas of \textit{spline smoothing} \citep[e.g.][]{wahba1990,gu2013} and the \textit{Histospline} \citep{boneva1971} to derive an estimator that achieves these goals. To facilitate the description of our method, we review the following definition \citep{wahba1990}. \begin{definition}[\textbf{Natural cubic spline}] \label{ncp} A natural cubic spline $g(x)$, $x \in [a,b]$, is defined with a set of points $\{\zeta_{i}\}_{i=1}^{N}$ (knots) such that $-\infty \leq a \le \zeta_{1} \le \zeta_{2} \le \cdots \le \zeta_{N} < b \leq \infty$ with the following properties: \begin{enumerate} \item $g$ is linear, $\quad x \in [a, \zeta_{1}], \quad x \in [\zeta_{N}, b]$ \item $g$ is a cubic polynomial, $\quad x \in [\zeta_{i}, \zeta_{i+1}], \quad i = 1, \cdots, N -1$ \item $g \in \mathcal{C}^{2}, \quad x\in (a, b)$ \end{enumerate} where $\mathcal{C}^{k}$ is the class of functions with $k$ continuous derivatives. \end{definition} The cubic spline smoother is widely used in nonparametric density estimation and provides a flexible model for the bulk of the distribution \citep[e.g.][]{silverman1986,gu2013}. In our setting, when $y \ge \exp(\zeta_{N})$, the tail density of $Y$ is \begin{align*} f(y) & = y^{-1}e^{g(\log(y))} = y^{-1}e^{-\alpha \log(y) + \beta}\\ & = y^{-1} e^{-\alpha \log(y)}e^{\beta} = Cy^{-1}e^{-\alpha \log(y)}\\ & = Cy^{-1}y^{-\alpha} = Cy^{-(\alpha + 1)} \tag{4} \end{align*} where $-\alpha$ and $\beta$ are the slope and intercept of the (linear) log-density $g$ for $x \ge \zeta_{N}$, so that $C = e^{\beta}$. Therefore, by the linear boundary conditions on a cubic spline we automatically obtain the algebraic right tail behavior.
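For later reference, we note (as a sketch, using the constants above) how extreme quantiles follow from this linear tail. For $y \ge \exp(\zeta_{N})$, integrating (4) gives the survival function in closed form,
\begin{equation*}
P(Y > y) = \int_{y}^{\infty} C t^{-(\alpha+1)}\, dt = \frac{C}{\alpha}\, y^{-\alpha},
\qquad \text{so that} \qquad
y_{p} = \left( \frac{C}{\alpha p} \right)^{1/\alpha}
\end{equation*}
is the level exceeded with probability $p$, valid whenever $y_{p} \ge \exp(\zeta_{N})$ (recall $C = e^{\beta}$). Under one common convention for daily data, the $T$-year return level then corresponds to $p = 1/(n_{w} T)$, where $n_{w}$ is the average number of nonzero observations per year.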
\subsection{The Estimation Procedure} \label{sec:2.2} Given an i.i.d.\ sample $Y_{1}, Y_{2}, \ldots, Y_{n}$ of $Y$, the estimation of $g$ involves two steps: data binning to create a histogram object, and a generalized smoothing spline fit to the histogram. In what follows, it is useful (see Fig.~\ref{fig:fig2}) to illustrate these steps using a synthetic data set simulated from an EGPD \citetext{\citealp{naveau2016}, p.\ 2757}. Some considerations when applying the LHSpline to precipitation data will be discussed in Section~\ref{sec4}.\\ \textbf{Data binning}: First, bin the transformed data $X_i=\log(Y_i), i=1, \ldots, n$ by choosing $N+1$ equally spaced break points $\{b_j \}$ to construct the corresponding histogram object (i.e.\ the histogram counts $Z_{j} = \# \{ X_{i}: X_{i}\in [b_{j}, b_{j+1}]\}$). We set the knots associated with the spline to the bin midpoints: $\zeta_j = (b_{j} + b_{j+1})/2$. Several remarks on data binning should be made here: \begin{enumerate}[(1)] \item Equal-sized bins in the log scale imply a variable bin size in the original scale, with the bins becoming increasingly wide in the upper tail. \item The number of bins should be ``sufficiently'' large for the Poisson assumption made in the next step to be justifiable. \item The choice of the first and especially the last break points (i.e.\ $b_{1}$ and $b_{N+1}$) will have non-negligible impacts on tail estimation in our framework. We will choose them somewhat smaller (larger) than the minimum $X_{(1)}$ (maximum $X_{(n)}$) of the log-transformed data, which is different from what is typically done in constructing a histogram. We will defer the discussion on this point to Sec~\ref{sec2.3}. \item It is assumed that the bin size is fine enough so that $\int_{b_{j}}^{b_{j+1}} e^{g(x)}\, dx$ is well approximated by $e^{g(\zeta_j)}(b_{j+1} - b_{j})$. \end{enumerate} \textbf{Smoothing the histogram}: We adopt a penalized approach \citep{good1971,tapia1978,green1994} to obtain a functional estimate of $g$ as follows. First, we assume the bin counts $\{Z_{j}\}_{j=1}^{N}$ each follow a Poisson distribution\footnote{$\{Z_{j}\}_{j=1}^{N}$ has a \textit{multinomial} distribution with parameters $n$ and $\bm{\pi} = (\pi_{1}, \cdots, \pi_{N})$, where $\pi_{j}$ is the probability that a random variable $X$ falls into the $j$th bin. Here we consider $Z_{j} \stackrel{ind}{\sim}\text{Pois}(\gamma \pi_{j}), \, j = 1,\cdots, N$. The MLE of $\gamma$ under the Poisson model is $\hat{\gamma} = \sum_{j=1}^{N}Z_{j} = n$, and $\hat{\bm{\pi}}$ is equal to the MLE of $\bm{\pi}$ under the multinomial model.} \citep{lindsey1974a,lindsey1974b,eilers1996,efron1996} with log intensity $\tilde{g}_{j}, \, j = 1, \cdots, N$, provided that $N$ is large enough so that the bins are sufficiently fine. Furthermore, we assume $\tilde{g}_{j} \approx \tilde{g}(\zeta_j)$, i.e., we assume the Poisson log intensity at each bin represents the log intensity function (the unnormalized log density function) evaluated at its midpoint. Hence we perform a penalized Poisson regression on the data pairs $\{(Z_{j}, \zeta_j)\}_{j=1}^{N}$ using a log link function and with a penalty term that penalizes the ``roughness'' of $\tilde{g}$. The estimate $\tilde{g}$, the unnormalized version of $g$, can be obtained by solving the following minimization problem: \begin{equation}\label{pnl} -\sum_{j=1}^{N} \left\{ \tilde{g}(\zeta_j)Z_{j}-e^{\tilde{g}(\zeta_j)} -\log(Z_{j}!)
\right\} + \lambda\left(\int _{x \in \mathbb{R}} \left(\tilde{g}^{\prime \prime}(x)\right)^2\,dx\right), \end{equation} where $\lambda$ is the smoothing parameter. Note that the first term in the objective function is the negative log likelihood for the histogram counts under the Poisson approximation to their distribution, and the second term, the integral of the squared second derivative of $\tilde{g}$ multiplied by the smoothing parameter, is the penalty function. The solution to this optimization problem exists, is unique, and has the form of a natural cubic spline for any $\lambda$ \citep{green1994,gu2013}. The selection of the smoothing parameter $\lambda$ plays the role of balancing the data fidelity of the Poisson regression fit to the histogram counts, as represented by the negative log-likelihood, against the ``smoothness'' of $\tilde{g}$, and is chosen by approximate cross validation \citetext{CV, see \citealp{o1988} for more details}. Note that the smoothness penalty favors linear functions, which correspond to the algebraic tail behavior we want in the untransformed distribution. Finally, we renormalize $\tilde{g}$ to make the integral $\int_{\mathbb{R}} e^{g}$ equal to one. Numerical integration is used to approximate the integral within the data range and an analytic form is used for the density beyond the bin limits. \begin{figure} \caption{An illustration of applying LHSpline to a simulated data set. Here we bin the log transformed data $\{x_{i}\}_{i=1}^{n}$.} \label{fig:fig2} \end{figure} \subsection{Bias Correction} \label{sec2.3} \subsubsection{Extending Data Range} \label{sec2.3.1} As mentioned in remark (3) in the data binning step, the choice of $b_{1}$ and $b_{N+1}$ plays an important role in our penalized approach. Under the usual histogram construction (i.e.\ $b_{1} = X_{(1)}$, $b_{N+1} = X_{(n)}$, see Fig~\ref{fig:fig2}) and a large number of bins, as in our setting, one will very likely observe ``bumps'' (i.e., isolated non-zero counts) at the boundaries of the histogram. As a result, the cross validation will maintain the smoothness and tend to overestimate the slopes near the boundary knots. Our solution to reduce this bias is to extend the range of the histogram beyond the sample maximum/minimum so that some zero counts beyond the boundaries will be included. In fact, one can think of the estimation procedure of the LHSpline as a discrete approximation to estimating the intensity (density) function of a (normalized) inhomogeneous Poisson process \citep{brown2004}. In this regard, one should take into account the support (the observation window in the point process context) as part of the data. \subsubsection{Further Corrections: Bootstrap and Smoothing Parameter Adjustment} \label{sec2.3.2} After applying the aforementioned boundary correction by extending the observation window, tail bias still exists. Here we propose two approaches to correct the bias. The first approach is to estimate the bias via a ``parametric'' bootstrap \citep{efron1979} on the LHSpline estimate. Note that given the approximate Poisson model for the histogram bin counts it is straightforward to generate bootstrap samples for this computation. The bias is estimated as the difference between the (point-wise) mean of the bootstrap estimates $\bar{g^{*}}$ and the ``true'' $g$, in this case $\hat{g}$, estimated from the histogram. We then subtract this bias term from the LHSpline estimate. The second approach is to adjust the smoothing parameter estimate $\hat{\lambda}_{\text{CV}}$ directly.
We found that the smoothing parameter obtained from CV may not give the best result here, as the primary objective is to estimate the tail density. It was found that, in our LHSpline, reducing $\lambda$ to $0.05 \times \hat{\lambda}_{\text{CV}}$ gives a better tail estimate while still maintaining good estimation performance in the bulk of the distribution (see Sec~\ref{sec3.3}). \subsection{Quantifying Estimation Uncertainty: The ``Bayesian'' Approach} \label{sec2.4} Here we give a brief account of the Bayesian interpretation of smoothing splines that will be used for quantifying estimation uncertainty for the LHSpline. Readers are referred to \citetext{\citealp{wahba1990}, Chapter 5, \citealp{green1994}, Chapter 3.8, and \citealp{gu2013}, Chapter 2.4} for more details.\\ From a Bayesian perspective, the penalty term in Equation~(\ref{pnl}) effectively introduces a Gaussian process prior on $g$, the log density. Hence the LHSpline estimation can be thought of as a procedure for finding the posterior mode of $g$ with a Poisson likelihood for the histogram and a zero mean Gaussian process with a generalized covariance (the reproducing kernel), $K(\cdot,\cdot)$, as the prior. The LHSpline estimate for a given histogram can be found by using the Fisher scoring algorithm \citetext{\citealp{green1994}, p.\ 100}, where the minimization of the non-quadratic negative penalized log-likelihood is approximated by the method of iteratively reweighted least squares (IRLS).\\ The goal here is to sample from an approximate posterior distribution $[\bm{g}|\bm{u}, \lambda]$ where $\bm{g} = \{g_{k} = g(\zeta_k)\}_{k=1}^{N}$, $\bm{u} = \{u_{k}\}_{k=1}^{N}$ are the ``pseudo'' observations in IRLS, $\lambda$ is the smoothing parameter, and the likelihood has been approximated with a multivariate Gaussian distribution. Based on the linear approximation around the posterior mode we have \begin{equation} [\bm{g}|\bm{u}, \lambda] \sim \text{MVN}(\hat{\bm{g}}, (W + \Gamma)^{-1}) \end{equation} where $W$ is a diagonal matrix representing the approximate precision matrix of the pseudo observations and $\Gamma$ is the prior precision for $g$ based on the Bayesian interpretation of a spline. Several assumptions are being made here: 1) the smoothing parameter $\lambda$ is known; 2) the linearized problem is assumed to be multivariate normal; and finally 3) a proper prior, $\text{N}(\bm{0}, \epsilon I)$, for the linear trend (i.e., the null space of a natural cubic spline) has been taken in the limit to be improper (i.e., $\epsilon \to \infty$).\\ The sampling approach is similar to a classical bootstrap \citep{efron1979} with the following steps: \begin{enumerate} \item generate $\bm{g}^{*} \sim \text{MVN}(\bm{0}, \Gamma^{-1})$ \item generate pseudo observations $\bm{u}^{*} \sim \text{MVN}(\bm{g}^{*}, W^{-1})$ \item compute the estimate based on $\bm{u}^{*}$: $\hat{\bm{g}}^{*} = (W+\Gamma)^{-1}W\bm{u}^{*}$ \item compute the error: $\bm{e}^{*} = \bm{g}^{*} - \hat{\bm{g}}^{*}$ \item form an approximate draw from the posterior, $\hat{\bm{g}} + \bm{e}^{*}$, where $\hat{\bm{g}}$ is the LHSpline estimate (the posterior mode/mean under the aforementioned Gaussian assumption) evaluated at the knots $\{\zeta_{k}\}_{k=1}^{N}$.
\end{enumerate} \section{Simulation Study} \label{sec3} The purposes of this simulation study are threefold: (i) to demonstrate how we implement the LHSpline, (ii) to compare the LHSpline method with the EVT-based methods (i.e.\ POT and BM) in estimating extreme quantiles, and (iii) to compare with a gamma distribution fit to the bulk distribution. \subsection{Setup} \label{sec3.1} We conduct a Monte Carlo study by simulating the data from model (i) of the EGPD \citetext{see \citealp{naveau2016}, p.\ 2757}. The basic idea of the EGPD is to modify the GPD random number generator $H_{\sigma, \xi}^{-1}(U)$, where $H^{-1}_{\sigma, \xi}$ denotes the inverse GPD cdf with scale $(\sigma)$ and shape $(\xi)$ parameters, by replacing $U$, the standard uniform random variable, with $V = G^{-1}(U)$, where $G$ is a continuous cdf on $[0,1]$. Model (i) takes $G(v) = v^{\kappa}$, $\kappa>0$, which contains the GPD as a special case (i.e., $\kappa = 1$). We choose the parameter values to be $\kappa = 0.8, \sigma = 8.5, \xi =0.2$ and the sample size $n = 18,250$, which corresponds to a time series with 50 years of complete daily data (ignoring leap years). The EGPD parameters are chosen to reflect typical distributional behavior of daily precipitation. We repeat the Monte Carlo experiment 100 times and evaluate the estimation performance for 25-, 50-, and 100-year return levels. \subsection{LHSpline Illustration} \label{sec3.2} We use a simulated data series (see Fig.~\ref{fig:fig2}) to illustrate the LHSpline method as described in Sec.~\ref{sec2}. To illustrate the ``boundary effect'' we first set equidistant breakpoints so that the knots (the midpoints of the histogram bins) span the whole range of the data (i.e.\ $\zeta_{1} = x_{(1)}$, $\zeta_{N}=x_{(n)}$, where $\zeta_{1}$ and $\zeta_{N}$ are the first and last knot points). We choose $N = 150$, meaning that we construct a histogram with a rather large number of bins (much larger than what the default in \texttt{hist} in \textsf{R} would suggest). We then apply the generalized smoothing spline procedure by solving equation~(\ref{pnl}) via approximate cross-validation to obtain an estimate. Figure~\ref{fig:fig3} shows the LHSpline estimate along with a GPD fit (with threshold $u$ chosen as the $.95$ empirical quantile) and the true density. Upon visual inspection one may conclude that the LHSpline performs well, or at least as well as the POT method, and both estimates are fairly close to the true density. \begin{figure} \caption{The density estimates for log scale $X$ \textbf{(left)} and the original scale $Y$ \textbf{(right)}.} \label{fig:fig3} \end{figure} However, a more careful examination of the log-log plot (log-density against $\log(y)$, see the left panel of Fig~\ref{fig:fig4}) and of the return level estimates reveals that the LHSpline with $\zeta_{1} = x_{(1)}$ and $\zeta_{N}=x_{(n)}$, in general, overestimates the extreme quantiles (see Fig.~\ref{fig:fig4}, right panel). \begin{figure} \caption{\textbf{Left}: the estimated log density against $\log(y)$. \textbf{Right}: the estimated return levels.} \label{fig:fig4} \end{figure} This overestimation by the LHSpline is partly due to the ``boundary effect'' when fitting a penalized Poisson regression to a histogram. Specifically, the histogram counts in Fig.~\ref{fig:fig2} are $\{Z_{1} = 1, Z_{2} = Z_{3} = \cdots = Z_{26} = 0, \cdots,Z_{148} = Z_{149} = 0, Z_{150} = 1\}$. The histogram counts in the first and the last bins are non-zero (typically 1) by construction but the nearby bins are likely to be zero.
Therefore, these ``bumps'' at both ends force the smoothing spline to overestimate the slopes at the boundaries (sometimes it will give nonsensical results, e.g., negative slope at the left boundary or/and positive slope at the right boundary) and hence the extreme quantiles.\\ To alleviate this boundary effect, we extend the range and number of bins in the histogram beyond the range of the data to include extra bins that have zero counts and refit these augmented data pairs (the original counts and those extra zero counts) to penalized Poisson regression. We observe that this strategy does remove some of the positive bias in return levels estimation (see Fig.~\ref{fig:fig5}) but there is still some positive bias remaining. We then apply the bias correction via bootstrap with 200 bootstrap samples and the aforementioned smoothing parameter adjustment. Both approaches further reduce the bias (see Fig~\ref{fig:fig5}). \begin{figure} \caption{The estimated log density and return levels with (green dashed line, LHS-er) and without (blue dashed line, LHS) boundary correction. Brown (purple) dashed lines is the LHSpline estimate with bootstrap ($\lambda$ adjustment) bias correction. The GPD estimate is shown in the red dot-dash line. The vertical red line is the chosen threshold for fitting the GPD.} \label{fig:fig5} \end{figure} \subsection{Estimator Performance} \label{sec3.3} In this subsection we first assess the performance of estimating high quantiles using an LHSpline and two commonly used EVT-based methods, namely, the block maxima (BM) method by fitting a GEV to ``annual maxima'', and the peaks-over-threshold (POT) method by fitting a GPD to excesses over a high threshold ($.95$ empirical quantile in our study). In order to put this comparison on an equal footing as much as possible, we put a positive Gamma prior (with mean equal to 0.2, the true value) on $\xi$ when fitting the GEV and GPD models. We use the \texttt{fevd} function in the \texttt{extRemes} R package \citep{extRemes} with the Generalized maximum likelihood estimation (GMLE) method \citep{martins2000,martins2001} to estimate the GEV and GPD parameters. We compare the GMLE plug-in estimates for 25-, 50-, and 100-year return levels with that of the LHSpline estimates (with and without bias correction). To get a better sense of how well each method performs, we also include the ``oracle estimates'' where we fit the true model (EGPD) using the maximum likelihood method to obtain the corresponding plug-in estimates.\\ Figure~\ref{fig:fig6} shows that, perhaps not surprisingly, the variability of the return level estimates obtained by fitting a GEV distribution to block maxima is generally larger than that of the estimates obtained by fitting a GPD distribution to threshold exceedances under the i.i.d.\ setting here. Also, it is clear that the LHSpline without bias correction can not only lead to serious bias but also inflate the estimation variance. After applying two different bias corrections mentioned in Sec~\ref{sec2.3}, the estimate becomes closer to being unbiased (especially the bias correction with adjustment of the smoothing parameter) with a reduction in the estimator variability. Also the estimator performance is comparable with that of the GPD. 
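For concreteness, the structure of the fitting step in Section~\ref{sec:2.2} can be sketched in a few lines of code. The snippet below is only a schematic stand-in, not the implementation used for the results above: it replaces the exact cubic smoothing-spline penalty with a discrete second-difference (P-spline type) penalty, fixes the smoothing parameter instead of using cross validation, and applies no bias correction beyond extending the bin range.
\begin{verbatim}
import numpy as np

def lhspline_sketch(y, n_bins=150, lam=10.0, extend=0.5, n_iter=50):
    # Bin the log-transformed data on a range extended beyond the sample
    # minimum/maximum, so that zero-count boundary bins are included.
    x = np.log(y)
    breaks = np.linspace(x.min() - extend, x.max() + extend, n_bins + 1)
    z, _ = np.histogram(x, bins=breaks)      # histogram counts Z_j
    mid = 0.5 * (breaks[:-1] + breaks[1:])   # knots zeta_j (bin midpoints)
    h = breaks[1] - breaks[0]                # common bin width

    # Discrete roughness penalty: second differences of g at the knots.
    D = np.diff(np.eye(n_bins), n=2, axis=0)
    P = 2.0 * lam * D.T @ D

    # Newton (IRLS) iterations for the penalized Poisson log-likelihood.
    g = np.log(z + 0.5)                      # crude initial log intensity
    for _ in range(n_iter):
        mu = np.exp(g)
        grad = -(z - mu) + P @ g             # gradient of the objective
        hess = np.diag(mu) + P               # Hessian (positive definite)
        step = np.linalg.solve(hess, grad)
        g = g - step
        if np.max(np.abs(step)) < 1e-8:
            break

    # Renormalize so that the fitted density of X integrates to one;
    # the density of Y is then recovered as f(y) = exp(g(log y)) / y.
    g = g - np.log(np.exp(g).sum() * h)
    return mid, g
\end{verbatim}
For a vector \texttt{y} of nonzero daily amounts, \texttt{lhspline\_sketch(y)} returns the knots and the fitted log density of $X=\log Y$; on the log-intensity scale the penalized Poisson objective is convex, and the Newton iteration above plays the role of the Fisher scoring/IRLS step described in Section~\ref{sec2.4}.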
\begin{figure} \caption{Boxplots of estimated return levels for each method.} \label{fig:fig6} \end{figure} We then assess the estimation uncertainty using the conditional simulation approach described in Sec~\ref{sec2.4} and compare it with the GPD interval estimates obtained by the delta and profile likelihood methods. To simplify this presentation, we only show the result of the 90\% confidence interval for the 50-year return level (see Fig.~\ref{fig:fig7}). \begin{figure} \caption{The interval estimates of the LHSpline (with $\lambda = 0.05 \times \hat{\lambda}_{\text{CV}}$) and of the GPD for the 50-year return level.} \label{fig:fig7} \end{figure} We summarize in Table~\ref{table1} the empirical coverage probability (ECP) and the CI width for the delta and profile likelihood methods when fitting a GPD to excesses over the .95 quantile, and for the conditional simulation (cond-sim) when fitting the LHSpline with $\lambda = 0.05 \times \hat{\lambda}_{\text{CV}}$, for 25-, 50-, and 100-year return levels, respectively. \begin{table}[H] \caption{The empirical coverage probability (ECP) and (90\%) CI width in parentheses for each method for 25-, 50-, and 100-year return levels, respectively.} \label{table1} \begin{center} \begin{tabular}{ c l l l } \hline Method & Delta & Proflik & Cond-sim \\ \hline \hline 25-yr \, RL ECP & 0.63 (46.9) & 0.80 (66.9) & 0.87 (111.1) \\ \hline 50-yr \, RL ECP & 0.74 (77.8) & 0.84 (93.4) & 0.87 (177.9) \\ \hline 100-yr RL ECP & 0.77 (107.8) & 0.82 (128.9) & 0.88 (295.4) \\ \hline \end{tabular} \end{center} \end{table} We also assess the estimation performance of the LHSpline in the bulk of the distribution. Fig.~\ref{fig:fig8} shows that the LHSpline performs much better than the gamma fit, a widely used distribution for modeling precipitation \citep{katz1977,wilks2011}, and performs nearly as well as the ``oracle'' (EGPD) approach. \begin{figure} \caption{Boxplots of quantile (from 0.001 to 0.999) estimates. The true values are represented by horizontal red lines.} \label{fig:fig8} \end{figure} \section{Applications} \label{sec4} \subsection{Motivation: Hurricane Harvey Extreme Rainfall} \label{sec4.1} Hurricane Harvey brought unprecedented amounts of rainfall to the Houston metropolitan area between the 25th and the 31st of August 2017, resulting in catastrophic damage to personal property and infrastructure. In the wake of such an extreme event, there is interest in understanding its rarity and how human-induced climate change might alter the chances of observing such an event \citep{risser2017}. In this section, we apply the LHSpline to the daily precipitation measurements for Houston Hobby Airport (see Fig.~\ref{fig:fig1} for the histogram and Fig.~\ref{fig:fig9} for the time series) from the Global Historical Climatology Network (GHCN) \citep{GHCN}. \subsection{Houston Hobby Rainfall Data Analysis} \label{sec4.2} We fit an LHSpline with $150$ equally spaced bins to the full range of non-zero precipitation for Hobby Airport ($\sim 27.6\%$ of all the observations) prior to this event (from Jan. 1949 to Dec. 2016). To facilitate a comparison with the POT method we also fit a GPD to excesses over a high threshold (chosen as $u_{0} = 43 \text{ mm}$, $\sim .93$ empirical quantile of nonzero daily precipitation; see the diagnostic plot in Fig.~\ref{fig:fig10}).\\ \begin{figure} \caption{The time series plot of Houston Hobby Airport daily precipitation from 1949 to 2017.
Three out of the five largest daily precipitation values were observed during 26th to 28th August, 2017.} \label{fig:fig9} \end{figure} \begin{figure} \caption{\textbf{Left} \label{fig:fig10} \end{figure} An important practical issue when applying the LHSpline is that the discretization of rainfall measurement ($1/100^{\text{th}}$ of an inch) introduces an artifact in the finely binned histogram in the log scale (see Fig.~\ref{fig:fig11}, left panel) and hence the cross validation choice for $\lambda$ is affected by this discretization. In practice, all the precipitation measurements have precision limits and thus it is important to take this fact into account especially when the measurements are small (i.e., near zero). Here we treat the data being left-censored by truncating the values below two different values ($\exp(1)$ and $\exp(2)$) to alleviate this ``discreteness'' effect. \begin{figure} \caption{The density estimate for log scale \textbf{(left)} \label{fig:fig11} \end{figure} As has been demonstrated in Fig.~\ref{fig:fig3} and Fig.~\ref{fig:fig4} in Sec.~\ref{sec3}, a good visual agreement in log-density (density) does not necessarily imply that it actually provides a good return level estimate. Fig.~\ref{fig:fig12} shows the estimated log-log plot and return levels ranging from 2 years to 100 years (under the assumption of temporal stationarity). The LHSpline gives somewhat lower estimates than that of the GPD estimates. A quantile-quantile plot in Fig.~\ref{fig:fig13} (left panel) indicates that there might be an issue of overestimating extreme high quantiles in the GPD fit. That is, the GPD estimate of the 68-year rainfall is 395.47 (90\% CI (300.88, 569.67)) mm whereas the ($\lambda$-adjusted) LHSpline estimates (with lower bound 0.254, $\exp(1)$, and $\exp(2$)) are 371.04 (302.39, 567.62) mm, 310.64 (261.62, 491.80) mm, and 287.06 (246.86, 457.05) mm, respectively, which are somewhat closer to the maximum value (252.73 mm) during the 1949 $\sim$ 2016 period. However, one should be aware that the sample maximum can be quite variable and hence might not reflect the true magnitude of the ``68-year rainfall''. Fig.~\ref{fig:fig13} (right panel) also indicates that the LHSpline might provide a better fit than the EGPD to the Hobby Airport data. \begin{figure} \caption{The estimated log densities in the log scale \textbf{(left)} \label{fig:fig12} \end{figure} \textbf{\begin{figure} \caption{The quantile-quantile plot of the Hobby daily precipitation data. \textbf{Left:} \label{fig:fig13} \end{figure}} Lastly, we would like to investigate the question: ``\textit{How unusual was the event of 306.58 mm (26th August, 2017) in daily total precipitation at Hobby Airport}?'' To simplify the matter we make a stationarity assumption, that is, the distribution of daily precipitation does not change during the time period 1949 $\sim$ 2017. 
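Under this stationarity assumption, a return period estimate is obtained by plugging an estimated exceedance probability into the usual conversion (we state one common convention for concreteness; the symbols $n_w$ and $\hat{p}$ are introduced here only for illustration, and the precise definition used earlier in the paper may differ in detail). If $\hat{p}$ denotes the estimated probability that a wet day exceeds $306.58$ mm and $n_w \approx 0.276 \times 365.25 \approx 101$ is the average number of wet days per year at Hobby Airport, then the return period in years is estimated by
$$
\hat{T} \;=\; \frac{1}{n_w\,\hat{p}}\,.
$$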
The estimates for the return period of this event are given in the following table: \begin{table}[H] \caption{The estimated return period of the 26th August, 2017 daily total precipitation observed at Hobby Airport using POT, LHSpline, and EGPD.}\label{table2} \begin{center} \begin{tabular}{ c c c c c c c} \hline Method & POT & LHSpline & LHSpline1 & LHSpline2& EGPD1 & EGPD2 \\ \hline \hline Estimate (years) & 30.5 & 34.8 & 64.0 & 98.0 & 92.5 & 154.5 \\ 90\% CI Lower limit & 17.0 & 15.2 & 19.1& 22.0& 71.5& 117.6\\ 90\% CI Upper limit & 73.3 & 73.2 & 172.4& 345.7& 457.6& 515.6\\ \hline \end{tabular} \end{center} \end{table} The much shorter return period estimate obtained from the POT method is due to the overestimation of the upper tail (see Fig.~\ref{fig:fig13}), whereas the somewhat longer return periods estimated by the LHSplines might be more aligned with what people would expect. One should again be aware, however, that there is, among other issues, a large estimation uncertainty associated with large return periods (see Table~\ref{table2}). \section{Discussion} \label{sec5} This work presents a new statistical method, the LHSpline, for estimating extreme quantiles of heavy-tailed distributions. In contrast with some widely used EVT-based methods that require extracting ``extreme'' observations to fit a corresponding asymptotically justified distribution, the LHSpline makes use of the \textit{full range} of the observations for the fitting. By combining a data transform with spline smoothing, the LHSpline estimation effectively achieves a tail structure that is consistent with EVT together with a flexible bulk distribution in the context of rainfall modeling. We demonstrate through simulation that this method performs comparably to the POT method for return level estimation, with the additional benefit that it jointly models the bulk and the tail of a distribution.\\ However, by construction, the LHSpline method is only applicable to heavy-tailed distributions, which excludes many important environmental processes, for example, surface air temperature, which may have a bounded tail \citep{gilleland2006}. In terms of implementation, the LHSpline requires some additional tuning; for example, the number of bins for the histogram should grow with the sample size. Limited experiments (not reported here) suggest the estimate is not sensitive to the number of bins once the binning is chosen fine enough. Another tuning issue is to decide how far one should extend beyond the range of the observations and how to adjust the smoothing parameter $\lambda$ to remove the boundary effect. Here we suggest extending the bin range and reducing the smoothing parameter. This choice takes the place of picking the threshold in the POT approach, and we believe it is less sensitive in terms of density estimation.\\ The theoretical properties of our method are largely unexplored. Most of the theoretical results developed in nonparametric density estimation concern the performance in terms of global indices such as $\mathbb{E}\left[\int_{x \in \mathbb{R}} |f(x) - \hat{f}(x)|\, dx\right]$ or $\mathbb{E}\left[\sup_{x \in \mathbb{R}} |f(x) - \hat{f}(x)|\right]$ and are largely confined to a bounded region (i.e., $x \in [a,b]$ with $a$ and $b$ finite).
It is not clear how these results can inform us about the estimation performance for extreme quantiles on a potentially unbounded domain.\\ Applying the LHSpline to many observational records or to the grid cells of high-resolution and/or ensemble climate model experiments \citep[e.g.][]{NARCCAP,wang2015} will result in summaries of the distribution that are well suited for further data mining and analysis. The log-density form is particularly convenient for dimension reduction because linear (additive) projection methods such as principal component analysis make sense in the log space. In general, we believe the LHSpline will be a useful tool as climate informatics tackles complex problems of quantifying climate extremes. \end{document}
\begin{document} \title[fractional Hamiltonian systems with positive semi-definite matrix] {Existence and concentration of solutions for a fractional Hamiltonian system with positive semi-definite matrix} \author{C\'esar Torres} \author{Ziheng Zhang} \author{Amado Mendez} \address[C\'esar Torres]{\newline\indent Departamento de Matem\'aticas \newline\indent Universidad Nacional de Trujillo, \newline\indent Av. Juan Pablo II s/n. Trujillo-Per\'u} \email{\href{mailto:ctl\[email protected]}{ctl\[email protected]}} \address[Ziheng Zhang] {\newline\indent Department of Mathematics, \newline\indent Tianjin Polytechnic University, \newline\indent Tianjin 300387, China.} \email{\href{mailto:[email protected]}{[email protected]}} \address[Amado Mendez]{\newline\indent Departamento de Matem\'aticas \newline\indent Universidad Nacional de Trujillo, \newline\indent Av. Juan Pablo II s/n. Trujillo-Per\'u} \email{\href{mailto:[email protected]}{[email protected]}} \pretolerance10000 \begin{abstract} \noindent We study the existence of solutions for the following fractional Hamiltonian systems $$ \left\{ \begin{array}{ll} - _tD^{\alpha}_{\infty}(_{-\infty}D^{\alpha}_{t}u(t))-\lambda L(t)u(t)+\nabla W(t,u(t))=0,\\[0.1cm] u\in H^{\alpha}(\mathbb{R},\mathbb{R}^n), \end{array} \right. \eqno(\mbox{FHS})_\lambda $$ where $\alpha\in (1/2,1)$, $t\in \mathbb{R}$, $u\in \mathbb{R}^n$, $\lambda>0$ is a parameter, $L\in C(\mathbb{R},\mathbb{R}^{n^2})$ is a symmetric matrix for all $t\in \mathbb{R}$, $W\in C^1(\mathbb{R} \times \mathbb{R}^n,\mathbb{R})$. Assuming that $L(t)$ is a positive semi-definite symmetric matrix for all $t\in \mathbb{R}$, that is, $L(t)\equiv 0$ is allowed to occur in some finite interval $T$ of $\mathbb{R}$, and that $W(t,u)$ satisfies some superquadratic conditions weaker than the Ambrosetti-Rabinowitz condition, we show that (FHS)$_\lambda$ has a solution which vanishes on $\mathbb{R}\setminus T$ as $\lambda \to \infty$, and converges to some $\tilde{u}\in H^{\alpha}(\R, \R^n)$. Here, $\tilde{u}\in E_{0}^{\alpha}$ is a solution of the Dirichlet BVP for fractional systems on the finite interval $T$. Our results are new and improve recent results in the literature even in the case $\alpha =1$. \end{abstract} \subjclass[2010]{Primary 34C37; Secondary 35A15, 35B38.} \keywords{Fractional Hamiltonian systems, Fractional Sobolev space, Critical point theory, Concentration phenomena.} \maketitle \section{Introduction} Fractional Hamiltonian systems are a significant area of nonlinear analysis, since they appear in many phenomena studied in several fields of applied science, such as engineering, physics, chemistry, astronomy and control theory. On the other hand, the theory of fractional calculus is an area that has been developing intensively during the last decades; see \cite{ATMS04, ErvinR06, Hilfer00, KST06, MR93, Pod99} and the references therein. The existence of homoclinic solutions for Hamiltonian systems and their importance in the study of the behavior of dynamical systems can be traced back to Poincar\'{e} \cite{Poincare}. Since then, the investigation of the existence and multiplicity of homoclinic solutions has become one of the most important research problems in dynamical systems. Critical point theory was first used by Rabinowitz \cite{Rab86} to obtain the existence of periodic solutions for first order Hamiltonian systems, while the first multiplicity result is due to Ambrosetti and Zelati \cite{AmbroZelati93}.
Subsequently, a large number of mathematicians have used critical point theory and variational methods to prove the existence of homoclinic solutions for Hamiltonian systems; see for instance \cite{Co91,GWC,Ding95,Izydorek05,Izydorek07, Omana92,Rab90,Rab91} and the references therein. Critical point theory has also become an effective tool for investigating the existence and multiplicity of solutions for fractional differential equations by constructing fractional variational structures. In particular, in \cite{FJYZ} the authors first dealt with a class of fractional boundary value problems via critical point theory. Since then, variational methods and critical point theory have been shown to be effective in determining solutions of fractional differential equations with a variational structure. We also mention the work by Torres \cite{Torres12}, where the author considered the following fractional Hamiltonian systems $$ \left\{ \begin{array}{ll} _tD^{\alpha}_{\infty}(_{-\infty}D^{\alpha}_{t}u(t))+L(t)u(t)=\nabla W(t,u(t)),\\[0.1cm] u\in H^{\alpha}(\mathbb{R},\mathbb{R}^n), \end{array} \right. \eqno(\mbox{FHS}) $$ where $\alpha\in (1/2,1)$, $t\in \mathbb{R}$, $u\in \mathbb{R}^n$, $L\in C(\mathbb{R},\mathbb{R}^{n^2})$ is a symmetric and positive definite matrix for all $t\in \mathbb{R}$, $W\in C^1(\mathbb{R}\times \mathbb{R}^n,\mathbb{R})$ and $\nabla W(t,u)$ is the gradient of $W(t,u)$ at $u$. Assuming that $L(t)$ satisfies the following coercivity condition \begin{itemize} \item[(L)] there exists an $l\in C(\mathbb{R},(0,\infty))$ with $l(t)\rightarrow \infty$ as $|t|\rightarrow \infty$ such that \begin{equation}\label{eqn:L coercive} (L(t)u,u)\geq l(t)|u|^2 \quad \mbox{for all}\,\, t\in \mathbb{R} \,\, \mbox{and} \,\, u\in \mathbb{R}^n. \end{equation} \end{itemize} and that $W(t,u)$ satisfies the Ambrosetti-Rabinowitz condition \begin{itemize} \item[(FHS$_1$)]$W\in C^1(\mathbb{R} \times \mathbb{R}^n,\mathbb{R})$ and there is a constant $\theta>2$ such that $$ 0<\theta W(t,u)\leq (\nabla W(t,u),u)\quad \mbox{for all}\,\, t\in \mathbb{R} \,\,\mbox{and}\,\, u\in \mathbb{R}^n\backslash\{0\}, $$ \end{itemize} and other suitable conditions, the author showed that (FHS) possesses at least one nontrivial solution via the Mountain Pass Theorem. Note that (FHS$_1$) implies that $W(t,u)$ is of superquadratic growth as $|u|\rightarrow \infty$ (see the computation recalled below). Since then, many researchers have dealt with (FHS) for the cases in which $W(t,u)$ is superquadratic or subquadratic at infinity; see for instance \cite{MendezTorres15,XuReganZhang15,ZhangYuan}. In addition, some perturbed fractional Hamiltonian systems are discussed in \cite{Torres14,XuReganZhang15}. In \cite{ZhangYuan14} the authors focused on weakening the coercivity condition $(L)$; more precisely, they assumed that $L(t)$ is bounded in the following sense: \begin{itemize} \item[(L)$'$] $L\in C(\mathbb{R},\mathbb{R}^{n^2})$ is a symmetric and positive definite matrix for all $t\in \mathbb{R}$ and there are constants $0<\tau_1<\tau_2<\infty$ such that $$ \tau_1|u|^2\leq (L(t)u,u)\leq \tau_2|u|^2\quad \mbox{for all}\,\, (t,u)\in \mathbb{R} \times \mathbb{R}^n, $$ \end{itemize} Supposing that $W(t,u)$ is subquadratic as $|u|\rightarrow +\infty$, the authors also showed that (FHS) possesses infinitely many solutions, which has been generalized in \cite{NyaZhou2017,ZhouZhang2017}.
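For the reader's convenience, we recall the standard computation behind the superquadratic growth mentioned above (this short verification is included only for completeness and is not used elsewhere). For fixed $t\in \mathbb{R}$ and $u\neq 0$, condition (FHS$_1$) gives
$$
\frac{d}{ds}\Big(s^{-\theta}W(t,su)\Big)\;=\;s^{-\theta-1}\big[(\nabla W(t,su),su)-\theta W(t,su)\big]\;\geq\;0,\qquad s>0,
$$
so $s\mapsto s^{-\theta}W(t,su)$ is nondecreasing on $(0,\infty)$; comparing $s=1/|u|$ with $s=1$ yields $W(t,u)\geq W(t,u/|u|)\,|u|^{\theta}$ for $|u|\geq 1$, and hence, for each fixed $t$, $W(t,u)/|u|^{2}\to +\infty$ as $|u|\to \infty$, since $\theta>2$ and $W(t,\cdot)>0$ on the unit sphere.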
In the present paper we deal with the following fractional Hamiltonian systems $$ \left\{ \begin{array}{ll} - _tD^{\alpha}_{\infty}(_{-\infty}D^{\alpha}_{t}u(t))-\lambda L(t)u(t)+\nabla W(t,u(t))=0,\\[0.1cm] u\in H^{\alpha}(\mathbb{R},\mathbb{R}^n), \end{array} \right. \eqno(\mbox{FHS})_{\lambda} $$ where $\alpha\in (1/2,1)$, $t\in \mathbb{R}$, $u\in \mathbb{R}^n$, $\lambda>0$ is a parameter, $W\in C^1(\mathbb{R} \times \mathbb{R}^n,\mathbb{R})$ and $L$ satisfies the following conditions \begin{itemize} \item[$(\mathcal{L})_1$]$L\in C(\mathbb{R},\mathbb{R}^{n\times n})$ is a symmetric matrix for all $t\in\mathbb{R}$; there exists a nonnegative continuous function $l:\mathbb{R} \rightarrow \mathbb{R}$ and a constant $c>0$ such that $$ (L(t)u,u)\geq l(t)|u|^2, $$ and the set $\{l<c\}:=\{t\in \mathbb{R} \,|\,l(t)<c\}$ is nonempty with $meas \{l<c\}<\frac{1}{C_{\infty}^2}$, where $meas \{\cdot\}$ is the Lebesgue measure and $C_\infty$ is the best Sobolev constant for the embedding of $X^{\alpha}$ into $L^{\infty}(\mathbb{R})$; \item[$(\mathcal{L})_2$]$J=int (l^{-1}(0))$ is a nonempty finite interval and $\overline{J}=l^{-1}(0)$; \item[$(\mathcal{L})_3$]there exists an open interval $T\subset J$ such that $L(t)\equiv 0$ for all $t\in \overline{T}$. \end{itemize} In particular, if $\alpha=1$ in (FHS)$_\lambda$, then it reduces to the following well-known second order Hamiltonian system $$ \ddot u- \lambda L(t) u+\nabla W(t,u)=0.\eqno(\mbox{HS}) $$ Recently, a second order Hamiltonian system like (HS) with a positive semi-definite matrix was considered in \cite{JSTW}. Assuming that $W \in C^1(\mathbb{R}\times \mathbb{R}^n, \mathbb{R})$ is an indefinite potential satisfying an asymptotically quadratic condition at infinity in $u$, Sun and Wu, albeit with a small mistake in their embedding results, proved the existence of two homoclinic solutions. For more related works, we refer the reader to \cite{Co91,Ding95,Izydorek05,Izydorek07,Omana92,Rab91} and the references mentioned there. Here we must point out that, to obtain the existence or multiplicity of solutions for Hamiltonian systems, all the papers mentioned above need the assumption that the symmetric matrix $L(t)$ is positive definite; see (L) and (L)$'$. For this reason, the authors in \cite{Benhassine2017,TorresZhang2017,ZhangTorres} recently considered the case in which $L(t)$ is positive semi-definite and satisfies $(\mathcal{L})_1$. In \cite{Benhassine2017}, the author dealt with (FHS) for the case in which $(\mathcal{L})_1$ is satisfied and $W(t,u)$ involves a combination of superquadratic and subquadratic terms and is allowed to be sign-changing. In \cite{TorresZhang2017,ZhangTorres}, we have considered the existence of solutions of (FHS)$_\lambda$ and the concentration of its solutions when $(\mathcal{L})_1$-$(\mathcal{L})_3$ are satisfied and $W(t,u)$ satisfies certain classes of superquadratic hypotheses. Motivated by these previous results, the main purpose of this paper is to investigate (FHS)$_\lambda$ without the Ambrosetti-Rabinowitz condition (FHS$_1$). More precisely, we suppose that $W(t,u)$ satisfies the following assumptions \begin{enumerate} \item[($W_1$)] $|\nabla W(t,u)| = o(|u|)$ as $|u|\to 0$ uniformly in $t\in \mathbb{R}$. \item[$(W_2)$] $W(t,u)\geq 0$ for all $(t,u)\in \mathbb{R}\times \mathbb{R}^N$ and $H(t,u)\geq0$ for all $(t, u) \in \mathbb{R} \times \mathbb{R}^N$, where $$ H(t,u) :=\frac{1}{2}\langle \nabla W(t,u), u \rangle - W(t,u).
$$ \item[$(W_3)$] $\frac{W(t,u)}{|u|^2} \to + \infty$ as $|u| \to +\infty$ uniformly in $t\in \mathbb{R}$. \item[$(W_4)$] There exist $C_0, R >0$, and $\sigma >1$ such that $$ \frac{|\nabla W(t,u)|^\sigma}{|u|^\sigma} \leq C_0H(t,u)\;\;\mbox{if}\;\;|u| \geq R. $$ \end{enumerate} Note that, according to \cite{GWC}, the nonlinearity $$ W(t,u) = g(t)(|u|^p + (p-2)|u|^{p-\epsilon}\sin^2(\frac{|u|^\epsilon}{\epsilon})), $$ where $g(t)>0$ is $T$-periodic in $t$, $0<\epsilon < p-2$ and $p>2$, satisfies $(W_1)-(W_4)$, but (FHS$_1$) is not satisfied. Now we are in a position to state our main result. \begin{theorem}\label{Thm:MainTheorem1} Suppose that {\rm ($\mathcal{L}$)$_1$}-{\rm ($\mathcal{L}$)$_3$} and $(W_1) - (W_4)$ are satisfied. Then there exists $\Lambda _*>0$ such that for every $\lambda>\Lambda_*$, {\rm(FHS)$_\lambda$} has at least one nontrivial solution. \end{theorem} Concerning the concentration of the solutions obtained above, for technical reasons we assume that there exists $0<\varrho< +\infty$ such that $T = [-\varrho,\varrho]$, where $T$ is given by $(\mathcal{L})_3$. We have the following result. \begin{theorem}\label{Thm:MainTheorem2} Let $u_\lambda$ be a solution of problem ${\rm (FHS)}_\lambda$ obtained in Theorem \ref{Thm:MainTheorem1}. Then $u_\lambda \to \tilde{u}$ strongly in $H^{\alpha}(\mathbb{R})$ as $\lambda \to \infty$, where $\tilde{u}$ is a nontrivial solution of the following boundary value problem \begin{eqnarray}\label{eqn:BVP} \left\{ \begin{array}{ll} {_{t}}D_{\varrho}^{\alpha} ({_{-\varrho}}D_{t}^{\alpha})u = \nabla W(t, u), & t\in (-\varrho, \varrho) \\[0.2cm] u(-\varrho) = u(\varrho) = 0, \end{array} \right. \end{eqnarray} where ${_{-\varrho}}D_{t}^{\alpha}$ and $_{t}D_{\varrho}^{\alpha}$ are the left and right Riemann-Liouville fractional derivatives of order $\alpha$ on $[-\varrho,\varrho]$, respectively. \end{theorem} \begin{remark} {\rm In Theorem \ref{Thm:MainTheorem1}, we give some new superquadratic conditions on $W(t,u)$ that guarantee the existence of solutions, and we investigate the concentration of these solutions in Theorem \ref{Thm:MainTheorem2}. However, we must point out that the methods in \cite{Benhassine2017,TorresZhang2017,ZhangTorres} are not valid under our new assumptions. To overcome this difficulty we apply the Mountain Pass Theorem with the Cerami condition; however, a direct application of the mountain pass theorem is not enough, since the Cerami sequences might lose compactness in the whole space $\mathbb{R}$. It is therefore necessary to introduce a new compactness result to recover the convergence of Cerami sequences; for more details see Lemma \ref{cerami1}. } \end{remark} The remaining part of this paper is organized as follows. Some preliminary results are presented in Section 2. Section 3 is devoted to the proof of Theorem \ref{Thm:MainTheorem1}, and in Section 4 we present the proof of Theorem \ref{Thm:MainTheorem2}. \section{Preliminary Results} In this section, for the reader's convenience, we first introduce some basic definitions of fractional calculus. The Liouville-Weyl fractional derivatives of order $0<\alpha<1$ are defined as \begin{equation}\label{eqn:RD} _{-\infty}D^{\alpha}_x u(x)=\frac{d}{dx} {_{-\infty}I^{1-\alpha}_x u(x)} \quad \mbox{and}\quad _{x}D^{\alpha}_{\infty} u(x)=-\frac{d}{dx} {_{x}I^{1-\alpha}_{\infty} u(x)}.
\end{equation} where $_{-\infty}I^{\alpha}_x$ and $_{x}I^{\alpha}_{\infty}$ are the left and right Liouville-Weyl fractional integrals of order $0<\alpha<1$ defined as $$ _{-\infty}I^{\alpha}_x u(x)=\frac{1}{\Gamma(\alpha)}\int^x_{-\infty} (x-\xi)^{\alpha-1}u(\xi)d\xi \quad \mbox{and}\quad _{x}I^{\alpha}_{\infty} u(x)=\frac{1}{\Gamma(\alpha)}\int^{\infty}_{x}(\xi-x)^{\alpha-1}u(\xi)d\xi. $$ Furthermore, for $u\in L^p(\R)$, $p\geq 1$, we have $$ \mathcal{F}({_{-\infty}}I_{x}^{\alpha}u(x)) = (i\omega)^{-\alpha}\widehat{u}(\omega)\quad \quad \mbox{and}\quad \quad \mathcal{F}({_{x}}I_{\infty}^{\alpha}u(x)) = (-i\omega)^{-\alpha}\widehat{u}(\omega), $$ and for $u\in C_{0}^{\infty}(\R)$, we have $$ \mathcal{F}({_{-\infty}}D_{x}^{\alpha}u(x)) = (i\omega)^{\alpha}\widehat{u}(\omega)\quad \quad \mbox{and}\quad \quad \mathcal{F}({_{x}}D_{\infty}^{\alpha}u(x)) = (-i\omega)^{\alpha}\widehat{u}(\omega), $$ In order to establish the variational structure which enables us to reduce the existence of solutions of (FHS)$_\lambda$ to find critical points of the corresponding functional, it is necessary to consider some appropriate function spaces. Denote by $L^p(\mathbb{R},\mathbb{R}^n)$ ($1\leq p <\infty$) the Banach spaces of functions on $\mathbb{R}$ with values in $\mathbb{R}^n$ under the norms $$ \|u\|_{L^p}=\Bigl(\int_{\mathbb{R}}|u(t)|^p dt\Bigr)^{1/p}, $$ and $L^{\infty}(\mathbb{R},\mathbb{R}^n)$ is the Banach space of essentially bounded functions from $\mathbb{R}$ into $\mathbb{R}^n$ equipped with the norm $$ \|u\|_{\infty}=\mbox{ess} \sup\left\{|u(t)|: t\in \mathbb{R} \right\}. $$ Let $-\infty<a<b<+\infty$, $0< \alpha \leq 1$ and $1<p<\infty$. The fractional derivative space $E_{0}^{\alpha ,p}$ is defined by the closure of $C_{0}^{\infty}([a,b], \mathbb{R}^n)$ with respect to the norm \begin{equation}\label{norm} \|u\|_{\alpha ,p} = \left(\int_{a}^{b} |u(t)|^pdt + \int_{a}^{b}|{_{a}}D_{t}^{\alpha}u(t)|^pdt \right)^{1/p}, \;\;\forall\; u\in E_{0}^{\alpha ,p}. \end{equation} Furthermore $(E_{0}^{\alpha ,p}, \|.\|_{\alpha ,p})$ is a reflexive and separable Banach space and can be characterized by $$E_{0}^{\alpha , p} = \{u\in L^{p}([a,b], \mathbb{R}^n) | {_aD}_{t}^{\alpha}u \in L^{p}([a,b], \mathbb{R}^n)\;\mbox{and}\;u(a) = u(b) = 0\}.$$ \begin{proposition}\label{FC-FEprop3} \cite{FJYZ} Let $0< \alpha \leq 1$ and $1 < p < \infty$. For all $u\in E_{0}^{\alpha ,p}$, we have \begin{equation}\label{FC-FEeq3} \|u\|_{L^{p}} \leq \frac{(b-a)^{\alpha}}{\Gamma (\alpha +1)} \|{_aD}_{t}^{\alpha}u\|_{L^{p}}. \end{equation} If $\alpha > 1/p$ and $\frac{1}{p} + \frac{1}{q} = 1$, then \begin{equation}\label{FC-FEeq4} \|u\|_{\infty} \leq \frac{(b-a)^{\alpha -1/p}}{\Gamma (\alpha)((\alpha - 1)q +1)^{1/q}}\|{_aD}_{t}^{\alpha}u\|_{L^{p}}. \end{equation} \end{proposition} \noindent By (\ref{FC-FEeq3}), we can consider in $E_{0}^{\alpha ,p}$ the following norm \begin{equation}\label{FC-FEeq5} \|u\|_{\alpha ,p} = \|{_aD}_{t}^{\alpha}u\|_{L^{p}}, \end{equation} which is equivalent to (\ref{norm}). \begin{proposition}\label{FC-FEprop4} \cite{FJYZ} Let $0< \alpha \leq 1$ and $1 < p < \infty$. Assume that $\alpha > \frac{1}{p}$ and $\{u_{k}\} \rightharpoonup u$ in $E_{0}^{\alpha ,p}$. Then $u_{k} \to u$ in $C[a,b]$, i.e. $$ \|u_{k} - u\|_{\infty} \to 0,\;k\to \infty. 
$$ \end{proposition} For $\alpha>0$, consider the Liouville-Weyl fractional spaces $$ I^{\alpha}_{-\infty}=\overline{C^{\infty}_0(\mathbb{R},\mathbb{R}^n)}^{\|\cdot\|_{I^{\alpha}_{-\infty}}}, $$ where \begin{equation}\label{eqn:defn Rnorm} \|u\|_{I^{\alpha}_{-\infty}}=\Bigl(\int_{\mathbb{R}}u^2(x)dx+ \int_{\mathbb{R}}|_{-\infty}D^{\alpha}_x u(x)|^2dx\Bigr)^{1/2}. \end{equation} Furthermore, we introduce the fractional Sobolev space $H^{\alpha}(\mathbb{R},\mathbb{R}^n)$ of order $0<\alpha<1$ which is defined as \begin{equation}\label{eqn:alphanorm} H^{\alpha}=\overline{C^{\infty}_0(\mathbb{R},\mathbb{R}^n)}^{\|\cdot\|_{\alpha}}, \end{equation} where $$ \|u\|_{\alpha}=\Bigl(\int_{\mathbb{R}}u^2(x)dx+ \int_{\mathbb{R}}|w|^{2\alpha}\widehat{u}^2(w)dw\Bigr)^{1/2}. $$ Note that, a function $u\in L^2(\mathbb{R},\mathbb{R}^n)$ belongs to $I^{\alpha}_{-\infty}$ if and only if $$ |w|^{\alpha}\widehat{u}\in L^2(\mathbb{R},\mathbb{R}^n). $$ Therefore, $I^{\alpha}_{-\infty}$ and $H^{\alpha}$ are equivalent with equivalent norm, for more details see \cite{ErvinR06}. \begin{lemma}\label{Lem:LinftyContH}\cite[Theorem 2.1]{Torres12} If $\alpha>1/2$, then $H^{\alpha}\subset C(\mathbb{R},\mathbb{R}^n)$ and there is a constant $C_\infty=C_{\alpha,\infty}$ such that \begin{equation}\label{12} \|u\|_{\infty}=\sup_{x\in \mathbb{R}}|u(x)|\leq C_\infty \|u\|_{\alpha}. \end{equation} \end{lemma} \begin{remark}\label{Rem:Lp} From Lemma \ref{Lem:LinftyContH}, we know that if $u\in H^{\alpha}$ with $1/2<\alpha<1$, then $u\in L^p(\mathbb{R},\mathbb{R}^n)$ for all $p\in [2,\infty)$, since $$ \int_{\mathbb{R}}|u(x)|^p dx \leq \|u\|^{p-2}_{\infty}\|u\|^2_{L^2}. $$ \end{remark} Now, we introduce the fractional space which we will use to construct the variational framework for (FHS)$_\lambda$. Let $$ X^{\alpha}=\Bigl\{u\in H^{\alpha}: \int_{\mathbb{R}}[|_{-\infty}D^{\alpha}_{t}u(t)|^2+(L(t)u(t),u(t))]dt<\infty\Bigr\}, $$ then $X^{\alpha}$ is a reflexive and separable Hilbert space with the inner product $$ \langle u,v \rangle_{X^{\alpha}}=\int_{\mathbb{R}}[(_{-\infty}D^{\alpha}_{t}u(t),_{-\infty}D^{\alpha}_{t}v(t))+(L(t)u(t),v(t))]dt $$ and the corresponding norm is $$ \|u\|^2_{X^{\alpha}}=\langle u,u \rangle_{X^{\alpha}}. $$ For $\lambda>0$, we also need the following inner product $$ \langle u,v \rangle_{X^{\alpha,\lambda}}=\int_{\mathbb{R}}[(_{-\infty}D^{\alpha}_{t}u(t),_{-\infty}D^{\alpha}_{t}v(t))+\lambda(L(t)u(t),v(t))]dt $$ and the corresponding norm is $$ \|u\|^2_{X^{\alpha,\lambda}}=\langle u,u \rangle_{X^{\alpha,\lambda}}. $$ \begin{lemma}\label{Lem:XcontH} \cite{ZhangTorres} Suppose $L(t)$ satisfies {\rm ($\mathcal{L}$)$_1$} and {\rm ($\mathcal{L}$)$_2$}, then $X^{\alpha}$ is continuously embedded in $H^{\alpha}$. \end{lemma} \begin{remark}\label{keynta} {\rm Under the same conditions of Lemma \ref{Lem:XcontH}, for all $\lambda\geq \frac{1}{c C_\infty^2 \, meas \{l<c\}}$, we also obtain \begin{equation}\label{13} \int_{\mathbb{R}}|u(t)|^2 dt\leq \frac{C_\infty^2\, meas\{l<c\}}{1-C_\infty^2\, meas\{l<c\}}\|u\|_{X^{\alpha,\lambda}}=\frac{1}{\Theta}\|u\|_{X^{\alpha,\lambda}}^2 \end{equation} and \begin{equation}\label{14} \|u\|_\alpha^2\leq \Bigl(1+\frac{C_{\infty}^2\, meas\{l<c\}}{1-C_{\infty}^2\, meas \{l<c\}}\Bigr)\|u\|_{X^\alpha}^2=(1+\frac{1}{\Theta})\|u\|^2_{X^{\alpha,\lambda}}. 
\end{equation} Furthermore, for every $p\in (2,\infty)$ and $\lambda\geq \frac{1}{c C_\infty^2 \, meas \{l<c\}}$, we have \begin{equation}\label{15} \begin{split} \int_{\mathbb{R}}|u(t)|^p dt\leq \mathcal{K}_{p}^{p}\|u\|_{X^{\alpha,\lambda}}^p. \end{split} \end{equation} where $\mathcal{K}_{p}^{p} = \frac{1}{\Theta^{\frac{p}{2}}\, (meas\{l<c\})^{\frac{p-2}{2}}}$. For more details, see \cite{Torres15, ZhangTorres}.} \end{remark} \section{ Proof of Theorem \ref{Thm:MainTheorem1}} The aim of this section is to establish the proof of Theorem \ref{Thm:MainTheorem1}. Consider the functional $I: X^{\alpha,\lambda}\rightarrow \mathbb{R}$ given by \begin{equation}\label{mt01} \begin{aligned} I_\lambda (u)&=\int_{\mathbb{R}}\Bigl[\frac{1}{2}|_{-\infty}D_t^{\alpha}u(t)|^2+\frac{1}{2}(\lambda L(t)u(t),u(t))-W(t,u(t))\Bigr]dt\\ &=\dfrac{1}{2}\|u\|^2_{X^{\alpha,\lambda}}-\int_{\mathbb{R}}W(t,u(t))dt. \end{aligned} \end{equation} Under the conditions of Theorem \ref{Thm:MainTheorem1}, we note that $I\in C^1(X^{\alpha,\lambda},\mathbb{R})$, and \begin{equation}\label{mt02} I'_\lambda (u)v=\int_{\mathbb{R}}\Bigl[(_{-\infty}D_t^{\alpha}u(t), _{-\infty}D_t^{\alpha}v(t))+(\lambda L(t)u(t),v(t))-(\nabla W(t,u(t)),v(t))\Bigr]dt \end{equation} for all $u$, $v\in X^{\alpha}$. In particular we have \begin{equation}\label{mt03} I'_\lambda (u)u =\|u\|^2_{X^{\alpha,\lambda}}-\int_{\mathbb{R}}(\nabla W(t,u(t)),u(t))dt. \end{equation} \begin{remark}\label{ineq} It follows from $(W_1)$ and $(W_4)$ that $$ |\nabla W(t,u)|^\sigma \leq \frac{C_0}{2}|\nabla W(t,u)||u|^{\sigma +1 }\;\;\mbox{for}\;\;|u|\geq R. $$ Thus, by ($W_1$), for any $\epsilon >0$, there is $C_\epsilon >0$ such that \begin{equation}\label{mt04} |\nabla W(t,u)| \leq \epsilon |u| + C_\epsilon |u|^{p-1},\;\;\forall (t,u) \in \mathbb{R} \times \mathbb{R}^N \end{equation} and \begin{equation}\label{mt05} |W(t,u)| \leq \frac{\epsilon}{2}|u|^2 + \frac{C_\epsilon}{p}|u|^p\;\;\forall (t,u)\in \mathbb{R} \times \mathbb{R}^N, \end{equation} where $p = \frac{2\sigma }{\sigma -1}>2$. \end{remark} \begin{lemma}\label{GC1} Suppose that {\rm ($\mathcal{L}$)$_1$}-{\rm ($\mathcal{L}$)$_3$}, {\rm(W$_1$)} and {\rm(W$_2$)} are satisfied. Then \begin{enumerate} \item[\fbox{i}] There exists $\rho>0$ and $\eta>0$ such that $$ \inf_{\|u\|_{X^{\alpha,\lambda}}=\rho} I_\lambda (u)>\eta\quad for \,\, all \,\, \lambda\geq \frac{1}{cC_\infty^2 \, meas \{l<c\}}. $$ \item[\fbox{ii}] Let $\rho>0$ defined in $(i)$, then there exists $e\in X^{\alpha,\lambda}$ with $\|e\|_{X^{\alpha,\lambda}}>\rho$ such that $I_\lambda(e)<0$ for all $\lambda>0$. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item[\fbox{i}] By (\ref{mt05}) and Remark \ref{keynta}, we obtain $$ \begin{aligned} I_{\lambda}(u) & = \frac{1}{2}\|u\|_{X^{\alpha, \lambda}}^2 - \int_{\mathbb{R}} W(t, u(t))dt\\ & \geq \frac{1}{2}\|u\|_{X^{\alpha, \lambda}}^{2} - \frac{\epsilon}{2}\int_{\mathbb{R}}|u(t)|^2dt - \frac{C_\epsilon}{p}\int_{\mathbb{R}} |u(t)|^pdt\\ &\geq \frac{1}{2}\left( 1 - \frac{\epsilon}{\Theta}\right)\|u\|_{X^{\alpha, \lambda}}^{2} - \frac{C_\epsilon}{p\Theta^{\frac{p}{2}}(meas\{l<c\})^{\frac{p-2}{2}}}\|u\|_{X^{\alpha, \lambda}}^{p}. \end{aligned} $$ Let $\epsilon >0$ small enough such that $1- \frac{\epsilon}{\Theta}>0$ and $\|u\|_{X^{\alpha, \lambda}} = \rho$. Since $p>2$, taking $\rho$ small enough such that $$ \frac{1}{2}\left( 1-\frac{\epsilon}{\Theta} \right) - \frac{C_\epsilon}{p\Theta^{\frac{p}{2}}(meas\{l<c\})^{\frac{p-2}{2}}} \rho^{p-2}>0. 
$$ Therefore $$ I_\lambda (u) \geq \rho^2\left[ \frac{1}{2}\left( 1-\frac{\epsilon}{\Theta} \right) - \frac{C_\epsilon}{p \Theta^{\frac{p}{2}}(meas\{l<c\})^{\frac{p-2}{2}}} \rho ^{p-2}\right] : = \eta >0. $$ \item[\fbox{ii}] By $(\mathcal{L})_3$ and without loss of generality let $T = (-\varrho, \varrho )\subset J$ such that $L(t) \equiv 0$. Let $\psi \in C_{0}^{\infty}(\mathbb{R}, \R^n)$ such that $supp(\psi) \subset (-\tau, \tau)$, for some $\tau< \varrho$. Hence \begin{equation}\label{ss00} \begin{aligned} 0&\leq \int_{\R}\langle L(t)\psi , \psi\rangle dt = \int_{supp(\psi)} \langle L(t)\psi, \psi\rangle dt \leq \int_{-\tau}^{\tau} \langle L(t)\psi, \psi\rangle dt \leq \int_{T} \langle L(t)\psi, \psi\rangle dt = 0. \end{aligned} \end{equation} On the other hand, by $(W_3)$, for any $\epsilon >0$, there exists $R>0$ such that $$ W(t,u) > \frac{|u|^2}{\epsilon} - \frac{R^2}{\epsilon}\;\;\mbox{for all}\;\;|u|\geq R. $$ Then, by taking $\epsilon \to 0$ we get \begin{equation}\label{ss01} \lim_{|\sigma| \to \infty} \int_{supp(\psi)} \frac{W(t, \sigma \psi)}{|\sigma|^2}dt = +\infty. \end{equation} Hence, by (\ref{ss00}) and (\ref{ss01}) we obtain \begin{equation}\label{ss02} \frac{I_\lambda (\sigma \psi)}{|\sigma|^2} = \frac{1}{2}\int_{\R} |_{-\infty}D_{t}^{\alpha}\psi(t)|^2dt - \int_{\R} \frac{W(t, \sigma \psi)}{|\sigma|^2}dt \to -\infty, \end{equation} as $|\sigma| \to \infty$. Therefore, if $\sigma_0$ is large enough and $e = \sigma_0 \psi$ one gets $I_\lambda (e) <0$. \end{enumerate} \end{proof} Since we have loss of compactness we need the following compactness results to recover the Cerami condition for $I_\lambda$. \begin{lemma}\label{PS1} Suppose that $(\mathcal{L})_1 - (\mathcal{L})_3$, $(W_1) - (W_4)$ be satisfied. If $u_n \rightharpoonup u$ in $X^{\alpha, \lambda}$, then \begin{equation}\label{mt07} I_\lambda (u_n - u) = I_\lambda (u_n) - I_\lambda (u) + o(1)\;\;\mbox{as}\;\;n\to +\infty \end{equation} and \begin{equation}\label{mt08} I'_\lambda(u_n-u) = I'_\lambda(u_n) - I'_\lambda (u) + o(1) \;\;\mbox{as}\;\;n\to +\infty. \end{equation} In particular, if $I_\lambda (u_n) \to c$ and $I'_\lambda(u_n) \to 0$, then $I'_\lambda(u) = 0$ after passing to a subsequence. \end{lemma} \begin{proof} Since $u_n \rightharpoonup u$ in $X^{\alpha, \lambda}$, we have $\langle u_n -u, u\rangle_{X^{\alpha, \lambda}} \to 0$ as $n\to \infty$, which implies that $$ \begin{aligned} \|u_n\|_{X^{\alpha, \lambda}}^{2} = \|u_n - u\|_{X^{\alpha, \lambda}}^{2} + \|u\|_{X^{\alpha, \lambda}}^{2} + o(1). \end{aligned} $$ Therefore, to obtain (\ref{mt07}) and (\ref{mt08}) it suffices to check that \begin{equation}\label{mt09} \int_{\mathbb{R}} [W(t,u_n) - W(t, u_n-u) - W(t,u)]dt = o(1) \end{equation} and \begin{equation}\label{mt10} \sup_{\varphi \in X^{\alpha, \lambda}, \|\varphi\|_{\alpha, \lambda} =1} \int_{\mathbb{R}} \langle \nabla W(t, u_n) - \nabla W(t, u_n-u)- \nabla W(t,u), \varphi\rangle dt = o(1). \end{equation} Here, we only prove (\ref{mt10}), the verification of (\ref{mt09}) is similar. In fact, let \begin{equation}\label{mt11} \mathcal{A}:= \lim_{n\to \infty} \sup_{\varphi \in X^{\alpha, \lambda}, \|\varphi\|_{\alpha, \lambda} =1} \int_{\mathbb{R}} \langle \nabla W(t, u_n) - \nabla W(t, u_n-u)- \nabla W(t,u), \varphi\rangle dt. 
\end{equation} If $\mathcal{A}>0$, then there exists $\varphi_0 \in X^{\alpha, \lambda}$ with $\|\varphi_0\|_{X^{\alpha, \lambda}} = 1$ such that $$ \left| \int_{\mathbb{R}} \langle \nabla W(t,u_n) - \nabla W(t, u_n-u) - \nabla W(t,u), \varphi_0\rangle dt \right| \geq \frac{\mathcal{A}}{2} $$ for $n$ large enough. Now, from (\ref{mt04}) and Young's inequality, there exist $C_1$, $C_2$ and $C_3 >0$ such that $$ \begin{aligned} &|\langle \nabla W(t, u_n) - \nabla W(t, u_n-u), \varphi_0\rangle| \leq C_1 \left( \epsilon |u|^2 + \epsilon |u_n-u|^2 + \epsilon |\varphi_0|^2 + C_2|u|^p + \epsilon |u_n - u|^{p} + C_3|\varphi_0|^p\right). \end{aligned} $$ Hence, there exist $C_4, C_5, C_6 >0$ such that $$ \begin{aligned} |\langle \nabla W(t, u_n) - \nabla W(t, u_n - u) &- \nabla W(t,u), \varphi_0\rangle|\\ & \leq C_4 \left( \epsilon |u|^2 + \epsilon |u_n - u|^2 + \epsilon |\varphi_0|^2 + C_5|u|^p + \epsilon |u_n - u|^p + C_6|\varphi_0|^p\right). \end{aligned} $$ Let $$ h_n(t) = \max\{|\langle \nabla W(t, u_n) - \nabla W(t, u_n - u) - \nabla W(t,u), \varphi_0 \rangle| - C_4 \epsilon (|u_n - u|^2 + |u_n - u|^p), 0\}. $$ So $$ 0\leq h_n(t) \leq C_4 (\epsilon |u|^2 + \epsilon |\varphi_0|^2 + C_5|u|^p + C_6|\varphi_0|^p). $$ By the Lebesgue dominated convergence theorem and the fact that $u_n \to u$ a.e. in $\mathbb{R}$, we get $$ \int_{\mathbb{R}} h_n(t)dt \to 0\;\;\mbox{as}\;\;n \to \infty. $$ It follows that $$ \int_{\mathbb{R}} |\langle \nabla W(t, u_n(t)) - \nabla W(t, u_n(t) - u(t)) - \nabla W(t,u(t)), \varphi_0(t) \rangle|dt \to 0\;\;\mbox{as}\;\; n\to \infty, $$ which is a contradiction. Hence $\mathcal{A} = 0$. Furthermore, if $I_\lambda (u_n) \to c$ and $I'_\lambda (u_n) \to 0$ as $n \to \infty$, by (\ref{mt07}) and (\ref{mt08}), we get $$ I_\lambda (u_n - u) \to c-I_\lambda (u) + o(1) $$ and $$ I'_\lambda (u_n - u) = -I'_\lambda(u)\;\;\mbox{as}\;\;n \to +\infty. $$ Now, for every $\varphi \in C_{0}^{\infty}(\mathbb{R}, \mathbb{R}^n)$ we have $$ I'_\lambda(u)\varphi = \lim_{n \to \infty} I'_\lambda (u_n) \varphi = 0. $$ Consequently, $I'_\lambda (u) = 0$. \end{proof} \begin{lemma}\label{cerami1} Suppose that $(\mathcal{L})_1 - (\mathcal{L})_3$ and $(W_1) - (W_4)$ are satisfied and let $c\in \mathbb{R}$. Then each $(Ce)_c$-sequence of $I_\lambda$ is bounded in $X^{\alpha, \lambda}$. \end{lemma} \begin{proof} Suppose that $\{u_n\} \subset X^{\alpha, \lambda}$ is a $(Ce)_c$ sequence for $c>0$, namely \begin{equation}\label{mt12} I_\lambda (u_n) \to c,\quad (1+\|u_n\|_{X^{\alpha, \lambda}})I'_\lambda (u_n) \to 0\;\;\mbox{as}\;\;n\to \infty. \end{equation} Therefore \begin{equation}\label{mt13} c- o_n(1) = I_\lambda (u_n) - \frac{1}{2}I'_\lambda(u_n)u_n = \int_{\mathbb{R}} H(t, u_n(t))dt. \end{equation} By contradiction, suppose that there is a subsequence, again denoted by $\{u_n\}$, such that $\|u_n\|_{X^{\alpha, \lambda}} \to +\infty$ as $n\to +\infty$. Taking $v_n = \frac{u_n}{\|u_n\|_{X^{\alpha, \lambda}}}$, we get that $\{v_n\}$ is bounded in $X^{\alpha, \lambda}$ and $\|v_n\|_{X^{\alpha, \lambda}} = 1$. Moreover, we have $$ o(1) = \frac{\langle I'_\lambda (u_n), u_n\rangle}{\|u_n\|_{X^{\alpha, \lambda}}^{2}} = 1- \int_{\mathbb{R}} \frac{\langle \nabla W(t,u_n), u_n \rangle}{\|u_n\|_{X^{\alpha, \lambda}}^{2}}dt, $$ as $n \to \infty$, which implies \begin{equation}\label{lim} \int_{\mathbb{R}} \frac{\langle \nabla W(t, u_n), v_n\rangle}{|u_n|} |v_n|dt = \int_{\mathbb{R}} \frac{\langle \nabla W(t, u_n), u_n\rangle}{\|u_n\|_{X^{\alpha, \lambda}}^{2}} \to 1.
\end{equation} For $r\geq 0$, let $$ h( r ) := \inf\{H(t, u):\;\;t\in \mathbb{R}, \;\;|u|\geq r\}. $$ From ($W_2$) we have $h ( r) >0$ for all $r>0$. Furthermore, by ($W_2$) and $(W_4)$, for $|u|\geq r$, \begin{equation}\label{mt14} C_0 H(t,u) \geq \frac{|\nabla W(t,u)|^\sigma}{|u|^\sigma} = \left( \frac{|\nabla W(t,u)||u|}{|u|^2} \right)^{\sigma} \geq \left( \frac{\langle \nabla W(t,u), u\rangle}{|u|^2}\right)^{\sigma} \geq \left( \frac{2W(t,u)}{|u|^2} \right)^{\sigma}, \end{equation} it follows from $(W_3)$ and the definition of $h(r)$ that $$ h( r) \to \infty\;\;\mbox{as}\;\;r\to \infty. $$ For $0\leq a < b$, let $$ \Omega_{n}(a, b) :=\{t\in \mathbb{R}:\;\;a \leq |u_n(t)|< b\} $$ and $$ C_{a}^{b} : = \inf \left\{\frac{H(t, u)}{|u|^2}: \;\;t\in \mathbb{R}\;\;\mbox{and}\;\;u\in \mathbb{R}^N\;\;\mbox{with}\;\;a \leq |u|< b \right\}. $$ By ($W_1$), for any $\epsilon >0$, there is $\delta >0$ such that $$ |\nabla W(t,u)| \leq \frac{\epsilon}{\mathcal{K}_{2}^{2}}|u|\;\;\mbox{for all}\;\;|t|\leq \delta. $$ Consequently \begin{equation}\label{mt15} \begin{aligned} \int_{\Omega_{n}(0, \delta)} \frac{|\nabla W(t, u_n)|}{|u_n|}|v_n|^2dt \leq \int_{\Omega_n(0, \delta)} \frac{\epsilon}{\mathcal{K}_{2}^{2}}|v_n|^2dt\leq \frac{\epsilon}{\mathcal{K}_{2}^{2}}\|v_n\|_{L^2}^{2} \leq \epsilon,\;\;\forall n. \end{aligned} \end{equation} We note that $$ H(t,u_n)\geq C_{a}^{b}|u_n|^2\quad \mbox{for all} \quad t \in \Omega_n (a,b), $$ consequently, by (\ref{mt13}) we get \begin{equation}\label{mt16} \begin{aligned} c-o_n(1) &= \int_{\Omega_n(0,a)} H(t,u_n)dt + \int_{\Omega_n(a,b)}H(t,u_n)dt + \int_{\Omega_n(b, +\infty)}H(t,u_n)dt\\ &\geq \int_{\Omega_n (0,a)} H(t,u_n)dt + C_{a}^{b}\int_{\Omega_n(a,b)}|u_n|^2dt + \int_{\Omega_n(b, +\infty)}H(t,u_n)dt\\ &= \int_{\Omega_n (0,a)} H(t,u_n)dt + C_{a}^{b}\int_{\Omega_n(a,b)}|u_n|^2dt + h(b)meas(\Omega_n(b,+\infty)). \end{aligned} \end{equation} Since $h( r) \to +\infty$ as $r\to +\infty$, for $p< q < \infty$ it follows from (\ref{mt16}) that \begin{equation}\label{mt17} \begin{aligned} \int_{\Omega_{n}(b, +\infty)}|v_n|^pdt &\leq \left( \int_{\Omega_n(b, +\infty)}|v_n|^qdt \right)^{\frac{p}{q}} meas(\Omega_n(b+\infty))^{\frac{q-p}{q}}\\ &\leq \|v_n\|_{L^q}^{p} \left( \frac{c-o_n(1)}{h(b)} \right)^{\frac{q-p}{p}}\leq \mathcal{K}_{q}^{p}\left( \frac{c-o_n(1)}{h(b)} \right)^{\frac{q-p}{p}} \to 0 \end{aligned} \end{equation} as $b\to +\infty$, where $p=\frac{2\sigma}{\sigma -1}>2$. Furthermore, by ($W_4$) and the H\"older inequality, we can choose $R>0$ large enough such that \begin{equation}\label{mt18} \begin{aligned} \left|\int_{\Omega_n(R, +\infty)} \frac{\langle \nabla W(t, u_n), u_n\rangle}{\|u_n\|_{X^{\alpha, \lambda}}^{2}} dt \right|& \leq \int_{\Omega_n(R,+\infty)} \frac{|\nabla W(t,u_n)|}{|u_n|}|v_n|^2dt\\ &\leq \left( \int_{\Omega_n(R, +\infty)} \frac{|\nabla W(t,u_n)|^\sigma}{|u_n|^\sigma} \right)^{1/\sigma} \left( \int_{\Omega_n(R,+\infty)} |v_n|^pdt \right)^{\frac{\sigma-1}{\sigma}}\\ &\leq \left(\int_{\Omega_n(R, +\infty)}C_0H(t,u_n)dt \right)^{1/\sigma} \left( \int_{\Omega_n(R, +\infty)} |v_n|^pdt \right)^{\frac{\sigma -1}{\sigma}}\\ &\leq C_0^{1/\sigma} (c-o_n(1))^{1/\sigma} \left( \int_{\Omega_n(R, +\infty)}|v_n|^pdt \right)^{\frac{\sigma -1}{\sigma}}\\&< \epsilon. \end{aligned} \end{equation} Now, by using (\ref{mt16}) again, we get $$ \int_{\Omega_n(\delta, R)} |v_n|^2dt = \frac{1}{\|u_n\|_{X^{\alpha, \lambda}}^{2}} \int_{\Omega_n(\delta, R)}|u_n|^2dt \leq \frac{c-o_n(1)}{C_{\delta}^{R}\|u_n\|_{X^{\alpha, \lambda}}^{2}} \to 0 $$ as $n\to \infty$. 
Then, for $n$ large enough, by the continuity of $\nabla W$ one has \begin{equation}\label{mt19} \int_{\Omega_n(\delta, R)} \frac{|\nabla W(t, u_n)|}{|u_n|}|v_n|^2dt \leq K \int_{\Omega_n(\delta, R)} |v_n|^2dt < \epsilon. \end{equation} Hence, by (\ref{mt15}), (\ref{mt18}) and (\ref{mt19}) we have $$ \int_{\mathbb{R}} \frac{\langle \nabla W(t,u_n), v_n \rangle}{|u_n|}|v_n|dt \leq \int_{\mathbb{R}} \frac{|\nabla W(t, u_n)|}{|u_n|}|v_n|^2dt \leq 3 \epsilon<1, $$ for $n$ large enough, a contradiction with (\ref{lim}) and then $\{u_n\}$ is bounded in $X^{\alpha, \lambda}$. \end{proof} \begin{lemma}\label{cerami2} Suppose that $(\mathcal{L})_1-(\mathcal{L})_3$, $(W_1) - (W_4)$ be satisfied. Then, for any $\mathfrak{C}>0$, there exists $\Lambda_1 = \Lambda(\mathfrak{C})>0$ such that $I_\lambda$ satisfies $(Ce)_c$ condition for all $c \leq \mathfrak{C}$ and $\lambda > \Lambda_1$. \end{lemma} \begin{proof} For any $\mathfrak{C}>0$, suppose that $\{u_n\} \subset X^{\alpha, \lambda}$ is a $(Ce)_c$ sequence for $c\leq \mathfrak{C}$, namely $$ I_\lambda (u_n) \to c,\quad (1+\|u_n\|_{X^{\alpha, \lambda}})I'_\lambda (u_n) \to 0\;\;\mbox{as}\;\;n\to \infty. $$ By Lemma \ref{cerami1}, $\{u_n\}$ is bounded. Therefore, there exists $u\in X^{\alpha, \lambda}$ such that $u_n \rightharpoonup u $ in $X^{\alpha, \lambda}$ and $u_n \to u$ a.e. in $\mathbb{R}$. Let $w_n:= u_n-u$. By Lemma \ref{PS1} we get $$ I'_\lambda (u) = 0,\quad I_\lambda (w_n) \to c-I_\lambda (u) \quad \mbox{and}\quad I'_\lambda (w_n) \to 0\;\;\mbox{as}\;\;n\to \infty. $$ Next \begin{equation}\label{mt21} I_\lambda (u) = I_\lambda (u) - \frac{1}{2}I'_\lambda (u)u = \int_{\mathbb{R}} H(t,u)dt \geq 0, \end{equation} and \begin{equation}\label{mt22} \int_{\mathbb{R}}H(t,w_n)dt \to c- I_\lambda (u). \end{equation} Therefore, for $c \leq \mathfrak{C}$, we get \begin{equation}\label{mt23} \int_{\mathbb{R}} H(t, w_n)dt \leq \mathfrak{C} + o_n(1). \end{equation} On the other hand, by $(\mathcal{L})_1$ and since $w_n \to 0$ in $L^2_{loc}(\mathbb{R}, \mathbb{R}^N)$, we have \begin{equation}\label{mt24} \begin{aligned} \|w_n\|_{L^2}^{2} \leq \frac{1}{\lambda c}\int_{\{l\geq c\}}\lambda \langle L(t)w_n,w_n \rangle dt + o_n(1)\leq \frac{1}{\lambda c} \|w_n\|_{X^{\alpha, \lambda}}^{2} + o_n(1). \end{aligned} \end{equation} Let $p < q < \infty$, where $p = \frac{2\sigma}{\sigma -1}$. Using Remark \ref{keynta} and H\"older inequality we obtain \begin{equation}\label{mt25} \begin{aligned} \int_{\mathbb{R}}|w_n|^pdt &= \int_{\mathbb{R}} |w_n|^{\frac{2(q-p)}{q-2}}|w_n|^{\frac{q(p-2)}{q-2}}dt \leq \|w_n\|_{L^2}^{\frac{2(q-p)}{q-2}}\|w_n\|_{L^q}^{\frac{q(p-2)}{q-2}}\\ &\leq \mathcal{K}_{q}^{\frac{q(p-2)}{q-2}} \left( \frac{1}{\lambda c} \right)^{\frac{q-p}{q-2}}\|w_n\|_{X^{\alpha, \lambda}}^{p} + o_n(1). \end{aligned} \end{equation} Furthermore, for $|u| \leq R$ (where $R$ is defined in (W$_4$)), from (\ref{mt04}), we get $$ |\nabla W(t, u)| \leq (\epsilon + C_\epsilon R^{p-2})|u| = \tilde{C}|u|. $$ It follows from (\ref{mt24}) that $$ \begin{aligned} \int_{\{t\in \mathbb{R}:\;\;|w_n(t)|\leq R\}} \langle \nabla W(t,w_n), w_n\rangle dt & \leq \int_{\{t\in \mathbb{R}:\;\;|w_n(t)|\leq R\}}|\nabla W(t, w_n)||w_n|dt \\ &\leq \tilde{C}\int_{\{t\in \mathbb{R}:\;\;|w_n(t)|\leq R\}}|w_n|^2dt\leq \frac{\tilde{C}}{\lambda c}\|w_n\|_{X^{\alpha, \lambda}}^{2} + o_n(1). 
\end{aligned} $$ On the other hand, from (\ref{mt25}) and the H\"older inequality we obtain $$ \begin{aligned} \int_{\{t\in \mathbb{R}:\;\;|w_n(t)|> R\}} \langle \nabla W(t,w_n), w_n\rangle dt &\leq \int_{\{t\in \mathbb{R}:\;\;|w_n(t)|> R\}}|\nabla W(t,w_n)||w_n|dt \\ &\leq \int_{\{t\in \mathbb{R}:\;\;|w_n(t)|> R\}} \frac{|\nabla W(t, w_n)|}{|w_n|}|w_n|^2dt\\ &\leq \left( \int_{\{t\in \mathbb{R}:\;\;|w_n(t)|> R\}} \frac{|\nabla W(t, w_n)|^\sigma}{|w_n|^\sigma}dt\right)^{1/\sigma} \left( \int_{\{t\in \mathbb{R}:\;\;|w_n(t)|> R\}} |w_n|^p \right)^{\frac{2}{p}}\\ &\leq \left( C_0\int_{\mathbb{R}} H(t, w_n)dt \right)^{1/\sigma} \|w_n\|_{p}^{2}\\ &\leq (C_0 \mathfrak{C})^{1/\sigma} \mathcal{K}_{q}^{\frac{2q(p-2)}{p(q-2)}}\left( \frac{1}{\lambda c} \right)^{\frac{2(q-p)}{p(q-2)}} \|w_n\|_{X^{\alpha, \lambda}}^{2} + o_n(1). \end{aligned} $$ Therefore $$ \begin{aligned} o_n(1) & = \langle I'_\lambda (w_n), w_n\rangle = \|w_n\|_{X^{\alpha, \lambda}}^{2} - \int_{\mathbb{R}}\langle \nabla W(t,w_n), w_n\rangle dt\\ &= \|w_n\|_{X^{\alpha, \lambda}}^{2} - \int_{\{t\in \mathbb{R}:\;\;|w_n(t)|\leq R\}} \langle \nabla W(t,w_n), w_n\rangle dt - \int_{\{t\in \mathbb{R}:\;\;|w_n(t)|>R\}} \langle \nabla W(t,w_n), w_n\rangle dt\\ &\geq \left( 1 - \frac{\tilde{C}}{\lambda c}- C^{*} \left( \frac{1}{\lambda c} \right)^{\frac{2(q-p)}{p(q-2)}} \right) \|w_n\|_{X^{\alpha, \lambda}}^{2}+ o_n(1), \end{aligned} $$ where $C^* = (C_0 \mathfrak{C})^{1/\sigma} \mathcal{K}_{q}^{\frac{2q(p-2)}{p(q-2)}}$. Now, we choose $\Lambda_1 = \Lambda (\mathfrak{C}) >0$ large enough such that $$ 1 - \frac{\tilde{C}}{\lambda c}- C^{*} \left( \frac{1}{\lambda c} \right)^{\frac{2(q-p)}{p(q-2)}} >0\quad \mbox{for all} \;\;\lambda > \Lambda_1. $$ Then $w_n \to 0$ in $X^{\alpha, \lambda}$ for all $\lambda > \Lambda_1$. \end{proof} \noindent {\bf Proof of Theorem \ref{Thm:MainTheorem1}} By Lemma \ref{GC1}, $I_\lambda$ has the mountain pass geometry, and by Lemma \ref{cerami2}, $I_\lambda$ satisfies the $(Ce)_c$-condition. Therefore, by the mountain pass theorem with the Cerami condition \cite{Ekeland}, for $c_\lambda>0$ defined by $$ c_\lambda=\inf_{g\in \Gamma}\max_{s\in [0,1]}I_\lambda (g(s)), $$ where $$ \Gamma=\{g\in C([0,1],X^{\alpha,\lambda})\,|\, g(0)=0, g(1)=e\} $$ ($e$ is defined in Lemma \ref{GC1}-ii), there exists $u_\lambda\in X^{\alpha,\lambda}$ such that \begin{equation}\label{eqn:ulambda} I_\lambda(u_\lambda)=c_\lambda\quad\mbox{and}\quad I'_\lambda(u_\lambda)=0. \end{equation} That is, (FHS)$_\lambda$ has at least one nontrivial solution for $\lambda>\Lambda(c_\lambda)$ (defined in Lemma \ref{cerami2}). \qed \section{Concentration phenomena} In this section, we study the concentration of solutions for problem $(\mbox{FHS})_{\lambda}$ as $\lambda \to \infty$. That is, we focus our attention on the proof of Theorem \ref{Thm:MainTheorem2}. \begin{remark}\label{ccnota} The main difficulty in proving Theorem \ref{Thm:MainTheorem2} is to show that $c_\lambda$ is bounded from above independently of $\lambda$. Thanks to the proof of Lemma \ref{GC1}-ii, we can get a finite upper bound for $c_\lambda$: choosing $\psi$ as in the proof of Lemma \ref{GC1}-ii, by the definition of $c_\lambda$ we have $$ \begin{aligned} c_\lambda &\leq \max_{\sigma \geq 0} I_\lambda (\sigma \psi)\\ &= \max_{\sigma \geq 0}\left( \frac{\sigma^2}{2}\int_{\R}|{_{-\infty}}D_{t}^{\alpha}\psi(t)|^2dt - \int_{\R}W(t, \sigma \psi)dt \right)\\ &=: \tilde{c}, \end{aligned} $$ where $\tilde{c} <+\infty$ is independent of $\lambda$.
As a consequence of the above estimates, we have that $\Lambda(c_\lambda)$ is bounded from above. That is, there exists $\Lambda_*>0$ such that the conclusion of Theorem \ref{Thm:MainTheorem1} is satisfied for $\lambda>\Lambda_*$. \end{remark} Consider $T = [-\varrho, \varrho]$ and the following fractional boundary value problem \begin{equation}\label{eqn:BVP} \left\{ \begin{array}{ll} {_{t}}D_{\varrho}^{\alpha} {_{-\varrho}}D_{t}^{\alpha}u = \nabla W(t, u),\quad t\in (-\varrho, \varrho),\\[0.1cm] u(-\varrho) = u(\varrho) = 0. \end{array} \right. \end{equation} Associated with (\ref{eqn:BVP}) we have the functional $I: E_{0}^{\alpha} \to \mathbb{R}$ given by $$ I(u):= \frac{1}{2}\int_{-\varrho}^{\varrho} |{_{-\varrho}}D_{t}^{\alpha}u(t)|^2dt - \int_{-\varrho}^{\varrho}W(t,u(t))dt $$ and we have that $I\in C^1(E_{0}^{\alpha}, \mathbb{R})$ with $$ I'(u)v = \int_{-\varrho}^{\varrho} \langle {_{-\varrho}}D_{t}^{\alpha}u(t), {_{-\varrho}}D_{t}^{\alpha}v(t)\rangle dt - \int_{-\varrho}^{\varrho}\langle\nabla W(t,u(t)), v(t) \rangle dt. $$ Following the ideas of the proof of Theorem \ref{Thm:MainTheorem1}, we can get the following existence result. \begin{theorem}\label{Gthm1} Suppose that $W$ satisfies $(W_1)-(W_4)$ with $t\in [-\varrho,\varrho]$. Then (\ref{eqn:BVP}) has at least one nontrivial weak solution. \end{theorem} \noindent {\bf Proof of Theorem \ref{Thm:MainTheorem2}} We follow the argument in \cite{ZhangTorres}. For any sequence $\lambda_k \to \infty$, let $u_k = u_{\lambda_k}$ be the critical point of $I_{\lambda_k}$, namely $$ c_{\lambda_k} = I_{\lambda_k}(u_k)\quad \mbox{and}\quad I'_{\lambda_k}(u_k)=0, $$ and, by (\ref{mt05}), we get $$ \begin{aligned} c_{\lambda_k} &= I_{\lambda_k}(u_k)=\frac{1}{2}\|u_k\|_{X^{\alpha,\lambda_k}}^2-\int_{\R} W(t,u_k(t))dt \\ &\geq \frac{1}{2}\|u_k\|_{X^{\alpha,\lambda_k}}^2-\frac{\epsilon}{2}\int_{\R}|u_k|^2dt - \frac{C_\epsilon}{p}\int_{\R}|u_k|^pdt, \end{aligned} $$ which implies that $\{u_k\}$ is bounded, due to Remarks \ref{Rem:Lp} and \ref{keynta}. Therefore, we may assume that $u_k \rightharpoonup \tilde{u}$ weakly in $X^{\alpha,\lambda_k}$. Moreover, by Fatou's lemma, we have $$ \begin{aligned} \int_{\mathbb{R}} l(t) |\tilde{u}(t)|^2dt \leq \liminf_{k\to \infty} \int_{\mathbb{R}} l(t)|u_k(t)|^2dt\leq \liminf_{k\to \infty} \int_{\mathbb{R}} (L(t)u_k(t), u_{k}(t)) dt \leq \liminf_{k\to \infty} \frac{\|u_k\|_{X^{\alpha, \lambda_k}}^2}{\lambda_{k}} = 0. \end{aligned} $$ Thus, $\tilde{u} = 0$ a.e. in $\mathbb{R} \setminus J$. Now, for any $\varphi \in C_{0}^{\infty}(T, \mathbb{R}^n)$, since $I'_{\lambda_k}(u_k)\varphi=0$, it is easy to see that $$ \int_{-\varrho}^{\varrho} ({_{-\varrho}}D_{t}^{\alpha} \tilde{u}(t), {_{-\varrho}}D_{t}^{\alpha}\varphi(t)) dt - \int_{-\varrho}^{\varrho} (\nabla W(t,\tilde{u}(t)), \varphi (t)) dt=0, $$ that is, $\tilde{u}$ is a solution of (\ref{eqn:BVP}) by the density of $C_{0}^{\infty}(T, \mathbb{R}^n)$ in $E_{0}^{\alpha}$. Now we show that $u_k \to \tilde{u}$ in $X^{\alpha}$.
Since $I'_{\lambda_k}(u_k) u_k=I'_{\lambda_k}(u_k)\tilde{u}=0$, we have \begin{equation}\label{c6} \|u_k\|_{X^{\alpha, \lambda_k}}^{2} = \int_{\mathbb{R}} (\nabla W(t, u_k(t)), u_k(t)) dt \end{equation} and \begin{equation}\label{c7} \langle u_k, \tilde{u} \rangle_{\lambda_k} = \int_{\mathbb{R}} (\nabla W(t, u_k(t)), \tilde{u}(t)) dt, \end{equation} which implies that $$ \lim_{k\to \infty} \|u_{k}\|_{X^{\alpha, \lambda_k}}^{2} = \lim_{k\to \infty} \langle u_k, \tilde{u} \rangle_{X^{\alpha, \lambda_k}} = \lim_{k\to \infty} \langle u_k, \tilde{u} \rangle_{X^{\alpha}} = \|\tilde{u}\|_{X^\alpha}^2. $$ Furthermore, by the weak semi-continuity of norms we obtain $$ \|\tilde{u}\|_{X^{\alpha}}^{2} \leq \liminf_{k \to \infty} \|u_k\|_{X^{\alpha}}^{2} \leq \limsup_{k\to \infty}\|u_k\|_{X^\alpha}^{2} \leq \lim_{k\to \infty}\|u_k\|_{X^{\alpha,\lambda_k}}^{2}. $$ So $u_k \to \tilde{u}$ in $X^{\alpha}$, and $u_k \to \tilde{u}$ in $H^{\alpha}(\mathbb{R}, \mathbb{R}^n)$ as $k\to \infty$. \qed \end{document}
\begin{document} \title[\resizebox{4.7in}{!}{An Ax-Kochen-Ershov Theorem for Monotone Differential-Henselian Fields}]{An Ax-Kochen-Ershov Theorem for Monotone Differential-Henselian Fields} \author[Hakobyan]{Tigran Hakobyan} \address{Department of Mathematics\\ University of Illinois at Urbana-Cham\-paign\\ Urbana, IL 61801\\ U.S.A.} \email{[email protected]} \begin{abstract} Scanlon~\cite{S} proves Ax-Kochen-Ershov type results for differential-henselian monotone valued differential fields with many constants. We show how to get rid of the condition {\em with many constants}. \end{abstract} \maketitle \section*{Introduction} \noindent Let $\k$ be a differential field (always of characteristic $0$ in this paper, with a single distinguished derivation). Let also an ordered abelian group $\Gamma$ be given. This gives rise to the Hahn field $K=\k((t^\Gamma))$, to be considered in the usual way as a valued field. We extend the derivation $\der$ of $\k$ to a derivation on $K$ by $$\der(\sum_{\gamma} a_{\gamma}t^\gamma)\ :=\ \sum_{\gamma} \der(a_{\gamma})t^\gamma.$$ Scanlon~\cite{S} extends the Ax-Kochen-Ershov theorem (see \cite{AK}, \cite{E}) to this differential setting. This includes requiring that $\k$ is linearly surjective in the sense that for each nonzero linear differential operator $A=a_0+a_1\der + \dots +a_n\der^n$ over $\k$ we have $A(\k)=\k$. Under this assumption, $K$ is differential-henselian (see Section~\ref{sec:pre} for this notion), and the theory $\Th(K)$ of $K$ as a valued differential field (see also Section~\ref{sec:pre} for this) is completely axiomatized by: \begin{enumerate} \item the axiom that there are many constants; \item the theory $\Th(\k)$ of the differential residue field $\k$; \item the theory $\Th(\Gamma)$ of the ordered abelian value group; \item the axioms for differential-henselian valued fields. \end{enumerate} As to (1), having many constants means that every element of the differential field has the same valuation as some element of its constant field. This holds for $K$ as above (whether or not $\k$ is linearly surjective) because the constant field of $K$ is $C_K=C_{\k}((t^{\Gamma}))$. This axiom plays an important role in some proofs of \cite{S}. Below we drop the ``many constants'' axiom and generalize the theorem above to a much larger class of differential-henselian valued fields. This involves a more general way of extending the derivation of $\k$ to $K$. In more detail, let $c: \Gamma \to \k$ be an additive map. Then the derivation $\der$ of $\k$ extends to a derivation $\der_c$ of $K$ by setting $$ \der_c(\sum_{\gamma} a_{\gamma}t^{\gamma})\ :=\ \sum_\gamma \big(\der(a_\gamma) + c(\gamma) a_\gamma \big)t^\gamma.$$ Thus $\der_c$ is the unique derivation on $K$ that extends $\der$, respects infinite sums, and satisfies $\der_c(t^\gamma)=c(\gamma)t^\gamma$ for all $\gamma$. The earlier case has $c(\gamma)=0$ for all $\gamma$. Another case is where $\k$ contains $\R$ as a subfield, $\Gamma=\R$, and $c: \R\to \k$ is the inclusion map; then $\der_c(t^r)=rt^r$ for $r\in \R$. Let $K_c$ be the valued differential field $K$ with $\der_c$ as its distinguished derivation. Assume in addition that $\k$ is linearly surjective. 
Then $K_c$ is differential-henselian, and Scanlon's theorem above generalizes as follows: \begin{theoremintro}\label{F1} The theory $\Th(K_c)$ is completely determined by $\Th(\k,\Gamma;c)$, where $(\k, \Gamma;c)$ is the $2$-sorted structure consisting of the differential field $\k$, the ordered abelian group $\Gamma$, and the additive map $c: \Gamma \to \k$. \end{theoremintro} \noindent We actually prove in Section 2 a stronger version with the one-sorted structure $K_c$ expanded to a $2$-sorted one, with $\Gamma$ as the underlying set for the second sort, and as extra primitives the cross-section $\gamma \mapsto t^\gamma: \Gamma\to K$, the set $\k\subseteq K$, and the map $c: \Gamma \to \k$. The question arises: which complete theories of valued differential fields are covered by Theorem~\ref{F1}? The answer involves the notion of monotonicity: a valued differential field $F$ with valuation $v$ is said to be {\em monotone\/} if $v(f')\ge v(f)$ for all $f\in F$; as usual, $f'$ denotes the derivative of $f\in F$ with respect to the distinguished derivation of $F$. The valued differential fields $K_c$ are all clearly monotone. We show: \begin{theoremintro}\label{F2} Every monotone differential-henselian valued field is elementarily equivalent to some $K_c$ as in Theorem~\ref{F1}. \end{theoremintro} \noindent This is proved in Section 3 and is analogous to the result from \cite{S} that any differential-henselian valued field with many constants is elementarily equivalent to some $K$ as in Scanlon's theorem stated in the beginning of this Introduction. (In fact, that result follows from the ``complete axiomatization'' given in that theorem.) Theorem~\ref{F2} has a nice algebraic consequence, generalizing \cite[Corollary 8.0.2]{ADAMTT}: \begin{corollaryintro} \label{F3} If a valued differential field $F$ is monotone and differential-henselian, then every valued differential field extension of $F$ that is algebraic over $F$ is also (monotone and) differential-henselian. \end{corollaryintro} \noindent See Section 4. To state further results it is convenient to introduce some notation. Let $F$ be a differential field. For nonzero $f\in F$ we set $f^\dagger := f' / f$ and $F^\dagger~ :=~ \{f^\dagger~ :~f~ \in~F^{\times}\}$, where $F^{\times} := F \setminus\{0\}.$ So far our only assumption on $c: \Gamma \to \k$ is that it is additive, but the case $c(\Gamma)\cap \k^\dagger = \{0\}$ is of particular interest: it is not hard to show that then the constant field of $K_c$ is $C_\k ((t^\Delta))$, where the value group $\Delta$ of the constant field equals $\ker(c)$ and is a pure subgroup of $\Gamma$. Conversely (see Section 3): \begin{theoremintro} \label{F4} Every monotone differential-henselian valued field $F$ such that $v(C_F^\times)$ is pure in $v(F^\times)$ is elementarily equivalent to some $K_c$ as in Theorem \ref{F1} with $c(\Gamma)\cap~\k^\dagger = \{0\}$. \end{theoremintro} \noindent The referee showed us an example of a monotone henselian valued differential field $F$ for which $v(C_F^\times)$ is not pure in $v(F^\times)$. In Section 4 we give an example of a monotone differential-henselian field $F$ such that $v(C_F^\times)$ is not pure in $v(F^\times)$. The hypothesis of Theorem~\ref{F4} that $v(C_F^\times)$ is pure in $v(F^\times)$ holds if the residue field is algebraically closed or real closed (see Section 4). It includes also the case of main interest to us, where $F$ has few constants, that is, the valuation is trivial on $C_F$. 
In that case any $c$ as in Theorem~\ref{F4} is injective by Corollary~\ref{fewConstants}. Section 3 contains examples of additive maps $c: \Gamma\to \k$ for which $K_c$ has few constants, including a case where $\Th(K_c)$ is decidable. Two of those examples show that in Theorem~\ref{F1}, even when we have few constants, the traditional Ax-Kochen-Ershov principle without the map $c$ does not hold. (It does hold in Scanlon's theorem where $c=0$, but in general we do not expect to have a $c$ that is definable in the valued differential field structure.) \section{Preliminaries}\label{sec:pre} \noindent Adopting terminology from \cite{ADAMTT}, a {\em valued differential field\/} is a differential field $K$ together with a (Krull) valuation $v: K^{\times} \to \Gamma$ whose residue field $\mathbf{k}$ $:=\mathcal{O}/\smallo$ has characteristic zero; here $\Gamma=v(K^\times)$ is the value group, and we also let $\mathcal{O}=\mathcal{O}_K$ denote the valuation ring of $v$ with maximal ideal $\smallo$, and let $$C\ =\ C_K\ :=\ \{f\in K:\ f'=0\}$$ denote the constant field of the differential field $K$. We use notation from~\cite{ADAMTT}: for elements $a,b$ of a valued field with valuation $v$ we set $$a\asymp b:\Leftrightarrow va=vb, \quad a\preceq b\Leftrightarrow b\succeq a:\Leftrightarrow va\ge vb, \quad a\prec b \Leftrightarrow b\succ a:\Leftrightarrow va > vb.$$ Let $K$ be a valued differential field as above, and let $\der$ be its derivation. We say that $K$ has {\em many constants\/} if $v(C^\times)=\Gamma$. We say that the derivation of $K$ is {\em small\/} if $\der(\smallo)\subseteq \smallo$. If $K$, with a small derivation, has many constants, then $K$ is {\em monotone\/} in the sense of \cite{C}, that is, $v(f) \le v(f')$ for all $f\in K$. We say that $K$ has {\em few constants\/} if $v(C^\times)=\{0\}$. Note: if $K$ is monotone, then its derivation is small; if the derivation of $K$ is small, then $\der$ is continuous with respect to the valuation topology on $K$. Note also that if $K$ is monotone, then so is any valued differential field extension with small derivation and the same value group as $K$. {\em From now on we assume that the derivation of $K$ is small}. This has the effect (see \cite{C} or \cite[Lemma 4.4.2]{ADAMTT}) that also $\der(\mathcal{O})\subseteq \mathcal{O}$, and so $\der$ induces a derivation on the residue field; we view $\mathbf{k}$ below as equipped with this induced derivation, and refer to it as the {\em differential residue field of $K$}. \noindent We say that $K$ is {\em differential-henselian\/} (for short: {\em $\operatorname{d}$-henselian}) if every differential polynomial $P\in \mathcal{O}\{Y\}=\mathcal{O}[Y, Y', Y'',\dots]$ whose reduction $\overline{P}\in \mathbf{k}\{Y\}$ has total degree $1$ has a zero in $\mathcal{O}$. (Note that for ordinary polynomials $P\in \mathcal{O}[Y]$ this requirement defines the usual notion of a henselian valued field, that is, a valued field whose valuation ring is henselian as a local ring.) If $K$ is $\operatorname{d}$-henselian, then its differential residue field is clearly {\em linearly surjective}: any linear differential equation $y^{(n)}+ a_{n-1}y^{(n-1)} + \cdots + a_0y = b$ with coefficients $a_i,b\in \mathbf{k}$ has a solution in $\mathbf{k}$. This is a key constraint on our notion of $\operatorname{d}$-henselianity. 
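For instance (a routine illustration of this condition), if the derivation induced on $\mathbf{k}$ is trivial, then $\mathbf{k}$ is not linearly surjective, since the nonzero operator $A=\der$ over $\mathbf{k}$ satisfies $$A(\mathbf{k})\ =\ \der(\mathbf{k})\ =\ \{0\}\ \ne\ \mathbf{k},$$ so such a $K$ cannot be $\operatorname{d}$-henselian; on the other hand, differentially closed fields are linearly surjective, as is the differential field $\T_{\log}$ of logarithmic transseries considered in Section 3.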
If $K$ is $\operatorname{d}$-henselian, then $\mathbf{k}$ has a {\em lift to $K$}, meaning, a differential subfield of $K$ contained in $\mathcal{O}$ that maps isomorphically onto $\mathbf{k}$ under the canonical map from $\mathcal{O}$ onto $\mathbf{k}$; see \cite[7.1.3]{ADAMTT}. Other items from \cite{ADAMTT} that are relevant in this paper are the following differential analogues of Hensel's Lemma and of results due to Ostrowski/Krull/Kaplansky on valued fields: \begin{enumerate} \item[(DV1)] If the derivation of $\mathbf{k}$ is nontrivial, then $K$ has a spherically complete immediate valued differential field extension with small derivation; \cite[6.9.5]{ADAMTT}. \item[(DV2)] If $\mathbf{k}$ is linearly surjective and $K$ is spherically complete, then $K$ is $\d$-henselian; \cite[7.0.2]{ADAMTT}. \item[(DV3)] If $\mathbf{k}$ is linearly surjective and $K$ is monotone, then any two spherically complete immediate monotone valued differential field extensions of $K$ are isomorphic over $K$; \cite[7.4.3]{ADAMTT}. \end{enumerate} \noindent We also need a model-theoretic variant of (DV3): \begin{enumerate} \item[(DV4)] Suppose $\mathbf{k}$ is linearly surjective and $K$ is monotone with $v(K^\times) \not= \{ 0 \}$. Let $K^{\bullet}$ be a spherically complete immediate valued differential field extension of $K$. Then $K^{\bullet}$ can be embedded over $K$ into any $|v(K^\times)|^+$-saturated $\d$-henselian monotone valued differential field extension of $K$; \cite[7.4.5]{ADAMTT}. \end{enumerate} \section{Elementary equivalence of monotone differential-henselian fields} \noindent In this section we obtain Theorem $\ref{F1}$ from the introduction as a consequence of a more precise result in a 2-sorted setting. We consider 2-sorted structures $$\mathcal{K}\ =\ (K, \Gamma; v, s, c),$$ where $K$ is a differential field equipped with a differential subfield $\k$ (singled out by a unary predicate symbol), $\Gamma$ is an ordered abelian group, $v: K^\times \to \Gamma=v(K^\times)$ is a valuation that makes $K$ into a monotone valued differential field such that $\k \subseteq K$ is a lift of the differential residue field, $s: \Gamma \to K^\times$ is a cross-section of $v$ (that is, $s$ is a group morphism and $v\circ s=\text{id}_{\Gamma}$), and $c: \Gamma \to \k$ satisfies $c(\gamma)=s(\gamma)^\dagger$ for all $\gamma \in \Gamma$ (so $c$ is additive). We construe these $\mathcal{K}$ as $L_2$-structures for a natural $2$-sorted language $L_2$ (with unary function symbols for $v$, $s$, and $c$). We have an obvious set $\operatorname{Mo}(\ell, s,c)$ of $L_2$-sentences whose models are exactly these $\mathcal{K}$; the ``$\ell$'' is to indicate the presence of a lift. For example, for $K=\k((t^\Gamma))$ as in the introduction and additive $c: \Gamma\to \k$ we consider $K_c$ as a model of $\operatorname{Mo}(\ell, s,c)$ in the obvious way by taking $\k\subseteq K$ as lift, and $\gamma\mapsto t^\gamma$ as cross-section. \begin{thm}\label{MainTheorem} If $\mathcal{K}$ is $\d$-henselian, then $\Th(\mathcal{K})$ is axiomatized by: \begin{enumerate}[font=\normalfont] \item $\operatorname{Mo}(\ell, s,c)$; \item the axioms for $\d$-henselianity; \item Th$(\k,\Gamma;c)$ with $\k$ as differential field and $\Gamma$ as ordered abelian group. \end{enumerate} \end{thm} \noindent We first develop the required technical material, and give the proof of this theorem at the end of this section. Until further notice, $\mathcal{K} = (K, \Gamma; \k, v, s, c)\models \operatorname{Mo}(\ell, s,c)$. 
For any subfield $E$ of $K$ we set $\Gamma_E:= v(E^\times)$. \noindent We define a {\em good subfield of $\mathcal{K}$} to be a differential subfield $E$ of $K$ such that (i) $\k \subseteq E$, (ii) $s(\Gamma_E) \subseteq E$, and (iii) $|\Gamma_E| \leq \aleph_0$. Thus $\k$ is a good subfield of $\mathcal{K}$. \begin{lem}\label{CountabilityOfValuation} Let $E$ be a good subfield of $\mathcal{K}$ and $x\in K\setminus E$. Then $|\Gamma_{E(x)}|\le \aleph_0$. \end{lem} \noindent This is well-known; see for example \cite[Lemma 3.1.10]{ADAMTT}. \begin{comment} It is enough to show that $v(E[x])$ is countable. Consider the sets $$P_n\ :=\ \{p(x):\ p\in E[X], \text{ deg}(p)\le n \}.$$ We have $\displaystyle v(E[x]) = \bigcup_{n} v(P_n)$, and we prove by induction that $v(P_n)$ is countable. By assumption, $v(P_0) = v(E)$ is countable. Suppose $p\in E[X], \deg p=n+1$, and $v(p(x))\notin v(P_n)$. Then for any $q\in E[X]$ with $\deg q=n+1$ we have $q=ap+r$ with $a\in E^\times$ and $\deg r\le n$, so $v(p(x))\ne v(r(x)/a)$, that is, $v(ap(x))\ne v(r(x))$, and thus $$v(q(x))\ =\ \min(v(ap(x)), v(r(x)))\in v(P_n)\cup \big(v(E^\times)+ v(p(x))\big).$$ This gives $v(P_{n+1})\subseteq v(P_n)\cup \big(v(E^\times)+ v(p(x))\big)$, and thus $v(P_{n+1})$ is countable if $v(P_n)$ is. \end{comment} \begin{lem}\label{GoodSubfields} Let $E\subseteq K$ be a good subfield of $\mathcal{K}$ and $\gamma\in \Gamma\setminus \Gamma_E$, that is, $s(\gamma)\notin E$. Then $E(s(\gamma))$ is also a good subfield of $\mathcal{K}$. \end{lem} \begin{proof} From $c(\gamma)\in \k\subseteq E$ and $s(\gamma)' = c(\gamma)s(\gamma)$ we get that $E(s(\gamma))$ is a differential subfield of $K$ and that condition (i) for being a good subfield is satisfied by $E(s(\gamma))$. For condition (ii) we distinguish two cases: \noindent (1) $n \gamma \in \Gamma_E$ for some $n\in \mathbb{N}^{\ge 1}$. Take $n\geq 1$ minimal with $n \gamma \in \Gamma_E$. Then $0,\gamma, 2 \gamma,\ldots, (n-1) \gamma$ are in different cosets of $\Gamma_E$, so for every $q(X)\in E[X]^{\neq}$ of degree $<n$ we get $q(s(\gamma))\neq 0$. Hence the minimum polynomial of $s(\gamma)$ over $E$ is $X^n - s(n \gamma)$. Thus, given any $x\in E(s(\gamma))^\times$, we have $$x\ =\ q_0 + q_1 s(\gamma) + \ldots + q_{n-1} s(\gamma)^{n-1}$$ with $q_0,\dots, q_{n-1}\in E$, not all $0$, so $\displaystyle v(x) = \min_{i=0,\ldots,n-1} \{ v(q_i) + i \gamma\}$. Therefore, $\Gamma_{E(s(\gamma))} = \Gamma_E + \mathbb{Z} \gamma$ and hence $s(\Gamma_{E(s(\gamma))})\subseteq s(\Gamma_E) \cdot s(\gamma)^{\mathbb{Z}}\subseteq E(s(\gamma))$. \noindent (2) $n \gamma\notin \Gamma_E$ for all $n\in \mathbb{N}^{\ge 1}$. Then $0,\gamma, 2\gamma, \ldots $ are in different cosets of $\Gamma_E$, so $s(\gamma)$ is transcendental over $E$ and for any polynomial $q(X)= q_0+q_1 X + \ldots + q_n X^n\in E[X]$, we have $\displaystyle v(q(s(\gamma))) = \min_{i=0,\ldots, n} \{ v(q_i) + i \gamma \}$. As in case (1) this yields $\Gamma_{E(s(\gamma))} = \Gamma_E + \mathbb{Z} \gamma$ and so $s(\Gamma_{E(s(\gamma))})\subseteq s(\Gamma_E) \cdot s(\gamma)^{\mathbb{Z}}\subseteq E(s(\gamma))$. Thus condition (ii) of good subfields holds for $E(s(\gamma))$. Condition (iii) is satisfied by Lemma \ref{CountabilityOfValuation}. \end{proof} \noindent \textit{In the rest of this section we fix a $\d$-henselian $\mathcal{K}$.} Let $T_{\mathcal{K}}$ be the $L_2$-theory given by (1)--(3) in Theorem~\ref{MainTheorem}.
Assume CH (the Continuum Hypothesis), and let $$\mathcal{K}_1\ =\ (K_1, \Gamma_1; v_1, s_1, c_1), \quad \mathcal{K}_2\ =\ (K_2, \Gamma_2; v_2, s_2, c_2)$$ be saturated models of $T_{\mathcal{K}}$ of cardinality $\aleph_1$; remarks following Corollary~\ref{cor:syntequiv} explain why we can assume CH. Then the structures $(\k_1, \Gamma_1; c_1)$ and $(\k_2, \Gamma_2; c_2)$ are also saturated of cardinality $\aleph_1$, where $\k_1$ and $\k_2$ are the lifts of the differential residue fields of $K_1$ and $K_2$ respectively. Since $(\k_1, \Gamma_1; c_1)$ and $(\k_2, \Gamma_2; c_2)$ are elementarily equivalent to $(\k, \Gamma; c)$, we have an isomorphism $f = (f_r, f_v)$ from $(\k_1, \Gamma_1; c_1)$ onto $(\k_2, \Gamma_2; c_2)$ with $f_r : \k_1 \to \k_2$ and $f_v : \Gamma_1 \to \Gamma_2$. A map $g:E_1\to E_2$ between good subfields $E_1$ and $E_2$ of $\mathcal{K}_1$ and $\mathcal{K}_2$ respectively, will be called {\em good\/} if \begin{enumerate}[resume*] \item $g:E_1\to E_2$ is a differential field isomorphism, \item $g$ extends $f_r$, \item $f_v \circ v_1 = v_2\circ g$, \item $g\circ s_1=s_2\circ f_v$. \end{enumerate} \noindent Note that then $g$ is also an isomorphism of the valued subfield $E_1$ of $K_1$ onto the valued subfield $E_2$ of $K_2$. The map $f_r:\k_1\to \k_2$ is clearly a good map. \begin{prop} \label{BackAndForth} $\mathcal{K}_1\cong \mathcal{K}_2$. \end{prop} \begin{proof} We claim that the collection of good maps is a back-and-forth system between $K_1$ and $K_2$. (By the saturation assumption this yields the desired result.) This claim holds trivially if $\Gamma_1 = \{ 0 \}$, so assume $\Gamma_1 \not= \{ 0 \}$, and thus $\Gamma_2 \not= \{ 0\}$. \noindent Let $g:E_1 \to E_2$ be a good map and $\gamma\in \Gamma_1\setminus \Gamma_{E_1}$. By Lemma~\ref{GoodSubfields} we have good subfields $E_1\big(s_1(\gamma)\big)$ of $\mathcal{K}_1$ and $E_2\big(s_2(f_v(\gamma))\big)$ of $\mathcal{K}_2$. The proof of that lemma then yields easily a good map $$g_\gamma: E_1\big(s_1(\gamma)\big)\to E_2\big(s_2(f_v(\gamma))\big)$$ that extends $g$ with $g_\gamma\big(s_1(\gamma)\big)=s_2\big(f_v(\gamma)\big)$. \noindent Let $g:E_1\to E_2$ be a good map and $x\in K_1 \setminus E_1$. We show how to extend $g$ to a good map with $x$ in its domain. By condition (i) of being a good subfield, $E_1\supseteq \k_1$ and $E_2\supseteq \k_2$. The group $\Gamma_{E_1\< x \>}$ is countable by Lemma \ref{CountabilityOfValuation}. Thus by applying iteratively the construction above to elements $\gamma\in \Gamma_{E_1\<x\>}$, we can extend $g$ to a good map $g^1:E_1^1\to E_2^1$ with $\Gamma_{E_1^1}=\Gamma_{E_1 \< x \>}$. Likewise we can extend $g^1$ to a good map $g^2:E_1^2 \to E_2^2$ with $\Gamma_{E_1^2}=\Gamma_{E_1^1 \< x \>}$. Iterating this process and taking the union $\displaystyle E_i^{\infty} = \bigcup_n E_i^n$, for $i=1,2$, we get a good map $g^\infty: E_1^\infty \to E_2^\infty$ extending $g$ such that $\Gamma_{E_1^{\infty}}=\Gamma_{E_1^{\infty} \< x \>}$, so the valued differential field extension $E_1^{\infty} \< x \>$ of $E_1^{\infty}$ is immediate. By (DV1) and (DV4) we have a spherically complete immediate valued differential field extension $E_1^{\bullet}\subseteq K_1$ of $E_1^{\infty} \< x \>$. Note that then $E_1^{\bullet}$ is also a spherically complete immediate valued differential field extension of $E_1^{\infty}$. Likewise we have a spherically complete immediate valued differential field extension $E_2^{\bullet}\subseteq K_2$ of $E_2^{\infty}$.
By (DV3) we can extend $g^\infty$ to a valued differential field isomorphism $g^{\bullet}:E_1^{\bullet} \to E_2^{\bullet}$. It is clear that then $g^{\bullet}$ is a good map extending $g$ with $x$ in its domain. This finishes the proof of the \textit{forth} part. The \textit{back} part is done likewise. \end{proof} \begin{proof}[Proof of Theorem \ref{MainTheorem}] We can assume the Continuum Hypothesis (CH) for this argument. (This is explained further in the remarks following Corollary~\ref{cor:syntequiv}.) Our job is to show that the theory $T_{\mathcal{K}}$ is complete. In other words, given any two models of $T_{\mathcal{K}}$ we need to show they are elementarily equivalent. Using CH we can assume that these models are saturated of cardinality $\aleph_1$, and so they are indeed isomorphic by Proposition~\ref{BackAndForth}. \end{proof} \noindent Note that Theorem \ref{F1} is a consequence of Theorem~\ref{MainTheorem}. \begin{cor}\label{cor:equiv} Suppose $\mathcal{K}_1 = (K_1, \Gamma_1; v_1, s_1, c_1)$ and $\mathcal{K}_2 = (K_2, \Gamma_2; v_2, s_2, c_2)$ are $\d$-henselian models of $\operatorname{Mo}(\ell,s,c)$. Then: $\mathcal{K}_1 \equiv \mathcal{K}_2\Longleftrightarrow (\k_1, \Gamma_1; c_1) \equiv (\k_2, \Gamma_2; c_2)$. \end{cor} \noindent In connection with eliminating the use of CH we introduce the $L_2$-theory $T$ whose models are the $\d$-henselian models of $\operatorname{Mo}(\ell,s,c)$. The structures $(\k, \Gamma; c)$ where $\k$ is a differential field, $\Gamma$ is an ordered abelian group, and $c: \Gamma \to \k$, are $L_c$-structures for a certain sublanguage $L_c$ of $L_2$. Now Corollary~\ref{cor:equiv} yields: \begin{cor}\label{cor:syntequiv} Every $L_2$-sentence is $T$-equivalent to some $L_c$-sentence. \end{cor} \noindent The above proof of Corollary~\ref{cor:syntequiv} depends on CH, but $T$ has an explicit axiomatization and so the statement of this corollary is ``arithmetic''. Therefore this proof can be converted to one using just ZFC (without CH). Thus as an obvious consequence of Corollary~\ref{cor:syntequiv}, Theorem~\ref{MainTheorem} also holds without assuming CH. \section{Existence of $\k$, $s$, $c$} \noindent In this section we construct under certain conditions a lift $\k$, a cross-section $s$, and a map $c$ as in the previous section. \begin{prop}\label{facts} Assume $\mathcal{K} = (K, \Gamma; v, s, c)\models \operatorname{Mo}(\ell,s,c)$. Then \begin{align*} s(\ker(c))\ =\ C^\times &\cap s(\Gamma)\quad \text{$($so $\ker(c)\ \subseteq\ v(C^\times))$}, \quad c\big(v(C^\times)\big)\ \subseteq\ \k^\dagger,\\ c(\Gamma)\cap \k^{\dagger}\ &=\ \{0\}\ \Longleftrightarrow\ \ker(c)\ =\ v(C^{\times}). \end{align*} \end{prop} \begin{proof} Let $\gamma\in \Gamma$. If $c(\gamma)=0$, then $s(\gamma)^\dagger=0$, so $s(\gamma)\in C^\times\cap s(\Gamma)$. If $s(\gamma)\in C^\times$, then $c(\gamma)=s(\gamma)^\dagger=0$, so $\gamma\in \ker(c)$. This proves the first equality. Next, for the inclusion $c\big(v(C^\times)\big)\subseteq \k^\dagger$, suppose $\gamma=va$ with $a\in C^{\times}$. Then $s(\gamma) = ua$ with $u\asymp 1$ in $K$, so $u =d(1+\epsilon)$ with $d\in \k^\times$ and $\epsilon\prec 1$. Hence $$c(\gamma)\ =\ s(\gamma)^\dagger\ =\ u^\dagger\ =\ d^\dagger + (1 + \epsilon)^\dagger\ =\ d^\dagger+\frac{\epsilon'}{1+\epsilon}.$$ Since $c(\gamma), d^\dagger\in \k$ and $\epsilon'\prec 1$, this gives $\epsilon'=0$, so $c(\gamma)\in \k^\dagger$, as claimed. As to the equivalence, suppose $c(\Gamma)\cap \k^{\dagger}= \{0\}$.
Then $c\big(v(C^\times)\big)=\{0\}$ by the inclusion that we just proved, so $v(C^\times)\subseteq \ker(c)$. We already have the reverse inclusion, so $\ker(c)= v(C^{\times})$. For the converse, assume $\ker(c)= v(C^{\times})$. Let $\gamma\in \Gamma$ be such that $c(\gamma)=d^\dagger$ with $d\in \k^\times$. Then $s(\gamma)^\dagger=d^\dagger$, so $s(\gamma)/d\in C^\times$, hence $\gamma=v\big(s(\gamma)/d\big)\in v(C^\times)$, and thus $c(\gamma)=0$, as claimed. \end{proof} \noindent Examples where $c(\Gamma)\cap \k^{\dagger}\ne \{0\}$: Take any differential field $\k$ with $\k\ne C_{\k}$, and take $\Gamma=\Z$. Then $\k^\dagger\ne \{0\}$; take any nonzero element $u\in \k^\dagger$. Then for the additive map $c: \Gamma\to \k$ given by $c(1)=u$ we have $c(\Gamma)= {\bbold Z} u\subseteq \k^\dagger$, and so $\k((t^\Gamma))_c$ is a model of $\operatorname{Mo}(\ell,s,c)$ with $c(\Gamma)\cap \k^{\dagger}\ne \{0\}$. If moreover $\k$ is taken to be linearly surjective, then this model is $\d$-henselian. \noindent An example where $c(\Gamma)\cap \k^{\dagger}= \{0\}$: Take $\k= \T_{\log}$, the differential field of logarithmic transseries; see \cite[Chapter 15 and Appendix A]{ADAMTT} about $\T_{\log}$, especially the fact that $\T_{\log}$ is linearly surjective. Also $\T_{\log}$ contains $\R$ as a subfield, and $f^\dagger \notin \R$ for all nonzero $f\in \T_{\log}$. Next, take $\Gamma=\R$ and define $c: \Gamma\to \k$ by $c(r)=r$. Then $K:= \k((t^\Gamma))$ yields a $\d$-henselian model $K_c$ of $\operatorname{Mo}(\ell,s,c)$ with $c(\Gamma)\cap \k^{\dagger}= \{0\}$. Allen Gehret conjectured an axiomatization of $\Th(\T_{\log})$ that would imply its decidability, and thus the decidability of the theory of $K_c$. This $K_c$ has few constants by the following obvious consequence of Proposition~\ref{facts}: \begin{cor} \label{fewConstants} Suppose $\mathcal{K} = (K, \Gamma; v, s, c)\models \operatorname{Mo}(\ell,s,c)$. Then: $$c \text{ is injective and }c(\Gamma)\cap \k^{\dagger}= \{0\}\ \Longleftrightarrow\ \mathcal{K} \text{ has few constants}.$$ \end{cor} \noindent We now provide an example to show that in Theorem~\ref{F1} we cannot drop the map $c$ in the case of few constants. Take $\k = \T_{\log}$ and $\Gamma = \mathbb{Z}$. Define the additive maps $c_1 : \Gamma \to \k$ by $c_1(1) = 1$ and $c_2 :\Gamma \to \k$ by $c_2(1) = \sqrt{2}$; instead of $\sqrt{2}$, any irrational real number will do. Let $K_1 := \k((t^\Gamma))$ and $K_2 := \k((t^\Gamma))$ be the differential Hahn fields with derivations defined as in the introduction using the maps $c_1$ and $c_2$, respectively. They are $\d$-henselian monotone valued differential fields. As in the previous example they have few constants by Corollary~\ref{fewConstants}. We claim that $K_1$ and $K_2$ are not elementarily equivalent as valued differential fields (without $c_1$ and $c_2$ as primitives), so the traditional Ax-Kochen-Ershov principle does not hold. In $K_1$, we have $t^\dagger = c_1(1) = 1$ and so $K_1 \models \exists a \neq 0 (a^\dagger = 1)$. We now show that $K_2 \not\models \exists a \neq 0 (a^\dagger = 1)$. Towards a contradiction, assume $a \in K_2^\times$ is such that $a^\dagger = 1$. Then $a = t^k d (1 + \epsilon)$ with $k \in\mathbb{Z}$, $d \in \k^\times$ and $\epsilon \in K_2$ with $\epsilon \prec 1$. Hence $a^\dagger = c_2(k) + d^\dagger + (1 + \epsilon)^\dagger$, so $$k\sqrt{2} + d^\dagger + \frac{\epsilon'}{1 + \epsilon}\ =\ 1.$$ Since $\epsilon' \prec 1$ we get $k\sqrt{2} + d^\dagger = 1$ and $\epsilon' = 0$.
Thus $d^\dagger = 1 - k\sqrt{2} \in \mathbb{R}$. Since $1 - k\sqrt{2} \neq 0$, this contradicts $\T_{\log}^\dagger \cap \mathbb{R} = \{ 0 \}$. Next we give an example of a decidable $\d$-henselian monotone valued differential field with few constants. The valued differential field $\T$ of transseries is linearly surjective by \cite[Corollary 15.0.2]{ADAMTT} and \cite[Corollary 14.2.2]{ADAMTT}. As $\T[\mathrm{i}]$ with $\mathrm{i}^2 = -1$ is algebraic over $\T$, it is also linearly surjective by \cite[Corollary 5.4.3]{ADAMTT}. The proof of \cite[Proposition 10.7.10]{ADAMTT} gives $(\T[\mathrm{i}]^\times)^\dagger = {\bbold T} + \mathrm{i}\der \smallo$, where $\smallo$ is the maximal ideal of the valuation ring of $\T$. Thus taking $\k = \T[\mathrm{i}]$, $\Gamma = \R$ and the additive map $c:\Gamma \to \k$ given by $c(r) = \mathrm{i}r$, we have $c(\Gamma) \cap \k^\dagger = \mathrm{i} {\bbold R} \cap ( {\bbold T} + \mathrm{i}\der \smallo) = \{ 0 \}$ and therefore $K := \T[\mathrm{i}]((t^\R))_c$ will be a monotone $\d$-henselian valued differential field with few constants by Corollary \ref{fewConstants}. Moreover, $\text{Th}(K)$ is decidable by Theorem~\ref{F1}, since the 2-sorted structure $(\T[\mathrm{i}], \R; c)$ is interpretable in the valued differential field $\T$ and the latter has decidable theory by \cite[Corollary 16.6.3]{ADAMTT}. \noindent \textit{In what follows we fix a differential field $K$ with a valuation $v: K^\times \to \Gamma=v(K^\times)$ such that $(K, \Gamma;v)$ is a monotone valued differential field.} \begin{lem} \label{GroupValuation} Suppose $(K, \Gamma;v)$ is $\d$-henselian and $\k$ is a lift of its differential residue field. Then $G := \{ a\in K^\times : a^\dagger \in \k \}$ is a subgroup of $K^\times$ with $v(G) = \Gamma$. \end{lem} \begin{proof} Using $(a/b)^\dagger=a^\dagger-b^\dagger$ for $a,b\in K^\times$ we see that $G$ is a subgroup of $K^\times$. Let $\gamma \in \Gamma$; our goal is to find a $g\in G$ with $vg=\gamma$. Take $f\in K^\times$ with $vf=\gamma$. If $f'\prec f$, then \cite[7.1.10]{ADAMTT} gives $g\in C^\times$ such that $f\asymp g$, so $g\in G$ and $vg=\gamma$. Next, suppose $f'\asymp f$. Then $f^\dagger\asymp 1$, so $f^\dagger = a + \epsilon$ with $a\in \k$ and $\epsilon \in \smallo$. By \cite[Corollary 7.1.9]{ADAMTT} we have $\smallo = (1 + \smallo)^\dagger$, so $\epsilon = (1 + \delta)^\dagger$ with $\delta\in \smallo$. Then $ (\frac{f}{1+\delta})^\dagger = a\in \k$, so $\frac{f}{1 + \delta}\in G$ and $v(\frac{f}{1 + \delta}) = \gamma$. \end{proof} \noindent Recall that if $(K, \Gamma;v)$ is $\d$-henselian, then a lift of the differential residue field exists. Below we assume a lift $\k$ of the differential residue field is given, and we consider the $2$-sorted structure $\big((K,\k), \Gamma;v\big)$ (so $\k$ is a distinguished subset of $K$). \begin{lem} \label{SaturatedCaseCrossSectionExistence} Suppose $\big((K,\k), \Gamma; v\big)$ is $\d$-henselian, $\aleph_1$-saturated and $G$ is a definable subgroup of $K^\times$ such that $v(G) = \Gamma$. Then there exists a cross-section $s:\Gamma \to K^\times$ such that $s(\Gamma)\subseteq G$. \end{lem} \begin{proof} First note that $H := \mathcal{O}^\times \cap G$ is a pure subgroup of $G$. The inclusion $H\to G$ and the restriction of the valuation $v$ to $G$ yield an exact sequence $$1 \to H \to G \to \Gamma \to 0$$ of abelian groups. Since $H$ is $\aleph_1$-saturated as an abelian group, this exact sequence splits; see \cite[Corollary 3.3.37]{ADAMTT}. 
This yields a cross-section $s:\Gamma \to K^\times$ with $s(\Gamma)\subseteq G$. \end{proof} \noindent Combining the previous two lemmas gives us the main result of this section: \begin{thm}\label{ConstructionOfMaps} Suppose $\big((K,\k), \Gamma; v\big)$ is $\d$-henselian and $\aleph_1$-saturated. Then there is a cross-section $s:\Gamma \to K^\times$ and an additive map $c: \Gamma \to \k$ with $s(\gamma)^\dagger = c(\gamma)$ for all $\gamma \in \Gamma$. \end{thm} \begin{proof} Since $\k$ is now part of the structure, the subgroup $G$ of $K^\times$ from Lemma~\ref{GroupValuation} is definable. Now apply Lemma \ref{SaturatedCaseCrossSectionExistence} and get a cross-section $s:\Gamma \to K^\times$ such that $s(\Gamma)^\dagger \subseteq \k$. Take the additive map $c:\Gamma \to \k$ to be given by $c(\gamma) = s(\gamma)^\dagger$. \end{proof} \begin{comment} \noindent Here is the main result of this section: \begin{thm} \label{ConstructionOfMaps0} Suppose $\big((K,\k), \Gamma; v\big)$ is $\d$-henselian and $\aleph_1$-saturated. Then there is a cross-section $s: \Gamma\to K^\times$ and an additive map $c: \Gamma \to \k$ with $s(\gamma)^\dagger = c(\gamma)$ for all $\gamma \in \Gamma$. \end{thm} \begin{proof} By Proposition~\ref{Pureness} we know that $\Delta := v(C^\times)$ is pure in $\Gamma$. Since $\Delta$ is also $\aleph_1$-saturated (as an abelian group), we have a direct sum decomposition $\Gamma = \Delta \oplus \Gamma^*$ by \cite[Corollary 3.3.37]{ADAMTT}. Since the valued subfield $C$ of $K$ is $\aleph_1$-saturated, it has a cross-section $s_C : \Delta \to C^\times$. Corollary~\ref{ExistenceOfCrossSection} yields a cross-section $\tilde{s}: \Gamma\to K^\times$ of the valued field $K$ such that $\tilde{s}(\Gamma)^\dagger \subseteq \k$. By the definition of $\Delta$ we have $\tilde{s}(\gamma)\notin C$ for all $\gamma\in \Gamma\setminus \Delta$. Let $s$ be the cross-section of the valued field $K$ that agrees with $s_C$ on $\Delta$ and with $\tilde{s}$ on $\Gamma^*$. Then $s(\gamma)^\dagger \in \k$ for all $\gamma\in \Gamma$, so we have an additive map $c:\Gamma\to \k$ given by $c(\gamma)=s(\gamma)^\dagger$. Moreover, for $\gamma\in \Gamma$, $$ c(\gamma) = 0\ \Leftrightarrow\ s(\gamma)'=0\ \Leftrightarrow\ s(\gamma)\ \in C\ \Leftrightarrow\ \gamma \in \Delta.$$ This gives $\ker(c) = v(C^\times)$, and thus $c(\Gamma) \cap \k^\dagger = \{ 0 \}$ by Proposition~\ref{facts}. \end{proof} \end{comment} \begin{proof}[Proof of Theorem \ref{F2}] Let a monotone $\d$-henselian valued field be given. Then it has a lift of its differential residue field, and fixing such a lift $\k$, it is a structure $\big((K,\k), \Gamma; v\big)$ as above. Passing to an elementary extension, we can assume $\big((K,\k), \Gamma; v\big)$ is $\aleph_1$-saturated. Then Theorem \ref{ConstructionOfMaps} yields a cross-section $s: \Gamma\to K^\times$ and an additive map $c: \Gamma \to \k$ with $s(\gamma)^\dagger = c(\gamma)$ for all $\gamma \in \Gamma$. This in turn yields a Hahn field $\k((t^\Gamma))_c$ that is elementarily equivalent to $\big((K,\k), \Gamma; v,s,c\big)$. \end{proof} \noindent We can now prove Theorem \ref{F4}: \begin{proof}[Proof of Theorem \ref{F4}] Let $F$ be a monotone $\d$-henselian valued field such that $v_F(C_F^\times)$ is pure in $\Gamma_F=v_F(F^\times)$. The valued differential field $F$ has a lift of its differential residue field, and fixing such a lift $\k_F$ we get the structure $\big((F,\k_F), \Gamma_F; v_F\big)$. 
Take an elementary extension $\big((K,\k), \Gamma; v\big)$ of it that is $\aleph_1$-saturated. Then $\Delta := v(C_K^\times)$ is pure in $v(K^\times)$. Since $\Delta$ is also $\aleph_1$-saturated (as an abelian group), we have a direct sum decomposition $\Gamma = \Delta \oplus \Gamma^*$ by \cite[Corollary 3.3.37]{ADAMTT}. Since the valued subfield $C := C_K$ of $K$ is $\aleph_1$-saturated, it has a cross-section $s_C : \Delta \to C^\times$. Theorem~\ref{ConstructionOfMaps} yields a cross-section $\tilde{s}: \Gamma\to K^\times$ of the valued field $K$ such that $\tilde{s}(\Gamma)^\dagger \subseteq \k$. By the definition of $\Delta$ we have $\tilde{s}(\gamma)\notin C$ for all $\gamma\in \Gamma\setminus \Delta$. Let $s$ be the cross-section of the valued field $K$ that agrees with $s_C$ on $\Delta$ and with $\tilde{s}$ on $\Gamma^*$. Then $s(\gamma)^\dagger \in \k$ for all $\gamma\in \Gamma$, so we have an additive map $c:\Gamma\to \k$ given by $c(\gamma)=s(\gamma)^\dagger$. Moreover, for $\gamma\in \Gamma$, $$ c(\gamma) = 0\ \Leftrightarrow\ s(\gamma)'=0\ \Leftrightarrow\ s(\gamma)\ \in C\ \Leftrightarrow\ \gamma \in \Delta.$$ This gives $\ker(c) = v(C^\times)$, and thus $c(\Gamma) \cap \k^\dagger = \{ 0 \}$ by Proposition~\ref{facts}. In particular, $\ker(c)=\Delta$ is a pure subgroup of $\Gamma$. This in turn yields a Hahn field $\k((t^\Gamma))_c$ with the required properties that is elementarily equivalent to $\big((K,\k), \Gamma; v,s,c\big)$. \end{proof} \section{Eliminating the cross-section} \noindent Note that every $\mathcal{K}\models \operatorname{Mo}(\ell,s,c)$ satisfies the sentences \begin{enumerate} \item $\forall \gamma \forall \delta \quad c(\gamma + \delta) = c(\gamma) + c(\delta)$, \item $\forall \gamma \exists x\ne 0 \quad v(x) = \gamma\ \&\ x^\dagger = c(\gamma)$. \end{enumerate} These sentences do not mention the cross-section $s$. Below we derive the analogue of Theorem~\ref{MainTheorem} in the setting without a cross-section. Let $L_2^{-}$ be the language $L_2$ with the symbol $s$ for the cross-section removed. Let $\operatorname{Mo}(\ell, c)$ be the $L_2^{-}$-theory whose models are the $L_2^{-}$-structures $$\mathcal{K}\ =\ (K, \Gamma; v, c),$$ where $K$ is a differential field equipped with a differential subfield $\k$ (singled out by a unary predicate symbol), $\Gamma$ is an ordered abelian group, $v: K^\times \to \Gamma=v(K^\times)$ is a valuation that makes $K$ into a monotone valued differential field such that $\k \subseteq K$ is a lift of the differential residue field, and $c: \Gamma \to \k$ is such that the sentences (1) and (2) above are satisfied. \begin{lem} \label{ExistenceOfCrossSectionWithMapCOnly} Suppose $\mathcal{K}=(K, \Gamma; v, c)\models \operatorname{Mo}(\ell,c)$ is $\d$-henselian and $\aleph_1$-saturated. Then there is a cross-section $s:\Gamma \to K^\times$ such that $s(\gamma)^\dagger = c(\gamma)$ for all $\gamma\in \Gamma$. \end{lem} \begin{proof} By (1) and (2) we have a definable subgroup $G := \{ x \in K^\times : x^\dagger = c( v(x) )\}$ of $K^\times$ with $v(G) = \Gamma$. Now, use Lemma \ref{SaturatedCaseCrossSectionExistence} to get a cross-section $s:\Gamma \to K^\times$ with $s(\Gamma)\subseteq G$. This $s$ has the desired property. \end{proof} \begin{thm}\label{MT-s} Suppose $\mathcal{K}=(K,\Gamma;v,c)\models \operatorname{Mo}(\ell,c)$ is $\d$-henselian.
Then $\Th(\mathcal{K})$ is axiomatized by the following axiom schemes: \begin{enumerate}[font=\normalfont] \item $\operatorname{Mo}(\ell, c)$; \item the axioms for $\d$-henselianity; \item $\operatorname{Th}(\k,\Gamma;c)$ with $\k$ as differential field and $\Gamma$ as ordered abelian group. \end{enumerate} \end{thm} \begin{proof} Let any two $\aleph_1$-saturated models of the axioms in the theorem be given. By Lemma~\ref{ExistenceOfCrossSectionWithMapCOnly} we have in both models cross-sections that make them into models of $\operatorname{Mo}(\ell, s,c)$. It remains to appeal to Theorem \ref{MainTheorem} to conclude that these two models are elementarily equivalent. \end{proof} \noindent Before giving the proof of Corollary \ref{F3} from the introduction we note that any algebraic valued differential field extension of a monotone valued differential field is again monotone; see \cite[Corollary 6.3.10]{ADAMTT}. \begin{proof}[Proof of Corollary \ref{F3}] Let $K$ range over $\d$-henselian monotone valued differential fields. As in \cite[Proof of Corollary 8.0.2]{ADAMTT} we have a set $\Sigma_n$ of sentences in the language of valued differential fields, independent of $K$, such that $K \models \Sigma_n$ if and only if every valued differential field extension $L$ of $K$ with $[L:K] = n$ is $\d$-henselian. Now by Theorem \ref{F2} we have $K\equiv \k((t^\Gamma))_c$ for a suitable differential field $\k$, ordered abelian group $\Gamma$, and additive map $c: \Gamma \to \k$. Every valued differential field extension $L$ of $\k((t^\Gamma))_c$ of finite degree is spherically complete as a valued field and so $\d$-henselian by \cite[Corollary 5.4.3 and Theorem 7.2.6]{ADAMTT}. Hence $\k((t^\Gamma))_c\models \Sigma_n$ and thus $K \models \Sigma_n$, for all $n\ge 1$. \end{proof} \noindent We now give an example of a monotone $\d$-henselian field $F$ such that $v(C_F^\times)$ is not pure in $v(F^\times)$. This elaborates on an example by the referee of a monotone henselian valued differential field $F$ for which $v(C_F^\times)$ is not pure in $v(F^\times)$. Let the additive map $c:\mathbb{Z} \to \T_{\log}$ be given by $c(1) = 1$. With the usual derivation on $\T_{\log}$, this yields the (discretely) valued differential field $\k = \T_{\log}((s^\mathbb{Z}))_c$, with $s'=s$. Since $\T_{\log}$ is linearly surjective and the Hahn field $\k$ is spherically complete, $\k$ is a $\d$-henselian field by (DV2), and thus linearly surjective. We now forget about the valuation of $\k$, consider it just as a differential field, and introduce $K := \k((t^\mathbb{Z}))_d$ with the additive map $d:\mathbb{Z} \to \k$ given by $d(1) = 0$, so $t'=0$. Then $K$ is a monotone $\d$-henselian field with $v(K^\times) = \mathbb{Z}$. Finally, let $F := K(\sqrt{st})$, which is naturally a valued differential field extension of $K$. Since $F$ is algebraic over $K$, it is monotone and $\d$-henselian too, by Corollary~\ref{F3}. Clearly, $v(F^\times) = \frac{1}{2}\mathbb{Z}$. We claim that $v(C^\times_F) = \mathbb{Z}$ and so it is not pure in $v(F^\times)$. From $t^{\mathbb{Z}}\subseteq C_F$ we get $\mathbb{Z} \subseteq v(C_F^\times)$. For the reverse inclusion, let any element $a + b\sqrt{st}\in C_F^\times$ be given with $a, b \in K$, not both zero. Now, $$(a+b\sqrt{st})' = a' + b'\sqrt{st} + b (\sqrt{st})' = a' + b' \sqrt{st} + b(\sqrt{st}/2) = a' + (b' + b/2)\sqrt{st},$$ so $a' = 0$ and $b' + b / 2 = 0$. From $b' = -b / 2$ we now derive $b = 0$. (Then $a + b\sqrt{st} = a\in C_{\k}((t^\mathbb{Z}))$, and thus $v(a + b\sqrt{st})\in \mathbb{Z}$, as claimed.)
Let $k,l$ range over $\mathbb{Z}$. Towards a contradiction, suppose $\displaystyle b = \sum_{l \geq l_0} b_l t^l$ with all $b_l \in \k$, $l_0 \in \mathbb{Z}$, $b_{l_0}\ne 0$. Then $\displaystyle b' = \sum_{l \geq l_0} b'_l t^l$ and so the equality $b'=-b/2$ takes the form $$\sum_{l \geq l_0} b'_l t^l\ =\ -\frac{1}{2} \sum_{l \geq l_0} b_l t^l\ =\ \sum_{l \geq l_0} -\frac{1}{2} b_l t^l.$$ Therefore $b'_l = - b_l/2$ for all $l\geq l_0$, in particular for $l = l_0$. Write $\displaystyle b_{l_0} = \sum_{k\geq k_0} u_k s^k$ with all $u_k \in \T_{\log}$, $k_0\in \mathbb{Z}$, and $u_{k_0} \neq 0$. We have $\displaystyle b'_{l_0} = \sum_{k\geq k_0} (u'_k + k u_k) s^k$ and $\displaystyle -\frac{1}{2} b_{l_0} = \sum_{k\geq k_0} -\frac{1}{2}u_k s^k$. Thus $u'_k + ku_k = -u_k/2$ for all $k \geq k_0$. For $k = k_0$ we have $u_{k_0} \neq 0$, and so this gives $u^\dagger_{k_0} = -k_0 - 1/2$. However, this contradicts $\T_{\log}^\dagger \cap \mathbb{R} = \{ 0 \}$ and hence the claim is proved. On the other hand: \begin{prop} Let $F$ be a henselian valued differential field with algebraically closed or real closed residue field. Then $v(C_F^\times)$ is pure in $v(F^\times)$. \end{prop} \begin{proof} Let $n\alpha = \beta$ with $\alpha \in v(F^\times), \beta \in v(C_F^\times), n\ge 1$; our job is to show that then $\alpha \in v(C_F^\times)$. Take $a \in F^\times$ with $v(a) = \alpha$ and $b \in C_F^\times$ with $v(b) = \beta$, so $v(b/a^n)=0$; if the residue field is real closed we also arrange that the residue class of $b/a^n$ is positive. Consider the polynomial $\displaystyle P(Y) = Y^n - (b/a^n)\in \mathcal{O}_F[Y]$; the henselianity of $F$ and the assumption on the residue field give a zero $y \asymp 1$ of $P$ in $F$. Then $(ay)^n = b \in C_F^\times$, hence $ay \in C_F^\times$ with $v(ay) = \alpha$. \end{proof} \noindent A valued differential field with small derivation is said to be {\em $\d$-algebraically maximal\/} if it has no proper immediate $\d$-algebraic valued differential field extension. For monotone valued differential fields with linearly surjective differential residue field, $$\text{$\d$-algebraically maximal}\ \Longrightarrow\ \text{$\d$-henselian}$$ by \cite[Theorem 7.0.1]{ADAMTT}. By \cite[Theorem 7.0.3]{ADAMTT}, the converse holds in the case of few constants, but an example at the end of Section 7.4 of \cite{ADAMTT} shows that this converse fails for some $\d$-henselian monotone valued differential field with many constants. Below we generalize this example as follows: \begin{cor} Let $K$ be a $\d$-henselian, monotone, valued differential field with $v(C^\times)\ne \{0\}$. Then some $L\equiv K$ is not $\d$-algebraically maximal. \end{cor} \begin{proof} By Theorems~\ref{F1} and~\ref{F2} and L\"owenheim-Skolem we can arrange $K= \k((t^\Gamma))_c$ where the differential field $\k$ and the ordered abelian group $\Gamma$ are countable and $c: \Gamma \to \k$ is additive. With $C:= C_K$, take $a\in C^\times$ with $va=\gamma_0>0$. Then $a = \sum_{\gamma\ge \gamma_0} a_\gamma t^\gamma$, with $\der(a_\gamma) + c(\gamma) a_\gamma = 0$ for all $\gamma$, in particular for $\gamma=\gamma_0$. Hence $\mathfrak{m} := a_{\gamma_0} t^{\gamma_0}\in C$, and so all infinite sums $\sum_n q_n \mathfrak{m}^n$ with rational $q_n$ lie in $C$ as well. Thus $C$ is uncountable. On the other hand, $\k(t^\Gamma)$ is countable and so by L\"owenheim-Skolem we have a countable $L\prec K$ that contains $\k(t^\Gamma)$. Thus $K$ is an immediate extension of $L$ and we can take $a\in C\setminus L$.
Then $L\< a \> = L(a)$ is a proper immediate $\d$-algebraic extension of $L$ and therefore $L$ is not $\d$-algebraically maximal. \end{proof} \section{Eliminating the lift of the differential residue field} \noindent In this section we drop the requirement of having a {\em lift} of the differential residue field in our structure and instead use a copy of the differential residue field. For this purpose we consider 3-sorted structures $$\mathcal{K} = (K, \mathbf{k}, \Gamma; \pi, v, c)$$ where $K$ and $\mathbf{k}$ are differential fields, $\Gamma$ is an ordered abelian group, $v:K^\times \to \Gamma=v(K^\times)$ is a valuation which makes $K$ into a monotone valued differential field, $\pi:\mathcal{O} \to \mathbf{k}$ with $\mathcal{O} := \mathcal{O}_v$ is a surjective differential ring morphism, and $c:\Gamma \to \mathbf{k}$ is an additive map satisfying $\forall \gamma \exists x\ne 0 \quad \Big[v(x) = \gamma \ \&\ \pi(x^\dagger) = c(\gamma)\Big]$; note that $x^\dagger\in \mathcal{O}$ for $x\in K^\times$ by monotonicity, so $\pi(x^\dagger)$ makes sense. We construe these $\mathcal{K}$ as $L_3$-structures for a natural $3$-sorted language $L_3$ (with unary function symbols for $\pi, v$ and $c$). We have an obvious set $\operatorname{Mo}(c)$ of $L_3$-sentences whose models are exactly these $\mathcal{K}$. \begin{lem}\label{ExistenceOfLiftWithMapCOnly} Suppose $\mathcal{K} = (K, \mathbf{k}, \Gamma; \pi, v, c) \models \operatorname{Mo}(c)$ is $\d$-henselian and $\k$ is any lift of the differential residue field. Then $(K, \Gamma; v, \iota \circ c) \models \operatorname{Mo}(\ell, c)$, where $\iota : \mathbf{k} \to \k$ is the inverse of the differential field isomorphism $\left.\pi\right|_\k :\k \to \mathbf{k}$. \end{lem} \begin{proof} We need to check conditions (1) and (2) from the previous section. First of all $\iota \circ c$ is obviously additive. Fix $\gamma \in \Gamma$. There is an element $x \in K^\times$ with $v(x) = \gamma$ and $\pi(x^\dagger) = c(\gamma)$. Let $a = (\iota \circ \pi) (x^\dagger) = (\iota \circ c) (\gamma)$. As $a \in \k$ and $\pi(a) = \pi(x^\dagger)$, we get $x^\dagger = a + \epsilon$ for some $\epsilon \prec 1$. By \cite[Corollary 7.1.9]{ADAMTT} we have $ \epsilon = (1 + \delta) ^ \dagger$ for some $\delta \prec 1$ and thus $$(\iota \circ c) (\gamma) = a = x^\dagger - (1 + \delta)^\dagger = \Big( \frac{x}{1 + \delta} \Big)^\dagger, \textnormal{ and } v \Big( \frac{x}{1 + \delta} \Big) = v(x) = \gamma. $$ This completes the proof of the lemma. \end{proof} \begin{thm}\label{MT-l} Suppose $\mathcal{K}=(K, \mathbf{k}, \Gamma; \pi, v, c)\models \operatorname{Mo}(c)$ is $\d$-henselian. Then $\Th(\mathcal{K})$ is axiomatized by the following axiom schemes: \begin{enumerate}[font=\normalfont] \item $\operatorname{Mo}(c)$; \item the axioms for $\d$-henselianity; \item $\operatorname{Th}(\mathbf{k},\Gamma;c)$ with $\mathbf{k}$ as differential field and $\Gamma$ as ordered abelian group. \end{enumerate} \end{thm} \begin{proof} Let any two $\aleph_1$-saturated models $\mathcal{K}_1=(K_1, \mathbf{k}_1, \Gamma_1; \pi_1, v_1, c_1)$ and $\mathcal{K}_2 = (K_2, \mathbf{k}_2, \Gamma_2; \pi_2, v_2, c_2)$ of the axioms in the theorem be given. By Lemma~\ref{ExistenceOfLiftWithMapCOnly} we have in both models lifts of the differential residue fields that make these into models of $\operatorname{Mo}(\ell, c)$. So $\Th(\mathbf{k}_i, \Gamma_i; c_i) = \Th(\k_i, \Gamma_i; \iota_i \circ c_i)$ where $\iota_i$ is the isomorphism between the differential residue field $\mathbf{k}_i$ and its lift $\k_i$ for $i=1,2$. It remains to appeal to Theorem \ref{MT-s} to conclude that these two models are elementarily equivalent.
\begin{comment} As in the proof of Theorem~\ref{MainTheorem}, we can assume the Continuum Hypothesis (CH) for this argument. Let any two saturated models $\mathcal{K}_1=(K_1, \mathbf{k}_1, \Gamma_1; \pi_1, v_1, c_1)$ and $\mathcal{K}_2=(K_2, \mathbf{k}_2, \Gamma_2; \pi_2, v_2, c_2)$ of cardinality $\aleph_1$ of the axioms in the theorem be given. Take any two lifts $\iota_1 : \mathbf{k}_1 \to \k_1$ and $\iota_2 : \mathbf{k}_2 \to \k_2$ of the differential residue fields. Then by Lemma~\ref{ExistenceOfLiftWithMapCOnly} we have that $(K_1, \Gamma_1; v_1, \iota_1 \circ c_1)$ and $(K_2, \Gamma_2; v_2, \iota_2 \circ c_2)$ are models of $\operatorname{Mo}(\ell, c)$. Note that the structures $(\mathbf{k}_1, \Gamma_1; c_1)$ and $(\k_1, \Gamma_1;\iota_1 \circ c_1)$ are isomorphic given by $\phi = (\iota_1, \textnormal{id}_{\Gamma_1}) : (x, \gamma) \mapsto (\iota_1(x), \gamma)$. Therefore $\operatorname{Th}(\mathbf{k}_1, \Gamma_1; c_1) = \operatorname{Th}(\k_1, \Gamma_1; \iota_1 \circ c_1)$. Likewise, $\operatorname{Th}(\mathbf{k}_2, \Gamma_2; c_2) = \operatorname{Th}(\k_2, \Gamma_2; \iota_2 \circ c_2)$ and thus $\operatorname{Th}(\k_1, \Gamma_1; \iota_1\circ c_1) = \operatorname{Th}(\k_2, \Gamma_2; \iota_2 \circ c_2)$. By Theorem \ref{MT-s} we get an isomorphism $(f, f_v):(K_1, \Gamma_1; v_1, \iota_1 \circ c_1) \to (K_2, \Gamma_2; v_2, \iota_2 \circ c_2)$ of $L_2$-structures. Define $f_r : \mathbf{k}_1 \to \mathbf{k}_2$ by $f_r(a) = \pi_2(f(\iota_1(a)))$. It is easy to see that then $(f, f_r, f_v) : \mathcal{K}_1 \to \mathcal{K}_2$ is an isomorphism of $L_3$-structures. This completes the proof of the theorem. \end{comment} \end{proof} \section*{Acknowledgments} \noindent The author thanks Lou van den Dries for numerous discussions and comments on this paper. The author also thanks the referee for pointing out an error in the previous version of the paper and for making other helpful comments. \end{document}
\begin{document} \date{\today} \title{The natural density of some sets of square-free numbers} \author{Ron Brown} \address{Department of Mathematics\\University of Hawaii\\2565 McCarthy Mall\\Honolulu, Hawaii 96822} \email{[email protected]} \begin{abstract} Let $P$ and $T$ be disjoint sets of prime numbers with $T$ finite. A simple formula is given for the natural density of the set of square-free numbers which are divisible by all of the primes in $T$ and by none of the primes in $P$. If $P$ is the set of primes congruent to $r$ modulo $m$ (where $m$ and $r$ are relatively prime numbers), then this natural density is shown to be $0$. \end{abstract} \maketitle \section{Main results}\label{s:main} In $1885$ Gegenbauer proved that the natural density of the set of square-free integers, i.e., the proportion of natural numbers which are square-free, is $6/\pi^2$ \cite[Theorem 333; reference on page 272]{HW}. In 2008 J. A. Scott conjectured that the proportion of natural numbers which are odd square-free numbers is $4/\pi^2$ or, equivalently, the proportion of natural numbers which are square-free and divisible by $2$ is $2/\pi^2$ \cite{Scott}. The conjecture was proven in 2010 by G. J. O. Jameson, in an argument adapted from one computing the natural density of the set of all square-free numbers \cite{Jameson}. In this note we use the classical result for all square-free numbers to reprove Jameson's result and indeed to generalize it: \begin{theorem}\label{main} Let $P$ and $T$ be disjoint sets of prime numbers with $T$ finite. Then the proportion of all numbers which are square-free and divisible by all the primes in $T$ and by none of the primes in $P$ is \[ \frac{6}{\pi^2}\prod_{p \in T}\frac{1}{1+p }\prod_{p \in P}\frac{p}{1+p}. \] \end{theorem} As in the above theorem, throughout this paper $P$ and $T$ will be disjoint sets of prime numbers with $T$ finite. The letter $p$ will always denote a prime number. The term \textit{numbers} will always refer to positive integers. Empty products, such as occurs in the first product above when $T$ is empty, are understood to equal $1$. If $P$ is infinite, we will argue below that the second product above is well-defined. \begin{comment} The theorem says that the proportion of square-free numbers which are divisible by $p$ is $\frac{1}{p+1}$. Thus, taking $p=2$, we see that the proportion of natural numbers which are even square-free numbers is $\frac{6}{\pi^2}\frac{1}{2+1} = 2/\pi^2$, so one third of the square-free numbers are even. Taking $p=101$, we see that the proportion of natural numbers which are square-free and divisible by $101$ is $\frac{6}{102\pi^2} = \frac{1}{17\pi^2}$, and the proportion of square-free numbers which are divisible by $101$ is $\frac{1}{102}$. \end{comment} \begin{examples} 1. Setting $P = \{2\}$ and $T$ equal to the empty set in the theorem we see that the natural density of the set of odd square-free numbers is $\frac{6}{\pi^2}\frac{2}{2+1} = \frac{4}{\pi^2}$; taking $T = \{2\}$ and $P$ equal to the empty set we see that the natural density of the set of even square-free numbers is $\frac{6}{\pi^2}\frac{1}{2+1} = \frac{2}{\pi^2}$. Thus one third of the square-free numbers are even and two thirds are odd. (These are Jameson's results of course.) 2. Set $T = \{2,3,5\}$ and $P=\{7\}$ in the theorem. 
Then the theorem says that the natural density of the set of square-free numbers divisible by 30 but not by 7 is $\frac{6}{\pi^2}\frac{1}{2+1}\frac{1}{3+1}\frac{1}{5+1}\frac{7}{7+1}$, so the proportion of square-free numbers which are divisible by $30$ but not by $7$ is $\frac{1}{2+1}\frac{1}{3+1}\frac{1}{5+1}\frac{7}{7+1}= \frac{7}{576} $. \end{examples} Our interest in the case that $P$ is infinite arose in part from a question posed by Ed Bertram: what is the natural density of the set of square-free numbers none of which is divisible by a prime congruent to $1$ modulo $4$? The answer is zero; more generally we have: \begin{theorem}\label{arithprog} Let $r$ and $m$ be relatively prime numbers. Then the natural density of the set of square-free numbers divisible by no prime congruent to $r$ modulo $m$ is zero. \end{theorem} This theorem is a corollary of the previous theorem since, as we shall see in Section \ref{s:arithprog}, for any $r$ and $m$ as above, \[ \prod_{p \equiv r\mod{m}} \frac{p}{1+p}= 0. \] \section{A basic lemma}\label{s:bijection} For any real number $x$ and set $B$ of numbers, we let $B[x]$ denote the number of elements $t$ of $B$ with $t \le x$. Recall that if $\lim_{x \to \infty} B[x]/x$ exists, then it is by definition the \textit{natural density} of $B$ \cite[Definition 11.1]{NZ}. Let $\mathcal A$ denote the set of square-free numbers. Then we let $\mathcal A(T,P)$ denote the set of elements of $\mathcal A$ which are divisible by all elements of $T$ and by no element of $P$ (so, for example, $\mathcal A=\mathcal A(\emptyset,\emptyset)$). The set of square-free numbers analyzed in Theorem \ref{main} is $\mathcal A(T,P)$. The next lemma shows how the calculation of the natural density of the sets $\mathcal A(T,P)$ reduces to the calculation of the natural density of sets of the form $\mathcal A(\emptyset,S)$ and, when $P$ is finite, also reduces to the calculation of the natural density of sets of the form $\mathcal A(S,\emptyset)$. \begin{lemma}\label{bijection} For any finite set of primes $ S $ disjoint from $T$ and from $P$ and for any real number $x$, we have $\mathcal A(T,S\cup P)[x] = \mathcal A(T \cup S,P)[xs]$ where $s = \prod_{p\in S}p$. Moreover, the set $\mathcal A(T\cup S, P)$ has a natural density if and only if $\mathcal A(T ,P\cup S)$ has a natural density, and if $D$ is the natural density of $\mathcal A(T\cup S, P)$, then the natural density of $\mathcal A(T ,P\cup S)$ is $sD$. \end{lemma} \begin{proof} The first assertion is immediate from the fact that multiplication by $s$ gives a bijection from the set of elements of $\mathcal A(T,S\cup P)$ less than or equal to $x$ to the set of elements of $\mathcal A(T \cup S,P)$ less than or equal to $xs$. This implies that \[ \frac{\mathcal A(T,P\cup S)[x]}{x} = s\frac{\mathcal A(T \cup S,P)[xs]}{xs}. \] The lemma follows by taking the limit as $x$ (and hence $xs$) goes to infinity. \end{proof} \begin{remarks} 1. We might note that if we assume that for all $T$ the sets $\mathcal A(T,\emptyset)$ have natural densities, then it is easy to compute these natural densities. After all, for any $T$ the set $\mathcal A $ is the disjoint union over all subsets $S$ of $T$ of the sets $\mathcal A(T\setminus S,S)$, so by Lemma \ref{bijection} for any real number $x$ \[ \mathcal A[x] = \sum_{S\subseteq T} \mathcal A(T\setminus S,S)[x] = \sum_{S\subseteq T}\mathcal A(T,\emptyset)[d_Sx] \] where for any $S\subseteq T$ we set $d_S = \prod_{p \in S} p$. Hence \[ \frac{\mathcal A[x]}{x} = \sum_{S\subseteq T} d_S\frac{\mathcal A(T,\emptyset)[d_Sx]}{d_Sx}.
\] Taking the limit as $x$ goes to $\infty $ we have $ 6/\pi^2 = (\sum_{S\subseteq T} d_S) G $ where $G$ denotes the natural density of $\mathcal A(T,\emptyset)$. But \[ \sum_{S\subseteq T} d_S = \sum_{d|d_T} d = \prod_{p \in T} (1+p) \] \cite[Theorem 4.5]{NZ}, so indeed \[ G= \frac{6}{\pi^2} \prod_{p \in T} \frac{1}{1+p}. \] 2. In the language of probability theory, Theorem \ref{main} says that the probability that a number in $\mathcal A$ is divisible by a prime $p$ is $1/(p+1)$ (so the probability that it is not is $p/(p+1)$) and, moreover, for any finite set $S$ of primes not equal to $p$, being divisible by $p$ is independent of being divisible by all of the elements of $S$. \end{remarks} \section{Proof of Theorem \ref{main} when $P$ is finite}\label{mainproof1} First suppose that $P = \emptyset$. We will prove Theorem \ref{main} in this case by induction on the number of elements of $T$. The next lemma gives the induction step. \begin{lemma} Let $p$ be a prime number not in $T$. If the set $\mathcal A(T,\emptyset)$ has natural density $D$, then the set $\mathcal A(\{p\}\cup T,\emptyset) $ has natural density $ \frac{1}{p+1}D$. \end{lemma} \begin{proof} For any real number $x$ we set $E(x) = \mathcal A(\{p\}\cup T,\emptyset)[x]$. Let $\epsilon > 0$. The lemma asserts that \[ \lim_{x \to \infty}\frac{E(x)}{x} = \frac{1}{p+1} D. \] Therefore it suffices to show for all choices of $\epsilon$ above that, for all sufficiently large $x$ (depending on $\epsilon$), \[ \left| \frac{E(x)}{x} - \frac{1}{p+1} D\right| < \epsilon. \] Note that $\mathcal A( T,\emptyset)$ is the disjoint union of $\mathcal A(\{p\}\cup T,\emptyset)$ and $\mathcal A(T,\{p\})$. Hence by Lemma \ref{bijection} (applied to $\mathcal A(T,\{p\})$) for any real number $x$, \[ \mathcal A(T,\emptyset)[x/p] = E(x/p) + E(x) \] and so by the choice of $D$ there exists a number $M$ such that if $x>M$ then \[ \left| \frac{E(x)}{x/p} + \frac{E(x/p)}{x/p} - D \right| < \epsilon/3. \] We next pick an even integer $k$ such that $\frac{1}{p^k} < \frac{\epsilon}{3}$. Then \begin{equation}\label{e2} \left| E(x/p^k) \right| \le x/p^k < \frac{\epsilon}{3} x \end{equation} and also (using the usual formula for summing a geometric series) \begin{equation}\label{e1} \left| -Dx \sum_{i=1}^{k} (-\frac{1}{p})^i - Dx\frac{1}{p+1} \right| = Dx \left|\frac{(-\frac{1}{p})-(-\frac{1}{p})^{k+1}}{1-(-\frac{1}{p})} +\frac{1}{p+1}\right| \end{equation} \[ =Dx \left|\frac{-1+\frac{1}{p^k} }{p+1} +\frac{1}{p+1}\right|< \frac{1}{p^k} Dx < \frac{\epsilon}{3}\frac{6}{\pi^2} x <\frac{\epsilon}{3} x . \] Now suppose that $x > p^kM$. Then for all $i \le k$ we have $x/p^i > M$ and hence (applying the choice of $M$ above), \begin{equation}\label{e3} \left| E(x) + E(x/p) - D\frac{x}{p}\right| < \frac{\epsilon}{3} \frac{x}{p} \end{equation} and similarly \[ \left|- E(x/p) -E(x/p^2) + D\frac{x}{p^2 }\right| < \frac{\epsilon}{3} \frac{x}{p^2} \] and \[ \left| E(x/p^2) + E(x/p^3) - D\frac{x}{p^3}\right| < \frac{\epsilon}{3} \frac{x}{p^3} \] and \[ \left|- E(x/p^3) -E(x/p^4) + D\frac{x}{p^4}\right| < \frac{\epsilon}{3} \frac{x}{p^4} \] \centering{\LARGE{$\vdots$}} \enlargethispage{\baselineskip} \begin{equation} \label{ek} \left|- E(x/p^{k-1}) -E(x/p^k) + D\frac{x}{p^k} \right| < \frac{\epsilon}{3} \frac{x}{p^k}.
\end{equation} \begin{flushleft} Using the triangle inequality to combine the inequalities (\ref{e2}) and (\ref{e1}) together with all those between (\ref{e3}) and (\ref{ek}) (inclusive) and dividing through by $x$, we can conclude that \end{flushleft} \[ \left| \frac{E(x)}{x} - \frac{1}{p+1} D\right| < \frac{\epsilon}{3}\left( \sum_{i=1}^{k} \frac{1}{p^i}\right) + \frac{\epsilon}{3} + \frac{\epsilon}{3}<\epsilon. \] \end{proof} Theorem \ref{main} in the case that $P$ is empty now follows from the above lemma by induction on the number of elements of $T$. That it is true when $P$ is finite but not necessarily empty follows from Lemma \ref{bijection}: in the statement of the lemma replace $P$ by $\emptyset$ and $S$ by $P$; then the natural density of $\mathcal A(T,P)$ is \[ \frac{6}{\pi^2}\prod_{p \in P}p \prod_{p \in T \cup P} \frac{1}{1+p} = \frac{6}{\pi^2}\prod_{p \in T}\frac{1}{1+p}\prod_{p \in P}\frac{p}{1+p}. \] \section{Proof of Theorem \ref{main} when $P$ is infinite}\label{s:mainproof} We begin by proving the theorem in the case that $T$ is empty. Let $p_1,p_2,p_3, \cdots$ be the strictly increasing sequence of elements of $P$. Since all the quotients $p_i/(1+p_i)$ are less than $1$, the partial products of the infinite product $\prod_i p_i/(1+p_i)$ form a strictly decreasing sequence bounded below by $0$; thus $\prod_{p \in P} p/(1+p) $ converges and its limit, say $\alpha$, is independent of the order of the factors. First suppose that $\alpha \ne 0$. Then $\sum_{p\in P} 1/p < \infty$. After all, we have \[ - \log \alpha = - \sum_{p \in P} \log \frac{p}{1+p} = \sum_{p \in P} \big(\log (1+p) -\log p\big) > \sum_{p\in P}\frac{1}{1+p} > \frac{1}{2}\sum_{p\in P} 1/p. \] Now observe that $\mathcal A \setminus \mathcal A(\emptyset , P )$ is the disjoint union \[ \mathcal A \setminus \mathcal A(\emptyset , P ) = \bigcup_{k \ge 1} \mathcal A(\{p_k\}, \{ p_1, \cdots, p_{k-1}\}) \] since for all $b \in \mathcal A \setminus \mathcal A(\emptyset , P )$ there exists a least $k$ with $p_k|b$, so that $b\in \mathcal A(\{p_k\}, \{ p_1, \cdots, p_{k-1}\})$. For all $n$ and $k$ we have \[ \frac{ \mathcal A(\{p_k\}, \{ p_1, \cdots, p_{k-1}\})[n]}{n} \le \frac{|\{j: 1 \le j \le n, p_k | j\}|}{n} \le \frac{1}{p_k}. \] Hence by Tannery's theorem (see \cite[p. 292]{tannery} or \cite[p. 199]{hofbauer}) the natural density of $\mathcal A \setminus \mathcal A(\emptyset,P)$ is \[ \lim_{n \to \infty} \frac{(\mathcal A \setminus \mathcal A(\emptyset,P))[n]}{n} = \lim_{n \to \infty} \sum_{k=1}^{\infty}\frac{\mathcal A(\{p_k\}, \{ p_1, \cdots, p_{k-1}\})[n]}{n} \] \[ = \sum_{k=1}^{\infty}\lim_{n \to \infty}\frac{\mathcal A(\{p_k\}, \{ p_1, \cdots, p_{k-1}\})[n]}{n} =\sum_{k=1}^{\infty}\frac{6}{\pi^2} \frac{1}{1+p_k}\prod_{i<k}\frac{p_i}{1+p_i} \] by the proof in the previous section of the theorem in the case that $P$ is finite. Writing $1/(1+p_k) = 1 - p_k/(1+p_k)$ we can see that the natural density of $\mathcal A \setminus \mathcal A(\emptyset,P)$ is therefore a limit of a telescoping sum \[ \frac{6}{\pi^2}\lim_{L \to \infty} \sum_{k=1}^{L} \left(\prod_{i<k}\frac{p_i}{1+p_i} - \prod_{i<k+1}\frac{p_i}{1+p_i}\right) \] \[ =\frac{6}{\pi^2}\lim_{L \to \infty}\left(1- \prod _{i \le L}\frac{p_i}{1+p_i}\right) =\frac{6}{\pi^2}\left(1-\prod_{p\in P}\frac{p}{1+p}\right) \] and therefore the natural density of $\mathcal A(\emptyset,P)$ is \[ \frac{6}{\pi^2} - \frac{6}{\pi^2}\left(1-\prod_{p\in P}\frac{p}{1+p}\right)=\frac{6}{\pi^2}\prod_{p\in P}\frac{p}{1+p}, \] as was claimed.
We now consider the case that $\alpha =\prod_{p \in P} p/(1+p)= 0$ (still assuming that $T = \emptyset$). Suppose that $\epsilon >0$. By hypothesis there exists a number $M$ with $\frac{6}{\pi^2} \prod_{i\le M}\frac{p_i}{1+p_i} < \epsilon/2$. Then by our proof of the theorem in the case that $P$ is finite there exists a number $L$ such that if $n>L$ then \[ \frac{(\mathcal A(\emptyset, \{p_1,p_2, \cdots, p_M\})[n]}{n} < \frac{\epsilon}{2} +\frac{6}{\pi^2}\prod_{i\le M}\frac{p_i}{1+p_i} . \] Then if $n>L$ we have \[ 0 < \frac{\mathcal A(\emptyset, P)[n]}{n} \le \frac{\mathcal A(\emptyset, \{ p_1,p_2, \cdots, p_M \})[n]}{n} < \frac{\epsilon}{2}+\frac{\epsilon}{2} = \epsilon. \] Therefore, \[ \lim_{n \to \infty}\frac{\mathcal A(\emptyset,P)[n]}{n} = 0 = \prod_{p\in P}\frac{p}{1+p}. \] This completes the proof of the theorem in the case that $T= \emptyset$. The general case where $T$ is arbitrary then follows from Lemma \ref{bijection}, applied with $T$ and $S$ replaced respectively by $\emptyset$ and $T$: then if we set $s = \prod_{p \in T}p$ we have that the natural density of $\mathcal A(T,P)$ equals $1/s$ times the natural density of $\mathcal A(\emptyset, P\cup T)$, i.e., equals \[ \frac{6}{\pi^2}\prod_{p\in T}\frac{1}{p} \prod_{p \in T}\frac{p}{1+p}\prod_{p \in P}\frac{p}{1+p} =\frac{6}{\pi^2} \prod_{p \in T}\frac{1}{1+p}\prod_{p \in P}\frac{p}{1+p}, \] which completes the proof of Theorem \ref{main}. \section{Proof of Theorem \ref{arithprog}}\label{s:arithprog} It suffices by Theorem \ref{main} to prove that $\prod_{p \in P} p/(1+p) = 0$. Let $M$ be any number. By a lemma of K. K. Norton \cite[Lemma 6.3]{norton} there exists a constant $B$ independent of the choices of $r,m,$ and $M$ such that \[ \left|\sum_{M>p\in P}\frac{1}{p} - \frac{\log\log M}{\phi(m)} \right| < B \frac{\log(3m)}{\phi(m)}. \] As we observed earlier, for any $p$ we have \[ 1/(2p) < 1/(1+p) < \log(1+p) - \log p \] so \[ \log \prod_{M > p \in P} \frac{p}{1+p} = \sum_{M>p \in P}(\log p -\log(1+p)) \] \[ < - \sum_{M>p\in P}\frac{1}{2p} < \frac{1}{2}\left(-\frac{\log\log M}{\phi (m)} + B \frac{\log(3m)}{\phi(m)}\right) \] which has limit $-\infty$ as $M \to \infty$. Hence \[ \prod_{p \in P} \frac{p}{1+p}= \lim_{M \to \infty} \prod_{M>p \in P}\frac{p}{1+p} = 0, \] completing the proof of Theorem \ref{arithprog}. \end{document}
\begin{document} \begin{abstract} In this paper we discuss generalized groups and provide some interesting examples. Further, we introduce the generalized module, a module-like structure obtained from a generalized group, discuss some of its properties, and describe generalized module groupoids. \end{abstract} \title{Generalized Groups and Module Groupoids} \section{Introduction} The generalized groups introduced by Molaei are an interesting generalization of groups (cf.\cite{molai}). The identity element in a group is unique, but in a generalized group there exists an identity for each element. Clearly every group is a generalized group. A groupoid, another generalization of a group, is a small category in which every morphism is invertible; it was first defined by Brandt in 1926. Groupoids have been studied by many mathematicians with different objectives. One such approach is the structured groupoid, which is obtained by adding another structure in such a way that the added structure is compatible with the groupoid operation. \par In this paper we introduce a module action on a generalized group and call the resulting structure a generalized module. We then discuss some interesting examples and properties of generalized modules. Further, analogous to generalized group groupoids, we describe the generalized module groupoid over a ring and obtain a relation between the category of generalized modules and the category of generalized module groupoids. \section{Preliminaries} In this section we briefly recall the basic definitions and elementary concepts needed in the sequel. In particular, we recall the definitions of categories, groupoids, and generalized groups, give examples, and discuss some interesting properties of these structures. \begin{defn} (cf.\cite{kss}) A category ${\mathcal C} $ consists of the following data: \begin{enumerate} \item A class called the class of vertices or objects, $\nu {\mathcal C}. $ \item A class of disjoint sets ${\mathcal C}(a,b)$, one for each pair $ (a,b) \in \nu {\mathcal C}\times \nu {\mathcal C}.$ An element $f \in{\mathcal C}(a,b)$ is called a morphism from $ a $ to $ b, $ written $ f : a \rightarrow b$; $ a = dom\; f$ is called the domain of $f$ and $ b = cod\; f $ the codomain of $f . $ \item For $ a, b, c \in \nu{\mathcal C}, $ a map $$ \circ: {\mathcal C} ( a,b) \times {\mathcal C}( b , c) \rightarrow {\mathcal C}( a , c) \,\,\text{given by}\,\, ( f , g ) \mapsto f \circ g, $$ the composition of morphisms in $ {\mathcal C} . $ \item For each $ a \in \nu {\mathcal C} $, a unique $ 1_a \in {\mathcal C}( a,a ) $, the identity morphism on $a.$ \end{enumerate} These must satisfy the following axioms: \begin{itemize} \item(cat1) \qquad for $ f \in {\mathcal C} ( a,b) $, $ g \in {\mathcal C}( b , c)$ and $ h \in {\mathcal C}( c , d ), $ $$ f \circ ( g \circ h) = (f \circ g) \circ h; $$ \item (cat2)\qquad for each $ a \in \nu {\mathcal C}$, $ f \in {\mathcal C}(a,b) $ and $ g \in {\mathcal C} (c , a),$ $$1_a \circ f = f \qquad \text{and} \qquad g \circ 1_a = g. $$ \end{itemize} \end{defn} Clearly $\nu \mathcal C$ can be identified with a subclass of $\mathcal C,$ and with this identification it is possible to regard categories in terms of morphisms alone. The category $\mathcal C$ is said to be small if the class $\mathcal C$ is a set. A morphism $f\in \mathcal C(a, b)$ is said to be an isomorphism if there exists $f^{-1} \in \mathcal C(b, a)$ such that $ff^{-1} = 1_a,$ the domain identity, and $f^{-1}f = 1_b,$ the range identity.
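A standard observation, recorded here for convenience (the short verification uses only the axioms above): the inverse of an isomorphism is unique. Indeed, if $g, g' \in \mathcal C(b,a)$ both satisfy $f\circ g = f\circ g' = 1_a$ and $g\circ f = g'\circ f = 1_b$ for some $f \in \mathcal C(a,b)$, then \[ g = 1_b \circ g = (g' \circ f)\circ g = g'\circ (f\circ g) = g'\circ 1_a = g'. \]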
\begin{exam} A group $G$ can be regarded as category in the following way; define category $\mathcal C$ with just one object say $\nu \mathcal{C}=G$ and morphisma $\mathcal C = \{g: g\in G\}$ with composition in $\mathcal C$ is the binary operation in $G.$ Identity element in the group will be the identity morphism on the vertex $G.$ \end{exam} \begin{defn} A groupoid $\mathcal G=(\nu \mathcal G, \mathcal G)$ is a small category such that for morphisms $f,g\in \mathcal G$ with $cod\,f=dom\,g$ then $fg\in \mathcal G$ and every morphism is an isomorphism. A groupoid $\mathcal G$ is said to be connected if for all $a\in \nu \mathcal G,\;\; \mathcal G(a, a) \neq \phi$ \end{defn} \begin{exam} Every group can be regarded as a groupoid with only one object. \end{exam} \begin{exam} For a set $X$ the cartesian product $X\times X$ is a groupoid over $X$ with morphisms are the elements in $X\times X$ with the composition $ (x,y)\cdot(u,v)$ exists only when $y=u$ and is given by $(x,y)(u,v)=(x,v).$ In particular $(x,x)$ is the unique left identity and $(y, y)$ is the unique right identity. \end{exam} \begin{defn} Given two categories $\mathcal C$ and $\mathcal D,$ a functor $F: \mathcal C \rightarrow \mathcal D$ consists of two functions: the object function denoted by $\nu F$ which assigns to each object $a$ of $\mathcal C,$ an object $\nu F(a)$ of the category $\mathcal D$ and a morphism function which assigns to each morphism $f: a\rightarrow b$ of $\mathcal C,$ a morphism $F(f): F(a)\rightarrow F(b) $ in $\mathcal D $ satisfies the following $$F(1_c) = 1_{F(c)}\,\text{for every}\, c \in \nu \mathcal C,\,\,\text{and}\,\, F(fg)= F(f) F(g)$$ whenever $fg$ is defined in $\mathcal C.$ \end{defn} We denote by $\bf{Gpd}$ the category of groupoids in which objects are the groupoids and morphisms are the functors.\par A semigroup is a pair $(S, \cdot)$ where $S$ is a nonempty set and $\cdot$ is an associative binary operation on $S.$ A semigroup $S$ is said to be regular if for each $x\in S$ there exists $x'\in S$ such that $xx'x= x.$ An element $e$ of $S$ is said to be idempotent if $e^2 = e.$ A band is a semigroup in which all elements are idempotents. Inverse semigroup is a regular semigroup in which for each $x\in S$ there exists an unique $y\in S$ such that $x= xyx$ and $y = yxy.$ \subsection{Generalized Groups and Generalized group groupoids.\\} In the following we recall the definitions of generalized groups and describe some of its properties. \begin{defn}(cf.\cite{molai}) A generalized group $G$ is a non-empty set together with a binary operation called multiplication subject to the set of rules given below: \begin{enumerate} \item $(ab)c = a(bc)$ for all $a,\; b,\;c \;\;\in G $ \item for each $a\in G$ there exists unique $e(a)\in G$ with $ae(a) = e(a)a= a$ \item for each $a \in G$ there exists $a^{-1} \in G$ with $aa^{-1} = a^{-1}a = e(a)$ \end{enumerate} \end{defn} It is seen that for each element $a$ in a generalized group the inverse is unique and both $a$ and $a^{-1}$ have the same identity. Every abelian generalized group is a group. \begin{defn}(cf.\cite{molai}) A generalized group $G$ is said to be normal generalized group if $ e(ab) = e(a) e(b)$for all elements $a,b \in G$ \end{defn} \begin{defn}(cf.\cite{molai}) A non-empty subset $H$ of a generalized group $G$ is a generalized subgroup of $G$ if and only if for all $a, b \in H,\;\; ab^{-1} \in H.$ \end{defn} \begin{thm} Rectangular band semigroup is a normal generalized group. 
\end{thm} \begin{proof} A band $ B$ is a semigroup in which every element is an idempotent. Let $I$ and $\Lambda$ be nonempty sets and let $B = I\times \Lambda$ with multiplication defined by $$ (i,\lambda)(j, \mu) = (i, \mu) \quad \text{for all} \quad (i,\lambda),(j, \mu)\in I\times\Lambda. $$ Then $B$ is a semigroup in which all elements are idempotents, that is, a rectangular band. Suppose $(j, \mu)\in B$ is an identity for $(i, \lambda)$, that is, $$ (i, \lambda)(j, \mu) = (j, \mu)(i, \lambda) = (i, \lambda).$$ Then $(i,\mu) = (i,\lambda)$ and $(j,\lambda) = (i,\lambda)$, which gives $ (j, \mu)= (i,\lambda).$ Thus we have $e(i, \lambda) = (i, \lambda)$ for all $ (i, \lambda) \in B$ and $(i, \lambda)^{-1} = (i, \lambda).$\\ Moreover $$e((i, \lambda)(j, \mu)) = e(i,\mu) = (i, \mu) = (i, \lambda)(j, \mu) = e(i, \lambda)e(j,\mu), $$ hence $B$ is a normal generalized group. \end{proof} \begin{exam} Let $G = (V(G), E(G))$ be a complete digraph without multiple edges; then each edge is uniquely determined by its starting and ending vertices. Let $d(g)$ stand for the domain and $r(g)$ for the range of the edge $g.$ For any $f, g \in E(G)$, define the composition of $f$ and $g$ as the unique edge starting from the domain of $f$ and ending at the range of $g$, that is, $$ f \circ g = h^{r(g)}_{d(f)}, \quad \text{with} \;\;d(h^{r(g)}_{d(f)}) =d(f)\;\; \text{and} \;\;r(h^{r(g)}_{d(f)}) = r(g).$$ The composition $\circ$ is an associative binary operation. Indeed, for $ f, g, h \in E(G)$ with $d(f) = v_1$, $r(f) = v_2$, $d(g) = u_1$, $r(g) = u_2$, $d(h) = w_1$ and $ r(h) = w_2$, \begin{equation*} \begin{split} (f\circ g)\circ h & = (h^{u_2}_{v_1}) \circ h = h^{w_2}_{v_1}, \\ f\circ ( g\circ h )& = f \circ(h^{w_2}_{u_1}) = h^{w_2}_{v_1}, \\ \text{i.e.,}\quad (f\circ g)\circ h & = f\circ ( g\circ h ). \end{split} \end{equation*} Also, for each edge $f,$ $$ f\circ f = f,$$ hence $e(f) = f^{-1} = f$. Thus the complete digraph is a generalized group. \end{exam} \begin{exam} Let $$G = \left\{ A = \begin{pmatrix} a&b\\ 0&0 \end{pmatrix} : a \neq 0,\;\; a,\; b\; \in \mathbb R\right\}. $$ Then $G$ is a generalized group and, for all $A \in G$, $$e(A) = \begin{pmatrix} 1&b/a\\ 0&0 \end{pmatrix} \quad\text{and}\quad A^{-1} = \begin{pmatrix} 1/a&b/a^2\\ 0&0 \end{pmatrix},$$ where $e(A)$ and $A^{-1}$ are the identity and the inverse of the matrix $A$, respectively. Also $e(AB) = e(B)$ for all $A, B \in G. $ \end{exam} \begin{defn}(cf.\cite{molai}) Let $G$ and $H$ be two generalized groups. A generalized group homomorphism from $G$ to $ H $ is a map $f: G\rightarrow H$ such that $$ f(ab) = f(a)f(b).$$ \end{defn} \begin{thm}(cf.\cite{molai}) Let $f: G\rightarrow H$ be a homomorphism of generalized groups $G$ and $H.$ Then \begin{enumerate} \item $f(e(a))= e(f(a))$ is an identity element in $H$ for all $a\in G; $ \item $ f(a^{-1}) = f(a)^{-1};$ \item if $K$ is a generalized subgroup of $G$, then $f(K)$ is a generalized subgroup of $H.$ \end{enumerate} \end{thm} Generalized groups and their homomorphisms form a category, denoted by $GG.$ \begin{defn}(cf.\cite{Gursoy}) A generalized group groupoid $\mathcal G $ is a groupoid $(\nu\mathcal{G}, \mathcal{G})$ endowed with the structure of a generalized group such that the following maps \begin{enumerate} \item $+ : \mathcal G\times \mathcal G \rightarrow \mathcal G, \;\; (f, g) \mapsto f + g,$ \item $u : \mathcal G\rightarrow \mathcal G,\;\; f \mapsto -f,$ \item $e:\mathcal G\rightarrow \mathcal G,\;\; f \mapsto e(f),$ \end{enumerate} are functorial.
\end{defn} Since $+$ is functorial, we have \begin{equation*} \begin{split} ( f\circ g) + (h\circ k) & = +(f\circ g, h\circ k)\\ & = +[(f, h)\circ(g, k)] \\ & = +(f, h) \circ +(g, k)\\ & = (f+h) \circ (g+k), \end{split} \end{equation*} thus the interchange law $$(f\circ g)+(h\circ k) = (f+h) \circ (g+k)$$ holds between the groupoid composition and the generalized group operation. In other words, a generalized group groupoid is a groupoid endowed with the structure of a generalized group such that the structure maps of the groupoid are generalized group homomorphisms. \begin{exam} Let $G$ be a generalized group. Then $G\times G$ is a generalized group groupoid with object set $G$. For each morphism $(x, y)\in G\times G$, the generalized group identity of $(x,y)$ is $e(x,y)=(e(x),e(y))$ and its inverse is $-(x,y)=(-x, -y)$, and the interchange law also holds. Indeed, \begin{equation*} \begin{split} [(f,g)\circ(g, h)]+[(f', g')\circ(g', h')]& = (f, h)+(f', h')\\ & = (f+f',h+h'),\\ [(f,g)+ (f',g')] \circ [(g, h)+(g', h')] & = (f+f', g+g')\circ (g+g', h+h')\\ & = (f+f', h+h'),\\ \text{so}\quad [(f,g)\circ(g, h)]+[(f', g')\circ(g', h')]& = [(f,g)+ (f',g')] \circ [(g,h)+(g', h')]. \end{split} \end{equation*} \end{exam} \subsection{Generalized Groups from a connected groupoid.\\} We proceed to describe the generalized group obtained from a connected groupoid. Let $\mathcal G$ be a connected groupoid such that $\mathcal G(a, b)$ contains exactly one morphism for all $a,\; b\;\in \nu\mathcal G .$ Define a binary operation, denoted by $+$, on $\mathcal G$ as follows: for $f \in \mathcal G(a, b)$ and $g \in \mathcal G(c,d)$, $$f+g = f\circ k_{bc}\circ g, $$ where $\circ$ is the composition in the groupoid and $k_{bc}$ is the unique morphism in $\mathcal G(b,c)$. Then $\mathcal G$ is a generalized group, since \begin{enumerate} \item The operation $+$ is associative: for $f\in \mathcal G(a,b)$, $g\in \mathcal G(c,d)$, $h\in \mathcal G(i,j)$, \begin{equation*} \begin{split} (f+g)+h & = (f\circ k_{bc}\circ g)+h \\ & = (f\circ k_{bc}\circ g)\circ( k_{di}\circ h ) \\ & = f\circ k_{bc}\circ g\circ k_{di}\circ h,\\ f+(g+h) & = f+( g\circ k_{di}\circ h)\\ & = f\circ k_{bc}\circ (g\circ k_{di}\circ h)\\ & = f\circ k_{bc}\circ g\circ k_{di}\circ h, \end{split} \end{equation*} i.e., $(f+g)+h = f+(g+h)$, so $+$ is an associative binary operation. \item For each $f \in \mathcal G(a,b)$, the unique morphism $k_{ba}\in\mathcal G(b,a)$ is $f^{-1}$, so $$ f+f = f\circ f^{-1}\circ f = f,$$ hence $e(f) = -f = f$.\\ Thus the groupoid $\mathcal G$, together with the operation $+$, forms a generalized group, denoted by $\mathcal G^*.$ Moreover, this generalized group $\mathcal G^*$ is normal, since $ e(f+g) = f+g = e(f)+e(g).$ \end{enumerate} Now we extend this construction of a generalized group to an arbitrary connected groupoid.
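Before doing so, we record a concrete instance of the construction just described (an illustration only, using the pair groupoid of the example above). Take $\mathcal G = X\times X$ for a set $X$, so that every homset $\mathcal G(a,b)=\{(a,b)\}$ is a singleton. Then for $f=(a,b)$ and $g=(c,d)$, \[ f+g = (a,b)\circ (b,c)\circ (c,d) = (a,d), \] with $e(f)=-f=f$ for every $f$; this is precisely the addition $(a,b)+(c,d)=(a,d)$ that reappears in the generalized ring example on $\mathbb R^2$ below.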
We start with a connected groupid $\mathcal G$ with partial composition $\circ.$ Choose a morphism from each homset $\mathcal G(a,b)$ and is denote by $h_{ab}$ in such a way that $h_{ba} = {h_{ab}}^{-1}$ and $h_a = 1_a.$ Define an addition $+$ on $\mathcal G$ for $f\in \mathcal G(a, b), g\in \mathcal G(c,d)$ as follows $$ f+g = f\circ h_{bc} \circ g .$$ Clerarly $+$ is an associative binary operation on $\mathcal G$ and for each $f\in \mathcal G(a,b)$ \begin{center} \begin{equation*} \begin{split} f+h_{ab} & = f\circ h_{ba} \circ h_{ab}\\ & = f\circ{ h_{ab}}^{-1}\circ h_{ab}\\ & = f\circ 1_b\\ & = f\\ h_{ab} +f & = h_{ab} \circ h_{ba}\circ f\\ & = h_{ab} \circ {h_{ab}}^{-1}\circ f\\ & = 1_a \circ f\\ & = f\\ \end{split} \end{equation*} \end{center} $f+h_{ab} = h_{ab}+f = f\;\; and \;\;e(f) = h_{ab} \;\; \forall\;\; f\in \mathcal G $ \begin{center} \begin{equation*} \begin{split} f +(h_{ab}f^{-1} h_{ab})& = f \circ 1_b \circ f^{-1}\circ h_{ab}\\ & = h_{ab}\\ (h_{ab}f^{-1} h_{ab})+f & = h_{ab}\circ f^{-1}\circ 1_a \circ f\\ & = h_{ab} \end{split} \end{equation*} \end{center} $$f +(h_{ab}f^{-1} h_{ab}) = (h_{ab}f^{-1} h_{ab})+f = h_{ab} = e(f)$$ $$ -f = h_{ab}f^{-1} h_{ab} $$ Thus the connected groupoid $\mathcal G$ is a generalized group with respect to the addition we defined above. \\ Next we recall the generalized rings which was introduced by Molaei. \begin{defn}(cf.\cite{mrmolaei}) A generalized ring $R$ is a nonempty set $ R$ with two different operations adition and multiplication denoted by $`+`\; and\; `\times`$respectively in which $(R,+)$ is a generalized group and satisfies the following conditions. \begin{enumerate} \item multiplication is an associative binary operation. \item for all $x,\; y,\; z\;\in R \quad x(y+z) = xy+xz$ and $(x+y)z = xz+yz$ \end{enumerate} \end{defn} Note that in a generalized ring $R,$ $e(ab) =e(a)e(b)$ for all $a, b\;\in R.$ \begin{exam} $\mathbb{R^2}$ with the operations $(a, b) +(c, d) = (a,d)$ and $ (a, b)(c, d)=(ac, bd)$ is a generalized ring. \end{exam} \section{Generalized modules} Let $M$ be a generalized group and $R$ be a ring, in the following we proceed to define a generalized module using the generalized group $M$. \begin{defn} Let $R$ be a ring with unity. A generalized group $M$ is said to be (left) generalized $R$ module if for each element $r$ in $R$ and each $m$ in $M$ we have a product $rm$ in $M$ such that for $r, s \in\; R \;\; and\;\; m, n \in M $ \begin{enumerate} \item (r+s)m \;=\; rm+sm \item r(m+n)\; =\; rm+ rn \item r(sm)\; =\; (rs)m \item $r e(m) = e(m)$ \;\;for all $r\in R$ and $m\in M$ \item 1.m = m \end{enumerate} \end{defn} \begin{prop} Let $M$ be an $R-$ module then $M \times M$ is a generalized $R$ module with the operations $$(x, y)+(m, n) = (x, y+n) $$ $$r(x, y) = (x, ry)$$ and for each $x \in M$ the subset $M_x = \{ (x, y): y\in M\}$ is an $R$ module. 
\end{prop} \begin{proof} It can be seen that $(M\times M, +)$ is a generalized group in which, for all $(m, n) \in M\times M,\;\; e(m, n)= (m, 0)$ and $(m, n)^{-1} = (m, -n).$ To show that $M\times M$ is a generalized $R$ module, consider $r, s \in R$ and $(x,y),(m,n) \in M\times M.$ \begin{enumerate} \item \begin{equation*} \begin{split} (r+s)(x,y) & = (x,(r+s)y)\\ & = (x, ry+sy) \end{split} \end{equation*} \begin{equation*} \begin{split} r(x,y) +s(x,y) & = (x, ry)+(x,sy) \\ & =(x, ry+sy) \end{split} \end{equation*} i.e., $(r+s)(x,y)= r(x,y) +s(x,y)$, so axiom (1) is satisfied. \item \begin{equation*} \begin{split} r[(x,y)+(m,n)] & = r[(x,y+n)] \\ & = (x, r(y+n))\\ & = (x, ry+rn) \end{split} \end{equation*} \begin{equation*} \begin{split} r(x,y) +r(m,n) & = (x, ry)+(m, rn)\\ & = (x, ry+rn) \end{split} \end{equation*} hence $r[(x,y)+(m,n)]= r(x,y) +r(m,n)$, so axiom (2) is satisfied. \item \begin{equation*} \begin{split} rs(x,y) & = (x, rsy)\\ & = (x, r(sy))\\ & = r(x,sy)\\ & = r(s(x,y)) \end{split} \end{equation*} Thus $rs(x,y) = r(s(x,y))$, so axiom (3) is satisfied. \item \begin{equation*} \begin{split} re(x,y) & = r(x,0) \\ & = (x,r0)\\ & = (x,0)\\ & = e(x, y) \end{split} \end{equation*} Thus $ re(x,y) = e(x,y)$, so axiom (4) is satisfied. \item $$1\cdot(x, y) = (x,y)\quad \forall (x,y)\in M\times M.$$ \end{enumerate} To show that for each $x \in M$ the subset $M_x = \{ (x, y): y\in M\}$ is an $R$ module, it is enough to show that $M_x$ is an abelian group and is closed under scalar multiplication. Let $m =(x, y)$ and $n= (x, z)$ be elements of $M_x.$ Then $$ m+n = (x,y)+(x,z) = (x, y+z)$$ and $$ n+m = (x,z)+(x,y) = (x, z+y) = (x,y+z). $$ Therefore $$m+n = n+m$$ and $+$ is a commutative binary operation on $M_x.$ For every $m = (x,y)\in M_x$, $(x,y)+(x,0)= (x,0) +(x,y) = (x,y)$ and $(x,y)^{-1}= (x,-y).$ Thus $M_x$ is an abelian group (an abelian generalized subgroup of $M\times M$), and for any $r\in R$, $\; r(x, y) = (x,ry) \in M_x.$ Hence $M_x$ is an $R$ module. \end{proof} Thus we have the following example of a generalized $\mathbb R$ module. \begin{exam} Consider $M = \mathbb R \times \mathbb R$ with the operations $$(x, y)+(m, n) = (x, y+n), $$ $$r(x, y) = (x, ry).$$ Then $M$ is a generalized $\mathbb R$ module. \end{exam} \begin{thm} If $M$ is a generalized $R$ module, then \begin{enumerate} \item$ e(rm) = re(m)$ for all $r \in R,\; m\in M;$ \item $(rm)^{-1} = r(m^{-1}).$ \end{enumerate} \end{thm} \begin{proof} Let $r \in R$ and $m \in M$. \begin{enumerate} \item $$ rm + re(m) = r(m+e(m)) = rm,$$ $$ re(m) + rm = r(e(m)+m) = rm.$$ Hence $e(rm) = re(m).$ \item $$rm + rm^{-1} = r(m+m^{-1}) = re(m) = e(rm),$$ $$ rm^{-1} + rm = r (m^{-1} +m ) = re(m) = e(rm),$$ so $(rm)^{-1} = r(m^{-1}).$ \end{enumerate} \end{proof} \begin{defn} Let $M$ and $N$ be two generalized $R-$modules. A function $f: M \rightarrow N $ is called a generalized module homomorphism if $$ f(m+ n)\; = f(m) +f(n)\quad \text{for all}\; m,\; n\in M, $$ $$ f(rm) = rf(m)\quad \text{for all}\; r\in R,\; m \in M. $$ \end{defn} \begin{defn} Let $M$ be a generalized $R$ module. A subset $N\subset M $ is said to be a generalized submodule of $M$ if $N$ is a generalized subgroup of $M$ and $rm \in N$ for each $r\in R$ and $m \in N. $ \end{defn} \begin{prop} Let $M$ be a generalized $R$-module which is also a normal generalized group. Then the set of identity elements in $M$ is a submodule of $M$, and we call it the zero submodule or trivial submodule.
\end{prop} \begin{proof} We denote the set of identity elements in $M$ by $$e(M)=\{e(x)\;:\; x\in M\}.$$ First we show that $e(M)$ is a generalized subgroup of $M.$ For $m,\; n\;\in e(M)$ there exist $x,\; y\;\in M$ such that $m = e(x)$ and $n = e(y).$ Then $$ mn^{-1} = e(x)e(y)^{-1} = e(x)e(y) = e(xy),$$ hence $mn^{-1} \in e(M)$ and $e(M)$ is a generalized subgroup of $M.$ \\ Let $r$ be an element of the ring $R$; then $$rm = re(x) = e(x),$$ hence $rm \in e(M)$ for all $r\in R$ and $m \in e(M).$ Thus the set of identity elements of $M$ forms a generalized submodule of $M.$ \end{proof} Every nonzero generalized module $M$ contains at least one submodule, namely $M$ itself. \begin{thm} Let $R$ be a ring and let $M$ and $N$ be normal generalized $R$ modules. If $f: M\rightarrow N$ is a generalized $R-$module homomorphism, then $$ \ker f = \{x: x \in M,\; f(x)\in e(N) \},$$ where $e(N)$ denotes the set of identity elements of $N,$ is a submodule of $M$, and $Im f = \{f(x):\;\; x\in M \}$ is a submodule of $N.$ \end{thm} \begin{proof} Let $M$ and $N$ be generalized modules over a ring $R$ and let $f$ be a homomorphism between $M$ and $N.$ To show that $ \ker f $ is a submodule of $M$, it suffices to prove that it is a generalized subgroup of $M$ and is closed under scalar multiplication.\\ Let $ x, y \in \ker f $; then $ f(x) = e(n)$ and $f(y)= e(n')$ for some $n, \; n'\in N.$ Now consider \begin{equation*} \begin{split} f(xy^{-1}) & = f(x)f(y^{-1})\\ & = f(x)(f(y))^{-1}\\ & = e(n)e(n')^{-1} \\ & = e(n)e(n')\\ & = e(nn'), \end{split} \end{equation*} hence $xy^{-1} \in \ker f$ and $ \ker f$ is a generalized subgroup of $M.$ \\ For any $r\in R$, \begin{equation*} \begin{split} f(rx) & = rf(x)\\ & = re(n)\\ &= e(n), \end{split} \end{equation*} so $rx \in \ker f$ for each $r\in R.$ Therefore $\ker f$ is a submodule of $M.$ \\ Similarly we can prove that $Im f$ is a submodule of $N.$ Indeed, let $x, y \in Im f$; then there exist $m, n \in M$ such that $f(m) = x$ and $f(n) = y.$ Now consider \begin{equation*} \begin{split} xy^{-1} & = f(m)f(n)^{-1}\\ & = f(m) f(n^{-1}) \\ & = f(mn^{-1}), \end{split} \end{equation*} and $mn^{-1} \in M$, hence $xy^{-1}\in Im f.$ Thus $Im f$ is a generalized subgroup of $N$, and for any $r \in R,$ \begin{equation*} \begin{split} rx & = rf(m)\\ & = f(rm), \qquad rm\in M. \end{split} \end{equation*} Hence $rx \in Im f$ for any $r \in R$ and $x \in Im f.$ Therefore $Im f$ is a generalized submodule of $N.$ \end{proof} \begin{thm} If $M$ is a generalized $R$-module and there exists $x \in M$ such that $M = Rx$, then $M$ is a module over $R$. \end{thm} \begin{proof} First we show that $M$ is an abelian group with respect to the generalized group operation. For any $m,\; n\; \in M$ there exist $r,\; s\; \in R$ such that $$ m = rx\quad \text{and} \quad n = sx. $$ Consider $$ rx + sx = (r+s)x = (s+r)x = sx+rx, $$ i.e., $m+n = n+m$ for all $m, n \in M.$ Hence $M$ is an abelian generalized group, and therefore an abelian group.\\ Therefore $M$ is an $R-$module. \end{proof} \begin{note} Generalized modules and their homomorphisms form a category, in which the objects are the generalized modules and the morphisms are their homomorphisms; it is denoted by $\mathcal{GM}.$ \end{note} \begin{res} If $M$ and $N$ are two generalized modules over a ring $R$, then their cartesian product $M\times N$, defined by $$ M\times N = \{ (m,n):\;\;m\in M,\; \; n\in N\},$$ is a generalized module over $R$ with respect to the componentwise operations.
$ie;$ for any $(m, n),\; (x,y)\;\in M\times N$ and $r\in R$ the addition and scalar multipication is given by $$(m,n)+(x,y) = (m+x,n+y)$$ $$ r(m,n) = (rm, rn)$$ \end{res} \begin{defn} A groupoid $\mathcal G$ is a generalized module groupoid over $R$ if it has a generalized module structure over $R$ and it satisfies the following conditions. \begin{enumerate} \item $\mathcal G$ is a generalized group groupoid. \item For each $r\in R$ the mapping $$\eta_r : \mathcal G\rightarrow \mathcal G$$ defined by $\eta_r(g) = rg$ is a functor on $\mathcal G$. \\ $ie;$ for any composable morphisms in $\mathcal G$ and any $r\in R$ we should have $$\quad r(g\circ h) = rg \circ rh.$$ \end{enumerate} \end{defn} \begin{exam} Let $M$ be a generalized module over a ring $R.$ Then $M\times M$ is a generalized module groupoid over $R$ with object set $M.$ It follows from the example(6) that $M\times M$ with operation $(x,y)+(m,n) = (x+m,y+n)$ is a generalized group groupoid. The cartesian product of two generalized modules are again a generalized module. To show that $M\times M$ is a generalized module groupoid over $R$ it is enough to prove that for any $r\in R$ the map $\eta_r: M\times M \rightarrow M\times M$ defined by $\eta_r(x, y) = (rx, ry)$ is a functor. For, let $x$ be any object in the category $M\times M$ then $$\eta_r(1_x) = \eta_r(x,x) = (rx, rx) = 1_{\eta_r(x)}. $$ Consider two composable morphisms $(x,y), (y, z) $ in $M\times M$ then \begin{equation*} \begin{split} \eta_r[(x,y)\circ(y, z)] & = \eta_r[(x, z)]\\ & = (rx, rz)\\ \eta_r(x,y)\circ \eta_r(y, z)& = (rx, ry)\circ (ry, rz)\\ & = (rx, rz)\\ \eta_r[(x,y)\circ(y, z)]& = \eta_r(x,y)\circ \eta_r(y, z) \end{split} \end{equation*} hence $M\times M$ is a generalized module groupoid over $R.$ \end{exam} \begin{defn} Let $M$ and $N$ be two generalized module groupoids over a ring $R.$ A homomorphism $f: M\rightarrow N$ of generalized module groupoids is a functor of underlying groupoids preserving generalized module structure. \end{defn} Note That the generalized module groupoids and their homomorphisms form a category denoted by $\mathcal G\mathcal M\mathcal G $ \begin{prop} There is a functor from the category $\mathcal G\mathcal M$ of generalized modules to the category $\mathcal G\mathcal M\mathcal G $ of generalized module groupoid \end{prop} \begin{proof} Let $M$ be a generalized module over a ring $R.$ Then it can be seen that cartesian product $M\times M$ is a generalized module groupoid over $R.$ If $f: M_1 \rightarrow M_2$ is a homomorphism of generalized modules then define $ F :\mathcal G\mathcal M \rightarrow \mathcal G\mathcal M\mathcal G $ by $$F(M) = M\times M$$ and $$ F(f): M_1 \times M_1 \rightarrow M_2\times M_2$$ $$F(f)(m,n) = (f(m), f(n))$$ and it can be seen that $F(f)$ is a functor between $M_1\times M_1$ and $M_2\times M_2$ so that $F(f)$ is a morphism in the category of generalized module groupoid. Now we prove that $F$ is a functor between the category of generalized modules and the generalized module groupoids. 
For; let for any vertex $M\in\mathcal G \mathcal M$ we have, $$F(1_M)(m,n) = (m,n)$$ $$ F(1_M) = 1_{F(M)}$$ let $f : M\rightarrow N $ and $g: N\rightarrow P$ be two composable homomorphisms in $\mathcal{GM}$ then \begin{equation*} \begin{split} F(fg)(m,n) & = (fg(m), fg(n)) \\ & = ((g(f(m)), g(f(n)))\\ F(f)F(g)(m,n) & = F(g) (F(f)(m,n)) \\ & = F(g)(f(m),f(n))\\ &= (g(f(m)), g(f(n)))\\ F(fg)(m,n) & = F(f)F(g)(m,n) \;\; \forall (m,n) \in M\times M\\ Hence\;\;\; F(fg)= F(f)F(g) \end{split} \end{equation*} \end{proof} \begin{prop} Let $\{ M_i\;\; i\in I\}$ be a family of generalized $R$ module groupoids. Then $M=(\nu M,M)$ where $\nu M = \prod \nu M_i $ and $M = \prod M_i$ is a generalized $R$ module groupoid. \end{prop} \begin{proof} $\prod \nu M_i$ has elements $(m_i)_{i\in I}$ where $m_i\in \nu M_i$ and morphisms $(f_i)_{i\in I}$ from $dom\, f_i$ to $cod\,f_i$ in $M_i$. The composition of morphisms is $$(f_i)_{i\in I} \cdot (g_i)_{i\in I}=(f_i\cdot g_i)_{i\in I}$$ whenever $f_i$ and $g_i$ re composable morphisms in $M_i$. Since each $M_i$ is a groupoid each morphism $f_i$ admits an inverse $f_i^{-1}$, thus the product $M=(\prod \nu M_i, \prod M_i),\,i\in I$ is a groupoid. Moreover the product $M = \prod M_i$ has a structure of generalized module with respect to component wise operations $$(f_i)_{i\in I} + (g_i)_{i\in I} = (f_i+g_i)_{i\in I}\;\; and \;\; r(f_i)_{i\in I} = (rf_i)_{i\in I}$$ thus $M$ is a generalized group groupoid. It remains to show that the map $\eta_r: M \rightarrow M$ by $$ \eta_r(m_i)_{i\in I} = (rm_i)_{i\in I}$$ is a functor on $ M.$ For each $ (x_i)_{i\in I} \in \nu M $ consider \begin{equation*} \begin{split} \eta_r (1_{(m_i)})_{i\in I} & = (r1_{(m_i)})_{i\in I}\\ & = (1_{(rm_i)})_{i\in I}\;\;\;\;( each\; M_i\; is\; a\; generalized\; module\; groupid)\\ & = 1_{\eta_r(m_i)_{i\in I}} \end{split} \end{equation*} let $(m_i)_{i\in I}, (n_i)_{i\in I}$ are two composable morpisms in $M,$ then \begin{equation*} \begin{split} \eta_r((m_i)_{i\in I}\circ (n_i)_{i\in I}) & = r((m_i)_{i\in I}\circ (n_i)_{i\in I})\\ & = r(m_i \circ n_i)_{i\in I} \\ & = (r (m_i \circ n_i)_{i\in I})\\ & = (rm_i \circ rn_i)_{i\in I}\\ & = (rm_i)_{i\in I}\circ (rn_i)_{i\in I}\\ \eta_r(m_i)_{i\in I} \circ \eta_r(n_i)_{i\in I} & = (rm_i)_{i\in I}\circ (rn_i)_{i\in I}\\ \eta_r((m_i)_{i\in I}\circ (n_i)_{i\in I}) & = \eta_r(m_i)_{i\in I} \circ \eta_r(n_i)_{i\in I} \end{split} \end{equation*} \end{proof} \end{document}
\begin{document} \title{Operator inequalities related to the Corach--Porta--Recht inequality} \author[C. Conde, M.S. Moslehian, A. Seddik]{Cristian Conde$^1$, Mohammad Sal Moslehian$^2$ and Ameur Seddik$^3$} \address{$^1$ Instituto de Ciencias, Universidad Nacional de Gral. Sarmiento, J. M. Gutierrez 1150, (B1613GSX) Los Polvorines and Instituto Argentino de Matemática ``Alberto P. Calder\'on", Saavedra 15 3º piso, (C1083ACA) Buenos Aires\\ Argentina} \email{[email protected]} \address{$^2$ Department of Pure Mathematics, Center of Excellence in Analysis on Algebraic Structures (CEAAS), Ferdowsi University of Mashhad, P.O. Box 1159, Mashhad 91775, Iran} \email{[email protected] and [email protected]} \address{$^3$ Department of Mathematics, Faculty of Science, Tabuk University, Saudi Arabia} \email{[email protected]} \keywords{Invertible operator, unitarily invariant norm, Heinz inequality, Corach--Porta--Recht inequality, operator inequality.} \subjclass[2010]{Primary 47B47; Secondary 47A63, 47A30.} \begin{abstract} We prove some refinements of an inequality due to X. Zhan in an arbitrary complex Hilbert space by using some results on the Heinz inequality. We present several related inequalities as well as new variants of the Corach--Porta--Recht inequality. We also characterize the class of operators satisfying $\left\Vert SXS^{-1}+S^{-1}XS+kX\right\Vert \geq (k+2)\left\Vert X\right\Vert$ under certain conditions. \end{abstract} \maketitle \section{Introduction} Let $\mathbb{B}(\mathscr{H})$, $\mathfrak{I}(\mathscr{H})$ and $\mathfrak{U}(\mathscr{H})$ be the $C^*$-algebra of all bounded linear operators acting on a complex Hilbert space $\mathscr{H}$, the set of all invertible elements in $\mathbb{B}(\mathscr{H})$ and the class of all unitary operators in $\mathbb{B}(\mathscr{H})$, respectively. The operator norm on $\mathbb{B}(\mathscr{H})$ is denoted by $\Vert\cdot\Vert$. We denote by \begin{enumerate} \item[$\bullet $] $\mathscr{S}_{0}(\mathscr{H}),$ the set of all invertible self-adjoint operators in $\mathbb{B}(\mathscr{H}),$ \item[$\bullet $] $\mathcal{P}(\mathscr{H}),$ the set of all positive operators in $\mathbb{B}(\mathscr{H}),$ \item[$\bullet $] $\mathcal{P}_{0}(\mathscr{H}),$ the set of all invertible positive operators in $\mathbb{B}(\mathscr{H}),$ \item[$\bullet $] $\mathfrak{U}_{r}(\mathscr{H})=\mathscr{S}_{0}(\mathscr{H})\cap \mathfrak{U} (\mathscr{H}),$ the set of all unitary reflection operators in $\mathbb{B}(\mathscr{H}),$ \item[$\bullet $] $\mathfrak{N}_{0}(\mathscr{H})$, the set of all invertible normal operators in $\mathbb{B}(\mathscr{H}).$ \end{enumerate} For $1\leq p<\infty$, the Schatten $p$-norm class consists of all compact operators $A$ for which $\|A\|_{p}:=({\rm tr}|A|^{p})^{1/p}<\infty$, where ${\rm tr}$ is the usual trace functional. If $A$ and $B$ are operators in ${\mathbb B}({\mathscr H})$ we use $A\oplus B$ to denote the $2\times2$ operator matrix $\left[\begin{array}{cc} A & 0 \\ 0 & B \end{array}\right]$, regarded as an operator on ${\mathscr H}\oplus {\mathscr H}$. 
One can show that \begin{eqnarray}\label{erf} \|A\oplus B\|=\max(\|A\|,\|B\|),\quad \|A\oplus B\|_{p}=\left(\|A\|_{p}^{p}+\|B\|_{p}^{p}\right)^{1/p}\end{eqnarray} One of the most essential inequalities in the operator theory is the following so-called Heinz inequality: \begin{eqnarray}\label{1} \left\Vert PX+XQ\right\Vert \geq \left\Vert P^{\alpha }XQ^{1-\alpha }+P^{1-\alpha }XQ^{\alpha }\right\Vert \end{eqnarray} for all $P,Q\in \mathcal{P}(\mathscr{H})$, all $X\in \mathbb{B}(\mathscr{H})$ and all $\alpha \in [0,1]$. The proof given by Heinz \cite{4} is based on the complex analysis and is somewhat complicated. In \cite{5}, McIntosh showed that the Heinz inequality is a consequence of the following inequality \begin{eqnarray}\label{2} \forall A,B,X\in \mathbb{B}(\mathscr{H}),\ \left\Vert A^*AX+XBB^{* }\right\Vert \geq 2\left\Vert AXB\right\Vert \end{eqnarray} McIntosh proved that \eqref{2} holds and gave his ingenious proof of $\eqref{2}\Rightarrow \eqref{1}$. In the literature, inequality \eqref{2} is called ``Arithmetic-geometric-Mean Inequality''. In \cite{2} Corach--Porta--Recht proved the following, so-called C-P-R inequality, \begin{eqnarray}\label{3} \forall S\in \mathscr{S}_{0}(\mathscr{H}) \forall X\in \mathbb{B}(\mathscr{H}),\ \left\Vert SXS^{-1}+S^{-1}XS\right\Vert \geq 2\left\Vert X\right\Vert \end{eqnarray} The C-P-R inequality is a key factor in their study of differential geometry of self-adjoint operators. They proved this inequality by using the integral representation of a self-adjoint operator with respect to a spectral measure. An immediate consequence of the C-P-R inequality is the following: \begin{eqnarray}\label{4} \forall S,T\in \mathscr{S}_{0}(\mathscr{H}) \forall X\in \mathbb{B}(\mathscr{H}),\ \left\Vert SXT^{-1}+S^{-1}XT\right\Vert \geq 2\left\Vert X\right\Vert \end{eqnarray} Using the polar decomposition of an operator, we may deduce easily from the C-P-R inequality the following operator inequality \begin{eqnarray}\label{5} \forall S\in \mathfrak{I}(\mathscr{H}) \forall X\in \mathbb{B}(\mathscr{H}),\ \left\Vert S^{* }XS^{-1}+S^{-1}XS^*\right\Vert \geq 2\left\Vert X\right\Vert \end{eqnarray} Three years after and in \cite{3}, Fujii--Fujii--Furuta--Nakamato proved that inequalities \eqref{1}, \eqref{2}, \eqref{3}, \eqref{4} and two other ones hold and are mutually equivalent. By giving an easy proof of one of them, they showed a simplified proof of Heinz inequality, see also \cite{FFN}. Also, it is easy to see that two inequalities \eqref{3} and \eqref{5} are equivalent. In \cite{6}, it is shown that the operator inequality \begin{eqnarray}\label{6} \forall X\in \mathbb{B}(\mathscr{H}),\ \left\Vert SXS^{-1}+S^{-1}XS\right\Vert \geq 2\left\Vert X\right\Vert ,\ \left( S\in \mathfrak{I}(\mathscr{H})\right) \end{eqnarray} is in fact a characterization of $\mathbb{C}^*\mathscr{S}_{0}(\mathscr{H})=\{\lambda M: \lambda \in \mathbb{C}\setminus\{0\}, M \in \mathscr{S}_{0}(\mathscr{H})\}$. 
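For later reference we also recall the standard $2\times 2$ operator matrix argument (a sketch only) showing, for instance, that \eqref{4} follows from \eqref{3}: for $S,T\in \mathscr{S}_{0}(\mathscr{H})$ the operator $S\oplus T$ is invertible and self-adjoint on $\mathscr{H}\oplus\mathscr{H}$, and \[ (S\oplus T)\left[\begin{array}{cc} 0 & X \\ 0 & 0 \end{array}\right](S\oplus T)^{-1}+(S\oplus T)^{-1}\left[\begin{array}{cc} 0 & X \\ 0 & 0 \end{array}\right](S\oplus T) =\left[\begin{array}{cc} 0 & SXT^{-1}+S^{-1}XT \\ 0 & 0 \end{array}\right], \] so applying \eqref{3} on $\mathscr{H}\oplus\mathscr{H}$ and using $\left\Vert \left[\begin{array}{cc} 0 & Y \\ 0 & 0 \end{array}\right]\right\Vert =\Vert Y\Vert$ yields $\left\Vert SXT^{-1}+S^{-1}XT\right\Vert \geq 2\Vert X\Vert$. A similar device is used in Theorem \ref{t2} below.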
Recently in \cite{7}, using inequality \eqref{5} and the above characterization of $\mathbb{C}^*\mathscr{S}_{0}(\mathscr{H})$, it is proved that this class is also characterized by each of the following statements: \begin{eqnarray}\label{7} \forall X\in \mathbb{B}(\mathscr{H}),\ \left\Vert SXS^{-1}+S^{-1}XS\right\Vert =\left\Vert S^*XS^{-1}+S^{-1}XS^*\right\Vert \,\, \left(S\in \mathfrak{I}(\mathscr{H})\right) \end{eqnarray} \begin{eqnarray}\label{8} \forall X\in \mathbb{B}(\mathscr{H}),\ \left\Vert SXS^{-1}+S^{-1}XS\right\Vert \geq \left\Vert S^*XS^{-1}+S^{-1}XS^*\right\Vert \,\, \left( S\in \mathfrak{I}(\mathscr{H})\right) \end{eqnarray} Note that this class of operators is the class of all invertible normal operators in $\mathbb{B}(\mathscr{H})$ the spectrum of which is included in a straight line passing through the origin. For the class of all invertible normal operators in $\mathbb{B}(\mathscr{H}),$ it is proved \cite{7, 8} that this class is characterized by each of the following properties \begin{eqnarray}\label{9} \forall X\in \mathbb{B}(\mathscr{H}),\ \left\Vert SXS^{-1}\right\Vert +\left\Vert S^{-1}XS\right\Vert \geq 2\left\Vert X\right\Vert \,\, \left( S\in \mathfrak{I} (\mathscr{H})\right) \end{eqnarray} \begin{eqnarray}\label{10} \forall X\in \mathbb{B}(\mathscr{H}),\ \left\Vert SXS^{-1}\right\Vert +\left\Vert S^{-1}XS\right\Vert =\left\Vert S^*XS^{-1}\right\Vert +\left\Vert S^{-1}XS^*\right\Vert \,\, \left( S\in \mathfrak{I}(\mathscr{H})\right) \end{eqnarray} \begin{eqnarray}\label{11} \forall X\in \mathbb{B}(\mathscr{H}),\ \left\Vert SXS^{-1}\right\Vert +\left\Vert S^{-1}XS\right\Vert \geq \left\Vert S^*XS^{-1}\right\Vert +\left\Vert S^{-1}XS^*\right\Vert \,\, \left( S\in \mathfrak{I}(\mathscr{H})\right) \end{eqnarray} \begin{eqnarray}\label{12} \forall X\in \mathbb{B}(\mathscr{H}),\left\Vert SXS^{-1}\right\Vert +\left\Vert S^{-1}XS\right\Vert \leq \left\Vert S^*XS^{-1}\right\Vert +\left\Vert S^{-1}XS^*\right\Vert \,\, \left( S\in \mathfrak{I}(\mathscr{H})\right) \end{eqnarray} It is natural to ask what happen if we consider in each of the above operator inequalities instead of ``$\geq$'', either ``$\leq$'' or ``$=$''. Let us consider the following associated operator inequalities \begin{eqnarray}\label{13} \forall X\in \mathbb{B}(\mathscr{H}),\ \left\Vert SXS^{-1}+S^{-1}XS\right\Vert \leq 2\left\Vert X\right\Vert \,\, \left( S\in \mathfrak{I}(\mathscr{H})\right) \end{eqnarray} \begin{eqnarray}\label{14} \forall X\in \mathbb{B}(\mathscr{H}),\ \left\Vert SXS^{-1}+S^{-1}XS\right\Vert =2\left\Vert X\right\Vert \,\, \left( S\in \mathfrak{I}(\mathscr{H})\right) \end{eqnarray} \begin{eqnarray}\label{15} \forall X\in \mathbb{B}(\mathscr{H}),\ \left\Vert SXS^{-1}+S^{-1}XS\right\Vert \leq \left\Vert S^*XS^{-1}+S^{-1}XS^*\right\Vert \,\, \left( S\in \mathfrak{I}(\mathscr{H})\right) \end{eqnarray} \begin{eqnarray}\label{16} \forall X\in \mathbb{B}(\mathscr{H}),\ \left\Vert SXS^{-1}\right\Vert +\left\Vert S^{-1}XS\right\Vert =2\left\Vert X\right\Vert \,\, \left( S\in \mathfrak{I} (\mathscr{H})\right) \end{eqnarray} \begin{eqnarray}\label{17} \forall X\in \mathbb{B}(\mathscr{H}),\ \left\Vert SXS^{-1}\right\Vert +\left\Vert S^{-1}XS\right\Vert \leq 2\left\Vert X\right\Vert \,\, \left( S\in \mathfrak{I} (\mathscr{H})\right) \end{eqnarray} In \cite{7, 8, 9}, it was established that each of inequalities \eqref{13}, \eqref{16} and \eqref{17} characterize $\mathbb{R}^{* }\mathfrak{U}(\mathscr{H})$ and \eqref{14} characterizes $\mathbb{C}^*\mathfrak{U}_{r}(\mathscr{H})$. 
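The easy inclusions behind these characterizations can be verified directly (a quick check of one direction only, not of the nontrivial converses): if $S=\alpha U$ with $\alpha \in \mathbb{R}^*$ and $U\in \mathfrak{U}(\mathscr{H})$, then $SXS^{-1}=UXU^*$ and $S^{-1}XS=U^*XU$, so \[ \left\Vert SXS^{-1}+S^{-1}XS\right\Vert \leq 2\left\Vert X\right\Vert \quad\text{and}\quad \left\Vert SXS^{-1}\right\Vert +\left\Vert S^{-1}XS\right\Vert =2\left\Vert X\right\Vert , \] which gives \eqref{13}, \eqref{16} and \eqref{17}; similarly, if $S=\alpha U$ with $\alpha \in \mathbb{C}^*$ and $U\in \mathfrak{U}_{r}(\mathscr{H})$, then $SXS^{-1}+S^{-1}XS=2UXU$, so \eqref{14} holds.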
We found also in \cite{7, 8} that $\mathbb{R}^*\mathfrak{U}(\mathscr{H})$ is also characterized by each of the following two operator equalities \begin{eqnarray}\label{18} \forall X\in \mathbb{B}(\mathscr{H}),\left\Vert S^*XS^{-1}+S^{-1}XS^{* }\right\Vert =2\left\Vert X\right\Vert \,\, \left( S\in \mathfrak{I}(\mathscr{H})\right) \end{eqnarray} \begin{eqnarray}\label{19} \forall X\in \mathbb{B}(\mathscr{H}),\left\Vert S^*XS^{-1}\right\Vert +\left\Vert S^{-1}XS^*\right\Vert =2\left\Vert X\right\Vert \,\, \left( S\in \mathfrak{I}(\mathscr{H})\right) \end{eqnarray} A unitarily invariant norm $\left|\left|\left|\cdot\right|\right|\right|$ is defined on a norm ideal $\mathfrak{J}_{\left|\left|\left| .\right|\right|\right|}$ of $\mathbb{B}(\mathscr{H})$ associated with it and has the property $\left|\left|\left| UXV\right|\right|\right|=\left|\left|\left| X\right|\right|\right|$, where $U$ and $V$ are unitaries and $X \in \mathfrak{J}_{\left|\left|\left| .\right|\right|\right|}$. Note that inequalities \eqref{1}, \eqref{2}, \eqref{3}, \eqref{4} were generalized for arbitrary unitarily invariant norms. Furthermore, it is proved in \cite{Co} that the characterization of the invertible normal operators via inequalities of the uniform norm in $\mathbb{B}(\mathscr{H})$ (\eqref{9}-\eqref{13}) also holds for any unitarily invariant norm. In \cite{12} and in the case $\dim \mathscr{H}<\infty $, by introducing two parameters $r$ and $t$, Zhan proved that for $n \times n$ positive matrices $ A, B$, arbitrary $n \times n$ matrix $X$ and $(t,r)\in (-2,2]\times \lbrack \frac{1}{2}, \frac{3}{2}],$ the following inequality \begin{eqnarray}\label{21} (2+t)\left|\left|\left| A^{r}XB^{2-r}+A^{2-r}XB^{r}\right|\right|\right| \leq 2\left|\left|\left| A^{2}X+tAXB+XB^{2}\right|\right|\right| \end{eqnarray} holds for any unitarily invariant norm $\left|\left|\left| .\right|\right|\right|$. The tool used for proving this inequality is based on the induced Schur product norm. It should be noted that the case $r=1,\ t=0$ of this result is the well-known arithmetic-geometric mean inequality due to Bhatia and Davis \cite{BD}. In this paper we want to extend it and to obtain some refinements of this inequality to the case where $\mathscr{H}$ is a Hilbert space of arbitrary dimension by using elementary techniques. We also characterize the class of operators satisfying $\left\Vert SXS^{-1}+S^{-1}XS+kX\right\Vert \geq (k+2)\left\Vert X\right\Vert$ under certain conditions. Recently, Kittaneh proved in \cite{Ki} the following refinement of the Heinz inequality. \begin{proposition} Let $A,B\in \mathcal{P}(\mathscr{H})$ and $X \in \mathfrak{J}_{\left|\left|\left| .\right|\right|\right|}$. 
Then \begin{enumerate} \item for $\alpha\in[0,\frac12]$ the following inequalities hold \begin{eqnarray}\label{kit1} \left|\left|\left| A^{\alpha }XB^{1-\alpha }+A^{1-\alpha }XB^{\alpha }\right|\right|\right|&\leq& \left|\left|\left| A^{\alpha/2 }XB^{1-\alpha/2 }+A^{1-\alpha/2 }XB^{\alpha/2 }\right|\right|\right|\nonumber\\ &\leq& \frac{1}{\alpha}\int_0^{\alpha}\left|\left|\left| A^{\nu }XB^{1-\nu }+A^{1-\nu }XB^{\nu }\right|\right|\right|d\nu \nonumber \\ &\leq& \frac 12\left|\left|\left| AX+XB\right|\right|\right|+\frac 12 \left|\left|\left| A^{\alpha }XB^{1-\alpha }+A^{1-\alpha }XB^{\alpha }\right|\right|\right|\nonumber\\ &\leq& \left|\left|\left| AX+XB\right|\right|\right| \ \end{eqnarray} \item for $\alpha\in[\frac12,1]$ the following inequalities hold \begin{eqnarray}\label{kit2} \left|\left|\left| A^{\alpha }XB^{1-\alpha }+A^{1-\alpha }XB^{\alpha }\right|\right|\right|&\leq& \left|\left|\left| A^{\frac{1+\alpha}{2}}XB^{\frac{1-\alpha}{2}}+A^{\frac{1-\alpha}{2}}XB^{\frac{1+\alpha}{2}}\right|\right|\right|\nonumber\\ &\leq& \frac{1}{1-\alpha}\int_{\alpha}^1\left|\left|\left| A^{\nu }XB^{1-\nu }+A^{1-\nu }XB^{\nu }\right|\right|\right|d\nu \nonumber \\ &\leq& \frac 12\left|\left|\left| AX+XB\right|\right|\right|+\frac 12 \left|\left|\left| A^{\alpha }XB^{1-\alpha }+A^{1-\alpha }XB^{\alpha }\right|\right|\right|\nonumber\\ &\leq& \left|\left|\left| AX+XB\right|\right|\right| \ \end{eqnarray} where \begin{align} \left|\left|\left| AX+XB\right|\right|\right|&=\lim\limits_{\alpha\to 0}\frac{1}{\alpha}\int_0^{\alpha}\left|\left|\left| A^{\nu }XB^{1-\nu }+A^{1-\nu }XB^{\nu }\right|\right|\right|d\nu\nonumber \\&=\lim\limits_{\alpha\to 1}\frac{1}{1-\alpha}\int_{\alpha}^1\left|\left|\left| A^{\nu }XB^{1-\nu }+A^{1-\nu }XB^{\nu }\right|\right|\right|d\nu.\nonumber \ \end{align} \end{enumerate} \end{proposition} \section{Main results} In this section, we shall prove that inequality \eqref{21} of Zhan follows immediately from the generalized version of the known inequalities \eqref{1} and \eqref{3} in the more general case of arbitrary complex Hilbert space. 
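As a quick consistency check (an observation only, not needed for the proof), the case $t=0$ of \eqref{21} is exactly the Heinz inequality: taking $P=A^{2}$, $Q=B^{2}$ and $\alpha =r/2\in [\frac14,\frac34]$ in the unitarily invariant norm version of \eqref{1} gives \[ \left|\left|\left| A^{r}XB^{2-r}+A^{2-r}XB^{r}\right|\right|\right| \leq \left|\left|\left| A^{2}X+XB^{2}\right|\right|\right| , \] which is \eqref{21} with $t=0$.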
\begin{theorem}\label{t1} Let $A,B\in \mathcal{P}(\mathscr{H})$, where $\mathscr{H}$ is a Hilbert space of of arbitrary dimension and let $t\leq 2$, $r\in \lbrack \frac{1}{2}, \frac{3}{2}].$ Then for any unitarily invariant norm $\left|\left|\left| .\right|\right|\right|$ and for every $ X \in \mathfrak{J}_{\left|\left|\left| .\right|\right|\right|},$ the following inequalities hold \begin{enumerate} \item for $r\in[\frac12, 1]$ \begin{align} &\hspace{-0.5cm}\ 2\left|\left|\left| A^{2}X+XB^{2}+tAXB\right|\right|\right|\geq 2\left|\left|\left|A^{2}X+XB^{2}+2AXB \right|\right|\right|-(4-2t)\left|\left|\left| AXB\right|\right|\right| \nonumber \\ &\geq 4\left|\left|\left| A^{\frac{3}{2}}XB^{\frac{1 }{2}}+A^{\frac{1}{2}}XB^{\frac{3}{2}}\right|\right|\right| -(4-2t)\left|\left|\left| AXB\right|\right|\right|\nonumber \\ &\geq 2\left|\left|\left| A^{\frac{3}{2}}XB^{\frac{1 }{2}}+A^{\frac{1}{2}}XB^{\frac{3}{2}}\right|\right|\right|+ 2\left|\left|\left| A^rXB^{2-r}+A^{ 2-r}XB^{r}\right|\right|\right|-(4-2t)\left|\left|\left| AXB\right|\right|\right|\nonumber \\ &\geq \frac{4}{r-\frac12}\int_0^{r-\frac12}\left|\left|\left| A^{\nu +\frac{1}{2}}XB^{\frac{3}{2}-\nu }+A^{ \frac{3}{2}-\nu }XB^{\nu +\frac{1}{2}}\right|\right|\right|d\nu -(4-2t)\left|\left|\left| AXB\right|\right|\right|\nonumber\\ &\geq 4\left|\left|\left| A^{\frac{2r +1}{4}}XB^{\frac{7-2r}{4}}+A^{\frac{7-2r}{4} }XB^{\frac{2r +1}{4}}\right|\right|\right|-(4-2t)\left|\left|\left| AXB\right|\right|\right|\nonumber \\ &\geq 4\left|\left|\left| A^{r}XB^{2-r}+A^{ 2-r}XB^{r}\right|\right|\right|-(4-2t)\left|\left|\left| AXB\right|\right|\right|\nonumber \\ &\geq (t+2)\left|\left|\left| A^{r}XB^{2-r}+A^{2-r}XB^{r}\right|\right|\right|\,. \end{align} \item for $r\in[1,\frac32]$ \begin{align} &\hspace{-0.5cm}\ 2\left|\left|\left| A^{2}X+XB^{2}+tAXB\right|\right|\right|\geq 2\left|\left|\left|A^{2}X+XB^{2}+2AXB \right|\right|\right|-(4-2t)\left|\left|\left| AXB\right|\right|\right| \nonumber \\ &\geq 4\left|\left|\left| A^{\frac{3}{2}}XB^{\frac{1 }{2}}+A^{\frac{1}{2}}XB^{\frac{3}{2}}\right|\right|\right| -(4-2t)\left|\left|\left| AXB\right|\right|\right|\nonumber \\ &\geq 2\left|\left|\left| A^{\frac{3}{2}}XB^{\frac{1 }{2}}+A^{\frac{1}{2}}XB^{\frac{3}{2}}\right|\right|\right|+ 2\left|\left|\left| A^rXB^{2-r}+A^{ 2-r}XB^{r}\right|\right|\right|-(4-2t)\left|\left|\left| AXB\right|\right|\right|\nonumber \\ &\geq \frac{4}{\frac 32 -r}\int_{r-\frac12}^1\left|\left|\left| A^{\nu +\frac{1}{2}}XB^{\frac{3}{2}-\nu }+A^{ \frac{3}{2}-\nu }XB^{\nu +\frac{1}{2}}\right|\right|\right|d\nu -(4-2t)\left|\left|\left| AXB\right|\right|\right|\nonumber\\ &\geq 4\left|\left|\left| A^{\frac{2r +3}{4}}XB^{\frac{5-2r}{4}}+A^{\frac{5-2r}{4} }XB^{\frac{2r +3}{4}}\right|\right|\right|-(4-2t)\left|\left|\left| AXB\right|\right|\right|\nonumber \\ &\geq 4\left|\left|\left| A^{r}XB^{2-r}+A^{ 2-r}XB^{r}\right|\right|\right|-(4-2t)\left|\left|\left| AXB\right|\right|\right|\nonumber \\ &\geq (t+2)\left|\left|\left| A^{r}XB^{2-r}+A^{2-r}XB^{r}\right|\right|\right|\,. \end{align} \end{enumerate} \end{theorem} \begin{proof} Let $X \in \mathfrak{J}_{\left|\left|\left| .\right|\right|\right|}$ and without loss of generality we may assume that $A,B\in \mathcal{P} _{0}(\mathscr{H}). $ Put $\alpha =r-\frac{1}{2}$ then $0\leq \alpha \leq 1$. First, we consider the case $\alpha\in [0,\frac12]$. 
Using Heinz inequality and its refinements \eqref{kit1} for unitarily invariant norms and considering $A^{-\frac 12}XB^{-\frac 12}\in \mathfrak{J}_{\left|\left|\left| .\right|\right|\right|}$ we have \begin{align}\label{A} &\hspace{-0.5cm}\left|\left|\left| A^{\frac{1}{2}}XB^{-\frac{1 }{2}}+A^{-\frac{1}{2}}XB^{\frac{1}{2}}\right|\right|\right| \nonumber \\ &\hspace{2cm}\geq \frac 12\left(\left|\left|\left| A^{\frac{1}{2}}XB^{-\frac{1 }{2}}+A^{-\frac{1}{2}}XB^{\frac{1}{2}}\right|\right|\right|+ \left|\left|\left| A^{\alpha -\frac{1}{2}}XB^{\frac{1}{2}-\alpha }+A^{ \frac{1}{2}-\alpha }XB^{\alpha -\frac{1}{2}}\right|\right|\right|\right)\nonumber \\ &\hspace{2cm}\geq \frac{1}{\alpha}\int_0^{\alpha}\left|\left|\left| A^{\nu -\frac{1}{2}}XB^{\frac{1}{2}-\nu }+A^{ \frac{1}{2}-\nu }XB^{\nu -\frac{1}{2}}\right|\right|\right|d\nu\nonumber\\ &\hspace{2cm}\geq\left|\left|\left| A^{\frac{\alpha -1}{2}}XB^{\frac{1-\alpha}{2}}+A^{\frac{1-\alpha}{2} }XB^{\frac{\alpha -1}{2}}\right|\right|\right|\nonumber \\ &\hspace{2cm}\geq \left|\left|\left| A^{\alpha -\frac{1}{2}}XB^{\frac{1}{2}-\alpha }+A^{ \frac{1}{2}-\alpha }XB^{\alpha -\frac{1}{2}}\right|\right|\right|. \end{align} Since \begin{align} AXB^{-1}+A^{-1}XB+2X&=A^{\frac{1}{2}}(A^{\frac{ 1}{2}}XB^{-\frac{1}{2}}+A^{-\frac{1}{2}}XB^{\frac{1}{2}})B^{-\frac{1}{2}} \nonumber \\&\:+A^{-\frac{1}{2}}(A^{\frac{1}{2}}XB^{-\frac{1}{2} }+A^{-\frac{1}{2}}XB^{\frac{1}{2}})B^{\frac{1}{2}}, \nonumber \ \end{align} utilizing the generalized version of C-P-R inequality for unitarily invariant norms, we obtain \begin{eqnarray}\label{B} \left|\left|\left| AXB^{-1}+A^{-1}XB+2X\right|\right|\right| \geq 2\left|\left|\left| A^{ \frac{1}{2}}XB^{-\frac{1}{2}}+A^{-\frac{1}{2}}XB^{\frac{1}{2}}\right|\right|\right|\,. \end{eqnarray} It follows from \eqref{A} and \eqref{B} that \begin{align}\label{C} &\hspace{-1.9cm}\left|\left|\left| AXB^{-1}+A^{-1}XB+2X\right|\right|\right| \geq 2\left|\left|\left| A^{\frac{1}{2}}XB^{-\frac{1 }{2}}+A^{-\frac{1}{2}}XB^{\frac{1}{2}}\right|\right|\right| \nonumber \\ &\geq \left|\left|\left| A^{\frac{1}{2}}XB^{-\frac{1 }{2}}+A^{-\frac{1}{2}}XB^{\frac{1}{2}}\right|\right|\right|+ \left|\left|\left| A^{\alpha -\frac{1}{2}}XB^{\frac{1}{2}-\alpha }+A^{ \frac{1}{2}-\alpha }XB^{\alpha -\frac{1}{2}}\right|\right|\right|\nonumber \\ &\geq \frac{2}{\alpha}\int_0^{\alpha}\left|\left|\left| A^{\nu -\frac{1}{2}}XB^{\frac{1}{2}-\nu }+A^{ \frac{1}{2}-\nu }XB^{\nu -\frac{1}{2}}\right|\right|\right|d\nu\nonumber\\ &\geq 2\left|\left|\left| A^{\frac{\alpha -1}{2}}XB^{\frac{1-\alpha}{2}}+A^{\frac{1-\alpha}{2} }XB^{\frac{\alpha -1}{2}}\right|\right|\right|\nonumber \\ &\geq 2\left|\left|\left| A^{\alpha -\frac{1}{2}}XB^{\frac{1}{2}-\alpha }+A^{ \frac{1}{2}-\alpha }XB^{\alpha -\frac{1}{2}}\right|\right|\right|. \end{align} On the other hand, due to \begin{equation*} AXB^{-1}+A^{-1}XB+2X=AXB^{-1}+A^{-1}XB+tX+(2-t)X, \end{equation*} we have \begin{align}\label{D} \left|\left|\left| AXB^{-1}+A^{-1}XB+2X\right|\right|\right|\leq \left|\left|\left| AXB^{-1}+A^{-1}XB+tX\right|\right|\right| +(2-t)\left|\left|\left| X\right|\right|\right|\,. 
\end{align} From two last inequalities \eqref{C} and \eqref{D}, we obtain \begin{align}\label{D'} &\hspace{-0.1cm}2\left|\left|\left| AXB^{-1}+A^{-1}XB+tX\right|\right|\right|\geq 2\left|\left|\left| AXB^{-1}+A^{-1}XB+2X\right|\right|\right|-(4-2t)\left|\left|\left| X\right|\right|\right|\nonumber \\&\geq 4\left|\left|\left| A^{\frac{1}{2}}XB^{-\frac{1 }{2}}+A^{-\frac{1}{2}}XB^{\frac{1}{2}}\right|\right|\right| -(4-2t)\left|\left|\left| X\right|\right|\right| \nonumber \\ &\geq 2\left|\left|\left| A^{\frac{1}{2}}XB^{-\frac{1 }{2}}+A^{-\frac{1}{2}}XB^{\frac{1}{2}}\right|\right|\right|+ 2\left|\left|\left| A^{\alpha -\frac{1}{2}}XB^{\frac{1}{2}-\alpha }+A^{ \frac{1}{2}-\alpha }XB^{\alpha -\frac{1}{2}}\right|\right|\right|-(4-2t)\left|\left|\left| X\right|\right|\right|\nonumber \\ &\geq \frac{4}{\alpha}\int_0^{\alpha}\left|\left|\left| A^{\nu -\frac{1}{2}}XB^{\frac{1}{2}-\nu }+A^{ \frac{1}{2}-\nu }XB^{\nu -\frac{1}{2}}\right|\right|\right|d\nu -(4-2t)\left|\left|\left| X\right|\right|\right|\nonumber\\ &\geq 4\left|\left|\left| A^{\frac{\alpha -1}{2}}XB^{\frac{1-\alpha}{2}}+A^{\frac{1-\alpha}{2} }XB^{\frac{\alpha -1}{2}}\right|\right|\right|-(4-2t)\left|\left|\left| X\right|\right|\right|\nonumber \\ &\geq 4\left|\left|\left| A^{\alpha -\frac{1}{2}}XB^{\frac{1}{2}-\alpha }+A^{ \frac{1}{2}-\alpha }XB^{\alpha -\frac{1}{2}}\right|\right|\right|-(4-2t)\left|\left|\left| X\right|\right|\right|\,. \end{align} From the generalized version of C-P-R inequality for unitarily invariant norms, it is easy to see that if $s\in \mathbb{R}$ $$4\left|\left|\left| A^sXB^{-s }+A^{-s}XB^{s }\right|\right|\right| -4\left|\left|\left| X\right|\right|\right| +2t\left|\left|\left| X\right|\right|\right| \geq (t+2)\left|\left|\left| A^sXB^{-s }+A^{-s}XB^{s }\right|\right|\right|\,. 
$$ From \eqref{D'} and the last inequality, we can deduce that for any $X\in \mathfrak{J}_{\left|\left|\left| .\right|\right|\right|}$ \begin{align} & 2\left|\left|\left| AXB^{-1}+A^{-1}XB+tX\right|\right|\right| \geq 2\left|\left|\left|AXB^{-1} +A^{-1}XB+2X \right|\right|\right|-(4-2t)\left|\left|\left| X\right|\right|\right| \nonumber \\ &\geq 4\left|\left|\left| A^{\frac{1}{2}}XB^{-\frac{1 }{2}}+A^{-\frac{1}{2}}XB^{\frac{1}{2}}\right|\right|\right| -(4-2t)\left|\left|\left| X\right|\right|\right| \nonumber \\ &\geq 2\left|\left|\left| A^{\frac{1}{2}}XB^{-\frac{1 }{2}}+A^{-\frac{1}{2}}XB^{\frac{1}{2}}\right|\right|\right|+ 2\left|\left|\left| A^{\alpha -\frac{1}{2}}XB^{\frac{1}{2}-\alpha }+A^{ \frac{1}{2}-\alpha }XB^{\alpha -\frac{1}{2}}\right|\right|\right|-(4-2t)\left|\left|\left| X\right|\right|\right|\nonumber \\ &\geq \frac{4}{\alpha}\int_0^{\alpha}\left|\left|\left| A^{\nu -\frac{1}{2}}XB^{\frac{1}{2}-\nu }+A^{ \frac{1}{2}-\nu }XB^{\nu -\frac{1}{2}}\right|\right|\right|d\nu -(4-2t)\left|\left|\left| X\right|\right|\right|\nonumber\\ &\geq 4\left|\left|\left| A^{\frac{\alpha -1}{2}}XB^{\frac{1-\alpha}{2}}+A^{\frac{1-\alpha}{2} }XB^{\frac{\alpha -1}{2}}\right|\right|\right|-(4-2t)\left|\left|\left| X\right|\right|\right|\nonumber \\ &\geq 4\left|\left|\left| A^{\alpha -\frac{1}{2}}XB^{\frac{1}{2}-\alpha }+A^{ \frac{1}{2}-\alpha }XB^{\alpha -\frac{1}{2}}\right|\right|\right|-(4-2t)\left|\left|\left| X\right|\right|\right|\nonumber \\&\geq (t+2)\left|\left|\left| A^{\alpha -\frac{1}{2}}XB^{\frac{1}{2}-\alpha }+A^{\frac{1}{2}-\alpha }XB^{\alpha -\frac{1}{2}}\right|\right|\right|\,.\ \end{align} whence, by replace $X$ by $AXB$ and $\alpha$ by $r-\frac12$, we get \begin{align} &\hspace{-0.5cm}\ 2\left|\left|\left| A^{2}X+XB^{2}+tAXB\right|\right|\right|\geq 2\left|\left|\left|A^{2}X+XB^{2}+2AXB \right|\right|\right|-(4-2t)\left|\left|\left| AXB\right|\right|\right| \nonumber \\ &\geq 4\left|\left|\left| A^{\frac{3}{2}}XB^{\frac{1 }{2}}+A^{\frac{1}{2}}XB^{\frac{3}{2}}\right|\right|\right| -(4-2t)\left|\left|\left| AXB\right|\right|\right|\nonumber \\ &\geq 2\left|\left|\left| A^{\frac{3}{2}}XB^{\frac{1 }{2}}+A^{\frac{1}{2}}XB^{\frac{3}{2}}\right|\right|\right|+ 2\left|\left|\left| A^rXB^{2-r}+A^{ 2-r}XB^{r}\right|\right|\right|-(4-2t)\left|\left|\left| AXB\right|\right|\right|\nonumber \\ &\geq \frac{4}{r-\frac12}\int_0^{r-\frac12}\left|\left|\left| A^{\nu +\frac{1}{2}}XB^{\frac{3}{2}-\nu }+A^{ \frac{3}{2}-\nu }XB^{\nu +\frac{1}{2}}\right|\right|\right|d\nu -(4-2t)\left|\left|\left| AXB\right|\right|\right|\nonumber\\ &\geq 4\left|\left|\left| A^{\frac{2r +1}{4}}XB^{\frac{7-2r}{4}}+A^{\frac{7-2r}{4} }XB^{\frac{2r +1}{4}}\right|\right|\right|-(4-2t)\left|\left|\left| AXB\right|\right|\right|\nonumber \\ &\geq 4\left|\left|\left| A^{r}XB^{2-r}+A^{ 2-r}XB^{r}\right|\right|\right|-(4-2t)\left|\left|\left| AXB\right|\right|\right|\nonumber \\ &\geq (t+2)\left|\left|\left| A^{r}XB^{2-r}+A^{2-r}XB^{r}\right|\right|\right|\,. \end{align} Finally, we note that the case $\alpha\in [\frac12, 1]$ is obtained analogously and this completes the proof. \end{proof} Note that the case $t\leq -2$ is trivial. An immediate consequence for the case $r=1$ of this last theorem is the following (exactly the corollary 7 in \cite{12} in finite dimensional case). 
\begin{corollary} Let $A,B\in \mathbb{B}(\mathscr{H})$ and let $t\leq 2.$ Then \begin{eqnarray}\label{23} \forall X\in \mathfrak{J}_{\left|\left|\left| .\right|\right|\right|},\ \left|\left|\left| A^*AX+XBB^{* }+t\left\vert A\right\vert X\left\vert B\right\vert \right|\right|\right| \geq (t+2)\left|\left|\left| AXB^*\right|\right|\right|\,. \end{eqnarray} \end{corollary} Another immediate consequence of this last corollary is (exactly the corollary 8 in \cite{12} in finite dimensional case). \begin{corollary} Let $P,Q\in \mathcal{P}_{0}(\mathscr{H})$ and let $t\leq 2.$ Then \begin{eqnarray}\label{24} \forall X\in \mathfrak{J}_{\left|\left|\left| .\right|\right|\right|},\ \left|\left|\left| PXQ^{-1}+P^{-1}XQ+tX\right|\right|\right| \geq (t+2)\left|\left|\left| X\right|\right|\right| \end{eqnarray} \end{corollary} \begin{remark} The last theorem and their two consequences was proved by Zhan in \cite{12} in the particular case of finite dimensional case. Note that Cano--Mosconi--Stojanoff \cite{1} have proved the last corollary using the spectral measure of a normal operator to generalize \cite[Corollary 8]{12} of Zhan for arbitrary complex Hilbert space. Here, we have proved it in a general situation for an arbitrary Hilbert space using only known operator inequalities. \end{remark} \begin{remark} It follows from the above corollary that for every $k\leq 2$ and for every operator $S\in \mathbb{C}^*\mathcal{P}_{0}(\mathscr{H})$ the following inequality holds \begin{eqnarray}\label{25} \forall X\in \mathbb{B}(\mathscr{H}),\ \left\Vert SXS^{-1}+S^{-1}XS+kX\right\Vert \geq (k+2)\left\Vert X\right\Vert \end{eqnarray} So it is interesting to characterize the class of all operators $S$ in $ \mathfrak{I}(\mathscr{H})$ satisfying this last inequality. We denote this class by $ \mathfrak{D}_{k}(\mathscr{H})$. \end{remark} \begin{proposition}\label{above} For every real numbers $k,\ t$, (i) if $k\geq t$, then $\mathfrak{D}_{k}(\mathscr{H})\subset \mathfrak{D}_{t}(\mathscr{H})$, (ii) if $k\geq 0$, then $$\mathfrak{D}_{k}(\mathscr{H})\subset \left\{ \alpha S:\alpha \in \mathbb{C}^*,\ S\in \mathscr{S}_{0}(\mathscr{H})\text{, }\left\vert \frac{\lambda }{\mu }+\frac{\mu }{\lambda }+k\right\vert \geq k+2,\ \lambda ,\mu \in \sigma (S)\right\} .$$ \end{proposition} \begin{proof} (i) This follows by the same argument as in the proof of Theorem \ref{t1}. (ii) Let $S\in \mathfrak{D}_{k}(\mathscr{H}).$ It follows immediately that the inequality $\left\Vert SXS^{-1}+S^{-1}XS\right\Vert \geq 2\left\Vert X\right\Vert $ holds for every $X$ in $\mathbb{B}(\mathscr{H}).$ Thus $S\in \mathbb{C }^*\mathscr{S}_{0}(\mathscr{H}).$ We may assume without loss of generality that $S$ is invertible and self-adjoint. Denote by $\varphi _{S,k}$ the operator on $\mathbb{B}(\mathscr{H})$ given by $\varphi _{S,k}(X)=SXS^{-1}+S^{-1}XS+kX.$ So that $\sigma (\varphi _{S,k})=\left\{ \frac{\lambda }{\mu }+\frac{\mu }{\lambda }+k:\lambda ,\mu \in \sigma (S)\right\} \subset \mathbb{R}.$ Hence each spectral value of $ \varphi _{S,k}$ is in an approximate point value. 
Let $\lambda ,\mu \in \sigma (S).$ Then there exists a sequence $(X_{n})$ of operators of norm one such that $\left\Vert SX_{n}S^{-1}+S^{-1}X_{n}S+kX_{n}\right\Vert \rightarrow \left\vert \frac{\lambda }{\mu }+\frac{\mu }{\lambda } +k\right\vert .$ Thus $k+2=\inf_{\left\Vert X\right\Vert =1}\left\Vert SXS^{-1}+S^{-1}XS+kX\right\Vert \leq \left\vert \frac{\lambda }{\mu }+\frac{ \mu }{\lambda }+k\right\vert .$ \end{proof} \begin{remark} In the case where $k\geq 0$ and $\dim \mathscr{H}=2$, the inclusion given in the above proposition becomes an equality. Indeed, let $S$ be an invertible self-adjoint operator in $\mathbb{B}(\mathscr{H}),$ and let $\lambda $ and $\mu $ be the eigenvalues of $S$ such that $\left\vert \frac{\lambda }{\mu }+\frac{\mu }{\lambda }+k\right\vert \geq k+2.$ By a simple computation, we obtain \begin{equation*} \forall X\in \mathbb{B}(\mathscr{H}),\ SXS^{-1}+S^{-1}XS+kX=\left( \begin{array}{cc} k+2 & \frac{\lambda }{\mu }+\frac{\mu }{\lambda }+k \\ \frac{\lambda }{\mu }+\frac{\mu }{\lambda }+k & k+2 \end{array} \right) \circ X\,. \end{equation*} Since the matrix $\left( \begin{array}{cc} \frac{1}{k+2} & 1\left/ \left( \frac{\lambda }{\mu }+\frac{\mu }{\lambda } +k\right) \right. \\ 1\left/ \left( \frac{\lambda }{\mu }+\frac{\mu }{\lambda }+k\right) \right. & \frac{1}{k+2} \end{array} \right) $ is positive semidefinite, using Schur's theorem we obtain \begin{equation*} \forall X\in \mathbb{B}(\mathscr{H}),\ \left\Vert \left( \begin{array}{cc} \frac{1}{k+2} & 1\left/ \left( \frac{\lambda }{\mu }+\frac{\mu }{\lambda } +k\right) \right. \\ 1\left/ \left( \frac{\lambda }{\mu }+\frac{\mu }{\lambda }+k\right) \right. & \frac{1}{k+2} \end{array} \right) \circ X\right\Vert \leq \frac{1}{k+2}\left\Vert X\right\Vert\,. \end{equation*} Therefore \begin{equation*} \forall X\in \mathbb{B}(\mathscr{H}),\ \left\Vert \left( \begin{array}{cc} k+2 & \frac{\lambda }{\mu }+\frac{\mu }{\lambda }+k \\ \frac{\lambda }{\mu }+\frac{\mu }{\lambda }+k & k+2 \end{array} \right) \circ X\right\Vert \geq (k+2)\left\Vert X\right\Vert\,. \end{equation*} \end{remark} \begin{conjecture}\label{conj} Let $k$ be a real number such that $0\leq k\leq 2.$ Then for every natural number $n$ and all nonzero numbers $\lambda _{1},\dots,\lambda _{n}$ such that $\left\vert \frac{\lambda _{i}}{\lambda _{j}}+\frac{\lambda _{j}}{ \lambda _{i}}+k\right\vert \geq k+2$ for $i,j=1, \dots, n$, the matrix $\left( \frac{\lambda _{i}\lambda _{j}}{\lambda _{i}^{2}+\lambda _{j}^{2}+k\lambda _{i}\lambda _{j}}\right) _{1\leq i,j\leq n}$ is positive. \end{conjecture} \noindent Furthermore, \begin{theorem} Assume that Conjecture \ref{conj} is valid and $\dim \mathscr{H}=n$. Then for every number $k$ such that $0\leq k\leq 2$, $$\mathfrak{D}_{k}(\mathscr{H})=\left\{ \alpha S:\alpha \in \mathbb{C}^*,\ S\in \mathscr{S}_{0}(\mathscr{H})\text{, }\left\vert \frac{\lambda }{\mu }+\frac{\mu }{ \lambda }+k\right\vert \geq k+2,\ \lambda ,\mu \in \sigma (S)\right\} .$$ \end{theorem} \begin{proof} Using Proposition \ref{above}, it remains to prove that $$\left\{ \alpha S:\alpha \in \mathbb{C}^*,\ S\in \mathscr{S}_{0}(\mathscr{H})\text{, }\left\vert \frac{\lambda }{\mu }+\frac{\mu }{\lambda }+k\right\vert \geq k+2,\ \lambda ,\mu \in \sigma (S)\right\} \subset \mathfrak{D}_{k}(\mathscr{H}).$$ This follows by the same argument as in the Remark above. \end{proof} Finally, we present new variants of the C-P-R inequality.
\begin{theorem} \label{t2} Let $S \in \mathfrak{I}(\mathscr{H})$ and $X, Y \in \mathfrak{J}_{\left|\left|\left| .\right|\right|\right|}.$ The following inequalities hold, and each is equivalent to the C-P-R inequality for unitarily invariant norms: \begin{eqnarray}\label{mos1} {\rm (i)} \left|\left|\left| \left(SYS^{-1}+S^{* -1}YS^*\right) \oplus \left(S^*XS^{* -1}+S^{-1}XS\right)\right|\right|\right| \geq 2 \left|\left|\left| X\oplus Y\right|\right|\right|\,; \end{eqnarray} \begin{eqnarray}\label{mos2} {\rm (ii)} \left|\left|\left| \left(SYS^{* -1}+S^{* -1}YS\right) \oplus \left(S^*XS^{-1}+S^{-1}XS^*\right)\right|\right|\right| \geq 2 \left|\left|\left| X\oplus Y\right|\right|\right|\,. \end{eqnarray} \end{theorem} \begin{proof} (i) Clearly $\left[\begin{array}{cc} 0 & S \\ S^* & 0 \end{array}\right]$ is a self-adjoint operator in $\mathbb{B}(\mathscr{H}\oplus\mathscr{H})$ and $\left[\begin{array}{cc} 0 & S \\ S^* & 0 \end{array}\right]^{-1}=\left[\begin{array}{cc} 0 & S^{* -1} \\ S^{-1} & 0 \end{array}\right]$. It follows from the C-P-R inequality for unitarily invariant norms that \begin{align*} \left|\left|\left|\left[\begin{array}{cc} 0 & S \\ S^* & 0 \end{array}\right] \left[\begin{array}{cc} X & 0 \\ 0 & Y \end{array}\right] \left[\begin{array}{cc} 0 & S \\ S^* & 0 \end{array}\right]^{-1} + \left[\begin{array}{cc} 0 & S \\ S^* & 0 \end{array}\right]^{-1}\left[\begin{array}{cc} X & 0 \\ 0 & Y \end{array}\right]\left[\begin{array}{cc} 0 & S \\ S^* & 0 \end{array}\right] \right|\right|\right|\\ \geq 2 \left|\left|\left|\left[\begin{array}{cc} X & 0 \\ 0 & Y \end{array}\right] \right|\right|\right|\,, \end{align*} whence \begin{align*} \left|\left|\left|\left[\begin{array}{cc} SYS^{-1}+S^{* -1}YS^* & 0 \\ 0 & S^*XS^{* -1}+S^{-1}XS \end{array}\right]\right|\right|\right| \geq 2 \left|\left|\left|\left[\begin{array}{cc} X & 0 \\ 0 & Y \end{array}\right] \right|\right|\right|\,, \end{align*} which is indeed \eqref{mos1}. Now assume that \eqref{mos1} holds for all $X, Y \in \mathfrak{J}_{\left|\left|\left| .\right|\right|\right|}$ and $S \in \mathfrak{I}(\mathscr{H})$. To get \eqref{2} for unitarily invariant norms, let $S$ be self-adjoint, take $Y=X$ and use the fact that, by the Fan dominance principle, the two inequalities $|||A|||\leq |||B|||$ and $|||A\oplus A|||\leq |||B\oplus B|||$ are equivalent for all unitarily invariant norms (see \cite{Ki}).\\ (ii) To get inequality \eqref{mos2}, use the same argument as in (i) with the matrix $\left[\begin{array}{cc} 0 & X \\ Y & 0 \end{array}\right]$ and note that $|||X\oplus Y|||=\left|\left|\left|\left[\begin{array}{cc} 0 & X \\ Y & 0 \end{array}\right]\right|\right|\right|$. \end{proof} \begin{corollary} (i) If $S \in \mathfrak{I}(\mathscr{H})$ and $X \in \mathbb{B}(\mathscr{H})$, then the following inequality holds and is equivalent to the C-P-R inequality: \begin{eqnarray*} \max\{\Vert SXS^{-1}+S^{* -1}XS^* \Vert, \Vert S^*XS^{* -1}+S^{-1}XS \Vert\} \geq 2 \Vert X\Vert\,. \end{eqnarray*} (ii) If $S \in \mathfrak{I}(\mathscr{H})$ and $X, Y$ are in the Schatten $p$-class, then the following inequality holds and is equivalent to the C-P-R inequality for the Schatten $p$-norm: \begin{eqnarray*} \left|\left|\left| SXS^{-1}+S^{* -1}XS^*\right|\right|\right|_p^p + \left|\left|\left| S^*XS^{* -1}+S^{-1}XS\right|\right|\right|_p^p \geq 2^{p+1} \left|\left|\left| X\right|\right|\right|_p^p \,. \end{eqnarray*} \end{corollary} \begin{proof} Apply \eqref{mos1} with $Y=X$ and use the equalities \eqref{erf}. \end{proof} \end{document}
\begin{document} \title{Optimal existence classes and nonlinear-like dynamics in the linear heat equation in ${\mathbb R}^{d}$} \setcounter{footnote}{2} \makeatletter \begin{center} ${}^{1}$Mathematics Institute\\ University of Warwick \\ Gibbet Hill Rd, Coventry CV4 7AL, UK\\ {E-mail: [email protected]} \\ \mbox{} \\ ${}^{2}$Departamento de Matem\'atica Aplicada\\ Universidad Complutense de Madrid\\ 28040 Madrid, Spain\\ and \\ Instituto de Ciencias Matem\'aticas \\ CSIC-UAM-UC3M-UCM \footnote{${}^{*}$Partially supported by ICMAT Severo Ochoa project SEV-2015-0554 (MINECO)} \\ {E-mail: [email protected]} \end{center} \makeatother \begin{abstract} We analyse the behaviour of solutions of the linear heat equation in ${\mathbb R}^d$ for initial data in the classes $\mathcal{M}_\varepsilon({\mathbb R}^{{d}})$ of Radon measures with $\int_{{\mathbb R}^{d}}{\rm e}^{-\varepsilon|x|^2}\,{\rm d} |u_0|<\infty$. We show that these classes are in some sense optimal for local and global existence of non-negative solutions: in particular $\mathcal{M}_0({\mathbb R}^{{d}})=\cap_{\varepsilon>0}\mathcal{M}_\varepsilon({\mathbb R}^{{d}})$ consists precisely of those initial data for which a solution of the heat equation can be given for all time using the heat kernel representation formula. After considering properties of existence, uniqueness, and regularity for such initial data, which can grow rapidly at infinity, we go on to show that they give rise to properties associated more often with nonlinear models. We demonstrate the finite-time blowup of solutions, showing that the set of blowup points is the complement of a convex set, and that given any closed convex set there is an initial condition whose solution remains bounded precisely on this set at the `blowup time'. We also show that wild oscillations are possible from non-negative initial data as $t\to\infty$ (in fact we show that this behaviour is generic), and that one can prescribe the behaviour of $u(0,t)$ to be any real-analytic function $\gamma(t)$ on $[0,\infty)$. \end{abstract} \section{Introduction} \setcounter{equation}{0} In this paper we consider the linear heat equation posed on the whole space ${\mathbb R}^{d}$, with very general initial data, which may be either only locally integrable or even a Radon measure. For an appropriate class of initial data $u_0$, see e.g.\ \cite{Tychonov}, it is well known that solutions to this equation, \begin{equation} \label{eq:intro_heat_equation} u_t-\Delta u=0,\ x\in{\mathbb R}^{d},\ t>0,\qquad u(x,0)=u_0(x), \end{equation} can be written using the heat kernel as \begin{equation} \label{eq:intro_solution_heat_up_t=0} u(x,t)=S(t)u_{0} (x) := \frac{1}{(4\pi t)^{{d}/2}} \int_{{\mathbb R}^{d}} {\rm e}^{-|x-y|^2/4t} u_0(y) \, {\rm d} y , \quad x\in {\mathbb R}^{d}, \ t>0 . \end{equation} It turns out that the behaviour of the solutions in (\ref{eq:intro_solution_heat_up_t=0}) is significantly affected by the way the mass of the initial data is distributed in space. If the mass as $|x|\to \infty$ is not too large it is well known that the `mass' of the initial data moves to infinity and the solutions decay to zero in suitable norms.
For example, if $u_{0} \in L^{p}({\mathbb R}^{d})$ for some $1\leq p< \infty$ then classical estimates ensure that \begin{equation} \label{eq:heat_LpLq_estimates} \|u(t)\|_{L^{q}({\mathbb R}^{d})} \leq (4\pi t)^{-\frac{{d}}{2}(\frac{1}{p}-\frac{1}{q})} \|u_{0}\|_{L^{p}({\mathbb R}^{d})},\quad\mbox{for every } t>0 \mbox{ and }q\mbox{ with } p\leq q \leq \infty, \end{equation} which in particular implies that all solutions converge uniformly to zero on the whole of ${\mathbb R}^{d}$. In particular, for $u_{0} \in L^{1}({\mathbb R}^{d})$, since we also have \begin{displaymath} \int_{{\mathbb R}^{{d}}} u(x,t)\, {\rm d} x = \int_{{\mathbb R}^{d}} u_{0}(y)\, {\rm d} y, \quad t>0, \end{displaymath} it follows that for such $u_{0}$ the total mass is preserved but (from (\ref{eq:heat_LpLq_estimates})) the supremum tends to zero, i.e.\ the mass moves to infinity. It is also known that as $t\to \infty$, solutions asymptotically resemble the heat kernel $$K(x,t)=(4\pi t)^{-{d}/2} {\rm e}^{-|x|^2/4t},$$ see for example Section 1.1.4 in \cite{GigaGigaSaal}. The faster the initial data decays as $|x|\to \infty$, the higher the order of the asymptotics of the solution that are described by the heat kernel, see e.g.\ \cite{DuoanZuazua}. When the initial data is bounded, $u_{0} \in L^{\infty}({\mathbb R}^{{d}})$, the decay described above does not necessarily take place. In fact (\ref{eq:heat_LpLq_estimates}) reduces to \begin{displaymath} \|u(t)\|_{L^{\infty}({\mathbb R}^{d})} \leq \|u_{0}\|_{L^{\infty}({\mathbb R}^{d})}, \quad t>0, \end{displaymath} which does not in general imply any decay. For example, if $u_{0}\equiv1$ then $u(x,t)=1$ for every $t>0$; for any $R>0$ we can write \begin{displaymath} 1 = u(t, \chi_{B(0,R)}) + u(t, \chi_{{\mathbb R}^{{d}}\setminus B(0,R)} ), \end{displaymath} where $\chi_A$ denotes the characteristic function of the set $A$. Since $\chi_{B(0,R)} \in L^{1}({\mathbb R}^{d})\cap L^{\infty}({\mathbb R}^{d})$ the mass of $0\leq u(t, \chi_{B(0,R)}) $ escapes to infinity but, on the other hand, the mass of $\chi_{{\mathbb R}^{{d}}\setminus B(0,R)}$, diffused by $u(t, \chi_{{\mathbb R}^{{d}}\setminus B(0,R)} )$, moves `inwards' from infinity, and both balance precisely at every time. Hence, it turns out that the dynamics of the solutions (\ref{eq:intro_solution_heat_up_t=0}) of the heat equation (\ref{eq:intro_heat_equation}) for bounded initial data are much richer than for initial data with small mass at infinity. For example, the existence of one-dimensional bounded oscillations was proved in Section 8 of \cite{ColletEckman}, while bounded `wild' oscillations in any dimension were shown to exist in \cite{VZ2002} by a scaling method. It is worth noting that this scaling argument is also applied in \cite{VZ2002} to some nonlinear equations (porous medium, $p$-Laplacian, and scalar conservation laws). Indeed, this scaling argument allows one to show that for $L^{p}({\mathbb R}^{{d}})$ initial data, $1\leq p < \infty$, the solution of (\ref{eq:intro_heat_equation}) asymptotically approaches the heat kernel. The scaling argument was later extended to some nonlinear dissipative reaction-diffusion equations in \cite{CazenaveDW}. In this paper our goal is to consider some (optimal) classes of unbounded data that possess large mass at infinity.
In such a situation we show how the mechanism of mass moving inwards from infinity plays a dominant role in the structure and properties of solutions of (\ref{eq:intro_heat_equation}). It turns out that in this setting, solutions of (\ref{eq:intro_heat_equation}) show surprising dynamical behaviours more akin to what is expected in nonlinear equations. For example, in our class of `large' initial data finite-time blowup is possible. We completely characterise (non-negative) initial data for which the solution ceases to exist in some finite time; we determine the maximal existence time and characterise the blow-up points, which form the complement of a convex set. Hence we are able to construct non-negative initial data for which the solution exhibits regional or complete blow-up. One can even find solutions with a finite pointwise limit at every point in ${\mathbb R}^{{d}}$ at the maximal existence time, but that cannot be continued beyond this maximal time (`finite existence time without blowup'). In particular, we prove that given any closed convex set in ${\mathbb R}^{{d}}$, there exists an initial condition such that the solution remains bounded at the maximal existence time precisely on this set. Observe that most of this behaviour is characteristic of nonlinear non-dissipative problems, see e.g.\ \cite{QuittnerSouplet}. Our analysis includes and extends the classical example $u_0(x)={\rm e}^{A|x|^2}$, with $A>0$, for which the solution is given by \begin{displaymath} u(x,t) = \frac{T^{{d}/2}}{ (T-t)^{{d}/2}} {\rm e}^{\frac{|x|^2}{4(T-t)}}, \end{displaymath} with $T=\frac{1}{4A}$, which blows up at every point $x\in{\mathbb R}^{d}$ as $t\to T$. For those solutions that exist globally in time we characterise those that are unbounded, and also construct (non-negative) initial data such that the solution displays wild unbounded oscillations (cf.\ \cite{VZ2002}). For this, given any sequence of nonnegative numbers $\{\alpha_{k}\}_{k}$, we construct initial data and a sequence of times $t_{k} \to \infty$ such that for any $k\in {\mathbb N}$ there exists a subsequence $\{t_{k_{j}}\}_{j}$ with \begin{displaymath} u(0,t_{k_{j}}) \to \alpha_{k} \quad\mbox{as}\quad j\to \infty. \end{displaymath} We also show that this oscillatory behaviour is generic within a suitable (optimal) class of solutions. Notice that unbounded oscillatory behaviour is an outstanding feature of some nonlinear non-dissipative equations; see, for example, Theorem 6.2 in \cite{PolacikYanagida}, where some solutions are shown to satisfy \begin{displaymath} \liminf_{t \to \infty} \|u(t;u_{0})\|_{L^{\infty}({\mathbb R}^{{d}})} =0 \quad\mbox{and}\quad \limsup_{t \to \infty} \|u(t;u_{0})\|_{L^{\infty}({\mathbb R}^{{d}})} =\infty. \end{displaymath} All the nonlinear-like behaviour described above is caused by the large mass of the initial data at infinity, which is diffused by the solution of the heat equation and moved inwards into bounded regions of ${\mathbb R}^{{d}}$, so that its effect is felt at later times.
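For the reader's convenience we include a brief sketch of the computation behind the classical example recalled above; it is nothing more than completing the square in the heat kernel representation (\ref{eq:intro_solution_heat_up_t=0}). For $u_0(x)={\rm e}^{A|x|^2}$ with $A>0$ and $0<t<T=\frac{1}{4A}$ we have \begin{displaymath} \frac{|x-y|^{2}}{4t}-A|y|^{2}=\frac{1-4At}{4t}\left|y-\frac{x}{1-4At}\right|^{2}-\frac{A|x|^{2}}{1-4At}, \end{displaymath} so that \begin{displaymath} u(x,t)=\frac{1}{(4\pi t)^{{d}/2}}\int_{{\mathbb R}^{d}}{\rm e}^{-\frac{|x-y|^{2}}{4t}+A|y|^{2}}\,{\rm d} y =\frac{1}{(1-4At)^{{d}/2}}\,{\rm e}^{\frac{A|x|^{2}}{1-4At}} =\frac{T^{{d}/2}}{(T-t)^{{d}/2}}\,{\rm e}^{\frac{|x|^{2}}{4(T-t)}}, \end{displaymath} while for $t\geq T$ the integral defining $u(x,t)$ diverges at every $x\in{\mathbb R}^{d}$.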
Throughout the paper our analysis is based on the following spaces: we define the subclass $\mathcal{M}_\varepsilon({\mathbb R}^{d})$ of Radon measures $\mathcal{M}_{\rm loc}({\mathbb R}^{d})$ by setting $$ \mathcal{M}_{\varepsilon} ({\mathbb R}^{d}):=\left\{\mu \in \mathcal{M}_{\rm loc}({\mathbb R}^{d}):\ \int_{{\mathbb R}^{d}} {\rm e}^{-\varepsilon |x|^{2}} \, {\rm d} |\mu( x)| < \infty{\rm i}ght\}; $$ where $ |\mu|$ denotes the total variation of $\mu$, with the norm \begin{displaymath} \|\mu\|_{\mathcal{M}_\varepsilon ({\mathbb R}^{d})} := \left(\frac{\varepsilon}{\pi}{\rm i}ght)^{{d}/2} \int_{{\mathbb R}^{d}}{\rm e}^{-\varepsilon|x|^2} \,{\rm d} |\mu(x)|; {\rm e}nd{displaymath} i.e.\ $\mathcal{M}_\varepsilon ({\mathbb R}^{d})$ consists of Radon measures for which ${\rm e}^{-\varepsilon |x|^{2}} \in L^{1}({\rm d}|\mu|)$ and is a Banach space. [This set of measures was briefly mentioned in \cite{Aro71}, which considered only non-negative weak solutions of parabolic problems.] Since any locally integrable function $f \in L^{1}_{{\rm loc}} ({\mathbb R}^{{d}})$ defines the Radon measure $f \, {\rm d} x \in \mathcal{M}_{\rm loc} ({\mathbb R}^{d})$ the class above contains $$ L^1_\varepsilon({\mathbb R}^{d}):=\left\{f\in L^1_{\rm loc}({\mathbb R}^{d}):\ \int_{{\mathbb R}^{d}} {\rm e}^{-\varepsilon|x|^2}|f(x)|\,{\rm d} x<\infty{\rm i}ght\}. $$ These classes turn out to be optimal in several ways for non-negative solutions (\ref{eq:intro_solution_heat_up_t=0}) of (\ref{eq:intro_heat_equation}) which are now given by \begin{equation} \label{eq:intro_solution_heat_up_t=0_measure} u(x,t) = \frac{1}{(4\pi t)^{{d}/2}} \int_{{\mathbb R}^{d}} {\rm e}^{-|x-y|^2/4t} \, {\rm d} u_{0}(y) . {\rm e}nd{equation} First an initial condition in $\mathcal{M}_\varepsilon ({\mathbb R}^{d})$ gives rise to a (classical) solution of (\ref{eq:intro_heat_equation}) defined for $0<t<T(\varepsilon)=\frac{1}{4\varepsilon}$. Conversely for any non-negative solution (\ref{eq:intro_solution_heat_up_t=0}) of (\ref{eq:intro_heat_equation}) that is finite at some $(x,t)$ then the initial data must belong to $\mathcal{M}_{1/4t}({\mathbb R}^{d})$. As a consequence a non-negative initial condition in $\mathcal{M}_{\rm loc}({\mathbb R}^{d})$ gives rise to a globally defined solution if and only if it belongs to \begin{displaymath} \mathcal{M}_0 ({\mathbb R}^{d}) :=\bigcap_{\varepsilon>0}\mathcal{M}_\varepsilon ({\mathbb R}^{d}). {\rm e}nd{displaymath} Within this class of initial data we also show that a non-negative solution is bounded for some $t_{0}>0$ (and hence for all $t>0$) if and only if the initial data is a uniform measure in the sense that \begin{displaymath} \sup_{x \in {\mathbb R}^{d}} \ \int_{B(x,1)} {\rm d} |u_{0}(y)| < \infty . {\rm e}nd{displaymath} Finally we show that a non-negative solution is bounded on sets of the form $|x|^{2}/t \leq R$, with $R>0$, if and only if \begin{displaymath} \sup_{\varepsilon >0} \| u_{0} \|_{ \mathcal{M}_{\varepsilon} ({\mathbb R}^{{d}})} < \infty . {\rm e}nd{displaymath} In contrast, if $$ \varepsilon_0(u_{0})=\inf\{\varepsilon>0:\ 0\leq u_0\in\mathcal{M}_\varepsilon({\mathbb R}^{d}) \}>0 $$ then the solution will exists only up to $T= T(u_{0})= \frac{1}{4\varepsilon_0}$ and cannot be continued beyond this time at any point. The points $x$ at which the solution has a finite limit as $t\to T$ are characterised by a condition on the translated measure, namely $\tau_{-x} u_{0}\in\mathcal{M}_{\varepsilon_0} ({\mathbb R}^{d})$, and they must form a convex set. 
Conversely, as mentioned above, for any chosen closed convex subset of ${\mathbb R}^{d}$, there exists some $u_{0}\geq 0$ such that the limit of the solution as $t\to T$ is finite precisely on this set. In particular there are initial conditions such that $\lim_{t\to T}u(x,t)<\infty$ for every $x\in{\mathbb R}^{d}$ but the solution cannot be defined past time $T$. Large initial data can also exhibit other unusual properties not normally associated with the heat equation. For example, observe that for any $\omega \in {\mathbb R}^{{d}}$ the function $\varphi(x)= {\rm e}^{\omega\cdot x} \in L^{1}_{0}({\mathbb R}^{{d}}) :=\bigcap_{\varepsilon>0}L^1_\varepsilon({\mathbb R}^{d})$ satisfies $-\Delta \varphi = -|\omega|^{2} \varphi$, while $\phi(x) = {\rm e}^{{\rm i}\omega\cdot x} \in L^{1}_{0}({\mathbb R}^{{d}})$ satisfies $-\Delta \phi = |\omega|^{2} \phi$. It follows that in this setting the spectrum of the Laplacian is the whole of ${\mathbb R}$, \begin{displaymath} \sigma_{L^{1}_{0}({\mathbb R}^{{d}})} (-\Delta) = {\mathbb R}, \end{displaymath} and that for any $\omega\in{\mathbb R}^{{d}}$ the function \begin{displaymath} u(x,t) = {\rm e}^{|\omega|^2 t + \omega\cdot x}, \quad x \in {\mathbb R}^{{d}}, \ t >0, \end{displaymath} is a globally-defined solution of (\ref{eq:intro_heat_equation}) in $L^{1}_{0}({\mathbb R}^{{d}})$; the exponential growth rate of such solutions can be arbitrarily large. The paper is organized as follows. In Section \ref{sec:radon-measures} we recall some basic properties of Radon measures. In Section \ref{sec:existence_regularity} we show that for an initial condition in $\mathcal{M}_{\varepsilon}({\mathbb R}^{{d}})$ the integral expression (\ref{eq:intro_solution_heat_up_t=0_measure}) defines a classical solution of the heat equation that attains the initial data in the sense of measures. Conversely, we show that if (\ref{eq:intro_solution_heat_up_t=0_measure}) is finite at some $(x,t)$ for some non-negative measure $u_{0}$, then $u_{0}$ must belong to some $\mathcal{M}_{\varepsilon}({\mathbb R}^{{d}})$ space. In Section \ref{sec:uniqueness} we tackle the problem of uniqueness. In Section \ref{sec:blowup} we discuss and characterise the non-negative solutions that cease to exist in finite time, determining both the blow-up time $T$ and the points at which the solution has a finite limit as $t\to T$. In Section \ref{sec:asympt-behav-heat} we discuss the long-time behaviour of global solutions, showing in particular wild unbounded oscillations for some initial data; we show that this behaviour is generic (in an appropriate sense). Allowing for sign-changing solutions, we also show in that section how to obtain solutions with any prescribed behaviour in time at $x=0$. Finally, in Section \ref{sec:extensions-other-probl}, we briefly discuss other problems that can be dealt with using the same techniques. Appendix \ref{sec:some-auxil-results} contains some required technical results. \section{Radon measures on ${\mathbb R}^{{d}}$} \label{sec:radon-measures} \setcounter{equation}{0} In this section we will recall some basic results on Radon measures that will be used throughout the rest of the paper; details can be found in \cite{BeneCzaja,Folland,FritzRoy,GiaqModicaSoucek}. A Radon measure in ${\mathbb R}^{d}$ is a regular Borel measure assigning finite measure to each compact set. The set of all Radon measures in ${\mathbb R}^{d}$ is denoted $\mathcal{M}_{\rm loc} ({\mathbb R}^{d})$.
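As a simple illustration of the classes $\mathcal{M}_\varepsilon({\mathbb R}^{d})$ defined in the Introduction (the particular measures appearing here are chosen purely by way of example), note that any locally finite sum of point masses is a Radon measure: if $x_{k}\in{\mathbb R}^{d}$ with $|x_{k}|\to\infty$, $c_{k}\geq 0$, and $\delta_{x_{k}}$ denotes the Dirac mass at $x_{k}$, then $\mu=\sum_{k}c_{k}\delta_{x_{k}}\in\mathcal{M}_{\rm loc}({\mathbb R}^{d})$, and \begin{displaymath} \mu\in\mathcal{M}_{\varepsilon}({\mathbb R}^{d})\quad\mbox{if and only if}\quad \sum_{k}c_{k}\,{\rm e}^{-\varepsilon|x_{k}|^{2}}<\infty , \end{displaymath} so that membership in $\mathcal{M}_\varepsilon({\mathbb R}^{d})$ is determined entirely by the behaviour of $\mu$ at infinity.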
Radon measures arise as the natural representation of linear functionals on the set $C_c({\mathbb R}^d)$ of real-valued functions of compact support in two distinct settings. \begin{theorem} If $L{\colon}C_{c}({\mathbb R}^{d}) \to {\mathbb R}$ is linear and positive, i.e.\ $L(\varphi)\geq0$ for $0\leq \varphi \in C_{c}({\mathbb R}^{d})$, then there exists a (unique) non-negative Radon measure $\mu \in \mathcal{M}_{\rm loc} ({\mathbb R}^{d})$ such that $$ L(\varphi) = \int_{{\mathbb R}^{d}} \varphi \, {\rm d}\mu\qquad\mbox{for every}\quad\varphi \in C_{c}({\mathbb R}^{d}). $$ {\rm e}nd{theorem} A similar result holds if positivity is replaced by continuity, in the following sense: we equip $C_{c}({\mathbb R}^{d})$ with the final (linear) topology associated with the inclusions \begin{displaymath} C_{c}(K) \hookrightarrow C_{c}({\mathbb R}^{d}), \qquad K {\subset\subset \kern2pt} {\mathbb R}^{{d}} {\rm e}nd{displaymath} where, for each compact set $K \subset {\mathbb R}^{{d}}$ we consider the sup norm in $ C_{c}(K)$. More concretely, a sequence $\{\varphi_{j}\}_{j} $ in $C_{c}({\mathbb R}^{d})$ converges to $\varphi \in C_{c}({\mathbb R}^{d})$, iff there exists a compact $K\subset {\mathbb R}^{d}$ such that ${\rm supp}(\varphi_{j} ) \subset K$ for all $j\in {\mathbb N}$ and $\varphi_{j} \to \varphi$ uniformly in $K$. A linear map $L{\colon}C_c({\mathbb R}^N)\to{\mathbb R}$ is then continuous if for every compact set $K\subset {\mathbb R}^{d}$ there exists a constant $C_{K}$ such that for every $\varphi \in C_{c}({\mathbb R}^{d})$ with support in $K$ \begin{displaymath} |L(\varphi)| \leq C_{K} \sup_{x\in K} |\varphi(x)| . {\rm e}nd{displaymath} \begin{theorem} If $L{\colon}C_{c}({\mathbb R}^{d}) \to {\mathbb R}$ is linear and continuous (in the sense described above) then there exists a (unique, signed) Radon measure $\mu \in \mathcal{M}_{\rm loc} ({\mathbb R}^{d})$ such that \begin{equation} \label{eq:Radon_as_linear} L(\varphi) = \int_{{\mathbb R}^{d}} \varphi \, {\rm d} \mu\qquad\mbox{for every}\quad\varphi \in C_{c}({\mathbb R}^{d}). {\rm e}nd{equation} {\rm e}nd{theorem} As a consequence of this second theorem the set of Radon measures can be characterised as the dual space of $C_c({\mathbb R}^{d})$, \begin{displaymath} \mathcal{M}_{\rm loc} ({\mathbb R}^{d}) = \Big(C_{c}({\mathbb R}^{d})\Big)', {\rm e}nd{displaymath} and we typically identify $L\in(C_c({\mathbb R}^{d}))'$ with the corresponding Radon measure $\mu$ from (\ref{eq:Radon_as_linear}). In this way we can write $$ \langle\mu, \varphi\rangle = {\rm d}isplaystyle \int_{{\mathbb R}^{d}} \varphi \, {\rm d} \mu\qquad\mbox{for every}\quad \varphi \in C_{c}({\mathbb R}^{d}). $$ Notice that, in particular, \begin{displaymath} L^{1}_{{\rm loc}} ({\mathbb R}^{{d}}) \subset \mathcal{M}_{\rm loc} ({\mathbb R}^{d}) {\rm e}nd{displaymath} as we identify $f \in L^{1}_{{\rm loc}} ({\mathbb R}^{{d}})$ with the measure $f \, {\rm d} x \in \mathcal{M}_{\rm loc} ({\mathbb R}^{d})$. Any Radon measure $\mu \in \mathcal{M}_{\rm loc} ({\mathbb R}^{d})$ can be (uniquely) split as the difference of two non-negative, mutually singular, Radon measures $\mu = \mu^{+} - \mu^{-}$ (the `Jordan decomposition' of $\mu$). Then we can define the Radon measure $|\mu|$, the `total variation of $\mu$', by setting \begin{displaymath} |\mu|: = \mu^{+} + \mu^{-}. 
{\rm e}nd{displaymath} Then for every $\varphi \in C_{c}({\mathbb R}^{d})$ and $\mu \in \mathcal{M}_{\rm loc} ({\mathbb R}^{d})$ we have \be{dagger} \left|\int_{{\mathbb R}^{d}} \varphi \, {\rm d} \mu {\rm i}ght| \leq \int_{{\mathbb R}^{d}} |\varphi| \, {\rm d}|\mu| . {\rm e}e Finally we recall the definition of measures of bounded total variation. Consider the space $C_{0}({\mathbb R}^{d})$ of continuous functions converging to $0$ as $|x|\to \infty$ with the $\sup$ norm ($C_{c}({\mathbb R}^{d})$ is dense in this space). \begin{theorem} A linear mapping $L{\colon}C_{0}({\mathbb R}^{d}) \to {\mathbb R}$ is continuous, iff there exists a (signed) Radon measure $\mu \in \mathcal{M}_{\rm loc} ({\mathbb R}^{d})$ such that $|\mu|({\mathbb R}^{d})<\infty$ and \begin{displaymath} L(\varphi) = \int_{{\mathbb R}^{d}} \varphi \, {\rm d} \mu\qquad\mbox{for every}\quad\varphi \in C_{0}({\mathbb R}^{d}). {\rm e}nd{displaymath} {\rm e}nd{theorem} The quantity $\|\mu\|_{{\rm BTV}} = |\mu|({\mathbb R}^{d}) $ is the {\rm e}mph{total variation} of $\mu$ and is the norm of the functional $L$. In other words \begin{displaymath} \mathcal{M}_{{\rm BTV}} ({\mathbb R}^{d}) = \Big(C_{0}({\mathbb R}^{d})\Big)' {\rm e}nd{displaymath} is the Banach space of Radon measures with bounded total variation. It is then immediate that $L^{1}({\mathbb R}^{d}) \subset \mathcal{M}_{{\rm BTV}} ({\mathbb R}^{d})$, isometrically, and $\mathcal{M}_{{\rm BTV}} ({\mathbb R}^{d}) \subset \mathcal{M}_{{\rm loc}} ({\mathbb R}^{d})$. We discuss solutions of the heat equation with initial data in $\mathcal{M}_{\rm BTV} ({\mathbb R}^{d})$ in Lemma \ref{lem:heat_total_variation}. Note that the set of Radon measures is therefore distinct from the class of tempered distributions on ${\mathbb R}^{d}$, which are continuous linear functionals on the Schwarz class ${\mathscr S}({\mathbb R}^{\rm d})$: such functions are smoother than functions in $C_c({\mathbb R}^{\rm d})$ but satisfy less stringent growth conditions, so neither class is contained in the other. Recall that $\mathscr{S}({\mathbb R}^{d})$ is made up of $C^{\infty}({\mathbb R}^{{d}})$ functions such that for all multi-indices $\alpha, \beta$ \begin{displaymath} |x^{\alpha}| |D^{\beta} \varphi (x)| \to 0 \quad\mbox{as}\quad |x| \to \infty . {\rm e}nd{displaymath} The family of seminorms \begin{displaymath} p_{\alpha,\beta}(\varphi) = \sup_{x \in {\mathbb R}^{{d}}} (1+ |x^{\alpha}|) |D^{\beta} \varphi (x)| {\rm e}nd{displaymath} defines a locally-convex topology on ${\mathscr S}({\mathbb R}^{d})$, and the tempered distributions are the dual space $\mathscr{S}'({\mathbb R}^{\rm d})$. A tempered distribution $L\in \mathscr{S}'({\mathbb R}^{d})$ has order $(m,n)\in {\mathbb N} \times {\mathbb N}$ if for all $\varphi \in \mathscr{S}({\mathbb R}^{d})$ and some constant $c>0$ \begin{displaymath} |\langle L, \varphi \rangle| \leq c p_{\alpha,\beta}(\varphi) {\rm e}nd{displaymath} with $|\alpha|= m$ and $|\beta| = n$. Since $(1+ x^{\alpha}) \varphi (x) \in \mathscr{S}({\mathbb R}^{d})$ for every $\varphi \in \mathscr{S}({\mathbb R}^{\rm d})$ and multi-index $\alpha$ and $\mathscr{S}({\mathbb R}^{d})$ is dense in $C_{0}({\mathbb R}^{\rm d})$, it follows that if $L\in \mathscr{S}'({\mathbb R}^{d})$ has order $(m,0)$ then $(1+ |x|^{2})^{-m/2} L$ is an element of $\mathcal{M}_{{\rm BTV}} ({\mathbb R}^{d})$. 
That is, $L$ can be identified with a measure $\mu \in \mathcal{M}_{{\rm loc}} ({\mathbb R}^{{d}})$ such that \be{Cmcond} \int_{{\mathbb R}^{{d}}} (1+|x|^{2})^{-m/2} \, {\rm d} |\mu(x)| < \infty, {\rm e}e since \begin{displaymath} |\langle L, \varphi \rangle| = |\langle (1+ |x|^{2})^{-m/2} L, (1+ |x|^{2})^{m/2}\varphi \rangle|\leq c p_{\alpha,0}(\varphi) \leq c \sup_{x \in {\mathbb R}^{\ }} |\xi (x)|, {\rm e}nd{displaymath} with $\xi(x)= (1+ |x|^{2})^{m/2}\varphi (x) \in C_{0}({\mathbb R}^{\rm d})$ and $|\alpha|=m$. Let us denote by $\mathscr{C}_{m}({\mathbb R}^{\rm d})$ the collection of all measures $\mu$ that satisfy (\ref{Cmcond}). Then any such $\mu$ defines a tempered distribution of order $(m,0)$ since for any $\varphi \in \mathscr{S}({\mathbb R}^{\rm d})$ we have \begin{align*} \left| \int_{{\mathbb R}^{{d}}} \varphi (x) \, d \mu(x) {\rm i}ght| & = \left|\int_{{\mathbb R}^{{d}}} (1+ |x|^{2})^{m/2}\varphi(x) \, (1+ |x|^{2})^{-m/2}\, {\rm d} \mu(x) {\rm i}ght|\\ &\leq p_{\alpha, 0}(\varphi) \int_{{\mathbb R}^{{d}}} (1+|x|^{2})^{-m/2} \, {\rm d} |\mu(x)| {\rm e}nd{align*} with $|\alpha|=m$. Hence $\mathscr{C}_{m}({\mathbb R}^{\rm d})$ is precisely the class of tempered distributions of order $(m,0)$. \section{Initial data in $\mathcal{M}_{\varepsilon}({\mathbb R}^{{d}})$: existence and regularity} \label{sec:existence_regularity} \setcounter{equation}{0} Throughout this paper we consider the Cauchy problem \begin{equation} \label{eq:heat_equation} u_{t} - {\rm d}isplaystyleelta u =0,\ x\in{\mathbb R}^{d},\ t>0,\qquad u(x,0)= u_{0}(x) , {\rm e}nd{equation} whose solutions we expect to be given in terms of the heat kernel by \begin{displaymath} u(x,t; u_{0})=S(t)u_{0} (x) = \frac{1}{(4\pi t)^{{d}/2}} \int_{{\mathbb R}^{d}} {\rm e}^{-|x-y|^2/4t} u_0(y) \, {\rm d} y , {\rm e}nd{displaymath} if $u_0\in L^1_{\rm loc}({\mathbb R}^{d})$ or, more generally, if $u_{0} \in \mathcal{M}_{\rm loc} ({\mathbb R}^{d})$ is a Radon measure, by \begin{equation} \label{eq:solution_heat_up_t=0} u(x,t; u_{0}) = S(t)u_{0} (x) = \frac{1}{(4\pi t)^{{d}/2}} \int_{{\mathbb R}^{d}} {\rm e}^{-|x-y|^2/4t} \, {\rm d} u_{0}(y) . {\rm e}nd{equation} Of course, it is entirely natural to consider sets of measures as initial conditions for the heat equation, since the heat kernel, which is smooth for all $t>0$, is precisely the solution when $u_0$ is the ${\rm d}elta$ measure. Notice that from {\rm e}qref{eq:solution_heat_up_t=0} and {\rm e}qref{dagger} we immediately obtain \begin{displaymath} |S(t) u_{0}| \leq S(t) |u_{0}|, \qquad t>0, \quad u_{0} \in \mathcal{M}_{\rm loc} ({\mathbb R}^{d}) . {\rm e}nd{displaymath} We start with some estimates for the expression in (\ref{eq:solution_heat_up_t=0}) which show that the solution can be essentially estimated by its value at $x=0$. \begin{lemma} \label{lem:estimate_from_x=0} If $u_0\in \mathcal{M}_{\rm loc}({\mathbb R}^{d})$ and $u(x,t)$ is given by {\rm e}qref{eq:solution_heat_up_t=0} then for any $a>1$ we have \begin{equation}\label{bound_above_x=0} |u(x,t,u_{0})|\le c_{{d},a}\,u(0,at,|u_{0}|)\, {\rm e}^{\frac{|x|^{2}}{4(a-1)t}}\qquad \mbox{for all}\quad x\in{\mathbb R}^{d},\ t>0, {\rm e}nd{equation} where $c_{{d},z}:=z^{{d}/2}$ for any $z>0$. If in addition $0\leq u_{0}\in \mathcal{M}_{{\rm loc}} ({\mathbb R}^{d})$ then for any $0<b<1<a$ we have \begin{equation} \label{bound_below_above_x=0} c_{{d},b}\, u(0,bt)\, {\rm e}^{-\frac{|x|^{2}}{4(1-b)t}} \leq u(x,t) \leq c_{{d},a}\, u(0,at)\, {\rm e}^{\frac{|x|^{2}}{4(a-1)t}}\qquad \mbox{for all}\quad x\in{\mathbb R}^{d},\ t>0. 
{\rm e}nd{equation} {\rm e}nd{lemma} \begin{proof} For the upper bound we use the fact that for any $0<{\rm d}elta<1$, \begin{equation}\label{sqlower} |x-y|^{2} \geq |y|^{2} + |x|^{2} - 2|y||x| \geq (1-{\rm d}elta) |y|^{2} + (1-\frac{1}{{\rm d}elta}) |x|^{2}, {\rm e}nd{equation} from which it follows that $$ | u(x,t,u_{0})| \leq {\rm e}^{(\frac{1}{{\rm d}elta}-1) \frac{|x|^{2}}{4t}} \left( \frac{1}{(4\pi t)^{{d}/2}} \int_{{\mathbb R}^{d}} {\rm e}^{- (1-{\rm d}elta) \frac{|y|^2}{4t}} \, {\rm d} |u_0(y)| {\rm i}ght); $$ taking $a= \frac{1}{1-{\rm d}elta} >1$ yields (\ref{bound_above_x=0}). For the lower bound when $u_{0}\geq 0$, we argue similarly, now using the fact that for any ${\rm d}elta >0$, $$ |x-y|^{2} \leq |x|^{2} + |y|^{2} + 2 |x||y| \leq (1+{\rm d}elta) |x|^{2} + (1 + \frac{1}{{\rm d}elta}) |y|^{2}; $$ we obtain \begin{displaymath} u(x,t) \geq {\rm e}^{ -(1+{\rm d}elta) \frac{|x|^{2}}{4t}} \left( \frac{1}{(4\pi t)^{{d}/2}} \int_{{\mathbb R}^{d}} {\rm e}^{- (1+\frac{1}{{\rm d}elta}) \frac{|y|^2}{4t}} \, {\rm d} u_0(y) {\rm i}ght) {\rm e}nd{displaymath} and then take $b= \frac{{\rm d}elta}{1+{\rm d}elta} <1$. {\rm e}nd{proof} We now introduce some classes of initial data that are particularly suited to an analysis of solutions of the heat equation: for $\varepsilon >0$ we define \begin{equation} \label{eq:space_L1eps} L^{1}_{\varepsilon} ({\mathbb R}^{d}):=\left\{f\in L^1_{\rm loc}({\mathbb R}^{d}):\ \int_{{\mathbb R}^{d}} {\rm e}^{-\varepsilon |x|^{2}} |f(x)|\, {\rm d} x < \infty{\rm i}ght\}; {\rm e}nd{equation} with the norm \begin{equation} \label{eq:norm_L1eps} \|f\|_{L^1_\varepsilon ({\mathbb R}^{d})} := \left(\frac{\varepsilon}{\pi}{\rm i}ght)^{{d}/2} \int_{{\mathbb R}^{d}}{\rm e}^{-\varepsilon|x|^2}|f(x)|\,{\rm d} x {\rm e}nd{equation} for which a positive constant function has norm equal to itself. For the case of measures for $\varepsilon >0$ we define \begin{equation} \label{eq:space_Meps} \mathcal{M}_{\varepsilon} ({\mathbb R}^{d}):=\left\{\mu \in \mathcal{M}_{\rm loc}({\mathbb R}^{d}):\ \int_{{\mathbb R}^{d}} {\rm e}^{-\varepsilon |x|^{2}} \, {\rm d} |\mu( x)| < \infty{\rm i}ght\}; {\rm e}nd{equation} i.e. ${\rm e}^{-\varepsilon |x|^{2}} \in L^{1}(d|\mu|)$, with the norm \begin{equation} \label{eq:norm_Meps} \|\mu\|_{\mathcal{M}_\varepsilon ({\mathbb R}^{d})} := \left(\frac{\varepsilon}{\pi}{\rm i}ght)^{{d}/2} \int_{{\mathbb R}^{d}}{\rm e}^{-\varepsilon|x|^2} \,{\rm d} |\mu(x)| . {\rm e}nd{equation} Obviously $L^{1}_{\varepsilon} ({\mathbb R}^{d}) \subset \mathcal{M}_{\varepsilon} ({\mathbb R}^{d})$ isometrically, that is, if $f\in L^{1}_{\varepsilon} ({\mathbb R}^{d})$ then $\|f\|_{\mathcal{M}_\varepsilon ({\mathbb R}^{d})} = \|f\|_{L^1_\varepsilon ({\mathbb R}^{d})}$. Also note that $\mathcal{M}_{\varepsilon}({\mathbb R}^{{d}})$ and $L^{1}_{\varepsilon} ({\mathbb R}^{d})$ are increasing in $\varepsilon>0$ and if $\varepsilon_{1}<\varepsilon_{2}$ then for $\mu \in \mathcal{M}_{\varepsilon_{1}} ({\mathbb R}^{{d}})$ \begin{equation} \label{eq:increase_Meps_norm} \|\mu\|_{\mathcal{M}_{\varepsilon_{2}} ({\mathbb R}^{d})} \leq \left(\frac{\varepsilon_{2}}{\varepsilon_{1}}{\rm i}ght)^{{d}/2} \|\mu\|_{\mathcal{M}_{\varepsilon_{1}} ({\mathbb R}^{d})} . {\rm e}nd{equation} Finally $L^1_\varepsilon ({\mathbb R}^{d})$ and $\mathcal{M}_\varepsilon ({\mathbb R}^{d})$ with the norms (\ref{eq:norm_L1eps}) and (\ref{eq:space_Meps}) respectively, are Banach spaces, see Lemma \ref{lem:Meps_banach}. 
The following simple lemma demonstrates the relevance of the spaces $L^1_\varepsilon ({\mathbb R}^{d})$ and $\mathcal{M}_\varepsilon ({\mathbb R}^{d})$ to the heat equation. Note that the first part of the statement does not require that $u_0$ is non-negative. We will improve on the first part of this lemma in Proposition \ref{prop:semigroup_estimates}, obtaining bounds on $u(t)$ in the norm of $L^1_{\varepsilon(t)}({\mathbb R}^{{d}})$. \begin{lemma}\label{whyL1e} Let $u_0\in \mathcal{M}_\varepsilon({\mathbb R}^{d})$, set $T(\varepsilon)=1/4\varepsilon$, and let $u(x,t)$ be given by {\rm e}qref{eq:solution_heat_up_t=0}. Then for each $t\in (0, T(\varepsilon))$ we have $u(t) \in L^{1}_{{\rm d}elta} ({\mathbb R}^{d})$ for any ${\rm d}elta>\varepsilon(t):= \frac{1}{4(T(\varepsilon)-t)} = \frac{\varepsilon}{1-4\varepsilon t}$. Conversely, if $0\leq u_{0}\in \mathcal{M}_{{\rm loc}} ({\mathbb R}^{d})$ and $u(x,t) < \infty$ for some $x\in{\mathbb R}^{d}$, $t>0$ then $$ u_{0} \in \mathcal{M}_{\varepsilon}({\mathbb R}^{d}) \quad\mbox{for every}\quad \varepsilon > 1/4t . $$ {\rm e}nd{lemma} \begin{proof} Taking $u_0\in \mathcal{M}_\varepsilon({\mathbb R}^{d})$ we use the upper bound {\rm e}qref{bound_above_x=0} from Lemma \ref{lem:estimate_from_x=0} to obtain \begin{displaymath} \int_{{\mathbb R}^{d}} {\rm e}^{-{\rm d}elta |x|^{2}}|u(x,t)| \, {\rm d} x \leq c_{{d},a}\,u(0,at,|u_{0}|) \int_{{\mathbb R}^{d}} {\rm e}^{-({\rm d}elta - \frac{1}{4(a-1)t}) |x|^{2}} \, {\rm d} x , {\rm e}nd{displaymath} where we choose any $1<a<T(\varepsilon)/t$. Given such a choice of $a$, to ensure that the integral is finite we require ${\rm d}elta>\frac{1}{4(a-1)t}$. Noting that the right-hand side of this expression can be made arbitrarily close to $\frac{1}{4(T(\varepsilon)-t)}$ it follows that $u(t)\in L^1_{\rm d}elta({\mathbb R}^{d})$ for any ${\rm d}elta>\varepsilon(t):=1/4(T(\varepsilon)-t)=\varepsilon/(1-4\varepsilon t)$, as claimed. Conversely, from the lower bound in (\ref{bound_below_above_x=0}), if $0\leq u(x,t) < \infty$ for some $x\in{\mathbb R}^{d}$, $t>0$ then for any $0<b<1$ \begin{displaymath} u(0,bt) = \frac{1}{(4\pi bt)^{{d}/2}} \int_{{\mathbb R}^{d}} {\rm e}^{-|y|^2/4bt} \, {\rm d} u_0(y) <\infty, {\rm e}nd{displaymath} i.e.\ $u_{0} \in \mathcal{M}_{1/4bt}({\mathbb R}^{d})$. Since we can take any $0<b<1$, it follows that $u_0\in \mathcal{M}_\varepsilon({\mathbb R}^{d})$ for any $\varepsilon>1/4t$. {\rm e}nd{proof} We reserve the notation $T(\varepsilon)$ and $\varepsilon(t)$ in what follows for the functions defined in the statement of this lemma; for the latter this is something of an abuse of notation, since $\varepsilon(t)$ is really a function that depends on a particular choice of $\varepsilon$ (as well as $t$): \begin{equation}\label{epsilon-t} T(\varepsilon)=\frac{1}{4\varepsilon}\qquad\mbox{and}\qquad\varepsilon(t):=\frac{1}{4(T(\varepsilon)-t)} =\frac{\varepsilon}{1-4\varepsilon t},\quad 0\le t<T(\varepsilon). {\rm e}nd{equation} At something of an opposite extreme, the following lemma - which we will require many times in what follows - allows us to capture some of the ways in which any solution starting from a continuous function with compact support retains a trace of its initial data; more or less it satisfies the same decay as the heat kernel, $\sim t^{-{d}/2}{\rm e}^{-|x|^2/4t}$. 
\begin{lemma}\label{heat_solution_Cc} If $\varphi\in C_c({\mathbb R}^{d})$ with ${\rm supp}\,\varphi\subset B(0,R)$ then for any $0<{\rm d}elta <1$ and $t>0$ \begin{itemize} \item [\rm(i)] $ |S(t)\varphi(x)|\le\begin{cases} C_{\varphi}(t) {\rm e}^{-\gamma(t) |x|^2}&|x|\ge 2R/{\rm d}elta\\ \|\varphi\|_{L^\infty({\mathbb R}^{{d}}) } &|x|\le 2R/{\rm d}elta, {\rm e}nd{cases}$\\ where $$ C_{\varphi}(t) = \frac{{\rm e}^{-3(1-{\rm d}elta)R^2/4{\rm d}elta t}}{(4\pi t)^{{d}/2}} \|\varphi\|_{L^1({\mathbb R}^{{d}})}\qquad\mbox{and}\qquad\gamma(t)= \frac{(1-{\rm d}elta)^2}{4t}. $$ \item [\rm(ii)] $|S(t)\varphi(x)-\varphi (x) |\le\begin{cases} C_{\varphi}(t) {\rm e}^{-\gamma(t) |x|^2}&|x|\ge 2R/{\rm d}elta\\ \tilde C_{\varphi}(t) &|x|\le 2R/{\rm d}elta, {\rm e}nd{cases}$\\ with $C_{\varphi}(t)$ and $\gamma(t)$ as above and $\tilde C_{\varphi}(t) \to 0$ as $t\to 0$. \item [\rm(iii)] In particular, for any $\varepsilon>0$ and $0< T <T(\varepsilon)=\frac{1}{4\varepsilon}$ there exists $\gamma=\gamma(T,\varepsilon)>0$ such that \begin{displaymath} {\rm e}^{\varepsilon|x|^2}|S(t)\varphi (x)|\le C_{T,\varphi,\varepsilon}{\rm e}^{-\gamma|x|^2}, \quad x \in {\mathbb R}^{{d}} \quad\mbox{for every }t\in[0,T]. {\rm e}nd{displaymath} In addition, \begin{displaymath} {\rm e}^{\varepsilon|x|^{2}} \big (S(t) \varphi(x)-\varphi(x) \big) \to 0\quad\mbox{ uniformly in }{\mathbb R}^{d}\quad\mbox{as}\quad t\to 0. {\rm e}nd{displaymath} {\rm e}nd{itemize} {\rm e}nd{lemma} \begin{proof} For any $\varphi \in C_{c}({\mathbb R}^{d})$ with support in the ball $B(0,R)$, we have \begin{equation} \label{eq:solution_heat_C0} |S(t) \varphi(x)|\le S(t)|\varphi(x)| = \frac{1}{(4\pi t)^{{d}/2}} \int_{B(0,R)} {\rm e}^{-\frac{|x-y|^2}{4t}} |\varphi(y)| \,{\rm d} y; {\rm e}nd{equation} using again $$ |x-y|^{2} \geq (1-{\rm d}elta) |x|^{2} -\left(\frac{1}{{\rm d}elta}-1{\rm i}ght) |y|^{2} \geq (1-{\rm d}elta) |x|^{2} -\left(\frac{1}{{\rm d}elta}-1{\rm i}ght) R^{2} $$ for any $0<{\rm d}elta <1$, it follows that \begin{equation} \label{eq:point_bound_heat_C0} 0\leq S(t) |\varphi (x)| \leq \frac{ {\rm e}^{-(1-{\rm d}elta)\frac{|x|^2}{4t} + (\frac{1}{{\rm d}elta}-1) \frac{R^2}{4t}} }{(4\pi t)^{{d}/2}} \int_{B(0,R)} |\varphi(y)| \,{\rm d} y =\frac{{\rm e}^{-\frac{(1-{\rm d}elta)}{4t} (|x|^{2} - \frac{R^2}{{\rm d}elta})}}{(4\pi t)^{{d}/2}} I(\varphi), {\rm e}nd{equation} where $I(\varphi)=\|\varphi\|_{L^1({\mathbb R}^{{d}})}$. Now note that \begin{displaymath} |x|^{2} -\frac{R^{2}}{{\rm d}elta} \geq (1-{\rm d}elta) |x|^{2}+\frac{3R^2}{{\rm d}elta} {\rm e}nd{displaymath} if $|x|\ge 2R/{\rm d}elta$, and hence for any such $x$ we obtain $$ 0\leq |S(t) \varphi (x)| \leq\frac{{\rm e}^{-3(1-{\rm d}elta)R^2/4{\rm d}elta t}}{(4\pi t)^{{d}/2}} {\rm e}^{-\frac{(1-{\rm d}elta)^2}{4t}|x|^{2}} I(\varphi). $$ Since also $\|S(t)\varphi\|_{L^\infty({\mathbb R}^{{d}})}\le\|\varphi\|_{L^\infty({\mathbb R}^{{d}})}$ for all $t\ge0$, we get part (i). Now, observe that for $|x|\ge 2R/{\rm d}elta$ we get the same upper bound for $ |S(t) \varphi (x) -\varphi(x)|$ as above and since as $\varphi \in {\rm BUC}({\mathbb R}^{{d}})$ we know from e.g. \cite{Mora,HKMM96,L} that $S(t)\varphi-\varphi \to 0$ uniformly in ${\mathbb R}^{{d}}$ as $t\to 0$. Hence we get part (ii). 
Now fix $\varepsilon>0$ and $0<T<T(\varepsilon)$; we choose $0<\delta<1$ such that $$ \gamma:=\frac{(1-\delta)^2}{4T}-\varepsilon>0, $$ i.e.\ so that for all $0\le t\le T$ we have $(1-\delta)^2/4t\ge(1-\delta)^2/4T=\varepsilon+\gamma$; note that $\gamma$ and $\delta$ can be chosen explicitly in such a way that they depend only on $T$ and $\varepsilon$. Then parts (i) and (ii) give part (iii). \end{proof} Notice that in particular if $u_0\in\mathcal{M}_\varepsilon$ and $\varphi$ is as in the previous lemma then \begin{equation} \label{eq:u0_tested_with_heat} \int_{{\mathbb R}^{d}} S(t) \varphi\,{\rm d} u_{0} = \int_{{\mathbb R}^{d}} {\rm e}^{\varepsilon|x|^{2}} S(t) \varphi (x)\, {\rm e}^{-\varepsilon|x|^{2}} \,{\rm d} u_{0}(x) \end{equation} is well defined for all $0\leq t \leq T<T(\varepsilon)$. The next preparatory result shows that the solution of the heat equation for an initial condition that decays like a quadratic exponential preserves this sort of decay, but with a rate that degrades in time. \begin{lemma} \label{lem:estimate_fast_decay_initialdata} If $\varphi \in C_{0}({\mathbb R}^{{d}})$ with $|\varphi (x)| \leq A {\rm e}^{-\gamma|x|^{2}}$, $x\in {\mathbb R}^{{d}}$, then $u(t)= S(t) \varphi$ satisfies \begin{displaymath} |u(x,t)| \leq \frac{A}{(1+4 \gamma t)^{{d}/2}} {\rm e}^{-\frac{\gamma}{1+4 \gamma t} |x|^{2}}, \qquad x\in {\mathbb R}^{{d}}, \quad t>0. \end{displaymath} \end{lemma} \begin{proof} Note that completing the square yields \begin{displaymath} \frac{|x-y|^{2}}{4t} + \gamma |y|^{2} = \frac{1+4 \gamma t}{4t} \left|y- \frac{1}{1+4\gamma t} x\right|^{2} + \frac{\gamma |x|^{2}}{1+4\gamma t} \end{displaymath} and then \begin{displaymath} |u(x,t)| \leq \frac{A}{(4 \pi t)^{{d}/2}} {\rm e}^{-\frac{\gamma |x|^{2}}{1+4\gamma t} } \int_{{\mathbb R}^{{d}}} {\rm e}^{- \frac{1+4 \gamma t}{4t} \left|y- \frac{1}{1+4\gamma t} x\right|^{2}} \, {\rm d} y = \frac{A}{(4 \pi t)^{{d}/2}} {\rm e}^{-\frac{\gamma |x|^{2}}{1+4\gamma t} } \int_{{\mathbb R}^{{d}}} {\rm e}^{- \frac{1+4 \gamma t}{4t} |y|^{2}} \, {\rm d} y, \end{displaymath} and the estimate follows since the last integral is equal to $\left(\frac{4\pi t}{1+4\gamma t}\right)^{{d}/2}$. \end{proof} As a consequence, for any $u_{0}\in \mathcal{M}_{\varepsilon}({\mathbb R}^{{d}})$ and $\varphi$ that decays sufficiently fast, $u_{0}$ and $S(t)\varphi$ can be integrated against each other for some time, see (\ref{eq:u0_tested_with_heat}). In fact the following symmetry property holds. \begin{lemma}\label{lem:preparing_4_Fubini} Assume that $\mu\in \mathcal{M}_{\varepsilon}({\mathbb R}^{{d}})$ and $\phi \in C_{0}({\mathbb R}^{{d}})$ is such that $|\phi (x)| \leq A {\rm e}^{-\gamma|x|^{2}}$, $x\in {\mathbb R}^{{d}}$, with $\gamma > \varepsilon$. Then for every $0<t< T(\varepsilon) -T(\gamma) = \frac{1}{4\varepsilon}-\frac{1}{4\gamma}$ \begin{displaymath} \int_{{\mathbb R}^{{d}}} \int_{{\mathbb R}^{{d}}} K(x-y,t) |\phi(x)| \, {\rm d} x \, {\rm d} |\mu(y)| \leq (4\varepsilon t)^{-{d}/2} \|\mu\|_{\mathcal{M}_{\varepsilon}({\mathbb R}^{{d}})} \int_{{\mathbb R}^{{d}}} {\rm e}^{\varepsilon(t) |x|^{2}} |\phi(x)| \, {\rm d} x , \end{displaymath} where $K(x,t)=(4\pi t)^{-{d}/2} {\rm e}^{-\frac{|x|^2}{4t}}$ is the heat kernel and $\varepsilon(t)= \frac{1}{4(T(\varepsilon)-t)}= \frac{\varepsilon }{1 -4\varepsilon t}$. In particular, for $0<t< T(\varepsilon) -T(\gamma) = \frac{1}{4\varepsilon}-\frac{1}{4\gamma}$ \begin{equation} \label{Fubbini_4-S(t)} \int_{{\mathbb R}^{{d}}} \phi\, S(t)\mu = \int_{{\mathbb R}^{{d}}} S(t) \phi \, {\rm d} \mu .
{\rm e}nd{equation} {\rm e}nd{lemma} \begin{proof} Notice that \begin{displaymath} I= \int_{{\mathbb R}^{{d}}} \int_{{\mathbb R}^{{d}}} K(x-y,t) |\phi(x)| \, {\rm d} x\, {\rm d} |\mu(y)| = \int_{{\mathbb R}^{{d}}} \int_{{\mathbb R}^{{d}}} K(x-y,t) {\rm e}^{\varepsilon|y|^{2}} |\phi(x)| {\rm e}^{-\varepsilon|y|^{2}} \, {\rm d} x \, {\rm d} |\mu(y)| {\rm e}nd{displaymath} and completing the square \begin{displaymath} \frac{|x-y|^{2}}{4t} -\varepsilon |y|^{2} = \frac{1-4 \varepsilon t}{4t} \left|y - \frac{1}{1-4 \varepsilon t} x{\rm i}ght|^{2} - \frac{\varepsilon |x|^{2}}{1 -4\varepsilon t} . {\rm e}nd{displaymath} Hence \begin{align*} I &\leq (4\pi t)^{-{d}/2} \int_{{\mathbb R}^{{d}}} \int_{{\mathbb R}^{{d}}} {\rm e}^{ -\frac{1-4 \varepsilon t}{4t} \left|y - \frac{1}{1-4 \varepsilon t} x{\rm i}ght|^{2}} {\rm e}^{\frac{\varepsilon |x|^{2}}{1 -4\varepsilon t}} |\phi(x)| {\rm e}^{-\varepsilon|y|^{2}} \, {\rm d} x\, {\rm d} |\mu(y)| \\ &\leq (4\varepsilon t)^{-{d}/2} \|\mu\|_{\mathcal{M}_{\varepsilon}({\mathbb R}^{{d}})} \int_{{\mathbb R}^{{d}}} {\rm e}^{\frac{\varepsilon |x|^{2}}{1 -4\varepsilon t}} |\phi(x)| \, {\rm d} x, {\rm e}nd{align*} which is finite as long as $\varepsilon(t) < \gamma$, that is $0<t< T(\varepsilon) -T(\gamma) = \frac{1}{4\varepsilon}-\frac{1}{4\gamma}$. The rest follows from Fubini's theorem. {\rm e}nd{proof} We can now show that for $u_0\in \mathcal{M}_\varepsilon ({\mathbb R}^{d})$ [there is no requirement for $u_0$ to be non-negative] the function defined in (\ref{eq:solution_heat_up_t=0}) is indeed the solution of the heat equation on the time interval $(0,1/4\varepsilon)$, and satisfies the initial data in the sense of measures. There are, of course, many classical results on the validity of the heat kernel representation, but the proof that follows has to be particularly tailored to $\mathcal{M}_\varepsilon({\mathbb R}^{d})$ initial data, since this allows for significant growth at infinity. \begin{theorem} \label{thm:properties_sltns_given_u0} Suppose that $u_{0}\in \mathcal{M}_\varepsilon({\mathbb R}^{d})$, set $T(\varepsilon)=1/4\varepsilon$, and let $u(x,t)$ be given by {\rm e}qref{eq:solution_heat_up_t=0}. Then \begin{enumerate} \item[\rm(i)] $u(t) \in L^{\infty}_{{\rm loc}}({\mathbb R}^{d})$ for $t\in (0, T(\varepsilon))$. Also $u\in C^{\infty} ({\mathbb R}^{d} \times (0,T(\varepsilon)))$ and satisfies \begin{displaymath} u_{t} - {\rm d}isplaystyleelta u =0 \qquad\mbox{for all}\quad x\in {\mathbb R}^{d}, \ 0< t <T(\varepsilon). {\rm e}nd{displaymath} \item[\rm(ii)] For every $\varphi \in C_{c}({\mathbb R}^{d})$ and $0\leq t< T(\varepsilon)$ \begin{displaymath} \int_{{\mathbb R}^{d}} \varphi \, u(t)= \int_{{\mathbb R}^{d}} S(t)\varphi \, {\rm d} u_{0} . {\rm e}nd{displaymath} In particular, $u(t) \to u_{0} $ as $t\to 0^{+}$ as a measure, i.e. \begin{displaymath} \int_{{\mathbb R}^{d}} \varphi \, u(t) \to \int_{{\mathbb R}^{d}} \varphi \, {\rm d} u_{0}\qquad\mbox{for every}\quad \varphi \in C_{c}({\mathbb R}^{d}). {\rm e}nd{displaymath} \item[\rm(iii)] If $0\leq u_{0}\in \mathcal{M}_\varepsilon({\mathbb R}^{d})$ is non-zero then $u(x,t)>0$ for all $x\in{\mathbb R}^{d}$, $t\in (0,T(\varepsilon))$, i.e.\ the Strong Maximum Principle holds. 
{\rm e}nd{enumerate} {\rm e}nd{theorem} \begin{proof} (i) If $u_{0}\in \mathcal{M}_{\varepsilon} ({\mathbb R}^{d})$, then for any $a>1$ \begin{displaymath} u(0,at,|u_{0}|) = \frac{1}{(4\pi at)^{{d}/2}} \int_{{\mathbb R}^{d}} {\rm e}^{-|y|^2/4at} \, {\rm d} |u_0(y)| <\infty {\rm e}nd{displaymath} provided that $\frac{1}{4at}\geq \varepsilon$, that is $t\leq \frac{1}{4a\varepsilon} < T(\varepsilon)$. Hence by (\ref{bound_above_x=0}) from Lemma \ref{lem:estimate_from_x=0}, we have $u(t) \in L^{\infty}_{{\rm loc}}({\mathbb R}^{d})$ for $t\in (0, T(\varepsilon))$. The rest of part (i) follows from the regularity of the heat kernel, since for any multi-index $\alpha = (\alpha_{1}, \ldots, \alpha_{{d}}) \in {\mathbb N}^{{d}}$ and $n\in {\mathbb N}$, the derivatives satisfy \begin{displaymath} D^{\alpha,n}_{x,t} K(x,t) = {p_{\alpha,n}(x,t) \over t^{{{d}/2}+ |\alpha| + 2 n}}\, {\rm e}^{-|x|^{2}/4t}, {\rm e}nd{displaymath} where $p_{\alpha,n}(x,t)$ is a polynomial of degree not exceeding $|\alpha| + 2n$. For $t$ bounded away from zero and ${\rm d}elta >0$ this can be bounded by a constant times $ {\rm e}^{(-\frac{1}{4t}+ {\rm d}elta) |x|^{2} }$. Therefore for $0<s\leq t\leq \tau < T(\varepsilon)$ \begin{displaymath} \int_{{\mathbb R}^{d}} | D^{\alpha,n}_{x,t} K(x-y,t) | \, {\rm d} |u_0(y)| \leq C_{s,\tau} \int_{{\mathbb R}^{d}} {\rm e}^{(-\frac{1}{4 \tau}+ {\rm d}elta) |x-y|^{2} } \, {\rm d} |u_0(y)| {\rm e}nd{displaymath} with $0<{\rm d}elta < \frac{1}{4 \tau}$. Proceeding as in the upper bound in Lemma \ref{lem:estimate_from_x=0} the above integral is bounded, for $x$ in compact sets and $0<\alpha <1$, by a multiple of \begin{displaymath} \int_{{\mathbb R}^{d}} {\rm e}^{(-\frac{1}{4 \tau}+ {\rm d}elta) (1-\alpha) |y|^{2} } \, {\rm d} |u_{0}(y)| = \int_{{\mathbb R}^{d}} {\rm e}^{(\varepsilon + (-\frac{1}{4\tau}+ {\rm d}elta) (1-\alpha) ) |y|^{2} } {\rm e}^{-\varepsilon|y|^{2}} \, {\rm d} |u_{0}(y)| {\rm e}nd{displaymath} which is finite as long as we chose ${\rm d}elta, \alpha$ small such that $\varepsilon < (1-\alpha) (\frac{1}{4\tau} -{\rm d}elta)$. For this it suffices that $\frac{1}{4 T(\varepsilon)(1-\alpha)} = \frac{\varepsilon}{1-\alpha} < \frac{1}{4\tau} -{\rm d}elta$ which is possible since $\tau < T(\varepsilon)$. Hence $u\in C^{\infty} ({\mathbb R}^{d} \times (0,T(\varepsilon)))$ and satisfies the heat equation pointwise. For (ii), i.e.\ to show that the initial data is attained in the sense of measures, notice first that it is enough to consider non-negative test functions in $C_{c}({\mathbb R}^{d})$. Now, from Lemma \ref{heat_solution_Cc} and (\ref{Fubbini_4-S(t)}) in Lemma \ref{lem:preparing_4_Fubini}, we get for $t$ small $$ \int_{{\mathbb R}^{d}} \varphi u(t) = \int_{{\mathbb R}^{d}} S(t)\varphi\, {\rm d} u_{0} . $$ Since Lemma \ref{heat_solution_Cc} also guarantees that ${\rm e}^{\varepsilon|x|^{2}} \big (S(t) \varphi(x)-\varphi(x) \big) \to 0$ uniformly in ${\mathbb R}^{d}$ as $t\to 0$, we can take $t\to0$ in (\ref{eq:u0_tested_with_heat}) and obtain \begin{displaymath} \int_{{\mathbb R}^{d}} \varphi \, u(t) = \int_{{\mathbb R}^{d}} S(t) \varphi \, {\rm d} u_{0} = \int_{{\mathbb R}^{d}} \varphi \, {\rm d} u_{0} + \int_{{\mathbb R}^{d}} {\rm e}^{\varepsilon |x|^{2}}\big(S(t) \varphi -\varphi) \, {\rm e}^{-\varepsilon |x|^{2}} \, {\rm d} u_{0} \to \int_{{\mathbb R}^{d}} \varphi \, {\rm d} u_{0} {\rm e}nd{displaymath} and (ii) is proved. Part (iii) is a consequence of the lower bound in (\ref{bound_below_above_x=0}) from Lemma \ref{lem:estimate_from_x=0}. 
{\rm e}nd{proof} Now we derive some estimates on the solution in the $L^1_\varepsilon({\mathbb R}^{{d}})$ spaces introduced in (\ref{eq:space_L1eps}), using the norm from {\rm e}qref{eq:norm_L1eps}. We also discuss the continuity of the solutions in time. Note that part (i) shows that in fact whenever $u_0\in\mathcal{M}_\varepsilon({\mathbb R}^{{d}})$ we have $u(t)\in L^1_{\varepsilon(t)}({\mathbb R}^{{d}})$; in part (iii) we obtain a similar result for the derivatives of $u$, but with some loss in the allowed growth (in $L^1_{\rm d}elta ({\mathbb R}^{{d}})$ only for ${\rm d}elta>\varepsilon(t)$). Recalling the notations in (\ref{epsilon-t}), we have the following result. \begin{proposition} \label{prop:semigroup_estimates} Suppose that $u_{0}\in \mathcal{M}_\varepsilon({\mathbb R}^{d})$ and let $u(x,t)$ be given by {\rm e}qref{eq:solution_heat_up_t=0}. \begin{itemize} \item[\rm(i)] For $0 < t <T(\varepsilon)$ we have $u(t) \in L^{1}_{{\rm d}elta}({\mathbb R}^{{d}})$ for any ${\rm d}elta \geq \varepsilon(t)$. Moreover \begin{equation}\label{eq:estimate_solution_L1eps_L1delta} \|u(t)\|_{ L^{1}_{\varepsilon(t)}({\mathbb R}^{{d}})} \leq \|u_{0}\|_{ \mathcal{M}_{\varepsilon}({\mathbb R}^{{d}})} . {\rm e}nd{equation} \item[\rm(ii)] For $0\leq s<t < T(\varepsilon)$ \begin{equation} \label{eq:semigroup_solution} u(t) = S(t-s) u(s). {\rm e}nd{equation} \item[\rm(iii)] For any multi-index $\alpha \in {\mathbb N}^{{d}}$, for $0 < t <T(\varepsilon)$ we have $D^{\alpha}_{x}u(t) \in L^{1}_{{\rm d}elta}({\mathbb R}^{{d}})$ for any ${\rm d}elta > \varepsilon(t)$. Moreover for any $\gamma>1$ we have \begin{equation} \label{eq:estimate_derivative_solution_L1eps_L1delta} \|D^{\alpha}_{x}u(t)\|_{ L^{1}_{{\rm d}elta(t)}({\mathbb R}^{{d}})} \leq \frac{c_{\alpha,\gamma}}{t^{\frac{|\alpha|}{2}}} \|u_{0}\|_{ \mathcal{M}_{\varepsilon}({\mathbb R}^{{d}})}\qquad\mbox{for all}\quad 0 < t <\frac{T(\varepsilon)}{\gamma}, {\rm e}nd{equation} where ${\rm d}elta(t):= \frac{1}{4(T(\varepsilon)-\gamma t)} = \frac{\varepsilon}{(1-4\varepsilon \gamma t)}$. \item[\rm(iv)] For any multi-index $\alpha \in {\mathbb N}^{{d}}$, $m\in {\mathbb N}$ and for each $t_{0} \in (0,T(\varepsilon))$ there exists ${\rm d}elta (t_{0}) > \varepsilon$ such that the mapping $(0,T(\varepsilon)) \ni t \mapsto D^{\alpha,m}_{x,t}u(t)$ is continuous in $L^{1}_{{\rm d}elta (t_{0})}({\mathbb R}^{{d}})$ at $t=t_{0}$. {\rm e}nd{itemize} {\rm e}nd{proposition} \begin{proof} \noindent (i) Setting ${\rm d}elta = \frac{1}{4\tau}$ \begin{displaymath} \int_{{\mathbb R}^{d}} {\rm e}^{-|x|^2/4\tau} |u(x,t)| \, {\rm d} x \leq \frac{1}{(4\pi t)^{{d}/2}} \int_{{\mathbb R}^{d}} \int_{{\mathbb R}^{d}} {\rm e}^{-|x|^2/4\tau} {\rm e}^{-|x-z|^2/4t} \, {\rm d} |u_0(z)| \, {\rm d} x. {\rm e}nd{displaymath} Notice that completing the square we obtain \begin{equation} \label{eq:complete_squares} \frac{|x|^{2}}{\tau} + \frac{|x-z|^{2}}{t} = \frac{t+\tau}{t\tau} \left|x- \frac{\tau}{t+\tau} z{\rm i}ght|^{2} + \frac{|z|^{2}}{t+\tau} {\rm e}nd{equation} and so \begin{displaymath} \int_{{\mathbb R}^{d}} {\rm e}^{-|x|^2/4\tau} |u(x,t)| \, {\rm d} x \leq \frac{1}{(4\pi t)^{{d}/2}} \int_{{\mathbb R}^{d}} {\rm e}^{-\frac{|z|^{2}}{4(t+\tau)} } \, {\rm d} |u_0(z)| \int_{{\mathbb R}^{d}} {\rm e}^{-\frac{t+\tau}{4t\tau} |x- \frac{\tau}{t+\tau} z|^{2}}\, {\rm d} x. 
{\rm e}nd{displaymath} Since \begin{displaymath} \int_{{\mathbb R}^{d}} {\rm e}^{-\frac{t+\tau}{4t\tau} |x- \frac{\tau}{t+\tau} z|^{2}}\, {\rm d} x = \int_{{\mathbb R}^{d}} {\rm e}^{-\frac{t+\tau}{4t\tau} |x|^{2}}\, {\rm d} x = \left(\frac{4 \pi t\tau}{t+\tau}{\rm i}ght)^{{d}/2} {\rm e}nd{displaymath} it follows that \begin{displaymath} \int_{{\mathbb R}^{d}} {\rm e}^{-|x|^2/4\tau} |u(x,t)| \, {\rm d} x \leq \left(\frac{\tau}{t+\tau}{\rm i}ght)^{{d}/2} \int_{{\mathbb R}^{d}} {\rm e}^{-\frac{|z|^{2}}{4(t+\tau)} } \, {\rm d} |u_0(z)| . {\rm e}nd{displaymath} Now given $t$ with $0<t<T(\varepsilon)$, choose $\tau=T(\varepsilon)-t=(1-4\varepsilon t)/4\varepsilon$; then $1/4\tau=\varepsilon(t)$, $1/4(t+\tau)=\varepsilon$, and this estimate becomes $$ \varepsilon(t)^{d/2}\int_{{\mathbb R}^{d}}{\rm e}^{-\varepsilon(t)|x|^2}|u(x,t)|,{\rm d} x\le\varepsilon^{d/2}\int_{{\mathbb R}^{d}}{\rm e}^{-\varepsilon|z|^2}\,{\rm d}|u_0(z)|, $$ which is precisely {\rm e}qref{eq:estimate_solution_L1eps_L1delta} up to a constant multiple of both sides. \noindent (ii) Now \begin{displaymath} S(t-s)u(s) (x) = \frac{1}{(4\pi (t-s))^{{d}/2}} \int_{{\mathbb R}^{d}} {\rm e}^{-|x-y|^2/4(t-s)} u(y,s) \, {\rm d} y {\rm e}nd{displaymath} and \begin{displaymath} u(y,s) = \frac{1}{(4\pi s)^{{d}/2}} \int_{{\mathbb R}^{d}} {\rm e}^{-|y-z|^2/4s} \, {\rm d} u_0(z). {\rm e}nd{displaymath} Notice that completing the square as in (\ref{eq:complete_squares}) with $x-y$ replacing $x$, $z-y$ replacing $z$ and $t-s$ replacing $\tau$ and $s$ replacing $t$, we get \begin{displaymath} S(t-s)u(s) (x) = \frac{1}{(4\pi (t-s))^{{d}/2}} \frac{1}{(4\pi s)^{{d}/2}} \int_{{\mathbb R}^{d}} {\rm e}^{-|x-z|^2/4t} \, {\rm d} u_{0}(z) \int_{{\mathbb R}^{d}} {\rm e}^{ -\frac{t}{4s(t-s)} |(y-z)- \frac{s}{t} (x-z)|^{2}}\, {\rm d} y {\rm e}nd{displaymath} and \begin{displaymath} \int_{{\mathbb R}^{d}} {\rm e}^{ -\frac{t}{4s(t-s)} |(y-z)- \frac{s}{t} (x-z)|^{2}}\, {\rm d} y = \int_{{\mathbb R}^{d}} {\rm e}^{ -\frac{t}{4s(t-s)} |y|^{2}}\, {\rm d} y = \left(\frac{4\pi s(t-s)}{t}{\rm i}ght)^{{d}/2} {\rm e}nd{displaymath} and the result is proved. \noindent (iii) Notice that for any multi-index $\alpha \in {\mathbb N}^{{d}}$ \begin{displaymath} D^{\alpha}_{x} u(x,t)= \int_{{\mathbb R}^{d}} D^{\alpha}_{x} K(x-y,t) \, {\rm d} u_0(y) = \frac{1}{t^{{d}/2 + |\alpha|/2}} \int_{{\mathbb R}^{d}} p_{\alpha}(x-y,t) \, {\rm e}^{-|x-y|^{2}/4t} \, {\rm d} u_0(y) {\rm e}nd{displaymath} with $p_{\alpha}(x-y,t)$ is a polynomial of degree $|\alpha|$ in powers of $\frac{x-y}{t^{1/2}}$. Hence for any $0<\beta<1$ \begin{align} \label{eq:point_bound_further_derivatives} |D^{\alpha}_{x} u(x,t)| &\leq {c_{\alpha,\beta} \over t^{{{d}/2}+ |\alpha|/2}}\, \int_{{\mathbb R}^{d}} {\rm e}^{- \beta\frac{ |x-y|^{2}}{4t}} \, {\rm d} |u_0(y)|\\ &=\frac{\tilde c_{\alpha,\beta}}{t^{|\alpha|/2}}\,v(x,\gamma t),\nonumber {\rm e}nd{align} where $v(x,t)$ is the solution with initial data $|u_0|$ and $\gamma=1/\beta>1$ is arbitrary. The estimate in {\rm e}qref{eq:estimate_derivative_solution_L1eps_L1delta} follows using part (i). 
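For the reader's convenience we spell out this last step (it merely chains the two estimates already obtained): applying part (i) to the solution $v$, whose initial data is $|u_{0}| \in \mathcal{M}_{\varepsilon}({\mathbb R}^{{d}})$, at time $\gamma t<T(\varepsilon)$, the pointwise bound (\ref{eq:point_bound_further_derivatives}) gives
\begin{displaymath}
\|D^{\alpha}_{x}u(t)\|_{ L^{1}_{\varepsilon(\gamma t)}({\mathbb R}^{{d}})} \leq \frac{\tilde c_{\alpha,\beta}}{t^{|\alpha|/2}}\, \|v(\gamma t)\|_{ L^{1}_{\varepsilon(\gamma t)}({\mathbb R}^{{d}})} \leq \frac{\tilde c_{\alpha,\beta}}{t^{|\alpha|/2}}\, \|u_{0}\|_{ \mathcal{M}_{\varepsilon}({\mathbb R}^{{d}})},
\end{displaymath}
and $\varepsilon(\gamma t) = \frac{\varepsilon}{1-4\varepsilon \gamma t} = \delta(t)$, which is precisely \eqref{eq:estimate_derivative_solution_L1eps_L1delta} for $0<t<T(\varepsilon)/\gamma$.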
\noindent (iv) Note that we can argue as we did for (\ref{bound_above_x=0}), and use (\ref{eq:point_bound_further_derivatives}) to obtain, for $0<\gamma <1$, \begin{equation} \label{eq:exponential_bound_further_derivatives} |D^{\alpha}_{x} u(x,t)| \leq {c_{\alpha,\beta} \over t^{{{d}/2}+ |\alpha|/2}} {\rm e}^{(\frac{1}{\gamma}-1) (1-\beta)\frac{|x|^{2}}{4 t} } \, \int_{{\mathbb R}^{d}} {\rm e}^{-(1-\gamma )(1-\beta) \frac{|y|^{2}}{4 t} } \, {\rm d} |u_{0}(y)| , \end{equation} which is finite provided we choose $\beta, \gamma$ such that $(1-\gamma) (1-\beta) \frac{1}{4 t} > \varepsilon$, i.e.\ provided that $t<T=(1-\gamma)(1-\beta)T(\varepsilon)$. From the regularity of $u$ in Theorem \ref{thm:properties_sltns_given_u0} we know that, as $t\to t_{0}$, \begin{displaymath} D^{\alpha}_{x} u(t) \to D^{\alpha}_{x} u(t_{0}) \quad \mbox{in $L^{\infty}_{{\rm loc}}({\mathbb R}^{{d}})$} . \end{displaymath} Now, if $\alpha =0$, (\ref{bound_above_x=0}) implies that for $\varepsilon(t_{0}) =\frac{\gamma}{4(a-1)t_{0}}$ and $a, \gamma >1$ we have a uniform quadratic exponential bound for $u(t)$ for all $t$ close enough to $t_{0}$. For nonzero $\alpha$, (\ref{eq:exponential_bound_further_derivatives}) implies that for $0<\beta, \gamma<1$ and $\delta(t_{0}) = (1-\gamma) (1-\beta) \frac{1}{4 t_{0}} > \varepsilon$ we have again a uniform quadratic exponential bound for $D^{\alpha}_{x} u(t)$ for all $t$ close enough to $t_{0}$. Now, for $n\in {\mathbb N}$, \begin{displaymath} \|D^{\alpha}_{x} u(t) - D^{\alpha}_{x} u(t_{0})\|_{L^1_{\delta(t_{0})}({\mathbb R}^{d})}= c \int_{|x|\leq n}{\rm e}^{-\delta(t_{0}) |x|^2} |D^{\alpha}_{x} u(t) - D^{\alpha}_{x} u(t_{0})|(x) \, {\rm d} x \end{displaymath} \begin{displaymath} + c \int_{|x|\ge n}{\rm e}^{-\delta(t_{0}) |x|^2} |D^{\alpha}_{x} u(t) - D^{\alpha}_{x} u(t_{0})|(x) \, {\rm d} x . \end{displaymath} From the uniform quadratic exponential bound, the second term is arbitrarily small for sufficiently large $n$, uniformly in $t$ close to $t_{0}$, while, for fixed $n$, the first term is small for $t$ close enough to $t_{0}$. For time derivatives just note that for $m\in {\mathbb N}$, $\partial_{t}^{m} u(t) = \Delta^{m} u(t)$, and then \begin{displaymath} D^{\alpha,m}_{x,t} u(t) = \partial_{t}^{m} D^{\alpha}_{x} u(t) = \Delta^{m} D^{\alpha}_{x} u(t) \end{displaymath} and we apply the argument above.\end{proof} We now discuss further the sense in which the initial data is attained (improving on part (ii) of Theorem \ref{thm:properties_sltns_given_u0}). First we show that $u(t)= S(t) u_{0}$ with $u_{0} \in \mathcal{M}_{\varepsilon}({\mathbb R}^{{d}})$ attains the initial data against any test function that decays fast enough. \begin{corollary} \label{cor:attain_initial_measure} If $u_{0} \in \mathcal{M}_{\varepsilon}({\mathbb R}^{{d}})$ and $\varphi \in C_{0}({\mathbb R}^{{d}})$ is such that $|\varphi (x)| \leq A {\rm e}^{-\gamma|x|^{2}}$, $x\in {\mathbb R}^{{d}}$, with $\gamma > \varepsilon$, then $u(t)= S(t) u_{0}$ satisfies \begin{displaymath} \int_{{\mathbb R}^{{d}}} u(t) \varphi \to \int_{{\mathbb R}^{{d}}} \varphi \, {\rm d} u_{0}\qquad\mbox{as}\quad t\to0.
{\rm e}nd{displaymath} {\rm e}nd{corollary} \begin{proof} For $0\le t<T(\varepsilon)$ small and $\varepsilon(t) = \frac{\varepsilon}{1-4\varepsilon t}$ we have $\gamma > \varepsilon(t)$ and then from~(\ref{Fubbini_4-S(t)}) in Lemma \ref{lem:preparing_4_Fubini} $\int_{{\mathbb R}^{{d}}} u(t) \varphi = \int_{{\mathbb R}^{{d}}} S(t) \varphi \, {\rm d} u_{0}$. Now, from Lemma \ref{lem:estimate_fast_decay_initialdata} it follows that for $t$ sufficiently small $$|S(t) \varphi|(x) \leq C {\rm e}^{-\gamma(t) |x|^{2}},\quad\mbox{with}\quad \gamma(t)=\frac{\gamma}{1+4 \gamma t} > \varepsilon,$$ and then $|S(t) \varphi|(x) \leq C {\rm e}^{-\varepsilon |x|^{2}} \in L^{1}({\rm d} |u_{0}|)$. Also, $S(t) \varphi (x) \to \varphi(x)$ for $x\in {\mathbb R}^{{d}}$ and then Lebesgue's theorem gives the result. {\rm e}nd{proof} Assuming the initial data is a pointwise defined function, we get the following result. \begin{corollary}\label{cor:time_regularity} Suppose that $u_{0}\in L^{1}_\varepsilon({\mathbb R}^{d})$, set $T(\varepsilon)=1/4\varepsilon$, and let $u(x,t)$ be given by {\rm e}qref{eq:solution_heat_up_t=0}. Then \begin{itemize} \item[\rm(i)] $u(t) \to u_{0}$ in $L^{1}_{{\rm d}elta}({\mathbb R}^{{d}})$ as $t\to 0^+$ for any ${\rm d}elta>\varepsilon$; \item[\rm(ii)] if $u_{0} \in L^{p}_{{\rm loc}}({\mathbb R}^{{d}})$ with $1\leq p < \infty$ then \begin{displaymath} u(t) \to u_{0} \quad \mbox{in}\quad L^{p}_{{\rm loc}}({\mathbb R}^{{d}})\qquad\mbox{as}\quad t\to0^+;\qquad\mbox{and} {\rm e}nd{displaymath} \item[\rm(iii)] if $u_{0} \in C({\mathbb R}^{{d}})$ then $u(t) \to u_{0}$ in $L^{\infty}_{{\rm loc}}({\mathbb R}^{{d}})$ as $t \to 0^+$. {\rm e}nd{itemize} {\rm e}nd{corollary} \begin{proof} (i) Note that for any $\varphi \in C_{c}({\mathbb R}^{{d}})$ we have \begin{displaymath} \|S(t) u_{0} - u_{0} \|_{L^{1}_{{\rm d}elta}({\mathbb R}^{{d}})} \leq \|S(t) u_{0} - S(t) \varphi \|_{L^{1}_{{\rm d}elta}({\mathbb R}^{{d}})} + \|S(t) \varphi - \varphi \|_{L^{1}_{{\rm d}elta}({\mathbb R}^{{d}})} + \|\varphi - u_{0} \|_{L^{1}_{{\rm d}elta}({\mathbb R}^{{d}})}. {\rm e}nd{displaymath} Let $\gamma >0$ and take $\varphi \in C_{c}({\mathbb R}^{{d}})$ such that \begin{displaymath} \|u_{0} -\varphi\|_{L^{1}_{\varepsilon}({\mathbb R}^{{d}})} = \int_{{\mathbb R}^{{\rm d}}} {\rm e}^{-\varepsilon|x|^{2}} |u_{0}(x) - \varphi(x)| \, {\rm d} x < \gamma . {\rm e}nd{displaymath} To see this note that for $R>0$, if ${\rm supp}\,(\varphi) \subset B(0,R)$ then \begin{displaymath} \int_{{\mathbb R}^{{\rm d}}} {\rm e}^{-\varepsilon|x|^{2}} |u_{0}(x) - \varphi(x)| \, {\rm d} x = \int_{|x|\leq R} {\rm e}^{-\varepsilon|x|^{2}} |u_{0}(x) - \varphi(x)| \, {\rm d} x + \int_{|x|>R} {\rm e}^{-\varepsilon|x|^{2}} |u_{0}(x)| \, {\rm d} x . {\rm e}nd{displaymath} The second term is small for $R$ large and so is the first one if we approach $u_{0}$ by $\varphi$ in $L^{1}(B(0,R))$. Now for any ${\rm d}elta >\varepsilon$ and all sufficiently small $t>0$ we have $\tilde {\rm d}elta(t)\le{\rm d}elta$, where $\tilde {\rm d}elta(t):=\frac{\varepsilon}{(1-4\varepsilon t)}$. 
Then from (\ref{eq:increase_Meps_norm}) and (\ref{eq:estimate_solution_L1eps_L1delta}) we have \begin{displaymath} \|S(t) (u_{0}-\varphi) \|_{ L^{1}_{{\rm d}elta}({\mathbb R}^{{d}})} \leq \left(\frac{{\rm d}elta}{\varepsilon}{\rm i}ght)^{{d}/2} \|S(t) (u_{0}-\varphi) \|_{ L^{1}_{\tilde {\rm d}elta (t)}({\mathbb R}^{{d}})} \leq \left(\frac{{\rm d}elta}{\varepsilon}{\rm i}ght)^{{d}/2} \|u_{0}-\varphi \|_{ L^{1}_{\varepsilon}({\mathbb R}^{{d}})} < \left(\frac{{\rm d}elta}{\varepsilon}{\rm i}ght)^{{d}/2} \gamma . {\rm e}nd{displaymath} Finally, as in Lemma \ref{heat_solution_Cc} we have $S(t)\varphi -\varphi \to 0$ uniformly in ${\mathbb R}^{{d}}$ as $t\to 0$. Hence $ \|S(t) \varphi - \varphi \|_{L^{1}_{{\rm d}elta}({\mathbb R}^{{d}})} \to 0$ as $t\to 0$, which proves (i). \noindent (ii) and (iii). Fix $x_{0} \in {\mathbb R}^{{d}}$ and ${\rm d}elta>0$ and take $0\leq \varphi \in C_{c}({\mathbb R}^{{d}})$ such that $0\leq \varphi \leq 1$, $\varphi =1$ on $B(x_{0}, {\rm d}elta)$, and ${\rm supp}(\varphi) \subset B(x_{0}, 2{\rm d}elta)$. Decompose $u_{0} = \varphi u_{0} + (1-\varphi) u_{0}$ and write $$ u(t,u_{0})= u(t, \varphi u_{0}) + u(t, (1-\varphi) u_{0}). $$ Then, if $u_{0} \in L^{p}_{{\rm loc}}({\mathbb R}^{{d}})$ with $1\leq p < \infty$ we have $\varphi u_{0} \in L^{p}({\mathbb R}^{{d}})$ then, as $t\to 0$, \begin{displaymath} u(t, \varphi u_{0}) \to \varphi u_{0}\quad \mbox{in $ L^{p}({\mathbb R}^{{d}})$}. {\rm e}nd{displaymath} In particular $u(t, \varphi u_{0}) \to u_{0}$ in $L^{p}(B(x_{0},{\rm d}elta))$. If $u_{0} \in C({\mathbb R}^{{d}})$ then $\varphi u_{0} \in {\rm BUC}({\mathbb R}^{{d}})$ then, as $t\to 0$, \begin{displaymath} u(t, \varphi u_{0}) \to \varphi u_{0}\quad \mbox{in $ L^{\infty}({\mathbb R}^{{d}})$}. {\rm e}nd{displaymath} In particular $u(t, \varphi u_{0}) \to u_{0}$ in $L^{\infty}(B(x_{0},{\rm d}elta))$. Now we prove that, as $t\to 0$, $u(t, (1-\varphi) u_{0}) \to 0$ uniformly in a ball $B(x_{0}, \tilde {\rm d}elta)$ for some $\tilde{\rm d}elta < {\rm d}elta$, independent of $x_{0}$; this will conclude the proof of (ii) and (iii). For this notice that for $x \in B(x_{0}, {\rm d}elta/2)$ \begin{displaymath} u(t, (1-\varphi) u_{0}) (x) = \frac{1}{(4\pi t)^{{d}/2}} \int_{|y-x_{0}|\geq {\rm d}elta} {\rm e}^{-\frac{|x-y|^2}{4t}} (1-\varphi)(y) u_0(y) \,{\rm d} y . {\rm e}nd{displaymath} Then $|x-y|\geq |x_{0} - y| - |x-x_{0}| \geq {\rm d}elta - {\rm d}elta/2 = {\rm d}elta/2$. Hence for $0<t <t_{0}$ and $0<\alpha<1$, $|x-y|^{2} \geq \alpha |x-y|^{2} + (1-\alpha)\frac{{\rm d}elta^{2}}{4}$ and we obtain \begin{displaymath} | u(t, (1-\varphi) u_{0}) (x)| \leq \frac{{\rm e}^{-(1-\alpha)\frac{{\rm d}elta^{2}}{16 t}}}{(4\pi t)^{{d}/2}} \int_{|y-x_{0}|\geq {\rm d}elta} {\rm e}^{-\frac{\alpha|x-y|^2}{4t}} | u_0(y)| \,{\rm d} y . {\rm e}nd{displaymath} Now we look for a uniform estimate in $x \in B(x_{0}, \tilde {\rm d}elta)$ for the right-hand side above. 
For this note that for $0<\beta <1$, \begin{displaymath} |x-y|^{2} \geq |y-x_{0}|^{2} + |x-x_{0}|^{2} - 2|y-x_{0}||x-x_{0}| \geq (1-\beta) |y-x_{0}|^{2} + (1-\frac{1}{\beta}) |x-x_{0}|^{2}, \end{displaymath} thus for $x \in B(x_{0}, \tilde \delta)$ and $0<t<t_{0} = \frac{\alpha (1-\beta)}{8\varepsilon}$ we have \begin{displaymath} | u(t, (1-\varphi) u_{0}) (x)| \leq \frac{{\rm e}^{-(1-\alpha)\frac{\delta^{2}}{16 t}}}{(4\pi t)^{{d}/2}} {\rm e}^{(\frac{1}{\beta}-1) \frac{|x-x_{0}|^{2}}{4t}} \int_{|y-x_{0}|\geq \delta} {\rm e}^{-\frac{\alpha (1-\beta)|x_{0}-y|^2}{4t}} | u_0(y)| \,{\rm d} y \end{displaymath} \begin{equation} \label{eq:uniformly_2_zero} \leq \frac{{\rm e}^{-(1-\alpha)\frac{\delta^{2}}{16 t} + (\frac{1}{\beta}-1) \frac{\tilde \delta^{2}}{4t}}}{(4\pi t)^{{d}/2}} \int_{{\mathbb R}^{{d}}} {\rm e}^{-2\varepsilon |x_{0}-y|^2} | u_0(y)| \,{\rm d} y; \end{equation} again $|x_{0}-y|^{2} \geq |y|^{2} + |x_{0}|^{2} - 2|y||x_{0}| \geq \frac12 |y|^{2} - |x_{0}|^{2}$ gives \begin{displaymath} \int_{{\mathbb R}^{{d}}} {\rm e}^{-2\varepsilon |x_{0}-y|^2} | u_0(y)| \,{\rm d} y \leq {\rm e}^{ 2\varepsilon |x_{0}|^{2}} \int_{{\mathbb R}^{{d}}} {\rm e}^{- \varepsilon |y|^2} | u_0(y)| \,{\rm d} y \end{displaymath} and so (\ref{eq:uniformly_2_zero}) tends to $0$ as $t\to 0$ uniformly in $x \in B(x_{0}, \tilde \delta)$ if $(\frac{1}{\beta}-1) \tilde \delta^{2} < (1-\alpha)\frac{\delta^{2}}{4}$. Notice, finally, that $\tilde \delta$ does not depend on $x_{0}$. \end{proof} Notice that by comparing ${\rm e}^{-\varepsilon |x|^{2}}$ and $(1+|x|^{2})^{-m/2}$ it follows that the class of tempered distributions of class $(m,0)$ introduced at the end of Section \ref{sec:radon-measures} satisfies, for all $\varepsilon >0$, \begin{displaymath} \mathscr{C}_{m} ({\mathbb R}^{{d}}) \subset \mathscr{C}_{m+1} ({\mathbb R}^{{d}}) \subset \mathcal{M}_{\varepsilon} ({\mathbb R}^{{d}}) . \end{displaymath} \section{Initial data in $\mathcal{M}_{\varepsilon}({\mathbb R}^{{d}})$: uniqueness} \label{sec:uniqueness} \setcounter{equation}{0} Now we prove a uniqueness result for heat solutions with initial data $u_{0} \in \mathcal{M}_{\varepsilon}({\mathbb R}^{d})$. Observe that uniqueness for \textit{non-negative} weak solutions of (\ref{eq:heat_equation}) can be found in \cite{Aro71}. On the other hand, one can find a proof of the uniqueness of classical solutions with no sign assumptions but with bounded continuous initial data in \cite{Tychonov} in dimension one and in e.g.\ \cite{J} (Chapter 7, page 176) in arbitrary dimensions, provided that they satisfy the pointwise bound \begin{equation} \label{eq:quadratic_exponential_bound_upt=0} |u(x,t)|\leq M {\rm e}^{a |x|^{2}} , \ x \in {\mathbb R}^{{d}}, \ 0<t<T. \end{equation} Here we prove a uniqueness result adapted to initial data in $\mathcal{M}_\varepsilon({\mathbb R}^{{d}})$, with no sign condition imposed. Observe that if $u_0\in \mathcal{M}_\varepsilon({\mathbb R}^{d})$ for some $\varepsilon>0$ and $u(t)=S(t)u_{0}$ is as in (\ref{eq:solution_heat_up_t=0}), then assumptions (\ref{eq:uniqueness_Du_in_L1delta}), (\ref{eq:uniqueness_initial_data_semigroup}), (\ref{eq:uniqueness_initial_data_fastdecay}), (\ref{eq:uniqueness_u_in_L1eps}) and (\ref{eq:uniqueness_initial_data_Cc}) below are ensured by parts (i) and (iii) in Proposition \ref{prop:semigroup_estimates}, Corollary \ref{cor:attain_initial_measure} and part (iii) in Theorem \ref{thm:properties_sltns_given_u0}.
Also (\ref{eq:uniqueness_semigroup_solution}) is ensured by part (ii) in Proposition \ref{prop:semigroup_estimates}. \begin{theorem} \label{thr:uniqueness} Suppose that $u$, defined in ${\mathbb R}^{d} \times (0,T]$, is such that for some $\delta >0$ and for each $0<t<T$, $u(t) \in L^{1}_{\delta}({\mathbb R}^{{d}})$. \begin{enumerate} \item[{\rm(i)}] Suppose furthermore that \begin{equation} \label{eq:uniqueness_Du_in_L1delta} u, \nabla u, \Delta u \in L^{1}_{{\rm loc}}((0,T), L^{1}_{\delta}({\mathbb R}^{{d}})) \end{equation} and that $u$ satisfies $u_{t} -\Delta u=0$ almost everywhere in ${\mathbb R}^{{d}} \times (0,T)$. Then we have \begin{equation} \label{eq:uniqueness_semigroup_solution} u (t)= S(t-s)u(s) \end{equation} for any $0<s<t <T$. \end{enumerate} Assume hereafter that $u$ satisfies \eqref{eq:uniqueness_semigroup_solution} for any $0<s<t <T$. \begin{enumerate} \item[{\rm(ii)}] Then for each $0<t<T$ and every $\varphi \in C_{c}({\mathbb R}^{d})$ the following limit exists: \begin{displaymath} \lim_{s \to 0} \int_{{\mathbb R}^{d}} u(s) S(t) \varphi = \int_{{\mathbb R}^{d}} u(t) \varphi . \end{displaymath} \item[{\rm(iii)}] There exists $u_0\in \mathcal{M}_\varepsilon({\mathbb R}^{d})$ for some $\varepsilon >0$ such that $u(t)=S(t)u_{0}$ for $0<t<T$ if and only if for every $\varphi \in C_{c}({\mathbb R}^{d})$ and $t$ small enough \begin{equation} \label{eq:uniqueness_initial_data_semigroup} \lim_{s \to 0} \int_{{\mathbb R}^{d}} u(s) S(t) \varphi = \int_{{\mathbb R}^{d}} S(t)\varphi \, {\rm d} u_{0}. \end{equation} \item[{\rm(iv)}] Condition {\rm e}qref{eq:uniqueness_initial_data_semigroup} is satisfied provided either of the following holds: \begin{enumerate} \item [{\rm(iv-a)}] For any function $\phi \in C_{0}({\mathbb R}^{{d}})$ such that $|\phi (x)| \leq A {\rm e}^{-\gamma|x|^{2}}$, $x\in {\mathbb R}^{{d}}$, with $\gamma > \varepsilon$, we have \begin{equation} \label{eq:uniqueness_initial_data_fastdecay} \lim_{t \to 0} \int _{{\mathbb R}^{d}} \phi\, u(t) = \int _{{\mathbb R}^{d}} \phi \, {\rm d} u_{0} . \end{equation} \item[{\rm(iv-b)}] For some $\tau \leq T$ small and $0<t\leq \tau$ we have $u(t) \in L^{1}_{\varepsilon}({\mathbb R}^{d})$ with \begin{equation} \label{eq:uniqueness_u_in_L1eps} \int_{{\mathbb R}^{d}} {\rm e}^{- \varepsilon |x|^{2}}|u(x,t)| \, {\rm d} x \leq M \quad\mbox{for all}\quad t\in (0,\tau]; \end{equation} i.e.\ $u\in L^{\infty}((0,\tau], L^{1}_{\varepsilon}({\mathbb R}^{{d}}))$, and for every $\varphi \in C_{c}({\mathbb R}^{d})$, as $ t \to 0$, \begin{equation} \label{eq:uniqueness_initial_data_Cc} \int_{{\mathbb R}^{d}} \varphi \, u(t) \to \int_{{\mathbb R}^{d}} \varphi \, {\rm d} u_{0} . \end{equation} \end{enumerate} \end{enumerate} \end{theorem} \begin{proof} For (i) the key to the proof is to show that for every $\varphi \in C_{c}({\mathbb R}^{d})$ and $0<s<t<T$ one has \begin{equation} \label{eq:integral_semigroup_solution} \int_{{\mathbb R}^{d}} u(t) \varphi = \int_{{\mathbb R}^{d}} u(s) S(t-s) \varphi \end{equation} provided $T$ is small enough, depending only on $\delta>0$ in (\ref{eq:uniqueness_Du_in_L1delta}).
In such a case, we then apply (\ref{Fubbini_4-S(t)}) with $\mu = u(s)\in L^{1}_{{\rm d}elta}({\mathbb R}^{{d}})$ and $\phi = S(t-s) \varphi$, provided $0\leq t-s \leq T$ is small enough (depending on ${\rm d}elta>0$) such that from Lemma \ref{heat_solution_Cc}, $\phi$ satisfies the assumption in Lemma \ref{lem:preparing_4_Fubini} to obtain that the right hand side of (\ref{eq:integral_semigroup_solution}) equals ${\rm d}isplaystyle \int_{{\mathbb R}^{d}} S(t-s) u(s) \varphi$. Hence, we get (\ref{eq:uniqueness_semigroup_solution}) for $0<s<t<T$. Then for $0<t_{0}<T$ consider $v(t) = u(t+t_{0})$ for $0\leq t\leq T$ which satisfies the assumptions in (i). Hence (\ref{eq:uniqueness_semigroup_solution}) implies in particular $u(t+t_{0}) = S(t) u(t_{0})$ for $0\leq t_{0} , t\leq T$ which combined with (\ref{eq:semigroup_solution}) gives (\ref{eq:uniqueness_semigroup_solution}) for $0<s<t<2T$. In a finite numer of steps we obtain this property on any finite time interval. For the proof of (\ref{eq:integral_semigroup_solution}) we fix $0< t<T$ and differentiate $$ I(s) := \int_{{\mathbb R}^{d}} u(s) S(t-s) \varphi\qquad s\in (0,t) $$ to obtain \begin{equation} \label{eq:expression_4_derivative_I(s)} I'(s) = \int_{{\mathbb R}^{d}} \partial_{s} u (s) S(t-s) \varphi - \int_{{\mathbb R}^{d}} u(s) \partial_{s} S(t-s) \varphi = \int_{{\mathbb R}^{d}} {\rm d}isplaystyleelta u (s) S(t-s) \varphi - \int_{{\mathbb R}^{d}} u(s) {\rm d}isplaystyleelta S(t-s) \varphi . {\rm e}nd{equation} For this observe that from Lemma \ref{heat_solution_Cc} and decreasing $T$ if necessary but depending only on ${\rm d}elta$, we have that for all $0<t<T$ \begin{displaymath} |S(t) \varphi(x)| \leq c {\rm e}^{-\alpha |x|^{2}} , \quad x\in {\mathbb R}^{{d}}, \quad 0<t<T, {\rm e}nd{displaymath} with $\alpha = \alpha (T) > {\rm d}elta$, with ${\rm d}elta$ as in (\ref{eq:uniqueness_Du_in_L1delta}). Also, by (\ref{eq:solution_heat_C0}) and proceeding as in (\ref{eq:point_bound_further_derivatives}) and as in (\ref{eq:point_bound_heat_C0}) we obtain \begin{displaymath} | {\rm d}isplaystyleelta S(t) \varphi (x) | \leq \frac{c}{t^{{d}/2 +1}} {\rm e}^{-(1-{\rm d}elta)^{2}\frac{|x|^2}{4t} + \frac{(1-{\rm d}elta)^{2}}{{\rm d}elta} \frac{R^2}{4t}} \int_{B(0,R)} |\varphi (y)| \,{\rm d} y \quad x\in {\mathbb R}^{{d}}, \quad 0<t<T, {\rm e}nd{displaymath} and again as in Lemma \ref{heat_solution_Cc}, we obtain \begin{displaymath} \label{eq:bound_Delta_heat_C0} |{\rm d}isplaystyleelta S(t) \varphi(x)| \leq c {\rm e}^{-\alpha |x|^{2}} \quad x\in {\mathbb R}^{{d}}, \quad 0<t<T, {\rm e}nd{displaymath} with $\alpha= \alpha(T) >{\rm d}elta$ and some $c=c(\varphi, R, T)$ with ${\rm d}elta$ as in (\ref{eq:uniqueness_Du_in_L1delta}). With these, using (\ref{eq:uniqueness_Du_in_L1delta}), the integrand on the right-hand side of (\ref{eq:expression_4_derivative_I(s)}) has a bound \begin{displaymath} | {\rm d}isplaystyleelta u (\cdot)| | S(t-\cdot) \varphi| + | u(\cdot)| | {\rm d}isplaystyleelta S(t-\cdot) \varphi| \in L^{1}_{loc}((0,t), L^{1}({\mathbb R}^{{d}})) . {\rm e}nd{displaymath} Hence, by differentiation inside the integral, (\ref{eq:expression_4_derivative_I(s)}) is proved. Now observe that (\ref{eq:uniqueness_Du_in_L1delta}) and the upper bounds above for $S(t)\varphi$, ${\rm d}isplaystyleelta S(t) \varphi$ mean that we can use a.e. $s\in (0,t)$, Lemma \ref{lem:green_formula} below to integrate by parts in (\ref{eq:expression_4_derivative_I(s)}) to get $I'(s)=0$ for $s\in (0,t)$ which gives that $I(s)$ is constant in $(0,t)$. 
Now we show that, as $s\to t$ we have $I(s) \to I(t) = {\rm d}isplaystyle \int_{{\mathbb R}^{{d}}} u(t) \varphi$. For this, write \begin{displaymath} I(s) = \int_{{\mathbb R}^{{d}}} u(s) \varphi + \int_{{\mathbb R}^{{d}}} u(s) \big( S(t-s) \varphi -\varphi \big) {\rm e}nd{displaymath} and observe that from the assumptions $\partial_{t} u = {\rm d}isplaystyleelta u \in L^{1}_{{\rm loc}}((0,T), L^{1}_{{\rm d}elta}({\mathbb R}^{{d}}))$ and in particular, $u(s)$ is continuous as $s\to t$ in $L^{1}_{{\rm d}elta}({\mathbb R}^{{d}})$. On the other hand from Lemma \ref{heat_solution_Cc} we have ${\rm e}^{{\rm d}elta |x|^{2}} \big (S(t-s) \varphi(x)-\varphi(x) \big) \to 0$ uniformly in ${\mathbb R}^{{d}}$ as $s \to t$. Hence (\ref{eq:integral_semigroup_solution}) and part (i) are proved. Now we prove (ii). For fixed $0<t <T$ and $s$ small, from Lemma \ref{heat_solution_Cc}, we get that $u(t) S(s) \varphi$ is integrable and then, from (\ref{eq:uniqueness_semigroup_solution}), using Lemma \ref{lem:preparing_4_Fubini} again (with $\mu = u(s) \in L^{1}_{{\rm d}elta}({\mathbb R}^{{d}})$ and $\phi=S(s) \varphi$) and (\ref{eq:semigroup_solution}), we get \begin{displaymath} \int _{{\mathbb R}^{d}} u(t) S(s) \varphi = \int _{{\mathbb R}^{d}} S(t-s)u(s) S(s) \varphi = \int _{{\mathbb R}^{d}} u(s) S(t-s) S(s) \varphi =\int _{{\mathbb R}^{d}} u(s) S(t) \varphi . {\rm e}nd{displaymath} Now, from Lemma \ref{heat_solution_Cc} we have that, as $s \to 0$, \begin{displaymath} | \int _{{\mathbb R}^{d}} u(t) \big( S(s) \varphi -\varphi \big) | \leq \int_{{\mathbb R}^{d}} {\rm e}^{{\rm d}elta |x|^{2}}|u(t)| \big(S(s) \varphi -\varphi) \, {\rm e}^{-{\rm d}elta |x|^{2}} \to 0 {\rm e}nd{displaymath} and then the following limit exists \begin{displaymath} \lim_{s\to 0} \int _{{\mathbb R}^{d}} u(s) S(t) \varphi = \lim_{s\to 0} \int _{{\mathbb R}^{d}} u(t) S(s) \varphi =\int _{{\mathbb R}^{d}} u(t) \varphi . {\rm e}nd{displaymath} This concludes the proof of part (ii). To prove (iii) notice that from Lemma \ref{heat_solution_Cc} with $t$ small and Corollary \ref{cor:attain_initial_measure} we have that condition (\ref{eq:uniqueness_initial_data_semigroup}) is necessary for $u(t)$ to be equal to $S(t)u_{0}$. Conversely if (\ref{eq:uniqueness_initial_data_semigroup}) is satisfied then for $t$ small and $\varphi \in C_{c}({\mathbb R}^{d})$ \begin{displaymath} \int _{{\mathbb R}^{d}} u(t) \varphi = \lim_{s\to 0} \int _{{\mathbb R}^{d}} u(s) S(t) \varphi = \int _{{\mathbb R}^{d}} S(t) \varphi \, {\rm d} u_{0} {\rm e}nd{displaymath} Then by (\ref{Fubbini_4-S(t)}) in Lemma \ref{lem:preparing_4_Fubini} with $\mu = u_{0} \in \mathcal{M}_{\varepsilon}({\mathbb R}^{{d}})$, $\phi = \varphi$, we get \begin{displaymath} \int _{{\mathbb R}^{d}} u(t) \varphi = \int _{{\mathbb R}^{d}} S(t) \varphi \, {\rm d} u_{0} = \int _{{\mathbb R}^{d}} \varphi S(t) u_{0} {\rm e}nd{displaymath} for every $\varphi \in C_{c}({\mathbb R}^{d})$ and then $u(t) = S(t) u_{0}$ for $t$ small. This and (\ref{eq:uniqueness_semigroup_solution}) proves part (iii). For part (iv-a) it is now clear that if $u$ satisfies (\ref{eq:uniqueness_initial_data_fastdecay}) then Lemma \ref{heat_solution_Cc} and $t$ small allows to take $\phi=S(t)\varphi$ in (\ref{eq:uniqueness_initial_data_fastdecay}) to get that (\ref{eq:uniqueness_initial_data_semigroup}) satisfied. Finally, assuming (\ref{eq:uniqueness_u_in_L1eps}) and (\ref{eq:uniqueness_initial_data_Cc}) we prove part (iv-b). 
For this consider a sequence of smooth functions $0\leq \phi_{n}\leq 1$ with ${\rm supp}(\phi_{n}) \subset B(0,2n)$ and $\phi_{n}=1$ in $B(0,n)$. Then we write \begin{displaymath} \int_{{\mathbb R}^{d}} u(s) S(t) \varphi = \int_{{\mathbb R}^{d}} u(s) \phi_{n}S(t) \varphi + \int_{{\mathbb R}^{d}} u(s) (1-\phi_{n}) S(t) \varphi \end{displaymath} and then \begin{displaymath} \int_{{\mathbb R}^{d}} u(s) S(t) \varphi - \int_{{\mathbb R}^{d}} S(t) \varphi \, {\rm d} u_{0} = I_{1} + I_{2} + I_{3} = \end{displaymath} \begin{displaymath} \big( \int_{{\mathbb R}^{d}} u(s) \phi_{n}S(t) \varphi - \int_{{\mathbb R}^{d}} \phi_{n}S(t) \varphi \, {\rm d} u_{0} \big) +\int_{{\mathbb R}^{d}} (\phi_{n}-1) S(t) \varphi \, {\rm d} u_{0} + \int_{{\mathbb R}^{d}} u(s) (1-\phi_{n}) S(t) \varphi . \end{displaymath} Now $I_{3}$ goes to zero as $n\to \infty$, uniformly in $0<s<\tau$. To see this, observe that by Lemma \ref{heat_solution_Cc}, for some $t_{0}>0$ small and $0< t < t_{0}$ we have $0\leq {\rm e}^{\varepsilon|x|^{2}} (1-\phi_{n})| S(t)\varphi | \leq c {\rm e}^{-\gamma |x|^{2}} (1-\phi_{n})$, $\gamma >0$, and $ 0\leq 1-\phi_{n} \to 0$ uniformly in compact sets as $n\to \infty$. Hence ${\rm e}^{\varepsilon|x|^{2}} (1-\phi_{n}) S(t)\varphi \to 0$ as $n\to \infty$, uniformly in ${\mathbb R}^{{d}}$ and uniformly in $0<t<t_{0}$. Thus by (\ref{eq:uniqueness_u_in_L1eps}), \begin{displaymath} I_{3} = \int_{{\mathbb R}^{d}} {\rm e}^{-\varepsilon|x|^{2}} u(s) {\rm e}^{\varepsilon|x|^{2}} (1-\phi_{n}) S(t) \varphi \to 0, \quad n\to \infty \end{displaymath} uniformly for $0<s<\tau$ and $0<t<t_{0}$. With the same argument, $I_{2}$ goes to zero as $n\to \infty$, uniformly in $0<t<t_{0}$, since $u_{0}\in \mathcal{M}_{\varepsilon}({\mathbb R}^{{d}})$. Finally, by (\ref{eq:uniqueness_initial_data_Cc}), for any fixed $n$ and $0<t<t_{0}$, $I_{1} \to 0$ as $s\to 0$. Hence, for any $0<t<t_{0}$ we get $ \lim_{s \to 0} \int_{{\mathbb R}^{d}} u(s) S(t) \varphi = \int_{{\mathbb R}^{d}} S(t)\varphi \, {\rm d} u_{0}$ and part (iv-b) is proved. \end{proof} Notice that condition (\ref{eq:uniqueness_initial_data_fastdecay}) is precisely the definition of ``initial data'' for the weak solutions considered in \cite{Aro68}, page 319. Also observe that, for $t$ small, from Lemma \ref{lem:preparing_4_Fubini}, condition (\ref{eq:integral_semigroup_solution}) is indeed equivalent to (\ref{eq:uniqueness_semigroup_solution}) provided $u(s) \in L^{1}_{\delta} ({\mathbb R}^{{d}})$ for some $\delta>0$. Finally, observe that if we assume $u\in C({\mathbb R}^{{d}} \times (0,T])$ is such that for any $0<s<T$ there exist $M,a>0$ such that \begin{displaymath} |u(x,t)|\leq M {\rm e}^{a |x|^{2}} , \ x \in {\mathbb R}^{{d}}, \ s\leq t<T, \end{displaymath} then, from the results in \cite{Tychonov} and \cite{J} (Chapter 7, page 176), (\ref{eq:uniqueness_semigroup_solution}) is satisfied. Also, observe that Lemma \ref{lem:estimate_from_x=0} implies that $S(t)u_{0}$ satisfies the quadratic exponential bound above. Therefore, if additionally $u$ satisfies (\ref{eq:uniqueness_initial_data_fastdecay}), or (\ref{eq:uniqueness_u_in_L1eps}) and (\ref{eq:uniqueness_initial_data_Cc}), then we have $u(t)=S(t)u_{0}$. These conditions are slightly weaker than the classical Tychonov condition (\ref{eq:quadratic_exponential_bound_upt=0}).
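To make the comparison with the classical condition explicit, note for instance that if $u$ satisfies the pointwise bound $|u(x,t)|\leq M {\rm e}^{a |x|^{2}}$ on ${\mathbb R}^{{d}} \times (0,T)$, then for every $\varepsilon > a$
\begin{displaymath}
\int_{{\mathbb R}^{d}} {\rm e}^{- \varepsilon |x|^{2}}|u(x,t)| \, {\rm d} x \leq M \int_{{\mathbb R}^{d}} {\rm e}^{-(\varepsilon - a) |x|^{2}} \, {\rm d} x = M \left(\frac{\pi}{\varepsilon - a}\right)^{{d}/2}, \qquad 0<t<T,
\end{displaymath}
so that (\ref{eq:uniqueness_u_in_L1eps}) holds for any $\varepsilon>a$, whereas (\ref{eq:uniqueness_u_in_L1eps}) itself only asks for such an integral bound and not for a pointwise one.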
\section{Global existence versus finite-time blowup} \label{sec:blowup} \setcounter{equation}{0} From the results in Section \ref{sec:existence_regularity} it is natural to set \begin{displaymath} L^{1}_{0}({\mathbb R}^{d}) = \bigcap_{\varepsilon >0} L^{1}_{\varepsilon} ({\mathbb R}^{d}) \end{displaymath} and \begin{displaymath} \mathcal{M}_{0}({\mathbb R}^{d}) = \bigcap_{\varepsilon >0} \mathcal{M}_{\varepsilon} ({\mathbb R}^{d} ) . \end{displaymath} Clearly $L_0^1({\mathbb R}^{d})\subset\mathcal{M}_0({\mathbb R}^{d})$. It is a simple consequence of Lemma \ref{whyL1e} that these are precisely the collections of initial data for which (non-negative) solutions exist for all time. \begin{proposition}\label{prop:global_existence_L10} If $u_0\in \mathcal{M}_0({\mathbb R}^{d})$ then $u(x,t)$, given by \eqref{eq:solution_heat_up_t=0}, is well defined for all $x\in {\mathbb R}^{{d}}$ and $t>0$; in particular $u(t)\in L^1_0({\mathbb R}^{d})$ for every $t>0$. Conversely, if $u_{0}\in \mathcal{M}_{{\rm loc}} ({\mathbb R}^{d})$ with $u_0\ge0$ and $u(x,t)$ is defined for all $t>0$ then $u_0\in \mathcal{M}_0({\mathbb R}^{d})$. \end{proposition} Note that $L^1_0({\mathbb R}^{d})$ is a natural space of functions in which to study the heat semigroup, since $S(t){\colon}L^1_0({\mathbb R}^{d})\to L^1_0({\mathbb R}^{d})$ for every $t\ge0$; this forms the main topic of our paper \cite{RR2}. For the time being, one can note that if $u_0\in \mathcal{M}_0({\mathbb R}^{d})$ then the estimate from Proposition \ref{prop:semigroup_estimates} can be reinterpreted as $$ \|u(t)\|_{L^1_\delta}\le\|u_0\|_{\mathcal{M}_{\delta(t)}},\qquad\mbox{where}\quad\delta(t):=\frac{\delta}{1+4\delta t}. $$ The collection $ L^1_0({\mathbb R}^{d})$ is a large set of functions: it contains $L^{p}({\mathbb R}^{d})$ and $L^p_U({\mathbb R}^{d})$ for every $1\le p\leq \infty$, and (for example) any function that satisfies $$ |f(x)| \le M{\rm e}^{k|x|^\alpha}, \quad x\in {\mathbb R}^{{d}}, $$ for some $M, k>0$ and $\alpha<2$. It also contains functions that are not bounded by any quadratic exponential, such as \begin{displaymath} f(x)= \sum_k \alpha_{k} \, \chi_{B(x_{k},r_k)} (x) , \quad \alpha_{k} = {\rm e}^{|x_{k}|^{3}}, \end{displaymath} with $|x_{k}| \to \infty$ and $r_{k} \to 0$ such that $r_{k}^{{d}} \leq \frac{1}{\alpha_{k} k^{2}}$. We show below that $\mathcal{M}_0({\mathbb R}^{d})$ also contains the space of uniform measures $\mathcal{M}_U({\mathbb R}^{d})$, defined as the set of measures $\mu \in \mathcal{M}_{{\rm loc}}({\mathbb R}^{d})$ such that \begin{equation} \label{M_U} \sup_{x \in {\mathbb R}^{d}} \ \int_{B(x,1)} {\rm d} |\mu(y)| < \infty, \end{equation} with norm \begin{equation} \label{eq:M_U_norm} \|\mu\|_{\mathcal{M}_{U}({\mathbb R}^{d})} = \sup_{x \in {\mathbb R}^{d}} \ \int_{B(x,1)} {\rm d} |\mu(y)|. \end{equation} This is a Banach space, see Lemma \ref{lem:MU_banach}. In fact, as a consequence of Theorem \ref{thm:properties_sltns_given_u0}, we can show that the uniform space $\mathcal{M}_{U}({\mathbb R}^{d})$ is precisely the set of initial data for which non-negative solutions of the heat equation given by (\ref{eq:solution_heat_up_t=0}) remain bounded in ${\mathbb R}^{d}$ for positive times.
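In passing, let us sketch why $\mathcal{M}_{U}({\mathbb R}^{d}) \subset \mathcal{M}_{\varepsilon}({\mathbb R}^{d})$ for every $\varepsilon >0$, and hence $\mathcal{M}_{U}({\mathbb R}^{d}) \subset \mathcal{M}_{0}({\mathbb R}^{d})$; this is a short counting argument, with a constant $C_{{d}}$ depending only on the dimension. Covering ${\mathbb R}^{{d}}$ by the closed unit cubes $\overline{Q_i}$ centred at the lattice points $i\in {\mathbb Z}^{{d}}$, as in the proof of Proposition \ref{boundediff} below, each $\overline{Q_i}$ is contained in a union of at most $C_{{d}}$ balls of radius one, so that for $\mu \in \mathcal{M}_{U}({\mathbb R}^{d})$, writing $s_{+}=\max\{s,0\}$,
\begin{displaymath}
\int_{{\mathbb R}^{{d}}} {\rm e}^{-\varepsilon |x|^{2}} \, {\rm d} |\mu(x)| \leq \sum_{i\in {\mathbb Z}^{{d}}} {\rm e}^{-\varepsilon (\|i\|_{\infty}-1)_{+}^{2}} \, |\mu|(\overline{Q_i}) \leq C_{{d}}\, \|\mu\|_{\mathcal{M}_{U}({\mathbb R}^{d})} \sum_{i\in {\mathbb Z}^{{d}}} {\rm e}^{-\varepsilon (\|i\|_{\infty}-1)_{+}^{2}} < \infty ,
\end{displaymath}
since the number of indices $i$ with $\|i\|_{\infty}=k$ grows only polynomially in $k$.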
See \cite{ACDRB2004} for results on the heat equation in the uniform spaces $L^p_U({\mathbb R}^{d})$, $1\leq p <\infty$, that is, the collection of all functions $\phi\in L^p_{{\rm loc}}({\mathbb R}^{d})$ such that \begin{displaymath} \sup_{x \in {\mathbb R}^{d}} \ \int_{B(x,1)} |\phi(y)|^{p} \, {\rm d} y < \infty, \end{displaymath} with norm $\|\phi\|_{L^{p}_{U}({\mathbb R}^{d})} = \sup_{x \in {\mathbb R}^{d}} \ \|\phi\|_{L^{p}(B(x,1))}$. For $p=\infty$ we have $L^{\infty}_{U}({\mathbb R}^{d}) = L^{\infty}({\mathbb R}^{d})$ with norm $\|\phi\|_{L^{\infty}_{U}({\mathbb R}^{d})} = \sup_{x \in {\mathbb R}^{d}} \ \|\phi\|_{L^{\infty}(B(x,1))} = \|\phi\|_{L^{\infty}({\mathbb R}^{d})}$. \begin{proposition}\label{boundediff} \begin{itemize} \item [\rm(i)] If $u_0\in \mathcal{M}_{U}({\mathbb R}^{d})$ then $u_0\in \mathcal{M}_{0}({\mathbb R}^{d})$ and $u(t)\in L^{\infty}({\mathbb R}^{d})$ for all $t>0$, and for every $1\leq q \leq \infty$ \begin{displaymath} \|u(t)\|_{L^{q}_{U}({\mathbb R}^{d})} \leq M_0 \Big( t^{-\frac{{d}}{2}(1-\frac{1}{q})} +1 \Big) \|u_{0}\|_{\mathcal{M}_U({\mathbb R}^{d})} . \end{displaymath} \item[\rm(ii)] Conversely, assume that $0\leq u_0\in \mathcal{M}_0({\mathbb R}^{d})$. If $0\le u(t_{0})\in L^{\infty}({\mathbb R}^{d})$ for some $t_{0}>0$ then \begin{displaymath} u_{0} \in \mathcal{M}_{U}({\mathbb R}^{d}); \end{displaymath} hence $u(t)\in L^{\infty}({\mathbb R}^{d})$ for all $t>0$. \end{itemize} \end{proposition} \begin{proof} \noindent (i) Note first that from (\ref{eq:solution_heat_up_t=0}) we have $|S(t) u_{0}| \leq S(t) |u_{0}|$ and, by definition, that $u_{0} \in \mathcal{M}_{U}({\mathbb R}^{d})$ iff $|u_{0}| \in \mathcal{M}_{U}({\mathbb R}^{d})$. Hence, using Proposition \ref{prop:global_existence_L10}, it is enough to prove the result for non-negative $u_{0}$. Let us consider a cube decomposition of ${\mathbb R}^{{d}}$ as follows. For any index $i\in {\mathbb Z}^{{d}}$, denote by $Q_{i}$ the open cube in ${\mathbb R}^{{d}}$ of center $i$ with all edges of length 1 and parallel to the axes. Then $Q_i\cap Q_j=\emptyset$ for $i\neq j$ and ${\mathbb R}^{{d}}=\cup_{i\in {\mathbb Z}^{{d}}} \overline{Q_i}$. For a given $i\in {\mathbb Z}^{{d}}$ let us denote by $N(i)$ the set of indexes near $i$, that is, $j\in N(i)$ if and only if $\overline{Q_i} \cap \overline{Q_j} \ne \emptyset$. Obviously \be{dij} d_{ij}:= \inf\{{\rm dist}(x,y),\, x\in Q_i,y\in Q_j\} \ee satisfies $d_{ij} =0$ if $j\in N(i)$ and $d_{ij} \geq 1$ if $j \not \in N(i)$; in fact it is not difficult to see that $d_{ij} \geq \|i-j\|_{\infty} - 1$. Let us denote by $Q_i^{\rm near}=\cup_{j\in N(i)}Q_j$ and $Q_i^{\rm far}={\mathbb R}^{{d}}\setminus \overline{Q_i^{\rm near}}$. Assume that $u_{0} \in \mathcal{M}_U({\mathbb R}^{{d}})$ and, for a fixed $i$, decompose \begin{displaymath} u_{0} = u_{0} \chi_{Q_i^{\rm near}} + u_{0} \chi_{Q_i^{\rm far}}; \end{displaymath} by applying the linear semigroup $S(t)$ to each term in this equality we obtain the decomposition $$ u(t) = u_i^{\rm near}(t) + u_i^{\rm far}(t) . $$ The result will follow from the following estimates of the two terms of the decomposition.
First, \begin{equation}\label{basicLp-Lq} \| u_i^{\rm near}(t)\|_{L^q(Q_i)} \leq (4\pi t)^{-\frac{{d}}{2}(1-\frac{1}{q})} \|u_{0}\|_{\mathcal{M}(Q_i^{\rm near})}, \quad t > 0 , \end{equation} for $1\leq q \leq \infty$ and, second, \begin{equation}\label{basicL1-Linfty} \| u_i^{\rm far}(t) \|_{L^\infty(Q_i)} \leq c(t) \|u_{0}\|_{\mathcal{M}_U(Q_i^{\rm far})},\quad t \geq 0, \end{equation} for some bounded monotonic function $c(t)$ such that $c(0)=0$ and $0 \leq c(t) \leq C t^{-{d}/2}{\rm e}^{-\alpha/t}$ as $t \to 0$, where $C$ and $\alpha>0$ depend only on ${d}$. Then since the constant for the embedding $L^\infty(Q_i) \hookrightarrow L^q(Q_i)$ is $1$, independent of $q$ and $i$, (\ref{basicLp-Lq}) and (\ref{basicL1-Linfty}) imply $$ \|u(t) \|_{L^q(Q_i)}\leq \big((4\pi t)^{-\frac{{d}}{2}(1-\frac{1}{q})} + c(t)\big) \|u_{0}\|_{\mathcal{M}_U({\mathbb R}^{{d}})}, \qquad i\in {\mathbb Z}^{{d}} . $$ Since the $L^q_U({\mathbb R}^{{d}})$ norm can be bounded by a constant, depending only on ${d}$, times the supremum of the $L^q(Q_i)$ norms, (i) follows. Now observe that (\ref{basicLp-Lq}) follows from ``standard'' estimates for the heat equation since, in fact, $u_{0}\chi_{Q_i^{\rm near}} $ is a measure of bounded total variation and then for $t>0$, \be{xtra} \| u_i^{\rm near}(t)\|_{L^{q} (Q_i)} \leq \|S(t)(u_{0} \chi_{Q_i^{\rm near}})\|_{L^{q} ({\mathbb R}^{{d}})} \leq (4\pi t)^{-\frac{{d}}{2} (1-\frac{1}{q})} \|u_{0}\|_{\mathcal{M} (Q_i^{\rm near})}, \ee since $\|u_{0} \chi_{Q_i^{\rm near}} \|_{\mathcal{M}_{BTV} ({\mathbb R}^{{d}})} = \|u_{0}\|_{\mathcal{M} (Q_i^{\rm near})}$; see Lemma \ref{lem:heat_total_variation} below. We now prove (\ref{basicL1-Linfty}). Observe that $u_{0} \chi_{Q_i^{\rm far}} =\sum_{j\in {\mathbb Z}^{{d}}\setminus N(i)}u_0^{j}$ where $u_0^{j} = u_{0} \chi_{Q_j}$; for each $j$ we have $$ S(t)u_{0}^{j}(x) = (4\pi t)^{-{d}/2} \int_{{\mathbb R}^{{d}}} {\rm e}^{-\frac{|x-y|^2}{4t}} \, {\rm d} u_{0}^{j}(y) = (4\pi t)^{-{d}/2}\int_{Q_j}{\rm e}^{-\frac{|x-y|^2}{4t}} \, {\rm d} u_{0}^{j}(y), $$ which implies that for $j \not \in N(i)$ $$ \|S(t)u_{0}^{j}\|_{L^\infty(Q_i)}\leq (4\pi t)^{-{d}/2}{\rm e}^{-d_{ij}^2\over 4t} \|u_{0}^{j}\|_{\mathcal{M} (Q_j)}\leq (4\pi t)^{-{d}/2}{\rm e}^{-d_{ij}^2\over 4t}\|u_{0}\|_{\mathcal{M}_U(Q_i^{\rm far})}, $$ where $d_{ij}$ is defined above in \eqref{dij}. Hence, \begin{align*} \| u_i^{\rm far}(t)\|_{L^\infty(Q_i)} &\leq \sum_{j\in {\mathbb Z}^{{d}}\setminus N(i)} \|S(t)u_{0}^{j}\|_{L^\infty(Q_i)} \\ &\leq (4\pi t)^{-{d}/2} \|u_{0}\|_{\mathcal{M}_U(Q_i^{\rm far})} \sum_{j\in {\mathbb Z}^{{d}}\setminus N(i)}{\rm e}^{-d_{ij}^2\over 4t} . \end{align*} But, using that $d_{ij} \geq \|i-j\|_{\infty}-1$ and that $\#\{j \in {\mathbb Z}^{{d}}:\ \|i-j\|_{\infty}=k+1\} \leq C k^{{d}-1}$ for $k\geq 1$, we obtain $$ \sum_{j\in {\mathbb Z}^{{d}}\setminus N(i)}{\rm e}^{-d_{ij}^2\over 4t}\leq C\sum_{k=1}^\infty k^{{d}-1} {\rm e}^{-k^2\over 4t}, $$ which has the same character as the integral \begin{displaymath} \int_{1}^{\infty} r^{{d}-1} {\rm e}^{- \frac{r^{2}}{4t}} \, {\rm d} r = (4t)^{{d}/2} \int_{\frac{1}{\sqrt{4 t}}}^{\infty} s^{{d}-1} {\rm e}^{-s^{2}} \, {\rm d} s = t^{{d}/2}c(t) \end{displaymath} with $c(t)$ as claimed in (\ref{basicL1-Linfty}).
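For completeness we note how the smallness of $c(t)$ as $t\to 0$ can be read off from the last integral (a one-line estimate, with $C_{{d}}$ and $C$ dimensional constants): for $s\geq \frac{1}{\sqrt{4t}}$ we have ${\rm e}^{-s^{2}} \leq {\rm e}^{-\frac{1}{8t}}\, {\rm e}^{-s^{2}/2}$, so
\begin{displaymath}
\int_{\frac{1}{\sqrt{4 t}}}^{\infty} s^{{d}-1} {\rm e}^{-s^{2}} \, {\rm d} s \leq {\rm e}^{-\frac{1}{8t}} \int_{0}^{\infty} s^{{d}-1} {\rm e}^{-s^{2}/2} \, {\rm d} s = C_{{d}}\, {\rm e}^{-\frac{1}{8t}} ,
\end{displaymath}
and hence $c(t) \leq C\, {\rm e}^{-\frac{1}{8t}}$ for $t$ small, which in particular gives the bound $c(t) \leq C t^{-{d}/2} {\rm e}^{-\alpha/t}$ stated after (\ref{basicL1-Linfty}) with $\alpha = \frac{1}{8}$.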
\noindent (ii) If for some $t_{0}>0$ we have $u(t_{0})\in L^{\infty}({\mathbb R}^{d})$ then from (\ref{eq:solution_heat_up_t=0}) we get for all $x\in {\mathbb R}^{d}$ and any $R>0$ \begin{align*} \infty> M \geq u(x,t_{0})& \geq \frac{1}{(4\pi t_{0})^{{d}/2}} \int_{B(x,R)} {\rm e}^{-\frac{|x-y|^2}{4t_{0}}} \, {\rm d} u_0(y) \\ & \geq \frac{1}{(4\pi t_{0})^{{d}/2}} \inf_{z\in B(0,R)} {\rm e}^{-\frac{|z|^2}{4t_{0}}} \int_{B(x,R)} \, {\rm d} u_0(y) {\rm e}nd{align*} that is \begin{displaymath} 0\leq \int_{B(x,R)} \,{\rm d} u_0(y) \leq M {\rm e}^{\frac{R^2}{4t_{0}}} (4\pi t_{0})^{{d}/2} , \quad x\in {\mathbb R}^{d} {\rm e}nd{displaymath} i.e. $0\leq u_{0} \in \mathcal{M}_{U}({\mathbb R}^{d})$. From part (i) we obtain $u(t)\in L^{\infty}({\mathbb R}^{d})$ for all $t>0$. {\rm e}nd{proof} Now we prove the result used above in {\rm e}qref{xtra}. Note that from (\ref{eq:space_Meps}), $\mathcal{M}_{{\rm BTV}} ({\mathbb R}^{d}) \subset \mathcal{M}_{U} ({\mathbb R}^{{d}}) \subset \mathcal{M}_{0} ({\mathbb R}^{d})$. The following lemma shows that $\mathcal{M}_{\rm BTV}$ is invariant under the heat equation, and gives bound on the rate of decay in $L^q$ of solutions when $u_0\in\mathcal{M}_{\rm BTV}$. \begin{lemma}\label{lem:heat_total_variation} For $\mu \in \mathcal{M}_{{\rm BTV}} ({\mathbb R}^{d}) $ the solution of the heat equation given by (\ref{eq:solution_heat_up_t=0}) satisfies \begin{displaymath} \|S(t)\mu \|_{{\rm BTV}} \leq \|\mu \|_{{\rm BTV}}, \quad t>0, {\rm e}nd{displaymath} and for every $1\leq q \leq \infty$ \begin{displaymath} \|S(t)\mu \|_{L^{q}({\mathbb R}^{d})} \leq (4\pi t)^{-\frac{{d}}{2}(1-\frac{1}{q})} \|\mu \|_{{\rm BTV}}, \quad t>0 . {\rm e}nd{displaymath} {\rm e}nd{lemma} \begin{proof} Observe that since for every $\varphi \in C_{c}({\mathbb R}^{d})$ and $0\leq t< \infty$, $u(t) =S(t)\mu$ satisfies (\ref{Fubbini_4-S(t)}) that is, \begin{displaymath} \int_{{\mathbb R}^{d}} u(t) \varphi = \int_{{\mathbb R}^{d}} S(t)\varphi \, {\rm d} \mu {\rm e}nd{displaymath} then \begin{displaymath} \left| \int_{{\mathbb R}^{d}} u(t) \varphi {\rm i}ght| \leq \|S(t)\varphi\|_{L^{\infty}({\mathbb R}^{d})} \|\mu\|_{{\rm BTV}} . {\rm e}nd{displaymath} Therefore the estimates (\ref{eq:heat_LpLq_estimates}) give, for every $1\leq q \leq \infty$, \begin{displaymath} \left| \int_{{\mathbb R}^{d}} u(t) \varphi {\rm i}ght| \leq (4\pi t)^{-\frac{{d}}{2q'} } \|\varphi\|_{L^{q'}({\mathbb R}^{d})} \|\mu\|_{{\rm BTV}} {\rm e}nd{displaymath} and the claims follow. Note that in particular, for $q=1$ since $u(t) \in L^{1}_{{\rm loc}}({\mathbb R}^{d}) \cap \mathcal{M}_{{\rm BTV}} ({\mathbb R}^{d})$ then $u(t)\in L^{1}({\mathbb R}^{d})$. {\rm e}nd{proof} \subsection{Finite-time blowup for non-negative initial data} \label{sec:finite-time-blowup} Now we turn to non-negative solutions that may not exist for all time, that is, according to Proposition \ref{prop:global_existence_L10}, $0\leq u_{0}\notin \mathcal{M}_{0}({\mathbb R}^{d})$. Lemma \ref{whyL1e} shows that the maximal existence time for the solution arising from the non-negative initial condition $0\leq u_0\in \mathcal{M}_{\rm loc}({\mathbb R}^{d})$ will be determined by its `optimal index' \begin{equation}\label{eps0} \varepsilon_{0}(\mu) := \inf\{\varepsilon: \ \mu \in \mathcal{M}_{\varepsilon}({\mathbb R}^{d}) \} = \sup\{\varepsilon: \ \mu \notin \mathcal{M}_{\varepsilon}({\mathbb R}^{d}) \} \leq \infty. 
{\rm e}nd{equation} The simplest example is to take $A>0$ and consider $u_0(x)={\rm e}^{A|x|^2}$; then $u_0\in L^1_\varepsilon({\mathbb R}^{d})$ if and only if $\varepsilon>A$, so in this case $\varepsilon_0(u_0)=A$ but $u_0\notin L^1_A({\mathbb R}^{d})$. If we set $T=1/4A$ then the integral in (\ref{eq:solution_heat_up_t=0}) can be computed explicitly and one gets \begin{equation} \label{eq:blowing_up_quadratic_exponential} u(x,t) = \frac{T^{{d}/2}}{ (T-t)^{{d}/2}} {\rm e}^{\frac{|x|^2}{4(T-t)}}, {\rm e}nd{equation} which satisfies the heat equation for $t\in (0,T)$, has $u(x,0)= u_{0}(x)$, and blows up at every point $x\in{\mathbb R}^{d}$ as $t\to T$. At the other extreme is an initial condition like $$ u_0(x)={\rm e}^{A|x|^2-\gamma|x|^\alpha} $$ for some $\gamma>0$ and $1<\alpha<2$, which we treat as Example \ref{howodd}, below. In this case $\lim_{t\to 1/4A}u(x,t)$ exists for every $x\in{\mathbb R}^{d}$, but the solution cannot be extended past $t=T$. Below we analyse the behaviour for a general non-negative initial condition $u_0$ that is not an element of $\mathcal{M}_{0}({\mathbb R}^{d})$. While the time span of the solution does not depend specifically on any fine properties of the initial data, but only its asymptotic growth as $|x| \to \infty$ (in terms of its optimal index), the existence or otherwise of a finite limit as $t\to T$ is more delicate. We will see below that at the maximal existence time a number of different behaviours are possible: from complete blowup, as in the example (\ref{eq:blowing_up_quadratic_exponential}) above, to the existence of a finite limit at all points in space. We will show that by `tuning' the initial data it is possible to obtain solutions with a finite limit only at any chosen convex subset of ${\mathbb R}^{d}$. These results, in turn, will depend on the integrability at the optimal index of the translate of the initial data. In the case of pointwise-defined functions, any translation $\tau_{y} f(x) := f(x-y)$ has the same optimal index as $f$, since whenever $f\in L^1_\varepsilon({\mathbb R}^{d})$ we have $\tau_yf\in L^1_{\rm d}elta ({\mathbb R}^{d})$ for any ${\rm d}elta>\varepsilon$: \begin{displaymath} \int_{{\mathbb R}^{d}} {\rm e}^{-{\rm d}elta |x|^{2}} |f(x-y)|\, {\rm d} x = \int_{{\mathbb R}^{d}} {\rm e}^{-{\rm d}elta |z+y|^{2}} |f(z)|\, {\rm d} z\le {\rm e}^{{\rm d}elta (\frac{1}{\alpha}-1) |y|^{2} } \int_{{\mathbb R}^{d}} {\rm e}^{-{\rm d}elta (1-\alpha) |z|^{2}} |f(z)|\, {\rm d} z<\infty {\rm e}nd{displaymath} for ${\rm d}elta (1-\alpha) \geq \varepsilon$, using {\rm e}qref{sqlower}. However, whether or not $\tau_{y} f \in L^{1}_{\varepsilon_{0}(f)}({\mathbb R}^{d})$ depends strongly on the decay at infinity of ``lower order terms'' of $f$ as the examples below will show. For the case of measures, observe that we can define translations of measures via the formula that would hold for a locally integrable $f$ and $\varphi \in C_{c}({\mathbb R}^{d})$, namely \begin{displaymath} \int_{{\mathbb R}^{d}} \tau_{y} f(x) \varphi(x) \,{\rm d} x = \int_{{\mathbb R}^{d}} f(z) \varphi (z+y) \, {\rm d} z = \int_{{\mathbb R}^{d}} f (z)\tau_{-y} \varphi(z)\, {\rm d} z . {\rm e}nd{displaymath} That is, for $y\in {\mathbb R}^{d}$ and $\mu \in \mathcal{M}_{\rm loc} ({\mathbb R}^{d})$ we define $\tau_y\mu$ by setting \begin{displaymath} \int_{{\mathbb R}^{{d}}} \varphi (x)\, {\rm d} \tau_{y} \mu(x) := \int_{{\mathbb R}^{d}} \tau_{-y} \varphi(z) \, {\rm d} \mu(z)\quad\mbox{for every}\quad \varphi \in C_{c}({\mathbb R}^{d}). 
{\rm e}nd{displaymath} Hence $\tau_{y} \mu \in \mathcal{M}_{\rm loc} ({\mathbb R}^{d})$ and is a positive measure whenever $\mu$ is. It also follows that if $\mu = \mu^{+}-\mu^{-}$ then \begin{displaymath} \tau_{y} \mu = \tau_{y} \mu^{+} - \tau_{y} \mu^{-} \qquad\mbox{and}\qquad |\tau_{y} \mu| = \tau_{y} |\mu| . {\rm e}nd{displaymath} \begin{lemma}\label{lem:translation_of_measures} For $\mu \in \mathcal{M}_\varepsilon({\mathbb R}^{d})$ and $y\in {\mathbb R}^{{d}}$, we have $\tau_y\mu \in \mathcal{M}_{\rm d}elta ({\mathbb R}^{d})$ for any ${\rm d}elta>\varepsilon$ and \begin{displaymath} \int_{{\mathbb R}^{{d}}} {\rm e}^{-{\rm d}elta |x|^{2}} \, {\rm d} |\tau_{y} \mu(x) |= \int_{{\mathbb R}^{{d}}} {\rm e}^{-{\rm d}elta |x+y|^{2}} \, {\rm d} |\mu(x)| . {\rm e}nd{displaymath} In particular, $\tau_{y} \mu $ has the same optimal index as $\mu$. {\rm e}nd{lemma} \begin{proof} Take $\phi_{k} \in C_{c}({\mathbb R}^{{d}})$ such that $0\leq \phi_{k} \leq 1$ and $\phi_{k} \to 1$ as $k\to \infty$ monotonically in compact sets of ${\mathbb R}^{{d}}$. Then \begin{displaymath} \int_{{\mathbb R}^{{d}}} \phi_{k} (x) {\rm e}^{-{\rm d}elta |x|^{2}} \, {\rm d} |\tau_{y} \mu(x) | = \int_{{\mathbb R}^{{d}}} \phi_{k} (x) {\rm e}^{-{\rm d}elta |x|^{2}} \, {\rm d} \tau_{y} |\mu(x) | = \int_{{\mathbb R}^{{d}}} \phi_{k} (x+y) {\rm e}^{-{\rm d}elta |x+y|^{2}} \, {\rm d} |\mu(x) | . {\rm e}nd{displaymath} Using (\ref{sqlower}), the right-hand side above is bounded by \begin{displaymath} {\rm e}^{{\rm d}elta (\frac{1}{\alpha}-1) |y|^{2} } \int_{{\mathbb R}^{d}} {\rm e}^{-{\rm d}elta (1-\alpha) |x|^{2}} \, {\rm d} |\mu(x)| < \infty {\rm e}nd{displaymath} for ${\rm d}elta (1-\alpha) \geq \varepsilon$. Then Fatou's Lemma gives ${\rm d}isplaystyle \int_{{\mathbb R}^{{d}}} {\rm e}^{-{\rm d}elta |x|^{2}} \, {\rm d} |\tau_{y} \mu(x) | < \infty$. Now the Monotone Convergence Theorem gives the result. {\rm e}nd{proof} Now we can prove the following result on the pointwise behaviour, as $t\to T$ of the solution of the heat equation (\ref{eq:solution_heat_up_t=0}) with initial data $0\leq u_{0}\notin \mathcal{M}_{0}({\mathbb R}^{d})$. \begin{theorem} \label{thr:blow-up} Assume that $0\leq u_{0} \in \mathcal{M}_{\rm loc}({\mathbb R}^{d})$ and that the optimal index $\varepsilon_0=\varepsilon_0(u_{0})$, see {\rm e}qref{eps0}, satisfies $0< \varepsilon_{0} <\infty$. Then the solution $u$ of the heat equation given by {\rm e}qref{eq:solution_heat_up_t=0} is not defined (at any point $x\in {\mathbb R}^{d}$) beyond $T:=1/4\varepsilon_0$. Furthermore as $t\to T$ \begin{displaymath} u(x,t) \to \begin{cases} u(x,T) & \mbox{if }\tau_{-x} u_{0} \in \mathcal{M}_{\varepsilon_{0}}({\mathbb R}^{d}) \cr \infty & \mbox{if }\tau_{-x} u_{0} \notin \mathcal{M}_{\varepsilon_{0}}({\mathbb R}^{d}). {\rm e}nd{cases} {\rm e}nd{displaymath} {\rm e}nd{theorem} \begin{proof} It follows from Lemma \ref{whyL1e} that if $u(x,t)$ is finite for some $x\in {\mathbb R}^{d}$ and $t> T$ then $u_{0} \in \mathcal{M}_{\varepsilon}({\mathbb R}^{d})$ for some $\varepsilon$ with $\varepsilon_{0} > \varepsilon>\frac{1}{4t}$, which is impossible. To analyse the limiting behaviour as $t\to T$, using Lemma \ref{lem:translation_of_measures} we first write, for $t<T$, \begin{displaymath} u(x,t) = \frac{1}{(4\pi t)^{{d}/2}} \int_{{\mathbb R}^{d}} {\rm e}^{-\frac{|x-y|^2}{4t}} \, {\rm d} u_0(y) = \frac{1}{(4\pi t)^{{d}/2}} \int_{{\mathbb R}^{d}} {\rm e}^{-\frac{|z|^2}{4t}} \, {\rm d} \tau_{-x} u_0(z) . 
{\rm e}nd{displaymath} Now, if $\tau_{-x}u_{0} \notin \mathcal{M}_{\varepsilon_{0}}({\mathbb R}^{d})$ then by Fatou's Lemma \begin{displaymath} u(x,t) = \frac{1}{(4\pi t)^{{d}/2}} \int_{{\mathbb R}^{d}} {\rm e}^{-\frac{|y|^2}{4t}} \,{\rm d} \tau_{-x}u_0(y) \to \infty , \quad t\to T. {\rm e}nd{displaymath} On the other hand, if $\tau_{-x}u_{0} \in \mathcal{M}_{\varepsilon_{0}}({\mathbb R}^{d})$ then ${\rm e}^{-\frac{|y|^2}{4t}} \leq {\rm e}^{-\varepsilon_{0}|y|^2} $ and using the Monotone Convergence Theorem it follows that as $t\to T$, \begin{displaymath} u(x,t) = \frac{1}{(4\pi t)^{{d}/2}} \int_{{\mathbb R}^{d}} {\rm e}^{-\frac{|y|^2}{4t}} \,{\rm d} \tau_{-x}u_0(y) \to \frac{1}{(4\pi T)^{{d}/2}} \int_{{\mathbb R}^{d}} {\rm e}^{-\varepsilon_{0} |y|^2} \,{\rm d} \tau_{-x}u_0(y) <\infty .\qedhere {\rm e}nd{displaymath} {\rm e}nd{proof} For a given initial data $0\leq u_{0} \in \mathcal{M}_{\rm loc}({\mathbb R}^{d})$ with optimal index $0<\varepsilon_0=\varepsilon_0(u_{0}) <\infty$ we now analyse the `regular set' of points $x\in {\mathbb R}^{{d}}$ such that the solution of the heat equation has a finite limit as $t\to T:=1/4\varepsilon_0$ as in Theorem \ref{thr:blow-up}. For short we define $A:=\varepsilon_0(u_{0})>0$. Then observe that if no translation of $u_{0}$ satisfies $\tau_{-x} u_{0} \in \mathcal{M}_{\varepsilon_{0}}({\mathbb R}^{d})$ then the solution $u(x,t)$ of the heat equation diverges to infinity at every point in ${\mathbb R}^{d}$ as $t \to T=1/4A$. Otherwise, assume that $u_{0} \in \mathcal{M}_{\varepsilon_{0}}({\mathbb R}^{d})$; then \begin{equation} \label{eq:choosing_u0} u_0(x)={\rm e}^{A|x|^2} v(x), \quad x\in {\mathbb R}^{{d}} {\rm e}nd{equation} where $0\le v\in \mathcal{M}_{{\rm BTV}}({\mathbb R}^{d})$. If, on the contrary $u_{0} \notin \mathcal{M}_{\varepsilon_{0}}({\mathbb R}^{d})$ then for some $x_{0} \in {\mathbb R}^{{d}}$ we have $v_{0}:= \tau_{-x_{0}} u_{0} \in \mathcal{M}_{\varepsilon_{0}}({\mathbb R}^{d})$ and then $v_{0}$ is as in (\ref{eq:choosing_u0}), while from Lemma \ref{lem:translation_of_measures} we obtain \begin{displaymath} u(x,t,u_{0}) = u(x-x_{0},t, v_{0}), \quad x\in {\mathbb R}^{{d}} {\rm e}nd{displaymath} so it suffices to study the `regular set' of an initial data as in (\ref{eq:choosing_u0}) with $0\le v\in \mathcal{M}_{{\rm BTV}}({\mathbb R}^{d})$. For simplicity in the exposition we will restrict to the case $0\le v\in L^{1}({\mathbb R}^{d})$. In such a case we have the following result that shows that the `regular set' of $x\in{\mathbb R}^{d}$ at which $u(x,t)$ has a finite limit as $t\to T$ must be a convex set. For a converse result see Proposition \ref{anyconvex} below. \begin{lemma} \label{lem:blowup-set} Assume $u_{0}$ is as in {\rm e}qref{eq:choosing_u0} with $0\leq v \in L^{1}_{loc}({\mathbb R}^{{d}})$, so that $\varepsilon_0(u_{0})=A$. Then \begin{enumerate} \item[\rm (i)] $\tau_{-x} u_{0} \in L^{1}_{\varepsilon_{0}}({\mathbb R}^{d})$ iff $I_v(x):= {\rm d}isplaystyle \int_{{\mathbb R}^{{d}}} {\rm e}^{2A \langlex,z\rangle} v(z) \, {\rm d} z < \infty$. \item[\rm (ii)] If moreover $0\le v\in L^{1}({\mathbb R}^{d})$ then the set of $x\in {\mathbb R}^{{d}}$ such that $I_v(x) < \infty$ is a convex set that contains $x=0$. {\rm e}nd{enumerate} {\rm e}nd{lemma} \begin{proof} Since ${\rm e}^{-A|y|^{2}} \tau_{-x} u_0 (y) = {\rm e}^{A|x|^{2}} {\rm e}^{2A \langlex,y\rangle} v(x+y) = {\rm e}^{-A|x|^{2}} {\rm e}^{2A \langlex,x+y\rangle} v(x+y) $ part (i) follows. 
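Explicitly (we record the integrated form of this identity, since it is what is used repeatedly in the examples below): integrating in $y$ and substituting $z=x+y$ gives
\begin{displaymath}
\int_{{\mathbb R}^{{d}}} {\rm e}^{-A|y|^{2}} \, \tau_{-x} u_0 (y) \, {\rm d} y = {\rm e}^{-A|x|^{2}} \int_{{\mathbb R}^{{d}}} {\rm e}^{2A \langle x,z\rangle} v(z) \, {\rm d} z = {\rm e}^{-A|x|^{2}}\, I_{v}(x) ,
\end{displaymath}
so that $\tau_{-x} u_{0} \in L^{1}_{A}({\mathbb R}^{d})$ if and only if $I_{v}(x)<\infty$.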
For part (ii) note that since $v\in L^1({\mathbb R}^{d})$, it is always the case that $$ \int_{\langle x,y\rangle\le0}{\rm e}^{\lambda 2A\langle x,y\rangle}v(y)\,{\rm d} y<\infty $$ whenever $\lambda\ge0$. Now observe that if $I_v(x)=\infty$ then for any $\lambda\ge1$ $$ \int_{\langle x,y\rangle > 0} {\rm e}^{\lambda 2A \langle x,y\rangle} v(y) \, {\rm d} y = \infty, $$ i.e.\ $I_v(\lambda x)= \infty$ for any $\lambda \geq 1$. Consider now $x_{1}, x_{2}$ such that $I_v(x_{i}) < \infty$ and take $\theta \in (0,1)$. Then $I_v(\theta x_{i}) < \infty$ and $I_v((1-\theta) x_{i}) < \infty$ and \begin{displaymath} I_v(\theta x_{1}+ (1-\theta) x_{2}) = \int_{{\mathbb R}^{{d}}} {\rm e}^{2A \theta\langle x_{1},y\rangle} {\rm e}^{2A (1-\theta)\langle x_{2},y\rangle} v(y) \, {\rm d} y . \end{displaymath} Observe that the integral over the regions where either $\langle x_{1},y\rangle \leq 0$ or $\langle x_{2},y\rangle \leq 0$ is finite, while \begin{displaymath} \int_{\{ \langle x_{1},y\rangle >0, \ \langle x_{2},y\rangle >0\} } {\rm e}^{2A \theta\langle x_{1},y\rangle} {\rm e}^{2A (1-\theta)\langle x_{2},y\rangle} v(y) \, {\rm d} y \end{displaymath} can be split over the regions $\langle x_{1},y\rangle \geq \langle x_{2},y\rangle$ and $\langle x_{1},y\rangle < \langle x_{2},y\rangle$. On the first of these the integrand is bounded by ${\rm e}^{2A\langle x_{1},y\rangle}v(y)$ and on the second by ${\rm e}^{2A\langle x_{2},y\rangle}v(y)$, so both integrals are finite since $I_v(x_{1}), I_v(x_{2})<\infty$, which completes the proof. \end{proof} Now we give some examples of initial data as in (\ref{eq:choosing_u0}) and explicitly compute their regular sets. In our first example $I_v(x)$ is never finite so we obtain complete blowup, generalising the example in (\ref{eq:blowing_up_quadratic_exponential}). \begin{example} If we take $v$ bounded with $v(x) \geq c >0$ for all $x\in {\mathbb R}^{{d}}$ (e.g.\ $v\equiv c$) then $\varepsilon_0(u_0)=A$ and in \eqref{eq:choosing_u0} we have $\tau_{-x}u_0\notin L^1_A({\mathbb R}^{d})$ for every $x\in{\mathbb R}^{d}$. \end{example} In this case the solution $u$ of the heat equation given by (\ref{eq:solution_heat_up_t=0}) and initial data (\ref{eq:choosing_u0}) blows up at every point in ${\mathbb R}^{{d}}$ at time $T=\frac{1}{4A}$. In our next example $I_v(x)$ is only finite at $x=0$ so the regular set consists of a single point at the origin. Thus the solution $u$ of the heat equation given by (\ref{eq:solution_heat_up_t=0}) has a finite limit at $x=0$ as $t\to T$, but blows up at all other points of ${\mathbb R}^{d}$. \begin{example} When $v(x) = (1+|x|^{2})^{-\alpha/2}$ with $\alpha > {d}$ we have $\varepsilon_{0}(u_0) =A$ and in (\ref{eq:choosing_u0}) we have $\tau_{-x} u_0 \in L^{1}_{A}({\mathbb R}^{d})$ only when $x=0$. \end{example} To see this we write $y = sx + y'$ with $y'\perp x$ to get \begin{displaymath} I_{v}(x)= \int_{{\mathbb R}^{{d}-1}} \int_{-\infty}^{\infty} \frac{{\rm e}^{2A|x|^{2} s}}{(1+s^{2}|x|^{2} + |y'|^{2})^{\alpha/2}} \, {\rm d} s\, {\rm d} y' . \end{displaymath} If $x\neq 0$ then the integral in $s$ is infinite for each $y'\in {\mathbb R}^{{d}-1}$. In the next example $I_v(x)$ is finite only in the open ball $|x|<\gamma/2A$; the solution $u$ of the heat equation given by (\ref{eq:solution_heat_up_t=0}) has a finite limit here as $t\to T$, but blows up at all other points of ${\mathbb R}^{d}$. \begin{example} Take $v(x) = {\rm e}^{-\gamma |x| }$ with $\gamma >0$. Then $\varepsilon_{0}(u_0) =A$ and in (\ref{eq:choosing_u0}) we have $\tau_{-x} u_0 \in L^{1}_{A}({\mathbb R}^{d})$ if and only if $|x| <\frac{\gamma}{2A}$.
\end{example} To see this note first that \begin{displaymath} 0\leq {\rm e}^{2A \langle x,y\rangle} v(y) \leq {\rm e}^{(2A |x|-\gamma )|y|}, \end{displaymath} which is integrable if $|x|< \frac{\gamma}{2A}$. On the other hand, writing $y = sx + y'$ with $y'\perp x$, \begin{displaymath} I_{v}(x)= \int_{{\mathbb R}^{{d}-1}} \int_{-\infty}^{\infty} {\rm e}^{2A |x|^{2}s} {\rm e}^{-\gamma \sqrt{s^{2} |x|^{2} +|y'|^{2}}} \, {\rm d} s\, {\rm d} y'. \end{displaymath} If $2A |x|^{2} -\gamma |x| \geq 0$, that is, $|x| \geq \frac{\gamma}{2A}$, then the integral in $s$ is infinite for each $y'\in {\mathbb R}^{{d}-1}$. It is also possible to make $I_{v}(x)$ finite only on a closed ball. \begin{example} Take $v(x) = {\rm e}^{-\gamma |x| } (1+|x|^{2})^{-\alpha/2}$ with $\alpha > {d}$ and $\gamma >0$. Then $\varepsilon_{0}(u_0) =A$ and in (\ref{eq:choosing_u0}) we have $\tau_{-x} u_0 \in L^{1}_{A}({\mathbb R}^{d})$ if and only if $|x| \leq \frac{\gamma}{2A}$. \end{example} To see this note first that \begin{displaymath} 0\leq {\rm e}^{2A \langle x,y\rangle} v(y) \leq {\rm e}^{(2A |x|-\gamma )|y|} (1+|y|^{2})^{-\alpha/2}, \end{displaymath} which is integrable if $|x|\leq \frac{\gamma}{2A}$. On the other hand, if $x\neq 0$, writing $y = sx + y'$ with $y'\perp x$, \begin{displaymath} I_{v}(x)= \int_{{\mathbb R}^{{d}-1}} \int_{-\infty}^{\infty} {\rm e}^{2A |x|^{2}s} {\rm e}^{-\gamma \sqrt{s^{2} |x|^{2} +|y'|^{2}}} \frac{1}{(1+s^{2}|x|^{2} + |y'|^{2})^{\alpha/2}} \, {\rm d} s\, {\rm d} y'. \end{displaymath} If $2A |x|^{2} -\gamma |x| > 0$, that is, $|x| > \frac{\gamma}{2A}$, then the integral in $s$ is infinite for each $y'\in {\mathbb R}^{{d}-1}$. Our last example is perhaps the most striking: here $I_v(x)$ is finite for all $x\in {\mathbb R}^{{d}}$. \begin{example}\label{howodd} Take $v(x) = {\rm e}^{-\gamma |x|^{\alpha}}$ with $\gamma >0$ and $1<\alpha <2$. Then $\varepsilon_{0}(u_0) =A$ and in (\ref{eq:choosing_u0}) we have $\tau_{-x} u_0 \in L^{1}_{A}({\mathbb R}^{d})$ for any $x\in {\mathbb R}^{d}$. \end{example} To see this note that \begin{displaymath} 0\leq {\rm e}^{2A \langle x,y\rangle} v(y) = {\rm e}^{2A \langle x,y\rangle -\gamma |y|^{\alpha}} \leq {\rm e}^{2A |x| |y| -\gamma |y|^{\alpha}} \in L^{1}({\mathbb R}^{d}). \end{displaymath} Thus for the initial data $u_0(x)={\rm e}^{A|x|^2-\gamma|x|^\alpha}$ the solution $u(x,t)$ of the heat equation takes a finite value at every point in ${\mathbb R}^{d}$ at $T=1/4A$, but cannot be continued beyond this time. We now show that in fact we can arrange for the regular set, $\{x:\ I_v(x)<\infty\}$, to be any chosen closed convex subset of ${\mathbb R}^{d}$. First we recall the following characterisation of such sets. \begin{lemma} \label{lem:closed_convex_set} Any closed convex set $K\subseteq {\mathbb R}^{{d}}$ with $0 \in K$ is of the form $$ K= \bigcap_{j\in J} \{x:\ \langle x, n_j\rangle \le c_j\} $$ for some unit vectors $n_{j}$ and $c_{j} \geq 0$, where $J$ is at most countable. \end{lemma} \begin{proof} Note first that $K$ is the intersection of all closed half spaces containing $K$. Then observe that $K= \bigcap_{y \in \partial K} \{x:\ \langle x, n(y)\rangle \le c(y)\}$ for some unit vectors $n(y)$ and constants $c(y) \geq 0$, see e.g.\ \cite{S}. This implies that $\bigcup_{y \in \partial K} \{x:\ \langle x, n(y)\rangle > c(y)\}$ is an open covering of the open set ${\mathbb R}^{{d}} \setminus K$. Thus we can extract an at most countable subcovering, and the corresponding closed half spaces still have intersection equal to $K$.
{\rm e}nd{proof} Note that the form of $K_0$ in the following results is more general than that given by the previous lemma; in particular it allows for any closed convex set. \begin{proposition}\label{anyconvex} Assume that $K \subset {\mathbb R}^{{d}}$ is a convex set given by the intersection of at most a countable number of half spaces, that is, $K= x_{0} + K_{0}$ with \be{convexK} 0\in K_{0}= \bigcap_{j\in J_{1}} \{x:\ \langlex, n_j\rangle \leq c_j\} \cap \bigcap_{j\in J_{2}} \{x:\ \langlex, n_j\rangle < c_j\} {\rm e}e for some unit vectors $n_{j}$ and $c_{j} \geq 0$ for $j\in J_{1}$ and $c_{j} >0$ for $j\in J_{2}$, where $J_{1}$ and $J_{2}$ are at most countable and $\inf_{j\in J_{2}} c_{j} >0$. Then there exist $0\leq v \in L^{1}_{{\rm loc}}({\mathbb R}^{{d}})$ such that \begin{displaymath} I_{v}(x) < \infty \quad \mbox{if and only if }x\in K: {\rm e}nd{displaymath} the solution $u$ of the heat equation with initial data $u_{0}(x)= {\rm e}^{A|x|^2}v(x)$ has a finite limit at every point $x \in K$ but blows up at every other point in ${\mathbb R}^{{d}}$ as $t\to \frac{1}{4A}$. {\rm e}nd{proposition} \begin{proof} Assume first that $x_{0}=0$. Take any orthonormal basis $\mathcal{B} = \{e_{j}\}_{j}$ of ${\mathbb R}^{{d}}$. Using coordinates with respect to this basis, write $x\in{\mathbb R}^{{d}}$ as $(x_{1},x')$ and $y=(y_{1},y')$. We choose ${\rm e}ta_0\geq 0$ and set \begin{displaymath} 0\leq v(x)=\begin{cases}\chi(x')\phi(x_{1}){\rm e}^{-{\rm e}ta_0 x_{1}},&x_{1}>0\\ 0&x_{1}<0, {\rm e}nd{cases} \quad 0\leq w(x)=\begin{cases}\chi(x') {\rm e}^{-{\rm e}ta_0 x_{1}},&x_{1}>0\\ 0&x_{1}<0, {\rm e}nd{cases} {\rm e}nd{displaymath} where $\chi$ is the characteristic function of the unit ball in ${\mathbb R}^{{d}-1}$ and $\phi(s)=\frac{1}{1+s^2}$. Note that $v \in L^{1}({\mathbb R}^{{d}})$ with $\|v\|_{ L^{1}({\mathbb R}^{{d}})} \leq M$ and if ${\rm e}ta_{0} >0$ then $w \in L^{1}({\mathbb R}^{{d}})$ with $\|w\|_{ L^{1}({\mathbb R}^{{d}})} \leq \frac{M}{{\rm e}ta_{0}}$ with $M$ independent of ${\rm e}ta_{0}$. Also, for $x\in {\mathbb R}^{{d}}$ \begin{align*} I_{v}(x) = \int_{{\mathbb R}^{{d}}} {\rm e}^{2A\langlex, y\rangle} v(y) \,{\rm d} y & = \int_0^\infty \int_{|y'|\le 1} {\rm e}^{(2A x_{1}- {\rm e}ta_0) y_{1}} {\rm e}^{2A \langlex', y' \rangle} \phi(y_{1}) \,{\rm d} y' \,{\rm d} y_{1} \\ & = \left( \int_{|y'|\le 1}{\rm e}^{2A \langle x', y'\rangle} \,{\rm d} y'{\rm i}ght) \left( \int_0^\infty {\rm e}^{ (2A x_{1}-{\rm e}ta_0) y_{1}}\phi(y_{1})\,{\rm d} y_{1}{\rm i}ght). {\rm e}nd{align*} The first factor is always finite, and the second is finite if $x_{1} \leq \frac{{\rm e}ta_{0}}{2A}$ and infinite if $x_{1} > \frac{{\rm e}ta_{0}}{2A}$. So choosing ${\rm e}ta_{0} \geq 0$ appropriately, for any given $c\geq 0$ we can ensure that $I_{v}(x) < \infty$ iff $x\in \{z:\ z_{1} \leq c\}$. An analogous computation with $w$ gives that for any given $c > 0$ we obtain $I_{w}(x) < \infty$ iff $x\in \{z:\ z_{1} < c\}$. Hence for any given unit vector $n$ and $c\geq 0$ (respectively $c>0$) we can find an integrable function $v=v(n,c)$ ($w=w(n,c)$ respectively) such that \begin{displaymath} I_{v}(x) <\infty \quad \mbox{ only in the half space $\langlex, n\rangle \leq c$ \quad ($\langlex, n\rangle < c$ respectively)} {\rm e}nd{displaymath} with $\|v\|_{ L^{1}({\mathbb R}^{{d}})}$ bounded independent of $n$ and $c\geq 0$ ($\|w\|_{ L^{1}({\mathbb R}^{{d}})} \leq \frac{M}{c}$, $M$ independent of $n$). 
Based on the assumed form of $K_{0}$ in {\rm e}qref{convexK} we set \begin{displaymath} v_{0}(x)=\sum_{j \in J_{1}} j^{-2}v_j(x) + \sum_{j \in J_{2}} j^{-2}w_j(x) {\rm e}nd{displaymath} where $v_j(x) = v(n_{j}, c_{j})(x)$ and $w_j(x) = w(n_{j}, c_{j})(x)$ are constructed as above. Since $\inf_{j\in J_{2}} c_{j} >0$ then $v_{0} \in L^{1}({\mathbb R}^{{d}}) $ and clearly $I_{v_{0}}(x) <\infty$ iff $x\in K_{0}$. Now, for $x_{0} \neq 0$, define \begin{displaymath} v(x) = e^{-2A \langlex_{0},x\rangle} v_{0}(x), \quad x\in {\mathbb R}^{{d}} . {\rm e}nd{displaymath} Then $I_{v}(x) = {\rm d}isplaystyle \int_{{\mathbb R}^{{d}}} {\rm e}^{2A \langlex-x_{0}, y\rangle} v_{0}(y) \, {\rm d} y <\infty$ iff $x-x_{0}\in K_{0}$. {\rm e}nd{proof} \subsection{Continuation of signed solutions} \label{sec:cont-sign-solut} For signed solutions the maximal existence time of the solution may not be given by $T= \frac{1}{4\varepsilon_{0}(u_{0})}$ as in Theorem \ref{thr:blow-up}; see Section \ref{sec:prescribed_g_at_x=0}. However, we can establish the following continuation result. \begin{proposition} \label{prop:prolongation} Assume that $u_{0} \in \mathcal{M}_{\varepsilon}({\mathbb R}^{d})$ and that $u(t;u_{0}) = S(t) u_{0}$ given by (\ref{eq:solution_heat_up_t=0}) is defined on $[0,T)$ but cannot be defined any time after. Then for any ${\rm d}elta >0$ \begin{displaymath} \limsup_{t\to T} \|u(t,u_{0})\|_{ L^1_{{\rm d}elta}({\mathbb R}^{d})} = \infty . {\rm e}nd{displaymath} {\rm e}nd{proposition} \begin{proof} Assume otherwise that for some ${\rm d}elta >0$ (which, without loss of generality, we can take such that ${\rm d}elta > \varepsilon$) \begin{displaymath} \|u(t,u_{0})\|_{ L^1_{{\rm d}elta}({\mathbb R}^{d})} \leq M, \quad 0\leq t < T. {\rm e}nd{displaymath} Take $t_{0} <T$ such that $T < t_{0} + T({\rm d}elta)/2$ and define $v_{0} = S(t_{0}) u_{0} \in L^1_{{\rm d}elta}({\mathbb R}^{d})$. Then define \begin{displaymath} U(t) = \begin{cases} S(t)u_{0} & 0\leq t < t_{0} \\ S(t-t_{0}) v_{0} & t_{0}\leq t < t_{0} + T({\rm d}elta) , {\rm e}nd{cases} \quad 0\leq t < t_{0} + T({\rm d}elta) . {\rm e}nd{displaymath} Then we claim that $U$ satisfies assumptions (\ref{eq:uniqueness_Du_in_L1delta}), (\ref{eq:uniqueness_u_in_L1eps}), (\ref{eq:uniqueness_initial_data_Cc}) in $[0, t_{0} + T({\rm d}elta)) $. Hence, Theorem \ref{thr:uniqueness} implies $U(t) = S(t) u_{0}$ for $0\leq t < t_{0} + T({\rm d}elta)$ which contradicts the maximality of $T$. To prove the claim, notice that (\ref{eq:uniqueness_initial_data_Cc}) is satisfied (using Theorem \ref{thm:properties_sltns_given_u0}). Also, (\ref{eq:uniqueness_u_in_L1eps}) holds because of the assumption on $u$ and by part (i) in Proposition \ref{prop:semigroup_estimates} applied to $S(t-t_{0}) v_{0}$ for $t_{0}\leq t \leq t_{0} + \tau$ for any $\tau <T({\rm d}elta)$. Finally (\ref{eq:uniqueness_Du_in_L1delta}) follows from (\ref{bound_above_x=0}) and (\ref{eq:exponential_bound_further_derivatives}) applied to $S(t)u_{0}$ with $0\leq t < t_{0}$ and $S(t-t_{0}) v_{0}$ with $t_{0}\leq t \leq t_{0} +\tau$ for any $\tau < T({\rm d}elta)$. {\rm e}nd{proof} \section{Long-time behaviour of heat solutions} \label{sec:asympt-behav-heat} \setcounter{equation}{0} We now discuss the asymptotic behaviour as $t\to\infty$ of solutions when $u_{0} \in \mathcal{M}_{0}({\mathbb R}^{d})$. For this Lemma \ref{lem:estimate_from_x=0} will be a central tool. 
We start with some simple consequences of this result, which show that the asymptotic behaviour is largely determined by the behaviour at $x=0$. Observe that the converse of parts (i), (ii), (iii), and (v) are obviously true. \begin{proposition} \label{prop:estimates_from_u(0;t)} Assume that $u_{0} \in \mathcal{M}_{0}({\mathbb R}^{d})$. \begin{itemize} \item[\rm(i)] If $u(0,t,|u_{0}|)$ is bounded for $t>0$ then $u(t,u_{0})$ remains uniformly bounded in sets $\frac{|x|}{\sqrt{t}} \leq R$. In particular, $u \in L^{\infty}_{\rm loc}({\mathbb R}^{d} \times (0,\infty))$. \item[\rm(ii)] If $u(0,t,|u_{0}|) \to 0$ as $t\to \infty$ then $u(t,u_{0})\to 0$ uniformly in sets $\frac{|x|}{\sqrt{t}} \leq R$. {\rm e}nd{itemize} Assume in addition that $u_0\ge0$. Then \begin{itemize} \item[\rm(iii)] If $\lim_{t\to \infty} u(0,t,u_{0})=L \in (0,\infty)$ exists, then $u(x,t, u_{0}) \to L$ uniformly in compact sets of ${\mathbb R}^{d}$. \item[\rm(iv)] If $u(0,t,u_{0})$ is unbounded for $t>0$ then $u(t,u_{0})$ is unbounded in sets $\frac{|x|}{\sqrt{t}} \leq R$, and so in particular unbounded in any compact subset of ${\mathbb R}^{d}$. \item[\rm(v)] If $u(0,t, u_{0}) \to \infty$ as $t\to \infty$ then $u(t,u_{0})\to \infty$ uniformly in sets $\frac{|x|}{\sqrt{t}} \leq R$. {\rm e}nd{itemize} {\rm e}nd{proposition} \begin{proof} (i) and (ii). From the upper bound in Lemma \ref{lem:estimate_from_x=0} in sets with $\frac{|x|^{2}}{t} \leq R$ we get \begin{displaymath} | u(x,t) |\leq c_{{d},a} u(0,at,|u_{0}|) {\rm e}^{\frac{|x|^{2}}{4(a-1)t}} \leq c_{{d},a} u(0,at,|u_{0}|) {\rm e}^{\frac{R^{2}}{4(a-1)}} . {\rm e}nd{displaymath} Assume furthermore that $0\leq u_{0} \in \mathcal{M}_{0}({\mathbb R}^{d})$. Then \noindent (iii) Using the lower and upper bounds in Lemma \ref{lem:estimate_from_x=0} if $|x|^{2}\leq R$ we get for every $b<1<a$ \begin{displaymath} b^{{d}/2} u(0,bt) {\rm e}^{-\frac{R^{2}}{4(1-b)t}} \leq u(x,t) \leq a^{{d}/2} u(0,at) {\rm e}^{\frac{R^{2}}{4(a-1)t}} . {\rm e}nd{displaymath} \noindent (iv) and (v) From the lower bound in Lemma \ref{lem:estimate_from_x=0} in sets with $\frac{|x|^{2}}{t} \leq R$ we get $$ c_{{d},b} u(0,bt) {\rm e}^{-\frac{R^{2}}{4(1-b)}} \leq \inf_{\frac{|x|^2}{t}\leq R} u(x,t).{\rm e}qno\qedhere $$ {\rm e}nd{proof} Recalling the definition of the $\mathcal{M}_\varepsilon({\mathbb R}^{{d}})$ norm (\ref{eq:norm_Meps}) observe that \begin{equation} \label{eq:value_at_zero_and_normMeps} u(0,t,|u_{0}|)=\frac{1}{(4\pi t)^{{d}/2}} \int_{{\mathbb R}^{d}} {\rm e}^{-\frac{|y|^2}{4t}} \,{\rm d} |u_0(y)| = \|u_{0}\|_{\mathcal{M}_{1/4t}({\mathbb R}^{d})} {\rm e}nd{equation} hence Proposition \ref{prop:estimates_from_u(0;t)} could easily be restated in terms of the behavior of the norms $\|u_0\|_{\mathcal{M}_\varepsilon({\mathbb R}^{{d}})}$ as $\varepsilon\to0$. In particular $u(t,u_{0})$ remains uniformly bounded in sets $\frac{|x|}{\sqrt{t}} \leq R$ if and only if \begin{displaymath} \sup_{\varepsilon >0} \| u_{0} \|_{ \mathcal{M}_{\varepsilon} ({\mathbb R}^{{d}})} < \infty . {\rm e}nd{displaymath} Also notice that part (iii) implies that there are no other stationary solutions of (\ref{eq:intro_heat_equation}) other than constants. In other words, a harmonic function in $\mathcal{M}_{0}({\mathbb R}^{d})$ must be constant. \subsection{Sufficient conditions for decay} \label{sec:suff-cond-decay} As observed above, solutions converge to zero as $t\to\infty$ if and only if $$ \lim_{\varepsilon\to0}\|u_0\|_{\mathcal{M}_\varepsilon ({\mathbb R}^{d}) }=0. 
$$ We now give some (non-sharp) conditions to ensure this, in terms of the distribution of the mass of the initial condition, measured via averages over balls. Note that from Proposition \ref{prop:estimates_from_u(0;t)} this behaviour is determined by the value of the solution at $x=0$. \begin{theorem}\label{astti} Suppose that $u_0\in \mathcal{M}_0({\mathbb R}^{d})$. \begin{itemize} \item[\rm(i)] If for some $M>0$ $$ \frac{1}{R^d}\int_{R/2\le|x|\le R} \,{\rm d} |u_0(x)|\le M \qquad\mbox{for every } R>0, $$ then $u$ remains uniformly bounded in sets of the form $\frac{|x|}{\sqrt{t}} \leq R$ for any $R>0$; in particular, $u \in L^{\infty}_{\rm loc}({\mathbb R}^{d} \times (0,\infty))$. \item[\rm(ii)] If $$ \lim_{R\to\infty}\frac{1}{R^d} \int_{R/2\le|x|\le R} \,{\rm d} |u_0(x)|=0, $$ then $u(0,t) \to 0$ as $t\to \infty$ and hence $ u(t) \to 0$ in $L^{\infty}_{\rm loc}({\mathbb R}^{d})$ and uniformly in sets $\frac{|x|}{\sqrt{t}} \leq R$. \end{itemize} Assume in addition that $u_0\ge0$. Then \begin{itemize} \item[\rm(iii)] If $$ \liminf_{R\to\infty} \frac{1}{R^d}\int_{|x|\le R}\,{\rm d} u_0(x) >0, $$ then $\liminf_{t \to \infty} u(0,t) >0$. \item[\rm(iv)] If $$ \lim_{R\to\infty}\frac{1}{R^d}\int_{|x|\le R} \,{\rm d} u_0(x) =\infty $$ then $u(0,t)\to\infty$ as $t\to\infty$. \end{itemize} \end{theorem} \begin{proof} (i) Consider \begin{displaymath} |u(0,t)|\le\frac{1}{(4\pi t)^{d/2}} \int_{{\mathbb R}^d}{\rm e}^{-|x|^2/4t} \,{\rm d} |u_0(x)| =\frac{1}{(4\pi t)^{d/2}} \sum_{k=-\infty}^\infty \int_{2^k\le|x|\le 2^{k+1}} {\rm e}^{-|x|^2/4t} \,{\rm d} |u_0(x)| . \end{displaymath} Note that \begin{align*} \int_{R\le |x|\le 2R} {\rm e}^{-|x|^2/4t} \,{\rm d} |u_0(x)| & \le {\rm e}^{-R^2/4t} \int_{R\le|x|\le 2R} \,{\rm d} |u_0(x)|\\ &\le(2R)^dM {\rm e}^{-R^2/4t} \le C_{{d}} M \int_{R/2\le |x|\le R} {\rm e}^{-|x|^2/4t} \, {\rm d} x , \end{align*} where $C_{{d}}$ depends only on the dimension (one may take $C_{{d}}=2^{d}/[\omega_{{d}}(1-2^{-{d}})]$, with $\omega_{{d}}$ the volume of the unit ball in ${\mathbb R}^{{d}}$), and so \begin{align*} |u(0,t)|&\le C_{{d}}M \, \frac{1}{(4\pi t)^{d/2}} \sum_{k=-\infty}^\infty \int_{2^{k-1}\le|x|\le 2^k}{\rm e}^{-|x|^2/4t} \, {\rm d} x\\ &= C_{{d}}M\,\frac{1}{(4\pi t)^{d/2}} \int_{{\mathbb R}^d}{\rm e}^{-|x|^2/4t } \, {\rm d} x = C_{{d}}M. \end{align*} (ii) Given $\varepsilon>0$ take $k_0>0$ such that $$ \frac{1}{R^d} \int_{R/2\le|x|\le R} \, {\rm d} |u_0(x)| <\varepsilon $$ for all $R\ge R_0:=2^{k_0}$. Then we can split the domain of integration in the integral expression for $|u(0,t)|$ into $|x|\le R_0$ and $|x|>R_0$. The above argument shows that the integral over the unbounded region contributes at most $C_{{d}}\varepsilon$ for all $t>0$, while the integral over the bounded region contributes no more than $$ \frac{1}{(4\pi t)^{d/2}} \int_{|x|\le R_0} \, {\rm d} |u_0(x)| \le \frac{c}{t^{d/2}}. $$ It follows that $|u(0,t)|\to0$ as $t\to\infty$. (iii) There exist $R_0>0$ and $m>0$ such that $$ \frac{1}{R^d} \int_{|x|\le R} \, {\rm d} u_0(x) \ge m>0 $$ for all $R\ge R_0$. Then for $t$ sufficiently large such that $\sqrt t>R_0$ we have \begin{displaymath} u(0,t)\ge\frac{1}{(4\pi t)^{d/2}} \int_{|x|\le\sqrt t} {\rm e}^{-|x|^2/4t} \, {\rm d} u_0(x) \ge\frac{1}{(4\pi t)^{d/2}}{\rm e}^{-1/4}mt^{d/2}=\frac{1}{(4\pi)^{d/2}}{\rm e}^{-1/4} m.
\end{displaymath} (iv) We repeat the above argument, taking $m$ arbitrarily large.\end{proof} \subsection{Wild behaviour of solutions as $t\to\infty$.} \label{sec:wild-behav-solut} Theorem \ref{astti} gives conditions on the averages over balls to distinguish between various time-asymptotic regimes; but in the case that $$ \liminf_{R\to\infty} \frac{1}{R^d}\int_{|x|\le R} \, {\rm d} u_0(x) >0 $$ and $$ \frac{1}{R^d} \int_{|x|\le R} \, {\rm d} u_0(x) \not\to\infty \qquad\mbox{as}\quad R\to\infty $$ some very rich behaviour is possible. In the following theorem we show that there is initial data in $L^1_0({\mathbb R}^{d})$ that gives rise to unbounded oscillating solutions. For bounded oscillations in the case of bounded initial data and solutions, see also Section \ref{sec:rescalling-appr-vazq} below and \cite{VZ2002}. \begin{theorem} \label{thr:oscillation_at_0} For any sequence of non-negative numbers $\{\alpha_{k}\}_{k}$ there exists a non-negative $u_{0} \in L^{1}_{0}({\mathbb R}^{d})$ and a sequence $t_{n}\to \infty$ such that for every $k$ there exists a subsequence $t_{k,j}$ with $u(0,t_{k,j}) \to \alpha_{k}$ as $j\to \infty$. \end{theorem} In fact it is enough to prove the following (apparently weaker) result. \begin{proposition} \label{prop:oscillations} Given any sequence of non-negative numbers $\{b_{k}\}_{k}$ there exists a non-negative $u_{0} \in L^{1}_{0}({\mathbb R}^{d})$ and a sequence $t_{k}\to \infty$ such that, as $k\to \infty$ \begin{displaymath} |u(0,t_{k}) -b_{k}| \to 0 . \end{displaymath} \end{proposition} Indeed, given a sequence $\{\alpha_{k}\}_{k}$ as in the statement of Theorem \ref{thr:oscillation_at_0} construct the sequence $\{b_{k}\}_{k}$ as \begin{displaymath} \alpha_{1} | \alpha_{1} ,\alpha_{2} | \alpha_{1}, \alpha_{2} , \alpha_{3} | \alpha_{1},\ldots, \alpha_{4} | \alpha_{1}, \ldots, \alpha_{5} | \ldots . \end{displaymath} Now apply Proposition \ref{prop:oscillations} and note that for any $k\in {\mathbb N}$ there is a subsequence $\{k_{j}\}_{j}$ such that $b_{k_{j}} = \alpha_{k}$ for every $j$. We now prove Proposition \ref{prop:oscillations}, inspired by the proof of Lemma 6 in \cite{VZ2002}. \begin{proof} Observe, with Proposition \ref{prop:estimates_from_u(0;t)} in mind, that we can write \begin{displaymath} u(0,t) = \frac{1}{(4\pi t)^{{d}/2}} \int_{{\mathbb R}^{d}} {\rm e}^{-\frac{|y|^2}{4t}} u_0(y) \,{\rm d} y = \frac{1}{\pi^{{d}/2}} \int_{{\mathbb R}^{d}} {\rm e}^{-|z|^2} u_0(2\sqrt{t} z) \, {\rm d} z . \end{displaymath} So, setting $\lambda_{n} = 2\sqrt{t_{n}}$, if $\lambda_{n} \to \infty$ then \begin{equation} \label{eq:u(0,tn)} u(0,t_{n}) = \frac{1}{\pi^{{d}/2}} \int_{{\mathbb R}^{d}} {\rm e}^{-|z|^2} u_0(\lambda_{n} z) \, {\rm d} z . \end{equation} We set $c_{{d}} := \frac{1}{\pi^{{d}/2}}$. Consider for $r>1$ the annulus $A(r) =\{y:\ r^{-1} < |y|< r\}$ and given the sequence $\{b_{k}\}_{k}$ consider a function \begin{displaymath} u_{0}(x) = \sum_{j} b_{j} \chi_{\lambda_{j} A(r_{j}) }(x) \end{displaymath} for some increasing and divergent sequences $\{\lambda_{k}\}_{k}$, $\{r_{k}\}_{k}$ chosen recursively as follows: first we choose the $r_{k}$ increasing rapidly with respect to the sequence $\{b_k\}_{k}$, according to \begin{equation} \label{eq:condition_4_near_origin} 2^{k} \beta_{k-1} < r_{k-1}^{{d}} \end{equation} where $\beta_{k} =\max_{1\leq j\leq k} b_{j}$ and \begin{equation} \label{eq:condition_4_mid_term} b_{k} c_{{d}}\int_{{\mathbb R}^{d} \setminus A(r_{k})} {\rm e}^{-|x|^2} \,{\rm d} x < 2^{-k} .
{\rm e}nd{equation} Then choose $\lambda_{k}$ sufficiently large that \begin{equation} \label{eq:condition_4_u0_in_L10} 2^{k} b_{k} r_{k}^{3{d}} < \lambda_{k}^{{d}} {\rm e}nd{equation} and \begin{equation} \label{eq:condition_4_far_away} b_{k} {\rm e}xp\left(-\frac{\lambda_{k}^{2}}{\lambda_{k-1}^{2}r_{k}^{2}}{\rm i}ght) \lambda_{k}^{{d}} r_{k}^{{d}} < 2^{-k} . {\rm e}nd{equation} Finally choose the next value $\lambda_{k+1}$ large enough that \begin{equation} \label{eq:disjoint_annulus} \lambda_{k} r_{k} < \frac{\lambda_{k+1}}{r_{k+1}} . {\rm e}nd{equation} \noindent {\it Step 1.} Observe that from (\ref{eq:disjoint_annulus}) the scaled annulae $\lambda_{j} A(r_{j})$ are disjoint and increasing. \noindent {\it Step 2.} Now we prove that from (\ref{eq:condition_4_u0_in_L10}), we get $u_{0} \in L^{1}_{0}({\mathbb R}^{d})$. For this take any $\varepsilon >0$ and then \begin{displaymath} \int_{{\mathbb R}^{d}} {\rm e}^{-\varepsilon|x|^{2}} u_{0}(x) \,{\rm d} x = \sum_{j} b_{j} \int_{\lambda_{j} A(r_{j}) } {\rm e}^{-\varepsilon |x|^{2}} \,{\rm d} x \leq \sum_{j} b_{j} {\rm e}^{-\varepsilon \lambda_{j}^{2}/r_{j}^{2}} \lambda_{j}^{{d}} r_{j}^{{d}} . {\rm e}nd{displaymath} Now for any $m\in {\mathbb N}$ there exist $R_{\varepsilon}, c_{\varepsilon}$ (depending on $m$ as well) such that if $z\geq R_{\varepsilon}$ then \begin{displaymath} {\rm e}^{-\varepsilon z} \leq \frac{c_{\varepsilon}}{z^{m}} . {\rm e}nd{displaymath} Since from (\ref{eq:disjoint_annulus}) we get $\frac{\lambda_{k}}{r_{k}} \to \infty$, then for some $j_{0} \in {\mathbb N}$, we get \begin{displaymath} \sum_{j\geq j_{0}} b_{j} {\rm e}^{-\varepsilon \lambda_{j}^{2}/r_{j}^{2}} \lambda_{j}^{{d}} r_{j}^{{d}} \leq c_{\varepsilon} \sum_{j\geq j_{0}} b_{j} \frac{r_{j}^{2m+{d}}}{\lambda_{j}^{2m-{d}}} . {\rm e}nd{displaymath} For example with $m={d}$ we get, by (\ref{eq:condition_4_u0_in_L10}), \begin{displaymath} c_{\varepsilon} \sum_{j\geq j_{0}} b_{j} \frac{r_{j}^{3{d}}}{\lambda_{j}^{{d}}} \leq c_{\varepsilon} \sum_{j\geq j_{0}} 2^{-j} < \infty . {\rm e}nd{displaymath} \noindent {\it Step 3.} Now we prove that from (\ref{eq:condition_4_near_origin}), (\ref{eq:condition_4_mid_term}) and (\ref{eq:condition_4_far_away}) then $|u(0,t_{k}) -b_{k}| \to 0$. For this observe that for any $\lambda>0$ \begin{displaymath} u_{0}(\lambda x) = \sum_{j} b_{j} {\mathbb C}hi_{\lambda_{j}\lambda^{-1} A(r_{j}) }(x) {\rm e}nd{displaymath} and then for each $k$ we have in (\ref{eq:u(0,tn)}) \begin{align} \int_{{\mathbb R}^{d}} {\rm e}^{-|z|^2} u_0(\lambda_{k} z) \, {\rm d} z = \sum_{1\leq j\leq k-1} b_{j} \int_{{\lambda_{j}}{\lambda_{k}^{-1}} A(r_{j}) }& {\rm e}^{- |z|^{2}} \, {\rm d} z + b_{k} \int_{A(r_{k}) } {\rm e}^{- |z|^{2}} \, {\rm d} z\nonumber\\ & + \sum_{j\geq k+1} b_{j} \int_{{\lambda_{j}}{\lambda_{k}^{-1}} A(r_{j}) } {\rm e}^{- |z|^{2}} \, {\rm d} z. \label{eq:2_estimate_u(0)} {\rm e}nd{align} Then the first term in (\ref{eq:2_estimate_u(0)}) is bounded by \begin{displaymath} \beta_{k-1} \frac{\lambda_{k-1}^{{d}}}{\lambda_{k}^{{d}}} r_{k-1}^{{d}} <\frac{\beta_{k-1}} {r_{k-1}^{{d}}} < 2^{-k} {\rm e}nd{displaymath} by (\ref{eq:condition_4_near_origin}), where we used (\ref{eq:disjoint_annulus}). For the second term in (\ref{eq:2_estimate_u(0)}) observe that by (\ref{eq:condition_4_mid_term}) \begin{displaymath} \left | b_{k} \int_{A(r_{k}) } {\rm e}^{- |z|^{2}} \, {\rm d} z -b_{k} {\rm i}ght | < \frac{1}{2^{k} c_{{d}}} . 
{\rm e}nd{displaymath} Finally, observe that the third term in (\ref{eq:2_estimate_u(0)}) is bounded by \begin{displaymath} \sum_{j\geq k+1} b_{j} {\rm e}xp\left(- \frac{\lambda_{j}^{2}}{\lambda_{k}^{2}r_{j}^{2}}{\rm i}ght) \frac{\lambda_{j}^{{d}}}{\lambda_{k}^{{d}}} r_{j}^{{d}} \leq \sum_{j\geq k+1} b_{j} {\rm e}xp\left(- \frac{\lambda_{j}^{2}}{\lambda_{j-1}^{2}r_{j}^{2}}{\rm i}ght) \lambda_{j}^{{d}} r_{j}^{{d}} \leq \sum_{j\geq k+1} 2^{-j} {\rm e}nd{displaymath} by (\ref{eq:condition_4_far_away}). Then, from (\ref{eq:u(0,tn)}) and the bounds above on the three terms in (\ref{eq:2_estimate_u(0)}) we get, with $\lambda_{k} = 2\sqrt{t_{k}}$ \begin{displaymath} \big| u(0,t_{k}) -b_{k} \big | \leq \frac{c_{{d}}}{2^{k}} + \frac{1}{2^{k}} + c_{{d}} \sum_{j\geq k+1} 2^{-j} \to 0, \qquad k \to \infty.\qedhere {\rm e}nd{displaymath} {\rm e}nd{proof} The next result shows that the oscillatory behavior in Theorem \ref{thr:oscillation_at_0} is somehow generic for heat solutions. For this, given a sequence of positive numbers $\alpha= \{\alpha_{k}\}_{k}$ denote $\mathscr{O}_{\alpha}$ the nonempty family of $0\le u_{0} \in L^{1}_{0}({\mathbb R}^{d})$ that satisfy the statement in Theorem \ref{thr:oscillation_at_0}. We use the topology on $L^1_0({\mathbb R}^{d})$ generated by the family of $L^1_\varepsilon({\mathbb R}^{{d}})$ norms defined in {\rm e}qref{eq:norm_L1eps}, which makes $L^1_0({\mathbb R}^{d})$ into a Fr\'echet space (see \cite{RR2} for more details); the following more explicit definition is sufficient for our statement of the following theorem: we say that $u_n\to u_0$ in $L^1_0({\mathbb R}^{d})$ if and only if $u_n\to u_0$ in $L^1_\varepsilon({\mathbb R}^{d})$ for every $\varepsilon>0$. Note that, in particular, such convergence implies that $u_n\to u_0$ in $L^1_{\rm loc}({\mathbb R}^{d})$. \begin{theorem} \label{thr:density_of_oscillations} For any sequence of positive numbers $\alpha= \{\alpha_{k}\}_{k}$, $\mathscr{O}_{\alpha}$ is dense in $L^{1}_0({\mathbb R}^{d})$. {\rm e}nd{theorem} \begin{proof} Denote by $U_{0}$ the initial data constructed in Theorem \ref{thr:oscillation_at_0}. Then $U_{0} \in \mathscr{O}_{\alpha}$ and for any $n\in {{\mathbb N}}$, $U_{0} {\mathbb C}hi_{{\mathbb R}^{d} \setminus B(0,n)} \in \mathscr{O}_{\alpha}$ since we are only suppressing a finite number of annulae in $U_{0}$. Then for given $0\leq v_{0} \in L^{1}_{0}({\mathbb R}^{d})$ define \begin{displaymath} v_{0}^{n} = v_{0} {\mathbb C}hi_{B(0,n)} + U_{0} {\mathbb C}hi_{{\mathbb R}^{d} \setminus B(0,n)} \in \mathscr{O}_{\alpha}. {\rm e}nd{displaymath} Take $\varepsilon>0$; since $$ v_0^{n}-v_0=(v_0-U_0){\mathbb C}hi_{{\mathbb R}^{d}\setminus B(0,n)} $$ we have $$ \|v_0^{n}-v_0\|_{L^1_\varepsilon({\mathbb R}^{d})}=\left(\frac{\varepsilon}{\pi}{\rm i}ght)^{{d}/2}\int_{|x|\ge n}{\rm e}^{-\varepsilon|x|^2}|v_0-U_0|; $$ since $v_0,U_0\in L^1_0({\mathbb R}^{d}) \subset L^1_\varepsilon({\mathbb R}^{d})$, it follows that $v_0^{n}\to v_0$ in $L^1_0({\mathbb R}^{d})$. {\rm e}nd{proof} The following result shows that any heat solution can be ``shadowed'' as close as we want, in any large time interval and any large compact set by an oscillatory solution of the heat equation. 
\begin{theorem} \label{thr:shadowing} For any sequence of positive numbers $\alpha= \{\alpha_{k}\}_{k}$ and any $0\leq v_{0} \in L^{1}_{0}({\mathbb R}^{d})$, any ${\rm d}elta>0$ and $T >0$ and any compact set $K\subset {\mathbb R}^{d}$, there exists $u_{0}\in \mathscr{O}_{\alpha}$ such that \begin{displaymath} \sup_{K\times [0,T]} |u(x,t,v_{0}) - u(x,t ,u_{0})| \leq {\rm d}elta . {\rm e}nd{displaymath} {\rm e}nd{theorem} \begin{proof} Observe that it is enough to find $u_{0}\in \mathscr{O}_{\alpha}$ such that \begin{equation} \label{eq:shadow_at_0} \sup_{[0,T+1]} u(t,0,|v_{0} -u_{0}|) \leq {\rm d}elta . {\rm e}nd{equation} In such a case, from (\ref{bound_above_x=0}) we would get, for any $a>1$ \begin{displaymath} \sup_{K\times [0,T]} |u(x,t,v_{0}) - u(x,t,u_{0})| \le c_{{d},a}\, \sup_{K\times [0,T]} u(0,at, |v_{0} -u_{0}|) {\rm e}^{\frac{|x|^{2}}{4(a-1)t}} \leq C(K,T) {\rm d}elta . {\rm e}nd{displaymath} Denote by $U_{0} \in \mathscr{O}_{\alpha}$ the initial data constructed in Theorem \ref{thr:oscillation_at_0}. Then for given $0\leq v_{0} \in L^{1}_{0}({\mathbb R}^{d})$ define \begin{displaymath} u_{0} = v_{0} {\mathbb C}hi_{B(0,R)} + U_{0} {\mathbb C}hi_{{\mathbb R}^{d} \setminus B(0,R)} \in \mathscr{O}_{\alpha}. {\rm e}nd{displaymath} Then we show that (\ref{eq:shadow_at_0}) holds provided we take $R$ large enough. For this, observe that \begin{displaymath} |v_{0} -u_{0}| \leq |v_{0} -U_{0}|{\mathbb C}hi_{{\mathbb R}^{d} \setminus B(0,R)} {\rm e}nd{displaymath} hence \begin{equation} \label{eq:bound_at_0} 0\leq u(0,t,|v_{0} -u_{0}|) \leq \frac{1}{(4\pi t)^{{d}/2}} \int_{|y|\geq R} {\rm e}^{-\frac{|y|^2}{4t}} |v_0(y) -U_{0}(y)| \,{\rm d} y . {\rm e}nd{equation} Taking $R>1$ we have, for any given $0<t_{0} < T+1$ and $0<\alpha<1$, $|y|^{2} \geq \alpha |y|^{2} + (1-\alpha)$ and then for $0<t <t_{0}$ we obtain in (\ref{eq:bound_at_0}) \begin{displaymath} 0\leq u(0,t,|v_{0} -u_{0}|) \leq \frac{1}{(4\pi t)^{{d}/2}} {\rm e}^{-(1-\alpha) \frac{1}{4t}} \int_{|y|\geq 1} {\rm e}^{-\alpha \frac{|y|^2}{4t_{0}}} |v_0(y) -U_{0}(y)| \,{\rm d} y \leq \frac{{\rm d}elta}{2} {\rm e}nd{displaymath} provided $t_{0}$ is small enough since $ \frac{1}{(4\pi t)^{{d}/2}} {\rm e}^{-(1-\alpha) \frac{1}{4t}} \to 0$ as $t\to 0$. Now for $t_{0} < t< T+1$ we obtain in (\ref{eq:bound_at_0}) \begin{displaymath} 0\leq u(0,t,|v_{0} -u_{0}|) \leq \frac{1}{(4\pi t_{0})^{{d}/2}} \int_{|y|\geq R} {\rm e}^{-\frac{|y|^2}{4(T+1)}} |v_0(y) -U_{0}(y)| \,{\rm d} y \leq \frac{{\rm d}elta}{2} {\rm e}nd{displaymath} provided $R$ is sufficiently large. {\rm e}nd{proof} \subsection{The rescaling approach of V\'azquez \& Zuazua} \label{sec:rescalling-appr-vazq} For the case of solutions of the heat equation that remain locally bounded, the results in Propositions \ref{prop:estimates_from_u(0;t)} and \ref{prop:oscillations} and Theorem \ref{thr:oscillation_at_0} can be revisited in terms of the rescaling argument of \cite{VZ2002} as follows. As we now show, it is relatively straightforward to extend their approach from $L^{\infty}({\mathbb R}^{{d}})$ initial data to more general measure-valued data that leads to globally bounded solutions. 
\noindent (i) We can define dilatations of measures through the analogous result holding for a locally integrable $f$ and $\varphi \in C_{c}({\mathbb R}^{d})$, namely $f_{\lambda}(x)= f(\lambda x)$ satisfies \begin{displaymath} \int_{{\mathbb R}^{d}} f_{\lambda} (x) \varphi(x) \,{\rm d} x = \frac{1}{\lambda^{{d}}} \int_{{\mathbb R}^{d}} f(z) \varphi (\frac{z}{\lambda}) \, {\rm d} z = \frac{1}{\lambda^{{d}}} \int_{{\mathbb R}^{d}} f (z) \varphi _{\frac{1}{\lambda}} (z)\, {\rm d} z . {\rm e}nd{displaymath} That is, for $\lambda >0$ and $\mu \in \mathcal{M}_{\rm loc} ({\mathbb R}^{d})$ \begin{displaymath} \int_{{\mathbb R}^{d}} \varphi (z)\, {\rm d} \mu _{\lambda} (z) := \frac{1}{\lambda^{{d}}} \int_{{\mathbb R}^{d}} \varphi _{\frac{1}{\lambda}} (z)\, {\rm d} \mu(z) . {\rm e}nd{displaymath} Hence $\mu_{\lambda} \in \mathcal{M}_{\rm loc} ({\mathbb R}^{d})$ and is a positive measure whenever $\mu$ is. Then it follows that \begin{displaymath} \int_{{\mathbb R}^{d}} \varphi (z)\, {\rm d} |\mu _{\lambda} (z)| := \frac{1}{\lambda^{{d}}} \int_{{\mathbb R}^{d}} \varphi _{\frac{1}{\lambda}} (z)\, {\rm d} |\mu(z)| . {\rm e}nd{displaymath} These extend, by density, to $\varphi \in L^{1}({\rm d} \mu) = L^{1}({\rm d} \mu_{\lambda})$. \noindent (ii) For $\mu \in \mathcal{M}_{\varepsilon} ({\mathbb R}^{d})$ and $\varepsilon >0$, $\lambda >0$ \begin{displaymath} \|\mu_{\lambda}\|_{\mathcal{M}_{\varepsilon} ({\mathbb R}^{d})} = \left(\frac{\varepsilon}{\pi}{\rm i}ght)^{{d}/2} \int_{{\mathbb R}^{d}}{\rm e}^{-\varepsilon|x|^2} \,{\rm d} |\mu_{\lambda} (x)| = \left(\frac{\varepsilon}{\pi \lambda^{2}}{\rm i}ght)^{{d}/2} \int_{{\mathbb R}^{d}}{\rm e}^{-\frac{\varepsilon}{\lambda^{2}} |y|^2} \,{\rm d} |\mu (y)| = \|\mu\|_{\mathcal{M}_{\frac{\varepsilon}{\lambda^{2}} } ({\mathbb R}^{d})} . {\rm e}nd{displaymath} Therefore, $\{\mu_{\lambda}\}_{\lambda >0}$ is bounded in $\mathcal{M}_{\varepsilon} ({\mathbb R}^{d})$ if and only if $\mu \in \mathcal{M}_{0,B} ({\mathbb R}^{{d}})$, that is, \begin{displaymath} \vvert{\mu}_{ \mathcal{M}_{0,B} ({\mathbb R}^{{d}})}:= \sup_{\varepsilon >0} \| \mu \|_{ \mathcal{M}_{\varepsilon} ({\mathbb R}^{{d}})} < \infty . {\rm e}nd{displaymath} According to {\rm e}qref{eq:value_at_zero_and_normMeps} and part (i) in Proposition \ref{prop:estimates_from_u(0;t)}, this is equivalent to the solution of the heat equation $u(x, t, \mu)$ being uniformly bounded in sets $\frac{|x|}{\sqrt{t}} \leq R$. \noindent (iii) We also get for $u_{0} \in \mathcal{M}_{0,B} ({\mathbb R}^{d})$ and $\lambda >0$ \begin{align*} S(t) u_{0, \lambda} (x)&= u(x,t, u_{0, \lambda}) = \frac{1}{(4\pi t)^{{d}/2}} \int_{{\mathbb R}^{d}} {\rm e}^{-\frac{|x-y|^2}{4t}} \, {\rm d} u_{0,\lambda}(y) = \frac{1}{(4\pi t \lambda^{2})^{{d}/2}} \int_{{\mathbb R}^{d}} {\rm e}^{-\frac{|x-\frac{y}{\lambda}|^2}{4t}} \, {\rm d} u_{0}(y)\\ & = \frac{1}{(4\pi t \lambda^{2})^{{d}/2}} \int_{{\mathbb R}^{d}} {\rm e}^{-\frac{|\lambda x-y|^2}{4t \lambda^{2}}} \, {\rm d} u_{0}(y) = u(\lambda x, \lambda^{2} t, u_{0}) = S(\lambda^{2} t) u_{0} (\lambda x) . 
\end{align*} In particular, with $t=1$, \begin{displaymath} S(1) u_{0, \lambda} (x) = S(\lambda^{2}) u_{0} (\lambda x) . \end{displaymath} \noindent (iv) As a consequence of Lemma \ref{lem:Meps_banach} it follows that $\mathcal{M}_{\varepsilon}({\mathbb R}^{{d}}) = (C_{-\varepsilon,0}({\mathbb R}^{{d}}))'$, where $$ f\in C_{-\varepsilon,0} ({\mathbb R}^{{d}})\qquad\mbox{if and only if}\qquad {\rm e}^{\varepsilon|x|^2} f(x) \in C_{0}({\mathbb R}^{{d}}) $$ and the norm is $\|f\|_{C_{-\varepsilon,0} ({\mathbb R}^{d})} := \sup_{x\in {\mathbb R}^{d}} {\rm e}^{\varepsilon|x|^2} |f(x)|$. Now, if $u_{0} \in \mathcal{M}_{0,B} ({\mathbb R}^{d})$ then $\{u_{0,\lambda}\}_{\lambda>0}$ is sequentially weak-$*$ compact in $\mathcal{M}_{\varepsilon}({\mathbb R}^{{d}})$ for any $\varepsilon >0$. Given any sequence $\lambda_{n} \to \infty$, passing to a subsequence we can assume that $u_{0,\lambda_{n}}$ converges weakly-$*$ to some $\mu \in \mathcal{M}_{0,B} ({\mathbb R}^{d})$ with $\vvert{\mu}_{ \mathcal{M}_{0,B} ({\mathbb R}^{{d}})} \leq \vvert{u_{0}}_{ \mathcal{M}_{0,B} ({\mathbb R}^{{d}})}$ and, by smoothing, $S(1)u_{0,\lambda_{n}} $ converges in $L^{\infty}_{\rm loc}({\mathbb R}^{d})$ to $v= S(1) \mu$. Hence, setting $t_{n}= \lambda_{n}^{2}$ (so $t_n \to \infty$) we obtain \begin{displaymath} S(t_{n}) u_{0} ( \sqrt{t_{n}} x) = u(\sqrt{t_{n}} x, t_{n}, u_{0}) \to S(1) \mu (x) \quad \mbox{in } L^{\infty}_{\rm loc}({\mathbb R}^{d}) . \end{displaymath} In particular, with $x=0$ we have \begin{displaymath} u(0, t_{n}, u_{0}) \to S(1) \mu (0) . \end{displaymath} This relates the results in Propositions \ref{prop:estimates_from_u(0;t)}, \ref{prop:oscillations} and Theorem \ref{thr:oscillation_at_0} to the set of weak-$*$ sequential limits of $\{u_{0,\lambda}\}_{\lambda>0}$ in $\mathcal{M}_{\varepsilon}({\mathbb R}^{{d}})$. Notice that for all such $\mu$ \begin{displaymath} | S(1) \mu (0)| = \left| \frac{1}{(4\pi)^{{d}/2}} \int_{{\mathbb R}^{d}} {\rm e}^{-\frac{|y|^2} {4}} \, {\rm d} \mu (y) \right| \leq \|\mu\|_{\mathcal{M}_{\frac{1}{4}}({\mathbb R}^{{d}})} \leq \vvert{u_{0}}_{ \mathcal{M}_{0,B} ({\mathbb R}^{{d}})}; \end{displaymath} thus the above argument only applies when $u(0, t, u_{0})$ is bounded for large times. \subsection{Prescribed behaviour at $x=0$} \label{sec:prescribed_g_at_x=0} We now show that if we drop the restriction that the solutions are non-negative then any (sufficiently smooth) behaviour of the solution at $x=0$ can be obtained with an appropriate choice of initial condition. We use a construction inspired by the Tychonov example of an initial condition that leads to non-uniqueness with zero initial data (see \cite{Tychonov} and Chapter 7, pages 171-172 in \cite{J}, for example). \begin{proposition} \label{prop:prescribed_value_at_x=0} Let $\gamma$ be any real analytic function on $[0,T)$ with $T\leq \infty$. Then there exists $u_{0} \in L^{1}_\varepsilon({\mathbb R}^{d})$ for some $\varepsilon>0$ such that $u(t) = S(t) u_{0}$ given by \eqref{eq:solution_heat_up_t=0} is defined for all $t \in [0,T)$ and such that \begin{displaymath} u(0,t)= \gamma(t), \quad 0\leq t < T . \end{displaymath} \end{proposition} \begin{proof} We seek a solution of the one-dimensional heat equation in the form $$ u(x,t) = \sum_{k=0}^{\infty} g_{k}(t) x^{k}, \quad x\in {\mathbb R} $$ converging for every $x\in{\mathbb R}$ (for each $t$ in some range), such that $u(0,t)=\gamma(t)$ for all $t \in [0,T)$.
Substituting this expression into the PDE gives \begin{displaymath} g_{k}'(t) = (k+2)(k+1) g_{k+2}(t) , \quad t \in [0,T). \end{displaymath} Assume that $u_{x}(0,t) =0$; then $g_{1}(t)=0$ and so $g_{2m+1}(t)=0$ for all $t$ and $m\in {\mathbb N}$. Solving the recurrence for even powers gives $g_{0}(t) = \gamma(t)$ and \begin{displaymath} g_{2m}(t)= \frac{\gamma^{(m)}(t)}{(2m)!} \end{displaymath} and therefore \begin{equation}\label{g2u} u(x,t) = \sum_{k=0}^{\infty} \frac{\gamma^{(k)}(t)}{(2k)!} x^{2k} . \end{equation} As $\gamma$ is real analytic it follows that for each $t \in [0,T)$ there exist constants $C,\tau>0$ (potentially depending on $t$) such that \begin{equation}\label{RAgk} |\gamma^{(k)}(t)|\le Ck!\tau^{-k} \end{equation} (see Exercise 15.3 in \cite{R}, for example; in fact the constants $C$ and $\tau$ can be chosen uniformly on any compact subinterval of $[0,T)$). It follows that the series in \eqref{g2u} converges for all $x\in{\mathbb R}$ and for every $t \in [0,T)$, and given (\ref{RAgk}) we then have $$ |u(x,t)|\le \sum_{k=0}^\infty C \frac{k!}{(2k)!} |x|^{2k}\tau^{-k}\le C\sum_{k=0}^\infty\frac{(|x|^2/\tau)^k}{k!} = C{\rm e}^{|x|^2/\tau} . $$ In particular $u$ satisfies (\ref{eq:quadratic_exponential_bound_upt=0}) on any compact subinterval of $[0,T)$. Also it is easy to see that $u(x,t)$ satisfies (\ref{eq:uniqueness_Du_in_L1delta}) in compact intervals of $(0,T)$ and (\ref{eq:uniqueness_initial_data_Cc}). By Theorem \ref{thr:uniqueness}, we get \begin{displaymath} u(t) = S(t) u_{0} , \quad t \in [0,T). \end{displaymath} We can embed this solution in ${\mathbb R}^{d}$ by setting $u(x_{1}, \ldots, x_{{d}}, t) = u(x_{1},t)$. \end{proof} When $T=\infty$ this provides an example showing that the condition $u_0\in L^1_0({\mathbb R}^{d})$ is not necessary for global existence once the initial data is no longer required to be non-negative: the solution $u(x,t)$ satisfies the heat equation for all time and remains in one of the $L^1_{\varepsilon(t)}({\mathbb R}^{{d}})$ spaces for each $t\ge0$, but $u_0$ need not belong to $L^1_0({\mathbb R}^{{d}})$. The non-uniqueness example of Tychonov uses precisely the above construction, but based on a function such as $\gamma(t)={\rm e}^{-1/t^2}$ whose radius of analyticity shrinks as $t\to 0^+$, see \cite[pg 172]{J}. In such a case the heat solution in (\ref{g2u}) satisfies $u(x,t)\to u_{0}(x)= 0$ uniformly in compact sets as $t\to 0^{+}$. In the language of this paper, $u(t)\in L^1_{\varepsilon(t)}({\mathbb R}^{{d}})$ for every $t>0$, but as $t\to 0^{+}$ we have $\varepsilon(t)\to\infty$ and (\ref{eq:uniqueness_u_in_L1eps}) is not satisfied. In this way this classical non-uniqueness example does not contradict the uniqueness result of Theorem \ref{thr:uniqueness}. It would be interesting to find conditions on $\gamma(t)$ that ensure the positivity of $u_0$ (and hence of $u(x,t)$). Certainly positivity of $\gamma$ itself is not sufficient; indeed, note that $$ \gamma(t)=\sum_{k=0}^\infty \frac{\alpha_k}{k!}t^k\qquad\Longrightarrow\qquad u_0(x)=\sum_{k=0}^\infty \frac{\alpha_k}{(2k)!}x^{2k}. $$ The simple choice $\alpha_0=1$, $\alpha_1=-2$, $\alpha_2=2$ yields $$ \gamma(t)=1-2t+t^2\ge0\qquad\mbox{but}\qquad u_0(x)=1-x^2+\frac{x^4}{12}, $$ and $u_0(2)=-5/3<0$.
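The construction in the proof of Proposition \ref{prop:prescribed_value_at_x=0} is explicit enough to be checked symbolically. The following short sketch (not part of the argument above; it assumes the Python library {\tt sympy} is available) builds the series \eqref{g2u} for the polynomial choice $\gamma(t)=1-2t+t^2$ just discussed, verifies that it solves the one-dimensional heat equation with $u(0,t)=\gamma(t)$, and evaluates the resulting initial datum at $x=2$.
\begin{verbatim}
# Illustrative sketch (assumes sympy); not part of the proof above.
import sympy as sp

x, t = sp.symbols('x t', real=True)
gamma = 1 - 2*t + t**2            # prescribed value of u at x = 0

# For a polynomial gamma the series (g2u) is a finite sum, since
# derivatives of order > deg(gamma) vanish identically.
u = sum(sp.diff(gamma, t, k) * x**(2*k) / sp.factorial(2*k)
        for k in range(3))

assert sp.simplify(sp.diff(u, t) - sp.diff(u, x, 2)) == 0   # u_t = u_xx
assert sp.simplify(u.subs(x, 0) - gamma) == 0               # u(0,t) = gamma(t)

u0 = sp.expand(u.subs(t, 0))      # initial datum: 1 - x**2 + x**4/12
print(u0, u0.subs(x, 2))          # value at x = 2 is -5/3
\end{verbatim}
This reproduces the assertion above: $\gamma(t)=(1-t)^2\ge0$ for all $t$, while the corresponding initial datum changes sign.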
\section{Extension to other problems} \label{sec:extensions-other-probl} \setcounter{equation}{0} First note that by simple reflection arguments we can also consider the heat equation in the half space ${\mathbb R}^{d}_{+}$, that is \begin{equation} \label{eq:heat_in_halfspace} \begin{cases} u_{t}- \Delta u =0, & x \in {\mathbb R}^{d}_{+} , \ t>0,\cr u(x,0) = u_{0}(x), & x \in {\mathbb R}^{d}_{+},\cr B(u)(x)=0, & x \in \partial {\mathbb R}^{d}_{+} \end{cases} \end{equation} where $B(u)$ denotes boundary conditions of Dirichlet type, i.e.\ $B(u)=u$, or of Neumann type, i.e.\ $B(u)= \frac{\partial u}{\partial \vec{n}} = -\partial_{x_{{d}}} u (x',0)$. Indeed, performing an odd or even reflection respectively, we extend (\ref{eq:heat_in_halfspace}) to the heat equation in ${\mathbb R}^{{d}}$ for solutions with odd or even symmetry. Hence, the arguments in the previous sections apply. Also note that a basic ingredient in the proofs above is the Gaussian structure of the heat kernel. Hence, the same results apply to any parabolic operator with a similar Gaussian bound for its kernel; see \cite{Daners00}. In particular, our results apply to differential operators of the form \begin{displaymath} L(u) = -\sum_{i=1}^{{d}} \partial_{i} \Big( \sum_{j=1}^{{d}} a_{i,j}(x) \partial_{j} u + a_{i}(x) u\Big) + \sum_{i=1}^{{d}} b_{i}(x) \partial_{i} u + c_{0}(x) u \end{displaymath} with real coefficients $a_{i,j}, a_{i}, b_{i}, c_{0} \in L^{\infty}({\mathbb R}^{{d}})$ satisfying the ellipticity condition \begin{displaymath} \sum_{i,j=1}^{{d}} a_{i,j}(x) \xi_{i} \xi_{j} \geq \alpha_{0} |\xi|^{2} \end{displaymath} for some $\alpha_{0} >0$ and for every $\xi \in {\mathbb R}^{{d}}$. In such a case the fundamental solution of the parabolic problem $u_{t} + Lu=0$ in ${\mathbb R}^{{d}}$ satisfies a Gaussian bound $$ 0 \leq k(x,y,t,s) \leq C(t-s)^{-{d}/2} {\rm e}^{\omega (t-s)} {\rm e}^{-c\frac{|x-y|^2}{(t-s)}} $$ for $t>s$ and $x,y \in {\mathbb R}^{{d}}$, where $C,c, \omega$ depend on the $L^{\infty}$ norms of the coefficients. The Gaussian bounds are obtained from \cite{Daners00} while the positivity of the kernel comes from the maximum principle, see \cite{GT}, chapter 8. Therefore the analysis of the previous sections applies to solutions of the form \begin{displaymath} u(x,t)=S_{L}(t)u_{0} = \int_{{\mathbb R}^{d}} k(x,y,t,0) \, {\rm d} u_0(y) . \end{displaymath} Other results on Gaussian upper bounds can be found in \cite{Aro68, Davies,Ouhabaz,Stroock}. \appendix \section{Some auxiliary results} \label{sec:some-auxil-results} \setcounter{equation}{0} Here we prove several technical results used above. First, we prove that certain spaces of functions and measures used above are Banach spaces. \begin{lemma}\label{lem:Meps_banach} The sets $\mathcal{M}_{\varepsilon}({\mathbb R}^{{d}})$ and $L^{1}_{\varepsilon} ({\mathbb R}^{d})$ in \eqref{eq:space_Meps} and \eqref{eq:space_L1eps} with the norms \eqref{eq:norm_Meps} and \eqref{eq:norm_L1eps} respectively, are Banach spaces. \end{lemma} \begin{proof} For $\mathcal{M}_{\varepsilon}({\mathbb R}^{{d}})$ we proceed as follows. Given $\mu \in \mathcal{M}_{{\rm loc}}({\mathbb R}^{{d}})$ we define the Borel measure $\Phi_{\varepsilon}(\mu)$ such that for all Borel sets $A \subset {\mathbb R}^{{d}}$ \begin{displaymath} \Phi_{\varepsilon}(\mu) (A) = \int_{A} \rho_{\varepsilon}(x)\, {\rm d} \mu (x) \end{displaymath} where $\rho_{\varepsilon}(x) = \left(\frac{\varepsilon}{\pi}\right)^{{d}/2} {\rm e}^{-\varepsilon|x|^2}$.
Then $\Phi_{\varepsilon}(\mu) \in \mathcal{M}_{loc}({\mathbb R}^{{d}})$, is clearly absolutely continuous with respect to $\mu$ and for all $\varphi \in C_{c}({\mathbb R}^{{d}})$ we have \begin{displaymath} \int_{{\mathbb R}^{{d}}} \varphi (x)\, {\rm d} \Phi_{\varepsilon}(\mu) (x) = \int_{{\mathbb R}^{{d}}} \varphi (x) \rho_{\varepsilon}(x)\, {\rm d} \mu (x) . {\rm e}nd{displaymath} Now we claim that the total variation of $\Phi_{\varepsilon}(\mu)$ satisfies \begin{displaymath} |\Phi_{\varepsilon}(\mu)| (A) = \int_{A} \rho_{\varepsilon}(x)\, {\rm d} |\mu (x) | {\rm e}nd{displaymath} for all Borel sets $A \subset {\mathbb R}^{{d}}$, which would imply that for all $\varphi \in C_{c}({\mathbb R}^{{d}})$ we have \begin{displaymath} \int_{{\mathbb R}^{{d}}} \varphi (x)\, {\rm d} |\Phi_{\varepsilon}(\mu) (x)| = \int_{{\mathbb R}^{{d}}} \varphi (x) \rho_{\varepsilon}(x)\, {\rm d} |\mu (x) |. {\rm e}nd{displaymath} To prove the claim, observe that the positive part in the Jordan decomposition satisfies \begin{displaymath} \Phi_{\varepsilon}(\mu)^{+} (A) = \sup_{B\subset A} \int_{B} \rho_{\varepsilon}(x)\, {\rm d} \mu (x) = \sup_{B\subset A} \int_{B^{+}} \rho_{\varepsilon}(x)\, {\rm d} \mu^{+} (x) - \int_{B^{-}} \rho_{\varepsilon}(x)\, {\rm d} \mu^{-} (x) {\rm e}nd{displaymath} \begin{displaymath} = \sup_{B\subset A} \int_{B^{+}} \rho_{\varepsilon}(x)\, {\rm d} \mu^{+} (x) = \sup_{B\subset A} \int_{B} \rho_{\varepsilon}(x)\, {\rm d} \mu^{+} (x) = \int_{A} \rho_{\varepsilon}(x)\, {\rm d} \mu^{+} (x) {\rm e}nd{displaymath} where we have used that $B= B^{+} \cup B^{-}$ are the positive and negative parts of a set $B$, according to the Hahn decomposition of the measure $\mu$; see Theorem 3.3 in \cite{Folland}. Analogously, $ \Phi_{\varepsilon}(\mu)^{-} (A) = \int_{A} \rho_{\varepsilon}(x) {\rm d} \mu^{-} (x)$ and with this the claim follows. Now if, $\mu \in \mathcal{M}_{\varepsilon}({\mathbb R}^{{d}})$ we have \begin{displaymath} |\Phi_{\varepsilon}(\mu)| ({\mathbb R}^{{d}}) = \int_{{\mathbb R}^{{d}}} \rho_{\varepsilon}(x)\, {\rm d} |\mu (x) | < \infty {\rm e}nd{displaymath} that is \begin{displaymath} \Phi_{\varepsilon}: \mathcal{M}_{\varepsilon} ({\mathbb R}^{{d}}) \to \mathcal{M}_{{\rm BTV}}({\mathbb R}^{{d}}) {\rm e}nd{displaymath} is an isometry. To prove the result it remains to show that $\Phi_{\varepsilon}$ is onto. In fact if $\sigma \in \mathcal{M}_{{\rm BTV}}({\mathbb R}^{{d}})$ we define $\mu$ such that for all borel sets $A \subset {\mathbb R}^{{d}}$ \begin{displaymath} \mu (A) = \int_{A} \frac{{\rm d} \sigma (x)}{\rho_{\varepsilon}(x)} {\rm e}nd{displaymath} and thus for all $\varphi \in C_{c}({\mathbb R}^{{d}})$ we have \begin{displaymath} \int_{{\mathbb R}^{{d}}} \varphi (x)\, {\rm d} \mu (x) = \int_{{\mathbb R}^{{d}}} \frac{\varphi (x)}{ \rho_{\varepsilon}(x)}\, {\rm d} \sigma (x) . {\rm e}nd{displaymath} Clearly $\mu \in \mathcal{M}_{loc} ({\mathbb R}^{{d}})$ and arguing as above we get for all borel sets $A \subset {\mathbb R}^{{d}}$ \begin{displaymath} |\mu| (A) = \int_{A} \frac{{\rm d} |\sigma (x)|}{\rho_{\varepsilon}(x)} {\rm e}nd{displaymath} and for all $\varphi \in C_{c}({\mathbb R}^{{d}})$ we have \begin{displaymath} \int_{{\mathbb R}^{{d}}} \varphi (x)\, {\rm d} |\mu (x)| = \int_{{\mathbb R}^{{d}}} \frac{\varphi (x)}{ \rho_{\varepsilon}(x)}\, {\rm d} |\sigma (x)| . 
\end{displaymath} Now take an increasing sequence $0\leq \varphi_{n} \in C_{c}({\mathbb R}^{{d}})$ such that $\varphi_{n} \to \rho_{\varepsilon}$ pointwise in ${\mathbb R}^{{d}}$ and then \begin{displaymath} \int_{{\mathbb R}^{{d}}} \varphi_{n} (x)\, {\rm d} |\mu (x)| = \int_{{\mathbb R}^{{d}}} \frac{\varphi_{n} (x)}{ \rho_{\varepsilon}(x)}\, {\rm d} |\sigma (x)| \leq \int_{{\mathbb R}^{{d}}}\, {\rm d} |\sigma (x)| = |\sigma| ({\mathbb R}^{{d}}) <\infty \end{displaymath} and by Fatou's lemma we get $ \int_{{\mathbb R}^{{d}}} \rho_{\varepsilon} (x)\, {\rm d} |\mu (x)| <\infty$, that is, $\mu \in \mathcal{M}_{\varepsilon}({\mathbb R}^{{d}})$. Clearly $\Phi_{\varepsilon}(\mu) = \sigma$ and we conclude the proof. On the other hand, note that $L^{1}_{\varepsilon} ({\mathbb R}^{d}) = L^{1} ({\mathbb R}^{d}, \rho_{\varepsilon} \, {\rm d} x)$ and so is a Banach space. Also, note that along the lines of the proof above it is easy to see that the operator $\Phi_{\varepsilon}(f) = \rho_{\varepsilon} f$, $\Phi _{\varepsilon}: L^{1}_{\varepsilon}({\mathbb R}^{{d}}) \to L^{1}({\mathbb R}^{{d}})$, is an isometric isomorphism. \end{proof} \begin{lemma} \label{lem:MU_banach} The space of uniform measures $\mathcal{M}_U({\mathbb R}^{d})$ defined in \eqref{M_U} with the norm \eqref{eq:M_U_norm} is a Banach space. \end{lemma} \begin{proof} Clearly a Cauchy sequence in the norm (\ref{eq:M_U_norm}) is a Cauchy sequence in $\mathcal{M}_{{\rm BTV}}(\overline{B(x,1)})$, uniformly for $x\in {\mathbb R}^{{d}}$, that is, in the dual of $C(\overline{B(x,1)})$ endowed with the uniform norm. Therefore it converges in $\mathcal{M}_{{\rm BTV}}(\overline{B(x,1)})$ uniformly for $x\in {\mathbb R}^{{d}}$, and hence it converges in $\mathcal{M}_U({\mathbb R}^{d})$. \end{proof} \begin{lemma} {\bf (Green's formulae)} \label{lem:green_formula} \noindent (i) Assume that $u\in W^{1,1}_{{\rm loc}}({\mathbb R}^{{d}})$ satisfies $u, \nabla u \in L^{1}_{\varepsilon}({\mathbb R}^{{d}})$ and that $\xi$ is a smooth function such that $|\nabla \xi (x) |, |\Delta \xi (x) |\leq c {\rm e}^{-\alpha |x|^{2}}$ for $x\in {\mathbb R}^{{d}}$ and some $\alpha \geq \varepsilon$. Then \begin{displaymath} \int_{{\mathbb R}^{{d}}} u (-\Delta \xi) = \int_{{\mathbb R}^{{d}}} \nabla u \nabla \xi. \end{displaymath} \noindent (ii) Assume that $u\in W^{1,1}_{{\rm loc}}({\mathbb R}^{{d}})$ satisfies $\Delta u\in L^{1}_{{\rm loc}}({\mathbb R}^{{d}})$ and $\nabla u, \Delta u \in L^{1}_{\varepsilon}({\mathbb R}^{{d}})$ and that $\xi$ is a smooth function such that $| \xi (x) |, |\nabla \xi (x) |\leq c {\rm e}^{-\alpha |x|^{2}}$ for $x\in {\mathbb R}^{{d}}$ and some $\alpha \geq \varepsilon$. Then \begin{displaymath} \int_{{\mathbb R}^{{d}}} \nabla u \nabla \xi = \int_{{\mathbb R}^{{d}}} (-\Delta u) \xi. \end{displaymath} \end{lemma} \begin{proof} \noindent (i) Observe that for any $R>0$ \begin{displaymath} \int_{B(0,R)} u (-\Delta \xi) = \int_{B(0,R)} \nabla u \nabla \xi - \int_{\partial B(0,R)} u \frac{\partial \xi}{\partial \vec{n}} \, {\rm d} S.
{\rm e}nd{displaymath} Thanks to the Dominated Convergence Theorem it is enough to prove that the last term above converges to zero, as $R\to \infty$, since we can write \begin{displaymath} \int_{B(0,R)} u (-{\rm d}isplaystyleelta \xi) = \int_{B(0,R)} {\rm e}^{-\varepsilon|x|^{2}} u \, {\rm e}^{\varepsilon|x|^{2}} (-{\rm d}isplaystyleelta \xi) {\rm e}nd{displaymath} and \begin{displaymath} \int_{B(0,R)} \nabla u \nabla \xi = \int_{B(0,R)} {\rm e}^{-\varepsilon|x|^{2}} \nabla u\, {\rm e}^{\varepsilon|x|^{2}}\nabla \xi {\rm e}nd{displaymath} and pass to the limit in $R\to \infty$ in both terms. For this observe that \begin{displaymath} \int_{0}^{\infty} \int_{\partial B(0,R)} |u \frac{\partial \xi}{\partial \vec{n}} | \, {\rm d} S \, {\rm d} R \leq \int_{{\mathbb R}^{{d}}} |u| | |\nabla \xi| < \infty. {\rm e}nd{displaymath} Hence for some sequence $R_{n} \to \infty$ we have ${\rm d}isplaystyle \int_{\partial B(0,R_{n})} |u \frac{\partial \xi}{\partial \vec{n}} | \to 0$. Part (ii) is obtained in a similar fashion. {\rm e}nd{proof} \addcontentsline{toc}{section}{References} \begin{thebibliography}{} \bibitem{Aro68} D.G. Aronson, Non-negative solutions of linear parabolic equations. Ann. Scuola Norm. Sup. Pisa 22, 607–694 (1968). \bibitem{Aro71} D.G. Aronson, Non-negative solutions of linear parabolic equations: an addendum. Ann. Scuola Norm. Sup. Pisa 25, 221--228 (1971). \bibitem{ACDRB2004} J.M. Arrieta, J.Cholewa, T. Dlotko, A. Rodriguez--Bernal, Linear parabolic equations in locally uniform spaces. Math. Models Methods Appl. Sci. 14, 253 (2004). \bibitem{BeneCzaja} J. Benedetto, W.Czaja, {\it Integration and Modern Analysis}, Birkh\"auser Basel, 2009. \bibitem{CazenaveDW} T. Cazenave, F. Dickstein, F. Weissler, Universal solutions of a nonlinear heat equation on ${\mathbb R}^{N}$, Ann. Scuola Norm Sup. Pisa Cl. Sci 5, vol II, 77-117 (2003). \bibitem{ColletEckman} P. Collet, J. P. Eckmann, Space-time behaviour in problems of hydrodynamic type: a case study, Nonlinearity 5, 1265--1302 (1992). \bibitem{Daners00} D. Daners, Heat kernel estimates for operators with boundary conditions, Math. Nachr. 217 13–41 (2000). \bibitem{Davies} E. B. Davies, {\it Heat kernels and spectral theory}, Cambridge University Press, 1989. \bibitem{DuoanZuazua} J. Duoandikoetxea, E. Zuazua, Moments, masses de Dirac et d'ecomposition de fonctions, C. R. Acad. Sci. Paris Serie. I Math. 315, 693--698 (1992). \bibitem{Folland} G.B. Folland, {\it Real analysis. Modern techniques and their applications}, Wiley 1990. \bibitem{GigaGigaSaal} M.H. Giga, Y. Giga, J. Saal, {\it Nonlinear Partial Differential Equations. Asymptotic Behavior of Solutions and self--similar solutions}, Progress in Nonlinear Differential Equations and Their Applications 79, Birkhauser, 2010. \bibitem{FritzRoy} P. Fitzpatrick, H. Royden, {\it Real Analysis} (4th Edition), Prentice Hall, 2010 \bibitem{GT} D. Gilbarg, N. S. Trudinger, {\it Elliptic partial differential equations of second order}, Springer, 1998. \bibitem{GiaqModicaSoucek} M. Giaquinta, G. Modica, J. Souček, {\it Cartesian Currents in the Calculus of Variations I}, Springer, 1998. \bibitem{HKMM96} M. Hieber, P. Koch-Medina, and S. Merino, Linear and semilinear parabolic equations on $BUC({\mathbb R}^{N})$, Math. Nachr., vol. 179, pp. 107--118 (1996). \bibitem{J} F. John, {\it Partial differential equations}, 3rd Edition, Springer--Verlag 1991. \bibitem{L} A. Lunardi, {\it Analytic semigroups and optimal regularity in parabolic problems}. Basel: Birkhäuser Verlag, 1995 \bibitem{Mora} X. 
Mora, Semilinear parabolic problems define semiflows on $C^{k}$ spaces, Trans. Amer. Math. Soc., vol. 278, no. 1, pp. 21--55 (1983). \bibitem{Ouhabaz} E. M. Ouhabaz, {\it Analysis of heat equations on domains}, London Math. Soc. Monographs, vol. 31, Princeton University Press 2004. \bibitem{QuittnerSouplet} P. Quittner and P. Souplet. {\it Superlinear parabolic problems. Blow-up, global existence and steady states}. Birkha\"user Advanced Texts: Basler Lehrb\"cher. Birkha\"user Verlag, Basel, 2007. \bibitem{PolacikYanagida} P.Pol\'acik, E. Yanagida, On bounded and unbounded global solutions of a supercritical semilinear heat equation, Math. Ann 327, 745--771 (2003). \bibitem{R} J.C. Robinson, \textit{Dimensions, Embeddings, and Attractors}, Cambridge Tracts in Mathematics 186. Cambridge University Press, Cambridge, 2011. \bibitem{RR2} J.C. Robinson \& A. Rodriguez Bernal The heat flow in a Frechet space of unbounded initial data and applications to elliptic equations in ${\mathbb R}^d$, (2018). \bibitem{S} B. Simon, {\it Convexity. An Analytic Viewpoint}, Cambridge Tracts in Mathematics 187. Cambridge University Press, 2011 . \bibitem{Stroock} D. W. Stroock, {\it Partial differential equations for probabilists}, Cambridge Studies in Advanced Mathematics, 112. Cambridge University Press, Cambridge, 2008. \bibitem{Tychonov} A. Tychonoff, Th\'eor\`emes d'unicit\'e pour l'\'equation de la chaleur, Mat. Sb., Volume 42, Number 2, 199--216 (1935). \bibitem{VZ2002} J.L.V\'azquez, E. Zuazua, Complexity of large time behaviour of evolution equations with bounded data Chinese Annals of Mathematics, 23, ser. B, 2, 293-310 (2002). Special issue in honor of J.L. Lions. {\rm e}nd{thebibliography} {\rm e}nd{document}
{\bf b}egin{document} {\bf m}aketitle {\bf b}egin{abstract} Spacecraft attitude control using only magnetic torques is a time-varying system. Many designs were proposed using LQR and $H_{{\bf i}nfty}$ formulations. The existence of the solutions depends on the controllability of the linear time-varying systems which has not been established. In this paper, we will derive the conditions of the controllability for this linear time-varying systems. {\bf e}nd{abstract} {{\bf b}f Keywords:} Spacecraft attitude control, linear time-varying system, reduced quaternion model, controllability. {\bf n}ewpage {\bf s}ection{ Introduction} Spacecraft attitude control using magnetic torque is a very attractive technique because the implementation is simple, the system is reliable (without moving mechanical parts), the torque coils are inexpensive, and their weights are light. The main issue of using only magnetic torques to control the attitude is that the magnetic torques generated by magnetic coils are not available in all desired axes at any time {\bf c}ite{sidi97}. However, because of the constant change of the Earth's magnetic field as a spacecraft circles around the earth, the controllable subspace changes all the time, many researchers believe that spacecraft's attitude is actually controllable by using only magnetic torques. Numerous spacecraft attitude control designs were proposed in the last twenty five years exploring the features of the time-varying systems {\bf c}ite{me89,pitte93,wisni97,psiaki01,la04,sl05,la06,yra07,plv10,cw10,rh11,zl11}. Some of these papers tried Euler angle model and Linear Quadratic Regulator (LQR) formulations {\bf c}ite{me89,pitte93,wisni97,psiaki01,cw10} which are explicitly or implicitly assumed that the controllability for the linear time-varying system holds so that the optimal solutions exist {\bf c}ite{kalman60}. But for the problem of spacecraft attitude control using only magnetic torque, no controllability condition has been established for this linear time-varying system to the best of our knowledge. Other researchers {\bf c}ite{la04,sl05,la06} proposed direct design methods using Lyapunov stabilization theory. The existence of the solutions for these methods implicitly depends on the controllability for the nonlinear time-varying system. Therefore, Bhat {\bf c}ite{bhat05} investigated controllability of the nonlinear time-varying systems. However, the condition for the controllability of the nonlinear time-varying systems established in this paper is hard to be verified and is a sufficient condition. Recently, a reduced quaternion model was proposed in {\bf c}ite{yang10} and its merits over Euler angle model were discussed in {\bf c}ite{yang10,yang12,yang14}. The reduced quaternion model was also used for the design of spacecraft attitude control system using magnetic torque {\bf c}ite{plv10,rh11,zl11}. Because the controllability of the linear time-varying systems was not established, the existence of the solutions was not guaranteed. In this paper, we will consider the reduced linear quaternion model proposed in {\bf c}ite{yang10} and incorporate control model using only magnetic torques. We will establish the conditions of the controllability for this very general linear time-varying system. The same strategy can easily be used to prove the controllability of the Euler angle based linear time-varying system considered in {\bf c}ite{psiaki01}. 
However, we will not derive the similar result because of the merits of the reduced quaternion model as discussed in {\bf c}ite{yang10,yang12,yang14}. The remainder of the paper is organized as follows. Section 2 provides a description of the linear time-varying model of the spacecraft attitude control system using only magnetic torque. Section 3 gives the proof of the controllability for this linear time-varying system. The conclusions are summarized in Section 4. {\bf s}ection{The linear time-varying model} Let ${{\bf b}f J}$ be the inertia matrix of a spacecraft defined by {\bf b}egin{eqnarray} {{\bf b}f J} ={\bf l}eft[ {\bf b}egin{array}{ccc} J_{11} & J_{12} & J_{13} \\ J_{21} & J_{22} & J_{23} \\ J_{31} & J_{32} & J_{33} {\bf e}nd{array} {\bf r}ight], {\bf l}abel{inertia} {\bf e}nd{eqnarray} We will consider the nadir pointing spacecraft. Therefore, the attitude of the spacecraft is represented by the rotation of the spacecraft body frame relative to the local vertical and local horizontal (LVLH) frame. Therefore, we will represent the quaternion and spacecraft body rate in terms of the rotations of the spacecraft body frame relative to the LVLH frame. Let ${\bf b}oldsymbol{\omega}=[\omega_1,\omega_2, \omega_3]^{{{\bf b}f T}r}$ be the body rate with respect to the LVLH frame represented in the body frame, $\omega_0$ be the orbit (and LVLH frame) rate with respect to the inertial frame, represented in the LVLH frame. Let ${\bf b}ar{{\bf q}}=[q_0, q_1, q_2, q_3]^{{{\bf b}f T}r}=[q_0, {\bf q}^{{{\bf b}f T}r}]^{{{\bf b}f T}r}= [{\bf c}os({\bf f}rac{{{\bf b}f a}lpha}{2}), {\bf h}at{{\bf e}}^{{{\bf b}f T}r}{\bf s}in({\bf f}rac{{{\bf b}f a}lpha}{2})]^{{{\bf b}f T}r}$ be the quaternion representing the rotation of the body frame relative to the LVLH frame, where ${\bf h}at{{\bf e}}$ is the unit length rotational axis and ${{\bf b}f a}lpha$ is the rotation angle about ${\bf h}at{{\bf e}}$. Therefore, the reduced kinematics equation becomes {\bf c}ite{yang10} {\bf b}egin{eqnarray} {\bf n}onumber {\bf l}eft[ {\bf b}egin{array} {c} {\bf d}ot{q}_1 \\ {\bf d}ot{q}_2 \\ {\bf d}ot{q}_3 {\bf e}nd{array} {\bf r}ight] & = &{\bf f}rac{1}{2} {\bf l}eft[ {\bf b}egin{array} {ccc} {\bf s}qrt{1-q_1^2-q_2^2-q_3^2} & -q_3 & q_2 \\ q_3 & {\bf s}qrt{1-q_1^2-q_2^2-q_3^2} & -q_1 \\ -q_2 & q_1 & {\bf s}qrt{1-q_1^2-q_2^2-q_3^2} \\ {\bf e}nd{array} {\bf r}ight] {\bf l}eft[ {\bf b}egin{array} {c} \omega_{1} \\ \omega_{2} \\ \omega_{3} {\bf e}nd{array} {\bf r}ight] \\ & = & {\bf g}(q_1,q_2, q_3, {\bf b}oldsymbol{\omega}). 
\label{nadirModel2}
\end{eqnarray}
Assume that the inertia matrix of the spacecraft is diagonal, which is approximately correct for real systems, and let the control torque vector be ${\bf u}=[u_x,u_y,u_z]^{\rm T}$. Then the linearized nadir-pointing spacecraft model with the gravity gradient disturbance torque is given as follows \cite{yang10}:
\begin{eqnarray}
\left[ \begin{array}{c} \dot{q}_1 \\ \dot{q}_2 \\ \dot{q}_3 \\ \dot{\omega}_1 \\ \dot{\omega}_2 \\ \dot{\omega}_3 \end{array} \right]
=\left[ \begin{array}{cccccc}
0 & 0 & 0 & .5 & 0 & 0 \\
0 & 0 & 0 & 0 & .5 & 0 \\
0 & 0 & 0 & 0 & 0 & .5 \\
f_{41} & 0 & 0 & 0 & 0 & f_{46} \\
0 & f_{52} & 0 & 0 & 0 & 0 \\
0 & 0 & f_{63} & f_{64} & 0 & 0
\end{array} \right]
\left[ \begin{array}{c} q_1 \\ q_2 \\ q_3 \\ \omega_1 \\ \omega_2 \\ \omega_3 \end{array} \right]
+\left[ \begin{array}{c} 0 \\ 0 \\ 0 \\ u_x/J_{11} \\ u_y/J_{22} \\ u_z/J_{33} \end{array} \right]
\label{23}
\end{eqnarray}
where
\begin{subequations}
\begin{gather}
f_{41}=[8(J_{33}-J_{22})\omega_0^2]/J_{11} \\
f_{46}=(-J_{11}+J_{22}-J_{33})\omega_0/J_{11} \\
f_{64}=(J_{11}-J_{22}+J_{33})\omega_0/J_{33} \\
f_{52}=[6(J_{33}-J_{11})\omega_0^2]/J_{22} \\
f_{63}=[2(J_{11}-J_{22})\omega_0^2]/J_{33}.
\end{gather}
\label{para}
\end{subequations}
The control torques generated by the magnetic coils interacting with the Earth's magnetic field are given by (see \cite{sidi97})
\[
{\bf u}={\bf m} \times {\bf b},
\]
where the Earth's magnetic field in spacecraft coordinates, ${\bf b}(t)=[b_1(t),b_2(t),b_3(t)]^{\rm T}$, is computed using the spacecraft position, the spacecraft attitude, and a spherical harmonic model of the Earth's magnetic field \cite{wertz78}, and ${\bf m}=[m_1,m_2,m_3]^{\rm T}$ is the magnetic moment induced by the spacecraft's magnetic coils, expressed in spacecraft coordinates. The time variation of the system is approximately periodic, ${\bf b}(t)={\bf b}(t+T)$, where $T=\frac{2\pi}{\omega_0}$ is the orbital period. The magnetic field ${\bf b}(t)$ can be approximately expressed as follows \cite{psiaki01}:
\begin{equation}
\left[ \begin{array}{c} b_1(t) \\ b_2(t) \\ b_3(t) \end{array} \right]
= \frac{\mu_f}{a^{3}}
\left[ \begin{array}{c} \cos(\omega_0t)\sin(i_m) \\ -\cos(i_m) \\ 2\sin(\omega_0t)\sin(i_m) \end{array} \right],
\label{field}
\end{equation}
where $i_m$ is the inclination of the spacecraft orbit with respect to the magnetic equator, $\mu_f=7.9 \times 10^{15}$ Wb-m is the field's dipole strength, and $a$ is the orbit's semi-major axis. The time $t=0$ is measured at the ascending-node crossing of the magnetic equator.
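The tilted-dipole model (\ref{field}) is straightforward to evaluate numerically when reproducing the time variation of the input matrix derived below. The following Python sketch does so; the orbit altitude, magnetic inclination $i_m$, and gravitational parameter used here are illustrative placeholders, not values taken from this paper.
\begin{verbatim}
import numpy as np

MU_F = 7.9e15        # dipole strength of the Earth's field, Wb-m, as in Eq. (field)
MU_EARTH = 3.986e14  # Earth's gravitational parameter, m^3/s^2 (assumed value)

def dipole_field(t, a, i_m):
    """Tilted-dipole approximation b(t) of Eq. (field), in spacecraft coordinates."""
    w0 = np.sqrt(MU_EARTH / a**3)   # orbit (and LVLH) rate for a circular orbit
    c = MU_F / a**3
    return c * np.array([np.cos(w0 * t) * np.sin(i_m),
                         -np.cos(i_m),
                         2.0 * np.sin(w0 * t) * np.sin(i_m)])

# Sample one orbital period of a hypothetical 700-km circular orbit.
a = 6378e3 + 700e3                      # semi-major axis, m (placeholder)
i_m = np.deg2rad(80.0)                  # inclination w.r.t. magnetic equator (placeholder)
T = 2.0 * np.pi / np.sqrt(MU_EARTH / a**3)
for t in np.linspace(0.0, T, 5):
    print(t, dipole_field(t, a, i_m))
\end{verbatim}
The components $b_1$ and $b_3$ oscillate at the orbit rate while $b_2$ stays constant, which is exactly the periodicity ${\bf b}(t)={\bf b}(t+T)$ used in the sequel.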
Therefore, the reduced quaternion linear time-varying system is as follows: {\bf b}egin{eqnarray} {\bf l}eft[ {\bf b}egin{array}{c} {\bf d}ot{q}_1 \\ {\bf d}ot{q}_2 \\ {\bf d}ot{q}_3 \\ {\bf d}ot{\omega}_1 \\ {\bf d}ot{\omega}_2 \\ {\bf d}ot{\omega}_3 {\bf e}nd{array} {\bf r}ight] &=& {\bf l}eft[ {\bf b}egin{array}{cccccc} 0 & 0 & 0 & .5 & 0 & 0 \\ 0 & 0 & 0 & 0 & .5 & 0 \\ 0 & 0 & 0 & 0 & 0 & .5 \\ f_{41} & 0 & 0 & 0 & 0 & f_{46} \\ 0 & f_{52} & 0 & 0 & 0 & 0 \\ 0 & 0 & f_{63} & f_{64} & 0 & 0 {\bf e}nd{array} {\bf r}ight] {\bf l}eft[ {\bf b}egin{array}{c} {q}_1 \\ {q}_2 \\ {q}_3 \\ {\omega}_1 \\ {\omega}_2 \\ {\omega}_3 {\bf e}nd{array} {\bf r}ight] +{\bf l}eft[ {\bf b}egin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & {\bf f}rac{b_3(t)}{J_{11}} & -{\bf f}rac{b_2(t)}{J_{11}} \\ -{\bf f}rac{b_3(t)}{J_{22}} & 0 & {\bf f}rac{b_1(t)}{J_{22}} \\ {\bf f}rac{b_2(t)}{J_{33}} & -{\bf f}rac{b_1(t)}{J_{33}} & 0 {\bf e}nd{array} {\bf r}ight] {\bf l}eft[ {\bf b}egin{array}{c} {m}_1 \\ {m}_2 \\ {m}_3 {\bf e}nd{array} {\bf r}ight] {\bf n}onumber \\ & := & {\bf l}eft[ {\bf b}egin{array}{cc} {{\bf b}f 0}_3 & {\bf f}rac{1}{2} {{\bf b}f I}_3 \\ {\bf b}oldsymbol{{{\bf b}f L}ambda}_1 & {\bf b}oldsymbol{{{\bf b}f S}igma}_1 {\bf e}nd{array} {\bf r}ight] {\bf l}eft[ {\bf b}egin{array}{c} {\bf q} \\ {\bf b}oldsymbol{\omega} {\bf e}nd{array} {\bf r}ight] + {\bf l}eft[ {\bf b}egin{array}{c} {{\bf b}f 0}_3 \\ {{\bf b}f B}_2(t) {\bf e}nd{array} {\bf r}ight] {\bf m} ={{\bf b}f A}{\bf x}+{{\bf b}f B}(t) {\bf m}. {\bf l}abel{varying} {\bf e}nd{eqnarray} Substituting ({\bf r}ef{field}) into ({\bf r}ef{varying}) yields {\bf b}egin{equation} {{\bf b}f B}_2(t)= {\bf l}eft[ {\bf b}egin{array}{ccc} 0 & b_{42}(t) & b_{43}(t) \\ b_{51}(t) & 0 & b_{53}(t) \\ b_{61}(t) & b_{62}(t) & 0 {\bf e}nd{array} {\bf r}ight] {\bf l}abel{bt} {\bf e}nd{equation} where {\bf b}egin{subequations} {\bf b}egin{gather} b_{42} (t) = {\bf f}rac{2{\bf m}u_f}{a^3 J_{11}} {\bf s}in(i_m) {\bf s}in(\omega_0t) \\ b_{43} (t) = {\bf f}rac{{\bf m}u_f}{a^3 J_{11}} {\bf c}os(i_m) \\ b_{53} (t) = {\bf f}rac{{\bf m}u_f}{a^3 J_{22}} {\bf s}in(i_m) {\bf c}os(\omega_0t) \\ b_{51} (t) = -{\bf f}rac{2{\bf m}u_f}{a^3 J_{22}} {\bf s}in(i_m) {\bf s}in(\omega_0t) = -b_{42}{\bf f}rac{J_{11}}{J_{22}} \\ b_{61} (t) =-{\bf f}rac{{\bf m}u_f}{a^3 J_{33}} {\bf c}os(i_m) = -b_{43} {\bf f}rac{J_{11}}{J_{33}} \\ b_{62} (t) =-{\bf f}rac{{\bf m}u_f}{a^3 J_{33}} {\bf s}in(i_m) {\bf c}os(\omega_0t) =-b_{53}{\bf f}rac{J_{22}}{J_{33}}. 
\end{gather}
\label{bij}
\end{subequations}
Differentiating the entries of (\ref{bij}) with respect to time, we have
\begin{subequations}
\begin{gather}
b'_{42} (t) = \frac{2\mu_f\omega_0}{a^3 J_{11}} \sin(i_m) \cos(\omega_0 t) \\
b'_{43} (t) = 0 \\
b'_{53} (t) = -\frac{\mu_f\omega_0}{a^3 J_{22}} \sin(i_m) \sin(\omega_0t) \\
b'_{51} (t) = -\frac{2\mu_f\omega_0}{a^3 J_{22}} \sin(i_m) \cos(\omega_0t) = -b'_{42}\frac{J_{11}}{J_{22}} \\
b'_{61} (t) =0 \\
b'_{62} (t) =\frac{\mu_f\omega_0}{a^3 J_{33}} \sin(i_m) \sin(\omega_0t) =-b'_{53}\frac{J_{22}}{J_{33}}
\end{gather}
\label{bp}
\end{subequations}
and
\begin{subequations}
\begin{gather}
b''_{42} (t) = -\frac{2\mu_f\omega_0^2}{a^3 J_{11}} \sin(i_m) \sin(\omega_0t) \\
b''_{43} (t) = 0 \\
b''_{53} (t) = -\frac{\mu_f\omega_0^2}{a^3 J_{22}} \sin(i_m) \cos(\omega_0t) \\
b''_{51} (t) = \frac{2\mu_f\omega_0^2}{a^3 J_{22}} \sin(i_m) \sin(\omega_0t) = -b''_{42}\frac{J_{11}}{J_{22}} \\
b''_{61} (t) =0 \\
b''_{62} (t) = \frac{\mu_f\omega_0^2}{a^3 J_{33}} \sin(i_m) \cos(\omega_0t) =-b''_{53}\frac{J_{22}}{J_{33}}.
\end{gather}
\label{bpp}
\end{subequations}
In matrix form, we have
\begin{equation}
{\bf B}'_2(t)=
\left[ \begin{array}{ccc}
0 & b'_{42} & 0 \\
b'_{51} & 0 & b'_{53} \\
0 & b'_{62} & 0
\end{array} \right],
\end{equation}
and
\begin{equation}
{\bf B}''_2(t)=
\left[ \begin{array}{ccc}
0 & b''_{42} & 0 \\
b''_{51} & 0 & b''_{53} \\
0 & b''_{62} & 0
\end{array} \right].
\end{equation}
A special case is $i_m=0$, i.e., the spacecraft orbit lies in the equatorial plane of the Earth's magnetic field. In this case, ${\bf b}(t)=[0,-\frac{\mu_f}{a^3},0]^{\rm T}$ is a constant vector, and the linear time-varying system reduces to a linear time-invariant system whose model is given by
\begin{eqnarray}
\left[ \begin{array}{c} \dot{q}_1 \\ \dot{q}_2 \\ \dot{q}_3 \\ \dot{\omega}_1 \\ \dot{\omega}_2 \\ \dot{\omega}_3 \end{array} \right] &=&
\left[ \begin{array}{cccccc}
0 & 0 & 0 & .5 & 0 & 0 \\
0 & 0 & 0 & 0 & .5 & 0 \\
0 & 0 & 0 & 0 & 0 & .5 \\
f_{41} & 0 & 0 & 0 & 0 & f_{46} \\
0 & f_{52} & 0 & 0 & 0 & 0 \\
0 & 0 & f_{63} & f_{64} & 0 & 0
\end{array} \right]
\left[ \begin{array}{c} q_1 \\ q_2 \\ q_3 \\ \omega_1 \\ \omega_2 \\ \omega_3 \end{array} \right]
+\left[ \begin{array}{ccc}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & -b_2/J_{11} \\
0 & 0 & 0 \\
b_2/J_{33} & 0 & 0
\end{array} \right]
\left[ \begin{array}{c} m_1 \\ m_2 \\ m_3 \end{array} \right] \nonumber \\
& = & {\bf A}{\bf x}+{\bf B} {\bf m}.
\label{invariant}
\end{eqnarray}
\section{The proof of the controllability}
The definition of controllability for linear time-varying systems can be found in \cite[page 124]{rugh93}.
\begin{definition}
The linear state equation (\ref{varying}) is called controllable on $[t_0,t_f]$ if, given any initial state $x(t_0)=x_0$, there exists a continuous input signal ${\bf m}(t)$ defined on $[t_0,t_f]$ such that the corresponding solution of (\ref{varying}) satisfies $x(t_f)=0$.
\end{definition}
A main theorem used to prove the controllability of (\ref{varying}) is also given in \cite[page 127]{rugh93}.
\begin{theorem}
Let the state transition matrix be $\boldsymbol{\Phi}(t,\tau)=e^{{\bf A}(t-\tau)}$, and let $p$ be a positive integer such that ${\bf B}(t)$ is $p$ times continuously differentiable for $t \in [t_0,t_f]$. Denote
\begin{equation}
{\bf K}_j(t) = \frac{\partial^j}{\partial \tau^j} \left[ \boldsymbol{\Phi}(t, \tau){\bf B}(\tau) \right] \Bigr|_{\tau = t}, \hspace{0.1in} j=0, 1, 2, \ldots, p.
\label{controllable}
\end{equation}
Then the linear time-varying equation (\ref{varying}) is controllable on $[t_0,t_f]$ if for some $t_c \in [t_0,t_f]$
\begin{equation}
\text{rank} \left[ {\bf K}_0(t_c), {\bf K}_1(t_c), \ldots, {\bf K}_p(t_c) \right]=n.
\label{rank}
\end{equation}
\label{theorem1}
\end{theorem}
\begin{remark}
If ${\bf A}$ and ${\bf B}$ are constant matrices, the rank condition (\ref{rank}) for the linear time-varying system reduces to the rank condition for the linear time-invariant system \cite[page 128]{rugh93}, i.e., if
\begin{equation}
\text{rank} \left[ {\bf B}, {\bf A}{\bf B}, \ldots, {\bf A}^{n-1}{\bf B}\right]=n,
\label{rank1}
\end{equation}
then the linear time-invariant system $({\bf A}, {\bf B})$ is controllable.
\end{remark}
First, we consider the special case (\ref{invariant}), the time-invariant system obtained when the spacecraft orbit lies in the equatorial plane of the Earth's magnetic field ($i_m =0$). Let $\boldsymbol{\Sigma}$ denote the set of $3\times 3$ diagonal or anti-diagonal matrices whose second row is composed of zeros,
\[
\boldsymbol{\Sigma} := \Bigg\{ \left[ \begin{array}{ccc} 0 & 0 & \times \\ 0 & 0 & 0 \\ \times & 0 & 0 \end{array} \right]
\hspace{0.2in} \mbox{or} \hspace{0.2in}
\left[ \begin{array}{ccc} \times & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & \times \end{array} \right] \Bigg\},
\]
and let $\boldsymbol{\Lambda}$ denote the set of $3\times 3$ diagonal matrices of the form
\[
\boldsymbol{\Lambda} := \Bigg\{ \left[ \begin{array}{ccc} \times & 0 & 0 \\ 0 & \times & 0 \\ 0 & 0 & \times \end{array} \right] \Bigg\}.
\]
It is easy to verify that if $\boldsymbol{\Sigma}_i \in \boldsymbol{\Sigma}$, $\boldsymbol{\Sigma}_j \in \boldsymbol{\Sigma}$, and $\boldsymbol{\Lambda}_k \in \boldsymbol{\Lambda}$, then $\boldsymbol{\Sigma}_i \boldsymbol{\Sigma}_j \in \boldsymbol{\Sigma}$, $\boldsymbol{\Sigma}_i+ \boldsymbol{\Sigma}_j \in \boldsymbol{\Sigma}$, and $\boldsymbol{\Lambda}_k \boldsymbol{\Sigma}_i \in \boldsymbol{\Sigma}$.
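Before carrying out the symbolic expansion below, this structural bookkeeping can be spot-checked numerically. The following Python sketch assembles ${\bf A}$ and ${\bf B}$ of (\ref{invariant}) for the equatorial case and evaluates the rank condition (\ref{rank1}) directly; the inertia values, orbit rate, and field parameters are hypothetical placeholders, and any values with the same sparsity pattern lead to the same conclusion.
\begin{verbatim}
import numpy as np

# Placeholder inertia values (kg m^2), orbit rate (rad/s), and field parameters.
J11, J22, J33 = 1200.0, 900.0, 700.0
w0, mu_f, a = 1.06e-3, 7.9e15, 7.08e6
b2 = -mu_f / a**3                       # constant field component when i_m = 0

f41 = 8.0 * (J33 - J22) * w0**2 / J11
f46 = (-J11 + J22 - J33) * w0 / J11
f64 = (J11 - J22 + J33) * w0 / J33
f52 = 6.0 * (J33 - J11) * w0**2 / J22
f63 = 2.0 * (J11 - J22) * w0**2 / J33

A = np.zeros((6, 6))
A[0, 3] = A[1, 4] = A[2, 5] = 0.5
A[3, 0], A[3, 5] = f41, f46
A[4, 1] = f52
A[5, 2], A[5, 3] = f63, f64

B = np.zeros((6, 3))
B[3, 2] = -b2 / J11                     # omega_1 row
B[5, 0] = b2 / J33                      # omega_3 row

# Controllability matrix [B, AB, ..., A^5 B] of the time-invariant pair (A, B).
C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(6)])
print(np.allclose(C[1, :], 0.0))        # True: the q_2 row is identically zero
print(np.linalg.matrix_rank(C))         # strictly less than 6
\end{verbatim}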
Using this closure property to expand the matrix $[{\bf B}, {\bf A}{\bf B}, {\bf A}^2{\bf B}, {\bf A}^3{\bf B}, {\bf A}^4{\bf B}, {\bf A}^5{\bf B}]$, where ${\bf A}$ and ${\bf B}$ are defined in (\ref{invariant}), shows that the second row of the controllability matrix in (\ref{rank1}) is composed of all zeros. This proves that if the spacecraft orbit lies in the equatorial plane of the Earth's magnetic field, the spacecraft attitude cannot be stabilized by using only magnetic torques. Now we show that, under some simple conditions, the linear time-varying system (\ref{varying}) is controllable for any orbit that does not lie in the equatorial plane of the Earth's magnetic field, i.e., $i_m \neq 0$. From (\ref{controllable}), we have
\[
{\bf K}_0(t)=\boldsymbol{\Phi}(t,t){\bf B}(t)=e^{{\bf A}(t-t)}{\bf B}(t)={\bf B}(t),
\]
\begin{eqnarray}
{\bf K}_1(t)& = & \frac{\partial}{\partial \tau} \left[ \boldsymbol{\Phi}(t, \tau){\bf B}(\tau) \right] \Bigr|_{\tau = t} = \frac{\partial}{\partial \tau} \left[ e^{{\bf A}(t-\tau)}{\bf B}(\tau) \right] \Bigr|_{\tau = t} \nonumber \\
& = & \left[ -{\bf A} e^{{\bf A}(t-\tau )}{\bf B}(\tau ) + e^{{\bf A}(t-\tau )}{\bf B}'(\tau ) \right] \Bigr|_{\tau = t} = -{\bf A}{\bf B}(t)+{\bf B}'(t),
\label{k1}
\end{eqnarray}
\begin{eqnarray}
{\bf K}_2(t)& = & \frac{\partial^2}{\partial \tau^2} \left[ \boldsymbol{\Phi}(t, \tau){\bf B}(\tau) \right] \Bigr|_{\tau = t} \nonumber \\
& = & \left[ {\bf A}^2e^{{\bf A}(t-\tau)}{\bf B}(\tau) -2{\bf A} e^{{\bf A}(t-\tau )}{\bf B}'(\tau ) + e^{{\bf A}(t-\tau )}{\bf B}''(\tau ) \right] \Bigr|_{\tau = t} \nonumber \\
& = & {\bf A}^2{\bf B}(t) -2{\bf A}{\bf B}'(t)+{\bf B}''(t).
\label{k2}
\end{eqnarray}
Using the notation of (\ref{varying}), we can rewrite equation (\ref{k1}) as
\[
{\bf K}_1(t) = -\left[ \begin{array}{cc} {\bf 0}_3 & \frac{1}{2} {\bf I}_3 \\ \boldsymbol{\Lambda}_1 & \boldsymbol{\Sigma}_1 \end{array} \right]
\left[ \begin{array}{c} {\bf 0}_3 \\ {\bf B}_2 \end{array} \right]+
\left[ \begin{array}{c} {\bf 0}_3 \\ {\bf B}'_2 \end{array} \right]
=\left[ \begin{array}{c} -\frac{1}{2}{\bf B}_2 \\ -\boldsymbol{\Sigma}_1 {\bf B}_2 +{\bf B}'_2 \end{array} \right].
\] Since \[ {{\bf b}f A}^2{{\bf b}f B}= {{\bf b}f A} {\bf l}eft[ {\bf b}egin{array}{cc} {{\bf b}f 0}_3 & {\bf f}rac{1}{2} {{\bf b}f I}_3 \\ {\bf b}oldsymbol{{{\bf b}f L}ambda}_1 & {\bf b}oldsymbol{{{\bf b}f S}igma}_1 {\bf e}nd{array} {\bf r}ight] {\bf l}eft[ {\bf b}egin{array}{c} {{\bf b}f 0}_3 \\ {{\bf b}f B}_2 {\bf e}nd{array} {\bf r}ight] ={\bf l}eft[ {\bf b}egin{array}{cc} {{\bf b}f 0}_3 & {\bf f}rac{1}{2} {{\bf b}f I}_3 \\ {\bf b}oldsymbol{{{\bf b}f L}ambda}_1 & {\bf b}oldsymbol{{{\bf b}f S}igma}_1 {\bf e}nd{array} {\bf r}ight] {\bf l}eft[ {\bf b}egin{array}{c} {\bf f}rac{1}{2}{{\bf b}f B}_2 \\ {\bf b}oldsymbol{{{\bf b}f S}igma}_1 {{\bf b}f B}_2 {\bf e}nd{array} {\bf r}ight] ={\bf l}eft[ {\bf b}egin{array}{c} {\bf f}rac{1}{2}{\bf b}oldsymbol{{{\bf b}f S}igma}_1 {{\bf b}f B}_2 \\ {\bf f}rac{1}{2} {\bf b}oldsymbol{{{\bf b}f L}ambda}_1 {{\bf b}f B}_2+{\bf b}oldsymbol{{{\bf b}f S}igma}_1^2 {{\bf b}f B}_2 {\bf e}nd{array} {\bf r}ight] \] and \[ -2{{\bf b}f A} {{\bf b}f B}'=-2 {\bf l}eft[ {\bf b}egin{array}{cc} {{\bf b}f 0}_3 & {\bf f}rac{1}{2} {{\bf b}f I}_3 \\ {\bf b}oldsymbol{{{\bf b}f L}ambda}_1 & {\bf b}oldsymbol{{{\bf b}f S}igma}_1 {\bf e}nd{array} {\bf r}ight] {\bf l}eft[ {\bf b}egin{array}{c} {{\bf b}f 0}_3 \\ {{\bf b}f B}'_2 {\bf e}nd{array} {\bf r}ight] = {\bf l}eft[ {\bf b}egin{array}{c} -{{\bf b}f B}'_2 \\ -2 {\bf b}oldsymbol{{{\bf b}f S}igma}_1 {{\bf b}f B}'_2 {\bf e}nd{array} {\bf r}ight], \] equation ({\bf r}ef{k2}) is reduced to \[ {{\bf b}f K}_2(t) = {{\bf b}f A}^2{{\bf b}f B}-2{{\bf b}f A}{{\bf b}f B}'+{{\bf b}f B}''= {\bf l}eft[ {\bf b}egin{array}{c} {\bf f}rac{1}{2}{\bf b}oldsymbol{{{\bf b}f S}igma}_1 {{\bf b}f B}_2 -{{\bf b}f B}'_2 \\ {\bf f}rac{1}{2} {\bf b}oldsymbol{{{\bf b}f L}ambda}_1 {{\bf b}f B}_2+{\bf b}oldsymbol{{{\bf b}f S}igma}_1^2 {{\bf b}f B}_2-2 {\bf b}oldsymbol{{{\bf b}f S}igma}_1 {{\bf b}f B}'_2+{{\bf b}f B}''_2 {\bf e}nd{array} {\bf r}ight]. \] Hence, {\bf b}egin{eqnarray} [{{\bf b}f K}_0(t), {{\bf b}f K}_1(t), {{\bf b}f K}_2(t)] & = & [ {{\bf b}f B}(t) \,\,\, | \,\,\, -{{\bf b}f A}{{\bf b}f B}(t)+{{\bf b}f B}'(t) \,\,\, | \,\,\, {{\bf b}f A}^2{{\bf b}f B}(t)-2{{\bf b}f A}{{\bf b}f B}'(t)+{{\bf b}f B}''(t)] {\bf n}onumber \\ & = & {\bf l}eft[ {\bf b}egin{smallmatrix} {{\bf b}f 0}_3 \\ {{\bf b}f B}_2 {\bf e}nd{smallmatrix} {\bf m}iddle| {\bf b}egin{smallmatrix} -{\bf f}rac{1}{2} {{\bf b}f B}_2 \\ -{\bf b}oldsymbol{{{\bf b}f S}igma}_1 {{\bf b}f B}_2+{{\bf b}f B}'_2 {\bf e}nd{smallmatrix} {\bf m}iddle| {\bf b}egin{smallmatrix} {\bf f}rac{1}{2}{\bf b}oldsymbol{{{\bf b}f S}igma}_1 {{\bf b}f B}_2 -{{\bf b}f B}'_2(t) \\ {\bf f}rac{1}{2} {\bf b}oldsymbol{{{\bf b}f L}ambda}_1 {{\bf b}f B}_2+{\bf b}oldsymbol{{{\bf b}f S}igma}_1^2 {{\bf b}f B}_2 -2{\bf b}oldsymbol{{{\bf b}f S}igma}_1 {{\bf b}f B}'_2 +{{\bf b}f B}''_2 {\bf e}nd{smallmatrix} {\bf r}ight]. 
{\bf e}nd{eqnarray} Notice that {\bf b}egin{eqnarray} & & {\bf t}ext{rank} [{{\bf b}f K}_0(t), {{\bf b}f K}_1(t), {{\bf b}f K}_2(t)] {\bf n}onumber \\ & = & {\bf t}ext{rank} {\bf l}eft( {\bf l}eft[ {\bf b}egin{array}{cc} {{\bf b}f I}_3 & 0_3 \\ -2 {\bf b}oldsymbol{{{\bf b}f S}igma}_1 & {{\bf b}f I}_3 {\bf e}nd{array} {\bf r}ight] {\bf l}eft[ {\bf b}egin{smallmatrix} {{\bf b}f 0}_3 \\ {{\bf b}f B}_2 {\bf e}nd{smallmatrix} {\bf m}iddle| {\bf b}egin{smallmatrix} -{\bf f}rac{1}{2} {{\bf b}f B}_2 \\ -{\bf b}oldsymbol{{{\bf b}f S}igma}_1 {{\bf b}f B}_2+{{\bf b}f B}'_2 {\bf e}nd{smallmatrix} {\bf m}iddle| {\bf b}egin{smallmatrix} {\bf f}rac{1}{2}{\bf b}oldsymbol{{{\bf b}f S}igma}_1 {{\bf b}f B}_2 -{{\bf b}f B}'_2(t) \\ {\bf f}rac{1}{2} {\bf b}oldsymbol{{{\bf b}f L}ambda}_1 {{\bf b}f B}_2+{\bf b}oldsymbol{{{\bf b}f S}igma}_1^2 {{\bf b}f B}_2 -2{\bf b}oldsymbol{{{\bf b}f S}igma}_1 {{\bf b}f B}'_2 +{{\bf b}f B}''_2 {\bf e}nd{smallmatrix} {\bf r}ight] {\bf r}ight) {\bf n}onumber \\ & = & {\bf t}ext{rank} {\bf l}eft[ {\bf b}egin{smallmatrix} {{\bf b}f 0}_3 \\ {{\bf b}f B}_2 {\bf e}nd{smallmatrix} {\bf m}iddle| {\bf b}egin{smallmatrix} -{\bf f}rac{1}{2} {{\bf b}f B}_2 \\ {{\bf b}f B}'_2 {\bf e}nd{smallmatrix} {\bf m}iddle| {\bf b}egin{smallmatrix} {\bf f}rac{1}{2}{\bf b}oldsymbol{{{\bf b}f S}igma}_1 {{\bf b}f B}_2 -{{\bf b}f B}'_2(t) \\ {\bf f}rac{1}{2} {\bf b}oldsymbol{{{\bf b}f L}ambda}_1 {{\bf b}f B}_2 +{{\bf b}f B}''_2 {\bf e}nd{smallmatrix} {\bf r}ight] {\bf n}onumber \\ & = & {\bf t}ext{rank} {\bf l}eft[ {\bf b}egin{smallmatrix} {{\bf b}f 0}_3 \\ {{\bf b}f B}_2 {\bf e}nd{smallmatrix} {\bf m}iddle| {\bf b}egin{smallmatrix} - {{\bf b}f B}_2 \\ {{\bf b}f B}'_2 {\bf e}nd{smallmatrix} {\bf m}iddle| {\bf b}egin{smallmatrix} {\bf b}oldsymbol{{{\bf b}f S}igma}_1 {{\bf b}f B}_2 -2{{\bf b}f B}'_2(t) \\ {\bf f}rac{1}{2} {\bf b}oldsymbol{{{\bf b}f L}ambda}_1 {{\bf b}f B}_2 +{{\bf b}f B}''_2 {\bf e}nd{smallmatrix} {\bf r}ight], {\bf e}nd{eqnarray} {\bf b}egin{eqnarray} & & {\bf b}oldsymbol{{{\bf b}f S}igma}_1 {{\bf b}f B}_2-2 {{\bf b}f B}'_2(t) {\bf n}onumber \\ & = & {\bf l}eft[ {\bf b}egin{array}{ccc} 0 & 0 & f_{46} \\ 0 & 0 & 0 \\ f_{64} & 0 & 0 {\bf e}nd{array} {\bf r}ight] {\bf l}eft[ {\bf b}egin{array}{ccc} 0 & b_{42}(t) & b_{43}(t) \\ b_{51}(t) & 0 & b_{53}(t) \\ b_{61}(t) & b_{62}(t) & 0 {\bf e}nd{array} {\bf r}ight] -2{\bf l}eft[ {\bf b}egin{array}{ccc} 0 & b'_{42} & 0 \\ b'_{51} & 0 & b'_{53} \\ 0 & b'_{62} & 0 {\bf e}nd{array} {\bf r}ight] {\bf n}onumber \\ & = & {\bf l}eft[ {\bf b}egin{array}{ccc} f_{46} b_{61}(t) & f_{46} b_{62}(t)-2b'_{42} & 0 \\ -2 b'_{51} & 0 & 2b'_{53} \\ 0 & f_{64} b_{42}(t)-2b'_{62} & f_{64} b_{43}(t) {\bf e}nd{array} {\bf r}ight], {\bf e}nd{eqnarray} and {\bf b}egin{eqnarray} & & {\bf f}rac{1}{2} {\bf b}oldsymbol{{{\bf b}f L}ambda}_1 {{\bf b}f B}_2 + {{\bf b}f B}''_2(t) {\bf n}onumber \\ & = & {\bf f}rac{1}{2} {\bf l}eft[ {\bf b}egin{array}{ccc} f_{41} & 0 & 0 \\ 0 & f_{52} & 0 \\ 0 & 0 & f_{63} {\bf e}nd{array} {\bf r}ight] {\bf l}eft[ {\bf b}egin{array}{ccc} 0 & b_{42}(t) & b_{43}(t) \\ b_{51}(t) & 0 & b_{53}(t) \\ b_{61}(t) & b_{62}(t) & 0 {\bf e}nd{array} {\bf r}ight] +{\bf l}eft[ {\bf b}egin{array}{ccc} 0 & b''_{42} & 0 \\ b''_{51} & 0 & b''_{53} \\ 0 & b''_{62} & 0 {\bf e}nd{array} {\bf r}ight] {\bf n}onumber \\ & = & {\bf l}eft[ {\bf b}egin{array}{ccc} 0 & {\bf f}rac{1}{2} f_{41} b_{42}(t) + b''_{42} & {\bf f}rac{1}{2} f_{41} b_{43}(t) \\ {\bf f}rac{1}{2} f_{52} b_{51}(t) + b''_{51} & 0 & {\bf f}rac{1}{2} f_{52} b_{53}(t) + b''_{53} \\ {\bf f}rac{1}{2} f_{63} 
b_{61}(t) & {\bf f}rac{1}{2} f_{63} b_{62}(t) + b''_{62} & 0 {\bf e}nd{array} {\bf r}ight], {\bf e}nd{eqnarray} we have {\bf b}egin{eqnarray} & & {\bf l}eft[ {\bf b}egin{smallmatrix} {{\bf b}f 0}_3 \\ {{\bf b}f B}_2 {\bf e}nd{smallmatrix} {\bf m}iddle| {\bf b}egin{smallmatrix} - {{\bf b}f B}_2 \\ {{\bf b}f B}'_2 {\bf e}nd{smallmatrix} {\bf m}iddle| {\bf b}egin{smallmatrix} {\bf b}oldsymbol{{{\bf b}f S}igma}_1 {{\bf b}f B}_2 -2{{\bf b}f B}'_2(t) \\ {\bf f}rac{1}{2} {\bf b}oldsymbol{{{\bf b}f L}ambda}_1 {{\bf b}f B}_2 +{{\bf b}f B}''_2 {\bf e}nd{smallmatrix} {\bf r}ight] {\bf n}onumber \\ & = & {\bf f}ootnotesize {\bf l}eft[ {\bf b}egin{array}{ccccccccc} 0 & 0 & 0 & 0 & -b_{42}(t) & -b_{43}(t) & f_{46} b_{61}(t) & f_{46} b_{62}(t)-2b'_{42} & 0 \\ 0 & 0 & 0 & -b_{51}(t) & 0 & -b_{53}(t) & -2 b'_{51} & 0 & 2b'_{53} \\ 0 & 0 & 0 & -b_{61}(t) & -b_{62}(t) & 0 & 0 & f_{64} b_{42}(t)-2b'_{62} & f_{64} b_{43}(t) \\ 0 & b_{42}(t) & b_{43}(t) & 0 & b'_{42} & 0 & 0 & {\bf f}rac{1}{2} f_{41} b_{42}(t) + b''_{42} & {\bf f}rac{1}{2} f_{41} b_{43}(t) \\ b_{51}(t) & 0 & b_{53}(t) & b'_{51} & 0 & -b_{53}(t) & {\bf f}rac{1}{2} f_{52} b_{51}(t) + b''_{51} & 0 & {\bf f}rac{1}{2} f_{52} b_{53}(t) + b''_{53} \\ b_{61}(t) & b_{62}(t) & 0 & 0 & b'_{62} & 0 & {\bf f}rac{1}{2} f_{63} b_{61}(t) & {\bf f}rac{1}{2} f_{63} b_{62}(t) + b''_{62} & 0 {\bf e}nd{array} {\bf r}ight]. {\bf n}onumber {\bf n}ormalsize {\bf e}nd{eqnarray} To show that this matrix is full rank for some $t_c$, we show that there is a $6 {\bf t}imes 6$ submatrix whose determinant is not zero for $\omega_0t_c = {\bf f}rac{{\bf p}i}{2}$. In view of ({\bf r}ef{bij}), ({\bf r}ef{bp}), and ({\bf r}ef{bpp}), for this $t_c$, we have {\bf b}egin{equation} b_{53}(t_c)=b_{62}(t_c)=b'_{51}(t_c)=b'_{42}(t_c)=b''_{53}(t_c)=b''_{62}(t_c)=0. 
{\bf l}abel{reduce} {\bf e}nd{equation} Consider the submatrix composed of the $1$st, $2$nd, $4$th, $5$th, $7$th, $8$th columns, and using ({\bf r}ef{reduce}), we have {\bf b}egin{eqnarray} & & {\bf d}et {\bf l}eft[ {\bf b}egin{array}{cccccc} 0 & 0 & 0 & -b_{42}(t_c) & f_{46} b_{61}(t_c) & f_{46} b_{62}(t_c)-2b'_{42} \\ 0 & 0 & -b_{51}(t_c) & 0 & -2 b'_{51} & 0 \\ 0 & 0 & -b_{61}(t_c) & -b_{62}(t_c) & 0 & f_{64} b_{42}(t_c)-2b'_{62} \\ 0 & b_{42}(t_c) & 0 & b'_{42} & 0 & {\bf f}rac{1}{2} f_{41} b_{42}(t_c) + b''_{42} \\ b_{51}(t_c) & 0 & b'_{51} & 0 & {\bf f}rac{1}{2} f_{52} b_{51}(t_c) + b''_{51} & 0 \\ b_{61}(t_c) & b_{62}(t_c) & 0 & b'_{62} & {\bf f}rac{1}{2} f_{63} b_{61}(t_c) & {\bf f}rac{1}{2} f_{63} b_{62}(t_c) + b''_{62} {\bf e}nd{array} {\bf r}ight] {\bf n}onumber \\ & = & {\bf d}et {\bf l}eft[ {\bf b}egin{array}{cccccc} 0 & 0 & 0 & -b_{42}(t_c) & f_{46} b_{61}(t_c) & 0 \\ 0 & 0 & -b_{51}(t_c) & 0 & 0 & 0 \\ 0 & 0 & -b_{61}(t_c) & 0 & 0 & f_{64} b_{42}(t_c)-2b'_{62} \\ 0 & b_{42}(t_c) & 0 & 0 & 0 & {\bf f}rac{1}{2} f_{41} b_{42}(t_c) + b''_{42} \\ b_{51}(t_c) & 0 & 0& 0 & {\bf f}rac{1}{2} f_{52} b_{51}(t_c) + b''_{51} & 0 \\ b_{61}(t_c) & 0 & 0 & b'_{62} & {\bf f}rac{1}{2} f_{63} b_{61}(t_c) & 0 {\bf e}nd{array} {\bf r}ight] {\bf n}onumber \\ & = & b_{51}(t_c) {\bf d}et{\bf l}eft[ {\bf b}egin{array}{ccccc} 0 & 0 & -b_{42}(t_c) & f_{46} b_{61}(t_c) & 0 \\ 0 & 0 & 0 & 0 & f_{64} b_{42}(t_c)-2b'_{62} \\ 0 & b_{42}(t_c) & 0 & 0 & {\bf f}rac{1}{2} f_{41} b_{42}(t_c) + b''_{42} \\ b_{51}(t_c) & 0 & 0 & {\bf f}rac{1}{2} f_{52} b_{51}(t_c) + b''_{51} & 0 \\ b_{61}(t_c) & 0 & b'_{62} & {\bf f}rac{1}{2} f_{63} b_{61}(t_c) & 0 {\bf e}nd{array} {\bf r}ight] {\bf n}onumber \\ & = & -{\bf l}eft( f_{64} b_{42}(t_c)-2b'_{62} {\bf r}ight) b_{51}(t_c) {\bf d}et{\bf l}eft[ {\bf b}egin{array}{cccc} 0 & 0 & -b_{42}(t_c) & f_{46} b_{61}(t_c) \\ 0 & b_{42}(t_c) & 0 & 0 \\ b_{51}(t_c) & 0 & 0 & {\bf f}rac{1}{2} f_{52} b_{51}(t_c) + b''_{51} \\ b_{61}(t_c) & 0 & b'_{62} & {\bf f}rac{1}{2} f_{63} b_{61}(t_c) {\bf e}nd{array} {\bf r}ight] {\bf n}onumber \\ & = & - b_{42}(t_c) {\bf l}eft( f_{64} b_{42}(t_c)-2b'_{62} {\bf r}ight) b_{51}(t_c) {\bf d}et{\bf l}eft[ {\bf b}egin{array}{ccc} 0 & -b_{42}(t_c) & f_{46} b_{61}(t_c) \\ b_{51}(t_c) & 0 & {\bf f}rac{1}{2} f_{52} b_{51}(t_c) + b''_{51} \\ b_{61}(t_c) & b'_{62} & {\bf f}rac{1}{2} f_{63} b_{61}(t_c) {\bf e}nd{array} {\bf r}ight] {\bf n}onumber \\ & = & - b_{42}(t_c) {\bf l}eft( f_{64} b_{42}(t_c)-2b'_{62} {\bf r}ight) b_{51}(t_c) {\bf t}imes {\bf n}onumber \\ & & {\bf l}eft[ b_{51} b'_{62} f_{46} b_{61} -b_{42}{\bf l}eft( {\bf f}rac{1}{2} f_{52}b_{51}+b''_{51} {\bf r}ight)b_{61} +{\bf f}rac{1}{2} f_{63}b_{61}b_{42}b_{51} {\bf r}ight]. {\bf e}nd{eqnarray} Therefore, in view of Theorem {\bf r}ef{theorem1}, the time-varying system is controllable if {\bf b}egin{equation} f_{64} b_{42}(t_c)-2b'_{62} {\bf n}eq 0, {\bf l}abel{neq1} {\bf e}nd{equation} and {\bf b}egin{equation} b_{51} b'_{62} f_{46} b_{61} -b_{42}{\bf l}eft( {\bf f}rac{1}{2} f_{52}b_{51}+b''_{51} {\bf r}ight)b_{61} +{\bf f}rac{1}{2} f_{63}b_{61}b_{42}b_{51} {\bf n}eq 0. 
{\bf l}abel{neq2} {\bf e}nd{equation} Using ({\bf r}ef{para}), ({\bf r}ef{bij}), ({\bf r}ef{bp}), ({\bf r}ef{bpp}), and noticing that ${\bf s}in(\omega_0 t_c) = {\bf s}in({\bf f}rac{{\bf p}i}{2})=1$, we have {\bf b}egin{eqnarray} & & f_{64} b_{42}(t_c)-2b'_{62} {\bf n}onumber \\ & = & {\bf f}rac{(J_{11}-J_{22}+J_{33}) \omega_0}{J_{33}} {\bf f}rac{2{\bf m}u_f }{a^3 J_{11}}{\bf s}in(i_m) -2 {\bf f}rac{{\bf m}u_f \omega_0}{a^3 J_{33}}{\bf s}in(i_m) {\bf n}onumber \\ & = & {\bf f}rac{2{\bf m}u_f \omega_0 {\bf s}in(i_m)}{a^3 (J_{11}J_{33})} (J_{33}-J_{22}), {\bf n}onumber {\bf e}nd{eqnarray} the first condition ({\bf r}ef{neq1}) is reduced to {\bf b}egin{equation} J_{33} {\bf n}eq J_{22}. {\bf l}abel{cond1} {\bf e}nd{equation} Repeatedly using the same relations, we have {\bf b}egin{eqnarray} b_{51} b'_{62} f_{46} b_{61} & = & {\bf l}eft( -{\bf f}rac{2{\bf m}u_f}{a^3 J_{22}} {\bf s}in(i_m) {\bf r}ight) {\bf l}eft( {\bf f}rac{{\bf m}u_f\omega_0}{a^3 J_{33}} {\bf s}in(i_m) {\bf r}ight) {\bf l}eft( {\bf f}rac{(-J_{11}+J_{22}-J_{33})\omega_0}{J_{11}} {\bf r}ight) {\bf l}eft(-{\bf f}rac{{\bf m}u_f}{a^3 J_{33}} {\bf c}os(i_m) {\bf r}ight) {\bf n}onumber \\ & = & {\bf f}rac{2 {\bf m}u_f^3 \omega_0^2 (-J_{11}+J_{22}-J_{33})}{a^9 J_{11}J_{22}J_{33}^2} {\bf s}in^2(i_m) {\bf c}os(i_m) , {\bf l}abel{1st} {\bf e}nd{eqnarray} {\bf b}egin{eqnarray} -b_{42}{\bf l}eft( {\bf f}rac{1}{2} f_{52}b_{51}+b''_{51} {\bf r}ight)b_{61} & = & - {\bf l}eft( {\bf f}rac{2{\bf m}u_f}{a^3 J_{11}} {\bf s}in(i_m) {\bf r}ight) {\bf l}eft( {\bf f}rac{3(J_{33}-J_{11})\omega_0^2}{J_{22}} {\bf l}eft( -{\bf f}rac{2{\bf m}u_f}{a^3 J_{22}} {\bf s}in(i_m) {\bf r}ight) +{\bf f}rac{2{\bf m}u_f\omega_0^2}{a^3 J_{22}} {\bf s}in(i_m) {\bf r}ight) {\bf n}onumber \\ & & {\bf l}eft( -{\bf f}rac{{\bf m}u_f}{a^3 J_{33}} {\bf c}os(i_m) {\bf r}ight) {\bf n}onumber \\ & = & - {\bf l}eft( {\bf f}rac{2{\bf m}u_f}{a^3 J_{11}} {\bf s}in(i_m) {\bf r}ight) {\bf l}eft( {\bf f}rac{2{\bf m}u_f\omega_0^2 }{a^3 J_{22}^2}{\bf s}in(i_m) ( -3J_{33} + 3J_{11} +J_{22}) {\bf r}ight) {\bf n}onumber \\ & & {\bf l}eft( -{\bf f}rac{{\bf m}u_f}{a^3 J_{33}} {\bf c}os(i_m) {\bf r}ight) {\bf n}onumber \\ & = & {\bf f}rac{4{\bf m}u_f^3 \omega_0^2 ( -3J_{33} + 3J_{11} +J_{22}) }{a^9 J_{11} J_{22}^2 J_{33}} {\bf s}in^2(i_m) {\bf c}os(i_m) , {\bf l}abel{2nd} {\bf e}nd{eqnarray} and {\bf b}egin{eqnarray} {\bf f}rac{1}{2} f_{63}b_{61}b_{42}b_{51} & = & {\bf f}rac{ (J_{11}-J_{22})\omega_0^2}{J_{33}} {\bf l}eft( -{\bf f}rac{{\bf m}u_f}{a^3 J_{33}} {\bf c}os(i_m) {\bf r}ight) {\bf l}eft( {\bf f}rac{2{\bf m}u_f}{a^3 J_{11}} {\bf s}in(i_m) {\bf r}ight) {\bf l}eft( -{\bf f}rac{2{\bf m}u_f}{a^3 J_{22}} {\bf s}in(i_m) {\bf r}ight) {\bf n}onumber \\ & = & {\bf f}rac{4{\bf m}u_f^3 \omega_0^2 (J_{11}-J_{22}) }{ a^9 J_{11} J_{22} J_{33}^2} {\bf s}in^2(i_m) {\bf c}os(i_m). 
\label{3rd}
\end{eqnarray}
Combining (\ref{1st}), (\ref{2nd}), and (\ref{3rd}), we can rewrite (\ref{neq2}) as
\begin{eqnarray}
& & b_{51} b'_{62} f_{46} b_{61} -b_{42}\left( \frac{1}{2} f_{52}b_{51}+b''_{51} \right)b_{61} +\frac{1}{2} f_{63}b_{61}b_{42}b_{51} \nonumber \\
& = & \frac{\mu_f^3 \omega_0^2}{ a^9 J_{11} J_{22}^2 J_{33}^2} \sin^2(i_m) \cos(i_m) \nonumber \\
& & \left( 2J_{22}(-J_{11}+J_{22}-J_{33})+4J_{33} ( -3J_{33} + 3J_{11} +J_{22})+4 J_{22} (J_{11}-J_{22}) \right) \nonumber \\
& = & \frac{2\mu_f^3 \omega_0^2}{ a^9 J_{11} J_{22}^2 J_{33}^2} \sin^2(i_m) \cos(i_m) [J_{22} (J_{11}-J_{22}+J_{33})-6J_{33} (J_{33}-J_{11})].
\label{all}
\end{eqnarray}
Therefore, the second condition (\ref{neq2}) reduces to
\begin{equation}
J_{22} (J_{11}-J_{22}+J_{33}) \neq 6J_{33} (J_{33}-J_{11}).
\end{equation}
We summarize the main result of this paper in the following theorem.
\begin{theorem}
For the linear time-varying spacecraft attitude control system (\ref{varying}) using only magnetic torques, if the orbit lies in the equatorial plane of the Earth's magnetic field, then the spacecraft attitude is not fully controllable. If the orbit does not lie in the equatorial plane of the Earth's magnetic field and the following two conditions hold:
\begin{subequations}
\begin{gather}
J_{33} \neq J_{22}, \\
J_{22} (J_{11}-J_{22}+J_{33}) \neq 6J_{33} (J_{33}-J_{11}),
\end{gather}
\end{subequations}
then the spacecraft attitude is fully controllable by magnetic coils.
\end{theorem}
\begin{remark}
The controllability conditions involve only the spacecraft orbit plane and the spacecraft inertia matrix, and they are very easy to verify.
\end{remark}
\section{Conclusions}
In this paper, the controllability of the spacecraft attitude control system using only magnetic torques is considered. Conditions for controllability are derived. These conditions are easier to verify than previously established conditions.
\begin{thebibliography}{99}
\bibitem{sidi97} M.J. Sidi, Spacecraft Dynamics and Control: A Practical Engineering Approach, Cambridge University Press, Cambridge, UK, 1997.
\bibitem{me89} K.L. Musser and W.L. Ebert, Autonomous spacecraft attitude control using magnetic torquing only, Proceedings of the Flight Mechanics and Estimation Theory Symposium, NASA Goddard Space Flight Center, Greenbelt, MD, pp. 23-38, 1989.
\bibitem{pitte93} M.E. Pittelkau, Optimal periodic control for spacecraft pointing and attitude determination, Journal of Guidance, Control, and Dynamics, 16(6), pp. 1078-1084, 1993.
\bibitem{wisni97} R. Wisniewski, Linear time varying approach to satellite attitude control using only electromagnetic actuation, Proceedings of the AIAA Guidance, Navigation, and Control Conference, New Orleans, pp. 243-251, 1997.
\bibitem{psiaki01} M.L. Psiaki, Magnetic torquer attitude control via asymptotic periodic linear quadratic regulation, Journal of Guidance, Control, and Dynamics, 24(2), pp. 386-394, 2001.
\bibitem{la04} M. Lovera and A. Astolfi, Spacecraft attitude control using magnetic actuators, Automatica, 40, pp. 1405-1414, 2004.
\bibitem{sl05} E. Silani and M. Lovera, Magnetic spacecraft attitude control: a survey and some new results, Control Engineering Practice, 13, pp. 357-371, 2005.
\bibitem{la06} M. Lovera and A.
Astolfi, Global magnetic attitude control of spacecraft in the presence of gravity gradient, IEEE Transactions on Aerospace and Electronic Systems, 42(3), pp. 796-805, 2006.
\bibitem{yra07} H. Yan, I.M. Ross, and K.T. Alfriend, Pseudospectral feedback control for three-axis magnetic attitude stabilization in elliptic orbits, Journal of Guidance, Control, and Dynamics, 30(4), pp. 1107-1115, 2007.
\bibitem{plv10} T. Pulecchi, M. Lovera, and A. Varga, Optimal discrete-time design of three-axis magnetic attitude control laws, IEEE Transactions on Control Systems Technology, 18(3), pp. 714-722, 2010.
\bibitem{cw10} X. Chen and X. Wu, Model predictive control of cube satellite with magnet-torque, Proceedings of the 2010 IEEE International Conference on Information and Automation, pp. 997-1002, Harbin, China, 2010.
\bibitem{rh11} M. Reyhanoglu and J.R. Hervas, Three-axis magnetic attitude control algorithms for small satellites, Proceedings of the 5th International Conference on Recent Advances in Space Technologies, pp. 897-902, Istanbul, 2011.
\bibitem{zl11} A.M. Zanchettin and M. Lovera, $H_{\infty}$ attitude control of magnetically actuated satellites, Proceedings of the 18th IFAC World Congress, pp. 8479-8484, Milano, Italy, 2011.
\bibitem{kalman60} R.E. Kalman, Contributions to the theory of optimal control, Boletin de la Sociedad Matematica Mexicana, 5(2), pp. 102-109, 1960.
\bibitem{bhat05} S.P. Bhat, Controllability of nonlinear time-varying systems: application to spacecraft attitude control using magnetic actuation, IEEE Transactions on Automatic Control, 50(11), pp. 1725-1735, 2005.
\bibitem{yang10} Y. Yang, Quaternion based model for momentum biased nadir pointing spacecraft, Aerospace Science and Technology, 14(3), pp. 199-202, 2010.
\bibitem{yang12} Y. Yang, Analytic LQR design for spacecraft control system based on quaternion model, Journal of Aerospace Engineering, 25(3), pp. 448-453, 2012.
\bibitem{yang14} Y. Yang, Quaternion based LQR spacecraft control design is a robust pole assignment design, Journal of Aerospace Engineering, 27(1), pp. 168-176, 2014.
\bibitem{wertz78} J. Wertz, Spacecraft Attitude Determination and Control, Kluwer Academic Publishers, Dordrecht, Holland, 1978.
\bibitem{rugh93} W.J. Rugh, Linear System Theory, Prentice Hall, Englewood Cliffs, New Jersey, 1993.
\end{thebibliography}
\end{document}
\begin{document} \title[]{A sharp equivalence between $H^\infty$ functional calculus and square function estimates} \author{Christian Le Merdy} \address{Laboratoire de Math\'ematiques\\ Universit\'e de Franche-Comt\'e \\ 25030 Besan\c con Cedex\\ France} \email{[email protected]} \date{\today} \begin{abstract} Let $(T_t)_{t\geq 0}$ be a bounded analytic semigroup on $L^p(\mbox{${\mathcal O}$}mega)$, with $1<p<\infty$. Let $-A$ denote its infinitesimal generator. It is known that if $A$ and $A^*$ both satisfy square function estimates $\bignorm{\bigl(\int_{0}^{\infty}\vert A^{\frac{1}{2}} T_t(x)\vert^2\, dt\,\bigr)^{\frac{1}{2}}}_{L^p} \,\lesssim\,\norm{x}_{L^p}$ and $\bignorm{\bigl(\int_{0}^{\infty}\vert A^{*\frac{1}{2}} T_t^*(y)\vert^2\, dt\,\bigr)^{\frac{1}{2}}}_{L^{p'}} \,\lesssim\,\norm{y}_{L^{p'}}$ for $x\in L^p(\mbox{${\mathcal O}$}mega)$ and $y\in L^{p'}(\mbox{${\mathcal O}$}mega)$, then $A$ admits a bounded $H^{\infty}(\mbox{${\mathcal S}$}igma_\theta)$ functional calculus for any $\theta>\frac{\pi}{2}$. We show that this actually holds true for some $\theta<\frac{\pi}{2}$. \end{abstract} \maketitle \noindent {\it 2000 Mathematics Subject Classification : 47A60, 47D06.} \section{Introduction} Let $(\mbox{${\mathcal O}$}mega,\mu)$ be a measure space, let $1<p<\infty$ and let $(T_t)_{t\geq 0}$ be a bounded analytic semigroup on $L^p(\mbox{${\mathcal O}$}mega)$. Let $-A$ denote its infinitesimal generator. Various square functions can be associated to $(T_t)_{t\geq 0}$ and $A$. In this paper we will focus on the following ones. For any positive real number $\alpha>0$ and any $x\in L^p(\mbox{${\mathcal O}$}mega)$, we set \begin{equation}\label{Alpha} \norm{x}_{A,\alpha}\,=\,\mbox{${\mathcal B}$}ignorm{\mbox{${\mathcal B}$}igl(\int_{0}^{\infty} t^{2\alpha-1} \bigl\vert A^{\alpha}T_t(x)\bigr\vert^2 \, dt\,\mbox{${\mathcal B}$}igr)^{\frac{1}{2}}}_{L^p(\mbox{${\mathcal O}$}mega)}. \end{equation} Such square functions have played a key role in the analysis of analytic semigroups and in various estimates involving them for at least 40 years. In \cite{MI} and \cite{CDMY}, McIntosh and his co-authors introduced $H^\infty$ functional calculus and gave remarkable connections between boundedness of that functional calculus and estimates of square functions on $L^p$-spaces. The aim of this note is to give an improvement of their main result regarding the angle condition in the $H^\infty$ functional calculus. We start with a little background on sectoriality and $H^\infty$ functional calculus. We refer to \cite{CDMY, H, KW1, LM1, LM3, MI} for details and complements. For any angle $\omega\in(0,\pi)$, we consider the sector $\mbox{${\mathcal S}$}igma_\omega=\{z\in\ensuremath{\mathbb{C}}^*\, :\, \vert{\rm Arg}(z)\vert<\omega\}$. Let $X$ be a Banach space and let $B(X)$ denote the algebra of all bounded operators on $X$. A closed, densely defined linear operator $A\colon D(A)\subset X\to X$ is called sectorial of type $\omega$ if its spectrum $\sigma(A)$ is included in the closed sector $\overline{\mbox{${\mathcal S}$}igma_\omega}$, and for any angle $\theta\in(\omega,\pi)$, there is a constant $K_\theta>0$ such that \begin{equation}\label{Sector} \vert z \vert\norm{(z -A)^{-1}}\leq K_\theta,\qquad z \in\ensuremath{\mathbb{C}}\setminus \overline{\mbox{${\mathcal S}$}igma_\theta}. \end{equation} We recall that sectorial operators of type $<\frac{\pi}{2}$ coincide with negative generators of bounded analytic semigroups. 
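Let us illustrate this definition with a simple and standard example, which is not taken from the references quoted above: let $m\colon \Omega\to[0,\infty)$ be a measurable function and let $A$ be the multiplication operator $Af=mf$ on $L^p(\Omega)$, defined on its natural domain. Then $\sigma(A)$ is contained in $[0,\infty)$ and, for any $\theta\in(0,\pi)$ and any $z\in\ensuremath{\mathbb{C}}\setminus\overline{\Sigma_\theta}$,
$$
\vert z\vert\,\norm{(z-A)^{-1}}\,\leq\,\sup_{\lambda\geq 0}\frac{\vert z\vert}{\vert z-\lambda\vert}\,=\,\frac{\vert z\vert}{{\rm dist}(z,[0,\infty))}\,\leq\,\frac{1}{\sin\theta}\,,
$$
so that $A$ is sectorial of type $\omega$ for every $\omega\in(0,\pi)$, and the associated bounded analytic semigroup is simply $T_tf=e^{-tm}f$.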
For any $\theta\in(0,\pi)$, let $H^{\infty}(\mbox{${\mathcal S}$}igma_\theta)$ be the algebra of all bounded analytic functions $f\colon \mbox{${\mathcal S}$}igma_\theta\to \ensuremath{\mathbb{C}}$, equipped with the supremum norm $\norm{f}_{\infty,\theta}=\,\sup\bigl\{\vert f(z)\vert \, :\, z\in \mbox{${\mathcal S}$}igma_\theta\bigr\}.$ Let $H^{\infty}_{0}(\mbox{${\mathcal S}$}igma_\theta)\subset H^{\infty}(\mbox{${\mathcal S}$}igma_\theta)$ be the subalgebra of bounded analytic functions $f\colon \mbox{${\mathcal S}$}igma_\theta\to \ensuremath{\mathbb{C}}$ for which there exist $s,c>0$ such that $\vert f(z)\vert\leq c \vert z \vert^s\bigl(1+\vert z \vert)^{-2s}\,$ for any $z\in \mbox{${\mathcal S}$}igma_\theta$. Given a sectorial operator $A$ of type $\omega\in(0,\pi)$, a bigger angle $\theta\in(\omega,\pi)$, and a function $f\in H^{\infty}_{0}(\mbox{${\mathcal S}$}igma_\theta)$, one may define a bounded operator $f(A)$ by means of a Cauchy integral (see e.g. \cite[Section 2.3]{H} or \cite[Section 9]{KuW}); the resulting mapping $H^{\infty}_{0}(\mbox{${\mathcal S}$}igma_\theta)\to B(X)$ taking $f$ to $f(A)$ is an algebra homomorphism. By definition, $A$ has a bounded $H^{\infty}(\mbox{${\mathcal S}$}igma_\theta)$ functional calculus provided that this homomorphism is bounded, that is, there exists a constant $C>0$ such that $\norm{f(A)}_{B(X)}\leq C\norm{f}_{\infty,\theta}$ for any $f\in H^{\infty}_{0}(\mbox{${\mathcal S}$}igma_\theta)$. In the case when $A$ has a dense range, the latter boundedness condition allows a natural extention of $f\mapsto f(A)$ to the full algebra $H^{\infty}(\mbox{${\mathcal S}$}igma_\theta)$. It is clear that if $\omega<\theta_1<\theta_2<\pi$ and the operator $A$ admits a bounded $H^{\infty}(\mbox{${\mathcal S}$}igma_{\theta_1})$ functional calculus, then it admits a bounded $H^{\infty}(\mbox{${\mathcal S}$}igma_{\theta_2})$ functional calculus as well. Indeed we have $H^{\infty}(\mbox{${\mathcal S}$}igma_{\theta_2})\subset H^{\infty}(\mbox{${\mathcal S}$}igma_{\theta_1})$, with $\norm{\cdotp}_{\infty,\theta_1}\leq\norm{\cdotp}_{\infty,\theta_2}$. However the converse is wrong, as shown by Kalton \cite{K}. The resulting issue of reducing the angle of a bounded $H^{\infty}$ functional calculus is at the heart of the present work. Let us now focus on the case when $X=L^p(\mbox{${\mathcal O}$}mega)$, with $1<p<\infty$. For simplicity we let $\norm{\ }_p$ denote the norm on this space. We let $p'=p/(p-1)$ denote the conjugate number of $p$. Assume that $A\colon D(A)\subset L^p(\mbox{${\mathcal O}$}mega) \to L^p(\mbox{${\mathcal O}$}mega)$ is a sectorial operator of type $<\frac{\pi}{2}$, and let $T_t=e^{-tA}$ for $t\geq 0$. It follows from \cite[Cor. 6.7]{CDMY} that if $A$ has a bounded $H^{\infty}(\mbox{${\mathcal S}$}igma_\theta)$ functional calculus for some $\theta<\frac{\pi}{2}$, then for any $\alpha>0$, it satisfies a square function estimate $$ \norm{x}_{A,\alpha}\lesssim\norm{x}_p,\qquad x\in L^p(\mbox{${\mathcal O}$}mega). $$ Further, the adjoint $A^*$ is a sectorial operator of type $\omega$ on $X^*=L^{p'}(\mbox{${\mathcal O}$}mega)$ and $f(A)^*=f(A^*)$ for any $f\in H^{\infty}_{0}(\mbox{${\mathcal S}$}igma_\theta)$. Thus $A^*$ admits a bounded $H^{\infty}(\mbox{${\mathcal S}$}igma_\theta)$ functional calculus as well hence satisfies estimates $\norm{y}_{A^*,\alpha}\lesssim\norm{y}_{p'}$ for $y\in L^{p'}(\mbox{${\mathcal O}$}mega)$. Conversely, it follows from \cite[Cor. 
6.2]{CDMY} and its proof that if $A$ and $A^*$ satisfy square function estimates $\norm{x}_{A,\frac{1}{2}}\lesssim\norm{x}_p$ and $\norm{y}_{A^*,\frac{1}{2}}\lesssim\norm{y}_{p'}$, then $A$ admits a bounded $H^{\infty}(\mbox{${\mathcal S}$}igma_\theta)$ functional calculus for any $\theta>\frac{\pi}{2}$. Our main result is that the latter property can be achieved for some $\theta<\frac{\pi}{2}$. Altogether, this yields the following equivalence statement. \begin{theorem}\label{Main} Let $1<p<\infty$ and let $(T_t)_{t\geq 0}$ be a bounded analytic semigroup on $L^p(\mbox{${\mathcal O}$}mega)$, with generator $-A$. The following assertions are equivalent. \begin{itemize} \item [(i)] $A$ and $A^*$ satisfy square function estimates \begin{equation}\label{SFE} \norm{x}_{A,\frac{1}{2}}\lesssim\norm{x}_p \qquad\hbox{and}\qquad \norm{y}_{A^*,\frac{1}{2}}\lesssim\norm{y}_{p'} \end{equation} for $x\in L^p(\mbox{${\mathcal O}$}mega)$ and $y\in L^{p'}(\mbox{${\mathcal O}$}mega)$. \item [(ii)] There exists $\theta\in \bigl(0,\frac{\pi}{2}\bigr)$ such that $A$ admits a bounded $H^{\infty}(\mbox{${\mathcal S}$}igma_\theta)$ functional calculus. \end{itemize} \end{theorem} In the above discussion and later on in the paper, we write `$N_1(x)\lesssim N_2(x)$ for $x\in X$' to indicate an inequality $N_1(x)\leq C N_2(x)$ which holds for a constant $C>0$ not depending on $x\in X$. It follows from the pionneering work of Dore and Venni \cite{DV} that if $A$ has a bounded $H^{\infty}(\mbox{${\mathcal S}$}igma_\theta)$ functional calculus for some $\theta<\frac{\pi}{2}$, then it has the so-called maximal $L^q$-regularity for $1<q<\infty$ (see e.g. \cite{KuW} for details). As a consequence of Theorem \ref{Main}, we see that $A$ has maximal $L^q$-regularity provided that $A$ and $A^*$ satisfy the square function estimates (\ref{SFE}). More observations will be given in Section 4. The implication `(i)$\,\mbox{${\mathcal R}$}ightarrow\,$(ii)' of Theorem \ref{Main} is proved in Section 3 (see the two lines following Theorem \ref{Main2}). It uses preliminary results and some background discussed in Section 2 below. \section{Preparatory results} Throughout we let $(\mbox{${\mathcal O}$}mega,\mu)$ be a measure space and we fix a number $1<p<\infty$. We start with a few observations on tensor products. Given any Banach space $Z$, we regard the algebraic tensor product $L^p(\mbox{${\mathcal O}$}mega) \otimes Z$ as a (dense) subspace of the Bochner space $L^p(\mbox{${\mathcal O}$}mega; Z)$ in the usual way. It is plain that for any $S\in B(Z)$, the tensor extension $I_{L^p}\otimes S$ of $S$ defined on $L^p(\mbox{${\mathcal O}$}mega) \otimes Z$ extends to a bounded operator $I_{L^p}\overline{\otimes} S\colon L^p(\mbox{${\mathcal O}$}mega;Z) \to L^p(\mbox{${\mathcal O}$}mega;Z)$, whose norm is equal to $\norm{S}$. Let $K$ be a Hilbert space. It is well-known that similarly for any $T\in B(L^p(\mbox{${\mathcal O}$}mega))$, the tensor extension $T\otimes I_K$ extends to a bounded operator $T\overline{\otimes} I_K\colon L^p(\mbox{${\mathcal O}$}mega;K) \to L^p(\mbox{${\mathcal O}$}mega;K)$, whose norm is equal to $\norm{T}$. The following is an extension of the latter result. \begin{lemma}\label{Tensor} Let $H,K$ be Hilbert spaces, and let $H\mathop{\otimes}\limits^{2} K$ denote their Hilbertian tensor product. 
For any bounded operator $T\colon L^p(\Omega)\to L^p(\Omega;H)$, the tensor extension $T\otimes I_K$ extends to a bounded operator
$$
T\overline{\otimes}I_K\colon L^p(\Omega;K)\longrightarrow L^p(\Omega;H\mathop{\otimes}\limits^{2} K),
$$
whose norm is equal to $\norm{T}$.
\end{lemma}
This can be easily shown by adapting the proof of the scalar case (i.e. $H=\ensuremath{\mathbb{C}}$) given in \cite[Chap. V; Thm. 2.7]{GCRF}. Details are left to the reader.

Let $(\Lambda,\nu)$ be an auxiliary measure space and recall the isometric duality isomorphism
$$
L^p(\Omega; L^2(\Lambda))^*\,=\, L^{p'}(\Omega; L^2(\Lambda)),
$$
that we will use throughout without further reference. Following \cite[Def. 2.7]{JLX}, we say that an element $u$ of $L^p(\Omega; L^2(\Lambda))$ is represented by a measurable function $\varphi\colon \Lambda\to L^p(\Omega)$ provided that $\langle \varphi(\cdotp),y\rangle$ belongs to $L^2(\Lambda)$ for any $y\in L^{p'}(\Omega)$ and
$$
\langle u, y\otimes b\rangle=\,\int_{\Lambda} \langle\varphi(t),y\rangle\, b(t)\, d\nu(t), \qquad y\in L^{p'}(\Omega),\, b\in L^{2}(\Lambda).
$$
Such a representation is necessarily unique and
$$
\Bignorm{\Bigl(\int_{\Lambda}\bigl\vert\varphi(t)\bigr\vert^2\, d\nu(t)\,\Bigr)^{\frac{1}{2}}}_{L^p(\Omega)}
$$
is the norm of $u$ in $L^p(\Omega; L^2(\Lambda))$. In this case we simply say that $\varphi$ belongs to $L^p(\Omega; L^2(\Lambda))$, and make no difference between $u$ and $\varphi$. If $1<p<2$, then $L^p(\Omega; L^2(\Lambda))\subset L^2(\Lambda; L^p(\Omega))$ contractively, hence every element of $L^p(\Omega; L^2(\Lambda))$ can be represented by a measurable function $\Lambda\to L^p(\Omega)$. However this is no longer true if $p>2$, as shown in \cite[App. B]{JLX}.

We let $J=(0,\infty)$, equipped with Lebesgue measure $dt$. The above discussion applies to the definition of square functions. Namely, consider $(T_t)_{t\geq 0}$ and $A$ as in (\ref{Alpha}), and let $\alpha>0$ and $x\in L^p(\Omega)$. The function $\varphi\colon J\to L^p(\Omega)$ defined by $\varphi(t)=t^{\alpha -\frac{1}{2}} A^\alpha T_t(x)$ is continuous. When it belongs to $L^p(\Omega;L^2(J))$, then $\norm{x}_{A,\alpha}$ is equal to its norm in that space. Otherwise we have $\norm{x}_{A,\alpha}=\infty$.

The following lemma will be used in the analysis of square functions.
\begin{lemma}\label{FVarphi} Let $\Gamma\colon J\to B(L^p(\Omega))$ be a continuous function such that $\Gamma(\cdotp)x$ represents an element of $L^p(\Omega;L^2(J))$ for any $x\in L^p(\Omega)$, and there exists a constant $C\geq 0$ such that
$$
\Bignorm{\Bigl(\int_{0}^{\infty}\bigl\vert \Gamma(t)x\bigr\vert^2\, dt\,\Bigr)^{\frac{1}{2}}}_{L^p(\Omega)} \,\leq\,C\norm{x}_p,\qquad x\in L^p(\Omega).
$$
Let $\varphi\colon J\to L^p(\Omega)$ be a continuous function representing an element of $L^p(\Omega;L^2(J))$.
Then the function $(s,t)\mapsto \mbox{${\mathcal G}$}amma(t)\varphi(s)$ represents an element of $L^p(\mbox{${\mathcal O}$}mega;L^2(J^2))$ and \begin{equation}\label{FVarphi1} \mbox{${\mathcal B}$}ignorm{\mbox{${\mathcal B}$}igl(\int_{0}^{\infty}\int_{0}^{\infty}\bigl\vert \mbox{${\mathcal G}$}amma(t)\varphi(s)\bigr\vert^2\, dsdt \,\mbox{${\mathcal B}$}igr)^{\frac{1}{2}}}_{L^p(\mbox{${\mathcal O}$}mega)}\, \leq C\,\mbox{${\mathcal B}$}ignorm{\mbox{${\mathcal B}$}igl(\int_{0}^{\infty}\bigl\vert \varphi(s)\bigr\vert^2\, ds\,\mbox{${\mathcal B}$}igr)^{\frac{1}{2}}}_{L^p(\mbox{${\mathcal O}$}mega)}. \end{equation} \end{lemma} \begin{proof} According to the assumption on $\mbox{${\mathcal G}$}amma$, we may define a bounded operator $T\colon L^p(\mbox{${\mathcal O}$}mega)\to L^p(\mbox{${\mathcal O}$}mega;L^2(J))$ by letting $T(x)= \mbox{${\mathcal G}$}amma(\cdotp)x$ for any $x\in L^p(\mbox{${\mathcal O}$}mega)$. Recall that we have a natural identification $L^2(J)\mathop{\otimes}\limits^{2} L^2(J)\simeq L^2(J^2)$. Hence applying Lemma \ref{Tensor}, we obtain an extension \begin{equation}\label{Tensor-T} T\overline{\otimes} I_{L^2(J)}\colon L^p(\mbox{${\mathcal O}$}mega;L^2(J))\longrightarrow L^p(\mbox{${\mathcal O}$}mega; L^2(J^2)), \end{equation} whose norm is equal to $\norm{T}$. Let $\mbox{${\mathcal A}$}\subset L^2(J)$ be the (dense) subspace of all square summable functions with compact support. For any $y\in L^{p'}(\mbox{${\mathcal O}$}mega)$, the function $(s,t)\mapsto \langle \mbox{${\mathcal G}$}amma(t)\varphi(s),y\rangle$ is continuous hence its product with any element of $\mbox{${\mathcal A}$}\otimes\mbox{${\mathcal A}$}$ is integrable. We claim that for any $\phi\in \mbox{${\mathcal A}$}\otimes\mbox{${\mathcal A}$}$, \begin{equation}\label{FVarphi2} \int_{J^2} \langle \mbox{${\mathcal G}$}amma(t)\varphi(s),y\rangle\, \phi(s,t)\, dsdt\, = \bigl\langle (T\overline{\otimes} I_{L^2(J)})(\varphi), y\otimes\phi\bigr\rangle. \end{equation} To prove this, consider $\phi\in \mbox{${\mathcal A}$}\otimes\mbox{${\mathcal A}$}$; one can find finite families $(a_i)_i$ and $(b_i)_i$ in $\mbox{${\mathcal A}$}$ such that $(a_i)_i$ is an orthonormal family, and $$ \phi=\sum_i a_i\otimes b_i. $$ Since $\varphi$ is continuous, each $a_i\varphi\colon J\to L^p(\mbox{${\mathcal O}$}mega)$ is integrable and we may define $$ z_i=\int_{J} a_i(s) \varphi(s)\, ds $$ in $L^p(\mbox{${\mathcal O}$}mega)$. Then we have \begin{align*} \int_{J^2} \langle \mbox{${\mathcal G}$}amma(t)\varphi(s),y\rangle\, \phi(s,t)\, dsdt\, & =\, \int_{J^2} \sum_i \langle \mbox{${\mathcal G}$}amma(t)\varphi(s),y\rangle\,a_i(s)b_i(t)\, dsdt\\ & = \, \int_{J}\sum_i \langle \mbox{${\mathcal G}$}amma(t)z_i,y\rangle\, b_i(t)\, dt\\ & =\, \sum_i \langle T(z_i), y\otimes b_i\rangle\\ & =\, \mbox{${\mathcal B}$}igl\langle \sum_i T(z_i)\otimes \overline{a_i}, \sum_i y\otimes a_i\otimes b_i\mbox{${\mathcal B}$}igr\rangle. \end{align*} Let $Q\colon L^2(J)\to L^2(J)$ denote the orthogonal projection onto the linear span of the $\overline{a_i}$'s. Then we have $$ \sum_i z_i\otimes \overline{a_i}=\bigl(I_{L^p}\overline{\otimes} Q\bigr)(\varphi). 
$$ Hence the above calculation shows that \begin{align*} \int_{J^2} \langle \mbox{${\mathcal G}$}amma(t)\varphi(s),y\rangle\, \phi(s,t)\, dsdt\, & = \bigl \langle (T\otimes I_{L^2(J)})\bigl(I_{L^p}\overline{\otimes} Q\bigr)(\varphi), y\otimes\phi\bigr\rangle\\ & = \bigl\langle\bigl(I_{L^p}\overline{\otimes}Q \overline{\otimes} I_{L^2(J)} \bigr) (T\overline{\otimes} I_{L^2(J)})(\varphi), y\otimes\phi\bigr\rangle\\ & = \bigl\langle (T\overline{\otimes} I_{L^2(J)})(\varphi), y\otimes (Q^*\otimes I_{L^2(J)})(\phi)\bigr\rangle. \end{align*} Moreover in the duality considered here, $(Q^*\otimes I_{L^2(J)})(\phi)=\phi$, hence we obtain (\ref{FVarphi2}). The latter identity shows that $$ \mbox{${\mathcal B}$}igl\vert \int_{J^2} \langle \mbox{${\mathcal G}$}amma(t)\varphi(s),y\rangle\, \phi(s,t)\, dsdt\,\mbox{${\mathcal B}$}igr\vert\leq\norm{T}\norm{y}_{p'} \norm{\phi}_{L^2(J^2)}\,\norm{\varphi}_{L^p(\mbox{${\mathcal O}$}mega;L^2(J))}. $$ By the density of $\mbox{${\mathcal A}$}\otimes \mbox{${\mathcal A}$}$ in $L^2(J^2)$ and duality, this shows that $(s,t)\mapsto \langle \mbox{${\mathcal G}$}amma(t)\varphi(s),y\rangle\,$ belongs to $L^2(J^2)$. By densiy again, (\ref{FVarphi2}) holds true as well for any $\phi\in L^2(J^2)$ and any $y\in L^{p'}(\mbox{${\mathcal O}$}mega)$. Hence $(s,t)\mapsto \mbox{${\mathcal G}$}amma(t)\varphi(s)$ belongs to $L^p(\mbox{${\mathcal O}$}mega; L^2(J^2))$ and represents $(T\overline{\otimes} I_{L^2(J)})(\varphi)$. Then the inequality (\ref{FVarphi1}) follows at once. \end{proof} We now turn to Rademacher averages and $R$-boundedness. Let $(\varepsilon_k)_{k\geq 1}$ be a sequence of independent Rademacher variables on some probability space $\mbox{${\mathcal O}$}mega_0$. For any Banach space $X$, we let ${\rm Rad}(X)\subset L^{2}(\mbox{${\mathcal O}$}mega_0;X)$ denote the closed linear span of finite sums $\sum_k\varepsilon_k\otimes x_k\,$, with $x_k\in X$. A well-known application of Khintchine's inequality is that we have an isomorphism ${\rm Rad}(L^p(\mbox{${\mathcal O}$}mega))\simeq L^p(\mbox{${\mathcal O}$}mega;\ell^2)$. The argument for this result (see e.g. \cite[pp. 73-74]{LT}) shows as well that if $K$ is any Hilbert space, then we have ${\rm Rad}(L^p(\mbox{${\mathcal O}$}mega;K))\simeq L^p(\mbox{${\mathcal O}$}mega;\ell^2(K))$. Thus if $(\varphi_k)_k$ is a finite family of functions $\Lambda\to L^p(\mbox{${\mathcal O}$}mega)$ reprensenting elements of $L^p(\mbox{${\mathcal O}$}mega;L^2(\Lambda))$, we have \begin{equation}\label{Rad} \mbox{${\mathcal B}$}ignorm{\sum_k\varepsilon_k\otimes \varphi_k}_{{\rm Rad}(L^p(\mbox{${\mathcal O}$}mega;L^2(\Lambda)))} \,\approx\,\mbox{${\mathcal B}$}ignorm{\mbox{${\mathcal B}$}igl(\int_{\Lambda}\sum_k\vert\varphi_k(t)\vert^2\, d\nu(t)\,\mbox{${\mathcal B}$}igr)^{\frac{1}{2}}}_{L^p(\mbox{${\mathcal O}$}mega)}. \end{equation} By definition, a set $F\subset B(X)$ is $R$-bounded if there is a constant $C\geq 0$ such that $$ \mbox{${\mathcal B}$}ignorm{\sum_k\varepsilon_k\otimes T_k(x_k)}_{{\rm Rad}(X)}\,\leq\, C \mbox{${\mathcal B}$}ignorm{\sum_k\varepsilon_k\otimes x_k}_{{\rm Rad}(X)} $$ for any finite families $(T_k)_{k}$ in $F$ and $(x_k)_{k}$ in $X$. Let $A$ be a sectorial operator on $X$. We say that $A$ is $R$-sectorial of $R$-type $\omega\in (0,\pi)$ if $\sigma(A)\subset\overline{\mbox{${\mathcal S}$}igma_{\omega}}$ and for any angle $\theta\in (\omega,\pi)$, the set $$ \bigl\{ z(z-A)^{-1}\, :\, z\in \ensuremath{\mathbb{C}}\setminus \overline{\mbox{${\mathcal S}$}igma_{\theta}}\bigr\} $$ is $R$-bounded. 
Clearly this condition is a strengthening of (\ref{Sector}). $R$-boundedness goes back to \cite{BG,CPSW}, whereas $R$-sectoriality was introduced by Weis \cite{W1}. That fundamental paper was the starting point of a wealth of applications of $R$-boundedness to various questions involving multipliers, square functions and functional calculi. See in particular \cite{KuW,KW1} and the references therein. We will use the following result (see \cite[Prop. 5.1]{KW1}), which shows the role of $R$-boundedness in the problem of reducing the angle of a bounded $H^\infty$ functional calculus.

\begin{proposition}\label{KW} (Kalton-Weis) Let $1<p<\infty$, let $A$ be a sectorial operator on $L^p(\Omega)$ and let $0<\omega<\theta_0<\pi$ be two angles such that $A$ has a bounded $H^{\infty}(\Sigma_{\theta_0})$ functional calculus, and $A$ is $R$-sectorial of $R$-type $\omega$. Then for any $\theta>\omega$, the operator $A$ has a bounded $H^{\infty}(\Sigma_{\theta})$ functional calculus.
\end{proposition}

\section{Square function estimates imply $R$-sectoriality}

Our main result is the following.

\begin{theorem}\label{Main2} Let $A$ be a sectorial operator of type $<\frac{\pi}{2}$ on $L^p(\Omega)$, with $1<p<\infty$. Assume that $A$ and $A^*$ satisfy square function estimates
\begin{equation}\label{SFE3}
\norm{x}_{A,\frac{1}{2}}\lesssim\norm{x}_p,\qquad x\in L^{p}(\Omega),
\end{equation}
and
\begin{equation}\label{SFE4}
\norm{y}_{A^*,\frac{1}{2}}\lesssim\norm{y}_{p'},\qquad y\in L^{p'}(\Omega).
\end{equation}
Then $A$ is $R$-sectorial of $R$-type $<\frac{\pi}{2}$.
\end{theorem}

According to Proposition \ref{KW} and the discussion before Theorem \ref{Main}, the latter is a consequence of Theorem \ref{Main2}.

\begin{proof}[Proof of Theorem \ref{Main2}] We assume (\ref{SFE3}) and (\ref{SFE4}). Consider the two sets
$$
F_1=\bigl\{T_t\, :\, t\geq 0\bigr\} \qquad\hbox{and}\qquad F_2=\bigl\{tAT_t\, :\, t\geq 0\bigr\}.
$$
According to \cite[Section 4]{W1}, showing that $A$ is $R$-sectorial of $R$-type $<\frac{\pi}{2}$ is equivalent to showing that $F_1$ and $F_2$ are $R$-bounded. Before getting to the proof of these two properties, we need to establish a few intermediate estimates.

We first show that the square function $\norm{x}_{A,1}$ is finite for any $x\in L^p(\Omega)$ and that we have a uniform estimate
\begin{equation}\label{SFE1}
\norm{x}_{A,1}\lesssim\norm{x}_p,\qquad x\in L^p(\Omega).
\end{equation}
Indeed, set $\Gamma(t)=A^{\frac{1}{2}}T_t$ for any $t>0$, consider $x\in L^p(\Omega)$ and set $\varphi(s)=A^{\frac{1}{2}}T_s (x)$ for any $s>0$. Then
$$
\Gamma(t)\varphi(s) = A^{\frac{1}{2}}T_t\bigl(A^{\frac{1}{2}}T_s (x)\bigr) = AT_{t+s}(x),\qquad t,s>0.
$$
Applying the estimate (\ref{SFE3}) twice, together with Lemma \ref{FVarphi}, we deduce an estimate
$$
\Bignorm{\Bigl(\int_{0}^{\infty}\int_{0}^{\infty} \bigl\vert AT_{t+s}(x)\bigr\vert^2\, dsdt \,\Bigr)^{\frac{1}{2}}}_{L^p(\Omega)}\, \lesssim\,\norm{x}_p,\qquad x\in L^p(\Omega).
$$ Now observe that at the pointwise level, we have \begin{align*} \int_{0}^{\infty} t\bigl\vert AT_t(x)\bigr\vert^2\,dt\ &=\,\int_{0}^{\infty} \Bigl(\int_{0}^{t}\, ds\,\Bigr) \bigl\vert AT_t(x)\bigr\vert^2\,dt\\ &=\,\int_{0}^{\infty}\int_{s}^{\infty} \bigl\vert AT_t(x)\bigr\vert^2\,dt\, ds\\ &=\,\int_{0}^{\infty}\int_{0}^{\infty} \bigl\vert AT_{t+s}(x)\bigr\vert^2\,dsdt\,. \end{align*} This yields (\ref{SFE1}). Second we show that the square function $\norm{x}_{A,\frac{3}{2}}$ is finite for any $x\in L^p(\Omega)$ and that we have a uniform estimate \begin{equation}\label{SFE32} \norm{x}_{A,\frac{3}{2}}\lesssim\norm{x}_p,\qquad x\in L^p(\Omega). \end{equation} Indeed using (\ref{SFE1}) and (\ref{SFE3}), and arguing as above, we obtain an estimate $$ \Bignorm{\Bigl(\int_{0}^{\infty}\int_{0}^{\infty} s \bigl\vert A^{\frac{3}{2}} T_{t+s}x\bigr\vert^2\, dsdt \,\Bigr)^{\frac{1}{2}}}_{L^p(\Omega)}\, \lesssim\,\norm{x}_p,\qquad x\in L^p(\Omega). $$ This implies (\ref{SFE32}), since \begin{align*} \int_{0}^{\infty} t^2\bigl\vert A^{\frac{3}{2}} T_t(x)\bigr\vert^2\,dt\ &=\,2\,\int_{0}^{\infty} \Bigl(\int_{0}^{t}\,s\, ds\,\Bigr) \bigl\vert A^{\frac{3}{2}} T_t(x)\bigr\vert^2\,dt\\ &=\, 2 \,\int_{0}^{\infty} s \int_{s}^{\infty} \bigl\vert A^{\frac{3}{2}} T_t(x)\bigr\vert^2\,dt\, ds\\ &=\,2 \,\int_{0}^{\infty}\int_{0}^{\infty} s \bigl\vert A^{\frac{3}{2}} T_{t+s}(x)\bigr\vert^2\,dsdt\,. \end{align*} In the sequel, we let $R(A)$ and $N(A)$ denote the range and the kernel of $A$, respectively. It is well-known that we have a direct sum decomposition \begin{equation}\label{Decomp} L^{p}(\Omega) = N(A)\oplus\overline{R(A)}, \end{equation} see e.g. \cite[Thm 3.8]{CDMY}. We observe that the dual estimate (\ref{SFE4}) implies a uniform reverse estimate \begin{equation}\label{Reverse} \norm{x}_p\lesssim \norm{x}_{A,\frac{1}{2}},\qquad x\in \overline{R(A)}. \end{equation} This follows from a well-known duality argument; we briefly give a proof for the sake of completeness. For any integer $n\geq 1$, let $g_n$ be the rational function defined by $g_n(z)=n^2 z(n+z)^{-1}(1+nz)^{-1}$. For any $x\in L^p(\Omega)$, we have $$ g_n(A)x=\,2\,\int_{0}^{\infty} AT_{2t}g_n(A)x\, dt, $$ by \cite[Lem. 6.5 (1)]{JLX}. Moreover the sequence $\bigl(g_n(A)\bigr)_{n\geq 1}$ is bounded. Hence for any $y$ in $L^{p'}(\Omega)$, we have \begin{align*} \frac{1}{2}\,\bigl\vert \langle g_n(A)x,y\rangle\bigr\vert \, &=\,\Bigl\vert \int_{0}^{\infty} \bigl\langle AT_{2t}g_n(A)x,y\bigr\rangle\, dt\,\Bigr\vert\\ &=\,\Bigl\vert \int_{0}^{\infty} \bigl\langle A^{\frac{1}{2}} T_{t}(x), A^{*\frac{1}{2}} T_{t}^{*}g_n(A)^*y\bigr\rangle\, dt\,\Bigr\vert\\ &\leq\,\norm{x}_{A,\frac{1}{2}}\norm{g_n(A)^*y}_{A^*,\frac{1}{2}}\qquad\hbox{by Cauchy-Schwarz},\\ &\lesssim\, \norm{x}_{A,\frac{1}{2}}\norm{g_n(A)^*y}_{p'} \,\lesssim \norm{x}_{A,\frac{1}{2}}\norm{y}_{p'}\qquad\hbox{by (\ref{SFE4})}. \end{align*} If $x\in \overline{R(A)}$, then $g_n(A)x\to x$ when $n\to\infty$ (see \cite[Thm. 3.8]{CDMY}), hence (\ref{Reverse}) follows by taking the supremum over all $y$ in the unit ball of $L^{p'}(\Omega)$.
Let $(x_k)_k$ be a finite family of elements of $\overline{R(A)}$ and for any $k$, set $\varphi_k(t)=A^{\frac{1}{2}}T_t(x_k)$ for any $t>0$. Averaging (\ref{Reverse}), we obtain that $$ \Bignorm{\sum_k\varepsilon_k\otimes x_k}_{{\rm Rad}(L^p(\Omega))} \,\lesssim\,\Bignorm{\sum_k\varepsilon_k\otimes \varphi_k}_{{\rm Rad}(L^p(\Omega;L^2(J)))}. $$ Then applying (\ref{Rad}) we deduce a uniform estimate \begin{equation}\label{Rad-Reverse} \Bignorm{\sum_k\varepsilon_k\otimes x_k}_{{\rm Rad}(L^p(\Omega))} \,\lesssim\, \Bignorm{\Bigl(\sum_k\int_{0}^{\infty}\bigl\vert A^{\frac{1}{2}}T_t(x_k)\bigr\vert^2\, dt\,\Bigr)^{\frac{1}{2}}}_{L^p(\Omega)} \end{equation} for finite families $(x_k)_k$ of elements of $\overline{R(A)}$. By the same averaging principle, the assumption (\ref{SFE3}) implies a uniform estimate \begin{equation}\label{Rad-SFE12} \Bignorm{\Bigl(\sum_k\int_{0}^{\infty}\bigl\vert A^{\frac{1}{2}}T_t(x_k)\bigr\vert^2\, dt\,\Bigr)^{\frac{1}{2}}}_{L^{p}(\Omega)} \,\lesssim\, \Bignorm{\sum_k\varepsilon_k\otimes x_k}_{{\rm Rad}(L^p(\Omega))} \end{equation} for finite families $(x_k)_k$ of elements of $L^p(\Omega)$. Likewise, (\ref{SFE32}) implies that we have a uniform estimate \begin{equation}\label{Rad-SFE32} \Bignorm{\Bigl(\sum_k\int_{0}^{\infty} t^2\bigl\vert A^{\frac{3}{2}}T_t(x_k)\bigr\vert^2\, dt\,\Bigr)^{\frac{1}{2}}}_{L^{p}(\Omega)} \,\lesssim\, \Bignorm{\sum_k\varepsilon_k\otimes x_k}_{{\rm Rad}(L^p(\Omega))} \end{equation} for finite families $(x_k)_k$ of elements of $L^p(\Omega)$. We now turn to the $R$-boundedness proofs. Let $(t_k)_k$ be a finite family of nonnegative real numbers. For any $x_1, x_2,\ldots$ in $\overline{R(A)}$, we have $T_{t_k}(x_k)\in \overline{R(A)}$ for any $k$, hence $$ \Bignorm{\sum_k\varepsilon_k\otimes T_{t_k}(x_k)}_{{\rm Rad}(L^p(\Omega))} \,\lesssim\, \Bignorm{\Bigl(\sum_k\int_{0}^{\infty}\bigl\vert A^{\frac{1}{2}}T_{t+t_k} (x_k)\bigr\vert^2\, dt\,\Bigr)^{\frac{1}{2}}}_{L^p(\Omega)} $$ by (\ref{Rad-Reverse}). Moreover we have \begin{align*} \Bignorm{\Bigl(\sum_k\int_{0}^{\infty}\bigl\vert A^{\frac{1}{2}}T_{t+t_k} (x_k)\bigr\vert^2\, dt\,\Bigr)^{\frac{1}{2}}}_{L^p(\Omega)}\, &=\, \Bignorm{\Bigl(\sum_k\int_{t_k}^{\infty}\bigl\vert A^{\frac{1}{2}}T_{t} (x_k)\bigr\vert^2\, dt\,\Bigr)^{\frac{1}{2}}}_{L^p(\Omega)}\\ &\leq\, \Bignorm{\Bigl(\sum_k\int_{0}^{\infty}\bigl\vert A^{\frac{1}{2}}T_{t} (x_k)\bigr\vert^2\, dt\,\Bigr)^{\frac{1}{2}}}_{L^p(\Omega)}. \end{align*} Applying (\ref{Rad-SFE12}), we deduce an estimate $$ \Bignorm{\sum_k\varepsilon_k\otimes T_{t_k}(x_k)}_{{\rm Rad}(L^p(\Omega))} \,\lesssim\, \Bignorm{\sum_k\varepsilon_k\otimes x_k}_{{\rm Rad}(L^p(\Omega))}.
$$ Since $T_t(x)=x$ for any $x\in N(A)$, the above estimate and (\ref{Decomp}) show that the set $F_1$ is $R$-bounded. Next consider $x_1, x_2,\ldots$ in $L^p(\Omega)$. Applying (\ref{Rad-Reverse}) again, we have \begin{align*} \Bignorm{\sum_k\varepsilon_k\otimes t_kAT_{t_k}(x_k)}_{{\rm Rad}(L^p(\Omega))} \,&\lesssim\,\Bignorm{\Bigl(\sum_k\int_{0}^{\infty}t_k^2\bigl\vert A^{\frac{3}{2}}T_{t+t_k} (x_k)\bigr\vert^2\, dt\,\Bigr)^{\frac{1}{2}}}_{L^p(\Omega)}\\ &\lesssim\,\Bignorm{\Bigl(\sum_k\int_{0}^{\infty}(t+t_k)^2\bigl\vert A^{\frac{3}{2}}T_{t+t_k} (x_k)\bigr\vert^2\, dt\,\Bigr)^{\frac{1}{2}}}_{L^p(\Omega)}\\ &\lesssim\,\Bignorm{\Bigl(\sum_k\int_{t_k}^{\infty}t^2\bigl\vert A^{\frac{3}{2}}T_{t} (x_k)\bigr\vert^2\, dt\,\Bigr)^{\frac{1}{2}}}_{L^p(\Omega)}\\ &\lesssim\,\Bignorm{\Bigl(\sum_k\int_{0}^{\infty}t^2\bigl\vert A^{\frac{3}{2}}T_{t} (x_k)\bigr\vert^2\, dt\,\Bigr)^{\frac{1}{2}}}_{L^p(\Omega)}. \end{align*} According to (\ref{Rad-SFE32}), this implies the estimate $$ \Bignorm{\sum_k\varepsilon_k\otimes t_kAT_{t_k}(x_k)}_{{\rm Rad}(L^p(\Omega))} \,\lesssim\,\Bignorm{\sum_k\varepsilon_k\otimes x_k}_{{\rm Rad}(L^p(\Omega))}, $$ which shows that the set $F_2$ is $R$-bounded. \end{proof} \section{Concluding remarks} \noindent {\it 4.1 Comparing square function estimates.} Let $A$ be a sectorial operator of type $<\frac{\pi}{2}$ on $L^p(\Omega)$, with $1<p<\infty$. If $A$ is $R$-sectorial of $R$-type $<\frac{\pi}{2}$, then the square functions $\norm{\ }_{A,\alpha}$ defined by (\ref{Alpha}) are pairwise equivalent, by \cite[Thm. 1.1]{LM2}. We do not know if this equivalence property holds in the general (non $R$-sectorial) case. The proof of Theorem \ref{Main2} shows that if $A$ satisfies an estimate $\norm{x}_{A,\frac{1}{2}}\lesssim \norm{x}_p$ on $L^p(\Omega)$, then it also satisfies estimates $\norm{x}_{A,1}\lesssim \norm{x}_p$ and $\norm{x}_{A,\frac{3}{2}}\lesssim \norm{x}_p$, which is a step in that direction. It is easy to check (left to the reader) that with the same techniques, one obtains the following: for any positive numbers $\alpha,\beta>0$, square function estimates $$ \norm{x}_{A,\alpha}\lesssim \norm{x}_p \qquad\hbox{and}\qquad \norm{x}_{A,\beta}\lesssim \norm{x}_p $$ imply a square function estimate $$ \norm{x}_{A,\alpha+\beta}\lesssim \norm{x}_p. $$ \noindent {\it 4.2 Variants of the main result.} Using the above observation, the proof of Theorem \ref{Main2} can be easily adapted to show the following generalization: let $q\geq 1$ be an integer, let $\beta>0$ be a positive real number and assume that $A$ and $A^*$ satisfy square function estimates $$ \norm{x}_{A,\frac{1}{q}}\lesssim \norm{x}_p \qquad\hbox{and}\qquad \norm{y}_{A^*,\beta}\lesssim \norm{y}_{p'}. $$ Then $A$ is $R$-sectorial of $R$-type $<\frac{\pi}{2}$. This implies that Theorem \ref{Main} holds as well with (\ref{SFE}) replaced by $$ \norm{x}_{A,1}\lesssim\norm{x}_p \qquad\hbox{and}\qquad \norm{y}_{A^*,1}\lesssim\norm{y}_{p'}.
$$ \noindent {\it 4.3 Other Banach spaces.} Any bounded set of operators on Hilbert space is automatically $R$-bounded. In the case $p=2$ (more generally, for sectorial operators on Hilbert space), Theorem \ref{Main} reduces to McIntosh's fundamental Theorem \cite{MI}. In the last decade, various square functions associated to sectorial operators on general Banach spaces (not only on Hilbert spaces or $L^p$-spaces) have been studied in relation to $H^\infty$ functional calculus, see \cite{KW2}, \cite{KKW}, \cite{LM3} and the references therein. It is therefore natural to wonder whether Theorem \ref{Main} can be extended to other contexts. In a positive direction, we note that if $X$ is a reflexive Banach lattice and if square functions associated to sectorial operators are defined as in (\ref{SFE}) then Theorem \ref{Main} holds true on $X$. This actually extends (for adapted square functions) to the case when $X$ is a reflexive Banach space with property $(\alpha)$. We refer the reader to \cite{LM4} for more on that theme. However we do not know if Theorem \ref{Main} holds true on noncommutative $L^p$-spaces. See \cite{JLX} for a thorough study of square functions associated to sectorial operators on those spaces. \end{document}
\begin{document} \title{A short note on conflicting definitions of locality} \begin{abstract} There are various non-equivalent definitions of locality. Three of them, impossibility of instantaneous communication, impossibility of action-at-a-distance, and impossibility of faster-than-light travel, while not fully implying each other, have a large overlap. When the term non-locality is used, most physicists think that one of these three conditions is being violated. There is a minority of physicists, however, who use ``locality'' with a fourth meaning, the satisfaction of the hypotheses underlying Bell inequality. This definition deprives Bell's theorem of its profundity, reducing it to a mere tautology. It is demonstrated here, through a classical example using a deck of cards, that this latter definition of ``locality'' is untenable. \end{abstract} Bell inequality \cite{Bell1964}, and the related inequalities derived later \cite{Clauser1969,Clauser1974}, constitute an important result in theoretical physics, since their experimental violation \cite{Aspect1982a,Aspect1982b} constrains the shape of any possible physical theory. At present, quantum mechanics explains the experimental data satisfactorily, but, even if in the future it should be substituted by a new theory, the observed violation of Bell inequality (BI) requires such a hypothetical new theory to satisfy some general properties. One commonly encounters the statement that the violation of BI implies that reality is non-local. As far as I know, locality can have any of the following meanings: \begin{description} \item[a.] Impossibility of instantaneous communication between two space-like separated parties (no-signaling). \item[b.] Impossibility of instantaneous action-at-a-distance. \item[c.] Impossibility for a body to have an arbitrary velocity (in particular, to travel faster than light). \end{description} These three formulations are non-equivalent, and one should specify which one is intended when using the term ``locality''. For instance, the violation of (c) certainly implies the violation of (a) and (b), but the converse is not true. I believe that the majority of physicists, as well as practically all non-physicists, have one of these three possibilities in mind when hearing ``locality''. However, a fourth meaning has been attached to ``locality'' by a minority of physicists. This leads to misunderstandings and to fruitless discussions among physicists, and also to a sense of bewilderment in the wider public towards quantum mechanics, a theory which is already counter-intuitive by itself and whose exposition need not be further obscured by misleading terminology. This fourth definition of non-locality refers specifically to a theory predicting events $e_j$ observed by local detectors $\Sigma_j$ at space-like separated regions $R_j$, given that a system is specified by some parameters $\lambda$: \begin{description} \setcounter{enumi}{3} \item[d.] A theory is local if the probability of observing the events $e_j$ factorizes as \begin{equation}\label{eq:fact} P(\{e\}|\lambda,\Sigma)=\prod_j P(e_j|\lambda,\Sigma_j) \end{equation} \end{description} Following Ref.~\cite{Fine1982a}, we shall refer to the property described in Eq.~\eqref{eq:fact} as factorability (or factorizability). In the special case of a bipartite entangled two-level system, we have that $\Sigma_j$ represent (pseudo)spin components, and $e_j=\pm 1$ in appropriate units.
The hypothesis of factorability then coincides with one of the hypotheses at the basis of Bell inequality (the other hypothesis being that the $\lambda$ are distributed independently of the setup $\Sigma$). We want to show that if one takes factorability as a definition of locality, then various models which are patently local according to any of the definitions (a)-(c) are classified as non-local according to (d). Let us first rewrite the left hand side of Eq.~\eqref{eq:fact} by repeatedly applying Bayes's theorem, \begin{equation}\label{eq:factbayes} P(\{e\}|\lambda,\Sigma)=\prod_j P(e_j|E_j,\lambda,\Sigma), \end{equation} where $E_1=\emptyset$ is the empty set, and $E_j=\{e_1,e_2,\dots,e_{j-1}\}$. Eq.~\eqref{eq:factbayes} is a mathematical identity following from the rules of probability theory, and does not involve any physical assumption. Now, let us consider the first factor on the right hand side of Eq.~\eqref{eq:factbayes}, $P(e_1|\lambda,\Sigma)$. If we invoke locality either in the form (a) or (b), we have the physical equality $P(e_1|\lambda,\Sigma)=P(e_1|\lambda,\Sigma_1)$. The second factor, however, is $P(e_2|e_1,\lambda,\Sigma)$. Invoking again (a) or (b), we have that $P(e_2|e_1,\lambda,\Sigma)=P(e_2|e_1,\lambda,\Sigma_1,\Sigma_2)$. Analogously, we have in general that \begin{equation} P(e_j|E_j,\lambda,\Sigma)=P(e_j|E_j,\lambda,\Sigma_1,\dots,\Sigma_j). \end{equation} Thus, by applying (a) or (b) we cannot justify Eq.~\eqref{eq:fact}. In other words, condition (d) is stronger than (a) or (b). This was pointed out by Jarrett \cite{Jarrett1984}. In order to obtain the equality in Eq.~\eqref{eq:fact} we have to make a further hypothesis, which Jarrett referred to as completeness \cite{Jarrett1984}, and Shimony as outcome-independence \cite{Shimony1990}. We shall use the latter terminology, since it is purely technical and thus immune to misinterpretations. This hypothesis is simply that \begin{equation}\label{eq:oi} P(e_j|E_j,\lambda,\Sigma_1,\dots,\Sigma_j)=P(e_j|\lambda,\Sigma_1,\dots,\Sigma_j) , \end{equation} i.e., the conditional probability of observing $e_j$ given that $e_1,\dots,e_{j-1}$ were observed is identical to the marginal probability of observing $e_j$. Now, while in order to establish $P(e_j|E_j,\lambda,\Sigma_1,\dots,\Sigma_j)$ it is necessary that the observers in $R_1,\dots,R_{j-1}$ communicate their results to the observer in $R_j$, the marginal $P(e_j|\lambda,\Sigma_1,\dots,\Sigma_j)$ can be determined by means of a local measurement in $R_j$, without need for communication. It is then possible to invoke locality once again, in either form (a) or (b), and finally obtain \begin{equation}\label{eq:oiplusloc} P(e_j|E_j,\lambda,\Sigma_1,\dots,\Sigma_j)= P(e_j|\lambda,\Sigma_j), \end{equation} which, upon substitution into the right hand side of Eq.~\eqref{eq:factbayes}, yields Eq.~\eqref{eq:fact}. So far, we have essentially reformulated the conclusions of Ref.~\cite{Jarrett1984}, which, however, are ignored by most physicists; we hope that this short note helps to propagate this important result. Let us establish sufficient conditions for the validity of outcome-independence as formulated in Eq.~\eqref{eq:oi}. Determinism is a sufficient condition: if the knowledge of $\lambda$ determines the outcome of the measurements $\Sigma_j$, all other information is redundant.\footnote{Historical note: Determinism also implies that $P(e_j|\lambda,\Sigma_j)=0$ or $1$. Refs.~\cite{Bell1964,Clauser1969} actually assumed locality, in the form (a) or (b), and determinism.
The fact that $P(e_j|\lambda,\Sigma_j)\in\{0,1\}$ was exploited in both papers. Later on \cite{Clauser1974}, it was realized that the inequalities could still be derived if $0\le P(e_j|\lambda,\Sigma_1,\dots,\Sigma_j)\le 1$; thus it was claimed that the hypothesis of determinism was dropped, and that Bell inequality established the incompatibility of all local stochastic theories with quantum mechanics. However, in deriving all Bell-type inequalities, a consequence of determinism, namely the hypothesis of outcome-independence, was retained implicitly, and this was realized only ten years later \cite{Jarrett1984}. Meanwhile, so strong was the belief that the hypotheses underlying Bell inequality only required locality, that the requirement (d) was taken as the definition of locality.} Another sufficient hypothesis is that of probabilistic determinism, which is not an oxymoron, but means that knowledge of the $\lambda$ determines the probability of the outcome, not the outcome itself. Perhaps this concept coincides with what Jarrett \cite{Jarrett1984} calls ``completeness'', a term which we avoid since it is laden with subjective meanings. Another sufficient condition is separability \cite{Howard1989}: the parameters $\lambda=\bigcup_j \lambda_j$, where $\lambda_j$ are parameters attached to particle number $j$ and represent pre-existing values. Then we have that the probability of any outcome $e_j$ can be determined by knowledge of the $\lambda_j$ alone, i.e., $P(e_j|E_j,\lambda,\Sigma_1,\dots,\Sigma_j)=P(e_j|\lambda_j,\Sigma_j)$, which is a special form of outcome-independence. Finally, it has recently been proved \cite{Hall2011} that if a model satisfies factorability then there is a natural extension of this model which is deterministic. For convenience, the proof is reported in the appendix. In order to show the absurdity of identifying Eq.~\eqref{eq:fact} with locality, we shall now provide a simple model which is manifestly local, but would be classified as non-local according to (d). Let us consider the following experiment: a dealer prepares two decks of cards, each card being a King ($K$) or a Queen ($Q$), and Black ($B$) or Red ($R$); each deck is subdivided into pairs, such that each pair is formed by a King and a Queen, and a Black and a Red card. In the first deck ($D_1$), 30\% of the pairs are $(KR,QB)$, and 70\% $(KB,QR)$. In the second deck ($D_2$), 70\% of the pairs are $(KR,QB)$, and 30\% $(KB,QR)$. The dealer chooses a deck at random, with equal probability, then extracts a pair out of the deck, and hands one card each to two observers, one, $L$, sitting to his right and the other, $R$, to his left. In this model, the hidden variable $\lambda$ is the deck which has been chosen. Once the pair is extracted from the deck, $\lambda$ is not localized on either card, but is a global property, which cannot be reconstructed by observers $L$ and $R$ by making local observations (even if they compare their results). The only way to determine $\lambda$ would be to check which deck was chosen at the location of the preparation. Now let us consider the joint probability that $L$ will receive a King and $R$ will receive a black card given that deck 1 was chosen: \begin{equation} P(K,B|\lambda=D_1)= \frac{3}{20} , \end{equation} since a pair with a red King and a black Queen will be extracted with probability $3/10$, and the black Queen is received by $R$ half of the time.
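The value $3/20$ can be confirmed by brute-force enumeration. The short Python script below is a sketch added here only for illustration (the enumeration and its variable names are ours, not part of the original argument): it lists the two types of pairs of deck $D_1$ with their weights and the two equally likely ways of distributing each pair between $L$ and $R$, and accumulates the probability that $L$ receives a King while $R$ receives a black card.

\begin{verbatim}
from fractions import Fraction

# Deck D_1: 30% of the pairs are (red King, black Queen),
#           70% of the pairs are (black King, red Queen).
# Cards are encoded as (rank, colour).
pairs_D1 = [
    (Fraction(3, 10), ("K", "R"), ("Q", "B")),
    (Fraction(7, 10), ("K", "B"), ("Q", "R")),
]

p_joint = Fraction(0)  # P(L gets a King and R gets a black card | D_1)
for weight, card1, card2 in pairs_D1:
    # Each card of the extracted pair goes to L or R with probability 1/2.
    for card_L, card_R in [(card1, card2), (card2, card1)]:
        if card_L[0] == "K" and card_R[1] == "B":
            p_joint += weight * Fraction(1, 2)

print(p_joint)  # prints 3/20
\end{verbatim}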
On the other hand, if factorability holds, we should have \begin{equation} P(K,B|D_1)= P(K|D_1)P(B|D_1)=\frac{1}{4}. \end{equation} Thus, according to definition (d), the model we have presented is non-local! The reader can surely construct countless other examples in which knowledge of some additional parameters does not break the correlations, i.e., theories which (d) classifies as non-local but which are local according to one of (a)-(c). Notice that, with our example (or with any example that we can construct with our classical imagination), it is not possible to violate Bell inequality. The reason lies in the fact that in any classical case it is possible to find a more complete set of parameters $\lambda_{det}$ which uniquely determine the outcome of each observation. For example, in our case $\lambda_{det}$ would be determined by the knowledge of the pair of cards, and of which card goes to which observer. The peculiarity of quantum mechanics consists in the fact that no such complete characterization of a system exists in principle. In conclusion, we have proved that the definition (d) of locality conflicts with any other accepted definition of the term, and thus is untenable. I hope that this statement is not met with ideological animosity, but that the physics community may achieve unanimity over the definition of such an important concept. \section*{Appendix} We reproduce the proof of the theorem in Ref.~\cite{Hall2011} according to which any factorable model admits a natural, more fundamental parametrization which is deterministic. We recall that we are considering the detection of $N$ events at as many space-like separated regions. We take the events to be discrete, so that at region $R_j$ the possible values of the outcomes are $e_j\in\{o^{(j)}_1,o^{(j)}_2,\dots,o^{(j)}_{m_j}\}$. By hypothesis, the probability satisfies the factorability condition Eq.~\eqref{eq:fact}. In addition to the parameters $\lambda$ we introduce the parameters $\mu$, an $N$-tuple of numbers uniformly distributed within the unit hypercube $[0,1]^{N}$. The event $e_j$ consists with certainty in the outcome $o^{(j)}_{k}$ when $\mu_j\in[\sum_{k'=1}^{k-1} P(o^{(j)}_{k'}|\lambda,\Sigma_j),\sum_{k'=1}^{k} P(o^{(j)}_{k'}|\lambda,\Sigma_j)]$, i.e., \begin{equation} P(\{o^{(j)}_{k_j}\}|\lambda,\mu,\Sigma)= \begin{cases} 1,&\!\!\!\!\!\text{if } \mu_j\!\in\!\left[\sum_{k'=1}^{k_j-1} P(o^{(j)}_{k'}|\lambda,\Sigma_j),\sum_{k'=1}^{k_j} P(o^{(j)}_{k'}|\lambda,\Sigma_j)\right] \text{ for every } j,\\ 0,&\!\!\!\!\! \text{otherwise}. \end{cases} \end{equation} Upon integrating over $\mu$, Eq.~\eqref{eq:fact} is recovered. \end{document}
\begin{document} \begin{abstract} We study the maximum number of limit cycles that can bifurcate from a degenerate center of a cubic homogeneous polynomial differential system. Using the averaging method of second order and perturbing inside the class of all cubic polynomial differential systems, we prove that at most three limit cycles can bifurcate from the degenerate center. As far as we know this is the first time that a complete study up to second order in the small parameter of the perturbation has been done for the limit cycles which bifurcate from the periodic orbits surrounding a degenerate center (a center whose linear part is identically zero) having neither a Hamiltonian first integral nor a rational one. This study requires many computations, which have been verified with the help of the algebraic manipulator Maple. \end{abstract} \maketitle \section{Introduction} Hilbert in \cite{H} asked for the maximum number of limit cycles which real polynomial differential systems in the plane of a given degree can have. This is the well-known {\it 16th Hilbert Problem}, see for example the surveys \cite{Yu,Li} and references therein. Recall that a {\it limit cycle} of a planar polynomial differential system is a periodic orbit of the system isolated in the set of all periodic orbits of the system. Poincar\'e in \cite{point1} was the first to introduce the notion of a center for a vector field defined on the real plane. According to Poincar\'e, a {\it center} is a singular point surrounded by a neighborhood filled with periodic orbits, with the unique exception of the singular point itself. Consider the polynomial differential system \begin{equation} \dot{x}=P(x,y), \qquad \dot{y}=Q(x,y), \label{first} \end{equation} where as usual the dot denotes the derivative ${d}/{dt}$. Assume that system \eqref{first} has a center located at the origin. Then after a linear change of variables and a possible scaling of time, system \eqref{first} can be written in one of the following forms $$ \begin{array}{lll} \begin{array}{ll} (A) & \begin{array}{l} \dot{x}=-y+F_1(x,y),\\ \dot{y}=x+F_2(x,y), \end{array} \end{array} & \begin{array}{ll} (B) & \begin{array}{l} \dot{x}=y+F_1(x,y),\\ \dot{y}=F_2(x,y), \end{array} \qquad \end{array} & \begin{array}{ll} (C) & \begin{array}{l} \dot{x}=F_1(x,y),\\ \dot{y}=F_2(x,y), \end{array} \end{array} \end{array} $$ with $F_1$ and $F_2$ polynomials without constant and linear terms. When system \eqref{first} can be written in the form (A) we say that the center is of {\it linear type}. When system \eqref{first} can take the form (B) the center is {\it nilpotent}, and when system \eqref{first} can be transformed into the form (C) the center is {\it degenerate}. Due to the difficulty of this problem, mathematicians have considered simpler versions. Thus Arnold \cite{Arnold} considered the {\it weakened 16th Hilbert Problem}, which consists in determining an upper bound for the number of limit cycles which can bifurcate from the periodic orbits of a polynomial Hamiltonian center when it is perturbed inside a class of polynomial differential systems, see for instance \cite{ChrisLi} and the hundreds of references quoted therein. It is known that in a neighborhood of a center there is always a first integral, see \cite{MS}. When this first integral is not polynomial the computations become more difficult. Moreover, if the center is degenerate the computations become even harder.
In the literature we can find essentially the following methods for studying the limit cycles that bifurcate from a center: \begin{itemize} \item The method that uses the Poincar\'e return map, as in the articles \cite{BP, chico}. \item The one that uses the Abelian integrals or Melnikov integrals (note that for systems in the plane the two notions are equivalent), see for example section 5 of Chapter 6 of \cite{ArnoldIly} and section 6 of Chapter 4 of \cite{GH}. \item The one that uses the inverse integrating factor, see \cite{GLV1,GLV2,GLV3,GLV4}. \item The averaging theory \cite{Buica,GGLD,llibre, Sanders, Verhulst}. \end{itemize} The first two methods provide information about the number of limit cycles whereas the last two methods additionally give the shape of the bifurcated limit cycle up to any order in the perturbation parameter. Almost all the papers studying how many limit cycles can bifurcate from the periodic orbits of a center work with centers of linear type. There are very few papers studying this problem for nilpotent or degenerate centers. In fact, for degenerate centers, as far as we know, the bifurcation of limit cycles from the periodic orbits has only been studied completely using formulas of first order in the small parameter of the perturbation. Here we provide a complete study of this problem using formulas of second order; just as the formulas of second order applied to linear centers provide in general more limit cycles than the formulas of first order, the same occurs for the formulas of second order applied to degenerate centers. Of course, the amount of computation increases dramatically when passing from first order to second order. This paper deals with the weakened 16th Hilbert Problem, but perturbing non--Hamiltonian degenerate centers using the averaging method of second order; see \cite{GGLD}, and Section \ref{saveraging} for a summary of the results that we need here. Since we want to study the perturbation of a degenerate center with averaging of second order, we note that among the homogeneous centers the first ones that are degenerate are the cubic homogeneous centers, see for instance \cite{CL}. In this class, the authors of \cite{LLTT} studied the perturbation of the following cubic homogeneous center \begin{equation}\label{1} \dot{x}=-y(3x^2+y^2),\qquad \dot{y}=x(x^2-y^2), \end{equation} inside the class of all cubic polynomial differential systems, using averaging theory of first order. Here we study this problem but using averaging theory of second order. System \eqref{1} has a global center at the origin (i.e. all the orbits contained in $\ensuremath{\mathbb{R}}^2\setminus\{(0,0)\}$ are periodic), and it admits the non--rational first integral $$ H(x,y)=(x^2+y^2)\exp\left(-\frac{2x^2}{x^2+y^2}\right). $$ The limit cycles bifurcating from the periodic orbits of the global center \eqref{1} have already been studied in the following two results, see \cite{LLTT} and \cite{Buica0}, respectively. \begin{theorem} We deal with the differential system \eqref{1}. Then the polynomial differential system $$ \begin{array}{ccl} \dot{x}&=&-y(3x^2+y^2)+\varepsilon \left( \sum\limits_{0\leq i+j\leq 3}a_{ij}x^iy^{j}\right),\\ \dot{y}&=&x(x^2-y^2)+\varepsilon\left( \sum\limits_{0\leq i+j\leq 3}b_{ij}x^iy^{j}\right), \end{array} $$ has at most one limit cycle bifurcating from the periodic orbits of the center of system \eqref{1} using averaging theory of first order. Moreover, there are examples with 1 and 0 limit cycles.
\end{theorem} \begin{proposition} We consider the homogeneous polynomial differential system \eqref{1}. Let $P_i(x,y)$ and $Q_i(x,y)$ for $i=1,2$ be polynomials of degree at most 3. Then for suitable polynomials $P_i$ and $Q_i$, the polynomial differential system $$ \begin{array}{ccl} \dot{x}&=&-y(3x^2+y^2)+\varepsilon P_1(x,y)+\varepsilon^2 P_2(x,y),\\ \dot{y}&=&x(x^2-y^2)+\varepsilon Q_1(x,y)+\varepsilon^2 Q_2(x,y), \end{array} $$ has, using first order averaging, one limit cycle, and, using second order averaging, two limit cycles bifurcating from the periodic solutions of the global center \eqref{1}. \end{proposition} Our main result is the following one; it provides for the first time a complete study of the averaging method of second order for a degenerate center having neither a Hamiltonian first integral nor a rational one. \begin{theorem}\label{main} We consider the cubic homogeneous differential system \eqref{1}. Then the perturbation of system \eqref{1} inside the class of all cubic polynomial systems \begin{equation} \begin{array}{ccl} \dot{x}&=&-y(3x^2+y^2)+\varepsilon \left( \sum\limits_{0\leq i+j\leq 3}a_{ij}x^iy^{j}\right)+\varepsilon^2 \left( \sum\limits_{0\leq i+j\leq 3}b_{ij}x^iy^{j}\right),\\ \dot{y}&=&x(x^2-y^2)+\varepsilon\left( \sum\limits_{0\leq i+j\leq 3}c_{ij}x^iy^{j}\right)+\varepsilon^2 \left( \sum\limits_{0\leq i+j\leq 3}d_{ij}x^iy^{j}\right), \end{array} \label{perturbed} \end{equation} has at most three limit cycles bifurcating from the periodic orbits of the center of system \eqref{1} using averaging theory of second order. Moreover, there are examples with 3, 2, 1 and 0 limit cycles. \end{theorem} The paper is organized as follows: In section \ref{saveraging} we present a summary of the averaging method of second order following \cite{GGLD}. Next in section \ref{smain} we provide the proof of our main Theorem \ref{main}. In section \ref{sexamples} we provide examples of systems \eqref{perturbed} with 0, 1, 2 and 3 limit cycles bifurcating from the degenerate center. At the end we present the Appendices A, B and C. \section{The averaging method of second order}\label{saveraging} In this section we present the averaging method of second order following \cite{GGLD}. In that paper the averaging theory for differential equations of one variable is developed up to any order in the small parameter of the perturbation. We consider the analytic differential equation \begin{equation} \dfrac{dr}{d\theta}=G_0(\theta, r)+\sum\limits_{k\geq 1}\varepsilon^kG_k(\theta, r),\label{eq0} \end{equation} with $r\in\ensuremath{\mathbb{R}}$, $\theta\in \ensuremath{\mathbb{S}}^1$ and $\varepsilon\in(-\varepsilon_0, \varepsilon_0)$ with $\varepsilon_0$ a small positive real value, and the functions $G_k(\theta, r)$ are $2\pi$--periodic in the variable $\theta.$ Note that for $\varepsilon=0$ system \eqref{eq0} is unperturbed. Let $r_s(\theta, r_0)$ be the solution of system \eqref{eq0} with $\varepsilon=0$ satisfying $r_s(0,r_0)=r_0$, and assume that $r_s(\theta, r_0)$ is $2\pi$--periodic for $r_0\in\mathcal{I}$, with $\mathcal{I}$ a real open interval.
We are interested in the limit cycles of equation \eqref{eq0} which bifurcate from the periodic orbits of the unperturbed system with initial condition $r_0\in\mathcal{I}.$ So we denote by $r_{\varepsilon}(\theta,r_0)$ the solution of equation \eqref{eq0} satisfying $r_{\varepsilon}(0,r_0)=r_0.$ In what follows we denote by $u=u(\theta, r_0)$ the solution of the variational equation $$ \dfrac{\partial u}{\partial \theta}=\dfrac{\partial G_0}{\partial r}(\theta, r_s(\theta, r_0))u, $$ satisfying $u(0, r_0)=1.$ We define \begin{equation}\label{formulas} \begin{array}{ccl} u_1(\theta, r_0)&=&\displaystyle{\int\limits_{0}\limits^\theta \dfrac{ G_1(\phi,\ r_s(\phi, r_0))}{u(\phi, r_0)}d\phi} =\displaystyle{\int\limits_{0}\limits^\theta \dfrac{ G_1(w,\ r_s(w, r_0))}{u(w, r_0)}d w},\\ G_{10}(r_0)&=&\displaystyle{\int\limits_{0}\limits^{2\pi} \dfrac{ G_1(\theta, \ r_s(\theta, r_0))}{u(\theta, r_0)}d\theta},\\ G_{20}(r_0)&=&\displaystyle{\int\limits_{0}\limits^{2\pi} \left(\dfrac{ G_2(\theta, \ r_s(\theta, r_0))}{u(\theta, r_0)} +\dfrac{\partial G_1}{\partial r} (\theta, \ r_s(\theta, r_0))\ u_1(\theta,r_0)\right.}\\ &&\displaystyle{\qquad \qquad +\left.\dfrac{1}{2}\dfrac{\partial^2 G_0}{\partial r^2}(\theta, \ r_s(\theta, r_0))\ u_1(\theta, r_0)^2\right)d\theta}. \end{array} \end{equation} The following result is proved in statement (b) of Corollary 5 of \cite{GGLD}. \begin{theorem} Assume that the solution $r_s(\theta, r_0)$ of the unperturbed equation \eqref{eq0} such that $r_s(0, r_0)=r_0$ is $2\pi$--periodic for $r_0\in\mathcal{I}$ with $\mathcal{I}$ a real open interval. If $G_{10}(r_0)$ is identically zero in $\mathcal{I}$ and $G_{20}(r_0)$ is not identically zero in $\mathcal{I}$, then for each simple zero $r^*\in\mathcal{I}$ of $G_{20}(r_0)=0$ there exists a periodic solution $r_\varepsilon(\theta, r_0)$ of \eqref{eq0} such that $r_\varepsilon(0, r_0)\rightarrow r^*$ when $\varepsilon\rightarrow 0.$ \end{theorem} \section{Proof of Theorem \ref{main}}\label{smain} System \eqref{1} in polar coordinates becomes \begin{equation} \dot{r}=-2r^3\cos\theta \sin\theta, \qquad \dot{\theta}=r^2, \label{1polar} \end{equation} or equivalently, $$ \dfrac{dr}{d\theta}=-2r\cos\theta \sin\theta, $$ and it has the solution $r_s(\theta, r_0)=r_0 \exp(-\sin ^2 \theta)$ satisfying $r_s(0, r_0)=r_0.$ Now we perturb system \eqref{1} inside the class of all cubic polynomial differential systems as in \eqref{perturbed}. System \eqref{perturbed} in polar coordinates gives rise to the differential equation $$ {\frac {d\,r}{d\,\theta}}=G_0(\theta, r)+\varepsilon G_1(\theta, r)+\varepsilon^2 G_2(\theta, r)+O(\varepsilon^3), $$ with $$ \begin{array}{l} G_0(\theta, r)=-2\,r\cos \theta \sin \theta ,\\ \\ G_1(\theta, r)=g_{1,1}(\theta)\,\dfrac{1}{r^2}+g_{1,2}(\theta)\,\dfrac{1} {r}+g_{1,3}(\theta)\,+g_{1,4}(\theta)\,r,\\ \\ G_2(\theta,r)=g_{2,1}(\theta)\,\dfrac{1}{r^5}+g_{2,2}(\theta)\, \dfrac{1}{r^4}+g_{2,3}(\theta)\,\dfrac{1}{r^3} +g_{2,4}(\theta)\,\dfrac{1}{r^2}+g_{2,5}(\theta)\,\dfrac{1}{r}+ g_{2,6}(\theta)\,+g_{2,7}(\theta)\,r, \end{array} $$ where the expressions of the coefficients $g_{1,i}(\theta)\,$ for $i=1,2,3,4$ and $g_{2,j}(\theta)\,$ for $j=1,2,\cdots,7$ are given in Appendix A.
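As a quick independent cross-check of the polar reduction above (the computations of this paper were verified with Maple; the following sketch, with our own variable names, uses Python/SymPy instead and is added only for illustration), one can verify symbolically that system \eqref{1} becomes \eqref{1polar} in polar coordinates and that $r_s(\theta,r_0)=r_0\exp(-\sin^2\theta)$ solves $dr/d\theta=-2r\cos\theta\sin\theta$.

\begin{verbatim}
import sympy as sp

x, y, r, theta, r0 = sp.symbols('x y r theta r0', positive=True)

# Unperturbed system (1): xdot = -y(3x^2+y^2), ydot = x(x^2-y^2).
xdot = -y*(3*x**2 + y**2)
ydot = x*(x**2 - y**2)

# Polar coordinates x = r cos(theta), y = r sin(theta).
polar = {x: r*sp.cos(theta), y: r*sp.sin(theta)}
rdot     = ((x*xdot + y*ydot)/r).subs(polar)
thetadot = ((x*ydot - y*xdot)/r**2).subs(polar)

print(sp.simplify(rdot + 2*r**3*sp.cos(theta)*sp.sin(theta)))  # 0
print(sp.simplify(thetadot - r**2))                            # 0

# r_s(theta, r0) = r0*exp(-sin^2 theta) solves dr/dtheta = -2 r cos(theta) sin(theta).
rs = r0*sp.exp(-sp.sin(theta)**2)
print(sp.simplify(sp.diff(rs, theta) + 2*rs*sp.cos(theta)*sp.sin(theta)))  # 0
\end{verbatim}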
Additionally, we consider the variational equation $$ \dfrac{\partial u}{\partial \theta}=\dfrac{\partial G_0}{\partial r}(\theta, r_s(\theta, r_0))u, $$ and its solution $u(\theta, r_0)$ satisfying $u(0, r_0)=1$, namely $u_s(\theta)=\exp(-\sin^2 \theta).$ We define \begin{equation}\label{integrals} \begin{array}{l} I_1=\displaystyle{\int\limits_{0}\limits^{2\pi}\exp(2\sin^2\theta)\ \cos^4\theta\ d\theta}=3.572403292...,\\ I_2=\displaystyle{\int\limits_{0}\limits^{2\pi}\exp(2\sin^2\theta)\ \cos^2\theta\ d\theta}=5.985557563...,\\ I_3=\displaystyle{\int\limits_{0}\limits^{2\pi}\exp(2\sin^2\theta) \ d\theta}=21.62373221...\\ \end{array} \end{equation} \begin{lemma} Consider $I_1, I_2, I_3$ defined in \eqref{integrals}. Then for $a_{10}=-(I_3-2I_1+I_2)/({2I_1-I_2})c_{01}$ and $a_{30}= -2c_{03}-c_{21}$ we have that the function $G_{10}(r_0)$ defined in \eqref{formulas} is identically zero. \end{lemma} \begin{proof} We have $$ \dfrac{G_1(\theta, r_s(\theta,r_0))}{u(\theta,r_0)}=A(\theta)\, \dfrac{1}{r_0^2}+B(\theta)\,\dfrac{1}{r_0}+C(\theta)\,+D(\theta)\,r_0, $$ with $$ \begin{array}{l} A(\theta)\,=\left[\sin \theta \left( 2\, \cos^2 \theta +1 \right) c_{00} +\cos \theta \left( -1+ 2\, \cos^2 \theta \right) a_{00}\right]{{\rm e}^{3 \, \sin^2 \theta }},\\ \\ B(\theta)\,=\left[\cos \theta \sin \theta \left( -1+2\, \cos^2 \theta \right) a_{01} + \cos^2 \theta \left( -1+2\, \cos^2 \theta \right)a_{{1 0}}\right.\\ \left.- \left( - \cos^2 \theta +2\, \cos^4 \theta -1 \right)c_{01} +\cos \theta \sin \theta \left( 2\, \cos^2 \theta +1 \right) c_{10}\right]{{\rm e}^{2 \, \sin^2 \theta }}, \\ \\ C(\theta)\,=\left[- \cos \theta \left( -3\, \cos^2 \theta +1+2\, \cos^4 \theta \right)a_{02} + \cos^3 \theta \left( - 1+2\, \cos^2 \theta \right)a_{20}\right. \\ + \cos^2 \theta \sin \theta \left( -1+2\, \cos^2 \theta \right) a_{11} + \sin^3 \theta \left( 2\, \cos^2 \theta +1 \right)c_{{0 2}} \\ \left.+ \cos^2 \theta \sin \theta \left( 2\, \cos^2 \theta +1 \right)c_{20} -\cos \theta \left( - \cos^2 \theta +2\, \cos^4 \theta -1 \right)c_{11}\right]{{\rm e}^{ \sin^2 \theta }},\\ \\ D(\theta)\,= \cos \theta \sin^3 \theta \left( -1+2\, \cos^2 \theta \right)a_{03} + \cos^4 \theta \left( -1+2\, \cos^2 \theta \right)a_{30}\\ + \cos^3 \theta \sin \theta \left( -1+2\, \cos^2 \theta \right)a_{21} - \cos^2 \theta \left( -3\, \cos^2 \theta +1+2\, \cos^4 \theta \right)a_{12}\\ + \left( 1-3\, \cos^4 \theta +2\, \cos^6 \theta \right)c_{03} + \cos^3 \theta \sin \theta \left( 2\, \cos^2 \theta +1 \right)c_{30}\\ - \cos^2 \theta \left( - \cos^2 \theta +2\, \cos^4 \theta -1 \right)c_{21} +\cos \theta \sin^3 \theta \left( 2\, \cos^2 \theta +1 \right)c_{12}. \end{array} $$ Now $$ G_{10}(r_0)=\displaystyle{\int\limits_{0}\limits^{2\pi}\dfrac{G_1(\theta, r_s(\theta,r_0))}{u(\theta,r_0)}}d\theta= \displaystyle{\int\limits_{0}\limits^{2\pi}\left(A(\theta)\,\dfrac{1} {r_0^2}+B(\theta)\,\dfrac{1}{r_0}+C(\theta)\,+D(\theta)\,r_0\right)d\theta,} $$ and considering the change of coordinates $\theta=\phi+\pi$ in the interval $[0, 2\pi]$ and the symmetries \begin{equation}\label{simmetry} \sin(\theta+\pi)=-\sin\theta, \qquad \cos(\theta+\pi)=-\cos\theta, \end{equation} we have that $$ \displaystyle{\int\limits_{0}\limits^{2\pi}A(\theta) d\theta}=\displaystyle{ \int\limits_{-\pi}\limits^{\pi}A(\phi) d\phi}=0, \qquad \displaystyle{\int\limits_{0}\limits^{2\pi}C(\theta) d\theta}= \displaystyle{\int\limits_{-\pi}\limits^{\pi}C(\phi) d\phi}=0.
$$ So we have $$ \begin{array}{ccl} G_{10}(r_0)&=&\displaystyle{\int\limits_{0}\limits^{2\pi} \dfrac{B(\theta)\,} {r_0} d\theta}+\displaystyle{\int\limits_{0}\limits^{2\pi} D(\theta)\, r_0 d\theta}\\ &=&\left[(2a_{10}-2c_{01})I_1+(c_{01}-a_{10})I_2+c_{01}I_3\right]\dfrac{1}{r_0} +\dfrac{\pi}{2}\left(2c_{03}+a_{30}+c_{21}\right)r_0, \end{array} $$ \noindent and therefore $G_{10}\equiv 0$ if $$ a_{10}=-\dfrac{I_3-2I_1+I_2}{2I_1-I_2}c_{01}=-17.65322447\ldots c_{01}, \qquad a_{30}= -2c_{03}-c_{21}. $$ This completes the proof of the lemma. \end{proof} Now we have $$ \dfrac{G_2(\theta, r_s(\theta,r_0))}{u(\theta,r_0)}=A_5(\theta)\,\dfrac{1}{r_0^5}+ A_4(\theta)\,\dfrac{1}{r_0^4}+A_3(\theta)\,\dfrac{1}{r_0^3} +A_2(\theta)\,\dfrac{1}{r_0^2}+A_1(\theta)\,\dfrac{1}{r_0}+ A_0(\theta)\,+\tilde{A}_1(\theta)\,r_0, $$ where $A_5(\theta)\,,A_4(\theta)\,,A_3(\theta)\,,A_2(\theta)\,, A_1(\theta)\,,A_0(\theta)\,,\tilde{A}_1(\theta)\,$ are given in Appendix B. We note that $$ \displaystyle{\int\limits_{0}\limits^{2\pi}A_4(\theta)\, d\theta }=0, \qquad \displaystyle{\int\limits_{0}\limits^{2\pi}A_2(\theta)\, d\theta }=0, \qquad \displaystyle{\int\limits_{0}\limits^{2\pi}A_0(\theta)\, d\theta }=0, $$ because of the symmetries \eqref{simmetry}. So we have \begin{equation}\label{ints} \displaystyle{\int\limits_{0}\limits^{2\pi}\dfrac{G_2(\theta, r_s(\theta,r_0))}{u(\theta,r_0)}}d\theta =\dfrac{1}{r_0^5}\displaystyle{\int\limits_{0}\limits^{2\pi}A_5(\theta)\,d\theta} +\dfrac{1}{r_0^3}\displaystyle{\int\limits_{0}\limits^{2\pi}A_3(\theta)\,d\theta }+\dfrac{1}{r_0}\displaystyle{\int\limits_{0}\limits^{2\pi}A_1(\theta)\,d\theta} +\left(\displaystyle{\int\limits_{0}\limits^{2\pi}\tilde{A}_1(\theta)\,d\theta}\right) r_0, \end{equation} and we recall that the expressions of $A_5(\theta)\,,A_3(\theta)\,,A_1(\theta)\,,\tilde{A}_1(\theta)\,$ are given in Appendix B.
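The numerical constants in \eqref{integrals}, and hence the coefficient $-(I_3-2I_1+I_2)/(2I_1-I_2)=-17.65322447\ldots$ appearing in the lemma above, are easy to reproduce with standard numerical quadrature. The following Python sketch (an independent cross-check added only for illustration; the computations of the paper were verified with Maple, and the variable names below are ours) recovers the three values and the ratio.

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# I_k = int_0^{2 pi} exp(2 sin^2 theta) cos^{2m} theta dtheta, with m = 2, 1, 0.
I1 = quad(lambda t: np.exp(2*np.sin(t)**2)*np.cos(t)**4, 0, 2*np.pi)[0]
I2 = quad(lambda t: np.exp(2*np.sin(t)**2)*np.cos(t)**2, 0, 2*np.pi)[0]
I3 = quad(lambda t: np.exp(2*np.sin(t)**2), 0, 2*np.pi)[0]

print(I1, I2, I3)                     # about 3.572403..., 5.985557..., 21.623732...
print(-(I3 - 2*I1 + I2)/(2*I1 - I2))  # about -17.653224..., the factor of c_01 in a_10
\end{verbatim}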
We have $$ \begin{array}{ll} \displaystyle{\int\limits_{0}\limits^{2\pi}A_5(\theta)\, d\theta }&=\displaystyle{ \left( -4\int _{0}^{2\,\pi }\,{{\rm e}^{6\, \sin^2 \theta }} \cos^4 \theta {d\theta }+2\int _{0}^{2\,\pi }\,{{\rm e}^{6 \, \sin^2 \theta }} \cos^2 \theta {d\theta }+\int _{0}^{2\,\pi }\!{{\rm e}^{6\, \sin^2 \theta }}{d\theta } \right)a_{{00}}c_{{00}}}\\ &=665.2264930\ldots a_{{00}}c_{{00}},\\ \displaystyle{\int\limits_{0}\limits^{2\pi}A_3(\theta)\, d\theta }&= 239.0000390\ldots\,a_{01}c_{01}+ 97.83745135\ldots\,a_{00}c_{02} +97.83745135\ldots\,a_{02}c_{00}\\ &- 257.2692783\ldots\,c_{01}c_{10}+ 12.93483815\ldots\,a_{00}c_{20}- 7.99641945\ldots\,a_{00}a_{11}\\ &+12.93483815\ldots\,a_{20}c_{00}- 28.92767705\ldots\,c_{00}c_{11},\\ \displaystyle{\int\limits_{0}\limits^{2\pi}A_1(\theta)\, d\theta }&=1.159249021\ldots\,b_{10}+ 20.46448319\ldots\,d_{{01}}+ 2.318498043\ldots\,a_{{11} }c_{11}- 4.73165232\ldots\,c_{02}c_{11}\\ &+ 2.318498043\ldots\,a_{20}c_{{0 ,2}}+ 2.318498045\ldots\,a_{02}c_{20}- 3.761715750\ldots\,c_{11}c_{20}\\ &+ 14.47892563\ldots\,a_{02}c_{02}- 1.253905250\ldots\,a_{02}a_{11}+ 0.189312456\ldots\,a_{20}c_{20}\\ &+ 0.094656226\ldots\,a_{11}a_{20}+ 0.647510463\ldots\,a_{21}c_{01}- 5.110277230\ldots\,c_{03}c_{10}\\ &+ 2.318498043\ldots\,a_{12}c_{10}+ 2.22384182\ldots\,a_{01}c_{21}+ 36.6143963\ldots\,a_{03}c_{01}\\ & - 3.95102821\ldots\,c_{10}c_{21}- 1.25390525\ldots\,a_{01}a_{12}- 45.6606188\ldots\,c_{01}c_{12}\\ &- 7.1036910\ldots\,c_{01}c_{30}+ 14.28961317\ldots\,a_{01}c_{03},\\ \displaystyle{\int\limits_{0}\limits^{2\pi}\tilde{A}_1(\theta)\, d\theta }&= - 0.3926990817\ldots\,c_{30}c_{21}+ 0.1963495408\ldots\,c_{30}a_{12}+ 2.159844949\ldots\,a_{03}c_{03}\\ &- 0.1963495408\ldots\,a_{03}a_{12}- 0.7853981634\ldots\,c_{12}c_{21}+ 0.3926990817\ldots\,a_{03}c_{21}\\ &-1.178097245\ldots\,c_{03}c_{12}+ 0.3926990817\ldots\,c_{12}a_{12}+ 0.9817477042\ldots\,c_{03}c_{30}\\ & + 1.570796327\ldots\,d_{{21}}+ 3.141592654\ldots \,d_{03}+ 1.570796327\ldots\,b_{30}.\\ \\ \end{array} $$ \begin{remark} \label{remark1} (a) Looking at the expressions of $A_3, A_1,\tilde{A}_1$ in the Appendix B we can have the exact definition for the numerical coefficients which appear in the previous integrals. Thus for instance $$ 239.0000390\cdots=-\displaystyle{\int\limits_{0}\limits^{2\pi}}{\frac {{{\rm e}^{4\, \sin^2 \theta}} \left( \cos^2 \theta -1 \right) \left( 2\,{ I_1}-4{ I_3}\, \cos^4 \theta +2{ I_3}\, \cos^2 \theta -{ I_2} \right) }{2\,{ I_1}-{ I_2}}} d \theta, $$ and $I_1,I_2,I_3$ satisfying relations \eqref{integrals}. (b) All the computations of this paper have been verified with the algebraic manipulator Maple \cite{maple}. 
\end{remark} We additionally have $$ \dfrac{\partial G_1}{\partial r}(\theta, \ r_s(\theta, r_0))=B_0(\theta)\,+\dfrac{B_1(\theta)\,}{r_0^2}+\dfrac{B_2(\theta)\,}{r_0^3}, $$ with $$ \begin{array}{ll} B_0(\theta)\,=& \cos^2 \theta \left( 1+ 2\, \cos^2 \theta - 4\, \cos^4 \theta \right)\, c_{21}\\ &+ \left( - 2\, \cos^6 \theta + 1- \cos^4 \theta \right) c_{03}\\ &+ \cos \theta \left( \, \sin \theta \cos^2 \theta - 2\, \cos^4 \theta \sin \theta + \, \sin \theta \right) c_{12}\\ &+ \cos^3 \theta \left( \,\sin \theta + 2\, \cos^2 \theta \sin \theta \right) c_{30} \\ &+\sin \theta \,\cos \theta \left( 3\, \cos^2 \theta - 1 - 2\, \cos^4 \theta \right) a_{03}\\ &+ \cos^2 \theta\left( -1 \, + 3\, \cos^2 \theta - 2\, \cos^4 \theta \right) a_{12}\\ &+ \,\sin \theta \cos^3 \theta\left( -1 + 2\, \cos^2 \theta \right) a_{21},\\ \\ B_1(\theta)\,=& { \left( - 1- 18.65322447...\,\cos^2\theta + 37.30644894...\, \cos^4 \theta \right) {{\rm e}^{ 2\sin^2 \theta }} c_{01}} \\ &{\, -\sin \theta \cos \theta \left(2\, \cos^2 \theta +1\right) {{\rm e}^{ 2\sin^2 \theta }} c_{10}} \\ &+ \sin \theta \, \cos \theta \left( - 2\, \cos^2 \theta + 1 \right) {{\rm e}^{ 2\sin^2 \theta }} a_{01}, \\ \\ B_2(\theta)\,=& {-2\,\sin \theta \left( 2\, \cos^2 \theta + 1 \right) c_{00}+ 2\left( - 2\, + \, {{\rm e}^{ 3\sin^2 \theta }} \cos \theta \right) a_{00}}. \end{array} $$ Now we have $$ \dfrac{ G_1(w,\ r_s(w, r_0))}{u(w, r_0)}=\dfrac{C_2(w)\,}{r_0^2}+\dfrac{C_1(w)\,}{r_0}+C_0(w)\,+{\tilde C}_1(w)\, r_0, $$ with $$ \begin{array}{ll} C_2(w)\,=&{{\rm e}^{3 \sin^2 w }}\left[\cos w \left( 2\, \cos^2 w -1 \right) a_{00} + \sin w \left( 1+2\, \cos^2 w \right) c_{00}\right],\\ \\ C_1(w)\,=&\Big[ \cos w \sin w\left( -1 +2\, \cos^2 w \right) a_{01}\\ &+ \left(1+ \dfrac {I_3}{2I_1-I_2} (\cos^2 w -2\cos^4 w ) \right) c_{01}\\ &+ \cos w \sin w\ \left( 1 +2\, \cos^2 w \right) c_{10}\Big] {{\rm e}^{ 2\sin^2 w }}, \\ \\ C_0(w)\,=& \left[ \cos^3 w \left( 2\, \cos^2 w -1 \right) a_{20} -\cos w \left( -3\, \cos^2 w +2\, \cos^4 w +1 \right) a_{02}\right.\\ &+\sin w \cos^2 w \left( 2\, \cos^2 w -1 \right) a_{11} +\sin w \cos^2 w \left( 1+2\, \cos^2 w \right) c_{20}\\ &+c_{02} \sin^3 w \left( 1+2\, \cos^2 w \right)\left. -\cos w \left( 2\, \cos^4 w -1- \cos^2 w \right) c_{11}\right]{{\rm e}^{ \sin^2 w }}, \\ \\ {\tilde{C}_1}(w)\,=&- \left( \cos^4 w +2\, \cos^6 w -1 \right) c_{03} + \sin^3 w \cos w \left( 1+2\, \cos^2 w \right)c_{12}\\ &- \cos^2 w \left( -1-2\, \cos^2 w +4\, \cos^4 w \right)c_{21} + \cos^3 w \sin w \left( 1+2\, \cos^2 w \right)c_{30}\\ &+ \sin^3 w \cos w \left( 2\, \cos^2 w -1 \right)a_{03} + \sin^2 w \cos^2 w \left( 2\, \cos^2 w -1 \right) a_{12} \\ &+ \cos^3 w \sin w \left( 2\, \cos^2 w -1 \right)a_{21}. \end{array} $$ \noindent Additionally, from \eqref{formulas} we obtain $$ u_1(\theta, r_0)=\dfrac{1}{r_0^2}\displaystyle{\int\limits_{0}\limits^\theta C_2(w)\,}dw +\dfrac{1}{r_0}\displaystyle{\int\limits_{0}\limits^\theta C_1(w)\,}dw+ \displaystyle{\int\limits_{0}\limits^\theta}C_0(w)\,dw +r_0\displaystyle{\int\limits_{0}\limits^\theta {\tilde C}_1(w)\, }dw, $$ and so \begin{equation}\label{ss} \dfrac{\partial G_1}{\partial r} (\theta, \ r_s(\theta, r_0))\ u_1(\theta,r_0)=s_{5}(\theta)\, \dfrac{1}{r_0^5}+s_{4}(\theta)\, \dfrac{1}{r_0^4}+s_3(\theta)\, \dfrac{1}{r_0^3} +s_2(\theta)\, \dfrac{1}{r_0^2}+s_1(\theta)\, \dfrac{1}{r_0}+s_0(\theta)\, +\tilde{s}_1(\theta)\, r_0, \end{equation} and the explicit expressions of $s_i(\theta)\, $ for $i=0,1,\cdots,5$ and $\tilde{s}_1(\theta)\, $ are given in the Appendix C. 
Since $\dfrac{\partial^2 G_0}{\partial r^2}=0$ from \eqref{formulas} we have that $$ G_{20}(r_0)=\displaystyle{\int\limits_{0}\limits^{2\pi} \left(\dfrac{ G_2(\theta, \ r_s(\theta, r_0))}{u(\theta, r_0)} +\dfrac{\partial G_1}{\partial r} (\theta, \ r_s(\theta, r_0))\ u_1(\theta,r_0)\right)d\theta},\\ $$ and we obtain $$ r_0^5{G_{20}(r_0)}=v_6 r_0^6+v_4 r_0^4+v_2 r_0^2+v_0, $$ with $$ \begin{array}{ll} v_6=& - 0.3926990800\cdots\,c_{21}c_{30}+ 0.1963495397\cdots\,a_{12}c_{ {30}}+ 2.159844949\cdots\,a_{03}c_{03}\\ &- 0.1963495365\cdots\,a_{03}a_{{1 2}} - 0.7853981634\cdots\,c_{12}c_{21}+ 0.3926990817\cdots\,a_{03}c_{{21} } \\ &- 1.178097245\cdots\,c_{03}c_{12}+ 0.3926990817\cdots\,a_{12}c_{12}+ 0.9817477042\cdots\,c_{03}c_{30}\\ &+ 1.570796327\cdots\,d_{{21}}+ 3.141592654\cdots \,d_{03}+ 1.570796327\cdots\,b_{30}, \\ \\ v_4=&- 3.155691751\cdots\,a_{21}c_{01}+ 6.612510180\cdots\,c_{03}c_{10}+ 1.786201647\cdots\,a_{12}c_{10}\\ &+ 2.413154277\cdots\,a_{01}c_{21}+ 38.88726613\cdots\,a_{03}c_{01}- 1.253905255\cdots\,c_{10}c_{21}\\ &- 1.206577137\cdots\,a_{01}a_{12}- 68.30745733\cdots\,c_{01}c_{12}-38.88726635\cdots\,c_{01}c_{30}\\ &+ 13.27234849\cdots\,a_{01}c_{03}+ 2.318498043\cdots\,a_{11}c_{11}- 4.73165232\cdots\,c_{02}c_{11}\\ & +2.318498043\cdots\,a_{20}c_{02}+ 2.318498045\cdots\,a_{02}c_{20}- 3.761715750\cdots\,c_{11}c_{20}\\ &+ 14.47892563\cdots\,a_{02}c_{02}-1.253905250\cdots\,a_{02}a_{11}+ 0.189312456\cdots\,a_{20}c_{20}\\ &+ 0.094656226\cdots\,a_{11}a_{20}+ 1.159249021\cdots\,b_{10}+ 20.46448319\cdots \,d_{{01}},\\ \\ v_2= &95.95703341\cdots\,a_{00}c_{02}+105.7377762\cdots\,a_{02}c_{00}+ 239.0000390\cdots\,a_{01}c_{01}\\ &-7.649140220\cdots\,a_{00}a_{11}+ 16.68739736\cdots\,a_{00}c_{20}-110.9164314\cdots\,c_{00}c_{11}\\ &- 257.2692783\cdots\,c_{01}c_{10}-0.000001\cdots\,{c_{01}}^{2} - 31.79348852\cdots\,a_{20}c_{00},\\ \\ v_0=& 665.2264933\cdots\,a_{00}c_{00}. \end{array} $$ We have that the coefficients $v_6,v_4,v_2,v_0$ are independent because $d_{03}$ only appears in $v_6$, $b_{10}$ only appears in $v_4$, $a_{00}c_{02}$ only appears in $v_2$, and $a_{00}c_{00}$ only appears in $v_0.$ Now we are going to use Descartes Theorem: \begin{theorem}[Descartes Theorem]\label{Descartes} Consider the real polynomial $p(x)=a_{i_1}x^{i_1}+a_{i_2}x^{i_2} \cdots+a_{i_r}x^{i_r}$ with $0\leq i_1< i_2< \cdots<i_r$ and $a_{i_j}\neq 0$ real constants for $j\in\{1,2,\cdots,r\}.$ When $a_{i_j}a_{i_{j+1}}<0$, we say that $a_{i_j}$ and $a_{i_{j+1}}$ have a variation of sign. If the number of variations of signs is $m$, then $p(x)$ has at most $m$ positive real roots. Moreover, it is always possible to choose the coefficients of $p(x)$ in such a way that $p(x)$ has exactly $r-1$ positive real roots. \end{theorem} For a proof of Descartes Theorem see pages 82--83 of \cite{Berezin}. So from Descartes Theorem we can choose $v_6,v_4,v_2,v_0$ in order that the $G_{20}$ has 3,2,1 or 0 real positive roots. This completes the proof of the first part of Theorem \ref{main}. \begin{remark}\label{r2} Again the exact definition for the numerical coefficients which appear in $v_6,v_4,v_2$ and $v_0$ are given in Appendices B and C. For instance $$ v_0=\displaystyle{\int\limits_{0}\limits^{2\pi}}A_5 d\theta+\displaystyle{\int\limits_{0}\limits^{2\pi}}s_{5,3} d\theta=\displaystyle{\int\limits_{0}\limits^{2\pi}}A_5 d\theta =665.2264933... $$ \end{remark} For completing the proof of Theorem \ref{main} we shall provide examples of system \eqref{perturbed} with 3, 2, 1 and 0 limit cycles. 
In fact, strictly speaking it is not necessary to provide examples with 3, 2, 1 and 0 limit cycles, but we want to exhibit such examples. \section{Examples}\label{sexamples} \noindent{\bf Example with 3 limit cycles}\\ In Figure \ref{tres} we see that for $\varepsilon=0.001$ the system \begin{equation} \begin{array}{ll}\label{tris} \dot{x}=&-y(3x^2+y^2)+\varepsilon+\varepsilon^2(3570.576292 x-752.8823806 x^3)\\ &=-y(3x^2+y^2)+0.001+0.003570576292x-0.0007528823806x^3,\\ \\ \dot{y}=&x(x^2-y^2)+\varepsilon(1-37.74385845y^2)\\ &=x(x^2-y^2)+0.001-0.03774385845 y^2, \end{array} \end{equation} has three limit cycles, since for system \eqref{tris} we have $$ G_{20}(r_0)=- 1182.624878\,{ r_0}+ 4139.187071\,\dfrac{1}{r_0}- 3621.788686\,\dfrac{1}{r_0^3} + 665.2264933\, \dfrac{1}{r_0^5}, $$ and from $G_{20}(r_0)=0$ we obtain the three positive roots near $r_0=0.5, 1, 1.5.$ \begin{figure} \caption{For $\varepsilon=0$ we have the degenerate center of system \eqref{1}.}\label{tres} \end{figure} We have used the program P4 described in Chapters 9 and 10 of \cite{DLLA} for drawing the phase portraits in the Poincar\'e disc which appear in this paper. \noindent{\bf Example with 2 limit cycles}\\ For $\varepsilon=0.001$ the system \begin{equation}\label{dio} \begin{array}{ll} \dot{x}=&-y(3x^2+y^2)+\varepsilon(1+y+x^2y)+\varepsilon^2(-856.6373973x+y^3)\\ &=-y(3x^2+y^2)+0.001+0.001y+0.001x^2y-0.0008566373973x+0.000001y^3,\\ \\ \dot{y}=&x(x^2-y^2)+\varepsilon(1+y^2)+\varepsilon^2(x+73.80732101y^3)\\ &=x(x^2-y^2)+0.001+0.001y^2+0.000001x+0.00007380732101y^3, \end{array} \end{equation} gives $$ G_{20}(r_0)= 231.8725375\,{ r_0}- 993.0560642\, \dfrac{1}{r_0}+ 95.95703341\, \dfrac{1}{r_0^3}+ 665.2264933\, \dfrac{1}{r_0^5}, $$ and $G_{20}(r_0)=0$ has the two positive zeros $r_0=1$ and $r_0=2.$ In Figure \ref{fdio} we see the two limit cycles bifurcated from the degenerate center of the unperturbed system \eqref{dio}. \begin{figure} \caption{Two limit cycles bifurcate from the degenerate center of the unperturbed system \eqref{dio}.}\label{fdio} \end{figure} \noindent{\bf Example with 1 limit cycle}\\ For $\varepsilon=0.001$ the system \begin{equation}\label{ena} \begin{array}{ll} \dot{x}&=-y(3x^2+y^2)+\varepsilon(1-176.5322447x)+\varepsilon^2(x+x^3)\\ &=-y \left( 3\,{x}^{2}+{y}^{2} \right) + 0.001- 0.1765312447\,x+ 0.000001\,{x}^{3},\\ \\ \dot{y}&=x(x^2-y^2)+\varepsilon(10+10y+5xy)+\varepsilon^2(y-y^3)\\ &=x \left( {x}^{2}- {y}^{2} \right) + 0.010+ 0.010001\,y+ 0.005\,xy - 0.000001\,{y}^{3}, \end{array} \end{equation} has $$ G_{20}(r_0)=-1.570796327r_0+21.62373221\dfrac{1}{r_0}-5545.821670 \dfrac{1}{r_0^3}+6652.264933\dfrac{1}{r_0^5}. $$ From the relation $G_{20}(r_0)=0$ we obtain the roots $-1.097575824$, $1.097575824$, $-5.725902515-5.148324797i$, $5.725902515+5.148324797i$, $-5.725902515+5.148324797i$, $5.725902515-5.148324797i$. So only one limit cycle can bifurcate from a periodic orbit of the center of the unperturbed system \eqref{ena}, as we can see in Figure \ref{fena}.
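The positive zeros of $G_{20}$ quoted in these examples are easy to double-check numerically: multiplying $G_{20}(r_0)$ by $r_0^5$ gives a polynomial of degree six in $r_0$, and its positive real roots are the simple zeros that produce the bifurcating limit cycles. A minimal Python sketch for the example \eqref{tris} with three limit cycles (added for illustration only; it is not part of the original Maple verification):

\begin{verbatim}
import numpy as np

# r_0^5 * G_20(r_0) for system (tris):
# -1182.624878 r^6 + 4139.187071 r^4 - 3621.788686 r^2 + 665.2264933
coeffs = [-1182.624878, 0.0, 4139.187071, 0.0, -3621.788686, 0.0, 665.2264933]
roots = np.roots(coeffs)

positive_real = sorted(z.real for z in roots if abs(z.imag) < 1e-7 and z.real > 0)
print(positive_real)  # approximately [0.5, 1.0, 1.5]
\end{verbatim}

The same three lines, with the corresponding coefficients, reproduce the zeros quoted for the other examples of this section.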
\begin{figure} \caption{The limit cycle bifurcated from the degenerate center of the unperturbed system \eqref{ena} \label{fena} \end{figure} \noindent{\bf Example with zero limit cycles}\\ Now for $\varepsilon=0.001$ we consider system \begin{equation}\label{mhden} \begin{array}{ll} \dot{x}&=-y(3x^2+y^2)+\varepsilon+\varepsilon^2x\\ &=-y \left( 3\,{x}^{2}+{y}^{2} \right) + 0.001+ 0.000001\,x,\\ \\ \dot{y}&=x(x^2-y^2)+\varepsilon(1+y^2)+\varepsilon^2y^3\\ &=x \left( {x}^{2}- {y}^{2} \right) + 0.001+ 0.001\,{y}^{2}+ 0.000001\,{y}^{3}, \end{array} \end{equation} with $$ G_{20}(r_0)=3.141592654r_0+1.159249021\dfrac{1}{r_0}+95.95703341 \dfrac{1}{r_0^3}+665.2264933\dfrac{1}{r_0^5}. $$ We have that $G_{20}(r0)=0$ has solutions $-2.116012294-1.570359831i, 2.116012294+1.570359831i, -2.095699520i, 2.095699520i, -2.116012294+1.570359831i, 2.116012294-1.570359831i.$ So no limit cycles can bifurcate from the degenerate center, see also Figure \ref{fmhden}. \begin{figure} \caption{No limit cycle bifurcates from the degenerate center of the unperturbed system \eqref{mhden} \label{fmhden} \end{figure} \begin{thebibliography}{99} \bibitem{Arnold} {\sc V.I. Arnold}, {\it Loss of stability of self--oscillation close to resonance and versal deformations of equivariant vector fields}, Funct. Anal. Appl. {\bf 11} (1977), 1--10. \bibitem{ArnoldIly} {\sc V.I. Arnold and Yu. Il'yashenko}, {\it Dynamical systems I. Ordinary differential equations,} in Encyclopaedia Math. Sci., {\bf 1}, Springer, Berlin, 1988. \bibitem{Berezin} {\sc I.S. Berezin and N.P. Zhidkov,} Computing Methods, vol. II. Pergamon Press, Oxford (1964). \bibitem{BP}{\sc T.R. Blows, L.M. Perko}, {\it Bifurcation of limit cycles from centers and separatrix cycles of planar analytic systems,} SIAM Rev. 36 (1994), 341--376. \bibitem{Buica0} {\sc A. Buic\u{a}, J. Gin\'e and L. Llibre}, {\it A second order analysis of the periodic solutions for nonlinear periodic differential systems with a smaller parameter}, Physica D {\bf 241} (2012) 528--533. \bibitem{Buica} {\sc A. Buic\u{a} and L. Llibre}, {\it Averaging methods for finding periodic orbits via Brouwer degree}, Bull. Sci. Math. {\bf 128} (2004), 7--22. \bibitem{CL} {\sc A. Cima and J. Llibre,} {\it Algebraic and topological classification of the homogeneous cubic systems in the plane,} J. Math. Anal. Appl. {\bf 147} (1990), 420--448. \bibitem{chico}{\sc C. Chicone and M. Jacobs,} {\it Bifurcation of limit cycles from quadratic isochrones,} J. Differential Equations {\bf 91} (1991), 268--326. \bibitem{ChrisLi} {\sc C. Christopher and C. Li}, {\it limit cycles of differential equations,} Advanced Courses in Mathematics, CRM Barcelona, Birkh\"{a}user Verlag, Basel, 2007. \bibitem{DLLA} {\sc F. Dumortier, J. Llibre and J. C. Art\'es}, {\it Qualitative theory of planar polynomial systems}, Springer, 2006. \bibitem{GLV1} {\sc H. Giacomini, J. Llibre and M. Viano}, {\it On the nonexistence, existence and uniqueness of limit cycles,} Nonlinearity {\bf 9} (1996), 501--516. \bibitem{GLV2} {\sc H. Giacomini, J. Llibre and M. Viano}, {\it On the shape of limit cycles that bifurcate from Hamiltonian centers}, Nonlinear Anal. {\bf 41} (2000), 523--537. \bibitem{GLV3} {\sc H. Giacomini, J. Llibre and M. Viano}, {\it On the shape of limit cycles that bifurcate from non--Hamiltonian centers}, Nonlinear Anal. {\bf 43} (2001), 837--859. \bibitem{GGLD} {\sc J. Gin\'e, M. Grau and J. Llibre}, {\it Averaging theory at any order for computing periodic orbits}, Physica D {\bf 250} (2013), 58--65. \bibitem{GH} {\sc J. 
Guckenheimer and P. Holmes}, {\it Nonlinear oscillations, dynamical systems, and bifurcations of vector fields}, Applied Mathematical Sciences {\bf 42}, Springer-Verlag, New York, 1986. \bibitem{H} {\sc D. Hilbert}, {\it Mathematische Probleme}, Lecture at the Second International Congress of Mathematicians, Paris 1900; reprinted in Mathematical Developments Arising from Hilbert Problems (ed. F.E. Browder), Proc. Symp. Pure Math. {\bf 28}, Amer. Math. Soc., Providence, RI, 1976, 1--34. \bibitem{Yu} {\sc Yu. Il'yashenko}, {\it Centennial history of Hilbert's 16th problem}, Bull. Amer. Math. Soc. {\bf 39} (2002), 301--354. \bibitem{Li} {\sc J. Li}, {\it Hilbert's 16th problem and bifurcations of planar polynomial vector fields}, Internat. J. Bifur. Chaos {\bf 13} (2003), 47--106. \bibitem{llibre} {\sc J. Llibre}, {\it Averaging theory and limit cycles for quadratic systems}, Rad. Mat. {\bf 11} (2002/3), 215--228. \bibitem{LLTT} {\sc J. Llibre, M.A. Teixeira and J. Torregrosa}, {\it Limit cycles bifurcating from a $k$-dimensional isochronous center contained in $\ensuremath{\mathbb{R}}^n$ with $k\leq n$}, Math. Phys. Anal. Geom. {\bf 10} (2007), 237--249. \bibitem{MS} {\sc L. Mazzi and M.A. Sabatini}, {\it Characterization of centres via first integrals}, in Advanced topics in the theory of dynamical systems (Trento, 1987), 165--177, Notes Rep. Math. Sci. Engrg. {\bf 6}, Academic Press, Boston, MA, 1989. \bibitem{point1} {\sc H. Poincar\'e}, {\it M\'emoire sur les courbes d\'efinies par les \'equations diff\'erentielles}, in: Oeuvres de Henri Poincar\'e, Vol. I, Gauthier-Villars, Paris, 1951, 95--114, 161--191. \bibitem{Sanders} {\sc J.A. Sanders and F. Verhulst}, {\it Averaging methods in nonlinear dynamical systems}, Appl. Math. Sci. {\bf 59}, Springer-Verlag, New York--Berlin--Heidelberg--Tokyo, 1985. \bibitem{Verhulst} {\sc F. Verhulst}, {\it Nonlinear Differential Equations and Dynamical Systems}, Universitext, Springer, Berlin--Heidelberg--New York, 1991. \bibitem{GLV4} {\sc M. Viano, J. Llibre and H. Giacomini}, {\it Arbitrary order bifurcation for perturbed Hamiltonian planar systems via the reciprocal of an integrating factor}, Nonlinear Anal. {\bf 48} (2002), 117--136.
\bibitem{maple} http://www.maplesoft.com/ \end{thebibliography} \section{Appendix A}\label{Appendix A} $$ \small{ \begin{array}{ll} g_{1,1}(\theta)=&\cos \theta\left( 2\, \cos^5 \theta -1 \right)a_{00}+\sin \theta \left( 1+2\, \cos^2\theta \right) ^{2}c_{00}, \\ &\\ g_{1,2}(\theta)=&\sin^2 \theta \left( 1+2\, \cos^2 \theta \right)c_{01}+\sin \theta\cos \theta \left( 1+2\, \cos^2 \theta \right)c_{{10}}\\ &+ \sin \theta\cos \theta \left( 2\, \cos^2 \theta-1 \right) a_{{01}}+ \cos^2 \theta \left( 2\, \cos^2 \theta-1 \right) a_{{10}},\\ &\\ g_{1,3}(\theta)=& \sin^3 \theta \left( 1+2\, \cos^2 \theta \right)c_{{02}}+\left( -2\, \cos^5 \theta+ \cos^3 \theta+\cos \theta \right) c_{{11}}\\ &+ \cos^2 \theta\sin \theta\left( 1+2\, \cos^2 \theta \right)c_{{20}}+ \left( -2\, \cos^5 \theta+3\, \cos^3 \theta-\cos \theta\right) a_{{02}}+\\ &\cos^2 \theta\sin \theta\left( 2\, \cos^2 \theta-1 \right) a_{{11}}+ \cos^3 \theta \left( 2\, \cos^2 \theta-1 \right)a_{{20}},\\ &\\ g_{1,4}(\theta)=& \sin^4 \theta \left( 1+2\, \cos^2 \theta \right)c_{{03}}+ \sin^3 \theta\cos \theta\left( 1+2\, \cos^2 \theta \right)c_{{12}}\\ &+ \sin^2 \theta \cos^2 \theta \left( 1+2\, \cos^2 \theta \right)c_{{21}}+\sin \theta\cos^3 \theta \left( 1+2\, \cos^2 \theta \right)c_{{30}}\\ &+ \sin^3 \theta\cos \theta\left( 2\, \cos^2 \theta-1 \right)a_{{03}}+ \sin^2 \theta \cos^2 \theta \left( 2\, \cos^2 \theta-1 \right)a_{{12}}\\ &+\sin \theta\cos^3 \theta \left( 2\, \cos^2 \theta-1 \right)a_{{21}}+ \cos^4 \theta \left( 2\, \cos^2 \theta-1 \right)a_{{30}},\\ \\ g_{2,1}(\theta)=& \left( 2\, \cos^2 \theta+1-4\, \cos^4 \theta \right) a_{{00}}c_{{0 0}}-{c_{{00}}}^{2}\sin \theta\cos \theta \left( 2\, \cos^2 \theta+1 \right){c_{{00}}}^{2}\\ &+\sin \theta\cos \theta \left( 2\, \cos^2 \theta-1 \right){a_{{00}}}^{2},\\ \\ g_{2,2}(\theta)=&-\cos \theta\left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right)c_{{10}}a_{{00}} -2 \cos^2 \theta\sin \theta \left( 2\, \cos^2 \theta+1 \right)\,c_{{10}}c_{{00}}\\ &+2 \cos^2 \theta \sin \theta\left( 2\, \cos^2 \theta-1 \right)\,a_{{10}}a_{{00}} -\sin \theta\left( -2\, \cos^2 \theta- 1+4\, \cos^4 \theta \right)c_{{01}}a_{{00}}\\ &-\sin \theta\left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right)c_{{00}}a_{{01}} -\cos \theta \left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right)c_{{00}}a_{{10}}\\ &+ \left( 4\, \cos^5 \theta-2\,\cos \theta-2\, \cos^3 \theta \right) c_{{01}}c_{{00}} +\left( 6\, \cos^3 \theta-4\, \cos^5 \theta-2\, \cos \theta\right) a_{{01}}a_{{00}},\\ \\ g_{2,3}(\theta)=&\left( \cos^2 \theta+4\, \cos^6 \theta-6\, \cos^4 \theta +1\right) a_{{01}}c_{{01}} + \left( \cos^2 \theta+4\, \cos^6 \theta -6\, \cos^4 \theta+1 \right) a_{{00}}c_{{02}}\\ &+ \left( \cos^2 \theta+4\, \cos^6 \theta-6\, \cos^4 \theta+1 \right) a_{ {02}}c_{{00}} -c_{{11}}a_{{00}}\sin \theta \cos \theta\left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right)\\ &-c_{{00}}a_{{11}}\sin \theta\cos \theta\left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right) -2\,c_{{20}}c_{{00}}\sin \theta\cos^3 \theta \left( 2\, \cos^2 \theta+1 \right)\\ &+2\,a_{{20}}a_{{00}}\sin \theta \cos^3 \theta \left( 2\, \cos^2 \theta-1 \right) +2\,a_{{02}}a_{{00}} \sin^3 \theta\cos \theta\left( 2\, \cos^2 \theta-1 \right)\\ &-2\,c_{{02}}c_{{00}} \sin^3 \theta\cos \theta\left( 2\, \cos^2 \theta+1 \right) -c_{{10}}a_{{01}}\sin \theta\cos \theta\left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right)\\ &-c_{{01}}a_{{10}}\sin \theta\cos \theta\left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right) -c_{{20}}a_{{00}} \cos^2 \theta \left( -2\, \cos^2 \theta -1+4\, \cos^4 \theta 
\right)\\ &-c_{ {00}}a_{{20}} \cos^2 \theta \left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right) + \left( -4\, \cos^6 \theta+6\, \cos^4 \theta-2\, \cos^2 \theta \right) a_{{11}}a_{{00}}\\ &+ \left( 4\, \cos^6 \theta-2\, \cos^2 \theta -2\, \cos^4 \theta\right) c_{{11}}c_{{00}} -c_{{10}}a_{{10}} \cos^2 \theta \left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right) \\ &+ \left( -4\, \cos^6 \theta+6\, \cos^4 \theta-2\, \cos^2 \theta \right) a_{{10}}a_{{0 1}} + \left( 4\, \cos^6 \theta-2\, \cos^2 \theta-2\, \cos^4 \theta \right) c_{{10}}c_{{01}}\\ &+{a_{{1 0}}}^{2}\sin \theta\cos^3 \theta \left( 2\, \cos^2 \theta-1 \right) -{c_{{01}}}^{2} \sin^3 \theta\cos \theta\left( 2\, \cos^2 \theta+1 \right)\\ &+{a_{{01}}}^{2} \sin^3 \theta\cos \theta\left( 2\, \cos^2 \theta-1 \right) -{c_{{10}}}^{2}\sin \theta\cos^3 \theta \left( 2\, \cos^2 \theta+1 \right),\\ \\ g_{2,4}(\theta)=&-c_{{03}}a_{{00}} \sin^3 \theta \left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right) -c_{{00}}a_{ {03}} \sin^3 \theta \left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right)\\ &-c_{{30}}a_{{00}} \cos^3 \theta \left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right) -c_{{00}}a_{{30}} \cos^3 \theta \left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right)\\ &+d_{{00}}\sin \theta\left( 2\, \cos^2 \theta+1 \right) +b_{{00}} \cos \theta\left( 2\, \cos^2 \theta-1 \right) -c_{{20}}a_{{10}} \cos^3 \theta \left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right)\\ &-c_{{10}}a_{{20}} \cos^3 \theta \left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right) -c_{{02}}a_{{01}} \sin^3 \theta \left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right)\\ &-c_{ {01}}a_{{02}} \sin^3 \theta \left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right) -c_{{21}}a_{ {00}} \cos^2 \theta\sin \theta\left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right)\\ &-c_{{00}}a_{{21}} \cos^2 \theta\sin \theta\left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right) -2\,c_{{12}}c_{{00}} \sin^3 \theta \cos^2 \theta \left( 2\, \cos^2 \theta+1 \right)\\ &-2\,c_{{30}}c_{{00}} \cos^4 \theta\sin \theta\left( 2\, \cos^2 \theta+1 \right) +2\,a_{{12 }}a_{{00}} \sin^3 \theta \cos^2 \theta \left( 2\, \cos^2 \theta-1 \right)\\ &+2\,a_{{03}}a_{{00}} \sin^4 \theta\cos \theta\left( 2\, \cos^2 \theta-1 \right) +2\,a_{{30}}a_{{00}} \cos^4 \theta\sin \theta\left( 2\, \cos^2 \theta-1 \right)\\ &+2\,a_{{02}}a_{{01}} \sin^4 \theta\cos \theta\left( 2\, \cos^2 \theta -1 \right)\\ & -2\,c_{{10}}c_{{02}} \sin^3 \theta \cos^2 \theta \left( 2 \, \cos^2 \theta+1 \right) -2\,c_{{2 0}}c_{{10}} \cos^4 \theta\sin \theta\left( 2\, \cos^2 \theta+1 \right)\\ &+2\,a_{{11}}a_{{01}} \sin^3 \theta \cos^2 \theta \left( 2\, \cos^2 \theta-1 \right) -c_{{20}}a_{{01}} \cos^2 \theta\sin \theta\left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right)\\ &+2\,a_{{10}}a_{{02}} \sin^3 \theta \cos^2 \theta \left( 2\, \cos^2 \theta-1 \right) +2\,a_{{20}}a_{{10}} \cos^4 \theta\sin \theta\left( 2\, \cos^2 \theta-1 \right)\\ &-c_{{11}}a _{{10}} \cos^2 \theta\sin \theta\left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right) -c_{{10}}a_{{11}} \cos^2 \theta\sin \theta\left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right)\\ &-c_{{01}}a_{{20}} \cos^2 \theta\sin \theta\left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right) -2\,c_{{11}}c_{{01}} \sin^3 \theta \cos^2 \theta \left( 2\, \cos^2 \theta+1 \right)\\ &+ \left( 4\, \cos^7 \theta-6\, \cos^5 \theta+ \cos^3 \theta+\cos \theta\right) c_{{12}}a_{{00}} + \left( 4\, \cos^7 \theta-6\, \cos^5 \theta + \cos^3 \theta+\cos \theta \right) c_{{00}}a_{{12}}\\ \end{array} } $$ $$ \small{ \begin{array}{l} + \left( -4\, \cos^7 
\theta-2\, \cos^3 \theta+6\, \cos^5 \theta \right) a_{{21}}a_{{00}} + \left( -2\, \cos^5 \theta-2\, \cos^3 \theta +4\, \cos^7 \theta \right) c_{{21}}c_{{00}}\\ + \left( -2\,\cos \theta+6\, \cos^5 \theta-4\, \cos^7 \theta \right) c_{{03}}c_{{0 0}} + \left( 4\, \cos^7 \theta-6\, \cos^5 \theta+ \cos^3 \theta+\cos \theta\right) c_{{1 1}}a_{{01}}\\ + \left( 4\, \cos^7 \theta-6\, \cos^5 \theta+ \cos^3 \theta+\cos \theta \right) c_{{10}}a_{{02}} + \left( 4\, \cos^7 \theta-6\, \cos^5 \theta+ \cos^3 \theta+\cos \theta\right) c_{{02}}a_{{10}}\\ + \left( 4\, \cos^7 \theta-6\, \cos^5 \theta+ \cos^3 \theta +\cos \theta\right) c_{{01}}a_{{11}} + \left( -2\, \cos \theta+6\, \cos^5 \theta-4\, \cos^7 \theta\right) c_{{02}}c_{{01}}\\ + \left( -4\, \cos^7 \theta-2\, \cos^3 \theta+6\, \cos^5 \theta \right) a_{{2 0}}a_{{01}} + \left( -4\, \cos^7 \theta-2\, \cos^3 \theta+6\, \cos^5 \theta \right) a_{{11}}a_{{10}}\\ + \left( -2\, \cos^5 \theta-2\, \cos^3 \theta+4\, \cos^7 \theta \right) c_{{20}}c_{{01}}+ \left( -2\, \cos^5 \theta-2\, \cos^3 \theta+4\, \cos^7 \theta \right) c_{{11}}c_{{10}}, \end{array} } $$ $$ \small{ \begin{array}{ll} g_{2,5}(\theta)=&-c_{{10}}a_{{30}} \cos^4 \theta \left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right) -c_{{30}}a_{ {10}} \cos^4 \theta \left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right), \\ & -{c_{{02}}}^{2} \sin^5 \theta\cos \theta \left( 2\, \cos^2 \theta+1 \right) -{c_{{11}}}^{2} \sin^3 \theta \cos^3 \theta \left( 2\, \cos^2 \theta+1 \right) \\ &-{c_{{20}}}^{2} \cos^5 \theta\sin \theta\left( 2\, \cos^2 \theta+1 \right) +{a_{{02}}}^{2} \sin^5 \theta \cos \theta\left( 2\, \cos^2 \theta-1 \right) \\ & + \left( -7\, \cos^4 \theta+1-4\, \cos^8 \theta+10\, \cos^6 \theta \right) a_{{01}}c_{{03}} + \left( -7\, \cos^4 \theta+1-4\, \cos^8 \theta+10\, \cos^6 \theta \right) a_{{03}}c_{{01}} \\ &+ \left( -7\, \cos^4 \theta+1-4\, \cos^8 \theta+10\, \cos^6 \theta \right) a_{{02}}c_{{02}} +d_{{10}}\sin \theta\cos \theta\left( 2\, \cos^2 \theta+1 \right) \\ &+b_{{10}} \cos^2 \theta \left( 2\, \cos^2 \theta-1 \right) +d_{{01}} \sin^2 \theta ^{2 } \left( 2\, \cos^2 \theta+1 \right) +{a_{{11}}}^{2} \sin^3 \theta \cos^3 \theta \left( 2\, \cos^2 \theta-1 \right) \\ &-c_{{01}}a_{{12}} \sin^3 \theta\cos \theta\left( -2\, \cos^2 \theta- 1+4\, \cos^4 \theta \right) -c_{{30 }}a_{{01}}\sin \theta\cos^3 \theta \left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right)\\ &-c_{{21}}a_{{10}}\sin \theta\cos^3 \theta \left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right) -c_{{20}}a_{{11}}\sin \theta \cos^3 \theta \left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right) \\ &-c_{{11}}a_{{20}}\sin \theta\cos^3 \theta \left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right) +2\,a_{{10}} a_{{03}} \sin^4 \theta \cos^2 \theta \left( 2\, \cos^2 \theta-1 \right)\\ &-c_{{10}}a_{{21}}\sin \theta\cos^3 \theta \left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right) -c_{{01}}a_{ {30}}\sin \theta\cos^3 \theta \left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right)\\ &-c_{ {12}}a_{{01}} \sin^3 \theta\cos \theta\left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right) -c_{{11}}a_{{02}} \sin^3 \theta\cos \theta\left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right)\\ & -c_{{10}}a_{{03}} \sin^3 \theta\cos \theta\left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right) -c_{{03}}a_{{10}} \sin^3 \theta\cos \theta\left( -2\, \cos^2 \theta- 1+4\, \cos^4 \theta \right)\\ &-c_{{02 }}a_{{11}} \sin^3 \theta\cos \theta\left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right) +2\,a_{{12}}a_{{01}} \sin^4 \theta \cos^2 \theta \left( 2 \, \cos^2 \theta-1 \right)\\ 
&+2\,a_{{1 1}}a_{{02}} \sin^4 \theta \cos^2 \theta \left( 2\, \cos^2 \theta-1 \right) +{a_{{20}}}^{2} \cos^5 \theta\sin \theta\left( 2\, \cos^2 \theta-1 \right)\\ &+2\,a_{{21}}a_{{01}} \sin^3 \theta \cos^3 \theta \left( 2 \, \cos^2 \theta-1 \right) +2\,a_{{2 0}}a_{{02}} \sin^3 \theta \cos^3 \theta \left( 2\, \cos^2 \theta-1 \right) \\ &+2\,a_{{30}}a_{{10}} \cos^5 \theta\sin \theta\left( 2\, \cos^2 \theta-1 \right) -2\,c_{{30}}c_{{10}} \cos^5 \theta\sin \theta\left( 2\, \cos^2 \theta+1 \right) \\ &-2\,c_{{03}}c_{{01}} \sin^5 \theta\cos \theta\left( 2\, \cos^2 \theta+1 \right) -c_{{20}}a_{{20}} \cos^4 \theta \left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right) \\ &-2\, c_{{21}}c_{{01}} \sin^3 \theta \cos^3 \theta \left( 2\, \cos^2 \theta+1 \right) -2\,c_{{20}}c_{{02}} \sin^3 \theta \cos^3 \theta \left( 2\, \cos^2 \theta+1 \right) \\ &-2\,c_{{12}}c_{{10}} \sin^3 \theta \cos^3 \theta \left( 2\, \cos^2 \theta+1 \right) +2\,a_{{12}}a_{{10}} \sin^3 \theta \cos^3 \theta \left( 2\, \cos^2 \theta -1 \right)\\ &+2\,a_{{03}}a_{{01}} \sin^5 \theta\cos \theta \left( 2\, \cos^2 \theta-1 \right) + \left( 6\, \cos^6 \theta-4\, \cos^8 \theta -2\, \cos^2 \theta \right) c_{{10}}c_{{03}}\\ &+ \left( -4\, \cos^8 \theta-2\, \cos^4 \theta+6\, \cos^6 \theta \right) a_{{3 0}}a_{{01}} + \left( -4\, \cos^8 \theta-2\, \cos^4 \theta+6\, \cos^6 \theta \right) a_{{21}}a_{{10}} \\ &+ \left( -4\, \cos^8 \theta-2\, \cos^4 \theta+6\, \cos^6 \theta \right) a_{{20}}a_{{11}} + \left( \cos^2 \theta-6\, \cos^6 \theta+ \cos^4 \theta+4\, \cos^8 \theta \right) c_{{11}}a_{{11}} \\ &+ \left( \cos^2 \theta-6\, \cos^6 \theta+ \cos^4 \theta+4\, \cos^8 \theta \right) c_{{01}}a_{{21}} + \left( -2\, \cos^6 \theta-2\, \cos^4 \theta+4\, \cos^8 \theta \right) c_{{30}}c_{{01}} \\ &+ \left( -2\, \cos^6 \theta-2\, \cos^4 \theta+4\, \cos^8 \theta \right) c_{{21}}c_{{10}} + \left( -2\, \cos^6 \theta-2\, \cos^4 \theta+4\, \cos^8 \theta \right) c_{{20}}c_{{11}}\\ &+\left( \cos^2 \theta-6\, \cos^6 \theta+ \cos^4 \theta+4\, \cos^8 \theta \right) c_{{20}}a_{{02}} + \left( \cos^2 \theta-6\, \cos^6 \theta+ \cos^4 \theta+4\, \cos^8 \theta \right) c_{{12}}a_{{10}} \\ &+ \left( \cos^2 \theta-6\, \cos^6 \theta+ \cos^4 \theta+4\, \cos^8 \theta \right) c_{{10}}a_{{12}} + \left( \cos^2 \theta-6\, \cos^6 \theta + \cos^4 \theta+4\, \cos^8 \theta \right) c_{{02}}a_{{20}} \\ &+ \left( \cos^2 \theta-6\, \cos^6 \theta+ \cos^4 \theta+4\, \cos^8 \theta \right) c_{{21}}a_{{01}} + \left( 6\, \cos^6 \theta -4\, \cos^8 \theta-2\, \cos^2 \theta\right) c_{{1 2}}c_{{01}}\\ &+ \left( 6\, \cos^6 \theta -4\, \cos^8 \theta-2\, \cos^2 \theta \right) c_{{11}}c_{{02}}+b_ {{01}}\sin \theta\cos \theta\left( 2 \, \cos^2 \theta-1 \right),\\ \\ g_{2,6}(\theta)=&-2\,c_{{21}}c_{{11}} \sin^3 \theta \cos^4 \theta \left( 2\, \cos^2 \theta+1 \right) -2\,c_{{20}}c_{{1 2}} \sin^3 \theta \cos^4 \theta \left( 2\, \cos^2 \theta+1 \right)\\ &-2\,c_{{30}}c_{{20}} \cos^6 \theta\sin \theta \left( 2\, \cos^2 \theta+1 \right) -c_{{20}}a_{{03}} \sin^3 \theta \cos^2 \theta \left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right)\\ &+2\,a_{{21}}a_{{11}} \sin^3 \theta \cos^4 \theta \left( 2\, \cos^2 \theta-1 \right)\\ &+2\,a_{{20}}a_{{12}} \sin^3 \theta \cos^4 \theta \left( 2\, \cos^2 \theta -1 \right) -c_{{30}}a_{{20}} \cos^5 \theta \left( -2\, \cos^2 \theta -1+4\, \cos^4 \theta \right)\\ &-c_{{20}}a_{{30}} \cos^5 \theta \left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right) + \left( \cos^3 \theta+\cos \theta-2\, \cos^5 \theta \right) d_{{11}}\\ &+ \left( -\cos \theta+3\, \cos^3 \theta-2\, \cos^5 \theta \right) 
b_{{02}} +d_{{02}} \sin^3 \theta \left( 2\, \cos^2 \theta+1 \right)\\ & -c_{{20}}a_{{21}} \cos^4 \theta\sin \theta\left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right) +2\,a_{{21}}a_{{02}} \sin^4 \theta \cos^3 \theta \left( 2\, \cos^2 \theta-1 \right)\\ &+2\,a_{{20}}a_{{03}} \sin^4 \theta \cos^3 \theta \left( 2\, \cos^2 \theta -1 \right) +2\,a_{{12}}a_{{11}} \sin^4 \theta \cos^3 \theta \left( 2 \, \cos^2 \theta-1 \right)\\ &+2\,a_{{0 3}}a_{{02}} \sin^6 \theta \cos \theta\left( 2\, \cos^2 \theta-1 \right) -2\,c_{{03}}c_{{02}} \sin^6 \theta\cos \theta\left( 2\, \cos^2 \theta+1 \right)\\ &+2\,a_{{11}}a_{{03}} \sin^5 \theta \cos^2 \theta \left( 2\, \cos^2 \theta-1 \right) -2\,c_{{11}}c_{{03}} \sin^5 \theta \cos^2 \theta \left( 2\, \cos^2 \theta+1 \right)\\ &-2\,c_{{30}}c_{{02}} \sin^3 \theta \cos^4 \theta \left( 2\, \cos^2 \theta+1 \right) +2\,a_{{12}}a_{{02}} \sin^5 \theta \cos^2 \theta \left( 2\, \cos^2 \theta -1 \right)\\ &-c_{{11}}a_{{30}} \cos^4 \theta\sin \theta\left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right)\\ &-c_{{11}}a_{{12}} \sin^3 \theta \cos^2 \theta \left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right) -c_{ {21}}a_{{02}} \sin^3 \theta \cos^2 \theta \left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right)\\ \end{array} } $$ $$ \small{ \begin{array}{ll} &+b_{{20}} \cos^3 \theta \left( 2\, \cos^2 \theta-1 \right) -2\,c_{{12}}c_{{02}} \sin^5 \theta \cos^2 \theta \left( 2\, \cos^2 \theta+1 \right)\\ &-c_{{12}}a_{{11}} \sin^3 \theta \cos^2 \theta \left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right) -c_{{30}}a_{ {11}} \cos^4 \theta\sin \theta\left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right)\\ &-c_{{21}}a_{{20}} \cos^4 \theta\sin \theta\left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right) +2\,a_{{30}}a_{{20}} \cos^6 \theta\sin \theta \left( 2\, \cos^2 \theta-1 \right)\\ &-c_{{03}}a_{{20}} \sin^3 \theta \cos^2 \theta \left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right) -c_{{02}}a_{{21}} \sin^3 \theta \cos^2 \theta \left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right)\\ &+2\,a_{{30}}a_{{02}} \sin^3 \theta \cos^4 \theta \left( 2 \, \cos^2 \theta-1 \right) -c_{{03} }a_{{02}} \sin^5 \theta \left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right)\\ &+d_{{20}} \cos^2 \theta\sin \theta \left( 2\, \cos^2 \theta+1 \right) +b_{{11}} \cos^2 \theta\sin \theta\left( 2\, \cos^2 \theta -1\right) \\ &-c_{{02}}a_{{03}} \sin^5 \theta \left( -2\, \cos^2 \theta-1+4\, \cos^4 \theta \right) + \left( \cos \theta+10\, \cos^7 \theta-7\, \cos^5 \theta-4\, \cos^9 \theta \right) c_{{1 1}}a_{{03}}\\ &+ \left( \cos \theta+10\, \cos^7 \theta-7\, \cos^5 \theta-4\, \cos^9 \theta \right) c_{{03}}a_{{11}} + \left( \cos^5 \theta-6\, \cos^7 \theta+4\, \cos^9 \theta+ \cos^3 \theta \right) c_{{21}}a_{{11}}\\ &+ \left( \cos^5 \theta-6\, \cos^7 \theta+4\, \cos^9 \theta+ \cos^3 \theta\right) c_{{11}}a_{{21}} + \left( \cos^5 \theta-6\, \cos^7 \theta+4\, \cos^9 \theta+ \cos^3 \theta \right) c_{{20}}a_{{12}}\\ &+ \left( \cos^5 \theta-6\, \cos^7 \theta+4\, \cos^9 \theta + \cos^3 \theta\right) c_{{12}}a_{{20}} + \left( \cos^5 \theta-6\, \cos^7 \theta+4\, \cos^9 \theta+ \cos^3 \theta \right) c_{{02}}a_{{30}} \\ &+ \left( \cos \theta+10\, \cos^7 \theta-7\, \cos^5 \theta-4\, \cos^9 \theta \right) c_{{1 2}}a_{{02}} + \left( \cos \theta+10\, \cos^7 \theta -7\, \cos^5 \theta-4\, \cos^9 \theta \right)c_{{02}}a_{{12}} \\ &+ \left( \cos^5 \theta-6\, \cos^7 \theta+4\, \cos^9 \theta+ \cos^3 \theta \right) c_{{30}}a_{{02}} + \left( -2\, \cos^3 \theta+6\, \cos^7 \theta-4\, \cos^9 \theta \right) c_{{21}}c_{{02}} \\ &+ \left( -2\, \cos^3 
\theta+6\, \cos^7 \theta-4\, \cos^9 \theta \right) c_{{20}}c_{{03}}+ \left( -2\, \cos^3 \theta+6\, \cos^7 \theta-4\, \cos^9 \theta \right) c_{{12}}c_{{11}}\\ &+ \left( -4\, \cos^9 \theta+6\, \cos^7 \theta-2\, \cos^5 \theta \right) a_{{30}}a_{{11}}+ \left( -4\, \cos^9 \theta+6\, \cos^7 \theta-2\, \cos^5 \theta \right) a_{{21}}a_{{20}}\\ &+ \left( 4\, \cos^9 \theta-2\, \cos^5 \theta-2\, \cos^7 \theta \right) c_{{30}}c_{{11}}+ \left( 4\, \cos^9 \theta-2\, \cos^5 \theta-2\, \cos^7 \theta \right) c_{{21}}c_{{20}}, \end{array} } $$ $$ \small{ \begin{array}{ll} g_{2,7}(\theta)=&-{c_{{21}}}^{2} \sin^3 \theta \cos^5 \theta \left( 2\, \cos^2 \theta+1 \right) +{a_{{12}}}^{2} \sin^5 \theta \cos^3 \theta \left( -1+2\, \cos^2 \theta \right)\\ &-{c_{{03}}}^{2} \sin^7 \theta \cos \theta\left( 2\, \cos^2 \theta+1 \right) +{a_{{03}} }^{2} \sin^7 \theta \cos \theta\left( -1+2\, \cos^2 \theta \right)\\ &+{a_{{21}}}^{2} \sin^3 \theta \cos^5 \theta \left( -1+2\, \cos^2 \theta \right) -{c_{{12}}}^{2} \sin^5 \theta \cos^3 \theta \left( 2\, \cos^2 \theta+1 \right) -{c_{{30}}}^{2}\\ &\cos^7 \theta\sin \theta\left( 2\, \cos^2 \theta+1 \right) +2\,a_{{30}}a_{{12}} \sin^3 \theta \cos^5 \theta \left( - 1+2\, \cos^2 \theta \right)\\ &-c_{{03 }}a_{{12}} \sin^5 \theta\cos \theta\left( 4\, \cos^4 \theta-2\, \cos^2 \theta-1 \right) -c_{{30}}a_{{21}} \cos^5 \theta\sin \theta\left( 4\, \cos^4 \theta-2\, \cos^2 \theta-1 \right)\\ &-c_{{12}}a_{{21}} \sin^3 \theta \cos^3 \theta \left( 4\, \cos^4 \theta-2\, \cos^2 \theta-1 \right) -c_{{30}}a_{{03}} \sin^3 \theta \cos^3 \theta \left( 4 \, \cos^4 \theta-2\, \cos^2 \theta-1 \right)\\ &-2\,c_{{21}}c_{{03}} \sin^5 \theta \cos^3 \theta \left( 2\, \cos^2 \theta+1 \right) -2\,c_{{12}}c_{{03}} \sin^6 \theta \cos^2 \theta \left( 2\, \cos^2 \theta+1 \right)\\ &+2\,a_{{30}}a_{{03}} \sin^4 \theta \cos^4 \theta \left( -1+2\, \cos^2 \theta \right) +2\,a_{{12}}a_{{03}} \left( \sin \theta\right) ^{6} \cos^2 \theta \left( -1+2\, \cos^2 \theta \right)\\ &+2\,a_{{21}}a_{{12}} \sin^4 \theta \cos^4 \theta \left( - 1+2\, \cos^2 \theta \right) -c_{{21 }}a_{{30}} \cos^5 \theta\sin \theta\left( 4\, \cos^4 \theta-2\, \cos^2 \theta-1 \right)\\ &-c_{{03}}a_{{30}} \sin^3 \theta \cos^3 \theta \left( 4 \, \cos^4 \theta-2\, \cos^2 \theta-1 \right) -c_{{21}}a_{{12}} \sin^3 \theta \cos^3 \theta \left( 4\, \cos^4 \theta-2\, \cos^2 \theta-1 \right)\\ &+b_{{03}} \sin^3 \theta \cos \theta\left( -1+2\, \cos^2 \theta \right) +b_{{12}} \sin^2 \theta \cos^2 \theta \left( -1+2\, \cos^2 \theta \right)\\ &-c_{{30}}a_{{30}} \cos^6 \theta \left( 4\, \cos^4 \theta-2\, \cos^2 \theta-1 \right) -a_{{0 3}}c_{{03}} \sin^6 \theta \left( 4 \, \cos^4 \theta-2\, \cos^2 \theta-1 \right)\\ &+d_{{12}} \sin^3 \theta\cos \theta \left( 2\, \cos^2 \theta+1 \right) +d_{{21}} \sin^2 \theta \cos^2 \theta \left( 2\, \cos^2 \theta+1 \right)\\ &+d_{{30}}\sin \theta\cos^3 \theta \left( 2\, \cos^2 \theta+1 \right) -2\,c_{{30 }}c_{{12}} \sin^3 \theta \cos^5 \theta \left( 2\, \cos^2 \theta+1 \right)\\ &+2\,a_{{21}}a_{{03}} \sin^5 \theta \cos^3 \theta \left( -1+2\, \cos^2 \theta \right) -c_{{12}}a_{{03}} \sin^5 \theta\cos \theta\left( 4\, \cos^4 \theta-2\, \cos^2 \theta-1 \right)\\ &+d_{{03}} \sin^4 \theta \left( 2\, \cos^2 \theta+1 \right) +b_{{30}} \cos^4 \theta \left( -1+2\, \cos^2 \theta \right) +{a_{{30}}}^{2} \cos^7 \theta\sin \theta\left( -1+2\, \cos^2 \theta \right)\\ &+ \left( 6\, \cos^8 \theta-4\, \cos^{10} \theta-2\, \cos^4 \theta \right) c_{{30}}c_{{03}} + \left( 6\, \cos^8 \theta-4\, \cos^{10} \theta -2\, \cos^4 \theta\right) 
c_{{21}}c_{{12}}\\ &+ \left( 4\, \cos^{10} \theta-2\, \cos^8 \theta-2\, \cos^6 \theta \right) c_{{30}}c_{{21}} +b_{{21}}\sin \theta \cos^3 \theta \left( -1+2\, \cos^2 \theta \right)\\ &+ \left( -6\, \cos^8 \theta+ \cos^4 \theta+ \cos^6 \theta +4\, \cos^{10} \theta \right)c_{{21 }}a_{{21}} + \left( -4\, \cos^{10} \theta -2\, \cos^6 \theta+6\, \cos^8 \theta \right) a_{{30}}a_{{21}}\\ &+ \left( \cos^2 \theta-7\, \cos^6 \theta+10\, \cos^8 \theta-4\, \cos^{10} \theta \right) c_{{03}}a_{{21}} + \left( -6\, \cos^8 \theta+ \cos^4 \theta+ \cos^6 \theta +4\, \cos^{10} \theta \right) c_{{12 }}a_{{30}}\\ &+ \left( \cos^2 \theta-7 \, \cos^6 \theta+10\, \cos^8 \theta-4\, \cos^{10} \theta \right) c_{{12}}a_{{12}} + \left( -6\, \cos^8 \theta+ \cos^4 \theta+ \cos^6 \theta+4\, \cos^{10} \theta \right) c_{{30}}a_{{12}}\\ &+ \left( \cos^2 \theta-7\, \cos^6 \theta+10\, \cos^8 \theta-4\, \cos^{10} \theta \right) c_{{21}}a_{{03}}. \end{array} } $$ \section{Appendix B} Here we present the expressions of $A_4,A_2,A_0,A_5,A_3,A_1,\tilde{A}_1$ that appear in equation \eqref{ints}. $$ \small{ \begin{array}{ll} A_4(\theta)=&{{\rm e}^{5\, \sin^2 \theta }}\left[\left[ -{\dfrac { \left( 4\,a_{01}c_{00}{I_2}-8\,{I_1}\,a_ {{01}}c_{00}-8\,{I_1}\,c_{00}c_{10}-4\,a_{00}c_{{01} }{I_3}+4\,c_{00}c_{10}{I_2} \right) \cos^4 \theta }{{I_2}-2\,{I_1}}}\right.\right.\\ &-{\dfrac { \left( -4\,{I_1}\,c_{00}c_{10}+2\,a_{00}c_{01}{I_3}-2\,a_{01}c_ {{00}}{I_2}+2\,c_{00}c_{10}{I_2}+4\,{I_1}\,a_{01}c_{00} \right) \cos^2 \theta }{{ I_2}-2\,{I_1}}}\\ &-\left.{\dfrac { \left( -a_{01}c_{00}{I_2}+2\,{I_1}\,a_{01}c_{00}+2\,{I_1}\,a_{00}c_{01}-a_{00}c_{ {01}}{I_2} \right) }{{I_2}-2\,{I_1}}} \right] \sin \theta\\ & -{\dfrac { \left( 4\,c_{00}c_{01}{I_3}+4\,a_{{00}}c_{10}{I_2}-8\,{I_1}\,a_{00}a_{01}-8\,{I_1}\, a_{00}c_{10}+4\,a_{00}a_{01}{I_2} \right) \left( \cos \theta \right) ^{5}}{{I_2}-2\,{I_1}}}\\ &-{\dfrac{ \left( 12 \,{I_1}\,a_{00}a_{01}+4\,{I_1}\,a_{00}c_{10}-2\,c_ {{00}}c_{01}{I_3}-2\,a_{00}c_{10}{I_2}-6\,a_{00}a _{{01}}{I_2} \right) \left( \cos \theta \right) ^{ 3}}{{I_2}-2\,{I_1}}}\\ &-\left.{\dfrac { \left( -2\,c_{00}c_{01}{I_1}+c_ {{00}}c_{01}{I_2}-4\,{I_1}\,a_{00}a_{01}+2\,a_{{00} }a_{01}{I_2}+2\,{I_1}\,a_{00}c_{10}-c_{00}c_{{01} }{I_3}-a_{00}c_{10}{I_2} \right) \cos \theta }{{I_2}-2\,{I_1}}}\right],\\ \\ A_2(\theta)=&\left[A_{2,7}\cos^7\theta +A_{2,6}\cos^6\theta \sin\theta +A_{2,5}\cos^5\theta +A_{2,4}\cos^4\theta \sin\theta + A_{2,3}\cos^3\theta +A_{2,2}\cos^2\theta \sin\theta \right.\\ &+\left.A_{2,1}\cos\theta +A_{2,0}\theta \sin\theta \right]\exp(3\sin^2\theta ),\\ \\ A_{2,7}=&4\,a_{03}a_{00}-4\,a_{21}a_{00}-4\,c_{30}a_{00}+4\,c _{{12}}a_{00}-4\,a_{20}a_{01}+4\,a_{02}a_{01}+4\,c_{{1 1}}a_{01}-4\,c_{10}a_{20}\\ &+4\,c_{10}a_{02}+4\,{\dfrac {a _{{11}}c_{01}{I_3}}{2\,{I_1}-{I_2}}}+4\,c_{00}a_{{ 12}}+4\,c_{03}c_{00}+8\,c_{21}c_{00}+4\,{\dfrac {c_{01} c_{20}{I_3}}{2\,{I_1}-{I_2}}}\\ &-4\,{\dfrac {c_{01}c_{{0 2}}{I_3}}{2\,{I_1}-{I_2}}}+4\,c_{11}c_{10},\\ \\ A_{2,6}=&-4\,a_{12}a_{00}-4\,c_{03}a_{00}-8\,c_{21}a_{00}-4\, a_{11}a_{01}-4\,c_{20}a_{01}+4\,c_{02}a_{01}-4\,{ \dfrac {a_{20}c_{01}{I_3}}{2\,{I_1}-{I_2}}}\\ &+4\,{ \dfrac {a_{02}c_{01}{I_3}}{2\,{I_1}-{I_2}}}-4\,a_{{1 1}}c_{10}+4\,a_{03}c_{00}-4\,a_{21}c_{00}-4\,c_{30} c_{00}+4\,c_{12}c_{00}\\ &+4\,{\dfrac {c_{01}c_{11}{I_3} }{2\,{I_1}-{I_2}}}-4\,c_{20}c_{10}+4\,c_{10}c_{02},\\ \\ A_{2,5}=&-10\,a_{03}a_{00}+6\,a_{21}a_{00}+2\,c_{30}a_{00}-6 \,c_{12}a_{00}+6\,a_{20}a_{01}-10\,a_{02}a_{01}-6\,c _{{11}}a_{01}+2\,c_{10}a_{20}\\ &-6\,c_{10}a_{02}-6\,{ \dfrac 
{a_{11}c_{01}{I_3}}{2\,{I_1}-{I_2}}}-6\,c_{{0 0}}a_{12}+2\,c_{03}c_{00}-4\,c_{21}c_{00}-2\,{\dfrac {c _{{01}}c_{20}{I_3}}{2\,{I_1}-{I_2}}}\\ &+6\,{\dfrac {c_{{0 1}}c_{02}{I_3}}{2\,{I_1}-{I_2}}}-2\,c_{11}c_{{10} }, \\ \\ A_{2,4}=&6\,a_{00}a_{12}-2\,a_{00}c_{03}+4\,a_{00}c_{21}+6\,a _{{01}}a_{11}+2\,a_{01}c_{20}-6\,a_{01}c_{02}+2\,{ \dfrac {a_{20}c_{01}{I_3}}{2\,{I_1}-{I_2}}}\\ &-6\,{ \dfrac {a_{02}c_{01}{I_3}}{2\,{I_1}-{I_2}}}+2\,a_{{1 1}}c_{10}-6\,a_{03}c_{00}+2\,a_{21}c_{00}-2\,c_{00} c_{30}-2\,c_{00}c_{12}-2\,{\dfrac {c_{01}c_{11}{I_3} }{2\,{I_1}-{I_2}}}\\ &-2\,c_{10}c_{20}-2\,c_{02}c_{{10} }, \\ \\ A_{2,3}=&2\,b_{{00}}+8\,a_{00}a_{03}-2\,a_{00}a_{21}+a_{00}c_{{ 30}}+a_{00}c_{12}-2\,a_{01}a_{20}+8\,a_{01}a_{02}+a _{{01}}c_{11}+a_{20}c_{10}\\ &+a_{02}c_{10}-{\dfrac { \left( -2\,{I_3}-{I_2}+2\,{I_1} \right) c_{01}a_{{11} }}{2\,{I_1}-{I_2}}}+a_{12}c_{00}-2\,c_{00}c_{03}- 3\,c_{00}c_{21}\\ &-{\dfrac { \left( -{I_2}+2\,{I_1}+{I_3 } \right) c_{01}c_{20}}{2\,{I_1}-{I_2}}}+{\dfrac { \left( 2\,{I_1}-{I_3}-{I_2} \right) c_{02}c_{01}}{- {I_2}+2\,{I_1}}}-2\,c_{10}c_{11},\\ \\ A_{2,2}=&2\,d_{{00}}-2\,a_{00}a_{12}+a_{00}c_{03}+a_{00}c_{{21 }}-2\,a_{01}a_{11}+a_{01}c_{20}+a_{01}c_{02}+a_{{20 }}c_{01}\\ &- {\dfrac { \left( -2\,{I_3}-{I_2}+2\,{I_1} \right) c_{01}a_{02}}{2\,{I_1}-{I_2}}}+a_{11}c_{{10 }}+a_{03}c_{00}+a_{21}c_{00}-2\,c_{00}c_{12}\\ &-{\dfrac { \left( -{I_2}+2\,{I_1}+{I_3} \right) c_{11}c_{01}} {2\,{I_1}-{I_2}}}-2\,c_{02}c_{10},\\ \\ A_{2,1}=&-b_{{00}}-2\,a_{00}a_{03}+a_{00}c_{12}-2\,a_{01}a_{{0 2}}+a_{01}c_{11}+a_{02}c_{10}+a_{11}c_{01}+a_{12} c_{00}-2\,c_{00}c_{03}\\ &-{\dfrac { \left( -{I_2}+2\,{I_1 }+{I_3} \right) c_{01}c_{02}}{2\,{I_1}-{I_2}}},\\ \\ A_{2,0}=&a_{03}c_{00}+d_{{00}}+a_{00}c_{03}+a_{01}c_{02}+a_{ {02}}c_{01}. \end{array} } $$ Now we have $$ \small{ \begin{array}{l} A_0(\theta)=A_{0,9}\cos^9\theta +A_{0,8}\cos^8\theta +A_{0,7}\cos^7\theta +A_{0,6}\cos^6\theta +A_{0,5}\cos^5\theta +A_{0,4}\cos^4\theta \\ +A_{0,3}\cos^3\theta +A_{0,2}\cos^2\theta +A_{0,1}\cos\theta +A_{0,0}, \end{array} } $$ with $$ \small{ \begin{array}{ll} A_{0,9}(\theta)=&- \left( -4\,a_{21}c_{11}+4\,c_{02}c_{03}+8\,c_{02}c_{{ 21}}-4\,c_{03}c_{20}+4\,c_{11}c_{12}-4\,c_{11}c_{{30} }-8\,c_{20}c_{21}\right. \\ &-8\,a_{11}c_{21}+4\,a_{12}c_{02}-4 \,a_{12}c_{20}+4\,a_{20}a_{21}-4\,a_{20}c_{12}+4\,a_ {{20}}c_{30}+4\,a_{02}a_{03}-4\,a_{02}a_{21}\\ &\left.+4\,a_{{0 2}}c_{12}-4\,a_{02}c_{30}-4\,a_{03}a_{20}+4\,a_{03}c _{{11}}-4\,a_{11}a_{12}-4\,a_{11}c_{03} \right) {{\rm e}^ { \sin^2 \theta }},\\ \\ A_{0,8}(\theta)=&- \left( -4\,a_{03}c_{20}+4\,a_{11}a_{21}-4\,a_{11}c_{{ 12}}+4\,a_{12}a_{20}-4\,a_{02}c_{03}-8\,a_{02}c_{{21} }-4\,a_{03}a_{11}\right.\\ &+4\,a_{03}c_{02}-8\,c_{11}c_{21}-4 \,c_{12}c_{20}+4\,c_{20}c_{30}+4\,a_{11}c_{30}-4\,a_{02}a_{12}-4\,a_{12}c_{11}\\ &\left. 
+4\,a_{20}c_{03}+8\,a_{{20}}c_{21}-4\,a_{21}c_{02}+4\,a_{21}c_{20}+4\,c_{02}c_{{12}}-4\,c_{02}c_{30}-4\,c_{03}c_{11} \right) {{\rm e}^{ \sin^2 \theta }}\sin \theta,\\ \\ A_{0,7}(\theta)=&- \left( 2\,c_{11}c_{30}+4\,c_{20}c_{21}-14\,a_{02}a_{{03}}+10\,a_{02}a_{21} -10\,a_{02}c_{12}+6\,a_{02}c_{{30}}+10\,a_{03}a_{20}\right.\\ &-10\,a_{03}c_{11}+10\,a_{11}a_{{12}}+2\,a_{11}c_{03}+12\,a_{11}c_{21}-10\,a_{12}c_{02}+ 6\,a_{12}c_{20}-6\,a_{20}a_{21}+6\,a_{20}c_{12}\\ &\left.-2\,a_{{20}}c_{30}+6\,a_{21}c_{11}-2\,c_{02}c_{03}-12\,c_{{02}}c_{21}-2\,c_{03}c_{20}-6\,c_{11}c_{12} \right) {{\rm e}^{ \sin^2 \theta }},\\ \\ A_{0,6}(\theta)=&- \left( 2\,c_{20}c_{30}+2\,a_{02}c_{03}+6\,a_{12}c_{{11}}+2\,a_{20}c_{03}-4\,a_{20}c_{21}+6\,a_{21}c_{02} -2\,a_{21}c_{20}-6\,c_{02}c_{12}\right.\\ &+2\,c_{02}c_{30}-2\,c_{03}c_{11}+4\,c_{11}c_{21}+2\,c_{12}c_{20}-6\,a_{{11}}a_{21}+6\,a_{11}c_{12}-2\,a_{11}c_{30}-6\,a_{{12}}a_{20}\\ &\left.+6\,a_{03}c_{20}+12\,a_{02}c_{21}+10\,a_{03}a_{{11}}+10\,a_{02}a_{12}-10\,a_{03}c_{02} \right) {{\rm e}^{ \sin^2 \theta }}\sin \theta,\\ \\ A_{0,5}(\theta)=&- \left[ -2\,b_{20}+18\,a_{02}a_{03}-8\,a_{11}a_{12}+2\,b_{02}-a_{20}c_{12}-a_{20}c_{30}-a_{21}c_{11}-4\,c_{02}c_{03}+c_{02}c_{21}\right.\\ &+2\,c_{03}c_{20}+2\,c_{{11}}c_{30} +3\,c_{20}c_{21}+3\,a_{11}c_{03}-3\,a_{11}c_{21}+7\,a_{12}c_{02}-a_{12}c_{20}+2\,a_{20}a_{{21}}\\ &\left.+7\,a_{02}c_{12}-a_{02}c_{30}+2\,d_{11}-8\,a_{02}a_{{21}}-8\,a_{03}a_{20}+7\,a_{03}c_{11} \right] {{\rm e}^{ \sin^2 \theta }},\\ \\ A_{0,4}(\theta)=&- \left[ 7\,a_{03}c_{02}-a_{03}c_{20}-8\,a_{03}a_{{11}}-a_{11}c_{30}+2\,a_{12}a_{20}-a_{12}c_{11}-a_{{11}}c_{12}-a_{20}c_{21}\right.\\ &-8\,a_{02}a_{12}+3\,a_{02}c_{{03}}-3\,a_{02}c_{21}+2\,c_{03}c_{11}+3\,c_{11}c_{21}+ 2\,c_{12}c_{20}+2\,a_{11}a_{21}-a_{20}c_{03}\\ &\left.-2\,b_{{11}}+2\,d_{02}-2\,d_{{20}}+2\,c_{02}c_{30}-a_{21}c_{{02}}-a_{21}c_{20} \right] {{\rm e}^{ \sin^2 \theta }}\sin \theta,\\ \\ A_{0,3}(\theta)=&- \left[ -a_{11}c_{21}-d_{11}-a_{12}c_{20}-3\,b_{02} +b_{20}+2\,a_{11}a_{12}+3\,c_{02}c_{21}+2\,c_{03}c_{{20}}-a_{20}c_{12}\right.\\ &\left.-a_{21}c_{11}-10\,a_{02}a_{03}+2\,a_{02}a_{21}+2\,c_{11}c_{12}+2\,a_{03}a_{20}-a_{{02}}c_{30} \right] {{\rm e}^{ \sin^2 \theta }},\\ \\ A_{0,2}(\theta)=&- \left[ 2\,a_{03}a_{11}-a_{03}c_{20}-a_{02}c_{21}-a_{{20}}c_{03}-a_{21}c_{02}-a_{11}c_{12}-a_{12}c_{{11}}-d_{02}-d_{{20}}\right.\\ &\left.+2\,c_{02}c_{12}+2\,c_{03}c_{11}+b_{{11}}+2\,a_{02}a_{12} \right] {{\rm e}^{ \sin^2 \theta }}\sin \theta,\\ \\ A_{0,1}(\theta)=&- \left( 2\,c_{02}c_{03}-a_{02}c_{12}+b_{02}-d_{11}+ 2\,a_{02}a_{03}-a_{03}c_{11}-a_{11}c_{03}-a_{12}c_{{02}} \right) {{\rm e}^{ \sin^2 \theta }},\\ \\ A_{0,0}(\theta)=&- \left[ -d_{02}-a_{02}c_{03}-a_{03}c_{02} \right] {{\rm e}^{ \sin^2 \theta }}\sin \theta.
\end{array} } $$ Additionally, we have $$ \small{ \begin{array}{ll} A_5(\theta)=&\left[\sin \theta\cos \theta\left( -1+2\, \cos^2 \theta \right){a_{{00}}}^{2} + \left( -4\, \cos^4 \theta+2\, \cos^2 \theta +1 \right) c_{{00}}a_{{00}}\right.\\ &\left.-\sin \theta\cos \theta \left( 2\, \cos^2 \theta+1 \right){c_{{00}}}^{2} \right]{{\rm e}^{6\, \sin^2 \theta}},\\ \end{array} } $$ $$ \small{ \begin{array}{ll} A_3(\theta)=&2\, \cos^3 \theta\sin \theta\left( -1+2\, \cos^2 \theta \right) a_{{00}}a_{{20} } -2\,\cos \theta\sin \theta\left( \cos^2 \theta-1 \right) \left( -1+2\, \cos^2 \theta \right) a_{{00}}a_{{02}}\\ &-2\, \cos^2 \theta \left( \cos^2 \theta-1 \right) \left( -1+2\, \cos^2 \theta \right) a_{{00}}a_{{11}} - \cos^2 \theta \left( 4\, \cos^4 \theta-1-2\, \cos^2 \theta \right) a_{{0 0}}c_{{20}}\\ &+ \left( \cos^2 \theta-1 \right) \left( 4\, \cos^4 \theta-1-2\, \cos^2 \theta \right) a_{{00}}c_{{02}} -\cos \theta\sin \theta\left( 4\, \cos^4 \theta-1-2\, \cos^2 \theta \right) c_{{11}}a_{{00}}\\ &-\cos \theta\sin \theta\left( \cos^2 \theta-1 \right) \left( -1+2\, \cos^2 \theta \right) {a_{{01}}}^{2} - \left( \cos^2 \theta-1 \right) \frac{ 2\,{I_1}-4\, \cos^4 \theta{I_3}+2\, \cos^2 \theta{I_3}-{I_2} }{ 2\,{ I_1 -{I_2}} } a_{{01}}c_{{01}}\\ &-\cos \theta\sin \theta\left( 4\, \cos^4 \theta-1-2\, \cos^2 \theta \right) c_{{10}}a_{{01}} -\cos^2 \theta \left( 4\, \cos^4 \theta -1-2\, \cos^2 \theta\right) c_{{00}}a_{{20}}\\ &+ \left( \cos^2 \theta-1 \right) \left( 4\, \cos^4 \theta-1-2\, \cos^2 \theta \right) c_{{00}}a_{{02}} -\cos \theta\sin \theta\left( 4\, \cos^4 \theta-1-2 \, \cos^2 \theta \right) c_{{00}}a_{{11}}\\ &-2\, \cos^3 \theta\sin \theta\left( 2\, \cos^2 \theta+1 \right) c_{{00}}c_{{20}} +2\,\cos \theta\sin \theta\left( \cos^2 \theta-1 \right) \left( 2\, \cos^2 \theta+1 \right) c_{{00}}c_{{02}}\\ &+2\, \cos^2 \theta \left( \cos^2 \theta-1 \right) \left( 2\, \cos^2 \theta+1 \right) c_{{11}}c_{{00}} -\cos \theta\sin \theta\, {I_3}\, \frac{ 2\,{I_1}-{I_2}+ \cos^2 \theta{I_3}-2\, \cos^4 \theta{I_3} }{ 2\,{ I_1}-{I_2} } {c_{{01}}}^{2}\\ &- \cos^2 \theta \,\frac{ 2\,{I_1}+{I_3}+2\, \cos^2 \theta{I_3}-{I_2}-4\, \cos^4 \theta{I_3} }{ 2\,{ I_1}-{I_2} } c_{{10}}c_{{01}} -\left. 
\cos^3 \theta\sin \theta\left( 2\, \cos^2 \theta+1 \right) {c_{{10}}}^{2}\right]\exp(4\sin^2\theta),\\ \\ A_{1}(\theta)=&\left[- \cos^4 \theta\frac{ 2\,{I_1}+{I_3}+2 \, \cos^2 \theta{I_3}-{I_2}-4 \, \cos^4 \theta{I_3}}{2I_1-I_2} c_{{0 1}}c_{{30}} + \cos^4 \theta \left( 8\, \cos^4 \theta-3-4\, \cos^2 \theta \right) c_{{10}}c_{{21}}\right.\\ & +2\, \cos^2 \theta \left( -1+2\, \cos^2 \theta \right) \left( \cos^2 \theta-1 \right) ^{2} a_ {{02}}a_{{11}} + \cos^2 \theta \left( \cos^2 \theta-1 \right) \left( 4\, \cos^4 \theta-1- 2\, \cos^2 \theta \right) a_{{20}}c_{{02}}\\ &+\sin \theta\cos \theta\left( -1+2\, \cos^2 \theta \right) \left( \cos^2 \theta-1 \right) ^{2} {a_{{02}}}^{2} -\sin \theta\cos^3 \theta \left( \cos^2 \theta-1 \right) \left( -1+2\, \cos^2 \theta \right) {a_{{11}}}^{2}\\ &-2\, \cos^4 \theta \left( \cos^2 \theta-1 \right) \left( -1+2\, \cos^2 \theta \right) a_{{11}}a_{{20}} +{{\rm e}^{2\, \sin^2 \theta}} \cos^2 \theta \left( \cos^2 \theta-1 \right) \left( 8\, \cos^4 \theta-4\, \cos^2 \theta-1 \right) a_{{01}}c_{{21}}\\ &+2 \, \cos^2 \theta \left( -1+2\, \cos^2 \theta \right) \left( \cos^2 \theta-1 \right) ^{2} a_{{01}}a_{{12} } + \cos^2 \theta \left( -1+2\, \cos^2 \theta \right) b_{{10}}\\ &+ \cos^2 \theta \left( \cos^2 \theta-1 \right) \left( 4\, \cos^4 \theta-1-2\, \cos^2 \theta \right) c_{{10}}a_{{12}} +2 \, \cos^4 \theta \left( \cos^2 \theta-1 \right) \left( 2\, \cos^2 \theta+1 \right) c_{{11}}c_{{20}}\\ &-\sin \theta\cos \theta\left( 2\, \cos^2 \theta+1 \right) \left( \cos^2 \theta-1 \right) ^{2} {c_{{02}}}^{2} -2\, \cos^2 \theta \left( 2\, \cos^2 \theta+1\right) \left( \cos^2 \theta-1 \right) ^{ 2} c_{{11}}c_{{02}}\\ &+\sin \theta\cos^3 \theta \left( \cos^2 \theta-1 \right) \left( 2\, \cos^2 \theta+1 \right) {c_{{11}}}^{2} + \cos^2 \theta \left( \cos^2 \theta-1 \right) \left( 4\, \cos^4 \theta-1-2\, \cos^2 \theta \right) c_{{11}}a_{{11}}\\ &+ \cos^2 \theta \left( \cos^2 \theta-1 \right) \left( 4\, \cos^4 \theta-1-2\, \cos^2 \theta \right) a_{{02}}c_{{20}}\\ & - \sin \theta\cos \theta\left( \cos^2 \theta-1 \right) \frac{ 2\,{I_1}-4\, \cos^4 \theta {I_3}+2\, \cos^2 \theta{I_3}-{ I_2} }{2I_1-I_2} a_{{12}}c_{{01}}\\ &-\sin \theta\cos \theta\frac{ -4\, \cos^6 \theta{ I_3}-{I_2}+2\,{I_1}+2\, \cos^2 \theta{I_1}+ \cos^2 \theta{ I_3}+{I_3}-2\, \cos^4 \theta{ I_3}- \cos^2 \theta{I_2} }{{2I_1-I_2}} c_{{01}}c_{{03}}\\ &-\sin \theta \cos^5 \theta \left( 2\, \cos^2 \theta +1\right) {c_{{2 0}}}^{2} -\sin \theta \cos^3 \theta \frac{4\, \cos^2 \theta{I_3}+4 \,{I_1}-8\, \cos^4 \theta{I_3} +{I_3}-2\,{I_2}}{2I_1-I_2} c_{{01}}c_{{21}}\\ &+2\, \cos^2 \theta \left( \cos^4 \theta - \cos^2 \theta-1+2\, \cos^6 \theta \right) c_{{03}}c_{{10}} +\sin \theta \cos^5 \theta \left( -1+2\, \cos^2 \theta \right) {a_{{20}}}^{2}\\ &- \cos^4 \theta \left( 4\, \cos^4 \theta-1-2\, \cos^2 \theta \right) a_{{20}}c_{{20}} + \left( \cos^2 \theta- 1 \right) ^{2} \frac{2\,{I_1}-4\, \cos^4 \theta{I_3 }+2\, \cos^2 \theta{I_3}-{I_2} }{2I_1-I_2} a_{{03}}c_{{01}}\\ &- \left( \cos^2 \theta-1 \right) \left( 2\, \cos^2 \theta+1 \right) d_{{01}} +\sin \theta\cos \theta\left( 2\, \cos^2 \theta+1 \right) d_{{10}}+\sin \theta\cos \theta\left( -1+2\, \cos^2 \theta \right) b_{{01}}\\ &-\sin \theta\cos^3 \theta \left( 4\, \cos^4 \theta-1-2\, \cos^2 \theta \right) a_{{11}}c_{{20}} -\sin \theta\cos^3 \theta \left( 4\, \cos^4 \theta-1-2\, \cos^2 \theta \right) a_{{01}}c_{{30}}\\ &+ \left( \cos^2 \theta- 1 \right) \left( 4\, \cos^6 \theta+2\, \cos^4 \theta- \left( \cos \theta\right) ^ {2}-1 \right) a_{{01}}c_{{03 }}\\ &- \cos^2 \theta 
\left( \cos^2 \theta-1 \right) \frac{ 2\,{I_1}-4\, \cos^4 \theta {I_3}+2\, \cos^2 \theta{I_3}-{ I_2} }{2I_1-I_2} a_{{21}}c_{{01}}\\ &- \left( 4\, \cos^4 \theta-1-2\, \cos^2 \theta \right) \left( \cos^2 \theta-1 \right) ^{2} a_{{02}}c_{{02}} -\sin \theta\cos^3 \theta \left( 4\, \cos^4 \theta-1-2\, \cos^2 \theta \right) c_{{11}}a_{{20}}\\ &+ \cos^2 \theta \left( \cos^2 \theta-1 \right) \frac{ 2\,{I_1}+{I_3}+2\, \cos^2 \theta{I_3}-{I_2}-4\, \cos^4 \theta{I_3} }{2I_1-I_2} c_{{12}}c_{{01}}\\ &-\sin \theta\cos^3 \theta \left( 4\, \cos^4 \theta-1-2\, \cos^2 \theta \right) c_{{10}}a_{{21}} -2\,\sin \theta \cos^5 \theta \left( 2\, \cos^2 \theta+1 \right) c_{{10}}c_{{30}}\\ &+\sin \theta\cos \theta\left( \cos^2 \theta-1 \right) \left( 4\, \cos^4 \theta-1-2\, \cos^2 \theta \right) c_{{12}}a_{{01}} -2\,\sin \theta\cos^3 \theta \left( \cos^2 \theta-1 \right) \left( -1+2\, \cos^2 \theta \right) a_{{01} }a_{{21}}\\ &+2\,\sin \theta\cos \theta\left( -1+2\, \cos^2 \theta \right) \left( \cos^2 \theta-1 \right) ^{2} a_{{01}}a_{{03 }} -2\, \sin \theta \cos^3 \theta \left( \cos^2 \theta-1 \right) \left( -1+2\, \cos^2 \theta \right) a_{{02} }a_{{20}}\\ &+\sin \theta\cos \theta\left( \cos^2 \theta-1 \right) \left( 4\, \cos^4 \theta-1-2 \, \cos^2 \theta \right) c_{{11}}a_{{02}}\\ &+\sin \theta\cos \theta\left( \cos^2 \theta-1 \right) \left( 4\, \cos^4 \theta-1-2\, \cos^2 \theta \right) a_{{11}}c_{{02}}\\ &+\sin \theta\cos \theta\left( \cos^2 \theta-1 \right) \left( 4\, \cos^4 \theta-1-2\, \cos^2 \theta \right) c_{{10}}a_{{03} } +2\,\sin \theta\cos^3 \theta \left( \cos^2 \theta-1 \right) \left( 2\, \cos^2 \theta+1 \right) c_{{1 2}}c_{{10}}\\ &\left.+2\,\sin \theta\cos^3 \theta \left( \cos^2 \theta-1 \right) \left( 2\, \cos^2 \theta+1 \right) c_{{02}}c_{{20}}\right]{{\rm e}^{2\, \sin^2 \theta}}, \end{array} } $$ $$ \small{ \begin{array}{ll} \tilde{A}_1(\theta)=&\cos \theta\sin \theta \left( \cos^4 \theta- \cos^2 \theta+2\, \cos^8 \theta+3\, \cos^6 \theta-1 \right){c_{{03}}}^{2}\\ &- \left( 4\, \cos^6 \theta+2\, \cos^4 \theta- \cos^2 \theta-1 \right) \left( \cos^2 \theta-1 \right) ^{2} a_{{03}}c_{{03}}\\ &- \cos^5 \theta \sin \theta\left( 4\, \cos^4 \theta-1-2\, \cos^2 \theta \right) a_{{21}}c_{{30}} + \cos^6 \theta \left( 8\, \cos^4 \theta-3-4\, \cos^2 \theta \right) c_{{21}}c_{{30 }}\\ &+2\, \cos^4 \theta \left( \cos^4 \theta- \cos^2 \theta-1+2\, \cos^6 \theta \right) c_{{03}}c_{{30}}\\ &+ \cos^3 \theta\sin \theta\left( 2\, \cos^2 \theta+1 \right) d_{{30}} + \cos^3 \theta\sin \theta\left( -1+2\, \cos^2 \theta \right) b_{{21}} \\ &+2\,{c _{{21}}}^{2} \cos^5 \theta\sin \theta\left( 4\, \cos^4 \theta-1-2\, \cos^2 \theta \right) +{a_{{12}}}^{2} \sin^5 \theta \cos^3 \theta \left( -1+2\, \cos^2 \theta \right)\\ &- \cos^7 \theta\sin \theta\left( 2\, \cos^2 \theta+1 \right) {c_{{30}}}^{2} + \left( 2\, \cos^2 \theta+1 \right) \left( \cos^2 \theta-1 \right) ^{2} d_{{03}}\\ &+ \cos^3 \theta\sin \theta\left( 4\, \cos^4 \theta+8\, \cos^6 \theta-3-3\, \cos^2 \theta \right) c_{{03}}c_{{21}} - \cos^2 \theta \left( \cos^2 \theta-1 \right) \left( 2\, \cos^2 \theta+1 \right) d_{{21}}\\ &- \cos^2 \theta \left( \cos^2 \theta-1 \right) \left( -1+2\, \cos^2 \theta \right) b_{{12}}+ \cos^4 \theta \left( -1+2\, \cos^2 \theta \right) b_{{30}}\\ & -\cos \theta\sin \theta\left( -1+2\, \cos^2 \theta \right) \left( \cos^2 \theta-1 \right) ^{3} {a_{{03} }}^{2} + \cos^4 \theta \left( \cos^2 \theta-1 \right) \left( 4\, \cos^4 \theta-1-2 \, \cos^2 \theta \right) a_{{12}}c_{{ 30}}\\ &-2\, \cos^2 \theta \left( \cos^2 \theta-1 \right) \left( \cos^4 
\theta- \cos^2 \theta-1+2\, \cos^6 \theta \right) c_{{03}}c_{{12}} -\cos \theta\sin \theta\left( \cos^2 \theta-1 \right) \left( 2\, \cos^2 \theta+1 \right) d_ {{12}}\\ &-\cos \theta\sin \theta\left(\cos^2 \theta-1 \right) \left( -1+2\, \cos^2 \theta \right) b_{{03}} +2\, \cos^4 \theta \left( -1+2\, \cos^2 \theta \right) \left( \cos^2 \theta-1 \right) ^{2} a_{{12}}a_{{21}}\\ &+ \cos^4 \theta \left( \cos^2 \theta-1 \right) \left( 8\, \cos^4 \theta-4\, \cos^2 \theta-1 \right) a_{{21}}c_{{21}}\\ &- \cos^4 \theta \left( \cos^2 \theta-1 \right) \left( 8\, \cos^4 \theta-3-4\, \cos^2 \theta \right) c_{{12}}c_{{21}}\\ &-2\, \cos^2 \theta \left( -1+2\, \cos^2 \theta \right) \left( \cos^2 \theta-1 \right) ^{3} a_{{03}}a_{{12}} - \cos^2 \theta \left( 8\, \cos^4 \theta -4\, \cos^2 \theta-1 \right) \left( \cos^2 \theta-1 \right) ^{2} a_{{03}}c_{{21}}\\ &- \cos^2 \theta \left( 4\, \cos^4 \theta-1-2\, \cos^2 \theta \right) \left( \cos^2 \theta-1 \right) ^{2} a_{{12}}c_{{12}}\\ &+ \cos^2 \theta \left( \cos^2 \theta-1 \right) \left( 4\, \cos^6 \theta+2\, \cos^4 \theta- \cos^2 \theta -1 \right) a_{{21}}c_{{03}}\\ &- \cos^5 \theta\sin \theta\left( \cos^2 \theta-1 \right) \left( -1+2\, \cos^2 \theta \right) { a_{{21}}}^{2}\\ &- \cos^3 \theta\sin \theta\left( 2\, \cos^2 \theta+1 \right) \left( \cos^2 \theta-1 \right) ^{ 2} {c_{{12}}}^{2} +2\,\cos^5 \theta \sin \theta\left( \cos^2 \theta-1 \right) \left( 2\, \cos^2 \theta+1 \right) c_{{12}}c_{{30}}\\ &+2\, \cos^3 \theta\sin \theta\left( -1+ 2\, \cos^2 \theta \right) \left(\cos^2 \theta-1\right) ^{2} a_{{03}}a_{{21}}\\ &+ \cos^3 \theta\sin \theta\left( \cos^2 \theta-1 \right) \left( 4\, \cos^4 \theta-1-2\, \cos^2 \theta \right) a_{{03}}c_{{30 }}\\ &+ \cos^3 \theta\sin \theta\left( \cos^2 \theta-1 \right) \left( 8\, \cos^4 \theta-4\, \cos^2 \theta-1 \right) a_{{12}}c_{{21}}\\ &+ \cos^3 \theta\sin \theta\left( \cos^2 \theta-1 \right) \left( 4\, \cos^4 \theta-1-2\, \cos^2 \theta \right) a_{{21}}c_{{1 2}}\\ &-\cos \theta\sin \theta\left( 4\, \cos^4 \theta-1-2\, \cos^2 \theta \right) \left( \cos^2 \theta-1 \right) ^{2} a_{{03}}c_{{12}}\\ &+\cos \theta\sin \theta\left( \cos^2 \theta-1 \right) \left( 4\, \cos^6 \theta+2\, \cos^4 \theta- \cos^2 \theta-1 \right) a_{{12}}c_{{03}}. \end{array} } $$ \section{Appendix C} Here we present the explicit expressions of $s_5(\theta),s_4(\theta),s_3(\theta),s_2(\theta),s_1(\theta),\tilde{s}_1(\theta)$ that appear in relation \eqref{ss}. 
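These expressions are built from elementary integrals of the form $\int_0^{\theta} e^{k\sin^2 w}\sin^p w\,\cos^m w\,dw$ with $k\in\{1,2,3\}$. Purely as an illustration, and not as part of the derivation, such integrals can be evaluated numerically; in the following minimal Python sketch the helper name \texttt{J} and the parameters \texttt{k}, \texttt{p}, \texttt{m} are ours, and the two calls shown correspond to the two integrals entering $s_{5,1}(\theta)$ below.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def J(theta, k, p, m):
    """Numerical value of int_0^theta exp(k*sin(w)^2) * sin(w)^p * cos(w)^m dw."""
    integrand = lambda w: np.exp(k * np.sin(w)**2) * np.sin(w)**p * np.cos(w)**m
    value, _ = quad(integrand, 0.0, theta)
    return value

# The two integrals appearing in s_{5,1}(theta), evaluated at theta = pi/4:
print(J(np.pi/4, k=3, p=1, m=0), J(np.pi/4, k=3, p=1, m=2))
\end{verbatim}
Together with the quantities $I_1$, $I_2$, $I_3$ that enter these formulas, such a routine suffices to evaluate the functions $s_{i,j}(\theta)$ listed below numerically.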
$$ s_5(\theta)=s_{5,1}(\theta)c_{00}^2+s_{5,2}(\theta) a_{00}^2+s_{5,3}(\theta) a_{00}c_{00}, $$ with $$ \small{ \begin{array}{ll} s_{5,1}(\theta)=&-2\,{{\rm e}^{3\, \sin^2 \theta }} \left(\displaystyle{ \int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w {dw}+2\,\int _{0}^{\theta}\!{ {\rm e}^{3\, \sin^2 w }}\sin w \cos^2 w {dw} \right)\\ &\left( \sin \theta +2\,\sin \theta \cos^2 \theta \right),\\ \\ s_{5,2}(\theta)=&-2\,{{\rm e}^{3\, \sin^2 \theta }} \left( -\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\cos w {dw}+2\,\int _{0}^{\theta}\!{ {\rm e}^{3\, \sin^2 w }} \cos^3 w {dw} \right) \\ & \qquad \left( -\cos \theta +2\, \cos^3 \theta \right),\\ \\ s_{5,3}(\theta)=&-2\,{{\rm e}^{3\, \sin ^2 \theta }} \left(\displaystyle{ \int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w {dw}+2\,\int _{0}^{\theta}\!{ {\rm e}^{3\, \sin^2 w }}\sin w \cos^2 w {dw} \right)\\ & \qquad \left( -\cos \theta +2\, \cos^3 \theta \right)\\ & \qquad -2\,{{\rm e}^{3\, \sin^2 \theta }} \left( -\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\cos w {dw }+2\,\int _{0}^{\theta}\!{{\rm e}^{3\, \sin^2 w }} \cos^3 w {dw} \right)\\ &\qquad \left( \sin \theta +2\,\sin \theta \cos^2 \theta \right). \end{array} } $$ Additionally, we have $$ s_4(\theta)=s_{4,1}(\theta)a_{00}c_{10}-s_{4,2}(\theta)a_{00}a_{01}-s_{4,3}(\theta)c_{00}c_{01}-s_{4,4}(\theta)c_{00}c_{10} -s_{4,5}(\theta)a_{00}c_{01}-s_{4,6}(\theta)a_{01}c_{00}, $$ and $s_{4,i}(\theta)$ for $i=1\cdots,6$ are the following: $$ \small{ \begin{array}{ll} s_{4,1}(\theta)=&- \left( - \sin \theta {{\rm e}^{- \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\cos w {dw}+4\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw}\right.\\ & +2\, \sin \theta { {\rm e}^{- \sin^2 \theta }}\int _{0}^{ \theta}\!{{\rm e}^{3\, \sin^2 w }} \cos^3 w {dw}+4\, \cos^2 \theta \sin \theta {{\rm e}^{- \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{3\, \sin^2 w }} \cos^3 w {dw}\\ & +8\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}-2\, \cos^2 \theta \sin \theta {{\rm e}^{- \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\cos w {dw }\\ &-\left.4\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w}} \cos^3 w \sin w {dw}-2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w}}\sin w \cos w {d w} \right) {{\rm e}^{3\, \sin^2 \theta }}\cos \theta, \\ \\ s_{4,2}(\theta)=&- \left( \sin \theta {{\rm e}^{- \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\cos w {dw}-2\, \sin \theta {{\rm e}^{- \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }} \cos^3 w {dw} \right. 
\\ &-4\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw}-4\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}\\ &-2\, \cos^2 \theta \sin \theta {{\rm e}^{- \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{3\, \sin^2 w }}\cos w {dw}\\ &+4\, \cos^2 \theta \sin \theta {{\rm e}^{- \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }} \cos^3 w {dw }+2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw}\\ &\left.+8\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw} \right) { {\rm e}^{3\, \sin^2 \theta }}\cos \theta, \end{array} } $$ \small{ $$ \begin{array}{ll} s_{4,3}(\theta)=&- {{\rm e}^{2\, \sin^2 \theta }} \left( 8\,{{\rm e}^{ \sin^2 \theta }}{ I_1}\, \cos^2 \theta \sin \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw}\right.\\ &-2\,{I_2}\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^ {3\, \sin^2 w }}\sin w \cos^2 w {dw}+2\,{I_1} \,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w {dw}\\ &-2\,{{\rm e}^{ \sin^2 \theta }}{I_2}\, \sin \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw}-2\, \cos ^4 \theta{ I_3}\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w {dw}\\ &-4\,{{\rm e}^{ \sin^2 \theta }}{I_2}\, \cos^2 \theta \sin \theta \int _{0 }^{\theta}\!{{\rm e}^{2\, \sin^2 w }}{d w}+2\,{{\rm e}^{ \sin^2 \theta }}{I_3}\, \sin \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^2 w {dw}\\ &+4\,{I_1}\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w \cos^2 w {dw} -4\,{{\rm e}^{ \sin^2 \theta }}{I_3}\, \sin \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^4 w {dw}\\ &+2\, \cos^2 \theta {I_3}\,\displaystyle{\int _{0}^{ \theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w \cos^2 w {dw} +4\,{ {\rm e}^{ \sin^2 \theta }}{I_3}\, \cos^2 \theta \sin \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^2 w {dw}\\ &+4\,{ {\rm e}^{ \sin^2 \theta }}{I_1}\, \sin \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw}+ \cos^2 \theta {I_3}\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w {dw }\\ &-{I_2}\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w {dw}-4\, \cos ^4 \theta{I_3}\,\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{3\, \sin^2 w }}\sin w \cos^2 w {dw}\\ &\left. -8\,{{\rm e}^{ \sin^2 \theta }}{I_3}\, \cos^2 \theta \sin \theta \int _{0 }^{\theta}\!{{\rm e}^{2\, \sin^2 w }} \cos^4 w {dw} \right) \dfrac{1}{2\,{I_1}-{I_2}}, \end{array} $$ } \small{ $$ \begin{array}{ll} s_{4,4}(\theta)=&- \left( 8\,{{\rm e}^{ \sin^2 \theta }} \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}\right.\\ &+4\,{{\rm e}^ { \sin^2 \theta }} \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw} + 2\left(\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w \cos^2 w {dw}\right)\cos \theta \\ &+\left(\displaystyle{\int _{0}^{\theta}}\! 
{{\rm e}^{3\, \sin^2 w }}\sin w {dw}\right)\cos \theta +2\,{{\rm e}^{ \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw}\\ &+4\left(\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w \cos^2 w {dw} \right) \cos^3 \theta +4\,{{\rm e}^{ \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}\\ &\left.+2\left(\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w {dw}\right) \cos^3 w \right) {{\rm e}^{2\, \sin^2 \theta }} \sin \theta, \end{array} $$ } \small{ $$ \begin{array}{ll} s_{4,5}(\theta)=&- {{\rm e}^{2\, \sin^2 \theta }} \left( -2\,{I_1}\,\displaystyle{\displaystyle{\int _{0}^{\theta}}}\!{{\rm e}^{3\, \sin^2 w }}\cos w {dw}+2\, \cos ^4 \theta{I_3}\,\int _{0}^{ \theta}\!{{\rm e}^{3\, \sin^2 w }}\cos w {dw} \right.\\ &-2\,{I_2}\,\displaystyle{\displaystyle{\displaystyle{\int _{0}^{\theta}}}}\!{{\rm e}^{3\, \sin^2 w }} \cos^3 w {dw}+4\,{{\rm e}^{ \sin^2 \theta }}{I_3}\,\cos \theta \int _{0}^ {\theta}\!{{\rm e}^{2\, \sin^2 w }} \cos^4 w {dw}\\ &-2\,{{\rm e}^{ \sin^2 \theta }}{I_3}\,\cos \theta \displaystyle{\displaystyle{\displaystyle{\int _{0}^{\theta}}}}\!{{\rm e}^{2\, \sin^2 w }} \cos^2 w {dw}-4\, \cos ^4 \theta{I_3}\,\int _{0}^{ \theta}\!{{\rm e}^{3\, \sin^2 w }} \cos^3 w {dw}\\ &+2\,{{\rm e}^{ \sin^2 \theta }}{I_2}\,\cos \theta \displaystyle{\displaystyle{\displaystyle{\int _{0}^{\theta}}}}\!{{\rm e}^{2\, \sin^2 w }}{dw}+{I_2}\,\displaystyle{\displaystyle{\displaystyle{\int _{0}^{\theta}}}}\!{{\rm e}^{3\, \sin^2 w }}\cos w {dw}\\ &-4\,{ {\rm e}^{ \sin^2 \theta }}{I_2}\, \cos^3 w \displaystyle{\displaystyle{\displaystyle{\int _{0}^{\theta}}}}\!{ {\rm e}^{2\, \sin^2 w }}{dw}-8\,{ {\rm e}^{ \sin^2 \theta }}{I_3}\, \cos^3 w \displaystyle{\displaystyle{\displaystyle{\int _{0}^{\theta}}}}\!{ {\rm e}^{2\, \sin^2 w }} \cos^4 w {dw}\\ &+4\,{{\rm e}^{ \sin^2 \theta }}{I_3}\, \cos^3 w \displaystyle{\displaystyle{\displaystyle{\int _{0}^{\theta}}}}\!{{\rm e}^{2\, \sin^2 w }} \cos^2 w {dw}+4\,{I_1}\,\displaystyle{\displaystyle{\displaystyle{\int _{0}^{\theta}}}}\!{{\rm e}^{3\, \sin^2 w }} \cos^3 w {dw}\\ &+8\,{{\rm e}^{ \sin^2 \theta }}{I_1}\, \cos^3 w \displaystyle{\displaystyle{\displaystyle{\int _{0}^{\theta}}}}\!{{\rm e}^{2\, \sin^2 w }}{dw} +2\, \cos^2 \theta {I_3}\,\displaystyle{\displaystyle{\displaystyle{\int _{0}^{\theta}}}}\!{{\rm e}^{3\, \sin^2 w }} \cos^3 w {dw}\\ &-\left. 
\cos^2 \theta {I_3}\,\displaystyle{\displaystyle{\displaystyle{\int _{0}^{\theta}}}}\!{{\rm e}^{3\, \sin^2 w }}\cos w {dw}-4\,{{\rm e}^{ \sin^2 \theta }}{I_1}\,\cos \theta \displaystyle{\displaystyle{\displaystyle{\int _{0}^{\theta}}}}\!{{\rm e}^{2\, \sin^2 w }}{dw} \right) \dfrac{1}{2\,{I_1}-{I_2}}, \end{array} $$ } $$ \small{ \begin{array}{ll} s_{4,6}(\theta)=&- \left( 4\,\displaystyle{\displaystyle{\displaystyle{\int _{0}^{\theta}}}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}+4\, \cos^3 w {{\rm e}^{- \sin^2 \theta }}\displaystyle{\displaystyle{\displaystyle{\int _{0}^{\theta}}}}\!{{\rm e}^{3\, \sin^2 w }}\sin w \cos^2 w {dw}\right.\\ & +8\, \cos^2 \theta \displaystyle{\displaystyle{\displaystyle{\int _{0}^{\theta}}}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}-4\, \cos^2 \theta \int _{0 }^{\theta}\!{{\rm e}^{2\, \sin^2 w }} \sin w \cos w {dw}\\ &+2\, \cos^3 w {{\rm e}^{- \sin^2 \theta }}\displaystyle{\displaystyle{\displaystyle{\int _{0}^{\theta}}}}\!{{\rm e}^{3\, \sin^2 w }}\sin w {dw}-\cos \theta {{\rm e}^{- \sin^2 \theta }}\displaystyle{\displaystyle{\displaystyle{\int _{0}^{\theta}}}}\!{{\rm e}^{3\, \sin^2 w }}\sin w {dw}\\ & -\left. 2\,\cos \theta {{\rm e}^{- \sin^2 \theta }} \displaystyle{\displaystyle{\displaystyle{\int _{0}^{\theta}}}}\!{{\rm e}^{3\, \sin^2 w }}\sin w \cos^2 w {dw}-2\,\displaystyle{\displaystyle{\displaystyle{\int _{0}^{\theta}} }} \!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw} \right) {{\rm e}^{3\, \sin^2 \theta}} \sin \theta. \end{array} } $$ Now we have $$ \begin{array}{l} s_3(\theta)=s_{3,1}(\theta) a_{00}c_{02} +s_{3,2}(\theta)a_{02}c_{00} +s_{3,3}(\theta)a_{01}c_{01} +s_{3,4}(\theta)a_{00}a_{11} +s_{3,5}(\theta)a_{00}c_{20}\\ +s_{3,6}(\theta)c_{00}c_{11} +s_{3,7}(\theta)c_{01}c_{10} +s_{3,8}(\theta)a_{00}a_{02} +s_{3,9}(\theta)a_{00}a_{20}\\ +s_{3,10}(\theta)a_{00}c_{11} +s_{3,11}(\theta)a_{01}c_{10} +s_{3,12}(\theta)a_{11}c_{00} +s_{3,13}(\theta)c_{00}c_{02} +s_{3,14}(\theta)c_{00}c_{20}\\ +s_{3,15}(\theta)a_{01}^2 +s_{3,16}(\theta)c_{10}^2 +s_{3,17}(\theta)c_{01}^2 +s_{3,18}(\theta)a_{20}c_{00}, \end{array} $$ and $s_{3,i}(\theta)$ for $i=1\cdots,18$ satisfying the following expressions: \small{ $$ \begin{array}{ll} s_{3,1}(\theta)=&-2\,{{\rm e}^{3\, \sin^2 \theta }}\cos \theta\left[\left( 4\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \sin^3 w \cos^2 w {dw}\right) \cos^2 \theta \right.\\ &\left.-2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \sin^3 w \cos^2 w {dw}+ 2\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \sin^3 w {dw}\right) \cos^2 \theta -\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \sin^3 w {dw} \right],\\ \\ s_{3,2}(\theta)=&2\,{{\rm e}^{3\, \sin^2 \theta}}\sin \theta \left[ 2\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }}\cos w {dw}\right) \cos^2 \theta \right.\\ &+2\,\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w }} \cos^5 w {dw}+4\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^5 w {dw}\right) \cos^2 \theta +\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w ^{2}}}\cos w {dw}\\ &-\left.6\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^3 w {dw}\right) \cos^2 \theta -3\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^3 w {dw} \right),\\ \\ s_{3,3}(\theta)=&- {{\rm e}^{2\, \sin^2 \theta }} \left[ -2\,{I_1}\,\cos \theta\sin \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw} +4\,{I_1}\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw} 
\right.\\ &-2\,{I_1}\,\displaystyle{\int _{0 }^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \sin w \cos w {dw}+4\,{I_1}\,\sin \theta \left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw} \right)\cos^3 \theta \\ & +2\,{I_3}\,\cos \theta\sin \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^4 w {dw}\\ &+2\,{ I_3}\,\sin \theta \left( \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^2 w {dw}\right) \cos^3 \theta +2\,{I_3}\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}\right) \cos^2 \theta\\ &-2\,{I_2}\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}+{I_2}\,\cos \theta\sin \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw}\\ &-4\,{I_3}\,\sin \theta \left( \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^4 w{dw}\right) \cos^3 \theta-{I_3}\,\cos \theta\sin \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^2 w {dw}\\ &+2\,{I_3}\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2 \, \sin^2 w }}\sin w \cos w {dw} \right)\cos^4 \theta -2\,{I_2}\,\sin \theta \left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ 2\, \sin^2 w }}{dw}\right) \cos^3 \theta\\ &-4\,{I_3}\,\left(\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw} \right)\cos^3 w \cos^4 \theta-{I_3}\,\left(\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw}\right) \cos^2 \theta \\ & \left.+{I_2}\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw} \right] \dfrac{1}{{2\,{I_1}-{I_2}}}, \\ \\ s_{3,4}(\theta)=&2\,{{\rm e}^{3\, \sin^2 \theta }}\cos \theta\left[\left( 2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^2 w \sin w {dw}\right) \cos^2 \theta \right.\\ & -\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^2 w \sin w {dw}+2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^ { \sin^2 w }} \cos^4 w \sin w {dw}\\ &\left.-4\,\left(\displaystyle{\int _{0}^{\theta}}\! 
{{\rm e}^{ \sin^2 w }} \cos^4 w \sin w {dw} \right)\cos^2 \theta \right],\\ \\ \end{array} $$ } $$ \small{ \begin{array}{ll} s_{3,5}(\theta)=&-2\,{{\rm e}^{3\, \sin^2 \theta }}\cos \theta\left[ 2\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^2 w \sin w {dw}\right) \cos^2 \theta \right.\\
&+4\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^4 w \sin w {dw}\right) \cos^2 \theta \\
&-\left.\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^2 w \sin w {dw}-2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^ { \sin^2 w }} \cos^4 w \sin w {dw} \right],\\ \\
s_{3,6}(\theta)=&-2\,{{\rm e}^{3\, \sin^2 \theta }}\sin \theta \left[ \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }}\cos w {dw} -2\,\displaystyle{\int _{0 }^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^5 w {dw}\right.\\
&+\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w }} \cos^3 w {dw}+2\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }}\cos w {dw }\right) \cos^2 \theta \\
& +\left.2\,\left(\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w }} \cos^3 w {dw}\right) \cos^2 \theta -4\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^5 w {dw }\right) \cos^2 \theta \right],\\ \\
s_{3,7}(\theta)=&- {{\rm e}^{2\, \sin^2 \theta }} \left[ 2\,{I_1}\,\cos \theta\sin \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw} +4\,{I_1}\,\sin \theta \left(\displaystyle{\int _{0}^{ \theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw}\right) \cos^3 \theta \right.\\
&+2\,{I_1}\,\displaystyle{\int _{0} ^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \sin w \cos w {dw}+4\,{I_1}\,\displaystyle{\int _{0 }^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}\\
&+ {I_3}\,\cos \theta\sin \theta \displaystyle{\int _{0} ^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^2 w {dw}-{I_2}\,\cos \theta\sin \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }}{dw}\\
&-2\,{I_3}\,\cos \theta\sin \theta \displaystyle{\int _{0}^{ \theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^4 w {dw} -{I_2}\,\int _{0}^{ \theta}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw}\\
&+{I_3}\,\left(\displaystyle{\int _{0}^{ \theta}}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw}\right) \cos^2 \theta -2\,{I_2}\,\sin \theta \left(\displaystyle{\int _{0 }^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{d w} \right) \cos^3 \theta\\
&-2\,{I_2}\,\displaystyle{\int _{0 }^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}- 4\,{I_3}\,\sin \theta \left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^4 w {dw} \right) \cos^3 \theta\\
&+2\,{I_3}\,\sin \theta \left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ 2\, \sin^2 w }} \cos^2 w {dw}\right) \cos^3 \theta +2\,{I_3}\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}\right) \cos^2 \theta\\
&\left.-4\,{I_3}\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}\right) \cos^4 \theta-2\,{I_3}\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {d w}\right) \cos^4 \theta \right] \dfrac{1}{2\,{I_1}-{I_2}},\\ \\
s_{3,8}(\theta)=&-2\,{{\rm e}^{3\, \sin^2 \theta }}\cos \theta\left[ -3\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^3 w {dw}-2\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }}\cos w {dw}\right) \cos^2 \theta \right.\\
&+6\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^2 w {dw} \right)\cos^2 \theta +2\, \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^5 w {dw}\\
&+\left.\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w }}\cos w {dw}-4\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^5 w {dw } \right)\cos^2 \theta \right],\\ \\ \end{array} } $$ \small{ $$ \begin{array}{ll} s_{3,9}(\theta)=&2\,{{\rm e}^{3\, \sin^2 \theta }}\cos \theta\left[ 2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }}\cos^5 w {dw}-4\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^5 w {dw } \right)\cos^2 \theta \right.\\
&-\left.\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w }} \cos^3 w {dw}+2\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^3 w {dw}\right) \cos^2 \theta\right],\\ \\
s_{3,10}(\theta)=&-2\,{{\rm e}^{3\, \sin^2 \theta }}\cos \theta\left[ 2\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^3 w {dw}\right) \cos^2 \theta \right.\\
&+2\, \left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }}\cos w {dw}\right) \cos^2 \theta -\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^3 w {dw }\\
&\left. -4\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^5 w {dw}\right) \cos^2 \theta -\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }}\cos w {dw }+2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^5 w {dw} \right],\\
s_{3,11}(\theta)=& 2\,{{\rm e}^{2\, \sin^2 \theta }}\sin \theta \cos \theta\left[ -4\,\left(\displaystyle{\int _{0}^{ \theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}\right) \cos^2 \theta \right.\\
&+\left.\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw} \right],\\ \\
s_{3,12}(\theta)=&2\,{{\rm e}^{3\, \sin^2 \theta }}\sin \theta \left[ 2\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^2 w \sin w {dw}\right) \cos^2 \theta \right.\\
&+\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^2 w \sin w {dw}-2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^ { \sin^2 w }} \cos^4 w \sin w {dw}\\
&\left.-4\,\left(\displaystyle{\int _{0}^{\theta}}\! {{\rm e}^{ \sin^2 w }} \cos^4 w \sin w {dw}\right) \cos^2 \theta \right],\\ \\
s_{3,13}(\theta)=&-2\,{{\rm e}^{3\, \sin^2 \theta }}\sin \theta \left[ 2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \sin^3 w \cos^2 w {dw}\right.\\
&+\displaystyle{\int _{0} ^{\theta}}\!{{\rm e}^{ \sin^2 w }} \sin^3 w {dw}+4\,\left(\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w }} \sin^3 w \cos^2 w {dw} \right)\cos^2 \theta \\
&+\left.2\,\left(\displaystyle{\int _{0}^{ \theta}}\!{{\rm e}^{ \sin^2 w }} \sin^3 w {dw}\right) \cos^2 \theta \right],\\ \\
s_{3,14}(\theta)=& -2\,{{\rm e}^{3\, \sin^2 \theta }}\sin \theta \left[ 2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^4 w \sin w {dw}\right.\\
&+4\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^ { \sin^2 w }} \cos^4 w \sin w {dw} \right)\cos^2 \theta +\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^2 w \sin w {dw}\\
& +\left.
2\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^ { \sin^2 w }} \cos^2 w \sin w {dw} \right)\cos^2 \theta \right], \end{array} $$ } \small{ $$ \begin{array}{ll} s_{3,15}(\theta)=&{{\rm e}^{2\, \sin^2 \theta }}\sin \theta \cos \theta\left[\left( -4\,\displaystyle{\int _{0}^{ \theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}\right) \cos^2 \theta \right.\\ & -\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw}+2\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw} \right)\cos^2 \theta\\ &+\left.2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w}} \cos^3 w \sin w {dw} \right),\\ \\ s_{3,16}(\theta)=&-{{\rm e}^{2\, \sin^2 \theta }}\sin \theta \cos \theta\left[ 4\,\left(\displaystyle{\int _{0}^{ \theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}\right) \cos^2 \theta \right.\\ &+ \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw}+2\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw}\right) \cos^2 \theta \\ &+\left.2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w}} \cos^3 w \sin w {dw} \right],\\ \\ s_{3,17}(\theta)=&- {{\rm e}^{2\, \sin^2 \theta }} \left[ -4\,{I_1}\,{I_2}\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw} -2\,{{I_3}}^{2} \left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^4 w {dw}\right) \cos^2 \theta \right.\\ & -{I_2}\,{I_3}\,\displaystyle{\int _{0}^{ \theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^2 w {dw} +{{I_2}}^{2}\displaystyle{\int _{0 }^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{d w}\\ &+2\,{I_1}\,{I_3}\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^2 w {dw} +2\,{I_2}\,{I_3}\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^ {2\, \sin^2 w }} \cos^4 w {dw}\\ &+2\,{I_1}\,{I_3}\,\left(\displaystyle{\int _{0}^{\theta}}\! {{\rm e}^{2\, \sin^2 w }}{dw}\right) \cos^2 \theta +{{I_3}}^{2}\left(\displaystyle{\int _{0}^{\theta}} \!{{\rm e}^{2\, \sin^2 w }} \cos^2 w {dw} \right)\cos^2 \theta \\ &-{I_2}\,{I_3}\,\left(\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }}{dw} \right)\cos^2 \theta +4\,{{I_1}}^{2}\int _{0}^{ \theta}\!{{\rm e}^{2\, \sin^2 w }}{dw}- 4\,{I_1}\,{I_3}\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^4 w {dw}\\ & -2\,{{I_3}}^{2}\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^2 w {dw} \right) \cos^4 \theta +2\,{I_2}\,{I_3}\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw}\right) \cos^4 \theta \\ &\left. -4\,{I_1}\,{I_3}\,\left(\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }}{dw}\right) \cos^4 \theta +4\,{{I_3}}^{2}\left(\displaystyle{\int _{0}^{ \theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^4 w {dw}\right) \cos^4 \theta \right] \dfrac{1}{ \left( -{I_2}+2\,{I_1}\right) ^{2}},\\ \\ s_{3,18}(\theta)=&2\,{{\rm e}^{3\, \sin^2 \theta }}\sin \theta \left[ -2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^5 w {dw}+\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^3 w {dw}\right.\\ &+2\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^3 w {dw} \right)\cos^2 \theta -\left. 4\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^5 w {dw} \right) \cos^2 \theta \right]. 
\end{array} $$ } Now we have $$ \small{ \begin{array}{ll} s_2(\theta)=&s_{2,1}(\theta)\,a_{{00}}c_{{30}}+s_{2,2}(\theta)\,a_{{20}}c_{{10}}+s_{2,3}(\theta)\,a_{{00}}c_{{12}}+s_{2,4}(\theta)\,a_{{02}}c_{{10}} +s_{2,5}(\theta)\,a_{{12}}c_{{00}}+s_{2,6}(\theta)\,a_{{00}}a_{{21}}\\
&+s_{2,7}(\theta)\,a_{{01}}a_{{20}}+s_{2,8}(\theta)\,a_{{01}}c_{{11}}+s_{2,9}(\theta)\,a_{{11}}c_{{01}}+s_{2,10}(\theta)\,c_{{00}}c_{{21}}+s_{2,11}(\theta)\,c_{{01}}c_{{20}}+s_{2,12}(\theta)\,c_{{10}}c_{{11}}\\
&+s_{2,13}(\theta)\,a_{{00}}a_{{03}}+ s_{2,14}(\theta)\,a_{{01}}a_{{02}}+s_{2,15}(\theta)\,c_{{00}}c_{{03}}+s_{2,16}(\theta)\,c_{{01}}c_{{02}}+s_{2,17}(\theta)\,a_{{00}}a_{{12}}+s_{2,18}(\theta)\,a_{{03}}c_{{00}} \\
&+s_{2,19}(\theta)\,a_{{11}}c_{{10}}+s_{2,20}(\theta)\,a_{{01}}c_{{02}}+s_{2,21}(\theta)\,a_{{01}}c_{{20}}+s_{2,22}(\theta)\,a_{{01}}a_{{11}}+s_{2,23}(\theta)\,a_{{00}}c_{{03}} \\
&+s_{2,24}(\theta)\,a_{{00}}c_{{21}}+s_{2,25}(\theta)\,a_{{02}}c_{{01}}+s_{2,26}(\theta)\,a_{{20}}c_{{01}}+s_{2,27}(\theta)\,a_{{21}}c_{{00}}\\
&+s_{2,28}(\theta)\,c_{{00}}c_{{12}} +s_{2,29}(\theta)\,c_{{00}}c_{{30}}+s_{2,30}(\theta)\,c_{{01}}c_{{11}}+s_{2,31}(\theta)\,c_{{02}}c_{{10}}+s_{2,32}(\theta)\,c_{{10}}c_{{20}}, \end{array} } $$ and $s_{2,i}(\theta)$ for $i=1,\cdots,32$ are given by the following expressions: $$ \small{ \begin{array}{ll} s_{2,1}(\theta)=&\dfrac{1}{6}\, \left( 12\, \cos^2 \theta \sin \theta {{\rm e}^{-3\, \sin^2 \theta}}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }} \cos^3 w {dw }\right. -12\, \cos^4 \theta \sin \theta \left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\cos w{dw}\right){{\rm e}^{-3\, \sin^2 \theta}}\\
&+2\, \cos^6 \theta -14\, \cos^2 \theta -6\, \cos^2 \theta \sin \theta \left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\cos w{dw}\right){{\rm e}^{-3\, \sin^2 \theta}}\\
&+24\, \cos^4 \theta \sin \theta {{\rm e}^{-3\, \sin^2 \theta}}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3 \, \sin^2 w }} \cos^3 w {dw}-3\, \cos^4 \theta +7 +\left.8\, \cos^8 \theta \right) {{\rm e}^{3\, \sin^2 \theta}} \cos \theta,\\ \\
s_{2,2}(\theta)=&\left( -2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^5 w{dw}+\displaystyle{\int _{0 }^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^3 w {dw}\right.\\
&+\left.2\,\left(\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w }} \cos^3 w {dw}\right) \cos^2 \theta -4\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^5 w{dw }\right) \cos^2 \theta \right) {{\rm e}^{2\, \sin^2 \theta}}\sin \theta\cos \theta, \end{array} } $$ $$ \small{ \begin{array}{ll} s_{2,3}(\theta)=&-\dfrac{1}{6}\, \left( -12\,\sin \theta {{\rm e}^{-3\, \sin^2 \theta}}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }} \cos^3 w {dw} -9\, \cos^4 \theta \right.\\
&-12\, \cos^4 \theta \sin \theta \left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\cos w{dw}\right){{\rm e}^{- 3\, \sin^2 \theta}} +6\, \cos^2 \theta \sin \theta \left(\displaystyle{\int _{0}^{ \theta}}\!{{\rm e}^{3\, \sin^2 w }}\cos w{dw}\right){{\rm e}^{-3\, \sin^2 \theta}}\\
&-10\, \cos^6 \theta -12 \, \cos^2 \theta \sin \theta {{\rm e}^{-3\, \sin^2 \theta} }\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }} \cos^3 w {dw} +6\,\sin \theta \left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\cos w{dw}\right){{\rm e}^{- 3\, \sin^2 \theta}}\\
&+8\, \cos^8 \theta +16\, \cos^2 \theta +\left.24\, \cos^4 \theta \sin \theta {{\rm e}^{-3\, \sin^2 \theta}}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}
\cos^3 w {dw} -5 \right) {{\rm e}^{3\, \sin^2 \theta}}\cos \theta,\\ \\ s_{2,4}(\theta)=&- \left( -4\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^5 w{dw }\right) \cos^2 \theta +3\,\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w }} \cos^3 w {dw} \right. +6\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^3 w {dw} \right) \cos^2 \theta \\ &-\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }}\cos w{dw} -\left.2\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }}\cos w{dw } \right)\cos^2 \theta -2\,\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w }} \cos^5 w{dw} \right) {{\rm e}^{2\, \sin^2 \theta}}\sin \theta \cos \theta, \end{array} } $$ $$ \small{ \begin{array}{ll} s_{2,5}(\theta)=&-\dfrac{1}{3}\, \left( 6\, \cos^4 \theta \displaystyle{\int _{0 }^{\theta}}\!{{\rm e}^{3\, \sin^2 w }} \sin w {dw}+4\,{{\rm e}^{3\, \sin^2 \theta}} \cos^7 \theta \right. -18\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w \cos^2 w {dw} \right)\cos^2 \theta\\ & -6\,{ {\rm e}^{3\, \sin^2 \theta}} \cos^5 \theta -9\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ 3\, \sin^2 w }}\sin w {dw} \right)\cos^2 \theta +2\,{{\rm e}^{3\, \sin^2 \theta}}\cos \theta +3\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w {dw}\\ &+\left.12\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w \cos^2 w {dw}+6\,\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{3\, \sin^2 w }}\sin w \cos^2 w {dw} \right) \cos^2 \theta, \end{array} } $$ $$ \small{ \begin{array}{ll} s_{2,6}(\theta)=&\dfrac{1}{6}\, \left( -12\, \cos^2 \theta \sin \theta {{\rm e}^{-3\, \sin^2 \theta}} \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }} \cos^3 w {dw }+1-2\, \cos^2 \theta \right.\\ &-10\, \cos^6 \theta -12\, \cos^4 \theta \sin \theta \left(\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{3\, \sin^2 w }}\cos w{dw}\right){{\rm e}^{-3\, \sin^2 \theta }}+3\, \cos^4 \theta \\ &+24\, \cos^4 \theta \sin \theta { {\rm e}^{-3\, \sin^2 \theta}}\displaystyle{\int _{0} ^{\theta}}\!{{\rm e}^{3\, \sin^2 w }} \cos^3 w {dw}\\ &+\left.6\, \cos^2 \theta \sin \theta \left(\displaystyle{\int _{0}^{ \theta}}\!{{\rm e}^{3\, \sin^2 w }}\cos w{dw}\right){{\rm e}^{-3\, \sin^2 \theta}} +8\, \cos^8 \theta \right) {{\rm e}^{3\, \sin^2 \theta}} \cos \theta,\\ \\ s_{2,7}(\theta)=&\left( 2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^5 w{dw} -4\, \left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^5 w{dw}\right) \cos^2 \theta \right.\\ &-\left.\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^3 w {dw} +2\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^3 w {dw } \right)\cos^2 \theta \right) {{\rm e}^{2\, \sin^2 \theta}}\sin \theta \cos \theta,\\ \\ s_{2,8}(\theta)=&- \left( 2\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^3 w {dw}\right) \cos^2 \theta +2\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }}\cos w{dw }\right) \cos^2 \theta \right. 
-\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w }} \cos^3 w {dw}\\ &-4\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^5 w{dw}\right) \cos^2 \theta \left.-\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }}\cos w{dw}+2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^5 w{dw} \right) {{\rm e}^{2\, \sin^2 \theta}}\sin \theta\cos \theta,\\ \\ s_{2,9}(\theta)=& {{\rm e}^{2\, \sin^2 \theta}} \left( -{I_2}\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^2 w \sin w {dw}+2\,{I_1}\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^2 w \sin w {dw}\right.\\ &-4\,{I_1}\,\displaystyle{\int _{0 }^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^4 w \sin w {dw}- 2\,{I_3}\, \cos^2 \theta \int _{0}^ {\theta}\!{{\rm e}^{ \sin^2 w }} \cos^4 w \sin w {dw}\\ &+ {I_3}\, \cos^2 \theta \displaystyle{\int _{0}^{ \theta}}\!{{\rm e}^{ \sin^2 w }} \cos^2 w \sin w {dw}-2\,{I_3}\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\! {{\rm e}^{ \sin^2 w }} \cos^2 w \sin w {dw}\\ &\left.+4\,{I_3} \, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w }} \cos^4 w \sin w {dw} +2\,{I_2} \,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^4 w \sin w {dw} \right) \dfrac{1}{2\,{I_1}-{I_2}},\\ \\ s_{2,10}(\theta)=&-\dfrac{1}{3}\, \cos^2 \theta \left( -6\,\left(\displaystyle{\int _ {0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }} \sin w {dw}\right) \cos^2 \theta-2\,{{\rm e}^{3\, \sin^2 \theta}} \cos \theta\right. -3\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w {dw}\\ &-6\, \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w \cos^2 w {dw} -12\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w \cos^2 w {dw}\right) \cos^2 \theta-6\,{{\rm e}^{3\, \sin^2 \theta}} \cos^3 \theta \\ &+8\,{{\rm e}^{3\, \sin^2 \theta}} \cos^7 \theta +12\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w {dw} +\left.24\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w \cos^2 w {dw} \right), \end{array} } $$ $$ \small{ \begin{array}{ll} s_{2,11}(\theta)=&- {{\rm e}^{2\, \sin^2 \theta}} \left( 2\,{I_1}\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^2 w \sin w {dw} +2\,{I_3}\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^4 w \sin w {dw}\right.\\ &-{I_2}\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^2 w \sin w {dw}+{I_3}\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^2 w \sin w {dw}\\ &-2\,{I_3}\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w }} \cos^2 w \sin w {dw}-4\,{I_3} \, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w }} \cos^4 w \sin w {dw}\\ &-\left.2\,{I_2} \,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^4 w \sin w {dw}+4\,{I_1}\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^4 w \sin w {dw} \right) \dfrac{1}{2\,{I_1}-{I_2}},\\ \\ s_{2,12}(\theta)=&- \left( \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }}\cos w{dw}-2\,\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w }} \cos^5 w{dw}+\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^3 w {dw}\right.\\ &+2\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }}\cos w{dw}\right) \cos^2 \theta +2\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^3 w {dw}\right) \cos^2 \theta\\ &-\left.4\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^5 
w{dw}\right) \cos^2 \theta \right) {{\rm e}^{2\, \sin^2 \theta}}\cos \theta \sin \theta, \\ \\ s_{2,13}(\theta)=&-\dfrac{1}{6}\, \left( -12\, \cos^4 \theta \sin \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\cos w{dw} -6\,\sin \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\cos w{dw}\right.\\ &+24\, \cos^4 \theta \sin \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }} \cos^3 w {dw}+12\, \sin \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }} \cos^3 w {dw}\\ &-36\, \cos^2 \theta \sin \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }} \cos^3 w {dw}-22\,{{\rm e}^{3\, \sin^2 \theta}} \cos^6 \theta +{{\rm e} ^{3\, \sin^2 \theta}}\\ &-8\,{{\rm e}^{3\, \sin^2 \theta}} \cos^2 \theta +8\,{{\rm e}^{3\, \sin^2 \theta}} \cos^8 \theta +18\,\sin \theta \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\cos w{dw}\\ &+\left.21\,{{\rm e}^{3\, \sin^2 \theta}} \cos^4 \theta \right) \cos \theta,\\ \\ s_{2,14}(\theta)=&- \left( -3\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^3 w {dw }-2\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }}\cos w{dw} \right)\cos^2 \theta \right. +6\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^3 w {dw}\right) \cos^2 \theta\\ &+2\, \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^5 w{dw}+\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w }}\cos w{dw} \left.-4\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^5 w{dw }\right) \cos^2 \theta \right) {{\rm e}^{2\, \sin^2 \theta}}\cos \theta \sin \theta, \\ \\ s_{2,15}(\theta)=&-2\,{{\rm e}^{3\, \sin^2 \theta}} \cos^7 \theta- \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w {dw} -2\, \cos^6 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{3\, \sin^2 w }}\sin w {dw}\\ &-2\,{{\rm e}^{3\, \sin^2 \theta}} \cos^5 \theta -2\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{3\, \sin^2 w }}\sin w \cos^2 w {dw}\\ &+\dfrac{10}{3}\,{ {\rm e}^{3\, \sin^2 \theta}} \cos^3 \theta -4\, \cos^6 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w \cos^2 w {dw}-\dfrac{4}{3}\, \cos^9 \theta {{\rm e}^{3\, \sin^2 \theta}}\\ &+2\,{{\rm e}^{3\, \sin^2 \theta}}\cos \theta +\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3 \, \sin^2 w }}\sin w { dw}+2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w \cos^2 w {dw}, \end{array} } $$ $$ \small{ \begin{array}{ll} s_{2,16}(\theta)=&- {{\rm e}^{2\, \sin^2 \theta}} \left( -2\,{I_2}\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \sin^3 w \cos^2 w {dw}\right. +2\,{I_3}\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\! {{\rm e}^{ \sin^2 w }} \sin^3 w \cos^2 w {dw}\\ &+2\,{I_1}\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \sin^3 w {dw}+4\,{I_1}\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \sin^3 w \cos^2 w {dw} -4\,{I_3}\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\! 
{{\rm e}^{ \sin^2 w }} \sin^3 w \cos^2 w {dw}\\ &+{I_3}\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \sin^3 w {dw}-{I_2}\,\int _{0 }^{\theta}\!{{\rm e}^{ \sin^2 w }} \sin^3 w {dw}-\left.2\,{I_3}\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \sin^3 w {dw} \right) \dfrac{1}{{2\,{I_1}-{I_2}}}, \end{array} } $$ $$ \small{ \begin{array}{ll} s_{2,17}(\theta)=&\dfrac{1}{3}\, \left( -6\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }} \cos^3 w {dw }+3\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\cos w{dw} +2\,{{\rm e}^{3\, \sin^2 \theta}} \cos^2 \theta \sin \theta \right.\\ &+6\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\cos w{dw}+4\,{ {\rm e}^{3\, \sin^2 \theta}} \cos^6 \theta \sin \theta -6\,{ {\rm e}^{3\, \sin^2 \theta}} \cos^4 \theta \sin \theta \\ &-9\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{3\, \sin^2 w }}\cos w{dw}-12\, \cos^4 \theta \int _ {0}^{\theta}\!{{\rm e}^{3\, \sin^2 w }} \cos^3 w {dw}+\left.18\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }} \cos^3 w {dw} \right) \cos^2 \theta, \\ \\ s_{2,18}(\theta)=&-\dfrac{1}{6}\, \left( 8\,{{\rm e}^{3\, \sin^2 \theta}} \cos^8 \theta -14\,{ {\rm e}^{3\, \sin^2 \theta}} \cos^6 \theta -{{\rm e}^{3\, \sin^2 \theta}}\right. +12\, \cos^5 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w {dw}\\ &+24\, \cos^5 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{3\, \sin^2 w }}\sin w \cos^2 w {dw} +4\,{{\rm e}^{ 3\, \sin^2 \theta}} \cos^2 \theta -36\, \cos^3 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w \cos^2 w {dw}\\ & -18\, \cos^3 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w {dw} +6\,\cos \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w {dw}\\ &+\left.12\,\cos \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w \cos^2 w {dw}+3\,{{\rm e}^{3\, \sin^2 \theta}} \cos^4 \theta \right) \sin \theta, \\ \end{array} } $$ $$ \small{ \begin{array}{ll} s_{2,19}(\theta)=&\left( 2\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^2 w \sin w {dw}\right) \cos^2 \theta \right. +\displaystyle{\int _{0} ^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^2 w \sin w {dw}\\ &-\left. 2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^4 w \sin w {dw} -4\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^4 w \sin w {dw}\right) \cos^2 \theta \right){{\rm e}^{2\, \sin^2 \theta }}\cos \theta \sin \theta, \\ \\ s_{2,20}(\theta)=&- \left( 4\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \sin^3 w \cos^2 w {dw}\right) \cos^2 \theta \right. -2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \sin^3 w \cos^2 w {dw}\\ &-\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w }} \sin^3 w {dw} +\left.2\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \sin^3 w {dw}\right) \cos^2 \theta \right) {{\rm e}^{2\, \sin^2 \theta }}\cos \theta \sin \theta, \end{array} } $$ $$ \small{ \begin{array}{ll} s_{2,21}(\theta)=&- \left( 2\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^2 w \sin w {dw}\right) \cos^2 \theta \right. 
+4\,\left(\displaystyle{\int _ {0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^4 w \sin w {dw}\right) \cos^2 \theta \\ &-\left.\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w }} \cos^2 w \sin w {dw}-2\,\int _{0} ^{\theta}\!{{\rm e}^{ \sin^2 w }} \cos^4 w \sin w {dw} \right) {{\rm e}^{2\, \sin^2 \theta}} \cos \theta \sin \theta \\ \\ s_{2,22}(\theta)=&\left( 2\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^2 w \sin w {dw}\right) \cos^2 \theta -\displaystyle{\int _{0} ^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^2 w \sin w {dw}\right.\\ &+ 2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^4 w \sin w {dw} -\left.4\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^4 w \sin w {dw} \right)\cos^2 \theta \right) {{\rm e}^{2\, \sin^2 \theta }}\cos \theta \sin \theta ,\\ \\ s_{2,23}(\theta)=&2\,{{\rm e}^{3\, \sin^2 \theta}} \cos^6 \theta \sin \theta +\dfrac{4}{3}\,{{\rm e}^{3\, \sin^2 \theta}} \cos^8 \theta \sin \theta +2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }} \cos^3 w {dw }+\dfrac{8}{3}\,{{\rm e}^{3\, \sin^2 \theta}} \cos^4 \theta \sin \theta\\ &-2\, \cos^4 \theta \displaystyle{\int _{0}^{ \theta}}\!{{\rm e}^{3\, \sin^2 w }} \cos^3 w {dw}-2\,{{\rm e}^{3\, \sin^2 \theta}} \cos^2 \theta \sin \theta +2\, \cos^6 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\cos w{dw }\\ &-4\, \cos^6 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{3\, \sin^2 w }} \cos^3 w {dw} -\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\cos w{dw }+ \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{3\, \sin^2 w }}\cos w{dw},\\ \\ s_{2,24}(\theta)=&\dfrac{1}{3}\, \left( 6\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }} \cos^3 w {dw }+12\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\! {{\rm e}^{3\, \sin^2 w }}\cos w{dw}\right. +12\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w } } \cos^3 w {dw}\\ &-3\,\displaystyle{\int _{0}^{\theta}}\! {{\rm e}^{3\, \sin^2 w }}\cos w{dw} +8\,{{\rm e}^{3\, \sin^2 \theta}} \cos^6 \theta \sin \theta -24\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }} \cos^3 w {dw}\\ &-6\, \left. \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{3\, \sin^2 w }}\cos w{dw}-2\,{{\rm e}^{3\, \sin^2 \theta}} \cos^2 \theta \sin \theta \right) \cos^2 \theta,\\ \\ s_{2,25}(\theta)=&- {{\rm e}^{2\, \sin^2 \theta}} \left( 3\,{I_3}\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^3 w {dw}+2\,{I_3}\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w }}\cos w{dw}\right.\\ &+6\,{I_1}\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^3 w {dw}-4\,{I_1}\, \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^5 w{dw} +4\,{I_3}\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^5 w{dw }\\ &+2\,{I_2}\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^5 w{dw } +{I_2}\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }}\cos w{dw} -6\,{I_3}\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w }} \cos^3 w {dw}\\ &-2\,{I_3}\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^5 w{dw} -2\,{I_1}\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }}\cos w{dw}\\ &\left.-3\,{ I_2}\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^3 w {dw}-{I_3}\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\! 
{{\rm e}^{ \sin^2 w }}\cos w{dw} \right) \dfrac{1}{2\,{I_1}-{I_2}}, \end{array} } $$ $$ \small{ \begin{array}{ll} s_{2,26}(\theta)=& {{\rm e}^{2\, \sin^2 \theta}} \left( -2\,{I_3}\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^3 w {dw} +4\,{I_3}\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w }} \cos^5 w{dw}\right.\\ &-{I_2}\,\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w }} \cos^3 w {dw}-2\,{I_3}\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^5 w{dw} +2\,{I_2}\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^5 w{dw}\\ &+2\,{I_1}\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^3 w {dw} -\left.4\,{I_1}\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^5 w{dw}+{I_3}\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^3 w {dw } \right) \dfrac{1}{2\,{I_1}-{I_2}},\\ \\ s_{2,27}(\theta)=&\dfrac{1}{6}\, \left( -6\, \cos^3 \theta \displaystyle{\int _{0 }^{\theta}}\!{{\rm e}^{3\, \sin^2 w }} \sin w {dw}+12\, \cos^5 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w {dw}\right. -3\,{{\rm e}^{3\, \sin^2 \theta}} \cos^4 \theta\\ & -2\,{{\rm e}^{3\, \sin^2 \theta}} \cos^2 \theta-2\,{{\rm e}^{3\, \sin^2 \theta}} \cos^6 \theta -{{\rm e}^{3\, \sin^2 \theta}}+24\, \cos^5 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w \cos^2 w {dw}\\ &-\left.12\, \cos^3 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w \cos^2 w {dw}+8\,{{\rm e}^{3\, \sin^2 \theta}} \cos^8 \theta \right) \sin \theta , \end{array} } $$ $$ \small{ \begin{array}{ll} s_{2,28}(\theta)=&-\dfrac{1}{6}\, \left( 5\,{{\rm e}^{3\, \sin^2 \theta}}+24\, \cos^5 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w \cos^2 w {dw}\right.\\ &-12\, \cos^3 \theta \displaystyle{\int _{0}^{ \theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w \cos^2 w {dw}-12\, \cos \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w \cos^2 w {dw}\\ &-6\, \cos^3 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w {dw}-15\,{ {\rm e}^{3\, \sin^2 \theta}} \cos^4 \theta -6\,\cos \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w {dw}\\ &+4\,{{\rm e}^{3\, \sin^2 \theta}} \cos^2 \theta +8\,{{\rm e}^{3\, \sin^2 \theta}} \cos^8 \theta +12\, \cos^5 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w {dw }\\ &\left. -2\,{{\rm e}^{3\, \sin^2 \theta}} \cos^6 \theta \right) \sin \theta, \\ \\ s_{2,29}(\theta)=&\dfrac{1}{6}\, \left( -14\,{{\rm e}^{3\, \sin^2 \theta}} \cos^2 \theta +10\,{ {\rm e}^{3\, \sin^2 \theta}} \cos^6 \theta \right. 
+12\, \cos^5 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w {dw}\\ &+6\, \cos^3 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{3\, \sin^2 w }}\sin w {dw} +12\, \cos^3 \theta \int _ {0}^{\theta}\!{{\rm e}^{3\, \sin^2 w }} \sin w \cos^2 w {dw}+ 8\,{{\rm e}^{3\, \sin^2 \theta}} \cos^8 \theta \\ &+\left.3\,{{\rm e}^{3\, \sin^2 \theta}} \cos^4 \theta -7\,{{\rm e}^{3\, \sin^2 \theta}}+24\, \cos^5 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{3\, \sin^2 w }}\sin w \cos^2 w {dw} \right) \sin \theta , \end{array} } $$ $$ \small{ \begin{array}{ll} s_{2,30}(\theta)=&- {{\rm e}^{2\, \sin^2 \theta}} \left( -4\,{I_1}\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^5 w{dw}-2\,{I_3}\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }}\cos w{dw}\right.\\ &-2\,{I_3}\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w }} \cos^5 w{dw}+2\,{I_2}\,\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w }} \cos^5 w{dw} -2\,{I_3}\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^3 w {dw}\\ &+2\,{I_1}\, \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }}\cos w{dw} -{I_2}\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^3 w {dw}+{I_3}\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\! {{\rm e}^{ \sin^2 w }}\cos w{dw}\\ &+2\,{I_1}\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^3 w {dw}+{I_3}\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^3 w {dw }\\ &-\left.{I_2}\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }}\cos w{dw}+4\,{I_3}\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w }} \cos^5 w{dw} \right) \dfrac{1}{2\,{I_1}-{I_2}}, \end{array} } $$ $$ \small{ \begin{array}{ll} s_{2,31}(\theta)=&- \left( 2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \sin^3 w \cos^2 w {dw}+\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \sin^3 w {dw}\right. +4\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \sin^3 w \cos^2 w {dw}\right) \cos^2 \theta \\ &\left.+2\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \sin^3 w {dw}\right) \cos^2 \theta \right) {{\rm e}^{2\, \sin^2 \theta }}\cos \theta \sin \theta, \end{array} } $$ $$ \small{ \begin{array}{ll} s_{2,32}(\theta)=&- \left( 2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^4 w \sin w {dw} +4\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^4 w \sin w {dw}\right) \cos^2 \theta \right.\\ &+\left.\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^2 w \sin w {dw} +2\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w }} \cos^2 w \sin w {dw}\right) \cos^2 \theta \right) {{\rm e}^{2\, \sin^2 \theta }}\cos \theta \sin \theta. 
\end{array} } $$ We also have $$ \small{ \begin{array}{ll} s_1(\theta)=&s_{1,1}(\theta)a_{03}c_{10}+s_{1,2}(\theta)a_{01}c_{03}+s_{1,3}(\theta)a_{03}c_{01}+s_{1,4}(\theta)a_{01}c_{30}+s_{1,5}(\theta)c_{10}c_{21}+ s_{1,6}(\theta)a_{01}a_{12}\\
&+s_{1,7}(\theta)c_{01}c_{12}+s_{1,8}(\theta)c_{01}a_{11} +s_{1,9}(\theta)c_{03}c_{10}+s_{1,10}(\theta)a_{12}c_{10}+s_{1,11}(\theta)c_{01}c_{21}\\
&+s_{1,12}(\theta)c_{10}c_{12}+s_{1,13}(\theta)a_{21}c_{10} +s_{1,14}(\theta)a_{01}a_{21}+s_{1,15}(\theta)c_{10}c_{30}+s_{1,16}(\theta)a_{01}c_{21}\\
&+s_{1,17}(\theta)a_{01}a_{03}+s_{1,18}(\theta)a_{01}c_{12}+s_{1,19}(\theta)a_{12}c_{01}+s_{1,20}(\theta)c_{01}c_{03} +s_{1,21}(\theta)a_{21}c_{01}, \end{array} } $$ and $s_{1,i}(\theta)$ for $i=1,\cdots,21$ are given by the following expressions: $$ \small{ \begin{array}{ll} s_{1,1}(\theta)=&-\dfrac{1}{12}\, \left( 24\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w { dw} \right)\cos^4 \theta -{{\rm e}^{2\, \sin^2 \theta }}\right. +12\,\displaystyle{\int _{0}^{\theta}} \!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw}\\
&+4\,{{\rm e}^{2\, \sin^2 \theta }} \cos^2 \theta -36\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw}\right) \cos^2 \theta -72\,\left(\int _ {0}^{\theta}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}\right) \cos^2 \theta \\
&+24\,\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}+8\,{{\rm e}^ {2\, \sin^2 \theta }} \cos^8 \theta +48\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw} \right)\cos^4 \theta \\
&\left.+3\,{{\rm e}^{2\, \sin^2 \theta }} \cos^4 \theta -14\,{{\rm e}^{2\, \sin^2 \theta }} \cos^6 \theta \right) \cos \theta \sin \theta, \\ \\
s_{1,2}(\theta)=&-\dfrac{1}{3}\,{{\rm e}^{2\, \sin^2 \theta }} \cos^6 \theta +2\, \cos^6 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw} -\dfrac{2}{3}\,{{\rm e}^{2\, \sin^2 \theta }} \cos^{10} \theta\\
& -2\, \left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw} \right)\cos^4 \theta -\int _{0}^ {\theta}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw} \\
&+\left(\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw} \right)\cos^4 \theta +\dfrac{7}{3}\,{{\rm e}^{2\, \sin^2 \theta }} \cos^4 \theta -4\, \cos^6 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}\\
&-{{\rm e}^{2 \, \sin^2 \theta }} \cos^2 \theta -\dfrac{1}{3}\,{{\rm e}^{2\, \sin^2 \theta }} \cos^8 \theta +2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw},\\ \end{array} } $$ $$ \small{ \begin{array}{ll} s_{1,3}(\theta)=& {{\rm e}^{2\, \sin^2 \theta }} \left( \cos \theta -1 \right) \left( \cos \theta +1 \right) \left( -8\, \cos^8 \theta {I_3}+14\, \cos^6 \theta{I_3}-7\, \cos^4 \theta {I_3 }+8\, \cos^4 \theta {I_1}\right.\\
&-4\, \cos^4 \theta {I_2}+24\,{I_3} \, \cos^3 \theta \sin \theta {{\rm e}^{-2\, \sin^2 \theta } }\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^2 w{dw}\\
& +48\,{ I_1}\, \cos^3 \theta \sin \theta {{\rm e}^{-2\, \sin^2 \theta}}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw} -48\,{I_3}\, \cos^3 \theta \sin \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^4 w {dw}\\
&-24\,{I_2}\, \cos^3 \theta \sin \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2 \, \sin^2 w }}{dw}+ \cos^2 \theta {I_3}-10\, \cos^2 \theta
{I_1}+5\, \cos^2 \theta {I_2}\\ &+12\,{I_2}\,\cos \theta \sin \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw} -12\,{I_3}\,\cos \theta \sin \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^2 w {dw}\\ &-24\,{I_1}\,\cos \theta \sin \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw}\\ & +\left.24\,{I_3}\,\cos \theta \sin \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^4 w {dw}-{I_2}+2\,{I_1} \right) \frac{1}{12({I_2}-2\,{I_1})}, \end{array} } $$ $$ \small{ \begin{array}{ll} s_{1,4}(\theta)=&-\dfrac{1}{12}\, \left( 24\, \cos^4 \theta { {\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0} ^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \sin w \cos w {dw}-7\right.+3\, \cos^4 \theta\\ & -8\, \cos^8 \theta +14\, \cos^2 \theta -24\, \cos^2 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}\\ &-2\, \cos^6 \theta +12\, \cos^2 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {d w}\\ &\left.-48\, \cos^4 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw} \right) { {\rm e}^{2\, \sin^2 \theta }}\sin \theta \cos \theta, \end{array} } $$ $$ \small{ \begin{array}{ll} s_{1,5}(\theta)=&-\dfrac{1}{3}\, \left[ -{{\rm e}^{2\, \sin^2 \theta }} \cos^2 \theta +12\left(\,\displaystyle{\int _{0}^{ \theta}}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw}\right) \cos^4 \theta \right.\\ &+24\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}\right) \cos^4 \theta -3\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw}\\ &-6\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {d w} \right)\cos^2 \theta -6\,\displaystyle{\int _{0}^{\theta}}\! {{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}\\ &-12\,\left(\displaystyle{\int _{0 }^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}\right) \cos^2 \theta +4\,{{\rm e}^{2\, \sin^2 \theta }} \cos^8 \theta \\ &-\left.3\,{{\rm e}^{2\, \sin^2 \theta }} \cos^4 \theta \right] \cos^2 \theta, \end{array} } $$ $$ \small{ \begin{array}{ll} s_{1,6}(\theta)=&-\dfrac{1}{3}\, \left( 12\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw} \cos^4 \theta \right. 
+9\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw}\right) \cos^2 \theta \\
& +6\,\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}-3\,\int _{0} ^{\theta}\!{{\rm e}^{2\, \sin^2 w }} \sin w \cos w {dw}+4\,{{\rm e}^{2\, \sin^2 \theta }} \cos^4 \theta \\
&-6\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw} \right)\cos^4 \theta -5 \,{{\rm e}^{2\, \sin^2 \theta }} \cos^6 \theta +2\,{{\rm e}^{2\, \sin^2 \theta }} \cos^8 \theta \\
&\left.-18\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw} \right)\cos^2 \theta -{{\rm e}^{2\, \sin^2 \theta }} \cos^2 \theta \right) \cos^2 \theta, \end{array} } $$ $$ \small{ \begin{array}{ll} s_{1,7}(\theta)=&-\dfrac{1}{12}\, {{\rm e}^{2\, \sin^2 \theta }} \left( 12\,{I_2}\,\cos \theta \sin \theta {{\rm e}^{-2\, \sin^2 \theta } }\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw}\right.\\
& -12\,{I_3}\, \cos^3 \theta \sin \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^2 w {dw} -24\,{I_1}\,\cos \theta \sin \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw}\\
&+12\,{I_2}\, \cos^3 \theta \sin \theta {{\rm e}^{- 2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }}{dw} +24\,{ I_3}\, \cos^3 \theta \sin \theta {{\rm e}^{-2\, \sin^2 \theta}}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^4 w {dw}\\
&+24\,{ I_3}\,\cos \theta \sin \theta {{\rm e}^{ -2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\! {{\rm e}^{2\, \sin^2 w }} \cos^4 w {dw} -12\,{I_3}\,\cos \theta \sin \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^2 w {dw}\\
&-24\,{I_1}\, \cos^3 \theta \sin \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw} +24\,{I_3}\, \cos^5 \theta \sin \theta {{\rm e}^{-2\, \sin^2 \theta } }\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \left( \cos^4 w \right) ^{2}{dw}\\
&+48\,{ I_1}\, \cos^5 \theta \sin \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw} -24\,{I_2}\, \cos^5 \theta \sin \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw}\\
&-48\,{I_3}\, \cos^5 \theta \sin \theta {{\rm e}^{-2\, \sin^2 \theta } }\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^4 w {dw} -16\, \cos^4 \theta {I_3}+9\, \cos^6 \theta {I_3}+5\, \cos^2 \theta {I_3}\\
&-12\, \cos^2 \theta {I_1}-8\,{I_3}\, \cos^{10} \theta +8\, \cos^6 \theta {I_1}-4\, \cos^6 \theta {I_2}-5\,{I_2}+10\,{I_1}+6\, \cos^2 \theta {I_2}\\
&\left.-6\, \cos^4 \theta {I_1}+3\, \cos^4 \theta {I_2}+10\, \cos^8 \theta {I_3 } \right) \dfrac{1}{2\,{I_1}-{I_2}},\\ \\
s_{1,8}(\theta)=&\dfrac{1}{12}\, {{\rm e}^{2\, \sin^2 \theta }} \left( -4\, \cos^6 \theta {I_2}-48\,{I_3}\, \cos^5 \theta \sin \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^4 w {dw }\right.\\
&-3\, \cos^4 \theta {I_2}+14\, \cos^4 \theta {I_3}-24\,{I_3} \, \cos^3 \theta \sin \theta {{\rm e}^{-2\, \sin^2 \theta } }\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^4 w {dw}\\
&+3\, \cos^6 \theta {I_3}-2\, \cos^8 \theta {I_3}+24\,{I_3}\, \cos^5 \theta \sin \theta {{\rm e}^{-2\, \sin^2 \theta } }\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^2
w {dw}\\
&+6\, \cos^4 \theta {I_1}+8\, \cos^6 \theta {I_1}+7\,{I_2}-24\,{I_2}\, \cos^5 \theta \sin \theta {{\rm e}^{-2\, \sin^2 \theta } }\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw}\\
&-12\,{I_2}\, \cos^3 \theta \sin \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw} +48\,{I_1}\, \cos^5 \theta \sin \theta {{\rm e}^{-2\, \sin^2 \theta } }\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw}-8\,{I_3}\, \cos^{10} \theta \\
&-7\, \cos^2 \theta {I_3}+12\,{I_3}\, \cos^3 \theta \sin \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^2 w {dw }\\
&+\left.24\,{I_1}\, \cos^3 \theta \sin \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw}-14\,{I_1} \right) \dfrac{1}{-{I_2}+2\,{ I_1}},\\ \\
s_{1,9}(\theta)=&-\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw}\right) \cos^4 \theta +\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw}\\
&-{{\rm e}^{2\, \sin^2 \theta }} \cos^8 \theta -2\, \cos^6 \theta \displaystyle{\int _{0}^{\theta}} \!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw}-{{\rm e}^{2\, \sin^2 \theta }} \cos^6 \theta \\
& +\dfrac{5}{3}\,{{\rm e}^{2\, \sin^2 \theta }} \cos^4 \theta +2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w } } \cos^3 w \sin w {dw}\\
&-4\, \cos^6 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw} -\dfrac{2}{3}\,{ {\rm e}^{2\, \sin^2 \theta }} \cos^{10} \theta \\
&+{{\rm e}^{2\, \sin^2 \theta }} \cos^2 \theta -2\left(\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw} \right)\cos^4 \theta, \\ \\
s_{1,10}(\theta)=&-\dfrac{1}{3}\, \left( {{\rm e}^{2\, \sin^2 \theta }} \cos^2 \theta +12\left(\,\displaystyle{\int _{0}^{ \theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}\right) \cos^4 \theta \right.
-3\,{{\rm e}^{2\, \sin^2 \theta }} \cos^6 \theta\\
& -9\left(\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw} \right)\cos^2 \theta +6\left( \,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw}\right) \cos^4 \theta\\
&+6\,\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw} -18\left(\,\int _{0 }^{\theta}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}\right) \cos^2 \theta \\
&\left.+3\,\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw}+2\,{{\rm e}^{2\, \sin^2 \theta }} \cos^8 \theta \right) \cos^2 \theta, \end{array} } $$ $$ \small{ \begin{array}{ll} s_{1,11}(\theta)=&-\dfrac{1}{3}\, {{\rm e}^{2\, \sin^2 \theta }} \cos^2 \theta \left( -6\,{I_1}\,{{\rm e}^{-2\, \sin^2 \theta }} \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw}+4\, \cos^7 \theta\sin \theta {I_3}\right.\\
& -4\, \cos^3 \theta \sin \theta {I_1}+6\,{I_2}\, \cos^2 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw}\\
&-12\,{I_1}\, \cos^2 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2 \, \sin^2 w }}{dw}-3\,{I_3}\,{ {\rm e}^{-2\, \sin^2 \theta }}\int _{0} ^{\theta}\!{{\rm e}^{2\, \sin^2 w }} \cos^2 w {dw}\\
& +2\, \cos^3 \theta \sin \theta {I_2}+ 24\,{I_1}\, \cos^4 \theta {{\rm e}^ {-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}} \!{{\rm e}^{2\, \sin^2 w }}{dw} +12\,{ I_3}\, \cos^2 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }} \cos^4 w {dw}\\
&+12\,{I_3}\, \cos^4 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^2 w {dw}-2\,\cos \theta \sin \theta {I_1}\\
&-12\,{I_2}\, \cos^4 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw}+\cos \theta \sin \theta {I_2}\\
&+6\,{I_3}\,{{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^4 w {dw}\\
&-24\,{I_3}\, \cos^4 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^4 w {dw}+3\,{ I_2}\,{{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw}\\
&- \left.\cos^3 \theta \sin \theta {I_3}-6\,{I_3}\, \cos^2 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}} \!{{\rm e}^{2\, \sin^2 w }} \cos^2 w {dw } \right) \dfrac{1}{2\,{I_1}-{I_2}},\\ \\
s_{1,12}(\theta)=&-\dfrac{1}{12}\, \left( -12\,{{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {d w}-15\, \cos^4 \theta -2\, \cos^6 \theta \right.\\
& +24\, \cos^4 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {d w} -24\, \cos^2 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}\\
&-24\,{{\rm e} ^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}} \!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}+8\, \cos^8 \theta +4\, \cos^2 \theta\\
&+48\, \cos^4 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}\\
&\left.-12\, \cos^2 \theta { {\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0} ^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \sin w \cos w {dw}+5 \right) {{\rm e}^{2 \, \sin^2 \theta }}\cos \theta \sin \theta,\\ \\
s_{1,13}(\theta)=&\dfrac{1}{12}\, \left( -12\,
\cos^2 \theta { {\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0} ^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \sin w \cos w {dw} \right.\\
&+24\, \cos^4 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw} +8\, \cos^8 \theta -3\, \cos^4 \theta \\
&-24\, \cos^2 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw} +48\, \cos^4 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}\\
&-1-2\, \cos^2 \theta\left.-2\, \cos^6 \theta \right) {{\rm e}^{2\, \sin^2 \theta }} \cos \theta \sin \theta, \\ \\
s_{1,14}(\theta)=&\dfrac{1}{12}\, \left( 1+48\, \cos^4 \theta { {\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0} ^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}+ 8\, \cos^8 \theta \right.\\
&-24\, \cos^2 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}-10\, \cos^6 \theta \\
& -24\, \cos^4 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w } }\sin w \cos w {dw}-2\, \cos^2 \theta \\
&\left.+12\, \cos^2 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {d w}+3\, \cos^4 \theta \right) {{\rm e}^ {2\, \sin^2 \theta }}\cos \theta \sin \theta,\\ \\ \\ \end{array} } $$ $$ \small{ \begin{array}{ll} s_{1,15}(\theta)=&\dfrac{1}{12}\, \left( 24\, \cos^2 \theta { {\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0} ^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}+ 10\, \cos^6 \theta \right.\\
&+24\, \cos^4 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw}\\
& +48\, \cos^4 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w } } \cos^3 w \sin w {dw}\\
&+3\, \cos^4 \theta +12\, \cos^2 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw} +8\, \cos^8 \theta\\
&\left.-14\, \cos^2 \theta -7 \right) { {\rm e}^{2\, \sin^2 \theta }}\cos \theta \sin \theta, \\ \\
s_{1,16}(\theta)=&-\dfrac{1}{3}\, \left( -6\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}-12\left(\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw}\right) \cos^4 \theta \right.\\
&+{ {\rm e}^{2\, \sin^2 \theta }} \cos^2 \theta +4\,{{\rm e}^{2\, \sin^2 \theta }} \cos^8 \theta -12\left(\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}\right) \cos^2 \theta\\
&+24\left(\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw} \right)\cos^4 \theta -4\,{ {\rm e}^{2\, \sin^2 \theta }} \cos^6 \theta \\
&+6\,\left(\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ 2\, \sin^2 w }}\sin w \cos w {dw}\right) \cos^2 \theta -{{\rm e}^{2\, \sin^2 \theta }} \cos^4 \theta \\
&\left.+3\,\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw} \right) \cos^2 \theta,\\ \\
s_{1,17}(\theta)=&-\dfrac{1}{12}\, \left( 48\, \cos^4 \theta { {\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0} ^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw} \right.\\
&- 12\,{{\rm e}^{-2\, \sin^2 \theta }} \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw} -24\, \cos^4 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w
}}\sin w \cos w {dw}\\
&+36\, \cos^2 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw}+8\, \cos^8 \theta +1+21\, \cos^4 \theta \\
&+24\,{{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw} -72\, \cos^2 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}-8\, \cos^2 \theta \\
&\left.-22\, \cos^6 \theta \right) {{\rm e}^{2\, \sin^2 \theta }}\cos \theta \sin \theta, \\ \\
s_{1,18}(\theta)=&-\dfrac{1}{12}\, \left( -24\, \cos^4 \theta { {\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0} ^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \sin w \cos w {dw}\right.\\
&-24\,{{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}+16\, \cos^2 \theta -5\\
&+12\, \cos^2 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {d w}\\
&-9\, \cos^4 \theta -24\, \cos^2 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^3 w \sin w {dw}-10\, \cos^6 \theta \\
&+12\,{{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}\sin w \cos w {dw}+8\, \cos^8 \theta \\
&\left.+48\, \cos^4 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2 \, \sin^2 w }} \cos^3 w \sin w {dw} \right) {{\rm e}^{2\, \sin^2 \theta }}\cos \theta \sin \theta,\\ \\ \end{array} } $$ $$ \small{ \begin{array}{ll} s_{1,19}(\theta)=&-\dfrac{1}{3}\, {{\rm e}^{2\, \sin^2 \theta }} \cos^2 \theta \left( 2\,\cos \theta \sin \theta {I_1}-18\,{I_1} \, \cos^2 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2 \, \sin^2 w }}{dw} \right.\\
&-\cos \theta \sin \theta {I_2}-3\, \cos^5 \theta \sin \theta {I_3}+ \cos^3 \theta \sin \theta {I_2}\\
&+9\,{I_2}\, \cos^2 \theta { {\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0} ^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw } +18\,{I_3}\, \cos^2 \theta { {\rm e}^{-2\, \sin^2 \theta }}\int _{0} ^{\theta}\!{{\rm e}^{2\, \sin^2 w }} \cos^4 w {dw}\\
&+ \cos^3 \theta \sin \theta {I_3} -6\,{I_2}\, \cos^4 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }}{dw}\\
&+6\,{I_3}\, \cos^4 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }} \cos^2 w {dw}+6\,{I_1}\,{{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }}{dw}\\
&+2\, \cos^7 \theta \sin \theta {I_3}-3\,{I_2}\,{{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw} -9\,{I_3}\, \cos^2 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^2 w {dw}\\
&-12\,{I_3}\, \cos^4 \theta {{\rm e}^{-2\, \sin^2 \theta}}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^4 w {dw}+12\,{ I_1}\, \cos^4 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }}{dw}\\
&\left.-2\, \cos^3 \theta \sin \theta {I_1}-6\,{I_3}\,{{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^4 w {dw}+3\,{I_3}\,{{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^2 w {dw} \right) \dfrac{1}{2\,{I_1}-{I_2}},\\ \\
s_{1,20}(\theta)=&-\dfrac{1}{3}\, {{\rm e}^{2\, \sin^2 \theta }} \left( -6\,{I_3}\,
\cos^4 \theta {{\rm e}^{-2\, \sin^2 \theta }} \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^4 w {dw} \right.\\ &+2\, \cos^3 \theta \sin \theta {I_2}+ 12\,{I_1}\, \cos^6 \theta {{\rm e}^ {-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}} \!{{\rm e}^{2\, \sin^2 w }}{dw}\\ &-6\,{ I_2}\, \cos^6 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }}{dw}-3\,{I_2}\, \cos^4 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }}{dw}\\ &-3\, \cos^3 \theta \sin \theta {I_3}-12\,{I_3}\, \cos^6 \theta {{\rm e}^{-2\, \sin \theta ^ {2}}}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^4 w {dw}\\ &+6\,{ I_1}\, \cos^4 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{2\, \sin^2 w }}{dw}-6\,{I_1}\,{{\rm e}^{-2\, \sin^2 \theta }} \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw}+ \cos^5 \theta \sin \theta {I_2}\\ &+4\, \cos^5 \theta \sin \theta {I_3}+3\,\cos \theta \sin \theta {I_2}+3\, \cos^7 \theta \sin \theta {I_3}\\ &-3\,{I_3}\,{{\rm e} ^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}} \!{{\rm e}^{2\, \sin^2 w }} \cos^2 w {dw} +3\,{I_3}\, \cos^4 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^2 w {dw}\\ &+3\,{I_2}\,{{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw} -6\,\cos \theta \sin \theta {I_1}-2\, \cos^5 \theta \sin \theta {I_1}\\ &+6\,{I_3}\,{{\rm e} ^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}} \!{{\rm e}^{2\, \sin^2 w }} \cos^4 w {dw}-4\, \cos^3 \theta \sin \theta {I_1}+2\,{I_3} \, \cos^9 \theta \sin \theta \\ &\left.+6\,{I_3}\, \cos^6 \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0 }^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^2 w {dw} \right) \dfrac{1}{2\,{I_1}-{I_2}},\\ \\ s_{{1},{21}}(\theta)=&\dfrac{1}{12}\, {{\rm e}^{2\, \sin \theta ^ {2}}} \left( 2\, \cos^4 \theta {I_3 }-48\,{I_3}\, \cos^5 \theta \sin \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^4 w {dw } \right.\\ &-3\, \cos^6 \theta {I_3}-4\, \cos^6 \theta {I_2}+{I_2}-12\, {I_3}\, \cos^3 \theta \sin \theta {{\rm e}^{-2\, \sin \theta ^ {2}}}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^2 w {dw}\\ &-6\, \cos^4 \theta {I_1}-8\,{I_3}\, \cos^{10} \theta -2\,{I_1}+24\,{I_3}\, \cos^3 \theta \sin \theta {{\rm e}^{-2\, \sin^2 \theta } }\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^4 w {dw}\\ &-24\,{ I_2}\, \cos^5 \theta \sin \theta {{\rm e}^{-2\, \sin \theta ^ {2}}}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw}+12\,{I_2}\, \cos^3 \theta \sin \theta {{\rm e}^{-2\, \sin^2 \theta }}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw}\\ &-24\,{I_1}\, \cos^3 \theta \sin \theta {{\rm e}^{-2\, \sin^2 \theta } }\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw}+10\, \cos^8 \theta {I_3}\\ &+8\, \cos^6 \theta {I_1}- \cos^2 \theta {I_3}+24\,{I_3} \, \cos^5 \theta \sin \theta {{\rm e}^{-2\, \sin^2 \theta } }\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }} \cos^2 w {dw}\\ &\left.+3\, \cos^4 \theta {I_2}+48\,{I_1} \, \cos^5 \theta \sin \theta {{\rm e}^{-2\, \sin^2 \theta } }\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{2\, \sin^2 w }}{dw} \right) \dfrac{1}{2\,{I_1}-{I_2}}. \end{array} } $$ Now we present the expression of $s_0(\theta)$. 
$$ \small{ \begin{array}{ll} s_0(\theta)=&s_{0,1}(\theta)a_{12}c_{11}+s_{0,2}(\theta)a_{20}c_{03}+s_{0,3}(\theta)a_{20}c_{21}+s_{0,4}(\theta)a_{21}c_{11} +s_{0,5}(\theta)a_{21}c_{02}+s_{0,6}(\theta)a_{21}c_{20}\\ &+s_{0,7}(\theta)c_{02}c_{30}+s_{0,8}(\theta)a_{02}a_{12} +s_{0,9}(\theta)a_{02}c_{03}+s_{0,10}(\theta)a_{02}c_{21}+s_{0,11}(\theta)a_{03}a_{11}+s_{0,12}(\theta)a_{03}c_{02}\\ &+ s_{0,13}(\theta)a_{03}c_{20}+s_{0,14}(\theta)a_{11}a_{21} +s_{0,15}(\theta)a_{11}c_{12}+s_{0,16}(\theta)a_{11}c_{30}+s_{0,17}(\theta)a_{12}a_{20}\\ &+s_{0,18}(\theta)a_{11}a_{12}+s_{0,19}(\theta)a_{03}a_{20}+s_{0,20}(\theta)a_{02}a_{21}+s_{0,21}(\theta)a_{11}c_{21}+ s_{0,22}(\theta)c_{20}c_{30}\\ &+s_{0,23}(\theta)c_{11}c_{03}+s_{0,24}(\theta)c_{02}c_{03}+s_{0,25}(\theta)c_{11}c_{21}+s_{0,26}(\theta)c_{12}c_{20} +s_{0,27}(\theta)c_{02}c_{12}+s_{0,28}(\theta)a_{02}c_{12}\\ &+s_{0,29}a_{02}c_{30} +s_{0,30}(\theta)a_{02}a_{03}+s_{0,31}(\theta)a_{20}a_{21}+s_{0,32}(\theta)c_{11}c_{30}+s_{0,33}(\theta)c_{20}c_{21}+s_{0,34}(\theta)c_{02}c_{21}\\ &+s_{0,35}(\theta)c_{03}c_{20}+s_{0,36}(\theta)c_{11}c_{12}+s_{0,37}(\theta)a_{20}c_{30}+s_{0,38}(\theta)a_{12}c_{20}+s_{0,39}(\theta)a_{20}c_{12}\\ &+s_{0,40}(\theta)a_{11}c_{03}+s_{0,41}(\theta)a_{12}c_{02} +s_{0,42}(\theta)a_{03}c_{11}. \end{array} } $$ and $s_{0,i}(\theta)$ for $i=1\cdots,42$ are the following: $$ \small{ \begin{array}{ll} s_{0,1}(\theta)=&- \cos^2 \theta \left( 2\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}}\cos w {dw}+\displaystyle{\int _ {0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos w {dw} -3\, \cos^2 \theta \int _{0}^{\theta}\!{{\rm e}^{ \sin^2 w}}\cos w {dw}\right.\\ & -4\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw} -2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw }+6\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \cos^5 w {dw}\\ &+2\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {d w} +\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw} \left. -3\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw} \right),\\ \\ s_{0,2}(\theta)=&-4\, \cos^6 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \cos^5 w {dw}+2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw} -2\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw }\\ &+2\, \cos^6 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \cos^3 w {dw} -\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw}+ \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw},\\ \end{array} } $$ $$ \small{ \begin{array}{ll} s_{0,3}(\theta)=&\cos^2 \theta \left( 2\,\displaystyle{\int _{0}^{ \theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw} +4\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {d w}\right. -8\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\! {{\rm e}^{ \sin^2 w}} \cos^5 w {dw}\\ &-\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw}-2\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw } \left. 
+4\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \cos^3 w {dw} \right),\\ \\ s_{0,4}(\theta)=& \cos^3 \theta \sin \theta \left( 2\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}}\cos w {dw}-\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}}\cos w {dw}\right.-4\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \cos^5 w {dw}\\ &+2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw}\left.+2\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw }-\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw} \right),\\ \\ s_{0,5}(\theta)=& \cos^3 \theta \sin \theta \left( 2\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \sin^3 w {dw}-\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \sin^3 w {dw}\right.\\ &\left.+4\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \sin^3 w \cos^2 w {dw}-2\,\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \sin^3 w \cos^2 w {dw} \right),\\ \\ s_{0,6}(\theta)=& \cos^3 \theta \sin \theta \left( 2\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^2 w \sin w {d w} \right.-\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw}\\ &+4\, \cos^2 \theta \displaystyle{\int _{0 }^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^4 w \sin w {dw} \left.- 2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^4 w \sin w {dw} \right), \end{array} } $$ $$ \small{ \begin{array}{ll} s_{0,7}(\theta)= & \cos^3 \theta \sin \theta \left( \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \sin^3 w {dw }+2\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \sin^3 w {dw}\right.\\ &\left. +2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \sin^3 w \cos^2 w {dw}+ 4\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \sin^3 w \cos^2 w {dw} \right),\\ \\ s_{0,8}(\theta)=&- \cos^2 \theta \left( 6\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw} \right. +3\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw }-9\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \cos^3 w {dw}\\ &-4\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {d w}-2\, \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw} +6\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \cos^5 w {dw}\\ &-2\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}}\cos w {dw}\left. -\displaystyle{\int _{0}^{\theta}}\! 
{{\rm e}^{ \sin^2 w}}\cos w {dw}+3\, \cos^2 \theta \int _{0 }^{\theta}\!{{\rm e}^{ \sin^2 w}}\cos w {dw} \right),\\ \\ s_{0,9}(\theta)=&-6\, \cos^6 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \cos^3 w {dw}+3\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw}-3\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw }\\ & +4\, \cos^6 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \cos^5 w {dw}-2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw}+2\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw }\\ &+2\, \cos^6 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}}\cos w {dw}-\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}}\cos w {dw}+ \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}}\cos w {dw},\\ \\ s_{0,10}(\theta)=&- \cos^2 \theta \left( -3\,\displaystyle{\int _{0}^{ \theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw}-6\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {d w} \right.+12\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}} \!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw}\\ &+2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw}+4\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw }-8\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \cos^5 w {dw}\\ &\left.+\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}}\cos w {dw }+2\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}}\cos w {dw}-4\, \cos^4 \theta \int _{0 }^{\theta}\!{{\rm e}^{ \sin^2 w}}\cos w {dw} \right),\\ s_{0,11}(\theta)=&\cos \theta \sin \theta \left( 2\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw} \right.+\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw}\\ & -3\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw} -4\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^4 w \sin w {dw}\\ &-2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^ { \sin^2 w}} \cos^4 w \sin w {dw} \left.+6\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^4 w \sin w {dw} \right),\\ \\ s_{0,12}(\theta)=&-\cos \theta \sin \theta \left( 2\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \sin^3 w {dw} \right.+\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \sin^3 w {dw}\\ &-3\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin w ^{2 }}} \sin^3 w {dw} +4\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \sin^3 w \cos^2 w {dw}\\ &\left.+2\,\displaystyle{\int _ {0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \sin^3 w \cos^2 w {dw} -6\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \sin^3 w \cos^2 w {dw} \right), \end{array} } $$ $$ \small{ \begin{array}{ll} s_{0,13}(\theta)=&-\cos \theta \sin \theta \left( 2\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw}\right.+\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw}\\ & -3\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw}+4\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^4 w \sin w {dw}\\ &+2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^ { \sin^2 w}} \cos^4 w \sin w {dw} \left.-6\, \cos^2 \theta 
\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^4 w \sin w {dw} \right),\\ \\ s_{0,14}(\theta)=&- \cos^3 \theta \sin \theta \left( 2\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin w ^{2 }}} \cos^2 w \sin w {d w} \right. -\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw}\\ &-4\, \cos^2 \theta \int _{0 }^{\theta}\!{{\rm e}^{ \sin^2 w}} \cos^4 w \sin w {dw} \left. + 2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^4 w \sin w {dw} \right), \end{array} } $$ $$ \small{ \begin{array}{ll} s_{0,15}(\theta)=&\cos \theta \sin \theta \left( - \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw}\right. -\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw}\\ &+2\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw}+2\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^4 w \sin w {dw}\\ &+2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^ { \sin^2 w}} \cos^4 w \sin w {dw} \left. -4\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^4 w \sin w {dw} \right),\\ \\ s_{0,16}(\theta)=&- \cos^3 \theta \sin \theta \left( \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw} \right. +2\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw}\\ &-2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^4 w \sin w {dw} \left.-4\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^4 w \sin w {dw} \right), \end{array} } $$ $$ \small{ \begin{array}{ll} s_{0,17}(\theta)=& \cos^2 \theta \left( -4\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw}\right. 
-2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw }\\ &+6\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \cos^5 w {dw}+2\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {d w}+\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw}\\ &\left.-3\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw} \right),\\ \\ s_{0,18}(\theta)=&\cos^2 \theta \left( 2\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw} \right.,\\ &+\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw}-3\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw}\\ &-4\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^4 w \sin w {dw}-2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^4 w \sin w {dw}\\ &\left.+6\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^4 w \sin w {dw} \right),\\ \end{array} } $$ $$ \small{ \begin{array}{ll} s_{0,19}(\theta)=&\cos \theta \sin \theta \left( -4\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw} \right.-2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw}\\ &+6\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin w ^{2 }}} \cos^5 w {dw}+2\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw}+\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw }\\ &\left.-3\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \cos^3 w {dw} \right),\\ \\ s_{0,20}(\theta)= & \cos^3 \theta \sin \theta \left( 6\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin w ^{2 }}} \cos^3 w {dw} \right.\\ &-3\,\displaystyle{\int _{0}^{\theta}} \!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw}-4\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {d w}\\ &\left.+2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw} -2\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}}\cos w {dw}+\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}}\cos w {dw} \right), \end{array} } $$ $$ \small{ \begin{array}{ll} s_{0,21}(\theta)=&\cos^2 \theta \left( -\int _{0}^{\theta }\!{{\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw}-2\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw} \right.\\ &+4\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw}+2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^ { \sin^2 w}} \cos^4 w \sin w {dw}\\ &\left.+4\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^4 w \sin w {dw}-8\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^4 w \sin w {dw} \right),\\ \\ s_{0,22}(\theta)= &\cos^3 \theta \sin \theta \left( \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw} \right. 
+2\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw}\\ &+2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^4 w \sin w {dw} \left.+4\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^4 w \sin w {dw} \right),\\ \\ s_{0,23}(\theta)=&-2\, \cos^6 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}}\cos w {dw}+\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}}\cos w {dw} - \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}}\cos w {dw}\\ &+4\, \cos^6 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \cos^5 w {dw}-2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw} +2\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw }\\ &-2\, \cos^6 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \cos^3 w {dw}+\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw} - \cos \theta ^ {4}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw},\\ \end{array} } $$ $$ \small{ \begin{array}{ll} s_{0,24(\theta)}=&-2\left(\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \sin^3 w {dw}\right) \cos^6 \theta +\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \sin^3 w {dw}- \cos \theta ^ {4}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \sin^3 w {dw} \\ &-4\left(\,\displaystyle{\int _{0}^{ \theta}}\!{{\rm e}^{ \sin^2 w}} \sin^3 w \cos^2 w {dw} \right)\cos^6 \theta+2\, \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin w ^{2 }}} \sin^3 w \cos^2 w {dw}\\ &-2\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \sin^3 w \cos^2 w {dw}, \end{array} } $$ $$ \small{ \begin{array}{ll} s_{0,25}(\theta)=&- \cos^2 \theta \left( -\displaystyle{\int _{0}^{ \theta}}\!{{\rm e}^{ \sin^2 w}}\cos w {dw}-2\, \cos \theta ^{ 2}\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin w ^ {2}}}\cos w {dw} \right.+4\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}}\cos w {dw}\\ \\ &+2\,\displaystyle{\int _{0}^{\theta}} \!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw} +4\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {d w}-8\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\! 
{{\rm e}^{ \sin^2 w}} \cos^5 w {dw}\\ &-\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw} -2\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw } \left.+4\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \cos^3 w {dw} \right), \end{array} } $$ $$ \small{ \begin{array}{ll} s_{0,26}(\theta)=&-\cos \theta \sin \theta \left( - \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw} \right.-\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw}\\ &+2\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw} -2\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^4 w \sin w {dw}-2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^ { \sin^2 w}} \cos^4 w \sin w {dw}\\ &\left.+4\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^4 w \sin w {dw} \right),\\ \\ s_{0,27}(\theta)=&-\cos \theta \sin \theta \left( - \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \sin^3 w {dw} \right)-\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \sin^3 w {dw}\\ & +2\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin w ^{2 }}} \sin^3 w {dw}-2\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \sin^3 w \cos^2 w {dw}-2\,\int _ {0}^{\theta}\!{{\rm e}^{ \sin^2 w}} \sin^3 w \cos^2 w {dw}\\ &\left.+4\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \sin^3 w \cos^2 w {dw} \right), \end{array} } $$ $$ \small{ \begin{array}{ll} s_{0,28}(\theta)=&-\cos \theta \sin \theta \left( -3\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \cos^3 w {dw}-3\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw}\right.\\ &+6\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw }+2\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \cos^5 w {dw}\\ &+2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw}-4\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw }+ \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}}\cos w {dw}\\ &\left.+\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}}\cos w {dw}-2\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}}\cos w {dw} \right),\\ \\ s_{0,29}(\theta)=& \cos^3 \theta \sin \theta \left( 3\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw }\right. 
+6\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \cos^3 w {dw}-2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw}\\ & \left.-4\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw }-\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin w ^ {2}}}\cos w {dw}-2\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}}\cos w {dw} \right),\\ \end{array} } $$ $$ \small{ \begin{array}{ll} s_{0,30}(\theta)=&-\cos \theta \sin \theta \left( 6\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw}+3\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw} \right.\\ & -9\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin w ^{2 }}} \cos^3 w {dw}-4\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw} -2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw }\\ &+6\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \cos^5 w {dw} -2\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}}\cos w {dw}-\displaystyle{\int _{0}^{\theta}}\! {{\rm e}^{ \sin^2 w}}\cos w {dw}\\ & \left.+3\, \cos^2 \theta \int _{0 }^{\theta}\!{{\rm e}^{ \sin^2 w}}\cos w {dw} \right),\\ \\ s_{0,31}(\theta)=&- \cos^3 \theta \sin \theta \left( -4\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin w ^{2 }}} \cos^5 w {dw}+2\,\displaystyle{\int _{0}^{\theta}} \!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw} \right.\\ &\left.+2\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {d w}-\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw} \right),\\ \end{array} } $$ $$ \small{ \begin{array}{ll} s_{0,32}(\theta)=& \cos^3 \theta \sin \theta \left( \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}}\cos w {dw} +2\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}}\cos w {dw} \right. -2\, \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin w ^{2 }}} \cos^5 w {dw}\\ &-4\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw}\left. +\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw }+2\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \cos^3 w {dw} \right),\\ \\ s_{0,33}(\theta)=&- \cos^2 \theta \left( -\int _{0}^{ \theta}\!{{\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw} \right. -2\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw}\\ &+4\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw}-2\,\displaystyle{\int _{0}^{\theta}}\! {{\rm e}^{ \sin^2 w}} \cos^4 w \sin w {dw}\\ &\left. -4\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^4 w \sin w {dw}+8\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^4 w \sin w {dw} \right), \end{array} } $$ $$ \small{ \begin{array}{ll} s_{0,34}(\theta)=&- \cos^2 \theta \left( -\displaystyle{\int _{0}^{ \theta}}\!{{\rm e}^{ \sin^2 w}} \sin^3 w {dw}-2\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \sin^3 w {d w} \right.\\ &+4\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\! {{\rm e}^{ \sin^2 w}} \sin^3 w {dw}-2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \sin^3 w \cos^2 w {dw} \\ &\left. 
- 4\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \sin^3 w \cos^2 w {dw}+8\, \cos^4 \theta \int _{0}^{ \theta}\!{{\rm e}^{ \sin^2 w}} \sin^3 w \cos^2 w {dw} \right),\\ \\ s_{0,35}(\theta)=&-2\, \cos^6 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw}+\int _{0}^{ \theta}\!{{\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw}\\ &- \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw}-4\, \cos^6 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^4 w \sin w {dw}\\ &+2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^ { \sin^2 w}} \cos^4 w \sin w {dw}-2\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^4 w \sin w {dw},\\ \end{array} } $$ $$ \small{ \begin{array}{ll} s_{0,36}(\theta)=&-\cos \theta \sin \theta \left( - \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}}\cos w {dw }-\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin w ^ {2}}}\cos w {dw} \right.\\ &+2\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}}\cos w {dw} +2\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw}+2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw }\\ &-4\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \cos^5 w {dw} - \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw }-\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin w ^ {2}}} \cos^3 w {dw}\\ &\left.+2\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw} \right),\\ \\ s_{0,37}(\theta)=&- \cos^3 \theta \sin \theta \left( -2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw }\right.-4\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \cos^5 w {dw}\\ &+\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw} \left.+2\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw } \right),\\ \\ s_{0,38}(\theta)=&- \cos^2 \theta \left( 2\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw} \right.+\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw}\\ &-3\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw} +4\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^4 w \sin w {dw}\\ &+2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^4 w \sin w {dw} \left.-6\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^4 w \sin w {dw} \right),\\ \\ s_{0,39}(\theta)=&\cos \theta \sin \theta \left( 2\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw}+2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw} \right. -4\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin w ^{2 }}} \cos^5 w {dw}\\ &- \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw} \left. 
-\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw }+2\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \cos^3 w {dw} \right),\\ \end{array} } $$ $$ \small{ \begin{array}{ll} s_{0,40}(\theta)=&2\, \cos^6 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw}-\int _{0}^{ \theta}\!{{\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw} + \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^2 w \sin w {dw}\\ & -4\, \cos^6 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^4 w \sin w {dw} +2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^ { \sin^2 w}} \cos^4 w \sin w {dw}-2\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^4 w \sin w {dw},\\ \\ s_{0,41}(\theta)=&- \cos^2 \theta \left( 2\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \sin^3 w {dw} +\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \sin^3 w {dw } \right.-3\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \sin^3 w {dw}\\ &+4\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \sin^3 w \cos^2 w {dw} \left.+2\,\displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \sin^3 w \cos^2 w {dw}-6\, \cos^2 \theta \int _{0}^{ \theta}\!{{\rm e}^{ \sin^2 w}} \sin^3 w \cos^2 w {dw} \right),\\ \end{array} } $$ $$ \small{ \begin{array}{ll} s_{0,42}(\theta)=&-\cos \theta \sin \theta \left( 2\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}}\cos w {dw }+\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin w ^ {2}}}\cos w {dw} \right.-3\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}}\cos w {dw}\\ &-4\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw}-2\,\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^5 w {dw }+6\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{ {\rm e}^{ \sin^2 w}} \cos^5 w {dw}\\ & +2\, \cos^4 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {d w}+\displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw} \left.-3\, \cos^2 \theta \displaystyle{\int _{0}^{\theta}}\!{{\rm e}^{ \sin^2 w}} \cos^3 w {dw} \right). \end{array} } $$ Now we have $$ \small{ \begin{array}{ll} \tilde{s}_1(\theta)=&\tilde{s}_{1,1}(\theta)c_{03}c_{12}+\tilde{s}_{1,2}(\theta)c_{03}c_{30}+\tilde{s}_{1,3}(\theta)a_{03}c_{03} +\tilde{s}_{1,4}(\theta)a_{21}c_{03}+\tilde{s}_{1,5}(\theta)c_{30}c_{12} +\tilde{s}_{1,6}(\theta)c_{12}a_{21} \\ &+\tilde{s}_{1,7}(\theta)c_{30}a_{21}+\tilde{s}_{1,8}(\theta)a_{03}a_{21}+\tilde{s}_{1,9}(\theta)a_{03}c_{30}+\tilde{s}_{1,10}(\theta)a_{03}c_{12}+\tilde{s}_{1,11}(\theta)a_{12}c_{03} +\tilde{s}_{1,12}(\theta)a_{12}c_{21}\\ &+\tilde{s}_{1,13}(\theta)c_{03}c_{21} +\tilde{s}_{1,14}(\theta)a_{21}c_{21}+\tilde{s}_{1,15}(\theta)a_{21}a_{12} +\tilde{s}_{1,16}(\theta)c_{12}a_{12}+\tilde{s}_{1,17}(\theta)a_{03}c_{21}\\ &+\tilde{s}_{1,18}(\theta)a_{03}a_{12} +\tilde{s}_{1,19}(\theta)c_{30}c_{21}+\tilde{s}_{1,20}(\theta)c_{30}a_{12} +\tilde{s}_{1,21}(\theta)c_{12}c_{21}+\tilde{s}_{1,22}(\theta)a_{12}^2\\ &+\tilde{s}_{1,23}(\theta)c_{21}^2+\tilde{s}_{1,24}(\theta)c_{30}^2+ \tilde{s}_{1,25}(\theta)a_{03}^2+\tilde{s}_{1,26}(\theta)c_{03}^2+\tilde{s}_{1,27}(\theta)a_{21}^2+\tilde{s}_{1,28}(\theta)c_{12}^2, \end{array} } $$ and $\tilde{s}_{1,i}(\theta)$ for $i=1\cdots,28$ satisfying the following expressions. 
$$ \small{ \begin{array}{ll} \tilde{s}_{1,1}(\theta)=&-\dfrac{1}{12}\, \left( 16\,\cos^8 \theta+34\, \cos^6 \theta+37\,\cos^4 \theta+8\,\cos^2 \theta-5 \right) \left( \cos^2 \theta-1 \right) ^{2},\\ \\ \tilde{s}_{1,2}(\theta)=&\dfrac{1}{12}\, \left( \cos^2 \theta-1 \right) \left( 16\,\cos^{10} \theta+38\,\cos^8 \theta+53\, \cos^6 \theta+15\,\cos^4 \theta-7\,\cos^2 \theta-7\right), \\ \\ \tilde{s}_{1,3}(\theta)=&-\dfrac{1}{12}\, \left( 16\,\cos^8 \theta+14\, \cos^6 \theta+15\,\cos^4 \theta-16\,\cos^2 \theta+1 \right) \left( \cos^2 \theta-1 \right) ^{2},\\ \\ \tilde{s}_{1,4}(\theta)=&\dfrac{1}{12}\, \left( \cos^2 \theta-1 \right) \left( 16\, \cos^{10} \theta +18\,\cos^8 \theta+19\, \cos^6 \theta-15\,\cos^4 \theta-\cos^2 \theta-1 \right),\\ \\ \tilde{s}_{1,5}(\theta)=&\dfrac{1}{12}\,\cos \theta\sin^5 \theta \left( 2\,\cos^2 \theta +1 \right) \left( 8\,\cos^4 \theta+ 12\,\cos^2 \theta+7 \right),\\ \\ \tilde{s}_{1,6}(\theta)=&\dfrac{1}{12}\,\cos \theta\sin^5 \theta \left( 16\,\cos^6 \theta +12\,\cos^4 \theta-2\,\cos^2 \theta+1 \right),\\ \\ \tilde{s}_{1,7}(\theta)=&\dfrac{1}{6}\,\cos^3 \theta \sin^3 \theta \left( 8\,\cos^6 \theta+8\, \cos^4 \theta +5\,\cos^2 \theta-3 \right),\\ \\ \tilde{s}_{1,8}(\theta)=&\dfrac{1}{12}\,\cos \theta\sin^5 \theta \left( -1+2\, \cos^2 \theta \right) \left( 8\,\cos^4 \theta+ 1 \right),\\ \\ \tilde{s}_{1,9}(\theta)=&\dfrac{1}{12}\,\cos \theta\sin^5 \theta \left( 16\, \cos^6 \theta +12\,\cos^4 \theta+6\,\cos^2 \theta-7 \right),\\ \\ \tilde{s}_{1,10}(\theta)=&\dfrac{1}{6}\,\cos \theta\sin^7 \theta \left( 8\,\cos^4 \theta +4\,\cos^2 \theta-3 \right),\\ \\ \tilde{s}_{1,11}(\theta)=&-\dfrac{4}{3}\,\cos^3 \theta \sin^3 \theta \left(\cos^6 \theta+\cos^4 \theta+ \cos^2 \theta-1 \right),\\ \\ \tilde{s}_{1,12}(\theta)=&-\dfrac{2}{3}\, \cos^5 \theta \sin^3 \theta \left( 4\,\cos^4 \theta-\cos^2 \theta- 1 \right),\\ \\ \tilde{s}_{1,13}(\theta)=&\dfrac{2}{3}\,\cos^3 \theta\sin \theta \left( -5\,\cos^2 \theta-2+ 5\,\cos^6 \theta+4\,\cos^4 \theta+4\,\cos^8 \theta \right),\\ \\ \end{array} } $$ $$ \small{ \begin{array}{ll} \tilde{s}_{1,14}(\theta)=&-\dfrac{1}{12}\, \sin^2 \theta \cos^2 \theta \left( 32\,\cos^8 \theta-4\, \cos^6 \theta-6\,\cos^4 \theta-3\,\cos^2 \theta-1 \right),\\ \\ \tilde{s}_{1,15}(\theta)=&\dfrac{1}{12}\, \sin^4 \theta \cos^2 \theta \left( -1+2\,\cos^2 \theta \right) \left( 8\,\cos^4 \theta +\cos^2 \theta+ 1\right),\\ \\ \tilde{s}_{1,16}(\theta)=&\dfrac{1}{12}\, \sin^6 \theta \cos^2 \theta \left( 16\,\cos^4 \theta+10\,\cos^2 \theta -5\right), \\ \\ \tilde{s}_{1,17}(\theta)=&-\dfrac{1}{12}\, \sin^4 \theta \cos^2 \theta \left( 32\,\cos^6 \theta-12\, \cos^4 \theta-6\,\cos^2 \theta+1 \right), \end{array} } $$ $$ \small{ \begin{array}{ll} \tilde{s}_{1,18}(\theta)=&\dfrac{1}{12}\, \sin^6 \theta \cos^2 \theta \left( 8\,\cos^2 \theta-1 \right) \left( -1+2\, \cos^2 \theta \right),\\ \\ \tilde{s}_{1,19}(\theta)=&-\dfrac{1}{12}\, \sin^2 \theta \cos^2 \theta \left( 32\,\cos^8 \theta+36\, \cos^6 \theta+14\,\cos^4 \theta-21\,\cos^2 \theta-7 \right),\\ \\ \tilde{s}_{1,20}(\theta)=&\dfrac{1}{12}\, \sin^4 \theta \cos^2 \theta \left( 16\,\cos^6 \theta+14\,\cos^4 \theta +7\,\cos^2 \theta-7\right) ,\\ \\ \tilde{s}_{1,21}(\theta)=&-\dfrac{1}{12}\, \sin^4 \theta \cos^2 \theta \left( 32\,\cos^6 \theta+28\,\cos^4 \theta -10\,\cos^2 \theta-5\right), \\ \\ \tilde{s}_{1,22}(\theta)=&\dfrac{1}{3}\, \cos^5 \theta \sin^5 \theta \left( -1+2\,\cos^2 \theta \right),\\ \\ \tilde{s}_{1,23}(\theta)=&\dfrac{1}{3}\, \cos^5 \theta \sin \theta \left( 2\,\cos^2 \theta+1 \right) \left( 4\,\cos^4 
\theta-1-2 \,\cos^2 \theta \right),\\ \\ \tilde{s}_{1,24}(\theta)=&\dfrac{1}{12}\,\cos^3 \theta \sin^3 \theta \left( 2\,\cos^2 \theta+1 \right) \left( 4\,\cos^4 \theta+7\, \cos^2 \theta+7 \right),\\ \\ \tilde{s}_{1,25}(\theta)=&\dfrac{1}{12}\,\cos \theta\sin^7 \theta \left( 2\,\cos \theta+1 \right) \left( 2 \,\cos \theta-1 \right) \left( -1+2\,\cos^2 \theta \right),\\ \\ \tilde{s}_{1,26}(\theta)=&\dfrac{1}{3}\,\cos \theta\sin \theta \left( \cos^4 \theta+2\,\cos^2 \theta+3 \right) \left( 2\,\cos^6 \theta-1+\cos^4 \theta \right),\\ \\ \tilde{s}_{1,27}(\theta)=&\dfrac{1}{12}\,\cos^3 \theta \sin^3 \theta \left( -1+2\,\cos^2 \theta \right) \left( 4\,\cos^4 \theta+\cos^2 \theta+1\right), \\ \\ \tilde{s}_{1,28}(\theta)=&\dfrac{1}{12}\,\cos \theta \sin^7 \theta \left( 4\,\cos^2 \theta +5 \right) \left( 2\,\cos^2 \theta+1 \right).\\ \end{array} } $$ \end{document}
\begin{document} \newcommand{1312.4581}{1312.4581} \allowdisplaybreaks \renewcommand{031}{031} \FirstPageHeading \ShortArticleName{Invariants and Inf\/initesimal Transformations for Contact Sub-Lorentzian Structures} \ArticleName{Invariants and Inf\/initesimal Transformations\\ for Contact Sub-Lorentzian Structures\\ on 3-Dimensional Manifolds} \Author{Marek GROCHOWSKI~$^{\dag\ddag}$ and Ben WARHURST~$^\S$} \AuthorNameForHeading{M.~Grochowski and B.~Warhurst} \Address{$^\dag$~Faculty of Mathematics and Natural Sciences, Cardinal Stefan Wyszy\'{n}ski University,\\ \hphantom{$^\dag$}~ul. Dewajtis 5, 01-815 Waszawa, Poland} \EmailD{\href{mailto:[email protected]}{[email protected]}} \Address{$^\ddag$~Institute of Mathematics, Polish Academy of Sciences,\\ \hphantom{$^\ddag$}~ul.~\'{S}niadeckich 8, 00-950 Warszawa, Poland} \EmailD{\href{mailto:[email protected]}{[email protected]}} \Address{$^\S$~Institute of Mathematics, The Faculty of Mathematics, Informatics and Mechanics,\\ \hphantom{$^\S$}~University of Warsaw, Banacha 2, 02-097 Warszawa, Poland} \EmailD{\href{mailto:[email protected]}{[email protected]}} \ArticleDates{Received October 10, 2014, in f\/inal form March 30, 2015; Published online April 17, 2015} \Abstract{In this article we develop some elementary aspects of a~theory of symmetry in sub-Lorentzian geometry. First of all we construct invariants characterizing isometric classes of sub-Lorentzian contact $3$~manifolds. Next we characterize vector f\/ields which generate isometric and conformal symmetries in general sub-Lorentzian manifolds. We then focus attention back to the case where the underlying manifold is a~contact $3$~manifold and more specif\/ically when the manifold is also a~Lie group and the structure is left-invariant.} \Keywords{sub-Lorentzian; contact distribution; left-invariant; symmetry} \Classification{53B30; 53A55; 34C14} \section{Introduction} \subsection{Basic notions and motivation} Sub-Lorentzian geometry is a~relatively new subject although it does fall within the scope of broader perspectives on geometry. For instance the work of Berestovskii and Gichev~\cite{BerGic} on metrized semigroups is perhaps the broadest perspective one could take on the subject or alternatively a~more closely related generalisation is the notion of para-CR-geometry see~\cite{HN}. The aim in this work is to look specif\/ically at sub-Lorentzian geometry and so in this section we only present the notions that are required for the formulation of the main results of the paper. For more details and facts concerning sub-Lorentzian geometry, the reader is referred to~\cite{g6} and the references therein (see also~\cite{Grong, markin}). The structures which support the sub-Lorentzian structures in this paper are identical to those which support the analogous structures in the sub-Riemannian setting and so the reader is also referred to the paper of Agrachev and Barilari~\cite{Agr2} and the paper of Falbel and Gorodski~\cite{FalGor} for background on these structures. Let~$M$ be a~smooth manifold. A~\textit{sub-Lorentzian structure} on~$M$ is a~pair $(H,g)$, where~$H$ is a~bracket generating distribution of constant rank on~$M$, and~$g$ is a~Lorentzian metric on~$H$. A~triple $(M,H,g)$, where $(H,g)$ is a~sub-Lorentzian structure on~$M$, will be called a~\textit{sub-Lorentzian manifold}. For any $q\in M$, a~vector $v\in H_{q}$ will be called \textit{horizontal}. A~vector f\/ield~$X$ on~$M$ is horizontal if it takes values in~$H$. 
We will denote the set of all local horizontal vector f\/ields by $\Gamma(H)$. To be more precise, $X \in \Gamma (H)$ if and only if~$X$ is a~horizontal vector f\/ield def\/ined on some open subset $U\subset M$. A nonzero vector $v\in H_{q}$ is said to be \textit{timelike} (resp.\ \textit{spacelike}, \textit{null, nonspacelike}) if $g(v,v)<0$ (resp.\ $g(v,v)>0$, $g(v,v)=0$, $g(v,v)\leq 0$); moreover, the zero vector is def\/ined to be spacelike. Vector f\/ields are categorized analogously, according to whether their values lie in exactly one of the four categories mentioned above. An absolutely continuous curve $\gamma:[a,b]\longrightarrow M$ is called horizontal if $\dot{\gamma}(t)\in H_{\gamma(t)}$ a.e.\ on $[a,b]$. A~horizontal curve $\gamma:[a,b]\longrightarrow M$ is timelike (spacelike, null, nonspacelike) if $\dot{\gamma}(t)$ is timelike (spacelike, null, nonspacelike) a.e.~on $[a,b]$. If $(H,g)$ is a~sub-Lorentzian metric on~$M$ then, as shown in~\cite{groGlob},~$H$ can be represented as a~direct sum $H=H^{-}\oplus H^{+}$ of subdistributions such that $\rank H^{-}=1$ and the restriction of~$g$ to~$H^{-}$ (resp.\ to $H^{+}$) is negative (resp.\ positive) def\/inite. This type of decomposition will be called \textit{a causal decomposition of}~$H$. Now, by \textit{a time $($resp.\ space$)$ orientation} of $(M,H,g)$ we mean an orientation of the vector bundle $H^{-}\longrightarrow M$ (resp.\ $H^{+}\longrightarrow M$). This def\/inition requires some explanation since causal decompositions are not unique. So suppose that we are given two causal decompositions $H=H_{1}^{-}\oplus H_{1}^{+}=H_{2}^{-}\oplus H_{2}^{+}$ and that the bundles $H_{i}^{-}\longrightarrow M$ (resp.\ $H_{i}^{+}\longrightarrow M$), $i=1,2$, are oriented. We say that the two orientations of $H_{1}^{-}\longrightarrow M$ and $H_{2}^{-}\longrightarrow M$ (resp.\ of $H_{1}^{+}\longrightarrow M$ and $H_{2}^{+}\longrightarrow M$) def\/ine the same time (resp.\ space) orientation of $(M,H,g)$ if around any point of~$M$ there exist local sections $X_{1}^{(i)}$ of $H_{i}^{-}\longrightarrow M$ which agree with the given orientations of $H_{i}^{-}\longrightarrow M$ (resp.\ local sections $X_{2}^{(i)},\dots,X_{k}^{(i)}$ of $H_{i}^{+}\longrightarrow M$ which agree with the given orientations of $H_{i}^{+}\longrightarrow M$), $i=1,2$, such that $g\big(X_{1}^{(1)},X_{1}^{(2)}\big)<0$ (resp.\ $\det \big(g\big(X_{i}^{(1)},X_{j}^{(2)}\big)_{i,j=2,\dots,k}\big)>0$); by~$k$ we denote the rank of~$H$. Since a~line bundle is orientable if and only if it is trivial, time orientability of $(M,H,g)$ is equivalent to the existence of a~continuous timelike vector f\/ield on~$M$. A~choice of such a~timelike f\/ield is called a~time orientation of $(M,H,g)$. \looseness=-1 Suppose that $(M,H,g)$ is time oriented by a~vector f\/ield~$X$. A~nonspacelike $v\in H_{q}$ will be called \textit{future} (resp.\ \textit{past}) \textit{directed} if $g(v,X(q))<0$ (resp.\ $g(v,X(q))>0$). A~horizontal curve $\gamma:[a,b]\longrightarrow M$ is called timelike future (past) directed if $\dot{\gamma}(t)$ is timelike future (past) directed a.e. Similar classif\/ications can be made for other types of curves, e.g.\ nonspacelike future directed etc. If $q_{0}\in M$ is a~point and~$U$ is a~neighborhood of $q_{0}$, then by the future timelike (nonspacelike, null) reachable set from $q_{0}$ relative to~$U$ we mean the set of endpoints of all timelike (nonspacelike, null) future directed curves that start from $q_{0}$ and are contained in~$U$.
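For instance, on $M=\mathbb{R}^{3}$ with coordinates $(x,y,z)$ one may take, in one common coordinate normalization of the sub-Lorentzian Heisenberg structure (the signs of the $\frac{1}{2}$-terms vary in the literature),
\begin{gather*}
H=\operatorname{span}\{X_{1},X_{2}\}, \qquad X_{1}=\partial_{x}+\frac{y}{2}\partial_{z}, \qquad X_{2}=\partial_{y}-\frac{x}{2}\partial_{z},\\
g(X_{1},X_{1})=-1, \qquad g(X_{1},X_{2})=0, \qquad g(X_{2},X_{2})=1.
\end{gather*}
Since $[X_{2},X_{1}]=\partial_{z}\notin H$, the distribution~$H$ is bracket generating; $H^{-}=\operatorname{span}\{X_{1}\}$, $H^{+}=\operatorname{span}\{X_{2}\}$ is a~causal decomposition, and $X_{1}$ (resp.\ $X_{2}$) may serve as a~time (resp.\ space) orientation.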
Now we def\/ine a~very important notion that will play a~crucial role in the sequel. As is known~\cite{g6}, any sub-Lorentzian structure $(H,g)$ determines the so-called \textit{geodesic Hamiltonian}, which is def\/ined as follows. The existence of the structure $(H,g)$ is equivalent to the existence of the f\/iber bundle morphism $G:T^{\ast}M\longrightarrow H$ covering the identity def\/ined by $G(\lambda)=(\lambda_p |_{H_p})^\sharp$, where $\lambda \in T_p^*M$ and $\sharp$ denotes the musical isomorphism. In particular, if $v$, $w$ are any horizontal vectors, then $g(v,w)=\left\langle \xi,G\eta \right\rangle=\left\langle \eta,G\xi \right\rangle$ whenever $\xi \in G^{-1}(v)$, $\eta \in G^{-1}(w)$. The geodesic Hamiltonian is the map $h:T^{\ast}M\longrightarrow \mathbb{R}$ def\/ined~by \begin{gather*} h(\lambda)=\frac{1}{2}\left\langle \lambda,G\lambda \right\rangle. \end{gather*} If $X_{1},\dots,X_{k}$ is an orthonormal basis for $(H,g)$ with a~time orientation $X_{1}$, then \begin{gather*} h|_{T_{q}^{\ast}M}(\lambda)=-\frac{1}{2}\left\langle \lambda,X_{1}(q)\right\rangle^{2}+\frac{1}{2}\sum\limits_{i=2}^{k}\left\langle \lambda,X_{i}(q)\right\rangle^{2}. \end{gather*} A~horizontal curve $\gamma:[a,b]\longrightarrow M$ is said to be a~\textit{Hamiltonian geodesic} if there exists $\Gamma:[a,b]\longrightarrow T^{\ast}M$ such that $\dot{\Gamma}=\vec{h}(\Gamma)$ and $\pi (\Gamma (t))=\gamma(t)$ on $[a,b]$; by $\pi:T^{\ast}M\longrightarrow M$ we denote the canonical projection, and $\vec{h}$ is the Hamiltonian vector f\/ield corresponding to~$h$. Let $\gamma:[a,b]\longrightarrow M$ be a~nonspacelike curve. The non-negative number \begin{gather*} L(\gamma)=\int_{a}^{b}\left\vert g(\dot{\gamma}(t), \dot{\gamma}(t))\right\vert^{1/2}dt \end{gather*} is called the \textit{sub-Lorentzian length of a~curve} $\gamma$. If $U\subset M$ is an open subset, then the \textit{$($local$)$ sub-Lorentzian distance relative to} $U$ is the function $d[U]:U\times U\longrightarrow \lbrack 0,+\infty]$ def\/ined as follows: for $q_{1},q_{2}\in U$, let $\Omega_{q_{1},q_{2}}^{nspc}(U)$ denote the set of all nonspacelike future directed curves contained in~$U$ which join $q_{1}$ to $q_{2}$; then \begin{gather*} d[U](q_{1},q_{2})=\begin{cases} \sup \left\{L(\gamma):\gamma \in \Omega_{q_{1},q_{2}}^{nspc}(U)\right\}: & \Omega_{q_{1},q_{2}}^{nspc}(U)\neq \varnothing, \\ 0: & \Omega_{q_{1},q_{2}}^{nspc}(U)=\varnothing \end{cases} \end{gather*} (if $\Omega_{q,q}^{nspc}(U)$ is non-empty for a~$q\in U$ then $d[U](q,q)=\infty $). A~nonspacelike future directed curve $\gamma:[a,b]\longrightarrow U$ is called a~$U$-\textit{maximizer} if $d[U](\gamma (a),\gamma (b))=L(\gamma)$. It can be proved (see~\cite{g6}) that every suf\/f\/iciently small subarc of every nonspacelike future directed Hamiltonian geodesic is a~$U$-maximizer for suitably chosen~$U$. Suppose now that we are given two sub-Lorentzian manifolds $(M_{i},H_{i},g_{i})$, $i=1,2$. A~dif\/feo\-morphism $\varphi:M_{1}\longrightarrow M_{2}$ is said to be a~\textit{sub-Lorentzian isometry} if $d\varphi (H_{1})\subset H_{2}$ and for each $q\in M_{1}$, the mapping $d\varphi_{q}:(H_{1})_{q}\longrightarrow (H_{2})_{\varphi (q)}$ is a~linear isometry, i.e., for every $v_{1},v_{2}\in (H_{1})_{q}$ it follows that \begin{gather*} g_{1}(v_{1},v_{2})=g_{2}(d\varphi_{q}(v_{1}),d\varphi_{q}(v_{2})). \end{gather*} Of course, any isometry maps timelike curves from $M_{1}$ to timelike curves on $M_{2}$. The same holds for spacelike and null curves.
Moreover, isometries preserve the sub-Lorentzian length of nonspacelike curves. If $(M_{i},H_{i},g_{i})$, $i=1,2$, are both time- and space-oriented, then we can distinguish among all isometries those that preserve one of the orientations or both of them. More precisely, suppose that $\varphi:M_{1}\longrightarrow M_{2}$ is an isometry. Let $H_{1}=H_{1}^{-}\oplus H_{1}^{+}$ be a~causal decomposition with given orientation on $H_{1}^{\pm}$. Let $(H_{2}^{-})_{\varphi (q)}=d\varphi_{q}(H_{1}^{-})_{q}$, and $(H_{2}^{+})_{\varphi (q)}=d\varphi_{q}(H_{1}^{+})_{q}$, $q\in M$. Then $H_{2}=H_{2}^{-}\oplus H_{2}^{+}$ is again a~causal decomposition where the summands $H_{2}^{\pm}$ inherit the orientation carried from~$H_{1}^{\pm}$ by~$\varphi$. Now we say that $\varphi$ preserves time (resp.\ space) orientation if the orientation of~$H_{2}^{-}$ (resp.\ $H_{2}^{+}$) induced by $\varphi$ agrees with the time (resp.\ space) orientation of $(M_{2},H_{2},g_{2})$. An isometry that preserves time and space orientation will be called \textit{a} $ts$\textit{-isometry}. It is clear that any $ts$-isometry preserves Hamiltonian geodesics, maximizers, and local sub-Lorentzian distance functions. Notice furthermore that the set of all isometries $(M,H,g)\longrightarrow (M,H,g)$ is a~Lie group and the set of all $ts$-isometries forms the connected component containing the identity. A sub-Lorentzian manifold $(M,H,g)$ is called a~\textit{contact sub-Lorentzian manifold} if~$H$ is a~contact distribution on~$M$. Among sub-Lorentzian manifolds, those which are contact seem to be the easiest to study and hence the best known. Contact sub-Lorentzian manifolds are studied, for instance, in the papers~\cite{g2, bcc2004, g7, Gro02, Grong, Huang, markin, markin2}. The investigations go in two directions. The f\/irst addresses global aspects, e.g., in~\cite{bcc2004, g7} the Heisenberg sub-Lorentzian metric is treated. More precisely, the future timelike, nonspacelike and null reachable sets from a~point are computed, and a~certain estimate on the distance function is given. Moreover, it is shown that the future timelike conjugate locus of the origin is empty, while the future null conjugate locus equals the union of the two null future directed Hamiltonian geodesics starting from the origin. In turn, in~\cite{markin} and~\cite{g7} it is proved that the set reachable from the origin by future directed timelike Hamiltonian geodesics coincides with the future timelike reachable set from the origin. In~\cite{markin} the authors also study the set reachable by spacelike Hamiltonian geodesics and prove the uniqueness of geodesics in the Heisenberg case. Next, in the papers~\cite{markin, markin2} the so-called $\mathbb{H}$-type groups (i.e.~higher-dimensional analogues of the 3D Heisenberg group) with suitable sub-Lorentzian metrics are studied, and the main emphasis is put on the problem of connectivity by geodesics, i.e.,~given two points $q_{1}$, $q_{2}$, to determine how many geodesics joining~$q_{1}$ to~$q_{2}$ exist. A~similar problem is also dealt with in~\cite{Huang}. On the other hand, in~\cite{Grong} the group ${\rm SL}(2,\mathbb{R})$ with a~sub-Lorentzian metric is studied. As will become clear below, the cases of the Heisenberg group and that of ${\rm SL}(2,\mathbb{R})$ are especially interesting for us because these are exactly the cases that arise when the invariant $\tilde{h}$ (def\/ined below) vanishes.
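For the coordinate model of the Heisenberg structure recalled above (with $X_{1}=\partial_{x}+\frac{y}{2}\partial_{z}$, $X_{2}=\partial_{y}-\frac{x}{2}\partial_{z}$, in the normalization chosen there), writing a~covector as $\lambda=p_{x}dx+p_{y}dy+p_{z}dz$, the geodesic Hamiltonian takes the explicit form
\begin{gather*}
h(\lambda)=-\frac{1}{2}\Big(p_{x}+\frac{y}{2}p_{z}\Big)^{2}+\frac{1}{2}\Big(p_{y}-\frac{x}{2}p_{z}\Big)^{2},
\end{gather*}
which, up to the choice of normalization, is the f\/lat Heisenberg structure treated in the papers cited above.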
The other direction of studies concerns the local situation and is based on the construction (see, e.g.,~\cite{g2, Gro01, Gro02}) of local normal forms. Such local normal forms depend on two functional parameters and permit one to view general structures as perturbations of f\/lat ones. This fact allows one to generalize some global results that hold in the f\/lat case to local results for general structures describable by the mentioned normal forms. As one can see, problems connected with isometric and conformal symmetry have not been examined explicitly, although in broader contexts, such as parabolic geometry and Cartan's equivalence method, there are applicable results. The aim of this paper is to embark on f\/illing this gap. More precisely, f\/irst we construct invariants for contact sub-Lorentzian manifolds $(M,H,g)$ with $\dim M=3$, more or less in the same way as is done in the contact sub-Riemannian case~-- cf.~\cite{Agr2}. Our invariants are: a~$(1,1)$-tensor $\tilde{h}$ on~$H$ and a~smooth function $\kappa$ on~$M$. Then, we consider in some detail the case where~$M$ is a~$3$-dimensional Lie group such that $\tilde{h}=0$. It turns out that in such a~case~$M$ is locally either the Heisenberg group or the universal cover of ${\rm SL}(2,\mathbb{R})$. In these two cases we describe inf\/initesimal isometries and, more generally, inf\/initesimal conformal transformations.

\subsection{The content of the paper}

In Section~\ref{Section2} we construct invariants for $ts$-oriented contact sub-Lorentzian metrics on 3D~manifolds. The construction follows the ideas of~\cite{Agr2}; however, the full analogy does not exist due to the special character of the indef\/inite case. Our main invariants for a~manifold $(M,H,g)$ are: a~smooth $(1,1)$-tensor $\tilde{h}$ on~$H$ and a~smooth function $\kappa$ on~$M$. These invariants provide necessary conditions for two contact sub-Lorentzian manifolds to be locally $ts$-isometric. We also consider another invariant $\chi$, arising from the eigenvalues of $\tilde{h}$, which to a~lesser extent also distinguishes the structure. The question as to whether $\{\tilde h, \kappa \}$ is a~complete set of invariants requires deeper ana\-ly\-sis using Cartan's theory and is deferred to a~forthcoming paper with Alexandr Medvedev~\cite{GMW}. In Section~\ref{SubLorentzInf} we def\/ine and prove basic properties of inf\/initesimal sub-Lorentzian isometries and conformal transformations. Then we notice that the invariant $\tilde{h}$ can be expressed in terms of the restricted Lie derivative of the metric~$g$ in the direction of the Reeb vector f\/ield. The immediate consequence of this latter fact is that the Reeb vector f\/ield $X_{0}$ is an inf\/initesimal isometry if and only if $\tilde{h}$ vanishes identically. Section~\ref{Section4} covers some other implications of certain combinations of the invariants vanishing. In particular, we demonstrate (see Proposition~\ref{smpcon}) that, without any assumptions on orientation, the condition $\chi=0$ and $\tilde{h}\neq 0$ implies the existence of a~line sub-bundle $L\rightarrow M$ of~$H$ on which the metric~$g$ is equal to zero. We then begin to focus on the condition $\tilde{h}=0$, where $\kappa$ comes to the fore. For example, when~$M$ is a~simply connected Lie group, we show that $\tilde{h}=0$ and $\kappa=0$ implies that~$M$ is the Heisenberg group~-- cf.~Corollary~\ref{CorHeis}, and $\tilde{h}=0$ and $\kappa \neq 0$ implies that $M$ is the universal cover of ${\rm SL}(2,\mathbb{R})$~-- see Corollary~\ref{CorSL2}.
This contrasts with the sub-Riemannian case where a~third group, namely ${\rm SU}(2)$, also appears. Section~\ref{Section6} is devoted to computing inf\/initesimal isometries and inf\/initesimal conformal transformations using Cartan's equivalence method. Finally, the appendix presents an example of an isometrically rigid sub-Lorentzian structure and possible applications of our invariants to a~non-contact case.

\section{Constructing the invariants}\label{Section2}

\subsection{Preliminaries}

Let $(M,H,g)$ be a~contact sub-Lorentzian manifold, $\dim M=3$, which is supposed to be both time and space oriented, or $ts$\textit{-oriented} for short. Since~$H$ is of rank~$2$, any causal decomposition $H=H^{-}\oplus H^{+}$ splits~$H$ into a~direct sum of line bundles. So in this case a~space orientation is just a~continuous spacelike vector f\/ield, and consequently~$H$ admits a~global basis. Let us f\/ix an orthonormal basis $X_{1},X_{2}$ for $(H,g)$, i.e. \begin{gather*} g(X_{1},X_{1})=-1, \qquad g(X_{1},X_{2})=0, \qquad g(X_{2},X_{2})=1, \end{gather*} where $X_{1}$ (resp.\ $X_{2}$) is a~time (resp.\ space) orientation. From now on we will work with $ts$\textit{-invariants}, i.e.,~with invariants relative to $ts$-isometries. However, as the reader will see, the space orientation is only an auxiliary notion here, and most of the results do not depend on it (some of them do not depend on an orientation at all). Let $\omega$ be a~contact $1$-form such that $H=\ker \omega$. Without loss of generality we may assume that $\omega$ is normalized so that \begin{gather*} d\omega (X_{1},X_{2})=\omega ([X_{2},X_{1}])=1. \end{gather*} Next, denote by $X_{0}$ the so-called \textit{Reeb vector f\/ield} on~$M$, which is def\/ined by \begin{gather} \omega (X_{0})=1, \qquad d\omega (X_{0},\cdot)=0. \label{Reeb} \end{gather} It is seen that $X_{0}$ is uniquely determined by the $ts$-oriented sub-Lorentzian structure. Using~\eqref{Reeb}, it is seen that the action of $\operatorname{ad}_{X_{0}}$ preserves the horizontality of vector f\/ields, i.e. \begin{gather} \operatorname{ad}_{X_{0}}(\Gamma (H))\subset \Gamma (H). \label{act} \end{gather} Now (as in~\cite{Agr2}) we introduce the structure functions. Thanks to~\eqref{act} and~\eqref{Reeb} we have \begin{gather} [X_{1},X_{0}]=c_{01}^{1}X_{1}+c_{01}^{2}X_{2}, \qquad [X_{2},X_{0}]=c_{02}^{1}X_{1}+c_{02}^{2}X_{2}, \nonumber \\ [X_{2},X_{1}]=c_{12}^{1}X_{1}+c_{12}^{2}X_{2}+X_{0}. \label{StrConst} \end{gather} Let $\nu_{0}$, $\nu_{1}$, $\nu_{2}$ be the dual basis of $1$-forms: $ \langle \nu_{i},X_{j} \rangle=\delta_{ij}$, $i,j=0,1,2$. Rewriting~\eqref{StrConst} in terms of the $\nu_{i}$'s we have \begin{gather} d\nu_{0}=\nu_{1}\wedge \nu_{2}, \qquad d\nu_{1}=c_{01}^{1}\nu_{0}\wedge \nu_{1}+c_{02}^{1}\nu_{0}\wedge \nu_{2}+c_{12}^{1}\nu_{1}\wedge \nu_{2}, \nonumber \\ d\nu_{2}=c_{01}^{2}\nu_{0}\wedge \nu_{1}+c_{02}^{2}\nu_{0}\wedge \nu_{2}+c_{12}^{2}\nu_{1}\wedge \nu_{2}. \label{StrConstForm} \end{gather} Dif\/ferentiating the f\/irst equation in~\eqref{StrConstForm} we obtain $0=d\nu_{1}\wedge \nu_{2}-\nu_{1}\wedge d\nu_{2}=(c_{01}^{1}+c_{02}^{2})\nu_{0}\wedge \nu_{1}\wedge \nu_{2}$, from which it follows that \begin{gather} c_{01}^{1}+c_{02}^{2}=0.
\label{eq1} \end{gather} \subsection{Induced bilinear form and linear operator} In the introduction we def\/ined the geodesic Hamiltonian~$h$ which can be written as \begin{gather*} h=-\frac{1}{2}h_{1}^{2}+\frac{1}{2}h_{2}^{2}, \end{gather*} where $h_{i}(\lambda)=\langle \lambda,X_{i}\rangle$, $i=1,2$. We also consider the function $h_{0}(\lambda)=\langle \lambda,X_{0}\rangle$ and observe that by def\/inition both~$h$ and $h_{0}$ are invariant with respect to the $ts$-oriented structure $(H,g)$. Therefore, the same is true of their Poisson bracket $\{h,h_{0}\}$ which, when evaluated at $q\in M$, gives a~symmetric bilinear form \begin{gather*} \{h,h_{0}\}_{q}: \ T_{q}^{\ast}M\times T_{q}^{\ast}M\longrightarrow \mathbb{R}. \end{gather*} If $\lambda \in T_{q}^{\ast}M$ then $\lambda=\sum\limits_{i=0}^{2}h_{i}(\lambda)\nu_{i}(q)$ and \begin{Lemma} \label{ala1} $\{h,h_{0}\}_{q}=-c_{01}^{1}h_{1}^{2}+(c_{02}^{1}-c_{01}^{2})h_{1}h_{2}+c_{02}^{2}h_{2}^{2}$. \end{Lemma} \begin{proof} The formula follows from $\{h,h_{0}\}=-h_{1}\{h_{1},h_{0}\}+h_{2}\{h_{2},h_{0}\}$, where we substitute $\{h_{i},h_{0}\} (\lambda)=\langle \lambda, [X_{i},X_{0}] \rangle$, and then use~\eqref{StrConst}. \end{proof} In the assertion of Lemma~\ref{ala1} and in many other places below we write $c_{jk}^{i}$ for $c_{jk}^{i}(q)$. It follows that $ \{h,h_{0} \}_{q}(\lambda,\cdot)=0$ whenever $\lambda \in H_{q}^{\perp}$ (by def\/inition $H_{q}^{\perp}$ is the set of covectors $\lambda \in T_{q}^{\ast}M$ such that $ \langle \lambda,v \rangle=0$ for every $v\in H_{q}$), so in fact \begin{gather*} \{h,h_{0} \}_{q}: \ T_{q}^{\ast}M/H_{q}^{\perp}\times T_{q}^{\ast}M/H_{q}^{\perp}\longrightarrow \mathbb{R}. \end{gather*} Let us recall the (mutually inverse) musical isomorphisms determined by the metric~$g$; these are $^{\sharp}:H^{\ast}\longrightarrow H$ and $^{\flat}:H\longrightarrow H^{\ast}$, where by def\/inition $(\nu_{1})^{\sharp}=-X_{1}$, $(\nu_{2})^{\sharp}=X_{2}$, $(X_{1})^{\flat}=-\nu_{1}$, $(X_{2})^{\flat}=\nu_{2}$. Now it is easy to see that the bundle morphism $G:T^{\ast}M\longrightarrow H$ from the introduction induces for each~$q$ a~natural identif\/ication \begin{gather*} F_{q}: \ T_{q}^{\ast}M/H_{q}^{\perp}\longrightarrow H_{q}, \qquad F([\alpha])= (\alpha_{|H_{q}} )^{\sharp}, \end{gather*} where $[\alpha]$ stands for the class of $\alpha \in T_{q}^{\ast}M$ modulo $H_{q}^{\perp}$; more precisely, $H_{q}^{\perp}$ is spanned by~$\nu_{0}$, and $F([\nu_{1}])=-X_{1}$, $F([\nu_{2}])=X_{2}$. This permits us to def\/ine a~bilinear symmetric form $\bar{h}_{q}:H_{q}\times H_{q}\longrightarrow \mathbb{R}$~by \begin{gather*} \bar{h}_{q}(v,w)= \{h,h_{0} \}_{q}\big(F_{q}^{-1}(v),F_{q}^{-1}(w)\big) . \end{gather*} Its matrix in the basis $X_{1}(q)$, $X_{2}(q)$ is \begin{gather} \left( \begin{matrix} -c_{01}^{1} &-\frac{1}{2}\big(c_{02}^{1}-c_{01}^{2}\big) \\ -\frac{1}{2}\big(c_{02}^{1}-c_{01}^{2}\big) & c_{02}^{2} \end{matrix} \right) . \label{mac} \end{gather} Finally we def\/ine a~linear mapping $\tilde{h}_{q}:H_{q}\longrightarrow H_{q}$ by the following formula: \begin{gather*} \tilde{h}_{q}(v)=\left(\bar{h}_{q}(v,\cdot)\right)^{\sharp}. \end{gather*} Using~\eqref{mac}, it is seen that the matrix of the operator $\tilde{h}_{q}$ in the basis $\{X_{1}(q),X_{2}(q)\}$ is equal to \begin{gather} \left( \begin{matrix} c_{01}^{1} & \frac{1}{2}(c_{02}^{1}-c_{01}^{2}) \\ -\frac{1}{2}(c_{02}^{1}-c_{01}^{2}) & c_{02}^{2} \end{matrix} \right).
\label{mac2} \end{gather} \subsection[The $ts$-invariants]{The $\boldsymbol{ts}$-invariants} By our construction, the eigenvalues and determinant of $\tilde{h}_{q}$, as well as $\tilde{h}_{q}$ itself, are all invariants for the $ts$-oriented structure $(H,g)$. Clearly $\det\tilde{h}_{q}=c_{01}^{1}c_{02}^{2}+\frac{1}{4}(c_{02}^{1}-c_{01}^{2})^{2}=-(c_{01}^{1})^{2}+\frac{1}{4}(c_{02}^{1}-c_{01}^{2})^{2}$. Since, in view of \eqref{eq1}, the trace of $\tilde{h}_{q}$ is equal to~$0$, the eigenvalues of $\tilde{h}_{q}$ are equal to $\pm \sqrt{(c_{01}^{1})^{2}-\frac{1}{4}(c_{02}^{1}-c_{01}^{2})^{2}}$. We can choose \begin{gather*} \chi=-\big(c_{01}^{1}\big)^{2}+\frac{1}{4}\big(c_{02}^{1}-c_{01}^{2}\big)^{2} \end{gather*} as a~functional $ts$-invariant for our structure. In analogy with the sub-Riemannian case~\cite{Agr2,Agr1,Agr0}, we consider the functional $ts$-invariant def\/ined as follows: \begin{gather*} \kappa=X_{2}\big(c_{12}^{1}\big)+X_{1}\big(c_{12}^{2}\big)-\big(c_{12}^{1}\big)^{2}+\big(c_{12}^{2}\big)^{2} -\frac{1}{2}\big(c_{01}^{2}+c_{02}^{1}\big). \end{gather*} Unlike the sub-Riemannian case where $\chi$ and $\kappa$ play the crucial role, it is $\tilde{h}$ and $\kappa$ that play the crucial role in the sub-Lorentzian setting. \begin{Proposition}\label{oro3} $\kappa$ is indeed a~$ts$-invariant. \end{Proposition} \subsubsection{Proof of Proposition~\ref{oro3}} Let $X_{1}$, $X_{2}$ be an orthonormal basis with a~time orientation $X_{1}$ and a~space orientation~$X_{2}$, and let $c_{jk}^{i}$ be the structure functions determined by this basis. Next, let $\theta=\theta (q)$ be a~smooth function and consider an orthonormal basis $Y_{1}$, $Y_{2}$ given~by \begin{gather} Y_{1}=X_{1}\cosh \theta+X_{2}\sinh \theta, \qquad Y_{2}=X_{1}\sinh \theta+X_{2}\cosh \theta. \label{transf1} \end{gather} Then $Y_{1}$ ($Y_{2}$) is a~time (space) orientation, and of course \begin{gather*} X_{1}=Y_{1}\cosh \theta-Y_{2}\sinh \theta, \qquad X_{2}=-Y_{1}\sinh \theta+Y_{2}\cosh \theta. \end{gather*} Let $d_{jk}^{i}$ be the structure functions determined by the basis $Y_{1}$, $Y_{2}$, i.e. \begin{gather*} [Y_{1},X_{0}]=d_{01}^{1}Y_{1}+d_{01}^{2}Y_{2}, \qquad [Y_{2},X_{0}]=d_{02}^{1}Y_{1}+d_{02}^{2}Y_{2}, \\ [Y_{2},Y_{1}]=d_{12}^{1}Y_{1}+d_{12}^{2}Y_{2}+X_{0}. \end{gather*} In order to prove Proposition~\ref{oro3} we need the following lemma. \begin{Lemma} \label{oro4} The following formulas hold true: \begin{gather} d_{02}^{1}=-X_{0}(\theta)+c_{02}^{1}\cosh^{2}\theta-c_{01}^{2}\sinh^{2}\theta+\big(c_{01}^{1}-c_{02}^{2}\big) \sinh \theta \cosh \theta, \nonumber \\ d_{02}^{2}=\big(c_{01}^{2}-c_{02}^{1}\big) \sinh \theta \cosh \theta+c_{02}^{2}\cosh^{2}\theta-c_{01}^{1}\sinh^{2}\theta, \nonumber \\ d_{01}^{1}=c_{01}^{1}\cosh^{2}\theta-c_{02}^{2}\sinh^{2}\theta+\big(c_{02}^{1}-c_{01}^{2}\big) \sinh \theta \cosh \theta, \nonumber \\ d_{01}^{2}=-X_{0}(\theta)+c_{01}^{2}\cosh^{2}\theta-c_{02}^{1}\sinh^{2}\theta+\big(c_{02}^{2}-c_{01}^{1}\big) \sinh \theta \cosh \theta, \nonumber \\ d_{12}^{1}=\big(c_{12}^{1}-X_{1}(\theta)\big) \cosh \theta-\big(X_{2}(\theta)+c_{12}^{2}\big) \sinh \theta, \nonumber \\ d_{12}^{2}=\big(X_{1}(\theta)-c_{12}^{1}\big) \sinh \theta+\big(X_{2}(\theta)+c_{12}^{2}\big) \cosh \theta. \label{StrConst2} \end{gather} \end{Lemma} \begin{proof} All formulas are proved by direct calculations.
For instance, using \eqref{transf1} we write \begin{gather*} [Y_{2},Y_{1}]=[X_{1}\sinh \theta+X_{2}\cosh \theta,X_{1}\cosh \theta+X_{2}\sinh \theta ] \\ \phantom{[Y_{2},Y_{1}]} =-X_{1}(\theta)X_{1}+X_{2}(\theta)X_{2}+[X_{2},X_{1}] . \end{gather*} Then using~\eqref{transf1} and~\eqref{StrConst} we arrive at \begin{gather*} -X_{1}(\theta)X_{1}+X_{2}(\theta)X_{2}+c_{12}^{1}X_{1}+c_{12}^{2}X_{2}+X_{0} \\ \qquad =-X_{1}(\theta)(Y_{1}\cosh \theta-Y_{2}\sinh \theta )+\allowbreak X_{2}(\theta)(-Y_{1}\sinh \theta+Y_{2}\cosh \theta ) \\ \qquad \phantom{=}{} +c_{12}^{1}(Y_{1}\cosh \theta-Y_{2}\sinh \theta )+\allowbreak c_{12}^{2}(-Y_{1}\sinh \theta +Y_{2}\cosh \theta )+X_{0}, \end{gather*} from which the f\/ifth and sixth equations in~\eqref{StrConst2} follow. \end{proof} Now, using Lemma~\ref{oro4}, we see that \begin{gather} \frac{1}{2}\left(d_{01}^{2}+d_{02}^{1}\right)=-X_{0}(\theta)+\frac{1}{2} \left(c_{01}^{2}+c_{02}^{1}\right) \label{shortref1} \end{gather} and \begin{gather} \left(d_{12}^{1}\right)^{2}-\left(d_{12}^{2}\right)^{2}=\left(X_{1}(\theta)-c_{12}^{1}\right)^{2}-\left(X_{2}(\theta)+c_{12}^{2}\right)^{2}. \label{shortref2} \end{gather} Finally, we compute $Y_{2}(d_{12}^{1})+Y_{1}(d_{12}^{2})$. To this end let us write \begin{gather} Y_{2}(d_{12}^{1})+Y_{1}(d_{12}^{2})=I+II, \label{shortref3} \end{gather} where \begin{gather*} I=X_{2}(c_{12}^{1})+X_{1}\left(c_{12}^{2}\right)-[X_{2},X_{1}](\theta) \\ \phantom{I} =X_{2}(c_{12}^{1})+X_{1}\left(c_{12}^{2}\right)-c_{12}^{1}X_{1}(\theta)-c_{12}^{2}X_{2}(\theta)-X_{0}(\theta) \end{gather*} and \begin{gather*} II=-c_{12}^{1}X_{1}(\theta)-X_{2}^{2}(\theta)-c_{12}^{2}X_{2}(\theta)+X_{1}^{2}(\theta). \end{gather*} Combining~\eqref{shortref1},~\eqref{shortref2} and~\eqref{shortref3} completes the proof of Proposition~\ref{oro3}. In summary, our basic $ts$-invariants are: a~smooth function $\kappa$ on~$M$ and a~$(1,1)$ tensor $\tilde{h}$ on~$H$. \section{Sub-Lorentzian inf\/initesimal isometries\\ and conformal transformations}\label{SubLorentzInf} In this section $(M,H,g)$ is a~f\/ixed sub-semi-Riemannian manifold, $\rank H$ and $\dim M$ are arbitrary. \begin{Definition} A dif\/feomorphism $f:M\longrightarrow M$ is called a~conformal transformation of $(M,H,g)$ if (i) $d_{q}f(H_{q}) \subseteq H_{f(q)}$ for every $q\in M$, (ii) there exists a~function $\rho \in C^{\infty}(M)$, $\rho >0$, such that \begin{gather*} g(d_{q}f(v),d_{q}f(w))=\rho (q)g(v,w) \end{gather*} for every $q\in M$ and every $v,w\in H_{q}$. If $\rho=1$ then~$f$ is an isometry of $(M,H,g)$. \end{Definition} Along with conformal transformations and isometries we consider their inf\/initesimal variants. \begin{Definition} A~vector f\/ield~$Z$ on $(M,H,g)$ is called an inf\/initesimal conformal transformation (resp.\ inf\/initesimal isometry) if its f\/low $\psi^t$ consists of conformal transformations (isometries). \end{Definition} Let us note a~simple lemma. \begin{Lemma} \label{lem1} Let~$Z$ be a~vector field on~$M$ and denote by $\psi^t$ its flow. Then the following conditions are equivalent: \begin{enumerate}\itemsep=0pt \item[$(a)$] $\operatorname{ad}_{Z}:\Gamma (H)\longrightarrow \Gamma (H)$; \item[$(b)$] $d_{q}\psi^t:H_{q}\longrightarrow H_{\psi^t(q)}$ for every $q\in M$ and every~$t$ such that $\psi^t$ is defined around~$q$. \end{enumerate} \end{Lemma} \begin{proof} Although the result is known, we give a~proof for the sake of completeness.
(a) $\Rightarrow$ (b) Following~\cite{LiuSuss}, we f\/ix a~point~$q$ and consider a~basis $X_{1},\dots,X_{k}$ of~$H$ def\/ined on a~neighborhood~$U$ of $q$. By our assumption, there exist smooth functions $\alpha_{ij}$, $i,j=1,\dots,k$, such that $\operatorname{ad}_{Z}X_{i}=\sum\limits_{j=1}^{k}\alpha_{ij}X_{j}$ on~$U$ and it follows that if $v_{i}(t)=(\psi_*^t X_{i}) (q)=d\psi^t X_{i}(\psi^{-t}q)$ then \begin{gather*} \dot{v}_{i}(t)=\left(\psi_*^t \operatorname{ad}_{Z}X_{i}\right) (q)=\sum\limits_{j=1}^{k}\left(\alpha_{ij}\circ\psi^{-t}\right) (q)\left(\psi_*^{t}X_{j}\right) (q)=\sum\limits_{j=1}^{k}\beta_{ij}(t)v_{j}(t), \end{gather*} where $\beta_{ij}(t)=(\alpha_{ij}\circ\psi^{-t}) (q)$. For any covector $\lambda \in T_{q}^{\ast}M$ which annihilates $H_{q}$, i.e.,~$ \left\langle \lambda,v\right\rangle=0$ for every $v\in H_{q}$, we obtain a~system of linear dif\/ferential equations for the functions $w_{i}(t)=\left\langle \lambda,v_{i}(t)\right\rangle$, $i=1,\dots,k$: \begin{gather*} \dot{w}_{i}(t)=\sum\limits_{j=1}^{k}\beta_{ij}(t)w_{j}(t) \end{gather*} with initial conditions $w_{i}(0)=0$, $i=1,\dots,k$, since $v_{i}(0)=X_{i}(q)\in H_{q}$. Therefore $w_{i}(t)=0$, and hence $(\psi_*^{t}X_{i}) (q)\in H_{q}$, for every~$t$ for which $v_{i}(t)$ is def\/ined, $i=1,\dots,k$. (b) $\Rightarrow$ (a) Take a~point~$q$ and any $X\in \Gamma (H)$; then for every~$t$ such that $\left\vert t\right\vert$ is suf\/f\/iciently small, we have $(\psi_*^{t}X) (q)\in H_{q}$ and it follows that $(\operatorname{ad}_{Z}X) (q)=\frac{d}{dt}|_{t=0}(\psi_*^{-t}X) (q)\in H_{q}$. \end{proof} Suppose now that $f:M\longrightarrow M$ is a~dif\/feomorphism such that $df(H)=H$ and let~$T$ be a~tensor of type $(0,2)$ on~$H$. We def\/ine a~pull-back $\tilde{f}^{\ast}:\Gamma (H)\times \Gamma (H)\longrightarrow C^{\infty}(M)$~by \begin{gather*} \big(\tilde{f}^{\ast}T\big)_{q}(X,Y)=T_{f(q)}(d_{q}f(X),d_{q}f(Y)) , \end{gather*} where $X,Y\in \Gamma (H)$ (tilde indicates that we restrict to horizontal vector f\/ields). We can now reformulate the def\/inition of conformal transformations in a~manner consistent with semi-Riemannian geometry: \textit{$f$ is a~conformal transformation of $(M,H,g)$ if and only if there exists a~function $\rho \in C^{\infty}(M)$, $\rho >0$, such that $\tilde{f}^{\ast}g=\rho g$ $($if $\rho=1$, $f$ is an isometry$)$.} Suppose that~$Z$ is a~vector f\/ield on~$M$ such that $\operatorname{ad}_{Z}:\Gamma (H)\longrightarrow \Gamma (H)$ and let $\psi^t$ denote the (local) f\/low of $Z$. Using Lemma~\ref{lem1}, again by analogy to the classical geometry, we can def\/ine a~local operator $\tilde{L}_{Z}T:\Gamma (H)\times \Gamma (H)\longrightarrow C^{\infty}(M)$ which will be called \textit{the restricted Lie derivative}: \begin{gather} \big(\tilde{L}_{Z}T\big)(q)=\frac{d}{dt}\bigg|_{t=0}\big(\big(\widetilde{\psi^t}\big)^{\ast}T\big) (q). \label{RestDef} \end{gather} It turns out that \begin{Proposition} A~vector field~$Z$ is an infinitesimal conformal transformation of $(M,H,g)$ if and only if the following conditions hold: \begin{enumerate}\itemsep=0pt \item[$(i)$] $\operatorname{ad}_{Z}:\Gamma (H)\longrightarrow \Gamma (H)$, and \item[$(ii)$] there exists a~function $\mu \in C^{\infty}(M)$ such that $\tilde{L}_{Z}g=\mu g$. \end{enumerate} \end{Proposition} \begin{proof} Remembering that we use only horizontal vector f\/ields, the proof is the same as in the classical geometry. Again $\psi^t$ is the f\/low of~$Z$. ``$\Rightarrow$'' By Lemma~\ref{lem1} we know that (i) is satisf\/ied.
If $(\widetilde{\psi^t})^{\ast}g=\rho_{t}g$, where for each~$t$ the function $\rho_{t}$ is smooth and positive, then it follows that \begin{gather*} \tilde{L}_{Z}\big(\big(\widetilde{\psi^t}\big)^{\ast}g\big)=\frac{d}{ds}\bigg|_{s=0} \big(\widetilde{\psi^{s}}\big)^{\ast} \big(\widetilde{\psi^t}\big)^{\ast}g= \frac{d}{dt} \big(\widetilde{\psi^t}\big)^{\ast}g=\frac{d}{dt} (\rho_{t}g )= \dot{\rho}_{t}g. \end{gather*} On the other hand, we also have that \begin{gather*} \tilde{L}_{Z}\big(\big(\widetilde{\psi^t}\big)^{\ast}g\big)=\tilde{L}_{Z}(\rho_{t}g)=Z(\rho_{t})g+\rho_{t}\big(\tilde{L}_{Z}g\big), \end{gather*} and so we see that $\tilde{L}_{Z}g=\mu g$, where \begin{gather*} \mu=\frac{\dot{\rho}_{t}-Z(\rho_{t})}{\rho_{t}} \end{gather*} (note that $\rho_{0}=1$). ``$\Leftarrow$'' From Lemma~\ref{lem1} we know that $d\psi^t$ preserves~$H$. From (ii) and~\eqref{RestDef} we have \begin{gather*} \frac{d}{dt}\big(\widetilde{\psi^t}\big)^{\ast}g=\big(\widetilde{\psi^t}\big)^{\ast}\big(\tilde{L}_{Z}g\big) =\big(\widetilde{\psi^t}\big)^{\ast}(\mu g)=\big(\mu \circ \psi^t\big)\big(\widetilde{\psi^t}\big)^{\ast}g, \end{gather*} which implies that $(\widetilde{\psi^t})^{\ast}g= \rho_{t}g$, where \begin{gather*} \rho_{t}(q)=\exp \int_{0}^{t}\mu (\psi^{s}(q)) ds. \tag*{\qed} \end{gather*} \renewcommand{\qed}{} \end{proof} By direct calculation we obtain \begin{gather} \big(\tilde{L}_{Z}g\big)(X,Y)=Z(g(X,Y))-g(\operatorname{ad}_{Z}X,Y)-g(X,\operatorname{ad}_{Z}Y) \label{LieMetr} \end{gather} for every $X,Y\in \Gamma (H)$, which gives the following two corollaries: \begin{Corollary} \label{wn1} $Z$ is an infinitesimal conformal transformation of $(M,H,g)$ if and only if there exists a~function $\mu \in C^{\infty}(M)$ such that for every $X,Y\in \Gamma (H)$ \begin{gather*} Z(g(X,Y))=g(\operatorname{ad}_{Z}X,Y)+g(X,\operatorname{ad}_{Z}Y)+\mu g(X,Y). \end{gather*} \end{Corollary} \begin{Corollary} $Z$ is an infinitesimal isometry of $(M,H,g)$ if and only if for every $X,Y\in \Gamma (H)$ \begin{gather*} Z(g(X,Y))=g(\operatorname{ad}_{Z}X,Y)+g(X,\operatorname{ad}_{Z}Y). \end{gather*} \end{Corollary} Furthermore: \begin{Corollary} If~$Z$ is an infinitesimal conformal transformation or isometry of $(M,H,g)$ then for every $n\geq 2$ and every $X,Y\in \Gamma (H)$ \begin{gather} \sum\limits_{k=0}^{n}\binom{n}{k}g\big(\operatorname{ad}_{Z}^{k}X,\operatorname{ad}_{Z}^{n-k}Y\big)=0. \label{r1} \end{gather} \begin{proof} Fix a~point \thinspace $q\in M$. Under the above notation, for any $n\in \mathbb{N}$ and suf\/f\/iciently small $\left\vert t\right\vert$ \begin{gather} \rho_{t}(\psi^tq)g\big(X(\psi^tq),Y(\psi^tq)\big)=g\big(d_{\psi^tq}\psi^{-t}(X),d_{\psi^tq}\psi^{-t}(Y)\big) \nonumber \\ \qquad =g\left(\sum\limits_{k=0}^{n}\frac{t^{k}}{k!}(\operatorname{ad}_{Z}^{k}X)(q),\sum\limits_{m=0}^{n}\frac{t^{m}}{m!}(\operatorname{ad}_{Z}^{m}Y)(q)\right)+o(t^{n}). \label{r2} \end{gather} Using Corollary~\ref{wn1} we can remove from~\eqref{r2} terms of order $0$ and $1$ with respect to~$t$. What we obtain is \begin{gather*} \sum\limits_{k=2}^{n}t^{k}\sum\limits_{i+j=k}\frac{1}{i!j!}g\big(\big(\operatorname{ad}_{Z}^{i}X\big)(q),\big(\operatorname{ad}_{Z}^{j}Y\big)(q)\big)+o(t^{n})=0 \end{gather*} for $\left\vert t\right\vert$ suf\/f\/iciently small, which gives~\eqref{r1}. \end{proof} \end{Corollary} \section{Some properties of invariants}\label{Section4} In this section we assume all sub-Lorentzian manifolds to be $ts$-oriented. Let us start with an obvious observation.
\begin{Proposition} Let $(M_{i},H_{i},g_{i})$, $i=1,2$, be contact $3$-dimensional $ts$-oriented sub-Lorent\-zian manifolds. Denote by $\chi_{i}$, $\kappa_{i}$, $\tilde{h}_{i}$ the corresponding objects defined by $(H_{i},g_{i})$, $i=1,2$. If $\varphi:(M_{1},H_{1},g_{1})\longrightarrow (M_{2},H_{2},g_{2})$ is a~local $ts$-isometry, then $\chi_{1}=\varphi^{\ast}\chi_{2}$, $\kappa_{1}=\varphi^{\ast}\kappa_{2}$, and $\tilde{h}_{1}=\varphi^{\ast}\tilde{h}_{2}$. \end{Proposition} Fix a~contact $3$-dimensional sub-Lorentzian manifold $(M,H,g)$. First of all let us notice how the invariant $\tilde{h}$ can be expressed in terms of the restricted Lie derivative of the metric~$g$ in the direction of the Reeb f\/ield. Indeed, knowing~\eqref{LieMetr} it is clear that for every $q\in M$ and every $v,w\in H_{q}$ \begin{gather} \bar{h}_{q}(v,w)=\frac{1}{2}\big(\tilde{L}_{X_{0}}g\big)(q)(v,w). \label{LieInv} \end{gather} Such an approach allows one to def\/ine higher-order invariants, namely those that correspond to the bilinear forms \begin{gather*} \bar{h}_{q}^{(l)}(v,w)=\frac{1}{2}\big(\tilde{L}_{X_{0}}^{l}g\big)(q)(v,w), \qquad l=2,3,\dots . \end{gather*} In this way, however, we will not obtain any formulas involving the structure functions $c_{12}^{i}$. Using~\eqref{LieInv} we obtain the following proposition and corollary thereof. \begin{Proposition} \label{ala2} The Reeb vector field $X_{0}$ is an infinitesimal isometry for $(H,g)$ if and only if $\tilde{h}_{q}=0$ for every $q\in M$. \end{Proposition} \begin{Corollary} If the Reeb vector field $X_{0}$ is an infinitesimal isometry for $(H,g)$ then $\chi=0$ everywhere. \end{Corollary} Proposition~\ref{ala2} shows one way to produce sub-Lorentzian isometries. This is important because we know very few examples of such maps. Next we study the ef\/fect on the invariants when we dilate the structure. To this end suppose that we have a~sub-Lorentzian $ts$-oriented structure $(H,g)$ which is given by an orthonormal frame $X_{1}$, $X_{2}$ with a~time (resp.\ space) orientation $X_{1}$ (resp.~$X_{2}$). Let $s>0$ be a~constant. Consider the sub-Lorentzian structure $(H^{\prime},g^{\prime})$ def\/ined by assuming the frame $X_{1}^{\prime}=sX_{1}$, $X_{2}^{\prime}=sX_{2}$ to be orthonormal with the time (resp.\ space) orientation $X_{1}^{\prime}$ (resp.\ $X_{2}^{\prime}$). The normalized one form $\omega^{\prime}$ which def\/ines $H^{\prime}$ is given by $\omega^{\prime}=\frac{1}{s^{2}}\omega $, i.e., $d\omega^{\prime}(X_{1}^{\prime},X_{2}^{\prime})=\omega^{\prime}([X_{2}^{\prime},X_{1}^{\prime}])=1$. It follows that the Reeb f\/ield is now $s^{2}X_{0}$. Then it is easy to see that~\eqref{StrConst} can be rewritten as \begin{gather*} \lbrack X_{1}^{\prime},X_{0}^{\prime}]={c}_{01}^{\prime 1}X_{1}^{\prime}+{c}_{01}^{\prime 2}X_{2}^{\prime}, \qquad \lbrack X_{2}^{\prime},X_{0}^{\prime}]={c}_{02}^{\prime 1}X_{1}^{\prime}+{c}_{02}^{\prime 2}X_{2}^{\prime}, \\ \lbrack X_{2}^{\prime},X_{1}^{\prime}]={c}_{12}^{\prime 1}X_{1}^{\prime}+{c}_{12}^{\prime 2}X_{2}^{\prime}+X_{0}^{\prime}, \end{gather*} where ${c}_{12}^{\prime i}=sc_{12}^{i}$ and ${c}_{0j}^{\prime i}=s^{2}c_{0j}^{i}$ for $j=1,2$. As a~corollary we obtain \begin{Proposition} Let $\chi$, $\kappa$, $\tilde{h}$ $($resp.\ $\chi^{\prime}$, $\kappa^{\prime}$, $\tilde{h}^{\prime})$ be the $ts$-invariants of the sub-Lorentzian structure defined by an orthonormal basis $X_{1}$, $X_{2}$ $($resp.\ by $X_{1}^{\prime}=s X_{1}$, $X_{2}^{\prime}=s X_{2})$.
Then \begin{gather*} \chi^{\prime}=s^{4}\chi , \qquad \kappa^{\prime}=s^{2}\kappa , \qquad \tilde{h}^{\prime}=s^{2} \tilde{h}. \end{gather*} \end{Proposition} \subsection[The case $\protect\chi=0$, $\tilde{h}\neq 0$]{The case $\boldsymbol{\protect\chi=0}$, $\boldsymbol{\tilde{h}\neq0}$} \label{section4.1} Next let us assume that $\chi (q)=0$ but $\tilde{h}_{q}\neq 0$ (i.e.~$c_{01}^{1}\neq 0$) everywhere. As we shall see we are given an additional structure in this case. Indeed, the correspondence $q\longrightarrow \ker \tilde{h}_{q}$ def\/ines an invariantly given f\/ield of directions. We can distinguish two cases: (i) $c_{01}^{1}=\frac{1}{2}(c_{02}^{1}-c_{01}^{2})$, and (ii) $c_{01}^{1}=-\frac{1}{2}(c_{02}^{1}-c_{01}^{2})$. In the f\/irst case the matrix of $\tilde{h}_{q}$ is of the form \begin{gather*} \left( \begin{matrix} c_{01}^{1} & c_{01}^{1} \\ -c_{01}^{1} &-c_{01}^{1} \end{matrix} \right) \end{gather*} and $\ker \tilde{h}_{q}$ is spanned by $X_{1}(q)-X_{2}(q)$ for each~$q$. In the second case the matrix of $\tilde{h}_{q}$ is equal~to \begin{gather*} \left( \begin{matrix} c_{01}^{1} &-c_{01}^{1} \\ c_{01}^{1} &-c_{01}^{1} \end{matrix} \right) \end{gather*} and $\ker \tilde{h}_{q}$ is spanned by $X_{1}(q)+X_{2}(q)$. Thus in the considered case there exists a~line sub-bundle $L\longrightarrow M$ of~$H$ on which~$g$ is equal to zero. Of course this result is trivial under the assumption of $ts$-orientation because then~$H$ admits a~global orthonormal basis $X_{1}$, $X_{2}$ and we have in fact two such subbundles, namely $\Span\{X_{1}+X_{2}\}$ and $\Span\{X_{1}-X_{2}\}$. What is interesting here is that the condition $\chi=0$, $\tilde{h}\neq 0$ does not depend on the assumption on orientation. Indeed, notice that if we change a~time (resp.\ space) orientation keeping the space (resp.\ time) one then $\tilde{h}$ is multiplied by $-1$ (because so is $X_{0}$). Moreover the condition $\chi=0$, $\tilde{h}\neq 0$ means that $\tilde{h}$ is a~non-zero map with vanishing eigenvalues, the fact being independent of possible multiplication by $-1$. Therefore the condition $\chi=0$, $\tilde{h}\neq 0$ makes sense even for an unoriented contact sub-Lorentzian structure. In this way we are led to the following proposition. \begin{Proposition} \label{smpcon} Suppose that $(M,H,g)$ is a~contact sub-Lorentzian manifold $($we don't make any assumptions on orientation$)$. If $\chi (q)=0$ and $\tilde{h}_{q}\neq 0$ for every $q\in M$ then there exists a~line sub-bundle $L\longrightarrow M$ of~$H$ on which~$g$ is equal to zero. \end{Proposition} \begin{proof} Fix an arbitrary point $q\in M$. Let $Y_{1}$, $Y_{2}$ be an orthonormal basis for $(H,g)$ def\/ined on a~neighborhood~$U$ of~$q$, where $Y_{1}$ is timelike and $Y_{2}$ is spacelike. Supposing $Y_{1}$ (resp.\ $Y_{2}$) to be a~time (resp.\ space) orientation we can apply the above construction of $ts$-invariants obtaining the corresponding objects $\chi_{U}$ and $\tilde{h}_{U}$. By our assumption and the above remark $\chi_{U}=0$, $\tilde{h}_{U}\neq 0$ on~$U$, and we get an invariantly def\/ined line sub-bundle $L_{U}\longrightarrow U$: $U\ni q\longrightarrow \ker (\tilde{h}_{U})_{q}=:L_{U}(q)$. We repeat the same construction around any point $q\in M$, which results in the family $\left\{L_{U}\longrightarrow U\right\}_{U\subset M}$ of line sub-bundles, indexed by elements~$U$ of an open covering of~$M$. By construction $L_{U}(q)=L_{U^{\prime}}(q)$ for any $q\in U\cap U^{\prime}$.
\end{proof} Let us note that if~$M$ is simply connected, then the assertion of Proposition~\ref{smpcon} holds true no matter the values of $\chi$ and $\tilde{h}$ are, because in this case the metric $(H,g)$ admits a~global orthonormal frame, see~\cite{groGlob}. \subsection[The case $\tilde{h}=0$]{The case $\boldsymbol{\tilde{h}=0}$}\label{section4.2} We begin with the following proposition which is clear because $X_{0}$ is an inf\/initesimal isometry. \begin{Proposition} If $\tilde{h}=0$ then $X_{0}(\kappa)=0$, i.e., $\kappa$ is constant along the trajectories of $X_{0}$. \end{Proposition} By~\eqref{mac2} the assumption $\tilde{h}=0$ implies \begin{gather*} c_{01}^{1}=c_{02}^{2}=0, \qquad c_{02}^{1}=c_{01}^{2}. \end{gather*} We will write $c=c_{02}^{1}=c_{01}^{2}$. Now~\eqref{StrConst} takes the form \begin{gather} [X_{1},X_{0}]=cX_{2}, \qquad [X_{2},X_{0}]=cX_{1}, \qquad [X_{2},X_{1}]=c_{12}^{1}X_{1}+c_{12}^{2}X_{2}+X_{0}. \label{StrConst3} \end{gather} Rewriting as above~\eqref{StrConst3} in terms of the dual forms $\nu_{i}$ we arrive at \begin{gather} d\nu_{0}=\nu_{1}\wedge \nu_{2}, \qquad d\nu_{1}=c\nu_{0}\wedge \nu_{2}+c_{12}^{1}\nu_{1}\wedge \nu_{2}, \qquad d\nu_{2}=c\nu_{0}\wedge \nu_{1}+c_{12}^{2}\nu_{1}\wedge \nu_{2}. \label{StrConst3form} \end{gather} \begin{Lemma} \label{oro7} The following identities hold \begin{gather*} -X_{1}(c)-cc_{12}^{2}+X_{0}\big(c_{12}^{1}\big)=0, \qquad X_{2}(c)-cc_{12}^{1}+X_{0}\big(c_{12}^{2}\big)=0. \end{gather*} \end{Lemma} \begin{proof} The lemma is obtained upon applying the exterior dif\/ferential to both sides of the second and the third equation in~\eqref{StrConst3form}. \end{proof} Our next aim, which will be achieved in the next subsection, is to f\/ind a~hyperbolic rotation of our frame $X_{1}$, $X_{2}$ so that~\eqref{StrConst3} signif\/icantly simplif\/ies. More precisely we want to kill the terms $c_{12}^{i}$, $i=1,2$. To this end let us introduce the following $1$-form \begin{gather} \eta=(\kappa+c)\nu_{0}+c_{12}^{1}\nu_{1}-c_{12}^{2}\nu_{2} \label{oneform} \end{gather} whose signif\/icance will become evident below. \begin{Proposition} \label{stwrdz1} $d\eta=d\kappa \wedge \nu_{0}$. \end{Proposition} \begin{proof} Computations give \begin{gather*} d\eta=d\kappa \wedge \nu_{0}+\big({-}X_{1}(c)-cc_{12}^{2}+X_{0}\big(c_{12}^{1}\big)\big) \nu_{0}\wedge \nu_{1} +\big({-}X_{2}(c)+cc_{12}^{1}-X_{0}\big(c_{12}^{2}\big)\big) \nu_{0}\wedge \nu_{2} \\ \phantom{d\eta=}{} +\big(\kappa+c-X_{2}\big(c_{12}^{1}\big)-X_{1}\big(c_{12}^{2}\big)+\big(c_{12}^{1}\big)^{2}-\big(c_{12}^{2}\big)^{2} \big) \nu_{1}\wedge \nu_{2}. \end{gather*} To end the proof we use Lemma~\ref{oro7} and the def\/inition of $\kappa$. \end{proof} \subsection{The simply-connected Lie group case}\label{section4.3} Suppose that our contact sub-Lorentzian manifold $(M,H,g)$ is such that~$M$ is a~simply-connected Lie group and~$H$,~$g$ are left-invariant; this means that left translations of~$M$ are sub-Lorentzian isometries (note that any left-invariant bracket generating distribution on a~$3$-dimensional Lie group is necessarily contact). In such a~case, clearly, $\chi$~and~$\kappa$ are constant. We also remark that unlike the general situation, the assumption on $ts$-orientation is no longer restrictive since groups are parallelizable manifolds. As above, assume that $\tilde{h}=0$ everywhere. Recalling our aim formulated in the previous subsection we prove the following lemma. 
\begin{Lemma} There exists a~smooth function $\theta:M\longrightarrow \mathbb{R}$ such that $X_{1}(\theta)=c_{12}^{1}$, $X_{2}(\theta)=-c_{12}^{2}$. \end{Lemma} \begin{proof} Suppose that such a~function $\theta$ exists. Then \begin{gather*} X_{0}(\theta)= [X_{2},X_{1} ] (\theta)-c_{12}^{1}X_{1}(\theta)-c_{12}^{2}X_{2}(\theta) \\ \phantom{X_{0}(\theta)}{} = X_{2}\big(c_{12}^{1}\big)+X_{1}\big(c_{12}^{2}\big)-\big(c_{12}^{1}\big)^{2}+\big(c_{12}^{2}\big)^{2}= \kappa+c, \end{gather*} and it follows that \begin{gather*} d\theta=X_{0}(\theta)\nu_{0}+X_{1}(\theta)\nu_{1}+X_{2}(\theta)\nu_{2}=(\kappa +c)\nu_{0}+c_{12}^{1}\nu_{1}-c_{12}^{2}\nu_{2}=\eta, \end{gather*} where $\eta$ is def\/ined by~\eqref{oneform}. Thus to prove the existence of $\theta$ it is enough to show that $\eta$ is exact. Since~$M$ is simply-connected we must show that $d\eta=0$. This is however clear by Proposition~\ref{stwrdz1} and the fact that $\kappa$ is a~constant. \end{proof} Now we apply to our frame $X_{1}$, $X_{2}$, the hyperbolic rotation by the angle $\theta$ specif\/ied above. As a~result, the frame $Y_{1}$, $Y_{2}$ given by~\eqref{transf1} satisf\/ies \begin{Proposition} \label{propliealg} \begin{gather*} [Y_{1},X_{0}]=-\kappa Y_{2}, \qquad [Y_{2},X_{0}]=-\kappa Y_{1}, \qquad [Y_{2},Y_{1}]= X_{0}. \end{gather*} \begin{proof} It follows directly from facts proved in Subsections~\ref{section4.2},~\ref{section4.3}, from \eqref{StrConst3}, and from Lemma~\ref{oro4}. \end{proof} \end{Proposition} We observe here the dif\/ference between the sub-Riemannian case where the brackets have the form \begin{gather*} [Y_{1},X_{0}]=\kappa Y_{2}, \qquad [Y_{2},X_{0}]=-\kappa Y_{1}, \qquad [Y_{2},Y_{1}]= X_{0}. \end{gather*} We remark that in the sub-Riemannian case Agrachev and Barilari~\cite{Agr2} obtain the following results: $\kappa=0$ implies~$M$ is isometric to the Heisenberg group, $\kappa>0$ implies~$M$ is isometric to ${\rm SU}_2$ and $\kappa<0$ implies~$M$ is isometric to the universal cover of ${\rm SL}_2$. In the sub-Lorentzian case~${\rm SU}_2$ does not arise. To be precise we have the following corollaries of Proposition~\ref{propliealg}. \begin{Corollary} \label{CorHeis} If~$M$ is a~simply-connected Lie group such that $\tilde{h}$ and $\kappa$ vanish identically, then~$M$ is isometric to the Heisenberg group. \end{Corollary} \begin{Corollary} \label{CorSL2} If~$M$ is a~simply-connected Lie group such that $\tilde{h}$ vanishes and $\kappa \neq 0$, then it is isometric to a~sub-Lorentzian structure on $\widetilde{\rm SL}_{2}(\mathbb{R})$ induced by the Killing form. \end{Corollary} Before proving Corollary~\ref{CorSL2} let us recall some basic facts about the Killing form and Cartan decompositions. For any Lie algebra $\mathfrak{g}$ the Killing form is the symmetric bilinear form def\/ined~by $K(X,Y)=\mathrm{Trace}(\mathrm{ad}_X \mathrm{ad}_Y)$. The Killing form has the following invariance properties: \begin{enumerate}\itemsep=0pt \item[1)] $K([X,Y],Z)=K(X,[Y,Z])$, \item[2)] $K(T(X), T(Y))= K(X,Y)$ for all $T \in \mathrm{Aut}(\mathfrak{g})$. \end{enumerate} If $\mathfrak{g}$ is simple then any symmetric bilinear form satisfying the f\/irst invariance condition is a~scalar multiple of the Killing form and Cartan's criterion states that a~Lie algebra is semisimple if and only if the Killing form is non-degenerate. A Cartan involution is any element $\Theta \in \mathrm{Aut}(\mathfrak{g})$ such that $\Theta^2=I$ and \begin{gather*} \langle X, Y \rangle_\Theta=-K(X,\Theta(Y)) \end{gather*} is positive def\/inite. 
Corresponding with~$\Theta$ we have a~Cartan decomposition $\mathfrak{g}=\mathfrak{t} \oplus \mathfrak{p}$, where $\mathfrak{t}$ and $\mathfrak{p}$ are the eigenspaces corresponding with the eigenvalues $1$ and $-1$ respectively. Since $\Theta$ is an automorphism, it follows that $[\mathfrak{t}, \mathfrak{t}] \subseteq \mathfrak{t}$, $[\mathfrak{t}, \mathfrak{p}] \subseteq \mathfrak{p}$ and $[\mathfrak{p}, \mathfrak{p}] \subseteq \mathfrak{t}$. Moreover, the Killing form is negative def\/inite on $\mathfrak{t}$ and positive def\/inite on $\mathfrak{p}$. The standard Cartan involution on $\mathfrak{sl}_2$ is given by $\Theta(A)=-A^T$. In this case we have that $\mathfrak{t}= \mathrm{span} \{f_1\}$ and $\mathfrak{p}= \mathrm{span} \{f_2,f_0\}$, where \begin{gather*} f_0=\frac{1}{2}\left( \begin{matrix} -1 & 0 \\ 0 & 1 \end{matrix} \right), \qquad f_1= \frac{1}{2} \left( \begin{matrix} 0 & 1 \\ -1 & 0 \end{matrix} \right), \qquad f_2= \frac{1}{2} \left( \begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix} \right), \end{gather*} and the Lie brackets are \begin{gather*} [f_2,f_1]=f_0, \qquad [f_1,f_0]=f_2, \qquad [f_2,f_0]=f_1. \end{gather*} The Killing form for $\mathfrak{sl}_2$ is given by $K(A,B)=4Tr(AB)$ and so the bilinear form $B(A, B) =\frac{1}{2}K(A,B)$ satisf\/ies \begin{alignat*}{4} & B(f_0,f_0)=1, \qquad && B(f_1,f_1)=-1, \qquad&& B(f_2,f_2)=1,& \\ & B(f_0,f_1)=0, \qquad&& B(f_0,f_2)=0, \qquad&& B(f_1,f_2)=0.& \end{alignat*} Thus we have two choices: 1)~$\mathcal{H}_e= \mathrm{span} \{f_1,f_2\}$ or 2)~$\mathcal{H}_e= \mathrm{span} \{f_1,f_0\}$. In each case, by left translation, we obtain left-invariant sub-Lorentzian structures on $\widetilde{\rm SL}_{2}(\mathbb{R})$ satisfying $\tilde h=0$. An isometry between these two structures is induced by Lie algebra automorphism~$T$, where $Tf_0=f_2$, $Tf_1=-f_1$ and $Tf_2=f_0$. \begin{proof}[Proof of Corollary~\ref{CorSL2}] First we observe that the matrices \begin{gather*} e_0=\frac{1}{2}\left( \begin{matrix} \kappa & 0 \\ 0 &-\kappa \end{matrix} \right), \qquad e_1= \frac{1}{2}\left( \begin{matrix} 0 & 1 \\ \kappa & 0 \end{matrix} \right), \qquad e_2=\frac{1}{2}\left( \begin{matrix} 0 & 1 \\ -\kappa & 0 \end{matrix} \right), \end{gather*} form a~basis of $\mathfrak{sl}_2$ and the bracket relations are $[e_2,e_1]=e_0$, $[e_1,e_0]=-\kappa e_2$, $[e_2,e_0]=-\kappa e_1$. Furthermore $K(e_1,e_1)=2 \kappa$, $K(e_2,e_2)=-2 \kappa$, and $K(e_1,e_2)=0$. Since we assume the sub-Lorentzian structure on~$M$ is left-invariant, the metric must be the left translation of the metric $B(A, B)=-\frac{1}{2\kappa}K(A,B)$ on $T_eM=\mathfrak{sl}_2$. Since~$M$ is simply connected it must be the universal cover of ${\rm SL}_2(\mathbb{R})$. \end{proof} We remark that in general \begin{alignat*}{4} & K(e_0,e_0)=2\kappa^2, \qquad&& K(e_1,e_1)=2 \kappa, \qquad&& K(e_2,e_2)=-2 \kappa,& \\ & K(e_0,e_1)=0, \qquad&& K(e_0,e_2)=0, \qquad&& K(e_1,e_2)=0,& \end{alignat*} and so the corresponding Cartan involution is given~by \begin{gather*} \Theta(e_0)=-e_0, \qquad \Theta(e_1)=-e_1, \qquad \Theta(e_2)=e_2. \end{gather*} Hence $\mathfrak{t}=\mathrm{span}\{e_2\}$ and $\mathfrak{p}=\mathrm{span} \{e_1, e_0\}$. If $|\kappa|\ne 1$, then since $K(e_0,e_0)=2\kappa^2$, the only choice we have for a~sub-Lorentzian structure induced~by the Killing form is $\mathcal{H}_e= \mathrm{span} \{e_1,e_2\}$. The null lines in $\mathfrak{sl}_2$ are $\mathrm{span}\{e_1-e_2\}$ and $\mathrm{span}\{e_1+e_2\}$. 
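As an illustrative aside (not part of the original argument), the bracket and Killing form relations for $e_{0}$, $e_{1}$, $e_{2}$ stated above, together with the nullity of $e_{1}\pm e_{2}$ with respect to $B=-\frac{1}{2\kappa}K$, can be checked symbolically; the following sketch uses SymPy, and the variable names are ours. \begin{verbatim}
# Symbolic cross-check of the sl_2 relations quoted in the text;
# kappa is a free symbol (assumed nonzero in the surrounding discussion).
import sympy as sp

kappa = sp.symbols('kappa')
e0 = sp.Matrix([[kappa, 0], [0, -kappa]]) / 2
e1 = sp.Matrix([[0, 1], [kappa, 0]]) / 2
e2 = sp.Matrix([[0, 1], [-kappa, 0]]) / 2

br = lambda A, B: A*B - B*A                  # matrix commutator
K  = lambda A, B: 4*(A*B).trace()            # Killing form of sl_2

assert br(e2, e1) == e0                      # [e2, e1] = e0
assert br(e1, e0) == -kappa*e2               # [e1, e0] = -kappa e2
assert br(e2, e0) == -kappa*e1               # [e2, e0] = -kappa e1
assert (K(e0, e0), K(e1, e1), K(e2, e2), K(e1, e2)) == (2*kappa**2, 2*kappa, -2*kappa, 0)

B = lambda A, C: -K(A, C)/(2*kappa)          # metric used in the proof of Corollary CorSL2
assert B(e1 - e2, e1 - e2) == 0 and B(e1 + e2, e1 + e2) == 0   # null lines
\end{verbatim}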
Furthermore if we set \begin{gather*} n_0=e_0, \qquad n_1=\frac{1}{\sqrt{2}}(e_1-e_2), \qquad n_2=\frac{1}{\sqrt{2}} (e_1+e_2) \end{gather*} then \begin{gather*} [n_2,n_1]=n_0, \qquad [n_1,n_0]= \kappa n_1, \qquad [n_2,n_0]=-\kappa n_2. \end{gather*} If we set $\mathcal{H}_e= \mathrm{span} \{n_1, n_2\}$ and def\/ine \begin{gather*} B(n_1, n_1)=-1, \qquad B(n_2, n_2)=1, \qquad B(n_1, n_2)=0, \end{gather*} then the induced left-invariant structure on $\widetilde{\rm SL}_{2}(\mathbb{R})$ is isometrically distinct from the cases above, indeed \begin{gather*} \tilde h= \kappa\left( \begin{matrix} 1 & 0 \\ 0 &-1 \end{matrix} \right). \end{gather*} \section[Inf\/initesimal sub-Lorentzian transformations on groups with $\tilde h=0$]{Inf\/initesimal sub-Lorentzian transformations\\ on groups with $\boldsymbol{\tilde h=0}$} \subsection{Introduction} In this section we determine the conformal and isometry groups for the Heisenberg group and the universal cover of ${\rm SL}_2$. In particular we will see that in both cases the local inf\/initesimal conformal transformations are given~by $\mathfrak{sl}_3$. In the context of this paper it would be natural to construct the vector f\/ields using the criteria developed in Section~\ref{SubLorentzInf}, however this leads to complicated systems of PDEs for which we cannot provide explicit proofs concerning the solutions. Instead we apply Cartan's equivalence method which leads to the general solution without having to solve PDEs. In work in preparation with Alexandr Medvedev~\cite{GMW} we explore further the application of the Cartan approach to sub-Lorentzian geometry. In particular the invariants discussed here appear in a~much more systematic manner and a~complete description of all left-invariant sub-Lorentzian structures on $3$-dimensional Lie groups will be given. We also remark that on the subject of conformal classif\/ication of left-invariant sub-Rieman\-nian structures on $3$-dimensional Lie groups there is the very recent arXiv paper of Boarotto~\cite{Boar}. \subsection{Preliminaries} The assumption $\tilde h=0$ implies that the Lie algebra of~$M$ has the form given in Proposition~\ref{propliealg}. So as to make the notation a~little less confusing in the calculations that ensue, we rewrite the Lie algebra in the following form \begin{gather*} [X_1,X_3]= \kappa X_2, \qquad [X_2,X_3]= \kappa X_1, \qquad [X_1,X_2]= X_3. \end{gather*} We denote the ordered dual frame by $\theta=\{\theta_1,\theta_2,\theta_3\}$ and observe that by Cartan's formula the structure equations of this coframe are \begin{gather*} d\theta_1= \kappa \theta_3 \wedge \theta_2, \qquad d\theta_2= \kappa \theta_3 \wedge \theta_1, \qquad d\theta_3= \theta_2 \wedge \theta_1. \end{gather*} Moreover the sub-Lorentzian metric has the form $\theta_2 \odot \theta_2-\theta_1 \odot \theta_1$. It follows that the subgroup $G \subset GL(3)$ which acts on~$\theta$ and leaves the metric conformally invariant modulo terms of the form $\eta \odot \theta_3$ consists of matrices of the form \begin{gather*} \left( \begin{matrix} e^r \cosh(t) & e^r \sinh(t) & a \\ e^r \sinh(t) & e^r \cosh(t) & b \\ 0 & 0 & e^{2r} \end{matrix} \right). \end{gather*} Hence a~local dif\/feomorphism $f: M \to M$ is conformal if and only if \begin{gather} f^* \theta= g_f \theta, \label{confinveq} \end{gather} where $g_f:M \to G$. Thus the conformal symmetry problem is exactly to f\/ind all local dif\/feomorphisms which satisfy~\eqref{confinveq}, which is precisely a~Cartan equivalence problem.
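As a~small illustrative check (not part of the original text), one may verify symbolically that the matrices displayed above indeed leave the metric $\theta_2 \odot \theta_2-\theta_1 \odot \theta_1$ invariant up to the conformal factor $e^{2r}$ and terms of the form $\eta \odot \theta_3$; in the sketch below (SymPy, with variable names of our own choosing) the coframe is modelled by commuting symbols, which suf\/f\/ices for checking identities between symmetric products. \begin{verbatim}
import sympy as sp

r, t, a, b = sp.symbols('r t a b')
th1, th2, th3 = sp.symbols('theta1 theta2 theta3')   # coframe modelled by commuting symbols

g = sp.Matrix([[sp.exp(r)*sp.cosh(t), sp.exp(r)*sp.sinh(t), a],
               [sp.exp(r)*sp.sinh(t), sp.exp(r)*sp.cosh(t), b],
               [0, 0, sp.exp(2*r)]])

Th1, Th2, Th3 = g * sp.Matrix([th1, th2, th3])        # transformed coframe

metric = lambda u, v: v**2 - u**2                     # theta_2 . theta_2 - theta_1 . theta_1
diff = sp.expand(metric(Th1, Th2) - sp.exp(2*r)*metric(th1, th2))

# after discarding the terms containing theta_3, nothing should remain
assert diff.subs(th3, 0).rewrite(sp.exp).expand() == 0
\end{verbatim}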
\subsection{Overview of Cartan's algorithm} For the details we refer the reader to~\cite{Olver1}. The f\/irst step in Cartan's algorithm is to pass to the equation $f^* d \theta= d (g_f \theta)$ and lift the problem to $M \times G$. In this context the lift of~$\theta$ is given by the partial coframe $\Theta \subset T^*(M \times G)$, where $\Theta_{(p,g)}=g\theta_p$. The lift of~$f$ is def\/ined by $\tilde f(p, g)=(f(p),g g_f(p)^{-1})$ and it follows that \begin{gather} \tilde f^* \Theta= \Theta \qquad \text{and} \qquad \tilde f^* d \Theta= d\Theta. \label{confinveq2} \end{gather} If $\Pi \subset T^*(M \times G)$ is any subset complementary to~$\Theta$ then the structure equations for the lifted partial coframe take the form: \begin{gather*} d \Theta= \Pi\wedge \Theta+T \Theta \wedge \Theta \end{gather*} and~$T$ is referred to as the torsion. It follows from~\eqref{confinveq2} that if~$f$ is a~local conformal dif\/feo\-mor\-phism then $T \circ \tilde f=T$, however in general this equation is not the end of the story. Roughly speaking all the higher-order coframe derivatives of~$T$ must also be invariant under composition with $\tilde f$, see~\cite[Theorem 14.24]{Olver1} for a~precise statement. Fortunately in our case this will not be an issue. The main idea in the Cartan algorithm is to exploit the freedom of choice of~$\Pi$ so as to minimise the torsion~$T$. There are three processes involved in minimising torsion, namely group reduction, absorption and prolongation. A~reduction of the structure group can be carried out when the condition $f^* \theta= g_f \theta$ for some $g_f:M \to G$ implies that $g_f:M \to G^{\prime}\subset G$. Reductions reveal themselves as coef\/f\/icients in $T$ depending only on group parameters. Such coef\/f\/icients can be set to a~convenient constant value as long as invertibility is not violated. An example of such a~phenomenon that the reader may be familiar with occurs in the contact equivalence problem in dimension~$3$, i.e., if initially we assume that $g=(g_{ij})$ and $g_{31}=g_{32}=0$ then it transpires that $g_{33} =g_{11}g_{22}-g_{12}g_{21}$. Absorption utilises the fact that, pointwise, each element of~$\Pi$ is a~Maurer--Cartan form on~$G$ plus a~linear combination of the $\Theta_i$. We choose the coef\/f\/icients of the $\Theta_i$ so that~$T$ has as many zero coef\/f\/icients as possible. In summary the element $T_{ij}^k \in T$ is the coef\/f\/icient of $\Theta^i \wedge \Theta^j$ in the expression for $d \Theta^k$. If $T_{ij}^k$ is independent of absorption parameters then it gives a~group reduction by setting it equal to~$\pm 1$ or~$0$ and the choice is made so as not to violate invertibility. If $T_{ij}^k$ is dependent on absorption parameters then we solve for one of these parameters so that~$T_{ij}^k=0$. The aim of the algorithm is to reduce the group to $\{{\rm Id}\}$ through a~sequence of reduction and absorption cycles. If after the f\/irst reduction and absorption we get $G={\rm Id}$ then the resulting~$T$ consists of the basic invariants for the equivalence problem. Otherwise not all group parameters are normalised and absorption parameters may remain undetermined with no torsion coef\/f\/icients available to normalise them. In this case the problem must be prolonged which means that the free absorption parameters are understood as the group parameters for a~structure group $G^{(1)}$ associated with a~new equivalence problem on $M \times G$.
Specif\/ically we write $\Pi= \varpi+F \Theta$, where~$F$ consists of the free absorption parameters, and consider the equivalence problem for the partial coframe $\Theta \cup \varpi \subset T^*(M \times G)$ with structure group $G^{(1)}$ consisting of matrices of the form \begin{gather*} \left( \begin{matrix} I & 0 \\ F & I \end{matrix} \right). \end{gather*} In this context $\Theta \cup \Pi \subset T^*(M \times G \times G^{(1)})$ is a~lift of $\Theta \cup \varpi$ and we repeat the procedure: augment, reduce and absorb \dots, until eventually we get an equivalence problem where the structure group reduces to the identity and all absorption coef\/f\/icients are determined. Of course it can happen that the process will not lead to such a~situation, but when it does, the result is the structure equations of a~certain Cartan connection on~$M$. \subsection{Calculations} To begin we lift and def\/ine one forms $\Theta_i$ on $M \times G$ by setting \begin{gather*} \left( \begin{matrix} \Theta_1 \\ \Theta_2 \\ \Theta_3 \end{matrix} \right)=\left( \begin{matrix} e^r \cosh(t) & e^r \sinh(t) & a \\ e^r \sinh(t) & e^r \cosh(t) & b \\ 0 & 0 & e^{2r} \end{matrix} \right)\left( \begin{matrix} \theta_1 \\ \theta_2 \\ \theta_3 \end{matrix} \right) \end{gather*} and augment the set $\{\Theta_1,\Theta_2,\Theta_3\} \subset T^*(M \times G)$ with the following forms: \begin{gather*} \Pi_1= \alpha_1+\frac{1}{2} b e^{-2r} \Theta_1-\frac{1}{2} a e^{-2r} \Theta_2-\big(B_1+ab e^{-4r}\big)\Theta_3, \\ \Pi_2= \alpha_2-\frac{3}{2} a e^{-2r} \Theta_1+\frac{3}{2} b e^{-2r} \Theta_2+\big(a^2 e^{-4r}+\kappa e^{-2r}-B_2\big)\Theta_3, \\ \Pi_3= \alpha_3- B_1 \Theta_1-B_2 \Theta_2-B_3 \Theta_3, \\ \Pi_4=\alpha_4+\big(\big(a^2+b^2\big) e^{-4 r}-B_2\big) \Theta_1-\big(2ab e^{-4 r}+B_1\big)\Theta_2-B_4 \Theta_3. \end{gather*} The coef\/f\/icients of the $\Theta_i$ in $\Pi_j$ are determined by absorbing torsion and the $\alpha_j$ are the Maurer--Cartan forms: \begin{gather*} \alpha_1= dr, \qquad \alpha_2= d t, \qquad \alpha_3= (da-a dr-b d t)e^{-2r}, \qquad \alpha_4= (db-b dr-a d t)e^{-2r}. \end{gather*} The coef\/f\/icients $B_1,\dots,B_4$ are undetermined parameters from absorption and so a~prolongation is required. We write \begin{gather*} \Pi_1= \varpi_1-B_1 \Theta_3, \qquad \Pi_2= \varpi_2-B_2 \Theta_3, \qquad \Pi_3= \varpi_3-B_1 \Theta_1-B_2 \Theta_2-B_3 \Theta_3, \\ \Pi_4= \varpi_4-B_2\Theta_1-B_1 \Theta_2-B_4 \Theta_3. \end{gather*} and consider the equivalence problem $M \times G$ given by the ordered basis \begin{gather} \{\Theta_1,\Theta_2,\Theta_3, \varpi_1, \varpi_2, \varpi_3, \varpi_4\} \label{liftedlifted} \end{gather} with structure group $G^{(1)}$ consisting of matrices of the form \begin{gather*} \left( \begin{matrix} I & 0 \\ R & I \end{matrix} \right), \qquad \text{where} \qquad R=\left( \begin{matrix} 0 & 0 &-B_1 \\ 0 & 0 &-B_2 \\ -B_1 &-B_2 &-B_3 \\ -B_2 &-B_1 &-B_4 \end{matrix} \right). \end{gather*} The ordered basis $\{\Theta_1,\Theta_2,\Theta_3, \Pi_1, \Pi_2, \Pi_3, \Pi_4\}$ is now viewed as the lift of~\eqref{liftedlifted} to the $11$-dimensional manifold $M \times G \times G^{(1)}$ and again is augmented by forms $\{\Omega_1,\Omega_2,\Omega_3, \Omega_4\} \subset T^*(M \times G \times G^{(1)})$. 
We get the following reductions of the structure group $G^{(1)}$: \begin{gather*} B_2= \frac{1}{4} a^2 e^{-4r}+\frac{3}{4} b^2 e^{-4r}+\frac{1}{4} e^{-2r} \kappa, \qquad B_3= \frac{1}{2} b\big(\big(a^2-b^2\big) e^{-6r}+e^{-4r} \kappa\big), \\ B_4= \frac{1}{2} a\big(\big(a^2-b^2\big) e^{-6r}+e^{-4r} \kappa\big) \end{gather*} and so $M \times G \times G^{(1)}$ becomes an $8$-dimensional manifold and we only need $\Omega=\Omega_4$ to augment. Finally after absorption we arrive at the structure equations: \begin{gather} d \Theta_1= \Pi_1 \wedge \Theta_1+\Pi_2 \wedge \Theta_2+\Pi_3 \wedge \Theta_3, \nonumber \\ d \Theta_2= \Pi_1 \wedge \Theta_2+\Pi_2 \wedge \Theta_1+\Pi_4 \wedge \Theta_3, \nonumber \\ d \Theta_3= 2 \Pi_1 \wedge \Theta_3-\Theta_1 \wedge \Theta_2, \nonumber \\ d \Pi_1= \frac{1}{2} \Pi_4 \wedge \Theta_1-\frac{1}{2} \Pi_3 \wedge \Theta_2-\Omega \wedge \Theta_3, \nonumber \\ d \Pi_2= \frac{3}{2} \Pi_4 \wedge \Theta_2-\frac{3}{2} \Pi_3 \wedge \Theta_1, \nonumber \\ d \Pi_3= \Pi_3 \wedge \Pi_1-\Pi_4 \wedge \Pi_2-\Omega \wedge \Theta_1, \nonumber \\ d \Pi_4= \Pi_4 \wedge \Pi_1-\Pi_3 \wedge \Pi_2-\Omega \wedge \Theta_2, \nonumber \\ d \Omega= 2\Omega \wedge \Pi_1-\Pi_4 \wedge \Pi_3. \label{confstruceq} \end{gather} The structure equations for the isometries are obtained similarly but do not require prolongation. The structure group is as above except $r=0$ and the structure equations are: \begin{gather} d\Theta_1= \Pi \wedge \Theta_2, \qquad d\Theta_2= \Pi \wedge \Theta_1, \qquad d\Theta_3= \Theta_2 \wedge \Theta_1, \qquad d\Pi= \kappa \Theta_2 \wedge \Theta_1. \label{isostruceq} \end{gather} In both sets of structure equations the coef\/f\/icients are all constant and, as a~consequence, the symmetries of these structure equations are given by the Lie group which they def\/ine (see the remarks following~\cite[Theorem 8.22]{Olver1}). By construction the lift of our original symmetry is a~symmetry of the structure equations and must therefore be given by the action of the Lie group which the structure equations def\/ine. From~\eqref{isostruceq} we see that the isometries form at most a~$4$-dimensional Lie group. By Tanaka's theory~\cite{tanak1}, the maximal dimension is reached when $\kappa=0$ and the structure equations are those of the Heisenberg group extended by the action of a~particular strata-preserving derivation of the Heisenberg algebra. The Lie algebra of this group has the form \begin{gather*} [e_1, e_2]= e_3, \qquad [e_4, e_1]= e_2, \qquad [e_4, e_2]= e_1, \end{gather*} where $\{e_1,e_2,e_3\}$ is a~basis for the Heisenberg algebra and $e_4$ is the derivation. The Killing form for the conformal structure equations is \begin{gather*} K=\left( \begin{matrix} 0 & 0 & 0 & 0 & 0 & 0 &-7 & 0 \\ 0 & 0 & 0 & 0 & 0 & 6 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 6 \\ 0 & 0 & 0 & 12 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 4 & 0 & 0 & 0 \\ 0 & 6 & 0 & 0 & 0 & 0 & 0 & 0 \\ -7 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 6 & 0 & 0 & 0 & 0 & 0 \end{matrix} \right). \end{gather*} Since $\det K \ne 0$ $(=-3048192)$ and~$K$ is indef\/inite with signature $+++++---$, the Lie algebra must be $\mathfrak{sl}_3$, i.e., the only $8$-dimensional real simple Lie algebras are $\mathfrak{sl}_3$, $\mathfrak{su}_3$ and $\mathfrak{su}_{2,1}$, however $\mathfrak{su}_3$ and $\mathfrak{su}_{2,1}$ are ruled out by the indef\/initeness and signature. Alternatively one can simply compute the Lie brackets of the vector f\/ields dual to the system of one forms and check that it is isomorphic to~$\mathfrak{sl}_3$.
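As a~simple independent cross-check (not part of the original argument), the determinant and signature of the matrix~$K$ displayed above can be conf\/irmed numerically, for instance with the following SymPy sketch (the variable names are ours). \begin{verbatim}
import sympy as sp

# Killing-form matrix quoted in the text
K = sp.Matrix([
    [ 0, 0, 0,  0, 0, 0, -7, 0],
    [ 0, 0, 0,  0, 0, 6,  0, 0],
    [ 0, 0, 0,  0, 0, 0,  0, 6],
    [ 0, 0, 0, 12, 0, 0,  0, 0],
    [ 0, 0, 0,  0, 4, 0,  0, 0],
    [ 0, 6, 0,  0, 0, 0,  0, 0],
    [-7, 0, 0,  0, 0, 0,  0, 0],
    [ 0, 0, 6,  0, 0, 0,  0, 0]])

assert K.det() == -3048192                    # non-degenerate, as claimed

eigs = []
for ev, mult in K.eigenvals().items():
    eigs += [ev]*mult
# five positive and three negative eigenvalues: signature (5,3), i.e. +++++---
assert sorted(eigs) == [-7, -6, -6, 4, 6, 6, 7, 12]
\end{verbatim}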
We should remark that Ian Anderson's MAPLE Dif\/ferentialGeometry packages were an indispensable tool used in the calculations outlined above. The fact that~$\kappa$ is not present in~\eqref{confstruceq} implies that the universal cover of ${\rm SL}_2(\mathbb{R})$ and the Heisenberg group both have ${\rm SL}_3(\mathbb{R})$ as the conformal symmetry group and consequently are conformally equivalent, see~\cite[Proposition~2.3.2]{krugthe} and~\cite[Section~2.5]{Acap}. We thus have the following conformal Darboux theorem. \begin{Theorem} All left-invariant sub-Lorentzian structures on the universal cover of ${\rm SL}_2(\mathbb{R})$ such that $\tilde h=0$ are locally conformally equivalent to the sub-Lorentzian Heisenberg group. \end{Theorem} \section{Rigid example}\label{Section6} By def\/inition, any left translation is an isometry of a~left-invariant structure and so the dimension of the isometry group is at least~$3$, and from the previous section the dimension of the isometry group is at most~$4$. The goal of this section is to show that the extreme case of no nontrivial isometries can occur, but obviously not for a~left-invariant structure. A~relatively straightforward example comes from the geometry of second-order ODEs. To a~given second-order ODE \begin{gather} u^{\prime \prime}=Q(x,u,u^{\prime}), \label{sode} \end{gather} where~$Q$ is smooth, we associate three $1$-forms given by \begin{gather} \omega^{1}=du-pdx, \qquad \omega^{2}=dp-Q(x,u,p)dx, \qquad \omega^{3}=dx, \label{1forms} \end{gather} which are regarded as $1$-forms on the jet space $J^{1}(\mathbb{R}, \mathbb{R})$ with coordinates $(x,u,p)$, where~$x$ is the independent variable,~$u$ is the dependent variable and $p=u^{\prime}$, see~\cite{Olver1}. In particular a~curve $\gamma (x)=(x,u(x),p(x))$ in $J^{1}(\mathbb{R}, \mathbb{R})$ def\/ines a~solution to~\eqref{sode} if and only if $\gamma^{\ast}\omega^{i}=0$, $i=1,2$ (one can easily show that the vanishing of the two pull-backs is equivalent to $u^{\prime}(x)=p(x)$ and in turn to $u^{\prime \prime}(x)=Q(x,u(x),u^{\prime}(x))$). A~local dif\/feomorphism $\Phi:\mathbb{R} \times \mathbb{R}\longrightarrow \mathbb{R}\times \mathbb{R}$ is called a~\textit{point transformation} or \textit{point symmetry} of~\eqref{sode} if and only if it maps graphs of solutions to~\eqref{sode} onto graphs of solutions to~\eqref{sode}. Any local dif\/feomorphism $\Phi:\mathbb{R} \times \mathbb{R}\longrightarrow \mathbb{R} \times \mathbb{R}$ can always be \textit{prolonged} to a~local dif\/feo\-mor\-phism $\hat \Phi: J^{1}(\mathbb{R}, \mathbb{R}) \to J^{1}(\mathbb{R}, \mathbb{R})$ by setting $\hat \Phi(x,u,p)=(\tilde x, \tilde u, \tilde p)$, where \begin{gather*} (\tilde x, \tilde u)=\Phi(x,u) \qquad \text{and} \qquad \tilde p= \frac{d\tilde u}{d \tilde x}. \end{gather*} By direct calculation it follows that $\Phi$ is a~point symmetry of~\eqref{sode} if and only if there exist smooth functions $a_{i}$, $i=1,\dots,5$, such that \begin{gather*} \hat{\Phi}^{\ast}\omega^{1}=a_{1}\omega^{1}, \qquad \hat{\Phi}^{\ast}\omega^{2}= a_{2}\omega^{1}+a_{3}\omega^{2}, \qquad \hat{\Phi}^{\ast}\omega^{3}=a_{4}\omega^{1}+a_{5}\omega^{3}. \end{gather*} A~classical problem in the geometric theory of ODEs is the classif\/ication of second-order ODEs with respect to point transformations. The fundamental result is as follows (see~\cite[Theorem~12.19]{Olver1}): \begin{Theorem} The point transformation symmetry group of a~second-order ordinary differential equation has dimension at most eight.
Moreover, the equation admits an eight-dimensional symmetry group if and only if it can be mapped by a~point transformation to the linear equation $u^{\prime \prime}=0$, which has symmetry group ${\rm SL}(3)$. \end{Theorem} Thus the equation $u^{\prime \prime}=0$ has the maximal possible point symmetry group, while at the other end of the scale the following equation \begin{gather} u^{\prime \prime}=\big(\big(x+x^{2}\big)e^{u}\big)^{\prime} \label{sodespec} \end{gather} has no nontrivial point symmetries (see~\cite[p.~182]{Olver1}). Any equation~\eqref{sode} def\/ines through the forms~\eqref{1forms} a~conformal class of contact sub-Lorentzian metrics. Indeed, let $H=\ker \omega^{1}$, $L_{1}=\ker \omega^{1}\cap \ker \omega^{2}$ and $L_{2}=\ker \omega^{1}\cap \ker \omega^{3}$. Clearly,~$H$ is a~contact distribution that splits into a~direct sum of line bundles: $H=L_{1}\oplus L_{2}$. Similarly as in the classical situation (cf.~\cite{Beem}) the splitting can be viewed as the f\/ield of null cones for a~Lorentzian metric on~$H$. Of course all such metrics are conformally equivalent. In particular, it is seen that all solutions to~\eqref{sode} are determined by the trajectories of the null f\/ield $\frac{\partial}{\partial x}+p\frac{\partial}{\partial u}+Q(x,u,p) \frac{\partial}{\partial p}$ spanning $L_{1}$. \begin{Proposition} \label{stwierdz} Fix an equation~\eqref{sode} and let $(H,g)$ be a~sub-Lorentzian metric belonging to the conformal class of sub-Lorentzian metrics induced by this equation. Then any isometry of~$(H,g)$ which is isotopic to the identity is in fact a~point symmetry of the considered equation. \end{Proposition} \begin{proof} Let $F:J^{1}\longrightarrow J^{1}$ be an isometry of $(H,g)$ as in the hypothesis of the proposition. Then, obviously, $dF(H)\subset H$ and $dF(L_{i})\subset L_{i}$, $i=1,2$. Using the above notation this last remark is equivalent to the equations $F^{\ast}\omega^{1}=a_{1}\omega^{1}$, $F^{\ast}\omega^{2}=a_{2}\omega^{1}+a_{3}\omega^{2}$, $F^{\ast}\omega^{3}=a_{4}\omega^{1}+a_{5}\omega^{3}$ for some smooth functions $a_{i}$, $i=1,\dots,5$. If (locally) we set $F=(F_{1},F_{2},F_{3})$, then the f\/irst and the third equations show that $F_{1}$ and $F_{2}$ do not depend on~$p$, which means that~$F$ is the f\/irst jet prolongation of a~dif\/feomorphism of the $(x,u)$-space. This last statement is equivalent to saying that~$F$ is a~point symmetry of~\eqref{sode}. \end{proof} Let~$U$ be a~neighborhood of $0$ in $\mathbb{R}^{3}$ and consider any sub-Lorentzian structure $(U,H,g)$ belonging to the conformal class of equation~\eqref{sodespec}. \begin{Corollary} The algebra of infinitesimal isometries of $(U,H,g)$ is trivial. \end{Corollary} \begin{proof} Indeed, suppose that~$X$ is an inf\/initesimal isometry with f\/low $\psi^t$. Since $\psi^t$ is an isometry of $(U,H,g)$ isotopic to the identity, it follows from Proposition~\ref{stwierdz} that $\psi^t$ is a~point symmetry of~\eqref{sodespec} and therefore $\psi^t={\rm id}$. Consequently we must have $X=0$. \end{proof} We remark that the construction of sub-Lorentzian structure from an ODE as above is a~particular example of a~more general theory relating ODEs and what is sometimes called a~para-CR structure. On this subject we refer the reader to~\cite{HN}. \section{Appendix} In this appendix we would like to draw the reader's attention to some possible applications of the invariants to non-contact cases. Consider the simplest such case, namely the Martinet case.
Martinet sub-Lorentzian structures (of Hamiltonian type) were studied in~\cite{Gro01}. Let $(M,H,g)$ be a~sub-Lorentzian manifold where $(H,g)$ is \textit{a Martinet sub-Lorentzian structure $($or a~metric$)$}. That is, there exists a~hypersurface~$S$, the so-called Martinet surface, with the following pro\-per\-ties: \begin{enumerate}\itemsep=0pt \item[1)] $H$ is a~contact structure on $M\backslash S$, \item[2)] $\dim (H_{q}\cap T_{q}S)=1$ for every $q\in S$, \item[3)] the f\/ield of directions $L:S\ni q\longrightarrow L_{q}=H_{q}\cap T_{q}S$ is timelike. \end{enumerate} It is a~standard fact that trajectories of~$L$ are abnormal curves for the distribution~$H$. Obviously our construction of the invariants can be carried out on the contact sub-Lorentzian manifold $(M\backslash S,H_{|M\backslash S},g_{|M\backslash S})$. In this way we can produce necessary conditions for two Martinet sub-Lorentzian structures to be $ts$-isometric. More precisely, let $(M_{i},H_{i},g_{i})$ be Martinet sub-Lorentzian manifolds such that $(H_{i},g_{i})$ are $ts$-oriented Martinet sub-Lorentzian metrics for $i=1,2$. Suppose that $\varphi:(M_{1},H_{1},g_{1})\longrightarrow (M_{2},H_{2},g_{2})$ is a~$ts$-isometry, then since abnormal curves are preserved by dif\/feomorphisms, $\varphi (S_{1})=S_{2}$, where $S_{i}$ is the Martinet surface for $H_{i}$, $i=1,2$. It follows that $\varphi$ induces a~$ts$-isometry $\tilde{\varphi}=\varphi_{|M_{1}\backslash S_{1}}:(M_{1}\backslash S_{1},H_{1|M_{1}\backslash S_{1}},g_{1|M_{1}\backslash S_{1}})\longrightarrow (M_{2}\backslash S_{2},H_{2|M_{2}\backslash S_{2}},g_{2|M_{2}\backslash S_{2}})$. Therefore, using results from Section~\ref{Section4} we arrive at \begin{gather*} \chi_{1}=\tilde{\varphi}^{\ast}\chi_{2}, \qquad \kappa_{1}=\tilde{\varphi}^{\ast}\kappa_{2}, \qquad \text{and} \qquad \tilde{h}_{1}=\tilde{\varphi}^{\ast}\tilde{h}_{2}, \end{gather*} where $\chi_{i}$, $\kappa_{i}$, $\tilde{h}_{i}$ are the corresponding invariants for $(M_{i}\backslash S_{i},H_{i|M_{i}\backslash S_{i}},g_{i|M_{i}\backslash S_{i}})$, $i=1,2$. As one might expect, the invariants become singular when one approaches the Martinet surface. Indeed, let us look at the following example. \begin{Example} Consider the simplest Martinet sub-Lorentzian structure, namely the f\/lat one (cf.~\cite{Gro01}). This structure is def\/ined on $\mathbb{R}^{3}$ by the orthonormal frame \begin{gather*} X_{1}=\frac{\partial}{\partial x}+\frac{1}{2}y^{2}\frac{\partial}{\partial z}, \qquad X_{2}=\frac{\partial}{\partial y}-\frac{1}{2}xy\frac{\partial}{\partial z}, \end{gather*} where we assume $X_{1}$ (resp.\ $X_{2}$) to be a~time (resp.\ space) orientation. The Martinet surface in this case is $S=\left\{y=0\right\}$, and we can write $H=\Span\{X_{1},X_{2}\}=\ker \omega$ for $\omega$ def\/ined as $\omega=\frac{2}{3}\frac{1}{y}dz-\frac{1}{3}ydx+\frac{1}{3}xdy$. Clearly, $d\omega (X_{1},X_{2})=1$, and as usual we def\/ine the Reeb f\/ield $X_{0}$ on $\mathbb{R}^{3}\backslash S$ by the equations $d\omega (X_{0},\cdot)=0$, $\omega (X_{0})=1$. Direct computation yields \begin{gather*} X_{0}=-\frac{1}{y}\frac{\partial}{\partial x}+y\frac{\partial}{\partial z}.
\end{gather*} Moreover \begin{gather*} [X_{2},X_{1}]=\frac{1}{y}X_{1}+X_{0}, \qquad [X_{1},X_{0}]=0, \qquad [X_{2},X_{0}]=\frac{1}{y^{2}}X_{1}, \end{gather*} from which we f\/inally obtain \begin{gather*} \tilde{h}=\left( \begin{matrix} 0 & \frac{1}{2}\frac{1}{y^{2}} \\ -\frac{1}{2}\frac{1}{y^{2}} & 0 \end{matrix} \right), \qquad \chi=\frac{1}{4}\frac{1}{y^{4}}, \qquad \text{and} \qquad \kappa=-\frac{5}{2}\frac{1}{y^{2}}. \end{gather*} \end{Example} \LastPageEnding \end{document}
\mathbf egin{document} \title{A Trace Finite Element Method for Vector-Laplacians on Surfaces} \author{Sven Gro{\ss}\thanks{Institut f\"ur Geometrie und Praktische Mathematik, RWTH-Aachen University, D-52056 Aachen, Germany ([email protected]).} \and Thomas Jankuhn\thanks{Institut f\"ur Geometrie und Praktische Mathematik, RWTH-Aachen University, D-52056 Aachen, Germany ([email protected]).} \and Maxim A. Olshanskii\thanks{Department of Mathematics, University of Houston, Houston, Texas 77204-3008 ([email protected]); Partially supported by NSF through the Division of Mathematical Sciences grant 1717516.} \and Arnold Reusken\thanks{Institut f\"ur Geometrie und Praktische Mathematik, RWTH-Aachen University, D-52056 Aachen, Germany ([email protected]).} } \maketitle \mathbf egin{abstract} We consider a vector-Laplace problem posed on a 2D surface embedded in a 3D domain, which results from the modeling of surface fluids based on exterior Cartesian differential operators. The main topic of this paper is the development and analysis of a finite element method for the discretization of this surface partial differential equation. We apply the trace finite element technique, in which finite element spaces on a background shape-regular tetrahedral mesh that is surface-independent are used for discretization. In order to satisfy the constraint that the solution vector field is tangential to the surface we introduce a Lagrange multiplier. We show well-posedness of the resulting saddle point formulation. A discrete variant of this formulation is introduced which contains suitable stabilization terms and is based on trace finite element spaces. For this method we derive optimal discretization error bounds. Furthermore algebraic properties of the resulting discrete saddle point problem are studied. In particular an optimal Schur complement preconditioner is proposed. Results of a numerical experiment are included. \end{abstract} \mathbf egin{keywords} surface fluid equations, surface vector-Laplacian, trace finite element method \end{keywords} \section{Introduction} Fluid equations on manifolds appear in the literature on mathematical modeling of emulsions, foams and biological membranes, e.g.~\cite{scriven1960dynamics,slattery2007interfacial,arroyo2009,brenner2013interfacial,rangamani2013interaction,rahimi2013curved}. We refer the reader to the recent contributions~\cite{Jankuhn1,Gigaetal} for derivations of governing surface Navier--Stokes equations in terms of exterior Cartesian differential operators for the general case of a viscous incompressible material surface, which is embedded in 3D and may evolve in time. This Navier--Stokes and other models of viscous fluidic surfaces or interfaces involve the \emph{vector-Laplace operator} treated in this paper. We note that there are different definitions of surface vector-Laplacians, cf. Remark~\ref{remLaplacian}. In this paper we treat a vector-Laplace problem that results from the modeling of surface fluids based on exterior Cartesian differential operators. Several important properties of surface fluid equations, such as existence, uniqueness and regularity of weak solutions, their continuous dependence on initial data and a relation of these equations to the problem of finding geodesics on the group of volume preserving diffeomorphisms have been studied in the literature, e.g.,~\cite{ebin1970groups,Temam88,taylor1992analysis,arnol2013mathematical,mitrea2001navier,arnaudon2012lagrangian,miura2017singular}. 
Concerning the development and analysis of numerical methods for surface fluid equations there are very few papers, e.g., \cite{barrett2014stable,nitschke2012finite,reuther2015interplay,holst2012geometric,hansbo2016analysis,ReuskenZhang} and research on this topic has started only recently. Much more is known on discretization methods for \emph{scalar} elliptic and parabolic PDEs on surfaces; see the review of surface finite element methods in~\cite{DEreview,olshanskii2016trace}. In this paper we introduce and analyze a finite element method for the numerical solution of a vector-Laplace problem posed on a 2D surface embedded in a 3D domain. The approach developed here benefits from the embedding of the surface in $\mathbb{R}^3$ and uses elementary tangential calculus to formulate the equations in terms of exterior differential operators in Cartesian coordinates. Following this paradigm, the finite element spaces we use are also tailored to an ambient background mesh. This mesh is surface-independent and consists of shape-regular tetrahedra. As in previous work on scalar elliptic and parabolic surface PDEs (cf. the overview paper \cite{olshanskii2016trace}) we use the \emph{trace} of such an outer finite element space for the discretization of the vector-Laplace problem. Hence, the method that we present is a special unfitted finite element method. One distinct difficulty of applying (both fitted and unfitted) finite element methods to surface vector-Laplace and surface Navier--Stokes equations is to satisfy numerically the constraint that the solution vector field $\mathbf u_h$ is tangential to the surface $\mathcal Gamma$, i.e., $\mathbf u_h\cdot\mathbf n=0$, where $\mathbf n$ is the normal vector field to $\mathcal Gamma$, cf. the discussion in Remark~\ref{rem_t}. The method that we present handles this constraint weakly by introducing a Lagrange multiplier. The resulting saddle point variational formulation is discretized by using standard trace finite element spaces. In the discrete variational formulation certain consistent stabilization terms are included which are essential for discrete inf-sup stability, algebraic stability, and for the derivation of (optimal) preconditioners for the discrete problem. The main contributions of this paper are the following. We introduce the Lagrange multiplier formulation for the continuous problem and show well-posedness of the resulting saddle point formulation. We present a finite element variational formulation which contains suitable stabilization terms and is based on trace finite element spaces. For this method we work out an error analysis. This analysis shows that for discrete stability and optimal discretization error bounds the background space for the Lagrange multiplier can be chosen as piecewise polynomial of the same order or one order lower as for the primal variable. A further main contribution of the paper is the analysis of algebraic properties of the resulting discrete saddle point problem. In particular we derive an optimal Schur complement preconditioner. We note that in the error analysis we do \emph{not} include geometric errors induced by an approximation $\mathcal Gamma_h \approx \mathcal Gamma$. The analysis of the effects of such geometric errors is left for future research. Our approach is very different from the one based on finite element exterior calculus suitable for discretizing the Hodge Laplacian on hypersurfaces; see \cite{holst2012geometric}. 
In the recent paper~\cite{hansbo2016analysis} a finite element method for a similar vector-Laplace problem is studied. That method, however, uses a penalty technique instead of a Lagrange multiplier formulation and requires meshes fitted to the surface. Finally we note that the finite element methods for surface Navier--Stokes equations presented in \cite{nitschke2012finite,reuther2015interplay} are based on a surface curl-formulation, which is not applicable to the surface vector-Laplace problem that we consider. We also note that no discretization error analyses are given in \cite{nitschke2012finite,reuther2015interplay}. None of these related papers have considered unfitted finite element methods. This paper is meant to be the first one in a series of papers devoted to numerical simulation of fluid equations on (evolving) manifolds. Our longer-term goal is to provide efficient and reliable computational tools for the numerical solution of fluid equations on a time-dependent surface $\mathcal Gamma(t)$ including the cases when parametrization of $\mathcal Gamma(t)$ is not explicitly available and $\mathcal Gamma(t)$ may undergo topological changes. This motivates our choice to use unfitted surface-independent meshes to define finite element spaces --- a methodology that proved to work very well for scalar PDEs posed on $\mathcal Gamma(t)$~\cite{olshanskii2014eulerian,olshanskii2014error}. The remainder of the paper is organized as follows. We introduce in section~\ref{s_cont} the vector-Laplace model probem and notions of tangential differential calculus. We give a weak formulation of the problem with Lagrange multiplier and show its well-posedness. An unfitted finite element method known as the TraceFEM for the surface vector-Laplace problem is introduced in section~\ref{s_TraceFEM}. In section~\ref{s_error} an error analysis of this method is presented. We derive discrete LBB stability for certain pairs of Trace FE spaces. The main result of this section is an optimal order error estimate in the energy norm. An optimal order discretization error estimate in the $L^2$ norm is shown in section~\ref{s_L2}. In section~\ref{s_cond} we prove that the spectral condition number of the resulting saddle point stiffness matrix is bounded by $c h^{-2}$, with a constant $c$ that is independent of the position of the surface $\mathcal Gamma$ relative to the underlying triangulation. We also present an optimal Schur complement preconditioner. Numerical results in section~\ref{sectExp} illustrate the performance of the method in terms of discretization error convergence and efficiency of the linear solver. \section{Continuous problem}\left\langlebel{s_cont} Assume that $\mathcal Gamma$ is a closed sufficiently smooth surface in $\mathbb{R}^3$. The outward pointing unit normal on $\mathcal Gamma$ is denoted by $\mathbf n$, and the orthogonal projection on the tangential plane is given by $\mathbf P=\mathbf P(x):= \mathbf I - \mathbf n(x)\mathbf n(x)^T$, $x \in \mathcal Gamma$. For vector functions $\mathbf u:\, \mathcal Gamma \to \mathbb{R}^3$ we use a constant extension from $\mathcal Gamma$ to its neighborhood $\mathcal{O}(\mathcal Gamma)$ along the normals $\mathbf n$, denoted by $\mathbf u^e :\,\mathcal{O}(\mathcal Gamma)\to\mathbb{R}^3$. 
Note that on $\mathcal Gamma$ we have $\nabla\mathbf u^e= \nabla (\mathbf u\circ p)=\nabla\mathbf u^e\mathbf P$, with $\nabla \mathbf u:= (\nabla u_1~ \nabla u_2 ~\nabla u_3)^T \in \mathbb{R}^{3 \times 3}$ for vector functions $\mathbf u$ (note the transpose; this notation is usual in computational fluid dynamics). For scalar functions $u:\, \mathcal{O}(\mathcal Gamma) \to \Bbb{R}$ the gradient $\nabla u$ denotes the column vector consisting of the partial derivatives. In the remainder this locally unique extension $\mathbf u^e$ to a small neighborhood of $\mathcal Gamma$ is also denoted by $\mathbf u$. On $\mathcal Gamma$ we consider the surface strain tensor \cite{GurtinMurdoch75} given by \mathbf egin{equation} \left\langlebel{strain} E_s(\mathbf u):= \frac12 \mathbf P (\nabla \mathbf u +\nabla \mathbf u^T)\mathbf P = \frac12(\nabla_\mathcal Gamma \mathbf u + \nabla_\mathcal Gamma \mathbf u^T), \quad \nabla_\mathcal Gamma \mathbf u:= \mathbf P \nabla \mathbf u \mathbf P. \end{equation} We also use the surface divergence operators for $\mathbf u: \mathcal Gamma \to \mathbb R^3$ and $\mathbf A: \mathcal Gamma \to \mathbb{R}^{3\times 3}$. These are defined as follows: \mathbf egin{align*} {\mathop{\,\rm div}}_{\Gamma} \mathbf u & := {\rm tr} (\nabla_{\Gamma} \mathbf u)= {\rm tr} (\mathbf P (\nabla\mathbf u) \mathbf P)={\rm tr} (\mathbf P (\nabla\mathbf u))={\rm tr} ((\nabla\mathbf u) \mathbf P), \\ {\mathop{\,\rm div}}_{\Gamma} \mathbf A & := \left( {\mathop{\,\rm div}}_{\Gamma} (\mathbf e_1^T \mathbf A),\, {\mathop{\,\rm div}}_{\Gamma} (\mathbf e_2^T \mathbf A),\, {\mathop{\,\rm div}}_{\Gamma} (\mathbf e_3^T \mathbf A)\right)^T, \end{align*} with $\mathbf e_i$ the $i$th basis vector in $\mathbb R^3$. For a given force vector $\mathbf{f} \in L^2(\mathcal Gamma)^3$, with $\mathbf{f}\cdot\mathbf n=0$, we consider the following elliptic partial differential equation: determine $\mathbf u:\, \mathcal Gamma \to \mathbb R^3$ with $\mathbf u\cdot\mathbf n =0$ and \mathbf egin{equation} \left\langlebel{strongform} - \mathbf P {\mathop{\,\rm div}}_{\Gamma} (E_s(\mathbf u)) + \mathbf u=\mathbf{f} \quad \text{on}~\mathcal Gamma. \end{equation} We added the zero order term $\mathbf u$ on the left-hand side in \eqref{strongform} to avoid technical details related to the kernel of the strain tensor $E_s$. \mathbf egin{remark} \left\langlebel{remLaplacian} \rm In this paper we consider the operator $ \mathbf P {\mathop{\,\rm div}}_{\Gamma} \circ E_s$ because it is a key component in the modeling of Newtonian surface fluids and fluidic membranes \cite{scriven1960dynamics,GurtinMurdoch75,barrett2014stable,Gigaetal,Jankuhn1}. We note that in the literature there are different formulations of the surface Navier--Stokes equations, and some of these are formally obtained by substituting Cartesian differential operators by their geometric counterparts~\cite{Temam88,cao1999navier} rather than from first mechanical principles. These formulations may involve different surface Laplace type operators. In the recent preprint \cite{hansbo2016analysis} the Bochner (also called rough) Laplacian $\mathbf u \to \Delta_\mathcal Gamma\mathbf u:=\mathbf P {\mathop{\,\rm div}}_{\Gamma} (\nabla_{\Gamma} \mathbf u)$ is treated numerically. Another Laplacian operator, which in a natural way arises in differential geometry and exterior calculus is the so-called Hodge Laplacian. 
The diagram below (from \cite{Jankuhn1}) and identities ~\eqref{LaplacianALL} illustrate some `correspondences' between Cartesian and different surface operators. For $\mathbf u$ on $\mathcal Gamma$ we assume $\mathbf u\cdot\mathbf n=0$. \[ \mathbf egin{array}{rcccccc} \text{\rm In}~~\mathbb{R}^{3}\,: &-\textrm{div}\ \!(\nabla\mathbf u+\nabla^T\mathbf u)& \overset{\textrm{div}\ \!\mathbf u=0}{=} & -\Delta\mathbf u& {=} & (\mbox{rot}^T\mbox{rot}-\nabla\textrm{div}\ \!)\mathbf u\\ &\wr& &\wr& &\wr\\ \text{On $\mathcal Gamma$}\,: & \underbrace{-\mathbf P {\mathop{\,\rm div}}_{\Gamma} (2 E_s(\mathbf u))}& \overset{{\mathop{\,\rm div}}_{\Gamma}\mathbf u=0}{\neq} & \underbrace{-\Delta_\mathcal Gamma\mathbf u}& {\neq} & \underbrace{-\Delta_\mathcal Gamma^H\mathbf u}\\ &\text{surface}& &\text{Bochner }& &\text{Hodge}\\ &\text{diffusion}& &\text{Laplacian}& &\text{Laplacian}\\ \end{array} \] For a smooth surface $\mathcal Gamma\subset\mathbb{R}^3$ with Gauss curvature $K$ we have, cf. Lemma~2.1 in~\cite{Jankuhn1} and the Weitzenb\"{o}ck identity~\cite{rosenberg1997laplacian}, the following equalities for a tangential field $\mathbf u$ : \mathbf egin{equation}\left\langlebel{LaplacianALL} -\mathbf P {\mathop{\,\rm div}}_{\Gamma} (2E_s(\mathbf u))=-\Delta_\mathcal Gamma\mathbf u-K\mathbf u=-\Delta_\mathcal Gamma^H\mathbf u-2K\mathbf u\quad\text{on}~\mathcal Gamma, \end{equation} where for the first equality to hold, $\mathbf u$ should satisfy ${\mathop{\,\rm div}}_{\Gamma}\mathbf u=0$. \end{remark} \ \\[1ex] For the weak formulation of this vector-Laplace problem we use the space $\mathbf V:=H^1(\mathcal Gamma)^3$, with norm \mathbf egin{equation} \left\langlebel{H1norm} \|\mathbf u\|_{1}^2:=\int_{\mathcal Gamma}\|\mathbf u(s)\|_2^2 + \|\nabla\mathbf u (s)\|_2^2\,ds, \end{equation} where $\|\cdot\|_2$ denotes the vector and matrix $2$-norm. Note that due to $\nabla \mathbf u=\nabla\mathbf u^e= \nabla \mathbf u\mathbf P$ on $\mathcal Gamma$ only {tangential} derivatives are included in this $H^1$-norm. The corresponding space of tangential vector fields is denoted by \mathbf egin{equation} \left\langlebel{defVT} \mathbf V_T:= \{\, \mathbf u \in \mathbf V~|~ \mathbf u\cdot \mathbf n =0 \quad \text{on}~~\mathcal Gamma\,\}. \end{equation} For $\mathbf u \in \mathbf V$ we use the following notation for the orthogonal decomposition into tangential and normal parts: \[ \mathbf u = \mathbf u_T + u_N\mathbf n,\quad \mathbf u_T\cdot\mathbf n=0. \] We introduce the bilinear form \mathbf egin{align*} a(\mathbf u,\mathbf v)& := \int_\mathcal Gamma E_s(\mathbf u):E_s(\mathbf v) + \mathbf u \cdot \mathbf v \, ds= \int_\mathcal Gamma {\rm tr}\big(E_s(\mathbf u) E_s(\mathbf v)\big) + \mathbf u \cdot \mathbf v \, ds, ~~\mathbf u,\mathbf v \in \mathbf V. \end{align*} For given $\mathbf{f}$ as above we consider the following variational formulation of \eqref{strongform}: determine $\mathbf u=\mathbf u_T \in \mathbf V_T$ such that \mathbf egin{equation} \left\langlebel{vectLaplace} a(\mathbf u_T,\mathbf v_T)= (\mathbf{f},\mathbf v_T)_{L^2(\mathcal Gamma)} \quad \text{for all}~~\mathbf v_T \in \mathbf V_T. \end{equation} The bilinear form $a(\cdot,\cdot)$ is continuous on $\mathbf V_T$. Ellipticity of $a(\cdot,\cdot)$ on $\mathbf V_T$ follows from the following surface Korn inequality, which is derived in \cite{Jankuhn1}. \mathbf egin{lemma} \left\langlebel{Kornlemma} Assume $\mathcal Gamma$ is $C^2$ smooth. 
There exists $c_K >0$ such that \mathbf egin{equation} \left\langlebel{korn} \|\mathbf u\|_{L^2(\mathcal Gamma)}+ \|E_s(\mathbf u)\|_{L^2(\mathcal Gamma)} \geq c_K \|\mathbf u\|_{1} \quad \text{for all}~~\mathbf u \in \mathbf V_T. \end{equation} \end{lemma} Hence, the weak formulation \eqref{vectLaplace} is a well-posed problem. The unique solution is denoted by $\mathbf u^\ast=\mathbf u_T^\ast$. \mathbf egin{remark}\left\langlebel{rem_t}\rm The weak formulation \eqref{vectLaplace} is not very suitable for a finite element Galerkin discretization, because we then need finite element functions that are \emph{tangential} to $\mathcal Gamma$, which are not easy to construct. If $\mathcal Gamma\cap K$ is curved in a simplex $K$ where $\mathbf u_h$ is polynomial, then it is easy to see that enforcing $\mathbf u_h\cdot\mathbf n=0$ on $\mathcal Gamma\cap K$ may lead to `locking', i.e. only $\mathbf u_h=0$ satisfies the constraint. Alternatively, one can approximate a smooth manifold $\mathcal Gamma$ by a polygonal surface $\mathcal Gamma_h$ (in practice, this is often done for the purpose of numerical integration; moreover, only $\mathcal Gamma_h$ is available if finding the position of the surface is part of the problem). In this case the surface $\mathcal Gamma_h$ has a \textit{discontinuous} normal field $\mathbf n_h$ and enforcing the tangential constraint, $\mathbf u_h\cdot\mathbf n_h=0$ on $\mathcal Gamma_h$, for a \textit{continuous} finite element vector field $\mathbf u_h$ may lead to a locking effect as well. \end{remark} \ \\[1ex] In view of the remark above we introduce, in the same spirit as in \cite{hansbo2016stabilized,hansbo2016analysis,Jankuhn1}, a weak formulation in a space that is larger than $\mathbf V_T$ and which allows nonzero normal components in the surface vector fields. However, different from the approach used in these papers we treat the tangential condition with the help of a Lagrange multiplier. The following basic relation will be very useful: \mathbf egin{equation} \left\langlebel{idfund} E_s(\mathbf u)=E_s(\mathbf u_T) + u_N \mathbf H, \end{equation} where $\mathbf H := \nabla \mathbf n$ is the shape operator (second fundamental form) on $\mathcal Gamma$. We introduce the following Hilbert space: \[ \mathbf V_\ast :=\{\, \mathbf u \in L^2(\mathcal Gamma)^3\,:\,\mathbf u_T \in \mathbf V_T,~u_N\in L^2(\mathcal Gamma)\,\}, \quad\text{with}~~ \|\mathbf u\|_{V_\ast}^2:=\|\mathbf u_T\|_{1}^2+\|u_N\|_{L^2(\mathcal Gamma)}^2. \] Note that $\mathbf V_\ast\sim \mathbf V_T \oplus L^2(\mathcal Gamma)$. Based on the identity \eqref{idfund} we introduce, with some abuse of notation, the bilinear form \mathbf egin{equation} \left\langlebel{defaalt} a(\mathbf u,\mathbf v) := \int_\mathcal Gamma {\rm tr}\big((E_s(\mathbf u_T)+u_N\mathbf H)( E_s(\mathbf v_T)+ v_N \mathbf H)\big) +\mathbf u \cdot \mathbf v \, ds, \quad \mathbf u,\mathbf v \in \mathbf V_\ast. \end{equation} This bilinear form is well-defined and continuous on $\mathbf V_\ast$. We enforce the condition $\mathbf u\in \mathbf V_T$ with the help of a Lagrange multiplier. 
For given $\mathbf g \in L^2(\mathcal Gamma)^3$ (note that we allow $\mathbf g$ not necessarily tangential) we introduce the following saddle point problem: determine $(\mathbf u,\left\langlembda) \in \mathbf V_\ast \times L^2(\mathcal Gamma)$ such that \mathbf egin{equation} \left\langlebel{weak1a} \mathbf egin{aligned} a(\mathbf u,\mathbf v) +(\mathbf v\cdot \mathbf n,\left\langlembda)_{L^2(\mathcal Gamma)} &=(\mathbf{g},\mathbf v)_{L^2(\mathcal Gamma)} &\quad &\text{for all }\mathbf v \in \mathbf V_\ast, \\ (\mathbf u\cdot\mathbf n,\mu)_{L^2(\mathcal Gamma)} & = 0 &\qquad &\text{for all }\mu \in L^2(\mathcal Gamma). \end{aligned} \end{equation} Well-posedness of this saddle point problem is derived in the following theorem. \mathbf egin{theorem} The problem \eqref{weak1a} is well-posed. Its unique solution $(\mathbf u^\ast,\left\langlembda) \in \mathbf V_\ast \times L^2(\mathcal Gamma)$ has the following properties: \mathbf egin{align} 1.\quad&\mathbf u^\ast \cdot \mathbf n =0,\\ 2.\quad&\mathbf u_T^\ast =\mathbf u_T,~~ \text{where $\mathbf u_T$ is the unique solution of \eqref{vectLaplace} with $\mathbf{f}:=\mathbf g_T=\mathbf P \mathbf g$}, \left\langlebel{hhj} \\ 3.\quad&\left\langlembda = g_N- {\rm tr} \big(E_s(\mathbf u_T^\ast) \mathbf H)\big),~~ \text{for}~\mathbf g=\mathbf g_T+g_N\mathbf n. \left\langlebel{charlambda} \end{align} \end{theorem} \mathbf egin{proof} Note that $\mathbf v \in \mathbf V_\ast$ satisfies $(\mathbf v\cdot\mathbf n,\mu)_{L^2(\mathcal Gamma)}=0$ for all $\mu \in L^2(\mathcal Gamma)$ iff $\mathbf v \in \mathbf V_T$. From this and \eqref{korn} it follows that $a(\cdot,\cdot)$ is elliptic on $\mathbf V_T$, the subspace of $\mathbf V_\ast$ consisting of all functions $\mathbf u$ that satisfy the second equation in \eqref{weak1a}. The multiplier bilinear form $(\mathbf v,\mu) \mapsto (\mathbf v\cdot \mathbf n,\mu)_{L^2(\mathcal Gamma)}$ has the inf-sup property \[ \inf_{\mu \in L^2(\mathcal Gamma)} \sup_{\mathbf v \in \mathbf V_\ast} \frac{(\mathbf v\cdot \mathbf n,\mu)_{L^2(\mathcal Gamma)}}{\|\mathbf v\|_{V_\ast}\|\mu\|_{L^2(\mathcal Gamma)}} \geq \inf_{\mu \in L^2(\mathcal Gamma)} \sup_{v_N \in L^2(\mathcal Gamma)} \frac{(v_N,\mu)_{L^2(\mathcal Gamma)}}{\|v_N\|_{L^2(\mathcal Gamma)}\|\mu\|_{L^2(\mathcal Gamma)}}=1. \] Furthermore, the bilinear forms are continuous. Hence, we have a well-posed saddle point formulation, with a unique solution denoted by $(\mathbf u^\ast,\left\langlembda)$. From the second equation in \eqref{weak1a} one obtains $\mathbf u^\ast \cdot \mathbf n=0$. If in the first equation we restrict to $\mathbf v=\mathbf v_T \in V_T$ we see that $ \mathbf u_T^\ast$ satisfies the same variational problem as in \eqref{vectLaplace} with $\mathbf{f}:=\mathbf P \mathbf g$, hence, \eqref{hhj} holds. If in the first equation in \eqref{weak1a} we take $\mathbf v=v_N \mathbf n$ and use $ \mathbf u^\ast \cdot \mathbf n=0$ we get \[ \mathbf egin{split} (\left\langlembda, v_N)_{L^2(\mathcal Gamma)} & = (\mathbf g, v_N \mathbf n)_{L^2(\mathcal Gamma)}- a( \mathbf u^\ast, v_N\mathbf n) \\ & =(g_N, v_N)_{L^2(\mathcal Gamma)} - \int_{\mathcal Gamma} {\rm tr} \big(E_s(\mathbf u_T^\ast) E_s(v_N \mathbf n)\big) \, ds \\ & =(g_N, v_N)_{L^2(\mathcal Gamma)} -({\rm tr} \big(E_s(\mathbf u_T^\ast) \mathbf H)\big), v_N)_{L^2(\mathcal Gamma)} \quad \text{for all}~~v_N \in L^2(\mathcal Gamma), \end{split} \] hence we have the characterization as in \eqref{charlambda}. 
\end{proof} \ \\[1ex] From \eqref{charlambda} it follows that if $\mathbf u_T^\ast$ has smoothness $\mathbf u_T^\ast \in H^m(\mathcal Gamma)^3$ and the manifold is sufficiently smooth (hence $\mathbf H$ sufficiently smooth) then we have $\left\langlembda \in H^{m-1}(\mathcal Gamma)$. Note that if $\mathbf H=0$ and $g_N=0$ then $\left\langlembda=0$. \mathbf egin{remark}\left\langlebel{rem_aug} \rm In the proof above we used that the form $a(\cdot,\cdot)$ is elliptic on $\mathbf V_T$, the subspace of $\mathbf V_\ast$ consisting of all functions that satisfy the second equation in \eqref{weak1a}. Note the inequality \mathbf egin{align*} a(\mathbf u,\mathbf u) &\ge \varepsilon\|E_s(\mathbf u_T^\ast)\|^2_{L^2(\mathcal Gamma)}-\frac{\varepsilon}{1-\varepsilon}\|\mathbf H u_N\|^2_{L^2(\mathcal Gamma)}+\|\mathbf u\|^2_{L^2(\mathcal Gamma)}\\ &\ge \varepsilon\|E_s(\mathbf u_T^\ast)\|^2_{L^2(\mathcal Gamma)}+\big(1-\frac{\varepsilon}{1-\varepsilon} \|\mathbf H\|_{L^\infty(\mathcal Gamma)}^2\big)\|\mathbf u\|^2_{L^2(\mathcal Gamma)}~~\forall\,\mathbf u\in\mathbf V_\ast,~~\forall~\varepsilon < 1. \end{align*} With $\varepsilon_0:=\frac12(1+\|\mathbf H\|_{L^\infty(\mathcal Gamma)}^2)^{-1}$ and the Korn inequality \eqref{korn} we get \mathbf egin{align*} a(\mathbf u,\mathbf u) \geq \varepsilon_0\big(\|E_s(\mathbf u_T^\ast)\|^2_{L^2(\mathcal Gamma)} + \|\mathbf u\|^2_{L^2(\mathcal Gamma)}\big)\geq \frac12 \varepsilon_0 c_K^2 \|\mathbf u_T\|_1^2 + \varepsilon_0 \|u_N\|_{L^2(\mathcal Gamma)}^2 \end{align*} for all $\mathbf u \in \mathbf V_\ast$. Hence, the bilinear form $a(\cdot,\cdot)$ is also elliptic on $\mathbf V_\ast$. Note that the ellipticity constant depends on the curvature of $\mathcal Gamma$. \end{remark} \ \\ \mathbf egin{remark} \rm Instead of the weak formulation in \eqref{weak1a} one can also consider a penalty formulation, without using a Lagrange multiplier $\left\langlembda$. Such an approach is used for a similar Bochner-Laplace problem in \cite{hansbo2016analysis}. This formulation is as follows: determine $\mathbf u \in \mathbf V_\ast$ such that \mathbf egin{equation} a(\mathbf u,\mathbf v)+ \eta (\mathbf u\cdot\mathbf n,\mathbf v\cdot \mathbf n)_{L^2(\mathcal Gamma)}= (\mathbf{f},\mathbf v)_{L^2(\mathcal Gamma)} \quad \text{for all}~~\mathbf v \in \mathbf V_\ast, \end{equation} with $\eta >0$ (sufficiently large). From ellipticity and continuity it follows that this weak formulation has a unique solution, denoted by $\hat \mathbf u$. Opposite to the solution $\mathbf u^\ast$ of \eqref{weak1a}, the solution $\hat \mathbf u$ does not have the property $\hat \mathbf u \cdot \mathbf n=0$, and in general $\hat \mathbf u_T \neq \mathbf u_T^\ast$ holds. Using standard arguments one easily derives the error bound \[ \|\hat \mathbf u_T- \mathbf u_T^\ast\|_{V^\ast} \leq c \eta^{-\frac12} \|\mathbf{f}\|_{L^2(\mathcal Gamma)}. \] Hence, as usual in this type of penalty method, one has to take $\eta$ sufficiently large depending on the desired accuracy of the approximation. \end{remark} \ \\ \mathbf egin{remark} \rm The analysis of well-posedness above and the finite element method presented in the next section have immediate extensions to the case of the Bochner Laplacian on $\mathcal Gamma$. 
For this, one replaces the bilinear form in \eqref{vectLaplace} by \[ a^B(\mathbf u,\mathbf v) := \int_\mathcal Gamma (\nabla_{\Gamma} \mathbf u:\nabla_{\Gamma} \mathbf u + \mathbf u \cdot \mathbf v) \, ds, \qquad \mathbf u,\mathbf v \in \mathbf V, \] and instead of \eqref{idfund} one uses $\nabla_{\Gamma} \mathbf u=\nabla_{\Gamma} \mathbf u_T+u_N\mathbf H$ for further analysis. In this case, Korn's inequality \eqref{korn} is replaced by Poincare's inequality on $\mathcal Gamma$ (cf.~\cite{hansbo2016analysis}). Based on the second equality in \eqref{LaplacianALL} and the tangential variational formulation for the Bochner Laplacian, one can also consider the bilinear form \[ a^H(\mathbf u,\mathbf v) := \int_\mathcal Gamma (\nabla_{\Gamma} \mathbf u:\nabla_{\Gamma} \mathbf u + (1+K)\mathbf u \cdot \mathbf v) \, ds, \qquad \mathbf u,\mathbf v \in \mathbf V, \] for an equation with the Hodge Laplacian. This formulation, however, is less convenient for the analysis of well-posedness in the framework of this paper, since the Gauss curvature $K$ in general does not have a fixed sign. Moreover, in a numerical method one then has to approximate the Gauss curvature $K$ based on a ``discrete'' (e.g., piecewise planar) surface approximation, which is known to be a delicate numerical issue. \end{remark} \section{Trace Finite Element Method}\left\langlebel{s_TraceFEM} For the discretization of the variational problem \eqref{weak1a} we use the trace finite element approach (TraceFEM)~ \cite{ORG09}. For this, we assume a fixed polygonal domain $\Omega \subset \mathbb R^3$ that strictly contains $\mathcal Gamma$. We use a family of shape regular tetrahedral triangulations $\{\mathcal T_h\}_{h >0}$ of $\Omega$. The subset of tetrahedra that have a nonzero intersection with $\mathcal Gamma$ is collected in the set denoted by $\mathcal T_h^\mathcal Gamma$. For simplicity, in the analysis of the method, we assume $\{\mathcal T_h^\mathcal Gamma\}_{h >0}$ to be quasi-uniform. The domain formed by all tetrahedra in $\mathcal T_h^\mathcal Gamma$ is denoted by $\Omega^\Gamma_h:=\text{int}(\overline{\cup_{T \in \mathcal T_h^\mathcal Gamma} T})$. On $\mathcal T_h^\mathcal Gamma$ we use a standard finite element space of continuous functions that are piecewise polynomial of degree $k$. This so-called \emph{outer finite element space} is denoted by $V_h^k$. In the stabilization terms added to the finite element formulation (see below), we need an extension of the normal vector field $\mathbf n$ from $\mathcal Gamma$ to $\Omega^\Gamma_h$. For this we use $\mathbf n^e = \nabla d$, where $d$ is the signed distance function to $\mathcal Gamma$. In practice, this signed distance function is often not available and we then use approximations as discussed in Remark~\ref{remimplementation}. Another aspect related to implementation is that in practice it is often not easy to compute integrals over the surface $\mathcal Gamma$ with high order accuracy. This may be due to the fact that $\mathcal Gamma$ is defined implicitly as the zero level of a level set function and a parametrization of $\mathcal Gamma$ is not available. This issue of ``geometric errors'' and of a feasible approximation $\mathcal Gamma_h \approx \mathcal Gamma$ will also be addressed in Remark~\ref{remimplementation}. Below in the presentation and analysis of the TraceFEM we use the exact extended normal $\mathbf n=\mathbf n^e$ and we assume exact integration over $\mathcal Gamma$. 
We introduce the stabilized bilinear forms, with $\mathbf U:=\{\, \mathbf v \in H^1(\Omega^\Gamma_h)^3~|~\mathbf v_{|\mathcal Gamma} \in \mathbf V\,\}$, $M:=H^1(\Omega^\Gamma_h)$, \mathbf egin{align*} A_h(\mathbf u,\mathbf v) &:= a(\mathbf u,\mathbf v) + \rho \int_{\Omega^\Gamma_h} (\nabla \mathbf u \mathbf n)\cdot (\nabla \mathbf v \mathbf n) \, dx, \qquad \mathbf u,\mathbf v \in \mathbf U, \\ b(\mathbf u,\mu)& := (\mathbf u\cdot \mathbf n,\mu)_{L^2(\mathcal Gamma)} + \tilde \rho \int_{\Omega^\Gamma_h} (\mathbf n^T \nabla \mathbf u \mathbf n) (\mathbf n \cdot \nabla \mu) \, dx \\ &= (u_N,\mu)_{L^2(\mathcal Gamma)} + \tilde \rho \int_{\Omega^\Gamma_h} (\mathbf n\cdot \nabla u_ N)(\mathbf n \cdot \nabla \mu) \, dx, \qquad \mathbf u \in \mathbf U, ~ \mu \in M. \end{align*} Such ``volume normal derivative'' stabilizations have recently been studied in \cite{grande2017higher,burmanembedded}. The parameters in the stabilizations may be $h$-dependent, $\rho \sim h^{m},~\tilde \rho \sim h^{\tilde m}$. One can consider different scalings, i.e., $m \neq \tilde m$. From the analysis below it follows that the best choice is $m=\tilde m$. To simplify the presentation we set $\rho=\tilde \rho$. Based on the analysis in \cite{grande2017higher} of \emph{scalar} surface problems we restrict to \mathbf egin{equation} \left\langlebel{scalingrho} h \lesssim \rho=\tilde \rho \lesssim h^{-1}. \end{equation} Here and further in the paper we write $x\lesssim y$ to state that the inequality $x\le c y$ holds for quantities $x,y$ with a constant $c$, which is independent of the mesh parameter $h$ and the position of $\mathcal Gamma$ over the background mesh. Similar we give sense to $x\gtrsim y$; and $x\sim y$ will mean that both $x\lesssim y$ and $x\gtrsim y$ hold. For fixed $k,l \geq 1$ we take finite element spaces \[ \mathbf U_h:= (V_h^k)^3\subset \mathbf U, \quad M_h :=V_h^l\subset M, \] for the velocity $\mathbf u$ and the Lagrange multiplier $\left\langlembda$, respectively. The finite element method (TraceFEM) that we consider is as follows: determine $(\mathbf u_h, \left\langlembda_h) \in \mathbf U_h \times M_h$ such that \mathbf egin{equation} \left\langlebel{discrete} \mathbf egin{aligned} A_h(\mathbf u_h,\mathbf v_h) + b(\mathbf v_h,\left\langlembda_h) & =(\mathbf{g},\mathbf v_h)_{L^2(\mathcal Gamma)} &\quad &\text{for all } \mathbf v_h \in \mathbf U_h \\ b(\mathbf u_h,\mu_h) & = 0 &\quad &\text{for all }\mu_h \in M_h. \end{aligned} \end{equation} \mathbf egin{remark} \left\langlebel{remimplementation} \rm As noted above, in the implementation of this method one typically replaces $\mathcal Gamma$ by an approximation $\mathcal Gamma_h \approx \mathcal Gamma$ such that integrals over $\mathcal Gamma_h$ can be efficiently computed. Furthermore, the exact normal $\mathbf n$ is approximated by $\mathbf n_h \approx \mathbf n $. In the literature on finite element methods for surface PDEs there are standard procedures resulting in a piecewise planar surface approximation $\mathcal Gamma_h$ with ${\rm dist}(\mathcal Gamma,\mathcal Gamma_h) \lesssim h^2$. If one is interested in surface FEM with higher order surface approximation, we refer to the recent paper \cite{grande2017higher}, where one finds an efficient method based on an isoparametric mapping derived from a level set representation of $\mathcal Gamma$. In \cite{demlow2009higher} another higher order surface approximation method is treated. In the numerical experiments in section~\ref{sectExp} we use a piecewise planar surface approximation. 
Also for the construction of suitable normal approximations $\mathbf n_h \approx \mathbf n$ several techniques are available in the literature. One possibility is to use $\mathbf n_h(\mathbf x)=\frac{\nabla \phi_h(\mathbf x)}{\|\nabla \phi_h(\mathbf x)\|_2}$, where $\phi_h$ is a finite element approximation of a level set function $\phi$ which characterizes $\mathcal Gamma$. This technique is used in section~\ref{sectExp}. In this paper we do not analyze the effect of such geometric errors, i.e., we only analyze the finite element method \eqref{discrete}. \end{remark} \section{Error analysis of TraceFEM}\left\langlebel{s_error} In this section we present an error analysis of the TraceFEM \eqref{discrete}. We first address consistency of this stabilized formulation. The solution $(\mathbf u^\ast, \left\langlembda)$ of \eqref{weak1a}, which is defined only on $\mathcal Gamma$, can be extended by constant values along normals to a neighborhood $\mathcal{O}(\mathcal Gamma)$ of $\mathcal Gamma$ such that $\Omega^\Gamma_h\subset\mathcal{O}(\mathcal Gamma)$. This extended solution $((\mathbf u^\ast)^e, \left\langlembda^e)$ is also denoted by $(\mathbf u^\ast, \left\langlembda)$. Hence we have $ \nabla \mathbf u^\ast \mathbf n=0$, $\mathbf n \cdot \nabla (\mathbf u^\ast \cdot \mathbf n)=0$, $\mathbf n\cdot \nabla \left\langlembda =0$ on $\Omega^\Gamma_h$. Using these properties and $(\mathbf U_h)_{|\mathcal Gamma} \subset \mathbf V_\ast$, $(M_h)_{|\mathcal Gamma} \subset L^2(\mathcal Gamma)$ we get the following \emph{consistency result}: \mathbf egin{equation} \left\langlebel{consisdiscrete} \mathbf egin{aligned} A_h(\mathbf u^\ast,\mathbf v_h) + b(\mathbf v_h,\left\langlembda) = a(\mathbf u^\ast,\mathbf v_h)+ (\mathbf v_h \cdot \mathbf n,\left\langlembda)_{L^2(\mathcal Gamma)}& =(\mathbf{g},\mathbf v_h)_{L^2(\mathcal Gamma)}&\quad&\forall~\mathbf v_h \in \mathbf U_h, \\ b(\mathbf u^\ast,\mu_h)= (\mathbf u^\ast \cdot \mathbf n,\mu_h)_{L^2(\mathcal Gamma)} & = 0 &\quad&\forall~\mu_h \in M_h. \end{aligned} \end{equation} We now address continuity of the bilinear forms. For this we introduce the semi-norms \mathbf egin{align} \|\mathbf u\|_U^2 & :=A_h(\mathbf u,\mathbf u), \quad \mathbf u \in \mathbf U, \left\langlebel{defUh} \\ \|\mu\|_M^2 &:= \|\mu\|_{L^2(\mathcal Gamma)}^2 + \rho \|\mathbf n \cdot \nabla \mu\|_{L^2(\Omega^\Gamma_h)}^2, \quad \mu \in M. \left\langlebel{defmuast} \end{align} \mathbf egin{lemma} \left\langlebel{lemcont} The following holds \mathbf egin{align} A_h(\mathbf u,\mathbf v) & \leq \|\mathbf u\|_U\|\mathbf v\|_U \quad \text{for all }\mathbf u,\mathbf v \in \mathbf U, \left\langlebel{cont1}\\ b(\mathbf u,\mu) & \leq \|\mathbf u\|_U \|\mu\|_M \quad \text{for all }\mathbf u \in \mathbf U, \mu \in M. \left\langlebel{cont2} \end{align} \end{lemma} \mathbf egin{proof} The result in \eqref{cont1} follows from Cauchy-Schwarz inequalities. Note that due to $\nabla \mathbf n \mathbf n =0$ and the symmetry of $\nabla \mathbf n$ we obtain $\mathbf n\cdot \nabla u_N=\mathbf n \cdot\nabla( \mathbf u \cdot \mathbf n)= \mathbf n^T \nabla \mathbf u \mathbf n + \mathbf n^T \nabla \mathbf n \mathbf u = \mathbf n^T\nabla \mathbf u \mathbf n $. Hence, $|\mathbf n \cdot\nabla u_N| \leq \|\nabla \mathbf u \mathbf n\|_2$ holds (pointwise at $x \in \mathcal{O}(\mathcal Gamma)$). 
Using this we get for $\mathbf u \in \mathbf U$, $\mu \in M$: \mathbf egin{align} |b(\mathbf u,\mu)| & \leq |(\mathbf u \cdot \mathbf n, \mu)_{L^2(\mathcal Gamma)}| +\rho |(\mathbf n \cdot \nabla u_N , \mathbf n \cdot \nabla \mu)_{L^2(\Omega^\Gamma_h)}| \nonumber \\ & \leq \|\mathbf u\|_{L^2(\mathcal Gamma)}\|\mu\|_{L^2(\mathcal Gamma)}+ \rho \|\mathbf n \cdot \nabla u_N\|_{L^2(\Omega^\Gamma_h)}\|\mathbf n \cdot \nabla \mu\|_{L^2(\Omega^\Gamma_h)} \nonumber \\ & \leq \big(\|\mathbf u\|_{L^2(\mathcal Gamma)}^2 + \rho \| \nabla \mathbf u\mathbf n\|_{L^2(\Omega^\Gamma_h)}^2 \big)^\frac12 \big(\|\mu\|_{L^2(\mathcal Gamma)}^2 +\rho \|\mathbf n \cdot \nabla \mu\|_{L^2(\Omega^\Gamma_h)}^2 \big)^\frac12 \left\langlebel{bineq} \\ & \leq A_h(\mathbf u,\mathbf u)^\frac12 \|\mu\|_M= \|\mathbf u\|_U\|\mu\|_M. \nonumber \end{align} This completes the proof. \end{proof} \ \\[2ex] The following result is crucial for the stability and error analysis of the method. \mathbf egin{lemma} \left\langlebel{lemcrucial} The following uniform norm equivalence holds: \mathbf egin{equation} \left\langlebel{fund1} h \|v_h\|_{L^2(\mathcal Gamma)}^2 + h^2 \|\mathbf n \cdot \nabla v_h\|_{L^2(\Omega^\Gamma_h)}^2 \sim \|v_h\|_{L^2(\Omega^\Gamma_h)}^2\quad \text{for all}~~v_h \in V_h^k. \end{equation} \end{lemma} \mathbf egin{proof} A fundamental result derived in \cite{grande2017higher} (Lemma 7.6) is: \mathbf egin{equation} \left\langlebel{fund1A} \|v_h\|_{L^2(\Omega^\Gamma_h)}^2 \lesssim h \|v_h\|_{L^2(\mathcal Gamma)}^2 + h^2 \|\mathbf n \cdot \nabla v_h\|_{L^2(\Omega^\Gamma_h)}^2 \quad \text{for all}~~v_h \in V_h^k. \end{equation} (This follows by taking $\Psi={\rm id}$ in the analysis in section 7.2 in \cite{grande2017higher}). We combine this with the following estimate, cf.~\cite{Hansbo02}: \mathbf egin{equation} \left\langlebel{fund1B} h \|v\|_{L^2(\mathcal Gamma)}^2 \lesssim \|v\|_{L^2(\Omega^\Gamma_h)}^2+h^2\|v\|_{H^1(\Omega^\Gamma_h)}^2 \quad \text{for all}~v \in H^1(\Omega^\Gamma_h), \end{equation} and a standard finite element inverse inequality $\|v_h\|_{H^1(\Omega^\Gamma_h)} \lesssim h^{-1}\|v_h\|_{L^2(\Omega^\Gamma_h)}$ for all $v_h \in V_h^k$. This completes the proof. \end{proof} \ \\[2ex] Using \eqref{fund1} and \eqref{scalingrho} we get \mathbf egin{equation}\left\langlebel{elliptic} \mathbf egin{split} A_h(\mathbf u_h,\mathbf u_h) & = a(\mathbf u_h,\mathbf u_h) + \rho \int_{\Omega^\Gamma_h} ( \nabla \mathbf u_h\mathbf n)\cdot ( \nabla \mathbf u_h\mathbf n ) \, dx \\ & \gtrsim \|\mathbf u_h\|_{L^2(\mathcal Gamma)}^2 + h \int_{\Omega^\Gamma_h} (\nabla \mathbf u_h\mathbf n)\cdot ( \nabla \mathbf u_h\mathbf n) \, dx \\ & =\sum_{i=1}^3 \Big( \|(u_h)_i\|_{L^2(\mathcal Gamma)}^2 + h \|\mathbf n \cdot \nabla (u_h)_i\|_{L^2(\Omega^\Gamma_h)}^2\Big) \\ & \gtrsim h^{-1} \sum_{i=1}^3 \|(u_h)_i\|_{L^2(\Omega^\Gamma_h)}^2 = c h^{-1}\|\mathbf u_h\|_{L^2(\Omega^\Gamma_h)}^2, \end{split} \end{equation} for all $\mathbf u_h \in \mathbf U_h$. This implies that $A_h(\cdot,\cdot)$ \emph{is a scalar product on} $\mathbf U_h$. Using \eqref{fund1} and $\rho \gtrsim h$ we get \mathbf egin{equation}\left\langlebel{lower2} \|\mu_h\|_M^2 \gtrsim h^{-1} \|\mu_h\|_{L^2(\Omega^\Gamma_h)}^2 \quad \text{for all}~~\mu_h \in M_h. \end{equation} This in particular implies that $\|\cdot\|_M$ corresponds to a \emph{scalar product on} $M_h$. We now turn to the discrete inf-sup property. \mathbf egin{lemma} \left\langlebel{lemdiscreteinfsup} Take $m \geq 1$. 
There exist constants $d_1>0$, $d_2 >0$, independent of $h$ and of how $\mathcal Gamma$ intersects the outer triangulation, such that: \mathbf egin{equation} \left\langlebel{infsupest} \sup_{\mathbf v_h \in (V_h^m)^3} \frac{b(\mathbf v_h,\mu_h)}{\|\mathbf v_h\|_U} \geq d_1(1- d_2\sqrt{\rho h}) \|\mu_h\|_M \quad \text{for all}~~\mu_h \in V_h^m. \end{equation} \end{lemma} \mathbf egin{proof} Take $\mu_h \in V_h^m$. Note that \[ b(\mu_h \mathbf n, \mu_h)= \|\mu_h\|_M^2. \] Take $\mathbf v_h:= I_m (\mu_h \mathbf n) \in (V_h^m)^3$, where $I_m$ is the nodal (Lagrange) interpolation operator. The latter is well defined, because both $\mu_h$ and $\mathbf n$ are continuous in $\Omega^\Gamma_h$. Now note, cf.~\eqref{bineq}, \mathbf egin{equation} \left\langlebel{eq1} \mathbf egin{split} & |b(\mathbf v_h, \mu_h)| \geq |b(\mu_h \mathbf n, \mu_h)|- |b(I_m (\mu_h\mathbf n) -\mu_h\mathbf n,\mu_h)| \\ &=\|\mu_h\|_M^2 - |b(I_m (\mu_h\mathbf n) -\mu_h\mathbf n,\mu_h)| \\ & \geq \big(\|\mu_h\|_M- \big(\|I_m (\mu_h\mathbf n) -\mu_h\mathbf n\|_{L^2(\mathcal Gamma)}^2 + \rho \| \nabla (I_m (\mu_h\mathbf n) -\mu_h\mathbf n) \mathbf n\|_{L^2(\Omega^\Gamma_h)}^2 \big)^\frac12 \big)\|\mu_h\|_M. \end{split} \end{equation} From $E_s(\mu_h \mathbf n)=\mu_h \mathbf H$ we get $a(\mu_h \mathbf n,\mu_h \mathbf n) \lesssim \|\mu_h\|_{L^2(\mathcal Gamma)}^2$ and using this and $\mathbf H \mathbf n=0$ we obtain \mathbf egin{equation} \left\langlebel{eq2} \|\mathbf v_h\|_U \leq \|\mu_h \mathbf n\|_U+ \|I_m (\mu_h\mathbf n) -\mu_h\mathbf n\|_U \lesssim \|\mu_h\|_M + \|I_m (\mu_h\mathbf n) -\mu_h\mathbf n\|_U. \end{equation} We now consider the terms with $I_m (\mu_h\mathbf n) -\mu_h\mathbf n$ in \eqref{eq1} and \eqref{eq2}. We use standard element-wise interpolation bounds for the Lagrange interpolant, the identity $|\mu_h|_{H^{m+1}(K)}=0$ for the $H^{m+1}$ seminorm of $\mu_h$ over any tetrahedron $K \in \mathcal T_h^\mathcal Gamma$, the inverse inequality $\|\mu_h\|_{H^m(K)}\le c h^{-m}\|\mu_h\|_{L^2(K)}$, \eqref{fund1} and the local variant of the estimate \eqref{fund1B}. We then obtain \mathbf egin{align*} & \|I_m (\mu_h\mathbf n) -\mu_h\mathbf n\|_U^2 \\ & = a(I_m (\mu_h\mathbf n) -\mu_h\mathbf n,I_m (\mu_h\mathbf n) -\mu_h\mathbf n) +\rho \|\nabla\big(I_m (\mu_h\mathbf n) -\mu_h\mathbf n\big)\mathbf n\|_{L^2(\Omega^\Gamma_h)}^2 \\ & \lesssim \sum_{K\in\mathcal T_h^\mathcal Gamma}\|I_m (\mu_h\mathbf n) -\mu_h\mathbf n\|_{H^1(K\cap \mathcal Gamma)}^2 + \rho \|\nabla(I_m (\mu_h\mathbf n) -\mu_h\mathbf n)\mathbf n\|_{L^2(\Omega^\Gamma_h)}^2 \\ & \lesssim \sum_{K\in\mathcal T_h^\mathcal Gamma}\left\{(h^{-1}+\rho)\|I_m (\mu_h\mathbf n) -\mu_h\mathbf n\|_{H^1(K)}^2+h|I_m (\mu_h\mathbf n) -\mu_h\mathbf n|_{H^2(K)}^2\right\}\\ & \lesssim (h^{-1}+\rho)h^{2m}\sum_{K\in\mathcal T_h^\mathcal Gamma}|\mu_h\mathbf n|_{H^{m+1}(K)}^2 \lesssim (h^{-1}+\rho)h^{2m}\sum_{K\in\mathcal T_h^\mathcal Gamma}\|\mu_h\|_{H^{m}(K)}^2\\ & \lesssim (h^{-1}+\rho)\sum_{K\in\mathcal T_h^\mathcal Gamma}\|\mu_h\|_{L^2(K)}^2 \sim (1+\rho h)h^{-1}\|\mu_h\|_{L^2(\Omega^\Gamma_h)}^2\\ &\lesssim(1+\rho h)(\|\mu_h\|_{L^2(\mathcal Gamma)}^2+h\|\mathbf n\cdot\nabla\mu_h\|_{L^2(\Omega^\Gamma_h)}^2) \lesssim(1+\rho h)\|\mu_h\|_M^2 \lesssim \|\mu_h\|_M^2. \end{align*} From this and \eqref{eq2} we get \mathbf egin{equation} \left\langlebel{aux1} \|\mathbf v_h\|_U\lesssim \|\mu_h\|_M. 
\end{equation} With similar arguments we bound the interpolation terms in \eqref{eq1}: \mathbf egin{equation}\left\langlebel{aux} \mathbf egin{aligned} & \|I_m (\mu_h\mathbf n) -\mu_h\mathbf n\|_{L^2(\mathcal Gamma)}^2 + \rho \|\nabla \big(I_m (\mu_h\mathbf n) -\mu_h\mathbf n\big)\mathbf n\|_{L^2(\Omega^\Gamma_h)}^2 \\ & \lesssim \sum_{K\in\mathcal T_h^\mathcal Gamma}\left\{h^{-1}\|I_m (\mu_h\mathbf n) -\mu_h\mathbf n\|_{L^2(K)}^2+h\|I_m (\mu_h\mathbf n) -\mu_h\mathbf n\|_{H^1(K)}^2\right\} \\ & \quad + \rho \|\nabla \big(I_m (\mu_h\mathbf n) -\mu_h\mathbf n\big)\mathbf n\|_{L^2(\Omega^\Gamma_h)}^2 \\ & \lesssim \sum_{K\in\mathcal T_h^\mathcal Gamma}\left\{h^{-1}\|I_m (\mu_h\mathbf n) -\mu_h\mathbf n\|_{L^2(K)}^2+(h+\rho)\|I_m (\mu_h\mathbf n) -\mu_h\mathbf n\|_{H^1(K)}^2\right\} \\ &\lesssim(h+\rho)h^{2m} \sum_{K\in\mathcal T_h^\mathcal Gamma}|\mu_h\mathbf n|_{H^{m+1}(K)}^2\\ &\lesssim(h^2+\rho h)\|\mu_h\|_M^2 \lesssim \rho h \|\mu_h\|_M^2 \end{aligned} \end{equation} Combining this with the results in \eqref{eq1} and \eqref{aux1} completes the proof. \end{proof} \ \\[1ex] \mathbf egin{corollary} \left\langlebel{corolstab} Take $m\geq 1$. Consider $\rho=c_\alpha h^{1-\alpha}$, $\alpha \in[0,2]$, and assume $h \leq h_0 \leq 1$. Take $c_\alpha$ such that $0< c_\alpha < d_2^{-2} h_0^{\alpha-2}$ with $d_2$ as in \eqref{infsupest}. Then there exists a constant $d >0$, independent of $h$ and of how $\mathcal Gamma$ intersects the outer triangulation, such that: \mathbf egin{equation} \left\langlebel{infsupestA} \sup_{\mathbf v_h \in (V_h^m)^3} \frac{b(\mathbf v_h,\mu_h)}{\|\mathbf v_h\|_U} \geq d \|\mu_h\|_M \quad \text{for all}~~\mu_h \in V_h^m. \end{equation} \end{corollary} \mathbf egin{assumption} \left\langlebel{ass1} In the remainder we restrict to $\rho=c_\alpha h^{1-\alpha}$, $\alpha\in [0,2]$, with $c_\alpha$ as in Corollary~\ref{corolstab}. \end{assumption} \mathbf egin{corollary} \left\langlebel{corinfsup} For $b(\cdot,\cdot)$ the discrete inf-sup property holds for the pair of spaces $(\mathbf U_h,M_h)=\big( (V_h^k)^3, V_h^l\big)$ with $ 1 \leq l \leq k$. The constant in the discrete inf-sup estimate can be taken independent of $h$ and of how $\mathcal Gamma$ intersects the outer triangulation, but depends on $k$. \end{corollary} \ \\[1ex] From the fact that $A_h(\cdot,\cdot)$ defines a scalar product on $\mathbf U_h$ and the discrete inf-sup property of $b(\cdot,\cdot)$ on $\mathbf U_h \times M_h$ it follows that the discrete problem \eqref{discrete} has a unique solution $(\mathbf u_h,\left\langlembda_h)$. For the remainder of the error analysis we apply standard theory of saddle point problems. We introduce the bilinear form \mathbf egin{equation} \left\langlebel{defk} \mathbf A_h\big((\mathbf u,\left\langlembda),(\mathbf v,\mu)\big):= A_h(\mathbf u,\mathbf v)+ b(\mathbf v,\left\langlembda)+b(\mathbf u,\mu), \quad (\mathbf u,\left\langlembda), (\mathbf v,\mu) \in \mathbf U \times M. \end{equation} \mathbf egin{theorem} \left\langlebel{thm2} Let $(\mathbf u^\ast,\left\langlembda)\in \mathbf V_\ast \times L^2(\mathcal Gamma)$ be the solution of \eqref{weak1a} and assume that this solution is sufficiently smooth. Furthermore, let $(\mathbf u_h,\left\langlembda_h) \in (V_h^k)^3\times V_h^l$ be the solution of \eqref{discrete}. 
For $1 \leq l \leq k$ the following discretization error bound holds: \mathbf egin{equation}\left\langlebel{discrbound} \mathbf egin{split} & \|\mathbf u^\ast -\mathbf u_h\|_U + \|\left\langlembda - \left\langlembda_h \|_M \\ & \lesssim h^k\big( 1+ (\rho h)^\frac12\big)\|\mathbf u^\ast\|_{H^{k+1}(\mathcal Gamma)} + h^{l+1} \big(1+(\rho/h)^\frac12\big) \|\left\langlembda\|_{H^{l+1}(\mathcal Gamma)}. \end{split} \end{equation} \end{theorem} \mathbf egin{proof} Using the consistency property \eqref{consisdiscrete}, the continuity results derived in Lemma~\ref{lemcont}, ellipticity of $A_h(\cdot,\cdot)$ on $\mathbf U_h$ and the discrete inf-sup property for $b(\cdot,\cdot)$ we obtain, for arbitrary $(\mathbf w_h,\xi_h) \in \mathbf U_h \times M_h$: \mathbf egin{align*} & \big(\|\mathbf u_h- \mathbf w_h\|_U^2 +\|\left\langlembda_h- \xi_h\|_M^2\big)^\frac12 \lesssim \sup_{(\mathbf v_h,\mu_h)\in \mathbf U_h \times M_h} \frac{\mathbf A_h\big( (\mathbf u_h- \mathbf w_h,\left\langlembda_h- \xi_h),(\mathbf v_h,\mu_h)\big)}{(\|\mathbf v_h\|_U^2+\|\mu_h\|_M^2)^\frac12} \\ & = \sup_{(\mathbf v_h,\mu_h)\in \mathbf U_h \times M_h} \frac{ \mathbf A_h\big( (\mathbf u^\ast- \mathbf w_h,\left\langlembda- \xi_h),(\mathbf v_h,\mu_h)\big)}{(\|\mathbf v_h\|_U^2+\|\mu_h\|_M^2)^\frac12} \lesssim \|\mathbf u^\ast - \mathbf w_h\|_U + \|\left\langlembda - \xi_h\|_M. \end{align*} Hence, with a triangle inequality we get the Cea-type discretization error bound \mathbf egin{equation} \left\langlebel{discrerror} \|\mathbf u^\ast -\mathbf u_h\|_U + \|\left\langlembda - \left\langlembda_h \|_M \lesssim \inf_{(\mathbf v_h,\mu_h)\in \mathbf U_h \times M_h}\big(\|\mathbf u^\ast -\mathbf v_h\|_U + \|\left\langlembda - \mu_h \|_M\big). \end{equation} For $(\mathbf v_h,\mu_h)\in \mathbf U_h \times M_h$ we take the interpolants $\mathbf v_h=I_k\big((\mathbf u^\ast)^e\big)$, $\mu_h=I_l(\left\langlembda^e)$, and assume sufficient smoothness of $\mathbf u^\ast$ and hence of $\left\langlembda$, cf. \eqref{charlambda}. Then, thanks to the interpolation properties of polynomials and their traces, cf., e.g., \cite{reusken2015analysis}, we have the estimates: \mathbf egin{align*} & \|\mathbf u^\ast -\mathbf v_h\|_U + \|\left\langlembda - \mu_h \|_M \\ & \lesssim \|\mathbf u^\ast - \mathbf v_h\|_1+ \rho^\frac12 \|\mathbf u^\ast -\mathbf v_h\|_{H^1(\Omega^\Gamma_h)} + \|\left\langlembda - \mu_h\|_{L^2(\mathcal Gamma)} + \rho^\frac12 \|\left\langlembda - \mu_h\|_{H^1(\Omega^\Gamma_h)} \\ &\lesssim h^k \|\mathbf u^\ast\|_{H^{k+1}(\mathcal Gamma)}+ \rho^\frac12 h^k \|\mathbf u^\ast\|_{H^{k+1}(\Omega^\Gamma_h)} +h^{l+1} \|\left\langlembda\|_{H^{l+1}(\mathcal Gamma)} + \rho^\frac12 h^l\|\left\langlembda\|_{H^{l+1}(\Omega^\Gamma_h)}\\ & \lesssim h^k\big(1 +(\rho h)^\frac12\big)\|\mathbf u^\ast\|_{H^{k+1}(\mathcal Gamma)}+ h^{l+1}\big(1+ (\rho/h)^\frac12 \big)\|\left\langlembda\|_{H^{l+1}(\mathcal Gamma)}, \end{align*} which in combination with \eqref{discrerror} yields the desired result. \end{proof} \ \\[1ex] \mathbf egin{corollary} \left\langlebel{cor1} \rm Assume that the solution of \eqref{weak1a} is sufficiently smooth. We obtain for $l=k \geq 1$ the optimal error bound: \mathbf egin{equation} \left\langlebel{dis1} \|\mathbf u^\ast -\mathbf u_h\|_U + \|\left\langlembda - \left\langlembda_h \|_M \lesssim h^k (\|\mathbf u^\ast\|_{H^{k+1}(\mathcal Gamma)} +\|\left\langlembda\|_{H^{k+1}(\mathcal Gamma)}). 
\end{equation} For $l=k-1 \geq 1$ and with $\rho \sim h$ we obtain the optimal error bound: \mathbf egin{equation} \left\langlebel{dis2} \|\mathbf u^\ast -\mathbf u_h\|_U + \|\left\langlembda - \left\langlembda_h \|_M \lesssim h^k (\|\mathbf u^\ast\|_{H^{k+1}(\mathcal Gamma)} +\|\left\langlembda\|_{H^{k}(\mathcal Gamma)}). \end{equation} If $\left\langlembda=0$, cf. \eqref{charlambda}, the bound \eqref{dis2} holds for $l=k-1 \geq 1$ and for \emph{any} $\rho$ that fulfills Assumption~\ref{ass1}. Using \eqref{charlambda} we can bound the norm of $\left\langlembda$ in terms of the normal component of the data $g_N$ and $\mathbf u^\ast$: \mathbf egin{equation} \left\langlebel{replace} \|\left\langlembda\|_{H^{m}(\mathcal Gamma)} \lesssim \|g_N\|_{H^{m}(\mathcal Gamma)}+ \|\mathbf u^\ast\|_{H^{m+1}(\mathcal Gamma)}. \end{equation} Note that for the original problem \eqref{vectLaplace} the data $\mathbf{f}= \mathbf g$ satisfy $g_N=0$. \end{corollary} \section{$L^2$-error bound} \left\langlebel{s_L2} In this section we use a standard duality argument to derive an optimal $L^2$-norm discretization error bound, based on a regularity assumption for the problem \eqref{vectLaplace}. We note that in the analysis we need the assumption $\rho \sim h$. \mathbf egin{theorem} \left\langlebel{thmL2} Assume that \eqref{vectLaplace} satisfies the regularity estimate \mathbf egin{equation} \left\langlebel{regu} \|\mathbf u_T\|_{H^2(\mathcal Gamma)} \lesssim \|\mathbf bf\|_{L^2(\mathcal Gamma)} \quad \text{for all $\mathbf bf \in L^2(\mathcal Gamma)^3$ with $\mathbf bf\cdot\mathbf n = 0$}. \end{equation} Take $\rho \sim h$, $l=k \geq 1$ or $l=k-1 \geq 1$. The following error estimate holds: \mathbf egin{equation} \left\langlebel{L2error} \|\mathbf u^\ast - \mathbf P \mathbf u_h\|_{L^2(\mathcal Gamma)} \lesssim h^{k+1}\big( \|\mathbf u^\ast \|_{H^{k+1}(\mathcal Gamma)} + \|\left\langlembda\|_{H^{l+1}(\mathcal Gamma)}\big). \end{equation} \end{theorem} \mathbf egin{proof} We consider the problem \eqref{vectLaplace} with $\mathbf bf_e:=\mathbf P(\mathbf u^\ast-\mathbf u_h)= \mathbf u^\ast- \mathbf P \mathbf u_h$. We take $\mathbf g = \mathbf bf_e$ in \eqref{weak1a}, hence $g_N=0$, and the corresponding solution of \eqref{weak1a} is denoted by $(\mathbf w^\ast,\tau) \in \mathbf V_\ast \times L^2(\mathcal Gamma)$. The extensions $(\mathbf w^\ast)^e$, $\tau^e$ are also denoted by $\mathbf w^\ast$ and $\tau$. From the regularity assumption and $\tau =- {\rm tr}(E_s(\mathbf w^\ast)\mathbf H)$ it follows that $(\mathbf w^\ast,\tau) \in H^2(\mathcal Gamma)^3 \times H^1(\mathcal Gamma)$ and that \mathbf egin{equation} \left\langlebel{regu2} \|\mathbf w^\ast\|_{H^2(\mathcal Gamma)} +\|\tau \|_{H^1(\mathcal Gamma)} \lesssim \|\mathbf bf_e\|_{L^2(\mathcal Gamma)} \end{equation} holds. With $\mathbf A_h(\cdot,\cdot)$ as in \eqref{defk} we get the consistency result \[ \mathbf A_h\big((\mathbf w^\ast,\tau),(\mathbf v,\mu)\big)= (\mathbf bf_e,\mathbf v)_{L^2(\mathcal Gamma)} \quad \text{for all}~(\mathbf v,\mu)\in \mathbf U \times M. 
\] We take $(\mathbf v,\mu)=(\mathbf u^\ast-\mathbf u_h,\left\langlembda - \left\langlembda_h) \in \mathbf U \times M$ and using the symmetry of $\mathbf A_h(\cdot,\cdot)$ and the Galerkin orthogonality we obtain \mathbf egin{align*} \|\mathbf u^\ast-\mathbf P\mathbf u_h\|_{L^2(\mathcal Gamma)}^2 & = \|\mathbf P(\mathbf u^\ast-\mathbf u_h)\|_{L^2(\mathcal Gamma)}^2 = (\mathbf P(\mathbf u^\ast-\mathbf u_h),\mathbf u^\ast-\mathbf u_h)_{L^2(\mathcal Gamma)} \\ & = (\mathbf g,\mathbf u^\ast-\mathbf u_h)_{L^2(\mathcal Gamma)}= \mathbf A_h\big((\mathbf w^\ast,\tau),(\mathbf u^\ast-\mathbf u_h,\left\langlembda - \left\langlembda_h) \big) \\ & = k\big((\mathbf u^\ast-\mathbf u_h,\left\langlembda - \left\langlembda_h),(\mathbf w^\ast-\mathbf w_h,\tau - \tau_h) \big) \end{align*} for all $(\mathbf w_h,\tau_h) \in U_h \times M_h$. We use continuity of $\mathbf A_h(\cdot,\cdot)$ and the results derived in Corollary~\ref{cor1} and thus obtain \mathbf egin{equation}\left\langlebel{et1} \|\mathbf u^\ast-\mathbf P\mathbf u_h\|_{L^2(\mathcal Gamma)}^2 \lesssim h^k \big(\|\mathbf u^\ast\|_{H^{k+1}(\mathcal Gamma)} +\|\left\langlembda\|_{H^{l+1}(\mathcal Gamma)}\big)\big( \|\mathbf w^\ast - \mathbf w_h\|_U +\|\tau - \tau_h\|_M\big). \end{equation} We take $\mathbf w_h= I_1 (\mathbf w^\ast)$, $\tau_h = I_1 (\tau)$. Using \eqref{regu2} this yields \mathbf egin{align*} \|\mathbf w^\ast - \mathbf w_h\|_U & \lesssim \|\mathbf w^\ast - I_1(\mathbf w^\ast)\|_{H^1(\mathcal Gamma)} + \rho^\frac12 \|\mathbf w^\ast - I_1(\mathbf w^\ast) \|_{H^1(\Omega^\Gamma_h)} \\ &\lesssim h \|\mathbf w^\ast\|_{H^2(\mathcal Gamma)} + \rho^\frac12 h \|\mathbf w^\ast\|_{H^2(\Omega^\Gamma_h)} \\ & \lesssim h (1 +(\rho h)^\frac12)\|\mathbf w^\ast\|_{H^2(\mathcal Gamma)} \lesssim h \|\mathbf bf_e\|_{L^2(\mathcal Gamma)} \lesssim h \|\mathbf u^\ast - \mathbf P \mathbf u_h\|_{L^2(\mathcal Gamma)} \end{align*} and \mathbf egin{align*} \|\tau - \tau_h\|_M & \lesssim \|\tau - I_1 (\tau)\|_{L^2(\mathcal Gamma)} + \rho^\frac12 \|\tau - I_1 (\tau)\|_{H^1(\Omega^\Gamma_h)} \\ & \lesssim h \|\tau\|_{H^1(\mathcal Gamma)} + \rho^\frac12\|\tau\|_{H^1(\Omega^\Gamma_h)} \\ &\lesssim h (1 + (\rho/h)^\frac12)\|\tau\|_{H^1(\mathcal Gamma)} \lesssim h \|\mathbf bf_e\|_{L^2(\mathcal Gamma)} \lesssim h \|\mathbf u^\ast - \mathbf P \mathbf u_h\|_{L^2(\mathcal Gamma)} , \end{align*} where in the second last inequality we used $\rho \sim h$. Combining these estimates with the result in \eqref{et1} completes the proof. \end{proof} \ \\[1ex] Note that the term $ \|\left\langlembda\|_{H^{l+1}(\mathcal Gamma)}$ in \eqref{L2error} can be replaced by $ \|\mathbf u^\ast\|_{H^{l+2}(\mathcal Gamma)}$, cf.~\eqref{replace}. From the proof above it follows that we do not need the assumption $\rho \sim h$ for the special case $\left\langlembda=0$. We address the question how accurate the discrete solution $\mathbf u_h$ satisfies the tangential condition $\mathbf u\cdot\mathbf n=0$ on $\mathcal Gamma$. Due to the zero order term $\int_\mathcal Gamma\mathbf u\cdot\mathbf v\,ds$ in the definition of the bilinear form $a(\cdot,\cdot)$ in \eqref{defaalt}, the norm $\|\cdot\|_U$ in \eqref{discrbound} gives control of the normal components, and hence we get \[ \|\mathbf u_h\cdot\mathbf n\|_{L^2(\mathcal Gamma)}\le\|\mathbf u_h-\mathbf u^\ast\|_{U}. \] Therefore, under the assumptions of the Corollary~\ref{cor1} we obtain the estimate \mathbf egin{align} \left\langlebel{eq:normalError} \|\mathbf u_h\cdot\mathbf n\|_{L^2(\mathcal Gamma)}&\lesssim h^k. 
\end{align} Another bound on $\mathbf u_h\cdot\mathbf n$ can be derived from the second equation in \eqref{discrete}. Denote by $P_l$ the orthogonal projection onto $M_h$ with respect to the $(\cdot,\cdot)_M$ scalar product; then the second equation in \eqref{discrete} implies $P_l(\mathbf u_h\cdot\mathbf n)=0$. Therefore, we have \[ \|\mathbf u_h\cdot\mathbf n\|_{L^2(\Gamma)}\le\|\mathbf u_h\cdot\mathbf n\|_{M}=\|(I-P_l)\mathbf u_h\cdot\mathbf n\|_{M}=\inf_{\mu_h\in M_h}\|\mathbf u_h\cdot\mathbf n-\mu_h\|_{M}. \] \section{Condition number estimate} \label{s_cond} It is well-known \cite{OlshanskiiReusken08,burmanembedded} that for unfitted finite element methods there is an issue concerning algebraic stability, in the sense that the matrices that represent the discrete problem can have very poor conditioning due to small cuts in the geometry. Stabilization methods have been developed which remedy this stability problem, see, e.g., \cite{burmanembedded,olshanskii2016trace}. In this section we show that the `volume normal derivative' stabilizations that we use in both bilinear forms $A_h(\cdot,\cdot)$ and $b(\cdot,\cdot)$, with scaling as in \eqref{scalingrho}, remove any possible algebraic instability. More precisely, we show that the condition number of the stiffness matrix corresponding to the saddle point problem \eqref{discrete} is bounded by $ch^{-2}$, where the constant $c$ is independent of the position of the interface. Furthermore, we present an optimal Schur complement preconditioner. Let the integers $n>0$, $m>0$ be the numbers of active degrees of freedom in the spaces $\mathbf U_h$ and $M_h$, i.e., $n= {\rm dim}(\mathbf U_h)$, $m={\rm dim}(M_h)$, and let $P_h^U:\,\mathbb{R}^n\to \mathbf U_h$ and $P_h^M:\,\mathbb{R}^m\to M_h$ be the canonical mappings between the vectors of nodal values and finite element functions. Denote by $\left\langle\cdot,\cdot\right\rangle$ and $\|\cdot\|$ the Euclidean scalar product and norm. For matrices, $\|\cdot\|$ denotes the spectral norm. Now we introduce several matrices. Let $A, M_u\in\mathbb{R}^{n\times n}$, $B\in\mathbb{R}^{n\times m}$, $M_\lambda, S_M \in \mathbb{R}^{m \times m}$ be such that \[ \begin{split} \left\langle A \vec u, \vec v\right\rangle &= A_h(P_h^U \vec u, P_h^U \vec v),\quad \left\langle B \vec u,\vec \lambda \right\rangle= b(P_h^U \vec u,P_h^M \vec\lambda),\quad \\ \left\langle M_u \vec u, \vec v\right\rangle&= (P_h^U \vec u, P_h^U \vec v)_{L^2(\Omega^\Gamma_h)},\quad \left\langle M_\lambda\vec \lambda,\vec \mu \right\rangle= (P_h^M \vec \lambda,P_h^M \vec \mu)_{L^2(\Omega^\Gamma_h)},\quad \\ \left\langle S_M \vec \lambda , \vec \mu \right\rangle &= (P_h^M \vec \lambda,P_h^M \vec \mu)_{L^2(\Gamma)}+ \rho \big( \nabla (P_h^M \vec \lambda)\mathbf n, \nabla(P_h^M \vec \mu)\mathbf n\big)_{L^2(\Omega^\Gamma_h)} \end{split} \] for all $\vec u,\vec v\in\mathbb{R}^n,~~\vec \mu,\,\vec \lambda\in\mathbb{R}^m$. Note that the numerical properties of the mass matrices $M_u$ and $M_\lambda$ do not depend on how the surface $\Gamma$ intersects the domain $\Omega^\Gamma_h$. Since the family of background meshes is shape regular, these mass matrices have a spectral condition number that is uniformly bounded, independent of $h$ and of how $\Gamma$ intersects the background triangulation $\mathcal{T}_h$.
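To indicate why this holds (a standard finite element argument, stated here under the additional assumption that the background mesh is quasi-uniform), consider the piecewise linear nodal basis on a quasi-uniform tetrahedral mesh in $\mathbb{R}^3$; then \[ h^{3}\,\|\vec v\|^2 \lesssim \left\langle M_u \vec v,\vec v\right\rangle \lesssim h^{3}\,\|\vec v\|^2 \quad\text{for all}~\vec v\in\mathbb{R}^n, \] since the integrals in $(\cdot,\cdot)_{L^2(\Omega^\Gamma_h)}$ are taken over complete tetrahedra of $\Omega^\Gamma_h$ and are therefore not affected by how $\Gamma$ cuts these tetrahedra; the same reasoning applies to $M_\lambda$.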
Furthermore, for the symmetric positive definite matrix $S_M$ we have \[ \left\langle S_M \vec \lambda, \vec \lambda \right\rangle =\|P_h^M \vec \lambda\|_M^2 \quad \text{for all}~\vec \lambda\in\mathbb{R}^m, \] cf.~\eqref{defmuast}. We also introduce the system matrix and its Schur complement: \[ \mathcal A:=\left[\begin{matrix} A & B^T \\ B & 0 \end{matrix}\right],\quad S=B A^{-1} B^T. \] The algebraic system resulting from the finite element method \eqref{discrete} has the form \begin{equation}\label{SLAE} \mathcal A \vec x=\vec b,\quad\text{with some}~\vec x, \vec b\in\mathbb{R}^{n+m}. \end{equation} We will consider a block-diagonal preconditioner of the matrix $\mathcal A$. For this we first analyze preconditioners of the matrices $A$ and $S$. In the following lemma we use spectral inequalities for symmetric matrices. \begin{lemma} \label{lemprecond} There are strictly positive constants $\nu_{A,1}$, $\nu_{A,2}$, $\nu_{S,1}$, $\nu_{S,2}$, $\tilde \nu_{S,1}$, $\tilde \nu_{S,2}$, independent of $h$ and of how $\Gamma$ intersects $\mathcal{T}_h$ such that the following spectral inequalities hold: \begin{align} \nu_{A,1} h^{-1} M_u & \leq A \leq \nu_{A,2} h^{-3} M_u, \label{spec1} \\ \nu_{S,1} h^{-1} M_\lambda & \leq S \leq \nu_{S,2} h^{-3} M_\lambda, \label{spec2}\\ \tilde \nu_{S,1} S_M & \leq S \leq \tilde \nu_{S,2} S_M. \label{spec3} \end{align} \end{lemma} \begin{proof} Note that \begin{equation}\label{cond1} \frac{\left\langle A\vec v,\vec v\right\rangle}{\left\langle M_u \vec v, \vec v\right\rangle}= \frac{A_h(P_h^U \vec v, P_h^U \vec v)}{\|P_h^U \vec v\|^2_{L^2(\Omega^\Gamma_h)}}\quad\text{for all}~ \vec v\in\mathbb{R}^n. \end{equation} From \eqref{elliptic} we get \[ \nu_{A,1}h^{-1}\le\frac{A_h(\mathbf v_h, \mathbf v_h)}{\|\mathbf v_h\|^2_{L^2(\Omega^\Gamma_h)}} \quad \text{for all}~\mathbf v_h \in U_h. \] Using the local variant of \eqref{fund1B} and a FE inverse inequality we get \[ \|\mathbf v_h\|_1^2 \lesssim h^{-1}\|\mathbf v_h\|_{H^1(\Omega^\Gamma_h)}^2 + h \sum_{T \in \mathcal T_h^\Gamma} |\mathbf v_h|_{H^2(T)}^2 \lesssim h^{-1}\|\mathbf v_h\|_{H^1(\Omega^\Gamma_h)}^2 \quad\text{for all}~ \mathbf v_{h} \in \mathbf U_h, \] and thus, \[ \frac{A_h(\mathbf v_h, \mathbf v_h)}{\|\mathbf v_h\|^2_{L^2(\Omega^\Gamma_h)}} \leq c \frac{\|\mathbf v_h\|_1^2 +\rho \|\mathbf v_h\|_{H^1(\Omega^\Gamma_h)}^2}{\|\mathbf v_h\|^2_{L^2(\Omega^\Gamma_h)}} \leq c \frac{h^{-1} \|\mathbf v_h\|_{H^1(\Omega^\Gamma_h)}^2}{\|\mathbf v_h\|^2_{L^2(\Omega^\Gamma_h)}} \leq \nu_{A,2}h^{-3}~~\text{for all}~\mathbf v_h \in U_h, \] with a suitable constant $\nu_{A,2}$. Combination of these results yields the inequalities in \eqref{spec1}.
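Note that these two inequalities already imply a uniform bound for the mass-preconditioned $A$-block: all eigenvalues of $M_u^{-1}A$ lie in $[\nu_{A,1}h^{-1},\nu_{A,2}h^{-3}]$, and hence \[ \mathrm{cond}(M_u^{-1}A)\le \frac{\nu_{A,2}}{\nu_{A,1}}\,h^{-2}, \] independent of how $\Gamma$ intersects $\mathcal{T}_h$.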
For the Schur complement matrix ${S}=B A^{-1} B^T$ we have \begin{equation}\label{cond3} \begin{split} \frac{\left\langle S \vec \lambda,\vec \lambda\right\rangle}{\left\langle S_M\vec \lambda,\vec \lambda\right\rangle}& =\sup_{\vec u\in\mathbb{R}^n}\frac{\left\langle \vec u, A^{-\frac12}B^T \vec\lambda\right\rangle^2}{\|\vec u\|^2\|P_h^M \vec \lambda\|_M^2} =\sup_{\vec u\in\mathbb{R}^n}\frac{\left\langle B \vec u, \vec \lambda\right\rangle^2}{\| A^\frac12 \vec u\|^2\|P_h^M \vec \lambda\|_M^2}\\ &= \sup_{\mathbf u_h\in U_h}\frac{b(\mathbf u_h,\mu_h )^2}{\|\mathbf u_h\|_U^2\|\mu_h\|_M^2},\quad \mu_h:=P_h^M \vec \lambda. \end{split} \end{equation} Using the results in \eqref{cont2} and Corollary~\ref{corinfsup} we obtain the result in \eqref{spec3}. We also have \begin{equation}\label{cond4} \frac{\left\langle S_M \vec \lambda,\vec \lambda\right\rangle}{\left\langle M_\lambda\vec \lambda,\vec \lambda\right\rangle} = \frac{\|\mu_h\|_M^2}{\|\mu_h\|_{L^2(\Omega^\Gamma_h)}^2}, \quad \mu_h:=P_h^M \vec \lambda. \end{equation} Using $ h \lesssim \rho \lesssim h^{-1}$ and \eqref{fund1} we get \[ h^{-1} \|\mu_h\|_{L^2(\Omega^\Gamma_h)}^2 \lesssim \|\mu_h\|_M^2 \lesssim h^{-3} \|\mu_h\|_{L^2(\Omega^\Gamma_h)}^2 \quad \text{for all}~\mu_h \in M_h. \] Using this we see that the result in \eqref{spec2} follows from \eqref{spec3}. \end{proof} \ \\[1ex] We introduce a block diagonal preconditioner \[ Q:=\left[\begin{matrix} Q_A & 0 \\ 0 & Q_S \end{matrix}\right] \] of $\mathcal A$. \begin{corollary}\label{corr:pc} The following estimate holds with some $c>0$ independent of how $\Gamma$ cuts through the background mesh, \begin{equation}\label{Condest} \mathrm{cond}(\mathcal A)=\|\mathcal A\| \|\mathcal A^{-1}\|\le c\,h^{-2}. \end{equation} \end{corollary} \begin{proof} Take $Q_A:=M_u$, $Q_S:=M_\lambda$. Let $\xi$ be an eigenvalue of $Q^{-1}\mathcal A$. We apply the result in Lemma~5.14 from \cite{OlshTyrt} to derive from \eqref{spec1}, \eqref{spec2}: \begin{equation}\label{EigBound} \begin{split} \xi &\in[\nu_{S,1}h^{-1},\nu_{S,2}\,h^{-3}]\cup \left[\frac{\nu_{A,1}+\sqrt{\nu_{A,1}^2+4\nu_{A,1}\nu_{S,1}}}{2h},\frac{\nu_{A,2}+\sqrt{\nu_{A,2}^2+4\nu_{A,2}\nu_{S,2}}}{2h^3}\right]\\ &\cup \left[\frac{\nu_{A,2}-\sqrt{\nu_{A,2}^2+4\nu_{A,2}\nu_{S,2}}}{2h^3},\frac{\nu_{A,1}-\sqrt{\nu_{A,1}^2+4\nu_{A,1}\nu_{S,1}}}{2h}\right]. \end{split} \end{equation} From this spectral estimate and the fact that $Q$ has a uniformly bounded condition number we conclude that \eqref{Condest} holds. \end{proof} \ \\[2ex] Similar to \eqref{EigBound} we also get from \eqref{spec3} the following result. \begin{corollary} Let $Q_A \sim A$ be a uniformly spectrally equivalent preconditioner of $A$ and $Q_S:=S_M$. For the spectrum $\sigma(Q^{-1}\mathcal A)$ of the preconditioned matrix we have \[ \sigma(Q^{-1}\mathcal A) \subset \big([C_{-},c_{-}]\cup[c_+,C_+]\big), \] with some constants $C_{-} < c_{-} < 0 < c_+ < C_+$ independent of $h$ and the position of $\Gamma$. \end{corollary} \ \\[2ex] Note that the optimal Schur complement preconditioner $S_M$ is easy to implement since the terms occurring in $S_M$ are essentially the same as in the bilinear form $b(\cdot,\cdot)$.
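As a consequence of the last corollary, standard MINRES convergence theory implies that with $Q_A \sim A$ and $Q_S=S_M$ the number of preconditioned MINRES iterations needed for a fixed residual reduction is bounded independently of $h$ and of the position of $\Gamma$ in the background mesh; this behavior is confirmed by the experiments in Section~\ref{sectExp}.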
Furthermore, for $\rho \sim h$ we have the spectral equivalence \begin{equation}\label{equiv2} S_M \sim h^{-1} M_\lambda, \end{equation} which follows from \eqref{fund1}. Hence, systems with the matrix $S_M$ are then easy to solve. \section{Numerical experiments} \label{sectExp} In this section we present results of a few numerical experiments. We first consider the vector-Laplace problem \eqref{strongform} on the unit sphere. We use a standard trace-FEM approach in the sense that the exact surface is approximated by a piecewise planar one. Due to this geometric error (${\rm dist}(\Gamma_h,\Gamma) \lesssim h^2$) the discretization accuracy is limited to second order and therefore we consider the discretization \eqref{discrete} with piecewise linears both for the velocity and the Lagrange multiplier. Higher order surface approximations with the technique introduced in \cite{grande2017higher} will be treated in a forthcoming paper. To be able to use a higher order finite element space, and in particular the pair $(V_h^2)^3$-$V_h^1$ for velocity and Lagrange multiplier (which is LBB stable, cf. Corollary~\ref{corinfsup}), we also consider an example in which the surface is a bounded plane which is not aligned with the coordinate axes. For both cases we present results for discretization errors and their dependence on the stabilization parameter $\rho$. For the problem on the unit sphere we also illustrate the behavior of a preconditioned MINRES solver. \subsection{Vector-Laplace problem on the unit sphere} For $\Gamma$ we take the unit sphere, characterized as the zero level of the level set function $ \phi(\mathbf x) = \|\mathbf x\|_2 -1$, where $\|\cdot\|_2$ denotes the Euclidean norm on $\mathbb R^3$. We consider the vector-Laplace problem \eqref{strongform} with the prescribed solution $\mathbf u^*=\mathbf P(-x_3^2,x_2,x_1)^T \in\mathbf V_T$. The induced tangential right-hand side is denoted by $\mathbf{f}$ and we use the saddle point formulation \eqref{weak1a} with $\mathbf g = \mathbf{f}$. The associated Lagrange multiplier $\lambda$ is then given by \eqref{charlambda} with $g_N=0$. The sphere is embedded in an outer domain $\Omega=[-5/3,5/3]^3$. The triangulation $\mathcal T_{h_\ell}$ of $\Omega$ consists of $n_\ell^3$ sub-cubes, where each of the sub-cubes is further refined into 6 tetrahedra. Here $\ell\in\Bbb{N}$ denotes the level of refinement yielding a mesh size $h_\ell= \frac{10/3}{n_\ell}$ with $n_\ell= 2^{\ell+1}$. On the outer mesh $\mathcal T_h^\Gamma$ we define the nodal interpolation operators $I_k:C(\Omega_h^\Gamma) \to V_h^k$. For the TraceFEM, instead of the exact surface $\Gamma$, we consider an approximated surface $\Gamma_h= \{\mathbf x\in\Omega_h^\Gamma:~ (I_1\phi)(\mathbf x)=0\}$, consisting of triangular planar patches. This induces a geometry error with $\mathrm{dist}(\Gamma,\Gamma_h)\lesssim h^2$. The normals $\mathbf n$ are approximated by $\mathbf n_h:=\frac{\nabla\phi_h}{\|\nabla\phi_h\|_2}$, where $\phi_h$ is the piecewise quadratic interpolant $\phi_h:= I_2(\phi)$. For the pair of TraceFE spaces $\big((V_h^k)^3,\, V_h^l\big)$ in the saddle point formulation \eqref{discrete}, with $\Gamma$ replaced by $\Gamma_h$, we use piecewise linear finite elements, i.e., $k=l=1$, and we first choose $\rho=\tilde\rho=h$ for the stabilization parameters.
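For concreteness, the refinement levels used below correspond to $n_\ell=4,8,16,32,64$ and mesh sizes $h_\ell=\frac{10/3}{n_\ell}\approx 0.83,\,0.42,\,0.21,\,0.10,\,0.05$ for $\ell=1,\ldots,5$.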
The numerical solution on refinement level $\ell=4$ is illustrated in Figure~\ref{fig:solSphere}. The (very shape-irregular) surface triangulation $\Gamma_h$ is illustrated in Figure~\ref{fig:gridSphere}. \begin{figure}[ht!] \begin{minipage}{0.45\textwidth} \includegraphics[width=\textwidth]{vecLaplaceSol_corr.png} \caption{Numerical solution on the sphere for $k=l=1$, $\rho=\tilde\rho=h$ and refinement level $\ell=4$.} \label{fig:solSphere} \end{minipage} \begin{minipage}{0.45\textwidth} \includegraphics[width=\textwidth]{Sphere_grid.png} \caption{Detail of the surface triangulation $\Gamma_h$ of the sphere for refinement level $\ell=4$.} \label{fig:gridSphere} \end{minipage} \end{figure} Figure~\ref{fig:convSphere1} shows different norms of the errors for different refinement levels $\ell$. \begin{figure}[ht!] \begin{tikzpicture} \begin{semilogyaxis}[ xlabel={Refinement level}, ylabel={Error}, ymin=5E-4, ymax=200, legend style={ cells={anchor=west}, legend pos=outer north east} ] \addplot table[x=level, y=U] {sphereP1-h.dat}; \addplot table[x=level, y=L2P] {sphereP1-h.dat}; \addplot table[x=level, y=u_N] {sphereP1-h.dat}; \addplot table[x=level, y=M] {sphereP1-h.dat}; \addplot[dashed,line width=0.75pt] coordinates { (1,8) (2,8*0.5) (3,8*0.25) (4,8*0.125) (5,8*0.0625) }; \addplot[dotted,line width=0.75pt] coordinates { (1,1.5) (2,1.5*0.5*0.5) (3,1.5*0.25*0.25) (4,1.5*0.125*0.125) (5,1.5*0.0625*0.0625) }; \legend{$\|\mathbf u^* - \mathbf u_h\|_U$, $\|\mathbf u^* -\mathbf P\mathbf u_h\|_{L^2(\Gamma_h)}$, $\|\mathbf u_h\cdot\mathbf n_h\|_{L^2(\Gamma_h)}$, $\|\lambda - \lambda_h\|_M$, $\mathcal{O}(h)$, $\mathcal{O}(h^2)$} \end{semilogyaxis} \end{tikzpicture} \caption{Discretization errors for the sphere and $k=l=1$ with $\rho=\tilde\rho=h$.} \label{fig:convSphere1} \end{figure} Optimal convergence orders are achieved for $\|\mathbf u^*-\mathbf u_h\|_U$ and $\|\mathbf u^* -\mathbf P\mathbf u_h\|_{L^2(\Gamma_h)}$, cf. the theoretical results in Theorems~\ref{thm2} and \ref{thmL2} (note that in the theoretical analysis we do not treat geometric errors). For $\|\lambda-\lambda_h\|_M$ we observe a convergence order of about 1.5, which is better than predicted by the theoretical results. For the normal component of $\mathbf u_h$ we observe $\|\mathbf u_h\cdot\mathbf n_h\|_{L^2(\Gamma_h)} \sim h^2$, i.e., faster convergence than predicted by the bound \eqref{eq:normalError}. From further experiments (results are not shown) it follows that the same convergence orders are obtained for the choice $\rho=\tilde\rho=1$, but then the errors are slightly increased. The experiments are repeated for a different choice of the stabilization parameters, taking the other limit case $\rho=\tilde\rho=1/h$. The convergence results are presented in Figure~\ref{fig:convSphere2}. \begin{figure}[ht!]
\begin{tikzpicture} \begin{semilogyaxis}[ xlabel={Refinement level}, ylabel={Error}, ymin=5E-4, ymax=200, legend style={ cells={anchor=west}, legend pos=outer north east} ] \addplot table[x=level, y=U] {sphereP1-hinv.dat}; \addplot table[x=level, y=L2P] {sphereP1-hinv.dat}; \addplot table[x=level, y=u_N] {sphereP1-hinv.dat}; \addplot table[x=level, y=M] {sphereP1-hinv.dat}; \addplot[dashed,line width=0.75pt] coordinates { (1,3) (2,3*0.5) (3,3*0.25) (4,3*0.125) (5,3*0.0625) }; \addplot[dotted,line width=0.75pt] coordinates { (1,0.4) (2,0.4*0.5^2) (3,0.4*0.25^2) (4,0.4*0.125^2) (5,0.4*0.0625^2) }; \legend{$\|\mathbf u^* - \mathbf u_h\|_U$, $\|\mathbf u^* -\mathbf P\mathbf u_h\|_{L^2(\Gamma_h)}$, $\|\mathbf u_h\cdot\mathbf n_h\|_{L^2(\Gamma_h)}$, $\|\lambda - \lambda_h\|_M$, $\mathcal{O}(h)$, $\mathcal{O}(h^{2})$} \end{semilogyaxis} \end{tikzpicture} \caption{Discretization errors for the sphere and $k=l=1$ with $\rho=\tilde\rho=1/h$.} \label{fig:convSphere2} \end{figure} Compared to Figure~\ref{fig:convSphere1}, the reported errors are all larger, especially those for $\|\lambda-\lambda_h\|_M$ and $\|\mathbf u_h\cdot\mathbf n_h\|_{L^2(\Gamma_h)}$, but the convergence orders still behave in an optimal way, as expected from the error analysis. Only first order convergence is achieved for $\|\lambda-\lambda_h\|_M$, compared to order 1.5 for the choice $\rho=\tilde\rho=h$. For $\|\mathbf u_h\cdot\mathbf n_h\|_{L^2(\Gamma_h)}$ the convergence order drops below 2. As noted above, due to the geometry error we do not consider the higher order case $k>1$ here. Next we study the performance of the iterative solver and the preconditioners. To solve the linear saddle point system \eqref{SLAE} a MINRES solver is applied using the block preconditioner $Q$ presented in Section~\ref{s_cond}. For the application of $\vec u=Q_A^{-1}\vec b$, the system $A\vec u = \vec b$ is iteratively solved using a standard SSOR-preconditioned CG solver, with a tolerance such that the initial residual is reduced by a factor of $10^4$. The same strategy is used for the Schur complement, i.e., for the application of $ \vec v=Q_S^{-1}\vec c$, the system $S_M\vec v = \vec c$ is iteratively solved using a standard SSOR-preconditioned CG solver with the same tolerance. We note that these rather tight inner tolerances are not optimal for the overall efficiency of the preconditioned MINRES solver, but they are appropriate to demonstrate important properties of the solver. Furthermore, in practice it is probably more efficient to replace the SSOR preconditioner for the $A$-block by a more powerful one, e.g., an ILU or a multigrid method. Iteration numbers used for the solution of the linear systems are reported in Tables~\ref{tab:iter} and \ref{tab:iter2}. Here $N$ denotes the number of MINRES iterations needed to reduce the residual by a factor of $10^6$ (initial vector $\vec x_0=0$). $N_A$ and $N_S$ denote average PCG iteration numbers (per MINRES iteration) needed to apply the preconditioners $Q_A$ and $Q_S$, respectively. \begin{table}[ht!]
\begin{minipage}{0.45\textwidth} \centering \begin{tabular}{l r r r} \toprule level $\ell$ & $N_A$ & $N_S$ & $N$ \\ \midrule 1 & 8.2 & 6.7 & 45\\ 2 & 17.4 & 7.0 & 29\\ 3 & 35.4 & 7.3 & 29\\ 4 & 69.0 & 8.4 & 28\\ 5 & 138.6 & 9.1 & 28\\ \bottomrule \end{tabular} \caption{Average PCG iteration numbers ($N_A, N_S$) for application of $Q_A^{-1}, Q_S^{-1}$, respectively, and MINRES iteration numbers ($N$) for different refinement levels and $\rho=\tilde\rho=h$.} \label{tab:iter} \end{minipage} \begin{minipage}{0.45\textwidth} \centering \begin{tabular}{l r r r} \toprule level $\ell$ & $N_A$ & $N_S$ & $N$ \\ \midrule 1 & 8.0 & 7.5 & 64\\ 2 & 16.7 & 13.5 & 38\\ 3 & 33.0 & 23.3 & 34\\ 4 & 64.2 & 46.1 & 32\\ 5 & 126.4 & 89.9 & 29\\ \bottomrule \end{tabular} \caption{Average PCG iteration numbers ($N_A, N_S$) for application of $Q_A^{-1}, Q_S^{-1}$, respectively, and MINRES iteration numbers ($N$) for different refinement levels and $\rho=\tilde\rho=1/h$.} \label{tab:iter2} \end{minipage} \end{table} We first discuss the case $\rho=\tilde\rho=h$, cf. Table~\ref{tab:iter}. The number of MINRES iterations does not grow if the refinement level $\ell$ increases, illustrating the optimality of the block preconditioner, cf. Corollary~\ref{corr:pc}. As expected, cf. \eqref{equiv2}, the numbers $N_S$ are essentially independent of $\ell$. The numbers $N_A$ are doubled for each refinement and show a behavior very similar to the usual behavior of SSOR-CG applied to a standard 2D Poisson problem (discretized by linear finite elements). In this paper we do not study the topic of more efficient preconditioners for $Q_A$. For the case $\rho=\tilde\rho=1/h$, cf. Table~\ref{tab:iter2}, the average iteration number $N_S$ of the Schur complement preconditioner shows a behavior similar to that of $N_A$, i.e., a doubling of the iteration numbers for each refinement step. These iteration numbers $N_S$ indicate $\mathrm{cond}(S_M)\sim h^{-2}$. Note that \eqref{equiv2} was derived under the scaling assumption $\rho=\tilde\rho=h$. We also observe that (slightly) more MINRES iterations $N$ are needed compared to the case $\rho=\tilde\rho=h$. In view of both the discretization errors and the performance of the iterative solver observed in these numerical experiments, the parameter choice $\rho=\tilde\rho=h$ is recommended. \subsection{Vector-Laplace problem on a planar surface} \label{sec:numExpPlane} Let $G$ be the plane defined as the zero level of the level set function $ \phi(\mathbf x) = -x_3+4x_1-\frac{13}{3}$ and let $\Omega \subset \Bbb{R}^3$ be a bounded outer domain that is intersected by $G$. We take $\Gamma:=G \cap \Omega$. We consider the vector-Laplace problem \eqref{strongform} with the prescribed solution \begin{equation*} \mathbf u^*(\mathbf{x}) = (\frac{4}{17} \sin(\pi x_2)\sin(\pi x_3), \sin(\pi x_2)\sin(\pi x_3), \frac{16}{17}\sin(\pi x_2)\sin(\pi x_3))^T \in \mathbf V_T. \end{equation*} The induced tangential right-hand side is denoted by $\mathbf{f}$ and we use the saddle point formulation \eqref{weak1a} with $\mathbf g = \mathbf{f}$. The associated Lagrange multiplier is $\lambda=0$. Concerning the choice of the outer domain $\Omega$, there is an issue with Dirichlet boundary conditions on $\partial\Omega \cap G=\partial \Gamma$. It turns out that we obtain significantly larger discretization errors if the parts of $\partial \Omega$ that are intersected by $G$ are not perpendicular to $G$. This effect is not yet understood.
Therefore in this experiment we choose $\Omega$ as the parallelepiped with origin $(-2,-2,-65/48)$ and spanned by the vectors $(4,0,-1)^T$, $(0,4,0)^T$ and $(0,0,4.25)^T$. Then the parts of $\partial \Omega$ that are intersected by $G$ are perpendicular to $G$. The construction is such that on $\partial\Omega \cap G=\partial \Gamma$ we can use homogeneous Dirichlet boundary conditions. The triangulation $\mathcal T_{h_\ell}$ of $\Omega$ consists of $n_\ell^3$ sub-parallelepipeds, where each of the sub-parallelepipeds is further refined into 6 tetrahedra. Here $\ell\in\Bbb{N}$ denotes the level of refinement yielding a mesh size $h_\ell= \frac{4}{n_\ell}$ with $n_\ell= 2^{\ell+2}$. Note that for this specific example, the approximation of the surface $\Gamma$ and the normals $\mathbf n$ is exact, i.e., $\Gamma_h=\Gamma$ and $\mathbf n_h=\mathbf n$. The numerical solution on refinement level $\ell=3$ is illustrated in Figure~\ref{fig:solPlane}. The (very shape-irregular) surface triangulation is illustrated in Figure~\ref{fig:gridPlane}. \begin{figure}[ht!] \begin{minipage}{0.45\textwidth} \includegraphics[width=\textwidth]{PlaneSol_3_improved_6.png} \caption{Numerical solution on the plane for $k=l=1$, $\rho=\tilde\rho=h$ and refinement level $\ell=3$.} \label{fig:solPlane} \end{minipage} \begin{minipage}{0.45\textwidth} \includegraphics[width=\textwidth]{Plane_grid_2.png} \caption{Surface triangulation for refinement level $\ell=1$.} \label{fig:gridPlane} \end{minipage} \end{figure} Note that $\lambda=\lambda_h=0$ due to $\mathbf{H}=0$, cf. \eqref{charlambda}, so in the following we only consider errors in $\mathbf u_h$. In view of the recommendation at the end of the previous subsection, we only present results with the stabilization parameter $\rho = \tilde{\rho} = h$. Figures \ref{fig:convPlane1} and \ref{fig:convPlane2} show the errors $\|\mathbf u^*-\mathbf u_h\|_U$ and $\|\mathbf u^* - \mathbf P\mathbf u_h\|_{L^2(\Gamma)}$ for the cases $k=l=1$ and $k=2,\, l=1$, respectively. In both cases, optimal convergence orders $\|\mathbf u^*-\mathbf u_h\|_U \sim h^k$ and $\|\mathbf u^* -\mathbf P\mathbf u_h\|_{L^2(\Gamma)} \sim h^{k+1}$ are achieved. In this special planar setting we have $\mathbf u_h\cdot\mathbf n_h=0$. \begin{figure}[ht!]
\begin{tikzpicture} \begin{semilogyaxis}[ xlabel={Refinement level}, ylabel={Error}, ymin=5E-4, ymax=200, legend style={ cells={anchor=west}, legend pos=outer north east} ] \addplot table[x=level, y=U] {PlaneP1.dat}; \addplot table[x=level, y=L2] {PlaneP1.dat}; \addplot[dashed,line width=0.75pt] coordinates { (1,8) (2,8*0.5) (3,8*0.25) (4,8*0.125) (5,8*0.0625) }; \addplot[dotted,line width=0.75pt] coordinates { (1,2) (2,2*0.5*0.5) (3,2*0.25*0.25) (4,2*0.125*0.125) (5,2*0.0625*0.0625) }; \legend{$\|\mathbf u^* - \mathbf u_h\|_U$, $\|\mathbf u^* -\mathbf P\mathbf u_h\|_{L^2(\Gamma)}$, $\mathcal{O}(h)$, $\mathcal{O}(h^2)$} \end{semilogyaxis} \end{tikzpicture} \caption{Discretization errors for planar $\Gamma$, $k=l=1$ and $\rho=\tilde\rho=h$.} \label{fig:convPlane1} \end{figure} \begin{figure} \begin{tikzpicture} \begin{semilogyaxis}[ xlabel={Refinement level}, ylabel={Error}, ymin=5E-6, ymax=100, legend style={ cells={anchor=west}, legend pos=outer north east} ] \addplot table[x=level, y=U] {PlaneP2.dat}; \addplot table[x=level, y=L2] {PlaneP2.dat}; \addplot[dashed,line width=0.75pt] coordinates { (1,1.5) (2,1.5*0.5*0.5) (3,1.5*0.25*0.25) (4,1.5*0.125*0.125) (5,1.5*0.0625*0.0625) }; \addplot[dotted,line width=0.75pt] coordinates { (1,0.3) (2,0.3*0.5*0.5*0.5) (3,0.3*0.25*0.25*0.25) (4,0.3*0.125*0.125*0.125) (5,0.3*0.0625*0.0625*0.0625) }; \legend{$\|\mathbf u^* - \mathbf u_h\|_U$, $\|\mathbf u^* -\mathbf P\mathbf u_h\|_{L^2(\Gamma)}$, $\mathcal{O}(h^2)$, $\mathcal{O}(h^{3})$} \end{semilogyaxis} \end{tikzpicture} \caption{Discretization errors for planar $\Gamma$, $k=2$, $l=1$ and $\rho=\tilde\rho=h$.} \label{fig:convPlane2} \end{figure} \end{document}
\begin{document} \title{Hecke operators in Morava $E$-theories of different heights} \author{Takeshi Torii} \address{Department of Mathematics, Okayama University, Okayama 700--8530, Japan} \email{[email protected]} \thanks{This work was partially supported by JSPS KAKENHI Grant Numbers 22540087 and 25400092.} \subjclass[2010]{Primary 55N22; Secondary 14L05, 55S25} \keywords{Morava $E$-theory, Hecke operator, $p$-divisible group, level structure} \date{October 12, 2022\ ({\tt version~2.1})} \begin{abstract} There is a natural action of a kind of Hecke algebra $\mathcal{H}_n$ on the $n$th Morava $E$-theory of spaces. We construct Hecke operators in an amalgamated cohomology theory of the $n$th and the $(n+1)$st Morava $E$-theories. These operations are natural extensions of the Hecke operators in the $(n+1)$st Morava $E$-theory, and they induce an action of the Hecke algebra $\mathcal{H}_{n+1}$ on the $n$th Morava $E$-theory of spaces. We study a relationship between the actions of the Hecke algebras $\mathcal{H}_n$ and $\mathcal{H}_{n+1}$ on the $n$th Morava $E$-theory, and show that the $\mathcal{H}_{n+1}$-module structure is obtained from the $\mathcal{H}_n$-module structure by the restriction along an algebra homomorphism from $\mathcal{H}_{n+1}$ to $\mathcal{H}_n$. \end{abstract} \maketitle \section{Introduction} The stable homotopy category localized at a prime number $p$ has a filtration called the chromatic filtration. Subquotients of the filtration are said to be monochromatic categories. It is known that the $n$th monochromatic category is equivalent to the $K(n)$-local category, where $K(n)$ is the $n$th Morava $K$-theory. Thus, the stable homotopy category localized at $p$ can be considered to be constructed from various $K(n)$-local categories \cite{Morava, MRW, Ravenel-green, DHS, Hopkins-Smith, Ravenel-red}. Therefore, it is important to understand the relationship between $K(n)$-local categories. The study of relationships between $K(n)$-local categories of different heights is called transchromatic homotopy theory, and it has been pursued by many authors. Greenlees-Sadofsky~\cite{Greenlees-Sadofsky} showed that the Tate cohomology of $K(n)$ is trivial for any finite group, which implies that the Tate cohomology of a complex-oriented and $v_n$-periodic spectrum is $v_{n-1}$-periodic. Hovey-Sadofsky~\cite{Hovey-Sadofsky} generalized this result and showed that the Tate cohomology of an $E(n)$-local spectrum is $E(n-1)$-local, where $E(n)$ is the $n$th Johnson-Wilson spectrum. Ando-Morava-Sadofsky~\cite{AMS} constructed a splitting of a completion of the $\mathbb{Z}/p$-Tate cohomology of $E(n)$ into a wedge of $E(n-1)$. In \cite{Torii0} we generalized this result and constructed a splitting of a completion of the $(\mathbb{Z}/p)^k$-geometric fixed points spectrum of $E_n$ into a wedge of $E_{n-k}$, where $E_n$ is the $n$th Morava $E$-theory. These works are related to the chromatic splitting conjecture (cf.~\cite{Hovey}). Hopkins-Kuhn-Ravenel character theory~\cite{HKR1, HKR2} is another direction in the study of relationships between $K(n)$-local categories. It describes the $E_n$-cohomology of classifying spaces of finite groups tensored with the field $\mathbb{Q}$ of rational numbers in terms of generalized group characters. This can be considered as a result on the relationship between the $K(n)$-local category and the $K(0)$-local category.
In \cite{AMS, Torii1, Torii3} we constructed a generalization of the Chern character, which is a multiplicative natural transformation of cohomology theories from $E_n$ to an extension of $E_{n-1}$. In \cite{Torii6} we studied the generalized Chern character of classifying spaces of finite groups by using Hopkins-Kuhn-Ravenel character theory. Stapleton~\cite{Stapleton1,Stapleton2} generalized Hopkins-Kuhn-Ravenel character theory, and gave a description of the Borel equivariant $E_n$-cohomology tensored with an extension of $L_{K(t)}E_{n}$ for $0\le t<n$, where $L_{K(t)}$ is the Bousfield localization functor with respect to $K(t)$, by using the language of $p$-divisible groups. This work can be considered as a study of the relationship between the $K(n)$-local category and the $K(t)$-local category for $0\le t< n$. Lurie~\cite{Lurie} further generalized this work and put it in a more general framework by introducing a notion of tempered cohomology theories. The $K(n)$-local category can be studied via the $n$th Morava $E$-theory $E_n$ together with the action of the extended Morava stabilizer group $\mathbb{G}_n$. For example, $E_n$-based Adams spectral sequences strongly converge to the homotopy groups of $K(n)$-local spectra. Under some conditions, the $E_2$-pages of the spectral sequences are described in terms of the derived functor ${\rm Ext}$ in the category of twisted continuous $\mathbb{G}_n$-modules over $\pi_*(E_n)$. Thus, we would like to understand the relationship between the $n$th Morava $E$-theory $E_n$ with $\mathbb{G}_n$-action and the $(n+1)$st Morava $E$-theory $E_{n+1}$ with $\mathbb{G}_{n+1}$-action. For this purpose, we constructed a generalization of the classical Chern character \[ {\rm ch}: E_{n+1}\longrightarrow \mathbb{B}_n \] in \cite{Torii1, Torii3}, which is a map of ring spectra from the chromatic height $(n+1)$ complex orientable spectrum $E_{n+1}$ to a chromatic height $n$ complex orientable spectrum $\mathbb{B}_n$. This type of generalization of the classical Chern character was first considered by Ando-Morava-Sadofsky~\cite{AMS}. In \cite{Torii5} we showed that the generalized Chern character ${\rm ch}$ can be lifted to a map of $E_{\infty}$-ring spectra, and the spectrum $\mathbb{B}_n$ can be identified with $L_{K(n)}(E_n\wedge E_{n+1})$. Under this identification, the generalized Chern character ${\rm ch}: E_{n+1}\to \mathbb{B}_n$ is the inclusion into the second factor of the smash product in $\mathbb{B}_n$. We define a map of $E_{\infty}$-ring spectra \[ {\rm inc}: E_n \longrightarrow \mathbb{B}_n \] to be the inclusion into the first factor of the smash product in $\mathbb{B}_n$. Hecke operators are defined by using power operations. In \cite{Goerss-Hopkins} Goerss-Hopkins showed that Morava $E$-theory supports an $E_{\infty}$-ring spectrum structure which is unique up to homotopy. The $E_{\infty}$-ring structure on $E_n$ gives rise to power operations, and hence Hecke operators. See, for example, Ando-Hopkins-Strickland~\cite{AHS}, Rezk~\cite{Rezk2}, and Ganter~\cite{Ganter}. But Hecke operators in Morava $E$-theory were first defined by Ando in \cite{Ando1, Ando2}, where he used power operations in complex cobordism to produce power operations in Morava $E$-theory. In the end there is a natural action of a kind of Hecke algebra $\mathcal{H}_n$ on $E_n^0(X)$ for any space $X$. In this paper we consider a relationship between the Hecke operators in $E_n$ and those in $E_{n+1}$.
For this purpose, we construct Hecke operators in $\mathbb{B}_n$-theory \[ \widetilde{\rm T}_M^{\mathbb{B}}: \mathbb{B}_n^0(X)\longrightarrow \mathbb{B}_n^0(X)\] for any space $X$ and each finite abelian $p$-group $M$ with $p$-rank $\le n+1$. The operations $\widetilde{\rm T}^{\mathbb{B}}_M$ are natural extensions of the Hecke operators in $E_{n+1}$. The following is our first main theorem. \begin{theorem} [{Theorem~\ref{thm:Hn+1-module-structure-on-BnX}}] \label{Main-Theorem-1} Assigning the Hecke operator $\widetilde{\rm T}_M^{\mathbb{B}}$ to a finite abelian $p$-group $M$ with $p$-rank $\le n+1$, there is a natural action of the Hecke algebra $\mathcal{H}_{n+1}$ on $\mathbb{B}_n^0(X)$ for any space $X$ such that ${\rm ch}: E_{n+1}^0(X)\to \mathbb{B}_n^0(X)$ is a map of $\mathcal{H}_{n+1}$-modules. \end{theorem} Let $\mathbb{G}_{n+1}$ be the $(n+1)$st extended Morava stabilizer group. In \cite{Torii3} we showed that there is a natural action of $\mathbb{G}_{n+1}$ on $\mathbb{B}_n^0(X)$ and that the map ${\rm inc}$ induces a natural isomorphism \[ E_n^0(X)\stackrel{\cong}{\to} (\mathbb{B}_n^0(X))^{\mathbb{G}_{n+1}} \] for any spectrum $X$, where the right hand side is the $\mathbb{G}_{n+1}$-invariant submodule of $\mathbb{B}_n^0(X)$. We shall show that the action of $\mathcal{H}_{n+1}$ on $\mathbb{B}_n^0(X)$ commutes with the action of $\mathbb{G}_{n+1}$. This implies the following theorem, which is our second main theorem. \begin{theorem} [{Theorem~\ref{thm:natural-Hn+1-module-strucrure-on-EnX}}] \label{Main-Theorem-2} There is a natural action of the Hecke algebra $\mathcal{H}_{n+1}$ on $E_n^0(X)$ for any space $X$ such that ${\rm inc}: E_n^0(X)\to \mathbb{B}_n^0(X)$ is a map of $\mathcal{H}_{n+1}$-modules. \end{theorem} By these two theorems, we have the $\mathcal{H}_n$-module structure and the $\mathcal{H}_{n+1}$-module structure on $E_n^0(X)$, and we shall compare these two structures. For this purpose, we shall construct an algebra homomorphism \[ \omega: \mathcal{H}_{n+1}\longrightarrow \mathcal{H}_n,\] and prove the following theorem, which is our third main theorem. \begin{theorem}[{Theorem~\ref{thm:comparison-Hn-Hn+1-on-EnX}}] \label{Main-Theorem-3} The $\mathcal{H}_{n+1}$-module structure on $E_n^0(X)$ is obtained from the $\mathcal{H}_n$-module structure by the restriction along $\omega$. \end{theorem} In \cite{Torii8} we studied a relation between the category of modules over the algebra of power operations in the $n$th Morava $E$-theory and that in the $(n+1)$st Morava $E$-theory by the same technique in this paper. \begin{overview}\rm In \S\ref{section:MoravaE} we first review the construction of Hecke operators in Morava $E$-theory. We also study a natural action of the extended Morava stabilizer group $\mathbb{G}_n$ on level structures on the formal group associated to the Morava $E$-theory $E_n$. As a result, we show that the action of the Hecke algebra $\mathcal{H}_n$ on $E_n^0(X)$ commutes with the action of $\mathbb{G}_n$. In \S\ref{section:Bn-theory} we construct Hecke operators in $\mathbb{B}_n$-theory. For this purpose, we introduce and study level structures on a $p$-divisible group. We show that functors for level structures of given type on the $p$-divisible group associated to $\mathbb{B}_n$-theory are representable by finite products of complete regular local rings. Using this result, we construct additive unstable operations in $\mathbb{B}_n$-theory, which are natural extensions of the power operations in $E_{n+1}$-theory. 
After that, we define Hecke operators in $\mathbb{B}_n$-theory by assembling these operations, show that the Hecke algebra $\mathcal{H}_{n+1}$ naturally acts on $\mathbb{B}_n^0(X)$ for any space $X$, and prove Theorem~\ref{Main-Theorem-1} (Theorem~\ref{thm:Hn+1-module-structure-on-BnX}). Furthermore, we study a natural $\mathbb{G}_{n+1}$-action on representing rings for level structures on the $p$-divisible group associated to $\mathbb{B}_n$-theory. We show that the action of the Hecke algebra $\mathcal{H}_{n+1}$ on $\mathbb{B}_n^0(X)$ commutes with the action of the extended Morava stabilizer group $\mathbb{G}_{n+1}$, and prove Theorem~\ref{Main-Theorem-2} (Theorem~\ref{thm:natural-Hn+1-module-strucrure-on-EnX}). In \S\ref{section:comparison-Hecke-operators} we compare the Hecke operators in $E_n$-theory with those in $E_{n+1}$-theory via $\mathbb{B}_n$-theory. We discuss the restriction of the Hecke operators in $\mathbb{B}_n$-theory to $E_n$-theory, and show that it can be written in terms of the Hecke operators in $E_n$-theory. Using this result, we construct a ring homomorphism $\omega$ from $\mathcal{H}_{n+1}$ to $\mathcal{H}_n$, and prove Theorem~\ref{Main-Theorem-3} (Theorem~\ref{thm:comparison-Hn-Hn+1-on-EnX}). \end{overview} \begin{notation}\rm In this paper we fix a prime number $p$ and a positive integer $n$. Let $\mathbb{Z}_p$ be the ring of $p$-adic integers, and let $\mathbb{Q}_p$ be its fraction field. For a set $S$, we denote by $|S|$ the cardinality of $S$, and by $\mathbb{Z}[S]$ the free $\mathbb{Z}$-module generated by $S$. For a finite abelian $p$-group $M$, we denote the $p$-rank of $M$ by $p$-rank($M$). We let $M[p^r]$ be the kernel of the multiplication map $p^r:M\to M$ for a nonnegative integer $r$. We write $\mathcal{CL}$ for the category of complete Noetherian local rings with residue field of characteristic $p$ and local ring homomorphisms. For $R\in \mathcal{CL}$, we let $\mathcal{CL}_R$ be the under category $\mathcal{CL}_{R/}$. For a (formal) scheme $\mathbf{X}$ over $R\in\mathcal{CL}$ and a morphism $f: R\to S$ in $\mathcal{CL}$, we denote by $\mathbf{X}_S$ or $f^*\mathbf{X}$ the base change of $\mathbf{X}$ along $f$. We let $\mathbf{G}^0$ be the identity component of a $p$-divisible group $\mathbf{G}$ over $R\in\mathcal{CL}$. \end{notation} \section{Hecke operators in Morava $E$-theory} \label{section:MoravaE} In \cite{Goerss-Hopkins} Goerss-Hopkins showed that Morava $E$-theory supports an $E_{\infty}$-ring structure which is unique up to homotopy. This induces power operations in Morava $E$-theory of spaces, and we can define Hecke operators by using power operations. In this section we review the construction of Hecke operators in Morava $E$-theory and study their compatibility with the action of the extended Morava stabilizer group. \subsection{Morava $E$-theory} \label{subsection:Morava-E-theory} In this subsection we fix notation of Morava $E$-theory. The $n$th Morava $E$-theory $E_n$ is an even periodic commutative ring spectrum. The degree $0$ coefficient ring $E_n^0=\pi_0(E_n)$ is given by \[ W(\mathbb{F}_{p^n})\power{w_1,\ldots,w_{n-1}}, \] where $W(\mathbb{F}_{p^n})$ is the ring of Witt vectors with coefficients in the finite field $\mathbb{F}_{p^n}$ with $p^n$ elements. Since $E_n$ is even periodic, we have an associated degree $0$ one dimensional formal Lie group $\mathbf{F}_n$ over $E_n^0$, which is a universal deformation of the Honda formal group $\overline{\mathbf{F}}_n$ over $\mathbb{F}_{p^n}$. 
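For orientation we recall the familiar height one example (it is not needed in what follows): for $n=1$ we have $E_1^0=W(\mathbb{F}_p)=\mathbb{Z}_p$, the spectrum $E_1$ can be identified with $p$-completed complex $K$-theory, and $\mathbf{F}_1$ can be taken to be the multiplicative formal group, with $p$-series \[ [p](x)=(1+x)^p-1, \] which reduces over $\mathbb{F}_p$ to the height one Honda formal group with $[p](x)=x^p$.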
We denote by $\mathbb{G}_n$ the $n$th extended Morava stabilizer group, which is a semidirect product of the automorphism group $\mathbb{S}_n$ of the Honda formal group $\overline{\mathbf{F}}_n$ over $\mathbb{F}_{p^n}$ with the Galois group ${\rm Gal}(\mathbb{F}_{p^n}/\mathbb{F}_p)$ of $\mathbb{F}_{p^n}$ over the prime field $\mathbb{F}_p$: \[ \mathbb{G}_n=\,{\rm Gal}(\mathbb{F}_{p^n}/\mathbb{F}_p) \ltimes \mathbb{S}_n.\] \subsection{Level structures on formal Lie groups} \label{subsection:level-structure} In this subsection we review level structures on one dimensional formal Lie groups (cf.~\cite{Strickland}). \if0 We denote by $\mathcal{CL}$ the category of complete Noetherian local rings with residue field of characteristic $p$ and local homomorphisms. \fi Let $\mathbf{X}$ be a one dimensional formal Lie group over $R\in\mathcal{CL}$ which has finite height. First, we recall the definition of divisors on $\mathbf{X}$. A divisor on $\mathbf{X}$ is a closed subscheme which is finite and flat over $R$. We can associate to a section $s\in \mathbf{X}(R)$ a divisor $[s]$ on $\mathbf{X}$. If we take a coordinate $x$ of $\mathbf{X}$, then the divisor $[s]$ is the closed subscheme ${\rm Spf}(R\power{x}/(x-x(s)))$, where $x(s)$ is the image of $x$ under the map $s^*: R\power{x}\to R$. Suppose that we have a homomorphism $\phi: M\to \mathbf{X}(R)$ of abelian groups, where $M$ is a finite abelian $p$-group. We can define a divisor $[\phi(M)]$ on $\mathbf{X}$ by \[ [\phi(M)]=\sum_{m\in M}\ [\phi(m)].\] If we take a coordinate $x$ of $\mathbf{X}$, then the divisor $[\phi(M)]$ is the closed subscheme ${\rm Spf}(R\power{x}/(f(x)))$, where $f(x)=\prod_{m\in M}(x-x(\phi(m)))$. Next, we recall the definition of level structures on $\mathbf{X}$. For a nonnegative integer $r$, we have a closed subgroup scheme $\mathbf{X}[p^r]$ which is the kernel of the map $p^r: \mathbf{X}\to \mathbf{X}$. A homomorphism $\phi: M\to \mathbf{X}(R)$ is said to be a level $M$-structure if the divisor $[\phi(M[p])]$ is a closed subscheme of $\mathbf{X}[p]$. Note that $[\phi(M[p^r])]$ is a closed subgroup scheme of $\mathbf{X}[p^r]$ for all $r\ge 0$ if $\phi: M\to \mathbf{X}(R)$ is a level $M$-structure by \cite[Proposition~32]{Strickland}. \begin{definition} \label{def:level-structure-formalgroup} \rm We define a functor \[ {\rm Level}(M,\mathbf{X}) \] from $\mathcal{CL}_R$ to the category of sets by assigning to $S\in\mathcal{CL}_R$ the set of all level $M$-structures on $\mathbf{X}_S$. \end{definition} By \cite[Proposition~22]{Strickland}, the functor ${\rm Level}(M,\mathbf{X})$ is representable, that is, there exists $D(M,\mathbf{X})\in\mathcal{CL}_R$ such that \[ {\rm Level}(M,\mathbf{X})={\rm Spf}(D(M,\mathbf{X})).\] Note that $D(M,\mathbf{X})$ is a finitely generated $R$-module. Furthermore, $D(M,\mathbf{X})$ is a regular local ring if $\mathbf{X}$ is a universal deformation of the formal group on the closed point (see \cite[Theorem~23]{Strickland}). Next, we consider functoriality of $D(M,\mathbf{X})$ with respect to $M$. If $\phi:M\to \mathbf{X}(R)$ is a level $M$-structure and $u: N\to M$ is a monomorphism of abelian $p$-groups, then the composition $\phi\circ u: N\to \mathbf{X}(R)$ is a level $N$-structure. Hence we obtain a natural transformation ${\rm Level}(M,\mathbf{X})\longrightarrow {\rm Level}(N,\mathbf{X})$, which induces a map \[ D(N,\mathbf{X}) \longrightarrow D(M,\mathbf{X})\] in $\mathcal{CL}_R$. \if0 Let $\mathbf{Y}$ be a one dimensional formal Lie group over $S\in \mathcal{CL}$. 
We suppose that there are a map $f:R\to S$ in $\mathcal{CL}$ and an isomorphism \[ \mathbf{Y}\stackrel{\cong}{\longrightarrow} f^*\mathbf{X}. \] If we have a level $M$-structure $\phi: M\to \mathbf{Y}(S)$, then the composition \[ M\stackrel{\phi}{\longrightarrow} \mathbf{Y}(S)\stackrel{\cong}{\to} f^*\mathbf{X}(S)=\mathbf{X}(S)\] is a level $M$-structure on $f^*\mathbf{X}=\mathbf{X}_S$. This implies an isomorphism of functors \[ {\rm Level}(M,\mathbf{Y})\cong {\rm Level}(M,\mathbf{X})\times_{{\rm Spf}(R)}{\rm Spf}(S).\] \fi Finally, we recall quotient formal Lie groups. Let $\phi:M\to \mathbf{X}(R)$ be a level $M$-structure on $\mathbf{X}$. Then the divisor $[\phi(M)]$ is a finite subgroup scheme of $\mathbf{X}$, and we can construct a quotient formal Lie group $\mathbf{X}/[\phi(M)]$ over $R$ (see \cite[Theorem~19]{Strickland}). \subsection{The action of $\mathbb{G}_n$ on $D(M,\mathbf{F}_n)$} \label{subsection:level-structure-Morava-E} In this subsection we study an action of the extended Morava stabilizer group $\mathbb{G}_n$ on the representing ring $D(M,\mathbf{F}_n)$ of level structures on $\mathbf{F}_n$. First, we recall that the action of $\mathbb{G}_n$ on the Honda formal group $\overline{\mathbf{F}}_n$ over $\mathbb{F}_{p^n}$ extends to an action on its universal deformation $\mathbf{F}_n$ over $E_n^0$. We will show that this action further extends to an action on the functor ${\rm Level}(M,\mathbf{F}_n)$ and its representing ring $D(M,\mathbf{F}_n)$ for any abelian $p$-group $M$ (cf.~\cite[\S7]{Torii6}). \if0 For $g\in \mathbb{G}_n$, there exists a commutative diagram \[ \begin{array}{ccc} \mathbf{F}_n& \stackrel{\overline{g}} {\hbox to 10mm{\rightarrowfill}} & \mathbf{F}_n \\[1mm] \bigg\downarrow & & \bigg\downarrow \\[4mm] {\rm Spf}(E_n^0) & \stackrel{g} {\hbox to 10mm{\rightarrowfill}} & {\rm Spf}(E_n^0) \\ \end{array}\] of formal schemes which is an extension of the action of $g$ on $\mathbf{H}_n$ over the closed point ${\rm Spec}(\mathbb{F}_{p^n})$, and the map $\overline{g}$ induces an isomorphism \[ \mathbf{F}_n\stackrel{\cong}{\longrightarrow} g^*\mathbf{F}_n \] of formal groups over ${\rm Spf}(E_n^0)$. \fi Let $\phi: M\to \mathbf{F}_n(R)$ be a level $M$-structure on $\mathbf{F}_n$ over $R\in\mathcal{CL}_{E_n^0}$. For any $g\in \mathbb{G}_n$, the composite \[ M\stackrel{\phi}{\longrightarrow} \mathbf{F}_n(R)\stackrel{g}{\longrightarrow} \mathbf{F}_n(R') \] is a level $M$-structure on $\mathbf{F}_n$ over $R'\in \mathcal{CL}_{E_n^0}$, where $R'$ is a local $E_n^0$-algebra given by the composite $E_n^0\stackrel{g}{\to} E_n^0\to R$. \if0 the composition \[ M_{D(M,\mathbf{F}_n)} \stackrel{\phi^{\mbox{\scriptsize\rm univ}}}{\hbox to 10mm{\rightarrowfill}} i^*\mathbf{F}_n \stackrel{i^*t(g)}{\hbox to 10mm{\rightarrowfill}} i^*g^*\mathbf{F}_n\] is a level $M$-structure on $i^*g^*\mathbf{F}_n$. This induces a local $E_n^0$-algebra homomorphism $D(M,g^*\mathbf{F}_n)\to D(M,\mathbf{F}_n)$. Since $D(M,g^*\mathbf{F}_n)=E_n^0\subrel{g,E_n^0,i}{\otimes} D(M,\mathbf{F}_n)$, we obtain a local homomorphism \[ g: D(M,\mathbf{F}_n)\longrightarrow D(M,\mathbf{F}_n), \] which satisfies $g\circ i=i\circ g$. \[ \begin{array}{ccc} E_n^0 & \stackrel{g}{\hbox to 10mm{\rightarrowfill}} & E_n^0\\[1mm] \bigg\downarrow & & \bigg\downarrow \\[4mm] D_n(M) & \stackrel{g}{\hbox to 10mm{\rightarrowfill}} & D_n(M). 
\\ \end{array}\] \fi \if0 a commutative diagram of functors \[ \begin{array}{ccc} {\rm Level}(M,\mathbf{F}_n) & \stackrel{g} {\hbox to 10mm{\rightarrowfill}}& {\rm Level}(M,\mathbf{F}_n)\\[1mm] \bigg\downarrow & & \bigg\downarrow \\[4mm] {\rm Spf}(E_n^0) & \stackrel{g} {\hbox to 10mm{\rightarrowfill}} & {\rm Spf}(E_n^0) \\ \end{array}\] for any finite abelian $p$-group $M$ and $g\in\mathbb{G}_n$ \fi This induces a natural transformation \[ g: {\rm Level}(M,\mathbf{F}_n) \longrightarrow {\rm Level}(M,\mathbf{F}_n) \] over $g: {\rm Spf}(E_n^0)\to {\rm Spf}(E_n^0)$. Hence we obtain an action of $\mathbb{G}_n$ on the functor ${\rm Level}(M,\mathbf{F}_n)$ which is compatible with the action on ${\rm Spf}(E_n^0)$. In other words, we obtain an action of $\mathbb{G}_n$ on its representing ring $D(M,\mathbf{F}_n)$ which is compatible with the action on $E_n^0$. By the construction of the action of $\mathbb{G}_n$ on level structures, it is easy to obtain the following lemma. \begin{lemma}\label{lemma:restriction-g-level-str-commutativity} For any monomorphism $N\to M$ of finite abelian $p$-groups, the induced ring homomorphism $D(N,\mathbf{F}_n)\to D(M,\mathbf{F}_n)$ is $\mathbb{G}_n$-equivariant. \end{lemma} \subsection{Power operations in Morava $E$-theory} \label{subsection:power_operation_MoravaE} In this subsection we recall the construction of power operations in Morava $E$-theory and show that they are equivariant with respect to the action of the extended Morava stabilizer group. For a space $X$, we denote by $\Sigma^{\infty}_+X$ the suspension spectrum of $X$ with a disjoint base point. By \cite{Goerss-Hopkins}, Morava $E$-theory $E_n$ is an $E_{\infty}$-ring spectrum which is unique up to homotopy. In particular, we have a structure map \[ \nu_k: \Sigma^\infty_+E\Sigma_k \subrel{\Sigma_k}{\wedge} E_n^{\wedge k} \longrightarrow E_n,\] where $\Sigma_k$ is the symmetric group of degree $k$, and $E\Sigma_k$ is its universal space. For a map $f: \Sigma^{\infty}_+X\to E_n$, by the composition \[ \Sigma^{\infty}_+(E\Sigma_k\subrel{\Sigma_k}{\times} X^{\times k})\cong \Sigma^{\infty}_+E\Sigma_k\subrel{\Sigma_k}{\wedge} \Sigma_+X^{\wedge k} \stackrel{1\wedge f^{\wedge k}}{\hbox to 13mm{\rightarrowfill}} \Sigma^{\infty}_+E\Sigma_k\subrel{\Sigma_k}{\wedge} E_n^{\wedge k} \stackrel{\nu_{k}}{\hbox to 10mm{\rightarrowfill}}E_n,\] we obtain a natural transformation \[ {\rm p}_k:E_n^0(X)\longrightarrow E_n^0(E\Sigma_k\subrel{\Sigma_k}{\times}X^{\times k}).\] The inclusion map of the diagonal $\Delta: X\to X^{\times k}$ induces a map $(1\times\Delta)^*: E_n^0(E\Sigma_k\subrel{\Sigma_k}{\times}X^{\times k}) \to E_n^0(E\Sigma_k\subrel{\Sigma_k}{\times}X)\cong E_n^0(B\Sigma_k)\subrel{E_n^0}{\otimes} E_n^0(X)$. A power operation \[ {\rm P}_{k}: E_n^0(X)\longrightarrow E_n^0(B\Sigma_k)\subrel{E_n^0}{\otimes} E_n^0(X) \] is defined by \[ {\rm P}_{k}= (1\times\Delta)^*\circ {\rm p}_k.\] Let $G$ be a finite group of order $k$. We fix a bijection between $G$ and the set $\{1,2,\ldots,k\}$. The left multiplication by an element in $G$ induces an automorphism of $\{1,2,\ldots,k\}$. Hence we can regard $G$ as a subgroup of $\Sigma_k$. The inclusion $i:G\hookrightarrow \Sigma_k$ induces a map $Bi: BG\to B\Sigma_k$ of classifying spaces. Note that the homotopy class of $Bi$ is independent of the choice of a bijection between $G$ and $\{1,2,\ldots,k\}$. 
We define a power operation \[ {\rm P}_{G}: E_n^0(X)\longrightarrow E_n^0(BG)\subrel{E_n^0}{\otimes} E_n^0(X) \] by the composition \[ {\rm P}_G= (Bi^*\otimes 1)\circ {\rm P}_k.\] For a finite abelian $p$-group $M$, we set $M^*={\rm Hom}(M,S^1)$, where $S^1$ is the circle group. By \cite[Proposition~2.9]{Greenlees-Strickland}, the functor ${\rm Hom}(M,\mathbf{F}_n)$ over ${\rm Spf}(E_n^0)$ is represented by $E_n^0(BM^*)$, that is, \[ {\rm Hom}(M,\mathbf{F}_n)=\mbox{\rm Spf}(E_n^0(BM^*)).\] Since the closed subscheme ${\rm Level}(M,\mathbf{F}_n)$ of ${\rm Hom}(M,\mathbf{F}_n)$ is represented by $D(M,\mathbf{F}_n)$, we have a local $E_n^0$-algebra homomorphism \[ {\rm R}: E_n^0(BM^*)\longrightarrow D(M,\mathbf{F}_n).\] Using the $E_n^0$-algebra $D(M,\mathbf{F}_n)$, we define an extension of Morava $E$-theory. \begin{definition}\rm For a space (or a spectrum) $X$, we set \[ E(M)_n^*(X)= D(M,\mathbf{F}_n)\subrel{E_n^0}{\otimes}E_n^*(X).\] Since $D(M,\mathbf{F}_n)$ is a finitely generated free $E_n^0$-module, the functor $E(M)_n^*(X)$ is a generalized cohomology theory. \end{definition} By the diagonal action, the group $\mathbb{G}_n$ acts on $E(M)_n^*(X)$ as multiplicative stable cohomology operations. For a monomorphism $N\to M$ of finite abelian $p$-groups, we have a multiplicative stable cohomology operation $E(N)_n^*(X)\to E(M)_n^*(X)$, which is $\mathbb{G}_n$-equivariant by Lemma~\ref{lemma:restriction-g-level-str-commutativity}. \if0 \begin{corollary}\label{cor:restriction-g-commutation-En} We have a commutative diagram \[ \begin{array}{ccc} D_n(N)^*(X) & \stackrel{r(M,N)}{\hbox to 10mm{\rightarrowfill}} & D_n(M)^*(X)\\[1mm] \mbox{\scriptsize$g$}\bigg\downarrow\phantom{\mbox{\scriptsize$g$}} & & \phantom{\mbox{\scriptsize$g$}}\bigg\downarrow \mbox{\scriptsize$g$}\\[3mm] D_n(N)^*(X) & \stackrel{r(M,N)}{\hbox to 10mm{\rightarrowfill}} & D_n(M)^*(X) \end{array}\] for any subgroup $N$ of a finite abelian $p$-group $M$, any $g\in \mathbb{G}_n$ and any space $X$. \end{corollary} \fi \begin{definition}\rm We define an operation \[ \Psi_M^{E_n}: E_n^0(X)\longrightarrow E(M)_n^0(X)\] by the composition \[ \Psi_M^{E_n}= ({\rm R}\otimes 1)\circ {\rm P}_{M^*}.\] The operation $\Psi_M^{E_n}$ is a ring operation (see \cite[Theorem~3.4.4]{Ando1} or \cite[Lemma~3.10]{AHS}). \end{definition} \if0 Let $j: E_n^0\to D_n(M)$ be the structure map of the $E_n^0$-algebra $D_n(M)$. Note that $D_n(M)$ is a complete regular local ring with the residue field $\mathbb{F}$, and that $j$ induces an isomorphism between the residue fields. We denote by $j^*\mathbf{F}_n$ the formal group over $D_n(M)$ obtained by base change along $j$. The universal level $M$-structure \[ l^{\rm univ}: M\longrightarrow j^*\mathbf{F}_n(D_n(M))\] gives rise to a closed subgroup scheme $[l(M)]$ of $j^*\mathbf{F}_n$ by \[ [l(M)]=\sum_{m\in M}[l^{\rm univ}(m)],\] where $[l^{\rm univ}(m)]$ is a divisor on $j^*\mathbf{F}_n$ defined by a point $l^{\rm univ}(m)\in j^*\mathbf{F}_n(D_n(M))$. We can define the quotient formal group $j^*\mathbf{F}_n/[l(M)]$ of $j^*\mathbf{F}_n$ by $[l(M)]$. The formal group $j^*\mathbf{F}_n/[l(M)]$ is a deformation of the Honda formal group $\mathbf{Y}_{n+1}$ over the residue field $\mathbb{F}$. 
\fi When $X$ is a one point space, we obtain a ring homomorphism \[ \Psi_M^{E_n}: E_n^0\longrightarrow E(M)_n^0=D(M,\mathbf{F}_n).\] This map classifies the quotient formal group $i^*\mathbf{F}_n/[\phi_{\rm univ}(M)]$, where $i: E_n^0\to D(M,\mathbf{F}_n)$ is the structure map and $\phi_{\rm univ}$ is the universal level $M$-structure on $i^*\mathbf{F}_n$. When $X$ is the infinite complex projective space $\mathbb{CP}^{\infty}$, the map $\Psi_M^{E_n}: E_n^0(\mathbb{CP}^{\infty}) \to E(M)_n^0(\mathbb{CP}^{\infty})$ induces an isogeny \[ \Psi_M^{E_n}: i^*\mathbf{F}_n\longrightarrow \Psi_M^*\mathbf{F}_n\] with $\ker\Psi_M^{E_n}=[\phi_{\rm univ}(M)]$ (see \cite[\S\S3.3 and 3.4]{Ando1} or \cite[Proposition~3.21]{AHS}). Next, we consider compatibility of power operations $\Psi_M^{E_n}$ with the action of the extended Morava stabilizer group $\mathbb{G}_n$. \begin{proposition}\label{prop:g-Psi-commutation-En} The operation $\Psi_M^{E_n}: E_n^0(X)\longrightarrow E(M)_n^0(X)$ is $\mathbb{G}_n$-equivariant. \end{proposition} \proof It suffices to show that $P_{M^*}$ and $R$ are $\mathbb{G}_n$-equivariant. Recall that $\mathbb{G}_n$ acts on $E_n$ as $E_{\infty}$-ring spectrum maps. This implies that $P_{M^*}$ is $\mathbb{G}_n$-equivariant. By the construction of the action of $\mathbb{G}_n$, the map of functors ${\rm Level}(M,\mathbf{F}_n)\to {\rm Hom}(M,\mathbf{F}_n)$ is $\mathbb{G}_n$-equivariant. Thus, $R: E_n^0(BM^*)\to D(M,\mathbf{F}_n)$ is also $\mathbb{G}_n$-equivariant. \qed \if0 \begin{lemma}\label{lemma:g-psi-commutation} The map $\Psi_M: E_n^0\to D(M,\mathbf{F}_n)$ is $\mathbb{G}_n$-equivariant. \if0 We have a commutative diagram \[ \begin{array}{ccc} E_n^0 & \stackrel{g}{\hbox to 10mm{\rightarrowfill}} & E_n^0 \\[1mm] \mbox{\scriptsize$\Psi_M^{E_n}$}\bigg\downarrow \phantom{\mbox{\scriptsize$\Psi_M^{E_n}$}\bigg\downarrow} & & \phantom{\mbox{\scriptsize$\Psi_M^{E_n}$}} \bigg\downarrow\mbox{\scriptsize$\Psi_M^{E_n}$} \\[3mm] D_n(M) & \stackrel{g}{\hbox to 10mm{\rightarrowfill}} & D_n(M)\\ \end{array}\] for any $g\in \mathbb{G}_n$ and any finite abelian $p$-group $M$. \fi \end{lemma} \proof For $g\in \mathbb{G}_n$, we let $t(g)$ be the isomorphism $\mathbf{F}_n\stackrel{\cong}{\to}g^*\mathbf{F}_n$. We set $\phi'=(i^*t(g))\circ \phi_{\rm univ}$. Note that $\phi'$ is a level $M$-structure on $i^*g^*\mathbf{F}_n$. Taking the pullback of the isomorphism $\Psi_M^*\mathbf{F}_n\stackrel{\cong}{\to}i^*\mathbf{F}_n/[\phi(M)]$ along $g$, we obtain an isomorphism \[ g^*\Psi_M^*\mathbf{F}_n\stackrel{\cong}{\longrightarrow} g^*i^*\mathbf{F}_n/[\phi'(M)]. \] On the other hand, by taking the pullback of the isomorphism $t(g):\mathbf{F}_n\stackrel{\cong}{\to}g^*\mathbf{F}_n$ along $\Psi_M$, we obtain an isomorphism \[ i^*\mathbf{F}_n/[\phi_{\rm univ}(M)]\stackrel{\cong}{\to} \Psi_M^*\,g^*\mathbf{F}_n. \] Furthermore, the isomorphism $t(g)$ induces an isomorphism \[ i^*\mathbf{F}_n/[\phi_{\rm univ}(M)]\stackrel{\cong}{\to} g^*i^*\mathbf{F}_n/[\phi'(M)].\] Hence we obtain an isomorphism \[ \Psi_M^*\,g^*\mathbf{F}_n\cong g^*\Psi_M^*\mathbf{F}_n. \] We can verify that this isomorphism restricts to the identity on the special fiber. Since $\mathbf{F}_n$ over $E_n^0$ is a universal deformation, we obtain $\Psi_M\circ g=g\circ \Psi_M$. \qed \begin{proposition}\label{prop:g-Psi-commutation-En} The operation $\Psi_M: E_n^0(X)\longrightarrow E(M)_n^0(X)$ is $\mathbb{G}_n$-equivariant. 
\if0 We have a commutative diagram \[ \begin{array}{ccc} E_n^0(X) & \stackrel{g}{\hbox to 10mm{\rightarrowfill}} & E_n^0(X) \\[1mm] \mbox{\scriptsize$\Psi_M^{E_n}$}\bigg\downarrow \phantom{\mbox{\scriptsize$\Psi_M^{E_n}$}\bigg\downarrow} & & \phantom{\mbox{\scriptsize$\Psi_M^{E_n}$}} \bigg\downarrow\mbox{\scriptsize$\Psi_M^{E_n}$} \\[3mm] D_n(M)^0(X) & \stackrel{g}{\hbox to 10mm{\rightarrowfill}} & D_n(M)^0(X)\\ \end{array}\] for any $g\in \mathbb{G}_n$, any finite abelian $p$-group $M$, and any space $X$. \fi \end{proposition} \proof We take $g\in \mathbb{G}_n$. When $X$ is the one point space, we have $\Psi_M\circ g=g\circ \Psi_M$ by Lemma~\ref{lemma:g-psi-commutation}. We denote this map by $\Psi_{M,g}: E_n^0\to D(M,\mathbf{F}_n)$. We consider the case in which $X$ is the infinite dimensional complex projective space $\mathbb{CP}^{\infty}$. The ring homomorphisms \[ \Psi_M\circ g,\ g\circ \Psi_M: E_n^0(\mathbb{CP}^{\infty})\longrightarrow E(M)_n^0(\mathbb{CP}^{\infty}) \] give rise to homomorphisms \[ f_1,\ f_2: i^*\mathbf{F}_n\longrightarrow \Psi_{M,g}^*\mathbf{F}_n \] of formal groups over $D(M,\mathbf{F}_n)$, respectively. We can verify $r^*f_1=r^*f_2$, where $r: D(M,\mathbf{F}_n)\to\mathbb{F}_{p^n}$ is the map to the residue field. This implies $f_1=f_2$ since $D(M,\mathbf{F}_n)$ is a complete local ring and the height of $i^*\mathbf{F}_n$ is finite. By \cite[Proposition~3.7]{Ando2} an unstable ring operation between Landweber exact cohomology theories is determined by its values on the one point space and the infinite dimensional complex projective space. Hence we obtain the proposition. \qed \fi \subsection{Hecke operators in Morava $E$-theory} In this subsection we review Hecke operators in Morava $E$-theory (see, for example, \cite{Ganter,Rezk,Ando2,AHS}). We also study their compatibility with the action of the extended Morava stabilizer group. First, we recall the construction of Hecke operators in Morava $E$-theory. Let $\Lambda^n=(\mathbb{Q}_p/\mathbb{Z}_p)^n$. We set \[ D_n(r)=D(\Lambda^n[p^r],\mathbf{F}_n). \] The inclusion $\Lambda^n[p^r]\to \Lambda^n[p^{r+1}]$ of finite abelian $p$-groups induces a local $E_n^0$-algebra homomorphism $D_n(r)\to D_n(r+1)$. We define an $E_n^0$-algebra $D_n$ to be a colimit of $\{D_n(r)\}_{r\ge 0}$: \[ D_n=\ \subrel{r}{\rm colim} D_n(r).\] For a subgroup $M$ of $\Lambda^n$, we let ${\rm I}_M: D(M,\mathbf{F}_n)\to D_n$ be the $E_n^0$-algebra homomorphism induced by the inclusion. We define an operation \[ \Phi_M^{E_n}: E_n^0(X)\longrightarrow D_n\subrel{E_n^0}{\otimes}E_n^0(X)\] by the composition \[ \Phi_M^{E_n}= ({\rm I}_M\otimes 1)\circ \Psi_M^{E_n}.\] We would like to have operations from $E_n^0(X)$ to $E_n^0(X)$. For this purpose, we consider a sum of operations $\Phi_M^{E_n}$ for all subgroups of $\Lambda^n$ isomorphic to a given finite abelian $p$-group $M$. Then we can show that it factors through $E_n^0(X)$ by an invariance with respect to the Galois group of $D_n$ over $E_n^0$. For a finite abelian $p$-group $M$, we let $m_n(M)$ be the set of all subgroups of $\Lambda^n$ isomorphic to $M$: \[ m_n(M)=\{M'\le \Lambda^n|\ M'\cong M\}.\] Note that $m_n(M)$ is a finite set and that $m_n(M)=\emptyset$ if $p$-rank$(M)>n$. By \cite[\S14.2]{Rezk}, there is an unstable additive cohomology operation \[ \widetilde{\rm T}_M: E_n^0(X)\longrightarrow E_n^0(X) \] such that \[ (i\otimes 1)\circ \widetilde{\rm T}_M= \sum_{M'\in m_n(M)} \Phi_{M'}^{E_n}, \] where $i:E_n^0\to D_n$ is the $E_n^0$-algebra structure map of $D_n$. 
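For orientation, we record the simplest instance of this indexing set; it is not needed in what follows. A subgroup of $\Lambda^n$ isomorphic to $\mathbb{Z}/p\mathbb{Z}$ is the same thing as a line in $\Lambda^n[p]\cong(\mathbb{Z}/p\mathbb{Z})^n$, so that \[ |m_n(\mathbb{Z}/p\mathbb{Z})|=\frac{p^n-1}{p-1}=1+p+\cdots+p^{n-1}, \] and the right hand side of the above characterization of $\widetilde{\rm T}_{\mathbb{Z}/p\mathbb{Z}}$ is a sum of $1+p+\cdots+p^{n-1}$ operations $\Phi_{M'}^{E_n}$. When $n=1$, every finite subgroup of $\Lambda^1=\mathbb{Q}_p/\mathbb{Z}_p$ is cyclic and $m_1(\mathbb{Z}/p^r\mathbb{Z})$ consists of the single subgroup $p^{-r}\mathbb{Z}_p/\mathbb{Z}_p$, so the sum has a single summand.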
\if0 \begin{theorem}[{cf.~\cite[\S14.2]{Rezk}}] \label{thm:MoravaE-Hecke-op} There is an unstable additive cohomology operation \[ \widetilde{\rm T}_M: E_n^0(X)\longrightarrow E_n^0(X) \] such that \[ (i\otimes 1)\circ \widetilde{\rm T}_M= \sum_{M'\in m_n(M)} \Phi_{M'}^{E_n}, \] where $i:E_n^0\to D_n$ is the $E_n^0$-algebra structure map of $D_n$. \end{theorem} \proof Let $r$ be a positive integer such that $p^rM=0$. The map $\sum_{M'} \Phi_{M'}^{E_n}$ factors through $D_n(r)\otimes E_n^0(-)$. This gives rise to $\theta\in D_n(r)\otimes E_n^0(\underline{E}_n)$, where $\underline{E}_n=\Omega^{\infty}E_n$ is the $0$th space of the representing $\Omega$-spectrum of $E_n$. By \cite[Theorem~2.4]{Bendersky-Hunton}, $(E_n)_*(\underline{E}_n)$ is a free $(E_n)_*$-module. This implies that $E_n^0(\underline{E}_n)$ is isomorphic to a product of $E_n^0$ as an $E_n^0$-module. Since $E_n^0$ is the invariant subring of $D_n(r)$ under the action of ${\rm GL}_n(\mathbb{Z}/p^r\mathbb{Z})$, we see \[ E_n^0(\underline{E}_n)= (D_n(r)\subrel{E_n^0}{\otimes}E_n^0 (\underline{E}_n))^{\mbox{\scriptsize ${\rm GL_n}(\mathbb{Z}/p^r\mathbb{Z})$}}.\] By construction, $\theta$ is invariant under the action of ${\rm GL}_n(\mathbb{Z}/p^r\mathbb{Z})$, and hence we obtain \[ \widetilde{\rm T}_M\in E_n^0(\underline{E}_n) \] such that $\widetilde{\rm T}_M=\theta$ in $D_n(r)\otimes E_n^0(\underline{E}_n)$. \qed \fi In the same way there is an unstable additive operation \[ \widetilde{\rm T}\left({p^r}\right): E_n^0(X)\longrightarrow E_n^0(X)\] for a nonnegative integer $r$ such that \[ (i\otimes 1)\circ\widetilde{\rm T}\left({p^r}\right)= \sum_{\mbox{\scriptsize$ \begin{array}{c} M' \le \Lambda^n\\ |M'|=p^r\\ \end{array}$}} \Phi_{M'}^{E_n},\] where the sum ranges over the subgroups $M'$ of $\Lambda^n$ such that the order $|M'|$ is $p^r$. \if0 \subsubsection{Composition of Hecke operators} \fi Now, we recall an algebra of Hecke operators associated to $(\Delta_n,\Gamma_n)$, where $\Delta_n={\rm End}(\Lambda^n)\cap {\rm GL}_n(\mathbb{Q}_p)$ and $\Gamma_n={\rm GL}_n(\mathbb{Z}_p)$ (cf.~\cite[Chapter~3]{Shimura} and \cite[\S14.1]{Rezk}). We denote by $\mathbb{Z}[\Delta_n]$ the monoid ring of $\Delta_n$ over $\mathbb{Z}$, and by $\mathbb{Z}[\Delta_n/\Gamma_n]$ the left $\mathbb{Z}[\Delta_n]$-module spanned by cosets. We define an algebra $\mathcal{H}_n$ to be the endomorphism ring of the left $\mathbb{Z}[\Delta_n]$-module $\mathbb{Z}[\Delta_n/\Gamma_n]$: \[ \mathcal{H}_n={\rm End}_{\mathbb{Z}[\Delta_n]} (\mathbb{Z}[\Delta_n/\Gamma_n]).\] Note that $\mathcal{H}_n$ is isomorphic to $\mathbb{Z}[\Gamma_n\backslash\Delta_n/\Gamma_n]$ as $\mathbb{Z}$-modules, where $\mathbb{Z}[\Gamma_n\backslash\Delta_n/\Gamma_n]$ is the free $\mathbb{Z}$-module generated by double cosets. We identify $\Gamma_n\backslash\Delta_n/\Gamma_n$ with the set of all isomorphism classes of finite abelian $p$-groups of $p$-rank $\le n$ by associating $\ker\alpha$ to the double coset $\Gamma_n\alpha\Gamma_n$. For a finite abelian $p$-group $M$ of $p$-rank $\le n$, the associated endomorphism $\widetilde{M}: \mathbb{Z}[\Delta_n/\Gamma_n]\to \mathbb{Z}[\Delta_n/\Gamma_n]$ is given by $\Gamma_n\mapsto \sum\alpha\Gamma_n$, where the sum is taken over $\alpha\Gamma_n\in\Delta_n/\Gamma_n$ such that $\ker\alpha\cong M$. The composition in $\mathcal{H}_n$ is given as follows. 
For finite abelian $p$-groups $L,M,N$ with $p$-rank $\le n$, we let $K_n(M,N;L)$ be the set of all the pairs $(M',L')$ of subgroups of $\Lambda^n$ such that $M'\subset L'$, $L'\cong L$, $M'\cong M$, and $L'/M'\cong N$. We set \[ c_n(M,N;L)=\frac{|K_n(M,N;L)|}{|m_n(L)|}.\] The composition in $\mathcal{H}_n$ is given by \[ \widetilde{M}\cdot \widetilde{N}= \sum\ c_n(M,N;L)\ \widetilde{L} \] (cf.~\cite[Proposition~3.15]{Shimura}). By \cite[Theorem~3.20]{Shimura}, $\mathcal{H}_n$ is a polynomial ring over $\mathbb{Z}$ generated by the endomorphisms associated to $(\mathbb{Z}/p\mathbb{Z})^{\oplus k}$ for $1\le k\le n$. In particular, $\mathcal{H}_n$ is a commutative algebra over $\mathbb{Z}$. In \cite[\S14]{Rezk} it is shown that there is a natural action of $\mathcal{H}_n$ on $E_n^0(X)$ for any space $X$. Assigning to $\widetilde{M}$ the Hecke operator $\widetilde{\rm T}_M$, we obtain a natural $\mathcal{H}_n$-module structure on $E_n^0(X)$ for any space $X$ by \cite[Proposition~14.3]{Rezk}. Now, we consider compatibility of Hecke operators $\widetilde{\rm T}_M$ and $\widetilde{\rm T}\left(p^r\right)$ with the action of the extended Morava stabilizer group $\mathbb{G}_n$. \begin{proposition} \label{prop:commutation-Hecke-stabilizer-En} The operations $\widetilde{\rm T}_M$ and $\widetilde{\rm T}\left({p^r}\right)$ are $\mathbb{G}_n$-equivariant. \end{proposition} \proof The proposition follows from Lemma~\ref{lemma:restriction-g-level-str-commutativity} and Proposition~\ref{prop:g-Psi-commutation-En}. \qed \begin{corollary} The action of $\mathcal{H}_n$ on $E_n^0(X)$ commutes with the action of $\mathbb{G}_n$. \end{corollary} \if0 \subsection{Symmetric power operations on Morava $E$-theory} \if0 Let $BG$ be the classifying space of a finite group $G$. We abbreviate $\Sigma_+^\infty BG$ as $BG_+$. We denote by \[ tr: BG_+\wedge BG_+\longrightarrow BG_+\] the transfer map associated to the inclusion of the diagonal $G\to G\times G$. Let $p$ be the unique map from $G$ to the trivial group $\{e\}$. We set \[ \varepsilon=\Sigma_+^{\infty}Bp: BG_+\to B\{e\}_+=S^0.\] Let $L=L_{K(n)}$ be the Bousfield localization functor with respect to the $n$th Morava $K$-theory. We set \[ DBG_+=F(BG_+,LS^0). \] Taking adjoint of the composition $L\,\varepsilon\circ L\,tr$, we get a map \[ LBG_+\longrightarrow DBG_+,\] which is an equivalence by \cite[Proposition~8.3]{Strickland2}. We define a map $\eta_G: LS^0\to LBG_+$ in the stable homotopy category by \[ \eta_G: LS^0=LB\{e\}_+\stackrel{\simeq}{\longrightarrow} DB\{e\}_+ \longrightarrow DBG_+ \stackrel{\simeq}{\longleftarrow} LBG_+,\] where the map $DB\{e\}_+\to DBG_+$ is $D\varepsilon=F(\varepsilon,LS^0)$. In \cite[Definition~6.11 and Corollary~7.12]{Ganter} Ganter defines the $l$th symmetric power operation \[ \sigma_l=\sigma_l^{E_n}: E_n^0(X)\longrightarrow E_n^0(X) \] by \[ \sigma_l= (\eta_{\Sigma_l}^*\otimes 1) \circ P_l,\] where $\eta_{\Sigma_l}^*: E_n^0(B\Sigma_l)\to E_n^0$. \fi In \cite[Definition~6.11]{Ganter} Ganter defines the $l$th symmetric power operation \[ \sigma_l=\sigma_l^{E_n}: E_n^0(X)\longrightarrow E_n^0(X). \] Let $A$ be a finite $\mathbb{Z}_p^n$-set with $l$ elements. If we fix a bijection between $A$ and the set $\{1,2,\ldots,l\}$, then we can identify the automorphism group ${\rm Aut}(A)$ with the $l$th symmetric group $\Sigma_l$. The $\mathbb{Z}_p^n$-action on $A$ determines a continuous homomorphism $\mathbb{Z}_p^n\to {\rm Aut}(A)\cong \Sigma_l$, which factors through $(\mathbb{Z}_p/p^r\mathbb{Z}_p)^n$ for some $r$. 
We obtain a ring homomorphism $q_A: E_n^0(B\Sigma_l)\to D_n$ by the composition $q_A: E_n^0(B\Sigma_l)\to E_n^0(B(\mathbb{Z}_p/p^r\mathbb{Z}_p)^n)\to D_n$. Note that $q_A$ is independent of the choices of bijection $A\cong \{1,2,\ldots,l\}$. We define an operation \[ \chi_A=\chi_A^{E_n}: E_n^0(X)\to D_n\subrel{E_n^0}{\otimes}E_n^0(X) \] as \[ \chi_A=(q_A\otimes 1)\circ P_l. \] The $l$th symmetric power operation $\sigma_l$ satisfies \[ (i\otimes 1)\circ \sigma_l=\sum_{|A|=l}\ \frac{1}{|{\rm Aut}_{\mathbb{Z}_p^n}(A)|}\,\chi_A,\] where the sum in the right hand side ranges over the finite $\mathbb{Z}_p^n$-sets with $l$ elements, ${\rm Aut}_{\mathbb{Z}_p^n}(A)$ is the automorphism group of the $\mathbb{Z}_p^n$-set $A$, and $i$ is the $E_n^0$-algebra structure map of $D_n$. Let $A=A_1\coprod\cdots\coprod A_s$ be the decomposition as a disjoint union of the transitive $\mathbb{Z}_p^n$-sets. In this case we can write the operation $\chi_A$ as \[ \chi_A=\prod_{i=1}^s \chi_{A_i}.\] Now suppose $A$ is a transitive $\mathbb{Z}_p^n$-set. If we take a base point of $A$, then we can identify $A$ with a finite abelian $p$-group $\mathbb{Z}_p^n/H$, where $H$ is an open subgroup of $\mathbb{Z}_p^n$. In this case we have \[ \chi_A=\Phi_M,\] where $M={\rm Hom} (\mathbb{Z}_p^n/H,\mathbb{Q}_p/\mathbb{Z}_p)$. In \cite[Definition~7.15]{Ganter} Ganter defines the total symmetric power operation \[ S_Z=S_Z^{E_n}: E_n^0(X)\to E_n^0(X)\power{Z} \] by \[ S_Z(x)=\sum_{l=0}^\infty \sigma_l(x)\,Z^l,\] where $x\in E_n^0(X)$ and $Z$ is a formal variable. By \cite[Proposition~9.1]{Ganter}, we have \[ S_Z(x)=\exp\left[ \sum_{k=0}^\infty\ \frac{1}{p^k}\,\widetilde{\rm T}\!\left(p^k\right)\!(x)\, Z^{p^k} \right].\] \begin{proposition} For $g\in \mathbb{G}_n$, we let $g_Z: E_n^0(X)\power{Z}\to E_n^0(X)\power{Z}$ be the obvious extension of $g: E_n^0(X)\to E_n^0(X)$. We have \[ S_Z\circ g=g_Z\circ S_Z \] for any $g\in \mathbb{G}_n$. \end{proposition} \proof Since $\sigma_l$ can be written in terms of the operations $\Phi_M$, the proposition follows from Proposition~\ref{prop:g-Phi-commutation-En}. \qed \if0 \subsection{Orbifold genera} Let $G$ be a finite group, and let $M$ be a compact complex manifold acted upon by $G$. We denote by $M//G$ the orbifold quotient. Suppose that we have an $H_{\infty}$-map ${\rm f}: MU\to E_n$. Note that the condition is known for ${\rm f}$ to be an $H_{\infty}$-map by \cite[Theorem~5]{Ando1}. In \cite[Definition~1.1]{Ganter} Ganter defines the orbifold genus \[ {\rm f}_{\rm orb}(M//G)\in E_n^0.\] \if0 Let $G$ be a finite group. We write $\mathcal{N}_*^{U,G}$ for the bordism ring of compact, closed, smooth $G$-manifolds with a complex structure on their stable normal bundle. Let $MU_G$ be the complex Thom $G$-spectrum. The Pontrjagin-Thom construction induces a ring homomorphism $\mathcal{N}_*^{U,G}\to MU^G_*$. Since $MU_G$ is a split $G$-spectrum, the map $EG\to \ast$ induces a completion map $MU_G^{*}\to MU^{*}(BG)$. Let ${\rm f}: MU\to E_n$ be an $H_{\infty}$-map. Note that the condition is known for ${\rm f}$ to be an $H_{\infty}$-map by \cite[Theorem~5]{Ando1}. We denote by ${\rm f}_G: \mathcal{N}_*^{U,G}\to E_n^{-*}(BG)$ the composition \[ \mathcal{N}^{U,G}_*\longrightarrow MU^G_*=MU_G^{-*}\longrightarrow MU^{-*}(BG)\longrightarrow E_{n}^{-*}(BG),\] where the first arrow is induced by the Pontrjagin-Thom construction, the second arrow is the completion map, and the third arrow is induced by ${\rm f}$. Let $M$ be a compact complex manifold acted upon by $G$. 
We suppose that $d$ is the complex dimension of $M$. We denote by $M\circlearrowleft G$ the $G$-space $M$, and by $M//G$ its orbifold quotient. In \cite[Definition~1.1]{Ganter} the orbifold genus ${\rm f}_{\rm orb}(M//G)\in E_n^0$ is defined. by \[ {\rm f}_{\rm orb}(M//G)\cdot u^d= (\eta_G)^*\circ {\rm f}_G (M \circlearrowleft G),\] where we regard $(M\circlearrowleft G)$ as an element in $\mathcal{N}_{2d}^{U,G}$. Note that the orbifold genus ${\rm f}_{\rm orb}(M//G)$ is independent of the presentation $M\circlearrowleft G$ by \cite[Theorem~1.4]{Ganter}. \fi We set \[ S_{\rm orb}(M;Z)=S_{\rm orb}^{E_n}(M;Z)= \sum_{l=0}^\infty {\rm f}_{\rm orb}(M^l//\Sigma_l)\,Z^l,\] where $Z$ is a formal variable. By \cite[Theorem~1.5]{Ganter}, we have the following formula \[ S_{\rm orb}(M;Z) = \exp \left[ \sum_{k=0}^\infty \frac{1}{p^k} \widetilde{\rm T}\left(p^k\right)({\rm f}(M))\,Z^{p^k} \right].\] \fi \if0 \subsection{The logarithmic operation} In \cite{Rezk} Rezk constructed a logarithmic map \[ l_{n,p}: {\rm gl}_1(R)\longrightarrow L_{K(n)}R\] for each commutative $S$-algebra $R$, prime $p$ and $n\ge 1$. This map induces the logarithmic operation \[ l_{n,p}: (R^0(X))^\times\longrightarrow (L_{K(n)}R)^0(X)\] for any space $X$. When $R=E_n$, we have \[ l_{n,p}: (E_n^0(X))^{\times}\longrightarrow E_n^0(X).\] By \cite[Theorem~1.11]{Rezk}, we have the following formula \[ l_{n,p}(x)=\frac{1}{p}\log \left(1+p \prod_{r=0}^n N_{p^r}(x)^{(-1)^r p^{(r-1)(r-2)/2}}\right), \] where $N_{p^r}: E^0(X)\to E^0(X)$ is given by \[ N_{p^r}(x)= \prod_{\mbox{\scriptsize$ \begin{array}{c} M \le \Lambda^n[p]\\ |M|=p^r\\ \end{array}$}} \psi_M^{E_n}(x).\] \fi \fi \section{Hecke operators in $\mathbb{B}_n$-theory} \label{section:Bn-theory} In \cite{Torii3} we defined an even periodic ring spectrum $\mathbb{B}_n$ that connects the Morava $E$-theories $E_n$ and $E_{n+1}$. In this section we construct unstable additive operations in $\mathbb{B}_n$, which are extensions of power operations in $E_{n+1}$. Using these operations, we define Hecke operators in $\mathbb{B}_n$-theory. \subsection{$\mathbb{B}_n$-theory} In this subsection we recall an even periodic commutative ring spectrum $\mathbb{B}_n$ which is an extension of $E_n$ and $E_{n+1}$. We show that the degree $0$ coefficient ring $\mathbb{B}_n^0$ classifies isomorphisms between $p$-divisible groups associated to $E_n$ and $E_{n+1}$. \if0 Let $p$ be a prime number and let $n$ be a positive integer. We denote by $\mathbb{F}_q$ the finite field with $q$ elements. We fix an algebraic extension $\mathbb{F}$ of the composition field $\mathbb{F}_{p^n}\mathbb{F}_{p^{n+1}}$. Let $W$ be the ring of Witt vectors with coefficients in $\mathbb{F}$. The ring $W$ is a complete discrete valuation ring with a uniformizer $p$ and the residue field $\mathbb{F}$. In the following of this paper we use a variant of Morava $E$-theories. Let $E_n$ be the $n$th Morava $E$-theory whose ring of coefficients is given by \[ \pi_*(E_n)=W\power{u_1,\ldots,u_{n-1}}[u^{\pm 1}],\] where the degree of $u$ is $2$. Also, let $E_{n+1}$ be the $(n+1)$st Morava $E$-theory whose ring of coefficients is given by \[ \pi_*(E_{n+1})=W\power{w_1,\ldots,w_n}[w^{\pm 1}],\] where the degree of $w$ is $2$. \fi First, we introduce a spectrum $\mathbb{A}_n$ which is defined by \[ \mathbb{A}_n= L_{K(n)}E_{n+1},\] where $K(n)$ is the $n$th Morava $K$-theory spectrum and $L_{K(n)}$ is the Bousfield localization functor with respect to $K(n)$. The spectrum $\mathbb{A}_n$ is an even periodic commutative ring spectrum. 
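For orientation, consider the smallest case $n=1$: then $\mathbb{A}_1=L_{K(1)}E_2$ and, in the notation introduced in the next paragraph, its degree $0$ coefficient ring is \[ \mathbb{A}_1^0=(W(\mathbb{F}_{p^2})((u_1)))_p^{\wedge}, \] that is, the $p$-adic completion of the ring obtained from $E_2^0=W(\mathbb{F}_{p^2})\power{u_1}$ by inverting $u_1$.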
In order to distinguish generators of $E_n^0$ and $E_{n+1}^0$, we write $E_{n+1}^0= W(\mathbb{F}_{p^{n+1}})\power{u_1,\ldots,u_n}$. Then the degree $0$ coefficient ring of $\mathbb{A}_n$ is given by \[ \mathbb{A}_n^0=(W((u_n)))_p^{\wedge} \power{u_1,\ldots,u_{n-1}},\] where $W=W(\mathbb{F}_{p^{n+1}})$. We describe a relationship between the formal groups of $E_{n+1}$ and $\mathbb{A}_n$. Let $\mathbf{G}$ be the $p$-divisible group $\mathbf{F}_{n+1}[p^{\infty}]$ associated to the formal group $\mathbf{F}_{n+1}$ over $E_{n+1}^0$. We denote by $\mathbf{G}_{\mathbb{A}}$ the base change of $\mathbf{G}$ along the map $E_{n+1}^0\to \mathbb{A}_n^0$. There is an exact sequence $0\to \mathbf{G}_{\mathbb{A}}^0\to\mathbf{G}_{\mathbb{A}}\to \mathbf{G}_{\mathbb{A}}^{\rm et}\to 0$ of $p$-divisible groups over $\mathbb{A}_n^0$, where $\mathbf{G}_{\mathbb{A}}^0$ is connected and $\mathbf{G}_{\mathbb{A}}^{\rm et}$ is \'{e}tale. The $p$-divisible group associated to the formal group of $\mathbb{A}_n$ is isomorphic to $\mathbf{G}_{\mathbb{A}}^0$. Next, we introduce a spectrum $\mathbb{B}_n$ which is an amalgamation of $E_n$ and $E_{n+1}$. We define a spectrum $\mathbb{B}_n$ by \[ \mathbb{B}_n=L_{K(n)}(E_n{\wedge} E_{n+1}).\] The spectrum $\mathbb{B}_n$ is also an even periodic commutative ring spectrum. We have ring spectrum maps \[ {\rm inc}: E_n\longrightarrow \mathbb{B}_n, \qquad {\rm ch}: E_{n+1}\longrightarrow \mathbb{B}_n,\] where ${\rm inc}$ is the inclusion into the first smash factor, and ${\rm ch}$ is the inclusion into the second smash factor of $\mathbb{B}_n$. Since $\mathbb{B}_n$ is $K(n)$-local, the map ${\rm ch}$ induces a map of ring spectra \[ {\rm j}: \mathbb{A}_n\longrightarrow \mathbb{B}_n.\] In the rest of this paper we use the same symbols for the induced ring homomorphisms on $\pi_0$. We regard the degree $0$ coefficient ring $\mathbb{B}_n^0$ as an extension of $\mathbb{A}_n^0$ along ${\rm j}$. Then $\mathbb{B}_n^0$ is a complete regular local ring with maximal ideal $(p,u_1,\ldots,u_{n-1})$, and its residue field is a Galois extension of $\mathbb{F}_{p^{n+1}}((u_n))$ with Galois group isomorphic to the extended Morava stabilizer group $\mathbb{G}_n$ (see \cite[\S4]{Torii3}). Now, we study a functor represented by the degree $0$ coefficient ring $\mathbb{B}_n^0$. Let $\mathbf{H}$ be the $p$-divisible group $\mathbf{F}_n[p^{\infty}]$ associated to the formal group $\mathbf{F}_n$ over $E_n^0$. We suppose that there are two maps $f: \mathbb{A}_n^0\to R$ and $g: E_n^0\to R$ in $\mathcal{CL}$. Then we have two connected $p$-divisible groups $f^*\mathbf{G}_{\mathbb{A}}^0$ and $g^*\mathbf{H}$ over $R$. We denote by \[ {\rm Iso}(f^*\mathbf{G}_{\mathbb{A}}^0, g^*\mathbf{H})\] the set of all isomorphisms of $p$-divisible groups between $f^*\mathbf{G}_{\mathbb{A}}^0$ and $g^*\mathbf{H}$, and by \[ {\rm Hom}_{f,g}^c(\mathbb{B}_n^0,R) \] the set of all maps of local rings $h: \mathbb{B}_n^0 \to R$ such that $h\circ {\rm j}=f$ and $h\circ {\rm inc}=g$. \begin{lemma}\label{lemma:MUPMUP-variation} There is a bijection between ${\rm Iso}(f^*\mathbf{G}_{\mathbb{A}}^0, g^*\mathbf{H})$ and ${\rm Hom}_{f,g}^c(\mathbb{B}_n^0,R)$. \end{lemma} \proof Let ${\rm MP}$ be the periodic complex bordism spectrum. We have $\pi_0(E_n\wedge \mathbb{A}_n)= E_{n}^0\otimes_{{\rm MP}_0}{\rm MP}_0({\rm MP}) \otimes_{{\rm MP}_0}\mathbb{A}_n^0$.
Since $\mathbb{B}_n\simeq L_{K(n)}(E_n\wedge \mathbb{A}_n)$, the coefficient ring $\mathbb{B}_n^0$ is a completion of $\pi_0(E_n\wedge \mathbb{A}_n)$ at the ideal $I_n=(p,u_1,\ldots,u_{n-1})$. The lemma follows from the fact that ${\rm MP}_0({\rm MP})$ classifies isomorphisms of formal group laws. \qed \subsection{Level structures on $p$-divisible groups} In this subsection we recall the definition of level structures on $p$-divisible groups. If a $p$-divisible group is obtained from a one dimensional formal group, then we study a relationship between level structures on them. Let $\mathbf{X}$ be a $p$-divisible group over a commutative ring $R$. We assume that the finite subgroup scheme $\mathbf{X}[p^r]$ is embeddable in a smooth curve in the sense of \cite[(1.2.1)]{KM} for any $r\ge 0$. \begin{definition}\rm Let $M$ be a finite abelian $p$-group. A homomorphism $\phi: M\to \mathbf{X}(R)$ is said to be a level $M$-structure on $\mathbf{X}$ if the induced homomorphism $\phi[p^r]:M[p^r]\to \mathbf{X}[p^r](R)$ is an $M[p^r]$-structure on $\mathbf{X}[p^r]$ in the sense of \cite[Remark~1.10.10]{KM} for any $r\ge 0$. \end{definition} \begin{definition}\rm We define a functor \[ {\rm Level}(M,\mathbf{X}) \] over ${\rm Spec}(R)$ by assigning to an $R$-algebra $U$ the set of all the level $M$-structures on $\mathbf{X}_U$. By \cite[Lemma~1.3.4]{KM}, the functor ${\rm Level}(M,\mathbf{X})$ is representable. We denote by $D(M,\mathbf{X})$ the representing $R$-algebra so that \[ {\rm Level}(M,\mathbf{X}) ={\rm Spec}(D(M,\mathbf{X})).\] \end{definition} \if0 Let $\mathbf{Y}$ be a $p$-divisible group over a commutative ring $S$. We suppose there are a ring homomorphism $R\to S$ and an isomorphism \[ \mathbf{Y}\stackrel{\cong}{\longrightarrow} f^*\mathbf{X}. \] If we have a level $M$-structure $\phi': M\to \mathbf{Y}(S)$, then the composition \[ M\stackrel{\phi'}{\longrightarrow} \mathbf{Y}(S)\stackrel{\cong}{\to} f^*\mathbf{X}(S) =\mathbf{X}(S)\] is a level $M$-structure on $f^*\mathbf{X}=\mathbf{X}_S$. This implies an isomorphism of functors \[ {\rm Level}(M,\mathbf{Y})\cong {\rm Level}(M,\mathbf{X})\times_{{\rm Spec}(R)}{\rm Spec}(S).\] \fi Now, we suppose that there exists a one dimensional formal Lie group $\mathbf{X}'$ of finite height over $R'\in\mathcal{CL}$ and that $\mathbf{X}$ is obtained from the associated $p$-divisible group $\mathbf{X}'[p^{\infty}]$ by base change along a ring homomorphism $R'\to R$. By taking a coordinate $x$ of $\mathbf{X}'$, we have a $p^r$-series $[p^r](x)\in R'\power{x}$ of the associated formal group law. The Weierstrass preparation theorem implies that there is a unique monic polynomial $g_r(x)\in R'[x]$ such that $[p^r](x)$ is a unit multiple of $g_r(x)$ in $R'\power{x}$. In this case the subgroup scheme $\mathbf{X}[p^r]$ is given by \[ \mathbf{X}[p^r]\cong {\rm Spec}(R[x]/(g_r(x))).\] Note that we can embed $\mathbf{X}[p^r]$ in the smooth curve ${\rm Spec}(R[x])$ for any $r\ge 0$. We shall describe the above definition of level $M$-structure more explicitly. For a section $s\in \mathbf{X}[p^r](U)$, we denote by $x(s)$ the image of $x$ under the $R$-algebra homomorphism $s^*: R[x]/(g_r(x))\to U$. By \cite[Lemma~1.10.11]{KM}, there is at most one closed subgroup scheme $K$ of $\mathbf{X}[p^r]_U$ such that $(K,\phi)$ is an $M[p^r]$-structure on $\mathbf{X}[p^r]_U$. 
If such a $K$ exists, then the set $\{\phi(m)|\ m\in M[p^r]\}$ is a full set of sections of $K$ by \cite[Theorem~1.10.1]{KM}, and hence the product \begin{align}\label{align:product-divisor} \prod_{m\in M[p^r]}(x-x(\phi(m))) \end{align} divides $g_r(x)$ in $U[x]$. Conversely, if the product~(\ref{align:product-divisor}) divides $g_r(x)$, then the divisor \[ K=\sum_{m\in M[p^r]}[\phi(m)] \] formed in the smooth curve ${\rm Spec}(U[x])$ is a subgroup scheme of $\mathbf{X}[p^r]_U$ by using \cite[Proposition~32]{Strickland}, and hence $(K,\phi)$ is an $M[p^r]$-structure on $\mathbf{X}[p^r]_U$. Therefore, $\phi$ is a level $M$-structure if and only if the product~(\ref{align:product-divisor}) divides $g_r(x)$ in $U[x]$ for any $r\ge 0$. Recall that the functor ${\rm Level}(M,\mathbf{X}')$ over ${\rm Spf}(R')$ is represented by $D(M,\mathbf{X}')$. By the above argument, we obtain the following proposition. \begin{proposition}\label{prop:level-representability-specR} Let $\mathbf{X}'$ be a one dimensional formal Lie group over $R'\in\mathcal{CL}$ of finite height. If the $p$-divisible group $\mathbf{X}$ is obtained from the associated $p$-divisible group $\mathbf{X}'[p^{\infty}]$ by base change along a map $R'\to R$, then \[ D(M,\mathbf{X})\cong R\otimes_{R'}D(M,\mathbf{X}').\] \end{proposition} \subsection{Extensions of $p$-divisible groups} Let $\mathbf{X}$ be a $p$-divisible group over a commutative ring $R$ such that $\mathbf{X}[p^r]$ is embeddable in a smooth curve for any $r\ge 0$. We suppose that there is an exact sequence \[ 0\longrightarrow \mathbf{Y}\longrightarrow \mathbf{X}\stackrel{q}{\longrightarrow} \mathbf{E}\longrightarrow 0\] of $p$-divisible groups, where $\mathbf{E}$ is \'{e}tale. In this subsection we study a relationship between level structures on $\mathbf{X}$ and $\mathbf{Y}$. The map $q: \mathbf{X}\to\mathbf{E}$ induces a homomorphism $q: \mathbf{X}(R)\to \mathbf{E}(R)$ of abelian groups. For a homomorphism $\phi: M\to \mathbf{X}(R)$, we set $N=\ker (q\circ \phi)$. The restriction of $\phi$ to $N$ induces a homomorphism $\phi': N\to \mathbf{Y}(R)$. \begin{proposition} \label{prop:general-level-restriction-height-change} The homomorphism $\phi$ is a level $M$-structure on $\mathbf{X}$ if and only if the restriction $\phi'$ is a level $N$-structure on $\mathbf{Y}$. \end{proposition} \proof For any $r\ge 0$, we have a commutative diagram \[ \begin{array}{ccccccccc} 0&\longrightarrow& N[p^r]& \longrightarrow& M[p^r]& \longrightarrow& M[p^r]/N[p^r]& \longrightarrow& 0 \\[1mm] &&\phantom{\mbox{$\scriptstyle \phi'$}} \bigg\downarrow \mbox{$\scriptstyle \phi'$} & &\phantom{\mbox{$\scriptstyle \phi$}}\bigg\downarrow \mbox{$\scriptstyle \phi$} & &\bigg\downarrow& \\[3mm] 0&\longrightarrow& \mathbf{Y}[p^r](R)& \longrightarrow& \mathbf{X}[p^r](R)& \longrightarrow& \mathbf{E}[p^r](R),& & \\ \end{array} \] where each row is exact and the right vertical arrow is injective. The proposition follows in the same way as in the proof of \cite[Proposition~1.11.2]{KM}. \if0 Suppose $\phi$ is a level $M$-structure. Then the restriction $\phi|_N: N\to \mathbf{X}(R)$ is a level $N$-structure on $\mathbf{X}$. Hence there exists a finite flat subgroup scheme $K$ of $\mathbf{X}[p^r]$, the map $\phi|_{N[p^r]}: N[p^r]\to \mathbf{X}[p^r](R)$ factors through $K(R)$, and the set $\{\phi(n)|\ n\in N[p^r] \}$ is a full set of sections of $K$. Since $R$ is a complete local ring, we see that $K$ is connected. Hence $K$ lies in the identity component of $\mathbf{X}[p^r]$. 
Therefore, $K$ is a subgroup scheme of $\mathbf{Y}[p^r]$ and $\phi'$ is a level $N$-structure on $\mathbf{Y}$. Conversely, suppose $\phi'$ is a level $N$-structure. Then there exists a closed subgroup scheme $L$ of $\mathbf{Y}[p^r]$, $\phi'$ factors through $L(R)$, and $\{\phi'(n)|\ n\in N[p^r]\}$ is a full set of sections of $L$. We have an exact sequence of abelian groups $0\to N[p^r]\to M[p^r]\to M[p^r]/N[p^r]\to 0$. Taking a set theoretic section $s: M[p^r]/N[p^r]\to M[p^r]$, we obtain an isomorphism of sets $N[p^r]\times (M[p^r]/N[p^r])\stackrel{\cong}{\to} M[p^r]$. Furthermore, the composition $\phi\circ s:M[p^r]/N[p^r]\to \mathbf{X}[p^r](R)$ induces a closed embedding \[ \mathbf{Y}[p^r]\times_R (M[p^r]/N[p^r])_R\to \mathbf{X}[p^r]\] and a commutative diagram \[ \begin{array}{ccc} N[p^r]\times (M[p^r]/N[p^r]) & \stackrel{\cong}{\longrightarrow} & M[p^r] \\[1mm] {\mbox{$\scriptstyle \phi'\times 1$}}\bigg\downarrow \phantom{\mbox{$\scriptstyle \phi'\times 1$}} & & \phantom{\mbox{$\scriptstyle \phi$}} \bigg\downarrow{\mbox{$\scriptstyle \phi$}} \\[3mm] \mathbf{Y}[p^r](R)\times (M[p^r]/N[p^r]) & \stackrel{}{\longrightarrow} & \mathbf{X}[p^r](R). \end{array}\] Hence $L\times_R (M[p^r]/N[p^r])_R$ is a closed subscheme of $\mathbf{X}[p^r]$, $\phi$ factors through $L(R)\times (M[p^r]/N[p^r])$, and $\{\phi(m)|\ m\in M[p^r]\}$ is a full set of sections of $L\times_{R} (M[p^r]/N[p^r])_R$. By using \cite[Proposition~32]{Strickland}, we see that $L\times_R (M[p^r]/N[p^r])_R$ is a subgroup scheme of $\mathbf{X}[p^r]$, and hence $\phi$ is a level $M$-structure. \fi \qed For a level $M$-structure $\phi: M\to \mathbf{X}(R)$, we have a finite flat subgroup scheme $[\phi(M)]$ in $\mathbf{X}$. We denote by $\mathbf{X}/[\phi(M)]$ the quotient as a fppf sheaf of abelian groups. By \cite[Proposition~2.7]{RZ}, $\mathbf{X}/[\phi(M)]$ is a $p$-divisible group. Using the snake lemma, we obtain the following proposition. \begin{proposition} \label{prop:exact-seq-quotient-pdivisible} Let $\phi: M\to \mathbf{X}(R)$ be a level $M$-structure on $\mathbf{X}$ and let $\phi': N\to \mathbf{Y}(R)$ be the induced level $N$-structure on $\mathbf{Y}$ by Proposition~\ref{prop:general-level-restriction-height-change}, where $N=\ker (q\circ \phi)$. Then there exists an exact sequence \[ 0 \longrightarrow \mathbf{Y}/[\phi'(N)] \longrightarrow \mathbf{X}/[\phi(M)] \longrightarrow \mathbf{E}/(M/N)_R \longrightarrow 0\] of $p$-divisible groups over $R$, where $(M/N)_R$ is a constant group scheme. \end{proposition} \if0 There is a universal level $M$-structure $\phi^{\rm univ}$ on the formal group $\mathbf{X}'_{D(M,\mathbf{X}')}$, and we can construct the quotient formal group $\mathbf{X}'_{D(M,\mathbf{X}')}/[\phi^{\rm univ}(M)]$ (see \cite[Theorem~19]{Strickland}). \begin{definition}\rm We define the quotient $p$-divisible group $\mathbf{X}[p^{\infty}]/[\phi(M)]$ to be the base change of $(\mathbf{X}'_{D(M,\mathbf{X}')}/[\phi^{\rm univ}(M)])[p^{\infty}]$ along the map ${\rm Spec}(R)\to D(M,\mathbf{X}')$: \[ \mathbf{X}[p^{\infty}]/[\phi(M)]= (\mathbf{X}'_{D(M,\mathbf{X}')}/ [\phi^{\rm univ}(M)])[p^{\infty}]_R.\] \end{definition} \begin{lemma}\label{lemma:exact-seq-pdivisible-level-quotient} There is an exact sequence \[ 0\longrightarrow [\phi(M)] \longrightarrow \mathbf{X}[p^{\infty}] \longrightarrow \mathbf{X}[p^{\infty}]/[\phi(M)] \longrightarrow 0 \] of fppf abelian group sheaves on ${\rm Spec}(R)$ for any level $M$-structure $\phi$ on $\mathbf{X}[p^{\infty}]$. 
\end{lemma} \proof We have an exact sequence \[ 0\to [\phi^{\rm univ}(M)] \to \mathbf{X}'_{D(M,\mathbf{X}')}[p^{\infty}] \to (\mathbf{X}'_{D(M,\mathbf{X}')}/[\phi^{\rm univ}(M)])[p^{\infty}] \to 0\] of fppf sheaves of abelian groups on ${\rm Spec}(D(M,\mathbf{X}'))$. The desired exact sequence is obtained by base change along the map ${\rm Spec}(R)\to {\rm Spec}(D(M,\mathbf{X}'))$. \qed Suppose we have a monomorphism of abelian groups $\phi'': M\to \mathbf{E}(R)$, where $M$ is a finite abelian $p$-group. Let $k_s$ be the separable closure of the residue field $k$ of $R$. Since $\mathbf{E}$ is etale, it is determined by the continuous ${\rm Gal}(k_s/k)$-module $\mathbf{E}(k_s)$. We form the quotient $\mathbf{E}(k_s)/L$. Note that $\mathbf{E}(k_s)/L$ is isomorphic to $(\mathbb{Q}_p/\mathbb{Z}_p)^{h''}$ as an abelian group. We denote by $\mathbf{E}/M$ the etale $p$-divisible group over ${\rm Spec}(R)$ such that $(\mathbf{E}/L)(k_s)\cong \mathbf{E}(k_s)/M$ as continuous ${\rm Gal}(k_s/k)$-modules. By the above construction of $\mathbf{E}/M$, we have the following lemma. \begin{lemma}\label{lemma:exact-seq-etale-pdivible-quotient} There is an exact sequence \[ 0\longrightarrow M_R \longrightarrow \mathbf{E} \longrightarrow \mathbf{E}/M \longrightarrow 0\] of fppf sheaves of abelian groups on ${\rm Spec}(R)$. \end{lemma} Let $\phi: M\to \mathbf{X}[p^{\infty}](R)$ be a level $M$-structure, and we set $N=\ker(q\circ \phi)$. We have the quotient $p$-divisible groups $\mathbf{X}[p^{\infty}]/[\phi(M)]$ and $\mathbf{Y}[p^{\infty}]/[\phi'(N)]$. The map $\phi$ induces an injective homomorphism $(M/N)\to \mathbf{E}(R)$ of abelian groups, and we can form the quotient etale $p$-divisible group $\mathbf{E}/(M/N)$. \fi \if0 In order to prove Theorem~\ref{thm:exact-seq-quotient-pdivisible}, we need the following lemma. \begin{lemma}\label{lemma:exact-seq-divisors} There is an exact sequence \[ 0\longrightarrow [\phi'(N)]\longrightarrow [\phi(M)]\longrightarrow (M/N)_R\longrightarrow 0\] of fppf sheaves of abelian groups over ${\rm Spec}(R)$. \end{lemma} \proof We have an isomorphism $[\phi(M)]\cong {\rm Spec}(R[x]/f(x))$ by taking a coordinate $x$ of the formal group $\mathbf{X}'$, where $f(x)=\prod_{m\in M}(x-x(\phi(m)))$. Since $\mathbf{Y}[p^{\infty}]$ is formal and $\mathbf{E}$ is etale, $x(\phi(m))\equiv 0\mod \mathfrak{m}$ if and only if $m\in N$, where $\mathfrak{m}$ is the maximal ideal of $R$. Furthermore, $x(\phi(m))\equiv x(\phi(m'))\mod \mathfrak{m}$ if and only if $m-m'\in N$. For $l\in M/N$, we set $f_l(x)=\prod_{[m]=l}(x-x(\phi(m)))$, where the product ranges $m\in M$ such that $[m]=l$ in $M/N$. By Hensel's Lemma, $R[x]/(f(x))\cong \prod_{l\in M/N} R[x]/(f_l(x))$. The canonical map $\prod_{l\in M/N}R\to \prod_{l\in M/N} R[x]/(f_l(x))$ gives rise to an epimorphism $[\phi(M)]\to (M/N)_R$ of fppf sheaves of abelian groups, whose kernel is $[\phi'(N)]$. 
\qed \proof[Proof of Theorem~\ref{thm:exact-seq-quotient-pdivisible}] We have a commutative diagram \[ \begin{array}{ccccccccc} & & 0 & & 0 & & 0 & & \\%[1mm] & & \downarrow & & \downarrow & & \downarrow & & \\[2mm] 0 & \rightarrow & [\phi'(N)] & \rightarrow & [\phi(M)] & \rightarrow & (M/N)_R & \rightarrow & 0 \\[1mm] & & \downarrow & & \downarrow & & \downarrow & & \\[2mm] 0 & \rightarrow & \mathbf{Y}[p^{\infty}] & \rightarrow & \mathbf{X}[p^{\infty}] & \rightarrow & \mathbf{E} & \rightarrow & 0 \\[1mm] & & \downarrow & & \downarrow & & \downarrow & & \\[2mm] 0 & \rightarrow & \mathbf{Y}[p^{\infty}]/[\phi'(N)] & \rightarrow & \mathbf{X}[p^{\infty}]/[\phi(M)] & \rightarrow & \mathbf{E}/(M/N) & \rightarrow & 0 \\[1mm] & & \downarrow & & \downarrow & & \downarrow & & \\[2mm] & & 0 & & 0 & & 0 & & \\%[1mm] \end{array}\] of fppf sheaves of abelian groups on ${\rm Spec}(R)$. By Lemma~\ref{lemma:exact-seq-pdivisible-level-quotient} and Lemma~\ref{lemma:exact-seq-etale-pdivible-quotient}, each vertical sequence is exact. The top horizontal sequence is exact by Lemma~\ref{lemma:exact-seq-divisors}, and the middle horizontal sequence is exact by definition. Hence the bottom horizontal sequence is exact by the snake lemma. \qed \fi Next, we consider a decomposition of the representing ring $D(M,\mathbf{X})$ of the functor ${\rm Level}(M,\mathbf{X})$. The map $q: \mathbf{X}\to \mathbf{E}$ induces a natural transformation ${\rm Level}(M,\mathbf{X}) \to{\rm Hom}(M,\mathbf{E})$ of functors. We identify $\pi\in {\rm Hom}(M,\mathbf{E})(R)$ with a map $\pi: {\rm Spec}(R)\to {\rm Hom}(M,\mathbf{E})$. \begin{definition}\rm For $\pi\in {\rm Hom}(M,\mathbf{E})(R)$, we define a functor \[ {\rm Level}(M,\mathbf{X},\pi)\] over ${\rm Spec}(R)$ to be the pullback of ${\rm Level}(M,\mathbf{X})\to{\rm Hom}(M,\mathbf{E})$ along $\pi$. Since ${\rm Level}(M,\mathbf{X})$ is representable, so is ${\rm Level}(M,\mathbf{X},\pi)$. We define $D(M,\mathbf{X},\pi)$ to be the representing ring so that \[ {\rm Level}(M,\mathbf{X},\pi)= {\rm Spec}(D(M,\mathbf{X},\pi)).\] \end{definition} \if0 For $\pi\in{\rm Hom}(M,\mathbf{E})(R)$, we set $N=\ker(\pi: M\to \mathbf{E}(R))$. By Proposition~\ref{prop:general-level-restriction-height-change}, there is a natural transformation \[ {\rm Level}(M,\mathbf{X},\pi) \longrightarrow {\rm Level}(N,\mathbf{Y}).\] \fi By Proposition~\ref{prop:general-level-restriction-height-change}, we obtain the following proposition. \begin{proposition} \label{prop:trivilhom-general} For the zero homomorphism $0\in {\rm Hom}(M,\mathbf{E})(R)$, we have an isomorphism \[ D(M,\mathbf{X},0)\cong D(M,\mathbf{Y}).\] \end{proposition} \if0 \begin{lemma}\label{lemma:finite-flat-general} As an $R$-module, $D(A,\mathbf{X}[p^{\infty}],\pi)$ is finitely generated free. \end{lemma} \proof The section $\pi$ gives rise to a decomposition ${\rm Hom}(A,\mathbf{E})={\rm Spec}(R)\coprod {\rm Hom}(A,\mathbf{E})'$. This implies that $D(A,\mathbf{X}[p^{\infty}],\pi)$ is a direct summand of $R\otimes_TD(A,\mathbf{X}')$. Since $D(A,\mathbf{X}')$ is a finitely generated free $T$-module and $R$ is a local ring, $D(A,\mathbf{X}[p^{\infty}],\pi)$ is a finitely generated free $R$-module. \qed \fi In the following of this subsection we assume that $\mathbf{E}$ is constant. Then we have a decomposition \[ {\rm Level}(M,\mathbf{X})= \coprod_{\pi}{\rm Level}(M,\mathbf{X},\pi),\] where the coproduct ranges over $\pi\in {\rm Hom}(M,\mathbf{E}(R))$. Hence we obtain the following proposition. 
\begin{proposition}\label{prop:product-decomposition-DAGp} If $\mathbf{E}$ is constant, then there exists a decomposition \[ D(M,\mathbf{X})\cong \prod_{\pi} D(M,\mathbf{X},\pi),\] where the product ranges over $\pi\in {\rm Hom}(M,\mathbf{E}(R))$. \end{proposition} Next, we study the representing ring $D(M,\mathbf{X},\pi)$ for $\pi: M\to \mathbf{E}(R)$. We set $N=\ker\pi$. \begin{proposition}\label{prop:pi-component-empty-local} We assume that $R$ is a local ring with residue field $k$ of characteristic $p$ and that $\mathbf{Y}_k$ is formal. If {\rm $p$-rank($N$)} is at most the height of $\mathbf{Y}$, then $D(M,\mathbf{X},\pi)$ is a local ring. Otherwise, ${\rm Level}(M,\mathbf{X},\pi)=\emptyset$, and hence $D(M,\mathbf{X},\pi)=0$. \end{proposition} \proof Let $\overline{k}$ be an algebraic closure of $k$. It is sufficient to show that ${\rm Level}(M,\mathbf{X},\pi)(\overline{k})$ is an one element set if $p$-rank($N$) is at most the height of $\mathbf{Y}$ and that it is empty otherwise. Since the map $q$ induces an isomorphism $\mathbf{X}(\overline{k})\stackrel{\cong} {\to}\mathbf{E}(\overline{k})$ of abelian groups, there is at most one homomorphism $\phi: M\to \mathbf{X}(\overline{k})$ such that $q\circ \phi=\pi$. By Proposition~\ref{prop:general-level-restriction-height-change}, $\phi$ is a level $M$-structure on $\mathbf{X}_{\overline{k}}$ if and only if the restriction $\phi'$ is a level $N$-structure on $\mathbf{Y}_{\overline{k}}$. Since $\mathbf{Y}_{\overline{k}}$ is formal, this is the case if and only if $p$-rank($N$) is at most the height of $\mathbf{Y}_{\overline{k}}$. \qed \subsection{Level structures on $\mathbf{G}_{\mathbb{B}}$} \if0 Let $\mathbf{F}[p^{\infty}]$ be a $p$-divisible group over a commutative ring $R$, and let $M$ be a finite abelian group of order $p^r$. We assume that $\mathbf{F}[p^r]$ is embeddable as a closed subscheme of a smooth curve (cf.~\cite[(1.2.1)]{KM}). Suppose we have a homomorphism $\phi$ from $M$ to $\mathbf{F}[p^{\infty}](R)$. We say that $\phi$ is a level $M$ structure on $\mathbf{F}[p^{\infty}]$ if $\phi$ is an $M$-structure on $\mathbf{F}_{n+1}[p^r]$. Recall that $\phi$ is an $M$-structure on $\mathbf{F}[p^r]$ if there exists a closed subgroup scheme $G$ of $\mathbf{F}[p^r]$, $\phi$ lands in $G(R)$, and the $p^r$-points $\phi(m)\ (m\in M)$ are a full set of sections of $G$ (cf. \cite[Remark~1.10.10]{KM}). Since $\mathbf{F}[p^r]$ is embeddable as a closed subscheme of a smooth curve, this is well-defined by \cite[Lemma~1.10.11]{KM}. If $\mathbf{F}_n[p^{\infty}]$ is a $p$-divisible group associated to the formal group $\mathbf{F}_n$ over $E_n^0$, the above definition of the level $M$-structures on $\mathbf{F}_n[p^{\infty}]$ is compatible with that of the level $M$-structures on $\mathbf{F}_n$ in \S\ref{subsection:level-structure-Morava-E} (see \cite[Proposition~32]{Strickland} and \cite[Lemma~1.10.11]{KM}). We define a functor \[ {\rm Level}(M,\mathbf{F}[p^{\infty}]) \] from the category of commutative $R$-algebras to the category of sets, which assigns to $S$ the set of all level $M$-structures on $\mathbf{F}[p^{\infty}]_S$ over $S$. By \cite[Lemma~1.10.11]{KM}, the functor ${\rm Level}(M,\mathbf{F}[p^{\infty}])$ is representable by an $R$-algebra $D_R(M)$: \[ {\rm Level}(M,\mathbf{F}_{n+1}[p^{\infty}])(S) \cong {\rm Hom}_{\mbox{\scriptsize$R$-algebra}}(D_{R}(M),S).\] \fi Recall that $\mathbf{G}=\mathbf{F}_{n+1}[p^{\infty}]$ is the $p$-divisible group associated to the formal Lie group $\mathbf{F}_{n+1}$ over $E_{n+1}^0$. 
We denote by $\mathbf{G}_{\mathbb{B}}$ the base change along the map ${\rm ch}: E_{n+1}^0\to\mathbb{B}_n^0$. In this subsection we study level structures on the $p$-divisible group $\mathbf{G}_{\mathbb{B}}$. First, we recall a connected-\'{e}tale exact sequence associated to the $p$-divisible group $\mathbf{G}_{\mathbb{B}}$. Let $\mathbf{H}_{\mathbb{B}}$ be the $p$-divisible group obtained from $\mathbf{H}$ by the base change along the map ${\rm inc}: E_n^0\to\mathbb{B}_n^0$, where $\mathbf{H}$ is the $p$-divisible group associated to the formal Lie group $\mathbf{F}_n$ over $E_n^0$. The maps ${\rm ch}: E_{n+1}^0(\mathbb{CP}^{\infty})\to \mathbb{B}_n^0(\mathbb{CP}^{\infty})$ and ${\rm inc}: E_n^0(\mathbb{CP}^{\infty})\to \mathbb{B}_n^0(\mathbb{CP}^{\infty})$ induce an isomorphism \[ \theta: \mathbf{G}_{\mathbb{B}}^0\stackrel{\cong}{\to} \mathbf{H}_{\mathbb{B}}\] of connected $p$-divisible groups, where $\mathbf{G}_{\mathbb{B}}^0$ is the identity component of $\mathbf{G}_{\mathbb{B}}$. By \cite[Theorem~5.3]{Torii6}, there is an exact sequence of $p$-divisible groups \begin{align}\label{eq:fundamental-exact-squence-Fn-Fn+1-constant} 0\to \mathbf{H}_{\mathbb{B}}\longrightarrow \mathbf{G}_{\mathbb{B}} \stackrel{\phantom{q}}{\longrightarrow} (\mathbb{Q}_p/\mathbb{Z}_p)_{\mathbb{B}}\to 0 \end{align} over $\mathbb{B}_n^0$. Next, we study the ring $D(M,\mathbf{G}_{\mathbb{B}})$ for a finite abelian $p$-group $M$. By Proposition~\ref{prop:level-representability-specR}, we have an isomorphism \[ D(M,\mathbf{G}_{\mathbb{B}})\cong {\mathbb{B}_n^0}\otimes_{E_{n+1}^0}D(M,\mathbf{F}_{n+1})\] of $\mathbb{B}_n^0$-algebras. By Propositions~\ref{prop:product-decomposition-DAGp} and \ref{prop:pi-component-empty-local}, we have a decomposition \[ D(M,\mathbf{G}_{\mathbb{B}})\cong \prod_{\pi} D(M,\mathbf{G}_{\mathbb{B}},\pi),\] where the product ranges over $\pi\in {\rm Hom}(M,\mathbb{Q}_p/\mathbb{Z}_p)$ such that $p$-rank$(\ker(\pi))\le n$. \if0 Let $\phi\in {\rm Level}(M,\mathbf{G}_{\mathbb{B}},\pi)(R)$ for a $\mathbb{B}_n^0$-algebra $R$. We set $N=\ker(\pi)$. By Proposition~\ref{prop:general-level-restriction-height-change}, the restricting $\phi'$ of $\phi$ to $B$ is a level $N$-structure on $\mathbf{H}_R$. Hence we obtain a homomorphism \[ D(N,\mathbf{H}_{\mathbb{B}})\longrightarrow D(M,\mathbf{G}_{\mathbb{B}},\pi).\] Note that we have an isomorphism \[ D(N,\mathbf{H}_{\mathbb{B}})\cong \mathbb{B}\otimes_{E_n^0}D(N,\mathbf{F}_n)\] so that $D(N,\mathbf{H}_{\mathbb{B}})$ is a regular local ring. \fi \begin{proposition} If {\rm $p$-rank$(\ker(\pi))\le n$}, then $D(M,\mathbf{G}_{\mathbb{B}},{\pi})$ is a complete regular local ring. \end{proposition} \proof We set $N=\ker(\pi)$. For simplicity, we put $D(M)=D(M,\mathbf{G}_{\mathbb{B}},{\pi})$ and $D(N)=D(N,\mathbf{H}_{\mathbb{B}})$. By Proposition~\ref{prop:pi-component-empty-local}, $D(M)$ is a complete local ring. If $\pi=0$, then $D(M)$ is isomorphic to $D(N)$ by Proposition~\ref{prop:trivilhom-general}, and hence it is regular. Thus, we assume that $\pi\neq 0$. Since the Krull dimension of $D(M)$ is $n$, it is sufficient to show that the maximal ideal of $D(M)$ is generated by $n$ elements. Let $R=E_{n+1}^0/(p,u_1,\ldots,u_{n-1})\cong \mathbb{F}_{p^{n+1}}\power{u_n}$, and let $\mathbf{V}$ be the formal group obtained from $\mathbf{F}_{n+1}$ by base change along the map $E_{n+1}^0\to R$. We suppose that $M/N\cong \mathbb{Z}/p^r$. 
By the Weierstrass preparation theorem, there is a decomposition of the $p^r$-series $[p^r](x)$ of $\mathbf{V}$ as $[p^r](x)=g_r(x^{p^{nr}}) u_r(x)$, where $g_r(y)$ is a monic polynomial in $R[y]$ of degree $p^r$, and $u_r(x)$ is a unit in $R\power{x}$. Let $L$ be the residue field of the local ring $\mathbb{B}_n^0$. The group scheme $\mathbf{V}[p^r]_L$ over $L$ is represented by the ring $L[x]\left/\left(g_r\left(x^{p^{nr}}\right)\right)\right.$ and the etale quotient $\mathbf{V}[p^{r}]_L^{\rm et}$ is represented by $L[y]/(g_r(y))$. Furthermore, the epimorphism $q:\mathbf{V}[p^r]_L\to \mathbf{V}[p^r]^{\rm et}_L$ is represented by a ring homomorphism $L[y]/(g_r(y))\to L[x]\left/\left(g_r\left(x^{p^{nr}}\right)\right)\right.$ given by $y\mapsto x^{p^{nr}}$. We fix an isomorphism $\mathbf{V}[p^{\infty}]_L^{\rm et}\cong (\mathbb{Q}_p/\mathbb{Z}_p)_L$, and take $m\in M$ which is a generator in $M/N$. Let $U$ be an $L$-algebra, and let $\phi: M\to \mathbf{V}[p^{\infty}]_L(U)$ be a homomorphism such that $\phi(N)=0$ and $q\circ\phi=\pi$. By Proposition~\ref{prop:general-level-restriction-height-change}, we see that $\phi$ is a level $M$-structure on $\mathbf{V}[p^{\infty}]_U$. Hence we obtain an isomorphism of functors \[ {\rm Level}(M,\mathbf{G}_{\mathbb{B}},\pi) \times_{{\rm Level}(N,\mathbf{H}_{\mathbb{B}})} {\rm Spec}(L)\cong \mathbf{V}[p^r]_L\times_{\mathbf{V}[p^r]_L^{\rm et}} \{\pi(m)\},\] where $\{\pi(m)\}$ is the connected component of $\mathbf{V}[p^{\infty}]_L^{\rm et}\cong (\mathbb{Q}_p/\mathbb{Z}_p)_L$ corresponding to $\pi(m)\in \mathbb{Q}_p/\mathbb{Z}_p$. This implies an isomorphism of rings \[ D(M)\otimes_{D(N)} L\cong L[x] \left/\left(x^{p^{nr}}-\alpha_r\right)\right. ,\] where $\alpha_r\in L$ is a root of $g_r(y)$ corresponding to $\{\pi(m)\}$. By \cite[Lemma~2.7]{Torii6}, we see that $L[x]/\left(x^{p^{n}}-\alpha_r\right)$ is a field. This implies that the maximal ideal of $D(M)$ is $\mathfrak{m}D(M)$, where $\mathfrak{m}$ is the maximal ideal of $D(N)$. Since $D(N)$ is regular of dimension $n$, $\mathfrak{m}$ is generated by $n$ elements. Hence the maximal ideal of $D(M)$ is generated by $n$ elements and $D(M)$ is a regular local ring. \qed \if0 The functor ${\rm Level}(M,\mathbf{F}_{n+1}[p^{\infty}])$ is a closed subfunctor of $\hom(M,\mathbf{F}_{n+1}[p^{\infty}])$, and we have a decomposition \[ \hom(M,\mathbf{F}_{n+1}[p^{\infty}])= \coprod_{\pi}\hom_{\pi}(M,\mathbf{F}_{n+1}[p^{\infty}]),\] where $\hom_{\pi}(M,\mathbf{F}_{n+1}[p^{\infty}])$ is a closed subfunctor which assigns to a complete $\mathbb{B}_n^0$-algebra $R$ the set of all homomorphisms $\phi: M\to \mathbf{F}_{n+1}[p^{\infty}](R)$ such that $\pi^{\rm et}\circ \phi=\pi$. If we fix an isomorphism $M\cong \mathbb{Z}/p^{r_1}\mathbb{Z}\times\cdots\times \mathbb{Z}/p^{r_s}\mathbb{Z}$, then we have an isomorphism $\hom(M,\mathbf{F}_{n+1}[p^{\infty}])\cong \mathbf{F}_{n+1}[p^{r_1}]\times\cdots\times \mathbf{F}_{n+1}[p^{r_s}]$, where the product on the right hand side is taken over ${\rm Spf}(\mathbb{B}_n^0)$. For $a\in \mathbb{Q}_p/\mathbb{Z}_p[p^r]$, we denote by $\{a\}$ the corresponding component of $(\mathbb{Q}_p/\mathbb{Z}_p)_{\mathbb{B}_n}$. We set $S(p^r,a)=\mathbf{F}_{n+1}[p^r] \times_{(\mathbb{Q}_p/\mathbb{Z}_p)_{\mathbb{B}_n}}\{a\}$. We have an isomorphism \[ \hom_{\pi}(M,\mathbf{F}_{n+1}[p^{\infty}])\cong S(p^{r_1},{\pi(e_1)})\times\cdots\times S(p^{r_s},{\pi(e_s)}),\] where $e_i$ is a generator of $\mathbb{Z}/p^{r_i}\mathbb{Z} \subset M$. 
Let $\overline{L}$ be the algebraic closure of the residue field $L$ of $\mathbb{B}_n^0$. Since the surjection $\pi^{\rm et}:\mathbf{F}_{n+1}[p^{\infty}]_{\mathbb{B}_n}\to (\mathbb{Q}_p/\mathbb{Z}_p)_{\mathbb{B}_n}$ has a section over $\overline{L}$, $S(p^r,a)\cong \mathbf{F}_n[p^r]$ for any $a$. This implies that $S(p^{r_1},{\pi(e_1)})\times\cdots\times S(p^{r_s},{\pi(e_s)})\times{\rm Spec}(\overline{L})$ is connected. Hence we see that $\hom_{\pi}(M,\mathbf{F}_{n+1}[p^{\infty}])$ is connected. Since ${\rm Level}_{\pi}(M,\mathbb{F}_{n+1}[p^{\infty}])$ is a closed subfunctor of $\hom_{\pi}(M,\mathbf{F}_{n+1}[p^{\infty}])$, it is also connected. {{\color{red} $D_{\mathbb{B}_n}(M)_{\pi}$ is a complete regular ring.}} If $\pi=0$, then $D_{\mathbb{B}_n}(M)_0=\mathbb{B}_n^0\otimes_{E_n^0}D_n(M)$. It is easy to see that $\mathbb{B}_n^0\otimes_{E_n^0}D_n(N)$ is a regular local ring. If $\pi\neq 0$, we have a ring homomorphism $\mathbb{B}_n^0\otimes_{E_n^0}D_n(N)\to D_{\mathbb{B}_n}(M)_{\pi}$, where $N$ is the kernel of $\pi$. Let $\mathfrak{m}$ be the maximal ideal of $\mathbb{B}_n^0\otimes_{E_n^0}D_n(N)$, and let $m_1,\ldots,m_n$ be its regular system of parameters. We can show that \[ D_{\mathbb{B}_n}(M)_{\pi}/\mathfrak{m}D_{\mathbb{B}_n}(M)_{\pi} \cong L[X]/(X^{p^{nr}}-a),\] where $r=\log_p|M/N|$. Here $a$ is a root of \[ \frac{[p^r](X)}{[p^{r-1}](X)}\sim \prod (X^{p^{n}}-a) \mod I_n\] corresponding to the image of the generator of $M/N$ under the inclusion $M/N\hookrightarrow \mathbb{Q}_p/\mathbb{Z}_p$. By \cite[Lemma~2.7]{Torii6}, we see that $L[X]/(X^{p^{nr}}-a)$ is a field. Hence the maximal ideal of $D_{\mathbf{B}_n}(M)_{\pi}$ is generated by $n$ elements $m_1,\ldots,m_n$. Therefore, $D_{\mathbf{B}_n}(M)_{\pi}$ is a complete regular local ring. The ring $E_{n+1}^0$ is a complete local ring with the residue field of characteristic $p>0$, and the formal group $\mathbf{F}_{n+1}$ is of height $(n+1)$. Hence the ring of global sections of the subgroup scheme $\mathbf{F}_{n+1}[p^r]$ is a finite free module over $E_{n+1}^0$. Since $\mathbf{F}_{n+1}$ is a 1-dimensional formal group, $\mathbf{F}_{n+1}[p^r]$ is embeddable as a closed subscheme of a smooth curve over $E_{n+1}^0$. Taking the base change along the map ${\rm ch}_*$, $\mathbf{F}_{n+1}[p^r]_{\mathbb{B}_n}$ is finite free and embeddable as a closed subscheme of a smooth curve over ${\rm Spec}(\mathbb{B}_n^0)$. Hence we have a representable functor ${\rm Level}(A,\mathbf{F}[p^{\infty}]_{\mathbb{B}_n})$ for a finite abelian $p$-group $A$. We denote by $D_{\mathbb{B}_n}(M)$ the representing ring \[ {\rm Level}(M,\mathbf{F}_{n+1}[p^{\infty}]_{\mathbb{B}_n})=\ {\rm Spf}(D_{\mathbb{B}_n}(M)).\] Since the level $M$-structures on the $p$-divisible group $\mathbf{F}_{n+1}[p^{\infty}]$ are compatible with those on the formal group $\mathbf{F}_{n+1}$, we have an isomorphism of rings \[ D_{\mathbb{B}_n}(M)\cong \mathbb{B}_n^0 \subrel{E_{n+1}^0}{\otimes} D_{n+1}(M).\] Let $\pi$ be a homomorphism from $M$ to $\mathbb{Q}_p/\mathbb{Z}_p$. 
We denote by ${\rm Level}_{\pi}(M,\mathbf{F}_{n+1}[p^{\infty}]_{\mathbb{B}_n})$ the subfunctor of ${\rm Level}(M,\mathbf{F}_{n+1}[p^{\infty}]_{\mathbb{B}_n})$, which assigns to a complete local $\mathbb{B}_n$-algebra $R$ the set of all level $M$-structures $\phi$ on $\mathbf{F}_{n+1}[p^{\infty}]_R$ such that the composition $\phi$ with $\pi^{\rm et}:\mathbf{F}_{n+1}[p^{\infty}](R)\to (\mathbb{Q}_p/\mathbb{Z}_p)$ is $\pi$: \[ {\rm Level}_{\pi}(M,\mathbf{F}_{n+1}[p^{\infty}])(R)= \{\phi\in {\rm Level}(M,\mathbf{F}_{n+1}[p^{\infty}])(R)|\ \pi^{\rm et} \circ\phi =\pi\}.\] \begin{lemma}\label{lemma:decomposition-DBM} There is an isomorphism of $\pi_0(\mathbb{B}_n)$-algebras \[ D_{\mathbb{B}_n}(M)\cong \prod_{\pi} D_{\mathbb{B}_n}(M)_{\pi},\] where the product ranges over all homomorphisms from $M$ to $(\mathbb{Q}_p/\mathbb{Z}_p)$. \end{lemma} \proof \qed \fi \if0 Let $R$ be a complete local ring and let $j: D(A,\mathbf{G}_{\mathbb{B}},\pi)\to R$ be a local homomorphism. We have a $p$-divisible group $j^*\mathbf{F}_{n+1}[p^{\infty}]$ over $R$ and a level $A$-structure $\phi: A\to \mathbf{F}_{n+1}[p^{\infty}](R)$ such that $q\circ\phi=\pi$. Since we have an exact sequence \[ 0\to \mathbf{F}_n[p^{\infty}]_R \longrightarrow \mathbf{F}_{n+1}[p^{\infty}]_R \longrightarrow (\mathbb{Q}_p/\mathbb{Z}_p)_R\to 0 \] of fppf sheaves of abelian groups on ${\rm Spf}(R)$, the map $\phi$ induces a homomorphism $\phi': N\to \mathbf{F}_n[p^{\infty}](R)$, and we obtain the following commutative diagram \[ \begin{array}{ccccccccc} 0 & \to & N & \longrightarrow & M & \longrightarrow & M/N & \to & 0\\[1mm] & & \phantom{\mbox{\scriptsize$\phi'$}} \bigg\downarrow\mbox{\scriptsize$\phi'$} & & \phantom{\mbox{\scriptsize$\phi$}} \bigg\downarrow\mbox{\scriptsize$\phi$} & & \bigg\downarrow & & \\[3mm] 0 & \to & \mathbf{F}_n[p^{\infty}](R) & \longrightarrow & \mathbf{F}_{n+1}[p^{\infty}](R) & \longrightarrow &\phantom{,}\ \mathbb{Q}_p/\mathbb{Z}_p\ ,& & \\[2mm] \end{array}\] where two rows are exact. By \cite[Proposition~1.11.2]{KM}, $\phi'$ is a level $N$-structure of $\mathbf{F}_n[p^{\infty}]$ over $R$. Hence we obtain the following lemma. \begin{lemma} There is an exact sequence of fppf sheaves of abelian groups on ${\rm Spf}(D_{\mathbb{B}_n}(M)_{\pi})$ \[ 0\to j_{\pi}^*\mathbf{H}_{\mathbb{B}}/[N] \longrightarrow j_{\pi}^*\mathbf{G}_{\mathbb{B}}/[M] \longrightarrow j_{\pi}^*(\mathbb{Q}_p/\mathbb{Z}_p)_{\mathbb{B}}\to 0.\] Hence the identity component of $j_{\pi}^*\mathbf{F}_{n+1}[p^{\infty}]_{\mathbb{B}_n}/[M]$ is isomorphic to $j_{\pi}^*\mathbf{F}_n[p^{\infty}]_{\mathbb{B}_n}/[N]$. \end{lemma} Assigning $\phi'$ to $\phi$, we obtain a natural transformation \[ {\rm Level}_{\pi}(M,\mathbf{F}_{n+1}[p^{\infty}]_{\mathbb{B}_n}) \longrightarrow {\rm Level}(N,\mathbf{F}_n[p^{\infty}]_{\mathbb{B}_n}),\] which induces a local $\mathbb{B}_n^0$-algebra homomorphism \[ \mathbb{B}_n^0\subrel{E_n^0}{\otimes}D_n(N) \longrightarrow D_{\mathbb{B}_n}(M)_{\pi}.\] \fi Recall that $\Lambda^{n+1}=(\mathbb{Q}_p/\mathbb{Z}_p)^{n+1}$. For a nonnegative integer $r$, we set \[ \begin{array}{rcl} {D}_{\mathbb{B}}(r)&=& D(\Lambda^{n+1}[p^r],\mathbf{G}_{\mathbb{B}}),\\[2mm] {D}_{\mathbb{B}}(r,\pi) &=& D(\Lambda^{n+1}[p^r],\mathbf{G}_{\mathbb{B}},\pi),\\ \end{array}\] where $\pi: \Lambda^{n+1}[p^r]\to \mathbb{Q}_p/\mathbb{Z}_p$ is a homomorphism. 
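Note that $\Lambda^{n+1}[p^r]\cong(\mathbb{Z}/p^r\mathbb{Z})^{n+1}$. As a small sanity check, for $r=0$ the group $\Lambda^{n+1}[p^0]$ is trivial; it admits a unique homomorphism to $\mathbf{G}_{\mathbb{B}}(U)$ for every $U$, and this homomorphism is automatically a level structure, so that \[ D_{\mathbb{B}}(0)\cong \mathbb{B}_n^0.\]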
The inclusion $\Lambda^{n+1}[p^r] \to \Lambda^{n+1}[p^{r+1}]$ induces a $\mathbb{B}_n^0$-algebra homomorphism $D_{\mathbb{B}}(r)\to D_{\mathbb{B}}(r+1)$, and we define a $\mathbb{B}_n^0$-algebra $D_{\mathbb{B}}$ to be a colimit of $\{D_{\mathbb{B}}(r)\}_{r\ge 0}$: \[ D_{\mathbb{B}}=\ \subrel{r}{\rm colim}D_{\mathbb{B}}(r).\] Note that $D_{\mathbb{B}}(r)\cong \mathbb{B}_n^0\otimes_{E_{n+1}^0}D_{n+1}(r)$ and $D_{\mathbb{B}}\cong \mathbb{B}_n^0\otimes_{E_{n+1}^0}D_{n+1}$. \if This implies that the induced homomorphism $\phi': N\to \mathbf{F}_n[p^{\infty}](R)$ is not a level $N$-structure, and that $\phi$ is not a level $M$-structure. Hence we see that ${\rm Level}_{\pi} (\Lambda^{n+1}(r),\mathbf{F}_{n+1}[p^{\infty}])=\emptyset$. \fi \begin{proposition} There is an isomorphism \[ D_{\mathbb{B}}(r)\cong \prod_{\pi} D_{\mathbb{B}}(r,{\pi}),\] where the product ranges over the surjective homomorphisms $\pi: \Lambda^{n+1}[p^r]\to (\mathbb{Q}_p/\mathbb{Z}_p)[p^r]$. \end{proposition} \proof The proposition follows from Propositions~\ref{prop:product-decomposition-DAGp} and \ref{prop:pi-component-empty-local} since $p$-rank$(\ker(\pi))>n$ if $\pi$ is not surjective to $(\mathbb{Q}_p/\mathbb{Z}_p)[p^r]$. \qed Now, we study an action of the automorphism group ${\rm Aut}(\Lambda^{n+1}[p^r])$ on $D_{\mathbb{B}}(r)$. For simplicity, we set \[ G(r)= {\rm Aut}(\Lambda^{n+1}[p^r]). \] Note that $G(r)$ is isomorphic to ${\rm GL}_{n+1}(\mathbb{Z}/p^r\mathbb{Z})$ and acts on the ring ${D}_{\mathbb{B}}(r)$. Let $P(r,{\pi})$ be the subgroup of $G(r)$ consisting of the elements fixing $\pi$: \[ P(r,{\pi})=\{g\in G(r)|\ \pi\circ g=\pi\}.\] We see that the $G(r)$-module $D_{\mathbb{B}}(r)$ is a coinduced module of the $P(r,{\pi})$-module $D_{\mathbb{B}}(r,{\pi})$: \[ D_{\mathbb{B}}(r)\cong\, {\rm Map}_{P(r,{\pi})}({G(r)}, D_{\mathbb{B}}(r,{\pi}))\] if $\pi: \Lambda^{n+1}[p^r]\to (\mathbb{Q}_p/\mathbb{Z}_p)[p^r]$ is surjective. \begin{proposition}\label{prop:invariant-DB} If $\pi: \Lambda^{n+1}[p^r]\to (\mathbb{Q}_p/\mathbb{Z}_p)[p^r]$ is surjective, then the invariant subring of $D_{\mathbb{B}}(r,{\pi})$ under the action of $P(r,\pi)$ is $\mathbb{B}_n^0$. \if0 then the local ring homomorphism $\mathbb{B}\to D_{\mathbb{B}_n}(r,{\pi})$ is a $P(r,{\pi})$-Galois extension in the sense of \mbox{\rm [Nagata,Theorem~2.4.10]}. In particular, the invariant subring of $D_{\mathbb{B}}(r,{\pi})$ is $\mathbb{B}$\mbox{\rm :} \[ (D_{\mathbb{B}}(r,{\pi}))^{P(r,{\pi})}=\mathbb{B}.\] \fi \end{proposition} \proof The proposition follows from \cite[Proposition~6.4]{Torii6} since we have an isomorphism $P(r,\pi)\cong B(W,V)(r)$ and we can identify the $P(r,\pi)$-ring $D_{\mathbb{B}}(r,\pi)$ with the $B(W,V)(r)$-ring $\mathbb{J}_n(r)$. \if0 By \cite[Proposition~6.4]{Torii6}, it suffices to show that we have an isomorphism $P(r,\pi)\cong B(W,V)(r)$ and that we can identify the $P(r,\pi)$-ring $D_{\mathbb{B}}(r,\pi)$ with the $B(W,V)(r)$-ring $\mathbb{J}_n(r)$. Let $\mathbb{K}$ be the field of fractions of $\mathbb{B}_n^0$, and let $\overline{\mathbb{K}}$ be its algebraic closure. We set $W(r)=\mathbf{G}_{\mathbb{B}}[p^r](\overline{\mathbb{K}})$ and $V(r)=\mathbf{H}_{\mathbb{B}}[p^r](\overline{\mathbb{K}})$. By (\ref{eq:fundamental-exact-squence-Fn-Fn+1-constant}), we can regard $V(r)$ as a subgroup of $W(r)$. We recall that $B(W,V)(r)$ is the subgroup of ${\rm Aut}(W(r))$ consisting of $g\in {\rm Aut}(W(r))$ that preserves $V(r)$ and that induces the identity on $W(r)/V(r)$. 
Hence we see that there is an isomorphism $P(r,\pi)\cong B(W,V)(r)$. We recall that $\mathbb{J}_n(r)$ is the $\mathbb{B}_n^0$-algebra in $\overline{\mathbb{K}}$ generated by $W(r)$. We have an isomorphism $D_{\mathbb{B}}(r,\pi)\cong \mathbb{J}_n(r)$ of $\mathbb{B}_n^0$-algebras. \fi \qed \begin{corollary}\label{cor:invariant-DB} The invariant ring of $D_{\mathbb{B}}(r)$ under the action of $G(r)$ is $\mathbb{B}_n^0$. \end{corollary} \proof The corollary follows from the fact that the $G(r)$-module $D_{\mathbb{B}}(r)$ is a coinduced module of the $P(r,{\pi})$-module $D_{\mathbb{B}}(r,{\pi})$. \qed \subsection{The action of $\mathbb{G}_{n+1}$ on $D(M,\mathbf{G}_{\mathbb{B}})$} Recall that $\mathbb{G}_{n+1}$ is the $(n+1)$st extended Morava stabilizer group. In this subsection we study an action of $\mathbb{G}_{n+1}$ on the representing ring $D(M,\mathbf{G}_{\mathbb{B}})$ of level structures on the $p$-divisible group $\mathbf{G}_{\mathbb{B}}$, where $M$ is a finite abelian $p$-group. The group $\mathbb{G}_{n+1}$ acts on the connected-\'{e}tale exact sequence (\ref{eq:fundamental-exact-squence-Fn-Fn+1-constant}), and hence we obtain a map of exact sequences \begin{align}\label{eq:map-of-fundamental-exact-seq} \begin{array}{ccccccccc} 0 \to & \mathbf{H}_{\mathbb{B}} & \longrightarrow & \mathbf{G}_{\mathbb{B}} & \longrightarrow & (\mathbb{Q}_p/\mathbb{Z}_p)_{\mathbb{B}} & \to & 0\\[1mm] & \phantom{\mbox{\scriptsize$\rho_{\mathbf{H}}(g$}} \bigg\downarrow\mbox{\scriptsize$\rho_{\mathbf{H}}(g)$} & & \phantom{\mbox{\scriptsize$\rho_{\mathbf{G}}(g)$}} \bigg\downarrow\mbox{\scriptsize$\rho_{\mathbf{G}}(g)$} & & \phantom{\mbox{aaaa}} \bigg\downarrow \mbox{\scriptsize$\rho_{\mathbf{E}}(g)$} & & \\[4mm] 0 \to & \mathbf{H}_{\mathbb{B}} & \longrightarrow & \mathbf{G}_{\mathbb{B}} & \longrightarrow & (\mathbb{Q}_p/\mathbb{Z}_p)_{\mathbb{B}} & \to & 0\\ \end{array} \end{align} for any $g\in \mathbb{G}_{n+1}$ covering the action on $\mathbb{B}_n^0$. Note that $\rho_{\mathbf{H}}(g)$ is the identity map for any $g\in \mathbb{G}_{n+1}$ (see \cite[\S4]{Torii3}). \if0 \begin{lemma}\label{lemma:action-Sn+1-fundamental-exact-sequence} We have $\rho_{\mathbf{E}}(g)={\rm Nm}(g)$ for $g\in \mathbb{G}_{n+1}$, where ${\rm Nm}$ is the reduced norm map on $\mathbb{G}_{n+1}$. \end{lemma} \proof {\color{red} We have to check this proof.} Using $F$-crystals, $E(\mathbf{F}_{n+1}[p^{\infty}])\cong E(\mathbf{F}_n[p^{\infty}])\oplus E(\mathbb{Q}_p/\mathbb{Z}_p)$ over $K^{\rm sep}=\mathbf{F}((u_n))^{\rm sep}$. Then $\wedge^{n+1}E(\mathbf{F}_{n+1}[p^{\infty}])\cong \wedge^n E(\mathbf{F}_n[p^{\infty}])\otimes E(\mathbb{Q}_p/\mathbb{Z}_p)$. Then $g\in \mathbb{G}_{n+1}$ induces ${\rm Nm}(g)$ on $E(\mathbf{F}_{n+1}[p^{\infty}])$ and the identity map on $E(\mathbf{F}_n[p^{\infty}])$. Hence $g$ induces ${\rm Nm}(g)$ on $E(\mathbb{Q}_p/\mathbb{Z}_p)$. \qed \fi Let $M$ be a finite abelian $p$-group. The action of $\mathbb{G}_{n+1}$ on $E_{n+1}^0$ extends to an action on the complete local ring $D(M,\mathbf{F}_{n+1})$. Since $D(M,\mathbf{G}_{\mathbb{B}})= \mathbb{B}_n^0\otimes_{E_{n+1}^0}D(M,\mathbf{F}_{n+1})$, we obtain an action of $\mathbb{G}_{n+1}$ on $D(M,\mathbf{G}_{\mathbb{B}})$ by the diagonal action. We define an action of $\mathbb{G}_{n+1}$ on ${\rm Hom}(M,\mathbb{Q}_p/\mathbb{Z}_p)$ by \[ g \pi= \rho_{\mathbf{E}}(g)\circ\pi,\] where we regard $\rho_{\mathbf{E}}(g)$ as a homomorphism of abelian groups. We easily obtain the following proposition. 
\begin{proposition}\label{prop:permutation-action-Sn+1-DBM} The action of $\mathbb{G}_{n+1}$ on $D(M,\mathbf{G}_{\mathbb{B}})$ induces a local ring homomorphism \[ g: D(M,\mathbf{G}_{\mathbb{B}},g\pi) \longrightarrow D(M,\mathbf{G}_{\mathbb{B}},\pi)\] for any $g\in\mathbb{G}_{n+1}$. \end{proposition} Let $0$ be the zero homomorphism from $M$ to $\mathbb{Q}_p/\mathbb{Z}_p$. The action of $\mathbb{G}_{n+1}$ on $D(M,\mathbf{G}_{\mathbb{B}})$ induces an action of $\mathbb{G}_{n+1}$ on $D(M,\mathbf{G}_{\mathbb{B}},0)$ by Proposition~\ref{prop:permutation-action-Sn+1-DBM}, and we have an isomorphism $D(M,\mathbf{G}_{\mathbb{B}},0)\cong \mathbb{B}_n^0{\otimes}_{E_n^0}D(M,\mathbf{F}_n)$ by Proposition~\ref{prop:trivilhom-general}. We suppose that $\mathbb{G}_{n+1}$ trivially acts on $D(M,\mathbf{F}_n)$ and diagonally on $\mathbb{B}_n^0{\otimes}_{E_n^0}D(M,\mathbf{F}_n)$. Then we obtain the following proposition. \begin{proposition} \label{prop:trivil-hom-component-DBMII} The isomorphism $D(M,\mathbf{G}_{\mathbb{B}},0)\cong \mathbb{B}_n^0{\otimes}_{E_n^0}D(M,\mathbf{F}_n)$ respects the $\mathbb{G}_{n+1}$-actions. \end{proposition} \proof By diagram~(\ref{eq:map-of-fundamental-exact-seq}) and the isomorphism $\theta: \mathbf{G}_{\mathbb{B}}^0\stackrel{\cong}{\to} \mathbf{H}_{\mathbb{B}}$, we obtain $\rho_{\mathbf{H}}(g)\circ \theta=\theta\circ \rho_{\mathbf{G}}(g)^0$ for $g\in\mathbb{G}_{n+1}$. The proposition follows from the fact that $\rho_{\mathbf{H}}(g)$ is the identity map for any $g\in\mathbb{G}_{n+1}$. \if0 This follows from the commutative diagram \[ \begin{array}{ccc} \mathbf{H}_{\mathbb{B}} & \longrightarrow & \mathbf{G}_{\mathbb{B}} \\[1mm] \phantom{\mbox{\scriptsize$\rho_{\mathbf{H}}(g)$}} \bigg\downarrow\mbox{\scriptsize$\rho_{\mathbf{H}}(g)$} & & \phantom{\mbox{\scriptsize$\rho_{\mathbf{G}}(g)$}} \bigg\downarrow\mbox{\scriptsize$\rho_{\mathbf{G}}(g)$} \\[4mm] \mathbf{H}_{\mathbb{B}} & \longrightarrow & \mathbf{G}_{\mathbb{B}} \\ \end{array}\] and the fact that $\rho_{\mathbf{H}}(g)$ is the identity map. \fi \if0 Let $R$ be a $\mathbb{B}$-algebra, and let $l:A\to \mathbf{F}_{n+1}[p^{\infty}](R)$ be a homomorphism so that the composition with the projection $\mathbf{F}_{n+1}[p^{\infty}](R)\to \mathbb{Q}_p/\mathbb{Z}_p$ is the zero map $0$. We have the following commutative diagram \[ \begin{array}{ccccccccc} 0 & \to & A & \stackrel{=}{\longrightarrow} & A & \longrightarrow & 0 & \to & 0\\[1mm] & & \phantom{\mbox{\scriptsize$l'$}} \bigg\downarrow\mbox{\scriptsize$l'$} & & \phantom{\mbox{\scriptsize$l$}} \bigg\downarrow\mbox{\scriptsize$l$}& & \bigg\downarrow & & \\[4mm] 0 & \to & \mathbf{F}_n[p^{\infty}](R) & \longrightarrow & \mathbf{F}_{n+1}[p^{\infty}](R) & \longrightarrow & \mathbb{Q}_p/\mathbb{Z}_p, & &\\ \end{array}\] where the two rows are exact. By \cite[Proposition~1.11.2]{KM}, $l$ is a level $A$-structure on $\mathbf{F}_{n+1}[p^{\infty}]$ if and only if $l'$ is a level $A$-structure on $\mathbf{F}_n[p^{\infty}]$. Hence we have an isomorphism of functors \[ {\rm Level}(A,\mathbf{G}_{\mathbb{B}},0) \cong {\rm Level}(A,\mathbf{H}_{\mathbb{B}})\] over ${\rm Spf}(\mathbb{B})$. This implies an isomorphism on the representing rings. The isomorphism follows from Lemma~\ref{lemma:trivilhom-general}. For $g\in \mathbb{G}_{n+1}$, the map $\rho_{\mathbf{G}}(g)$ induces an isomorphism $t(g): \mathbf{F}_{n+1}[p^{\infty}]\stackrel{\cong}{\to} g^*\mathbf{F}_{n+1}[p^{\infty}]$. 
induces an isomorphism $t(g)^0: \mathbf{F}_{n+1}[p^{\infty}]^0\stackrel{\cong}{\to} g^*\mathbf{F}_{n+1}[p^{\infty}]^0$, where $\mathbf{F}_{n+1}[p^{\infty}]^0$ is the identity component. Let $\widetilde{\Phi}: \mathbf{F}_{n+1}[p^{\infty}]^0 \stackrel{\cong}{\to}\mathbf{F}_n[p^{\infty}]$ be an isomorphism of $p$-divisible groups over ${\rm Spf}(\mathbb{B}^0)$. By \cite{Torii1,Torii3,Torii6}, we have the following commutative diagram \[ \begin{array}{ccc} \mathbf{F}_{n+1}[p^{\infty}]^0 & \stackrel{\widetilde{\Phi}}{\hbox to 10mm{\rightarrowfill}} & \mathbf{F}_n[p^{\infty}]\\[1mm] \mbox{\scriptsize$t(g)^0$}\bigg\downarrow \phantom{\mbox{\scriptsize$t(g)^0$}} && \parallel\\[2mm] g^*\mathbf{F}_{n+1}[p^{\infty}]^0 & \stackrel{g^*\widetilde{\Phi}}{\hbox to 10mm{\rightarrowfill}} & g^*\mathbf{F}_n[p^{\infty}].\\ \end{array}\] This shows that the isomorphism ${\rm Level}(A,\mathbf{F}_{n+1}[p^{\infty}]_{\mathbb{B}_n},0) \cong {\rm Level}(A,\mathbf{F}_n[p^{\infty}]_{\mathbb{B}_n})$ of functors respects the $\mathbb{G}_{n+1}$-actions. \fi \qed \if0 \subsection{Power operations in the $\mathbb{A}_n$-theory} In this subsection we define some operations in the $\mathbb{A}_n$-theory which are extensions of power operations in $E_{n+1}$. We denote by ${\rm ch}_{\mathbb{A}}$ the natural map $E_{n+1}^*(X)\to \mathbb{A}_n^*(X)$. Recall that $\mathbf{G}_{\mathbb{A}}$ is the $p$-divisible group obtained from $\mathbf{F}_{n+1}[p^{\infty}]$ by base change along the map $E_{n+1}^0\to \mathbb{A}_n^0$. Let $M$ be a finite abelian $p$-group with $p$-rank$(M)\le n+1$. \if0 We denote by $D_{n+1}(M)$ the representing ring of level $M$-structures of the formal group law $\mathbf{F}_{n+1}$ over $E_{n+1}^0$. There is a universal level $M$-structure $l: M\to \mathbf{F}_{n+1}(D_{n+1}(M))$. Then we have a local homomorphism \[ \Psi^M_{n+1}: E_{n+1}^0\longrightarrow D_{n+1}(M), \] which classifies the $*$-isomorphism classes of the quotient formal group law $\mathbf{F}_{n+1}/[l(M)]$. We define the commutative $\mathbb{A}_n^0$-algebra $D_{\mathbb{A}}(M)$ to be $D_{n+1}(M)\otimes_{E_{n+1}^0}\mathbb{A}_n^0$. Then there is a canonical ring homomorphism $q_M: D_{n+1}(M)\to D_{\mathbb{A}_n}(M)$. \fi By Proposition~\ref{prop:level-representability-specR}, we have an isomorphism \[ D(M,\mathbf{G}_{\mathbb{A}}) \cong \mathbb{A}_n^0\otimes_{E_{n+1}^0}D(M,\mathbf{F}_{n+1}).\] We denote by ${\rm ch}_{\mathbb{A}}(M): D(M,\mathbf{F}_{n+1})\to D(M,\mathbf{G}_{\mathbb{A}})$ the obvious map. \begin{proposition}\label{prop:key-prop-A-power-operation} There is a ring homomorphism \[ \Psi^{\mathbb{A}}_M: \mathbb{A}_n^0\longrightarrow D(M,\mathbf{G}_{\mathbb{A}}),\] which commutes the following diagram \[ \begin{array}{ccc} E_{n+1}^0 & \stackrel{\Psi_M} {\hbox to 20mm{\rightarrowfill}} & D(M,\mathbf{F}_{n+1})\\[2mm] \mbox{\rm\scriptsize ${\rm ch}_{\mathbb{A}}$}\bigg\downarrow \phantom{\mbox{\rm\scriptsize ${\rm ch}_{\mathbb{A}}$}} & & \phantom{\mbox{\rm\scriptsize ${\rm ch}_{\mathbb{A}}(M)$}} \bigg\downarrow \mbox{\rm\scriptsize ${\rm ch}_{\mathbb{A}}(M)$}\\[2mm] \mathbb{A}_n^0 & \stackrel{\Psi_M^{\mathbb{A}}}{\hbox to 20mm{\rightarrowfill}} & D(M,\mathbf{G}_{\mathbb{A}}).\\ \end{array}\] \end{proposition} To prove the above proposition, we need the following lemma. Let $I_n$ be the ideal of $E_{n+1}^0$ generated by $p,u_1,\ldots, u_{n-1}$. \begin{lemma}\label{lemma:lemma-for-key-prop-power-op-A} We have $\Psi_M(I_n)\subset I_n D(M,\mathbf{F}_{n+1})$, and the element ${\rm ch}_{\mathbb{A}}(M)\Psi_M(u_n)$ is a unit in $D(M,\mathbf{G}_{\mathbb{A}})$. 
\end{lemma} \proof We have an isogeny \[ f: i^*\mathbf{F}_{n+1}\longrightarrow (\Psi_M)^* \mathbf{F}_{n+1} \] over $D(M,\mathbf{F}_{n+1})$, where $i:E_{n+1}^0\to D(M,\mathbf{F}_{n+1})$ is the inclusion. If we take a coordinate $x$ of $\mathbf{F}_{n+1}$ over $E_{n+1}^0$, then \begin{align}\label{eq:isogeny-DA} f([p]^{i^*\mathbf{F}_{n+1}}(x))=[p]^{(\Psi_M)^*\mathbf{F}_{n+1}}(f(x)). \end{align} Suppose the order of $M$ is $p^r$. By \cite[Proposition~18]{Strickland}, we see that $f(x)$ divides $[p^r]^{i^*\mathbf{F}_{n+1}}(x)$ in $D(M,\mathbf{F}_{n+1})\power{x}$. Let $R=D(M,\mathbf{F}_{n+1})/I_nD(M,\mathbf{F}_{n+1})[u_n^{-1}]$. Since $[p]^{\mathbf{F}_{n+1}}(x)= \sum_{i=0}^{n+1}{}^{\mathbf{F}_{n+1}}u_ix^{p^i}$ where $u_0=p$ and $u_{n+1}=1$, we have $[p]^{i^*\mathbf{F}_{n+1}}(x)=u_nx^{p^n}+\mbox{(higher terms)}$ in $R\power{x}$. We set $f(x)=cx^k+\mbox{\rm (higher terms)}$ in $R\power{x}$. Since $f(x)$ divides $[p^r]^{i^*\mathbf{F}_{n+1}}(x)$, we see that $c$ is a unit in $R$. Let $\mathbf{X}$ be the formal group law obtained from $(\Psi_M)^*\mathbf{F}_{n+1}$ by base change along the map $D(M,\mathbf{F}_{n+1})\to R$. Then $[p]^{\mathbf{X}}(x)= \sum_{i=0}^{n+1}{}^{\mathbf{X}}\Psi_M(u_i)x^{p^i}$. By (\ref{eq:isogeny-DA}), we have \[ cu_n^kx^{kp^n}+\cdots = \sum_{i=0}^{n+1}{}^{\mathbf{X}}\Psi_M(u_i)(cx^k+\cdots)^{p^i}\] in $R\power{x}$. This implies that $\Psi_M(u_i)=0$ in $R$ for $i=0,1,\ldots, n-1$ and $cu_n^k=\Psi_M(u_n)c^{p^n}$. Since $u_n$ and $c$ are units in $R$, we see that $\Psi_M(u_n)$ is also a unit in $R$. Then ${\rm ch}_{\mathbb{A}}(M)\Psi_M(u_n)$ is a unit in $D(M,\mathbf{G}_{\mathbb{A}})$ since $D(M,\mathbf{G}_{\mathbb{A}})$ is complete with respect to $I_nD(M,\mathbf{G}_{\mathbb{A}})$. \qed \proof[Proof of Proposition~\ref{prop:key-prop-A-power-operation}] The map ${\rm ch}_{\mathbb{A}}(M)\circ\Psi_M$ extends to a map $E_{n+1}^0[u_n^{-1}]\to D(M,\mathbf{G}_{\mathbb{A}})$ by Lemma~\ref{lemma:lemma-for-key-prop-power-op-A}. Since $D(M,\mathbf{G}_{\mathbb{A}})$ is complete at $I_n$ and $\mathbb{A}_n^0$ is obtained from $E_{n+1}^0[u_n^{-1}]$ by completion at $I_n$, we obtain a map $\Psi_M^{\mathbb{A}}: \mathbb{A}_n^0\to D(M,\mathbf{G}_{\mathbb{A}})$, which is an extension of ${\rm ch}_{\mathbb{A}}(M)\circ\Psi_M$. \qed We set \[ \mathbb{A}(M)_n^*(X)=D(M,\mathbf{G}_{\mathbb{A}}) {\otimes}_{\mathbb{A}_n^0} \mathbb{A}_n^*(X).\] We define a natural transformation \[ {\rm ch}_{\mathbb{A}}(M): E(M)_{n+1}^*(X) \to \mathbb{A}_n(M)^*(X) \] to be the composition \[ D(M,\mathbf{F}_{n+1})\otimes_{E_{n+1}^0}E_{n+1}^*(X) \stackrel{1\otimes {\rm ch}_{\mathbb{A}}}{\hbox to 15mm{\rightarrowfill}} D(M,\mathbf{F}_{n+1}) \otimes_{E_{n+1}^0}\mathbb{A}_n^*(X).\] When $X$ is a finite complex, we have a natural isomorphism \[ \mathbb{A}_n^0(X)\cong \mathbb{A}_n^0\otimes_{E_{n+1}^0}E_{n+1}^0(X)\] since $\mathbb{A}_n$ is Landweber exact. 
We define a natural map \[ \Psi_M^{\mathbb{A}}: \mathbb{A}_n^0(X)\to \mathbb{A}(M)_n^0(X) \] to be the composition \[ \begin{array}{rcl} \mathbb{A}_n^0\subrel{E_{n+1}^0}{\otimes}E_{n+1}^0(X) &\stackrel{\Psi_M^{\mathbb{A}}\otimes \Psi_{M}}{\hbox to 14mm{\rightarrowfill}}& \mathbb{A}_n^0\subrel{E_{n+1}^0}{\otimes} D(M,\mathbf{F}_{n+1})\subrel{\Psi_M,E_{n+1}^0,\Psi_M}{\otimes} D(M,\mathbf{F}_{n+1}) \subrel{E_{n+1}^0}{\otimes}E_{n+1}^0(X)\\[2mm] &\stackrel{1\otimes m\otimes 1}{\hbox to 14mm{\rightarrowfill}}& \mathbb{A}_n^0\subrel{E_{n+1}^0}{\otimes} D(M,\mathbf{F}_{n+1}) \subrel{E_{n+1}^0}{\otimes} E_{n+1}^0(X),\\[2mm] \end{array}\] where $m$ is the multiplication of $D(M,\mathbf{F}_{n+1})$. Let $X$ be a CW-complex and let $\{X_{\alpha}\}$ be the filtered system of all finite subcomplexes of $X$. Since $\mathbb{A}_n^0$ is a complete Noetherian local ring, we have an isomorphism \[ \mathbb{A}_n^0(X)\cong\ \subrel{\alpha}{\lim} \mathbb{A}_n^0(X_{\alpha}).\] Since $D(M,\mathbf{G}_{\mathbb{A}})$ is a finitely generated free $\mathbb{A}_n^0$-module, we also have an isomorphism \[ \mathbb{A}(M)_n^0(X)\cong\ \subrel{\alpha}{\lim} \mathbb{A}(M)_n^0(X_{\alpha}).\] Hence we see that the natural map $\Psi_M^{\mathbb{A}}$ uniquely extends to an operation \[ \Psi_M^{\mathbb{A}}: \mathbb{A}_n^0(X) \longrightarrow \mathbb{A}(M)_n^0(X)\] for any space $X$. Notice that $\Psi_{M}^{\mathbb{A}}$ is a ring operation. \fi \subsection{Power operations in $\mathbb{B}_n$-theory} We would like to construct Hecke operators in $\mathbb{B}_n$-theory. For this purpose, in this subsection we construct ring operations in $\mathbb{B}_n$-theory which are extensions of power operations in $E_{n+1}$-theory. We also study compatibility of these operations with the action of the extended Morava stabilizer group $\mathbb{G}_{n+1}$. First, we shall construct a ring operation \[ \Psi_{M}^{\mathbb{B}}: \mathbb{B}_n^0(X)\longrightarrow \mathbb{B}(M)_n^0(X)\] for a finite abelian $p$-group $M$, which is an extension of the ring operation $\Psi_M^{E_{n+1}}: E_{n+1}^0(X)\to E(M)_{n+1}^0(X)$, where $\mathbb{B}(M)_n^*(X)=D(M,\mathbf{G}_{\mathbb{B}}) \otimes_{\mathbb{B}_n^0}\mathbb{B}_n^*(X)$. \if0 Let $j_{E_{n+1}}: E_{n+1}^0\to D_{n+1}(M)$ be the structure map of the $E_{n+1}^0$-algebra $D_{n+1}(M)$. Note that $D_{n+1}(M)$ is a complete regular local ring with the residue field $\mathbb{F}$, and that $j_{E_{n+1}}$ induces an isomorphism between the residue fields. We denote by $j_{E_{n+1}}^*\mathbf{F}_{n+1}$ the formal group over $D_{n+1}(M)$ obtained by base change along $j_{E_{n+1}}$. The universal level $M$-structure \[ \phi^{\rm univ}_{E_{n+1}}: M\longrightarrow j_{E_{n+1}}^*\mathbf{F}_{n+1}(D_{n+1}(M))\] gives rise to a closed subgroup scheme $[\phi^{\rm univ}_{E_{n+1}}(M)]$ of $j_{E_{n+1}}^*\mathbf{F}_{n+1}$ by \[ [\phi^{\rm univ}_{E_{n+1}}(M)]=\sum_{m\in M}[\phi^{\rm univ}_{E_{n+1}}(m)],\] where $[\phi^{\rm univ}_{E_{n+1}}(m)]$ is a divisor on $j_{E_{n+1}}^*\mathbf{F}_{n+1}$ defined by a point $\phi^{\rm univ}_{E_{n+1}}(m)\in j_{E_{n+1}}^*\mathbf{F}_{n+1}(D_{n+1}(M))$. We can define the quotient formal group $j_{E_{n+1}}^*\mathbf{F}_{n+1}/[\phi^{\rm univ}_{E_{n+1}}(M)]$ of $j_{E_{n+1}}^*\mathbf{F}_{n+1}$ by $[\phi^{\rm univ}_{E_{n+1}}(M)]$. The formal group $j_{E_{n+1}}^*\mathbf{F}_{n+1}/[\phi^{\rm univ}_{E_{n+1}}(M)]$ is a deformation of the Honda formal group $\mathbf{Y}_{n+1}$ over the residue field $\mathbb{F}$. 
We have the power operation \[ \Psi_{M}^{E_{n+1}}: E_{n+1}^0(X)\stackrel{}{\longrightarrow} D_{n+1}(M)\subrel{E_{n+1}^0}{\otimes} E_{n+1}^0(X).\] When $X$ is the one point space, we obtain a local ring homomorphism \[ \Psi_M^{E_{n+1}}: E_{n+1}^0\longrightarrow D_{n+1}(M).\] This map classifies a deformation of the formal group $j_{E_{n+1}}^*\mathbf{F}_{n+1}/[\phi^{\rm univ}_{E_{n+1}}(M)]$. So we have an isomorphism of formal groups \[ (\Psi_M^{E_{n+1}})^*\mathbf{F}_{n+1}\cong j_{E_{n+1}}^*\mathbf{F}_{n+1}/[\phi^{\rm univ}_{E_{n+1}}(M)],\] which is an identity on the special fiber over $\mathbb{F}$. \fi For this purpose, we consider the composite of the operation $\Psi_M^{E_{n+1}}$ with a natural map ${\rm ch}(M): E(M)_{n+1}^0(X)\to \mathbb{B}(M)_n^0(X)$, which is a natural extension of ${\rm ch}: E_{n+1}^0(X)\to \mathbb{B}_n^0(X)$. Proposition~\ref{prop:product-decomposition-DAGp} implies a decomposition \[ \mathbb{B}(M)_n^*(X)\cong\prod_{\pi} \mathbb{B}(M,\pi)^*(X) \] of generalized cohomology theories, where $\mathbb{B}(M,\pi)_n^*(X)= D(M,\mathbf{G}_{\mathbb{B}},\pi) {\otimes}_{\mathbb{B}_n^0}\mathbb{B}_n^*(X)$. We write ${\rm ch}(M,{\pi}): E(M)_{n+1}^*(X)\to\mathbb{B}(M,\pi)_n^*(X)$ for the composite $p_{\pi}\circ {\rm ch}(M)$, where $p_{\pi}: \mathbb{B}(M)_n^*(X)\to \mathbb{B}(M,{\pi})_n^*(X)$ is the projection. When $X$ is a one point space, we obtain a ring homomorphism ${\rm ch}(M,{\pi}): D(M,\mathbf{F}_{n+1})\to D(M,\mathbf{G}_{\mathbb{B}},{\pi})$. The map ${\rm ch}(M,\pi)$ induces an isogeny of $p$-divisible groups \[ {\rm ch}(M,\pi)^*\Psi^{E_{n+1}}_M: (i^{\mathbb{B}})^*\mathbf{G}_{\mathbb{B}} \longrightarrow {\rm ch}(M,\pi)^*(\Psi^{E_{n+1}}_M)^*\mathbf{G}\] from the isogeny of formal groups $\Psi^{E_{n+1}}_M: i^*\mathbf{F}_{n+1}\to (\Psi^{E_{n+1}}_M)^*\mathbf{F}_{n+1}$, where $i^{\mathbb{B}}: \mathbb{B}_n^0\to D(M,\mathbf{G}_{\mathbb{B}},\pi)$ is the inclusion map. \if0 We have the $p$-divisible group $\mathbf{F}_{n+1}[p^{\infty}]$ over $E_{n+1}^0$. By base change along the map $\Psi_M^{E_{n+1}}$, we obtain a $p$-divisible group $(\Psi_M^{E_{n+1}})^*(\mathbf{F}_{n+1}[p^{\infty}])$ over $D_{n+1}(M)$. Since giving a level $M$-structure on the formal group $\mathbf{F}_{n+1}$ is equivalent to giving a level $M$-structure on the $p$-divisible group $\mathbf{F}_{n+1}[p^{\infty}]$, the homomorphism \[ \phi^{\rm univ}_{E_{n+1}}: M\longrightarrow \mathbf{F}_{n+1}(D_{n+1}(M)) =\mathbf{F}_{n+1}[p^{\infty}](D_{n+1}(M)) \] is a universal level structure of $\mathbf{F}_{n+1}[p^{\infty}]$ over $D_{n+1}(M)$, and we have an an isomorphism of $p$-divisible groups \[ (\Psi_M^{E_{n+1}})^*(\mathbf{F}_{n+1}[p^{\infty}])\cong j_{E_{n+1}}^*(\mathbf{F}_{n+1}[p^{\infty}])/ [\phi^{\rm univ}_{E_{n+1}}(M)].\] Let $j_{\mathbb{B}_n}: \mathbb{B}_n^0\to D_{\mathbb{B}_n}(M)$ be the structure map of the $\mathbb{B}_n^0$-algebra $D_{\mathbb{B}_n}(M)$. Since $(1\otimes {\rm ch})\circ j_{E_{n+1}} =j_{\mathbb{B}_n}\circ {\rm ch}$, we have an isomorphism \[ (1\otimes {\rm ch})^* (j_{E_{n+1}}^*(\mathbf{F}_{n+1}[p^{\infty}])) \cong j_{\mathbb{B}_n}^*(\mathbf{F}_{n+1}[p^{\infty}]_{\mathbf{B}_n}),\] and we can identify $(1\otimes {\rm ch})^*\phi^{\rm univ}_{E_{n+1}}$ with the universal level structure $\phi^{\rm univ}_{\mathbb{B}_n}$. 
Hence we have an isomorphism \[ (1\otimes {\rm ch})^*(\Psi_M^{E_{n+1}})^* (\mathbf{F}_{n+1}[p^{\infty}]) \cong j_{\mathbb{B}_n}^*(\mathbf{F}_{n+1}[p^{\infty}]_{\mathbb{B}_n})/ [\phi^{\rm univ}_{\mathbb{B}_n}(M)].\] \fi Let $\phi: M\to \mathbf{G}_{\mathbb{B}}(R)$ be a level $M$-structure, which corresponds to a map $D(M,\mathbf{G}_{\mathbb{B}},\pi)\to R$ in $\mathcal{CL}$. By exact sequence~(\ref{eq:fundamental-exact-squence-Fn-Fn+1-constant}), we obtain a homomorphism $\phi': N\to \mathbf{H}_{\mathbb{B}}(R)$, where $N=\ker(\pi)$. By Proposition~\ref{prop:general-level-restriction-height-change}, $\phi'$ is a level $N$-structure on $\mathbf{H}_R$, and hence we obtain a local ring homomorphism ${\rm inc}(M,\pi): D(N,\mathbf{F}_n)\to D(M,\mathbf{G}_{\mathbb{B}},\pi)$, which induces a multiplicative stable cohomology operation \[ {\rm inc}(M,\pi): E(N)_n^*(X)\longrightarrow \mathbb{B}(M,\pi)_n^*(X).\] The following lemma follows from Proposition~\ref{prop:exact-seq-quotient-pdivisible}. \begin{lemma}\label{lemma:iso-identity-FM-FN} The isomorphism $\theta: \mathbf{G}_{\mathbb{B}}^0\stackrel{\cong}{\to} \mathbf{H}_{\mathbb{B}}$ induces an isomorphism \[ \overline{\theta}: (\mathbf{G}_R/[\phi(M)])^0 \stackrel{\cong}{\longrightarrow} \mathbf{H}_R/[\phi'(N)]\] of connected $p$-divisible groups over $R$, where $(\mathbf{G}_R/[\phi(M)])^0$ is the identity component of the quotient $p$-divisible group $\mathbf{G}_R/[\phi(M)]$. \end{lemma} Using Lemma~\ref{lemma:iso-identity-FM-FN}, we shall show that ${\rm ch}(M,\pi)\circ \Psi_M^{E_{n+1}}: E_{n+1}^0\to D(M,\mathbf{G}_{\mathbb{B}},\pi)$ extends to a map $f: \mathbb{A}_n^0\to D(M,\mathbf{G}_{\mathbb{B}},\pi)$ of local rings. \begin{lemma}\label{lemma:extension-chMpi-psiMen+1} The composite ${\rm ch}(M,\pi)\circ \Psi_M^{E_{n+1}}: E_{n+1}^0\to D(M,\mathbf{G}_{\mathbb{B}},\pi)$ extends to a map $f: \mathbb{A}_n^0\to D(M,\mathbf{G}_{\mathbb{B}},\pi)$ of local rings. \end{lemma} \proof Put $R=D(M,\mathbf{G}_{\mathbb{B}},\pi)$ and $\psi={\rm ch}(M,\pi)\circ \Psi_M^{E_{n+1}}$. We have an isomorphism $\psi^*\mathbf{G}\cong\mathbf{G}_R/[\phi(M)]$ of $p$-divisible groups. Lemma~\ref{lemma:iso-identity-FM-FN} implies an isomorphism $(\psi^*\mathbf{G})^0\cong \mathbf{H}_R/[\phi'(N)]$. Since the height of $\mathbf{H}_R$ is $n$, we see that the height of $(\psi^*\mathbf{G})^0$ is also $n$. This implies $\psi(u_i)=0$ for $i<n$, and $\psi(u_n)\neq 0$ in the residue field of $R$. Hence $\psi$ extends to a map $f: \mathbb{A}_n^0\to R$ since $R$ is a complete local ring. \qed \if0 \begin{lemma}\label{lemma:iso-identity-FM-FN} Let $(p_{\pi}^*\mathbf{F}_{n+1}[p^{\infty}]_{\mathbb{B}_n}/ [p_\pi^*\phi^{\rm univ}_{\mathbb{B}_n}(M)])^0$ be the identity component of the $p$-divisible group $p_{\pi}^*\mathbf{F}_{n+1}[p^{\infty}]/ [p_\pi^*\phi^{\rm univ}_{\mathbb{B}_n}(M)]$. We have an isomorphism of $p$-divisible groups \[ (p_{\pi}^*\mathbf{F}_{n+1}[p^{\infty}]_{\mathbb{B}_n}/ [p_\pi^*\phi^{\rm univ}_{\mathbb{B}_n}(M)])^0 \cong p_{\pi}^*\mathbf{F}_n[p^{\infty}]_{\mathbb{B}_n}/ [\phi'(N)].\] \end{lemma} \proof Let $R=D_{\mathbb{B}_n}(M)_\pi$, and let $K$ be the image of the homomorphism $\pi: M\to \mathbb{Q}_p/\mathbb{Z}_p$. 
We have the following commutative diagram \[ \begin{array}{ccccccccc} 0 & \to & [\phi'(N)] & \longrightarrow & [\phi(M)] & \longrightarrow & K_R & \to & 0\\[2mm] & & \bigg\downarrow & & \bigg\downarrow & & \bigg\downarrow & & \\[4mm] 0 & \to & \mathbf{F}_n[p^{\infty}]_R & \longrightarrow & \mathbf{F}_{n+1}[p^{\infty}]_R & \longrightarrow & (\mathbb{Q}_p/\mathbb{Q}_p)_R & \to & 0\\ \end{array}\] of fppf sheaves of abelian groups on ${\rm Spf}(R)$, where the tow rows are exact. This induces an exact sequence \[ 0\to \mathbf{F}_n[p^{\infty}]_{R}/ [\phi'(N)] \longrightarrow \mathbf{F}_{n+1}[p^{\infty}]_{R}/ [\phi(M)] \longrightarrow (\mathbb{Q}_p/\mathbb{Z}_p)_R\to 0.\] This shows the identity component of $\mathbf{F}_{n+1}[p^{\infty}]_{R}/[\phi(M)]$ is isomorphic to $\mathbf{F}_n[p^{\infty}]_{R}/[\phi'(N)]$. \qed \fi \if0 Recall that we have the level $N$-structure $\phi'$ on $\mathbf{F}_n[p^{\infty}]$ over $D_{\mathbb{B}_n}(M)_{\pi}$. Hence we obtain a local ring homomorphism \[ {\rm inc}({M,\pi): D_n(N)\longrightarrow D_{\mathbb{B}_n}(M)_{\pi},\] which classifies the level $N$-structure $\phi'$. Note that we have \[ {\rm inc}(M,\pi)\circ j_{E_n}= p_{\pi} \circ j_{\mathbb{B}_n}\circ i.\] Hence we obtain an isomorphism of $p$-divisible groups \[ p_{\pi}^*\mathbf{F}_n[p^{\infty}]/[\phi'(N)] \cong {\rm inc}(M,\pi)^* \mathbf{F}_n[p^{\infty}]/ [\phi_{E_n}^{\rm univ}(N)]\] over $D_{\mathbb{B}_n}(M)_{\pi}$. \fi \begin{proposition}\label{prop:relations-Psi-ch-i} There exists a unique map of local rings \[ \Psi_{M,\pi}^{\mathbb{B}}: \mathbb{B}_n^0\longrightarrow D(M,\mathbf{G}_{\mathbb{B}},\pi) \] satisfying \[ \begin{array}{lcl} \Psi_{M,\pi}^{\mathbb{B}}\circ {\rm ch}&=& {\rm ch}(M,{\pi})\circ\Psi_{M}^{E_{n+1}},\\[2mm] \Psi_{M,\pi}^{\mathbb{B}}\circ {\rm inc}&=& {\rm inc}(M,\pi)\circ \Psi_N^{E_n}, \end{array}\] and making the following diagram commute \[ \begin{array}{ccc} (i^{\mathbb{B}})^*\mathbf{G}_{\mathbb{B}}^0 & \stackrel{{\rm ch}(M,\pi)^*\Psi^{E_{n+1}}_M} {\hbox to 20mm{\rightarrowfill}}& (\Psi^{\mathbb{B}}_{M,\pi})^*\mathbf{G}_{\mathbb{B}}^0\\[2mm] \mbox{$\scriptstyle (i^{\mathbb{B}})^*\theta$} \bigg\downarrow \phantom{\mbox{$\scriptstyle (i^{\mathbb{B}})^*\theta$}} && \phantom{\mbox{$\scriptstyle (\Psi^{\mathbb{B}}_{M,\pi})^*\theta$}} \bigg\downarrow \mbox{$\scriptstyle (\Psi^{\mathbb{B}}_{M,\pi})^*\theta$}\\[2mm] (i^{\mathbb{B}})\mathbf{H}_{\mathbb{B}}& \stackrel{{\rm inc}(M,\pi)^*\Psi^{E_n}_N} {\hbox to 20mm{\rightarrowfill}}& (\Psi^{\mathbb{B}}_{M,\pi})^*\mathbf{H}_{\mathbb{B}}. \end{array}\] \end{proposition} \proof Put $R=D(M,\mathbf{G}_{\mathbb{B}},\pi)$. The isogeny ${\rm ch}(M,\pi)^*\Psi^{E_{n+1}}_M: (i^{\mathbb{B}})^*\mathbf{G}_{\mathbb{B}}= \mathbf{G}_R \to f^*\mathbf{G}_{\mathbb{A}}$ of $p$-divisible groups induces an isomorphism $f^*\mathbf{G}_{\mathbb{A}}\cong \mathbf{G}_R/[\phi(M)]$, where $f: \mathbb{A}_n^0\to R$ is the extension of ${\rm ch}(M,\pi)\circ\Psi_M^{E_{n+1}}$ given in Lemma~\ref{lemma:extension-chMpi-psiMen+1}. We also have the isogeny ${\rm inc}(M,\pi)^*\Psi^{E_n}_N: (i^{\mathbb{B}})\mathbf{H}_{\mathbb{B}}= \mathbf{H}_R\to g^*\mathbf{H}$, which induces an isomorphism $g^*\mathbf{H}\cong \mathbf{H}_R/[\phi'(N)]$, where $g={\rm inc}(M,\pi)\circ \Psi_N^{E_n}$. By Lemma~\ref{lemma:MUPMUP-variation}, it suffices to show that there is an isomorphism $\overline{\theta}: f^*\mathbf{G}_{\mathbb{A}}^0 \cong g^*\mathbf{H}$ of $p$-divisible groups over $D(M,\mathbf{G}_{\mathbb{B}},\pi)$. We obtain the desired isomorphism by Lemma~\ref{lemma:iso-identity-FM-FN}. 
\if0 By Lemma~\ref{lemma:iso-identity-FM-FN}, the isomorphism $\theta: \mathbf{G}_{\mathbb{B}}^0\stackrel{\cong}{\to} \mathbf{H}_{\mathbb{B}}$ induces an isomorphism $\overline{\theta}: f^*\mathbf{G}_{\mathbb{A}}^0 \cong g^*\mathbf{H}$ of connected $p$-divisible groups over $R$. \fi \qed When $X$ is a finite complex, we have a natural isomorphism $\mathbb{B}_n^0(X)\cong \mathbb{B}_n^0 \otimes_{E_{n+1}^*}E_{n+1}^0(X)$. We define a natural map \[ \Psi_{M,\pi}^{\mathbb{B}}: \mathbb{B}_n^0(X)\longrightarrow \mathbb{B}(M,\pi)_n^0(X) \] to be the composition \[ \begin{array}{rcl} \mathbb{B}_n^0\subrel{E_{n+1}^0}{\otimes}E_{n+1}^0(X)& \stackrel{\Psi_{M,\pi}^{\mathbb{B}}\otimes \psi}{\hbox to 14mm{\rightarrowfill}}& D(M,\mathbf{G}_{\mathbb{B}},\pi) \subrel{\psi,E_{n+1}^0,\psi}{\otimes} D(M,\mathbf{G}_{\mathbb{B}},\pi) \subrel{E_{n+1}^0}{\otimes} E_{n+1}^0(X)\\[2mm] &\stackrel{m\otimes 1}{\hbox to 14mm{\rightarrowfill}}& D(M,\mathbf{G}_{\mathbb{B}},{\pi}) \subrel{E_{n+1}^0}{\otimes} E_{n+1}^0(X), \end{array}\] where $\psi={\rm ch}(M,\pi)\circ \Psi_M^{E_{n+1}}$ and $m$ is the multiplication map of $D(M,\mathbf{G}_{\mathbb{B}},{\pi})$. When $X$ is a CW-complex, we let $\{X_{\alpha}\}$ be the filtered system of all finite subcomplexes of $X$. Since $\mathbb{B}_n^0$ is a complete Noetherian local ring, we have an isomorphism $\mathbb{B}_n^0(X)\cong\ \subrel{\alpha}{\lim} \mathbb{B}_n^0(X_{\alpha})$. Since $D(M,\mathbf{G}_{\mathbb{B}},\pi)$ is a finitely generated free $\mathbb{B}_n^0$-module, we also have an isomorphism $\mathbb{B}(M,\pi)_n^0(X)\cong\ \subrel{\alpha}{\lim} \mathbb{B}(M,\pi)_n^0(X_{\alpha})$. Hence we see that the natural map $\Psi_{M,\pi}^{\mathbb{B}}$ uniquely extends to a ring operation \[ \Psi_{M,\pi}^{\mathbb{B}}: \mathbb{B}_n^0(X) \longrightarrow \mathbb{B}(M,\pi)_n^0(X)\] for any space $X$. \if0 By definition, $\Psi_{M,\pi}$ is a ring operation and \[ \Psi_{M,\pi}^{\mathbb{B}}\circ {\rm ch}= {\rm ch}(M,\pi)\circ \Psi_M^{E_{n+1}}.\] \fi \if0 Let $X$ be a CW-complex and let $\{X_{\alpha}\}$ be the filtered system of all finite subcomplexes of $X$. Since $\mathbb{B}_n^0$ is a complete Noetherian local ring, we have an isomorphism \[ \mathbb{B}_n^0(X)\cong\ \subrel{\alpha}{\lim} \mathbb{B}_n^0(X_{\alpha}).\] Since $D_{\mathbb{B}_n,\pi}(M)$ is a finitely generated free $\mathbb{B}^0$-module, we also have an isomorphism \[ D_{\mathbb{B}_n,\pi}\subrel{\mathbb{B}_n^0}{\otimes} \mathbb{B}_n^0(X)\cong\ \subrel{\alpha}{\lim} D_{\mathbb{B}_n,\pi}\subrel{\mathbb{B}_n^0}{\otimes} \mathbb{B}_n^0(X_{\alpha}).\] Hence we see that the natural map $(m\otimes 1)\circ (\Psi_{M,\pi}^{\mathbb{B}_n}\otimes \Psi_{M,\pi}^{E_{n+1}})$ uniquely extends to an operation \[ \Psi_{M,\pi}^{\mathbb{B}_n}: \mathbb{B}_n^0(X) \longrightarrow D_{\mathbb{B}_n}(M)_{\pi}\subrel{\mathbb{B}_n^0}{\otimes} \mathbb{B}_n^0(X)\] for any space $X$. \fi \begin{remark}\label{remark:Psi-M-pi-ch} \rm By construction, we have $\Psi_{M,\pi}^{\mathbb{B}}\circ {\rm ch}= {\rm ch}(M,\pi)\circ \Psi_M^{E_{n+1}}$. \end{remark} The following theorem describes a relationship between the operations $\Psi_{M,\pi}^{\mathbb{B}}$ and $\Psi_N^{E_n}$, where $N$ is the kernel of $\pi: M\to \mathbb{Q}_p/\mathbb{Z}_p$. 
\begin{theorem}\label{thm:relation-psi-M-pi-Psi-N} There is a natural commutative diagram \[ \begin{array}{ccc} E_n^0(X) & \stackrel{\Psi_N^{E_n}}{\hbox to 10mm{\rightarrowfill}}& E(N)_n^0(X)\\[1mm] \mbox{\scriptsize${\rm inc}$} \bigg\downarrow\phantom{\mbox{\scriptsize${\rm inc}$}} & & \phantom{\mbox{\scriptsize${\rm inc}(M,\pi)$}} \bigg\downarrow \mbox{\scriptsize${\rm inc}(M,\pi)$}\\[1mm] \mathbb{B}_n^0(X) & \stackrel{\Psi_{M,\pi}^{\mathbb{B}}} {\hbox to 10mm{\rightarrowfill}} & \mathbb{B}_n(M,\pi)_n^0(X)\\ \end{array}\] for any space $X$. \end{theorem} \proof By \cite[Proposition~3.7]{Ando2}, an unstable ring operation between Landweber exact cohomology theories is determined by its values on a one point space and the infinite complex projective space $\mathbb{CP}^{\infty}$. Hence it is sufficient to show that the diagram is commutative when $X$ is a one point space and when $X$ is $\mathbb{CP}^{\infty}$. If $X$ is a one point space, then the diagram is commutative by Proposition~\ref{prop:relations-Psi-ch-i}. If $X=\mathbb{CP}^{\infty}$, then the diagram is commutative by Lemma~\ref{lemma:iso-identity-FM-FN}. \if0 by the following commutative diagram of $p$-divisible groups \[ \begin{array}{ccc} \mathbf{F}_n[p^{\infty}] & \hbox to 10mm{\rightarrowfill} & \mathbf{F}_n[p^{\infty}]/[\varphi(N)]\\[1mm] \bigg\downarrow & & \bigg\downarrow \\[3mm] \mathbf{F}_{n+1}[p^{\infty}]& \hbox to 10mm{\rightarrowfill} & \mathbf{F}_{n+1}[p^{\infty}]/[\phi_{\pi}(M)],\\ \end{array}\] where the horizontal arrows are quotient maps and the vertical arrows are isomorphisms into the identity components. Since $E_n^*(-)$ and $D_{\mathbb{B}_n}(M)_{\pi}^*(-)$ are Landweber exact cohomology theories, the Kunneth formula holds for the cohomology rings of these theories for a finite product of $\mathbb{C}P^{\infty}$'s. The fact that $\Psi_{M,\pi}^{\mathbb{B}_n}\circ {\rm inc}$ and $({\rm inc}(M,\pi)\otimes {\rm inc})\circ \Psi_N^{E_n}$ are ring operations, implies that they coincide when $X$ is a finite product of $\mathbb{C}P^{\infty}$'s. By \cite[Theorem~4.2]{Kashiwabara} we see that the above diagram is commutative for all $X$. \fi \qed Assembling the operations $\Psi_{M,\pi}^{\mathbb{B}}$ for all homomorphisms $\pi: M\to \mathbb{Q}_p/\mathbb{Z}_p$, we shall define a ring operation $\Psi_M^{\mathbb{B}}$ with values in $\mathbb{B}(M)_n^0(X)$. \begin{definition}\rm We define a ring operation \[ \Psi_{M}^{\mathbb{B}}: \mathbb{B}_n^0(X) \longrightarrow \mathbb{B}(M)_n^0(X)\] to be the one satisfying $\Psi_{M,\pi}^{\mathbb{B}}= (p_{\pi}\otimes 1)\circ \Psi_M^{\mathbb{B}}$ for any homomorphism $\pi: M\to \mathbb{Q}_p/\mathbb{Z}_p$. \end{definition} The following proposition is easily obtained by the definition of $\Psi_M^{\mathbb{B}}$ and Remark~\ref{remark:Psi-M-pi-ch}, which describes a relationship between the operations $\Psi_M^{\mathbb{B}}$ and $\Psi_M^{E_{n+1}}$. \begin{proposition}\label{prop-on-Psi-operation-B-M} We have $\Psi_M^{\mathbb{B}}\circ {\rm ch} = {\rm ch}(M)\circ \Psi_M^{E_{n+1}}$. \end{proposition} Next, we study compatibility of the operation $\Psi_M^{\mathbb{B}}$ with an action of the extended Morava stabilizer group $\mathbb{G}_{n+1}$. We suppose that $\mathbb{G}_{n+1}$ diagonally acts on $\mathbb{B}(M)_n^0(X)= D(M,\mathbf{G}_{\mathbb{B}})\otimes_{\mathbb{B}_n^0} \mathbb{B}_n^0(X)$ when $X$ is a finite complex. This action uniquely extends to an action on $\mathbb{B}(M)_n^0(X)$ for any space $X$ since $\mathbb{B}(M)_n^0(X)\cong\, \subrel{\alpha}{\rm lim}\,\mathbb{B}(M)_n^0(X_{\alpha})$. 
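\begin{remark}\rm
To fix conventions, and only as a restatement of the definitions above, the diagonal action means that for a finite complex $X$ an element $g\in\mathbb{G}_{n+1}$ acts on $\mathbb{B}(M)_n^0(X)=D(M,\mathbf{G}_{\mathbb{B}})\otimes_{\mathbb{B}_n^0}\mathbb{B}_n^0(X)$ by
\[ g(d\otimes x)=g(d)\otimes g(x) \]
for $d\in D(M,\mathbf{G}_{\mathbb{B}})$ and $x\in \mathbb{B}_n^0(X)$. By Proposition~\ref{prop:permutation-action-Sn+1-DBM}, this action permutes the factors of the decomposition $\mathbb{B}(M)_n^0(X)\cong\prod_{\pi}\mathbb{B}(M,\pi)_n^0(X)$, carrying the component indexed by $g\pi$ to the component indexed by $\pi$.
\end{remark}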
\begin{theorem}\label{thm:equivariance-operation-psi-B} The operation $\Psi_{M}^{\mathbb{B}}: \mathbb{B}_n^0(X)\to \mathbb{B}(M)_n^0(X)$ is $\mathbb{G}_{n+1}$-equivariant. \end{theorem} In order to prove Theorem~\ref{thm:equivariance-operation-psi-B}, we first study the action of $\mathbb{G}_{n+1}$ on the ring homomorphism $\Psi_M^{\mathbb{B}}:\mathbb{B}_n^0\to D(M,\mathbf{G}_{\mathbb{B}}) \cong \prod_{\pi}D(M,\mathbf{G}_{\mathbb{B}},\pi)$. By Lemma~\ref{lemma:MUPMUP-variation}, the local ring homomorphism $\Psi_{M,\pi}^{\mathbb{B}}$ is characterized by the compositions $F=\Psi_{M,\pi}^{\mathbb{B}}\circ{\rm j}$ and $G=\Psi_{M,\pi}^{\mathbb{B}}\circ {\rm inc}$, and an isomorphism of formal groups between $F^*\mathbf{G}_{\mathbb{A}}^0$ and $G^*\mathbf{H}$. By Proposition~\ref{prop:relations-Psi-ch-i}, we have $\Psi_{M,\pi}^{\mathbb{B}}\circ{\rm inc}= {\rm inc}(M,\pi)\circ\Psi_{N}^{E_n}$, where $N={\rm ker}(\pi)$. We consider the action of $\mathbb{G}_{n+1}$ on the ring homomorphisms ${\rm inc}(M,\pi): D(N,\mathbf{F}_n)\to D(M,\mathbf{G}_{\mathbb{B}},\pi)$. Here we suppose that $\mathbb{G}_{n+1}$ acts trivially on $D(N,\mathbf{F}_n)$. The following lemma follows from the fact that $\rho_{\mathbf{H}}(g)$ in (\ref{eq:map-of-fundamental-exact-seq}) is the identity map for any $g\in \mathbb{G}_{n+1}$. \if0 Let $\nu: \mathbb{G}_{n+1}\to {\rm Gal}(\mathbb{F}/\mathbb{F}_p)$ be the projection. The group $\mathbb{G}_{n+1}$ acts on $E_n^0$ through $\nu$. Note that the map $i: E_n^0\to \mathbb{B}_n^0$ is $\mathbb{G}_{n+1}$-equivariant. Since $\nu^*\mathbf{F}_n[p^{\infty}]\cong\mathbf{F}_n[p^{\infty}]$, we can extend this action to that on $D_n(N)$ for any finite abelian $p$-group $N$. \begin{lemma} There is a commutative diagram \[ \begin{array}{ccc} E_n^0 & \stackrel{g}{\hbox to 10mm{\rightarrowfill}} & E_n^0 \\[1mm] \mbox{\scriptsize $\Psi_{N}^{E_n}$} \bigg\downarrow \phantom{\mbox{\scriptsize $\Psi_{N}^{E_n}$}} & & \phantom{\mbox{\scriptsize$\Psi_{N}^{E_n}$}} \bigg\downarrow \mbox{\scriptsize$\Psi_{N}^{E_n}$} \\[4mm] D_{n}(N) & \stackrel{g}{\hbox to 10mm{\rightarrowfill}} & D_{n}(N)\\[1mm] \end{array}\] for any $g\in \mathbb{G}_{n+1}$. \end{lemma} \proof We have an isomorphism $(\Psi_N^{E_n})^*\mathbf{F}_n[p^{\infty}]\cong \mathbf{F}_n[p^{\infty}]/[\varphi(N)]$, restricting the identity on the special fiber ${\rm Spec}(\mathbb{F})$, where $\varphi$ is a universal $N$-level structure. Note that the isomorphism $g^*\mathbf{F}_n[p^{\infty}]\cong \mathbf{F}_n[p^{\infty}]$ also restricts the identity on the special fiber. Hence we have isomorphisms $g^*(\Psi_N^{E_n})^*\mathbf{F}_n[p^{\infty}]\cong \mathbf{F}_n[p^{\infty}]/[\varphi(N)]$ and $(\Psi_N^{E_n})^*g^*\mathbf{F}_n[p^{\infty}]\cong \mathbf{F}_n[p^{\infty}]/[\varphi(N)]$, which restrict the identity on the special fiber. Since the $p$-divisible formal group $\mathbf{F}_n[p^{\infty}]$ over ${\rm Spf}(E_n^0)$ is a universal deformation, we obtain that $g\circ\Psi_N^{E_n}=\Psi_N^{E_n}\circ g$. \qed \fi \begin{lemma}\label{lemma:commutativity-DN-GN+1-RMN-DBPI} We have $g\circ {\rm inc}(M,g\pi)={\rm inc}(M,\pi) : D(N,\mathbf{F}_n)\to D(M,\mathbf{G}_{\mathbb{B}},\pi)$ for any $g\in\mathbb{G}_{n+1}$.
\if0 We have an commutative diagram \[ \begin{array}{ccc} D_n(N) & \stackrel{g}{\hbox to 10mm{\rightarrowfill}} & D_n(N) \\[1mm] \mbox{\scriptsize ${\rm inc}(M,g\pi)$} \bigg\downarrow \phantom{\mbox{\scriptsize ${\rm inc}(M,g\pi)$}} & & \phantom{\mbox{\scriptsize${\rm inc}(M,\pi)$}} \bigg\downarrow \mbox{\scriptsize${\rm inc}(M,\pi)$} \\[4mm] D(M,\mathbf{G}_{\mathbb{B}},{g\pi}) & \stackrel{g}{\hbox to 10mm{\rightarrowfill}} & D(M,\mathbf{G}_{\mathbb{B}},{\pi})\\[1mm] \end{array}\] for any $g\in \mathbb{G}_{n+1}$. \fi \end{lemma} \if0 \proof This follows from the fact that $\rho_{\mathbf{H}}(g)$ is the identity map for any $g\in \mathbb{G}_{n+1}$. \qed \proof This follows from the fact that there is a commutative diagram of functors \[ \begin{array}{ccc} {\rm Level}(N,\mathbf{F}_n[p^{\infty}])& \stackrel{g}{\hbox to 10mm{\leftarrowfill}}& {\rm Level}(N,\mathbf{F}_n[p^{\infty}])\\[1mm] \bigg\uparrow & & \bigg\uparrow \\[3mm] {\rm Level}(M,\mathbf{F}_{n+1}[p^{\infty}],{g\pi})& \stackrel{g}{\hbox to 10mm{\leftarrowfill}}& {\rm Level}(M,\mathbf{F}_{n+1}[p^{\infty}],{\pi}). \end{array}\] \qed \fi Using Proposition~\ref{prop:g-Psi-commutation-En}, Lemma~\ref{lemma:MUPMUP-variation}, Proposition~\ref{prop:permutation-action-Sn+1-DBM}, Proposition~\ref{prop:relations-Psi-ch-i}, and Lemma~\ref{lemma:commutativity-DN-GN+1-RMN-DBPI}, we obtain the following proposition. \begin{proposition}\label{prop:Bn-PSI-MPI-G-commutativity} There is a commutative diagram \[ \begin{array}{ccc} \mathbb{B}_n^0 & \stackrel{g}{\hbox to 10mm{\rightarrowfill}} & \mathbb{B}_n^0 \\[1mm] \mbox{\scriptsize $\Psi_{M,g\pi}^{\mathbb{B}}$} \bigg\downarrow \phantom{\mbox{\scriptsize $\Psi_{M,g\pi}^{\mathbb{M}}$}} & & \phantom{\mbox{\scriptsize$\Psi_{M,\pi}^{\mathbb{B}}$}} \bigg\downarrow \mbox{\scriptsize$\Psi_{M,\pi}^{\mathbb{B}}$} \\[4mm] D(M,\mathbf{G}_{\mathbb{B}},{g\pi}) & \stackrel{g}{\hbox to 10mm{\rightarrowfill}} & D(M,\mathbf{G}_{\mathbb{B}},{\pi})\\[1mm] \end{array}\] for any $g\in \mathbb{G}_{n+1}$. \end{proposition} \proof \if0 Let $\phi_{\pi}: M\to \mathbf{F}_{n+1}[p^{\infty}](D_{\mathbb{B}_n}(M)_{\pi})$ be the universal level $M$-structure of $\mathbf{F}_{n+1}[p^{\infty}]$ over $D_{\mathbb{B}_n}(M)_{\pi}$. The map $\phi_{\pi}$ determines a divisor $\phi_{\pi}[M]$ in $\mathbf{F}_{n+1}[p^{\infty}]$. Then $\overline{g}\circ \phi_{\pi}[M]=\phi_{g\pi}[M]$ for $g\in \mathbb{G}_{n+1}$, where $\overline{g}: \mathbf{F}_{n+1}[p^{\infty}]\to \mathbf{F}_{n+1}[p^{\infty}]$ is a map of $p$-divisible groups covering the map ${\rm Spf}(g): {\rm Spf}(\mathbb{B}_n^0)\to {\rm Spf}(\mathbb{B}_n^0)$. \fi Put $R=D(M,\mathbf{G}_{\mathbb{B}},\pi)$. By Propositions~\ref{prop:g-Psi-commutation-En}, \ref{prop:permutation-action-Sn+1-DBM}, and \ref{prop:relations-Psi-ch-i}, we have $g\circ \Psi_{M,g\pi}^{\mathbb{B}}\circ {\rm ch} =\Psi_{M,\pi}^{\mathbb{B}}\circ g\circ {\rm ch}$. This induces a map $F_g: \mathbb{A}_n^0\to R$ in $\mathcal{CL}$, and there are canonical isomorphisms $F_g^*\mathbf{G}_{\mathbb{A}}^0\cong (g^*\mathbf{G}_R/g^*M)^0\cong g^*(\mathbf{G}_R/M)^0$. On the other hand, by Proposition~\ref{prop:relations-Psi-ch-i} and Lemma~\ref{lemma:commutativity-DN-GN+1-RMN-DBPI}, $g\circ \Psi_{M,g\pi}^{\mathbb{B}}\circ {\rm inc} =\Psi_{M,\pi}^{\mathbb{B}}\circ g\circ {\rm inc}$. We denote this map by $G_g$. There are canonical isomorphisms $G_g^*\mathbf{H}\cong g^*\mathbf{H}_R/g^*N\cong g^*(\mathbf{H}_R/N)$, where $N=\ker(\pi)$. 
The isomorphism $\theta:\mathbf{G}_{\mathbb{B}}^0 \stackrel{\cong}{\to}\mathbf{H}_{\mathbb{B}}$ induces isomorphisms $\overline{g^*\theta}: (g^*\mathbf{G}_R/g^*M)^0\stackrel{\cong}{\to} g^*\mathbf{H}_R/g^*N$ and $g^*\overline{\theta}: g^*(\mathbf{G}_R/M)^0\stackrel{\cong}{\to} g^*(\mathbf{H}_R/N)$. The proposition follows from Lemma~\ref{lemma:MUPMUP-variation} since we have $\overline{g^*\theta}=g^*\overline{\theta}$ under the above isomorphisms. \if0 $g\circ {\rm ch}(M,{g\pi})= {\rm ch}(M,{\pi})\circ g$ by Proposition~\ref{prop:permutation-action-Sn+1-DBM}. By Proposition~\ref{prop:relations-Psi-ch-i}, $g\circ \Psi_{M,g\pi}\circ {\rm ch}= {\rm ch}(M,{\pi})\circ g\circ \Psi_M^{E_{n+1}}$. Using Proposition~\ref{prop:relations-Psi-ch-i} again, we see that $\Psi_{M,\pi}\circ g\circ {\rm ch}= {\rm ch}(M)_{\pi}\circ \Psi_M^{E_{n+1}}\circ g$. \fi \if0 By the universal property of the ring $\mathbb{B}_n^0$, the two homomorphisms from $\mathbb{B}_n^0$ to $D_{\mathbb{B}_n}(M)_{\pi}$ define two isomorphisms between the $p$-divisible formal groups $g^*(\Psi_{M,g\pi}^{\mathbb{B}_n})^*\mathbf{F}_n[p^{\infty}]= (\Psi_{M,\pi}^{\mathbb{B}_n})g^*\mathbf{F}_n[p^{\infty}]$ and $g^*(\Psi_{M,g\pi}^{\mathbb{B}_n})^*\mathbf{F}_{n+1}[p^{\infty}]^0= (\Psi_{M,\pi}^{\mathbb{B}_n})^*g^*\mathbf{F}_{n+1}[p^{\infty}]^0$. By the following commutative diagram \[ \begin{array}{ccc} g^*(\mathbf{F}_n[p^{\infty}]/[\phi'(N)]) & = & g^*\mathbf{F}_n[p^{\infty}]/[\phi'(N)]\\[1mm] \bigg\downarrow & & \bigg\downarrow \\[3mm] g^*(\mathbf{F}_{n+1}[p^{\infty}]/[\phi_{g\pi}(M)])& \cong & g^*\mathbf{F}_{n+1}[p^{\infty}]/[\phi_{\pi}(M)],\\ \end{array}\] we see that the two isomorphisms are the same. Hence we obtain that $g\circ \Psi_{M,g\pi}^{\mathbb{B}_n}= \Psi_{M,\pi}^{\mathbb{B}_n}\circ g$. \fi \qed \begin{corollary}\label{cor:equivariance-psi-B-one-point-case} The map $\Psi_M^{\mathbb{B}}: \mathbb{B}_n^0\to D(M,\mathbf{G}_{\mathbb{B}})$ is $\mathbb{G}_{n+1}$-equivariant. \end{corollary} \if0 \begin{proposition} We have a commutative diagram \[ \begin{array}{ccc} \mathbb{B}_n^0(X) & \stackrel{g}{\hbox to 10mm{\rightarrowfill}} & \mathbb{B}_n^0(X) \\[1mm] \mbox{\scriptsize $\Psi_{M,g\pi}$} \bigg\downarrow \phantom{\mbox{\scriptsize $\Psi_{M,g\pi}$}} & & \phantom{\mbox{\scriptsize$\Psi_{M,\pi}$}} \bigg\downarrow \mbox{\scriptsize$\Psi_{M,\pi}$} \\[4mm] \mathbb{B}(M,{g\pi})_n^0(X) & \stackrel{g}{\hbox to 10mm{\rightarrowfill}} & \mathbb{B}(M,{\pi})_n^0(X)\\[1mm] \end{array}\] for any $g\in \mathbb{G}_{n+1}$ and any space $X$. \end{proposition} \proof Since $g\circ \Psi_{M,g\pi}$ and $\Psi_{M,\pi}\circ g$ are ring operations between Landweber exact cohomology theories, it is sufficient to show that they coincide when $X$ is a one point space and the infinite complex projective space $\mathbb{CP}^{\infty}$ by \cite[Proposition~3.7]{Ando2} When $X$ is a one point space, the above diagram is commutative by Proposition~\ref{prop:Bn-PSI-MPI-G-commutativity}. 
When $X=\mathbb{CP}^{\infty}$, we see that the above diagram is commutative by the following commutative diagram of $p$-divisible groups \[ \begin{array}{ccc} \mathbf{F}_{n+1}[p^{\infty}]& \stackrel{\overline{g}}{\hbox to 10mm{\rightarrowfill}}& \mathbf{F}_{n+1}[p^{\infty}]\\[1mm] \bigg\downarrow & & \bigg\downarrow \\[3mm] \mathbf{F}_{n+1}[p^{\infty}]/[\phi_{\pi}(M)]& \stackrel{\overline{g}}{\hbox to 10mm{\rightarrowfill}}& \mathbf{F}_{n+1}[p^{\infty}]/[\phi_{g\pi}(M)],\\ \end{array}\] where the horizontal arrows cover the map $g: {\rm Spf}(D_{\mathbb{B}_n}(M)_{\pi}) \to {\rm Spf}(D_{\mathbb{B}_n}(M)_{g\pi})$ and the vertical arrows are quotient maps. \qed \fi \proof[Proof of Theorem~\ref{thm:equivariance-operation-psi-B}] It is sufficient to show that the theorem holds for any finite complex $X$. In this case, since $\Psi_M^{\mathbb{B}}$ is a ring operation and $\mathbb{B}_n^0(X)\cong \mathbb{B}_n^0\otimes_{E_{n+1}^0}E_{n+1}^0(X)$, it is sufficient to show that this holds for a one point space and that $\Psi_M^{\mathbb{B}}\circ{\rm ch}$ is $\mathbb{G}_{n+1}$-equivariant. The case when $X$ is a one point space follows from Corollary~\ref{cor:equivariance-psi-B-one-point-case}. The equivariance of $\Psi_M^{\mathbb{B}}\circ{\rm ch}$ follows from Proposition~\ref{prop-on-Psi-operation-B-M}, Proposition~\ref{prop:g-Psi-commutation-En}, and the fact that ${\rm ch}(M)$ is $\mathbb{G}_{n+1}$-equivariant. \qed Now, we introduce a ring operation $\Phi_M^{\mathbb{B}}$ which is an extension of the ring operation $\Phi_M^{E_{n+1}}$. We suppose that $M$ is a subgroup of $\Lambda^{n+1}$. We define an operation \[ \Phi_{M}^{\mathbb{B}}: \mathbb{B}_n^0(X)\longrightarrow D_{\mathbb{B}}\otimes_{\mathbb{B}_n^0} \mathbb{B}_n^0(X)\] by the composition $\Phi_{M}^{\mathbb{B}}=(I_M^{\mathbb{B}}\otimes 1)\circ \Psi_M^{\mathbb{B}}$, where $I_M^{\mathbb{B}}$ is the map $D(M,\mathbf{G}_{\mathbb{B}}) \to D_{\mathbb{B}}$ induced by the inclusion $M\hookrightarrow\Lambda^{n+1}$. \if0 Let $\pi:\Lambda^{n+1}[p^r]\to\mathbb{Q}_p/\mathbb{Z}_p$. We define an operation \[ \Phi_{M,\pi}^{\mathbb{B}_n}: \mathbb{B}_n^0(X)\longrightarrow D_{\mathbb{B}_n}(r)_{\pi}\otimes_{\mathbb{B}} \mathbb{B}_n^0(X)\] by \[ \Phi_{M,\pi}^{\mathbb{B}_n}=(p_{\pi}\otimes 1) \circ \Phi_M^{\mathbb{B}_n}.\] \fi Since $D_{\mathbb{B}}\cong \mathbb{B}_n^0\otimes_{E_{n+1}^0}D_{n+1}$, the map ${\rm ch}: E_{n+1}^0(X)\to \mathbb{B}_n^0(X)$ extends to a map \[ {\rm ch}(\Lambda^{n+1}): D_{n+1}{\otimes}_{E_{n+1}^0}E_{n+1}^0(X) \longrightarrow D_{\mathbb{B}}{\otimes}_{\mathbb{B}_n^0} \mathbb{B}_n^0(X).\] By Lemma~\ref{lemma:restriction-g-level-str-commutativity}, Proposition~\ref{prop-on-Psi-operation-B-M}, and Theorem~\ref{thm:equivariance-operation-psi-B}, we obtain the following proposition. \begin{proposition}\label{prop:on-Phi-M-ch-and-equivariance} The operation $\Phi_{M}^{\mathbb{B}}: \mathbb{B}_n^0(X)\to D_{\mathbb{B}}\otimes_{\mathbb{B}}\mathbb{B}_n^0(X)$ is $\mathbb{G}_{n+1}$-equivariant, and we have $\Phi_M^{\mathbb{B}}\circ{\rm ch}= {\rm ch}(\Lambda^{n+1})\circ\Phi_M^{E_{n+1}}$. \end{proposition} \if0 \begin{proposition}\label{prop:on-Phi-M-ch} We have $\Phi_M^{\mathbb{B}}\circ{\rm ch}= {\rm ch}(\Lambda^{n+1})\circ\Phi_M^{E_{n+1}}$. \end{proposition} \begin{proposition}\label{prop:equivariance-Phi-M} The operation $\Phi_{M}^{\mathbb{B}}: \mathbb{B}_n^0(X)\to D_{\mathbb{B}}\otimes_{\mathbb{B}}\mathbb{B}_n^0(X)$ is $\mathbb{G}_{n+1}$-equivariant. 
\end{proposition} \proof This follows from Lemma~\ref{lemma:restriction-g-level-str-commutativity} and Theorem~\ref{thm:equivariance-operation-psi-B}. \qed \fi In order to compare Hecke operators in $E_{n+1}$-theory with those in $E_n$-theory in \S\ref{section:comparison-Hecke-operators}, we introduce a ring $D_{\mathbb{B}}(\mu)$ for a split surjection $\mu: \Lambda^{n+1}\to \mathbb{Q}_p/\mathbb{Z}_p$, and a ring operation $\Phi_{M,\mu}^{\mathbb{B}}: \mathbb{B}_n^0(X)\to D_{\mathbb{B}}(\mu)\otimes_{\mathbb{B}_n^0} \mathbb{B}_n^0(X)$ for a subgroup $M$ of $\Lambda^{n+1}$. We describe a relationship between the operations $\Phi_{M,\mu}^{\mathbb{B}}$ and $\Phi_N^{E_n}$, where $N=M\cap \ker(\mu)$. The inclusion $\Lambda^{n+1}[p^r]\to\Lambda^{n+1}[p^{r+1}]$ of abelian groups induces a map $D_{\mathbb{B}}(r,\mu[p^r])\to D_{\mathbb{B}}(r+1,\mu[p^{r+1}])$ of local rings. We set \[ D_{\mathbb{B}}(\mu)=\ \subrel{r}{\rm colim} D_{\mathbb{B}}(r,\mu[p^r]) .\] Assembling the projections $D_\mathbb{B}(r)\to D_{\mathbb{B}}(r,\mu[p^r])$ for $r\ge 0$, we obtain a map $p_{\mu}: D_{\mathbb{B}}\to D_{\mathbb{B}}(\mu)$. We define a ring operation \[ \Phi_{M,\mu}^{\mathbb{B}}: \mathbb{B}_n^0(X)\longrightarrow D_{\mathbb{B}}(\mu){\otimes}_{\mathbb{B}_n^0} \mathbb{B}_n^0(X)\] by the composition $\Phi_{M,\mu}^{\mathbb{B}}=(p_{\mu}\otimes 1) \circ\Phi_{M}^{\mathbb{B}}$. To describe the composite $\Phi_{M,\mu}^{\mathbb{B}}\circ{\rm inc}: E_n^0(X)\to \mathbb{B}_n^0(X)\to D_{\mathbb{B}}(\mu)\otimes_{\mathbb{B}_n^0} \mathbb{B}_n^0(X)$, we set \[ D_n(V)=\ \subrel{r}{\rm colim} D(V[p^r],\mathbf{F}_n).\] where $V=\ker(\mu)$. Since $V\cong \Lambda^n$, we have an isomorphism $D_n(V)\cong D_n$, and an operation $\Phi_N^{E_n}: E_n^0(X)\to D_n(V)\otimes_{E_n^0}E_n^0(X)$, where $N=M\cap V$. \if0 \begin{lemma}\label{lemma:commutativity-lemma-I} We have the following commutative diagram of semi-local commutative rings and continuous ring homomorphisms \[ \begin{array}{ccc} D_{\mathbb{B}_n}(M)& \stackrel{p_\mu}{\hbox to 10mm{\rightarrowfill}}& D_{\mathbb{B}_n}(M)_{\mu}\\[2mm] \mbox{\scriptsize$i_M^{\mathbb{B}_n}$}\bigg\downarrow \phantom{\mbox{\scriptsize$i_M^{\mathbb{B}_n}$}} & & \phantom{\mbox{\scriptsize$i_M^{\mathbb{B}_n}$}}\bigg\downarrow \mbox{\scriptsize$i_M^{\mathbb{B}_n}$}\\[2mm] D_{\mathbb{B}_n}(r) & \stackrel{p_\pi}{\hbox to 10mm{\rightarrowfill}}& D_{\mathbb{B}_n}(r)_{\pi}. \end{array}\] \end{lemma} \proof The lemma follows from the commutative diagrams of functors \[ \begin{array}{ccc} {\rm Level}(M,\mathbf{F}_{n+1}[p^{\infty}])& {\hbox to 10mm{\leftarrowfill}} & {\rm Level}(M, \mathbf{F}_{n+1}[p^{\infty}])_{\mu}\\[2mm] \bigg\uparrow & & \bigg\uparrow \\[4mm] {\rm Level}(\Lambda^{n+1}[p^r],\mathbf{F}_{n+1}[p^{\infty}])& {\hbox to 10mm{\leftarrowfill}} & {\rm Level}(\Lambda^{n+1}[p^r], \mathbf{F}_{n+1}[p^{\infty}])_{\pi}, \end{array}\] where the vertical arrows are restrictions and the horizontal arrows are inclusions. 
\qed \begin{lemma}\label{lemma:commutativity-lemma-II} We have the following commutative diagram of commutative rings \[ \begin{array}{ccc} D_n(N) & \stackrel{{\rm inc}(M,\mu)}{\hbox to 13mm{\rightarrowfill}} & D_{\mathbb{B}_n}(M)_{\mu}\\[2mm] \mbox{\scriptsize$i_N^{E_n}$}\bigg\downarrow \phantom{\mbox{\scriptsize$i_N^{E_n}$}} & & \phantom{}\bigg\downarrow \mbox{\scriptsize$i_M^{\mathbb{B}_n}$} \\[4mm] D_n(V) & \stackrel{{\rm inc}(\Lambda^{n+1}[p^r],\pi)} {\hbox to 13mm{\rightarrowfill}} & D_{\mathbb{B}_n}(r)_{\pi}.\\ \end{array}\] \end{lemma} \proof The lemma follows from the following commutative diagram \[ \begin{array}{ccc} {\rm Level}(N,\mathbf{F}_n[p^{\infty}])& \hbox to 10mm{\leftarrowfill}& {\rm Level}(M,\mathbf{F}_{n+1}[p^{\infty}])_{\mu}\\[1mm] \bigg\uparrow & & \bigg\uparrow\\[4mm] {\rm Level}(V,\mathbf{F}_n[p^{\infty}])& \hbox to 10mm{\leftarrowfill}& {\rm Level}(\Lambda^{n+1}[p^r],\mathbf{F}_{n+1}[p^{\infty}])_{\pi}, \end{array}\] where all arrows are restrictions. \qed \fi By Theorem~\ref{thm:relation-psi-M-pi-Psi-N}, we obtain the following proposition which describes a relationship between the operations $\Phi_{M,\mu}^{\mathbb{B}}$ and $\Phi_N^{E_n}$. \begin{proposition}\label{prop:relation-PhiB-PhiEn} There is a natural commutative diagram \[ \begin{array}{ccc} E_n^0(X) & \stackrel{\Phi_N^{E_n}} {\hbox to 10mm{\rightarrowfill}} & D_n(V){\otimes}_{E_n^0} E_n^0(X)\\[4mm] \mbox{\scriptsize${\rm inc}$}\bigg\downarrow \phantom{\mbox{\scriptsize${\rm i}$}} && \phantom{\mbox{\scriptsize${\rm inc}(\mu) \otimes {\rm inc}$}}\bigg\downarrow \mbox{\scriptsize${\rm inc}(\mu)\otimes {\rm inc}$}\\[2mm] \mathbb{B}_n^0(X) & \stackrel{\Phi_{M,\mu}^{\mathbb{B}}} {\hbox to 10mm{\rightarrowfill}} & D_{\mathbb{B}}(\mu){\otimes}_{\mathbb{B}_n^0} \mathbb{B}_n^0(X)\\[2mm] \end{array}\] for any space $X$, where ${\rm inc}(\mu): D_n(V)\to D_{\mathbb{B}}(\mu)$ is a map induced by the inclusion $V\hookrightarrow\Lambda^{n+1}$. \end{proposition} \if0 \proof This follows from Theorem~\ref{thm:relation-psi-M-pi-Psi-N}. \if0 There is a non-negative integer $r$ such that $p^rM=0$. Recall that $\Phi_{M,\pi}^{\mathbb{B}_n}= (p_{\pi}\otimes 1)\circ (i_M^{\mathbb{B}_n}\otimes 1) \circ \Psi_{M}^{\mathbb{B}_n}$. By Lemma~\ref{lemma:commutativity-lemma-I}, we see that $\Phi_{M,\pi}^{\mathbb{B}_n}=(i_M^{\mathbb{B}_n}\otimes 1) \circ \Psi_{M,\mu}^{\mathbb{B}_n}$. Theorem~\ref{thm:relation-psi-M-pi-Psi-N} implies that $\Phi_{M,\pi}^{\mathbb{B}_n}\circ i= (i_M^{\mathbb{B}_n}\otimes 1)\circ({\rm inc}(M,\mu)\otimes 1) \circ \Psi_N^{E_n}$. The proposition follows from Lemma~\ref{lemma:commutativity-lemma-II}. \fi \qed \fi \subsection{Hecke operators in $\mathbb{B}_n$-theory} In this subsection we define Hecke operators in $\mathbb{B}_n$-theory. Let $M$ be a finite abelian $p$-group. Recall that $m_{n+1}(M)$ is the set of all subgroups of $\Lambda^{n+1}$ which are isomorphic to $M$. \if0 We write \[ {\rm Mono}(M,\Lambda^{n+1})=\{M_1,\ldots,M_r\}.\] We set \[ D_{\mathbb{B}_n}(r)=\mathbb{B}_n^0 \subrel{E_{n+1}^0}{\otimes} D_{n+1}(r).\] Note that ${\rm GL}_{n+1}(\mathbb{Z}_p)$ acts on $D_{\mathbb{B}_n}(r)=\mathbb{B}_n^0\subrel{E_{n+1}^0}{\otimes}D_{n+1}(r)$ through the action on $D_{n+1}(r)$. 
The inclusion map $M_s\hookrightarrow \Lambda^{n+1}[p^r]$ induces a ring homomorphism \[ i_{M_s}^{\mathbb{B}_n}: D_{\mathbb{B}_n}(M)=\mathbb{B}_n^0 \subrel{E_{n+1}^0}{\otimes}D_{n+1}(M) \stackrel{1\otimes i_{M_s}}{\hbox to 10mm{\rightarrowfill}} \mathbb{B}_n^0\subrel{E_{n+1}^0}{\otimes}D_{n+1}(r)= D_{\mathbb{B}_n}(r).\] \fi We consider a natural map given by \[ \sum_{M'\in m_{n+1}(M)} \Phi_{M'}^{\mathbb{B}}: \mathbb{B}_n^0(X)\longrightarrow D_{\mathbb{B}}{\otimes}_{\mathbb{B}_n^0} \mathbb{B}_n^0(X).\] The image of this map is invariant under the action of ${\rm GL}_{n+1}(\mathbb{Z}_p)$. Since the invariant subring of $D_{\mathbb{B}}$ under the action of ${\rm GL}_{n+1}(\mathbb{Z}_p)$ is $\mathbb{B}_n^0$ by Corollary~\ref{cor:invariant-DB}, we obtain the following theorem in the same way as in the Hecke operators in Morava $E$-theory. \begin{theorem}\label{thm:Btheory-Hecke-op} For an abelian $p$-group $M$, there is an unstable additive cohomology operation \[ \widetilde{\rm T}_M^{\mathbb{B}}: \mathbb{B}_n^0(X) \longrightarrow \mathbb{B}_n^0(X) \] given by $(i^{\mathbb{B}}\otimes 1)\circ \widetilde{\rm T}_M^{\mathbb{B}}= \sum_{M'\in m_{n+1}(M)} \Phi_{M'}^{\mathbb{B}}$, where $i^{\mathbb{B}}$ is the $\mathbb{B}_n^0$-algebra structure map of $D_{\mathbb{B}}$. For a nonnegative integer $k$, there is also an unstable additive operation \[ \widetilde{\rm T}_{\mathbb{B}}\left(p^k\right): \mathbb{B}_n^0(X) \longrightarrow \mathbb{B}_n^0(X)\] given by $(i^{\mathbb{B}}\otimes 1)\circ \widetilde{\rm T}_{\mathbb{B}}\left(p^k\right)= \sum_{|M|=p^k} \Phi_M^{\mathbb{B}}$, where the sum ranges over all subgroups $M$ of $\Lambda^{n+1}$ such that the order $|M|$ is $p^k$. \end{theorem} By Proposition~\ref{prop:on-Phi-M-ch-and-equivariance}, we obtain the following proposition. \begin{proposition}\label{prop:Hecke-G_{n+1}-equivariance} The operations $\widetilde{\rm T}_M^{\mathbb{B}}$ and $\widetilde{\rm T}_{\mathbb{B}}\left(p^k\right)$ are $\mathbb{G}_{n+1}$-equivariant. \end{proposition} Now, we define a natural action of the Hecke algebra $\mathcal{H}_{n+1}$ on $\mathbb{B}_n^0(X)$ for any space $X$. Recall that $\Delta_{n+1}={\rm End}(\Lambda^{n+1})\cap {\rm GL}_{n+1}(\mathbb{Q}_p)$ and $\Gamma_{n+1}={\rm GL}_{n+1}(\mathbb{Z}_p)$. The Hecke algebra $\mathcal{H}_{n+1}$ is the endomorphism ring of the $\mathbb{Z}[\Delta_{n+1}]$-module $\mathbb{Z}[\Delta_{n+1}/\Gamma_{n+1}]$. There is an additive isomorphism $\mathcal{H}_{n+1}\cong \mathbb{Z}[\Gamma_{n+1}\backslash\Delta_{n+1}/\Gamma_{n+1}]$, and we identify $\Gamma_{n+1}\backslash\Delta_{n+1}/\Gamma_{n+1}$ with the set of all isomorphism classes of finite abelian $p$-groups with $p$-rank $\le n+1$. For a finite abelian $p$-group $M$ with $p$-rank $\le n+1$, we denote by $\widetilde{M}$ the associated endomorphism of $\mathbb{Z}[\Delta_{n+1}]$-module $\mathbb{Z}[\Delta_{n+1}/\Gamma_{n+1}]$. \begin{theorem} \label{thm:Hn+1-module-structure-on-BnX} Assigning to $\widetilde{M}$ the Hecke operator $\widetilde{\rm T}_M^{\mathbb{B}}$, there is a natural $\mathcal{H}_{n+1}$-module structure on $\mathbb{B}_n^0(X)$ for any space $X$ such that ${\rm ch}: E_{n+1}^0(X)\to \mathbb{B}_n^0(X)$ is a map of $\mathcal{H}_{n+1}$-modules. \end{theorem} \proof By \cite[Lemma~14.4]{Rezk}, there is a ring homomorphism $\psi_x: D_{n+1}\to D_{n+1}$ for each $x\in\Delta_{n+1}$ such that $\psi_x\circ i=\Phi_M^{E_{n+1}}$, where $M$ is the kernel of $x:\Lambda^{n+1}\to\Lambda^{n+1}$. 
Using the isomorphism $D_{\mathbb{B}}\otimes_{\mathbb{B}_n^0}\mathbb{B}_n^0(X)\cong D_{n+1}\otimes_{E_{n+1}^0}\mathbb{B}_n^0(X)$ and Proposition~\ref{prop:on-Phi-M-ch-and-equivariance}, we can extend $\Phi_M^{\mathbb{B}}$ to a ring operation \[ \psi_x^{\mathbb{B}}: D_{\mathbb{B}}\otimes_{\mathbb{B}_n^0}\mathbb{B}_n^0(X) \longrightarrow D_{\mathbb{B}}\otimes_{\mathbb{B}_n^0}\mathbb{B}_n^0(X)\] such that $\psi_x^{\mathbb{B}}\circ (i^{\mathbb{B}}\otimes 1)=\Phi_M^{\mathbb{B}}$. We obtain the theorem in the same way as in \cite[Proposition~14.3]{Rezk}. \qed By Theorem~\ref{thm:Hn+1-module-structure-on-BnX}, there is a natural $\mathcal{H}_{n+1}$-module structure on $\mathbb{B}_n^0(X)$ for any space $X$. By Proposition~\ref{prop:Hecke-G_{n+1}-equivariance}, the $\mathcal{H}_{n+1}$-module structure commutes with the action of $\mathbb{G}_{n+1}$. This implies an $\mathcal{H}_{n+1}$-module structure on the invariant submodule. In \cite{Torii3} we showed that the map ${\rm inc}$ induces a natural isomorphism \[ E_n^0(X)\stackrel{\cong}{\to} (\mathbb{B}_n^0(X))^{\mathbb{G}_{n+1}} \] for any spectrum $X$, where the right hand side is the $\mathbb{G}_{n+1}$-invariant submodule of $\mathbb{B}_n^0(X)$. Hence we obtain the following theorem. \begin{theorem} \label{thm:natural-Hn+1-module-strucrure-on-EnX} There is a natural action of the Hecke algebra $\mathcal{H}_{n+1}$ on $E_n^0(X)$ for any space $X$ such that ${\rm inc}: E_n^0(X)\to \mathbb{B}_n^0(X)$ is a map of $\mathcal{H}_{n+1}$-modules. \end{theorem} \if0 \subsection{Symmetric power operations on $\mathbb{B}_n$-theory} When $A$ is a finite transitive $\mathbb{Z}_p^{n+1}$-set, we set \[ \chi_A^{\mathbb{B}}=\Phi_M^{\mathbb{B}},\] where $A\cong\mathbb{Z}_p^{n+1}/H$ as a $\mathbb{Z}_p^{n+1}$-set and $M={\rm Hom}(\mathbb{Z}_p^{n+1}/H,\mathbb{Q}_p/\mathbb{Z}_p)$. When $A$ be a finite $\mathbb{Z}_p^{n+1}$-set, we let $A=A_1\coprod\cdots\coprod A_s$ be the decomposition as a disjoint union of the transitive $\mathbb{Z}_p^n$-sets. We define \[ \chi_A^{\mathbb{B}}=\prod_{i=1}^s \chi_{A_i}^{\mathbb{B}}.\] Consider an operation \[ \sum_{|A|=l}\ \frac{1}{|{\rm Aut}_{\mathbb{Z}_p^{n+1}}(A)|} \, \chi_A^{\mathbb{B}}.\] Since this operation is invariant under the action of ${\rm GL}_{n+1}(\mathbb{Z}_p)$, we can define the $l$th symmetric power operation \[ \sigma_l^{\mathbb{B}}: \mathbb{B}_n^0(X) \longrightarrow \mathbb{B}_n^0(X)\otimes\mathbb{Q} \] by \[ (i\otimes 1)\circ \sigma_l^{\mathbb{B}}=\sum_{|A|=l}\ \frac{1}{|{\rm Aut}_{\mathbb{Z}_p^{n+1}}(A)|}\, \chi_A^{\mathbb{B}}.\] We define the total symmetric power operation \[ S_Z^{\mathbb{B}}: \mathbb{B}_n^0(X)\to (\mathbb{B}_n^0(X)\otimes \mathbb{Q}) \power{Z} \] by \[ S_Z^{\mathbb{B}}(x)=\sum_{l=0}^\infty \sigma_l^{\mathbb{B}}(x)\,Z^l,\] where $Z$ is a formal variable. We can write $S_Z^{\mathbb{B}}$ in terms of the Hecke operators as \begin{align}\label{eq:total-symmetric-Hecke-B} S_Z^{\mathbb{B}}(x)=\exp\left[ \sum_{k=0}^\infty \frac{1}{p^k} \widetilde{\rm T}_{\mathbb{B}}\left(p^k\right)(x)\, Z^{p^k} \right]. \end{align} \if0 We define the $l$th symmetric power operation \[ \sigma_l=\sigma_l^{\mathbb{B}}: \mathbb{B}_n^0(X) \longrightarrow \mathbb{B}_n^0(X)\otimes\mathbb{Q} \quad (l\ge 0) \] by the equation \[ \sum_{l\ge 0} \sigma_l(x)\, Z^l =\exp\left[ \sum_{k=0}^\infty \frac{1}{p^k}\, \widetilde{\rm T}\left(p^k\right)(x)\, Z^{p^k} \right], \] where $x\in \mathbb{B}_n^0(X)$ and $Z$ is a formal variable. 
\fi \if0 \subsection{Orbifold genera} We define the orbifold genera in $\mathbb{B}_n$-theory \[ \phi_{\rm orb}(M^l//\Sigma_l)= \phi_{\rm orb}^{\mathbb{B}}(M^l//\Sigma_l)\in \mathbb{B}\otimes\mathbb{Q}\quad (l\ge 0) \] by the equation \[ \sum_{l=0}^\infty\phi_{\rm orb}(M^l//\Sigma_l)\,Z^l = \exp \left[ \sum_{k=0}^\infty \frac{1}{p^k}\widetilde{\rm T}\left(p^k\right)(\phi(M))\,Z^{p^k} \right].\] {\color{red} Is it possible for us to define an orbifold genus for $\mathbb{B}_n$-theory for any finite group $G$?} \fi \if0 \subsection{Logarithmic operation on $\mathbb{B}_n$-theory} ({\color{red} Should we delete this section?}) ({\color{red} There is a problem with convergence of the logarithmic operation.}) We try to construct the logarithmic operation on $\mathbb{B}_n\otimes\mathbb{Q}$. We define the logarithmic operation \[ l_{n,p}^{\mathbb{B}_n}: \mathbb{B}_n^0(X)^{\times} \longrightarrow \mathbb{B}_n^0(X)\otimes\mathbb{Q}\] by the following formula \[ l_{n,p}^{\mathbb{B}_n}(x)=\frac{1}{p}\log \left(1+p \prod_{r=0}^n N_{p^r}^{\mathbb{B}_n}(x)^{(-1)^r p^{(r-1)(r-2)/2}}\right), \] where $N_{p^r}^{\mathbb{B}_n}: \mathbb{B}_n^0(X)\to \mathbb{B}_n^0(X)$ is given by \[ N_{p^r}^{\mathbb{B}_n}(x)= \prod_{\mbox{\scriptsize$ \begin{array}{c} M \le \Lambda^n[p]\\ |M|=p^r\\ \end{array}$}} \Psi_M^{\mathbb{B}_n}(x).\] \fi \fi \section{Comparison of Hecke operators} \label{section:comparison-Hecke-operators} In this section we compare the Hecke operators in $E_n$-theory with those in $E_{n+1}$-theory via $\mathbb{B}_n$-theory. In particular, we construct a ring homomorphism $\omega: \mathcal{H}_{n+1}\to\mathcal{H}_n$ between Hecke algebras and show that the $\mathcal{H}_{n+1}$-module structure on $E_n^0(X)$ given by Theorem~\ref{thm:natural-Hn+1-module-strucrure-on-EnX} is obtained from the $\mathcal{H}_n$-module structure by the restriction along $\omega$ (Theorem~\ref{thm:comparison-Hn-Hn+1-on-EnX}). \if0 \subsection{Relation between $\Psi_M^{\mathbb{B}_n}$ and $\Psi_{N}^{E_{n+1}}$} By the construction of the operation $\Psi_{M,\pi}^{\mathbb{B}_n}$, we have the following theorem. \begin{theorem}\label{thm:Psi-relation-B-En+1} We have the relation \[ \Psi_{M,\pi}^{\mathbb{B}_n}\circ {\rm ch}= {\rm ch}(M)_{\pi}\circ \Psi_M^{E_{n+1}}\] for any finite $p$-group $M$ and any homomorphism $\pi:M\to \mathbb{Q}_p/\mathbb{Z}_p$. \end{theorem} \begin{corollary}\label{cor:Psi-relation-B-En+1} We have the relation \[ \Psi_{M}^{\mathbb{B}_n}\circ {\rm ch}= {\rm ch}(M)\circ \Psi_M^{E_{n+1}},\] where ${\rm ch}(M)=\prod_{\pi}{\rm ch}(M)_{\pi}: D_{n+1}(M)\longrightarrow \prod_{\pi} D_{\mathbb{B}_n}(M)_{\pi} = D_{\mathbb{B}_n}(M)$. \end{corollary} \fi \if0 \subsection{Relation between $\Psi_M^{\mathbb{B}_n}$ and $\Psi_N^{E_n}$} \begin{theorem}\label{thm:relation-psi-M-pi-Psi-N} There is a commutative diagram \[ \begin{array}{ccc} E_n^0(X) & \stackrel{\Psi_N^{E_n}}{\hbox to 10mm{\rightarrowfill}}& E_n^0(X)\\[1mm] \mbox{\scriptsize$i$}\bigg\downarrow\phantom{\mbox{\scriptsize$i$}} & & \phantom{\mbox{\scriptsize$r_{M,\pi}\otimes i$}} \bigg\downarrow\mbox{\scriptsize$r_{M,\pi}\otimes i$}\\[1mm] \mathbb{B}_n^0(X) & \stackrel{\Psi_{M,\pi}^{\mathbb{B}_n}} {\hbox to 10mm{\rightarrowfill}} & D_{\mathbb{B}_n}(M)_{\pi}^0(X)\\ \end{array}\] for any space $X$, any finite abelian $p$-group $M$, and any homomorphism $\pi:M\to\mathbb{Q}_p/\mathbb{Z}_p$, where $N$ is the kernel of $\pi$. \end{theorem} \proof The above diagram is commutative when $X$ is a one point space by Proposition~\ref{prop:relations-Psi-ch-i}. 
When $X$ is the infinite complex projective space $\mathbb{CP}^{\infty}$, we see that the above diagram is commutative by the following commutative diagram of $p$-divisible groups \[ \begin{array}{ccc} \mathbf{F}_n[p^{\infty}] & \hbox to 10mm{\rightarrowfill} & \mathbf{F}_n[p^{\infty}]/[\varphi(N)]\\[1mm] \bigg\downarrow & & \bigg\downarrow \\[3mm] \mathbf{F}_{n+1}[p^{\infty}]& \hbox to 10mm{\rightarrowfill} & \mathbf{F}_{n+1}[p^{\infty}]/[\phi_{\pi}(M)],\\ \end{array}\] where the horizontal arrows are quotient maps and the vertical arrows are isomorphisms into the identity components. Since $E_n^*(-)$ and $D_{\mathbb{B}_n}(M)_{\pi}^*(-)$ are Landweber exact cohomology theories, the Kunneth formula holds for the cohomology rings of these theories for a finite product of $\mathbb{C}P^{\infty}$'s. The fact that $\Psi_{M,\pi}^{\mathbb{B}_n}\circ i$ and $(r_{M,\pi}\otimes i)\circ \Psi_N^{E_n}$ are ring operations, implies that they coincide when $X$ is a finite product of $\mathbb{C}P^{\infty}$'s. By \cite[Proposition~3.7]{Ando2} we see that the above diagram is commutative for all $X$. \qed We suppose $M$ is a subgroup of $\Lambda^{n+1}[p^r]$. Let $\pi: \Lambda^{n+1}[p^r]\to \mathbb{Q}_p/\mathbb{Z}_p$ be a homomorphism such that the image of $\pi$ is $p^{-r}\mathbb{Z}_p/\mathbb{Z}_p$. We set $V=\ker \pi$, and $N=M\cap V$. Note that $V\cong \Lambda^n[p^r]$. Let $\mu:M\to \mathbb{Q}_p/\mathbb{Z}_p$ be the restriction of the homomorphism $\pi$ to $M$. We set \[ \Phi_{M,\pi}^{\mathbb{B}_n}= (i_M^{\mathbb{B}_n}\otimes 1)\circ\Psi_{M,\pi}^{\mathbb{B}_n}.\] We consider the relation between $\Phi_{M,\pi}^{\mathbb{B}_n}$ and $\Phi_N^{E_n}$. \begin{lemma}\label{lemma:commutativity-lemma-I} We have the following commutative diagram of semi-local commutative rings and continuous ring homomorphisms \[ \begin{array}{ccc} D_{\mathbb{B}_n}(M)& \stackrel{p_\mu}{\hbox to 10mm{\rightarrowfill}}& D_{\mathbb{B}_n}(M)_{\mu}\\[2mm] \mbox{\scriptsize$i_M^{\mathbb{B}_n}$}\bigg\downarrow \phantom{\mbox{\scriptsize$i_M^{\mathbb{B}_n}$}} & & \phantom{\mbox{\scriptsize$i_M^{\mathbb{B}_n}$}}\bigg\downarrow \mbox{\scriptsize$i_M^{\mathbb{B}_n}$}\\[2mm] D_{\mathbb{B}_n}(r) & \stackrel{p_\pi}{\hbox to 10mm{\rightarrowfill}}& D_{\mathbb{B}_n}(r)_{\pi}. \end{array}\] \end{lemma} \proof The lemma follows from the commutative diagrams of functors \[ \begin{array}{ccc} {\rm Level}(M,\mathbf{F}_{n+1}[p^{\infty}])& {\hbox to 10mm{\leftarrowfill}} & {\rm Level}(M, \mathbf{F}_{n+1}[p^{\infty}])_{\mu}\\[2mm] \bigg\uparrow & & \bigg\uparrow \\[4mm] {\rm Level}(\Lambda^{n+1}[p^r],\mathbf{F}_{n+1}[p^{\infty}])& {\hbox to 10mm{\leftarrowfill}} & {\rm Level}(\Lambda^{n+1}[p^r], \mathbf{F}_{n+1}[p^{\infty}])_{\pi}, \end{array}\] where the vertical arrows are restrictions and the horizontal arrows are inclusions. 
\qed \begin{lemma}\label{lemma:commutativity-lemma-II} We have the following commutative diagram of commutative rings \[ \begin{array}{ccc} D_n(N) & \stackrel{r_{M,\mu}}{\hbox to 13mm{\rightarrowfill}} & D_{\mathbb{B}_n}(M)_{\mu}\\[2mm] \mbox{\scriptsize$i_N^{E_n}$}\bigg\downarrow \phantom{\mbox{\scriptsize$i_N^{E_n}$}} & & \phantom{}\bigg\downarrow \mbox{\scriptsize$i_M^{\mathbb{B}_n}$} \\[4mm] D_n(V) & \stackrel{r_{\Lambda^{n+1}[p^r],\pi}}{\hbox to 13mm{\rightarrowfill}} & D_{\mathbb{B}_n}(r)_{\pi}.\\ \end{array}\] \end{lemma} \proof The lemma follows from the following commutative diagram \[ \begin{array}{ccc} {\rm Level}(N,\mathbf{F}_n[p^{\infty}])& \hbox to 10mm{\leftarrowfill}& {\rm Level}(M,\mathbf{F}_{n+1}[p^{\infty}])_{\mu}\\[1mm] \bigg\uparrow & & \bigg\uparrow\\[4mm] {\rm Level}(V,\mathbf{F}_n[p^{\infty}])& \hbox to 10mm{\leftarrowfill}& {\rm Level}(\Lambda^{n+1}[p^r],\mathbf{F}_{n+1}[p^{\infty}])_{\pi}, \end{array}\] where all arrows are restrictions. \qed We fix an isomorphism $V\cong \Lambda^n[p^r]$. This induces an isomorphism \[ D_n(V)\cong D_n(r),\] and we regard $N$ as a subgroup of $\Lambda^n[p^r]$. \begin{proposition}\label{prop:relation-PhiB-PhiEn} We have the following commutative diagram \[ \begin{array}{ccc} E_n^0(X) & \stackrel{\Phi_N^{E_n}} {\hbox to 10mm{\rightarrowfill}} & D_n(r)\subrel{E_n^0}{\otimes} E_n^0(X)\\[4mm] \mbox{\scriptsize$i$}\bigg\downarrow \phantom{\mbox{\scriptsize$i$}} && \phantom{\mbox{\scriptsize$R \otimes 1$}}\bigg\downarrow \mbox{\scriptsize$R\otimes 1$}\\[2mm] \mathbb{B}_n^0(X) & \stackrel{\Phi_{M,\pi}^{\mathbb{B}_n}} {\hbox to 10mm{\rightarrowfill}} & D_{\mathbb{B}_n}(r)_{\pi}\subrel{\mathbb{B}_n^0}{\otimes} \mathbb{B}_n^0(X)\\[2mm] \end{array}\] for any space $X$, where $N=M\cap \ker\pi$ and $R$ is the ring homomorphism given by \[ R: D_n(r)\cong D_n(V) \stackrel{r_{\Lambda^{n+1}[p^r],\pi}} {\hbox to 20mm{\rightarrowfill}} D_{\mathbb{B}_n}(r)_{\pi}.\] \end{proposition} \proof Recall that $\Phi_{M,\pi}^{\mathbb{B}_n}= (p_{\pi}\otimes 1)\circ (i_M^{\mathbb{B}_n}\otimes 1) \circ \Psi_{M}^{\mathbb{B}_n}$. By Lemma~\ref{lemma:commutativity-lemma-I}, we see that $\Phi_{M,\pi}^{\mathbb{B}_n}=(i_M^{\mathbb{B}_n}\otimes 1) \circ \Psi_{M,\mu}^{\mathbb{B}_n}$. Theorem~\ref{thm:relation-psi-M-pi-Psi-N} implies that $\Phi_{M,\pi}^{\mathbb{B}_n}\circ i= (i_M^{\mathbb{B}_n}\otimes 1)\circ(r_{M,\mu}\otimes 1)\circ \Psi_N^{E_n}$. The proposition follows from Lemma~\ref{lemma:commutativity-lemma-II}. \qed \fi \if0 \subsection{Relation between Hecke operators} \fi First, we compare Hecke operators in $\mathbb{B}_n$-theory with those in $E_{n+1}$-theory. By Proposition~\ref{prop:on-Phi-M-ch-and-equivariance}, we obtain the following proposition. \begin{proposition} We have $\widetilde{\rm T}_M^{\mathbb{B}}\circ {\rm ch}= {\rm ch}\circ \widetilde{\rm T}_M^{E_{n+1}}$ for any finite abelian $p$-group $M$ with {\rm $p$-rank$(M)\le n+1$}. \end{proposition} \if0 \begin{corollary} We have the relation \[ T_M^{\mathbb{B}}\circ {\rm ch}= {\rm ch}\circ T_M^{E_{n+1}}\] for any finite abelian $p$-group $M$. \end{corollary} \fi \if0 \begin{lemma}\label{lemma:Hecke-S_{n+1}-equivariance} The Hecke operator $\widetilde{\rm T}_M^{\mathbb{B}}: \mathbb{B}_n^0(X)\to \mathbb{B}_n^0(X)$ is $\mathbb{G}_{n+1}$-equivariant. \end{lemma} \fi Next, we consider a relationship between $\widetilde{\rm T}_M^{\mathbb{B}}$ and $\widetilde{\rm T}_N^{E_n}$. 
In order to describe the relationship, we will introduce a set $I_n(M,N)$ and an integer $a_n(M,N)$ for finite abelian $p$-groups $M$ and $N$, where $p$-rank$(M)\le n+1$ and $p$-rank$(N)\le n$. We take a split surjection $\mu: \Lambda^{n+1}\to \mathbb{Q}_p/\mathbb{Z}_p$, and set $V=\ker\mu$. We define $I_n(M,N)$ to be the set of all subgroups $M'$ of $\Lambda^{n+1}$ such that $M'\cong M$ and $M'\cap V\cong N$: \[ I_n(M,N)= \{M'\le \Lambda^{n+1}|\ M'\cong M,\, M'\cap V\cong N\}.\] \begin{lemma} The cardinality $|m_n(N)|$ divides $|I_n(M,N)|$. \end{lemma} \proof We fix an isomorphism $V\cong \Lambda^n$ and identify $V$ with $\Lambda^n$ via this isomorphism. Let $f:I_n(M, N)\to m_n(N)$ be a map given by $f(M')=M'\cap V$. We take $N'\in m_n(N)$, and suppose that $M'\in f^{-1}(N')$. If $N''\in m_n(N)$, then there is an automorphism $g\in {\rm Aut}(V)$ such that $g(N')=N''$. We can extend $g$ to an automorphism $\widetilde{g}\in {\rm Aut}(\Lambda^{n+1})$ such that $\mu\circ \widetilde{g}=\mu$ and $\widetilde{g}|_V=g$. We see that $\widetilde{g}(M')\in I_n(M,N)$ and $f(\widetilde{g}(M'))=N''$. Hence, if $f^{-1}(N')=\{M_1',\ldots,M_r'\}$, then $f^{-1}(N'')=\{\widetilde{g}(M_1'),\ldots,\widetilde{g}(M_r')\}$. This shows that $|m_n(N)|$ divides $|I_n(M,N)|$. \qed We set \[ a_n(M,N)=\displaystyle\frac{|I_n(M,N)|}{|m_n(N)|}.\] We notice that $a_n(M,N)$ is independent of the choice of a split surjection $\mu$. By Theorem~\ref{thm:natural-Hn+1-module-strucrure-on-EnX}, the Hecke algebra $\mathcal{H}_{n+1}$ naturally acts on $E_n^0(X)$ for any space $X$. We denote by $(\widetilde{\rm T}_M^{\mathbb{B}})^{\mathbb{G}_{n+1}}$ the operation associated to a finite abelian $p$-group $M$ with $p$-rank $\le n+1$. \begin{proposition}\label{prop:relation-Hecke-TMB-to-TNEn} We have \[ (\widetilde{\rm T}_M^{\mathbb{B}})^{\mathbb{G}_{n+1}} =\sum_{[N]}\, a_n(M,N)\ \widetilde{\rm T}_N^{E_n}, \] where the sum ranges over all isomorphism classes of finite abelian $p$-groups with $p$-rank $\le n$. \end{proposition} In order to prove Proposition~\ref{prop:relation-Hecke-TMB-to-TNEn}, we need the following lemma. Let $i_{\mu}^{\mathbb{B}}: \mathbb{B}_n^0\to D_{\mathbb{B}}(\mu)$ be the $\mathbb{B}_n^0$-algebra structure map. \begin{lemma}\label{lemma:i-mu-faithfully-flat} The ring homomorphism $i_{\mu}^{\mathbb{B}}$ is faithfully flat. \end{lemma} \proof By definition, $D_{\mathbb{B}}(\mu)=\ \subrel{r}{\rm colim} D(r,\mu[p^r])$. The lemma follows from the fact that $D(r,\mu[p^r])$ is a finitely generated free $\mathbb{B}_n^0$-module for all $r$. \qed \proof[Proof of Proposition~\ref{prop:relation-Hecke-TMB-to-TNEn}] \if0 By definition, we have \[ (i_{\{0\}}^{\mathbb{B}_n}\otimes 1)\circ t_M^{\mathbb{B}_n} = \sum_{M'\in m_{n+1}(M)}\Phi_{M'}^{\mathbb{B}_n}.\] Let $r$ be a positive integer such that $p^rM=0$ and let $\pi: \Lambda^{n+1}[p^r]\to p^{-r}\mathbb{Z}_p/\mathbb{Z}_p$ be a surjection. We fix an isomorphism $\ker \pi\cong \Lambda^n$ and set $R: D_n(r)\cong D(\ker\pi)\to D_{\mathbb{B}_n}(r)_{\pi}$. \fi By Proposition~\ref{prop:relation-PhiB-PhiEn}, we have \[ (i_{\mu}^{\mathbb{B}}\otimes 1)\circ \widetilde{\rm T}_M^{\mathbb{B}}\circ {\rm inc} = \sum_{M'\in m_{n+1}(M)} ({\rm inc}(\mu)\otimes {\rm inc})\circ \Phi_{M'\cap V}^{E_n}. \] A decomposition $m_{n+1}(M)=\coprod_{[N]}I_n(M,N)$ implies that the right hand side is \[ ({\rm inc}(\mu)\otimes{\rm inc})\circ(i\otimes 1)\circ \left(\sum_{[N]} a_n(M,N)\ \widetilde{\rm T}_N^{E_n}\right). 
\] Using $({\rm inc}(\mu)\otimes{\rm inc})\circ (i\otimes 1)= (i_{\mu}^{\mathbb{B}}\otimes 1)\circ {\rm inc}$ and the fact that $i_{\mu}^{\mathbb{B}}$ is faithfully flat by Lemma~\ref{lemma:i-mu-faithfully-flat}, we obtain \[ \widetilde{\rm T}_M^{\mathbb{B}}\circ{\rm inc}= {\rm inc}\circ \sum_{[N]} a_n(M,N)\ \widetilde{\rm T}_N^{E_n},\] which implies the desired formula. \if0 \[ \begin{array}{ccc} E_n^0(X)& \stackrel{w}{\hbox to 10mm{\rightarrowfill}} & E_n^0(X)\\[2mm] \mbox{\scriptsize$i$}\bigg\downarrow\phantom{\mbox{\scriptsize$i$}} & & \phantom{\mbox{\scriptsize$i$}}\bigg\downarrow\mbox{\scriptsize$i$}\\ \mathbb{B}_n^0(X) & \stackrel{t_M^{\mathbb{B}_n}} {\hbox to 10mm{\rightarrowfill}} & \mathbb{B}_n^0(X),\\ \end{array}\] where $w=\sum_{[N]}a(M,N)t_N^{E_n}$. Since the map $i$ induces an isomorphism $E_n^0(X)\stackrel{\cong}{\to} (\mathbb{B}_n^0(X))^{S_{n+1}}$, we obtain that $(t_M^{\mathbb{B}_n})^{S_{n+1}}=w$. \fi \qed \if0 \begin{example}\rm When $M=(\mathbb{Z}/p^r)^{n+1}\ (r\ge 0)$, we have \[ a_n(M,N)=\left\{ \begin{array}{ll} 1 & (N=(\mathbb{Z}/p^r)^n),\\[4mm] 0 & (\mbox{\rm otherwise}).\\ \end{array}\right.\] Hence we obtain \[ \left(\widetilde{\rm T}^{\mathbb{B}}_{(\mathbb{Z}/p^r)^{n+1}} \right)^{\mathbb{G}_{n+1}}= \widetilde{\rm T}_{(\mathbb{Z}/p^r)^n}^{E_n}.\] \end{example} \begin{example}\rm We consider the case $M=(\mathbb{Z}/p)^s\ (0<s\le n)$. Since $|I_n((\mathbb{Z}/p)^s,(\mathbb{Z}/p)^s)| =|m_n((\mathbb{Z}/p)^s)|$, we see \[ a_n((\mathbb{Z}/p)^s,(\mathbb{Z}/p)^s)=1.\] Let $M'$ be a subgroup of $\Lambda^{n+1}$ isomorphic to $(\mathbb{Z}/p)^s$. If $M'$ is not contained in $V$, then $M'\cap V$ is isomorphic to $(\mathbb{Z}/p)^{s-1}$. We obtain \[ \begin{array}{rcl} |I_n((\mathbb{Z}/p)^s,(\mathbb{Z}/p)^{s-1})| &=& |m_{n+1}((\mathbb{Z}/p)^s)| - |m_n((\mathbb{Z}/p)^s)|\\[2mm] &=&\displaystyle \frac{(p^n-1)(p^{n-1}-1)\cdots (p^{n-s+2}-1)} {(p^{s-1}-1)(p^{s-2}-1)\cdots (p-1)} p^{n-s+1}.\\ \end{array}\] This implies \[ a_n((\mathbb{Z}/p)^s,(\mathbb{Z}/p)^{s-1}) =p^{n-s+1}.\] Hence we obtain \[ (\widetilde{\rm T}_{(\mathbb{Z}/p)^s }^{\mathbb{B}})^{\mathbb{G}_{n+1}}= \widetilde{\rm T}_{(\mathbb{Z}/p)^s}^{E_n} +p^{n-s+1}\ \widetilde{\rm T}_{(\mathbb{Z} /p)^{s-1}}^{E_n}.\] \end{example} \begin{example}\rm We consider the case $M=\mathbb{Z}/p^r\ (r>0)$. Since $|I_n(\mathbb{Z}/p^r,\mathbb{Z}/p^r)| =|m_n(\mathbb{Z}/p^r)|$, we see \[ a_n(\mathbb{Z}/p^r,\mathbb{Z}/p^r)=1. \] Let $x=(x_1,\ldots,x_{n+1})\in\Lambda^{n+1}$ such that $p^rx=0$ and $p^{r-1}x\neq 0$. Then $x$ generates a subgroup $M'$ isomorphic to $\mathbb{Z}/p^r\mathbb{Z}$. If the order of $\mu(x)$ is $p^s$ for $0< s\le r$, then $M'\cap V=p^sM'\cong \mathbb{Z}/p^{r-s}\mathbb{Z}$. Hence we see \[ |I_n(\mathbb{Z}/p^r,\mathbb{Z}/p^{r-s})| = \left\{\begin{array}{ll} \displaystyle (p^s-p^{s-1})\frac{(p^{rn}-p^{(r-1)n})}{p^r-p^{r-1}}& (0<s<r),\\[5mm] p^{rn}& (0<s=r).\\ \end{array}\right. \] This implies \[ a_n(\mathbb{Z}/p^r\mathbb{Z},\mathbb{Z}/p^{r-s}\mathbb{Z})= \left\{\begin{array}{ll} p^{sn-1}(p-1) & (0<s<r),\\[5mm] p^{rn} & (0<s=r).\\ \end{array}\right.\] Hence we obtain \[ (\widetilde{\rm T}_{\mathbb{Z}/ p^r}^{\mathbb{B}})^{\mathbb{G}_{n+1}}= \widetilde{\rm T}_{\mathbb{Z}/p^r}^{E_n}+ \sum_{0<s<r} p^{sn-1}(p-1)\ \widetilde{\rm T}_{\mathbb{Z}/p^{r-s}}^{E_n} +p^{rn}.\] \end{example} \fi Conversely, we shall write $\widetilde{\rm T}_N^{E_n}$ in terms of $(\widetilde{\rm T}_M^{\mathbb{B}})^{\mathbb{G}_{n+1}}$. We define a partial order on the set of isomorphism classes of finite abelian $p$-groups of $p$-rank $\le n$ as follows. 
We write $[A]\le [B]$ if there exists a monomorphism from $A$ to $B$. We define $b_n(B,A)$ inductively as follows. We set $b_n(A,A)=1$. When $[A]<[B]$, we set \[ b_n(B,A)= -\sum_{[A]\le [C]<[B]} a_n(B,C)\, b_n(C,A).\] \if For $[N]<[M]$, we have \[ \sum_{[N]\le [K]\le [M]}b_n(M,K)\, a_n(K,N)=0\] by definition. Note that we also have \[ \sum_{[N]\le [K]\le [M]} a_n(M,K)\, b_n(K,N)=0 \] for $[N]<[M]$. \fi \begin{proposition}\label{prop:description-TEn-by-TBn} For any finite abelian $p$-group $N$ with {\rm $p$-rank$(N)\le n$}, we have \[ \widetilde{\rm T}_N^{E_n}= \sum_{[N']\le [N]} b_n(N,N')\, (\widetilde{\rm T}_{N'}^{\mathbb{B}})^{\mathbb{G}_{n+1}}.\] \end{proposition} \proof We prove the proposition by induction on the partial ordering. When $N=0$, the proposition obviously holds. We suppose that the proposition holds for abelian $p$-groups $<[N]$. By Proposition~\ref{prop:relation-Hecke-TMB-to-TNEn}, the right hand side is \[ \widetilde{\rm T}_{N}^{E_n} + \sum_{[N']<[N]}a_n(N,N')\, \widetilde{\rm T}_{N'}^{E_n} + \sum_{[N']<[N]}b_n(N,N')\, (\widetilde{\rm T}_{N'}^{\mathbb{B}_n})^{\mathbb{G}_{n+1}}.\] By the hypothesis of induction, we have \[ \begin{array}{rcl} \displaystyle\sum_{[N']<[N]}a_n(N,N')\, \widetilde{\rm T}_{N'}^{E_n}& = &\displaystyle\sum_{[N']<[N]}\sum_{[N'']\le [N']} a_n(N,N')\, b_n(N',N'')\, (\widetilde{\rm T}_{N''}^{\mathbb{B}_n})^{\mathbb{G}_{n+1}}\\[2mm] &= &\displaystyle\sum_{[N'']<[N]} \left(\sum_{[N'']\le [N']<[N]} a_n(N,N')\, b_n(N',N'')\right) (\widetilde{\rm T}_{N''}^{\mathbb{B}_n})^{\mathbb{G}_{n+1}}.\\ \end{array} \] Since $\sum_{[N'']\le [N']<[N]} a_n(N,N')\, b_n(N',N'')=-b_n(N,N'')$, the right hand side is $\widetilde{\rm T}_{N}^{E_n}$. \qed Now, we consider a relationship between $(\widetilde{\rm T}_{\mathbb{B}}\left(p^r\right))^{\mathbb{G}_{n+1}}$ and $\widetilde{\rm T}_{E_n}\left(p^s\right)$, where $(\widetilde{\rm T}_{\mathbb{B}}\left(p^r\right))^{\mathbb{G}_{n+1}}$ is the restriction of the operation $\widetilde{\rm T}_{\mathbb{B}}\left(p^r\right)$ to the $n$th Morava $E$-theory of spaces through ${\rm inc}$. \begin{theorem}\label{thm:Hecke-B-to-En} We have \[ (\widetilde{\rm T}_{\mathbb{B}}\left(p^r\right))^{\mathbb{G}_{n+1}}= \sum_{s=0}^r\ p^{(r-s)n}\, \widetilde{\rm T}_{E_n}\left(p^s\right).\] \end{theorem} \proof \if0 By Proposition~\ref{prop:relation-PhiB-PhiEn}, we have \[ (i_{\mu}\otimes 1)\circ \widetilde{\rm T}_{\mathbb{B}}\left(p^r\right)\circ {\rm inc} = \sum_{\mbox{\scriptsize$ \begin{array}{c} M\le\Lambda^{n+1}\\ |M|=p^r\\ \end{array}$}} ({\rm inc}(\mu)\otimes {\rm inc})\circ \Phi_{M\cap V}^{E_n}.\] \fi We take a split surjection $\mu: \Lambda^{n+1}\to \mathbb{Q}_p/\mathbb{Z}_p$, and set $V=\ker\mu$. Fix a subgroup $N$ of $V$ with order $p^s$. Let $J_n(p^r,N)$ be the set of all subgroups $M$ of $\Lambda^{n+1}$ with order $p^r$ such that $M\cap V=N$. By Proposition~\ref{prop:relation-PhiB-PhiEn}, it is sufficient to show that $|J(p^r,N)|=p^{(r-s)n}$. If $M\in J(p^r,N)$, then the map $\mu$ induces a surjective map $\overline{\mu}: M\to \mathbb{Q}_p/\mathbb{Z}_p[p^{r-s}]$ with kernel $N$. Let $a$ be a generator of $\mathbb{Q}_p/\mathbb{Z}_p[p^{r-s}]$. If $x\in M$ such that $\overline{\mu}(x)=a$, then $M$ is generated by $N$ and $x$. Note that $p^{r-s}x\in\ker\overline{\mu}=N$. Conversely, if we have $x\in\mu^{-1}(a)$ such that $p^{r-s}x\in N$, then $\langle N,x\rangle\in J(p^r,N)$, where $\langle N,x\rangle$ is the subgroup generated by $N$ and $x$. We denote by $X(p^r,N)$ the set of all $x\in\mu^{-1}(a)$ such that $p^{r-s}x\in N$. 
By the above argument, we see that $J(p^r,N)\cong X(p^r,N)/N$. Let $N'$ be the subgroup of $V$ consisting of $z\in V$ such that $p^{r-s}z\in N$. Since $X(p^r,N)$ is an $N'$-torsor, we obtain $|J(p^r,N)|=|N'|/|N|=p^{(r-s)n}$. \if0 Hence we obtain \[ (\widetilde{\rm T}_{\mathbb{B}_n}) \left(p^r\right)^{\mathbb{G}_{n+1}}= \sum_{N}\ p^{(r-s)n}\, \widetilde{\rm T}_{E_n}\left(p^s\right). \] \fi \qed \begin{corollary}\label{cor:relation-TBb-TEn} For $r>0$, we have \[ \widetilde{\rm T}_{E_n}\left(p^r\right) = (\widetilde{\rm T}_{\mathbb{B}}\left(p^r\right))^{\mathbb{G}_{n+1}} -p^n (\widetilde{\rm T}_{\mathbb{B}} \left(p^{r-1}\right))^{\mathbb{G}_{n+1}}.\] \end{corollary} \if0 \subsection{Relation between symmetric power operations} \begin{proposition} We have \[ \sigma_l^{\mathbb{B}}\circ {\rm ch} = {\rm ch}\circ \sigma_l^{E_{n+1}}\] in $\mathbb{B}_n^0(X)\otimes \mathbb{Q}$, and \[ S_Z^{\mathbb{B}}\circ {\rm ch} = {\rm ch}_{Z}\circ S_Z^{E_{n+1}} \] in $(\mathbb{B}_n^0(X)\otimes\mathbb{Q})\power{Z}$. \end{proposition} \begin{theorem} We have \[ \left(S_Z^{\mathbb{B}}\right)^{\mathbb{G}_{n+1}}= \prod_{k\ge 0}\, \left(S_{Z^{p^k}}^{E_n}\right)^{p^{k(n-1)}}\] in $(\mathbb{B}_n^0(X)\otimes\mathbb{Q})\power{Z}$. \end{theorem} \proof By (\ref{eq:total-symmetric-Hecke-B}) and Theorem~\ref{thm:Hecke-B-to-En}, we obtain \[ \begin{array}{rcl} \log \left(S_Z^{\mathbb{B}}\right)^{\mathbb{G}_{n+1}} &=& \displaystyle\sum_{r\ge 0}\sum_{s=0}^r\ p^{(r-s)n}\, \frac{1}{p^r} \widetilde{\rm T}_{E_n}\left(p^s\right)\, Z^{p^r}\\[3mm] &=&\displaystyle\sum_{s,k\ge 0}\ p^{kn}\, \frac{1}{p^{s+k}}\widetilde{\rm T}_{E_n}\left(p^s\right)\, Z^{p^{s+k}}\\[5mm] &=&\displaystyle\sum_{k\ge 0}\ p^{k(n-1)}\, \log \left(S_{Z^{p^k}}^{E_n}\right). \end{array}\] This completes the proof. \qed \fi By Theorem~\ref{thm:natural-Hn+1-module-strucrure-on-EnX}, there is a natural action of the Hecke algebra $\mathcal{H}_{n+1}$ on $E_n^0(X)$ for any space $X$. Thus, we have the $\mathcal{H}_n$-module structure and the $\mathcal{H}_{n+1}$-module structure on $E_n^0(X)$. Finally, we compare these two module structures. For this purpose, we define an additive map $\omega: \mathcal{H}_{n+1}\to \mathcal{H}_n$ by \[ \omega(M)= \sum_{[N]} a_n(M,N) N.\] \begin{proposition} \label{lemma:multiplicative-property-omega} The map $\omega$ is a surjective algebra homomorphism. \end{proposition} By Propositions~\ref{prop:relation-Hecke-TMB-to-TNEn} and \ref{lemma:multiplicative-property-omega}, we obtain the following theorem. \begin{theorem} \label{thm:comparison-Hn-Hn+1-on-EnX} The $\mathcal{H}_{n+1}$-module structure on $E_n^0(X)$ is obtained from the $\mathcal{H}_n$-module structure by the restriction along the algebra homomorphism $\omega$. \end{theorem} \proof[Proof of Proposition~\ref{lemma:multiplicative-property-omega}] The surjectivity follows from Proposition~\ref{prop:description-TEn-by-TBn}. Thus, it suffices to show that \begin{align}\label{eq:proof-theta-algebra-hom} \sum_{[M_3]}c_{n+1}(M_1,M_2;M_3)\, a_n(M_3,N_3) = \sum_{[N_1],[N_2]}a_n(M_1,N_1)\, a_n(M_2,N_2)\, c_n(N_1,N_2;N_3) \end{align} for any finite abelian $p$-groups $M_1,M_2,N_3$ with $p$-rank$(M_i)\le n+1\ (i=1,2)$ and $p$-rank$(N_3)\le n$. We fix an isomorphism $\Lambda^{n+1}/M_{1,i}\cong \Lambda^{n+1}$ for each $M_{1,i}\in m_{n+1}(M_1)$. For $M_{2,j}\in m_{n+1}(M_2)$, we let $L_{i,j}$ be the subgroup in $\Lambda^{n+1}$ which contains $M_{1,i}$ and satisfies $L_{i,j}/M_{1,i}\cong M_{2,j}$ under the isomorphism. 
We have \[ \{L_{i,j}\}=\coprod_{[M_3]} c_{n+1}(M_1,M_2;M_3)\, m_{n+1}(M_3) \] as multisets. This implies \begin{align}\label{eq:comparison-Lij-1} \{L_{i,j}\cap V\}= \coprod_{[M_3],[N_3]} c_{n+1}(M_1,M_2;M_3)\, a_n(M_3,N_3)\, m_n(N_3). \end{align} We fix an isomorphism $(\mathbb{Q}_p/\mathbb{Z}_p)/\mu(M_{1,i})\cong \mathbb{Q}_p/\mathbb{Z}_p$. We denote by $\mu'$ the composition $\Lambda^{n+1}\cong \Lambda^{n+1}/M_{1,i} \stackrel{\mu}{\to} (\mathbb{Q}_p/\mathbb{Z}_p)/\mu(M_{1,i})\cong \mathbb{Q}_p/\mathbb{Z}_p$, and set $V'=\ker \mu'$. Notice that there is an isomorphism $V/(M_{1,i}\cap V)\cong V'$ and that this isomorphism induces an isomorphism \[ (L_{i,j}\cap V)/(M_{1,i}\cap V)\cong M_{2,j}\cap V'. \] Hence we see that \begin{align}\label{eq:comparison-Lij-2} \{L_{i,j}\cap V\}=\coprod_{[N_1],[N_2],[N_3]} a_n(M_1,N_1)\,a_n(M_2,N_2)\,c_n(N_1,N_2;N_3)\, m_n(N_3) \end{align} as multisets. Comparing (\ref{eq:comparison-Lij-1}) and (\ref{eq:comparison-Lij-2}), we obtain (\ref{eq:proof-theta-algebra-hom}). \qed \if0 \subsection{Orbifold genera} Let $G$ be a finite group, and let $\phi^{E_{n+1}}: MU\to E_{n+1}$ be a map of ring spectra. We define the map $\phi_G^{\mathbb{B}_n}: \mathcal{N}_*^{U,G}\to \mathbb{B}_n^{-*}(BG)$ by the composition \[ \phi_G^{\mathbb{B}_n}={\rm ch}\circ \phi_G^{E_{n+1}}: MU^G_*\stackrel{\phi_G^{E_{n+1}}}{\longrightarrow}E_{n+1}^{-*}(BG) \stackrel{\rm ch}{\longrightarrow} \mathbb{B}_n^{-*}(BG).\] Let $M$ be a compact complex manifold acted upon by $G$ with complex dimension $d$. We denote by $M\circlearrowleft G$ the $G$-space $M$, and by $M//G$ its orbifold quotient. We fix a unit $u\in E_n^{-2}$. In \cite[Definition~1.1]{Ganter} the orbifold genus $\phi_{\rm orb}(M//G)\in E_n^0$ is defined by \[ \phi_{\rm orb}(M//G)\cdot u^d= (\eta_G)^*\circ\phi_G (M\circlearrowleft G),\] where we regard $(M\circlearrowleft G)$ as an element in $\mathcal{N}_{2d}^{U,G}$. Note that the orbifold genus $\phi_{\rm orb}(M//G)$ is independent of the presentation $M\circlearrowleft G$ by \cite[Theorem~1.4]{Ganter}. We suppose that the ring spectrum map $\phi: MU\to E_n$ is an $H_{\infty}$-map. Note that the condition for $\phi$ to be an $H_{\infty}$-map is known by \cite{Ando1}. By \cite[Theorem~1.5]{Ganter}, we have the following formula \[ \sum_{l=0}^\infty\phi_{\rm orb}(M^l//\Sigma_l)\,q^l = \exp \left[ \sum_{k=0}^\infty T_{p^k}(\phi(M))\,q^{p^k} \right].\] We define $S_{\rm orb}^{\mathbb{B}_n}(M;(q)\in \mathbb{B}_n^0\power{q}$ We suppose that \[ i\circ \phi^{E_n}(M)= \{\sum_{\alpha} b_{\alpha}\otimes x_{\alpha} \] in $\mathbb{B}_n^0(BG)$. \begin{proposition} We have \[ \begin{array}{rcl} i_q\circ S_{\rm orb}^{E_n}(M;q)&=& i_q\circ \exp \left( \sum_{k=0}^{\infty} T_{p^k}^{E_n} (\phi^{E_n}(M))q^{p^k}\right)\\[2mm] &=&\exp\left( \sum_{k=0}^{\infty} \left( T_{p^k}^{\mathbb{B}_n}(i\phi^{E_n}(M)) -p^{n-1}T_{p^{k-1}}^{\mathbb{B}_n}(i\phi^{E_n}(M))\right) q^{p^k}\right)\\[2mm] &=&\exp\left( \sum_{k=0}^{\infty} T_{p^k}^{\mathbb{B}_n}(i\phi^{E_n}(M))q^{p^k}\right)\\[2mm] && \times\exp\left( \sum_{k=1}^{\infty} T_{p^{k-1}}^{\mathbb{B}_n}(i\phi^{E_n}(M))(q^p)^{p^{k-1}} \right)^{-p^{n-1}}\\[2mm] \end{array}\] \end{proposition} \fi \if0 \subsection{Relation between logarithmic operations} \fi \end{document}
\begin{document} \begin{abstract} We show that there exists a natural number $p_0$ such that any three-dimensional Kawamata log terminal singularity defined over an algebraically closed field of characteristic $p>p_0$ is rational and in particular Cohen-Macaulay. \end{abstract} \subjclass[2010]{14E30, 14J17, 13A35} \keywords{Kawamata log terminal singularities, rational singularities, positive characteristic} \maketitle \section{Introduction} One of the main goals of algebraic geometry is to understand the structure of smooth projective varieties. The techniques of the Minimal Model Program, MMP for short, play a fundamental role in the pursuit of this objective. Unfortunately, starting from dimension three, in order to apply the techniques of the MMP it is necessary to consider varieties with mild singularities. From the point of view of the MMP, the most natural class of singularities is that of Kawamata log terminal (klt) singularities, which includes canonical and terminal singularities. In characteristic zero, a large part of the MMP for varieties with klt singularities is known to hold (see \cite{bchm06}). In this context it is of course essential to have a good understanding of the properties of klt singularities. Perhaps the most important of these properties is the fact that klt singularities are rational and, in particular, Cohen-Macaulay. This fundamental fact was first proved by Elkik for canonical singularities and was later generalized to klt singularities, see for example \cite[Theorem 5.22]{KM98} and references therein. The main technical tool used in the proof of the results of the MMP (and in the proof of Elkik's result) is the famous Kawamata-Viehweg vanishing, which is known to fail in positive characteristic and dimension $\geq 2$. So it is not surprising that the situation in positive characteristic $p>0$ is much more complicated. In spite of this, it is known that the klt MMP is valid in dimension two (\cite{tanaka12}) and in dimension three when the characteristic $p > 5$ (\cite{hx13}, \cite{ctx13}, \cite{birkar13}, \cite{BW17}). It is natural to wonder whether klt singularities are rational in positive characteristic as well. In dimension two, this is known to hold in every characteristic; however, it was recently shown that this does not hold when $p=2$ and the dimension is at least three \cite{GNT06}, \cite{kovacs17}, and \cite{CT06PLT} (see Remark \ref{remark:quotient}). On the positive side, it is known that klt threefold singularities are Witt-rational when $p>5$ (\cite{GNT06}). This interesting result allows for the extension of Esnault's results on counting points over finite fields to singular varieties. Note that rational singularities are Witt-rational but the converse is not true. Furthermore, it is known that strongly F-regular singularities are rational. Note that strongly F-regular singularities are klt, but klt threefold singularities need not be strongly F-regular even in large characteristic (\cite{CTW15a}). The goal of this paper is to show the following result. \begin{theorem} \label{theorem:main} There exists a natural number $p_0>0$ such that for any three-dimensional Kawamata log terminal pair $(X,\Delta)$ defined over an algebraically closed field of characteristic $p>p_0$, we have that $X$ has rational and, in particular, Cohen-Macaulay singularities. Moreover, if $X$ is $\mathbb{Q}$-factorial, then, for any divisor $D$, the sheaf $\mathcal{O}_X(D)$ is Cohen-Macaulay. \end{theorem} It is not known whether the above theorem remains true if we only assume that $p>2$.
The main tool in the proof of this result is the Kawamata-Viehweg vanishing for log del Pezzo surfaces in large characteristic. The constant $p_0$ comes from this result. \begin{theorem}[{\cite{CTW15b}*{Theorem 1.2}}] \label{theorem:CTW} There exists a constant $p_0>0$ with the following property. Let $X$ be a projective surface of Fano type defined over an algebraically closed field of characteristic $p>p_0$. Let $\Delta$ be an effective $\mathbb{Q}$-divisor such that $(X,\Delta)$ is klt, and let $L$ be a Weil divisor for which $L-(K_X+\Delta)$ is nef and big. Then $H^i(X,\mathcal{O}_X(L))=0$ for $i>0$. \end{theorem} We say that a surface $X$ is of Fano type if there exists a $\mathbb{Q}$-divisor $B$ such that $(X,B)$ is klt and $-(K_X+B)$ is ample. Note that the Kawamata-Viehweg vanishing does not hold for rational surfaces in general even in large characteristic (\cite{CT07KV}). Let us also note that it has recently been shown in \cite{kovacs17b} that Cohen-Macaulay klt singularities are rational. The results of that paper allowed us to simplify the proof of Theorem \ref{theorem:main}.\\ One possible application of these results stems from the Minimal Model Program in mixed characteristic. Given a positive-characteristic variety $X$, which lifts to characteristic zero, and a basic operation of the MMP $f \colon X \dashrightarrow Y$ such as a contraction or a flip, it is natural to ask if $f$ lifts to characteristic zero as well. As a corollary of Theorem \ref{theorem:main}, we show that this holds for divisorial and flipping contractions of threefolds in large characteristic (see Corollary \ref{c:lift}). Unfortunately, we do not know how to deal with the case of flips. The article is organised as follows. In Section 2, we discuss rational and Cohen-Macaulay singularities, strong F-regularity, plt blow-ups, and liftings to the rings of Witt vectors. In Section 3, we prove the existence of certain short exact sequences on plt threefolds (see Subsection \ref{subsection:sketch}). In Section 4, we finish the proof of Theorem \ref{theorem:main}. In Section 5, we explore applications to the MMP in mixed characteristic. \subsection{Sketch of the proof} \label{subsection:sketch} Assume that $X$ is $\mathbb{Q}$-factorial. Consider a \emph{plt blow-up} (Proposition \ref{proposition:plt-blow-up}) at a closed point $x \in X$, that is, a birational morphism $f \colon Y \to X$ with an irreducible exceptional divisor $S$ such that \begin{itemize} \item $(Y,S)$ is plt and $\mathbb{Q}$-factorial, \item $-(K_Y+S)$ is $f$-ample, \item $f$ is an isomorphism over $X\backslash \{x\}$. \end{itemize} By adjunction, there exists a $\mathbb{Q}$-divisor $\Delta_{\mathrm{diff}}$ such that $(S, \Delta_{\mathrm{diff}})$ is a klt log del Pezzo pair. When $S$ is Cartier, one can apply the following strategy (cf.\ \cite{GNT06} and \cite{CT06PLT}). For every $k\geq 0$, consider \[ 0 \to \mathcal{O}_Y(-(k+1)S) \to \mathcal{O}_Y(-kS) \to \mathcal{O}_S(G_k) \to 0, \] where $G_k \sim -kS|_S$. Since \[ G_k\sim _{\mathbb Q}K_S + \Delta_{\mathrm{diff}} \underbrace{- (K_S + \Delta_{\mathrm{diff}}) + G_k}_{\text{ample}}, \] by Theorem \ref{theorem:CTW}, we have that $H^1(S, \mathcal{O}_S(G_k)) =0$ for every $k\geq 0$, and so $R^1f_*\mathcal{O}_Y(-(k+1)S) \to R^1f_*\mathcal{O}_Y(-kS)$ is surjective. Since $-S$ is $f$-ample, $R^1f_*\mathcal{O}_Y(-kS)=0$ for $k\gg 0$ by Serre vanishing, which implies $R^1f_* \mathcal{O}_Y=0$ as well.
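In other words, the argument is a descending induction on $k$ (writing $\twoheadrightarrow$ for a surjection of sheaves): since $f(S)=\{x\}$, the vanishing $H^1(S,\mathcal{O}_S(G_k))=0$ turns the long exact sequence of higher direct images into a surjection $R^1f_*\mathcal{O}_Y(-(k+1)S)\twoheadrightarrow R^1f_*\mathcal{O}_Y(-kS)$ for every $k\geq 0$, and composing these surjections for $m\gg 0$ gives \[ 0 = R^1f_*\mathcal{O}_Y(-mS) \twoheadrightarrow R^1f_*\mathcal{O}_Y(-(m-1)S) \twoheadrightarrow \cdots \twoheadrightarrow R^1f_*\mathcal{O}_Y(-S) \twoheadrightarrow R^1f_*\mathcal{O}_Y, \] which merely spells out the two steps above and uses no further input.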
By \cite{hx13}, one sees that $Y$ has strongly F-regular singularities along $S$ and hence $Y$ has rational singularities along $S$. Therefore, if $X$ is Cohen-Macaulay, then it has rational singularities at $x$. The fact that $X$ is Cohen-Macaulay at $x$ follows by a similar argument. When $S$ is not Cartier, the above short exact sequence may fail to be exact; however, we show the existence of the following short exact sequence (Proposition \ref{proposition:3folds-ses}) \begin{equation} \label{eq:ses} 0 \to \mathcal{O}_Y(-(k+1)S) \to \mathcal{O}_Y(-kS) \to \mathcal{O}_S(G_k) \to 0, \end{equation} where $G_k$ is a Weil divisor satisfying $G_k \sim_{\mathbb{Q}} -kS|_S - \Delta_k$ for some effective $\mathbb{Q}$-divisor $\Delta_k$ on $S$ such that $\Delta_k \leq \Delta_{\mathrm{diff}}$. Therefore, we can apply the same argument as above with minor changes. When $X$ is not $\mathbb{Q}$-factorial, we apply Proposition \ref{proposition:plt-blow-up}, Proposition \ref{proposition:non-Q-factorial-case}, and the relative F-inversion of adjunction (Lemma \ref{lemma:relative-inversion-of-adjunction}). \section{Preliminaries} We say that a scheme $X$ is a \emph{variety} if it is integral, separated, and of finite type over a field $k$. Throughout this paper, $k$ is an arbitrary field of positive characteristic $p>0$. For a $k$-variety $X$, we denote the relative canonical divisor $K_{X/\Spec k}$ by $K_X$ (see \cite{kollar13}*{Definition 1.6}). We refer to \cite{KM98} for basic definitions in birational geometry, and to \cite{kollar13} whenever $k$ is not algebraically closed. We say that $(X,\Delta)$ is a \emph{log pair} if $X$ is normal, $\Delta$ is an effective $\mathbb{Q}$-divisor, and $K_X+\Delta$ is $\mathbb{Q}$-Cartier. Given a variety $X$ and a Weil divisor $D$ on $X$, we say that $f \colon Y \to X$ is a \emph{log resolution} of $(X,D)$ if $Y$ is regular, $\mathrm{Exc}(f)$ has pure codimension one, and $(Y, \Supp(f^{-1}(D)+\mathrm{Exc}(f)))$ has simple normal crossings (see \cite{kollar13}*{Definition 1.8 and 1.12}). If $X$ is of dimension at most three, then a log resolution of $(X,D)$ always exists (see \cite{lip78}, \cite{cutkosky09}, \cite{CP08}, and \cite{CP09}). \begin{theorem}[{\cite[Theorem 10.4]{kollar13}}] \label{thm:relKV} Let $X$ be a regular surface over a field $k$ and let $f \colon X \to Y$ be a proper, generically finite morphism with exceptional curves $C_i$ such that $\bigcup_i C_i$ is connected. Let $L$ be a line bundle on $X$ and assume that there exist $\mathbb{Q}$-divisors $N$ and $\Delta = \sum d_i C_i$ such that: \begin{itemize} \item $L \equiv_{f} K_X + \Delta + N$, \item $N \cdot C_i \geq 0$ for every $i$, \item $ 0 \leq d_i < 1$ for every $i$. \end{itemize} Then $R^1f_*L = 0$. \end{theorem} \subsection{Rational and Cohen-Macaulay singularities} \label{ss:rat-cm-sing} We say that a variety $X$ has \emph{rational singularities} if it is Cohen-Macaulay and there exists a log resolution $f \colon Y \to X$ such that $Rf_*\mathcal{O}_Y = \mathcal{O}_X$. Assuming the existence of resolutions of singularities, the phrase \emph{there exists} can be replaced by \emph{for every} (this follows from the main result in \cite{CR15}). For the definition of Serre's condition $S_d$ and Cohen-Macaulay singularities we refer to \cite[Definition 5.1 and Definition 5.2]{kollar13}. \begin{proposition} \label{proposition:CM_kollar} Let $f \colon Y \to X$ be a proper birational morphism of normal varieties and let $D$ be a Weil divisor on $Y$ such that $\mathcal{O}_Y(D)$ is Cohen-Macaulay.
Assume that $R^if_*\mathcal{O}_Y(D)=0$ and $R^if_*\mathcal{O}_Y(K_Y-D)=0$ for all $i>0$. Then $f_*\mathcal{O}_Y(D)$ is Cohen-Macaulay. \end{proposition} \begin{proof} Since $\mathcal{O}_Y(D)$ is Cohen-Macaulay, so is $\sheafhom_{\mathcal{O}_Y}(\mathcal{O}_Y(D), \omega_Y)$ (see \cite[Corollary 5.70]{KM98}). In particular, the latter sheaf is reflexive, and hence it is isomorphic to $\mathcal{O}_Y(K_Y-D)$. Therefore, the proposition follows from \cite[Theorem 2.74]{kollar13}. \end{proof} The following result is well known. \begin{proposition}[{\cite[Lemma 8.1]{kovacs17b}}] \label{prop:rational_and_CM_is_good} Let $X$ be a variety with rational singularities. Then $Rf_*\omega_Y = \omega_X$ for every resolution $f \colon Y \to X$. \end{proposition} \begin{comment} Let $\omega_X^{\bullet}$ and $\omega_Y^{\bullet}$ be the dualizing complexes. The assumption that $X$ is Cohen-Macaulay implies that $\omega_X^{\bullet} \simeq \omega_X[d]$ for $d = \dim X$. By Grothendieck duality, \[ Rf_* \omega_Y[d] \simeq Rf_* R\sheafhom_Y(\mathcal{O}_Y, \omega_Y^{\bullet}) \simeq R\sheafhom_X(Rf_*\mathcal{O}_Y, \omega_X^{\bullet}) \simeq \omega_X^{\bullet}\cong \omega_X[d]. \] \end{comment} Let us recall that a rank one sheaf $\mathcal{F}$ on a normal variety $X$ is reflexive if and only if it is divisorial, that is, of the form $\mathcal{O}_X(D)$ for some Weil divisor $D$. Moreover, $i_*(\mathcal{F}|_{U}) \simeq \mathcal{F}$, where $i \colon U \to X$ is an inclusion of an open subset whose complement is of codimension at least two. \begin{lemma} \label{lemma:CM} Let $X$ be a normal variety, let $S \subset X$ be a prime divisor, and let $D$ be any Weil divisor. Define $\mathcal{E}$ by the exact sequence \[ 0 \to \mathcal{O}_X(-S + D) \to \mathcal{O}_X(D) \to \mathcal{E} \to 0. \] Then $\Supp \mathcal{E} = S$. Moreover, if $\mathcal{O}_X(-S+D)$ and $\mathcal{O}_X(D)$ satisfy Serre's condition $S_3$, then $\mathcal{E}$ is reflexive as a sheaf on $S$. \end{lemma} \begin{proof} Since the ideal sheaf $\mathcal{I}_S$ of $S$ annihilates $\mathcal{E}$ and the sequence above is the standard restriction exact sequence outside of a closed subset of codimension two, we have that $\Supp(\mathcal{E})=S$. The second part of the lemma follows from \cite{kollar13}*{Lemma 2.60}. \end{proof} \subsection{Strongly F-regular singularities} The proof of the main theorem makes use of the concept of strongly F-regular singularities. We refer the reader to \cite{schwedetucker12} for a comprehensive treatment of this topic. The sole purpose of introducing them here is to show, summing up known results, that for a plt threefold pair $(X,S)$ in characteristic $p>5$, the variety $X$ has rational singularities near $S$ and, assuming $\mathbb{Q}$-factoriality, any reflexive rank one sheaf on $X$ is Cohen-Macaulay near $S$ (Proposition \ref{proposition:f-regular_are_rational}, Proposition \ref{proposition:plt_are_f-regular}, and, for the non-$\mathbb{Q}$-factorial case, Proposition \ref{proposition:relative-rationality}). This shows that singularities of plt blow-ups possess these desirable properties (see Proposition \ref{proposition:plt-blow-up} and Proposition \ref{proposition:non-Q-factorial-case}). For the convenience of the reader, we recall the basic definitions.
\begin{definition} For an F-finite scheme X of characteristic $p>0$ and an effective $\mathbb{Q}$-divisor $\Delta$, we say that $(X,\Delta)$ is \emph{globally F-split} if for every $e \in \mathbb{Z}_{>0}$, the natural morphism \[ \mathcal{O}_X \to F^e_* \mathcal{O}_X(\lfloor(p^e-1)\Delta \rfloor) \] splits in the category of sheaves of $\mathcal{O}_X$-modules. We say that $(X,\Delta)$ is \emph{globally F-regular} if for every effective divisor $D$ on $X$ and every big enough $e \in \mathbb{Z}_{>0}$, the natural morphism \[ \mathcal{O}_X \to F^e_* \mathcal{O}_X(\lfloor(p^e-1)\Delta \rfloor + D) \] splits. We say that $(X,\Delta)$ is \emph{purely globally F-regular}, if the definition above holds for those $D$ which intersect $\lfloor \Delta \rfloor$ properly (see \cite{Das15}*{Definition 2.3(2)}). \end{definition} The local versions of the above notions are called F-purity, strong F-regularity, and pure F-regularity, respectively. Moreover, given a morphism $f \colon X \to Y$, we say that $(X,\Delta)$ is F-split, F-regular, or purely F-regular over $Y$, if the corresponding splittings hold locally over $Y$ (see \cite{hx13}*{Definition 2.6}). \begin{remark} \label{remark:relative_f-split} Assume that $(X,\Delta)$ is F-split over $Y$ with respect to $f \colon X \to Y$. Then for any affine open subset $U \subseteq Y$, we have that $(f^{-1}(U), \Delta|_{f^{-1}(U)})$ is globally F-split (see \cite{hx13}*{Proposition 2.10}, cf.\ \cite{cgs14}*{Remark 2.8}). In particular, if $u \colon Y \to Y'$ is affine, then $(X,\Delta)$ is also F-split over $Y'$ with respect to $u \circ f$. Analogous statements hold for F-regularity and pure F-regularity. \end{remark} \begin{proposition} \label{proposition:f-regular_are_rational} Let $X$ be a strongly F-regular variety. Then $X$ has rational singularities. Moreover, if $X$ is $\mathbb{Q}$-factorial and of dimension at least three, then any reflexive sheaf of rank one on it is $S_3$. \end{proposition} In particular, a reflexive sheaf of rank one on a $\mathbb{Q}$-factorial strongly F-regular threefold is Cohen-Macaulay. \begin{proof} $X$ is F-rational (see \cite[Appendix C]{schwedetucker12}), and hence it has rational singularities by \cite{kovacs17b}*{Corollary 1.12}. The last statement in the proposition is a consequence of \cite[Theorem 3.8]{patakfalvi-schwede14}. Indeed, let $\mathcal{O}_X(D)$ be a reflexive sheaf, and take a general effective divisor $E \sim -(mp+1)D$ for $m \gg 0$. Then $(X,\frac{1}{mp+1}E)$ is strongly F-regular, and the result follows. \end{proof} \begin{proposition} \label{proposition:plt_are_f-regular} Let $(X,S)$ be a plt three-dimensional pair defined over an algebraically closed field of characteristic $p>5$. Then $S$ is normal and $X$ is strongly F-regular along it. \end{proposition} \begin{proof} By \cite{hara98} and \cite[Theorem A]{Das15} (cf.\ Lemma \ref{lemma:relative-inversion-of-adjunction}), we have that $(X,S)$ is purely $F$-regular along $S$ and $S$ is normal. This implies F-regularity of $X$ along $S$. \end{proof} In order to deal with non-$\mathbb{Q}$-factorial singularities, we also need a relative version of the above result. \begin{proposition} \label{proposition:relative-rationality} Let $(X,S)$ be a plt three-dimensional pair defined over an algebraically closed field of characteristic $p>5$, and let $f \colon X \to Z$ be a proper birational morphism between normal varieties such that $\dim f(S)=2$. If $-(K_X+S)$ is $f$-ample, then $Z$ is strongly F-regular along $f(S)$. 
\end{proposition} This does not follow from Proposition \ref{proposition:plt_are_f-regular} directly, because $K_Z + f(S)$ is almost never $\mathbb{Q}$-Cartier. Further, let us note that we will need to apply Proposition \ref{proposition:relative-rationality} in the case when $X$ is not $\mathbb{Q}$-factorial. \begin{proof} As above, $S$ is normal. The normalization morphism $f(S)^\nu \to f(S)$ is affine, and so, by \cite{hx13}*{Theorem 3.1}, the pair $(S, \Delta_{\mathrm{diff}})$, defined by adjunction $K_S + \Delta_{\mathrm{diff}} = (K_X + S)|_S$, is globally $F$-regular over $f(S)$ (see Remark \ref{remark:relative_f-split}). Lemma \ref{lemma:relative-inversion-of-adjunction} and \cite{hx13}*{Lemma 2.12} imply that $Z$ is strongly $F$-regular along $f(S)$. \end{proof} In the proposition above, we used a relative version of inversion of adjunction, which is a consequence of \cite{Das15}*{Theorem B}. The proof is very similar to \cite{Das15}*{Corollary 5.4} and \cite{CTW15b}*{Lemma 2.7}, but we include it here for the convenience of the reader. \begin{lemma} \label{lemma:relative-inversion-of-adjunction} Let $(X,S+B)$ be a plt pair where $S$ is a prime divisor, and let $f \colon X \to Z$ be a proper birational morphism between normal varieties. Assume that $-(K_X+S+B)$ is $f$-ample and $(\overline{S},B_{\overline{S}})$ is globally $F$-regular over $f(S)$, where $\overline{S}$ is the normalization of $S$, and $B_{\overline{S}}$ is defined by adjunction $K_{\overline{S}} + B_{\overline{S}} = (K_X + S + B)|_{\overline{S}}$. Then $(X,S+B)$ is purely globally F-regular over a Zariski-open neighbourhood of $f(S) \subseteq Z$. \end{lemma} \begin{proof} Since the question is local over $Z$, we can assume that $Z$ is affine, that is $Z = \Spec(R)$ for some ring $R$. Further, by perturbing the coefficients of $B$, we can assume that the Cartier index of $K_X + S + B$ is not divisible by $p$ (see for example the proof of \cite[3.8]{Das15}). By \cite[Theorem A]{Das15}, $(X,S+B)$ is purely F-regular, and $S$ is a normal F-pure centre. By standard arguments (replacing $B$ by $B+\epsilon H$ for $0 < \epsilon \ll 1$ and a very ample divisor $H$ intersecting $S$ properly), it is enough to show that $(X,S+B)$ is $F$-split over a Zariski-open neighbourhood of $f(S)$ (see the proof of \cite{schwedesmith10}*{Theorem 3.9}). Set $\mathcal{L} \coloneq \mathcal{O}_X((1-p^e)(K_X+S+B))$ and for a sufficiently divisible integer $e\gg 0$ consider the following diagram { \begin{center} \begin{tikzcd} H^0(X,F_*^e\mathcal{L}) \arrow{r} \arrow{d}{\psi^X_{S+B}} & H^0(S,F_*^e( \mathcal{L}|_S)) \arrow{d}{\psi^S_{B_S}} \arrow{r} & H^1(X,F_*^e(\mathcal{L}(-S))) \\ H^0(X,\mathcal{O}_X) \arrow{r} & H^0(S,\mathcal{O}_S), & \end{tikzcd} \end{center}} \noindent where $\psi^X_{S+B}$ and $\psi^S_{B_S}$ are the trace maps, while the horizontal arrows come from the restriction exact sequences. The diagram is well defined and commutes by the fact that $S$ is an F-pure centre and by the equality of the Different and the F-different (see \cite{Das15}*{Theorem B}, cf.\ \cite{CTW15b}*{Subsection 2.2}). Note that $\psi^S_{B_S}$ is surjective, since $(S, B_S)$ is F-split over $f(S)$. Since $e \gg 0$, the relative Serre vanishing implies that $H^1(X,F_*^e(\mathcal{L}(-S))) = 0$. In particular, the upper left horizontal arrow is surjective, and so the composition \[ H^0(X, F^e_* \mathcal{L}) \xrightarrow{\psi^X_{S+B}} H^0(X,\mathcal{O}_X) \to H^0(S,\mathcal{O}_S) \] is surjective as well. 
But $H^0(X,\mathcal{O}_X)= H^0(Z,\mathcal{O}_Z)=R$, hence this implies that the zero locus of the ideal in $R$ generated by $\mathrm{im}(\psi^X_{S+B})$ is disjoint from $f(S)$. After removing this locus, the surjectivity of $\psi^X_{S+B}$ follows. \end{proof} \subsection{Local to global transition} In the course of the proof of the main theorem we will need the following two results that allow for the transition from a local to a global case. \begin{proposition}[Generalized plt blow-up] \label{proposition:plt-blow-up} Let $(X,\Delta)$ be a Kawamata log terminal three-dimensional pair defined over an algebraically closed field of characteristic $p>5$, and let $x \in X$ be a closed point. Assume that $X \backslash \{x\}$ is $\mathbb{Q}$-factorial. Then there exists a projective birational morphism $f \colon Y \to X$ and an effective $\mathbb{Q}$-divisor $\Delta_Y$ on $Y$ such that \begin{itemize} \item $f$ is an isomorphism over $X \backslash \{x\}$, \item $S \coloneq \Exc f$ is irreducible and anti-$f$-nef, \item $(Y, S+\Delta_Y)$ is a $\mathbb{Q}$-factorial plt threefold, \item $-(K_Y+S+\Delta_Y)$ is $f$-ample. \end{itemize} Furthermore, $S$ is normal and $Y$ is strongly F-regular along it. \end{proposition} \noindent When $x \in X$ is $\mathbb{Q}$-factorial, then $-S$ is automatically $f$-ample. \begin{proof} Apart from the last statement, this is a direct consequence of \cite{GNT06}*{Proposition 2.15}. The strong F-regularity of $Y$ and the normality of $S$ follow from Proposition \ref{proposition:plt_are_f-regular}. \end{proof} For the non-$\mathbb{Q}$-factorial case of the main theorem, we will also need the following result. \begin{proposition} \label{proposition:non-Q-factorial-case} Under the assumptions of Proposition \ref{proposition:plt-blow-up}, the $f$-relative semiample fibration $g \colon Y \to Y'$ associated to $-S$ exists. Moreover, $g$ is small, $-S|_S$ is big, and $Y'$ is strongly F-regular along $g(S)$. \end{proposition} \begin{proof} The first statement follows from the base point free theorem (see \cite[Theorem 2.9]{GNT06}) after having noticed that $-S = K_Y + \Delta_Y -(K_Y+S+\Delta_Y)$. For the proof of the bigness of $-S|_S$ it is enough to show that $g$ is small. To this end, run a $(K_Y+S+\Delta_Y)$-MMP over $Y'$, and let $Y \dashrightarrow Y_{\mathrm{min}} \xrightarrow{g'} Y'$ be the minimal model. Since $-(K_Y+S+\Delta_Y)$ is $g$-ample, $g'$ is small. Moreover, as $S$ is $g$-numerically trivial, none of the steps of the MMP may contract $S$. Indeed, an MMP preserves $\mathbb{Q}$-factoriality, and hence a contracted divisor is always relatively anti-ample with respect to a divisorial contraction. Since $\Exc f = S$ is irreducible, this shows that $g$ is small and it does not contract $S$. For the last statement, we replace $Y$ by the output of a $-(K_Y+S)$-MMP over $Y'$, and so we can assume that $-(K_Y+S)$ is $g$-nef and $g$-big. We can run such an MMP, since we can write \[ -\epsilon (K_Y+S)=K_Y+S+(1+\epsilon)\Delta_Y-(1+\epsilon)(K_Y+S+\Delta_Y), \] where $-(K_Y + S + \Delta_Y)$ is $g$-ample and $(Y,S +(1+\epsilon)\Delta_Y)$ is plt for $0 < \epsilon \ll 1$. Further, by replacing $Y$ by the image of the $g$-relative semiample fibration associated to $-(K_Y+S)$, we can assume that $-(K_Y+S)$ is $g$-ample. The fibration exists by the klt base point free theorem, as $S$ is $g$-numerically trivial and we can write \[ -(K_Y+S) = K_Y - 2(K_Y+S) + S. 
\] Although $Y$ may have ceased to be $\mathbb{Q}$-factorial, $(Y,S)$ is a well defined plt pair (in particular, $K_Y+S$ is $\mathbb{Q}$-Cartier), and so Proposition \ref{proposition:relative-rationality} concludes the proof. \end{proof} \subsection{Liftings} Let $W_m(k)$ denote the ring of Witt vectors of length $m$. We say that a $k$-variety $X$ \emph{lifts over $W_m(k)$} if there exists a flat morphism $\widetilde{X} \to \Spec W_m(k)$ such that the special fibre is isomorphic to $X$. Similarly, we can define liftings of morphisms. In Section \ref{section:applications} we will need the following result. \begin{proposition}[{\cite[Proposition 2.1]{LS14}, \cite[Theorem 3.1]{CS09}}] \label{proposition:pushing_lifts} Let $f \colon Y \to X$ be a morphism of schemes satisfying $Rf_* \mathcal{O}_Y = \mathcal{O}_X$. Assume that $Y$ lifts to a scheme $\widetilde{Y}_m$ over $W_m(k)$ for some natural number $m \in \mathbb{Z}_{>0}$. Then $f$ lifts to $\widetilde{f}_m \colon \widetilde{Y}_m \to \widetilde{X}_m$ over $W_m(k)$. \end{proposition} The same is true for formal lifts to the total Witt ring $W(k)$. \begin{comment} \begin{proof} We pick $p_0$ as in Theorem \ref{theorem:main} and proceed by induction. We assume that $\pi$ lifts to ${\widetilde{\pi}}_{l-1} \colon \widetilde{X}_{l-1} \to \widetilde{Y}_{l-1}$ over $W_{l-1}(k)$ for $l \leq m$, and we would like to show that it lifts to $W_l(k)$. To this end, we consider the following exact sequence of sheaves of algebras on $\widetilde{Y}_{m-1}$: \[ 0 \to \pi_* \mathcal{O}_X \to ({\widetilde{\pi}}_{l-1})_*\mathcal{O}_{\widetilde{X}_l} \to ({\widetilde{\pi}}_{l-1})_*\mathcal{O}_{\widetilde{X}_{l-1}} \to R^1\pi_* \mathcal{O}_X, \] where $\widetilde{X}_l$ is the lift of $X$ to $W_l(k)$ induced by $\widetilde{X}_m$. Since $X$ and $Y$ have rational singularities (Theorem \ref{theorem:main}), the Leray spectral sequence shows that $R^1\pi_* \mathcal{O}_X = 0$. Thus, we can take \[ \widetilde{Y}_l \coloneq \Spec\Big(({\widetilde{\pi}}_{l-1})_*\mathcal{O}_{\widetilde{X}_l}\Big). \] \end{proof} \end{comment} \section{The restriction short exact sequence} \label{section:surfaces} \begin{proposition} \label{proposition:3folds-ses} Let $(X,S)$ be a plt three-dimensional $\mathbb{Q}$-factorial pair defined over an algebraically closed field $k$ of characteristic $p>5$, where $S$ is a prime divisor, and let $\Delta_{\mathrm{diff}}$ be the different. Then for any Weil divisor $D$ on $X$, there exists an effective $\mathbb{Q}$-divisor $\Delta_S \leq \Delta_{\mathrm{diff}}$ and a Weil divisor $G$ on $S$ such that $G\sim _{\mathbb Q} K_S+\Delta _S+D|_S$ and the sequence \[ 0 \to \mathcal{O}_X(K_X+D) \to \mathcal{O}_X(K_X+S+D) \to \mathcal{O}_S(G) \to 0 \] is exact. \end{proposition} Let us note that $G$ is only defined up to linear equivalence. \begin{proof} Note that in proving the proposition, we are free to replace $X$ by an appropriate neighborhood of $S$. By Proposition \ref{proposition:plt_are_f-regular}, we may assume that $X$ is strongly F-regular and $S$ is normal. Replacing $D$ by a linearly equivalent divisor, we may assume that $S$ is not contained in the support of $D$ and so $D|_S$ is a well defined $\mathbb Q$-divisor. Consider the following short exact sequence (see Lemma \ref{lemma:CM}) \[ 0 \to \mathcal{O}_X(K_X + D) \to \mathcal{O}_X(K_X+S+D) \to i_*\mathcal{E} \to 0, \] where $\mathcal{E}$ is a sheaf supported on $S$, and $i \colon S \to X$ is the inclusion.
By Proposition \ref{proposition:f-regular_are_rational} any reflexive rank one sheaf on $X$ satisfies Serre's condition $S_3$ and so, by Lemma \ref{lemma:CM}, $\mathcal{E}$ is reflexive on $S$. Let $\pi \colon \overline{X} \to X$ be a log resolution of $(X,S)$ and let $\overline{S}$ be the strict transform of $S$. Define $\Delta_{\overline{X}}$ by \[ K_{\overline{X}} + \overline{S} + \Delta_{\overline{X}} = \pi^*(K_X+S). \] Consider the following exact sequence \[ 0 \to \mathcal{O}_{\overline{X}}(K_{\overline{X}} + \lceil \pi^*D \rceil ) \to \mathcal{O}_{\overline{X}}(K_{\overline{X}} + \overline{S} + \lceil \pi^*D \rceil) \to \mathcal{O}_{\overline{S}}(K_{\overline{S}} + \lceil \pi^*D \rceil|_{\overline{S}}) \to 0. \] Since $(X,S)$ is plt, we have $\lfloor \Delta_{\overline{X}} \rfloor \leq 0$, and so $\lceil \pi^*D \rceil \geq \lfloor \Delta_{\overline{X}} + \pi^*D \rfloor$. Therefore \begin{align*} K_{\overline{X}} + \overline{S} + \lceil \pi^*D \rceil &\geq \lfloor K_{\overline{X}} + \overline{S} + \Delta_{\overline{X}} + \pi^*D \rfloor \\ &= \lfloor \pi^*(K_X+S+D) \rfloor, \text{ and}\\ K_{\overline{X}} + \lceil \pi^*D \rceil &\geq \lfloor K_{\overline{X}} + \Delta_{\overline{X}} + \pi^*D \rfloor \\ &\geq \lfloor \pi^*(K_X+D) \rfloor. \end{align*} In particular, \begin{align*} \pi_*\mathcal{O}_{\overline{X}}(K_{\overline{X}} + \lceil \pi^*D \rceil) &= \mathcal{O}_X(K_X+D), \text{ and } \\ \pi_*\mathcal{O}_{\overline{X}}(K_{\overline{X}} + \overline{S} + \lceil \pi^*D \rceil) &= \mathcal{O}_X(K_X+S+D). \end{align*} By applying $\pi_*$ to the above exact sequence, we have that \[ 0 \to \mathcal{O}_X(K_X+D) \to \mathcal{O}_X(K_X+S+D) \to \pi _*\mathcal{O} _{\overline S}(\overline G) \to R^1\pi_* \mathcal{O}_{\overline{X}}( K_{\overline{X}} + \lceil \pi^*D \rceil) \] is exact, where $\overline G \coloneq K_{\overline{S}} + \lceil \pi^*D \rceil|_{\overline{S}}$. By localizing at one dimensional points on $X$, Lemma \ref{lemma:surface} shows that \[ \dim \mathrm{Supp}(R^1\pi_* \mathcal{O}_{\overline{X}}(K_{\overline{X}}+ \lceil \pi^*D \rceil )) \leq 0. \] Set $\Delta_S \coloneq (\pi|_{\bar S})_*((\lceil \pi^*D \rceil-\pi^*D)|_{\overline{S}})$ and $G \coloneq (\pi|_{\bar S})_*\overline G$ so that $G \sim_{\mathbb Q} K_S + \Delta_S + D|_S$. We claim that $\mathcal{E} \simeq \mathcal{O} _S(G)$. Indeed, by what we observed above, $\mathcal{E}$ is isomorphic to $ \pi _*\mathcal{O} _{\overline S}(\overline G) $ in codimension one on $S$, which in turn is isomorphic to $\mathcal{O}_S(G)$ in codimension one on $S$. Since both $\mathcal{E}$ and $\mathcal{O}_S(G)$ are reflexive sheaves of rank one, the isomorphism $\mathcal{E} \simeq \mathcal{O} _S(G)$ follows. To conclude the proof, it is enough to verify that $\Delta_S \leq \Delta_{\mathrm{diff}}$. This can be checked by localizing at one dimensional points in $\mathrm{Sing}(X)$, and thus we conclude by Lemma \ref{lemma:surface} (see also \cite{kollaretal}*{Corti 16.6.3}). 
\end{proof} \begin{remark} Proposition \ref{proposition:3folds-ses} also holds in the following two cases: \begin{enumerate} \item if $(X,S)$ is a $n$-dimensional $\mathbb{Q}$-factorial plt pair and $D$ is a Weil divisor, defined over an algebraically closed field of characteristic $0$, and \item if $(X,S)$ is a $n$-dimensional $\mathbb{Q}$-factorial plt pair and $D$ is a Weil divisor, defined over an algebraically closed field of characteristic $p>0$ such that $(X,S)$ admits a log resolution of singularities and $(S^\nu , \Delta _{\rm diff})$ is strongly F-regular where $S^\nu \to S$ is the normalization. \end{enumerate} For case (1) recall that since $(X,S)$ is plt, then $S$ is normal and both $\mathcal O_X(K_X+D)$ and $\mathcal O _X(K_X+S+D)$ are Cohen Macaulay sheaves by \cite[Corollary 5.25]{KM98}. For case (2) recall that by \cite{Das15}, $S$ is normal and $(X,S)$ is purely F-regular near $S$. \end{remark} The following lemma, used in the above proof, was applied to localizations at non-closed points, and thus we cannot assume that $k$ is algebraically closed. However this has no impact on the proof, because the surfaces under consideration are excellent. We refer to \cite{kollar13} and \cite{tanaka16_excellent} for the classification and basic results pertaining to excellent log canonical surfaces. Let us just note that excellent klt surfaces are $\mathbb{Q}$-factorial by \cite[Corollary 4.10]{tanaka16_excellent}, and if $(X,S)$ is a plt surface pair, then $S$ is regular (\cite[3.35]{kollar13}). \begin{lemma} \label{lemma:surface} Let $(X,S)$ be a plt surface pair defined over an arbitrary field $k$, where $S$ is a prime divisor. Let $\Delta_{\mathrm{diff}}$ be the different, let $\pi \colon \overline{X} \to X$ be a log resolution of $(X,S)$, let $\overline{S}$ be the strict transform of $S$, let $D$ be any Weil divisor. Then \begin{itemize} \item $R^1\pi_*\mathcal{O}_{\overline{X}}(K_{\overline{X}} + \lceil \pi^*D \rceil) = 0$, and \item $\Delta_S \, {\leq} \, \Delta_{\mathrm{diff}}$, where $\Delta_S \, {\coloneq}\, ( \lceil \pi^*D \rceil{-}\pi^*D)|_{\overline{S}}$, and $S$ is identified with $\overline{S}$. \end{itemize} \end{lemma} \begin{proof} As $\lfloor \lceil \pi^*D\rceil- \pi^*D \rfloor =0$, the vanishing \[ R^1\pi_*\mathcal{O}_{\overline{X}}(K_{\overline{X}} + \lceil \pi^*D\rceil) = 0 \] follows by Theorem \ref{thm:relKV}. As for the second statement, we can restrict ourselves to a neighbourhood of $S$ so that all singularities of $X$ lie on $S$. Therefore, \cite[3.35]{kollar13} implies that each singularity of $X$ is cyclic and $S$ is regular. Moreover, if $x \in \mathrm{Sing}(X)$ and $m_x$ is the $\mathbb{Q}$-factorial index of $X$ at $x$, then $m_xD$ is Cartier and $\Delta_{\mathrm{diff}}$ has coefficient $1 - \frac{1}{m_x}$ at $x$ (\cite{kollaretal}*{Corti 16.6.3}). Therefore, $\Delta_S \leq \Delta_{\mathrm{diff}}$, because, by definition, $\Delta_S$ has coefficients smaller than one, and $m_x\Delta_S$ is Cartier at $x \in \mathrm{Sing}(X)$. \end{proof} By applying Proposition \ref{proposition:3folds-ses} for $D$ replaced by $D-(K_X+S)$, we get that \[ 0 \to \mathcal{O}_X(-S+D) \to \mathcal{O}_X(D) \to \mathcal{O}_S(G) \to 0 \] is exact {for a $\mathbb{Q}$-divisor $0 \leq \Delta_S \leq \Delta_{\mathrm{diff}}$ and a Weil divisor $G \sim_{\mathbb{Q}} D|_S-\Delta _S$. 
} \begin{corollary} \label{corollary:3folds-ses} Under the assumptions of Proposition \ref{proposition:3folds-ses}, the following sequences \begin{align*} 0 &\to \mathcal{O}_X(K_X-(k+1)S-D) \to \mathcal{O}_X(K_X-kS-D) \to \mathcal{O}_S(G'_k) \to 0 \\ 0 &\to \mathcal{O}_X(-(k+1)S+D) \to \mathcal{O}_X(-kS+D) \to \mathcal{O}_S(G_k) \to 0 \end{align*} are exact for any $k \in \mathbb{Z}_{>0}$, some effective $\mathbb{Q}$-divisors $\Delta_k, \Delta'_k \leq \Delta_{\mathrm{diff}}$ depending on $k$, and some integral divisors $G'_k \sim_{\mathbb{Q}}K_S+\Delta' _k-(k+1)S|_S -D|_S$ and {$G_k \sim_{\mathbb{Q}} -kS|_S + D|_S - \Delta_k$}. \end{corollary} \section{Proof of the main theorem} \begin{proof}[Proof of Theorem \ref{theorem:main}] {Fix $p_0$ as in Theorem \ref{theorem:CTW}.} Since the question is local, we can assume that $X$ is affine, and aim to show that a fixed closed point $x \in X$ is rational. We replace $X$ by a Zariski-open neighbourhood of $x$ whenever convenient. Let $f \colon Y \to X$ be a generalized plt blow-up at $x$ as in Proposition \ref{proposition:plt-blow-up}. Let $S \coloneq \Exc(f)$. Recall that $(Y,S+\Delta_Y)$ is plt, $-S$ is $f$-nef, and $-(K_Y+S+\Delta_Y)$ is $f$-ample for some effective $\mathbb{Q}$-divisor $\Delta_Y$ on $Y$. Moreover, we may assume that $Y$ is strongly F-regular. First, we show that $Rf_*\mathcal{O}_Y = \mathcal{O}_X$. To this end, we apply Corollary \ref{corollary:3folds-ses} to get short exact sequences \[ 0 \to \mathcal{O}_Y(-(k+1)S) \to \mathcal{O}_Y(-kS) \to \mathcal{O}_S(G_k) \to 0 \] for all $k\geq 0$ where $G_k \sim_{\mathbb{Q}} -kS|_S-\Delta _k$ for some effective $\mathbb{Q}$-divisor $\Delta_k \leq \Delta_{\mathrm{diff}}$, where $\Delta_{\mathrm{diff}}$ is the different. Thus, to show that $R^if_*\mathcal{O}_Y(-kS)$ vanishes for $k=0$ and $i>0$, it is enough to prove that \begin{itemize} \item $R^if_*\mathcal{O}_Y(-mS) = 0$ for $i>0$ and a divisible enough $m \gg 0$, and \item $H^i(S, \mathcal{O}_S(G_k))=0$ for $i>0$ and all $k\geq 0$. \end{itemize} If $X$ is $\mathbb{Q}$-factorial, then the first vanishing follows from the relative Serre vanishing, as $-S$ is $f$-ample. In general, we consider the $f$-relative semiample fibration $Y \xrightarrow{g} Y' \xrightarrow{f'} X$ associated to $-S$. It exists by Proposition \ref{proposition:non-Q-factorial-case}. Since $Y'$ is strongly F-regular along $S_{Y'} \coloneq g(S)$, we may assume that $Y'$ has rational singularities (see Proposition \ref{proposition:f-regular_are_rational}). For $m>0$ sufficiently divisible, $mS_{Y'}$ is Cartier and $mS=g^*(mS_{Y'})$. By the projection formula, we have $Rg_* \mathcal{O}_Y(-mS) =g_* \mathcal{O}_Y(-mS) =\mathcal{O}_{Y'}(-mS_{Y'})$ and therefore $R^if_*\mathcal{O}_Y(-mS) = R^if'_*\mathcal{O}_{Y'}(-mS_{Y'})$, which is zero for $m\gg 0$ by Serre vanishing since $S_{Y'} $ is $f'$-anti-ample. Therefore, it suffices to show that $H^i(S, \mathcal{O}_S(G_k))=0$ for $i>0$ and all $k\geq 0$. To this end, we notice that \[ (K_Y+S+\Delta_Y)|_S = K_S + \Delta_{\mathrm{diff}} + \Delta_Y|_S \] is anti-ample and $(S, \Delta_{\mathrm{diff}} + \Delta_Y|_S)$ is klt.
The cohomology in question is then zero by Theorem \ref{theorem:CTW}, because \[ G_k\sim _\mathbb{Q} K_S + (\Delta_{\mathrm{diff}} + \Delta_Y|_S - \Delta_k) \underbrace{-(K_S + \Delta_{\mathrm{diff}}+\Delta_Y|_S) -kS|_S}_{\text{ample}}. \]\\ By an analogous argument, considering the other short exact sequence in Corollary \ref{corollary:3folds-ses} \[ 0 \to \mathcal{O}_Y(K_Y-(k+1)S) \to \mathcal{O}_Y(K_Y-kS) \to \mathcal{O}_S(G'_k) \to 0, \] one can show that $R^if_*\omega_Y = 0$ for $i>0$. Indeed, $Rg_* \omega_Y = \omega_{Y'}$ by Proposition \ref{prop:rational_and_CM_is_good} applied to both $Y$ and $Y'$. As $S$ is $g$-trivial, we have \[ R^if_*\mathcal{O}_Y(K_Y - mS) = R^if'_*\mathcal{O}_{Y'}(K_{Y'} - mS_{Y'}) = 0, \] for $i>0$ and a divisible enough $m\gg 0$ by the relative Serre vanishing. Moreover, by Theorem \ref{theorem:CTW} \[ H^i(S, \mathcal{O} _S (G'_k))=0\qquad \forall i>0 \] for any $k\geq 0$ and a divisor $G'_k$ such that $G'_k\sim _\mathbb{Q} K_S + \Delta' _k- (k+1)S|_S$ where $\Delta' _k $ is an effective $\mathbb{Q}$-divisor satisfying $\Delta' _k \leq \Delta_{\mathrm{diff}}$, because $-S|_S$ is nef and big (see Proposition \ref{proposition:non-Q-factorial-case}). Proposition \ref{proposition:f-regular_are_rational} implies that $Y$ is Cohen-Macaulay, and so $X$ is Cohen-Macaulay as well by Proposition \ref{proposition:CM_kollar}. Let $\pi_Y \colon \overline{Y} \to Y$ be a log resolution. By Proposition \ref{proposition:f-regular_are_rational}, we have that $R(\pi_Y)_* \mathcal{O}_{\overline{Y}} = \mathcal{O}_Y$, and so, by the composition of derived functors, we get that $R\pi_*\mathcal{O}_{\overline{Y}} = \mathcal{O}_X$, where $\pi \coloneq f \circ \pi_Y \colon \overline{Y} \to X$. Thus $X$ has rational singularities. \\ In order to prove the last statement, we assume that $X$ is $\mathbb{Q}$-factorial, fix a Weil divisor $D$ on $X$, and set $D_Y \coloneq \lfloor f^* D \rfloor = f^*D - \epsilon S$ for some $0 \leq \epsilon < 1$. First, we show that $Rf_*\mathcal{O}_Y(D_Y) = \mathcal{O}_X(D)$. By Corollary \ref{corollary:3folds-ses}, we get short exact sequences \[ 0 \to \mathcal{O}_Y(-(k+1)S+D_Y) \to \mathcal{O}_Y(-kS+D_Y) \to \mathcal{O}_S(G_k) \to 0 \] for all $k\geq 0$ where $G_k \sim_{\mathbb{Q}} -(k+\epsilon)S|_S-\Delta _k$ for some effective $\mathbb{Q}$-divisor $\Delta_k \leq \Delta_{\mathrm{diff}}$. As in the argument above, when $i>0$ we have \begin{align*} H^i(S, \mathcal{O}_S(G_k)) &= 0, \text{ and } \\ R^if_*\mathcal{O}_Y(-mS+D_Y) &= 0 \text{ for divisible enough } m \gg 0, \end{align*} where the second identity follows from Serre's vanishing as $-S$ is $f$-ample, and the first one is a consequence of Theorem \ref{theorem:CTW} given that \[ G_k\sim _\mathbb{Q} K_S + (\Delta_{\mathrm{diff}} + \Delta_Y|_S - \Delta_k) \underbrace{-(K_S + \Delta_{\mathrm{diff}}+\Delta_Y|_S) -(k+\epsilon)S|_S}_{\text{ample}}. \] The claim now follows easily by an argument similar to the one we used in the proof that the singularities are rational. Analogously, one can show that $R^if_*\mathcal{O}_Y(K_Y-D_Y) = 0$ for $i>0$ by considering the other short exact sequence in Corollary \ref{corollary:3folds-ses} \[ 0 \to \mathcal{O}_Y(K_Y-(k+1)S-D_Y) \to \mathcal{O}_Y(K_Y-kS-D_Y) \to \mathcal{O}_S(G'_k) \to 0. \] Indeed, $R^if_*\mathcal{O}_Y(K_Y - mS - D_Y) = 0$ for sufficiently divisible $m\gg 0$ by Serre vanishing.
Moreover, by Theorem \ref{theorem:CTW} \[ H^i(S, \mathcal{O} _S (G'_k))=0\qquad \forall i>0 \] for any $k\geq 0$ and a divisor $G'_k$ such that $G'_k\sim _\mathbb{Q} K_S + \Delta' _k- (k+1-\epsilon)S|_S$ where $\Delta' _k $ is an effective $\mathbb{Q}$-divisor satisfying $\Delta' _k \leq \Delta_{\mathrm{diff}}$. Since $\mathcal{O}_Y(D_Y)$ is Cohen-Macaulay (see Proposition \ref{proposition:f-regular_are_rational}), Proposition \ref{proposition:CM_kollar} concludes the proof. \end{proof} \section{Applications and open problems} \label{section:applications} \begin{corollary}\label{c:lift} There exists a constant $p_0 >0$ with the following property. Let $(X,\Delta_X)$ and $(Y,\Delta_Y)$ be Kawamata log terminal threefolds defined over an algebraically closed field $k$ of characteristic $p>p_0$, and let $\pi \colon X \to Y$ be a proper birational morphism between them. If $X$ lifts to $\widetilde{X}_m$ over $W_m(k)$ for some $m \in \mathbb{N}$, then $\pi$ lifts to $\widetilde{\pi}_m \colon \widetilde{X}_m \to \widetilde{Y}_m$. \end{corollary} \begin{proof} This follows from Theorem \ref{theorem:main} and Proposition \ref{proposition:pushing_lifts}. \begin{comment} We pick $p_0$ as in Theorem \ref{theorem:main} and proceed by induction. We assume that $\pi$ lifts to ${\widetilde{\pi}}_{l-1} \colon \widetilde{X}_{l-1} \to \widetilde{Y}_{l-1}$ over $W_{l-1}(k)$ for $l \leq m$, and we would like to show that it lifts to $W_l(k)$. To this end, we consider the following exact sequence of sheaves of algebras on $\widetilde{Y}_{m-1}$: \[ 0 \to \pi_* \mathcal{O}_X \to ({\widetilde{\pi}}_{l-1})_*\mathcal{O}_{\widetilde{X}_l} \to ({\widetilde{\pi}}_{l-1})_*\mathcal{O}_{\widetilde{X}_{l-1}} \to R^1\pi_* \mathcal{O}_X, \] where $\widetilde{X}_l$ is the lift of $X$ to $W_l(k)$ induced by $\widetilde{X}_m$. Since $X$ and $Y$ have rational singularities (Theorem \ref{theorem:main}), the Leray spectral sequence shows that $R^1\pi_* \mathcal{O}_X = 0$. Thus, we can take \[ \widetilde{Y}_l \coloneq \Spec\Big(({\widetilde{\pi}}_{l-1})_*\mathcal{O}_{\widetilde{X}_l}\Big). \] \end{comment} \end{proof} With Corollary \ref{c:lift} in mind, it is natural to ask the following. \begin{question}\label{q:lift} Let $X$ be a Kawamata log terminal threefold defined over an algebraically closed field $k$ of characteristic $p\gg 0$ and $X\dasharrow X^+$ a flip. If $X$ lifts to $\widetilde X _m $ over $W_m(k)$ for some $m\in \mathbb N$, then does $X^+$ lift to $\widetilde X ^+_m $ over $W_m(k)$? \end{question} \begin{remark} \label{remark:quotient} After the first version of the paper had been announced, we were informed by Professor Takehiko Yasuda that for any algebraically closed field $k$ of characteristic $p>2$ there exists a quotient klt singularity of dimension $d$ depending on $p$ which is not Cohen-Macaulay. For $p\geq 5$, his construction is the following. Let $X = V / G$, where $V$ is a $d$-dimensional $k$-vector space, $\frac{d(d-1)}{2} \geq p \geq d \geq 4$, and $G \simeq \mathbb{F}_p$ is a subgroup of $\GL_d(k)$ generated by a matrix of the form \[ \begin{bmatrix} 1 & 1 & 0 & 0 & \dots & 0 & 0 \\ 0 & 1 & 1 & 0 & \dots & 0 & 0 \\ 0 & 0 & 1 & 1 & \dots & 0 & 0 \\ 0 & 0 & 0 & 1 & \dots & 0 & 0 \\ & & & & \ddots & \\ 0 & 0 & 0 & 0 & \dots & 1 & 1 \\ 0 & 0 & 0 & 0 & \dots & 0 & 1 \end{bmatrix}, \] that is ${\rm Id} + I$. Here ${\rm Id}$ is the identity matrix and $I$ is a matrix with ones just above the diagonal and zeroes elsewhere. We have $({\rm Id} + I)^p = {\rm Id}$, because $p \geq d$. 
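Indeed, since ${\rm Id}$ and $I$ commute and $I^d = 0$, the binomial theorem gives (spelling out this standard step)
\[
({\rm Id} + I)^p \;=\; \sum_{j=0}^{p}\binom{p}{j}\, I^j \;=\; {\rm Id} + I^p \;=\; {\rm Id},
\]
because $p$ divides $\binom{p}{j}$ for $0<j<p$ and $I^p = 0$, as $I$ is nilpotent of index $d \leq p$.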
Since the action of $G$ on $V$ is indecomposable and $\frac{d(d-1)}{2} \geq p$, the variety $X$ is klt by \cite[Proposition 6.6 and Proposition 6.9]{Yasuda}. Since $\dim V^G = 1 < d - 2$, it is not Cohen-Macaulay by the main result in \cite{ellingsrud-skjelbred} (see the first paragraph of the article). By a similar construction one can get an example of a non-Cohen-Macaulay quotient klt singularity for $p=3$. \end{remark} It is also natural to wonder what the optimal value for the constant $p_0$ is. \begin{question} Let $X$ be a three dimensional klt variety defined over an algebraically closed field of characteristic $p>2$. Does $X$ have rational singularities? \end{question} \end{document}
\begin{document} \begin{flushleft} {\Large\sc Reproducing pairs of measurable functions and partial inner product spaces} \vspace*{7mm} {\large\sf J-P. Antoine $\!^{\rm a}$ and C. Trapani $\!^{\rm b}$ } \\[3mm] $^{\rm a}$ {\small Institut de Recherche en Math\'ematique et Physique, Universit\'e catholique de Louvain \\ \hspace*{3mm}B-1348 Louvain-la-Neuve, Belgium\\ \hspace*{3mm}{\it E-mail address}: [email protected]} \\[1mm] $^{\rm b}$ {\small Dipartimento di Matematica e Informatica, Universit\`a di Palermo, \\ \hspace*{3mm} I-90123 Palermo, Italy\\ \hspace*{3mm} {\it E-mail address}: [email protected]} \end{flushleft} \begin{abstract} We continue the analysis of reproducing pairs of weakly measurable functions, which generalize continuous frames. More precisely, we examine the case where the defining measurable functions take their values in a partial inner product space ({\sc pip}-space). Several examples, both discrete and continuous, are presented. \textbf{AMS classification numbers:} 41A99, 46Bxx, 46C50, 46Exx \textbf{Keywords:} Reproducing pairs, continuous frames, upper and lower semi-frames, partial inner product spaces, lattices of Banach spaces \end{abstract} \section{Introduction} \label{sec-intro} Frames and their relatives are most often considered in the discrete case, for instance in signal processing \cite{christ}. However, continuous frames have also been studied and offer interesting mathematical problems. They were introduced originally by Ali, Gazeau and one of us \cite{squ-int,contframes} and also, independently, by Kaiser \cite{kaiser}. Since then, several papers have dealt with various aspects of the concept, see for instance \cite{gab-han} or \cite{rahimi}. However, there may occur situations where it is impossible to satisfy both frame bounds. Therefore, several generalizations of frames have been introduced. Semi-frames \cite{ant-bal-semiframes1,ant-bal-semiframes2}, for example, are obtained when the functions satisfy only one of the two frame bounds. It turns out that a large portion of frame theory can be extended to this larger framework, in particular the notion of duality. More recently, a new generalization of frames was introduced by Balazs and Speckbacher \cite{speck-bal}, namely, reproducing pairs. Here, given a measure space $(X, \mu)$, one considers a couple of weakly measurable functions $(\psi, \phi)$, instead of a single mapping, and one studies the correlation between the two (a precise definition is given below). This definition also includes the original definition of a continuous frame \cite{squ-int,contframes} given the choice $\psi= \phi$. The increase of freedom in choosing the mappings $\psi$ and $\phi$, however, leads to the problem of characterizing the range of the analysis operators, which in general need no longer be contained in $L^2(X, \,\mathrm{d} \mu)$, as in the frame case. Therefore, we extend the theory to the case where the weakly measurable functions take their values in a partial inner product space ({\sc pip}-space). We discuss first the case of a rigged Hilbert space, then we consider a genuine {\sc pip}-space.
We conclude with two natural families of examples, namely, Hilbert scales and several {\sc pip}-spaces generated by the family $\{L^p(X, \,\mathrm{d} \mu), 1\leq p \leq \infty\}$. \section{Preliminaries} \label{sec-prel} Before proceeding, we list our definitions and conventions. The framework is a (separable) Hilbert space ${\mathcal H}$, with the inner product $\ip{\cdot}{\cdot}$ linear in the first factor. Given an operator $A$ on ${\mathcal H}$, we denote its domain by $D(A)$, its range by ${\sf Ran}\,(A)$ and its kernel by ${\sf Ker}\,(A)$. $GL({\mathcal H})$ denotes the set of all invertible bounded operators on ${\mathcal H}$ with bounded inverse. Throughout the paper, we will consider weakly measurable functions $\psi: X \to {\mathcal H}$, where $(X,\mu)$ is a locally compact space with a Radon measure $\mu$, that is, $\ip{\psi_x}{f}$ is $\mu$-measurable for every $f\in{\mathcal H}$. Then the weakly measurable function $\psi$ is a \emph{continuous frame} if there exist constants ${\sf m} > 0$ and ${\sf M}<\infty$ (the frame bounds) such that \begin{equation}\label{eq:frame} {\sf m} \norm{}{f}^2 \leq \int_{X} |\ip{f}{\psi_{x}}|^2 \, \,\mathrm{d} \mu(x) \leq {\sf M} \norm{}{f}^2 , \quad \forall \, f \in {\mathcal H}. \end{equation} Given the continuous frame $\psi$, the \emph{analysis} operator ${\sf C}_{\psi}: {\mathcal H} \to L^{2}(X, \,\mathrm{d}\mu)$ \footnote{As usual, we identify a function $\xi$ with its residue class in $L^{2}(X, \,\mathrm{d}\mu)$.} is defined as \begin{equation}\label{eq:csmap} ({\sf C}_{\psi}f)(x) =\ip{f} {\psi_{x}}, \; f \in {\mathcal H}, \end{equation} and the corresponding \emph{synthesis operator} ${\sf C}_{\psi}^\ast: L^{2}(X, \,\mathrm{d}\mu) \to {\mathcal H}$ as (the integral being understood in the weak sense, as usual) \begin{equation}\label{eq:synthmap} {\sf C}_{\psi}^\ast \xi = \int_X \xi(x) \,\psi_{x} \; \,\mathrm{d}\mu(x), \mbox{ for} \;\;\xi\in L^{2}(X, \,\mathrm{d}\mu). \end{equation} We set $S:={\sf C}_{\psi}^* {\sf C}_{\psi}$, which is self-adjoint. More generally, the couple of weakly measurable functions $(\psi, \phi)$ is called a \emph{reproducing pair} if \cite{ast-reprodpairs} \\[1mm] (a) The sesquilinear form \begin{equation}\label{eq:form} \Omega_{\psi, \phi}(f,g) = \int_X \ip{f}{\psi_x} \ip{\phi_x}{g} \,\mathrm{d}\mu(x) \end{equation} is well-defined and bounded on ${\mathcal H} \times {\mathcal H}$, that is, $| \Omega_{\psi, \phi}(f,g) | \leq c \norm{}{f}\norm{}{g}$ for some $c>0$. \\[1mm] (b) The corresponding bounded (resolution) operator $S_{\psi, \phi}$ belongs to $GL({\mathcal H})$. Under these hypotheses, one has \begin{equation}\label{eq-Sf} S_{\psi, \phi}f = \int_X \ip{f}{\psi_x} \phi_x \,\mathrm{d}\mu(x) , \; \forall\,f\in{\mathcal H}, \end{equation} the integral on the r.h.s. being defined in the weak sense. If $\psi = \phi$, we recover the notion of continuous frame, so that we have indeed a genuine generalization of the latter. Notice that $S_{\psi, \phi}$ is in general neither positive, nor self-adjoint, since $S_{\psi, \phi}^\ast= S_{\phi, \psi}$.
However, if $(\psi,\phi)$ is a reproducing pair, then $(\psi , S_{\psi, \phi}^{-1}\phi)$ is a dual pair, that is, the corresponding resolution operator is the identity. Therefore, there is no loss of generality in assuming that $S_{\phi, \psi}=I$ \cite{speck-bal}. The worst that can happen is to replace some norms by equivalent ones. In \cite{ast-reprodpairs}, it has been shown that each weakly measurable function $\phi$ generates an intrinsic pre-Hilbert space $V_\phi(X, \mu)$ and, moreover, a reproducing pair $(\psi, \phi)$ generates two Hilbert spaces, $V_\psi(X, \mu)$ and $V_\phi(X, \mu)$, conjugate dual of each other with respect to the $L^2(X, \mu)$ inner product. Let us briefly sketch that construction, which we will generalize further on. Given a weakly measurable function $\phi$, let us denote by ${\mathcal V}_\phi(X, \mu) $ the space of all measurable functions $\xi : X \to {\mathbb C}$ such that the integral $\int_X \xi(x) \ip{\phi_x}{g} \,\mathrm{d}\mu(x)$ exists for every $g\in {\mathcal H}$ and defines a bounded conjugate linear functional on ${\mathcal H}$, i.e., $\exists\; c>0$ such that \begin{equation}\label{eq-Vphi} \left| \int_X \xi(x) \ip{\phi_x}{g} \,\mathrm{d}\mu(x) \right| \leq c \norm{}{g}, \; \forall\, g \in {\mathcal H}. \end{equation} Clearly, if $(\psi,\phi)$ is a reproducing pair, all functions $\xi(x) = \ip{f}{\psi_x} = (C_\psi f)(x)$ belong to ${\mathcal V}_\phi(X, \mu) $. By the Riesz lemma, we can define a linear map $T_\phi : {\mathcal V}_\phi(X, \mu) \to {\mathcal H}$ by the following weak relation \begin{equation}\label{eq:Tphi2} \ip{T_\phi \xi}{g} =\int_X \xi(x) \ip{\phi_x}{g} \,\mathrm{d}\mu(x) , \; \forall \, \xi \in{\mathcal V}_\phi(X, \mu), g\in{\mathcal H}. \end{equation} Next, we define the vector space $$ V_\phi(X, \mu)= {\mathcal V}_\phi(X, \mu)/{{\sf Ker}\,}\,T_\phi $$ and equip it with the norm \begin{equation} \label{eq-normphi} \norm{\phi}{\cl{\xi}{\phi}} := \sup_{\norm{}{g}\leq 1 } \left| \int_X \xi(x) \ip{\phi_x}{g} \,\mathrm{d}\mu(x) \right| = \sup_{\norm{}{g}\leq 1 }\left| \ip{T_\phi \xi}{g} \right|, \end{equation} where we have put $\cl{\xi}{\phi}= \xi + {{\sf Ker}\,}\,T_\phi$ for $\xi\in {\mathcal V}_\phi(X, \mu)$. Clearly, $V_\phi(X, \mu)$ is a normed space. However, the norm $\norm{\phi}{\cdot}$ is in fact Hilbertian, that is, it derives from an inner product, as can be seen as follows. First, it turns out that the map $\widehat{T}_\phi:V_\phi(X,\mu)\rightarrow {\mathcal H}$, $\widehat{T}_\phi[\xi]_\phi:= T_\phi \xi$, is a well-defined isometry of $V_\phi(X, \mu)$ into ${\mathcal H}$. Next, one may define on $V_\phi(X,\mu)$ an inner product by setting $$ \ip{\cl{\xi}{\phi}}{\cl{\eta}{\phi}}_{(\phi)}: =\ip{\widehat{T}_\phi\cl{\xi}{\phi}}{\widehat{T}_\phi\cl{\eta}{\phi}},\; \cl{\xi}{\phi}, \cl{\eta}{\phi} \in V_\phi(X,\mu), $$ and one shows that the norm defined by $ \ip{\cdot}{\cdot}_{(\phi)}$ coincides with the norm $\| \cdot\|_\phi$ defined in \eqref{eq-normphi}. One has indeed $$ \norm{(\phi)}{\cl{\xi}{\phi}}= \norm{}{\widehat{T}_\phi\cl{\xi}{\phi}}= \norm{}{T_\phi\xi}= \sup_{\norm{}{g}\leq 1 }\left| \ip{T_\phi \xi}{g} \right| = \norm{\phi}{\cl{\xi}{\phi}}. $$ Thus $V_\phi(X, \mu) $ is a pre-Hilbert space.
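As an elementary sanity check of this construction (the setting below is chosen purely for illustration and is not taken from \cite{ast-reprodpairs}), take $X=\NN$ with the counting measure and $\phi_n = e_n$, an orthonormal basis of ${\mathcal H}$. Then condition \eqref{eq-Vphi} reads
\[
\Big| \sum_{n\in\NN} \xi(n)\, \ip{e_n}{g} \Big| \leq c\, \norm{}{g}, \;\; \forall\, g\in{\mathcal H}
\quad\Longleftrightarrow\quad \xi\in\ell^2,
\]
in which case $T_\phi\xi = \sum_{n\in\NN}\xi(n)\,e_n$, ${\sf Ker}\,T_\phi=\{0\}$, and $\norm{\phi}{\cl{\xi}{\phi}} = \norm{}{T_\phi\xi}$ is just the $\ell^2$ norm of $\xi$, so that $V_\phi$ may be identified with $\ell^2$.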
With these notations, the main result of \cite{ast-reprodpairs} reads as \begin{theo}\label{theo-dual} If $(\psi,\phi)$ is a reproducing pair, the spaces $V_\phi(X, \mu)$ and $V_\psi(X, \mu)$ are both Hilbert spaces, conjugate dual of each other with respect to the sesquilinear form \begin{equation} \label{eq_sesq} \ipp{\xi}{\eta}:= \int_X \xi(x) \overline{\eta(x) } \,\mathrm{d}\mu(x). \end{equation} \end{theo} This form coincides with the inner product of $L^2(X, \mu)$ whenever the latter makes sense. This is true, in particular, for $\phi = \psi$, since then $\psi$ is a continuous frame and $V_\psi(X, \mu)$ is a closed subspace of $L^2(X, \mu)$. In this paper, we will consider reproducing pairs in the context of {\sc pip}-spaces. The motivation is the following. Let $(\psi, \phi)$ be a reproducing pair. By definition, \begin{equation}\label{eq-defS} \ip{S_{\psi,\phi}f}{g}=\int_X \ip{f}{\psi_x}\ip{\phi_x}{g}\,\mathrm{d}\mu(x) = \int_X C_\psi f (x) \; \overline{C_\phi g(x)} \,\mathrm{d}\mu(x) \end{equation} is well defined for all $f,g\in{\mathcal H}$. The r.h.s. coincides with the sesquilinear form \eqref{eq_sesq}, that is, the $L^2$ inner product, but generalized, since in general $C_\psi f ,C_\phi g$ need not belong to $L^2(X,\,\mathrm{d}\mu)$. If, following \cite{speck-bal}, we make the innocuous assumption that $\psi$ is uniformly bounded, i.e., $\sup_{x\in X}\norm{{\mathcal H}}{\psi_x } \leq c$ for some $c>0$ (often $\norm{{\mathcal H}}{\psi_x}$ = const., e.g. for wavelets or coherent states), then $(C_\psi f)(x) = \ip{f}{\psi_x} \in L^\infty(X,\,\mathrm{d} \mu)$. These two facts suggest taking ${\sf Ran}\, C_\psi$ within some {\sc pip}-space\ of measurable functions, possibly related to the $L^p$ spaces. We shall present several possibilities in that direction in Section \ref{sec-Lp}. \section{Reproducing pairs and RHS} \label{sec-RHS} We begin with the simplest example of a {\sc pip}-space, namely, a rigged Hilbert space (RHS). Let indeed ${\mathcal D}[t] \subset {\mathcal H} \subset {\mathcal D}^\times [t^\times]$ be a RHS with ${\mathcal D} [t]$ reflexive (so that $t$ and $t^\times$ coincide with the respective Mackey topologies). Given a measure space $(X, \mu)$, we denote by $\du{\cdot}{\cdot}$ the sesquilinear form expressing the duality between ${\mathcal D}$ and ${\mathcal D}^\times$. As usual, we suppose that this sesquilinear form extends the inner product of ${\mathcal D}$ (and ${\mathcal H}$), which allows one to build the triplet above. Let $x\in X \mapsto \psi_x, \,x\in X \mapsto \phi_x$ be weakly measurable functions from $X$ into ${\mathcal D}^\times$. Instead of \eqref{eq:form}, we consider the sesquilinear form \begin{equation}\label{eq:formD} \Omega^{{\mathcal D}}_{\psi, \phi}(f,g) = \int_X \du{f}{\psi_x} \du{\phi_x}{g} \,\mathrm{d}\mu(x), \; f,g \in {\mathcal D}, \end{equation} and we assume that it is jointly continuous on ${\mathcal D} \times {\mathcal D}$, that is, $\Omega^{{\mathcal D}}\in {\sf B}({\mathcal D},{\mathcal D})$ in the notation of \cite[Sec.10.2]{ait-book}.
Writing \begin{equation}\label{eq-def_altS} \du{S_{\psi,\phi}f}{g}:=\int_X \du{f}{\psi_x}\du{\phi_x}{g}\,\mathrm{d}\mu(x) , \; \forall\, f,g\in{\mathcal D}, \end{equation} we see that the operator $S_{\psi,\phi}$ belongs to ${\mathcal L}({\mathcal D}, {\mathcal D}^\times)$, the space of all continuous linear maps from ${\mathcal D}$ into ${\mathcal D}^\times$. \subsection{A Hilbertian approach} \label{subsec-hilb} We first assume that the sesquilinear form $\Omega^{{\mathcal D}}$ is well-defined and bounded on ${\mathcal D} \times {\mathcal D}$ in the topology of ${\mathcal H}$. Then $\Omega^{{\mathcal D}}_{\psi, \phi}$ extends to a bounded sesquilinear form on ${\mathcal H}\times {\mathcal H}$, denoted by the same symbol. The definition of the space ${\mathcal V}_\phi(X,\mu)$ must be modified as follows. Instead of \eqref{eq-Vphi}, we suppose that the integral below exists and defines a conjugate linear functional on ${\mathcal D}$, bounded in the topology of ${\mathcal H}$, i.e., \begin{equation}\label{eq-sesqform} \left| \int_X \xi(x) \du{\phi_x}{g} \,\mathrm{d}\mu(x) \right| \leq c \norm{}{g}, \; \forall\, g \in {\mathcal D}. \end{equation} Then the functional extends to a bounded conjugate linear functional on ${\mathcal H}$, since ${\mathcal D}$ is dense in ${\mathcal H}$. Hence, for every $\xi \in {\mathcal V}_\phi(X, \mu) $, there exists a unique vector $h_{\phi,\xi}\in {\mathcal H}$ such that $$ \int_X \xi(x) \du{\phi_x}{g} \,\mathrm{d}\mu(x) = \ip{h_{\phi,\xi}}{g}, \quad \forall g \in {\mathcal D}. $$ It is worth remarking that this interplay between the two topologies on ${\mathcal D}$ is similar to the approach of Werner \cite{werner}, who treats $L^2$ functions as distributions, thus identifying the $L^2$ space as the dual of ${\mathcal D} = {\mathcal C}_0^\infty$ with respect to the norm topology. And, of course, this is fully in the spirit of {\sc pip}-spaces. Then, we can define a linear map $T_\phi : {\mathcal V}_\phi(X, \mu) \to {\mathcal H}$ by \begin{equation}\label{eq:TphiD} T_\phi \xi = h_{\phi,\xi}\in {\mathcal H}, \;\forall\,\xi\in {\mathcal V}_\phi(X, \mu), \end{equation} in the following weak sense: $$ \ip{T_\phi \xi}{g} = \int_X \xi(x) \du{\phi_x}{g} \,\mathrm{d}\mu(x) , \; g\in{\mathcal D}, \xi\in {\mathcal V}_\phi(X, \mu). $$ In other words, we are \emph{imposing} that $\int_X \xi(x)\phi_x \,\mathrm{d}\mu(x)$ converge weakly to an element of ${\mathcal H}$. The rest proceeds as before. We consider the space $ V_\phi(X, \mu)= {\mathcal V}_\phi(X, \mu)/{{\sf Ker}\,}\,T_\phi$, with the norm $\norm{\phi}{\cl{\xi}{\phi}}=\norm{}{T_\phi \xi}$, where, for $\xi\in {\mathcal V}_\phi(X, \mu)$, we have put $\cl{\xi}{\phi}= \xi + {{\sf Ker}\,}\,T_\phi$. Then $ V_\phi(X, \mu)$ is a pre-Hilbert space for that norm. Note that $\phi$ was called in \cite{ast-reprodpairs} \emph{$\mu$-independent} whenever ${{\sf Ker}\,}\,T_\phi = \{0\}$. In that case, of course, $V_\phi = {\mathcal V}_\phi $. Assume, in addition, that the corresponding bounded operator $S_{\psi,\phi}$ is an element of $GL({\mathcal H})$.
Then $(\psi,\phi)$ is a reproducing pair and Theorem 3.14 of \cite{ast-reprodpairs} remains true, that is, \begin{theo}\label{theo-dual1} If $(\psi,\phi)$ is a reproducing pair, the spaces $V_\phi(X, \mu)$ and $V_\psi(X, \mu)$ are both Hilbert spaces, conjugate dual of each other with respect to the sesquilinear form \begin{equation}\label{eq-dual} \ip{\cl{\xi} {\phi}} {\cl{\eta} {\psi}}= \int_X \xi(x) \overline{\eta(x) } \,\mathrm{d}\mu(x), \; \forall\, {\xi} \in {\mathcal V}_\phi(X, \mu),\, {\eta} \in {\mathcal V}_\psi(X, \mu). \end{equation} \end{theo} \begin{ex} To give a trivial example, consider the Schwartz rigged Hilbert space ${\mathcal S}(\mathbb{R}) \subset L^2(\mathbb{R}, \,\mathrm{d} x) \subset {\mathcal S}^\times(\mathbb{R})$, $(X,\mu) = (\mathbb{R}, \,\mathrm{d} x)$, $\psi_x (t)= \phi_x(t)= \frac{1}{\sqrt{2\pi}} e^{ixt}$. Then $C_\phi f= \widehat f$, the Fourier transform, so that $\ip{f}{\phi(\cdot)}\in L^2(\mathbb{R}, \,\mathrm{d} x)$. In this case $$ \Omega_{\psi,\phi}(f,g)= \int_\mathbb{R} \du{f}{\psi_x}\du{\phi_x}{g}\,\mathrm{d} x= \ip{\widehat f}{\widehat g}=\ip{f}{g}, \;\forall f,g \in {\mathcal S}(\mathbb{R}), $$ and $V_\phi(\mathbb{R},\,\mathrm{d} x)=L^2(\mathbb{R}, \,\mathrm{d} x)$. \end{ex} \subsection{The general case} \label{sec-alt} In the general case, we only assume that the form $\Omega$ is jointly continuous on ${\mathcal D} \times {\mathcal D}$, with no other regularity requirement. In that case, the vector space ${\mathcal V}_\phi(X, \mu)$ must be defined differently. Let the topology of ${\mathcal D}$ be given by a directed family ${\mathfrak P}$ of seminorms. Given a weakly measurable function $\phi$, we denote again by ${\mathcal V}_\phi(X, \mu) $ the space of all measurable functions $\xi : X \to {\mathbb C}$ such that the integral $\int_X \xi(x) \du{\phi_x}{g} \,\mathrm{d}\mu(x)$ exists for every $g\in {\mathcal D}$ and defines a continuous conjugate linear functional on ${\mathcal D}$, namely, there exists $c>0$ and a seminorm ${\sf p} \in {\mathfrak P}$ such that $$ \left|\int_X \xi(x) \du{\phi_x}{g} \,\mathrm{d}\mu(x)\right| \leq c \, {\sf p}(g). $$ This in turn determines a linear map $T_\phi : {\mathcal V}_\phi(X, \mu) \to {\mathcal D}^\times$ by the following relation \begin{equation}\label{eq:Tphi3} \du{T_\phi \xi}{g} =\int_X \xi(x) \du{\phi_x}{g} \,\mathrm{d}\mu(x) , \; \forall \, \xi \in{\mathcal V}_\phi(X, \mu), g\in{\mathcal D}. \end{equation} Next, we define as before the vector space $$ V_\phi(X, \mu)= {\mathcal V}_\phi(X, \mu)/{{\sf Ker}\,}\,T_\phi, $$ and we put again $\cl{\xi}{\phi}= \xi + {{\sf Ker}\,}\,T_\phi$ for $\xi\in {\mathcal V}_\phi(X, \mu)$. Now we need to introduce a topology on $V_\phi(X, \mu)$. We proceed as follows. Let ${\mathcal M}$ be a bounded subset of ${\mathcal D}[t]$. Then we define \begin{equation} \widehat {{\sf p}}_{\mathcal M}( \cl{\xi}{\phi}) := \sup_{g\in {\mathcal M}}\left| \du{T_\phi \xi}{g} \right|.
\end{equation} That is, we are defining the topology of $V_\phi(X,\mu)$ by means of the strong dual topology $t^\times$ of ${\mathcal D}^\times$, which we recall is defined by the seminorms $$ \|F\|_{\mathcal M} = \sup_{g \in {\mathcal M}} |\ip{F}{g}|, \quad F\in {\mathcal D}^\times, $$ where ${\mathcal M}$ runs over the family of bounded subsets of ${\mathcal D}[t]$. As said above, the reflexivity of ${\mathcal D}$ entails that $ t^\times$ is equal to the Mackey topology $\tau({\mathcal D}^\times,{\mathcal D})$. More precisely, \begin{lem} The map $\widehat{T}_\phi:V_\phi(X,\mu)\rightarrow {\mathcal D}^\times$, $\widehat{T}_\phi[\xi]_\phi:= T_\phi \xi$, is a well-defined linear map of $V_\phi(X, \mu)$ into ${\mathcal D}^\times$ and, for every bounded subset ${\mathcal M}$ of ${\mathcal D}[t]$, one has $$\widehat {{\sf p}}_{\mathcal M}( \cl{\xi}{\phi}) =\|T_\phi \xi\|_{\mathcal M}, \quad \forall \xi \in {\mathcal V}_\phi(X, \mu).$$ \end{lem} The latter equality obviously implies the continuity of $T_\phi$. Next we investigate the dual $V_\phi(X,\mu)^\ast$ of the space $V_\phi(X, \mu) $, that is, the set of continuous linear functionals on $V_\phi(X, \mu) $. First, we have to choose a topology for $V_\phi(X,\mu)^\ast$. As usual we take the strong dual topology. This is defined by the family of seminorms $$ {\sf q}_{\mathcal R} (F):= \sup_{\cl{\xi}{\phi} \in {\mathcal R}}|F(\cl{\xi}{\phi})|,$$ where ${\mathcal R}$ runs over the bounded subsets of $V_\phi(X, \mu) $. \begin{theo}\label{theo23} Assume that ${\mathcal D}[t]$ is a reflexive space and let $\phi$ be a weakly measurable function. If $F$ is a continuous linear functional on $V_\phi(X, \mu) $, then there exists a unique $g\in{\mathcal D}$ such that \begin{equation}\label{repres-functional2} F(\cl{\xi}{\phi}) = \int_X \xi(x) \du{\phi_x}{g} \,\mathrm{d}\mu(x) , \; \forall\, \xi \in V_\phi(X, \mu). \end{equation} Moreover, every $g\in {\mathcal H}$ defines a continuous functional $F$ on $V_\phi(X,\mu)$ with $\norm{\phi^\ast}{F} \leq \norm{}{g}$, by \eqref{repres-functional2}. \end{theo} \begin{proof} Let $F\in V_\phi(X, \mu)^\ast$. Then, there exists a bounded subset ${\mathcal M}$ of ${\mathcal D}[t]$ such that $$ | F(\cl{\xi}{\phi}) | \leq \widehat {{\sf p}}_{\mathcal M}( \cl{\xi}{\phi})=\|T_\phi \xi\|_{\mathcal M}, \; \forall \, \xi \in {\mathcal V}_\phi(X, \mu). $$ Let ${\sf M}_\phi:= \{ T_\phi \xi : \xi \in {\mathcal V}_\phi(X, \mu) \} = {\sf Ran}\, \widehat{T}_\phi $. Then ${\sf M}_\phi$ is a vector subspace of ${\mathcal D}^\times$. Let $\widetilde F$ be the functional defined on ${\sf M}_\phi$ by $$ {\widetilde F}(T_\phi \xi) := F(\cl{\xi}{\phi}), \; \xi \in {\mathcal V}_\phi(X, \mu). $$ We notice that $\widetilde F$ is well-defined. Indeed, if $T_\phi \xi = T_\phi \xi'$, then $\xi- \xi'\in {{\sf Ker}\,\,}T_\phi$, hence $\cl{\xi}{\phi}=\cl{\xi'}{\phi}$ and $F(\cl{\xi}{\phi})=F(\cl{\xi'}{\phi})$. Thus, $\widetilde F$ is a continuous linear functional on ${\sf M}_\phi$ which can be extended (by the Hahn-Banach theorem) to a continuous linear functional on ${\mathcal D}^\times$.
Thus, by virtue of the reflexivity of ${\mathcal D}$, there exists a vector $g\in{\mathcal D}$ such that $$ {\widetilde F}(T_\phi \xi) = \du{\widehat{T}_\phi \cl{\xi}{\phi}} {g } = \du{T_\phi \xi}{g } = \int_X \xi(x) \du{\phi_x}{g} \,\mathrm{d}\mu(x) . $$ In conclusion, $$ F(\cl{\xi}{\phi}) = \int_X \xi(x) \du{\phi_x}{g} \,\mathrm{d}\mu(x),\; \forall\, \xi \in {\mathcal V}_\phi(X, \mu) . $$ Moreover, every $g\in {\mathcal D}$ obviously defines a continuous linear functional $F$ on $V_\phi(X, \mu)$ by \eqref{repres-functional2}. In addition, if ${\mathcal R}$ is a bounded subset of $V_\phi(X, \mu) $, we have \begin{align*} {\sf q}_{\mathcal R} (F) &= \sup_{\cl{\xi}{\phi} \in {\mathcal R}}|F(\cl{\xi}{\phi})|=\sup_{\cl{\xi}{\phi} \in {\mathcal R}}\left| \int_X \xi(x) \du{\phi_x}{g} \,\mathrm{d}\mu(x)\right| \\ &= \sup_{\cl{\xi}{\phi} \in {\mathcal R}}|\du{T_\phi \xi}{g}|\leq \sup_{\cl{\xi}{\phi} \in {\mathcal R}} \widehat {{\sf p}}_{\mathcal M}( \cl{\xi}{\phi}), \end{align*} for any bounded subset ${\mathcal M}$ of ${\mathcal D}$ containing $g$. \end{proof} In the present context, the analysis operator $C_\phi$ is defined in the usual way, given in \eqref{eq:csmap}. Then, particularizing the discussion of Theorem \ref{theo23} to the functional $\du{C_\phi g}{\cdot}$, one can interpret the analysis operator $C_\phi$ as a continuous operator from ${\mathcal D}$ to $V_\phi(X,\mu)^\ast$. As in the case of frames or semi-frames, one may characterize the synthesis operator in terms of the analysis operator. \begin{prop}\label{prop210} Let $\phi$ be weakly measurable; then $\widehat T_\phi\subseteq C_\phi^\ast$. If, in addition, $V_\phi(X,\mu)$ is reflexive, then $\widehat{T}_\phi^\ast=C_\phi$. Moreover, $\phi$ is $\mu$-total (i.e., ${\sf Ker}\, C_\phi = \{0\}$) if and only if ${\sf Ran}\, \widehat T_\phi$ is dense in ${\mathcal D}^\times$. \end{prop} \begin{proof} As $C_\phi:{\mathcal D}\rightarrow V_\phi(X,\mu)^\ast$ is a continuous operator, it has a continuous adjoint $C_\phi^\ast : V_\phi(X,\mu)^{\ast\ast} \to {\mathcal H}$ \cite[Sec.IV.7.4]{schaefer}. Let $C_\phi^\sharp := C_\phi^\ast \up V_\phi(X,\mu)$. Then $C_\phi^\sharp = \widehat T_\phi$ since, for every $f\in{\mathcal D}$, $[\xi]_\phi\in V_\phi(X,\mu)$, \begin{equation}\label{eq-adjoint} \du{C_\phi f}{[\xi]_\phi} =\int_X \du{f}{\phi_x} \overline{\xi(x)}\,\mathrm{d}\mu(x)=\du{f}{\widehat T_\phi [\xi]_\phi}. \end{equation} If $ V_\phi(X,\mu)$ is reflexive, we have, of course, $C_\phi^\sharp =C_\phi^\ast = \widehat T_\phi$. If $\phi$ is not $\mu$-total, then there exists $f\in {\mathcal D}$, $f\neq0$, such that $(C_\phi f)(x)=0$ for a.e. $x\in X$. Hence, $f\in ({\sf Ran}\, \widehat T_\phi)^\bot:= \{f\in {\mathcal D}: \ip{F}{f}=0, \, \forall F\in {\sf Ran}\, \widehat T_\phi\}$ by \eqref{eq-adjoint}. Conversely, if $\phi$ is $\mu$-total, as $({\sf Ran}\, \widehat T_\phi)^\bot={\sf Ker}\, C_\phi=\{0\}$, by the reflexivity of ${\mathcal D}$ and ${\mathcal D}^\times$, it follows that ${\sf Ran}\, \widehat T_\phi$ is dense in ${\mathcal D}^\times$.
\end{proof} In a way similar to what we have done above, we can define the space $V_\psi(X, \mu)$, its topology, the residue classes $\cl{\eta}{\psi}$, the operator $T_\psi$, etc., replacing $\phi$ by $\psi$. Then, $V_\psi(X, \mu)$ is a locally convex space. \begin{theo} \label{representation-of-F} Under the condition \eqref{eq:formD}, every bounded linear functional $F$ on $V_\phi(X, \mu)$, i.e., $F\in V_\phi(X, \mu)^\ast$, can be represented as \begin{equation}\label{eq-dual2} F(\cl{\xi}{\phi}) = \int_X \xi(x) \overline{\eta(x) } \,\mathrm{d}\mu(x), \; \forall\,\cl{\xi}{\phi}\in V_\phi(X, \mu), \end{equation} with $\eta\in {\mathcal V}_\psi(X, \mu)$. The residue class $\cl{\eta}{\psi}\in V_\psi(X, \mu)$ is uniquely determined. \end{theo} \begin{proof} By Theorem \ref{theo23}, we have the representation $$ F(\xi) = \int_X \xi(x) \du{\phi_x}{g} \,\mathrm{d}\mu(x) . $$ It is easily seen that $\eta(x) = \du{g}{\phi_x} \in {\mathcal V}_\psi(X, \mu)$. It remains to prove uniqueness. Suppose that $$ F(\xi) = \int_X \xi(x) \overline{\eta'(x) } \,\mathrm{d}\mu(x) . $$ Then $$ \int_X \xi(x) (\overline{\eta'(x)}- \overline{\eta(x)}) \,\mathrm{d}\mu(x) =0. $$ Now the function $\xi(x) $ is arbitrary. Hence, taking in particular for $\xi(x) $ the functions $\du{f}{\psi_x} , f\in{\mathcal D}$, we get $\cl{\eta}{\psi}=\cl{\eta'}{\psi}$. \end{proof} The lesson of the previous statements is that the map \begin{equation}\label{map-j} j : F\in V_\phi(X, \mu)^\ast \mapsto \cl{\eta}{\psi} \in V_\psi(X, \mu) \end{equation} is well-defined and conjugate linear. On the other hand, $j(F) = j(F')$ easily implies $F=F'$. Therefore $V_\phi(X, \mu)^\ast$ can be identified with a closed subspace of $\overline{V_\psi}(X, \mu):=\{\overline{\cl{\xi}{\psi}} : \xi\in {\mathcal V}_\psi(X, \mu)\}$, the conjugate space of $V_\psi(X, \mu)$. Working in the framework of Hilbert spaces, as in Section \ref{subsec-hilb}, we proved in \cite{ast-reprodpairs} that the spaces $V_\phi(X, \mu)^\ast$ and $\overline{V_\psi}(X, \mu)$ can be identified. The conclusion was that if $(\psi,\phi)$ is a reproducing pair, the spaces $V_\phi(X, \mu)$ and $V_\psi(X, \mu)$ are both Hilbert spaces, conjugate dual of each other with respect to the sesquilinear form \eqref{eq-dual}. And if $\phi$ and $\psi$ are also $\mu$-total, then the converse statement holds true. In the present situation, however, a result of this kind cannot be proved with techniques similar to those adopted in \cite{ast-reprodpairs}, which are specific to Hilbert spaces. In particular, condition (b), $S_{\psi,\phi}\in GL({\mathcal H})$, which was essential in the proof of \cite[Lemma 3.11]{ast-reprodpairs}, is now missing, and it is not clear by what regularity condition it should be replaced. However, \emph{assume} that ${\sf Ran}\, \widehat C_{\psi,\phi}[\norm{\phi}{\cdot}]=V_\phi(X,\mu)[\|\cdot\|_\phi]$ and ${\sf Ran}\, \widehat C_{\phi,\psi}[\norm{\psi}{\cdot}]=V_\psi(X,\mu)[\|\cdot\|_\psi]$, where we have defined the operator $\widehat{C}_{\phi ,\psi} : {\mathcal H} \to V_\psi(X, \mu) $ by $\widehat{C}_{\phi ,\psi} f:= [C_\phi f]_\psi $, and similarly for $\widehat C_{\psi,\phi}$. Then the proof of \cite[Theorem 3.14]{ast-reprodpairs} works and the same result may be obtained.
This is, however, a strong and non-intuitive assumption. \section{Reproducing pairs and genuine {\sc pip}-spaces} \label{sec-PIP} In this section, we will consider the case where our measurable functions take their values in a genuine {\sc pip}-space. However, for simplicity, we will restrict ourselves to a lattice of Banach spaces (LBS) or a lattice of Hilbert spaces (LHS) \cite{pip-book}. For the convenience of the reader, we have summarized in the Appendix the basic notions concerning LBSs and LHSs. Let $(X, \mu)$ be a locally compact, $\sigma$-compact measure space. Let $V_J = \{V_p, p\in J\}$ be a LBS or a LHS of measurable functions with the property \begin{equation}\label{defVp} \xi \in V_p, \;\eta \in V_{\overline p} \;\Longrightarrow\; \xi\overline{\eta} \in L^1 (X, \mu)\qquad \mathrm{and} \qquad \left| \int_X \xi(x) \overline{\eta (x)} \,\mathrm{d}\mu (x)\right| \leq \|\xi\|_p \, \|\eta\|_{\overline p}. \end{equation} Thus the central Hilbert space is ${\mathcal H}:=V_o= L^2 (X, \mu)$ and the spaces $V_p, V_{\overline p}$ are dual of each other with respect to the $L^2$ inner product. The partial inner product, which extends that of $L^2 (X, \mu)$, is denoted again by $\ip{\cdot}{\cdot}$. As usual we put $V= \sum_{p\in J} V_p$ and $V^\#= \bigcap_{p\in J}V_p.$ Thus $\psi : X \to V$ means that $\psi : X \to V_p$ for some $ p\in J$. \begin{ex} A typical example is the lattice generated by the Lebesgue spaces $L^p(\mathbb{R}, \,\mathrm{d} x), \,1\leq p \leq \infty $, with $\frac1p +\frac{1}{\overline{p}} = 1$ \cite{pip-book}. We shall discuss it in detail in Section \ref{sec-Lp}. \end{ex} We will envisage two approaches, depending on whether the functions $\psi_x$ themselves belong to $V$ or rather the scalar functions $C_\psi f$ do. \subsection{Vector-valued measurable functions $\psi_x$} This approach is the exact generalization of the one used in the RHS case. Let $x\in X \mapsto \psi_x, \,x\in X \mapsto \phi_x$ be weakly measurable functions from $X$ into $V$, where the latter is equipped with the weak topology $\sigma(V,V^{\#})$. More precisely, assume that $\psi : X \to V_p$ for some $p\in J$ and $\phi: X \to V_q$ for some $q\in J$, both weakly measurable. In that case, the analysis of Section \ref{subsec-hilb} may be repeated \emph{verbatim}, simply replacing ${\mathcal D}$ by $V^\#$, thus defining reproducing pairs. The problem with this approach is that, in fact, it does not exploit the {\sc pip}-space\ structure, only the RHS $V^\# \subset {\mathcal H} \subset V$\,! Clearly, this approach yields no benefit, so we turn to a different strategy. \subsection{Scalar measurable functions $C_\psi f$} Let $\psi, \phi$ be weakly measurable functions from $X$ into ${\mathcal H}$. In view of \eqref{eq-defS}, \eqref{defVp} and the definition of $V$, we assume that the following condition holds: \begin{itemize} \item[{\sf (p)}] $ \exists \, p\in J $ such that $ C_\psi f =\ip{f}{\psi_{\cdot}}\in V_p$ and $ C_\phi g = \ip{g}{\phi_{\cdot}}\in V_{\overline p}, \;\forall\, f,g \in {\mathcal H} $.
\end{itemize} We recall that $V_{\overline p}$ is the conjugate dual of $V_{ p}$. In this case, $$ \Omega_{\psi,\phi}(f,g): = \int_X \ip{f}{\psi_x}\ip{\phi_x}{g}\,\mathrm{d}\mu(x),\; f,g\in{\mathcal H}, $$ defines a sesquilinear form on ${\mathcal H}\times{\mathcal H}$ and one has \begin{equation}\label{eq-formpp} |\Omega_{\psi,\phi}(f,g) | \leq \norm{p}{C_\psi f} \, \norm{\overline p}{C_\phi g}, \;\forall\, f,g\in{\mathcal H}. \end{equation} If $\Omega_{\psi,\phi}$ is bounded as a form on ${\mathcal H}\times {\mathcal H}$ (this is not automatic, see Corollary \ref{cor55}), there exists a bounded operator $S_{\psi,\phi}$ in ${\mathcal H}$ such that \begin{equation}\label{eqn_*} \int_X \ip{f}{\psi_x}\ip{\phi_x}{g}\,\mathrm{d}\mu(x)= \ip{S_{\psi,\phi}f}{g}, \;\forall \,f,g \in {\mathcal H}. \end{equation} Then $(\psi,\phi)$ is a \emph{reproducing pair} if $S_{\psi,\phi}\in GL( {\mathcal H})$. Let us suppose that the spaces $V_p$ have the following property: \begin{itemize} \item[{\sf (k)}] If $\xi_n \to \xi$ in $V_p$, then, for every compact subset $K \subset X$, there exists a subsequence $\{\xi^K_n\}$ of $\{\xi_n\}$ which converges to $\xi$ almost everywhere in $K$. \end{itemize} We note that condition {\sf (k)} is satisfied by $L^p$-spaces \cite{rudin}. As seen before, $C_\psi: {\mathcal H}\to V$, in general. This means, given $f\in {\mathcal H}$, there exists $p \in J$ such that $C_\psi f = \ip{f}{\psi_{\cdot}}\in V_p$. We define $$ D_r(C_\psi)= \{f \in {\mathcal H}:\, C_\psi f \in V_r\}, \; r\in J. $$ In particular, $D_r(C_\psi)= {\mathcal H}$ means $C_\psi ({\mathcal H}) \subset V_r$. \begin{prop} Assume that {\sf (k)} holds. Then $C_\psi: D_r(C_\psi) \to V_r$ is a closed linear map. \end{prop} \begin{proof} Let $f_n \to f$ in ${\mathcal H}$ and $\{C_\psi f_n\}$ be Cauchy in $V_r$. Since $V_r$ is complete, there exists $\xi \in V_r$ such that $\|C_\psi f_n-\xi\|_r\to 0$. By {\sf (k)}, for every compact subset $K \subset X$, there exists a subsequence $\{f_n^K\}$ of $\{f_n\}$ such that $(C_\psi f_n^K)(x) \to \xi(x)$ a.e. in $K$. On the other hand, since $f_n \to f$ in ${\mathcal H}$, we get $$ \ip{f_n}{\psi_x} \to \ip{f}{\psi_x}, \quad \forall x \in X , $$ and the same holds true, of course, for $\{f_n^K\}$. From this we conclude that $\xi(x)= \ip{f}{\psi_x}$ almost everywhere. Thus, $f \in D_r(C_\psi)$ and $\xi= C_\psi f$. \end{proof} By a simple application of the closed graph theorem we obtain \begin{cor}\label{cor54} Assume that {\sf (k)} holds. If for some $r\in J $, $C_\psi ({\mathcal H}) \subset V_r $, then $C_\psi: {\mathcal H}\to V_r$ is continuous. \end{cor} Combining Corollary \ref{cor54} with \eqref{eq-formpp}, we get \begin{cor}\label{cor55} Assume that {\sf (k)} holds. If $C_\psi ({\mathcal H}) \subset V_p $ and $C_\phi ({\mathcal H}) \subset V_{\overline p}$, the form $\Omega$ is bounded on ${\mathcal H}\times {\mathcal H}$, that is, $|\Omega_{\psi,\phi}(f,g) | \leq c\norm{}{ f} \, \norm{}{ g}$. \end{cor} Hence, if condition {\sf (k)} holds, $C_\psi ({\mathcal H}) \subset V_r $ implies that $C_\psi: {\mathcal H}\to V_r$ is continuous.
If we don't know whether the condition holds, we will have to assume explicitly that $C_\psi: {\mathcal H}\to V_r$ is continuous. If $C_\psi: {\mathcal H}\to V_r$ continuously, then $C_\psi^\ast:V_{\overline r} \to {\mathcal H}$ exists and it is continuous. By definition, if $\xi \in V_{\overline r} $, \begin{equation}\label{eq:cont} \ip{C_\psi f}{\xi}= \int_X \ip{f}{\psi_x} \overline{\xi(x)}\,\mathrm{d}\mu(x),\;\forall\,f\in{\mathcal H}. \end{equation} The relation \eqref{eq:cont} then implies that $$ \int_X \ip{f}{\psi_x} \overline{\xi(x)}\,\mathrm{d}\mu(x)= \ip{f}{\int_X \psi_x {\xi(x)}\,\mathrm{d}\mu(x)},\;\forall\,f\in{\mathcal H}. $$ Thus, $$ C^*_\psi\xi = \int_X \psi_x {\xi(x)}\,\mathrm{d}\mu(x). $$ Of course, what we have said about $C_\psi$ holds in the very same way for $C_\phi$. Assume now that for some $p \in J$, $C_\psi: {\mathcal H}\to V_p$ and $C_\phi: {\mathcal H}\to V_{\overline p}$ continuously. Then, $C_\phi^*:V_p \to {\mathcal H}$, so that $C_\phi^*C_\psi$ is a well-defined bounded operator in ${\mathcal H}$. As before, we have $$ C_\phi^* \eta = \int_X \eta(x) \phi_x \,\mathrm{d}\mu(x), \; \forall \, \eta\in V_p. $$ Hence, $$ C_\phi^*C_\psi f = \int_X \ip{f}{\psi_x}\phi_x\,\mathrm{d}\mu(x) = S_{\psi, \phi} f, \quad \forall\, f\in {\mathcal H}, $$ the last equality following also from \eqref{eqn_*} and Corollary \ref{cor55}. Of course, this does not yet imply that $S_{\psi, \phi}\in GL( {\mathcal H})$, thus we don't know whether $(\psi, \phi)$ is a reproducing pair. Let us now return to the pre-Hilbert space ${\mathcal V}_\phi(X, \mu) $. First, the defining relation (3.3) of \cite{ast-reprodpairs} must be written as $$ \xi \in {\mathcal V}_\phi(X, \mu) \; \Leftrightarrow \; \left| \int_X \xi(x) \overline{(C_\phi g )(x)} \,\mathrm{d}\mu(x) \right| \leq c \norm{}{g}, \; \forall\, g \in {\mathcal H}. $$ Since $C_\phi :{\mathcal H} \to V_{\overline p}$, the integral is well defined for all $\xi \in V_p$. This means that the inner product on the r.h.s. is in fact the partial inner product of $V$, which coincides with the $L^2$ inner product whenever the latter makes sense. We may rewrite the r.h.s. as $$ |\ip{\xi}{C_\phi g}| \leq c \norm{}{g}, \quad \forall\, g \in {\mathcal H},\;\xi \in V_p, $$ where $\ip{\cdot}{\cdot}$ denotes the partial inner product. Next, by \eqref{defVp}, one has, for $\xi \in V_p, g\in{\mathcal H}$, $$ |\ip{\xi}{C_\phi g}| \leq \norm{p}{\xi} \, \norm{\overline p}{C_\phi g} \leq c \norm{p}{\xi} \norm{ }{g}, $$ where the last inequality follows from Corollary \ref{cor54} or the assumption of continuity of $C_\phi$. Hence indeed $\xi \in {\mathcal V}_\phi(X, \mu)$, so that $V_p \subset {\mathcal V}_\phi(X, \mu)$. As for the adjoint operator, we have $C_\phi^* : V_p \to {\mathcal H}$. Then we may write, for $\xi\in V_p, g\in{\mathcal H}$, $\ip{\xi}{C_\phi g} = \ip{ T_\phi\xi}{g}$, thus $C_\phi^*$ is the restriction from ${\mathcal V}_\phi(X, \mu)$ to $V_p$ of the operator $T_\phi: {\mathcal V}_\phi \to {\mathcal H}$ introduced in Section \ref{sec-prel}, which reads now as \begin{equation}\label{eq-Tphi-p} \ip{T_\phi \xi}{g} =\int_X \xi(x) \ip{\phi_x}{g} \,\mathrm{d}\mu(x) , \; \forall \, \xi \in V_p, g\in{\mathcal H}. \end{equation} Thus $C_\phi^* \subset T_\phi$. Next, the construction proceeds as in Section \ref{sec-RHS}.
The space $ V_\phi(X, \mu)= {\mathcal V}_\phi(X, \mu)/{{\sf Ker}\,}\,T_\phi$, with the norm $\norm{\phi}{\cl{\xi}{\phi}}=\norm{}{T_\phi \xi}$, is a pre-Hilbert space. Then Theorem 3.14 and the other results from Section 3 of \cite{ast-reprodpairs} remain true. In particular, we have: \begin{theo}\label{theo-dual2} If $(\psi,\phi)$ is a reproducing pair, the spaces $V_\phi(X, \mu)$ and $V_\psi(X, \mu)$ are both Hilbert spaces, conjugate dual of each other with respect to the sesquilinear form \eqref{eq_sesq}, namely, $$ \ipp{\xi}{\eta}:= \int_X \xi(x) \overline{\eta(x) } \,\mathrm{d}\mu(x). $$ \end{theo} Note that the form \eqref{eq_sesq} coincides with the inner product of $L^2(X, \mu)$ whenever the latter makes sense. Let $(\psi,\phi)$ be a reproducing pair. Assume again that $C_\phi : {\mathcal H} \to V_{\overline{ p}}$ continuously, which we may write $\widehat{C}_{\phi ,\psi} : {\mathcal H} \to V_{\overline{ p}}/{{\sf Ker}\,}\,T_\psi$, where $\widehat{C}_{\phi ,\psi} : {\mathcal H} \to V_\psi(X, \mu) $ is the operator defined by $\widehat{C}_{\phi ,\psi} f:= [C_\phi f]_\psi $, already introduced at the end of Section \ref{sec-alt}. In addition, by \cite[Theorem 3.13]{ast-reprodpairs}, one has ${\sf Ran}\, \widehat C_{\psi,\phi}[\norm{\phi}{\cdot}]=V_\phi(X,\mu)[\|\cdot\|_\phi]$ and ${\sf Ran}\, \widehat C_{\phi,\psi}[\norm{\psi}{\cdot}]=V_\psi(X,\mu)[\|\cdot\|_\psi]$. Putting everything together, we get \begin{cor}\label{cor56} Let $(\psi, \phi)$ be a reproducing pair. Then, if $C_\psi : {\mathcal H} \to V_{ p}$ and $C_\phi : {\mathcal H} \to V_{\overline{ p}}$ continuously, one has \begin{align} \widehat{C}_{\phi ,\psi} : {\mathcal H} \to V_{\overline{ p}}/{{\sf Ker}\,}\,T_\psi = V_\psi(X, \mu) \simeq {\overline{V_\phi}}(X, \mu)^* , \label{eq-Cphipsi} \\ \widehat{C}_{\psi ,\phi} : {\mathcal H} \to V_{ p}/{{\sf Ker}\,}\,T_\phi = V_\phi(X, \mu) \simeq {\overline{V_\psi}}(X, \mu)^* . \label{eq-Cpsiphi} \end{align} In these relations, the equality sign means an isomorphism of vector spaces, whereas $\simeq$ denotes an isomorphism of Hilbert spaces. \end{cor} \begin{proof} On one hand, we have ${\sf Ran}\, \widehat C_{\phi,\psi} =V_\psi(X,\mu) $. On the other hand, under the assumption $C_\phi( {\mathcal H})\subset V_{\overline p}$, one has $V_{\overline p} \subset {\mathcal V}_\psi(X, \mu)$, hence $V_{\overline p}/{{\sf Ker}\,}\,T_\psi=\{\xi +{\sf Ker}\, T_\psi, \; \xi \in V_{\overline p}\} \subset V_\psi(X, \mu)$. Thus we get $V_\psi(X, \mu) = V_{\overline p}/{{\sf Ker}\,}\,T_\psi$ as vector spaces. Similarly $V_\phi(X, \mu) = V_{p}/{{\sf Ker}\,}\,T_\phi.$ \end{proof} Notice that, in Condition {\sf (p)}, the index $p$ cannot depend on $f,g$. We need some uniformity, in the form $C_\psi( {\mathcal H})\subset V_p$ and $C_\phi( {\mathcal H})\subset V_{\overline p}$.
This is fully in line with the philosophy of {\sc pip}-spaces: the building blocks are the (assaying) subspaces $V_p$, not individual vectors.

\section{The case of a Hilbert triplet or a Hilbert scale}
\label{sec-Hchain}

\subsection{The general construction}
\label{subsec-genconst}

We have derived in the previous section the relations $V_p \subset {\mathcal V}_\phi(X, \mu), V_{\overline{ p}}\subset {\mathcal V}_\psi(X, \mu)$, and their equivalent forms \eqref{eq-Cphipsi}, \eqref{eq-Cpsiphi}. Then, since $V_\psi(X, \mu) $ and $V_\phi(X, \mu) $ are both Hilbert spaces, it seems natural to take for $V_p, V_{\overline{ p}}$ Hilbert spaces as well, that is, to take for $V$ a LHS. The simplest case is then a Hilbert chain, for instance the scale \eqref{eq:scaleHn}, $\{{\mathcal H}_k, k\in\mathbb{Z}\}$, built on the powers of a self-adjoint operator $A>I$. This situation is quite interesting, since in that case one may get results about spectral properties of symmetric operators (in the sense of {\sc pip}-space\ operators) \cite{at-pipops}.

Thus, let $(\psi,\phi)$ be a reproducing pair. For simplicity, we assume that $S_{\psi,\phi}= I$, that is, $\psi,\phi$ are dual to each other. If $\psi$ and $\phi$ are both frames, there is nothing to say, since then $C_\psi( {\mathcal H}), C_\phi( {\mathcal H})\subset L^2(X, \mu)={\mathcal H}_o$, so that there is no need for a Hilbert scale. Thus we assume that $\psi$ is an upper semi-frame and $\phi$ is a lower semi-frame, dual to each other. It follows that $C_\psi( {\mathcal H})\subset L^2(X, \mu)$. Hence Condition {\sf (p)} becomes: there is an index $k\geq 1$ such that $C_\psi : {\mathcal H} \to {\mathcal H}_k$ and $C_\phi : {\mathcal H} \to {\mathcal H}_{\overline k}$ continuously; thus $V_{p}\equiv {\mathcal H}_k$ and $V_{\overline p}\equiv {\mathcal H}_{\overline k}$. This means we are working in the Hilbert triplet
\begin{equation}\label{eq-triplet}
V_{p}\equiv {\mathcal H}_k \subset {\mathcal H}_o = L^2(X, \mu) \subset {\mathcal H}_{\overline k}\equiv V_{\overline p}\,.
\end{equation}
Next, according to Corollary \ref{cor56}, we have $V_\psi(X, \mu) = {\mathcal H}_{\overline k}/{\sf Ker}\,T_\psi$ and $V_\phi(X, \mu) = {\mathcal H}_{k}/{\sf Ker}\,T_\phi$, as vector spaces.

In addition, since $\phi$ is a lower semi-frame, \cite[Lemma 2.1]{ant-bal-semiframes1} tells us that $C_\phi$ has closed range in $L^2(X, \mu)$ and is injective. However, its domain
$$
D(C_{\phi}):= \{f\in{\mathcal H} : \int_{X} |\ip{f}{\phi_{x}}| ^2 \,\mathrm{d} \mu(x) <\infty\}
$$
need not be dense; it could even be $\{0\}$. Thus $C_\phi$ maps its domain $D(C_{\phi})$ onto a closed subspace of $L^2(X, \mu)$, possibly trivial, and the whole of ${\mathcal H}$ into the larger space ${\mathcal H}_{\overline k}$.

\subsection{Examples}
\label{subsec-ex}

As for concrete examples of such Hilbert scales, we might mention two. First, the Sobolev spaces $H^k(\mathbb{R}), \, k\in \mathbb{Z}$, in ${\mathcal H}_0 = L^2(\mathbb{R}, dx)$, which constitute the scale generated by the powers of the self-adjoint operator $A^{1/2}$, where $A := 1 -\frac{\,\mathrm{d}^2}{\,\mathrm{d} x^2}$.
The other one corresponds to the quantum harmonic oscillator, with Hamiltonian $A_\mathrm{osc} := x^2-\frac{\,\mathrm{d}^2}{\,\mathrm{d} x^2}$. The spectrum of $A_\mathrm{osc}$ is $\{2n+1, n= 0,1,2,\ldots\}$ and it is diagonalized in the basis of Hermite functions. It follows that the natural embedding of ${\mathcal H}_k$ into ${\mathcal H}_{k-1}$, which is unitarily equivalent to $A_\mathrm{osc}^{-1}$, is a Hilbert-Schmidt operator. Therefore, the end space of the scale, ${\mathcal D}^{\infty}(A_\mathrm{osc}):=\bigcap_{k} {\mathcal H}_k$, which is simply Schwartz' space $\mathcal S$ of $C^\infty$ functions of fast decrease, is a nuclear space.

Actually one may give an explicit example, using a Sobolev-type scale. Let ${\mathcal H}_K$ be a reproducing kernel Hilbert space (RKHS) of (nice) functions on a measure space $(X, \mu)$, with kernel function $k_x, x\in X$, that is, $f(x)=\ip{f}{k_x}_K,\, \forall f\in{\mathcal H}_{K}$. The corresponding reproducing kernel is $K(x,y)=k_y(x)=\ip{k_y}{k_x}_K$. Choose a weight function $m(x) >1$, the analog of the weight $(1+|x|^2)$ considered in the Sobolev case. Define the Hilbert scale ${\mathcal H}_k, \, k\in \mathbb{Z}$, determined by the multiplication operator $Af(x) = m(x) f(x), \, \forall x\in X$. Hence, for each $l \geq 1$,
$$
{\mathcal H}_l \subset{\mathcal H}_0 \equiv {\mathcal H}_K \subset {\mathcal H}_{\overline l}\,.
$$
Then, for some $n \geq 1$, define the measurable functions $\phi_x = k_x m^n(x), \psi_x = k_x m^{-n}(x)$, so that $C_\psi : {\mathcal H}_K \to {\mathcal H}_n, \, C_\phi : {\mathcal H}_K \to {\mathcal H}_{\overline n}$ continuously, and they are dual to each other. Thus $(\psi, \phi)$ is a reproducing pair, with $\psi$ an upper semi-frame and $\phi$ a lower semi-frame.

In this case, one can compute the operators $T_\psi, T_\phi$ explicitly. The definition \eqref{eq-Tphi-p} reads as
$$
\ip{T_\phi \xi}{g}_K =\int_X \xi(x) \ip{\phi_x}{g}_K \,\mathrm{d}\mu(x) , \; \forall \, \xi \in {\mathcal H}_n, g\in{\mathcal H}_K.
$$
Now $ \ip{\phi_x}{g}_K = \ip{k_x m^n(x)}{g}_K = \ip{k_x}{g \, m^n(x)}_K = \overline{g(x)}\,m^n(x) \in {\mathcal H}_{\overline n}$. Thus
$$
\ip{T_\phi \xi}{g}_K = \int_X \xi(x) \, \overline{g(x)}\,m^n(x) \,\mathrm{d}\mu,
$$
that is, $(T_\phi \xi )(x) = \xi(x)\, m^n(x) $ or $T_\phi \xi = \xi \,m^n$. Now, since the weight $m(x)>1$ is invertible, $\overline{g}\,m^n$ runs over the whole of ${\mathcal H}_{\overline n}$ when $g$ runs over ${\mathcal H}_K$. Hence $\xi\in {\sf Ker}\,T_\phi \subset {\mathcal H}_n$ means that $\ip{T_\phi \xi}{g}_K =0, \, \forall g\in {\mathcal H}_K$, which implies $\xi = 0$, since the duality between ${\mathcal H}_n$ and ${\mathcal H}_{\overline n}$ is separating. The same reasoning yields ${\sf Ker}\,T_\psi = \{0\}$. Therefore $V_\phi(X, \mu) = {\mathcal H}_n$ and $V_\psi(X, \mu) = {\mathcal H}_{\overline n}$.

A more general situation may be derived from the discrete example of Section 6.1.3 of \cite{ast-reprodpairs}. Take a sequence of weights $m:=\{|m_n|\}_{n\in\NN}\in c_0, m_n\neq 0,$ and consider the space $\ell^2_m$ with norm given by $\norm{\ell^2_m}{\xi}^2:=\sum_{n\in\NN}|m_n \xi_n |^2$. Then we have the following triplet replacing \eqref{eq-triplet}:
\begin{equation}\label{eq-triplet2}
\ell^2_{1/m} \subset \ell^2 \subset \ell^2_m.
\end{equation}
Next, for each $n\in\NN$, define $\psi_n = m_n \theta_n$, where $\theta$ is a frame or an orthonormal basis in $\ell^2$. Then $\psi$ is an upper semi-frame.
Moreover, $\phi:=\{(1/\overline{m_n})\theta_n\}_{n\in\NN} $ is a lower semi-frame, dual to $\psi$, thus $(\psi,\phi)$ is a reproducing pair. Hence, by \cite[Theorem 3.13]{ast-reprodpairs}, $V_\psi \simeq{\sf Ran}\, C_\phi = M_{1/m} (V_\theta(\NN)) =\ell^2_m$ and $V_\phi \simeq{\sf Ran}\, C_\psi = M_{m} (V_\theta(\NN)) =\ell^2_{1/m}$ (here we take for granted that ${\sf Ker}\,T_\psi = {\sf Ker}\,T_\phi = \{0\}$).

For making contact with the situation of \eqref{eq-triplet}, consider in $\ell^2 $ the diagonal operator $A:= \mathrm{diag}[n], n\in \NN$, that is, $(A\xi)_n = n \,\xi_n, n\in \NN$, which is obviously self-adjoint and larger than 1. Then $ {\mathcal H}_k = D(A^k)$, with norm $\norm{k}{\xi}= \norm{}{A^k\xi}$, so that ${\mathcal H}_k \equiv \ell^2_{r^{(k)}}$, where $(r^{(k)})_n = n^k$ (note that $1/{r^{(k)}}\in c_0$). Hence we have
\begin{equation}\label{eq-triplet3}
{\mathcal H}_k= \ell^2_{r^{(k)}} \subset {\mathcal H}_o = \ell^2 \subset {\mathcal H}_{\overline k}=\ell^2_{1/{r^{(k)}}}\,,
\end{equation}
where $(1/r^{(k)})_n = n^{-k}$. In addition, as in the continuous case discussed above, the end space of the scale, ${\mathcal D}^{\infty}(A):=\bigcap_{k} {\mathcal H}_k$, is simply Schwartz' space $s$ of fast decreasing sequences, with dual $ {\mathcal D}_{\overline \infty}(A):=\bigcup_{k} {\mathcal H}_{\overline k} = s'$, the space of slowly increasing sequences. Here too, this construction shows that the space $s$ is nuclear, since every embedding ${\mathcal H}_{k+1} \to {\mathcal H}_k$, being unitarily equivalent to $A^{-1}$, is a Hilbert-Schmidt operator. However, the construction described above yields a much more general family of examples, since the weight sequences $m$ are not ordered.

\section{The case of $L^p$ spaces}
\label{sec-Lp}

Following the suggestion made at the end of Section \ref{sec-prel}, we present now several possibilities for locating ${\sf Ran}\, C_\psi$ in the context of the Lebesgue spaces $L^p(\mathbb{R},\,\mathrm{d} x)$. As is well known, these spaces do not form a chain, since no two of them are comparable. We have only
$$
L^p \cap L^q \subset L^s, \, \mbox {for all } s \mbox{ such that } \; p<s<q.
$$
Take the lattice ${\mathcal J}$ generated by ${\mathcal I}= \{ L^p(\mathbb{R},\,\mathrm{d} x), 1 \leq p \leq \infty\}$, with lattice operations \cite[Sec.4.1.2]{pip-book}:
\begin{itemize}
\item $L^p \wedge L^q = L^p \cap L^q$ is a Banach space for the projective norm $\| f \|_{p \wedge q} = \| f \|_p + \| f \|_q$;
\item $L^p \vee L^q = L^p + L^q$ is a Banach space for the inductive norm \\
\hspace*{2cm}$\| f \|_{p \vee q} = \inf_{f=g+h} \left\{\| g \|_p + \| h \|_q; g \in L^p, \, h \in L^q\right\}$;
\item for $1<p,q<\infty$, both spaces $L^p \wedge L^q$ and $L^p \vee L^q$ are reflexive and $(L^p \wedge L^q)^\times = L^{\overline p} \vee L^{\overline q}$.
\end{itemize}
Moreover, no additional spaces are obtained by iterating the lattice operations to any finite order. Thus we obtain an involutive lattice and a LBS, denoted by $V_{_{\scriptstyle\rm J}}$. It is convenient to introduce a unified notation:
$$
L^{(p,q)} = \left\{ \begin{array}{ll} L^p \wedge L^q = L^p \cap L^q, & \mbox{ if } \; p \geq q, \\ L^p\vee L^q =L^p + L^q, & \mbox{ if } \; p \leq q. \end{array} \right.
$$
Following \cite[Sec.4.1.2]{pip-book}, we represent the space $ L^{(p,q)}$ by the point $(1/p,1/q)$ of the unit square ${\rm J} = [0,1]\times [0,1]$. In this representation, the spaces $L^p$ lie on the main diagonal, intersections $L^p \cap L^q$ above it and sums $L^p + L^q$ below it, and the duality $[L^{(p,q)}]^\times = L^{(\overline{p},\overline{q})}$ corresponds to symmetry with respect to $L^2$. Hence, $L^{(p,q)}\subset L^{(p',q')}$ if $(1/p,1/q)$ is on the left of and/or above $(1/p',1/q')$. The extreme spaces are \footnote{The space $L^1 + L^\infty$ has been considered by Gould \cite{gould}.}
$$
V_{_{\scriptstyle\rm J}}^{\#} = L^{(\infty,1)} = L^{\infty} \cap L^{1}, \quad \mbox{ and } \quad V_{_{\scriptstyle\rm J}} = L^{(1,\infty)} = L^{1} + L^{\infty}.
$$
For a full picture, see \cite[Fig.4.1]{pip-book}.

There are three possibilities for using the $L^p$ lattice for controlling reproducing pairs.

(1) Exploit the \emph{full lattice ${\mathcal J}$}, that is, find $(p,q)$ such that, $\forall f,g\in{\mathcal H}$, $ C_\psi f \, \# \, C_\phi g $ in the {\sc pip}-space\ $V_{_{\scriptstyle\rm J}} $, that is, $ C_\psi f \in L^{(p,q)}$ and $C_\phi g \in L^{(\overline{p},\overline{q})}$.

(2) Select in $V_{_{\scriptstyle\rm J}}$ a self-dual \emph{Banach chain} $V_{_{\scriptstyle\rm I}}$, centered around $L^2$, symbolically
\begin{equation}\label{eq:chain}
\ldots L^{(s)} \subset \ldots \subset L^2 \subset \ldots \subset L^{(\overline s)} \ldots ,
\end{equation}
such that $ C_\psi f \in L^{(s)}$ and $C_\phi g \in L^{(\overline{s})}$ (or vice-versa). Here are three examples of such Banach chains.

\begin{figure}[t]
\centering
% [The hand-drawn picture environment of the original is omitted here. It shows the unit square representation of the spaces $L^{(p,q)}$, with $L^2$ at the centre, the diagonal, horizontal and vertical chains, and the positions of the pairs $L^{(s)},L^{(\overline s)}$ and $L^{(t)},L^{(\overline t)}$.]
\caption{(i) The pair $L^{(s)},L^{(\overline s)}$ for $s$ in the second quadrant; (ii) The pair $L^{(t)},L^{(\overline t)}$ for $t$ in the first quadrant.}
\label{fig1}
\end{figure}

\begin{itemize}
\item The \emph{diagonal} chain, $q= \overline{p}$:
$$
\hspace*{-8mm} L^{\infty} \cap L^{1}\subset \ldots \subset L^{\overline q} \cap L^q \subset \ldots \subset L^2 \subset \ldots \subset L^{q}+ L^{\overline q} = (L^{\overline q} \cap L^q)^\times \subset \ldots \subset L^{1} + L^{\infty}.
$$
\item The horizontal chain, $q=2$:
$$
\hspace*{-8mm} L^{\infty} \cap L^{2}\subset \ldots \subset L^2 \subset \ldots \subset L^{1} + L^{2}.
$$
\item The vertical chain, $p=2$:
$$
L^{2} \cap L^{1}\subset \ldots \subset L^2 \subset \ldots \subset L^{2} + L^{\infty}.
$$
\end{itemize}
All three chains are presented in Figure \ref{fig1}. In this case, the full chain belongs to the second and fourth quadrants (top left and bottom right). A typical point is then $s=(p,q)$ with $2\leq p \leq \infty, 1\leq q \leq 2$, so that one has the situation depicted in \eqref{eq:chain}, that is, the spaces $L^{(s)}, L^{(\overline s)}$ to which $C_\psi f$, resp. $C_\phi g$, belong are necessarily comparable to each other and to $L^2$. In particular, one of them is necessarily contained in $L^2$. Note that the extreme spaces of that type are $L^2, L^{\infty} \cap L^{2}, L^{\infty} \cap L^{1}$ and $ L^{2} \cap L^{1}$ (see Figure \ref{fig1}).

(3) Choose a dual pair in the first and third quadrants (top right, bottom left). A typical point is then $t=(p',q')$, with $1< p', q' < 2$, so that the spaces $L^{(t)}, L^{(\overline t)}$ are never comparable to each other, nor to $L^2$.

Let us now add the uniform boundedness condition mentioned at the end of Section \ref{sec-prel}, $\sup_{x\in X}\norm{{\mathcal H}}{\psi_x } \leq c$ and $\sup_{x\in X}\norm{{\mathcal H}}{\phi_x } \leq c'$ for some $c, c'>0$.
Then, by the Cauchy--Schwarz inequality, $(C_\psi f)(x) = \ip{f}{\psi_x} \in L^\infty(X,\,\mathrm{d} \mu)$ and $(C_\phi f)(x) = \ip{f}{\phi_x} \in L^\infty(X,\,\mathrm{d} \mu)$. Therefore, the third case reduces to the second one, since we have now (in the situation of Figure \ref{fig1})
$$
\hspace*{-8mm} L^{\infty} \cap L^{(t)}\subset L^{\infty} \cap L^2 \subset L^{\infty} \cap L^{(\overline t)}.
$$
Following the pattern of Hilbert scales, we choose a (Gel'fand) triplet of Banach spaces. One could have, for instance, a triplet of reflexive Banach spaces such as
\begin{equation}\label{eq:triplet(s)}
L^{(s)} \subset \ldots \subset L^2 \subset \ldots \subset L^{(\overline s)},
\end{equation}
corresponding to a point $s$ inside the second quadrant, as shown in Figure \ref{fig1}. In this case, according to \eqref{eq-Cphipsi} and \eqref{eq-Cpsiphi}, $V_\psi = L^{(\overline s)}/{\sf Ker}\, T_\psi $ and $ V_\phi = L^{(s)}/{\sf Ker}\, T_\phi$.

On the contrary, if we choose a point $t$ in the first quadrant, case (3) above, it seems that no triplet arises. However, if $(\psi, \phi)$ is a nontrivial reproducing pair, with $S_{\phi, \psi}=I$, that is, $\psi, \phi$ are dual to each other, one of them, say $\psi$, is an upper semi-frame and then necessarily $\phi$ is a lower semi-frame \cite[Prop.2.6]{ant-bal-semiframes1}. Therefore $C_\psi( {\mathcal H})\subset L^2(X, \mu)$, that is, case (3) cannot be realized.

\indent Inserting the boundedness condition, we get a triplet where the extreme spaces are no longer reflexive, such as
$$
L^{\infty} \cap L^{(t)}\subset L^{\infty} \cap L^2 \subset L^{\infty} \cap L^{(\overline t)},
$$
and then $V_\psi = (L^{\infty} \cap L^{(t)})/{\sf Ker}\, T_\psi $ and $ V_\phi = (L^{\infty} \cap L^{(\overline t)})/{\sf Ker}\, T_\phi$.

In conclusion, the only acceptable solution is the triplet \eqref{eq:triplet(s)}, with $s$ strictly inside the second quadrant, that is, $s=(p,q)$ with $2\leq p <\infty, 1< q \leq 2$. A word of explanation is in order here, concerning the relations $V_\psi = L^{(\overline s)}/{\sf Ker}\, T_\psi $ and $ V_\phi = L^{(s)}/{\sf Ker}\, T_\phi$. On the r.h.s., $L^{(s)}$ and $ L^{(\overline s)}$ are reflexive Banach spaces, with their usual norm, and so are their quotients by ${\sf Ker}\, T_\psi $, resp. ${\sf Ker}\, T_\phi $. On the other hand, $V_\psi(X,\mu)[\|\cdot\|_\psi]$ and $V_\phi(X,\mu)[\|\cdot\|_\phi]$ are Hilbert spaces. However, there is no contradiction, since the equality sign $=$ denotes an isomorphism of vector spaces only, without reference to any topology. Moreover, the two norms, Banach and Hilbert, \emph{cannot} be comparable, for otherwise they would be equivalent \cite[Coroll. 1.6.8]{megg}, which is impossible in the case of $L^p, p\neq 2$. The same is true for any LBS whose spaces $V_p$ are not Hilbert spaces.

Although we do not have an explicit example of a reproducing pair in this $L^p$ setting, we indicate a possible construction towards one. Let $\theta^{(1)}: \mathbb{R} \to L^2$ be a measurable function such that $\ip{h}{\theta^{(1)}_x}\in L^q, \, \forall\, h\in L^2, 1<q<2$, and let $\theta^{(2)}: \mathbb{R} \to L^2$ be a measurable function such that $\ip{h}{\theta^{(2)}_x}\in L^{\overline q}, \, \forall\, h\in L^2$. Define $\psi_x := \min(\theta^{(1)}_x, \theta^{(2)}_x) \equiv \theta^{(1)}_x\wedge \theta^{(2)}_x$ and $\phi_x := \max(\theta^{(1)}_x, \theta^{(2)}_x) \equiv \theta^{(1)}_x\vee \theta^{(2)}_x$.
Then we have
\begin{align*}
(C_\psi h)(x) = \ip{h}{\psi_x}\in L^q \cap L^{\overline q}, \, \forall\, h\in L^2, \\
(C_\phi h)(x) = \ip{h}{\phi_x}\in L^q + L^{\overline q}, \, \forall\, h\in L^2,
\end{align*}
and we have indeed $L^q \cap L^{\overline q} \subset L^2 \subset L^q + L^{\overline q}$. It remains to guarantee that $\psi$ and $\phi$ are dual to each other, that is,
$$
\int_X \ip{f}{\psi_x}\ip{\phi_x}{g}\,\mathrm{d}\mu(x) = \int_X C_\psi f (x) \; \overline{C_\phi g(x)} \,\mathrm{d}\mu(x) = \ip{f}{g}, \, \forall \, f,g\in L^2.
$$

\section{Outcome}

We have seen in \cite{ast-reprodpairs} that the notion of reproducing pair is quite rich. It generates a whole mathematical structure, which ultimately leads to a pair of Hilbert spaces, conjugate dual to each other with respect to the $ L^2(X,\mu)$ inner product. This suggests that one should make precise the optimal assumptions on the measurable functions or, rather, on the nature of the range of the analysis operators $C_\psi, C_\phi$. This in turn suggests analyzing the whole structure in the language of {\sc pip}-spaces, which is the topic of the present paper. In particular, a natural choice is a scale, or simply a triplet, of Hilbert spaces, the two extreme spaces being conjugate duals of each other with respect to the $ L^2(X,\mu)$ inner product. Another possibility consists of exploiting the lattice of all $L^p(\mathbb{R}, \,\mathrm{d} x)$ spaces, or a subset thereof, in particular a (Gel'fand) triplet of Banach spaces. Some examples have been described above, but clearly more work along these lines is in order.

\appendix
\section*{Appendix. Lattices of Banach or Hilbert spaces and operators on them}
\renewcommand{\theequation}{\Alph{section}.\arabic{equation}}

\subsection{Lattices of Banach or Hilbert spaces}
\label{subsec-pip}

For the convenience of the reader, we summarize in this Appendix the basic facts concerning {\sc pip}-spaces and operators on them. However, we will restrict the discussion to the simpler case of a lattice of Banach (LBS) or Hilbert spaces (LHS). Further information may be found in our monograph \cite{pip-book} or our review paper \cite{at-AMP}.

Let thus ${\mathcal I} = \{ V_p,\, p \in I \}$ be a family of Hilbert spaces or reflexive Banach spaces, partially ordered by inclusion. Then ${\mathcal I}$ generates an involutive lattice ${\mathcal J}$, indexed by $J$, through the operations $(p,q,r \in I)$:
\begin{tabular}{lccl}
. involution: &$V_r$ & $\!\!\!\leftrightarrow\!\!\!$& $V_{\overline{r}}= V_r ^\times $, the conjugate dual of $V_r$\\
. infimum: & $V_{p \wedge q}$ &$\!\!\! :=\!\!\!$ & $V_p \wedge V_q = V_p \cap V_q$ \\
. supremum: & $V_{p \vee q}$ &$\!\!\! :=\!\!\!$ &$ V_p \vee V_q = V_p + V_q $.
\end{tabular}
\\[2mm]
It turns out that both $V_{p\wedge q}$ and $V_{p\vee q}$ are Hilbert spaces, resp. reflexive Banach spaces, under appropriate norms (the so-called projective, resp. inductive, norms). Assume that the following conditions are satisfied:
\begin{itemize}
\item [(i)] ${\mathcal I}$ contains a unique self-dual Hilbert subspace $V_{o} =V_{\overline{o}}$.
\item [(ii)] for every $V_r\in{\mathcal I}$, the norm $\|\cdot\|_{\overline{r}}$ on $V_{\overline{r}}=V_{r}^\times$ is the conjugate of the norm $\|\cdot\|_{r}$ on $V_{r}$.
\end{itemize}
In addition to the family ${\mathcal J} =\{V_{r}, \,r\in J\}$, it is convenient to consider the two spaces $V^{\#}$ and $V $ defined as
\begin{equation}
V = \sum_{q\in I}V_{q}, \quad V^{\#} = \bigcap_{q\in I}V_{q}.
\label{eq:extreme}
\end{equation}
These two spaces themselves usually do \emph{not} belong to ${\mathcal I}$. We say that two vectors $f,g\in V$ are \emph{compatible} if there exists $r \in J \mbox{ such that } f \in V_{r}, g \in V_{\overline{r}}$\,. Then a \emph{partial inner product} on $V$ is a Hermitian form $\ip{\cdot}{\cdot}$ defined exactly on compatible pairs of vectors. In particular, the partial inner product $\ip{\cdot}{\cdot}$ coincides with the inner product of $V_o$ on the latter. A \emph{partial inner product space} ({\sc pip}-space) is a vector space $V$ equipped with a partial inner product. Clearly LBSs and LHSs are particular cases of {\sc pip}-spaces.

From now on, we will assume that our {\sc pip}-space\ $(V, \ip{\cdot}{\cdot})$ is \emph{nondegenerate}, that is, $\ip{f}{g} = 0 $ for all $ f \in V^{\#} $ implies $ g = 0$. As a consequence, $(V^{\#}, V)$ and every couple $(V_r , V_{\overline r} ), \, r\in J, $ are dual pairs in the sense of topological vector spaces \cite{kothe}. In particular, the original norm topology on $V_r$ coincides with its Mackey topology $\tau(V_{r},V_{\overline{r}})$, so that indeed its conjugate dual is $(V_r)^\times = V_{\overline {r}}, \; \forall\, r\in J $. Then, $r<s$ implies $V_r \subset V_s$, and the embedding operator $E_{sr}: V_r \to V_s$ is continuous and has dense range. In particular, $V^{\#}$ is dense in every $V_{r}$. In the sequel, we also assume the partial inner product to be positive definite, $\ip{f}{f}>0$ whenever $f\neq0$.

A standard, albeit trivial, example is that of a Rigged Hilbert space (RHS) $\Phi \subset {\mathcal H} \subset \Phi^{\#}$ (it is trivial because the lattice ${\mathcal I}$ contains only three elements). Familiar concrete examples of {\sc pip}-spaces are sequence spaces, with $V = \omega$ the space of \emph{all} complex sequences $x = (x_n)$, and spaces of locally integrable functions, with $V =L^1_{\rm loc}(\mathbb{R}, \,\mathrm{d} x)$, the space of Lebesgue measurable functions, integrable over compact subsets.

Among LBSs, the simplest example is that of a chain of reflexive Banach spaces. The prototype is the chain $ {\mathcal I} = \{L^p := L ^p ( [0,1 ];dx),\; 1 < p < \infty\} $ of Lebesgue spaces over the interval $[0,1]$:
\begin{equation}\label{eq:Lp}
L^{\infty}\;\subset \;\ldots\;\subset \; L^{\overline{q}}\;\subset\; L^{\overline{r}}\;\subset \; \ldots \; \subset \;L^{2}\;\subset \; \ldots \subset \; L^{r} \subset \; L^{q} \;\subset \;\ldots\;\subset \;L^{1} ,
\end{equation}
where $1<q<r<2$ (of course, $L^{\infty}$ and $L^1$ are not reflexive). Here $L^{q}$ and $ L^{\overline{q}}$ are dual to each other $(1/q + 1/\overline{ q} = 1)$, and similarly $L^{r}, L^{\overline{r}}\; (1/r + 1/\overline{r} = 1)$. As for a LHS, the simplest example is the Hilbert scale generated by a self-adjoint operator $A>I$ in a Hilbert space ${\mathcal H}_o$.
Let ${\mathcal H}_{n}$ be $ D(A^n)$, the domain of $A^n$, equipped with the graph norm $\| f \|_n = \| A^n f \|, \, f\in D(A^n) $, for $ n \in\NN$ or $n\in \mathbb{R}^+$, and ${\mathcal H}_{\overline n}:= {\mathcal H}_{-n} ={\mathcal H}_{n}^{\times}$ (conjugate dual):
\begin{equation}\label{eq:scaleHn}
{\mathcal D}^{\infty}(A):=\bigcap_{n} {\mathcal H}_n \subset \ldots \subset {\mathcal H}_2 \subset {\mathcal H}_1 \subset {\mathcal H}_0 \subset {\mathcal H}_{\overline 1} \subset {\mathcal H}_{ \overline 2} \subset \ldots \subset {\mathcal D}_{\overline \infty}(A):=\bigcup_{n} {\mathcal H}_{\overline n}.
\end{equation}
Note that here the index $n$ may be integer or real, the link between the two cases being established by the spectral theorem for self-adjoint operators. Here again the inner product of ${\mathcal H}_0$ extends to each pair ${\mathcal H}_n ,{\mathcal H}_{-n}$, but on ${\mathcal D}_{\overline\infty}(A)$ it yields only a \emph{partial} inner product. A standard example is the scale of Sobolev spaces $H^s(\mathbb{R}), \, s\in \mathbb{Z}$, in ${\mathcal H}_0 = L^2(\mathbb{R}, dx)$.

\subsection{Operators on LBSs and LHSs}

Let $V_{J}$ be a LHS or a LBS. Then an \emph{operator} on $V_J$ is a map from a subset ${\mathcal D} (A) \subset V$ into $V$, such that
\smallskip

(i) ${\mathcal D}(A) = \bigcup_{q\in {\sf d}(A)} V_q$, where ${\sf d}(A)$ is a nonempty subset of $J$;
\smallskip

(ii) for every $q \in {\sf d}(A )$, there exists $p\in J$ such that the restriction of $A$ to $V_{q}$ is a continuous linear map into $V_{p}$ (we denote this restriction by $A_{pq}$);
\smallskip

(iii) $A$ has no proper extension satisfying (i) and (ii).

\noindent We denote by Op$(V_J)$ the set of all operators on $V_J$. The continuous linear operator $A_{pq}: V_q \to V_{p}$ is called a \emph{representative} of $A$. The properties of $A$ are conveniently described by the set ${\sf j}(A)$ of all pairs $ (q,p )\in J\times J$ such that $A$ maps $V_{q}$ continuously into $V_{p}$. Thus the operator $A$ may be identified with the collection of its representatives,
\begin{equation}\label{eqj(A)}
A \simeq \{ A_{pq}: V_{q} \to V_{p} : (q,p ) \in {\sf j}(A)\}.
\end{equation}
It is important to notice that an operator is uniquely determined by \emph{any} of its representatives, by virtue of Property (iii): there are no extensions for {\sc pip}-space\ operators. We will also need the following sets:
\vspace*{-2mm}\begin{align*}
{\sf d}(A) &= \{ q \in J : \mbox{there is a } \, p \; \mbox{such that}\; A_{pq} \;\mbox{exists} \}, \\
{\sf i}(A) &= \{ p \in J : \mbox{there is a } \, q \; \mbox{such that}\; A_{pq} \;\mbox{exists} \}.
\end{align*}
\\[-4mm]
The following properties are immediate:
\begin{itemize}
\item [{\bf .}] ${\sf d}(A)$ is an initial subset of $J$: if $q \in {\sf d}(A)$ and $q' < q$, then $q' \in {\sf d}(A)$, and $A_{pq'} = A_{pq}E_{qq'}$, where $E_{qq'}$ is a representative of the unit operator.
\item [{\bf .}] ${\sf i}(A)$ is a final subset of $J$: if $p \in {\sf i}(A)$ and $p' > p$, then $p' \in {\sf i}(A)$ and $A_{p'q} = E_{p'p} A_{pq}$.
\end{itemize}
Although an operator may be identified with a separately continuous sesquilinear form on $V^\# \times V^\#$, or with a continuous conjugate linear map from $V^\#$ into $V$, it is more useful to keep also the \emph{algebraic operations} on operators, namely:
\begin{itemize}
\vspace*{-1mm}\item[(i)] \emph{Adjoint:} every $A \in\mathrm{Op}(V_J)$ has a unique adjoint $A^\times \in \mathrm{Op}(V_J)$, defined by
\begin{equation}\label{eq:adjoint}
\ip {A^\times y} {x} = \ip {y} { Ax} , \;\mathrm {for}\, x \in V_q, \, q\in{\sf d}(A) \;\mathrm {and }\;\, y\in V_{\overline{p}}, \, p \in{\sf i}(A),
\end{equation}
that is, $(A^\times)_{\overline{q}\overline{p}} = (A_{pq})' $, where $(A_{pq})': V_{\overline{p}} \to V_{\overline{q}}$ is the adjoint map of $A_{pq}$. Furthermore, one has $A^\times{}^\times = A, $ for every $ A \in {\rm Op}(V_J)$: no extension is allowed, by the maximality condition (iii) of the definition.
\item[(ii)] \emph{Partial multiplication:} Let $A, B \in \mathrm{Op}(V_J )$. We say that the product $BA$ is defined if and only if there is an $r \in{\sf i}(A) \cap{\sf d}(B)$, that is, if and only if there is a continuous factorization through some $V_r$:
\begin{equation}\label{eq:mult}
V_q \; \stackrel{A}{\rightarrow} \; V_r \; \stackrel{B}{\rightarrow} \; V_p , \quad\mbox{i.e.,} \quad (BA)_{pq} = B_{pr} A_{rq}, \,\mbox{ for some } \; q \in{\sf d}(A) , p\in {\sf i}(B).
\end{equation}
\end{itemize}
Of particular interest are \emph{symmetric} operators, defined as those operators satisfying the relation $A^\times = A$, since these are the ones that could generate self-adjoint operators in the central Hilbert space, for instance by the celebrated KLMN theorem, suitably generalized to the {\sc pip}-space\ environment \cite[Section 3.3]{pip-book}.

\section*{Acknowledgement}
This work was partly supported by the Istituto Nazionale di Alta Matematica (GNAMPA project ``Propriet\`a spettrali di quasi *-algebre di operatori"). JPA gratefully acknowledges the hospitality of the Dipartimento di Matematica e Informatica, Universit\`a di Palermo, whereas CT acknowledges that of the Institut de Recherche en Math\'ematique et Physique, Universit\'e catholique de Louvain.

\begin{thebibliography}{99}

\bibitem{squ-int} S.T. Ali, J-P. Antoine, and J-P. Gazeau, Square integrability of group representations on homogeneous spaces I. Reproducing triples and frames, \emph{Ann. Inst. H. Poincar\'e} {\bf 55} (1991) 829--856

\bibitem{contframes} S.T. Ali, J-P. Antoine, and J-P. Gazeau, Continuous frames in Hilbert space, \textit{Annals of Physics} {\bf 222} (1993) 1--37

\bibitem{ait-book} J-P.~Antoine, A.~Inoue, and C.~Trapani, \emph{Partial *-Algebras and Their Operator Realizations}, Mathematics and Its Applications, vol. 553, Kluwer, Dordrecht, NL, 2002

\bibitem{pip-book} J-P. Antoine and C. Trapani, \textit{Partial Inner Product Spaces: Theory and Applications}, Lecture Notes in Mathematics, vol. 1986, Springer-Verlag, Berlin, 2009

\bibitem{at-AMP} J-P. Antoine and C. Trapani, The partial inner product space method: A quick overview, \textit{Adv. in Math. Phys.} Vol. {\bf 2010} (2010) 457635; Erratum, \textit{ibid.} Vol. {\bf 2011} (2010) 272703

\bibitem{ant-bal-semiframes1} J-P. Antoine and P. Balazs, Frames and semi-frames, \textit{J. Phys. A: Math. Theor.} {\bf 44} (2011) 205201; Corrigendum, \textit{ibid.} {\bf 44} (2011) 479501

\bibitem{ant-bal-semiframes2} J-P. Antoine and P. Balazs, Frames, semi-frames, and Hilbert scales, \textit{Numer. Funct. Anal. Optimiz.} {\bf 33} (2012) 736--769

\bibitem{ast-reprodpairs} J-P.~Antoine, M. Speckbacher, and C.~Trapani, Reproducing pairs of measurable functions, preprint, 2016

\bibitem{at-pipops} J-P.~Antoine and C.~Trapani, Operators on partial inner product spaces: Towards a spectral analysis, \textit{Mediterranean J. Math.} {\bf 13} (2016) 323--351

\bibitem{christ} O.~Christensen, \textit{An Introduction to Frames and Riesz Bases\/}, Birkh\"{a}user, Boston, MA, 2003

\bibitem{gab-han} J-P. Gabardo and D. Han, Frames associated with measurable spaces, \textit{Adv. Comput. Math.} {\bf 18} (2003) 127--147

\bibitem{gould} G.G.~Gould, On a class of integration spaces, \textit{J. London Math. Soc.} \textbf{34} (1959) 161--172

\bibitem{kaiser} G. Kaiser, \textit{A Friendly Guide to Wavelets}, Birkh{\"a}user, Boston, 1994

\bibitem{kothe} G.~K\"{o}the, \textit{Topological Vector Spaces, Vol. I\/}, Springer-Verlag, Berlin, 1969, 1979

\bibitem{kothe2} G.~K\"{o}the, \textit{Topological Vector Spaces, Vol. II\/}, Springer-Verlag, Berlin, 1979; p.84

\bibitem{megg} R.E. Megginson, \textit{An Introduction to Banach Space Theory}, Springer-Verlag, New York-Heidelberg-Berlin, 1998

\bibitem{rahimi} A. Rahimi, A. Najati, and Y.N. Dehghan, Continuous frames in Hilbert spaces, \textit{Methods Funct. Anal. Topol.} {\bf 12} (2006) 170--182

\bibitem{rudin} W. Rudin, \textit{Real and Complex Analysis}, Int. Edition, McGraw Hill, New York et al., 1987; p.73, from Ex.18

\bibitem{schaefer} H.H. Schaefer, \textit{Topological Vector Spaces}, Springer-Verlag, New York-Heidelberg-Berlin, 1971

\bibitem{speck-bal} M. Speckbacher and P. Balazs, Reproducing pairs and the continuous nonstationary {G}abor transform on {LCA} groups, \textit{J. Phys. A: Math. Theor.} {\bf 48} (2015) 395201

\bibitem{werner} P. Werner, A distributional-theoretical approach to certain Lebesgue and Sobolev spaces, \textit{J. Math. Anal. Appl.} {\bf 29} (1970) 18--78

\end{thebibliography}
\end{document}
\begin{document} \title[Feinberg-Zee Random Hopping Matrix] {On Symmetries of the Feinberg-Zee Random Hopping Matrix} \author[Simon Chandler-Wilde]{Simon N.~Chandler-Wilde} \address{ Department of Mathematics and Statistics\\ University of Reading\\ Reading, RG6 6AX\\ UK} \email{[email protected]} \author{Raffael Hagger} \address{ Institute of Mathematics\\ Hamburg University of Technology\\ Schwarzenbergstr. 95 E\\ 21073 Hamburg\\ GERMANY\\ \textit{now at:}\\ Institute for Analysis\\ Leibniz Universit\"at Hannover\\ Welfengarten 1\\ 30167 Hannover\\ GERMANY} \email{[email protected]} \subjclass{Primary 47B80; Secondary 37F10, 47A10, 47B36, 65F15} \keywords{random operator, Jacobi operator, non-selfadjoint operator, spectral theory, fractal, Julia set} \date{August 27, 2015} \dedicatory{Dedicated to Roland Duduchava on the occasion of his 70th birthday} \begin{abstract} In this paper we study the spectrum $\Sigma$ of the infinite Feinberg-Zee random hopping matrix, a tridiagonal matrix with zeros on the main diagonal and random $\pm 1$'s on the first sub- and super-diagonals; the study of this non-selfadjoint random matrix was initiated in Feinberg and Zee ({\it Phys. Rev. E} {\bf 59} (1999), 6433--6443). Recently Hagger ({\em Random Matrices: Theory Appl.}, {\bf 4} 1550016 (2015)) has shown that the so-called {\em periodic part} $\Sigma_\pi$ of $\Sigma$, conjectured to be the whole of $\Sigma$ and known to include the unit disk, satisfies $p^{-1}(\Sigma_\pi) \subset \Sigma_\pi$ for an infinite class $\cS$ of monic polynomials $p$. In this paper we make very explicit the membership of $\cS$, in particular showing that it includes $P_m(\lambda) = \lambda U_{m-1}(\lambda/2)$, for $m\geq 2$, where $U_n(x)$ is the Chebychev polynomial of the second kind of degree $n$. We also explore implications of these inverse polynomial mappings, for example showing that $\Sigma_\pi$ is the closure of its interior, and contains the filled Julia sets of infinitely many $p\in \cS$, including those of $P_m$, this partially answering a conjecture of the second author. \end{abstract} \maketitle \section{Introduction} \label{sec:intro} In this paper we study spectral properties of infinite matrices of the form \begin{equation} \label{eq:matrix} A_c = \begin{pmatrix} \ddots & \ddots & & & \\ \ddots & 0 & 1 & & \\ & c_0 & \fbox{$0$} & 1 & \\ & & c_1 & 0 & \ddots \\ & & & \ddots & \ddots \end{pmatrix}, \end{equation} where $c\in \Omega := \{\pm 1\}^\Z$ is an infinite sequence of $\pm 1$'s, and the box marks the entry at $(0,0)$. Let $\ell^2$ denote the linear space of those complex-valued sequences $\phi:\Z\to \C$ for which $\|\phi\|_2 := \{\sum_{n\in\Z}|\phi_n|^2\}^{1/2}<\infty$, a Hilbert space equipped with the norm $\|\cdot\|_2$. Then to each matrix $A_c$ with $c\in \Ome$ corresponds a bounded linear mapping $\ell^2\mapsto\ell^2$, which we denote again by $A_c$, given by the rule $$ (A_c \phi)_m = c_m \phi_{m-1} + \phi_{m+1}, \quad m\in\Z, $$ for $\phi\in\ell^2$. Following \cite{CWDavies2011} we will term \eqref{eq:matrix} a {\em Feinberg-Zee hopping matrix}. Further, in the case where each $c_m$ is an independent realisation of a random variable with probability measure whose support is $\{-1,1\}$, we will term $A_c$ a Feinberg-Zee {\em random} hopping matrix, this particular non-selfadjoint random matrix studied previously in \cite{FeinZee97,FeinZee99,CicutaContediniMolinari2000,HolzOrlZee,CCL,CWDavies2011,CCL2,Hagger:NumRange,Hagger:dense,Hagger:symmetries}. 
\footnote{These random hopping matrices appear to have been studied initially in \cite{FeinZee97}, in which paper the first superdiagonal is also a sequence of random $\pm1$'s. But it is no loss of generality to restrict attention to matrices of the form \eqref{eq:matrix} as the case where the superdiagonal is also random can be reduced to \eqref{eq:matrix} by a simple gauge transformation; see \cite{FeinZee97} or \cite[Lemma 3.2, Theorem 5.1]{CCL2}.} The spectrum of a realisation $A_c$ of this random hopping matrix is given, almost surely, by (e.g., \cite{CCL}) \begin{equation} \label{eq:spec} \spec A_c = \Sigma := \bigcup_{b\in \Ome} \spec A_b. \end{equation} Here $\spec A_b$ denotes the spectrum of $A_b$ as an operator on $\ell^2$. Note that \eqref{eq:spec} implies that $\Sigma$ is closed. Equation \eqref{eq:spec} holds whenever $c\in \Ome$ is {\em pseudo-ergodic}, which means simply that every finite sequence of $\pm 1$'s appears as a consecutive sequence somewhere in the infinite vector $c$; it is easy to see that $c$ is pseudo-ergodic almost surely if $c$ is random. The concept of pseudo-ergodicity dates back to \cite{Davies2001:PseudoErg}, as do the arguments that \eqref{eq:spec} holds, or see \cite{CCL2} for \eqref{eq:spec} derived as a special case of more general limit operator results. Many of the above cited papers are concerned primarily with computing upper and lower bounds on $\Sigma$. A standard upper bound for the spectrum is provided by the numerical range. It is shown in \cite{CCL2} that, if $c\in \Ome$ is pseudo-ergodic, its numerical range $W(A_c)$, defined by $W(A_c) :=\{(A_c \phi,\phi):\phi\in \ell^2,\ \|\phi\|_2=1\}$, where $(\cdot,\cdot)$ is the inner product on $\ell^2$, is given by \begin{eqnarray} \label{eq:nr} W(A_c) = \Delta:= \{x+\ri y: x,y\in \R, \; |x|+|y| < 2\}. \end{eqnarray} This gives the upper bound that $\Sigma \subset \overline{\Delta}$, the closure of $\Delta$. Other, sharper upper bounds on $\Sigma$ are discussed in Section \ref{sec:pre} below. This current paper is related to the problem of computing lower bounds for $\Sigma$ via \eqref{eq:spec}. If $b\in \Ome$ is constant then $A_b$ is a Laurent matrix and $\spec A_b = [-2,2]$ if $b_m\equiv 1$, while $\spec A_b=\ri[-2,2]$ if $b_m\equiv -1$; thus, by \eqref{eq:spec}, $\pi_1:= [-2,2]\cup \ri [-2,2]\subset \Sigma$. Generalising this, if $b\in \Ome$ is periodic with period $n$ then $\spec A_b$ is the union of a finite number of analytic arcs which can be computed by calculating eigenvalues of $n\times n$ matrices (see Lemma \ref{lem:spper} below). And, by \eqref{eq:spec}, $\pi_n\subset \Sigma$, where $\pi_n$ is the union of $\spec A_b$ over all $b$ with period $n$. This implies, since $\Sigma$ is closed, that \begin{equation} \label{eq:piinf} \Sigma_\pi := \overline{\pi_\infty} \subset \Sigma, \end{equation} where $\pi_\infty := \cup_{n\in\N} \pi_n$. We will call $\Sigma_\pi$ the {\em periodic part} of $\Sigma$, noting that \cite{CCL} conjectures that equality holds in \eqref{eq:piinf}, i.e.~that $\pi_\infty$ is dense in $\Sigma$ and $\Sigma_\pi=\Sigma$. Whether or not this holds is an open problem, but it has been shown in \cite{CWDavies2011} that $\pi_\infty$ is dense in the open unit disk $\D:=\{\lambda\in \C:|\lambda|<1\}$, so that \begin{equation} \label{eq:unit} \ovD \subset \Sigma_\pi\subset \Sigma. \end{equation} For a polynomial $p$ and $S\subset \C$, we define, as usual, $p(S):= \{p(\lambda):\lambda\in S\}$ and $p^{-1}(S):= \{\lambda\in \C: p(\lambda)\in S\}$. 
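As an aside, the computation of $\pi_n$ just described is straightforward to carry out numerically: by Lemma \ref{lem:spper} below, one collects the eigenvalues of the $n\times n$ matrices $a_k(\varphi)$ introduced in Section \ref{sec:pre}, over all sign patterns $k\in\{\pm 1\}^n$ and a sample of $\varphi\in[0,2\pi)$. The following sketch is an illustration only, not part of the arguments of this paper; the function name and the density of the $\varphi$-sampling are arbitrary choices.
\begin{verbatim}
# Minimal sketch: sample points of pi_n, the union of spec A_k^{per}
# over all k in {+-1}^n, via eigenvalues of the matrices a_k(phi)
# defined in the preliminaries section.
import itertools
import numpy as np

def sample_pi_n(n, n_phi=200):
    phis = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    points = []
    for k in itertools.product((1, -1), repeat=n):
        for phi in phis:
            a = np.zeros((n, n), dtype=complex)
            for j in range(n - 1):
                a[j, j + 1] = 1.0      # superdiagonal of A_k^{(n)}
                a[j + 1, j] = k[j]     # subdiagonal entries k_1,...,k_{n-1}
            a[n - 1, 0] += np.exp(-1j * phi)           # e^{-i phi} R_n
            a[0, n - 1] += k[-1] * np.exp(1j * phi)    # k_n e^{i phi} R_n^T
            points.extend(np.linalg.eigvals(a))
    return np.array(points)

# e.g. sample_pi_n(1) lies on [-2,2] and i[-2,2], recovering pi_1.
\end{verbatim}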
(We will use throughout that if $S$ is open then $p^{-1}(S)$ is open ($p$ is continuous) and, if $p$ is non-constant, then $p(S)$ is also open, e.g., \cite[Theorem 10.32]{rudinrealcomplex}.) The proof of \eqref{eq:unit} in \cite{CWDavies2011} depends on the result, in the case $p(\lambda) = \lambda^2$, that \begin{equation} \label{eq:pm1} p^{-1}(\pi_\infty) \subset \pi_\infty, \; \mbox{ so that also } p^{-1}(\Sigma_\pi)\subset \Sigma_\pi. \end{equation} This implies that $S_n\subset \pi_\infty$, for $n=0,1,...$, where $S_0:=[-2,2]$ and $S_{n}:=p^{-1}(S_{n-1})$, for $n\in\N$. Thus $\cup_{n\in\N}S_n$, which is dense in $\ovD$, is also in $\pi_\infty$, giving \eqref{eq:unit}. Hagger \cite{Hagger:symmetries} makes a large generalisation of the results of \cite{CWDavies2011}, showing the existence of an infinite family, $\cS$, containing monic polynomials of arbitrarily high degree, for which \eqref{eq:pm1} holds. For each of these polynomials $p$ let \begin{equation} \label{eq:Up} U(p) := \bigcup_{n=1}^\infty p^{-n}(\D). \end{equation} (Here $p^{-2}(S) := p^{-1}(p^{-1}(S))$, $p^{-3}(S) := p^{-1}(p^{-2}(S))$, etc.) Hagger \cite{Hagger:symmetries} observes that, as a consequence of \eqref{eq:unit} and \eqref{eq:pm1}, $U(p)\subset \Sigma_\pi$. He also notes that standard results of complex dynamics (e.g., \cite[Corollary 14.8]{Falconer03}) imply that $J(p)\subset \overline{U(p)}$, so that $J(p)\subset \Sigma_\pi$; here $J(p)$ denotes the Julia set of the polynomial $p$. (Where $p^2(\lambda):=p(p(\lambda))$, $p^3(\lambda):=p(p^2(\lambda))$, etc., we recall \cite{Falconer03} that the {\em filled Julia set} $K(p)$ of a polynomial $p$ of degree $\geq 2$ is the compact set of those $\lambda\in \C$ for which the sequence $(p^n(\lambda))_{n\in\N}$, the {\em orbit} of $\lambda$, is bounded. Further, the boundary of $K(p)$, $J(p) := \partial K(p)\subset K(p)$, is the {\em Julia set} of $p$.) The definition of the set $\cS$ in \cite{Hagger:symmetries}, while constructive, is rather indirect. The first contribution of this paper (Section \ref{sec:poly}) is to make explicit the membership of $\cS$. As a consequence we show, in particular, that $P_m\in \cS$, for $m=2,3,...$, where $P_m(\lambda) := \lambda U_{m-1}(\lambda/2)$, and $U_n$ is the Chebychev polynomial of the second kind of degree $n$ \cite{AS}. The second contribution of this paper (Section \ref{sec:interior}) is to say more about the interior points of $\Sigma_\pi$. Previous calculations of large subsets of $\pi_\infty$, precisely calculations of $\pi_n$ for $n$ as large as 30 \cite{CCL,CCL2}, suggest that $\Sigma_\pi$ fills most of the square $\Delta$, but $\intt(\Sigma_\pi)$, the interior of $\Sigma_\pi$, is known only to contain $\D$. Using that the whole family $\{P_m:m\geq 2\}\subset \cS$, we prove that $(-2,2)\subset \intt(\Sigma_\pi)$. This result is then used to show that $\Sigma_\pi$ is the closure of its interior. Using that $p^{-1}(\D)\subset \Sigma_\pi$, for $p\in \cS$, we also, in Section \ref{subsec:b}, construct new explicit subsets of $\Sigma_\pi$ and its interior; in particular, extending \eqref{eq:unit}, we show that $\alpha \ovD \subset \Sigma_\pi$ for $\alpha = 1.1$. In the final Section \ref{sec:julia} of the paper we address a conjecture of Hagger \cite{Hagger:symmetries} that, not only for every $p\in \cS$ is $J(p)\subset \overline{U(p)}$ (which implies $J(p)\subset \Sigma_\pi$), but also the filled Julia set $K(p)\subset \overline{U(p)}$. 
This is a stronger result as, while the compact set $J(p)$ has empty interior \cite[Summary 14.12]{Falconer03}, $K(p)$ contains, in addition to $J(p)$, all the bounded components of the open Fatou set $F(p):= \C\setminus J(p)$. We show, by a counter-example, that this conjecture is false. But, positively, we conjecture that $K(p)\subset \Sigma_\pi$ for all $p\in \cS$, and we prove that this is true for a large subset of $\cS$, in particular that $K(P_m)\subset \Sigma_\pi$ for $m\geq 2$.

The results in this paper provide new information on the almost sure spectrum $\Sigma\supset \Sigma_\pi$ of the bi-infinite Feinberg-Zee random hopping matrix. They are also relevant to the study of the spectra of the corresponding finite matrices. For $n\in \N$ let $V_n$ denote the set of $n\times n$ matrices of the form \eqref{eq:matrix}, so that $V_1 := \{(0)\}$ and, for $n\geq 2$, $V_n:= \{A_k^{(n)}:k=(k_1,...,k_n)\in \{\pm 1\}^n\}$, where
\begin{equation} \label{eq:matrix2}
A^{(n)}_k:= \left(\begin{array}{ccccc} 0&1&&\\ k_1&0&\ddots&\\ &\ddots&\ddots& 1\\ && k_{n-1} & 0 \end{array}\right).
\end{equation}
(This notation will be convenient, but note that $A_k^{(n)}$ is independent of the last component of $k$.) Then $\spec A_k^{(n)}$ is the set of eigenvalues of the matrix $A_k^{(n)}$. Let
\begin{equation} \label{eq:sigma}
\sigma_n := \bigcup_{A\in V_n} \spec A, \mbox{ for }n\in\N, \; \mbox{ and } \sigma_\infty := \bigcup_{n=1}^\infty \sigma_n,
\end{equation}
so that $\sigma_\infty$ is the union of all eigenvalues of finite matrices of the form \eqref{eq:matrix2}. Then, connecting spectra of finite and infinite matrices, it has been shown in \cite{CCL2} that $\sigma_n\subset \pi_{2n+2}$, for $n\in\N$, so that $\sigma_\infty\subset \pi_\infty \subset \Sigma_\pi$. Further, \cite{Hagger:dense} shows that $\sigma_\infty$ is dense in $\pi_\infty$, so that $\Sigma_\pi = \overline{\sigma_\infty}$. In Section \ref{subsec:a} we build on and extend these results, making a surprising connection between the eigenvalues of the finite matrices \eqref{eq:matrix2} and the spectra of the periodic operators associated to the polynomials in $\cS$. The result we prove (Theorem \ref{thm:dense}) is key to the later arguments in Section \ref{sec:interior}.

\section{Preliminaries and previous work} \label{sec:pre}

\paragraph{Notions of set convergence.} We will say something below about set sequences, sequences approximating $\Sigma$ and $\Sigma_\pi$ from above and below, respectively. We will measure convergence in the standard Hausdorff metric $d(\cdot,\cdot)$ \cite[Section 28]{Hausdorff} (or see \cite{HaRoSi2}) on the space $\C^C$ of compact subsets of $\C$. We will write, for a sequence $(S_n)\subset \C^C$ and $S\in \C^C$, that $S_n\searrow S$ if $d(S_n,S)\to 0$ as $n\to \infty$ and $S\subset S_n$ for each $n$; equivalently, $S_n\searrow S$ as $n\to\infty$ if $S\subset S_n$ for each $n$ and, for every $\epsilon >0$, $S_n\subset S+\epsilon \D$, for all sufficiently large $n$. Similarly, we will write that $S_n\nearrow S$ if $d(S_n,S)\to 0$ as $n\to \infty$ and $S_n\subset S$ for each $n$; equivalently, $S_n\nearrow S$ if $S_n\subset S$ for each $n$ and, for every $\epsilon >0$, $S\subset S_n+\epsilon \D$, for all sufficiently large $n$. The following observation, which follows immediately from \cite[Proposition 3.6]{HaRoSi2} (or see \cite[Lemma 4.15]{CWLi2015:Coburn}), will be useful: \begin{lem} \label{eq:setconv} If $S_1\subset S_2 \subset ...
\subset \C$ are closed and $ S_\infty:= \bigcup_{n=1}^\infty S_n $ is bounded, then $S_n\nearrow \overline{S_\infty}$, as $n\to\infty$. \end{lem}

\paragraph{Spectra of periodic operators.} We will need explicit formulae for the spectra of operators $A_c$ with $c\in \Ome$ in the case when $c$ is periodic. For $k=(k_1,...,k_n)\in \{\pm 1\}^n$, let $A_k^{\per}$ denote $A_c$ in the case that $c_{m+n} = c_m$, for $m\in \Z$, and $c_m=k_m$, for $m=1,2,...,n$. For $n\in \N$ let $I_n$ denote the order $n$ identity matrix, $R_n$ the $n\times n$ matrix which is zero except for the entry 1 in row $n$, column $1$, and let $R_n^T$ denote the transpose of $R_n$. For $n\in \N$, $k\in \{\pm 1\}^n$, and $\varphi\in \R$, let $a_k(\varphi) := A_k^{(n)} + \re^{-\ri \varphi} R_n + k_n \re^{\ri \varphi} R_n^T$. The following characterisation of the spectra of periodic operators is well-known (see Lemma 1 and the discussion in \cite{Hagger:dense}).
\begin{lem} \label{lem:spper} For $n\in \N$ and $k\in \{\pm 1\}^n$,
$$
\spec A_k^{\per} = \{\lambda\in \C: \det(a_k(\varphi)-\lambda I_n) =0 \mbox{ for some }\varphi\in [0,2\pi)\}.
$$
\end{lem}
Key to our arguments will be an explicit expansion for the determinant in the above lemma, expressed in terms of the following notation. For $n\in \N$, $k=(k_1,...,k_n)\in \{\pm 1\}^n$, and $\lambda\in \C$, let
\begin{equation}\label{eq:pdef3}
q_k(\lambda) := \left|\begin{array}{cccc} \lambda & 1 \\ k_1 & \ddots & \ddots\\ & \ddots & \ddots & 1\\ & & k_{n} & \lambda \end{array}\right|.
\end{equation}
For $i,j\in \Z$ and $\lambda\in \C$, let $k(i:j):= (k_i,...,k_j)$, for $1\leq i\leq j\leq n$, and define
\begin{equation} \label{eq:ij}
q_{k(i:j)}(\lambda):= \left\{\begin{array}{cc} \lambda, & \mbox{ if } i-j = 1, \\ 1, & \mbox{ if } i-j = 2, \\ 0, & \mbox{ if } i-j = 3. \end{array}\right.
\end{equation}
Then, for $n\in \N$ and $k\in \{\pm 1\}^n$, expanding the determinant \eqref{eq:pdef3} by Laplace's rule by the first row and by the last row, we see that
\begin{equation} \label{eq:qk}
q_k(\lambda) =\lambda q_{k(2:n)}(\lambda)-k_1 q_{k(3:n)}(\lambda) = \lambda q_{k(1:n-1)}(\lambda)-k_{n} q_{k(1:n-2)}(\lambda).
\end{equation}
The following lemma follows easily by induction on $n$, using \eqref{eq:qk}. The bounds on $q_k$ stated are used later in Corollary \ref{cor:pkbound}.
\begin{lem} \label{lem:qkbound} If $k=(k_1,...,k_n)\in \{\pm 1\}^n$, for some $n\in \N$, then $q_k$ is a monic polynomial of degree $n+1$, and $q_k$ is even and $q_k(0)=\pm 1$ if $n$ is odd, $q_k$ is odd and $q_k(0)=0$ if $n$ is even. Further, $|q_k(\lambda)| \geq |q_{k(1:n-1)}(\lambda)|+1$, $|q_k(\lambda)| \geq |q_{k(2:n)}(\lambda)|+1$, and $|q_k(\lambda)|\geq n+2$, for $|\lambda|\geq 2$. \end{lem}
For $n\in \N$ let $J_n$ denote the $n\times n$ flip matrix, that is the $n\times n$ matrix with entry $\delta_{i,n+1-j}$ in row $i$, column $j$, where $\delta_{i,j}$ is the Kronecker delta. Then $J_n^2 = I_n$ so that $(\det J_n)^2 = 1$. For $k=(k_1,...,k_n)\in \{\pm 1\}^n$, let $k^\prime := k J_n = (k_n,...,k_1)$. The first part of the following lemma is essentially a particular instance of a general property of determinants.
\begin{lem} \label{lem:qksym} If $k\in \{\pm 1\}^n$, for some $n\in \N$, and $\ell=k^\prime$, then $q_k= q_\ell$; if $\ell=-k$, then
\begin{equation} \label{eq:qel}
q_\ell(\lambda) = \ri^{-n-1} q_k(\ri \lambda).
\end{equation}
\end{lem}
\begin{proof} Suppose first that $\ell=k^\prime$.
Then $q_k(\lambda)$ given by \eqref{eq:qk} is the determinant of a matrix $A$, and $q_\ell(\lambda)$ is the determinant of $J_{n+1} A^T J_{n+1}$, so that $q_\ell(\lambda) =(\det J_{n+1})^2\det A= q_k(\lambda)$. That \eqref{eq:qel} holds if $\ell=-k$ can be shown by an easy induction on $n$, using \eqref{eq:qk}, or directly by a gauge transformation. \end{proof} Here is the announced explicit expression for the determinant in Lemma \ref{lem:spper}. \begin{lem} \label{lem:det} \cite[Lemma 3]{Hagger:symmetries} For $n\in \N$, $k\in \{\pm 1\}^n$, $\lambda\in \C$, and $\varphi\in [0,2\pi)$, $$ (-1)^n\det(a_k(\varphi)-\lambda I_n) = p_k(\lambda) - \re^{\ri\varphi} \prod_{j=1}^n k_j - \re^{-\ri\varphi}, $$ where $p_k$ is a monic polynomial of degree $n$ given by \begin{equation} \label{eq:pdef2} (-1)^n p_k(\lambda) = q_{k(1:n-1)}(-\lambda) -k_n q_{k(2:n-2)}(-\lambda). \end{equation} Further, $p_k$ is odd (even) if $n$ is odd (even). \end{lem} Since, from the above lemmas, $p_k$ is odd (even) and $q_k$ even (odd) if $n$ is odd (even), \eqref{eq:pdef2} implies that \begin{eqnarray} \label{eq:pk2} p_k(\lambda) &=& q_{k(1:n-1)}(\lambda) -k_n q_{k(2:n-2)}(\lambda) \\ \label{eq:pk2a} & = & q_{k(1:n-1)}(\lambda) + q_{k(2:n)}(\lambda) - \lambda q_{k(2:n-1)}(\lambda), \end{eqnarray} this last equation obtained using \eqref{eq:qk}. The following lemma, proved using these representations, makes clear that many different vectors $k$ correspond to the same polynomial $p_k$. \begin{lem}\label{lem:pksym} If $k=(k_1,...k_n)\in \{\pm 1\}^n$, for some $n\in \N$, and $\ell=k^\prime$ or $\ell$ is a cyclic permutation of $k$, then $p_k= p_\ell$. If $\ell=-k$ then $p_\ell(\lambda) = \ri^{-n} p_k(\ri \lambda)$. \end{lem} \begin{proof} Using \eqref{eq:pk2} and \eqref{eq:qk} we see that \begin{eqnarray*} p_\ell(\lambda)-p_k(\lambda) &=& q_{\ell(1:n-1)}(\lambda) - q_{k(1:n-1)}(\lambda) + k_n q_{k(2:n-2)}(\lambda) - \ell_n q_{\ell(2:n-2)}(\lambda)\\ & = & \lambda \left(q_{\ell(2:n-1)}(\lambda) - q_{k(1:n-2)}(\lambda)\right) -\ell_1 q_{\ell(3:n-1)}(\lambda)\\ & & \hspace{3ex} +k_{n-1} q_{k(1:n-3)}(\lambda)+ k_n q_{k(2:n-2)}(\lambda) - \ell_n q_{\ell(2:n-2)}(\lambda). \end{eqnarray*} If $\ell$ is a cyclic shift of $k$, i.e., $\ell_j=k_{j-1}$, $j=2,...,n$, and $\ell_1=k_n$, then the right-hand side is identically zero. Thus $p_\ell=p_k$ if $\ell$ is a cyclic permutation of $k$. If $\ell=k^\prime$ then that $p_k=p_\ell$ follows from \eqref{eq:pk2a} and Lemma \ref{lem:qksym}. If $\ell=-k$ then that $p_\ell(\lambda) = \ri^{-n} p_k(\ri \lambda)$ follows from \eqref{eq:pk2} and Lemma \ref{lem:qksym}. \end{proof} Call $k\in \{\pm 1\}^n$ {\em even} if $\prod_{j=1}^n k_j = 1$, and {\em odd} if $\prod_{j=1}^n k_j = -1$. Then \cite[Corollary 5]{Hagger:symmetries}, it is immediate from Lemmas \ref{lem:spper} and \ref{lem:det} that \begin{equation} \label{eq:per} \spec A_k^{\per} = p_k^{-1}([-2,2]), \mbox{ if $k$ is even}, \; \spec A_k^{\per} = p_k^{-1}(\ri[-2,2]), \mbox{ if $k$ is odd.} \end{equation} \paragraph{Complex dynamics.} In Section \ref{sec:julia} below we show that filled Julia sets, $K(p)$, of particular polynomials $p$, are contained in the periodic part $\Sigma_\pi$ of the almost sure spectrum of the Feinberg-Zee random hopping matrix. To articulate and prove these results we will need terminology and results from complex dynamics. Throughout this section $p$ denotes a polynomial of degree $\geq 2$. 
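Since membership of the filled Julia set $K(p)$ is characterised by boundedness of orbits (Section \ref{sec:intro}), it is also easy to explore numerically. The following sketch is a heuristic illustration only, not used in any argument below; the function name, escape radius and iteration count are ad hoc choices.
\begin{verbatim}
# Heuristic escape-time test for membership of the filled Julia set K(p):
# iterate the orbit of z under p and declare it unbounded once it leaves
# a large disk.  Example: K(z^2 - 2) = [-2, 2].
def in_filled_julia(p, z, max_iter=200, escape_radius=1.0e6):
    w = complex(z)
    for _ in range(max_iter):
        w = p(w)
        if abs(w) > escape_radius:
            return False
    return True

print(in_filled_julia(lambda z: z*z - 2, 1.5))   # True: 1.5 lies in [-2,2]
print(in_filled_julia(lambda z: z*z - 2, 2.5))   # False: the orbit escapes
\end{verbatim}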
We have defined above the compact set that is the filled Julia set $K(p)$, the Julia set $J(p) = \partial K(p) \subset K(p)$, the Fatou set $F(p)$ (the open set that is the complement of $J(p)$), and the orbit of $z\in \C$. It is easy to see that, if $z\not\in K(p)$, i.e., the orbit of $z$ is not bounded, then $p^n(z)\to\infty$ as $n\to\infty$, i.e., $z\in A_p(\infty)$, the {\em basin of attraction of infinity}. We call $S\subset \C$ {\em invariant} if $p(S)=S$, and {\em completely invariant} if both $S$ and its complement are invariant, which holds iff $p^{-1}(S)=S$. Clearly $A_p(\infty)$ is completely invariant. We call $z$ a {\em fixed point} of $p$ if $p(z)=z$ and a {\em periodic point} if $p^n(z)=z$, for some $n\in\N$, in which case the finite sequence $(z_0,z_1,...,z_{n-1})$, where $z_0=z$, $z_1=p(z_0)$, ..., is the {\em cycle} of the periodic point $z$. We say that $z$ is an {\em attracting fixed point} if $|p^\prime(z)|<1$, a {\em repelling fixed point} if $|p^\prime(z)|>1$, and a {\em neutral fixed point} if $|p^\prime(z)|=1$. Generalising, we say that a periodic point $z$ is attracting/repelling/neutral if $|P^\prime(z)|<1$/$>1$/$=1$, where $P=p^n$. By the chain rule, $P^\prime(z) = p^\prime(z_0)...p^\prime(z_{n-1})$, where $z_0=z$ and $z_j:= p(z_{j-1})$, $j=1,...,n-1$. The value $\gamma = P^\prime(z)$ is the {\em multiplier} of the neutral periodic point $z$ (clearly $|\gamma|=1$). If $z$ is a neutral periodic point with multiplier $\gamma$ we say that it is {\em rationally neutral} if $\gamma^N=1$ for some $N\in \N$, otherwise we say that it is {\em irrationally neutral}. We call $z$ a {\em critical point} of $p$ if $p^\prime(z)=0$. If $w$ is an attracting periodic point we denote by $A_p(w)$ the {\em basin of attraction} of the cycle $C= \{z_0,...,z_{n-1}\}$ of $w$, by which we mean $A_p(w) := \{z\in \C: d(p^n(z),C)\to 0 \mbox{ as } n\to\infty\}$. Here, for $S\subset\C$ and $z\in\C$, $d(z,S):= \inf_{w\in S}|z-w|$. It is easy to see that $A_p(w)$ contains some neighbourhood of $C$, and hence that $A_p(w)$ is open. We will make use of standard properties of the Julia set captured in the following theorem. We recall that a family $F$ of analytic functions is {\em normal} at a point $z\in \C$ if, in some fixed neighbourhood $N$ of $z$, each $f\in F$ is analytic and every sequence drawn from $F$ has a subsequence that is convergent uniformly either to some analytic function or to $\infty$. \begin{thm} \cite[Summary 14.12]{Falconer03} $J(p)$ is compact with no isolated points, is uncountable, and has empty interior. $J(p)$ is completely invariant, $J(p)=J(p^n)$, for every $n\in\N$, \begin{equation} \label{eq:nn} J(p) = \{z\in\C: \mbox{ the family } \{p^1,p^2,...\} \mbox{ is not normal at }z\}, \end{equation} $J(p)$ is the closure of the repelling periodic points of $p$, and, for all except at most one $z\in \C$, \begin{equation} \label{eq:Jpi} J(p)\subset \overline{\bigcup_{n=1}^\infty p^{-n}(\{z\})}. \end{equation} \end{thm} The Fatou set $F(p)$ has one unbounded component $U$. It follows from \eqref{eq:nn} that $U\subset A_p(\infty)$; indeed, $U=A_p(\infty)$ as a consequence of the maximum principle \cite{CarlesonGamelin92}. It may happen that this is the only component of $F(p)$ so that $A_p(\infty)=F(p)$. This is the case if $k=(1,1)$ and $p(z)=p_k(z)=z^2-2$, when $K(p)=J(p)=[-2,2]$ \cite[p.~55]{CarlesonGamelin92}.
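As an aside, membership of $K(p)$ is easily explored numerically by the standard escape-time test, iterating $p$ and stopping once the orbit leaves a large disc; the following minimal Python sketch (the escape radius and iteration cap are ad hoc choices, so this is a heuristic illustration only, not used in any proof) does this for $p(z)=z^2-2$.
\begin{verbatim}
def in_filled_julia(p, z, radius=10.0, max_iter=200):
    # Heuristic escape-time test: declare z outside K(p) as soon as
    # the orbit leaves the disc |w| <= radius; otherwise report True.
    w = complex(z)
    for _ in range(max_iter):
        if abs(w) > radius:
            return False
        w = p(w)
    return True

p = lambda z: z * z - 2            # p_k for k = (1,1); here K(p) = [-2,2]
print(in_filled_julia(p, 1.99))    # True:  points of [-2,2] have bounded orbits
print(in_filled_julia(p, 0.1j))    # False: off the segment the orbit escapes
\end{verbatim}
(For the polynomials $p_k$ considered below, an orbit value of modulus greater than $2$ already certifies that the starting point is not in $K(p_k)$; see Corollary \ref{cor:pkjul}.)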
If $F(p)$ has more than one component it either has two components (for example, if $k=(-1,1)$ and $p(z)=p_k(z)=z^2$, when $K(p)=\ovD$ and $J(p)=\partial \D$), or infinitely many components \cite[Theorem IV.1.2]{CarlesonGamelin92}. It follows from \eqref{eq:Jpi} that $J(p) = \partial A_p(\infty)$ and $J(p) = \partial A_p(w)$ if $w$ is an attracting fixed point or periodic point \cite[Theorem III.2.1]{CarlesonGamelin92}. Arguing similarly \cite[Theorem 1.7]{CarlesonGamelin92}, $J(p) = \partial F_B(p)$, where $F_B(p):= \intt(K(p))= F(p)\setminus A_p(\infty)$, so that $K(p)= \overline{F_B(p)}$. Because $J(p)$ is completely invariant and $p$ is an open map, the image $V=p(U)$ of any component $U$ of $F(p)$ is also a component of $F(p)$. Now consider the orbit of $U$, i.e. $(p^n(U))_{n=1}^\infty$. The following statement of possible behaviours is essentially Sullivan's theorem \cite[pp.~69-71]{CarlesonGamelin92}. \begin{thm} \label{thm:sullivan} Let $U$ be a component of $F(p)$. Then one of the following cases holds: i) $p^n(U)=U$, for some $n\in \N$, in which case we call $U$ a {\em periodic component} of $F(p)$, and call the smallest $n$ for which $p^n(U)=U$ the {\em period} of $U$. (If $n=1$, when $U$ is invariant, we also term $U$ a {\em fixed component} of $F(p)$.) ii) $p^r(U)$ is a periodic component of $F(p)$ for some $r\in \N$, in which case we say that $U$ is a {\em preperiodic component} of $F(p)$. \end{thm} The above theorem makes clear that the orbit of every component $U$ of $F(p)$ enters a periodic cycle after a finite number of steps. To understand the eventual fate under iterations of $p$ of the components of the Fatou set it is helpful to understand the possible behaviours of a periodic component. This is achieved in the {\em classification theorem} (e.g.~\cite{CarlesonGamelin92}). To state this theorem we introduce further terminology. Let us call a fixed component $U$ of $F(p)$ a {\em parabolic} component if there exists a neutral fixed point $w\in \partial U$ with multiplier 1 such that the orbit of every $z\in U$ converges to $w$. Call a fixed component $U$ of $F(p)$ a {\em Siegel disk} if $p$ is conjugate to an irrational rotation on $U$, which means that there exists a conformal mapping $\varphi:U\to V$ with $0\in V$ and an irrational $\theta\in \R$ such that \begin{equation} \label{eq:st} \varphi(p(z)) = g(\varphi(z)) = \gamma \varphi(z), \quad z\in U, \end{equation} where $g(w) = \gamma w$, $w\in V$, and $\gamma = \exp(2\pi \ri \theta)$. It is easy to see that, for $w\in U$, $p(w)=w$ iff $w=\varphi^{-1}(0)$, and $p^\prime(w)=\gamma$. Thus every Siegel disk contains a unique irrationally neutral fixed point (the Siegel disk fixed point).
\end{thm} The following proposition relates the above cases to critical points of $p$ (see \cite[Theorems III 2.2 and 2.3, pp.~83-84]{CarlesonGamelin92}): \begin{prop} \label{prop:crit} If $U$ is a periodic component of $F(p)$ with period $n$ then either: (i) $U$ is a parabolic component of $F(p^n)$ or contains an attracting periodic point, in which case $\cup_{m=1}^n p^m(U)$ contains a critical point of $p$; or (ii) $U$ is a Siegel disk component of $F(p^n)$ and there is a critical point $w$ of $p$ such that the orbit of $w$ is dense in $\partial U$. \end{prop} The following proposition will do the work for us in Section \ref{sec:julia}. \begin{prop} \label{prop:cd2a} Suppose that $S\subset \C$ is bounded, open and simply-connected, that $T\subset S$ is closed, and that the orbit of every critical point in $K(p)$ eventually lies in $T$. Then $$ K(p)\subset G:= \overline{\bigcup_{n=1}^\infty p^{-n}(S)}. $$ \end{prop} \begin{proof} That $z\in G$ if $z\in J(p)$ follows from \eqref{eq:Jpi}. If $z\in F_B(p)$ then, by Theorems \ref{thm:sullivan} and \ref{thm:class}, after a finite number of iterations the orbit of $z$ is in a periodic component of $F(p)$ that is parabolic, part of the domain of attraction of an attracting periodic point, or is a Siegel disk. In the first two cases it follows that $d(p^n(z), C)\to 0$ as $n\to\infty$ for some cycle $C$, but also $d(p^n(w),C)\to 0$ for some critical point $w$ by Proposition \ref{prop:crit}. This last implies that $C\cap T$ is non-empty, and so $p^n(z)\in S$ for some $n$. In the case that the orbit of $z$ is eventually in a Siegel disk then also $p^n(z)\in S$ for some $n$ for, if the orbit of every critical point $w\in K(p)$ is eventually in $T$, it follows that the boundary of every Siegel disk is in $T$, and (as $S$ is simply connected) that every Siegel disk is in $S$. \end{proof} \paragraph{Previous upper bounds on $\Sigma$.} We have noted above that, if $c\in \Ome$ is pseudo-ergodic, then $\Sigma= \spec A_c \subset \overline{W(A_c)} = \overline{\Delta}$, given by \eqref{eq:nr}. Similarly, the spectrum of $A_c^2$ is contained in the closure of {\em its} numerical range, so that\footnote{Equation \eqref{eq:nr2} is the idea behind higher order numerical ranges; indeed, where $p$ is the polynomial $p(\lambda) = \lambda^2$, $N_2$ is $\mathrm{Num}(p,A_c)$ in the notation of \cite[p.~278]{LOTS}, so that $N_2$ is a superset of the second order numerical range.} \begin{equation} \label{eq:nr2} \Sigma \subset \{\pm \sqrt{z}:z\in \spec(A_c^2)\} \subset N_2:= \{\pm \sqrt{z}:z\in \overline{W(A_c^2)}\}. \end{equation} Hagger \cite{Hagger:NumRange} introduces a new, general method for computing numerical ranges of infinite tridiagonal matrices via the Schur test, which he applies to computing the numerical range of $A_c^2$ (by expressing it as the direct sum of tridiagonal matrices). These calculations show that $\Sigma\subset N_2\subsetneq \overline{\Delta}$; indeed, the calculations in \cite{Hagger:NumRange} imply that $N_2 = \{r\exp(\ri\theta): 0\leq r\leq \rho(\theta), \, 0\leq \theta < 2\pi\}$, where $\rho\in C(\R)$ is even and periodic with period $\pi/2$, given explicitly on $[-\pi/4,\pi/4]$ by \begin{equation} \label{eq:nr3} \rho(\theta) = \left\{\begin{array}{cc} \sqrt{2}, & \pi/6\leq |\theta|\leq \pi/4, \\ 2/\sqrt{\cos2\theta + \sqrt{3}\ |\sin 2\theta|}, & |\theta| \leq \pi/6. \end{array}\right. 
\end{equation} By comparison, in polar form, \begin{eqnarray} \label{eq:nr2a} W(A_c) = \Delta = \{r\re^{\ri \theta}: 0\leq r < 2/(|\cos\theta|+|\sin\theta|),\ 0\leq \theta<2\pi\}. \end{eqnarray} Figure \ref{Figure2} includes a visualisation of $\overline{\Delta}$ and $N_2$. The bound \eqref{eq:nr2}, expressed concretely through \eqref{eq:nr3}, is the sharpest explicit upper bound on $\Sigma$ obtained to date. It implies that $\Sigma$ is not convex since we also know (see \eqref{eq:pi1} below) that $\pm 2$, $\pm 2\ri$, and $\pm 1\pm \ri$ are all in $\Sigma$. A different family of upper bounds was established in \cite{CCL2} (and see \cite{HengPhD}), expressed in terms of pseudospectra. For a square matrix $A$ of order $n$ and $\epsilon>0$ let $\specn_\epsilon A$ denote the $\epsilon$-pseudospectrum of $A$ (with respect to the 2-norm), i.e., $\specn_\epsilon A := \{\lambda \in \C: \|(A-\lambda I_n)^{-1}\|_2 >\epsilon^{-1}\}$, with the understanding that $\|(A-\lambda I)^{-1}\|_2=\infty$ if $\lambda$ is an eigenvalue, so that $\spec A \subset \spec_\epsilon A$. (Here $\|\cdot\|_2$ is the operator norm of a linear mapping on $\C^n$ equipped with the $2$-norm.) Analogously to \eqref{eq:sigma}, for $\epsilon >0$ and $n\in\N$, let \begin{equation} \label{eq:sigma2} \sigma_{n,\epsilon} := \bigcup_{A\in V_n} \specn_\epsilon A, \end{equation} which is the union of the pseudospectra of $2^{n-1}$ distinct matrices. Then it is shown in \cite{CCL2} that \begin{equation} \label{eq:Sigmaconv} \Sigma^*_n:= \overline{\sigma_{n,\epsilon_n}}\ \searrow\ \Sigma\ \mbox{ as }\ n\to\infty, \end{equation} where $\epsilon_n := 4\sin \theta_n\leq 2\pi/(n+2)$ and $\theta_n$ is the unique solution in $(\pi/(2n+6),\pi/(2n+4)]$ of the equation $2\cos((n+1)\theta) = \cos((n-1)\theta)$. Clearly, $\{\Sigma^*_n : n\in\N\}$ is a convergent family of upper bounds for $\Sigma$ that is in principle computable; deciding whether $\lambda\in\Sigma^*_n$ requires only computation of smallest singular values of $n\times n$ matrices (see \cite[(39)]{CCL2}). Explicitly $\Sigma^*_1=2\ovD$, and $\Sigma_n^*$ is plotted for $n=6,12,18$ in \cite{CCL2}. But for these values $\Sigma_n^*\supset \Delta$, and computing $\Sigma_n^*$ for larger $n$ is challenging, requiring computation of the smallest singular value of $2^{n-1}$ matrices of order $n$ to decide whether a particular $\lambda\in \Sigma_n^*$. Substantial numerical calculations in \cite{CCL2} established that $1.5+0.5\ri\not\in \Sigma^*_{34}$, providing the first proof that $\Sigma$ is a strict subset of $\overline{\Delta}$, this confirmed now by the simple explicit bound \eqref{eq:nr2} and \eqref{eq:nr3}. \section{Lower Bounds on $\Sigma$ and Symmetries of $\Sigma$ and $\Sigma_\pi$} \label{sec:poly} Complementing the upper bounds on $\Sigma$ that we have just discussed, lower bounds on $\Sigma$ have been obtained by two methods of argument. The first is that \eqref{eq:spec} tells us that $\spec A_b\subset \Sigma$ for every $b\in \Ome$. In particular this holds in the case when $b$ is periodic, when the spectrum of $A_b$ is given explicitly by Lemmas \ref{lem:spper} and \ref{lem:det}, so that, as observed in the introduction, $$ \pi_n := \bigcup_{k\in \{\pm 1\}^n} \spec A_k^{\per} \subset \Sigma. $$ Explicitly \cite[Lemma 2.6]{CCL2}, in particular, \begin{equation} \label{eq:pi1} \pi_1 = [-2,2]\cup \ri [-2,2]\ \mbox{ and }\ \pi_2 = \pi_1 \cup \{x\pm \ri x: -1\leq x\leq 1\}. 
\end{equation} In the introduction we have defined $\pi_\infty:= \cup_{n=1}^\infty \pi_n$ and have termed $\Sigma_\pi:=\overline{\pi_\infty}$, also a subset of $\Sigma$ since $\Sigma$ is closed, the periodic part of $\Sigma$. We have also recalled the conjecture of \cite{CCL} that $\Sigma_\pi=\Sigma$. Let $$ \Pi_n := \bigcup_{m=1}^n \pi_m \subset \pi_\infty \subset \Sigma_\pi \subset \Sigma. $$ Then it follows from Lemma \ref{eq:setconv} that \begin{equation} \label{eq:convPin} \Pi_n \nearrow \Sigma_\pi\ \mbox{ as }\ n\to\infty. \end{equation} If, as conjectured, $\Sigma_\pi=\Sigma$, then \eqref{eq:convPin} complements \eqref{eq:Sigmaconv}; together they sandwich $\Sigma$ by convergent sequences of upper ($\Sigma_n^*$) and lower ($\Pi_n$) bounds that can both be computed by calculating eigenvalues of $n\times n$ matrices. Figures \ref{Figure2} and \ref{Figure3} include visualisations of $\pi_{30}$, indistinguishable by eye from $\Pi_{30}$, but note that the solid appearance of $\pi_{30}$, which is the union of a large but finite number of analytic arcs, is illusory. See \cite{CCL,CCL2} for visualisations of $\pi_{n}$ for a range of $n$, suggestive that the convergence \eqref{eq:convPin} is approximately achieved by $n=30$. The same method of argument \eqref{eq:spec} to obtain lower bounds was used in \cite{CCL}, where a special sequence $b\in \Ome$ was constructed with the property that $\spec A_b \supset \ovD$, so that, by \eqref{eq:spec}, $\ovD \subset \Sigma$. The stronger result \eqref{eq:unit}, that this new lower bound on $\Sigma$ is in fact also a subset of $\Sigma_\pi$, was shown in \cite{CWDavies2011}, via a second method of argument for constructing lower bounds, based on surprising symmetries of $\Sigma$ and $\Sigma_\pi$. We will spell out in a moment these symmetries (one of these described first in \cite{CWDavies2011}, the whole infinite family in \cite{Hagger:symmetries}), which will be both a main subject of study and a main tool for argument in this paper. But first we note more straightforward but important symmetries. In this lemma and throughout $\overline \lambda$ denotes the complex conjugate of $\lambda\in\C$. \begin{lem} \label{lem:symeasy} \cite[Lemma 3.4]{CCL2} (and see \cite{HolzOrlZee}, \cite[Lemma 4]{CWDavies2011}). All of $\pi_n$, $\sigma_n$, $\Sigma_\pi$, and $\Sigma$ are invariant with respect to the maps $\lambda\mapsto \ri\lambda$ and $\lambda \mapsto \overline{\lambda}$, and so are invariant under the dihedral symmetry group $D_2$ generated by these two maps. \end{lem} To expand on the brief discussion in the introduction, \cite{Hagger:symmetries} proves the existence of an infinite set $\cS$ of monic polynomials of degree $\geq 2$, this set defined constructively in the following theorem, such that the elements $p\in \cS$ are symmetries of $\pi_\infty$ and $\Sigma$ in the sense that \eqref{eq:spsym} below holds. \begin{thm} \label{thm:cS} \cite{Hagger:symmetries} Let $\cS$ denote the set of those polynomials $p_k$, defined by \eqref{eq:pdef2}, with $k=(k_1,...,k_n)\in \{\pm 1\}^n$ for some $n\geq 2$, for which it holds that: (i) $k_{n-1}=-1$ and $k_n=1$; (ii) $p_k=p_{\widehat k}$, where $\widehat k\in \{\pm 1\}^n$ is the vector identical to $k$ but with the last two entries interchanged, so that $\widehat k_{n-1}=1$ and $\widehat k_n = -1$. Then \begin{equation} \label{eq:spsym} \Sigma \subset p(\Sigma) \mbox{ and } p^{-1}(\pi_\infty) \subset \pi_\infty, \end{equation} for all $p\in \cS$. 
\end{thm} \noindent We will call $\cS$ {\em Hagger's set of polynomial symmetries for $\Sigma$.} We remark that if $p\in \cS$ then it follows from \eqref{eq:spsym}, by taking closures and recalling that $p$ is continuous, that also \begin{equation} \label{eq:pply} p^{-1}(\Sigma_\pi) \subset \Sigma_\pi \mbox{ and } p^{-1}(\intt(\Sigma_\pi)) \subset \intt(\Sigma_\pi). \end{equation} We note also that $p^{-1}(\pi_\infty) \subset \pi_\infty$ implies that $\pi_\infty \subset p(\pi_\infty)$, but not vice versa, and that $\Sigma \subset p(\Sigma)$ iff $$ p^{-1}(\{\lambda\}) \cap \Sigma \neq \emptyset,\ \mbox{ for all } \lambda \in \Sigma. $$ Further, we note that it was shown earlier in \cite{CWDavies2011} that \eqref{eq:spsym} holds for the particular case $p(\lambda)=\lambda^2$ (this the only element of $\cS$ of degree 2, see Table \ref{table1}); in \cite{CWDavies2011} it was also shown, as an immediate consequence of \eqref{eq:spsym} and Lemma \ref{lem:symeasy}, that $$ p^{-1}(\Sigma) \subset \Sigma, $$ for $p(\lambda)=\lambda^2$. Whether this last inclusion holds in fact for all $p\in \cS$ is an open problem. Our first result is a much more explicit characterisation of $\cS$. \begin{prop} \label{prop:eqchar} The set $\cS$ is given by $\cS=\{p_k:k\in \cK\}$, where $\cK$ consists of those vectors $k=(k_1,...,k_n)\in \{\pm 1\}^n$ with $n\geq 2$, for which: (i) $k_{n-1} = -1$ and $k_n = 1$; and (ii) $n=2$, or $n\geq 3$ and $k_j = k_{n-j-1}$, for $1 \leq j \leq n-2$, so that $(k_1,..., k_{n-2})$ is a palindrome. Moreover, if $k\in \cK$, then \begin{equation} \label{p_k} p_k(\lambda) = \lambda q_{k(1:n-2)}(\lambda). \end{equation} \end{prop} \begin{proof} It is clear from Theorem \ref{thm:cS} that what we have to prove is that, if $k\in \{\pm 1\}^n$ with $n\geq 2$ and $k_{n-1}=-1$, $k_n=1$, then $p_k=p_{\widehat k}$ if $n=2$ or 3; further, if $n\geq 4$, then $p_k=p_{\widehat k}$ iff $(k_1,...,k_{n-2})$ is a palindrome. If $k\in \{\pm 1\}^n$ with $n\geq 2$ and $k_{n-1}=-1$, $k_n=1$, then, from \eqref{eq:pk2} and \eqref{eq:qk}, \begin{eqnarray*} p_k(\lambda) &=& q_{k(1:n-1)}(\lambda) - k_n q_{k(2:n-2)}(\lambda)\\ & = & \lambda q_{k(1:n-2)}(\lambda) + q_{k(1:n-3)}(\lambda) - q_{k(2:n-2)}(\lambda). \end{eqnarray*} Thus, if $n=2$ or 3, or $n\geq 4$ and $(k_1,...,k_{n-2})$ is a palindrome, $p_k(\lambda) = \lambda q_{k(1:n-2)}(\lambda)$ since $q_{k(1:n-3)}(\lambda) = q_{k(2:n-2)}(\lambda)$, this a consequence of the definitions \eqref{eq:ij} in the cases $n=2$ and 3, of Lemma \ref{lem:qksym} in the case $n\geq 4$. Similarly, $p_{\widehat k}(\lambda) = \lambda q_{k(1:n-2)}(\lambda)$, so that $p_k=p_{\widehat k}$. Conversely, assume that $k\in \{\pm 1\}^n$ with $n\geq 4$, $k_{n-1}=-1$, $k_n=1$, and $p_k = p_{\widehat{k}}$. To show that $(k_1,...,k_{n-2})$ is a palindrome we need to show that $k_j=k_{n-j-1}$, for $1\leq j\leq(n-2)/2$. Using \eqref{eq:pk2} and then \eqref{eq:qk}, we see that \begin{eqnarray*} 0 = p_k(\lambda) - p_{\widehat{k}}(\lambda)& =& q_{k(1:n-1)}(\lambda) - q_{\widehat k(1:n-1)}(\lambda) - 2q_{k(2:n-2)}(\lambda)\\ &=& 2q_{k(1:n-3)}(\lambda) - 2q_{k(2:n-2)}(\lambda). \end{eqnarray*} Thus $q_{k(1:n-3)}= q_{k(2:n-2)}$. 
But, if $q_{k(j:n-j-2)} = q_{k(j+1:n-j-1)}$ and $1\leq j \leq (n-2)/2$, then, applying \eqref{eq:qk}, \begin{eqnarray*} 0&=&q_{k(j:n-j-2)}(\lambda) - q_{k(j+1:n-j-1)}(\lambda)\\ &=& \lambda q_{k(j+1:n-j-2)}(\lambda)-k_jq_{k(j+2:n-j-2)}(\lambda)\\ & & \hspace{4ex} - (\lambda q_{k(j+1:n-j-2)}(\lambda)-k_{n-j-1}q_{k(j+1:n-j-3)}(\lambda))\\ &=& -k_jq_{k(j+2:n-j-2)}(\lambda)+k_{n-j-1}q_{k(j+1:n-j-3)}(\lambda). \end{eqnarray*} As this holds for all $\lambda$ and, by Lemma \ref{lem:qkbound} and \eqref{eq:ij}, $q_{k(j+2:n-j-2)}$ and $q_{k(j+1:n-j-3)}$ are both monic polynomials of degree $n-2j-2$, it follows first that $k_{j}=k_{n-j-1}$ and then that $q_{k(j+1:n-j-3)}=q_{k(j+2:n-j-2)}$. Thus that $k_j=k_{n-j-1}$ for $1\leq j\leq(n-2)/2$ follows by induction on $j$. \end{proof} The following corollary is immediate from \eqref{p_k} and Lemma \ref{lem:qkbound}. \begin{cor} \label{cor:small} Suppose that $n\geq 2$, $k\in \{\pm 1\}^n$, and $p_k\in \cS$. Then, as $\lambda\to 0$, $p_k(\lambda) =\pm\lambda + O(\lambda^3)$ if $n$ is odd, while $p_k(\lambda) = O(\lambda^2)$ if $n$ is even. \end{cor} Let us denote by $P_m$ the polynomial $p_k$ when $k$ has length $m\ge 2$, $k_{m-1}=-1$, $k_m=1$, and all other entries are $1$'s. It is convenient also to define $P_1(\lambda)=\lambda$. Clearly, as a consequence of the above proposition, $P_m\in \cS$ for $m\geq 2$ (that these particular polynomials are in $\cS$ was observed earlier in \cite{Hagger:symmetries}). We will write down shortly an explicit formula for $P_m$ in terms of Chebyshev polynomials of the 2nd kind. Recall that $U_n(x)$, the Chebychev polynomial of the 2nd kind of degree $n$, is defined by \cite{AS} $U_0(x):=1$, $U_1(x):=2x$, and $U_{n+1}(x) := 2xU_n(x)-U_{n-1}(x)$, for $n\in\N$. \begin{lem} \label{lem:cheb} For $m\in \N$, $P_m(\lambda) = \lambda U_{m-1}(\lambda/2)$. \end{lem} \begin{proof} This follows easily by induction from \eqref{p_k} and \eqref{eq:qk}. \end{proof} We note that, using the standard trigonometric representations for the Chebychev polynomials \cite{AS}, for $m\in \N$, \begin{equation} \label{eq:theta} P_m(2 \cos \theta) = 2 \cos\theta U_{m-1}(\cos\theta) = 2\cot\theta \sin m\theta =: r_m(\theta). \end{equation} A similar representation in terms of hyperbolic functions can be given for the polynomial $p_k$ when $k$ has length $2m-1$ and $k_j = (-1)^j$; we denote this polynomial by $Q_m$. Clearly, for $m\geq 2$, $Q_m\in \cS$ by Proposition \ref{prop:eqchar}, and $Q_m$ is an odd function by Lemma \ref{lem:det}. The proof of the following lemma, like that of Lemma \ref{lem:cheb}, is a straightforward induction that we leave to the reader. \begin{lem} \label{lem:cheb2} $Q_1(\lambda) = \lambda$, $Q_2(\lambda) = \lambda^3 + \lambda$, and $Q_{m+1}(\lambda) = \lambda^2 Q_m(\lambda) + Q_{m-1}(\lambda)$, for $m\geq 2$. Moreover, for $m\in \N$ and $\theta\geq 0$, \[Q_m\left(\sqrt{2\sinh \theta}\right) = \begin{cases} \sqrt{2\sinh\theta}\ \displaystyle{\frac{\sinh(m\theta) + \cosh((m-1)\theta)}{\cosh\theta}}, & \text{if $m$ is even,} \\ \sqrt{2\sinh \theta}\ \displaystyle{\frac{\cosh(m\theta) + \sinh((m-1)\theta)}{\cosh \theta}}, & \text{if $m$ is odd.}\end{cases}\] \end{lem} The following lemma leads, using Lemmas \ref{lem:cheb} and \ref{lem:cheb2}, to explicit formulae for other polynomials in $\cS$. 
For example, if $P_m^*$ denotes the polynomial $p_k$ when $k$ has length $m\ge 2$, $k_{m-1}=-1$, $k_m=1$, and all other entries are $-1$'s, then, by Lemmas \ref{lem:cheb} and \ref{lem:minus}, \begin{equation} \label{eq:PM*} P_m^*(\lambda) = \ri^{-m} P_m(\ri \lambda) = \ri^{1-m}\lambda U_{m-1}(\ri \lambda/2). \end{equation} \begin{lem} \label{lem:minus} If $k\in \{\pm 1\}^n$ and $p_k\in \cS$, then $p_{-k}\in \cS$ and $p_{-k}(\lambda)=\ri^{-n} p_k(\ri\lambda)$. \end{lem} \begin{proof} Suppose that $k\in \{\pm 1\}^n$ and $p_k\in \cS$. If $n=2$, then $\widehat{k} = -k$ and $p_{-k}=p_{\widehat k} = p_k\in \cS$. If $n\geq 3$, defining $\ell\in \{\pm 1\}^n$ by $\ell_{n-1}=-1$, $\ell_n=1$, and $\ell_j=-k_j$, for $j=1,...,n-2$, $p_\ell\in \cS$ by Proposition \ref{prop:eqchar}, so that $p_{-k}=p_{\widehat \ell}=p_\ell\in \cS$. That $p_{-k}(\lambda)=\ri^{-n} p_k(\ri\lambda)$ comes from Lemma \ref{lem:pksym}. \end{proof} We note that Proposition \ref{prop:eqchar} implies that there are precisely $2^{\left\lceil\frac{n}{2}\right\rceil-1}$ vectors of length $n$ in $\cK$, so that there are between 1 and $2^{\left\lceil\frac{n}{2}\right\rceil-1}$ polynomials of degree $n$ in $\cS$, as conjectured in \cite{Hagger:symmetries}. Note, however, that there may be more than one $k \in \cK$ that induce the same polynomial $p_k\in \cS$. For example, $p_k(\lambda) = \lambda^6-\lambda^2$ for $k=(-1,1,1,-1,-1,1)$, and, defining $\ell=(1,-1,-1,1,-1,1)$ and using Lemma \ref{lem:minus}, also $p_\ell(\lambda) = p_{\widehat \ell}(\lambda)=p_{-k}(\lambda) = -p_k(\ri \lambda) = \lambda^6-\lambda^2$. In Table \ref{table1} (cf.~\cite{Hagger:symmetries}) we tabulate all the polynomials in $\cS$ of degree $\leq 6$. \begin{table} \centering \begin{tabular}{|r|l|} \hline $k$ & $p_k(\lambda)$ \\ \hline $(-1,1)$ & $\lambda^2= P_2(\lambda)$\\ \hline $(1,-1,1)$ & $\lambda^3-\lambda= P_3(\lambda)$\\ \hline $(-1,-1,1)$ & $\lambda^3+\lambda= Q_2(\lambda)=P_3^*(\lambda)$\\ \hline $(1,1,-1,1)$ & $\lambda^4-2\lambda^2= P_4(\lambda)$\\ \hline $(-1,-1,-1,1)$ & $\lambda^4+2\lambda^2=P_4^*(\lambda)$\\ \hline $(1,1,1,-1,1)$ & $\lambda^5-3\lambda^3+\lambda=P_5(\lambda)$\\ \hline $(1,-1,1,-1,1)$ & $\lambda^5-\lambda^3+\lambda= -\ri Q_3(\ri\lambda)$\\ \hline $(-1,1,-1,-1,1)$ & $\lambda^5+\lambda^3+\lambda=Q_3(\lambda)$\\ \hline $(-1,-1,-1,-1,1)$ & $\lambda^5+3\lambda^3+\lambda=P_5^*(\lambda)$\\ \hline $(1,1,1,1,-1,1)$ & $\lambda^6-4\lambda^4+3\lambda^2=P_6(\lambda)$\\ \hline $(1,-1,-1,1,-1,1)$ & $\lambda^6-\lambda^2=P_3(P_2(\lambda))$\\ \hline $(-1,-1,-1,-1,-1,1)$ & $\lambda^6+4\lambda^4+3\lambda^2=P_6^*(\lambda)$\\ \hline \end{tabular} \caption{The elements $p_k\in \cS$ of degree $\leq 6$.}\label{table1} \end{table} If $p,q\in \cS$, so that $p$ and $q$ are polynomial symmetries of $\Sigma$ in the sense that \eqref{eq:spsym} holds, then also the composition $p\circ q$ is a polynomial symmetry of $\Sigma$ in the same sense. But note from Table \ref{table1} that, while $P_3\circ P_2\in \cS$, none of $P_2\circ P_2$, $P_2\circ P_2\circ P_2$, $Q_2\circ P_2$, $P_2\circ P_3$, or $P_2\circ Q_2$ are in $\cS$. Thus $\cS$ does not contain all polynomial symmetries of $\Sigma$, but whether there are polynomial symmetries that are not either in $\cS$ or else compositions of elements of $\cS$ is an open question. We finish this section by showing in subsection \ref{subsec:a} the surprising result that $\cS$ is large enough that we can reconstruct the whole of $\Sigma_\pi$ from the polynomials $p_k\in \cS$. 
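As an aside, the count of $\cK_n$ and the list of distinct polynomials in Table \ref{table1} are easily reproduced by machine from Proposition \ref{prop:eqchar} and the recursions \eqref{eq:qk} and \eqref{eq:pk2a}; the following minimal Python sketch (an informal check only) does this for $2\leq n\leq 6$.
\begin{verbatim}
from itertools import product
from numpy.polynomial import Polynomial as Poly

x = Poly([0, 1])

def q(k):
    prev, cur = Poly([1]), x                 # the degenerate values of (eq:ij)
    for kj in k:
        prev, cur = cur, x * cur - kj * prev # recursion (eq:qk)
    return cur

def p(k):
    return q(k[:-1]) + q(k[1:]) - x * q(k[1:-1])   # formula (eq:pk2a)

for n in range(2, 7):
    # K_n: last two entries (-1,1), first n-2 entries a palindrome
    Kn = [kk + (-1, 1) for kk in product((-1, 1), repeat=n - 2)
          if kk == kk[::-1]]
    distinct = {tuple(int(round(c)) for c in p(k).coef) for k in Kn}
    print(n, len(Kn), len(distinct))         # len(Kn) = 2^(ceil(n/2)-1)
\end{verbatim}
The output confirms that $|\cK_n|=2^{\left\lceil\frac{n}{2}\right\rceil-1}$ and recovers the $1,2,2,4,3$ distinct polynomials of degrees $2,\dots,6$ listed in Table \ref{table1}. We return now to the reconstruction of $\Sigma_\pi$ from the polynomials in $\cS$ announced above.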
This result will in turn be key to the proof of our main theorem in Section \ref{sec:interior}. Then in subsection \ref{subsec:b} we use that \eqref{eq:spsym} holds for the polynomials in Table \ref{table1} to obtain new explicit lower bounds on $\Sigma_\pi$, including that $1.1\ovD \subset \Sigma_\pi\subset \Sigma$. \subsection{Connecting eigenvalues of finite matrices and polynomial symmetries of $\Sigma$} \label{subsec:a} Recall from Proposition \ref{prop:eqchar} that $\cS=\{p_k:k\in \cK\}$, let $K := \cup_{n\in \N} \{\pm 1\}^n$, and define \begin{equation} \label{eq:piinfS} \pi_\infty^{\cS} := \bigcup_{k\in \cK} \spec A^{\per}_k \subset \pi_\infty = \bigcup_{k\in K} \spec A^{\per}_k. \end{equation} The following result seems rather surprising, given that $\cK$ is much smaller than $K$ in the sense that there are precisely $2^{\left\lceil\frac{n}{2}\right\rceil-1}$ vectors of length $n$ in $\cK$ but $2^n$ in $K$. \begin{thm} \label{thm:dense} $\sigma_\infty \subset \pi_\infty^{\cS}$, so that $\pi_\infty^{\cS}$ is dense in $\Sigma_\pi$ and $ \Sigma_\pi := \overline{\pi_\infty} = \overline{\sigma_\infty} = \overline{\pi_\infty^{\cS}}. $ \end{thm} \begin{proof} We will show in Proposition \ref{prop:inc} below that, for $n\geq2$, $$ \sigma_n \subset \pi_{2n+2}^\cS := \bigcup_{k\in \cK_{2n+2}} \spec A^{\per}_k \subset \pi_{2n+2}, $$ where, for $m\geq 2$, $\cK_m$ denotes the set of those vectors in $\cK$ that have length $m$. Since also $\sigma_1=\{0\}\subset \pi_m^{\cS}$, for every $m\in \N$ (e.g.~\cite[Lemma 2.10]{CCL2}), this implies that $\sigma_\infty \subset \pi_\infty^{\cS}$, which implies that $\pi_\infty^{\cS}$ is dense in $\Sigma_\pi$ since $\sigma_\infty$ is dense in $\pi_\infty$ \cite[Theorem 1]{Hagger:dense}, and the result follows. \end{proof} The key step in the proof of the above theorem is the following refinement of Theorem 4.1 in \cite{CCL2}, which uses our new characterisation, Proposition \ref{prop:eqchar}, of $\cS$. \begin{prop} \label{prop:inc} Suppose $a,b,c,d\in \{\pm 1\}$ and $k\in \{\pm 1\}^n$, for some $n\geq 2$, and let $\widetilde k := (k_1,...,k_{n-1})$. Then \begin{equation} \label{eq:elldef} \spec A_k^{(n)} \subset \spec A_\ell^{\per},\ \mbox{ for }\ \ell =({\widetilde k}^\prime,a,b,\widetilde k,c,d)\in \{\pm 1\}^{2n+2}, \end{equation} where ${\widetilde k}^\prime = \widetilde k J_{n-1} = (k_{n-1},...,k_1)$. Further, $\spec A_k^{(n)} \subset \pi_{2n+2}^\cS$. \end{prop} \begin{proof} The proof modifies \cite[Theorem 4.1]{CCL2} where the same result is proved for the special case that $a=c=-1$, $b=d=1$. 
Following that proof, suppose that $\lambda$ is an eigenvalue of $A_k^{(n)}$ with corresponding eigenvector $x$, let $\widehat x:= J_n x$ and $\widehat A_k^{(n)}:= J_n A_k^{(n)} J_n$, so that $\widehat A_k^{(n)} \widehat x = \lambda \widehat x$, and set \begin{equation}\nonumber B\ :=\ \left(\begin{array}{ccccccc} \ddots& \begin{array}{ccc}1&& \end{array}\\ \cline{2-2} \begin{array}{c}d\\ \\ \\ \end{array}& \multicolumn{1}{|c|}{\widehat A_k^{(n)}}& \begin{array}{c} \\ \\-a \end{array}\\ \cline{2-2} &\begin{array}{ccr}&&-1\end{array} & \boxed{0} & \begin{array}{lcc}1&&\end{array}\\ \cline{4-4} & & \begin{array}{c}b\\ \\ \\ \end{array}& \multicolumn{1}{|c|}{{A_k^{(n)}}}& \begin{array}{c} \\ \\-c \end{array}\\ \cline{4-4} &&&\begin{array}{ccr}&&-1\end{array} & 0 & \begin{array}{lcc}1&&\end{array}\\ \cline{6-6} & && & \begin{array}{c}d\\ \\ \\ \end{array}& \multicolumn{1}{|c|}{\widehat A_k^{(n)}}& \begin{array}{c} \\ \\-a \end{array}\\ \cline{6-6} &&&&& \begin{array}{ccc}&&-1\end{array}& \ddots \end{array}\right), \end{equation} where $\boxed{0}$ marks the entry at position $(-1,-1)$. Then $B$ is a bi-infinite tridiagonal matrix with zeros on the main diagonal, and with each of the first sub- and super-diagonals a vector in $\Omega$ that is periodic with period $2n+2$. Define $\tilde x\in \ell^\infty$, the space of bounded, complex-valued sequences $\phi:\Z\to \C$, by $$ \tilde x := (...,0,\widehat x^T,\boxed{0},x^T,0,\widehat x^T,0,x^T,...)^T, $$ where $\boxed{0}$ marks the entry $\tilde x_{-1}$. Then it is easy to see that $B\tilde x =\lambda \tilde x$, so that $\lambda \in \spec B$.\footnote{Clearly $\lambda$ is in the spectrum of $B$ as an operator on $\ell^\infty$, but this is the same as the $\ell^2$-spectrum from general results on band operators, e.g.~\cite{LiBook}.} Further, by a simple gauge transformation \cite[Lemma 3.2]{CCL2}, $\spec B = \spec A_\ell^{\per}$, where $\ell$ is given by \eqref{eq:elldef}. We have shown that $\spec A_k^{(n)} \subset \spec A_\ell^{\per}$. But, choosing in particular $a=b$ and $c=-1$, $d=1$, we see from Proposition \ref{prop:eqchar} that $\ell\in \cK_{2n+2}$, so that $\spec A_k^{(n)} \subset \spec A_\ell^{\per} \subset\pi_{2n+2}^\cS$. \end{proof} \begin{rem} \label{rem:one} Proposition \ref{prop:inc} implies that $\spec A_k^{(n)} \subset \spec A_\ell^{\per}$ for 16 different vectors $\ell\in \{\pm 1\}^{2n+2}$, corresponding to different choices of $a,b,c,d\in \{\pm 1\}$. Some of these vectors correspond to the same polynomial $p_\ell$ and hence, by \eqref{eq:per}, to the same spectrum $\spec A_\ell^{\per}$. In particular, if $a=b=\pm 1$, then the choices $c=-d=1$ and $c=-d=-1$ lead to the same polynomial by Proposition \ref{prop:eqchar} and the definition of $\cS$. But, if $a\neq b$, again by Proposition \ref{prop:eqchar} and the definition of $\cS$, the choices $c=-d=1$ and $c=-d=-1$ must lead to different polynomials, and neither of these polynomials can be in $\cS$. On the other hand, as observed already in the proof of Proposition \ref{prop:inc}, the choices $a=b$ and $c=-d$ lead to an $\ell\in \cK$ and so to a polynomial $p_\ell\in \cS$. Thus Proposition \ref{prop:inc} implies that there are at least three distinct polynomials $p_\ell$ such that $\spec A_k^{(n)} \subset \spec A_\ell^{\per}$. Figure \ref{Figure1} plots $\spec A_k^{(n)}$ and $\spec A_\ell^{\per}$, for the vectors defined by \eqref{eq:elldef}, in the case that $n=3$ and $k_1=k_2=1$.
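The inclusions of Proposition \ref{prop:inc} are also easy to check numerically. For instance, the following minimal Python sketch (an informal illustration only) treats the case plotted in Figure \ref{Figure1}: it builds $\ell$ as in \eqref{eq:elldef} with $a=b=1$, $c=-1$, $d=1$, evaluates $p_\ell$ by the recursions \eqref{eq:qk} and \eqref{eq:pk2a}, and confirms via \eqref{eq:per} that every eigenvalue of $A_k^{(3)}$ (taken here, consistently with Lemmas \ref{lem:spper} and \ref{lem:det}, as the tridiagonal matrix with zero diagonal, ones on the superdiagonal and $k_1,k_2$ on the subdiagonal) lies in $\spec A_\ell^{\per}$.
\begin{verbatim}
import numpy as np

def q(k, lam):
    prev, cur = 1.0 + 0j, lam                    # degenerate values of (eq:ij)
    for kj in k:
        prev, cur = cur, lam * cur - kj * prev   # recursion (eq:qk)
    return cur

def p(k, lam):
    return q(k[:-1], lam) + q(k[1:], lam) - lam * q(k[1:-1], lam)  # (eq:pk2a)

def on_segment(z, even, tol=1e-9):
    # z in [-2,2] (ell even) or in i[-2,2] (ell odd), as in (eq:per)
    x, y = (z.real, z.imag) if even else (z.imag, z.real)
    return abs(y) < tol and abs(x) <= 2 + tol

k_tilde = [1, 1]                                   # (k_1, k_2), as in Figure 1
ell = k_tilde[::-1] + [1, 1] + k_tilde + [-1, 1]   # (eq:elldef): a=b=1, c=-1, d=1
even = np.prod(ell) == 1

A = np.diag([1.0, 1.0], 1) + np.diag(k_tilde, -1)  # A_k^{(3)}
for lam in np.linalg.eigvals(A):
    z = p(ell, complex(lam))
    print(lam, on_segment(z, even))                # True for every eigenvalue
\end{verbatim}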
For other plots of the spectra of finite and periodic Feinberg-Zee matrices, and the interrelation of these spectra, see \cite{HolzOrlZee,CCL,CCL2}. \end{rem} \begin{figure} \caption{An illustration of Proposition \ref{prop:inc}.} \label{Figure1} \end{figure} \subsection{Explicit lower bounds for $\Sigma_\pi$} \label{subsec:b} As noted in \cite{Hagger:symmetries}, that the polynomials $p\in \cS$ satisfy \eqref{eq:spsym} gives us a tool to compute explicit lower bounds on $\Sigma_\pi$. Indeed \cite{Hagger:symmetries} shows visualisations of $p^{-n}(\D)$, for several $p\in \cS$ and $n\in \N$, and visualisations of unions of $p^{-n}(\D)$ for $p$ varying over some finite subset of $\cS$. Since, by \eqref{eq:unit}, $\D\subset \Sigma_\pi$, it follows from \eqref{eq:spsym} that all these sets are subsets of $\Sigma_\pi$. The study in \cite{Hagger:symmetries} contains visualisations of subsets of $\Sigma_\pi$ as just described, but no associated analytical calculations. Complementing that study, we make explicit calculations in this section that illustrate the use of the polynomial symmetries to derive explicit formulae for regions of the complex plane that are subsets of $\Sigma_\pi$, adding to the already known fact \eqref{eq:unit} that $\D\subset \Sigma_\pi$. Our first lemma and corollary give explicit formulae for $p^{-1}(\D)$ when $p(\lambda)=P_3(\lambda) = \lambda^3-\lambda$ and when $p(\lambda) = Q_2(\lambda)=\lambda^3+\lambda$. These formulae are expressed in terms of the function $s\in C^\infty[-1,1]$, where, for $-1\leq t\leq 1$, $s(t)>0$ is defined as the largest positive solution of $$ f(s):= s^3 - 2ts^2 + s = 1. $$ If $-1\leq t<1/2$, this is the unique solution in $(0,1)$ (on which interval $f^\prime(s)\geq 0$), while if $1/2\leq t\leq 1$ it is the unique solution in $[1,2)$ (on which interval $f^\prime(s)\geq 0$). Explicitly, for $-1\leq t\leq 1$, $$ s^\prime(t) = \frac{2(s(t))^3}{2-s(t) + (s(t))^3} >0, $$ and $s(1/2)=1$, $s(1) \approx 1.75488$. \begin{lem} \label{lem:degree3} If $p(\lambda):= P_3(\lambda)=\lambda^3-\lambda$, then $$ p^{-1}(\D) = E:=\{r\re^{\ri\theta}: 0\leq r < S(\theta), 0\leq\theta< 2\pi\}, $$ where $S(\theta) := \sqrt{s(\cos2\theta)}$, for $\theta\in \R$. $S\in C^\infty(\R)$ and is even and periodic with period $\pi$. In the interval $[0,\pi]$ the only stationary points of $S$ are global maxima at $0$ and $\pi$, with $S(0)=S(\pi) =\sqrt{s(1)}\approx 1.32472$, and a global minimum at $\pi/2$. Further, $S$ is strictly decreasing on $[0,\pi/2]$ and $S(\theta)\geq 1$ in $[0,\pi]$ iff $0\leq \theta \leq \pi/6$ or $5\pi/6\leq \theta\leq \pi$, with equality iff $\theta = \pi/6$ or $5\pi/6$. \end{lem} \begin{proof} If $\lambda = r\re^{\ri\theta}$ (with $r\geq 0$, $\theta\in\R$), then $|p(\lambda)|^2 = |\lambda^3-\lambda|^2 = |r^3\re^{2\ri\theta}-r|^2 = r^6-2r^4\cos(2\theta) + r^2$. It is straightforward to show that $|p(\lambda)|<1$ iff $0\leq r< S(\theta)$. The properties of $S$ claimed follow easily from the properties of $s$ stated above. \end{proof} \begin{cor} \label{cor:degree3} If $p(\lambda):= Q_2(\lambda)=\lambda^3+\lambda$, then $$ p^{-1}(\D) = \ri E =\{r\re^{\ri\theta}: 0\leq r < S(\theta-\pi/2), 0\leq\theta< 2\pi\}. $$ \end{cor} \begin{proof} This is clear from Lemma \ref{lem:degree3} and the observation that $Q_2(\lambda) = \ri P_3(\ri\lambda)$. \end{proof} Since $P_3, Q_2\in \cS$, it follows from the above lemma and corollary and \eqref{eq:pply} that $ E\cup \ri E \subset \intt(\Sigma_\pi)$.
But this implies by \eqref{eq:pply}, since $P_2(\lambda)=\lambda^2$ is also in $\cS$, that also $ \sqrt{\ri E} \subset \intt(\Sigma_\pi)$, where, for $S\subset \C$, $\sqrt{S}:= \{\lambda \in \C: \lambda^2\in S\}$. In particular, \begin{eqnarray} \label{eq:W1} W_1 := \{r\re^{\ri\theta}: 0\leq r < S(\theta), -\pi/6\leq\theta\leq \pi/6\}\subset E\subset \intt(\Sigma_\pi)\ \; \mbox{ and}\\ \label{eq:W2} W_2:= \{r\re^{\ri\theta}: 0\leq r < \sqrt{S(2\theta-\pi/2)}, \pi/6\leq\theta\leq \pi/3\}\subset \sqrt{\ri E} \subset \intt(\Sigma_\pi). \end{eqnarray} It is easy to check that \begin{equation} \label{eq:union} \bigcup_{m=0}^3 \ri^m (W_1\cup W_2) = E\cup \ri E \cup \sqrt{\ri E} \subset \intt(\Sigma_\pi). \end{equation} Next note from Table \ref{table1} that $p\in \cS$ where $p(\lambda) := \lambda^5 - \lambda^3+\lambda$ factorises as $$ p(\lambda) = \lambda(\lambda - \re^{\ri \pi/6})(\lambda + \re^{\ri \pi/6})(\lambda - \re^{-\ri \pi/6})(\lambda + \re^{-\ri \pi/6}). $$ Thus, for $\lambda= \exp(\pm \ri \pi/6)+w$ with $|w| \leq \epsilon$, $$ |p(\lambda)|\leq (1+\epsilon)\epsilon(2+\epsilon)(2\sin(\pi/6)+\epsilon)(2\cos(\pi/6)+\epsilon)=\epsilon(1+\epsilon)^2(\sqrt{3}+\epsilon)(2+\epsilon)=:g(\epsilon). $$ Let $\eta\approx 0.174744$ be the unique positive solution of $g(\epsilon)=1$. Clearly $|p(\lambda)| < 1$ if $\lambda = \exp(\pm\ri\pi/6)+w$, with $|w|<\eta$, so that \begin{equation} \label{eq:iun} \exp(\pm\ri \pi/6) + \eta \D \subset p^{-1}(\D) \subset \intt(\Sigma_\pi). \end{equation} \begin{figure} \caption{A plot showing $W_1$ (green), $W_2$ (red), and the disks $\re^{\pm\ri\pi/6}+\eta\D$ of \eqref{eq:bb}.} \label{Figure2} \end{figure} We have shown most of the following proposition that extends to a region $W\supset 1.1\ovD$ (illustrated in Figure \ref{Figure2}) the part of the complex plane that is known to consist of interior points of $\Sigma_\pi$, making explicit implications of the polynomial symmetries $\cS$ of $\Sigma$. Before \cite{Hagger:symmetries} and the current paper the most that was known explicitly was that $\D\subset \intt(\Sigma_\pi)$. \begin{prop} \label{prop:betterbounds} \begin{equation} \label{eq:bb} 1.1\ovD \subset W := \bigcup_{m=0}^3 \ri^m \left(W_1\cup W_2 \cup (\re^{\ri\pi/6}+\eta \D) \cup (\re^{-\ri\pi/6}+\eta \D)\right)\subset \intt(\Sigma_\pi). \end{equation} \end{prop} \begin{proof} That $W\subset \intt(\Sigma_\pi)$ is \eqref{eq:union} combined with \eqref{eq:iun} and Lemma \ref{lem:symeasy}. It is easy to see that $W$ is invariant with respect to the maps $\lambda\mapsto \ri\lambda$ and $\lambda \mapsto \overline{\lambda}$, and so is invariant under the dihedral symmetry group $D_2$ generated by these two maps. Thus to complete the proof it is sufficient to show that $z:=r\re^{\ri \theta}\in W$ for $0\leq r\leq 1.1$ and $0\leq \theta\leq \pi/4$. Now since, by Lemma \ref{lem:degree3}, $S(\theta)\geq 1$ for $-\pi/6\leq \theta\leq \pi/6$, $z\in W_1\cup W_2$ for $0\leq r\leq 1$, $0\leq \theta \leq \pi/4$. Further, since, by Lemma \ref{lem:degree3}, $S$ is even and is increasing on $[-\pi/6,0]$, $S(\theta)\geq S(\pi/8) = \sqrt{s(\sqrt{2}/2)\,}$, for $0\leq \theta \leq \pi/8$, and $\sqrt{S(2\theta-\pi/2)} \geq \sqrt{S(\pi/12)}=\left(s(\sqrt{3}/2)\right)^{1/4}$, for $5\pi/24\leq \theta \leq \pi/4$. But $\sqrt{3}>1.73$, so that $f(1.5) < 0.9825$ for $t=\sqrt{3}/2$, which implies that $s(\sqrt{3}/2)>1.5$, so that $\left(s(\sqrt{3}/2)\right)^{1/4}>1.1$. Similarly, $\sqrt{2}/2>0.705$ and $s(0.705) = 1.25$, so that $\sqrt{s(\sqrt{2}/2)\,}>\sqrt{1.25}>1.1$.
Thus $z\in W_1$ for $0\leq r\leq 1.1$ and $0\leq \theta\leq \pi/8$, while $z\in W_2$ for $0\leq r\leq 1.1$ and $5\pi/24\leq \theta\leq \pi/4$. To conclude that $1.1\ovD \subset W$, it remains to show that $z\in W$ for $1 \leq r\leq 1.1$ and $|\pi/6-\theta|\leq \pi/24$. But it is easy to check that, for these ranges of $r$ and $\theta$, $z\in \exp(\ri \pi/6)+ \eta \D$ provided $\cos(\pi/24) + \sqrt{\cos^2(\pi/24) + \eta^2-1} > 1.1$. But this last inequality holds since $\cos(\pi/24)= \frac{1}{2}\sqrt{2+\sqrt{2+\sqrt{3}}}>0.991$ and $g(0.174)<1$ so that $\eta > 0.174$. \end{proof} \section{Interior points of $\Sigma_\pi$} \label{sec:interior} We have just, in Proposition \ref{prop:betterbounds}, extended to a region $W\supset 1.1\ovD$ the part of the complex plane that is known to consist of interior points of $\Sigma_\pi$. In this section we explore the relationship between $\Sigma_\pi$ and its interior further. We show first of all, using \eqref{eq:pply} and that $P_n\in \cS$ for every $n\geq 2$, that $[0,2)\subset \intt(\Sigma_\pi)$. Next we use this result to show that, for every $n\geq 2$, all but finitely many points in $\pi^\cS_n$ are interior points of $\Sigma_\pi$. Finally, we prove, using Theorem \ref{thm:dense}, that $\Sigma_\pi$ is the closure of its interior. If indeed it can be shown, as conjectured in \cite{CCL}, that $\Sigma_\pi=\Sigma$, then the result will imply the truth of another conjecture in \cite{CCL}, that $\Sigma$ is the closure of its interior. Our technique for establishing that $[0,2)\subset \intt(\Sigma_\pi)$ will be to use that $P_m^{-1}((-1,1)) \subset \intt(\Sigma_\pi)$, for every $m\geq 3$, this a particular instance of \eqref{eq:pply}. This requires first a study of the real solutions of the equations $P_m(\lambda)=\pm 1$ and their interlacing properties, which we now undertake. From \eqref{eq:theta}, $$ P_m(2) = 2m,\ P_m(2\cos(\pi/m)) = 0,\ \mbox{ and }\ P_m(2\cos(3\pi/(2m))) = -2 \cot(3\pi/(2m)). $$ This implies that the equation $P_m(\lambda)=1$ has a solution in $(2\cos(\pi/m),2)$. Let $\lambda_m^+$ denote the largest solution in this interval. Further, if $m\ge 5$ then $P_m(\lambda)=-1$ has a solution in $(2\cos(3\pi/(2m)),2)$ since $-2 \cot(3\pi/(2m))<-1$. For $m\ge 4$ let $\lambda_m^-$ denote the largest solution to $P_m(\lambda)=-1$ in $(0,2)$, which is in the interval $(2\cos(3\pi/(2m)),2)$ if $m\ge 5$, while an explicit calculation gives that $\lambda_4^-=1$. Throughout the following calculations we use the notation $r_n(\theta)$ from \eqref{eq:theta}. \begin{lem} \label{lem:inc} For $m\ge 4$ it holds that $P_m$ is strictly increasing on $(\lambda_m^-,2)$, that $\lambda_m^-<\lambda_m^+$, and that $-1<P_m(\lambda)<1$ for $\lambda_m^-<\lambda<\lambda_m^+$. \end{lem} \begin{proof} Explicitly, $P_4(\lambda) = \lambda U_3(\lambda/2)=\lambda^4-2\lambda^2$, so that $P_4^\prime(\lambda)=4\lambda(\lambda^2-1)$, and these claims are clear for $m=4$. Suppose now that $m\ge 5$. It follows by induction that, for $n\geq 4$, $r_n(\theta)$ is strictly decreasing on $(0,\pi/n+\pi/n^2)$. For $r_3(\theta)= 2\cos\theta(4\cos^2\theta-1)$ has derivative $-2\sin\theta\,(12\cos^2\theta-1)$, so is strictly decreasing on $(0,\theta^*)$, where $\cos\theta^*=1/(2\sqrt{3}\,)$, and in particular on $(0,\pi/4+\pi/16)$ since $\cos(5\pi/16)>1/(2\sqrt{3}\,)$; and, if $r_n$ is strictly decreasing on $(0,\pi/(n+1)+\pi/(n+1)^2)$, for some $n\geq 3$, then $$ r_{n+1}(\theta) = 2\cot\theta \sin((n+1)\theta) = \cos\theta(r_n(\theta) + 2\cos n\theta) $$ is strictly decreasing on $(0,\pi/(n+1)+\pi/(n+1)^2)\subset (0,\pi/n)$.
Further, \begin{eqnarray*} r_m(\pi/m+\pi/m^2) &=& -2\cos(\pi/m+\pi/m^2)\sin(\pi/m)/\sin(\pi/m+\pi/m^2)\\ & <&-\ \frac{2m\cos(\pi/5+\pi/25)}{m+1}<-10\cos(6\pi/25)/6 < -1, \end{eqnarray*} since $\sin a/\sin b > a/b$ for $0<a<b<\pi$. As $r_m(\theta)=P_m(2\cos\theta)$, these observations imply that, on $(2\cos(\pi/m+\pi/m^2),2)$, $P_m$ is strictly increasing, and that $\lambda_m^->2\cos(\pi/m+\pi/m^2)$. Thus $\lambda_m^+>\lambda_m^-$ follows from the definitions of $\lambda_m^\pm$, and $-1<P_m(\lambda)<1$ for $\lambda_m^-<\lambda<\lambda_m^+$. \end{proof} As observed above it follows from \eqref{eq:pply} that $P_m^{-1}((-1,1))\subset \intt(\Sigma_\pi)$, for $m\geq 2$. Combining this observation with Lemma \ref{lem:inc} and the fact that $P_3(\lambda)=\lambda^3-\lambda \in (-1,1)$ if $-\lambda_3^+<\lambda<\lambda_3^+$, we obtain the following corollary. \begin{cor} \label{cor:intpts} For $m=4,5,...$, $(\lambda_m^-,\lambda_m^+)\subset \intt(\Sigma_\pi)$. Also, $(-\lambda_3^+,\lambda_3^+)\subset \intt(\Sigma_\pi)$. \end{cor} By definition of $\lambda_m^+$, $\lambda_m^+\to 2$ as $m\to\infty$. Thus the above corollary and the following lemma together imply that $[0,2)\subset \intt(\Sigma_\pi)$. \begin{lem} \label{lem:inter} For $m=3,4,...$, $\lambda_{m+1}^-<\lambda_m^+<\lambda_{m+1}^+$. \end{lem} \begin{proof} Since $P_3(\lambda) = \lambda^3-\lambda$ and $P_4(\lambda) = \lambda^4-2\lambda^2$, it is easy to see that $1=\lambda_4^-<\lambda_3^+ < \lambda_4^+$, so that the claimed result holds for $m=3$. To see the result for $m\ge 4$ we will show, equivalently, that $r_{m+1}^->r_m^+>r_{m+1}^+$, for $m=4,5,...$. Here $r_n^+$, for $n\in\N$, is the smallest solution of $r_n(\theta)=1$ in $(0,\pi/n)$, so that $\lambda_n^+=2\cos r_n^+$, while $r_n^-$, for $n=4,5,...$, denotes the smallest solution of $r_n(\theta)=-1$ in $(0,\pi/2)$, so that $\lambda_n^-=2\cos r_n^-$. We have shown in the proof of Lemma \ref{lem:inc} that, for $m\geq 4$, $r_m(\theta)$ is strictly decreasing on $(0,\pi/m+\pi/m^2)$, and that $r_m(\pi/m+\pi/m^2) < -1$, for $m\ge 5$, while $r_m(\pi/m)=0$ so that $$ \frac{\pi}{m} < r_m^- < \frac{\pi}{m} + \frac{\pi}{m^2}, \quad \mbox{for } m\ge 5. $$ Similarly, for $m\ge 2$, \begin{eqnarray*} r_m(\pi/m-\pi/m^2) &=& 2\cos(\pi/m-\pi/m^2)\sin(\pi/m)/\sin(\pi/m-\pi/m^2)\\ &>& 2\cos(\pi/2-\pi/4)=\sqrt{2} > 1, \end{eqnarray*} so that $$ \frac{\pi}{m} - \frac{\pi}{m^2}< r_m^+ < \frac{\pi}{m}, \quad \mbox{for } m\ge 2. $$ These inequalities imply that, for $m\ge 4$, $r_{m+1}^-\in (0,\pi/m+\pi/m^2)$, and since $r_m$ is strictly decreasing on this interval and \begin{eqnarray*} r_m(r_{m+1}^-) &=& 2\cot r_{m+1}^- \sin((m+1)r_{m+1}^- - r_{m+1}^-)\\ & =& \cos r_{m+1}^-(-1-2\cos((m+1)r_{m+1}^-)) < \cos r_{m+1}^- < 1, \end{eqnarray*} it follows that $r_m^+<r_{m+1}^- <\pi/(m+1)+\pi/(m+1)^2$. Since also $r_{m+1}$ is strictly decreasing on $(0,\pi/(m+1)+\pi/(m+1)^2)$ and $$ r_{m+1}(r_m^+) = 2\cot r_m^+ \sin(mr_m^+ + r_m^+) = \cos r_m^+ (1+2\cos(mr_m^+)) < \cos r_m^+<1, $$ since $\pi/2<mr_m^+<\pi$, we see that also $r_m^+>r_{m+1}^+$. \end{proof} \begin{cor} \label{cor:main} $(-2,2)\cup \ri (-2,2)\subset \intt(\Sigma_\pi)$. \end{cor} \begin{proof} From Corollary \ref{cor:intpts} and Lemma \ref{lem:inter} it follows that $[0,\lambda_3^+)\subset \intt(\Sigma_\pi)$ and that $[\lambda_m^+,\lambda_{m+1}^+)\subset \intt(\Sigma_\pi)$, for $m\geq 3$. Thus $[0,\lambda_m^+)\subset \intt(\Sigma_\pi)$ for $m\geq 3$, and so $[0,2)\subset \intt(\Sigma_\pi)$ since $\lambda_m^+\to 2$ as $m\to\infty$.
Applying Lemma \ref{lem:symeasy} we obtain the stated result. \end{proof} The following lemma follows immediately from Corollary \ref{cor:main}, \eqref{eq:per}, and \eqref{eq:pply}. \begin{lem} \label{lem:pinint} Suppose that $k\in \cK_n$, so that $k$ has length $n$ and $p_k\in \cS$ has degree $n$. Then all except at most $2n$ points in $\spec A_k^{\per}$ are interior points of $\Sigma_\pi$. Further, if $\lambda\in \spec A_k^{\per}$ then there exists a sequence $(\lambda_m)\subset \spec A_k^{\per}\cap \intt(\Sigma_\pi)$ such that $\lambda_m\to \lambda$ as $m\to\infty$. \end{lem} As an example of the above lemma, suppose that $k=(-1,1)\in \cK_2$. Then (see Table \ref{table1}) $p_k(\lambda)=\lambda^2$ and, from \eqref{eq:per}, $\spec A_k^{\per} = \{x\pm \ri x:-1\leq x\leq 1\}$. There are precisely four points, $\pm 1\pm \ri\in \spec A_k^{\per}\setminus \intt(\Sigma_\pi)$. These are not interior points of $\Sigma_\pi$ since they lie on the boundary of $\overline{\Delta}\supset \Sigma\supset \Sigma_\pi$. Combining the above lemma with Theorem \ref{thm:dense}, we obtain the last result of this section. \begin{thm} \label{thm:interior} $\Sigma_\pi$ is the closure of its interior. \end{thm} \begin{proof} Suppose $\lambda \in \Sigma_\pi$. Then, by Theorem \ref{thm:dense}, $\lambda$ is the limit of a sequence $(\lambda_n)\subset \pi^\cS_\infty$, and, by Lemma \ref{lem:pinint}, for each $n$ there exists $\mu_n\in \intt(\Sigma_\pi)$ such that $|\mu_n-\lambda_n| < n^{-1}$, so that $\mu_n\to \lambda$ as $n\to\infty$. \end{proof} \section{Filled Julia sets in $\Sigma_\pi$} \label{sec:julia} It was shown in \cite{Hagger:symmetries} that, for every polynomial symmetry $p\in \cS$, the corresponding Julia set $J(p)$ satisfies $J(p)\subset \overline{U(p)}\subset \Sigma_\pi$, where $U(p)$ is defined by \eqref{eq:Up}. (The argument in \cite{Hagger:symmetries} is that $J(p) \subset \overline{U(p)}$ by \eqref{eq:Jpi}, and that $\overline{U(p)}\subset \Sigma_\pi$ by \eqref{eq:pply}.) It was conjectured in \cite{Hagger:symmetries} that also the filled Julia set $K(p)\subset \overline{U(p)}\subset \Sigma_\pi$, for every $p\in \cS$. In this section we will first show by a counterexample that this conjecture is false; we will exhibit a $p\in \cS$ of degree 18 for which $K(p)\not\subset \overline{U(p)}$. However, we have no reason to doubt a modified conjecture, that $K(p)\subset \Sigma_\pi$, for all $p\in \cS$. And the main result of this section will be to prove that $K(p)\subset \Sigma_\pi$ for a large class of $p\in \cS$, including $p=P_m$, for $m\geq 2$. Our first result is the claimed counterexample. \begin{lem} \label{lem:counter} Let $k = (1,-1,1,1,1,-1,1,-1,-1,1,-1,1,1,1,-1,1,-1,1)$, so that (by Proposition \ref{prop:eqchar}) $p_k\in \cS$, this polynomial given explicitly by $$ p_k(\lambda) = \lambda^{18} - 4\lambda^{16} + 5\lambda^{14} - 4\lambda^{12} + 7\lambda^{10} - 8\lambda^8 + 6\lambda^6 - 4\lambda^4 + \lambda^2. $$ Then $K(p_k)\not\subset \overline{U(p_k)}$. \end{lem} \begin{proof} Let $p=p_k$. If we can find a $\mu\not\in \ovD$ that is an attracting fixed point of $p$, then, for all sufficiently small $\epsilon>0$, $N := \mu + \epsilon \D$ satisfies $p(N)\subset N$ and $N\cap \ovD = \emptyset$, so that $N\subset K(p)$ and $N\cap \overline{U(p)}=\emptyset$. 
Calculating in double-precision floating-point arithmetic in Matlab we see that $\lambda\approx 1.21544069$ appears to be a fixed point of $p$, with $$ p^\prime(\lambda) = 18\lambda^{17} - 64\lambda^{15} + 70\lambda^{13} - 48\lambda^{11} + 70\lambda^{9} - 64\lambda^7 + 36\lambda^5 - 16\lambda^3 + 2\lambda \approx -0.69, $$ so that this fixed point appears to be attracting. To put this on a rigorous footing we work in exact arithmetic to deduce, by the intermediate value theorem, that $p(\lambda)=\lambda$ has a solution $\lambda^*\in(1.215,1.216)$, and that $|p^\prime(1.2155)|\leq 0.71$. Then, noting that $p^{\prime\prime}(\lambda)= p_+(\lambda)-p_-(\lambda)$, where $p_+(\lambda) = 306\lambda^{16} + 910\lambda^{12} + 630\lambda^8 + 180\lambda^4 + 2$ and $p_-(\lambda) = 960\lambda^{14} + 528\lambda^{10} + 448\lambda^6 + 48\lambda^2$, we see that \[|p^{\prime\prime}(\lambda)|\leq \max\{|p_-(1.216)-p_+(1.215)|,|p_-(1.215)-p_+(1.216)|\} < 400\] for $1.215\leq \lambda\leq 1.216$. But this implies that $|p^\prime(\lambda^*)| \leq |p^\prime(1.2155)| + 0.0005 \times 400 \leq 0.91$, so that $\lambda^*$ is an attracting fixed point. \end{proof} Numerical results suggest that amongst the polynomials $p\in \cS$ of degree $\leq 20$, there is only one other similar counterexample of a polynomial with an attracting fixed point outside the unit disk, the other example of degree 19. We turn now to positive results. Part of our argument will be to show, for every $p\in \cS$, that $\{z:|z|\geq 2\}\subset A_p(\infty)$, via the following lower bounds that follow immediately from Lemma \ref{lem:qkbound}, \eqref{eq:pk2} and \eqref{p_k}. \begin{cor} \label{cor:pkbound} If $k\in \{\pm 1\}^n$, for some $n\in \N$, then $|p_k(\lambda)|\geq 2$, for $|\lambda|\geq 2$. If $p_k\in \cS$, then $|p_k(\lambda)|\geq 2n$, for $|\lambda|\geq 2$. \end{cor} \begin{cor} \label{cor:pkjul} Let $p=p_k$, where $k\in \{\pm 1\}^n$, for some $n\in \N$. Then $A_p(\infty) \supset \{z\in \C:|z|>2\}$. If $p\in \cS$, then $A_p(\infty)\supset \{z\in \C:|z|\geq 2\}$. \end{cor} \begin{proof} Let $z\in \C$ with $|z|>2$. Then, by Corollary \ref{cor:pkbound}, for some neighbourhood $N$ of $z$, $|p_k(w)|\geq 2$ for $w\in N$. Thus, and by Montel's theorem \cite[Theorem 14.5]{Falconer03}, the family $\{p^n:n\in\N\}$ is normal at $z$. So $z\not\in J(p)$, by \eqref{eq:nn}. We have shown that $J(p)\cap \{z:|z|>2\}=\emptyset$, so that also $K(p)\cap \{z:|z|>2\}=\emptyset$ and $A_p(\infty)=\C\setminus K(p) \supset \{z:|z|> 2\}$. If $p\in \cS$ and $|z|=2$ then, by Corollary \ref{cor:pkbound}, $|p(z)|\geq 4$ so that $p(z)\in A_p(\infty)$ and so $z\in A_p(\infty)$. Thus $A_p(\infty)\supset \{z:|z|\geq 2\}$. \end{proof} We remark that the bounds in Corollary \ref{cor:pkbound} appear to be sharp. In particular, if $k=(1,1,...,1)$ has length $n\geq 2$, we see from \eqref{eq:pdef2}, \eqref{p_k}, and Lemma \ref{lem:cheb} that $p_k(2)=U_n(1)-U_{n-2}(1)=2$, since $U_m(1)=m+1$ \cite{AS}. And we note that, if $p=P_m$, for some $m\in \N$, then $p(2)=P_m(2)=2U_{m-1}(1)=2m$. Finally, we recall that we have already noted that, for $p=p_k$, with $k=(1,1)$, i.e., $p(z)=z^2-2$, the Julia set is $J(p)=[-2,2]$, so that $A_p(\infty)\not\supset \{z:|z|\geq 2\}$ for this $p$. The polynomial $p(z)=z^2-2$ is an example where $J(p)=K(p)$ so $F_B(p)=K(p)\setminus J(p) =\emptyset$. The next lemma tells us that this does not happen, that $K(p)$ is strictly larger than $J(p)$, if $p\in\cS$. \begin{lem}\label{lem:larger} $F_B(p)\cap U(p)$ is non-empty for $p\in \cS$. 
\end{lem} \begin{proof} If $p\in \cS$ is even then, by Lemma \ref{lem:det} and Corollary \ref{cor:small}, $p(0)=p^\prime(0)=0$, so that $0$ is an attracting fixed point. Clearly $A_p(0)$ (which is non-empty) is a subset of $U(p)\cap F_B(p)$. Similarly, by Lemma \ref{lem:det} and Corollary \ref{cor:small}, if $p$ is odd then $p(0)=0$ and $p^\prime(0)=\pm 1$, so that $0\in J(p)$ is a rationally neutral fixed point and has a (non-empty) attracting region contained in $F_B(p)$ \cite[Section II.5]{CarlesonGamelin92}, this region clearly also in $U(p)$. \end{proof} The above lemma and \eqref{eq:pply} imply that $F_B(p)\cap \Sigma_\pi\supset F_B(p)\cap U(p)$ is non-empty for all $p\in \cS$, in particular that $A_p(0)\subset F_B(p)\cap U(p) \subset \Sigma_\pi$ if $p$ is even. The main result of this section is the following criterion for the {\em whole} of $F_B(p)$ to be contained in $\Sigma_\pi$. \begin{thm} \label{prop:filled3} Suppose that $p\in \cS$, and that the critical points of $p$ in $K(p)$ have orbits that lie eventually in $1.1\ovD\cup (-2,2)\cup \ri(-2,2)$. Then $K(p)\subset \Sigma_\pi$. \end{thm} \begin{proof} Choose $a$ and $b$ with $-2<a<b<2$ such that $K(p)\cap \R \subset [a,b]$ and $K(p)\cap \ri\R \subset \ri[a,b]$, this possible by Corollary \ref{cor:pkjul} which says that the closed set $K(p)\subset \{z:|z|<2\}$. Set $T = [a,b]\cup\ri[a,b]\cup 1.1\ovD$, and choose a simply-connected open set $S$ such that $T\subset S\subset \Sigma_\pi$, this possible by Corollary \ref{cor:main} and Proposition \ref{prop:betterbounds}. By hypothesis, the orbits of the critical points in $K(p)$ lie eventually in $T$. Thus the result follows from Proposition \ref{prop:cd2a} and \eqref{eq:pply}. \end{proof} \begin{figure} \caption{Plot of $\pi_{30}$ together with the filled Julia set $K(P_4^*)$.} \label{Figure3} \end{figure} As an example of application of this theorem, consider $p\in \cS$ given by (see Table \ref{table1}) $p(\lambda) = P_4^*(\lambda)=\lambda^4+ 2\lambda^2$. This $p$ has critical points $0$ and $\pm \ri$. Since $p^2(\pm \ri) = 3$ it follows from Corollary \ref{cor:pkjul} that $\pm \ri\in A_p(\infty)$, while $0$ is a fixed point. Theorem \ref{prop:filled3} tells us that $K(p)$, visualised in Figure \ref{Figure3}, is contained in $\Sigma_\pi$. We note that, since all the critical points of $p$ except the fixed point $0$ are in $A_p(\infty)$, $K(p)$ is not connected \cite[Theorem III 4.1]{CarlesonGamelin92} and, by Theorem \ref{thm:sullivan} and Proposition \ref{prop:crit}, $F_B(p)=A_p(0)$, which implies that $K(p)\subset \overline{U(p)}$. Further, recalling the discussion in Section \ref{sec:pre}, $J(p) =\partial K(p)=\partial A_p(0)=\partial A_p(\infty)$, and, since $K(p)$ has more than one component, $F_B(p)$ has infinitely many components \cite[Theorem IV 1.2]{CarlesonGamelin92}. The above example is a particular instance of a more general result. It is straightforward to see that if $p$ is a polynomial with zeros only on the real line, then all the critical points are also on the real line. Since, by Lemma \ref{lem:cheb}, $P_m(\lambda) = \lambda U_{m-1}(\lambda/2)$, and all the zeros of the polynomial $U_{m-1}$ are real, it follows that all the zeros of $P_m$ are real, so all its critical points are also real, and so the orbits of all the critical points are real. Further, by Corollary \ref{cor:pkjul}, the orbits of the critical points in $K(p)$ stay in $(-2,2)$.
Likewise, as (see \eqref{eq:PM*}) $P_m^*(\lambda)=\ri^{-m}P_m(\ri\lambda)$, all the critical points of $P_m^*$ lie on $\ri\R$, and so the orbits of these critical points are real if $m$ is even, pure imaginary if $m$ is odd. Further, by Corollary \ref{cor:pkjul}, the orbits of the critical points in $K(p)$ stay in $(-2,2)\cup \ri(-2,2)$. Applying Theorem \ref{prop:filled3} we obtain: \begin{cor} \label{cor:Kp} $K(P_m)\subset \Sigma_\pi$ and $K(P_m^*)\subset \Sigma_\pi$, for $m\geq 2$. \end{cor} Numerical experiments carried out for the polynomials $p\in \cS$ of degree $\leq 7$ (see Table \ref{table1} and \cite[Table 1]{Hagger:symmetries}) appear to confirm that these polynomials satisfy the conditions of Theorem \ref{prop:filled3}, i.e., it appears for each polynomial $p$ that the orbit of every critical point either diverges to infinity or is eventually in $1.1\ovD\cup (-2,2)\cup \ri(-2,2)$. The same appears true for the polynomial $p\in \cS$ of degree 18 in Lemma \ref{lem:counter} for which $K(p)\not\subset \overline{U(p)}$. Thus it appears, from numerical evidence and Theorem \ref{prop:filled3}, that $K(p)\subset \Sigma_\pi$ for these examples. These numerical experiments and Corollary \ref{cor:Kp} motivate a conjecture that $K(p)\subset \Sigma_\pi$ for all $p\in \cS$. \section{Open Problems} \label{sec:open} We finish this paper with a note of open problems regarding the spectrum of the Feinberg-Zee random hopping matrix, particularly problems that the above discussions have highlighted. We recall first that \cite{CCL} made several conjectures regarding $\Sigma$. It was proved in \cite{Hagger:dense} that $\overline{\sigma_\infty}=\Sigma_\pi$, but the following conjectures remain open: \begin{enumerate} \item $\Sigma_\pi=\Sigma$; \item $\Sigma$ is the closure of its interior; \item $\Sigma$ is simply connected; \item $\Sigma$ has a fractal boundary. \end{enumerate} Of these conjectures, perhaps the first has the larger implications. Certainly, if $\Sigma=\Sigma_\pi$, then we have noted below \eqref{eq:convPin} that we have constructed already convergent sequences of upper ($\Sigma_n^*$) and lower ($\Pi_n$) bounds for $\Sigma$ that can both be computed by calculating eigenvalues of $n\times n$ matrices. Further, if $\Sigma=\Sigma_\pi$, then the second of the above conjectures follows from Theorem \ref{thm:interior}. The last three conjectures in the above list were prompted in large part by plots of $\pi_{n}$ in \cite{CCL}, the plot of $\pi_{30}$ reproduced in Figures \ref{Figure2} and \ref{Figure3}. It is plausible that these plots, in view of \eqref{eq:convPin}, approximate $\Sigma_\pi$. We see no clear route to establishing the third conjecture above. Regarding the fourth, we note that the existence of the set $\cS$ of polynomial symmetries satisfying \eqref{eq:spsym} suggests a self-similar structure to $\pi_\infty$ and to $\Sigma_\pi$ and $\Sigma$ and their boundaries. Further, \cite{Hagger:symmetries} has shown that $\Sigma_\pi$ contains the Julia sets of all polynomials in $\cS$, and Proposition \ref{prop:filled3} and Corollary \ref{cor:Kp} show that $\Sigma_\pi$ contains the filled Julia sets, many of which have fractal boundaries, of the polynomials in an infinite subset of $\cS$. Regarding these polynomial symmetries we make two further conjectures: \begin{enumerate} \item[5.] $K(p)\subset \Sigma_\pi$ for all $p\in \cS$; \item[6.] $p^{-1}(\Sigma)\subset \Sigma$ for all $p\in \cS$. 
\end{enumerate} This last conjecture follows if $\Sigma=\Sigma_\pi$, by Theorem \ref{thm:cS} from \cite{Hagger:symmetries}. Further (see the discussion below \eqref{eq:pply}), it was shown in \cite{CWDavies2011} that $p^{-1}(\Sigma)\subset \Sigma$ for the only polynomial of degree 2 in $\cS$, $p(\lambda)=\lambda^2$. The major subject of study and tool for argument in this paper has been Hagger's set of polynomial symmetries $\cS$. We finish with one final open question raised immediately before Section \ref{subsec:a}. \begin{enumerate} \item[7.] Does $\cS$ capture all the polynomial symmetries of $\Sigma$? Precisely, are there polynomial symmetries, satisfying \eqref{eq:spsym}, that are not either in $\cS$ or compositions of elements of $\cS$? \end{enumerate} \subsection*{Acknowledgments} We thank our friend and scientific collaborator Marko Lindner for introducing us to the study of, and for many discussions about, this beautiful matrix class. \end{document}
\begin{document} \title[An integral involving the modified Struve function of the first kind]{Bounds for an integral involving the modified Struve function of the first kind} \author{Robert E. Gaunt} \address{Department of Mathematics, The University of Manchester, Oxford Road, Manchester M13 9PL, UK} \curraddr{} \email{[email protected]} \thanks{The author is supported by a Dame Kathleen Ollerenshaw Research Fellowship. } \subjclass[2010]{Primary 33C20; 26D15} \date{27 January 2021} \dedicatory{} \commby{} \begin{abstract}Simple upper and lower bounds are established for the integral $\int_0^x\mathrm{e}^{-\beta t}t^\nu \mathbf{L}_\nu(t)\,\mathrm{d}t$, where $x>0$, $\nu>-1$, $0<\beta<1$ and $\mathbf{L}_\nu(x)$ is the modified Struve function of the first kind. These bounds complement and improve on existing results, through either sharper bounds or increased ranges of validity. In deriving our bounds, we obtain some monotonicity results and inequalities for products of the modified Struve function of the first kind and the modified Bessel function of the second kind $K_{\nu}(x)$, as well as a new bound for the ratio $\mathbf{L}_{\nu}(x)/\mathbf{L}_{\nu-1}(x)$. \end{abstract} \maketitle \section{Introduction}\label{intro} In a series of recent papers \cite{gaunt ineq1,gaunt ineq3,gaunt ineq8}, simple upper and lower bounds, involving the modified Bessel function of the first kind $I_\nu(x)$, were established for the integral \begin{equation}\label{intbes}\int_0^x \mathrm{e}^{-\beta t} t^\nu I_\nu(t)\,\mathrm{d}t, \end{equation} where $x>0$, $0\leq\beta<1$. The conditions imposed on $\nu$ differed for several of the inequalities, although in all cases $\nu>-\frac{1}{2}$, which ensures that the integral exists. For $0<\beta<1$ there does not exist a simple closed-form formula for this integral. The inequalities of \cite{gaunt ineq1,gaunt ineq3,gaunt ineq8} played a crucial role in the development of Stein's method \cite{chen,np12,stein} for variance-gamma approximation \cite{eichelsbacher, gaunt vg, gaunt vg2,gaunt vg3}. As the inequalities of \cite{gaunt ineq1,gaunt ineq3,gaunt ineq8} are simple and surprisingly accurate, they may also be useful in other problems involving modified Bessel functions; see, for example, \cite{bs09,baricz3} in which inequalities for modified Bessel functions of the first kind were used to derive tight bounds for the generalized Marcum $Q$-function, which arises in radar signal processing. The modified Struve function of the first kind is defined, for $x\in\mathbb{R}$ and $\nu\in\mathbb{R}$, by the power series \begin{equation*}\mathbf{L}_\nu(x)=\sum_{k=0}^\infty \frac{\big(\frac{1}{2}x\big)^{\nu+2k+1}}{\Gamma(k+\frac{3}{2})\Gamma(k+\nu+\frac{3}{2})}. \end{equation*} The modified Struve function $\mathbf{L}_\nu(x)$ is closely related to the modified Bessel function $I_\nu(x)$, either sharing or having close analogues to the properties of $I_\nu(x)$ that were used by \cite{gaunt ineq1,gaunt ineq3,gaunt ineq8} to derive inequalities for the integral (\ref{intbes}). The function $\mathbf{L}_\nu(x)$ is itself a widely used special function, with numerous applications in the applied sciences, such as perturbation approximations of lee waves in a stratified flow \cite{mh69}, leakage inductance in transformer windings \cite{hw94}, and quantum-statistical distribution functions of a hard-sphere system \cite{ni67}; see \cite{bp13} for examples of further application areas. 
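The power series converges rapidly for moderate $x$, so a direct truncation already gives a usable numerical evaluation of $\mathbf{L}_\nu(x)$. The following sketch (an illustration only; it assumes that SciPy is available and uses its routine \texttt{modstruve} as a reference implementation of $\mathbf{L}_\nu(x)$) compares a truncation of the series with the library value.

\begin{verbatim}
# Sketch: truncated power series for L_nu(x) versus SciPy's modstruve.
import numpy as np
from scipy.special import gamma, modstruve

def struve_L_series(nu, x, terms=60):
    # Partial sum of the defining power series of L_nu(x).
    k = np.arange(terms)
    return np.sum((x / 2.0) ** (nu + 2 * k + 1)
                  / (gamma(k + 1.5) * gamma(k + nu + 1.5)))

for nu, x in [(-0.5, 2.0), (0.0, 5.0), (2.5, 10.0)]:
    print(nu, x, struve_L_series(nu, x), modstruve(nu, x))
\end{verbatim}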
Basic properties of the modified Struve function $\mathbf{L}_\nu(x)$ can be found in standard references, such as \cite{olver}. We collect the basic properties that will be needed in this paper in Appendix \ref{appa} The natural analogue of the problem studied by \cite{gaunt ineq1,gaunt ineq3,gaunt ineq8} is to ask for simple inequalities, involving the modified Struve function of the first kind, for the integral \begin{equation}\label{intstruve}\int_0^x \mathrm{e}^{-\beta t} t^{\nu} \mathbf{L}_\nu(t)\,\mathrm{d}t, \end{equation} where $x>0$, $0\leq\beta<1$ and $\nu>-1$ (with the condition on $\nu$ ensuring the integral exists). This problem was first studied in the recent paper \cite{gaunt ineq4}, and will also be the subject of this paper. The integral (\ref{intstruve}) can be evaluated exactly in terms of the modified Struve function $\mathbf{L}_\nu(x)$ in the case $\beta=1$. For all $\nu>-\frac{1}{2}$ and $x>0$, \begin{equation}\label{intfor}\int_0^x \mathrm{e}^{-t}t^\nu \mathbf{L}_\nu(t)\,\mathrm{d}t=\frac{\mathrm{e}^{-x}x^{\nu+1}}{2\nu+1}\big(\mathbf{L}_\nu(x)+\mathbf{L}_{\nu+1}(x)\big)-\frac{\gamma(2\nu+2,x)}{\sqrt{\pi}2^\nu(2\nu+1)\Gamma(\nu+\frac{3}{2})}, \end{equation} where $\gamma(a,x)=\int_0^x \mathrm{e}^{-t}t^{a-1}\,\mathrm{d}t$ is the lower incomplete gamma function. This formula can be verified directly by a short calculation using the differentiation formula (\ref{diffone}) and identity (\ref{Iidentity}) given in Appendix \ref{appa}. When $\beta=0$ the integral (\ref{intstruve}) cannot be evaluated in terms of the function $\mathbf{L}_\nu(x)$, but an exact formula is available in terms of the generalized hypergeometric function \begin{equation*}{}_pF_q\big(a_1,\ldots,a_p;b_1,\ldots,b_q;x\big)=\sum_{k=0}^\infty \frac{(a_1)_k\cdots(a_p)_k}{(b_1)_k\cdots(b_q)_k}\frac{x^k}{k!}, \end{equation*} where the Pochhammer symbol is defined by $(a)_0=1$ and $(a)_k=a(a+1)(a+2)\cdots(a+k-1)$, $k\geq1$. Indeed, for $-\nu-\frac{3}{2}\notin\mathbb{N}$, we have the representation \begin{equation*}\mathbf{L}_\nu(x)=\frac{x^{\nu+1}}{\sqrt{\pi}2^\nu\Gamma(\nu+\frac{3}{2})} {}_1F_2\bigg(1;\frac{3}{2},\nu+\frac{3}{2};\frac{x^2}{4}\bigg), \end{equation*} and by a straightforward calculation we have that, for $\nu>-1$ and $x>0$, \begin{equation*}\int_0^x t^\nu \mathbf{L}_\nu(t)\,\mathrm{d}t=\frac{x^{2\nu+2}}{\sqrt{\pi}2^{\nu+1}(\nu+1)\Gamma(\nu+\frac{3}{2})}{}_2F_3\bigg(1,\nu+1;\frac{3}{2},\nu+\frac{3}{2},\nu+2;\frac{x^2}{4}\bigg). \end{equation*} The integral (\ref{intstruve}) can also be evaluated when $0<\beta<1$, but the formula is more complicated: for $\nu>-1$ and $x>0$, \begin{equation*}\int_0^x \mathrm{e}^{-\beta t} t^{\nu} \mathbf{L}_{\nu}(t)\,\mathrm{d}t=\sum_{k=0}^\infty\frac{2^{-\nu-2k}\beta^{-2k-2\nu-2}}{\Gamma(k+\frac{3}{2})\Gamma(k+\nu+\frac{3}{2})}\gamma(2k+2\nu+2,\beta x). \end{equation*} These complicated formulas provide the motivation for establishing simple bounds, involving the modified Struve function $\mathbf{L}_{\nu}(x)$ itself, for the integral (\ref{intstruve}). Several upper bounds and a lower bound for the integral (\ref{intstruve}) were established by \cite{gaunt ineq4} by adapting the techniques used by \cite{gaunt ineq1,gaunt ineq3} to bound the analogous integral (\ref{intbes}) involving the modified Bessel function $I_\nu(x)$. In this paper, we complement the work of \cite{gaunt ineq4} by obtaining several lower bounds for the integral (\ref{intstruve}) (Theorem \ref{tiger1}), one of which is a strict improvement on the only lower bound given in \cite{gaunt ineq4}. 
In fact, all lower bounds obtained in this paper are tight in the limit $x\rightarrow\infty$, a feature not seen in the lower bound of \cite{gaunt ineq4}. We also extend the range of validity of the upper bounds given in \cite{gaunt ineq4} from $\nu\geq\frac{1}{2}$ to $\nu>-\frac{1}{2}$ (Theorem \ref{tiger2}), with our bounds taking the same functional form, but with larger numerical constants. We shall proceed in a similar manner to \cite{gaunt ineq4}, by adapting the approach used in the recent paper \cite{gaunt ineq8} to obtain similar improvements on the bounds of \cite{gaunt ineq1, gaunt ineq3} that were obtained for the related integral (\ref{intbes}) involving $I_\nu(x)$. We establish our upper bounds by proving a series of lemmas, which may be of independent interest. Lemma \ref{lem1} gives another upper bound for the integral (\ref{intstruve}), which outperforms our bounds from Theorem \ref{tiger2} for `large' values of $x$. In Lemma \ref{lem0}, we provide a new bound for the ratio $\mathbf{L}_{\nu}(x)/\mathbf{L}_{\nu-1}(x)$. Lemma \ref{lem2} gives monotonicity results and inequalities for some products involving the modified Struve function $\mathbf{L}_\nu(x)$ and the modified Bessel function of the second kind $K_\nu(x)$ that complement existing results concerning products involving the modified Bessel functions $I_\nu(x)$ and $K_\nu(x)$. The lemmas are collected and proved in Section \ref{seclem}, and the main results are proved in Section \ref{sec3}. Elementary properties of the modified Struve function $\mathbf{L}_\nu(x)$ and the modified Bessel functions that are needed in the paper are collected in Appendix \ref{appa}. \section{Main results and comparisons}\label{sec2} The inequalities given in the following Theorems \ref{tiger1} and \ref{tiger2} are natural analogues of inequalities that have been recently obtained by \cite{gaunt ineq8} for the related integral $\int_0^x \mathrm{e}^{-\beta t}t^{\nu}I_\nu(t)\,\mathrm{d}t$. The inequalities also complement and improve on bounds of \cite{gaunt ineq4} for the integral (\ref{intstruve}). Theorems \ref{tiger1} and \ref{tiger2} and Proposition \ref{propone} below are proved in Section \ref{sec3}. \begin{theorem}\label{tiger1}Let $0<\beta<1$. Then, for $x>0$, \begin{align}\int_0^x \mathrm{e}^{-\beta t}t^{\nu}\mathbf{L}_\nu(t)\,\mathrm{d}t&>\frac{1}{1-\beta}\bigg\{\mathrm{e}^{-\beta x}x^\nu\mathbf{L}_\nu(x)\nonumber\\ \label{ineqb2}&\quad -\frac{\gamma(2\nu+1,\beta x)}{\sqrt{\pi}2^\nu\beta^{2\nu+1}\Gamma(\nu+\frac{3}{2})}\bigg\}, \quad-\tfrac{1}{2}<\nu\leq0, \\ \int_0^x \mathrm{e}^{-\beta t}t^{\nu}\mathbf{L}_\nu(t)\,\mathrm{d}t&>\frac{1}{1-\beta}\bigg\{\bigg(1-\frac{4\nu^2}{(2\nu-1)(1-\beta)}\frac{1}{x}\bigg)\mathrm{e}^{-\beta x}x^\nu \mathbf{L}_\nu(x)\nonumber\\ \label{ineqb3}&\quad-\frac{\gamma(2\nu+1,\beta x)}{\sqrt{\pi}2^\nu\beta^{2\nu+1}\Gamma(\nu+\frac{3}{2})}\bigg\}, \quad \nu\geq\tfrac{3}{2}, \\ \label{ineqb4}\int_0^x \mathrm{e}^{-\beta t}t^{\nu}\mathbf{L}_\nu(t)\,\mathrm{d}t&>\mathrm{e}^{-\beta x}x^\nu\sum_{k=0}^\infty \beta^k \mathbf{L}_{\nu+k+1}(x), \quad \nu>-1. \end{align} Inequalities (\ref{ineqb2})--(\ref{ineqb4}) are tight in the limit $x\rightarrow\infty$. Recall that $\gamma(a,x)=\int_0^x\mathrm{e}^{-t}t^{a-1}\,\mathrm{d}t$ is the lower incomplete gamma function. \end{theorem} \begin{theorem}\label{tiger2}Let $0<\beta<1$. 
Then, for $x>0$, \begin{align}\label{ineqb10}\int_0^x\mathrm{e}^{-\beta t}t^\nu \mathbf{L}_\nu(t)\,\mathrm{d}t&<\frac{2\nu+29}{(2\nu+1)(1-\beta)}\mathrm{e}^{-\beta x}x^\nu \mathbf{L}_{\nu+1}(x), \quad \nu>-\tfrac{1}{2}, \\ \label{ineqb11}\int_0^x\mathrm{e}^{-\beta t}t^\nu \mathbf{L}_\nu(t)\,\mathrm{d}t&<\frac{2\nu+15}{(2\nu+1)(1-\beta)}\mathrm{e}^{-\beta x}x^\nu \mathbf{L}_\nu(x), \quad \nu>-\tfrac{1}{2}, \end{align} \begin{align} \int_0^x\mathrm{e}^{-\beta t}t^\nu \mathbf{L}_\nu(t)\,\mathrm{d}t&>\frac{1}{1-\beta}\bigg\{\bigg(1-\frac{2\nu(2\nu+27)}{(2\nu-1)(1-\beta)}\frac{1}{x}\bigg)\mathrm{e}^{-\beta x}x^\nu \mathbf{L}_\nu(x)\nonumber\\ \label{ineqb12}&\quad-\frac{\gamma(2\nu+1,\beta x)}{\sqrt{\pi}2^\nu\beta^{2\nu+1}\Gamma(\nu+\frac{3}{2})}\bigg\}, \quad \nu>\tfrac{1}{2}. \end{align} Inequality (\ref{ineqb12}) is tight as $x\rightarrow\infty$. \end{theorem} The inequalities in the following proposition are stronger than inequalities (\ref{ineqb2}), (\ref{ineqb3}) and (\ref{ineqb12}), because $\mathbf{L}_{\nu+1}(x)<\mathbf{L}_\nu(x)$, $x>0$, $\nu\geq-\frac{1}{2}$ (see (\ref{Imon})). \begin{proposition}\label{propone}Let $0<\beta<1$. Then, for $x>0$, \begin{align}\int_0^x \mathrm{e}^{-\beta t}t^{\nu}\mathbf{L}_{\nu+1}(t)\,\mathrm{d}t&>\frac{1}{1-\beta}\bigg\{\mathrm{e}^{-\beta x}x^\nu\mathbf{L}_\nu(x)\nonumber\\ \label{ineqb21}&\quad-\frac{\gamma(2\nu+1,\beta x)}{\sqrt{\pi}2^\nu\beta^{2\nu+1}\Gamma(\nu+\frac{3}{2})}\bigg\}, \quad-\tfrac{1}{2}<\nu\leq0,\\ \int_0^x \mathrm{e}^{-\beta t}t^{\nu}\mathbf{L}_{\nu+1}(t)\,\mathrm{d}t&>\frac{1}{1-\beta}\bigg\{\bigg(1-\frac{4\nu^2}{(2\nu-1)(1-\beta)}\frac{1}{x}\bigg)\mathrm{e}^{-\beta x}x^\nu \mathbf{L}_\nu(x)\nonumber\\ \label{ineqb22}&\quad-\frac{\gamma(2\nu+1,\beta x)}{\sqrt{\pi}2^\nu\beta^{2\nu+1}\Gamma(\nu+\frac{3}{2})}\bigg\}, \quad \nu\geq\tfrac{3}{2}, \\ \int_0^x\mathrm{e}^{-\beta t}t^\nu \mathbf{L}_{\nu+1}(t)\,\mathrm{d}t&>\frac{1}{1-\beta}\bigg\{\bigg(1-\frac{2\nu(2\nu+27)}{(2\nu-1)(1-\beta)}\frac{1}{x}\bigg)\mathrm{e}^{-\beta x}x^\nu \mathbf{L}_\nu(x)\nonumber\\ \label{ineqb23}&\quad -\frac{\gamma(2\nu+1,\beta x)}{\sqrt{\pi}2^\nu\beta^{2\nu+1}\Gamma(\nu+\frac{3}{2})}\bigg\}, \quad \nu>\tfrac{1}{2}. \end{align} \end{proposition} \begin{remark}\label{rem1}In this remark, we discuss the performance of our bounds given in Theorems \ref{tiger1} and \ref{tiger2}, and make comparisons between our bounds and those given by \cite{gaunt ineq4} for the integral (\ref{intstruve}). Throughout this remark $0<\beta<1$. Inequality (\ref{ineqb4}) improves on the only other lower bound for the integral (\ref{intstruve}) in the literature \cite{gaunt ineq4}, $\int_0^x \mathrm{e}^{-\beta t}t^{\nu}\mathbf{L}_\nu(t)\,\mathrm{d}t>\mathrm{e}^{-\beta x}x^\nu \mathbf{L}_{\nu+1}(x)$, $x>0$, $\nu>-\tfrac{1}{2}$, with this bound in fact being the first term in the infinite series of the lower bound (\ref{ineqb4}). The other lower bounds from Theorems \ref{tiger1} and \ref{tiger2}, that is (\ref{ineqb2}), (\ref{ineqb3}) and (\ref{ineqb12}), all perform worse than (\ref{ineqb4}) and the bound of \cite{gaunt ineq4} for `small' $x$. Indeed, it is easily seen that the lower bounds in (\ref{ineqb2}) and (\ref{ineqb3}) are negative for sufficiently small $x$, whilst a simple asymptotic analysis of the bound (\ref{ineqb12}) using (\ref{Itend0}) shows that, for $-\frac{1}{2}<\nu<0$, the limiting form of this bound is $\frac{2\nu}{2\nu+1}\frac{x^{2\nu+1}}{\sqrt{\pi}2^\nu\Gamma(\nu+3/2)}<0$, as $x\downarrow0$. 
For the case $\nu=0$ the bound is again negative for sufficiently small $x$: $\frac{1}{1-\beta}\{\mathrm{e}^{-\beta x}\mathbf{L}_0(x)-\frac{2}{\pi\beta}(1-\mathrm{e}^{-\beta x})\}\sim -\frac{\beta x^2}{\pi(1-\beta)}$, as $x\downarrow0$. The bounds (\ref{ineqb2}), (\ref{ineqb3}) and (\ref{ineqb12}) do, however, perform well for `large' $x$. Unlike the bound of \cite{gaunt ineq4}, these bounds are tight as $x\rightarrow\infty$, and this is achieved without the need of an infinite sum involving modified Struve functions of the first kind as given in the bound (\ref{ineqb4}). Inequality (2.13) of \cite{gaunt ineq4} gives the following upper bound: for $x>0$, \begin{align}\int_0^x \mathrm{e}^{-\beta t}t^{\nu}\mathbf{L}_\nu(t)\,\mathrm{d}t&<\frac{\mathrm{e}^{-\beta x}x^\nu}{(2\nu+1)(1-\beta)}\bigg(2(\nu+1)\mathbf{L}_{\nu+1}(x)-\mathbf{L}_{\nu+3}(x) \nonumber\\ &\quad-\frac{x^{\nu+2}}{\sqrt{\pi}2^{\nu+2}(\nu+1)\Gamma(\nu+\frac{5}{2})}\bigg), \quad \nu\geq\tfrac{1}{2},\nonumber \end{align} \begin{align} \label{gau1}&<\frac{2(\nu+1)}{(2\nu+1)(1-\beta)}\mathrm{e}^{-\beta x}x^\nu \mathbf{L}_{\nu+1}(x), \quad \nu\geq\tfrac{1}{2}. \end{align} Another upper bound is obtained by combining inequalities (2.10) and (2.12) of \cite{gaunt ineq4}: for $x>0$, \begin{equation}\label{gau2}\int_0^x \mathrm{e}^{-\beta t}t^{\nu}\mathbf{L}_\nu(t)\,\mathrm{d}t<\frac{1}{1-\beta}\mathrm{e}^{-\beta x}x^\nu \mathbf{L}_\nu(x), \quad \nu\geq\tfrac{1}{2}. \end{equation} Inequalities (\ref{ineqb10}) and (\ref{ineqb11}) increase the range of validity of inequalities (\ref{gau1}) and (\ref{gau2}) to $\nu>-\frac{1}{2}$ at the cost of larger multiplicative constants. These larger constants arise because our derivations of inequalities (\ref{ineqb10}) and (\ref{ineqb11}) are more involved than those of \cite{gaunt ineq4} for inequalities (\ref{gau1}) and (\ref{gau2}). Indeed, we arrive at our bounds by applying a series of inequalities collected in Lemmas \ref{lem0}--\ref{notfin}, which when combined leads to a build up of errors. The reason we needed a more involved analysis was because the derivations of \cite{gaunt ineq4} rely heavily on the use of the inequality $\mathbf{L}_\nu(x)<\mathbf{L}_{\nu-1}(x)$, which holds for $x>0$, $\nu\geq\frac{1}{2}$ (see (\ref{Imon})), and without this useful inequality at our disposal (we have $\nu>-\frac{1}{2}$) we required a more involved and less direct proof. It is worth noting that we can combine our bound (\ref{ineqb10}) and the bound (\ref{gau1}) of \cite{gaunt ineq4} to obtain the bound, for $x>0$, \begin{equation*}\int_0^x \mathrm{e}^{-\beta t}t^{\nu}\mathbf{L}_\nu(t)\,\mathrm{d}t<\frac{A_\nu}{(2\nu+1)(1-\beta)}\mathrm{e}^{-\beta x}x^\nu \mathbf{L}_{\nu+1}(x), \quad \nu>-\tfrac{1}{2}, \end{equation*} where $A_\nu=2(\nu+1)$ for $\nu\geq\frac{1}{2}$, and $A_\nu=2\nu+29$ for $|\nu|<\frac{1}{2}$. A similar inequality can be obtained by combining our bound (\ref{ineqb11}) and the bound (\ref{gau2}) of \cite{gaunt ineq4}. The inequalities obtained in this paper along with those presented in this remark allow for various double inequalities to be given for the integral (\ref{intstruve}). As an example, for $x>0$, \begin{equation}\label{gau3}\mathrm{e}^{-\beta x}x^\nu\sum_{k=0}^\infty \beta^k \mathbf{L}_{\nu+k+1}(x)<\int_0^x \mathrm{e}^{-\beta t}t^{\nu}\mathbf{L}_\nu(t)\,\mathrm{d}t<\frac{1}{1-\beta}\mathrm{e}^{-\beta x}x^\nu \mathbf{L}_\nu(x), \quad \nu\geq\tfrac{1}{2}. 
\end{equation} With the aid of \emph{Mathematica} we calculated the relative error in estimating $F_{\nu,\beta}(x)=\int_0^x \mathrm{e}^{-\beta t}t^{\nu}\mathbf{L}_\nu(t)\,\mathrm{d}t$ by the upper bound in (\ref{gau3}) (denoted by $U_{\nu,\beta}(x)$), and the lower bound truncated at the fifth term in the sum, $L_{\nu,\beta}(x)=\mathrm{e}^{-\beta x}x^\nu\sum_{k=0}^4 \beta^k \mathbf{L}_{\nu+k+1}(x)$. We report the results in Tables \ref{table1} and \ref{table2}. For fixed $x$ and $\nu$, we see that increasing $\beta$ increases the relative error in approximating $F_{\nu,\beta}(x)$ by either $L_{\nu,\beta}(x)$ or $U_{\nu,\beta}(x)$. Both the lower and upper bounds in (\ref{gau3}) are tight as $x\rightarrow\infty$, and we see that, for fixed $\nu$ and $\beta$, the relative error in approximating $F_{\nu,\beta}(x)$ by $U_{\nu,\beta}(x)$ decreases as $x$ increases. However, as we have truncated the sum, $L_{\nu,\beta}(x)$ is not tight as $x\rightarrow\infty$. The effect of truncating the sum is most pronounced for larger $\beta$ and larger $x$. For $\beta=0.75$, $\sum_{k=0}^\infty 0.75^k=4$ and $\sum_{k=0}^4 0.75^k=3.0508$, and so $\lim_{x\rightarrow\infty}\big(1-\frac{L_{\nu,0.75}(x)}{F_{\nu,0.75}(x)}\big)=0.2373$, $\nu>-\frac{1}{2}$, where we also made use of the limiting forms (\ref{eqeq1}) and (\ref{Itendinfinity}). In contrast, $\lim_{x\rightarrow\infty}\big(1-\frac{L_{\nu,0.25}(x)}{F_{\nu,0.25}(x)}\big)=9.766\times 10^{-4}$, which is fairly negligible. The upper bound $U_{\nu,\beta}(x)$ is of the wrong asymptotic order as $x\downarrow0$ (using (\ref{Itend0}) shows that $\frac{U_{\nu,\beta}(x)}{F_{\nu,\beta}(x)}\sim\frac{2(\nu+1)}{(1-\beta)x}$, as $x\downarrow0$), and so performs poorly for `small' $x$. The lower bound $L_{\nu,\beta}(x)$ performs better for `small' $x$; indeed, it is of the correct asymptotic order as $x\downarrow0$ with $\lim_{x\downarrow0}\big(1-\frac{L_{\nu,\beta}(x)}{F_{\nu,\beta}(x)}\big)=\frac{1}{2\nu+3}$. 
\begin{table}[h] \centering \caption{\footnotesize{Relative error in approximating $F_{\nu,\beta}(x)$ by $L_{\nu,\beta}(x)$.}} \label{table1} {\scriptsize \begin{tabular}{|c|rrrrrrr|} \hline \backslashbox{$(\nu,\beta)$}{$x$} & 0.5 & 5 & 10 & 15 & 25 & 50 & 100 \\ \hline $(1,0.25)$ & 0.2051 & 0.1976 & 0.1413 & 0.1028 & 0.0656 & 0.0346 & 0.0182 \\ $(2.5,0.25)$ & 0.1276 & 0.1320 & 0.1092 & 0.0863 & 0.0591 & 0.0329 & 0.0177 \\ $(5,0.25)$ & 0.0781 & 0.0831 & 0.0773 & 0.0670 & 0.0503 & 0.0302 & 0.0169 \\ $(10,0.25)$ & 0.0439 & 0.0465 & 0.0468 & 0.0444 & 0.0378 & 0.0257 & 0.0155 \\ \hline $(1,0.5)$ & 0.2111 & 0.2582 & 0.2259 & 0.1843 & 0.1341 & 0.0870 & 0.0602 \\ $(2.5,0.5)$ & 0.1304 & 0.1635 & 0.1606 & 0.1426 & 0.1133 & 0.0791 & 0.0570 \\ $(5,0.5)$ & 0.0793 & 0.0971 & 0.1039 & 0.1004 & 0.0881 & 0.0680 & 0.0522 \\ $(10,0.5)$ & 0.0443 & 0.0514 & 0.0569 & 0.0590 & 0.0580 & 0.0515 & 0.0440 \\ \hline $(1,0.75)$ & 0.2171 & 0.3359 & 0.3723 & 0.3659 & 0.3369 & 0.2953 & 0.2683 \\ $(2.5,0.75)$ & 0.1333 & 0.2036 & 0.2458 & 0.2597 & 0.2640 & 0.2581 & 0.2500 \\ $(5,0.75)$ & 0.0805 & 0.1142 & 0.1446 & 0.1635 & 0.1850 & 0.2084 & 0.2226 \\ $(10,0.75)$ & 0.0447 & 0.0569 & 0.0705 & 0.0825 & 0.1028 & 0.1400 & 0.1774 \\ \hline \end{tabular}} \end{table} \begin{table}[h] \centering \caption{\footnotesize{Relative error in approximating $F_{\nu,\beta}(x)$ by $U_{\nu,\beta}(x)$.}} \label{table2} {\scriptsize \begin{tabular}{|c|rrrrrrr|} \hline \backslashbox{$(\nu,\beta)$}{$x$} & 0.5 & 5 & 10 & 15 & 25 & 50 & 100 \\ \hline $(1,0.25)$ & 9.4597 & 0.3208 & 0.0888 & 0.0521 & 0.0292 & 0.0139 & 0.0068 \\ $(2.5,0.25)$ & 17.4185 & 0.9887 & 0.3593 & 0.2156 & 0.1197 & 0.0565 & 0.0274 \\ $(5,0.25)$ & 30.7218 & 2.1879 & 0.8593 & 0.5134 & 0.2806 & 0.1300 & 0.0625 \\ $(10,0.25)$ & 57.3655 & 4.7301 & 1.9918 & 1.1901 & 0.6378 & 0.2868 & 0.1351 \\ \hline $(1,0.5)$ & 14.2938 & 0.5538 & 0.1530 & 0.0839 & 0.0452 & 0.0212 & 0.0103 \\ $(2.5,0.5)$ & 26.1923 & 1.5400 & 0.5661 & 0.3363 & 0.1842 & 0.0858 & 0.0414 \\ $(5,0.5)$ & 46.1220 & 3.3214 & 1.3161 & 0.7868 & 0.4286 & 0.1972 & 0.0943 \\ $(10,0.5)$ & 86.0701 & 7.1185 & 3.0084 & 1.8015 & 0.9664 & 0.4339 & 0.2037 \\ \hline $(1,0.75)$ & 28.8028 & 1.3243 & 0.4124 & 0.2137 & 0.1021 & 0.0444 & 0.0210 \\ $(2.5,0.75)$ & 52.5169 & 3.2293 & 1.2300 & 0.7305 & 0.3933 & 0.1783 & 0.0845 \\ $(5,0.75)$ & 92.3236 & 6.7374 & 2.7112 & 1.6308 & 0.8892 & 0.4056 & 0.1918 \\ $(10,0.75)$ & 172.1854 & 14.2887 & 6.0686 & 3.6482 & 1.9648 & 0.8827 & 0.4126 \\ \hline \end{tabular}} \end{table} \end{remark} \section{Lemmas}\label{seclem} We prove Theorem \ref{tiger2} through the following series of lemmas, which may be of independent interest. \begin{lemma}\label{lem0}Let $\nu>0$ and $x>0$. Then \begin{align}\label{lratio}\frac{\mathbf{L}_{\nu}(x)}{\mathbf{L}_{\nu-1}(x)}>\frac{x}{2\nu+1+x}. \end{align} This bound is tight in the limits $x\downarrow0$ and $x\rightarrow\infty$. \end{lemma} \begin{lemma}\label{lem2}Suppose $\nu\geq-\frac{1}{2}$. Then the functions $x\mapsto K_{\nu+1}(x)\mathbf{L}_\nu(x)$ and $x\mapsto xK_{\nu+2}(x)\mathbf{L}_\nu(x)$ are strictly decreasing on $(0,\infty)$. As a consequence of the latter monotonicity result, we have the following tight two-sided inequality: \begin{equation}\label{klineq1}\frac{1}{2}<xK_{\nu+2}(x)\mathbf{L}_\nu(x)<\frac{2\Gamma(\nu+2)}{\sqrt{\pi}\Gamma(\nu+\frac{3}{2})},\quad x>0. 
\end{equation} We also have that, for $x>0$, \begin{align}\label{klineq0}xK_{\nu+1}(x)\mathbf{L}_\nu(x)&<1,\\ \label{klineq2}xK_{\nu+3}(x)\mathbf{L}_\nu(x)&<\frac{2\Gamma(\nu+2)}{\sqrt{\pi}\Gamma(\nu+\frac{3}{2})}\bigg(1+\frac{2\nu+5}{x}\bigg). \end{align} Suppose now that $-\frac{1}{2}\leq\nu\leq\frac{1}{2}$. Then, for $x>0$, \begin{align}\label{gamr1}xK_{\nu+2}(x)\mathbf{L}_\nu(x)&<\frac{3}{2},\\ \label{gamr2}xK_{\nu+3}(x)\mathbf{L}_\nu(x)&<\frac{3}{2}+\frac{9}{x},\\ \label{gamr3}xK_{\nu+3}(x)\mathbf{L}_{\nu+1}(x)&<\frac{15}{8}. \end{align} \end{lemma} \begin{lemma}\label{lem1}Let $\nu>-\frac{1}{2}$ and $0<\beta<1$. Fix $x_*>\frac{1}{1-\beta}$. Then, for $x\geq x_*$, \begin{equation}\label{ineqb1}\int_0^x \mathrm{e}^{-\beta t}t^{\nu}\mathbf{L}_\nu(t)\,\mathrm{d}t<M_{\nu,\beta}(x_*)\mathrm{e}^{-\beta x} x^\nu \mathbf{L}_{\nu+1}(x), \end{equation} where \begin{equation}\label{mng}M_{\nu,\beta}(x_*)=\max\bigg\{\frac{2\nu+3+2x_*}{2\nu+1},\frac{x_*}{(1-\beta)x_*-1}\bigg\}. \end{equation} \end{lemma} \begin{lemma}\label{notfin} Suppose that $-\frac{1}{2}<\nu\leq\frac{1}{2}$ and $0<\beta<1$. Then, for $x>0$, \begin{align}\label{term00}\frac{\mathrm{e}^{\beta x}K_{\nu+3}(x)}{x^{\nu-1}}\int_0^x \mathrm{e}^{-\beta t}t^{\nu}\mathbf{L}_\nu(t)\,\mathrm{d}t&<\frac{14}{(2\nu+1)(1-\beta)}, \\ \label{term10}\frac{\mathrm{e}^{\beta x}K_{\nu+2}(x)}{x^{\nu-1}}\int_0^x \mathrm{e}^{-\beta t}t^{\nu}\mathbf{L}_\nu(t)\,\mathrm{d}t&<\frac{7}{(2\nu+1)(1-\beta)}. \end{align} \end{lemma} \begin{remark}The monotonicity results of Lemma \ref{lem2} for the products $K_{\nu+1}(x)\mathbf{L}_\nu(x)$ and $xK_{\nu+2}(x)\mathbf{L}_\nu(x)$ complement monotonicity results that have been established for the products $K_\nu(x)I_\nu(x)$ (see \cite{bar1,bar2,penfold,pm50}), $xK_\nu(x)I_\nu(x)$ (see \cite{hartman}) and $xK_{\nu+1}(x)I_\nu(x)$ (see \cite{gaunt ineq2}). We also note that a number of bounds for the product $K_\nu(x)I_\nu(x)$ have been obtained by \cite{bar16}. In light of these results, it is natural to ask whether a monotonicity result is available for the product $xK_{\nu+1}(x)\mathbf{L}_\nu(x)$, which is also present in Lemma \ref{lem2}. It turns out that, for fixed $\nu>-\frac{1}{2}$, $xK_{\nu+1}(x)\mathbf{L}_\nu(x)$ is not a monotone function of $x$ on $(0,\infty)$. Indeed, applying the limiting forms (\ref{Itend0})--(\ref{Ktendinfinity}) gives that \begin{align*}xK_{\nu+1}(x)\mathbf{L}_\nu(x) &\sim\frac{\Gamma(\nu+1)x}{\sqrt{\pi}\Gamma(\nu+\frac{3}{2})}, \quad x\downarrow0, \\ xK_{\nu+1}(x)\mathbf{L}_\nu(x) &\sim \frac{1}{2}+\frac{2\nu+1}{4x}, \quad x\rightarrow\infty, \end{align*} which tells us that $xK_{\nu+1}(x)\mathbf{L}_\nu(x)$ is an increasing function of $x$ for `small' $x$ and a decreasing function of $x$ for `large' $x$ if $\nu>-\frac{1}{2}$. \end{remark} \begin{remark}Inequality (\ref{ineqb1}) of Lemma \ref{lem1} is more accurate than inequalities (\ref{ineqb10}) and (\ref{ineqb11}) of Theorem \ref{tiger2} for `large' $x$. As an example, applying Lemma \ref{lem1} with $x_*=\frac{2}{1-\beta}$ gives that, for $x\geq\frac{2}{1-\beta}$, $\nu>-\frac{1}{2}$, $0<\beta<1$, \begin{equation*}\int_0^x\mathrm{e}^{-\beta t}t^\nu\mathbf{L}_\nu(t)\,\mathrm{d}t<\frac{1}{2\nu+1}\bigg(2\nu+3+\frac{4}{1-\beta}\bigg)\mathrm{e}^{-\beta x}x^\nu\mathbf{L}_{\nu+1}(x), \end{equation*} which is an improvement on both (\ref{ineqb10}) and (\ref{ineqb11}) in its range of validity.
\end{remark} \noindent{\emph{Proof of Lemma \ref{lem0}.}} We begin by noting the following bound of \cite[Theorem 2.2]{gaunt ineq5}: \begin{equation*}\frac{\mathbf{L}_\nu(x)}{\mathbf{L}_{\nu-1}(x)}>\bigg(\frac{I_{\nu-1}(x)}{I_\nu(x)}+\frac{2b_\nu(x)}{x}\bigg)^{-1}, \quad x>0,\:\nu\geq0, \end{equation*} where $b_\nu(x)=\frac{(x/2)^{\nu+1}}{\sqrt{\pi}\Gamma(\nu+3/2)\mathbf{L}_\nu(x)}$. Part (iii) of Lemma 2.1 of \cite{gaunt ineq5} tells us that $b_\nu(x)<\frac{1}{2}$ for all $x>0$, $\nu\geq0$, and so we have the simpler bound \begin{equation}\label{aug18}\frac{\mathbf{L}_\nu(x)}{\mathbf{L}_{\nu-1}(x)}>\bigg(\frac{I_{\nu-1}(x)}{I_\nu(x)}+\frac{1}{x}\bigg)^{-1}, \quad x>0,\;\nu\geq0. \end{equation} The ratio of modified Bessel functions of the first kind can be bounded by the inequality \begin{equation*}\frac{I_{\nu}(x)}{I_{\nu-1}(x)}>\frac{x}{2\nu+x}, \quad x>0,\;\nu>0, \end{equation*} which is the simplest lower bound in a sequence of rational bounds obtained by \cite{nasell2}. Applying this bound to (\ref{aug18}) then gives us our desired bound (\ref{lratio}). Finally, the assertion that the bound is tight in the limits $x\downarrow0$ and $x\rightarrow\infty$ follows easily from an application of the limiting forms (\ref{Itend0}) and (\ref{Itendinfinity}) and the standard formula $\Gamma(x+1)=x\Gamma(x)$. $\square$ \noindent{\emph{Proof of Lemma \ref{lem2}.}} (i) Note that we can write $K_{\nu+1}(x)\mathbf{L}_\nu(x)=f_\nu(x)g_\nu(x)$, where $f_\nu(x)=K_{\nu+1}(x)I_{\nu+1}(x)$ and $g_\nu(x)=\mathbf{L}_\nu(x)/I_{\nu+1}(x)$. It has been shown that, for $\nu>-2$, $f_\nu(x)$ is a strictly decreasing function of $x$ on $(0,\infty)$ (see \cite{bar2}, which extends the range of validity of results of \cite{bar1,penfold}), and part (i) of Theorem 2.2 of \cite{bp14} states that, for $\nu\geq-\frac{1}{2}$, $g_\nu(x)$ is a decreasing function of $x$ on $(0,\infty)$. As a product of two strictly positive functions, one of which is strictly decreasing and the other decreasing, it follows that, for $\nu\geq-\frac{1}{2}$, the function $x\mapsto K_{\nu+1}(x)\mathbf{L}_\nu(x)$ is strictly decreasing on $(0,\infty)$. The proof that, for $\nu\geq-\frac{1}{2}$, the function $x\mapsto xK_{\nu+2}(x)\mathbf{L}_\nu(x)$ is strictly decreasing on $(0,\infty)$ is similar. We note that $xK_{\nu+2}(x)\mathbf{L}_\nu(x)=h_\nu(x)g_\nu(x)$, where $h_\nu(x)=xK_{\nu+2}(x)I_{\nu+1}(x)$. Lemma 3 of \cite{gaunt ineq2} asserts that, for $\nu\geq-\frac{3}{2}$, $h_\nu(x)$ is a strictly decreasing function of $x$ on $(0,\infty)$, and the proof now proceeds exactly as the previous one concerning the monotonicity of the function $x\mapsto K_{\nu+1}(x)\mathbf{L}_\nu(x)$. The upper and lower bounds in (\ref{klineq1}) now follow from using the limiting forms (\ref{Itend0})--(\ref{Ktendinfinity}) to calculate the limits $\lim_{x\downarrow0}xK_{\nu+2}(x)\mathbf{L}_\nu(x)$ and $\lim_{x\rightarrow\infty}xK_{\nu+2}(x)\mathbf{L}_\nu(x)$. \noindent{(ii)} Inequality (\ref{klineq0}) is obtained by combining the inequality $\mathbf{L}_\nu(x)<I_\nu(x)$, $x>0$, $\nu\geq-\frac{1}{2}$, with the bound $xK_{\nu+1}(x)I_\nu(x)\leq1$, $x>0$, $\nu\geq-\frac{1}{2}$ (see \cite[Lemma 3]{gaunt ineq2}). To see that $\mathbf{L}_\nu(x)<I_\nu(x)$, $x>0$, $\nu\geq-\frac{1}{2}$, we recall that the modified Struve function of the second kind is defined by $\mathbf{M}_\nu(x)=\mathbf{L}_\nu(x)-I_\nu(x)$.
We can readily see that $\mathbf{M}_\nu(x)<0$, for $x>0$, $\nu>-\frac{1}{2}$, from its integral representation (see \cite[formula 11.5.4]{olver}), and $M_{-\frac{1}{2}}(x)<0$, $x>0$, can be seen by using the formulas in (\ref{speccase}). \noindent{(iii)} We will make use of the following inequality of \cite{segura} for a ratio of modified Bessel functions of the second kind: \begin{equation}\label{kmuineq}\frac{K_\nu(x)}{K_{\nu-1}(x)}<\frac{\nu-\frac{1}{2}+\sqrt{(\nu-\frac{1}{2})^2+x^2}}{x}<1+\frac{2\nu-1}{x}, \quad x>0,\: \nu>\tfrac{1}{2}. \end{equation} We now obtain inequality (\ref{klineq2}) by applying inequality (\ref{kmuineq}) and the upper bound in (\ref{klineq1}): \begin{align*}xK_{\nu+3}(x)\mathbf{L}_\nu(x)=\frac{K_{\nu+3}(x)}{K_{\nu+2}(x)}\cdot xK_{\nu+2}(x)\mathbf{L}_\nu(x)<\bigg(1+\frac{2\nu+5}{x}\bigg)\frac{2\Gamma(\nu+2)}{\sqrt{\pi}\Gamma(\nu+\frac{3}{2})}. \end{align*} \noindent{(iv)} We note that the ratio $\frac{\Gamma(\nu+2)}{\Gamma(\nu+3/2)}$ is an increasing function of $\nu$ on $[-\frac{1}{2},\frac{1}{2}]$ (see \cite{gio}). Therefore using the upper bound in (\ref{klineq1}) we obtain that, for $-\frac{1}{2}\leq\nu\leq\frac{1}{2}$ and $x>0$, \[xK_{\nu+2}(x)\mathbf{L}_\nu(x)<\frac{2\Gamma(\frac{1}{2}+2)}{\sqrt{\pi}\Gamma(\frac{1}{2}+\frac{3}{2})}=\frac{3}{2},\] where we used that $\Gamma(\frac{5}{2})=\frac{3\sqrt{\pi}}{4}$. Thus, we have proved inequality (\ref{gamr1}). Inequalities (\ref{gamr2}) and (\ref{gamr3}) are obtained similarly (making use of the upper bound in (\ref{klineq1}) and inequality (\ref{klineq2})), and we omit the details. $\square$ \noindent{\emph{Proof of Lemma \ref{lem1}.}} Fix $x_*>\frac{1}{1-\beta}$. We consider the function \begin{equation*}u_{\nu,\beta}(x)=M_{\nu,\beta}(x_*)\mathrm{e}^{-\beta x}x^\nu \mathbf{L}_{\nu+1}(x)-\int_0^x \mathrm{e}^{-\beta t}t^\nu \mathbf{L}_\nu(t)\,\mathrm{d}t, \end{equation*} and prove inequality (\ref{ineqb1}) by showing that $u_{\nu,\beta}(x)>0$ for all $x\geq x_*$. We first prove that $u_{\nu,\beta}(x_*)>0$. To this end, we consider the function \begin{equation*}v_{\nu,\beta}(x)=\frac{\mathrm{e}^{\beta x}}{x^\nu \mathbf{L}_{\nu+1}(x)}\int_0^x\mathrm{e}^{-\beta t} t^\nu \mathbf{L}_\nu(t)\,\mathrm{d}t, \end{equation*} and it suffices to prove that $v_{\nu,\beta}(x_*)< M_{\nu,\beta}(x_*)$. We note that \begin{equation*}\frac{\partial v_{\nu,\beta}(x)}{\partial \beta}=\frac{\mathrm{e}^{\beta x}}{x^\nu \mathbf{L}_{\nu+1}(x)}\int_0^x(x-t)\mathrm{e}^{-\beta t} t^\nu \mathbf{L}_\nu(t)\,\mathrm{d}t>0, \end{equation*} meaning that $v_{\nu,\beta}(x)$ is an increasing function of $\beta$. Therefore, for $0<\beta<1$, \begin{align*}v_{\nu,\beta}(x_*)&< \frac{\mathrm{e}^{ x_*}}{x_*^\nu \mathbf{L}_{\nu+1}(x_*)}\int_0^{x_*}\mathrm{e}^{- t} t^\nu \mathbf{L}_\nu(t)\,\mathrm{d}t<\frac{x_*}{2\nu+1}\bigg(\frac{\mathbf{L}_\nu(x_*)}{\mathbf{L}_{\nu+1}(x_*)}+1\bigg) \\ &<\frac{x_*}{2\nu+1}\bigg(\frac{2\nu+3+x_*}{x_*}+1\bigg)=\frac{2\nu+3+2x_*}{2\nu+1}\leq M_{\nu,\beta}(x_*), \end{align*} where the second inequality is clear from the integral formula (\ref{intfor}) and we applied Lemma \ref{lem0} to obtain the third inequality. We now prove that $u_{\nu,\beta}'(x)>0$ for $x>x_*$. 
A calculation using the differentiation formula (\ref{diffone}) followed by an application of inequality (\ref{Imon}) gives that \begin{align*}u_{\nu,\beta}'(x)&=M_{\nu,\beta}(x_*)\frac{\mathrm{d}}{\mathrm{d}x}\big(\mathrm{e}^{-\beta x}x^{-1}\cdot x^{\nu+1} \mathbf{L}_{\nu+1}(x)\big)- \mathrm{e}^{-\beta x}x^\nu \mathbf{L}_\nu(x)\\ &=M_{\nu,\beta}(x_*)\mathrm{e}^{-\beta x}x^\nu\big(\mathbf{L}_{\nu}(x)-x^{-1}\mathbf{L}_{\nu+1}(x)-\beta \mathbf{L}_{\nu+1}(x)\big)-\mathrm{e}^{-\beta x}x^\nu \mathbf{L}_\nu(x)\\ &>M_{\nu,\beta}(x_*)\mathrm{e}^{-\beta x}x^\nu\big(1-\beta -x^{-1})\mathbf{L}_\nu(x)-\mathrm{e}^{-\beta x}x^\nu \mathbf{L}_\nu(x)\\ &\geq\bigg(\frac{1-\beta-x^{-1}}{1-\beta-x_*^{-1}}-1\bigg)\mathrm{e}^{-\beta x}x^\nu \mathbf{L}_\nu(x)>0, \end{align*} for $x>x_*$. This completes the proof. $\square$ \noindent{\emph{Proof of Lemma \ref{notfin}.}} (i) We obtain inequality (\ref{term00}) by bounding the expression \[\frac{\mathrm{e}^{\beta x}K_{\nu+3}(x)}{x^{\nu-1}}\int_0^x \mathrm{e}^{-\beta t}t^{\nu}\mathbf{L}_\nu(t)\,\mathrm{d}t\] for $x\in(0,x_*)$ and $x\in[x_*,\infty)$, where $x_*=\frac{C}{1-\beta}$ for some $C>1$ that we will choose later. Suppose first that $x\in(0,x_*)$. Observe that \begin{equation*}\frac{\partial}{\partial \beta}\bigg(\frac{\mathrm{e}^{\beta x}K_{\nu+3}(x)}{x^{\nu-1}}\int_0^x \mathrm{e}^{-\beta t}t^{\nu}\mathbf{L}_\nu(t)\,\mathrm{d}t\bigg)=\frac{\mathrm{e}^{\beta x}K_{\nu+3}(x)}{x^{\nu-1}}\int_0^x (x-t)\mathrm{e}^{-\beta t}t^{\nu}\mathbf{L}_\nu(t)\,\mathrm{d}t> 0. \end{equation*} Since $0<\beta<1$, we therefore have that, for $x\in(0,x_*)$, \begin{align}\frac{\mathrm{e}^{\beta x}K_{\nu+3}(x)}{x^{\nu-1}}\int_0^x \mathrm{e}^{-\beta t}t^{\nu}\mathbf{L}_\nu(t)\,\mathrm{d}t&< \frac{\mathrm{e}^{ x}K_{\nu+3}(x)}{x^{\nu-1}}\int_0^x \mathrm{e}^{- t}t^{\nu}\mathbf{L}_\nu(t)\,\mathrm{d}t\nonumber\\ &<\frac{1}{2\nu+1}x^2K_{\nu+3}(x)\big(\mathbf{L}_\nu(x)+\mathbf{L}_{\nu+1}(x)\big)\nonumber\\ &<\frac{1}{2\nu+1}\bigg(\frac{27}{8}x_*+9\bigg)=\frac{1}{2\nu+1}\bigg(9+\frac{27C}{8(1-\beta)}\bigg)\nonumber\\ &<\frac{1}{(2\nu+1)(1-\beta)}\bigg(9+\frac{27}{8}C\bigg)=:T_1,\nonumber \end{align} where we used (\ref{intfor}) to bound the integral in the second step, and inequalities (\ref{gamr2}) and (\ref{gamr3}) to obtain the third inequality. Suppose now that $x\in[x_*,\infty)$. Let $M_{\nu,\beta}(x_*)$ be defined as per (\ref{mng}). Bounding the integral by inequality (\ref{ineqb1}) gives that \begin{align}\frac{\mathrm{e}^{\beta x}K_{\nu+3}(x)}{x^{\nu-1}}\int_0^x \mathrm{e}^{-\beta t}&t^{\nu}\mathbf{L}_\nu(t)\,\mathrm{d}t<\frac{\mathrm{e}^{\beta x}K_{\nu+3}(x)}{x^{\nu-1}}\cdot M_{\nu,\beta}(x_*)\mathrm{e}^{-\beta x}x^\nu \mathbf{L}_{\nu+1}(x)\nonumber \\ &=M_{\nu,\beta}(x_*)xK_{\nu+3}(x)\mathbf{L}_{\nu+1}(x)\nonumber\\ &<\frac{15}{8} M_{\nu,\beta}(x_*)\nonumber\\ &=\frac{15}{8}\max\bigg\{\frac{1}{2\nu+1}\bigg(2\nu+3+\frac{2C}{1-\beta}\bigg),\frac{C}{(C-1)(1-\beta)}\bigg\}\nonumber\\ &\leq\max\bigg\{\frac{15(4+2C)}{8(2\nu+1)(1-\beta)},\frac{15C}{4(C-1)(2\nu+1)(1-\beta)}\bigg\}\nonumber\\ &=:\max\{T_2,T_3\},\nonumber \end{align} where we used inequality (\ref{gamr3}) to obtain the second inequality and we used that $-\frac{1}{2}<\nu\leq\frac{1}{2}$ to obtain the third inequality. It is readily checked that $T_1\geq T_2$ if $C\leq4$. Equating $T_1=T_3$ gives a quadratic equation for $C$ with positive solution $C=\frac{\sqrt{889}}{18}-\frac{5}{18}=1.3786\ldots$. 
Therefore \begin{align*}\frac{\mathrm{e}^{\beta x}K_{\nu+3}(x)}{x^{\nu-1}}\int_0^x \mathrm{e}^{-\beta t}t^{\nu}\mathbf{L}_\nu(t)\,\mathrm{d}t&<\frac{1}{(2\nu+1)(1-\beta)}\bigg(9+\frac{27}{8}\cdot 1.3786\bigg)\\ &=\frac{13.653}{(2\nu+1)(1-\beta)}<\frac{14}{(2\nu+1)(1-\beta)}. \end{align*} \noindent{(ii)} The proof of inequality (\ref{term10}) is similar to that of inequality (\ref{term00}). Let $x_*=\frac{3}{2(1-\beta)}$. By a similar argument, we have that, for $x\in(0,x_*)$, \begin{align}\frac{\mathrm{e}^{\beta x}K_{\nu+2}(x)}{x^{\nu-1}}\int_0^x \mathrm{e}^{-\beta t}t^{\nu}\mathbf{L}_\nu(t)\,\mathrm{d}t&<\frac{1}{2\nu+1}x^2K_{\nu+2}(x)\big(\mathbf{L}_\nu(x)+\mathbf{L}_{\nu+1}(x)\big)\nonumber\\ \label{lab1}&<\frac{x_*}{2\nu+1}\bigg(\frac{3}{2}+1\bigg)=\frac{15}{4(2\nu+1)(1-\beta)}, \end{align} where we applied inequalities (\ref{gamr1}) and (\ref{klineq0}) to get the second inequality. Suppose now that $x\in[x_*,\infty)$. Using inequality (\ref{ineqb1}) gives that \begin{align}\frac{\mathrm{e}^{\beta x}K_{\nu+2}(x)}{x^{\nu-1}}\int_0^x \mathrm{e}^{-\beta t}t^{\nu}\mathbf{L}_\nu(t)\,\mathrm{d}t&<M_{\nu,\beta}(x_*)xK_{\nu+2}(x)\mathbf{L}_{\nu+1}(x)< M_{\nu,\beta}(x_*)\nonumber \\ &=\max\bigg\{\frac{1}{2\nu+1}\bigg(2\nu+3+\frac{3}{1-\beta}\bigg),\frac{3}{1-\beta}\bigg\}\nonumber\\ &\leq\max\bigg\{\frac{2\nu+6}{(2\nu+1)(1-\beta)},\frac{3}{1-\beta}\bigg\}\nonumber\\ \label{lab2}&=\frac{2\nu+6}{(2\nu+1)(1-\beta)}<\frac{7}{(2\nu+1)(1-\beta)}, \end{align} where we used (\ref{klineq0}) to get the second inequality and we used that $-\frac{1}{2}<\nu\leq\frac{1}{2}$ to obtain the third and fourth inequalities. We complete the proof by noting that the bound (\ref{lab2}) is greater than the bound (\ref{lab1}). $\square$ \section{Proofs of main results}\label{sec3} \noindent{\emph{Proof of Theorem \ref{tiger1}.}} (i) Let $x>0$ and suppose $-\frac{1}{2}<\nu\leq0$. Using integration by parts and the differentiation formula (\ref{diffone}) gives that \begin{align*}\int_0^x\mathrm{e}^{-\beta t}t^\nu \mathbf{L}_\nu(t)\,\mathrm{d}t&=-\frac{1}{\beta}\mathrm{e}^{-\beta x}x^\nu \mathbf{L}_\nu(x)+\frac{1}{\beta}\int_0^x \mathrm{e}^{-\beta t}t^\nu \mathbf{L}_{\nu-1}(t)\,\mathrm{d}t, \end{align*} where we used that $\lim_{x\downarrow0}x^\nu\mathbf{L}_\nu(x)=0$, for $\nu>-\frac{1}{2}$ (see \ref{Itend0})). One can check that the integrals exist for $\nu>-\frac{1}{2}$ by using the limiting form (\ref{Itend0}). By using the identity (\ref{Iidentity}) and rearranging we obtain that \begin{align}&\int_0^x \mathrm{e}^{-\beta t}t^\nu \mathbf{L}_{\nu+1}(t)\,\mathrm{d}t+2\nu\int_0^x\mathrm{e}^{-\beta t}t^{\nu-1}\mathbf{L}_\nu(t)\,\mathrm{d}t-\beta\int_0^x\mathrm{e}^{-\beta t}t^\nu \mathbf{L}_\nu(t)\,\mathrm{d}t\nonumber\\ \label{55555}&\quad=\mathrm{e}^{-\beta x}x^\nu \mathbf{L}_\nu(x)-\int_0^x\mathrm{e}^{-\beta t}\frac{t^{2\nu}}{\sqrt{\pi}2^\nu\Gamma(\nu+\frac{3}{2})}\,\mathrm{d}t. \end{align} Using inequality (\ref{Imon}) to bound the first integral and making use of the assumption that $\nu\leq0$ gives that \begin{align*}\int_0^x \mathrm{e}^{-\beta t}t^\nu \mathbf{L}_{\nu}(t)\,\mathrm{d}t&>\frac{1}{1-\beta}\bigg\{\mathrm{e}^{-\beta x}x^\nu \mathbf{L}_\nu(x)-\int_0^x\mathrm{e}^{-\beta t}\frac{t^{2\nu}}{\sqrt{\pi}2^\nu\Gamma(\nu+\frac{3}{2})}\,\mathrm{d}t\bigg\}. \end{align*} Finally, we use a change of variable to evaluate the integral $\int_0^x\mathrm{e}^{-\beta t}t^{2\nu}\,\mathrm{d}t=\frac{1}{\beta^{2\nu+1}}\gamma(2\nu+1,\beta x)$, which gives us inequality (\ref{ineqb2}). 
\noindent{(ii)} Suppose now that $\nu\geq\frac{3}{2}$. A rearrangement of (\ref{55555}) gives that \begin{align}&\int_0^x \mathrm{e}^{-\beta t}t^{\nu}\mathbf{L}_{\nu+1}(t)\,\mathrm{d}t-\beta \int_0^x \mathrm{e}^{-\beta t}t^{\nu}\mathbf{L}_{\nu}(t)\,\mathrm{d}t\nonumber\\ &\quad=\mathrm{e}^{-\beta x}x^\nu \mathbf{L}_\nu(x)-2\nu\int_0^x \mathrm{e}^{-\beta t}t^{\nu-1}\mathbf{L}_{\nu}(t)\,\mathrm{d}t-\int_0^x\mathrm{e}^{-\beta t}\frac{t^{2\nu}}{\sqrt{\pi}2^\nu\Gamma(\nu+\frac{3}{2})}\,\mathrm{d}t\nonumber\\ \label{1stint}&\quad=\mathrm{e}^{-\beta x}x^\nu \mathbf{L}_\nu(x)-2\nu\int_0^x \mathrm{e}^{-\beta t}t^{\nu-1}\mathbf{L}_{\nu}(t)\,\mathrm{d}t-\frac{\gamma(2\nu+1,\beta x)}{\sqrt{\pi}2^\nu\beta^{2\nu+1}\Gamma(\nu+\frac{3}{2})}. \end{align} We use inequality (\ref{Imon}) to bound the first integral on the left-hand side in (\ref{1stint}), and then divide through by $(1-\beta)$ and apply inequality (\ref{Imon}) again to obtain \begin{align}\int_0^x \mathrm{e}^{-\beta t}t^{\nu}\mathbf{L}_{\nu}(t)\,\mathrm{d}t&\quad>\frac{1}{1-\beta}\bigg\{\mathrm{e}^{-\beta x}x^\nu \mathbf{L}_\nu(x)-2\nu\int_0^x \mathrm{e}^{-\beta t}t^{\nu-1}\mathbf{L}_{\nu}(t)\,\mathrm{d}t\nonumber\\ &\quad-\frac{\gamma(2\nu+1,\beta x)}{\sqrt{\pi}2^\nu\beta^{2\nu+1}\Gamma(\nu+\frac{3}{2})}\bigg\}\nonumber\\ &\quad>\frac{1}{1-\beta}\bigg\{\mathrm{e}^{-\beta x}x^\nu \mathbf{L}_\nu(x)-2\nu\int_0^x \mathrm{e}^{-\beta t}t^{\nu-1}\mathbf{L}_{\nu-1}(t)\,\mathrm{d}t\nonumber\\ \label{1stint0}&\quad-\frac{\gamma(2\nu+1,\beta x)}{\sqrt{\pi}2^\nu\beta^{2\nu+1}\Gamma(\nu+\frac{3}{2})}\bigg\}. \end{align} Lastly, we bound the integral $\int_0^x\mathrm{e}^{-\beta t}t^{\nu-1} \mathbf{L}_{\nu-1}(t)\,\mathrm{d}t$ using inequality (\ref{gau1}) (which can be done because $\nu\geq\frac{3}{2}$), which gives us inequality (\ref{ineqb3}). \noindent{(iii)} Let $\nu>-1$, which ensures that all integrals in this proof of inequality (\ref{ineqb4}) exist. We start with the same integration by parts to part (i), but with $\nu$ replaced by $\nu+1$: \begin{align}\label{reart}\int_0^x \mathrm{e}^{-\beta t}t^{\nu+1}\mathbf{L}_{\nu+1}(t)\,\mathrm{d}t&=-\frac{1}{\beta}\mathrm{e}^{-\beta x}x^{\nu+1}\mathbf{L}_{\nu+1}(x)+\frac{1}{\beta}\int_0^x\mathrm{e}^{-\beta t}t^{\nu+1}\mathbf{L}_\nu(t)\,\mathrm{d}t, \end{align} where it should be noted that we used that $\lim_{x\downarrow0}x^{\nu+1}\mathbf{L}_{\nu+1}(x)=0$ for $\nu>-1$ (see (\ref{Itend0})). We now note the simple inequality $\int_0^x\mathrm{e}^{-\beta t}t^{\nu+1}\mathbf{L}_\nu(t)\,\mathrm{d}t<x\int_0^x\mathrm{e}^{-\beta t}t^{\nu}\mathbf{L}_\nu(t)\,\mathrm{d}t$, $x>0$, which holds because $\mathbf{L}_\nu(t)>0$ for $t>0$, $\nu>-1$. Applying this inequality to (\ref{reart}) and rearranging gives \begin{equation}\label{jj27}\int_0^x\mathrm{e}^{-\beta t}t^{\nu}\mathbf{L}_\nu(t)\,\mathrm{d}t>\mathrm{e}^{-\beta x}x^{\nu}\mathbf{L}_{\nu+1}(x)+\frac{\beta}{x}\int_0^x \mathrm{e}^{-\beta t}t^{\nu+1}\mathbf{L}_{\nu+1}(t)\,\mathrm{d}t. 
\end{equation} We can use (\ref{jj27}) to obtain another inequality \begin{align*}&\int_0^x\mathrm{e}^{-\beta t}t^{\nu}\mathbf{L}_\nu(t)\,\mathrm{d}t\\ &\quad>\mathrm{e}^{-\beta x}x^{\nu}\mathbf{L}_{\nu+1}(x)+\frac{\beta}{x}\bigg(\mathrm{e}^{-\beta x}x^{\nu+1}\mathbf{L}_{\nu+2}(x)+\frac{\beta}{x}\int_0^x \mathrm{e}^{-\beta t}t^{\nu+2}\mathbf{L}_{\nu+2}(t)\,\mathrm{d}t\bigg)\\ &\quad=\mathrm{e}^{-\beta x}x^{\nu}\mathbf{L}_{\nu+1}(x)+\beta \mathrm{e}^{-\beta x}x^{\nu}\mathbf{L}_{\nu+2}(x)+\frac{\beta^2}{x^2}\int_0^x \mathrm{e}^{-\beta t}t^{\nu+2}\mathbf{L}_{\nu+2}(t)\,\mathrm{d}t, \end{align*} and iterating gives inequality (\ref{ineqb4}). In performing this iteration, it should be noted that the series $\sum_{k=0}^\infty \beta^k \mathbf{L}_{\nu+k+1}(x)$ is convergent. This can be seen by applying inequality (\ref{Imon}) (since $\nu>-1$) to obtain that, for all $x>0$, $\sum_{k=0}^\infty \beta^k \mathbf{L}_{\nu+k+1}(x)<\mathbf{L}_{\nu+1}(x)\sum_{k=0}^\infty \beta^k=\frac{\mathbf{L}_{\nu+1}(x)}{1-\beta}$, with the assumption that $0<\beta<1$ ensuring that the geometric series is convergent. \noindent{(iv)} Lastly, we prove that inequalities (\ref{ineqb2})--(\ref{ineqb4}) are tight in the limit $x\rightarrow\infty$. To this end, we note the following limiting forms, which hold for all $\nu>-1$ and $0<\beta<1$: \begin{align}\label{eqeq1} \int_0^x \mathrm{e}^{-\beta t}t^\nu \mathbf{L}_{\nu}(t)\,\mathrm{d}t&\sim \frac{1}{\sqrt{2\pi}(1-\beta)}x^{\nu-1/2}\mathrm{e}^{(1-\beta)x}, \quad x\rightarrow\infty,\\ \label{eqeq2}\mathrm{e}^{-\beta x}x^\nu \mathbf{L}_{\nu+n}(x)&\sim \frac{1}{\sqrt{2\pi}}x^{\nu-1/2}\mathrm{e}^{(1-\beta)x}, \quad x\rightarrow\infty,\:n\in\mathbb{R}, \end{align} where (\ref{eqeq2}) is immediate from (\ref{Itendinfinity}), and (\ref{eqeq1}) follows from using (\ref{Itendinfinity}) and a standard asymptotic analysis. The tightness of inequalities (\ref{ineqb2}) and (\ref{ineqb3}) in the limit $x\rightarrow\infty$ follows immediately from (\ref{eqeq1}) and (\ref{eqeq2}). To show that inequality (\ref{ineqb4}) is tight as $x\rightarrow\infty$ we just need to additionally use that $\sum_{k=0}^\infty\beta^k=\frac{1}{1-\beta}$, since $0<\beta<1$. $\square$ \noindent{\emph{Proof of Theorem \ref{tiger2}.}} (i) Rearranging inequality (\ref{term00}) gives that, for $x>0$, $-\frac{1}{2}<\nu<\frac{1}{2}$, $0<\beta<1$, \begin{align*}\int_0^x\mathrm{e}^{-\beta t}t^\nu \mathbf{L}_\nu(t)\,\mathrm{d}t<\frac{14}{(2\nu+1)(1-\beta)}\frac{\mathrm{e}^{-\beta x}x^{\nu-1}}{K_{\nu+3}(x)}. \end{align*} From the bound $\frac{1}{K_{\nu+3}(x)}<2x\mathbf{L}_{\nu+1}(x)$, which is a rearrangement of the lower bound in (\ref{klineq1}) (with $\nu$ replaced by $\nu+1$), we obtain that, for $x>0$, $-\frac{1}{2}<\nu<\frac{1}{2}$, $0<\beta<1$, \begin{equation*}\int_0^x\mathrm{e}^{-\beta t}t^\nu \mathbf{L}_\nu(t)\,\mathrm{d}t<\frac{28}{(2\nu+1)(1-\beta)}\mathrm{e}^{-\beta x}x^\nu \mathbf{L}_{\nu+1}(x). \end{equation*} Using that $2\nu+1>0$ for $-\frac{1}{2}<\nu<\frac{1}{2}$ gives us inequality (\ref{ineqb10}) for the case $-\frac{1}{2}<\nu<\frac{1}{2}$. Inequality (\ref{ineqb10}) can in fact be seen to hold for all $\nu>-\frac{1}{2}$, by noting that the upper bound in inequality (\ref{ineqb10}) is strictly greater than the upper bound in inequality (\ref{gau1}) (due to \cite{gaunt ineq4}), which is valid for $\nu\geq\frac{1}{2}$. \noindent{(ii)} We argue as in part (i), but we apply inequality (\ref{term10}), rather than inequality (\ref{term00}), and then use the bound $\frac{1}{K_{\nu+2}(x)}<2x\mathbf{L}_{\nu}(x)$.
\noindent{(iii) The proof proceeds exactly as that of inequality (\ref{ineqb3}), with the sole modification being that we use (\ref{ineqb10}) to bound the integral on the right-hand side of (\ref{1stint0}), instead of inequality (\ref{gau1}). The tightness of inequality (\ref{ineqb12}) in the limit $x\rightarrow\infty$ is established by the same argument as that used in part (iv) of the proof of Theorem \ref{tiger1}. $\square$ \noindent{\emph{Proof of Proposition \ref{propone}.}} (i) To get inequality (\ref{ineqb21}), in part (i) of the proof of Theorem \ref{tiger1} use inequality (\ref{Imon}) to bound the third integral in (\ref{55555}), instead of the first integral. \noindent{(ii)} To get inequality (\ref{ineqb22}), in part (ii) of the proof of Theorem \ref{tiger1} use (\ref{Imon}) to bound the second integral in (\ref{1stint}), instead of the first integral. \noindent{(iii)} By studying the proof of inequality (\ref{ineqb12}), it can be seen that the above alteration that gave us inequality (\ref{ineqb22}) instead of inequality (\ref{ineqb3}) can also be used to give us inequality (\ref{ineqb23}). $\square$ \appendix \section{Basic properties of modified Struve and modified Bessel functions}\label{appa} In this appendix, we present some basic properties of the modified Struve function of the first kind $\mathbf{L}_\nu(x)$ and the modified Bessel functions $I_\nu(x)$ and $K_\nu(x)$ that are used in this paper. All formulas are given in \cite{olver}, except for the inequality which was obtained by \cite{bp14}. The modified Struve function $\mathbf{L}_\nu(x)$ is a regular function of $x\in\mathbb{R}$, and is positive for all $\nu\geq-\frac{3}{2}$ and $x>0$. The modified Bessel functions $I_{\nu}(x)$ and $K_{\nu}(x)$ are also both regular functions of $x\in\mathbb{R}$. For $x>0$, the functions $I_{\nu}(x)$ and $K_{\nu}(x)$ are positive for $\nu\geq-1$ and all $\nu\in\mathbb{R}$, respectively. The modified Struve function $\mathbf{L}_\nu(x)$ satisfies the following recurrence relation and differentiation formula \begin{align}\label{Iidentity}\mathbf{L}_{\nu -1} (x)- \mathbf{L}_{\nu +1} (x) &= \frac{2\nu}{x} \mathbf{L}_{\nu} (x)+\frac{(\frac{1}{2}x)^\nu}{\sqrt{\pi}\Gamma(\nu+\frac{3}{2})}, \\ \label{diffone}\frac{\mathrm{d}}{\mathrm{d}x} \big(x^{\nu} \mathbf{L}_{\nu} (x) \big) &= x^{\nu} \mathbf{L}_{\nu -1} (x). \end{align} We have the following special cases \begin{equation}\label{speccase}\mathbf{L}_{-\frac{1}{2}}(x)=\sqrt{\frac{2}{\pi x}}\sinh(x),\quad I_{-\frac{1}{2}}(x)=\sqrt{\frac{2}{\pi x}}\cosh(x). \end{equation} We also have the following asymptotic properties: \begin{align}\label{Itend0}\mathbf{L}_{\nu}(x)&\sim \frac{x^{\nu+1}}{\sqrt{\pi}2^\nu\Gamma(\nu+\frac{3}{2})}\bigg(1+\frac{x^2}{3(2\nu+3)}\bigg), \quad x \downarrow 0, \: \nu>-\tfrac{3}{2}, \\ \label{Itendinfinity}\mathbf{L}_{\nu}(x)&\sim \frac{\mathrm{e}^{x}}{\sqrt{2\pi x}}\bigg(1-\frac{4\nu^2-1}{8x}\bigg), \quad x \rightarrow\infty, \: \nu\in\mathbb{R}, \\ \label{Ktend0}K_{\nu} (x) &\sim \frac{2^{\nu-1}\Gamma(\nu)}{x^\nu},\quad \:x\downarrow0,\: \nu>0, \\ \label{Ktendinfinity} K_{\nu} (x) &\sim \sqrt{\frac{\pi}{2x}} \mathrm{e}^{-x}\bigg(1+\frac{4\nu^2-1}{8x}\bigg), \quad x \rightarrow \infty, \: \nu\in\mathbb{R}. \end{align} It was shown by \cite{bp14} that, for $x>0$, \begin{equation}\label{Imon}\mathbf{L}_{\nu} (x) < \mathbf{L}_{\nu - 1} (x), \quad \nu \geq \tfrac{1}{2}. 
\end{equation} Other inequalities for the modified Struve function $\mathbf{L}_\nu(x)$ are given in \cite{bp14,bps17,gaunt ineq5,jn98}, some of which improve on inequality (\ref{Imon}). \end{document}
\begin{document} \title{Saddle-Node Bifurcation and Homoclinic Persistence in AFM with Periodic Forcing} {\bf Abstract} We study the dynamics of an Atomic Force Microscope (AFM) model under the Lennard-Jones force with non-linear damping and harmonic forcing. We establish the bifurcation diagrams for the equilibria of the conservative system. In particular, we present conditions that guarantee the local existence of saddle-node bifurcations. By using the Melnikov method, we determine the region in parameter space where homoclinic orbits persist in the non-conservative system. {\bf Keywords:} Homoclinic Orbits, Bifurcation, Melnikov's function. \section{Introduction} Atomic Force Microscopes (AFMs) were created in 1986 by Binnig et al.\ \cite{Quate}. They are based on the principles of the tunneling microscope and the needle profilometer. Generally, AFMs measure the interactions between particles, allowing the nanoscale study of the surfaces of different materials \cite{aplicacion4, aplicacion3, Morita}. In fact, a wide variety of applications can be found in \cite{Bowen, Bru, aplicacion5, Alan}, including the analysis of pharmaceutical products, the study of the properties of fluids, cellular detection, and medical studies, among others. The model is presented in \cite{Ashhab2, Ashhab}, where the authors study the interaction between the sample and the device's tip; see Figure \ref{fig:AFM}. The associated differential equation is \begin{equation}\label{eq:nnn1} \ddot{y}+\frac{C}{(y+a)^3}\dot{y}+y =\frac{b_1}{(y+a)^8}-\frac{b_2}{(y+a)^2}+ f(t), \end{equation} where $b_1,b_2$ and $a$ are positive constants and $f$ is a continuous $T$-periodic function with zero average, that is, $\bar{f} = \frac{1}{T} \int_{0}^{T} f(t)\,dt = 0$. The right-hand side \[ F_{LJ}:=\frac{b_1}{(y+a)^8}-\frac{b_2}{(y+a)^2} \] is known as the Lennard-Jones force, which can be considered a simple mathematical model for the interaction between a pair of neutral atoms or molecules; see \cite{Jensen,Lennard-Jones} for the standard formulation. The first term describes the short-range repulsive force due to overlapping electron orbitals, the so-called Pauli repulsion, whereas the second term models the long-range attraction due to van der Waals forces. This is a special case of the wider family of Mie forces \[ F_{n,m}(x)=\frac{A}{x^n}-\frac{B}{x^m}, \] where $n,m$ are positive integers with $n>m$, also known as the $n$-$m$ Lennard-Jones force; see \cite{Brush}. On the other hand, the dissipative term of \eqref{eq:nnn1}, \[ F_r=\frac{C}{(y+a)^3}\dot{y}, \] is associated with a squeeze-film compression damping force. In the specialized literature, squeeze-film compression damping is considered the most common and dominant dissipation mechanism in devices of this type (see \cite{Mohammad, Amortiguamiento} and the references therein).\\ \begin{figure} \caption{Mechanical model associated with AFM devices.} \label{fig:AFM} \end{figure} For the conservative system, two main results are obtained, Theorems \ref{teo:claseqv} and \ref{teo:bif}, in which we establish analytically the bifurcation diagram of the equilibria in specific regions of the parameter space, in contrast to the one obtained in \cite{aplicacion7}. In particular, Theorem \ref{teo:bif} proves the local existence of two saddle-node bifurcations, which can be related to the hysteresis phenomenon; see, for example, \cite{hysteresis1, hysterisis2}.
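Before turning to the analysis, we record a small numerical illustration of the equilibrium structure behind Theorems \ref{teo:claseqv} and \ref{teo:bif}. This is a sketch only: it uses the shifted variable $x=y+a$, the function $m$ and the threshold $b_1^*=\tfrac{4}{27}b_2^3$ introduced in Section 2, and the parameter values are arbitrary illustrative choices.

\begin{verbatim}
# Sketch: equilibria of the conservative AFM model in the shifted
# variable x = y + a, for illustrative (assumed) parameter values.
import numpy as np
from scipy.optimize import brentq

b1, b2 = 0.05, 1.5          # assumed values; here b1 < b1* = 4*b2**3/27
m  = lambda x: b1 / x**8 - x - b2 / x**2
dm = lambda x: -8 * b1 / x**9 - 1 + 2 * b2 / x**3

grid = np.linspace(0.3, 4.0, 4000)

def sign_change_roots(f):
    # Roots of f located via sign changes on the grid.
    s = np.sign(f(grid))
    return [brentq(f, grid[i], grid[i + 1])
            for i in range(len(grid) - 1) if s[i] * s[i + 1] < 0]

# Critical points x_l < x_r of m (local minimum and local maximum).
x_l, x_r = sorted(sign_change_roots(dm))
print("fold values of a:", -m(x_r), -m(x_l))

# Number of positive equilibria of m(x) + a = 0 for sample values of a.
for a in (0.9 * (-m(x_r)), 0.5 * (-m(x_r) - m(x_l)), 1.1 * (-m(x_l))):
    print(a, len(sign_change_roots(lambda x: m(x) + a)))  # expect 1, 3, 1
\end{verbatim}

For this choice of $b_1<b_1^*$ the sketch reports a single equilibrium for $a$ outside the interval $]-m(x_r),-m(x_l)[$ and three equilibria inside it, in agreement with Theorem \ref{teo:claseqv}.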
In the non-conservative system, we present as a main result, Theorem \ref{teo:melnivok}, which gives a thorough and rigorous condition for the persistence of homoclinic orbit when the external forcing is of the form $ f(t)=B \cos (\Omega t)$. The condition found relates the amplitude of the external forcing $B$ with the damping constant $C$, which in practice can be used to prevent the AFM device from becoming decalibrate. This article is structured in the following way: this first section as an introduction, section two is dedicated to prove the main results in conservative system, and section three contains the proof for the main result of the non-conservative system along with some illustrative examples. \section{Bifurcation Diagrams} With the change of variable $x=y+a$, \eqref{eq:nnn1} is rewritten as \begin{equation}\label{eq:prinmod} \ddot{x}=m(x)+a+\epsilon\bigg( f(t)-\frac{C}{x^3}\dot{x} \bigg), \end{equation} where $m(x)=\frac{b_1}{x^8}-x-\frac{b_2}{x ^ 2}$ is the total force acting over the system, which is a combination of the Lennard-Jones force and the restoring force of the oscillator. The change of the singularity from $-a $ to $0$ will facilitate the study of the bifurcation diagram for equilibria in the conservative system ($\epsilon=0$). Note that the classification of the equilibrium solutions of \eqref{eq:prinmod} plays an important role when the full equation is studied. We now describe some properties of the function $m(x)$: \begin{align*} \lim_{x\to 0^+} m(x)&=\infty, & \lim_{x\to\infty} \dfrac{m(x)}{x}=-1, \end{align*} moreover $m$ has only one positive root and a direct analysis provides a critical value \begin{equation} b_1 ^* = \frac{4}{27}b_2^3 \label{b1}, \end{equation} such that: \begin{itemize} \item [i)]If $b_1> b_1^*$, then $m(x)$ is decreasing. \item [ii)] If $b_1=b_1^*$, then $m(x)$ is non-increasing and has an inflection point in $x_c=(\frac{4}{3} b_2)^{1/3}$. \item [iii)] Finally, if $b_1<b_1^*$, then $m(x)$ has a local maximum (resp. minimum) in $x_r$ (resp.$x_l$) and $m(x_r),m(x_l)<0$. \end{itemize} Therefore, the equilibria set $\mathcal{G}=\{x\in\mathbb{R}^+:\,m(x)+a=0\}$ is finite, not empty, and the number of equilibria depends on the parameter $a$. Figure \ref{fig:funcioneme}, shows the possible variants of the $m$ function in terms of $b_1$, $b_2$ and $a$.\\ \begin{figure} \caption{ The $m$ function in terms of parameters $b_1$, $b_2$.} \label{fig:funcioneme} \end{figure} The proof of Theorem \ref{teo:claseqv} will be made by establishing the equilibria for system \eqref{eq:prinmod}. Let us define the energy function: \begin{equation} \label{eq:energia} E(x,v):= \frac{v^2}{2}+\frac{x^2}{2}+\frac{1}{7}\dfrac{b_1}{x^7}-\frac{b_2}{x}-ax. \end{equation} Note that the local minimums of $E$ correspond to non-linear centers and the local maximums correspond to saddles. However, when $ E $ has a degenerate critical point $(x^*,0)$, since the Hessian matrix $A$ is such that $\text{Tr} A = 0$, $\text{Det } A = 0$, but $A\neq 0$. In this case, \cite{Andronov} shows, that the system can be writen in "normal" form: \begin{equation} \begin{aligned} \dot{x}=& y\\ \dot{y}=& a_k x^k [1+h(x)]+b_n x^n y[1+g(x)]+y^2 R(x,y), \end{aligned} \label{normal} \end{equation} where $h(x),\,g(x)$ and $R(x,y)$ are analytic in a neighborhood of the equilibrium point $h(x^*)=g(x^*)=0$, $k\geq 2$, $a_k \neq 0$ and $n\geq 1$. 
Thus the degenerate critical point $(x^*,0)$ is either a focus, a center a node, a (topological) saddle, saddle-node, a cup or a critical point with an elliptic domain, see \cite[Theorem 2, pp 151, Theorem 3, pp 151]{Perko}. \begin{theorem}\label{teo:claseqv} The equilibrium solutions of the conservative system associated with \eqref{eq:prinmod} are classified as follows: \begin{enumerate} \item A non-linear center if either $b_1\geq b_1^* $ and $a\in \mathbb{R}^+$ or $b_1<b_1^*$ and $a\in \{\mathbb{R}^+-]-m(x_r),-m(x_l)[\}$. \item Two non-linear centers and a saddle if $b_1<b_1^*$ and $a\in]-m(x_r),-m(x_l)[.$ \item A non-linear center and a cusp, if either $b_1<b_1^*$ and $a=-m(x_r)$ or $a=-m(x_l)$. \end{enumerate} \end{theorem} \begin{proof} We present here the main steps $1.-3.$ of the argument.\\ 1. Note that $\mathcal{G}$ has a unique element if either $b_1> b_1^*$ and $a\in \mathbb{R}^+$ or $b_1<b_1^*$ and $a\in\mathbb{R}^+-]-m(x_r),-m(x_l)[$, the equilibrium is a non-linear center since $E$ reaches a local minimum at that point. For the case $b_1=b_1^*$, $a=-m(x_c)$ is degenerate, using the expansion given in \eqref{normal}, we have $k=3$ and \[ a_k=\frac{24\,b_2}{6 \,x_c ^5}-\frac{720\,b_1^*}{6\, x_c ^{11}}<0, \] therefore,from \cite[Theorem 2, pp 151]{Perko}, follows that the equilibrium is a non-linear center.\\ 2. Under the hypothesis made, the set $\mathcal{G}$ has three solutions such that two are local minimums of $E$ and the other is a local maximum of $E$. Consequently, two of the equilibria are non-linear centers and the other equilibrium is a saddle.\\ 3. In this case, $\mathcal{G} $ has two solutions such that one of them is a local minimum of $E$ and corresponds to a non-linear center while the other one is degenerate with $k=2$, $b_1=0$ in \eqref{normal}. Consequently, \cite[Theorem 3, pp 151]{Perko} guarantees that equilibrium is a cusp. \end{proof} In the next section,we focus on the persistence of homoclinic orbits present in Theorem \ref{teo:claseqv} when studying the equation \eqref{eq:prinmod}. The conservative equation associated with \eqref{eq:prinmod} can be written as the parametric system: \begin{equation} \label{eq:bif} \begin{aligned} x'=&y\\ y'=& F(x,a), \end{aligned} \end{equation} where $F(x,a)=m(x)+a$. Note that Theorem \ref{teo:claseqv} allows us to build the bifurcation diagram of equilibria in terms of the parameter $a$, see figures \ref{fig:funcioneme} and \ref{fig:bif}. Moreover, when $b_1\geq b_1^*$ the parameter $a$ does not modify the dynamics of the system as it does when $b_1< b_1^* $. In fact, there exists numerical evidence, see \cite{Ashhab, Mohammad}, which shows that the points $(x_i, a_i)$, with $a_ {i}= -m(x_{i})$, $i=r,s$ are bifurcation points. In the following theorem, it will be formally shown that those points are saddle-node bifurcation points. \begin{theorem} \label{teo:bif} If $b_1 <b_1^* $ then the points $(x_i, a_i)$, $i=r,l$ are local saddle-node bifurcation for the conservative system \eqref{eq:prinmod}. \end{theorem} \begin{proof} In fact, it is enough that the following conditions are fulfilled, as shown in \cite[Theorem 3.1, pp 84]{Yuri}: \begin{itemize} \item[A1] $\partial_{xx}F(x,a)|_{(x_i,a_i)}\neq 0$. \item[A2] $\partial_a F(x,a)|_{(x_i,a_i)}\neq 0$. \end{itemize} Indeed, we have $\partial_{xx}F(x,a)|_{(x_l,a_l)}>0$ ( resp. $\partial_{xx}F(x,a)|_{(x_r,a_r)}<0$), because $m$ has relative minimum (resp. maximum) in $x_l$ (resp. $x_r$) and $\partial_a F(x,a)|_{(x_i,a_i)}=1$. 
\end{proof} To summarize, the results obtained in theorems \ref{teo:claseqv} and \ref{teo:bif} are illustrated in the bifurcation diagram of the conservative system associated to \eqref{eq:prinmod}. In part a) of Figure \ref{fig:bif} the red curve separates the region in terms of the parameters $ b_1 $ and $ b_2 $ for which the conservative system has a unique equilibrium (independent of the parameter $a$), of the region where the number of equilibrium solutions depends on the parameter $a$. In fact, if we take $(b_2, b_1)\in\mathbb{R} ^ 2 _ + - \{(b_2, b_1)\in\mathbb{R}^2_+:b_1 \geq b_1^*\}$ then the conservative system may have one, two or three equilibria as illustrated in Figure \ref{fig:bif} (b). In this figure the solid lines are related to the stable equilibria, while the dotted line is related to the solutions of unstable equilibria. Furthermore it can be shown that locally around the points $(x_i, a_i)$, $ i=l,r$ there is a saddle-node bifurcation. \begin{figure} \caption{Bifurcation Diagrams of the equation \eqref{eq:prinmod} \label{fig:bif} \end{figure} \section{Homoclinic Persistence} The discussion in this section is limited to the case $b_1<b_1 ^*$ and $a\in]-m(x_r), -m(x_l)[$. The objective is to apply the Melnikov's method to \eqref{eq:prinmod} when $f(t)=B\cos(\Omega \,t)$, it can be used to described how the homoclinic orbits persists in the presence of the perturbation. For AFM models the persistence of homoclinic orbits has great practical use since it can be produce uncontrollable vibrations of the device, causing fail and generate erroneous readings, \cite{Ashhab2, Ashhab, Amortiguamiento}. Before we address this problem, let us establish some notation. Consider the systems of the form \begin{equation} \label{eq:nearH} x'=f(x)+\epsilon g(x,t),\quad x\in\mathbb{R}^2, \end{equation} where $f$ is a vector field Hamiltonian in $\mathbb{R}^2$, $g_i\in C^{\infty}(\mathbb{R}^2\times \mathbb{R}/(T \mathbb{Z}))$, $i=1,2$, $g=(g_1,g_2)^T$ and $\epsilon\geq 0$. Now, suppose in an unperturbed system, i.e $\epsilon=0$ in \eqref{eq:nearH}, the existence of a family of periodic orbits given by \[ \gamma_e=\{ (x_1,x_2): E(x_1,x_2)=e\}, \quad e\in]\alpha,\beta[, \] such that $\gamma_e$ approaches a center as $e\to\alpha$ and to an invariant curve denoted by $\gamma_\beta$, as $e\to\beta$. When $\gamma_\beta$ is bounded, it is a homoclinic loop consisting of a saddle and a connection. We want to know if $\gamma_\beta$ persists when \eqref{eq:nearH}, where $0<\epsilon<<1 $, that is, if $\gamma_\beta(t,\epsilon)$ is a homoclinic of \eqref{eq:nearH} that is generated by $\gamma_\beta$. The first approximation of $\gamma_e (t, \epsilon) $ is given by the zeros of the Melnikov's function $M_e(t)$ defined as: \[ \label{eq:melikov} M_e(t):=\int_{E(x_1,x_2)=e} g_2 dx_1-g_1 dx_2, \] therefore, it is necessary to know the number of zeros of \eqref{eq:melikov}. For our purposes, the following Theorem, which is an adaptation of \cite{Han}, will be useful. \begin{theorem}[\cite{Han}, Theorem 6.4] \label{teo:han} Suppose $e_0\in]\alpha,\beta]$ and $t_0\in\mathbb{R}$. \begin{enumerate} \item If $M_{e_0}(t_0)\neq 0$, then, there are no limit cycles near $\gamma_{e_0}$ for $\epsilon+|t_0+t|$ sufficiently small. \item If $M_{e_0}(t)= 0$ is a simple zero there is exactly one limit cycle $\gamma_{{e_0}}(t_0,\epsilon)$ for $\epsilon+|t_0+t|$ sufficiently small that approaches $\gamma_{{e_0}}$ when $(t,\epsilon)\to (t_0,0)$. 
\end{enumerate} \end{theorem} \begin{remark} Melnikov's function can be interpreted as the first approximation in $\epsilon$ of the distance between the stable and unstable manifold, measured along the direction perpendicular to the unperturbed connection, that is, $d(\epsilon):= \epsilon \frac{M_{\beta(t_0)}}{\|f(\gamma_\beta)\|}+O(\epsilon^2)$. In particular, when $ M_\beta(t_0)> 0 $ (resp. $ <0 $) the unstable manifold is above (resp. below) the stable manifold, see \cite{Guckenheimer, Perko} for a detail discussion. \end{remark} Rewriting \eqref{eq:prinmod} as a system of the form \eqref{eq:nearH}, we obtain \begin{align*} f(x_1,x_2)& =\begin{pmatrix} x_2 \\ m(x_1)+a \end{pmatrix}, & g(x_1,x_2,t) &=\begin{pmatrix} 0 \\ B \cos (\Omega \,t)-\dfrac{C}{x_1^3}x_2 \end{pmatrix}. \end{align*} From Theorem \ref{teo:claseqv}, we have that if $b_1<b_{1}^* $ and $a\in]-m(x_r),-m(x_l)[$, the unperturbed system has three equilibria from which one is a saddle, denoted by $ (x_{sa}, 0) $. The function's energy associated with the conservative system is given by \eqref{eq:energia} and homoclinic loops, denoted by $\Gamma_l$ and $\Gamma_r$, and $E(x_1, x_2)=E(x_{sa},0)=\beta$. When calculating Melnikov's function along the separatrix on the right $\Gamma_r$, the computation along $\Gamma_l$ is identical, this is: \begin{align*} M_\beta(t_0)=&\int_{\Gamma_r} g_2 dx_1 -g_1dx_2=\oint_{\gamma_{\beta_r}} (E_{x_2}g_1+E_{x_1}g_2)dt\\ =& \int_{-\infty}^{\infty} x_2(t)\left(B\, \text{cos}(\Omega (t+ t_0) )-\frac{C}{x_1^3(t)}x_2(t)\right) dt\\ =& B\,\text{cos}(\Omega\, t_0)\int_{-\infty}^{\infty}\text{cos}(\Omega\, t)x_2(t)dt-B\,\text{sen}(\Omega\, t_0)\int_{-\infty}^{\infty}\text{sen}(\Omega\, t)x_2(t)dt \\ &- C\int_{-\infty}^{\infty}\frac{x_2^2(t)}{x_1^3(t)}dt\\ =&-2B\,\text{sen}(\Omega\, t_0)\int_{0}^{\infty}\text{sen}(\Omega\, t)x_2(t)dt - C\int_{-\infty}^{\infty}\frac{x_2^2(t)}{x_1^3(t)}dt. \end{align*} Note that \[ \int_{-\infty}^{\infty}\text{cos}(\Omega\, t)x_2(t)dt =0, \] due to $\cos(\Omega \, t)x_2 (t)$ is an odd function. Consequently: \[ M_\beta(t_0)=-2B\,\text{sen}(\Omega\, t_0)\int_{0}^{\infty}\text{sen}(\Omega\, t)x_2(t)dt - C\int_{-\infty}^{\infty}\frac{x_2^2(t)}{x_1^3(t)}dt. \] Define \begin{align*} \xi_1&=-2\int_{0}^{\infty}\text{sen}(\Omega\, t)x_2(t)dt, & \xi_2&=-\int_{-\infty}^{\infty}\frac{x_2^2(t)}{x_1^3(t)}dt, \end{align*} and we proof that $\xi_1$, $\xi_2$ are bounded. Indeed, $dt=dx_1/x_1 =dx_1 /x_2$ and $x_{sa}<x_1 <\bar{x}$ in $ \Gamma_r $, where $x_{sa},\,\bar{x}$ are consecutive zeros of $E(x_1,0)-\beta$. Now if $ E(x_1, x_2)=\beta$ then \begin{equation*} x_2^2=2\left(\beta+ax_1+\frac{b_2}{x_1}-\frac{b_1}{7\,x_1^7}-\frac{x_1^2}{2}\right), \end{equation*} hence \begin{align*} \xi_1\leq 2\int_{0}^{\infty} x_2(t)dt=2\int_{x_{sa}}^{\bar{x}}dx_1=2(\bar{x}-x_{sa}). \end{align*} On the other hand, \begin{align*} |\xi_2|\leq 2 C\int_{x_{sa}}^{\bar{x}} \left|\frac{x_2}{x_1^3}\right|dx_1=2C \int_{x_{sa}}^{\bar{x}}\frac{\sqrt{2\left(\beta+ax_1+\frac{b_2}{x_1}-\frac{b_1}{7\,x_1^7}-\frac{x_1^2}{2}\right)}}{|x_1^3|}dx_1 <\infty. \end{align*} Finally Melnikov's function is rewritten as \begin{equation} \label{eq:probmel} M_\beta(t_0)=B\, \xi_1 \text{ sen}(\Omega\,t_0)+C\,\xi_2. \end{equation} \begin{theorem}\label{teo:melnivok} Under the conditions of item 2 of the Theorem \ref{teo:claseqv} we have that the homoclinic orbits of \eqref{eq:prinmod} persist as long as $\epsilon$ is sufficiently small and: \begin{equation} \label{eq:cond1} \frac{B}{C}>\bigg|\frac{\xi_2}{\xi_1}\bigg|. 
\end{equation} \end{theorem} \begin{proof} Condition \eqref{eq:cond1} implies that Melinikov's function \eqref{eq:probmel} has a simple zero. Consequently, Theorem \ref{teo:han} reaches the desired conclusion. \end{proof} \begin{example} For illustrative purposes, we have taken from \cite{Rutzel} the realistic values of the physical parameters in Table \ref{tabla1}. \begin{table}[h] \centering \begin{tabular}{|c l|} \hline Symbol & Value\\ \hline $A_1$ & $0.001 X 10^-{70}$ $J m^6$ \\ $A_2$ & $2.96 X 10^-{19}$ $J$ \\ $R$ & $10$ $nm$ \\ $K$ & $0.87$ $N/m$\\ $Z_0$ & $1.68108$ $nm$\\ \hline \end{tabular} \caption{Properties of the case study of the AFM cantilever of Rützel et. al. \cite{Rutzel}} \label{tabla1} \end{table} The values in Table \ref{tabla1} are related to the following adimensionalized values $b_1, b_2 $ and $a$: \begin{align*} b_1 &=113876/10000000, & b_2 &=148148/1000000, & a&=1.07468,\\ |\xi_1| &=0.290315, & |\xi_2| &=0.382056. \end{align*} For instance, fix $C=1$ and $\Omega=1$, Theorem \ref{teo:melnivok} guarantees that if $ B> 1.316$ then the homoclinic persists. \end{example} \subsection*{Acknowledgments} We are grateful to anonymous referees for their useful and inspiring remarks. The authors have been financially supported by the Convocatoria Interna UTP 2016, project CIE 3-17-4. \subsection*{Data Availability} The data used to support the findings of this study are included within the article. \end{document}
\begin{document} \setcounter{page}{1} \setlength{\unitlength}{1mm}\mathbf{a}selineskip .58cm \pagenumbering{arabic} \numberwithin{equation}{section} \title[A note on gradient Solitons] {A note on gradient Solitons on two classes of almost Kenmotsu Manifolds} \author[ K. De and U. C. De ] { Krishnendu De$^{*}$ and Uday Chand De } \address {Department of Mathematics, Kabi Sukanta Mahavidyalaya, The University of Burdwan. Bhadreswar, P.O.-Angus, Hooghly, Pin 712221, West Bengal, India. ORCID iD: https://orcid.org/0000-0001-6520-4520} \email{[email protected] } \address {Department of Mathematics, University of Calcutta, West Bengal, India. ORCID iD: https://orcid.org/0000-0002-8990-4609} \email {uc$_{-}[email protected]} \footnotetext {$\bf{2020\ Mathematics\ Subject\ Classification\:}.$ 53D50, 53C25, 53C80. \\ {Key words and phrases: $(k,\mu)'$- almost Kenmotsu manifolds, Kenmotsu manifolds, $(m,\rho)$-quasi Einstein solitons.\\ \thanks{$^{*}$ Corresponding author} }} \maketitle \begin{abstract} The purpose of the article is to characterize \textbf{gradient $(m,\rho)$-quasi Einstein solitons} within the framework of two classes of almost Kenmotsu Manifolds. Finally, we consider an example to justify a result of our paper. \end{abstract} \maketitle \section{{Introduction}} The techniques of contact geometry carried out a significant role in contemporary mathematics and consequently it have become well-known among the eminent researchers. Contact geometry has manifested from the mathematical formalism of classical mechanics. This subject matter has more than one attachments with the alternative regions of differential geometry and outstanding applications in applied fields such as phase space of dynamical systems, mechanics, optic and thermodynamics. In the existing paper, we have a look at the gradient solitons with nullity distribution which play a functional role in coeval mathematics.\par In the study of Riemannian manifolds $(M,g)$, Gray \cite{ga} and Tanno \cite{ta} introduced the concept of \emph{$k$-nullity distribution $(k\in\mathbb{R})$} and is defined for any $p\in M$ and $k\in\mathbb R$ as follows: \begin{equation}\label{a1} N_{p}(k)=\{W\in T_{p}M:R(U,V)W = k[g(V,W)U-g(U,W)V]\}, \end{equation} for any $U,V\in T_{p}M$, where $R$ indicates the Riemannian curvature tensor of type $(1,3)$.\par Recently, the generalized idea of the $k$-nullity distribution, named as \emph{$(k,\mu)$-nullity distribution} on a contact metric manifold $(M^{2n+1},\eta, \xi, \phi,g)$ intro by Blair, Koufogiorgos and Papantoniou \cite{bkp} and defined for any $p\in M^{2n+1}$ and $k,\mu\in\mathbb R$ as : \begin{eqnarray}\label{a2} N_{p}(k,\mu)=\{W\in T_{p}M^{2n+1}:R(U,V)W&=&k[g(V,W)U-g(U,W)V]\nonumber\\&&+\mu[g(V,W)h U-g(U,W)h V]\}, \end{eqnarray} for any $U,V\in T_{p}M$ and $h=\frac{1}{2}\pounds_{\xi}\phi$, where $\pounds$ denotes the Lie differentiation.\par In $2009$, Dileo and Pastore \cite{dp2} introduced a further generalized concept of the $(k,\mu)$-nullity distribution, called the \emph{$(k,\mu)'$-nullity distribution} on an almost Kenmotsu manifold $(M^{2n+1},\eta,\xi,\phi, g)$ and is defined for any $p\in M^{2n+1}$ and $k,\mu\in\mathbb R$ as : \begin{eqnarray}\label{a3} N_{p}(k,\mu)'=\{W\in T_{p}M^{2n+1}:R(U,V)W&=&k[g(V,W)U-g(U,W)V]\nonumber\\&&+\mu[g(V,W)h'U-g(U,W)h'V]\}, \end{eqnarray} for any $U,V\in T_{p}M$ and $h'=h\circ\phi$.\par A Riemannian metric $g$ on a Riemannian manifold $M$ is called a \emph{gradient $(m,\rho)$-quasi Einstein soliton} if there exists a smooth function 
$f:M^{n}\rightarrow \mathbb{R}$ and three real constants $\rho$, $\lambda$ and $m$ $(0<m\leq \infty)$ such that \begin{equation} S+\nabla^{2}f-\frac{1}{m}df \otimes df =\beta g =(\rho r+\lambda) g,\label{a8}\end{equation} where $\nabla^{2}$ and $\otimes$ indicate the Hessian of $g$ and tensor product, respectively. The expression $S+\nabla^{2}f-\frac{1}{m}df \otimes df$ is the $m$-Bakry-Emery Ricci tensor, which is proportional to the metric $g$ and $\lambda = constant$ \cite{ww}. The soliton becomes trivial if the potential function $f$ is constant and the triviality condition implies that the manifold is an Einstein manifold. Furthermore, when $m=\infty$, the foregoing equation reduces to gradient $\rho$-Einstein soliton. This notion was introduced in \cite {cm} and recently, Venkatesha et al. studied $\rho$- Einstein solitons \cite{ven} on almost Kenmotsu manifold. In this connection, the properties of $(m,\rho)$-quasi Einstein solitons in different geometrical structures have been studied (in details) by (\cite{ag1}, \cite{hw}) and others. To know more about almost Kenmotsu manifold, here we may mention the work of Venkatesha et al. and Wang and his collaborator (\cite{ven1}, \cite{wang1}, \cite{wang2}, \cite{wa2}). Recently, Wang \cite{wang} has studied Yamabe solitons and gradient Yamabe solitons within the context of almost Kenmotsu $(k,\mu)'$ manifolds.\par Motivated from the above studies, we make the contribution to investigate gradient $(m,\rho)$-quasi Einstein solitons in almost Kenmotsu Manifolds.\par The current paper is constructed as :\ In section 2, we recall some basic facts and formulas of almost Kenmotsu manifolds which we will need throughout the paper. In sections 3, we characterize gradient $(m,\rho)$-quasi Einstein solitons with nullity distribution and obtain some interesting results. In the next section we consider the gradient $(m,\rho)$-quasi Einstein soliton in a $3$-dimensional Kenmotsu manifold and proved that the manifold is of constant sectional curvature $-1$, provided $r\neq-(\lambda+2)$. Then we consider an example to verify the result of our paper. \section{{Almost Kenmotsu manifolds}} In this section we gather the formulas and results of almost Kenmotsu manifolds which will be required on later sections. A differentiable manifold $M^{2n+1}$ of dimension $(2n+1)$ is called \emph {almost contact metric manifold} if it admits a covariant vector field $\eta$, a contravariant vector field $\xi$, a $(1,1)$ tensor field $\phi$ and a Riemannian metric $g$ such that \begin{equation}\label{b1} \phi^{2}=-I+\eta\otimes\xi,\;\eta(\xi)=1, \end{equation} \begin{equation}\nonumber g(\phi U,\phi V)=g(U,V)-\eta(U)\eta(V), \end{equation} where $I$ indicates the identity endomorphism . 
Then also $\phi\xi=0$ and $\eta\circ\phi=0$; in a straight forward calculation both can be extracted from (\ref{b1}).\par On an almost Kenmotsu manifold $M^{2n+1}$, the two symmetric tensor fields $h=\frac{1}{2}\pounds_{\xi}\phi$ and $l=R(\cdot,\xi)\xi$, satisfy the following relations \cite{dp2} \begin{equation}\label{b2} h\xi=0,\;l\xi=0,\;tr(h)=0,\;tr(h')=0,\;h\phi+\phi h=0, \end{equation} \begin{equation}\label{b3} \nabla_{U}\xi=-\phi^2U+h'U(\Rightarrow \nabla_{\xi}\xi=0), \end{equation} \begin{equation}\label{b4} \phi l\phi-l=2(h^{2}-\phi^{2}), \end{equation} \begin{equation}\label{b5} R(U,V)\xi=\eta(U)(V-\phi hV)-\eta(V)(U-\phi hU)+(\nabla_{V}\phi h)U-(\nabla_{U}\phi h)V, \end{equation} for any vector fields $U,V$.\par Let the Reeb vector field $\xi$ of an almost Kenmotsu manifold belonging to the $(k,\mu)'$-nullity distribution. Then the symmetric tensor field $h'$ of type $(1,1)$ satisfies the relations $h'\phi+\phi h'=0$ and $h'\xi=0$. Also, it is understandable that \begin{equation}\label{b6} h'^{2}=(k+1)\phi^2\, (\Leftrightarrow h^{2}=(k+1)\phi^2),\;\;h=0\Leftrightarrow h'=0. \end{equation} From the equations (\ref{b1}) and (\ref{b6}), it follows that $k\leq-1$. Certainly, by (\ref{b6}), we conclude that $h'$ vanishes if and only if $k=-1$. In view of the Proposition 4.1 of \cite{dp2} on a $(k,\mu)'$-almost Kenmotsu manifold with $k <-1$ leads to $\mu=-2$. For an almost Kenmotsu manifold, we possess from (\ref{b3}) \begin{equation}\label{b7} R(U,V)\xi=k[\eta(V)U-\eta(U)V]+\mu[\eta(V)h'U-\eta(U)h'V], \end{equation} \begin{equation}\label{b8} R(\xi,U)V=k[g(U,V)\xi-\eta(V)U]+\mu[g(h'U,V)\xi-\eta(V)h'U], \end{equation} where $k,\mu\in\mathbb R$. Contracting $V$ in (\ref{b8}) we have \begin{equation}\label{b9} S(U,\xi)=2k\eta(U). \end{equation} Also, the distribution which is indicated by $\mathcal{D}$ (defined by $\mathcal{D}=$ker $\eta$). Presume $X\in \mathcal{D}$ be the eigen vector of $h'$ corresponding to the eigen value $\lambda$. Then $\lambda^{2}=-(k+1)$, a constant, which follows from (\ref{b6}). Therefore $k\leq -1$ and $\lambda=\pm\sqrt{-k-1}$. The non-zero eigen value $\lambda$ and $-\lambda$ are respectively indicated by $[\lambda]'$ and $[-\lambda]'$, which are the corresponding eigen spaces associated with $h'$. Now before introducing the detailed proof of our main theorem, we first write the following lemma: \begin{lem} \label{L2}(Lemma. 3.2 of \cite{wal}) Let $(M^{2n+1},\eta,\xi,\phi,g)$ be an almost Kenmotsu manifold such that $\xi$ belongs to the $(k,\mu)'$-nullity distribution and $h'\neq0$. Then the Ricci operator $Q$ of $M^{2n+1}$ is given by \begin{eqnarray} \label{b10} Q =-2nid+2n(k+1)\eta \otimes \xi - 2nh',\end{eqnarray} where $k < -1$, moreover, the scalar curvature of $M^{2n+1}$ is $2n(k-2n)$. \end{lem} \section{Gradient $(m,\rho)$-quasi Einstein solitons on a $(2n+1)$-dimensional almost Kenmotsu manifold} Let us assume that the Riemannian metric of a $(2n+1)$-dimensional almost Kenmotsu manifold such that $\xi$ belongs to the $(k,\mu)'$-nullity distribution and $h'\neq0$, is a $(m,\rho)$-quasi Einstein soliton. Then the equation (\ref{a8}) may be expressed as \begin{equation}\label{c31} \nabla_{U}D f+Q U=\frac{1}{m}g(U,D f)D f+\beta U. \end{equation} Taking covariant derivative of (\ref{c31}) along the vector field $V$, we get \begin{eqnarray}\label{c32} \nabla_{V}\nabla_{U}D f&=&-\nabla_{V}QU+ \frac{1}{m}\nabla_{V}g(U,D f)D f\nonumber\\&&+\frac{1}{m}g(U,D f)\nabla_{V}D f+\beta \nabla_{V}U. 
\end{eqnarray} Interchanging $U$ and $V$ in (\ref{c32}), we lead \begin{eqnarray}\label{c33} \nabla_{U}\nabla_{V}D f&=&-\nabla_{U}QV+ \frac{1}{m}\nabla_{U}g(V,D f)D f\nonumber\\&&+\frac{1}{m}g(V,D f)\nabla_{U}D f+\beta \nabla_{U}V \end{eqnarray} and \begin{eqnarray}\label{c34} \nabla_{[U,V]}D f=-Q[U,V]+ \frac{1}{m}g([U,V],D f)D f+\beta [U,V]. \end{eqnarray} Equations (\ref{c31})-(\ref{c34}) and the symmetric property of Levi-Civita connection together with $R(U,V)D f=\nabla_{U}\nabla_{V}D f-\nabla_{V}\nabla_{U}D f-\nabla_{[U,V]}D f$ we lead \begin{eqnarray}\label{c35} R(U,V)D f&=& (\nabla_{V}Q)U-(\nabla_{U}Q)V+\frac{\beta}{m}\{ (V f)U-(U f)V\}\nonumber\\&& +\frac{1}{m}\{ (U f)QV-(V f)QU \}. \end{eqnarray} Taking inner product of (\ref{c35}) with $\xi$, we have \begin{eqnarray}\label{c39} g(R(U,V)D f,\xi)&=&\frac{\beta}{m}\{ (V f)\eta (U)-(U f)\eta(V)\}\nonumber\\&& +\frac{1}{m}\{ (U f)\eta(QV)-(V f)\eta(QU) \}. \end{eqnarray} Again (\ref{b7}) implies that \begin{equation}\label{c8} g(R(U,V)\xi,Df)=k[\eta(V)(U f)-\eta(U)(V f)]+\mu[\eta(V)(h'U f)-\eta(U)(h'V f)]. \end{equation} Combining equation (\ref{c39}) and (\ref{c8}) reveal that \begin{eqnarray}\label{c40} &&k[\eta(U)(V f)-\eta(V)(U f)]+\mu[\eta(U)(h'V f)-\eta(V)(h'U f)]\nonumber\\&& =\frac{\beta}{m}\{ (V f)\eta (U)-(U f)\eta(V)\}\nonumber\\&& +\frac{1}{m}\{ (U f)\eta(QV)-(V f)\eta(QU) \}. \end{eqnarray} Setting $V=\xi$ in the foregoing equation yields \begin{equation} \label{c41} 2mh'D f=(\beta-2nk-mk)\{D f-(\xi f)\xi\},\end{equation} where we have used $\mu=-2$.\par Letting $\beta=(2nk+mk+2m)$ and taking into account the equation (\ref{b6}) and operating $h'$ on (\ref{c41}) produces that \begin{equation} \label{c42} -(k+1)\{D f-(\xi f)\xi\}=h'D f,\end{equation} Comparing the antecedent relation with (\ref{c41}) gives that \begin{equation}\label{c43} (k+2)\{D f-(\xi f)\xi\}=0. \end{equation} This shows that either $k=-2$ or $\{D f-(\xi f)\xi\}=0$. \par Case i: If $k=-2$, then the Proposition 4.1 and Corollary 4.2 of \cite{dp2} state that $M^{2n+1}$ is locally isometric to the Riemannian product $\mathbb{H}^{n+1}(-4)\times \mathbb{R}^{n}$. In fact, from (\cite{pet},\cite{pet1}) we can say that the product $\mathbb{H}^{n+1}(-4)\times \mathbb{R}^{n}$ is a rigid gradient Ricci soliton.\par Case ii: \begin{equation} Df=(\xi f)\xi.\label{d11}\end{equation} Executing the covariant differentiation of (\ref{d11}) along $U\in \chi (M)$ and utilizing (\ref{b3}) we get \begin{equation} \nabla _{U}Df=U(\xi f)\xi+(\xi f)U-(\xi f)\eta(U)\xi+(\xi f)h'U.\label{d12}\end{equation} Replacing the foregoing equation into (\ref{c31}) revels that \begin{equation} Q X=\beta U -(\xi f)U-U(\xi f)\xi+(\xi f)\eta(U)\xi-(\xi f)h'U+\frac{1}{m}g(U,D f)D f.\label{d13}\end{equation} Comparing (\ref{b10}) and (\ref{d13}) give that \begin{eqnarray} && \{\beta+2n -(\xi f)\}U+\{2n-(\xi f)\}h'U +\{(\xi f)\eta(U)\nonumber\\&&-U(\xi f)-2n(k+1)\eta(U)\}\xi+\frac{1}{m}g(U,D f)D f =0.\label{d14}\end{eqnarray} Now operating $h'$ we get \begin{eqnarray} && \{\beta+2n -(\xi f)\}h'U+(k+1)\{2n-(\xi f)\}\phi^{2}U \nonumber\\&&+\frac{1}{m}g(U,D f)h'D f =0.\label{d15}\end{eqnarray} Contracting $U$ in the above equation we infer \begin{equation} 2n(k+1)\{2n-(\xi f)\}+\frac{1}{m}g(h'D f,D f)=0.\label{d16}\end{equation} Putting $U=\xi$ in (\ref{d15}), we have \begin{equation} \eta(D f)h' D f=0.\label{d17}\end{equation} This shows that either $\eta(D f)=0$ or $h' D f=0.$\par Case (i): If $\eta(D f)=0$, then we get $\xi f=0$ and hence from equation (\ref{d11}) it follows that $f=constant$. 
Then we get from (\ref{c31}) that the manifold is an Einstein manifold.\par Case (ii): If $h' D f=0$, then from (\ref{d16}) we get $(\xi f)=2n$. Now putting this value in (\ref{d15}) we obtain $\beta h' U=0$. Hence either $\beta=0$, or $h' U=0$. Both contradicts our assumptions $\beta=(2n+m)k+2m$ and $k<-1$ respectively .\par Thus, we can state the following theorem: \begin{thm}\label{main3} Let the Riemannian metric of a $(2n+1)$-dimensional $(k,\mu)'$ almost Kenmotsu manifold with $h'\neq0$, be the gradient $(m,\rho)$-quasi Einstein soliton. Then either $M^{2n+1}$ is locally isometric to a rigid gradient Ricci soliton $\mathbb{H}^{n+1}(-4)\times \mathbb{R}^{n}$ or $M^{2n+1}$ is an Einstein manifold, provided $\beta=(2nk+mk+2m)$. \end{thm} We know that when $m=\infty$, the $(m,\rho)$-quasi Einstein soliton becomes gradient $\rho$-Einstein soliton. Putting the value $m=\infty$ in (\ref{c40}) and by a straight forward calculation we find that the manifold $M^{2n+1}$ is locally isometric to a rigid gradient Ricci soliton $\mathbb{H}^{n+1}(-4)\times \mathbb{R}^{n}$. Thus, we can state: \begin{cor}\label{cor1} Let the Riemannian metric of a $(2n+1)$-dimensional almost Kenmotsu manifold such that $\xi$ belongs to the $(k,\mu)'$-nullity distribution and $h'\neq0$, be the gradient $\rho$- Einstein soliton. Then $M^{2n+1}$ is locally isometric to a rigid gradient Ricci soliton $\mathbb{H}^{n+1}(-4)\times \mathbb{R}^{n}$. \end{cor} \begin{rem} The above corollary have been proved by Venkatesha and Kumara in their paper \cite{ven}. They also prove that the potential vector field is tangential to the Euclidean factor $\emph{R}^{n}$. \end{rem} \section{Gradient $(m,\rho)$-quasi Einstein solitons on a $3$-dimensional Kenmotsu manifold} In \cite{dp2}, Dileo and Pastore proved that the following conditions are equivalent:\par (1) the (1,1)-type tensor field $h$ vanishes and foliations of the distribution $\mathcal{D}$ are K$\ddot{a}$hlerian.\par (2) almost contact metric structure of an almost Kenmotsu manifold is normal.\par As a outcome, we get instantly that a 3-dimensional almost Kenmotsu manifold reduces to a Kenmotsu manifold if and only if $h=0$. In this section, we target to investigate the gradient $(m,\rho)$-quasi Einstein soliton on a $3$-dimensional Kenmotsu manifold. Making use of $h=0$ in equation (\ref{b3}) we get $\nabla \xi=-\phi^{2}$, and this revels that \begin{equation} \label{k1} R(U,V)\xi =-\eta (V)U + \eta (U)V\end{equation} for any $U,V$ $\in \chi(M)$ and therefore by contracting $V$ in (\ref{k1}) we obtain $Q\xi=-2\xi$. From \cite{de} we know that for a 3-dimensional Kenmotsu manifold \begin{eqnarray}\label{kk1} R(U,V)W &=&(\frac{r+4}{2})[g(V,W)U-g(U,W)V]\\&& -(\frac{r+6}{2})[g(V,W)\eta (U)\xi-g(U,W)\eta (V)\xi\nonumber\\&&+\eta (V)\eta (W)U-\eta (U)\eta (W)V]\nonumber,\end{eqnarray} \begin{equation} \label{kk2} S(U,V)=\frac{1}{2}[(r+2)g(U,V)-(r+6)\eta (U)\eta (V)].\end{equation} Now before introducing the detailed proof of our main theorem, we first state the following result: \begin{lem}\label{L3}(lemma. 4.1 of \cite{wa}) For a 3-dimensional Kenmotsu manifold $(M^{3},\phi,\xi,\eta,g)$, we have \begin{eqnarray} \label{b13} \xi r=-2(r+6) \end{eqnarray} \end{lem} where $r$ denotes the scalar curvature of $M$.\\ Let us assume that the Riemannian metric of a $3$-dimensional almost Kenmotsu manifold be a $(m,\rho)$-quasi Einstein soliton. Then the equation (\ref{a8}) may be expressed as \begin{equation}\label{k2} \nabla_{U}D f+Q U=\frac{1}{m}g(U,D f)D f+\beta U. 
\end{equation} Taking covariant derivative of (\ref{k2}) along the vector field $V$, we get \begin{eqnarray}\label{k3} \nabla_{V}\nabla_{U}D f&=&-\nabla_{V}QU+ \frac{1}{m}\nabla_{V}g(U,D f)D f\nonumber\\&&+\frac{1}{m}g(U,D f)\nabla_{V}D f+\beta \nabla_{V}U+(V\beta)U. \end{eqnarray} Interchanging $U$ and $V$ in (\ref{k3}), we lead \begin{eqnarray}\label{k4} \nabla_{U}\nabla_{V}D f&=&-\nabla_{U}QV+ \frac{1}{m}\nabla_{U}g(V,D f)D f\nonumber\\&&+\frac{1}{m}g(V,D f)\nabla_{U}D f+\beta \nabla_{U}V+(U\beta)V \end{eqnarray} and \begin{eqnarray}\label{k5} \nabla_{[U,V]}D f=-Q[U,V]+ \frac{1}{m}g([U,V],D f)D f+\beta [U,V]. \end{eqnarray} Equations (\ref{k2})-(\ref{k5}) and the symmetric property of Levi-Civita connection together with $R(U,V)D f=\nabla_{U}\nabla_{V}D f-\nabla_{V}\nabla_{U}D f-\nabla_{[U,V]}D f$ we lead \begin{eqnarray}\label{k6} R(U,V)D f&=& (\nabla_{V}Q)U-(\nabla_{U}Q)V+\frac{\beta}{m}\{ (V f)U-(U f)V\}\nonumber\\&& +\frac{1}{m}\{ (U f)QV-(V f)QU \}+\{(U\beta)V-(V\beta)U\}. \end{eqnarray} Taking inner product of (\ref{k6}) with $\xi$ and using (\ref{kk2}) we have \begin{eqnarray}\label{k7} g(R(U,V)D f,\xi)&=&\frac{\beta}{m}\{ (V f)\eta (U)-(U f)\eta(V)\}\nonumber\\&& +\frac{1}{m}\{ (U f)\eta(QV)-(V f)\eta(QU) \}\nonumber\\&& +\{(U\beta)\eta (V)-(V\beta)\eta(U)\}. \end{eqnarray} Again, from (\ref{k1}) we infer that \begin{equation}\label{k8} g(R(U,V)D f,\xi)=-\{ (V f)\eta (U)-(U f)\eta(V)\}.\end{equation} Combining equation (\ref{k7}) and (\ref{k8}) reveal that \begin{eqnarray}\label{k9} -\{ (V f)\eta (U)-(U f)\eta(V)\}&=&\frac{\beta}{m}\{ (V f)\eta (U)-(U f)\eta(V)\}\nonumber\\&& +\frac{1}{m}\{ (U f)\eta(QV)-(V f)\eta(QU) \}\nonumber\\&&+\{(U\beta)\eta (V)-(V\beta)\eta(U)\}. \end{eqnarray} Replacing $V$ by $\xi$ in the foregoing equation, we get \begin{equation}\label{k10} d(f-\beta)=\xi(f-\beta)\eta,\end{equation} where $d$ stands for the exterior differentiation, provided $\beta=-2$. In other word, $f-\beta$ is invariant along $\mathcal{D}$, i.e., $X(f-\beta)=0$ for any $X \in \mathcal{D}$. Taking into account the above fact and using Lemma 6.1, we infer \begin{equation}\label{k11} (\xi f)=(\xi \beta)=\rho (\xi r)=-2\rho (r+6).\end{equation} The contraction of the equation (\ref{k6}) along $U$ and applying Lemma 6.1, we get \begin{eqnarray}\label{k12} S(V,D f)&=& \frac{1}{2}(V r)+\frac{2 \beta }{m}(V f)\nonumber\\&& -\frac{1}{m}\{r(V f)-g(Q V, D f)\}-2(V \beta). \end{eqnarray} Clearly, comparing the above equation with (\ref{kk2}) yields \begin{eqnarray}\label{k13} && -(V r)-\frac{4 \beta }{m}(V f) +\frac{2}{m}\{r(V f)-g(Q V, D f)\}\nonumber\\&&+4(V \beta)+(r+2)(V f)-(r+6)\eta (V)\xi f=0. \end{eqnarray} By a straight forward calculation, replacing $V$ by $\xi$ in (\ref{k13}) and using (\ref{k11}), we can easily obtain \begin{equation} \label{k14} (\xi f)=\frac{2m(r+6)}{4\beta-2r-4}.\end{equation} Comparing the antecedent equation with (\ref{k11}) reveals that \begin{equation} \label{k15} 2(r+6)\{\rho+\frac{m}{4\beta-2r-4}\}=0.\end{equation} This shows that either $r=-6$ or $\{\rho+\frac{m}{4\beta-2r-4}\}=0$. Next, we split our investigation as :\par Case (i): If $r=-6$, then from equation (\ref{kk2}) we conclude that $S=-2g$. Therefore, by using equation (\ref{kk1}) we sum up that the manifold is of constant sectional curvature $-1$.\par Case (ii): If $\{\rho+\frac{m}{4\beta-2r-4}\}=0$, then by a simple calculation we get \begin{equation} \label{k16} r=\frac{4\lambda-4-4\rho\lambda-4\rho+m}{2(2\rho^{2}-3\rho+1)}=constant,\end{equation} provided $\rho\neq 1$. 
Hence by applying Lemma 6.1 we can easily get $r=-6$. Therefore, from Case (i) we see that the manifold is of constant sectional curvature $-1$. After combining the two conditions namely, $\beta=-2$ and $\rho\neq 1$, we can write $r\neq -(\lambda+2)$. Thus we have the following theorem: \begin{thm}\label{main4} Let the Riemannian metric of a $3$-dimensional Kenmotsu manifold be the gradient $(m,\rho)$-quasi Einstein soliton. Then the manifold is of constant sectional curvature $-1$, provided $r\neq -(\lambda+2)$. \end{thm} We know that when $m=\infty$, the $(m,\rho)$-quasi Einstein soliton gives the so called gradient $\rho$-Einstein soliton. Putting the value $m=\infty$ in (\ref{k9}) and by a straight forward calculation we find that the manifold is of constant sectional curvature $-1$. Thus, we can state: \begin{cor}\label{cor2} Let the Riemannian metric of a $3$-dimensional Kenmotsu manifold be the gradient $\rho$- Einstein soliton. Then the manifold is of constant sectional curvature $-1$. \end{cor} \section{\textbf{Example }} Here we consider an example cited in our paper \cite{dek1}. We consider the 3-dimensional manifold $M=\{(u,v,w)\varepsilon \mathbb{R}^{3}, w\neq0\},$ where $(u,v,w)$ are standard coordinate of $\mathbb{R}^{3}.$ The vector fields$$E_{1}=w\frac{\partial }{\partial u},\hspace{7pt}E_{2}=w\frac{\partial } {\partial v} ,\hspace{7pt}E_{3}=-w\frac{\partial }{\partial w}$$ are linearly independent at each point of $M.$ Let $g$ be the Riemannian metric defined by $$g(E_{1},E_{3})=g(E_{1},E_{2})=g(E_{2},E_{3})=0,$$ $$g(E_{1},E_{1})=g(E_{2},E_{2})=g(E_{3},E_{3})=1.$$ Let $\eta $ be the 1-form defined by $\eta (W)=g(W,E_{3})$ for any $W\varepsilon \chi (M).$ Let $\phi $ be the $(1,1)$ tensor field defined by $$\phi (E_{1})=-E_{2},\hspace{7pt} \phi (E_{2})=E_{1},\hspace{7pt}\phi (E_{3})=0.$$ Then for $E_{3}=\xi $ , the structure $(\phi ,\xi ,\eta ,g)$ defines an almost contact metric structure on $M$. Further Koszul's formula yields $$\nabla _{E_{1}}E_{3}=E_{1},\hspace{10pt}\nabla _{E_{1}}E_{2}=0,\hspace{10pt} \nabla _{E_{1}}E_{1}=-E_{3},$$ $$\nabla _{E_{2}}E_{3}=E_{2},\hspace{10pt}\nabla _{E_{2}}E_{2}=E_{3},\hspace{10pt} \nabla _{E_{2}}E_{1}=0,$$ \begin{equation}\nabla _{E_{3}}E_{3}=0,\hspace{10pt}\nabla _{E_{3}}E_{2}=0,\hspace{10pt} \nabla _{E_{3}}E_{1}=0.\label{f5}\end{equation} From the above it follows that the manifold satisfies $\nabla _{U}\xi=U-\eta (U)\xi$, for $\xi=E_{3}$. Hence the manifold is a Kenmotsu manifold.\\ We verified that $$R(E_{1},E_{2})E_{3}=0,\hspace{10pt}R(E_{2},E_{3})E_{3}=-E_{2},\hspace{10pt} R(E_{1},E_{3})E_{3}=-E_{1},$$ $$R(E_{1},E_{2})E_{2}=-E_{1},\hspace{10pt}R(E_{2},E_{3})E_{2}=E_{3},\hspace{10pt} R(E_{1},E_{3})E_{2}=0,$$ $$R(E_{1},E_{2})E_{1}=E_{2},\hspace{10pt}R(E_{2},E_{3})E_{1}=0,\hspace{10pt} R(E_{1},E_{3})E_{1}=E_{3}.$$\\ From the above expressions of the curvature tensor $R$ we obtain \begin{eqnarray} S(E_{1},E_{1})&=&g(R(E_{1},E_{2})E_{2},E_{1})+g(R(E_{1},E_{3})E_{3},E_{1}) \nonumber\\&=&-2.\end{eqnarray} Similarly, we have $$S(E_{2},E_{2})=S(E_{3},E_{3})=-2.$$ Therefore, $$r=S(E_{1},E_{1})+S(E_{2},E_{2})+S(E_{3},E_{3})=-6.$$ Let $f:M^{3}\rightarrow \mathbb{R}$ be a smooth function defined by $f=-w^{2}$. Then gradient of $f$ with respect to $g$ is given by $$ Df=-2w\frac{\partial }{\partial w}=2E_{3}.$$ With the help of (\ref{f5}) we can easily get $$ Hessf(E_{3},E_{3})=0.$$ Thus gradient $\rho$-Einstein soliton equation shows $$Hessf(E_{3},E_{3})+S(E_{3},E_{3})+2g(E_{3},E_{3})=0. 
$$ Similarly checking the other components we conclude that that $M^{3}$ satisfies $$Hessf(U,V)+S(U,V)+2g(U,V)=0. $$ Thus $g$ is a gradient $\rho$-Einstein soliton with $f=-w^{2}$ and $\beta=-2$. Hence the \textbf{Corollary 4.3} is verified. \section{Declarations} \subsection{Funding } Not applicable. \subsection{Conflicts of interest/Competing interests} The authors declare that they have no conflict of interest. \subsection{Availability of data and material } Not applicable. \subsection{Code availability} Not applicable. \end{document}
\begin{document} \title{Energy conservation in the limit of filtered solutions for the 2D Euler equations} \begin{abstract} We consider energy conservation in a two-dimensional incompressible and inviscid flow through weak solutions of the filtered-Euler equations, which describe a regularized Euler flow based on a spatial filtering. We show that the energy dissipation rate for the filtered weak solution with vorticity in $L^p$, $p > 3/2$ converges to zero in the limit of the filter parameter. Although the energy defined in the whole space is not finite in general, we formally extract a time-dependent part, which is well-defined for filtered solutions, from the energy and define the energy dissipation rate as its time-derivative. Moreover, the limit of the filtered weak solution is a weak solution of the Euler equations and it satisfies a local energy balance in the sense of distributions. For the case of $p = 3/2$, we find the same result as $p > 3/2$ by assuming Onsager's critical condition for the family of the filtered solutions. \end{abstract} \section{Introduction} \label{sec:intro} According to the Kolmogorov theory \cite{Kolmogorov}, energy dissipation in inviscid flows is closely related to three-dimensional (3D) turbulence. This implies that energy dissipating solutions of the 3D Euler equations are a key to comprehension of turbulent dynamics. Onsager conjectured that weak solutions of the 3D Euler equations acquiring a H\"{o}lder continuity with the order greater than $1/3$ conserve the energy, and the energy dissipation could occur for the order less than $1/3$ \cite{Eyink, Onsager, Shvydkoy}; Onsager's conjecture has been shown mathematically \cite{Buckmaster, Cheskidov(a), Constantin}. For 2D flows, the Kraichnan-Leith-Batchelor theory \cite{Batchelor, Kraichnan, Leith} indicates that two inertial ranges corresponding to a backward energy cascade and a forward enstrophy cascade appear in turbulent flows, which asserts that energy conservation in inviscid flows is still important for the 2D turbulent problem. In this paper, we study energy conserving solutions of an inviscid model. Motions of incompressible and inviscid flows are often described by the 2D Euler equations: \begin{equation} \partial_t \boldsymbol{u} + (\boldsymbol{u} \cdot \nabla) \boldsymbol{u} + \nabla p = 0, \qquad \nabla \cdot \boldsymbol{u} = 0, \label{eq:Euler} \end{equation} where $\boldsymbol{u}=\boldsymbol{u}(\boldsymbol{x}, t) = ( u_1(\boldsymbol{x}, t), u_2(\boldsymbol{x}, t))$ is the fluid velocity field and $p=p(\boldsymbol{x}, t)$ is the scalar pressure. A classical weak solution for the initial value problem of \eqref{eq:Euler} with $\boldsymbol{u}(\boldsymbol{x}, 0) = \boldsymbol{u}_0(\boldsymbol{x})$ is defined as follows, see \cite{Diperna(b)}. 
\begin{definition} \label{def:Euler} {\it A velocity field $\boldsymbol{u} \in L^\infty(0, T; L^2_{\mathrm{loc}}(\mathbb{R}^2))$ vanishing at infinity is a weak solution of \eqref{eq:Euler} with initial data $\boldsymbol{u}_0$ provided that \begin{itemize} \item[$\mathrm{(i)}$] for any vector $\Psi \in C_c^\infty(\Omega_T)$ with $\nabla \cdot \Psi = 0$, \begin{equation*} \intint_{\Omega_T} \left( \partial_t \Psi \cdot \boldsymbol{u} + \nabla \Psi : \boldsymbol{u} \otimes \boldsymbol{u} \right) d\boldsymbol{x} dt = 0, \end{equation*} where $\Omega_T \equiv \mathbb{R}^2 \times (0,T)$, $\boldsymbol{v} \otimes \boldsymbol{v} = (v_i v_j)$, $\nabla \Psi = (\partial_j \psi_i)$ and $A : B = \sum_{i,j} a_{ij} b_{ij}$, \item[$\mathrm{(ii)}$] for any scalar $\psi \in C_c^\infty(\Omega_T)$, $\intint_{\Omega_T} \nabla \psi \cdot \boldsymbol{u} d\boldsymbol{x} dt = 0$, \item[$\mathrm{(iii)}$] $\boldsymbol{u} \in \mathrm{Lip}([0,T];H_{\mathrm{loc}}^{-L}(\mathbb{R}^2))$ for some $L > 0$ and $\boldsymbol{u}(\cdot, 0) = \boldsymbol{u}_0(\cdot)$ in $H_{\mathrm{loc}}^{-L}(\mathbb{R}^2)$, \end{itemize} } \label{def-sol-velocity} \end{definition} Set vorticity $\omega \equiv \operatorname{curl} \boldsymbol{u} = \partial_{x_1} u_2 - \partial_{x_2} u_1$. Taking the curl of \eqref{eq:Euler}, we obtain a transport equation for $\omega$: \begin{equation} \partial_t \omega + (\boldsymbol{u} \cdot \nabla) \omega = 0 \label{eq:vEuler} \end{equation} with initial vorticity $\omega_0 \equiv \operatorname{curl} \boldsymbol{u}_0$. The velocity $\boldsymbol{u}$ is recovered from $\omega$ via the Biot-Savart law: \begin{equation} \boldsymbol{u}(\boldsymbol{x},t) = \left( \boldsymbol{K} \ast \omega \right)(\boldsymbol{x},t) \equiv \int_{\mathbb{R}^2} \boldsymbol{K}(\boldsymbol{x} - \boldsymbol{y}) \omega(\boldsymbol{y},t) d \boldsymbol{y}, \label{eq:BSlaw} \end{equation} where $\boldsymbol{K}$ is defined by \begin{equation*} \boldsymbol{K}(\boldsymbol{x}) \equiv \nabla^\perp G(\boldsymbol{x}) = \frac{1}{2 \pi} \frac{\boldsymbol{x}^\perp}{\left| \boldsymbol{x} \right|^2}, \qquad G(\boldsymbol{x}) \equiv \frac{1}{2 \pi} \log{|\boldsymbol{x}|} \end{equation*} with $\nabla^\perp = (- \partial_{x_2}, \partial_{x_1})$ and $\boldsymbol{x}^\perp = (- x_2, x_1)$. Note that $G$ is a fundamental solution to the 2D Laplacian. In this paper, we focus on weak solutions of (\ref{eq:vEuler}) with $\omega_0 \in L^1(\mathbb{R}^2) \cap L^p(\mathbb{R}^2)$, $p > 1$; the existence of a global weak solution has been established for $1 < p \leq \infty$ and the uniqueness holds only for $p = \infty$ \cite{Diperna(b), Marchioro, Yudovich}. As it is mentioned in \cite{Lopes}, a weak solution for $\omega_0 \in L^1(\mathbb{R}^2) \cap L^p(\mathbb{R}^2)$, $p \geq 4/3$ satisfies \eqref{eq:vEuler} in the following sense. \begin{equation*} \intint_{\Omega_T} \left( \partial_t \psi(\boldsymbol{x},t) + \nabla \psi (\boldsymbol{x},t) \cdot \boldsymbol{u}(\boldsymbol{x},t) \right) \omega(\boldsymbol{x},t) d\boldsymbol{x} dt = 0 \end{equation*} for any $\psi \in C_c^\infty(\Omega_T)$. For a weak solution of \eqref{eq:vEuler}, we consider the kinetic energy, \begin{equation} \frac{1}{2}\int |\boldsymbol{u}(\boldsymbol{x},t)|^2 d\boldsymbol{x}, \label{energy} \end{equation} where $\boldsymbol{u}$ is given by \eqref{eq:BSlaw}, though \eqref{energy} is not finite on the entire space $\mathbb{R}^2$ except for specific vorticity, see \cite{Diperna(b)} for an example. Cheskidov et al. 
\cite{Cheskidov(b)} have shown that a weak solution of the 2D Euler equations on the torus $\mathbb{T}^2$, for which \eqref{energy} is finite, conserves the energy for $\omega_0 \in L^{3/2}(\mathbb{T}^2)$ by using a spatial mollification. They have also shown energy conservation for the weak solution obtained by an inviscid limit of the 2D Navier-Stokes equations for $\omega_0 \in L^p(\mathbb{T}^2)$, $p > 1$, which is called a {\it physically realizable weak solution}. In this paper, we consider another regularization of the Euler equations, which we call the {\it filtered-Euler equations}, and show energy conservation on $\mathbb{R}^2$ in the limit of the regularization parameter. Although the energy is still infinite for the filtered inviscid model, we extract a finite time-dependent term formally and see the convergence of its time-derivative. We also show that the weak solution of the 2D filtered-Euler equations converges weakly to a weak solution of the 2D Euler equations and satisfies a local energy balance equation. The filtered-Euler equations are given by \begin{equation} \partial_t \boldsymbol{v}^\varepsilon + (\boldsymbol{u}^\varepsilon \cdot \nabla) \boldsymbol{v}^\varepsilon - (\nabla \boldsymbol{v}^\varepsilon)^T \cdot \boldsymbol{u}^\varepsilon + \nabla p^\varepsilon = 0, \qquad \nabla \cdot \boldsymbol{v}^\varepsilon = 0, \label{FEE} \end{equation} where $\boldsymbol{v}^\varepsilon$ and $p^\varepsilon$ denote the velocity field and the generalized pressure, respectively. Another field $\boldsymbol{u}^\varepsilon$ is a spatially filtered velocity of $\boldsymbol{v}^\varepsilon$, that is, \begin{equation} \boldsymbol{u}^\varepsilon(\boldsymbol{x},t) = \left( h^\varepsilon \ast \boldsymbol{v}^\varepsilon \right) (\boldsymbol{x},t), \qquad h^\varepsilon(\boldsymbol{x}) \equiv \frac{1}{\varepsilon^2} h \left( \frac{\boldsymbol{x}}{\varepsilon} \right), \qquad \varepsilon > 0, \label{Fvelo} \end{equation} in 2D flows. Refer to \cite{Foias, Holm} for the derivation of the filtered-Euler equations through the filtering (\ref{Fvelo}). Here, $h \in L^1(\mathbb{R}^2)$ is a radial function satisfying $\int_{\mathbb{R}^2} h(\boldsymbol{x}) d\boldsymbol{x} = 1$, which we call the {\it filter function}. For simplicity, we assume $h \in C^1_0(\mathbb{R}^2 \setminus \{ 0 \})$: a continuously differentiable function that vanishes at infinity and may have a singularity at the origin. Note that, considering specific filter functions, we obtain two well-known regularizations: the Euler-$\alpha$ model and the vortex blob model, see \cite{G.1, Holm}. In particular, the Euler-$\alpha$ equations and their viscous extension, the Navier-Stokes-$\alpha$ equations, are considered as physically relevant models of turbulent flows \cite{Chen(a), Chen(b), Foias, Foias(a), Lunasin, Mohseni}. Taking the $\operatorname{curl}$ of (\ref{FEE}) with the incompressible condition, we obtain the transport equation for $q^\varepsilon \equiv \operatorname{curl} \boldsymbol{v}^\varepsilon$ convected by $\boldsymbol{u}^\varepsilon$, \begin{equation} \partial_t q^\varepsilon + (\boldsymbol{u}^\varepsilon \cdot \nabla) q^\varepsilon = 0, \qquad \boldsymbol{u}^\varepsilon = \boldsymbol{K}^\varepsilon \ast q^\varepsilon, \qquad \boldsymbol{K}^\varepsilon \equiv \boldsymbol{K} \ast h^\varepsilon. 
\label{VFEE} \end{equation} The Biot-Savart law for the filtered vorticity $\omega^\varepsilon \equiv \operatorname{curl} \boldsymbol{u}^\varepsilon$ gives $\boldsymbol{u}^\varepsilon = \boldsymbol{K} \ast \omega^\varepsilon$ and we have $\nabla \cdot \boldsymbol{u}^\varepsilon = 0$ and $\omega^\varepsilon = h^\varepsilon \ast q^\varepsilon$ when the convolution commutes with the differential operator. The Lagrangian flow map $\boldsymbol{\eta}^\varepsilon$ associated with $\boldsymbol{u}^\varepsilon$ is given by \begin{equation} \partial_t \boldsymbol{\eta}^\varepsilon(\boldsymbol{x}, t) = \boldsymbol{u}^\varepsilon \left( \boldsymbol{\eta}^\varepsilon(\boldsymbol{x}, t), t \right), \qquad \boldsymbol{\eta}^\varepsilon(\boldsymbol{x}, 0) = \boldsymbol{x}. \label{flow-map} \end{equation} The preceding study \cite{G.1} has shown that the 2D filtered-Euler equations have a unique global weak solution for $q_0 \in \mathcal{M}(\mathbb{R}^2)$, the space of finite Radon measures on $\mathbb{R}^2$, under some additional conditions for $h$. More precisely, we have a unique solution, \begin{equation} \boldsymbol{\eta}^\varepsilon \in C^1([0,T]; \mathscr{G}), \qquad q^\varepsilon \in C_w([0,T]; \mathcal{M}(\mathbb{R}^2)), \qquad \boldsymbol{u}^\varepsilon \in C([0,T]; C_0(\mathbb{R}^2)), \label{sol:VFEE} \end{equation} to \eqref{VFEE} and \eqref{flow-map} with $q_0 \in \mathcal{M}(\mathbb{R}^2)$, where $\mathscr{G}$ denotes the group of all homeomorphisms of $\mathbb{R}^2$ preserving the Lebesgue measure and $C_w$ does the weak continuity. Note that $\boldsymbol{\eta}^\varepsilon$, $q^\varepsilon$ and $\boldsymbol{u}^\varepsilon$ are related to each other: $q^\varepsilon(\boldsymbol{x}, t) = q_0 \left( \boldsymbol{\eta}^\varepsilon(\boldsymbol{x}, - t) \right)$ and $\boldsymbol{u}^\varepsilon = \boldsymbol{K}^\varepsilon \ast q^\varepsilon$. The weak solution \eqref{sol:VFEE} satisfies (\ref{VFEE}) in the sense that \begin{equation*} \intint_{\Omega_T} \left( \partial_t \psi(\boldsymbol{x},t) + \nabla \psi(\boldsymbol{x},t) \cdot \boldsymbol{u}^\varepsilon(\boldsymbol{x},t) \right) q^\varepsilon(\boldsymbol{x},t) d\boldsymbol{x} dt = 0 \end{equation*} for any $\psi\in C_0^\infty(\Omega_T)$, the space of smooth functions vanishing at infinity in $\mathbb{R}^2$ and the boundary of $(0,T)$. We mention the convergence of weak solutions of the 2D filtered-Euler equations to those of the 2D Euler equations in the $\varepsilon \rightarrow 0$ limit. For $q_0 \in L^1(\mathbb{R}^2) \cap L^\infty(\mathbb{R}^2)$, the weak solution of \eqref{VFEE} strongly converges to a unique global weak solution of \eqref{eq:Euler} with $\omega_0 = q_0$: the filtered flow map $\boldsymbol{\eta}^\varepsilon$ converges to a flow map induced by the 2D Euler equations \cite{G.1}. For $q_0 \in L^1(\mathbb{R}^2) \cap L^p(\mathbb{R}^2)$, $1 < p < \infty$, as we show later, the filtered weak solution converges weakly to a weak solution of \eqref{eq:Euler}, which is constructed in \cite{Diperna(b)}. The convergence result has been extended to initial vorticity in $\mathcal{M}(\mathbb{R}^2) \cap H^{-1}_{\mathrm{loc}}(\mathbb{R}^2)$ with a distinguished sign \cite{G.2}. Throughout this paper, we use the following notations. A open ball is denoted by $B_r \equiv \{ \boldsymbol{x} \in \mathbb{R}^2 \mid |\boldsymbol{x}| < r \}$. For a set $A \subset \mathbb{R}^2$, $\chi_{A}$ denotes the indicator function and $|A|$ does the Lebesgue measure. 
For the exponent $p$ in the Lebesgue or Sobolev space, $p'$ is the conjugate exponent of $p$, that is, $1 = 1/p + 1/{p'}$ for $p \in [1, \infty]$, and $p^\ast \in (2,\infty)$ is defined by $ p^\ast \equiv 2p/(2 - p)$, that is, $1/{p^\ast} = 1/p -1/2$ for $p \in (1, 2)$. We also introduce the weight function $w_{\alpha}(\boldsymbol{x}) \equiv |\boldsymbol{x}|^\alpha$ for $\boldsymbol{x} \in \mathbb{R}^2$. Note that we omit the domain in the norm when it is the entire space $\mathbb{R}^2$. As for convergence, $f_n \rightarrow f$ denotes strong convergence and $f_n \rightharpoonup f$ does weak convergence in Banach spaces. \section{Main results} \subsection{Energy dissipation rate} \label{Subsec:FEE-dissipation} Before deriving the energy dissipation rate for the filtered-Euler equations, we see basic properties of a weak solution to \eqref{VFEE} with $q_0 \in \mathcal{M}(\mathbb{R}^2)$. Considering the Lagrangian flow map $\boldsymbol{\eta}^\varepsilon$, we find $\| q^\varepsilon(\cdot, t) \|_{\mathcal{M}} = \| q_0 \|_{\mathcal{M}}$ and $\{ q^\varepsilon \}$ is uniformly bounded in $C([0,T]; \mathcal{M}(\mathbb{R}^2))$. We also have \begin{equation*} \| \omega^\varepsilon(\cdot, t) \|_{L^p} \leq \| h^\varepsilon \|_{L^p} \| q_0 \|_{\mathcal{M}} = \varepsilon^{-2(1 - 1/p)} \| h \|_{L^p} \| q_0 \|_{\mathcal{M}} \end{equation*} for any $1 \leq p < \infty$, which implies that $\{ \omega^\varepsilon \} \subset C([0,T]; L^1(\mathbb{R}^2))$ is uniformly bounded. As for the filtered velocity $\boldsymbol{u}^\varepsilon$, it follows from \begin{equation*} \boldsymbol{u}^\varepsilon = \boldsymbol{K} \ast \omega^\varepsilon = (\boldsymbol{K} \chi_{B_1}) \ast \omega^\varepsilon + (\boldsymbol{K}\chi_{\mathbb{R}^2 \setminus B_1}) \ast \omega^\varepsilon, \end{equation*} that \begin{equation} \| \boldsymbol{u}^\varepsilon (\cdot, t) \|_{L^\infty} \leq C (\| \omega^\varepsilon (\cdot, t) \|_{L^r} + \| \omega^\varepsilon (\cdot, t) \|_{L^1} ) \leq C \varepsilon^{-2(1 - 1/r)} \| q_0 \|_{\mathcal{M}} \label{est:u-eps-infty} \end{equation} for any $2 < r \leq \infty$. As we see below, the above estimates give well-posedness of the energy dissipation rate. We define energy for a filtered weak solution by replacing $\boldsymbol{u}$ with $\boldsymbol{u}^\varepsilon$ in \eqref{energy}. However, this energy is not finite in general since the filtered Biot-Savart law, $\boldsymbol{u}^\varepsilon = \boldsymbol{K}^\varepsilon \ast q^\varepsilon$, implies $\boldsymbol{u}^\varepsilon(\boldsymbol{x}) \sim |\boldsymbol{x}|^{-1}$ as $|\boldsymbol{x}| \rightarrow \infty$. We now see that a formal calculation divides the energy into two parts: a time-invariant term and a time-dependent term. In particular, we focus on the time-dependent term that is well-defined for weak solutions of \eqref{VFEE} with $q_0 \in \mathcal{M}(\mathbb{R}^2)$. We start by substituting the filtered Biot-Savart law into the energy: \begin{equation*} \frac{1}{2}\int_{\mathbb{R}^2} \left| \boldsymbol{u}^\varepsilon (\boldsymbol{x},t) \right|^2 d \boldsymbol{x} = \frac{1}{2} \int \hspace{-1.5mm} \intint \boldsymbol{K}^\varepsilon(\boldsymbol{x} - \boldsymbol{y}) \cdot \boldsymbol{K}^\varepsilon(\boldsymbol{x} - \boldsymbol{z}) q^\varepsilon(\boldsymbol{y},t) q^\varepsilon(\boldsymbol{z},t) d \boldsymbol{y} d \boldsymbol{z} d \boldsymbol{x}. 
\end{equation*} Since we have $\boldsymbol{K}^\varepsilon = \nabla^\perp G^\varepsilon$ and $\Delta G^\varepsilon = h^\varepsilon$, a formal calculation yields \begin{align*} \frac{1}{2} \int_{\mathbb{R}^2} \left| \boldsymbol{u}^\varepsilon (\boldsymbol{x},t) \right|^2 d \boldsymbol{x} &= - \frac{1}{2} \int \hspace{-1.5mm} \intint h^\varepsilon(\boldsymbol{x} - \boldsymbol{y}) \cdot G^\varepsilon(\boldsymbol{x} - \boldsymbol{z}) q^\varepsilon(\boldsymbol{y},t) q^\varepsilon(\boldsymbol{z},t) d \boldsymbol{x} d \boldsymbol{y} d \boldsymbol{z} \\ &= - \frac{1}{2} \int \hspace{-1.5mm} \int \left( h^\varepsilon \ast G^\varepsilon \right)(\boldsymbol{y} - \boldsymbol{z}) q^\varepsilon(\boldsymbol{y},t) q^\varepsilon(\boldsymbol{z},t) d \boldsymbol{y} d \boldsymbol{z}. \end{align*} We introduce the following quantity. \begin{equation*} \mathscr{H}^\varepsilon \equiv - \frac{1}{2} \int \hspace{-1.5mm} \int G^\varepsilon(\boldsymbol{x} - \boldsymbol{y}) q^\varepsilon(\boldsymbol{x},t) q^\varepsilon(\boldsymbol{y},t) d \boldsymbol{x} d \boldsymbol{y}, \end{equation*} which is called the {\it pseudo-energy}. Although $\mathscr{H}^\varepsilon$ is not finite in general, considering specific vorticity, for example, initial vorticity of compact support, we find that $\mathscr{H}^\varepsilon$ is a conserved quantity. Indeed, for the point-vortex initial vorticity, $\mathscr{H}^\varepsilon$ gives the Hamiltonian of the {\it filtered point-vortex system}, see \cite{G.3}. On the basis of the above calculation, we divide the energy into two parts as follows. \begin{equation*} \frac{1}{2} \int_{\mathbb{R}^2} \left| \boldsymbol{u}^\varepsilon (\boldsymbol{x},t) \right|^2 d \boldsymbol{x} = \mathscr{H}^\varepsilon + \mathscr{E}(t), \end{equation*} \begin{equation*} \mathscr{E}(t) \equiv - \frac{1}{2} \int \hspace{-1.5mm} \int H_G^\varepsilon (\boldsymbol{x} - \boldsymbol{y}) q^\varepsilon(\boldsymbol{x},t) q^\varepsilon(\boldsymbol{y},t) d \boldsymbol{x} d \boldsymbol{y}, \end{equation*} where \begin{equation} H_G^\varepsilon(\boldsymbol{x}) \equiv \left( h^\varepsilon \ast G^\varepsilon \right)(\boldsymbol{x}) - G^\varepsilon (\boldsymbol{x}) = \left( h^\varepsilon \ast \left(G^\varepsilon - G \right) \right)(\boldsymbol{x}). \label{def-H_G-eps} \end{equation} Refer to Appendix~\ref{appendix_A} for detailed properties of $H_G^\varepsilon$ and $\nabla H_G^\varepsilon$. As we see in Appendix~\ref{appendix_A}, $H_G^\varepsilon$ belongs to $C_0(\mathbb{R}^2)$ for any fixed $\varepsilon$, so that we find \begin{equation*} |\mathscr{E}(t)| \leq \| H_G^\varepsilon \|_{L^\infty} \| q^\varepsilon(\cdot, t) \|_{\mathcal{M}}^2 = C_\varepsilon \| q_0 \|_{\mathcal{M}}^2, \end{equation*} where $C_\varepsilon$ is the constant depending on $\varepsilon$. Thus, the time-dependent term $\mathscr{E}(t)$ is finite for any $q_0 \in \mathcal{M}(\mathbb{R}^2)$. 
Since we have \begin{equation*} \int \hspace{-1.5mm} \int H_G^\varepsilon (\boldsymbol{x} - \boldsymbol{y}) q^\varepsilon(\boldsymbol{x},t) q^\varepsilon(\boldsymbol{y},t) d \boldsymbol{y} d \boldsymbol{x} = \int \hspace{-1.5mm} \int H_G^\varepsilon (\boldsymbol{\eta}^\varepsilon(\boldsymbol{x}, t) - \boldsymbol{\eta}^\varepsilon(\boldsymbol{y}, t)) q_0(\boldsymbol{x}) q_0(\boldsymbol{y}) d \boldsymbol{x} d \boldsymbol{y}, \end{equation*} the time-derivative of $\mathscr{E}(t)$ is given by \begin{align*} \frac{\mbox{d}}{\mbox{d}t} \mathscr{E}(t) &= - \frac{1}{2} \int \hspace{-1.5mm} \int (\nabla H_G^\varepsilon) (\boldsymbol{\eta}^\varepsilon(\boldsymbol{x}, t) - \boldsymbol{\eta}^\varepsilon(\boldsymbol{y}, t)) \\ &\hspace{20mm} \cdot \left( \boldsymbol{u}^\varepsilon(\boldsymbol{\eta}^\varepsilon(\boldsymbol{x}, t),t) - \boldsymbol{u}^\varepsilon(\boldsymbol{\eta}^\varepsilon(\boldsymbol{y}, t),t) \right) q_0(\boldsymbol{x}) q_0(\boldsymbol{y}) d \boldsymbol{x} d \boldsymbol{y}\\ &= - \frac{1}{2} \int \hspace{-1.5mm} \int (\nabla H_G^\varepsilon) (\boldsymbol{x} - \boldsymbol{y}) \cdot \left( \boldsymbol{u}^\varepsilon(\boldsymbol{x}, t) - \boldsymbol{u}^\varepsilon(\boldsymbol{y}, t) \right) q^\varepsilon(\boldsymbol{x},t) q^\varepsilon(\boldsymbol{y},t) d \boldsymbol{x} d \boldsymbol{y}. \end{align*} Hence, we define the energy dissipation rate by \begin{equation*} \mathscr{D}_E^\varepsilon(t) \equiv - \frac{1}{2} \int \hspace{-1.5mm} \int (\nabla H_G^\varepsilon) (\boldsymbol{x} - \boldsymbol{y}) \cdot \left( \boldsymbol{u}^\varepsilon(\boldsymbol{x}, t) - \boldsymbol{u}^\varepsilon(\boldsymbol{y}, t) \right) q^\varepsilon(\boldsymbol{x},t) q^\varepsilon(\boldsymbol{y},t) d \boldsymbol{x} d \boldsymbol{y}. \end{equation*} It follows from $\nabla H_G^\varepsilon \in C_0(\mathbb{R}^2)$, see Appendix~\ref{appendix_A}, and \eqref{est:u-eps-infty} that \begin{equation*} |\mathscr{D}_E^\varepsilon(t)| \leq \| \nabla H_G^\varepsilon \|_{L^\infty} \| \boldsymbol{u}^\varepsilon(\cdot, t) \|_{L^\infty} \| q^\varepsilon (\cdot, t) \|_{\mathcal{M}}^2 \leq C_\varepsilon \| q_0 \|_{\mathcal{M}}^3, \end{equation*} and thus $\mathscr{D}_E^\varepsilon$ is well-defined for weak solutions of \eqref{VFEE} with $q_0 \in \mathcal{M}(\mathbb{R}^2)$. \subsection{Main theorems} \label{Main} As we see in Section~\ref{Subsec:FEE-dissipation}, the energy dissipation rate $\mathscr{D}_E^\varepsilon(t)$ is bounded for any weak solution of \eqref{VFEE} with $q_0 \in \mathcal{M}(\mathbb{R}^2)$. However, the bound on $\mathscr{D}_E^\varepsilon$ obtained above depends on the filter parameter $\varepsilon$, and it blows up in the $\varepsilon \rightarrow 0$ limit, so that $\{ \mathscr{D}_E^\varepsilon \}$ is not uniformly bounded in general. Our concern is the class of initial vorticities for which $\{ \mathscr{D}_E^\varepsilon \}$ is uniformly bounded. In this paper, we consider weak solutions of \eqref{VFEE} with $q_0 \in L^1(\mathbb{R}^2) \cap L^p(\mathbb{R}^2)$, and give a sufficient condition for $p$ that yields energy conservation: $\mathscr{D}_E^\varepsilon (t) \rightarrow 0$ as $\varepsilon \rightarrow 0$. In the following theorems, we assume that the filter function $h$ is sufficiently regular, so that \eqref{VFEE} has a unique global weak solution for $q_0 \in \mathcal{M}(\mathbb{R}^2)$; see \cite{G.1} for a sufficient condition for $h$. 
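Before stating the theorems, we mention a concrete example for illustration; the specific normalization below is our choice and is not needed in the proofs. A filter commonly associated with the vortex blob regularization is \begin{equation*} h(\boldsymbol{x}) = \frac{1}{\pi \left( 1 + |\boldsymbol{x}|^2 \right)^2}, \qquad G^\varepsilon(\boldsymbol{x}) = \frac{1}{4\pi} \log{\left( |\boldsymbol{x}|^2 + \varepsilon^2 \right)}, \end{equation*} for which a direct computation gives $\Delta G^\varepsilon = h^\varepsilon$ with $h^\varepsilon(\boldsymbol{x}) = \varepsilon^{-2} h(\boldsymbol{x}/\varepsilon)$ and $\int_{\mathbb{R}^2} h \, d\boldsymbol{x} = 1$; this $h$ is smooth, radial, and satisfies the assumptions on the filter imposed in Theorem~\ref{thm-main1} below.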
\begin{theorem} \label{thm-main1} {\it Suppose that $h \in C_0^1(\mathbb{R}^2 \setminus \{ 0 \})$ is a radial function satisfying \begin{equation*} w_1 h, \ \nabla h \in L^1(\mathbb{R}^2), \qquad w_\alpha h, \ w_3 h, \ w_1 \nabla h \in L^\infty(\mathbb{R}^2) \end{equation*} for some $\alpha \in [0,1)$. Let $(\boldsymbol{u}^\varepsilon, q^\varepsilon)$ be a weak solution of the 2D filtered-Euler equations with $q_0 \in L^1(\mathbb{R}^2) \cap L^p(\mathbb{R}^2)$, $ 3/2 < p \leq \infty$. Then, we have \begin{equation*} \lim_{\varepsilon \rightarrow 0} \| \mathscr{D}_E^\varepsilon \|_{L^\infty(0,T)} = 0. \end{equation*} Moreover, there exists a weak solution of the 2D Euler equations, \begin{equation*} \boldsymbol{u} \in L^\infty (0,T;L^{p^\ast}(\mathbb{R}^2) \cap W^{1,p}_{\mathrm{loc}}(\mathbb{R}^2)), \quad \omega = \operatorname{curl} \boldsymbol{u} \in L^\infty (0,T; L^1(\mathbb{R}^2) \cap L^p(\mathbb{R}^2)), \end{equation*} such that, taking subsequences as needed, we have \begin{equation*} q^\varepsilon \rightharpoonup \omega \quad \mathrm{in}\ L^p (\Omega_T),\qquad \boldsymbol{u}^\varepsilon \rightarrow \boldsymbol{u} \quad \mathrm{in}\ C([0,T];L^r_{\mathrm{loc}} (\mathbb{R}^2)) \end{equation*} for any $r \in [1, p^\ast)$ in the $\varepsilon \rightarrow 0$ limit, and there exists $P \in L^\infty(0,T;L^{p^\ast/2}(\mathbb{R}^2))$ such that the following local energy balance holds in the sense of distributions. \begin{equation*} \partial_t \left( \frac{|\boldsymbol{u}|^2}{2} \right) + \nabla \cdot \left(\boldsymbol{u} \left( \frac{|\boldsymbol{u}|^2}{2} + P \right) \right) = 0. \end{equation*} } \end{theorem} The conditions for $h$ in Theorem~\ref{thm-main1} imply $h \in L^p(\mathbb{R}^2)$ and $\nabla h \in L^q(\mathbb{R}^2)$ for any $p \in [1,\infty)$ and $q \in [1,2)$. The filter functions for the Euler-$\alpha$ model and the vortex blob model satisfy these conditions. The convergence to the Euler equations for $q_0 \in L^1(\mathbb{R}^2) \cap L^p(\mathbb{R}^2)$ is proven in the same way as \cite{G.2}, but this paper gives a simpler proof for it. As mentioned in the introduction, Cheskidov et al. \cite{Cheskidov(b)} have shown that a weak solution of the 2D Euler equations on $\mathbb{T}^2$ conserves the energy and satisfies the local energy balance provided that its vorticity belongs to $L^p(\mathbb{T}^2)$, $p \geq 3/2$. In considering the $\varepsilon \rightarrow 0$ limit in Theorem~\ref{thm-main1}, the condition $p > 3/2$ is essential for its proof. For the case of $p = 3/2$, however, the same result as Theorem~\ref{thm-main1} holds under an additional condition for the regularity of $\boldsymbol{u}^\varepsilon$: \begin{theorem} \label{thm-main2} {\it Let $(\boldsymbol{u}^\varepsilon, q^\varepsilon)$ be a weak solution of the 2D filtered-Euler equations with $q_0 \in L^1(\mathbb{R}^2) \cap L^{3/2}(\mathbb{R}^2)$ and $\boldsymbol{u}^\varepsilon$ satisfy \begin{equation} \| \boldsymbol{u}^\varepsilon(\cdot - \boldsymbol{y},t) - \boldsymbol{u}^\varepsilon(\cdot,t) \|_{L^3} \leq C(T) |\boldsymbol{y}|^\alpha, \quad (\boldsymbol{y},t) \in \Omega_T, \label{condi:thm2} \end{equation} for some $\alpha \in (1/3,1]$, where $C(T)$ is independent of $\varepsilon$. Then, we have the same result as Theorem~\ref{thm-main1} with $p = 3/2$. } \end{theorem} We remark that \eqref{condi:thm2} is related to Onsager's critical condition, that is, $1/3$-H\"{o}lder continuity. 
Although Onsager's conjecture concerns the 3D Euler equations, energy conservation holds for weak solutions of the Euler equations satisfying \eqref{condi:thm2} regardless of the dimension \cite{Cheskidov(a)}. As mentioned in \cite{Cheskidov(b)}, weak solutions of the 2D Euler equations with $\omega_0 \in L^{3/2}$ satisfy \eqref{condi:thm2}. Our main theorems are consistent with these preceding results, though we require the family $\{ \boldsymbol{u}^\varepsilon \}$ to satisfy \eqref{condi:thm2} uniformly: the existence of a uniform constant $C(T)$ with respect to $\varepsilon > 0$. \section{Proof of main theorems} It is sufficient to show Theorem~\ref{thm-main1} for $3/2 < p < 2$. In what follows, let $p \in (3/2,2)$ be a fixed constant. \subsection{Convergence to the Euler equations} \label{proof:conv-Euler} Consider a weak solution of \eqref{VFEE} with $q_0 \in L^1(\mathbb{R}^2) \cap L^p(\mathbb{R}^2)$. For any $q \in [1, p]$, we have $\| q^\varepsilon(\cdot,t) \|_{L^q} = \| q_0 \|_{L^q}$ and \begin{equation*} \| \omega^\varepsilon(\cdot,t) \|_{L^q} \leq \| h \|_{L^1} \| q_0 \|_{L^q}, \end{equation*} so that $\{ q^\varepsilon \}$ and $\{ \omega^\varepsilon \}$ are uniformly bounded in $C([0,T];L^q(\mathbb{R}^2))$. Thus, by taking subsequences as needed, there exists $\omega \in L^p(\Omega_T)$ such that $q^\varepsilon$, $\omega^\varepsilon \rightharpoonup \omega$ in $L^{p}(\Omega_T)$, since we easily find $(q^\varepsilon -\omega^\varepsilon) \rightharpoonup 0$ in $L^{p}(\Omega_T)$. As for the filtered velocity $\boldsymbol{u}^\varepsilon$, it follows from the Hardy-Littlewood-Sobolev inequality that \begin{equation} \| \boldsymbol{u}^\varepsilon (\cdot, t) \|_{L^{p^\ast}} \leq C \| \omega^\varepsilon(\cdot,t) \|_{L^p} \leq C \| q_0 \|_{L^p}. \label{ineq:HLS} \end{equation} More generally, we have \begin{equation} \| \boldsymbol{u}^\varepsilon (\cdot, t) \|_{L^r} \leq C \| \omega^\varepsilon(\cdot,t) \|_{L^s} \label{ineq:GHLS} \end{equation} for any $r \in (2, p^\ast]$ and $s \in (1,p]$ satisfying $1/r = 1/s - 1/2$. It follows from the Calder\'{o}n-Zygmund inequality that \begin{equation} \| \nabla \boldsymbol{u}^\varepsilon (\cdot,t)\|_{L^q} \leq C \| \omega^\varepsilon(\cdot,t) \|_{L^q} \label{ineq:CZ} \end{equation} for any $q \in (1,\infty)$, which yields $\| \nabla \boldsymbol{u}^\varepsilon (\cdot,t)\|_{L^p} \leq C \| q_0 \|_{L^p}$. Since we have \begin{align*} \| \boldsymbol{u}^\varepsilon (\cdot,t) \|_{L^p(B_R)} &\leq \| \boldsymbol{K} \chi_{B_1} \ast \omega^\varepsilon (\cdot,t) \|_{L^p} + \| \boldsymbol{K} \chi_{\mathbb{R}^2 \setminus B_1} \ast \omega^\varepsilon (\cdot,t) \|_{L^p(B_R)} \\ &\leq \| \boldsymbol{K} \chi_{B_1} \|_{L^1} \| \omega^\varepsilon (\cdot,t) \|_{L^p} + |B_R|^{1/p} \| \boldsymbol{K} \chi_{\mathbb{R}^2 \setminus B_1} \|_{L^{p'}} \| \omega^\varepsilon (\cdot,t) \|_{L^p} \\ &\leq C_R \| \omega^\varepsilon (\cdot,t) \|_{L^p} \leq C_R \| q_0\|_{L^p} \end{align*} for any $R > 0$, $\{ \boldsymbol{u}^\varepsilon \}$ is uniformly bounded in $C([0,T]; W^{1,p}_{\mathrm{loc}}(\mathbb{R}^2))$. 
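For later reference, we record the ranges of the exponents associated with a fixed $p \in (3/2, 2)$, which are used repeatedly below: \begin{equation*} p^\ast = \frac{2p}{2 - p} \in (6, \infty), \quad p' = \frac{p}{p - 1} \in (2, 3), \quad (p^\ast)' = \frac{2p}{3p - 2} \in \left( 1, \frac{6}{5} \right), \quad \left( \frac{p^\ast}{2} \right)' = \frac{p}{2p - 2} \in \left( 1, \frac{3}{2} \right). \end{equation*}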
Note that $\omega^\varepsilon$ satisfies \begin{equation*} \intint_{\Omega_T} \left( (\partial_t \psi) \omega^\varepsilon + \nabla \psi \cdot \left( h^\varepsilon \ast (\boldsymbol{u}^\varepsilon q^\varepsilon) \right) \right) d\boldsymbol{x} dt = 0 \end{equation*} for any $\psi\in C_c^\infty(\Omega_T)$ and it follows from $\omega^\varepsilon = \nabla^\perp \cdot \boldsymbol{u}^\varepsilon$ that \begin{equation} \intint_{\Omega_T} \left( (\partial_t \nabla^\perp \psi) \cdot \boldsymbol{u}^\varepsilon - \nabla^\perp \psi(\boldsymbol{x},t) \cdot ( h^\varepsilon \ast ((\boldsymbol{u}^\varepsilon)^\perp q^\varepsilon) ) \right) d\boldsymbol{x} dt = 0. \label{eq:weak-velocity} \end{equation} Then, we obtain \begin{align*} \left| \intint_{\Omega_T} \nabla^\perp \psi \cdot \partial_t \boldsymbol{u}^\varepsilon d\boldsymbol{x} dt \right| &\leq \int_0^T \| \nabla^\perp \psi(\cdot,t) \|_{L^\infty} \| \boldsymbol{u}^\varepsilon (\cdot,t) \|_{L^{p^\ast}} \| q^\varepsilon(\cdot,t) \|_{L^{(p^\ast)'}} dt \\ & \leq C \| \nabla^\perp \psi \|_{L^1(0,T; L^\infty(\mathbb{R}^2))} \| q_0 \|_{L^p} \| q_0 \|_{L^{(p^\ast)'}} \\ & \leq C(q_0) \| \nabla^\perp \psi \|_{L^1(0,T; H^2(\mathbb{R}^2))}, \end{align*} where $C(q_0)$ is the constant depending on $\| q_0 \|_{L^1}$ and $\| q_0 \|_{L^p}$. Considering $\nabla \cdot \boldsymbol{u}^\varepsilon = 0$, we find that $\{ \partial_t \boldsymbol{u}^\varepsilon \}$ is uniformly bounded in $L^\infty(0,T; H^{-2}_{\mathrm{loc}}(\mathbb{R}^2))$. Note that the embedding $W^{1,p}(B_R)\hookrightarrow L^r(B_R)$ is compact for any $r \in [1, p^\ast)$. There exists $\boldsymbol{u} \in C([0,T]; L^r(B_R))$ such that, by taking subsequences as needed, $\boldsymbol{u}^\varepsilon \rightarrow \boldsymbol{u}$ in $C([0,T]; L^r(B_R))$ for any $R >0$. In addition, the uniform estimates for $\{ \boldsymbol{u}^\varepsilon \}$ yield $\boldsymbol{u} \in L^\infty (0,T; L^{p^\ast}(\mathbb{R}^2)\cap W^{1,p}_{\mathrm{loc}}(\mathbb{R}^2))$ and the Biot-Savart law $\boldsymbol{u} = \boldsymbol{K} \ast \omega$ holds. Recall that $(\boldsymbol{u}^\varepsilon, q^\varepsilon)$ satisfies \begin{equation*} \intint_{\Omega_T} \left( \partial_t \psi + \boldsymbol{u}^\varepsilon \cdot \nabla \psi \right) q^\varepsilon d\boldsymbol{x} dt = 0 \end{equation*} for any $\psi\in C_c^\infty(\Omega_T)$. The weak convergence of $q^\varepsilon$ yields \begin{equation*} \intint_{\Omega_T} (\partial_t \psi) q^\varepsilon d\boldsymbol{x} dt \ \longrightarrow\ \intint_{\Omega_T} (\partial_t \psi) \omega d\boldsymbol{x} dt. \end{equation*} The nonlinear term is divided into two parts as follows. \begin{equation*} \intint_{\Omega_T} \boldsymbol{u}^\varepsilon \cdot (\nabla \psi) q^\varepsilon d\boldsymbol{x} dt = \intint_{\Omega_T} \boldsymbol{u} \cdot (\nabla \psi) q^\varepsilon d\boldsymbol{x} dt + \intint_{\Omega_T} \left( \boldsymbol{u}^\varepsilon - \boldsymbol{u} \right) \cdot (\nabla \psi) q^\varepsilon d\boldsymbol{x} dt. \end{equation*} It follows from $\boldsymbol{u} \cdot \nabla \psi \in L^{p'}(\Omega_T)$ that \begin{equation*} \intint_{\Omega_T} \boldsymbol{u} \cdot (\nabla \psi) q^\varepsilon d\boldsymbol{x} dt \ \longrightarrow\ \intint_{\Omega_T} \boldsymbol{u} \cdot (\nabla \psi) \omega d\boldsymbol{x} dt. 
\end{equation*} Since there exist $r \in (3, p^\ast)$ and $s \in[1,\infty]$ such that $1 = 1/r + 1/p + 1/s$, we have \begin{equation*} \left| \intint_{\Omega_T} \left( \boldsymbol{u}^\varepsilon - \boldsymbol{u} \right) \cdot (\nabla \psi) q^\varepsilon d\boldsymbol{x} dt \right| \leq \| \boldsymbol{u}^\varepsilon - \boldsymbol{u} \|_{L^\infty(0,T; L^r(B_R))} \| \nabla \psi \|_{L^1(0,T; L^s(\mathbb{R}^2))} \| q_0 \|_{L^p} \end{equation*} for any $R > 0$ satisfying $\operatorname{supp} \psi \subset B_R \times (0,T)$, and the right-hand side converges to zero as $\varepsilon \rightarrow 0$. Hence, we obtain \begin{equation*} \intint_{\Omega_T} \left( \partial_t \psi + \boldsymbol{u} \cdot \nabla \psi \right) \omega d\boldsymbol{x} dt = 0, \end{equation*} that is, $(\boldsymbol{u}, \omega)$ is a weak solution of the 2D Euler equations. \begin{remark} The above proof for the convergence to the Euler equations is valid for $p$ satisfying $(p^\ast)' \leq p$, that is, $ p \geq 4/3$. \end{remark} \subsection{Convergence of the energy dissipation rate} From the definition of $\mathscr{D}_E^\varepsilon$, we find \begin{align*} | \mathscr{D}_E^\varepsilon(t) | &\leq \frac{1}{2} \int \hspace{-1.5mm} \int |(\nabla H_G^\varepsilon) (\boldsymbol{y})| | \boldsymbol{u}^\varepsilon(\boldsymbol{x}, t) - \boldsymbol{u}^\varepsilon(\boldsymbol{x} - \boldsymbol{y}, t) | | q^\varepsilon(\boldsymbol{x},t) | |q^\varepsilon(\boldsymbol{x} - \boldsymbol{y},t)| d \boldsymbol{x} d \boldsymbol{y} \\ & \leq \frac{1}{2} \int |(\nabla H_G^\varepsilon) (\boldsymbol{y})| \| \boldsymbol{u}^\varepsilon(\cdot, t) - \boldsymbol{u}^\varepsilon(\cdot - \boldsymbol{y}, t) \|_{L^{p'}} \| q^\varepsilon(\cdot,t) q^\varepsilon(\cdot - \boldsymbol{y},t) \|_{L^p} d \boldsymbol{y}. \end{align*} Note that \begin{equation*} \| \boldsymbol{u}^\varepsilon(\cdot,t) - \boldsymbol{u}^\varepsilon(\cdot - \boldsymbol{y},t) \|_{L^{p'}} \leq C |\boldsymbol{y}| \|\nabla \boldsymbol{u}^\varepsilon (\cdot,t)\|_{L^{p'}}, \end{equation*} and it follows from \eqref{ineq:CZ} that \begin{equation*} \| \nabla \boldsymbol{u}^\varepsilon (\cdot,t) \|_{L^{p'}} \leq C \| \omega^\varepsilon (\cdot,t) \|_{L^{p'}} \leq C \varepsilon^{-2 + 2/s }\| h \|_{L^s} \| q_0 \|_{L^p}, \end{equation*} where $s \in (1, 3/2)$ satisfies $1 + 1/{p'} = 1/s + 1/p$, that is, $1/s = 2 - 2/p$. Thus, we find \begin{align*} | \mathscr{D}_E^\varepsilon(t) | & \leq C \varepsilon^{2 - 4/p }\| h \|_{L^s} \| q_0 \|_{L^p} \int |\boldsymbol{y}| |\nabla H_G^\varepsilon (\boldsymbol{y})| \| q^\varepsilon(\cdot,t) q^\varepsilon(\cdot - \boldsymbol{y},t) \|_{L^p} d \boldsymbol{y} \\ &\leq C \varepsilon^{2 - 4/p }\| h \|_{L^s} \| q_0 \|_{L^p}^3 \| w_1 \nabla H_G^\varepsilon \|_{L^{p'}}. \end{align*} According to \eqref{est:dH_gr-eps}, we have \begin{equation*} |(\nabla H_G^\varepsilon) (\boldsymbol{x})| \leq \frac{C}{|\boldsymbol{x}|} \frac{\varepsilon}{|\boldsymbol{x}| + \varepsilon}, \label{est:HG} \end{equation*} where $C$ is the constant independent of $\varepsilon$, and it follows that \begin{align*} \| w_1 \nabla H_G^\varepsilon \|_{L^{p'}} &\leq C \left( \int_{\mathbb{R}^2} \frac{1}{(|\boldsymbol{x}|/\varepsilon + 1)^{p'}} d\boldsymbol{x} \right)^{1/{p'}} = C \varepsilon^{2/{p'}} \left( \int_0^\infty \frac{r}{(r + 1)^{p'}} dr \right)^{1/{p'}}. \end{align*} Owing to $2< p' <3$, the right-hand side is finite. 
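Collecting the powers of $\varepsilon$ from the last two estimates and using $2/{p'} = 2 - 2/p$, the total rate is \begin{equation*} \left( 2 - \frac{4}{p} \right) + \frac{2}{p'} = 4 - \frac{6}{p} = 6 \left( \frac{2}{3} - \frac{1}{p} \right) > 0 \quad \mathrm{for}\ p > \frac{3}{2}. \end{equation*}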
Hence, we obtain \begin{equation*} | \mathscr{D}_E^\varepsilon(t) | \leq C \varepsilon^{6(2/3 - 1/p) } \| q_0 \|_{L^p}^3, \end{equation*} and $\| \mathscr{D}_E^\varepsilon \|_{L^\infty(0,T)}$ converges to zero in the $\varepsilon \rightarrow 0$ limit. \begin{remark} \label{rem:conv-Euler} It is noteworthy that \begin{equation*} | \mathscr{D}_E^\varepsilon(t)| \leq C \| \boldsymbol{u}^\varepsilon(\cdot, t)\|_{L^{p^\ast}} \| q^\varepsilon(\cdot,t)\|_{L^p} \left\| \frac{1}{|\cdot|} \ast | q^\varepsilon(\cdot,t) | \right\|_{L^{p^\ast}} \end{equation*} holds for $p$ satisfying $1 = 2 /p^{\ast} + 1/p$, that is, $p =3/2$. Considering the Hardy-Littlewood-Sobolev inequality, we find \begin{equation*} | \mathscr{D}_E^\varepsilon(t)| \leq C \| \boldsymbol{u}^\varepsilon(\cdot, t)\|_{L^6} \| q^\varepsilon(\cdot,t)\|_{L^{3/2}}^2 \leq C \| q_0 \|_{L^{3/2}}^3, \end{equation*} so that $\|\mathscr{D}_E^\varepsilon \|_{L^\infty(0,T)}$ is uniformly bounded with respect to $\varepsilon$. \end{remark} \subsection{Local energy balance} \label{proof:balance} We show that the limit function $\boldsymbol{u}$ satisfies the local energy balance. It follows from \eqref{eq:weak-velocity} and $\nabla \cdot \boldsymbol{u}^\varepsilon = 0$ that \begin{align*} &\intint_{\Omega_T} \left( \partial_t \nabla^\perp \psi \cdot \boldsymbol{u}^\varepsilon + \nabla \otimes \nabla^\perp \psi : \boldsymbol{u}^\varepsilon \otimes \boldsymbol{u}^\varepsilon \right) d\boldsymbol{x} dt \\ &= \intint_{\Omega_T} \nabla \psi \cdot \left( h^\varepsilon \ast (\boldsymbol{u}^\varepsilon q^\varepsilon) - \boldsymbol{u}^\varepsilon \omega^\varepsilon \right) d\boldsymbol{x} dt \equiv I \end{align*} for any $\psi\in C_c^\infty(\Omega_T)$. Since $I$ can be rewritten as \begin{equation*} I = \intint_{\Omega_T} \nabla^\perp \psi \cdot \left( h^\varepsilon \ast ((\boldsymbol{u}^\varepsilon)^\perp q^\varepsilon) - (\boldsymbol{u}^\varepsilon)^\perp \omega^\varepsilon \right) d\boldsymbol{x} dt, \end{equation*} there exists a distribution $P^\varepsilon$ such that $\boldsymbol{u}^\varepsilon$ satisfies \begin{equation} \partial_t \boldsymbol{u}^\varepsilon + (\boldsymbol{u}^\varepsilon \cdot \nabla ) \boldsymbol{u}^\varepsilon = - \nabla P^\varepsilon - \left( h^\varepsilon \ast ((\boldsymbol{u}^\varepsilon)^\perp q^\varepsilon) - (\boldsymbol{u}^\varepsilon)^\perp \omega^\varepsilon \right), \label{eq:u-eps} \end{equation} in the sense of distributions. We first examine the properties of $P^\varepsilon$. For any $r \in (2, p^\ast)$, it follows from \eqref{ineq:GHLS} and \eqref{ineq:CZ} that \begin{equation} \| \boldsymbol{u}^\varepsilon (\cdot, t) \|_{L^r} \leq C \| \omega^\varepsilon (\cdot, t) \|_{L^{s_1}} \leq C \| q_0 \|_{L^{s_1}} \label{est:local-1} \end{equation} for $s_1 \in (1, p)$ satisfying $1/r = 1/{s_1} - 1/2$, and \begin{align} \| \nabla \boldsymbol{u}^\varepsilon(\cdot, t) \|_{L^r} &\leq C \| \omega^\varepsilon(\cdot, t) \|_{L^r} \leq C \varepsilon^{-2 + 2/{s_2}} \| h \|_{L^{s_2}} \| q_0 \|_{L^p}, \label{est:local-2} \\ \| \nabla^2 \boldsymbol{u}^\varepsilon(\cdot, t) \|_{L^r} &\leq C \| \nabla \omega^\varepsilon(\cdot, t) \|_{L^r} \leq C \varepsilon^{-3 + 2/{s_2}} \| \nabla h \|_{L^{s_2}} \| q_0 \|_{L^p} \label{est:local-3} \end{align} for $s_2 \in (1,2)$ satisfying $1 + 1/r = 1/{s_2} + 1/p$, respectively. 
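For clarity, the Sobolev embedding invoked in the next step is Morrey's embedding in the plane, \begin{equation*} W^{1,r}(\mathbb{R}^2) \hookrightarrow C^{0, 1 - 2/r}(\mathbb{R}^2) \quad \mathrm{for}\ r > 2, \end{equation*} applied to $\boldsymbol{u}^\varepsilon(\cdot, t)$ and to $\nabla \boldsymbol{u}^\varepsilon(\cdot, t)$.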
Thus, for any fixed $\varepsilon$, we have $\boldsymbol{u}^\varepsilon \in C([0,T]; W^{2,r} (\mathbb{R}^2))$, so that the Sobolev embedding gives $\boldsymbol{u}^\varepsilon \in C([0,T]; C^1 (\mathbb{R}^2))$ and $\omega^\varepsilon \in C(\overline{\Omega_T})$. Taking the divergence of \eqref{eq:u-eps}, we have \begin{equation*} \Delta P^\varepsilon = \nabla^\perp \cdot \left[ h^\varepsilon \ast (\boldsymbol{u}^\varepsilon q^\varepsilon) - \boldsymbol{u}^\varepsilon \omega^\varepsilon - (\boldsymbol{u}^\varepsilon \cdot \nabla) (\boldsymbol{u}^\varepsilon)^\perp \right] \equiv \nabla^\perp \cdot \boldsymbol{F}^\varepsilon, \end{equation*} which gives $P^\varepsilon(\boldsymbol{x},t) = \int \boldsymbol{K}(\boldsymbol{x} - \boldsymbol{y}) \cdot \boldsymbol{F}^\varepsilon(\boldsymbol{y},t) d\boldsymbol{y}$. Similarly to \eqref{est:local-1}, \eqref{est:local-2} and \eqref{est:local-3}, we have \begin{align*} \| P^\varepsilon (\cdot, t) \|_{L^r} &\leq C \| \boldsymbol{F}^\varepsilon (\cdot, t) \|_{L^{s_1}}, \\ \| \nabla P^\varepsilon (\cdot, t) \|_{L^r} &\leq C \| \boldsymbol{F}^\varepsilon (\cdot, t) \|_{L^r}, \\ \| \nabla^2 P^\varepsilon (\cdot, t) \|_{L^r} &\leq C \| \nabla \boldsymbol{F}^\varepsilon (\cdot, t) \|_{L^r}, \end{align*} and it follows that \begin{align*} \| \boldsymbol{F}^\varepsilon (\cdot, t) \|_{L^1} &\leq \| \boldsymbol{u}^\varepsilon(\cdot, t) \|_{L^{p'}} ( \| q^\varepsilon (\cdot, t)\|_{L^p} + \| \omega^\varepsilon(\cdot, t) \|_{L^p} + \| \nabla \boldsymbol{u}^\varepsilon(\cdot, t) \|_{L^p} ), \\ \| \boldsymbol{F}^\varepsilon (\cdot, t)\|_{L^r} &\leq \| \boldsymbol{u}^\varepsilon(\cdot, t) \|_{L^\infty} ( \| h^\varepsilon \|_{L^{s_2}} \| q^\varepsilon (\cdot, t)\|_{L^p} + \| \omega^\varepsilon(\cdot, t) \|_{L^r} + \| \nabla \boldsymbol{u}^\varepsilon (\cdot, t)\|_{L^r} ) , \\ \|\nabla \boldsymbol{F}^\varepsilon(\cdot, t) \|_{L^r} &\leq \|\nabla \boldsymbol{u}^\varepsilon (\cdot, t) \|_{L^{2r}}^2 + \|\boldsymbol{u}^\varepsilon (\cdot, t)\|_{L^\infty} \left( \| \nabla^2 \boldsymbol{u}^\varepsilon(\cdot, t) \|_{L^r} + \|\nabla h^\varepsilon \|_{L^{s_2}} \| q^\varepsilon(\cdot, t) \|_{L^p} \right). \end{align*} Note that \eqref{est:u-eps-infty}, \eqref{est:local-1} and \eqref{est:local-2} imply that $\| \boldsymbol{u}^\varepsilon (\cdot, t)\|_{L^\infty} \leq C_\varepsilon$, $\| \boldsymbol{u}^\varepsilon (\cdot, t)\|_{L^{p'}} \leq C_\varepsilon$ and \begin{equation*} \| \nabla \boldsymbol{u}^\varepsilon(\cdot, t) \|_{L^{2r}} \leq C \| \omega^\varepsilon(\cdot, t) \|_{L^{2r}} \leq C \varepsilon^{-2 + 2/{s_3}} \| h \|_{L^{s_3}} \| q_0 \|_{L^p} \end{equation*} for $s_3 \in (1,\infty)$ satisfying $1 + 1/{2r} = 1/{s_3} + 1/p$, respectively. Thus, we find \begin{equation*} \| \boldsymbol{F}^\varepsilon (\cdot, t) \|_{L^{s_1}} \leq C_\varepsilon, \quad \| \boldsymbol{F}^\varepsilon (\cdot, t) \|_{L^r} \leq C_\varepsilon, \quad \|\nabla \boldsymbol{F}^\varepsilon(\cdot, t) \|_{L^r} \leq C_\varepsilon, \end{equation*} which yields $P^\varepsilon \in C([0,T]; W^{2,r}(\mathbb{R}^2))$ for any $r \in (2, p^\ast)$, so that $P^\varepsilon \in C([0,T]; C^1(\mathbb{R}^2))$ holds. Next, we see the convergence of $P^\varepsilon$. Recall that $\boldsymbol{u}^\varepsilon$ converges to $\boldsymbol{u} \in L^\infty(0,T; L^{p^\ast}(\mathbb{R}^2))$ strongly in $C([0,T]; L^r_{\mathrm{loc}}(\mathbb{R}^2))$ for any $r \in [1, p^\ast)$. 
Fix a constant $r \in [2, p^\ast)$ and define \begin{equation*} P(\boldsymbol{x},t) \equiv - \nabla \cdot \int_{\mathbb{R}^2} \nabla G(\boldsymbol{x} - \boldsymbol{y}) \cdot (\boldsymbol{u} \otimes \boldsymbol{u})(\boldsymbol{y},t) d\boldsymbol{y}. \end{equation*} Then, $P \in L^\infty(0,T; L^{p^\ast/2}(\mathbb{R}^2))$ holds since the Calder\'{o}n-Zygmund inequality and \eqref{ineq:HLS} yield \begin{equation*} \| P(\cdot,t) \|_{L^{p^\ast/2}} \leq C \| (\boldsymbol{u} \otimes \boldsymbol{u})(\cdot,t) \|_{L^{p^\ast/2}} \leq C \| \boldsymbol{u} (\cdot,t) \|_{L^{p^\ast}}^2 \leq C \| q_0 \|_{L^p}^2. \end{equation*} Note that \begin{align*} P^\varepsilon(\boldsymbol{x},t) - P(\boldsymbol{x},t) &= - \nabla \cdot \int_{\mathbb{R}^2} \nabla G(\boldsymbol{x} - \boldsymbol{y}) \cdot \left[ (\boldsymbol{u}^\varepsilon \otimes \boldsymbol{u}^\varepsilon)(\boldsymbol{y},t) - (\boldsymbol{u} \otimes \boldsymbol{u})(\boldsymbol{y},t) \right] d\boldsymbol{y} \\ & \quad + \int_{\mathbb{R}^2} \boldsymbol{K}(\boldsymbol{x} - \boldsymbol{y}) \cdot \left[ (h^\varepsilon \ast (\boldsymbol{u}^\varepsilon q^\varepsilon))(\boldsymbol{y},t) - (\boldsymbol{u}^\varepsilon \omega^\varepsilon)(\boldsymbol{y},t) \right] d\boldsymbol{y} \\ & \equiv I_1^\varepsilon(\boldsymbol{x},t) + I_2^\varepsilon(\boldsymbol{x},t). \end{align*} For any fixed $R > 0$, it follows that \begin{align*} \| I_1^\varepsilon(\cdot,t) \|_{L^{r/2}(B_R)} &\leq \left\| \nabla \hspace{-0.2mm} \cdot \hspace{-0.3mm} \int \nabla G(\cdot - \boldsymbol{y}) \cdot \left[ (\boldsymbol{u}^\varepsilon \otimes \boldsymbol{u}^\varepsilon) - (\boldsymbol{u} \otimes \boldsymbol{u}) \right](\boldsymbol{y},t) \chi_{B_{R'}}(\boldsymbol{y}) d\boldsymbol{y} \right\|_{L^{r/2}} \\ & + \left\| \int (\nabla^2 G)(\cdot - \boldsymbol{y}) : \left[ (\boldsymbol{u}^\varepsilon \otimes \boldsymbol{u}^\varepsilon) - (\boldsymbol{u} \otimes \boldsymbol{u}) \right](\boldsymbol{y},t) \chi_{\mathbb{R}^2 \setminus B_{R'}}(\boldsymbol{y}) d\boldsymbol{y} \right\|_{L^{r/2}(B_R)} \end{align*} for any $R' > 2 R$. Thus, we have \begin{align*} \| I_1^\varepsilon(\cdot,t) \|_{L^{r/2}(B_R)} &\leq C \| ((\boldsymbol{u}^\varepsilon \otimes \boldsymbol{u}^\varepsilon) - (\boldsymbol{u} \otimes \boldsymbol{u}) )(\cdot,t) \|_{L^{r/2}(B_{R'})} \\ &\quad + |B_R|^{2/r} \| (\nabla^2 G) \chi_{\mathbb{R}^2 \setminus B_{R'/2}} \|_{L^{(p^\ast/2)'}} \| ( (\boldsymbol{u}^\varepsilon \otimes \boldsymbol{u}^\varepsilon) - (\boldsymbol{u} \otimes \boldsymbol{u}) )(\cdot,t) \|_{L^{p^\ast/2}} \\ &\leq C (\| \boldsymbol{u}^\varepsilon (\cdot,t) \|_{L^r} + \| \boldsymbol{u} (\cdot,t) \|_{L^r}) \| (\boldsymbol{u}^\varepsilon - \boldsymbol{u})(\cdot,t) \|_{L^r(B_{R'})} \\ &\quad + C |B_R|^{2/r} (R')^{2(1 - 2/p)} (\| \boldsymbol{u}^\varepsilon (\cdot,t) \|_{L^{p^\ast}}^2 + \| \boldsymbol{u} (\cdot,t) \|_{L^{p^\ast}}^2) \\ & \leq C(q_0, R) \left( \| (\boldsymbol{u}^\varepsilon - \boldsymbol{u})(\cdot,t) \|_{L^r(B_{R'})} + (R')^{2(1 - 2/p)} \right), \end{align*} where $C(q_0,R)$ is the constant depending on $\| q_0\|_{L^1}$, $\| q_0\|_{L^p}$ and $R$. To estimate $I_2^\varepsilon$, we introduce the following lemma, whose proof is similar to Lemma II.1 in \cite{Diperna(a)}. \begin{lemma} \label{lem1} Let $\varphi$ satisfy $w_1 \varphi \in L^1(\mathbb{R}^2)$. 
For any $\boldsymbol{u} \in W^{1,\beta}(\mathbb{R}^2)$ and $q \in L^\gamma(\mathbb{R}^2)$, we have \begin{equation*} \| \varphi \ast (\boldsymbol{u} q) - \boldsymbol{u} \cdot (\varphi \ast q) \|_{L^\alpha} \leq \| w_1 \varphi \|_{L^1} \| \nabla \boldsymbol{u} \|_{L^\beta} \| q \|_{L^\gamma}, \end{equation*} where $1 / \alpha = 1/ \beta + 1/ \gamma$. \end{lemma} \begin{proof} It follows that \begin{align*} &\varphi \ast (\boldsymbol{u} q) - \boldsymbol{u} \cdot (\varphi \ast q) \\ &= \int_{\mathbb{R}^2} \varphi (\boldsymbol{x} - \boldsymbol{y}) \cdot \boldsymbol{u}(\boldsymbol{y}) q(\boldsymbol{y}) d \boldsymbol{y} - \boldsymbol{u}(\boldsymbol{x}) \cdot \int_{\mathbb{R}^2} \varphi (\boldsymbol{x} - \boldsymbol{y}) q(\boldsymbol{y}) d \boldsymbol{y} \\ & = \int_{\mathbb{R}^2} \varphi (\boldsymbol{x} - \boldsymbol{y}) \cdot \left( \boldsymbol{u}(\boldsymbol{y}) - \boldsymbol{u}(\boldsymbol{x}) \right) q(\boldsymbol{y}) d \boldsymbol{y}\\ & = \int_{\mathbb{R}^2} \varphi (\boldsymbol{y}) \cdot \left( \boldsymbol{u}(\boldsymbol{x} - \boldsymbol{y}) - \boldsymbol{u}(\boldsymbol{x}) \right) q(\boldsymbol{x} - \boldsymbol{y}) d \boldsymbol{y}. \end{align*} Thus, we find \begin{align*} & \| \varphi \ast (\boldsymbol{u} q) - \boldsymbol{u} \cdot (\varphi \ast q) \|_{L^\alpha} \\ & \leq \int_{\mathbb{R}^2} | \varphi (\boldsymbol{y})| \left( \int_{\mathbb{R}^2} | \boldsymbol{u}(\boldsymbol{x} - \boldsymbol{y}) - \boldsymbol{u}(\boldsymbol{x}) |^\alpha |q(\boldsymbol{x} - \boldsymbol{y})|^\alpha d \boldsymbol{x} \right)^{1/\alpha} d \boldsymbol{y} \\ & \leq \| q \|_{L^\gamma} \int_{\mathbb{R}^2} |\varphi (\boldsymbol{y})| \left( \int_{\mathbb{R}^2} | \boldsymbol{u}(\boldsymbol{x} - \boldsymbol{y}) - \boldsymbol{u}(\boldsymbol{x}) |^\beta d \boldsymbol{x} \right)^{1/\beta} d \boldsymbol{y}. \end{align*} Since we have $\| \boldsymbol{u}(\cdot - \boldsymbol{y}) - \boldsymbol{u}(\cdot) \|_{L^\beta} \leq |\boldsymbol{y}| \| \nabla \boldsymbol{u} \|_{L^\beta}$, we obtain the desired estimate. \end{proof} It follows from Lemma~\ref{lem1} that \begin{equation*} \| (h^\varepsilon \ast (\boldsymbol{u}^\varepsilon q^\varepsilon))(\cdot,t) - (\boldsymbol{u}^\varepsilon \omega^\varepsilon)(\cdot,t) \|_{L^\alpha} \leq \varepsilon \| w_1 h \|_{L^1} \| \nabla \boldsymbol{u}^\varepsilon(\cdot,t) \|_{L^\beta} \| q_0 \|_{L^p}, \end{equation*} where $1/\alpha = 1/\beta + 1/p$. Considering \eqref{ineq:CZ}, we find $ \| \nabla \boldsymbol{u}^\varepsilon (\cdot,t)\|_{L^\beta} \leq C \varepsilon^{-2 + 2/s} \| h\|_{L^s} \| q_0 \|_{L^p}$ for $s \in (1, \infty)$ satisfying $1 + 1/\beta = 1/s + 1/p$, that is, $1/s = 1 + 1/\alpha - 2/p$. Thus, we obtain \begin{equation} \| (h^\varepsilon \ast (\boldsymbol{u}^\varepsilon q^\varepsilon))(\cdot,t) - (\boldsymbol{u}^\varepsilon \omega^\varepsilon)(\cdot,t) \|_{L^\alpha} \leq C \varepsilon^{1 + 2/\alpha - 4/p} \| q_0 \|_{L^p}^2, \label{est:remainder} \end{equation} hence, $1/\alpha > 2/p - 1/2$ is required for the right-hand side to converge to zero in the $\varepsilon \rightarrow 0$ limit. 
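Before estimating $I_2^\varepsilon$, we note the elementary bound used below: since $p > 3/2$, \begin{equation*} \frac{2}{p} - \frac{1}{2} < \frac{4}{3} - \frac{1}{2} = \frac{5}{6}. \end{equation*}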
We have \begin{align*} \| I_2^\varepsilon(\cdot,t) \|_{L^{r/2}(B_R)} &\leq \| \boldsymbol{K} \chi_{B_1} \|_{L^s} \| (h^\varepsilon \ast (\boldsymbol{u}^\varepsilon q^\varepsilon))(\cdot,t) - (\boldsymbol{u}^\varepsilon \omega^\varepsilon)(\cdot,t) \|_{L^{r_1}} \\ & \quad + C_R \| \boldsymbol{K} \chi_{\mathbb{R}^2 \setminus B_1} \|_{L^{r'_2}} \| (h^\varepsilon \ast (\boldsymbol{u}^\varepsilon q^\varepsilon))(\cdot,t) - (\boldsymbol{u}^\varepsilon \omega^\varepsilon)(\cdot,t) \|_{L^{r_2}} \end{align*} for $r_1$, $s \in [1,\infty)$ satisfying $1 + 2/r = 1/s + 1/{r_1}$ and $r_2 \in [1,2)$. Owing to $2/p - 1/2 < 5/6$, we have $1/{r_2} > 2/p - 1/2$ for any $r_2 \in [1,6/5)$ and thus the second term converges to zero as $\varepsilon \rightarrow 0$. As for the first term, $1 \leq s < 2$ and $1/{r_1} > 2/p - 1/2$ are required to show the convergence. Set $1/s = 1/2 + \delta_1$ for $\delta_1 \in (0,1/2]$. Since there exists $\delta_2 > 0$ such that $1/r = 1/{p^\ast} + \delta_2$, we have \begin{equation*} \frac{1}{r_1} = 1 + \frac{2}{r} - \frac{1}{s} = \frac{2}{p} - \frac{1}{2} + 2 \delta_2 - \delta_1. \end{equation*} Taking sufficiently small $\delta_1$ satisfying $2 \delta_2 > \delta_1$, we find $1/{r_1} > 2/p - 1/2$. Hence, we conclude $\| I_2^\varepsilon \|_{L^\infty(0,T;L^{r/2}(B_R))} \rightarrow 0$ as $\varepsilon \rightarrow 0$. Summarizing the above estimates, we obtain \begin{align*} \| P^\varepsilon - P \|_{L^\infty(0,T; L^{r/2}(B_R))} &\leq C(q_0, R) \left( \| \boldsymbol{u}^\varepsilon - \boldsymbol{u} \|_{L^\infty(0,T; L^r(B_{R'}))} + (R')^{2(1 - 2/p)} + \varepsilon^\alpha \right) \end{align*} for some $\alpha > 0$ and sufficiently large $R' >0$. Taking the $\varepsilon \rightarrow 0$ limit and the $R' \rightarrow \infty$ limit in order, we find $P^\varepsilon \rightarrow P$ in $L^\infty(0,T; L^{r/2}_{\mathrm{loc}}(\mathbb{R}^2))$ for any $r \in [2, p^\ast)$. Finally, we show the local energy balance. Considering the regularities of $\boldsymbol{u}^\varepsilon$ and $P^\varepsilon$, we find from \eqref{eq:u-eps} that $\partial_t \boldsymbol{u}^\varepsilon$ is continuous in $\overline{\Omega_T}$. Multiplying the equation \eqref{eq:u-eps} by $\boldsymbol{u}^\varepsilon$, we have \begin{equation*} \partial_t \left( \frac{|\boldsymbol{u}^\varepsilon|^2}{2} \right) + \nabla \cdot \left[ \boldsymbol{u}^\varepsilon \left( \frac{|\boldsymbol{u}^\varepsilon|^2}{2} + P^\varepsilon \right) \right] = - \boldsymbol{u}^\varepsilon \cdot \left( h^\varepsilon \ast ((\boldsymbol{u}^\varepsilon)^\perp q^\varepsilon) - (\boldsymbol{u}^\varepsilon)^\perp \omega^\varepsilon \right). \end{equation*} Recall $\boldsymbol{u}^\varepsilon \rightarrow \boldsymbol{u}$ in $L^\infty(0,T;L^{r_1}_{\mathrm{loc}}(\mathbb{R}^2))$ and $P^\varepsilon \rightarrow P$ in $L^\infty(0,T;L^{r_2}_{\mathrm{loc}}(\mathbb{R}^2))$ for any $r_1 \in [1, p^\ast)$ and $r_2 \in [1, p^\ast/2)$. In a similar way shown in \cite{Cheskidov(b)}, we obtain \begin{align*} &\partial_t \left( \frac{|\boldsymbol{u}^\varepsilon|^2}{2} \right) \ \longrightarrow\ \partial_t \left( \frac{|\boldsymbol{u}|^2}{2} \right), \\ &\nabla \cdot \left[ \boldsymbol{u}^\varepsilon \left( \frac{|\boldsymbol{u}^\varepsilon|^2}{2} + P^\varepsilon \right) \right] \ \longrightarrow\ \nabla \cdot \left[ \boldsymbol{u} \left( \frac{|\boldsymbol{u}|^2}{2} + P \right) \right], \end{align*} in the sense of distributions. 
Indeed, setting $p_1 \equiv (p^\ast)'$ and $p_2 \equiv (p^\ast/2)'$ for simplicity, we have \begin{align*} \left\| \left( \frac{|\boldsymbol{u}^\varepsilon|^2}{2} - \frac{|\boldsymbol{u}|^2}{2} \right)(\cdot,t) \right\|_{L^{p_1}(B_R)} &\leq \frac{1}{2} \| (\boldsymbol{u}^\varepsilon - \boldsymbol{u})(\cdot,t) \|_{L^{p_2}(B_R)} \left( \| \boldsymbol{u}^\varepsilon (\cdot,t) \|_{L^{p^\ast}} + \| \boldsymbol{u} (\cdot,t) \|_{L^{p^\ast}} \right) \\ &\leq C \| \boldsymbol{u}^\varepsilon - \boldsymbol{u} \|_{L^\infty(0,T;L^{p_2}(B_R))} \| q_0 \|_{L^p} \end{align*} and \begin{align*} &\left\| \left[ \boldsymbol{u}^\varepsilon \left( \frac{|\boldsymbol{u}^\varepsilon|^2}{2} + P^\varepsilon \right) - \boldsymbol{u} \left( \frac{|\boldsymbol{u}|^2}{2} + P \right) \right](\cdot,t) \right\|_{L^1(B_R)} \\ &\leq \frac{1}{2} \| (\boldsymbol{u}^\varepsilon - \boldsymbol{u})(\cdot,t) \|_{L^{p_2}(B_R)} \| \boldsymbol{u}^\varepsilon (\cdot,t)\|_{L^{p^\ast}}^2 + \| \boldsymbol{u}(\cdot,t) \|_{L^{p^\ast}} \left\| \left( \frac{|\boldsymbol{u}^\varepsilon|^2}{2} - \frac{|\boldsymbol{u}|^2}{2} \right)(\cdot,t) \right\|_{L^{p_1}(B_R)} \\ & \quad + \| (\boldsymbol{u}^\varepsilon - \boldsymbol{u})(\cdot,t) \|_{L^{p_2}(B_R)} \| P^\varepsilon (\cdot,t) \|_{L^{p^\ast/2}} + \| \boldsymbol{u}(\cdot,t) \|_{L^{p^\ast}} \| (P^\varepsilon - P) (\cdot,t) \|_{L^{p_1}(B_R)} \\ & \leq C \| \boldsymbol{u}^\varepsilon - \boldsymbol{u} \|_{L^\infty(0,T;L^{p_2}(B_R))} \| q_0 \|_{L^p}^2 + C \| P^\varepsilon - P \|_{L^\infty(0,T;L^{p_1}(B_R))} \| q_0 \|_{L^p}. \end{align*} Owing to $p_1 \in (1, 6/5)$ and $p_2 \in (1,3/2)$, the desired convergences hold. It remains to show that \begin{equation*} R^\varepsilon \equiv \boldsymbol{u}^\varepsilon \cdot \left( h^\varepsilon \ast ((\boldsymbol{u}^\varepsilon)^\perp q^\varepsilon) - (\boldsymbol{u}^\varepsilon)^\perp \omega^\varepsilon \right) \end{equation*} converges to zero in $L^\infty(0,T; L^1(\mathbb{R}^2))$. It follows from \eqref{est:remainder} that \begin{equation} \| R^\varepsilon(\cdot,t) \|_{L^1} \leq \| \boldsymbol{u}^\varepsilon(\cdot,t) \|_{L^{p^\ast}} \| (h^\varepsilon \ast (\boldsymbol{u}^\varepsilon q^\varepsilon) - \boldsymbol{u}^\varepsilon \omega^\varepsilon )(\cdot,t) \|_{L^{(p^\ast)'}} \leq C \varepsilon^\alpha \| q_0 \|_{L^p}^3, \label{est:R-eps} \end{equation} in which $\alpha$ is given by \begin{equation*} \alpha = 1 + \frac{2}{(p^\ast)'} - \frac{4}{p} = 6\left( \frac{2}{3} - \frac{1}{p} \right). \end{equation*} Thus, we find $\| R^\varepsilon \|_{L^\infty(0,T;L^1(\mathbb{R}^2))} \rightarrow 0$ as $\varepsilon \rightarrow 0$ for $3/2< p < 2$. \subsection{Proof of Theorem~\ref{thm-main2}} The proof for the convergence to the Euler equations is the same as Section~\ref{proof:conv-Euler}, see Remark~\ref{rem:conv-Euler}. We show the convergence of the energy dissipation rate. Similarly to the case of $3/2 < p <2$, we have \begin{equation*} | \mathscr{D}_E^\varepsilon(t) | \leq \frac{1}{2} \int_{\mathbb{R}^2} |(\nabla H_G^\varepsilon) (\boldsymbol{y})| \| \boldsymbol{u}^\varepsilon(\cdot, t) - \boldsymbol{u}^\varepsilon(\cdot - \boldsymbol{y}, t) \|_{L^3} \| q^\varepsilon(\cdot,t) q^\varepsilon(\cdot - \boldsymbol{y},t) \|_{L^{3/2}} d \boldsymbol{y}. \end{equation*} Thus, we find \begin{equation*} | \mathscr{D}_E^\varepsilon(t) | \leq C \| q^\varepsilon(\cdot,t) \|_{L^{3/2}}^2 \| w_\alpha \nabla H_G^\varepsilon \|_{L^3}. 
\end{equation*} Here, \begin{align*} \| w_\alpha \nabla H_G^\varepsilon \|_{L^3} &\leq C \left( \int_{\mathbb{R}^2} \frac{|\boldsymbol{x}|^{3(\alpha-1)}}{(|\boldsymbol{x}|/\varepsilon + 1)^3} d\boldsymbol{x} \right)^{1/3} = C \varepsilon^{\alpha -1/3} \left( \int_0^\infty \frac{r^{3\alpha - 2}}{(r + 1)^3} dr \right)^{1/3}, \end{align*} and the right-hand side is finite for $\alpha \in (1/3,1]$. Thus, we obtain \begin{equation*} | \mathscr{D}_E^\varepsilon(t) | \leq C \varepsilon^{\alpha -1/3} \| q_0 \|_{L^{3/2}}^2, \end{equation*} so that $\| \mathscr{D}_E^\varepsilon \|_{L^\infty(0,T)}$ converges to zero in the $\varepsilon \rightarrow 0$ limit. As for the local energy balance, it is easily confirmed that, except for the estimate \eqref{est:R-eps}, the proof is the same as Section~\ref{proof:balance}. Thus, we show that $\| R^\varepsilon \|_{L^\infty(0,T;L^1(\mathbb{R}^2))}$ converges to zero in the $\varepsilon \rightarrow 0$ limit. We have \begin{equation*} \| R^\varepsilon(\cdot,t) \|_{L^1} \leq \| \boldsymbol{u}^\varepsilon(\cdot,t) \|_{L^\infty} \| (h^\varepsilon \ast (\boldsymbol{u}^\varepsilon q^\varepsilon) - \boldsymbol{u}^\varepsilon \omega^\varepsilon )(\cdot,t) \|_{L^1}. \end{equation*} It follows from \eqref{est:u-eps-infty} that \begin{equation*} \| \boldsymbol{u}^\varepsilon(\cdot,t) \|_{L^\infty} \leq C \left( \| \omega^\varepsilon(\cdot,t) \|_{L^r} + \| \omega^\varepsilon(\cdot,t) \|_{L^1} \right) \leq C \varepsilon^{-2 + 2 /s} (\| q_0 \|_{L^{3/2}} + \| q_0 \|_{L^1}) \end{equation*} for any $r \in (2, \infty]$ and $s \in (6/5,3)$ satisfying $1 + 1/r = 1/s + 2/3$. On the other hand, the proof of Lemma~\ref{lem1} implies \begin{align*} \| (h^\varepsilon \ast (\boldsymbol{u}^\varepsilon q^\varepsilon) - \boldsymbol{u}^\varepsilon \omega^\varepsilon )(\cdot,t) \|_{L^1} &\leq \| q_0 \|_{L^{3/2}} \int_{\mathbb{R}^2} |h^\varepsilon (\boldsymbol{y})| \| \boldsymbol{u}^\varepsilon(\cdot - \boldsymbol{y},t) - \boldsymbol{u}^\varepsilon(\cdot,t) \|_{L^3} d \boldsymbol{y} \\ & \leq C \| q_0 \|_{L^{3/2}} \int_{\mathbb{R}^2} |\boldsymbol{y}|^\alpha |h^\varepsilon (\boldsymbol{y})| d \boldsymbol{y} \\ &= C \varepsilon^{\alpha} \| q_0 \|_{L^{3/2}} \| w_\alpha h\|_{L^1}. \end{align*} Thus, we obtain \begin{equation*} \| R^\varepsilon(\cdot,t) \|_{L^1} \leq C(q_0,T) \varepsilon^{2 /r - 4/3 + \alpha}. \end{equation*} Since $r \in (2, \infty]$ is an arbitrary constant, we set $2/r = 1 - \delta$ for $\delta \in (0, \alpha - 1/3)$. Then, we have \begin{equation*} \frac{2}{r} - \frac{4}{3} + \alpha = \left(\alpha - \frac{1}{3} \right) - \delta > 0, \end{equation*} so that we conclude $\| R^\varepsilon \|_{L^\infty(0,T;L^1(\mathbb{R}^2))} \rightarrow 0$ as $\varepsilon \rightarrow 0$. \appendix \section{Properties of the auxiliary function $H_G^\varepsilon$} \label{appendix_A} We see detailed properties of $H_G^\varepsilon$ defined by \eqref{def-H_G-eps}. Note that \begin{equation*} H_G^\varepsilon = h^\varepsilon \ast \left(h^\varepsilon \ast G - G \right) = G \ast \left( h^\varepsilon \ast h^\varepsilon - h^\varepsilon \right) = G \ast H^\varepsilon, \end{equation*} where $H^\varepsilon(\boldsymbol{x}) \equiv \varepsilon^{-2} H (\varepsilon^{-1} \boldsymbol{x})$ and $H \equiv h \ast h - h$. Since $h$ is a radial function, we find that $H$, $H^\varepsilon$ and $H_G^\varepsilon$ are also radially symmetric. To emphasize that, those radial functions are denoted by $H_r$, $H_r^\varepsilon$ and $H_{G,r}^\varepsilon$. 
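For later use, we note that the normalization of the filter yields a cancellation for $H$: since $\int_{\mathbb{R}^2} h (\boldsymbol{x}) d \boldsymbol{x} = 1$, Fubini's theorem gives \begin{equation*} 2\pi \int_0^\infty t H_r(t) dt = \int_{\mathbb{R}^2} H(\boldsymbol{x}) d \boldsymbol{x} = \int_{\mathbb{R}^2} \left( h \ast h \right)(\boldsymbol{x}) d \boldsymbol{x} - \int_{\mathbb{R}^2} h(\boldsymbol{x}) d \boldsymbol{x} = \left( \int_{\mathbb{R}^2} h(\boldsymbol{x}) d \boldsymbol{x} \right)^2 - 1 = 0. \end{equation*} This identity is used below to derive the expression and the decay of $H_{G,r}^\varepsilon$.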
In what follows, we assume that $h \in L^1(\mathbb{R}^2)$ satisfies $w_3 h$, $w_\alpha h \in L^\infty(\mathbb{R}^2)$ for some $\alpha \in (0,1)$. Recall that $G$ is a fundamental solution to the 2D Laplacian. Then, we have $\Delta H_G^\varepsilon = H^\varepsilon$, that is, \begin{equation*} \nabla H_G^\varepsilon (\boldsymbol{x}) = (\nabla G) \ast H^\varepsilon = \frac{\boldsymbol{x}}{|\boldsymbol{x}|^2} \int_0^{|\boldsymbol{x}|} s H_r^\varepsilon(s) ds, \end{equation*} which can be rewritten as \begin{equation*} \nabla H_G^\varepsilon (\boldsymbol{x}) = \frac{\boldsymbol{x}}{|\boldsymbol{x}|} (H_{G,r}^\varepsilon)' (|\boldsymbol{x}|), \qquad (H_{G,r}^\varepsilon)' (r) = \frac{1}{r} \int_0^{r} s H_r^\varepsilon(s) ds. \end{equation*} Integrating $(H_{G,r}^\varepsilon)'$ on $[0,r]$, we obtain \begin{align*} H_{G,r}^\varepsilon(r) - H_{G,r}^\varepsilon(0) &= \int_0^r \frac{1}{s} \int_0^{s} t H_r^\varepsilon(t) dt ds \\ & = \left[ \log{s} \int_0^{s} t H_r^\varepsilon(t) dt \right]_0^r - \int_0^r (\log{s}) s H_r^\varepsilon(s) ds. \end{align*} Note that $w_\beta h \in L^\infty(\mathbb{R}^2)$ yields $w_\beta H \in L^\infty(\mathbb{R}^2)$ for any $\beta > 0$. Then, we have \begin{equation*} H_{G,r}^\varepsilon(0) = \int_{\mathbb{R}^2} G(\boldsymbol{y}) H^\varepsilon(\boldsymbol{y}) d \boldsymbol{y} = \int_0^\infty (\log{s}) s H^\varepsilon_r(s) ds, \end{equation*} and the right-hand side is finite. Thus, we obtain \begin{align*} H_{G,r}^\varepsilon(r) &= \log{r} \int_0^{r} s H_r^\varepsilon(s) ds + \int_r^\infty (\log{s}) s H_r^\varepsilon(s) ds \\ & = \log{r} \int_0^{r / \varepsilon } s H_r(s) ds + \int_{r / \varepsilon}^\infty (\log{s}) s H_r(s) ds + \log{\varepsilon} \int_{r / \varepsilon}^\infty s H_r(s) ds. \end{align*} Since $\int_{\mathbb{R}^2} h (\boldsymbol{x}) d \boldsymbol{x} = 1$ yields $\int_0^\infty t H_r(t) dt = 0$, we have the following expression for $H_{G,r}^\varepsilon$. \begin{equation*} H_{G,r}^\varepsilon(r) = \log{\left( r /\varepsilon \right)} \int_0^{r / \varepsilon } s H_r(s) ds + \int_{r / \varepsilon}^\infty (\log{s}) s H_r(s) ds. \end{equation*} We show that, for any fixed $\varepsilon$, the functions $H_{G,r}^\varepsilon$ and $(H_{G,r}^\varepsilon)'$ belong to $C_0[0, \infty)$. Setting $\rho = r /\varepsilon$ for convenience, we have \begin{equation*} H_{G,r}^\varepsilon(r) = \int_0^\rho \log{\left( \frac{\rho}{s} \right)} s H_r(s) ds + \int_0^\infty (\log{s}) s H_r(s) ds. \end{equation*} It follows from $w_\alpha H \in L^\infty(\mathbb{R}^2)$ and $s \log{(\rho/s)} \leq \rho$ for $0 < s \leq \rho$ that \begin{align*} |H_{G,r}^\varepsilon(r)| \leq \rho \int_0^{\rho} | H_r(s)| ds + |H_{G,r}^{\varepsilon =1}(0)| \leq C_\alpha \rho^{2 - \alpha} + |H_{G,r}^{1}(0)|. \end{align*} Thus, $H_{G,r}^\varepsilon$ is bounded for $\rho \leq 1$. For $\rho > 1$, we have \begin{equation*} H_{G,r}^\varepsilon(r) = \int_\rho^\infty \log{\left( \frac{s}{\rho} \right)} s H_r(s) ds = \int_1^\infty (\log{s}) (\rho s) H_r(\rho s) \rho ds, \end{equation*} and \begin{equation*} | H_{G,r}^\varepsilon(r) | \leq \frac{1}{\rho} \| w_3 H \|_{L^\infty} \int_1^\infty \frac{\log{s}}{s^2} ds = \frac{\varepsilon}{r} \| w_3 H \|_{L^\infty}. \end{equation*} Thus, we conclude $H_{G,r}^\varepsilon \in C_0 [0, \infty)$. As for the derivative $(H_{G,r}^\varepsilon)'$, note that \begin{equation*} (H_{G,r}^\varepsilon)' (r) = \frac{1}{r} \int_0^{r} s H_r^\varepsilon(s) ds = \frac{1}{\varepsilon \rho} \int_0^\rho s H_r (s) ds = \frac{1}{\varepsilon} \int_0^1 (\rho s) H_r (\rho s) ds. 
\end{equation*} Then, we have $|(H_{G,r}^\varepsilon)' (r)| \leq \varepsilon^{-1} \| w_1 H \|_{L^\infty}$, that is, $(H_{G,r}^\varepsilon)' \in L^\infty(0,\infty)$. We show the following decay estimate of $(H_{G,r}^\varepsilon)'$, which is suggested in \cite{Liu}. \begin{equation} \left| (H_{G,r}^\varepsilon)' (r) \right| \leq \frac{C}{r} \frac{\varepsilon}{r + \varepsilon}. \label{est:dH_gr-eps} \end{equation} Set $(H_{G,r}^\varepsilon)' (r) = I(r/\varepsilon) / r$, where $I(r)$ is defined by \begin{equation*} I(r) = \int_0^{r} t H_r(t) dt. \end{equation*} Then, it is sufficient to show that $(1 + r) |I(r)|$ is uniformly bounded for $r \in [0,\infty)$. Considering $h \in L^1(\mathbb{R}^2)$ and $\int_0^\infty t H_r(t) dt = 0$, we find $I \in L^\infty(0,\infty)$ and \begin{align*} r |I(r)| & = r \left| \int_r^\infty t H_r(t) dt \right| \leq \| w_3 H \|_{L^\infty} \int_1^\infty \frac{1}{t^2} dt = \| w_3 H \|_{L^\infty}. \end{align*} Thus, we obtain \eqref{est:dH_gr-eps}, so that $(H_{G,r}^\varepsilon)' \in C_0[0, \infty)$. \end{document}
\begin{document} \title{\uppercase{Michael-Simon Sobolev inequalities in Euclidean space }} \author{ Yuting Wu \thanks{School of Mathematical Sciences, East China Normal University, 500 Dongchuan Road, Shanghai 200241, P. R. of China, E-mail address: [email protected]. } \and Chengyang Yi \thanks{School of Mathematical Sciences, East China Normal University, 500 Dongchuan Road, Shanghai 200241, P. R. of China, E-mail address: [email protected]. } } \date{} \maketitle \begin{abstract} Inspired by \cite{Bre, DP}, we prove Michael-Simon type inequalities for smooth symmetric uniformly positive definite $\left( 0,2\right)$-tensor fields on compact submanifolds in Euclidean space by the Alexandrov-Bakelman-Pucci (ABP) method. \end{abstract} \section{Introduction} Many authors have conducted extensive research on optimal transport techniques \cite{CV, BE, Wang, KM} and the Alexandrov-Bakelman-Pucci (ABP) method \cite{Bren, YW, Dong22, Joh} for proving inequalities. We remark that optimal transport techniques also apply to more general metric measure spaces (see \cite{CM}). Meanwhile, the ABP method has been employed in many works: Brendle \cite{Bre} proved a Sobolev inequality for submanifolds of arbitrary dimension and codimension in Euclidean space by the ABP method in 2019; soon after, Brendle \cite{Bren} obtained a Sobolev inequality in Riemannian manifolds with nonnegative curvature; in recent years, a large number of experts \cite{LL, Wang, Joh, Dong22, Ma, DP} have made further investigations of the Sobolev inequality. In 2018, D. Serre \cite{DS} proved a Michael-Simon Sobolev inequality involving a positive symmetric matrix-valued function $A$ on a smooth bounded convex domain in $\mathbb{R}^{n}$ and showed some applications to fluid dynamics by optimal transport techniques. D. Pham \cite{DP} reorganized some of the conclusions of \cite{DS}. The open unit ball in $\mathbb{R}^{m}$ is denoted by $B^{m}$. \begin{theorem} \label{th:A} ($\cite{DS}$, Theorem 2.3 and Proposition 2.2 or $\cite{DP}$, Theorem 1.7) Let $n\in\mathbb{N}$ and $\Omega$ be a smooth bounded convex domain in $\mathbb{R}^{n}$. If $A$ is a smooth uniformly positive symmetric matrix-valued function on $\overline{\Omega}$ (the closure of $\Omega$), then \begin{equation*} \label{eq:A} \int_{\Omega}\left|\mathrm{div}A\left( x\right) \right| dx +\int_{\partial\Omega}\left| A\left( \nu\right) \right|d\sigma\left( x\right) \geq n\left| B^{n}\right|^{\frac{1}{n}} \left( \int_{\Omega}\mathrm{det}\left( A\left( x\right) \right)^{\frac{1}{n-1}}dx\right) ^{\frac{n-1}{n}}, \end{equation*} where $\nu$ is the unit outer normal vector field on $\partial\Omega$. The equality holds if and only if $A$ is divergence-free and there is a smooth convex function $u$ on $\Omega$ such that $\nabla u\left( \Omega\right)$ is a ball centered at the origin and $A=\left(\mathrm{cof} D^{2}u\right)$, the cofactor matrix of $D^{2}u$. \end{theorem} Recently, D. Pham \cite{DP} extended Theorem 1.1 to a smooth bounded domain in $\mathbb{R}^{n}$ without the convexity condition on $\Omega$ by the ABP method \cite{Bre, Cab, Cabr}. \begin{theorem} \label{th:B} ($\cite{DP}$, Theorem 1.8) Let $n\in\mathbb{N}$ and $\Omega$ be a smooth bounded domain in $\mathbb{R}^{n}$. 
If $A$ is a smooth uniformly positive symmetric matrix-valued function on $\overline{\Omega}$, then \begin{equation*} \label{eq:B} \int_{\Omega}\left|\mathrm{div}A\left( x\right) \right| dx +\int_{\partial\Omega}\left| A\left( \nu\right) \right|d\sigma\left( x\right) \geq n\left| B^{n}\right|^{\frac{1}{n}} \left( \int_{\Omega}\mathrm{det}\left( A\left( x\right) \right)^{\frac{1}{n-1}}dx\right) ^{\frac{n-1}{n}}, \end{equation*} where $\nu$ is the unit outer normal vector field on $\partial\Omega$. Moreover, the equality holds if and only if $A$ is divergence-free and there is a smooth convex function $u$ on $\Omega$ such that $\nabla u\left( \Omega\right)$ is a ball centered at the origin and $A=\left(\mathrm{cof} D^{2}u\right)$, the cofactor matrix of $D^{2}u$. \end{theorem} In this paper, we prove several Michael-Simon Sobolev inequalities for smooth symmetric uniformly positive definite $\left( 0,2\right)$-tensor fields on compact submanifolds in Euclidean space. Our main result is the following theorem. \begin{theorem} \label{th:01} Let $\Sigma^{n}$ be a compact $n$-dimensional submanifold of $\mathbb{R}^{n+m}$ with smooth boundary $\partial\Sigma$ (possibly $\partial\Sigma=\varnothing$), where $m\geq2$. If $A$ is a smooth symmetric uniformly positive definite $\left( 0,2\right)$-tensor field on $\Sigma$, then \begin{equation*} \int_{\Sigma}\sqrt{\left| \mathrm{div}_{\Sigma}A\right|^{2}+\left| \left\langle A,\Rmnum{2}\right\rangle \right|^{2}} +\int_{\partial\Sigma}\left| A\left( \nu\right) \right| \geq n\left[ \frac{\left( n+m\right)\left| B^{n+m}\right|}{m\left| B^{m}\right|}\right] ^{\frac{1}{n}}\left( \int_{\Sigma}\left( \mathrm{det}A\right)^{\frac{1}{n-1}}\right) ^{\frac{n-1}{n}}, \end{equation*} where $\nu$ is the unit outer conormal vector field on $\partial\Sigma$ and $\Rmnum{2}$ is the second fundamental form of $\Sigma$. \end{theorem} When $m=2$, since $\left( n+2\right)\left| B^{n+2}\right|=2\left| B^{2}\right| \left| B^{n}\right|$, we obtain a sharp Sobolev inequality for submanifolds of codimension 2. \begin{theorem} Let $\Sigma^{n}$ be a compact $n$-dimensional submanifold of $\mathbb{R}^{n+2}$ with smooth boundary $\partial\Sigma$ (possibly $\partial\Sigma=\varnothing$). If $A$ is a smooth symmetric uniformly positive definite $\left( 0,2\right)$-tensor field on $\Sigma$, then \begin{equation*} \int_{\Sigma}\sqrt{\left| \mathrm{div}_{\Sigma}A\right|^{2}+\left| \left\langle A,\Rmnum{2}\right\rangle \right|^{2}} +\int_{\partial\Sigma}\left| A\left( \nu\right) \right| \geq n\left| B^{n}\right|^{\frac{1}{n}}\left( \int_{\Sigma}\left( \mathrm{det}A\right)^{\frac{1}{n-1}}\right) ^{\frac{n-1}{n}}. \end{equation*} The equality holds if and only if $\Sigma$ is a compact domain in $\mathbb{R}^{n}\subset \mathbb{R}^{n+2}$, $A$ is divergence-free, and there is a smooth convex function $u$ on $\Sigma$ such that $\nabla^{\Sigma} u\left( \Sigma\right)$ is the closed unit ball in $\mathbb{R}^{n}$ centered at the origin and $A=\mathrm{cof} D^{2}_{\Sigma}u$, the cofactor tensor of $D^{2}_{\Sigma}u$. 
\end{theorem} Since $\mathbb{R}^{n+1}$ is a totally geodesic submanifold in $\mathbb{R}^{n+2}$, Theorem 1.3 and Theorem 1.4 imply a sharp Sobolev inequality for submanifolds of codimension 1: \begin{corollary} \label{co:02} Let $\Sigma^{n}$ be a compact $n$-dimensional submanifold of $\mathbb{R}^{n+1}$ with smooth boundary $\partial\Sigma$ (possibly $\partial\Sigma=\varnothing$). If $A$ is a smooth symmetric uniformly positive definite $\left( 0,2\right)$-tensor field on $\Sigma$, then \begin{equation*} \int_{\Sigma}\sqrt{\left| \mathrm{div}_{\Sigma}A\right|^{2}+\left| \left\langle A,\Rmnum{2}\right\rangle \right|^{2}} +\int_{\partial\Sigma}\left| A\left( \nu\right) \right| \geq n\left| B^{n}\right|^{\frac{1}{n}} \left( \int_{\Sigma}\left( \mathrm{det}A\right)^{\frac{1}{n-1}}\right) ^{\frac{n-1}{n}}, \end{equation*} where $\nu$ is the unit outer conormal vector field on $\partial\Sigma$ and $\Rmnum{2}$ is the second fundamental form of $\Sigma$. \end{corollary} \begin{remark} \label{Re:02} In the special case when $A$ is a conformal metric on $\Sigma$, i.e., $A=fg_{\Sigma}$ for a positive function $f$ on $\Sigma$, we have $|\mathrm{div}_{\Sigma}A|^{2}=|\nabla^{\Sigma}f|^{2}$, $\left\langle A,\Rmnum{2}\right\rangle=fH$ and $\left| A\left( \nu\right) \right| =f$, where $H$ denotes the mean curvature vector of $\Sigma$. Therefore, the above inequalities reduce exactly to the results in \cite{Bre}. \end{remark} This paper is organized as follows. In Section 2, we will give basic notions and a generalized trace inequality for the product of two square matrices. In Sections 3 and 4, we will give the proofs of Theorem 1.3 and Theorem 1.4, respectively. {\bf Acknowledgement.} This work was partially supported by Science and Technology Commission of Shanghai Municipality (No. 22DZ2229014) and the National Natural Science Foundation of China (Grant No. 12271163). The research is supported by Shanghai Key Laboratory of PMMP. \section{Preliminaries} Let $\Sigma^{n}$ be a compact $n$-dimensional submanifold with smooth boundary $\partial\Sigma$ (possibly $\partial\Sigma=\varnothing$) in Euclidean space. In addition, $g_{\Sigma}$ denotes the induced Riemannian metric on $\Sigma$, $D_{\Sigma}$ denotes the Levi-Civita connection on $\Sigma$, and $\nabla^{\Sigma}$ denotes the gradient on $\Sigma$. Let $A$ be a smooth symmetric uniformly positive definite $\left( 0,2\right)$-tensor field on $\Sigma$. For each point $x\in\Sigma$, we denote by $T_{x}\Sigma$ and $T_{x}^{\bot} \Sigma$ the tangent and normal space to $\Sigma$ at $x$, respectively. Let $\left( x^{1}, \dots, x^{n}\right) $ be a local coordinate system on $\Sigma$. The divergence of $A$ on $\Sigma$ is defined by \begin{equation*} \mathrm{div}_{\Sigma}A:= g_{\Sigma}^{ki}D^{\Sigma}_{k}A_{ij}dx^{j}. \end{equation*} Let $T$ and $S$ be two $\left(0,2\right)$-tensor fields on $\Sigma$. In general, the inner product of $T$ and $S$ can be written as \begin{equation*} \left\langle T, S\right\rangle =g_{\Sigma}^{ik}g_{\Sigma}^{jl}T_{ij}S_{kl} =T_{ij}S^{ij}. \end{equation*} The composition of $T$ and $S$ is the $\left(0,2\right)$-tensor $T\circ S$ defined by \begin{equation*} (T\circ S)_{ij}=g^{kl}_{\Sigma}T_{ik}S_{lj}. \end{equation*} The determinant $\mathrm{det}T$ is defined as the determinant of the $\left(1,1\right)$-tensor $g_{\Sigma}^{ik}T_{jk}\frac{\partial}{\partial x^{i}}\otimes dx^{j}$. When $S>0$ and $T\circ S=\mathrm{det}S\cdot g_{\Sigma}$, we call $T$ the cofactor tensor of $S$. 
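For instance, if $S = \mathrm{diag}\left( \lambda_{1}, \dots, \lambda_{n}\right)$ with $\lambda_{i}>0$ in a $g_{\Sigma}$-orthonormal frame at a point, then its cofactor tensor is \begin{equation*} \mathrm{cof}\, S = \mathrm{diag}\left( \prod_{j\neq 1}\lambda_{j}, \dots, \prod_{j\neq n}\lambda_{j}\right), \end{equation*} since the composition of these two diagonal tensors equals $\left( \prod_{j}\lambda_{j}\right) g_{\Sigma} = \mathrm{det}S\cdot g_{\Sigma}$; this is the tensorial analogue of the cofactor matrix appearing in Theorem 1.1.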
When $T\circ S=g_{\Sigma}$, we call $T$ the inverse tensor of $S$ and denote it by $S^{-1}$. Meanwhile, $\Rmnum{2}$ denotes the second fundamental form of $\Sigma$, defined by \begin{equation*} \left\langle \Rmnum{2}\left( X,Y\right), Z\right\rangle := \left\langle \bar{D}_{X}Y, Z\right\rangle = -\left\langle \bar{D}_{X}Z, Y\right\rangle, \end{equation*} where $X$ and $Y$ are tangent vector fields on $\Sigma$, $Z$ is a normal vector field to $\Sigma$, and $\bar{D}$ denotes the standard connection on $\mathbb{R}^{n+m}$. Further, $\left\langle A, \Rmnum{2}\right\rangle\left( x\right) $ is the normal vector at $x\in\Sigma$ defined by \begin{equation*} \left\langle A, \Rmnum{2}\right\rangle \left( x\right) = g_{\Sigma}^{ik}g_{\Sigma}^{jl}A_{ij}\Rmnum{2}(\frac{\partial}{\partial x ^{k}},\frac{\partial}{\partial x ^{l}}). \end{equation*} Finally, we list the following lemma, which extends the arithmetic-geometric mean inequality to the product of two square matrices: \begin{lemma}(Lemma A.1 in \cite{DP}) For $n\in\mathbb{N}$, let $A$ and $B$ be square symmetric matrices of size $n$. Assume that $A$ is positive definite and $B$ is non-negative definite. Then \begin{equation*} \mathrm{det}\left( AB\right) \leq\left(\frac{\mathrm{tr}\left( AB\right)}{n} \right)^{n}. \end{equation*} The equality holds if and only if $AB=\lambda I_{n}$ for some $\lambda\geq0$, where $I_{n}$ is the identity matrix. \end{lemma} \section{Proof of Theorem 1.3} First, we prove Theorem 1.3 in the special case that $\Sigma$ is connected. By scaling, we assume that \begin{equation} \label{eq:2.1} \int_{\Sigma}\sqrt{\left| \mathrm{div}_{\Sigma}A\right|^{2}+\left| \left\langle A,\Rmnum{2}\right\rangle \right|^{2}} +\int_{\partial\Sigma}\left| A\left( \nu\right) \right| = n \int_{\Sigma}\left( \mathrm{det}A\right) ^{\frac{1}{n-1}}. \end{equation} Since $\Sigma$ is connected and \eqref{eq:2.1} holds, there exists a solution $u:\Sigma\rightarrow\mathbb{R}$ to the following equation: \begin{equation} \left\{\begin{aligned} &\mathrm{div}_{\Sigma}\left( A\left( \nabla^{\Sigma}u\right) \right) \left( x\right) =n\left( \mathrm{det}A\left( x\right) \right) ^{\frac{1}{n-1}} -\sqrt{\left| \mathrm{div}_{\Sigma}A\right|^{2}\left( x\right) + \left| \left\langle A,\Rmnum{2}\right\rangle \right|^{2}\left( x\right) },\ \mathrm{in}\ \Sigma\backslash\partial\Sigma, \\ &\left\langle A\left( \nabla^{\Sigma}u\right) \left( x\right),\nu\left( x\right) \right\rangle =\left| A\left( \nu\left( x\right)\right) \right|,\ \mathrm{on}\ \partial\Sigma. \end{aligned} \right. \end{equation} We define \begin{equation*} \begin{split} &\Omega:=\left\lbrace x\in\Sigma\backslash\partial\Sigma: \left| \nabla^{\Sigma}u\left( x\right) \right|<1\right\rbrace,\\ &U:=\{ \left( x,y\right): x\in\Sigma\backslash\partial\Sigma, y\in T_{x}^{\bot}\Sigma, \left| \nabla^{\Sigma}u\left( x\right) \right|^{2}+\left| y\right| ^{2}<1\},\\ &V:=\left\lbrace \left( x,y\right)\in U: D_{\Sigma}^{2}u\left( x\right) -\left\langle \Rmnum{2}\left( x\right), y\right\rangle \geq0\right\rbrace, \\ \end{split} \end{equation*} and a map $\Phi:T^{\perp}(\Sigma\backslash\partial\Sigma)\rightarrow\mathbb{R}^{n+m}$ by \begin{equation*} \Phi\left( x, y\right) =\nabla^{\Sigma}u\left( x\right)+y \end{equation*} for all $x\in\Sigma\backslash\partial\Sigma$ and $y\in T_{x}^{\bot}\Sigma$. Standard elliptic regularity theory implies that the function $u\in C^{2, \gamma}(\Sigma)$ and $\Phi\in C^{1, \gamma}(T^{\perp}(\Sigma\backslash\partial\Sigma))$ for each $0<\gamma<1$ (see \cite{DG}). 
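Before proceeding, we record a consistency check, which is not needed in the sequel: in the conformal case $A = f g_{\Sigma}$ we have $\mathrm{det}A = f^{n}$, and the boundary value problem above reduces to \begin{equation*} \left\{\begin{aligned} &\mathrm{div}_{\Sigma}\left( f \nabla^{\Sigma}u\right) \left( x\right) = n f\left( x\right)^{\frac{n}{n-1}} -\sqrt{\left| \nabla^{\Sigma}f\right|^{2}\left( x\right) + f\left( x\right)^{2}\left| H\right|^{2}\left( x\right) },\ \mathrm{in}\ \Sigma\backslash\partial\Sigma, \\ &\left\langle \nabla^{\Sigma}u\left( x\right),\nu\left( x\right) \right\rangle = 1,\ \mathrm{on}\ \partial\Sigma, \end{aligned} \right. \end{equation*} recovering the scalar problem that underlies the conformal case of Remark~\ref{Re:02}.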
\begin{lemma} \label{Le:2.1} $\Phi\left( V\right) =B^{n+m}$. \end{lemma} \begin{proof} By the definitions of $U$ and $\Phi$, clearly, $\Phi\left( V\right) \subset B^{n+m}$. It remains to prove $B^{n+m}\subset\Phi\left( V\right)$. Given an arbitrary vector $\xi\in\mathbb{R}^{n+m}$ such that $\left| \xi\right| <1$, we define a function $\omega:\Sigma\rightarrow\mathbb{R}$ by $\omega\left( x\right) :=u\left( x\right) -\left\langle x, \xi\right\rangle $. Then \begin{equation*} \nabla^{\Sigma}\omega\left( x\right) =\nabla^{\Sigma}u\left( x\right) - \xi^{\top}\quad \mathrm{for}\ x\in\Sigma, \end{equation*} where $\xi^{\top}$ is the tangent part of $\xi$ to $\Sigma$. Let $x_{0}$ be a minimum point of $\omega$. If $x_{0}\in\partial\Sigma$, then \begin{equation*} \nabla^{\Sigma}\omega\left( x_{0}\right) =\left\langle \nabla^{\Sigma}\omega\left( x_{0}\right), \nu\left( x_{0}\right) \right\rangle\nu\left( x_{0}\right), \end{equation*} and \begin{equation*} \left\langle \nabla^{\Sigma}\omega\left( x_{0}\right), \nu\left( x_{0}\right) \right\rangle \leq0. \end{equation*} Since $A$ is symmetric positive definite and $\left| \xi\right| <1$, by the Cauchy-Schwarz inequality, we have \begin{equation*} \begin{split} 0&\geq \left\langle \nabla^{\Sigma}\omega\left( x_{0}\right), \nu\left( x_{0}\right) \right\rangle \left\langle \nu\left( x_{0}\right), A\left( \nu\left( x_{0}\right) \right) \right\rangle\\ &=\left\langle \nabla^{\Sigma}\omega\left( x_{0}\right), A\left( \nu\left( x_{0}\right) \right) \right\rangle\\ &=\left\langle \nabla^{\Sigma}u\left( x_{0}\right), A\left( \nu\left( x_{0}\right) \right) \right\rangle -\left\langle\xi^{\top}, A\left( \nu\left( x_{0}\right) \right) \right\rangle\\ &=\left| A\left( \nu\left( x_{0}\right) \right)\right| -\left\langle\xi, A\left( \nu\left( x_{0}\right) \right) \right\rangle\\ &>0. \end{split} \end{equation*} Thus, $\omega$ must attain its minimum in $\Sigma\backslash\partial\Sigma$, i.e. $x_{0}\in\Sigma\backslash\partial\Sigma$. Consequently, $x_{0}$ is an interior minimum point and \begin{equation*} \nabla^{\Sigma}\omega\left( x_{0}\right) =\nabla^{\Sigma}u\left( x_{0}\right) - \xi^{\top} =0. \end{equation*} Denote $\xi-\xi^{\top}$ by $y_{0}$; then \begin{equation*} \nabla^{\Sigma}u\left( x_{0}\right)+y_{0}=\xi\in B^{n+m}. \end{equation*} Moreover, we obtain \begin{equation*} D_{\Sigma}^{2} \omega\left( x_{0}\right) =D_{\Sigma}^{2}u\left( x_{0}\right) -\left\langle \Rmnum{2}\left( x_{0}\right), y_{0}\right\rangle \geq0. \end{equation*} Therefore, since $\left| \nabla^{\Sigma}u\left( x_{0}\right) \right|^{2}+\left| y_{0}\right|^{2}=\left| \xi\right|^{2}<1$, we have $\left( x_{0}, y_{0}\right)\in V $ and $\Phi\left( x_{0}, y_{0}\right)=\xi$. Thus $B^{n+m}\subset\Phi\left( V\right)$. The lemma follows. \end{proof} \begin{lemma} \label{Le:2.2} (\cite{Bre}, Lemma 5) The Jacobian determinant of $\Phi$ is given by \begin{equation*} \mathrm{det}D\Phi\left( x, y\right) =\mathrm{det}\left( D_{\Sigma}^{2}u\left( x\right) -\left\langle \Rmnum{2}\left( x\right), y\right\rangle \right) \end{equation*} for all $\left( x,y\right) \in T^{\perp}(\Sigma\backslash\partial\Sigma)$. \end{lemma} \begin{lemma} \label{Le:2.3} The Jacobian determinant of $\Phi$ satisfies \begin{equation*} 0\leq\mathrm{det}D\Phi\left( x, y\right) \leq\left( \mathrm{det}A\left( x\right) \right)^{\frac{1}{n-1}} \end{equation*} for all $\left( x,y\right) \in V$.
\end{lemma} \begin{proof} Given a point $\left( x,y\right) \in V$, by Cauchy-Schwarz inequality and $\left| \nabla^{\Sigma}u\left( x\right) \right|^{2}+\left| y\right| ^{2}<1$, we have \begin{equation*} \begin{split} &-\left\langle \mathrm{div}_{\Sigma}A\left( x\right), \nabla^{\Sigma}u\left( x\right)\right\rangle - \left\langle A\left( x\right) , \left\langle \Rmnum{2}\left( x\right), y\right\rangle\right\rangle \\ =&-\left\langle \mathrm{div}_{\Sigma}A\left( x\right), \nabla^{\Sigma}u\left( x\right)\right\rangle - \left\langle \left\langle A\left( x\right) , \Rmnum{2}\left( x\right)\right\rangle , y\right\rangle\\ =&-\left\langle \mathrm{div}_{\Sigma}A\left( x\right)+\left\langle A\left( x\right) , \Rmnum{2}\left( x\right)\right\rangle, \nabla^{\Sigma}u\left( x\right)+y\right\rangle\\ \leq&\sqrt{\left| \nabla^{\Sigma}u\left( x\right)\right| ^{2}+\left| y\right| ^{2}} \sqrt{\left| \mathrm{div}_{\Sigma}A\right| ^{2}\left( x\right)+\left| \left\langle A, \Rmnum{2}\right\rangle\right| ^{2}\left( x\right) }\\ \leq&\sqrt{\left| \mathrm{div}_{\Sigma}A\right| ^{2}\left( x\right)+\left| \left\langle A, \Rmnum{2}\right\rangle\right| ^{2}\left( x\right) }. \end{split} \end{equation*} Note that \begin{equation*} \mathrm{div}_{\Sigma}\left( A\left( \nabla^{\Sigma}u\right) \right) =\left\langle \mathrm{div}_{\Sigma}A, \nabla^{\Sigma}u\right\rangle + \left\langle A, D_{\Sigma}^{2}u\right\rangle. \end{equation*} Thus, by the equation of $u$, we obtain \begin{equation*} \begin{split} &\left\langle A\left( x\right), D_{\Sigma}^{2}u\left( x\right)-\left\langle \Rmnum{2}\left( x\right), y\right\rangle\right\rangle \\ =&\mathrm{div}_{\Sigma}\left( A\left( \nabla^{\Sigma}u\right) \right)\left( x\right) -\left\langle \mathrm{div}_{\Sigma}A\left( x\right), \nabla^{\Sigma}u\left( x\right)\right\rangle -\left\langle A\left( x\right) , \left\langle \Rmnum{2}\left( x\right), y\right\rangle\right\rangle \\ =&n\left( \mathrm{det}A\left( x\right) \right) ^{\frac{1}{n-1}} -\sqrt{\left| \mathrm{div}_{\Sigma}A\right| ^{2}\left( x\right)+\left| \left\langle A, \Rmnum{2}\right\rangle\right| ^{2}\left(x\right)} \\ &-\left\langle \mathrm{div}_{\Sigma}A\left( x\right), \nabla^{\Sigma}u\left( x\right)\right\rangle - \left\langle A\left( x\right), \left\langle \Rmnum{2}\left( x\right), y\right\rangle\right\rangle \\ \leq& n\left( \mathrm{det}A\left( x\right) \right) ^{\frac{1}{n-1}}. \end{split} \end{equation*} Since $A>0$ and $D_{\Sigma}^{2}u\left( x\right) -\left\langle \Rmnum{2}\left( x\right), y\right\rangle\geq0$ for all $\left( x,y\right) \in V$, by Lemma 2.1, we have \begin{equation*} \begin{split} &\mathrm{det}\left( D_{\Sigma}^{2}u\left( x\right) -\left\langle \Rmnum{2}\left( x\right), y\right\rangle\right) \\ =&\frac{1}{\mathrm{det}A\left( x\right)} \mathrm{det}\left[ A\left( x\right) \circ\left( D_{\Sigma}^{2}u\left( x\right) -\left\langle \Rmnum{2}\left( x\right), y\right\rangle\right)\right] \\ \leq&\frac{1}{\mathrm{det}A\left( x\right)} \left( \frac{\left\langle A\left( x\right), D_{\Sigma}^{2}u\left( x\right) -\left\langle \Rmnum{2}\left( x\right), y\right\rangle\right\rangle }{n}\right) ^{n}\\ \leq&\left( \mathrm{det}A\left( x\right)\right) ^{\frac{1}{n-1}}.\\ \end{split} \end{equation*} By Lemma 3.2, this lemma follows. 
\end{proof} \emph{Proof of Theorem 1.3.} Given a constant $\sigma$ such that $0\leq\sigma<1$, by the area formula and Lemma 3.3, we have \begin{equation*} \begin{split} &\left| B^{n+m}\right|\left( 1-\sigma^{n+m}\right) \\ =&\int_{\left\lbrace \xi\in\mathbb{R}^{n+m}:\sigma^{2}<\left| \xi\right|^{2}<1\right\rbrace }1 d\xi \\ \leq& \int_{\Omega}\left( \int_{\left\lbrace y\in T_{x}^{\bot}\Sigma:\sigma^{2}<\left| \Phi\left( x, y\right) \right|^{2}<1\right\rbrace } \left| \mathrm{det}D\Phi\left( x, y\right) \right| 1_{V}\left( x, y\right) dy\right) d\mathrm{vol}\left( x\right) \\ \leq& \int_{\Omega}\left( \int_{\left\lbrace y\in T_{x}^{\bot}\Sigma:\sigma^{2}<\left| \nabla^{\Sigma}u\left( x\right) \right|^{2}+\left| y\right| ^{2}<1\right\rbrace } \left( \mathrm{det}A\left( x\right)\right) ^{\frac{1}{n-1}}dy\right) d\mathrm{vol}\left( x\right) \\ \leq&\frac{m}{2}\left| B^{m}\right|\left( 1-\sigma^{2}\right) \int_{\Omega} \left( \mathrm{det}A\right) ^{\frac{1}{n-1}}. \end{split} \end{equation*} Dividing both sides by $1-\sigma$ and letting $\sigma\rightarrow1^{-}$, we have \begin{equation} \left( n+m\right)\left| B^{n+m}\right|\leq m\left| B^{m}\right| \int_{\Omega} \left( \mathrm{det}A\right) ^{\frac{1}{n-1}}\leq m\left| B^{m}\right| \int_{\Sigma} \left( \mathrm{det}A\right) ^{\frac{1}{n-1}}. \end{equation} From the scaling assumption $\eqref{eq:2.1}$, we obtain \begin{equation*} \begin{split} &\int_{\Sigma}\sqrt{\left| \mathrm{div}_{\Sigma}A\right| ^{2}+\left| \left\langle A, \Rmnum{2}\right\rangle\right| ^{2}} +\int_{\partial\Sigma}\left| A\left( \nu\right) \right| \\ =&n\int_{\Sigma} \left( \mathrm{det}A\right) ^{\frac{1}{n-1}}\\ =&n\left( \int_{\Sigma} \left( \mathrm{det}A\right) ^{\frac{1}{n-1}}\right) ^{\frac{1}{n}} \left( \int_{\Sigma} \left( \mathrm{det}A\right) ^{\frac{1}{n-1}}\right) ^{\frac{n-1}{n}}\\ \geq &n\left[ \frac{\left( n+m\right)\left| B^{n+m}\right|}{m\left| B^{m}\right|}\right] ^{\frac{1}{n}} \left( \int_{\Sigma} \left( \mathrm{det}A\right) ^{\frac{1}{n-1}}\right) ^{\frac{n-1}{n}}. \end{split} \end{equation*} Thus, Theorem 1.3 has been proved in the special case when $\Sigma$ is connected. If $\Sigma$ is disconnected, we can conclude that \begin{equation*} \int_{\Sigma}\sqrt{\left| \mathrm{div}_{\Sigma}A\right| ^{2} +\left| \left\langle A, \Rmnum{2}\right\rangle\right| ^{2}} +\int_{\partial\Sigma}\left| A\left( \nu\right) \right| > n\left[ \frac{\left( n+m\right)\left| B^{n+m}\right|}{m\left| B^{m}\right|}\right] ^{\frac{1}{n}} \left( \int_{\Sigma} \left( \mathrm{det}A\right) ^{\frac{1}{n-1}}\right) ^{\frac{n-1}{n}} \end{equation*} from the inequality $a^{\frac{n-1}{n}}+b^{\frac{n-1}{n}} >\left( a+b\right) ^{\frac{n-1}{n}}$ for $a, b>0$. This completes the proof of Theorem 1.3. $\Box$\\ \section{Proof of Theorem 1.4} Suppose that $\Sigma^{n}$ is a compact $n$-dimensional submanifold of $\mathbb{R}^{n+2}$ with smooth boundary $\partial\Sigma$ (possibly $\partial\Sigma=\varnothing$), and $A$ is a smooth symmetric uniformly positive definite $\left( 0,2\right)$-tensor field on $\Sigma$ satisfying \begin{equation*} \int_{\Sigma}\sqrt{\left| \mathrm{div}_{\Sigma}A\right|^{2}+\left| \left\langle A,\Rmnum{2}\right\rangle \right|^{2}} +\int_{\partial\Sigma}\left| A\left( \nu\right) \right| = n\left| B^{n}\right|^{\frac{1}{n}}\left( \int_{\Sigma}\left( \mathrm{det}A\right)^{\frac{1}{n-1}}\right) ^{\frac{n-1}{n}}. \end{equation*} We conclude that $\Sigma$ is connected from the last part of the proof of Theorem 1.3.
Up to rescaling $A$ by a positive constant factor, we may assume that \begin{equation} \int_{\Sigma}\sqrt{\left| \mathrm{div}_{\Sigma}A\right|^{2}+\left| \left\langle A,\Rmnum{2}\right\rangle \right|^{2}} +\int_{\partial\Sigma}\left| A\left( \nu\right) \right| = n \int_{\Sigma}\left( \mathrm{det}A\right) ^{\frac{1}{n-1}} \end{equation} and \begin{equation} \int_{\Sigma} \left( \mathrm{det}A\right) ^{\frac{1}{n-1}}=|B^{n}|. \end{equation} Let $u$, $\Omega$, $U$, $V$ and $\Phi:U\rightarrow \mathbb{R}^{n+2}$ be defined as in Section 3. Combining (3.3) and (4.2), we have \begin{equation*} \left( n+2\right)\left| B^{n+2}\right|\leq 2\left| B^{2}\right| \int_{\Omega} \left( \mathrm{det}A\right) ^{\frac{1}{n-1}}\leq 2\left| B^{2}\right| \int_{\Sigma} \left( \mathrm{det}A\right) ^{\frac{1}{n-1}}=\left( n+2\right)\left| B^{n+2}\right|. \end{equation*} Thus, we obtain the following lemma. \begin{lemma} $\Omega$ is dense in $\Sigma$. Moreover, $\Omega=\Sigma\backslash\partial \Sigma$ and $|\nabla^{\Sigma}u(x)|=1$ for all $x\in \partial \Sigma$. \end{lemma} \begin{lemma} Suppose that $\bar{x}\in\Omega$ and $\bar{y}\in T^{\perp}_{\bar{x}}\Sigma$ satisfy $\left| \nabla^{\Sigma}u\left( \bar{x}\right) \right|^{2}+\left| \bar{y}\right| ^{2}=1$. If the smallest eigenvalue of $D_{\Sigma}^{2}u\left( \bar{x}\right) -\left\langle \Rmnum{2}\left( \bar{x}\right), \bar{y}\right\rangle$ is negative or $\mathrm{det}\left(D_{\Sigma}^{2}u\left( \bar{x}\right) -\left\langle \Rmnum{2}\left( \bar{x}\right), \bar{y}\right\rangle\right)< \left( \mathrm{det}A(\bar{x})\right) ^{\frac{1}{n-1}} $ when $D_{\Sigma}^{2}u\left( \bar{x}\right) -\left\langle \Rmnum{2}\left( \bar{x}\right), \bar{y}\right\rangle\geq 0$, then there exist $\epsilon\in (0,1)$, $\delta\in (0,1)$, an open neighborhood $W$ of $\bar{x}$ in $\Sigma\backslash\partial \Sigma$ and a set $Z:=\{ \left( x,y\right): x\in W, y\in T_{x}^{\bot}\Sigma, 1-\delta<\left| \nabla^{\Sigma}u\left( x\right) \right|^{2}+\left| y\right| ^{2}<1+\delta\}$ such that \begin{equation*} \mathrm{det}D\Phi\left( x, y\right) \leq (1-\epsilon)\left( \mathrm{det}A\left( x\right) \right)^{\frac{1}{n-1}} \end{equation*} for all $(x,y)\in Z\cap V$. \end{lemma} \begin{proof} Clearly, this lemma follows from $u\in C^{2}(\Sigma)$ and Lemma 3.2. \end{proof} \begin{lemma} Suppose that $\bar{x}\in\Omega$ and $\bar{y}\in T^{\perp}_{\bar{x}}\Sigma$ satisfy $\left| \nabla^{\Sigma}u\left( \bar{x}\right) \right|^{2}+\left| \bar{y}\right| ^{2}=1$. Then $D_{\Sigma}^{2}u\left( \bar{x}\right) -\left\langle \Rmnum{2}\left( \bar{x}\right), \bar{y}\right\rangle\geq 0$ and $\mathrm{det}\left(D_{\Sigma}^{2}u\left( \bar{x}\right) -\left\langle \Rmnum{2}\left( \bar{x}\right), \bar{y}\right\rangle\right)= \left( \mathrm{det}A(\bar{x})\right) ^{\frac{1}{n-1}}$. \end{lemma} \begin{proof} We argue by contradiction. Assume that the smallest eigenvalue of $D_{\Sigma}^{2}u\left( \bar{x}\right) -\left\langle \Rmnum{2}\left( \bar{x}\right), \bar{y}\right\rangle$ is negative or $\mathrm{det}\left(D_{\Sigma}^{2}u\left( \bar{x}\right) -\left\langle \Rmnum{2}\left( \bar{x}\right), \bar{y}\right\rangle\right)< \left( \mathrm{det}A(\bar{x})\right) ^{\frac{1}{n-1}} $ when $D_{\Sigma}^{2}u\left( \bar{x}\right) -\left\langle \Rmnum{2}\left( \bar{x}\right), \bar{y}\right\rangle\geq 0$.
By Lemma 4.2, there exist $\epsilon\in (0,1)$, $\delta\in (0,1)$, an open neighborhood $W$ of $\bar{x}$ in $\Sigma\backslash\partial \Sigma$ and $Z:=\{ \left( x,y\right): x\in W, y\in T_{x}^{\bot}\Sigma, 1-\delta<\left| \nabla^{\Sigma}u\left( x\right) \right|^{2}+\left| y\right| ^{2}<1+\delta\}$ such that \begin{equation*} \mathrm{det}D\Phi\left( x, y\right) \leq (1-\epsilon)\left( \mathrm{det}A\left( x\right) \right)^{\frac{1}{n-1}} \end{equation*} for all $(x,y)\in Z\cap V$. Using Lemma 3.3, we deduce that \begin{equation*} 0\leq\mathrm{det}D\Phi\left( x, y\right) \leq (1-\epsilon\cdot 1_{Z}(x, y))\left( \mathrm{det}A\left( x\right) \right)^{\frac{1}{n-1}} \end{equation*} for all $(x,y)\in V$. Similarly to the proof of Theorem 1.3, given a constant $\sigma$ such that $1-\delta<\sigma^{2}<1$, we have \begin{align*} &\left| B^{n+2}\right|\left( 1-\sigma^{n+2}\right) \\ =&\int_{\left\lbrace \xi\in\mathbb{R}^{n+2}:\sigma^{2}<\left| \xi\right|^{2}<1\right\rbrace }1 d\xi \\ \leq& \int_{\Omega}\left( \int_{\left\lbrace y\in T_{x}^{\bot}\Sigma:\sigma^{2}<\left| \Phi\left( x, y\right) \right|^{2}<1\right\rbrace } \left| \mathrm{det}D\Phi\left( x, y\right) \right| 1_{V}\left( x, y\right) dy\right) d\mathrm{vol}\left( x\right) \\ \leq& \int_{\Omega}\left( \int_{\left\lbrace y\in T_{x}^{\bot}\Sigma:\sigma^{2}<\left| \nabla^{\Sigma}u\left( x\right) \right|^{2}+\left| y\right| ^{2}<1\right\rbrace } (1-\epsilon\cdot 1_{Z}(x, y)) \left( \mathrm{det}A\left( x\right) \right)^{\frac{1}{n-1}} dy\right) d\mathrm{vol}\left( x\right) \\ =&| B^{2}|\int_{\Omega}\left[(1-\left| \nabla^{\Sigma}u\left( x\right) \right|^{2})-(\sigma^{2}-\left| \nabla^{\Sigma}u\left( x\right) \right|^{2})_{+} \right]\left( \mathrm{det}A\left( x\right) \right)^{\frac{1}{n-1}}d\mathrm{vol}\left( x\right)\\ &-\epsilon| B^{2}|\int_{\Omega\cap W}\left[(1-\left| \nabla^{\Sigma}u\left( x\right) \right|^{2})-(\sigma^{2}-\left| \nabla^{\Sigma}u\left( x\right) \right|^{2})_{+} \right]\left( \mathrm{det}A\left( x\right) \right)^{\frac{1}{n-1}}d\mathrm{vol}\left( x\right)\\ =&| B^{2}|\int_{\Omega}\left[(1-\left| \nabla^{\Sigma}u\left( x\right) \right|^{2})-(\sigma^{2}-\left| \nabla^{\Sigma}u\left( x\right) \right|^{2})_{+} \right]\left(1-\epsilon 1_{W} (x) \right)\left( \mathrm{det}A\left( x\right) \right)^{\frac{1}{n-1}}d\mathrm{vol}\left( x\right)\\ \leq&\left| B^{2}\right|\left( 1-\sigma^{2}\right) \int_{\Omega} \left( \mathrm{det}A\right) ^{\frac{1}{n-1}}-\epsilon\left| B^{2}\right|\left( 1-\sigma^{2}\right) \int_{\Omega\cap W} \left( \mathrm{det}A\right) ^{\frac{1}{n-1}} . \end{align*} Dividing both sides by $1-\sigma$ and letting $\sigma\rightarrow1^{-}$, we have \begin{equation*} \begin{split} \left( n+2\right)\left| B^{n+2}\right|&\leq 2\left| B^{2}\right| \int_{\Omega} \left( \mathrm{det}A\right) ^{\frac{1}{n-1}} -2\epsilon\left| B^{2}\right|\int_{\Omega\cap W} \left( \mathrm{det}A\right) ^{\frac{1}{n-1}}\\ &\leq 2\left| B^{2}\right| \int_{\Sigma} \left( \mathrm{det}A\right) ^{\frac{1}{n-1}} -2\epsilon\left| B^{2}\right|\int_{\Omega\cap W} \left( \mathrm{det}A\right) ^{\frac{1}{n-1}}\\ &=\left( n+2\right)\left| B^{n+2}\right|-2\epsilon\left| B^{2}\right|\int_{\Omega\cap W} \left( \mathrm{det}A\right) ^{\frac{1}{n-1}}. \end{split} \end{equation*} Since $\Omega\cap W$ is a nonempty open set in $\Sigma$ and $A>0$, this is a contradiction. Therefore, the lemma follows.
\end{proof} \begin{lemma} $\Rmnum{2}\left( x\right)\equiv0$, $\mathrm{div}_{\Sigma}A(x)\equiv 0$, $D_{\Sigma}^{2}u\left( x\right) > 0$, $\mathrm{det}\left(D_{\Sigma}^{2}u\left( x\right) \right)= \left( \mathrm{det}A(x)\right) ^{\frac{1}{n-1}}$ and $A(x)=\mathrm{cof} D^{2}_{\Sigma}u(x)$ for all $x\in \Sigma$. Moreover, $u\in C^{\infty}(\Sigma)$. \end{lemma} \begin{proof} Given a point $\bar{x}\in \Omega$, since $\left| \nabla^{\Sigma}u\left( \bar{x}\right) \right|^{2}<1$, we have $\left| \nabla^{\Sigma}u\left( \bar{x}\right) \right|^{2}+\left| \bar{y}\right| ^{2}=1$ for all $\bar{y}\in\{y\in T_{\bar{x}}^{\perp}\Sigma :\left| y\right| ^{2}=1-\left| \nabla^{\Sigma}u\left( \bar{x}\right) \right|^{2}\}$. By Lemma 4.3, we obtain $D_{\Sigma}^{2}u\left( \bar{x}\right) -\left\langle \Rmnum{2}\left( \bar{x}\right), \bar{y}\right\rangle\geq 0$ and \begin{equation} \mathrm{det}\left(D_{\Sigma}^{2}u\left( \bar{x}\right) -\left\langle \Rmnum{2}\left( \bar{x}\right), \bar{y}\right\rangle\right)= \left( \mathrm{det}A(\bar{x})\right) ^{\frac{1}{n-1}}. \end{equation} Similarly to the proof of Lemma 3.3, since $\left| \nabla^{\Sigma}u\left( \bar{x}\right) \right|^{2}+\left| \bar{y}\right| ^{2}=1$, $A(\bar{x})>0$ and $D_{\Sigma}^{2}u\left( \bar{x}\right) -\left\langle \Rmnum{2}\left( \bar{x}\right), \bar{y}\right\rangle\geq 0$, we have \begin{equation*} \mathrm{det}\left( D_{\Sigma}^{2}u\left( \bar{x}\right) -\left\langle \Rmnum{2}\left(\bar{ x}\right), \bar{y}\right\rangle\right)\leq\left( \mathrm{det}A\left( \bar{x}\right)\right) ^{\frac{1}{n-1}} \end{equation*} and the equality holds. Thus there exist $\mu\geq 0$ and $\nu \geq 0$ such that \begin{align} &\mathrm{div}_{\Sigma}A(\bar{x})=-\mu\nabla^{\Sigma}u\left( \bar{x}\right) ,\\ &\left\langle A(\bar{x}), \Rmnum{2}\left( \bar{x}\right)\right\rangle=-\mu \bar{y},\\ &A\left( \bar{x}\right) \circ\left( D_{\Sigma}^{2}u\left( \bar{x}\right) -\left\langle \Rmnum{2}\left( \bar{x}\right), \bar{y}\right\rangle\right)=\nu g_{\Sigma}(\bar{x}). \end{align} We conclude that $\nu=\left( \mathrm{det}A(\bar{x})\right) ^{\frac{1}{n-1}}$ and \begin{equation} D_{\Sigma}^{2}u\left( \bar{x}\right) -\left\langle \Rmnum{2}\left( \bar{x}\right), \bar{y}\right\rangle= \left( \mathrm{det}A(\bar{x})\right) ^{\frac{1}{n-1}}A^{-1}(\bar{x}) \end{equation} from (4.3) and (4.6). Replacing $\bar{y}$ by $-\bar{y}$ gives \begin{equation} D_{\Sigma}^{2}u\left( \bar{x}\right) -\left\langle \Rmnum{2}\left( \bar{x}\right), -\bar{y}\right\rangle= \left( \mathrm{det}A(\bar{x})\right) ^{\frac{1}{n-1}}A^{-1}(\bar{x}). \end{equation} Combining (4.7) and (4.8), we obtain $D_{\Sigma}^{2}u\left( \bar{x}\right) = \left( \mathrm{det}A(\bar{x})\right) ^{\frac{1}{n-1}}A^{-1}(\bar{x})$ and $\left\langle \Rmnum{2}\left( \bar{x}\right), \bar{y}\right\rangle=0$ for all $\bar{x}\in \Omega$ and $\bar{y}\in\{y\in T_{\bar{x}}^{\perp}\Sigma :\left| y\right| ^{2}=1-\left| \nabla^{\Sigma}u\left( \bar{x}\right) \right|^{2} \}$. Since $\bar{y}$ ranges over the whole sphere $\{y\in T_{\bar{x}}^{\perp}\Sigma :\left| y\right| ^{2}=1-\left| \nabla^{\Sigma}u\left( \bar{x}\right) \right|^{2} \}$ and $u\in C^{2}(\Sigma)$, by Lemma 4.1 we have \begin{equation} D_{\Sigma}^{2}u\left( x\right) = \left( \mathrm{det}A(x)\right) ^{\frac{1}{n-1}}A^{-1}(x)>0 \end{equation} and \begin{equation} \Rmnum{2}\left( x\right)\equiv0 \end{equation} for all $x\in \Sigma$. Substituting (4.10) into (4.3) and using Lemma 4.1, we obtain \begin{equation*} \mathrm{det}\left(D_{\Sigma}^{2}u\left( x\right) \right)= \left( \mathrm{det}A(x)\right) ^{\frac{1}{n-1}} \end{equation*} for all $x\in \Sigma$.
By substituting (4.10) into (4.5), we obtain $\mu=0$, which, together with (4.4) and Lemma 4.1, implies \begin{equation} \mathrm{div}_{\Sigma}A(x)=0 \end{equation} for all $x\in \Sigma$. Combining (4.3), (4.9) and (4.10), we obtain \begin{equation*} A=\mathrm{cof} D^{2}_{\Sigma}u. \end{equation*} By substituting (4.10) and (4.11) into (3.2), we obtain \begin{equation*} \left\{\begin{aligned} &\mathrm{div}_{\Sigma}\left( A\left( \nabla^{\Sigma}u\right) \right) \left( x\right) =n\left( \mathrm{det}A\left( x\right) \right) ^{\frac{1}{n-1}} ,\ \mathrm{in}\ \Sigma\backslash\partial\Sigma, \\ &\left\langle A\left( \nabla^{\Sigma}u\right) \left( x\right),\nu\left( x\right) \right\rangle =\left| A\left( \nu\left( x\right)\right) \right|,\ \mathrm{on}\ \partial\Sigma. \end{aligned} \right. \end{equation*} Since $A$ is smooth, standard elliptic regularity theory implies $u\in C^{\infty}(\Sigma)$. This completes the proof. \end{proof} By Lemma 4.1, $\nabla^{\Sigma}u$ maps $\Sigma$ into the closure of $B^{n}\subset \mathbb{R}^{n}$, and, arguing as in the proof of Lemma 3.1, $\nabla^{\Sigma}u(\Omega)=B^{n}$. Since $D^{2}_{\Sigma}u>0$, $\nabla^{\Sigma}u$ is a local homeomorphism. We conclude the following. \begin{lemma} $\nabla^{\Sigma}u(\Sigma)=\overline{B^{n}}$. \end{lemma} \emph{Proof of Theorem 1.4.} The necessity follows from Lemma 4.4 and Lemma 4.5. The sufficiency follows from Theorem 1.2. $\Box$\\ \end{document}
\begin{document} \title{The Qupit Stabiliser ZX-travaganza: Simplified Axioms, Normal Forms and Graph-Theoretic Simplification} \begin{abstract} We present a smorgasbord of results on the stabiliser ZX-calculus for odd prime-dimensional qudits (i.e.\@ \emph{qupits}). We derive a simplified rule set that closely resembles the original rules of the qubit ZX-calculus. Using these rules, we demonstrate analogues of the spider-removing local complementation and pivoting rules. This allows for efficient reduction of diagrams to the \emph{affine with phases} normal form. We also demonstrate a reduction to a unique form, providing an alternative and simpler proof of completeness. Furthermore, we introduce a different reduction to the \emph{graph state with local Cliffords} normal form, which leads to a novel layered decomposition for qupit Clifford unitaries. Additionally, we propose a new approach to handle scalars formally, closely reflecting their practical usage. Finally, we have implemented many of these findings in \texttt{DiZX}, a new open-source Python library for qudit ZX-diagrammatic reasoning. \end{abstract} \section{Introduction} \label{sec:introduction} A helpful tool to reason about quantum computation is the \emph{ZX-calculus}~\cite{coeckeInteractingQuantumObservables2011,coeckeInteractingQuantumObservables2008}, a graphical language which can represent any qubit computation. It has been used, for example, in measurement-based quantum computing~\cite{duncanRewritingMeasurementBasedQuantum2010, backensThereBackAgain2021, mcelvanneyCompleteFlowpreservingRewrite2022}, error-correcting codes~\cite{duncanVerifyingSteaneCode2014, garvieVerifyingSmallestInteresting2018, beaudrapZXCalculusLanguage2020}, quantum circuit optimisation~\cite{debeaudrapFastEffectiveTechniques2020, duncanGraphtheoreticSimplificationQuantum2020, kissingerReducingNumberNonClifford2020}, classical simulation~\cite{kissingerClassicalSimulationQuantum2022, codsiClassicallySimulatingQuantum2023, laakkonenGraphicalSATAlgorithm2022}, quantum natural language processing~\cite{coeckeFoundationsNearTermQuantum2020, meichanetzidisQuantumNaturalLanguage2021}, quantum chemistry~\cite{shaikhHowSumExponentiate2022}, and quantum machine learning~\cite{wangDifferentiatingIntegratingZX2022, zhaoAnalyzingBarrenPlateau2021}. All the above results use the \emph{qubit} ZX-calculus, but recent years have seen a surge of interest in studying quantum computation using $d$-dimensional systems, called \emph{qudits}. Qudit-based quantum computation has been experimentally realised in a variety of physical systems, such as ion traps~\cite{ringbauerUniversalQuditQuantum2022, hrmoNativeQuditEntanglement2023}, photonic devices~\cite{chiProgrammableQuditbasedQuantum2022}, and superconducting devices~\cite{blokQuantumInformationScrambling2021, yeCircuitQEDSinglestep2018, yurtalanImplementationWalshHadamardGate2020, hillRealizationArbitraryDoublycontrolled2021, gossHighfidelityQutritEntangling2022}. On the theory side, there has been work on translating qubit results to qudits in quantum algorithms~\cite{wangQuditsHighDimensionalQuantum2020}, fault-tolerant quantum computing~\cite{gottesmanFaultTolerantQuantumComputation1999, campbellEnhancedFaultTolerantQuantum2014}, quantum communication~\cite{cozzolinoHighDimensionalQuantumCommunication2019}, and more~\cite{desilvaEfficientQuantumGate2021, gokhaleAsymptoticImprovementsQuantum2019, bocharovFactoringQutritsShor2017, nikolaevaDecomposingGeneralizedToffoli2022}.
This raises the question of how we can use the ZX-calculus to reason about qudit systems. There exist several variations of the ZX-calculus that extend it to higher-dimensional qudits. Many have focussed on the specific case of qutrit systems~\cite{wangQutritZXcalculusComplete2018, gongEquivalenceLocalComplementation2017, townsend-teagueSimplificationStrategiesQutrit2022}, with some applications to quantum computation~\cite{yehConstructingAllQutrit2022, vandeweteringPhaseGadgetCompilation2022} and complexity theory~\cite{townsend-teagueSimplificationStrategiesQutrit2022}. A few proposals have also been made which capture all finite or infinite dimensions~\cite{wangQufiniteZXcalculusUnified2022,poorCompletenessArbitraryFinite2023, defeliceLightmatterInteractionZXW2023}, but these do not have many of the nicer features of the qubit calculus. Of particular importance to this paper is Ref.~\cite{boothCompleteZXcalculiStabiliser2022}, which constructed a calculus for odd prime dimensions while retaining many of these nice features, and proved it complete for the stabiliser fragment. However, most of these calculi were developed very recently, and not much work has been done yet on how we could effectively use the rewrites in practice. To understand the usefulness of rewrite rules, we can take a look at the original qubit calculus. In qubit ZX, we can distinguish between `standard' rules --- spider fusion, identity removal, state copying, bialgebra, and colour change --- and `harder' rules --- supplementarity, Euler angle colour permutation, and the rules dealing with the triangle generator. The standard rules, with minor modifications, were those originally discovered~\cite{coeckeInteractingQuantumObservables2008}, and they are the most commonly used in practice. For instance, all the rewrites used in the PyZX compiler~\cite{kissingerPyZXLargeScale2020} can be proved using just these standard rules~\cite{duncanGraphtheoreticSimplificationQuantum2020}. These rules are sufficient to prove completeness for the \emph{stabiliser fragment} of the ZX-calculus~\cite{backensZXcalculusCompleteStabilizer2014}, while the harder rules were developed to prove completeness for larger fragments. This suggests that carefully studying the qudit stabiliser fragment could be a fruitful avenue for developing useful qudit ZX rewrite rules. Recall that the stabiliser fragment corresponds to Clifford computation, which is an efficiently simulable subset of quantum computation~\cite{gottesmanHeisenbergRepresentationQuantum1998} that forms the basis of many quantum protocols, such as error-correcting codes~\cite{kissingerPhasefreeZXDiagrams2022, khesinGraphicalQuantumCliffordencoder2023}, superdense coding~\cite{bennettCommunicationOneTwoparticle1992}, quantum teleportation~\cite{bennettTeleportingUnknownQuantum1993}, and quantum key distribution~\cite{bennettQuantumCryptographyPublic2014}. Completeness of the qubit stabiliser fragment of ZX was proved in~\cite{backensZXcalculusCompleteStabilizer2014}, while for qutrits it was proved in~\cite{wangQutritZXcalculusComplete2018}. Recently, completeness was proved for the stabiliser fragment for any odd-dimensional prime qudit dimension in~\cite{boothCompleteZXcalculiStabiliser2022}.
The proofs of all these results work essentially the same way: first, they show that any state diagram can be reduced to a Graph State with Local Cliffords (GSLC), and then they show that any pair of GSLCs implementing the same state can be rewritten to a common reduced form. In this paper, we take this last complete calculus for prime-dimensional qudits~\cite{boothCompleteZXcalculiStabiliser2022} as a starting point, and extend it in several ways: \begin{enumerate} \item We simplify the rules to a smaller set that has a clearer relation to the original qubit stabiliser calculus, and for most of which we can prove the necessity. \item We incorporate a well-tempered axiomatisation for our calculus following the convention of~\cite{debeaudrapWelltemperedZXZH2021}, removing most of the scalars in our rewrite rules and thus simplifying our calculations. \item We introduce a new approach to handle scalars, formalising the often-used convention of writing scalar numbers alongside diagrams. \item We discover the qupit versions of the spider-removing \emph{local complementation} and \emph{pivoting} rules found in~\cite{ duncanGraphtheoreticSimplificationQuantum2020} and generalised to qutrits in~\cite{townsend-teagueSimplificationStrategiesQutrit2022}. These rules serve as the foundation for optimisation and simulation strategies in the qubit setting~\cite{ duncanGraphtheoreticSimplificationQuantum2020, kissingerReducingNumberNonClifford2020, debeaudrapFastEffectiveTechniques2020,kissingerPyZXLargeScale2020}. Our findings demonstrate that these strategies can be adapted to work for prime-dimensional qudits, thus extending their applicability beyond qubits. \item Using these rewrite rules, we simplify the original completeness proof of~\cite{boothCompleteZXcalculiStabiliser2022} by reducing the number of case distinctions required. \footnote{ In addition to being aesthetically and ergonomically preferable, reducing the number of case distinctions also makes the proof more easily verifiable. During the preparation of this manuscript, we identified and communicated several errors and omissions in~\cite{ boothCompleteZXcalculiStabiliser2022}, which were subsequently fixed. } Specifically, we demonstrate that these rewrites reduce diagrams to a normal form that we call the \emph{affine with phases} (AP) form, which originally appeared in~\cite{ dehaeneCliffordGroupStabilizer2003}. Then, given an AP-form diagram, we show how to reduce it further to a unique form, resulting in completeness. \footnote{ A similar normal form for qubits was independently found in~\cite{mcelvanneyCompleteFlowpreservingRewrite2022}. It is worth noting that our formulation was already employed for qubits in the Oxford Quantum Software course before the preprint~\cite{mcelvanneyCompleteFlowpreservingRewrite2022} appeared online. } \item Additionally, we demonstrate how to rewrite diagrams into a \emph{graph-state with local Cliffords} (GSLC) form, which yields a layered decomposition for Clifford unitaries similar to the one proposed for qubits in~\cite{duncanGraphtheoreticSimplificationQuantum2020}. \end{enumerate} Our findings highlight that qupit stabiliser diagrams share many familiar properties with their qubit counterparts. Furthermore, many results regarding optimisation and normal forms extend seamlessly to the odd prime-dimensional qudit setting.
Finally, we have implemented many of these findings in \texttt{DiZX}, a new open-source Python library for qudit ZX-diagrammatic reasoning based on \texttt{PyZX}~\cite{kissingerPyZXLargeScale2020}. \footnote{See \url{https://github.com/jvdwetering/dizx}.} \paragraph{Related work} Subsequent to submission, we were made aware of a related, parallel work, Ref.~\cite{debeaudrapSimpleZXZH2023}, which also concerns well-tempered axiomatisations for qudit ZX-calculi. \section{The qupit Clifford ZX-calculus} \label{sec:qupit-cliff} In this section, we introduce the qudit stabiliser ZX-calculus for odd prime dimensions. We let $p$ denote an arbitrary odd prime, and $\mathbb{Z}_p = \mathbb{Z}/p\mathbb{Z}$ the ring of integers modulo~$p$. Since $p$ is prime, $\mathbb{Z}_p$ is a field, implying that every non-zero element in $\mathbb{Z}_p$ has a multiplicative inverse. We denote the group of units (i.e.\@ invertible elements) as $\mathbb{Z}_p^* \coloneqq \mathbb{Z}_p \setminus \{0\}$. We also define the Legendre symbol, for $x \in \mathbb{Z}_p^*$, as follows: \begin{equation} \label{eq:legendre_characteristic} \left(\frac{x}{p}\right) = \begin{cases} 1 \qif \exists y \in \mathbb{Z}_p^* \text{ s.t. } x = y^2; \\ -1 \quad \text{otherwise}; \end{cases} \end{equation} The Hilbert space of a qupit is $\mathcal{H} = \operatorname{span}\{\ket{m} \mid m \in \mathbb{Z}_p\} \cong \C^p$. Letting $\omega \coloneqq e^{i\frac{2\pi}{p}}$ be a primitive $p$-th root of unity, we can write down the following standard operators $Z$ and $X$, occasionally known as the \emph{clock} and \emph{shift} operators: $Z \ket{m} \coloneqq \omega^{m} \ket{m}$ and $X \ket{m} \coloneqq \ket{m+1}$ for any $m \in \mathbb{Z}_p$. Notably, $ZX = \omega XZ$. A \emph{Pauli operator} is defined as any operator of the form $\omega^k X^a Z^b$ for $k, a, b \in \mathbb{Z}_p$. We call a Pauli operator \emph{trivial} if it is proportional to the identity. Each Pauli operator has a spectrum given by $\{\omega^k \mid k \in \mathbb{Z}_p\}$, and we denote $\ket{k : Q}$ as the eigenvector of a Pauli operator~$Q$ associated with the eigenvalue $\omega^k$. It follows from the definition of $Z$ that we can identify $\ket{k:Z} = \ket{k}$. The collection of all Pauli operators is denoted $\mathscr{P}_1$ and called the \emph{Pauli group}. For $n \in \mathbb{N}^*$, the \emph{generalised Pauli group} $\mathscr{P}_n$ is defined as $\bigotimes_{k=1}^n \mathscr{P}_1$. Of particular importance to us are the \emph{(generalised) Clifford groups}. These groups are defined for each $n \in \mathbb{N}^*$ as the (unitary) normaliser of $\mathscr{P}_n$. In other words, a unitary operator $C$ on $\mathcal{H}^{\otimes n}$ belongs to the Clifford group if, for any $P \in \mathscr{P}_n$, the conjugation $CPC^\dagger$ is also an element of $\mathscr{P}_n$. While every Pauli operator is Clifford, there exist non-Pauli Clifford operators. In the case of prime qudit dimensions, the group of Clifford unitaries can be generated by three gates: the \emph{Hadamard gate} defined as $H \coloneqq \sum_{k \in \mathbb{Z}_p} \dyad{k:Z}{k:X}$, the $S$ gate defined as $S \coloneqq \sum_{k \in \mathbb{Z}_p} \omega^{2^{-1} k(k-1)} \dyad{k:Z}{k:Z}$, and the $CX$ gate defined as $CX \coloneqq \sum_{j,k \in \mathbb{Z}_p} \dyad{j,j+k:Z}{j,k:Z}$~\cite{gottesmanFaultTolerantQuantumComputation1999}. Note that in this context the Hadamard gate is sometimes also just called the \emph{Fourier transform}.
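For concreteness, the following self-contained \texttt{numpy} sketch builds these operators for a small prime and checks the relations quoted above, namely $ZX=\omega XZ$, $H^{4}=I$, and the Clifford property that conjugation by $H$ and $S$ sends Paulis to Paulis up to a phase. It is purely illustrative: it is not part of the \texttt{DiZX} API, and the helper functions (\texttt{proportional}, \texttt{is\_pauli}) are ours.
\begin{verbatim}
# Illustrative only: qupit clock/shift operators and Clifford generators.
import numpy as np

p = 5                                   # any odd prime
w = np.exp(2j * np.pi / p)              # primitive p-th root of unity
inv2 = pow(2, -1, p)                    # 2^{-1} mod p, e.g. 3 when p = 5

Z = np.diag([w ** m for m in range(p)])                 # Z|m> = w^m |m>
X = np.roll(np.eye(p), 1, axis=0)                       # X|m> = |m+1 mod p>
H = np.array([[w ** (j * k) for k in range(p)]          # Fourier transform
              for j in range(p)]) / np.sqrt(p)
S = np.diag([w ** (inv2 * k * (k - 1) % p) for k in range(p)])
CX = np.zeros((p * p, p * p))
for j in range(p):
    for k in range(p):
        CX[p * j + (j + k) % p, p * j + k] = 1          # |j,k> -> |j,j+k>

assert np.allclose(Z @ X, w * X @ Z)                    # ZX = w XZ
assert np.allclose(np.linalg.matrix_power(H, 4), np.eye(p))   # H^4 = I

def proportional(U, V):
    """True if U = c*V for some nonzero scalar c."""
    i = np.unravel_index(np.argmax(np.abs(V)), V.shape)
    c = U[i] / V[i]
    return not np.isclose(c, 0) and np.allclose(U, c * V)

def is_pauli(U):
    """True if U is proportional to X^a Z^b for some a, b."""
    return any(proportional(U, np.linalg.matrix_power(X, a)
                               @ np.linalg.matrix_power(Z, b))
               for a in range(p) for b in range(p))

assert is_pauli(H @ X @ H.conj().T)     # H X H^dag = Z
assert is_pauli(S @ X @ S.conj().T)     # S X S^dag = XZ
\end{verbatim}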
Stabiliser quantum mechanics is operationally described as a fragment of quantum mechanics where the allowed operations include initialisations and measurements in the eigenbases of Pauli operators, as well as unitary operations from the generalised Clifford groups. \subsection{Generators} We define the symmetric monoidal category $\mathbb{Z}Xp$ as having objects $\mathbb{N}$ and morphisms generated by the following diagrams, for any $x,y \in \mathbb{Z}_p$ and $s \in \C$: \begin{align*} \tikzfig{generators/g-spider} &: m \to n & \tikzfig{generators/r-spider} &: m \to n & \tikzfig{generators/id} &: 1 \to 1 & \tikzfig{generators/hadamard} &: 1 \to 1 & \\ \tikzfig{generators/cup} &: 0 \to 2 & \tikzfig{generators/cap} &: 2 \to 0 & \tikzfig{generators/braid} &: 2 \to 2 & \tikzfig{generators/scalar} &: 0 \to 0 & \end{align*} Furthermore, when $x = y = 0$, we call the spider \emph{phase-free} and we denote it without label as \tikzfig{new/phase_free_spider}, and similarly for the X-spider. In addition to the \enquote{standard} generators of ZX, we have introduced a new generator represented by a light-grey bubble with a scalar written inside it, which we refer to as an \emph{explicit scalar}. These explicit scalars offer a convenient way to streamline the often cumbersome reasoning related to scalars that is typically involved in many graphical completeness papers. Note that the presence of the red X-spider as a generator is in principle unnecessary since the Z-spider surrounded by Hadamard boxes is equivalent to it. However, our goal is not to provide a minimal set of generators, but rather a convenient one. Diagrams in our framework can be composed in two ways: sequentially, by connecting output wires to input wires, or vertically, by \enquote{stacking} diagrams, corresponding to the tensor product operation which is defined as $n \otimes m = n + m$ on objects. \subsection{Interpretation}\label{subsec:interpretation} The interpretation of a $\mathbb{Z}Xp$-diagram is defined on objects as $\interp{m} \coloneqq \C^{p^m}$, and on the generators as: \begin{align*} \begin{aligned} \interp{\tikzfig{generators/g-spider}} &= p^{\frac{n+m-2}{4}}\sum_{k\in\mathbb{Z}_p} \omega^{2^{-1}(xk + yk^2)} \ket{k:Z}^{\otimes n} \bra{k:Z}^{\otimes m} & \interp{\tikzfig{generators/id}} &= \sum_{k\in\mathbb{Z}_p} \dyad{k:Z}{k:Z} & \quad \\ \interp{\tikzfig{generators/r-spider}} &= p^{\frac{n+m-2}{4}}\sum_{k\in\mathbb{Z}_p} \omega^{2^{-1}(xk + yk^2)} \ket{-k:X}^{\otimes n} \bra{k:X}^{\otimes m} & \interp{\tikzfig{generators/hadamard}} &= \sum_{k\in\mathbb{Z}_p} \dyad{k:Z}{k:X } & \end{aligned} \\ \begin{aligned} \interp{\!\!\!\tikzfig{generators/cup}\ } &= \sum_{k\in\mathbb{Z}_p} \ket{kk:Z} & \quad \interp{\ \tikzfig{generators/cap}\!\!\!} &= \sum_{k\in\mathbb{Z}_p} \bra{kk:Z} & \quad \interp{\ \tikzfig{generators/braid}\ } &= \sum_{k,\ell\in\mathbb{Z}_p} \dyad{k, \ell:Z}{\ell,k:Z} \end{aligned} \end{align*} and \(\interp{\tikzfig{generators/scalar}} = s\). There are a couple of things we should remark about this interpretation. First, the definition of the X-spider does not follow the standard convention. It is defined in such a way that it maps X-eigenstates to their additive inverse (modulo $p$). This definition is used in order to satisfy the property of \emph{flexsymmetry}~\cite{caretteWhenOnlyTopology2021,caretteWieldingZXcalculusFlexsymmetry2021}, which allows us to treat diagrams as undirected graphs. 
Second, note that the interpretation of phases on the spiders has an additional $2^{-1}$ factor which is necessary for the later stated \hyperref[fig:axioms]{\textsc{Euler}}\xspace and \hyperref[fig:axioms]{\textsc{Gauss}}\xspace axioms to be sound. This factor is considered modulo $p$, so for instance, for $p=5$ we have $2^{-1} \equiv 3$. Finally, the spiders are defined with a global scalar factor of $p^{\frac{n+m-2}{4}}$ to follow the \emph{well-tempered normalisation} convention of~\cite{debeaudrapWelltemperedZXZH2021}. This allows us to present the axioms later on with significantly fewer scalar factors floating around. While the conventional qudit ZX-calculus represents spiders using a $(d - 1)$-dimensional vector~\cite{ranchinDepictingQuditQuantum2014}, we employ a different approach by leveraging a useful property of the Clifford group for prime-dimensional qudits: the phases of its spiders are $p^m$-th roots of unity raised to polynomial functions with a maximum degree of $2$~\cite{cuiDiagonalGatesClifford2017}. This property enables us to capture the essence of Clifford spiders using only two parameters: the coefficients of the linear and square terms. As a result, we develop a more elegant and intuitive framework for reasoning about stabiliser maps, requiring only two parameters in any odd-prime dimension. To establish a connection between our convention and the original qudit ZX-calculus, we define a mapping where a spider with phase parameter $(x, y)$ corresponds to the spider described in~\cite{ranchinDepictingQuditQuantum2014} with parameter $\overrightarrow{\alpha} \coloneqq (\alpha_1, \cdots, \alpha_{d-1})$, where $\alpha_k = \omega^{2^{-1}(x k + y k^2)}$. For any $a \in \mathbb{Z}_p$, the diagrams \tikzfig{new/spider-z-pauli} and \tikzfig{new/spider-x-pauli} correspond to the single qupit Pauli $Z^a$ and $X^a$ gates, respectively. Similarly, the diagrams \tikzfig{new/spider-z-clifford} and \tikzfig{new/spider-x-clifford} correspond to Clifford unitaries for any $a, b \in \mathbb{Z}_p$. As a result, we designate spiders with a phase $(a, 0)$ as \emph{Pauli spiders}, and spiders with a phase $(a, b)$ as \emph{Clifford spiders}. Furthermore, spiders with a phase $(0, b)$ are referred to as \emph{pure-Clifford spiders}, while spiders with a phase $(a, z)$ where $z \neq 0$ are termed \emph{strictly-Clifford spiders}. Lastly, we designate the diagram \tikzfig{new/spider-antipode} as the \emph{antipode} since it implements the map $\ket{k:Z} \mapsto \ket{-k:Z}$. Contrary to the qubit case, the qudit Hadamard gate is not self-inverse. Instead, it follows the property that four successive applications of the Hadamard gate results in the identity, that is, $H^4 = I$. Therefore, the inverse of the Hadamard gate is given by $H^3$. To maintain the clarity and simplicity of diagrams, we introduce the shorthand notation $\tikzfig{equations/hadamard-inverse-simple}$ to represent the inverse of the Hadamard box. \subsection{Axioms}\label{subsec:axioms} We present the axioms of our calculus in \cref{fig:axioms}. In addition to these concrete rules, our calculus also follows the structural rules of a compact-closed PROP\@. This property implies that \enquote{only connectivity matters}, allowing us to treat our diagrams as undirected graphs while preserving their interpretation as linear maps. \begin{figure} \caption{The rewrite rules of the qudit stabiliser ZX-calculus for any odd prime dimension $p$. 
Here $a,b,c,d \in \mathbb{Z}_p$.} \label{fig:axioms} \end{figure} These rewrite rules are essentially a simplified version of the complete set of rewrite rules found in~\cite{boothCompleteZXcalculiStabiliser2022}. We can show these rules are equivalent to those found in that paper by deriving the missing axioms. \begin{proposition}\label{prop:missing-axioms} For any \( z \in \mathbb{Z}_p^* \) and \( a,c,d \in \mathbb{Z}_p \), $\mathbb{Z}Xp$ proves the following axioms from~\cite{boothCompleteZXcalculiStabiliser2022}: \begin{equation*} \tikzfig{figures/equations/axioms-diff} \end{equation*} \end{proposition} \noindent Note that all the proofs in the paper can be found in the appendices. We also change the presentation of scalars, but we can rely on the reduction in~\cite{boothCompleteZXcalculiStabiliser2022} of the scalar fragment to the elementary scalar fragment: \begin{definition} \label{def:elementary-scalar} An \emph{elementary scalar} is a diagram \(A \in \mathbb{Z}Xp[0,0]\) which is a (possibly empty) tensor product of diagrams from \(\{\tikzfig{figures/new/elementary_scalar_scalar}, \tikzfig{figures/new/elementary_scalar_omega}, \tikzfig{figures/new/elementary_scalar_sqrt_p}, \tikzfig{figures/new/elementary_scalar_sqrt_p_inv}, \tikzfig{figures/new/elementary_scalar_minus} \mid \lambda \in \C, s \in \mathbb{Z}_p\}\). \end{definition} \begin{restatable}{lemma}{elementaryScalarCompleteness} \label{lem:elementary_scalar_completeness} $\mathbb{Z}Xp$ is complete for elementary scalars. Explicitly, if \(s: 0 \to 0\) is an elementary scalar, then \tikzfig{figures/scalar_completeness}. \end{restatable} With these results, we can see that every derivation of~\cite{boothCompleteZXcalculiStabiliser2022} is also valid in our calculus, so that the rules of \cref{fig:axioms} are complete. For this reason, we freely use the lemmas of~\cite{boothCompleteZXcalculiStabiliser2022} in the rest of this paper. In deriving \textsc{Mult} and \textsc{Shear} in \cref{prop:missing-axioms}, as well as in the reduction to AP-form of \cref{sec:ap-form}, we make extensive use of the following \enquote{strictly-Clifford} state colour-change rules: \begin{restatable}{lemma}{lemStrictCliffordColour} \label{lem:state_colour_change} Strictly-Clifford states can all be represented both using Z- and X-spiders: for any \(a \in \mathbb{Z}_p\) and \(b \in \mathbb{Z}_p^*\), \begin{equation*} \tikzfig{figures/euler/state_colour_change} \end{equation*} \end{restatable} \noindent This lemma gives a qupit version of the well-known qubit ZX rule $\tikzfig{qubit-Y-rule}$. On the way to proving this lemma, we also prove the qupit Clifford version of \emph{supplementarity}, originally introduced for the qubit case in Ref.~\cite{perdrixSupplementarityNecessaryQuantum2016}: \begin{restatable}{lemma}{lemSupplementarity} \label{lem:fork_identity} For any \(b \in \mathbb{Z}_p^*\), \begin{equation*} \tikzfig{figures/euler/fork_identity} \end{equation*} \end{restatable} \noindent A generalisation of this rule is known to be necessary, but not sufficient, for the completeness of the Clifford+T fragment in the qubit case~\cite{perdrixSupplementarityNecessaryQuantum2016, jeandelZXCalculusCyclotomicSupplementarity2017}. \subsection{A word on scalars} Handling scalars in a graphical language is always a delicate issue. Scalars are essential to guarantee the soundness of rewriting rules but can sometimes be seen as a cumbersome bureaucracy that can be omitted in practice and recovered through a quick normalisation check at the end of a calculation.
As a result, some textbooks prefer to work up to non-zero scalars~\cite{coeckePicturingQuantumProcesses2017}, and in~\cite{backensZXcalculusCompleteStabilizer2014}, a first proof of completeness is presented without scalars, which are addressed in a subsequent article~\cite{backensMakingStabilizerZXcalculus2015}. There is no perfect solution to this situation. In this paper, we adopt an intermediary approach that can be extended to other graphical languages: the introduction of grey scalar boxes. This approach bears resemblance to how the ZH-calculus handles scalars~\cite{backensZHCompleteGraphical2019}, although in the ZH-calculus, the scalar boxes are directly representable within the calculus itself, requiring no extension as described here. Given any prop $\textbf{P}$, the set of scalars $\textbf{P}[0,0]$ forms a commutative monoid~\cite{heunenCategoriesQuantumTheory2019}. We view $\textbf{P}[0,0]$ as a monoidal category with a single object, where the $\otimes$ and $\circ$ operations are identified. We then consider the product category $\textbf{P}[0,0] \times \textbf{P}$, which also forms a prop, with arrows represented as pairs $(s,f)$, where $f: n \to m$ is an arrow of $\textbf{P}$ and $s$ is a scalar. Graphically, such a pair is depicted as a diagram representing $f$ together with a floating grey scalar box containing $s$. The principal equations governing the behaviour of scalar boxes are then \hyperref[fig:axioms]{\textsc{One}}\xspace and \hyperref[fig:axioms]{\textsc{Prod}}\xspace. In $\textbf{P}[0,0] \times \textbf{P}$, grey boxes and diagrams are treated independently. To achieve the desired axiomatisation of $\textbf{P}$, we need to quotient the equational theory by the equation \tikzfig{figures/scalar_completeness} for all $s: 0 \to 0$. This can be accomplished by introducing rules that guarantee the desired result for a family of well-chosen elementary scalars. Then it is enough to show that any diagram $0\to 0$ can be reduced to elementary scalars as we do in Lemma~\ref{lem:elementary_scalar_completeness}. \section{Normal forms} \label{sec:ap-form} In this section, we show that we can simplify stabiliser diagrams into two distinct normal forms: the \emph{affine with phases} (AP) form and the \emph{graph state with local Cliffords} (GSLC) form. The AP form can be efficiently transformed into a unique reduced form, offering an alternative proof of completeness. On the other hand, the GSLC form is particularly useful for rewriting and decomposing stabiliser unitaries. \subsection{Graph simplifications}\label{subsec:graph-simplifications} Before reducing the diagrams to our normal forms, we first need to simplify them into a \emph{graph-like} form. In this form, the diagrams consist only of Z-spiders and \emph{H-edges}. To define the qupit graph-like diagrams, we first define \emph{H-boxes} as: \begin{equation*} \label{eq:hbox_def} \tikzfig{equations/weighted_hadamard_multiedge} \end{equation*} where $x \in \mathbb{Z}_p$ is the \emph{weight} of the H-box. Unlike the \emph{multipliers} in~\cite{boothCompleteZXcalculiStabiliser2022}, H-boxes are undirected; thus, we can treat diagrams that contain only generators and H-boxes as undirected (weighted) graphs.
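As a concrete illustration of this weighted-graph viewpoint, the following Python sketch stores such a diagram as phase-labelled vertices and weighted edges, and transcribes the phase and weight updates of the spider-removing local complementation simplification stated later in this subsection (\cref{lem:lc}). The sketch is purely illustrative and ours (it does not reflect \texttt{DiZX}'s actual data structures); global scalar factors are ignored, and we assume that weight-$0$ H-edges may simply be dropped.
\begin{verbatim}
# Illustrative only: a graph-like qupit diagram as an undirected weighted graph.
p = 5

phase = {0: (1, 2), 1: (0, 0), 2: (3, 1)}      # vertex -> phase pair (a, b)
weight = {frozenset({0, 1}): 2,                # H-edge -> weight in Z_p
          frozenset({0, 2}): 4}

def remove_by_local_complementation(v):
    """Remove interior spider v with strictly-Clifford phase (a, z), z != 0,
    updating its neighbours as in the local complementation simplification:
    gamma_i = alpha_i - e_i*a*z^{-1}, delta_i = beta_i - z^{-1}*e_i^2,
    g_{i,j} = w_{i,j} - z^{-1}*e_i*e_j (all modulo p)."""
    a, z = phase.pop(v)
    assert z % p != 0, "only strictly-Clifford spiders are removed this way"
    zinv = pow(z, -1, p)
    nbrs = {u: weight.pop(frozenset({u, v}))           # neighbour -> weight e_u
            for u in list(phase) if frozenset({u, v}) in weight}
    for u, e in nbrs.items():                          # phase updates
        alpha, beta = phase[u]
        phase[u] = ((alpha - e * a * zinv) % p, (beta - zinv * e * e) % p)
    nb = list(nbrs)
    for i in range(len(nb)):                           # edge-weight updates
        for j in range(i + 1, len(nb)):
            key = frozenset({nb[i], nb[j]})
            new = (weight.get(key, 0) - zinv * nbrs[nb[i]] * nbrs[nb[j]]) % p
            if new:
                weight[key] = new
            else:
                weight.pop(key, None)   # assumption: weight-0 edges are dropped

remove_by_local_complementation(0)
print(phase, weight)
\end{verbatim}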
\begin{restatable}{proposition}{prophbox} \label{prop:hbox} $\mathbb{Z}Xp$ proves the following equations: \begin{equation*} \tikzfig{equations/weighted_hadamard} \end{equation*} \end{restatable} Since edges that contain H-boxes are central to the subsequent proofs, we define \emph{H-edges}, similarly to the qubit case, as a blue dashed line with the corresponding weight on top: \begin{equation} \label{eq:xhad-edge-def} \tikzfig{new/hadamard-edge-def} \end{equation} \begin{definition} A ZX-diagram is \emph{graph-like} when: \begin{enumerate} \item All spiders are Z-spiders. \item Z-spiders are only connected via H-edges. \item There are no self-loops. \item Every input or output is connected to a Z-spider. \item Every Z-spider is connected to at most one input or output. \end{enumerate} \end{definition} Using standard techniques~\cite{duncanGraphtheoreticSimplificationQuantum2020}, it is evident that any ZX-diagram can be transformed into a graph-like form. This transformation involves several steps: performing a colour change on all X-spiders, fusing all Z-spiders, removing self-loops, and introducing identity elements to ensure that each input and output is correctly connected to a Z-spider. Once in graph-like form, the diagram can be represented as an open, weighted graph, where the edge weights are elements of $\mathbb{Z}_p$ and each vertex is labelled by a phase $(a, b) \in \mathbb{Z}_p^2$. Now that we have a graph-like diagram, we can differentiate between \emph{boundary} spiders, those directly connected to an input or output, and \emph{interior} spiders, those that are only connected to other spiders. Subsequently, we demonstrate that many of the internal spiders can be removed from a diagram using similar techniques to the qubit case~\cite{ duncanGraphtheoreticSimplificationQuantum2020}. The local complementation simplification enables the removal of a strictly-Clifford interior spider by introducing phases and wires to the spiders it is connected to. This technique is analogous to the qubit version described in~\cite{duncanGraphtheoreticSimplificationQuantum2020}. \begin{restatable}[Local complementation simplification]{lemma}{lc} \label{lem:lc} For any $z \in \mathbb{Z}^*_p$ and for all $a, \alpha_i, \beta_i, e_i, w_{i,j} \in \mathbb{Z}_p$ where $i, j \in \{1, \ldots k\}$ such that $i < j$ we have: \begin{equation*} \tikzfig{new/local_compl} \end{equation*} Here $\gamma_i = \alpha_i - e_i a z^{\texttt{-} 1}$, $\delta_i = \beta_i - z^{\texttt{-} 1} e_i^2$, and $g_{i,j} = w_{i j} - z^{\texttt{-} 1} e_i e_j$. \end{restatable} We also have an analogue of the pivot rewrite rule. This rule enables us to eliminate connected interior Pauli spiders by introducing additional phases and connections to the spiders they are connected to. First, we prove a simplified version of pivoting: \begin{restatable}{lemma}{pivotpartial} \label{lem:pivot-partial} The following version of pivoting is derivable in $\mathbb{Z}Xp$: \begin{equation*} \tikzfig{new/partial_pivot} \end{equation*} Here $\epsilon \in \mathbb{Z}^*_p$ and all the other variables are allowed arbitrary values. \end{restatable} Then the general version can be derived from that: \begin{restatable}[Pivoting simplification]{lemma}{pivot} \label{lem:pivot} General pivoting is derivable in $\mathbb{Z}Xp$: \begin{equation*} \tikzfig{new/pivot} \end{equation*} Here again $\epsilon \in \mathbb{Z}^*_p$ with every other variable on the left-hand side allowed arbitrary values. 
On the right-hand side $\gamma_i = \alpha_i - \epsilon^{\texttt{-} 1} (a f_i + b e_i)$, $\delta_i = \beta_i - 2 \epsilon^{\texttt{-} 1} e_i f_i$, and $g_{i,j} = - \epsilon^{\texttt{-} 1} (e_i f_j + e_j f_i)$. \end{restatable} \subsection{AP-form}\label{subsec:ap-form} The above results suggest that through the application of local complementation and pivoting, it is possible to transform any state diagram (a diagram without inputs) into a graph-like diagram in which the only remaining internal spiders are Pauli spiders, and these are exclusively connected to boundary spiders. This is achieved through a two-step process. Firstly, any internal spider that is strictly Clifford is eliminated through local complementation. This ensures that only Pauli spiders remain internal. Secondly, given that the diagram contains only Pauli internal spiders, any connected pair of internal spiders can be removed using pivoting. We give a name to this type of diagram: \begin{definition} We say that a graph-like diagram is in \emph{Affine with Phases form} (AP-form) when: \begin{itemize} \item There are no inputs; \item The internal spiders are Pauli spiders; \item Internal spiders are only connected to boundary spiders. \end{itemize} \end{definition} We refer to this class of diagrams as \enquote{Affine with Phases} because they correspond to states described by an affine subspace of $Z$ basis states, with an additional phase function applied to the output. This characterisation is supported by the following lemma: \begin{restatable}{lemma}{apform} A general non-zero $n$-qupit diagram in AP-form is described by the diagram: \begin{equation} \label{eq:ap-form} \tikzfig{figures/new/ap_form} \end{equation} where $a_l, \alpha_i, \beta_i, e_{h,i}, f_{i,j} \in \mathbb{Z}_p$ with $l \in \{1, \ldots, k\}$ and $i,j \in \{1, \ldots, n\}$ such that $i < j$. The interpretation of this diagram is (up to some non-zero scalar) equal to a state \begin{equation} \label{eq:ap-form-poly} \sum_{E \vec{x} = \vec{a}} \omega^{\phi(\vec x)} \ket{\vec x} \end{equation} where $E$ is the weighted bipartite adjacency matrix of the internal and boundary spiders, $\vec a$ describes the Pauli phases of the internal spiders, and $\phi$ is a phase function that describes the connectivity and phases of the boundary spiders: \begin{equation*} \label{eq:ap-form-matrices} E = \begin{bmatrix} e_{1,1} & \cdots & e_{1,n} \\ e_{2,1} & \cdots & e_{2,n} \\ \vdots & & \vdots \\ e_{k,1} & \cdots & e_{k,n} \\ \end{bmatrix} \ ,\qquad \vec a = \begin{bmatrix} a_1 \\ \vdots \\ a_k \end{bmatrix} \ ,\qquad \phi(\vec x) = \sum_{\substack{i, j \in \{ 1, \ldots, n \} \\ i < j}} 2^{\texttt{-} 3} x_i \alpha_i + 2^{\texttt{-} 2} x_i^2 \beta_i - 2^{\texttt{-} 3} f_{i,j} x_i x_j \end{equation*} \end{restatable} Notably, states described by AP-form diagrams correspond to the stabiliser normal forms described in Ref.\@~\cite{vandennestClassicalSimulationQuantum2010}. With AP-form diagrams, we can prove a qupit version of the Gottesman-Knill theorem, which states that we can efficiently sample from the probability distribution of a stabiliser computation. Let us consider an AP-form diagram represented by $(E, \vec{a}, \phi)$. When we measure this state in the computational basis, we observe that the phase function $\phi$ has no impact on the measurement outcomes, allowing us to disregard it. Hence, we can describe the state as $N\sum_{E\vec{x} = \vec{a}} \ket{\vec{x}}$, where $N$ is a normalisation constant.
This state represents a uniform superposition of the states $\ket{\vec{x}}$ that satisfy the equation $E\vec{x} = \vec{a}$. To sample from such states, we need to generate solutions to this equation uniformly at random. Efficiently achieving this involves finding any solution $E\vec{x}' = \vec{a}$ and then obtaining a basis $\vec{v}_1, \ldots, \vec{v}_\ell$ for the linear space $\{E\vec{x} = \vec{0}\}$. We can then return $\vec{x}' + \sum_{i=1}^{\ell} b_i \vec{v}_i$, where the $b_i \in \mathbb{Z}_p$ are chosen uniformly at random. AP-form diagrams also enable us to provide an alternative, more direct proof of the completeness of $\mathbb{Z}Xp$ through reduction to a unique normal form. In the context of graphical calculi, completeness means that the rewrite rules of the calculus can prove any true equation. In other words, if $\interp{A} = \interp{B}$, then it is possible to rewrite diagram $A$ into diagram $B$. \begin{definition} We say that a diagram in AP-form defined by $(E, \vec a, \phi)$ is in \emph{reduced AP-form} if it is either zero, or it is non-zero and satisfies the following conditions: \begin{itemize} \item $E$ is in reduced row echelon form (RREF), i.e., it is fully reduced using Gaussian elimination. \item $E$ contains no fully zero rows. \item $\phi$ only contains free variables from the equation system of $E$, i.e.\@, variables that do not correspond to \emph{pivot} columns in $E$. \end{itemize} \end{definition} \begin{restatable}{lemma}{apunique} \label{lem:ap-unique} For any non-zero state $\ket \psi$, there is at most one triple $(E, \vec a, \phi)$ satisfying the conditions of reduced AP-form such that: \begin{equation*} \ket \psi \approx \sum_{E \vec{x} = \vec{a}} \omega^{\phi(\vec x)} \ket{\vec x} \end{equation*} \end{restatable} \noindent Therefore, a diagram in reduced AP-form is unique. Now, our objective is to demonstrate that we can rewrite a ZX-diagram in AP-form in a manner that transforms its biadjacency matrix $E$ into RREF\@. Additionally, we need to show that we can modify the diagram so that the corresponding phase function $\phi$ only includes free variables from the equation system $E \vec{x} = \vec{a}$. Put simply, we need to prove that we can perform primitive row operations on a ZX-diagram in AP-form as well as eliminate any phase or Hadamard edge from a pivot spider. \begin{restatable}{lemma}{rowadd} \label{lem:row-add} We can perform primitive row operations on a ZX-diagram in AP-form, i.e., we can ``add'' one inner spider to another. For any $k, a, b, e_i, f_j \in \mathbb{Z}_p$ where $i \in \{1, \ldots, n\}$ and $j \in \{1, \ldots, m\}$: \begin{equation*} \tikzfig{figures/new/row_add} \end{equation*} \end{restatable} Using this result, we can apply primitive row operations to $E$ in an AP-form diagram and hence reduce it to RREF\@. Through diagrammatic rewrites, we can show that when $E$ is in RREF, we can eliminate all the phases and H-edges associated with the non-free variables of $E$. \begin{restatable}{lemma}{removephasehad} \label{lem:remove_phase_had} If an AP-form diagram has its biadjacency matrix $E$ in RREF, we can rewrite the diagram so that the boundary spiders corresponding to non-free variables of $E$ have zero phases, and there are no H-edges connecting them to other boundary spiders. \end{restatable} \begin{restatable}{lemma}{reducedap} \label{lem:reduced-ap} Any diagram in $\mathbb{Z}Xp$ can be converted into one in reduced AP-form. \end{restatable} The completeness result follows immediately from the above lemma.
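Before stating the completeness theorem, we note that the linear-algebraic core of this reduction---and of the classical sampling procedure described earlier---is ordinary Gaussian elimination over $\mathbb{Z}_p$. The following Python sketch is purely illustrative (the function name is ours and it is not part of \texttt{DiZX}): it computes a reduced row echelon form over $\mathbb{Z}_p$ and drops all-zero rows, exactly the matrix computation that guides the diagrammatic reduction via \cref{lem:row-add} and \cref{lem:remove_phase_had}. A particular solution of $E\vec x=\vec a$ and a basis of $\{E\vec x=\vec 0\}$ can then be read off from the RREF in the standard way.
\begin{verbatim}
# Illustrative only: reduced row echelon form over Z_p.
p = 5

def rref_mod_p(rows):
    """Return (R, pivots) with R the RREF of the input matrix over Z_p."""
    R = [list(r) for r in rows]
    pivots, r = [], 0
    ncols = len(R[0]) if R else 0
    for c in range(ncols):
        pivot = next((i for i in range(r, len(R)) if R[i][c] % p), None)
        if pivot is None:
            continue
        R[r], R[pivot] = R[pivot], R[r]
        inv = pow(R[r][c] % p, -1, p)            # scale pivot row to leading 1
        R[r] = [(inv * x) % p for x in R[r]]
        for i in range(len(R)):                  # eliminate above and below
            if i != r and R[i][c] % p:
                f = R[i][c]
                R[i] = [(x - f * y) % p for x, y in zip(R[i], R[r])]
        pivots.append(c)
        r += 1
        if r == len(R):
            break
    return [row for row in R if any(row)], pivots   # drop all-zero rows

E = [[1, 2, 3],
     [2, 4, 2]]
R, pivots = rref_mod_p(E)
print(R, pivots)   # [[1, 2, 0], [0, 0, 1]] with pivots [0, 2] for p = 5
\end{verbatim}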
\begin{restatable}[Completeness]{theorem}{completeness} For any pair of ZX-diagrams $A, B \in \mathbb{Z}Xp$, if $\interp{A} = \interp{B}$, we can provide a sequence of rewrites that transforms $A$ into $B$. \end{restatable} \subsection{GSLC form} The AP-form is advantageous as it can be directly transformed into a unique normal form, and allows for straightforward classical sampling. However, it may be less suitable for other applications. For instance, when applying the algorithm described above to a diagram originating from a Clifford unitary, it becomes challenging to establish a clear relationship between the resulting simplified diagram and a corresponding quantum circuit. In this section, we introduce the qupit version of the well-known qubit GSLC-form diagrams. \begin{definition} We say a diagram is in \emph{GSLC form} (Graph State with Local Cliffords) when it is graph-like, up to Hadamards on input and output wires, and it has no internal spiders. \end{definition} The algorithm for reducing a diagram to AP-form may still yield diagrams with internal spiders, specifically Pauli spiders connected to boundaries. However, we can eliminate these internal spiders by using a \emph{boundary pivot}. \begin{restatable}{lemma}{boundarypivot} \label{lem:boundary-pivot} The following boundary pivot rule is derivable in $\mathbb{Z}Xeq$: \ctikzfig{new/partial-pivot-boundary} Here $g_{ij} \coloneqq -\epsilon^{-1}e_if_j$ and $h_i \coloneqq -\epsilon^{-1}e_i$. This rule holds for all choices of phases as long as $\epsilon \neq 0$. \end{restatable} To observe how this rewrite aids in eliminating internal spiders, consider that the spider with a phase of $(b,c)$ now becomes an internal spider connected to an internal Pauli spider. Consequently, if $c=0$, we can eliminate the pair using standard pivoting. On the other hand, if $c\neq 0$, we can employ a local complementation to remove the $(b,c)$ spider. This alteration modifies the phase of its sole neighbour, subsequently enabling its removal through another local complementation. Lemma~\ref{lem:boundary-pivot} can be straightforwardly modified, similar to Lemma~\ref{lem:pivot}, to accommodate arbitrary connectivity between the internal spider and the boundary. By incorporating additional spider unfusions, we can extend the application of Lemma~\ref{lem:boundary-pivot} to boundary spiders that are connected to multiple inputs or outputs. It is worth noting that when applying \cref{lem:boundary-pivot} multiple times to the same boundary, different powers of the Hadamard gate may appear on the input or output wire. For instance, applying it twice yields $(H^3)^2 = H^2$, and another iteration reverts back to $H$. Hence, we can observe that it is indeed possible to eliminate all internal spiders from a diagram, allowing for an efficient reduction of diagrams to GSLC form. This is particularly significant for diagrams derived from unitaries, as we can then rewrite them in the following manner: \begin{equation*} \tikzfig{GSLC-extract-1} \end{equation*} Here, the boxes labelled with $H?$ represent a possible power of a Hadamard gate acting on the qupit. By applying spider unfusion and colour change operations, we observe that the diagram can be decomposed into several layers consisting of Hadamard gates, Z phase gates, CZ gates, and a middle portion represented by a weighted biadjacency matrix $A$. 
This part of the circuit implements a map of the form $\ket{\vec x} \mapsto \ket{A \vec x}$, where $\vec x \in \mathbb{Z}_p^n$ and $A$ is an $n\times n$ matrix over $\mathbb{Z}_p$. Since we assume the entire map to be unitary, $A$ must also be invertible. Consequently, such a `linear' qupit map can always be implemented through a series of CX gates, transforming $\ket{x,y}$ to $\ket{x,x+y}$ (the decomposition is achieved via standard Gaussian elimination over $\mathbb{Z}_p$). Thus, we arrive at the following result. \begin{theorem} Any odd-prime-dimensional qudit Clifford unitary can be efficiently decomposed into a quantum circuit consisting of the following layers: \begin{quote} H---Z---S---CZ---CX---H---CZ---Z---S---H \end{quote} \end{theorem} To the best of our knowledge, such a Clifford normal form for qudits has not previously been described in the literature. It is worth noting, though, that this result bears a striking resemblance to the qubit normal form for Clifford circuits outlined in~\cite{duncanGraphtheoreticSimplificationQuantum2020}. \section{Conclusion} \label{sec:conclusion} We presented a simplified version of the qudit ZX-calculus for odd prime dimensions based on the work in Ref.\@~\cite{boothCompleteZXcalculiStabiliser2022}. This version includes fewer rules and a new scalar gadget to bring the reasoning about scalars more in line with practice. We also extended the spider-removing versions of local complementation and pivoting to qupits. This extension enabled us to reduce diagrams efficiently to AP-form and its unique version, the reduced AP-form. As a result, we obtained a new completeness proof for the qupit stabiliser fragment, which is more straightforward than previous proofs. Additionally, we discovered a reduction to GSLC form, leading to a novel layered decomposition of qupit Clifford unitaries. To support these developments, we implemented our rewrites into \texttt{DiZX}, a port of \texttt{PyZX} that now supports qudit stabiliser diagrams of arbitrary dimension. For future work, it would be interesting to investigate whether our techniques can be applied to develop a useful circuit optimisation pipeline for qudits. It would also be valuable to identify specific circuits that would benefit from such optimisation. \textbf{Acknowledgements}: We would like to thank Razin A.~Shaikh for his contributions to the development of \texttt{DiZX}. LY is supported by an Oxford - Basil Reeve Graduate Scholarship at Oriel College with the Clarendon Fund. Some of this work was done while BP was a student at the University of Oxford. The results of~\crefrange{subsec:graph-simplifications}{subsec:ap-form}, \cref{lem:state_colour_change}, and the explicit scalars are also presented in his Master's thesis. TC was supported by the ERDF project 1.1.1.5/18/A/020 ``Quantum algorithms: from complexity theory to experiment''. \printbibliography \appendix \section*{Appendix} \section{Necessity of the rules} We can demonstrate that most of the non-scalar rules of our axiomatisation are \emph{necessary}, meaning that they cannot be derived from the other rules. Note that the standard approach to showing necessity involves defining an alternative interpretation of the diagrams in which every rewrite rule remains sound, except for the specific rule being examined for necessity. Since the other rules remain sound under this interpretation, they cannot derive the rule whose soundness fails, which establishes its necessity.
Several examples of this approach can be found in the works of Backens, Perdrix, and Wang~\cite{ backensSimplifiedStabilizerZXcalculus2017, backensMinimalStabilizerZXcalculus2020}. In particular, we may define an interpretation into projective Hilbert spaces (quotienting by all non-zero scalars); this automatically satisfies all the scalar axioms, allowing us to focus solely on the non-scalar axioms. Another approach involves using graph properties that are invariant under all but one rule. In the following discussion, we rely on the invariants of non-emptiness and connectivity. We can demonstrate the necessity of all but two of the stabiliser rules: \begin{itemize} \item At least one of the \hyperref[fig:axioms]{\textsc{Fusion}}\xspace rules is necessary, as they are the only rules that allow the decomposition of a spider with an arbitrary number of legs into spiders with fewer legs. In other words, these rules are not sound for an interpretation that assigns zero to all spiders with at least $p$ legs. \item \hyperref[fig:axioms]{\textsc{Special}}\xspace is the only axiom that enables the removal of all non-identity generators from a diagram. This breaks the interpretation where every generator is zero, except for the identity. \item \hyperref[fig:axioms]{\textsc{Colour}}\xspace is necessary. To see this, consider the interpretation into projective Hilbert spaces where we redefine the X-spiders to swap the sign of the Pauli phase. It can be easily verified that this new interpretation satisfies all axioms except for \hyperref[fig:axioms]{\textsc{Colour}}\xspace. \item \hyperref[fig:axioms]{\textsc{Copy}}\xspace is necessary since it is the only axiom that can transform a connected diagram into a disconnected one. \item \hyperref[fig:axioms]{\textsc{Euler}}\xspace is necessary, as shown by a modified interpretation similar to those in Refs.~\cite{duncanGraphStatesNecessity2009, gongEquivalenceLocalComplementation2017}. \end{itemize} We conjecture that \hyperref[fig:axioms]{\textsc{M-Elim}}\xspace is also necessary, as it stands out as the only rule that establishes a connection between elements in $\mathbb{Z}_p^*$ and their multiplicative inverses; although we lack a formal proof of this intuition, we believe it to be true. It is worth noting that despite its centrality in most derivations, there remains one stabiliser axiom for which we have no knowledge of its necessity, even in the qubit case~\cite{backensMinimalStabilizerZXcalculus2020} or in the setting of graphical linear algebra~\cite{zanasiInteractingHopfAlgebras2018}: \hyperref[fig:axioms]{\textsc{Bigebra}}\xspace. We leave this intriguing open problem for the particularly motivated reader to explore further. As for the scalar rules, at least one subcase of each is necessary: \begin{itemize} \item One subcase of \hyperref[fig:axioms]{\textsc{Omega}}\xspace is necessary because it is the only rule that allows the introduction of an $\omega$ scalar box, thereby breaking the interpretation where we redefine the $\omega$ scalar box. \item \hyperref[fig:axioms]{\textsc{Zero}}\xspace is the only rule that relates a diagram without a zero scalar box to one that includes it. This means that when we interpret the zero scalar box as equal to 1 and set all other generators to zero, this rule becomes necessary.
\item \hyperref[fig:axioms]{\textsc{One}}\xspace is necessary as it is the only rule that connects a non-empty diagram to an empty diagram. \item \hyperref[fig:axioms]{\textsc{Prod}}\xspace is necessary because there are complex numbers that cannot be expressed within the fragment of the language without scalar boxes. This rule is the only one that allows the multiplication of two such numbers. \item \hyperref[fig:axioms]{\textsc{Nul}}\xspace is necessary, following an argument analogous to that of Ref.~\cite{backensSimplifiedStabilizerZXcalculus2017}. \item In \hyperref[fig:axioms]{\textsc{Gauss}}\xspace, the subcase $b=0$ is necessary since it is the only rule that relates a diagram to a scalar box with a non-unit modulus. Additionally, at least one subcase $b \neq 0$ is necessary because these are the only rules that introduce a $-1$ scalar box. \end{itemize} \allowdisplaybreaks \setlength{\jot}{20pt} \section{Qupit Clifford ZX-calculus} \subsection{Multipliers} We extend our language with \emph{multipliers}~\cite{boothCompleteZXcalculiStabiliser2022}, which are defined as: \begin{equation} \label{eq:mult-def} \tikzfig{definitions/multiplier} \qquad \qquad \tikzfig{definitions/multiplier-t} \end{equation} We can explicitly express multipliers as, for $x \in \mathbb{Z}_p^*$, \begin{equation} \label{eq:multiplier_explicit} \tikzfig{equations/multiplier_explicit} \qquad \qquad \tikzfig{equations/multiplier_t_explicit} \end{equation} The following equations hold for multipliers and are proved in Ref.\@~\cite{boothCompleteZXcalculiStabiliser2022}: \begin{proposition} \label{prop:mult-prop} \begin{equation*} \tikzfig{equations/multiplier} \end{equation*} \end{proposition} \subsection{Recovering the derivations of \texorpdfstring{Ref.~\cite{boothCompleteZXcalculiStabiliser2022}}{Booth and Carette, 2022}} In this appendix, we recover all of the lemmas that were proved in~\cite{boothCompleteZXcalculiStabiliser2022}. We do this by proving that all of the axioms used there are derivable from the simplified set given in section~\ref{sec:qupit-cliff} (up to scalars). We also show that the language is complete for the scalar fragment, which completes the proof. Since many of these proofs are entirely analogous to their counterparts in Ref.~\cite{boothCompleteZXcalculiStabiliser2022}, we omit them and refer to Ref.~\cite{boothCompleteZXcalculiStabiliser2022} instead. In order to avoid ambiguity, we refer to the proofs in the specific version of Ref.~\cite{boothCompleteZXcalculiStabiliser2022} cited as Ref.~\cite{boothCompleteZXcalculiStabiliser2022v3}. In~\cite{boothCompleteZXcalculiStabiliser2022}, the calculus was axiomatised using the equations presented in Figure~\ref{fig:axioms-old}. \begin{figure} \caption{The original rule set of the qupit stabiliser ZX-calculus of~\cite{boothCompleteZXcalculiStabiliser2022} \label{fig:axioms-old}} \end{figure} Comparing with the axioms of this paper (and ignoring scalars for now), the missing axioms are \textsc{Z-Elim}, \textsc{X-Elim}, \textsc{Char}, \textsc{Mult}, \textsc{Shear}. In addition, the axioms of \textsc{Fusion} and \textsc{Copy} were made more minimalistic.
\begin{lemma} \label{lem:green_trivial} Green \(1 \to 1\) spiders are trivial: \begin{equation*} \tikzfig{figures/equations/green_trivial} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{figures/equations/green_trivial_proof} \qedhere \end{equation*} \end{lproof} \begin{lemma} \label{lem:hadamard_product} Products of Hadamards are antipodes: \begin{equation*} \tikzfig{figures/equations/hadamard_product} \end{equation*} \end{lemma} \begin{lproof} Same as \booth{37}. \end{lproof} \begin{lemma} \label{lem:antipode_inverse} Antipodes are self-inverse: \begin{equation*} \tikzfig{figures/equations/antipode_inverse} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{figures/equations/antipode_inverse_proof} \qedhere \end{equation*} \end{lproof} \begin{lemma} \label{lem:hadamard_antipode} Hadamards and antipodes commute: \begin{equation*} \tikzfig{figures/equations/hadamard_antipode} \end{equation*} \end{lemma} \begin{lproof} Same as \booth{38}. \end{lproof} \begin{lemma} \label{lem:hadamard_inverse} The inverse Hadamard is a product of Hadamards, and admits a ``tree'' Euler decomposition: \begin{equation*} \tikzfig{figures/equations/hadamard-inverse} \end{equation*} \end{lemma} \begin{lproof} The first part is the same as \booth{39}, and the second part follows using \hyperref[fig:axioms]{\textsc{Euler}}\xspace and \hyperref[fig:axioms]{\textsc{Colour}}\xspace. The last two follow from the first equation and \cref{lem:hadamard_product}. \end{lproof} \begin{lemma} \label{lem:hadamard_identity} The product of the Hadamard and its inverse equals the identity: \begin{equation*} \tikzfig{figures/equations/hadamard_identity} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{figures/equations/hadamard_identity_proof} \end{equation*} The second equation can be proved similarly. \end{lproof} \begin{lemma} \label{lem:antipode_unit} Units absorb antipodes: \begin{equation*} \tikzfig{figures/equations/antipode_unit} \end{equation*} \end{lemma} \begin{lproof} Same as \booth{40}. \end{lproof} \begin{lemma} \label{lem:antipode_phase} For any \(x,y \in \mathbb{Z}_p\), \begin{equation*} \tikzfig{figures/equations/antipode_phase} \end{equation*} \end{lemma} \begin{lproof} \begin{align*} &\tikzfig{figures/equations/antipode_phase_proof_red} \\ &\tikzfig{figures/equations/antipode_phase_proof_green} \qedhere \end{align*} \end{lproof} \begin{lemma} \label{lem:bigebra_mn} The bigebra law holds for multiple legs: for any \(m,n \in \mathbb{N}\), $2 \leq n,m$, \begin{equation*} \tikzfig{figures/equations/bigebra_arbitrary}\quad, \end{equation*} where in the diagram on the LHS, there are \(m\) green and \(n\) red spiders, and each green spider is connected to each red spider by a single wire. \end{lemma} \begin{lproof} Follows from straightforward induction (which furthermore is analogous to the qubit case). \end{lproof} \begin{lemma} \label{lem:copy_og} Using the \hyperref[fig:axioms]{\textsc{Copy}}\xspace rule in Figure~\ref{fig:axioms}, the \textsc{Copy} rule in Ref.~\cite{boothCompleteZXcalculiStabiliser2022} is derivable: \begin{equation*} \tikzfig{figures/equations/copy_og} \end{equation*} \end{lemma} \begin{lproof} The following holds for $a = 2,\cdots,p$. As $p \text{ mod } p = 0$, this also proves the $a = 0$ subcase of \hyperref[fig:axioms]{\textsc{Copy}}\xspace. The $a = 1$ case is just \hyperref[fig:axioms]{\textsc{Copy}}\xspace. 
\begin{align*} \tikzfig{figures/equations/copy_og_proof-0} \quad &\tikzfig{figures/equations/copy_og_proof-1} \\ &\tikzfig{figures/equations/copy_og_proof-2} \qedhere \end{align*} \end{lproof} \begin{lemma} \label{lem:hopf} The Hopf identity is derivable in \(\mathbb{Z}Xeq\): \begin{equation*} \tikzfig{figures/equations/hopf} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{figures/equations/hopf_proof} \qedhere \end{equation*} \end{lproof} \begin{lemma} \label{lem:Char} The axiom \textsc{Char} of~\cite{boothCompleteZXcalculiStabiliser2022} is derivable: \begin{equation*} \tikzfig{figures/equations/Char} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{figures/equations/Char_proof} \qedhere \end{equation*} \end{lproof} We are now ready to prove the completeness of the calculus for elementary scalars. \elementaryScalarCompleteness* \begin{lproof} First, note that we have: \begin{equation*} \tikzfig{figures/equations/scalar_sqrt_p_inverse} \end{equation*} We can use this rule and the scalar axioms to rewrite every scalar in \cref{def:elementary-scalar}, as well as the zero scalar \tikzfig{figures/equations/zero_diagram}, into an explicit scalar. Then, we apply \hyperref[fig:axioms]{\textsc{Prod}}\xspace to rewrite this collection of explicit scalars into a single one. \end{lproof} \begin{lemma} \label{lem:loop} Self-loops on green spiders can be eliminated: \begin{equation*} \tikzfig{figures/equations/loop} \end{equation*} We include the colour-swapped version of this rule for completeness, even though it no longer includes a genuine self-loop. \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{figures/equations/loop_proof} \end{equation*} The red version follows from \hyperref[fig:axioms]{\textsc{Colour}}\xspace and \cref{lem:hadamard_product}. \end{lproof} \begin{lemma} \label{lem:unit_rotation_elim} Green units absorb red rotations and vice-versa: \begin{equation*} \tikzfig{figures/equations/unit_rotation_elim} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{figures/equations/unit_rotation_elim_proof_green} \end{equation*} The red version follows from \hyperref[fig:axioms]{\textsc{Colour}}\xspace. \end{lproof} \begin{lemma} \label{lem:antipode_copy} The green co-multiplication copies antipodes: \begin{equation*} \tikzfig{figures/equations/antipode_copy} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{figures/equations/antipode_copy_proof} \qedhere \end{equation*} \end{lproof} \begin{lemma} \label{lem:push_pauli_state} Green Pauli states copy through red rotations and vice-versa: \begin{equation*} \tikzfig{new/pauli-state-push} \end{equation*} \end{lemma} \begin{lproof} \begin{align*} &\tikzfig{new/pauli-state-push-proof} \\ &\tikzfig{new/pauli-state-push-compl-proof} \qedhere \end{align*} \end{lproof} \begin{lemma} \label{lem:antipode_multiplier} The antipode can be rewritten as a multiplication: \begin{equation*} \tikzfig{figures/equations/antipode_multiplier} \end{equation*} \end{lemma} \begin{lproof} This follows from \hyperref[fig:axioms]{\textsc{Special}}\xspace and the definition of the multiplier, \cref{eq:mult-def}.
\end{lproof} \begin{lemma} \label{lem:antipode_spider} For any \(x,y \in \mathbb{Z}_p\) and \(m,n \in \mathbb{N}\), \begin{equation*} \tikzfig{figures/equations/antipode_spider} \end{equation*} \end{lemma} \begin{lproof} \begin{align*} &\tikzfig{figures/equations/antipode_spider_proof} \\ &\tikzfig{figures/equations/antipode_spider_proof_compl} \qedhere \end{align*} \end{lproof} \begin{lemma} \label{lem:colour} We can derive the \hyperref[fig:axioms]{\textsc{Colour}}\xspace rule for both red and green spiders, and also for the Hadamard inverse: \begin{align*} \tikzfig{figures/equations/colour_commutation_green} \qquad & \qquad \tikzfig{figures/equations/colour_commutation_red} \\ \tikzfig{figures/equations/colour_commutation_green_minu} \qquad & \qquad \tikzfig{figures/equations/colour_commutation_red_minu} \end{align*} \end{lemma} \begin{lproof} This follows from \hyperref[fig:axioms]{\textsc{Colour}}\xspace and \cref{lem:hadamard_inverse,lem:antipode_spider}. \end{lproof} \begin{lemma} \label{lem:hadamard_push} Hadamard gates or their inverses can be pushed through spiders: \begin{equation*} \tikzfig{figures/equations/colour_commutation} \end{equation*} \end{lemma} \begin{lproof} This follows from \cref{lem:colour,lem:hadamard_identity}. \end{lproof} \begin{lemma} \label{lem:pauli_copy_phase} Green spiders copy red Pauli phases, and vice-versa: for any \(x \in \mathbb{Z}_p\), \begin{equation*} \tikzfig{figures/equations/pauli_copy_phase} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{figures/equations/pauli_copy_phase_proof} \end{equation*} The second equation follows from \cref{lem:colour,lem:hadamard_identity}. \end{lproof} \begin{lemma} \label{lem:multiplier_sum} Parallel multipliers sum: for any \(x,y \in \mathbb{Z}_p\): \begin{equation*} \tikzfig{figures/equations/multiplier_sum} \end{equation*} \end{lemma} \begin{lproof} This is a straightforward consequence of \hyperref[fig:axioms]{\textsc{Fusion}}\xspace. \end{lproof} \begin{lemma} \label{lem:multiplier_elim} For any \(z \in \mathbb{Z}_p^*\), \begin{equation*} \tikzfig{figures/equations/multiplier_elim} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{figures/equations/multiplier_elim_proof_green} \end{equation*} The other rule follows from \hyperref[fig:axioms]{\textsc{Colour}}\xspace. \end{lproof} \begin{lemma} \label{lem:multiplier_product} For any \(x,y \in \mathbb{N}\), \begin{equation*} \tikzfig{figures/equations/multiplier_product} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{figures/equations/multiplier_product_proof} \end{equation*} The second equality follows from \hyperref[fig:axioms]{\textsc{Colour}}\xspace. \end{lproof} \begin{lemma} \label{lem:multiplier_inverse} For any \(x \in \mathbb{Z}_p^*\), \begin{equation*} \tikzfig{figures/equations/multiplier_inverse} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{figures/equations/multiplier_inverse_proof} \end{equation*} The second equality follows from \hyperref[fig:axioms]{\textsc{Colour}}\xspace. \end{lproof} \begin{lemma} \label{lem:multiplier_copy} Spiders copy invertible multipliers: for any \(x \in \mathbb{Z}_p^*\), \begin{equation*} \tikzfig{figures/equations/multiplier_copy} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{figures/equations/multiplier_copy_proof_green} \end{equation*} The other equation follows from \hyperref[fig:axioms]{\textsc{Colour}}\xspace and the definition of the multiplier, \cref{eq:mult-def}.
\end{lproof} \begin{lemma} \label{lem:multiplier_spider} The action of multipliers on spiders is given by, for any \(x \in \mathbb{Z}_p^*\), \begin{equation*} \tikzfig{figures/equations/multiplier_spider} \end{equation*} \end{lemma} \begin{lproof} This follows from \cref{lem:multiplier_copy} and \hyperref[fig:axioms]{\textsc{M-Elim}}\xspace. \end{lproof} \begin{restatable}{lemma}{pushmult} \label{lem:multiplier_push} We can \enquote{push} multipliers through spiders as follows, for any $a, b \in \mathbb{Z}_p$ and $x \in \mathbb{Z}_p^*$, \begin{equation*} \tikzfig{new/push-mult-green} \qquad \qquad \tikzfig{new/push-mult-inv-green} \end{equation*} \begin{equation} \tikzfig{new/push-mult-red} \qquad \qquad \tikzfig{new/push-mult-inv-red} \end{equation} \end{restatable} \begin{lproof} \begin{align*} &\tikzfig{new/push-mult-green-proof}\\ &\tikzfig{new/push-mult-red-proof} \end{align*} The other proofs follow from the above equations while using the multiplicative inverse of the multipliers. \end{lproof} \begin{lemma} \label{lem:multiplier_had} The product of a multiplier and a Hadamard gate is an H-box: \begin{equation*} \tikzfig{equations/hadamard_multiplier} \end{equation*} \end{lemma} \begin{lproof} This follows from \cref{lem:hadamard_identity} and the definition of the multiplier, \cref{eq:mult-def}. \end{lproof} \prophbox* \begin{lproof} The bottom two equations follow from the definition of the H-box at \cref{eq:hbox_def} and \cref{lem:green_trivial}. The rest can be proved using \cref{lem:multiplier_had,prop:mult-prop,lem:hadamard_push,lem:hadamard_identity}. \end{lproof} \begin{restatable}{lemma}{xhadmult} \label{lem:multiplier_hbox} H-boxes multiply with multipliers, for any $x, y \in \mathbb{Z}_p$, \begin{equation*} \tikzfig{new/xhad-mult} \end{equation*} \end{restatable} \begin{proof} \begin{equation*} \tikzfig{new/xhad-mult-proof-1} \end{equation*} \begin{equation*} \tikzfig{new/xhad-mult-proof-2} \qedhere \end{equation*} \end{proof} \begin{restatable}{lemma}{pushxhad} \label{lem:hbox_push} We can \enquote{push} H-boxes through spiders as follows, for any $a, b \in \mathbb{Z}_p$ and $x \in \mathbb{Z}_p^*$, \begin{equation*} \tikzfig{new/push-xhad-green} \qquad \qquad \tikzfig{new/push-xhad-red} \end{equation*} \end{restatable} \begin{lproof} First of all, \begin{equation*} \tikzfig{new/push-xhad-green-proof} \end{equation*} The other equation can be proved similarly, \begin{equation*} \tikzfig{new/push-xhad-red-proof} \qedhere \end{equation*} \end{lproof} \begin{lemma} \label{lem:scalar-xxhad} \begin{equation*} \tikzfig{new/scalar_xxhad} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{new/scalar_xxhad_proof} \qedhere \end{equation*} \end{lproof} \begin{lemma} \label{lem:clifford_states} Any pure-Clifford states can be represented in both the red and green fragment: for any \(x \in \mathbb{Z}_p^*\), \begin{equation*} \tikzfig{figures/equations/clifford_states} \end{equation*} \end{lemma} \begin{lproof} Firstly, we prove the subcase \(x=1\) of (a): \begin{equation*} \tikzfig{figures/equations/clifford_states_proof_green} \end{equation*} so that \begin{equation*} \tikzfig{figures/equations/clifford_states_proof_green_2} \end{equation*} and \begin{equation*} \tikzfig{figures/equations/clifford_states_proof_green_3}. \end{equation*} Then the general case for any invertible \(x\) follows using lemma~\ref{lem:multiplier_spider}. (b) follows once again using \hyperref[fig:axioms]{\textsc{Colour}}\xspace. 
\end{lproof} \begin{lemma} \label{lem:hadamard_euler} The Hadamards admit more standard Euler decompositions (originally shown for qudit ZX in~\cite{wangQufiniteZXcalculusUnified2022}): \begin{equation*} \tikzfig{figures/equations/hadamard_euler} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{figures/equations/hadamard_euler_proof} \end{equation*} We obtain the second derivation, as always, using \hyperref[fig:axioms]{\textsc{Colour}}\xspace. \end{lproof} In the next few proofs, we make frequent use of the following fact: \begin{lemma} \label{lem:sum_of_squares} For any \(x \in \mathbb{Z}_p\), there are \(a,b \in \mathbb{Z}_p\) such that \(x = a^2 + b^2\). \end{lemma} \begin{lproof} This is true in general for any finite field. See~\cite{yutsumuraEachElementFinite2017} for a proof. \end{lproof} \begin{lemma} \label{lem:euler_squares} For any \(z \in \mathbb{Z}_p^*\), \begin{equation*} \tikzfig{figures/euler/euler_squares} \end{equation*} \end{lemma} \begin{lproof} We have \begin{equation*} \tikzfig{figures/euler/euler_colour_swap_proof} \end{equation*} so that \begin{equation*} \tikzfig{figures/euler/euler_squares_proof} \end{equation*} and \begin{equation*} \tikzfig{figures/euler/euler_squares_proof_2} \end{equation*} The second version is obtained using a completely analogous argument. \end{lproof} \begin{lemma} \label{lem:euler_tree} For any \(z \in \mathbb{Z}_p^*\) (not just squares), \begin{equation*} \tikzfig{figures/euler/euler_tree} \end{equation*} \end{lemma} \begin{lproof} If \(z\) is a square, then this result is immediate by the previous lemma. Otherwise, by Lemma~\ref{lem:sum_of_squares}, \(z = a^2 + b^2\) and \(a,b \in \mathbb{Z}_p\) are non-zero. Then \begin{equation*} \tikzfig{figures/euler/euler_tree_proof} \qedhere \end{equation*} \end{lproof} \begin{lemma} \label{lem:H_loop} Hadamard loops correspond to pure-Clifford operations: for any \(x \in \mathbb{Z}_p\) and \(z \in \mathbb{Z}_p^*\), \begin{equation*} \tikzfig{figures/equations/H_loop} \end{equation*} \end{lemma} \begin{lproof} The case \(x=0\) is clear by decomposing the H-box according to \cref{eq:hbox_def}. Therefore, we only need to show that for \(z \in \mathbb{Z}_p^*\): \begin{align*} \tikzfig{figures/equations/H_loop_proof_green-0} \quad &\tikzfig{figures/equations/H_loop_proof_green-1} \\ &\tikzfig{figures/equations/H_loop_proof_green-2} \end{align*} Under the assumption that the weight is invertible, the red version follows using \hyperref[fig:axioms]{\textsc{Colour}}\xspace and the green version: \begin{equation*} \tikzfig{figures/equations/H_loop_proof_red} \end{equation*} \end{lproof} \lemSupplementarity* \begin{lproof} If \(b = x^2\) for some non-zero \(x \in \mathbb{Z}_p\), then \begin{equation*} \tikzfig{figures/euler/fork_identity_proof} \end{equation*} Otherwise \(b = s^2 + t^2\) where \(s,t \in \mathbb{Z}_p\) are non-zero, and \begin{equation*} \tikzfig{figures/euler/fork_identity_proof_2} \end{equation*} The second equation follows from the first equation and the application of \hyperref[fig:axioms]{\textsc{Colour}}\xspace. 
\end{lproof} \lemStrictCliffordColour* \begin{lproof} Firstly, \begin{equation*} \tikzfig{figures/euler/state_colour_change_proof_1} \end{equation*} so that \begin{equation*} \tikzfig{figures/euler/state_colour_change_proof_2} \end{equation*} whence \begin{equation*} \tikzfig{figures/euler/state_colour_change_proof_3} \end{equation*} and, finally, \begin{equation*} \tikzfig{figures/euler/state_colour_change_proof_4} \end{equation*} The change of colour in the scalar, as well as the second derivation, follow using \hyperref[fig:axioms]{\textsc{Colour}}\xspace. \end{lproof} \begin{lemma} \label{lem:scalar_z_equiv} The following states are equivalent: \begin{equation*} \tikzfig{equations/scalar_minuzminu1_z} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{equations/scalar_minuzminu1_z_proof} \qedhere \end{equation*} \end{lproof} \begin{lemma} \label{lem:pauli_push} X-spiders with arbitrary phases copy Pauli Z-spiders and vice-versa: \begin{equation*} \tikzfig{new/extended-copy} \qquad \qquad \tikzfig{new/extended-copy-compl} \end{equation*} \end{lemma} \begin{lproof} First of all, \begin{equation*} \tikzfig{new/extended-copy-proof-1} \end{equation*} Then, we separate the equation into two cases based on whether the Z-spider is Pauli or not. In case $d = 0$, the Z-spider is Pauli and therefore: \begin{equation*} \tikzfig{new/extended-copy-proof-2} \end{equation*} Note that if $d = 0$, then $ad - c = -c$ and so the lemma holds. Otherwise, $d \neq 0$ and therefore $d^{\texttt{-} 1}$ exists, so we can apply the state-change lemma: \begin{equation*} \tikzfig{new/extended-copy-proof-3} \end{equation*} Note that the phases after the application of the second state-change follow from: \begin{equation*} -(a - c d^{\texttt{-} 1})(\texttt{-} d^{\texttt{-} 1})^{\texttt{-} 1}, \, \texttt{-}(\texttt{-} d^{\texttt{-} 1})^{\texttt{-} 1} = -(a - c d^{\texttt{-} 1}) (\texttt{-} d), \, d = ad - c, \, d \end{equation*} We can prove the second equation of the lemma using Hadamard-boxes as follows: \begin{equation*} \tikzfig{new/extended-copy-compl-proof} \qedhere \end{equation*} \end{lproof} We are now ready to prove that axioms \textsc{Mult} and \textsc{Shear} of~\cite{boothCompleteZXcalculiStabiliser2022} are derivable from our simplified set of axioms: \begin{proposition} \label{prop:mult} For any \(z \in \mathbb{Z}_p^*\), \begin{equation*} \tikzfig{figures/euler/mult} \end{equation*} \end{proposition} \begin{lproof} We have \begin{equation*} \tikzfig{figures/euler/mult_finally} \end{equation*} so that \begin{equation*} \tikzfig{figures/euler/mult_finally_2} \qedhere \end{equation*} \end{lproof} \begin{proposition} For any \(a,c,d \in \mathbb{Z}_p\), \begin{equation*} \tikzfig{figures/equations/shear} \end{equation*} \end{proposition} \begin{lproof} We first prove the subcase \(c = 0\) and $d \neq 0$: \begin{equation*} \tikzfig{figures/equations/shear_proof_1} \end{equation*} Now, we have \begin{equation*} \tikzfig{figures/equations/shear_proof_2} \end{equation*} so that \begin{equation*} \tikzfig{figures/equations/shear_proof_3} \end{equation*} Now, if \(d = 0\), \begin{equation*} \tikzfig{figures/equations/shear_proof_4} \end{equation*} and putting both of these derivations together: \begin{equation*} \tikzfig{figures/equations/shear_proof_5} \qedhere \end{equation*} \end{lproof} \subsection{Graph-like diagrams}\label{app:graph-like} \begin{proposition} \label{prop:local_complementation} $\gamma$-weighted local $\mathbb{Z}_d$-complementation is derivable in $\mathbb{Z}Xeq$, for any 
graph $G = (V,E)$, $\gamma \in \mathbb{Z}_p$ and $u \in V$, \begin{equation} \label{eq:graph-lc} \tikzfig{figures/new/graph-local-compl} \end{equation} \end{proposition} \begin{lproof} Same as \booth{12}. \end{lproof} \subsubsection{Local complementation simplification} \lc* \begin{lproof} First, we can prove a simplified version of the lemma without the phases of the boundary spiders and the H-edges, as follows: \begin{align*} &\tikzfig{new/local_compl_phaseless_proof-1} \\ &\tikzfig{new/local_compl_phaseless_proof-2} \\ &\tikzfig{new/local_compl_phaseless_proof-3} \\ &\tikzfig{new/local_compl_phaseless_proof-4} \end{align*} Then, we can use the previous equation to prove the lemma. \begin{align*} &\tikzfig{new/local_compl_proof-1} \\ &\quad \tikzfig{new/local_compl_proof-2} \\ &\tikzfig{new/local_compl_proof-3} \qedhere \end{align*} \end{lproof} \subsubsection{Pivoting simplification} \pivotpartial* \begin{lproof} First, we can prove a simplified version of the equation that omits the phases of the boundary spiders, as follows, \begin{align*} &\tikzfig{new/partial_pivot_proof-1} \\ &\tikzfig{new/partial_pivot_proof-2} \\ &\tikzfig{new/partial_pivot_proof-3} \\ &\tikzfig{new/partial_pivot_proof-4} \\ &\tikzfig{new/partial_pivot_proof-5} \end{align*} Then, we can use the previous equation to prove the lemma as follows: \begin{align*} &\tikzfig{new/partial_pivot_phase_proof-1} \\ &\tikzfig{new/partial_pivot_phase_proof-2} \qedhere \end{align*} \end{lproof} Now, we prove the general version of pivoting. \pivot* \begin{lproof} \begin{align*} &\tikzfig{new/pivot_proof-1} \\ &\tikzfig{new/pivot_proof-2} \\ &\tikzfig{new/pivot_proof-3} \\ &\tikzfig{new/pivot_proof-4} \qedhere \end{align*} \end{lproof} \section{A normal form} \apform* \begin{lproof} We can prove this claim purely diagrammatically, by composing the diagram of \cref{eq:ap-form} with an effect that corresponds to the vector $\bra x$. By rewriting the diagram while keeping track of the scalars, we can prove that the diagram indeed represents the one described in \cref{eq:ap-form-poly}. These transformations are as follows: \begin{align*} &\tikzfig{figures/new/ap_form_proof-1} \\ &\tikzfig{figures/new/ap_form_proof-2} \\ &\tikzfig{figures/new/ap_form_proof-3} \end{align*} Note that if a Z-spider with no legs has phase $(z,0)$ for any $z \in \mathbb{Z}_p ^*$, then it equals the zero scalar. This means that the probability of such an effect is $0$. Therefore, the above diagram allows only those vectors $\vec x$ that satisfy the equation $E \vec x = \vec a$. Furthermore, the scalars that are copied from the phases part of the diagram equal the $\omega^{\phi(\vec x)}$ component of the equation. We conclude that a diagram in \cref{eq:ap-form} indeed equals the state presented in \cref{eq:ap-form-poly}. \end{lproof} \subsection{Completeness} \apunique* \begin{lproof} Since $\ket \psi \neq 0$, the set $\mathcal{A} = \{\vec x \mid E \vec{x} = \vec{a}\}$ is non-empty. Therefore, there is a unique system of equations in RREF that defines $\mathcal{A}$. This means that $E$ and $\vec a$ are uniquely fixed. Now, for any assignment $\{x_{i_1} \coloneqq c_1,\, \ldots\, , x_{i_k} \coloneqq c_k\}$ of free variables, there exists a vector $\vec x \in \mathcal{A}$ such that $x_{i_\mu} = c_\mu$. Therefore, we have $\ip{\vec x}{\psi} = \lambda\, \omega^{\phi(c_1,\, \ldots\, , c_k)}$ for some fixed constant $\lambda \neq 0$. Using this fact we can determine the value of $\phi$ at all inputs $(c_1,\, \ldots\, , c_k)$, which is enough to compute each coefficient of $\phi$.
We conclude that $\phi$ is uniquely fixed by $\ket \psi$. \end{lproof} \rowadd* \begin{lproof} Firstly, we show that we can transform two disconnected X-states: \begin{align*} &\tikzfig{figures/new/row_add_sub_lem_proof_new-1} \\ &\tikzfig{figures/new/row_add_sub_lem_proof_new-2} \end{align*} Then, we can show that we can transform a diagram in AP-form as follows: \begin{align*} &\tikzfig{figures/new/row_add_proof-1} \\ &\tikzfig{figures/new/row_add_proof-2} \qedhere \end{align*} \end{lproof} \begin{restatable}{lemma}{removepauli} \label{lem:remove_pauli} We can remove Pauli-phases from the pivot spiders of diagrams in AP-form. \end{restatable} \begin{lproof} For any $a, x, e_i \in \mathbb{Z}_p$ where $i \in \{2, \ldots, k\}$ and $e_1 \in \mathbb{Z}_p^*$: \begin{equation*} \tikzfig{figures/new/remove_pauli_node} \end{equation*} where $a' \coloneqq -(a + x e_1^{\texttt{-} 1})$. \end{lproof} \begin{restatable}{lemma}{removeclif} \label{lem:remove-clif} We can remove strictly-Clifford phases from the pivot spiders of diagrams in AP-form. \end{restatable} \begin{lproof} To prove this case, we first show that we can push a strictly-Clifford Z-spider through an X-spider with weighted outputs. That is, for any $a, e_i \in \mathbb{Z}_p$ where $i \in \{1, \ldots, k\}$ and $z \in \mathbb{Z}_p^*$: \begin{equation*} \tikzfig{figures/new/remove_cliff_node_sub_lem_proof} \end{equation*} Therefore, for any $a, x, e_i \in \mathbb{Z}_p$ where $i \in \{2, \ldots, k\}$ and $z,e_1 \in \mathbb{Z}_p^*$: \begin{align*} &\tikzfig{figures/new/remove_cliff_node-1} \\ &\tikzfig{figures/new/remove_cliff_node-2} \end{align*} where $A_i = (a z e_1^{\texttt{-} 1} - x) e_1^{\texttt{-} 1} e_i$, $B_i = z e_1^{\texttt{-} 2} e_i^{2}$, and $E_{i,j} = z e_1^{\texttt{-} 2} e_i e_j$. \end{lproof} \begin{restatable}{lemma}{remhadedge} \label{lem:rem_had_edge} We can remove an H-edge between the pivot spider and a boundary spider that connects to the same internal spider as the pivot. \end{restatable} \begin{lproof} Let us suppose that the pivot spider is connected to the $\ell$-th wire with an H-box. Then, for any $a, x, e_i \in \mathbb{Z}_p$ where $i \in \{2, \ldots, k\}$ and $e_1 \in \mathbb{Z}_p^*$: \begin{equation*} \tikzfig{figures/new/remove_had_edge} \qedhere \end{equation*} \end{lproof} \begin{restatable}{lemma}{remhadedge2} \label{lem:rem_had_edge2} We can remove an H-edge between the pivot spider and a boundary spider that does not connect to the same internal spider as the pivot. \end{restatable} \begin{lproof} For any $a, b, x, e_i, f_h \in \mathbb{Z}_p$ where $i \in \{2, \ldots, k\}$, $h \in \{1, \ldots, j\}$ and $e_1 \in \mathbb{Z}_p^*$: \begin{equation*} \tikzfig{figures/new/remove_had_edge_sep} \qedhere \end{equation*} \end{lproof} \reducedap* \begin{lproof} First, we can convert any diagram in $\mathbb{Z}Xp$ into one in AP-form using local complementation and pivoting. Then, we can translate such a diagram into AP-form with a biadjacency matrix in RREF using Gaussian elimination, as demonstrated in \cref{lem:row-add}. Furthermore, we have established the proofs for removing any phase from the pivot spider (\cref{lem:remove_pauli} and \cref{lem:remove-clif}), as well as removing any H-edge connected to the pivot spider (\cref{lem:rem_had_edge} and \cref{lem:rem_had_edge2}). These results allow us to transform a diagram in such a way that its phase function $\phi$ only contains free variables from the equation system $E \vec x = \vec a$.
Consequently, any diagram in $\mathbb{Z}Xp$ can be rewritten into a diagram in reduced AP-form. \end{lproof} \completeness* \begin{lproof} Without loss of generality, we can assume that both $A$ and $B$ are states by map-state duality. If $A$ and $B$ represent the same linear map, i.e.\@ $\interp{A} = \interp{B}$, then their reduced AP-forms are identical, thanks to the uniqueness of the form proved in \cref{lem:ap-unique}. Therefore, we can transform both $A$ and $B$ into diagrams in reduced AP-form using \cref{lem:reduced-ap}. The sequence of transformations from $A$ to its reduced AP-form, composed with the series of rewrites from the reduced AP-form of $B$ back to $B$, provides us with a sequence of rewrites that transforms $A$ into $B$\@. \end{lproof} \boundarypivot* \begin{lproof} Unfuse spiders and introduce Hadamards as follows: \begin{align*} &\tikzfig{new/partial-pivot-boundary-pf-1} \\ &\tikzfig{new/partial-pivot-boundary-pf-2} \end{align*} In the last step we applied the regular pivot rule, Lemma~\ref{lem:pivot-partial}. \end{lproof} \end{document}
\begin{document} \begin{abstract} Cancer development is associated with aberrant DNA methylation, including increased stochastic variability. Statistical tests for discovering cancer methylation biomarkers have focused on changes in mean methylation. To improve the power of detection, we propose to incorporate increased variability into testing for cancer differential methylation through two joint constrained tests: one for differential mean and increased variance, the other for increased mean and increased variance. To improve small-sample properties, likelihood ratio statistics are developed that account for the variability in estimating the sample medians in the Levene test. Efficient algorithms were developed and implemented in the \texttt{DMVC} function of the {\bf R} package \texttt{DMtest}. The proposed joint constrained tests were compared to standard tests and to the partial area under the curve (pAUC) of the receiver operating characteristic (ROC) curve in simulated datasets under diverse models. Application to the high-throughput methylome data in The Cancer Genome Atlas (TCGA) shows a substantially increased yield of candidate CpG markers. \end{abstract} \keywords{Cancer biomarker discovery; CpG methylation; Constrained hypothesis; Differential variance; Joint test; Levene test} \section{Introduction} \label{sec1} Cancer development is associated with profound modifications in the epigenome, a multi-layer regulatory infrastructure for gene expression and cellular lineage \citep{Esteller2007,Esteller2008,Baylin2011,Shen2013a}. The most studied cancer epigenetic alteration to date is the 5-cytosine methylation at CpG dinucleotides. Occurring early in carcinogenesis and being biochemically more stable than RNA transcripts, aberrant DNA methylation has become an important molecular target for developing cancer early detection markers \citep{Laird2003,Issa2008}. High-throughput assays such as the Illumina Infinium HumanMethylation450 and EPIC BeadChips and whole genome bisulfite sequencing (WGBS) enable interrogation of the cancer methylome for CpG biomarker candidates \citep{Hao2017}. The current statistical engine for detecting aberrant cancer methylation entails testing for equal means between cancer and normal samples at a single CpG site (differentially methylated CpG, {\bf DMC}) or in a region with multiple adjacent CpGs (differentially methylated region, {\bf DMR}), as implemented in popular software such as \texttt{Minfi} \citep{Aryee2014} and \texttt{ChAMP} \citep{Morris2014}. The high dimensionality of genome-wide CpGs ($>$500,000) and the typically small sample sizes of biomarker discovery studies limit the statistical power to identify novel markers. Cancer is a heterogeneous disease. Differential cancer methylation is also evidenced by increased stochastic variability. This has been observed across cancers \citep{Hansen2011,Phipson2014} and may reflect adaptation to local tumor environments in the carcinogenesis process. Testing for equal variances in sample groups has been studied in the statistical literature for decades \citep{Brown1974}. The Levene test achieves a balance between sensitivity and robustness to outliers, and has therefore been used for detecting differentially variable CpGs ({\bf DVC}) \citep{Phipson2014}.
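As a minimal numerical illustration of this Levene-type approach (absolute deviations from the group medians, compared between tumor and normal samples), consider the following Python sketch. It is for illustration only and is not the \texttt{DMtest} implementation, which is provided as an {\bf R} package; the function name and simulated data below are our own assumptions.

\begin{verbatim}
# Illustrative sketch of a Levene-type test for differential variability (DVC)
# at a single CpG site; not the DMtest implementation.
import numpy as np
from scipy import stats

def levene_dvc(y_tumor, y_normal):
    """Compare absolute deviations from the group medians with a Welch t-test."""
    d_tumor = np.abs(y_tumor - np.median(y_tumor))      # deviations in tumor group
    d_normal = np.abs(y_normal - np.median(y_normal))   # deviations in normal group
    return stats.ttest_ind(d_tumor, d_normal, equal_var=False)

# Simulated M-values: equal means, but larger variance in the tumor group.
rng = np.random.default_rng(1)
tumor = rng.normal(loc=0.0, scale=2.0, size=40)
normal = rng.normal(loc=0.0, scale=1.0, size=40)
print(levene_dvc(tumor, normal))
# The closely related Brown-Forsythe version is available as
# stats.levene(tumor, normal, center='median').
\end{verbatim}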
Figure \ref{volcano_plot_six_cancers} shows the volcano plots for testing differential variability in six major cancers available in TCGA using the Levene test: prostate cancer (PCa), colorectal cancer (CRC), lung squamous cell carcinoma (LUSC), lung adenocarcinoma (LUAD), breast cancer (BRCA), and liver hepatocellular carcinoma (HCC). A striking observation is that nearly all DVCs show increased variability in cancer samples ($>$99\% in BRCA, CRC, HCC, LUSC and LUAD, $>$95\% in PCa), confirming that cancer DVCs are ubiquitously hypervariable. More results will be presented in Table 2. \begin{figure} \caption{Volcano plots for testing DVC between cancers and normal samples in six major cancers from TCGA. Red circles and blue circles are hypervariable and hypovariable DVCs with family-wise error rate $<$ 0.05. (a) Prostate cancer. (b) Colorectal cancer. (c) Lung adenocarcinoma. (d) Lung squamous cell carcinoma. (e) Breast cancer. (f) Liver hepatocellular carcinoma. } \label{volcano_plot_six_cancers} \end{figure} In this work, we develop joint tests that combine differential means and increased variances in CpG methylation data, aiming to improve the power of detecting cancer biomarker candidates. Specifically geared to the ubiquitously increased variability in cancer DNA methylation, we develop two constrained hypothesis tests: a test for {\bf differential} mean and {\bf increased} variance, and another test for {\bf increased} mean and {\bf increased} variance. We used estimating equation theory and likelihood ratio tests for constrained hypotheses to construct the test statistics. We found that the variability in estimating the medians in the Levene test needs to be accounted for in the test statistics, in order to achieve better control of the type I error rate at the small p-values needed for high-dimensional testing. Computationally efficient algorithms for these tests were developed in the \texttt{DMVC} function of the {\bf R} package \texttt{DMtest}, which is capable of scanning 500,000 CpGs for biomarker leads in a few minutes. Another goal of this work is to compare the proposed constrained joint tests to the partial area under the curve (pAUC) measure for the receiver operating characteristic (ROC) curve, which is commonly used in the biomarker literature for selecting cancer gene expression markers \citep{Pepe2003}. In a similar one-sided fashion, pAUC evaluates the discriminative performance between cancers and controls in the high-specificity region of the ROC curve. In simulated datasets, we compare the error control and power of pAUC, the standard t-test, the Levene test, and the proposed constrained tests under diverse models. The utility of the proposed constrained tests for biomarker discovery will be investigated using TCGA methylome data. \section{Methods} \label{sec2} \subsection{Joint test of differential mean and differential variance in CpG methylation} To develop the constrained hypothesis tests, it is necessary to first consider the standard null hypothesis for testing equal means and equal variances. Suppose there are $n$ samples with DNA methylome data available for the tumor-normal comparison, denoted by $(\mbox{\bf Y}_i,X_i,W_i)$, for $i=1,...,n$, where $\mbox{\bf Y}_i$ is a length-$p$ vector of methylation M-values at $p$ CpG sites, $X_i$ is the indicator for cancer ($X_i=1$) or normal sample ($X_i=0$), and $W_i$ is the vector of additional covariates, such as age and gender, that should be adjusted for in the following regression analysis.
Let $Y_{ij}$ denote the M-value of the $j^{th}$ CpG site for the $i^{th}$ sample. DMC and DVC are tested separately by fitting linear regression models: \begin{eqnarray} \mathbb{E}(Y_{ij})&=& \beta_{0j} + \beta_{1j} X_i + \beta_{2j} W_i, \label{eq:model1} \\ \mathbb{E}(|Y_{ij} - \widetilde{m}_{g(i)j}|)&=& \alpha_{0j} + \alpha_{1j} X_i + \alpha_{2j}W_i, \label{eq:model2} \end{eqnarray} where $g(i)$ is the group (tumor or normal) label for the $i^{th}$ sample, $\widetilde{m}_{g(i)j}$ is the sample median for the $j^{th}$ CpG site in group $g(i)$, and $|Y_{ij} - \widetilde{m}_{g(i)j}|$ is the absolute deviation from the corresponding group median. Note that model (\ref{eq:model2}) implements the Levene test for homogeneous variances in two groups \citep{Brown1974}. The standard null hypothesis for testing equal means and equal variances is \[ \mbox{H}_{0}: \beta_{1j}=0, \alpha_{1j}=0, \mbox{\hspace{10pt} versus \hspace{10pt}}\mbox{H}_{1a}: \beta_{1j} \neq 0 , \alpha_{1j} \neq 0, \] which often entails a 2-df Wald test. In general, the outcomes of the above regression models $(Y_{ij},|Y_{ij} - \widetilde{m}_{g(i)j}|)$ are correlated, except in one special scenario where the distribution of $Y_{ij}$ is symmetric. For simplicity, we illustrate this special case using $Y_{ij}$ and $|Y_{ij} - m_{g(i)j}|$, where $m_{g(i)j}$ is the population median for the $j^{th}$ CpG site in group $g(i)$. \begin{eqnarray} & & \text{cov}\left(Y_{ij},|Y_{ij} - m_{g(i)j}|\right) \nonumber \\ &=& \mathbb{E}\left(Y_{ij}|Y_{ij} - m_{g(i)j}|\right) - \mathbb{E}\left(Y_{ij}\right)\mathbb{E}\left(|Y_{ij} - m_{g(i)j}|\right) \nonumber \\ &=&\mathbb{E}\left[(Y_{ij} - m_{g(i)j})|Y_{ij} - m_{g(i)j}|\right] + \mathbb{E}\left[(m_{g(i)j} - \mathbb{E}(Y_{ij}))|Y_{ij} - m_{g(i)j}|\right] \nonumber \\ &=&\frac{1}{2}\mathbb{E}\left[(Y_{ij} - m_{g(i)j})^2|Y_{ij} - m_{g(i)j}>0\right] - \frac{1}{2}\mathbb{E}\left[(Y_{ij} - m_{g(i)j})^2|Y_{ij} - m_{g(i)j}<0\right] \nonumber \\ & & + \mathbb{E}\left[(m_{g(i)j} - \mathbb{E}(Y_{ij}))|Y_{ij} - m_{g(i)j}|\right] = 0 \nonumber \end{eqnarray} where the first two terms cancel out because of the symmetry of $Y_{ij}$, and the third term is zero because $m_{g(i)j} = \mathbb{E}(Y_{ij})$ under the null hypothesis when the distribution is symmetric. Outside this special case, the 2-df Wald test for $\mbox{H}_{0}: \beta_{1j}=0, \alpha_{1j}=0$ needs to account for the correlation. The asymptotic bivariate Gaussian distribution of $(\hat{\beta}_{1j}, \hat{\alpha}_{1j})$ can be derived from estimating equation theory, and the asymptotic variance matrix $\Sigma_j$ for ($\hat{\beta}_{1j}$, $\hat{\alpha}_{1j}$) can be computed by the robust sandwich estimator. Specifically, let $u_{i1}$ be the estimating function for the first regression model (\ref{eq:model1}) and $u_{i2}$ be the estimating function for the second regression model (\ref{eq:model2}). Let $\mbox{\boldmath $ \beta $}_{j} =(\beta_{0j},\beta_{1j},\beta_{2j}) $ and $\mbox{\boldmath $ \alpha $}_{j} =(\alpha_{0j},\alpha_{1j},\alpha_{2j})$.
When the sample size is sufficiently large, ($\widehat{\mbox{\boldmath $ \beta $}}_{j}$, $\widehat{\mbox{\boldmath $ \alpha $}}_{j}$) follows a Gaussian distribution with mean ($\mbox{\boldmath $ \beta $}_{j}$, $\mbox{\boldmath $ \alpha $}_{j}$) and variance matrix $\Sigma_j$, expressed as follows, \begin{eqnarray} \sqrt{n} \left(\begin{array}{c} \widehat{\mbox{\boldmath $ \beta $}}_{j} - \mbox{\boldmath $ \beta $}_{j} \\ \widehat{\mbox{\boldmath $ \alpha $}}_{j} - \mbox{\boldmath $ \alpha $}_{j} \end{array} \right) \rightarrow_d \mathcal{N}\left(0, \left[ \begin{array}{cc} I(\mbox{\boldmath $ \beta $}_{j})^{-1} \mathbb{E} (u_{i1}^Tu_{i1}) I(\mbox{\boldmath $ \beta $}_{j})^{-1} & I(\mbox{\boldmath $ \beta $}_{j})^{-1} \mathbb{E} (u_{i1}^Tu_{i2}) I(\mbox{\boldmath $ \alpha $}_{j})^{-1} \\ I(\mbox{\boldmath $ \beta $}_{j})^{-1} \mathbb{E} (u_{i2}^Tu_{i1}) I(\mbox{\boldmath $ \alpha $}_{j})^{-1} & I(\mbox{\boldmath $ \alpha $}_{j})^{-1} \mathbb{E} (u_{i2}^Tu_{i2}) I(\mbox{\boldmath $ \alpha $}_{j})^{-1} \end{array} \right] \right), \label{asymp} \end{eqnarray} where $I(\mbox{\boldmath $ \beta $}_{j}) = \mathbb{E} (\partial u_{i1}/\partial \mbox{\boldmath $ \beta $}_{j})$ and $I(\mbox{\boldmath $ \alpha $}_{j}) = \mathbb{E} (\partial u_{i2}/\partial \mbox{\boldmath $ \alpha $}_{j})$. A cautionary note is that this derivation treats the sample medians in the two groups, $\widetilde{m}_{g(i)j}$, as if they were known quantities. When the sample size is small, however, the Wald test based on the asymptotic covariance matrix that ignores the variability associated with the sample medians appears to inflate the type I error, particularly at the small p-values needed for high-dimensional testing. Table 1 shows the empirical type I error rates at nominal p-value cutoffs of 0.05 and 0.001 in simulated datasets from 4 different models. At the 0.05 level, the standard 2-df test already shows inflated type I error rates (numbers in red). The inflation worsens at the 0.001 level, reaching as high as 5-fold for some models. The estimating equation derivation can account for the extra variability due to estimation of the sample medians. Observe that the estimating function for $\mbox{\boldmath $ \alpha $}_{j}$ can be decomposed into two components, \begin{eqnarray} & & \frac{1}{\sqrt{n}}\sum_i u_{i2} \nonumber \\ &=& \frac{1}{\sqrt{n}} \left\{ \sum_{i: Y_{ij}\geq \widetilde{m}_{g(i)j}} \mbox{X}_{i} (Y_{ij}- \widetilde{m}_{g(i)j} - \mbox{X}_{i}\mbox{\boldmath $ \alpha $}_{j} ) + \sum_{i: Y_{ij}< \widetilde{m}_{g(i)j}} \mbox{X}_{i} (\widetilde{m}_{g(i)j} - Y_{ij} - \mbox{X}_{i}\mbox{\boldmath $ \alpha $}_{j} ) \right\} \nonumber \\ &=& \frac{1}{\sqrt{n}} \left\{ \sum_{i: Y_{ij}\geq \widetilde{m}_{g(i)j}} \mbox{X}_{i} (Y_{ij}- m_{g(i)j} - \mbox{X}_{i}\mbox{\boldmath $ \alpha $}_{j} ) + \sum_{i: Y_{ij}< \widetilde{m}_{g(i)j}} \mbox{X}_{i} ( m_{g(i)j} - Y_{ij} - \mbox{X}_{i}\mbox{\boldmath $ \alpha $}_{j} ) \right\} \label{var1} \\ & & + \frac{1}{\sqrt{n}} \sum_i \mbox{X}_{i} (\widetilde{m}_{g(i)j} - m_{g(i)j}) \left\{ I(Y_{ij} < \widetilde{m}_{g(i)j}) - I(Y_{ij} \geq \widetilde{m}_{g(i)j}) \right\}. \label{var2} \end{eqnarray} The first component (\ref{var1}) is the usual score function as if $m_{g(i)j}$ were known, and the second component (\ref{var2}) is the extra variability due to estimation of the medians. The asymptotic distribution of the sample median is well established to be a normal distribution \citep{VanderVaart2005}.
The sample median maximizes the group-wise objective function $- \sum_{g(i)} |Y_{ij} - m_{g(i)j}|$, with the following asymptotic linear expansion \begin{eqnarray} \sqrt{n_g} (\widetilde{m}_{g(i)j} - m_{g(i)j}) &=& -\frac{1}{2f(m_{g(i)j})} \frac{1}{\sqrt{n_g}} \sum -\mbox{sign}(Y_{ij} - m_{g(i)j}) + o_p(1) \nonumber \\ &\rightarrow_d& \mathcal{N}\left(0, \frac{1}{\{2f(m_{g(i)j})\}^2}\right), \nonumber \end{eqnarray} where $n_g$ is the sample size for group $g$ and $f(m_{g(i)j})$ is the probability density of $Y_{ij}$ at its group median. Using these derivations, the sandwich covariance matrix of ($\widehat{\mbox{\boldmath $ \beta $}}_{j}$, $\widehat{\mbox{\boldmath $ \alpha $}}_{j}$) as expressed in (\ref{asymp}) can be modified to account for the estimation of the group sample medians. In the {\bf R} package \texttt{DMtest}, we implemented kernel density estimation for $f(m_{g(i)j})$, which appears to be satisfactory in simulation studies (the corrected 2-df test in Table 1). \subsection{Constrained hypothesis incorporating hypervariable cancer methylation} As shown in Figure~\ref{volcano_plot_six_cancers}, the cancer CpG methylation is almost always more variable than the normal CpG methylation. The standard 2-df hypothesis test using H$_{1a}$ may therefore not be optimal in terms of power. Two constrained alternative hypotheses can be constructed: \begin{equation} \mbox{H}_{0}: \beta_{1j}=0, \alpha_{1j}=0 \mbox{\hspace{10pt} versus \hspace{10pt}}\mbox{H}_{1b}: \beta_{1j} \neq 0 , \alpha_{1j} \geq 0. \label{hypothesis2} \end{equation} \begin{equation} \mbox{H}_{0}: \beta_{1j}=0, \alpha_{1j}=0 \mbox{\hspace{10pt} versus \hspace{10pt}}\mbox{H}_{1c}: \beta_{1j} \geq 0 , \alpha_{1j} \geq 0. \label{hypothesis3} \end{equation} H$_{1b}$ tests for differential mean methylation and increased variability in cancer samples, reducing the parameter space under the alternative hypothesis by half. H$_{1c}$ further restricts the parameter space, testing for increased mean and increased variability in tumor samples. This is motivated by biomarker studies that specifically target CpGs with low or no methylation in normal samples and increased methylation in cancer samples. The one-sided likelihood ratio test (LRT) for the constrained hypothesis (\ref{hypothesis3}) has been studied for decades in the statistical literature, mostly for clinical trials with multiple study endpoints \citep{Kudo63,Perlman69}. The asymptotic null distributions of the one-sided statistics are generally difficult to obtain, and maximizing the likelihood under the one-sided constraints can be computationally demanding for genome-wide testing. Because the constraints involve only two parameters, we were able to develop an efficient algorithm to compute the LRT under the constrained hypotheses H$_{1b}$ and H$_{1c}$, as implemented in the \texttt{DMVC} function in the {\bf R} package \texttt{DMtest}. To test the constrained hypothesis $\mbox{H}_{1b}$ or $\mbox{H}_{1c}$, the likelihood ratio test (LRT) can be conducted by treating ($\hat{\beta}_{1j}$, $\hat{\alpha}_{1j}$) as a single bivariate observation from its estimated asymptotic normal distribution.
Specifically, the LRT statistic is the difference of the log likelihood under the null hypothesis ($\alpha_{1j}=\beta_{1j}=0$) and the log likelihood under the constrained alternative hypothesis (either $\mbox{H}_{1b}$ or $\mbox{H}_{1c}$), expressed as \[ (\hat{\beta}_{1j}, \hat{\alpha}_{1j})^T\widehat{\Sigma}^{-1}(\hat{\beta}_{1j}, \hat{\alpha}_{1j}) - (\hat{\beta}_{1j} -\tilde{\beta}_{1j}, \hat{\alpha}_{1j}-\tilde{\alpha}_{1j})^T\widehat{\Sigma}^{-1}(\hat{\beta}_{1j}-\tilde{\beta}_{1j}, \hat{\alpha}_{1j}-\tilde{\alpha}_{1j}), \] where $\tilde{\beta}_{1j}$ and $\tilde{\alpha}_{1j}$ are maximal likelihood estimates under the constrained parameter space. Because of the two dimensional parameter space and the simplicity of this likelihood, an algebraic solution for the constrained MLE under H$_{1b}$ or H$_{1c}$ can be obtained as follows. If $(\hat{\beta}_{1j}, \hat{\alpha}_{1j})$ falls in the constrained parameter space, then the likelihood ratio statistic reduces to $(\hat{\beta}_{1j}, \hat{\alpha}_{1j})^T\widehat{\Sigma}^{-1}(\hat{\beta}_{1j}, \hat{\alpha}_{1j})$. If $(\hat{\beta}_{1j}, \hat{\alpha}_{1j})$ falls out of the constrained parameter space, since $(\hat{\beta}_{1j} -\tilde{\beta}_{1j}, \hat{\alpha}_{1j}-\tilde{\alpha}_{1j})^T\widehat{\Sigma}^{-1}(\hat{\beta}_{1j}-\tilde{\beta}_{1j}, \hat{\alpha}_{1j}-\tilde{\alpha}_{1j})$ is a convex function, the minimizer in the constrained parameter space is achieved on the boundary. \begin{itemize} \item For H$_{1b}$, the boundary is $\alpha_{1j}= 0$; The corresponding minimizer for H$_{1b}$ can be found on $\beta_{1j}= 0$, $\alpha_{1j}\geq 0$, by solving a quadratic function. \item For H$_{1c}$, the boundary is $\beta_{1j}\geq 0$, $\alpha_j =0$ and $\beta_{1j}= 0$, $\alpha_j \geq 0$, namely the non-negative axis of $\beta_{1j}$ and $\alpha_{1j}$. For H$_{1c}$, it is sufficient to first find the local minimizer in $\beta_{1j}\geq 0$, $\alpha_{1j}= 0$ and the local minimizer in $\beta_{1j}= 0$, $\alpha_{1j}\geq 0$. The constrained global MLE is the minima with the smaller value of the objective function. \end{itemize} Compared to the standard numerical algorithm for constrained maximization, this algebraic solution drastically reduces computation time and enables genome-wide CpG testing. \begin{figure} \caption{Constrained Hypothesis Test with Identity Covariance} \label{one_sided_figure} \end{figure} The LRT statistics for testing constrained hypotheses typically take a Chi-Bar-square distribution. Chapter 3 of \cite{Silvapulle2005} gives the general theory for the constrained hypothesis testing for multivariate normal mean. To illustrate, let us assume for simplicity the asymptotic covariance ${\Sigma}$ is the identity matrix. As shown in Figure 1 for the constrained alternative hypothesis H$_{1c}$, the signs of $\hat{\beta}_{1j}$ and $\hat{\alpha}_{1j}$ determine the constrained MLEs, ($\tilde{\beta}_{1j}, \tilde{\alpha}_{1j}$). Specifically, if $\hat{\beta}_{1j}$ and $\hat{\alpha}_{1j}$ fall in the first quadrant, then $\tilde{\beta}_{1j} =\hat{\beta}_{1j}$, $ \tilde{\alpha}_{1j} =\hat{\alpha}_{1j} $; if $\hat{\beta}_{1j}$ and $\hat{\alpha}_{1j}$ fall in the second quadrant, $\tilde{\beta}_{1j} = \hat{\beta}_{1j}$, $\tilde{\alpha}_{1j} = 0$; if $\hat{\beta}_{1j}$ and $\hat{\alpha}_{1j}$ fall in the third quadrant, $\tilde{\beta}_{1j} = 0$, $\tilde{\alpha}_{1j} = 0$; if $\hat{\beta}_{1j}$ and $\hat{\alpha}_{1j}$ fall in the fourth quadrant, $\tilde{\beta}_{1j} = 0$, $\tilde{\alpha}_{1j} =\hat{\alpha}_{1j}$. 
Therefore the LRT statistic becomes, \[ \text{LRT}=\begin{cases} \hat{\alpha}_{1j}^2+\hat{\beta}_{1j}^2 & \text{if ($ \hat{\alpha}_{1j}, \hat{\beta}_{1j}$) in Q I;} \\ \hat{\beta}_{1j}^2 & \text{if ($ \hat{\alpha}_{1j}, \hat{\beta}_{1j}$) in Q II;} \\ 0 & \text{if ($ \hat{\alpha}_{1j}, \hat{\beta}_{1j}$) in Q III;} \\ \hat{\alpha}_{1j}^2 & \text{if ($ \hat{\alpha}_{1j}, \hat{\beta}_{1j}$) in Q IV.} \end{cases} \] \noindent Corresponding to the four quadrants, the null distribution of LRT is therefore a mixture of $\chi_2^2, \chi_1^2, \chi_0^2 \mbox{ (a point mass)}$. When the asymptotic variance matrix $\Sigma_j$ is known (typically not an identity matrix), a geometric rotation will conclude that the null distribution for this LRT statistic under constrained hypothesis testing takes a Chi-Bar-square distribution\citep{Silvapulle2005}, namely \[ \mbox{Pr}(\mbox{LRT}_j\leq c| \mbox{H}_0) = q \mbox{Pr}(\chi_0^2 \leq c) + 0.5 \mbox{Pr}(\chi_1^2 \leq c) + (0.5-q) \mbox{Pr}(\chi_2^2 \leq c), \] where $q = (2\pi)^{-1} \cos^{-1}(\rho_j)$, $\rho_j$ is the correlation between $\hat{\beta}_{1j}$ and $\hat{\alpha}_{1j}$ for the $j^{th}$ marker. For the constrained hypothesis H$_{1b}$, the distribution of LRT is much simpler because the constraint is on one parameter only. Depending on where $\hat{\beta}_{1j}$ and $\hat{\alpha}_{1j}$ falls, the LRT is either a $\chi_2^2$ or $\chi_1^2$ distribution \citep{Silvapulle2005}. It can be shown that the null distribution for H$_{1b}$ can be expressed as \[ \mbox{Pr}(\mbox{LRT}_j\leq c| H_0) =0.5 \mbox{Pr}(\chi_1^2 \leq c) + 0.5 \mbox{Pr}(\chi_2^2 \leq c). \] \subsection{Comparison to pAUC for biomarker discovery} To capture tumor heterogeneity when comparing gene expressions between cancer and normal samples, it was proposed to use pAUC as the discriminatory measure to select genes from microarray experiments \citep{Pepe2003}. In our notation, let $Y_{ij}^C$ denote methylation values for normal (control) samples and $Y_{ij}^D$ denote methylation values for disease (cancer) samples. Let $F_C$ and $F_D$ denote the corresponding cumulative distribution functions, and $n_C$ and $n_D$ denote the corresponding sample sizes. As shown in Figure 3, the ROC curve characterizes the separation of distributions of $Y_{ij}^C$ and $Y_{ij}^D$, and the pAUC focuses on the upper quantile range of normal sample values. Note that pAUC can be also considered one-sided metric, because it targets at good sensitivity for high specificity values, assuming the marker values for cancer cases are more likely to be greater than those in normal cases. \begin{eqnarray} \mbox{ROC}(t) &=& 1- F_D \{F_C^{-1}(1-t)\}, \nonumber \\ \mbox{pAUC}(t) &=& \int_0^t \mbox{ROC}(t) dt \nonumber . \end{eqnarray} Cancer early-detection biomarkers often require a high specificity (low false positive rate) and a clinically meaningful sensitivity, the area in the ROC curve captured by pAUC. \begin{figure} \caption{ROC and pAUC} \label{pAUC} \end{figure} While it is informative to use pAUC to rank genes, as pAUC is arguably the most clinically relevant measure, it is critical to develop a corresponding hypothesis testing procedure that can gauge the significance level of an observed pAUC when testing tens of thousands of genes simultaneously. Under the null hypothesis that there is no difference between cancer and normal samples, pAUC($t_0$)=$\frac{t_0^2}{2}$, where $t_0$ is the target false positive rate. 
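To make the ROC and pAUC definitions above concrete, the following minimal Python sketch computes an empirical ROC curve and pAUC($t_0$) for a single marker from case and control values. The function name, the grid resolution, and the use of \texttt{numpy} are our own illustration; this is not part of the \texttt{DMtest} package.
\begin{verbatim}
import numpy as np

def empirical_roc_pauc(y_cases, y_controls, t0=0.2, grid=1000):
    """Empirical ROC(t) and pAUC(t0) for one marker (larger value = disease)."""
    y_cases = np.asarray(y_cases, dtype=float)
    y_controls = np.asarray(y_controls, dtype=float)
    t_grid = np.linspace(0.0, 1.0, grid + 1)
    # Threshold q0 = F_C^{-1}(1 - t) for each false positive rate t.
    thresholds = np.quantile(y_controls, 1.0 - t_grid)
    # ROC(t) = 1 - F_D(q0): sensitivity of the rule "call disease if Y > q0".
    roc = np.array([(y_cases > q).mean() for q in thresholds])
    # pAUC(t0): Riemann-sum approximation of the integral of ROC(t) over [0, t0].
    keep = t_grid <= t0
    pauc = float(np.sum(roc[keep]) * (t_grid[1] - t_grid[0]))
    return roc, pauc

# Under the null (identical distributions), pAUC(0.2) should be near 0.2^2/2 = 0.02.
rng = np.random.default_rng(0)
_, pauc_null = empirical_roc_pauc(rng.normal(size=50), rng.normal(size=50))
print(pauc_null)
\end{verbatim}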
The nonparametric estimate of pAUC($t_0$) is expressed as \begin{equation} \mbox{p}\widehat{\mbox{AUC}}(t_0) = \frac{\sum_i \sum_j \delta_i \delta_j I(Y_{ij}^D > Y_{ij}^C, Y_{ij}^C > \hat{F}^{-1}_C(1-t_0))}{\sum_i \sum_j \delta_i \delta_j}, \nonumber \end{equation} where $\delta_i$ and $\delta_j$ are point masses of observed values in cancer and normal samples. Note that the nonparametric estimate of pAUC($t_0$) is a sum of indicator functions that is evaluated in the high-specificity area of the data, therefore can be variable for small sample sizes. Assume $n_D/n \rightarrow \lambda \in (0,1)$ as the sample size $n \rightarrow \infty$. By Theorem 3 in Wang and Huang \citep{Wang2019}, as $n \rightarrow \infty, \sqrt{n}[\mbox{p}\widehat{\mbox{AUC}}(t_0)-\mbox{pAUC}(t_0)]$ converges to a normal random variable with mean zero and variance, \begin{align*} \sigma_{\mbox{pAUC}(t_0)}^2 = & \frac{1}{\lambda}\mbox{Var}(J_D)+ \frac{1}{1-\lambda}\mbox{Var}(J_C) \\ &+ \frac{1}{1-\lambda}\Big\{[1-F_D(q_0)]^2\mbox{Var}\big[H_C(q_0)\big]+2[1-F_D(q_0)]\mbox{Cov}\big[J_D,H_C(q_0)\big]\Big\}, \end{align*} where $q_0=F_C^{-1}(1-t_0), J_D=\mbox{Pr}\big(Y^C<Y^D,Y^C\in (q_0,\infty)|Y^C\big), J_C=\mbox{Pr}\big(Y^D>Y^C,Y^C\in (q_0,\infty)|Y^D\big)$ and $H_C(q)=I(Y^C<q)$. We used the large-sample variance estimate provided above to compute a p-value. One challenge for the nonparametric pAUC estimate in high-dimensional hypothesis testing is that its p-value needs to be accurate even at the extremely small level, e.g., $10^{-7}$ or lower. This requires that the point estimate and the asymptotic variance work well even in small samples. However, if the target $t_0$ is 0.05 (a relatively high type I error rate), pAUC evaluation restricts to the upper 5\% quantile range of the normal sample values, which amounts 2$\sim$3 samples if the total number of samples is 50, potentially leading to unstable estimates. In typical biomarker discovery studies, the number of normal or control samples may be small. The tail of the distribution for the nonparametric estimates of pAUC may not be approximated by the asymptotic distribution. We will show next the poor finite sample behavior of pAUC in simulation experiments (Table 1 and Figure 4). \section{Results} \label{sec3} \subsection{Simulation study} Seven methods were compared in simulated datasets: the two-sample $t$-test with unequal variances, Levene test for equal variances, the test for pAUC(0.2), the standard 2-df test for equal means and equal variances (2-df naive), the 2-df test accounting for variability of sample medians (2-df corrected), and the two proposed constrained joint tests for H$_{1b}$ and H$_{1c}$. To evaluate the performance of controlling the false positive rate, four probabilistic models under the null hypothesis were generated: a normal distribution $\mathcal{N}(0,1)$, a beta(10,90) distribution with mean 0.1, a chi-square distribution with three degrees of freedom $\chi^2_3$, a beta(10,90) distribution with 5\% outliers equally distributed across cancer and control samples (Beta$^+$ in Table \ref{type_I_error_simulations}). Equal numbers of cancer and control samples were generated, and the number of total samples increase from 50 (half cases and half controls) to 100, 250, and 500. The type I error rates at $p=$0.05 and $p=$0.001 were evaluated in 50,000 simulated datasets. Table \ref{type_I_error_simulations} shows the empirical type I error rates for 4 null models with increasing sample sizes from 25 per group to 250 per group. 
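For reference, the structure of this null simulation can be sketched as follows. This is a simplified, hypothetical version (one null model, one comparator test, far fewer replicates) and not the code used to produce Table \ref{type_I_error_simulations}.
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def draw_null(model, n):
    """One null dataset: cases and controls share the same distribution."""
    if model == "normal":
        return rng.normal(size=n), rng.normal(size=n)
    if model == "beta":
        return rng.beta(10, 90, size=n), rng.beta(10, 90, size=n)
    if model == "chisq":
        return rng.chisquare(3, size=n), rng.chisquare(3, size=n)
    raise ValueError(model)

def type_one_error(model, n_per_group, n_sim=50_000, cutoffs=(0.05, 0.001)):
    pvals = np.empty(n_sim)
    for b in range(n_sim):
        y_case, y_ctrl = draw_null(model, n_per_group)
        # Welch t-test (unequal variances), one of the comparator methods.
        pvals[b] = stats.ttest_ind(y_case, y_ctrl, equal_var=False).pvalue
    return {c: float((pvals < c).mean()) for c in cutoffs}

print(type_one_error("chisq", n_per_group=25, n_sim=2_000))
\end{verbatim}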
For the p-value cut-off 0.05, all seven methods except the naive 2-df test perform well and appear to properly control the false positive rate when sample size gets to at least 50 cases and 50 controls. The naive 2-df test has an inflated type I error rate when n=25 per group, for normal, beta and chi-square distributions (in red color). Its performance improves when n increases. However, when the p-value cut-off decreases to 0.001, the observed false positive rates for pAUC and the naive 2-df test appear to deteriorate rapidly, reaching as high as 0.0097 for pAUC (nearly 10 fold inflation) and 0.005 for the naive 2-df test (5 fold inflation). Their performance are improved with the increasing sample size, though pAUC still have a minor degree of inflation even when there are 250 cases and 250 controls. This could be a serious problem for use of pAUC in high-dimensional testing, as we further show next in Figure \ref{qqplot_simulations}. The 2-df test correcting for estimation of sample medians has a substantially improved performance, particularly when sample size is small. All other methods include the two proposed constrained tests yield a much better performance in controlling the type I error rate at p=0.001. \begin{sidewaystable} \caption{\label{type_I_error_simulations} Empirical type I error rates of the proposed tests and comparator tests in 50,000 simulation datasets, when normal p-value cut-off is set to be 0.05 or 0.001.} \centering {\begin{tabular}{@{}lllccccccc@{}} \hline & & & & & 2-df (H$_{1a}$) & 2-df (H$_{1a}$) & pAUC & Constrained & Constrained \\ & & n & t & Levene & naive & corrected &(0.2) & H$_{1b}$ & H$_{1c}$ \\ \hline p-value=0.05& Normal & 25/25 & 0.0493 &0.0481 & \textcolor{red}{0.0751} &0.0463 &0.0593 &0.0521 &0.0474\\ & & 50/50 & 0.0492 &0.0495 &\textcolor{red}{0.0601} &0.0462 &0.0513 &0.0494 &0.0484\\ & & 100/100 & 0.0506 &0.0473 &0.0540 &0.0470 &0.0482 &0.0497 &0.0486\\ & & 250/250 & 0.0495 &0.0482 &0.0509 &0.0481 &0.0480 &0.0493 &0.0478\\ &Beta & 25/25 & 0.0499 &0.0494 &\textcolor{red}{0.0740} &0.0463 &0.0592 &0.0520 &0.0469\\ & &50/50 & 0.0496 &0.0497 &\textcolor{red}{0.0600} &0.0467 &0.0522 &0.0499 &0.0490\\ & &100/100 & 0.0513 &0.0487 &0.0541 &0.0470 &0.0475 &0.0499 &0.0475\\ & &250/250 & 0.0499 &0.0491 &0.0512 &0.0487 &0.0474 &0.0492 &0.0475\\ &Chisq & 25/25 & 0.0452 &0.0507 &\textcolor{red}{0.0672} &0.0380 &0.0583 &0.0405 &0.0402\\ & &50/50 & 0.0483 &0.0519 &0.0580 &0.0427 &0.0512 &0.0445 &0.0447\\ & &100/100 & 0.0507 &0.0512 &0.0536 &0.0463 &0.0491 &0.0478 &0.0471\\ & &250/250 & 0.0498 &0.0506 &0.0512 &0.0483 &0.0484 &0.0493 &0.0495\\ &Beta$^{+}$ & 25/25 &0.0294 &0.0290 &0.0557 &0.0358 &0.0568 &0.0415 &0.0413\\ & &50/50 & 0.0437 &0.0464 &0.054 &0.0434 &0.0448 &0.0465 &0.0469\\ & &100/100 & 0.0480 &0.0493 &0.0521 &0.0471 &0.0420 &0.0480 &0.0484\\ & &250/250 & 0.0507 &0.0504 &0.0512 &0.0495 &0.0447 &0.0487 &0.0490\\ \hline p-value=0.001 & Normal & 25/25 & 0.0010 &7e-04 &\textcolor{red}{0.0050} &0.0021 &\textcolor{red}{0.0089} &0.0024 &0.0018\\ & & 50/50 & 7e-04 &9e-04 &\textcolor{red}{0.0021} &0.0012 &\textcolor{red}{0.0056} &0.0014 &0.0011\\ & & 100/100 & 9e-04 &9e-04 &0.0016 &0.0012 &\textcolor{red}{0.0034} &0.0011 &0.0011\\ & & 250/250 & 0.0011 &9e-04 &0.0011 &9e-04 &\textcolor{red}{0.0020} &0.0010 &9e-04\\ &Beta & 25/25 & 0.0010 &7e-04 &\textcolor{red}{0.0046} &0.0019 &\textcolor{red}{0.0097} &0.0022 &0.0018\\ & &50/50 & 8e-04 &0.0010 &\textcolor{red}{0.0022} &0.0011 &\textcolor{red}{0.0061} &0.0014 &0.0012\\ & &100/100 &0.0011 &6e-04 &0.0015 &0.0011 
&\textcolor{red}{0.0033} &0.0011 &9e-04\\ & &250/250 & 9e-04 &0.0012 &0.0013 &0.0011 &\textcolor{red}{0.0021} &0.001 &0.0010\\ &Chisq & 25/25 & 7e-04 &0.0011 &\textcolor{red}{0.0035} &0.0012 &\textcolor{red}{0.0088} &0.0013 &0.0013\\ & &50/50 & 6e-04 &0.0011 &\textcolor{red}{0.0023} &0.0013 &\textcolor{red}{0.0055} &0.0013 &0.0012\\ & &100/100 & 0.0011 &0.0011 &0.0016 &0.0011 &\textcolor{red}{0.0039} &0.0011 &0.0012\\ & &250/250 & 0.0010 &0.0011 &0.0011 &0.0010 &\textcolor{red}{0.0027} &0.0012 &0.0011\\ &Beta$^{+}$ & 25/25 & 1e-04 &1e-04 &\textcolor{red}{0.0025} &0.0011 &\textcolor{red}{0.0087} &0.0012 &0.0011\\ & &50/50 & 0.0001 &1e-04 &0.0010 &5e-04 &\textcolor{red}{0.0046} &7e-04 &7e-04\\ & &100/100 & 0.0004 &4e-04 &0.0010 &8e-04 &\textcolor{red}{0.0029} &9e-04 &9e-04\\ & &250/250 &0.0006 &6e-04 &9e-04 &8e-04 &0.0013 &0.0010 &9e-04\\ \hline \end{tabular}} \end{sidewaystable} We further investigated the performance of pAUC and the proposed constrained test for H$_{1c}$ in genome-wide CpG testing with $5 \times 10^5$ markers. Figure \ref{qqplot_simulations} shows the q-q plots for pAUC and the proposed test when the distribution for all markers is a chi-square distribution with three degrees of freedom $\chi^2_3$. When sample size is 50 cases and 50 controls, the pAUC test shows a drastically inflated type I error rate for p-values smaller than 0.01, severely deviating from the diagonal line. The error control is improved with larger sample sizes, though even 250 cases and 250 controls do not yield a diagonal q-q line. These results prove the limitation of the nonparametric estimate of pAUC for high-dimensional testing, as we discuss in Section 2.3, particularly when sample sizes for a biomarker discovery study is small. As a comparison, the proposed constrained test delivered consistently well-behaved q-q lines in all three sample sizes (Figure \ref{qqplot_simulations}). \begin{figure} \caption{A comparison of q-q plots for $5 \times 10^5$ markers under different sample sizes. (a) 50 cases and 50 controls, DMVC$^{+} \label{qqplot_simulations} \end{figure} To evaluate the power of the proposed constrained method in a single hypothesis test, four scenarios were generated with 50 cases and 50 controls, each with some distributional differences between cancer samples and control samples. We included pAUC in this power comparison despite its problem in high-dimensional testing, because the p-value cut-off 0.05 for pAUC still yields a valid type I error rate. Methylation M values were generated by a Gaussian distribution or a mixture of Gaussian distributions. A parameter $\tau \in$ [0,0.7] was used to control the magnitude of the differences: the first scenario is all cancer samples having both mean and variance increase, specifically $\mathcal{N}(\tau,1+\tau)$ vs $\mathcal{N}(0,1)$ ; the second scenario is all cancer samples having increased variance but no difference in mean, specifically $\mathcal{N}(0,1+\tau)$ vs $\mathcal{N}(0,1)$; the third scenario is all cancer samples having mean increase but no difference in variance , specifically $\mathcal{N}(\tau,1)$ vs $\mathcal{N}(0,1)$; the fourth scenario is 25\% cancer samples with increasing mean and variance, specifically a mixture of 75\% $\mathcal{N}(0,1)$ and 25\% $\mathcal{N}(3\tau,1+3\tau)$. Figure \ref{power_plot_simulations} shows the power to reject the null hypothesis (no differential methylation in mean nor variance) in 2,000 simulated datasets for the proposed constrained joint tests and the benchmark methods. 
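The proposed constrained test evaluated in these comparisons can be sketched as follows for standardized estimates with identity covariance, i.e., the quadrant argument described earlier; in \texttt{DMtest} the estimated covariance matrix of ($\hat{\beta}_{1j}$, $\hat{\alpha}_{1j}$) is used instead, so this sketch is illustrative rather than the \texttt{DMVC} implementation.
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def constrained_lrt_h1c(beta_hat, alpha_hat):
    """One-sided joint test of H0: beta = alpha = 0 vs H1c: beta >= 0, alpha >= 0.

    Identity-covariance case: the constrained MLE is the projection of
    (beta_hat, alpha_hat) onto the first quadrant, giving the quadrant rule
    for the LRT statistic. The null is the chi-bar-square mixture with
    weights (1/4, 1/2, 1/4) on (chi2_0, chi2_1, chi2_2); with correlation
    rho, the first and last weights become q = arccos(rho)/(2*pi) and 1/2 - q.
    """
    if beta_hat >= 0 and alpha_hat >= 0:     # quadrant I: unconstrained fit
        lrt = beta_hat ** 2 + alpha_hat ** 2
    elif beta_hat >= 0:                      # alpha_hat < 0: alpha set to 0
        lrt = beta_hat ** 2
    elif alpha_hat >= 0:                     # beta_hat < 0: beta set to 0
        lrt = alpha_hat ** 2
    else:                                    # both negative: point mass at 0
        lrt = 0.0
    q = 0.25
    p_value = q * (lrt <= 0) + 0.5 * chi2.sf(lrt, df=1) + (0.5 - q) * chi2.sf(lrt, df=2)
    return lrt, float(p_value)

print(constrained_lrt_h1c(1.8, 2.1))    # both effects positive
print(constrained_lrt_h1c(-0.4, 2.1))   # only the variance effect positive
\end{verbatim}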
In all scenarios, the joint constrained test for H$_{1c}$ (increased mean and increased variance) consistently delivered the highest power, owing to its flexibility in detecting diverse signals (as compared to the t-test or the Levene test) and its focus on one-sided alternative hypotheses (as compared to the 2-df test). Further restricting the alternative parameter space increases the power, as seen from the comparison of the constrained tests for H$_{1b}$ and H$_{1c}$. The test for pAUC(0.2) performs competitively in the first scenario, where both the mean and the variance increase, and in the fourth scenario, where a subgroup has increased mean and variance. However, when there is only increased mean or only increased variance, pAUC is clearly inferior to the proposed constrained test. \begin{figure} \caption{A comparison of power between the proposed joint constrained tests and the existing tests in simulated datasets. Benchmark methods include the Levene test, the t-test, and the 2-df test. (a) Cancer samples have increased mean and increased variance. (b) Cancer samples have increased variance only. (c) Cancer samples have increased mean only. (d) A portion (25\%) of cancer samples have increased mean and increased variance. } \label{power_plot_simulations} \end{figure} \subsection{Application to TCGA data} Methylome data from the Infinium Methylation 450K assay for 2781 fresh-frozen tumor tissue samples were obtained from TCGA for 6 major cancers, namely prostate cancer (PCa), colorectal cancer (CRC), lung squamous cell carcinoma (LUSC), lung adenocarcinoma (LUAD), breast cancer (BRCA), and liver hepatocellular carcinoma (HCC), together with 308 matched adjacent normal samples. The sample sizes for each cancer and its matched normal samples are shown in Table \ref{all_DMresults}. Differential variability was first examined, comparing the six major cancers to their corresponding normal tissue samples. Low-variability CpG sites, defined as those with standard deviation of beta values $<$0.05 in the combined cancer and matched normal samples, were considered noninformative and filtered out before analyses. A regression-based approach was undertaken to assess homogeneity of variances of methylation M values between cancer and normal samples, essentially the classical Levene test for differential variability \citep{Brown1974,Phipson2014}, adjusting for age (and gender when applicable). Figure \ref{volcano_plot_six_cancers} shows the volcano plots for genome-wide DVC test results in the six cancers, where the x-axis is the difference in the standard deviation of methylation beta values. At the family-wise error rate (FWER) 0.05, there are 21220 (PCa), 54734 (CRC), 76455 (LUSC), 31817 (LUAD), 120073 (BRCA), and 132797 (HCC) significant DVCs for the six cancers, respectively. Nearly all DVCs show increased variability in cancer samples ($>$99\% in BRCA, CRC, HCC, LUSC and LUAD, $>$95\% in PCa). Compared to the number of significant DMCs (Table \ref{all_DMresults}), liver cancer has more DVCs and breast cancer has a similar number of DVCs, probably because of several distinctive subtypes in these two cancers.
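The regression-based test for differential variability used above can be sketched as follows; this is a schematic Levene-type version based on ordinary least squares (via the \texttt{statsmodels} package), with illustrative variable names, and is not the \texttt{DMtest} implementation.
\begin{verbatim}
import numpy as np
import statsmodels.api as sm

def dvc_levene_regression(m_values, is_tumor, covariates=None):
    """Levene-type test for differential variability of methylation M values.

    Regress absolute deviations from the group medians on the tumor
    indicator (plus covariates such as age/gender) and test its coefficient.
    """
    m_values = np.asarray(m_values, dtype=float)
    is_tumor = np.asarray(is_tumor, dtype=int)
    # Absolute deviation of each sample from its own group median.
    abs_dev = np.empty_like(m_values)
    for g in (0, 1):
        idx = is_tumor == g
        abs_dev[idx] = np.abs(m_values[idx] - np.median(m_values[idx]))
    design = is_tumor.reshape(-1, 1)
    if covariates is not None:
        design = np.column_stack([design, covariates])
    design = sm.add_constant(design)
    fit = sm.OLS(abs_dev, design).fit()
    # Coefficient 1 is the tumor effect on variability (hypervariable if > 0).
    return fit.params[1], fit.pvalues[1]

rng = np.random.default_rng(2)
m = np.concatenate([rng.normal(0, 1, 50), rng.normal(0, 2, 300)])  # normals, tumors
grp = np.concatenate([np.zeros(50), np.ones(300)])
age = rng.normal(60, 10, size=350)
print(dvc_levene_regression(m, grp, covariates=age.reshape(-1, 1)))
\end{verbatim}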
\begin{sidewaystable} \caption{Results of DMC, DVC and two constrained tests for the six cancers in TCGA.} \label{all_DMresults} \centering {\begin{tabular}{@{}llcccccc@{}} \hline & & PCa & BRCA & CRC & HCC & LUSC & LUAD \\ \hline Data & \# tumor samples & 498& 782 & 296 & 377& 370 & 548 \\ & \# normal samples & 50& 96 & 38 & 50 & 42 & 32 \\ & \# probes pass filtering & 216605 & 242402 & 254297 & 257721 & 262820 & 241668\\ DMC & \# significant CpGs & 94411 & 120345 & 70114 & 94284 & 110930 & 61590 \\ & \# significant hypermethylated CpGs & 58032 & 63219 & 35092 & 10757 &32971 & 29601 \\ & \# significant hypermethylated CpGs & & & & & & \\ & \hspace{10pt} with mean beta in normals$<$0.1 & 8755 & 16210 & 8138 & 3064 & 10058 & 5228\\ DVC & \# significant CpGs & 21220 & 120073 & 54734 & 132797 & 76455 & 31817\\ & \# significant hypervariable CpGs & 20446 & 119684 & 54660 & 132685 & 76438 & 31813\\ & \# significant DVC and significant DMC & 16014 & 76270 & 24023 & 69235 & 45066 & 15336\\ & Proportion of DVC not significant for DMC & 25\% & 36\% & 56\% & 48\% & 41\% & 52\%\\ & OR(DMC, DVC) & 4.6 & 3.1 & 2.6 & 4.3 & 2.6 & 3.3 \\ Constrained H$_{1b}$& \# significant CpGs & 108093 & 124072 & 107598 & 165292 & 175967 & 179143 \\ Constrained H$_{1c}$& \# significant CpGs & 70052 & 149055 & 94534 & 141082 & 100032 & 73440 \\ & \# significant CpGs with low beta in normal& 10773 & 18041 & 10378 & 9316 &12846 & 7446\\ \hline \end{tabular}} \end{sidewaystable} A large portion of significant DVCs were not detected as significant DMCs, varying from 25\% to 53\% across 6 cancers (Table \ref{all_DMresults}), which suggests that testing for DVCs can increase the chance of detecting cancer aberrant methylation beyond DMCs. Figure \ref{four_examples} shows four examples of significant DVCs with low methylation in normal samples, but not detected as significant DMCs. In all CpG examples, violin plots for beta values and the density plots for M values show that there is a hypermethylated subgroup separating from the rest of samples that have similar methylation as the normal samples. These CpGs indexing heterogenous cancer subgroups may be better detected by testing for differential variability, since the p-values for testing DVCs are typically much smaller than the p-values for DMC. Interestingly, every one of these CpGs can discriminate cancer samples from normal samples with AUC$>$0.7, achieving a sensitivity $>$0.4 at nearly 100\% specificity. These DVCs have potential to delineate the heterogeneity in cancer methylome, and to be candidate cancer early detection markers when combined in a multi-marker panel. \begin{figure} \caption{Four examples of significant DVC but not significant DMC at FWER $<$ 0.05. Violin plots, density plots and ROC curves were shown for each chosen CpG. (a) cg17080504 for prostate cancer. (b) cg00566635 for colorectal cancer. (c) cg24578679 for lung adenocarcinoma. (d) cg02679809 for lung squamous cell carcinoma. } \label{four_examples} \end{figure} We used the two constrained joint tests for differential mean and increased variable (hypervariable) as expressed in H$_{1b}$ and H$_{1c}$ for the six TCGA cancer-normal comparisons. For testing H$_{1b}$, a substantial higher amount of CpGs show FWER $<$0.05 than the numbers for DMC, e.g., nearly three times as much as DMCs for LUAD, 50\% more for LUSC. 
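A minimal sketch of how genome-wide calls at FWER 0.05 and the DMC/DVC overlap odds ratio reported in Table \ref{all_DMresults} could be tabulated from per-CpG p-values is given below; the Bonferroni adjustment and the array names are our own assumptions for illustration, not necessarily the exact procedure used here.
\begin{verbatim}
import numpy as np

def fwer_calls(pvals, alpha=0.05):
    """Bonferroni familywise error control: significant if p < alpha / m."""
    pvals = np.asarray(pvals, dtype=float)
    return pvals < alpha / pvals.size

def dmc_dvc_overlap(p_dmc, p_dvc, alpha=0.05):
    """Cross-tabulate DMC and DVC calls and return counts and the odds ratio."""
    dmc, dvc = fwer_calls(p_dmc, alpha), fwer_calls(p_dvc, alpha)
    a = int(np.sum(dmc & dvc))     # significant for both
    b = int(np.sum(dmc & ~dvc))    # DMC only
    c = int(np.sum(~dmc & dvc))    # DVC only
    d = int(np.sum(~dmc & ~dvc))   # neither
    odds_ratio = (a * d) / (b * c) if b * c > 0 else float("inf")
    return {"both": a, "dmc_only": b, "dvc_only": c, "neither": d, "OR": odds_ratio}
\end{verbatim}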
In biomarker discovery studies, the interest often centers on CpGs that are unmethylated in normal samples (mean beta $<$0.1), to protect specificity, but hypermethylated in cancer samples. Across the 6 comparisons, the constrained joint test for H$_{1c}$ identified substantially more significant hypermethylated CpGs than the standard DMC test, generally 20-40\% more and as much as a 3-fold increase for HCC (Table \ref{all_DMresults}). This is perhaps not surprising given the widespread increase of variances in cancer CpGs observed in Figure \ref{volcano_plot_six_cancers}. \section{Discussion} \label{sec4} Testing for differential methylation between two sample groups has been the analytical workhorse of methylation studies. The methodological contribution of this work is integrating differences of means and increased variances into joint constrained hypothesis tests, motivated by the observation that DVCs are predominantly hypervariable in all six cancers in TCGA (Figure \ref{volcano_plot_six_cancers}). This strategy improves the power to identify cancer-specific CpGs in a genome-wide interrogation. As illustrated by the TCGA data example, exploiting increased variances increases the yield of candidate CpG markers. When studying DVCs in the six common cancers from TCGA, one of the most interesting observations is that these DVCs often reflect heterogeneous subgroups of cancer patients (Figure \ref{four_examples}), which may not have been detected as DMCs with adequate significance after multiple-testing adjustment. This reiterates the importance of accounting for cancer heterogeneity and subgroups when studying molecular markers. Similar objectives have been pursued by using pAUC to detect the presence of cancer subgroups in gene expression data \citep{Pepe2003}. In simulations, we showed that an accurate variance estimate for pAUC typically requires a large sample size, so that testing pAUC in a high-dimensional setting may yield an inflated type I error rate. In contrast, the proposed joint test with constrained hypotheses has better small-sample behavior and consistently delivers the best power in diverse simulation scenarios. Another potential utility of the joint test of differential mean and increased variance for DNA methylation is to use the resulting CpGs for clustering analysis to define cancer subtypes. This objective is common to large-scale, genome-wide cancer methylation analyses, e.g., the TCGA analyses. The standard starting point for clustering analysis is to extract the top 5000 most variable CpGs and feed them into a clustering algorithm. The constrained tests based on the cancer-normal comparison may yield a more informative set of cancer-specific markers, because they may capture the subgroups shown in Figure \ref{four_examples}. \section*{Acknowledgments} The authors thank Janet Stanford and Ming Yu for their helpful discussions that led to an improved version of the paper. This work was supported by National Institutes of Health grant R01 CA222833, with computing support from National Institutes of Health grant S10OD028685. \subsection*{Conflict of interest} The authors declare no potential conflicts of interest. \section*{Software} \label{sec5} The code for implementing the joint constrained test for increased mean and increased variance is included in the \texttt{DMVC} function in the {\bf R} package \texttt{DMtest}, which is publicly accessible on CRAN ({\tt https://cran.r-project.org/web/packages/DMtest/index.html}). \end{document}
\begin{document} \maketitle \begin{abstract} We introduce generalized filtrations, with which we can represent situations in which some agents forget information at specific times. A generalized filtration is defined as a functor to a category $\mathbf{Prob}$ whose objects are all probability spaces and whose arrows are measurable functions satisfying an absolute continuity requirement \cite{AR_2019}. As an application of generalized filtrations, we develop a binomial asset pricing model and investigate the valuation of financial claims along this type of non-standard filtration. \end{abstract} \section{Introduction} \label{sec:intro} It is well known that in stochastic process theory, and in theories built on it such as stochastic differential equations and stochastic control, the concept of a \newword{filtration}, which expresses information increasing over time, plays a central role. The idea that the world's information grows over time seems quite natural, but in a sense it is a divine perspective of omniscience and omnipotence; it is quite another matter to say that the amount of information held by an individual always increases with time. People forget and misunderstand. The information held by an individual may therefore shrink over time, or be retained as a form of experience different from the objective information. The purpose of the first half of this paper is to propose a kind of \newword{subjective filtration} that expresses the transition of such information. In this way we generalize the concept of filtration so that subjective situations can be handled, but the purpose of generalized filtrations is not limited to that. For example, consider a situation in which a black swan event, which nobody had imagined up to a certain point in time, suddenly occurs. The financial crisis that hit the world in 2008 and the COVID-19 pandemic in 2020 are typical examples. When a black swan that was not included among the possible future world lines suddenly appeared, we could not assign a probability to that event and were greatly upset. Of course, God could have a sufficiently large set of primitive events that takes such possibilities into account, and it would then have been possible to assign a probability to an event that ordinary people did not expect. But can such an idealized perspective really produce a theory that averts the risk of black swans? The generalized filtration formulated in this paper allows even the underlying set of the probability space, that is, the set of primitive events, to change over time, and it allows the sudden appearance of a black swan to be incorporated into the theory in a natural way. In the second half of this paper, we consider two types of filtration on the binomial asset pricing model as an application of generalized filtrations. In particular, we show that there is a risk-neutral filtration associated with the subjective filtration of a person who has lost memory for a certain period of time, and we use it to price securities. This indicates that even people with a lapse of memory can price securities. Finally, we summarize and describe other applications of generalized filtrations and directions for future development. \section{Generalized Filtrations} \label{sec:genfil} In this section, we define generalized filtrations by gradually extending the classical notion of a filtration. \subsection{Time Domains} \label{timeDom} A filtration represents a set of information that increases with time.
The set of times here is called a \newword{time domain} and is represented by $\mathcal{T}$. Typical $\mathcal{T}$ has the following forms. \begin{enumerate} \item $ \textred{\mathcal{T}} := \{ 0, 1, 2, \dots, T \}, $ \item $ \textred{\mathcal{T}} := \{ 0, 1, 2, \dots \}, $ \item $ \textred{\mathcal{T}} := [0, T], $ \item $ \textred{\mathcal{T}} := [0, +\infty) , $ \end{enumerate} where $T$ is a time horizon. In general time domain may be a \newword{totally ordered set} having the minimum element $0$. \subsection{Classical Filtrations} \label{claFil} Let $ \textred{ \bar{\Omega} } := ( \Omega, \mathcal{F}, \mathbb{P} ) $ be a probability space. Let $ \{ \textred{t_n} \} $ be an increasing sequence in a time domain $\textblue{\mathcal{T}}$. Then, an increasing sequence of $\sigma$-fields \begin{equation*} \mathcal{F}_{t_0} \subset \mathcal{F}_{t_1} \subset \cdots \subset \mathcal{F}_{t_n} \subset \mathcal{F}_{t_{n+1}} \subset \cdots \end{equation*} with $ \mathcal{F}_{t_n} \subset \mathcal{F} $ is called a \newword{classical filtration}. In other words, a \textblue{filtration} is a family of set-inclusion relations like \begin{equation*} \{ \mathcal{F}_{s} \; \textblue{\subset} \; \mathcal{F}_{t} \}_{s \le t} . \end{equation*} Now let \[ \textred{ \bar{\Omega}_t } := ( \Omega, \textblue{\mathcal{F}_t}, \mathbb{P} ) \] be probability spaces whose $\sigma$-fields are changing per time $t$. Then, for $s \le t$ in $\mathcal{T}$, the condition that the function below defined as an identity function $i_{s,t}$ is measurable is equivalent to the condition $ \mathcal{F}_s \subset \mathcal{F}_t $ \[ \xymatrix@C=40 pt@R=25 pt{ \bar{\Omega}_s \ar @{}_{ \mathrel{ \rotatebox[origin=c]{90}{$\in$} } } @<+2pt> [d] & \bar{\Omega}_t \ar @{->}_{\textred{i_{s,t}}} [l] \ar @{}_{ \mathrel{ \rotatebox[origin=c]{90}{$\in$} } } @<+2pt> [d] \\ \omega & \omega \ar @{|->} [l] } . \] In other words, the filtration can be identified with a family of measurable functions \[ \xymatrix@C=30 pt@R=30 pt{ \{ \bar{\Omega}_s & \bar{\Omega}_t \}_{s \le t} \ar @{->}_{\textblue{i_{s,t}}} [l] }. \] Therefore, in the following, instead of using the $\sigma$-field $\mathcal{F}_t$, filtration will be considered as a family of measurable functions. \subsection{Generalization of Filtrations} \label{sec:genFil} As we see in the previous section, a filtration can be seen as a family of identity functions $i_{s,t}$ as measurable functions. Now what if we generalize them to arbitrary measurable functions like the following? \[ \xymatrix@C=30 pt@R=30 pt{ \{ \bar{\Omega}_s & \bar{\Omega}_t \}_{s \le t} \ar @{->}_{\textred{f_{s,t}}} [l] } \] satisfying \begin{equation*} f_{t,t} = \textred{ Id_{\bar{\Omega}_t} } \quad \textrm{and} \quad f_{s,t} \circ f_{t,u} = f_{s,u} \end{equation*} for any $ s \le t \le u $ in $\mathcal{T}$, where $ Id_{\bar{\Omega}_t} $ is an identity function on $\bar{\Omega}_t$. However, this definition is too general for a random variable $ X : \bar{\Omega}_t \to \mathbb{R} $ to define its conditional expectation $ \textblue{E^{f_{s,t}}(X)} : \bar{\Omega}_s \to \mathbb{R} $ satisfying \begin{equation*} \int_A \textblue{E^{f_{s,t}}(X)} d \mathbb{P} = \int_{f_{s,t}^{-1}(A)} X d \mathbb{P} \quad (\forall A \in \mathcal{F}_s ). \end{equation*} In order to make it possible, we need to add an extra condition to the measurable function $f_{s,t}$ called \newword{null-preserving}, that is, for any $ A \in \mathcal{F}_s $, $ \mathbb{P}(A) = 0 $ implies $ \mathbb{P}(f_{s,t}^{-1}(A)) = 0 $ \cite{adachi_2014crm}. 
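As a toy illustration of the two conditions just discussed, the following Python sketch works with a finite sample space whose $\sigma$-fields are listed explicitly (all names and the example are our own): measurability of the identity map is exactly the inclusion $\mathcal{F}_s \subset \mathcal{F}_t$, while a measurable map can still fail to be null-preserving.
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize,frame=single]
from itertools import chain, combinations

def sigma_field(atoms):
    """Finite sigma-field generated by a partition (iterable of atoms)."""
    atoms = [frozenset(a) for a in atoms]
    return {frozenset(chain.from_iterable(c))
            for r in range(len(atoms) + 1) for c in combinations(atoms, r)}

def preimage(f, A):
    return frozenset(w for w in f if f[w] in A)

def is_measurable(f, field_dom, field_cod):
    """f is measurable iff the preimage of every codomain event is a domain event."""
    return all(preimage(f, A) in field_dom for A in field_cod)

def is_null_preserving(f, P, field_cod):
    """P(A) = 0 implies P(f^{-1}(A)) = 0 for every event A of the codomain field."""
    def prob(E):
        return sum(P[w] for w in E)
    return all(prob(preimage(f, A)) == 0 for A in field_cod if prob(A) == 0)

# Omega = {a, b, c}; F_s is generated by {a} and {b, c}, F_t by the singletons.
F_s = sigma_field([{"a"}, {"b", "c"}])
F_t = sigma_field([{"a"}, {"b"}, {"c"}])
identity = {w: w for w in "abc"}
print(is_measurable(identity, F_t, F_s))   # True:  F_s is contained in F_t
print(is_measurable(identity, F_s, F_t))   # False: F_t is not contained in F_s

P = {"a": 0.0, "b": 0.5, "c": 0.5}
collapse = {"a": "a", "b": "a", "c": "c"}     # sends b onto the null atom a
print(is_null_preserving(identity, P, F_s))  # True
print(is_null_preserving(collapse, P, F_s))  # False: P(f^{-1}({a})) = 0.5 > 0
\end{lstlisting}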
In fact, if $f_{s,t}$ is null-preserving, as we will see later, we can define a conditional expectation $ E^{f_{s,t}}(X) : \bar{\Omega}_s \to \mathbb{R} $. Note that when the identity function is generalized to a null-preserving function, the corresponding sequence of the $\sigma$-fields is not necessarily monotonically increasing. In order to give a further generalization, we consider that the probability space at each time fluctuates not only with the $\sigma$-fields but also with probability measures and underlying sets. In other words, the probability space $ \bar{\Omega}_t $ at time $t$ is redefined as follows. \begin{equation*} \textred{ \bar{\Omega}_t } := ( \textred{\Omega_t}, \textblue{\mathcal{F}_t}, \textred{\mathbb{P}_t} ). \end{equation*} Along with this, the definition of null-preserving functions is extended as follows. \begin{defn} \label{defn:nullPres} Let $ \bar{\Omega} = (\Omega, \mathcal{F}, \mathbb{P}) $ and $ \bar{\Omega}' = (\Omega', \mathcal{F}', \mathbb{P}') $ be two probability spaces and $ f : \bar{\Omega} \to \bar{\Omega}' $ be a measurable functions between them. Then $f$ is called \newword{null-preserving} if \begin{equation*} \mathbb{P} \circ f^{-1} \ll \mathbb{P}' \quad (\textrm{absolutely continuous}) . \end{equation*} \end{defn} \begin{defn} \label{defn:genFil} A \newword{generalized filtration} is a family of null-preserving functions \[ \xymatrix@C=30 pt@R=30 pt{ \{ \bar{\Omega}_s & \bar{\Omega}_t \}_{s \le t} \ar @{->}_{\textred{f_{s,t}}} [l] } \] satisfying \begin{equation*} f_{t,t} = \textred{ Id_{\bar{\Omega}_t} } \quad \textrm{and} \quad f_{s,t} \circ f_{t,u} = f_{s,u} \end{equation*} for all triples $ s \le t \le u $ in $\mathcal{T}$. \end{defn} Then, we obtain a following theorem. \begin{thm}{\normalfont{(\cite{AR_2019})}} For any random variable $\textblue{X}$ on $\bar{\Omega}_t$ and any null-preserving function $ f : \bar{\Omega}_t \to \bar{\Omega}_s $, there exists a random variable $Y$ on $\bar{\Omega}_s$ such that for every $ \textblue{A} \in \mathcal{F}_s $, \begin{equation} \label{eq:condExp} \int_A Y d \mathbb{P}_s = \int_{f^{-1}(A)} X d \mathbb{P}_t . \end{equation} We write $ \textblue{E^{f}(X)} $ for the random variable $Y$, and call it a \newword{conditional expectation} of $X$ along $f$. \end{thm} \begin{proof} Define a measure $X^*$ on $ (\Omega_t, \mathcal{F}_t) $ as in the following diagram. \begin{equation*} \xymatrix@C=15 pt@R=20 pt{ && D \ar @{|->}^{} [rr] \ar @{}_{ \mathrel{ \rotatebox[origin=c]{-90}{$\in$} } } @<+6pt> [d] && \textred{X^*}(D) \ar @{}^-{:=} @<-6pt> [r] \ar @{}_{ \mathrel{ \rotatebox[origin=c]{-90}{$\in$} } } @<+6pt> [d] & \int_D X \, d \mathbb{P}_t \\ \mathcal{F}_s \ar @{->}^{f^{-1}} [rr] \ar @/_2pc/_{\mathbb{P}_s} [rrrr] && \mathcal{F}_t \ar @{->}^{\textred{X^*}} @<+2pt> [rr] \ar @{->}_{\mathbb{P}_t} @<-2pt> [rr] && \mathbb{R} } \end{equation*} Then, since $ X^* \ll \mathbb{P}_t $ and $f$ is null-preserving, we have \begin{equation*} X^* \circ f^{-1} \ll \mathbb{P}_t \circ f^{-1} \ll \mathbb{P}_s . \end{equation*} Therefore, we get a following Radon-Nikodym derivative. \begin{equation*} Y := \partial ( X^* \circ f^{-1} ) / \partial \mathbb{P}_s . \end{equation*} With this $Y$ we obtain for every $A \in \mathcal{F}_s$, \begin{align*} & \textblue{ \int_A Y \, d \mathbb{P}_s } = \int_A \, d (X^* \circ f^{-1}) = (X^* \circ f^{-1})(A) = X^* ( f^{-1}(A)) = \textblue{ \int_{f^{-1}(A)} X \, d \mathbb{P}_t } . \end{align*} \end{proof} Henceforth, generalized filtration will be referred to simply as filtration. 
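For intuition, the construction in this proof can be carried out directly on finite probability spaces, where the Radon-Nikodym derivative reduces to a ratio of finite sums. The following sketch (our own illustration, with dictionaries standing in for probability spaces carrying their power-set $\sigma$-fields) computes $E^f(X)$ along a null-preserving map.
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize,frame=single]
def pushforward(P_t, f):
    """Image measure (P_t o f^{-1}) on the codomain of f."""
    out = {}
    for w, p in P_t.items():
        out[f[w]] = out.get(f[w], 0.0) + p
    return out

def conditional_expectation(X, f, P_t, P_s):
    """E^f(X) on Omega_s, for finite spaces with power-set sigma-fields.

    Requires f to be null-preserving: P_s(w) = 0 must force (P_t o f^{-1})(w) = 0.
    On atoms w with P_s(w) > 0 the Radon-Nikodym construction reduces to
        E^f(X)(w) = sum over w' in f^{-1}(w) of X(w') P_t(w') / P_s(w).
    """
    image = pushforward(P_t, f)
    for w, mass in image.items():
        assert P_s.get(w, 0.0) > 0 or mass == 0, "f is not null-preserving"
    return {w: sum(X[v] * P_t[v] for v in P_t if f[v] == w) / P_s[w]
            for w in P_s if P_s[w] > 0}

# Omega_t = {00, 01, 10, 11}, a fair coin at each of two steps;
# f forgets the second coin, mapping onto Omega_s = {0, 1}.
P_t = {"00": 0.25, "01": 0.25, "10": 0.25, "11": 0.25}
P_s = {"0": 0.5, "1": 0.5}
f = {w: w[0] for w in P_t}
X = {"00": 0.0, "01": 1.0, "10": 2.0, "11": 3.0}
print(conditional_expectation(X, f, P_t, P_s))   # {'0': 0.5, '1': 2.5}
\end{lstlisting}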
\subsection{Filtration is a Functor} \label{sec:filfunc} In this subsection, we will try to redefine the filtration introduced in Section \ref{sec:genFil} using Category Theory \cite{maclane1997}. \begin{defn}{[Two Categories $\mathbf{Prob}$ and $\mathcal{T}$]} \label{defn:ProbAndT} \begin{enumerate} \item All probability spaces and all null-preserving functions between them form a \newword{category}. This category is denoted by $\textred{\mathbf{Prob}}$ \footnote{ For further discussion about the category $\textblue{\mathbf{Prob}}$, see \cite{AR_2019}. }. \item A time domain $\textblue{\mathcal{T}}$ can be regarded as a category if we consider its elements as \newword{objects}, and if two objects $s$ and $t$ have one and only one \newword{arrow} from $t$ to $s$ when there is a relation $ s \le t $. \end{enumerate} \end{defn} Then, the filtration introduced in Section \ref{sec:genFil} can be regarded as a \newword{functor} $ F : \mathcal{T} \to \mathbf{Prob} $ (Figure\ref{fig:filfun}). We sometimes call $F$ a \newword{$\mathcal{T}$-filtration} in order for clarifying its time domain. \begin{figure*} \caption{Filtration $ F : \mathcal{T} \label{fig:filfun} \end{figure*} \begin{figure} \caption{Classical and Generalized Filtrations} \label{fig:distfil} \end{figure} \section{Filtrations over a Binomial Asset Pricing Model} \label{sec:BAPM} In this section, as a concrete example of the filtration introduced in Section \ref{sec:genfil}, we look at an unusual filtration on a binomial asset pricing model. \subsection{Filtration $\mathcal{B}^N$} \label{sec:BMsetting} First, we define a general scheme of our model by introducing a filtration $\mathcal{B}^N$ for an integer $N$. \begin{defn}[{Time Domain and Probability Space}] Let $\textred{N} \in \mathbb{N}$ and $ s, t \in \mathbb{R} $ be non-negative real numbers. \begin{enumerate} \item \newword{Discrete intervals}. \begin{align*} \textred{[s, t]^N} &:= \{ n 2^{-N} \mid n \in \mathbb{Z} \; \mathrm{and} \; s \le n 2^{-N} \textblue{\le} t \} , \\ \textred{[s, t)^N} &:= \{ n 2^{-N} \mid n \in \mathbb{Z} \; \mathrm{and} \; s \le n 2^{-N} \textblue{<} t \}, \\ \textred{(s, t]^N} &:= \{ n 2^{-N} \mid n \in \mathbb{Z} \; \mathrm{and} \; s < n 2^{-N} \textblue{\le} t \} , \\ \textred{(s, t)^N} &:= \{ n 2^{-N} \mid n \in \mathbb{Z} \; \mathrm{and} \; s < n 2^{-N} \textblue{<} t \}. \end{align*} \item Let $\textred{\mathcal{T}^N}$ be a category whose objects are elements of $[0, \infty)^N$. For $s, t \in [0, \infty)^N$, $\textred{\mathcal{T}^N}$ has the unique arrow $\textred{\iota^N_{s,t}}$ from $t$ to $s$ if and only if $t \ge s$. \item $ \textred{B^N_t} := \{0,1\}^{(0, t]^N} $. \quad (function space) \item $ \textred{ \mathcal{F}^N_t } := 2^{B^N_t} $. \quad (powerset) \item Let $ \textred{p^N_s} \in [0,1] $ for each $ s \in (0, \infty)^N $. Then, a probability measure $ \textred{ \mathbb{P}^N_t } : \mathcal{F}^N_t \to [0,1] $ is defined for every $\omega \in B^N_t$ by \begin{equation*} \textred{\mathbb{P}^N_t} (\{\omega\}) := \prod_{s \in (0,t]^N} (p^N_s)^{\omega(s)} (1 - p^N_s)^{1 - \omega(s)} . 
\end{equation*} \item $ \textred{\bar{B}}^N_t := (B^N_t, \mathcal{F}^N_t, \mathbb{P}^N_t) $ \quad (probability space) \end{enumerate} \end{defn} \begin{defn} \label{defn:filB} A filtration $\textred{\mathcal{B}^N}$ is determined by defining arrows $\textred{f^N_{s,t}}$ below: \begin{equation*} \xymatrix@C=40 pt@R=8 pt{ \mathcal{T}^N \ar @{->}^{\textred{\mathcal{B}^N}} [r] & \mathbf{Prob} \\ s & \bar{B}^N_s \\\\ t \ar @{->}^{\iota^N_{s,t}} [uu] & \bar{B}^N_t \ar @{->}_{ \mathcal{B}^N(\iota^N_{s,t}) := f^N_{s,t} } [uu] } \end{equation*} The filtration $\mathcal{B}^N$ is called \newword{non-trivial} if there exists $ t \in (0, \infty)^N $ such that $ 0 < p_t < 1 $. \end{defn} Note that for a non-trivial filtration $\mathcal{B}^N$, every function from $B^N_t$ to $B^N_s$ becomes a null-preserving function from $\bar{B}^N_t$ to $\bar{B}^N_s$. \lspace As we introduced, the functor $ \textred{ \mathcal{B}^N } $ is a generalized filtration, representing a filtration over the classical binomial model developed, for example in \cite{shreve_I}. The classical version requires the terminal time horizon $\textred{T}$ for determining the underlying set $ \textred{\Omega} := \{0, 1\}^T $ while our version does not require it since the time variant probability spaces can evolve without any limit. That is, \textblue{ our version allows unknown future elementary events, which, we believe, shows a big philosophical difference from the traditional Kolmogorov world. } \begin{prop} \label{prop:BMcondE} For a random variable $X$ on $\bar{B}^N_t$ and $ \textred{\omega} \in \bar{B}^N_s $, we have \begin{equation*} \textred{ E^{f^N_{s,t}}(X) }(\omega) \mathbb{P}^N_s(\{\omega\}) = \sum_{\omega' \in \textblue{(f^N_{s,t})^{-1}(\omega)}} X(\omega') \mathbb{P}^N_{t}(\{\omega'\}) . \end{equation*} \end{prop} \begin{proof} Put $ A := \{ \omega \} $ and $ f_{s, t} := f^N_{s, t} $ in (\ref{eq:condExp}). Then the result is straightforward. \end{proof} In order to see a variety of filtrations, we introduce two candidates of $f^N_{s,t}$ introduced in Definition \ref{defn:filB}. \begin{defn}[{Two Candidates of $f^N_{s,t}$}] \label{defn:twoCandF} Let $ s, t $ be objects of $\mathcal{T}^N$ satisfying $ s < t $. \begin{enumerate} \item $ \textred{ \mathbf{full}^N_{s,t} } $ \begin{equation*} \xymatrix@C=35 pt@R=15 pt{ \bar{B}^N_s \ar @{}_{ \mathrel{ \rotatebox[origin=c]{90}{$\in$} } } @<+6pt> [d] & \bar{B}^N_t \ar @{}_{ \mathrel{ \rotatebox[origin=c]{90}{$\in$} } } @<+6pt> [d] \ar @{->}_{\textred{ \mathbf{full}^N_{s,t} }} [l] \\ \omega \mid_{(0, s]^N} & \omega \ar @{|->} [l] } \end{equation*} \item $ \textred{ \mathbf{drop}^N_{s,t} } $ \begin{equation*} \xymatrix@C=35 pt@R=15 pt{ \bar{B}^N_s \ar @{}_{ \mathrel{ \rotatebox[origin=c]{90}{$\in$} } } @<+6pt> [d] & \bar{B}^N_t \ar @{}_{ \mathrel{ \rotatebox[origin=c]{90}{$\in$} } } @<+6pt> [d] \ar @{->}_{\textred{ \mathbf{drop}^N_{s,t} }} [l] \\ \mathbf{full}^N_{s,t}(\omega) \times \textblue{\mathbb{1}_{(0,s\textred{)}^N}} & \omega \ar @{|->} [l] } \end{equation*} \end{enumerate} \end{defn} The function $\mathbf{drop}^N_{s,t}$ can be interpreted to forget what happens at time $s$. We can easily show the following proposition. 
\begin{prop} \label{prop:fulldrop} For $ s < t < u $ in $ [0, \infty]^N $, \begin{enumerate} \item $ \mathbf{full}^N_{s,t} \circ \mathbf{full}^N_{t,u} = \mathbf{full}^N_{s,u} $ , \item $ \mathbf{full}^N_{s,t} \circ \mathbf{drop}^N_{t,u} = \mathbf{full}^N_{s,u} $ , \item $ \mathbf{drop}^N_{s,t} \circ \mathbf{full}^N_{t,u} = \mathbf{drop}^N_{s,u} $ , \item $ \mathbf{drop}^N_{s,t} \circ \mathbf{drop}^N_{t,u} = \mathbf{drop}^N_{s,u} $ . \end{enumerate} \end{prop} \begin{defn}[{Examples of (Subjective) Filtrations}] \label{defn:expSurFil} Let $ s, t $ be any objects of $\mathcal{T}^N$ such that $ s < t $. \begin{enumerate} \item \textblue{Classical filtration}: $ \textred{ \mathbf{Full}^N } : \mathcal{T}^N \to \mathbf{Prob} $ is defined by \begin{equation*} \textred{ \mathbf{Full}^N( \iota^N_{s,t} ) } := \textblue{ \mathbf{full}^N_{s,t} } . \end{equation*} \item \textblue{Dropped filtration}: $ \textred{ \mathbf{Drop}^N_{\alpha, \beta} } : \mathcal{T}^N \to \mathbf{Prob} $ where $ \textblue{\alpha}, \textblue{\beta} \in \mathbb{R} $ are constants, is defined by \begin{equation*} \textred{ \mathbf{Drop}^N_{\alpha, \beta} ( \iota^N_{s,t} ) } := \begin{cases} \textblue{\mathbf{drop}^N_{s,t}} \quad \textrm{if} \quad s \ne t \; \textrm{and} \; s \in [\textblue{\alpha}, \textblue{\beta}]^N , \\ \textblue{\mathbf{full}^N_{s,t}} \quad \textrm{otherwise} . \end{cases} \end{equation*} A person who has a subjective filtration $\mathbf{Drop}^N_{\alpha, \beta}$ forgets the events happened during $[\alpha, \beta]$. \end{enumerate} \end{defn} Note also that the dropped filtration is \textblue{well-defined} by Proposition \ref{prop:fulldrop}. \begin{defn}{[$\mathcal{B}$-Adapted Process $\xi^N_t$]} \label{defn:adaptedPrs} Let $ t \in [0,+\infty)^N $. a stochastic process $ \textred{\xi^N_t} : B^N_t \to \mathbb{R} $ is defined by \begin{equation*} \xi^N_t(\omega) := 2 \omega(t) - 1 \quad (\forall \omega \in B^N_t) . \end{equation*} \end{defn} \begin{defn} \label{defn:Itj} For $j = 0, 1$ and $ \omega \in B^N_t $, \begin{equation*} I^N_{t}(j,\omega) := \{ \omega' \in (f^N_{t,t+2^{-N}})^{-1}(\omega) \mid \omega'(t+2^{-N}) = j \} . \end{equation*} \end{defn} \begin{prop} \label{prop:xiPr} For $ \omega \in B^N_t $ with $ \mathbb{P}^N_t(\omega) \ne 0 $, \begin{equation*} E^{ f^N_{t,t+2^{-N}} } (\xi^N_{t+2^{-N}}) (\omega) = \#((f^N_{t,t+2^{-N}})^{-1}(\omega)) p^N_{t+2^{-N}} - \#I^N_{t}(0, \omega) , \end{equation*} where $ \textred{\#}A $ denotes the cardinality of the set $A$. \end{prop} \begin{proof} By Proposition \ref{prop:BMcondE}, \begin{align*} E^{ f^N_{t,t+2^{-N}} } (\xi^N_{t+2^{-N}}) (\omega) &= \sum_{\omega' \in (f^N_{t,t+2^{-N}})^{-1}(\omega) } \frac{ \xi^N_{t+2^{-N}} (\omega') \mathbb{P}^N_{t+2^{-N}}(\omega') }{ \mathbb{P}^N_t(\omega) } \\&= \sum_{\omega' \in (f^N_{t,t+2^{-N}})^{-1}(\omega) } \frac{ (2 \omega'(t+2^{-N}) - 1) \mathbb{P}^N_{t+2^{-N}}(\omega') }{ \mathbb{P}^N_t(\omega) } \\&= \sum_{\omega' \in I^N_t(1, \omega)} \frac{\mathbb{P}^N_{t+2^{-N}}(\omega')}{\mathbb{P}^N_t(\omega)} - \sum_{\omega' \in I^N_t(0, \omega)} \frac{\mathbb{P}^N_{t+2^{-N}}(\omega')}{\mathbb{P}^N_t(\omega)} \\&= \sum_{\omega' \in I^N_t(1, \omega)} p^N_{t+2^{-N}} - \sum_{\omega' \in I^N_t(0, \omega)} (1 - p^N_{t+2^{-N}}) \\&= \#((f^N_{t,t+2^{-N}})^{-1}(\omega)) p^N_{t+2^{-N}} - \#I^N_{t}(0, \omega) . \end{align*} \end{proof} \subsection{Arbitrage Strategies} \label{sec:ArbStr} Now we define two instruments tradable in our market. 
\begin{defn}{[Stock and Bond Processes]} \label{defn:stockAndBond} Let $ \mu, \sigma, r \in \mathbb{R} $ be constants such that $ \sigma > 0 $, $ \mu > \sigma - 1 $ and $ r > -1 $. We have the following $\mathcal{B}^N$-adapted processes which are two instruments tradable in our market. Let $ t \in [0,+\infty)^N $. \begin{enumerate} \item A \newword{stock process} $ \textred{S^N_t} : B^N_t \to \mathbb{R} $ over the filtration $\mathcal{B}^N$ is defined by \begin{equation*} S^N_0(\textred{*}) := \textred{s_0}, \quad S^N_{t+2^{-N}} := (S^N_t \circ \textblue{f^N_{t,t+2^{-N}}}) (1 + 2^{-N} \mu + 2^{-\frac{N}{2}} \sigma \xi^N_{t+2^{-N}}) \end{equation*} where $ \textred{*} \in B^N_0 $ is the unique element. \item A \newword{bond process} $ \textred{b^N_t} : B^N_t \to \mathbb{R} $ over the filtration $\mathcal{B}^N$ is defined by \begin{equation*} b^N_0(*) := 1, \quad b^N_{t+2^{-N}} := (b^N_t \circ \textblue{f^N_{t,t+2^{-N}})} (1+2^{-N}r) . \end{equation*} \end{enumerate} \end{defn} The condition $ \mu > \sigma - 1 $ is necessary for keeping the stock price positive. We sometimes call the triple $ (\mathcal{B}^N, S^N, b^N) $ a \newword{market}. But, it does not mean that the market will not contain other instruments. The following proposition is straightforward. \begin{prop} \label{prop:EfNStNxiN} Let $ 1_{B^N_t} $ be a random variable on $ B^N_t $ defined by $ 1_{B^N_t} (\omega) = 1 $ for every $\omega \in B^N_t$. Then, we have for any $ \omega \in B^N_t $, \begin{enumerate} \item $ E^{f^N_{t,t+2^{-N}}} (S^N_{t+2^{-N}}) = S^N_t \big( (1 + 2^{-N}\mu) E^{f^N_{t,t+2^{-N}}} (1_{B^N_{t+2^{-N}}}) + 2^{-\frac{N}{2}} \sigma E^{f^N_{t,t+2^{-N}}} {\xi^N_{t+2^{-N}}}) \big) , $ \item $ E^{f^N_{t,t+2^{-N}}} (1_{B^N_{t+2^{-N}}}) (\omega) = \{ \mathbb{P}^N_{t+2^{-N}} ((f^N_{t,t+2^{-N}})^{-1}(\omega)) \} / \{ \mathbb{P}^N_{t} (\omega) \} , $ \item $ b^N_t(\omega) = (1+2^{-N}r)^{2^N t} . $ \end{enumerate} \end{prop} \begin{defn}{[Strategies]} A \newword{strategy} is a sequence $ (\phi, \psi) = \{ (\phi_t, \psi_t) \}_{t \in (0, \infty)^N } $, where \begin{equation} \phi_t : B^N_{t-2^{-N}} \to \mathbb{R} \; \textrm{and} \; \psi_t : B^N_{t-2^{-N}} \to \mathbb{R} . \end{equation} Each element of the strategy $ (\phi_t, \psi_t) $ is called a \newword{portfolio}. For $ t \in [0, \infty)^N $, the \newword{value} $V_t$ of the portfolio at time $t$ is determined by: \begin{equation} V_t := \begin{cases} S^N_0 \phi_{2^{-N}} + b^N_0 \psi_{2^{-N}} \quad \textrm{if} \quad t = 0 , \\ S^N_t ( \phi_{t} \circ f^N_{t-2^{-N},t} ) + b_n ( \psi_{t} \circ f^N_{t-2^{-N},t} ) \quad \textrm{if} \quad t > 0 . \end{cases} \end{equation} \end{defn} \begin{defn}{[Gain Processes]} A \newword{ gain process } of the strategy $ (\phi, \psi) $ is the process $ \{ G^{(\phi, \psi)}_t \}_{t \in [0, \infty)^N } $ defined by \begin{equation} G^{(\phi, \psi)}_t := \begin{cases} - ( S^N_0 \phi_{2^{-N}} + b^N_0 \psi_{2^{-N}} ) \; \textrm{if} \; t = 0 , \\ ( S^N_t (\phi_t \circ f^N_{t-2^{-N},t} ) + b^N_t (\psi_t \circ f^N_{t-2^{-N},t} ) ) - ( S^N_t \phi_{t+2^{-N}} + b^N_t \psi_{t+2^{-N}} ) \; \textrm{if} \; t > 0 . \end{cases} \end{equation} \end{defn} \begin{lem} \label{lem:spbp} Let $ t \in [0, \infty)^N $ with \begin{equation} S^N_t \phi_{t+2^{-N}} + b^N_t \psi_{n+2^{-N}} = 0 . 
\end{equation} Then, we have \begin{equation} S^N_{t+2^{-N}} (\phi_{t+2^{-N}} \circ f^N_{t, t+2^{-N}} ) + b^N_{t+2^{-N}} (\psi_{t+2^{-N}} \circ f^N_{t, t+2^{-N}} ) = (2^{-N} \mu + 2^{-\frac{N}{2}} \sigma \xi^N_{t+2^{-N}} - 2^{-N} r) ( (S^N_t \phi_{t+2^{-N}}) \circ f^N_{t, t+2^{-N}} ) . \end{equation} \end{lem} \begin{proof} \begin{align*} & LHS \\=& (S^N_t \circ f^N_{t, t+2^{-N}} ) (1 + 2^{-N} \mu + 2^{-\frac{N}{2}} \sigma \xi^N_{t+2^{-N}}) (\phi_{t+2^{-N}} \circ f^N_{t, t+2^{-N}} ) \\& + (b^N_t \circ f^N_{t, t+2^{-N}} ) (1 + 2^{-N} r) (\psi_{t+2^{-N}} \circ f^N_{t, t+2^{-N}} ) \\=& (1 + 2^{-N} \mu + 2^{-\frac{N}{2}} \sigma \xi^N_{t+2^{-N}}) ((S^N_t \phi_{t+2^{-N}}) \circ f^N_{t, t+2^{-N}} ) + (1 + 2^{-N} r) ((b^N_t \psi_{t+2^{-N}}) \circ f^N_{t, t+2^{-N}} ) \\=& (1 + 2^{-N} \mu + 2^{-\frac{N}{2}} \sigma \xi^N_{t+2^{-N}}) ((S^N_t \phi_{t+2^{-N}}) \circ f^N_{t, t+2^{-N}} ) - (1 + 2^{-N} r) ((S^N_t \phi_{t+2^{-N}}) \circ f^N_{t, t+2^{-N}} ) \\=& RHS . \end{align*} \end{proof} \begin{defn}{[Arbitrage Strategies]} \begin{enumerate} \item A strategy $ (\phi, \psi) $ is called a $\mathcal{B}^N$-\newword{arbitrage strategy} if $ \mathbb{P}^N_{t}\big( G^{(\phi, \psi)}_t \ge 0 \big) = 1 $ for every $ t \in [0, \infty)^N $, and $ \mathbb{P}^N_{t_0}\big( G^{(\phi, \psi)}_{t_0} > 0 \big) > 0 $ for some $ t_0 \in [0, \infty)^N $. \item The market is called \newword{non-arbitrage} or \textbf{NA} if it does not allow $\mathcal{B}^N$-arbitrage strategies. \end{enumerate} \end{defn} \begin{prop} If the market $ (\mathcal{B}^N, S^N, b^N) $ with a non-trivial filtration $\mathcal{B}^N$ is non-arbitrage, then $ \abs{ \textred{\mu} - \textred{r} } < 2^{\frac{N}{2}} \textred{\sigma} . $ \end{prop} \begin{proof} Suppose, to the contrary, that $ r \le \mu - 2^{\frac{N}{2}} \sigma $ or $ r \ge \mu + 2^{\frac{N}{2}} \sigma $. We construct an arbitrage strategy $ (\phi, \psi) $ by the following algorithm. \begin{lstlisting}[basicstyle=\ttfamily\footnotesize,frame=single] for n = 0, 1, 2, ...: t := 2^(-N) n observe S(t) and b(t) if r <= mu - 2^(N/2) sigma: phi(t+2^(-N)) > 0 # pick arbitrarily elif r >= mu + 2^(N/2) sigma: phi(t+2^(-N)) < 0 # pick arbitrarily psi(t+2^(-N)) := -(S(t) / b(t)) phi(t+2^(-N)) # Then, we have S(t) phi(t+2^(-N)) + b(t) psi(t+2^(-N)) = 0, # which simplifies the computation of G(t) in the following. if t == 0: G(0) := 0 else: # t > 0 G(t) := S(t) (phi(t) * f(t-2^(-N))) + b(t) (psi(t) * f(t-2^(-N))) \end{lstlisting} In the above pseudocode, `*' is the function composition operator and f(t-2^(-N)) stands for $f^N_{t-2^{-N},t}$. By Lemma \ref{lem:spbp}, we have \begin{equation} G^{(\phi, \psi)}_{t} = 2^{-N} (\mu + 2^{\frac{N}{2}} \sigma \xi^N_t - r) ((S^N_{t-2^{-N}} \phi_t) \circ f^N_{t-2^{-N}, t} ) . \end{equation} Hence, with the above choice of the sign of $\phi_t$, we have $ G^{(\phi, \psi)}_{t} \ge 0 $ whenever $ r \le \mu - 2^{\frac{N}{2}} \sigma $ or $ r \ge \mu + 2^{\frac{N}{2}} \sigma $. Moreover, since our filtration is non-trivial, there exists $t_0$ such that $ 0 < p^N_{t_0} < 1 $. It is easy to check that \begin{equation} \mathbb{P}^N_{t_0}( G^{(\phi, \psi)}_{t_0} >0 ) > 0 , \end{equation} which shows that $ (\phi, \psi) $ is an arbitrage strategy. \end{proof} \subsection{Risk-Neutral Filtrations} \label{sec:riskNeuFil} In this subsection, we assume that $ \abs{ \textred{\mu} - \textred{r} } < 2^{\frac{N}{2}} \textred{\sigma} $. Let us consider the following discounted stock process. \begin{defn} \label{defn:discountStock} A \newword{discounted stock process} $ \textred{(S^N_t)'} : B^N_t \to \mathbb{R} $ is defined by \begin{equation*} (S^N_t)' := (b^N_t)^{-1} S^N_t .
\end{equation*} \end{defn} \begin{defn} \label{defn:riskNeutralFil} A \newword{risk-neutral filtration} with respect to the filtration $\mathcal{B}^N$ is a filtration $\textred{\mathcal{C}^N}$ such that \begin{equation*} U \circ \mathcal{C}^N = U \circ \mathcal{B}^N , \end{equation*} where $ \textred{U} : \mathbf{Prob} \to \mathbf{Meas} $ is the \textblue{forgetful functor} to the category of measurable spaces, \[ \xymatrix{ \mathcal{T}^N \ar @{->}^{\textred{\mathcal{C}^N}} @<+3pt> [r] \ar @{->}_{\mathcal{B}^N} @<-3pt> [r] & \mathbf{Prob} \ar @{->}^{\textred{U}} [r] & \mathbf{Meas} } \] and with which $ \textblue{(S^N_t)'} $ becomes a \textblue{$\mathcal{C}^N$-martingale}, that is, \begin{equation*} E^{ \textblue{\mathcal{C}^N(\iota^N_{s,t})} } ( (S^N_t)' ) = (S^N_s)' . \end{equation*} \end{defn} In the remainder of this subsection, we will focus on proving the following theorem. \begin{thm} There exists a risk-neutral filtration with respect to the filtration $ \mathbf{Drop}^N_{\alpha, \beta} $. \end{thm} First, we examine what form the probability measure $ \mathbb{Q}^N_t : \mathcal{F}^N_t \to [0,1] $ takes when $ \mathcal{C}^N(t) = ( B^N_t, \mathcal{F}^N_t, \mathbb{Q}^N_t ) $ for a risk-neutral filtration $\mathcal{C}^N$, in general. \begin{thm} \label{thm:iiffMart} A stochastic process $(S^N_t)'$ is a \textblue{ $\mathcal{C}^N$-martingale } if and only if the following equation holds for every $ t \in [0,\infty)^N $ and $ \omega \in B^N_t $. \begin{equation*} \mathbb{Q}^N_t( \{\omega\} ) = c_1 \, \mathbb{Q}^N_{t+2^{-N}}(I^N_{t}(1,\omega)) + c_0 \, \mathbb{Q}^N_{t+2^{-N}}(I^N_{t}(0,\omega)) \end{equation*} where \begin{equation*} \textred{c_1} := \frac{1 + 2^{-N} \mu + 2^{-\frac{N}{2}} \sigma}{1 + 2^{-N} r} , \quad \textred{c_0} := \frac{1 + 2^{-N} \mu - 2^{-\frac{N}{2}} \sigma}{1 + 2^{-N} r} . \end{equation*} \end{thm} \begin{proof} Let $\omega \in B^N_t$. Then, by Proposition \ref{prop:BMcondE}, we have \begin{align*} & E^{\mathcal{C}^N(\iota^N_{t,t+2^{-N}})}( (S^N_{t+2^{-N}})' ) (\omega) \mathbb{Q}^N_{t}(\{\omega\}) \\=& \sum_{ \omega' \in (f^N_{t,t+2^{-N}})^{-1}(\omega) } (S^N_{t+2^{-N}})'(\omega') \mathbb{Q}^N_{t+2^{-N}}(\{\omega'\}) \\=& \sum_{ \omega' \in (f^N_{t,t+2^{-N}})^{-1}(\omega) } (b^N_{t+2^{-N}})^{-1}(\omega') (S^N_t \circ f^N_{t,t+2^{-N}}(\omega') (1 + 2^{-N}\mu + 2^{-\frac{N}{2}}\sigma \xi^N_{t+2^{-N}}(\omega')) \mathbb{Q}^N_{t+2^{-N}}(\{\omega'\}) \\=& \sum_{ \omega' \in (f^N_{t,t+2^{-N}})^{-1}(\omega) } (1+ 2^{-N}r)^{-(t+2^{-N})} S^N_{t} (\omega) (1 + 2^{-N}\mu + 2^{-\frac{N}{2}}\sigma \xi^N_{t+2^{-N}}(\omega')) \mathbb{Q}^N_{t+2^{-N}}(\{\omega'\}) \\=& (S^N_t)'(\omega) \sum_{ \omega' \in (f^N_{t,t+2^{-N}})^{-1}(\omega) } \frac{ 1 + 2^{-N}\mu + 2^{-\frac{N}{2}}\sigma \xi^N_{t+2^{-N}}(\omega') }{ 1+ 2^{-N}r } \mathbb{Q}^N_{t+2^{-N}}(\{\omega'\}) . \end{align*} \noindent Therefore, the condition $ (S^N_t)' = E^{\mathcal{C}^N(\iota^N_{t,t+2^{-N}})}( (S^N_{t+2^{-N}})' ) $ is equivalent to \begin{align*} \mathbb{Q}^N_t(\{\omega\}) &= \sum_{ \omega' \in I^N_t(1, \omega) } \!\! \frac{ 1 + 2^{-N}\mu + 2^{-\frac{N}{2}}\sigma }{ 1+ 2^{-N}r } \mathbb{Q}^N_{t+2^{-N}}(\{\omega'\}) + \sum_{ \omega' \in I^N_t(0, \omega) } \!\! \frac{ 1 + 2^{-N}\mu - 2^{-\frac{N}{2}}\sigma }{ 1+ 2^{-N}r } \mathbb{Q}^N_{t+2^{-N}}(\{\omega'\}) \\&= c_1 \, \mathbb{Q}^N_{t+2^{-N}} (I^N_t(1, \omega)) + c_0 \, \mathbb{Q}^N_{t+2^{-N}} (I^N_t(0, \omega)) . 
\begin{defn}
For $ \omega \in B^N_t $ and $ d \in \{0, 1\} $, $ (\omega d) \in B^N_{t+2^{-N}} $ is the element satisfying
\begin{equation*}
(\omega d) (s) := \begin{cases} \omega(s) \quad & ( s \le t ) \\ d \quad & ( s = t+2^{-N} ) \end{cases}
\end{equation*}
for any $ s \in (0, t+2^{-N}]^N $. Unless there is confusion, we will omit the parentheses in $ ((\omega d_1) d_2) $ and write $ \omega d_1 d_2 $.
\end{defn}
In order to determine $\mathcal{C}^N$ in more detail, we need the following conditions for $\mathbb{Q}^N_t$.
\begin{prop}
\label{prop:Qcond}
The following conditions for $ \mathbb{Q}^N_t $ are equivalent.
\begin{enumerate}
\item For all $ t \in [0,\infty)^N $ and $ \omega \in B^N_t $,
\begin{equation}
\mathbb{Q}^N_{t+2^{-N}}(\{\omega 0, \omega 1\}) = \mathbb{Q}^N_t(\{\omega\}) .
\end{equation}
\item For all $ t \in [0,\infty)^N $, $\mathbf{full}^N_{t,t+2^{-N}}$ is measure-preserving w.r.t. $\mathbb{Q}^N_t$, that is,
\begin{equation}
\mathbb{Q}^N_{t} = \mathbb{Q}^N_{t+2^{-N}} \circ (\mathbf{full}^N_{t,t+2^{-N}})^{-1} .
\end{equation}
\item There exists a sequence of functions $ \{ q_t : B^N_t \to [0,1] \}_{t \in (0,\infty)^N} $ satisfying the following conditions for every $ t \in (0, \infty)^N $ and $ \omega \in B^N_t $,
\begin{enumerate}
\item $ \mathbb{Q}^N_t (\{\omega\}) = \prod_{ s \in (0, t]^N } q_s(\omega \mid_{(0,s]^N}) , $
\item $ q_{t+2^{-N}} (\omega 0) + q_{t+2^{-N}} (\omega 1) = 1 . $
\end{enumerate}
\end{enumerate}
\end{prop}
In the following discussion, we adopt condition (3) of Proposition \ref{prop:Qcond} as a standing assumption.
\begin{asm}
\label{asm:asmQ}
Suppose that there exists a sequence of functions $ \{ q_t : B^N_t \to [0,1] \}_{t \in (0,\infty)^N} $ satisfying the following conditions for every $ t \in (0, \infty)^N $ and $ \omega \in B^N_t $,
\begin{enumerate}
\item $ \mathbb{Q}^N_t (\{\omega\}) = \prod_{ s \in (0, t]^N } q_s(\omega \mid_{(0,s]^N}) , $
\item $ q_{t+2^{-N}} (\omega 0) + q_{t+2^{-N}} (\omega 1) = 1 . $
\end{enumerate}
\end{asm}
In the rest of this section, we work under Assumption \ref{asm:asmQ} and determine the risk-neutral filtration $\mathcal{C}^N$ by calculating $ \{ q_t \}_{t \in (0,\infty)^N} $.
\begin{lem}
\label{lem:qbin}
Let $c_1$ and $c_0$ be the constants defined in Theorem \ref{thm:iiffMart}. Then for any $ x \in \mathbb{R} $ we have
\[ 1 = c_1 x + c_0 (1-x) \quad \Longleftrightarrow \quad x = \frac{1}{2} - 2^{-\frac{N}{2}-1} \frac{ \mu - r }{ \sigma } \quad \textrm{and} \quad 1 - x = \frac{1}{2} + 2^{-\frac{N}{2}-1} \frac{ \mu - r }{ \sigma } . \]
\end{lem}
\begin{prop}
\label{prop:fullQ}
For $ t \in (0, \infty)^N $, if $ f^N_{t,t+2^{-N}} = \mathbf{full}^N_{t,t+2^{-N}} $, then for $ \omega \in B^N_{t} $ such that $ \mathbb{Q}^N_t(\{ \omega \}) \ne 0 $, the following holds.
\[ q_{t+2^{-N}}(\omega 1) = \frac{1}{2} - 2^{-\frac{N}{2}-1} \frac{\mu - r}{\sigma} , \quad q_{t+2^{-N}}(\omega 0) = \frac{1}{2} + 2^{-\frac{N}{2}-1} \frac{\mu - r}{\sigma} . \]
\end{prop}
\begin{figure}
\caption{$\mathbf{full}^N_{t,t+2^{-N}}$}
\label{fig:fullN}
\end{figure}
\begin{proof}
By observing Figure \ref{fig:fullN}, we have
\[ (\mathbf{full}^N_{t,t+2^{-N}})^{-1}(\omega) = \{ \omega 0, \omega 1 \} , \quad I^N_t(1, \omega) = \{ \omega 1 \} , \quad I^N_t(0, \omega) = \{ \omega 0 \} .
\]
\noindent Then, by Theorem \ref{thm:iiffMart},
\[ \mathbb{Q}^N_t(\{\omega\}) = c_1 \mathbb{Q}^N_{t+2^{-N}}(I^N_t(1,\omega)) + c_0 \mathbb{Q}^N_{t+2^{-N}}(I^N_t(0,\omega)) = c_1 \mathbb{Q}^N_{t+2^{-N}}(\{\omega 1\}) + c_0 \mathbb{Q}^N_{t+2^{-N}}(\{\omega 0\}) . \]
Since
\begin{equation*}
\mathbb{Q}^N_{t+2^{-N}}(\{\omega d_{t+2^{-N}}\}) = \mathbb{Q}^N_{t}(\{\omega\}) q_{t+2^{-N}}(\omega d_{t+2^{-N}})
\end{equation*}
by Assumption \ref{asm:asmQ} and $ \mathbb{Q}^N_{t}(\{\omega\}) \ne 0 $, we have
\begin{equation*}
1 = c_1 q_{t+2^{-N}}(\omega 1) + c_0 q_{t+2^{-N}}(\omega 0) .
\end{equation*}
Hence, by Lemma \ref{lem:qbin}, we obtain
\begin{equation*}
q_{t+2^{-N}}(\omega 1) = \frac{1}{2} - 2^{-\frac{N}{2}-1} \frac{\mu - r}{\sigma} , \quad q_{t+2^{-N}}(\omega 0) = \frac{1}{2} + 2^{-\frac{N}{2}-1} \frac{\mu - r}{\sigma} .
\end{equation*}
\end{proof}
Note that the probabilities obtained in Proposition \ref{prop:fullQ} depend on neither $\omega$ nor $t$.
\begin{prop}
\label{prop:dropQ}
For $t \in (0, \infty)^N$, if $ f^N_{t,t+2^{-N}} = \mathbf{drop}^N_{t,t+2^{-N}} $, then for $ \omega \in B^N_{t-2^{-N}} $ such that $ \mathbb{Q}^N_{t-2^{-N}}(\{\omega\}) \ne 0 $, the following holds.
\begin{align*}
q_{t}(\omega 1) &= 0 , \\
q_{t}(\omega 0) &= 1 , \\
q_{t+2^{-N}}(\omega 01) &= \frac{1}{2} - 2^{-\frac{N}{2}-1} \frac{\mu - r}{\sigma} , \\
q_{t+2^{-N}}(\omega 00) &= \frac{1}{2} + 2^{-\frac{N}{2}-1} \frac{\mu - r}{\sigma} .
\end{align*}
\end{prop}
\begin{figure}
\caption{$\mathbf{drop}^N_{t,t+2^{-N}}$}
\label{fig:fullNdropN}
\end{figure}
\begin{proof}
By observing Figure \ref{fig:fullNdropN}, we have
\begin{align*}
(\mathbf{drop}^N_{t,t+2^{-N}})^{-1}(\omega 1) &= \emptyset , \\
I^N_t(1, \omega 1) &= I^N_t(0, \omega 1) = \emptyset , \\
(\mathbf{drop}^N_{t,t+2^{-N}})^{-1}(\omega 0) &= \{ \omega 00, \omega 01, \omega 10, \omega 11 \} , \\
I^N_t(1, \omega 0) &= \{ \omega 01, \omega 11 \} , \\
I^N_t(0, \omega 0) &= \{ \omega 00, \omega 10 \} .
\end{align*}
Then, by Theorem \ref{thm:iiffMart},
\[ \mathbb{Q}^N_t(\{\omega 1\}) = c_1 \mathbb{Q}^N_{t+2^{-N}}(I^N_t(1,\omega 1)) + c_0 \mathbb{Q}^N_{t+2^{-N}}(I^N_t(0,\omega 1)) = 0 . \]
Now, since $ \mathbb{Q}^N_{t}(\{\omega d_{t}\}) = \mathbb{Q}^N_{t-2^{-N}}(\{\omega\}) q_{t}(\omega d_{t}) $ by Assumption \ref{asm:asmQ}, and $ \mathbb{Q}^N_{t-2^{-N}}(\{\omega\}) \ne 0 $, we have
\begin{equation*}
q_{t}(\omega 1) = 0, \quad q_{t}(\omega 0) = 1 - q_{t}(\omega 1) = 1 .
\end{equation*}
Next, again by Theorem \ref{thm:iiffMart},
\begin{align*}
\mathbb{Q}^N_t(\{\omega 0\}) &= c_1 \mathbb{Q}^N_{t+2^{-N}}(I^N_t(1,\omega 0)) + c_0 \mathbb{Q}^N_{t+2^{-N}}(I^N_t(0,\omega 0))
\\&= c_1 \big( \mathbb{Q}^N_{t+2^{-N}}(\{\omega 01\}) + \mathbb{Q}^N_{t+2^{-N}}(\{\omega 11\}) \big)
\\&+ c_0 \big( \mathbb{Q}^N_{t+2^{-N}}(\{\omega 00\}) + \mathbb{Q}^N_{t+2^{-N}}(\{\omega 10\}) \big) .
\end{align*}
By dividing both sides by $ \mathbb{Q}^N_{t-2^{-N}}(\{\omega\}) \ne 0 $, we obtain
\begin{align*}
q_t(\omega 0) &= c_1 \big( q_t(\omega 0) q_{t+2^{-N}}(\omega 01) + q_t(\omega 1) q_{t+2^{-N}}(\omega 11) \big)
\\&+ c_0 \big( q_t(\omega 0) q_{t+2^{-N}}(\omega 00) + q_t(\omega 1) q_{t+2^{-N}}(\omega 10) \big) .
\end{align*}
Hence, since $ q_{t}(\omega 1) = 0 $ and $ q_{t}(\omega 0) = 1 $, we get
\begin{equation*}
1 = c_1 q_{t+2^{-N}}(\omega 01) + c_0 q_{t+2^{-N}}(\omega 00) .
\end{equation*}
Therefore, by Lemma \ref{lem:qbin},
\begin{equation*}
q_{t+2^{-N}}(\omega 01) = \frac{1}{2} - 2^{-\frac{N}{2}-1} \frac{\mu - r}{\sigma}, \quad q_{t+2^{-N}}(\omega 00) = \frac{1}{2} + 2^{-\frac{N}{2}-1} \frac{\mu - r}{\sigma} .
\end{equation*}
\end{proof}
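To make the structure around a drop step concrete, the following Python sketch (an illustration only) assembles the product-form measure of Assumption \ref{asm:asmQ} over one drop step followed by one full step, using the branch weights of Propositions \ref{prop:fullQ} and \ref{prop:dropQ}; the numerical values, including the prior weight $p$, are hypothetical.
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize,frame=single]
# Minimal sketch of the branch weights around a drop step (illustration only).
# Hypothetical parameters with |mu - r| < 2**(N/2)*sigma.
N, mu, sigma, r = 2, 0.08, 0.10, 0.05
c1 = (1 + 2**(-N)*mu + 2**(-N/2)*sigma) / (1 + 2**(-N)*r)
c0 = (1 + 2**(-N)*mu - 2**(-N/2)*sigma) / (1 + 2**(-N)*r)
q1 = 0.5 - 2**(-N/2 - 1)*(mu - r)/sigma          # q_{t+2^-N}(w01), Prop. dropQ
q0 = 0.5 + 2**(-N/2 - 1)*(mu - r)/sigma          # q_{t+2^-N}(w00)

p = 0.4                                          # Q^N_{t-2^-N}({w}), hypothetical
Q = {"w1": p * 0.0,                              # q_t(w1) = 0: invisible branch
     "w0": p * 1.0,                              # q_t(w0) = 1
     "w01": p * 1.0 * q1, "w00": p * 1.0 * q0}   # product form of Assumption asm:asmQ
print(Q["w1"])                                   # 0.0: no mass on the invisible branch
print(abs(c1*q1 + c0*q0 - 1.0) < 1e-12)          # True: condition of Lemma lem:qbin
\end{lstlisting}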
\begin{rem}
We have the following remarks for Figure \ref{fig:fullNdropN}.
\begin{enumerate}
\item Since the agent evaluates stock and bond along the function $\mathbf{drop}^N_{t,t+2^{-N}}$, she can recognize only the nodes $\omega 0$, $\omega 01$ and $\omega 00$ and cannot recognize the nodes $\omega 1$, $\omega 11$ and $\omega 10$. We interpret these nodes $\omega 1$, $\omega 11$ and $\omega 10$ as invisible.
\item The value $ q_{t+2^{-N}}(\omega 11) \in [0, 1] $ can be selected arbitrarily, and $q_{t+2^{-N}}(\omega 10)$ is then computed as $ 1 - q_{t+2^{-N}}(\omega 11) $. That is, the probability measure $\mathbb{Q}^N_{t+2^{-N}}$ is not determined uniquely, and neither is the risk-neutral filtration $\mathcal{C}^N$.
\item The probability measure $\mathbb{Q}^N_{t}$ is not equivalent to the original measure $\mathbb{P}^N_{t}$. Therefore, it is not an EMM.
\end{enumerate}
\end{rem}
\begin{prop}
\label{prop:CwllDef}
Both $\mathbf{full}^N_{t,t+2^{-N}}$ and $\mathbf{drop}^N_{t,t+2^{-N}}$ are null-preserving with respect to $ \mathbb{Q}^N_{t} $ and $ \mathbb{Q}^N_{t+2^{-N}} $.
\end{prop}
\begin{proof}
Let $ \omega \in B^N_t $. Then, by Assumption \ref{asm:asmQ},
\begin{equation*}
( \mathbb{Q}^N_{t+2^{-N}} \circ (\mathbf{full}^N_{t,t+2^{-N}})^{-1} ) (\{\omega\}) = \mathbb{Q}^N_{t+2^{-N}} (\{ \omega 1, \omega 0 \}) = \mathbb{Q}^N_{t} (\{\omega\}) .
\end{equation*}
Hence, $\mathbf{full}^N_{t,t+2^{-N}}$ is null-preserving. Next, consider the case of $ \mathbf{drop}^N_{t,t+2^{-N}} $. Then for $ \omega' \in B^N_{t-2^{-N}} $, by Proposition \ref{prop:dropQ}, we have $ \mathbb{Q}^N_{t} (\{\omega' 1\}) = 0 $. On the other hand, we get
\begin{equation*}
( \mathbb{Q}^N_{t+2^{-N}} \circ (\mathbf{drop}^N_{t,t+2^{-N}})^{-1} ) (\{\omega' 1\}) = \mathbb{Q}^N_{t+2^{-N}} (\emptyset) = 0 .
\end{equation*}
Therefore, $\mathbf{drop}^N_{t,t+2^{-N}}$ is also null-preserving.
\end{proof}
\begin{thm}
\label{thm:riskNeuDropFil}
There exists a risk-neutral filtration $ \mathcal{C}^N $ for the dropped filtration $ \mathbf{Drop}^N_{\alpha, \beta} $. In this case, the probability measure $ \mathbb{Q}^N_t $ of the probability space $ \mathcal{C}^N(t) $ is not equivalent to the probability measure $ \mathbb{P}^N_{t} $ of $ \mathbf{Drop}^N_{\alpha, \beta}(t) $. Therefore, it is not an EMM. In fact, the probability measure $\mathbb{Q}^N_{t}$ is not uniquely determined. Similarly, the risk-neutral filtration $\mathcal{C}^N$ is not uniquely determined.
\end{thm}
\begin{proof}
Substituting the $ q_t $ obtained by Propositions \ref{prop:fullQ} and \ref{prop:dropQ} into Assumption \ref{asm:asmQ}, we obtain the probability measure $ \mathbb{Q}^N_t $. On the other hand, from Proposition \ref{prop:CwllDef}, the arrows $\mathbf{full}^N_{t,t+2^{-N}}$ and $\mathbf{drop}^N_{t,t+2^{-N}}$ are null-preserving under $ \mathbb{Q}^N_t $. Therefore, we can say that $\mathcal{C}^N$ is a filtration. Moreover, $ \mathbb{Q}^N_t $ satisfies the necessary and sufficient condition of Theorem \ref{thm:iiffMart} by construction. Therefore, the filtration $\mathcal{C}^N$ is a risk-neutral filtration with respect to $ \mathbf{Drop}^N_{\alpha, \beta} $. Finally, in Proposition \ref{prop:dropQ}, $ q_{t+2^{-N}}(\omega 11) \in [0, 1] $ can take any value, and $q_{t+2^{-N}}(\omega 10)$ is then given by $ 1 - q_{t+2^{-N}}(\omega 11) $. That is, the probability measure $\mathbb{Q}^N_{t+2^{-N}}$ is not uniquely determined.
\end{proof}
\begin{figure}
\caption{Filtration $\mathbf{Drop}^N_{\alpha, \beta}$}
\label{fig:dropFiltt}
\end{figure}
\subsection{Valuation}
\label{sec:valuation}
\begin{figure}
\caption{Valuation through $\mathbf{Drop}^N_{\alpha, \beta}$}
\label{fig:dropVal}
\end{figure}
Let $ \textred{\mathcal{C}^N} : \mathcal{T}^N \to \mathbf{Prob} $ be a risk-neutral filtration and $ \textred{Y} : B^N_{T} \to \mathbb{R} $ be a payoff at time $ \textred{T} $. Then, the price $ Y_t $ of $Y$ at time $ \textred{t} $ is given by the equation
\begin{equation*}
\textred{Y_t} := E^{\mathcal{C}^N(\iota^N_{t,T})}( (b^N_T)^{-1} Y )
\end{equation*}
with the unique arrow $ \textred{\iota^N_{t,T}} : T \to t $. That is to say, even those who have a dropped subjective filtration can price the security $Y$. However, additional consideration is needed on how these prices affect the market equilibrium price. For $\omega \in B^N_{t-2 \cdot 2^{-N}}$, one can see in Figure \ref{fig:dropVal} that at time $t-2^{-N}$ the value $Y_t(\omega 1)$ is discarded and only the value $Y_t(\omega 0)$ is used for computing $Y_{t-2^{-N}}(\omega)$.
\subsubsection{Replication Strategies}
Let us investigate the situation where a given strategy $ (\phi, \psi) $ becomes a replication strategy of the payoff $Y$ at time $T$.
\begin{defn}{[Self-Financing Strategies]}
A \newword{self-financing} strategy is a strategy $ (\phi, \psi) $ satisfying
\begin{equation}
S^N_t \phi_{t+2^{-N}} + b^N_t \psi_{t+2^{-N}} = V_t
\end{equation}
for every $ t \in (0, \infty)^N $.
\end{defn}
For a self-financing strategy $ ( \phi_t, \psi_t )_{t \in (0, \infty)^N} $, we have:
\begin{align*}
V_{t+2^{-N}} =& S^N_{t+2^{-N}} (\phi_{t+2^{-N}} \circ f^N_{t,t+2^{-N}}) + b^N_{t+2^{-N}} (\psi_{t+2^{-N}} \circ f^N_{t,t+2^{-N}})
\\=& (S^N_t \circ f^N_{t,t+2^{-N}}) (1 + 2^{-N} \mu + 2^{-\frac{N}{2}}\sigma \xi^N_{t+2^{-N}}) (\phi_{t+2^{-N}} \circ f^N_{t,t+2^{-N}})
\\& + b^N_{t+2^{-N}} ( (b^N_t)^{-1} (V_t - S^N_t \phi_{t+2^{-N}}) \circ f^N_{t,t+2^{-N}})
\\=& (1 + 2^{-N} \mu + 2^{-\frac{N}{2}}\sigma \xi^N_{t+2^{-N}}) ((S^N_t \phi_{t+2^{-N}}) \circ f^N_{t,t+2^{-N}}) + (1+ 2^{-N} r) ( (V_t - S^N_t \phi_{t+2^{-N}}) \circ f^N_{t,t+2^{-N}})
\\=& (2^{-N} \mu - 2^{-N} r + 2^{-\frac{N}{2}}\sigma \xi^N_{t+2^{-N}}) ((S^N_t \phi_{t+2^{-N}}) \circ f^N_{t,t+2^{-N}})
\\& + (1+ 2^{-N}r) ( V_t \circ f^N_{t,t+2^{-N}}) .
\end{align*}
Therefore, for $\omega \in B^N_t$ and $ d_{t+2^{-N}} \in \{ 0, 1 \} $,
\begin{equation}
\label{eq:vnPreEq}
V_{t+2^{-N}} (\omega d_{t+2^{-N}}) = (2^{-N}\mu - 2^{-N}r + 2^{-\frac{N}{2}}\sigma (2 d_{t+2^{-N}} - 1)) S^N_t(\omega_t) \phi_{t+2^{-N}}(\omega_t) + (1+2^{-N} r) V_t(\omega_t)
\end{equation}
where
\begin{equation}
\omega_t := f^N_{t,t+2^{-N}} (\omega d_{t+2^{-N}}) .
\end{equation}
Now let us assume that there exists a function $ g_t : B^N_t \to B^N_t $ such that $ f^N_{t,t+2^{-N}} = g_t \circ \mathbf{full}^N_{t,t+2^{-N}} $.
\begin{equation*}
\xymatrix@C=40 pt@R=20 pt{ & B^N_t \\ B^N_{t+2^{-N}} \ar @{->}^{ f^N_{t,t+2^{-N}} } [ru] \ar @{->}_{ \mathbf{full}^N_{t,t+2^{-N}} } [rd] \\ & B^N_t \ar @{.>}_{\textred{g_t}} [uu] }
\end{equation*}
Then $ f^N_{t,t+2^{-N}} (\omega d_{t+2^{-N}}) = g_t(\omega) $ for every $ \omega \in B^N_t $ and $ d_{t+2^{-N}} \in \{ 0, 1 \} $. So the equation (\ref{eq:vnPreEq}) becomes
\begin{equation}
\label{eq:vnPreMEq}
V_{t+2^{-N}} (\omega d_{t+2^{-N}}) = (2^{-N}\mu - 2^{-N}r + 2^{-\frac{N}{2}}\sigma (2 d_{t+2^{-N}} - 1)) S^N_t(g_t(\omega)) \phi_{t+2^{-N}}(g_t(\omega)) + (1+2^{-N}r) V_t(g_t(\omega)) .
\end{equation}
Hence, we have:
\begin{align}
\label{eq:phiN}
\phi_{t+2^{-N}}(g_t(\omega)) &= \frac{ V_{t+2^{-N}}(\omega 1) - V_{t+2^{-N}}(\omega 0) }{ 2^{1-\frac{N}{2}} \sigma S^N_t(g_t(\omega)) }
\\
\label{eq:VN}
V_t(g_t(\omega)) &= \frac{ (2^{\frac{N}{2}}\sigma - \mu + r) V_{t+2^{-N}}(\omega 1) + (2^{\frac{N}{2}}\sigma + \mu - r) V_{t+2^{-N}}(\omega 0) }{ 2^{1 + \frac{N}{2}}\sigma (1+ 2^{-N}r) } .
\end{align}
Therefore, we can determine the appropriate strategy $ (\phi_{t+2^{-N}}, \psi_{t+2^{-N}}) $ on $ g_t(B^N_t) \subset B^N_t $ by (\ref{eq:phiN}). We actually do not care about the values of $ (\phi_{t+2^{-N}}, \psi_{t+2^{-N}}) $ on $ B^N_t \setminus g_t(B^N_t) $. For example, in the case of $ f^N_{t,t+2^{-N}} = \mathbf{drop}^N_{t,t+2^{-N}} $, the function $ g_t : B^N_t \to B^N_t $ satisfies
\begin{equation}
g_t(\omega' d_{t}) = \omega' 0
\end{equation}
for all $ \omega' \in B^N_{t-2^{-N}} $ and $ d_{t} \in \{0, 1\} $. Looking at Figure \ref{fig:dropVal}, values in the region $ B^N_t \setminus g_t(B^N_t) $ are not necessary for computing $ Y_{t-2^{-N}}(\omega) $. Hence, determining the values of $ (\phi_{t+2^{-N}}, \psi_{t+2^{-N}}) $ on $ g_t(B^N_t) $ is enough for practical valuation.
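As a sanity check on (\ref{eq:phiN}) and (\ref{eq:VN}), the following Python sketch (an illustration only) computes the one-step hedge $\phi$ and value $V$ from two target values and verifies that they reproduce both branches of (\ref{eq:vnPreMEq}); all numerical values are hypothetical.
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize,frame=single]
# Minimal one-step replication sketch based on (eq:phiN) and (eq:VN).
# All numerical values are hypothetical.
N, mu, sigma, r = 2, 0.08, 0.10, 0.05
S = 100.0                                # S^N_t(g_t(w))
V1, V0 = 12.0, 3.0                       # target values V_{t+2^-N}(w1), V_{t+2^-N}(w0)

phi = (V1 - V0) / (2**(1 - N/2) * sigma * S)
V = ((2**(N/2)*sigma - mu + r)*V1 + (2**(N/2)*sigma + mu - r)*V0) \
    / (2**(1 + N/2) * sigma * (1 + 2**(-N)*r))

# Check both branches of (eq:vnPreMEq), d = 1 and d = 0:
for d, target in ((1, V1), (0, V0)):
    lhs = (2**(-N)*mu - 2**(-N)*r + 2**(-N/2)*sigma*(2*d - 1))*S*phi \
          + (1 + 2**(-N)*r)*V
    print(abs(lhs - target) < 1e-9)      # True, True
\end{lstlisting}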
\subsection{Experienced Paths}
\label{sec:expPath}
In this subsection, we introduce the concept of experienced paths, which corresponds to a person's subjective recognition of her own experience.
\begin{defn}
\label{defn:expPath}
Let $ \mathcal{B}^N : \mathcal{T}^N \to \mathbf{Prob} $ be a filtration and $t \in [0, \infty)^N$.
\begin{enumerate}
\item Define a function $ \textred{ e^{\mathcal{B}^N}_t } : B^N_t \to B^N_t $ by
\begin{equation*}
\textred{ e^{\mathcal{B}^N}_t } (\omega)(s) := f^N_{s,t} (\omega)(s)
\end{equation*}
for $ \omega \in B^N_t $, $ s \in (0, t]^N $ and $ \textblue{ f^N_{s,t} } := \mathcal{B}^N(\iota^N_{s,t}) $. We call $ \textblue{ e^{\mathcal{B}^N}_t(\omega) } $ an \newword{experienced path} of $\omega$.
\item $ \textred{ \tilde{B}^N_t } := \{ e^{\mathcal{B}^N}_t(\omega) \mid \omega \in B^N_t \} $.
\item $ \textred{ \tilde{\mathcal{F}}^N_t } := 2^{ \tilde{B}^N_t } $.
\item $ \textred{ \tilde{\mathbb{P}}^N_t } := \mathbb{P}^N_t \circ (e^{\mathcal{B}^N}_t)^{-1} $.
\[ \xymatrix@C=30 pt@R=7 pt{ & B^N_t \ar @{->}^{ e^{\mathcal{B}^N}_t } [r] & \tilde{B}^N_t \\ [0,1] & \mathcal{F}^N_t \ar @{->}_{ \mathbb{P}^N_t } [l] & \tilde{\mathcal{F}}^N_t \ar @{->}_{ (e^{\mathcal{B}^N}_t)^{-1} } [l] } \]
\item $ \textred{ \bar{\tilde{B}}^N_t } := ( \tilde{B}^N_t, \tilde{\mathcal{F}}^N_t, \tilde{\mathbb{P}}^N_t ) $.
\item For $ s, t \in [0, \infty)^N \; (s \le t) $, $ \textred{ \tilde{f}^N_{s,t} } : \bar{\tilde{B}}^N_t \to \bar{\tilde{B}}^N_s $ is a function defined by
\begin{equation*}
\textred{ \tilde{f}^N_{s,t} } := \textblue{\mathbf{full}}^N_{s,t} \mid_{ \tilde{B}^N_t } .
\end{equation*}
\end{enumerate}
\end{defn}
\begin{prop}
The correspondence $ \textred{ \tilde{\mathcal{B}}^N } : \mathcal{T}^N \to \mathbf{Prob} $ defined by
\begin{equation*}
\tilde{\mathcal{B}}^N (t) := \bar{\tilde{B}}^N_t \quad \textrm{and} \quad \textred{ \tilde{\mathcal{B}}^N }( \iota^N_{s,t} ) := \tilde{f}^N_{s,t}
\end{equation*}
is a functor, that is, a $\mathcal{T}^N$-filtration.
\end{prop}
\begin{exmp}{[Experienced Paths for $ \mathcal{B}^N := \mathbf{Drop}^N_{\textblue{\frac{3}{8}}, \textblue{\frac{5}{8}}} $ ]}
Let $ \mathcal{B}^N := \mathbf{Drop}^N_{\textblue{\frac{3}{8}}, \textblue{\frac{5}{8}}} $ and $d_i \in \{ 0, 1 \}$ for $i \in \mathbb{N}$.
Then, as seen in Figure \ref{fig:expPath}, we have
\begin{align*}
e^{\mathcal{B}^2}_1 (d_1 d_2 d_3 d_4) &= d_1 0 d_3 d_4 ,
\\ e^{\mathcal{B}^3}_1 (d_1 d_2 d_3 d_4 d_5 d_6 d_7 d_8) &= d_1 d_2 0 0 0 d_6 d_7 d_8 .
\end{align*}
\begin{figure*}
\caption{Experienced paths for $ \mathcal{B}^N = \mathbf{Drop}^N_{\frac{3}{8}, \frac{5}{8}} $}
\label{fig:expPath}
\end{figure*}
\end{exmp}
\begin{thm}
\label{thm:expPathNat}
The correspondence $ \textred{ e^{\mathcal{B}^N} } : \mathcal{B}^N \to \tilde{\mathcal{B}}^N $ is a \textblue{natural transformation}. That is, for $ s, t \in [0, \infty)^N \; (s \le t) $, the following diagram commutes:
\[ \xymatrix@C=40 pt@R=10 pt{ \mathcal{T}^N & \mathcal{B}^N \ar @{->}^-{ \textred{e^{\mathcal{B}^N}} } [r] & \textred{ \tilde{\mathcal{B}}^N } \\ s & \bar{B}^N_s \ar @{->}^-{ e^{\mathcal{B}^N}_s } [r] & \bar{\tilde{B}}^N_s \\\\ t \ar @{->}^{\iota^N_{s,t}} [uu] & \bar{B}^N_t \ar @{->}_-{ e^{\mathcal{B}^N}_t } [r] \ar @{->}^{f^N_{s,t}} [uu] & \bar{\tilde{B}}^N_t \ar @{->}_{ \textred{ \tilde{f}^N_{s,t} } } [uu] } \]
\end{thm}
\begin{proof}
For $ \omega \in B^N_t $ and $ u \in (0, s]^N $,
\begin{align*}
\tilde{f}^N_{s,t} ( e^{\mathcal{B}^N}_t (\omega) ) (u) &= ( \mathbf{full}^N_{s,t} \mid_{\tilde{B}^N_t} ) ( e^{\mathcal{B}^N}_t (\omega) ) (u)
\\&= \mathbf{full}^N_{s,t} ( e^{\mathcal{B}^N}_t (\omega) ) (u)
\\&= ( e^{\mathcal{B}^N}_t (\omega) \mid_{(0,s]^N} ) (u)
\\&= e^{\mathcal{B}^N}_t (\omega) (u) = f^N_{u,t}(\omega)(u) .
\end{align*}
On the other hand,
\begin{equation*}
e^{\mathcal{B}^N}_s ( f^N_{s,t}(\omega) ) (u) = f^N_{u,s}( f^N_{s,t}(\omega) ) (u) = f^N_{u,t}(\omega)(u) .
\end{equation*}
\end{proof}
Here is an implication of Theorem \ref{thm:expPathNat}: the person who dropped her memory believes that her memory is \textblue{perfect} (\textblue{full}), while others observe that she lost her memory. Lastly, we mention that when the given filtration is full, experienced paths coincide with the objective paths.
\begin{prop}
\label{prop:fullExpIsObjective}
If $ \mathcal{B}^N = \mathbf{Full}^N $, then $ \tilde{\mathcal{B}}^N = \mathcal{B}^N $.
\end{prop}
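The experienced-path computation of the example above can also be reproduced mechanically. The following Python sketch is an illustration only: it assumes (as the example suggests) that the one-step map of $\mathbf{Drop}^N_{\alpha,\beta}$ is the drop map exactly when its target time lies in $[\alpha,\beta]$, and the helper functions are ours, not part of the formal development.
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize,frame=single]
# Sketch of e_T(omega)(s) = f_{s,T}(omega)(s) for a dropped filtration.
# Assumption (ours): the one-step map into time t is "drop" iff t in [alpha, beta].
from fractions import Fraction as F

def one_step(path, t, alpha, beta):
    # Map a path on (0, t+dt] to a path on (0, t]; a drop step zeroes the bit at t.
    new = {u: b for u, b in path.items() if u <= t}
    if alpha <= t <= beta:
        new[t] = 0
    return new

def experienced_path(path, T, dt, alpha, beta):
    out = {}
    for s in sorted(path):
        p, u = dict(path), T
        while u > s:                 # compose the one-step maps from T down to s
            p = one_step(p, u - dt, alpha, beta)
            u -= dt
        out[s] = p[s]                # e_T(omega)(s) = f_{s,T}(omega)(s)
    return out

N = 3
dt, alpha, beta = F(1, 2**N), F(3, 8), F(5, 8)
omega = {F(i, 2**N): 1 for i in range(1, 2**N + 1)}      # all d_i = 1
print(experienced_path(omega, F(1, 1), dt, alpha, beta))
# bits at times 3/8, 1/2, 5/8 come out as 0, matching d1 d2 0 0 0 d6 d7 d8
\end{lstlisting}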
\section{Concluding Remarks}
In this paper, we proposed the concept of a generalized filtration. It extends the conventional framework of monotonically increasing information sequences by allowing the development of information not only to increase, but also to decrease or be twisted. Just as a subjective probability measure is attributed to an individual, a generalized filtration can be regarded as a subjective filtration recording the history of an individual's information evolution. A natural question is how far conventional theories of stochastic analysis and control can be developed under such generalized filtrations. As an example of an application, in addition to the conventional (classical) filtration in a binomial asset price model, we introduced a dropped filtration with loss of memory over a certain period of time, in order to see whether an individual with the latter as her subjective filtration can, in any sense, price securities. This led to the question of whether there is a risk-neutral filtration corresponding to this subjective filtration, and we have shown that such a filtration exists. However, the obtained risk-neutral filtration is not uniquely determined, unlike the classical risk-neutral probability measure observed in a complete market. This means that a market with such a generalized filtration is not complete (at least for individuals who have such a filtration as their subjective filtration).

For other subjective filtrations not discussed in this paper, there may exist no risk-neutral probability measure, and hence no risk-neutral filtration. How equilibrium market prices are determined in such cases may be one of the important themes for future research. Needless to say, the application of generalized filtrations shown in this paper is only one example, and many other applications are possible. As mentioned above, generalized filtrations can be used to extend conventional theories of stochastic control and stochastic differential equations. For example, they can be used to transform a problem that is not time-consistent under a classical filtration into a time-consistent one by twisting the filtration. The theory of filtration enlargement used for credit risk calculation and insider trading analysis in finance may also be treated within the framework of generalized filtrations \cite{AJ2017}. Furthermore, in order to study the relationship between a filtration and its associated risk-neutral filtrations, or between filtrations defined on different time domains, it is necessary to consider the transformation and convergence of filtrations in a space of filtrations.
\nocite{maclane1997}
\nocite{CK2012DM}
\end{document}